no-problem/9906/cond-mat9906142.html
# Pairing symmetry and long range pair potential in a weak coupling theory of superconductivity

## I Introduction

Many experiments have been performed to find clues regarding the mechanism of high-$`T_c`$ superconductivity and the nature of the superconducting pair wave function. Notwithstanding this effort, the orbital symmetry of the order parameter is still not completely known a decade after the discovery of these materials, although strong evidence for a dominant $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}`$ symmetry exists . Phase and node sensitive experiments have also reported a sign reversal of the order parameter supporting $`d`$ wave symmetry . The scenario that currently emerges from various experiments and theories is that the pairing symmetry of this family could be a mixed one, like $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}+\mathrm{e}^{\mathrm{i}\theta }\alpha `$, where $`\alpha `$ could be a member of the $`s`$ wave family or $`d_{xy}`$. The electron doped $`\mathrm{Nd}_{2-\mathrm{x}}\mathrm{Ce}_\mathrm{x}\mathrm{CuO}_4`$ superconductors are, however, pure $`s`$ wave like . Tunneling experiments have questioned the pure d-wave symmetry, as the data were interpreted as an admixture of d and s-wave components due to orthorhombicity in YBCO. The possibility of a minor but finite $`id_{xy}`$ symmetry along with the predominant $`d_{x^2-y^2}`$ has also been suggested in connection with magnetic defects or small fractions of a flux quantum $`\mathrm{\Phi }_0=hc/2e`$ in YBCO powders. Similar proposals came from various other authors in the context of magnetic field, magnetic impurity, interface effects etc. These proposals gained momentum when experimental data on the longitudinal thermal conductivity of $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8`$ compounds by Krishana et al, and that by Movshovich et al, showed support for them. There are experimental results, related to interface effects as well as to the bulk, that indicate mixed pairing symmetry (with dominant $`d`$-wave) , thus posing a strong threat to the pure $`d`$ wave models. In this paper our main aim is to study the possibility of a mixed pairing symmetry state with $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}+e^{i\theta }s_\alpha `$, where $`\alpha =xy,x^2+y^2`$, for $`\theta =0,\pi /2`$, with both $`d`$ and $`s`$ on an equal footing. We show that $`d_{x^2-y^2}`$ can mix with $`s_{xy}`$ in the tetragonal group for $`\theta =0`$ but not for $`\theta =\pi /2`$. The phase of the second condensate is thus extremely important. We then show that even though the lowest order $`d_{x^2-y^2}`$ cannot mix with $`s_{x^2+y^2}`$, the corresponding higher order symmetries can mix freely with each other. By lowest order we mean the usual $`d`$-wave (i.e, the simple $`\mathrm{cos}k_x-\mathrm{cos}k_y`$ form), extended $`s`$ wave (i.e, the simple $`\mathrm{cos}k_x+\mathrm{cos}k_y`$ form) and so on. By higher order we mean such symmetries with higher harmonics present in them, of the $`\mathrm{cos}\xi k_x\pm \mathrm{cos}\xi k_y`$ form, where $`\xi =na`$ ($`n=1,2,3..`$), or even more complicated ones like $`\mathrm{cos}2k_x\mathrm{cos}k_y\pm \mathrm{cos}k_x\mathrm{cos}2k_y`$ and so on. This will become clearer as we proceed. Now, in order to obtain such pairing symmetry in the respective channels one needs an effective attractive pairing potential $`V(\xi k,\xi k^{})`$. We derive such an interaction potential, in the spirit of tight binding, with longer range attraction than the usual nearest (or next nearest) neighbour one.
The potential $`V(\xi k,\xi k^{})`$ therefore changes the position of its minimum from that of the usual $`d`$ or $`s`$ wave cases for $`n>1`$. We show that, depending on the position of the pair potential or, in other words, on longer ranged attractions $`\xi =2a,3a,4a`$ etc., the dominant symmetry changes from $`d_{x^2-y^2}`$ for $`\xi =a`$ to $`s`$ like otherwise. This study can be justified on the following grounds. (i) On general grounds, long range interactions arise from a decrease in screening as one approaches the insulator. In specific models of superconductivity, like the spin-fluctuation mediated models, an increase in the antiferromagnetic correlation length occurs with underdoping. (ii) One of the potential theories of high temperature superconductivity that favors $`d`$ wave symmetry is the spin fluctuation theory . The gap symmetry of the spin fluctuation theory is however not the simplest $`d`$-wave but a higher order $`d`$-wave, approximately of the form $`(\mathrm{cos}k_x-\mathrm{cos}k_y)(\mathrm{cos}k_x+\mathrm{cos}k_y)^N`$ . The explicit $`k`$-anisotropy of the gap in spin fluctuation mediated superconductivity was obtained by Lenck and Carbotte within BCS theory, with the phenomenological spin susceptibility as the pairing interaction, using a fast-Fourier-transform technique, without any prior assumption about the symmetry of the gap. They concluded that the gap, although it has nodal lines along $`k_x=k_y`$, does not have the simplest $`d`$-wave symmetry but rather a higher order $`d`$ wave symmetry with higher harmonics present in it. Therefore, this work provides a real space derivation of a pair potential that produces higher order $`d`$-wave symmetry similar to that present in the spin fluctuation theory. (iii) In the magnetic scenario of the cuprates , one can set $`\xi `$ equal to the magnetic coherence length, which is larger than the lattice spacing . The coherence length in the superconducting state, which is different for different materials, may differ because a short range interaction requires larger densities than a long range one in order to produce the coherent motion that leads to superconductivity. (The $`T_c-x`$ relationship is not unique in all high $`T_c`$ systems; some start to superconduct with very small doping $`x`$, whereas some systems require larger $`x`$.) (iv) The high $`T_c`$ systems are structurally very complicated, and the electronic correlation effects may not be adequately accounted for unless one considers next nearest or further neighbour repulsion. Therefore, in the spirit of a tight binding lattice, the effective attraction may only arise from a more distant attractive interaction. (v) In a most recent angle resolved photoemission (ARPES) experiment by a well known group , such a requirement of long range interaction was realized. One of their essential findings is that, as the doping decreases, the maximum gap increases, but the slope of the gap near the nodes decreases. This particular feature, although consistent with $`d`$ wave, cannot be fit by a simple $`\mathrm{cos}(2\varphi )`$ but requires a finite mixing of $`\mathrm{cos}(6\varphi )`$ as well, where $`\varphi =\mathrm{tan}^{-1}(k_y/k_x)`$ is the angle in the $`(k_x,k_y)`$ plane. The $`\mathrm{cos}(6\varphi )`$ contains higher harmonics than the simple $`(\mathrm{cos}k_x-\mathrm{cos}k_y)`$. The rest of the layout of the paper is as follows. In section II, we derive the pair potential required for higher anisotropic $`d`$ and extended $`s`$ wave symmetries.
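As a quick numerical illustration of point (v), the following minimal sketch (Python) shows that adding a $`\mathrm{cos}(6\varphi )`$ harmonic raises the gap maximum while flattening the slope at the node, the trend described above. The mixing coefficient $`B`$ is a hypothetical value chosen only for illustration, not a fit to the ARPES data.

```python
import numpy as np

# Gap anisotropy on the Fermi surface: Delta(phi) ~ cos(2*phi) + B*cos(6*phi).
# B is a hypothetical mixing coefficient used only for illustration.
def gap(phi, B):
    return np.cos(2 * phi) + B * np.cos(6 * phi)

for B in (0.0, 0.2):
    phi = np.linspace(0.0, np.pi / 2, 181)
    max_gap = np.max(np.abs(gap(phi, B)))
    # analytic slope at the node phi = pi/4:
    # d(Delta)/d(phi) = -2 sin(2 phi) - 6 B sin(6 phi)  ->  -2 + 6B at pi/4
    slope_at_node = -2.0 + 6.0 * B
    print(f"B = {B}: max gap = {max_gap:.2f}, slope at node = {slope_at_node:.2f}")
```

For $`B=0.2`$ the maximum gap grows from 1.00 to 1.20 while the node slope shrinks in magnitude from 2.0 to 0.8, which is precisely the underdoping trend the higher harmonic is invoked to explain.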
We also provide a brief prescription for finding the coupled gap equations for the amplitudes of such higher anisotropic symmetries. In section III, we present and discuss in detail all the numerical results, providing a strong signature of the change in dominant pairing symmetry with the range of interaction. Finally, we conclude in section IV.

## II Model Calculation

Let us consider that the overlap of orbitals in different unit cells is small compared to the diagonal overlap. Then, in the spirit of a tight binding lattice description, the matrix element of the pair potential may be obtained as,

$$\begin{array}{ccl}V(\vec{q})&=&{\displaystyle \underset{\vec{\delta }}{\sum }}V_{\vec{\delta }}\,e^{i\vec{q}\cdot \vec{R}_\delta }=V_0^r+V_1f^d(k)f^d(k^{\prime })+V_1g(k)g(k^{\prime })\\ &&+V_2f^{d_{xy}}(k)f^{d_{xy}}(k^{\prime })+V_2f^{s_{xy}}(k)f^{s_{xy}}(k^{\prime })\\ &&+V_3f^d(2k)f^d(2k^{\prime })+V_3g(2k)g(2k^{\prime })\\ &&+2V_4\tilde{f}_1^d(2k)\tilde{f}_1^d(2k^{\prime })+2V_4\tilde{f}_2^d(2k)\tilde{f}_2^d(2k^{\prime })\\ &&+2V_4\tilde{g}_1(2k)\tilde{g}_1(2k^{\prime })+2V_4\tilde{g}_2(2k)\tilde{g}_2(2k^{\prime })\\ &&+V_5f^{d_{xy}}(2k)f^{d_{xy}}(2k^{\prime })+V_5f^{s_{xy}}(2k)f^{s_{xy}}(2k^{\prime })\\ &&+V_6f^d(3k)f^d(3k^{\prime })+V_6g(3k)g(3k^{\prime })\end{array}$$ (7)

where in the first equality of equation (7) $`\vec{R}_\delta `$ locates the nearest and further neighbours, $`\vec{\delta }`$ labels them, and $`V_n`$, $`n=1,\mathrm{},6`$ represents the strength of attraction for the respective neighbour interaction. The first term in the above equation, $`V_0^r`$, refers to the on-site interaction, which is considered repulsive but can be attractive as well, giving rise to isotropic $`s`$ wave. In this paper we shall not consider the isotropic $`s`$ wave in a mixed symmetry with the $`d`$ wave (cf. ). The form factors of the potential are obtained as,

$`f^d(nk)=\mathrm{cos}(nk_xa)-\mathrm{cos}(nk_ya)`$ (8)

$`g(nk)=\mathrm{cos}(nk_xa)+\mathrm{cos}(nk_ya)`$ (9)

$`f^{d_{xy}}(nk)=2\mathrm{sin}(nk_xa)\mathrm{sin}(nk_ya)`$ (10)

$`f^{s_{xy}}(nk)=2\mathrm{cos}(nk_xa)\mathrm{cos}(nk_ya)`$ (11)

$`\tilde{f}_1^d(2k)=\mathrm{cos}(2k_xa)\mathrm{cos}(k_ya)-\mathrm{cos}(k_xa)\mathrm{cos}(2k_ya)`$ (12)

$`\tilde{f}_2^d(2k)=\mathrm{sin}(2k_xa)\mathrm{sin}(k_ya)-\mathrm{sin}(k_xa)\mathrm{sin}(2k_ya)`$ (13)

$`\tilde{g}_1(2k)=\mathrm{cos}(2k_xa)\mathrm{cos}(k_ya)+\mathrm{cos}(k_xa)\mathrm{cos}(2k_ya)`$ (14)

$`\tilde{g}_2(2k)=\mathrm{sin}(2k_xa)\mathrm{sin}(k_ya)+\mathrm{sin}(k_xa)\mathrm{sin}(2k_ya)`$ (15)

where $`f^d(nk)`$, $`g(nk)`$ lead to the usual $`d_{x^2-y^2}`$, $`s_{x^2+y^2}`$ pairing symmetries for $`n=1`$ and to the unusual or higher order $`d_{x^2-y^2}`$, $`s_{x^2+y^2}`$ pairing symmetries otherwise, which result from interactions along the $`x`$ and $`y`$ axes (i.e, the $`1^{st}`$, $`3^{rd}`$, $`6^{th}`$ neighbour interactions). While the usual and higher order $`d_{xy}`$, $`s_{xy}`$ pairing symmetries result from $`f^{d_{xy}}(nk)`$, $`f^{s_{xy}}(nk)`$, the $`4^{th}`$ neighbour interaction gives rise to unconventional $`d`$ and extended $`s`$-wave pairing symmetries through the $`\tilde{f}_n^d(2k)`$ and $`\tilde{g}_n(2k)`$ given in equations (12)-(15). In deriving Eqs. (7)-(15), terms responsible for triplet pairing, which are not important for high $`T_c`$ systems, are neglected.
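As a minimal cross-check of the form factors in Eqs. (8)-(11), the sketch below (Python, lattice constant $`a=1`$, $`n=1`$) verifies their transformation properties under $`k_x\leftrightarrow k_y`$ and $`k_x\to -k_x`$, which is what distinguishes the $`d`$ channels from the $`s`$ channels.

```python
import numpy as np

# Form factors of Eqs. (8)-(11) with a = 1 and n = 1.
def f_d(kx, ky):    # Eq. (8): d_{x^2-y^2}, odd under kx <-> ky
    return np.cos(kx) - np.cos(ky)

def g(kx, ky):      # Eq. (9): s_{x^2+y^2}, even under kx <-> ky
    return np.cos(kx) + np.cos(ky)

def f_dxy(kx, ky):  # Eq. (10): d_{xy}, odd under kx -> -kx
    return 2 * np.sin(kx) * np.sin(ky)

def f_sxy(kx, ky):  # Eq. (11): s_{xy}, fully symmetric
    return 2 * np.cos(kx) * np.cos(ky)

kx, ky = 0.3, 1.1
print(f_d(kx, ky), f_d(ky, kx))       # equal magnitude, opposite sign
print(g(kx, ky), g(ky, kx))           # identical
print(f_dxy(kx, ky), f_dxy(-kx, ky))  # equal magnitude, opposite sign
print(f_sxy(kx, ky), f_sxy(-kx, ky))  # identical
```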
We shall now discuss the mixed phase symmetry of $`d_{x^2-y^2}`$ with other symmetries, taking two of the potential terms at a time. Namely, a combination of the potential terms in (7) ($`2^{nd},3^{rd}`$), ($`6^{th},7^{th}`$), ($`14^{th},15^{th}`$) gives rise to the pairing symmetry $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(\xi k)+e^{i\theta }\mathrm{\Delta }_{s_{x^2+y^2}}(0)g(\xi k)`$, where $`\xi =na`$ and $`a`$ is the lattice constant, which will be taken as unity. Similarly, a combination of ($`2^{nd},4^{th}`$), ($`6^{th},12^{th}`$) and so on will give rise to the pairing symmetry $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(\xi k)+e^{i\theta }\mathrm{\Delta }_{d_{xy}}(0)f^{d_{xy}}(\xi k)`$ etc. The free energy of a superconductor with arbitrary pairing symmetry may be written as,

$$F_{k,k^{\prime }}=-\frac{1}{\beta }\underset{k,p=\pm }{\sum }\mathrm{ln}(1+e^{-p\beta E_k})+\frac{\mathrm{\Delta }_k^2}{V_{kk^{\prime }}}$$ (16)

where $`E_k=\sqrt{(ϵ_k-\mu )^2+\mathrm{\Delta }_k^2}`$ are the energy eigenvalues of a Hamiltonian that describes superconductivity. We minimize the free energy, Eq. (16), i.e, set $`\partial F/\partial \mathrm{\Delta }=0`$, to get the gap equation as,

$$\mathrm{\Delta }_k=\underset{k^{\prime }}{\sum }V_{kk^{\prime }}\frac{\mathrm{\Delta }_{k^{\prime }}}{2E_{k^{\prime }}}\mathrm{tanh}\left(\frac{\beta E_{k^{\prime }}}{2}\right)$$ (17)

where $`ϵ_k`$ is the dispersion relation taken from the ARPES data and $`\mu `$, the chemical potential, controls the band filling through a number conserving equation given below. For the two component order parameter symmetries mentioned above, we substitute the required form of the potential and the corresponding gap structure into either side of Eq. (17), which gives us an identity. Then, separating the real and imaginary parts and comparing the momentum dependences on either side of it, we get the gap equations for the amplitudes in the different channels as,

$$\mathrm{\Delta }_j=\underset{k}{\sum }V_j\frac{\mathrm{\Delta }_j(f_k^j)^2}{2E_k}\mathrm{tanh}\left(\frac{\beta E_k}{2}\right),j=1,2$$ (18)

Considering a mixed symmetry of the form $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(nk)+\mathrm{\Delta }_{s_{x^2+y^2}}(0)g(nk)`$ one identifies $`\mathrm{\Delta }_1=\mathrm{\Delta }_{d_{x^2-y^2}}(0)`$, $`\mathrm{\Delta }_2=\mathrm{\Delta }_{s_{x^2+y^2}}(0)`$ and $`f_k^1=f^d(nk)`$, $`f_k^2=g(nk)`$. Similarly, for mixed symmetries of the form $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(nk)+\mathrm{\Delta }_{\alpha _{xy}}(0)f^{\alpha _{xy}}`$, where $`\alpha =s,d`$, one has $`\mathrm{\Delta }_2=\mathrm{\Delta }_{\alpha _{xy}}(0)`$ and $`f_k^2=f_{nk}^{\alpha _{xy}}`$, and so on. The potentials required to get such pairing symmetries are discussed in Eq. (7). The number conserving equation that controls the band filling through the chemical potential $`\mu `$ is given by,

$$\rho (\mu ,T)=\underset{k}{\sum }\left(1-\frac{(ϵ_k-\mu )}{E_k}\mathrm{tanh}\frac{\beta E_k}{2}\right).$$ (19)

We solve the above three equations (Eq. 18 and Eq. 19) self-consistently in order to study the phase diagram of a mixed order parameter superconducting phase. The numerical results obtained for the gap amplitudes through Eqs. (18,19) will be compared with free energy minimizations via Eq. (16) to get the phase diagrams.
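To make the self-consistency procedure of Eqs. (18,19) concrete, here is a minimal iterative solver for the $`d+is`$ case ($`\theta =\pi /2`$). Everything in it is an illustrative stand-in: a simple tight-binding band instead of the ARPES-fit dispersion, hand-picked couplings, no $`\mathrm{\Omega }_c`$ cut-off, and the chemical potential held fixed rather than solved from Eq. (19).

```python
import numpy as np

# Iterative solution of the coupled gap equations (18) for
# Delta(k) = D1*f1(k) + i*D2*f2(k) on an N x N k-grid (a = 1).
N = 64
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
kx, ky = np.meshgrid(k, k)
eps = -2.0 * (np.cos(kx) + np.cos(ky))   # toy band, NOT the ARPES dispersion
mu, T = -0.4, 0.01                       # fixed chemical potential, low T
f1 = np.cos(kx) - np.cos(ky)             # d_{x^2-y^2} form factor, Eq. (8)
f2 = np.cos(kx) + np.cos(ky)             # s_{x^2+y^2} form factor, Eq. (9)
V2 = 2.0
V1 = 0.71 * V2                           # V1/V2 = 0.71 as in the text

D1, D2 = 0.1, 0.1                        # initial guesses for the amplitudes
for _ in range(300):
    Ek = np.sqrt((eps - mu) ** 2 + (D1 * f1) ** 2 + (D2 * f2) ** 2)
    # BZ average; the 1/N_k normalization is absorbed into the couplings
    w = np.tanh(Ek / (2.0 * T)) / (2.0 * Ek) / kx.size
    D1 = V1 * np.sum(D1 * f1 ** 2 * w)   # d channel of Eq. (18)
    D2 = V2 * np.sum(D2 * f2 ** 2 * w)   # s channel of Eq. (18)
print(f"Delta_d = {D1:.4f}, Delta_s = {D2:.4f}")
# For the lowest-order (n = 1) form factors one channel typically collapses
# to zero, consistent with the finding below that the usual d_{x^2-y^2} and
# s_{x^2+y^2} symmetries do not mix.
```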
## III Results and Discussions

We present in this section our numerical results for a set of fixed parameters, e.g, a cut-off energy $`\mathrm{\Omega }_c`$ = 500 K around the Fermi level, above which the superconducting condensate does not exist, and a fixed ratio $`V_1/V_2=0.71`$ in Eq. (18) between the strengths of the pairing interaction channels throughout. In figures 1 and 2 we present results for the $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(\xi k)+e^{i\theta }\mathrm{\Delta }_{s_{x^2+y^2}}(0)g(\xi k)`$ symmetries for $`\theta =\pi /2`$ and $`\theta =0`$ respectively. Such symmetries would arise from a combination of the two component pair potentials ($`2^{nd},3^{rd}`$), ($`6^{th},7^{th}`$), ($`14^{th},15^{th}`$) and so on. We shall discuss only the results for $`\theta =0`$ and $`\theta =\pi /2`$. These two phases of $`\theta `$ can cause important differences (cf. figures 3, 4). It is known that for any $`\theta \ne 0`$, time reversal symmetry is locally broken, which corresponds to a phase transition to an almost fully gapped phase (except at the points $`(\pm \pi /2,\pm \pi /2)`$, due to common nodal points of both channels) from the partially ungapped phase of $`d_{x^2-y^2}`$ symmetry. On the other hand, the $`\theta =0`$ phase still remains nodeful, although the nodal lines shift considerably from the usual $`k_x=k_y`$ lines of the $`d_{x^2-y^2}`$. The solid lines represent the amplitude of the $`d_{x^2-y^2}`$ channel whereas the dashed lines indicate that of $`s_{x^2+y^2}`$. These figures (1 $`\&`$ 2) clearly demonstrate that the usual $`d_{x^2-y^2}`$ and $`s_{x^2+y^2}`$ symmetries do not mix with each other (cf. figures 1(a), 2(a)), but the higher order $`d_{x^2-y^2}`$, $`s_{x^2+y^2}`$ symmetries do mix with each other (cf. figures 1(c,d), 2(c,d)). In fact, as the interaction becomes longer ranged (i.e, $`\xi /a=1,2,3,4`$ as demonstrated in figures 1, 2 (a), (b), (c), (d) respectively) the dominant symmetry changes drastically; when the typical length $`\xi `$ is an odd multiple of the lattice constant, the dominant symmetry at lower doping is $`d_{x^2-y^2}`$ like, whereas when $`\xi `$ is an even multiple of the lattice constant, the dominant symmetry at lower doping is something in the $`s`$-wave family (see also figures 3, 4). As the typical length $`\xi `$ is increased, the predominant symmetry at optimal doping changes from $`d`$-wave at $`\xi =a`$, to an extended $`s`$-wave ($`s_{x^2+y^2}`$, $`s_{xy}`$) for $`\xi =2a`$, to again a predominant $`d`$-wave symmetry at $`\xi =3a`$, and finally, for $`\xi =4a`$, to extended $`s`$ wave symmetry for $`\theta =\pi /2`$. These phase diagrams (figures 1,2,3,4), drawn at T = 1 mK, do not change for $`\theta =0`$ in the mixed phase of the $`d`$-wave with $`s_{x^2+y^2}`$ symmetry, but there is a significant change for that with $`s_{xy}`$ symmetry (cf. Fig. 4). More significantly, the case of $`\xi =2a`$ is universal (i.e, independent of $`\theta `$ and of whether $`s_{x^2+y^2}`$ or $`s_{xy}`$ mixes with the $`d`$-wave): the dominant symmetry at zero temperature is $`s`$-wave type. This work has therefore revealed in a significant way the change in predominant pairing symmetry as the interaction range is changed at T=0. It is to be noted that, in contrast to the hole doped materials, the electron doped materials (like $`\mathrm{Nd}_{2-\mathrm{x}}\mathrm{Ce}_\mathrm{x}\mathrm{CuO}_4`$) show no signature of dominant $`d`$-wave symmetry. Furthermore, the antiferromagnetic phase in the electron doped systems is more extended, i.e, it persists up to larger doping, in comparison to the hole doped materials. Therefore, in models related to spin fluctuation mediated superconductivity, the longer range attraction should be more important.
In the present picture, we showed that such longer range interactions cause a change in the pairing symmetry, which might give this study important bearings on the high-$`T_c`$ compounds. An interesting feature of the data presented is that the optimal doping remains unchanged irrespective of $`\xi `$, which causes a significant crossover in the dominant symmetry of the order parameter. The position of the $`d`$-wave does not change appreciably, except in the case of $`\xi /a=4`$, while the extended $`s`$ wave region moves drastically with $`\xi `$. In particular, for $`\xi /a=1`$ the extended $`s`$ wave family has finite amplitude only at densities close to zero ($`\rho \to 0`$) (cf. figures 1,2,3), leading to no mixed phase except for the outstanding case of $`\theta =0`$ for $`s_{xy}`$ (cf. figure 4). In the $`\xi /a=2`$ case, the extended $`s`$-wave family completely takes over the position the $`d`$-wave had in the $`\xi /a=1`$ case. For $`\xi /a=3`$, the $`d`$-wave regains its position, although both the amplitude and the width decrease to about $`50\%`$ of those in the $`\xi /a=1`$ case, and the $`s`$-wave shifts towards larger doping, having its amplitude minimum at the maximum of the $`d`$-wave. For $`\xi /a=4`$ the extended $`s`$-wave dominates and the $`d`$-wave either becomes a minor component or does not appear at all. Furthermore, at optimal doping, whichever symmetry dominates forces the amplitude of the other to a minimum, i.e, the dominant symmetry always expels the other one at optimum doping. Following the above discussion, it is obvious that Fig. 4 represents an exceptional case. Fig. 4 represents the phase diagram of superconductors having a mixed phase symmetry like $`\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(\xi k)+e^{i\theta }\mathrm{\Delta }_{s_{xy}}(0)f^{s_{xy}}(\xi k)`$ with $`\theta =0`$ (the case of $`\theta =\pi /2`$ is discussed in Fig. 3 and should be contrasted with Fig. 4). The phase diagram comprises the amplitudes of the respective symmetry channels as a function of band filling $`\rho `$. In striking contrast to Fig. 1, Fig. 2 and Fig. 3, there is strong mixing of $`d_{x^2-y^2}`$ with $`s_{xy}`$ for $`\xi /a=1,3\&4`$. In fact, the mixing between the two symmetries is so strong that it is difficult to identify the predominant symmetry for the cases $`\xi /a=1\&3`$. In this mixed symmetry, for $`\theta =\pi /2`$ and $`\xi /a=4`$ (cf. Fig. 3(d)) the $`d`$-wave amplitude is practically zero, whereas for $`\theta =0`$ (cf. Fig. 4(d)) it has a strong mixing regime. This is the only mixed phase where both of the symmetries have large values at optimal doping (see Figs 4(a), (c)), unlike those in figures 1 to 3. The results of this figure thus convincingly point out the role of the phase between the two mixing symmetries. All the experimentally observed properties of the cuprates would be consistent with the scenario of Fig. 4, including the sign change of the order parameter as well as the gap nodes. The strong interplay between the two order parameters of mixed $`d-s_{xy}`$ symmetry is also reflected in their thermal behavior (cf. Fig. 5). In figures 5 and 6 we display the temperature dependences of the amplitudes (in eV) of the different symmetry order parameters for $`\xi /a=3`$, as maximum mixing is found in this case. When the $`s_{xy}`$ component determines the bulk $`T_c`$ (e.g, at $`\rho =0.75`$ in Fig. 5(a)), the amplitude of the $`s_{xy}`$ component is suppressed with the onset of the $`d`$-wave component.
However, when the bulk $`T_c`$ is determined by the $`d`$-wave, the amplitude of the $`d`$-wave is not affected by the onset of the $`s_{xy}`$ component. This behavior is indeed new. In an earlier study of a mixed phase with the usual $`d+is`$ symmetry, with $`s`$ as isotropic $`s`$-wave, it was shown that the $`d`$-wave component gets suppressed with the onset of the $`s`$-wave, but not the reverse. In contrast to Fig. 5, the temperature dependences of the amplitudes of the $`d`$ and $`s_{x^2+y^2}`$ symmetries remain unaffected by each other, as displayed in figure 6. In general, however, the growth of the amplitudes of the different symmetries with lowering temperature is faster for $`\theta =0`$ than for $`\theta =\pi /2`$. This once again emphasizes the role of the phase $`\theta `$. The temperature dependences for other values of $`\xi /a`$ are qualitatively the same as those shown in figures 5 and 6. So far we have discussed the interplay of order parameters in mixed phases like $`\mathrm{\Delta }_{d_{x^2-y^2}}+e^{i\theta }s_\alpha `$, $`\alpha =x^2+y^2`$ or $`xy`$. This excluded some other exotic $`d`$ and $`s_{x^2+y^2}`$ symmetries that can arise from the $`4^{th}`$ neighbour attraction, as discussed earlier in the context of Eqs. (7), (12)-(15). More specifically, a combination of the ($`8^{th}+9^{th}`$) and ($`10^{th}+11^{th}`$) terms of Eq. (7) can give rise to mixed pairing symmetries such as $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)F^d(k)+e^{i\theta }\mathrm{\Delta }_{s_{x^2+y^2}}(0)G^s(k)`$, where $`F^d(k)=f^d(k)[1+f^{d_{xy}}(k)+f^{s_{xy}}(k)]`$ and $`G^s(k)=g(k)[f^{d_{xy}}(k)+f^{s_{xy}}(k)-1]`$. These exotic symmetries are not discussed in the literature. Following the same procedure as in deriving Eq. (18), one finds that the gap equations for the components $`\mathrm{\Delta }_{d_{x^2-y^2}}(0)`$ and $`\mathrm{\Delta }_{s_{x^2+y^2}}(0)`$, although a bit more complicated, arrive at the same form as Eq. (18), with the pair vertex $`V_j\to V_j/2`$ and $`f_k^1=F^d(k)`$, $`f_k^2=G^s(k)`$. Solving these gap equations together with the number equation (19) simultaneously, no mixing between these unconventional $`d`$ and $`s`$ wave symmetries was found. Within the same parameters as in the earlier figures (i.e, $`V_1/V_2=0.71`$), the $`d`$-wave remains very strong at lower dopings (within the range $`1\ge \rho >0.70`$), whereas the $`s`$-wave amplitude appears very close to zero band filling. Therefore, in Fig. 7 we present the momentum anisotropy of the unconventional $`d`$-wave gap originating from the $`4^{th}`$ neighbour attraction. It is clear that the gap anisotropy is undoubtedly very different from the usual nearest-neighbour $`d`$-wave symmetry, although the basic features of sign change, nodes etc. remain the same as for the ordinary $`d`$-wave. This gap symmetry at $`\rho =0.8`$ gives rise to a BCS gap ratio $`2\mathrm{\Delta }(k)_{max}/k_BT_c=5.0`$ against 4.29 in the case of the usual $`d`$-wave. Such higher anisotropic $`d`$ wave symmetries will have the advantage of avoiding the electronic repulsion in strongly correlated systems like the cuprates.
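A small numerical sketch of the exotic form factor follows (Python); the Fermi momentum $`k_F`$ is an arbitrary illustrative choice, not taken from the band structure behind Fig. 7.

```python
import numpy as np

# Angular profile of F^d(k) = f^d(k)[1 + f^{d_xy}(k) + f^{s_xy}(k)] along a
# circular contour |k| = kF; kF = 2.0 is an arbitrary illustrative choice.
kF = 2.0
for deg in range(0, 91, 15):
    phi = np.radians(deg)
    kx, ky = kF * np.cos(phi), kF * np.sin(phi)
    fd = np.cos(kx) - np.cos(ky)
    Fd = fd * (1 + 2 * np.sin(kx) * np.sin(ky) + 2 * np.cos(kx) * np.cos(ky))
    print(f"phi = {deg:2d} deg   F^d = {Fd:+.3f}")
# F^d still vanishes and changes sign at phi = 45 deg (the k_x = k_y node),
# but its angular variation differs from the plain cos(kx) - cos(ky) d-wave.
```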
## IV Conclusions

We have studied the superconducting phase with a two component order parameter scenario, such as $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}+\mathrm{e}^{\mathrm{i}\theta }\mathrm{s}_\alpha `$, where $`\alpha =xy,x^2+y^2`$. We showed that, in the absence of orthorhombicity, the usual $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}`$ does not mix with the usual $`\mathrm{s}_{\mathrm{x}^2+\mathrm{y}^2}`$ symmetry gap in an anisotropic band structure. But the $`\mathrm{s}_{\mathrm{xy}}`$ symmetry does mix with the usual $`d`$-wave for $`\theta =0`$. Even in the absence of orthorhombicity, the higher anisotropic $`d`$-wave symmetry mixes with the higher anisotropic extended $`s`$ wave symmetry. This is obtained by considering a longer ranged two-body attractive potential, in the spirit of a tight binding lattice, than the usual nearest neighbour one. This study revealed that the dominant pairing symmetry changes drastically from $`d`$ to $`s`$ like as the attractive pair potential is obtained from longer ranged attraction; if the interaction is sufficiently short ranged that it can be mapped onto a nearest neighbour potential, the system at low doping is described by a pure $`d_{x^2-y^2}`$ order parameter. Such a consideration of longer range attraction has also been suggested by recent ARPES data . The role of a longer range pair potential on the pairing symmetry within the weak coupling theory of superconductivity has thus been established. We showed that the momentum distribution of the higher anisotropic $`d`$-wave symmetries is quite different from that of the usual $`d`$-wave symmetries. We found that the typical interplay in the temperature dependences of these higher order $`d`$ and $`s`$ wave pairing symmetries can be different from what is known. In brief, we believe that such a study of higher anisotropic symmetries, in contrast to the usual $`d`$ and $`s`$ wave symmetries, is potentially important and will stimulate further studies.

## V Acknowledgments

A large part of this work was carried out at UFF, Niterói, Rio de Janeiro and was financially supported by the Brazilian agency FAPERJ, project no. E-26/150.925/96-BOLSA.
no-problem/9906/physics9906022.html
# Diffraction of X-ray pulse in crystals

V.G. Baryshevsky
Institute of Nuclear Problems, Bobruiskaya Str. 11, Minsk 220080, Belarus
Electronic address: bar@inp.minsk.by

Recently the interaction of extremely short (subpicosecond) X-ray pulses with crystals has attracted interest because of the development of the linac-driven X-ray Free Electron Laser, operating in the SASE mode, and of the X-ray Volume Free Electron Laser . According to the analysis , the passage of a short X-ray pulse through a crystal is accompanied by a significant time delay of the radiation. The $`\delta `$-pulse delay law for Bragg diffraction is proportional to $`\left|{\displaystyle \frac{J_1(at)}{t}}\right|^2`$ , where $`J_1`$ is the Bessel function, $`a`$ is a coefficient defined below, and $`t`$ is time. In the present paper the dependence of the delay law on the diffraction asymmetry parameters is analyzed. It is shown that the use of subpicosecond pulses makes it possible to observe the phenomenon of the time delay of a pulse in a crystal and to investigate the delay law experimentally. It is also shown that the pulse delay law depends on the quanta polarization. Let us consider a pulse of electromagnetic radiation passing through a medium with refraction index $`n(\omega )`$. The wave packet group velocity is as follows:

$$v_{gr}=\left(\frac{\partial }{\partial \omega }\frac{\omega n(\omega )}{c}\right)^{-1}=\frac{c}{n(\omega )+\omega \frac{\partial n(\omega )}{\partial \omega }},$$ (1)

where $`c`$ is the speed of light and $`\omega `$ is the quantum frequency. In the X-ray range ($``$ tens of keV) the index of refraction has the universal form $`n(\omega )=1-{\displaystyle \frac{\omega _L^2}{2\omega ^2}}`$ , where $`\omega _L`$ is the Langmuir frequency. Additionally, $`n-1\sim 10^{-6}\ll 1`$. Substituting $`n(\omega )`$ into (1) one obtains $`v_{gr}\simeq c\left(1-{\displaystyle \frac{\omega _L^2}{\omega ^2}}\right)`$. It is clear that the group velocity is close to the speed of light. Therefore the time of wave packet delay in a medium is much shorter than that needed for passing a length equal to the target width in vacuum (see the numerical sketch below):

$$\mathrm{\Delta }T=\left(\frac{1}{v_{gr}}-\frac{1}{c}\right)l\simeq \frac{l}{c}\frac{\omega _L^2}{\omega ^2}\ll \frac{l}{c}.$$ (2)

To consider pulse diffraction in a crystal one should solve the Maxwell equations that describe the pulse passing through the crystal. The Maxwell equations are linear, therefore it is convenient to use the Fourier transform in time and to rewrite these equations as functions of frequency:

$$\left[\mathrm{curl}\,\mathrm{curl}\,\vec{E}_{\vec{k}}(\vec{r},\omega )-\frac{\omega ^2}{c^2}\vec{E}_{\vec{k}}(\vec{r},\omega )\right]_i-\frac{\omega ^2}{c^2}\chi _{ij}(\vec{r},\omega )E_{\vec{k},j}(\vec{r},\omega )=0,$$ (3)

where $`\chi _{ij}(\vec{r},\omega )`$ is the spatially periodic tensor of susceptibility, $`i,j=1,2,3`$, and repeated indices imply summation. Making the Fourier transformation of these equations in the coordinate variables one can derive a set of equations matching the incident and diffracted waves.
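As promised above, a back-of-the-envelope evaluation of Eq. (2) (Python). The numbers are illustrative assumptions of this sketch: a 0.1 cm target, 10 keV photons, and a plasma (Langmuir) energy of about 30 eV, a typical order of magnitude for solids.

```python
# Size of the non-diffractive delay, Eq. (2): Delta_T ~ (l/c) (omega_L/omega)^2.
c = 3.0e10                       # speed of light, cm/s
l = 0.1                          # target width, cm
ratio2 = (30.0 / 10.0e3) ** 2    # (hbar*omega_L / hbar*omega)^2: 30 eV / 10 keV
print("l/c     =", l / c, "s")                 # ~3.3e-12 s
print("Delta_T =", (l / c) * ratio2, "s")      # ~3e-17 s, i.e. << l/c
```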
When two strong waves are excited under diffraction (the so-called two-beam diffraction case), the following set of equations for determining the wave amplitudes can be obtained:

$$\begin{array}{c}\left(\frac{k^2c^2}{\omega ^2}-1-\chi _0\right)\vec{E}_{\vec{k}}^s-c_s\chi _{-\vec{\tau }}\vec{E}_{\vec{k}_\tau }^s=0\\ \\ \left(\frac{k_\tau ^2c^2}{\omega ^2}-1-\chi _0\right)\vec{E}_{\vec{k}_\tau }^s-c_s\chi _{\vec{\tau }}\vec{E}_{\vec{k}}^s=0\end{array}$$ (4)

Here $`\vec{k}`$ is the wave vector of the incident wave, $`\vec{k}_{\vec{\tau }}=\vec{k}+\vec{\tau }`$, $`\vec{\tau }`$ is the reciprocal lattice vector; $`\chi _0,\chi _{\vec{\tau }}`$ are the Fourier components of the crystal susceptibility:

$$\chi (\vec{r})=\underset{\vec{\tau }}{\sum }\chi _{\vec{\tau }}\mathrm{exp}(i\vec{\tau }\cdot \vec{r})$$ (5)

$`C_s=\vec{e}^s\cdot \vec{e}_{\vec{\tau }}^s`$, where $`\vec{e}^s`$ ($`\vec{e}_{\vec{\tau }}^s`$) are the unit polarization vectors of the incident and diffracted waves, respectively. The condition for the linear system (4) to be solvable leads to a dispersion equation that determines the possible wave vectors $`\vec{k}`$ in the crystal. It is convenient to present these wave vectors as:

$$\vec{k}_{\mu s}=\vec{k}+\text{æ}_{\mu s}\vec{N},\qquad \text{æ}_{\mu s}=\frac{\omega }{c\gamma _0}\epsilon _{\mu s},$$

where $`\mu =1,2`$; $`\vec{N}`$ is the unit vector of the normal to the entrance crystal surface, directed into the crystal,

$$\epsilon _s^{(1,2)}=\frac{1}{4}[(1+\beta )\chi _0-\beta \alpha _B]\pm \frac{1}{4}\left\{[(1+\beta )\chi _0-\beta \alpha _B-2\chi _0]^2+4\beta C_s^2\chi _{\vec{\tau }}\chi _{-\vec{\tau }}\right\}^{1/2},$$ (6)

$`\alpha _B=(2\vec{k}\cdot \vec{\tau }+\tau ^2)/k^2`$ is the off-Bragg parameter ($`\alpha _B=0`$ when the Bragg condition of diffraction is exactly fulfilled), and

$$\gamma _0=\vec{n}_\gamma \cdot \vec{N},\quad \vec{n}_\gamma =\frac{\vec{k}}{k},\quad \beta =\frac{\gamma _0}{\gamma _1},\quad \gamma _1=\vec{n}_{\gamma \tau }\cdot \vec{N},\quad \vec{n}_{\gamma \tau }=\frac{\vec{k}+\vec{\tau }}{|\vec{k}+\vec{\tau }|}$$

The general solution of (3,4) inside the crystal is:

$$\vec{E}_{\vec{k}}^s(\vec{r})=\underset{\mu =1}{\overset{2}{\sum }}\left[\vec{e}^sA_\mu \mathrm{exp}(i\vec{k}_{\mu s}\vec{r})+\vec{e}_\tau ^sA_{\tau \mu }\mathrm{exp}(i\vec{k}_{\mu s\tau }\vec{r})\right]$$ (7)

By matching these solutions with the solutions of the Maxwell equation for the vacuum region we can find the explicit expression for $`\vec{E}_{\vec{k}}^s(\vec{r})`$ throughout the space. It is possible to discriminate several types of diffraction geometries; the Laue and the Bragg schemes are the most well-known .
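The algebra leading from the solvability condition of (4) to Eq. (6) can be checked symbolically. In the sketch below (Python/SymPy), the reductions $`k^2c^2/\omega ^2\simeq 1+2\epsilon `$ and $`k_\tau ^2c^2/\omega ^2\simeq 1+\alpha _B+2\epsilon /\beta `$, which follow to first order in $`\epsilon `$ from the definitions of $`\text{æ}_{\mu s}`$ and $`\beta `$, are assumptions of the sketch rather than statements from the text.

```python
import sympy as sp

eps, chi0, chit, chimt, beta, aB, Cs = sp.symbols(
    'epsilon chi0 chit chimt beta alphaB Cs')

# Determinant of the two-beam system (4), with k^2 c^2/omega^2 -> 1 + 2 eps
# and k_tau^2 c^2/omega^2 -> 1 + alphaB + 2 eps/beta (first order in eps):
det = (2*eps - chi0) * (aB + 2*eps/beta - chi0) - Cs**2 * chit * chimt

# The two branches of Eq. (6):
X = (1 + beta)*chi0 - beta*aB
roots = [X/4 + s * sp.sqrt((X - 2*chi0)**2 + 4*beta*Cs**2*chit*chimt)/4
         for s in (1, -1)]

# Both roots should make the determinant vanish identically.
print([sp.simplify(sp.expand(det.subs(eps, r))) for r in roots])  # [0, 0]
```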
In the case of two-wave dynamical diffraction the crystal can be described by two effective refraction indices

$$n_s^{(1,2)}=1+\epsilon _s^{(1,2)},$$

$$\epsilon _s^{(1,2)}=\frac{1}{4}\left\{\chi _0(1+\beta )-\beta \alpha \pm \sqrt{(\chi _0(1-\beta )+\beta \alpha )^2+4\beta C_s^2\chi _\tau \chi _{-\tau }}\right\}.$$ (8)

The diffraction is significant only in a narrow range near the Bragg frequency, therefore $`\chi _0`$ and $`\chi _\tau `$ can be considered as constants and the dependence on $`\omega `$ should be taken into account through $`\alpha ={\displaystyle \frac{2\pi \vec{\tau }\cdot (2\pi \vec{\tau }+2\vec{k})}{k^2}}={\displaystyle \frac{(2\pi \tau )^2}{k_B^3c}}(\omega -\omega _B)`$, where $`k={\displaystyle \frac{\omega }{c}}`$; $`2\pi \vec{\tau }`$ is the reciprocal lattice vector that characterizes the set of planes where the diffraction occurs, and the Bragg frequency is determined by the condition $`\alpha =0`$. From (1,8) one obtains

$$v_{gr}^{(1,2)s}=\frac{c}{n_s^{(1,2)}(\omega )\pm \beta {\displaystyle \frac{(2\pi \tau )^2}{4k_B^2}}{\displaystyle \frac{(\chi _0(1-\beta )+\beta \alpha )}{\sqrt{(\chi _0(1-\beta )+\beta \alpha )^2+4\beta C_s^2\chi _\tau \chi _{-\tau }}}}}.$$ (9)

In the general case $`(\chi _0(1-\beta )+\beta \alpha )\gtrsim 2\sqrt{\beta }\chi _0`$, therefore the term that is added to $`n_s^{(1,2)}(\omega )`$ in the denominator of (9) is of the order of 1. Moreover, $`v_{gr}`$ differs significantly from $`c`$ for the antisymmetric diffraction ($`\left|\beta \right|\gg 1`$). It should be noted that, because of the complicated character of the wave field in the crystal, one of the $`v_{gr}^{(i)s}`$ can appear to be much higher than $`c`$, or even negative. When $`\beta `$ is negative the subradical expression in (9) can become equal to zero (Bragg reflection threshold) and $`v_{gr}\to 0`$ . It should be noted that in the presence of a time-alternating external field a crystal can be described by effective indices of refraction that depend on the external field frequency $`\mathrm{\Omega }`$ . Therefore, in this case $`v_{gr}`$ appears to be a function of $`\mathrm{\Omega }`$ . This can be easily observed in the conditions of X-ray-acoustic resonance. The analysis done above allows us to conclude that the center of an X-ray pulse can undergo a significant delay in a crystal, $`\mathrm{\Delta }T\sim {\displaystyle \frac{l}{c}}`$, which can be investigated experimentally. Thus, when $`\beta =10^3`$, $`l=0.1`$ cm and $`l/c\approx 3\cdot 10^{-12}`$ s, the delay time can be estimated as $`\mathrm{\Delta }T\sim 3\cdot 10^{-9}`$ s.
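A numerical illustration of Eq. (9) follows (Python). All parameter values, including $`\chi _0`$, $`\chi _\tau `$ and the geometry factor, are illustrative assumptions of this sketch, not taken from crystal data.

```python
import numpy as np

# Group velocities of the two branches, Eq. (9), near the Bragg condition.
# Illustrative hard X-ray numbers: |chi| ~ 1e-6, symmetric case beta = 1,
# C_s = 1, and a geometry factor (2*pi*tau)^2 / (4 k_B^2) = 1.
c = 2.998e10                     # cm/s
chi0, chit = -1.0e-6, 5.0e-7     # Re chi_0 < 0 far above atomic resonances
beta, Cs, geom = 1.0, 1.0, 1.0
for alpha in (-5e-6, 0.0, 5e-6):
    num = chi0 * (1 - beta) + beta * alpha
    ratio = num / np.sqrt(num**2 + 4 * beta * Cs**2 * chit**2)
    for sign in (+1, -1):
        v = c / (1.0 + sign * beta * geom * ratio)   # n ~ 1 to this accuracy
        print(f"alpha = {alpha:+.0e}, branch {sign:+d}: v_gr/c = {v/c:+.3f}")
# Off the exact Bragg condition one branch slows to ~c/2 while the other
# formally exceeds c, as the text notes for the complicated wave field.
```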
Let us now study the time dependence of the delay law of the radiation after passing through the crystal. Assuming that $`B(\omega )`$ is the reflection or transmission amplitude coefficient of the crystal, one obtains the following expression for the pulse form:

$$E(t)=\frac{1}{2\pi }\int B(\omega )E_0(\omega )e^{-i\omega t}d\omega =\int B(t-t^{\prime })E_0(t^{\prime })dt^{\prime }.$$ (10)

where $`E_0(\omega )`$ is the amplitude of the electromagnetic wave incident on the crystal. In accordance with the general theory , for the Bragg geometry the amplitude of the diffractionally reflected wave, for a crystal width much greater than the absorption length, can be written as

$$B_s(\omega )=\frac{1}{2\chi _{\vec{\tau }}}\left\{\chi _0(1+\left|\beta \right|)-\left|\beta \right|\alpha -\sqrt{(\chi _0(1-\left|\beta \right|)-\left|\beta \right|\alpha )^2-4\left|\beta \right|C_s^2\chi _{\vec{\tau }}\chi _{-\vec{\tau }}}\right\}$$ (11)

In the absence of resonance scattering the parameters $`\chi _0`$ and $`\chi _{\pm \tau }`$ can be considered as constants and the frequency dependence is defined by the term $`\alpha ={\displaystyle \frac{(2\pi \tau )^2}{k_B^3c}}(\omega -\omega _B)`$. So $`B_s(t)`$ can be found from

$$B_s(t)=\frac{1}{4\pi \chi _{\vec{\tau }}}\int \left\{\chi _0(1+\left|\beta \right|)-\left|\beta \right|\alpha -\sqrt{(\chi _0(1-\left|\beta \right|)-\left|\beta \right|\alpha )^2-4\left|\beta \right|C_s^2\chi _{\vec{\tau }}\chi _{-\vec{\tau }}}\right\}e^{-i\omega t}d\omega .$$ (12)

The Fourier transform of the first terms results in $`\delta `$-like contributions, which we can neglect because the delay is described by the square-root term. That term can be calculated by the methods of the theory of functions of a complex argument:

$$B_s(t)=\frac{i}{4\chi _{\vec{\tau }}}\left|\beta \right|\frac{(2\pi \tau )^2}{k_B^2\omega _B}\frac{J_1(a_st)}{t}e^{-i(\omega _B+\mathrm{\Delta }\omega _B)t}\theta (t),$$ (13)

or

$$B_s(t)=\frac{i\sqrt{\left|\beta \right|}}{2}\frac{J_1(a_st)}{a_st}e^{-i(\omega _B+\mathrm{\Delta }\omega _B)t}\theta (t),$$ (14)

where

$$a_s=\frac{2\sqrt{C_s^2\chi _{\vec{\tau }}\chi _{-\vec{\tau }}}\,\omega _B}{\sqrt{\left|\beta \right|}{\displaystyle \frac{(2\pi \tau )^2}{k_B^2}}},\qquad \mathrm{\Delta }\omega _B=\frac{\chi _0(1+\left|\beta \right|)\omega _Bk_B^2}{\left|\beta \right|(2\pi \tau )^2}.$$

Since $`\chi _0`$ and $`\chi _\tau `$ are complex, both $`a_s`$ and $`\mathrm{\Delta }\omega _B`$ have real and imaginary parts. According to (12-14), in the case of Bragg reflection of a short pulse (pulse frequency band width $`\gg `$ frequency width of the total reflection range) there appear both an instantly reflected pulse and a pulse whose amplitude undergoes damped beatings. The beating period increases as $`\left|\beta \right|`$ grows and $`\chi _\tau `$ decreases. The pulse intensity can be written as

$$I_s(t)\sim \left|B_s(t)\right|^2=\frac{\left|\beta \right|}{2}\left|\frac{J_1(a_st)}{a_st}\right|^2e^{-2\mathrm{I}m\mathrm{\Delta }\omega _Bt}\theta (t).$$ (15)

It is evident that the reflected pulse intensity depends on the orientation of the photon polarization vector $`\vec{e}_s`$ and undergoes damped oscillations in time. Let us evaluate the effect. The characteristic values are $`\mathrm{I}m\mathrm{\Delta }\omega _B\sim \mathrm{I}m\chi _0\,\omega _B`$ and $`\mathrm{I}ma_s\sim {\displaystyle \frac{\mathrm{I}m\chi _\tau \,\omega _B}{\sqrt{\beta }}}`$. For 10 keV, in a Si crystal $`\mathrm{I}m\chi _0=1.6\cdot 10^{-7}`$ , for LiH $`\mathrm{I}m\chi _0=7.6\cdot 10^{-11},\mathrm{I}m\chi _\tau =7\cdot 10^{-11}`$, and for LiF $`\mathrm{I}m\chi _0\sim 10^{-8}`$. Consequently, the characteristic time $`\tau `$ of the exponential decay in (15) can be estimated as follows ($`\omega _B=10^{19}`$ s<sup>-1</sup>): for Si $`\tau \sim 10^{-12}`$ s, for LiF $`\tau \sim 10^{-10}`$ s, for LiH $`\tau \sim 10^{-9}`$ s!!
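The damped-beating delay law (15) is easy to evaluate numerically. In the sketch below (Python), $`a_s`$ and $`\mathrm{I}m\mathrm{\Delta }\omega _B`$ are set to illustrative magnitudes suggested by the estimates above (a beat scale of order $`10^{12}`$ s<sup>-1</sup> and a LiF-like decay time of $`10^{-10}`$ s), not computed from crystal data; the $`\left|\beta \right|/2`$ prefactor is dropped.

```python
import numpy as np
from scipy.special import j1   # Bessel function J_1

# Delay law of Eq. (15), normalized: I(t) ~ |J_1(a_s t)/(a_s t)|^2 e^{-g t}.
a_s = 2 * np.pi * 1.0e12       # rad/s, sets the beating period ~1e-12 s
g = 2.0 / 1.0e-10              # 2*Im(Delta_omega_B), LiF-like decay ~1e-10 s
t = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 50.0]) * 1e-12
I = (j1(a_s * t) / (a_s * t)) ** 2 * np.exp(-g * t)
for ti, Ii in zip(t, I):
    print(f"t = {ti:.1e} s   I/I(0+) = {Ii:.3e}")
```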
The reflected pulse also undergoes oscillations, whose period increases as $`\left|\beta \right|`$ grows and $`\mathrm{R}e\chi _\tau `$ decreases. This period can be estimated for $`\beta =10^2`$ and $`\mathrm{R}e\chi _\tau \sim 10^{-6}`$ as $`T\sim 10^{-12}`$ s (for Si, LiH, LiF). When the resolving time of the detection equipment is greater than the oscillation period, the expression (15) should be averaged over the period of the oscillations. Then, for the time intervals where $`\mathrm{R}ea_st\gg 1`$ and $`\mathrm{I}m\mathrm{\Delta }\omega _Bt\ll 1`$, the delay law (15) takes the power-law form:

$$I_s\left(t\right)\sim t^{-3}.$$
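This averaged power law can be verified directly: for $`a_st\gg 1`$, $`J_1^2(a_st)`$ averaged over a beat period behaves as $`1/(\pi a_st)`$, so $`\langle I_s\rangle \propto t^{-3}`$. A minimal numerical check (Python, in units where $`a_s=1`$ and with absorption switched off):

```python
import numpy as np
from scipy.special import j1

# Check <|J_1(t)/t|^2> ~ t^-3 for t >> 1 (units a_s = 1, no absorption).
for t0 in (50.0, 100.0, 200.0, 400.0):
    t = np.linspace(t0, t0 + 2 * np.pi, 4001)   # one beat period
    avg = np.mean((j1(t) / t) ** 2)
    print(f"t0 = {t0:5.0f}   avg * t0^3 = {avg * t0**3:.4f}")  # ~1/pi, constant
```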
no-problem/9906/gr-qc9906112.html
# Anderson et al. reply (to the comment by Katz on "Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration")

> We conclude that Katz's proposal (anisotropic heat reflection off of the back of the spacecraft high-gain antennae, the heat coming from the RTGs) does not provide enough power and so cannot explain the Pioneer anomaly.

In his comment , Katz proposes that the anomalous acceleration seen in the Pioneer 10/11 spacecraft is due to anisotropic heat reflection off of the back of the spacecraft high-gain antennae, the heat coming from the RTGs. Before launch the four RTGs delivered a total electrical power of 160 W (now $`\sim `$ 70-80 W), from a total thermal fuel inventory of 2580 W (now $`\sim `$ 2090 W). Presently $`\sim 2000`$ W of RTG heat must be dissipated. Only $`75`$ W of directed power could explain the anomaly . Therefore, in principle there is enough power to explain the anomaly this way. However, 1) the geometry of the spacecraft and 2) the radiation pattern preclude it. Many years ago this problem was discussed with John W. Dyer, who was a Pioneer Project engineer at NASA/ARC, and with James A. Van Allen. What comes below is at least a partial reconstruction of those discussions, which we wish to acknowledge. 1) SPACECRAFT GEOMETRY. The RTGs are located at the end of booms, and rotate about the craft in a plane that contains the approximate base of the antenna. From the RTGs the antenna is thus seen "edge on" and subtends a solid angle of $`\sim `$ 1.5 % of $`4\pi `$ steradians . This already means the proposal could provide at most $`30`$ W. But there is more. 2) RADIATION PATTERN. The above estimate is based on the assumption that the RTGs are spherical black bodies. But they are not. The main bodies of the RTGs are cylinders, grouped in two packages of two. Each package has the two cylinders end to end, extending away from the antenna. Every RTG has six fins that go radially out from the cylinder. Thus, the fins are "edge on" to the antenna (the fins point perpendicular to the cylinder axes). Ignoring edge effects, this means that only 2.5 % of the surface area of the RTGs faces the antenna. Further, for better radiation from the fins, the Pioneer SNAP 19 RTGs had larger fins than the earlier test models, and the packages were insulated so that the end caps had lower temperatures and radiated less than the cylinder/fins . As a result, the vast majority of the RTG heat is symmetrically radiated to space, unobscured by the antenna. We conclude that Katz's proposal does not provide enough power and so cannot explain the Pioneer anomaly .
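The two numbered arguments amount to simple arithmetic, sketched below (Python). Combining the two fractions multiplicatively is our rough reading of the argument, not a statement from the reply.

```python
# Geometry argument in numbers.
P_rtg = 2000.0            # W of RTG heat currently dissipated
P_needed = 75.0           # W of directed power needed to explain the anomaly
solid_angle = 0.015       # antenna seen from the RTGs: ~1.5% of 4*pi sr
fin_fraction = 0.025      # ~2.5% of the RTG surface area faces the antenna

print("black-body bound:", P_rtg * solid_angle, "W")   # ~30 W < 75 W needed
print("with fin pattern:", P_rtg * solid_angle * fin_fraction, "W")  # << 75 W
```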
Independent of the above, we continue to search for a systematic origin of the effect. A few weeks after our letter was accepted, we began using new JPL software (SIGMA) to reduce the Pioneer 10 Doppler data to 50-day averages of acceleration, extending from January 1987 to July 1998, over a distance interval from 40 to 69 AU. Before mid-1990, the spacecraft rotation rate changed (slowed) by about -0.065 rev/day/day. Between mid-1990 and mid-1992 the spin-deceleration increased to -0.4 rev/day/day. But after mid-1992 the spin rate remained $`\sim `$ constant. In units of $`10^{-8}`$ cm/s<sup>2</sup>, the mean acceleration levels obtained by SIGMA from the Doppler data in these periods are: $`(7.94\pm 0.11)`$ before mid-1990, $`(8.39\pm 0.14)`$ between mid-1990 and mid-1992, and $`(7.29\pm 0.17)`$ after mid-1992. [Similar values $`(8.27\pm 0.05,8.77\pm 0.04,7.76\pm 0.08)`$ were obtained using CHASMP.] We detect no long-term deceleration changes from mid-1992 to mid-1998, and only two spin-related discontinuities over the entire data period. Assume that the slowing of the spin rate was caused by spacecraft systems (perhaps gas leak changes) that also account for a few % systematic effect. Then, excluding other biases (such as the radio beam decreasing the measured anomaly), we should adopt the post-1992 value as the most accurate measure of the anomalous Pioneer 10 acceleration. This work was supported by the Pioneer Project, NASA/Ames Research Center, and was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. P.A.L. and A.S.L. acknowledge support by a grant from NASA through the Ultraviolet, Visible, and Gravitational Astrophysics Program. M.M.N. acknowledges support by the U.S. DOE and the Alexander von Humboldt Foundation. John D. Anderson,<sup>a</sup> Philip A. Laing,<sup>b</sup> Eunice L. Lau,<sup>a</sup> Anthony S. Liu,<sup>c</sup> Michael Martin Nieto,<sup>d,e</sup> and Slava G. Turyshev<sup>a</sup> <sup>a</sup>Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 <sup>b</sup>The Aerospace Corporation, 2350 E. El Segundo Blvd., El Segundo, CA 90245-4691 <sup>c</sup>Astrodynamic Sciences, 2393 Silver Ridge Ave., Los Angeles, CA 90039 <sup>d</sup>Theoretical Division (MS-B285), Los Alamos National Laboratory, University of California, Los Alamos, NM 87545 <sup>e</sup>Abteilung für Quantenphysik, Universität Ulm, D-86069 Ulm, Germany PACS numbers: 04.80.-y, 95.10.Eg, 95.55.Pe
no-problem/9906/astro-ph9906481.html
Figure 1: The potential is $`V=m^2\psi ^2+\beta ^2\psi ^{12}`$. The choice of $`z=10^{18}`$ (rather than inflation) as the initial time is arbitrary, made for convenience of computation and illustration, and we have used $`\mathrm{\Omega }_{CDM}=0.95`$ today. The bar on the left hand side represents the range of initial values (spanning over 70 orders of magnitude if we extrapolate back to inflation) that converge to $`\mathrm{\Omega }_{CDM}=0.95`$ today. The solid circle represents the unique choice of initial conditions that gives $`\mathrm{\Omega }_{CDM}=0.95`$ today if the CDM potential is the pure quadratic $`V=m^2\psi ^2`$. The dot-dashed line is the radiation density, the long dashed line is the baryon density, the solid thick line is the tracker solution, and the short dashed lines are examples of different initial conditions that undershoot the tracker solution but converge to it.

December 1998

# A tracker solution to the cold dark matter cosmic coincidence problem

Ivaylo Zlatev<sup>a,b</sup> and Paul J. Steinhardt<sup>a</sup>
<sup>a</sup>Department of Physics, Princeton University, Princeton, NJ 08540
<sup>b</sup>Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104

## Abstract

Recently, we introduced the notion of "tracker fields," a form of quintessence which has an attractor-like solution. Using this concept, we showed how to construct models in which the ratio of quintessence to matter densities today is independent of initial conditions. Here we apply the same idea to the standard cold dark matter component in cases where it is composed of oscillating fields. Combining these ideas, we can construct a model in which quintessence, cold dark matter, and ordinary matter all contribute comparable amounts to the total energy density today irrespective of initial conditions.

One of the leading cosmological models nowadays is $`\mathrm{\Lambda }`$CDM, which consists of a mixture of vacuum energy or cosmological constant ($`\mathrm{\Lambda }`$) and cold dark matter (CDM). A number of recent observations suggest that $`\mathrm{\Omega }_m`$, the ratio of the (baryonic plus dark) matter density to the critical density, is significantly less than unity, and at the same time recent supernova results suggest that the expansion of the Universe is accelerating . A serious problem with the $`\mathrm{\Lambda }`$CDM scenario is the "cosmic coincidence problem" . Since the vacuum density is constant and the matter density decreases as the universe expands, it appears that their ratio must be set to a specific, infinitesimal value ($`\sim 10^{-120}`$) in the very early universe in order for the two densities to nearly coincide today, some 15 billion years later. We will refer to the coincidence problem also as the "initial conditions problem" from now on. Recently, we proposed the notion of "tracker quintessence fields" to resolve the $`\mathrm{\Lambda }`$ coincidence problem . Quintessence, as a substitute for the cosmological constant, is a slowly-varying, spatially inhomogeneous component with a negative equation of state. An example of quintessence is the energy associated with a scalar field ($`Q`$) slowly evolving down its potential $`V(Q)`$. "Tracker fields" are a form of quintessence in which the tracker field $`Q`$ rolls down a potential $`V(Q)`$ according to an attractor-like solution to the equations of motion. The tracker solution is an attractor in the sense that a very wide range of initial conditions for $`Q`$ and $`\dot{Q}`$ rapidly approaches a common evolutionary track, so that the cosmology is insensitive to the initial conditions.
Tracking has an advantage similar to inflation in that a wide range of initial conditions is funneled into the same final condition. In the present Letter we examine the initial conditions problem for a class of theories that treat CDM as composed of oscillating fields. There are two quantities that determine $`\mathrm{\Omega }_{CDM}`$: the value of the field $`\psi `$ and its effective mass $`m_{eff}^2\equiv V^{\prime \prime }(\psi )`$ ($`{}^{\prime }`$ is a derivative with respect to the field $`\psi `$). The density of this CDM candidate is then described by $`\rho _{CDM}\sim m_{eff}^2\psi ^2`$. The axion field is an example of an oscillating field CDM candidate. Once the oscillations begin, the density redshifts, just like CDM, as one over the scale factor cubed. The initial conditions problem for the oscillating field model of CDM is that there is one unique initial density or, equivalently, a unique value of $`\psi `$, that leads to the present day observed CDM density. If one imagines that $`\psi `$ is set randomly, or by equipartition after inflation, the probability of obtaining the right density today is infinitesimal. In this Letter we propose a resolution of the initial conditions problem based on applying the "tracker fields" idea to models of CDM composed of oscillating fields. To this end we will construct potentials that have attractor-like solutions at early times, but at later times the solutions are oscillatory and CDM-like. Thus, at the beginning, the potential funnels a large number of different initial conditions into one state. At late times this state enters an oscillatory phase and the field behaves as the dark matter oscillatory candidates discussed above. The removal of the initial conditions dependence is achieved at the expense of introducing an additional tuned parameter in the CDM field potential. We also point out how, by combining Quintessence with CDM tracker solutions, one can construct a toy model in which the density ratios today are all determined by parameters in the potential and are insensitive to initial conditions. The possibility that the initial energy densities of all cosmic density constituents were in equilibrium initially is allowed for. Before we embark on a discussion of oscillatory fields as CDM candidates, we should first answer the question: what observations characterize CDM and what field potentials satisfy these observations? The two observations that characterize CDM are (1) an equation of state of zero, since CDM is non-relativistic, and (2) CDM should cluster on scales larger than a $`Mpc`$ in order for galaxies and quasars to form at a moderate redshift. There are a number of potentials that yield equations of state equal to zero. We will separate them into two categories. The first one is the category of potentials with either a constant mass or a variable mass that became bigger than the Hubble parameter at some time in the past. As soon as the Hubble parameter redshifts to a value several times smaller than the field mass, the field begins to oscillate. The quadratic potential $`m^2\psi ^2`$ is the most widely considered example of this category. The second category is comprised of potentials that have effective masses that decrease at the same rate as the Hubble parameter and never become bigger than it. A widely studied example of the second category is the exponential potential $`Ae^{\alpha \psi }`$ . It is the second observation, the necessity for CDM to cluster above the $`Mpc`$ scale, that renders only the first category as valid CDM candidates.
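The freeze-then-oscillate behavior of the first category is easy to reproduce numerically. The sketch below (Python) integrates $`\ddot{\psi }+3H\dot{\psi }+m^2\psi =0`$ in a radiation era, $`H=1/(2t)`$, in units where $`m=1`$; the specific units and initial value are assumptions of the sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# psi'' + 3 H psi' + m^2 psi = 0 with H = 1/(2t) (radiation era), m = 1.
# The field is frozen while m << H; once m > 3H it oscillates and
# rho = (psi'^2 + psi^2)/2 redshifts like a^-3 ~ t^-3/2.
def rhs(t, y):
    psi, dpsi = y
    H = 0.5 / t
    return [dpsi, -3.0 * H * dpsi - psi]

times = np.geomspace(1e-3, 300.0, 8)
sol = solve_ivp(rhs, (times[0], times[-1]), [1.0, 0.0],
                t_eval=times, rtol=1e-9, atol=1e-12)
for t, psi, dpsi in zip(sol.t, sol.y[0], sol.y[1]):
    rho = 0.5 * (dpsi**2 + psi**2)
    print(f"t = {t:8.2e}   psi = {psi:+.3e}   rho * t^1.5 = {rho * t**1.5:.3e}")
# rho * t^1.5 approaches a constant at late times: the CDM-like regime.
```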
This is so because the field clusters like CDM only on scales larger than $`m^{-1}`$ . On smaller scales cluster formation is suppressed. By extending the analysis initiated in , one can show that for galaxies and quasars to form at a moderate redshift the mass of the field has to be larger than $`4-20\times 10^{-36}GeV`$, which in turn translates into $`z>(1-3.5)\times 10^4`$ as the latest redshift at which the oscillations can start. The value of $`\mathrm{\Omega }_{cdm}`$ obtained today is highly sensitive to the initial condition for the oscillating field, $`\psi _i`$. Let us consider the quadratic potential, $`m^2\psi ^2`$, for example. The values of $`m`$ and $`\psi _i`$ are fixed by the condition that $`\mathrm{\Omega }_\psi `$ be equal to the observed value of $`\mathrm{\Omega }_{cdm}`$ today. Although the field is presumed to be oscillating today (which requires $`m>3H`$), it was frozen at some $`\psi _i`$ for most of its history, when $`m\ll H`$. Hence, we need only consider the initial expectation value and not the initial kinetic energy of $`\psi _i`$. If $`\psi `$ were set to $`\psi _i`$ at the end of inflation, say, then it would maintain that same value until very late in the history of the universe, when $`H`$ decreases below $`m`$. However, this requires that $`\rho _\psi `$ be set many orders of magnitude less than the matter-radiation energy at the end of inflation. A different initial value of $`\rho _\psi `$ leads to a different CDM density today, and only a limited range is compatible with large-scale structure formation. One of the most widely discussed oscillating field CDM candidates is the axion. The axion field is a quantum field $`\psi `$ added to particle-physics models in order to solve the strong CP problem . It has been shown that $`\mathrm{\Omega }_a^0h^2\sim 10^7(f_a/M_p)(\psi _i/f_a)`$, where by the subscript $`i`$ we mean the value of the quantity when the harmonic oscillations of the field begin, $`f_a`$ is the Peccei-Quinn symmetry breaking scale and $`\mathrm{\Omega }_a^0`$ is the fraction of the present cosmic energy density that is attributed to the axions. The axion field acquires a high enough mass to commence oscillations at around the $`QCD`$ scale of $`200MeV`$ (before that it is effectively massless). Although it is commonly assumed that the initial value of $`\psi `$ is $`\psi _i\sim f_a`$, and this is used to estimate $`\mathrm{\Omega }_a`$, in fact $`\mathrm{\Omega }_a`$ is very sensitive to the precise value of $`\psi _i`$ and can take on any value between $`0`$ and $`1`$ (assuming a flat universe, say). The initial value of $`\psi `$ has to be set very precisely (and spatially uniformly) to obtain the correct $`\mathrm{\Omega }_a`$ today. The initial condition corresponds to setting the initial value of $`\rho _{axion}`$ over $`70`$ orders of magnitude less than the energy density at the end of inflation (when $`\psi _i`$ was presumably set). Some discount this tuning as being no different from fixing any other parameter in a theory, such as Newton's constant. If this be the case for you, dear reader, there is no point in continuing this paper, since you do not acknowledge the problem we attempt to address.
Others (including us) would object that the situation is not analogous to the case of measuring Newton's constant: by the structure of the theory, $`G`$ must take some value, and measuring it through gravitational effects is reasonable; it seems remarkable that the axion, invented to solve the $`U(1)`$ problem, should happen to have the right coupling and initial conditions to comprise the dark matter today. Bear in mind that the existence of stars, galaxies and large-scale structure depends on the CDM density being neither too large nor too small, within a fairly narrow range. The high sensitivity to initial conditions seems particularly disturbing since they are, most likely, determined by some random process. While we proposed the tracking mechanism in order to resolve the $`\mathrm{\Lambda }`$ initial conditions problem, we will now apply it to resolve the CDM initial conditions problem as well. Let us first review the essence of tracking. Tracker fields have an equation of motion with attractor-like solutions in which a very wide range of initial conditions rapidly converges to a common, cosmic evolutionary track. The central theorem that we proved was that tracking behavior with $`w_Q<w_B`$ occurs for any potential in which $`\mathrm{\Gamma }\equiv V^{\prime \prime }V/(V^{\prime })^2>1`$ and is nearly constant ($`|d(\mathrm{\Gamma }-1)/Hdt|\ll |\mathrm{\Gamma }-1|`$). We needed $`w_Q<w_B`$ in order to explain the present day acceleration of the Universe suggested by the recent supernova results. While looking for trackers with quintessential behavior ($`w_Q<w_B`$), we discovered another category of trackers, with $`w_B\le w_Q<(1/2)(1+w_B)`$, provided $`\mathrm{\Gamma }\approx 1`$ and nearly constant. While these trackers do not have the accelerating properties of Quintessence, they are useful for resolving the CDM initial conditions problem. Potentials that yield such behavior are, for example, $`V(\psi )=Ae^{\alpha \psi }`$ and $`V(\psi )=B\psi ^\beta ;\beta >10`$. Since these potentials exhibit tracking behavior, they funnel a large number of initial conditions into one final state. If these potentials also had the CDM clustering properties, then they would resolve the CDM initial conditions problem. Alas, these potentials by themselves cannot play the role of oscillating field CDM, since their effective masses ($`V^{\prime \prime }`$) decrease at the same rate as the Hubble parameter does and thus fail to reproduce the observed clustering at scales larger than a $`Mpc`$. Thus, we do not want the tracker condition satisfied at all times. We want it satisfied at early times, so that a large number of initial conditions is funneled into one final state. At late times we want the field to exhibit non-tracking, oscillatory clustering behavior. One way to achieve this goal is to add a tracker potential to the quadratic one, $`V=m^2\psi ^2+V_{tracking}`$, and to adjust the coefficient in $`V_{tracking}`$ in such a way that $`V_{tracking}`$ dominates at early times but is sub-dominant at late times. This way we gain independence of initial conditions at the price of tuning a parameter in the potential. As a pedagogical illustration, we will first consider the toy model

$`V=m^2\psi ^2+\beta \psi ^\alpha ;\alpha >10`$ (1)

where $`\beta `$ has the dimensions of $`M_p^{4-\alpha }`$ and for calculational purposes we have taken $`\alpha =12`$. The mass of the field $`m`$ is determined by the minimum scale at which we want our CDM to exhibit clustering properties.
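The tracker diagnostic $`\mathrm{\Gamma }=V^{\prime \prime }V/(V^{\prime })^2`$ can be checked symbolically for the toy model (1). A minimal sketch (Python/SymPy); identifying large $`\psi `$ with the early, tracking regime and small $`\psi `$ with the late, oscillatory regime assumes the field rolls down from large initial values, as in Fig. 1.

```python
import sympy as sp

psi = sp.symbols('psi', positive=True)
m, beta = sp.symbols('m beta', positive=True)

def Gamma(V):
    # Gamma = V'' V / (V')^2, the tracker diagnostic
    return sp.simplify(sp.diff(V, psi, 2) * V / sp.diff(V, psi)**2)

print(Gamma(beta * psi**12))           # 11/12: Gamma ~ 1 and exactly constant
V = m**2 * psi**2 + beta * psi**12     # the toy model, Eq. (1), alpha = 12
print(sp.limit(Gamma(V), psi, sp.oo))  # 11/12: tracking regime (early times)
print(sp.limit(Gamma(V), psi, 0))      # 1/2: quadratic, oscillatory regime
```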
If only the quadratic term were present, then there would have been only one unique initial value of the field ($`\psi _i`$) that would have led to the present day CDM energy density. This unique initial value is denoted by a solid circle in Fig. 1. In order to resolve this initial conditions problem, we add an additional term to the potential that is equal to the quadratic term at the time the oscillations begin; this term dominates at earlier times and is sub-dominant at late times. This is achieved by tuning the parameter $`\beta `$. This additional term exhibits tracking behavior as already discussed; but it exhibits it only at early times when it is dominant. At late times it is sub-dominant and the field can cluster. Namely, whereas before we had a potential in which $`\psi `$ had to be set by hand to get the right value, we now add a term that dominates at early times and automatically drives $`\psi `$ to the right value. As shown in Fig. 1, while tracking at early times, the potential funnels a large range (spanning over seventy orders of magnitude) of initial conditions into one final state. We would like to emphasize that our general approach does not depend on the specific choice of potential. We could have achieved the same desirable effect of funneling a large number of initial conditions into one final state via any combination of the quadratic term with any number of higher power polynomial terms. While in an arbitrary combination of numerous terms one would have to adjust a number of parameters, a gaussian-like potential is a particular combination of infinitely many high power polynomial terms with only two parameters. As shown in Fig. 2, we can resolve the CDM initial conditions problem using a potential of the type $`V=A[e^{\lambda \psi ^2}-1].`$ (2) Expanding this potential in a Taylor series, we see that at early times the series is dominated by the high power polynomial terms, which exhibit tracking behavior as argued above, but at late times the quadratic term dominates and the field starts to oscillate and cluster like CDM. Using the tracker equations of motion one can show rigorously that $`\lambda `$ is set by $`\mathrm{\Omega }_\psi ^0`$, while $`\lambda A`$, which sets the squared effective mass of the oscillating field, determines the redshift at which the oscillations commence. Hence, two observable quantities determine the two potential parameters. This example is only a toy model to illustrate that the scenario is dependent on only two parameters. In general, polynomial and exponential (non-perturbative) potentials with high-order powers of the field are considered in supersymmetric particle physics theories . Current data suggest that the universe contains both a CDM component and a $`\mathrm{\Lambda }`$ or Quintessence component. Arranging either component to be comparable to the baryon density today appears to require a fine tuning of initial conditions (if the CDM component never reached thermal equilibrium). It is interesting that both fine tunings can be resolved by a common mechanism, tracking. A trivial example is the combined two-field potential $`V=Ae^{\lambda \psi ^2}+B/Q^\alpha `$ (3) where $`Q`$ stands for Quintessence (see Fig. 2). The ratios of $`\mathrm{\Omega }_Q`$ and $`\mathrm{\Omega }_{CDM}`$ to $`\mathrm{\Omega }_{baryon}`$ do not depend on the initial conditions of the components or the initial values of $`Q`$, $`\dot{Q}`$, $`\psi `$ and $`\dot{\psi }`$, but only on the parameters of the potential.
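Written out explicitly (our own step, with the factor of two relating $`\lambda A`$ to the effective mass made visible):

```latex
A\left[e^{\lambda\psi^2}-1\right]
  = A\lambda\psi^2+\frac{A\lambda^{2}}{2!}\psi^{4}+\frac{A\lambda^{3}}{3!}\psi^{6}+\cdots,
\qquad
m_{\mathrm{eff}}^{2}\equiv V''(0)=2\lambda A,
```

so for $`\lambda \psi ^2\gg 1`$ the high powers dominate and the field tracks, while for $`\lambda \psi ^2\ll 1`$ the potential is effectively quadratic and the field oscillates and clusters.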
Although this example is artificial, it points out the power of tracking to resolve problems of initial conditions in cosmology and offers hope of an even more economical model.
# Phase Transition in the Two-Dimensional Gauge Glass ## Abstract The two-dimensional XY gauge glass, which describes disordered superconducting grains in strong magnetic fields, is investigated with regard to the possibility of a glass transition. We compute the glass susceptibility and the correlation function of the system via extensive numerical simulations and perform a finite-size scaling analysis. This gives strong evidence for a finite-temperature transition, which is expected to be of a novel type. The gauge glass model, which was originally proposed as a generalization of the spin-glass model , has attracted much attention in relation to the vortex-glass phase of high-$`T_c`$ superconductors . In three dimensions, the XY gauge glass model is believed to exhibit a finite-temperature glass transition , in agreement with experimental evidence for the vortex-glass phase at finite temperatures . In two dimensions, on the other hand, there has been controversy as to the existence of a finite-temperature transition. Equilibrium studies of several quantities such as the defect-wall energy and the root-mean-square current have suggested the absence of ordering at finite temperatures; this appears to be consistent with experiment, where no evidence for a glassy phase at finite temperatures has been found . Indeed the gauge-glass order parameter has been shown analytically to be zero at any finite temperature . However, the helicity modulus computed via Monte Carlo simulations indicates a signal of a glassy phase at low but finite temperatures . Dynamical simulations also give conflicting results: Whereas an earlier study of the current-voltage characteristics has shown strong evidence for the possibility of glass ordering at finite temperatures , a later one has been interpreted to be consistent with a zero-temperature transition . However, the lowest temperature considered in Ref. is apparently higher than the estimated transition temperature $`T_c\simeq 0.15`$ (in units of the coupling energy $`J`$) . Indeed the recent study of relaxation dynamics has indicated a finite transition temperature $`T_c\simeq 0.22`$ . It should be noted here that the absence of an ordered phase does not necessarily imply the absence of a phase transition, as we will discuss later. We thus believe that the presence/absence of a finite-temperature transition in two dimensions, as well as its nature, is still inconclusive. For the resolution of this, careful analysis of the behavior should be performed at sufficiently low temperatures. In this work we investigate the two-dimensional XY gauge glass via extensive numerical calculations, with particular attention to the possibility of a finite-temperature glass transition. We adopt the equilibration test method in Refs. to obtain equilibrium configurations and to determine the equilibration time, up to the system size $`L=48`$. From the obtained equilibrium configurations, we compute the glass susceptibility and examine it by the finite-size scaling analysis . This reveals remarkable divergent behavior at low but non-zero temperatures, providing strong evidence for the finite-temperature transition. It is discussed how the presence of such a finite-temperature transition can be reconciled with the results of existing studies.
We consider the standard XY gauge glass model on a square $`L\times L`$ lattice, which is described by the Hamiltonian $$H=-J\sum_{\langle i,j\rangle }\mathrm{cos}(\varphi _i-\varphi _j-A_{ij}),$$ (1) where $`J`$ is the coupling energy between nearest-neighboring grains, $`\varphi _i`$ is the phase of the order parameter of the grain at site $`i=(x,y)`$ $`(x,y=1,2,\mathrm{},L)`$, and the bond angles $`A_{ij}`$ are taken to be quenched random variables distributed uniformly on the interval $`[0,2\pi )`$. The presence of a glass transition in the system can be conveniently described by the divergence of the glass susceptibility, given by $`\chi _G=\sum_jG_{ij}`$ with the correlation function of the glass order parameter $$G_{ij}\equiv \left[\left|\langle e^{i(\varphi _i-\varphi _j)}\rangle \right|^2\right],$$ (2) where $`[\mathrm{}]`$ denotes the disorder average and $`\langle \mathrm{}\rangle `$ the thermal average. In the limit where the distance between the two grains $`i`$ and $`j`$ becomes large, the correlation function reduces to $`q^2`$, the square of the Edwards-Anderson glass order parameter $$q\equiv \left[\left|\langle e^{i\varphi _i}\rangle \right|^2\right].$$ (3) In an infinite system the glass susceptibility is expected to display the critical behavior $$\chi _G\sim \left(T-T_c\right)^{-\gamma }$$ (4) with the scaling relation among the exponents $$\gamma =(2-\eta )\nu ,$$ (5) where $`\eta `$ describes the power law decay of the correlation at $`T_c`$ and $`\nu `$ the divergence of the correlation length $`\xi `$: $$\xi \sim (T-T_c)^{-\nu },$$ (6) as the temperature $`T`$ approaches $`T_c`$. In a system of size $`L`$, the glass susceptibility has the finite-size-scaling form $$\chi _G=L^{2-\eta }\overline{\chi }\left(L^{1/\nu }\left(T-T_c\right)\right)$$ (7) with the appropriate scaling function $`\overline{\chi }`$, which can be examined via extensive simulations at various temperatures and sizes. In the simulation concerning equilibrium properties, equilibration is an important issue. We follow Ref. to have a criterion for equilibration, and consider two replicas $`\alpha `$ and $`\beta `$ of the system with the same realization of disorder. The susceptibility can be calculated from the overlap between the two replicas, according to $$\chi _G(t_0)=\frac{1}{Nt_0}\left[\sum_{t=1}^{t_0}\left|\sum_je^{i\left(\varphi _j^\alpha (t_0+t)-\varphi _j^\beta (t_0+t)\right)}\right|^2\right],$$ (8) where $`N\equiv L^2`$ is the number of spins and time $`t`$ is measured in units of the Monte Carlo sweep (MCS). Note also that the time-dependent four-spin correlation function defined as $$\chi _G(t_0)=\frac{1}{N}\left[\left|\sum_je^{i\left(\varphi _j(t_0)-\varphi _j(2t_0)\right)}\right|^2\right]$$ (9) converges to the glass susceptibility $`\chi _G`$ in the limit $`t_0\to \infty `$. Thus the equilibration time $`\tau `$ can be estimated as the value of $`t_0`$ at which the two expressions, Eqs. (8) and (9), give coincident results. The expected behavior of the equilibration time $`\tau `$ is $$\tau \sim \xi ^z\sim (T-T_c)^{-\nu z}$$ (10) in an infinite system near $`T_c`$, with the dynamical exponent $`z`$; in a finite system at $`T=T_c`$, it is expected that $$\tau \sim L^z.$$ (11) Equations (10) and (11) naturally lead to the finite-size scaling form $$\tau =L^z\overline{\tau }(L^{1/\nu }(T-T_c)).$$ (12) To examine the behavior of the glass susceptibility, we have performed extensive simulations at several temperatures, ranging from $`T=0.2`$ to $`T=1.0`$. After the equilibration time $`\tau `$ estimated as above, the glass susceptibility has been measured according to Eq.
(8), except that the data have been taken over sufficiently long time intervals, from three to ten times the equilibration time. Namely, $`t_0`$ has been chosen to be from $`3\tau `$ to $`10\tau `$, depending on the size. We have also performed independent runs with up to 16000 different disorder configurations, over which the disorder average has been taken. Since the different disorder realizations give statistically independent thermal averages, the statistical error can be estimated from the standard deviation of the results for different samples. Here the number of disorder realizations needed to get sufficiently reliable statistics for the equilibration time turns out to be much larger than that for the glass susceptibility; thus many more samples were required to obtain reliable data for the equilibration time than for the glass susceptibility. In this way, for example, the equilibration time at $`T=0.3`$ has been estimated to vary from $`(7\pm 1)\times 10^4`$ MCS ($`L=4`$) to $`(2\pm 0.1)\times 10^6`$ MCS ($`L=24`$); it increases rapidly as the temperature is lowered, reaching at $`T=0.25`$ the value $`(1.5\pm 0.4)\times 10^6`$ for $`L=12`$. Figure 1 shows the behavior of the obtained glass susceptibility $`\chi _G`$ with the temperature $`T`$ for system sizes $`L=4`$, $`6`$, $`8`$, $`12`$, $`24`$, and $`48`$. In all cases, periodic boundary conditions have been employed, and the data points without error bars indicate that the errors estimated by the standard deviations are smaller than the size of the symbols. It is indeed observed that, as $`L`$ is increased, the data points approach the dashed line representing $`(T-T_c)^{-\gamma }`$ with $`T_c`$ and $`\gamma `$ obtained from the finite-size-scaling method below. The finite-size scaling analysis of the data is displayed in the inset of Fig. 1, where the data collapse nicely onto the finite-size-scaling form given by Eq. (7). The corresponding exponents and the transition temperature are obtained: $`\eta `$ $`=`$ $`0.30\pm 0.05,`$ (13) $`\nu `$ $`=`$ $`1.14\pm 0.07,`$ (14) $`T_c`$ $`=`$ $`0.22\pm 0.02,`$ (15) which agree with the results of Ref. . The errors have been estimated by the size of the region beyond which the data do not scale well. The scaling law in Eq. (5) then gives the susceptibility exponent $$\gamma =1.93\pm 0.08.$$ (16) We have also tried to fit the data to the finite-size scaling form with $`T_c=0`$, and observed that the high-temperature data ($`T\gtrsim 0.35`$) appear to fit the $`T_c=0`$ scaling as well, in agreement with Ref. . At lower temperatures, however, systematic deviation from the scaling has been revealed, which becomes conspicuous for large sizes ($`L=24,48`$). Note that such low-temperature regions, which are crucial in differentiating the two scalings ($`T_c=0`$ and $`T_c\ne 0`$), have not been probed properly in previous studies that concluded a zero-temperature transition. Moreover, even for small sizes such zero-temperature scaling yields $`\eta `$ very close to zero, which is rather unlikely in view of the large ground-state degeneracy expected in the system . We also perform the finite-size scaling analysis of the equilibration time. In this case, assuming the scaling form in Eq. (12), we use the values of the exponent $`\nu `$ and the transition temperature $`T_c`$ in Eq. (15), and estimate the dynamic exponent $`z`$.
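For readers who want to experiment, here is a minimal two-replica Metropolis sketch of the estimator in Eq. (8); this is our own illustration with toy parameters, not the authors’ production code, and it handles a single disorder realization only (the disorder average over many samples is omitted):

```python
# Two-replica estimate of chi_G, Eq. (8), for the 2D XY gauge glass.
import numpy as np

rng = np.random.default_rng(0)
L, T = 8, 0.5                            # toy lattice size and temperature
A_x = rng.uniform(0, 2*np.pi, (L, L))    # quenched bond angles, x-bonds
A_y = rng.uniform(0, 2*np.pi, (L, L))    # quenched bond angles, y-bonds

def sweep(phi):
    """One Metropolis sweep, periodic boundary conditions."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        new = phi[i, j] + rng.uniform(-np.pi, np.pi)
        dE = 0.0
        # the four neighbors; sgn keeps track of the bond direction in A
        for (ni, nj, a, sgn) in (((i+1) % L, j, A_x[i, j], +1),
                                 ((i-1) % L, j, A_x[(i-1) % L, j], -1),
                                 (i, (j+1) % L, A_y[i, j], +1),
                                 (i, (j-1) % L, A_y[i, (j-1) % L], -1)):
            dE += (-np.cos(new - phi[ni, nj] - sgn*a)
                   + np.cos(phi[i, j] - phi[ni, nj] - sgn*a))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            phi[i, j] = new

phi_a = rng.uniform(0, 2*np.pi, (L, L))  # replica alpha
phi_b = rng.uniform(0, 2*np.pi, (L, L))  # replica beta, same disorder

for _ in range(2000):                    # crude equilibration
    sweep(phi_a); sweep(phi_b)

overlaps = []
for _ in range(2000):                    # measurement stage of Eq. (8)
    sweep(phi_a); sweep(phi_b)
    q = np.exp(1j * (phi_a - phi_b)).sum()
    overlaps.append(abs(q) ** 2)

print("chi_G estimate (one sample):", np.mean(overlaps) / L**2)
```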
Figure 2 shows that such scaling is indeed reasonable, with the dynamic exponent estimated as $$z=2.4\pm 0.3.$$ (17) In contrast, these equilibration time data hardly fit the scaling with $`T_c=0`$, displaying marked deviation at $`T\lesssim 0.4`$ . We now examine the implications of such a finite-temperature transition, and first consider the scaling behavior of the defect-wall energy, which is defined to be the difference in the ground-state energy upon changing the boundary conditions along one direction from periodic to antiperiodic . The defect-wall energy fluctuates from sample to sample with zero mean, and its typical value, which may be taken as the average of the absolute value over quenched randomness, scales with the system size $`L`$ according to $$\mathrm{\Delta }E\sim L^\theta ,$$ (18) in the asymptotic domain ($`L\to \infty `$). The sign of the exponent $`\theta `$ then determines the presence or absence of long-range order at finite temperatures: Whereas for $`\theta `$ positive the system displays rigidity or ordering, a negative value of $`\theta `$, obtained numerically in two dimensions , implies ubiquity of long-wavelength fluctuations and thus the absence of order. Here it should be stressed that the absence of (long-range) order does not necessarily correspond to the absence of a finite-temperature transition, as the Berezinskii-Kosterlitz-Thouless (BKT) transition and the associated algebraic order present an example . In fact it is easy to show that in any two-dimensional system described by the Hamiltonian (1), gapless spin-wave excitations prevent $`\theta `$ from having a positive value: Suppose that the ground-state energy in the periodic boundary conditions (PBC), $`E_p`$, is smaller than that in the antiperiodic boundary conditions (APBC). From the ground-state configuration $`\{\varphi _i^{(0)}\}`$ in the PBC, we can make a configuration satisfying the APBC by rotating the phases according to $`\varphi _i^{(0)}\to \varphi _i^{(0)}+x\pi /L`$. It is then obvious that the energy $`\stackrel{~}{E}`$ of the new configuration is not smaller than the ground-state energy $`E_a`$ in the same APBC. In the limit $`L\to \infty `$, upon expanding the energy $`\stackrel{~}{E}`$ of the new configuration around the ground-state configuration $`\{\varphi _i^{(0)}\}`$, we obtain the energy change (in units of $`J`$) $`\stackrel{~}{E}-E_p=O(L^0),`$ and thus the desired relation $$E_a-E_p\le \stackrel{~}{E}-E_p=O(L^0),$$ (19) which shows that $`\theta `$ cannot be positive. For $`E_a`$ smaller than $`E_p`$, a similar argument starting from the ground-state configuration in the APBC then yields $`E_p-E_a\le O(L^0)`$. Accordingly, it may be the case that the negative value of $`\theta `$ results from spin-wave excitations, which destroy long-range order but maintain criticality. To examine this possibility, we have generated ground states in the PBC and in the APBC via extensive simulations for several system sizes (up to $`L=24`$), which has confirmed the value of $`\theta `$ obtained in the existing studies on small sizes . We have then carefully compared the two corresponding ground-state configurations, one in the PBC and the other in the APBC, and found only a small difference in the total number of vortices. In particular the relative difference appears to decrease with the system size , indicating that no (bulk) vortex excitation is involved. This suggests that the ubiquitous fluctuations implied by the negative value of $`\theta `$ are indeed of the spin-wave type rather than of the vortex type .
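For completeness, the estimate behind Eq. (19) can be written out in one line (our own spelling-out of the argument above): the first-order variation vanishes because $`\{\varphi _i^{(0)}\}`$ is a stationary point of the energy, each of the $`L^2`$ bonds along the twisted direction picks up an extra phase $`\pi /L`$, and the curvature of the cosine is bounded by one, so

```latex
\tilde{E}-E_p \;\le\; \frac{J}{2}\sum_{\langle ij\rangle_x}\left(\frac{\pi}{L}\right)^{2}
           \;=\; \frac{J}{2}\,L^{2}\left(\frac{\pi}{L}\right)^{2}
           \;=\; \frac{\pi^{2}J}{2} \;=\; O(L^{0}).
```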
Such spin-wave excitations should lead to the algebraic decay of the correlation function in Eq. (2) and to the vanishing of the Edwards-Anderson glass order parameter defined in Eq. (3) at all finite temperatures, thus consistent with the result of Ref. . It is thus strongly suggested that the system displays quasi-long-range glass order below the finite transition temperature , characterized by the algebraic decay of the glass correlation function. We have accordingly computed the glass correlation function via large-scale simulations for the system size $`L=48`$ and display the obtained behavior in Fig. 3, where the difference in the behavior at the two temperatures $`T=0.15`$ and $`0.50`$ is striking: While the dashed line at $`T=0.50`$ represents the least-square fit to the usual exponential decay $`r^{-\eta }e^{-r/\xi }`$ with $`\eta =0.31`$ and $`\xi =1.90`$, the algebraic decay at $`T=0.15`$ is manifested by the least-square fit (represented by the dotted line) to $`r^{-\eta }`$ with $`\eta =0.27`$. The decrease of the exponent $`\eta `$ with temperature, from the value $`0.30`$ (at $`T_c`$), reflects the criticality of the system below $`T_c`$, confirming quasi-long-range glass order at low temperatures. The fit of the data at $`T=0.15`$ to the exponential-decay form has also been tried, only to yield very large deviations as well as inconsistent values of the parameters $`\eta `$ and $`\xi `$. We have also computed the correlation length at various temperatures above $`T_c`$ and found perfect scaling to the form in Eq. (6) with the values of $`T_c`$ and $`\nu `$ given by Eq. (15) (see the inset). We finally point out that the existence of the finite-temperature glass transition is not inconsistent with the experimental results. The temperature scale in the gauge-glass model may be obtained from the correspondence between the model temperature $`T=1.15`$ (in units of $`J`$) and the real temperature $`T=12\mathrm{K}`$ in experiment . Accordingly, the transition temperature estimated here corresponds to approximately $`2.0\mathrm{K}`$; such a low-temperature regime was not probed in Ref. , and it would be interesting to investigate it experimentally. We would like to thank B. Kim, D. Kim, S. Ryu, and D. Stroud for helpful discussions, and acknowledge partial support from the Korea Research Foundation and from the Korea Science and Engineering Foundation. The numerical work has been performed on the Cray T3E at SERI and on the SP2 supercomputer system at ERCC.
# hep-ph/9906238 Low Scale Unification, Newton’s Law and Extra Dimensions. Motivated by recent work on low energy unification, in this short note we derive corrections to Newton’s inverse square law due to the existence of extra decompactified dimensions. In the four-dimensional macroscopic limit we find that the corrections are of Yukawa type. Inside the compactified space of $`n`$ extra dimensions the sub-leading term is proportional to the $`(n+1)`$-th power of the ratio of the distance to the compactification radius. Some physical implications of these modifications are briefly discussed. IOA-TH/99-6 June 1999 One of the most tantalizing mysteries in modern unified theories is the magnitude of the unification scale. A well known result in the weakly coupled heterotic string theory is that the string scale is of the order of the Planck mass $`M_P`$ . Recent developments have revealed the possibility that the string scale can be arbitrarily low in Type I and Type IIB theories . According to a recently proposed scenario, the hierarchy problem may be solved assuming the existence of extra spatial dimensions at low energies. In this picture, strong gravitational effects –which could not be described accurately by Newton’s law– may appear at short distances of the order of the compactification scale of the extra dimensions. If so, gravitons may propagate freely inside the space of extra dimensions, while all ordinary particles would live in the four dimensional world. Experimental searches for possible deviations from Newton’s inverse square law imply that such effects should be limited to below the sub-millimeter range. We note that this scenario can find a realization in the context of D–branes. Matter fields may live in a $`9`$ or $`3`$–brane, while gravitons can live in a larger dimensional bulk. Deviations from the gravitational law have been intensively studied also in the past. In Ref. the theoretical aspects of a gravitationally repulsive term in supergravity theories were investigated, while in Ref. string loop corrections which affect gravitational couplings were considered. In this letter we examine corrections to the gravitational force which are of particular importance in the case of experimental searches in the vicinity of the compactification radii. In the presence of $`n`$ compact spatial dimensions of radii $`R_{1,\mathrm{}n}`$, the fundamental scale $`M_X`$ of the theory for very short or very large distances can be estimated using the Gauss law. The approximate forms of the gravitational potential in the two limiting cases in the presence of $`n`$ compactified extra dimensions are given as follows. Assuming for simplicity that all compactification radii are the same, $`R_i=R`$, inside the volume of the extra dimensions, i.e. when $`r\ll R`$, the Gauss law gives $`V(r)`$ $`\sim `$ $`{\displaystyle \frac{1}{M_{P_{n+4}}^{n+2}}}{\displaystyle \frac{1}{r^{n+1}}},r\ll R`$ (1) where $`M_{P_{n+4}}`$ is the Planck mass in $`(n+4)`$-dimensional space and is identified with the fundamental scale $`M_X`$.
For large distances compared to the mean compactification radius of the extra dimensions, i.e. when $`r\gg R`$, the $`n`$-dimensional compactified volume confines the gravitational flux and as a result the approximate potential is given by $`V(r)`$ $`\sim `$ $`{\displaystyle \frac{1}{M_{P_{n+4}}^{n+2}}}{\displaystyle \frac{1}{R^nr}},r\gg R`$ (2) The latter should be identical to the known $`4d`$ gravitational potential $`V(r)`$ $`=`$ $`{\displaystyle \frac{1}{M_{P_4}^2}}{\displaystyle \frac{1}{r}}`$ (3) The comparison of the last two formulae for distances far beyond the compactification scale $`M_C\sim \frac{1}{R}`$ gives an approximate relation between the latter and the Planck mass in $`4`$ and $`n+4`$ dimensions $`{\displaystyle \frac{1}{M_C}}\sim R\sim {\displaystyle \frac{1}{M_{P_{n+4}}}}\left({\displaystyle \frac{M_{P_4}}{M_{P_{n+4}}}}\right)^{2/n}`$ (4) For distances comparable to the compactification scale, corrections are expected to modify the above formulae. In what follows, we will present some analytic results for the cases of $`n=1`$ and $`n=2`$ compactified dimensions. We will see that some important modifications of the above formulae will show up in both cases. In particular, inside the compactification circle, i.e., $`r<R`$, the first sub-dominant term will be shown to have a power dependence on the ratio $`r/R`$, while at large distances the potential has a Yukawa type correction, proportional to $`e^{-r/R}/r`$. We will solve the Laplace equation in $`(n+3)`$ spatial dimensions where $`n`$ of them are compactified on a torus with radius $`R`$. Assume the coordinates $`x_{1,2,3}`$ for the 3–dimensional ordinary space and $`x_i^c,i=1,\mathrm{}n`$ for the compactified ones. Defining the angles $`\theta _{1,2,\mathrm{}n}`$ for the compactified dimensions with $`\theta _i\in [0,2\pi ]`$, we write them as $`x_i^c=R\theta _i`$, where we assumed for simplicity one common radius $`R`$. The Laplace equation may be written as follows $`\stackrel{}{}^2\mathrm{\Phi }`$ $`=`$ $`-\delta ^3(\stackrel{}{x}-\stackrel{}{y}){\displaystyle \frac{1}{R^n}}\delta ^n(\stackrel{}{\theta }-\stackrel{}{\theta }_0)`$ (5) where the $`\delta `$–functions on the right-hand side (RHS) are given as usual by $`\delta ^3(\stackrel{}{x}-\stackrel{}{y})`$ $`=`$ $`{\displaystyle \frac{1}{(2\pi )^3}}{\displaystyle d^3ke^{ı\stackrel{}{k}(\stackrel{}{x}-\stackrel{}{y})}}`$ $`\delta ^n(\stackrel{}{x}^c-\stackrel{}{x}_0^c)`$ $`=`$ $`{\displaystyle \frac{1}{(2\pi R)^n}}{\displaystyle \sum_{\stackrel{}{m}}}e^{ı\stackrel{}{m}(\stackrel{}{\theta }^c-\stackrel{}{\theta }_0^c)}`$ and the sums extend from $`-\infty `$ to $`+\infty `$ for all indices $`m_{1,2\mathrm{}n}`$. Using the Fourier transform and the representation $`1/X=_0^{\mathrm{}}𝑑se^{-sX}`$, one finds $`\mathrm{\Phi }(r,q)`$ $`=`$ $`{\displaystyle \frac{1}{(2\pi )^{n+3}}}{\displaystyle \frac{1}{R^n}}{\displaystyle \sum_{\stackrel{}{m}}}{\displaystyle d^3k\left\{e^{ı\stackrel{}{k}\stackrel{}{r}+ı\stackrel{}{m}\stackrel{}{q}}_0^{\mathrm{}}𝑑se^{-s[k^2+(\frac{\stackrel{}{m}}{R})^2]}\right\}}`$ (6) where for simplicity we have denoted $`\stackrel{}{r}=\stackrel{}{x}-\stackrel{}{y}`$ and $`\stackrel{}{q}=\stackrel{}{\theta }-\stackrel{}{\theta _0}`$. In the integrand of (6), the summation is taken over the infinite tower of KK–excitations in all the additional space dimensions, $`\stackrel{}{m}=(m_1,\mathrm{}m_n)`$. It is easy now to perform the integration with respect to $`\stackrel{}{k}`$.
The result is $`\mathrm{\Phi }(r,q)`$ $`=`$ $`{\displaystyle \frac{1}{(4\pi )^{3/2}}}{\displaystyle \frac{1}{(2\pi R)^n}}{\displaystyle _0^{\mathrm{}}}𝑑ss^{-3/2}e^{-\frac{r^2}{4s}}{\displaystyle \sum_{\stackrel{}{m}}}e^{ı\stackrel{}{m}\stackrel{}{q}-s(\frac{\stackrel{}{m}}{R})^2}`$ (7) In the above summations, $`\stackrel{}{m}^2=m_1^2+\mathrm{}+m_n^2`$ and $`\stackrel{}{m}\stackrel{}{q}=m_1q_1+\mathrm{}+m_nq_n`$ is the inner product over the $`n`$-dimensional compactified space. The above result can also be written in terms of a product of $`\theta `$ functions as follows $`\mathrm{\Phi }(r,q)`$ $`=`$ $`{\displaystyle \frac{1}{(4\pi )^{3/2}}}{\displaystyle \frac{1}{(2\pi R)^n}}{\displaystyle _0^{\mathrm{}}}𝑑ss^{-3/2}e^{-\frac{\stackrel{}{r}^2}{4s}}{\displaystyle \underset{j=1}{\overset{n}{}}}\theta _3({\displaystyle \frac{q_j}{2\pi }},ı{\displaystyle \frac{s}{\pi R^2}})`$ (8) where $`\theta _3(\nu ,\tau )`$ $`=`$ $`{\displaystyle \sum_{n=-\infty }^{\infty }}p^{\frac{n^2}{2}}z^n`$ (9) with $`p=e^{2ı\pi \tau }`$ and $`z=e^{2ı\pi \nu }`$. Performing the integral we obtain $`\mathrm{\Phi }(r,q)`$ $`=`$ $`{\displaystyle \frac{1}{4\pi }}{\displaystyle \frac{1}{(2\pi R)^n}}{\displaystyle \frac{1}{r}}\left\{1+2{\displaystyle \sum_{\stackrel{}{m}}^{\infty }}e^{-|\stackrel{}{m}|\frac{r}{R}}\mathrm{cos}(\stackrel{}{m}\stackrel{}{q})\right\}`$ (10) In the particular case of one extra dimension, $`n=1`$, we may obtain an exact result for the above integral. We first perform the integration in (7) to obtain $`\mathrm{\Phi }(r,q)`$ $`=`$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \frac{1}{R}}{\displaystyle \frac{1}{r}}\left\{1+2{\displaystyle \sum_{m=1}^{\infty }}e^{-m\frac{r}{R}}\mathrm{cos}(mq)\right\}`$ (11) Performing the sum in this formula one gets the final expression for $`n=1`$. Suppressing an overall numerical factor, we have the following form for the potential $`V_{n=1}(r)`$ $`\sim `$ $`{\displaystyle \frac{1}{M_{P_5}^3}}\left(1+2{\displaystyle \frac{e^{\frac{r}{R}}\mathrm{cos}q-1}{e^{2\frac{r}{R}}-2e^{\frac{r}{R}}\mathrm{cos}q+1}}\right){\displaystyle \frac{1}{Rr}}`$ (12) The dependence on the distance $`r`$ in this formula is exact and valid for any value of $`r`$. For fixed $`r`$, its maximum value is obtained when $`q=0`$, while for fixed $`q`$ the maxima are along the path determined by the equation $`r=R\mathrm{log}(1\pm \mathrm{sin}q)`$. The resulting potential as a function of $`r`$ and $`q=\theta -\theta _0`$ is plotted in figure 1. In order to compare with the approximate formulae of the potential given in the introduction, we now take the limit $`q=0`$ in (12), which gives $`V`$ $`=`$ $`{\displaystyle \frac{1}{M_{P_5}^3}}{\displaystyle \frac{e^{\frac{r}{R}}+1}{e^{\frac{r}{R}}-1}}{\displaystyle \frac{1}{Rr}}`$ (13) For $`r\ll R`$ the formula becomes $`V_{n=1}\simeq {\displaystyle \frac{1}{M_{P_5}^3}}{\displaystyle \frac{2}{r^2}}\left(1+{\displaystyle \frac{1}{12}}{\displaystyle \frac{r^2}{R^2}}\right)`$ (14) This formula, which is valid for small $`r`$, differs by a factor of 2 from the approximation (1). For $`r\gg R`$ we obtain an exponential correction of the form $`V_{n=1}`$ $`\simeq `$ $`{\displaystyle \frac{1}{M_{P_5}^3}}{\displaystyle \frac{1}{Rr}}\left(1+2e^{-\frac{r}{R}}\right)`$ (15) $`\simeq `$ $`{\displaystyle \frac{M_C}{M_{P_5}^3}}{\displaystyle \frac{1}{r}}\left(1+2e^{-M_Cr}\right)`$ which is a Yukawa type correction valid for large distances compared to the compactification radius.
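A quick numerical cross-check (ours) that the exact $`n=1`$ potential of Eq. (13) interpolates between the limits (14) and (15); the $`1/M_{P_5}^3`$ prefactor is dropped and distances are in units of $`R=1`$:

```python
import math

def v_exact(r):
    """Eq. (13) with R = 1 and the 1/M^3 prefactor dropped."""
    return (math.exp(r) + 1) / ((math.exp(r) - 1) * r)

def v_short(r):
    """Eq. (14), valid for r << R."""
    return (2 / r**2) * (1 + r**2 / 12)

def v_long(r):
    """Eq. (15), valid for r >> R."""
    return (1 + 2 * math.exp(-r)) / r

for r in (0.01, 0.1, 1.0, 3.0, 10.0):
    print(f"r/R={r:5.2f}  exact={v_exact(r):10.4f}  "
          f"short={v_short(r):10.4f}  long={v_long(r):10.4f}")
```

The short-distance form agrees to better than a part in $`10^4`$ at $`r/R=0.01`$, and the Yukawa form takes over for $`r/R\gtrsim 3`$.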
The approximation used gives us the chance to compare directly the above formula with the usual parametrization of long-range forces of gravitational strength in the literature $`V(r)\propto {\displaystyle \frac{1}{r}}(1+\alpha e^{-\frac{r}{\lambda }}).`$ (16) Comparing the two formulae, we have a definite prediction for the strength $`\alpha `$ of Yukawa type gravitational corrections in the case of one extra compact dimension, which is $`\alpha =2`$. Using the $`\alpha `$–$`\lambda `$ plot of Ref. , which gives the experimentally determined region, we conclude that the allowed radius has an upper bound of the order $`\lambda \sim R\sim 1\mathrm{mm}`$. Next, let us return to the approximate formulae in (1, 3), which can be written in a single expression as $`V\sim {\displaystyle \frac{1}{M_{P_5}^3}}\left({\displaystyle \frac{1}{r^2}}\theta (R-r)+{\displaystyle \frac{1}{rR}}\theta (r-R)\right).`$ (17) The formula (17) is plotted in figure 2 versus the exact expression (13). The plot shows that the two expressions coincide only for $`r\gg R`$. For distances $`r\sim R`$ and $`r<R`$ there exist significant deviations which might lead to interesting corrections in calculating various effects in physical processes. For more than one compact dimension ($`n>1`$), we will work out approximate forms of the potential. As already stated, the approximations are straightforward in the cases where the radii of the extra compactified dimensions are either very big or enormously small compared to the distance at which the potential is estimated, being those obtained from Gauss’ law in the ‘spherically’ symmetric case. At relatively large distances, $`r>R`$, we may also keep the first two terms of the series expansion in (7), to obtain the result $`\mathrm{\Phi }(r,R)`$ $`=`$ $`{\displaystyle \frac{1}{4\pi }}{\displaystyle \frac{1}{r}}{\displaystyle \frac{1}{(2\pi R)^n}}\left(1+2ne^{-r/R}\right)`$ (18) which is a straightforward generalization of the approximation (15) for arbitrary $`n`$. The other interesting case, which may have particular importance for the experimental verification of strongly coupled gravity at the TeV scale, is when the distance is comparable with the compactification radius. When the experimental measurement is taken at distances smaller than the compactification radius of the extra dimensions, $`r\lesssim R`$, the behavior of the infinite sum is not manifest since an infinite number of terms may contribute. Then, the most effective tool to extract the asymptotic behaviour of the potential in the transition region where $`R`$ becomes effectively large is Jacobi’s transformation of theta functions $`1+2{\displaystyle \sum_{m=1}^{\infty }}e^{-m^2\ell ^2}\mathrm{cos}(2\pi m\ell z)`$ $`=`$ $`{\displaystyle \frac{\sqrt{\pi }}{\ell }}e^{-\pi ^2z^2}\left(1+2{\displaystyle \sum_{m=1}^{\infty }}e^{-m^2\pi ^2/\ell ^2}\mathrm{cosh}{\displaystyle \frac{2\pi ^2mz}{\ell }}\right).`$ (19) Substitution of the above formula in (7) gives $`\mathrm{\Phi }(r,q)`$ $`=`$ $`{\displaystyle \frac{1}{(2\sqrt{\pi })^{n+3}}}{\displaystyle _0^{\mathrm{}}}𝑑ss^{-\frac{n+3}{2}}e^{-r^2/4s}`$ (20) $`\times `$ $`{\displaystyle \underset{j=1}{\overset{n}{}}}e^{-q_j^2R^2/4s}\left(1+2{\displaystyle \sum_{m_j=1}^{\infty }}e^{-(m_j\pi R)^2/s}\mathrm{cosh}{\displaystyle \frac{m_jq_j\pi R^2}{s}}\right)`$ Now, for $`R>r`$ the exponentials in the sum converge rapidly and a certain number of terms in the product may give a good approximation. We are interested in the case of two extra dimensions.
Taking the case of zero angles, i.e. $`q_j=\theta _j-\theta _{0j}=0`$ for all $`j`$’s, and $`n=2`$, we can split (20) into three integrals which can be evaluated. The results are $`I_0`$ $`=`$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \frac{1}{r^3}}`$ (21) $`I_1`$ $`=`$ $`{\displaystyle \frac{4}{8\pi ^2}}{\displaystyle \sum_{k=1}^{\infty }}{\displaystyle \frac{1}{(r^2+4\pi ^2k^2R^2)^{3/2}}}`$ (22) $`I_2`$ $`=`$ $`{\displaystyle \frac{4}{8\pi ^2}}{\displaystyle \sum_{k=1}^{\infty }}{\displaystyle \sum_{\ell =1}^{\infty }}{\displaystyle \frac{1}{(r^2+4\pi ^2(k^2+\ell ^2)R^2)^{3/2}}}`$ (23) We note that the number 4 multiplying the corrections is the product of the factor 2 in front of the sum in the integral (20) times the number of dimensions $`n=2`$. Defining the parameter $`\rho =r/(2\pi R)`$, for $`\rho <1`$ we may expand to obtain $`I_1`$ $`\simeq `$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \frac{4}{(2\pi R)^3}}\left(\zeta (3)-{\displaystyle \frac{3}{2}}\zeta (5)\rho ^2\right)`$ (24) $`I_2`$ $`\simeq `$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \frac{4}{(2\pi R)^3}}\left(\zeta _2(3)-{\displaystyle \frac{3}{2}}\zeta _2(5)\rho ^2\right)`$ (25) where in the above expressions $`\zeta (\ell )`$ is the Riemann zeta function and we have introduced the notation $`\zeta _2(\ell )=\sum_{k,m=1}^{\infty }(k^2+m^2)^{-\ell /2}`$, so that, e.g., $`\zeta _2(3)`$ matches the expansion of the summand in (23). Thus, we obtain an approximation for the corrections $`V(r)`$ $`\simeq `$ $`{\displaystyle \frac{1}{r^3}}+4{\displaystyle \frac{2.24}{(2\pi R)^3}}`$ (26) An estimation of the correction terms may also be given in the limiting case $`\rho \sim 1`$. Putting $`\rho =1`$ and performing the sums we obtain $`V(r)`$ $`\simeq `$ $`{\displaystyle \frac{1}{r^3}}+4{\displaystyle \frac{1.32}{(2\pi R)^3}}`$ (27) The general result for $`n=2`$ can be written as a double convergent sum as follows $`\mathrm{\Phi }_{n=2}`$ $`=`$ $`{\displaystyle \frac{1}{8\pi ^2}}{\displaystyle \frac{1}{r^3}}\left[1+4\rho ^3{\displaystyle \sum_{k=0}^{\infty }}{\displaystyle \sum_{l=1}^{\infty }}{\displaystyle \frac{1}{\left(\rho ^2+(k^2+l^2)\right)^{\frac{3}{2}}}}\right]`$ (28) The double sum also takes into account the degeneracy of a particular KK-contribution. Clearly, the sum of the squares of the two integers, $`k^2+l^2=N^2`$, which appears in the denominator is related to the degeneracy. The generalization of the above result to higher dimensions is straightforward. Here, we have restricted ourselves to examining the corrections to Newtonian gravity due to the possible existence of $`n=1`$ or $`n=2`$ extra space-time dimensions. We have succeeded in obtaining useful exact forms of the potential for the case $`n=1`$. In the cases where $`n>1`$ the long range corrections can be approximated by a Yukawa type interaction, and the potential is written $$V_{r>R}\sim \frac{1}{R^nr}(1+2ne^{-r/R})$$ where $`r`$ is the distance and $`R`$ a common compactification radius of the $`n`$ extra dimensions. When the compact radius is effectively large, the modifications are expressed as powers of the ratio $`r/R`$, $$V_{r<R}\sim \frac{1}{r^{n+1}}\left(1+2nc_n\left(\frac{r}{R}\right)^{n+1}\right)$$ where $`c_n`$ is a calculable coefficient which for the cases $`n=1,2`$ is given by the expressions (14) and (28) in this work. As this work was being written, we received Ref. , where a similar analysis is presented. FIGURE CAPTIONS FIGURE 1: The potential $`V(r)`$ for one extra dimension as a function of the distance-to-compactification-radius ratio $`r/R`$ and the angle $`q=\theta -\theta _0`$.
FIGURE 2: Comparison of the exact (upper curve) and approximate (lower curve) forms of the $`n=1`$ potential.
## 1 Introduction Among the most plausible candidates for the dark matter in the Universe are Weakly Interacting Massive Particles (WIMPs), of which the supersymmetric neutralino is a favourite candidate (see e.g. Jungman et al., 1996 for a review). We will here consider the antiproton flux from neutralino dark matter annihilating in the galactic halo and we will also investigate the prospects of seeing such a signal above the conventional background. As antimatter seems not to exist in large quantities in the observable Universe, including our own Galaxy, any contribution to the cosmic ray generated antimatter flux (besides antiprotons also positrons) from exotic sources may in principle be a good signature for such sources. This issue has recently come into new focus thanks to upcoming space experiments like Pamela (Adriani et al., 1995) and Ams (Ahlen et al., 1994) with increased sensitivity to the cosmic antimatter flux. ## 2 Definition of the supersymmetric model We work in the minimal supersymmetric standard model with seven phenomenological parameters and have generated about $`10^5`$ models by scanning this parameter space (for details, see Bergström et al., 1999). For each generated model, we check if it is excluded by recent accelerator constraints, of which the most important ones are the LEP bounds (Carr, 1998) on the lightest chargino mass (about 85–91 GeV) and the lightest Higgs boson mass $`m_{H_2^0}`$ (which ranges from 72.2 to 88.0 GeV), and the constraints from $`b\to s\gamma `$ (Ammar et al., 1993 and Alam et al., 1995). For each model allowed by current accelerator constraints we calculate the relic density of neutralinos $`\mathrm{\Omega }_\chi h^2`$, where the relic density calculation is done as described in Edsjö and Gondolo (1997), i.e. including so-called coannihilations. We will only be interested in models where neutralinos can be a major part of the dark matter in the Universe, so we restrict ourselves to relic densities in the range $`0.025<\mathrm{\Omega }_\chi h^2<1`$. ## 3 Antiproton production by neutralino annihilation Neutralinos are Majorana fermions and will annihilate with each other in the halo, producing leptons, quarks, gluons, gauge bosons and Higgs bosons. The quarks, gauge bosons and Higgs bosons will decay and/or form jets that will give rise to antiprotons (and antineutrons, which decay shortly to antiprotons). The hadronization for all final states (including gluons) is simulated with the well-known particle physics Lund Monte Carlo program Pythia 6.115 (Sjöstrand, 1994). To calculate the source function of $`\overline{p}`$ from neutralino annihilation we also need to specify the halo profile. We will here focus on the modified isothermal distribution with a local halo density of 0.3 GeV/cm³. ## 4 Propagation model and solar modulation We choose to describe the propagation of cosmic rays in the Galaxy by a transport equation of the diffusion type as written by Ginzburg and Syrovatskii (1964) (see also Berezinskii et al., 1990; Gaisser, 1990). The propagation region is assumed to have a cylindrical symmetry: the Galaxy is split into two parts, a disk of radius $`R_h`$ and height $`2h_g`$, where most of the interstellar gas is confined, and a halo of height $`2h_h`$ and the same radius. We assume that the diffusion coefficient is isotropic with possibly two different values in the disk and in the halo, reflecting the fact that in the disk there may be a larger random component of the magnetic fields.
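Schematically, the scan just described amounts to a filter over the seven-parameter space. The sketch below is ours, not the authors’ code: the function `spectrum_and_relic` is a random placeholder standing in for the real MSSM spectrum and relic-density calculation, and only the filtering logic and the cut values quoted in the text (taken at the conservative edges of the quoted ranges) are meaningful.

```python
import random

def spectrum_and_relic(params, rng):
    """Hypothetical stand-in for the real spectrum/relic calculation."""
    return {"m_chargino": rng.uniform(50, 500),    # GeV
            "m_higgs": rng.uniform(60, 130),       # GeV
            "bsg_ok": rng.random() > 0.2,          # b -> s gamma constraint
            "omega_h2": 10 ** rng.uniform(-3, 1)}  # relic density

rng = random.Random(1)
accepted = 0
for _ in range(100_000):                 # scan over the 7 parameters
    params = tuple(rng.uniform(-1, 1) for _ in range(7))
    m = spectrum_and_relic(params, rng)
    if (m["m_chargino"] > 91 and m["m_higgs"] > 88   # LEP bounds, upper edges
            and m["bsg_ok"] and 0.025 < m["omega_h2"] < 1):
        accepted += 1
print(accepted, "of 100000 toy models pass all cuts")
```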
For the diffusion coefficient, we assume the same kind of rigidity dependence as in Chardonnet et al. (1996) and Bottino et al. (1998), i.e. that $`D(R)=D^0\left(1+R/R_0\right)^{0.6}`$. As a boundary condition we assume that the cosmic rays can escape freely at the border of the propagation region. For details about our propagation model and how the solutions are obtained, see Bergström et al. (1999). For the solar modulation we use the analytical force-field approximation by Gleeson & Axford (1967; 1968) for a spherically symmetric model. To compare with the two sets of Bess measurements, which are both near solar minimum, we choose the modulation parameter $`\varphi _F=500`$ MV. ## 5 Background estimates Secondary antiprotons are produced in cosmic ray collisions with the interstellar gas. Normally, only $`pp`$ interactions are included, which gives rise to a ‘window’ at low energies with low fluxes. However, we include $`pHe`$ interactions as well as $`pp`$ interactions, and also energy losses during propagation (with the full energy distribution). Both of these processes tend to enhance the antiproton flux at low energies, and in Fig. 1 (a) we show the background flux of antiprotons and the contributions from $`pHe`$ interactions and energy losses. We clearly see that the low-energy window has been filled in. In Fig. 1 (b) we show the solar modulated curve compared with recent Bess measurements. We see that the data are well described by this conventional source alone. ## 6 Signal from neutralino annihilation In Fig. 2 (a) we show the solar modulated fluxes versus the neutralino mass. We see that there are many models with fluxes above the Bess measurements. However, this conclusion depends strongly on which range one allows for the neutralino relic density. In Fig. 2 (a) we have coded the symbols according to the relic density interval. As can be seen, essentially all models which are in the Bess measurement band have a relic density $`\mathrm{\Omega }_\chi h^2<0.1`$. If we instead require $`0.1\lesssim \mathrm{\Omega }_\chi h^2\lesssim 0.2`$, the rates are never higher than the measured flux. This points to a weakness of this indirect method of detecting supersymmetric dark matter: once the predicted rate is lower than the presently measured flux, the sensitivity to an exotic component is lost. This is because of the lack of a distinct signature which could differentiate between the signal and the background. We are now interested in finding out if there are any special features of the antiproton spectra from neutralino annihilation which distinguish these spectra from the background. We then ask ourselves if there is an optimal energy at which $`\mathrm{\Phi }_{\mathrm{signal}}/\mathrm{\Phi }_{\mathrm{background}}`$ has a maximum. In Fig. 2 (b) we show the interstellar flux at these optimal energies, $`T_{\mathrm{opt}}`$, versus $`T_{\mathrm{opt}}`$. We have two classes of models: one class which has the highest signal to noise below 0.5 GeV (i.e. inaccessible in the solar system due to the solar modulation) and one which has the highest signal to noise at 10–30 GeV. For the first class of models, we note that there exists a proposal for an extra-solar space probe (Wells et al., 1998) which would avoid the solar modulation problem and is thus an attractive possibility for this field. However, these models have high rates in the range 0.5–1 GeV as well, even though it would be even more advantageous to go to lower energies.
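As an illustration of the force-field approximation used above, the sketch below (ours) applies the standard Gleeson & Axford mapping with $`\varphi _F=500`$ MV to a toy interstellar spectrum; the spectral shape is made up and is not the background computed in the paper.

```python
import math

M = 0.938      # (anti)proton mass in GeV
PHI = 0.5      # modulation parameter in GV (500 MV), charge |Z| = 1

def j_is(T):
    """Toy interstellar flux vs kinetic energy T (GeV); arbitrary shape."""
    return T ** 1.5 / (1 + T) ** 4.5

def j_toa(T):
    """Top-of-atmosphere flux in the force-field approximation."""
    T_is = T + PHI                   # energy lost entering the heliosphere
    p2 = T * (T + 2 * M)             # momentum squared at Earth
    p2_is = T_is * (T_is + 2 * M)    # momentum squared in interstellar space
    return j_is(T_is) * p2 / p2_is

for T in (0.2, 0.5, 1.0, 5.0, 10.0):
    print(f"T = {T:5.2f} GeV   IS = {j_is(T):.3e}   modulated = {j_toa(T):.3e}")
```

The suppression is strongest below about 1 GeV, which is why the low-energy window is inaccessible near Earth.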
The second class of models is much less affected by solar modulation and also gives reasonably high fluxes. In Fig. 2 (c) we show some examples of spectra. They show maxima occurring at lower energies than for our canonical background. At higher energies, the trend is that the slope of the flux decreases as the neutralino mass increases. Model number 3 corresponds to a heavy neutralino and its spectrum is significantly less steep than the background. If such a spectrum is enhanced, for instance by changing the dark matter density distribution, we would get a bump in the spectrum above 10 GeV (Ullio, 1999). Finally, in Fig. 3 we show an example of a hypothetical composite spectrum which consists of our canonical background flux decreased by 24 % (obtained e.g. by decreasing the primary proton flux by $`1\sigma `$) and the signal for model 5 in Fig. 2 (c). We can obtain a nice fit to the Bess data but, as noted before, there are no special features in the spectrum that allow us to distinguish between this case and the case of no signal. ## 7 Discussion and conclusions We have seen that there is room, but no need, for a signal in the measured antiproton fluxes. We have also seen that the optimal energy at which to search for antiprotons is either below the solar modulation cut-off or at higher energies than currently measured. However, there are no special spectral features in the signal spectra compared to the background, unless the signal is enhanced and one looks at higher energies (above 10 GeV). We have stressed the somewhat disappointing fact that, since the present measurements by the Bess collaboration already exclude a much higher $`\overline{p}`$ flux at low energies than what is predicted through standard cosmic-ray production processes, an exotic signal could be drowned in this background. Even if it is not, the similar shape of the signal and background spectra will make it extremely hard to claim an exotic detection even with a precision measurement, given the large uncertainties in the predicted background flux (at least a factor of a few, up to ten in a conservative approach). ## Acknowledgements We thank Mirko Boezio, Alessandro Bottino and collaborators, Per Carlson and Tom Gaisser for useful discussions, Paolo Gondolo for collaboration on many of the numerical routines used in the supersymmetry part and Markku Jääskeläinen for discussions at an early stage of this project. L.B. was supported by the Swedish Natural Science Research Council (NFR). References Adriani, O. et al. 1995, Proc. of 24th ICRC, Rome, 3, 591. Ahlen, S. et al. (AMS Collaboration) 1994, Nucl. Instrum. Meth., A350, 351. Alam, M.S. et al. (Cleo Collaboration) 1995, Phys. Rev. Lett., 74, 2885. Ammar, R. et al. (Cleo Collaboration) 1993, Phys. Rev. Lett., 71, 674. Berezinskii, V.S., Bulanov, S., Dogiel, V., Ginzburg, V. & Ptuskin, V. 1990, Astrophysics of cosmic rays, North-Holland, Amsterdam. Bergström, L., Edsjö, J. & Ullio, P. 1999, astro-ph/9902012. Bottino, A., Donato, F., Fornengo, N. & Salati, P. 1998, Phys. Rev., D58, 123503. Carr, J. (ALEPH Collaboration) 1998, talk given March 31, 1998, http://alephwww.cern.ch/ALPUB/seminar/carrlepc98/index.html; preprint ALEPH 98-029, 1998 winter conferences, http://alephwww.cern.ch/ALPUB/oldconf/oldconf.html. Chardonnet, P., Mignola, G., Salati, P. & Taillet, R. 1996, Phys. Lett., B384, 161. Edsjö, J. 1997, PhD Thesis, Uppsala University, hep-ph/9704384. Edsjö, J. & Gondolo, P. 1997, Phys. Rev., D56, 1879. Gaisser, T.K.
1990, Cosmic rays and particle physics, Cambridge University Press, Cambridge. Ginzburg, V.L. & Syrovatskii, S.I. 1964, The origin of cosmic rays, Pergamon Press, London. Gleeson, L.J. & Axford, W.I. 1967, ApJ, 149, L115. Gleeson, L.J. & Axford, W.I. 1968, ApJ, 154, 1011. Jungman, G., Kamionkowski, M. & Griest, K. 1996, Phys. Rep., 267, 195. Matsunaga, H. et al. 1998, Phys. Rev. Lett., 81, 4052. Orito, S. 1998, talk given at the 29th International Conference on High-Energy Physics, Vancouver, 1998. Ullio, P. 1999, astro-ph/9904086. Wells, J.D., Moiseev, A. & Ormes, J.F. 1998, preprint CERN-TH/98-362 (astro-ph/9811325). For a more detailed list of references, see Bergström, L., Edsjö, J. & Ullio, P., 1999
# Development of a Laser Wire Beam Profile Monitor (I) ## 1 Introduction Development of high energy $`e^+e^-`$ linear colliders is of crucial importance for the future of particle physics. Various R&D efforts are now in progress to achieve higher energy and higher luminosity. Development of a high gradient RF cavity is essential to attain high energy, while realization of a low emittance beam as well as a good beam monitor is important for high luminosity. At KEK, the Accelerator Test Facility (ATF), consisting of a 1.54 GeV linac and a damping ring, has been built to study generation and manipulation of an ultra-low emittance electron beam. In accordance with this purpose, we have started developing a beam profile monitor to measure beam emittance in the ring. Table 1 shows the design parameters of the ring relevant to our monitor . Since the expected beam size is about $`10\mu `$m vertically and about $`60\mu `$m horizontally, a beam profile monitor with better than $`10\mu `$m resolution is required. A wire scanner made of tungsten or carbon is one candidate . However, a thin wire (for example, $`10\mu `$m in diameter, as needed to achieve the desired resolution) is expected to be destroyed by thermal stress caused by interactions with the intense electron beam inside the ring. The wire material would also influence the beam condition, an undesirable property for a monitor. Use of a laser beam, instead of a material wire, has been proposed . For example, a monitor using an intense pulsed laser was tested successfully and achieved a resolution in the sub-micron range . This monitor, however, works at a repetition rate of 10 Hz and is not best suited to the quasi-continuous beam in the ATF damping ring; it would either make the measurement time very long or necessitate a very powerful laser. A new beam profile monitor is designed using a CW laser and an optical cavity. A laser beam with a very thin waist is realized by employing a cavity of nearly concentric mirror configuration, while the intensity is amplified by adjusting the cavity length to a Fabry-Perot resonance condition. This monitor, referred to as a laser wire beam profile monitor, operates as follows. An electron interacts with the laser light and emits an energetic photon into the original electron beam direction (Compton scattering). The counting rate of scattered photons is observed at various wire positions; its shape then gives the beam profile in one direction. The monitor should be able to withstand an intense electron beam without interfering with it. In this report, we describe a conceptual design of the beam profile monitor and some experimental results of a basic study with a test cavity . The main aim of the study is to establish methods to measure the size of the laser beam waist and the power enhancement factor. The paper is organized as follows: Sec. 2 deals with a theoretical approach to the design of the new monitor. The experimental test setup and the results are shown in Sec. 3. A summary and discussion are given in Sec. 4. ## 2 Theoretical approach In this section, we consider the Compton process and an optical cavity. We first present the relation between the energy and angle of the photon emitted by the Compton process. We then calculate its cross section and an expected counting rate. Several parameters, such as intensities and sizes of the electron and laser beams, must be assumed to do this calculation. We employ a set of parameters listed in the third column of Table 1 for the electron beam.
As to the laser beam, a 10 mW He-Ne laser and an optical cavity with a beam waist of $`10\mu `$m and a power enhancement of 100 are assumed. These cavity parameters are our present goals. ### 2.1 Compton scattering #### Kinematics Fig.1 shows the Compton scattering kinematics in the laboratory system. Here a laser light with an energy $`k_0`$ is assumed to be injected perpendicular to an electron beam with an energy $`E`$. The energy of the scattered photon $`k`$ is given by $`k={\displaystyle \frac{k_0E}{E+k_0-\sqrt{E^2-m_e^2}\mathrm{cos}\theta _c}}`$ where $`m_e`$ denotes the electron mass. In principle, $`k`$ depends upon both the polar angle $`\theta _c`$ and the azimuthal angle $`\varphi _c`$ (not shown in Fig.1). In practice, however, $`k`$ depends only on $`\theta _c`$ (hereafter referred to as the scattering angle), because the incident electron energy $`E`$ is much larger than the laser photon energy $`k_0`$ ($`E\gg k_0`$). The energy $`k`$ is plotted in Fig.2 as a function of $`\theta _c`$ for a He-Ne laser ($`\lambda =633`$nm). We note that energetic photons are emitted in the direction of the incident electron; for example, photons with 10 MeV or larger are emitted within $`0.53`$ mrad. #### Cross Section The cross section of the Compton process is given by the Klein-Nishina formula when the initial electron is at rest. We made the appropriate Lorentz boost to calculate the cross section in the laboratory frame. The result is shown in Fig.3. We note that the cross section is sharply peaked at $`\theta _c=0`$: for example, the partial cross section with $`\theta _c<0.53`$ mrad ($`k>10`$ MeV) amounts to 0.44 barn, which should be compared with the total cross section of 0.65 barn. Evidently, to identify the Compton scattering unambiguously, it is best to detect the energetic photons emitted in the forward direction. For the sake of argument, we assume in the following that scattered photons with $`k>10`$ MeV can be detected with 100% efficiency. #### Counting rate Having determined the cross section of the Compton scattering, we now estimate the counting rate. Here, as stated before, we assume a 10 mW laser and an optical cavity with a beam waist of $`10\mu `$m and a power enhancement factor of 100. The counting rate is linear in the laser power and/or the enhancement factor, so that extrapolation to other values is straightforward. Taking all factors into account, we found the counting rate to be 3.2 kHz (horizontal measurement) and 28.8 kHz (vertical measurement) . Here the horizontal (vertical) measurement represents the case in which the electron’s horizontal (vertical) beam size is measured at the peak of a gaussian-like beam with a vertical (horizontal) laser wire. Since the vertical beam size is much smaller than the horizontal one, the counting rate is correspondingly larger. ### 2.2 Optical cavity In this section, we briefly summarize the theory of an optical cavity (resonator), which is necessary for discussions on the design and measurements of our test cavity. It can be derived from the Helmholtz equation that there exists a set of electromagnetic waves that have spherical wave fronts and Gaussian amplitudes. These waves are called Gaussian beams .
The electric field of the fundamental $`\mathrm{TEM}_{00}`$ mode is represented by $`E_0(x,y,z)=A{\displaystyle \frac{w_0}{w(z)}}\mathrm{exp}\left(-{\displaystyle \frac{x^2+y^2}{w^2(z)}}\right)\mathrm{exp}\left(-i{\displaystyle \frac{2\pi }{\lambda }}{\displaystyle \frac{x^2+y^2}{2R(z)}}\right)\mathrm{exp}\left(i\mathrm{\Phi }(z)\right),`$ with $`w(z)`$ $`=`$ $`w_0\sqrt{1+\left({\displaystyle \frac{z}{z_0}}\right)^2},`$ (1) $`R(z)`$ $`=`$ $`z+{\displaystyle \frac{z_0^2}{z}},`$ $`\mathrm{\Phi }(z)`$ $`=`$ $`\mathrm{arctan}\left({\displaystyle \frac{z}{z_0}}\right),`$ $`z_0`$ $`=`$ $`{\displaystyle \frac{\pi w_0^2}{\lambda }}`$ where $`\lambda `$ represents the wave length, $`w(z)`$ the beam spot size at the location $`z`$, $`R(z)`$ the curvature of the wave front, $`\mathrm{\Phi }(z)`$ the Guoy phase factor, and $`z_0`$ the Rayleigh length. The parameter $`w_0`$ is called the beam waist, and represents the smallest spot size, realized at $`z=0`$. The beam is well described by geometrical optics where $`|z|>z_0`$. Suppose two spherical mirrors with curvatures $`R_1`$ and $`R_2`$ are placed at $`z_1`$ and $`z_2`$, respectively. If the conditions $`R_1`$ $`=`$ $`R(z_1)=z_1+z_0^2/z_1`$ $`R_2`$ $`=`$ $`R(z_2)=z_2+z_0^2/z_2`$ are satisfied, then the Gaussian beam can be stably confined in the optical cavity formed by the two mirrors. It can be shown that there always exists a certain stable Gaussian beam for any curvatures $`R_1`$ and $`R_2`$ and any cavity length $`D`$ ($`D=z_2-z_1`$), if the stability condition $`0\le \left(1-{\displaystyle \frac{D}{R_1}}\right)\left(1-{\displaystyle \frac{D}{R_2}}\right)\le 1`$ is satisfied. In other words, the properties of the stable Gaussian beam are uniquely determined once $`R_1`$, $`R_2`$ and $`D`$ are given. In particular, the beam waist is represented by $`w_0^2={\displaystyle \frac{\lambda }{\pi }}{\displaystyle \frac{\sqrt{D(R_1-D)(R_2-D)(R_1+R_2-D)}}{|R_1+R_2-2D|}}={\displaystyle \frac{\lambda }{\pi }}{\displaystyle \frac{\sqrt{D(2R-D)}}{2}}`$ (2) where the latter equality holds for mirrors with equal curvatures, i.e. $`R=R_1=R_2`$. Fig.4 shows the beam waist $`w_0`$ as a function of $`D`$ for $`\lambda =633`$ nm and $`R=20`$ mm. (These values of $`\lambda `$ and $`R`$ are employed in the actual studies described in the next section.) It can be seen from the figure that, in order to realize a very thin beam waist, the cavity must be nearly concentric ($`D\simeq 2R`$). For example, a beam waist of 10 $`\mu `$m is realized when $`D=40\mathrm{mm}-24\mu \mathrm{m}`$. It should also be noted that the requirement on the setting accuracy of $`D`$ is very severe. A stable Gaussian beam can be produced by injecting an appropriate laser beam into a cavity. The spot and divergence of the input laser beam at the mirror must match those of the Gaussian beam in the cavity. Some of the laser light is transmitted to the other side of the cavity. The transmission ratio T is given by the Airy function, represented by $`T=1/\left[1+{\displaystyle \frac{4F^2}{\pi ^2}}\mathrm{cos}^2\left({\displaystyle \frac{2\pi D}{\lambda }}-\delta \right)\right]`$ (3) where $`\delta `$ is a phase shift which characterizes the reflection at the mirrors. The quantity $`F`$ is called the “finesse” and is given by $`F={\displaystyle \frac{\pi \sqrt{R_m}}{1-R_m}}`$ where $`R_m`$ is the reflectivity of the mirrors. (The two mirrors are assumed to be identical here.)
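The numbers quoted in this section are easy to verify; the short script below (ours) evaluates Eq. (2) for the nearly concentric test geometry, and the finesse and maximum average enhancement for the reflectivities mentioned in the text:

```python
import math

lam = 633e-9     # He-Ne wavelength (m)
R = 20e-3        # mirror curvature (m)

def waist(D):
    """Beam waist w0 from Eq. (2) for two mirrors of equal curvature R."""
    return math.sqrt((lam / math.pi) * math.sqrt(D * (2 * R - D)) / 2)

D = 2 * R - 24e-6                 # D = 40 mm - 24 um, as in the text
print(f"w0 = {waist(D) * 1e6:.1f} um")          # close to 10 um

for Rm in (0.96, 0.85, 0.98):
    F = math.pi * math.sqrt(Rm) / (1 - Rm)      # finesse
    P = (1 + Rm) / (1 - Rm)                     # max average enhancement
    print(f"Rm = {Rm:.2f}:  F = {F:5.1f},  P_max = {P:5.1f}")
```

This reproduces $`w_0\simeq 10\mu `$m, $`F\simeq 77`$ and $`19`$ for the two mirror sets, and $`\overline{P}\simeq 100`$ for $`R_m=98\%`$.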
A stable Gaussian beam can be produced by injecting an appropriate laser beam into a cavity. The spot and divergence of the input laser beam at the mirror must match those of the Gaussian beam in the cavity. Some of the laser light is transmitted to the other side of the cavity. The transmission ratio $`T`$ is given by the Airy function $`T=1/\left[1+\frac{4F^2}{\pi ^2}\mathrm{cos}^2\left(\frac{2\pi D}{\lambda }-\delta \right)\right]`$ (3) where $`\delta `$ is a phase shift which characterizes the reflection at the mirrors. The quantity $`F`$ is called "finesse" and is given by $`F=\frac{\pi \sqrt{R_m}}{1-R_m}`$ where $`R_m`$ is the reflectivity of the mirrors. (The two mirrors are assumed to be identical here.) Alternatively, the finesse $`F`$ may be expressed, from Eq.(3), by $`F=\frac{\mathrm{\Delta }D(peak\text{-}to\text{-}peak)}{\delta D(fwhm)}=\frac{\lambda /2}{\delta D(fwhm)}`$ where $`\mathrm{\Delta }D`$ is the difference in $`D`$ between two adjacent peaks, and $`\delta D`$ the full width at half maximum of a single peak. The peak-to-peak distance $`\mathrm{\Delta }D`$ is called the free spectral range, and is equal to half of the wave length $`\lambda `$. The power enhancement factor $`P`$ inside the cavity is given by $`P=\frac{T}{1-R_m}\left\{1+R_m-2\sqrt{R_m}\mathrm{sin}\left(\frac{\pi (z-D)}{\lambda }+\delta \right)\right\}.`$ It exhibits an interference pattern along the laser beam axis. Averaging the above formula over $`z`$, we obtain the average power enhancement factor $`\overline{P}`$: $`\overline{P}=\frac{1+R_m}{1-R_m}T\simeq \frac{2}{3}FT,`$ which takes its maximum value of $`\frac{1+R_m}{1-R_m}\simeq \frac{2}{3}F`$ when $`T=1`$. For example, we need a mirror with reflectivity $`R_m=98\%`$ in order to obtain a power enhancement factor of 100.
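As a numerical illustration of these two relations (a sketch; the mirror reflectivities are those appearing in the text):

```python
import numpy as np

def finesse(R_m):
    return np.pi * np.sqrt(R_m) / (1.0 - R_m)

def avg_enhancement(R_m, T=1.0):
    """Average enhancement (1+R_m)/(1-R_m) * T, i.e. roughly (2/3) F T."""
    return (1.0 + R_m) / (1.0 - R_m) * T

for R_m in (0.85, 0.96, 0.98):
    print(R_m, round(finesse(R_m)), round(avg_enhancement(R_m)))
# -> F ~ 19, 77, 156 and average enhancement ~ 12, 49, 99
```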
## 3 Experimental Studies with a Test Cavity

In this section, we describe our experimental studies with a test cavity. Their main purpose is to establish methods of measuring the relevant Gaussian beam parameters. Specifically, we would like to measure the beam waist $`w_0`$ and the average power enhancement factor $`\overline{P}`$. Since the beam waist is the most important parameter for our application, we examined several independent methods of measuring $`w_0`$.

### 3.1 Setup of the experiment

Fig.5 shows a schematic diagram of our test setup. It consists of a He-Ne laser ($`\lambda =633`$ nm), an isolator, an input lens system, a test cavity, and a photodiode detector. The laser provided a 1 mW beam with a diameter of 0.5 mm ($`1/e^2`$) and a full angular divergence of 1.6 mrad. The transverse spatial mode is $`\mathrm{TEM}_{00}`$. In order to prevent the reflected beam from reentering the laser cavity, an optical isolator consisting of a polarizer and a quarter-wave plate was placed at the laser exit. A set of concave and convex lenses formed an input lens system, which adjusted the laser beam to match the Gaussian mode characteristics of the cavity. We used two sets of spherical mirrors of equal curvature ($`R=20`$ mm) with different reflectivity. One set had a nominal reflectivity of $`96\%`$ and the other set $`85\%`$. The substrate was BK7 glass ($`n=1.519`$): its spherical surface was coated with multi-layer dielectric materials and the other, flat surface was polished. Thus the mirrors acted as concave lenses for the input and output light. These optical elements were installed on adjustable positioners. In particular, the downstream mirror was staged on a piezo translator to scan the mirror along the beam direction $`z`$. The piezo translator was controlled by a controller and was monitored by a strain gauge sensor with a 10 nm position resolution. We could set the piezo translator manually or scan it by an external voltage signal via the controller. A PIN photodiode was placed at the exit of the cavity. Its output signal was fed into a simple current-to-voltage amplifier whose output was in turn monitored by an oscilloscope. Unless otherwise noted, we tuned the cavity to the fundamental $`\mathrm{TEM}_{00}`$ mode.

### 3.2 Measurement of a power enhancement factor

We measured the finesse to evaluate the power enhancement factor. A function generator was used to produce a triangle signal at a frequency of $`\sim `$100 Hz. It was fed into the piezo controller. Its amplitude was set so that the cavity length $`D`$ spanned several free spectral ranges. A typical example of the detector output is shown in Fig.6. We expected the horizontal trace to be proportional to the change in the cavity length $`D`$, since the piezo translator was driven by a triangle signal. In reality, however, the relation was found to be non-linear due to a hysteresis effect of the piezo translator as well as a mechanical resonance effect of the mirror holder. We measured on the oscilloscope the full width of a single peak ($`\delta D(fwhm)`$) and the distances to the flanking peaks ($`\mathrm{\Delta }D`$); the latter were averaged to cancel out the non-linear effect. Assuming that the observed $`\mathrm{\Delta }D`$ was equal to $`\lambda /2`$, we found the finesse $`F`$ to be 70 (22) for the mirrors with $`R_m=96\%(85\%)`$. The uncertainty in the measurement, stemming mainly from the non-linear effect in $`D`$, was estimated to be less than 10%. The measured values should be compared with the expected values of $`F`$=77 (19). We note that, for our actual application, precise knowledge of the absolute intensity is unnecessary as long as it remains constant, since a beam profile would be measured as the relative shape of the scattered photons' counting rate.

### 3.3 Waist measurement by a shift-rotation method

In order to obtain a beam waist of $`\sim 10\mu `$m, the cavity length should be set very close to twice the mirror curvature ($`D\simeq 2R`$). The tolerance in $`D`$ is rather severe; for example, to realize 10% accuracy in the beam waist ($`\delta w_0/w_0=10\%`$) for $`w_0=10\mu `$m, the setting error in $`D`$ must be as small as $`\sim 10\mu `$m. This would be very difficult, if not impossible, to achieve, since construction of the cavity, especially the mounting of the glass mirrors, necessarily induces setting errors. Thus it is highly desirable to measure the cavity length $`D`$ with sufficient accuracy.

#### Principle of shift-rotation method

At first, we employed the following method (referred to as a shift-rotation method). Suppose that we have a cavity with unknown cavity length. Suppose also that the cavity is tuned to the fundamental $`\mathrm{TEM}_{00}`$ mode. Now we shift one of the mirrors laterally. Since the mirrors are both spherical, one can always realign it by rotating the whole cavity. A simple calculation shows that the displacement $`x_\theta `$ and rotation angle $`\theta `$ are related by $`\mathrm{tan}(\theta )=\frac{x_\theta }{2R-D}.`$ Because the actual mirror is made of concave glass, it acts as a lens. When this effect is taken into account, the formula above should be replaced by $`\mathrm{sin}(\theta )=n\mathrm{sin}\left\{\mathrm{arctan}\left(\frac{x_\theta }{2R-D}\right)\right\}`$ (4) where $`n`$ represents the refractive index of the glass substrate. By measuring $`x_\theta `$ and $`\theta `$, we can determine $`2R-D`$, and can calculate the beam waist $`w_0`$ with Eq.(2).
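Inverting Eq.(4) for the quantity $`2R-D`$ is a one-liner; a sketch (the measured numbers below are made up for illustration):

```python
import numpy as np

n = 1.519  # BK7 refractive index from the text

def two_R_minus_D(x_theta, theta):
    """Invert Eq.(4): 2R - D from a lateral shift and realignment angle."""
    return x_theta / np.tan(np.arcsin(np.sin(theta) / n))

# e.g. a 10 um shift realigned by rotating the cavity by 8.7 degrees
print(two_R_minus_D(10e-6, np.radians(8.7)) * 1e6, "um")  # -> ~100 um
```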
#### Results of the measurement

We measured several sets of $`x_\theta `$ and $`\theta `$ at a fixed cavity length $`D`$. We then changed the cavity length $`D`$ by moving the stage on which the downstream mirror was mounted. We could trace the amount of a change in $`D`$ because the stage was driven by a micrometer (attached to the stage in tandem with the piezo translator). Fig.7 (a) shows the results of such measurements. The ordinate is the quantity $`\frac{x_\theta }{\mathrm{tan}\left\{\mathrm{arcsin}\left(\mathrm{sin}\theta /n\right)\right\}}\simeq \frac{nx_\theta }{\theta }`$ (5) where the approximation holds for small $`\theta `$. The abscissa, labeled as $`2R-D`$, actually represents the $`z`$ position of the downstream mirror. For a fixed $`z`$ position, each data point represents the measurement for a different set of $`x_\theta `$ and $`\theta `$. It can be seen from Eq.(4) that the quantity of Eq.(5) equals $`2R-D`$. Thus, ideally all the data points should reside on a straight line with unit slope. The solid line in the figure represents such a fit. The intersection of the line with the abscissa is then relabeled as the origin. Having established the relation between $`2R-D`$ and the downstream mirror $`z`$ position, we calculated the beam waist $`w_0`$ with Eq.(2). We also calculated the root-mean-squared (rms) deviation of the data points from the fitted line at fixed $`2R-D`$. Each rms deviation was then converted to a relative error in $`w_0`$ and is shown in Fig.7 (b). These errors originated from both the $`x_\theta `$ and $`\theta `$ measurements: the former was due mainly to the reading error of the micrometer, and the latter to the uncertainty in determining the exact $`\theta `$ at which the realignment of the optical axis was restored. In this paper, we assign a common error of $`\delta w_0/w_0=0.05`$ to the measurements for $`w_0>20\mu `$m. We would like to postpone drawing any conclusion for $`w_0<20\mu `$m. (This judgment was made partially because the cavity was found to be relatively unstable for $`w_0<20\mu `$m.)

### 3.4 Measurement of the far field beam divergence

It can be seen from Eq.(1) that the beam width $`w(z)`$ in the far field is given by $`w(z)\simeq \frac{w_0}{z_0}z=\frac{\lambda }{\pi w_0}z`$. Thus a measurement of the output laser profiles gives information on the beam waist $`w_0`$. In particular, the beam intensity, when measured along $`x`$, falls to $`\frac{1}{\sqrt{e}}`$ of its peak value at $`x_{1/\sqrt{e}}=\frac{\lambda z}{2\pi w_0}`$. Actual measurements were carried out for 4 different values of the beam waist: $`w_0`$=20, 25, 30 and 35 $`\mu `$m. We call them "nominal" values because they were all determined by the shift-rotation method. We inserted a slit with a horizontal width of $`200\mu `$m in front of the photodiode. It was then scanned horizontally ($`x`$ direction) to measure the beam profile. In order to enhance the measurement accuracy, we determined the beam widths $`x_{1/\sqrt{e}}`$ as a function of $`z`$, and obtained the beam waist $`w_0`$ from the slope of these measurements. (In the actual analysis, the effect due to the concave mirror, with refractive index $`n`$, is taken into account; the beam width is then given by $`x_{1/\sqrt{e}}=\frac{n\lambda z}{2\pi w_0}`$ instead.) Fig.8 shows typical results of such a measurement (for the nominal beam waist of $`w_0=25\mu `$m). Each solid line represents a Gaussian fit to the data points, from which the width $`x_{1/\sqrt{e}}`$ is determined. Fig.9 shows $`x_{1/\sqrt{e}}`$ as a function of $`z`$. Each straight line in the figures is a linear fit to the data points, and $`w_0`$ is deduced from its slope.
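The extraction of $`w_0`$ from the measured widths amounts to a straight-line fit; a minimal sketch (the width data here are synthetic, standing in for the measured profiles):

```python
import numpy as np

lam, n = 633e-9, 1.519

# widths x_{1/sqrt(e)} at several z positions [m]; synthetic data for a
# hypothetical w0 = 25 um, following x = n*lam*z/(2*pi*w0)
z = np.array([0.10, 0.15, 0.20, 0.25])
x = n * lam * z / (2 * np.pi * 25e-6)

slope = np.polyfit(z, x, 1)[0]
w0 = n * lam / (2 * np.pi * slope)  # invert the slope for the waist
print(w0 * 1e6, "um")               # -> 25 um
```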
Table 2 shows the summary of the measurements (see the third column). The listed errors are all fitting errors. Note that, since the nominal values are determined by the shift-rotation method, they also have uncertainties, which are listed in the second column. As seen, the two sets of values agree well with each other. (The fourth column will be explained in the next section.)

### 3.5 Measurement of higher order transverse modes

The phase of a higher transverse mode $`\mathrm{TEM}_{mn}`$ is given by $`\varphi (z)=\frac{2\pi }{\lambda }z+\left(m+n+1\right)\mathrm{\Phi }(z)`$ at $`z`$ on the beam axis. These transverse modes interfere constructively when the phases at the mirrors ($`z=\pm D/2`$) satisfy the resonance condition $`\varphi (D/2)-\varphi (-D/2)=p\pi `$ where $`p`$ denotes an integer. A straightforward calculation shows that the condition above is equivalent to $`D_p=\frac{\lambda }{2}\left\{p+\frac{m+n+1}{\pi }\mathrm{arccos}\left(\frac{D_p}{R}-1\right)\right\}.`$ (6) The transmitted beam intensity also exhibits its maximum value when the resonance condition is met. Therefore, the spacing between the adjacent peaks belonging to the same $`p`$ is given by $`\mathrm{\Delta }D_p=D_p(m+n=1)-D_p(m+n=0)=\frac{\lambda }{2}\left[\frac{1}{\pi }\mathrm{arccos}\left(\frac{D}{R}-1\right)\right]`$ (7) where the quantity $`D_p`$ on the right hand side of Eq.(6) is replaced by a representative value $`D`$, since the dependence on it is weak. Thus we can determine $`D`$ by measuring the spacing $`\mathrm{\Delta }D_p`$ of these peaks. As represented in Eq.(7), we actually measured the spacing between the first excitation mode ($`m+n=1`$) and the fundamental mode ($`m+n=0`$). After confirming the cavity to be in the fundamental $`\mathrm{TEM}_{00}`$ mode, we detuned the optical axis by shifting one of the mirrors laterally to excite the transverse modes. Fig.10 shows a typical example of an output including higher transverse modes. We determined the spacing between the main peak (the fundamental mode) and its adjacent peak (the first excitation mode) as well as the spacing between two main peaks (a free spectral range). Then we deduced the cavity length $`D`$ via Eq.(7). The results of the measurements by this method are listed in the fourth column of Table 2. The uncertainty came mainly from reading errors on the oscilloscope and also from the non-linearity in $`D`$.
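The chain from a measured peak spacing to a waist value is short enough to spell out; a sketch using Eqs.(7) and (2) with hypothetical numbers:

```python
import numpy as np

lam, R = 633e-9, 20e-3

def mode_spacing(D):
    """Eq.(7): spacing between the m+n=1 and m+n=0 peaks."""
    return lam / 2 * np.arccos(D / R - 1.0) / np.pi

def waist_from_spacing(dDp):
    """Invert Eq.(7) for D, then apply Eq.(2) for the waist."""
    D = R * (1.0 + np.cos(2 * np.pi * dDp / lam))
    return np.sqrt(lam / np.pi * np.sqrt(D * (2 * R - D)) / 2)

dDp = mode_spacing(2 * R - 24e-6)  # cavity tuned to a ~10 um waist
print(dDp * 1e9, "nm ->", waist_from_spacing(dDp) * 1e6, "um")
```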
## 4 Summary and Discussions

We described in this paper a conceptual design of a beam profile monitor for an electron beam with a transverse size of $`\sim 60\mu `$m (horizontal) and $`\sim 10\mu `$m (vertical). Specifically, the monitor is intended to diagnose the electron beam in the ATF damping ring at KEK. The monitor works as follows: we realize a very thin ($`\sim 10\mu `$m) laser wire in an optical cavity. Energetic photons are emitted by the Compton scattering in the direction of the electron beam. The counting rate, measured as a function of the laser beam position, will give information on the electron beam profile. We calculated the scattered photon energy spectrum (see Fig.2) and the Compton cross section (see Fig.3) for the 1.54 GeV electron beam. Assuming a 10 mW laser and an optical cavity with a beam waist of $`10\mu `$m and a power enhancement of 100, the expected counting rate is 3.2 kHz (horizontal) and 28.8 kHz (vertical) when scattered photons with energies $`>`$10 MeV are detected. A key element of the proposed monitor is the optical cavity, in which a Gaussian beam with a $`10\mu `$m beam waist must be realized. The power enhancement inside the cavity is another important parameter to investigate. We thus constructed a test cavity to measure these parameters. The cavity realized a power enhancement of 50 and a beam waist of $`20\mu `$m. For the beam waist measurement, we employed three independent methods. As seen in Table 2, the results obtained by these methods are consistent with each other. The relative error $`\delta w_0/w_0`$ is about 5% for the shift-rotation method, less than 10% for the beam divergence measurement, and less than 3% for the higher transverse mode measurement. We measured the finesse $`F`$ to obtain the power enhancement factor. Using the mirrors with reflectivity $`R_m=96\%`$, we obtained an enhancement of 50 ($`F=70`$), which is consistent with the expected value. Several comments are in order here. First, we have not yet demonstrated a $`10\mu \mathrm{m}`$ beam waist. The main weakness of this test cavity is its insufficient mechanical rigidity. In particular, the mirror holder (supporting rod) caused disturbing resonant vibrations. This can, however, be overcome with a proper cavity design. The second comment is on the comparison between the three methods employed to measure the beam waist. The measurement of the beam width $`x_{1/\sqrt{e}}`$ is very simple and reliable. The newly devised shift-rotation method gave more accurate results than the beam divergence measurement. However, these two methods would not be applicable to an actual cavity installed in an electron beam line. The third one, the observation of higher transverse modes, is simple and the most accurate among the three. Thus this method is best suited to an in-situ measurement. Finally, we comment on the power enhancement factor, or the effective power inside the cavity. We demonstrated an enhancement factor of 50. Obviously a higher power is desirable: it would allow faster measurements and would enable us to study, for example, an electron cooling process and/or the size of an individual bunch. In this regard, we plan to employ a laser with a higher power and mirrors with a higher reflectivity for a prototype monitor. A design work is now underway. It is our pleasure to thank Dr. M. Ross for suggesting this work and Profs. H. Sugawara and K. Takata for their support and encouragement.
# Staggered Fermions and Gauge Field Topology

## 1 Introduction

The study of the relation between staggered fermions and (lattice) gauge field topology has a long history, beginning with the work of Smit and Vink. In particular, it is well known that staggered fermions at realistic values of the gauge field coupling $`\beta `$ do not show the proper relation between fermion zero modes and gauge field topology as it is dictated by the index theorem in the continuum. This established problem with staggered fermions has recently surfaced again, as lattice gauge theory studies have begun to test in detail the exact analytical predictions for the microscopic Dirac operator spectrum of staggered fermions. The crucial point is that the whole microscopic Dirac operator spectrum, and not just the part pertaining to the exact zero modes, has been predicted to be very strongly dependent on the gauge field topology. It falls into one of a set of universality classes that in addition depend on the gauge group, the number of fermion flavors and their color representation. In earlier studies of the spectrum of the smallest staggered Dirac operator eigenvalues, all gauge field configurations, irrespective of their possibly non-trivial winding numbers, were simply bunched together, and the Dirac operator spectrum was taken with respect to this full average. This implies an implicit sum over all topological sectors, for which the analytical predictions are very different from those of sectors with fixed topological index. Nevertheless, absolutely excellent agreement was found when comparing with just the sector of vanishing topological charge $`\nu =0`$. On the surface this may seem to be just a simple consequence of the fact that exact zero fermion modes are also missing in the staggered formulation. The issue is, however, much more complicated. The lattice studies of refs. involved the distribution of just around 10 of the lowest Dirac operator eigenvalues. Only positive eigenvalues were considered, since the staggered Dirac spectrum is exactly $`\pm `$ symmetric. Of these few lowest eigenvalues, typically one would expect that up to 2-6 were actually the "would-be" zero modes, shifted away from the origin by the staggered fermion artefacts. What should be the distribution of these "wrong" small eigenvalues? In the Random Matrix Theory formulation of the problem there is no answer to this question, as there is no known way of imposing correctly almost-zero modes in the theory. In that formulation one either has or has not exact zero eigenvalues. This could incorrectly lead to the conclusion that as long as staggered fermions do not produce exact zero modes, the distribution of the smallest Dirac operator eigenvalues in that formulation will exactly equal that of the $`\nu =0`$ sector. The argument is false, because as $`\beta `$ is increased the appropriate number of the smallest Dirac operator eigenvalues will slowly separate out and move towards the origin. As this happens, the whole microscopic Dirac operator spectrum will continuously shift, and in this intermediate region there will certainly no longer be agreement with the analytical predictions for any sectors with $`\nu \ne 0`$, let alone the sum over all of them. These considerations immediately raise the question of whether already at the $`\beta `$-values considered in refs. there was appreciable contamination of "wrong" non-zero eigenmodes.
To settle that issue, we report here on a high-statistics analysis (involving around 17,000 gauge field configurations) of the microscopic Dirac operator spectrum of SU(3) gauge theory with quenched staggered fermions. Correctly classifying the gauge field configurations according to topology is not a simple task, especially since by default we are excluded from using fermionic methods. We have chosen to do the classification according to the result of measuring the naive latticized topological charge $$\nu =\frac{1}{32\pi ^2}\int d^4x\,\text{Tr}[F_{\mu \nu }F_{\rho \sigma }]ϵ_{\mu \nu \rho \sigma }$$ (1) on configurations obtained after a large number of so-called APE smearing steps, details of which will be given below. Once classified, we have then measured the smallest Dirac operator eigenvalues on the original un-smeared gauge field configurations. We in no way claim that this is an optimal way in which to separate out the different topological sectors, but it should at least have considerable overlap with other methods. In particular, it is known, on average, to produce results very similar to more conventional types of semi-classical cooling. Studying the fate of the smallest Dirac operator eigenvalues with staggered fermions is here motivated by the need to understand the exact analytical predictions for the microscopic Dirac operator spectrum. But the problem is interesting also for other reasons. For instance, one needs to know the extent to which staggered fermions correctly couple to gauge field topology also if one wishes to compute, for instance, flavor singlet pseudoscalar masses in lattice QCD. We believe that the microscopic Dirac operator spectrum may be an excellent tool with which to assess how close simulations with staggered fermions are to continuum physics related to gauge field topology and the anomaly. This short paper is organized as follows. In the next section we briefly describe how we implement the smearing technique on gauge links, and show how this leads us to a classification of all gauge field configurations into distinct sectors, labelled by the value of $`\nu `$. We also trace the evolution of the smallest Dirac operator eigenvalues as a function of the number of smearing steps, and show that these results are completely in accord with expectations. In section 3 we compute the microscopic Dirac operator spectrum in each of the different gauge field sectors, and compare the results to the analytical predictions. To zoom in on precisely the smallest eigenvalue, which in topologically non-trivial sectors should behave very differently from the ordinary Dirac operator eigenvalues, we also compare the distribution of just this smallest eigenvalue with the exact analytical predictions. Finally, section 4 contains our conclusions.

## 2 Analysis of Gauge Field Topology: APE-Smearing

We perform the statistical analysis of the gauge field topology and the Dirac operator eigenvalues using a total of 17454 SU(3) pure gauge configurations with volume $`V=8^4`$ and lattice coupling $`\beta =6/g^2=5.1`$. We generate the configurations with an update consisting of 4 microcanonical overrelaxation sweeps over the volume followed by one pseudo-heat-bath update. The configurations we analyze are separated by 20 of these compound sweeps; this guarantees that the configurations are effectively uncorrelated. Let us give some technical details about the measurements of the topological charge.
First, the naive topological density operator (1), when implemented on the lattice, is not truly topological: it is sensitive to lattice ultraviolet modes, and, if applied to the original lattice configurations, the (generally non-integer) value of $`\nu `$ is completely dominated by the UV noise. However, if the fields are smooth enough on the lattice scale, topology can be uniquely defined. We make the original lattice fields smoother by applying repeated APE-smearing: during one smearing sweep, SU(3) link variables $`U_\mu (x)`$ are replaced by $$U_\mu (x)\to P_{\mathrm{SU}(3)}\left[fU_\mu (x)+\underset{|\nu |\ne \mu }{\sum }U_\nu (x)U_\mu (x+\nu )U_\nu ^{\dagger }(x+\mu )\right],$$ (2) where the sum over $`\nu `$ runs over both positive and negative directions. The operator $`P_{\mathrm{SU}(3)}`$ projects the $`3\times 3`$ complex matrix $`W_\mu (x)`$, inside the brackets in (2), to SU(3), and $`f`$ is an adjustable parameter. The projection is performed by maximizing $`\mathrm{Tr}\left(U_\mu ^{\prime }(x)W_\mu ^{\dagger }(x)\right)`$ over the SU(3) group elements $`U_\mu ^{\prime }(x)`$. The smearing sweeps (each consisting of a smearing of every link on the lattice) are performed up to 400 times. While the APE-smearing is not particularly sensitive to the actual choice of the parameter $`f`$, we used $`f=7`$ throughout our work. This value is motivated by the comparison to the RG blocking analysis by De Grand et al. According to their results, APE-smearing is quite effective in resolving instantons from quantum fluctuations, while it preserves the long-distance properties of the gauge field configurations much better than the standard cooling algorithms. Naturally, almost any cooling or smearing method will destroy instantons with a size of the order of the lattice spacing; however, lattice topology is not well defined at these length scales anyway. We calculate the field tensor $`F_{\mu \nu }`$ in Eq. (1) at lattice point $`x`$ by symmetrizing over the 'clover' of the 4 ($`\mu ,\nu `$)-plane plaquettes which have one corner at point $`x`$. In Fig. 1 we show a typical example of how the topological charge (as measured with the operator (1)) develops during smearing. As expected, the charge measurement is very noisy when the configurations are rough, but after $`\sim 100`$ smearing sweeps the measurement almost always stabilizes to a nearly integer value, which is then preserved for at least several hundred further smearing steps. In Fig. 2 we show the distribution of the measured topological charge after 200 smearing steps. The distribution is strongly peaked near integer values, as expected, with a small downward drift of the peaks from exact integer values. This is obviously caused by our naive $`F\stackrel{~}{F}`$ operator; using an improved operator would presumably shift the peaks toward exact integer values. Nevertheless, this slight shift does not pose any difficulties in assigning a topological index to the smeared configurations. Indeed, it is fairly obvious from our figure that there is simply a trivial "renormalization" of the naive topological charge, which appears to be neatly quantized, not in units of integers, but in units of around $`0.8`$. If corrected with such a prefactor of $`1/0.8`$, our distribution of topological charges falls on values very close to integers, and, as seen, the distribution has the expected infinite-volume Gaussian shape.
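For concreteness, one APE-smearing sweep, Eq.(2), can be sketched in a few lines of Python. The sketch is illustrative only: it starts from cold (unit) links on a tiny lattice, and the SU(3) projection is done by SVD-based unitarization as a simple stand-in for the iterative trace maximization described above.

```python
import numpy as np

def su3_project(W):
    # Unitarize via SVD and fix det = 1; a stand-in for maximizing
    # Tr(U' W^dagger) over SU(3), as described in the text.
    V, _, Wh = np.linalg.svd(W)
    U = V @ Wh
    return U * np.linalg.det(U) ** (-1.0 / 3.0)

def shift(x, mu, s, L):
    y = list(x); y[mu] = (y[mu] + s) % L
    return tuple(y)

def staple_sum(U, mu, x, L):
    # sum over nu != mu (both orientations) of the staples around U_mu(x)
    S = np.zeros((3, 3), complex)
    for nu in range(4):
        if nu == mu:
            continue
        xp_mu, xp_nu = shift(x, mu, +1, L), shift(x, nu, +1, L)
        xm_nu = shift(x, nu, -1, L)
        S += U[nu][x] @ U[mu][xp_nu] @ U[nu][xp_mu].conj().T
        S += (U[nu][xm_nu].conj().T @ U[mu][xm_nu]
              @ U[nu][shift(xp_mu, nu, -1, L)])
    return S

def ape_sweep(U, L, f=7.0):
    # Eq.(2), with f = 7 as used in the text; built from the old links
    return {mu: {x: su3_project(f * U[mu][x] + staple_sum(U, mu, x, L))
                 for x in np.ndindex(L, L, L, L)} for mu in range(4)}

L = 2  # tiny lattice and cold start, purely for illustration
U = {mu: {x: np.eye(3, dtype=complex) for x in np.ndindex(L, L, L, L)}
     for mu in range(4)}
U = ape_sweep(U, L)  # unit links are a fixed point of the smearing
```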
We have measured the topological charge after 200 and 300 smearing sweeps, and accepted only configurations where the measured charge remained in the neighborhood of the same "renormalized" integer value 0, $`\pm 1`$, $`\pm 2`$. In this way we have rejected configurations where the measured topological charge changed too much to be uniquely defined. In this process we rejected 37% of our configurations, which were simply discarded in all of the subsequent analysis. Eigenvalues of the staggered Dirac operator $$\text{/}D_{x,y}=\frac{1}{2}\underset{\mu }{\sum }\eta _\mu (x)\left(U_\mu (x)\delta _{x+\mu ,y}-U_\mu ^{\dagger }(y)\delta _{x,y+\mu }\right)\equiv \text{/}D_{e,o}+\text{/}D_{o,e}$$ (3)-(4) are computed using the variational Ritz functional method. Here $`\eta _\mu (x)=(-1)^{\sum _{\nu <\mu }x_\nu }`$ are the staggered phase factors. Letting $`ϵ(x)=(-1)^{\sum _\nu x_\nu }`$, we have explicitly indicated how $`\text{/}D`$ connects even sites, i.e. those with $`ϵ(x)=+1`$, with odd ones (those with $`ϵ(x)=-1`$), and vice versa. We remind the reader that the staggered Dirac operator is antihermitian, with purely imaginary eigenvalues that come in pairs $`\pm i\lambda `$. The operator $`-\text{/}D^2`$ is thus hermitian and positive semi-definite, and the sign function $`ϵ(x)`$ defined above plays the role of $`\gamma _5`$ in the continuum (i.e., this quantity anticommutes with $`\text{/}D`$: $`\{\text{/}D,ϵ\}=0`$). Moreover, since $`\text{/}D^2`$ does not mix even and odd lattice sites, it suffices to compute the eigenvalues on, say, the even sublattice. In fact, if $`\psi _e`$ is a normalized eigenvector of $`-\text{/}D^2`$ with eigenvalue $`\lambda ^2`$, then $`\psi _o\equiv \frac{1}{\lambda }\text{/}D_{o,e}\psi _e`$ is a normalized eigenvector of $`-\text{/}D^2`$ with eigenvalue $`\lambda ^2`$, non-zero only on odd sites. (Note that there is no difficulty with the above definition of $`\psi _o`$, since we never encounter exact zero modes.) In practice, we make use of these properties, and compute only the (positive) eigenvalues of $`-\text{/}D^2`$ restricted to the even sublattice, and then take the (positive) square root. All eigenvalues to be shown in the following thus have an equal number of negative companions of the exact same magnitude. We have first investigated how the lowest eigenvalues behave during the smearing process described previously. As shown in Fig. 3, the index theorem is seen to be valid for smooth configurations: after $`\sim 100`$ smearing sweeps, we have exactly $`4\times \nu `$ very small eigenvalues (corresponding to 4 continuum flavors for each staggered fermion flavor), whereas the other eigenvalues grow larger. This implies that in the continuum limit the eigenvalues of the staggered Dirac operator should indeed depend on the topology of the gauge configuration. As expected, on the smooth configurations the counting of flavors is precisely as in the continuum limit: we observe 4 flavor degrees of freedom associated with the staggered Dirac operator.
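A minimal illustration of this construction is easy to write down. In the sketch below the SU(3) links are replaced by random U(1) phases, and the low end of the spectrum is obtained by dense diagonalization of $`-\text{/}D^2`$ rather than by the Ritz functional method, purely to keep the example short:

```python
import numpy as np

L = 4
sites = list(np.ndindex(L, L, L, L))
idx = {x: i for i, x in enumerate(sites)}
rng = np.random.default_rng(1)

# random U(1) phases stand in for SU(3) links in this toy example
U = {(mu, x): np.exp(2j * np.pi * rng.random())
     for mu in range(4) for x in sites}

def eta(x, mu):  # staggered phase (-1)^(x_0 + ... + x_{mu-1})
    return -1.0 if sum(x[:mu]) % 2 else 1.0

D = np.zeros((L**4, L**4), complex)
for x in sites:
    for mu in range(4):
        y = list(x); y[mu] = (y[mu] + 1) % L; y = tuple(y)
        D[idx[x], idx[y]] += 0.5 * eta(x, mu) * U[(mu, x)]
        D[idx[y], idx[x]] -= 0.5 * eta(y, mu) * np.conj(U[(mu, x)])

# -D^2 is hermitian and positive semi-definite; eigenvalues are lambda^2
lam = np.sqrt(np.clip(np.linalg.eigvalsh(-D @ D), 0.0, None))
print(lam[:8])  # low end of the spectrum, doubly degenerate (even/odd)
```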
## 3 The Microscopic Dirac Operator Spectrum

Having described our procedure for classifying our gauge field configurations according to their smeared topological charge $`\nu `$, we now proceed to measure the distribution of the smallest Dirac operator eigenvalues. We stress that we use the smearing technique only as a means of defining the classification of the original un-smeared gauge field configurations. All measurements presented below are performed on the original, un-smeared gauge field configurations, after having discarded those that failed to be classified according to the criterion described above. The spectral density of the Dirac operator is given by $$\rho ^{(\nu )}(\lambda )\equiv \left\langle \underset{n}{\sum }\delta (\lambda -\lambda _n)\right\rangle _\nu $$ (5) in each sector of topological charge $`\nu `$. The associated microscopic Dirac operator spectrum is defined by enhancing the smallest eigenvalues according to the size of the lattice space-time volume $`V`$. Let $$\mathrm{\Sigma }=\underset{m\to 0}{lim}\underset{V\to \mathrm{}}{lim}\langle \overline{\psi }\psi \rangle $$ (6) denote the infinite-volume chiral condensate as defined in the conventional manner. One then blows up the small eigenvalues by keeping $`\zeta \equiv \lambda \mathrm{\Sigma }V`$ fixed as $`V\to \mathrm{}`$, and introduces the microscopic spectral density $$\rho _s^{(\nu )}(\zeta )\equiv \frac{1}{V}\rho ^{(\nu )}\left(\frac{\zeta }{\mathrm{\Sigma }V}\right),$$ (7) which again is measured in each topological sector. For the case at hand, the microscopic spectral density has been computed both from Random Matrix Theory and from the effective finite-volume partition functions. The results agree, and the simple analytical result for the quenched theory ($`J_n(x)`$ is the $`n`$th order Bessel function), $$\rho _s^{(\nu )}(\zeta )=\pi \rho (0)\frac{\zeta }{2}\left[J_\nu (\zeta )^2-J_{\nu -1}(\zeta )J_{\nu +1}(\zeta )\right]$$ (8) is exact in the limit where $`V\to \mathrm{}`$ and $`V\gg 1/m_\pi ^4`$. Here $`\rho (0)`$ is the macroscopic spectral density of the Dirac operator, evaluated at the origin. By the well-known Banks-Casher relation, it is related to the chiral condensate through $`\pi \rho (0)=\mathrm{\Sigma }`$. In Fig. 4 we compare with these analytical predictions for the sectors of $`\nu =0,1`$ and 2. We did not have enough statistics to perform a similar analysis on what we would classify as $`\nu =3`$ configurations; however, as will become evident, there was no need to do this either. Our first comment concerns statistics. Because the configurations with $`+\nu `$ and $`-\nu `$ should give rise to the same microscopic Dirac operator spectrum, we actually end up with better statistics for the $`\nu =\pm 1`$ configurations combined. In total, we had 2683 configurations labelled as $`\nu =0`$, 4797 configurations labelled as $`\nu =\pm 1`$, and 3493 configurations labelled as $`\nu =\pm 2`$. Indeed, we observe from Fig. 4 the curious fact that the agreement between the analytical curve for the microscopic spectral density of the staggered Dirac operator and the data actually seems to be poorer on the $`\nu =0`$ than on the $`\nu =\pm 1`$ configurations. We attribute this solely to statistical fluctuations. It is quite obvious that even configurations classified as of $`\nu =\pm 1`$ and $`\nu =\pm 2`$ topological charge give rise to a microscopic staggered Dirac operator spectrum which is indistinguishable from that of the $`\nu =0`$ configurations. The agreement with the exact analytical formula (8) for $`\nu =0`$ configurations is extraordinarily good on all three classes of configurations. We have also indicated on the figure the predictions for $`|\nu |=1`$ and $`|\nu |=2`$; there is clearly no way our Monte Carlo data can be compatible with these predictions. Combining all data, we obtain perfect agreement with the analytical $`\nu =0`$ prediction, with very small statistical errors.
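The analytical curves of Eq.(8) used in this comparison are straightforward to evaluate; a short sketch, normalized so that $`\pi \rho (0)=\mathrm{\Sigma }=1`$:

```python
import numpy as np
from scipy.special import jv

def rho_s(zeta, nu, rho0=1.0 / np.pi):
    """Quenched microscopic spectral density, Eq.(8)."""
    return (np.pi * rho0 * zeta / 2.0 *
            (jv(nu, zeta) ** 2 - jv(nu - 1, zeta) * jv(nu + 1, zeta)))

for nu in (0, 1, 2):
    print(nu, float(rho_s(np.array([2.0]), nu)[0]))
# the nu = 1, 2 curves are strongly suppressed near the origin,
# which is what distinguishes the topological sectors in Fig. 4
```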
To focus more closely on just the smallest eigenvalue, we have also compared its distribution with the analytical predictions for the different topological sectors. Let us denote the distribution of the lowest (rescaled) eigenvalue in a sector of topological charge $`\nu `$ by $`P^{(\nu )}(\zeta )`$. From the general formula of ref. one finds for the quenched theory, with our normalization convention: $`P^{(0)}(\zeta )=\pi \rho (0)\frac{\zeta }{2}e^{-\zeta ^2/4}`$ (9), $`P^{(1)}(\zeta )=\pi \rho (0)\frac{\zeta }{2}I_2(\zeta )e^{-\zeta ^2/4}`$ (10), $`P^{(2)}(\zeta )=\pi \rho (0)\frac{\zeta }{2}\left[I_2(\zeta )^2-I_1(\zeta )I_3(\zeta )\right]e^{-\zeta ^2/4}`$ (11). Checking the distribution of just the smallest eigenvalue is obviously the most sensitive test of whether there is any appreciable contamination of would-be zero modes in the different topological sectors. We show in Fig. 5 the lowest eigenvalue distributions in the three different topological sectors, and compare them with the analytical predictions. Clearly, no deviations from the $`\nu =0`$ prediction are seen at all, in any of the sectors. Finally, one could well ask what would happen if we instead measured the distribution of the smallest staggered Dirac operator eigenvalues on the smeared configurations, after removing by hand those eigenvalues that obviously should be classified as zero modes. All indications are of course that such smeared configurations would produce the correct behavior of the eigenvalues in the different topological sectors. The reason why we have not performed such measurements on the smoothened configurations is that the ensemble average is not a very meaningful concept in that case. The results are clearly very sensitive to where we stop the APE-smearing procedure, and even with a fixed number of smearings it is not at all obvious that this could provide us with a sensible ensemble average.
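For reference, the distributions (9)-(11) compared against in Fig. 5 can be evaluated as follows (a sketch, again with $`\pi \rho (0)=1`$, in which case each curve integrates to unity):

```python
import numpy as np
from scipy.special import iv

def P_min(zeta, nu):
    """Eqs.(9)-(11): quenched lowest-eigenvalue distributions, pi*rho(0)=1."""
    pref = zeta / 2.0 * np.exp(-zeta ** 2 / 4.0)
    if nu == 0:
        return pref
    if nu == 1:
        return pref * iv(2, zeta)
    if nu == 2:
        return pref * (iv(2, zeta) ** 2 - iv(1, zeta) * iv(3, zeta))
    raise ValueError("only |nu| <= 2 are quoted in the text")

zeta = np.linspace(0.0, 10.0, 1000)
for nu in (0, 1, 2):
    print(nu, np.trapz(P_min(zeta, nu), zeta))  # -> 1.0 for each sector
```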
## 4 Conclusions

At a lattice coupling of $`\beta =5.1`$ the microscopic Dirac operator spectrum of staggered fermions in quenched SU(3) lattice gauge theory displays no deviations at all from the analytical prediction for the $`\nu =0`$ topological sector. Even after roughly classifying all gauge field configurations into different topological sectors by means of the value of $`F\stackrel{~}{F}`$ on sufficiently smeared configurations, we have found no deviations from the $`\nu =0`$ predictions in any of the different sectors. In view of earlier results, which simply bunched all gauge field configurations together independently of their proper assignments of topological charge, and which found excellent agreement on this total sample of configurations, these results are not extremely surprising. It is nevertheless very disturbing that at the presently probed lattice couplings even gauge field configurations that clearly should be classified as carrying non-trivial topological charge $`\nu `$ are not at all seen as such by staggered fermions. The agreement with the analytical predictions for the microscopic Dirac operator spectrum in a gauge field sector of topological charge zero is thus a great success of the analytical framework, but it also indicates a clear failure of the staggered fermion formulation at these lattice couplings. What it means is that Monte Carlo simulations with staggered fermions at these lattice couplings are oblivious to net gauge field topology, an essential ingredient of the dynamics of non-Abelian gauge theories. Such simulations are simply not mimicking the correct path integral of the continuum theory. It is comforting that more sophisticated fermion formulations, which correctly build in the broken/unbroken chiral Ward identities in sectors of fixed gauge field topology, are now feasible alternatives. There are already clear results which show that these new fermion formulations correctly reproduce the analytical predictions for the microscopic Dirac operator spectrum in sectors of non-trivial gauge field topology. One surprising aspect of the present work is that it shows that the typically 2-6 would-be zero modes of staggered fermions at these lattice spacings behave entirely as "conventional" (small) Dirac operator eigenvalues on topologically trivial configurations. In particular, although they make up as much as 50% of the small eigenvalues we probe here, their distributions fall entirely on top of those of the conventional small eigenvalues. Certainly, these would-be zero modes will eventually, as the lattice spacing is decreased, separate out and completely distort the microscopic Dirac operator spectrum. This deformation of the smallest eigenvalue spectrum may already have been seen in the 2-d Schwinger model. It is challenging to search for the corresponding onset of correct topological properties with staggered fermions in this SU(3) gauge theory at smaller lattice spacings. The microscopic Dirac operator spectrum is an excellent tool with which to measure this in a precise and quantitative manner. Acknowledgements: The work of P.H.D. and K.R. has been partially supported by EU TMR grant no. ERBFMRXCT97-0122, and the work of U.M.H. has been supported in part by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. In addition, P.H.D. and U.M.H. acknowledge the financial support of NATO Science Collaborative Research Grant no. CRG 971487 and the hospitality of the Aspen Center for Physics.
# D0-branes in SO(32)×SO(32) open type 0 string theory

KUNS-1586, hep-th/9907094 (July, 1999)

We construct D0-branes in SO(32)$`\times `$SO(32) open type 0 string theory using the same method as the one used to construct the non-BPS D0-brane in type I string theory. It was conjectured that this theory is S-dual to bosonic string theory compactified on the SO(32) lattice, which has SO(32)$`\times `$SO(32) spinor states among the excited states of the fundamental string. One of these states seems to correspond to the D0-brane, and from the requirement that the other states, which have no corresponding D-brane states, must be removed, we can determine how the spectrum is to be truncated. This result supports the conjecture. PACS codes: 11.25.-w Keywords: string theory, type 0, D-brane, S-duality, tachyon condensation, boundary state

The correspondence of SO(32) spinor states is evidence for the S-duality between the type I and heterotic SO(32) string theories. Heterotic SO(32) theory has SO(32) spinor states as the first excited states of the fundamental string. These are the lightest of the states which carry SO(32) spinor charge; they therefore cannot decay and must exist in the strong coupling regime, which is described by type I string theory. Their type I counterpart is the non-BPS D0-brane. Let us consider the analogous correspondence in type 0 string theory. It was proposed that SO(32)$`\times `$SO(32) open type 0 string theory (we will abbreviate it as open type 0 theory) is S-dual to bosonic string theory compactified on the SO(32) lattice. In this bosonic string theory the fundamental string has the following worldsheet matter content: $$X^\mu (z),\stackrel{~}{X}^\mu (\overline{z}),\lambda ^A(z),\stackrel{~}{\lambda }^{\stackrel{~}{A}}(\overline{z}),\mu =0,\mathrm{},9,A,\stackrel{~}{A}=1,\mathrm{},32.$$ (1) Here $`A`$ is the index of the fundamental representation of one SO(32), while $`\stackrel{~}{A}`$ is that of the other SO(32). The lightest states which have SO(32)$`\times `$SO(32) spinor charge are $`\lambda _{-\frac{1}{2}}^A\lambda _{-\frac{1}{2}}^B\lambda _{-\frac{1}{2}}^C\lambda _{-\frac{1}{2}}^D|0\rangle _R|\stackrel{~}{a}\rangle _L,\lambda _{-\frac{3}{2}}^A\lambda _{-\frac{1}{2}}^B|0\rangle _R|\stackrel{~}{a}\rangle _L,\alpha _{-1}^\mu \lambda _{-\frac{1}{2}}^A\lambda _{-\frac{1}{2}}^B|0\rangle _R|\stackrel{~}{a}\rangle _L,\alpha _{-1}^\mu \alpha _{-1}^\nu |0\rangle _R|\stackrel{~}{a}\rangle _L,\alpha _{-2}^\mu |0\rangle _R|\stackrel{~}{a}\rangle _L,`$ (2) $`\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{A}}\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{B}}\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{C}}\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{D}}|a\rangle _R|0\rangle _L,\stackrel{~}{\lambda }_{-\frac{3}{2}}^{\stackrel{~}{A}}\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{B}}|a\rangle _R|0\rangle _L,\stackrel{~}{\alpha }_{-1}^\mu \stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{A}}\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{B}}|a\rangle _R|0\rangle _L,\stackrel{~}{\alpha }_{-1}^\mu \stackrel{~}{\alpha }_{-1}^\nu |a\rangle _R|0\rangle _L,\stackrel{~}{\alpha }_{-2}^\mu |a\rangle _R|0\rangle _L,`$ (3) $`|a\rangle _R|\stackrel{~}{a}\rangle _L,`$ (4) where $`a`$ and $`\stackrel{~}{a}`$ are spinor indices of one SO(32) and the other SO(32), respectively. Here we do not yet consider the truncation of the spectrum required by modular invariance, etc.; we will return to this point later. In this paper we construct the type 0 counterparts to these states using the same method as the one used to construct the non-BPS D0-brane in type I string theory. As we will see, the states corresponding to (4) can be found by this method, but the states corresponding to (2) and (3) are not found.
This fact suggests what truncation we should adopt. The result is in accord with the proposal in ref. . Open type 0 theory is constructed from type 0B theory by the $`\mathrm{\Omega }`$ projection, where $`\mathrm{\Omega }`$ is the worldsheet parity inversion, analogously to the construction of type I theory from type IIB theory. Type 0B theory has two types of RR fields and therefore two types of D-branes. We denote their RR charges by $`(q,\overline{q})`$. The boundary states of these branes are $$|Dp;q,\overline{q}\rangle _0=\frac{1}{\sqrt{2}}\left(|Dp\rangle _{NS+NS+}+q\overline{q}|Dp\rangle _{NS-NS-}+q|Dp\rangle _{R+R+}+\overline{q}|Dp\rangle _{R-R-}\right),$$ (5) with $`|Dp\rangle _{NS\pm NS\pm }=\frac{1}{2}\left(|Dp,+\rangle _{NS}\mp |Dp,-\rangle _{NS}\right),`$ (6) $`|Dp\rangle _{R\pm R\pm }=\frac{1}{2}\left(|Dp,+\rangle _R\pm |Dp,-\rangle _R\right).`$ (7) The boundary states of type IIB branes are $$|Dp;q\rangle _{\mathrm{II}}=|Dp\rangle _{NS+NS+}+q|Dp\rangle _{R+R+}.$$ (8) For the definitions of $`|Dp,\pm \rangle _{NS}`$ and $`|Dp,\pm \rangle _R`$, and for other notation about boundary states, we adopt those of ref. . Strings stretched between a $`(q,\overline{q})`$ brane and a $`(\pm q,\pm \overline{q})`$ brane belong to the $`\frac{1}{2}(1\pm (-1)^F)`$NS sector only, and strings between a $`(q,\overline{q})`$ brane and a $`(\pm q,\mp \overline{q})`$ brane belong to the $`\frac{1}{2}(1\mp q(-1)^F)`$R sector only. Open type 0 theory has 32 D9-branes and 32 anti D9-branes of one type for tadpole cancellation. We choose $`(1,1)`$ and $`(-1,-1)`$ as these D9-branes. In this theory we can construct two types of D0-branes, respectively from the $`(1,1)`$ D1-brane-anti-D1-brane system and from the $`(1,-1)`$ D1-brane-anti-D1-brane system, in the same way as the type I non-BPS D0-brane is constructed from the D1-brane-anti-D1-brane system (for details see ref. ): 1. Wrap the D1-branes around a compact direction with radius $`R_c=\sqrt{\frac{\alpha ^{\prime }}{2}}`$ and put a $`𝐙_2`$ Wilson line on one of the D1-branes. 2. Define new worldsheet variables $`\varphi _R(z),\varphi _L(\overline{z}),\varphi _R^{\prime }(z),\varphi _L^{\prime }(\overline{z}),\xi (z),\stackrel{~}{\xi }(\overline{z}),\eta (z)`$, and $`\stackrel{~}{\eta }(\overline{z})`$: $`X(z,\overline{z})=X_R(z)+X_L(\overline{z}),`$ (9) $`\mathrm{exp}(i\sqrt{\frac{2}{\alpha ^{\prime }}}X_R)=\frac{1}{\sqrt{2}}(\xi +i\eta ),\mathrm{exp}(i\sqrt{\frac{2}{\alpha ^{\prime }}}X_L)=\frac{1}{\sqrt{2}}(\stackrel{~}{\xi }+i\stackrel{~}{\eta }),`$ (10) $`\mathrm{exp}(i\sqrt{\frac{2}{\alpha ^{\prime }}}\varphi _R)=\frac{1}{\sqrt{2}}(\xi +i\psi ),\mathrm{exp}(i\sqrt{\frac{2}{\alpha ^{\prime }}}\varphi _L)=\frac{1}{\sqrt{2}}(\stackrel{~}{\xi }+i\stackrel{~}{\psi }),`$ (11) $`\mathrm{exp}(i\sqrt{\frac{2}{\alpha ^{\prime }}}\varphi _R^{\prime })=\frac{1}{\sqrt{2}}(\eta +i\psi ),\mathrm{exp}(i\sqrt{\frac{2}{\alpha ^{\prime }}}\varphi _L^{\prime })=\frac{1}{\sqrt{2}}(\stackrel{~}{\eta }+i\stackrel{~}{\psi }).`$ (12) 3. Give a vev to the tachyon field, i.e. put the Wilson line $`\mathrm{exp}(i\int dz\frac{1}{2\sqrt{2\alpha ^{\prime }}}\varphi \sigma _1)`$ along $`\varphi `$. 4. Decompactify the compact direction. $`\varphi _D^{\prime }(z,\overline{z})=\varphi _R^{\prime }(z)-\varphi _L^{\prime }(\overline{z})`$, $`\xi `$ and $`\stackrel{~}{\xi }`$ are the variables for this direction with Dirichlet boundary condition. The only difference between the type I D0-brane and the type 0 D0-branes in this construction is that the type 0 D0-branes do not have R sector strings.
We can also construct boundary states of the type 0 D0-branes following ref. : 1. Introduce $`|B,\pm \rangle _{NS}`$ and $`|B,\pm \rangle _R`$ for describing the D1-brane and the anti D1-brane with a $`𝐙_2`$ Wilson line, wrapped around a compact direction with radius $`R_c`$: $`|B,\pm \rangle _{NS}=|D1,\pm \rangle _{NS}+|\overline{D1}^{},\pm \rangle _{NS},`$ (13) $`|B,\pm \rangle _R=|D1,\pm \rangle _R-|\overline{D1}^{},\pm \rangle _R,`$ (14) where $`\overline{D1}^{}`$ means the anti D1-brane with the $`𝐙_2`$ Wilson line. 2. Rewrite these boundary states in terms of the new variables $`\varphi (z),\stackrel{~}{\varphi }(\overline{z}),\xi (z),\stackrel{~}{\xi }(\overline{z}),\eta (z)`$, and $`\stackrel{~}{\eta }(\overline{z})`$: $`X(z,\overline{z})=\frac{1}{2}(X_R(z)+X_L(\overline{z})),`$ (15) $`\mathrm{exp}(i\sqrt{\frac{1}{2\alpha ^{\prime }}}X_R)=\frac{1}{\sqrt{2}}(\eta +i\xi ),\mathrm{exp}(i\sqrt{\frac{1}{2\alpha ^{\prime }}}X_L)=\frac{1}{\sqrt{2}}(\stackrel{~}{\eta }+i\stackrel{~}{\xi }),`$ (16) $`\mathrm{exp}(i\sqrt{\frac{1}{2\alpha ^{\prime }}}\varphi _R)=\frac{1}{\sqrt{2}}(\xi +i\psi ),\mathrm{exp}(i\sqrt{\frac{1}{2\alpha ^{\prime }}}\varphi _L)=\frac{1}{\sqrt{2}}(\stackrel{~}{\xi }+i\stackrel{~}{\psi }),`$ (17) $`|B,\pm \rangle _{NS/R}=\frac{1}{4\pi \alpha ^{\prime }g_s}\sqrt{\frac{2\pi R_c}{\mathrm{\Phi }}}\mathrm{exp}[\underset{n>0}{\sum }\frac{1}{n}\alpha _{-n}\widehat{S}^{(1)}\stackrel{~}{\alpha }_{-n}]\mathrm{exp}[\pm i\underset{n>0}{\sum }\psi _{-n}\widehat{S}^{(1)}\stackrel{~}{\psi }_{-n}]\mathrm{exp}[\underset{n>0}{\sum }\varphi _{-n}\stackrel{~}{\varphi }_{-n}]\mathrm{exp}[\pm i\underset{n>0}{\sum }\eta _{-n}\stackrel{~}{\eta }_{-n}]|D1,\pm \rangle _{NS/R}^{(0)}\times 2\delta ^{(8)}(q^i)\underset{i=0,2,\mathrm{},9}{\prod }|k^i=0\rangle \underset{w_\varphi =\mathrm{even}/\mathrm{odd}}{\sum }|0,w_\varphi \rangle .`$ (18) 3. Put the Wilson line $`\mathrm{exp}(i\int dz\frac{1}{2\sqrt{2\alpha ^{\prime }}}\varphi \sigma _1)`$ along $`\varphi `$: $`|B,\pm \rangle _{NS}\to \frac{1}{4\pi \alpha ^{\prime }g_s}\sqrt{\frac{2\pi R_c}{\mathrm{\Phi }}}\mathrm{exp}[\underset{n>0}{\sum }\frac{1}{n}\alpha _{-n}\widehat{S}^{(1)}\stackrel{~}{\alpha }_{-n}]\mathrm{exp}[\pm i\underset{n>0}{\sum }\psi _{-n}\widehat{S}^{(1)}\stackrel{~}{\psi }_{-n}]\mathrm{exp}[\underset{n>0}{\sum }\varphi _{-n}\stackrel{~}{\varphi }_{-n}]\mathrm{exp}[\pm i\underset{n>0}{\sum }\eta _{-n}\stackrel{~}{\eta }_{-n}]|0\rangle _{NS}\times 2\delta ^{(8)}(q^i)\underset{i=0,2,\mathrm{},9}{\prod }|k^i=0\rangle \underset{w_\varphi }{\sum }(-1)^{w_\varphi }|0,2w_\varphi \rangle ,`$ (19) $`|B,\pm \rangle _R\to 0.`$ (20) 4.
Rewrite these boundary states in terms of the T-dualized variables and decompactify the compact direction: $`|B,\pm \rangle _{NS}\to \frac{1}{4\pi \alpha ^{\prime }g_s}\sqrt{\frac{\pi \alpha ^{\prime }}{R_c\mathrm{\Phi }}}\mathrm{exp}[\underset{n>0}{\sum }\frac{1}{n}\alpha _{-n}\widehat{S}^{(1)}\stackrel{~}{\alpha }_{-n}]\mathrm{exp}[\pm i\underset{n>0}{\sum }\psi _{-n}\widehat{S}^{(1)}\stackrel{~}{\psi }_{-n}]\mathrm{exp}[\underset{n>0}{\sum }\frac{1}{n}\alpha _{-n}\stackrel{~}{\alpha }_{-n}]\mathrm{exp}[\mp i\underset{n>0}{\sum }\psi _{-n}\stackrel{~}{\psi }_{-n}]|0\rangle _{NS}\times 2\delta ^{(8)}(q^i)\underset{i=0,2,\mathrm{},9}{\prod }|k^i=0\rangle \underset{w}{\sum }|0,w\rangle `$ (21) $`\equiv \sqrt{2}|D0,\pm \rangle _{NS}.`$ (22) Thus we get two types of boundary states as follows: $$|D1;q,\overline{q}\rangle _0+|\overline{D1}^{};q,\overline{q}\rangle _0\to |D0\rangle _{NS+NS+}+q\overline{q}|D0\rangle _{NS-NS-}\equiv |D0;q\overline{q}\rangle _0.$$ (23) The factor $`\sqrt{2}`$ in (22) means that the tension of these D0-branes $`T_0`$ is $`\sqrt{2}`$ times the tension of the type 0A D0-brane: $$T_0=\sqrt{2}T_0^{0\mathrm{A}}=T_0^{\mathrm{IIA}}=\frac{1}{\sqrt{\alpha ^{\prime }}g_s}.$$ (24) The rules for computing the spectrum and the interactions of open strings which end on the D0-branes are the same as in ref. , except that strings stretched between the same type (different types) of D0-branes belong to the NS (R) sector only. Similarly, strings between the $`(\pm 1,\pm 1)`$ D9-branes and $`|D0;+1\rangle _0`$ of (23) belong to the NS sector only, while strings between the $`(\pm 1,\pm 1)`$ D9-branes and $`|D0;-1\rangle _0`$ belong to the R sector only. The NS sector gives only massive states, because its zero point energy is $`5/8>0`$, while the R sector has massless states. The R sector massless states belong to the SO(32) fundamental representation corresponding to the 32 $`(1,1)`$ D9-branes or the 32 $`(-1,-1)`$ D9-branes. The zero modes of these massless states form a Clifford algebra, and their quantization gives rise to the spinor representation of SO(32)$`\times `$SO(32). Therefore $`|D0;-1\rangle _0`$ corresponds to the state (4). On the other hand, $`|D0;+1\rangle _0`$ has no SO(32)$`\times `$SO(32) charge and does not correspond to any state in (2), (3) or (4). The type 0 states corresponding to the states (2) and (3) are not found. It is impossible to construct states which have spinor charge under only one SO(32), like the states (2) and (3), by using boundary states. This is because the difference between D9 and anti D9-branes lies only in the signature of the RR part of the boundary states, while it is the NSNS part that can be interpreted, by modular transformation, as the R sector of the open strings which have massless states with SO(32) charge.
This removal is necessary for modular invariance of the 1-loop partition function. Indeed the partition function is given by $`\frac{1}{2}(1+(-1)^{F+\stackrel{~}{F}})(\mathrm{NSNS}+\mathrm{RR})=\int \frac{d\tau d\overline{\tau }}{4\mathrm{Im}\tau }(4\pi ^2\alpha ^{\prime }\mathrm{Im}\tau )^{-5}\frac{1}{2}\frac{|\vartheta _{00}(0,\tau )|^{32}+|\vartheta _{01}(0,\tau )|^{32}+|\vartheta _{10}(0,\tau )|^{32}}{|\eta (\tau )|^{48}},`$ (25) which is modular invariant. In ref. the latter truncation is adopted. The diagonal GSO projection keeps the charged tachyon $`\lambda _{-\frac{1}{2}}^A\stackrel{~}{\lambda }_{-\frac{1}{2}}^{\stackrel{~}{A}}|0\rangle _R|0\rangle _L`$, which is projected out by the separative GSO projection. Condensation of this tachyon breaks part of the gauge group. This corresponds, on the open type 0 side, to condensation of the tachyon of the string stretched between the D9 and anti D9-branes. In addition, since the states (2) and (3) belong to the NS-R and R-NS sectors respectively, they are removed only by the latter truncation. Therefore we should adopt the latter truncation. The S-duality conjecture of ref. is then supported by the agreement of the SO(32)$`\times `$SO(32) spinor states. Now we comment on the other branes. Type I string theory has non-BPS $`(-1)`$-, 7- and 8-branes as well as the D0-brane. Analogously we can consider $`(-1)`$-, 7- and 8-branes in open type 0 theory. Their boundary states can be constructed following ref. : $$|Dp;q\rangle _0=\frac{\mu _p}{\sqrt{2}}(|Dp\rangle _{NS+NS+}+q|Dp\rangle _{NS-NS-}),$$ (26) where $`\mu _p=2`$ (for $`p=-1,7`$) and $`\mu _p=\sqrt{2}`$ (for $`p=8`$). However, as pointed out in ref. , because strings stretched between the D9-branes and the D7- or D8-brane with $`q=1`$ have tachyonic modes, the D7- and D8-branes with $`q=1`$ are unstable. D$`(-1)`$-branes break the disconnected components of O(32)$`\times `$O(32). We have considered various branes in a background with tachyons from closed strings and from open strings stretched between the D9 and anti D9-branes. It would be desirable to consider these branes in a stable background without tachyons. Acknowledgments I would like to thank S. Sugimoto for helpful discussions and M. R. Gaberdiel for e-mail correspondence.
# Neutron diffraction evidence of microscopic charge inhomogeneities in the CuO2 plane of superconducting La2-xSrxCuO4 (0 ≤ x ≤ 0.30)

## Abstract

We present local structural evidence supporting the presence of charge inhomogeneities in the CuO$`_2`$ planes of underdoped $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$. High-resolution atomic pair distribution functions have been obtained from neutron powder diffraction data over the doping range $`0\le x\le 0.30`$ at 10 K. Although the average structure becomes less orthorhombic, we see a broadening of the in-plane Cu-O bond distribution as a function of doping up to optimal doping. Thereafter the peak abruptly sharpens. The peak broadening can be well explained by a local microscopic coexistence of doped and undoped material. This suggests a crossover from a charge-inhomogeneous state at and below optimal doping to a homogeneous charge state above optimal doping. The strong response of the local structure to the charge state implies a strong electron-lattice coupling in these materials. There is mounting interest in the possibility that the charge distribution in the CuO$`_2`$ planes of the high-temperature superconductors is microscopically inhomogeneous, and that this has a bearing on the high-temperature superconductivity itself. An inhomogeneous charge distribution has been predicted theoretically. There is also mounting experimental evidence that microscopic charge inhomogeneities exist in particular cuprate samples. The most compelling evidence for this comes from the neutron diffraction observation of stripes of localized charge in $`\mathrm{La}_{2-x-y}\mathrm{Nd}_y\mathrm{Sr}_x\mathrm{CuO}_4`$. However, such direct evidence for charge stripes has only been seen in insulating compounds or in samples where this ordering competes with superconductivity. The evidence supporting the presence of dynamic local charge-stripe distributions in the bulk of superconducting samples is based primarily on the observation of incommensurate spin fluctuations. We present local structural evidence that supports the fact that the charges are inhomogeneous in the underdoped and optimally doped region of the $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ phase diagram, consistent with the presence of charge domains or dynamic charge stripes. The charge distribution becomes homogeneous on crossing into the overdoped region. The clear observation of these effects in the local structure also underscores the point that there is a strong electron-lattice coupling, at least to particular distortion modes, in these materials. We note that the disappearance of the structural distortions, and therefore of the charge inhomogeneities, correlates with the disappearance of the normal-state pseudogap in these materials. The presence of charge inhomogeneities in the CuO$`_2`$ planes implies profound consequences for the local structure. It is well known that the average Cu-O bond length changes as the charge state of copper changes. Thus, the Cu-O bond in $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ shortens from 1.904 Å to 1.882 Å as $`x`$ changes from 0 to 0.2 and the average copper charge changes from $`2+`$ to $`2.2+`$. This is a generic feature of variably dopable HTS samples and comes about because the Cu-O bond is a covalent anti-bonding state which is stabilized by removing electron density from it.
Clearly, if the doped charge in the CuO<sub>2</sub> planes is inhomogeneously distributed, such that some copper sites have more charge than others, a distribution of in-plane Cu-O bond lengths will exist. A high-resolution measurement of the in-plane Cu-O bond length distribution as a function of doping will therefore reveal the extent of charge inhomogeneities. We have used the atomic pair distribution function (PDF) analysis of neutron powder diffraction data to measure accurate Cu-O bond length distributions with high resolution for a series of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> samples with ($`0\le x\le 0.3`$). The ability of high-resolution PDF studies to reveal local bond-length inhomogeneities which are not apparent in the average structure has been clearly demonstrated . The samples studied cover the range from undoped, through underdoped and optimally doped ($`x=0.15`$), to the overdoped regime. We find that at 10 K the mean-square width of the bond-length distribution, $`\sigma ^2`$, increases approximately linearly with $`x`$ until optimal doping, above which it sharply decreases and returns to the value of the undoped material by $`x=0.25`$. This is strong evidence for charge inhomogeneities in the under- and optimally-doped regimes, as we discuss below. This increase in bond-length distribution can be well explained by a linear superposition of the local structures of undoped and heavily doped material. Samples of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with $`x=0.0`$, 0.05, 0.1, 0.125, 0.15, 0.2, 0.25, and 0.3 were made using standard solid state synthesis. Mixtures of La<sub>2</sub>O<sub>3</sub>, SrCO<sub>3</sub> and CuO were reacted at temperatures between 900°C and 1050°C with intermediate grindings, followed by an annealing step in air at 1100°C for 100 hours and in oxygen at 800°C for 100 hours. The long anneals were carried out to ensure doping homogeneity in the samples. The lattice $`c`$-axis parameter is a sensitive measure of oxygen stoichiometry . Its value was determined for each of our samples through Rietveld refinement of the data. It varies smoothly with $`x`$, as expected for stoichiometric samples, and lies on the $`c`$-axis vs $`x`$ curve of Radaelli et al. within the scatter of their data. Neutron powder diffraction data were collected at 10 K on the Special Environment Powder Diffractometer (SEPD) at the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory. Approximately 10 g of finely powdered sample was sealed in a cylindrical vanadium tube with He exchange gas. The samples were cooled using a closed-cycle He refrigerator. The data were corrected for experimental effects and normalized to obtain the total structure function $`S(Q)`$, where $`Q`$ represents the neutron momentum transfer. The PDF, $`G(r)`$, is obtained by a Fourier transform of the data according to $`G(r)=\frac{2}{\pi }\int _0^{\mathrm{\infty }}Q[S(Q)-1]\mathrm{sin}(Qr)\,dQ`$. PDFs from these samples are shown in Figs. 5-8 of Ref. and in Fig. 3 of this paper. The PDFs examined in this paper used data over a range 0.7 Å<sup>-1</sup>$`<Q<28`$ Å<sup>-1</sup>. We are interested in extracting the width of the distribution of in-plane Cu-O bond lengths. This information is contained in the width of the first PDF peak at $`1.9`$ Å. The peak width comes from the thermal and zero-point motion of the atoms plus any bond-length distribution originating from other effects such as charge inhomogeneities. We can determine the latter by considering the width of this peak as a function of doping at 10 K.
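For concreteness, here is a minimal numerical sketch of the sine transform defined above (the grids, simple quadrature and file name are our illustrative choices; the real reduction also includes the experimental corrections and normalization mentioned in the text):

```python
import numpy as np

def pdf_from_sq(Q, S, r):
    """G(r) = (2/pi) * integral of Q [S(Q) - 1] sin(Q r) dQ,
    approximated over the measured Q-range with local spacings dQ."""
    w = np.gradient(Q)                 # quadrature weights ~ dQ
    integrand = Q * (S - 1.0)          # Q [S(Q) - 1]
    return (2.0 / np.pi) * np.sin(np.outer(r, Q)) @ (integrand * w)

# Hypothetical usage with data measured over 0.7 < Q < 28 inverse Angstroms:
# Q, S = np.loadtxt("sq_x015.dat", unpack=True)
# r = np.linspace(1.0, 20.0, 2000)
# G = pdf_from_sq(Q, S, r)
```

Truncating the integral at the experimental maximum $`Q`$ is what introduces the termination ripples discussed below.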
Three independent measures of the peak width all show that the width increases significantly with doping up to $`x=0.15`$, beyond which the peak quickly sharpens. First, this behavior is evident by simply looking at the data shown in Fig. 1. This shows the low-$`r`$ region of the PDF around the $`r=1.9`$ Å peak as a function of $`x`$. Since we want to compare the relative peak widths (and heights), in this Figure the peaks have been shifted to line up the peak centroids and rescaled slightly to ensure that the integrated intensity of each peak is the same. It is clear that some of the peaks are significantly broader than others with lower peak heights and less steeply sloping sides. We have quantified this by fitting the peak with a single Gaussian. The Gaussian is first convoluted with the Fourier transform of the step function which was used to terminate the data . This accounts for any termination ripples in the data introduced by the Fourier transform of the finite range data-set and doesn’t introduce any additional parameters into the fit. The fitting parameters were peak position, scale-factor and width. The baseline is set by the average number density of the material and this was fixed at $`\rho _0=0.07299`$ Å<sup>-3</sup>. The results are shown in Fig. 2 as the solid circles. The mean-square width of the distribution increases monotonically (and almost linearly) with $`x`$ until $`x=0.15`$. Between 0.15 and 0.2 the peak abruptly sharpens and returns to the width of the undoped sample by $`x=0.25`$. The same behavior can be obtained from the data in a totally model-independent way. If the integrated area of a Gaussian is constant the height is inversely proportional to the width. Thus, the peak height, $`h`$, of the rescaled data shown in Fig. 1 should be inversely proportional to the width. The open squares in Fig. 2 show $`C/h^2`$ where $`h`$ was determined directly from the peak maximum in the data and the constant $`C`$ was chosen to make the $`x=0.0`$ points line up. There is excellent agreement lending confidence to the results from the fitting. We would like to discuss possible origins for these doping dependent changes in Cu-O bond-length distribution. First we rule out the possibility that it simply comes from changes in the orthorhombicity of the sample. The Cu-O PDF peak first broadens smoothly with increasing doping then dramatically sharpens at a composition close to the LTO-HTT structural phase boundary. This behavior does not reflect the monotonic decrease in orthorhombicity of the average structure. Indeed, in the overdoped region the PDF peak returns to the same narrow width it had in the undoped material which has the largest orthorhombic distortion of any of the samples. We also note that the in-plane Cu-O bonds are not expected to be sensitive to the orthorhombic distortion. They lie along the unit cell diagonals and not along the unit cell edges in the orthorhombic unit cell. In this case an orthorhombic distortion will change the bond-length but will not lead to two distinct bond-lengths. Next we show that the observed behavior cannot be explained by doping dependent changes in the octahedral tilts. The average (and local) tilt amplitude monotonically decreases with increasing doping which does not correlate with the behavior of the Cu-O bond length distribution. On the other hand, PDF peaks at higher-$`r`$ do sharpen monotonically with increasing $`x`$ reflecting the reduced octahedral tilt amplitude . 
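Before turning to the alternative explanations that the text rules out next, here is a simplified sketch of the peak-width extraction described above (a plain Gaussian on the standard $`-4\pi \rho _0r`$ PDF baseline; the convolution with the Fourier-transformed termination function used in the actual analysis is omitted, so this is only an approximation):

```python
import numpy as np
from scipy.optimize import curve_fit

RHO0 = 0.07299  # fixed average number density (inverse cubic Angstroms)

def peak_model(r, scale, r0, sigma):
    """Single Gaussian for the ~1.9 Angstrom Cu-O peak, sitting on the
    PDF baseline -4*pi*rho0*r (termination convolution omitted here)."""
    gauss = scale * np.exp(-0.5 * ((r - r0) / sigma) ** 2)
    return gauss / (sigma * np.sqrt(2.0 * np.pi)) - 4.0 * np.pi * RHO0 * r

# r, G restricted to a window around the first peak, e.g. 1.6 < r < 2.2:
# popt, _ = curve_fit(peak_model, r, G, p0=(1.0, 1.9, 0.05))
# sigma_sq = popt[2] ** 2   # mean-square width plotted in Fig. 2
```

The model-independent cross-check follows from the same Gaussian algebra: at fixed integrated area the height obeys $`h\propto 1/\sigma `$, so $`\sigma ^2\propto 1/h^2`$, which is the quantity plotted as $`C/h^2`$.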
We also rule out the idea that the observed Cu-O peak broadening is an effect of size-effect disorder due to doping since the peak sharpens dramatically above $`x=0.2`$ where the dopant induced disorder should be the greatest. We also note that size-effect dopant induced disorder is expected to have a large effect on octahedral tilts and a small effect on Cu-O bond lengths since the energy to change a Cu-O-Cu bond angle is much less than the energy to stretch the short Cu-O covalent bond. Furthermore, the extent of dopant induced tilt disorder is relatively small in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> as evidenced by the observation that higher-$`r`$ PDF peaks sharpen on increased doping in this material. These peaks sharpen because of a decrease in both the average orthorhombicity and octahedral tilt angle with increased doping. However, significant size-effect octahedral tilt disorder due to the chemical dopants tends to counter this effect, as seen in La<sub>2-x</sub>Ba<sub>x</sub>CuO<sub>4</sub> . We can also rule out structural fluctuations associated with the HTT-LTO transition. First, these fluctuations will affect primarily octahedral tilts and local orthorhombicity (the two order parameters of this structural transition) and as we have discussed, the Cu-O bond is expected to be quite insensitive to disorder in these parameters. However, in addition, we would expect these fluctuations to be largest when the structural phase transition temperature, T<sub>s</sub>, is closest to our measurement temperature. These temperatures are closest for the $`x=0.2`$ sample (T<sub>s</sub>=60 K , T<sub>meas</sub>=10 K) and this sample exhibits a narrow distribution of Cu-O bond-lengths. The largest distribution of Cu-O bond-lengths is seen for $`x=0.15`$ where T<sub>s</sub>=180 K. The observed behavior of the Cu-O bond-length distribution is best explained by the presence of charge inhomogeneities. As we have described, the charge-state of copper has a direct effect on the Cu-O bond length with the bond-length decreasing with increasing doping. Charge inhomogeneities will, thus, give rise to a distribution of Cu-O bond lengths. Increased doping will result in more Cu-O bonds being affected and therefore a larger measured effect in the PDF, as observed. Above optimal doping the PDF peak width abruptly sharpens to its value in the undoped material. This is consistent with the idea that in the overdoped region, the charge distribution in the Cu-O planes is becoming homogeneous. We now discuss independent evidence from the data which supports this picture. In Fig. 3(a) we show the low-$`r`$ region of the PDF from the $`x=0.0`$ and $`x=0.25`$ samples. Referring to Fig. 2 we see that these two data-sets have relatively narrow Cu-O bond-length distributions. Furthermore, in Fig. 3(a) it is apparent that the peak position has shifted due to the change in the average Cu-O bond-length with doping, as expected. The difference curve below the data shows that the two data-sets are quite different due to the significant structural differences. In Fig. 3(b) we show the intermediate $`x=0.1`$ data-set plotted as open circles. This peak is centered at a position shown by the dashed line which is intermediate between the positions of the $`x=0.0`$ and $`x=0.25`$ data-sets. Referring to Fig. 2 we see that the Cu-O bond-length distribution is relatively broad at this composition. Plotted on top of the $`x=0.1`$ data-set in Fig. 
3(b) as the solid line is the PDF obtained by taking a linear combination of the $`x=0.0`$ and $`x=0.25`$ data-sets in the 1:1 ratio, without rescaling the data at all. The difference curve is shown below. The good agreement clearly demonstrates that the observed PDF peak position and broadening of the $`x=0.1`$ data-set is entirely consistent with there being an underlying bimodal bond-length distribution consistent with heavily doped and undoped regions of the CuO<sub>2</sub> plane. We can infer from this analysis that the difference in the bond-lengths is $`0.024`$ Å which is the difference between the average bond-lengths of the $`x=0.0`$ and $`x=0.25`$ samples. Finally, we note that the PDFs from underdoped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> are consistent with the presence of CuO<sub>6</sub> octahedral tilt disorder in the samples. In an earlier paper we showed that the measured PDFs could be well explained by a local structure which contains a mixture of large and small octahedral tilts. We note here that we also have evidence from the PDFs for the presence of tilt-directional disorder in the sense that there is a mixture of $`100`$ (“LTO”) and $`110`$ (“LTT”) symmetry tilts present in the local structure. This will be reported in detail elsewhere . To summarize, we have presented evidence from neutron diffraction data which strongly supports the idea that doped charge in the CuO<sub>2</sub> planes of superconducting La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`0<x0.15`$ and at 10 K is inhomogeneous. For doping levels of $`x=0.2`$ and above the charge distribution in the Cu-O plane becomes homogeneous. This presumably reflects a crossover towards more fermi-liquid-like behavior in the overdoped regime. This work was supported financially by NSF through grant DMR-9700966. SJB is a Sloan Research Fellow of the Alfred P. Sloan Foundation. GHK is funded at Los Alamos by the Department of Energy under Contract W-7405-ENG-36. The experimental data were collected at the IPNS at Argonne National Laboratory. This facility is funded by the US Department of Energy under Contract W-31-109-ENG-38.
# Searching for a 𝑊' at the Next-Linear-Collider using Single Photons (*Supported in part by the Natural Sciences and Engineering Research Council of Canada.) ## INTRODUCTION There have been many studies of processes sensitive to additional $`Z`$-like bosons ($`Z^{\prime }`$s) but comparatively few studies pertinent to $`W^{\prime }`$s (see for instance BR ; Hew ), especially at $`e^+e^{-}`$ colliders. Why? Firstly, there are fewer models which predict $`W^{\prime }`$s. Secondly, at LEP II energies, the $`W^{\prime }`$ signal is rather weak. Hence direct searches have been limited to hadron colliders, and the $`W^{\prime }`$-quark couplings are poorly constrained in some models. In this contribution we present preliminary results of an investigation of the sensitivity of the process $`e^+e^{-}\to \nu \overline{\nu }+\gamma `$ to $`W^{\prime }`$ bosons in various models. Our results are obtained by measuring the deviation from the standard model expectation. Interesting discovery limits are obtained for center-of-mass $`e^+e^{-}`$ energies of 500 GeV or higher, i.e., Next-Linear-Collider (NLC) energies. Direct searches have been performed and indirect bounds have been obtained for $`W^{\prime }`$s in a few models, the details of which are given later. Bounds for the Left-Right Symmetric Model (LRM) and the Sequential Standard Model (SSM) can be found in pdg . They are obtained from the non-observation of direct production of $`W^{\prime }`$s at the Tevatron and from indirect $`\mu `$-decay constraints. For the LRM (with equal left- and right-handed couplings) CDF obtains $`M_{W^{\prime }}\gtrsim 650`$ GeV and for the SSM, D0 finds $`M_{W^{\prime }}\gtrsim 720`$ GeV. From $`\mu `$-decay, the LRM $`W^{\prime }`$ is constrained to $`M_{W^{\prime }}\gtrsim 550`$ GeV Barenboim . A naive leading-order analysis for the SSM yields a bound of between 900 GeV and 1 TeV. One expects a somewhat higher bound than for the LRM since, in $`\mu `$-decay, there will be a $`W`$-$`W^{\prime }`$ interference term. The major limitation in this method is the uncertainty in the $`W`$ mass. The LHC will have a discovery reach in the TeV range LHC . The search is analogous to that done at the Tevatron, except for the higher energy and luminosity. On the down side, one has $`pp`$ instead of $`p\overline{p}`$, which means no valence-valence contribution in the large Feynman-$`x`$ region. For the LRM, the magnitude of the effect will also depend on the $`W^{\prime }`$-quark couplings, which are unknown. Therefore it is hard to make predictions a priori concerning discovery limits. The NLC search nicely sidesteps the above problem as no $`W^{\prime }`$-quark couplings enter. Other LHC disadvantages include a lack of initial-state quark polarizability, parton-distribution dependence and large QCD corrections. The latter problems will affect the ability to pin down the $`W^{\prime }`$ couplings. Hence, the complementary nature of the cleaner NLC measurement is obvious despite the LHC’s high energy reach. ## BASIC PROCESS The basic process under consideration is: $$e^{-}(p_1)+e^+(p_2)\to \gamma (p_3)+[\nu (p_4)+\overline{\nu }(p_5)],$$ (1) where the square brackets indicate that, since the neutrinos are not observed, we effectively only have single-photon production. The diagrams representing the leading Standard Model (SM) contribution are shown in Fig. 1. The $`W^{\prime }`$($`Z^{\prime }`$) contributions are obtained by replacing $`W\to W^{\prime }`$, $`Z\to Z^{\prime }`$ in the SM diagrams. Then one must include all interferences between SM and beyond-SM diagrams in the squared amplitude.
The resulting squared amplitude is quite short, including the spin dependence, which comes out automatically when expressing the result in terms of the left- and right-handed couplings. The result is quite general and includes an arbitrary number of $`W^{\prime }`$s and $`Z^{\prime }`$s. ## MODELS We have considered three models having $`W^{\prime }`$s which contribute to our process; they are briefly described below. Sequential Standard Model: This is the simplest $`W^{\prime }`$-containing extension of the SM, although not well motivated by theory. One has an extra $`W^{\prime }`$ which is heavier than the SM one, but which has identical couplings. Left-Right Symmetric Model: In this model LRM , the symmetry $`SU(3)_c\times SU(2)_L\times SU(2)_R\times U(1)_{B-L}`$ is obeyed, giving rise to a $`W^{\prime }`$ and a $`Z^{\prime }`$. The $`W^{\prime }`$ is purely right-handed; we do not consider mixing between the SM and beyond-SM bosons. The pure SM couplings remain unchanged and we take the new right-handed neutrinos to be massless. In principle they could be very heavy as well, but this would lead to decoupling of the $`W^{\prime }`$ from our process and we would effectively be left with a $`Z^{\prime }`$ model, which is not the principal interest of this study. Two parameters arise, $`\rho `$ and $`\kappa `$. For symmetry breaking via Higgs doublets (triplets), $`\rho =1`$ (2). $`\kappa `$ is defined by $`\kappa =g_R/g_L`$ and thus measures the relative strength of the $`W^{\prime }l\nu _l`$ and $`Wl\nu _l`$ couplings. It lies in the range Hew ; kappa $$0.55\lesssim \kappa \lesssim 2.$$ (2) More specifically, we have the coupling $$W^{\prime }l\nu =i\frac{g\kappa }{\sqrt{2}}\gamma ^\mu \frac{1+\gamma _5}{2},$$ (3) suggesting that larger values of $`\kappa `$ will lead to larger deviations from the SM. In addition, we have the relation $$M_{Z^{\prime }}^2=\frac{\rho \kappa ^2}{\kappa ^2-\mathrm{tan}^2\theta _W}M_{W^{\prime }}^2,$$ (4) so that $`\rho =1`$ leads to a lighter $`Z^{\prime }`$ mass for fixed $`M_{W^{\prime }}`$, which should yield a bigger effect versus $`\rho =2`$. Un-Unified Model (UUM): The UUM UUM obeys the symmetry $`SU(2)_q\times SU(2)_l\times U(1)_Y`$, again leading to a $`W^{\prime }`$ and a $`Z^{\prime }`$. Both new bosons are left-handed and generally taken to be approximately equal in mass. There are two parameters: a mixing angle $`\varphi `$, which represents a mixing between the charged bosons of the two SU(2) symmetries, and $`x=(u/v)^2`$, where $`u`$ and $`v`$ are the VEVs of the two scalar multiplets of the model. The relation $`M_{Z^{\prime }}\approx M_{W^{\prime }}`$ follows in the limit $`x/\mathrm{sin}^2\varphi \gg 1`$, and the parameter $`x`$ may be replaced by $`M_{W^{\prime }}`$, so that only $`\varphi `$ enters as a parameter for determining mass discovery limits. The leptonic couplings may be inferred from the Lagrangian $$\mathcal{L}_{\mathrm{lept}}=\frac{g\mathrm{sin}\varphi }{2\mathrm{cos}\varphi }[\sqrt{2}\overline{\psi }_{\nu _l}\gamma _\mu \psi _{l,L}W_2^{+,\mu }+(\overline{\psi }_{\nu _l}\gamma _\mu \psi _{\nu _l,L}-\overline{\psi }_l\gamma _\mu \psi _{l,L})Z_2^\mu ],$$ (5) where $`\psi _L=\frac{1}{2}(1-\gamma _5)\psi `$. The existing constraint on $`\varphi `$ is BR $$0.24\lesssim \mathrm{sin}\varphi \lesssim 0.99.$$ (6) ## CROSS SECTIONS As inputs, we take $`M_W=80.33`$ GeV, $`M_Z=91.187`$ GeV, $`\mathrm{sin}^2\theta _W=0.23124`$, $`\alpha =1/128`$, $`\mathrm{\Gamma }_Z=2.49`$ GeV. Let $`E_\gamma `$, $`\theta _\gamma `$ denote the photon’s energy and angle in the $`e^+e^{-}`$ center-of-mass frame, respectively. No binning or transverse momentum cuts have been explicitly introduced at this point.
However, we have restricted the range of $`E_\gamma `$, $`\theta _\gamma `$ as follows: $$E_\gamma >10\mathrm{GeV},\mathrm{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}10}^o<\theta _\gamma <170^o,$$ (7) so that the photon may be detected cleanly. As well, the angular cut eliminates the collinear singularity arising when the photon is emitted parallel to the beam. Figure 2 shows the total unpolarized cross section versus center-of-mass energy for the SM, LRM (taking $`\kappa =\rho =2`$), SSM and UUM (taking a representative value of $`\mathrm{sin}\varphi =0.6`$). Throughout, we use $`M_W^{}`$ = 750 GeV. Figure 3 shows the same, except for pure right- and left-handed $`e^{}`$ beams. The peaks are due to the $`Z^{}`$s in the LRM and UUM. As expected, the LRM gives a large effect when the $`e^{}`$ is right-handed while the SSM and UUM give a larger effect for a left-handed $`e^{}`$. The fact that the SM right-handed cross section goes to zero for large $`\sqrt{s}`$ indicates $`W`$ ($`t`$-channel) dominance well above the $`Z`$ pole. In Figure 4(a), we plot the differential cross section with respect to $`E_\gamma `$ for $`\sqrt{s}`$ of 500 GeV. The peak in the photon distributions is due to the radiative return to the $`Z`$ resonance. At higher energies, there are additional peaks in the $`E_\gamma `$ spectrum due to $`Z^{}`$s. We see that most of the contribution comes from the lower $`E_\gamma `$ range. This must be weighted with the deviation from the SM in order to gauge the relative statistical significance of the various energy regions. This is done in Figure 4(b) where $`(d\sigma /dE_\gamma d\sigma _{SM}/dE_\gamma )/\sqrt{d\sigma _{SM}/dE_\gamma }`$ is plotted as a function of $`E_\gamma `$. Indeed, we see that one benefits little from the region $`E_\gamma `$ above $`200`$ GeV in all models. In Figure 5 we plot the the differential cross section with respect to $`\mathrm{cos}\theta _\gamma `$ and the corresponding relative statistical significance. We see that both the cross section and relative statistical significance are peaked in the forward/backward direction and the distributions are very nearly symmetric in $`\mathrm{cos}\theta _\gamma `$. In Figures 4(b) and 5(b), the overall normalization is unimportant. Experimentally, some binning scheme will be adopted and each bin will carry a weight proportional to the beam luminosity. ## NLC $`W^{}`$ mass discovery Limits For $`\sqrt{s}`$ of 500 GeV, we take an integrated luminosity of $`50fb^1`$ and for $`\sqrt{s}`$ of 1 TeV and 1.5 TeV we take $`200fb^1`$. For the polarized limits we take half the above luminosities assuming equal running in both polarization states (of the $`e^{}`$ beam). We assume 90% $`e^{}`$ polarization unless otherwise stated. In obtaining limits, we have imposed the additional cut: $$E_\gamma <E_{\gamma ,\mathrm{max}},$$ (8) in order to cut out the high energy events, especially those near the $`Z`$ pole, which are insensitive to $`W^{}`$s and $`Z^{}`$s. It was found that $`E_{\gamma ,\mathrm{max}}`$ of 200, 350 and 500 GeV for $`\sqrt{s}`$ of $`500`$ GeV, $`1`$ TeV and $`1.5`$ TeV, respectively, lead to the best limits in general, although the limits were not very sensitive to moderate variations in $`E_{\gamma ,\mathrm{max}}`$. The limits are given at 95% confidence level and are calculated for all three energies. 
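To illustrate how binned deviations from the SM spectrum translate into such a 95% confidence-level reach, here is a schematic sketch assuming Gaussian per-bin statistics (the actual binning, efficiencies and backgrounds of the analysis are not reproduced; `spectrum` is a hypothetical function returning the binned differential cross section):

```python
import numpy as np

def chi2_vs_sm(dsigma_model, dsigma_sm, bin_widths, luminosity):
    """Chi-square between a W'-model photon spectrum and the SM one.
    dsigma_* are per-bin differential cross sections (fb/GeV), bin widths
    in GeV, luminosity in inverse fb; Gaussian statistics are assumed."""
    n_model = dsigma_model * bin_widths * luminosity  # expected events per bin
    n_sm = dsigma_sm * bin_widths * luminosity
    return np.sum((n_model - n_sm) ** 2 / n_sm)

# Schematic scan: the 95% CL reach is roughly the largest mass for which
# chi2 still exceeds 3.84 (the 95% point of a one-parameter chi-square).
# for mass in np.arange(500.0, 3000.0, 50.0):      # GeV, illustrative grid
#     if chi2_vs_sm(spectrum(mass), spectrum_sm, widths, 50.0) < 3.84:
#         print("reach is about", mass, "GeV"); break
```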
Figure 6 presents the $`W^{}`$ mass discovery limits obtainable with an unpolarized beam for the LRM, plotted versus $`\kappa `$ for $`\rho =1,2`$. Depending on $`\sqrt{s}`$, $`\rho `$ and $`\kappa `$, they range from 600 GeV to 2.5 TeV. The predicted dependence on $`\kappa `$ and $`\rho `$ is generally observed, except at low $`\kappa `$ where we notice a moderate increase in the limits, even though the $`W^{}`$ couplings have weakened. We attribute this effect to the $`Z^{}`$, whose couplings are enhanced (but its mass increased) in the low $`\kappa `$ region. This is evidenced by the appreciable improvement in the bounds for low $`\kappa `$ and $`\rho =1`$. Figure 7 demonstrates the improvement in bounds in the moderate to large $`\kappa `$ region obtained when a (90%) polarized right-handed $`e^{}`$ beam is used. The beam polarization picks out the LRM $`W^{}`$ and suppresses the SM $`W`$. Increase of the polarization can lead to even higher limits as shown in the rightmost column of Table 1, where limits are tabulated for all three models. The highest limits are obtained for the SSM in most scenarios, except when $`\mathrm{sin}\varphi `$ is large, as indicated in Figure 8 which shows the limits in the UUM versus $`\mathrm{sin}\varphi `$. We observe a turn-on in sensitivity for $`\mathrm{sin}\varphi \stackrel{>}{}0.62`$ at $`\sqrt{s}=500`$ GeV and 1 TeV, while for $`\sqrt{s}=1.5`$ TeV this occurs for $`\mathrm{sin}\varphi \stackrel{>}{}0.73`$. The interference term may play a role in this behaviour. From another perspective, for fixed $`\mathrm{sin}\varphi `$, one may observe sudden changes in sensitivity as $`\sqrt{s}`$ is varied as can be seen from the changing of the sign of the effect on the cross section in Figs 2,3. The result is that for $`0.62\stackrel{<}{}\mathrm{sin}\varphi \stackrel{<}{}0.72`$, we obtain better limits at $`\sqrt{s}=1`$ TeV than we do at $`\sqrt{s}=1.5`$ TeV. For both the UUM and the SSM, where the $`W^{}`$s are left-handed, there is little benefit from polarization. The reason is that all the effect comes from the left-handed $`e^{}`$ initial state, which also dominates the unpolarized cross section. After folding in the luminosity decrease associated with running in a particular polarization state, all benefits are lost. ## SUMMARY AND OUTLOOK The usefulness of the process $`e^+e^{}\nu \overline{\nu }+\gamma `$ in searching for $`W^{}`$s has been demonstrated and should be complimentary to direct searches at the LHC. The results of our study will be extended to include cuts on the transverse momentum of the photon to reduce backgrounds (primarily from radiative Bhabha scattering with undetected $`e^+e^{}`$) and to examine the effect of binning. All remaining backgrounds will have to be included in the final analysis of the data and are currently under investigation, but are not expected to significantly affect our limits. Other models are also under consideration.
# Elastic Effects in Disordered Nematic Networks ## Abstract Elastic effects in a model of disordered nematic elastomers are numerically investigated in two dimensions. Networks crosslinked in the isotropic phase exhibit an unusual soft mechanical response against stretching. It arises from gradual alignment of orientationally correlated regions that are elongated along the director. A sharp crossover to a macroscopically aligned state is obtained on further stretching. The effect of random internal stress is also discussed. Nematic elastomers and gels exhibit rich mechanical effects due to the elasticity-orientation coupling . While a considerable number of theoretical studies have been directed to homogeneous systems, nematic elastomers are often in a highly non-uniform polydomain state, in which the correlation length for the director orientation is typically of micron scale. Polydomain networks show an unusual non-linear elastic response against stretching, often with an extremely low stress over a large interval of strain . As the strain is increased, the directors gradually rotate toward the direction of stretching until a macroscopically aligned state is attained. This structural change is called the polydomain-monodomain (P-M) transition. Attempting to describe the presumably equilibrium polydomain textures, Terentjev and coworkers proposed a random-field model analogous to those for random anisotropy magnets. They argued that crosslinkers of anisotropic shapes act as sources of quenched disorder. On the other hand, the mechanical response is not yet well understood. It is known that elasticity-mediated long-range interactions among spatial inhomogeneities are crucial in systems such as metallic alloys and gels . For polydomain networks, the role of elastic interactions among orientationally correlated regions (“domains”) is yet to be clarified. In this Rapid Communication, we numerically investigate the mechanical response and the domain structure of model nematic networks incorporating both rubber elasticity and quenched random anisotropy. An unusual soft response is obtained and is explained in terms of the elastic interaction. We briefly discuss the effect of random internal stress as another kind of quenched disorder which can destroy long-range orientational order . The total free energy of our model system is of the form $`F=F_{el}+F_R+F_F`$, where $`F_{el}`$, $`F_R`$, and $`F_F`$ are, respectively, the rubber-elastic, random disorder, and Frank contributions. We assume networks brought deep into the nematic phase after crosslinking in the isotropic phase, and apply the affine-deformation theory of nematic rubber elasticity due to Warner et al. Then $`F_{el}`$ is written in terms of the symmetric deformation tensor $`W_{ij}=(\partial r_i/\partial r_k^0)(\partial r_j/\partial r_k^0)`$, where $`r_i^0`$ and $`r_i`$ are the Cartesian coordinates of the material point at the moment of crosslinking and after deformation, respectively. Summation over repeated indices is implied throughout the paper unless otherwise stated. It is convenient to rewrite the original form of $`F_{el}`$ using the tensor $`Q_{ij}=n_in_j-\delta _{ij}/d`$, where $`𝒏`$ is the director, to obtain $`F_{el}={\displaystyle \frac{\mu }{2}}{\displaystyle \int d𝒓\,(W_{ii}-\alpha Q_{ij}W_{ij})}.`$ (1) The dimensionless coupling constant $`\alpha (>0)`$ is determined by the chain anisotropy and does not exceed $`d/(d-1)`$.
The modulus $`\mu `$ is given by $`k_BT`$ multiplied by the crosslink number density and a numerical prefactor ($`\sim 1`$) which is weakly dependent on the temperature. We consider the incompressible limit and impose the constraint $`\mathrm{det}𝖶=1`$. The disorder free energy is assumed in the form given in , and is rewritten as $`F_R={\displaystyle \int d𝒓\,P_{ij}Q_{ij}},`$ (2) where $`P_{ij}`$ is a symmetric, traceless, Gaussian random tensor with vanishing quenched average ($`\overline{P_{ij}(𝒓)}=0`$) and with variance $`\overline{P_{ij}(𝒒)P_{kl}(-𝒒)}=U\left(\delta _{ik}\delta _{jl}+\delta _{il}\delta _{jk}-{\displaystyle \frac{2}{d}}\delta _{ij}\delta _{kl}\right).`$ (3) For the Frank free energy we assume the form $`F_F={\displaystyle \frac{K}{2}}{\displaystyle \int d𝒓\,(\nabla 𝒏)^2}.`$ (4) Here we treat the two-dimensional case for numerical and analytical advantages. Then, in the absence of elasticity, our model reduces to the random-anisotropy 2D XY model by regarding the unit vector $`𝒎=(2Q_{xx},2Q_{xy})=(\mathrm{cos}2\theta ,\mathrm{sin}2\theta )`$ as the spin variable, where $`\theta `$ is the director orientation defined by $`𝒏=(\mathrm{cos}\theta ,\mathrm{sin}\theta )`$. We consider deformations of the form $`r_i=\lambda _ir_i^0+u_i`$ (no summation) where $`\lambda _x=\lambda `$ and $`\lambda _y=1/\lambda `$ express the average deformation, and $`𝒖=𝒖(𝒓)`$ denotes the internal displacement. Cooling into the nematic phase tends to induce spontaneous elongation along the director . If the directors are uniformly aligned along the $`x`$-axis, the elastic free energy (1) is minimized at $`\lambda =\lambda _m`$ and $`𝒖=\text{0}`$ with $`\lambda _m=\left({\displaystyle \frac{1+\alpha /2}{1-\alpha /2}}\right)^{1/4}.`$ (5) However, if the random field is sufficiently strong, the system under no external stress energetically favors a macroscopically isotropic state with $`\lambda =1`$. This can be identified with the polydomain state. Our questions concern how domains spontaneously deform and to what degree the elastic free energy is reduced in this highly non-uniform state. The mechanical response was numerically simulated by varying the macroscopic strain $`\lambda `$ and minimizing the free energy with respect to $`𝒏`$ and $`𝒖`$ for each value of $`\lambda `$. We solved the Langevin equation for the director, $`{\displaystyle \frac{\partial 𝒏}{\partial t}}=-(𝑰-𝒏𝒏)\left(L{\displaystyle \frac{\delta F}{\delta 𝒏}}+𝜼\right),`$ (6) where $`𝜼`$ is an uncorrelated Gaussian thermal noise introduced to facilitate structural evolution. Without the noise the minimization process would stop at one of the local minima close to the initial configuration. After approaching the global minimum we turned off the noise as explained below. The displacement $`𝒖`$ was determined by solving the non-linear equation $`\delta (F_{el}+F_v)/\delta 𝒖=0`$ with a relaxation method, where $`F_v`$ is an artificial free energy functional of $`𝒖`$ which penalizes volume change. In this way the local volume was kept constant with errors below $`1\%`$ throughout the runs. Periodic boundary conditions were imposed on $`𝒏`$ and $`𝒖`$. The simulation was performed on a $`128^2`$ square lattice with grid size $`\mathrm{\Delta }x=1`$. We set $`K=4`$ and $`U=1`$ for all the runs, whereas $`\mu `$ and $`\alpha `$ were varied for different runs. In each run the external strain $`\lambda `$ was slowly increased after an initial equilibration stage at $`\lambda =1`$. Occasionally, we stopped the increase of $`\lambda `$ and turned off the thermal noise for an interval of time.
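To make the minimization procedure concrete, here is a minimal sketch of one noisy relaxation step for eq. (6) (an explicit Euler update with a projected drive; the step size, noise normalization and array layout are our illustrative choices, not necessarily those of the actual runs):

```python
import numpy as np

def langevin_step(n, dF_dn, dt, L, noise_amp, rng):
    """One explicit-Euler update of eq. (6):
    dn/dt = -(I - n n) (L dF/dn + eta).
    `n` has shape (Nx, Ny, 2) with |n| = 1 at every site; `dF_dn` is the
    functional derivative evaluated on the current configuration."""
    eta = noise_amp * rng.standard_normal(n.shape)
    drive = L * dF_dn + eta
    # keep only the component transverse to n, as the projector (I - n n) does
    drive -= np.sum(drive * n, axis=-1, keepdims=True) * n
    n_new = n - dt * drive
    # restore unit length (the projection preserves it only to first order in dt)
    return n_new / np.linalg.norm(n_new, axis=-1, keepdims=True)

# rng = np.random.default_rng(0)
# n = langevin_step(n, functional_derivative(n, u, lam), 0.01, 1.0, 0.1, rng)
```

Here `functional_derivative` stands for a user-supplied evaluation of $`\delta F/\delta 𝒏`$; annealing corresponds to calling this with a finite `noise_amp`, and quenching to setting it to zero.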
Thus a single run consisted of alternating periods of annealing (with increasing strain) and quenching. In each quench period we computed the spatially-averaged free energy density $`f=f_{el}+f_R+f_F`$ and the orientation $`S=2Q_{xx}=\mathrm{cos}2\theta `$. This procedure enabled us to approximately minimize the free energy at numerous values of $`\lambda `$ in reasonable computational time. For a further check, we then decreased $`\lambda `$ back from a large value in a similar manner. A small hysteresis was obtained but it does not affect the description below. In Fig.1 we show the strain-stress and strain-orientation relations for several values of $`\alpha `$ with $`\mu \alpha ^2=4`$ fixed. In both plots we can see a sharp crossover around $`\lambda =\lambda _m(\alpha )`$. Below $`\lambda _m`$ the total stress $`f/\lambda `$ is vanishingly small and slightly positive. The average orientation increases almost linearly in the same region. The free energy densities are plotted in Fig.2. The elastic free energy has a slightly negative slope in the region $`\lambda <\lambda _m`$, while the disorder free energy has a positive slope and makes the total stress slightly positive. We chose the parameters so that the Frank contribution is much smaller than $`\mu \alpha ^2`$, which is considered to be the case in typical experiments. We also studied a few cases with stronger or weaker elastic effects. For larger values of $`\mu \alpha ^2`$ the shapes of the strain-elastic stress and strain-orientation curves were almost unchanged. For cases with $`\mu \alpha ^2\stackrel{<}{}\mathrm{\hspace{0.33em}0.2}`$ these two curves exhibited less sharp crossovers. In order to understand the origin of the soft response it is useful to examine the structure of the polydomain state at $`\lambda =1`$, for which an analytical treatment is possible in the weak coupling case $`\alpha 1`$. We expand $`\mathrm{\Delta }F_{el}=F_{el}[𝒖]F_{el}[0]`$ with respect to $`𝒖`$ to obtain $`\mathrm{\Delta }F_{el}=\mu {\displaystyle 𝑑𝒓\left[\frac{1}{4}\left(\frac{u_i}{r_j}+\frac{u_j}{r_i}\right)^2\alpha Q_{ij}\frac{u_i}{r_j}\right]}.`$ (7) Eliminating the elastic field using the conditions of mechanical equilibrium $`\delta \mathrm{\Delta }F_{el}/\delta 𝒖=0`$ and incompressibility $`𝒖=0`$, we have a non-local elastic interaction among orientational inhomogeneities. We define new variables $`Q_1(𝒓)`$ and $`Q_2(𝒓)`$ through their Fourier transforms, $`Q_1(𝒒)`$ $`=`$ $`\mathrm{sin}(2\phi )Q_{xx}(𝒒)\mathrm{cos}(2\phi )Q_{xy}(𝒒),`$ (8) $`Q_2(𝒒)`$ $`=`$ $`\mathrm{cos}(2\phi )Q_{xx}(𝒒)+\mathrm{sin}(2\phi )Q_{xy}(𝒒),`$ (9) where $`\phi `$ is the azimuthal angle of the wave-vector $`𝒒=q(\mathrm{cos}\phi ,\mathrm{sin}\phi )`$. Then the average free energy density reads $`f_{el}|_{\lambda =1}=\mu \left(1{\displaystyle \frac{\alpha ^2}{2}}Q_1^2\right)`$ (10) to order $`\alpha ^2`$. Note that $`Q_1`$ and $`Q_2`$ satisfy $`Q_1^2+Q_2^2=Q_{xx}^2+Q_{xy}^2=1/4`$. We have $`Q_1^2=Q_2^2=1/8`$ in the absence of the elastic coupling. In its presence, asymmetry $`Q_1^2>Q_2^2`$ arises to reduce the elastic free energy (10). In the elasticity-dominated limit where $`\mu \alpha ^2`$ is much larger than the disorder and the Frank free energy densities, we expect $`Q_1^21/4`$, $`Q_2^20`$, and $`f_{el}|_{\lambda =1}\mu (1\alpha ^2/8)`$. Indeed these are numerically confirmed as shown in the next paragraph. 
On the other hand, the elastic free energy density under the uniform deformation with $`\lambda =\lambda _m`$ is also given by $`\mu (1\alpha ^2/8)`$ to order $`\alpha ^2`$. Thus, in the above limit, the P-M transition accompanies only a small change of order $`\alpha ^3`$ in the elastic free energy. To see how each domain is deformed at $`\lambda =1`$, we consider the local elastic stress which is given as $`\sigma _{ij}=\mu (_iu_j+_ju_i\alpha Q_{ij})`$ from (7). After some calculation, its variance in the mechanical equilibrium is obtained as $`\sigma _{ij}^{}{}_{}{}^{2}=2\mu ^2\alpha ^2Q_2^2.`$ (11) In the elasticity-dominated limit the variance of the quantity $`\mu ^1\sigma _{ij}=_iu_j+_ju_i\alpha Q_{ij}`$ vanishes due to the factor $`Q_2^2`$ in (11), which means that each part of the system is elongated by $`1+\alpha /4(\lambda _m)`$ times along the local director. This, together with the numerical result on the mechanical response, supports the following simple picture : In the polydomain state each domain is uniaxially elongated by $`\lambda _m`$ times along the local director, and thus the elastic free energy is equal to that for the monodomain state at $`\lambda =\lambda _m`$ (Fig.3). The P-M transition in the region $`1<\lambda <\lambda _m`$ proceeds via rotation of domains and does not change the elastic free energy. Next we present numerical results on the polydomain structure at $`\lambda =1`$, which was studied through the correlation function $`G(r)=2Q_{ij}(𝒓)Q_{ij}(0)`$ and the degree of structural asymmetry $`A=Q_1^2Q_2^2`$. To accelerate computation of the elastic field we assumed a weak coupling $`\alpha =0.1`$ and solved $`\delta \mathrm{\Delta }F_{el}/\delta 𝒖=0`$ under the constraint $`𝒖=0`$ using FFT, instead of the relaxation method above. The amplitude of the thermal noise was set constant in an initial stage and then gradually reduced to zero at a constant rate. The correlation function is computed for the final state and averaged over $`20`$ independent runs for each set of parameters. Runs were sufficiently long to insure that the initial configurations with uniform and random orientations give indistinguishable results for $`G(r)`$. Shown in Figs.4 and 5 are the correlation function and the correlation length $`R`$ defined by $`G(R)/G(0)=1/2`$. The elastic coupling increases the correlation length without qualitatively affecting the form of the correlation function. We could not deduce a quantitative decay law for $`G(r)`$ from the relatively small number of samples, but the decay was slightly faster than exponential near the origin. For the non-elastic case the same feature was obtained in the Monte Carlo simulation by Gingras and Huse in the presence of thermal noise, while Yu et.al. obtained exponential decay using free boundary conditions. Another important factor affecting $`G(r)`$ is the disorder strength. More systematic study of the decay law is left to future work. In Fig.5 the degree of asymmetry $`A`$ is also shown. With increasing the magnitude of the elastic interaction it approaches to the upper limit $`1/4`$ as expected. Finally we discuss the effect of random internal stress arising from microscopic heterogeneities in the network structure, which are intrinsic to gels . We restrict our discussion to the case $`\lambda =1`$ with small internal deformations. 
In the expansion of the elastic free energy with respect to $`𝒖`$ there will arise an additional term, $`\mathrm{\Delta }F_{el,R}={\displaystyle 𝑑𝒓R_{ij}\frac{u_i}{r_j}},`$ (12) where $`R_{ij}`$ is the Gaussian random stress with $`R_{ij}(𝒓)=0`$ and $`R_{ij}(𝒒)R_{kl}(𝒒)=V_1\delta _{ij}\delta _{kl}+V_2(\delta _{ik}\delta _{jl}+\delta _{il}\delta _{jk}).`$ (13) Eliminating the elastic field from $`\mathrm{\Delta }F_{el}+\mathrm{\Delta }F_{el,R}`$ we have a new interaction free energy $`\alpha 𝑑𝒓R_1^SQ_1`$, where $`R_1^S`$ is defined using the shear component $`R_{ij}^S=R_{ij}R_{kk}\delta _{ij}/d`$ by an equation parallel to (8) as $`R_1^S(𝒒)`$ $`=`$ $`\mathrm{sin}(2\phi )R_{xx}^S(𝒒)\mathrm{cos}(2\phi )R_{xy}^S(𝒒).`$ (14) Treating this interaction as a weak perturbation as in , we can see that it renders the equilibrium correlation length finite even in the absence of the disorder free energy (3). We mention that Golubović and Lubensky discussed another mechanism of long-range-orientational-order breaking due to random stress. Their argument is based on the observation that the amplitude of thermal fluctuations around a uniformly aligned state diverges. Its relevance to the present case of nematic networks is limited in that their free energy does not explicitly include the orientational degree of freedom. To summarize, we have numerically obtained a soft mechanical reponse during the P-M transition. It originates from structural self-organization of domains due to the long-range elastic interaction, and should be distinguished from the soft elasticity of uniformly oriented networks. The elastic contribution to the stress is slightly negative in the transition region. We have found a positive disorder contribution to the stress. The elastic interaction is found to increase the correlation length. We have demonstrated that random internal stress acts as a random field on the director. Further experimental and theoretical studies are necessary to examine its relevance to real polydomain textures. The author gratefully acknowledges Prof. A. Onuki for helpful discussions and a critical reading of the manuscript, and Dr. E. M. Terentjev for valuable comments on our related work.
# Yang-Mills theory as an Abelian theory without gauge fixing Sergei V. Shabanov<sup>1</sup> (<sup>1</sup>on leave from Laboratory of Theoretical Physics, JINR, Dubna, Russia) L.P.T.H.E., Université Pierre et Marie Curie, 4 place Jussieu, Tour 16, 1er etage, Paris Cedex 05, F-75252, France ## Abstract A general procedure to reveal an Abelian structure of Yang-Mills theories by means of a (nonlocal) change of variables, rather than by gauge fixing, in the space of connections is proposed. The Abelian gauge group is isomorphic to the maximal Abelian subgroup of the Yang-Mills gauge group, but is not its subgroup. A Maxwell field of the Abelian theory contains topological degrees of freedom of the original Yang-Mills fields which generate monopole-like and flux-like defects upon an Abelian projection. ’t Hooft’s conjecture that “monopole” dynamics is projection independent is proved for a special class of Abelian projections. A partial duality and a dynamical regime in which the theory may have massive excitations being knot-like solitons are discussed. 1. General remarks. One of the physical scenarios of color confinement is based on the idea that the vacuum state of quantum Yang-Mills theory is realized by a condensate of monopole-antimonopole pairs . In such a vacuum the field between two colored sources would be squeezed into a tube whose energy is proportional to its length. The picture is dual to the magnetic monopole confinement in a superconductor of the second kind. Monopoles as classical solutions with finite energy are absent in a pure Yang-Mills theory. To realize the dual scenario of the confinement, ’t Hooft proposed an Abelian projection where the gauge group is broken by a suitable gauge condition to its maximal Abelian subgroup . Since the topology of the SU(N) manifold and that of its maximal Abelian subgroup \[U(1)\]<sup>N-1</sup> are different, any such gauge is singular, meaning that a gauge group element which transforms a generic SU(N) connection onto the gauge fixing surface is not regular everywhere in spacetime. The singularities may form worldlines that are usually interpreted as worldlines of magnetic monopoles (whose charges are defined with respect to the unbroken Abelian subgroup). As a result the original Yang-Mills theory turns into electrodynamics with magnetic monopoles. Recent numerical simulations show that the monopole degrees of freedom in the Abelian projection can indeed form a condensate responsible for the confinement . Although the numerical results look rather appealing and stimulating, they still do not provide us with an understanding of the confinement mechanism and of the nonperturbative spectrum in Yang-Mills theory. In particular, monopoles seem to emerge as an artifact of gauge fixing. The Abelian group appears as a subgroup of the full Yang-Mills gauge group. However, one can easily construct colored states which are singlets with respect to the unbroken (maximal) Abelian subgroup and which, hence, would not be confined even if the “monopoles” condense. A choice of the gauge may be convenient in practical computations. However, no physical phenomenon can depend on it. This suggests that in Yang-Mills theory there is a new mechanism of confinement at work which has yet to be understood, and the reason why Abelian projections work so well in the lattice theory must be explained in a gauge independent way. A first and necessary step in this direction is to reveal an Abelian structure of Yang-Mills theory without any gauge fixing.
In this letter, a Yang-Mills theory is reformulated as an Abelian gauge theory via a (nonlocal) change of variables in the space of connections, rather than via a gauge fixing (or an Abelian projection). In particular, it turns out to be possible to construct the field variables in the Abelian theory so that they are invariant under the original non-Abelian gauge transformations. Therefore an effective Abelian structure is inherent to the Yang-Mills theory and gauge independent. An Abelian vector potential carries some topological degrees of freedom of the original Yang-Mills connection which generate monopole-like and flux-like defects upon an Abelian projection (in which the gauge group is broken to its maximal Abelian subgroup by a gauge fixing ). For a rather wide class of Abelian projections, which have the characteristic property that topological defects occur in the Abelian components of projected connections, we offer theoretical arguments to prove ’t Hooft’s conjecture that the dynamics of “monopoles” does not depend on the choice of a projection, i.e., it is gauge independent. A generalization of the parameterization of the Yang-Mills connection proposed by Faddeev and Niemi is considered as a special example. While revealing a partial duality in Yang-Mills theory, it has the important advantage that it is a genuine change of variables in the functional integral. Therefore it provides a description of the off-shell dynamics of physical degrees of freedom which is compatible with the Gauss law. Following the Wilsonian arguments of , we discuss the partial duality in the theory and a dynamical regime in which the topological degrees of freedom may form massive excitations being knot-like solitons. 2. Gauge group SU(2). Let $`𝑨_\mu `$ be an SU(2) connection. Consider the following parameterization of the connection $$𝑨_\mu =𝜶_\mu +𝒏C_\mu +𝑾_\mu ,\qquad 𝜶_\mu =g^{-1}\partial _\mu 𝒏\times 𝒏,\qquad 𝑾_\mu \cdot 𝒏=0,$$ (1) where $`g`$ is a coupling constant, $`𝜶_\mu `$ is a connection introduced by Cho , $`𝒏`$ is a unit isotopic vector, $`𝒏^2=1`$. The dot and cross stand, respectively, for the dot and cross products in the isotopic space, whose elements are denoted by boldface letters. Relation (1) is not yet a genuine change of variables in the affine space of connections. Two more conditions have to be imposed on $`𝑾_\mu `$ in order for (1) to be a change of variables. We may set in general $$𝝌(𝑾,𝒏,C)=0,\qquad 𝝌\cdot 𝒏\equiv 0.$$ (2) The function $`𝝌`$ can be chosen so that a solution of Eq. (2) determines a local and explicit parameterization of the eight components in $`𝑾_\mu `$ by six functional variables (see section 5), thus leading to a generalization of the parameterization given in . We will also show that some $`𝝌`$’s, for which Eq. (2) admits nonlocal parameterizations of the SU(2) connection, can naturally be associated with ’t Hooft’s Abelian projections, where an Abelian vector potential contains magnetic monopoles described by the field $`𝒏`$. Before specifying $`𝝌`$ let us first analyze the gauge transformation law of the new variables. An infinitesimal gauge transformation of the SU(2) connection reads $$\delta 𝑨_\mu =g^{-1}\nabla _\mu (𝑨)𝝋=g^{-1}\left[\partial _\mu 𝝋+g𝑨_\mu \times 𝝋\right].$$ (3) From (1) we infer $$C_\mu =𝒏\cdot 𝑨_\mu ,\qquad 𝑾_\mu =g^{-1}𝒏\times \nabla _\mu (𝑨)𝒏.$$ (4) Substituting these relations into (2) and solving them for $`𝒏`$ (two equations for the two independent variables in $`𝒏`$), we find $`𝒏=𝒏(𝑨)`$. The latter together with (4) specifies the inverse change of variables. Let $`\delta 𝒏`$ be an infinitesimal gauge transformation of $`𝒏`$.
Then from (4) and (3) it follows that $`\delta C_\mu =𝑨_\mu \cdot (\delta 𝒏-𝒏\times 𝝋)+g^{-1}𝒏\cdot \partial _\mu 𝝋,`$ (5) $`\delta 𝑾_\mu =𝑾_\mu \times 𝝋-𝒏[𝑾_\mu \cdot (\delta 𝒏-𝒏\times 𝝋)]+g^{-1}𝒏\times \partial _\mu (\delta 𝒏-𝒏\times 𝝋),`$ (6) where we have used that $`𝒏\cdot \delta 𝒏=0`$. An explicit form of $`\delta 𝒏`$ can be found from the equation $`\delta 𝝌(𝒏,𝑨)=0`$ (taken on the surface $`𝝌=0`$), where $`𝝌(𝒏,𝑨)`$ is obtained by a substitution of (4) into $`𝝌(𝑾,C,𝒏)`$. We emphasize that $`\delta 𝒏`$ is determined by the choice of $`𝝌`$, and so are $`\delta C_\mu `$ and $`\delta 𝑾_\mu `$. Let us introduce a local orthonormal basis in the isotopic space, $`𝒏`$, $`𝒏_r`$ and $`𝒏_r^{*}`$: $`𝒏_r\cdot 𝒏=0`$, $`𝒏_r^2=0`$ and $`𝒏_r\cdot 𝒏_r^{*}=1`$. We also have $`𝒏\times 𝒏_r=i𝒏_r`$ and $`𝒏_r\times 𝒏_r^{*}=i𝒏`$. With $`𝒏`$ fixed, the basis is determined modulo local transformations $$𝒏_r\to e^{i\xi }𝒏_r.$$ (7) It should be noted that this gauge freedom is not associated with the gauge group of the Yang-Mills theory because the new variables remain unchanged under (7). The local basis may not exist globally and the field $`𝒏_r`$ may have singularities. The reason is as follows. At spatial infinity, the connection must be a pure gauge. Therefore $`𝒏`$ becomes a constant as $`|𝒙|`$ approaches infinity, say, $`𝒏_0=(0,0,1)`$. The field $`𝒏`$ is a map of the three-sphere $`S^3`$, being the compactified space, to the target two-sphere $`S^2`$ in the isotopic space. The homotopy group $`\pi _3(S^2)\simeq Z`$ is not trivial. Integers from $`Z`$ are given by the Hopf invariant. If one attempts to transform $`𝒏`$ to $`𝒏_0`$ everywhere in space by a rotation, the rotation matrix will be ill-defined on some closed and, in general, knotted contours. Another type of singularity is associated with the homotopy group $`\pi _2(S^2)\simeq Z`$ when the field $`𝒏`$ is restricted to some $`S^2`$ being a subspace of $`S^3`$. We will show that in the new variables (1) the Yang-Mills theory looks like an Abelian theory in which a Maxwell potential contains magnetic monopoles and fluxes associated with the nontriviality of $`\pi _2(S^2)`$ and $`\pi _3(S^2)`$, respectively. Consider the decomposition $$𝑾_\mu =W_\mu ^{*}𝒏_r+W_\mu 𝒏_r^{*}.$$ (8) The field strength is, by definition, $`𝑭_{\mu \nu }(𝑨)=\partial _\mu 𝑨_\nu -\partial _\nu 𝑨_\mu +g𝑨_\mu \times 𝑨_\nu `$. In particular, $`𝑭_{\mu \nu }(𝜶+𝒏C)=𝒏(C_{\mu \nu }-H_{\mu \nu }),\qquad C_{\mu \nu }=\partial _\mu C_\nu -\partial _\nu C_\mu ,`$ (9) $`H_{\mu \nu }=g^{-1}𝒏\cdot (\partial _\mu 𝒏\times \partial _\nu 𝒏)=\partial _\mu H_\nu -\partial _\nu H_\mu -H_{\mu \nu }^{(st)},`$ (10) where $`H_\mu =ig^{-1}𝒏_r^{*}\cdot \partial _\mu 𝒏_r=H_\mu ^{*}`$ and $`H_{\mu \nu }^{(st)}=ig^{-1}𝒏_r^{*}\cdot [\partial _\mu ,\partial _\nu ]𝒏_r=H_{\mu \nu }^{(st)*}`$, which is the field strength of the Dirac strings associated with the singularities of the local basis. For example, the Wu-Yang monopole configuration is determined by $`𝒏=𝒙/r`$, $`r^2=𝒙^2`$ and $`C_\mu =W_\mu =0`$. The Dirac string is extended along the negative part of the $`z`$-axis (if $`𝒙=(x,y,z)`$). It is also not difficult to give an example of $`𝒏`$ for which the Dirac strings would form closed linked and/or knotted contours (see, e.g., ). Thus, the vector potential $`H_\mu `$ describes possible monopole-like and closed-flux-like degrees of freedom, to which we refer as the topological degrees of freedom of the Yang-Mills theory. After a modest computation we obtain $$𝑭_{\mu \nu }^2=\left[C_{\mu \nu }-H_{\mu \nu }+ig(W_\mu ^{*}W_\nu -W_\nu ^{*}W_\mu )\right]^2+\left|D_\mu W_\nu -D_\nu W_\mu \right|^2,$$ (11) where $`D_\mu W_\nu =\partial _\mu W_\nu -igA_\mu W_\nu `$ is the U(1) covariant derivative, with $`A_\mu =C_\mu -H_\mu `$.
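As a concrete check of the monopole content of (10), it is a standard exercise (reproduced here for the reader's convenience) to insert the Wu-Yang configuration $`𝒏=𝒙/r`$ into the first equality. Using $`\partial _in_a=(\delta _{ia}-x_ix_a/r^2)/r`$ and noting that every term with the antisymmetric symbol contracted with two factors of $`𝒙`$ vanishes, one finds $`H_{ij}=g^{-1}𝒏\cdot (\partial _i𝒏\times \partial _j𝒏)=g^{-1}\epsilon _{ijk}\frac{x_k}{r^3},`$ i.e., the magnetic field $`B_k=\frac{1}{2}\epsilon _{kij}H_{ij}=x_k/(gr^3)`$ of a point monopole of charge $`4\pi /g`$ sitting at the origin, whose return flux is carried by the Dirac string on the negative $`z`$-axis mentioned above.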
The Abelian gauge transformations have the form $$A_\mu \to A_\mu +g^{-1}\partial _\mu \xi ,\qquad W_\mu \to e^{i\xi }W_\mu .$$ (12) The transformations (12) can obviously be generated by (7), and therefore they are not from the original SU(2) gauge group. In contrast to the Abelian gauge transformations (12), the SU(2) transformations depend on a concrete parameterization of $`𝑾_\mu `$. Because of the topological defects associated with the nontriviality of $`\pi _2(S^2)`$, the Bianchi identity for the Abelian strength tensor $`F_{\mu \nu }=C_{\mu \nu }-H_{\mu \nu }`$ is violated. Let $`{}^{*}F_{\mu \nu }`$ be the dual tensor. Then one can define a conserved current $$J_\mu =\partial _\nu {}^{*}F_{\mu \nu },\qquad \partial _\mu J_\mu =0.$$ (13) The conservation of the topological current $`J_\mu `$ indicates the existence of a U(1) (magnetic) symmetry on the classical level in the theory (11). 3. Abelian projections. An Abelian projection is introduced by imposing a gauge condition on $`𝑨_\mu `$ that breaks the gauge group SU(2) to its (maximal) Abelian subgroup U(1). As has already been pointed out in section 1, a gauge group element which transforms a generic connection to the gauge fixing surface in the space of connections is not regular everywhere in spacetime. The transformed (or projected) connections contain topological defects (singularities). Consider a special class of Abelian projections with the characteristic property that topological defects appear only in the Abelian components of the projected potentials. Let us describe this class in the new variables (1). Had the gauge transformations of the field $`𝒏`$ been just isotopic rotations, $$\delta 𝒏=𝒏\times 𝝋,$$ (14) then the connection $`𝜶_\mu +𝒏C_\mu `$ would become purely Abelian upon the projection $`𝒏\to 𝒏_0`$ for any choice of $`𝝌`$ which is compatible with (14). Recall that in our formulation the gauge transformation law for $`𝒏`$ depends on $`𝝌`$. The addition to the Abelian component $`C_\mu `$ resulting from $`𝜶_\mu `$ upon the projection determines exactly the same topological defects as the connection $`H_\mu `$ in the Abelian theory (11), i.e., $`𝜶_\mu +𝒏C_\mu \to 𝒏_0A_\mu `$. Now we show that for every Abelian projection from the special class defined above one can construct a $`𝝌`$ in (2) which determines a special parameterization of $`𝑾_\mu `$ such that (14) holds. Moreover, the topological current (13) is invariant under the SU(2) gauge transformations (5), (6) and (14). Thus, the very existence of the (magnetic) symmetry (13) is not at all related to any gauge fixing in the theory. We first give an example of $`𝝌`$ associated with the so-called maximal Abelian projection, which is mostly used in numerical studies of the “monopole” dynamics: $$𝝌=\nabla _\mu (𝜶+𝒏C)𝑾_\mu .$$ (15) The compatibility of (15) with (14) follows from the fact that under the transformations (5), (6) and (14) the isovector (15) is covariant, $`\delta 𝝌=𝝌\times 𝝋`$, and it also fulfills the condition $`𝝌\cdot 𝒏\equiv 0`$, as one can easily be convinced by a direct computation. Upon the projection $`𝒏\to 𝒏_0`$, $`𝝌`$ turns into the maximal Abelian gauge condition. The Abelian part of the connection equals $`A_\mu `$ and contains magnetic monopoles whose charges are defined with respect to the unbroken U(1) subgroup (rotations about $`𝒏_0`$) . By construction, the corresponding conserved monopole current coincides with (13). Suppose an Abelian projection is specified by a gauge condition $`𝝌(𝒏_0A^{(0)},𝑾^{(0)})=0`$, where $`𝑨_\mu =𝒏_0A_\mu ^{(0)}+𝑾_\mu ^{(0)}`$ and $`𝒏_0\cdot 𝑾_\mu ^{(0)}=0`$.
We also assume that $`𝝌`$ is covariant (or even invariant) under the Abelian gauge transformations $`\delta _aA_\mu ^{(0)}=g^1_\mu \phi `$ and $`\delta _a𝑾_\mu ^{(0)}=\phi 𝑾_\mu ^{(0)}\times 𝒏_0`$. This ensures that the gauge symmetry is broken to U(1). Consider the change of variables (1) in which the condition (2) is obtained by a simple replacement $`𝒏_0A_\mu ^{(0)}𝜶_\mu +𝒏C_\mu `$ and $`𝑾_\mu ^{(0)}𝑾_\mu `$ in the above gauge condition. By construction, the gauge transformation law (14) is guaranteed. All topological degrees of freedom of the Yang-Mills theory, which are singled out as magnetic monopoles upon the Abelian projection, are contained in the Abelian vector potential $`A_\mu `$ of the Maxwell theory (11). Thus, in the new variables the aforementioned special class of Abelian projections is described by a single “projection” $`𝒏𝒏_0`$. From (9) and (10) it follows that $$\delta J_\mu =_\nu {}_{}{}^{}\delta F_{\mu \nu }=0,$$ (16) that is, the topological current (13) is invariant under the SU(2) gauge transformations. Note that $`C_\mu `$ is not transformed by a simple gradient shift. The contribution of non-Abelian gauge transformations to $`\delta C_{\mu \nu }`$ is compensated in $`\delta F_{\mu \nu }`$ by $`\delta H_{\mu \nu }`$ which is generated by gauge rotations of the local basis $`\delta 𝒏_r=𝒏_r\times 𝝋`$, $`𝝋𝒏=0`$. The restriction on $`𝝋`$ has been imposed because $`\delta 𝒏=0`$ if $`𝝋=𝒏\phi `$, while $`\delta C_\mu =g^1_\mu \phi `$. Therefore there are two groups U(1) in the theory. One is associated with the subgroup of the gauge group which preserve $`C_{\mu \nu }`$, $`\delta C_{\mu \nu }=0`$, while the other is given by transformations (12). The Abelian potential $`A_\mu `$ is invariant under the U(1) subgroup of U(1)$`\times `$U(1) which is selected by the condition $`\phi =\xi `$. The charged field $`W_\mu `$ is also invariant under this U(1) subgroup. According (6) and (8), the SU(2) gauge transformations can be regarded as local generic rotations of any rigid local basis in the isotopic space, $`\delta 𝒏_r=𝒏_r\times 𝝋`$ (no restriction on $`𝝋`$), provided the $`𝝌`$ in (2) is such that (14) holds. In this case all field variables in the Maxwell theory (11) are invariant under the SU(2) Yang-Mills gauge group. 4. ’t Hooft’s conjecture. In lattice simulations one is interested in an effective dynamics of the topological degrees of freedom, i.e., in an effective theory of the field $`𝒏`$ in our formulation. Recently the dual scenario of the color confinement in the lattice Yang-Mills theory has been reported to occur in several Abelian projection . All the projections studied have a characteristic property that monopole-like topological defects are contained in Abelian components of projected connections. This certainly supports ’t Hooft’s conjecture that all Abelian projections are equivalent . Can one find theoretical arguments to prove this conjecture? Here we explain how the proof can be done. In our parameterization all the Abelian projections in question are described by one simple (singular) gauge condition $`𝒏=𝒏_0`$. The difference between projections is related to a reparameterization of $`𝑾_\mu `$. As we have a genuine change of variables in the functional integral, we can, in principle, integrate out $`𝑾_\mu `$, and get an effective action for $`𝒏`$ and $`C_\mu `$. From a technical point of view, this procedure involves two important steps. First, one has to compute a Jacobian of the change of variables. 
Second, a gauge has to be fixed, otherwise the integral is divergent. The latter can be done by means of the conventional Faddeev-Popov recipe with a nonsingular gauge (e.g. a background or Lorentz gauge) before the change of variables. The first problem is solved in the following way. Consider the identity $1 = \int D𝒏\,\mathrm{\Delta}(𝑨,𝒏)\,\delta(𝝌)$ where $𝝌 = 𝝌(𝑨,𝒏)$ is obtained by a substitution of (4) into $𝝌(𝑾,C,𝒏)$. Clearly, $\mathrm{\Delta}(𝑨,𝒏) = \det[\delta𝝌/\delta𝒏]$. Next, the identity is inserted into the integral $\int D𝑨_\mu\,\exp(-S)$, with $S$ being the Yang-Mills action (gauge fixing and Faddeev-Popov ghost terms are not written explicitly), then $𝑨_\mu$ is replaced by $𝒏C_\mu + 𝑾_\mu$, with a generic $𝑾_\mu$ perpendicular to $𝒏$, so that $D𝑨_\mu \to DC_\mu\,D𝑾_\mu$. Finally, one shifts the integration variables $𝑾_\mu \to 𝑾_\mu + 𝜶_\mu$. As a result one arrives at the following representation

$$𝒵 \equiv \int D𝑨_\mu\, e^{-S} = \int D𝒏\,DC_\mu\,D𝑾_\mu\,\mathrm{\Delta}(𝑨,𝒏)\,\delta[𝝌(𝑾,C,𝒏)]\,e^{-S}\ .$$ (17)

In the integrand of the right-hand side of Eq. (17), $𝑨_\mu$ must be replaced by (1). The integral over $𝑾_\mu$ seems to depend on the choice of $𝝌$. However, this is not always the case. Various choices of $𝝌$ can be regarded as gauge fixing conditions for the gauge symmetry associated with a reparameterization of $𝑾_\mu$. As it stands, Eq. (1) contains 14 functions on the right-hand side, while there are only 12 components in $𝑨_\mu$. Therefore the gauge transformations (3) would, in general, be induced by five-parametric transformations of the new variables. There are two-parametric transformations of the new variables under which $𝑨_\mu$ remains invariant. Precisely this gauge freedom is fixed by (2) and by the corresponding delta function in (17). The key point is that the invariance of the integral over $𝑾_\mu$ in (17) under variations of $𝝌$ can be established just as the gauge invariance of the perturbative Faddeev-Popov path integral is proved. Since $\mathrm{\Delta}$ is a determinant, it can be lifted into the exponential by introducing ghosts $𝜼$ (which should not be confused with the conventional Faddeev-Popov ghosts), and $\delta(𝝌)$ is replaced by $\int D𝒇\,\exp(-𝒇^2/2)\,\delta(𝝌-𝒇)$. A change of $𝝌$ is equivalent to some BRST transformation of $𝜼$ and the new variables. When $𝑾_\mu$ is integrated out, the invariance of the effective action for the remaining variables under variations of $𝝌$ should be guaranteed by the invariance under the corresponding BRST transformations of $𝒏$ and $C_\mu$. Now we recall that a change of $𝝌$ implies a modification of the gauge transformation law of $𝒏$ and $C_\mu$. But for all Abelian projections in question $𝝌$ varies within the special class for which $𝒏$ transforms according to (14), that is, neither the gauge transformation of $𝒏$ nor that of $C_\mu$ depends on $𝑾_\mu$. Hence, the BRST transformations of $𝒏$ and $C_\mu$ generated by varying $𝝌$ cannot be anything but a subset of the conventional BRST transformations associated with a gauge fixing in the original integral over $𝑨_\mu$. Owing to the BRST invariance of the Faddeev-Popov action, we conclude that the effective action for $𝒏$ and $C_\mu$ will also be invariant under the BRST transformations generated by variations of $𝝌$. In short, one can say that ’t Hooft’s conjecture is a simple consequence of gauge invariance.
In the new nonlocal variables the relevance of the gauge symmetry is obvious, while in the original variables it is less evident because of the singularity of the gauges used in Abelian projections. The general case when (14) is not valid will be considered elsewhere. If the dual scenario takes place, as suggested by lattice simulations, the effective action for the topological current $J_\mu$ has to be of the London type (as for a superconductor). Since the Abelian theory (11) is SU(2) invariant, the U(1) symmetry associated with the conservation of $J_\mu$ can be dynamically broken regardless of any gauge fixing used to compute the functional integral. By means of the representation (17), where the integration over the field $𝒏$ provides the sum over topological configurations of Yang-Mills fields, we have circumvented the difficult problem of summing over monopole configurations in singular Abelian projection gauges. 5. Partial duality. The homotopy group arguments show that the field $𝒏$ may also contain configurations that upon the Abelian projection $𝒏 \to 𝒏_0$ form closed magnetic fluxes which are linked and/or knotted. Their topological number is known as the Hopf invariant and is associated with the nontriviality of $\pi_3(S^2)$ of the map $𝒏$. Due to the nonlocality of the Hopf invariant, there is no conserved current related to such topological defects. The magnetic fluxes cannot be observed in numerical studies by the same procedure as that used for magnetic monopoles because they do not contribute to the total magnetic field flux through any closed surface. It has been conjectured that quantum fluctuations of other degrees of freedom of Yang-Mills fields may stabilize fluxes against shrinking, so that they would behave like knot solitons. The dynamical regime in which fluxes exist as physical excitations is dual to some Higgs phase which is revealed via a special parameterization of the Yang-Mills connection. To verify this conjecture, one needs a more general parameterization of the connection than that used previously in order to correctly describe the off-shell quantum dynamics of the relevant physical degrees of freedom. The problem is to find an explicit and local parameterization of $𝑾_\mu$ by six functional variables, while keeping the partial duality between $𝒏$ and some components of $𝑾_\mu$. Then in the new variables the Yang-Mills theory will be a local Abelian theory (11) to which the Wilsonian arguments can be applied. The necessary six functional variables can be unified into an antisymmetric tensor $W_{\mu\nu} = -W_{\nu\mu}$. Consider the following representation

$$𝑾_\mu = g\left[W_{\mu\nu} + V_{\mu\nu}(W,𝒏)\right]𝜶_\nu\ ,$$ (18)

where $V_{\mu\nu}$ is a symmetric tensor which depends on $W_{\mu\nu}$ and $𝒏$. This is the most general form of $𝑾_\mu$. It should be noted that $W_{\mu\nu}$ is dimensionless just as the topological field $𝒏$, which is necessary for $W_{\mu\nu}$ to be a dual variable to $𝒏$. In principle, one can take a generic isotopic vector $𝜸_\mu(𝒏)$, perpendicular to $𝒏$, instead of $𝜶_\mu$ in (18). However, by a redefinition of the symmetric and antisymmetric components of the tensor $W_{\mu\nu} + V_{\mu\nu}$, $𝜸_\mu$ can always be replaced by $𝜶_\mu$ because any isotopic vector perpendicular to $𝒏$ is a linear combination of the Lorentz components of $𝜶_\mu$.
The simplest choice $V_{\mu\nu} = 0$ would already provide us with a sought-for parameterization to develop the off-shell dynamics of the physical degrees of freedom. It implies only one gauge condition on a generic connection (1), while we are allowed to impose three without solving the Gauss law. Indeed, if $V_{\mu\nu} = 0$, $𝑾_\mu$ satisfies three (not two as required by (2)) equations

$$𝑾_\mu\otimes𝜶_\mu + 𝜶_\mu\otimes𝑾_\mu = 0\ .$$ (19)

The tensor product contains three independent components because both $𝑾_\mu$ and $𝜶_\mu$ are perpendicular to $𝒏$. Therefore there is one constraint on the components of $W_{\mu\nu}$: $W_{\mu\nu}H_{\mu\nu} = 0$. Since for the functional integral one needs a change of variables, the latter restriction on $W_{\mu\nu}$ can be relaxed to achieve this goal if, for example, we set

$$𝑾_\mu = gW_{\mu\nu}𝜶_\nu + g\rho\,𝜶_\mu\ ,$$ (20)

where $\rho = \rho(W,𝒏) \sim W_{\mu\nu}H_{\mu\nu}$ is determined by (19). The field $\rho$ in (20) is, in general, specified modulo a factor which may depend on $𝒏$. For instance, the condition (19) can be modified by multiplying each of the two terms in the tensor product by coefficients depending on $𝒏$. Thanks to the gauge invariance of the Yang-Mills action, a particular choice of $\rho$ should not be relevant for the partial duality because $\rho$ can always be removed by an appropriate gauge transformation (6). Quantum dynamics of the charged fields in the Abelian theory (11) is described by the antisymmetric field $W_{\mu\nu}$. The Jacobian of the change of variables is the determinant of the Euclidean metric $ds^2 = \int dx\, d𝑨_\mu\cdot d𝑨_\mu$ on the affine space of connections in the new variables (1) and (20); $d𝑨_\mu$ denotes a functional differential of the affine (field) coordinate $𝑨_\mu$. The Jacobian induces quantum corrections, associated with the curvilinearity of the new field variables, to the classical action (11). If the dynamics of the charged field $W_{\mu\nu}$ is such that the average over it yields

$$\left\langle\left(\partial_\mu W_{\nu\sigma} - \partial_\nu W_{\mu\sigma}\right)\left(\partial_\mu W_{\nu\lambda} - \partial_\nu W_{\mu\lambda}\right)\right\rangle \sim m^2\delta_{\sigma\lambda}\ ,$$ (21)

then in the large distance limit the leading term of the gradient expansion of the effective action for the field $𝒏$ would contain the term $m^2\,𝜶_\mu\cdot𝜶_\mu = m^2(\partial_\mu𝒏)^2$. Together with the tree level term proportional to $H_{\mu\nu}^2$, it forms, as follows from (11), the action of the Faddeev model which describes knot-like massive solitons. Such solitonic excitations could be good candidates for glueballs. Their stability in the effective theory depends on other terms which are contained in the gradient expansion of the effective action. Consider the decomposition $\partial_\mu𝒏 = b^{*}_\mu 𝒏_r + b_\mu 𝒏^{*}_r$. We have $H_{\mu\nu} = ig^{-1}(b^{*}_\mu b_\nu - b_\mu b^{*}_\nu)$ and $W_\mu = iW_{\mu\nu}b_\nu + i\rho\,b_\mu$. The dual (Higgs) phase reported in the lattice studies may also exist in the Abelian theory (11), provided the average over the field $𝒏$ has the property that

$$\langle b_\mu b_\nu\rangle = 0\ ,\qquad \langle b^{*}_\mu b_\nu\rangle \sim M^2\delta_{\mu\nu}\ .$$ (22)

In particular, the property (22) implies that $\langle H_{\mu\nu}\rangle = 0$ and $\langle H_{\mu\sigma}H_{\nu\lambda}\rangle \sim 2g^{-2}M^4(\delta_{\mu\nu}\delta_{\sigma\lambda} - \delta_{\mu\lambda}\delta_{\sigma\nu})$ (neglecting a four-point function of the field $b_\mu$).
The effective potential for $W_{\mu\nu}$ would have “classical” minima, $W_{\mu\sigma}W_{\nu\sigma} \sim \delta_{\mu\nu}$, therefore the Maxwell field acquires a mass proportional to $M$. The parameterization relevant for the partial duality is given by the first term in (18). Therefore the choice of $V_{\mu\nu}$ does not seem to be important. This suggests that the property (21) should be universal relative to a choice of $V_{\mu\nu}$. The Ansatz (18) can be used to solve Eq. (2) for $V_{\mu\nu}$. In this case $V_{\mu\nu}$ may even be nonlocal (cf., e.g., (15)). It would be interesting to find arguments to prove that (21) holds for any $V_{\mu\nu} = V_{\mu\nu}(W,𝒏)$ if it holds for at least one choice of $V_{\mu\nu}$. This amounts to the existence of the gauge $V_{\mu\nu} = 0$ for any $𝝌$ (for algebraic conditions like (19), this is the case). The nontriviality of the problem is that the gauge transformation law of the new variables depends on $𝝌$. 6. Gauge group SU(N). To extend our description of the Yang-Mills theory as an Abelian theory with topological degrees of freedom to the gauge group SU(N), we introduce the Cartan-Weyl basis in the Lie algebra. Let $\omega_k$ be simple roots, $k = 1,2,\ldots,N-1$ (= rank of SU(N)), and $\beta$ be a positive root. It can be written in the form $\beta = \omega_k + \omega_{k+1} + \cdots + \omega_{k+j}$. All simple roots have the same norm. The angle between $\omega_k$ and $\omega_{k\pm 1}$ is $2\pi/3$, while $\omega_k$ and $\omega_{k\pm j}$, $j \geq 2$, are perpendicular. As a consequence, all roots have the same norm. For every root $\beta$ two basis elements $e_\beta$ and $e_{-\beta} = e^{\dagger}_\beta$ are defined so that

$$[h,e_\beta] = (h,\beta)e_\beta\ ,\qquad [e_\beta,e_\gamma] = N_{\beta,\gamma}e_{\beta+\gamma}\ ,\qquad [e_\beta,e_{-\beta}] = \beta\ ,$$ (23)

where $h$ is any element from the Cartan subalgebra; and for any two elements $v$ and $w$ of the Lie algebra the Killing form is defined by $(v,w) = \mathrm{tr}(\mathrm{ad}\,v\,\mathrm{ad}\,w)$. The operator $\mathrm{ad}\,v$ acts on any element $w$ as $[v,w]$. The structure constants $N_{\beta,\gamma} = -N_{-\beta,-\gamma}$ are not zero only if $\beta+\gamma$ is a root. For SU(N), $N^2_{\beta,\gamma} = (2N)^{-1}$ and the relative signs can be fixed by the Jacobi identity for the basis elements. Let $h_k = h^{\dagger}_k$ be an orthonormal basis with respect to the Killing form in the Cartan subalgebra. With the normalization of the structure constants as given in (23), the elements $h_k$, $e_\beta$ and $e^{\dagger}_\beta$ form an orthonormal basis in the Lie algebra, $(h_k,e_\beta) = 0$, $(e_\beta,e_\gamma) = 0$ and $(e_\beta,e^{\dagger}_\gamma) = \delta_{\beta\gamma}$. Let $U = U(x)$ be a generic element of the coset $SU(N)/[U(1)]^{N-1}$. Consider a local orthonormal basis $n_k = U^{\dagger}h_k U$, $n_\beta = U^{\dagger}e_\beta U$. The commutation relations (23) hold for the local basis too. For any element $v$ one can prove the identity

$$v = N[n_k,[n_k,v]] + n_k(n_k,v)\ .$$ (24)

A proof is based on a straightforward computation of the double commutator in (24) in the Cartan-Weyl basis and the fact that all roots of SU(N) have the same norm, which is $(\beta,\beta) = 1/N$ relative to the Killing form.
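The identity (24) can also be checked numerically. Below is a minimal numpy sketch for SU(3) (our illustration only: it assumes the Gell-Mann basis with $\mathrm{tr}(T_aT_b) = \delta_{ab}/2$ and computes the Killing form directly from the adjoint action; the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hermitian su(3) generators T_a = lambda_a / 2 (Gell-Mann basis).
L = np.zeros((8, 3, 3), complex)
L[0][0, 1] = L[0][1, 0] = 1
L[1][0, 1] = -1j; L[1][1, 0] = 1j
L[2][0, 0] = 1;   L[2][1, 1] = -1
L[3][0, 2] = L[3][2, 0] = 1
L[4][0, 2] = -1j; L[4][2, 0] = 1j
L[5][1, 2] = L[5][2, 1] = 1
L[6][1, 2] = -1j; L[6][2, 1] = 1j
L[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = L / 2
N = 3

def comm(a, b):
    return a @ b - b @ a

def killing(v, w):
    # (v, w) = tr(ad v ad w), with ad u expanded in the basis T_a using
    # the normalization tr(T_a T_b) = delta_ab / 2.
    ad = lambda u: np.array([[2 * np.trace(comm(u, T[b]) @ T[a])
                              for b in range(8)] for a in range(8)])
    return np.trace(ad(v) @ ad(w)).real

# Orthonormal Cartan basis: (T_3, T_3) = N, so h_k = T_cartan / sqrt(N).
h = [T[2] / np.sqrt(N), T[7] / np.sqrt(N)]

# Random element of su(3) (traceless Hermitian).
v = sum(rng.normal() * T[a] for a in range(8))

rhs = sum(N * comm(hk, comm(hk, v)) + hk * killing(hk, v) for hk in h)
print(np.allclose(v, rhs))   # True: identity (24) holds
```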
A change of variables in the affine space of SU(N) connections reads

$$A_\mu = \alpha_\mu + n_k C^k_\mu + W_\mu\ ,\qquad \alpha_\mu = ig^{-1}N[\partial_\mu n_k, n_k]\ ,\qquad (W_\mu, n_k) = 0\ ,$$ (25)

where $W_\mu$ is subject to $N^2-N$ conditions $\chi(W,C^k,n_k) = 0$, $(\chi,n_k) \equiv 0$. Thus, in four dimensional spacetime, the $4(N^2-1)$ independent components of $A_\mu$ are now represented by the $N^2-N = \dim SU(N)/[U(1)]^{N-1}$ independent components of $n_k$, by $4(N-1)$ components of $C^k_\mu$ and by $3(N^2-N)$ components of $W_\mu$. The first two terms in (25) are constructed so that the corresponding field strength is purely Abelian in the local basis

$$F_{\mu\nu}(\alpha+C) = n_k\left(C^k_{\mu\nu} - H^k_{\mu\nu}\right) \equiv n_k F^k_{\mu\nu}\ ,\qquad H^k_{\mu\nu} = ig^{-1}N(n_k,[\partial_\mu n_j,\partial_\nu n_j])\ ,$$ (26)

and $C^k_{\mu\nu} = \partial_\mu C^k_\nu - \partial_\nu C^k_\mu$. Relation (26) is obtained from the definition $F_{\mu\nu}(A) = \partial_\mu A_\nu - \partial_\nu A_\mu + ig[A_\mu,A_\nu]$ by a successive use of the Jacobi identity and (24). The homotopy groups $\pi_2(G/H)$ and $\pi_3(G/H)$, where $G = SU(N)$ and $H = [U(1)]^{N-1}$, of the map $n_k$ are nontrivial. Therefore the $n_k$ carry topological (physical) degrees of freedom of Yang-Mills fields. It is not hard to establish the identity

$$ig^{-1}\partial_\mu U^{\dagger}U = \alpha_\mu + n_k H^k_\mu\ ,\qquad H^k_{\mu\nu} = \partial_\mu H^k_\nu - \partial_\nu H^k_\mu - H^{k(st)}_{\mu\nu}\ ,$$ (27)

where the group element $U$ specifies an orientation of the local basis with respect to the Cartan-Weyl basis, and $H^{k(st)}_{\mu\nu} = ig^{-1}(n_k,[\partial_\mu,\partial_\nu]U^{\dagger}U)$ is the field of Dirac strings. The Abelian strength tensor $F^k_{\mu\nu}$ does not satisfy the Bianchi identity if the monopole-like defects associated with a nontriviality of $\pi_2(G/H)$ are present. The conserved topological current is a simple multi-component generalization of (13): $J^k_\mu = \partial_\nu{}^{*}F^k_{\mu\nu}$, $\partial_\mu J^k_\mu = 0$. There is a (magnetic) symmetry group $[U(1)]^{N-1}$ responsible for its conservation. To reformulate the Yang-Mills theory as an Abelian theory without gauge fixing, we introduce the decomposition $W_\mu = W^\beta_\mu n_\beta + W^{\beta *}_\mu n^{\dagger}_\beta$ and set $A^k_\mu = C^k_\mu - H^k_\mu$. The Lagrangian density in the new variables assumes the form

$$F^2_{\mu\nu} = \left\{C^k_{\mu\nu} - H^k_{\mu\nu} + ig(h_k,\beta)\left[W^{\beta *}_\mu W^\beta_\nu - W^{\beta *}_\nu W^\beta_\mu\right]\right\}^2 + \left|D_\mu W^\beta_\nu - D_\nu W^\beta_\mu + ig\mathrm{\Gamma}^\beta_{\mu\nu}\right|^2\ ,$$ (28)

$$\mathrm{\Gamma}^\beta_{\mu\nu} = \sum_{\alpha+\gamma=\beta}N_{\alpha,\gamma}\left[W^\alpha_\mu W^\gamma_\nu - W^\alpha_\nu W^\gamma_\mu\right] + \sum_{\alpha-\gamma=\beta}N_{\alpha,-\gamma}\left[W^\alpha_\mu W^{\gamma *}_\nu - W^\alpha_\nu W^{\gamma *}_\mu\right]\ ,$$ (29)

where $\alpha,\beta$ and $\gamma$ are positive roots, and $D_\mu W^\beta_\nu = \partial_\mu W^\beta_\nu - ig(\beta,h_k)A^k_\mu W^\beta_\nu$. A calculation of the field strength tensor is somewhat tedious but straightforward. The identities $\nabla_\mu(\alpha)n_\beta \equiv \partial_\mu n_\beta + ig[\alpha_\mu,n_\beta] = ig(h_k,\beta)H^k_\mu n_\beta$ and $(\partial_\mu n_j,n_k) = -(\partial_\mu n_k,n_j) = 0$, which can be deduced from (27), are useful to simplify the computation. The Lagrangian density (28) is invariant under the Abelian gauge transformations

$$A^k_\mu \to A^k_\mu + \partial_\mu\xi_k\ ,\qquad W^\beta_\mu \to e^{ig(\beta,h_k)\xi_k}W^\beta_\mu\ .$$ (30)

The Abelian gauge group $[U(1)]^{N-1}$ is not a subgroup of the original gauge group.
Just as in the SU(2) case, it is related to the fact that the basis elements $n_\beta$ can be transformed locally

$$n_\beta \to e^{i(\beta,h_k)\xi_k}n_\beta$$ (31)

without spoiling either the orthogonality or the commutation relations in the local Cartan-Weyl basis. The gauge transformation law of the new variables can be found by the same method as in the SU(2) case. It depends on the choice of $\chi$. There is a wide class of $\chi$’s, associated with Abelian projection gauges as explained in section 3, for which the gauge transformation law has a particularly simple form

$$\delta n_k = i[n_k,\phi]\ ,\qquad \delta C^k_\mu = g^{-1}(n_k,\partial_\mu\phi)\ ,\qquad \delta W_\mu = i[W_\mu,\phi]\ .$$ (32)

In this case, the SU(N) gauge transformations are generated by adjoint transformations of any rigid local Cartan-Weyl basis, under which the field variables in the Abelian theory are invariant

$$\delta A^k_\mu = 0\ ,\qquad \delta W^\beta_\mu = 0\ .$$ (33)

All Abelian projections in which topological defects occur in the Abelian components of the projected connections fall into one class defined by the projection $n_k \to h_k$ in the new variables. A proof of the ’t Hooft conjecture is a straightforward generalization of the SU(2) case. The key point is the gauge symmetry (32), which does not mix $W_\mu$ with the Abelian variables $C^k_\mu$ and $n_k$. Therefore the monopole dynamics in the SU(N) Yang-Mills theory is projection independent. The coupling constants of the interaction of the $W^\beta_\mu$ among each other and with the Maxwell fields $A^k_\mu$ in (28) are proportional to $N^{-1/2}$ because $N_{\beta,\gamma} \sim N^{-1/2}$ and $|(\beta,h_k)| \sim N^{-1/2}$, while $H^k_{\mu\nu} \sim N^{1/2}$. Therefore in the large $N$ limit the dynamics of the topological fields $n_k$ dominates. Finally, we observe that precisely in four dimensional spacetime, the $3(N^2-N)$ independent components of $W_\mu$ can be unified into a tensor $W^{jk}_{\mu\nu}$ which is antisymmetric in the Lorentz indices and symmetric in the Cartan indices, $W^{jk}_{\mu\nu} = -W^{jk}_{\nu\mu}$ and $W^{jk}_{\mu\nu} = W^{kj}_{\mu\nu}$. This suggests the following local parameterization to reveal a partial duality in the SU(N) Yang-Mills theory

$$W_\mu = \left\{W^{jk}_{\mu\nu} + V^{jk}_{\mu\nu}(W,n)\right\}\alpha^{jk}_\nu\ ,\qquad \alpha^{jk}_\mu = i[\partial_\mu n_j,n_k] = \alpha^{kj}_\mu\ ,$$ (34)

where the symmetric tensor $V^{jk}_{\mu\nu}$ can be specified as an explicit and local function of $W^{jk}_{\mu\nu}$ and $n_k$ via a simple generalization of the method of section 5. 7. Conclusions. An Abelian structure and the Abelian magnetic symmetry can be established in the SU(N) Yang-Mills theory without any gauge fixing (or Abelian projections). By making use of such an Abelian theory we have shown that the effective dynamics of the topological degrees of freedom that are singled out as magnetic monopoles in Abelian projections is independent of the projection (’t Hooft conjecture). We have also generalized a parameterization of Faddeev and Niemi in order to study an off-shell dynamics of physical degrees of freedom which may form knot-like solitons in the infrared region of the Yang-Mills theory. A general parameterization of the Yang-Mills connection to reveal a partial duality between the topological field $𝒏$ and a dimensionless antisymmetric field $W_{\mu\nu}$ has been proposed.
It is believed that the separation of the topological degrees of freedom of the Yang-Mills theory via a change of variables in the functional integral is important for developing the corresponding effective action by analytical means.
# The Early Palomar Program (1950-1955) for the Discovery of Classical Novae in M81: Analysis of the Spatial Distribution, Magnitude Distribution, and Distance Suggestion ## 1 Introduction A principal goal of the initial Palomar program on observational cosmology, which began with the commissioning of the 200-inch Hale telescope in 1949, was the testing and revision of the Mount Wilson extragalactic distance scale (Hubble 1951). That scale was defined by Hubble’s (1925, 1926, 1929) distances to NGC 6822, M33, M31, and the galaxies immediately beyond the Local Group in the M81/NGC 2403 and M101 groups (Hubble & Humason 1931; Hubble 1936; Holmberg 1950). An early central result was Baade’s (1952) discovery that the RR Lyrae variables in the disk of M31 did not resolve out of the background at the expected apparent magnitude of $m_{pg} = 22.4$. Only the top of the globular cluster-like giant branch of the HR diagram resolved at that level. By a series of arguments, Baade (1952, 1956) could show that M31 was $\sim 1.5$ mag further away than Hubble’s modulus of $(m-M) = 22.0$, and that the assumed zero point of the classical Cepheid period-luminosity relations was in error by about that amount. A long-range program that was parallel to Baade’s M31 campaign (Baade and Swope 1955, 1963) was the study of the stellar content of other galaxies just beyond the Local Group, and also in selected E galaxies in the Virgo cluster. The purpose was to discover Cepheids and novae in the M81/NGC 2403 and the M101 groups, and also to attempt the discovery of novae in the Virgo Cluster ellipticals. Progress on this program was described in various yearly reports of the Mount Wilson and Palomar Observatories (Bowen 1950-1970), and in the Introduction to the NGC 2403 Cepheid discovery paper (Tammann & Sandage 1968). The final result of the NGC 2403 campaign was that the distance modulus of that galaxy was $(m-M) = 27.56$ rather than Hubble’s (1936) modulus of $(m-M) = 24.0$, giving a factor of $\sim 5$ correction to Hubble’s distance scale even at this very small distance beyond the Local Group. Other galaxies surveyed for Cepheids were M81 and M101, and, less extensively for brightest stars, NGC 2366, NGC 2976, IC 2574, NGC 4236, Ho I, Ho II of the M81/NGC 2403 Group (Sandage and Tammann 1974a), and NGC 5204, NGC 5474, NGC 5477, NGC 5585 and M101 itself in the M101 Group (Sandage and Tammann 1974b). A progress report was given by Sandage (1954). After NGC 2403, the most complete coverage for Cepheids and normal novae was in M81, considered by Hubble, and assumed on that basis by Holmberg (1950), to be at the same distance as NGC 2403. The Cepheid program for NGC 2403, M81, and M101 was moderately telescope-intensive from 1950 through 1955. To assure somewhat adequate coverage for both Cepheids and novae, the observing runs during the two weeks of dark of the moon were usually split into three intervals: two days at the beginning of the 14 day interval, two days near the middle, and two days at the end. By the end of 1955, a total of 79 blue plates had been taken of M81. The principal observers were Humason (30 plates), Sandage (38 plates), Baade (5 plates), Baum (3 plates), Hubble (2 plates), and Minkowski (1 plate). Until his death in 1953, Hubble blinked the total material available at the time. He discovered 10 faint variables (all near the plate limit at B $\sim 23$), and 18 classical novae in M81.
The program was continued after Hubble’s death so that at the end of the campaign in late 1955, a total of 23 novae, 30 suspected faint variables (many of which are undoubtedly Cepheids), and 7 luminous blue variables (LBVs) had been discovered in M81. Two novae were also found in the E galaxy NGC 4486 in the Virgo cluster (Bowen 1952; Pritchet & van den Bergh 1987). None of the detailed data on the M81 novae or faint variables has been published. However, in the 1954 summary mentioned above, Hubble’s preliminary result on the modulus of M81, based on the first 18 novae, was discussed. Using a provisional apparent magnitude scale that had been set up by one of us (AS) before the photoelectric magnitude sequences had been established in the 1960’s in NGC 2403, M81, and the M81 companion Ho IX, Hubble had concluded by early 1953 that M81 is $\sim 3.8$ mag further away than M31. In a remarkable procedure, Hubble reduced the nova data that were available to mid-1953 to the mean apparent magnitude of the nova system, averaged at 14 days after maximum. He used the number of days that had elapsed between the discovery date of a given nova and the last previous plate of the galaxy. In this way he calibrated the “dead time correction” to an inferred maximum magnitude by the statistical properties of the nova system, using his 86 novae in M31 (Hubble 1929) as a template. The procedure was approximate at best (although similar to the “control time” algorithm now used extensively). Furthermore, no account was taken of the absolute maximum magnitude-decay rate (MMRD) relation of novae, found earlier by McLaughlin (1939, 1945, 1946) and fully confirmed and extended by Arp (1956), Schmidt (1957), Rosino (1964), Shara (1981a,b), Cohen (1985), Capaccioli et al. (1989), among others. Remarkably, however, the absolute magnitudes of both “fast” and “slow” novae are now known to be closely the same at 14 days after maximum. The light curves of different shapes all cross in a composite light curve near this time after maximum (eg. Buscombe & de Vaucouleurs 1955; Shara 1981a). Data on two of the Cepheids (V2 and V30 in an internal numbering used in the original working identification charts) were also analyzed (Sandage unpublished). The result was that periods were determined to be 30.073 and 30.625 days, with mean B magnitudes of 22.5 and 22.6 respectively. Freedman and Madore (1988) measured I magnitudes for these variables. They derived an M81 modulus of $(m-M)_0 = 27.59$ (see also Freedman et al. 1994). This modulus, combined with the original Palomar modulus of NGC 2403 (Tammann and Sandage 1968), confirmed the assumption of Hubble and Holmberg that M81 and NGC 2403 form a group at closely the same distance. The Freedman/Madore data also corrected a later, aberrant claim to the contrary (Sandage 1984) that $(m-M) = 28.8$ for M81, which was based on a false precept concerning the M81 data that one of us (AS) unfortunately set out in 1984. The purpose of the present paper is to publish the data for the Palomar M81 novae. These data also permit discussion of the implications of the surface distributions of the novae over the face of M81, compared with similar data for M31, for a division of normal novae into at least two classes, both spectroscopically (Williams 1992, Della Valle & Livio 1998), and spatially in the Galaxy and in M31, M33, and LMC (Della Valle et al. 1992, 1994).
This division into separate populations is now widely believed to be caused by a difference in the mass distribution of the white dwarf progenitors of the novae, as discussed in section 4. ## 2 Novae as Distance and Binary Star Population Indicators With the discovery of the eclipsing light curve of the old nova DQ Her (Walker 1954, 1956) and the subsequent discovery of periodic radial velocity variations in many old and recurrent novae (Kraft 1964), and based on the mass transfer model for the U Gem cataclysmic variable AE Aqr (Crawford & Kraft 1956), Kraft (1959, 1963, 1964) argued that all normal novae occur in close binary systems. Walker’s (1968) subsequent discovery that the classical nova T Aur is also an eclipsing binary added to the evidence. “The model, in which a late type star is losing mass through the inner Lagrangian point to a compact companion, has become standard for cataclysmic variables” (Robinson 1976, Warner 1976). It has also become standard for normal novae (see Gallagher and Starrfield 1978 and Shara 1989 for reviews). The physical process leading to the large energy release in the outburst is known (with almost definitive certainty) to be a thermonuclear runaway caused by the ignition of hydrogen (burning into helium) once a critical mass of gas, accreted from the companion onto the surface of the white dwarf via the accretion disk, is reached. The model was developed over a three decade period by a number of authors. Entrance to the extensive early literature can be made through the defining papers of the process by Schatzman (1949, 1965), Starrfield, Sparks, & Truran (1975, 1976), Sparks, Starrfield, & Truran (1977a,b), and Prialnik, Shara, & Shaviv (1978, 1979), with references to the many other principal authors therein. More recent papers and reviews are by Shara (1989), Truran (1990), Livio (1992), Della Valle (1992), Della Valle & Livio (1995), and papers to be cited later herein, as well as the recent monograph of Warner (1995). Because all normal novae are close binaries with mass exchange, it is clear that novae will erupt in all galaxies at all epochs after which close, mass-exchange binaries, one of which is a white dwarf, have been formed. The thermonuclear runaway occurs when the degenerate hydrogen that has been accreted onto the white dwarf surface from the Roche-lobe secondary is compressed beyond a critical density on the surface. The thermonuclear heating from the nuclear reactions relieves the degeneracy, leading to rapid expansion and expulsion of the white dwarf envelope. An Eddington (or even super-Eddington) photon flux ensues, with an eventual decline of the light curve as envelope exhaustion occurs. Because the physics is well understood (Shara 1981a,b, 1989; Livio 1992), and because nova luminosities are so high at maximum light, the mere discovery of novae in external galaxies provides a powerful method to trace the close binary and white dwarf star populations in the parent galaxies. The division into two nova groups, depending on the mass of the white dwarf component, also provides a method to study the different evolutionary properties of the older bulge/thick disk and the younger outer thin disk/spiral arm populations, where the mass spectrum of the white dwarfs is expected to be different. ## 3 The M81 Novae Data ### 3.1 The Observing Record We list in Table 1 all 5-meter Palomar plates taken for this program, including those on which no M81 novae appear.
The table contains the plate number, observer, date taken, Julian date, plate quality, and the novae visible on each plate. ### 3.2 Photometry Magnitudes of the novae on the plates of M81 where they appear were determined using local magnitude sequences (not shown) that were set up near each nova or groups of adjacent novae. The sequences were transferred, and combined, from three separate master photoelectric sequences that had been determined earlier in other programs. These master sequences were in Selected Area 57 (unpublished but used extensively since 1952, based on data from a number of Mount Wilson and Palomar observers; see eg. Majewski, 1992, and Reid & Majewski, 1993). The two other primary photoelectric sequences are in NGC 2403 (Tammann & Sandage 1968), and Ho IX and M81 itself (Sandage 1984). The systematic reliability of the latter two sequences, at the level of 0.1 mag, has been verified by Metcalfe & Shanks (1991). Independent verification of the Ho IX master sequence, also to this level, was made by one of us (MS) with CCD images kindly supplied by George Jacoby. The accuracy of the transferred secondary sequences that were spread over the face of M81 was also tested by Judith Cohen (1984, unpublished), with the result that our secondary sequences here, upon which our nova photometry rests, have been confirmed in systematic accuracy to a level of 0.2 mag. This is sufficient for the present purposes. The B magnitudes of each nova visible on each plate, measured relative to the local magnitude sequences just described, are listed in Table 2. ### 3.3 Astrometry The positions of the 23 novae have been measured from the discovery plates using the two axis Grant machine at the Kitt Peak National Observatory. The B1950 and J2000 coordinates and the apparent radial and de-projected radial distance of each nova are listed in Table 3. Calculating the de-projected radial distance requires that we know the inclination and the orientation of M81 on the sky. Isophotes were fit to M81 using the program ellipse in the stsdas.analysis.isophote package within IRAF. This process also determines the ellipticity and the position angle (measured clockwise from the Y-axis of the CCD image) of each isophote. The inclination can be determined using the relationship $$\cos^2 i = \frac{(b/a)^2 - r_o^2}{1 - r_o^2}$$ (Tully & Fisher 1977; see also Hubble 1926 and Sandage et al. 1970), where $r_o = 0.2$ (this is the assumed axial ratio for a system seen completely edge-on). Since $e = 1 - \frac{b}{a}$ and the ellipticity from the isophote fitting is 0.4662, we find that $\frac{b}{a} = 0.5338$. It follows from the above equation that $i = 60.4°$. Tully and Fisher (1977) found $i = 58°$, which is in excellent agreement with our value. To correct for the inclination of the galaxy, the positions of the novae must first be transformed to the coordinate system of the galaxy and then deprojected. We have chosen the major axis as our X-axis and the minor axis as our Y-axis. The position of each nova is determined in the following way $$X = N_{ra}\cos\theta - N_{dec}\sin\theta$$ $$Y = \frac{N_{ra}\sin\theta + N_{dec}\cos\theta}{\cos i}$$ $N_{ra}$ is the distance of the nova from the center of M81 in arcseconds of right ascension ($N_{ra} = 15\cos(\mathrm{M81(DEC)})\,(\mathrm{Nova(RA)} - \mathrm{M81(RA)})$, with the right ascensions measured in seconds of time). $N_{dec}$ is the distance of the nova from the center of M81 in arcseconds of declination ($N_{dec} = \mathrm{Nova(DEC)} - \mathrm{M81(DEC)}$). $\theta$ is the angle of the major axis from West (-121.44 degrees). The distances in X and Y are added in quadrature to give the corrected radial distances for each nova presented in Table 3.
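A minimal sketch of this deprojection follows (our illustration: the function name is ours, and coordinates are taken in decimal degrees, so a factor of 3600 replaces the factor of 15 used above for right ascensions measured in seconds of time):

```python
import numpy as np

def deprojected_radius(nova_ra, nova_dec, m81_ra, m81_dec,
                       theta_deg=-121.44, incl_deg=60.4):
    """Deprojected radial distance (arcsec) of a nova from the center of M81.

    All coordinates in decimal degrees; theta_deg is the angle of the major
    axis from West, incl_deg the inclination of the disk.
    """
    # Offsets from the galaxy center in arcseconds (RA scaled by cos dec).
    n_ra = 3600.0 * np.cos(np.radians(m81_dec)) * (nova_ra - m81_ra)
    n_dec = 3600.0 * (nova_dec - m81_dec)

    th = np.radians(theta_deg)
    # Rotate into the galaxy frame (X along the major axis), then stretch the
    # minor-axis coordinate by 1/cos(i) to undo the projection.
    x = n_ra * np.cos(th) - n_dec * np.sin(th)
    y = (n_ra * np.sin(th) + n_dec * np.cos(th)) / np.cos(np.radians(incl_deg))
    return np.hypot(x, y)
```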
## 4 Spatial Distribution of Novae in Galaxies ### 4.1 Two Populations of Novae A strong debate on the spatial distribution of novae in galaxies, and the populations to which they belong, has appeared in the literature during the past decade. Ciardullo et al. (1987) and Capaccioli (1989) supported the view that most of the M31 novae are produced in the galaxy’s bulge. However, Della Valle et al. (1992, 1994, their Fig. 3), in discussing the distribution of Galactic novae relative to their distances above the galactic plane, and also concerning the frequency of novae in late type galaxies (M33, LMC) compared with bulge-dominated galaxies, concluded that younger, blue populations (outer disk and spiral arm regions and the fraction of the bulge population that is young) produce most of the novae per unit K-band luminosity in all galaxies, regardless of Hubble type. (We note, however, the recent criticism of the Della Valle et al. (1994) nova rates (because of normalization problems) by Shafter, Ciardullo, & Pritchet (1999).) Della Valle et al. (1992, 1994) showed that the division into the two population groups is also supported by the difference in the decline rate distributions of the light curves between early and late type star-producing galaxies, ie. M31 vs LMC and M33 (Della Valle et al. 1994, their Figs. 1 and 3). The supposition is that the brighter, faster novae are in the young population, where the white dwarf progenitor is expected to be of higher mass than in the older population. This is because the main sequence star that becomes a white dwarf in the outer disk and spiral arm populations presumably is of higher mass when it leaves the main sequence than the progenitor stars in the bulge/thick disk population, at least at the present epoch. This follows because there is a strong relation between the final white dwarf mass and the initial mass of the original star. The higher the initial mass, the higher will be the mass of the white dwarf remnant after evolution. This is the famous initial mass-final mass relation for white dwarfs, now apparently solved beyond reasonable doubt (Weidemann & Koester 1983, Fig. 1; 1984; Weidemann 1990), based in part on the central discovery of massive white dwarfs in the young Galactic cluster NGC 2516 (Reimers & Koester 1982), together with later discoveries of the same type. Because the luminosity of the nova outburst is a strong function of the WD mass (going as the cube of the mass; Shara 1981b; Livio 1992), the strength of the outburst and the decay rate of the light curve are expected to differ according to the mass of the envelope-exploding white dwarf. The spectroscopic differences between fast and slow novae (Williams 1992; Della Valle & Livio 1998), and the striking difference in the ejection velocities summarized in these papers (eg. Fig. 2 of the last reference), are also explained in this way. Observations of the novae in M51, M87, and M101 support this view (Shafter, Ciardullo, & Pritchet 1996). Furthermore, the nova rate per unit mass in a young, blue stellar population is expected to be higher than in an old, red population. Yungelson, Livio, & Tutukov (1997) predicted this higher rate because the massive white dwarfs produced in the young population need only accrete hydrogen from their companions for a relatively short time to reach the critical envelope mass and erupt as novae.
These authors also suggest that the apparent numerical dominance of Galactic bulge novae over Galactic disk novae is an observational selection effect: disk novae are more likely to be dimmed by dust than bulge novae, thereby reducing their observed frequencies. Monte Carlo simulations by Hatano et al. (1997a,b) on novae in M31 and in the Galaxy strongly suggest the above selection effect, and show the possibility of a true dominance of disk novae over bulge/thick disk novae. (The Hatano et al. result depends on the accuracy of their light plus dust model for M31, which still requires verification.) Nova-rate studies in galaxies of different Hubble type (Della Valle et al. 1994, their Fig. 3) support this view. ### 4.2 The M81 novae The apparent distribution of the 23 novae over the face of M81 is shown in Figure 1, overlaid on a KPNO service CCD image of the galaxy. The absence of novae within $70''$ of the center (the “nova hole”) is similar to that found by Hubble (1929), Arp (1956), and in the Asiago survey (Rosino 1964; Capaccioli et al. 1989). It is almost certainly due to discovery-incompleteness in the broad-band B surveys (eg. Ciardullo et al. 1987) near the center. The faintest detections of the M81 novae are at B $\sim 23.0$. We also note the detection at B = 22.9 of nova #15, one of the novae closest to the center of M81. The seven of the 23 novae in M81 that we have found in the central part of the M81 bulge are numbers 7, 9, 11, 14, 15, 18, and 24. These could be associated with the low mass white dwarf bulge population. However, as suggested by the Hatano et al. (1997a,b) simulations, many of the apparent bulge novae must also belong to the young spiral population. We note without comment that dusty spiral arms in M81 do in fact extend all the way into the central region of the M81 bulge (Fig. 2). The results of the Hatano et al. simulations are as follows. Assuming a range of bulge-to-disk novae and adopting the observed distribution of Galactic classical novae, Hatano et al. found that at least 67% (and more likely 89%) of the Galactic novae belong to the disk. They found a similar result for the M31 novae. Is the observed distribution of M81 novae in Figure 1 consistent with this finding? Consider first the standard method of analysis, used in many of the cited prior studies, based on cumulative spatial and light distributions. Figure 3 shows the data in Table 2 analyzed in several ways. The M81 B band isophotal light is shown as the dashed curve. The isophotal light outside the isophote at $70''$, the radius where our plates begin to detect novae, is shown as the dotted (lower) light curve. The largest difference between the ($70''$) isophotal light and the deprojected nova radial distribution is D = 0.23. The Kolmogorov-Smirnov statistic then states that the novae do not follow the galaxy light, but only with 80% confidence. Deprojection of the novae is only meaningful, of course, if the novae belong to the disk population. If the novae belong largely to the bulge population, then Figure 3 supports the view of Moses & Shafter (1993) that the distributions of light and novae in M81 are the same.
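The Kolmogorov-Smirnov comparison just described can be sketched as follows (an illustration only: the radii and enclosed-light fractions below are invented stand-ins for Table 3 and the ellipse fits, not our measured values):

```python
import numpy as np
from scipy import stats

# Deprojected nova radii in arcsec (hypothetical values for illustration).
radii = np.array([95., 120., 150., 180., 210., 260., 320., 400., 500.])

# Cumulative B-band isophotal light outside 70'', tabulated as
# (radius, enclosed-light fraction); these numbers are made up here.
r_grid = np.array([70., 100., 150., 200., 300., 400., 600.])
light_frac = np.array([0.00, 0.22, 0.45, 0.62, 0.82, 0.92, 1.00])

# KS test of the nova radii against the light profile, treating the
# interpolated light curve as the model CDF.
light_cdf = lambda r: np.interp(r, r_grid, light_frac)
D, p = stats.kstest(radii, light_cdf)
print(f"D = {D:.2f}, p = {p:.2f}")  # in the text, D = 0.23 (~80% confidence)
```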
The second demonstration that our sample contains many disk novae is the correction for incompleteness. Using the Ciardullo et al. (1987) data, from their much more complete survey of the center of M31 that detected novae through narrow band H alpha emission rather than broad-band continuum light, we can compare the number of their detected novae inside and outside the central region to calculate our incompleteness. The portion of M31 surveyed by Ciardullo et al. covered an area of 15 by 30 arc minutes along the minor and major axes of that galaxy respectively. M81, with $(m-M)_0 = 27.75$, is $\sim 4.5$ times farther away than M31 with $(m-M)_0 = 24.4$. Hence, the region surveyed in M81 that would be equivalent to that in M31 is 3.3 by 6.7 arc minutes along the M81 minor and major axes. In their complete H alpha survey, Ciardullo et al. found a total of 35 novae, of which 21 were within 5 arc minutes of the center of M31. This distance corresponds to the $70''$ radius of the “nova hole” in our photographic survey of M81 where we found no novae. Because the Ciardullo survey is beyond doubt virtually complete, whereas our broad-band survey, like those of Hubble, Arp, and the Asiago group (Rosino), is not, the Ciardullo ratios should closely define our incompleteness factors. Thus, we expect that we have missed 21/35 = 60% of the novae in the central 3.3 × 6.7 arc minute region of M81. We did find 11 objects in this area. Therefore we must have missed $\sim 17$ objects during the survey time. We also find that 13 of our 23 detected M81 novae lie outside the central 3.3 × 6.7 arc minute region. If we assume that our discovery is complete in this outer region, our complete survey, adding the $\sim 17$ novae assumed missed in the “nova hole”, should have detected $\sim (17+11+13) = 41$ novae. Having missed 17, we conclude that our total M81 survey is 17/41 = 41% incomplete. Expressed the other way, it was 59% complete. The incompleteness factor has been accounted for in Figure 4, which is the same as Figure 3 but with the 17 assumed novae missed within the central $70''$ radius added, assuming a radially uniform distribution of the undetected 17 central novae. Neither this radial distribution nor any other addition of 17 inner novae can bring the galaxy isophotal light and the nova radial distribution into agreement in Fig. 4. There is, however, yet another central fact in the argument. Figure 5 is the same as Figure 1 but with the area surveyed by Ciardullo et al. marked, showing that they could not have found novae in the outer disk and spiral arms in M31, novae that are unquestionably of the arm (high mass white dwarf progenitor) population. Using our statistics of 13 spiral-arm-population novae in M81 out of a total (completeness corrected) of 41 novae of both population types, we would expect that Ciardullo et al. have missed 13/41 = 31% of the total M31 nova population, almost all of which will be of the disk/spiral arm type. From the above arguments concerning completeness, we conclude that at most $(41-17)/41 = 59\%$ of all the novae in M81 are in the bulge. Given (1) the small number statistics, (2) the uncertainties in the dust models of Hatano et al. (1997a,b), (3) possible differences between the M81 and M31 dust and nova distributions, and (4) the assumption that the limiting absolute magnitude survey limits for M31 and M81 are similar, we cannot claim a stronger value for the bulge/arm ratio.
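The completeness bookkeeping of this section amounts to a few lines of arithmetic, sketched below with the numbers quoted above:

```python
# Completeness bookkeeping for the M81 survey, using the numbers in the text
# (35 and 21 from Ciardullo et al. 1987; 11 and 13 from the present survey).
n_m31_total, n_m31_inner = 35, 21               # M31 H alpha survey
frac_missed_inner = n_m31_inner / n_m31_total   # ~0.60 assumed missed in M81

n_m81_inner_found, n_m81_outer_found = 11, 13
# If the 11 found represent the surviving ~40%, the inner region held ~28.
n_inner_true = n_m81_inner_found / (1 - frac_missed_inner)
n_missed = n_inner_true - n_m81_inner_found                  # ~17
n_total = n_m81_inner_found + n_m81_outer_found + n_missed   # ~41
print(f"missed ~{n_missed:.0f}, total ~{n_total:.0f}, "
      f"completeness ~{(n_total - n_missed) / n_total:.0%}")  # ~59%
```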
However, Figures 3 and 4 and the work of Hatano et al. support the value we have given, which implies an appreciable outer disk/spiral arm nova population consistent with the two-population dichotomy of Duerbeck (1990), Della Valle et al. (1992, 1994), Williams (1992), and others cited in the above discussion. It must be mentioned that the surveys of M31 by Arp and by the Asiago group (eg. Rosino 1964) cover a much larger region than the survey of Ciardullo et al., reaching to greater than 30 arc minutes radius from the center, therefore encompassing more of the M31 spiral pattern. There is indeed evidence, first set out by Arp (1956, his Fig. 36), for a bimodal distribution of magnitude at maximum (see also Della Valle & Livio 1998, their Fig. 4, which also is from Arp) for the M31 novae. This distribution is also separated into the fast and slow groups, as are the characteristics of the novae in early and late type spirals (Della Valle et al. 1994, their Fig. 1). ## 5 A Nova Distance to M81 The Palomar survey of M81 for novae was not sufficiently dense to unambiguously determine the nova magnitudes at maximum light, or the decay rates for the MMRD relation (eg. Arp 1956; van den Bergh 1975; Della Valle et al. 1994) that is necessary to determine a nova distance, with one notable exception. Table 2 for the photometry shows that nova #15 was discovered within one day of maximum and was followed for 35 days thereafter, giving a good determination of the decay rate. Figure 6 shows the light curve. The least-squares slope using the five observed data points is $0.0461 \pm 0.0061$ B mag/day. This corresponds to a time of decline by three magnitudes of $t_3 = 65.1$ days. The MMRD relation, calibrated elsewhere (Shara 1981b), is $M_B(\mathrm{max}) = -10.1 + 1.57\log t_3$, giving $M_B = -7.25 \pm 0.10$ for nova #15. From $B(\mathrm{max}) = 20.6$, and using an estimated absorption of either 0.1 mag (Sandage & Tammann 1987, column 13, noting that nova #15 is in the bulge and is assumed to suffer no internal absorption within M81) or 0.41 mag (Peimbert & Torres-Peimbert 1981), we obtain $(m-M)_0 = 27.75$ or $(m-M)_0 = 27.44$. The former is identical to the M81 Cepheid distance (Freedman et al. 1994). The true uncertainty using nova #15 is, of course, at least as large as 0.31 mag because of the highly uncertain absorption.
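For concreteness, the MMRD chain of this paragraph can be reproduced in a few lines (a sketch using only the quantities quoted above):

```python
import numpy as np

# MMRD distance from nova #15: decline rate from the light curve, t3,
# M_B(max) = -10.1 + 1.57 log t3, and then the distance modulus.
slope = 0.0461                       # B mag/day, least-squares decline rate
t3 = 3.0 / slope                     # days to decline by 3 mag -> 65.1
M_B = -10.1 + 1.57 * np.log10(t3)    # -> -7.25
B_max = 20.6
for A_B in (0.1, 0.41):              # the two absorption estimates quoted
    print(f"A_B = {A_B:4.2f}:  (m-M)_0 = {B_max - M_B - A_B:.2f}")
# -> 27.75 and 27.44, matching the values in the text
```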
An independent estimate of the nova distance to M81 comes from a comparison of the distribution of the magnitudes in Table 2 with similar data for novae in M31. The brightness distributions in the two galaxies, down to the completeness limit of the novae in M81, are expected to be similar if (1) the sampling frequencies are similar, (2) the sample sizes are similar, and (3) the novae are drawn from the same populations. (Note that a potential fourth factor - metallicity - is ignorable. Nova eruptions are independent of the metallicity of the parent galaxies and/or the nova environment because the enrichments of CNO and the O-Ne-Mg elements by factors of between 5 and 50 are produced by hydrogen envelope mixing with the underlying white dwarf (Shara 1981b).) Each of the three requirements appears to be approximately fulfilled in the case of the M81 program compared with the M31 program of Ciardullo et al. (1987). (1) For both galaxies, a few observing runs of several nights’ length typically occurred each year, so the sampling frequencies are similar. (2) The 35 novae from Ciardullo et al. are only $\sim 1.5$ times more numerous than the 23 novae in the present survey - a difference that is not very significant in this context. (3) As we noted in section 4, our M81 nova sample is incomplete in the central part of the galaxy. The M31 survey of Ciardullo et al. is incomplete in the outer part of the galaxy, dominated by disk novae. However, if disk novae are equally significant in both galaxies even in the bulge, then the brightness distributions of the total samples are likely to be similar. Nevertheless, large, complete magnitude-limited samples of novae over the entire extents of M81 and M31 are required to confirm the results below. These apparent (ie. as actually observed in the discovery programs) brightness distributions for both galaxies are plotted in Fig. 7. The top distribution is from the M31 Ciardullo et al. sample. The bottom is from our M81 data in Table 2. Two important features of these distributions are useful in comparing the two samples. (1) The brightness of the single brightest nova in each sample is indicative of the most massive (disk) white dwarf in each sample. (2) The rapid increase in the number of nova detections that are, say, 1.4 mag fainter than the brightest nova is likely due to the ease of discovering these novae, which must be near-Eddington luminosity objects that remain close to their maximum brightness for several weeks. Figure 7 shows that, on the precepts set out above, the differential distance modulus between M81 and M31 is $3.4 \pm 0.3$ mag (remember Hubble’s value of 3.8). Adopting the true (absorption free) modulus for M31 as $(m-M)_0 = 24.26$ from IR photometry (Welch et al. 1986), and assuming similar modest internal absorptions (0.3 mag) in the bulge of each galaxy, we derive $(m-M)_0 = 27.6 \pm 0.3$ for M81. Although far from definitive, this value is in good agreement with the MMRD distance derived above from M81 nova #15 of $(m-M)_0 = 27.75$, and with the Cepheid distance (Freedman et al. 1994) of $(m-M)_0 = 27.8$.
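A toy version of this comparison is sketched below (the magnitude lists are invented stand-ins for the Fig. 7 samples, and minimizing the two-sample KS statistic over a trial shift is our own formalization of the visual match, not necessarily the procedure actually used for Fig. 7):

```python
import numpy as np
from scipy import stats

# Apparent B magnitudes of novae as observed (hypothetical values standing
# in for the M31 and M81 samples of Fig. 7).
m31 = np.array([15.9, 16.3, 16.8, 17.0, 17.2, 17.5, 17.8, 18.1, 18.4])
m81 = np.array([19.5, 19.8, 20.3, 20.6, 20.9, 21.2, 21.6, 22.0, 22.4])

# Slide the M31 distribution in distance modulus and find the shift that
# makes the two samples most alike in the KS sense.
shifts = np.arange(2.0, 5.0, 0.05)
ks = [stats.ks_2samp(m31 + s, m81).statistic for s in shifts]
best = shifts[int(np.argmin(ks))]
print(f"differential modulus ~ {best:.1f} mag")   # the text finds 3.4 +/- 0.3
```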
As to the general use of novae in external galaxies as distance indicators, we only note here, as have others, that the brightest novae are $\sim 3$ mag brighter than the “average” Cepheids. Furthermore, the physics of the nova eruption is now understood via the well-defined MMRD relation, both from theory (Shara 1981a,b, 1989; Livio 1992) and from observation (McLaughlin 1945; Arp 1956; Cohen 1985; Della Valle et al. 1994, and many others). Therefore, because there are now procedures to make use of novae (here, and eg. Pritchet & van den Bergh 1987), there is no question that surveys of normal novae in distant galaxies, done with an understanding of the nova systematics now known, will be one of the more important observational programs in the future that will help to carry the quest for the local extragalactic distance scale to completion. We thank George Jacoby and Debra Wallace for obtaining KPNO CCD images of M81 to permit us to test the M81 photoelectric sequences that had been set up by AS previously in Ho IX and M81 itself at Palomar. AS thanks Judith Cohen for her independent (unpublished) testing, using CCD technology, of many of the local magnitude sequences over the face of M81 in 1984. MS thanks Ed Carder for setup assistance at the KPNO two axis Grant measuring engine, and Mike Potter of STScI for assistance with data reductions. John Bedke made the reproductions of the original Hubble/Sandage finder charts, for which we are grateful. The commitment of the early Mount Wilson/Palomar “nebular group” to the M81 nova campaign in the first decade of the 1950s at Palomar is evident. In that regard, AS is grateful to the Palomar mountain crew, from the night assistants to all mountain personnel, for their crucial work behind the scenes during the observing periods in that heady epoch nearly 50 years ago in which the data discussed here were obtained.
# Correlations in the QPO Frequencies of Low Mass X-Ray Binaries and the Relativistic Precession Model ## 1 Introduction Old accreting neutron stars (NSs) in low mass X-ray binaries (LMXRBs) display a complex variety of quasi-periodic oscillation (QPO) modes in their X-ray flux. The low frequency QPOs ($\sim 1-100$ Hz) were discovered and studied in the eighties (for a review see van der Klis 1995). In high luminosity Z-sources these QPOs were further classified into horizontal, normal and flaring branch oscillations (HBOs, NBOs and FBOs, respectively), depending on the instantaneous position occupied by a source in the X-ray colour-colour diagram. The kHz QPOs (range of $\sim 0.2$ to $\sim 1.25$ kHz) that were revealed and investigated with RXTE (van der Klis 1998, 1999a and references therein) in a number of NS LMXRBs involve timescales similar to the dynamical timescales close to the NS. A common phenomenon is the presence of a pair of kHz QPOs (centroid frequencies of $\nu_1$ and $\nu_2$) which drift in frequency while maintaining their difference frequency $\Delta\nu = \nu_2 - \nu_1 \simeq 250-350$ Hz approximately constant. kHz QPOs show remarkably similar properties across NS LMXRBs of the Z and Atoll groups, the luminosity of which differs by a factor of $\sim 10$ on average. During type I bursts from seven Atoll sources, a nearly coherent signal at a frequency of $\nu_{burst} \simeq 350-600$ Hz has also been detected. Four of these sources also display a pair of kHz QPOs. While in two of these $\nu_{burst}$ appears to be consistent, to within the errors, with the frequency separation of the kHz QPO pair $\Delta\nu$, or twice its value $2\Delta\nu$, there are currently two sources (4U1636-53, Mendez et al. 1999, and 4U1728-34, Mendez & van der Klis 1999) for which $\nu_{burst}$ is significantly different from $\Delta\nu$ and its harmonics. The $\sim 15-60$ Hz HBO frequency, $\nu_{HBO}$, shows an approximately quadratic dependence ($\propto \nu_2^2$) on the higher kHz QPO frequency that is observed simultaneously in many Z-sources (Stella & Vietri 1998b; Psaltis et al. 1999). A similar $\propto \nu_2^2$ dependence holds also for the centroid frequency of QPOs or peaked noise components of several Atoll sources, suggesting a close analogy with HBOs (Stella & Vietri 1998a; Ford & van der Klis 1998; Homan et al. 1998). Evidence for an equivalent to the NBOs/FBOs of Z-sources has been found in two Atoll sources (Wijnands et al. 1999; Psaltis, Belloni & van der Klis 1999, hereafter PBV). The origin of these QPOs is still debated. Beat frequency models (BFMs) require that interactions at two distinct radii take place simultaneously. These involve disk inhomogeneities at the magnetospheric boundary and the sonic point radius which are accreted at the beat frequency between the local Keplerian frequency, $\nu_\varphi$, and the NS spin frequency, $\nu_s$, giving rise to the HBOs (Alpar & Shaham 1985; Lamb et al. 1985) and the lower frequency kHz QPOs, at $\nu_1$ (Miller et al. 1998a), respectively. The higher frequency kHz QPOs are attributed to the Keplerian motion ($\nu_2 = \nu_\varphi$) at the sonic point radius. Therefore, the frequency separation $\Delta\nu$ yields the NS spin frequency ($\nu_s = \Delta\nu$).
Simple BFMs are not applicable to black hole candidates (BHCs), because the no hair theorem excludes the possibility that an offset magnetic field or radiation beam can be stably anchored to the BH, as required to produce the beating with the disk Keplerian frequency. A variety of alternative QPO models has been proposed (for a list see PBV). In the relativistic precession model (RPM) the QPO signals arise from the fundamental frequencies of the motion of blobs in the vicinity of the NS. While the higher frequency kHz QPOs (as in other models) correspond to the Keplerian frequency of the innermost disk regions, the lower frequency kHz QPOs originate in the relativistic periastron precession of (slightly) eccentric orbits, and the HBOs in the nodal precession of (slightly) tilted orbits in the same regions (Stella & Vietri 1998a, 1999). Within this model both the quadratic dependence of the HBO frequency on $\nu_2$ and the decreasing separation $\Delta\nu$ for increasing $\nu_2$, observed in several Z and Atoll sources, are naturally explained. (The near coincidence of $\Delta\nu$ and $\nu_{burst}$, or $2\nu_{burst}$, in the framework of the RPM will be discussed elsewhere.) We note that the RPM can be applied to BHCs as well. In a recent study aimed at classifying the QPOs and peaked noise components of NS and BHC LMXRBs, Psaltis, Belloni & van der Klis (PBV) identified two components the frequency of which follows a tight correlation over nearly three decades. This correlation (hereafter the PBV correlation) involves both NS and BHC LMXRBs spanning different classes and a wide range of luminosities (see the points in Fig. 1). In kHz QPO NS systems, these components are the lower frequency kHz QPOs, $\nu_1$, and the low frequency, HB or HB-like QPOs, $\nu_{HBO}$. For BHC systems and lower luminosity NS LMXRBs the correlation involves either two QPOs or a QPO and a peaked noise component. In all cases the frequency separation is about a decade and an approximate linear relationship ($\nu_{HBO} \propto \nu_1^{0.95}$) holds. Note that the QPO frequencies of the peculiar NS system Cir X-1 vary over nearly a decade while closely following the PBV correlation and bridging its low and high frequency ends. PBV noted also that the $\nu_2$ vs. $\nu_1$ relations of different Atoll and Z-sources line up with good accuracy (cf. also Psaltis et al. 1998). If confirmed, the strong similarity of the QPO (and peaked noise) properties across NS and BHC systems unveiled by the results of PBV holds the potential to tightly constrain theoretical models for the QPO phenomenon. In this letter we show that the predictions of the RPM accurately match the PBV correlation, without resorting to additional assumptions. ## 2 The Relativistic Precession Model For the sake of simplicity we consider here blobs moving along infinitesimally eccentric and tilted test-particle geodesics. In the case of a circular geodesic in the equatorial plane ($\theta = \pi/2$) of a Kerr black hole of mass $M$ and specific angular momentum $a$, we have for the coordinate frequency measured by a static observer at infinity (Bardeen et al. 1972) $$\nu_\varphi = \pm M^{1/2}r^{-3/2}\left[2\pi\left(1 \pm aM^{1/2}r^{-3/2}\right)\right]^{-1}$$ (1) (we use units such that $G = c = 1$). $\nu_\varphi$ deviates from its Keplerian value at small radii. The upper sign refers to prograde orbits.
If we slightly perturb a circular orbit introducing velocity components in the $`r`$ and $`\theta `$ directions, the coordinate frequencies of the small amplitude oscillations within the plane (the epicyclic frequency $`\nu _r`$) and in the perpendicular direction (the vertical frequency $`\nu _\theta `$) are given by (Okazaki, Kato & Fukue 1987; Kato 1990) $$\nu _r^2=\nu _\varphi ^2(1-6Mr^{-1}\pm 8aM^{1/2}r^{-3/2}-3a^2r^{-2}),$$ (2) $$\nu _\theta ^2=\nu _\varphi ^2(1-4aM^{1/2}r^{-3/2}+3a^2r^{-2}).$$ (3) In the Schwarzschild limit ($`a=0`$) $`\nu _\theta `$ coincides with $`\nu _\varphi `$, so that the nodal precession frequency $`\nu _{nod}\equiv \nu _\varphi -\nu _\theta `$ is identically zero. $`\nu _r`$, on the other hand, is always lower than the other two frequencies, reaching a maximum for $`r=8M`$ and going to zero at $`r_{ms}=6M`$, the radius of the marginally stable orbit. This qualitative behaviour of the epicyclic frequency is preserved in the Kerr field ($`a\ne 0`$). Therefore the periastron precession frequency $`\nu _{per}\equiv \nu _\varphi -\nu _r`$ is dominated by a “Schwarzschild” term over a wide range of parameters (cf. Stella & Vietri 1999). In the RPM the higher and lower frequency kHz QPOs are identified with $`\nu _2=\nu _\varphi `$ and $`\nu _1=\nu _{per}`$, respectively, whereas the HBOs at $`\nu _{HBO}`$ are identified with the second harmonic of $`\nu _{nod}`$ (cf. Stella & Vietri 1998a, 1999). In fact, in those few Atoll LMXRBs in which $`\nu _s`$ can be inferred from $`\nu _{burst}`$, using $`\nu _{HBO}=2\nu _{nod}`$ (instead of $`\nu _{nod}`$) provides a fairly good match (Morsink & Stella 1999). The geometry of a tilted inner disk might be such that a stronger signal is produced at the even harmonics of the nodal precession frequency (e.g. Psaltis et al. 1999). By analogy, we assume that also in BHCs $`\nu _{HBO}\simeq 2\nu _{nod}`$. It should be noticed that, within the RPM, $`\nu _s`$ can be inferred only indirectly (i.e., by fitting $`\nu _{HBO}`$) for those NS systems that do not display coherent pulsations or burst oscillations. Overall the RPM yields a wider distribution of spin frequencies than inferred in BFMs based on $`\mathrm{\Delta }\nu `$ (cf. Stella et al. 1999). A magnetosphere is not required and, if present, must be such that the motion of the blobs is perturbed only marginally, implying magnetic fields $`\sim 10^8`$–$`10^9`$ G. Fig.1A shows $`2\nu _{nod}`$ and $`\nu _\varphi `$ as a function of $`\nu _{per}`$ for corotating orbits and selected values of $`M`$ and $`a/M`$. The high frequency end of each line is dictated by the orbital radius reaching $`r_{ms}`$ (where $`\nu _r=0`$, i.e. $`\nu _\varphi =\nu _{per}`$). It is apparent that $`r_{ms}`$ depends mainly on $`M`$ and to a lesser extent on $`a/M`$. The separation of the lines in Fig. 1A shows that while $`\nu _{nod}`$ depends weakly on the mass and more strongly on $`a/M`$, the opposite is true for $`\nu _\varphi `$. By taking the weak field ($`M/r\ll 1`$) and slow rotation ($`a/M\ll 1`$) limit of Eqs. (1)-(3) the relevant first order dependence is made explicit (we use here $`m=M/M_{\odot }`$, with $`\nu _{per}`$ in Hz) $$\nu _\varphi \simeq (2\pi )^{-2/5}3^{-3/5}M^{-2/5}\nu _{per}^{3/5}\simeq 33m^{-2/5}\nu _{per}^{3/5}\mathrm{Hz},$$ (4) $$\nu _{nod}\simeq (2/3)^{6/5}\pi ^{1/5}(a/M)M^{1/5}\nu _{per}^{6/5}\simeq 6.7\times 10^{-2}(a/M)m^{1/5}\nu _{per}^{6/5}\mathrm{Hz}.$$ (5) In the case of rotating NSs the stellar oblateness introduces substantial modifications relative to a Kerr spacetime.
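Before including these corrections, it is instructive to evaluate Eqs. (1)-(5) numerically. The following Python sketch is an illustration, not part of the original Letter; the only inputs not taken from the text are the solar-mass time scale $`GM_{\odot }/c^3=4.925\times 10^{-6}`$ s and the illustrative test values $`r=30M`$, $`a/M=0.2`$. It computes the coordinate frequencies for prograde circular orbits, verifies the Schwarzschild benchmarks quoted above ($`\nu _r=0`$ at $`r_{ms}=6M`$, maximum near $`r=8M`$) and checks the weak field scalings of Eqs. (4)-(5):

```python
import numpy as np

T_SUN = 4.925e-6  # GM_sun/c^3 in seconds (geometric units, G = c = 1)

def kerr_freqs(r, m, a):
    """Eqs. (1)-(3): coordinate frequencies in Hz for a prograde circular
    orbit at radius r (in units of M), mass m (solar masses), spin a = a/M."""
    M = m * T_SUN
    nu_phi = 1.0 / (2.0 * np.pi * M * (r**1.5 + a))          # Eq. (1)
    nu_r = nu_phi * np.sqrt(max(1 - 6/r + 8*a/r**1.5 - 3*a**2/r**2, 0.0))
    nu_th = nu_phi * np.sqrt(1 - 4*a/r**1.5 + 3*a**2/r**2)   # Eq. (3)
    return nu_phi, nu_r, nu_th

m = 1.95  # NS mass adopted in Fig. 1B

# Schwarzschild benchmarks: nu_r vanishes at r_ms = 6M and peaks near r = 8M
for r in (6.0, 7.0, 8.0, 9.0):
    nu_phi, nu_r, _ = kerr_freqs(r, m, 0.0)
    print(f"r = {r}M: nu_phi = {nu_phi:6.1f} Hz, nu_r = {nu_r:6.1f} Hz")

# Weak-field check of Eqs. (4)-(5) at r = 30M, a/M = 0.2
nu_phi, nu_r, nu_th = kerr_freqs(30.0, m, 0.2)
nu_per, nu_nod = nu_phi - nu_r, nu_phi - nu_th
print(nu_phi, 33 * m**-0.4 * nu_per**0.6)           # both ~100 Hz
print(nu_nod, 6.7e-2 * 0.2 * m**0.2 * nu_per**1.2)  # both ~0.24 Hz
```

The printed weak-field values agree with Eqs. (4)-(5) to within a few per cent, as expected from the first order expansion.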
Approximate expressions have been worked out to calculate the precession frequencies arising from the quadrupole component of the NS field (cf. Morsink & Stella 1999; Stella & Vietri 1999). These approximations, however, break down for NSs spinning within a factor of $`\sim 2`$ from breakup. In order to investigate a wide range of NS spin frequencies, we adopted a numerical approach and computed the spacetime metric of the NS using Stergioulas’ (1995) code, an equivalent of that of Cook et al. (1992) (cf. also Stergioulas and Friedman 1995). $`\nu _{nod}`$ was calculated according to the prescription of Morsink & Stella (1999), whereas $`\nu _{per}`$ was derived from the epicyclic frequency $`4\pi ^2\nu _r^2=d^2V_{\mathrm{eff}}/dr^2`$, with $`V_{\mathrm{eff}}`$ the effective potential. Numerical results are shown in Fig.1B for a NS mass of 1.95 $`M_{\odot }`$, a relatively stiff equation of state (EOS AU, cf. Wiringa et al. 1988) and $`\nu _s=`$ 300, 600, 900 and 1200 Hz (corresponding to $`a/M=`$ 0.11, 0.22, 0.34 and 0.47, respectively). While also in this case $`\nu _\varphi `$ depends only very weakly on $`\nu _s`$, the approximately linear dependence of $`\nu _{nod}`$ on $`\nu _s`$ is apparent. Note that the approximate scalings in Eqs. (4)-(5) remain valid over a wide range of parameters. Only for the largest values of $`\nu _{per}`$ and $`\nu _s`$ does $`\nu _{nod}`$ depart substantially from the $`\nu _{per}^{6/5}`$ dependence. The measured QPO and peaked noise frequencies giving rise to the PBV correlation, together with the higher kHz QPO frequencies from NS systems ($`\nu _2`$), are also plotted in Fig.1B, to allow for a comparison with the model predictions. The agreement over the range of frequencies spanned by each kHz QPO NS system should not be surprising: together with the accurate matching of the corresponding $`\nu _1`$–$`\nu _2`$ relationship in Z-sources, this is indeed part of the evidence on which the RPM model was proposed (Stella & Vietri 1998a, 1999). However, the fact that the dependence of $`\nu _{nod}`$ on $`\nu _{per}`$ matches the observed $`\nu _{HBO}`$–$`\nu _1`$ correlation to a good accuracy over $`\sim 3`$ decades in frequency (down to $`\nu _1`$ of a few Hz), encompassing both NS and BHC systems, is new and provides additional support in favor of the RPM. The observed variation of $`\nu _{HBO}`$ and $`\nu _1`$ in individual sources (Cir X-1 is the most striking example) provides further evidence in favor of the scaling predicted by the RPM. The agreement of $`\nu _\varphi `$ and $`\nu _{per}`$ with the observed $`\nu _2`$–$`\nu _1`$ relation is also accurate. In the case of NS LMXRBs, a good matching is obtained for relatively stiff EsOS (e.g. AU as in Fig.1 and UU), which can also achieve the relatively high masses ($`m\simeq 1.8`$–$`2.0`$) that are required to match the observed $`\nu _1`$ and $`\nu _2`$ values (on the contrary, soft or very stiff EsOS do not produce acceptable results). For EOS AU and $`m=1.95`$, the $`\nu _{HBO}`$–$`\nu _1`$ values of most NS LMXRBs are best matched for $`\nu _s`$ in the $`\sim 600`$ to 900 Hz range. It is apparent from Fig.1B that $`\nu _s`$ as low as $`\sim 300`$ Hz are required for a few Atoll sources (notably 4U1608-52 and 4U1728-34). The values above are close to the range of $`\nu _s\simeq 350`$–$`600`$ Hz inferred from $`\nu _{burst}`$ in a number of Atoll sources (van der Klis 1999).
Z-type LMXRBs appear to require $`\nu _s`$ in the $`\sim 600`$ to 900 Hz range, a possibility that is still open, since for none of these sources does there yet exist a measurement of $`\nu _s`$ based on burst oscillations or coherent pulsations (cf. also Stella, Vietri & Morsink 1999). A word of caution is in order concerning the Atoll source 4U1728-34, which displays two distinct branches, one similar to that of Z-sources, the other characterised by a $`\nu _{HBO}`$ a factor of $`\sim 2`$ lower, and flips from one branch to the other (Ford & van der Klis 1998). (Only the lower branch is plotted in Fig.1, as in the upper branch only the QPOs at $`\nu _{HBO}`$ and $`\nu _2`$ have been observed so far.) While this behaviour may be peculiar to this source, still it suggests that, if the lower branch corresponds to $`\nu _{HBO}\simeq 2\nu _{nod}`$ (in agreement with the $`\nu _s\simeq 360`$ Hz inferred from $`\nu _{burst}`$ and the idea that only even harmonics of the nodal precession signal are generated), then for the upper branch $`\nu _{HBO}\simeq 4\nu _{nod}`$. One could further speculate that the same holds for sources closely following the PBV correlation, Z-sources in particular, such that their $`\nu _s`$ might be expected in the $`300`$–$`600`$ Hz range. For BHC LMXRBs the scatter around the PBV correlation implies values of $`a/M`$ in the $`0.1`$–$`0.4`$ range (see Fig.1A). Note that so far no evidence has been found for a BHC QPO (or peaked noise) signal that is the equivalent of the higher kHz QPOs (at $`\nu _2`$) of NS systems. The points from XTE J1550-564, while inconsistent with any single value of $`a/M`$, might lie along two distinct branches, separated by a factor of $`\sim 2`$ in $`\nu _{HBO}`$, similar to the case of 4U1728-34. ## 3 Discussion Interpreting the PBV correlation within the RPM implies that both NS and BHC systems share relatively high values of their specific angular momentum ($`a/M\simeq 0.1`$–$`0.3`$ for BHCs or $`\nu _s\simeq 300`$–$`900`$ Hz for relatively stiff EOS NSs). An interesting feature of the $`2\nu _{nod}`$ vs. $`\nu _{per}`$ curves obtained from relativistic rotating NS models (see Fig. 1B) is that for high values of $`\nu _{per}`$ the increase of $`\nu _{nod}`$ for increasing $`\nu _s`$ tends to saturate, such that the curves for relatively high values of $`\nu _s`$ are closely spaced. This results from the increasingly important role of the (retrograde) quadrupole nodal precession relative to the (prograde) frame dragging nodal precession. This might help explain the relatively narrow range of $`\nu _{HBO}`$ in Z-sources, if these possess $`\nu _s`$ in excess of $`\sim 500`$–$`600`$ Hz. Population synthesis calculations show that a significant fraction of low magnetic field, relatively stiff EOS NSs in LMXRBs can be spun up by accretion torques to $`\nu _s\sim 1`$ kHz, provided that $`>0.3`$–$`0.4M_{\odot }`$ are accreted (Burderi et al. 1999). The combination of the relatively high NS mass and spin frequency required by the RPM is at least qualitatively in agreement with evolutionary scenarios for LMXRBs. Detecting the lower frequency end of the $`\nu _2`$–$`\nu _1`$ correlation in NS systems would provide an important test of the RPM interpretation. In the Kerr metric, the increase in $`\nu _{nod}`$ for increasing values of $`a/M`$ shows no signs of saturation (Fig.1A). BHCs can in principle achieve extreme values of $`a/M\sim 1`$ (NSs are instead limited to values of $`<0.6`$–$`0.7`$, cf. Salgado et al. 1994).
In practice spin-up to high values of $`a/M`$ is unlikely to occur in BHC LMXRBs, because of the limited mass that can be accreted from the donor star (King & Kolb 1999). In the RPM the higher frequency BHC QPOs (e.g. the $`\sim 300`$ Hz QPOs of GRO 1655-40) are interpreted in terms of $`\nu _1=\nu _{per}`$ (while the lower frequency QPOs have $`\nu _{HBO}=\nu _{nod}`$), implying $`a/M\simeq 0.1`$–$`0.3`$, in agreement with accretion-driven spin-up scenarios. This is at variance with the high values of $`a/M`$ ($`0.95`$ for GRO 1655-40) derived from the $`\nu _1=\nu _{nod}`$ interpretation of Cui et al. (1998). The detection of a QPO signal at $`\nu _\varphi `$ in BHC systems (i.e. the equivalent of the higher kHz QPO at $`\nu _2`$ in NS systems) could provide an additional test of the RPM. Due to the $`\nu _\varphi \propto M^{-2/5}`$ scaling (cf. Eq. 4 and Fig.1A), the frequency of these BHC QPOs should be somewhat lower than the $`\nu _2`$ of NS systems. The case of the $`\sim 300`$ Hz QPOs from GRO 1655-40 is especially interesting in this respect: from Fig.1A it is apparent that the relevant points lie close to the high frequency end of the $`a/M=0.1`$, $`m=7`$ line. Since the mass of the BHC in GRO 1655-40 determined through optical observations is $`\sim 7M_{\odot }`$ (Orosz & Bailyn 1997; Shahbaz et al. 1999), we conclude that, according to the RPM, $`\nu _{per}\simeq 300`$ Hz close to the marginally stable orbit, where $`\nu _\varphi \simeq \nu _{per}`$. This suggests that an additional QPO signal at $`\nu _2=\nu _\varphi `$ might well be close to or even blended with the QPO peak at $`\nu _1`$. The detection of two nearby or even partially overlapping QPO peaks close to $`300`$ Hz in GRO 1655-40 would therefore provide further evidence in favor of the RPM interpretation. Similar considerations might apply to the $`284`$ Hz QPOs of XTE J1550-564 (Homan et al. 1999), provided its BHC mass is in the $`\sim 7M_{\odot }`$ range. In the RPM the QPO signals at $`\nu _{nod}`$ and $`\nu _{per}`$ are produced over a limited range of radii. For kHz QPO NS systems these radii are remarkably close to the marginally stable orbit. In the model discussed here the eccentricity and tilt angle are allowed only very small values, such that the QPO frequency variations of a given source are ascribed entirely to variations of the radius at which the signals are produced. As shown in section 2, this is sufficient to interpret the salient features of the $`\nu _{HBO}`$ and $`\nu _2`$ vs. $`\nu _1`$ correlations. Corrections for non-negligible eccentricities or tilt angles are small. In the weak field and slow rotation limit, $`\nu _\varphi `$, $`\nu _{per}`$ and $`\nu _{nod}`$ are independent of the tilt angle, while a finite eccentricity $`e`$ would give rise to factors of $`(1-e^2)^{3/5}`$ and $`(1-e^2)^{3/10}`$ on the right hand side of Eqs. 4 and 5, respectively. The corresponding corrections amount to $`<16`$% for $`e<0.5`$. The orbital radius is approximately given by $`r/M\simeq 100m^{-2/5}\nu _{per}^{-2/5}`$. To interpret in this context the lowest observed values of $`\nu _1`$, i.e. $`\sim 2`$ and $`\sim 10`$ Hz in NS and BHC systems respectively, radii as large as $`r/M\sim 30`$ are required. While $`4`$–$`5`$ times larger than the marginally stable orbit and/or the NS, these radii are still small enough that the gravitational energy released locally can account for the observed QPO amplitudes. At present we can only speculate on the physical mechanism determining the radius at which the QPOs are produced and its variation in each source and across different sources.
This radius must decrease for increasing mass accretion rates, as there is much evidence that in Atoll and Z sources the frequency of the kHz QPOs is a reliable indicator of the mass accretion rate. Many NS and BHC LMXRBs display two-component X-ray spectra consisting of a soft thermal component, usually interpreted in terms of emission from an optically thick accretion disk, and a harder, often power-law-like component, that might originate in a hot inner disk region. As the rms amplitude of kHz QPOs and HBOs increases steeply with energy, QPOs are likely associated with the hard spectral component. One possibility is that the QPOs originate at the transition radius between the optically thick disk and the hot inner region, perhaps as a result of occultation by orbiting blobs. A further exploration of this idea is beyond the purpose of this Letter. We note, however, that variations of the inner radius of the optically thick disk have been inferred in a few NS and BHC systems (notably GRS 1915+105, Belloni et al. 1997; see also the case of 4U 0614+091, Ford et al. 1997) through spectral variability analysis. The resonant blob z-oscillations discussed by Vietri & Stella (1998) do occur at well defined radii, but are not relevant to BHCs as they require an offset magnetic field anchored to the accreting star. In summary the RPM, unlike other models (e.g. BFMs), holds for BHCs as well as NSs; its application to the striking correlation in the QPO (and/or peaked noise) frequencies of NS and BHC LMXRBs gives remarkably good results. LS acknowledges useful discussions with T. Belloni, D. Psaltis and M. van der Klis.
## 1 Introduction More than 30 years after their detection by the VELA satellites, we are only now starting to understand the physics of gamma–ray bursts (GRB). This has been made possible by the precise locations provided by the Wide Field Camera (WFC) of $`Beppo`$SAX, which allowed the detection of their X–ray afterglow emission (Costa et al. 1997) and the optical follow-up observations, leading to the discovery that they are cosmological sources (van Paradijs et al. 1997). The huge energy and power releases required by their cosmological distances support the fireball scenario (Cavallo & Rees 1978; Rees & Mészáros 1992; Mészáros & Rees 1993), even if we do not yet know which kind of progenitor produces the GRB phenomenon. The most accepted picture for the burst and afterglow emission is the internal/external shock scenario (Rees & Mészáros 1992; Rees & Mészáros 1994; Sari & Piran 1997). According to this scenario, the burst emission is due to collisions of pairs of relativistic shells (internal shocks), while the afterglow is generated by the collisionless shocks produced by shells interacting with the interstellar medium (external shocks). The $`\gamma `$–ray light curves show an extremely variable emission, with variability timescales as short as a few milliseconds. This, coupled with the huge powers involved, requires that the emitting plasma is moving at relativistic speeds, with bulk Lorentz factors $`\mathrm{\Gamma }>100`$, to avoid strong suppression of high energy $`\gamma `$–rays due to photon–photon collisions. Indeed, all the radiation we see is believed to come from the transformation of ordered kinetic energy of the fireball into random energy. This must however happen at some distance from the explosion site, to allow the shells to be transparent to the produced radiation. Fiducial numbers for these distances are $`R\sim 10^{12}`$–$`10^{13}`$ cm for the internal shocks, and $`R\sim 10^{16}`$ cm for the external ones. It is reasonable to assume that internal and external shocks can amplify seed magnetic fields and accelerate electrons to relativistic random energies. If this is the case, then the main radiation mechanism is the synchrotron process, responsible both for the $`\gamma `$–ray and for the afterglow emission. There is strong evidence that this is the main process operating during the afterglow: the power law decay of the flux with time, the observed power law energy spectra (for reviews see Piran 1999; Mészáros 1999), and the recently detected linear optical polarization in GRB 990510 (Covino et al. 1999, Wijers et al. 1999). For the burst itself, the main evidence is the prediction of the typical energy at which the observed spectrum peaks, in a $`\nu F_\nu `$ representation. However, the very same synchrotron shock scenario (ISS) inevitably predicts very fast radiative cooling of the emitting particles, with a resulting severe disagreement between the predicted and the observed spectrum (Ghisellini, Celotti & Lazzati 1999, see also Cohen et al. 1997; Sari, Piran & Narayan 1998; Chiang 1999). In the following, after having briefly recalled some basic facts concerning GRBs, I will discuss the problems with the synchrotron interpretation of the $`\gamma `$–ray emission and some possible alternatives. Another hot issue in the GRB field is the possible collimation of their emitting plasma, leading to anisotropic emission able to relax the power requirements, at the expense of an increased burst event rate.
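As an aside, the fiducial emission radii quoted above follow from a standard causality estimate, $`R\simeq \mathrm{\Gamma }^2c\delta t`$ for emission varying on an observed timescale $`\delta t`$ from a shell with bulk Lorentz factor $`\mathrm{\Gamma }`$. A minimal back-of-the-envelope sketch (the values of $`\mathrm{\Gamma }`$ and $`\delta t`$ below are illustrative assumptions, not taken from the text):

```python
# Causality/variability estimate of the emission radius, R ~ Gamma^2 * c * dt
# (standard fireball relation; Gamma and dt are illustrative values).
c = 3e10        # speed of light [cm/s]
Gamma = 100.0   # bulk Lorentz factor, consistent with Gamma > 100 above
for dt in (0.01, 0.1):  # observed variability timescales [s]
    print(f"dt = {dt:4.2f} s  ->  R ~ {Gamma**2 * c * dt:.0e} cm")
# -> 3e12 and 3e13 cm, of the order of the fiducial internal-shock radii
```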
I will show that, as far as collimation is concerned, polarization studies could be crucial, since there can be a link between the deceleration of a collimated fireball, the time behavior of the polarized flux and its position angle, and the light curve of the total flux. ## 2 Some Facts Duration — The duration distribution is clearly bimodal. Short burst durations range from 0.01 to 2 seconds. Long bursts last from 2 to a few hundred seconds. All information derived from the precise location of GRBs refers to long bursts. There is some indication that short bursts may have a different $`\mathrm{log}N`$–$`\mathrm{log}S`$ (Tavani 1998). Spectra and fluences — Most $`\gamma `$–ray fluences (i.e. the flux integrated over the duration of the burst) are in the range $`10^{-6}`$–$`10^{-4}\mathrm{erg}\mathrm{cm}^{-2}`$. GRB spectra are very hard, and can be fitted by two smoothly connected power laws, implying a peak (in a $`\nu `$–$`\nu F_\nu `$ plot) at an energy $`\nu _{\mathrm{peak}}`$ of a few hundred keV. Below $`\nu _{\mathrm{peak}}`$ the spectrum has a slope $`\alpha `$ in the range \[$`-1`$, $`+0.5`$\] (Preece et al. 1998, Lloyd & Petrosian 1999; $`F_\nu \propto \nu ^\alpha `$). This spectral index varies during the burst, as does $`\nu _{\mathrm{peak}}`$. The general trend is that the spectrum softens, and $`\nu _{\mathrm{peak}}`$ decreases, with time. More precise statements must however wait for larger area detectors, since what we inevitably do, at present, is to fit a time integrated spectrum of a very rapidly variable source: the minimum integration time is $`\sim `$1 second for the strongest bursts, while the variability timescales can be hundreds of times shorter. Some bursts have been detected at very large $`\gamma `$–ray energies ($`>`$ 100 MeV) (see the review by Fishman & Meegan 1995, and references therein). X–ray afterglow — Nearly all of the bursts detected by the BeppoSAX GRBM and WFC that could be followed with the Narrow Field Instruments of BeppoSAX a few (6–10) hours after the event showed an X–ray afterglow, whose flux decays in time as a power law. Optical afterglow — For about 2/3 of the bursts with good locations an optical afterglow has been detected. The monochromatic flux decreases in time as a power law $`F_\nu (t)\propto t^{-\delta }`$, with $`\delta `$ in the range 0.8–2. Remarkably, GRB 990123 has been detected by the robotic telescope ROTSE 22 seconds after the $`\gamma `$–ray trigger at $`m\simeq 11.7`$, reaching $`m\simeq 8.97`$ 47 seconds after the trigger (Akerlof & McKay 1999). Usually, the magnitudes of the optical afterglows detected a few hours after the event are in the range 18–21. Radio afterglow — A few bursts showed radio emission, usually some time ($`\sim `$ days) after the burst event. Violent radio flux variations in GRB 970508 (Frail et al. 1997) and GRB 980329 (Taylor et al. 1998) have been interpreted as due to interstellar scintillation, effective if the source is extremely compact. The transition from the phase of violent activity to a more quiescent phase has been interpreted as due to the expansion of the radio emitting source, beautifully confirming the relativistic expansion hypothesis. Redshifts — Up to July 15, 1999, we know the redshift of 8 GRBs, with an additional one (GRB 980329) estimated by a cut-off in the spectrum interpreted as Ly$`\alpha `$ absorption (Fruchter 1999a), and another one (GRB 980425) of controversial identification with the supernova SN1998bw. In Table 1 we list some observed characteristics of the GRBs of known redshifts, and Fig.
1 shows the power emitted in $`\gamma `$–rays, the fluences and the magnitude of the host galaxy candidates as a function of redshift. Tab. 1 - Gamma–ray bursts of known redshift. Notes: $`a`$: BATSE $`\gamma `$–ray fluences in units of $`10^{-5}\mathrm{erg}\mathrm{cm}^{-2}`$. $`b`$: WFC flux in Crab units. References: 1: Djorgovski et al. 1999b; 2: Metzger et al. 1997; 3: Kulkarni et al. 1998b; 4: Fruchter 1999a; 5: Tinney et al. 1998; 6: Djorgovski et al. 1999a; 7: Djorgovski et al. 1998a; 8: Kelson et al. 1999; 9: Hjorth et al. 1999; 10: Vreeswijk et al. 1999; 11: Galama et al. 1999; 12: Fruchter et al. 1999b; 13: Bloom et al. 1998a; 14: Odewann et al. 1998; 15: Djorgovski et al. 1998a; 16: Kulkarni et al. 1998b; 17: Djorgovski et al. 1998c; 18: Bloom et al. 1999b; 19: Fruchter et al. 1999c; 20: Fruchter et al. 1999d. ## 3 The internal shock scenario The energy involved in $`\gamma `$–ray burst explosions is huge. No matter in which form the energy is initially injected, a quasi–thermal equilibrium between matter and radiation is reached, with the formation of electron–positron pairs accelerated to relativistic speeds by the high internal pressure. This is a fireball. When the temperature of the radiation (as measured in the comoving frame) drops below $`\sim `$50 keV the pairs annihilate faster than they are produced. But the presence of even a small amount of baryons, corresponding to only $`10^{-6}M_{\odot }`$, makes the fireball opaque to Thomson scattering: the internal radiation thus continues to accelerate the fireball until most of its initial energy has been converted into bulk motion. After this phase the fireball expands at a constant speed and at some point becomes transparent. If the central engine is not completely impulsive, but works intermittently, it can produce many shells (i.e. many fireballs) with slightly different Lorentz factors. Late but faster shells can catch up with early slower ones, producing shocks which give rise to the observed burst emission. In the meantime, all shells interact with the interstellar medium, and at some point the amount of swept up matter is large enough to decelerate the fireball and produce other radiation which can be identified with the afterglow emission observed at all frequencies. ## 4 Typical synchrotron frequency In the comoving frame of the faster shell, protons of the other shell have an energy density $`U_\mathrm{p}^{}=(\mathrm{\Gamma }^{}-1)n_\mathrm{p}^{}m_\mathrm{p}c^2`$, where $`\mathrm{\Gamma }^{}\sim 2`$ is the bulk Lorentz factor of the slower shell measured in the rest frame of the other. The magnetic energy density $`U_\mathrm{B}^{}`$ can be amplified to values close to equipartition with the proton energy density, $`U_\mathrm{B}^{}=ϵ_\mathrm{B}U_\mathrm{p}^{}`$. The proton density of the shell can be estimated from its kinetic power: $`L_\mathrm{s}=4\pi R^2\mathrm{\Gamma }^2cn_\mathrm{p}^{}m_\mathrm{p}c^2`$, yielding $`B=(2ϵ_\mathrm{B}L_\mathrm{s}/c)^{1/2}/(R\mathrm{\Gamma })`$. Also each electron can share a fraction of the available energy, and if there is one electron for each proton, namely if electron–positron pairs are not important, then $`\gamma m_\mathrm{e}c^2=ϵ_\mathrm{e}m_\mathrm{p}c^2`$. These simple hypotheses lead to a predicted observed synchrotron frequency $$h\nu _\mathrm{s}\simeq 4ϵ_\mathrm{e}^2ϵ_\mathrm{B}^{1/2}\frac{L_{\mathrm{s},52}^{1/2}}{R_{13}(1+z)}\mathrm{MeV}$$ (1) in very good agreement with observations.
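A compact numerical reconstruction of this estimate reads as follows (a sketch, not the author’s derivation; standard cgs constants are assumed, and the defaults correspond to the fiducial $`L_{\mathrm{s},52}=R_{13}=1`$, $`\mathrm{\Gamma }=100`$, $`z=1`$, $`ϵ_\mathrm{e}=ϵ_\mathrm{B}=1`$):

```python
import numpy as np

# cgs constants
e, m_e, m_p, c, h = 4.803e-10, 9.109e-28, 1.673e-24, 2.998e10, 6.626e-27

def h_nu_syn_MeV(eps_e=1.0, eps_B=1.0, L_s=1e52, R=1e13, Gamma=100.0, z=1.0):
    """Observed synchrotron peak energy (MeV) built from the relations in
    the text: B = (2 eps_B L_s/c)^(1/2)/(R Gamma), gamma = eps_e m_p/m_e,
    and nu_obs = Gamma gamma^2 nu_B / (1+z), with nu_B the Larmor frequency."""
    B = np.sqrt(2.0 * eps_B * L_s / c) / (R * Gamma)   # comoving field [G]
    gamma = eps_e * m_p / m_e                          # electron Lorentz factor
    nu_B = e * B / (2.0 * np.pi * m_e * c)             # Larmor frequency [Hz]
    return h * Gamma * gamma**2 * nu_B / (1.0 + z) / 1.602e-6   # erg -> MeV

print(h_nu_syn_MeV())   # ~1.6 MeV, i.e. MeV-range as in Eq. (1)
```

The order-unity difference with respect to the prefactor of Eq. (1) simply reflects the rounding of the order-unity factors entering the synchrotron peak frequency.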
Note that the ‘equipartition coefficients’, $`ϵ_\mathrm{B}`$ and $`ϵ_\mathrm{e}`$, must be close to unity for the observed value of $`\nu _{\mathrm{peak}}`$ to be recovered. In turn, this also requires that pairs cannot significantly contribute to the lepton density. ## 5 Cooling is fast The synchrotron process is a very efficient radiation process. With the strong magnetic fields required to produce the observed $`\gamma `$–rays, the synchrotron cooling time is therefore very short. As pointed out by Ghisellini, Celotti & Lazzati (1999), the cooling time (in the observer frame) can be written as: $$t_{\mathrm{cool}}\simeq 10^{-7}\frac{ϵ_\mathrm{e}^3\mathrm{\Gamma }_2}{\nu _{\mathrm{MeV}}^2(1+U_\mathrm{r}/U_\mathrm{B})(1+z)}\mathrm{s},$$ (2) where $`U_\mathrm{r}`$ is the radiation energy density and $`\mathrm{\Gamma }_2=\mathrm{\Gamma }/100`$. Since the shortest integration times are of the order of 1 s, the observed spectrum is always the time integrated spectrum produced by a rapidly cooling particle distribution. Since $`t_{\mathrm{cool}}\propto 1/\gamma `$, in order to conserve the particle number, the instantaneous cooling distribution has to satisfy $`N(\gamma ,t)\propto 1/\gamma `$. When integrated over time, the contribution from particles with different Lorentz factors is ‘weighted’ by their cooling timescale $`\propto 1/\gamma `$. Therefore the predicted (integrated) flux spectrum is $`F_\nu \propto t_{\mathrm{cool}}N(\gamma )\dot{\gamma }d\gamma /d\nu \propto \nu ^{-1/2}`$, extending from $`\sim `$1 keV to $`h\nu _{\mathrm{peak}}\sim `$ MeV energies. We thus conclude that, within the assumptions of the ISS, a major problem arises in interpreting the observed spectra as synchrotron radiation. Ghisellini, Celotti & Lazzati (1999) have discussed possible ‘escape routes’, such as deviations from equipartition, rapidly changing magnetic fields, strong cooling by adiabatic expansion and re–acceleration of the emitting electrons. None of these possibilities helps. The conclusion drawn is that the burst emission is probably produced by another radiation process. An alternative is quasi-thermal Comptonization (Ghisellini & Celotti 1999; Liang 1997, Thompson 1994). ## 6 The typical peak frequency of GRBs Why do GRB spectra peak at a few hundred keV? This must be due to a quite robust mechanism and/or a feedback process. The fact that $`h\nu _{\mathrm{peak}}`$ is close to $`m_\mathrm{e}c^2`$ is tempting. I discuss below three possibilities. ### 6.1 “Pair feedback” Assume that the main emission process is quasi-thermal Comptonization (Ghisellini & Celotti 1999) by a distribution of electrons (and positrons) peaked at subrelativistic energies (not necessarily thermal). Photons produced above the pair production threshold get absorbed and create pairs. This process has been extensively studied in the past assuming thermal particles, steady state and pair equilibrium (equal pair production and annihilation rates). The main result is that for compact sources the maximum temperature is constrained to a narrow range \[30–300 keV\] (Svensson 1982, 1984): for increasing luminosities the equilibrium pair density increases, more particles then share the available energy, making the temperature decrease. If a high-energy tail is present, more photons are created above the threshold for pair production with respect to the case of a pure Maxwellian, and thus pairs become important at temperatures lower than in the completely thermal case (see e.g. Stern 1999).
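Before moving on, the cooling time of Eq. (2) deserves a one-line numerical check (a sketch; the defaults $`ϵ_\mathrm{e}=1`$, $`\mathrm{\Gamma }_2=1`$, $`h\nu =1`$ MeV, $`z=1`$ and negligible Compton cooling, $`U_\mathrm{r}/U_\mathrm{B}=0`$, are illustrative assumptions):

```python
def t_cool_obs(eps_e=1.0, Gamma=100.0, nu_MeV=1.0, U_ratio=0.0, z=1.0):
    """Observer-frame synchrotron cooling time of Eq. (2), in seconds."""
    return 1e-7 * eps_e**3 * (Gamma / 100.0) / (
        nu_MeV**2 * (1.0 + U_ratio) * (1.0 + z))

print(t_cool_obs())   # ~5e-8 s: seven orders of magnitude below the ~1 s
                      # integration times, hence the time-integrated,
                      # fast-cooling nu^(-1/2) spectrum discussed above
```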
### 6.2 “Brainerd break” Brainerd (1994) linked the typical high energy cut–off of GRBs to the effect of down–scattering: photons with energies much larger than $`m_\mathrm{e}c^2`$ pass undisturbed through a scattering medium because of the reduction with energy of the Klein–Nishina cross section, while photons with energies just below $`m_\mathrm{e}c^2`$ interact, and their energy after the scattering is reduced. The net effect is to produce a “downscattering hole” in the spectrum, between $`m_\mathrm{e}c^2/\tau ^2`$ and $`\tau m_\mathrm{e}c^2`$. The attractive feature of this model is that the cut–off energy is associated with $`m_\mathrm{e}c^2`$. The difficulty is that a significant part of the power originally radiated by the burst goes into heating (by the Compton process) of the scattering electrons. ### 6.3 Pair production break Suppose that the circumburst material has a density $`n_{\mathrm{ext}}`$ and that, close to the burst emission site (i.e. between $`R`$ and $`2R`$), the scattering optical depth is $`\tau _{\mathrm{ext}}`$. This material scatters back a fraction $`\tau _{\mathrm{ext}}L`$ of the burst power. If we require that the primary spectrum is not modified by photon–photon absorption, the optical depth of the scattering matter and its density must be (Ghisellini & Celotti 1999) $$\tau _{\mathrm{ext}}<3.7\times 10^{-9}\frac{R_{13}}{L_{50}},\qquad n_{\mathrm{ext}}<\frac{5.5\times 10^2}{L_{50}}\mathrm{cm}^{-3}.$$ (3) This requirement is particularly severe, especially in the case of bursts originating in dense star forming regions. On the other hand, photon–photon opacity may be an important ingredient in shaping the spectrum, and the reason why GRB spectra peak at around 300 keV. The pairs created ahead of the fireball radiate their energy in a short time, and are re–accelerated by the incoming fireball. The net effect may be simply to increase the density of the radiating particles, introducing a feedback process: an increased density lowers the effective temperature $`\rightarrow `$ less energy is radiated above $`m_ec^2`$ $`\rightarrow `$ the number of pairs produced via the “mirror” process decreases $`\rightarrow `$ the new pair density decreases, and so on. Another feedback is introduced by the fact that the photons scattered back to the emitting shell will increase the number of seed photons and the radiative cooling rate, softening the spectrum. ## 7 Polarized afterglows Covino et al. (1999) and Wijers et al. (1999) detected a small, but highly significant, degree of linear polarization of the optical afterglow of GRB 990510: $`P=(1.7\pm 0.2)`$% 18.5 and 20.5 hours after the burst, respectively. To produce polarized light some asymmetry is required. In particular, if the radiation is due to the synchrotron process, the magnetic field cannot be completely tangled, but must have some degree of order. Prior to the observational discovery, there had been theoretical studies predicting a larger degree of polarization ($`P\sim 10\%`$: Gruzinov & Waxman 1999) due to causally disconnected regions of maximally ordered magnetic field. Since the number $`N`$ of these regions is limited, the resulting polarization is of the order of $`60\%/\sqrt{N}`$, where 60% is the value reached in each single subregion. This model requires a perfectly ordered magnetic field, on a scale that increases in time at almost the speed of light. We (Ghisellini & Lazzati 1999) have instead considered an alternative scenario, in which the required asymmetry is due to field compression and light aberration.
Following Laing (1980), consider the compression of a region embedded in a completely tangled magnetic field. After compression, the region becomes a slab: the field is “squeezed” in one direction, and appears again completely random to face–on observers (with respect to the direction of compression), but highly ordered to edge–on observers. Photons emitted in the plane of the slab can then be highly polarized. If the slab moves with Lorentz factor $`\mathrm{\Gamma }`$, those photons emitted in the slab plane (perpendicularly to the direction of motion) are aberrated in the observer frame, and make an angle $`\theta =1/\mathrm{\Gamma }`$ with respect to the slab velocity. Observers looking at the moving slab at this angle will detect a large degree of optical polarization. Consider a fireball collimated into a cone of semi–aperture angle $`\theta _\mathrm{c}`$. Let $`\theta _\mathrm{o}`$ be the angle between the cone axis and the line of sight. As long as $`1/\mathrm{\Gamma }<\theta _\mathrm{c}-\theta _\mathrm{o}`$, the observer receives radiation from a circle entirely contained in the jet. There is no asymmetry, and no polarization. But when $`1/\mathrm{\Gamma }>\theta _\mathrm{c}-\theta _\mathrm{o}`$ (see Fig. 2), some position angles are not canceled, and some polarization survives. The degree of polarization as a function of time is shown in Fig. 3: note the two maxima (a third could be present if one considers the sideways expansion of the jet, which we neglected; see Sari 1999). The position angles of the two maxima are orthogonal to each other. Fig. 3 also shows the light curve of the total flux, to illustrate the tight link that this model predicts between the existence of polarization and the gradual steepening of the light curve. Should these ideas be confirmed, we would have a very powerful tool to determine the degree of collimation of the fireball, and hence the true total emitted power. ## 8 SWIFT: a dedicated satellite Our knowledge of gamma–ray bursts has made a quantum leap in the last two years. But we do not yet know their main cause (i.e. their progenitors, see the review by Vietri 1999, this volume), whether it is the same for short and long bursts, what the main radiation process of the burst is, or why their spectrum peaks at a well defined energy. But the information in hand leads us to expect that they could be perfect cosmological probes, doing, for redshifts greater than 5, what quasars have done for lower redshifts. Therefore there is a twofold interest in pursuing GRB research: to understand their nature and to use them to probe the distant universe. For these reasons another quantum leap is necessary, and it is foreseen to come from the launch of a dedicated NASA–MIDEX satellite, called SWIFT, able to detect more than 100 bursts per year and to slew to them with X–ray and optical–UV telescopes in less than 50 seconds! (see the dedicated web page at http://swift.gsfc.nasa.gov/). With these capabilities, we will be able to obtain a large sample of accurate locations, and then, after the optical follow–up from the ground, a large number of redshifts. We will study the relatively soft X–ray emission \[0.1–10 keV\] and the optical emission while the burst is still on, and we will know the transition between the burst proper and the afterglow. Definitive approval of the mission is expected in the fall of 1999, and launch is foreseen for the year 2003.
The Italian community is deeply involved and will provide the X–ray mirrors, developed by the Brera Observatory, and the Malindi ground station, developed by ASI for BeppoSAX. We think that this mission, besides being a cornerstone for the study of gamma–ray bursts, will be a great opportunity for the Italian community at large. ## Acknowledgements I thank Annalisa Celotti and Davide Lazzati for a very productive collaboration.
# Emission spectrum of Sagittarius A∗ and the neutrino ball scenario ## 1 Introduction It is generally accepted that accretion onto compact objects is the most efficient mechanism of transforming gravitational potential energy into radiation (see, e.g., Frank et al. 1992). Sagittarius A∗ (Sgr A∗) at the Galactic center is an unusual source of radiation which has remained a longstanding mystery. The dynamics of stars around the Galactic center (Eckart and Genzel 1996, 1997; Genzel et al. 1996, 1997 and Ghez et al. 1998) is usually interpreted as evidence for a supermassive black hole of mass $`\sim 2.6\times 10^6M_{\odot }`$ near Sgr A∗. Observations of gas flows in the vicinity of Sgr A∗ reveal a mass accretion rate onto the central object of $`\sim 10^{-4}M_{\odot }\mathrm{yr}^{-1}`$ (Melia 1992; Genzel et al. 1994). In standard thin accretion disk theory, with a reasonable efficiency of $`10\%`$, this accretion rate would correspond to a luminosity of $`\sim 10^{42}\mathrm{erg}\mathrm{s}^{-1}`$. However, the actual luminosity observed is $`\sim 10^{37}\mathrm{erg}\mathrm{s}^{-1}`$. Moreover, the spectrum is essentially flat in $`\nu L_\nu `$ from radio waves to X-rays, with the exception of a few bumps (Rogers et al. 1994, Menten et al. 1997, Predehl and Trümper 1994, Merc et al. 1996). Thus both the observed low luminosity and the spectral energy distribution differ very much from the spectrum that would be expected from a standard thin disk around a supermassive black hole. This discrepancy is known as the “blackness problem” of the Galactic center. Both the blackness of Sgr A∗ and its peculiar spectrum have been the source of extensive debate in the recent past. Several models for the accretion and the emission spectrum of Sgr A∗ have been proposed. Melia (1994) modelled the spectrum of Sgr A∗ as synchrotron radiation emitted by thermal electrons, heated through the dissipation of magnetic energy, as a result of a Bondi-Hoyle accretion process fed by winds emanating from stars in the vicinity of Sgr A∗. Optically thick synchrotron radiation emitted by a jet-disk system has also been proposed as an explanation for the radiation of Sgr A∗ (Falcke, Mannheim and Biermann 1993; Falcke and Biermann 1996). Moreover, synchrotron radiation emitted by a quasi-monoenergetic ensemble of relativistic electrons (e.g. Beckert and Duschl 1997) has been put forward as a possible emission mechanism. Probably the most sophisticated model that is consistent with the observed emission spectrum of Sgr A∗ from radio waves to $`\gamma `$ rays is based on Advection Dominated Accretion Flows (ADAF) (Narayan et al. 1995, 1998; Mahadevan 1998; Manmoto et al. 1997). This model relies on the idea that most of the energy released by viscous dissipation is stored as thermal energy in the gas, which is then advected to the center, thereby radiating off only a small fraction of the energy (Narayan and Yi 1995; Abramowicz et al. 1995). An essential ingredient of the ADAF models is that the compact dark object at the Galactic center is a black hole. In fact, the existence of an event horizon around the black hole is essential in order to ensure that whatever energy falls into the central object disappears without being re-radiated. This model also requires the protons to have a much higher temperature than the electrons, and the gas must therefore have a two-temperature structure.
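Parenthetically, the luminosity mismatch defining the blackness problem is a one-line estimate (a sketch using only the $`10\%`$ efficiency and the $`\sim 10^{-4}M_{\odot }\mathrm{yr}^{-1}`$ accretion rate quoted above, plus standard constants):

```python
# "Blackness problem" arithmetic: a standard thin disk with 10% efficiency,
# fed at ~1e-4 M_sun/yr, would radiate ~1e42 erg/s -- about five orders of
# magnitude above the observed ~1e37 erg/s.
M_SUN, YR, c = 1.989e33, 3.156e7, 2.998e10   # g, s, cm/s
Mdot = 1e-4 * M_SUN / YR                     # accretion rate [g/s]
print(f"L_disk ~ {0.1 * Mdot * c**2:.1e} erg/s")   # -> ~5.7e41 erg/s
```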
However, it has also been recently pointed out that ADAF models, as a solution of astrophysical accretion problems, should be treated with some caution, as their physical basis is somewhat uncertain (Bisnovatyi-Kogan and Lovelace 1999). Moreover, it is important to note that none of the above models, including ADAF, can predict the intrinsic shape and size of Sgr A∗ as observed at 7 mm (Lo et al. 1998). It is also worthwhile to note that the theoretical models for the emission of Sgr A∗ are unable to explain the VLBI observations of Sgr A∗, revealing that the observed size follows a $`\lambda ^2`$ dependence and that the apparent source structure can be described by an elliptical Gaussian brightness distribution (Davies et al. 1976; Lo et al. 1985, 1993; Rogers et al. 1994; Krichbaum et al. 1998; Bower and Backer 1998). A direct proof of the existence of a supermassive black hole would require the observation of objects that are moving at relativistic velocities at distances close to the Schwarzschild radius. However, the best current observations only probe the gravitational potential at radii $`\sim 4\times 10^4`$ times larger than the Schwarzschild radius of a black hole of mass $`2.6\times 10^6M_{\odot }`$ (Ghez et al. 1998). Thus there is no compelling direct evidence for the existence of a supermassive black hole at the Galactic center. It is therefore perhaps prudent not to focus too much on the black hole scenario as the only possible solution for the supermassive compact dark object at the Galactic center, without having explored alternative scenarios. For instance, a compact dark stellar cluster could be an alternative to the black hole scenario. However, such clusters obey stringent stability criteria (see, e.g., Maoz 1995, 1998). A viable cluster must thus have both evaporation and collision time scales larger than the lifetime of our Galaxy, i.e. $`\sim 10\mathrm{Gyr}`$, and this is more likely to be fulfilled with a cluster of substellar objects. But, apart from a compact cluster of very low-mass black holes or brown dwarfs that is free of stability problems, the most attractive alternative to a dense and dark stellar cluster is a cluster of elementary weakly interacting particles. In fact, such an alternative model for the supermassive compact dark object at the Galactic center has been developed (Viollier et al. 1992, 1993; Viollier 1994; Tsiklauri and Viollier 1996, 1998a,b, 1999; Bilić et al. 1998; Bilić and Viollier 1999a,b). Tsiklauri and Viollier (1998a) have argued that the Galactic center is made of nonbaryonic dark matter in the form of massive neutrinos condensed in a supermassive neutrino ball of $`2.5\times 10^6M_{\odot }`$, in which the degeneracy pressure of the neutrinos balances their self-gravity. A supermassive neutrino ball differs from a black hole of the same mass mainly by the shallow gravitational potential inside the neutrino ball. Such neutrino balls could have been formed in the early Universe during a first-order gravitational phase transition (Bilić and Viollier 1997, 1998, 1999a,b). It has been shown that the dark matter observed through stellar motion at the Galactic center (Ghez et al. 1998) is consistent with a supermassive neutrino ball of mass $`2.6\times 10^6M_{\odot }`$ made of self-gravitating heavy neutrino matter (Munyaneza, Tsiklauri and Viollier 1999). Moreover, it has been pointed out that tracking the orbit of the fast moving star S1 (Genzel et al. 1997) or S0-1 (Ghez et al.
1998), which is perhaps moving inside the neutrino ball, offers the possibility to distinguish, within a few years’ time, the supermassive black hole scenario from that of the neutrino ball for the compact dark object at the Galactic center (Munyaneza, Tsiklauri and Viollier 1998, 1999). The purpose of this paper is to calculate the spectrum of the compact dark object at the Galactic center based on standard thin accretion disk theory, assuming that this object is a supermassive neutrino ball rather than a black hole. We perform the calculation of the spectrum based on the most recent Ghez et al. 1998 data, including the error bars of the observations. While the observed motion of stars near the Galactic center yields a lower limit for the neutrino mass $`m_\nu `$, the observed infrared drop of the emission spectrum of Sgr A∗ provides us with an upper limit for $`m_\nu `$. A distance to the Galactic center of 8 kpc has been assumed throughout this paper. The outline of this paper is as follows: In section 2 we present the formalism used to calculate the spectrum in the neutrino ball scenario, and in section 3 we summarize and discuss our results. ## 2 Model and results The basic equations which govern the structure of neutrino balls have been derived in a series of papers (Viollier et al. 1992, Viollier et al. 1993, Viollier 1994, Viollier and Tsiklauri 1996, Bilić and Viollier 1999a,b); we can thus be very brief here. Let us denote the dimensionless neutrino Fermi momentum by $`X=p_\nu /(m_\nu c)`$, where $`p_\nu `$ stands for the local Fermi momentum of the neutrinos of mass $`m_\nu `$. The structure of the neutrino ball is governed by a system of two coupled differential equations (Bilić, Munyaneza and Viollier 1999), i.e. $$\frac{dX}{dx}=-\frac{\mu }{x^2X},$$ (1) $$\frac{d\mu }{dx}=\frac{8}{3}x^2X^3,$$ (2) subject to the boundary conditions $`X(0)=X_0`$ and $`\mu (0)=0`$. In Eqs. (1) and (2), $`x`$ stands for the dimensionless radial coordinate $`x=r/a_\nu `$, $`\mu `$ denotes the dimensionless mass enclosed within a radius $`x`$, i.e. $`\mu =m(r)/b_\nu `$, and $`a_\nu `$ and $`b_\nu `$ are the length and mass scales, respectively, which can be expressed as $$a_\nu =2\sqrt{\frac{\pi }{g_\nu }}\left(\frac{M_{\mathrm{Pl}}}{m_\nu }\right)^2L_{\mathrm{Pl}}=2.88233\times 10^{10}g_\nu ^{-1/2}\left(\frac{17.2\mathrm{keV}}{m_\nu c^2}\right)^2\mathrm{km},$$ (3) $$b_\nu =2\sqrt{\frac{\pi }{g_\nu }}\left(\frac{M_{\mathrm{Pl}}}{m_\nu }\right)^2M_{\mathrm{Pl}}=1.95197\times 10^{10}M_{\odot }g_\nu ^{-1/2}\left(\frac{17.2\mathrm{keV}}{m_\nu c^2}\right)^2,$$ (4) in terms of Planck’s mass and length, $`M_{\mathrm{Pl}}=(\mathrm{\hbar }c/G)^{1/2}`$ and $`L_{\mathrm{Pl}}=(\mathrm{\hbar }G/c^3)^{1/2}`$, respectively. Here, $`g_\nu `$ is the spin degeneracy factor of the neutrinos and antineutrinos, i.e. $`g_\nu =2`$ for Majorana and $`g_\nu =4`$ for Dirac neutrinos and antineutrinos. By choosing the appropriate Fermi momentum, and thus the neutrino density ($`\propto X^3`$), at the center of the neutrino ball, we can construct a solution corresponding to a neutrino ball of $`2.6\times 10^6M_{\odot }`$. In order to describe the compact dark object at the Galactic center as a neutrino ball, and constrain its physical parameters appropriately, it is worthwhile to use the most recent observational data by Ghez et al. 1998, who established that the mass enclosed within 0.015 pc at the Galactic center is $`(2.6\pm 0.2)\times 10^6M_{\odot }`$.
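Eqs. (1)-(4) are straightforward to integrate numerically. The sketch below is an illustration rather than the authors’ actual code; the use of SciPy’s `solve_ivp`, the surface cutoff and the trial values of $`X_0`$ are assumptions made here. A central Fermi momentum $`X_0\simeq 4\times 10^{-3}`$ turns out to reproduce a ball of mass $`2.6\times 10^6M_{\odot }`$ with radius $`R\simeq 1.9\times 10^{-2}`$ pc for $`m_\nu c^2g_\nu ^{1/4}=18.93`$ keV:

```python
import numpy as np
from scipy.integrate import solve_ivp

def ball(X0, x_max=200.0):
    """Integrate Eqs. (1)-(2) outward for a trial central Fermi momentum
    X0; the surface is taken where X has dropped to a small cutoff."""
    def rhs(x, y):
        X, mu = y
        return [-mu / (x**2 * X),            # Eq. (1): hydrostatic equilibrium
                (8.0 / 3.0) * x**2 * X**3]   # Eq. (2): mass continuity
    surface = lambda x, y: y[0] - 1e-4 * X0
    surface.terminal = True
    x0 = 1e-6  # start slightly off-center to avoid the singularity at x = 0
    sol = solve_ivp(rhs, (x0, x_max), [X0, (8.0/9.0) * x0**3 * X0**3],
                    events=surface, rtol=1e-8, atol=1e-14)
    return sol.t_events[0][0], sol.y_events[0][0][1]  # surface x_R, mass mu_R

# Scales of Eqs. (3)-(4) for g_nu = 2 and m_nu c^2 g_nu^(1/4) = 18.93 keV:
g_nu = 2.0
m_keV = 18.93 * g_nu**-0.25
a_pc = 2.88233e10 * g_nu**-0.5 * (17.2 / m_keV)**2 / 3.086e13   # km -> pc
b_sun = 1.95197e10 * g_nu**-0.5 * (17.2 / m_keV)**2             # in M_sun

for X0 in (0.002, 0.0042, 0.008):   # crude shooting on the central density
    xR, muR = ball(X0)
    print(f"X0 = {X0:.4f}:  R = {xR*a_pc:.2e} pc,  M = {muR*b_sun:.2e} M_sun")
# X0 ~ 0.0042 gives M ~ 2.6e6 M_sun and R ~ 1.9e-2 pc, as quoted below.
```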
Following the analysis of Tsiklauri and Viollier 1998a, the constraints on the neutrino mass $`m_\nu `$ required to reproduce the observed matter distribution (Munyaneza, Tsiklauri & Viollier 1999) are as follows: for a $`M=2.4\times 10^6M_{\odot }`$ neutrino ball, $`m_\nu c^2g_\nu ^{1/4}\geq 20.81\mathrm{keV}`$, and the radius of the neutrino ball therefore obeys $`R\leq 1.50\times 10^{-2}\mathrm{pc}`$. Using the value $`M=2.6\times 10^6M_{\odot }`$, the bounds on the neutrino mass are $`m_\nu c^2g_\nu ^{1/4}\geq 18.93\mathrm{keV}`$, and the radius of the neutrino ball turns out to be $`R\leq 1.88\times 10^{-2}\mathrm{pc}`$. Finally, for a $`M=2.8\times 10^6M_{\odot }`$ neutrino ball, the range of the neutrino mass is $`m_\nu c^2g_\nu ^{1/4}\geq 18.21\mathrm{keV}`$, and the corresponding neutrino ball radius obeys $`R\leq 2.04\times 10^{-2}\mathrm{pc}`$. We can calculate the angular velocity $`\mathrm{\Omega }`$ of the matter falling onto the neutrino ball as $$\mathrm{\Omega }=\sqrt{\frac{Gm(r)}{r^3}}=\frac{c}{a_\nu }\sqrt{\frac{\mu }{x^3}},$$ (5) where $`G`$ is Newton’s gravitational constant. The total mass of the neutrino ball is $`M=m(R)`$. In the case of a black hole, we have $`M=m(r)`$ already for radii much larger than the Schwarzschild radius. In Fig. 1, we plot the angular velocity as a function of the distance from the center for a neutrino ball of mass $`M=2.6\times 10^6M_{\odot }`$ and a neutrino mass $`m_\nu c^2g_\nu ^{1/4}=18.93\mathrm{keV}`$. The angular velocity corresponding to a black hole of the same mass is also shown for comparison. Close to the center of the neutrino ball, $`\mathrm{\Omega }(r)`$ is nearly constant, and the mass enclosed within a radius $`r`$ therefore scales as $`r^3`$. In the standard theory of steady and geometrically thin accretion disks, the power liberated in the disk per unit area is given by (Perry & Williams 1993; Frank et al. 1992) $$D(r)=-\frac{\dot{M}\mathrm{\Omega }(r)\mathrm{\Omega }^{}(r)r}{4\pi }\left[1-\left(\frac{R_i}{r}\right)^2\left(\frac{\mathrm{\Omega }_i}{\mathrm{\Omega }}\right)\right].$$ (6) Here $`R_i`$ is the inner radius of the disk and $`\mathrm{\Omega }_i`$ denotes the angular velocity at the radius where $`\mathrm{\Omega }(r)`$ has a maximum, i.e. $`\mathrm{\Omega }_i=\mathrm{\Omega }(R_i)`$. The prime on the function $`\mathrm{\Omega }(r)`$ denotes the derivative with respect to $`r`$; since $`\mathrm{\Omega }^{}(r)<0`$ outside $`R_i`$, we have $`D(r)>0`$ there. The accretion rate $`\dot{M}`$ is parametrized as $$\dot{M}=\dot{m}\dot{M}_{\mathrm{Edd}},$$ (7) where $`\dot{M}_{\mathrm{Edd}}=2.21\times 10^{-8}M\mathrm{yr}^{-1}`$ denotes the Eddington limit accretion rate. The maximal and minimal accretion rates allowed by the observations are $`\dot{m}=4\times 10^{-3}`$ and $`10^{-4}`$ (Narayan et al. 1998), respectively. The outer radius of the disk has been taken as $`10^5`$ Schwarzschild radii, since for larger radii the disk is unstable against self-gravity (Narayan et al. 1998). We now use the Stefan-Boltzmann law, assuming that the gravitational binding energy is immediately radiated away: $$D(r)=\sigma T_{\mathrm{eff}}^4(r),$$ (8) where $`\sigma `$ is the Stefan-Boltzmann constant. The effective temperature $`T_{\mathrm{eff}}`$ can be easily derived using Eqs.
(5), (6) and (8), yielding $$T_{\mathrm{eff}}(r)=\left(\frac{\dot{M}_{\mathrm{Edd}}G}{8\pi \sigma }\right)^{1/4}\frac{b_\nu ^{1/4}}{a_\nu ^{3/4}}\dot{m}^{1/4}\left(\frac{3\mu -\mu ^{}x}{x^3}\right)^{1/4}\left[1-\left(\frac{x_i}{x}\right)^2\frac{\mathrm{\Omega }_i}{\mathrm{\Omega }}\right]^{1/4}=T_0\dot{m}^{1/4}\left(\frac{3\mu -\mu ^{}x}{x^3}\right)^{1/4}\left[1-\left(\frac{x_i}{x}\right)^2\frac{\mathrm{\Omega }_i}{\mathrm{\Omega }}\right]^{1/4},$$ (9) where the constant $`T_0`$ is given by $$T_0=\left(\frac{\dot{M}_{\mathrm{Edd}}G}{8\pi \sigma }\right)^{1/4}\frac{b_\nu ^{1/4}}{a_\nu ^{3/4}}.$$ (10) Once the temperature distribution in the disk is specified, one can find its luminosity at a frequency $`\nu `$ using $$\frac{dL_\nu }{dr}=\frac{16\pi ^2h\nu ^3\mathrm{cos}(i)}{c^2}\frac{r}{\mathrm{exp}\left(\frac{h\nu }{k_BT_{\mathrm{eff}}}\right)-1},$$ (11) with $`L_\nu (x_i)=0`$. In Eq. (11), $`h`$ and $`k_B`$ are Planck’s and Boltzmann’s constants, respectively, and the disk inclination angle $`i`$ is assumed to be $`60^{\circ }`$, as in Narayan et al. (1998). Picking a particular value of $`\nu `$, we may integrate Eq. (11) numerically, taking the inner radius of the disk to be determined by $`\mathrm{\Omega }^{}(r)=0`$. However, the inner radius of the accreting disk can also be chosen to be zero, as the inner region, where $`\mathrm{\Omega }(r)`$ is nearly constant, does not contribute to the emission spectrum anyway. It is worthwhile to note that, in the case of a neutrino ball, there is no last stable orbit, in contrast to the black hole case, where the inner radius of the disk is taken to be three Schwarzschild radii. The results of this integration are shown in Figure 2, where the spectrum emitted in the case of accretion onto a black hole (dotted lines) of $`M=2.6\times 10^6M_{\odot }`$ is shown as well. Here, accretion rates of $`\dot{m}=10^{-3}`$, $`10^{-4}`$ and $`10^{-9}`$ have been assumed for both scenarios. Also shown in this plot are the most up-to-date observations of the emission spectrum of the Galactic center (Narayan et al. 1998). The arrows represent upper limits, and the box at a frequency of $`\sim 10^{17}\mathrm{Hz}`$ stands for the uncertainty in the observed X-ray flux. The open and filled squares represent various flux measurements and upper limits for Sgr A∗. The open squares stand for the low angular resolution points, while the filled squares represent the data points with the best resolution. The observed spectrum rises at radio and submillimeter frequencies of $`\nu \sim 10^9`$ to $`10^{12}\mathrm{Hz}`$, where most of the emission occurs, and it has a sharp drop in the infrared. The X-ray observations consist of a possible detection at soft X-ray energies, and firm upper limits in the hard X-rays. As seen in Fig. 2, the neutrino ball model reproduces the observed spectrum from the radio ($`\lambda =0.3\mathrm{cm}`$) to the near infrared band ($`\lambda =10^{-3}\mathrm{cm}`$) very well. Thus, as our model fulfils two of the most stringent conditions, i.e. it is consistent with the mass distribution (Genzel et al. 1997, Ghez et al. 1998) and with the bulk part of the emitted spectrum, we conclude that the neutrino ball scenario is not in contradiction with most of the currently available observational data. As we see from Fig. 2, and as also pointed out by Narayan et al.
1998, the curves corresponding to the black hole (lines 4, 5 and 6) provide a poor fit to the observational data. A starving black hole, with an accretion rate of $`\dot{m}=10^{-9}`$ (line 6 in Fig. 2), would not fit the observed spectrum either. This is in fact the main reason why standard accretion disk theory was abandoned as a possible candidate for the description of the Sgr A∗ spectrum (Narayan et al. 1995). Figure 3 shows the temperature of the disk as a function of the radius, for an accretion rate of $`\dot{m}=10^{-3}`$ in both scenarios. The spectrum presented in Fig. 2 corresponds to a neutrino ball or black hole of $`M=2.6\times 10^6M_{\odot }`$. To draw definite conclusions about the emission spectrum of a neutrino ball, it is necessary to investigate the dependence of the spectrum on i) the mass of the neutrino ball; ii) the neutrino mass $`m_\nu `$, both within the ranges allowed by the Ghez et al. 1998 data. In Fig. 4, we present the emission spectrum for a variety of neutrino ball masses, i.e. $`M=2.4,2.6,2.8\times 10^6M_{\odot }`$. From this plot, we conclude that, within the uncertainties, the mass of the neutrino ball has no significant effect on the spectrum of the neutrino ball. In Fig. 5, we plot the spectrum as a function of the neutrino mass for different accretion rates. The top panel represents the spectrum for an accretion rate of $`\dot{m}=10^{-4}`$, while the lower one corresponds to an accretion rate of $`\dot{m}=10^{-3}`$. The neutrino mass $`m_\nu `$ has been varied as shown on the plot. As the observed emission spectrum has a sharp drop in the infrared region, we require the theoretical spectrum not to extend to frequencies beyond the innermost data points of the infrared drop of the observed spectrum, yielding an upper bound for the neutrino mass. For each value of the accretion rate, an upper bound for the neutrino mass is established using this condition. This is reflected in Fig. 6, where we plot the neutrino mass $`m_\nu c^2`$ as a function of the accretion rate $`\dot{m}`$. The vertical arrows pointing down show the inferred upper limits of the neutrino mass for each accretion rate. Thus for $`\dot{m}=10^{-4}`$, the upper limit is $`m_\nu c^2g_\nu ^{1/4}\leq 29.73\mathrm{keV}`$; for $`\dot{m}=8\times 10^{-4}`$, the neutrino mass is bounded by $`m_\nu c^2g_\nu ^{1/4}\leq 19.74\mathrm{keV}`$; for $`\dot{m}=10^{-3}`$, the neutrino mass is constrained by $`m_\nu c^2g_\nu ^{1/4}\leq 18.93\mathrm{keV}`$; and finally for $`\dot{m}=4\times 10^{-3}`$, the upper limit is found to be $`m_\nu c^2g_\nu ^{1/4}\leq 17.24\mathrm{keV}`$. The horizontal line shows the lower limit on the neutrino mass obtained by fitting the mass distribution of the neutrino ball with the currently best observational data (Ghez et al. 1998). Combining both upper and lower limits for the neutrino mass, we arrive at the following constraints for the neutrino mass: $$18.93\mathrm{keV}\leq m_\nu c^2g_\nu ^{1/4}\leq 29.73\mathrm{keV}\quad \mathrm{for}\quad \dot{m}=10^{-4},$$ (12) $$18.93\mathrm{keV}\leq m_\nu c^2g_\nu ^{1/4}\leq 19.74\mathrm{keV}\quad \mathrm{for}\quad \dot{m}=8\times 10^{-4}.$$ (13) From Fig. 6, we may conclude: i) In order to be consistent with the observational Ghez et al. 1998 data, the accretion rate $`\dot{m}`$ onto the neutrino ball should be less than $`10^{-3}`$, implying an accretion rate $`\dot{M}`$ onto the neutrino ball that is less than $`5.7\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$; ii) The neutrino mass range is bounded from below by the Galactic kinematics and from above by the spectrum. The range of allowed values of the neutrino mass narrows as the accretion rate increases, vanishing at $`\dot{m}\simeq 10^{-3}`$.
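For completeness, the full chain of Eqs. (1)-(11) fits in a short script. The following sketch is again an illustration under stated assumptions: SciPy integration of the ball structure with the central Fermi momentum tuned as in the sketch above, inner disk radius $`R_i\rightarrow 0`$, $`\mathrm{cos}i=0.5`$, $`g_\nu =2`$ with $`m_\nu c^2g_\nu ^{1/4}=18.93`$ keV, and a simple trapezoidal quadrature in place of the integration scheme actually used by the authors:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c, h, kB, sigma = 6.674e-8, 2.998e10, 6.626e-27, 1.381e-16, 5.670e-5
M_SUN, YR = 1.989e33, 3.156e7

# Ball structure (Eqs. 1-2), as above, with dense output for interpolation:
X0 = 0.0042
rhs = lambda x, y: [-y[1] / (x**2 * y[0]), (8.0/3.0) * x**2 * y[0]**3]
stop = lambda x, y: y[0] - 1e-4 * X0
stop.terminal = True
sol = solve_ivp(rhs, (1e-6, 60.0), [X0, 0.0], events=stop,
                dense_output=True, rtol=1e-8, atol=1e-14)
xR, muR = sol.t_events[0][0], sol.y_events[0][0][1]

g_nu, m_keV = 2.0, 18.93 * 2.0**-0.25
a_cm = 2.88233e10 * g_nu**-0.5 * (17.2/m_keV)**2 * 1e5    # Eq. (3), km -> cm
b_g = 1.95197e10 * g_nu**-0.5 * (17.2/m_keV)**2 * M_SUN   # Eq. (4), in grams

def mu(x):   # enclosed dimensionless mass; constant (Keplerian) outside xR
    return np.where(x < xR, sol.sol(np.minimum(x, xR))[1], muR)

def Omega(x):   # Eq. (5)
    return np.sqrt(G * b_g * mu(x) / (a_cm * x)**3)

def L_nu(nu, mdot=1e-3, n=4000):
    x = np.geomspace(0.05, 60.0, n)
    r = a_cm * x
    Mdot = mdot * 2.21e-8 * 2.6e6 * M_SUN / YR        # Eq. (7) in g/s
    D = -Mdot * Omega(x) * np.gradient(Omega(x), r) * r / (4*np.pi)  # Eq. (6)
    T = (np.clip(D, 1e-30, None) / sigma) ** 0.25     # Eq. (8)
    dLdr = 16*np.pi**2 * h * nu**3 * 0.5 / c**2 * r \
           / np.expm1(np.clip(h*nu/(kB*T), 1e-6, 500.0))  # Eq. (11)
    return (0.5 * (dLdr[1:] + dLdr[:-1]) * np.diff(r)).sum()

for lognu in (11, 12, 13, 14):
    nu = 10.0**lognu
    print(f"log10(nu/Hz) = {lognu}: nu L_nu = {nu * L_nu(nu):.2e} erg/s")
# The spectrum peaks in the submm/far-IR and collapses above the near-IR,
# with a frequency-integrated output of order 1e37 erg/s, as described above.
```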
The range of allowed values of the neutrino mass narrows as the accretion rate increases, vanishing at $`\dot{m}\approx 10^{-3}`$.
## 3 Summary and discussion
We have studied the emission spectrum of Sgr A assuming that it is a neutrino ball of mass $`M=(2.6\pm 0.2)\times 10^6M_{\odot }`$ with a size of a few tens of light days. We have shown that, in this case, the theoretical spectrum, calculated in standard thin accretion disk theory, fits the observations in the radio and infrared region of the spectrum much better in the neutrino ball than in the black hole scenario, as seen from Fig. 2. This is because, in the neutrino ball scenario, the accreting matter experiences a much shallower gravitational potential than in the case of a black hole with the same mass, and therefore less viscous torque will be exerted. Here, we note that the emitting region for this part of the spectrum is of the order of the size of the neutrino ball, i.e. a few tens of light days. We have shown that the error bars in the mass of the neutrino ball have practically no effect on the spectrum of Sgr A. By assuming that the emission spectrum cannot extend beyond the observed innermost data points of the infrared drop of the Sgr A spectrum, we have established that the range of possible values of the neutrino mass narrows as the accretion rate $`\dot{m}`$ increases. We have also shown that an accretion rate $`\dot{M}>5.7\times 10^{-5}M_{\odot }\,\mathrm{yr}^{-1}`$ would render the allowed range of neutrino masses inconsistent with the lower limit obtained from the observational data based on the kinematics of stars. The thin accretion disk neutrino ball scenario alone can, of course, neither explain the lower part of the radio spectrum, i.e. $`\nu \lesssim 2\times 10^{11}\mathrm{Hz}`$, nor can it explain a possible spectrum for $`\nu \gtrsim 10^{14}\mathrm{Hz}`$. The latter is a consequence of the fact that the escape velocity from the center of a neutrino ball of $`2.6\times 10^6M_{\odot }`$ is only about 1700 km/s. In order to get X-rays, the particles need to reach a sizable fraction of the velocity of light, which is impossible in the pure neutrino ball scenario. However, as the heavy neutrinos presumably decay radiatively ($`\nu _\tau \to \nu _\mu +\gamma `$ or $`\nu _\tau \to \nu _e+\gamma `$) with a lifetime of $`\gtrsim 10^{18}\,\mathrm{yr}`$ (assuming Dirac neutrinos and the current limits for the mixing angles), there will be some X-ray emission of the order of $`\lesssim 10^{34}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ at an energy of $`m_\nu c^2/2`$, which could presumably be detected by the CHANDRA X-ray satellite. Moreover, if both neutrinos and antineutrinos are present in the neutrino ball, annihilation ($`\nu _\tau +\overline{\nu }_\tau \to \gamma +\gamma `$) will also contribute to the X-ray spectrum at an energy $`m_\nu c^2`$, concentrated at the center of the neutrino ball, albeit with a much smaller luminosity (Viollier 1994). Furthermore, it is worthwhile to speculate that a neutron star at the Galactic center, surrounded by a neutrino halo of $`M=2.6\times 10^6M_{\odot }`$, might explain the observed spectrum of Sgr A. A similar idea was proposed long ago by Reynolds and McKee 1980, who suggested that the radio emission of Sgr A could be due to an otherwise unobservable radio pulsar. However, as the accretion rate onto the neutrino ball is of the order of $`\dot{M}=10^{-5}M_{\odot }\,\mathrm{yr}^{-1}`$, i.e.
three orders of magnitude larger than the Eddington accretion rate of $`10^{-8}M_{\odot }\,\mathrm{yr}^{-1}`$ onto a neutron star, much of the baryonic matter falling towards this neutron star will have to be expelled before reaching the neutron star surface.
## 4 Acknowledgements
One of us (F. Munyaneza) gratefully acknowledges funding from the Deutscher Akademischer Austauschdienst (DAAD) and the University of Cape Town. This work is supported by the Foundation for Fundamental Research (FFR). We also thank D. Tsiklauri for useful comments.
Figure captions:
Fig. 1: The angular velocity as a function of the distance from the center for the neutrino ball and the black hole scenarios. The neutrino ball and the black hole have the same mass $`M=2.6\times 10^6M_{\odot }`$.
Fig. 2: The spectrum of Sgr A in both scenarios for various accretion rates. The continuous curves (lines 1,2,3) correspond to a disk immersed in the potential of a neutrino ball while the dashed lines (lines 4,5,6) correspond to a disk around a black hole. Lines 1 and 4 stand for an accretion rate of $`\dot{m}=10^{-3}`$, while lines 2 and 5 correspond to an accretion rate of $`\dot{m}=10^{-4}`$. Finally, an accretion rate of $`\dot{m}=10^{-9}`$ for a starving disk is represented by the lines 3 and 6. The observed data points, taken from Narayan et al. 1998, have been included in this plot. The arrows denote upper bounds. The filled squares show the data with high resolution while the open circles represent the data with less resolution.
Fig. 3: The temperature of the disk as a function of the distance from the center for both scenarios. The accretion rate is $`\dot{m}=10^{-3}`$.
Fig. 4: The Sgr A emission spectrum for neutrino ball masses $`M=2.4,2.6`$ and $`2.8\times 10^6`$ solar masses. The thick lines (1,3,5) correspond to an accretion rate of $`\dot{m}=10^{-3}`$ while the thin lines (2,4,6) are drawn for $`\dot{m}=10^{-4}`$. The mass of the neutrino ball does not have a significant effect on the spectrum of Sgr A.
Fig. 5: The Sgr A emission spectrum for various neutrino masses $`m_\nu `$. An upper limit for the neutrino mass is inferred by requiring that the theoretical spectrum cannot go beyond the innermost points of the infrared drop of the observed spectrum. The top panel represents the spectrum for $`\dot{m}=10^{-4}`$ while the lower panel describes an accretion rate of $`\dot{m}=10^{-3}`$.
Fig. 6: The neutrino mass $`m_\nu `$ as a function of the accretion rate $`\dot{m}`$ for $`g_\nu =2`$. The horizontal line, with arrows pointing up, shows the lower limit of the neutrino mass, as obtained from the dynamics of stars. The arrows pointing down denote the upper limit, determined from the drop of the spectrum in the infrared region. The range of the neutrino mass narrows as the accretion rate $`\dot{m}`$ increases. For $`\dot{m}>10^{-3}`$, the upper limit on the neutrino mass becomes inconsistent with the lower limit from the dynamics of stars.
# SELF-AFFINITY OF ORDINARY LEVY MOTION, SPURIOUS MULTI-AFFINITY AND PSEUDO-GAUSSIAN RELATIONS
## I Introduction.
By Levy motions, one designates a class of random functions, which are a natural generalization of the Brownian motion, and whose increments are stationary, statistically self-affine and stably distributed in the sense of P. Levy . Two important subclasses are (i) $`\alpha `$-stable processes, or the ordinary Levy motion (oLm), which generalizes the ordinary Brownian motion, or the Wiener process, and whose increments are independent, and (ii) the fractional Levy motion, which generalizes the fractional Brownian motion and has an infinite span of interdependence. The theory of processes with independent increments was developed beginning with Bachelier’s paper concerning Brownian motion. However, the rigorous construction of this process and studies of the properties of its trajectories were undertaken by Wiener . The modern presentation of the general theory of processes with independent increments is contained in . The theory of the processes with independent increments possessing stable distributions begins its history with the already cited work and, later on, was developed by other prominent mathematicians. In particular, the properties of extremes of $`\alpha `$-stable symmetric processes were studied in . The geometric properties of their trajectories were considered in . The monograph contains a modern presentation of the theory of $`\alpha `$-stable processes. The Levy random processes play an important role in different areas of application for at least two reasons. The first one is that the Levy motion can be considered as a generalization of the Brownian motion. Indeed, the mathematical foundation of the generalization is the remarkable properties of stable probability laws. From the point of view of limit theorems, the stable distributions are a generalization of the widely used Gaussian distribution. Namely, stable distributions are the limit ones for the distributions of (properly normalized) sums of independent identically distributed (i.i.d.) random variables . Therefore, these distributions (like the Gaussian one) occur when the evolution of a physical system or the result of an experiment is determined by the sum of a large number of identical independent random factors. An important distinction of stable probability densities is the power law tails decreasing as $`\left|x\right|^{-1-\alpha }`$, where $`\alpha `$ is the Levy index, $`0<\alpha <2`$. Hence, the distribution moments of order $`q\ge \alpha `$ diverge. In particular, stably distributed variables possess a non-finite variance. The second reason for the ubiquity of the Levy motions is their remarkable property of scale invariance. From this point of view the Levy motions (like the Brownian ones) belong to the so-called fractal random processes. Indeed, the objects in nature rarely exhibit exact self-similarity (like the von Koch curve), or self-affinity. On the contrary, these properties have to be understood in a probabilistic sense . The random fractals are believed to be widely spread in nature. A coastline is a simple example of a statistically self-similar object , as well as the spot of the Chernobyl contamination in the nearest zone . On the contrary, the trace of the Brownian motion is statistically self-affine. Several numerical algorithms were developed in order to simulate fractional Brownian motion [10,12-15].
They allow one to model many highly irregular natural objects, which can be viewed as random fractals . The traces of the Levy motions are also statistically self-affine; therefore, one may expect that they are also suited for modelling and studying natural random fractals. The stable distributions and the Levy, or Levy-like, random processes are widely used in different areas where phenomena possessing scale invariance (in a probabilistic sense) are observed or, at least, can be suspected, e.g., in economy [16-19], biology and physiology , turbulence and chaotic dynamics , solid state physics , plasma physics , geophysics etc. In this respect, the problems connected with experimental data processing are of great importance. It is also necessary to develop different numerical algorithms which allow one to simulate Levy motion with given statistical properties. This, in turn, allows one to improve the methods aimed at the analysis and interpretation of experimental data. Recently, three models of the oLm were proposed . They can be interpreted as ”difference schemes” to approximate the evolution equation for the distribution density of the ordinary Levy motion. In our paper we employ a different approximation, which is based on using the Gnedenko limit theorem along with the inversion method for generating random variables. With the help of this approximation we study the consequences of self-affinity of the motion, namely, the time dependence of the structure functions and of the range. We show that the finiteness of the sample size plays an essential role when the consequences of self-affinity are considered. We demonstrate both analytically and numerically that the finiteness of the sample size violates self-affinity, thus giving rise to spurious multi-affinity of the oLm. Further, the second-order structure function and the normalized range (just these quantities are widely used in processing a huge number of various experimental data ) possess a spurious, ”pseudo-Gaussian” time dependence. This circumstance allows one to suggest that, when estimating the second-order structure function and the normalized range from experimental data, their ”Levy nature” can easily be masked. In order to avoid pseudo-Gaussianity when studying the range, we propose a modified Hurst method for rescaled range analysis. The paper is organized as follows. In Sec. 2 we discuss the property of self-affinity and its consequences. In Sec. 3 we propose a model for numerical simulation of the oLm. In Sec. 4 the effects of a finite sample size are studied. In Sec. 5 we present numerical results. Finally, the conclusions are presented in Sec. 6.
## II Self-affinity of the ordinary Levy motion
Let us proceed with the self-affine properties of the ordinary Levy motion, denoted below as $`L_\alpha \left(t\right)`$. In this paper we restrict ourselves to symmetric stable distributions. The characteristic function of the oLm increments is
$$\widehat{p}_{\alpha ,D}(k,\tau )\equiv \left\langle \mathrm{exp}\left[ik\left(L_\alpha (t+\tau )-L_\alpha (t)\right)\right]\right\rangle =\mathrm{exp}(-D\left|k\right|^\alpha \tau )$$ (1)
Here $`\alpha `$ is the Levy index, $`0<\alpha \le 2`$, and $`D>0`$ is a scale parameter.
The probability density
$$p_{\alpha ,D}(x,t)=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{dk}{2\pi }\mathrm{exp}(-ikx)\widehat{p}_{\alpha ,D}(k,t)$$ (2)
is expressed in terms of elementary functions in two cases: (i) ordinary Brownian motion, $`L_\alpha (t)\equiv B(t)`$, which has $`\alpha =2`$ and the probability density $`p_{2,D}(x,t)=\frac{1}{\sqrt{4\pi Dt}}\mathrm{exp}\left[-\frac{x^2}{4Dt}\right]`$, and (ii) ordinary Cauchy motion, which has $`\alpha =1`$ and the probability density $`p_{1,D}(x,t)=\frac{Dt/\pi }{D^2t^2+x^2}`$. At $`\left|x\right|\to \mathrm{\infty }`$ the probability densities of the oLm have power law tails, $`p_{\alpha ,D}(x,t)\propto Dt/\left|x\right|^{1+\alpha }`$. The increments of the oLm are stationary in a narrow sense,
$$L_\alpha (t_1+\tau )-L_\alpha (t_2+\tau )\stackrel{d}{=}L_\alpha (t_1)-L_\alpha (t_2),$$ (3)
and self-affine with the parameter $`H=1/\alpha `$, that is, for an arbitrary $`\kappa >0`$
$$L_\alpha (t+\tau )-L_\alpha (t)\stackrel{d}{=}\kappa ^{-1/\alpha }\left[L_\alpha (t+\kappa \tau )-L_\alpha (t)\right],$$ (4)
where $`\stackrel{d}{=}`$ implies that the two random functions have the same distribution functions. The exponent $`1/\alpha `$, by analogy with the theory of fractional Brownian motion , is named the Hurst index of the ordinary Levy motion. We consider two corollaries of Eqs. (1)-(4), which, again by analogy with the definitions of Ref. , may be called ”$`1/\alpha `$ laws” for the structure function and for the range. We first consider the structure function of the oLm. A ”$`1/\alpha `$ law” for the structure function can be stated as follows: for all $`0<\alpha <2`$
$$S_q(\tau ,\alpha )=\left\langle \left|L_\alpha (t+\tau )-L_\alpha (t)\right|^q\right\rangle =\{\begin{array}{c}\tau ^{q/\alpha }V(q;\alpha ),0\le q<\alpha \\ \mathrm{\infty },q\ge \alpha \end{array},$$ (5)
where
$$V(\mu ;\alpha )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}dx_2\left|x_2\right|^\mu \int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{dx_1}{2\pi }\mathrm{exp}(ix_1x_2-\left|x_1\right|^\alpha ),$$ (6)
whereas for $`\alpha =2`$ (ordinary Brownian motion)
$$S_q(\tau ;2)=\tau ^{q/2}V(q;2)$$ (7)
for an arbitrary $`q`$. Equations (5)-(7) have a direct physical consequence for the description of anomalous diffusion. Indeed, for the ordinary Brownian motion the characteristic displacement $`\mathrm{\Delta }x(\tau )`$ of a particle may be written in terms of the second-order structure function as
$$\mathrm{\Delta }x(\tau )=S_2^{1/2}(\tau ;2)=\sqrt{2}\tau ^{1/2},$$ (8)
where the prefactor $`\sqrt{2}`$ is simply $`V^{1/2}(2;2)`$ . One may note from Eqs. (6), (7) that, for normal diffusion,
$$S_q^{1/q}(\tau ;2)\propto \tau ^{1/2}$$ (9)
at any $`q`$, and thus any order of the structure function may serve as a measure of the normal diffusion rate:
$$\mathrm{\Delta }x(\tau )\propto S_q^{1/q}(\tau ;2)\propto \tau ^{1/2},$$ (10)
if one is interested in the time dependence of the characteristic displacement, but not in the value of the prefactor. We recall that usually just the time dependence, but not the prefactor, serves as an indicator of normal or anomalous diffusion . In analogy with Eqs. (9), (10) it follows from Eqs. (5), (6) that the quantity $`S_q^{1/q}(\tau ;\alpha )`$ at $`0<\alpha <2`$ and any $`q<\alpha `$ can serve as a measure of the anomalous diffusion rate:
$$\mathrm{\Delta }x(\tau )\propto S_q^{1/q}(\tau ;\alpha )\propto \tau ^{1/\alpha },0<q<\alpha <2.$$ (11)
Here we have the case of a fast anomalous diffusion, or hyperdiffusion. The second corollary of Eqs.
(1)-(4) is for the range of the oLm. A ”$`1/\alpha `$ law” for the range can be stated as follows:
$$R(\tau )=\underset{0\le t\le \tau }{sup}L_\alpha (t)-\underset{0\le t\le \tau }{inf}L_\alpha (t)\stackrel{d}{=}\tau ^{1/\alpha }R(1).$$ (12)
For the ordinary Brownian motion $`\tau ^{-1/2}R\left(\tau \right)`$ has a distribution independent of $`\tau `$. For the oLm, $`0<\alpha <2`$, the statistical mean of the range, $`\left\langle R(\tau )\right\rangle `$, behaves as $`\tau ^{1/\alpha }`$ at $`1<\alpha <2`$, and turns to infinity at $`0<\alpha \le 1`$. Both ”$`1/\alpha `$ laws” are studied in numerical simulation in Sec. 5. However, at the end of this Section we point once more to an important property of the moments of stable distributions with the Levy index $`\alpha <2`$: the moments of order $`q\ge \alpha `$ diverge. This property, in turn, manifests itself in the divergence of the $`q`$-th order structure function at $`q\ge \alpha `$, see Eq. (5), and of the mean of the range at $`\alpha \le 1`$. However, in experimental data processing both quantities are finite due to the finiteness of the sample size. In fact, one may expect that in the case of the Levy motion the finiteness of the sample size has a stronger influence on the results than in the case of the Brownian motion. Therefore, estimates of such influence are needed. We study this problem in Sec. 4 before discussing the results of numerical simulation.
## III A simple way to approximate ordinary Levy motion.
The process of constructing an approximation to the oLm can be divided into two steps. Step 1. At the first step we generate a random sequence of i.i.d. random variables possessing a stable probability law. These variables play the role of increments of the oLm having the Levy index $`\alpha `$, $`0<\alpha <2`$. The value $`\alpha =2`$ corresponds to the ordinary Brownian motion, hence in this case the sequence of independent increments is generated with the use of a standard Gaussian generator. Since in Ref. the sequence generated by the Gaussian generator is called ”approximate discrete-time white Gaussian noise”, we call the sequence generated at $`0<\alpha <2`$ ”approximate discrete-time white Levy noise”. We generate approximate white Levy noise possessing the characteristic function
$$\widehat{p}_{\alpha ,D}(k)=<\mathrm{exp}(ikx)>=\mathrm{exp}(-D\left|k\right|^\alpha ).$$ (13)
At $`0<\alpha <2`$ they have power law asymptotic tails ,
$$p_{\alpha ,D}(x)\simeq D\frac{\mathrm{\Gamma }(1+\alpha )\mathrm{sin}(\pi \alpha /2)}{\pi \left|x\right|^{1+\alpha }},x\to \pm \mathrm{\infty }.$$ (14)
In the literature there exist different algorithms for generating random variables distributed with a stable probability law. We only mention two recently proposed schemes, which use combinations of random number generators and a family of chaotic dynamical systems with broad probability distributions , respectively. However, we believe that the ways of generating stably distributed variables and, then, the Levy motion are not exhausted, and various simulation models are needed; each of them may appear to be useful when studying some particular problem. In this paper we propose a simple approximation based on the Gnedenko limit theorem along with the method of inversion. Indeed, among the methods of generating a random sequence with a given probability law $`F(x)`$ the method of inversion seems most simple and effective .
However, it is a well-known fact that its effectiveness is limited to laws possessing analytic expressions for $`F^{-1}`$; hence the direct application of the method of inversion to the stable law is not expedient. In this connection, it is natural to exploit an important property of stable distributions. Namely, such distributions are limiting for those of properly normalized sums of i.i.d. random variables . To be more concrete, we generate the needed random sequence in two steps. At the first one we generate an ”auxiliary” sequence of i.i.d. random variables $`\left\{\xi _j\right\}`$, whose distribution density $`F^{\prime }(x)`$ possesses asymptotics having the same power law dependence as the stable density with the Levy index $`\alpha `$ has, see Eq. (14). However, contrary to the stable law, the function $`F(x)`$ is chosen as simple as possible in order to get an analytic form of $`F^{-1}`$. For example,
$$F(x)=\frac{1}{2(1+\left|x\right|^\alpha )},x<0,$$ (15)
$`F(x)=1-\frac{1}{2(1+x^\alpha )},x>0.`$ Then, the normalized sum is estimated,
$$X=\frac{1}{am^{1/\alpha }}\sum _{j=1}^{m}\xi _j,$$ (16)
where $`a=\left(\frac{\pi }{2\mathrm{\Gamma }(\alpha )\mathrm{sin}(\pi \alpha /2)}\right)^{1/\alpha }.`$ According to the Gnedenko theorem on the normal attraction basin of the stable law , the distribution of the sum (16) then converges to the stable law with the characteristic function (13) and $`D=1`$. It is reasonable to generate random variables having a stable distribution with unit $`D`$, with a consequent rescaling, if necessary. Repeating the above procedure $`N`$ times, we get a sequence of i.i.d. random variables $`\left\{X(t)\right\},t=1,2,\mathrm{},N`$. This is an approximate discrete-time white Levy noise. At the top of Fig. 1 the probability densities $`p(x)`$ for the members of the sequence $`\left\{X(t)\right\}`$ ($`m=30`$ in Eq. (16)) are depicted by black points for (a) $`\alpha =1.0`$ and (b) $`\alpha =1.5`$. The functions $`p_{\alpha ,1}(x)`$ obtained with the inverse Fourier transform, see Eq. (13), are shown by solid lines. At the bottom of Fig. 1 the black points depict asymptotics of the same probability densities in log-log scale. The solid lines show the asymptotics given by Eq. (14). The examples presented demonstrate a good agreement between the probability densities for the sequences $`\left\{X(t)\right\}`$ obtained with the use of the proposed numerical algorithm and the densities of the stable laws. We would like to stress that a certain merit of the proposed model is its simplicity. It is entirely based on a classical formulation of one of the limit theorems and can easily be generalized to the case of asymmetric stable distributions. It also allows one, after some modifications, to speed up the convergence to the stable law. These problems, as well as a comparison with the other algorithms, seem to be the subject of a separate paper. Step 2. Using the approximate discrete-time white Levy noise $`X(t)`$, the approximation to the oLm is defined by
$$L_\alpha (t)=\sum _{\tau =1}^{t}X(\tau )$$ (17)
In Fig. 2 the approximate white Levy noises obtained with the proposed numerical algorithm are depicted by thin lines at 4 different Levy indexes. The thick lines depict the sample paths, or the trajectories, of the approximation to the oLm. It is clearly seen that, as the Levy index decreases, the amplitude of the noise increases. The large outliers lead to large “jumps” (often named ”Levy flights”) on the trajectory.
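For the reader who wants to reproduce the construction, Eqs. (15)-(17) translate almost literally into code. The following is a minimal sketch in Python (function and variable names are ours, not fixed by the text):

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)

def xi_sample(alpha, size):
    """Inversion sampling of the auxiliary law F(x) of Eq. (15)."""
    u = rng.uniform(0.0, 1.0, size)
    w = np.minimum(u, 1.0 - u)                       # distance to the nearer tail
    mag = (1.0 / (2.0 * w) - 1.0) ** (1.0 / alpha)   # |x| from inverting F
    return np.sign(u - 0.5) * mag

def levy_noise(alpha, N, m=30):
    """Approximate discrete-time white Levy noise via the normalized sum, Eq. (16)."""
    a = (np.pi / (2.0 * gamma(alpha) * np.sin(np.pi * alpha / 2.0))) ** (1.0 / alpha)
    xi = xi_sample(alpha, (N, m))
    return xi.sum(axis=1) / (a * m ** (1.0 / alpha))

def olm_path(alpha, N, m=30):
    """Approximation to the ordinary Levy motion: the cumulative sum of Eq. (17)."""
    return np.cumsum(levy_noise(alpha, N, m))
```

For $`\alpha =2`$ one would instead call a Gaussian generator directly, as noted above; increasing $`m`$ improves the convergence to the stable law at the cost of more draws per increment.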
## IV Effects of limited sample size.
In this Section we discuss the time dependence of the $`q`$-th order structure function and of the range, when these are estimated from experimental data. We first give an estimate for the mode of the maximum value. Suppose we have a sequence $`\{X(t)\}`$, $`t=1,2,\mathrm{},N`$, of i.i.d. random variables possessing the stable probability density $`p_{\alpha ,1}\left(x\right)`$ and the cumulative probability $`P_{\alpha ,1}(X\le x)=\int _{-\mathrm{\infty }}^{x}du\,p_{\alpha ,1}(u)`$. Then, $`P_{\alpha ,1}^N`$ is the probability that all $`N`$ terms of the sequence are less than $`x`$. This, in turn, implies that $`P_{\alpha ,1}^N`$ is the probability that the maximum value of the $`N`$ terms is less than $`x`$. Therefore,
$$\phi _N(x)=\left(P_{\alpha ,1}^N(x)\right)^{\prime }=NP_{\alpha ,1}^{N-1}(x)p_{\alpha ,1}(x)$$ (18)
is the probability density of the maximum value in a sample consisting of $`N`$ terms. Let $`X_{\mathrm{max}}(N)`$ be the mode of the maximum value, that is, the most probable maximum value. It obeys the equation $`\phi _N^{\prime }(x)|_{x=X_{\mathrm{max}}}=0`$. Since $`p_{\alpha ,1}(x)\propto x^{-1-\alpha }`$ for large $`x`$, we may give the following estimate for the mode:
$$X_{\mathrm{max}}(N)\propto N^{1/\alpha },0<\alpha <2.$$ (19)
With the help of Eq. (19) we are able to roughly estimate the diverging statistical moments of the sequence $`\{X(t)\}`$, $`t=1,2,\mathrm{},N`$, as
$$\int _0^{X_{\mathrm{max}}(N)}dX\,X^qp_{\alpha ,1}(X)\propto X_{\mathrm{max}}^{q-\alpha }\propto N^{q/\alpha -1},q>\alpha .$$ (20)
Now we turn to the structure function. It can be written as
$$S_q(\tau )=\left\langle \left|\mathrm{\Delta }L_\alpha (\tau )\right|^q\right\rangle =\int _{-\mathrm{\infty }}^{\mathrm{\infty }}d\mathrm{\Delta }L_\alpha \left|\mathrm{\Delta }L_\alpha \right|^qp_{\alpha ,1}(\mathrm{\Delta }L_\alpha ,\tau ),$$ (21)
where $`\mathrm{\Delta }L_\alpha (\tau )\equiv L_\alpha (t+\tau )-L_\alpha (t)`$, and the probability density is given by Eqs. (1), (2). We may introduce a stochastic variable $`\xi `$, such that $`\mathrm{\Delta }L_\alpha (\tau )=\xi \tau ^{1/\alpha }`$, and rewrite $`S_q`$,
$$S_q=\tau ^{q/\alpha }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}d\xi \,\xi ^qp_{\alpha ,1}(\xi ),$$ (22)
where the probability density for the new variable is given by Eqs. (13), (14). To estimate the integral in Eq. (22), we use Eq. (20) with the important note that here $`N`$ is equal to $`T/\tau `$, where $`T`$ is the total length of the sample. Therefore, the integral can be estimated as $`(T/\tau )^{q/\alpha -1}`$, and
$$S_q\propto \tau T^{q/\alpha -1},q>\alpha .$$ (23)
Thus, the effects of a limited sample size manifest themselves in Eq. (23), which replaces the ”theoretical infinity” in Eq. (5) for $`q>\alpha `$. A particular case is the second-order structure function, for which we have a ”pseudo-Gaussian” relation (see Eq. (8)), $`S_2^{1/2}\propto \tau ^{1/2}`$. The linear $`\tau `$-dependence of the $`q`$-th order structure function was found in . Equation (23) points to a violation of the self-affinity of the oLm. Indeed, self-affinity implies that the $`\tau `$-exponent of the structure function depends linearly on $`q`$, see Eqs. (4), (5). On the contrary, the rough estimate of the finite sample effects demonstrates that the exponent does not depend on $`q`$ at $`q>\alpha `$. Thus, in experimental data processing one may expect that the exponent smoothly changes its slope, thus giving rise to a convex curve. Such convexity looks like an indication of multi-affinity, see Ref. .
However, in our case this behavior is stipulated not by an ”intrinsic” reason, but instead by the influence of the finite size of a sample of the self-affine process. That is why we call this effect ”spurious multi-affinity”. Now we consider the finite size effects for the range of the oLm. In empirical rescaled range analysis, that is, in experimental data processing or in numerical simulation, the range of the random process is divided by the standard deviation (that is, the square root of the second moment) for the sequence of increments, $`\sigma _2=\left(\frac{1}{\tau }\sum _{t=1}^{\tau }\left(X(t)\right)^2\right)^{1/2}`$, after subtraction of a linear trend. This procedure, called the Hurst method, or the method of normalized range, ”smoothes” the fluctuations of the range on different segments of a time series, and is used in a great variety of applications . H. E. Hurst was the first to collect large statistical material relating to water levels and other phenomena, which indicates that the observed normalized range does not increase (as is expected for the ordinary Brownian motion) like the square root of the observational period $`\tau `$, but instead like a higher power. However, the Hurst method is not satisfactory for the oLm because of the infinity of the theoretical value of the standard deviation. What should one expect when estimating the normalized range of the oLm from a finite sample? Since $`\left\langle X^2\right\rangle \propto \tau ^{2/\alpha -1}`$, see Eq. (20), then $`\sigma _2\propto \tau ^{1/\alpha -1/2}`$, whereas $`R(\tau )\propto \tau ^{1/\alpha }`$, and, thus,
$$\frac{R(\tau )}{\sigma _2}\propto \tau ^{1/2}.$$ (24)
Therefore, we conclude that in empirical rescaled range analysis the Hurst method gives a spurious, ”pseudo-Gaussian” time dependence. In order to get the correct exponent $`1/\alpha `$ and, at the same time, to smooth the fluctuations of the range, we propose to modify the Hurst method by exploiting the $`\alpha `$-th root of the $`\alpha `$-th moment instead of the standard deviation, that is,
$$\sigma _\alpha =\left(\frac{1}{\tau }\sum _{t=1}^{\tau }\left|X(t)\right|^\alpha \right)^{1/\alpha }.$$ (25)
Since it has only a weak logarithmic divergence as the number of terms in the sum increases, one has
$$\overline{\left(\frac{R(\tau )}{\sigma _\alpha }\right)}\propto \tau ^H,$$ (26)
where $`H=1/\alpha `$ is the Hurst index for the oLm with the Levy index $`\alpha `$, and the bar denotes averaging over the number of segments (each having the length $`\tau `$) of the sample path. The expediency of using the modified Hurst method when studying the Levy motion can be explained as follows. In the general case, the Hurst exponent in Eq. (4) contains information not only on the Levy index of the increment distribution, but also on long-time correlations between the increments. In the case of the ordinary Levy motion the correlations between non-overlapping increments are absent, and the Hurst index equals $`1/\alpha `$. If correlations exist, the Hurst index for the Levy motion differs from $`1/\alpha `$, and this circumstance leads to a violation of the ”$`1/\alpha `$ laws” for the structure function and for the range.
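The comparison between the traditional and the modified normalization is easy to carry out numerically. Below is a minimal sketch (Python, continuing the generator above; trend subtraction is omitted for brevity, and all names are ours) that averages $`R/\sigma _2`$ and $`R/\sigma _\alpha `$ over segments of length $`\tau `$ and extracts the slopes $`H`$ from a log-log fit:

```python
def rescaled_ranges(X, tau, alpha):
    """Mean R/sigma_2 and R/sigma_alpha over segments of length tau, cf. Eqs. (24)-(26)."""
    n_seg = len(X) // tau
    r2, ra = [], []
    for k in range(n_seg):
        seg = X[k*tau:(k+1)*tau]
        path = np.concatenate(([0.0], np.cumsum(seg)))   # partial sums over the segment
        R = path.max() - path.min()                      # the range, Eq. (12)
        s2 = np.sqrt(np.mean(seg**2))                    # standard deviation, sigma_2
        sa = np.mean(np.abs(seg)**alpha)**(1.0/alpha)    # sigma_alpha, Eq. (25)
        r2.append(R / s2)
        ra.append(R / sa)
    return np.mean(r2), np.mean(ra)

alpha = 1.0
X = levy_noise(alpha, 2**14)
taus = 2**np.arange(4, 10)
vals = np.array([rescaled_ranges(X, t, alpha) for t in taus])
H_trad = np.polyfit(np.log(taus), np.log(vals[:, 0]), 1)[0]  # expected ~ 1/2
H_mod = np.polyfit(np.log(taus), np.log(vals[:, 1]), 1)[0]   # expected ~ 1/alpha
```

With the traditional normalization the fitted slope stays near $`1/2`$ for any $`\alpha `$, while the $`\sigma _\alpha `$ normalization recovers a slope close to $`1/\alpha `$, which is exactly the point made above.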
Therefore, when treating experimental data, it seems expedient to estimate $`\alpha `$ with the use of the increment distribution (there are different methods for estimating the parameters of stable distributions, see, e.g., ) and then, by testing the ”$`1/\alpha `$ laws” for the structure function and for the range, to get information about the presence (or absence) of long-time correlations. In the next Section we verify numerically the self-affine properties and finite sample effects with the help of the approximation described in Sec. 3.
## V Self-affine properties and finite sample effects in numerical simulation
We study numerically the $`\tau `$- and $`T`$-dependence of the $`q`$-th order structure function,
$$S_q\propto \tau ^{\mu (q)}T^{a(q)},$$ (27)
where, as before, $`\tau `$ is the time argument of the structure function, and $`T`$ is the sample size. According to the analytical estimates
$$\mu (q)=\{\begin{array}{c}q/\alpha ,q<\alpha ,\\ 1,q\ge \alpha ,\end{array}$$ (28)
whereas
$$a(q)=\frac{q}{\alpha }-1,$$ (29)
see Eqs. (5) and (23). We first study $`\mu `$ vs $`\alpha `$ in this relation at $`q`$ less than $`\alpha `$. In Fig. 3 a typical example is depicted by crosses at fixed $`q=1/2`$. The $`q/\alpha `$ curve is shown by primes. One can convince oneself that the $`1/\alpha `$ law is well confirmed at $`q`$ smaller than the smallest Levy index in the numerical simulation. Also in the figure, $`\mu `$ vs $`\alpha `$ is depicted by black points at $`q=2`$. It is shown that the second-order structure function, being estimated from a finite sample, leads to the spurious ”pseudo-Gaussian” behavior. In the clarifying inset we show $`S_q`$ vs $`\tau `$ in a log-log scale with $`q`$ and $`\alpha `$ being fixed. The exponent $`\mu `$ in the main figure is obtained for fixed $`\alpha `$ as the slope of a straight line in the inset. In Fig. 4a, b we plot $`\mu `$ vs $`q`$ for the Levy indexes 1.2 and 1.7, respectively. The analytical estimate, see Eq. (28), is shown by a solid line for $`q<\alpha `$ and by a dotted line for $`q>\alpha `$. The results of simulation are shown by black points. The arrows indicate the value $`q=\alpha `$, at which the bend of the theoretical curves occurs. It is shown that the results of simulation are well fitted by the analytical curves. Figure 5 demonstrates $`S_q`$ versus the sample size $`T`$ in a log-log scale. The Levy index $`\alpha `$ is equal to 1.2. The dotted lines indicate $`S_q`$ vs $`T`$ at $`q=\alpha `$ (horizontal line), $`q=2.5\alpha `$ and $`4\alpha `$ (sloping lines). The black points indicate simulation results (4 points for each $`q`$). As in Fig. 4, we see a good agreement between experimental results and analytical estimates. It implies that Eq. (23), though seemingly a very rough estimate, nevertheless accounts for the finite sample size effects for the structure function of the oLm. Figure 6 demonstrates the application of the modified Hurst method to a sample path of the approximation to the oLm with the Levy index $`\alpha =1`$. In Fig. 6a fluctuations of the range (thin curve) and those of $`\sigma _\alpha `$ (thick curve) are shown for the case when the total length of the sample is divided into 64 segments, each of length $`\tau =16`$. Below, the variations of the ratio $`R/\sigma _\alpha `$ are depicted. It is shown that the fluctuations of the ratio are much smaller than those of the span. This circumstance justifies the use of the ratio in the empirical analysis. In Fig. 6b the rescaled span, see Eq. (26), is depicted versus the time interval by black points in a log-log scale.
The slope of the solid line is equal to $`H=0.9`$. In Fig. 7 the Hurst index obtained with the use of Eq. (26) is depicted by crosses, whereas the curve $`1/\alpha `$ is indicated by primes. By comparing Fig. 7 with Fig. 3 one can see that the ”$`1/\alpha `$ law” is better fulfilled for the structure function of the simulated process than for the range. We also note that the same conclusion follows for the ”$`\tau ^H`$ laws” for fractional Brownian motion . A more detailed discussion of this problem requires the use of the theory of extremes for processes with stationary increments. This is beyond the scope of our paper. However, we draw the reader’s attention to the Hurst index which is obtained with the use of the ”traditional” ratio $`R/\sigma _2`$ and is shown by black points in Fig. 7. In accordance with the discussion of Sec. 4, it can be clearly seen that the standard deviation, being used in the empirical analysis of the oLm, ”suppresses” the variations of the range, thus giving rise to the spurious value $`H\approx 0.5`$ for all Levy indexes. On the contrary, $`\sigma _\alpha `$ only ”smoothes” the variations of the range, thus leading to the (nearly) correct value $`H=1/\alpha `$.
## VI Conclusions.
The ubiquity of the Levy motion raises problems related to experimental data processing. In particular, the fat tails of the stable distributions allow one to suggest that the effects of a finite sample size play an important role. To study these effects, we propose an approximation which is based on using the Gnedenko limit theorem along with the method of inversion. It allows us to simulate the ordinary Levy motion and study the finite sample effects when estimating structure functions and the range. The results of simulation are in a quantitative agreement with theoretical estimates. In particular, we find that the second-order structure function estimated from ordinary Levy motion sample paths as well as the Hurst method both lead to spurious ”pseudo-Gaussian” time dependencies. We also propose to modify the Hurst method of rescaled range analysis in order to avoid the ”pseudo-Gaussian” relation. At the end we note that the introduction of correlations into the sequence of i.i.d. stably distributed random variables (with the help of the method used in Ref. for the sequence of i.i.d. Gaussian variables) allows us to study the finite sample effects for fractional Levy motion.
Acknowledgements
This work was supported by the National Academy of Science of Ukraine, the Project ”Chaos-2”, and by the INTAS Program, Grants LA-96-02 and 98-01.
REFERENCES
1. P. Levy, Theorie de l’Addition des Variables Aleatoires (Gauthier-Villiers, Paris, 1937).
2. L. Bachelier, Annales Scientifiques de l’Ecole Normale Superieure III 17 (1900) 21.
3. N. Wiener, Journ. Math. Phys. Mass. Inst. Technology 2 (1923) 131.
4. A. V. Skorokhod, Random Processes with Independent Increments (in Russian: Nauka, Moscow, 1964. Engl. transl.: Kluwer, Dordrecht, 1991).
5. D. A. Darling, Trans. Amer. Math. Soc. 83 (1956) 164.
6. R. M. Blummental, R. K. Getoor, Trans. Amer. Math. Soc. 95 (1960) 263; Journ. Math. 4 (1960) 370.
7. G. Samorodnitsky, M. S. Taqqu, Stable non-Gaussian Random Processes (Chapman & Hall, New York, 1994).
8. B. V. Gnedenko, A. N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables (in Russian: Izd-vo tekhniko-teor. lit-ry, Moskva, 1949; Engl. transl.: Addison Wesley, Reading, MA, 1954).
9. B. B. Mandelbrot, The Fractal Geometry of Nature (Freeman, New York, 1982).
10. R. F.
Voss, in: Fundamental Algorithms in Computer Graphics, edited by R. A. Earnshaw (Springer-Verlag, Berlin, 1985), p. 805.
11. V. Bar’yakhtar, V. Gonchar et al., Ukrainian Journal of Physics 38 (1993) 967.
12. B. B. Mandelbrot and J. R. Wallis, Water Resources Research 5 (1969) 228.
13. S. Rambaldi, O. Pinazza, Physica A 208 (1994) 21.
14. O. Magre, M. Guglielmi, Chaos, Solitons and Fractals 8 (1997) 377.
15. A. V. Chechkin, V. Yu. Gonchar, Preprint cond-mat/9902209 (accepted to Chaos, Solitons & Fractals).
16. B. B. Mandelbrot, Journ. Business 36 (1963) 394.
17. B. Mandelbrot, Fractals and Scaling in Finance (Springer-Verlag, New York, 1997).
18. R. N. Mantegna, Physica A 179 (1991) 232.
19. R. N. Mantegna, H. E. Stanley, Physica A 254 (1998) 77.
20. B. J. West and W. Deering, Phys. Reports 246 (1994) 1.
21. M. F. Schlesinger, B. J. West, J. Klafter, Phys. Rev. Lett. 58 (1987) 1100.
22. M. F. Schlesinger, G. M. Zaslavsky, J. Klafter, Nature 363 (1993) 31; G. M. Zaslavsky, Physica D 76 (1994) 110; M. F. Schlesinger (editor), G. M. Zaslavsky, U. Frisch, Levy Flights and Related Topics in Physics: Proceedings of the International Workshop Held at Nice, France, 27-30 June 1994 (Springer, Berlin, 1995).
23. J.-P. Bouchaud, A. Georges, Phys. Reports 195 (1990) 127.
24. G. Zimbardo, P. Veltri, G. Basile, S. Principato, Phys. Plasmas 2 (1995) 2653.
25. D. Schertzer, S. Lovejoy, Multifractals and Turbulence: Fundamentals and Applications (World Scientific, Singapore, 1993).
26. R. Gorenflo, G. De Fabritiis and F. Mainardi, Physica A 269 (1999) 84.
27. J. Feder, Fractals (Plenum Press, New York, 1988).
28. B. B. Mandelbrot and J. W. van Ness, SIAM Review 10 (1968) 422.
29. W. Feller, Ann. Math. Stat. 22 (1951) 427.
30. R. N. Mantegna, Phys. Rev. E 49 (1994) 4677.
31. K. Umeno, Phys. Rev. E 58 (1998) 2644.
32. M. G. Kendall, S. B. Babington, Random Sampling Numbers (Cambridge Univ. Press, Cambridge, 1939).
33. A. V. Chechkin, V. Yu. Gonchar, Preprint cond-mat/9901064.
34. F. Schmitt, D. Scherzer, S. Lovejoy, Appl. Stochastic Models Data Anal. 15 (1999).
35. B. Mandelbrot, A. Fisher, L. Calvet, Cowles Foundation Discussion Paper #1164 (1997).
36. H. E. Hurst, R. P. Black, and Y. M. Sinaika, Long Term Storage in Reservoirs. An Experimental Study (Constable, London, 1965).
37. J. P. Nolan, Comm. in Stat. - Stochastic Models 13 (1997) 759; M. Meerschaert and H.-P. Scheffler, J. Stat. Plann. Inference 71(1-2) (1998) 19.
FIGURE CAPTIONS
Fig. 1. Probability densities (above) and their asymptotics (below) are indicated for the sequences of random variables generated with the use of the proposed numerical algorithm at the Levy indexes (a) $`\alpha =1`$, and (b) $`\alpha =1.5`$. The probability densities and the asymptotics of the stable laws are indicated by solid lines.
Fig. 2. Stationary sequences (thin lines) and ordinary Levy motion trajectories (thick lines) at the different Levy indexes.
Fig. 3. Plots of the exponent $`\mu `$ in Eq. (27) versus the Levy index $`\alpha `$ at $`q=0.5`$ (crosses) and $`q=2`$ (black points). The $`q/\alpha `$ curve is depicted by a dashed line. In the inset the structure function $`S`$ versus $`\tau `$ is shown in a log-log scale at $`\alpha =1`$, $`q=0.5`$.
Fig. 4. The exponent $`\mu `$ (see Eqs. (27), (28)) versus the order $`q`$ of the structure function for (a) $`\alpha =1.2`$ and (b) $`\alpha =1.7`$. The analytical estimate, see Eq. (28), is shown by a solid line for $`q<\alpha `$ and by a dotted line for $`q>\alpha `$. The results of simulation are shown by black points.
The arrows indicate the value $`q=\alpha `$, at which the bend of the theoretical curve occurs.
Fig. 5. Structure function $`S`$ versus sample size $`T`$ in a log-log scale for the Levy index $`\alpha =1`$ and different values of $`q`$: $`q=\alpha `$ (horizontal line), $`q=2.5\alpha `$ and $`q=4\alpha `$ (sloping lines). The black points indicate simulation results.
Fig. 6. (a) Variations of the range (thin curve), of $`\sigma _\alpha `$ (thick curve) and of their ratio (below) at the different time intervals for the oLm with $`\alpha =1`$. (b) Rescaled range, see Eq. (26), versus time interval in log-log scale (black points). The solid line has a slope $`H=0.9`$.
Fig. 7. Plots of the Hurst exponent $`H`$ vs $`\alpha `$ estimated with the use of Eq. (26) (crosses) and with the use of the traditional Hurst method (black points). The $`1/\alpha `$ curve is depicted by a dashed line.
# Cosmology with two compactification scales
## 1 Introduction
In recent years, there has been growing interest and a great deal of activity (see e.g. $`[1],[2],[3],[4],[5]`$) in multidimensional cosmology. A feature common to many of those works is to assume that the Universe is a $`(4+d)`$-dimensional manifold where, due to its evolution, only three spatial dimensions are actually observable, while the remaining $`d`$ have curled up into compact spaces of unobservably small radii. This point of view is also apparent in the Kaluza-Klein spacetime of multidimensional supergravity $`[6],[7],[8],[9]`$. In this letter we consider dynamical compactification of a different sort. The $`(4+d)`$-dimensional space is supposed to break up into a $`(4-n)`$-dimensional Minkowski space and a compact $`(n+d)`$-dimensional manifold whose compactification radii are governed by Einstein’s field equations. Here the integer $`n`$, ranging from $`1`$ to $`3`$, is the number of usual spatial dimensions by hypothesis compactified, like the extra $`d`$ dimensions but with different radii, in a circle. Moreover we require that at the initial time $`t=0`$ the compactification radii of the $`(n+d)`$ spatial dimensions are all the same, and that each radius of the $`d`$ extra dimensions at the present time equals the Planck length, while each radius of the usual $`n`$ dimensions is comparable to the size of the Universe. In the simplified model we propose, only the cosmological constant $`\mathrm{\Lambda }`$ will be retained in Einstein’s equations, thus neglecting matter contributions as well as scalar field terms appropriate to inflationary cosmology and a scale factor for the flat dimensions. This model is admittedly not realistic, but it can prove to be useful for future developments. The letter is organised as follows: taking as guidelines the work of Chodos and Detweiler $`[10]`$, we find the solutions to the field equations and generalize those already found by Kasner when $`\mathrm{\Lambda }=0`$ $`[11]`$. Successively we consider eleven-dimensional cosmological models with different values of $`n`$ and $`\mathrm{\Lambda }`$ and give numerical estimates of some quantities of interest such as the age of the Universe and the evolution of the compactification radii.
## 2 The line element
The metric suitable to our problem has the form
$$ds^2=dt^2-\sum _{i=1}^{3-n}dx^idx^i-a^2(t)\sum _{i=4-n}^{3}d\phi ^id\phi ^i-l^2(t)\sum _{i=1}^{d}d\psi ^id\psi ^i$$ (1)
where $`a(t)`$ and $`l(t)`$ are respectively the compactification radii of each one of the $`n`$ and of the $`d`$ spatial dimensions and $`\phi ^i`$ and $`\psi ^i`$ have a $`2\pi `$ period. Einstein’s equations, with cosmological term only, can be written as
$$R_{MN}=\frac{2\mathrm{\Lambda }}{n+d-1}g_{MN},\text{M,N = 1,2,…,4+d}$$ (2)
and the relevant ones are given explicitly by:
$`n\frac{\ddot{a}}{a}+d\frac{\ddot{l}}{l}=\frac{2\mathrm{\Lambda }}{n+d-1}`$ (3a)
$`\frac{\ddot{a}}{a}+(n-1)\left(\frac{\dot{a}}{a}\right)^2+d\frac{\dot{a}}{a}\frac{\dot{l}}{l}=\frac{2\mathrm{\Lambda }}{n+d-1}`$ (3b)
$`\frac{\ddot{l}}{l}+(d-1)\left(\frac{\dot{l}}{l}\right)^2+n\frac{\dot{a}}{a}\frac{\dot{l}}{l}=\frac{2\mathrm{\Lambda }}{n+d-1}`$ (3c)
where a dot means derivative with respect to time.
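For readers who want to check the solutions quoted below against Eqs. (3), a direct numerical integration is a useful sanity test. A minimal sketch (Python/SciPy; variable names and parameter values are ours, chosen only for illustration, in units with $`H_0=1`$):

```python
import numpy as np
from scipy.integrate import solve_ivp

n, d, lam = 1, 7, 0.5              # illustrative choices; lam = Lambda/H0^2
C = 2.0 * lam / (n + d - 1.0)

def rhs(t, y):
    """y = (ln a, ln l, u, w), u = adot/a, w = ldot/l; Eqs. (3b), (3c) rewritten."""
    ln_a, ln_l, u, w = y
    return [u, w, C - n*u*u - d*u*w, C - d*w*w - n*u*w]

# Initial data at t = t0: u = H0 = 1 and w = h0 fixed by Eq. (6) below,
# which enforces the first-integral (00) constraint of the field equations.
u0 = 1.0
w0 = -n/(d - 1) + np.sqrt(n*(n + d - 1)/(d*(d - 1)**2) + 2*lam/(d*(d - 1)))
sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0, u0, w0], rtol=1e-10, atol=1e-12)
a_over_a0 = np.exp(sol.y[0])       # to be compared with Eq. (12) below
```

Comparing `a_over_a0` with the closed-form Eq. (12) then checks the algebra; Eq. (3a) holds automatically once the constraint is imposed on the initial data.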
The system $`(3)`$ can be solved with the conditions that at the present time $`t=t_0`$ (age of the Universe) one has
$$\begin{array}{cc}\hfill a(t_0)=a_0,& \dot{a}(t_0)=H_0a_0\hfill \\ \hfill l(t_0)=l_0,& \dot{l}(t_0)=h_0l_0\hfill \end{array}$$ (4)
The Hubble constant $`H_0`$ and the new constant $`h_0`$ which appear in the above conditions are not independent, as one can see from Eqs. (3) rewritten at $`t=t_0`$ with the introduction of the deceleration parameter $`q_0=-(\ddot{a}a/\dot{a}^2)_0`$ and of its analogue $`Q_0=-(\ddot{l}l/\dot{l}^2)_0`$:
$`-nq_0H_0^2-dQ_0h_0^2=\frac{2\mathrm{\Lambda }}{n+d-1}`$ (5a)
$`(n-1-q_0)H_0^2+dH_0h_0=\frac{2\mathrm{\Lambda }}{n+d-1}`$ (5b)
$`(d-1-Q_0)h_0^2+nH_0h_0=\frac{2\mathrm{\Lambda }}{n+d-1}`$ (5c)
It is in fact straightforward to obtain:
$$\frac{h_0}{H_0}=\{\begin{array}{cc}-\frac{n}{d-1}+\sqrt{\frac{n(n+d-1)}{d(d-1)^2}+\frac{2\lambda }{d(d-1)}}\hfill & \text{if }d\ne 1\hfill \\ & \\ -\frac{n-1}{2}+\frac{\lambda }{n}\hfill & \text{if }d=1\hfill \end{array}$$ (6)
with
$$\lambda =\frac{\mathrm{\Lambda }}{H_0^2}$$ (7)
When $`d\ne 1`$ it must be $`\lambda >-n(n+d-1)/2(d-1)`$ for reality. For future convenience we define the dimensionless quantities:
$`\tau =H_0t,\tau _0=H_0t_0`$ (8)
$`\omega =\sqrt{\frac{n+d}{2(n+d-1)}|\lambda |}`$ (9)
$`\delta _>=\frac{1}{2}\text{arctanh}\left(\frac{2\omega }{n+d\,h_0/H_0}\right)`$ (10)
$`\delta _<=\frac{1}{2}\text{arctan}\left(\frac{2\omega }{n+d\,h_0/H_0}\right)`$ (11)
The solutions to system $`(3)`$ can then be written as
$$\frac{a(\tau )}{a_0}=\{\begin{array}{cc}& \left[\frac{\mathrm{sinh}(\omega (\tau -\tau _0)+\delta _>)}{\mathrm{sinh}\delta _>}\right]^{\beta _+}\left[\frac{\mathrm{cosh}(\omega (\tau -\tau _0)+\delta _>)}{\mathrm{cosh}\delta _>}\right]^{\beta _-}\text{if }\lambda >0\hfill \\ & \\ & \left[1+(n+d\frac{h_0}{H_0})(\tau -\tau _0)\right]^{\beta _+}\text{if }\lambda =0\hfill \\ & \\ & \left[\frac{\mathrm{sin}(\omega (\tau -\tau _0)+\delta _<)}{\mathrm{sin}\delta _<}\right]^{\beta _+}\left[\frac{\mathrm{cos}(\omega (\tau -\tau _0)+\delta _<)}{\mathrm{cos}\delta _<}\right]^{\beta _-}\text{if }\lambda <0\hfill \end{array}$$ (12)
$$\frac{l(\tau )}{l_0}=\{\begin{array}{cc}& \left[\frac{\mathrm{sinh}(\omega (\tau -\tau _0)+\delta _>)}{\mathrm{sinh}\delta _>}\right]^{\gamma _-}\left[\frac{\mathrm{cosh}(\omega (\tau -\tau _0)+\delta _>)}{\mathrm{cosh}\delta _>}\right]^{\gamma _+}\text{if }\lambda >0\hfill \\ & \\ & \left[1+(n+d\frac{h_0}{H_0})(\tau -\tau _0)\right]^{\gamma _-}\text{if }\lambda =0\hfill \\ & \\ & \left[\frac{\mathrm{sin}(\omega (\tau -\tau _0)+\delta _<)}{\mathrm{sin}\delta _<}\right]^{\gamma _-}\left[\frac{\mathrm{cos}(\omega (\tau -\tau _0)+\delta _<)}{\mathrm{cos}\delta _<}\right]^{\gamma _+}\text{if }\lambda <0\hfill \end{array}$$ (13)
Here
$`\beta _+=\frac{1+\sqrt{d(n+d-1)/n}}{n+d},\beta _-=\frac{1-\sqrt{d(n+d-1)/n}}{n+d}`$ (14)
$`\gamma _+=\frac{1+\sqrt{n(n+d-1)/d}}{n+d},\gamma _-=\frac{1-\sqrt{n(n+d-1)/d}}{n+d}`$ (15)
while $`l_0`$ is assumed to be of the order of the Planck length and $`a_0`$ is the radius of the circle of the actually macroscopic dimensions. Once $`n`$ and $`d`$ are fixed and $`H_0`$ is taken as known, the ratios $`a(\tau )/a_0`$ and $`l(\tau )/l_0`$ turn out to depend only on $`\lambda `$ and $`\tau _0`$.
Then, since these two quantities are not sufficiently well established, we might vary them step by step in a neighborhood, say, of $`\lambda =0`$ and of $`\tau _0=1`$ to obtain numerical estimates of $`a(\tau )/a_0`$ and $`l(\tau )/l_0`$. We find it however more convenient to proceed in a different manner. Noticing that $`\beta _+-\gamma _-=-(\beta _--\gamma _+)\equiv 1/\alpha `$ and defining
$$\rho \equiv \left(\frac{l_0a(0)}{a_0l(0)}\right)^\alpha $$ (16)
which is expected to be a quantity much less than unity, one easily obtains from Equations $`(12)`$ and $`(13)`$ written at $`\tau =0`$:
$$\tau _0=\{\begin{array}{cc}\frac{\delta _>-\text{arctanh}(\rho \mathrm{tanh}\delta _>)}{\omega }\hfill & \text{if }\lambda >0\hfill \\ & \\ \frac{1-\rho }{n+dh_0/H_0}\hfill & \text{if }\lambda =0\hfill \\ & \\ \frac{\delta _<-\text{arctan}(\rho \mathrm{tan}\delta _<)}{\omega }\hfill & \text{if }\lambda <0\hfill \end{array}$$ (17)
In this way we can calculate $`\tau _0`$ for a given $`\lambda `$ if we properly choose the parameter $`\rho `$ or, otherwise stated, the ratio $`a(0)/l(0)`$. As an example, we can recover, for $`\lambda =0`$, Kasner’s solution by choosing $`a(0)`$ equal to zero, or equivalently $`l(0)`$ equal to infinity, and therefore $`\rho =0`$. In view of the smallness of the ratio $`l_0/a_0`$, initial values of $`a(0)`$ and $`l(0)`$ of not too different orders of magnitude do not appreciably influence the results at later times. Our choice is therefore to have at the initial time the same compactification radii for the $`(n+d)`$ spatial dimensions, and so we put $`a(0)=l(0)`$. As a consequence, for a given pair $`(n,d)`$, we have only $`\lambda `$ as the parameter left free to evaluate both the actual age of the Universe $`\tau _0`$ and the ratios $`a(\tau )/a_0`$ and $`l(\tau )/l_0`$.
## 3 Numerical results for eleven dimensions
We shall limit ourselves to the most popular choice of eleven dimensions and so fix the value $`d=7`$. To begin with, let us examine the values of $`\tau _0=H_0t_0`$ which can be obtained in our model when the adimensional cosmological constant $`\lambda `$ varies in the interval $`(-1,1)`$. As to the Hubble constant $`H_0`$, it is common practice to write $`H_0=100\eta \,km\,s^{-1}Mpc^{-1}`$ where the uncertainty on it is put into the constant $`\eta `$, whose present value ranges from $`0.50`$ to $`0.85`$. Thus a characteristic time scale for the expansion of the Universe is the Hubble time $`1/H_0=(9.8/\eta )\,Gyr`$. If we look at the graph of Figure $`1`$, where the above stated restriction on the negative values of $`\lambda `$ is apparent only for $`n=1`$, we can see that $`\tau _0`$ can exceed unity, as one expects, only if $`n=1`$ and $`\lambda <0`$. Due however to the simplicity of our model, this fact does not seem a serious drawback, and all other combinations of $`n`$ and $`\lambda `$ cannot, in our opinion, be ruled out. The sign of $`\lambda `$ and the value of $`n`$ appear to be of great importance for the time evolution of the radii $`a(\tau )`$ and $`l(\tau )`$, as shown in Figures 2 to 7. We can summarize the various behaviours as follows (a numerical sketch evaluating these expressions is given at the end of this Section): 1) When $`\lambda >0`$, $`a(\tau )`$ is always increasing to infinity with time, and so is $`l(\tau )`$, apart from the cases $`n=2,3`$ where $`l(\tau )`$ is initially decreasing in a finite time interval.
2) When $`\lambda =0`$, $`a(\tau )`$ is always increasing to infinity with time, while $`l(\tau )`$ is constant if $`n=1`$ and decreasing to zero if $`n=2,3`$. 3) When $`\lambda <0`$, $`a(\tau )`$ increases to infinity and $`l(\tau )`$ decreases to zero until $`\tau `$ reaches the finite value $`\tau =\tau _0+(\pi /2-\delta _<)/\omega `$; whether this is a final state or a new initial state of the Universe is a question we leave open. Let us notice, as one can see from Figures 3, 5 and 7, that the rate of variation of the Planck length is not so dramatic in the range of times considered, and in any case still compatible with the experimental bounds due to the possible time variation of the fundamental constants involved in its definition.
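The quantities displayed in Figs. 1-7 follow from the closed-form expressions alone. A minimal sketch for the $`\lambda >0`$ branch (Python; names are ours, and the ratio $`a_0/l_0`$, of the order of the size of the Universe over the Planck length, is an assumed illustrative input):

```python
import numpy as np

def age_tau0(n, lam, d=7, a0_over_l0=1e61):
    """tau0 = H0*t0 from Eq. (17), lambda > 0 branch, with a(0) = l(0)."""
    h = -n/(d - 1) + np.sqrt(n*(n + d - 1)/(d*(d - 1)**2) + 2*lam/(d*(d - 1)))  # Eq. (6)
    om = np.sqrt((n + d)*lam/(2*(n + d - 1)))                                   # Eq. (9)
    delta = 0.5*np.arctanh(2*om/(n + d*h))                                      # Eq. (10)
    alpha = np.sqrt(n*d/(n + d - 1.0))                  # from beta_+ - gamma_- = 1/alpha
    rho = a0_over_l0**(-alpha)                          # Eq. (16) with a(0) = l(0)
    return (delta - np.arctanh(rho*np.tanh(delta)))/om  # Eq. (17)
```

Since $`\rho `$ is astronomically small here, $`\tau _0\approx \delta _>/\omega `$ to excellent accuracy; the $`\lambda <0`$ branch is obtained by replacing the hyperbolic functions with trigonometric ones.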
no-problem/9907/astro-ph9907145.html
ar5iv
text
# Astrometric radial velocities ## 1 Introduction For well over a century, radial velocities for objects outside the solar system have been determined through spectroscopy, using the (Doppler) shifts of stellar spectral lines. The advent of high-accuracy (sub-milliarcsec) astrometric measurements, both on ground and in space, now permits radial velocities to be obtained by alternative methods, based on geometric principles and therefore independent of spectroscopy. The importance of such astrometric radial velocities stems from the fact that they are independent of phenomena which affect the spectroscopic method, such as line asymmetries and shifts caused by atmospheric pulsation, surface convection, stellar rotation, stellar winds, isotopic composition, pressure, and gravitational potential. Conversely, the differences between spectroscopic and astrometric radial velocities may provide information on these phenomena that cannot be obtained by other methods. Although the theoretical possibility of deducing astrometric radial velocities from geometric projection effects was noted already at the beginning of the 20th century (if not earlier), it is only recently that such methods have reached an accuracy level permitting non-trivial comparison with spectroscopic measurements. We have analysed three methods by which astrometric radial velocities can be determined (Fig. 1). Two of them are applicable to individual, nearby stars and are based on the well understood secular changes in the stellar trigonometric parallax and proper motion. The third method uses the apparent changes in the geometry of a star cluster or association to derive its kinematic parameters, assuming that the member stars share, in the mean, a common space velocity. In Sects. 4 to 6 we describe the principle and underlying assumptions of each of the three methods and derive approximate formulae for the expected accuracy of resulting astrometric radial velocities. For the first and second methods, an inventory of nearby potential target stars is made, and the second method is applied to several of these. However, given currently available astrometric data, only the third (moving-cluster) method is capable of yielding astrophysically interesting, sub-km s<sup>-1</sup> accuracy. In subsequent papers we develop in detail the theory of this method, based on the maximum-likelihood principle, as well as its practical implementation, and apply it to a number of nearby open clusters and associations, using data from the Hipparcos astrometry satellite. ## 2 Notations In the following sections, $`\pi `$, $`\mu `$ and $`v_r`$ denote the trigonometric parallax of a star, its (total) proper motion, and its radial velocity. The components of $`\mu `$ in right ascension and declination are denoted $`\mu _\alpha `$ and $`\mu _\delta `$, with $`\mu =(\mu _\alpha ^2+\mu _\delta ^2)^{1/2}`$. The dot signifies a time derivative, as in $`\dot{\pi }\mathrm{d}\pi /\mathrm{d}t`$. The statistical uncertainty (standard error) of a quantity $`x`$ is denoted $`ϵ(x)`$. (We prefer this non-standard notation to $`ϵ_x`$, since $`x`$ is itself often a subscripted variable.) $`\sigma _v`$ is used for the physical velocity dispersion in a cluster. $`A=1.49598\times 10^8`$ km is the astronomical unit; the equivalent values $`4.74047`$ km yr s<sup>-1</sup> and $`9.77792\times 10^8`$ mas km yr s<sup>-1</sup> are conveniently used in equations below (cf. Table 1.2.2 in Vol. 1 of ESA esa (1997)). Other notations are explained as they are introduced. 
## 3 Astrometric accuracies In estimating the potential accuracy of the different methods, we consider three hypothetical situations: * Case A: a quasi-continuous series of observations over a few years, resulting in an accuracy of $`ϵ(\pi )=1`$ mas (milliarcsec) for the trigonometric parallaxes and $`ϵ(\mu )=1`$ mas yr<sup>-1</sup> for the proper motions. * Case B: similar to Case A, only a thousand times better, i.e. $`ϵ(\pi )=1`$ $`\mu `$as (microarcsec) and $`ϵ(\mu )=1`$ $`\mu `$as yr<sup>-1</sup>. * Case C: two sets of measurements, separated by an interval of 50 yr, where each set has the same accuracy as in Case B. The much longer time baseline obviously allows a much improved determination of the accumulated changes in parallax and proper motion. The accuracies assumed in Case A are close to what the Hipparcos space astrometry mission (ESA esa (1997)) achieved for its main observation programme of more than 100 000 stars. Current ground-based proper motions may be slightly better than this, but not by a large factor. This case therefore represents, more or less, the state-of-the-art accuracy in optical astrometry. Accuracies in the 1 to 10 $`\mu `$as range are envisaged for some planned or projected space astrometry missions, such as GAIA (Lindegren & Perryman lindegren96 (1996)) and SIM (Unwin et al. unwin (1998)). The duration of such a mission is here assumed to be about 5 years. Using the longer time baselines available from the ground, similar performance may in the future be reached with the most accurate ground-based techniques (Pravdo & Shaklan pravdo (1996); Shao shao (1996)). Case B therefore corresponds to what we could realistically hope for within one or two decades. Case C, finally, probably represents an upper limit to what is practically feasible in terms of long-term proper-motion accuracy, not to mention the patience of astronomers. ## 4 Radial velocity from changing annual parallax The most direct and model-independent way to determine radial velocity by astrometry is to measure the secular change in the trigonometric parallax (Fig. 1a). The distance $`b`$ (from the solar system barycentre) is related to the parallax $`\pi `$ through $`b=A/\mathrm{sin}\pi \simeq A/\pi `$. Since $`v_r=\dot{b}`$, the radial velocity is $$v_r=-A\frac{\dot{\pi }}{\pi ^2},$$ (1) where $`A`$ is the astronomical unit (Sect. 2). The equivalent of Eq. (1) was derived by Schlesinger (schlesinger (1917)), who concluded that the parallax change is very small for every known star. However, although extremely accurate parallax measurements are obviously required, the method is not as unrealistic as it may seem at first. To take a specific, if extreme, example: for Barnard’s star (Gl 699 = HIP 87937), with $`\pi =549`$ mas and $`v_r=-110`$ km s<sup>-1</sup>, the expected parallax rate is $`\dot{\pi }=+34\mu `$as yr<sup>-1</sup>. According to our discussion in Sect. 3 this will almost certainly be measurable in the near future. It can be noted that the changing-parallax method, in contrast to the methods described in Sects. 5 and 6, does not depend on the object having a large and uniform space motion, and would therefore be applicable to all stars within a few parsecs of the Sun. ### 4.1 Achievable accuracy The accuracy in $`v_r`$ is readily estimated from Eq. (1) for a given accuracy in $`\dot{\pi }`$, since the contribution of the parallax uncertainty to the factor $`A/\pi ^2`$ is negligible.
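The Barnard's star figure quoted above follows directly from Eq. (1), and the same relation shows how quickly the method degrades with distance; the sketch below is ours, with the $`ϵ(\dot{\pi })=1`$ $`\mu `$as yr<sup>-1</sup> error level an assumed, Case B-like value.

```python
# Eq. (1): pi_dot = -v_r * pi^2 / A, with A from Sect. 2.
A = 9.77792e8                  # [mas km yr s^-1]
pi_mas, v_r = 549.0, -110.0    # Barnard's star values quoted in the text
print(-v_r * pi_mas**2 / A * 1e3)   # ~ +33.9 microarcsec/yr

# Error propagation: eps(v_r) = A * eps(pi_dot) / pi^2 grows as distance^2.
eps_pi_dot = 1e-3              # assumed 1 microarcsec/yr, in mas/yr
for dist_pc in (1.0, 5.0, 10.0):
    pi = 1e3 / dist_pc         # parallax [mas]
    print(dist_pc, A * eps_pi_dot / pi**2)   # ~0.98, ~24, ~98 km/s
```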
The achievable accuracy in $`\dot{\pi }`$ depends both on the individual astrometric measurements and on their number and distribution in time. Concerning the temporal distribution of the measurements we consider two limiting situations: Quasi-continuous observation. The measurements are more or less uniformly spread out over a time period of length $`L`$ centred on the epoch $`t_0`$. This is a good approximation to the way a single space mission would typically be operated; for example, Hipparcos had $`L\simeq 3`$ yr and $`t_0\simeq 1991.25`$. In such a case there exist simple (mean) relations between how accurately the different astrometric parameters of the same star can be derived, depending on $`L`$. For instance, $`ϵ(\mu _\delta )\simeq (\sqrt{12}/L)ϵ(\delta )`$, if $`ϵ(\mu _\delta )`$ is the accuracy of the proper motion in declination and $`ϵ(\delta )`$ that of the declination at $`t_0`$. This approximation is applicable to Cases A and B as defined in Sect. 3. Two-epoch observation. Two isolated parallax or proper-motion measurements are taken, separated by a long time interval (say, $`T`$ years) during which no observation takes place. Each measurement must actually be the result of a series covering at least a year or so, but the duration of each such series is assumed to be negligible compared with $`T`$. This could be two similar space missions separated by several decades and is applicable to Case C in Sect. 3. For quasi-continuous observation we may assume that the parallax variation is linear over the observation period $`L`$. Thus, $`\pi (t)=\pi _0+(t-t_0)\dot{\pi }`$, where $`\pi _0`$ and $`\dot{\pi }`$ are two parameters to be determined from the observations. There exists an approximate relation between the accuracies of these two parameters that is similar to that between the proper motion and the position at the mean epoch, viz. $`ϵ(\dot{\pi })=(\sqrt{12}/L)ϵ(\pi _0)`$. Moreover, the estimates of the two parameters are uncorrelated, so $`ϵ(\pi _0)`$ equals the accuracy $`ϵ(\pi )`$ of a parallax determination in the absence of the parallax-change term; thus $$ϵ(v_r)\simeq \sqrt{12}A\frac{ϵ(\pi )}{L\pi ^2}.$$ (2) In the case of a two-epoch observation, let us assume that independent parallax measurements $`\pi _1`$ and $`\pi _2`$ are made at epochs $`t_1`$ and $`t_2=t_1+T`$. The estimated rate of change is $`\dot{\pi }=(\pi _2-\pi _1)/T`$. With $`ϵ(\pi _1)`$ and $`ϵ(\pi _2)`$ denoting the accuracies of the two measurements, we have $`ϵ(\dot{\pi })=[ϵ(\pi _1)^2+ϵ(\pi _2)^2]^{1/2}/T`$ and consequently $$ϵ(v_r)\simeq \frac{A}{T\pi ^2}\left[ϵ(\pi _1)^2+ϵ(\pi _2)^2\right]^{1/2}.$$ (3) For given observational errors we find, from both Eqs. (2) and (3), that the radial-velocity error is simply a function of distance. The number of potential target stars for a certain maximum radial-velocity uncertainty is therefore given by the total number of stars within the corresponding maximum distance. Table 1 gives the actual numbers of such stars, and the observational accuracies that may be reached. ## 5 Radial velocity from changing proper motion (perspective acceleration) To a good approximation, single stars move with uniform linear velocity through space. For a given linear tangential velocity, the angular velocity (or proper motion $`\mu `$), as seen from the Sun, varies inversely with the distance to the object. However, the tangential velocity changes due to the varying angle between the line of sight and the space-velocity vector (Fig. 1b). As is well known (e.g.
van de Kamp vdkamp67 (1967), Murray murray (1983)) the two effects combine to produce an apparent (perspective) acceleration of the motion on the sky, or a rate of change in proper motion amounting to $`\dot{\mu }=-2\mu v_r/b`$. With $`b=A/\pi `$ we find $$v_r=-A\frac{\dot{\mu }}{2\pi \mu }.$$ (4) Schlesinger (schlesinger (1917)) derived the equivalent of this equation, calculated the perspective acceleration for Kapteyn’s and Barnard’s stars (cf. Table 2) and noted that, if accurate positions are acquired over long periods of time, “we shall be in position to determine the radial velocities of these stars independently of the spectroscope and with an excellent degree of precision”. The equation for the perspective acceleration was earlier derived by Seeliger (seeliger (1901))<sup>1</sup> (<sup>1</sup>Some remarks in the literature, e.g. by Ristenpart (ristenpart (1902)) and Lundmark & Luyten (lundmark (1922)), seem to suggest that the perspective acceleration was discovered by Bessel (bessel44 (1844)). However, as far as we can determine, Bessel only discussed proper-motion changes caused by gravitational perturbations, explicitly neglecting terms depending on the radial motion.) and used by Ristenpart (ristenpart (1902)) in an (unsuccessful) attempt to determine $`\dot{\mu }`$ observationally for Groombridge 1830. A major consideration for Ristenpart seems to have been the possibility to derive the parallax from the apparent acceleration in combination with a spectroscopic radial velocity. Such a determination of ‘acceleration parallaxes’ was also considered by Eichhorn (eichhorn81 (1981)). Subsequent attempts to determine the perspective acceleration of Barnard’s star by Lundmark & Luyten (lundmark (1922)), Alden (alden (1924)) and van de Kamp (1935b ) yielded results that were only barely significant or (in retrospect) spurious. Meanwhile, Russell & Atkinson (russell (1931)) suggested that the white dwarf van Maanen 2 might exhibit a gravitational redshift of several hundred km s<sup>-1</sup> and that this could be distinguished from a real radial velocity through measurement of the perspective acceleration. The astrophysical relevance of astrometric radial-velocity determinations was thus already established (Oort oort (1932)). In relatively recent times, the perspective acceleration was successfully determined for Barnard’s star by van de Kamp (vdkamp62 (1962), vdkamp63 (1963), vdkamp67 (1967), vdkamp70 (1970), vdkamp81 (1981)); for van Maanen 2 by van de Kamp (vdkamp71 (1971)), Gatewood & Russell (gatewood (1974)) and Hershey (hershey (1978)); and for Groombridge 1830 by Beardsley et al. (beardsley (1974)). Among these determinations the highest precisions, in terms of the astrometric radial velocity, were obtained for Barnard’s star (corresponding to $`\pm 4`$ km s<sup>-1</sup>; van de Kamp vdkamp81 (1981)) and van Maanen 2 ($`\pm 15`$ km s<sup>-1</sup>; Gatewood & Russell gatewood (1974)). Our application of the method, combining Hipparcos measurements with data in the Astrographic Catalogue, yielded radial velocities for 16 objects, as listed in Table 3. ### 5.1 Achievable accuracy The accuracy of the radial velocity calculated from Eq. (4) can be estimated as in Sect. 4.1. It depends on the parallax–proper-motion product $`\pi \mu `$. The most promising targets for this method are listed in Table 2, which contains the known nearby stars ranked by decreasing $`\pi \mu `$.
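For orientation, Eq. (4) can be inverted to give the size of the perspective acceleration itself; the sketch below is ours, using the Barnard's star parallax and radial velocity quoted in Sect. 4 together with its familiar catalogue proper motion of about 10.4 arcsec yr<sup>-1</sup> (our assumed value).

```python
# Eq. (4) inverted: mu_dot = -2 * mu * v_r * pi / A.
A = 9.77792e5                        # [arcsec km yr s^-1]
mu, v_r, pi = 10.36, -110.0, 0.549   # arcsec/yr, km/s, arcsec (assumed mu)
mu_dot = -2.0 * mu * v_r * pi / A
print(mu_dot * 1e3)                  # ~ +1.28 mas/yr^2
```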
For quasi-continuous observation during a period of length $`L`$ we may use a quadratic model for the angular position $`\varphi `$ of the star along the great-circle arc: $`\varphi (t)=\varphi (t_0)+(t-t_0)\mu _0+{\scriptscriptstyle \frac{1}{2}}(t-t_0)^2\dot{\mu }`$. Here $`\mu _0`$ is the proper motion at the central epoch $`t_0`$. The estimates of $`\mu _0`$ and $`\dot{\mu }`$ are found to be uncorrelated and their errors related by $`ϵ(\dot{\mu })=(\sqrt{60}/L)ϵ(\mu _0)`$. Consequently, $$ϵ(v_r)\simeq \sqrt{15}A\frac{ϵ(\mu )}{L\pi \mu }$$ (5) where $`ϵ(\mu )`$ is the accuracy of proper-motion measurements in the absence of temporal changes. We neglect the (small) contribution to $`ϵ(v_r)`$ from the uncertainty in the denominator $`\pi \mu `$. For a two-epoch observation, consider proper-motion measurements $`\mu _1`$ and $`\mu _2`$ made around $`t_1`$ and $`t_2=t_1+T`$. The estimated acceleration is $`\dot{\mu }=(\mu _2-\mu _1)/T`$. Provided the two observation intervals centred on $`t_1`$ and $`t_2`$ do not overlap, the measurements are independent, yielding the standard error $`ϵ(\dot{\mu })=[ϵ(\mu _1)^2+ϵ(\mu _2)^2]^{1/2}/T`$. For the radial velocity this gives $$ϵ(v_r)\simeq \frac{A}{T\pi \mu }\left[ϵ(\mu _1)^2+ϵ(\mu _2)^2\right]^{1/2}.$$ (6) Based on these formulae, Table 2 gives the potential radial-velocity accuracy for the two Cases B and C defined in Sect. 3. In a two-epoch observation we normally have, in addition, a very good estimate of the mean proper motion between $`t_1`$ and $`t_2`$, provided the positions $`\varphi _1`$ and $`\varphi _2`$ at these epochs are accurately known. In the previous quadratic model we may take the reference epoch to be $`t_0=(t_1+t_2)/2`$ and find $`\mu _0=(\varphi _2-\varphi _1)/T`$ with standard error $`ϵ(\mu _0)=[ϵ(\varphi _1)^2+ϵ(\varphi _2)^2]^{1/2}/T`$. The three proper-motion estimates $`\mu _0`$, $`\mu _1`$ and $`\mu _2`$ (referred to $`t_0`$, $`t_1`$ and $`t_2`$) are mutually independent and may be combined in a least-squares estimate of $`\dot{\mu }`$. If $`ϵ(\mu _1)=ϵ(\mu _2)`$ (equal weight at $`t_1`$ and $`t_2`$), then it is found that $`\mu _0`$ does not contribute at all to the determination of $`\dot{\mu }`$, and the standard error is still given by Eq. (6). If, on the other hand, the two observation epochs are not equivalent, then some improvement can be expected by introducing the position measurements. An important special case is when there is just a position (no proper motion) determined at one of the epochs, say $`t_1`$. This is however equivalent to the two independent proper-motion determinations $`\mu _0`$ at $`t_0=(t_1+t_2)/2`$, and $`\mu _2`$ at $`t_2`$, separated by $`t_2-t_0=T/2`$. Applying Eq. (6) to this case yields $$ϵ(v_r)\simeq \frac{2A}{T\pi \mu }\left[\frac{ϵ(\varphi _1)^2+ϵ(\varphi _2)^2}{T^2}+ϵ(\mu _2)^2\right]^{1/2}.$$ (7) This formula is applicable to the combination of a recent position and proper-motion measurement (e.g. by Hipparcos) with a position derived from old photographic plates (e.g. the Astrographic Catalogue). Taking $`t_1\simeq 1907`$, $`ϵ(\varphi _1)\simeq 200`$ mas as representative for the Astrographic Catalogue, and $`t_2=1991.25`$, $`ϵ(\varphi _2)\simeq 1`$ mas, $`ϵ(\mu _2)\simeq 1`$ mas yr<sup>-1</sup> for Hipparcos, we find $`ϵ(v_r)\simeq (60\text{ km s}^{-1}\text{ arcsec}^2\text{ yr}^{-1})/(\pi \mu )`$. With such data, moderate accuracies of a few tens of km s<sup>-1</sup> can be reached for several stars (Sect. 5.3).
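The numerical coefficient quoted for the Astrographic Catalogue plus Hipparcos combination follows directly from Eq. (7); the sketch below is ours and, anticipating Sect. 5.2, also re-derives the quoted gravitational-bias values using familiar catalogue proper motions (about 10.4 and 3.85 arcsec yr<sup>-1</sup> for Barnard's star and Proxima; our assumed inputs).

```python
from math import sqrt

A = 9.77792e5                               # [arcsec km yr s^-1]
T = 1991.25 - 1907.0
eps1, eps2, eps_mu2 = 0.200, 0.001, 0.001   # arcsec, arcsec, arcsec/yr
coeff = (2*A/T) * sqrt((eps1**2 + eps2**2)/T**2 + eps_mu2**2)
print(coeff)                                # ~60 km s^-1 arcsec^2 yr^-1

# Bias g_t/(2*mu) for an observer unaffected by the Galactic acceleration:
g, year_s, rad = 2e-13, 3.15576e7, 1/206264.8
for name, mu in (("Barnard", 10.36), ("Proxima", 3.85)):
    print(name, g / (2 * mu * rad / year_s))   # ~0.06, ~0.17 km/s
```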
### 5.2 Effects of gravitational perturbations The perspective-acceleration method depends critically on the assumption that the star moves with uniform space motion relative to the observer. The presence of a real acceleration of their relative motions, caused by the gravitational action of other bodies, would bias the calculated astrometric radial velocity by $`g_t/(2\mu )`$, where $`g_t`$ is the tangential component of the relative acceleration. The acceleration towards the Galactic centre caused by the smoothed Galactic potential in the vicinity of the Sun is $`g\simeq 2\times 10^{-13}`$ km s<sup>-2</sup>. For a hypothetical observer near the Sun but unaffected by this acceleration, the maximum bias would be 0.06 km s<sup>-1</sup> for Barnard’s star, and 0.17 km s<sup>-1</sup> for Proxima. However, since real observations are made relative to the solar-system barycentre, which is itself accelerated in the Galactic gravitational field, the observed (differential) effect will be very much smaller. In the case of Proxima the acceleration towards $`\alpha `$ Cen AB is of a similar magnitude to the Galactic acceleration. For the several orbital binaries in Table 2 the curvature of the orbit is much greater than the perspective acceleration. Application of this method will therefore require careful correction for all known perturbations: the possible presence of long-period companions may introduce a considerable uncertainty. Among other effects which may have to be considered are light-time effects which, to first order in $`c^{-1}`$, may require a correction of $`v_t^2/(2c)`$ on the right-hand side of Eq. (4), where $`v_t`$ is the tangential velocity. For typical high-velocity (Population II) stars the correction is 0.1–0.2 km s<sup>-1</sup>. At this accuracy level, the precise definition of the radial-velocity concept itself requires careful consideration (Lindegren et al. lindegren99 (1999)). ### 5.3 Results from observed proper-motion changes Past determinations of the perspective acceleration, e.g. by van de Kamp (vdkamp81 (1981)) and Gatewood & Russell (gatewood (1974)), were based on photographic observations collected over several decades, in which the motion of the target star was measured relative to several background (reference) stars. One difficulty with the method has been that the positions and motions of the reference stars are themselves not accurately known, and that small errors in the reference data could cause a spurious acceleration of the target star (van de Kamp 1935a ). The Hipparcos Catalogue (ESA esa (1997)) established a very accurate and homogeneous positional reference frame over the whole sky. Using the proper motions, this reference frame can be extrapolated backwards in time. It is then possible to re-reduce measurements of old photographic plates, and express even century-old stellar positions in the same reference frame as modern observations. This should greatly facilitate the determination of effects such as the perspective acceleration, which are sensitive to systematic errors in the reference frame. As part of the Carte du Ciel project begun more than a century ago, an astrographic programme to measure the positions of all stars down to the 11th magnitude was carried out and published as the Astrographic Catalogue, AC (see Eichhorn eichhorn74 (1974) for a description). After transfer to electronic media, the position measurements have been reduced to the Hipparcos reference frame (Nesterov et al. nesterov (1998), Urban et al. urban (1998)).
The result is a positional catalogue of more than 4 million stars with a mean epoch around 1907 and a typical accuracy of about 200 mas. We have used the version known as AC 2000 (Urban et al. urban (1998)), available on CD-ROM from the US Naval Observatory, to examine the old positions of all the stars with HIP identifiers in Table 2. For the stars in Table 3 we successfully matched the AC positions with the positions extrapolated backwards from the Hipparcos Catalogue and hence could calculate the astrometric radial velocities. Other potential targets in Table 2 were either outside the magnitude range of AC 2000 (e.g. $`\alpha `$ Cen and Proxima) or lacked an accurate proper motion from Hipparcos (e.g. van Maanen 2 and HIP 91768+91772). The basic procedure was as follows. The rigorous epoch transformation algorithm described in Sect. 1.5.5 of the Hipparcos Catalogue, Vol. 1, was used to propagate the Hipparcos position and its covariance matrix to the AC 2000 epoch relevant for each star. This extrapolated position was compared with the actual measured position in AC 2000, assuming a standard error of 200 mas in each coordinate for the latter. A $`\chi ^2`$ goodness-of-fit was then calculated from the position difference and the combined covariance of the extrapolated and measured positions. The epoch transformation algorithm requires that the radial velocity is known. The radial velocity was therefore varied until the $`\chi ^2`$ attained its minimum value. The $`\pm `$1$`\sigma `$ confidence interval given in the table was obtained by modifying the radial velocity until the $`\chi ^2`$ had increased by one unit above the minimum. For some of the stars, data had to be corrected for duplicity or known orbital motion. The solutions for the resolved binary 61 Cyg (HIP 104214+104217) refer to the mass centre, assuming a mass ratio of $`M_\mathrm{B}/M_\mathrm{A}=0.90`$, as estimated by means of standard isochrones from the absolute magnitudes and colour indices of the components (Söderhjelm, private communication). For the astrometric binary $`\mu `$ Cas (HIP 5336) the Hipparcos data explicitly refer to the mass centre using the orbit by Heintz & Cantor (heintz (1994)); the same orbit was used to correct the AC position of the primary to the mass centre. No correction for orbital motion was used for GX And and 40 Eri. Table 3 gives two solutions for 61 Cyg. The first solution was obtained as described above, using only the Hipparcos data plus the AC positions for the two components. The second solution, marked with an asterisk in the table, was derived by including also the observations by Bessel (bessel39 (1839)) from his pioneering determination of the star’s parallax. Bessel measured the angular distances from the geometrical centre (half-way between the components) of 61 Cyg to two reference stars, called $`a`$ and $`b`$ in his paper. After elimination of aberration, proper motion and parallax, he found the distances $`461.6171\pm 0.015`$ arcsec and $`706.2791\pm 0.017`$ arcsec for the beginning of year 1838 (B1838.0 = J1838.0022). The uncertainties are our estimates (standard errors) based on the scatter of the residuals in Bessel’s solution ‘II’. We identified the reference stars in AC 2000 and in the Tycho Catalogue (ESA esa (1997)) as $`a`$ = AC 1382543 = TYC 3168 708 1 and $`b`$ = AC 1382712 = TYC 3168 1106 1. Extrapolating the positions from these catalogues back to B1838 allowed us to compute the position of the geometrical centre of 61 Cyg in the Hipparcos/Tycho reference frame. 
This could then be transformed to the position of the mass centre, using Bessel’s own measurement of the separation and position angle in 61 Cyg and the previously assumed mass ratio. Actually, all the available data were combined into a $`\chi ^2`$ goodness-of-fit measure and the radial velocity was varied in order to find the minimum and the $`\pm `$1$`\sigma `$ confidence interval. This gave $`v_r=-68.0\pm 11.1`$ km s<sup>-1</sup>. Table 3 also gives the spectroscopic radial velocities when available in the literature. A comparison between the astrometric and spectroscopic radial velocities is made in Fig. 2. Given the stated confidence intervals, the agreement is in all cases rather satisfactory. The exercise demonstrates the basic feasibility of this method, but also hints at some of the difficulties in applying it to non-single stars. ## 6 Radial velocity from changing angular extent (moving-cluster method) The moving-cluster method is based on the assumption that the stars in a cluster move through space essentially with a common velocity vector. The radial-velocity component makes the cluster appear to contract or expand due to its changing distance (Fig. 1c). The relative rate of apparent contraction equals the relative rate of change in distance to the cluster. This can be converted to a linear velocity (in km s<sup>-1</sup>) if the distance to the cluster is known, e.g. from trigonometric parallaxes. In practice, the method amounts to determining the space velocity of the cluster, i.e. the convergent point and the speed of motion, through a combination of proper motion and parallax data. Once the space velocity is known, the radial velocity for any member star may be calculated by projecting the velocity vector onto the line of sight. The method can be regarded as an inversion of the classical procedure (e.g. Binney & Merrifield binney (1998)) by which the distances to the stars in a moving cluster are derived from the proper motions and (spectroscopic) radial velocities: if instead the distances are known, the radial velocities follow. The first application of the classical moving-cluster method for distance determination was by Klinkerfues (klinkerfues (1873)), in a study of the Ursa Major system. The possibility to check spectroscopic radial velocities against astrometric data was recognised by Klinkerfues, but could not then be applied to the Ursa Major cluster due to the lack of reliable trigonometric parallaxes. This changed with Hertzsprung’s (hertzsprung (1909)) discovery that Sirius probably belongs to the Ursa Major moving group. The relatively large and well-determined parallax of Sirius, combined with its considerable angular distance from the cluster apex, could lead to a meaningful estimate for the cluster velocity and hence for the radial velocities. Rasmuson (rasmuson (1921)) and Smart (smart (1939)) appear to have been among the first who actually made this computation, although mainly as a means of verifying the cluster method for distance determination. Later studies by Petrie (petrie49 (1949)) and Petrie & Moyls (petrie53 (1953)) reached formal errors in the astrometric radial velocities below 1 km s<sup>-1</sup>.
The last paper concluded “There does not appear to be much likelihood of improving the present results until a substantial improvement in the accuracy of the trigonometric parallaxes becomes possible.” One of the purposes of the Petrie & Moyls study was to derive the astrometric radial velocities of stars of spectral type A in order to check the Victoria system of spectroscopic velocities. The method was also applied to the Hyades (Petrie petrie63 (1963)) but only with an uncertainty of a few km s<sup>-1</sup>. Given the expected future availability of more accurate proper motions and trigonometric parallaxes, Petrie (petrie62 (1962)) envisaged that one or two moving clusters could eventually be used as primary radial-velocity standards for early-type spectra. Such astrometric data are now in fact available. In Sect. 6.1 we derive a rough estimate of the accuracy of the method and survey nearby clusters and associations in order to find promising targets for its application. An important consideration is to what extent systematic velocity patterns in the cluster, in particular cluster expansion, will limit the achievable accuracy. This is discussed in Sect. 6.2 and Appendix A. In Sect. 6.3 we briefly consider the improvement in the distance estimates for individual stars resulting from the moving-cluster method. The present discussion of the moving-cluster method is only intended to highlight its theoretical potential and limitations. Its actual application requires a more rigorous formulation, which is developed in a second paper. ### 6.1 Potential accuracy The accuracy of the astrometric radial velocity potentially achievable by the moving-cluster method can be estimated as follows. Let $`b`$ be the (mean) distance to the cluster and consider a star at angular distance $`\rho `$ from the centre of the cluster, as seen from the Sun. The projected linear distance of the star from the centre of the cluster is $`b\mathrm{sin}\rho \simeq b\rho `$, provided the angular extent of the cluster is not very large. As the cluster moves through space, its linear dimensions remain constant, so that $`\dot{\rho }b+\rho \dot{b}=0`$. Putting $`\dot{\rho }=\mu `$ (the proper motion relative to the cluster centre), $`\dot{b}=v_r`$, and $`b=A/\pi `$, gives $`v_r=-A\mu /(\rho \pi )`$. Now suppose that the parallaxes and proper motions of $`n`$ cluster stars are measured, each with uncertainties of $`ϵ(\pi )`$ and $`ϵ(\mu )`$. Standard error propagation formulae give the expected accuracy in $`v_r`$ as $$ϵ(v_r)\simeq A\frac{ϵ(\mu )}{\rho _{\mathrm{rms}}\pi \sqrt{n}}\left[1+\left(\frac{v_r\rho _{\mathrm{rms}}ϵ(\pi )}{Aϵ(\mu )}\right)^2\right]^{1/2}$$ (8) where $`\rho _{\mathrm{rms}}`$ is in radians; $`A`$ is the astronomical unit (Sect. 2). The expression within the square brackets derives from the uncertainty in the mean cluster distance, by which the derived radial velocity scales. For the type of (space) astrometry data considered here (Cases A and B), $`ϵ(\pi )/ϵ(\mu )`$ is on the order of a few years (for Hipparcos the mean ratio is $`\simeq 1.2`$ yr). The factor in brackets can then be neglected except for the most extended (and nearby) clusters. Under certain circumstances it is not the accuracy of proper-motion measurements that defines the ultimate limit on $`ϵ(v_r)`$, but rather the internal velocity dispersion among the cluster stars. Assuming isotropic dispersion with standard deviation $`\sigma _v`$ in each coordinate, one must add $`\sigma _v\pi /A`$ quadratically to the measurement error $`ϵ(\mu )`$ in Eq. (8).
Thus $`ϵ(v_{0r})`$ $`\simeq `$ $`{\displaystyle \frac{1}{\rho _{\mathrm{rms}}\sqrt{n}}}\left[\sigma _v^2+\left({\displaystyle \frac{Aϵ(\mu )}{\pi }}\right)^2\right]^{1/2}`$ (9) $`\times \left[1+\left({\displaystyle \frac{v_r\rho _{\mathrm{rms}}ϵ(\pi )}{Aϵ(\mu )}}\right)^2\right]^{1/2},`$ is the accuracy achievable for the radial velocity of the cluster centroid. For the radial velocity of an individual star this uncertainty must be increased by the internal dispersion. The internal velocity dispersion will dominate the error budget for nearby clusters, viz. if $`\pi >Aϵ(\mu )/\sigma _v`$. Assuming a velocity dispersion of 0.25 km s<sup>-1</sup> and a proper-motion accuracy of 1 mas yr<sup>-1</sup> (as for Hipparcos), this will be the case for clusters within 50 pc of the Sun. For an observational accuracy in the 1–10 $`\mu `$as yr<sup>-1</sup> range the internal dispersion will dominate in practically all Galactic clusters and Eq. (9) can be simplified to $`ϵ(v_{0r})\simeq \sigma _v/(\rho _{\mathrm{rms}}\sqrt{n})`$. In this case the achievable accuracy becomes independent of the astrometric one. Table 4 lists some nearby clusters and associations, with estimates of the achievable accuracy in the radial velocity of the cluster centroid, assuming current (Hipparcos-type) astrometric performance (Case A in Sect. 3) as well as future (microarcsec) expectations (Case B). As explained above, increasing the astrometric accuracy still further gives practically no improvement; this is why Case C is not considered in the table. The entry ‘HIP 98321’ refers to the possible association identified by Platais et al. (platais (1998)) and named after one of its members. Of dubious status, it was included as an example of the extended, low-density groups that may exist in the general stellar field, but are difficult to identify with existing data. ### 6.2 Internal velocity fields, including cluster expansion Blaauw (blaauw64 (1964)) showed that the proper motion pattern for a linearly expanding cluster is identical to the apparent convergence produced by parallel space motions. Astrometric data alone therefore cannot distinguish such expansion from a radial motion. If such an expansion exists, and is not taken into account in estimating the astrometric radial velocity, a bias will result, as examined in Appendix A (Eq. 18). The gravitationally unbound associations are known to expand on timescales comparable with their nuclear ages (de Zeeuw & Brand dezeeuw+brand (1985)). But also for a gravitationally bound open cluster some expansion can be expected as a result of the dynamical evolution of the cluster (see Mathieu mathieu85 (1985) and Wielen wielen88 (1988) for an introduction to this complex issue). In either case the inverse age of the cluster or association may be taken as a rough upper limit on the cluster’s relative expansion rate $`\kappa `$ \[yr<sup>-1</sup>\]. Eq. (18) then gives $$|\delta _{\mathrm{exp}}(v_{0r})|\lesssim 0.9778\times \frac{b_0[\mathrm{pc}]}{\mathrm{age}[\mathrm{Myr}]}\mathrm{km}\mathrm{s}^{-1}$$ (10) for the bias of a star near the centre of the cluster. (For an expanding cluster, $`\delta _{\mathrm{exp}}`$ is always negative.) Resulting values, in the last column of Table 4, are adequately small for a few nearby, relatively old clusters. In other cases the potential bias is very large and will certainly limit the applicability of the method.
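To give a feel for the size of this bias, Eq. (10) can be evaluated with $`\kappa =1/\mathrm{age}`$; the distances and ages below are illustrative round numbers chosen by us, not entries from Table 4.

```python
# Eq. (10) with kappa = 1/age: |bias| <~ 0.9778 * b0[pc] / age[Myr] km/s.
for name, b0_pc, age_myr in (("old nearby cluster", 46.0, 600.0),
                             ("young OB association", 140.0, 10.0)):
    print(name, 0.9778 * b0_pc / age_myr)   # ~0.07 vs ~14 km/s
```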
The OB associations are particularly troublesome, not only because they are young objects (implying large values of $`\kappa `$), but also because they sometimes appear to expand significantly faster than their photometric ages would suggest (de Zeeuw & Brand dezeeuw+brand (1985)). However, it should be remembered that the ultimate limitation set by cluster expansion depends on how accurately the expansion rate $`\kappa `$ can be estimated by some independent means. For instance, if $`\kappa `$ can somehow be estimated to within 10 per cent of its value, then the residual biases would still be on the sub-km s<sup>-1</sup> level for most of the objects in Table 4. Numerical simulation of the dynamical evolution of clusters might in principle provide such estimates of $`\kappa `$, as could the spectroscopic radial velocities as a function of distance. The use of spectroscopic data would not necessarily defeat the purpose of the method, i.e. to determine absolute radial velocities, since the expansion is revealed already by relative measurements. ### 6.3 Distances to the individual stars In a rigorous estimation of the space motion of a moving cluster, such as will be presented in a second paper, the distances to the individual member stars of the cluster appear as parameters to be estimated. A by-product of the method is therefore that the individual distances are improved, sometimes considerably, compared with the original trigonometric distances (Dravins et al. dravins (1997); Madsen madsen (1999)). The improvement results from a combination of the trigonometric parallax $`\pi _{\mathrm{trig}}`$ with the kinematic (secular) parallax $`\pi _{\mathrm{kin}}=A\mu /v_t`$ derived from the star’s proper motion $`\mu `$ and tangential velocity $`v_t`$, the latter obtained from the estimated space velocity vector of the cluster. The accuracy of the combined parallax estimate $`\widehat{\pi }`$ can be estimated from $`ϵ(\widehat{\pi })^{-2}=ϵ(\pi _{\mathrm{trig}})^{-2}+ϵ(\pi _{\mathrm{kin}})^{-2}`$. In calculating $`ϵ(\pi _{\mathrm{kin}})`$ we need to take into account the observational uncertainty in $`\mu `$ and the uncertainty in $`v_t`$ from the internal velocity dispersion. The result is $$ϵ(\widehat{\pi })\simeq \left[ϵ(\pi _{\mathrm{trig}})^{-2}+\frac{(v_t/A)^2}{ϵ(\mu )^2+(\pi \sigma _v/A)^2}\right]^{-1/2}.$$ (11) From the data in Table 4 we find that, in Case A, the moving-cluster method will be useful to resolve the depth structures of the Hyades cluster and of the associations Cassiopeia–Taurus, Upper Centaurus Lupus, Lower Centaurus Crux, Perseus OB3 and Upper Scorpius. In Case B, all the clusters and associations are resolved by the trigonometric parallaxes, so the kinematic parallaxes will bring virtually no improvement. Calculation of kinematic distances to stars in moving clusters is of course a classical procedure (e.g., Klinkerfues klinkerfues (1873) and van Bueren vanbueren (1952)); what is new in our treatment is that such distances are derived without recourse to spectroscopic data. ## 7 Further astrometric methods With improved astrometric data, further methods for radial-velocity determinations may become feasible. The moving-cluster method could in principle be applied to any geometrical configuration of a fixed linear size. To reach an accuracy of 1 km s<sup>-1</sup> in the astrometric radial velocity of an object at 10 pc distance requires a dimensional ‘stability’ of the order of $`10^{-7}`$ yr<sup>-1</sup>; at a distance of 1 kpc the requirement is $`10^{-9}`$ yr<sup>-1</sup>.
Since these numbers are greater than or comparable with the inverse dynamical timescales of many types of galactic objects, there is at least a theoretical chance that the method could work, given a sufficient measurement accuracy. We consider briefly two such possibilities. ### 7.1 Binary stars According to the previous argument it would be possible to ignore the relative motion of the components in a binary if the period is longer than some $`10^7`$ yr. This implies a linear separation of at least some 50 000 astronomical units, or a few degrees on the sky at 10 pc distance. In principle, then, this case is equivalent to a moving cluster with $`n=2`$ stars. In the opposite case of a (relatively) short-period binary, the radial velocity might be obtained from apparent changes of the orbit. The projected orbit will not be closed, but form a spiral on the sky: slightly diverging if the stars are approaching, slightly converging if they recede. For a system at a distance of 10 pc, say, with a component separation of 10 astronomical units, a radial velocity of 100 km s<sup>-1</sup> will change the apparent orbital radius of 1 arcsec by 10 $`\mu `$as per year. The relative positions would need to be measured during at least a significant fraction of an orbital period, or some 20 years in our example, resulting in an accumulated change by about 0.2 mas. Since only relative position measurements between the same stars are required, the observational challenges are not as severe as in some other cases. For a binary with a few arcsec separation, the isoplanatic properties of the atmosphere imply that the cross-correlation distance between the speckle images of both stars should be stable to better than one mas. Averaging very many exposures should reduce the errors into the $`\mu `$as range, with practical limits possibly set by differential refraction (McAlister mcalister (1996)). ### 7.2 Globular clusters The moving-cluster method (Sect. 6) could in principle be applied also to globular clusters. Globular clusters differ markedly from open clusters in that (potentially) many more stars could be measured, and through a much larger velocity dispersion ($`\sim 5`$ km s<sup>-1</sup>; Peterson & King peterson (1975)). The higher number of stars partly compensates the larger dispersion. However, all globular clusters are rather distant, making their angular radii small. As discussed in Sect. 6.1 the approximate formula $`ϵ(v_{0r})\simeq \sigma _v/(\rho _{\mathrm{rms}}\sqrt{n})`$ applies in the case when the internal motions are well resolved. Taking $`\rho _{\mathrm{rms}}\sim 10`$–20 arcmin as representative for the angular radii of comparatively nearby globular clusters, we find that averaging over some $`3\times 10^4`$ to $`10^5`$ member stars is needed to reach a radial-velocity accuracy comparable with $`\sigma _v`$, i.e. several km s<sup>-1</sup>. Furthermore, in view of the discussion in Sect. 6.2, it is not unlikely that the complex kinematic structures of these objects (e.g. Norris et al. norris (1997)) would bias the results. Thus, globular clusters remain difficult targets for astrometric radial-velocity determination. ## 8 Conclusions The theoretical possibility to use astrometric data (parallax and proper motion) to deduce the radial motions of stars has long been recognised. With the highly accurate (sub-milliarcsec) astrometry already available or foreseen in planned space missions, such radial-velocity determinations are now also a practical possibility.
This will have implications for the future definition of radial-velocity standards, as the range of geometrically determined accurate radial velocities, hitherto limited to the solar system and to solar-type spectra, is extended to many other stellar types represented in the solar neighbourhood. We have identified and analysed three main methods to determine astrometric radial velocities. The first method, using the changing annual parallax, is the intuitively most obvious one, but requires data of an accuracy beyond current techniques. It is nevertheless potentially interesting in view of future space missions or long-term observations from the ground. The second method, using the changing proper motion or perspective acceleration of stars, has a long history, and was previously applied to a few objects, albeit with modest precision in the resulting radial velocity. New results for a greater number of stars, obtained by combining old and modern data, were given in Table 3 and Fig. 2, thus proving the concept. However, to realise the full potential of the method again requires the accuracies of future astrometry projects. In both these methods the uncertainty in the astrometric radial velocity increases, statistically, with distance squared. They are therefore in practice limited to relatively few stars very close to the Sun and, in the second method, to stars with a large tangential velocity. In the general case, the two methods could actually be combined to yield a somewhat higher accuracy, but at least for the stars considered in Tables 1 and 2 this would only bring a marginal improvement. The third method, using the changing angular extent of a moving cluster or association, is an inversion of the classical moving-cluster method, by which the distance to the cluster was derived from its radial velocity and convergence point. If the distance is known from trigonometric parallaxes, one can instead calculate the radial velocities. It appears to be the only method by which astrophysically interesting accuracies can be obtained with existing astrometric data. In future papers we will develop and exploit this possibility in full, using data from the Hipparcos mission. A by-product of the method is that the distance estimates to individual cluster stars may be significantly improved compared with the parallax measurements. One would perhaps expect the moving-cluster method to become extremely powerful with the much more accurate data expected from future astrometry projects. Unfortunately, this is not really the case, as internal velocities (both random and systematic) become a limiting factor as soon as they are resolved by the proper-motion measurements. Nevertheless, even the limited number of clusters within reach of such determinations contain a great many stars spanning a wide range in spectral type and luminosity. ###### Acknowledgements. This project is supported by the Swedish National Space Board and the Swedish Natural Science Research Council. We thank Dr. S. Söderhjelm (Lund Observatory) for providing information on double and multiple stars, Prof. P.T. de Zeeuw and co-workers (Leiden Observatory) for advance data on nearby OB associations, and the referee, Prof. A. Blaauw, for stimulating criticism of the manuscript. 
## Appendix A Effects of internal velocities on the moving-cluster velocity estimate In this Appendix we examine how sensitive the moving-cluster method is to systematic velocity patterns in the cluster, and to what extent such patterns can be determined independently of the astrometric radial velocity. For this purpose we may ignore the random motions as well as the observational errors and we consider only a linear (first-order) velocity field. Let $`b_0`$ be the position of the cluster centroid relative to the Sun and $`s=b-b_0`$ the position of a member star relative to the centroid. The space velocity of the star is $`v=v_0+\eta `$, where $`\eta `$ is the peculiar velocity. The velocity field is described by the tensor $`T`$ such that $`\eta =Ts`$. In Cartesian coordinates the components of this tensor are simply the partial derivatives $`\partial v_\alpha /\partial b_\beta `$ for $`\alpha ,\beta =x,y,z`$. These nine numbers together describe the three components of a rigid-body rotation, three components of an anisotropic expansion or contraction, and three components of linear shear. It is intuitively clear that certain components of the linear velocity field, such as rotation about the line of sight, can be determined purely from the astrometric data. If the corresponding components of $`T`$ are included as parameters in the cluster model, they can be estimated and will not produce a systematic error (bias) in the astrometric radial velocity derived from the model fitting. Such components of $`T`$ are ‘observable’ and in principle not harmful to the method. Let us now examine more generally the extent to which $`T`$ is observable by astrometry. Suppose there exists a non-zero tensor $`T`$ such that the velocity fields $`v_0+Ts`$ and $`v_0+\mathrm{\Delta }v`$ produce identical observations for some vector $`\mathrm{\Delta }v`$. Since the cluster velocity $`v_0`$ is already a parameter of the model, the observational effects of the velocity field $`T`$ could then entirely be absorbed by adjusting $`v_0`$. In this case $`T`$ would be a non-observable component of the general velocity field. Moreover, if there exists such a component in the real velocities, then the estimated cluster velocity will have a bias equal to $`\mathrm{\Delta }v`$. We now need to calculate the effect of the arbitrary field $`T`$ on the observables. Since the parallaxes are not affected, only the proper motion vector $$\dot{r}=(I-rr^{\prime })v\pi /A$$ (12) needs to be considered. In this equation $`r`$ is the unit vector from the Sun towards the star, $`\dot{r}`$ is the rate of change of that direction, and $`I-rr^{\prime }`$ is the tensor representing projection perpendicular to $`r`$ \[$`I`$ is the unit tensor; thus $`(I-rr^{\prime })x=x-rr^{\prime }x`$ is the tangential component of the general vector $`x`$\]. With $`b=|b|=A/\pi `$ we can write $`s=rb-b_0`$. $`T`$ is non-observable if the space velocities $`v_0+Ts`$ and $`v_0+\mathrm{\Delta }v`$ produce identical tangential velocities for every star, i.e. if $$(I-rr^{\prime })[v_0+T(rb-b_0)]=(I-rr^{\prime })(v_0+\mathrm{\Delta }v)$$ (13) for all directions $`r`$ and distances $`b`$. This is equivalently written $$(I-rr^{\prime })Trb-(I-rr^{\prime })(Tb_0+\mathrm{\Delta }v)=0.$$ (14) In order that this should be satisfied for all $`b`$, it is necessary that $`(I-rr^{\prime })Tr=0`$ and $`(I-rr^{\prime })(Tb_0+\mathrm{\Delta }v)=0`$ are separately satisfied for all unit vectors $`r`$.
The latter equation implies that $$\mathrm{\Delta }v=-Tb_0.$$ (15) The former equation can be written $`Tr=rr^{\prime }Tr`$, which shows that $`r`$ is an eigenvector of $`T`$ (with eigenvalue $`r^{\prime }Tr`$). But the only tensor for which every unit vector is an eigenvector is the isotropic tensor, $`T=I\kappa `$ for the arbitrary scalar $`\kappa `$. It follows that the only non-observable component of $`T`$ is of the form $`I\kappa `$, parametrised by the single scalar $`\kappa `$, and that consequently eight linearly independent components of $`T`$ can, in principle, be determined from the astrometric observations. The non-observable field $`T=I\kappa `$ describes a uniform isotropic expansion ($`\kappa >0`$) or contraction ($`\kappa <0`$) of the cluster with respect to its centroid. These effects are observationally equivalent to an approach or recession of the cluster, i.e. to a different value of its radial velocity. $`\mathrm{\Delta }v`$ is the bias for the centroid velocity. For any given star, the bias vector $`\delta v`$ is the difference between the derived (apparent) space velocity vector $`v_0+\mathrm{\Delta }v`$ and the true vector $`v_0+Ts`$. Using $`s=b-b_0`$ and Eq. (15) we find $$\delta v=-Tb.$$ (16) The resulting bias in the astrometric radial velocity is $$\delta (v_r)=-r^{\prime }Tb.$$ (17) Isotropic expansion ($`T=I\kappa `$), in particular, gives the bias $$\delta _{\mathrm{exp}}(v_r)=-b\kappa .$$ (18) For a uniformly expanding cluster $`\kappa ^{-1}`$ equals the expansion age, i.e. the time elapsed since all the stars were confined to a very small region of space.
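The non-observability of the isotropic mode is easy to verify numerically: for $`T=I\kappa `$ the tangential velocities of $`v_0+Ts`$ and of $`v_0+\mathrm{\Delta }v`$ with $`\mathrm{\Delta }v=-Tb_0`$ coincide for every star. The sketch below is our own illustration, with arbitrary made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 0.1
b0 = np.array([10.0, 2.0, -1.0])     # centroid position (arbitrary units)
v0 = np.array([1.0, -2.0, 3.0])      # centroid velocity
T = kappa * np.eye(3)                # pure isotropic expansion

for _ in range(3):
    b = b0 + rng.normal(size=3)      # a member star's position
    r = b / np.linalg.norm(b)        # unit direction from the Sun
    P = np.eye(3) - np.outer(r, r)   # projection perpendicular to r
    true_tangential = P @ (v0 + T @ (b - b0))
    biased_tangential = P @ (v0 - T @ b0)
    print(np.allclose(true_tangential, biased_tangential))   # True
```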
no-problem/9907/gr-qc9907030.html
ar5iv
text
# Kerr-AdS and Kerr-dS solutions revisited ## Abstract We reconsider the Kerr metric with cosmological term $`\mathrm{\Lambda }`$ imposing the condition that the angular velocity $`\omega `$ of the dragging of inertial frames vanishes at spatial boundaries. Some properties of the extreme black holes in the revisited solutions are discussed. *Dipartimento di Fisica dell’Università di Genova <br>Istituto Nazionale di Fisica Nucleare, Sezione di Genova <br>Via Dodecaneso 33, 16146 Genova, Italy* PACS numbers: 04.20.Jb , 97.60.Lf With the advent of Maldacena’s conjecture of Anti de Sitter - Conformal Field Theory correspondence (AdS-CFT) , there has been a great deal of interest in studying the properties of black holes in AdS space \[2-7\], with special emphasis on the Kerr-AdS or Kerr-Newman-AdS solutions . Rotating black holes in four dimensions with asymptotic AdS behavior were first constructed by Carter many years ago . The purpose of this letter is to discuss various properties, which have not been considered in the literature before, of one of Carter’s families of Kerr vacuum solutions with cosmological term $`\mathrm{\Lambda }`$. We refer to the stationary and axisymmetric metric (family $`[A]`$ of Ref. 11) which can be written as $$\begin{array}{c}ds^2=\frac{\mathrm{\Delta }_r}{\rho ^2}\left[d\chi -\frac{a}{\mathrm{\Xi }}\mathrm{sin}^2\vartheta d\psi \right]^2-\frac{\rho ^2}{\mathrm{\Delta }_r}dr^2\hfill \\ \hfill -\frac{\rho ^2}{\mathrm{\Delta }_\vartheta }d\vartheta ^2-\frac{\mathrm{sin}^2\vartheta \mathrm{\Delta }_\vartheta }{\rho ^2}\left[ad\chi -\frac{(r^2+a^2)}{\mathrm{\Xi }}d\psi \right]^2\end{array}$$ (1) where $$\begin{array}{cc}\hfill \rho ^2& =r^2+a^2\mathrm{cos}^2\vartheta \hfill \\ \hfill \mathrm{\Delta }_r& =-\frac{\mathrm{\Lambda }}{3}r^4+(1-\frac{a^2\mathrm{\Lambda }}{3})r^2-2Mr+a^2\hfill \\ \hfill \mathrm{\Delta }_\vartheta & =1+\frac{a^2\mathrm{\Lambda }}{3}\mathrm{cos}^2\vartheta \hfill \\ \hfill \mathrm{\Xi }& =1+\frac{a^2\mathrm{\Lambda }}{3}\hfill \end{array}$$ (2) The parameter $`M`$ is related to the mass, $`a`$ to the angular momentum per unit mass, while $`\chi `$ and $`\psi `$ are two ignorable coordinates. To express $`\chi `$ and $`\psi `$ by means of the usual time and azimuthal angle coordinates $`t`$ and $`\phi `$, we use the coordinate transformations $`\chi `$ $`=\alpha t`$ (3) $`\psi `$ $`=\beta \phi +\gamma t`$ (4) where the constants $`\alpha ,\beta `$ and $`\gamma `$ are to be determined with the conditions that the angular velocity $`\omega `$ of the dragging of inertial frames must vanish when $`r`$ reaches infinity if $`\mathrm{\Lambda }<0`$ and when $`r`$ reaches the cosmological horizon if $`\mathrm{\Lambda }>0`$ ; moreover $`t`$ and $`\phi `$ will be properly normalized. We first consider the Kerr-AdS case ($`\mathrm{\Lambda }<0`$).
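Before specialising to the two signs of $`\mathrm{\Lambda }`$, it may be noted that $`\mathrm{\Delta }_r`$ above is just the familiar Kerr-(A)dS horizon polynomial in expanded form; the symbolic check below is ours.

```python
import sympy as sp

r, a, M, Lam = sp.symbols('r a M Lambda')
Delta_r = -Lam/3*r**4 + (1 - a**2*Lam/3)*r**2 - 2*M*r + a**2
factored = (r**2 + a**2)*(1 - Lam*r**2/3) - 2*M*r
print(sp.expand(factored - Delta_r))   # 0
```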
The required transformations are $`\chi `$ $`=\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\right)t`$ (5) $`\psi `$ $`=\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}\phi +{\displaystyle \frac{a\mathrm{\Lambda }}{3}}\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\right)t`$ (6) and the corresponding line element (1) in Boyer-Lindquist coordinates becomes $$\begin{array}{c}ds^2=\frac{\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)\left[\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)\mathrm{\Delta }_r-a^2\mathrm{sin}^2\vartheta \left(1-{\displaystyle \frac{\mathrm{\Lambda }r^2}{3}}\right)^2\right]}{r^2+a^2\mathrm{cos}^2\vartheta }dt^2\hfill \\ \\ \hfill -\frac{r^2+a^2\mathrm{cos}^2\vartheta }{\mathrm{\Delta }_r}dr^2-\frac{r^2+a^2\mathrm{cos}^2\vartheta }{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta }d\vartheta ^2+\frac{4Mra\mathrm{sin}^2\vartheta \left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)}{\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}(r^2+a^2\mathrm{cos}^2\vartheta )}dtd\phi \\ \\ \hfill -\frac{\mathrm{sin}^2\vartheta \left[\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r^2+a^2)^2-a^2\mathrm{sin}^2\vartheta \mathrm{\Delta }_r\right]}{\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\right)(r^2+a^2\mathrm{cos}^2\vartheta )}d\phi ^2\end{array}$$ (7) The solution is valid for $`1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}>0`$ and becomes singular when the latter quantity is zero. The event horizon is located at $`r=r_+`$, the larger of the two positive roots $`r_+`$ and $`r_{-}`$ of the polynomial $`\mathrm{\Delta }_r`$. In this letter we limit ourselves to consider some properties of extreme black holes. In the parameter plane $`(M^2\mathrm{\Lambda }/3,a^2/M^2)`$ the curve $`r_+=r_{-}`$ represents the locus of the extreme black holes, i.e. the borderline between black holes and naked singularities. The equation of this curve is obtained by requiring that $`\mathrm{\Delta }_r=\mathrm{\Delta }_r^{\prime }=0`$, where a prime denotes derivative with respect to $`r`$ and positive roots are to be considered. Putting for simplicity $`x=M^2\mathrm{\Lambda }/3,y=a^2/M^2`$, one obtains the following equation $$\begin{array}{c}x^3y^3+9x^2y^2-9xy+27x-1-9x\sqrt{8y(xy-1)+9}\hfill \\ \hfill +(xy-1)^2\sqrt{xy(xy-14)+1}=0\end{array}$$ (8) The corresponding plot is given in Fig.1 and is comprised between the “critical” values $`(-64/27,27/64)`$, which correspond to $`1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}=0`$, and the values $`(0,1)`$. The angular velocity $`\omega `$ is given by $$\omega =\frac{2Mra\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)}{\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r^2+a^2)^2-a^2\mathrm{sin}^2\vartheta \mathrm{\Delta }_r}$$ (9) One can immediately see that the angular velocity vanishes not only asymptotically, but also at the critical point defined above. A plot of $`\omega `$ as a function of the radius $`r_+`$ of the extreme black hole: $$\omega =\frac{a\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}\left(1-{\displaystyle \frac{\mathrm{\Lambda }r_+^2}{3}}\right)}{r_+^2+a^2}$$ (10) is given in Fig.2, in terms of the dimensionless quantities $`\omega ^{\prime }=M\omega `$ and $`r_+^{\prime }=r_+/M`$.
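The two endpoints just quoted can be checked numerically against Eq. (8); the script below is ours (the tiny residual at the second point is floating-point rounding).

```python
from math import sqrt

def extreme_curve(x, y):
    q = x * y
    return (q**3 + 9*q**2 - 9*q + 27*x - 1
            - 9*x*sqrt(8*y*(q - 1) + 9)
            + (q - 1)**2 * sqrt(q*(q - 14) + 1))

print(extreme_curve(0.0, 1.0))        # 0 (the Kerr limit)
print(extreme_curve(-64/27, 27/64))   # ~0 (critical point, xy = -1)
```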
The area of the horizon is $$A=4\pi \frac{r_+^2+a^2}{\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}}$$ (11) It diverges at the critical point, gets its minimum near $`r_+=M/2`$, then increases up to the value $`8\pi M^2`$ reached at $`r_+=M`$. A plot of $`A`$ as a function of $`r_+`$ is given in Fig.3, where the dimensionless quantities $`A^{\prime }=A/(8\pi M^2)`$ and $`r_+^{\prime }=r_+/M`$ are used. We notice that the minimum value of the area, which corresponds to the minimum value of the entropy, is also in correspondence with the maximum value of the angular velocity. In a similar fashion we can now treat the Kerr-dS case ($`\mathrm{\Lambda }>0`$). The coordinate transformations are $`\chi `$ $`=\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\right)t`$ (12) $`\psi `$ $`=\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}\phi +{\displaystyle \frac{a}{r_c^2+a^2}}\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\right)t`$ (13) where $`r_c`$ represents the position of the cosmological horizon and is the largest of the three positive roots of the polynomial $`\mathrm{\Delta }_r`$, the other two roots being still labelled by $`r_+`$ (the event horizon) and $`r_{-}`$ (the Cauchy horizon). The line element (1) becomes $$\begin{array}{c}ds^2=\frac{(r_c^2+a^2\mathrm{cos}^2\vartheta )^2\mathrm{\Delta }_r-a^2\mathrm{sin}^2\vartheta \left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r_c^2-r^2)^2}{(r_c^2+a^2)^2(r^2+a^2\mathrm{cos}^2\vartheta )}dt^2\hfill \\ \\ \hfill -\frac{r^2+a^2\mathrm{cos}^2\vartheta }{\mathrm{\Delta }_r}dr^2-\frac{r^2+a^2\mathrm{cos}^2\vartheta }{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta }d\vartheta ^2\\ \\ \hfill +\frac{2a\mathrm{sin}^2\vartheta \left[\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r_c^2-r^2)(r^2+a^2)-(r_c^2+a^2\mathrm{cos}^2\vartheta )\mathrm{\Delta }_r\right]}{\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}(r_c^2+a^2)(r^2+a^2\mathrm{cos}^2\vartheta )}dtd\phi \\ \\ \hfill -\frac{\mathrm{sin}^2\vartheta \left[\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r^2+a^2)^2-a^2\mathrm{sin}^2\vartheta \mathrm{\Delta }_r\right]}{\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\right)(r^2+a^2\mathrm{cos}^2\vartheta )}d\phi ^2\end{array}$$ (14) In this case the curve of the extreme black holes in the plane $`(M^2\mathrm{\Lambda }/3,a^2/M^2)`$, which is given again by Eq.($`8`$), begins at the point $`(0,1)`$ and ends at the point $`({\displaystyle \frac{16}{135+78\sqrt{3}}},{\displaystyle \frac{3(3+2\sqrt{3})}{16}})`$ where the three positive roots of the polynomial $`\mathrm{\Delta }_r`$ have the same value, equal to $`{\displaystyle \frac{(3+2\sqrt{3})M}{4}}`$; the corresponding plot is shown in Fig.1. The angular velocity $`\omega `$ is given by $$\omega =\frac{a\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}\left[\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r_c^2-r^2)(r^2+a^2)-(r_c^2+a^2\mathrm{cos}^2\vartheta )\mathrm{\Delta }_r\right]}{(r_c^2+a^2)\left[\left(1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}\mathrm{cos}^2\vartheta \right)(r^2+a^2)^2-a^2\mathrm{sin}^2\vartheta \mathrm{\Delta }_r\right]}$$ (15) and, as requested, goes to zero as $`r\rightarrow r_c`$ ; we notice that, as expected, also $`g_{tt}`$ goes to zero in this limit.
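The quoted triple-root endpoint can be verified directly on $`\mathrm{\Delta }_r`$ and its first two derivatives; the numerical check below is ours, with $`\mathrm{\Lambda }=3`$ chosen for convenience (so that $`x=M^2`$ and $`a^2=xy=7-4\sqrt{3}`$).

```python
from math import sqrt

Lam = 3.0                                    # convenient choice
M = sqrt(16.0 / (135.0 + 78.0*sqrt(3.0)))    # from x = M^2 * Lam / 3
a2 = 7.0 - 4.0*sqrt(3.0)                     # a^2, from x*y with Lam = 3
r0 = (3.0 + 2.0*sqrt(3.0)) * M / 4.0         # the claimed triple root

P = 1.0 - a2*Lam/3.0
print(-Lam/3*r0**4 + P*r0**2 - 2*M*r0 + a2)  # Delta_r   ~ 0
print(-4*Lam/3*r0**3 + 2*P*r0 - 2*M)         # Delta_r'  ~ 0
print(-4*Lam*r0**2 + 2*P)                    # Delta_r'' ~ 0
```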
A plot of $`\omega `$ as a function of $`r_+`$: $$\omega =\frac{a(r_c^2-r_+^2)\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}}{(r_c^2+a^2)(r_+^2+a^2)}$$ (16) is given in Fig.2. The area of the horizon can again be written as $$A=4\pi \frac{r_+^2+a^2}{\sqrt{1+{\displaystyle \frac{a^2\mathrm{\Lambda }}{3}}}}$$ (17) The plot of $`A`$ as a function of $`r_+`$ is given in Fig.3 and shows that $`A`$ increases monotonically as $`r_+`$ goes from $`M`$ to $`r_c`$. Some concluding remarks seem here appropriate. a) While in the Kerr metric ($`\mathrm{\Lambda }=0`$) the observer is put at infinity, where $`g_{tt}=1`$, in the case $`\mathrm{\Lambda }\ne 0`$ all the pairs $`(r^{\prime },\vartheta ^{\prime })`$ that solve $`g_{tt}=1`$, each fixing an observer, should be considered. If then one wants to put the observer at a predefined position $`(r_0,\vartheta _0)`$ outside the ergosphere, it simply suffices to make the change of variable $$t=\frac{\overline{t}}{\sqrt{g_{tt}(r_0,\vartheta _0)}}$$ (18) which in turn modifies $`\omega `$ only by a scale factor. b) If we consider, when $`\mathrm{\Lambda }<0`$, the area of a surface at constant $`t`$ and $`r`$, and the lengths of the closed curves on it, we see that, while our coordinate transformations on $`\psi `$ and $`\chi `$ give asymptotically the correct value $`2\pi r\mathrm{sin}\vartheta `$ for a closed azimuthal curve at polar angle $`\vartheta `$, it is not possible to recover the asymptotically expected values $`4\pi r^2`$ and $`2\pi r`$, respectively, for the area of a surface of radius $`r`$ and for the length of a polar curve $`\phi `$ = constant. The drawback is due to the particular form of the term $`\mathrm{\Delta }_\vartheta `$ which appears in the $`g_{\vartheta \vartheta }`$ component of the metric tensor. That term could be eliminated by the change of variable $$\overline{\vartheta }=\int \frac{d\vartheta }{\sqrt{\mathrm{\Delta }_\vartheta }}$$ (19) but it would then be impossible to express analytically $`\vartheta `$ as a function of $`\overline{\vartheta }`$. We notice however that in calculating areas related to black holes, as well as to extreme black holes as done here, the term $`\mathrm{\Delta }_\vartheta `$ gets simplified in the calculations by the use of the condition $`\mathrm{\Delta }_r=0`$. c) Finally, the fact that the Kerr-dS Universe is closed requires the presence of another antipodal mass $`M`$ equal to the mass of the original source and endowed with equal but opposite angular momentum. Figure captions Figure 1: The curve of the extreme black holes. Here $`x=M^2\mathrm{\Lambda }/3,y=a^2/M^2`$. A dashed line separates the two regions where $`\mathrm{\Lambda }`$ takes opposite signs. Figure 2: The angular velocity $`\omega ^{\prime }`$ as a function of $`r_+^{\prime }`$. The ordinate axis separates the regions where $`\mathrm{\Lambda }<0`$ (left) and where $`\mathrm{\Lambda }>0`$ (right). Figure 3: The area $`A^{\prime }`$ as a function of $`r_+^{\prime }`$. The ordinate axis separates the regions where $`\mathrm{\Lambda }<0`$ (left) and where $`\mathrm{\Lambda }>0`$ (right).
## 1 Introduction

It is well-known that in soft hadron-hadron collisions the production of resonances gives an important contribution to the multiplicity of “stable” secondaries (such as pions, kaons, etc.). For example, in the additive quark model the probability of “direct” production of a secondary hadron of spin $`J`$ is proportional to the factor $`2J+1`$, which means that most pions, kaons, etc. are produced via the decay of vector, tensor and higher-spin resonances. These results are in reasonable agreement with the data on soft hadron-hadron collisions. However, the available information about the role of resonance production in hard processes is insufficient, and the mechanisms of multiparticle production in soft and hard processes can be different. In the present paper we therefore consider the role of resonances in the production of neutral strange secondaries in deep inelastic interactions of high energy neutrinos and antineutrinos with protons and neutrons.

## 2 Experimental data

For comparison with the Quark Model (QM) predictions, the experimental data of the E632 Collaboration were used. The experiment was done at the Fermilab Tevatron. The detector was the 15-ft. bubble chamber filled with a liquid neon-hydrogen mixture, which also served as the target. The bubble chamber was exposed to a neutrino beam produced by the quadrupole triplet train, which focused secondary particles produced in the interactions of 800 GeV protons from the Tevatron. The data sample consists of 6459 events (5416 $`\nu Ne`$ interactions and 1043 $`\overline{\nu }Ne`$ interactions). Neutrino interactions with a single nucleon were picked out using criteria that select interactions with a peripheral nucleon, i.e. neutrino interactions without intranuclear cascades, such as a cut on the target mass. The neutrino-nucleon interactions could be separated into neutrino-proton and neutrino-neutron interactions by using the total charge of the hadronic system (Table 1). This material was used to determine the numbers of produced $`K^0`$ and $`\mathrm{\Lambda }`$ particles as well as their fractions in the different groups of events (Table 2). The vee sample of Table 2 does not include corrections for losses of $`K^0`$ and $`\mathrm{\Lambda }`$ particles due to methodical sources (the limited volume of the bubble chamber, scanning and fitting efficiency, etc.). Nevertheless, the weighting coefficients accounting for these effects should not differ between the neutrino-proton and the neutrino-neutron interactions.

## 3 Quark model predictions

We will consider only events with charged current interactions. In the case of interactions with sea quarks, every type of particle and antiparticle is produced in practically the same proportion, independently of the isospin projection (say, we expect equal multiplicities of $`K^+`$, $`K^0`$, $`\overline{K}^0`$ and $`K^{-}`$). However, the secondaries produced with comparatively large negative Feynman-$`x`$ ($`x_F`$) in the laboratory frame, in the target fragmentation region, should contain valence quarks of the target nucleon, so different kinds of kaons should be produced with different probabilities. For the model prediction we will use the fact that the neutrino interacts with a valence $`d`$-quark, which turns into a $`u`$-quark, whereas the antineutrino interacts with a $`u`$-quark, which turns into a $`d`$-quark.
So we have the following configurations: $$\nu p\to uu+u^{\prime },$$ (1) $$\overline{\nu }n\to dd+d^{\prime },$$ (2) $$\overline{\nu }p\to ud+d^{\prime },$$ (3) $$\nu n\to ud+u^{\prime }.$$ (4) Here $`q^{\prime }`$ denotes the fast quark in the laboratory frame, which absorbs the $`W`$-boson and determines the fragmentation in the current region, while the other two quarks determine the fragmentation of the valence remnant into secondaries with comparatively large $`x_F`$ in the target hemisphere. One can see from Eqs. (1)-(4) that, say, direct production of $`K^0`$ ($`d\overline{s}`$) with comparatively large $`x_F`$ should be suppressed in the process (1), where there are no valence $`d`$-quarks, in comparison with the other reactions. In the process (2), where there are two valence $`d`$-quarks, it should be about two times larger than in the cases of Eqs. (3) and (4). However, if a significant part of the $`K^0`$ can be produced via decays of $`K^{*+}(890)`$ and $`K^{*0}(890)`$, the yields of $`K^0`$ with large $`x_F`$ can be more or less equal in all considered processes. A similar situation appears in the case of secondary $`\mathrm{\Lambda }`$-baryon production with large $`x_F`$. The direct $`\mathrm{\Lambda }`$ (containing two initial valence quarks, $`u`$ and $`d`$) can be produced with equal probabilities in the processes (3) and (4), and its production should be suppressed in reactions (1) and (2). However, in the case of $`\mathrm{\Lambda }`$ production via the $`\mathrm{\Lambda }\pi `$ decay of the isotriplet resonance $`\mathrm{\Sigma }(1385)`$, the multiplicities of large-$`x_F`$ $`\mathrm{\Lambda }`$ should be of the same order in all reactions (1)-(4).

## 4 Results

Here we compare the experimental results on neutral kaon and $`\mathrm{\Lambda }`$-hyperon production by $`\nu `$ and $`\overline{\nu }`$ beams with the quark model predictions. The quark model predictions for the multiplicities of strange secondaries, assuming only direct production of a kaon containing one valence quark of the target nucleon and direct production of a $`\mathrm{\Lambda }`$ containing two valence quarks of the target nucleon, are presented in Table 2. Here $`w_K`$ and $`w_\mathrm{\Lambda }`$ are the probabilities of $`K^0`$ and $`\mathrm{\Lambda }`$ production in the processes of fragmentation (or recombination) of one and two valence quarks of the target nucleon, respectively. Let us repeat that in the case of large contributions from resonance decays the multiplicities of $`K^0`$ and $`\mathrm{\Lambda }`$ can be more or less equal (the exact values of their ratios are model dependent). One can compare the presented predictions with the experimental multiplicities of $`K^0`$ with $`x_F<-0.2`$ and $`\mathrm{\Lambda }`$ with $`x_F<-0.4`$. It is clear that the data for both $`K^0`$ and $`\mathrm{\Lambda }`$ production do not agree with the presented predictions for the direct mechanism of secondary production. Say, the multiplicity of $`K^0`$ in $`\overline{\nu }n`$ interactions should be equal to the sum of the multiplicities in $`\nu n`$ and $`\overline{\nu }p`$ interactions, i.e. $`0.005\pm 0.002`$, which disagrees with the experimental value. The most natural explanation is a large resonance contribution to the multiplicities of neutral strange secondaries, which changes the predictions in a model dependent way.
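The valence-quark counting behind the direct-production predictions of Table 2, including the additivity relation just quoted, can be summarized in a small sketch; the function names and the string encoding of the remnants are illustrative only, and $`w_K`$, $`w_\mathrm{\Lambda }`$ remain free parameters.

```python
# A small sketch of the valence-quark counting behind reactions (1)-(4).
# w_K (w_L) is the probability that one valence d quark (a valence u-d pair)
# of the target ends up in a K0 (Lambda); their values are free parameters.
remnant = {                      # valence remnant of the target nucleon
    "nu p":    "uu",             # reaction (1)
    "nubar n": "dd",             # reaction (2)
    "nubar p": "ud",             # reaction (3)
    "nu n":    "ud",             # reaction (4)
}

def direct_K0(quarks):           # K0 = (d sbar): needs one valence d quark
    return quarks.count("d")     # in units of w_K

def direct_Lambda(quarks):       # Lambda = (uds): needs a valence u-d pair
    return 1 if ("u" in quarks and "d" in quarks) else 0   # in units of w_L

for proc, q in remnant.items():
    print(f"{proc:8s}: K0 = {direct_K0(q)} w_K, Lambda = {direct_Lambda(q)} w_L")

# additivity check quoted in the text: <K0>(nubar n) = <K0>(nu n) + <K0>(nubar p)
assert direct_K0("dd") == direct_K0("ud") + direct_K0("ud")
```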
## 5 Conclusion

We have compared the experimental data on $`K^0`$ and $`\mathrm{\Lambda }`$ production by $`\nu `$ and $`\overline{\nu }`$ beams on proton and neutron targets with the predictions of the quark model under the assumption that direct production dominates. The disagreement of these predictions with the data suggests a considerable resonance-decay contribution to the multiplicities of the produced secondaries. Unfortunately, the experimental statistics are not large enough for numerical estimates. This work is supported in part by grants RFBR 96-15.96764, NATO OUTR. LG 971390 and RFFI 99-02-16578.
# 1 D(𝑝-2)-branes in a square torus of radius 𝑅≪√𝛼' with B-field flux and its T-dual D(𝑝-1)-branes on a skewed torus

The skewed geometry of the torus gives rise to non-locality in the open string excitations living on the D(𝑝-1)-brane.

## Note Added

We have learned that J. Maldacena and J. Russo have considered related issues.

## Acknowledgments

NI would like to thank S. Yankielowicz for discussions. The work of AH is supported in part by the National Science Foundation under Grant No. PHY94-07194. The work of NI is supported in part by NSF grant No. PHY9722022.
# Ω_𝑚 from the Temperature-Redshift Distribution of EMSS Clusters of Galaxies

## 1 Introduction

Massive distant clusters of galaxies can be used to constrain models of cosmological structure formation (e.g. Peebles, Daly & Juszkiewicz 1989; Arnaud et al. 1992; Oukbir & Blanchard 1992; Eke, Cole & Frenk 1996; Viana & Liddle 1996; Bahcall, Fan & Cen 1997; Donahue et al. 1998; Borgani et al. 1999). The mass function of clusters reflects the sizes and numbers of the original perturbations, and the evolution of the mass function depends sensitively on $`\mathrm{\Omega }_m`$, the mean density of matter. In a critical universe with $`\mathrm{\Omega }_m=1`$, perturbation growth continues forever, while in a low-density universe ($`\mathrm{\Omega }_m<1`$), growth significantly decelerates once $`z\lesssim \mathrm{\Omega }_m^{-1}-1`$. The Extended Medium Sensitivity Survey (EMSS; Gioia et al. 1990; Henry et al. 1992) has proved cosmologically interesting because it contains several massive high-redshift clusters (Henry 1997; Eke et al. 1998; Donahue et al. 1999 – hereafter D99). We report here our analysis of a complete, high-redshift sample of clusters of galaxies culled from the EMSS, including the most distant EMSS clusters (D99). We use maximum likelihood analysis to compare the unbinned temperature-redshift data with analytical predictions of cluster evolution from Press-Schechter models, normalized to two different low-$`z`$ cluster samples (Henry & Arnaud 1991; Markevitch 1998). Section 2 briefly describes the model for cluster evolution, §3 describes the cluster samples, and §4 describes the implementation of the maximum likelihood technique. Section 5 discusses our results and their sensitivity to various assumptions, and §6 outlines the results of a bootstrap resampling of our cluster catalogs. Section 7 summarizes our findings.

## 2 The Model

The Press-Schechter formula (Press & Schechter 1974), as extended by Lacey & Cole (1993), adequately predicts the evolution of the cluster mass function ($`dn/dM`$) in numerical simulations (e.g. Eke, Cole & Frenk 1996 (ECF); Borgani et al. 1999; Bryan & Norman 1998). To obtain predicted cluster temperature functions ($`dn/dT`$) from this mass function we use a mass-temperature ($`M`$–$`T`$) relation appropriate for all values of $`\mathrm{\Omega }_m`$ (Voit & Donahue 1998, 1999). At $`z=0`$ we normalize this relation to the simulations of Evrard et al. (1996). We will show in §5 that the $`M`$–$`T`$ relation of ECF yields similar results. In this description of cluster evolution, the three main variables are the mean density of the universe $`\mathrm{\Omega }_m`$, the slope $`n`$ of the initial density perturbation spectrum near the cluster scale, and $`\nu _c`$, a parameter that reflects the abundance of virialized perturbations on a given mass scale at a particular moment in time. For a given $`\mathrm{\Omega }_m`$, $`n`$, and $`\nu _c`$, the number of clusters per unit steradian expected in a given redshift range is $`(dn/dM)(dM/dT)(dV/dz)F(T,z)`$ integrated over the relevant redshift and temperature ranges, where $`F(T,z)`$ is a window function defined by the flux and redshift limits of a given sample and the luminosity-temperature relation (see §4).

## 3 Cluster Samples

Our fitting procedure compares three cluster samples, each covering a distinct redshift range, to the model (§2). The EMSS provided two samples of distant clusters. Because the EMSS has multiple flux limits (Henry et al.
1992), it is equivalent to multiple surveys, each with a different flux limit and sky coverage. To compute the volumes associated with the EMSS samples, we correct the predicted flux of a cluster to that measured within a $`2.4^{}\times 2.4^{}`$ (arcmin) detection cell (Henry et al. 1992). The $`z=0.5-0.9`$ EMSS sample, described in D99, consists of 5 EMSS clusters at $`z=0.5-0.9`$ (Gioia et al. 1990; Henry et al. 1992). These are all of the EMSS clusters with 0.5-3.5 keV fluxes $`>f_x=1.33\times 10^{-13}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$. (MS2053 may also belong in this sample, but at a flux limit below what is listed in Henry et al. 1992.) The $`z=0.3-0.4`$ sample is described in Henry (1997). D99 modified that sample slightly by revising the redshift of one cluster upwards to 0.54 (MS1241), leaving 9 clusters in the Henry sample. The Henry sample has already been used to constrain $`\mathrm{\Omega }_m`$ by Eke et al. (1998) and Henry (1997). This paper extends the previous analysis from $`z=0.4`$ to $`z=0.8`$, where the cluster evolution is expected to be much more dramatic. To establish a baseline for assessing cluster evolution, we used two different low-$`z`$ samples: the Markevitch sample and the HEAO sample. The Markevitch sample of clusters from the ROSAT All Sky Survey with $`z=0.04-0.09`$ (Markevitch 1998) covers a sky area of 8.23 steradians to a 0.2-2.5 keV flux limit of $`2.0\times 10^{-11}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$. The HEAO cluster sample is also an all-sky sample (Henry & Arnaud 1991), with a 2-10 keV flux limit of $`3.0\times 10^{-11}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$. To compute the volumes available to these samples we assume that the detection techniques in both cases were sensitive to the total extended flux. We explore the consequences of our choice of low-redshift sample in §5.

## 4 Methods and Assumptions

To assess how well cosmological models fit the cluster temperature data, we adopt the maximum likelihood technique described by Marshall et al. (1983). Specifically, we minimize the maximum likelihood function: $$S=-2\sum _i\left[\mathrm{ln}\left[\frac{dn}{dT}(z_i,T_i)\right]+\mathrm{ln}\left[\frac{dV}{dz}(z_i)\right]\right]+2\sum _k\int _0^{\infty }𝑑T\int _{z_{min,k}}^{z_{max,k}(T)}\frac{dn}{dT}\frac{dV}{dz}\mathrm{\Omega }_kF(T,z)𝑑z$$ (1) where $`z_i`$ and $`T_i`$ are the redshift and temperature of cluster $`i`$, $`z_{min,k}`$ is the minimum redshift of sample $`k`$, and $`z_{max,k}(T)`$ is the maximum redshift at which a cluster of temperature $`T`$ can be seen in sample $`k`$. $`V`$ is the comoving volume per unit solid angle, and $`\mathrm{\Omega }_k`$ is the solid angle corresponding to sample $`k`$. In practice, the temperature integral is calculated between 3 and 15 keV. Only clusters with temperature greater than 3 keV are included in the analysis. Intervals around the minimum $`S`$ are distributed like $`\chi ^2`$, so differences in $`S`$ are similar to the familiar $`\mathrm{\Delta }\chi ^2`$. The $`z_{max,k}(T)`$ values for our samples depend on the cluster luminosity-temperature ($`L`$–$`T`$) relation. Low redshift clusters of galaxies have a fairly well-defined $`L`$–$`T`$ relation (e.g. David et al. 1993; Markevitch 1998) that high-redshift clusters of galaxies seem to follow (D99; Mushotzky & Scharf 1998). This relationship has a finite dispersion, which we handle in two ways.
One method is to replace the $`L`$–$`T`$ relation with a line bounding the lower $`1\sigma `$ envelope in $`L`$ (Henry 1997), explicitly compute $`z_{max}(T)`$ for each flux limit, and set $`F(T,z)=1`$. The second method is to incorporate the dispersion into a window function $`F(T,z)`$ (Eke et al. 1998). We have done this calculation both ways, and both methods yield very similar results. Since the window function seems to be the most realistic description of the data, we use it for our default analysis, with the same dispersion as assumed in Eke et al. (1998). We explore the effect of including evolution in the $`L`$–$`T`$ relation in §5. We vary 3 parameters for our model to span a cube of parameter space: $`0.1<\mathrm{\Omega }_m<1.0`$, spectral index $`-2.8<n<-1.0`$, and $`2.4<\nu _{c0}<3.1`$, where $`\nu _{c0}=\nu _c(5\mathrm{keV},z=0)`$. We compute a multidimensional matrix of $`S`$ for 25 temperatures between $`T=3`$ and 15 keV. The 2-10 keV $`L`$–$`T`$ relation from David et al. (1993), appropriately k-corrected, defines the EMSS and HEAO volumes. The volume of the Markevitch (1998) sample, our default low-$`z`$ sample, is defined by its own $`L`$–$`T`$ relation. This procedure yields a best fit of $`\mathrm{\Omega }_m=0.45\pm 0.10`$, $`n=-2.4\pm 0.2`$, and $`\nu _{c0}=2.77_{-0.09}^{+0.05}`$, corresponding to $`\sigma _8=0.64\pm 0.04`$ when $`\mathrm{\Lambda }=0`$. We achieve a similar degree of correspondence between observed and predicted temperature functions for flat models when $`\mathrm{\Omega }_m=0.27\pm 0.1`$, $`n=-2.2\pm 0.2`$, $`\nu _{c0}=2.62_{-0.09}^{+0.08}`$, corresponding to $`\sigma _8=0.73_{-0.05}^{+0.03}`$. Figure 1 plots the observed temperature functions (D99) and the theoretical temperature function corresponding to the best fit to the temperature and redshift data for $`\mathrm{\Lambda }=0`$. Note that we fit the discrete, unbinned temperature-redshift data, not the binned temperature function. Figure 2 shows that our $`3\sigma `$ confidence limits on $`\mathrm{\Omega }_m`$ exclude $`\mathrm{\Omega }_m=1`$. Our value for the best-fit $`\mathrm{\Omega }_m`$ is consistent with that derived for the low-redshift subset of our data by Henry (1997), Eke et al. (1998), and Viana & Liddle (1999). However, Viana & Liddle (1999) report less stringent constraints than the previous studies because they were more conservative about uncertainties in the low-$`z`$ normalization of the temperature function. The maximum likelihood method we use naturally accounts for the uncertainty of the low-$`z`$ determination of the normalization; we investigate the use of somewhat different low-$`z`$ samples in the next section. Because our sample extends to higher redshifts, our 3$`\sigma `$ confidence limits are considerably stronger than those found by earlier cluster studies.

## 5 Results and Discussion

Our best fit values for $`\mathrm{\Omega }_m`$ are fairly robust. This section briefly describes the sensitivity of $`\mathrm{\Omega }_m`$ to the assumptions in our model and procedure. Results for the various assumptions are listed in Table 1. 1. Changing the low-redshift sample. If we use the updated HEAO sample (Henry & Arnaud 1991, with best-fit temperature updates provided by Henry, private communication) instead of the Markevitch sample, we obtain $`\mathrm{\Omega }_m\sim 0.3`$ rather than $`0.45`$ because of the somewhat lower normalization at $`z=0`$.
We also used the Markevitch sample with uncorrected temperatures and a flatter low-$`z`$ $`L`$–$`T`$ relation (Markevitch 1998), and obtained a somewhat lower best-fit value for $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_m=0.4\pm 0.1`$. 2. Varying the $`M`$–$`T`$ relation. We find our bounds on $`\mathrm{\Omega }_m`$ change little when we switch to the ECF (1996) $`M`$–$`T`$ relation. Because the best-fit $`\mathrm{\Omega }_m`$ turns out to be $`>0.3`$, the unphysical behavior of the ECF $`M`$–$`T`$ relation at low $`\mathrm{\Omega }`$ and low $`z`$ is not a factor. (See Voit & Donahue 1999 for more details.) 3. Dispersion in the $`M`$–$`T`$ relation. Our default assumption was that the $`M`$–$`T`$ relation has a finite dispersion of 7% (Evrard et al. 1997). Neglecting the dispersion results in a negligible difference in $`\mathrm{\Omega }_m`$; increasing the dispersion to 20% and using the ECF $`M`$–$`T`$ relation increases $`\mathrm{\Omega }_m`$ slightly to $`\sim 0.50`$. Dispersion in the $`M`$–$`T`$ relation scatters some of the more numerous low-mass clusters to higher temperatures, making the observed temperature function flatter and somewhat enhancing the observed numbers of hot clusters relative to cool clusters. 4. Evolution of the $`L`$–$`T`$ relation. If we allow the normalization of the $`L`$–$`T`$ relation to evolve such that $`L\propto T^\alpha (1+z)^A`$ with $`A=2`$, we find no significant differences in the $`\mathrm{\Omega }_m`$ values. This null result is in contrast to similar exercises in modelling the evolution of the cluster luminosity function (e.g. Borgani et al. 1999) or cluster number counts (Ebeling et al. 1999), where evolution of the $`L`$–$`T`$ relation in the appropriate direction ($`A\sim 2`$) allows models with larger $`\mathrm{\Omega }_m`$ ($`\sim 1`$) to nearly fit. $`L`$–$`T`$ evolution only modestly affects the sample volume used to predict the distribution of cluster temperatures. 5. Omitting MS1054-0321. MS1054-0321, the hottest ($`kT=12.3`$ keV) and most distant ($`z=0.83`$) cluster in our sample (Donahue et al. 1998), may well be anomalous. However, omitting MS1054-0321 had virtually no effect on the best fit $`\mathrm{\Omega }_m`$. 6. Missing high redshift clusters in the EMSS. The EMSS could be incomplete due to the use of a single detect-cell aperture, which could bias its cluster selection in favor of high central surface brightness even at high $`z`$ (Ebeling et al. 1999; Lewis et al. 1999). If the EMSS is missing clusters at higher redshift, the values for $`\mathrm{\Omega }_m`$ derived here are upper limits. 7. Deviations from Press-Schechter orthodoxy. Some numerical simulations indicate that massive, high-$`z`$ clusters might be more common than the standard PS formula predicts (Governato et al. 1999; Evrard, private communication). We have tested the effects of reducing the standard evolution of $`\nu _c`$ by a factor $`(1+z)^{0.125}`$ (Governato et al. 1999), and find that the best fit $`\mathrm{\Omega }_m`$ rises to $`0.5_{-0.05}^{+0.2}`$. Of all the systematic effects, this one has the largest effect on the best fit $`\mathrm{\Omega }_m`$. Even so, $`\mathrm{\Omega }_m=1`$ is barely allowed at the $`3\sigma `$ level. 8. A larger high-redshift sample. We simulated the effect of tripling the size of the EMSS by tripling the assumed sky coverage of the EMSS and replicating the existing $`T`$–$`z`$ data pairs. Tripling the number of known clusters with $`z=0.3-0.9`$ reduces the statistical uncertainty of $`\mathrm{\Omega }_m`$ by a factor of $`\sim 2`$.
Because the uncertainty in the current estimate is now equal parts systematic and statistical, theoretical refinements will be needed if we wish to take full advantage of larger surveys.

## 6 Bootstrap Catalogs and Experimental Uncertainties

In order to investigate the effects of measurement uncertainties within our cluster $`T`$–$`z`$ catalogs, we generated bootstrap catalogs with re-sampled temperatures. For the EMSS clusters, we used the mean temperatures from Gaussian fits to temperature probability distributions derived from the X-ray data (D99, Table 4). Ten thousand bootstrap catalogs were generated for each of the three original samples, Markevitch (1998), Henry (1997), and D99. The number of clusters in each catalog was predetermined from a Poisson distribution based on the number of clusters in the original catalog. Each set of data was then fit to obtain a best-fit $`\mathrm{\Omega }_m`$, normalization, and slope, using all of the standard assumptions. Out of 10,000 catalog combinations, we obtained a best fit $`\mathrm{\Omega }_m>0.95`$ for only 3. These three catalog combinations were the ones for which the low-redshift catalog had a high number of clusters while the high-redshift cluster catalogs were nearly empty. We got very similar results when we repeated the bootstrap re-sampling of the three catalogs while assuming temperature measurement uncertainties for the EMSS clusters that were half the original uncertainty. This similarity suggests that temperature measurement errors do not dominate the uncertainty in this method of estimating $`\mathrm{\Omega }_m`$.

## 7 Summary

We have used a maximum likelihood Press-Schechter analysis of the temperatures and redshifts of two high-$`z`$ EMSS samples of clusters of galaxies and two low-$`z`$ all-sky samples of clusters to constrain $`\mathrm{\Omega }_m`$. We find a simultaneous best fit to the low-$`z`$ Markevitch (1998) sample, a moderate-$`z`$ EMSS sample from Henry (1997), and a high-$`z`$ EMSS sample (D99) of $`\mathrm{\Omega }_m=0.45\pm 0.1`$ for an open universe and $`\mathrm{\Omega }_m=0.27\pm 0.1`$ for a flat universe, quoting statistical uncertainties only. Our results are not very sensitive to the assumptions within our cluster evolution model, with systematic uncertainties $`\pm 0.1`$. Universes with $`\mathrm{\Omega }_m=1`$ are ruled out at greater than 99.7% ($`3\sigma `$) confidence in the scenarios described here. We acknowledge the NASA grants NAG5-3257, NAG5-6236, NAG5-3208 and NAG5-2570 for partial support of this work. We benefited greatly from exchanges with J. Patrick Henry in the development of the code and from his generous release of revised temperature data for the low-$`z`$ Henry & Arnaud (1991) sample.
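To illustrate the fitting machinery, the following schematic Python sketch evaluates a toy version of the maximum likelihood statistic of Eq. (1). The functions `dndT_model` and `dVdz_model` are placeholders standing in for the Press-Schechter temperature function and the cosmological volume element, and the $`(z,T)`$ pairs are illustrative; this is a sketch of the procedure, not a reproduction of the actual analysis.

```python
import numpy as np

# A schematic sketch of the maximum-likelihood statistic of Eq. (1).
# dndT_model and dVdz_model are toy placeholder functions, NOT the real
# Press-Schechter dn/dT or the true comoving volume element.
def dndT_model(z, T, Om):
    return np.exp(-T / 4.0) * (1.0 + z) ** (-3.0 * Om)

def dVdz_model(z):
    return z**2

def S_statistic(clusters, Om, Omega_k=1.0, zmin=0.3, zmax=0.9, Tlims=(3.0, 15.0)):
    # first term: -2 * sum over the observed (z_i, T_i) pairs
    s = -2.0 * sum(np.log(dndT_model(z, T, Om)) + np.log(dVdz_model(z))
                   for z, T in clusters)
    # second term: +2 * predicted number of clusters in the survey window,
    # here approximated by a crude midpoint double integral
    T = np.linspace(*Tlims, 201)
    z = np.linspace(zmin, zmax, 201)
    Tg, zg = np.meshgrid(T, z)
    integrand = dndT_model(zg, Tg, Om) * dVdz_model(zg) * Omega_k
    s += 2.0 * integrand.mean() * (Tlims[1] - Tlims[0]) * (zmax - zmin)
    return s

data = [(0.54, 8.0), (0.83, 12.3), (0.55, 6.5)]   # illustrative (z, T/keV) pairs
Om_grid = np.linspace(0.1, 1.0, 19)
best = min(Om_grid, key=lambda Om: S_statistic(data, Om))
print(f"toy best-fit Omega_m = {best:.2f}")
```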
# Parallel Magnetic Field Induced Transition in Transport in the Dilute Two-Dimensional Hole System in GaAs

## Abstract

A magnetic field applied parallel to the two-dimensional hole system in the GaAs/AlGaAs heterostructure, which is metallic in the absence of an external magnetic field, can drive the system insulating at a finite field through a well defined transition. The value of the resistivity at the transition is found to depend strongly on density.

Several years ago, Kravchenko et al. observed that the resistivity ($`\rho `$) of the high mobility two-dimensional (2D) electron gas in their Si metal-oxide-semiconductor field-effect transistor (MOSFET) samples decreased by almost an order of magnitude when they lowered the sample temperature ($`T`$) below about 2 K. Their observation of such metallic behavior contradicts the scaling theory of localization, which predicts that, in the absence of electron-electron interaction, all states in 2D are localized in the $`T\to 0`$ limit and that only an insulating phase, characterized by an increasing $`\rho `$ with decreasing $`T`$, is possible at low $`T`$. They studied the $`T`$ dependence of $`\rho `$ as a function of the 2D carrier density ($`p`$) and demonstrated from the temperature coefficient (d$`\rho `$/d$`T`$) of $`\rho `$ a clear transition from the metallic to an insulating behavior at a “critical” density, $`p_c`$. This apparent metal-insulator transition (MIT) has since been reported in other low disorder 2D systems, and it appears to be a general phenomenon in low disorder dilute 2D systems where the Fermi energy is small and $`r_s`$ (the ratio of Coulomb interaction energy to Fermi energy) is large ($`\gtrsim 10`$). To date, despite the large number of experimental and theoretical papers on this zero field MIT in the literature, there is still no consensus on the physics and the mechanisms behind this metallic behavior. Two factors that strongly influence this MIT have become apparent from the more recent experiments. First, the application of a magnetic field parallel to the 2D system ($`B_{\parallel }`$) induces a drastic response. A giant positive magnetoresistance is observed in both the metallic and the insulating phases, varying continuously across the transition. In the case of Si MOSFET’s, Simonian et al. have made a detailed study of the temperature and electric field dependences of the magnetoresistance and concluded that “in the $`T\to 0`$ limit the metallic behavior is suppressed by an arbitrarily weak magnetic field”. Since $`B_{\parallel }`$ couples only to the carrier’s spin and does not affect its orbital motion, the spin degree of freedom must play a crucial role in the electronic processes that give rise to transport in both phases. The second factor that has become increasingly clear is the importance of the role played by disorder. A close examination of all 2D systems that show the metallic behavior, and thus the MIT as the carrier density is reduced, reveals that $`p_c`$ is lower in systems with a higher mobility (i.e., a lower carrier scattering rate). Typically, the 2D electron system (2DES) in a high quality Si MOSFET has a peak mobility of $`1\times 10^4-5\times 10^4cm^2/Vs`$ and $`p_c\sim 1\times 10^{11}cm^{-2}`$. On the other hand, the 2D hole system (2DHS) in GaAs/AlGaAs heterostructures, which has a comparable effective mass ($`m^{*}`$) at low densities, usually has a peak mobility about 10 times higher and $`p_c\sim 1\times 10^{10}cm^{-2}`$. It is clear that $`p_c`$ decreases monotonically with decreasing disorder in the 2D system.
In the ideal clean limit, it is well known that the Wigner crystal is the ground state in the low $`p`$ limit. The estimated critical density for Wigner crystallization is approximately $`2\times 10^9cm^{-2}`$ for the 2DHS in GaAs, using $`m^{*}=0.18m_e`$ ($`m_e`$ being the free electron mass) and a value of 37 for $`r_s`$ at the crystallization. We have recently investigated the transport properties of a 2DHS in the GaAs/AlGaAs heterostructure, which has an unprecedentedly high peak mobility of $`7\times 10^5cm^2/Vs`$, and observed a zero field MIT at $`p_c=7.7\times 10^9cm^{-2}`$. The mobility of this 2DHS is over 25 times that of the 2DES in the Si MOSFET whose parallel magnetic field response has been most extensively studied in references 8 and 9. In view of the fact that it is not yet clear what specific role the spins play in the two transport regimes and how the small amount of random disorder in high quality 2D systems influences the MIT, we have systematically studied the effect of a $`B_{\parallel }`$ on the transport in this high quality 2DHS. We find that for $`p>p_c`$ the metallic behavior persists to our lowest $`T`$ of 50 mK until $`B_{\parallel }`$ reaches a well defined “critical” value $`B_{\parallel }^c`$, beyond which the 2DHS shows an insulating behavior. At $`B_{\parallel }^c`$, $`\rho `$ is independent of $`T`$. The nonlinear I-V characteristics across this $`B_{\parallel }`$ induced transition are found to be the same as those across the zero field MIT. Below, we describe in more detail the changes in the transport properties of the 2DHS under the influence of a $`B_{\parallel }`$ and report our observation of this $`B_{\parallel }`$ induced MIT. We used the 2DHS created in a Si modulation doped GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As heterostructure grown on the (311)$`A`$ surface of an undoped GaAs substrate by molecular beam epitaxy. The samples were Hall bars along the \[$`\overline{2}`$33\] crystallographic direction, and the measurements were made using a dilution refrigerator in the $`T`$ range from 50 mK to 1.1 K and under $`B_{\parallel }`$ up to 14 T. The hole density was tuned by a back gate in the range $`5.7\times 10^9<p<4.1\times 10^{10}cm^{-2}`$. In Fig. 1(a), the $`T`$ dependence of $`\rho `$ in the zero field metallic phase is shown on a semilog plot for a hole density $`p=3.7\times 10^{10}cm^{-2}`$ at several different $`B_{\parallel }`$’s. The bottom trace, taken at $`B_{\parallel }=0`$, clearly shows a positive $`d\rho /dT`$, which is characteristic of metallic-like transport. This metallic behavior is found to persist to our lowest $`T`$ of 50 mK in a magnetic field of up to about 4 T. As $`B_{\parallel }`$ increases from zero, the strength of the metallic behavior, measured by the total change in $`\rho `$ from about 1 K to 50 mK, weakens progressively, and for $`B_{\parallel }\gtrsim 4.5`$ T $`d\rho /dT`$ becomes negative. We take this negative $`d\rho /dT`$ as an indication that the 2DHS is insulating, and phenomenologically identify the two distinct transport regimes as a “metallic phase” and an “insulating phase”. It is clear from the figure that there exists a “critical” field $`B_{\parallel }^c`$ near 4 T where $`\rho `$ becomes $`T`$ independent, separating the metallic and insulating phases. Another way of demonstrating the existence of a well defined $`B_{\parallel }^c`$ is to plot $`\rho `$ against $`B_{\parallel }`$ at several different $`T`$’s. Such a plot is shown in Fig. 1(b) for $`p=1.5\times 10^{10}cm^{-2}`$. In this plot, the crossing point marked by the arrow defines $`B_{\parallel }^c`$.
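The identification of $`B_{\parallel }^c`$ as the temperature-independent crossing point of the $`\rho (B_{\parallel })`$ isotherms can be illustrated with a minimal sketch; the model isotherms below are synthetic stand-ins for the measured curves, with the crossing built in at the 2.1 T value quoted for Fig. 1(b).

```python
import numpy as np

# A minimal sketch of reading off the "critical" field from rho(B, T) data:
# at B_c the resistivity is T independent, so the relative spread of the
# isotherms over temperature is minimal there.  The curves are synthetic.
B = np.linspace(0.0, 6.0, 601)
temps = [0.05, 0.1, 0.2, 0.4, 0.8]                    # temperatures in K
Bc_true = 2.1                                         # value quoted for Fig. 1(b)

def rho_model(B, T):          # synthetic isotherms crossing at Bc_true
    return np.exp(B**2 / 9.0) * (1.0 + 0.5 * T * (B - Bc_true))

curves = np.array([rho_model(B, T) for T in temps])
spread = curves.std(axis=0) / curves.mean(axis=0)     # relative spread over T
print(f"estimated B_c = {B[np.argmin(spread)]:.2f} T")  # prints ~2.1 T
```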
This is the direct consequence of the fact that $`\rho `$ decreases with decreasing $`T`$ in the metallic phase ($`B_{\parallel }<B_{\parallel }^c`$), increases in the insulating phase ($`B_{\parallel }>B_{\parallel }^c`$), and is independent of $`T`$ at $`B_{\parallel }^c`$. Across the zero field MIT, the differential resistivity ($`dV/dI`$) is known to show an increase with increasing voltage ($`V`$) in the metallic phase and a decrease in the insulating phase. In Fig. 1(c), the $`dV/dI`$ measured at 50 mK across the $`B_{\parallel }`$ induced MIT for $`p=3.7\times 10^{10}cm^{-2}`$ is shown at similar $`B_{\parallel }`$’s as in Fig. 1(a). It is clear that in the metallic phase ($`B_{\parallel }<4.2`$ T) $`dV/dI`$ increases as $`V`$ increases, and in the insulating phase ($`B_{\parallel }>4.2`$ T) it decreases. At $`B_{\parallel }=4.2`$ T, which is the $`B_{\parallel }^c`$, $`dV/dI`$ is constant, implying a linear $`I`$–$`V`$. This result also shows that there is a well defined critical field $`B_{\parallel }^c`$ separating the metallic and insulating phases in the presence of a parallel magnetic field. We have measured $`B_{\parallel }^c`$ and the “critical” resistivity ($`\rho _c`$) as a function of $`p`$ in the $`p>p_c`$ regime for two samples cut from the same wafer, and the results are shown in Fig. 2(a)-(c). Figures 2(a) and 2(b) show that $`B_{\parallel }^c`$ decreases with decreasing $`p`$ and approaches zero as $`p`$ is reduced towards $`p_c`$, where the zero field MIT is observed. When $`B_{\parallel }^c`$ is plotted against $`p-p_c`$ on a log-log scale (Fig. 2(a)), all data from the two samples form two parallel lines, showing that $`B_{\parallel }^c\propto (p-p_c)^\alpha `$ with $`\alpha \approx 0.7`$ for both samples. It is interesting to note that Hanein et al., in a previous experiment, extracted from the $`T`$ dependence of $`\rho `$ in the zero field metallic phase an energy scale, $`T_0`$, in the form of an activation energy. Their $`T_0`$ depends linearly on $`p`$ in the range $`p>2\times 10^{10}cm^{-2}`$ and extrapolates to zero at $`p=0`$. We postulate that the magnetic energy at $`B_{\parallel }^c`$, $`g\mu _BB_{\parallel }^c`$ ($`g`$ being the $`g`$-factor of holes in GaAs and $`\mu _B`$ the Bohr magneton), is equivalent to their $`k_BT_0`$ ($`k_B`$ is the Boltzmann constant), and compare the $`p`$ dependences of $`B_{\parallel }^c`$ and $`T_0`$ by replotting $`B_{\parallel }^c`$ against $`p`$ on a linear scale in Fig. 2(b). We find that our data from both samples in the range $`p>2\times 10^{10}cm^{-2}`$ fall on straight lines that extrapolate to zero at $`p=0`$, and therefore the dependence of $`B_{\parallel }^c`$ on $`p`$ is similar to that of $`T_0`$ on $`p`$ in the same range. If we equate our $`g\mu _BB_{\parallel }^c`$ with their $`k_BT_0`$ at $`p>2\times 10^{10}cm^{-2}`$, we obtain a $`g`$-factor of 0.1, which is of the same order as the hole $`g`$-factor of a $`100\AA `$ wide GaAs/AlGaAs quantum well. However, we are not able to distinguish whether the $`p`$ dependence of $`B_{\parallel }^c`$ for $`p>2\times 10^{10}cm^{-2}`$ is indeed linear or of the power law form, because the density range covered in our measurements is small. The “critical” 2D resistivity $`\rho _c`$ at the transition depends strongly on $`B_{\parallel }^c`$ and therefore on $`p`$. At $`p=p_c`$, where $`B_{\parallel }^c=0`$, $`\rho _c`$ is of the order of one resistance quantum, $`\rho _Q=h/e^2`$ (where $`h`$ is Planck’s constant and $`e`$ the electron charge). For $`p>p_c`$, $`\rho _c`$ decreases steeply as $`B_{\parallel }^c`$ increases and drops to $`\sim 0.03\rho _Q`$ at $`B_{\parallel }^c\approx 7`$ T, as shown by the solid circles in Fig. 3. Fig.
2(c) shows the $`\rho _c`$ data from both samples as a function of $`p-p_c`$, and it is clear that for $`p-p_c>2\times 10^9cm^{-2}`$ $`\rho _c`$ decreases exponentially with increasing $`p`$. This strikingly strong dependence of $`\rho _c`$ on $`p`$ is not anticipated within the MIT framework. It suggests that the observed insulating behavior for $`B_{\parallel }>B_{\parallel }^c`$ cannot be the result of thermally activated processes in a simple Anderson type of insulator. However, it is reminiscent of the magnetic field driven superconductor-insulator transition reported by Yazdani and Kapitulnik in their experiments on thin films of amorphous MoGe, where a similar decrease in the critical resistivity with increasing critical $`B`$ field is observed. They have attributed this lack of universality in their critical 2D resistivity to the presence of conduction by unpaired electrons. In this context, we should also note that Phillips et al. have proposed that the metallic behavior observed in high mobility 2D systems is that of a superconductor, and the existence of a critical $`B`$ field is then to be expected. However, $`\rho `$ in the metallic regime is known to saturate to a finite value instead of vanishing as $`T\to 0`$, and the relation of the metallic behavior to superconductivity is not known at present. Now we turn to the discussion of the overall in-plane magnetoresistance. In Fig. 3, the in-plane magnetoresistance in the zero $`B`$ field insulating phase ($`p<p_c`$) is shown as open circles and in the metallic phase ($`p>p_c`$) as solid lines. Regardless of whether the zero field transport is metallic or insulating, we observe a strong positive magnetoresistance. According to the $`B_{\parallel }`$ dependence of $`\rho `$, we can divide the entire $`\rho `$–$`B_{\parallel }`$ plane into two regimes: a low field regime and a high field regime. In the low field regime, we find that the magnetoresistance is well described by $`\rho =\rho _0exp(B_{\parallel }^2/B_0^2)`$, where $`\rho _0`$ and $`B_0`$ are fitting parameters. The value of $`B_0`$ is shown as the solid circles in Fig. 4 as a function of $`p`$. $`B_0`$ decreases as $`p`$ is reduced towards $`p_c`$ (marked by the arrow in Fig. 4), reflecting the fact that the $`B_{\parallel }`$ dependence of $`\rho `$ becomes stronger. However, it is clearly visible in Fig. 4 that $`B_0`$ saturates to a constant value of $`\sim 3`$ T as the 2DHS is brought into the zero field insulating phase. The $`B_{\parallel }`$ induced transition occurs in this low field regime, and the measured $`B_{\parallel }^c`$’s are marked by the solid circles in Fig. 3 (the dashed line is a guide to the eye). As the magnetic field is increased beyond $`B_{\parallel }^{*}`$ (which is indicated by the dotted line in Fig. 3) into the high field regime, the dependence of $`\rho `$ on $`B_{\parallel }`$ changes. In this regime, the magnetoresistance is of the form $`\rho =\rho _1exp(B_{\parallel }/B_1)`$, where $`\rho _1`$ and $`B_1`$ are fitting parameters. As shown by the open circles in Fig. 4, the $`p`$ dependence of $`B_1`$ is similar to that of $`B_0`$; $`B_1`$ also decreases as $`p`$ is reduced towards $`p_c`$ and saturates to $`\sim 1.5`$ T at $`p<p_c`$. While the overall magnetoresistance evolves smoothly as $`p`$ is changed across $`p_c`$, it is obvious from Fig. 3 that $`B_{\parallel }^{*}`$ (the boundary separating the low field and high field regimes) decreases with decreasing $`p`$ in the zero field metallic phase, but is independent of $`p`$ in the insulating phase.
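A minimal sketch of the two-regime fits that produce $`B_0`$ and $`B_1`$ (Fig. 4) is given below; the synthetic data, the assumed boundary $`B_{\parallel }^{*}=8`$ T and the parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the two-regime magnetoresistance fits on synthetic data:
# rho = rho0 exp(B^2/B0^2) below B*, rho = rho1 exp(B/B1) above it.
def low_field(B, rho0, B0):
    return rho0 * np.exp(B**2 / B0**2)

def high_field(B, rho1, B1):
    return rho1 * np.exp(B / B1)

B_star = 8.0                                   # assumed regime boundary (T)
B = np.linspace(0.5, 14.0, 100)
rho1_true = low_field(B_star, 1.0, 5.0) / np.exp(B_star / 2.5)  # continuity at B*
rho = np.where(B < B_star, low_field(B, 1.0, 5.0), high_field(B, rho1_true, 2.5))

p_lo, _ = curve_fit(low_field, B[B < B_star], rho[B < B_star], p0=[1.0, 4.0])
p_hi, _ = curve_fit(high_field, B[B >= B_star], rho[B >= B_star], p0=[1.0, 2.0])
print(f"B0 = {p_lo[1]:.2f} T (low field),  B1 = {p_hi[1]:.2f} T (high field)")
```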
It is interesting to note that all three characteristic fields for the in-plane magnetoresistance, $`B_0`$, $`B_1`$, and $`B_{\parallel }^{*}`$, become independent of $`p`$ when $`p`$ is reduced below $`p_c`$. The low field magnetoresistance at $`B_{\parallel }<B_{\parallel }^{*}`$ is very similar to that observed in Si MOSFET’s, where a positive magnetoresistance at low fields is followed by a saturation at high fields. Mertes et al. have interpreted the low field positive magnetoresistance in Si MOSFET’s using the hopping model of Kurobe and Kamimura, in which hopping becomes more difficult as more spins get aligned with $`B_{\parallel }`$. This interpretation is supported by the observation of Okamoto et al. that the field where the magnetoresistance starts to saturate coincides with the field expected for complete spin alignment. Such a hopping model can also be applied to explain our data in the $`p<p_c`$ insulating regime. However, for $`p>p_c`$, transport is metallic and hopping is not relevant; some other mechanisms involving spins must be operative. The exponential divergence of the magnetoresistance observed at high fields ($`B_{\parallel }>B_{\parallel }^{*}`$), on the other hand, has not been observed in other 2D systems before, and cannot be explained by existing theoretical models. The model by Lee and Ramakrishnan for a weakly disordered system, in which the in-plane magnetoresistance arises from spin splitting, predicts a logarithmic divergence. In the hopping model by Kurobe and Kamimura, the magnetoresistance is expected to saturate when all spins are aligned. In our data, the exponential dependence of $`\rho `$ on $`B_{\parallel }`$ is observed up to our highest field of 14 T, and $`B_0`$, as seen in Fig. 4, varies continuously with $`p`$ across $`p_c`$. Also, it appears that the influence of the parallel magnetic field on the energy structure of our 2DHS is not the cause of such a strong but simple $`B_{\parallel }`$ dependence in both transport regimes. It is possible that this exponential divergence of the magnetoresistance is a phenomenon characteristic of new electronic processes in the 2D system in its clean limit. We thank R. Bhatt, P. Phillips, M. Hilke, S. Papadakis, and Y. Hanein for fruitful discussions. This work is supported by the NSF.

Fig. 1. (a) $`T`$ dependence of $`\rho `$ at $`p=3.7\times 10^{10}cm^{-2}`$ and at $`B_{\parallel }`$=0, 2, 3, 3.5, 4, 4.5, 5, 5.5, 6, and 7 T from the bottom. (b) $`\rho `$ is plotted as a function of $`B_{\parallel }`$ at $`p=1.5\times 10^{10}cm^{-2}`$ and at five different temperatures. The crossing point defines the $`B_{\parallel }^c`$ at 2.1 T, as marked by the arrow. (c) Differential resistivity ($`dV/dI`$) is plotted against $`V`$ at 50 mK and $`p=3.7\times 10^{10}cm^{-2}`$, at magnetic fields $`B_{\parallel }`$=0, 2, 3, 3.5, 4.2, 5, 5.5, 6, and 7 T from the bottom.

Fig. 2. (a) $`B_{\parallel }^c`$ is plotted against $`p-p_c`$. The solid and open circles are for two samples cut from the same wafer. (b) $`B_{\parallel }^c`$ vs $`p`$ on a linear scale. The dotted lines indicate that $`B_{\parallel }^c`$ is approximately linear in the range $`p>2\times 10^{10}cm^{-2}`$, extrapolating to zero at $`p=0`$. (c) $`\rho _c`$ is shown as a function of $`p-p_c`$.

Fig. 3. The $`B_{\parallel }`$ dependence of $`\rho `$ is shown at 50 mK at hole densities, from the bottom: 4.11, 3.23, 2.67, 2.12, 1.63, 1.10, 0.98, 0.89, 0.83, 0.79, 0.75, 0.67, and 0.57$`\times 10^{10}cm^{-2}`$. The solid lines are for $`p>p_c`$ and the open circles for $`p<p_c`$. The solid circles denote the experimentally determined $`B_{\parallel }^c`$’s, and the dashed line is a guide to the eye.
$`B_{\parallel }^{*}`$, the boundary separating the low field and the high field regimes, is marked by the dotted line.

Fig. 4. $`B_0`$ and $`B_1`$, obtained from fitting the data in Fig. 3 in the form $`\rho =\rho _0exp(B_{\parallel }^2/B_0^2)`$ for the low field regime and $`\rho =\rho _1exp(B_{\parallel }/B_1)`$ for the high field regime, are plotted as a function of $`p`$. The dashed lines are guides to the eye. The “critical” density $`p_c`$ is marked by the arrow.
# Online Learning with Ensembles

## Abstract

Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble compared to the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $`f`$ a transform $`\stackrel{~}{f}`$ exists, such that a single student using $`\stackrel{~}{f}`$ has the same generalization behaviour as an ensemble of $`f`$-students.

Online learning, where each training example is presented just once to the student, has proved to be a very successful paradigm in the study of neural networks using methods from statistical mechanics. On the one hand, it makes it possible to rigorously analyze a wide range of learning algorithms. On the other hand, online algorithms can in some cases yield a performance which equals that of the Bayes optimal inference procedure, e.g. asymptotically, when the probability of the data is a smooth function of the parameters of the network. Some problems, however, do remain. For nonsmooth cases, which arise e.g. in classification tasks, the Bayes optimal procedure yields a generalization performance superior to that of online algorithms, even asymptotically. Also, even for smooth problems, the online dynamics often has suboptimal stationary points arising from symmetries in the network architecture. Then the sample size needed to reach the asymptotic regime will scale faster than linearly with the number of free parameters if no prior knowledge is built into the initial conditions of the dynamics. It thus seems of interest to ask which extensions of the online framework make sense. Here we shall consider using an ensemble of students randomized by the choice of initial conditions and classifying a new input by a majority vote. This may be motivated by the fact that in the batch case the Bayes optimal inference procedure can be implemented by an ensemble picked from the posterior (given the training set) distribution on the set of all students. We shall consider an ensemble of $`K`$ students; at time step $`\mu `$ the $`i`$-th student is characterized by a weight vector $`J_i^\mu \in IR^N`$. The learning dynamics is based on a training set of $`\alpha N`$ input/output pairs $`(\xi ^\mu ,\tau ^\mu )`$, where $`\xi ^\mu \in IR^N`$. We shall consider realizable learning in a perceptron, so $`\tau ^\mu =\text{sign}(B^T\xi ^\mu )`$, where $`B`$ is the $`N`$-dimensional weight vector defining the teacher, and it is convenient to assume that $`|B|=1`$ holds for the Euclidean norm. The dynamics of the $`i`$-th student then takes the form $$J_i^{\mu +1}=J_i^\mu +\xi ^\mu N^{-1}f(\mu /N,|J_i^\mu |,B^T\xi ^\mu ,(J_i^\mu )^T\xi ^\mu )$$ (1) and the choice of the real valued function $`f`$ defines the learning rule. Reasonably, $`f`$ may only depend on the third argument $`B^T\xi ^\mu `$ via its sign $`\tau ^\mu `$, but it is not helpful to make this explicit in the notation. Note that all of the members of the ensemble learn from the same training examples, and these are presented in the same order.
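A minimal finite-$`N`$ simulation of the dynamics (1) may help fix ideas. The sketch below uses the perceptron learning rule (the $`k\to 1`$ limit of the soft rule introduced later) and measures the single-student and majority-vote generalization errors empirically on a test set; the sizes and the learning rate are illustrative.

```python
import numpy as np

# Minimal simulation of Eq. (1) for an ensemble of perceptron students:
# update by eta*|J|*tau*xi/N only when a student misclassifies the example.
rng = np.random.default_rng(0)
N, K, alpha_max, eta = 200, 25, 20.0, 1.0

B = rng.normal(size=N); B /= np.linalg.norm(B)        # teacher, |B| = 1
J = rng.normal(size=(K, N))                           # random initial students

for _ in range(int(alpha_max * N)):                   # one fresh example per step
    xi = rng.normal(size=N)
    tau = np.sign(B @ xi)
    err = (np.sign(J @ xi) != tau)                    # misclassifying students
    J[err] += (eta / N) * tau * np.linalg.norm(J[err], axis=1, keepdims=True) * xi

# empirical generalization errors: single student vs majority vote (Eq. (3))
test = rng.normal(size=(5000, N))
tau_t = np.sign(test @ B)
votes = np.sign(np.sign(test @ J.T).sum(axis=1))
print("eps_single   =", np.mean(np.sign(test @ J[0]) != tau_t))
print("eps_ensemble =", np.mean(votes != tau_t))
```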
Assuming that the components of the example inputs are independent random variables picked from the normal distribution on $`IR`$, the state of the ensemble can be described by the order parameters $`R_i(\alpha )=B^TJ_i^{\alpha N}`$ and $`Q_{ij}(\alpha )=(J_i^{\alpha N})^TJ_j^{\alpha N}`$. For a reasonable choice of $`f`$ the order parameters will be nonfluctuating for large $`N`$ and satisfy the differential equations: $`\dot{R_i}`$ $`=\langle yf_i^\alpha \rangle _{x_i,y}`$ $`\dot{Q}_{ij}`$ $`=\langle x_if_j^\alpha +x_jf_i^\alpha +f_i^\alpha f_j^\alpha \rangle _{x_i,x_j,y}`$ $`f_i^\alpha `$ $``$ $`f(\alpha ,Q_{ii}^{\frac{1}{2}},y,x_i),`$ (2) where $`y`$ and the $`x_i`$ are zero mean Gaussian random variables with covariances $`\langle x_iy\rangle =R_i`$ and $`\langle x_ix_j\rangle =Q_{ij}`$. We shall only consider the case where the initial values $`J_i^0`$ are picked independently from the uniform distribution on a sphere with radius $`\sqrt{P(0)}`$. Then for large $`N`$ the initial conditions for (2) are $`R_i(0)=Q_{ij}(0)=0`$ for $`i\ne j`$ and $`Q_{ii}(0)=P(0)`$. These conditions are invariant under permutations of the site indices $`i`$, and this also holds for the system of differential equations (2). Thus this site symmetry will be preserved for all times, and we need only consider the three order parameters $`R(\alpha )=R_i(\alpha )`$, $`P(\alpha )=Q_{ii}(\alpha )`$ and $`Q(\alpha )=Q_{ij}(\alpha )`$ for $`i\ne j`$. Since the length of the students is of little interest, it will often be convenient to consider the normalized overlaps $`r(\alpha )=R(\alpha )/\sqrt{P(\alpha )}`$ and $`q(\alpha )=Q(\alpha )/P(\alpha )`$. A new input $`\xi `$, picked from the same distribution as the training inputs, will be classified by the ensemble using a majority vote, that is by: $$\sigma (\xi )=\text{sign}\left(\sum _{i=1}^K\text{sign}((J_i^{\alpha N})^T\xi )\right).$$ (3) As an alternative to using a majority vote, one might consider constructing a new classifier by averaging the weight vectors of the students, setting $`\overline{J}^{\alpha N}=K^{-1}\sum _iJ_i^{\alpha N}`$. As in Gibbs theory, a simple application of the law of large numbers yields that the two classifiers are equivalent in the large $`K`$ limit if $`q(\alpha )=𝒪(1)`$, that is $`\sigma (\xi )=\text{sign}((\overline{J}^{\alpha N})^T\xi )`$ for almost all inputs. In the sequel we shall only consider the large $`K`$ limit, assuming that $`K\ll N`$ so that the fluctuations in the site symmetry of the initial conditions can be ignored. The generalization error $`ϵ_\mathrm{e}`$ of the ensemble, that is the probability of misclassifying $`\xi `$, is then given by $`ϵ_\mathrm{e}=ϵ(r(\alpha )/\sqrt{q(\alpha )})`$ where $$ϵ(x)=\frac{1}{\pi }\mathrm{arccos}x.$$ (4) Similarly, the generalization error of a single student in the ensemble is $`ϵ_\mathrm{s}=ϵ(r(\alpha ))`$. We shall first consider a soft version of the perceptron learning rule: $$f=\eta |J_i^\mu |H\left(\tau ^\mu \frac{k}{\sqrt{1-k^2}}\frac{(J_i^\mu )^T\xi ^\mu }{|J_i^\mu |}\right)\tau ^\mu ,$$ (5) where $`H(x)=\frac{1}{2}\mathrm{erfc}(x/\sqrt{2})`$ and $`\eta `$ is a time dependent learning rate. For $`k=0`$ this reduces to Hebbian learning, whereas $`k=1`$ yields the perceptron learning rule. Note, however, that the $`|J_i^\mu |`$ prefactor makes the dynamics invariant with respect to the scaling of the student weight vectors.
From (2) one obtains for the order parameters: $`\dot{r}`$ $`={\displaystyle \frac{\eta }{\sqrt{2\pi }}}(1-r^2)-{\displaystyle \frac{\eta ^2}{2}}r\left(ϵ(kr)-{\displaystyle \frac{1}{2}}ϵ(k^2)\right)`$ $`\dot{q}`$ $`={\displaystyle \frac{2\eta }{\sqrt{2\pi }}}r(1-q)+\eta ^2\left((1-q)ϵ(kr)-{\displaystyle \frac{1}{2}}ϵ(k^2q)+{\displaystyle \frac{q}{2}}ϵ(k^2)\right).`$ (6) We first consider the perceptron learning rule, i.e. $`k=1`$. In the limit $`r,q\to 1`$ one finds $`\dot{r}\approx \eta \sqrt{2/\pi }(1-r)-\eta ^2ϵ(r)/2`$ and $`\dot{q}\approx \eta \sqrt{2/\pi }(1-q)-\eta ^2ϵ(q)/2`$, that is, $`r`$ and $`q`$ satisfy the same differential equation. If the learning rate schedule is such that this limit is reached, this means that $`(1-r)/(1-q)`$ will approach $`1`$ for large $`\alpha `$. Hence asymptotically $`ϵ_\mathrm{e}\approx ϵ(\sqrt{r(\alpha )})`$, and the same improvement by a factor of $`1/\sqrt{2}`$ in the generalization error of the ensemble compared to single student performance is found as in Gibbs learning. (Interestingly, the same asymptotic relationship between $`ϵ_\mathrm{e}`$ and $`ϵ_\mathrm{s}`$ also holds for the Adatron learning rule $`f=-\mathrm{\Theta }(-\tau ^\mu (J_i^\mu )^T\xi ^\mu )(J_i^\mu )^T\xi ^\mu `$.) The optimal asymptotics of the learning rate schedule is $`\eta \approx 2\sqrt{2\pi }/\alpha `$, and this yields an $`ϵ_\mathrm{e}\approx \frac{2\sqrt{2}}{\pi \alpha }\approx 0.90/\alpha `$ decay of the ensemble generalization error. This is very close to the $`0.88/\alpha `$ decay found for the optimal single student algorithm. We next consider improving the performance by tuning $`k`$. From (6) one easily sees that single student performance is optimized when $`k=r`$. Asymptotically this may be achieved by setting $`k\approx 1-4/\alpha ^2`$ and choosing the optimal learning schedule, which is asymptotically the same as for the standard perceptron learning rule. Then already a single student achieves $`ϵ_\mathrm{s}\approx \frac{2\sqrt{2}}{\pi \alpha }`$, that is, the same large $`\alpha `$ behaviour as the ensemble in the $`k=1`$ case. Unfortunately $`r`$ and $`q`$ now have different asymptotics and one finds $`1-q\ll 1-r`$. So for all practical purposes the ensemble collapses to a single point, and for large $`\alpha `$ to leading order $`ϵ_\mathrm{e}\approx ϵ_\mathrm{s}`$. It is of course not clear that optimizing single student performance is a good idea, and we thus analyze more generic schedules, setting $`k\approx 1-(\lambda /\alpha )^2`$. Figure 1 then, however, shows that the two cases considered above are optimal for ensemble and single student performance, respectively. The above analysis of the soft perceptron rule suggests that while for some rules using an ensemble does significantly improve on single student performance, for more optimized rules this may no longer be the case. We shall now prove that the generalization error of the optimal single student learning rule is also a lower bound on the ensemble performance for any learning rule $`f`$. To achieve this, a learning rule $`\stackrel{~}{f}`$ will be given which for each pattern yields the ensemble average of $`f`$. Then a single student $`\stackrel{~}{J}^\mu `$ using $`\stackrel{~}{f}`$ will have generalization behaviour equal to that of a large ensemble of students using $`f`$.
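The statements above can be checked by integrating the order-parameter equations (6) numerically. The following sketch does this for $`k=1`$ with the schedule $`\eta =2\sqrt{2\pi }/\alpha `$ and prints the late-time ratio $`ϵ_\mathrm{e}/ϵ_\mathrm{s}`$, which should approach $`1/\sqrt{2}0.71`$; the initial conditions are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate Eqs. (6) for the perceptron rule (k = 1) with the optimal
# late-time schedule eta = 2*sqrt(2*pi)/alpha; eps(x) = arccos(x)/pi (Eq. (4)).
eps = lambda x: np.arccos(np.clip(x, -1.0, 1.0)) / np.pi

def rhs(a, y, k=1.0):
    r, q = y
    eta = 2.0 * np.sqrt(2.0 * np.pi) / a
    dr = (eta / np.sqrt(2 * np.pi)) * (1 - r**2) \
         - 0.5 * eta**2 * r * (eps(k * r) - 0.5 * eps(k**2))
    dq = (2 * eta / np.sqrt(2 * np.pi)) * r * (1 - q) \
         + eta**2 * ((1 - q) * eps(k * r) - 0.5 * eps(k**2 * q) + 0.5 * q * eps(k**2))
    return [dr, dq]

sol = solve_ivp(rhs, [1.0, 2000.0], [0.3, 0.0], rtol=1e-9, atol=1e-12)
r, q = sol.y[:, -1]
print("eps_s =", eps(r), " eps_e =", eps(r / np.sqrt(q)),
      " ratio =", eps(r / np.sqrt(q)) / eps(r))   # ratio -> 1/sqrt(2)
```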
The dynamics for $`\stackrel{~}{J}^\mu `$ may be written as $$\stackrel{~}{J}^{\mu +1}=\stackrel{~}{J}^\mu +\xi ^\mu N^{-1}\stackrel{~}{f}(\mu /N,B^T\xi ^\mu ,(\stackrel{~}{J}^\mu )^T\xi ^\mu )$$ (7) where $`\stackrel{~}{f}`$ is the following integral transform of $`f`$: $$\stackrel{~}{f}(\alpha ,y,\stackrel{~}{x})=\langle f(\alpha ,P(\alpha )^{\frac{1}{2}},y,\stackrel{~}{x}+(P(\alpha )-Q(\alpha ))^{\frac{1}{2}}z)\rangle _z.$$ (8) Here the distribution of $`z`$ is normal. The entire procedure is quite intuitive: $`\stackrel{~}{J}^\mu `$ represents the center of mass of the ensemble and $`(\stackrel{~}{J}^\mu )^T\xi ^\mu +(P(\alpha )-Q(\alpha ))^{\frac{1}{2}}z`$ is a guess for the value of the hidden field $`(J_i^\mu )^T\xi ^\mu `$ of one of the ensemble members. For large $`K`$ the distributions of the last two quantities will be the same, and the ensemble average of $`f`$ can be reliably predicted. Further, note that the class of soft perceptron rules (5) is invariant under the integral transform (8) since $`\langle H(a+bz)\rangle _z=H(a/\sqrt{1+b^2})`$. This explains why optimizing single student and optimizing ensemble performance within this class yields the same generalization behaviour. To demonstrate that $`\stackrel{~}{J}^\mu `$ does indeed emulate the large ensemble, consider the order parameters $`\stackrel{~}{R}(\alpha )=B^T\stackrel{~}{J}^{\alpha N}`$ and $`\stackrel{~}{Q}(\alpha )=|\stackrel{~}{J}^{\alpha N}|^2`$. We shall start with $`\stackrel{~}{J}^0=0`$, thus $`\stackrel{~}{R}(0)=R(0)=\stackrel{~}{Q}(0)=Q(0)=0`$, and it will suffice to show that the pair $`\stackrel{~}{R},\stackrel{~}{Q}`$ satisfies the same differential equations as the pair $`R,Q`$. From (2) we obtain for $`Q`$: $$\dot{Q}=\langle 2x_if(\alpha ,P(\alpha )^{\frac{1}{2}},y,x_j)+f(\alpha ,P(\alpha )^{\frac{1}{2}},y,x_i)f(\alpha ,P(\alpha )^{\frac{1}{2}},y,x_j)\rangle _{y,x_i,x_j}$$ (9) where $`i`$ and $`j`$ are any two different indices. The Gaussians $`x_i`$ and $`x_j`$ may be rewritten in terms of normal random variables $`z_i,z_j`$ and $`z`$, independent of each other and of $`y`$, as $$x_i=\sqrt{P-Q}z_i+\sqrt{Q-R^2}z+Ry\text{ and }x_j=\sqrt{P-Q}z_j+\sqrt{Q-R^2}z+Ry.$$ (10) Carrying out the integrations over $`z_i`$ and $`z_j`$ in (9) yields $$\dot{Q}=\langle 2\stackrel{~}{x}\stackrel{~}{f}(\alpha ,y,\stackrel{~}{x})+\stackrel{~}{f}(\alpha ,y,\stackrel{~}{x})^2\rangle _{y,z},$$ (11) where $`\stackrel{~}{x}\equiv \sqrt{Q-R^2}z+Ry`$. The variance of $`\stackrel{~}{x}`$ is $`Q`$ and its covariance with $`y`$ is $`R`$. Applying (2) to $`\stackrel{~}{J}^\mu `$ yields $`\dot{\stackrel{~}{Q}}=\langle 2\stackrel{~}{x}\stackrel{~}{f}(\alpha ,y,\stackrel{~}{x})+\stackrel{~}{f}(\alpha ,y,\stackrel{~}{x})^2\rangle _{y,\stackrel{~}{x}}`$, where the variance of $`\stackrel{~}{x}`$ is $`\stackrel{~}{Q}`$ and its covariance with $`y`$ is $`\stackrel{~}{R}`$. Thus $`Q`$ and $`\stackrel{~}{Q}`$ satisfy the same differential equation, and an analogous argument shows that the same holds for $`R`$ and $`\stackrel{~}{R}`$. It is interesting to ask whether the above equivalence between ensemble and single student behaviour carries over to more general situations. Let us first consider allowing interactions between the ensemble members. In this case much more complicated scenarios can arise. However, if one only considers global and symmetric interactions between the ensemble members, an equivalent single student rule will often exist. To be specific, assume that $`f`$ may in addition depend on the output of the entire ensemble (3).
This just amounts to allowing $`f`$ to depend on the random variable $`\stackrel{~}{x}`$, and with only minor modifications the above construction will again yield an equivalent single student rule. Next consider more general architectures than the simple perceptron. It is straightforward to generalize the construction to the case of a tree committee machine: one just has to carry out an integration analogous to (8) per branch of the tree. The case of the tree parity machine, however, is more involved since, due to a gauge symmetry, students with differing weight vectors can implement the same function. Thus averaging the output of the ensemble members (3) may no longer be equivalent to averaging the weight vectors. But it is straightforward to break the symmetry in a formal way by adding a small deterministic drift term of the form $`B\delta N^{-1}`$ to the update equations (1) of each branch. Then for $`\delta >0`$ the same procedure as for the tree committee will yield an equivalent single student rule. In the end, one will of course want to take the limit $`\delta \to 0`$. In this limit, however, for a training set size which is on the order of the number of free parameters in a single student, only a trivial generalization behaviour will result. So this procedure does not allow us to make any statement about the equivalence between ensemble and single student performance for the large training sets needed to achieve a nontrivial behaviour. It does, however, show that the pathological divergence of the training times which results from the symmetry cannot be overcome by the use of an ensemble. Similar remarks as for the tree parity machine apply to fully connected architectures. So, in sharp contrast to batch learning, where ensemble performance is often superior to single student performance, in online learning one cannot improve on optimal single student performance through an ensemble. But obviously, if the state space of the learning system were large enough to store the entire training set, online learning would reduce to the batch case. So an ensemble may simply not be an effective way of making use of a state space which is larger than in the case of a single student, and future research should investigate more efficient strategies of utilizing a large state space. It is a pleasure to acknowledge helpful discussions with Manfred Opper and David Saad.
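The Gaussian identity $`\langle H(a+bz)\rangle _z=H(a/\sqrt{1+b^2})`$ used above is easy to verify numerically; the following small Monte Carlo sketch does so for a few illustrative $`(a,b)`$ pairs.

```python
import numpy as np
from scipy.special import erfc

# Monte Carlo check of <H(a + b z)>_z = H(a / sqrt(1 + b^2)) for standard
# normal z, the identity behind the invariance of the soft perceptron rules.
H = lambda x: 0.5 * erfc(x / np.sqrt(2.0))
rng = np.random.default_rng(1)
z = rng.normal(size=2_000_000)
for a, b in [(0.7, 1.3), (-0.4, 0.5), (2.0, 2.0)]:
    mc = H(a + b * z).mean()
    exact = H(a / np.sqrt(1.0 + b**2))
    print(f"a={a:+.1f}, b={b:.1f}:  MC = {mc:.4f},  closed form = {exact:.4f}")
```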
# Ordering in the dilute weakly-anisotropic antiferromagnet $`Mn_{0.35}Zn_{0.65}F_2`$ F. C. Montenegro,<sup>1</sup> D. P. Belanger,<sup>2</sup> Z. Slanič,<sup>2</sup> and J. A. Fernandez-Baca<sup>3</sup> <sup>1</sup>Departamento de Fisica, Universidade Federal de Pernambuco, 50670-901 Recife PE, Brasil <sup>2</sup>Department of Physics, University of California, Santa Cruz, CA 95064 USA <sup>3</sup>Solid State Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6393 USA ## Abstract The highly diluted antiferromagnet $`Mn_{0.35}Zn_{0.65}F_2`$ has been investigated by neutron scattering in zero field. The Bragg peaks observed below the Néel temperature ($`T_N\approx 10.9`$ K) indicate stable antiferromagnetic long-range ordering at low temperature. The critical behavior is governed by random-exchange Ising model critical exponents ($`\nu \approx 0.69`$ and $`\gamma \approx 1.31`$), as reported for $`Mn_xZn_{1-x}F_2`$ with higher $`x`$ and for the isostructural compound $`Fe_xZn_{1-x}F_2`$. However, in addition to the Bragg peaks, unusual scattering behavior appears for $`|q|>0`$ below a glassy temperature $`T_g\approx 7.0`$ K. The glassy region $`T<T_g`$ corresponds to that of noticeable frequency dependence in earlier zero-field ac susceptibility measurements on this sample. These results indicate that long-range order coexists with short-range nonequilibrium clusters in this highly diluted magnet. Diluted uniaxial antiferromagnets have been extensively studied as physical realizations of theoretical models of random magnetism, including those pertaining to percolation phenomena. For three dimensions ($`d=3`$), two of the most extensively studied examples are the rutile compounds $`Mn_xZn_{1-x}F_2`$ and $`Fe_xZn_{1-x}F_2`$. These two systems differ effectively only in the strength and nature of the anisotropy, providing a unique opportunity to explore the role of anisotropy in the ordering of dilute magnets at low temperature. In $`Mn_xZn_{1-x}F_2`$ the anisotropy is dipolar in origin. In $`Fe_xZn_{1-x}F_2`$ the anisotropy is an order of magnitude greater for $`x=1`$ because of the additional crystal field contribution. In many experiments with the magnetic concentration, $`x`$, well above the percolation threshold concentration $`x_p=0.245`$, the behaviors for $`H=0`$ are qualitatively similar for $`Mn_xZn_{1-x}F_2`$ and $`Fe_xZn_{1-x}F_2`$. Antiferromagnetic (AF) long-range order (LRO) at low temperatures and characteristic random-exchange Ising critical behavior have been observed in the $`Fe_xZn_{1-x}F_2`$ compounds for $`x\ge 0.31`$. Similar random-exchange Ising model (REIM) behavior is found in the $`Mn_xZn_{1-x}F_2`$ system for $`x>0.4`$. For small fields applied parallel to the uniaxial direction and reasonably small magnetic dilution, the diluted antiferromagnet in a field (DAFF) is expected to show critical behavior belonging to the same universality class as the random-field Ising model (RFIM) for the ferromagnet, the latter being the model most used in simulations. Indeed, for all measured samples of both systems for which the REIM character is found at $`H=0`$, the application of a small field parallel to the easy axis generates critical behavior compatible with the predicted REIM to RFIM crossover scaling.
In spite of the evidence supporting the DAFF as a realization of the RFIM, some non-equilibrium features inherent to DAFF compounds, and also the newly explored field limits of the weak RFIM problem in $`d=3`$, make the nature of the phase transition at $`T_c(H)`$ still a matter of considerable controversy. Under strong random fields (corresponding to large $`H`$) and also close to the percolation threshold, the phase diagrams of DAFF's have proven to be much more complicated than originally anticipated. For large $`H`$, AF LRO is predicted to become unstable. The generation of strong random fields induces a glassy phase in the upper part of the ($`H`$,$`T`$) phase diagram of $`d=3`$ Ising DAFF's. The equilibrium boundary, $$T_{eq}(H)=T_N-bH^2-C_{eq}H^{2/\varphi },$$ (1) above which hysteresis is not observed, has a convex shape at high $`H`$ ($`\varphi >2`$), instead of the concave ($`\varphi =1.4`$) curvature seen at low field (where the REIM to RFIM crossover occurs). This change of curvature in $`T_{eq}`$ vs. $`H`$ was first observed by magnetization measurements in $`Fe_{0.31}Zn_{0.69}F_2`$. Faraday rotation and neutron scattering experiments on a sample with the same $`x`$ confirmed the REIM to RFIM crossover scaling at low $`H`$ and the lack of stability of the AF LRO at large $`H`$, giving way to a random-field induced glassy phase in this highly diluted compound. Recent magnetization measurements indicated that similar structure in the phase diagram exists at very high fields for samples with higher values of $`x`$. At still higher concentration the low temperature hysteresis observed for $`x<0.8`$ is absent. The magnetic features observed at large $`H`$ in samples of $`Fe_xZn_{1-x}F_2`$ in the concentration range $`0.3<x<0.6`$ contrast with the behavior of the weakly anisotropic system $`Mn_xZn_{1-x}F_2`$ for intermediate $`x`$, where a strong $`H`$ induces a spin-flop phase. This distinct behavior may be solely a consequence of the stronger Ising character of the former system. In the strong dilution regime ($`x\gtrsim x_p`$), a number of magnetic features lead us to distinguish between these two systems as well. For $`x\le 0.27`$, no long range order is observed in $`Fe_xZn_{1-x}F_2`$. Typical spin-glass behavior was found in a sample with $`x=0.25`$, although recent works suggest non-critical dynamics for $`x`$ close to $`x_p`$ in this system. Close to the percolation threshold even a minute exchange frustration is a suitable mechanism for the spin-glass phase in $`Fe_xZn_{1-x}F_2`$, as supported by local mean-field simulations. For Ising systems, it is also expected that the dynamics even at zero field should be extremely slow. In $`Mn_xZn_{1-x}F_2`$, ac susceptibility measurements indicate spin-glass clustering at low temperatures for samples with $`Mn`$ concentrations $`0.2<x<0.35`$. Earlier neutron scattering studies suggest, however, that at $`H=0`$ the line of AF-paramagnetic (P) continuous phase transitions terminates at $`T=0`$ at $`x=x_p`$, in stark contrast to the behavior of $`Fe_{0.25}Zn_{0.75}F_2`$. In light of this contrast, the influence of the frozen spin-glass clusters on the stability of the AF LRO for $`x`$ close to $`x_p`$ in $`Mn_xZn_{1-x}F_2`$ is an important question that motivated the present work. The dipolar anisotropy of this weakly anisotropic system is expected to become random in strength and direction as $`x`$ decreases, in contrast to the $`x`$-independent single-ion anisotropy of $`Fe_xZn_{1-x}F_2`$.
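To make the change of curvature of Eq. (1) concrete, the following sketch simply tabulates the equilibrium boundary for a concave, low-field exponent ($`\varphi =1.4`$) and a convex, high-field one ($`\varphi >2`$). All parameter values are hypothetical, chosen only for illustration, and are not fitted values from the experiments cited above:

```python
import numpy as np

# Hypothetical parameters for Eq. (1); illustrative only, not fits.
T_N, b, C_eq = 10.9, 0.02, 0.5

def T_eq(H, phi):
    # equilibrium boundary of Eq. (1): T_eq(H) = T_N - b*H**2 - C_eq*H**(2/phi)
    return T_N - b * H**2 - C_eq * H**(2.0 / phi)

H = np.linspace(0.0, 5.0, 6)
for phi in (1.4, 3.0):   # low-field crossover value vs. a high-field value
    print(f"phi = {phi}:", np.round(T_eq(H, phi), 2))
```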
In the case of $`Mn_xZn_{1-x}F_2`$ under strong dilution, applying results from numerical simulations of Ising systems is of course not warranted. Any differences observed between this system and $`Fe_{0.31}Zn_{0.69}F_2`$ must certainly reflect the difference in anisotropy, and this may give a window onto the understanding of the general phase diagrams for dilute anisotropic antiferromagnets in applied fields. In this study we performed zero-field neutron scattering experiments on $`Mn_{0.35}Zn_{0.65}F_2`$ to verify the existence of a stable long-range ordered antiferromagnetic phase below a critical temperature $`T_N\approx 10.9`$ K, where REIM critical exponents $`\nu \approx 0.69`$ and $`\gamma \approx 1.31`$ govern the behavior, and to investigate the dynamic features of the system at low temperature. An unusual scattering behavior appears, for $`|q|>0`$, below $`T_g\approx 7.0`$ K, corresponding to the region where earlier ac susceptibility studies indicated a noticeable frequency dependence in the real part of the susceptibility in the absence of an external field. The results indicate that long-range order coexists with non-equilibrium clusters in this highly diluted system. The neutron scattering experiments were performed at the Oak Ridge National Laboratory using the HB2 spectrometer in a two-axis configuration at the High Flux Isotope Reactor. We used the (002) reflection of pyrolytic graphite to monochromate the beam at $`14.7`$ meV. The collimation was 60 minutes of arc before the monochromator, 40 between the monochromator and sample, and 40 after the sample. A pyrolytic graphite filter reduced higher-energy neutron contamination. The c-axis of the crystal was vertical and parallel to the applied field. A small mosaic was observed from the Bragg peak scans at low temperature, with roughly a half-width of $`0.2`$ degrees of arc or $`0.0035`$ reciprocal lattice units (rlu). The mosaic was incorporated into the resolution correction by numerically convoluting the measured resolution functions, including the mosaic, with the line shapes used in the data fits. Most of the scans taken were (1 q 0) transverse scans. For simplicity, the line shapes used in the fits to the data are of the mean-field form $$S(q)=\frac{A}{q^2+\kappa ^2}+M_s^2\delta (q),$$ (2) where $`\kappa =1/\xi `$ is the inverse fluctuation correlation length and $`M_s`$ is the Bragg scattering from the long-range staggered magnetization. The critical power-law behaviors are expected to be $`\kappa =\kappa _o^{\pm }|t|^\nu `$, $`\chi =A\kappa ^{-2}=\chi _o^{\pm }|t|^{-\gamma }`$ and $`M_s=M_o|t|^\beta `$, where $`M_o`$ is non-zero only for $`t<0`$. The exponents $`\nu `$, $`\gamma `$, and $`\beta `$ and the amplitude ratios $`\kappa _o^+/\kappa _o^-`$ and $`\chi _o^+/\chi _o^-`$ are universal parameters characterizing the random-exchange Ising model. The sample was wrapped in aluminum foil and mounted on an aluminum cold finger. A calibrated carbon resistor was used to measure the temperature. Transverse scans, taken after quenching to low temperature ($`T=5`$ K) and subsequently heating the sample, are shown in Fig. 1. For clarity, the data for the range $`|q|<0.008`$ rlu, which spans the Bragg scattering component, are not shown. For the most part, the scans are quite consistent with what is expected for a phase transition occurring near $`T=11`$ K. However, a most unusual feature of the line shapes is evident in the data at the lowest temperature, $`T=5`$ K.
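As an illustration of the fitting procedure described next, here is a minimal sketch of extracting $`\kappa `$ and $`\chi =A/\kappa ^2`$ from a transverse scan by fitting the Lorentzian term of Eq. (2). The scan is synthetic, with a hypothetical amplitude and width, and the resolution convolution and mosaic correction used in the actual analysis are omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, A, kappa):
    # Lorentzian term of Eq. (2); the Bragg delta-function piece is handled
    # by excluding the region around q = 0, as in the fits described below.
    return A / (q**2 + kappa**2)

rng = np.random.default_rng(1)
q = np.linspace(-0.19, 0.19, 120)
q = q[np.abs(q) > 0.008]                  # remove the Bragg region (|q| < 0.008 rlu)
A_true, kappa_true = 2.0e-3, 0.03         # hypothetical values, arbitrary units
S = lorentzian(q, A_true, kappa_true) * (1 + 0.05 * rng.standard_normal(q.size))

(A_fit, kappa_fit), _ = curve_fit(lorentzian, q, S, p0=(1e-3, 0.05))
chi_stag = A_fit / kappa_fit**2           # staggered susceptibility chi = A / kappa^2
print(abs(kappa_fit), chi_stag)           # kappa enters squared, so report |kappa|
```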
At $`T=5`$ K the broad line shape indicates a great deal of short-range order present upon quenching. The short-range order is evident in the scans with $`T<7`$ K. The scan at $`T=6`$ K shows a striking asymmetry, as shown in Fig. 2. Since the scans were taken with increasing $`q`$ from $`-0.19`$ to $`0.19`$ rlu and each measurement took about $`35`$ seconds, the asymmetry is an indication that the short-range order is rapidly decreasing with time, i.e. the system is equilibrating. The slow relaxation for $`T<7`$ K corresponds very well to the large frequency dependence observed using ac susceptibility in the same sample for $`T<7`$ K. A transition to antiferromagnetic long-range order is indicated by the presence of a resolution limited Bragg scattering peak which decreases sharply as $`T`$ approaches $`T_N\approx 11`$ K. As $`T\rightarrow T_N`$ from above, the width of the non-Bragg scattering component decreases and the $`q=0`$ intensity increases. Similarly, as $`T\rightarrow T_N`$ from below, the width decreases and the $`q=0`$ intensity increases. Such behavior is typical of an antiferromagnetic phase transition. To fit the data, we used the Lorentzian term in Eq. (2), convoluted with the instrumental resolution. The data for $`|q|<0.008`$ rlu were excluded from the fits to the Lorentzian term to avoid Bragg scattering. The results of the fits yield $`\kappa (T)`$ and the staggered susceptibility $`\chi (T)=A/\kappa ^2`$. The results for $`\kappa `$ are shown in Fig. 3, along with the expected random-exchange critical behavior, indicated by the solid curves with $`\nu =0.69`$ and $`\kappa _o^+/\kappa _o^-=0.69`$. The overall amplitude of the solid curves is adjusted to approximately follow the data. A clear minimum $`\kappa \approx 0.017`$ rlu is observed in the fitted values near $`T_N`$, indicating significant rounding due to a concentration gradient in the crystal. The gradient rounding is most likely the cause of the deviations of the data from the fit away from the minimum as well. Nevertheless, the present data are plausibly consistent with random-exchange critical behavior when the significant rounding due to the concentration gradient is taken into account. Results for the logarithm of $`\chi `$ vs. $`T`$ are shown in Fig. 4. The random-exchange behavior, with $`\gamma =1.31`$ and $`\chi _o^+/\chi _o^-=2.8`$ and with the overall amplitude adjusted to approximately fit the data, is shown as the solid curves. The maximum in the data and the systematic deviations from the fit are indications of a significant gradient in the concentration, as we discussed with respect to Fig. 3. Again the data are fairly consistent with a concentration-rounded random-exchange transition to antiferromagnetic long-range order. The Bragg intensity, obtained by subtracting the fitted Lorentzian scattering intensity from the total $`q=0`$ scattering intensity, is shown vs. $`T`$ in Fig. 5. Once again, the data are fairly consistent with a random-exchange transition ($`\beta =0.35`$) near $`T=11`$ K, represented by the solid curve. The nonzero Bragg component above $`T=11`$ K is probably attributable mainly to concentration gradient effects. The precise shape of the Bragg scattering intensity vs. $`T`$ in Fig. 5 must not be taken too seriously, particularly at low $`T`$, since it is known that severe extinction effects distort the behavior by saturating the measured value.
In addition, for $`T<7`$ K, the sample shows nonequilibrium effects since it was quenched, as described above, and the magnitude of the Bragg scattering component might well be smaller than if the sample were in equilibrium. The large Bragg scattering component well below $`T_N`$, along with the minimum in $`\kappa `$ and the maximum in $`\chi `$ near $`T_N`$, strongly indicate an antiferromagnetically ordered phase. Previous magnetization studies indicate a de Almeida-Thouless-like (AT) curve in the $`H`$-$`T`$ phase diagram. The $`H=0`$ endpoint of this boundary coincides reasonably well with the antiferromagnetic phase transition observed with neutron scattering. In conclusion, we have shown neutron scattering evidence that this system, $`Mn_{0.35}Zn_{0.65}F_2`$, orders near $`T=11`$ K in a way consistent with the REIM model. In addition, significant relaxation takes place for $`T<7`$ K. This is consistent with previous magnetization measurements and demonstrates that only part of the system develops long-range order when the system is quenched to low temperatures. This behavior is consistent with clusters coexisting with long-range order below $`T_N`$. A similar glassy low temperature region has been identified in the anisotropic system $`Fe_{0.31}Zn_{0.69}F_2`$ using magnetization and dynamic susceptibility measurements. However, the broad line shapes that indicate the glassy behavior were not observed in $`Fe_{0.31}Zn_{0.69}F_2`$ with neutron scattering techniques. It is interesting that neutron scattering measurements at the percolation threshold in $`Mn_{0.25}Zn_{0.75}F_2`$ did not indicate any glassy behavior, in contrast to $`Fe_{0.25}Zn_{0.75}F_2`$. This should be investigated further. This work has been supported by DOE Grant No. DE-FG03-87ER45324 and by ORNL, which is managed by Lockheed Martin Energy Research Corp. for the U.S. DOE under contract number DE-AC05-96OR22464. One of us (F.C.M.) also acknowledges the support of CAPES, CNPq, FACEPE and FINEP (Brazilian agencies).
# Lensing Induced Cluster Signatures in Cosmic Microwave Background ## 1. Introduction Fluctuations in the Cosmic Microwave Background (CMB) are believed to originate from the era of hydrogen recombination at a redshift of $`z\approx 1100`$. Before recombination photons and electrons were tightly coupled via Thomson scattering, while afterwards electrons were bound to protons in hydrogen and photons were allowed to propagate freely through the universe. Already before, and especially during, recombination the coupling was not perfect, leading to erasure of fluctuations in the CMB on small scales. As a result these primary fluctuations are expected to be very smooth on scales below 10'. On very small scales the CMB can be considered as a simple gradient. A mass concentration in front of such a gradient gravitationally deflects the light. This deflection causes a fluctuation in the CMB temperature, which is determined by the mapping between unperturbed and perturbed photon positions (see also Kosowsky et al. 1999). This small scale power is preferentially generated in the regions of high gradient of the primary CMB anisotropies. The effect can be generated by any mass concentrations along the line of sight, such as galaxy halos, clusters and superclusters. In this paper we concentrate on clusters, which, being massive and extended, may generate a particularly strong effect. They are thus the primary candidates for detection of this effect on individual objects, as opposed to the statistical detection discussed in Zaldarriaga and Seljak (1999a,b). The purpose of this paper is to analyze their imprint on the microwave sky by analyzing a number of simple cluster profiles and to discuss its detectability for realistic observational scenarios. It should be stressed that this gravitational lensing effect is different from the lensing effect of a cluster discussed in Zaldarriaga and Seljak (1999a). There the CMB was viewed as a collection of peaks, with a well determined distribution of shapes and sizes in Gaussian models. These will be distorted as they pass by a large massive object, generating a coherent ellipticity or size distortion which can be identified by averaging over a sufficient number of independent patches. By averaging over the CMB the lensing effect can be isolated and a cluster density profile can be reconstructed (Zaldarriaga and Seljak 1999a). In practice this requires the presence of small scale CMB fluctuations at detectable levels, and these are not likely to originate from primary anisotropies. Secondary processes and foregrounds reviewed in this paper could provide the small scale power required, although the level of these small scale fluctuations is still uncertain at present. In principle this would provide an alternative method to reconstruct the cluster density profile, in addition to the one discussed here. Given the uncertain level of secondary anisotropies we will in the remainder of this paper ignore this possibility, adopting the conservative position that secondary anisotropies are only a source of confusion to the signal one is trying to isolate. ## 2. Lensing effect of cluster on CMB The measured temperature field $`T(𝜽)`$ at observed position $`𝜽`$ originates from some unlensed position $`𝜽^{}`$ of the CMB field at the last scattering surface, $`\stackrel{~}{T}(𝜽^{})`$.
The relation between the two is given through the deflection angle of the CMB photons $`\delta 𝜽`$, $$T(𝜽)=\stackrel{~}{T}(𝜽^{})=\stackrel{~}{T}(𝜽-\delta 𝜽)\approx \stackrel{~}{T}(𝜽)-\delta 𝜽\cdot \nabla \stackrel{~}{T}(𝜽).$$ (1) In the second line we expanded the temperature using a linear expansion, valid on scales below the coherence length of the CMB gradient, which is of the order of 15' for typical models in a flat universe. On scales below that, the primary anisotropies are expected to have negligible power. In this case we can treat the unlensed temperature field as a pure gradient. We are ignoring all the secondary anisotropies and foregrounds generated along the line of sight that will contribute to fluctuations on these small scales. These act as a source of noise and are discussed later in the paper. We choose $`𝜽=(\theta _x,\theta _y)=(\theta \mathrm{cos}\varphi _\theta ,\theta \mathrm{sin}\varphi _\theta )`$ to be the observed position in the sky, with origin at the cluster center. The derivative of the deflection angle with respect to $`𝜽`$ is the shear tensor, which can be decomposed into its trace part, $`2\kappa `$, and two shear components $`\gamma _1`$ and $`\gamma _2`$. The convergence $`\kappa `$ is dimensionless and can be expressed in terms of the projected density $`\mathrm{\Sigma }`$ as $`\kappa =\mathrm{\Sigma }/\mathrm{\Sigma }_{cr}`$, where $$\mathrm{\Sigma }_{cr}=\frac{c^2D_{OS}}{4\pi GD_{OL}D_{LS}}.$$ Here $`D_{LS}`$ is the angular diameter distance from the lens to the source, $`D_{OS}`$ that between the observer and the source and $`D_{OL}`$ that between the observer and the lens. We may parameterize the density profile of the cluster in units of a characteristic length scale $`r_s`$ as $`\rho (x)`$, where $`x=r/r_s`$ and $`r`$ is the radius. When we measure angles in units of $`\theta _s=r_s/D_{OL}`$, so that $`x=r/r_s=\theta /\theta _s`$, the deflection angle scales as $`\delta \theta \propto m(x)/x`$, where $`m(x)`$ is the mass enclosed within the projected radius $`x`$. Without loss of generality we may take the gradient to be along the $`y`$ axis with an amplitude $`\stackrel{~}{T}_{y0}`$. The observed temperature becomes $$T(𝜽)=\stackrel{~}{T}_{y0}(\theta _y-\delta \theta _y).$$ (2) In the absence of deflection $`\delta \theta _y`$ one would measure a pure gradient. Any small scale deviation from it is a signature of the deflection $`\delta \theta _y`$. If we measure the value of the large scale gradient by filtering out the small scales contaminated by the cluster lensing, we know where a certain value of the CMB anisotropy should have come from in the absence of deflection. The difference between the expected and measured position is a direct measurement of $`\delta \theta _y`$, and so of the gravitational effect of the cluster. The effect of lensing by a cluster on the CMB can be understood with the help of figure 1. In the absence of lensing we would observe just the gradient. Because of the lensing effect of the cluster, the light rays will be deflected radially, so that for $`\theta _y>0`$ the rays are coming from a lower value of $`\theta _y`$ at the last scattering surface. If the gradient is positive this implies that for $`\theta _y>0`$, in the presence of the cluster, we would observe a lower temperature than what would be observed if the cluster were not there. The opposite is true for $`\theta _y<0`$. Far away from the cluster the lensed temperature should coincide again with the gradient. Thus the cluster creates a wiggle on top of the large scale gradient.
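A minimal numerical sketch of this wiggle, combining Eq. (2) with an SIS-like radial deflection $`\delta \theta _y=b\theta _y/\theta `$ introduced in the next section; the gradient amplitude and deflection scale are representative numbers quoted in the text, and the grid and sign conventions are our own:

```python
import numpy as np

T_y0 = 13.0     # CMB gradient amplitude, microK per arcmin (value quoted in the text)
b = 1.0         # SIS deflection scale in arcmin (sigma_v ~ 1400 km/s, see below)

theta_x, theta_y = np.meshgrid(np.linspace(-10, 10, 201),
                               np.linspace(-10, 10, 201))
theta = np.hypot(theta_x, theta_y)
theta[theta == 0] = 1e-6                      # avoid division by zero at the center

delta_theta_y = b * theta_y / theta           # radial SIS deflection, y-component
T_lensed = T_y0 * (theta_y - delta_theta_y)   # Eq. (2)
wiggle = T_lensed - T_y0 * theta_y            # cluster-induced distortion of the gradient
print(wiggle.min(), wiggle.max())             # roughly -b*T_y0 .. +b*T_y0, ~13 microK here
```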
It is important to stress that the method proposed here is sensitive to one component of the deflection angle, and not to the shear or magnification, as is the case for the usual weak lensing reconstruction from background galaxy ellipticities or magnitudes. It is sometimes argued that we can never measure the deflection angle in a lensing system because we do not know the original position of the background image. In this case we can get around this argument because we know that the background image is a gradient, which we can measure on scales larger than the cluster. Although both shear and deflection angle are sensitive to the cluster mass profile, the latter involves one derivative of the gravitational potential less than the former. As such it is less sensitive to small scale fluctuations in the cluster profile and more sensitive to the outer parts of the cluster, as discussed below. Another important point is that the effect discussed here is proportional to the gradient $`\stackrel{~}{T}_{y0}`$. This provides a unique signature which we may use to separate it from other sources of anisotropies. It also implies that one should select the clusters on which to look for this effect not only on the basis of the strength of the gravitational lensing signal, but also on the basis of the amplitude of the CMB gradient at that position. MAP or some other CMB experiment with 15' resolution could provide such information. ### 2.1. Singular isothermal sphere For a singular isothermal sphere the density scales as $`\rho \propto r^{-2}`$. In this case the deflection angle is constant in magnitude, $$\delta \theta _y=b\frac{\theta _y}{\theta },$$ (3) where $`b=4\pi (\sigma _v/c)^2D_{LS}/D_{OS}\approx 1^{}(\sigma _v/1400\mathrm{km}/\mathrm{s})^2D_{LS}/D_{OS}`$, with $`\sigma _v`$ the cluster velocity dispersion. With the source at $`z\approx 1100`$ we may assume $`D_{OS}\approx D_{LS}`$, which in an Einstein-de Sitter universe gives $`D_{LS}/D_{OS}\approx 1/\sqrt{1+z_L}`$. The mass for a singular isothermal profile grows linearly with radius, so in the outer parts the profile must turn over to a steeper slope. We will adopt the profile $$\rho (x)\propto [x^2(1+x)]^{-1},$$ (4) where the slope in the outer parts of the cluster has been matched to the NFW profile discussed below. We have numerically integrated the equations above to compute the mass within a given radius and the deflection angle. These are shown together with the surface density $`\kappa `$ in figure 2, where it can be seen that $`\kappa `$ drops with radius much more rapidly than $`\delta \theta `$ does. In figure 3 we show the signature of the effect on the CMB itself. We have subtracted out the gradient term. We focus on the temperature as a function of $`\theta _y`$ for a fixed $`\theta _x`$. We adopted $`\sigma _v=1400`$ km/s and $`\theta _s=r_s/D_{OL}\approx 1.4^{}`$ (corresponding to A370, see Williams, Navarro and Bartelmann 1999). The amplitude of the wiggle is $`\stackrel{~}{T}_{y0}\delta \theta `$, proportional to the amplitude of the gradient and the deflection angle. For $`\theta _x=0`$ the distortion caused by the SIS cluster would be constant and negative for $`\theta _y>0`$ and constant and positive for $`\theta _y<0`$, with a step function at $`\theta _y=0`$, reflecting the absence of a core in this model. The change in slope in the outer parts alters this prediction, so that only for $`\theta <\theta _s`$ is the deflection angle approximately constant (figure 2). For $`\theta _x\ne 0`$ the temperature profile is smooth, but the functional dependence still has odd symmetry under the reflection $`\theta _y\rightarrow -\theta _y`$, as shown in figure 3a.
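The enclosed projected mass, and hence the $`m(x)/x`$ scaling of the deflection, can be obtained by direct numerical integration. A sketch that applies to both the modified SIS profile of Eq. (4) and the NFW profile of Eq. (5) below; the normalizations are arbitrary and the line-of-sight integral is truncated at $`50r_s`$, both our own choices for illustration:

```python
import numpy as np
from scipy.integrate import quad

def rho_sis(x):   # modified SIS profile of Eq. (4), arbitrary normalization
    return 1.0 / (x**2 * (1.0 + x))

def rho_nfw(x):   # NFW profile of Eq. (5), arbitrary normalization
    return 1.0 / (x * (1.0 + x)**2)

def sigma(R, rho):
    # projected surface density: integrate rho along the line of sight,
    # truncated at 50 r_s (an illustrative choice)
    return 2.0 * quad(lambda z: rho(np.hypot(R, z)), 0.0, 50.0)[0]

def m_proj(x, rho):
    # projected mass enclosed within radius x
    return quad(lambda r: 2.0 * np.pi * r * sigma(r, rho), 0.0, x)[0]

for x in (0.5, 1.0, 3.0, 10.0):
    for rho in (rho_sis, rho_nfw):
        # the deflection angle scales as m(x)/x, up to overall normalization
        print(rho.__name__, x, m_proj(x, rho) / x)
```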
The value for the distortion depends on the amplitude of the large scale gradient, which has an rms value $`\sigma _T=\langle T_x^2+T_y^2\rangle ^{1/2}`$ of the order of 13 $`\mu K\mathrm{arcmin}^{-1}`$ for standard CDM. Other models that fit the current observations give similar values for $`\sigma _T`$. We have adopted this value of the gradient for our calculation, which gives a distortion $`\mathrm{\Delta }T\sim b\sigma _T\approx 13\mu K(\sigma _v/1400\mathrm{km}/\mathrm{s})^2D_{LS}/D_{OS}`$. Note that $`\mu K`$ signals can be obtained well beyond the virial radius and that by averaging over the entire profile of the signal one can significantly reduce the level of contamination from other contributions. This is discussed in more detail in §4, where we address more generally the observability of this signal. ### 2.2. NFW profile Navarro, Frenk and White (NFW; 1997) proposed a universal mass profile that was shown to fit most of the halos in cosmological N-body simulations. Its 3-d form is particularly simple and is given by $$\rho (x)=\frac{\rho _s}{x(1+x)^2},$$ (5) The transition between the $`r^{-1}`$ scaling in the center and $`r^{-3}`$ outside is governed by the scale radius $`r_s`$. Typical numbers for a cluster halo are of the order of $`250h^{-1}`$ kpc for $`r_s`$, which is about 15-20% of the virial radius $`r_{200}`$, defined as the radius within which the mean overdensity is 200. We numerically calculated the expected temperature profile for the parameters of A370, shown in figure 3b. We normalized both SIS and NFW profiles to have the same total mass at large radii. Because of our choice of normalization, we see that the largest difference between the NFW and SIS profiles occurs near the core of the cluster, in the inner $`2^{}`$. The NFW profile has much less mass near the center and thus has a much smaller deflection angle. Arcminute or better resolution is needed to distinguish these two profiles with this method. We will show below that other contributions, such as instrument noise, infrared (IR) or radio point sources, as well as SZ emission, further complicate this separation. ### 2.3. Non-axisymmetric profile Let us now introduce a quadrupole deviation from axial symmetry in the form of external shear. This can be parameterized with two components: distortion along the coordinate axes, parameterized with $`\gamma _1`$, and distortion along the diagonals, parameterized with $`\gamma _2`$. Fermat's gravitational potential can be parameterized in the following form $$\mathrm{\Phi }=\frac{𝜹\theta \cdot 𝜹\theta }{2}-f(\theta )-\frac{1}{2}\theta ^2[\gamma _1\mathrm{cos}2\varphi _\theta +\gamma _2\mathrm{sin}2\varphi _\theta ],$$ (6) where $`f(\theta )`$ is a general function describing the axi-symmetric radial profile of the projected cluster potential. This can be expanded into a series $`f(\theta )=f_1\theta +f_2\theta ^2/2+\mathrm{}`$. From Fermat's principle we obtain $`\delta \theta _y=\theta _y\left(\gamma _1-\frac{f^{\prime }(\theta )}{\theta }\right)-\gamma _2\theta _x.`$ (7) Inserting the expansion of $`f`$ above we find that $`\gamma _1`$ and $`f_2`$ are degenerate, since $`\delta \theta _y`$ has the same dependence on $`𝜽`$ for both parameters. This degeneracy is similar to the mass sheet degeneracy that exists in the case of cluster reconstruction from the shear. In that case a constant mass sheet cannot be detected using the shear information alone. Similarly, here we cannot separate a constant mass sheet from an external shear component $`\gamma _1`$. A more general form of this degeneracy is derived in the next section.
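The $`\gamma _1`$-$`f_2`$ degeneracy is easy to verify numerically from Eq. (7) as reconstructed above (the sign conventions are the ones adopted here): a pure $`\gamma _1`$ shear and a suitable constant-surface-density $`f_2`$ term produce identical $`\delta \theta _y`$ fields.

```python
import numpy as np

def delta_theta_y(tx, ty, gamma1, gamma2, f1, f2):
    # Eq. (7), with the expansion f(theta) = f1*theta + f2*theta**2/2,
    # so that f'(theta) = f1 + f2*theta
    t = np.hypot(tx, ty)
    fprime = f1 + f2 * t
    return ty * (gamma1 - fprime / t) - gamma2 * tx

rng = np.random.default_rng(2)
tx, ty = rng.uniform(0.1, 5.0, 1000), rng.uniform(0.1, 5.0, 1000)

a = delta_theta_y(tx, ty, gamma1=0.3, gamma2=0.0, f1=1.0, f2=0.0)
b = delta_theta_y(tx, ty, gamma1=0.0, gamma2=0.0, f1=1.0, f2=-0.3)
print(np.allclose(a, b))   # True: gamma_1 and a constant f_2 term are degenerate
```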
External shear distortion that is not perpendicular or parallel to the $`y`$ axis can be measured from the profile. The case of $`\gamma _2=0.3`$ and an NFW profile is shown in figure 3c. Non-axial symmetry breaks the odd parity in $`\theta _y`$ for $`\theta _x\ne 0`$. At a given $`\theta _x`$ the whole wiggle is moved up or down depending on $`\gamma _2`$ (of course, far away from the shear source it is restored back to the unperturbed value). This is not the only effect that can break this symmetry. As discussed in more detail in §4, kinetic and thermal SZ effects also imprint a signal in the CMB. Kinetic SZ in particular cannot be distinguished from lensing or primary CMB on the basis of frequency dependence. For an axially symmetric cluster it produces a profile with even parity in $`\theta _y`$. This effect combined with the lensing effect also breaks the symmetry, as shown in figure 3d. It is much more centrally concentrated than the effect of external shear, so that the two can be separated. ## 3. Reconstruction of projected density Rather than parameterizing the surface density of the cluster one may also attempt to reconstruct it directly. To do this we first subtract from the CMB anisotropies the pure gradient term and divide the temperature by $`\stackrel{~}{T}_{y0}`$. This gives an estimate of $`\delta \theta _y(𝜽)`$. We then Fourier transform it, $`\delta \theta _y(𝒍)=\int d^2\theta \,e^{i𝒍\cdot 𝜽}\delta \theta _y(𝜽)`$. The dimensionless surface density $`\kappa (𝜽)`$ is given by the inverse Fourier transform of $$\kappa (𝒍)=\frac{il^2\delta \theta _y(𝒍)}{l_y}.$$ (8) This inversion is possible for all modes except $`l_y=0`$. These are long wavelength modes in the $`y`$ direction that cannot be distinguished from the CMB gradient itself. Hence the inversion is not unique, although the number of modes for which the inversion fails is small compared to their total number (and becomes a set of measure 0 in the limit of perfect resolution). This degeneracy is similar to the mass-sheet degeneracy, which prevents one from reconstructing $`\kappa `$ from ellipticity data for the $`𝒍=0`$ mode. It is more severe here, because there is a whole line of modes ($`l_y=0`$, arbitrary $`l_x`$) for which the inversion fails, rather than just a single mode. In the previous section we discussed a particular example of this degeneracy, which prevents one from distinguishing external shear in the direction parallel or perpendicular to the CMB gradient from a constant surface density term. Furthermore, even if $`l_y\ne 0`$, for modes with small $`l_y`$ and large $`l_x`$ this reconstruction amplifies any noise contribution present in the data, so the final map no longer has uniform noise properties. ## 4. Sources of noise To analyze whether the theoretical predictions above can be detected we need to compare them to various sources of noise. These sources can be divided into instrumental and astrophysical. Astrophysical sources can arise from the Earth's atmosphere, our galaxy, various cosmological sources along the line of sight and from the cluster itself. They can also arise from gravitational lensing by other objects along the line of sight. Another source of noise is the CMB itself, in the form of its deviation from the pure gradient form assumed in the reconstruction. We may parameterize these sources of noise with their power spectrum, which will characterize the level of fluctuations as a function of scale.
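A minimal sketch of the inversion (8) on a periodic grid, masking the unrecoverable $`l_y=0`$ modes. The test field below is built with one common convention, $`\delta 𝜽=\nabla \psi `$ and $`\nabla ^2\psi =2\kappa `$; the convention-dependent numerical prefactor drops out of the round trip:

```python
import numpy as np

n = 128
ly, lx = np.meshgrid(np.fft.fftfreq(n) * 2 * np.pi,
                     np.fft.fftfreq(n) * 2 * np.pi, indexing='ij')
l2 = lx**2 + ly**2

# build a test convergence map and the corresponding y-deflection
rng = np.random.default_rng(3)
kappa_true = rng.standard_normal((n, n))
kappa_l = np.fft.fft2(kappa_true)
psi_l = np.where(l2 > 0, -2.0 * kappa_l / np.maximum(l2, 1e-12), 0.0)
dthy = np.fft.ifft2(1j * ly * psi_l).real          # delta_theta_y = d(psi)/dy

# invert: kappa(l) = -l^2 * dthy(l) / (2i l_y), undefined where l_y = 0
dthy_l = np.fft.fft2(dthy)
mask = np.abs(ly) > 1e-12
kappa_rec_l = np.zeros_like(dthy_l)
kappa_rec_l[mask] = -l2[mask] * dthy_l[mask] / (2j * ly[mask])
kappa_rec = np.fft.ifft2(kappa_rec_l).real

# near-perfect agreement; only the masked l_y = 0 line of modes is lost
print(np.corrcoef(kappa_true.ravel(), kappa_rec.ravel())[0, 1])
```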
The power spectrum does not contain all the information for a non-Gaussian process, but most of the noise sources that are accumulated along the line of sight will be well approximated as Gaussian because of the projection. Others, such as the primary CMB, are believed to be Gaussian already. The most important source of noise which cannot be described with power spectrum information is emission from the cluster itself, especially SZ and dust. We will discuss these sources of noise in more detail below. ### 4.1. Signal to noise analysis The total CMB anisotropy can be modeled as $$\mathrm{\Delta }T(𝜽)=Ag(𝜽)+n(𝜽)$$ (9) where $`g(𝜽)`$ is the angular profile of the deflection angle, normalized to unity at $`\theta =\theta _s`$. For axi-symmetric clusters its form can be simplified to $`g(𝜽)\propto \mathrm{cos}(\varphi _\theta )`$, where $`\varphi _\theta `$ is the azimuthal angle of $`𝜽`$. $`A`$ is a constant that includes both the strength of the cluster and the magnitude of the CMB gradient, while the noise term $`n(𝜽)`$ denotes all the other contributions to the measurement. They can be parameterized with the power spectrum $$\langle n(𝒍)n(𝒍^{})\rangle =C(l)\delta (𝒍+𝒍^{}).$$ (10) If we have some knowledge of the profile of the cluster deflection angle $`g(𝜽)`$ we can average the temperature over this profile, thus reducing the noise contribution from other sources that do not correlate with the expected profile. We wish to derive the filtering function $`\mathrm{\Psi }(𝜽)`$ with which we process the data to obtain an estimate of $`A`$, $$\widehat{A}=\int \mathrm{\Psi }(𝜽)\mathrm{\Delta }T(𝜽)d^2\theta .$$ (11) We can vary the filtering function to maximize the signal to noise ratio $`A/\sigma `$, where $$\sigma ^2=\langle (A-\widehat{A})^2\rangle =\int |\mathrm{\Psi }(𝒍)|^2C(l)d^2l.$$ (12) It can be easily shown that the optimal filter is $`\mathrm{\Psi }(𝒍)\propto g(𝒍)/C(l)`$ and that the variance for this filter is given by (Haehnelt & Tegmark 1996) $$\sigma =\left[\int \frac{|g(𝒍)|^2}{C(l)}d^2l\right]^{-1/2}.$$ (13) If the noise spectrum is white then the profile of the filter is simply the profile of the deflection angle. This is what one expects, since in that case one is simply averaging the data with weights given by the expected signal. If however the noise power spectrum has more power on large (small) scales, then those large (small) scale modes are more important to suppress than the small (large) scale modes. To suppress large scale modes one has to design a filter that oscillates, so that its shape cancels the slowly changing modes, while still maximizing the information from the cluster profile. This is what is achieved with the optimal filter above. For an axi-symmetric cluster the Fourier transform simplifies to $$g(𝒍)=g(l)\mathrm{cos}\varphi _l,\qquad g(l)=2\pi \int \theta g(\theta )J_1(l\theta )𝑑\theta $$ (14) where $`J_1(x)`$ is the Bessel function of first order and $`\varphi _l`$ is the azimuthal angle of $`𝒍`$. The variance becomes $$\sigma =\left[\pi \int \frac{g(l)^2}{C(l)}l𝑑l\right]^{-1/2}.$$ (15) In the examples below we will use the NFW profile for the cluster, observed out to a given radius $`\theta _0`$. We will use $`\theta _s=1.4^{}`$ and various sources of confusion to estimate $`\sigma `$. This can be compared to the expected $`A`$ for large clusters, of the order of 5-10 $`\mu K`$, to identify the main sources of noise and the range over which this method could be used to study the clusters.
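For white noise, the content of Eqs. (11)-(13) reduces in pixel space to a simple matched filter. A sketch with a toy stand-in for the normalized deflection profile (our own choice, not the profile actually used in the paper), which reproduces the order of magnitude quoted in the next subsection:

```python
import numpy as np

# With Delta_T_i = A*g_i + n_i and <n_i n_j> = sigma_n^2 delta_ij, the optimal
# estimator is A_hat = sum(g_i * T_i) / sum(g_i**2), with error
# sigma = sigma_n / sqrt(sum g_i**2).
sigma_n = 5.0                                # microK per 1' pixel
theta_s = 1.4                                # arcmin

x, y = np.meshgrid(np.arange(-4.5, 5.5), np.arange(-4.5, 5.5))   # 10'x10', 100 pixels
theta = np.hypot(x, y)
radial = (theta / theta_s) / (1 + (theta / theta_s) ** 2)        # toy deflection profile
g = radial * y / theta                       # cos(phi) factor: dipolar signal pattern

sigma_A = sigma_n / np.sqrt(np.sum(g ** 2))
print(sigma_A)                               # a couple of microK for these numbers
```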
### 4.2. Instrument degradation The detector adds noise to the signal. This can be parameterized by its power spectrum, which for many instruments can be approximated as a constant, $`C_n(l)=\sigma _n^2\mathrm{\Omega }_p`$, where $`\sigma _n`$ is the rms noise in each pixel and $`\mathrm{\Omega }_p`$ is its solid angle. Current observations of the SZ effect at 30 GHz are reaching noise levels of the order of $`\sigma _n=15\mu K`$ at 1-2' resolution (Carlstrom et al. 1999). This is approaching the level of the signal predicted here, although at these frequencies the dominant signal in the center comes from the SZ effect. Interferometers are not sensitive to low spatial frequency modes, so one cannot obtain the direction of the gradient from the experiment itself. This must be obtained from a lower resolution experiment such as MAP. It is possible that the characteristic signature generated by the lensing effect could be observed at larger radii even at these low frequencies, and it would certainly be worthwhile to integrate a few of the clusters down to 5 $`\mu K`$ in search of this effect, especially once the direction and the amplitude of the CMB gradient are better known. At 217 GHz, which is the zero crossing frequency for thermal SZ, current observations only reach 100 $`\mu K`$ noise per pixel at a similar resolution (Church et al. 1997), dominated by the atmospheric noise discussed below. The next generation of small scale experiments with larger arrays and longer observation times, such as MINT or ACBAR, will have sensitivities reaching 5 $`\mu K`$ per arcminute-size pixel in a 100 pixel array over a month of observation and will be more suitable for detecting this effect. Because of the finite angular resolution of the instrument the predicted $`\mathrm{\Delta }T/T`$ has to be convolved with the window function of the beam. This dilutes sharp features around the center of the cluster, such as those produced by the SIS in figure 3a. For this case arcminute resolution would be desirable. For less steep profiles such as NFW this is less important. Since the actual signal extends quite far from the center, even a modest resolution of several arcminutes would still be useful, assuming that the other sources of noise discussed below do not dominate the signal. To incorporate the beam dilution into the formalism above we may replace $`C_n(l)`$ with $`C_n(l)\mathrm{exp}[\theta _b^2l(l+1)]`$, where $`\theta _b`$ is the Gaussian width of the beam. Note that beam dilution does not affect the signal to noise if the noise is not dominated by instrument noise. Applying the noise power spectrum to equation (15) we find that a 10'$`\times `$10' array with 100 pixels, each with $`5\mu K`$ noise, gives rms noise $`\sigma =1.6\mu K`$ if the effect of the beam is negligible. For a beam with 1' FWHM this number increases to 3 $`\mu K`$. Doubling the size of the array or halving the noise per pixel both reduce this number by roughly one half. These levels of noise are therefore necessary for a positive detection of the effect. Note that doubling the size of the array while keeping the noise per solid angle fixed (equivalent to keeping the observing time fixed) does not significantly change the rms variance. This is because the signal drops only slowly with distance from the center. In this case changing the FWHM from 1' to 2' makes almost no difference. For the Planck 217 GHz channel, with 12 $`\mu K`$ sensitivity per pixel and 5' FWHM, the noise level is of the order of 10 $`\mu K`$, which is at the detection limit for large clusters.
Except for a few exceptional cases, Planck will therefore not be able to detect this effect. ### 4.3. Intrinsic CMB fluctuations We have assumed throughout that the CMB can be approximated as a pure gradient. The typical coherence length of the CMB gradient is 15', and since the lensing effect extends well beyond this scale, this approximation breaks down at large separations from the cluster center. To estimate the level of this contribution we can use the CMB power spectrum as a source of noise in equation (15). Because the large scale CMB is approximated as a gradient and removed, we exclude the modes larger than the size of the observed field. The modes smaller than the size of the box cannot be approximated as a gradient and they contribute noise, which needs to be distinguished from the cluster signal. Without removing the long wavelength modes with the optimal filter, the rms contribution from the CMB is of the order of 15 $`\mu K`$ for a survey of 10'$`\times `$10', increasing to 100 $`\mu K`$ for 1°$`\times `$1°. For such large fields long wavelength modes of the CMB are a significant source of confusion. This reflects the strongly correlated nature of the CMB on large scales. The optimal filter suppresses the influence of long wavelength modes by employing alternating positive and negative radial weights. This significantly reduces the long wavelength modes, while still preserving to a large extent the information about the cluster profile. In this case the variance decreases significantly, to 4-5 $`\mu K`$. For the Planck 217 GHz channel the total variance remains 10 $`\mu K`$, dominated by the detector noise rather than the CMB. The dominant contribution to the CMB gradient comes from $`l>500`$, with a 50% contribution coming from $`l>1000`$. Degree-size fields may thus have a significantly lower rms CMB gradient than 10'-size fields. For large areas the best strategy is to select fields with a smooth and large gradient across the entire field, thus enhancing the signal and reducing the level of CMB noise. ### 4.4. SZ effect from the cluster The SZ effect is the dominant signal from clusters in the low frequency range. It is caused by the scattering of photons off hot electrons in the cluster. The net effect is to increase the energy of the photons, and since their number is conserved this causes their redistribution from the low frequency Rayleigh-Jeans regime into the high frequency Wien regime. This creates a deficit of photons and therefore a CMB decrement at low frequencies and an increment at high frequencies, with a zero crossing at 217 GHz. The amplitude of the effect is proportional to the temperature of the cluster and its optical depth. Typical numbers are $`10^8`$ K for the temperature and 0.01 for the optical depth. Positive detections in the RJ tail have by now been achieved for more than 30 clusters, with central decrements exceeding $`500\mu K`$ (Carlstrom et al. 1999). This is a huge signal that can easily swamp the lensing signal. Convolving with the optimal filter reduces the level of fluctuations, and even eliminates them for axi-symmetric profiles. However, most clusters are not axi-symmetric, and for reasonable ellipticities the remaining contamination could still be above the expected signal in the center. One may further reduce this contamination by eliminating the central region of the cluster from the analysis. Most of the SZ signal comes from the inner 1-2' radius, while the lensing signal extends well beyond that.
To model the importance of the inner part of the cluster we repeated the noise analysis excluding the lensing information from the inner 4' region. This increased the variance by 40% and so does not significantly reduce the sensitivity, while reducing the level of the SZ signal by a factor of a few. Further reduction of this contamination is achieved by observing at 217 GHz. Although this frequency is the zero crossing for thermal SZ in the non-relativistic limit, for most of the clusters with a large signal relativistic effects are not negligible. This causes the zero crossing to scale linearly with the gas temperature (Rephaeli 1995). If the cluster is isothermal and its temperature can be measured from X-ray observations, then one can correct for this effect. If the cluster is not isothermal, as suggested by recent ASCA measurements (Markevitch et al. 1996), then this will induce further fluctuations in the map, which can be at the 10 $`\mu K`$ level. These fluctuations can be reduced using the lensing filter combined with exclusion of the center, and so do not appear to be a major source of confusion. ### 4.5. Kinetic SZ effect from the cluster Even if the SZ effect from hot electrons vanishes at 217 GHz, there is another imprint of the CMB photons scattering off cluster electrons, caused by the electron bulk motion through the Doppler effect. Its magnitude is given by the product of the optical depth, typically around $`\tau =0.01`$ in the center, and the radial velocity of the electrons in the cluster. The latter is dominated by the bulk motion of the cluster $`v_r`$, with a typical value of $`v_r=300`$ km/s. This gives a typical magnitude of the effect in the center of around 30 $`\mu K`$, somewhat larger than the lensing signal. Note that the two have the same frequency dependence and cannot be distinguished using this information. However, just as for thermal SZ, kinetic SZ is much more centrally concentrated than the lensing effect. The temperature profile in the presence of both effects is shown in figure 3d for the case of the NFW profile (we are assuming that the gas traces the dark matter outside the cluster core). At larger separations from the center kinetic SZ becomes negligible, while the lensing effect remains strong, so the two effects can be separated. Excluding the central portion of the cluster and using the optimal filter we find confusion levels of a few $`\mu K`$. Another potential source of contamination is bulk motion within the cluster. If the cluster is not relaxed due to a recent merger, this can produce significant internal motions of the gas (of the order of 500 km/s, Haehnelt and Tegmark 1996). The corresponding Doppler effect on the CMB can act as an additional source of fluctuations. Filtering reduces the noise level when the large scale CMB gradient is known, with residual contamination at the $`\mu K`$ level. ### 4.6. SZ and OV along the line of sight In addition to the thermal and kinetic SZ effects from the cluster itself there is also the contribution from other objects along the line of sight. Kinetic SZ is sometimes divided into a contribution from quasi-linear structures, called the Ostriker-Vishniac (OV) effect, and a contribution from nonlinear structures (kinetic SZ). All of these will be a source of noise uncorrelated with the cluster itself. The magnitude of these contributions is somewhat model dependent. Both can be at the level of a few $`\mu K`$ on arcminute scales, with thermal SZ being typically a few times stronger.
However, since thermal SZ vanishes at 217 GHz (relativistic corrections are likely to be negligible for the smaller halos contributing to the line of sight SZ), OV and kinetic SZ may be more important as a source of confusion at this frequency. To estimate their effect we have used the power spectra of thermal SZ as given in Persi et al. (1995) and of OV as given in Hu (1999). The thermal SZ power spectrum grows roughly proportionally to $`l`$ and exceeds the CMB around $`l\approx 2000`$ at low frequencies. Kinetic SZ is somewhat lower, but also grows at high $`l`$ in a similar fashion. Only on intermediate scales do the two exceed the combined CMB and instrument noise power spectrum. Their individual contribution to the lens filter variance varies as a function of angular scale. For the 5' beam with 12 $`\mu K`$ noise per pixel the contribution from thermal SZ can double the variance from CMB and noise, making the total 20 $`\mu K`$. At this angular resolution it is necessary to work at 217 GHz to reduce the thermal SZ contamination, although kinetic SZ/OV still increases the variance somewhat. For a 1' beam with 5 $`\mu K`$ noise per pixel the contributions from SZ and OV are lower and do not significantly change the rms noise. This is because instrument noise dominates the confusion. Only with a more sensitive detector would these contributions become important on these scales. ### 4.7. Dust emission Dust emission can arise from three separate sources. First there is the emission from our own galaxy. This contribution is fairly smooth, scaling as $`C_l\propto l^{-3}`$, and does not add significant power on small scales. A reasonable estimate for the noise variance is 10 $`\mu K`$ at $`l=10`$ for the 217 GHz channel (Tegmark et al. 1999), dropping significantly at lower frequencies. Even at 217 GHz its power spectrum is below the CMB power spectrum everywhere except at very low $`l`$. This foreground is therefore less problematic than the primary CMB, and its inclusion does not significantly change the conclusions above. As a caution we should note that this conclusion is based only on the power spectrum analysis, while dust emission can be strongly non-Gaussian. There are regions where dust emission can be significantly larger than the above analysis would suggest. An example, discussed below, is the field of A2163. Another source of dust emission is the infrared sources along the line of sight. These have been modeled by Toffolatti et al. (1998) and Guiderdoni et al. (1999). The overall contribution from the point sources to the power spectrum depends on the flux limit of the resolved sources. Strong point-like sources can be removed from the data as outliers, leaving the fluctuations produced by the unresolved sources. These Poisson fluctuations give a white noise power spectrum, with rms fluctuations of the order of the flux limit converted to $`\mu K`$ in a beam area times the square root of the number of removed sources per beam area. Adopting conservative modeling as in Tegmark et al. (1999) we find that at 217 GHz the rms variance including point sources can reach 25 $`\mu K`$ for a 1' beam. This is reduced somewhat at larger angular scales, but there thermal SZ and the CMB combined prevent one from reducing the noise below 10 $`\mu K`$. These results indicate that more sophisticated modeling of point sources will be necessary to reduce their contribution to acceptable levels. This can be achieved by using either higher frequencies or higher angular resolution to identify these sources.
Finally there is also the possibility of dust emission from the cluster itself. Such emission could explain the recent sub-mm observation of A2163 by Pronaos (Lamarre et al. 1998) and may extend into the 200 GHz range, although an alternative explanation in terms of galactic dust is just as likely. This would complicate the assertion that 217 GHz is the optimal frequency for identifying this effect. If the cluster dust emission at this frequency is still strong, it may exceed the lensing signal, at least in the center. It seems unlikely, however, that a strong dust component would also be present at larger separations from the center. ### 4.8. Radio point sources At low frequencies the main source of confusion is radio point sources (we ignore here free-free and synchrotron emission, which typically do not exceed the CMB power spectrum and so are subdominant in contamination). Their modeling has also been presented in Toffolatti et al. (1998). At 30 GHz their contribution to the power spectrum using only internal identification (based on their identification as outliers in the flux distribution) is about 100 times larger than the point source contribution from IR sources at 217 GHz. The variance on the filtered profile is around 100 $`\mu K`$ for a 5' beam and twice that for a 1' beam. This is of course well known to observers operating in this frequency range, who routinely employ higher sensitivity multi-frequency observations of the same region to eliminate point source contamination. This can reduce the variance from point sources below the instrument noise, 15-40 $`\mu K`$ for the currently most sensitive experiments (Carlstrom et al. 1999). It remains to be seen, however, whether using this additional information can reduce the contamination to the required level of a few $`\mu K`$. ### 4.9. Gravitational lensing Distortion of the background CMB is caused by the entire matter distribution along the line of sight, so there will be additional fluctuations on top of the effect from the cluster. The effect can be partially modeled by using the lensed instead of the unlensed power spectrum in the estimate of the confusion from primary anisotropies discussed above. This gives an additional noise contribution below the 1 $`\mu K`$ level and so would appear not to significantly contaminate the signal. This approach however underestimates the contribution, because the generated CMB power is also correlated with the CMB gradient, and the lens filter does not eliminate it as efficiently as the power spectrum analysis would indicate. As shown in Zaldarriaga and Seljak (1999b), lensing along the line of sight primarily generates power on scales smaller than the cluster, with an rms amplitude of a few $`\mu K`$. Averaging over the expected cluster profile reduces this noise to negligible levels compared to other sources. ### 4.10. Atmosphere noise In addition to the sources of noise described above, ground-based experiments also suffer from atmospheric noise, arising from the atmospheric temperature (around 15 K) and atmospheric fluctuations. The first can be modeled as white noise and has properties similar to the instrument noise. For bolometer arrays it dominates the noise on small scales, so sufficiently long integrations are needed to reduce it to acceptable levels. Atmospheric fluctuations are more correlated, and their precise estimate depends on the specifics of the detector, site, weather, etc.
They can be reduced significantly using interferometric techniques, and it is expected that they can be brought down to the few $`\mu K`$ level required. ## 5. Conclusions The characteristic signature of a cluster gravitationally lensing a smooth CMB gradient allows one to search for and identify this effect among the many other possible sources of fluctuations on small scales. The signal is small, at a level of around 10 $`\mu K`$ in the centers of the most promising clusters and a fraction of that away from them. Other sources of anisotropies almost certainly exceed this signal in the absence of filtering. Noise filtering over the expected profile can reduce the noise contamination to acceptable levels if the initial guess for the cluster profile is sufficiently close to the real one. The final signal to noise depends sensitively on the amplitude of the various sources of fluctuations, including instrument noise, all of which are still rather uncertain at present. The most promising among the existing or planned experiments are small scale interferometers with arcminute resolution and a few $`\mu K`$ noise per pixel sensitivity, operating at frequencies close to 217 GHz. The Planck experiment, with a 5' beam at 217 GHz, could also provide a detection in some cases. Despite the signal being weak, this method of detecting a cluster signature has some advantages over current methods. First, as for the SZ effect, the strength does not drop significantly with redshift, except for the slow decrease through the $`D_{LS}/D_{OS}`$ ratio. In addition, the signal depends on the deflection angle, which is more sensitive to the outer parts of the cluster than probes that depend on the projected density. For the NFW profile the signal drops by about a factor of 2 between $`\theta _s`$ and $`10\theta _s`$ (which is beyond the virial radius), while the projected surface density drops by almost two orders of magnitude (see figure 2). Given that this is one of the few probes sensitive to the outer parts of the cluster, it seems worth pursuing it with the next generation of CMB experiments. We thank L. Page for useful discussions. M.Z. is supported by NASA through Hubble Fellowship grant HF-01116.01-98A from STScI, operated by AURA, Inc. under NASA contract NAS5-26555. U.S. is supported by NASA grant NAG5-8084.
# Disorder Induced Phase Transition for Random 2D Dirac Fermions ## Abstract We present evidence that two dimensional Dirac fermions in the presence of a random Abelian gauge potential exhibit a phase transition when the disorder strength exceeds a certain critical value. We argue that this phase transition has novel properties unique to disordered systems. It resembles in many ways the transition from the dilute to the dense polymer phase in two dimensions. In particular, we argue that the central charge of the disordered Dirac fermions, being $`c=0`$ before the transition, changes to $`c=-2`$. We discuss possible implications for quantum Hall transitions, in view of a recently proposed model for quantum Hall transitions with $`c=-2`$. 71.10.Pm, 64.60.-i, 73.40.Hm The study of two dimensional Dirac fermions in the context of the quantum Hall effect was initiated in Ref. . These fermions exhibit an integer quantum Hall transition as the value of their mass is tuned through zero. Unfortunately, for pure fermions this transition lies in a universality class different from the one observed in quantum Hall experiments, having the exponents of the Ising model. In order to model the more realistic situation, various types of disorder should be turned on. Usually three types of disorder are studied: random mass, random scalar potential and random gauge potential. Random mass turns out to be irrelevant in the renormalization group sense, and the critical properties of the transition do not change as we turn it on. Random gauge potential results in a critical line, with the critical properties of the transition continuously dependent on the disorder strength. And the random scalar potential turns out to be relevant, flowing away to some unknown critical point. It was further argued in Ref. that in order to simulate the realistic quantum Hall transition, all three types of randomness should be present. That causes the system to flow to a strongly coupled critical point with unknown properties. This critical point should be in the same universality class as the generic quantum Hall transitions, usually described by the Pruisken sigma model (see for its supersymmetric version). There is very little we can say about the properties of that critical point, except that we might expect the effective field theory which describes it to have central charge $`c=0`$. In view of the difficulty of approaching the generic situation, a number of papers concentrated on the properties of the critical line generated by the presence of random gauge potential. It appears that the correlation functions of such disordered fermions are very easy to calculate. Indeed, there are several methods available, and all of them result in correlation functions continuously dependent on the disorder strength. However, in this letter we will show that previous treatments were somewhat careless. In fact, the correlation functions of the Dirac fermions in the presence of random gauge potential depend continuously on the disorder strength only up to a certain critical value. If we continue to increase the strength of the disorder beyond that value, the critical properties of the correlation functions stop being dependent on the disorder strength. The system as a whole undergoes a phase transition. The density of states at zero energy is zero below the transition and is a constant above it. And we argue that the effective field theory describing that critical line changes considerably.
Its central charge was $`c=0`$ below the transition, and it changes to $`c=-2`$ above the transition. We draw a direct parallel with the theory of polymers (self avoiding random walks) in two dimensions. They are known to undergo a phase transition called the dilute to dense polymer transition. The central charge of the effective field theory describing polymers changes from $`c=0`$ to $`c=-2`$ . Moreover, we will see that the field theories describing the dilute and dense polymer phases and those describing the two phases of the fermions with random gauge potential are closely related. In fact, the first evidence of this phase transition showed up in Ref. . There the multifractal exponents of the zero energy wave function of the Dirac fermions with random gauge potential have been calculated exactly. It was discovered that these exponents change sharply as the disorder strength exceeds a certain value. Evidently this behavior is just another manifestation of the phase transition discussed in this letter. We will conclude by discussing the implications of this transition for the not yet established theory of the integer quantum Hall transitions. It appears that a natural way to approach the generic quantum Hall transition point is to turn on strong random gauge potential. That generates the constant density of states expected at the generic quantum Hall transition, and changes the central charge of the theory from $`c=0`$ to $`c=-2`$. After that, we may want to turn on random mass and random scalar potential, following the logic of . The theory will then flow to a critical point which, contrary to a naive belief, will not be a $`c=0`$ theory. This may shed new light on the recent proposal that the physics of quantum Hall transitions should be captured by a $`c=-2`$ theory . However, the connection to the model proposed in is at this stage purely speculative. Now we proceed with the derivation of this phase transition. To begin with, let us fix the notation. We study the two dimensional Dirac fermions $`\psi `$, $`\overline{\psi }`$ in the presence of a gauge potential $`A_\mu `$ with the Hamiltonian $$H=\int d^2x\,\overline{\psi }\left(-i\partial _\mu -A_\mu \right)\sigma _\mu \psi +ϵ\overline{\psi }\psi ,$$ (1) where $`\sigma _1\equiv \sigma _x`$, $`\sigma _2\equiv \sigma _y`$ are Pauli matrices and $`ϵ`$ is the energy. $`A_\mu `$ is supposed to be random with the probability density $$P\propto \mathrm{exp}\left(-\frac{1}{g}\int d^2x\,A_\mu ^2\right),$$ (2) where $`g`$ is the disorder strength. It is convenient at this stage to decompose $`A_\mu `$ into a physical part $`\varphi `$ and a pure gauge $`\chi `$, following , $$A_\mu =\partial _\mu \chi +ϵ_{\mu \nu }\partial _\nu \varphi ,$$ (3) where $`ϵ_{\mu \nu }`$ is the antisymmetric tensor. In terms of the fields $`\chi `$ and $`\varphi `$ the probability density becomes $$P\propto \mathrm{exp}\left[-\frac{1}{g}\int d^2x\left(\left(\partial _\mu \varphi \right)^2+\left(\partial _\mu \chi \right)^2\right)-\int d^2x\,\partial _\mu \theta \,\partial _\mu \overline{\theta }\right]$$ (4) where the anticommuting fields $`\theta `$ and $`\overline{\theta }`$ provide the Jacobian of the variable change from $`A_\mu `$ to $`\varphi `$ and $`\chi `$ (see Ref. ). The simplest way to compute the correlation functions of (1) is to use the bosonization technique.
These correlation functions can be obtained with the help of the following path integral $$Z=\int 𝒟\phi \,\mathrm{exp}\left[-\int d^2x\left(\left(\partial _\mu \phi \right)^2+iϵ_{\mu \nu }A_\mu \partial _\nu \phi \right)\right].$$ (5) It is very easy to compute the partition function $`Z`$ by completing the square, $$Z=\mathrm{exp}\left[-\frac{1}{4}\int d^2x\left(\partial _\mu \varphi \right)^2\right]$$ (6) which is the effective action of the celebrated Schwinger model. To compute an averaged correlation function of a physical operator, say $`\psi (z)\sim \mathrm{exp}\left(i\phi (z)\right)`$ or any other object $`X`$, we need to average the following path integral $$\langle X\rangle =\frac{\int 𝒟\phi \,X\,\mathrm{exp}\left[-\int d^2x\left(\left(\partial _\mu \phi \right)^2+iϵ_{\mu \nu }A_\mu \partial _\nu \phi \right)\right]}{Z}$$ (7) over the random $`A_\mu `$. With the help of (6) it becomes $$\langle X\rangle =\int 𝒟\phi \,X\,\mathrm{exp}\left[-\int d^2x\left(\partial _\mu \phi +\frac{i}{2}\partial _\mu \varphi \right)^2\right].$$ (8) From this point on, we could proceed in two different ways. One of them would be to calculate the correlation function of $`X`$ and then average it over the random $`A_\mu `$ (that is, $`\varphi `$) with the help of (4). The other way would be to average the path integral (8) over the random $`A_\mu `$ first and then calculate the correlation function. In a quite intriguing way, these two methods do not produce the same results. Let us first proceed in the former way. We choose the left moving Dirac fermion $`\psi (z)\sim \mathrm{exp}\left(i\phi (z)\right)`$ as the object $`X`$ whose correlation functions we would want to compute. We then compute the correlation function of $`\mathrm{exp}\left(i\phi (z)\right)`$ by shifting the variable of integration $`\phi `$ by an amount proportional to $`i\varphi `$. Then we average over the random $`\varphi `$ with the help of (4) (averaging over $`\chi `$ and $`\theta `$, $`\overline{\theta }`$ is irrelevant as these fields completely drop out of any physical correlation function). As a result, we discover that the dimension of the field $`\mathrm{exp}\left(i\phi (z)\right)`$, being equal to $`1/2`$ without the disorder, becomes $`1/2-g/8`$ in the presence of disorder. Thus we reproduced the standard result of . However, we should become suspicious of this result. When $`g\geq 4`$, the dimension of the operator $`\mathrm{exp}\left(i\phi (z)\right)`$ becomes zero and then negative. That means this field acquires an expectation value (which, in the case of negative dimension, scales with the system size towards bigger values). Once it acquires an expectation value, we can no longer argue that we could perform the shift in the path integral to arrive at this result in the first place. The path integral is no longer invariant under the shift of $`\phi `$. We would have arrived at the same problem had we analyzed the conformal field theory approach of Ref. . There it was shown how to calculate the dimension of $`\psi (z)`$ with the help of the U$`(1|1)`$ Kac-Moody algebra hidden in the supersymmetric approach to (1). In the very same way, its dimension becomes negative at large disorder and it acquires an expectation value. However, the expectation value for that operator would break the very Kac-Moody symmetry which the approach of utilized so successfully. A way out of this quagmire would be to add the symmetry breaking energy term of (1) to the path integral, $`ϵ\overline{\psi }\psi \sim ϵ\mathrm{cos}\left(\phi \right)`$, and try to take $`ϵ`$ to zero. However, adding this term to the path integral (8) would make it completely intractable. Instead, we should try a different approach.
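For orientation, the bookkeeping behind the weak-disorder dimension just quoted can be made explicit (schematically; the propagator normalizations implicit in (4) and (8) are understood). The shift $`\phi \to \phi -\frac{i}{2}\varphi `$ turns $`e^{i\phi (z)}`$ into $`e^{i\phi (z)}e^{\varphi (z)/2}`$, so that $$\overline{\langle e^{i\phi (z)}e^{-i\phi (0)}\rangle }=\langle e^{i\phi (z)}e^{-i\phi (0)}\rangle _0\left\langle e^{\frac{1}{2}\left[\varphi (z)-\varphi (0)\right]}\right\rangle _P\propto \frac{1}{|z|}\,|z|^{g/4}=|z|^{-2\left(\frac{1}{2}-\frac{g}{8}\right)}.$$ The disorder factor grows with separation because $`\varphi `$ enters the shift with an imaginary coefficient, and at $`g=4`$ it exactly compensates the free-field decay, which is where the trouble starts.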
Let us first average (8) with the help of the probability distribution (4), in other words do the $`\varphi `$ integral in $$\int 𝒟\varphi \,𝒟\phi \,\mathrm{exp}\left[-\int d^2x\left\{\left(\partial _\mu \phi +\frac{i}{2}\partial _\mu \varphi \right)^2+\frac{1}{g}\left(\partial _\mu \varphi \right)^2\right\}\right].$$ (9) It is not hard to do that if $`g<4`$ by completing the square. That gives back the dimension $`1/2-g/8`$ to the operator $`\mathrm{exp}\left(i\phi (z)\right)`$. However, if $`g>4`$, the integral over $`\varphi `$ in (9) becomes divergent. A different technique is needed to compute it in the strong disorder regime. As always when dealing with divergent integrals, it is a good idea to regularize them. We are going to limit the $`\varphi `$ integration by a certain cutoff in the functional space. In fact, it is easiest to do that in the ordinary integral equivalent to (9). Consider the following ordinary integral $$\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{dx\,dy}{\pi }\mathrm{exp}\left[-\left(x+\frac{iy}{2}\right)^2-\frac{1}{g}y^2\right].$$ (10) It is just an ordinary integral, and yet it captures the essential properties of (9). Let us try to compute $`\langle x^2\rangle `$ with the help of this integral. By that we mean inserting $`x^2`$ inside the integral in (10) and then doing the integral. If we try to do the integral over $`x`$ first, and then integrate over $`y`$, we arrive at $`\langle x^2\rangle =1/2-g/8`$. However, if we reverse the order, the integral over $`y`$ becomes divergent if $`g>4`$, just like in (9). So we limit the integral over $`y`$ to the interval between $`-L`$ and $`L`$. It is not hard to estimate this integral at large $`L`$, $$\int _{-L}^{L}dy\,\mathrm{exp}\left[\left(\frac{1}{4}-\frac{1}{g}\right)y^2-iyx\right]$$ (11) $$\approx \frac{1}{\left(1-\frac{4}{g}\right)L}\mathrm{exp}\left[\left(\frac{1}{4}-\frac{1}{g}\right)L^2\right]\mathrm{cos}\left(xL\right).$$ (12) We can now use that estimate to compute $`\langle x^2\rangle `$ as in $$\int dx\,x^2\,\mathrm{exp}\left[-\left(x\pm \frac{iL}{2}\right)^2-\frac{L^2}{g}\right]\sim \mathrm{exp}\left(-\frac{L^2}{g}\right).$$ (13) It is safe to extend the integration over $`x`$ from $`-\infty `$ to $`+\infty `$ since it converges very fast. Now it is obvious that taking the limit $`L\to \infty `$ we obtain $`\langle x^2\rangle =0`$ for $`g>4`$, with the term $`\mathrm{exp}\left(-L^2/g\right)`$ suppressing all others. Let us pause now and translate this result to the language of (9). It implies that doing the integration over $`\varphi `$ first, with a suitable cutoff, we obtain that the dimension of the field $`\mathrm{exp}\left(i\phi \right)`$ is equal to $`0`$ for all $`g>4`$. Thus we arrive at the result announced in the beginning. The dimension of the field $`\psi `$ of (1) is $`1/2-g/8`$ if the disorder strength $`g`$ is less than $`4`$, and is equal to $`0`$ for $`g\geq 4`$. The field $`\overline{\psi }\psi \sim \mathrm{cos}\left(\phi \right)`$, which is the density of states of the fermions, acquires an expectation value at $`g\geq 4`$. Figuratively speaking, integrating over the disorder field $`\varphi `$ suppresses the quantum fluctuations of $`\phi `$ when we are in the strong disorder phase. That is why $`\mathrm{exp}\left(i\phi \right)`$ becomes zero dimensional. We obtained this result using a clever ‘brute force’ averaging of the correlation functions of (1). However, we should expect to see the same phenomenon in the supersymmetry or replica technique.
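For completeness, the weak-disorder route through (10), with the $`x`$ integral done first, is elementary: completing the square with $`u=x+\frac{iy}{2}`$ gives $$\int _{-\infty }^{\infty }dx\,x^2\,e^{-\left(x+\frac{iy}{2}\right)^2}=\sqrt{\pi }\left(\frac{1}{2}-\frac{y^2}{4}\right),\qquad \langle x^2\rangle =\frac{1}{2}-\frac{1}{4}\,\frac{\int dy\,y^2\,e^{-y^2/g}}{\int dy\,e^{-y^2/g}}=\frac{1}{2}-\frac{g}{8},$$ and it is precisely the Gaussian $`y`$ average in the last step that ceases to exist for $`g>4`$, forcing the cutoff procedure described above.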
In principle we can now use the procedure outlined above to compute any correlation function of the theory in the strong disorder phase. Therefore, for all practical purposes we now understand the nature of this phase transition. Let us repeat that to see it, we had to impose a cutoff on the integration over disorder in the functional space. The nonanalytic behavior, typical of phase transitions, has been recovered as we took that cutoff to infinity. Later we will see that in this respect the transition we have been studying has a lot in common with the more established dilute to dense polymer transition. Now we need to understand what this phase transition implies for the effective field theory of the disordered Dirac fermions. The effective field theory can be derived in either the replica or the supersymmetry approach, and we are going to concentrate on the latter. Supersymmetrization of (1) can be achieved by adding Dirac bosons to it, as in $$H=\int d^2x\,\overline{\psi }\left(-i\partial _\mu -A_\mu \right)\sigma _\mu \psi +b\left(-i\partial _\mu -A_\mu \right)\sigma _\mu c,$$ (14) where $`b`$ and $`c`$ are commuting spinors. In calling them $`b`$ and $`c`$ we follow the accepted convention of Ref. . At this stage we can directly average over $`A_\mu `$ with the help of (2) and obtain the effective supersymmetric field theory. However, to avoid dealing with a complicated interacting field theory, it is convenient to bosonize the fields $`\psi `$, $`\overline{\psi }`$, and $`b`$, $`c`$ before the averaging. That is achieved with the help of two bosonic fields, $`\phi _1`$ and $`\phi _2`$, and two fermionic fields $`\theta `$, $`\overline{\theta }`$, as follows $$e^{-\int d^2x\left(\left(\partial _\mu \phi _1\right)^2+\left(\partial _\mu \phi _2\right)^2+ϵ_{\mu \nu }A_\mu \left(i\partial _\nu \phi _1+\partial _\nu \phi _2\right)-\partial _\mu \theta \,\partial _\mu \overline{\theta }\right)}$$ (15) We recall that the field $`\phi _1`$ bosonizes the fermions $`\psi `$ and $`\overline{\psi }`$ while the fields $`\phi _2`$, $`\theta `$ and $`\overline{\theta }`$ are needed to bosonize the $`b`$, $`c`$ fields. Averaging over $`A_\mu `$ with the help of (2) we obtain (dropping the $`\theta `$ and $`\overline{\theta }`$ terms for the time being) $$\int 𝒟\phi _1\,𝒟\phi _2\,e^{-\int d^2x\left(\left(\partial _\mu \phi _1\right)^2+\left(\partial _\mu \phi _2\right)^2-\frac{g}{4}\left(i\partial _\mu \phi _1+\partial _\mu \phi _2\right)^2\right)}$$ (16) It is not hard to see that this path integral and the one of (9) are equivalent, as far as the averages involving the field $`\phi _1`$ or the field $`\phi `$ of (9) are concerned. Just as in (9), the integral over $`\phi _2`$ in (16) is divergent if $`g>4`$. Cutting this integral off and then taking the cutoff to infinity, one gets the same phase transition as we discussed before. Effectively, the integration over $`\phi _2`$ at strong disorder freezes out the fluctuations of the field $`\phi _1`$. We recall that $`\phi _2`$ was the result of bosonization of the auxiliary fields $`b`$, $`c`$. The only remaining fluctuating fields are $`\theta `$ and $`\overline{\theta }`$ of (15). It now becomes obvious what this phase transition implies for supersymmetry. Of course, after the transition the invariance under the supergroup U$`(1|1)`$, manifest in (14), is no longer there. In fact, a good way of thinking about this phase transition is to recall that according to , the action (14), after averaging over $`A_\mu `$, has a Kac-Moody U$`(1|1)`$ symmetry.
It is generated by the currents $`J`$, $`j`$, $`\eta `$, and $`\overline{\eta }`$, with the energy momentum tensor quadratic in these currents, $`T\propto \left(Jj+\eta \overline{\eta }-\overline{\eta }\eta \right)+(4-g)JJ`$. After the phase transition the currents $`J`$ and $`j`$, being just linear combinations of the fields $`\phi _1`$ and $`\phi _2`$, annihilate any physical state. The energy momentum tensor becomes $`T\propto \eta \overline{\eta }`$ and the central charge is now $`c=-2`$. The fields $`\eta `$ and $`\overline{\eta }`$ are nothing else but the derivatives of the fields $`\theta `$ and $`\overline{\theta }`$ introduced in (15). Therefore, the symmetry U$`(1|1)`$ changes to the symmetry PSU$`(1|1)`$ of the $`c=-2`$ theory. It is tempting to add that if we worked with both the advanced and retarded sectors, the U$`(2|2)`$ symmetry would change into PSU$`(2|2)`$, in the spirit of the recent proposal of . We are going to change the subject slightly and discuss the connection of this phase transition with the physics of two dimensional polymers. The polymer problem is usually defined as self avoiding random walks, with the Green’s function given by $$G(x,x^{\prime };\mu )=\sum _{\mathrm{paths}}\mu ^L,$$ (17) where the sum goes over all the nonselfintersecting trajectories connecting the points $`x`$ and $`x^{\prime }`$, $`L`$ is their length, and $`\mu `$ is a parameter usually called the fugacity (for a review, see Ref. ). When $`\mu `$ is small, the polymer Green’s function (17) is short ranged. As we increase $`\mu `$, the Green’s function becomes critical, and the theory approaches the dilute polymer critical point. If we increase $`\mu `$ even further, the sum in (17) becomes divergent. At this point, we may want to put the polymers inside a finite box, not allowing the trajectories in (17) to go outside that box. Then the sum in (17) becomes convergent in that new sense, and the theory enters a new phase, the so-called dense phase. In this phase, the polymers fill the entire space available to them, hence the name of the phase . There is a deep analogy between the dilute to dense polymer phase transition and the disordered fermion phase transition we discussed in this letter. Indeed, in the path integral representation of (17) we sum over all the trajectories connecting the points $`x`$ and $`x^{\prime }`$, penalizing those which intersect. The transition from a dilute to a dense phase will manifest itself in the divergence of this path integral, and we have to cut off the integral in the space of paths (restrict the paths to lie in a finite box) to see the dense phase, just like we did with the disordered fermions in this letter. It is quite instructive to recall that the polymer problem can also be cast in the form of a motion in the presence of disorder. It is known to be equivalent to the problem of a quantum-mechanical particle moving in a purely imaginary random potential . Studied by the replicated path integral, it can then be mapped onto the $`n\to 0`$ limit of the O$`(n)`$ model, as was first noticed in and later successfully utilized to find the critical properties of the dilute polymer phase. Moreover, in a direct analogy with disordered Dirac fermions, before the phase transition the critical properties of the dilute polymers can be captured by a version of the U$`(1|1)`$ Kac-Moody algebra . After the transition, however, the dense polymer phase is described by the same $`c=-2`$ theory as the one proposed in this letter to describe the strong disorder phase of the Dirac fermions.
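The onset of the divergence in (17) is easy to see numerically. A minimal sketch (assuming the square lattice; the exhaustive enumeration below is only practical up to lengths of order 15) counts self-avoiding walks and estimates the critical fugacity from the growth rate of the counts:

```python
def count_saw(steps, pos=(0, 0), visited=None):
    # exhaustive depth-first enumeration of self-avoiding walks on the square lattice
    if visited is None:
        visited = {(0, 0)}
    if steps == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            visited.add(nxt)
            total += count_saw(steps - 1, nxt, visited)
            visited.remove(nxt)
    return total

counts = [count_saw(L) for L in range(1, 11)]   # 4, 12, 36, 100, 284, ...
growth = [b / a for a, b in zip(counts, counts[1:])]
print(growth)   # tends to the connective constant ~2.64, so (17) diverges once mu > 1/2.64
```

Beyond that fugacity the only way to make sense of the sum is to confine the walks to a finite box, which is exactly the dense phase described above.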
Returning to our main problem, the integer quantum Hall transitions, we think it is clear now what the further course of action should be. One should study Dirac fermions in the presence of random gauge potential and other types of disorder. The strong random gauge potential effectively freezes out two bosonic degrees of freedom, leaving us with a $`c=-2`$ theory with a constant density of states. Whether that will result in a tractable theory is an open question, however. It is also not at all obvious that this theory will flow towards a PSU$`(2|2)`$ sigma model with a Wess-Zumino term, as was proposed in . However, it will flow to a theory similar at least in spirit to that sigma model. What kind of theory it will flow to definitely deserves further study. This work has been initiated as a result of inspiring discussions with Claudio de C. Chamon and Chetan Nayak. The author is also grateful to Matthew P.A. Fisher, T. Senthil, and Martin R. Zirnbauer for important comments. This work has been supported by the NSF grant PHY-94-07194.
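As a side note, the decomposition (3), on which the whole derivation rests, is easy to verify numerically on a periodic grid. A minimal sketch (hypothetical smooth random field; spectral derivatives; the constant $`k=0`$ mode cannot be represented by derivatives and is excluded):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
k = 2.0 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                # placeholder; zero mode dropped below

Ak = np.fft.fft2(rng.normal(size=(2, n, n)))  # random gauge potential A_mu in Fourier space

# A_mu = d_mu chi + eps_{mu nu} d_nu phi becomes, in Fourier space,
# A_x = i kx chi + i ky phi,  A_y = i ky chi - i kx phi, hence:
chi = -1j * (kx * Ak[0] + ky * Ak[1]) / k2    # pure gauge part
phi =  1j * (kx * Ak[1] - ky * Ak[0]) / k2    # physical part

Ax = 1j * kx * chi + 1j * ky * phi            # rebuild A from the two pieces
Ay = 1j * ky * chi - 1j * kx * phi
err = np.abs(np.stack([Ax, Ay]) - Ak)
err[:, 0, 0] = 0.0                            # drop the constant zero mode
print(err.max())                              # ~1e-12: the decomposition is exact
```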
# Diffusons, Locons, Propagons: Character of Atomic Vibrations in Amorphous Si ## I Introduction Vibrational properties of disordered media have been reviewed by various authors, in particular Elliott and Leath (1975), Weaire and Taylor (1980), Visscher and Gubernatis (1980), and Pohl (1998). Here we review and present new results in a program of numerical study of vibrations of amorphous Si. Among the new results not contained in earlier reviews are theoretical treatments of heat conductivity and thermalization rates in glasses. Harmonic normal modes of vibration can be rigorously classified as extended (E) or localized (L). In three dimensions the vibrational spectrum has sharp E/L boundaries (“mobility edges”) separating these two kinds of modes. There is another boundary, not sharp, which Mott called the “Ioffe-Regel limit” and which we call the “Ioffe-Regel crossover.” This P/D boundary separates the spectrum into a region with ballistic propagation (P) where wavevector is a reasonably good quantum number and a region with only diffusion (D) where wavevector cannot be defined but states are still extended. In region P, wave-packets can travel at sound velocity over distances of at least two or three interatomic spacings before scattering from disorder (Allen and Kelner, 1998). The distance of ballistic propagation is the mean free path $`\ell `$. In region D, only diffusive propagation occurs over any meaningful distance, and the concepts of mean free path and wavevector lose usefulness. Although it may seem natural that the Ioffe-Regel crossover should be close to the mobility edge (Alexander, 1989), this is not true for the models we have studied. Indeed, Mott and Davis (1971) emphasize the non-coincidence. Both regions P and D lie in the extended (E) part of the spectrum. The non-coincidence of the E/L boundary and the P/D boundary means that the spectrum has three kinds of states. We find useful the terminology “propagon, diffuson, and locon” given in Fig. 2. The term “phonon” is avoided because of disagreement about what it means in a glass. We instinctively seek additional labels, to replace the detailed classification scheme by wavevector and branch so useful in crystals. Here we attempt to characterize as completely as possible the dominant diffuson portion of the spectrum, but fail to find useful sub-categories. As already noticed by Kamitakahara et al. (1987), all diffuson modes of a given frequency have essentially indistinguishable displacement patterns. We study amorphous silicon in the harmonic approximation, using a model we believe to be very realistic. As shown in Fig. 1, the E/L boundary is near the top of the spectrum, with only 3% of the modes localized. The P/D boundary is near the bottom of the spectrum, with only 4% of the modes ballistically propagating. Diffusons fill 93% of the spectrum. We do not think this is special either to our model or to amorphous Si. Other models for amorphous Si (Kamitakahara et al. 1987, Lee et al., 1991, Nakhmanson and Drabold, 1998) agree that the E/L boundary occurs near the top of the spectrum. Similar results are found for other glasses (Bouchard et al. 1988; Feldman and Kluge, 1995; Cobb, Drabold, and Cappelletti, 1996; Taraskin and Elliott, 1997; Carles et al., 1998), and model systems (Sheng and Zhou, 1991; Sheng, Zhou, and Zhang, 1994; Schirmacher, Diezemann, and Ganter, 1998), with localized states occurring only near the top of the spectrum or in tails near gaps in the vibrational densities of states.
For 3-d systems with artificially large disorder, the mobility edge can be moved down to the middle of the spectrum (Canisius and van Hemmen, 1985; Fabian and Allen, 1996). The position of the P/D boundary near the bottom of the spectrum is widely accepted. ## II Low-frequency anomalies Our study of normal modes by exact diagonalization on finite-size systems inherently lacks information at low frequency. For this reason, the present paper ignores the well-known but only partially understood low-frequency anomalies in vibrational properties of glasses. Our previous work (Feldman, Allen, and Bickham, 1999) argues that the homogeneous network models we use probably contain no low-frequency anomalies. For completeness, we give here a brief catalog and point to sources for further information. “Two-level”, or “tunneling” systems were introduced by Phillips (1972) and Anderson, Halperin, and Varma (1972) motivated by experimental discoveries by Zeller and Pohl (1971). The predictive strength of this concept is beyond question, but the physical objects the theory invokes remain elusive. A review was given by Phillips (1987). The phrase “boson peak” refers to a low frequency feature seen by Raman scattering (Stolen, 1970; Jäckle, 1981) in many glasses, which is correlated with the occurrence of “excess modes” in specific heat and other spectroscopies. There are many candidate explanations for these modes. One unified view, introduced by Karpov, Klinger, and Ignat’ev (1983) is called the “soft potential model.” This model holds that glasses generically have anharmonic regions, modelled as double-well potentials. These give rise to two-level systems , relaxational behavior, and quasi-localized or resonant harmonic normal modes. The last are a logical candidate for the excess modes. The subject was reviewed by Parshin (1994). Our models contain some quasi-localized modes at low frequencies, as is mentioned further in Sec. VIII. “Floppy modes” (Phillips, 1980; Thorpe, 1983) and the daughter concept of “rigid unit modes” (RUMs: Dove et al. 1996; Trachenko et al. 1998) refer to low-frequency modes which have zero restoring force in nearest-neighbor central-force models. Constraint counting algorithms provide methods of estimating the numbers of such modes. They are expected to be quasi-localized in harmonic approximation, but intrinsically highly harmonic. Such modes probably do not play any important role in amorphous Si because of the overconstrained coordination. Finally, experimental evidence shows that amorphous Si contains usually fewer two-level-type excitations than most glasses, and that samples with essentially zero such excitations can be prepared by treatment with hydrogen (Liu et al. 1997). ## III The model Amorphous Si is an over-constrained network glass. By one usual definition, it is not a glass since it is not produced by a glass transition upon cooling. Instead, thin films are condensed on cold substrates. When the film gets too thick, crystallization cannot be prevented. The absence of a glass transition can be attributed to good kinetic properties of a single-component system with one strongly-preferred bonding arrangement. To our mind, this just means that amorphous Si is simpler than most glasses, which for our purposes is more of an advantage than a disadvantage. With obvious caution required, we think that most of the properties we shall discuss can be regarded as typical of most glasses. 
For amorphous Si we use atomic coordinates generated using the algorithm of Wooten, Winer, and Weaire (1985). We have studied models containing 216, 1000, and 4096 atoms, contained in cubic boxes of side 16.5, 28, and 44 $`\AA `$, respectively, and continued periodically to infinity. All models are built using the Keating potential (Keating, 1966) and then subsequently re-relaxed to a local minimum of the Stillinger-Weber potential (Stillinger and Weber, 1985). Different models differ in minor details, both for ordinary statistical reasons and because the algorithm was implemented slightly differently in each case. In this section we present structural properties of a 4096-atom model. This model has larger distortions from tetrahedral form than some of our other models, with 102 four-membered rings. Figure 3 shows the neighbor distributions $`𝒫_n(r)`$. The distribution $`𝒫_1`$ for the first neighbor is calculated by measuring the distance to the closest neighbor of each atom; $`𝒫_2`$ is found from the distances to the second closest neighbor. Note that the first four neighbors distribute tightly within 0.1 $`\AA `$ of the crystalline first neighbor distance, 2.35 $`\AA `$. The next 12 neighbors in crystalline Si are at 3.84 $`\AA `$, a number fixed by the bond length (2.35 $`\AA `$) and the tetrahedral bond angle (109.5°). In our model of amorphous Si, the closest of the next 12 (neighbor number 5) lies roughly between 3.2 and 3.7 $`\AA `$, while the farthest of these 12 (neighbor number 16, not shown) lies roughly between 3.9 and 4.4 $`\AA `$. This reflects a flexibility in the bond angles, with values distributed between 90° and 125°, as shown in the inset to Fig. 3. The third set of neighbors in crystalline Si is 12 atoms at 4.50 $`\AA `$. This is determined by the fact that the diamond structure has a dihedral angle of 60°, with all rings of the 6-member “chair” type. Rotation of the dihedral angle to 0° (“boat” type rings) reduces the third neighbor distance to 3.92 $`\AA `$. Our model shows no gap at all between the second shell (neighbors 5-16) and the third shell (neighbors 17-28), consistent with random dihedral angles. The sum of $`𝒫_n`$ over all $`n`$ gives the radial distribution function $`g(r)`$, plotted in Fig. 4 and compared with the experiments of Kugler et al. (1993). The close agreement is one measure of the realism of the model. However, much of the structure of $`g(r)`$ seems only to reflect atom density and nearest neighbor distance, so it may not be a very stringent test. ## IV Vibrational frequencies The other aspect of the model is the interatomic forces. We chose the model of Stillinger and Weber (1985), which is designed to work for the liquid as well as the crystalline state. This required us to relax the coordinates from Wooten to a minimum of the Stillinger-Weber potential. The stability of the minimum is proven by the positivity of all eigenvalues $`\omega ^2`$ of the dynamical matrix. The eigenfrequency distribution is shown in Fig. 1. Qualitatively satisfactory agreement is found with the neutron scattering data of Kamitakahara et al. (1987). A similar overestimate of vibrational frequencies is made when Stillinger-Weber forces are applied to crystalline Si, so we think the discrepancies should be attributed to the forces rather than the atomic coordinates.
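The diagonalization step itself is generic. The one-dimensional toy below (hypothetical unit masses and disordered nearest-neighbor springs, standing in for the actual Stillinger-Weber Hessian) shows the workflow: assemble the mass-weighted dynamical matrix, confirm that all eigenvalues $`\omega ^2`$ are non-negative, and histogram the resulting frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
k = rng.uniform(0.5, 1.5, size=N)    # spring i couples atoms i and i+1 (periodic chain)
m = np.ones(N)                       # unit masses

D = np.zeros((N, N))                 # mass-weighted dynamical matrix D_ab = Phi_ab / sqrt(m_a m_b)
for i in range(N):
    j = (i + 1) % N
    D[i, i] += k[i] / m[i]
    D[j, j] += k[i] / m[j]
    D[i, j] -= k[i] / np.sqrt(m[i] * m[j])
    D[j, i] -= k[i] / np.sqrt(m[i] * m[j])

w2 = np.linalg.eigvalsh(D)           # eigenvalues are omega^2
print(w2.min())                      # ~0: one translational zero mode, all others positive
dos, edges = np.histogram(np.sqrt(np.clip(w2, 0.0, None)), bins=50)   # frequency distribution
```

The same steps, with the real 3N-by-3N Stillinger-Weber dynamical matrix in place of the toy one, produce the eigenfrequency distribution of Fig. 1.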
## V localized states The definition of a localized state is exponential decay of the eigenvector with distance from some center $`\vec{R}_0`$: $$|\vec{ϵ}_i(\vec{R}_n)|\sim \mathrm{exp}(-|\vec{R}_n-\vec{R}_0|/\xi _i).$$ (1) This defines the localization length $`\xi _i`$ of the $`i`$-th normal mode, if the decay is observed. Fig. 5 shows selected modes, illustrating the very different character of modes from the D and the L portions of the spectrum. There is still controversy concerning the location of the mobility edge in glasses. Unfortunately, most experiments shed little light, since measured spectral properties, being averages over a macroscopic region, do not differentiate between localized and delocalized states. Heat conductivity $`\kappa (T)`$ is the property most strongly affected by localization. We think the measured $`\kappa (T)`$ strongly supports our placement of the P/D and E/L boundaries near the lower and upper edges of the spectrum. This will be discussed in the next section. If eigenvectors are calculated, then no special tricks are needed to locate the theoretical mobility edge in models of amorphous silicon. A model with 216 atoms was large enough to locate the E/L boundary between 71 and 73 meV; the precise location varies somewhat from model to model. We have made some experiments with artificially enhanced disorder, randomly scaling half of the masses by a factor of 5. This pushes the E/L boundary into the middle of the spectrum, where it becomes more blurred by size effects. Localized states of “pure” amorphous Si are easily recognized. They are trapped in especially defective regions. This was discovered (Fabian, 1997b) by defining a local coordination number $`Z_a`$, the number of neighbors of atom $`a`$ at $`\vec{R}_a`$. The following arbitrary definition of neighbor suffices: $$Z_a=\sum _bw(|\vec{R}_a-\vec{R}_b|)$$ (2) where $`w(r)`$ is 1 for $`r<2.35\AA `$, 0 for $`r>3.84\AA `$, and continuous and linear in between. This gives an average coordination of 4.7 neighbors. The “mode average coordination number” is defined as $$Z_i=\sum _aZ_a\left|\vec{ϵ}_i(a)\right|^2.$$ (3) Most modes have $`Z_i`$ near average, but localized modes mostly collect at regions with significantly higher coordination, as shown in Fig. 6. We have seen the mobility edge in many independent calculations. 1. The participation ratio $`p_i`$ (Bell and Dean, 1970), plotted in Fig. 6, gives the number of atoms on which the mode has significant amplitude. For the E=P+D part of the spectrum, the value hovers near 500 for a model with 1000 atoms. 2. The diffusivity $`D_i`$ drops to zero at the mobility edge. This is shown in the next section. 3. Sensitivity to boundary conditions is also discussed in the next section. 4. Level spacing distributions have been computed for diffusons and locons (Fabian, 1997b). As expected, diffusons obey Wigner-Dyson statistics, while locons obey Poisson statistics. 5. Eigenvector self-correlation functions were carefully studied in a 4096-atom model by Feldman et al. (1998). 6. Many quantities $`q_i`$ which can be evaluated for each mode $`i`$ seem to depend only on $`\omega _i`$ for diffusons, but become mode-specific for locons. The ones we have looked at are: (a.) The mode-average coordination number, plotted in Fig. 6. (b.) The phase-quotient parameter, discussed in section IX.C. (c.) Bond-stretching parameters (Fabian and Allen, 1997). (d.)
Grüneisen parameters for both volume and shear deformations (Fabian and Allen, 1997 and 1999). ## VI heat conductivity and diffusivity Theoretical interpretation of heat conduction in glasses has been contentious. Below the “plateau” region (typically 5-30K) it is agreed that heat is carried by ballistically propagating low-frequency modes (the P region of the spectrum). Above the plateau, $`\kappa (T)`$ rises, approaching at room temperature a constant value which is typically smaller than the crystalline value (a decreasing function of $`T`$ at room temperature). A rigorous consequence (Allen and Feldman, 1993) of the Kubo formula (Kubo, 1957) and the harmonic approximation is the relation $$\kappa (T)=\frac{1}{V}\sum _iC(\mathrm{\hbar }\omega _i/2k_BT)D_i,$$ (4) where $`C(x)`$ is the specific heat of a harmonic oscillator $`(x/\mathrm{sinh}(x))^2`$ and $`D_i`$ is the “diffusivity” of the $`i`$-th normal mode of frequency $`\omega _i`$, given by $$D_i=\frac{\pi V^2}{3\mathrm{\hbar }^2\omega _i^2}\sum _{j\ne i}|S_{ij}|^2\delta (\omega _i-\omega _j),$$ (5) where $`S_{ij}=\langle i|S|j\rangle `$ is the intermode matrix element of the heat current operator. Eq. 4 also emerges, with $`D_Q`$ equal to $`v_Q^2\tau _Q/3`$, from the Peierls-Boltzmann phonon-gas model (Gurevich, 1986) of transport in crystals. The latter model is only justified if the mean free path $`\ell _Q=v_Q\tau _Q`$ is longer than a wavelength. It was noticed by Birch and Clark (1940), and by Kittel (1948), that in glasses $`\kappa (T)`$ at $`T>`$20K could be interpreted as the specific heat $`C(T)/V`$ multiplied by a temperature-independent diffusivity $`\overline{D}`$ of order $`a^2\omega _D/3`$, where $`a`$ is an interatomic distance. In the phonon-gas model, this would correspond to $`\ell \sim a`$, too small to justify use of the model. The success of this observation implies that the dominant normal modes in a glass are of the D variety, not P because P implies $`\ell \gg a`$, and not L because L implies $`D=0`$ until anharmonic corrections are added which make $`D`$ depend on $`T`$. This successful (and we believe, essentially correct) interpretation lost favor after Anderson localization was understood, because a misconception arose that the P/D boundary (which certainly lies low in the spectrum of a glass) should lie close to the E/L boundary. Our numerical calculations of $`D_i`$ are shown in Fig. 7. Also shown are values of $`D_i`$ from a formula of Edwards and Thouless (1972), $$D_i=L^2\mathrm{\Delta }\omega _i,$$ (6) where $`\mathrm{\Delta }\omega _i`$ is the sensitivity of the eigenenergy to a twist of the boundary condition. We have simply used for $`\mathrm{\Delta }\omega _i`$ the change in $`\omega _i`$ when boundary conditions are changed from periodic to antiperiodic. The actual definition is probably $$\mathrm{\Delta }\omega _i=\lim _{\varphi \to 0}\left[\frac{\pi ^2}{\varphi ^2}\mathrm{\Delta }\omega _i(\varphi )\right]$$ (7) where $`\mathrm{\Delta }\omega _i(\varphi )`$ is the shift when the phase is twisted by $`\varphi `$. Antiperiodic boundary conditions correspond to $`\varphi =\pi `$, while $`\varphi =2\pi `$ returns to periodic boundary conditions with $`\mathrm{\Delta }\omega _i(2\pi )=0`$. Therefore our calculation, which is the only one easily accessible for us, gives a probable upper bound to $`D_i`$ for each mode $`i`$. Inspection of Fig. 7 shows that with this interpretation, the two calculations agree reasonably well.
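Equation (4) is straightforward to evaluate once mode frequencies and diffusivities are tabulated. A minimal sketch (hypothetical flat spectrum and constant diffusivity standing in for the actual $`\omega _i`$ and $`D_i`$ of Fig. 7; SI units):

```python
import numpy as np

kB, hbar, eV = 1.380649e-23, 1.054571817e-34, 1.602176634e-19

def kappa(T, omega, D, V):
    """Harmonic heat conductivity of Eq. (4): (1/V) sum_i C(hbar w_i / 2 kB T) D_i,
    with C(x) = kB (x / sinh x)^2 the specific heat of one oscillator."""
    x = hbar * omega / (2.0 * kB * T)
    return np.sum(kB * (x / np.sinh(x))**2 * D) / V

omega = np.linspace(10e-3, 70e-3, 3 * 1000) * eV / hbar   # 3N modes at 10-70 meV, in rad/s
D     = np.full_like(omega, 1.0e-6)                       # flat diffusivity, 1 mm^2/s
V     = (28.0e-10)**3                                     # 1000-atom box, 28 Angstrom on a side
for T in (30.0, 100.0, 300.0):
    print(T, kappa(T, omega, D, V))
```

With these stand-in numbers the high-temperature value comes out near 2 W/(m K), the right order of magnitude for amorphous Si.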
Both calculations of $`D_i`$ go to zero at the mobility edge, and both become large and ragged in the P region below 10 meV. The raggedness comes from the sparseness of the eigenstates at low $`\omega `$, and the large values reflect the onset of ballistic propagation. In the D region above 12 meV, values have collapsed to the range of 1 mm<sup>2</sup>/s, which corresponds to $`\omega _Da^2/3`$, with $`\omega _D`$ set to 50 meV and $`a=2\AA `$. This diffusivity is well below any value that could be allowed in a phonon-gas picture, and agrees with the measured $`\kappa (T)`$ (Allen and Feldman, 1990; Feldman et al., 1993). The peak of $`D_i`$ around 33 meV corresponds to a feature in the “phase-quotient” that will be discussed in section IX.C. Similar results for vitreous SiO<sub>2</sub> have been reported by Feldman and Kluge (1995). When $`\kappa (T)`$ is calculated from Eq. 4, using the values of $`D_i`$ from Fig. 7, the results, shown in Fig. 8, agree roughly with the data at higher temperatures. At low temperature it is necessary to have an additional source of heat current, the ballistically propagating long-wavelength modes. In Fig. 8, this has been added in a thoroughly ad hoc fashion. We have simply assumed a Debye spectrum for the modes with $`\omega <\omega _0=`$3 meV, and a temperature-independent diffusivity $`D(\omega )=D_0(\omega _0/\omega )^2`$. There is no theoretical justification for this. In principle, the temperature-independent diffusivity caused by glassy disorder should take the Rayleigh $`(\omega _0/\omega )^4`$ form at low $`\omega `$, and one needs a stronger type of scattering, inelastic, and therefore $`T`$-dependent, to match the data. However, the $`(\omega _0/\omega )^2`$ behavior has been seen at intermediate frequencies, both experimentally (Sette et al. 1998) and numerically (Dell’Anna et al. 1998; Feldman et al. 1999), so we have used this simpler fitting procedure to match the data. Our most important conclusion is that the “re-increase” of thermal conductivity above the plateau region is attributable to heat carried by “diffuson” modes in much the way imagined by Birch, Clark, and Kittel, and that the plateau is a simple crossover region, not requiring any new physics to explain. In particular, we believe that “excess modes” (also known as a “Boson peak”) are not a necessary ingredient to explain the plateau. Amorphous silicon seems to lack these “excess modes” but still to have a plateau. ## VII thermal equilibration There is some evidence suggesting that vibrations in glasses, if disturbed from equilibrium, may return very slowly. For amorphous silicon, experiments were reported by Scholten and Dijkhuis (1996) and by Scholten, Akimov, and Dijkhuis (1996). Our investigations show that if the disturbance is not too large and is purely vibrational, then the rate of return to equilibrium should be as fast as, if not faster than, in a corresponding crystal. Surprisingly, we find that this is true both in the locon and in the diffuson portion of the spectrum, contradicting a view (Orbach, 1996) supported by fracton theory (Alexander, 1989) that localized vibrations must equilibrate slowly. In thermal equilibrium, the harmonic vibrational eigenstates have average population given by the Bose-Einstein distribution. Fabian and Allen (1996) used anharmonic perturbation theory to calculate the inverse lifetime or equilibration rate $`1/\tau `$ by which a vibrational state returns to the Bose-Einstein distribution if driven out of equilibrium. Their results are shown in Fig. 9.
The validity of the perturbation theory is confirmed both by the smallness of the ratio $`1/\omega \tau `$ and by a direct test in the classical regime using molecular dynamics by Bickham and Feldman (1998) and Bickham (1999). It can be seen in Fig. 9 that no change occurs in the size of $`1/\tau `$ at the mobility edge. A careful treatment of locons does not support the idea that they thermalize slowly. Their anharmonic thermalization rates (Fabian, 1997a) are as fast as, or even faster than, those of diffusons, and also comparable to, or faster than, the corresponding thermalization rates of vibrations in crystals. The source of the misconception of slow equilibration is the idea that direct decay of a locon into two locons should be negligible. This idea fails because, unlike, for example, band-tail electronic states in amorphous Si, the vibrational states are not at all dilute. Slow thermalization rates (forbidden by all theories we understand) could be tested by looking for a contribution from thermal vibrations to the attenuation of very high-frequency sound (Fabian and Allen, 1999). If the thermalization rate is extremely slow, this contribution to the attenuation would be greatly enhanced. ## VIII resonant modes Inspection of Fig. 6 shows that a few modes in the D region and somewhat more in the P region have anomalously small participation ratios, of order 100 out of the 1000 atoms available. These states are not exponentially localized (Fabian, 1997b; Feldman et al., 1999) but are temporarily trapped in regions of peculiar coordination, from which they can tunnel into the continuum of extended states. Such states were first reported by Biswas et al. (1988) in a small model similar to ours; a model with larger numbers of 3- and 5-fold coordinated atoms had more such modes, which were speculated to bear some relation to the “floppy modes” of Phillips and Thorpe. Such modes were studied in detail by Schober and coworkers (1988, 1991) and Oligschleger and Schober (1999). We have recently argued (Feldman et al., 1999) that, in our (mostly 4-fold coordinated) models of amorphous Si, such states tend to disappear as the size of the model gets bigger, presumably because each such mode is trapped only in a very specific peculiar region. As the number of atoms in the model increases, so does the number of peculiar regions, but if each resonant mode is trapped in only one region, the fraction of time spent outside that region increases because the volume outside that region has increased. On the other hand, such modes, especially the ones in the P region, may be more pronounced in real amorphous Si and other real glasses than they are in the models we study. This is because our models are spatially homogeneous on scales greater than $`4\AA `$, while real glasses may have mesoscopic defects such as voids which would attract more such modes. Fabian and Allen (1997, 1999) found that the resonant modes have giant ($`\sim 40`$) Grüneisen parameters $`\gamma _i`$. These parameters measure the sensitivity of $`\omega _i`$ to macroscopic strain. In a glass (just as in a crystal where positions of atoms are not all fixed by crystallographic symmetry) strains cause not just a homogeneous shift of atomic coordinates, but also an additional local relaxation, which turns out to be particularly large in just those peculiar regions where the resonant modes are temporarily trapped.
Anomalously large values of $`\gamma _i`$ play an important role in explaining the anomalously large and sample-dependent measured low-$`T`$ thermal expansion of glasses, and also should show up in enhanced contributions to the attenuation of high-$`\omega `$ sound waves at higher $`T`$. ## IX character of diffusons The most important property which distinguishes diffusons is their intrinsic diffusivity $`D_i`$ with values of order $`\omega _Da^2/3`$. If wave-packets were constructed in such a way as to be approximately monochromatic, and simultaneously localized at the center of a cell with a reasonably small radius (perhaps 6-8 $`\AA `$), then we believe that no matter how well directed such a pulse was at $`t=0`$, the center of the pulse would hardly move, and the radius would evolve as $`\langle r^2\rangle =6Dt`$ for all times until reaching the cell boundary, where it would interfere with its periodic image. Unfortunately, a 44 $`\AA `$ cell is only marginally big enough, and computational difficulties have so far prevented us from performing this experiment. Here we describe our efforts to find other ways to characterize diffusons. ### A wavevector At the P/D boundary, wavevectors become ill-defined. Fig. 10 shows a test. The squared Fourier weight is defined as $$w_i(\vec{Q})=\left|\sum _{\ell }e^{i\vec{Q}\cdot \vec{R}(\ell )}\vec{ϵ}_i(\vec{R}(\ell ))\right|^2$$ (8) where $`\vec{Q}`$ is chosen as $`(2\pi /L)(h,k,l)`$ with integer $`h,k,l`$ so that the periodic images interfere constructively. We define $`w_i(Q)`$ as $`w_i(\vec{Q})`$ averaged over spherical shells of wave vector of width $`0.2\times 2\pi /L`$. The value $`Q=9.2\times 2\pi /L`$ corresponds to neighboring atoms being completely out of phase. The 51 meV diffusons show a peak Fourier content near $`8\times 2\pi /L`$, but the peak height is less than twice a “background” value found at larger $`Q`$ which dominates the behavior. There is no ballistic character to these modes. ### B polarization Diffusons have no wave vector. Not surprisingly, they also lack a polarization, as is shown in Fig. 11. Propagons, by definition, have a wavevector. The nature of the propagons in the 4096 atom model was examined by Feldman et al. (1999). As shown there, the modes near $`\omega =`$3.5 meV have well-defined TA character, with the smallest possible wave vector $`Q=2\pi /L`$. Fig. 11 shows that these modes have only a limited preference for a direction of polarization. Similarly, the mode at 7.2 meV has $`Q=2\pi /L`$ and LA character, but not much polarization. Polarization directions of diffusons wander uniformly over the unit sphere, and one may ask what is the spatial range of decay of polarization memory in a mode. This is shown in Fig. 12. In a crystal, one has surfaces in $`Q`$-space where modes have constant frequency $`\omega (\vec{Q})=`$const. A single eigenstate is some arbitrary linear combination of the degenerate Bloch waves on this surface. If the surfaces were spherical and the linear combinations random, we would expect that $`\langle \vec{ϵ}(\vec{r}_i)\cdot \vec{ϵ}(\vec{r}_j)\rangle `$ would fall off spatially as $`\mathrm{cos}(Qr_{ij})/r_{ij}`$. Fig. 12 shows that for diffusons at 50 meV some polarization memory, but much less than expected for a crystal, persists out to 12 $`\AA `$, while higher-$`\omega `$ diffusons lose polarization memory more rapidly.
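The Fourier test of Eq. (8) takes only a few lines to reproduce. A minimal sketch (hypothetical random positions and a random normalized eigenvector standing in for the real 216-atom data):

```python
import numpy as np

def fourier_weight(R, eps, L, nmax=8):
    # squared Fourier weight of one mode, Eq. (8), on the allowed Q = (2 pi / L)(h, k, l)
    out = []
    for h in range(-nmax, nmax + 1):
        for k in range(-nmax, nmax + 1):
            for l in range(-nmax, nmax + 1):
                Q = (2.0 * np.pi / L) * np.array([h, k, l])
                v = np.sum(np.exp(1j * (R @ Q))[:, None] * eps, axis=0)  # sum over atoms
                out.append((np.linalg.norm(Q), np.sum(np.abs(v)**2)))
    return np.array(out)             # (|Q|, w) pairs, ready for shell averaging

rng = np.random.default_rng(0)
N, L = 216, 16.5
R = rng.uniform(0.0, L, size=(N, 3))             # stand-in atomic positions
eps = rng.normal(size=(N, 3))                    # stand-in "diffuson-like" eigenvector
eps /= np.linalg.norm(eps)
data = fourier_weight(R, eps, L, nmax=4)
```

A real diffuson eigenvector run through the same loop gives the weak, broad peak described above, while a propagon concentrates its weight at a single small $`|Q|`$.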
### C phase quotient The “phase quotient” $`\varphi _i`$ was defined by Bell and Hibbins-Butler (1975) as $$\varphi _i=\frac{\sum _{<a,b>}\vec{ϵ}_i(a)\cdot \vec{ϵ}_i(b)}{\sum _{<a,b>}\left|\vec{ϵ}_i(a)\cdot \vec{ϵ}_i(b)\right|},$$ (9) and is plotted in Fig. 13. For low $`\omega _i`$, values near 1 indicate that nearest neighbor atoms (the only ones summed in Eqn. 9) vibrate mostly in-phase, while for high $`\omega _i`$, values near -1 indicate that nearest neighbors vibrate mostly out-of-phase. Like so many other properties, this depends only on $`\omega _i`$ until the E/L boundary is crossed, but is very mode-specific for locons. The sharp rise at $`\omega _i\sim `$29 meV is interesting, and may help explain why at the same frequency $`D_i`$ (Fig. 7) has a sudden rise. In crystalline Si, at approximately the same point in the spectrum, the TA branch ends and the density of states has a local minimum. Thus 29 meV marks a point in the spectrum where diffusons change character from bond-bending (somewhat TA-like) with relatively high frequency because of large phase difference from atom to atom, to bond-stretching (somewhat LA-like) with not such a large phase difference but an equally high frequency because the bond-stretching forces are bigger than the bond-bending forces. Apparently the latter kind of mode has greater diffusivity by a factor 2 or more. In a crystal we attribute this to a larger group velocity of the LA branch and a smaller density of states for decay by elastic scattering. Neither of these properties can be properly invoked for diffuson modes in a glass, but apparently similar physics is somehow at work. ## X summary Since 95% of the states in amorphous Si, and probably many other glasses as well, are diffusons, we should understand their properties. All diffusons at a given frequency $`\omega `$ seem essentially identical. As $`\omega `$ changes, their properties evolve, mostly smoothly, but the sudden jump in diffusivity and in phase quotient at 29 meV shows that not all variability is lost. ## ACKNOWLEDGMENTS We thank S. Bickham and K. Soderquist for relaxing the coordinates, and B. Davidson for providing eigenvalues and eigenvectors of the 4096 atom model. Work of PBA was supported in part by NSF grant no. DMR-9725037. Work of JF was supported by the U. S. Office of Naval Research. REFERENCES Alexander, S., Laermans, C., Orbach, R., and Rosenberg, H. M., 1983, Phys. Rev. B 28, 4615; Orbach, R., and Jagannathan, A., 1993, J. Phys. Chem. 98, 7411. Alexander, S., 1989, Phys. Rev. B 40, 7953. Allen, P. B., and Feldman, J. L., 1993, Phys. Rev. B 48, 12581. Allen, P. B., and Kelner, J., 1998, Am. J. Phys. 66, 497. Anderson, P. W., Halperin, B. I., and Varma, C. M., 1972, Philos. Mag. 25, 1. Bell, R. J., and Dean, P., 1970, Disc. Faraday Soc. 50, 55. Bell, R. J., and Hibbins-Butler, D. C., 1975, J. Phys. C 8, 787. Bickham, S. R., and Feldman, J. L., 1998, Phys. Rev. B 57, 12234 and Phil. Mag. B 77, 513. Bickham, S. R., 1999, Phys. Rev. B 59, 4894. Birch, F., and Clark, H., 1940, Am. J. Sci. 238, 529 and 612. Biswas, R., Bouchard, A. M., Kamitakahara, W. A., Grest, G. S., and Soukoulis, C. M., 1988, Phys. Rev. Letters 60, 2280. Bouchard, A. M., Biswas, R., Kamitakahara, W. A., Grest, G. S., and Soukoulis, C. M., 1988, Phys. Rev. B 38, 10499. Cahill, D. G., Fischer, H. E., Klitsner, T., Swartz, E. T., and Pohl, R. O., 1989, J. Vac. Sci. Technol. A 7, 1259. Cahill, D. G., Katiyar, M., and Abelson, J. R., 1994, Phys. Rev. B 50, 6077. Canisius, J., and van Hemmen, J. L., 1985, J.
Phys. C 18, 4873. Carles, R., Zwick, A., Moura, C. and Djafari-Rouhani, M., 1998, Phil. Mag. 77, 397. Cobb, M., Drabold, D. A., and Cappelletti, R. L., 1996, Phys. Rev. B 54, 12162. Dell’Anna, R., Ruocco, G., Sampoli, M., and Viliani, G., 1998, Phys. Rev. Letters 80, 1236. Dove, M. T., Giddy, A. P., Heine, V., and Winkler, B., 1996, Am. Mineral. 81, 1057. Edwards, J. T., and Thouless, D. J., 1972, J. Phys. C. 5, 807. Elliott, R. J., and Leath, P. L., 1975, in Dynamical Properties of Solids, (eds. G. K. Horton and A. A. Maradudin), North Holland, Amsterdam; vol. 2, p.385. Fabian, J., and Allen, P. B., 1996, Phys. Rev. Letters 77, 3839. Fabian, J., 1997a, Phys. Rev. B 55, R3328. Fabian, J., 1997b, Ph.D. dissertation, SUNY Stony Brook. Fabian, J., and Allen, P. B., 1997, Phys. Rev. Letters 79, 1885. Fabian, J., and Allen, P. B., 1999, Phys. Rev. Letters 82, 1478. Feldman, J. L., Kluge, M. D., Allen, P. B., and Wooten, F., 1993, Phys. Rev. B 48, 12589. Feldman, J. L., and Kluge, M. D., 1995, Phil. Mag. B 71, 641. Feldman, J. L., Allen, P. B., and Bickham, S. R., 1999, Phys. Rev. B 59, 3551. Feldman, J. L., Bickham, S. R., Engel, G. E., and Davidson, B. N., 1998, Philos. Mag. B 77, 507. Gurevich, V. L., 1986, Transport in Phonon Systems, Elsevier Science Pub. Co., Amsterdam. Jäckle, J., 1981, in Amorphous Solids: Low Temperature Properties, (ed. W. A. Phillips) Springer Verlag, Berlin; p.135. Kamitakahara, W. A., Soukoulis, C. M., Shanks, H. R., Buchenau, U., and Grest, G. S., 1987, Phys. Rev. B 36, 6539. Karpov, V. G., Klinger, M. I., and Ignat’ev, F. N., 1983, Zh. Eksp. Teor. Fiz. 84, 760 [Sov. Phys. JETP 57, 439]. Keating, P. N., 1966, Phys. Rev. 145, 637. Kittel, C., 1948, Phys. Rev. 75, 972. Kubo, R., 1957, J. Phys. Soc. Jpn. 12, 570. Kugler, S., Pusztai, L., Rosta, L., Chieux, P., and Bellisent, R., 1993, Phys. Rev. B 48, 7685. Lee, Y. H., Biswas, R., Soukoulis, C. M., Wang, C. Z., Chan, C. T., and Ho, K. M., 1991, Phys. Rev. B 43, 6573. Liu, X., White, B. E. Jr., Pohl, R. O., Iwanizcko, E., Jones, K. M., Mahan, A. H., Nelson, B. N., Crandall, R. S., and Veprek, S., 1997, Phys. Rev. Letters 78, 4419. Mott, N. F., and Davis, E. A., 1971, Electronic Processes in Non-Crystalline Materials, Clarendon Press, Oxford; p.24. Nakhmanson, S. M., and Drabold, D. A., 1998, Phys. Rev. B 58, 15325. Oligschleger, C., and Schober, H. R., 1999, Phys. Rev. B 59, 811. Orbach, R., 1996, Physica B 220, 231. Parshin, D. A., 1994, Phys. Solid State 36, 991. Phillips, J. C., 1980, Phys. Stat. Solidi (b) 101, 473. Phillips, W. A., 1972, J. Low Temp. Phys. 7, 351. Phillips, W. A., 1987, Rep. Prog. Phys. 50, 1657. Pohl, R. O., 1998, in Encyclopedia of Applied Physics, Wiley-VCH, vol. 23, p.223. Pompe, G. and Hegenbarth, E., 1988, Phys. Status Solidi B 147, 103. Schirmacher, W., Diezemann, G., and Ganter, C., 1998, Phys. Rev. Letters 81, 136. Schober, H. R., and Laird, B., 1991, Phys. Rev. B 44, 6746. Schober, H. R., and Oligschleger, C., 1996, Phys. Rev. B 53, 11469. Scholten, A. J., and Dijkhuis, J. I., 1996, Phys. Rev. B 53, 3837. Scholten, A. J., Akimov, A. V., and Dijkhuis, J. I., 1996, Phys. Rev. B 54, 12151. Sette, F., Krisch, M., Masciovecchio, C., Ruocco, G., and Monaco, G., 1998, Science 280, 1550. Sheng, P., and Zhou, M., 1991, Science 253, 539. Sheng, P., Zhou, M., and Zhang, Z.-Q., 1994, Phys. Rev. Letters 72, 234. Stillinger, F. H., and Weber, T. A., 1985, Phys. Rev. B 31, 5262. Stolen, R. H., 1970, Phys. Chem. Glasses 11, 83. Taraskin, S. N., and Elliott, S. R., 1997, Phys. Rev. B 56, 8605.
Thorpe, M. F., 1983, J. Non-Cryst. Solids 57, 355. Trachenko, K., Dove, M. T., Hammonds, K. D., Harris, M. J., and Heine, V., 1998, Phys. Rev. Letters 81, 3431. Visscher, W. M., and Gubernatis, J. E., 1980, in Dynamical Properties of Solids, (eds. G. K. Horton and A. A. Maradudin), North Holland, Amsterdam; vol. 4, p.63. Weaire, D., and Taylor, P. C., 1980, in Dynamical Properties of Solids, (eds. G. K. Horton and A. A. Maradudin), North Holland, Amsterdam; vol. 4, p.1. Wooten, F., Winer, K., and Weaire, D., 1985, Phys. Rev. Letters 54, 1392. Zeller, R. C. and Pohl, R. O., 1971, Phys. Rev. B 4, 2029.
# MKPH-T-99-17 Coherent pion electroproduction on the deuteron in the Δ(1232) resonance region (supported by the Deutsche Forschungsgemeinschaft (SFB 443)) ## I Introduction Electromagnetic meson production off nuclei has become a major topic in medium energy physics. It offers the possibility to study the elementary production process in a strongly interacting medium, thus allowing one to investigate in detail, for example, possible modifications of the elementary production process due to the surrounding medium. Another, complementary aspect is to study the production on the neutron, which as a free process is in most cases not accessible. One exception is the radiative $`\pi ^+`$ capture on the proton. Thus it is not surprising that in recent years photo- and electroproduction of $`\pi `$ and $`\eta `$ mesons on nuclei have been studied intensively (see for example recent work in ) with particular emphasis on testing the elementary production amplitude of the neutron. This renewed interest has been triggered primarily by the significant improvement in the quality of experimental data obtained with the new generation of high-duty-cycle electron accelerators like, e.g., ELSA in Bonn and MAMI in Mainz . Pion photo- and electroproduction on the deuteron is especially interesting with respect to the two complementary aspects mentioned above since, first, it allows one to study this reaction on a bound nucleon in the simplest nuclear environment, so that one can take into account medium effects in a reliable manner, at least in the nonrelativistic domain. Secondly, it provides important information on this reaction on the neutron. In view of this latter aspect, the deuteron is often considered as a neutron target, assuming that binding effects can be neglected to a large extent. Most of the theoretical work has concentrated on the photo reaction (see Refs. where also references to earlier work can be found) while very few studies of the corresponding electroproduction process exist. The latter has been considered briefly in Ref. with respect to target asymmetries for two specific kinematic settings, while in Ref. the role of exchange contributions in the longitudinal form factor near threshold has been investigated. But a systematic study is still missing. The aim of the present work is to initiate such a systematic study of the $`d(e,e^{\prime }\pi ^0)d`$ reaction from threshold through the $`\mathrm{\Delta }(1232)`$ resonance, similar to what has been done already for the corresponding photoproduction process . As a first step, we will restrict ourselves in this work to the impulse approximation (IA), where the production takes place at one nucleon only, in order to study details of the elementary reaction amplitude. Furthermore, we are interested mainly in the $`\mathrm{\Delta }`$ resonance region, because close to threshold the elementary operator used in this work does not give a realistic description. With respect to the elementary production operator, we follow essentially the treatment of Ref. , supplementing the transverse current by charge and longitudinal contributions and including electromagnetic form factors at the elementary photon vertices. Any pion rescattering effects or two-body contributions to the general current operator are neglected completely. Their treatment will be deferred to a later study.
The paper is organized as follows: In the next section we present the theoretical framework for the coherent electroproduction process on the deuteron with the definition of the structure functions which determine the differential cross section. Furthermore, we will briefly discuss a set of nonrelativistic amplitudes which form a basis for the expansion of the general $`T`$-matrix with appropriate invariant functions as coefficients. A derivation of this set is given in the Appendix. Then we discuss in Sect. III the impulse approximation which serves as a calculational basis for the evaluation. In this section we also outline the elementary operator used in this work. In Sect. IV we present and discuss the results for the structure functions and form factors in various kinematic regions. Finally, we give a brief summary and an outlook. ## II General formalism In this section we will briefly outline the formalism of coherent electroproduction of neutral pions on deuterium which in the one-photon approximation can be described as the absorption of a virtual photon, i.e. $$\gamma ^{}(k)+d(p)\rightarrow d(p^{})+\pi ^0(q),$$ (1) where the momenta of the initial virtual photon and deuteron are denoted by $`k`$ and $`p`$, and the final deuteron and pion momenta by $`p^{}`$ and $`q`$, respectively. We will consider this reaction in the photon-deuteron ($`\gamma ^{}d`$) c.m. frame. There we choose the $`z`$-axis along the virtual photon momentum $`(\stackrel{}{e}_z=\widehat{k}=\stackrel{}{k}/k)`$, the $`y`$-axis parallel to $`\stackrel{}{k}\times \stackrel{}{q}`$ and the $`x`$-axis such as to form a right-handed system. Thus the outgoing $`\pi `$ meson is described by the spherical angles $`\varphi `$ and $`\theta `$ with $`\mathrm{cos}\theta =\widehat{q}\cdot \widehat{k}`$. In the one-photon approximation the differential cross section for the production process using unpolarized electrons and an unpolarized target is given by $`{\displaystyle \frac{d\sigma }{dk_2^{lab}d\mathrm{\Omega }_e^{lab}d\mathrm{\Omega }_{\pi ^0}^{c.m.}}}`$ $`=`$ $`c\{\rho _Lf_L+\rho _Tf_T+\rho _{LT}f_{LT}\mathrm{cos}\varphi +\rho _{TT}f_{TT}\mathrm{cos}2\varphi \}.`$ (2) Here $`c`$ is a kinematic factor $$c(k_1^{lab},k_2^{lab})=\frac{\alpha }{6\pi ^2}\frac{k_2^{lab}}{k_1^{lab}k_\nu ^4},$$ (3) where $`k_{1/2}^{lab}`$ denote the incoming and scattered electron momenta in the lab frame, $`\alpha `$ the fine structure constant, and $`k_\nu ^2=k_0^2-\stackrel{}{k}^2`$ the four-momentum transfer squared $`(k=k_1-k_2)`$. $`\rho _L=\beta ^2k_\nu ^2{\displaystyle \frac{\xi ^2}{2\eta }},`$ $`\rho _{LT}=\beta k_\nu ^2{\displaystyle \frac{\xi }{\eta }}\sqrt{{\displaystyle \frac{\xi +\eta }{8}}},`$ (4) $`\rho _T={\displaystyle \frac{1}{2}}k_\nu ^2(1+{\displaystyle \frac{\xi }{2\eta }}),`$ $`\rho _{TT}=k_\nu ^2{\displaystyle \frac{\xi }{4\eta }},`$ (5) with $$\beta =\frac{|\stackrel{}{k}^{lab}|}{|\stackrel{}{k}^c|},\xi =\frac{k_\nu ^2}{(\stackrel{}{k}^{lab})^2},\eta =\mathrm{tan}^2(\frac{\theta _e^{lab}}{2}),$$ (6) where $`\theta _e^{lab}`$ denotes the electron scattering angle in the lab system, $`\beta `$ expresses the boost from the lab system to the frame in which the $`T`$-matrix is evaluated, and $`\stackrel{}{k}^c`$ denotes the momentum transfer in this frame. The structure functions in (2) are given in terms of the reduced reaction matrix elements $`t_{m^{}\mu m}`$ which are defined by the general $`T`$-matrix element of the current operator in the c.m.
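For orientation, the short Python sketch below evaluates the kinematic factor $`c`$ of Eq. (3) and the weights of Eqs. (4)–(5) from the lab electron kinematics. It is a minimal illustration and not part of the original calculation: the function and variable names are ours, the electron mass is neglected, and we insert the positive combination $`Q^2=-k_\nu ^2`$ so that all weights come out positive — the overall sign conventions should be checked against the definitions above.

```python
import math

def virtual_photon_weights(k1_lab, k2_lab, theta_e_lab, beta):
    """Kinematic factor c (Eq. (3)) and weights rho_* (Eqs. (4)-(5)).

    k1_lab, k2_lab : incoming / scattered electron momenta in the lab (GeV),
    theta_e_lab    : electron scattering angle in the lab (rad),
    beta           : boost factor |k_lab| / |k_c| of Eq. (6).
    Electron mass neglected, so omega = k1 - k2.
    """
    alpha = 1.0 / 137.036
    omega = k1_lab - k2_lab
    k_lab_sq = k1_lab**2 + k2_lab**2 - 2.0 * k1_lab * k2_lab * math.cos(theta_e_lab)
    k_nu_sq = omega**2 - k_lab_sq        # four-momentum transfer squared (< 0, spacelike)
    Q2 = -k_nu_sq                        # positive combination used below (our convention)
    xi = Q2 / k_lab_sq
    eta = math.tan(theta_e_lab / 2.0)**2
    c = alpha / (6.0 * math.pi**2) * k2_lab / (k1_lab * k_nu_sq**2)
    rho_L = beta**2 * Q2 * xi**2 / (2.0 * eta)
    rho_T = 0.5 * Q2 * (1.0 + xi / (2.0 * eta))
    rho_LT = beta * Q2 * (xi / eta) * math.sqrt((xi + eta) / 8.0)
    rho_TT = Q2 * xi / (4.0 * eta)
    return c, rho_L, rho_T, rho_LT, rho_TT

print(virtual_photon_weights(0.855, 0.500, math.radians(35.0), 1.0))
```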
frame $`T_{m^{}\mu m}(W_{\gamma ^{}d},\theta ,\varphi )`$ $`=`$ $`\sqrt{{\displaystyle \frac{\alpha |\stackrel{}{q}|E_k^dE_q^d}{4\pi m_dW_{\gamma ^{}d}}}}\langle 1m^{}|\stackrel{~}{J}_\mu (\stackrel{}{k}-\stackrel{}{q})|1m\rangle `$ (7) $`=`$ $`e^{i(\mu +m)\varphi }t_{m^{}\mu m}(W_{\gamma ^{}d},\theta ),`$ (8) where the initial and final deuteron spin projections are denoted by $`m`$ and $`m^{}`$, respectively, the photon polarization by $`\mu `$, and initial and final deuteron c.m. energies by $`E_k^d`$ and $`E_q^d`$, respectively, with $`E_p^d=\sqrt{m_d^2+\stackrel{}{p}^2}`$ and $`m_d`$ as deuteron mass. Furthermore, $`W_{\gamma ^{}d}`$ denotes the invariant mass of the $`\gamma ^{}d`$ system and $`k`$ and $`q`$ the photon and $`\pi ^0`$ momenta, respectively, in the $`\gamma ^{}d`$ c.m. system. In detail one has $`f_L`$ $`=`$ $`\sum _{m^{},m}\mathrm{Re}(t_{m^{}0m}^{}t_{m^{}0m}),f_T=2\sum _{m^{},m}\mathrm{Re}(t_{m^{}1m}^{}t_{m^{}1m}),`$ (9) $`f_{LT}`$ $`=`$ $`4\sum _{m^{},m}\mathrm{Re}(t_{m^{}0m}^{}t_{m^{}1m}),f_{TT}=2\sum _{m^{},m}\mathrm{Re}(t_{m^{}-1m}^{}t_{m^{}1m}).`$ (10) The structure functions are functions of the invariant mass $`W_{\gamma ^{}d}`$, the squared three-momentum transfer $`(\stackrel{}{k}^{c.m.})^2`$ and the angle $`\theta `$ between $`\stackrel{}{k}`$ and the momentum $`\stackrel{}{q}`$ of the outgoing pion, i.e. $$f_\alpha =f_\alpha (W_{\gamma ^{}d},(\stackrel{}{k}^{c.m.})^2,\theta )\text{ for }\alpha \in \{L,T,LT,TT\}.$$ (11) The inclusive cross section obtained by integrating over the pion angles is determined by the longitudinal and transverse form factors $`F_L`$ and $`F_T`$, respectively, which are defined by $$F_{L/T}(W_{\gamma ^{}d},(\stackrel{}{k}^{c.m.})^2)=\int d\mathrm{\Omega }_q\,f_{L/T}(W_{\gamma ^{}d},(\stackrel{}{k}^{c.m.})^2,\theta ).$$ (12) In analogy to the CGLN amplitudes of e.m. pion production on a nucleon, a set of 13 basic covariant amplitudes $`\mathrm{\Omega }_\alpha `$ has been derived which allows a representation of the $`T`$-matrix as a linear superposition of these amplitudes with invariant functions $`F_\alpha (s,t,u)`$ as coefficients depending on the Mandelstam variables only $$T=\sum _\alpha F_\alpha (s,t,u)\mathrm{\Omega }_\alpha .$$ (13) An equivalent nonrelativistic set of transverse $`𝒪_{T,\beta }`$ and longitudinal $`𝒪_{L,\beta }`$ operators has also been given, which is listed explicitly in Table I. Here $`\stackrel{}{S}`$ denotes the spin operator for an $`(S=1)`$ particle and $`S^{[2]}`$ the corresponding tensor operator defined by $$S^{[2]}=[S^{[1]}\times S^{[1]}]^{[2]}$$ (14) in the notation of Fano and Racah . The operators $`\mathbb{𝟙}_3`$, $`S^{[1]}`$, and $`S^{[2]}`$ form a complete set of $`(3\times 3)`$-matrices in $`(S=1)`$-space. Furthermore, in Table I $`\widehat{k}`$ and $`\widehat{q}`$ denote corresponding unit vectors and with $`\widehat{k}=k^{[1]}`$ $$k^{[2]}=[k^{[1]}\times k^{[1]}]^{[2]}.$$ (15) In the Appendix, we give an independent proof that this set forms a complete basis for the representation of the $`T`$-matrix for the process under consideration. In the present nonrelativistic framework this set is more appropriate and one can represent the $`T`$-matrix correspondingly $$T_{m^{}\mu m}=\delta _{|\mu |1}\sum _{\beta =1}^9f_\beta ^T(𝒪_{T,\beta })_{m^{}\mu m}+\delta _{\mu 0}\sum _{\beta =1}^4f_\beta ^L(𝒪_{L,\beta })_{m^{}\mu m}$$ (16) with appropriate functions $`f_\beta ^{L/T}`$.
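The structure functions of Eqs. (9)–(10) are simple bilinear combinations of the reduced matrix elements. As a bookkeeping illustration (our own, with an assumed array layout `t[m'+1, mu+1, m+1]` for projections in {−1, 0, +1}), one could evaluate them as follows:

```python
import numpy as np

def structure_functions(t):
    """f_L, f_T, f_LT, f_TT of Eqs. (9)-(10).

    t : complex array indexed as t[m'+1, mu+1, m+1] with the deuteron
        projections m', m and the photon helicity mu all in {-1, 0, +1}.
    """
    tt = lambda mp, mu, m: t[mp + 1, mu + 1, m + 1]
    f_L = f_T = f_LT = f_TT = 0.0
    for mp in (-1, 0, 1):
        for m in (-1, 0, 1):
            f_L += np.real(np.conj(tt(mp, 0, m)) * tt(mp, 0, m))
            f_T += 2.0 * np.real(np.conj(tt(mp, 1, m)) * tt(mp, 1, m))
            f_LT += 4.0 * np.real(np.conj(tt(mp, 0, m)) * tt(mp, 1, m))
            f_TT += 2.0 * np.real(np.conj(tt(mp, -1, m)) * tt(mp, 1, m))
    return f_L, f_T, f_LT, f_TT

# random matrix elements, just to exercise the bookkeeping
rng = np.random.default_rng(1)
t = rng.normal(size=(3, 3, 3)) + 1j * rng.normal(size=(3, 3, 3))
print(structure_functions(t))
```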
Any contribution to the reaction can be expanded in terms of these amplitudes. In the present work, however, where we restrict ourselves to the pure impulse approximation (IA), we will evaluate directly the various terms of the elementary production operator on the nucleon without separating explicitly the various operator contributions. ## III The impulse approximation In the IA, the $`\pi ^0`$ production takes place at one nucleon while the other acts merely as a spectator. Thus the basic diagrams of the IA, consisting of a resonance and two Born terms, are the ones depicted in Fig. 1. Consequently, the production operator $`\widehat{t}_{\gamma \pi ^0}^d`$ for the reaction on the deuteron, which governs the $`T`$-matrix in the c.m. frame of virtual photon and deuteron $$T_{m^{}\mu m}(W_{\gamma ^{}d},\theta ,\varphi )=\langle \stackrel{}{q},m^{}|\widehat{t}_{\gamma ^{}\pi ^0}^d(W_{\gamma ^{}d})|\stackrel{}{k},m\mu \rangle ,$$ (17) is obtained from the elementary operator $`\widehat{t}_{\gamma ^{}\pi ^0}`$ by $$\widehat{t}_{\gamma ^{}\pi ^0}^d(W_{\gamma ^{}d})=\widehat{t}_{\gamma ^{}\pi ^0}^{(1)}(W_{\gamma ^{}N})\mathbb{𝟙}^{(2)}+\mathbb{𝟙}^{(1)}\widehat{t}_{\gamma ^{}\pi ^0}^{(2)}(W_{\gamma ^{}N}),$$ (18) where the upper index in brackets refers to the nucleon on which the operator acts. Off-shell effects will be neglected. The invariant mass of the $`\gamma ^{}N`$ system is denoted by $`W_{\gamma ^{}N}`$. The assignment of $`W_{\gamma ^{}N}`$ will be discussed below. Explicitly one finds $$T_{m^{}\mu m}=2\sum _{m_s^{},m_s}\int d^3p\,\psi _{m_s^{}m^{}}^{}\left(\stackrel{}{p}-\frac{\stackrel{}{q}}{2}\right)\langle \stackrel{}{p}-\stackrel{}{q};1m_s^{},00|\widehat{t}_{\gamma ^{}\pi ^0}^{(1)}|\stackrel{}{p}-\stackrel{}{k};1m_s,00\rangle \psi _{m_sm}\left(\stackrel{}{p}-\frac{\stackrel{}{k}}{2}\right),$$ (19) where $`\langle \stackrel{}{p}^{};1m_s^{},00|\widehat{t}_{\gamma ^{}\pi ^0}^{(1)}|\stackrel{}{p};1m_s,00\rangle `$ denotes the elementary $`t`$-matrix for initial and final nucleon momentum $`\stackrel{}{p}`$ and $`\stackrel{}{p}^{}`$, respectively, evaluated between two-nucleon spin and isospin wave functions $`|1m_s^{},00\rangle =|(\frac{1}{2}\frac{1}{2})1m_s^{},(\frac{1}{2}\frac{1}{2})00\rangle `$ and $`|1m_s,00\rangle `$. Furthermore, $`\psi _{m_sm}(\stackrel{}{p})`$ denotes the intrinsic part of the deuteron wave function in momentum space projected onto $`|1m_s,00\rangle `$ $`\psi _{m_sm}(\stackrel{}{p})`$ $`=`$ $`\sum _{l=0,2}\sum _{m_l}(lm_l1m_s|1m)u_l(p)Y_{l,m_l}(\widehat{p}),`$ (20) where $`\stackrel{}{p}=\frac{1}{2}(\stackrel{}{p}_1-\stackrel{}{p}_2)`$ denotes the relative momentum of the two nucleons in the deuteron. For the evaluation of (19) one has to specify the elementary operator $`\widehat{t}_{\gamma ^{}\pi ^0}`$ for the reaction $`\gamma ^{}+N\rightarrow \pi ^0+N`$. As already mentioned, we will follow the treatment used for the coherent photoproduction process, taking into account the $`\mathrm{\Delta }`$ resonance contribution and as background the Born terms from the direct and crossed nucleon pole diagrams, denoted “$`NP`$” and “$`NC`$”, respectively, in Fig. 1, i.e.
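To make the ingredients of Eq. (19) concrete, here is a hedged sketch of the deuteron wave function of Eq. (20). The radial functions `u0`, `u2` below are toy placeholders, not the Bonn-potential parametrization used in the paper; the SciPy/SymPy calls only supply the spherical harmonics and Clebsch–Gordan coefficients.

```python
import numpy as np
from scipy.special import sph_harm
from sympy.physics.quantum.cg import CG

def psi_d(p_vec, m_s, m, u0, u2):
    """Deuteron momentum-space wave function of Eq. (20).

    p_vec  : relative NN momentum, shape (3,); m_s, m : spin projections,
    u0, u2 : callables for the S- and D-wave radial parts u_l(p).
    """
    p = np.linalg.norm(p_vec)
    theta = np.arccos(p_vec[2] / p)          # polar angle of p-hat
    phi = np.arctan2(p_vec[1], p_vec[0])     # azimuthal angle
    psi = 0.0j
    for l, ul in ((0, u0), (2, u2)):
        m_l = m - m_s                        # CG coefficient forces m_l + m_s = m
        if abs(m_l) <= l:
            cg = float(CG(l, m_l, 1, m_s, 1, m).doit())
            # scipy convention: sph_harm(order, degree, azimuth, polar)
            psi += cg * ul(p) * sph_harm(m_l, l, phi, theta)
    return psi

# toy radial shapes standing in for the realistic u_0, u_2 (illustrative only)
u0 = lambda p: np.exp(-p**2)
u2 = lambda p: 0.02 * p**2 * np.exp(-p**2)
print(psi_d(np.array([0.1, 0.0, 0.2]), 0, 0, u0, u2))
```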
$$\widehat{t}_{\gamma ^{}\pi ^0}=\widehat{t}_{\gamma ^{}\pi ^0}^{(\mathrm{\Delta })}+\widehat{t}_{\gamma ^{}\pi ^0}^{(NP)}+\widehat{t}_{\gamma ^{}\pi ^0}^{(NC)}.$$ (21) We will first consider the Born terms whose contributions are given by $`\widehat{t}_{\gamma ^{}\pi ^0}^{(NP)}(z)`$ $`=`$ $`v_{\pi ^0N}^{}G^{(NP)}(z)v_{\gamma ^{}N},`$ (22) $`\widehat{t}_{\gamma ^{}\pi ^0}^{(NC)}(z)`$ $`=`$ $`v_{\gamma ^{}N}G^{(NC)}(z)v_{\pi ^0N}^{}.`$ (23) The vertex functions are $`v_{\pi ^0N}^{}`$ $`=`$ $`i{\displaystyle \frac{f_\pi }{m_\pi }}\stackrel{}{\sigma }\cdot \stackrel{}{q}\,\tau _0,`$ (24) $`v_{\gamma ^{}N}`$ $`=`$ $`ϵ_\nu (\mu )j_N^\nu (\stackrel{}{k}),`$ (25) where $`ϵ_\nu (\mu )`$ denotes the polarization vector of the virtual photon with helicity $`\mu `$. As $`\pi N`$ coupling constant we have chosen $`\frac{f_\pi ^2}{4\pi }=0.08`$, and the nucleon charge and current densities are given by the nonrelativistic expressions $`\rho _N`$ $`=`$ $`\widehat{e},`$ (26) $`\stackrel{}{ȷ}_N`$ $`=`$ $`{\displaystyle \frac{\widehat{e}}{2m_N}}(\stackrel{}{p}^{}+\stackrel{}{p})+{\displaystyle \frac{\widehat{e}+\widehat{\kappa }}{2m_N}}i\stackrel{}{\sigma }\times \stackrel{}{k},`$ (27) with $$\widehat{e}=\frac{e}{2}(1+\tau _0)F(k_\mu ^2),\widehat{\kappa }=\frac{e}{2}\left(\kappa _p(1+\tau _0)+\kappa _n(1-\tau _0)\right)F(k_\mu ^2).$$ (28) Here $`e`$ denotes the elementary charge and $`\kappa _{n/p}`$ the anomalous magnetic moment of neutron and proton, respectively. Furthermore, we have introduced a common e.m. form factor $`F(k_\mu ^2)`$ for which we have chosen the dipole parametrization $$F(k_\mu ^2)=\left(1-\frac{k_\mu ^2}{0.71\,(\text{GeV})^2}\right)^{-2}.$$ (29) The propagators of the direct and crossed terms are of the form $$G^{(NP)}(z)=\frac{1}{z-E_\stackrel{}{p}^{(N)}+iϵ},G^{(NC)}(z)=\frac{1}{z-E_\stackrel{}{p}^{(N)}-E^{(\pi )}-\omega +iϵ},$$ (30) where $`E^{(\pi )}=\sqrt{m_\pi ^2+\stackrel{}{q}^2}`$ denotes the relativistic pion energy and $`E_\stackrel{}{p}^{(N)}=m_N+\frac{\stackrel{}{p}^2}{2m_N}`$ the nonrelativistic nucleon energy. For the resonance contribution $$\widehat{t}_{\gamma ^{}\pi ^0}^{(\mathrm{\Delta })}(z)=v_{\pi ^0N\mathrm{\Delta }}^{}G^{(\mathrm{\Delta })}(z)v_{\gamma ^{}N\mathrm{\Delta }},$$ (31) we use for the $`\gamma ^{}N\mathrm{\Delta }`$ vertex the dominant $`M1`$ contribution of the $`N\mathrm{\Delta }`$ transition current $`\rho _{\gamma ^{}N\mathrm{\Delta }}^{M1}`$ $`=`$ $`eF(k_\mu ^2){\displaystyle \frac{\stackrel{~}{G}_{\gamma N\mathrm{\Delta }}^{M1}(E_\mathrm{\Delta })}{2m_Nm_\mathrm{\Delta }}}i(\stackrel{}{\sigma }_{\mathrm{\Delta }N}\times \stackrel{}{k})\cdot \stackrel{}{p}_N\,\tau _{\mathrm{\Delta }N,0},`$ (32) $`\stackrel{}{ȷ}_{\gamma ^{}N\mathrm{\Delta }}^{M1}`$ $`=`$ $`eF(k_\mu ^2){\displaystyle \frac{\stackrel{~}{G}_{\gamma N\mathrm{\Delta }}^{M1}(E_\mathrm{\Delta })}{2m_N}}i\stackrel{}{\sigma }_{\mathrm{\Delta }N}\times \stackrel{}{k}_{\gamma N}\,\tau _{\mathrm{\Delta }N,0},`$ (33) neglecting the tiny $`C2`$ and $`E2`$ parts, because they are not expected to play a significant role in the pure impulse approximation, where also other small effects are not considered. Here we use $`m_\mathrm{\Delta }=1232`$ MeV and $$\stackrel{}{k}_{\gamma N}=\frac{m_N\stackrel{}{k}-\omega \stackrel{}{p}_N}{m_\mathrm{\Delta }},$$ (34) and $`E_\mathrm{\Delta }=W_{\gamma ^{}N}`$ is the $`\mathrm{\Delta }`$ energy in its rest system.
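A minimal numerical rendering of the dipole form factor of Eq. (29) and the Born propagators of Eq. (30), as reconstructed above, might look as follows; the choice of units (GeV) and the small width `eps` are our illustrative assumptions.

```python
def F_dipole(k_mu_sq):
    """Dipole e.m. form factor of Eq. (29); k_mu_sq in GeV^2 (spacelike: < 0)."""
    return (1.0 - k_mu_sq / 0.71) ** (-2)

def G_NP(z, E_N, eps=1e-9):
    """Direct nucleon-pole propagator of Eq. (30); energies in GeV."""
    return 1.0 / (z - E_N + 1j * eps)

def G_NC(z, E_N, E_pi, omega, eps=1e-9):
    """Crossed nucleon-pole propagator of Eq. (30)."""
    return 1.0 / (z - E_N - E_pi - omega + 1j * eps)

print(F_dipole(-0.1))   # e.g. Q^2 = 0.1 GeV^2
```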
The $`\gamma ^{}N\mathrm{\Delta }`$ coupling is taken as energy dependent and parametrized in the form $$\stackrel{~}{G}_{\gamma N\mathrm{\Delta }}^{M1}(E_\mathrm{\Delta })=\{\begin{array}{cc}\mu _{M1}(q_0(E_\mathrm{\Delta }))\mathrm{exp}[i\mathrm{\Phi }_{M1}(q_0(E_\mathrm{\Delta }))]\hfill & \text{ if }E_\mathrm{\Delta }>m_\pi +m_N,\hfill \\ \mu _0\hfill & \text{ else},\hfill \end{array}$$ (35) where the different quantities are defined as $`\mu _{M1}(q_0)`$ $`=`$ $`\mu _0+\mu _2\left({\displaystyle \frac{q_0}{m_\pi }}\right)^2+\mu _4\left({\displaystyle \frac{q_0}{m_\pi }}\right)^4,`$ (36) $`\mathrm{\Phi }_{M1}(q_0)`$ $`=`$ $`{\displaystyle \frac{q_0^3}{a_1+a_2q_0^2}},`$ (37) with $`q_0(E_\mathrm{\Delta })`$ as the on-shell pion momentum in this system, given by $$E_\mathrm{\Delta }=\sqrt{m_\pi ^2+q_0^2}+m_N+\frac{q_0^2}{2m_N}.$$ (38) The parameters have been fixed by fitting the $`M_{1+}^{3/2}`$ multipole of pion photoproduction, yielding the values $`\mu _0=4.16`$, $`\mu _2=0.542`$, $`\mu _4=0.0757`$, $`a_1=0.185`$ fm⁻³, and $`a_2=4.94`$ fm⁻¹ . The $`\pi ^0N\mathrm{\Delta }`$ vertex is given in the usual form $$v_{\pi ^0N\mathrm{\Delta }}^{}=i\frac{f_\mathrm{\Delta }}{m_\pi }F_\mathrm{\Delta }(\stackrel{}{q}_{\pi N}^2)\stackrel{}{\sigma }_{N\mathrm{\Delta }}\cdot \stackrel{}{q}_{\pi N}\,\tau _{N\mathrm{\Delta },0},$$ (39) with $$\stackrel{}{q}_{\pi N}=\frac{m_N\stackrel{}{q}-E^{(\pi )}\stackrel{}{p}_N}{m_N+E^{(\pi )}},$$ (40) and the $`N\mathrm{\Delta }`$ coupling constant $`\frac{f_\mathrm{\Delta }^2}{4\pi }=1.393`$. Here $`\stackrel{}{\sigma }_{N\mathrm{\Delta }}`$ and $`\tau _{N\mathrm{\Delta },0}`$ denote the usual spin and isospin $`N\mathrm{\Delta }`$ transition operators, respectively. The hadronic form factor is taken of dipole type with parameters also obtained in the above mentioned fit $$F_\mathrm{\Delta }(\stackrel{}{q}_{\pi N}^2)=\frac{\mathrm{\Lambda }_\mathrm{\Delta }^2-m_\pi ^2}{\mathrm{\Lambda }_\mathrm{\Delta }^2+\stackrel{}{q}_{\pi N}^2},$$ (41) where $`\mathrm{\Lambda }_\mathrm{\Delta }=287.9`$ MeV and $`m_\mathrm{\Delta }^0=1315`$ MeV. Finally, the $`\mathrm{\Delta }`$ resonance propagator is of the form $$G^{(\mathrm{\Delta })}(E_\mathrm{\Delta })=\frac{1}{E_\mathrm{\Delta }-M_\mathrm{\Delta }(E_\mathrm{\Delta })+\frac{i}{2}\mathrm{\Gamma }_\mathrm{\Delta }(E_\mathrm{\Delta })},$$ (42) where the energy dependent mass and width are given by $`M_\mathrm{\Delta }(E_\mathrm{\Delta })`$ $`=`$ $`m_\mathrm{\Delta }^0+{\displaystyle \frac{f_\mathrm{\Delta }^2}{12\pi ^2m_\pi ^2}}\,𝒫\int _0^{\mathrm{\infty }}{\displaystyle \frac{dq^{\prime }q^{\prime 4}F_\mathrm{\Delta }^2(q^{\prime 2})}{\sqrt{m_\pi ^2+q^{\prime 2}}\,(E_\mathrm{\Delta }-\sqrt{m_\pi ^2+q^{\prime 2}}-m_N-\frac{q^{\prime 2}}{2m_N})}},`$ (43) and $`\mathrm{\Gamma }_\mathrm{\Delta }(E_\mathrm{\Delta })`$ $`=`$ $`\{\begin{array}{cc}\frac{q_0(E_\mathrm{\Delta })^3m_N}{6\pi m_\pi ^2(\sqrt{m_\pi ^2+q_0(E_\mathrm{\Delta })^2}+m_N)}f_\mathrm{\Delta }^2F_\mathrm{\Delta }^2(q_0(E_\mathrm{\Delta })^2)\hfill & \text{ if }E_\mathrm{\Delta }>m_\pi +m_N,\hfill \\ 0\hfill & \text{ else}.\hfill \end{array}`$ (46) The elementary operator $`\widehat{t}_{\gamma ^{}\pi ^0}^{(1)}`$ is a function of the photon, nucleon and $`\pi ^0`$ momenta $`\stackrel{}{k}`$, $`\stackrel{}{p}`$, and $`\stackrel{}{q}`$, respectively, the photon polarization $`\mu `$, and of the invariant mass $`W_{\gamma ^{}N}`$ of the photon-nucleon subsystem.
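Since all parameters of Eqs. (35)–(46) are quoted above, the Δ parametrization can be evaluated directly. The sketch below is our own: the illustrative masses, the conversion of $`a_1,a_2`$ from fm units via ħc = 197.33 MeV fm, and the $`f_\mathrm{\Delta }^2`$ reading of Eq. (46) are assumptions to be checked against the original. It solves Eq. (38) for the on-shell momentum and returns coupling and width.

```python
import numpy as np
from scipy.optimize import brentq

m_pi, m_N = 139.57, 938.92            # illustrative masses (MeV)
hbarc = 197.33                        # MeV fm
mu0, mu2, mu4 = 4.16, 0.542, 0.0757   # Eq. (36) parameters
a1 = 0.185 * hbarc**3                 # 0.185 fm^-3 expressed in MeV^3
a2 = 4.94 * hbarc                     # 4.94 fm^-1 expressed in MeV
Lam = 287.9                           # Lambda_Delta (MeV)
f_D_sq = 4.0 * np.pi * 1.393          # f_Delta^2 from f_Delta^2/(4 pi) = 1.393

def q0_of(E_D):
    """On-shell pion momentum from Eq. (38), in MeV."""
    f = lambda q: np.sqrt(m_pi**2 + q**2) + m_N + q**2 / (2.0 * m_N) - E_D
    return brentq(f, 0.0, 2000.0)

def G_M1(E_D):
    """Energy-dependent gamma*-N-Delta coupling, Eqs. (35)-(37)."""
    if E_D <= m_pi + m_N:
        return mu0
    q0 = q0_of(E_D)
    mu = mu0 + mu2 * (q0 / m_pi)**2 + mu4 * (q0 / m_pi)**4
    phase = q0**3 / (a1 + a2 * q0**2)
    return mu * np.exp(1j * phase)

def F_Delta(q_sq):
    """Hadronic pi-N-Delta form factor, Eq. (41)."""
    return (Lam**2 - m_pi**2) / (Lam**2 + q_sq)

def Gamma_Delta(E_D):
    """Energy-dependent width, Eq. (46), in MeV."""
    if E_D <= m_pi + m_N:
        return 0.0
    q0 = q0_of(E_D)
    E_pi = np.sqrt(m_pi**2 + q0**2)
    return (q0**3 * m_N / (6.0 * np.pi * m_pi**2 * (E_pi + m_N))
            * f_D_sq * F_Delta(q0**2)**2)

print(abs(G_M1(1232.0)), Gamma_Delta(1232.0))  # width comes out on the order of 100 MeV
```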
Implementing this operator into a bound system poses the problem of assigning an invariant mass $`W_{\gamma ^{}N}`$ to the struck or active nucleon. This question has been discussed in detail for coherent $`\eta `$ photoproduction on the deuteron by studying various prescriptions. In this work we have adopted the spectator-on-shell assignment. ## IV Results and Discussion Having fixed the parameters of the $`\pi ^0`$ production model for the elementary reaction on the nucleon, we have calculated the coherent reaction on the deuteron. The $`t`$-matrix elements of (19) have been evaluated numerically using Gauss integration in momentum space. As deuteron wave function, we have taken the one of the Bonn r-space potential in its parametrized form . In order to test the numerical program, we have first calculated the differential and total cross sections at the photon point and compared the results to earlier ones, obtaining complete agreement. Since in electroproduction energy and momentum transfers can be varied independently in the spacelike region, we have chosen various cuts along constant energy or momentum transfer, as shown in Fig. 2, for which we calculated the four structure functions. For the sets A, B and C we have fixed the momentum transfer $`k`$ and varied the energy transfer $`\omega `$, while for set D we have fixed $`\omega `$ and varied $`k`$. The set A at a moderate momentum transfer of $`k^2=2\mathrm{fm}^{-2}`$ covers the region right above the production threshold up to the onset of the $`\mathrm{\Delta }`$-resonance. The sets B and C are cuts across the $`\mathrm{\Delta }`$-resonance for fixed three-momentum transfers $`k^2=5\mathrm{fm}^{-2}`$ and $`k^2=10\mathrm{fm}^{-2}`$, respectively, while set D for a fixed energy transfer $`\omega =300`$ MeV remains essentially above but close to the resonance region. We will start the discussion of the structure functions with set A, shown in Fig. 3. The longitudinal structure function $`f_L`$ is hardly affected by the $`\mathrm{\Delta }`$ resonance. This is a consequence of the assumed pure $`M1`$ excitation of the $`\mathrm{\Delta }`$ leading to a completely transverse current in the $`\mathrm{\Delta }`$ rest frame, so that any longitudinal contributions which arise when going to other frames are negligible. Therefore, this structure function is governed by the Born terms only. One readily notes a strong destructive interference between direct and crossed terms resulting in a small, forward peaked angular distribution, except very close to threshold. For the transverse structure function $`f_T`$ one notices an increasing forward peaking of the total IA with increasing $`\omega `$, except for the lowest energy transfer close to threshold where an almost symmetric forward-backward peaking appears. At the lowest energy transfer, $`f_T`$ is dominated by the Born terms displaying a constructive interference between direct and crossed contributions, in sharp contrast to the findings for $`f_L`$. This different behaviour of the Born terms is easily understood from the structure of the corresponding operators and the sign of the propagators. For the longitudinal part, the operators of direct and crossed contributions have the same sign, while they differ in sign for most of the transverse operators which, combined with the opposite signs of the propagators, leads to the observed destructive and constructive behaviour, respectively. Close to threshold, the $`\mathrm{\Delta }`$ contribution is small, as expected.
It shows a slight peaking around 90 degrees. With increasing $`\omega `$ this peak moves more and more to forward angles. Also its relative size increases rapidly, becoming dominant at the highest energy displayed. At the higher energies $`f_T`$ is more than an order of magnitude bigger than $`f_L`$. For the longitudinal-transverse interference structure function $`f_{LT}`$ the $`\mathrm{\Delta }`$ contribution yields a negative structure function of increasing size with increasing $`\omega `$. The direct Born term interferes constructively in the forward direction and destructively at backward angles, and the crossover moves more and more to smaller angles with increasing energy. The crossed Born contribution interferes destructively with both the $`\mathrm{\Delta }`$ and direct Born terms, resulting in a positive structure function over a large forward angular range. The size of $`f_{LT}`$ is comparable to $`f_L`$ for the total contributions. Finally, the transverse interference structure function $`f_{TT}`$ receives a positive contribution from the $`\mathrm{\Delta }`$ with a peak moving from about 90 degrees near threshold to smaller angles, about 55 degrees for the highest energy transfer. There is a strong destructive interference effect from the Born terms which even leads to a sign change near threshold, but their importance becomes smaller and smaller when approaching the $`\mathrm{\Delta }`$ region. Its size is only a factor of three smaller than $`f_T`$ at the two highest $`\omega `$ values. For the sets B and C at constant three-momentum transfers of $`k^2=5\mathrm{fm}^{-2}`$ and $`k^2=10\mathrm{fm}^{-2}`$, respectively, we have chosen the four energy transfers $`\omega `$ in such a manner that pairwise they correspond to the same invariant energies $`W_{\gamma ^{}d}=`$ 2057, 2137, 2217, and 2297 MeV, i.e., to the same pion momentum. The structure functions are shown in Figs. 4 and 5, respectively. Comparing the corresponding structure functions in both figures, one notices a qualitative similarity in shape although they differ substantially in absolute size due to the considerably different momentum transfer. Thus we can limit the discussion to set B in Fig. 4. As above, $`f_L`$ is dominated by the Born terms showing a sizable destructive interference between direct and crossed contributions. With increasing $`\omega `$ the absolute size increases and the forward peaking becomes more and more pronounced. In $`f_T`$ one clearly notices the increasing importance of the $`\mathrm{\Delta }`$ contribution, as is expected when crossing the $`\mathrm{\Delta }`$ region. The panels at the two lowest energy transfers at $`\omega =130`$ and 210 MeV show qualitatively the same behaviour as the panels for $`f_T`$ at $`\omega =160`$ and 200 MeV of set A in Fig. 3. Right on the $`\mathrm{\Delta }`$ at $`\omega =290`$ MeV, the Born terms have almost no influence. Only above this region do they become significant again and interfere destructively in the forward direction, so that the forward peak moves to about 30 degrees. With respect to the interference structure functions, $`f_{LT}`$ shows the same qualitative angular behaviour as for set A for energy transfers below the $`\mathrm{\Delta }`$ resonance region. However, on and above this region this is no longer true. There the $`\mathrm{\Delta }`$ dominates and the Born terms contribute very little. The transverse interference structure function $`f_{TT}`$ exhibits a different behaviour compared to set A at all energies.
At the lowest energy of 130 MeV, one notices an almost complete cancellation between resonance and Born contributions. But already at 210 MeV the Born terms become relatively small, and at $`\omega =290`$ MeV they are completely negligible, while at higher energy transfers they start to show some influence again; direct and crossed Born terms interfere destructively, but lead to a slight enhancement of the resonance contribution. Altogether, all structure functions of sets B and C except $`f_L`$ are dominated by the $`\mathrm{\Delta }`$ over a large region of energy transfers crossing the resonance position. This was to be expected because of the dominance of the transverse $`\mathrm{\Delta }`$ excitation current. Only $`f_L`$ is governed by the Born terms, a fact which might be changed somewhat if the neglected small $`C2`$ contribution is included. But we do not expect drastic changes due to the smallness of such contributions. As to the absolute size, $`f_T`$ is by far the largest below and on the resonance, followed by $`f_{TT}`$ which becomes of the same size above the resonance. Considerably smaller are $`f_L`$ and $`f_{LT}`$. We have also calculated the form factors $`F_L`$ and $`F_T`$ of the inclusive cross section for the sets B and C, because they allow a better comparison of the absolute values. They are shown in Figs. 6 and 7. For both sets the longitudinal form factor $`F_L`$ shows a steady increase over the whole range of energy transfers considered, which arises essentially from the e.m. form factor because one approaches the photon point with increasing $`\omega `$. By destructive interference, the direct Born contribution is reduced to about 30 percent by the crossed one. As noted before, the $`\mathrm{\Delta }`$ is negligible here, in contrast to the transverse form factor $`F_T`$. Here the direct and crossed Born contributions show a distinctly different behaviour. While the direct one leads only to a slight general increase of the $`\mathrm{\Delta }`$ contribution, the crossed Born contribution results in a downshift of the maximum by about 20 MeV and a significant reduction above the maximum. In absolute size, $`F_T`$ is about one to two orders of magnitude bigger than $`F_L`$. Comparing sets B and C, one notices a decrease of the form factors by a factor of about 5 due to the higher four-momentum transfers involved in set C. Finally, we consider in Fig. 8 the structure functions of set D at a constant energy transfer $`\omega =300`$ MeV with varying three-momentum transfer, which means close to and slightly above the resonance region. Accordingly, one readily notices the angular behaviour which we had found before at corresponding kinematics. In $`f_L`$ the destructive interference of direct and crossed Born terms becomes apparent again. For $`f_T`$ one readily sees the increasing reduction of the forward $`\mathrm{\Delta }`$ peak by the background contributions with increasing momentum transfer, resulting in a small shift away from the forward direction. In contrast, the interference structure functions show qualitatively very little variation with the momentum transfer. Only the absolute size decreases rapidly, which is also the case for the diagonal structure functions $`f_L`$ and $`f_T`$ and for the corresponding inclusive form factors shown in Fig. 9. This is a consequence of the increasing four-momentum transfer which governs the e.m. form factors of the elementary vertices.
## V Summary and Conclusions Coherent $`\pi ^0`$ electroproduction on the deuteron has been studied in the impulse approximation, neglecting rescattering and two-body effects. For the elementary reaction on the nucleon, we have considered, besides the dominant excitation of the $`\mathrm{\Delta }`$(1232) resonance, the usual Born terms, essentially in the same framework as in the analogous photo process. The four structure functions which govern the differential cross section have been evaluated along various cuts in the plane of energy-momentum transfer. In particular, the relative importance of the resonance and background contributions has been studied in detail. They show quite different influences in the different structure functions, due to the fact that the excitation of the $`\mathrm{\Delta }`$ resonance proceeds dominantly via M1 excitations, which are purely transverse. It now remains as a task for the future to study the influence of rescattering and, furthermore, to use a more refined elementary production operator, in particular for the threshold region. ## Appendix A: Completeness proof for the basic operator set In this appendix we will present an independent proof that the operators listed in Table I form a complete and independent basis for the representation of the $`T`$-matrix of coherent e.m. pseudoscalar production on a spin-one target. Independence means that a relation $$\sum _\alpha f_\alpha (k,q,\mathrm{cos}\theta )𝒪_\alpha \equiv 0$$ (A.1) with $`k=|\stackrel{}{k}|`$, $`q=|\stackrel{}{q}|`$ and $`\mathrm{cos}\theta =\widehat{k}\cdot \widehat{q}`$, is fulfilled only if $`f_\alpha \equiv 0`$ for all $`\alpha `$. To this end we consider first an alternative set whose elements also have to be built out of the only available unit vectors $`\widehat{k}`$ and $`\widehat{q}`$ and the spin-one operators $`S^{[\mathrm{\Sigma }]}`$ $`(\mathrm{\Sigma }=0,1,2)`$. Explicitly these new operators have the form $`\stackrel{}{O}_{(KQ)L}^\mathrm{\Sigma }\cdot \stackrel{}{ϵ}_\mu `$ with $$\stackrel{}{O}_{(KQ)L}^\mathrm{\Sigma }:=[A_{KQ}^{[L]}\times S^{[\mathrm{\Sigma }]}]^{[1]},$$ (A.2) where we have introduced $$A_{KQ}^{[L]}:=[k^{[K]}\times q^{[Q]}]^{[L]}$$ (A.3) with the following recursive definitions of the spherical tensors $`a^{[n]}`$ $`a^{[0]}`$ $`:=`$ $`1,`$ (A.4) $`a^{[1]}`$ $`:=`$ $`\widehat{a},`$ (A.5) $`a^{[n+1]}`$ $`:=`$ $`[a^{[1]}\times a^{[n]}]^{[n+1]},n\in 𝐍.`$ (A.6) They are related to the spherical harmonics by $$a^{[n]}=\alpha _nY^{[n]}\text{ with }\alpha _n=\sqrt{\frac{4\pi n!}{(2n+1)!!}}.$$ (A.7) Angular momentum coupling rules restrict the $`L`$-values in (A.2) to $`|\mathrm{\Sigma }-1|\le L\le \mathrm{\Sigma }+1`$. The possible combinations are listed in Table II. Furthermore, in order to ensure that $`\stackrel{}{O}_{(KQ)L}^\mathrm{\Sigma }`$ be a pseudo-vector, the condition $`K+Q=`$ even has to be fulfilled. Taking into account this condition and the coupling rules, one finds for the values of the indices $`K,Q,L`$ the combinations listed in Table III.
Next we will derive a recursion relation for the $`A_{KQ}^{[L]}`$ based on the recoupling of the following expression $`A_{NN}^{[0]}A_{KQ}^{[L]}`$ $`=`$ $`[A_{NN}^{[0]}\times A_{KQ}^{[L]}]^{[L]}`$ (A.8) $`=`$ $`\sum _{K_1=|K-N|}^{K+N}\sum _{Q_1=|Q-N|}^{Q+N}C_{NKQ}^{(K_1Q_1)L}A_{K_1Q_1}^{[L]},`$ (A.9) where $`C_{NKQ}^{(K_1Q_1)L}`$ $`=`$ $`(-)^{N+L+K_1+Q}{\displaystyle \frac{\widehat{K}_1\widehat{Q}_1}{\widehat{N}}}\beta _{NK}^{K_1}\beta _{NQ}^{Q_1}\left\{\begin{array}{ccc}K& Q& L\\ Q_1& K_1& N\end{array}\right\},`$ (A.12) $`\beta _{AB}^C`$ $`=`$ $`(-)^C\sqrt{{\displaystyle \frac{A!B!(2C+1)!!}{(2A+1)!!(2B+1)!!C!}}}\left(\begin{array}{ccc}A& B& C\\ 0& 0& 0\end{array}\right).`$ (A.13) One should note that $`A_{NN}^{[0]}`$ is basically the Legendre polynomial $`P_N(\mathrm{cos}\theta )`$, namely $$A_{NN}^{[0]}=\alpha _N^2\frac{(-)^N\widehat{N}}{4\pi }P_N(\mathrm{cos}\theta ).$$ (A.14) Separating on the rhs of (A.9) the term with the highest indices $`K_1=K+N`$ and $`Q_1=Q+N`$, one finds $`A_{NN}^{[0]}A_{KQ}^{[L]}`$ $`=`$ $`\sum _{K_1=|K-N|}^{K+N-1}\sum _{Q_1=|Q-N|}^{Q+N}C_{NKQ}^{(K_1Q_1)L}A_{K_1Q_1}^{[L]}`$ (A.17) $`+\sum _{Q_1=|Q-N|}^{Q+N-1}C_{NKQ}^{((K+N)Q_1)L}A_{(K+N)Q_1}^{[L]}`$ $`+C_{NKQ}^{((K+N)(Q+N))L}A_{(K+N)(Q+N)}^{[L]}.`$ With the replacements $`K+N\rightarrow K`$ and $`Q+N\rightarrow Q`$ the above equation finally leads to the recursion relation $`C_{N(K-N)(Q-N)}^{(KQ)L}A_{KQ}^{[L]}`$ $`=`$ $`A_{NN}^{[0]}A_{(K-N)(Q-N)}^{[L]}`$ (A.20) $`-\sum _{K_1=|K-2N|}^{K-1}\sum _{Q_1=|Q-2N|}^{Q}C_{N(K-N)(Q-N)}^{(K_1Q_1)L}A_{K_1Q_1}^{[L]}`$ $`-\sum _{Q_1=|Q-2N|}^{Q-1}C_{N(K-N)(Q-N)}^{(KQ_1)L}A_{KQ_1}^{[L]}.`$ Since $`K,Q`$ and $`L`$ have to fulfil the usual triangular relation, $`N`$ is limited by $`N\le \frac{1}{2}(K+Q-L)`$. By this relation, $`A_{KQ}^{[L]}`$ is expressed as a linear superposition of $`A_{K_1Q_1}^{[L]}`$ where $`K_1\le K`$, $`Q_1\le Q`$, and $`K_1+Q_1<K+Q`$. According to Table III one has for $`L\in \{0,1\}`$ only the combination $`K=Q`$. While for $`L=0`$ the recursion formula becomes trivial $$A_{NN}^{[0]}=A_{NN}^{[0]}A_{00}^{[0]},$$ (A.21) one finds for $`L=1`$ $`C_{N(K-N)(K-N)}^{(KK)1}A_{KK}^{[1]}`$ $`=`$ $`A_{NN}^{[0]}A_{(K-N)(K-N)}^{[1]}`$ (A.22) $`-`$ $`\sum _{K_1=|K-2N|}^{K-1}\sum _{Q_1=|K-2N|}^{K}C_{N(K-N)(K-N)}^{(K_1Q_1)1}A_{K_1Q_1}^{[1]}`$ (A.23) $`-`$ $`\sum _{Q_1=|K-2N|}^{K-1}C_{N(K-N)(K-N)}^{(KQ_1)1}A_{KQ_1}^{[1]}.`$ (A.24) Exploiting the properties of the coefficients $`C_{NKQ}^{(K_1Q_1)L}`$ ($`N+K+K_1`$ and $`N+Q+Q_1`$ must both be even numbers while $`K_1,Q_1`$ and $`1`$ must obey the triangular relation), (A.24) simplifies to $$C_{N(K-N)(K-N)}^{(KK)1}A_{KK}^{[1]}=A_{NN}^{[0]}A_{(K-N)(K-N)}^{[1]}-\sum _{K_1=|K-2N|}^{K-2}C_{N(K-N)(K-N)}^{(K_1K_1)1}A_{K_1K_1}^{[1]}.$$ (A.25) Setting $`N=1`$, one finds $$C_{1(K-1)(K-1)}^{(KK)1}A_{KK}^{[1]}=A_{11}^{[0]}A_{(K-1)(K-1)}^{[1]}-C_{1(K-1)(K-1)}^{((K-2)(K-2))1}A_{(K-2)(K-2)}^{[1]},$$ (A.26) from which it readily follows that $`A_{11}^{[1]}`$ is the only initial element of the recursion for $`L=1`$, as $`A_{00}^{[0]}`$ was for $`L=0`$. Thus every $`A_{KK}^{[1]}`$ is related to $`A_{11}^{[1]}`$ times a function in $`\mathrm{cos}\theta `$.
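The recursion is driven entirely by the coefficients of Eqs. (A.12)–(A.13), which involve only 3j and 6j symbols. A hedged sketch of their evaluation — assuming the standard convention that a hat denotes $`\widehat{X}=\sqrt{2X+1}`$ — could use SymPy's Wigner routines:

```python
from math import factorial, sqrt
from sympy.physics.wigner import wigner_3j, wigner_6j

def ddfact(n):
    """Double factorial n!!, with n!! = 1 for n <= 0."""
    return 1 if n <= 0 else n * ddfact(n - 2)

def beta(A, B, C):
    """beta^C_{AB} of Eq. (A.13)."""
    try:
        w3 = float(wigner_3j(A, B, C, 0, 0, 0))
    except ValueError:          # triangle rule violated
        return 0.0
    return (-1)**C * sqrt(factorial(A) * factorial(B) * ddfact(2*C + 1)
                          / (ddfact(2*A + 1) * ddfact(2*B + 1) * factorial(C))) * w3

def C_coef(N, K, Q, K1, Q1, L):
    """C^{(K1 Q1) L}_{N K Q} of Eq. (A.12); hat(X) = sqrt(2X+1) assumed."""
    hat = lambda j: sqrt(2*j + 1)
    try:
        w6 = float(wigner_6j(K, Q, L, Q1, K1, N))
    except ValueError:
        return 0.0
    return ((-1)**(N + L + K1 + Q) * hat(K1) * hat(Q1) / hat(N)
            * beta(N, K, K1) * beta(N, Q, Q1) * w6)

# e.g. the coefficient multiplying A^{[1]}_{KK} in Eq. (A.26) for K = 2, N = 1
print(C_coef(1, 1, 1, 2, 2, 1))
```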
Proceeding in complete analogy, we obtain for $`L=2`$ and $`N=1`$ $`C_{1(K-1)(Q-1)}^{(KQ)2}A_{KQ}^{[2]}`$ $`=`$ $`A_{11}^{[0]}A_{(K-1)(Q-1)}^{[2]}-\sum _{K_1=|K-2|}^{K-1}\sum _{Q_1=|Q-2|}^{Q-1}C_{1(K-1)(Q-1)}^{(K_1Q_1)2}A_{K_1Q_1}^{[2]}`$ (A.28) $`-\sum _{K_1=|K-2|}^{K-1}C_{1(K-1)(Q-1)}^{(K_1Q)2}A_{K_1Q}^{[2]}-\sum _{Q_1=|Q-2|}^{Q-1}C_{1(K-1)(Q-1)}^{(KQ_1)2}A_{KQ_1}^{[2]},`$ respectively, $`C_{1(K-1)(Q-1)}^{(KQ)2}A_{KQ}^{[2]}`$ $`=`$ $`A_{11}^{[0]}A_{(K-1)(Q-1)}^{[2]}-C_{1(K-1)(Q-1)}^{((K-2)(Q-2))2}A_{(K-2)(Q-2)}^{[2]}`$ (A.30) $`-C_{1(K-1)(Q-1)}^{((K-2)Q)2}A_{(K-2)Q}^{[2]}-C_{1(K-1)(Q-1)}^{(K(Q-2))2}A_{K(Q-2)}^{[2]}.`$ As independent elements one finds here $`A_{11}^{[2]},A_{20}^{[2]},A_{02}^{[2]}`$, which cannot be further reduced. In other words, every $`A_{KQ}^{[2]}`$ can be expressed as a linear combination of these independent elements with appropriate functions in $`\mathrm{cos}\theta `$ as coefficients. An analogous relation holds for $`L=3`$ $`C_{1(K-1)(Q-1)}^{(KQ)3}A_{KQ}^{[3]}`$ $`=`$ $`A_{11}^{[0]}A_{(K-1)(Q-1)}^{[3]}-C_{1(K-1)(Q-1)}^{((K-2)(Q-2))3}A_{(K-2)(Q-2)}^{[3]}`$ (A.32) $`-C_{1(K-1)(Q-1)}^{((K-2)Q)3}A_{(K-2)Q}^{[3]}-C_{1(K-1)(Q-1)}^{(K(Q-2))3}A_{K(Q-2)}^{[3]}.`$ Here the remaining independent elements are $`A_{22}^{[3]},A_{31}^{[3]},A_{13}^{[3]}`$. With this we have finally proven that the set of operators listed in Table IV is independent and complete. Although these basic elements do not separate in general into transverse and longitudinal operators, this set provides a better starting point for a systematic (computer-aided) decomposition of the amplitude. The transition to the transverse and longitudinal operators of Table I can then be obtained by the following transformation $`𝒪_{T,1}`$ $`=`$ $`i\sqrt{2}𝒪_1,`$ (A.33) $`(𝒪_{T,2},𝒪_{T,3},𝒪_{T,4},𝒪_{L,1},𝒪_{L,2})^T`$ $`=`$ $`𝒜(𝒪_2,𝒪_3,𝒪_4,𝒪_5,𝒪_6)^T,`$ (A.34) $`(𝒪_{T,5},𝒪_{T,6},𝒪_{T,7},𝒪_{T,8},𝒪_{T,9},𝒪_{L,3},𝒪_{L,4})^T`$ $`=`$ $`(𝒪_7,𝒪_8,𝒪_9,𝒪_{10},𝒪_{11},𝒪_{12},𝒪_{13})^T`$ (A.35) with the transformation matrices $`𝒜`$ $`=`$ $`\left(\begin{array}{ccccc}\frac{2(1-P_2(\mathrm{cos}\theta ))}{9}& 0& \frac{2\sqrt{5}}{\sqrt{3}}\mathrm{cos}\theta & \sqrt{\frac{5}{3}}& \sqrt{\frac{5}{3}}\\ \frac{2}{3}& 0& 0& 0& \sqrt{\frac{5}{3}}\\ \frac{2}{3}\mathrm{cos}\theta & 1& \sqrt{\frac{5}{3}}& 0& 0\\ \frac{1}{3}& 0& 0& 0& \sqrt{\frac{5}{3}}\\ \frac{1}{3}\mathrm{cos}\theta & 1& \sqrt{\frac{5}{3}}& 0& 0\end{array}\right),`$ (A.41) $``$ $`=`$ $`\left(\begin{array}{ccccccc}\frac{i\sqrt{2}}{\sqrt{15}}& \frac{i\sqrt{10}}{3}& 0& \frac{i\sqrt{10}}{3}\mathrm{cos}\theta & 0& 0& \frac{i\sqrt{28}}{3}\\ \frac{i\sqrt{2}}{\sqrt{15}}\mathrm{cos}\theta & 0& \frac{i\sqrt{5}}{3\sqrt{2}}& \frac{i\sqrt{5}}{3\sqrt{2}}& \frac{i\sqrt{7}}{\sqrt{3}}& 0& 0\\ \frac{i\sqrt{2}}{\sqrt{15}}& \frac{i\sqrt{10}}{3}& \frac{i\sqrt{10}}{3}\mathrm{cos}\theta & 0& 0& \frac{i\sqrt{28}}{3}& 0\\ 0& 0& 0& \frac{i\sqrt{3}}{\sqrt{2}}& 0& 0& 0\\ \frac{i}{\sqrt{2}}& \frac{i3\sqrt{5}}{\sqrt{2}}& 0& 0& 0& 0& 0\\ \frac{i3}{2\sqrt{2}}& \frac{i(3\sqrt{15}-4)}{\sqrt{360}}& 0& \frac{i\sqrt{5}}{3\sqrt{2}}\mathrm{cos}\theta & 0& 0& \frac{i\sqrt{28}}{3}\\ \frac{i\sqrt{3}}{\sqrt{10}}\mathrm{cos}\theta & \frac{i\sqrt{5}}{\sqrt{2}}\mathrm{cos}\theta & \frac{i\sqrt{5}(\sqrt{3}-1)}{3\sqrt{2}}& \frac{i\sqrt{5}}{3\sqrt{2}}& \frac{i\sqrt{7}}{\sqrt{3}}& 0& 0\end{array}\right).`$ (A.49)
no-problem/9907/chao-dyn9907010.html
ar5iv
text
# The identification of continuous, spatiotemporal systems (published in: Phys. Rev. E 57 (1998) 2820) ## Abstract We present a method for the identification of continuous, spatiotemporal dynamics from experimental data. We use a model in the form of a partial differential equation and formulate an optimization problem for its estimation from data. The solution is found as a multivariate nonlinear regression problem using the ACE–algorithm. The procedure is successfully applied to data obtained by simulation of the Swift–Hohenberg equation. There are no restrictions on the dimensionality of the investigated system, allowing for the analysis of high-dimensional chaotic as well as transient dynamics. The demands on the experimental data are discussed, as well as the sensitivity of the method towards noise. The unstable dynamics observed in spatially extended systems has attracted huge experimental and theoretical research activity in the last decades (see and references therein). Progress has been achieved by describing the dynamics in the vicinity of bifurcations with the help of universal amplitude equations, vastly reducing the complexity of the involved models. Additionally, the research has concentrated on the classification of the observed instabilities and the resulting patterns, and the investigation of scaling laws and intermittency effects . For most investigations the models for spatiotemporal systems arise from mainly theoretical considerations and their validity is affirmed by the comparison with experimental measurements. Here we address the problem of finding a model which describes the dynamics of an observed system directly from experimental data. For systems which exhibit temporal low–dimensional chaotic motion this was accomplished with the help of nonlinear maps . Other general methods rely on some sort of mode expansion and were successfully applied . A different approach consists in a nonparametric model identification, as proposed for systems with a time-delay feedback . In this letter we extend that approach to the identification of the underlying evolution equation of spatially extended systems. At first we formulate an optimization problem for finding a model equation from the data. Then, we rewrite the equations in the form of a multivariate nonlinear regression problem. As a last step, we use a novel kind of numerical algorithm for solving the problem. Our approach does not include any parameter dependencies; rather, those are delivered as a by-product. We discuss the identification of homogeneous and autonomous partial differential equations, but emphasize that the ideas are quite general and can be applied to other problems like finite-difference equations, coupled-map lattices or integro–differential equations. We assume the dynamics of the system under consideration to be governed by a PDE of the form $$ℱ[\stackrel{}{\mathrm{\nabla }},\partial _t,\stackrel{}{u}(\stackrel{}{x},t)]=0,$$ (1) where $`\stackrel{}{u}`$ is the field variable with $`N`$ components, $`ℱ`$ is an $`N`$–dimensional function of $`\stackrel{}{u}(\stackrel{}{x},t)`$ and its spatial and temporal derivatives, with $`t`$ and $`\stackrel{}{x}`$ the temporal and spatial variables, respectively. To ease the analysis and the representation, we only consider systems with a single component $`u`$ and at most two spatial dimensions. We note that the following considerations are valid in principle also for multi–component systems and higher dimensions in space.
In the following, we discuss the procedure to estimate a PDE of the form (1) from experimental data. Therefore, we distinguish between the solution $`u(\stackrel{}{x},t)`$ and the data $`v(\stackrel{}{x},t)`$. For the sake of simplicity, we denote both the continuous space–time variables of the model field and the discrete space–time variables of the data by the same symbols $`\stackrel{}{x}`$ and $`t`$. Considering the data $`v`$ as random variables, we act on a probability space and denote all entities estimated from the data by a hat. Since one can get only an estimate for the true function $`ℱ`$ from the data $`v`$, all one can achieve is to estimate the analogue of Eq. (1) $$\widehat{ℱ}[\widehat{\mathrm{\nabla }},\widehat{\partial _t},v(\stackrel{}{x},t)]=0,$$ (2) where $`\widehat{ℱ}`$ is the estimate of $`ℱ`$ and the derivatives have been substituted by estimates computed from the data $`v`$. To obtain $`\widehat{ℱ}`$, we formulate from (1) the following optimization problem : $$\underset{ℱ}{\mathrm{inf}}ℱ=e^2.$$ (3) The optimization lies in varying $`ℱ`$ until $`e^2`$ converges to the infimum. The function $`ℱ`$ is defined as a function of operators, e.g. $`\mathrm{\nabla }^2,\partial _t,\mathrm{id},\mathrm{\dots }`$. We denote the set of constituting operators by $`\{𝒪_i\}_{i=1,\mathrm{\dots },K}`$. Note that in our definition of the differential operators $`𝒪_i`$ also any product terms like $`u^2\partial _{xx},\partial _t\partial _x`$ are included. Since we consider a nonlinear problem, it is useful to split the function $`ℱ`$ into sub–functions $`\mathrm{\Phi }_i`$, which have as arguments $`𝒪_iu`$, e.g. $`\mathrm{\Phi }(\partial _{xx}u)=(\partial _{xx}u)^\alpha `$ or $`\mathrm{\Phi }(\mathrm{id}\,u)=u^\beta `$. These $`\mathrm{\Phi }_i`$ are elements of some class of functions $`S`$, which has to be specified according to the problem. Thus, the final representation of Eq. (1) reads $$ℱ=\sum _{i=0}^K\mathrm{\Phi }_i(𝒪_iu)=0.$$ (4) We want to determine the constituents $`𝒪_iu`$ and the functions $`\mathrm{\Phi }_i`$ of $`ℱ`$ from a data set $`v`$. Therefore, we estimate at first $`𝒪_iu`$ from the data by finite differencing or alternative schemes. Especially in the presence of noise, Fourier methods or kernel estimation could be helpful. The result consists of $`K`$ random variables $`\widehat{𝒪}_i\equiv \widehat{𝒪_iu}`$. If the underlying PDE is linear, i.e. the function $`ℱ`$ a multilinear one, we could solve the problem with linear regression methods. Secondly, to obtain a solution for the nonlinear problem, we solve $$\underset{\mathrm{\Phi }\in S}{\mathrm{inf}}\sum _{i=1}^K\mathrm{\Phi }_i(\widehat{𝒪}_i)=e^2$$ (5) in varying the functions $`\mathrm{\Phi }_i`$. The results are the estimates $`\widehat{\mathrm{\Phi }}_i`$. Up to now, we assumed the number and type of operators $`𝒪_i`$ to be known. For unknown systems, which are the ones we treat, it would be necessary to extend the number of operators $`K`$ to infinity. In practice one has to select a finite number. Redundant terms then deliver $`\widehat{\mathrm{\Phi }}_i\equiv 0`$ as result. It is important to find a reasonable initial guess for the operators $`𝒪_i`$ of Eq. (4). If there already exists some description for the system in a special state, one starts with the operators which appear in the known equations and tries to determine additional terms which may appear when leaving that state. This situation appears e.g. near some critical points, where one can start with already derived amplitude equations.
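As an illustration of the first step — estimating the terms $`\widehat{𝒪}_i`$ from data — the following Python sketch builds a term library from three consecutive snapshots. It is our own rendering: only the non-mixed terms relevant for the Swift–Hohenberg example below are included, and the dictionary keys are illustrative names.

```python
import numpy as np

def build_library(v, dt, dx, dy):
    """Estimate a subset of the terms O_i v of Eq. (4) from three snapshots.

    v : array of shape (3, Nx, Ny); all derivatives refer to the central
        time slice, spatial edges are handled by np.gradient internally.
    """
    v_t = (v[2] - v[0]) / (2.0 * dt)              # symmetric time derivative
    u = v[1]
    ux, uy = np.gradient(u, dx, dy, edge_order=2) # first spatial derivatives
    uxx = np.gradient(ux, dx, axis=0, edge_order=2)
    uyy = np.gradient(uy, dy, axis=1, edge_order=2)
    uxxxx = np.gradient(np.gradient(uxx, dx, axis=0), dx, axis=0)
    uyyyy = np.gradient(np.gradient(uyy, dy, axis=1), dy, axis=1)
    uxxyy = np.gradient(np.gradient(uxx, dy, axis=1), dy, axis=1)
    terms = {"dt": v_t, "id": u, "xx": uxx, "yy": uyy,
             "xxxx": uxxxx, "yyyy": uyyyy, "xxyy": uxxyy}
    return {name: a.ravel() for name, a in terms.items()}  # flatten to samples
```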
In the case of data analysis, the optimization problem (5) becomes a multiple nonlinear regression problem which can be solved using the alternating conditional expectation algorithm (ACE) . It has already been successfully applied in related fields of data analysis . In this case the norm of Eq. (5) is the $`L_2`$ norm. Using this algorithm, the functions $`\mathrm{\Phi }_i`$ of Eq. (4) are seen as so-called optimal transformations , which are defined to solve regression problems of the form $$E[(\mathrm{\Phi }_0(\widehat{𝒪}_0)-\sum _{i=1}^K\mathrm{\Phi }_i(\widehat{𝒪}_i))^2]\stackrel{!}{=}\mathrm{min}.$$ (6) The $`\mathrm{\Phi }_i`$ are varied in the space $`S`$ of the Borel–measurable functions with the additional requirement of zero expectation, $`E[\mathrm{\Phi }_i]=0(i=0,\mathrm{\dots },K)`$, and $`E[\mathrm{\Phi }_0^2]=1`$. The convergence of the ACE algorithm in the case of discrete samples rather than random variables has also been proved in . The minimization of the expectation value (6) is equivalent to the calculation of the maximal correlation $`\mathrm{\Psi }`$ . Instead of $`e^2`$ we use the maximal correlation as a measure for the quality of the result: $`\mathrm{\Psi }\in [0,1]`$ equals 1 for perfect estimation; the smaller it is, the worse the estimation. The ACE–algorithm achieves the maximization by iteratively transforming each $`\widehat{𝒪}_i`$ by suitable, generally nonlinear, transformations such as to yield a linear relationship between the new random variables $`\mathrm{\Phi }_i(\widehat{𝒪}_i)`$. As a first example, we analyze data $`v(\stackrel{}{x},t)`$ from the numerical integration of the Swift–Hohenberg equation . The model is of the form $`\partial _tu`$ $`=`$ $`\left[r-(\mathrm{\nabla }^2+k^2)^2\right]u-u^3`$ (7) $`=`$ $`(r-k^4)u-u^3-2k^2(\partial _{xx}+\partial _{yy})u`$ (9) $`-(\partial _{xxxx}+\partial _{yyyy}+2\partial _{xxyy})u.`$ The global dynamics of the model can be derived from a potential, such that the asymptotic time dependence is trivial . Therefore, we analyze a transient state to have a sufficient variation in the time derivative. For the identification procedure we use data produced by an explicit Euler integration scheme with a time step of $`10^{-4}`$, a spatial discretization of $`\mathrm{\Delta }x=\mathrm{\Delta }y=0.25`$, and periodic boundary conditions. As initial conditions we choose uniformly distributed independent random numbers from the interval $`[-10,10]`$. The parameters are $`r=0.1`$ and $`k=1`$. The differential operators are estimated by symmetric differencing schemes, e.g. $`\widehat{\partial _t}v(\stackrel{}{x},t)=(v(\stackrel{}{x},t+\mathrm{\Delta }t)-v(\stackrel{}{x},t-\mathrm{\Delta }t))/2\mathrm{\Delta }t`$. Thus, to estimate the time derivatives of first order in each spatial data point, we need three consecutive ”pictures” of data. The field size is $`100\times 100`$ points, i.e., the data set $`v(\stackrel{}{x},t)`$ contains $`3\times 10^4`$ values. The data for the central time point are shown in Fig. 1. In the following we can drop the hat without ambiguity. To identify the unknown system, we use an ansatz with non–mixed terms (like $`\partial _xv`$) up to fourth order in the spatial derivatives. To show how to handle mixed terms we additionally include the terms $`v\partial _xv`$, $`v\partial _yv`$, $`\partial _xv\partial _yv`$.
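For completeness, a minimal sketch of the data generation described above — explicit Euler integration of Eq. (9) with periodic boundary conditions — might look as follows; grid size, step count and random seed are our illustrative choices.

```python
import numpy as np

def swift_hohenberg(r=0.1, k=1.0, n=100, dx=0.25, dt=1e-4, steps=2000, seed=0):
    """Explicit Euler integration of Eq. (9) on a periodic n x n grid."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-10.0, 10.0, size=(n, n))     # random initial field

    def lap(f):
        """Periodic five-point Laplacian."""
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

    for _ in range(steps):
        w = lap(u) + k**2 * u                     # (nabla^2 + k^2) u
        u = u + dt * (r * u - (lap(w) + k**2 * w) - u**3)
    return u

snapshot = swift_hohenberg()                      # a still-transient state
```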
$`\mathrm{\Phi }_0(\partial _tv)`$ $`=`$ $`\mathrm{\Phi }_1(v)+\mathrm{\Phi }_2(\partial _xv)+\mathrm{\Phi }_3(\partial _yv)`$ (15) $`+\mathrm{\Phi }_4(\partial _{xx}v)+\mathrm{\Phi }_5(\partial _{xy}v)+\mathrm{\Phi }_6(\partial _{yy}v)`$ $`+\mathrm{\Phi }_7(\partial _{xxx}v)+\mathrm{}+\mathrm{\Phi }_{10}(\partial _{yyy}v)`$ $`+\mathrm{\Phi }_{11}(\partial _{xxxx}v)+\mathrm{}+\mathrm{\Phi }_{15}(\partial _{yyyy}v)`$ $`+\mathrm{\Phi }_{16}(v\partial _xv)+\mathrm{\Phi }_{17}(v\partial _yv)`$ $`+\mathrm{\Phi }_{18}(\partial _xv\partial _yv)`$ with the twelve redundant terms $`\mathrm{\Phi }_2(\partial _xv)`$, $`\mathrm{\Phi }_3(\partial _yv)`$, $`\mathrm{\Phi }_5(\partial _{xy}v)`$, $`\mathrm{\Phi }_7(\partial _{xxx}v)`$, $`\mathrm{\Phi }_8(\partial _{xxy}v)`$, $`\mathrm{\Phi }_9(\partial _{xyy}v)`$, $`\mathrm{\Phi }_{10}(\partial _{yyy}v)`$, $`\mathrm{\Phi }_{12}(\partial _{xxxy}v)`$, $`\mathrm{\Phi }_{14}(\partial _{xyyy}v)`$, $`\mathrm{\Phi }_{16}(v\partial _xv)`$, $`\mathrm{\Phi }_{17}(v\partial _yv)`$, $`\mathrm{\Phi }_{18}(\partial _xv\partial _yv)`$. We choose this ansatz as a compromise between generality and computational effort. Comparing Eq. (9) with Eq. (15), one expects in particular the following for the solution of Eq. (6): Up to an arbitrary common factor, $`\mathrm{\Phi }_0`$ should be the identity, $`\mathrm{\Phi }_1`$ should be a polynomial of third order, and for $`i=4,6,11,13,15`$ the $`\mathrm{\Phi }_i`$ should be linear functions $`\alpha _i𝒪_i`$. All other estimates should vanish. Furthermore, we expect for the slopes of the linear functions, after division by the slope of the l.h.s. to remove the arbitrary common factor, $`\alpha _4=\alpha _6=\alpha _{13}=-2`$, $`\alpha _{11}=\alpha _{15}=-1`$. Performing the ACE–algorithm, we find a maximal correlation of $`0.9993`$ and optimal transformations as shown in Fig. 2. All functions approximate the expected shape well, and the terms which were expected to vanish are indeed very small compared to the other ones. Note that this does not mean that the values for the redundant terms are vanishing themselves but that these are independent of all the other terms involved. The comparison of the slopes of the linear functions makes it possible to estimate parameters; we obtain $`\alpha _4=\alpha _6=-1.9`$, $`\alpha _{13}=-2.0`$, $`\alpha _{11}=\alpha _{15}=-1.0`$. The slopes of the terms which are expected to vanish are all smaller than $`0.03`$ in absolute value. Since it is difficult to estimate the errors of these values, it is recommended to check the range of validity of the reconstructed model by integration and comparison with the dynamics. In Fig. 3 the nonlinearity $`-0.9u-u^3`$ of Eq. (9) is compared explicitly with the estimate, taking now an ansatz with only the non–vanishing terms of the above result. Inside a range of 98% of the data values this term can be estimated with high accuracy. From a practical point of view, it is essential to discuss the stability of the identification method against noise. The effect of noise is to increase the errors of the estimates for the partial differential operators applied to the data, and therefore also the value of the error estimate $`e^2`$. The identification detects the remaining correlations of the vector field and its derivatives via the function $`ℱ`$. In general, if the vector field and its derivatives are still strongly correlated for a reasonably low noise level of a noisy system, the minimum according to Eq. (5) should still be detectable. To examine the stability against noise, we disturb each data point $`v(\stackrel{}{x},t)`$ by additive Gaussian white noise with a standard deviation of 0.5% of the data standard deviation.
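A bare-bones version of the ACE iteration for Eq. (6) can be written in a few lines. The sketch below is our own simplification: it uses crude quantile binning as the conditional-expectation smoother (where production implementations use, e.g., a supersmoother) and omits convergence checks.

```python
import numpy as np

def cond_exp(y, x, nbins=50):
    """Crude estimate of E[y | x], evaluated at each sample, via quantile binning."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, nbins - 1)
    means = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(nbins)])
    return means[idx]

def ace(y, X, n_iter=30, nbins=50):
    """Minimal ACE iteration for Eq. (6); X is a list of regressor arrays."""
    phi0 = (y - y.mean()) / y.std()            # Phi_0 with E = 0, E[Phi_0^2] = 1
    phis = [np.zeros_like(y) for _ in X]
    for _ in range(n_iter):
        for i, xi in enumerate(X):             # inner loop: update each Phi_i
            resid = phi0 - sum(p for j, p in enumerate(phis) if j != i)
            phis[i] = cond_exp(resid, xi, nbins)
            phis[i] -= phis[i].mean()
        phi0 = cond_exp(sum(phis), y, nbins)   # outer step: update Phi_0
        phi0 = (phi0 - phi0.mean()) / phi0.std()
    corr = np.corrcoef(phi0, sum(phis))[0, 1]  # estimate of the maximal correlation Psi
    return phi0, phis, corr
```

In this setting `y` would hold the flattened $`\partial _tv`$ samples and `X` the list of flattened term arrays from the library above; `corr` approximates the maximal correlation $`\mathrm{\Psi }`$.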
The estimates of the partial derivatives are disturbed severely due to error propagation. In spite of this, using again Eq. (15), the overall shape of all functions can still be recovered satisfactorily, while being distorted (Fig. 3). The maximal correlation has decreased to 0.926. Two other systems which were analyzed with similar success to the example above are: 1. The Kuramoto–Sivashinsky equation (in the form as given in Ref. ) in the fully chaotic regime. 2. A reaction–diffusion system with two components , where the nonlinearity in the inhibitor dynamics is a non–analytic function. We reconstructed that function with high accuracy for different states like a single rotating spiral and spiral–defect chaos. Summarizing our experiences, we find the following requirements on the data: 1) In order to identify a system with N components it is sufficient to measure N independent variables. 2) The sampling of the vector fields in space as well as in time has to be appropriate to estimate the partial derivatives with respect to time and space properly. 3) In the preceding example we analyzed data at one time point but for an extended region in space. In other situations it may be useful to perform the analysis at a single spatial point but for a longer interval in time, or even a combination of both procedures. However, as the example shows, even a small amount of data can be sufficient. 4) Since the reconstructed functions can only be determined at points which are attained by the data and the estimates for the differential operators, the data have to show a sufficient variability in space and time. 5) The inference from the data to a PDE is often not unique. But within the errors of the algorithm, our method reasonably reconstructs the dynamics on the trajectory given by the data. 6) We believe that due to the statistical nature of the method a higher noise level can typically be compensated by a larger data set. We would like to stress the fact that the procedure presented above is essentially independent of the part of phase space the system moves on and of the boundary and initial conditions. Thus we do not require the dynamics to be close to or away from an attractor. This kind of analysis is thus applicable in the case of high-dimensional chaotic motion, as exhibited by nonlinear spatially extended systems, as well as of transient motion. We did not discuss non-homogeneous or non-autonomous evolution laws, but in principle the arguments apply equally well, with some minor modifications, to those systems. In conclusion, we presented a method for the identification of spatiotemporal systems by numerical reconstruction of the PDE which describes the system. The identification procedure consists in solving an optimization problem which results in a nonlinear regression problem. Using the ACE–algorithm we showed that this task can indeed be solved in the case of numerical examples. We consider the method a useful tool for the analysis of spatiotemporal systems and expect it to find a large variety of applications. The limits of this method and the requirements on the data have been discussed. Future work will concentrate on an extension of the ACE–algorithm to the solution of multi–component problems, and on the application to real data. Extensions to more general systems are envisaged. We acknowledge helpful discussions with H. Kantz, J. Kurths, J. Parisi, A. Pikovsky, M. Zaks, and financial support by the Max–Planck–Gesellschaft.
no-problem/9907/cond-mat9907158.html
ar5iv
text
# Universal conductance fluctuations in three dimensional metallic single crystals of Si ## Abstract In this paper we report the measurement of conductance fluctuations in single crystals of Si made metallic by heavy doping ($`n\approx 2`$–$`2.5\,n_c`$, $`n_c`$ being the critical composition at the Metal–Insulator transition). Since all dimensions ($`L`$) of the samples are much larger than the electron phase coherence length $`L_\varphi `$ ($`L/L_\varphi \sim 10^3`$), our system is truly three dimensional. The temperature and magnetic field dependence of the noise strongly indicate universal conductance fluctuations (UCF) as the predominant source of the observed magnitude of noise. Conductance fluctuations within a single phase coherent region of $`L_\varphi ^3`$ were found to saturate at $`(\delta G_\varphi )^2\sim (e^2/h)^2`$. An accurate knowledge of the level of disorder enables us to calculate the change in conductance $`\delta G_1`$ due to the movement of a single scatterer as $`\delta G_1\sim e^2/h`$, which is $`\sim `$ 2 orders of magnitude higher than its theoretically expected value in 3D systems. In disordered systems with strong impurity scattering at low temperatures, the noise can arise from the mechanism of universal conductance fluctuations (UCF) . The origin of the UCF lies in the quantum interference of multiply backscattered electrons from the scattering centers and is intimately related to the mechanism of weak localization . The theory of UCF showed that at $`T=0`$ the electrical conductance ($`G`$) of a metallic system is an extremely sensitive function of its impurity configuration, and alteration of the position of even a single impurity over a sufficient length scale may induce a conductance change $`\delta G_1\sim e^2/h`$. At finite temperatures, one considers diffusive electronic motion in regions bounded by the electron phase coherence length $`L_\varphi `$, within which the interference effects are relevant. The total conductance change due to the motion of a number of scatterers inside $`L_\varphi ^d`$ is additive (as long as $`\delta G_1\ll e^2/h`$) and when the number of such scatterers is sufficiently large, the UCF noise inside one phase coherent region saturates with the total noise power $`(\delta G_\varphi )^2\sim (e^2/h)^2`$. At higher temperatures, or in cleaner samples with a longer elastic mean free path of the electrons, the noise due to UCF gives way to that due to mechanisms like local interference (LI) . Extensive experimental studies on UCF have been carried out in 1D and 2D disordered systems like metallic films of Bi , Ag , C-Cu composites , Li wires , GaAs/AlGaAs heterostructures or silicon inversion layers , and the existence of UCF has been convincingly established in these systems. However, the absolute magnitude of the noise has always been a very roughly estimated parameter since the nature of the defect and the level of disorder causing the UCF were unknown in most of the cases. In this paper we report conductance noise measurements in a completely different class of “metal”, namely heavily doped single crystalline Si, where disorder is primarily introduced as substitutional impurities of P (and B) in the Si matrix. These are metallic systems with a low carrier density ($`n`$) ($`\sim 10^{-3}`$ times the carrier density of a metal) and low conductivity ($`\sigma `$) ($`\sim 10^{-2}`$ times the conductivity of bad metals) and are close to the critical composition for the Metal–Insulator (MI) transition.
There are several compelling reasons to carry out the experiment on single crystals of Si made metallic by heavy doping: (1) It is a very well defined system where the number as well as the nature of the defects are known, and the samples used are extremely well characterized, unlike in most of the previous studies. This allows us to quantitatively compare the experiment to the magnitude of the noise predicted by the theory. (2) The defects which give rise to the noise are in the bulk of the solid and, the samples being single crystals, issues like defects at the surface or grain boundaries (which often is the case in polycrystalline metallic thin films) are absent here. (3) This has been the most extensively studied system in investigations of weak localization and the M-I transition and is often taken as a model solid in which new concepts in theories have been tested. Investigation of electronic phase induced fluctuation phenomena and UCF has thus been a long outstanding need of the field. In particular, it will be a search for UCF in a bulk 3D system, in contrast to past studies which were in 2D or 1D. Polished, $`111`$-oriented, Czochralski grown, P and B doped wafers of Si with thickness $`\sim `$ 300 $`\mu `$m were sized down to a length of 2 mm and width 0.10-0.15 mm, and were thinned down by chemical etching to a thickness of $`\sim `$ 30 $`\mu `$m. These wafers were used extensively in conductivity studies earlier . We have chosen two systems with similar concentrations of donors (P), but one was compensated with B (Si:P,B), whereas the other was left uncompensated (Si:P) (see table I). They are in the weak localization regime with 2.3 $`<k_Fl<`$ 5.5, where $`l`$ is the elastic mean free path. We have studied several samples of the same system for our experiment. Noise, electrical conductivity and magnetoresistance (MR) were all measured in the same sample to avoid any ambiguity. For noise measurements (done with a temperature stability $`|\mathrm{\Delta }T/T|<0.01\%`$) we used a five probe ac technique (carrier frequency 377 Hz), aided by digital signal processing methods . The sample volume for noise detection ($`\mathrm{\Omega }`$) was $`1.5`$–$`2.0\times 10^{-12}`$ m<sup>3</sup>. The peak current density was kept at $`10^6`$ A/m<sup>2</sup> (power dissipation $`<50\mu `$W) to avoid heating. Electrical contacts were made by a specially fabricated thermal wire bonder using gold wires of diameter $`25\mu `$m, with an average separation of electrodes of $`200`$–$`250\mu `$m. The contacts, with a temperature independent resistance $`\sim `$ 1 ohm, were ohmic. The Hall coefficient was found to be essentially temperature independent down to 2 K, the variation over the whole range being $`\sim `$ 20%. This ensured that issues like carrier freeze-out are of no consequence here. Both the samples are metallic with $`\sigma _{4.2K}/\sigma _{300K}>`$ 1 (see inset of figure 1). At $`T<4.2`$ K the correction to the conductivity, $`\mathrm{\Delta }\sigma (T)`$ (=$`\sigma (T)-\sigma (T=0)`$), is different in the two samples. The Si:P sample showed a dominant correction from the electron-electron interaction, with $`\mathrm{\Delta }\sigma (T)=mT^{1/2}`$ and $`m<`$ 0 . The compensated sample Si:P,B, with higher disorder, showed $`\mathrm{\Delta }\sigma (T)>0`$ as expected in a sample with the correction to the conductivity coming from weak localization (WL) . The phase coherence length $`L_\varphi `$ was determined from MR ($`\mathrm{\Delta }\rho /\rho `$) measurements. The data are shown in fig.1. 
For both the samples the MR is negative at low $`H`$ due to the WL contribution, and at higher $`H`$ the interaction contribution dominates. The observed $`\mathrm{\Delta }\rho /\rho `$ were fitted to an expression for the MR consisting of contributions from weak localization (with three field scales arising from the inelastic scattering $`H_\varphi `$, the spin-orbit scattering $`H_{so}`$ and the spin-flip scattering $`H_s`$) and interaction. Given space considerations we do not give the detailed fit procedure, which is available elsewhere . The lines through the data are the calculated MR. In both the systems we observed $`H_\varphi \gg H_{so},H_s`$. The $`L_\varphi `$ as obtained from the MR data using the relation $`H_\varphi =\hbar /4eL_\varphi ^2`$ is shown as a function of $`T`$ in fig.2. $`L_\varphi \propto T^{-p/2}`$ where $`p\sim `$ 1.0. Even at the lowest temperature, $`L_x/L_\varphi \sim 10^3`$, where $`L_x`$ is the smallest dimension (thickness) of the sample. This clearly shows that our system is truly 3D. For all the samples studied, the spectral power was $`\propto V_{bias}^2`$ within experimental accuracy and was found to scale with the sample volume as $`\mathrm{\Omega }^{-\nu }`$ . Typically, $`\nu \sim `$ 1.1-1.3. This scaling was found to be independent of the surface to volume ratio and valid down to low temperatures. This implies that the predominant contribution to the noise arises from the bulk. This clean inverse dependence is a very important observation because in almost all the past noise studies done on doped Si ($`n\sim 10^{21}`$ m<sup>-3</sup>) the source of noise was the surface or interface . In fig.3 we show typical data of the relative conductance fluctuations $`N(\delta G)^2/G^2`$ as a function of $`T`$. The variance shows a minimum at around 150 K - 175 K. In the temperature range $`T>`$ 175 K, $`(\delta G)^2`$ rises very rapidly, following a near exponential dependence on $`T`$. For $`T<`$ 100 K, $`(\delta G)^2`$ rises again as $`T`$ is reduced, as $`(\delta G)^2\propto T^{-q}`$, where $`q=0.53\pm 0.03`$ for Si:P and $`0.57\pm 0.03`$ for Si:P,B. Very often the noise is expressed through the normalized value $`\gamma `$ defined as: $$\gamma =f^\alpha S_v(f)(\mathrm{\Omega }n)/V_{bias}^2$$ (1) $`\alpha \sim `$ 1-1.2 for $`T>`$ 10 K. At $`T<`$ 10 K, the spectral dependence of the noise power differs significantly from the $`1/f^\alpha `$ form, as can be seen in the inset of fig.3. For comparison, in both systems at $`f=`$ 3 Hz, $`\gamma \sim `$ 1.5 at $`T=`$ 300 K. It is $`\sim `$ 0.3 at the minimum at $`T=`$ 150 K - 175 K, and at $`T=`$ 2 K it has a value $`\sim `$ 1. This is about three orders of magnitude higher than that seen in conventional thin metallic films. In our experiment, the absolute value of $`\gamma `$ is reproducible within $`\sim `$ 20% from run to run and 50% from sample to sample of the same system, arising primarily from the uncertainty in $`\mathrm{\Omega }`$. The temperature dependence of $`(\delta G)^2`$ (and hence of $`\gamma `$), including the minimum at $`T\sim `$ 150 K, is identical to that seen in films of Bi , Ag and C-Cu , and is qualitatively different from that seen in lightly doped Si films on sapphire, which are deep within the insulating side . It is striking that the temperature dependence of $`\gamma `$ in these three widely different materials is so similar, although the absolute values of $`\gamma `$ differ. It is even more remarkable considering that the physical forms of the samples (thin film vs bulk crystal) are completely different, and so is the nature of the defects that may cause the $`1/f`$ noise in these materials. 
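As an aside, the normalization in Eq. (1) is straightforward to apply to a measured voltage-noise spectrum. The sketch below, with made-up spectrum values and hypothetical variable names, shows how γ would be extracted from S_v(f) given the bias, carrier density and noise volume; it illustrates the bookkeeping only, and is not the authors' analysis code.

```python
import numpy as np

def hooge_gamma(f, S_v, V_bias, Omega, n, alpha=1.0):
    """Normalized noise magnitude gamma = f^alpha * S_v(f) * (Omega * n) / V_bias^2
    (Eq. 1). For a pure 1/f^alpha spectrum the result is frequency independent."""
    return f**alpha * S_v * Omega * n / V_bias**2

# Illustrative numbers only, chosen to land near the gamma ~ 1 range in the text:
f = np.logspace(-1, 1, 50)                   # Hz
S_v = 1e-19 / f                              # V^2/Hz, assumed pure 1/f spectrum
gamma = hooge_gamma(f, S_v, V_bias=1e-3,     # V
                    Omega=2e-12,             # m^3, noise volume
                    n=1e25)                  # m^-3, carrier density
print(gamma[0])                              # ~2 for these made-up inputs
```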
In investigating the origin of the noise, we note that the effect of weak localization was evident from $`\mathrm{\Delta }\sigma (T)`$ as well as the MR for both the samples. UCF, being a quantum interference related phenomenon, is expected to be the likely origin of the noise in this case. Below 100 K the diffusive dynamics of the electrons is ensured by $`l<L_\varphi `$. The increase in the noise magnitude below 100 K also points strongly towards a UCF dominated noise mechanism. However, the cleanest signature of UCF is the sensitivity of $`(\delta G)^2`$ to an external magnetic field $`H`$. The zero field noise is expected to be reduced by a factor of 1/2 at some characteristic field scale $`H_{c1}\approx 𝒜(h/e)/L_\varphi ^2`$, where $`𝒜`$ is a constant of the order of unity . This is due to the breaking of time reversal symmetry, as the magnetic field introduces an extra random phase into the electron’s wave function. In systems with weak spin-orbit scattering, a further drop by another factor of 1/2 is expected due to spin-symmetry breaking from the Zeeman splitting of the conduction electrons at a characteristic field $`H_{c2}\approx k_BT/g\mu _{imp}`$, where $`g`$ is the g-factor and $`\mu _{imp}`$ is the magnetic moment at an impurity site . In fig.4, we plot the observed variance $`(\delta G)^2`$ as a function of $`H`$. The first reduction by 1/2 occurs very distinctly at a field $`H_{c1}\sim `$ 2.5 mT for both the samples. As the temperature is raised, $`H_{c1}`$ becomes larger. At higher magnetic field, the fluctuations are reduced further by an additional factor of 1/2. We identify this as $`H_{c2}`$, which agrees well with the calculated value of $`\sim `$ 1.4 T. The distinct dependence of the noise on the magnetic field is a clear indication that a substantial portion of the low temperature noise indeed arises from the UCF mechanism. From the observed magnitude of the rms fluctuations over the experimental bandwidth, we can calculate the average variance, $`(\delta G_\varphi )^2`$, in a single phase coherent box of volume $`L_\varphi ^3`$. At length scales larger than $`L_\varphi `$, the noise from different phase coherent boxes gets superposed classically and hence the observed conductance variance $`(\delta G)^2`$ is related to $`(\delta G_\varphi )^2`$ as , $$\frac{(\delta G)^2}{G^2}=\frac{L_\varphi ^3}{\mathrm{\Omega }}\frac{(\delta G_\varphi )^2}{G_\varphi ^2}$$ (2) where $`G_\varphi =\sigma L_\varphi `$. Thus, at $`T\sim `$ 2 K, from the observed $`(\delta G)^2/G^2=1.3\times 10^{-13}`$ for Si:P and $`3.1\times 10^{-13}`$ for Si:P,B, we get $`\left<(\delta G_\varphi )^2\right>^{1/2}\approx 1.5\times (e^2/h)`$ for Si:P and $`\approx 1.2\times (e^2/h)`$ for Si:P,B. This clearly shows that the noise magnitude within $`L_\varphi ^3`$ in both the samples is saturated. This saturation of $`(\delta G_\varphi )^2`$ is also reflected in the temperature dependence of the noise. Using $`G_\varphi =\sigma L_\varphi `$, we expect $`N(\delta G)^2/G^2\propto L_\varphi \propto T^{-p/2}`$, where $`p\approx 1.0`$. ($`N`$ is the total number of carriers in the volume $`\mathrm{\Omega }`$.) This matches very well with our experimental observation. One of the most basic features of the UCF theory is the sensitivity of the fluctuations to the movement of a single scatterer, which is referred to as $`(\delta G_1)^2`$ . We show the important result that within a phase coherent volume $`L_\varphi ^3`$ even $`(\delta G_1)^2`$ is saturated to the value $`(e^2/h)^2`$. 
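Eq. (2) can be inverted to recover the single-box variance from the measured one. The sketch below does this arithmetic; the values of L_φ and σ are illustrative placeholders (the paper quotes only the resulting δG_φ, not these inputs), so the printed number should be read as order-of-magnitude bookkeeping rather than a reproduction of the published estimate.

```python
import numpy as np

E2_OVER_H = (1.602e-19)**2 / 6.626e-34   # conductance quantum scale e^2/h, in S

def delta_G_phi(rel_var, Omega, L_phi, sigma):
    """Invert Eq. (2): <(dG_phi)^2>^(1/2) = G_phi * sqrt(rel_var * Omega / L_phi^3),
    with G_phi = sigma * L_phi."""
    G_phi = sigma * L_phi
    return G_phi * np.sqrt(rel_var * Omega / L_phi**3)

# Measured relative variance from the text; L_phi and sigma are assumed values.
dG = delta_G_phi(rel_var=1.3e-13,        # (dG)^2/G^2 for Si:P at ~2 K
                 Omega=2e-12,            # m^3, noise volume
                 L_phi=100e-9,           # m   (assumed)
                 sigma=2e4)              # S/m (assumed)
print(dG / E2_OVER_H)                    # of order 1, i.e. saturated at ~e^2/h
```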
From the UCF theory one obtains $`(\delta G_\varphi )^2=n_s(\delta G_1)^2\times L_\varphi ^3\times (L_T/L_\varphi )^2`$, where $`n_s`$ is the density of “active” scatterers and $`L_T=\sqrt{\hbar D/k_BT}`$. Using $`(\delta G_\varphi )^2\approx (e^2/h)^2`$ for both the samples, we get $`n_s(\delta G_1)^2\approx 2\times 10^{23}\times (e^2/h)^2`$ /m<sup>3</sup> for Si:P and $`\approx 6\times 10^{23}\times (e^2/h)^2`$ /m<sup>3</sup> for Si:P,B. For both the samples the total dopant concentration is $`n_d\sim 10^{25}`$ /m<sup>3</sup> (see table I). Thus for both the samples $`n_s\approx (0.03`$–$`0.04)\times n_d`$ is all that is needed to saturate the fluctuations, with $`(\delta G_1)^2\approx (e^2/h)^2`$. An independent estimate of $`n_s`$ can be obtained from the analysis of the high temperature noise ($`T>`$ 150 K). In this temperature range $`l\gtrsim L_\varphi `$. Under this condition the likely mechanism of noise is the LI mechanism . From the observed $`(\delta G)^2`$ it is possible to estimate the fraction of active sites $`n_{LI}`$ taking part in the noise production by using the relation $$N\frac{(\delta G_{LI})^2}{G^2}=[n_{Si}\lambda (T)\beta _c\delta _c]^2\frac{n_{LI}}{n_{Si}}$$ (3) where $`n_{Si}=5\times 10^{28}`$ m<sup>-3</sup> is the atomic density of Si, $`\beta _c\approx `$ 0.25 is the anisotropy parameter, $`\delta _c\approx 4\pi /k_F^2`$ is the average defect cross-section and $`\lambda (T)`$ is the net mean free path. We find that within the observed bandwidth at 300 K, $`n_{LI}\approx 5\times 10^{-6}n_{Si}\approx 0.03\times n_d`$. As $`T`$ decreases $`n_{LI}`$ also decreases, and below 100 K $`n_{LI}`$ is $`\approx 0.01\times n_d`$. Although $`n_s`$ and $`n_{LI}`$ are not the same, they will be of similar magnitude. Thus, extending over $`10`$ decades of frequency, we obtain an independent estimate of $`n_s\approx 0.03\times n_d`$, which agrees well with that found in the previous paragraph. The saturation of $`(\delta G_1)^2`$ at $`(e^2/h)^2`$ is a surprising result in a 3D system. According to UCF theory, in a sample of side $`L`$ in $`d>2`$ dimensions, only a fraction $`(l/L)^{d-2}`$ of the Feynman paths passes through a particular scattering site. Assuming that the transmission amplitude due to this fraction of paths is statistically independent of that due to the rest of the paths, one would expect $`(\delta G_1)^2\sim (e^2/h)^2(l/L)^{d-2}`$. A more detailed calculation shows that in 3D, in a sample of size $`L_\varphi `$, $$(\delta G_1)^2=C(e^2/h)^2\frac{\alpha (k_F\delta r)}{(k_Fl_e)^2}\left(\frac{l}{L_\varphi }\right)$$ (4) where $`C\approx 4\sqrt{3}\pi `$ and $`\alpha (x)=1-\mathrm{sin}^2(x/2)/(x/2)^2\le 1.0`$. $`\delta r`$ is the length scale over which the atomic displacement takes place. Using the known values of the other parameters and assuming $`\alpha \approx 1`$, we get $`(\delta G_1)^2\approx 0.01\times (e^2/h)^2`$ in Si:P and $`\approx 0.03\times (e^2/h)^2`$ in Si:P,B. This value of $`(\delta G_1)^2`$ is nearly two orders of magnitude less than the saturation value of $`(e^2/h)^2`$ observed experimentally. Thus UCF theory underestimates $`(\delta G_1)^2`$ for 3D systems. Our result indicates that the theory's assumption of uncorrelated (statistically independent) Feynman paths may not be correct for 3D systems. From the experimental observation of the saturation of the noise, we are led to conclude that when a scattering center in the system moves, the transmission probability associated with even those paths which do not pass through the center is affected by correlation effects. 
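To see how far below (e²/h)² Eq. (4) sits, one can simply evaluate it. The sketch below does so for k_F l values in the range quoted for these samples; the constant C is taken at face value from the text, while the ratio l/L_φ is an assumed placeholder, so the output is only an order-of-magnitude illustration of the shortfall.

```python
import numpy as np

def dG1_sq_theory(kF_l, l_over_Lphi, C=4.0 * np.sqrt(3.0) * np.pi, alpha=1.0):
    """Eq. (4): (dG_1)^2 / (e^2/h)^2 = C * alpha / (kF*l)^2 * (l / L_phi),
    taking alpha(kF * dr) ~ 1 for large displacements."""
    return C * alpha / kF_l**2 * l_over_Lphi

# k_F l between 2.3 and 5.5 (the range quoted for these samples);
# l/L_phi ~ 1e-2 is an assumed value, not taken from the paper.
for kF_l in (2.3, 5.5):
    print(kF_l, dG1_sq_theory(kF_l, l_over_Lphi=0.01))
```

The resulting values of a few times 10⁻² or below bracket the quoted 0.01–0.03, which is exactly the two-orders-of-magnitude shortfall that motivates the correlation question taken up next.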
It would be interesting to know whether this correlation among the Feynman paths is due to electronic phase coherence intrinsic to any 3D system or to a finite correlation length effect in systems near the MI transition. In conclusion, we have observed saturated universal conductance fluctuations in a 3D disordered system, namely single crystals of silicon made metallic by heavy doping. We find that the existing theory of UCF grossly underestimates the absolute magnitude of the noise in such systems, even though the qualitative features do show satisfactory agreement . We would like to thank Prof. D. Holcomb of Cornell University for the Si samples and Prof. T.V. Ramakrishnan for a number of helpful discussions. Figure captions: Figure 1: Magnetoresistance (MR) as a function of magnetic field ($`H`$). The inset shows the temperature variation of the zero field conductivity. The solid lines are fits to the MR data according to the weak localization and electron-electron interaction corrections. Figure 2: The temperature dependence of the phase breaking length $`L_\varphi `$ determined from the magnetoresistance fits. The solid lines show $`L_\varphi \propto T^{-1/2}`$ in this temperature range. Figure 3: Conductance fluctuations as a function of temperature. The inset shows the spectral dependence of the noise power at the lowest $`T`$. Figure 4: The magnetic field dependence of the conductance fluctuations.
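As a closing aside, the conversions between field scales and lengths used throughout this paper are one-liners. The sketch below evaluates L_φ = (ℏ/4eH_φ)^{1/2} and the UCF correlation field H_{c1} ≈ 𝒜(h/e)/L_φ² with 𝒜 = 1; the input value of H_φ is a made-up illustration, not a fitted number from the paper.

```python
import numpy as np

HBAR = 1.055e-34     # J s
E = 1.602e-19        # C
H_PLANCK = 6.626e-34 # J s

def L_phi_from_H_phi(H_phi):
    """Phase coherence length from the weak-localization field scale,
    H_phi = hbar / (4 e L_phi^2)."""
    return np.sqrt(HBAR / (4.0 * E * H_phi))

def H_c1(L_phi, A=1.0):
    """UCF correlation field H_c1 ~ A * (h/e) / L_phi^2."""
    return A * (H_PLANCK / E) / L_phi**2

H_phi = 5e-3                       # tesla, illustrative input
L = L_phi_from_H_phi(H_phi)
print(L * 1e9, "nm")               # ~180 nm for this input
print(H_c1(L) * 1e3, "mT")         # the corresponding UCF field scale
```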
no-problem/9907/math9907014.html
ar5iv
text
# Dynamical systems arising from elliptic curves ## 1. Introduction Let $`F\in \mathbb{Z}[x]`$ denote a primitive polynomial with degree $`d`$, which factorizes as $`F(x)=b\prod _i(x-\alpha _i)`$. Then $`F`$ induces a homeomorphism $`T_F`$ on a compact, $`d`$-dimensional group $`X=X_F`$, via the companion matrix of $`F`$. The group $`X`$ is an example of a solenoid, whose definition is discussed in Section 2 below. The essential properties of this dynamical system $`T_F:X\to X`$ are as follows. 1. The topological entropy $`h(T_F)`$ is equal to $`m(F)`$, the Mahler measure of $`F`$ (see (1) below). 2. Let $`\mathrm{Per}_n(T_F)`$ denote the subgroup of $`X`$ consisting of elements of period $`n`$ under $`T_F`$, $`\mathrm{Per}_n(T_F)=\{x\in X:T_F^n(x)=x\}`$. If no $`\alpha _i`$ is a root of unity, then $`\mathrm{Per}_n(T_F)`$ is finite with order $`|\mathrm{Per}_n(T_F)|=|b|^n\prod _i|\alpha _i^n-1|.`$ For background, and proofs of these statements, see \[7, II\] and . The solenoid is a generalization of the (additive) circle, denoted 𝕋. Indeed, when $`F`$ is monic with constant coefficient $`\pm 1`$, we may take $`X`$ to be $`\text{𝕋}^d`$; the resulting map is the automorphism of the torus determined by the companion matrix of $`F`$. The immanence of the circle is seen in both 1 and 2 above. In 1, we may take the definition of the Mahler measure to be the logarithmic integral of $`|F|`$ over the circle, (1) $$m(F)=\int _0^1\mathrm{log}|F(e^{2\pi it})|dt=\mathrm{log}|b|+\underset{i=1}{\overset{d}{\sum }}\mathrm{log}^+|\alpha _i|,$$ where the last equality follows from Jensen’s Formula (see \[7, Lemma 1.9\] for proof). In 2, the periodic points formula is equivalent to evaluating the $`n`$-th division polynomial of the circle at the zeros of $`F`$. That is, if we take $`\varphi _n(x)=\prod _{\zeta ^n=1}(x-\zeta )`$, we get the formula $`\alpha ^n-1=\varphi _n(\alpha )`$, so $`|\mathrm{Per}_n(T_F)|=|b^n\times \prod _i\varphi _n(\alpha _i)|`$. In and \[7, IV\] we set out reasons for believing in the existence of elliptic dynamical systems, regarding properties 1 and 2 as the paradigm. Assuming $`d=1`$ for example, there ought to be a dynamical system where the immanent group is a rational elliptic curve, with the zero of $`F`$ corresponding to the $`x`$-coordinate of a rational point on that curve. In other words, for every elliptic curve $`E`$ and every point $`Q\in E(\mathbb{Q})`$ we are seeking a continuous map $`T=T_Q:X\to X`$ on some compact space $`X=X(E)`$ whose dynamical data should be described by well known quantities associated to the point on the curve. We expect the entropy of $`T`$ to be the global canonical height of the point $`Q`$ (a well-known analogue of Mahler’s measure), and the elements of period $`n`$ should be related to the elliptic $`n`$-th division polynomial evaluated at the point $`Q`$. There now follows a brief description of this paper, explaining where to look for our main conclusions. For reasons we will present in Sections 2 and 3, it is to be expected that the underlying space $`X`$ should be the adelic curve. Section 2 recalls the classical definition of the solenoid and the action $`F`$ induces on it. Lind and Ward re-worked the classical theory in adelic terms. They showed that the topological entropy can be decomposed into a sum of local factors, each of which is the entropy of a corresponding local action. Each of these local factors can be identified as a corresponding local component of the Mahler measure. Section 3 recalls the basic theory of elliptic curves needed. 
In particular, the decomposition of the global canonical height into a sum of local factors. Also, we recall that the $`p`$-adic curve is isomorphic to a simpler group, on which we may expect to define dynamical systems. In Section 4, in particular its conclusion, we will construct a dynamical system where the underlying space is a $`p`$-adic elliptic curve and where the map is induced by a point on that curve. The map in question is a $`p`$-adic analogue of the well-known $`\beta `$-transformation. The entropy of the map is the local canonical height of the point, and the periodic points can be counted exactly. In Section 5, we will consider how to glue together these local maps to get a global dynamical system. Here we are given an elliptic curve $`E`$ defined over $`\mathbb{Q}`$ and a rational point $`Q\in E(\mathbb{Q})`$. The point $`Q`$ induces a dynamical system where the underlying space is the elliptic adeles and the entropy turns out to be the global canonical height of the point $`Q`$. The construction of the map at the archimedean prime is artificial since it relies upon a priori knowledge of the height of the point (although it is a curious coincidence that the map is a classical $`\beta `$-transformation). We hope this will bring into better focus the construction at the non-archimedean primes, where the map uses no such a priori knowledge of the height. The artificiality of the map is somewhat redeemed when we go on to show that the periodic points are counted asymptotically by the real division polynomial at the point $`Q`$. This last result makes use of some non-trivial results: one from elliptic transcendence theory and the other a result about periodic points for the classical $`\beta `$-transformation. Finally, in Section 6, we will make some remarks about putative elliptic dynamical systems with the precise periodic point behaviour and discuss possible connections with mathematical physics. ## 2. The solenoid Given $`F(x)=bx-a`$, with $`a,b\in \mathbb{Z}`$ coprime, let $`X`$ denote the subgroup of $`\text{𝕋}^{\mathbb{Z}}`$ defined by (2) $$X=\{𝕩=(x_k):bx_{k+1}=ax_k\}.$$ The group $`\text{𝕋}^{\mathbb{Z}}`$ is compact by Tychonoff’s theorem, and $`X`$ is a closed subgroup, so it too is compact, an example of a (1-dimensional) solenoid. More generally, a solenoid is any compact, connected, abelian group with finite topological dimension (see ). The automorphism $`T`$ is defined by the left shift-action (3) $$T(𝕩)_k=x_{k+1}.$$ The map $`T`$ has the properties 1 and 2 of Section 1 by (see also for a more elementary discussion). In other words, $$h(T)=\mathrm{log}\mathrm{max}\{|a|,|b|\}=m(bx-a)=m(F)$$ (a form of Abramov’s formula). Our assumption that the zero of $`F`$ is not a unit root amounts to $`a\ne \pm b`$, and the periodic points are given by (4) $$|\mathrm{Per}_n(T)|=|b|^n|\varphi _n(a/b)|=|b^n-a^n|.$$ At the end of this section, we will show how the periodic points formula (4) comes about. In order to motivate the name, and what follows, we now give a second, equivalent definition of the solenoid and the action of $`T`$ upon it. Define $`X`$ to be the topological dual of the ring $`\mathbb{Z}[1/ab]`$, written $`X=\widehat{\mathbb{Z}[1/ab]}`$. Then define $`T`$ to be the map dual to the map $`x\mapsto \frac{a}{b}x`$ on $`\mathbb{Z}[1/ab]`$. 
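Before passing to the adelic point of view, here is a small numerical illustration of (1) and (4): the sketch computes the Mahler measure of a polynomial from its roots and the periodic point counts |b|ⁿ ∏ᵢ |αᵢⁿ − 1|, and checks that the logarithmic growth rate of the latter matches the entropy. The helper names are ours, not notation from the paper.

```python
import numpy as np

def mahler_measure(coeffs):
    """m(F) = log|b| + sum_i log^+ |alpha_i|, Eq. (1), for F given by its
    coefficient list (leading coefficient b first)."""
    b, roots = coeffs[0], np.roots(coeffs)
    return np.log(abs(b)) + np.sum(np.log(np.maximum(np.abs(roots), 1.0)))

def periodic_points(coeffs, n):
    """|Per_n(T_F)| = |b|^n * prod_i |alpha_i^n - 1|, Eq. (4)."""
    b, roots = coeffs[0], np.roots(coeffs)
    return abs(b)**n * np.prod(np.abs(roots**n - 1.0))

F = [2, -3]                    # F(x) = 2x - 3, i.e. b = 2, a = 3
print(mahler_measure(F))       # log 3 = h(T) = log max{|a|, |b|}
for n in (5, 10, 20):
    # |Per_n| = |3^n - 2^n|; log|Per_n|/n tends to m(F) = log 3
    print(n, np.log(periodic_points(F, n)) / n)
```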
The adelic point of view arises because $`X`$ is isomorphic to the quotient of $`\mathbb{R}\times \prod _{p|ab}\mathbb{Q}_p`$ by the diagonally embedded discrete subgroup $`\mathbb{Z}[1/ab]`$ (this is a simple finite version of the standard adelic construction of the dual of an 𝔸-field, see \[3, Section 3\] or \[28, Chapter IV\]). Each character on $`\mathbb{R}`$ restricts to a character on $`\mathbb{Z}[\frac{1}{ab}]`$; this induces a map from $`\mathbb{R}\cong \widehat{\mathbb{R}}`$ into $`X`$ (injective since $`\mathbb{Z}[\frac{1}{ab}]`$ is dense in $`\mathbb{R}`$). The fact that the real line is ‘wrapped’ densely into the compact group $`X`$ accounts for the name solenoid. The group $`X`$ is a semi-direct product of 𝕋 by $`\prod _{p|ab}\mathbb{Z}_p`$. The action does not preserve the various local components, but a direct calculation of the entropy formula is possible (see ). Lind and Ward simplified this by working with the adeles proper, which live as a covering space to the one above. In that context, the map on each component is simply multiplication by $`a/b`$. Their approach involves tensoring the dual of $`X`$ with $`\mathbb{Q}`$, which gives quick access to the standard results on adeles but destroys any periodic point behaviour (see \[12, Section 3\]). It is probably beneficial to keep both points of view in mind. The elliptic system in Section 5 has the elliptic adeles as the base group, and for the finite primes the local map is the local $`\beta `$-transformation by $`a/b`$. Thus it resembles the systems defined on both the solenoid and its adelic cover. Finally, we examine how the periodic points formula (4) comes about. This will be instructive in Section 6, when we consider a possible elliptic analogue. Suppose $`b=1`$, so that $`F(x)=x-a`$, and consider first the simpler case where the underlying space $`X`$ is the (additive) circle 𝕋. The map sends $`x\in \text{𝕋}`$ to $`T_F(x)=ax\mathrm{mod}1`$. Thus, the points of period $`n`$ are the solutions of the equation $`a^nx=x`$, or $`(a^n-1)x=0`$. Clearly there are $`|a^n-1|=|\varphi _n(a)|`$ solutions, which is the division polynomial evaluated at $`a`$. When the underlying space is the solenoid $`X`$ as in (2), the map is the left shift $`T`$ as in (3), so the points $`𝕩\in X`$ having period $`n`$ correspond to periodic vectors $`𝕪`$ of length $`n`$. The linear equation generated by such a vector is of the form $`C𝕪=\mathrm{𝟘}`$, where $`C`$ is the $`n\times n`$ circulant matrix on the row $`(a,-1,0,\mathrm{},0)`$. The number of solutions $`𝕪\in \text{𝕋}^n`$ of this equation, and hence the number of periodic points, is easily verified (see \[7, Lemma 2.3\]) to be $`|det(C)|`$. From the well-known properties of circulants, this is equal to $`|a^n-1|=|\varphi _n(a)|`$. ## 3. Elliptic curves In this section we will recall some basic results about elliptic curves and fix the notation. A good account of elliptic curves can be found in and ; all that follows in this section can be found in those two volumes. Denote by $`E`$ an elliptic curve defined over a field $`K`$, and by $`E(K)`$ the group of points of $`E`$ having co-ordinates in $`K`$. When $`K=\mathbb{Q}`$, Mordell’s theorem says that $`E(\mathbb{Q})`$ is finitely generated, and the torsion-free rank is referred to simply as the rank. Denote by $`\widehat{h}:E(\mathbb{Q})\to \mathbb{R}`$ the global canonical height on $`E(\mathbb{Q})`$, a well known analogue of Mahler’s measure (see (1)). Denote by $`\lambda _p`$ the local canonical height relative to the $`p`$-adic valuation. 
The formula that follows gives an important decomposition of the global height as a sum of local heights (see \[19, VIII\] and \[21, VI\]): (5) $$\widehat{h}(Q)=\underset{p\le \infty }{\sum }\lambda _p(Q),\text{ for }Q\in E(\mathbb{Q}).$$ For finite $`p`$, whenever $`Q`$ has good reduction at $`p`$, (6) $$\lambda _p(Q)=\frac{1}{2}\mathrm{log}\mathrm{max}\{|x(Q)|_p,1\}.$$ In general, the local height is only defined up to the addition of a constant. The definition (6) agrees with the one in . In , each local height is normalized by adding a constant to make it isomorphism-invariant. Whether normalized or not, (5) still holds. If $`K=\mathbb{R}`$, the curve $`E(\mathbb{R})`$ is isomorphic to either 𝕋 or $`C_2\times \text{𝕋}`$ (see \[21, V.2\]). Denote by $`E_1(\mathbb{R})`$ the connected component of the identity, which is always isomorphic to 𝕋. If $`K=\mathbb{Q}_p`$, the curve $`E(\mathbb{Q}_p)`$ can be reduced modulo $`p`$. The set of points having non-singular reduction is denoted by $`E_0(\mathbb{Q}_p)`$ and the kernel of the reduction is denoted by $`E_1(\mathbb{Q}_p)`$. For odd primes $`p`$, there is an isomorphism $`E_1(\mathbb{Q}_p)\cong p\mathbb{Z}_p`$. This isomorphism is essentially a logarithm, and it comes from the theory of formal groups. The situation when $`p=2`$ is similar; for details, see \[19, IV\]. These isomorphisms are analogous to the one from $`E_1(\mathbb{R})`$ to 𝕋. The local isomorphisms for all primes $`p`$ play a very important role in the development of dynamical systems because they allow actions on the additive local curves to be transported to the local curves proper. Consider the analogous situation in Section 1, where the immanent group is the circle. When $`d=1`$ for example, there is an isomorphism (the logarithm) from the circle to the additive group $`[0,1)`$. The action on the circle really arises from an action on $`[0,1)`$ which is then lifted via the logarithm to the circle itself. In the elliptic case, the local curve is isomorphic (via the elliptic logarithm) to an additive group. Subsequently, when we define an action on the $`p`$-adic curve, it will be one that is lifted from the additive curve. Thus, the dynamical systems which arise when the immanent group is the elliptic curve are exactly analogous to the case where the immanent group is the circle (or, more generally, the solenoid). Finally, we recall the elliptic analogue of the division polynomial $`x^n-1`$ on the circle. If $`E`$ denotes an elliptic curve defined over $`\mathbb{Q}`$ then without loss of generality it is defined by a generalized Weierstrass equation with integral coefficients. There is a polynomial $`\psi _n(x)`$ with integer coefficients having degree $`n^2-1`$ and leading coefficient $`n^2`$ whose zeros are precisely the $`x`$ co-ordinates of the points of $`E`$ having order dividing $`n`$; for details see . Later, we will consider the monic polynomial $`\nu _n(x)`$ of degree $`n-1`$, whose zeros are the $`x`$ co-ordinates of the non-identity points in $`E_1(\mathbb{R})`$ having order dividing $`n`$, (7) $$\nu _n(x)=\underset{\genfrac{}{}{0pt}{}{nQ=O}{O\ne Q\in E_1(\mathbb{R})}}{\prod }(x-x(Q)).$$ The coefficients of $`\nu _n(x)`$ are real algebraic numbers. ## 4. The $`\beta `$-transformation and a $`p`$-adic analogue A comprehensive introduction to ergodic theory can be found in . Here we just recall the definitions of ergodicity and entropy before examining in more detail the $`\beta `$-transformation and introducing its $`p`$-adic analogue. Let $`T:X\to X`$ be a measure-preserving transformation on the probability space $`(X,\mu )`$. 
Then $`T`$ is ergodic if the only almost-everywhere invariant sets are trivial, in other words if $`\mu (T^{-1}E\mathrm{\Delta }E)=0`$ implies that $`\mu (E)=0`$ or 1, where $`\mathrm{\Delta }`$ is the symmetric difference. Given two open covers $`𝒜,`$ of the compact topological space $`X`$, define their join to be $`𝒜=\{AB:A\in 𝒜,B\in \}`$, and define the entropy of $`𝒜`$ to be $`H(𝒜)=\mathrm{log}N(𝒜)`$ where $`N(𝒜)`$ is the number of sets in a finite subcover with minimal cardinality. The topological entropy of a continuous map $`T:X\to X`$ is defined to be $$h(T)=\underset{𝒜}{sup}\underset{n\to \infty }{lim}\frac{1}{n}H\left(\underset{j=0}{\overset{n-1}{\bigvee }}T^{-j}(𝒜)\right),$$ where the supremum is taken over all open covers of $`X`$ (see ; the topological entropy is a measure of orbit complexity introduced as an analogue of the measure-theoretic entropy). The $`\beta `$-transformation $`T_\beta `$ is defined for real $`\beta >0`$ on the interval $`[0,1)`$ by $`T_\beta (x)=\{\beta x\}=\beta x(mod1)`$. If $`\beta >1`$, the $`\beta `$-transformation preserves an absolutely continuous probability measure with respect to which it is ergodic , the (measure–theoretic and topological) entropy is $`h(T_\beta )=\mathrm{log}\beta `$ (see and , ) and (see ) the asymptotic growth rate of the periodic points equals the entropy. The result about the asymptotic growth rate will be applied in Section 5 (see (13)). Strictly speaking, the definition of topological entropy in terms of open covers does not apply to the classical $`\beta `$-transformation because it has a discontinuity; the topological entropy referred to is that of an associated shift system (see \[24, Section 7.3\]). If $`\beta \le 1`$, the map is simply multiplication by $`\beta `$. If $`\beta <1`$, $`T_\beta `$ does not preserve an absolutely continuous measure, it has topological entropy zero and has no periodic points apart from 0. In all cases, the entropy is $`h(T_\beta )=\mathrm{log}^+\beta `$. Now we define a $`p`$-adic analogue of the $`\beta `$-transformation. For any $`q\in \mathbb{Q}_p`$, define a map denoted $`T_q`$, sometimes referred to as the $`q`$-transformation, as follows. Let $`x`$ be a generic element of $`\mathbb{Z}_p`$ and write $`qx=\sum _{i=m}^{\infty }b_ip^i`$. Define $$T_q(x)=\underset{i=\mathrm{max}\{0,m\}}{\overset{\infty }{\sum }}b_ip^i.$$ In other words, $`T_q`$ multiplies by $`q`$ and cuts away the fractional tail in order to come back to $`\mathbb{Z}_p`$. Note that $`T_q`$ could be defined on $`p\mathbb{Z}_p`$ in an analogous way, and the ergodic properties would not change once the Haar measure had been normalized again. 1. If $`|q|_p\ge 1`$, the map $`T_q`$ preserves Haar measure on $`\mathbb{Z}_p`$. 2. If $`|q|_p<1`$ then $`T_q`$ is multiplication by $`q`$, and it only preserves the point mass at the identity. 3. The ring of $`p`$-adic integers $`\mathbb{Z}_p`$ is homeomorphic to the space $`X=\prod _{n\in \mathbb{N}}Y`$ of one-sided sequences with elements in $`Y=\{0,\mathrm{},p-1\}`$, and $`T_{1/p}`$ is conjugate to the left shift $`\sigma `$ on $`X`$. ###### Theorem 4.1. The topological entropy of the $`p`$-adic $`q`$-transformation is given by $`h(T_q)=\mathrm{log}^+|q|_p`$. ###### Proof. We follow Bowen and compute the topological entropy as a volume growth rate. 
It is a straightforward computation to check that Haar measure on $`\mathbb{Z}_p`$ is $`T_q`$-homogeneous, so (see \[2, Proposition 7\]) (8) $$h(T_q)=\underset{m\to \infty }{lim}\underset{n\to \infty }{lim\; sup}-\frac{1}{n}\mathrm{log}\mu \left(\underset{k=0}{\overset{n-1}{\bigcap }}T_q^{-k}B_m\right)$$ where $`B_m=p^m\mathbb{Z}_p`$. If $`|q|_p\le 1`$, $`T_q^{-1}B_m\supseteq B_m`$ so $$\underset{k=0}{\overset{n-1}{\bigcap }}T_q^{-k}B_m=B_m,$$ and (8) gives $`h(T_q)=0=\mathrm{log}^+|q|_p.`$ If $`|q|_p=p^r>1`$, then $`T_q^{-1}B_m=B_{m+r}`$, so $$\underset{k=0}{\overset{n-1}{\bigcap }}T_q^{-k}B_m=B_{m+rn},$$ so by (8) $`h(T_q)=r\mathrm{log}p=\mathrm{log}^+|q|_p`$. ∎ ###### Theorem 4.2. Let $`q\in \mathbb{Q}_p`$ with $`|q|_p\ge 1`$. The map $`T_q`$ is ergodic with respect to Haar measure for $`|q|_p>1`$, and is not ergodic for $`|q|_p=1`$. ###### Proof. Assume $`|q|_p>1`$, let $`𝒜`$ denote the algebra of all finite unions of measurable rectangles, and suppose $`E`$ is a measurable set invariant under $`T_q`$. For any given $`ϵ>0`$ it is possible to choose $`A\in 𝒜`$ with $`\mu (E\mathrm{\Delta }A)<ϵ`$, and thus $`|\mu (E)-\mu (A)|<ϵ.`$ Choose $`n`$ such that $`B=T_q^{-n}A`$ depends upon different co-ordinates from $`A`$: then $`\mu (A\cap B)=\mu (A)\mu (B)=(\mu (A))^2`$. Also, $`\mu (E\mathrm{\Delta }B)<ϵ`$, and $`\mu (E\mathrm{\Delta }(A\cap B))\le \mu ((E\mathrm{\Delta }A)(E\mathrm{\Delta }B))<2ϵ`$. Thus, $`|\mu (E)-\mu (A\cap B)|<2ϵ`$ and $`|\mu (E)-\mu (E)^2|<4ϵ.`$ Since $`ϵ`$ is arbitrary, this implies $`\mu (E)=0`$ or $`1`$ and thus $`T`$ is ergodic. When $`q`$ is a unit, the open sets of the form $`p^n\mathbb{Z}_p`$ for $`n\ge 1`$ are all invariant under $`T_q`$. ∎ ###### Remark 4.3. Notice that in this setting ergodicity and mixing coincide. Coelho and Parry have studied the ergodic decomposition of $`T_q`$ when $`q`$ is a unit (see ). A consequence of properties 1 and 2 for the systems in Section 1 is that the logarithmic growth rate of the periodic points coincides with the entropy (see ). That this also holds for $`T_q`$ follows from the next result. ###### Theorem 4.4. Given $`q\in \mathbb{Q}_p\setminus U`$, where $`U`$ denotes the set of unit roots in $`\mathbb{Q}_p`$, let $`T_q`$ denote the $`q`$-transformation on $`\mathbb{Z}_p`$. Then (9) $$\mathrm{log}|\mathrm{Per}_n(T_q)|=n\mathrm{log}^+|q|_p.$$ ###### Proof. Firstly, consider the case $`|q|_p<1`$. Then as $`n\to \infty `$, $`T_q^n(x)\to 0`$ for all $`x\in \mathbb{Z}_p`$. Thus $`T_q`$ has only one periodic point (zero) and both sides of (9) are zero. When $`|q|_p=1`$, the action of $`q`$ on $`\mathbb{Z}_p`$ is simply multiplication, so the periodic points are solutions of the equation $`q^nx=x`$. Since $`q`$ is not a unit root, there are no periodic points except $`x=0`$, so (9) holds. Finally suppose $`|q|_p>1`$. If $`q=p^{-k}`$ with $`k>0`$, the periodic points are easy to determine. We have $`T_q^n(x)=\sum _{i=0}^{\infty }a_{i+nk}p^i`$ and the solutions of $`T_q^nx=x`$ are given by the $`p^{kn}`$ points with $`a_{i+nk}=a_i`$ for $`i=0,\mathrm{},kn-1`$. Thus, both sides of (9) are equal to $`nk\mathrm{log}p`$. In general, suppose $`|q|_p=p^k`$. We claim that for each integer $`a`$ with $`0\le a<p^{nk}`$, there is a unique $`y\in \mathbb{Z}_p`$ with $`T_q^n(a+p^{nk}y)=a+p^{nk}y`$. This follows because the left hand side is $`b+q^np^{nk}y`$ for some $`b\in \mathbb{Z}_p`$, which depends only upon $`a,q`$ and $`n`$. Write $`q^np^{nk}=v`$ for some $`p`$-adic unit $`v`$; then the equation $`b+vy=a+p^{nk}y`$ has a unique solution for $`y\in \mathbb{Z}_p`$. This shows that there are at least $`p^{nk}`$ solutions of $`T_q^nx=x`$. 
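As an aside before the proof is completed below, this count is easy to verify numerically for a small example. For p = 5 and q = 7/5 (so log⁺|q|₅ = log 5), a fixed point of T_q solves x(q − 1) = c/p for a single digit c, giving exactly p = 5 solutions in ℤ₅. The exact rational implementation of T_q below is our own illustrative sketch, not code from the paper.

```python
from fractions import Fraction

def T_q(x, q, p):
    """The q-transformation on Z_p, computed exactly on rationals: multiply
    by q, then subtract the fractional tail of the p-adic expansion so that
    the result is p-integral again."""
    y = q * x
    if y == 0:
        return y
    k, den = 0, y.denominator
    while den % p == 0:          # extract |y|_p = p^k
        den //= p
        k += 1
    if k == 0:
        return y                 # already in Z_p
    # fractional tail f = c / p^k with c = (y * p^k) mod p^k
    c = (y.numerator * pow(den, -1, p**k)) % p**k
    return y - Fraction(c, p**k)

p, q = 5, Fraction(7, 5)
# Candidate fixed points: x = c / (p*(q - 1)) = c/2 for digits c = 0..4.
fixed = [Fraction(c, 1) / (p * (q - 1)) for c in range(p)]
print([T_q(x, q, p) == x for x in fixed])   # all True: |Per_1(T_q)| = 5 = p
```

All five candidates are confirmed fixed, in agreement with log|Per₁(T_q)| = log⁺|q|₅ = log 5; the proof now shows there are no further solutions.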
That there can be no more follows because we may take the $`a`$ as above as coset representatives for $`\mathbb{Z}_p/p^{nk}\mathbb{Z}_p`$, so every element $`x\in \mathbb{Z}_p`$ is represented by some $`a`$. ∎ In conclusion, given any $`p`$-adic elliptic curve $`E`$ and any point $`Q\in E(\mathbb{Q}_p)`$, we can construct a dynamical system in the following way. The curve is locally isomorphic to the group $`p\mathbb{Z}_p`$ and therefore to $`\mathbb{Z}_p`$. Now let $`Q`$ act via the $`q`$-transformation on the additive curve, where $`q=x(Q)`$. Then transport this action to the curve proper via the logarithm. This is an exact analogue of the toral dynamical systems in Sections 1 and 2. ## 5. Dynamics on the Elliptic Adeles From here on, let $`E`$ denote an elliptic curve defined over $`\mathbb{Q}`$ and let $`Q\in E(\mathbb{Q})`$. The explicit formula (6) for the local height of $`Q`$ does not hold if $`Q`$ has bad reduction or if $`p`$ is the prime at infinity. In particular, the local height in these cases can be negative. Since the entropy of a map is never negative, we will work with points whose local heights are guaranteed to be non-negative. ###### Claim 5.1. There exists an $`n\ge 1`$ for which the finite-index subgroup $`nE(\mathbb{Q})\subseteq E(\mathbb{Q})`$ has $`\lambda _p(Q)\ge 0`$ for all $`p<\infty `$ and $`Q\in nE(\mathbb{Q})`$. ###### Proof. Since $`E_1(\mathbb{Q}_p)`$ is a subgroup of finite index in $`E(\mathbb{Q}_p)`$ (\[19, VII\]), for each bad prime $`p`$ there exists an integer $`n_p`$ such that $`E_1(\mathbb{Q}_p)`$ has index $`n_p`$ in $`E(\mathbb{Q}_p)`$. Let $`n=\prod _{\text{bad }p}n_p`$; then $`nE(\mathbb{Q})\subseteq E_1(\mathbb{Q}_p)`$ for all bad $`p`$. Recall now that if $`Q`$ has good reduction at $`p`$ (and this includes the case where $`Q\in E_1(\mathbb{Q}_p)`$) then the local height at $`p`$ is given by (6), and it follows that $`\lambda _p(Q)\ge 0`$. ∎ Define $`𝒮`$ to be the set of bad primes together with infinity. Assume that the point $`Q`$ satisfies (10) $$\lambda _p(Q)>0\text{ for all }p\in 𝒮.$$ If $`Q\in nE(\mathbb{Q})`$ then $`Q\in E_1(\mathbb{Q}_p)`$ for all the bad primes. It follows from (6) that the local height is actually positive. If the rank of $`E(\mathbb{Q})`$ is not zero then $`nE(\mathbb{Q})`$ has finite index in $`E(\mathbb{Q})`$, so in that case there is a large stock of points $`Q`$ which satisfy (10). At the infinite prime, this amounts to assuming that $`Q`$ lies in a neighbourhood of the identity. Suppose $`Q\in E(\mathbb{Q})`$ is a point for which the assumption (10) holds. Define $`X`$ to be the space (11) $$X=\underset{p\le \infty }{\prod }E_1(\mathbb{Q}_p).$$ The point $`Q`$ induces an action $`T_Q:X\to X`$ in the following way: $`(T_Q)_p`$ is the $`q`$-transformation if $`p`$ is finite (where $`q=x(Q)`$) and the $`\beta `$-transformation if $`p`$ is infinite, where $`\mathrm{log}\beta =2\lambda _{\mathrm{}}(Q)`$. Remember that these are actions on 𝕋 and $`p\mathbb{Z}_p`$, but for every $`p`$ the action can be transported to $`E_1(\mathbb{Q}_p)`$ via the isomorphisms in Section 3. The statements in the following theorem are analogues of statements 1 and 2 in the introduction. There we supposed that the zeros of $`F`$ were not torsion points of 𝕋. The assumption that $`Q`$ is not a torsion point of $`E`$ is built into (10): $`Q`$ is a torsion point if and only if $`\widehat{h}(Q)=0`$, and (10) guarantees that $`\widehat{h}(Q)>0`$. ###### Theorem 5.2. With the definitions and assumptions above, 1. the entropy of $`T_Q`$ is given by $`h(T_Q)=2\widehat{h}(Q)`$ and 2. 
the asymptotic growth rate of the periodic points is given by the division polynomial $`\nu _n(x)`$ in (7): $$\mathrm{log}|\mathrm{Per}_n(T_Q)|\sim \mathrm{log}|b^n\nu _n(q)|\text{ as }n\to \infty .$$ ###### Proof. By Theorem 4.1, the entropy of each component of $`T_Q`$ is given by $`\mathrm{log}\beta _p`$, where $`\beta _p=\beta `$ if $`p=\infty `$ and $`\beta _p=\mathrm{max}\{|x(Q)|_p,1\}`$ if $`p`$ is finite. Since there are only finitely many primes for which the local dynamical systems are not isometries, Theorem 4.23 in applies, giving $$h(T_Q)=h(T_\beta )+\underset{p<\infty }{\sum }h(T_q)=\underset{p\le \infty }{\sum }\mathrm{log}\beta _p=2\underset{p}{\sum }\lambda _p(Q)=2\widehat{h}(Q).$$ For the asymptotic growth rate of the periodic points, note that if dynamical systems $`\widehat{T}_i:X_i\to X_i`$ ($`i=1,\mathrm{},r`$) are given and the point $`x_i`$ has period $`m`$ under $`\widehat{T}_i`$ for $`i=1,\mathrm{},r`$, then $`(x_i)`$ has period $`m`$ under $`\prod _i\widehat{T}_i`$. Thus we may count the contribution to the periodic points from each prime separately. For $`p<\infty `$, from Theorem 4.4 (9), (12) $$\mathrm{log}|\mathrm{Per}_n(T_q)|=n\mathrm{log}^+|q|_p=-n\mathrm{log}|b|_p.$$ Note that our assumption on $`Q`$ guarantees that $`q`$ is not an integer and so, in particular, $`q`$ is not a root of unity. Summing over all finite $`p`$ and using the product formula, we obtain a total contribution of $`n\mathrm{log}|b|`$ to the periodic points. For the infinite prime, we quote a deep result from , which says (13) $$\mathrm{log}|\mathrm{Per}_n(T_\beta )|=n\mathrm{log}\beta +o(n).$$ From (12) and (13) we have the formula (14) $$\mathrm{log}|\mathrm{Per}_n(T_Q)|=n\mathrm{log}|b|+n\mathrm{log}\beta +o(n).$$ Finally, we quote Theorem 6.24 in , which gives (15) $$\mathrm{log}|\nu _n(q)|=n\mathrm{log}\beta +o(n).$$ The formula (15) depends upon an application of the elliptic analogue of Baker’s theorem from transcendence theory (see , \[6, Section 7\] and \[7, Theorem 6.18\]). It follows from (14) and (15) that $`\mathrm{log}|b^n\nu _n(q)|`$ is asymptotically equivalent to $`\mathrm{log}|\mathrm{Per}_n(T_Q)|`$. ∎ Set now $`𝒮^{}(Q)=\{p:|x(Q)|_p>1\}\cup \{\infty \}`$, let $`X^{}=\prod _{p\in 𝒮^{}(Q)}E_1(\mathbb{Q}_p)`$ and let $`T_Q^{}`$ be defined component-wise as above. ###### Theorem 5.3. 1. The entropy of $`T_Q^{}`$ is given by $`h(T_Q^{})=2\widehat{h}(Q)`$, 2. the asymptotic growth rate of the periodic points is given by the division polynomial (7): $$\mathrm{log}|\mathrm{Per}_n(T_Q^{})|\sim \mathrm{log}|b^n\nu _n(q)|,$$ 3. $`T_Q^{}`$ is ergodic. ###### Proof. For the entropy and the periodic points, the same argument as in Theorem 5.2 holds, giving the desired result. The ergodicity is proved in Theorem 4.2. The pros and cons of our construction may be summarized as follows. Firstly, we have constructed a dynamical system whose immanent group is the adelic elliptic curve. The map is defined locally by the $`p`$-adic $`\beta `$-transformation on the additive curve. Secondly, the construction exhibits phenomena which resemble those in the solenoid case. Against these comments we must set the following. Firstly, the maps we are using are not continuous because of the discontinuity of the classical $`\beta `$-transformation. The effect upon the map $`T_Q`$ is to deny continuity at infinity on the archimedean component. Secondly, we would have preferred to see periodic point behaviour which was counted precisely by the usual elliptic division polynomial (rather than just asymptotically by the real division polynomial). 
Thirdly, the map at the archimedean prime uses a priori knowledge of the archimedean height of the point. Fourthly, we made special assumptions to guarantee that each local height was non-negative. Although these assumptions were natural (at the infinite prime and each bad prime we assumed our point was to be found in a neighbourhood of the identity), we would have preferred not to have needed any assumptions. In the next section, we will discuss how these deficiencies might be overcome. ## 6. Putative Elliptic Dynamics Suppose $`E`$ denotes an elliptic curve defined by a generalized Weierstrass equation with integral coefficients. For each $`n\in \mathbb{N}`$, let $`\psi _n(x)=n^2x^{n^2-1}+\mathrm{}`$ denote the $`n`$-th division polynomial. Let $`Q`$ denote a non-torsion rational point on $`E`$, with $`x(Q)=a/b`$. It is tempting to conjecture that there must be a compact space $`X`$ with a continuous action $`T_Q:X\to X`$ whose entropy is given by $`h(T_Q)=2\widehat{h}(Q)`$, and whose periodic points are counted, in the sense of Section 1, by $`E_n(Q)=|b^{n^2-1}\psi _n(a/b)|`$. The sequence $`E_n(Q)`$ is certainly a divisibility sequence, like its toral counterpart. However, there is a problem in making the obvious conjecture. Hitherto, the systems we have considered have been $`\mathbb{Z}`$-actions (see or for the definition of a $`\mathbb{Z}^d`$-action). Recent work (see ) makes it unlikely that the sequence $`E_n(Q)`$ counts periodic points for a $`\mathbb{Z}`$-action. Indeed, when $`E`$ is given by the equation $`y^2+y=x^3-x`$ and $`Q`$ is the point $`Q=(0,0)`$, it follows from that $`E_n(Q)`$ cannot represent periodic point data for any $`\mathbb{Z}`$-action. The sequence begins 1,1,1,1,2…, so it violates the divisibility condition (with $`n=5`$) given in (3) of . What does seem possible is that $`E_n(Q)`$ represents periodic point data for a $`\mathbb{Z}^2`$-action. The two reasons for saying this are, firstly, that the growth rate of the sequence is quadratic exponential in $`n`$ (see \[7, Theorem 6.18\]). Thus, the sequence is more likely to represent $`|\mathrm{Per}_{(n\mathbb{Z})^2}(T)|`$ for some $`\mathbb{Z}^2`$-action $`T`$, where $`(n\mathbb{Z})^2`$ represents the subgroup of $`\mathbb{Z}^2`$ having index $`n^2`$ and consisting of all $`(x,y)`$ with $`n|x`$ and $`n|y`$. This would make it consistent with the known properties of algebraic $`\mathbb{Z}^2`$-actions (see \[11, Theorem 7.1\]). Secondly, the general feeling persists that natural maps on elliptic curves tend to be quadratic. We conjecture that for every rational elliptic curve $`E`$ and every rational point $`Q\in E(\mathbb{Q})`$, there is a (necessarily infinite dimensional) compact space $`X`$ with a continuous $`\mathbb{Z}^2`$-action $`T_Q:X\to X`$ having the following properties: 1. the entropy of $`T_Q`$ is given by $`h(T_Q)=2\widehat{h}(Q)`$, and 2. the periodic points are counted by $`\psi _n`$ in the sense that $$|\mathrm{Per}_{(n\mathbb{Z})^2}(T_Q)|=|b^{n^2-1}\psi _n(a/b)|.$$ The integral case (where $`b=1`$) of the conjecture is already challenging. Suppose then $`b=1`$ and we seek an action of the integral point $`Q`$, where $`x(Q)=a`$. We would hope to recognize the sequence $`|\psi _n(a)|`$ in some natural way as counting periodic points. In \[7, VI.4\], we noted that the numbers $`|\psi _n(a)|`$ arise as determinants of a nested sequence of integral $`n\times n`$ Hankel matrices. We suggest that these matrices might be the analogues of the circulant matrices $`C`$ in Section 2. We hope that an action with such beautiful dynamical data would not be deficient in the way that the map of Section 5 was deficient. 
In particular, the potential negativity of the local heights need not seem such a threat. Although a negative entropy cannot exist, the difference between two non-negative entropies can nonetheless make sense. If one dynamical system extends another, then the difference between their two entropies represents the entropy across the fibres. This raises the possibility that a phenomenon such as bad reduction might well have a dynamical interpretation. Interest in our conjecture (see \[7, Question 14\]) is heightened because of the connection with the following remarkable circle of ideas. On the one hand, mathematical physicists have studied the dynamics of integrable systems (see , ). Here, inter alia, one looks for meromorphic maps on the complex plane which commute with polynomials. It is a classical result of Ritt (see ) that all non-trivial examples arise from the exponential function or the elliptic functions associated to some lattice. Coincidentally, Morgan Ward (see , ) showed that all integer sequences satisfying a certain natural recurrence relation arise from the exponential function or the elliptic functions associated to some lattice, suitably evaluated. In the exponential case, these sequences can always be identified with the periodic point data for toral automorphisms. It is hoped that in the elliptic case also, the sequences $`|\psi _n(a)|`$ represent the periodic point data for some elliptic systems. That being so, a new chapter in integrable systems could be written, yielding further inter-play between elliptic curves and mathematical physics.
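The sequence quoted above for y² + y = x³ − x with Q = (0,0) can be generated from the standard elliptic divisibility sequence recursion h_{2n+1} = h_{n+2}h_n³ − h_{n+1}³h_{n−1} and h_{2n}h_2 = h_n(h_{n+2}h_{n−1}² − h_{n−2}h_{n+1}²), whose first four terms for this curve and point are 1, 1, −1, 1. We state these initial values as an assumption drawn from the standard worked example, not as something established in the paper; the sketch below simply iterates the recursion.

```python
def elliptic_divisibility_sequence(initial, length):
    """Extend an EDS from its first four terms h_1..h_4 via the duplication
    formulae  h_{2n+1} = h_{n+2} h_n^3 - h_{n+1}^3 h_{n-1}  and
    h_{2n} h_2 = h_n (h_{n+2} h_{n-1}^2 - h_{n-2} h_{n+1}^2).
    Assumes h_2 divides exactly (true here, since h_2 = 1)."""
    h = {1: initial[0], 2: initial[1], 3: initial[2], 4: initial[3]}
    for m in range(5, length + 1):
        n = m // 2
        if m % 2:   # m = 2n + 1
            h[m] = h[n + 2] * h[n] ** 3 - h[n + 1] ** 3 * h[n - 1]
        else:       # m = 2n
            h[m] = h[n] * (h[n + 2] * h[n - 1] ** 2 - h[n - 2] * h[n + 1] ** 2) // h[2]
    return [h[i] for i in range(1, length + 1)]

# For y^2 + y = x^3 - x and Q = (0,0):
print(elliptic_divisibility_sequence([1, 1, -1, 1], 10))
# -> [1, 1, -1, 1, 2, -1, -3, -5, 7, -4]
```

The absolute values of the first five terms are 1, 1, 1, 1, 2, matching the sequence cited in the divisibility discussion of Section 6.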
no-problem/9907/astro-ph9907191.html
ar5iv
text
# Extinction bias of microlensed stars towards the LMC and the fraction of machos in the halo ## 1 Introduction One of the main puzzles of Galactic microlensing surveys is the poorly determined location of the lens population of the events towards the Magellanic Clouds. Currently there are two popular views on the issue: (a) the lenses are located in the halo, hence are likely baryonic dark matter candidates (Alcock et al. 1997); (b) both the lenses and sources are part of the Magellanic Clouds, hence are stars orbiting in the potential well of the Clouds (Sahu 1994, Wu 1994, Zhao 1998a,b, 1999a, Weinberg 1999). These lead to drastically different predictions for the relative amount of baryonic dark matter in galaxy halos as compared to baryons in stars and gas of various phases in galaxies and the intergalactic medium. Gould (1995) showed that the low dispersion of many LMC tracers limits the amount of self-lensing in a virialized disk of the LMC, as assumed in the original models of Sahu and Wu. Zhao (1998a) suggested that this constraint can be circumvented by invoking a plausible amount of unvirialized material, particularly tidally excited material in the immediate surroundings of the LMC disk, as either foreground lenses or background sources. Zhao (1998b) suggested that the polar stream seen in Kunkel et al.’s (1997) kinematic survey of the LMC carbon stars could contribute to lensing. Weinberg (1999) showed a specific model for thickening the LMC disk by interaction with the Galaxy. Presently the observational status is still very confusing. Several observations suggest that our line of sight to the LMC passes through a 3-dimensional stellar distribution more extended along the line of sight than a simple thin disc of the LMC (Kunkel et al. 1997, Zaritsky et al. 1997, 1999 and references therein). Zaritsky et al. (1999) also include a review of the arguments and counter-arguments for these structures. To break the degeneracy of the models we need new observations which are sensitive to the thickness of the LMC and the location of the lenses. Several ways have been proposed, including studying the spatial distribution and magnitude distribution of the microlensed sources (Zhao 1999a, Zhao, Graff & Guhathakurta 1999), the radial velocity distribution of the sources (Zhao 1999b) and the reddening distribution of the sources (Zhao 1999c). Essentially, if lensing is due to Galactic halo machos or a foreground object, the sample of lensed stars should be a random subsample of the observed stars in the LMC. On the other hand, if there is some tidal material behind the LMC disk, then the stars in the tidal debris can be strongly lensed by ordinary stars in the intervening LMC disk; hence the spatial and kinematical distributions of these source stars will be those of the tidal debris, rather than those of the LMC disk. It is suggested that the source stars in the tidal debris should stand out as outliers in the radial velocity distribution of the LMC (Zhao 1999b), or as fainter-than-normal red clump stars in the color magnitude diagram (Zhao, Graff & Guhathakurta 1999). The idea about the reddening distribution is to measure the distribution of the reddening of individual LMC stars in small patches of sky centered on the microlensed stars. Basically some kind of “reddening parallaxes” can be derived for these stars from the line of sight depth effect, i.e., the dust layer in the LMC makes stars behind the layer systematically redder than those in front of the layer. 
The method involves obtaining multi-band photometry and/or spectroscopy of fairly faint ($`19`$–$`21`$ mag) stars during or well after microlensing. From these data we can infer the reddening of the microlensed stars and of neighbouring unlensed stars. The method, in some sense, is a variation of the distance effect suggested by Stanek (1995) for Galactic bulge microlensing events. Here we extend the analysis of Zhao (1999c), present simulations of the reddening of microlensed sources, and show how reddening might be used to constrain the macho fraction in the halo. The models are described in §2, results are shown in §3. We summarize in §4 and discuss a few practical issues. ## 2 Models ### 2.1 Density models for the dust, the stars and the lenses For the time being we shall assume a smoothed dust layer of the LMC with a $`\mathrm{sech}^2`$ profile and a FWHM of $`w`$. Let $`\rho _\mathrm{d}(D)`$ be the dust density at distance $`D`$; then $$\rho _\mathrm{d}(D)dD=\rho _\mathrm{d}(D_{\mathrm{LMC}})\mathrm{sech}^2\left(\frac{D-D_{\mathrm{LMC}}}{0.56w}\right)dD,$$ (1) where $`\rho _\mathrm{d}(D_{\mathrm{LMC}})`$ is the dust density at the mid-plane of the LMC. For the LMC stars, we assume a two-component model. Let $`\rho _{\mathrm{LMC}}(D)`$ be the density of the LMC stars; then $$\rho _{\mathrm{LMC}}(D)=\frac{\mathrm{\Sigma }_{\mathrm{disk}}}{0.56W}\mathrm{sech}^2\left(\frac{|D-D_{\mathrm{LMC}}|}{0.56W}\right)+\frac{\mathrm{\Sigma }_{\mathrm{extra}}}{1.06W_{\mathrm{extra}}}\mathrm{exp}\left[-0.69L^2\right],$$ (2) where $`L`$ $`=`$ $`{\displaystyle \frac{(D_{\mathrm{max}}-D_{\mathrm{min}})(D-D_{\mathrm{extra}})}{(D_{\mathrm{max}}-D_{\mathrm{extra}})W_{\mathrm{extra}}}},\mathrm{if}D_{\mathrm{max}}>D>D_{\mathrm{extra}},`$ (3) $`=`$ $`{\displaystyle \frac{(D_{\mathrm{max}}-D_{\mathrm{min}})(D-D_{\mathrm{extra}})}{(D_{\mathrm{extra}}-D_{\mathrm{min}})W_{\mathrm{extra}}}},\mathrm{if}D_{\mathrm{min}}<D<D_{\mathrm{extra}}.`$ (4) Our LMC model includes a (thin) $`\mathrm{sech}^2`$-disk component of surface density $`\mathrm{\Sigma }_{\mathrm{disk}}`$ and FWHM thickness $`W`$, and an extra component with surface density $`\mathrm{\Sigma }_{\mathrm{extra}}`$ and FWHM thickness $`W_{\mathrm{extra}}`$. The extra component consists of two half-Gaussians, which are joined together at their peaks at $`D=D_{\mathrm{extra}}`$ and truncated at the lower and upper ends at $`D=D_{\mathrm{min}}`$ and $`D=D_{\mathrm{max}}`$. This is intended to model any non-virialized material in the vicinity of the LMC (Zhao 1998a, Weinberg 1999). To be most general, an offset between the extra component and the LMC disk, namely $`D_{\mathrm{extra}}-D_{\mathrm{LMC}}`$, is allowed in our models. Let $`\nu _{\ast }(D)dD`$ be the number of LMC stars in the line of sight distance bin ($`D`$, $`D+dD`$); then the LMC star number density is given by $$\nu _{\ast }(D)dD=C_0\rho _{\mathrm{LMC}}(D)D^2dD,$$ (5) where $`C_0`$ is a normalization constant. For the lens population, we include the contributions from the LMC stars and from intervening machos in an isothermal halo. 
The density of lenses at distance $`D_l`$ is $$\rho _{\mathrm{lens}}(D_l)=\rho _{\mathrm{LMC}}(D_l)+f_{\mathrm{macho}}\rho _{\mathrm{halo}}(D_l),$$ (6) where $`f_{\mathrm{macho}}`$ is the fraction of machos in the halo, and $`\rho _{\mathrm{halo}}(D)`$ is the dark halo density, given by $$\rho _{\mathrm{halo}}(D)=0.01M_{\odot }\mathrm{pc}^{-3}\left[1+\left(\frac{D}{8\mathrm{k}\mathrm{p}\mathrm{c}}\right)^2\right]^{-1}.$$ (7) The halo density corresponds to a surface density of the dark matter (machos plus wimps) halo of $`\mathrm{\Sigma }_{\mathrm{halo}}\approx 100M_{\odot }\mathrm{pc}^{-2}`$ towards the LMC. ### 2.2 Reddening vs. line of sight distance of a star and their distributions In general the reddening of a star is correlated with the distance to the star, with stars at the backside of the LMC seeing more dust and experiencing more reddening. But since the dust distribution is clumpy and the reddening distribution is patchy, any relation between reddening and line of sight distance is not a well-defined one-to-one relation, but has a significant amount of scatter due to the patchiness of the dust and measurement error. Nevertheless we can argue that stars in a small (e.g., $`4^{\prime \prime }\times 4^{\prime \prime }`$) patch of sky will see the same set of dust clouds, so if we sort these neighbouring stars according to their reddening, we also get a sorted list in distance. Stars in the LMC can be ranked according to an observable reddening indicator $`\kappa _{\mathrm{obs}}`$. For any LMC star define $$\kappa _{\mathrm{obs}}\equiv \frac{E(B-V)_{\mathrm{LMC}}^{\mathrm{obs}}}{\left<E(B-V)_{\mathrm{LMC}}^{\mathrm{obs}}\right>},$$ (8) where $`E(B-V)_{\mathrm{LMC}}^{\mathrm{obs}}`$ is the observed reddening in $`B-V`$ color towards the LMC star, after discounting the reddening by the Galactic foreground, and $`\left<E(B-V)_{\mathrm{LMC}}^{\mathrm{obs}}\right>`$ is the average over random stars in the small patch of sky centered on that LMC star, which is approximately the reddening of those stars at the mid-plane of the LMC. Surely the typical reddening $`\left<E(B-V)_{\mathrm{LMC}}^{\mathrm{obs}}\right>`$ varies widely from patch to patch on scales of arcminutes (Harris et al. 1997 and references therein), but the rescaled reddening $`\kappa _{\mathrm{obs}}`$ allows us to sort not only the stars in the same patch but also stars in different patches. A higher rank, i.e. a larger $`\kappa _{\mathrm{obs}}`$, means a deeper penetration into the dust layer. Stars at the midplane have rank $`\kappa _{\mathrm{obs}}=1`$, and those at the front or back side of the LMC disk have rank $`\kappa _{\mathrm{obs}}=0`$ or $`\kappa _{\mathrm{obs}}=2`$. Consider the reddening rank of stars in a smooth dust model. Let $`A_{\mathrm{LMC}}(D)`$ be the dust absorption towards a LMC star at distance $`D`$ (after discounting the absorption by the Galactic foreground dust), and let $`\kappa (D)`$ be the predicted reddening rank of the star; then $$\kappa (D)=\frac{A_{\mathrm{LMC}}(D)}{A_{\mathrm{LMC}}(D_{\mathrm{LMC}})}=\frac{\int _0^D\rho _\mathrm{d}(D)𝑑D}{\int _0^{D_{\mathrm{LMC}}}\rho _\mathrm{d}(D)𝑑D}$$ (9) (cf eq. 1), where $`\rho _\mathrm{d}(D_{\mathrm{LMC}})`$ and $`A_{\mathrm{LMC}}(D_{\mathrm{LMC}})`$ are the dust density and absorption at the mid-plane of the LMC. 
It is easy to show that

$$\kappa (D)=1+\mathrm{tanh}\left(\frac{D-D_{\mathrm{LMC}}}{0.56w}\right),$$ (10)

so that $`\kappa (D)\rightarrow 0`$ for $`D\ll D_{\mathrm{LMC}}-w`$, $`\kappa (D)=1`$ at $`D=D_{\mathrm{LMC}}`$, and $`\kappa (D)\rightarrow 2`$ for $`D\gg D_{\mathrm{LMC}}+w`$. So the reddening rank $`\kappa (D)`$ increases from 0 to 2 abruptly in the distance range $`D_{\mathrm{LMC}}\pm \frac{w}{2}`$ as a star enters and exits the dust layer. It turns out that this property is not limited to the $`\mathrm{sech}^2`$ profile, but is generic for any smooth dust layer with a symmetric profile. Now, to take into account the scatter due to measurement error and any residual patchiness, we assume that the observed reddening has a simple Gaussian distribution centered on the predicted value from the smooth dust disk model. Then for a star at any distance $`D`$, the reddening rank $`\kappa _{\mathrm{obs}}`$ is drawn from the following distribution function

$$B(\kappa _{\mathrm{obs}},D)=\frac{1}{\sqrt{2\pi }\sigma }\mathrm{exp}\left[-\frac{\left(\kappa _{\mathrm{obs}}-\kappa (D)\right)^2}{2\sigma ^2}\right],$$ (14)

which is a Gaussian with constant dispersion $`\sigma `$ and a mean $`\kappa (D)`$ as predicted from the smooth model. Now we integrate over LMC stars at all distances and bin them according to their reddening $`\kappa _{\mathrm{obs}}`$. Let $`N_{*}(\kappa _{\mathrm{obs}})`$ be the relative frequency of finding an unlensed star with reddening $`\kappa _{\mathrm{obs}}`$; then

$$N_{*}(\kappa _{\mathrm{obs}})=\int dD\nu _{*}(D)B(\kappa _{\mathrm{obs}},D).$$ (15)

### 2.3 Optical depth as a function of distance and reddening

The microlensing optical depth $`\tau (D_s)`$ of a source star at distance $`D_s`$ is defined by

$$\tau (D_s)\equiv \frac{\nu _s(D_s)}{\nu _{*}(D_s)},$$ (16)

where $`\nu _{*}(D_s)dD_s`$ and $`\nu _s(D_s)dD_s`$ are the numbers of stars and microlensed sources in the distance bin ($`D_s`$, $`D_s+dD_s`$), respectively (cf. eq. 5). It is well known that in the standard full-macho isothermal halo model $`f_{\mathrm{macho}}=1`$, $`\rho _{\mathrm{lens}}=\rho _{\mathrm{halo}}`$ (cf. eq. 6), and the microlensing optical depth is $`\tau _{\mathrm{std}}=5\times 10^{-7}`$ (cf. e.g., Paczyński 1986). It is convenient to rescale the optical depth of a model with this standard value. The microlensing optical depth $`\tau (D_s)`$ is then given by

$$\tau (D_s)=\tau _{\mathrm{std}}\frac{\int _0^{D_s}dD_l\rho _{\mathrm{lens}}(D_l)(D_s-D_l)D_l/D_s}{\int _0^{D_{\mathrm{LMC}}}dD_l\rho _{\mathrm{halo}}(D_l)(D_{\mathrm{LMC}}-D_l)D_l/D_{\mathrm{LMC}}}.$$ (17)

The optical depth $`\tau (D_s)`$ increases with the source distance. Suppose all lenses are at distance $`D_l`$; then

$$\tau (D_s)\propto (D_s-D_l).$$ (18)

Unfortunately, it is difficult to observe this effect directly, because it is hard to measure the distance to an LMC star with an accuracy of 1 kpc. But fortunately the distance to a star is correlated with its reddening, and the reddening of a star can be measured accurately to about 20%. So we have an interesting effect: the rate of microlensing is correlated with the reddening of the stars.
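Eq. (17) is a double screen integral that is easy to mimic numerically. The sketch below is a straightforward trapezoidal transcription (the helper names and the callable interface are our own conventions, continuing the earlier fragments):

```python
import numpy as np

def rho_halo(D):
    """Isothermal macho halo of eq. (7); D in kpc, density in Msun/pc^3."""
    return 0.01 / (1.0 + (D / 8.0) ** 2)

def tau(D_s, rho_lens, D_lmc=50.0, tau_std=5e-7, n=4000):
    """Rescaled optical depth of eq. (17); rho_lens is the callable
    rho_LMC + f_macho * rho_halo of eq. (6)."""
    def screen(rho, D_max):
        D_l = np.linspace(0.0, D_max, n)
        return np.trapz(rho(D_l) * (D_max - D_l) * D_l / D_max, D_l)
    return tau_std * screen(rho_lens, D_s) / screen(rho_halo, D_lmc)

# sanity check: the full-macho model (all lenses in the halo) returns
# tau(D_LMC) = tau_std by construction
print(tau(50.0, rho_halo))  # -> 5e-07
```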
For each bin in the rank of reddening ($`\kappa _{\mathrm{obs}}`$, $`\kappa _{\mathrm{obs}}+d\kappa _{\mathrm{obs}}`$), we can define an optical depth

$$\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}})\equiv \frac{N_s(\kappa _{\mathrm{obs}})}{N_{*}(\kappa _{\mathrm{obs}})}=\frac{\int dD_s\nu _s(D_s)B(\kappa _{\mathrm{obs}},D_s)}{\int dD\nu _{*}(D)B(\kappa _{\mathrm{obs}},D)},$$ (19)

where $`N_{*}(\kappa _{\mathrm{obs}})`$ and $`N_s(\kappa _{\mathrm{obs}})`$ are the relative frequencies of finding an unlensed star and a microlensed source with reddening $`\kappa _{\mathrm{obs}}`$, respectively (cf. eq. 15). Note that $`\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}})`$ is observationally measurable, because we can always bin the microlensed events according to their observable reddening and obtain the frequencies $`N_{*}(\kappa _{\mathrm{obs}})`$ and $`N_s(\kappa _{\mathrm{obs}})`$. Of particular interest is $`\tau _{\mathrm{obs}}(0<\kappa _{\mathrm{obs}}\le 1)`$. Note that stars at the mid-plane of the LMC disk have average reddening, so $`\kappa _{\mathrm{obs}}=1\pm 3\sigma `$ at the $`3\sigma `$ confidence level. In comparison, stars well in front of the dust layer have $`\kappa _{\mathrm{obs}}<3\sigma `$, and stars well behind the dust layer have $`\kappa _{\mathrm{obs}}>2-3\sigma `$. So $`\tau _{\mathrm{obs}}(0<\kappa _{\mathrm{obs}}\le 1)`$ is the optical depth of a thin layer of stars co-spatial with the dust layer and slightly closer to us than the mid-plane of the LMC, with $`D_{\mathrm{LMC}}-w/2\lesssim D\le D_{\mathrm{LMC}}`$. Since the thickness of the dust layer, $`w\sim 100`$–$`200`$ pc, is very small, these stars are virtually at the same distance. So we have

$$\tau _{\mathrm{obs}}(0<\kappa _{\mathrm{obs}}\le 1)\approx \tau (D_{\mathrm{LMC}})\ge f_{\mathrm{macho}}\tau _{\mathrm{std}}.$$ (20)

So we have turned the theoretical quantity $`\tau (D_{\mathrm{LMC}})`$ into an observable, and this observable optical depth $`\tau _{\mathrm{obs}}(0<\kappa _{\mathrm{obs}}\le 1)`$ sets an upper limit on the amount of machos in the foreground of the LMC. The above inequality reduces to an equality in the absence of any LMC lenses in front of the LMC disk. If the picture that the LMC is a thin disk without any extra material in front or behind is correct, then we should find that $`\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}})`$ is at a constant level $`\tau (D_{\mathrm{LMC}})=f_{\mathrm{macho}}\tau _{\mathrm{std}}`$ for all reddenings. Presently the survey teams have not studied the reddening dependence of the optical depth, so they quote only the observed optical depth averaged over all LMC stars, which is a single number. Let this be $`\tau _{\mathrm{obs}}`$; then current data require

$$\tau _{\mathrm{obs}}\equiv \frac{\int _{D_{\mathrm{min}}}^{D_{\mathrm{max}}}dD\nu _{*}(D)\tau (D)}{\int _{D_{\mathrm{min}}}^{D_{\mathrm{max}}}dD\nu _{*}(D)}=\frac{\int d\kappa _{\mathrm{obs}}N_{*}(\kappa _{\mathrm{obs}})\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}})}{\int d\kappa _{\mathrm{obs}}N_{*}(\kappa _{\mathrm{obs}})}\approx 0.5\tau _{\mathrm{std}},$$ (21)

where the factor 0.5 comes from the fact that the observed microlensing optical depth accounts for about one-half of that of the standard macho dark halo model $`\tau _{\mathrm{std}}`$ (Alcock et al. 1997). We would like to know how much of the observed optical depth is due to stellar lenses in the LMC vicinity and how much is due to machos in the Galactic halo.
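Eq. (19) is easy to mimic once $`\nu _{*}(D)`$ and $`\tau (D)`$ are in hand, since $`\nu _s(D)\propto \nu _{*}(D)\tau (D)`$ by eq. (16) and the Gaussian kernel of eq. (14) maps distance into observed reddening. A minimal version, with the rank of eq. (10) repeated so the fragment runs on its own (placeholder parameters as before):

```python
import numpy as np

kappa = lambda D, D_lmc=50.0, w=0.2: 1.0 + np.tanh((D - D_lmc) / (0.56 * w))

def B(kap_obs, D, sigma=0.3):
    """Gaussian scatter of eq. (14) around the smooth-model rank kappa(D)."""
    return np.exp(-(kap_obs - kappa(D)) ** 2 / (2 * sigma ** 2)) \
        / (np.sqrt(2.0 * np.pi) * sigma)

def tau_obs(kap_obs, D, nu_star, tau_of_D, sigma=0.3):
    """Reddening-binned optical depth of eq. (19) on a distance grid D.
    nu_star and tau_of_D are arrays of nu_*(D) and tau(D); the microlensed
    source density is nu_s = nu_* * tau (cf. eq. 16)."""
    nu_src = nu_star * tau_of_D
    return (np.trapz(nu_src * B(kap_obs, D, sigma), D)
            / np.trapz(nu_star * B(kap_obs, D, sigma), D))
```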
If we define $`f_{*}`$ as the fraction of the observed microlensing optical depth due to stellar lenses, then the fraction of machos in the dark halo is given by

$$f_{\mathrm{macho}}=\frac{\tau _{\mathrm{obs}}(1-f_{*})}{\tau _{\mathrm{std}}}=0.5(1-f_{*}).$$ (22)

So for a given set of stellar distribution parameters ($`\mathrm{\Sigma }_{\mathrm{extra}}`$ etc.) we can predict the optical depth due to stellar lenses, $`f_{*}\tau _{\mathrm{std}}`$; we can then determine $`f_{\mathrm{macho}}`$ by fixing the overall optical depth to the observed value. Finally we would like to characterize the excess reddening of the microlensed stars. This can be done by defining an observable parameter $`\xi _{\mathrm{obs}}`$ such that

$$\xi _{\mathrm{obs}}\equiv \frac{\int d\kappa _{\mathrm{obs}}N_s(\kappa _{\mathrm{obs}})\kappa _{\mathrm{obs}}}{\int d\kappa _{\mathrm{obs}}N_s(\kappa _{\mathrm{obs}})}-\frac{\int d\kappa _{\mathrm{obs}}N_{*}(\kappa _{\mathrm{obs}})\kappa _{\mathrm{obs}}}{\int d\kappa _{\mathrm{obs}}N_{*}(\kappa _{\mathrm{obs}})}$$ (23)

$$=\frac{\int dD\nu _s(D)\kappa (D)}{\int dD\nu _s(D)}-\frac{\int dD\nu _{*}(D)\kappa (D)}{\int dD\nu _{*}(D)}.$$ (24)

Because of the integration over the entire range of the reddening, $`\xi _{\mathrm{obs}}`$ is independent of the dispersion $`\sigma `$ between the observed reddening rank of a star and the reddening rank predicted from a smooth model (cf. eq. 14).
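Since $`\xi _{\mathrm{obs}}`$ is independent of $`\sigma `$, it can be computed directly from the distance-space distributions via eq. (24); the fragment below assumes the same placeholder grids and helpers as the earlier sketches.

```python
import numpy as np

def xi_obs(D, nu_star, tau_of_D):
    """Excess reddening of microlensed sources, eqs. (23)-(24)."""
    kap = 1.0 + np.tanh((D - 50.0) / (0.56 * 0.2))  # eq. (10), placeholder w
    nu_src = nu_star * tau_of_D                     # nu_s, cf. eq. (16)
    mean = lambda nu: np.trapz(nu * kap, D) / np.trapz(nu, D)
    return mean(nu_src) - mean(nu_star)
```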
## 3 Results

### 3.1 Model parameters

Here we study the effect of a mixed halo model on the reddening distribution of the microlensed stars. Since the parameters of the LMC are very uncertain, we explore a range of dust models and stellar models. Table 1 lists the parameters of the models. We fix the distance to the LMC, $`D_{\mathrm{LMC}}=50`$ kpc, and set the tidal radius of the LMC $`|D_{\mathrm{min}}-D_{\mathrm{LMC}}|=|D_{\mathrm{max}}-D_{\mathrm{LMC}}|=10`$ kpc. For the $`\mathrm{sech}^2`$ stellar disk of the LMC and the dust layer, we fix $`\mathrm{\Sigma }_{\mathrm{disk}}=400M_{\odot }\mathrm{pc}^{-2}`$. We fix the FWHM thickness of the dust layer $`w=200`$ pc, but allow the relative thickness of the two disks, $`W/w`$, to vary in the range 0.5 to 2, a reasonable range for thin stellar and dust disks. We set the FWHM of the tidal material $`W_{\mathrm{extra}}=0.05(D_{\mathrm{max}}-D_{\mathrm{min}})=1`$ kpc, but allow the peak position to vary in the range 40 kpc $`\le D_{\mathrm{extra}}\le `$ 60 kpc, and the amount of stars in the extra component to vary in the range $`0\le \mathrm{\Sigma }_{\mathrm{extra}}\le 50M_{\odot }\mathrm{pc}^{-2}`$. We set this upper limit for the surface density of the extra material to about 10% of the surface density of the LMC, an acceptable amount for some hidden material. In comparison, the surface density of the dark matter (machos plus wimps) halo is $`\mathrm{\Sigma }_{\mathrm{halo}}\approx 100M_{\odot }\mathrm{pc}^{-2}`$ (cf. eq. 7). These parameters are comparable to those of previous models for the volume density of the LMC disk (Wu 1995, Weinberg 1999). It turns out the reddening distributions of microlensed and unlensed stars are insensitive to the exact values of $`w`$, $`W`$, $`W_{\mathrm{extra}}`$ and $`\mathrm{\Sigma }_{\mathrm{disk}}`$ as long as we are in the regime where $`w\sim W\sim W_{\mathrm{extra}}\ll |D_{\mathrm{max}}-D_{\mathrm{min}}|`$ and $`\mathrm{\Sigma }_{\mathrm{extra}}\sim \mathrm{\Sigma }_{\mathrm{halo}}\ll \mathrm{\Sigma }_{\mathrm{disk}}`$. These distributions are more sensitive to ratios such as $`W/w`$, $`\mathrm{\Sigma }_{\mathrm{extra}}/\mathrm{\Sigma }_{\mathrm{halo}}`$, and $`\frac{|D_{\mathrm{extra}}-D_{\mathrm{LMC}}|}{|D_{\mathrm{max}}-D_{\mathrm{min}}|}`$. For example, models with the extra stars centered on the LMC disk, i.e., $`\frac{|D_{\mathrm{extra}}-D_{\mathrm{LMC}}|}{|D_{\mathrm{max}}-D_{\mathrm{min}}|}\approx 0`$, require many more stars, with $`\mathrm{\Sigma }_{\mathrm{extra}}/\mathrm{\Sigma }_{\mathrm{halo}}\sim 1`$, to come up with enough events, and hence are much less efficient at producing microlensing than models with extra stars 5–10 kpc in front of or behind the LMC. So the main parameters that we will vary are $`\mathrm{\Sigma }_{\mathrm{extra}}`$, $`D_{\mathrm{extra}}`$ and $`W/w`$. The predictions are made for halo-lensing models and for mixed models with different amounts of stars in the extra component, different offset distances from the LMC, and different relative thicknesses of the stellar disk vs. the dust disk. We also set the random error $`\sigma `$ of the observed reddening at the 20%–40% level, which seems a reasonable amount. A larger dispersion would smooth out some tell-tale features of the reddening distributions. But it turns out that the mean excess $`\xi _{\mathrm{obs}}`$ and the optical depth in the mid-plane $`\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}}=1)`$ are insensitive to the observational error $`\sigma `$.

### 3.2 Reddening distributions of microlensed and unlensed stars

The line of sight distributions of a few model quantities are shown in Fig. 1 for one of the models (see caption). Clearly all stellar density distributions peak at $`D=D_{\mathrm{LMC}}=50`$ kpc, the mid-plane of the high-density disk of the LMC; similarly for the thin dust layer. The LMC stars have a second peak due to extra stars placed behind the LMC disk. The lens density distribution also has a gently falling part in the range 10 kpc $`<D<`$ 40 kpc due to foreground machos, whose number density falls as $`D^{-2}`$. The lensed stars (dotted line with diamonds) also have a higher tail. For sources in front of the LMC disk, $`D<D_{\mathrm{LMC}}`$, the microlensing optical depth is nearly constant, $`\tau (D_s)/\tau _{\mathrm{std}}\approx f_{\mathrm{macho}}\approx 0.36`$. This is because the optical depth due to the ever-decreasing density of the foreground machos is insensitive to the source distance. But after passing the LMC disk the optical depth takes off linearly with the source distance, until $`\tau (D_s)/\tau _{\mathrm{std}}\sim 1`$. This is because any source star behind the LMC suddenly sees an intervening dense screen of lenses coming from the high surface density stellar disk of the LMC, so the optical depth goes up in proportion to the distance to the LMC disk (cf. eq. 18). This explains the higher tail of the lensed stars as compared to unlensed stars (cf. Fig. 1). Note that there is only a modest increase of the optical depth from entering to exiting the LMC thin disk. The amount is of the order of $`(\mathrm{\Sigma }_{\mathrm{disk}}/\mathrm{\Sigma }_{\mathrm{halo}})(W/10\mathrm{kpc})<8\%`$ of the total optical depth, consistent with the estimates of Sahu (1994) and Wu (1994) for a thin disk. The dust extinction is nearly a Heaviside function of the distance, climbing steeply from zero absorption in front of the layer to a constant value $`2\times A_{\mathrm{LMC}}(D_{\mathrm{LMC}})`$ a few hundred pc behind the layer, where $`A_{\mathrm{LMC}}(D_{\mathrm{LMC}})`$ is the absorption in the mid-plane.
This is because as a star sinks deeper into the dust layer it experiences an increasing reddening, until it emerges from the dust layer again. Fig. 2 shows the reddening distributions of microlensed and unlensed stars. The distribution for unlensed stars (upper panels) hardly depends on the parameters of the extra component, $`\mathrm{\Sigma }_{\mathrm{extra}}`$ and $`D_{\mathrm{extra}}`$. It is only sensitive to the relative thickness of the stellar disk and the dust layer. At one extreme, we have a thin dust layer and a thicker stellar disk ($`W/w=2`$, solid lines). We see the familiar double-horn structure. This is because most stars in a thick disk are either in front of the dust layer and free from reddening, or behind the layer and reddened by the maximum amount. At the opposite extreme, if the stellar disk is thinner than the dust layer ($`W/w=0.5`$, dash-dot-dot-dot lines), the distribution becomes peaked in the middle, because most stars of the thin disk are in the mid-plane of the dust layer, hence reddened by half of the maximum (back-to-front) value. In between, if the dust layer and the stellar disk have the same scale height and are evenly mixed ($`W/w=1`$, dashed lines), we have a flat top-hat distribution. The distributions for the microlensed stars depend on the location and the amount of the extra component. For halo-lensing models, the distributions of lensed stars and unlensed stars are virtually indistinguishable (cf. Fig. 2a). The situation is largely unchanged if we move all lenses from the halo to 10 kpc in front of the LMC disk (cf. Fig. 2b). The extra component shows up merely as a marginal excess of low-reddening stars among the unlensed stars, a small effect which might escape detection. The situation is drastically different if we put even a small amount ($`\mathrm{\Sigma }_{\mathrm{extra}}=10M_{\odot }\mathrm{pc}^{-2}`$, or 2.5%) of stars behind the LMC disk (Fig. 2c). The distributions are strongly skewed towards high reddening. This is because lensing favors source stars well behind the lenses. More specifically, if the lenses are machos midway to the LMC, or stars some 10 kpc in front of the LMC, then the probability of drawing a source star at 100 pc behind the dust layer is only about 0.8% or 2% higher than that of drawing one at 100 pc in front of the dust layer (cf. eq. 18). So on average, the lensed sources should be more reddened than the unlensed ones by merely a few percent. In contrast, if the lenses are at the mid-plane of the LMC, then nearly all sources will be behind the dust layer, and hence the reddening rank $`\kappa _{\mathrm{obs}}`$ increases by one (cf. eq. 8 and eq. 10). In Fig. 2c there are still some source stars at low reddening because we allow for machos in the foreground. The skewness effect becomes stronger as we decrease the macho fraction and increase the extra stars behind the LMC. For $`\mathrm{\Sigma }_{\mathrm{extra}}=40M_{\odot }\mathrm{pc}^{-2}`$ and $`D_{\mathrm{extra}}=60`$ kpc, all lensed sources should have an observed reddening rank $`\kappa _{\mathrm{obs}}\approx 2`$. In this case $`f_{\mathrm{macho}}\approx 0`$ and all lenses come from the LMC disk. Another way to quantify the above effects is to look at the observed optical depth as a function of the reddening rank of the stars (cf. Fig. 3a). The plateau of the $`\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}})`$ curves in the range $`0<\kappa _{\mathrm{obs}}\le 1`$ is for stars embedded in the dust layer of the LMC.
They are slightly closer to us than the stars at the mid-plane, for which $`\kappa _{\mathrm{obs}}=1\pm 3\sigma `$, but are virtually at the same distances $`D_{\mathrm{LMC}}\pm w/2`$. The plateau also sets an upper limit on the optical depth of machos (cf. eq. 20). On the other hand, stars with $`\kappa _{\mathrm{obs}}\gtrsim 2`$ include both stars well behind the LMC disk, which have a very high optical depth (cf. the curve of $`\tau (D)`$ in Fig. 1), and any Gaussian tail of the LMC disk stars due to the random error of the reddening. So $`\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}})`$ can have a high tail if the extra component behind the LMC is stronger than the Gaussian tail due to a non-zero dispersion $`\sigma `$. These results depend somewhat on the reddening error and the relative thickness of the stellar disk and the dust layer (cf. Fig. 3b). A larger measurement error smoothes the sharp transition from the plateau to the high tail. The tail is lower if the stellar disk is thicker than the dust layer, because of more overlap in the observed reddening of the stars in the LMC disk and those well behind the LMC. Fig. 4 shows that in general a strong excess in reddening is seen in models with most of the lenses in the LMC disk and the sources behind the mid-plane of the LMC disk. Models with a high macho fraction $`f_{\mathrm{macho}}\gtrsim 0.4`$ and/or with lenses being extra stars a few kpc in front of the LMC disk predict only a modest amount of excess, $`\xi _{\mathrm{obs}}\lesssim 0.1`$. So these models could be ruled out if we measure a strong excess. In particular, if $`\xi _{\mathrm{obs}}\gtrsim 0.5`$ then the dark halo cannot have more than 15% in machos. On the other hand, if the excess is small, then the interpretation remains non-unique. These results depend only slightly on the thickness of the dust layer, with a thinner dust layer predicting a stronger excess. Given accurate reddenings, it is possible to measure, or at least set an upper limit on, the fraction of dark matter in machos.

### 3.3 Empirical relations

We have also found a few empirical relations for converting the reddening excess $`\xi _{\mathrm{obs}}`$ to the macho fraction $`f_{\mathrm{macho}}`$. Assuming that there are few stellar lenses in the immediate foreground of the LMC disk, which would be the case if $`0.8\gtrsim \xi _{\mathrm{obs}}\gtrsim 0.2`$, then to a good approximation the fraction $`f_{*}`$ of microlensing due to LMC stars is given by

$$f_{*}\approx 1.2\xi _{\mathrm{obs}}.$$ (25)

The fraction of machos is related to $`\xi _{\mathrm{obs}}`$ by

$$f_{\mathrm{macho}}\approx 0.5(1-1.2\xi _{\mathrm{obs}})$$ (26)

(cf. eq. 22 and 21). Here we have adopted the presently reported value for the microlensing optical depth to the LMC, i.e., about half of that of the standard halo. But the results can be rescaled proportionally as more data become available. The result can also be recast as follows,

$$f_{\mathrm{macho}}\approx \frac{\tau _{\mathrm{obs}}(0<\kappa _{\mathrm{obs}}\le 1)}{5\times 10^{-7}}$$ (27)

(cf. eq. 20), where $`\tau _{\mathrm{obs}}(0<\kappa _{\mathrm{obs}}\le 1)`$ is the optical depth of the LMC stars observed with lower-than-average reddening; these are stars sandwiched in a thin layer between the mid-plane of the LMC disk and the near side of the dust layer. The optical depth of the least reddened stars best approximates the optical depth of foreground machos, because it is least contaminated by stellar lenses in the LMC. These relations work best if we can neglect stellar lenses in the immediate foreground of the LMC disk.
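As a worked example of eqs. (25)–(26) (keeping in mind that they are approximate fits and assume few stellar lenses in front of the LMC disk):

```python
def f_macho_from_xi(xi_obs):
    """Empirical relations, eqs. (25)-(26); meant for 0.2 <~ xi_obs <~ 0.8."""
    f_star = 1.2 * xi_obs            # fraction of events due to LMC stars
    return 0.5 * (1.0 - f_star)      # rescales if tau_obs/tau_std != 0.5

for xi in (0.2, 0.5, 0.8):
    print(f"xi_obs = {xi:.1f}  ->  f_macho ~ {f_macho_from_xi(xi):.2f}")
# xi_obs = 0.5 gives f_macho ~ 0.2 from the linear fit; the somewhat
# stronger ~15% limit quoted above comes from the full model grid (Fig. 4)
```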
But interestingly, these results are insensitive to the assumed dust and stellar distributions of the LMC, e.g., the distance of the extra background material, its surface density, and the relative thickness of the dust layer and the stellar disk. We also allow for a realistic amount of measurement error and patchiness of the extinction.

## 4 Summary

In summary, we have studied the effects of the dust layer in the LMC on microlensing events in a wide range of models of the star and dust distributions in the LMC. We propose to bin LMC stars according to their reddening rank $`\kappa _{\mathrm{obs}}`$ (as defined in eq. 8), and to study the observable microlensing optical depth as a function of the reddening rank $`\kappa _{\mathrm{obs}}`$. We find that self-lensing models of the LMC preferentially draw sources from behind the dust layer of the LMC, and hence can be distinguished from macho-lensing models once the reddening by dust is measured. If a low excess in reddening ($`\xi _{\mathrm{obs}}<0.2`$) is observed, then the interpretation would not be unique: the reddening distribution does not distinguish very well between lensing by machos and lensing by stars a few kpc in front of the LMC. The optical depth can be explained equally well by a screen of macho lenses with $`\mathrm{\Sigma }_{\mathrm{extra}}=50M_{\odot }\mathrm{pc}^{-2}`$ at 10 kpc from the Sun, or by a screen of stellar lenses with the same surface density at 10 kpc in front of the LMC disk. As far as the reddening distributions are concerned, these two models are barely distinguishable (cf. Fig. 2ab). On the other hand, the reddening distribution is very sensitive to any stars behind the LMC disk. If 2–3% of the LMC stars are put in an extra component 5–10 kpc behind the LMC, then we can see a markedly different distribution for the microlensed stars (cf. Fig. 2c); this corresponds to a surface density $`\mathrm{\Sigma }_{\mathrm{extra}}=10M_{\odot }\mathrm{pc}^{-2}`$. The signal gets even stronger as we decrease the macho fraction and increase the surface density of the background stars. Our main finding is that, among stars of different reddening rank, the optical depth of the least reddened stars, $`\tau _{\mathrm{obs}}(\kappa _{\mathrm{obs}}\approx 0)`$, is the closest approximation to the optical depth of machos (cf. Fig. 3). The macho fraction $`f_{\mathrm{macho}}`$ also correlates tightly with the excess reddening $`\xi _{\mathrm{obs}}`$ of the microlensed sources (cf. Fig. 4). These results are summarized by the empirical relations in the previous section. An observed high excess in reddening ($`1>\xi _{\mathrm{obs}}>0.2`$) would be a definitive signal for many LMC disk lenses and a low macho fraction. Potentially we can constrain the fraction of Galactic dark matter in machos this way. There are a number of problems in applying the method to observations. (a) The reddening of individual stars is difficult to measure accurately. Reddening can be determined by constructing reddening-free indices with photometry in three or more broad bands, or with low resolution spectroscopy. Typical accuracy is about 0.02 mag in $`E(B-V)`$ with these methods (Harris et al. 1997), which is about 20% of the typical reddening in the LMC (Harris et al. 1997). (b) Stars in the LMC are likely to be crowded in ground-based images. Blended images of stars lead to unphysical colors and spurious reddening. (c) The dust distribution is very clumpy.
Extinction towards OB stars in the LMC disk can easily vary at the factor-of-two level among different patches of the sky separated by $`\sim 1^{\prime }`$ (Harris et al. 1997). (d) The effect of extinction by the dust layer of the Galaxy near the Sun should be included. Oestriecher et al. (1996) show that the Galactic foreground extinction is not entirely smooth. There are dark patches on $`30^{\prime }`$ scales. The clumpiness of the dust, together with the fairly large error of the reddening vector measurable from broad-band photometry, can lead to a large scatter in the relation between reddening and line of sight depth. Zhao (1999c) argues that these problems, particularly the patchiness, can be overcome if we restrict ourselves to the smallest patches of sky around each microlensing source, where the variation is likely at only the 20% level. Spectroscopic observation with the Hubble Space Telescope in the ultraviolet can in principle resolve many faint stars in the small patches around each microlensed star, and measure the reddening accurately. By obtaining the reddening of stars in the present 20–30 microlensing lines of sight we should be able to measure the excess reddening confidently. We should also be able to bin the events in the reddening rank, and study the optical depth as a function of reddening. A null result for any variation of the optical depth would be in favor of a significant baryonic dark component of the Galaxy.
# COOPERATIVE JAHN-TELLER EFFECT ON THE MAGNETIC STRUCTURE OF MANGANESE OXIDES<sup>1</sup>

<sup>1</sup> To appear in the proceedings of the conference “Science and Technology of Magnetic Oxides ’99”, La Jolla, July 5–7, 1999.

## 1 Introduction

The study of manganese oxides has received considerable attention in recent years, both in its theoretical and experimental aspects. From the technological viewpoint, these materials could be used in the preparation of highly sensitive magnetic-field sensors due to their colossal magnetoresistance (CMR) phenomena. In addition, researchers in the condensed matter field have been interested in the rich phase diagram of these materials, originating from the competition and interplay among charge, spin, and orbital degrees of freedom. Obtaining a unified picture of this rich phase diagram is a challenging open problem. A prototype for the theoretical investigation of manganese oxides is the double-exchange (DE) framework, describing the hopping motion of e<sub>g</sub>-electrons ferromagnetically coupled to localized t<sub>2g</sub>-spins. This idea has conceptually explained the appearance of ferromagnetism when holes are doped. In addition, within the one-orbital model, the existence of phase separation has recently been unveiled with the use of modern numerical techniques, leading to a potential explanation of the CMR effect. However, in order to understand the fine details of the phase diagram of manganites, the one-orbital model is not sufficient, since the highly nontrivial A-type spin antiferro (AF) and C-type orbital structures observed experimentally in the undoped material LaMnO<sub>3</sub> cannot be properly addressed in such a simple context. Certainly two-orbital models are needed to describe the nontrivial state of undoped manganites. In this framework the two-band model without phonons has been studied before, and the importance of the strong Coulomb repulsion has been emphasized for the appearance of the A-AF state. This is based upon the belief that the competition between kinetic and strong-correlation effects determines the optimal orbital for the e<sub>g</sub>-electrons, and the lattice will simply be distorted to reproduce such optimal orbitals. However, Coulombic approaches have presented conflicting results regarding the orbital order that coexists with the A-type spin state, with several approaches predicting G-type orbital order, which is not observed in practice. While it is certainly correct that the orbital degrees of freedom play an essential role in the stabilization of the A-AF state, it should be noticed that our understanding is still incomplete. In particular, it is important to remark that the orbital structure is tightly related to the Jahn-Teller (JT) distortion of the MnO<sub>6</sub> octahedron. If each JT distortion occurred independently, the optimal orbitals could be determined by minimizing the kinetic and interaction energy of the e<sub>g</sub>-electrons. However, oxygens are shared between adjacent MnO<sub>6</sub> octahedra, indicating that the JT distortions occur cooperatively. Especially in the undoped situation, all MnO<sub>6</sub> octahedra exhibit the JT distortion, indicating that the cooperative effect is very important, as discussed by Kanamori. Thus, in order to understand the magnetic and orbital structures in LaMnO<sub>3</sub>, it is indispensable to optimize the electron and lattice systems simultaneously.
However, not much effort has been devoted to the microscopic treatment of the cooperative effect, although the JT effect in the manganese oxides has been studied by several groups. Thus, in the present work, a careful investigation of this problem is performed with numerical techniques, focusing on $`n=1`$, where $`n`$ is the electron number per site. In this paper, the optimal oxygen positions are determined by a relaxation technique to obtain the lattice distortions corresponding to several t<sub>2g</sub>-spin magnetic structures. In addition, Monte Carlo (MC) simulations were also performed to investigate the spin and orbital structure without a priori assumptions about their order. It is found that the A-AF state, as well as the C-type orbital structure, occurs in realistic parameter regions for LaMnO<sub>3</sub>, i.e., large Hund coupling between the e<sub>g</sub>-electron and t<sub>2g</sub>-spin, small AF interaction between t<sub>2g</sub>-spins, and strong electron-lattice coupling. It should be emphasized that our results are obtained without the Coulomb interaction. It is shown in a simple case that the optimized results are essentially unchanged even if the Coulomb interaction is included explicitly in the model. The organization of this paper is as follows. Section 2 is devoted to the formulation including the cooperative effect in the two-orbital model tightly coupled to the JT distortion, and some technical points are briefly discussed. In Sec. 3, the results on the magnetic and orbital structures are provided, and it is shown that the region of the A-AF state in the magnetic phase diagram is reasonable for LaMnO<sub>3</sub>, since the couplings needed agree with experiments. In Sec. 4, a prescription to obtain the C-type orbital order with the alternation of $`3x^2-r^2`$ and $`3y^2-r^2`$ orbitals is provided. Finally, in Sec. 5, the effect of the Coulomb interaction on the lattice distortion is discussed in the ferromagnetic state. Throughout this paper, units such that $`\mathrm{}=k_\mathrm{B}=1`$ are used.

## 2 Formulation

### 2.1 Hamiltonian

Let us consider the motion of e<sub>g</sub>-electrons tightly coupled to the localized t<sub>2g</sub>-spins and the local distortions of the MnO<sub>6</sub> octahedra. This situation is well described by

$$H=H_{2\mathrm{orb}}+H_{\mathrm{AFM}}+H_{\mathrm{el}\mathrm{ph}}+H_{\mathrm{el}\mathrm{el}}.$$ (1)

Here the first term is the two-orbital Hamiltonian, given by

$$H_{2\mathrm{orb}}=-\sum _{𝐢𝐚\gamma \gamma ^{\prime }\sigma }t_{\gamma \gamma ^{\prime }}^𝐚c_{𝐢\gamma \sigma }^{\dagger }c_{𝐢+𝐚\gamma ^{\prime }\sigma }-J_\mathrm{H}\sum _{𝐢\gamma \sigma \sigma ^{\prime }}𝐒_𝐢\cdot c_{𝐢\gamma \sigma }^{\dagger }\sigma _{\sigma \sigma ^{\prime }}c_{𝐢\gamma \sigma ^{\prime }},$$ (2)

where $`c_{𝐢\mathrm{a}\sigma }`$ ($`c_{𝐢\mathrm{b}\sigma }`$) is the annihilation operator for an e<sub>g</sub>-electron with spin $`\sigma `$ in the $`d_{x^2-y^2}`$ ($`d_{3z^2-r^2}`$) orbital at site $`𝐢`$. The vector connecting nearest-neighbor sites is $`𝐚`$, $`t_{\gamma \gamma ^{\prime }}^𝐚`$ is the hopping amplitude between $`\gamma `$- and $`\gamma ^{\prime }`$-orbitals connecting nearest-neighbor sites along the $`𝐚`$-direction via the oxygen 2$`p`$-orbital, $`J_\mathrm{H}`$ is the Hund coupling, $`𝐒_𝐢`$ is the localized classical t<sub>2g</sub>-spin normalized to $`|𝐒_𝐢|=1`$, and $`\sigma =(\sigma _1,\sigma _2,\sigma _3)`$ are the Pauli matrices.
The second term is needed to account for the AFM character of the manganese oxides, and is given by

$$H_{\mathrm{AFM}}=J^{\prime }\sum _{\langle 𝐢,𝐣\rangle }𝐒_𝐢\cdot 𝐒_𝐣,$$ (3)

where $`J^{\prime }`$ is the AF coupling between nearest-neighbor t<sub>2g</sub>-spins. In the third term, the coupling of the e<sub>g</sub>-electrons to the distortion of the MnO<sub>6</sub> octahedron is considered:

$$H_{\mathrm{el}\mathrm{ph}}=g\sum _{𝐢\sigma \gamma \gamma ^{\prime }}c_{𝐢\gamma \sigma }^{\dagger }(Q_{1𝐢}\sigma _0+Q_{2𝐢}\sigma _1+Q_{3𝐢}\sigma _3)_{\gamma \gamma ^{\prime }}c_{𝐢\gamma ^{\prime }\sigma }+(1/2)\sum _𝐢[k_{\mathrm{br}}Q_{1𝐢}^2+k_{\mathrm{JT}}(Q_{2𝐢}^2+Q_{3𝐢}^2)],$$ (4)

where $`g`$ is the electron-phonon coupling constant, $`Q_{1𝐢}`$ denotes the distortion for the breathing mode of the MnO<sub>6</sub> octahedron, $`Q_{2𝐢}`$ and $`Q_{3𝐢}`$ are, respectively, the JT distortions for the $`(x^2-y^2)`$- and $`(3z^2-r^2)`$-type modes, and $`\sigma _0`$ is the $`2\times 2`$ unit matrix. The spring constants for the breathing and JT modes are denoted by $`k_{\mathrm{br}}`$ and $`k_{\mathrm{JT}}`$, respectively. The final term contains the Coulomb interactions between e<sub>g</sub>-electrons. As mentioned in Sec. 1, since only the JT phonons are investigated here, the Coulomb interactions are neglected. This point will be discussed later in the text (Sec. 5), where the explicit form of $`H_{\mathrm{el}\mathrm{el}}`$ is provided and briefly analyzed.

### 2.2 Lattice distortion

As shown in Fig. 1(a), oxygens are shared between adjacent octahedra, indicating that the local lattice distortions cannot be treated independently, and a cooperative analysis is needed for this problem. For this purpose, the normal coordinates for the distortions of the MnO<sub>6</sub> octahedron, shown in Fig. 1(b), are written as

$$Q_{1𝐢}=(1/\sqrt{3})(\mathrm{\Delta }_{𝐱𝐢}+\mathrm{\Delta }_{𝐲𝐢}+\mathrm{\Delta }_{𝐳𝐢}),$$ (5)

for the breathing mode, and

$$Q_{2𝐢}=(1/\sqrt{2})(\mathrm{\Delta }_{𝐱𝐢}-\mathrm{\Delta }_{𝐲𝐢}),$$ (6)

$$Q_{3𝐢}=(1/\sqrt{6})(2\mathrm{\Delta }_{𝐳𝐢}-\mathrm{\Delta }_{𝐱𝐢}-\mathrm{\Delta }_{𝐲𝐢}),$$ (7)

for the JT modes, where $`\mathrm{\Delta }_{𝐚𝐢}`$ is given by

$$\mathrm{\Delta }_{𝐚𝐢}=\mathrm{\Delta }_𝐚+\delta _{𝐚𝐢}.$$ (8)

The first term indicates the deviation from the cubic lattice, given by $`\mathrm{\Delta }_𝐚=L_𝐚-L`$, where $`L_𝐚`$ is the distance between Mn ions along the $`𝐚`$-axis and $`L=(L_𝐱+L_𝐲+L_𝐳)/3`$. The second term is the contribution from the shift of the oxygen positions, expressed as $`\delta _{𝐚𝐢}=u_𝐢^𝐚-u_{𝐢-𝐚}^𝐚`$, where $`u_𝐢^𝐚`$ is the deviation of the oxygen from its equilibrium position along the Mn-Mn bond in the $`𝐚`$-direction. By this construction, the cooperative JT distortion as well as the macroscopic lattice deformation is reasonably taken into account. Note that the buckling and rotational modes of the MnO<sub>6</sub> octahedron are not explicitly included in this work. In general, $`L_𝐚`$ can be different for each direction, depending on the bulk properties of the lattice. Since the present work focuses on the microscopic mechanism for the A-AF formation in LaMnO<sub>3</sub>, the undistorted lattice with $`L_𝐱=L_𝐲=L_𝐳`$ is treated first, and then corrections will be added.
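To illustrate how the shared oxygens make the distortions cooperative, the sketch below builds $`Q_1`$, $`Q_2`$, $`Q_3`$ of eqs. (5)–(7) from a periodic field of oxygen displacements via eq. (8). It is a schematic transcription; the array layout (one displacement array per bond direction, one entry per Mn site) is our own assumption.

```python
import numpy as np

def delta(u, axis):
    """delta_{a,i} = u_i^a - u_{i-a}^a of eq. (8) on a periodic lattice;
    u holds the shift of the oxygen on the +a side of each Mn site."""
    return u - np.roll(u, 1, axis=axis)

def octahedron_modes(ux, uy, uz, Delta=(0.0, 0.0, 0.0)):
    """Normal coordinates of eqs. (5)-(7); Delta is the uniform part of eq. (8).
    Because each oxygen shift enters two neighboring octahedra (via np.roll),
    the local distortions are automatically coupled, i.e., cooperative."""
    dx = Delta[0] + delta(ux, 0)
    dy = Delta[1] + delta(uy, 1)
    dz = Delta[2] + delta(uz, 2)
    q1 = (dx + dy + dz) / np.sqrt(3.0)         # breathing mode
    q2 = (dx - dy) / np.sqrt(2.0)              # (x^2-y^2)-type JT mode
    q3 = (2.0 * dz - dx - dy) / np.sqrt(6.0)   # (3z^2-r^2)-type JT mode
    return q1, q2, q3
```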
### 2.3 Hopping amplitudes and energy scale

In the cubic undistorted lattice, the hopping amplitudes are given by

$$t_{\mathrm{aa}}^𝐱=-\sqrt{3}t_{\mathrm{ab}}^𝐱=-\sqrt{3}t_{\mathrm{ba}}^𝐱=3t_{\mathrm{bb}}^𝐱=t,$$ (9)

for the $`𝐱`$-direction,

$$t_{\mathrm{aa}}^𝐲=\sqrt{3}t_{\mathrm{ab}}^𝐲=\sqrt{3}t_{\mathrm{ba}}^𝐲=3t_{\mathrm{bb}}^𝐲=t,$$ (10)

for the $`𝐲`$-direction, and

$$t_{\mathrm{bb}}^𝐳=4t/3,\qquad t_{\mathrm{aa}}^𝐳=t_{\mathrm{ab}}^𝐳=t_{\mathrm{ba}}^𝐳=0,$$ (11)

for the $`𝐳`$-direction. Throughout this paper, the energy unit is $`t`$. Corresponding to this choice of the energy unit, lengths in the lattice distortion are scaled by $`\sqrt{t/k_{\mathrm{JT}}}`$. As a result of this scaling, a non-dimensional electron-phonon coupling constant $`\lambda `$ is defined as

$$\lambda =g/\sqrt{k_{\mathrm{JT}}t}.$$ (12)

Note that this coupling constant can be related to the static JT energy, conventionally defined as $`E_{\mathrm{JT}}=g^2/(2k_{\mathrm{JT}})`$, via

$$E_{\mathrm{JT}}=t\lambda ^2/2.$$ (13)

Note also that the present length scale can be rewritten as

$$\sqrt{t/k_{\mathrm{JT}}}=\mathrm{}_{\mathrm{JT}}/\lambda ,$$ (14)

where $`\mathrm{}_{\mathrm{JT}}=g/k_{\mathrm{JT}}`$ is the characteristic length of the JT distortion. From experimental results, $`\mathrm{}_{\mathrm{JT}}`$ is estimated as $`\sim 0.3`$ Å, which is a typical length in this context. As for the spring constant of the breathing mode, it is expressed as $`k_{\mathrm{br}}=\beta k_{\mathrm{JT}}`$, and the ratio $`\beta `$ is treated as a parameter. If it is plausibly assumed that the reduced masses for those modes are equal, this ratio is given by $`\beta =(\omega _{\mathrm{br}}/\omega _{\mathrm{JT}})^2`$, where $`\omega _{\mathrm{br}}`$ and $`\omega _{\mathrm{JT}}`$ are the vibration energies of the manganite breathing and JT modes, respectively. From experimental results and band-calculation data, $`\omega _{\mathrm{br}}`$ and $`\omega _{\mathrm{JT}}`$ are estimated as $`\sim 700`$ cm<sup>-1</sup> and $`\sim 500`$–$`600`$ cm<sup>-1</sup>, respectively. Then, throughout this work, $`\beta `$ is taken as 2, although the results presented here are basically unchanged as long as $`\beta `$ is larger than unity. In this work, the change in $`t`$ due to the displacement of the oxygen position is not taken into account, but such an effect can be shown to be very small, as follows. According to pseudo-potential theory, the exact hopping amplitude, for example, along the $`𝐱`$-direction between a-orbitals at sites $`𝐢`$ and $`𝐢+𝐱`$, is expressed as

$$t_{\mathrm{aa}}^𝐱=\frac{t}{(1-ϵ^2)^{7/2}},$$ (15)

with $`ϵ=|2u_𝐢^𝐱|/L_𝐱`$. It should be remarked that the change in $`t_{\mathrm{aa}}^𝐱`$ due to the oxygen shift is of the order of $`ϵ^2`$, not of the order of $`ϵ`$. Since $`ϵ`$ is estimated to be at most a few percent, the change is considered negligible. Thus, in the present work, such a change is not included, to avoid unnecessary complication in the calculation. However, when the distorted lattice is considered, namely, when the deviation from $`L_𝐱=L_𝐲=L_𝐳`$ is taken into account, the change in the hopping matrix due to this distortion should be included, because the effect is then of the order of $`\mathrm{\Delta }_𝐚/L`$, not of the order of $`(\mathrm{\Delta }_𝐚/L)^2`$. This point will be discussed again in Sec. 4.
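The hopping amplitudes of eqs. (9)–(11) can be collected into $`2\times 2`$ matrices in the (a, b) orbital basis; the fragment below is a direct transcription, useful mainly to emphasize the sign change of the off-diagonal element between the $`𝐱`$- and $`𝐲`$-directions.

```python
import numpy as np

t = 1.0  # energy unit
# hopping matrices in the (d_{x^2-y^2}, d_{3z^2-r^2}) basis, eqs. (9)-(11)
T = {
    'x': t * np.array([[1.0, -1.0 / np.sqrt(3)],
                       [-1.0 / np.sqrt(3), 1.0 / 3.0]]),
    'y': t * np.array([[1.0, +1.0 / np.sqrt(3)],
                       [+1.0 / np.sqrt(3), 1.0 / 3.0]]),
    'z': t * np.array([[0.0, 0.0],
                       [0.0, 4.0 / 3.0]]),
}
# sanity check: each matrix is symmetric, as required by hermiticity
assert all(np.allclose(m, m.T) for m in T.values())
```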
### 2.4 Techniques

To study the Hamiltonian Eq. (1) without $`H_{\mathrm{el}\mathrm{el}}`$, two numerical techniques have been applied. One is the relaxation technique, in which the optimal positions of the oxygens are determined by minimizing the total energy. In this calculation, only the stretching mode of the octahedron is taken into account. Moreover, the relaxation has been performed for fixed structures of the t<sub>2g</sub>-spins such as ferro (F), A-type AF (A-AF), C-type AF (C-AF), and G-type AF (G-AF), shown in Fig. 2. The advantage of this method is that the optimal orbital structure can be rapidly obtained on small clusters. However, the assumptions involved in the relaxation procedure should be checked with an independent method. Such a check is performed with the unbiased MC simulations used before for one- and two-dimensional clusters with non-cooperative JT phonons. The dominant magnetic and orbital structures are deduced from correlation functions. In the MC method, the clusters currently reachable are $`2\times 2\times 2`$, $`4\times 4\times 2`$, and $`4\times 4\times 4`$. In spite of this size limitation, arising from the large number of degrees of freedom in the problem, the available clusters are sufficient for our mostly qualitative purposes. In addition, the remarkable agreement between the MC and relaxation methods leads us to believe that our results are representative of the bulk limit.

## 3 Results

### 3.1 Magnetic structure

In Fig. 3, the mean energy is presented as a function of $`J^{\prime }`$ for $`J_\mathrm{H}=8`$ and $`\lambda =1.5`$, on a $`2\times 2\times 2`$ cluster with open boundary conditions. The solid lines and symbols indicate the results obtained with the relaxation technique and the MC simulations, respectively. The agreement is excellent, showing that the relaxation method is accurate. The small deviations between the results of the two techniques are caused by temperature effects. As intuitively expected, with increasing $`J^{\prime }`$ the optimal magnetic structure changes from ferro- to antiferromagnetic, and this occurs in the order F$`\rightarrow `$A-AF$`\rightarrow `$C-AF$`\rightarrow `$G-AF. To check size effects, the t<sub>2g</sub>-spin correlation function $`S(𝐪)`$ was also calculated on $`4\times 4\times 2`$ and $`4\times 4\times 4`$ clusters, where

$$S(𝐪)=(1/N)\sum _{𝐢,𝐣}e^{i𝐪\cdot (𝐢-𝐣)}\langle 𝐒_𝐢\cdot 𝐒_𝐣\rangle .$$ (16)

Here $`N`$ is the number of sites and $`\langle \mathrm{}\rangle `$ indicates the thermal average. As shown in Fig. 4, with increasing $`J^{\prime }`$ the dominant correlation changes in the order $`𝐪=(0,0,0)`$, $`(\pi ,0,0)`$, $`(\pi ,\pi ,0)`$, and $`(\pi ,\pi ,\pi )`$. The values of $`J^{\prime }`$ at which the spin structure changes agree well with those in Fig. 3.
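For completeness, a minimal transcription of eq. (16) for a single configuration of classical spins (a real measurement would average this over MC configurations; the small G-AF test at the end is our own consistency check):

```python
import numpy as np

def spin_structure_factor(q, spins, sites):
    """S(q) of eq. (16) for one spin configuration.
    spins: (N,3) unit vectors; sites: (N,3) lattice coordinates."""
    phase = np.exp(1j * sites @ q)                    # e^{i q . r_i}
    corr = spins @ spins.T                            # S_i . S_j
    return np.real(np.conj(phase) @ corr @ phase) / len(spins)

# check: G-type AF on a 2x2x2 cube peaks at q = (pi,pi,pi) with S(q) = N
sites = np.array([[i, j, k] for i in range(2)
                  for j in range(2) for k in range(2)], dtype=float)
spins = np.array([[0.0, 0.0, (-1.0) ** int(s.sum())] for s in sites])
print(spin_structure_factor(np.array([np.pi] * 3), spins, sites))  # -> 8.0
```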
### 3.2 Orbital structure

The shapes of the occupied orbitals in the lowest-energy arrangement for the F, A-AF, C-AF, and G-AF magnetic structures are shown in Fig. 5. For the F case, the G-type orbital structure is naively expected, because it is believed that the ferromagnetic spin structure is favored by an AF orbital configuration. However, a more complicated orbital structure is stabilized in the actual calculation, indicating the importance of the cooperative treatment of the JT phonons. For the A-AF state, only the C-type structure is depicted in Fig. 5, but the G-type structure, obtained by a $`\pi `$/2-rotation of the upper $`x`$-$`y`$ plane of the C-type state, was found to have exactly the same energy. Small corrections will remove this degeneracy in favor of the C-type, as described in the next section. For C- and G-AF, the obtained orbital structures are G- and C-type, respectively. Note that there exists an additional triplet degeneracy, due to the cubic symmetry, for each magnetic structure: if the axes are changed cyclically ($`x\rightarrow y,y\rightarrow z,z\rightarrow x`$), the optimized orbital structure is also transformed by this cyclic change, but the energy is invariant. Thus, the magnetic and orbital structure in LaMnO<sub>3</sub> occurs through a spontaneous symmetry-breaking process. Although the same change of the magnetic structure with $`J^{\prime }`$ was already reported for the electronic model with purely Coulomb interactions, the orbital structures in those previous calculations were G-, G-, A-, and A-type for the F, A-AF, C-AF, and G-AF spin states, respectively. Note that for the A-AF state, of relevance for the undoped manganites, the G-type order was obtained, although in another treatment of the Coulomb interaction the C- and G-type structures were found to be degenerate, as in our calculation. Thus, the stabilization in experiments of the C-type orbital structure is still puzzling in both the JT and Coulomb mechanisms. This point will be discussed later in the text.

### 3.3 Magnetic phase diagram

In Figs. 6(a) and (b), the phase diagrams in the $`(J^{\prime },\lambda )`$-plane are shown for $`J_\mathrm{H}=4`$ and 8, respectively. The curves are drawn using the relaxation method. As expected, the F region becomes wider with increasing $`J_\mathrm{H}`$. When $`\lambda `$ is increased at fixed $`J_\mathrm{H}`$, the magnetic structure changes as F$`\rightarrow `$A-AF$`\rightarrow `$C-AF$`\rightarrow `$G-AF. This tendency is qualitatively understood by considering the two-site problem in the limit $`J_\mathrm{H}\gg 1`$ and $`E_{\mathrm{JT}}\gg 1`$. The energy gain due to the second-order hopping process of the e<sub>g</sub>-electrons is roughly $`\delta E_{\mathrm{AF}}\sim 1/J_\mathrm{H}`$ and $`\delta E_\mathrm{F}\sim 1/E_{\mathrm{JT}}`$ for AF and F spin pairs, respectively. Increasing $`E_{\mathrm{JT}}`$, $`\delta E_\mathrm{F}`$ decreases, indicating the relative stabilization of the AF phase. In our phase diagram, the A-AF phase appears for $`\lambda \gtrsim 1.1`$ and $`J^{\prime }\lesssim 0.15`$. This region does not depend much on $`J_\mathrm{H}`$, as long as $`J_\mathrm{H}\gg 1`$. Although $`\lambda `$ seems large, it is realistic from an experimental viewpoint: $`E_{\mathrm{JT}}`$ is $`\sim 0.25`$ eV from photoemission experiments and $`t`$ is estimated as 0.2–0.5 eV, leading to $`1\lesssim \lambda \lesssim 1.6`$. As for $`J^{\prime }`$, it is estimated as $`0.02\lesssim J^{\prime }\lesssim 0.1`$. Thus, the location in parameter space of the A-AF state found here is reasonable when compared with experimental results for LaMnO<sub>3</sub>.
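The quoted $`\lambda `$ range follows from inverting eq. (13); the arithmetic is trivial but worth writing down:

```python
import numpy as np

E_JT = 0.25                 # eV, static JT energy from photoemission
for t_eV in (0.5, 0.2):     # eV, quoted range of the hopping amplitude
    lam = np.sqrt(2.0 * E_JT / t_eV)      # eq. (13) inverted
    print(f"t = {t_eV} eV  ->  lambda = {lam:.2f}")
# yields lambda ~ 1.0-1.6, the range quoted in the text
```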
## 4 Orbital order in the A-AF phase

Let us now focus on the orbital structure in the A-AF phase. In the cubic lattice studied thus far, the C- and G-type orbital structures are degenerate, and it is unclear whether the orbital pattern in the $`x`$-$`y`$ plane corresponds to the alternation of $`3x^2-r^2`$ and $`3y^2-r^2`$ orbitals observed in experiments. To remedy the situation, some empirical facts observed in manganites become important: (i) the MnO<sub>6</sub> octahedra are slightly tilted from each other, leading to modifications in the hopping matrix; among these modifications, the generation of a non-zero value for $`t_{\mathrm{aa}}^𝐳`$ is important. (ii) The lattice is not cubic, but the relation $`L_{\mathrm{in}}>L_{\mathrm{out}}`$ holds, where $`L_{\mathrm{in}}=L_𝐱=L_𝐲`$ and $`L_{\mathrm{out}}=L_𝐳`$. From experimental results, these lengths are estimated as $`L_{\mathrm{in}}=4.12`$ Å and $`L_{\mathrm{out}}=3.92`$ Å, indicating that a distortion with $`Q_3`$-symmetry occurs spontaneously. For the inclusion of point (ii), $`Q_{3𝐢}`$ in Eq. (7) is rewritten as

$$Q_{3𝐢}=Q_3^{(0)}+(1/\sqrt{6})(2\delta _{𝐳𝐢}-\delta _{𝐱𝐢}-\delta _{𝐲𝐢}),$$ (17)

where $`Q_3^{(0)}`$ indicates the spontaneous distortion with $`Q_3`$-symmetry, given by

$$Q_3^{(0)}=\sqrt{2/3}(L_{\mathrm{out}}-L_{\mathrm{in}}).$$ (18)

This length is rewritten in non-dimensional form as $`Q_3^{(0)}=-\eta \lambda `$, where $`\eta `$ is a numerical factor given by $`\eta =\sqrt{2/3}(L_{\mathrm{in}}-L_{\mathrm{out}})/\mathrm{}_{\mathrm{JT}}`$, estimated as $`\sim 0.5`$ using the experimental data. Note that the hopping amplitude along the $`𝐳`$-direction becomes different from those in the $`x`$-$`y`$ plane due to this distortion. As discussed briefly in subsection 2.3, it is obtained as $`t_{\mathrm{bb}}^𝐳=(4t/3)(L_{\mathrm{in}}/L_{\mathrm{out}})^7`$. As for $`J^{\prime }`$ along the $`𝐳`$-direction, it is given by $`J^{\prime }(L_{\mathrm{in}}/L_{\mathrm{out}})^{14}`$, since the superexchange interaction is proportional to the square of the hopping amplitude. Motivated by these observations, the energies of the C- and G-type orbital structures were recalculated, including this time a nonzero value for $`t_{\mathrm{aa}}^𝐳`$, in the magnetic A-AF state (see Fig. 7(a)). In Fig. 8, the configuration of $`x^2-y^2`$ orbitals is depicted. From this figure, it is intuitively understood that if the $`x^2-y^2`$ orbitals in the upper and lower planes are tilted from the $`x`$-$`y`$ plane, as shown by the arrows in the figure, a finite hopping integral appears between adjacent $`x^2-y^2`$ orbitals along the $`z`$-direction, and the sign of this hopping amplitude is the same as that of $`t_{\mathrm{aa}}^𝐱`$ or $`t_{\mathrm{aa}}^𝐲`$. Thus, in the real material, the tilting of the MnO<sub>6</sub> octahedra will always lead to a positive value for $`t_{\mathrm{aa}}^𝐳`$, and the results of Fig. 7(a) suggest that the C-type orbital structure should be stabilized in the real materials. The explicit shape of the occupied orbitals is shown in Fig. 7(b). The experimentally relevant C-type structure, with the approximate alternation of $`3x^2-r^2`$ and $`3y^2-r^2`$ orbitals, is indeed successfully obtained by this procedure. Although the octahedron tilting actually leads to a change of all hopping amplitudes, an effect not included in this work, the present analysis is sufficient to show that the C-type orbital structure is stabilized in the A-AF magnetic phase when $`t_{\mathrm{aa}}^𝐳`$ is a small positive number, as occurs in the real materials. Our investigations show that this mechanism for stabilizing the C-type structure also works for the purely electronic model in the Hartree-Fock approximation.

## 5 Discussion and Summary

In this work, the Coulomb interaction term $`H_{\mathrm{el}\mathrm{el}}`$ has been neglected, but this point needs further clarification.
For this purpose, $`H_{\mathrm{el}\mathrm{el}}`$ is written as

$$H_{\mathrm{el}\mathrm{el}}=U\sum _{𝐢\gamma }n_{𝐢\gamma \uparrow }n_{𝐢\gamma \downarrow }+U^{\prime }\sum _{𝐢\sigma \sigma ^{\prime }}n_{𝐢\mathrm{a}\sigma }n_{𝐢\mathrm{b}\sigma ^{\prime }}+J\sum _{𝐢\sigma \sigma ^{\prime }}c_{𝐢\mathrm{a}\sigma }^{\dagger }c_{𝐢\mathrm{b}\sigma ^{\prime }}^{\dagger }c_{𝐢\mathrm{a}\sigma ^{\prime }}c_{𝐢\mathrm{b}\sigma },$$ (19)

where $`U`$ is the intra-orbital Coulomb interaction, $`U^{\prime }`$ the inter-orbital Coulomb interaction, and $`J`$ the inter-orbital exchange interaction. For the Mn oxides, they are estimated as $`U=7`$ eV, $`J=2`$ eV, and $`U^{\prime }=5`$ eV, which are large compared to $`t`$. However, the result for the optimized distortion described in this paper, obtained without the Coulomb interactions, is not expected to change, since the energy gain due to the JT distortion is maximized when a single e<sub>g</sub>-electron is present per site. This is essentially the same effect as that produced by a short-range repulsion. In fact, the MC simulations show that the probability of double occupancy of a single orbital is negligible in the window of couplings where the A-type spin and C-type orbital state is stable. In order to confirm the above statement, the JT and breathing distortions were calculated as a function of $`U^{\prime }`$ by using the Exact Diagonalization method on a $`2\times 2`$ cluster in the F state, in which $`U`$ and $`J`$ can be neglected. The result is shown in Fig. 9, where $`Q_{\mathrm{JT}}`$ and $`Q_{\mathrm{br}}`$ are defined as

$$Q_{\mathrm{JT}}=(1/N)\sum _𝐢\sqrt{Q_{2𝐢}^2+Q_{3𝐢}^2},$$ (20)

and

$$Q_{\mathrm{br}}=(1/N)\sum _𝐢|Q_{1𝐢}|,$$ (21)

respectively. As expected, the mean value of the breathing-mode distortion is almost zero, and only the JT mode is active in the case of $`\beta =2`$. Note that the dependence of $`Q_{\mathrm{JT}}`$ on $`U^{\prime }`$ is very weak, indicating that the optimized distortion is not affected by the Coulomb interaction. The orbital arrangement in this $`2\times 2`$ lattice is exactly the same as that in the $`x`$-$`y`$ plane of the orbital structure for the A-AF phase in Fig. 5, and this arrangement is unchanged by the inclusion of $`U^{\prime }`$. Note also that $`Q_{\mathrm{JT}}`$ gradually increases with increasing $`U^{\prime }`$, although the dependence is weak. This suggests that the JT distortion obtained without $`U^{\prime }`$ is reproduced at a smaller value of $`\lambda `$ if $`U^{\prime }`$ is included explicitly. Thus, it is expected that the A-AF state will be stabilized at smaller $`\lambda `$, improving the comparison of our results with experiments. Based on all these observations, it is believed that the effect of the Coulomb interaction is not crucial for the appearance of the A-AF state with the proper orbital order. Another way to rationalize this result is that the integration of the JT phonons at large $`\lambda `$ will likely induce Coulombic interactions dynamically. Finally, let us briefly discuss transitions induced by the application of external magnetic fields to undoped manganites. When the A-AF state is stabilized, the energy difference (per site) obtained in our study between the A-AF and F states is about $`t/100`$. As a consequence, magnetic fields of 20–50 T could drive the transition from A-AF to F order, accompanied by a change of the orbital structure, an interesting effect which may be observable in present magnetic-field facilities.
In summary, with the use of numerical techniques at $`n=1`$, it has been shown that the A-AF state is stable in models with JT phonons, using coupling values physically reasonable for LaMnO<sub>3</sub>. Our results indicate that it is not necessary to include large Coulombic interactions in the calculations to reproduce the experimental results for the manganites. Considering the small but important effect of the octahedral tilting in the real materials, the C-type orbital structure (with the alternating pattern of $`3x^2-r^2`$ and $`3y^2-r^2`$ orbitals) has been successfully reproduced for the A-AF phase in this context.

## Acknowledgments

T.H. is grateful to Y. Takada and H. Koizumi for enlightening discussions. T.H. is supported by the Ministry of Education, Science, Sports, and Culture of Japan. E.D. is supported by grant NSF-DMR-9814350.
# Velocity bias in a ΛCDM model

## 1 Introduction

Peculiar velocities of galaxies arise due to the gravitational pull of surrounding overdense regions and therefore reflect the underlying density field. The statistical study of galaxy velocities is important in cosmology since it can be used as a tool to constrain cosmological models. The connection between theoretical predictions and the observed statistics usually requires an additional quantity: the difference between galaxy and dark matter velocities, termed the velocity bias. The situation with predictions of the velocity bias is rather confusing. There is a wide range of estimates of the velocity bias. Values range from strong antibias, with galaxies moving a factor of two slower than the dark matter (Gelb & Bertschinger, 1994; Klypin et al., 1993), to almost no bias (Klypin et al. 1998 (KGKK); Ghigna et al. 1998), to slight positive bias (Diaferio et al., 1998; Okamoto & Habe, 1999). Following Carlberg (1994) and Summers, Davis, & Evrard (1995) we distinguish two forms of the velocity bias. The one-point velocity bias $`b_v`$ is defined as the ratio of the rms velocity of galaxies or galactic tracers to that of the dark matter:

$$b_v=\frac{\sigma _{\mathrm{gal}}}{\sigma _{\mathrm{DM}}},$$ (1)

where the rms velocity $`\sigma `$ is estimated on some scale. Traditionally, this measure of the velocity bias is used for clusters of galaxies. The two-particle or pairwise velocity bias $`b_{v,12}`$ compares the relative velocity dispersions of pairs of objects separated by distance $`r`$:

$$b_{v,12}=\frac{\sigma _{\mathrm{g},\mathrm{g}}(r)}{\sigma _{\mathrm{dm},\mathrm{dm}}(r)}.$$ (2)

The pairwise velocity dispersion (PVD) has often been used to complement the analysis of the two-point spatial correlation function. At small scales, the cosmic virial theorem (Peebles 1980) predicts that the PVD of galaxies should be proportional to the product of the mean density of the universe and the two-point correlation function. The PVD of galaxies has been estimated for the CfA catalog (Davis & Peebles 1983; Zurek et al. 1994; Somerville, Davis & Primack 1997) and recently for the Las Campanas Redshift Survey by Landy, Szalay, & Broadhurst (1998) and Jing, Mo & Börner (1998). The latter two studies gave $`363\pm 44\text{ km/s}`$ and $`570\pm 80\text{ km/s}`$, respectively, for a $`1h^{-1}\text{Mpc}`$ separation. Jing & Börner (1998) show that the discrepancy between these two studies is due to the difference in the treatment of the infall velocities. The value of $`\sigma _{\mathrm{g},\mathrm{g}}`$ as well as the infall velocities depend on which regions (clusters or field) are included in the surveyed sample. The PVD of the dark matter, $`\sigma _{\mathrm{dm},\mathrm{dm}}`$, has also been estimated for a variety of cosmological models (e.g., Davis et al. 1985; Carlberg & Couchman 1989; Carlberg, Couchman & Thomas 1990; Klypin et al. 1993; Colín, Carlberg, & Couchman 1997; Jenkins et al. 1998). If galaxies were a random sample of the mass distribution, we would expect $`\sigma _{\mathrm{g},\mathrm{g}}`$ to be approximately equal to $`\sigma _{\mathrm{dm},\mathrm{dm}}`$. Davis et al. (1985) showed that in this case an $`\mathrm{\Omega }_0=1`$ model with $`\sigma _8=1`$ produces a PVD that is too large compared to observations. Here $`\sigma _8`$ is the rms mass fluctuation estimated with a top-hat window of radius 8 $`h^{-1}`$Mpc. This is an example of a model which needs some kind of bias to be compatible with the observations.
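The two definitions translate into one-liners; the sketch below is our own illustration, with (N,3) arrays of peculiar velocities as the assumed input format, and is meant only to make the distinction between eqs. (1) and (2) explicit.

```python
import numpy as np

def b_v(v_gal, v_dm):
    """One-point velocity bias, eq. (1): ratio of rms velocities of tracers
    and dark matter selected on the same scale; inputs are (N,3) arrays."""
    rms = lambda v: np.sqrt(np.mean(np.sum(v ** 2, axis=1)))
    return rms(v_gal) / rms(v_dm)

def b_v12(sigma_gg, sigma_dmdm):
    """Pairwise velocity bias, eq. (2), from PVDs measured at the same r."""
    return sigma_gg / sigma_dmdm
```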
The notion of the pairwise velocity bias $`b_{v,12}`$ was introduced by Carlberg & Couchman (1989). They found that the dark matter had a PVD a factor of two higher than that of the simulated “galaxies”. In a further analysis, Carlberg, Couchman, & Thomas (1990) suggested that an $`\mathrm{\Omega }_0=1`$ model with $`\sigma _8=1`$ could be made consistent with the available data for $`b_{v,12}\approx 0.5`$ (velocity antibias) and almost no spatial bias. Estimates of the pairwise velocity bias are in the range 0.5–0.8 (Couchman & Carlberg, 1992; Cen & Ostriker, 1992; Gelb & Bertschinger, 1994; Evrard, Summers & Davis, 1994; Colín, Carlberg, & Couchman, 1997; Kauffmann et al., 1998a, b). Differences between the estimates (especially the early ones) can be attributed to some extent to numerical effects (the “overmerging problem”) and to different methods of identifying galaxy tracers. Only recently have $`N`$-body simulations achieved the high dynamic range, in a relatively large region of the universe, necessary for a large number of galaxy-size halos to survive in clusters and groups (e.g., KGKK; Ghigna et al., 1998; Colín et al., 1999). The estimates of the pairwise velocity bias are starting to show a tendency to converge. For example, the results of Kauffmann et al. (1998a, 1998b) for a low-density model with a cosmological constant and the results presented in this paper for the same cosmological model agree reasonably well, in spite of the fact that we use very different methods. Results point systematically to an antibias $`b_{v,12}\approx 0.6`$–$`0.7`$. The one-point velocity bias for clusters and groups of galaxies tells a different story. Values of $`b_v`$ are typically larger than those of $`b_{v,12}`$ and range from 0.7 to 1.1 (Carlberg & Dubinski, 1991; Katz & White, 1993; Carlberg, 1994; Ghigna et al., 1998; Frenk et al., 1996; Metzler & Evrard, 1997; Okamoto & Habe, 1999; Diaferio et al., 1998). Carlberg & Dubinski (1991) suggested that if the pairwise velocity antibias is significant, galaxies in clusters should have orbital velocities lower than the dark matter. However, this may not necessarily be true. In this paper (see also, for example, Kauffmann et al. 1998a) we argue that galaxy tracers do not need to move more slowly in clusters to have a pairwise velocity bias $`b_{v,12}<1`$. In particular, we find that while $`b_{v,12}<1`$ for halos in our study, the halos in many clusters actually move somewhat faster than the dark matter. Ghigna et al. (1998) also do not detect a significant difference between the orbits of DM particles and halos. They find that the cluster radial velocity dispersion of halos is within a few percent of that of the DM particles. Okamoto & Habe (1999) used hundreds of galaxy-size halos in their simulated cluster and were able to compute the halo velocity dispersion profile. Their results suggest that in the range $`0.3\text{ Mpc}\lesssim r\lesssim 0.6\text{ Mpc}`$ halos have a velocity dispersion slightly larger than that of the DM particles. Diaferio et al. (1998), using a technique that combines $`N`$-body simulations and semi-analytic hierarchical galaxy formation modeling, also find that galaxies in clusters have higher orbital velocities than the underlying dark matter field. They suggest that this effect is due to the infall velocities of blue galaxies. We find in this paper a similar effect: galaxy-size halos are “hotter” than the dark matter in clusters. The paper is organized as follows.
In § 2 brief descriptions of the model, simulation, and group-finding algorithm are given. In § 3 the DM and halo PVDs, as well as the corresponding velocity bias, are computed at four epochs. We take a sample of the most massive clusters in our simulation and compute average halo and DM velocity dispersion profiles. A cluster velocity bias is then defined and computed. A discussion of the main results is presented in § 4. The conclusions are given in § 5. ## 2 Model, simulation, halo-finding algorithm We use a flat low-density model ($`\mathrm{\Lambda }`$CDM) with $`\mathrm{\Omega }_0=1-\mathrm{\Omega }_\mathrm{\Lambda }=0.3`$ and $`\sigma _8=1`$. Cluster mass estimates (e.g., Carlberg et al., 1996), the evolution of the abundance of galaxy clusters (e.g., Eke et al., 1998), the baryon fraction in clusters (e.g., Evrard, 1997), and the galaxy tracer two-point correlation function (e.g., Colín et al., 1999; Benson et al., 1999) favor a low-density universe with $`\mathrm{\Omega }_0\approx 0.3`$ (see also Roos & Harun-or-Rashid, 1998). On the other hand, various observational determinations of $`h`$ (the Hubble constant in units of $`100\text{ km s}^{-1}\text{ Mpc}^{-1}`$) converge to values between 0.6 and 0.7. Our model was set to $`h=0.7`$, which gives an age of the universe of 13.4 Gyr, in close agreement with the oldest globular cluster age determinations (Chaboyer, 1998). The approximation for the power spectrum is that given by Klypin & Holtzman (1997). The adopted normalization of the power spectrum is consistent with both the COBE observations and the observed abundance of galaxy clusters. The Adaptive Refinement Tree code (ART; Kravtsov, Klypin & Khokhlov 1997) was used to run the simulation, as described by Colín et al. (1999). The simulation followed the evolution of $`256^3`$ dark matter particles in a 60$`h^{-1}`$Mpc box, which gives a particle mass of $`1.1\times 10^9h^{-1}M_\odot `$. The peak force resolution reached in the simulation is $`2h^{-1}\text{kpc}`$. The mass resolution is sufficient for resolving and identifying galaxy-size halos with at least 30 particles. The force resolution allows halos to survive within regions of very high density (such as those found in groups and clusters of galaxies). In the dense environment of clusters the mass of halos is not well defined. Therefore, we use the maximum circular velocity $$V_{\mathrm{max}}=\left(\frac{GM(<r)}{r}\right)_{\mathrm{max}}^{1/2},$$ (3) where $`M(<r)`$ is the mass of the halo inside radius $`r`$, as a “proxy” for mass. Halos begin to form at very early epochs. For example, at $`z\approx 6`$ we identify $`>3,000`$ halos with maximum circular velocity, $`V_{\mathrm{max}}`$, greater than $`90\text{km/s}`$. The numbers of halos that we find at $`z=3`$, 1, and 0 are 14102, 14513, and 10020, respectively. We use a limit of $`90\text{km/s}`$ on the circular velocity, which is slightly lower than the completeness limit of $`(110-120)\text{km/s}`$ (Colín et al., 1999) of our halo catalog. This $`V_{\mathrm{max}}`$ value increases the number of halos quite substantially (by a factor of two compared with a limit of $`120\text{km/s}`$) and thus reduces the statistical noise. We checked that our main results are only slightly affected by the partial incompleteness of the sample. Our halo identification algorithm, the Bound Density Maxima algorithm (BDM; see KGKK), is described in detail elsewhere (Klypin & Holtzman 1997). The main idea of the BDM algorithm is to find the positions of local maxima in the density field smoothed at the scale of interest ($`20h^{-1}\text{kpc}`$).
BDM applies physically motivated criteria to test whether a group of DM particles is a gravitationally bound halo. The major virtue of the algorithm is that it is capable of finding both isolated halos and halos orbiting within larger dense systems. Cluster-size halos were also located by the BDM algorithm. The physical properties of a sample of the twelve most massive groups and clusters (cluster number 8, in descending order of mass, was excluded from the sample because a nearby group disturbs it too strongly) are shown in Table 1. The total number of clusters chosen for the sample is a compromise between taking a relatively large number of clusters, so that we could talk about cluster average properties, and using clusters with a relatively high number of halos. This cluster sample is used to compute the average DM and halo velocity dispersion profiles as well as the average DM and halo velocity anisotropy profiles. ## 3 Results ### 3.1 The pairwise velocity bias The three-dimensional pairwise velocity dispersion (PVD) is defined as $$\sigma _{3D}^2(r)=\left\langle \mathbf{v}_{12}^2\right\rangle -\left\langle \mathbf{v}_{12}\right\rangle ^2$$ (4) where $`\mathbf{v}_{12}`$ is the relative velocity vector of a pair of objects separated by a distance $`r`$ and the brackets indicate averaging over all pairs with separation $`r`$ (a minimal numerical estimator is sketched below). Figure 1 shows the PVD of the dark matter, $`\sigma _{3D,dm}`$, at four epochs (top panel). At 1 $`h^{-1}`$Mpc the radial PVD is about 1100 km/s at $`z=0`$. For the same cosmological model Jenkins et al. (1998) find a radial PVD of $`910\text{km/s}`$. Jenkins et al. used a slightly lower normalization for the model ($`\sigma _8=0.9`$) and a bigger simulation box ($`L_{box}=141.3h^{-1}\text{Mpc}`$). When the differences in $`\sigma _8`$ are taken into account, the Jenkins et al. value increases to 1,120 km/s. Thus, both estimates roughly agree.
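The following is a minimal pair-count estimator of Eq. (4), included as an illustration only; it is our own sketch (brute force over all pairs, so suitable for at most a few thousand tracers), and it adopts the common convention of measuring the mean streaming term along the pair separation vector, which makes the result independent of the ordering within a pair.

```python
import numpy as np
from itertools import combinations

def pvd_3d(pos, vel, r_edges):
    """Three-dimensional PVD of Eq. (4) in bins of pair separation.
    pos, vel: (N, 3) arrays; r_edges: monotonically increasing bin edges.
    Bins containing no pairs yield NaN."""
    nbins = len(r_edges) - 1
    s_v2 = np.zeros(nbins)   # accumulates |v12|^2
    s_vr = np.zeros(nbins)   # accumulates the radial (streaming) component
    n = np.zeros(nbins)
    for i, j in combinations(range(len(pos)), 2):
        dr = pos[i] - pos[j]
        r = np.linalg.norm(dr)
        k = np.searchsorted(r_edges, r) - 1
        if 0 <= k < nbins:
            v12 = vel[i] - vel[j]
            s_v2[k] += v12 @ v12
            s_vr[k] += v12 @ dr / r   # sign-safe under swapping i and j
            n[k] += 1
    return np.sqrt(s_v2 / n - (s_vr / n) ** 2)
```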
The ratio of the halo and the dark matter PVDs, the pairwise velocity bias $`b_{v,12}`$, is shown in the bottom panel of Figure 1. All halos with $`V_{max}>90\text{km/s}`$ were included in the computation. At very early epochs and on large scales halos tend to move faster than the dark matter. At later epochs the pairwise velocity bias becomes smaller than unity. It is interesting to compare the evolution of $`b_{v,12}`$ with the changes in the spatial bias for the same cosmological model (Colín et al., 1999). The spatial bias is defined as the square root of the ratio of correlation functions $`[\xi _{\mathrm{hh}}(r)/\xi _{\mathrm{dm}}(r)]^{1/2}`$. In general, the biases evolve in the same way. At high redshifts both biases are positive ($`b>1`$) and decline as the redshift decreases. At low redshifts the biases dip below unity (antibias) and stop evolving. In spite of the similarities, there are some differences. The pairwise velocity bias becomes less than unity at $`z=3`$ on scales below $`3h^{-1}\text{Mpc}`$. At the same redshift the spatial bias is still positive on all scales. Colín et al. (1999) and Kravtsov & Klypin (1999) interpret the evolution of the spatial bias as the result of several competing effects. Statistical bias (higher peaks are more clustered) tends to produce a large positive bias and explains the bias evolution at high redshifts. At later epochs halos of a given mass or circular velocity become less rare and start merging inside forming groups of galaxies. Both effects lead to a decrease of the bias. The merging becomes less important as clusters with large velocity dispersions form at $`z<1`$. This results in a very slow evolution of the halo correlation function and bias. It is likely that the same processes define the evolution of the pairwise velocity bias. The differences can be explained by the known fact that the PVD is strongly dominated by a few of the largest objects (e.g., Zurek et al. 1994; Somerville, Davis & Primack 1997): merging of halos inside forming groups at $`z=3`$ results in fewer pairs with large relative velocities and in velocity antibias on $`1h^{-1}\text{Mpc}`$ scales. If this interpretation is correct, the pairwise velocity bias mostly measures the spatial bias, not the differences in velocities. ### 3.2 The velocity anisotropy $`\beta `$ A sample of 12 groups and clusters (see Table 1) was used to compute various average cluster velocity statistics. In order to reduce the noise in the profiles caused by the small number of clusters in the sample, we double the sample by also using the same clusters at a slightly different time, $`z=0.01`$. For each cluster the halo distances to the cluster center are divided by the corresponding cluster virial radius (normalized distances). The halo velocities (averaged in spherical bins) are divided by the corresponding cluster circular velocity at the virial radius (normalized velocities). In Figure 2 we show radial profiles, in normalized units, for halos and DM: the mean radial velocity ($`v_r`$), and the radial ($`\sigma _r`$) and tangential ($`\sigma _t`$) velocity dispersions. All halos are given equal weight. We have accounted for the Hubble flow when computing $`\sigma _r`$ and $`\sigma _t`$ (so proper, not peculiar, velocities are used); no correction for the mean radial velocity was made. The trend in both the velocity dispersion and the velocity anisotropy is slightly affected if the mean radial velocity is subtracted at distances $`\gtrsim 0.6`$, and it is not affected at all at smaller distances. The velocity anisotropy function $$\beta =1-\sigma _t^2/2\sigma _r^2$$ (5) is presented in the bottom panel of Figure 2 for halos and for DM. For pure radial orbits $`\beta =1`$, while an isotropic velocity dispersion implies $`\beta =0`$. The two lines added to the panel show the fitting formula (Carlberg et al., 1997) $$\beta =\beta _m\frac{4r}{r^2+4}+\beta _0$$ (6) for two pairs of parameters ($`\beta _m`$, $`\beta _0`$): (0.65, 0.) and (0.5, 0.15). The first set of parameters gives a better approximation for halos. It explicitly assumes that $`\beta =0`$ at the center. The second set of parameters allows a small anisotropy at the center. It provides a better fit for the dark matter. Note that while the halos have a tendency for more isotropic velocities (with the possible exception of the center), the difference between halos and the dark matter is not statistically significant. The variances of $`\sigma _r`$ and $`\sigma _t`$ are computed using standard expressions for the errors; for example, for $`\sigma _r`$ $$var(\sigma _r)=\frac{\mu _4-\mu _2^2}{4n\mu _2},$$ (7) where $`\mu _2=\frac{1}{n}\sum _i(v_{r,i}-\overline{v}_r)^2`$ and $`\mu _4=\frac{1}{n}\sum _i(v_{r,i}-\overline{v}_r)^4`$ are the central moments and $`n`$ is the number of halos. The statistical error is thus given by the square root of $`var(\sigma _r)`$. The variance of $`\beta `$ is given by $$[var(\beta )]^2=\left(\frac{var(\sigma _t^2)}{2\sigma _r^2}\right)^2+\left(\frac{var(\sigma _r^2)}{2\sigma _r^4}\sigma _t^2\right)^2.$$ (8)
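As a concrete illustration of Eqs. (5) and (7), the snippet below computes $`\beta `$ and the error on $`\sigma _r`$ for the halos falling in one radial bin. This is our own minimal sketch, not published code; the input arrays are assumptions, and $`\mu _k`$ are treated as central moments, as in the definitions above.

```python
import numpy as np

def anisotropy_with_error(v_r, v_t):
    """beta = 1 - sigma_t^2 / (2 sigma_r^2), Eq. (5), for one radial bin.
    v_r: radial velocities of the n halos in the bin;
    v_t: magnitudes of their (two-component) tangential velocities."""
    n = len(v_r)
    sr2, st2 = np.var(v_r), np.var(v_t)
    beta = 1.0 - st2 / (2.0 * sr2)
    # Eq. (7): var(sigma_r) = (mu4 - mu2^2) / (4 n mu2), mu_k central moments
    d = v_r - v_r.mean()
    mu2, mu4 = np.mean(d ** 2), np.mean(d ** 4)
    sigma_r_err = np.sqrt((mu4 - mu2 ** 2) / (4.0 * n * mu2))
    return beta, sigma_r_err
```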
### 3.3 The cluster velocity bias The three-dimensional velocity dispersions for both halos and DM are shown in the top panel of Figure 3. The bottom panel shows the cluster velocity bias, defined here as $`b_v=\sigma _{3D,halo}/\sigma _{3D,dm}`$. It is surprising that halos in clusters appear to have velocity dispersions larger, by about 20%, than those of the DM particles (positive bias). The trend is the same regardless of which type of velocity dispersion (3D, tangential or radial) we use in the velocity bias definition. There is almost no bias in the very center of clusters. However, the $`b_v`$ value of the innermost bin increases if we exclude the “cD” halos (defined as those halos which lie within the inner $`100h^{-1}\text{kpc}`$ radius and have maximum circular velocities greater than about 300 km/s). Their exclusion increases the positive velocity bias to 1.22, a value comparable to that found in the adjacent bin. The positive cluster velocity bias is robust to changes in the limit on the circular velocity $`V_{\mathrm{max}}`$. Only the innermost bin experiences significant changes when this limit is increased. For example, when we increase $`V_{\mathrm{max}}`$ from $`90\text{km/s}`$ to $`150\text{km/s}`$ (more massive halos are chosen), the value of $`b_v`$ in the innermost bin drops to 0.6. This favors a picture in which, in the central regions of clusters, large galaxy-size halos feel more strongly the slowing effect of dynamical friction. All the other bins (within the virial radius) continue to show small positive velocity biases. The positive velocity bias is also robust to changes in the number of clusters in the sample. For instance, one might suspect that the most massive cluster weighs so much that it alters the statistics (in fact, the most massive cluster of our simulation has had a recent major merger, and halos may still have large, “overheated”, velocities; e.g., Katz & White, 1993). This is true to some degree, but it does not change the “sign” of the bias. For example, when we exclude this cluster and take $`V_{\mathrm{max}}=150\text{km/s}`$, all bins continue to show a positive bias (within the virial radius) except the innermost bin, where $`b_v=0.5`$. The results for the innermost bin should be taken with caution because the effects of overmerging may still be present in the central $`100h^{-1}\text{kpc}`$ of the clusters. The difference in the velocity dispersions of halos and dark matter particles indicates that their velocity distribution functions (VDF) are different. We have examined both the differential and cumulative VDFs for the analyzed clusters and found that the halo VDFs are generally skewed towards higher velocities as compared to the dark matter VDF at $`r/r_{vir}\lesssim 0.8`$. The two VDFs are approximately the same at larger radii. The observed differences in the velocity distribution may be caused either by differences in the velocity fields of infalling halos and dark matter (if, for example, halos are accreted preferentially along filaments, resulting in orbits of higher ellipticity) or by the effects of dynamical friction operating on halos, but not on dark matter, in clusters. Dynamical friction may affect the slowest halos more efficiently because the dynamical friction time is proportional to the cube of the halo velocity. The slowest halos may therefore merge more efficiently, thereby skewing the velocity distribution of the surviving halos towards higher velocities. One can ask whether or not this positive cluster velocity bias persists in the next set of twelve clusters or groups, in descending order of mass (with virial masses below those of the clusters shown in Table 1).
Because this new sample of clusters has an average virial mass lower than the average mass of the clusters of Table 1, dynamical friction is expected to operate more efficiently (e.g., West & Richstone, 1988; Diaferio et al., 1998). The number of halos per cluster or group in this new sample is small; we therefore use the velocity dispersion of the whole group. We find integral $`b_v`$ values which are in general lower than unity, and in some cases there are groups that exhibit a strong velocity antibias (ratios close to 0.6). This is contrary to what we find for the clusters of Table 1, where the majority of clusters have an integral positive velocity bias. ## 4 Discussion The literature on the velocity bias is very extensive and the results are often contradictory. In this section we review some of the published results and compare them with ours. There are several reasons for the chaotic state of the field. One of them is the confusion of two different notions of the velocity bias – the single-point $`b_v`$ and the pairwise $`b_{v,12}`$ biases. The biases have different natures and thus give different results. Another source of confusion is the way galaxies are identified or approximated in theoretical models. When we combine this uncertainty with the many physical processes which we believe can create and change the velocity bias, the situation becomes rather complicated. Velocity profiles seem to be the easiest part of the picture. In this paper we present results which are less noisy and are based on a more homogeneous set of clusters than most previous publications. Our results on the average cluster profiles for the dark matter ($`v_r`$ and $`\sigma _r`$) roughly agree with the results of Cole & Lacey (1996), Tormen, Bouchet, & White (1997), and Thomas et al. (1998). For example, Tormen, Bouchet, & White (1997) find a DM velocity anisotropy $`\beta _{dm}\lesssim 0.2`$ at $`r/r_{vir}\lesssim 0.1`$ and $`\beta _{dm}\approx 0.5`$ at $`r/r_{vir}\approx 1`$, which is close to our results. The structure of galaxy clusters in various cosmologies is analyzed in detail by Thomas et al. (1998). From a total sample of 208 clusters they choose a subsample which shows no significant substructure. They find a more isotropic averaged beta profile ($`\beta _{dm}\approx 0.3`$ at $`r/r_{180}=1`$) in their $`\mathrm{\Lambda }`$CDM model. The differences between our result and theirs can be accounted for by the fact that their clusters were selected not to have significant substructure. More substructure in a cluster likely means a more anisotropic cluster. The $`\beta `$ value at the cluster center (innermost bins) is around 0.1, which is close to our results. The pairwise velocity bias is very sensitive to the number of pairs found in rich clusters of galaxies. Removing a few pairs may substantially change the bias. Thus, it mostly measures the spatial bias (or antibias) and is less sensitive to real differences in velocities. The value of $`b_{v,12}`$ that we find at $`z=0`$ is typically higher than previous estimates reported in the literature, computed for the $`\mathrm{\Omega }_0=1`$ CDM model (Carlberg & Couchman, 1989; Carlberg, Couchman & Thomas, 1990; Gelb & Bertschinger, 1994; Summers, Davis, & Evrard, 1995). Some of the results are difficult to compare because the pairwise velocity bias is expected to evolve with time and to vary from model to model.
The first interesting result of this paper, which emerges from the evaluation of $`b_{v,12}`$ at very high redshift, is that the halo PVD can be greater than that of the DM. This positive velocity bias had not been detected before (but see below), partly because of the lack of simulations with high enough resolution to overcome the overmerging problem. This result is surprising in part because halos are expected to be born dynamically cool (halos tend to form near the peaks of the DM density distribution; e.g., Frenk et al., 1988). In fact, this is one of the reasons given in the literature to explain the present-day pairwise velocity bias (e.g., Evrard, Summers & Davis, 1994). The other is dynamical friction (e.g., Carlberg, Couchman & Thomas, 1990). We offer the following explanation for this positive velocity bias. Those halos that are formed at very high redshift come from very high density peaks. They are dynamically cooler than an average DM particle from the region where they were born, but hotter than most of the matter. The pairwise velocity bias $`b_{v,12}`$ rapidly becomes smaller than unity on non-linear scales. As time goes on, the mergers inside forming groups reduce the number of high-velocity halos, while the velocities of DM particles increase. As a result, the average halo random relative velocities are reduced below those of the DM. Using a semi-analytical method to track the formation of galaxies, Kauffmann et al. (1998a; 1998b) also find a pairwise velocity bias greater than unity at high redshifts. They find that the galaxy PVD is greater than that of the DM at $`z>1.1`$ (their figure 11, $`\tau `$CDM model). A $`b_{v,12}>1`$ is expected at higher redshift in their $`\mathrm{\Lambda }`$CDM model as well. The single-point velocity bias appears to be the most difficult and controversial quantity. It is important because it is a more direct measure of the velocity differences. It still depends on the spatial bias, but to a much lesser degree than the pairwise bias. An interesting result was found when we evaluated the average cluster halo velocity dispersion profile and compared it with that of the DM particles: within the virial radius halos move faster than the dark matter. We believe that the explanation for this fact comes from a combination of two known physical mechanisms: dynamical friction and the merging of halos. One may naively expect that dynamical friction should always slow down halos, which must result in halos moving slower than the dark matter particles. This is not true. While on a short time-scale dynamical friction reduces the velocity of a halo, the halo may decrease or increase its velocity depending on the distribution of mass in the cluster and on the trajectory of the halo. For example, if a halo moves on a circular orbit inside a cluster with the Navarro-Frenk-White profile, its velocity will first increase as it spirals from the virial radius down to $`2.2R_s`$, where $`R_s\approx (200-300)\,\mathrm{kpc}`$ is the characteristic radius of the core of the cluster. The halo velocity will then decrease at smaller radii. When the halo comes close to the center of the cluster, it merges with the central cD halo, which tends to increase the average velocity of the remaining halos. It appears that the Jeans equation provides a better tool for understanding the velocity bias. We will use the Jeans equation as a guide through the jungle of contradictory results.
It cannot be used as more than a hint because it assumes that a cluster is stationary and spherical, which is generally not the case. If a system is in a stationary state and is spherically symmetric, the mass $`M(<r)`$ inside radius $`r`$ is related to the radial velocity dispersion $`\sigma _r`$: $$M(<r)=\frac{r\sigma _r^2}{G}A,$$ (9) $$A\equiv -\left(\frac{d\mathrm{ln}\sigma _r^2}{d\mathrm{ln}r}+\frac{d\mathrm{ln}\rho }{d\mathrm{ln}r}+2\beta \right),$$ (10) where $`\rho `$ is the (number) density profile and $`\beta `$ is the velocity anisotropy function. The left-hand side of this equation (the total mass) is the same for both halos and the dark matter. Thus, if the term $`A`$ is the same for the dark matter and the halos, there should be no velocity bias: halos and the dark matter must have the same $`\sigma _r`$. Numerical estimates of the term $`A`$ are inevitably noisy because we have to differentiate noisy data. Nevertheless, we find that the value of the term $`A`$ for halos is systematically smaller than for the dark matter. This gives a tendency for $`\sigma _r`$ to be larger for halos, which in turn produces a positive velocity bias. The main contribution typically comes from the logarithmic slope of the density: the halo density profile is shallower in the central part than that of the dark matter. The halo profile is likely shallower because of merging in the central part of the cluster, which gave rise to the central cD halo found in each of our clusters. We note that while the Jeans equation shows the correct tendency for the bias, it fails to reproduce the correct magnitude of the effect: the variations of the term $`A`$ are smaller than the measured velocity bias. One can also use the Jeans equation in a different way – as an estimator of mass. We have computed $`M(<r)`$ for our average cluster using both the DM and the halos (a numerical sketch of this estimator is given below). At $`r/r_{vir}=0.25`$, where $`b_v`$ is close to its maximum, the halo mass determination is larger than that of the DM by a factor of 1.4. This is due to the larger halo velocity dispersion. Because the term $`A`$ is actually higher for the DM by about 10%, the overestimate is reduced from 1.56 to 1.4. As the distance to the cluster center approaches the virial radius, the mass overestimate disappears. At the virial radius both mass estimates agree, essentially because $`\beta `$, $`\sigma _r`$, and the sum of the logarithmic derivatives are the same for both halos and DM, and both are within (10-15)% of the true mass. Using the Jeans equation for a spherically symmetric system and assuming an isotropic velocity field, Carlberg (1994) showed that a cool tracer population, $`b_v<1`$, moving inside a cluster with a power-law density profile (the density profile of the tracer is also assumed to be a power law) produces mass segregation. That is, the tracer population has a steeper density profile. We can invert this reasoning and say that a more centrally concentrated halo distribution produces a velocity antibias. We do not find this kind of mass segregation in our halo cluster distribution. In fact, we see the opposite – halos are less concentrated than the DM. Dynamical friction along with merging produces a lack of halos in the center of the cluster. This very likely explains the differences between our results and Carlberg’s for the velocity bias.
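The mass estimator referred to above can be applied to binned profiles as in the sketch below. It is a minimal illustration of our own (the function name and unit choices are assumptions, not published code), and, as noted above, the logarithmic derivatives of noisy binned data should be smoothed before being differentiated.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def jeans_mass(r, sigma_r, rho, beta):
    """M(<r) = (r sigma_r^2 / G) A, Eqs. (9)-(10), with
    A = -(dln sigma_r^2/dln r + dln rho/dln r + 2 beta).
    r in kpc, sigma_r in km/s, rho in arbitrary units (only its
    logarithmic slope enters); returns M in solar masses."""
    lnr = np.log(r)
    A = -(np.gradient(np.log(sigma_r ** 2), lnr)
          + np.gradient(np.log(rho), lnr)
          + 2.0 * beta)
    return r * sigma_r ** 2 / G * A
```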
Carlberg & Dubinski (1991) simulated a spherical region of 10 Mpc radius with $`64^3`$ DM particles. They were unable to find galaxy-size halos inside the cluster at $`z=0`$ because of insufficient resolution: the softening length was 15 kpc instead of the $`2h^{-1}\text{kpc}`$ needed for the survival of halos (KGKK). Their identification of “galaxies” with those DM particles which were inside high-density groups found at high redshift may have produced a spurious cluster velocity antibias. Using different galaxy tracers, Carlberg (1994) also found an integral cluster velocity bias lower than unity. This result could still be affected by numerical resolution ($`ϵ=9.7h^{-1}\text{kpc}`$). Evrard, Summers, & Davis (1994) ran a two-fluid simulation in a small box, $`L_{box}=16\text{Mpc}`$, and stopped it at $`z=1`$. Each DM particle had a mass of $`9.7\times 10^8M_\odot `$ and an effective resolution of 13 kpc (at $`z=1`$). The initial conditions were constrained to assure that a poor cluster would form in their simulation. Their “globs” (galaxy-like objects) exhibit a velocity bias lower than unity. This velocity bias appeared not to depend on epoch or mass. Their velocity antibias qualitatively agrees with our results for groups and poor clusters. At the same time, their value for the pairwise velocity bias agrees with our results. Metzler & Evrard (1997) use an ensemble of two-fluid simulations to compute the structure of clusters. Unfortunately, their simulations do not have a high enough mass resolution to allow the gas in their simulations to cool and form “galaxies” (and then also to allow for some feedback). Instead, they use a high-density-peak recipe to convert groups of gas particles into galaxy particles. They find a one-point “galaxy” velocity bias that depends on the cluster mass: the higher the cluster mass, the higher the $`b_v`$ value. We find a similar result when we do the analysis of the velocity bias cluster by cluster (for individual clusters we take only integral velocity dispersions). Their ensemble-averaged bias parameter is 0.84. Their recipe for galaxy formation produces a galaxy number density profile which is steeper than that of the DM. This is likely the reason why they find a $`b_v`$ value lower than unity (Carlberg 1994; see above). Frenk et al. (1996) simulated a Coma-like cluster with a P³M + SPH code that includes the effects of radiative cooling. The mass per gas particle is $`2.4\times 10^9M_\odot `$ with a softening parameter $`ϵ=35\text{kpc}`$ of the Plummer potential. Their galaxies have two extreme representations: one as pure gas clumps and the other as lumps of the stellar component. They find mass segregation in both representations – galaxies are more clustered than the DM toward the center of the cluster, which is not seen in our halo distribution (the reader might want to compare Fig. 11 in Frenk et al. 1996 with Fig. 2 in Colín et al. 1999). Once again, according to the Carlberg (1994) analysis, this would result in a one-point velocity bias lower than unity ($`b_v\approx 0.7`$). Because of strong cooling, their “galaxies” can acquire high density contrasts, which helps the galaxies to survive inside the cluster. At the same time, the poor force resolution (35 kpc) could have affected their results. There are two studies in which $`b_v`$ values greater than unity are obtained. Okamoto & Habe (1999) simulate a spherical region of 30 Mpc radius using a constrained random field method. They use multi-mass initial conditions to reach high resolution.
Their high-resolution simulated region, where the cluster ends up, has a softening length $`ϵ=5\text{kpc}`$ and a mass per particle of $`m\approx 10^9M_\odot `$. They find a cluster velocity bias lower than unity only in the innermost part of the cluster, where dynamical friction is expected to be more efficient. A small positive bias ($`b_v>1`$) is found in the range $`0.3\text{Mpc}<r<0.6\text{Mpc}`$. Based on the previous work by Kauffmann et al. (1998a), Diaferio et al. (1998) study the properties of galaxy groups and clusters. They also find that galaxies in clusters are “hotter” than the underlying dark matter field. They suggest that this effect is due to the infall velocities of blue galaxies. Infall could explain the positive velocity bias of the outermost bin (within the virial radius) of our Figure 3, but it definitely cannot account for the $`b_v>1`$ values seen in the inner bins (the mean radial velocity is close to zero for both DM and halos in the three innermost bins). There are several differences between our simulation and those mentioned above. First, some of the papers cited above simulate only a region which ends up as a cluster, so they have the structure of only one cluster. The single-cluster one-point velocity bias may not represent the average velocity bias found using a sufficiently large sample of clusters. For example, if our small positive velocity bias is influenced by non-equilibrium cluster features, then when one selects a cluster which is in good dynamical equilibrium (this could be defined, for example, by the absence of substructure in the cluster) and computes the one-point velocity bias, it could be biased toward low values ($`b_v<1`$) because dynamical friction has had more time to operate. Second, we simulate a relatively large random volume that gives us many clusters in which effects such as tidal torques, infall, and mergers are included naturally. A simulated cluster region, or a large random region without enough resolution, may not have a sufficiently large number of galaxy tracers and thus introduces high statistical errors. Our relatively large number of halos in clusters significantly reduces the statistical errors in the computation of $`b_v`$ and makes the sample suitable for determining, for example, the radial dependence of the velocity bias. Third, in view of the Okamoto & Habe (1999) and Ghigna et al. (1998) results, and our own results, it seems that numerical resolution not only plays an important role in determining the whole-cluster velocity bias value (both the spatial and the velocity bias intervene to affect its value) but is also important in determining the radial dependence of $`b_v`$ (almost pure velocity bias). What could account for the small positive velocity bias that we see in our average cluster? We have examined both the differential and the cumulative radial velocity distribution functions. We use the radial velocity to highlight any contribution of infall velocities to the velocity bias. The cumulative radial velocity distribution function is shown in Figure 4 for four different radial bins. In the top-right panel (the innermost bin) we see a higher fraction of low-velocity halos at small $`v_r`$ values. This is due to the central cD halos, which move very slowly relative to the clusters themselves. At large $`v_r`$ values we observe the contrary – a higher slope, which means that there are many fast-moving halos. If we do not include the cD halos, the velocity bias becomes larger than unity even in the central radial bin.
However, as we noted earlier in section 3.3, a velocity antibias can appear in the central bin if the value of $`V_{\mathrm{max}}`$ is increased. It is clear that the deficiency of low- and moderate-$`v_r`$ halos produces the positive velocity bias measured at $`r=(0.2-0.8)r_{vir}`$ (see the top-left and the bottom-right panels). We have used the Kolmogorov-Smirnov test to evaluate whether or not the halo and the DM velocity distribution functions are statistically different. We find that the probability that these functions were drawn from the same distribution is smaller than 0.01 in all radial bins within the virial radius. As mentioned in § 3.3, dynamical friction may have affected the slow-moving halos more significantly because the dynamical friction time-scale is proportional to the cube of the halo velocity. It is thus expected that low-velocity halos merge sooner than their high-speed counterparts, thereby skewing the VDF toward high-velocity halos (it should also be kept in mind that as halos move to orbits of smaller radii, they can acquire higher velocities because the DM velocity dispersion increases toward the cluster center). Infall could also be an important source of positive velocity bias for the outermost bins. ## 5 Summary 1. We have found that galaxy-size halos have a time- and scale-dependent pairwise velocity bias. At high redshifts ($`z\approx 5`$) this bias is larger than unity ($`\approx 1.2`$). It declines with time and becomes 0.6–0.8 at $`z=0`$. The evolution of the pairwise velocity bias follows, and probably is defined by, the spatial bias of the dark matter halos. These results are in qualitative agreement with those of Kauffmann et al. (1998b). 2. We have evaluated the velocity anisotropy function $`\beta (r)`$ for both halos and DM particles. For both halos and DM, $`\beta `$ is a function that increases with radius and reaches a value of $`\approx 0.5`$ at the virial radius. The difference between this value and that found by Thomas et al. (1998) can likely be explained by the fact that Thomas et al. (1998) selected a sample of clusters which had little substructure. Our simulations indicate that the halo velocity anisotropy closely follows (but lies slightly below) that of the underlying dark matter. 3. Halos in our clusters move faster than the DM particles: $`b_v=(1.2-1.3)`$ for $`r=(0.2-0.8)r_{vir}`$. This result disagrees with many previous estimates of the cluster velocity bias. This difference appears to be due to differences in numerical resolution. More work needs to be done to settle the issue. Nevertheless, it is encouraging that Diaferio et al. (1998) and Okamoto & Habe (1999) found results similar to ours. 4. The usual argument that dynamical friction slows down galaxies and thus must produce a velocity antibias is not correct. Galaxy tracers in clusters move through an environment which has a steep density gradient. A sinking halo may either increase or decrease its velocity depending on the distribution of the cluster mass and on the trajectory of the halo. A combination of dynamical friction and merging appears to be the most compelling hypothesis to account for our small positive velocity bias. We acknowledge the support of the grants NAG-5-3842 and NST-9802787. Computer simulations were performed at NCSA. P.C. was partially supported by DGAPA/UNAM through project IN-109896.
# Dark Matter in the Dwarf Galaxy NGC 247 ## 1 Introduction Dwarf galaxies (hereafter DGs) differ from normal spiral galaxies in many of their properties. While the rotation curves of spirals tend to be flat after the central rise, the rotation curves of DGs continue to rise out to the last measured point, although recently it has been observed in DDO 154 that the rotation curve declines in the outer parts (Carignan & Purton 1998). Moreover, DGs tend to have a higher disk gas content (measured through the $`21`$ cm line) than spirals. In fact, within the radius $`D_{25}/2`$, defined to be the radius at which the surface brightness reaches $`25^m\text{ arcsec}^{-2}`$, the ratio between the mass of the gas $`M_g`$ and the mass of the visible stars $`M_{*}`$ is $`M_g/M_{*}\approx 0.45`$, and it tends to increase to $`M_g/M_{*}\approx 0.7`$ at the radius $`LP_{HI}`$ of the last HI profile point (Burlak 1996). The contribution of the dark matter $`M_d`$ in DGs within $`3r_0`$ ($`r_0`$ being the scale length of the stellar disk as obtained from surface photometry) is on average $`65\%`$ of the total mass $`M_t`$, which is nearly a factor of two greater than for normal spiral galaxies. It is interesting to note that among the 13 DGs analysed by Burlak (1996), the 8 DGs dimmer than $`M_B=-17^m`$ have on average $`M_d/M_t\approx 0.7`$ and $`M_d/M_{*}\approx 3.9`$, while the 5 DGs with absolute magnitudes in the range $`-17^m>M_B>-18^m`$ have $`M_d/M_t\approx 0.56`$ and $`M_d/M_{*}\approx 1.5`$. For comparison, we note that in the case of normal spiral galaxies $`M_d/M_t\approx 0.38`$ and $`M_d/M_{*}\approx 0.76`$. Although observations of dwarf galaxies are rather difficult, we can learn a lot from them. Dwarf galaxies are the preferred objects for studies of the distribution of the dark matter, since the dynamical contribution of their dark halos dominates even in their central regions. DGs have already proved to be precious objects, since they made it possible to show that their dark matter constituent cannot be hot dark matter (neutrinos) (Tremaine & Gunn 1979; Faber & Lin 1983; Gerhard & Spergel 1992). DGs also put constraints on cold dark matter candidates (WIMPs). Recent studies seem to point out that there is a discrepancy between the rotation curves computed (through N-body simulations) for dwarf galaxies assuming a halo of cold dark matter and the measured curves (Moore 1994; Navarro et al. 1996; Burkert & Silk 1997). Since DGs are completely dominated by dark matter on scales larger than a few kiloparsecs (Carignan & Freeman 1988), one can use them to investigate the inner structure of DG dark halos with very little ambiguity about the contribution from the luminous matter and the resulting uncertainties in the disk mass-to-light ratio (M/L). Only about a dozen rotation curves of dwarf galaxies have been measured; nevertheless, a trend clearly emerges: the rotational velocities rise over most of the observed region, which spans several optical scale lengths but still lies within the core radius of the mass distribution. Rotation curves of dwarf galaxies are therefore well described by an isothermal density law. The diffuse X-ray emission has not yet been analysed for many DGs. Such observations yield important information about the dynamical mass distribution in elliptical galaxies. It is therefore natural to investigate whether DGs also have diffuse X-ray emission and whether it can give us useful information about their total mass.
Of course, we are aware that this is a difficult task. Besides a proper treatment of the observational background, we must be able to disentangle the X-ray emission of the hot halo gas from the contribution of point sources such as low-mass X-ray binaries or supernova remnants, which is demanding even for high-mass ellipticals. In DGs the problem is even more severe, since we expect a weaker diffuse X-ray emission due to the shallower gravitational potential compared to that of elliptical galaxies. This implies a lower diffuse gas temperature $`T_g`$ and therefore less emission in the X-ray band. Five DGs have been observed in the $`0.11-2.04`$ keV band with the ROSAT PSPC. Among them, we analysed the three members of the Sculptor group: NGC 247, NGC 300 and NGC 55. In this paper we present our results for NGC 247, for which we have the best data. Unfortunately, NGC 247 is not the best example of a dwarf galaxy. It has a huge gaseous disk, and the dark matter begins to dominate over the stellar and gaseous disks only beyond 8 kpc (see Fig. 1 in Burlak 1996). However, its halo parameters seem to be typical for the whole class (Burlak 1996). In particular, the ratio between the halo mass and the total mass within the last measured HI profile point could be as high as $`M_h/M_t\approx 0.7`$, which is significantly above the value for spiral galaxies. The paper is organised as follows: in Sects. 2 and 3 we discuss the data analysis procedure and the model assumptions. In Sect. 4 the results of the spectral analysis are shown, while in Sect. 5 we determine the mass of the X-ray emitting gas and the dynamical mass of NGC 247. We close the paper with our conclusions in Sect. 6. ## 2 Data Analysis The data analysis is based upon the standard model for DGs of Burlak (1996), which includes three components: a stellar disk, a gaseous disk and a dark halo. In addition, we consider a fourth component, the hot diffuse gas, which turns out to be not very important with respect to the total mass, but is relevant in our analysis, since it is the source of the diffuse X-ray emission. This component is assumed to form a halo around the galactic center, which can be described by a $`\beta `$-model (Canizares et al. 1987) with central density $`\rho _0`$ and core radius $`a_g`$ different from the corresponding values of the dark halo; hence $$\rho _{hg}=\rho _0\left(1+\left(\frac{r}{a_g}\right)^2\right)^{-\frac{3}{2}\beta }.$$ (1) The X-ray analysis depends only on the model of the hot emitting gas. We therefore make no further assumptions about the remaining components. The hot, diffuse gas component is taken to be in hydrostatic equilibrium. For spherical symmetry, the dynamical mass $`M(r)`$ within $`r`$ is then given by $$M(r)=-\frac{kT_{hg}(r)r}{\mu m_pG}\left(\frac{d\mathrm{log}\rho _{hg}(r)}{d\mathrm{log}r}+\frac{d\mathrm{log}T_{hg}(r)}{d\mathrm{log}r}\right),$$ (2) where $`T_{hg}`$ denotes the hot gas temperature, $`\rho _{hg}`$ its density, $`m_p`$ the proton mass and $`\mu `$ the mean atomic weight.
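As a numerical illustration of Eqs. (1) and (2), the sketch below evaluates the hydrostatic mass for an isothermal $`\beta `$-model, for which $`d\mathrm{log}\rho _{hg}/d\mathrm{log}r=-3\beta r^2/(r^2+a_g^2)`$ and the temperature term vanishes. It is our own sketch, not the analysis code; the mean molecular weight $`\mu =0.6`$ is a standard assumption rather than a value fixed by the text.

```python
K_B = 1.380649e-16    # Boltzmann constant, erg/K
M_P = 1.6726219e-24   # proton mass, g
G_CGS = 6.674e-8      # gravitational constant, cgs
KPC = 3.0857e21       # cm per kpc
MSUN = 1.989e33       # g per solar mass

def gas_density(r_kpc, rho0, a_g_kpc, beta):
    """Hot-gas beta-model of Eq. (1)."""
    return rho0 * (1.0 + (r_kpc / a_g_kpc) ** 2) ** (-1.5 * beta)

def hydrostatic_mass(r_kpc, T_K, a_g_kpc, beta, mu=0.6):
    """Total mass within r from Eq. (2) for an isothermal beta-model;
    the logarithmic density slope is -3 beta r^2 / (r^2 + a_g^2).
    Returns M in solar masses."""
    dlog_rho = -3.0 * beta * r_kpc ** 2 / (r_kpc ** 2 + a_g_kpc ** 2)
    m_grams = -(K_B * T_K * r_kpc * KPC) / (mu * M_P * G_CGS) * dlog_rho
    return m_grams / MSUN
```

For instance, `hydrostatic_mass(11.2, 10**6.1, 4.4, 2.0/3.0)` gives roughly $`8\times 10^{10}M_\odot `$, of the same order as the total mass quoted in Sect. 6 for $`\beta =2/3`$.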
For the data analysis and reduction we follow the method proposed by Snowden & Pietsch (1995), using Snowden’s software package especially written for the analysis of ROSAT PSPC data of extended objects and the diffuse X-ray background (Snowden & Pietsch 1995; Snowden et al. 1994). The package includes routines to exposure- and vignetting-correct images, as well as to model and subtract, to the greatest possible extent, the non-cosmic background components. For extended sources, the background can vary significantly over the region of interest, requiring an independent assessment of its distribution over the detector. Since the non-cosmic background is distributed differently across the detector than the cosmic background, the former has to be subtracted separately. Five different contamination components have been identified in the non-cosmic background of PSPC observations. For a detailed discussion we refer to Snowden et al. (1994). ## 3 Observations of NGC 247 The dwarf galaxy NGC 247 is the one for which we have the best data. The ROSAT data archive contains two observations of NGC 247. The first observation was done from December $`21^{st}`$, 1991 to January $`6^{th}`$, 1992 with an observation time of 18240 s, and the second from June $`11^{th}`$ to $`13^{th}`$, 1992 with an observation time of 13970 s. After the exclusion of the bad time intervals ($`\approx 25\%`$ of the observation time), the non-cosmic background was subtracted following Snowden et al. (1994). The cleaned data sets were then merged into a single one in order to increase the photon count statistics. The data were then binned into concentric rings such that four rings lie within $`D_{25}/2`$; the thickness $`\mathrm{\Delta }R`$ of each ring therefore corresponds to $`2.67`$ arcmin. Obviously, the binning is a compromise between radial resolution and statistical errors. ### 3.1 Point source Detection Since we are interested in the diffuse X-ray emission, we must eliminate all contributions from foreground Galactic X-ray sources and from discrete X-ray sources associated with NGC 247. We again follow the procedure outlined by Snowden & Pietsch (1995) to remove the point sources down to the faintest possible flux limit, in order to minimize the excess fluctuations caused by them. The threshold limit we used is given in Table 2. We carried out the point source detection separately for the 1/4 keV band and for the 0.5–2 keV band. The threshold limit guarantees that less than one spurious source will remain within the inner part of the field of view (25 arcmin) at a confidence level of about $`3\sigma `$. ### 3.2 The cleaned X-ray images Figure 1 shows the X-ray brightness radial profiles of NGC 247 after point source removal for the 1/4 keV, 3/4 keV and 1.5 keV bands. The horizontal lines, defined by the average X-ray surface brightness between the $`7^{th}`$ and the $`12^{th}`$ ring, represent the sum of the foreground and (cosmic) background diffuse X-ray emission. The X-ray image in the 1.5 keV band shows clearly that the emission originates from the disk of NGC 247, which extends to about 11 arcmin. We used elliptical rings for the 1.5 keV band, with an inclination angle of $`75.4^{\circ }`$ and a position angle of $`171.1^{\circ }`$, as derived by Carignan & Puche (1990). The distribution of the X-ray emission in the 3/4 keV band is less clearly associated with the disk. The radial profile of the 1/4 keV band is flat from the centre out to 30 arcmin. The count-rate fluctuations are partly due to the brightest undetected point sources.
Since there exists an unresolved flux originating beyond NGC 247, and the HI column density of NGC 247 is sufficiently high over most of its inner region to absorb a significant fraction of this flux, any emission from NGC 247 must at least ‘fill’ the deficit caused by the absorbed flux in order to be observed as an enhancement. Thus, even the flat distribution of the X-ray brightness profile of NGC 247 is already a clear indication of emission from the galaxy. Hence, we must correct the surface brightness of the image for the absorption of the extragalactic flux. Moreover, the increasing HI column density towards the centre reduces the point-source detection sensitivity, so that more undetected X-ray sources contribute to the background. The contribution of undetected sources of the extragalactic background can be calculated using the $`\mathrm{log}N-\mathrm{log}S`$ relation $$N(S)=\{\begin{array}{cc}(285.3\pm 24.6)S^{-(2.72\pm 0.27)}\hfill & \text{if }S\ge 2.66\hfill \\ (116\pm 10)S^{-(1.80\pm 0.08)}\hfill & \text{otherwise}\hfill \end{array}$$ (3) as derived from a deep ROSAT survey by Hasinger et al. (1993). $`S`$ denotes the flux in units of $`10^{-14}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. The energy spectrum of sources with fluxes in the range $`(0.25-4)\times 10^{-14}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$ is given by $$(7.8\pm 0.3)E^{-(0.96\pm 0.11)}[\mathrm{keV}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}\,\mathrm{keV}^{-1}],$$ (4) where $`E`$ is the energy in keV. Since the absolute normalization of the $`\mathrm{log}N-\mathrm{log}S`$ relation is still uncertain, we will use it in a way such that only the ratio is relevant. The remaining count-rate $`I_{\mathrm{eg}}^i`$ of extragalactic sources, due to sources having fluxes below the flux detection limit $`S_{\mathrm{lim}}^i`$ (see below), is given by $$I_{\mathrm{eg}}^i=I_{\mathrm{egobs}}^i\frac{\int _0^{S_{\mathrm{lim}}^i}SN(S)dS}{\int _{0.25}^4SN(S)dS},$$ (5) where $`I_{\mathrm{egobs}}^i`$ is the observed extragalactic count-rate in the $`i^{\mathrm{th}}`$ ring of sources with a flux in the range $`(0.25-4)\times 10^{-14}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. $`I_{\mathrm{egobs}}^i`$ in units of $`\mathrm{counts}\,\mathrm{s}^{-1}\,\mathrm{arcmin}^{-2}`$ is given by $$I_{\mathrm{egobs}}^i=\frac{1}{A_i}\left(\frac{2\pi }{360\times 60}\right)^2\int _{A_i}f_n(n_{\mathrm{HI}}(\mathbf{x}))dA_i$$ (6) with $`A_i`$ the surface of the $`i^{\mathrm{th}}`$ ring and $$f_n(n_{\mathrm{HI}}(\mathbf{x}))=\int _0^{\mathrm{\infty }}e^{-n_{HI}(\mathbf{x})\sigma (E)}A_{\mathrm{eff}}^n(E)\,7.8E^{-1.96}dE$$ (7) the observed extragalactic count-rate at position $`\mathbf{x}`$ in the $`n`$-band, where $`n`$ labels the $`1/4`$ keV, $`3/4`$ keV and $`1.5`$ keV bands, respectively. The column density $`n_{\mathrm{HI}}`$ includes both the column density of the Milky Way ($`1.5\times 10^{20}\,\mathrm{HI}\,\mathrm{cm}^{-2}`$; Fabbiano et al. 1992) and the column density of NGC 247 at position $`\mathbf{x}`$, as calculated from Carignan & Puche (1990). The source detection limit $`I_{\mathrm{Threshold}}`$ of our analysis (see Table 2) is then given via the relation $$I_{\mathrm{Threshold}}=c^i\int _{A_i}f_n(n_{\mathrm{HI}}(\mathbf{x}))dA_i$$ (8) in units of $`\mathrm{counts}\,\mathrm{s}^{-1}`$. The flux detection limit $`S_{\mathrm{lim}}^i`$ in keV is $$S_{lim}^i=c^i\int _{0.5}^27.8E^{-0.96}dE.$$ (9)
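Given a flux detection limit $`S_{\mathrm{lim}}^i`$ (made explicit just below), the ratio of integrals in Eq. (5) follows from the broken power law of Eq. (3) in closed form. The snippet below is a small self-contained sketch of our own, with fluxes in units of $`10^{-14}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$ and the break at $`S=2.66`$ handled explicitly.

```python
BREAK, K_HI, G_HI, K_LO, G_LO = 2.66, 285.3, 2.72, 116.0, 1.80

def flux_integral(a, b, k, gamma):
    """Closed form of the integral of S * k S^(-gamma) dS from a to b."""
    p = 2.0 - gamma
    return k / p * (b ** p - a ** p)

def source_flux(a, b):
    """Total flux of sources with S in [a, b] under Eq. (3)."""
    total = 0.0
    if a < min(b, BREAK):
        total += flux_integral(a, min(b, BREAK), K_LO, G_LO)
    if b > max(a, BREAK):
        total += flux_integral(max(a, BREAK), b, K_HI, G_HI)
    return total

def unresolved_fraction(s_lim):
    """Ratio of the two integrals in Eq. (5); the numerator converges at
    S -> 0 because S N(S) ~ S^(-0.8) is integrable there."""
    return source_flux(0.0, s_lim) / source_flux(0.25, 4.0)
```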
Hence, using Eqs. (8) and (9) we get $$S_{\mathrm{lim}}^i=1.602\times 10^5\frac{A_iI_{\mathrm{Threshold}}\int _{0.5}^27.8E^{-0.96}dE}{\int _{A_i}f_n(n_{\mathrm{HI}}(\mathbf{x}))dA_i}$$ (10) in units of $`10^{-14}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. The extragalactic-background- and Milky-Way-corrected count-rate $`I_{MW}^i`$ in the $`i^{\mathrm{th}}`$ ring is now given by $$I_{MW}^i=\frac{(I_{\mathrm{obs}}^i+I_{\mathrm{eg}}^i)-(I_{\mathrm{obs}}^{\mathrm{Base}}+I_{\mathrm{eg}}^{\mathrm{Base}})}{\tau _f},$$ (11) where the base values, defined by the outermost rings (7–12), are treated correspondingly. $`\tau _f`$ is the band-averaged absorption coefficient of our own Galaxy, computed using a Raymond-Smith emission model (Raymond & Smith 1977) and a column density of $`1.5\times 10^{20}\,\mathrm{HI}\,\mathrm{cm}^{-2}`$. We assume that the halo emission is not absorbed by the disk material of NGC 247. A detailed error calculation should include the uncertainties of the foreground column density, the column density of NGC 247, the $`\mathrm{log}N-\mathrm{log}S`$ relation, the extragalactic source spectrum, and the Raymond-Smith model, including the uncertainty in the temperature and the response matrix. Since this calculation is possible in principle but laborious, we adopt the formal error to be two times the count-rate uncertainty; hence, $`\mathrm{\Delta }I_{\mathrm{MW}}^i=2\times \mathrm{\Delta }I_{\mathrm{obs}}^i/\tau _f`$. The corresponding radial profiles for the 1/4 keV and the 3/4 keV bands are shown in Fig. 2 and the values are given in Table 3. For the 1.5 keV band the absorption due to gas associated with NGC 247 is negligible. Adopting the values given in Table 1, the X-ray luminosities corresponding to the observed count rates in the different bands amount to $`L_X=1.1\times 10^{39}\mathrm{erg}/\mathrm{s}`$, $`L_X=4.8\times 10^{38}\mathrm{erg}/\mathrm{s}`$ and $`L_X=2.3\times 10^{38}\mathrm{erg}/\mathrm{s}`$ in the $`1/4`$ keV, $`3/4`$ keV and $`1.5`$ keV bands, respectively. ## 4 Spectral Analysis The ratio of the count rates in the different energy bands contains information about the temperature of the emitting gas. Assuming a Raymond-Smith spectral model (Raymond & Smith 1977) with cosmic abundances (Allen 1973), the observed ratio as derived from the count rates (Eq. (11)) must equal the theoretical count-rate ratio given by: $$\eta =\frac{\int _0^{\mathrm{\infty }}A_{\mathrm{eff}}^nR(E,T)\frac{1}{E}dE}{\int _0^{\mathrm{\infty }}A_{\mathrm{eff}}^{n^{\prime }}R(E,T)\frac{1}{E}dE}.$$ (12) For the emission below $`3/4`$ keV we obtain $`\mathrm{log}T=6.1_{-0.1}^{+0.1}`$. The higher energy band is not included in the fit since its emission is dominated by the contribution from the disk. The temperature obtained from the X-ray data tends to be too high, due to the flux from the hotter plasma associated with the disk. The formal error is expected to be too small, since at a temperature of $`\mathrm{log}T=6.1`$ most of the emission lies outside the ROSAT bands; hence, the spectral fit has to be done using only the tail of the emission function. Alternatively, we can use the rotation curve of NGC 247 (Burlak 1996) to obtain the temperature at the last measured HI profile point ($`11.2`$ kpc). Doing this we obtain $`\mathrm{log}T=5.6`$, which is in reasonable agreement with the value deduced above from the X-ray data.
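Since the band ratio of Eq. (12) is monotonic in $`T`$ over the range of interest, the temperature can be obtained by one-dimensional root finding. The sketch below assumes that tabulated on-axis effective-area curves and a Raymond-Smith emissivity $`R(E,T)`$ are supplied as callables; these are placeholders of our own, not functions from any standard library.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def band_ratio(T, a_eff_n, a_eff_np, emissivity, e_lo=0.05, e_hi=3.0):
    """Theoretical count-rate ratio of Eq. (12) at plasma temperature T (K).
    a_eff_n, a_eff_np: effective areas A_eff(E) of the two bands (cm^2 vs keV);
    emissivity: spectral emissivity R(E, T)."""
    def rate(a_eff):
        return quad(lambda E: a_eff(E) * emissivity(E, T) / E,
                    e_lo, e_hi, limit=200)[0]
    return rate(a_eff_n) / rate(a_eff_np)

def fit_log_temperature(eta_obs, a_eff_n, a_eff_np, emissivity):
    """Solve band_ratio(T) = eta_obs for log T, assuming monotonicity
    over the bracketing interval [5, 7]."""
    f = lambda lt: band_ratio(10.0 ** lt, a_eff_n, a_eff_np, emissivity) - eta_obs
    return brentq(f, 5.0, 7.0)
```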
Both values have their own merits. Using the higher value $`\mathrm{log}T=6.1`$ allows us to determine, within the model, the total mass and the mass of the hot gas using only X-ray data, thereby providing an independent check of the total mass as obtained from the rotation curve. However, the error of the result is quite large. Using the lower value $`\mathrm{log}T=5.6`$, obtained from the measured rotation curve, we mix the two methods and thus can no longer compare with the HI measurements, but the error is possibly smaller. We will give in the following all results for both temperatures: $`\mathrm{log}T=5.6`$ and, in brackets, $`\mathrm{log}T=6.1`$. ## 5 Mass Determination ### 5.1 The Hot Emitting Gas Most of the flux originating from NGC 247 is observed in the $`1/4`$ keV band. We determine the mass of the emitting hot gas making the following assumptions: the gas temperature is about $`\mathrm{log}T=5.6`$ ($`\mathrm{log}T=6.1`$); the spectral emissivity is given by a Raymond-Smith model; and the density profile is described by a $`\beta `$-model (see Sect. 2) with central gas density $`\rho _0`$ and core radius $`a_g`$. Due to the small amount of data, we reduce the number of degrees of freedom by fixing $`\beta `$. The measured radial profile in the $`1/4`$ keV band can then be used to determine the remaining two parameters, $`\rho _0`$ and $`a_g`$. With these values we determine the integrated gas mass within the last measured HI profile point ($`r=11.2`$ kpc). The results for three different values of $`\beta `$ are given in Table 4. Read et al. (1997) have analysed a sample of 17 nearby galaxies, among them NGC 247. They found a hot gas mass in NGC 247 of only $`3.1\times 10^6\sqrt{\eta }\,M_\odot `$, with $`\eta `$ the filling factor, which is nearly two orders of magnitude below the average value of their sample. Besides the different analysis technique, we believe that the main reason for the large discrepancy between their result and ours is the different treatment of the internal absorption in NGC 247. ### 5.2 The Gravitating Mass The observed photons originate from a hot gas at a temperature of $`10^5-10^6`$ K as it cools due to bremsstrahlung and, more importantly in the lower temperature range, due to line cooling. The amount of hot gas in a DG is not large enough to give a cooling time comparable with the Hubble time; in fact, for NGC 247 we expect a cooling time of about $`5\times 10^8`$ years (Read et al. 1997). Therefore, the gas must have been reheated, either by heat sources or, e.g., by adiabatic compression as the result of a flow through the galaxy’s potential. Obviously, both effects can combine. The main heat sources for the gas are supernovae. However, the heating and cooling rates of the gas do not balance at every radius. Since the cooling rate per ion is proportional to the local electron density, cooling will generally dominate over heating at small radii, where the density is high. Thus, the gas in the central regions will steadily cool and flow into the centre of the galaxy, where it presumably forms gas clouds and stars. However, the flow velocity is usually much less than the sound speed, so that approximately hydrostatic equilibrium is maintained in the gas (Binney & Tremaine 1987). Assuming an ideal gas in hydrostatic equilibrium, the total mass $`M(r)`$ within the radius $`r`$ is given by Eq. (2). Using the values of the parameters $`\rho _0`$ and $`a_g`$ from Table 4, we can calculate the total mass of NGC 247 within $`r=11.2`$ kpc. The results are given in Table 4.
Figure 3 shows the profile of the integrated total mass for $`\beta =2/3`$ (full lines). The dotted lines give the variation due to a $`1\sigma `$ error in $`a_g`$. The three points are the values taken from Table 3 of Burlak (1996). At $`r=11.2`$ kpc he finds $`M=3.1\times 10^{10}\,M_\odot `$, where we refer to his dark halo model. We see that, given the uncertainties in the data, the agreement is still reasonable, within a factor of two. ## 6 Discussion and conclusions We determined the mass of the hot emitting gas and the total mass of the dwarf galaxy NGC 247 using ROSAT PSPC data. From the X-ray data we obtain $`M_{hg}=0.7\times 10^8M_\odot `$ for the hot gas mass and $`M=7.5\times 10^{10}M_\odot `$ for the total mass, adopting $`\beta =2/3`$, which seems to be favoured by the measured rotation curve. Contrary to what the results published so far suggest, NGC 247 is not an exceptional object with a hot gas content about two orders of magnitude below the group average. The result obtained in this analysis agrees well with what is found for other Sculptor members. Although the quality of the X-ray data is lower than for, e.g., elliptical galaxies, the mass determination agrees well with the values found by independent methods. The relative amount of hot gas is comparable to what one finds in more massive galaxies (De Paolis et al. 1995). For the luminosity-to-mass ratio as derived from the X-ray data, we obtain $`\mathrm{log}(L_X/M)=28.4`$, which agrees well with the values found for other Sculptor members (Read & Pietsch 1998; Schlegel et al. 1997). The hot emitting gas seems to form a halo around the galactic center which is more concentrated towards the center than the dark halo. In fact, we find a core radius of $`a_g=4.4`$ kpc for the hot gas, whereas Puche & Carignan (1991) favour an extended dark halo with a core radius of $`24.2`$ kpc. Burlak’s dark halo model (Burlak 1996) assumes a core radius of $`a=6.9`$ kpc. The smaller core radius of the hot gas found in the X-ray analysis could indeed indicate a flow of gas towards the center. As one can see in Fig. 1, the brightness profile of NGC 247 in the $`3/4`$ keV and $`1.5`$ keV bands shows the presence of a “hump” about $`15`$ arcmin from the center. Similar observations are known from M 101 and M 83. If MACHOs are low-mass stars or even brown dwarfs, we expect them to be X-ray active (De Paolis et al. 1998), with an X-ray luminosity of about $`10^{27}-10^{28}`$ erg/s, particularly during their early stages (Neuhäuser & Comerón 1998). Using similar arguments as in De Paolis et al. (1998), it can be shown that this “hump” could be explained by the X-ray emission of dark clusters of MACHOs, which could make up to half of the dark matter in NGC 247. Therefore, NGC 247 might offer the unique possibility of observing how a halo of MACHOs actually forms. ###### Acknowledgements. The authors acknowledge helpful discussions with Alexis Finoguenov, Christine Jones, and Steve Snowden. Special thanks go to Claude Carignan for providing the NGC 247 HI map in digital form, and to Steve Snowden for providing the detector on-axis response curves. We would like to thank the referee for his helpful and clarifying comments. This work is partially supported by the Swiss National Science Foundation.
# Initial State Energy Loss Dependence of J/Ψ and Drell-Yan in Relativistic Heavy Ion Collisions

## 1 Introduction

The yield of $`\mathrm{J}/\mathrm{\Psi }`$ particles in relativistic heavy ion collisions is a subject of considerable current experimental and theoretical work. It has been predicted that in the case of a phase transition to a quark-gluon plasma, the yield of $`\mathrm{J}/\mathrm{\Psi }`$ particles is suppressed due to Debye screening. The level of suppression of $`\mathrm{J}/\mathrm{\Psi }`$ yields can be benchmarked in the context of a simple theoretical framework. A Glauber model is employed where each nucleon-nucleon collision is assumed to have an equal probability to produce a $`c\overline{c}`$ pair. This $`c\overline{c}`$ pair may then form a quarkonium state; alternatively, the initially produced $`c\overline{c}`$ pair can undergo inelastic interactions while traversing the nuclear medium, reducing the final $`\mathrm{J}/\mathrm{\Psi }`$ yield. Measurements of $`\mathrm{J}/\mathrm{\Psi }`$ yields from $`pA`$ interactions and $`AA`$ interactions for beams of $`A\le 32`$ are well described in this model by an absorption cross section of 6-8 mb. This cross section is consistent with calculated values if the absorption occurs not on a color singlet $`c\overline{c}`$ pair, but rather on a $`c\overline{c}g`$ color octet state. However, the NA50 experiment has recently measured the $`\mathrm{J}/\mathrm{\Psi }`$ yield in $`PbPb`$ collisions at 158 $`A`$GeV/$`c`$ as a function of the transverse energy produced in the collision; the data appear to be inconsistent with a Glauber-based model including only the 6-8 mb absorption on nucleons. This difference is referred to as “anomalous” $`\mathrm{J}/\mathrm{\Psi }`$ suppression and has been interpreted by some as evidence for the deconfinement transition. However, without invoking deconfinement, other physical processes can be, and have been, added to the above simple model. Once the $`c\overline{c}`$ pair has hadronized into a specific quarkonium state, it may also undergo inelastic interactions in the high density medium dominated by mesons (often referred to as co-movers), thus producing open charm in the form of D mesons. Many studies indicate that these cross sections are quite small relative to the expected dissociation in a deconfined state with free quarks and gluons. Model extensions including interactions with co-movers are discussed elsewhere. In addition, gluon shadowing and enhanced charm production may play a role. It has also been suggested that initial state energy loss may explain the suppression of $`\mathrm{J}/\mathrm{\Psi }`$. In their most recent paper, the NA50 collaboration has claimed that in the ratio of $`\mathrm{J}/\mathrm{\Psi }`$ to Drell-Yan a “clear onset of the anomaly is observed as a function of transverse energy. It excludes models based on hadronic scenarios since only smooth behaviors with monotonic derivatives can be inferred from such calculations.” This statement would imply that further investigation into explanations involving co-movers, initial state energy loss, etc. is not necessary. Although deconfinement may eventually be considered the correct explanation, we feel this conclusion is premature. Several studies have described some subset of the data reasonably well with various hadronic descriptions.
However, detailed studies of hadronic scenarios which compare to all of the available data have yet to be fully carried out, and this must be done before any real conclusions can be drawn. In this letter, we detail a study of initial state energy loss and its impact on both the transverse energy dependence and the transverse momentum spectra of $`\mathrm{J}/\mathrm{\Psi }`$ and Drell-Yan. In a recent report, the inclusion of initial state energy loss in a Glauber model was shown to match the $`\mathrm{J}/\mathrm{\Psi }`$ yield measured by NA50 in $`PbPb`$ minimum bias (i.e. averaged over all impact parameters) collisions; this agreement led to the interpretation that initial state energy loss was the source of the “anomalous” $`\mathrm{J}/\mathrm{\Psi }`$ suppression. Here we extend the comparison to include the centrality dependence of the yields and $`p_t`$ spectra of both $`\mathrm{J}/\mathrm{\Psi }`$ and Drell-Yan. By looking at the details of the entire available data set, we hope to resolve the question of whether initial state energy loss can explain “anomalous” $`\mathrm{J}/\mathrm{\Psi }`$ suppression.

## 2 Calculations

In the absence of absorption and energy loss, all individual $`NN`$ collisions within an $`AA`$ interaction are equally likely to produce a detected $`\mathrm{J}/\mathrm{\Psi }`$. However, due to absorption, those nucleons on the trailing edges of the colliding nuclei are the most likely to produce a $`\mathrm{J}/\mathrm{\Psi }`$ which will survive to be detected. This can be seen clearly in the left-hand panel of Figure 1, where the production position within the colliding nuclei of surviving $`\mathrm{J}/\mathrm{\Psi }`$ is plotted and only the absorption process has been included. As beam nucleons pass through the target nucleus, they lose energy via inelastic interactions, so that collisions between those nucleons on the trailing edges of the nuclei, where the geometry is most favorable for a produced $`\mathrm{J}/\mathrm{\Psi }`$ to evade absorption, have considerably less than the full beam energy. Since $`\mathrm{J}/\mathrm{\Psi }`$ production has a very steep energy dependence, these collisions are the least likely to produce a $`\mathrm{J}/\mathrm{\Psi }`$. Thus, nucleon energy loss will certainly reduce the observed yield of $`\mathrm{J}/\mathrm{\Psi }`$. This can be seen in the right-hand panel of Figure 1, where the production position within the colliding nuclei of surviving $`\mathrm{J}/\mathrm{\Psi }`$ is plotted and both initial state energy loss and absorption have been included. Moreover, the ratio of $`\mathrm{J}/\mathrm{\Psi }`$ to Drell-Yan, which is often used to gauge $`\mathrm{J}/\mathrm{\Psi }`$ suppression, could have a complicated centrality dependence, since Drell-Yan does not suffer from the geometrical predisposition caused by absorption. Finally, energy loss will also affect the $`p_t`$ spectrum of $`\mathrm{J}/\mathrm{\Psi }`$: the trailing-edge collisions which are most affected by energy loss are those which, via the Cronin effect, produce $`\mathrm{J}/\mathrm{\Psi }`$ with the highest mean $`p_t`$. In order to study initial state energy loss, we have constructed a Glauber model of nuclear collisions. We will describe the model briefly here; more details are available elsewhere. Nucleons are distributed using a Woods-Saxon parameterization, and interact with a nucleon-nucleon cross section of 30 mb. As nucleons undergo interactions, they lose energy.
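The geometrical part of such a calculation can be sketched in a few lines. The following is a minimal illustration, not our actual code: the Woods-Saxon radius and diffuseness are textbook Pb values rather than parameters from our calculation, and the longitudinal ordering of collisions is ignored.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pb Woods-Saxon parameters and the 30 mb = 3 fm^2 NN cross section
# quoted above; two nucleons "collide" if their transverse separation
# satisfies dx^2 + dy^2 < sigma_NN / pi.
A, R, A_WS = 208, 6.62, 0.546   # mass number, radius [fm], diffuseness [fm]
D2 = 3.0 / np.pi                # [fm^2]

def sample_nucleus(n=A):
    """Rejection-sample nucleon positions from a Woods-Saxon density."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-2.0 * R, 2.0 * R, size=3)
        if rng.random() < 1.0 / (1.0 + np.exp((np.linalg.norm(p) - R) / A_WS)):
            pts.append(p)
    return np.array(pts)

def collision_counts(b):
    """Collision count per projectile/target nucleon at impact
    parameter b [fm], assuming straight-line trajectories."""
    proj = sample_nucleus() + np.array([b, 0.0, 0.0])
    targ = sample_nucleus()
    n_p = np.zeros(A, dtype=int)
    n_t = np.zeros(A, dtype=int)
    for i in range(A):
        hits = ((proj[i, 0] - targ[:, 0]) ** 2
                + (proj[i, 1] - targ[:, 1]) ** 2) < D2
        n_p[i] = hits.sum()
        n_t += hits
    return n_p, n_t

n_p, n_t = collision_counts(b=6.0)
print("wounded projectile nucleons:", int((n_p > 0).sum()),
      "  mean collisions per nucleon:", round(n_p.mean(), 2))
```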
Various models of energy loss are reasonably consistent with measured proton spectra in $`pA`$ collisions; we have utilized a parameterization where nucleons lose a constant fraction of their momentum in each interaction. In order to match measured data, the momentum fraction lost per interaction would be $`\sim `$40%. However, most of this energy loss occurs via soft interactions, with a time scale of a few fm/$`c`$. At SPS energies, the colliding nuclei cross in $`\sim `$0.1 fm/$`c`$; thus, only a fraction of the time-integrated total energy loss is applicable to hard interactions. Our approach is to treat the applicable fraction of the total energy loss as a variable parameter. The values we have chosen are 5%, 10% and 15% momentum loss per collision, to be compared with the 40% loss realized as the time between collisions approaches $`\infty `$. By counting the number of prior collisions for each nucleon, a center-of-mass energy can be calculated for each $`NN`$ interaction. This energy is used to calculate the relative probability that a $`\mathrm{J}/\mathrm{\Psi }`$ or Drell-Yan pair will be produced, using the Schuler parameterization for the $`\mathrm{J}/\mathrm{\Psi }`$ energy dependence and “tau scaling” for the Drell-Yan energy dependence. Produced $`\mathrm{J}/\mathrm{\Psi }`$ are taken to be at rest in the $`NN`$ center-of-mass frame, such that the survival probability is a function of the path length through nuclear material which the $`\mathrm{J}/\mathrm{\Psi }`$ must traverse, the nuclear density and the breakup cross section. We have utilized a breakup cross section of 7.1 mb, which is $`\sim `$15% higher than the value of 6.2 $`\pm `$ 0.7 mb reported by NA50. The NA50 value is calculated by fitting the $`\mathrm{J}/\mathrm{\Psi }`$ to Drell-Yan ratio as an exponential function of $`L`$, the mean path length through the nuclear medium, for various centrality bins in $`pA`$ and $`AA`$ collisions. Due to absorption, not all possible path lengths contribute equally to the actual mean path length for surviving $`\mathrm{J}/\mathrm{\Psi }`$ in a given centrality bin; thus, as we have shown elsewhere, a simple linear average over path lengths systematically underestimates the absorption cross section. Shown in Figures 2 and 3 are the calculated yields of $`\mathrm{J}/\mathrm{\Psi }`$ and Drell-Yan, respectively, from $`PbPb`$ collisions, plotted as a function of transverse energy $`E_t`$ and compared to the NA50 measured values. We have simulated the NA50 $`E_t`$ bins, assuming $`E_t`$ scales as the number of wounded nucleons and smearing the calculated values by the NA50 resolution of 94%/$`\sqrt{E_t}`$. For each figure, the yield without energy loss is shown in the leftmost panel, as well as that for our nominal choices of 5%, 10% and 15% momentum loss per $`NN`$ collision. Our model does not predict absolute yields, so the normalization for each curve has been chosen so as to best match the NA50 data in the lowest $`E_t`$ bins. Clearly, the $`\mathrm{J}/\mathrm{\Psi }`$ yield deviates significantly from the prediction with no energy loss; although not shown here, the discrepancy is significant even if one increases the $`\mathrm{J}/\mathrm{\Psi }`$ breakup cross section to as high as 9 mb. This plot shows, in the simplest way, the additional suppression seen in the $`PbPb`$ data as compared to expectations based on lighter systems.
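The two ingredients just described, the degradation of the collision energy with the number of prior collisions and the smearing by the $`E_t`$ resolution, can be sketched as follows. The steep threshold form $`(1m_\psi /\sqrt{s})^n`$ is only a schematic stand-in for the Schuler parameterization (the exponent is an assumption of this sketch), and the scaling $`s(1f)^{N_{coll}}`$ is a rough proxy for the momentum-loss prescription, using $`s2m_Np_{lab}`$.

```python
import numpy as np

rng = np.random.default_rng(0)

def jpsi_weight(n_prior, floss, sqrt_s0=17.3, m_psi=3.1, n=12):
    """Relative J/Psi production weight for an NN collision whose two
    partners suffered n_prior prior collisions in total. Since
    s ~ 2*m_N*p_lab, degrading the beam momentum by (1 - floss) per
    prior collision degrades s by roughly the same factor; the
    threshold form and exponent n are schematic assumptions."""
    s = sqrt_s0**2 * (1.0 - floss) ** n_prior
    return max(0.0, 1.0 - m_psi / np.sqrt(s)) ** n

def smear_et(et_true, res=0.94):
    """Smear E_t [GeV] with the NA50 resolution 94%/sqrt(E_t),
    i.e. sigma(E_t) = 0.94 * sqrt(E_t)."""
    return et_true + rng.normal(0.0, res * np.sqrt(et_true))

for f in (0.00, 0.05, 0.10, 0.15):
    w = [round(jpsi_weight(n, f), 4) for n in (0, 4, 8)]
    print(f"floss={f:.2f}: weights for 0/4/8 prior collisions = {w}")
print("smeared E_t for true E_t = 100 GeV:", round(smear_et(100.0), 1), "GeV")
```

Even this toy version shows the key feature exploited below: for 15% momentum loss per collision, much-rescattered (trailing-edge) collisions are suppressed by more than an order of magnitude relative to first collisions.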
In the following panels of Figure 2, it can be seen that as energy loss is invoked, the prediction comes closer to the data, until, for the maximum value of energy loss we consider, the prediction matches the data relatively well. However, the scenario for Drell-Yan is considerably different, as shown in Figure 3. The prediction for no energy loss matches the data very well, while with just 5% momentum loss per collision, the prediction deviates significantly from most of the data points. For the maximum value of energy loss, which is necessary to match the $`\mathrm{J}/\mathrm{\Psi }`$ yields, the prediction does not come close to matching the data. Thus, there is an inconsistency: the model can be forced to match the $`\mathrm{J}/\mathrm{\Psi }`$ data by invoking a fairly large amount of energy loss, but this same value of energy loss causes the model to substantially underestimate the Drell-Yan yields. However, it is possible that this inconsistency could be explained away if the energy loss of gluons, which may dominate $`\mathrm{J}/\mathrm{\Psi }`$ production, is different from that of quarks and antiquarks, the mutual annihilation of which leads to Drell-Yan production; whether this is the case remains an open question. It should be noted that precision studies of how energy loss affects Drell-Yan in $`pA`$ collisions are also being done. Another way to consider the effects of energy loss is to look at the $`p_t`$ spectra of $`\mathrm{J}/\mathrm{\Psi }`$. The $`\langle p_t^2\rangle `$ of $`\mathrm{J}/\mathrm{\Psi }`$ from $`pA`$ collisions has been observed to be larger than that from $`pp`$ collisions. This increased $`p_t`$ is understood to come from an increase in the intrinsic transverse momentum of the partons in the colliding nucleons as a result of prior interactions. This mechanism, referred to as the “Cronin effect”, has been phenomenologically described by $$\langle p_t^2\rangle _N=\langle p_t^2\rangle _{pp}+N\mathrm{\Delta }p_t^2,$$ (1) in which the $`\langle p_t^2\rangle `$ of $`\mathrm{J}/\mathrm{\Psi }`$ produced in a nucleon-nucleon collision where the colliding partners had a total of $`N`$ prior interactions is given by the sum of the $`\langle p_t^2\rangle `$ value from $`pp`$ collisions plus $`N`$ times $`\mathrm{\Delta }p_t^2`$, the change in $`\langle p_t^2\rangle `$ per collision. Thus, $`\mathrm{J}/\mathrm{\Psi }`$ with the highest mean $`p_t`$ come from the latest collisions, and are the most sensitive to effects coming from nucleon energy loss. For $`\mathrm{J}/\mathrm{\Psi }`$, a value of $`\langle p_t^2\rangle _{pp}`$ = 1.23 $`\pm `$ 0.05 GeV$`^2`$ was measured by NA3 at a beam momentum of 200 $`A`$GeV/$`c`$; however, before we can use this value, we must correct for the beam energy, since the SPS $`Pb`$ beams are at 160 $`A`$GeV/$`c`$. The measured $`\langle p_t^2\rangle `$ for Drell-Yan from pion and proton induced reactions on nuclei has been shown to scale linearly with $`s`$, the square of the center-of-mass energy, with similar slopes for the two incident particle species. Measurements of $`\langle p_t^2\rangle `$ for $`\mathrm{J}/\mathrm{\Psi }`$ from proton induced reactions are scarce, but if one combines data from both pion and proton induced reactions, a linear scaling with $`s`$ fits the data reasonably well. Using this slope, we estimate the value of $`\langle p_t^2\rangle _{pp}`$ for interactions at 160 $`A`$GeV/$`c`$ to be 1.13 GeV$`^2`$.
A recent study, combining $`\mathrm{J}/\mathrm{\Psi }`$ data from $`pA`$ and $`AA`$ interactions at 200 $`A`$GeV/$`c`$ and using the measured value of $`\langle p_t^2\rangle _{pp}`$ given above, performed a fit to determine a value for $`\mathrm{\Delta }p_t^2`$ of 0.125 GeV$`^2`$. Using these parameters, we have implemented the Cronin effect in our model. Transverse momentum distributions are taken to follow the usual form $`\frac{dN}{dm_t}\propto m_t\mathrm{exp}(\alpha m_t)`$. The prescription for the Cronin effect characterizes the transverse momentum after $`N`$ collisions in terms of $`\langle p_t^2\rangle _N`$, which is related to a slope parameter $`\alpha _N`$ by $$\langle p_t^2\rangle _N=\frac{2}{m\alpha _N+1}\left[\frac{3}{\alpha _N^2}+\frac{3m}{\alpha _N}+m^2\right]$$ (2) for a particle of mass $`m`$. In practice, we wish to convert $`\langle p_t^2\rangle _N`$ to $`\alpha _N`$; over the $`\langle p_t^2\rangle `$ range of interest and for the $`\mathrm{J}/\mathrm{\Psi }`$ mass, the inverse of Equation 2 is well approximated by a power law, $`\alpha _N6.68\times \langle p_t^2\rangle _N^{0.855}`$. In the course of the Glauber calculation, a $`p_t`$ value is chosen from the appropriate distribution (based on the number of prior $`NN`$ collisions for the interacting nucleons) for each produced $`\mathrm{J}/\mathrm{\Psi }`$. For those $`\mathrm{J}/\mathrm{\Psi }`$ which evade absorption, a running value for $`\langle p_t^2\rangle `$ is tabulated as a function of the total $`E_t`$ of the collision. The prediction for $`\mathrm{J}/\mathrm{\Psi }`$ $`\langle p_t^2\rangle `$ from $`PbPb`$ collisions is compared to the NA50 data in Figure 4. The prediction is shown as a band of values, corresponding to the uncertainty in the scaled value of $`\langle p_t^2\rangle _{pp}`$. The prediction with no energy loss matches the NA50 data quite well; this result is at odds with other recent studies, in which it has been claimed that a plasma was required in order to match the NA50 data. However, these calculations did not include a beam energy rescaling of the value for $`\langle p_t^2\rangle _{pp}`$. There is some uncertainty in the scaling we have implemented, so that the question of matching the NA50 data is still open; however, given the normalization uncertainty involved, it seems premature to rule out a normal hadronic description of the NA50 $`\langle p_t^2\rangle `$ data. It is clear, however, that the inclusion of energy loss causes the prediction to deviate severely from the data. For a value of 15% momentum loss per collision, which gave the best match to the $`\mathrm{J}/\mathrm{\Psi }`$ yields, the prediction for $`\langle p_t^2\rangle `$ is in severe disagreement with the data. Thus, again we have an inconsistency; in this case, a single value of energy loss cannot describe both the $`\mathrm{J}/\mathrm{\Psi }`$ yields and the $`\langle p_t^2\rangle `$ data. Since this discrepancy is between two aspects of the energy dependence of $`\mathrm{J}/\mathrm{\Psi }`$, it is not so easily dismissed.
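For concreteness, the conversion between $`\langle p_t^2\rangle _N`$ and the slope parameter can be checked numerically; the sketch below inverts Eq. (2) with a root finder and compares with the power-law approximation quoted above, using the rescaled $`\langle p_t^2\rangle _{pp}=1.13`$ GeV$`^2`$ and $`\mathrm{\Delta }p_t^2=0.125`$ GeV$`^2`$.

```python
import numpy as np
from scipy.optimize import brentq

M_JPSI = 3.097  # GeV

def mean_pt2(alpha, m=M_JPSI):
    """<p_t^2> for dN/dm_t ~ m_t exp(-alpha m_t), i.e. Eq. (2)."""
    return 2.0 / (m * alpha + 1.0) * (3.0 / alpha**2 + 3.0 * m / alpha + m**2)

def alpha_exact(pt2, m=M_JPSI):
    """Invert Eq. (2) numerically for the slope parameter alpha_N."""
    return brentq(lambda a: mean_pt2(a, m) - pt2, 0.5, 50.0)

def alpha_powerlaw(pt2):
    """Power-law approximation quoted in the text."""
    return 6.68 * pt2 ** (-0.855)

# Cronin evolution <p_t^2>_N = <p_t^2>_pp + N * dpt2:
pt2_pp, dpt2 = 1.13, 0.125
for N in (0, 4, 8, 12):
    pt2 = pt2_pp + N * dpt2
    print(f"N={N:2d}: <pt2>={pt2:.2f} GeV^2  "
          f"alpha_exact={alpha_exact(pt2):.2f}  "
          f"alpha_approx={alpha_powerlaw(pt2):.2f}")
```

Over this range the exact inversion and the power law agree to within a few percent, which is why the approximation is adequate in practice.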
## 3 Conclusions

In summary, we have performed an evaluation of the importance of initial state energy loss with respect to $`\mathrm{J}/\mathrm{\Psi }`$ suppression. Within the uncertainty in normalization, the $`\mathrm{J}/\mathrm{\Psi }`$ $`\langle p_t^2\rangle `$ spectrum seems consistent with a normal hadronic scenario. The addition of energy loss can cause the model prediction to fit the $`\mathrm{J}/\mathrm{\Psi }`$ yields; however, a single value of energy loss per collision cannot simultaneously match both the $`\mathrm{J}/\mathrm{\Psi }`$ and Drell-Yan yields, nor can it simultaneously match both the $`\mathrm{J}/\mathrm{\Psi }`$ yields and the $`\mathrm{J}/\mathrm{\Psi }`$ $`\langle p_t^2\rangle `$ data. This result suggests that, contrary to the proposal made elsewhere based on minimum bias $`\mathrm{J}/\mathrm{\Psi }`$ yields only, “anomalous” $`\mathrm{J}/\mathrm{\Psi }`$ suppression cannot be explained by initial state energy loss. Clearly the simplest hadronic model does not match the $`PbPb`$ $`\mathrm{J}/\mathrm{\Psi }`$ yields, so some other mechanism must be causing the increased suppression. However, before one can either claim or rule out any source of this effect, whether “normal” hadronic or otherwise, a systematic comparison to all of the data, as we have attempted to do here, must be performed.

## 4 Acknowledgements

This work has been supported by a grant from the U.S. Department of Energy to Columbia University (DE-FG02-86ER40281) and the contract to Los Alamos National Laboratory (W-7405-ENG-36).
Recent data on atmospheric, solar and accelerator neutrino experiments have indicated that neutrinos have a finite mass and exhibit flavor oscillation phenomena. Although the data have yet to be confirmed, they point to the possibility that the three known active neutrinos are insufficient to account for all observations. The mass-squared differences required to fit the data are $`10^{11}\mathrm{eV}^2\lesssim \delta m_s^2\lesssim 10^5\mathrm{eV}^2`$, $`\delta m_a^2\mathrm{10}^3\mathrm{eV}^2`$ and $`0.2\mathrm{eV}^2\lesssim \delta m_{\mathrm{LSND}}^2\lesssim 2\mathrm{eV}^2`$ with the exponents negative, i.e. $`10^{-11}\mathrm{eV}^2\lesssim \delta m_s^2\lesssim 10^{-5}\mathrm{eV}^2`$, $`\delta m_a^2\sim 10^{-3}\mathrm{eV}^2`$ and $`0.2\mathrm{eV}^2\lesssim \delta m_{\mathrm{LSND}}^2\lesssim 2\mathrm{eV}^2`$, where the subscripts $`s`$ and $`a`$ refer to solar and atmospheric oscillations, respectively. At least one light standard model singlet fermion is required to reconcile the data with the standard two-by-two neutrino mixing explanations. In most studies of a SM singlet neutrino one assumes it has no interactions other than gravitation at low energies, and that its role is to supply a mass for the active neutrinos. An example of a singlet neutrino is a right-handed neutrino of the Dirac type. Another example comes from the SO(10) GUT model, where the heavy SM singlet is a Majorana neutrino. In this model, the heavy neutrino has a mass of $`10^{12}`$-$`10^{14}`$ GeV and gives the active neutrinos a small mass via the seesaw mechanism. Its large mass prevents it from having direct experimental consequences at currently available accelerator energies. Recently there has been renewed interest in models of radiatively generated neutrino masses with new physics occurring at, or not much above, the weak scale. An example is the Zee model, which generates neutrino mass with the same mechanism that produces lepton number violation. The crucial ingredient is a lepton flavor changing SU(2) singlet scalar with U(1) hypercharge Y=2. It is straightforward to include a light SM singlet neutrino in the model, so as to accommodate all of the anomalous neutrino data. We observe that this light singlet neutrino now has an interaction with a strength determined by the mass of the Zee scalar and its Yukawa couplings to the SM leptons. A detailed phenomenological study of the current limits on these parameters has been given elsewhere. This paper is concerned with the effects of SM singlet neutrinos that have interactions as weak as or weaker than the normal weak interactions. These interactions can arise from a spin 1 particle exchange, as in extended gauge models, or a spin 0 boson exchange, such as the charged scalar in the Zee model. The interactions may be weak due to small couplings and/or a large mass of the mediating particle. One advantage of the Zee model is that it has relatively few parameters, most of which are constrained by terrestrial experiments. This makes it an economical model for the study of the effects of singlet interacting neutrinos (SINs) on astrophysical phenomena. Since there is now a large body of solar neutrino data with more to come, we look into the effects of SINs on some of the proposed solutions to the solar neutrino problem. For the purpose of this paper we can ignore the mixing of the Zee scalar with the requisite two Higgs doublets without loss of generality. SINs can affect the study of the neutrino fluxes from the sun in two ways: first, they can affect matter-enhanced flavor transformation; second, they alter the neutrino detector cross sections, since they now interact with electrons, albeit very weakly.
The interactions of the singlet will come into play in any scenario which involves the Mikheyev-Smirnov-Wolfenstein (MSW) mechanism of neutrino transformation between an active neutrino and a singlet. We illustrate both effects with the small angle solution to the solar neutrino problem. If electron neutrinos transform to singlet neutrinos in the sun, the singlet neutrino coupling in the Zee model is most tightly constrained by the neutrino-electron scattering data at SuperKamiokande. We also show that a model which produces neutrino mass, such as the Zee model, may provide additional interactions for the active neutrinos, which will in principle enter in active-active transformation scenarios and detector cross sections. As we will demonstrate, however, these parameters are constrained by terrestrial experiments to have a much smaller effect on neutrino flavor transformation. We begin by looking at the general equation governing the transformation of electron neutrinos into another type of neutrino $`\nu _x`$ in a matter environment, where $`x=\mu ,\tau `$ or $`s`$, with $`s`$ the singlet neutrino. This is given by $$i\hbar \frac{\partial }{\partial r}\left[\begin{array}{c}\mathrm{\Psi }_e(r)\\ \mathrm{\Psi }_x(r)\end{array}\right]=\left[\begin{array}{cc}\phi (r)& \sqrt{\mathrm{\Lambda }}+V_{ex}(r)\\ \sqrt{\mathrm{\Lambda }}+V_{xe}(r)& \phi (r)\end{array}\right]\left[\begin{array}{c}\mathrm{\Psi }_e(r)\\ \mathrm{\Psi }_x(r)\end{array}\right],$$ (1) where the lower-right diagonal entry carries a minus sign, $`\phi (r)`$, and $$\phi (r)=\frac{1}{4E}\left(\left(V_e(r)V_x(r)\right)E\delta m^2\mathrm{cos}2\theta _v\right)$$ (2) with a minus sign between the two terms, i.e. $`\phi (r)=\frac{1}{4E}[(V_e(r)-V_x(r))E-\delta m^2\mathrm{cos}2\theta _v]`$, and $$V_e(r)\frac{}{}\pm 2\sqrt{2}G_F\left[N_e^{-}(r)-N_e^{+}(r)-\frac{N_n(r)}{2}\right]$$ (3) (the relation is a definition, $`V_e(r)\equiv \pm 2\sqrt{2}G_F[N_e^{-}(r)-N_e^{+}(r)-N_n(r)/2]`$). In these equations $$\sqrt{\mathrm{\Lambda }}=\frac{\delta m^2}{4E}\mathrm{sin}2\theta _v,$$ (4) $`\delta m^2\equiv m_2^2-m_1^2`$ is the vacuum mass-squared splitting, $`\theta _v`$ is the vacuum mixing angle, $`G_F`$ is the Fermi constant, and $`N_e^{-}(r)`$, $`N_e^{+}(r)`$, and $`N_n(r)`$ are the number densities of electrons, positrons, and neutrons, respectively, in the medium. In the formulas given here, the upper signs (in this case plus) correspond to the mixing of neutrinos, while the lower signs (in this case minus) correspond to the mixing of antineutrinos. The potential $`\phi _x(r)`$ will have a standard model value and an additional term due to the extra interactions introduced by the Zee model. Other models which produce neutrino masses, such as R-parity violating supersymmetric models, may have similar interactions. Focusing on the Zee model, the charged scalar $`h^{-}`$ is constrained by experiments at LEP to have a mass $`M_h>100`$ GeV. This in turn induces the following four-fermion effective Lagrangian: $$\mathcal{L}=\frac{|f_{12}|^2}{2M_h^2}\overline{\nu }_\mu \gamma _\mu \nu _{\mu L}\overline{e}\gamma ^\mu e_L,$$ (5) which governs low energy $`\nu _\mu e`$ scattering. Note that this term has the opposite sign to the SM charged current interaction and is a prediction of the model. This term has to be added to the standard model muon neutrino MSW potential as follows: $$V_\mu (r)\equiv -2\sqrt{2}G_F\left[\delta \left(N_e^{-}(r)-N_{e^+}(r)\right)+\frac{N_n(r)}{2}\right]$$ (6) where $`\delta =\sqrt{2}|f_{12}|^2/(8M_h^2G_F)`$. The coupling constant $`|f_{12}|^2`$ is constrained by measurements of the muon lifetime to be $`|f_{12}|^2/(M_h/100\mathrm{GeV})^2<0.0015`$. Therefore, $`\delta <0.002`$. Similarly, $`\nu _e\rightarrow \nu _\tau `$ oscillations will be influenced by the Zee model via an additional $`\nu _\tau e`$ scattering contribution to the matter potential.
The potential takes the form of Eq. (6) with the replacement of $`V_\mu `$ by $`V_\tau `$ and $`\delta =\sqrt{2}|f_{13}|^2/(8M_h^2G_F)`$. The limit on $`|f_{13}|^2`$ is derived from LEP and SLC measurements of the leptonic vector and axial-vector couplings in Z decay. For a scale mass of $`M_h=800`$ GeV, the limit is $`\delta <0.1`$. Clearly the effect can be much larger for $`\nu _e\rightarrow \nu _\tau `$ oscillation. This upper limit gives rise to a maximum change in the MSW potential of about 10%. The extension of the Zee model which includes a singlet neutrino produces neutrino-electron scattering terms which, after a Fierz transformation, have the form $$\mathcal{L}=\frac{|g_1|^2}{2M_h^2}\overline{\nu }_R\gamma ^\mu \nu _R\overline{e}_R\gamma _\mu e_R.$$ (7) Here, $`g_1`$ is the Zee model coupling between the singlet neutrino, the right-handed electron and the scalar. This produces a singlet neutrino MSW potential $$V_s(r)=-2\sqrt{2}G_F\beta \left(N_e^{-}(r)-N_{e^+}(r)\right)$$ (8) where $$\beta =\frac{\sqrt{2}|g_1|^2}{8M_h^2G_F}.$$ (9) The best terrestrial limit on $`\beta `$ comes from the leptonic right-handed coupling to the Z. The singlet neutrino does not couple directly to the Z boson; it only makes a contribution to the decay through a correction at one-loop order. The limit on $`|g_1|^2/2M_h^2`$ is about $`2\times 10^{-4}\mathrm{GeV}^{-2}`$, therefore $`\beta <6`$. Other than this one-loop effect we found no direct experimental bound, since this singlet neutrino can only enter weak interaction processes via leptonic mixing, and no usable constraint is available. This in principle allows the singlet-electron interaction to be larger than the weak interaction. However, as we shall see below, astrophysical considerations can provide tighter constraints. It is a characteristic of the Zee model that there are no interactions which convert electron neutrinos directly to muon, tau or singlet neutrinos by way of electron scattering mediated by the charged scalar. Therefore, in this model $`V_{ex}=V_{xe}\equiv 0`$. However, there are neutrino-electron scattering terms in the Zee model which convert muon (tau) neutrinos to singlet neutrinos and vice versa. These are governed by the Lagrangian $$\mathcal{L}=\frac{f_{12(13)}g_1^{*}}{2M_h^2}\left(\overline{\nu }_R\nu _{\mu (\tau )}\overline{e}_Re_L-\frac{1}{4}\overline{\nu }_R\sigma ^{\mu \nu }\nu _{\mu (\tau )}\overline{e}_R\sigma _{\mu \nu }e_L\right)+\mathrm{h.c.}$$ (10) In the forward scattering direction, the scalar term is proportional to the neutrino mass and can therefore be neglected. The tensor term is proportional to the electron spin and integrates to zero for unpolarized electrons. Therefore, for most situations, $`V_{\mu s(\tau s)}=V_{s\mu (s\tau )}=0`$. Returning to the case of the $`\nu _e\rightarrow \nu _s`$ transition, the combined active-singlet neutrino transformation potentials can be cast in the form $$V_e-V_s=\pm \frac{3G_FN_N(r)}{\sqrt{2}}\left[\left(1+\frac{2}{3}\beta \right)Y_e-\frac{1}{3}\right]$$ (11) where $`N_N`$ is the total number density of nucleons. The electron fraction is defined as $$Y_e\equiv \frac{N_e^{-}(r)-N_{e^+}(r)}{N_N}$$ (12) A non-zero $`\beta `$ will have the effect of increasing the potential if the electron fraction is greater than 1/3. In the sun, for example, the electron fraction ranges from a value of about two thirds at the center to more than 0.85 in the outer layers. The main effect of the new potential is to change the position at which a given neutrino undergoes the MSW resonance.
The position of the resonance is determined by the condition $$V_e-V_s=\delta m^2\mathrm{cos}2\theta _v.$$ (13) Therefore, for given mixing parameters and a constant electron fraction, increasing $`\beta `$ causes the resonance position for a neutrino of energy $`E`$ to shift to lower density. This scenario may be applied to several phenomena, such as solar neutrinos and supernova neutrinos, for the $`\nu _e\rightarrow \nu _s`$ or $`\nu _e\rightarrow \nu _{\mu ,\tau }`$ situation. For the up-down asymmetry in the atmospheric neutrino problem, the oscillations between muon neutrinos and either tau or singlet neutrinos may be analyzed in a similar manner, although we note that in general there will be off-diagonal terms for $`\nu _\mu `$-$`\nu _\tau `$ mixing. Taking again the example of the sun, we see from Eq. (13) that all neutrinos will pass through the resonance condition at a position that is further from the center of the sun as $`\beta `$ increases. For fixed $`\delta m^2`$ and $`\mathrm{sin}^22\theta _v`$, low energy neutrinos that did not encounter resonances in the sun for $`\beta =0`$ will now do so if $`\beta >0`$. We illustrate this point in Fig. 1, where the survival probability for solar neutrinos is plotted for two values of the parameter $`\beta `$. This figure was produced by numerically integrating Eq. (1) for the singlet neutrino. For $`\beta =0`$ the solution reduces to the small angle sterile neutrino oscillation solution to the solar neutrino problem, as given, for example, in earlier work. It can be seen that increasing $`\beta `$ will cause a decrease in the number of low energy electron neutrinos coming from the sun. In contrast, the nonzero $`\beta `$ has little effect on the high energy neutrinos. The effective weak potential scale height $$L_V=\left|\frac{d\mathrm{ln}(V_e-V_s)}{dr}\right|^{-1}$$ (14) which determines the survival probability at the resonance position remains fairly constant with small changes in $`\beta `$. In fact, for a fixed $`Y_e`$, an exponential density profile and a given neutrino energy, it can be shown that the weak potential scale height remains constant regardless of the value of $`\beta `$. Plots similar to Figure 1 can be drawn for $`\nu _e\rightarrow \nu _{\mu ,\tau }`$ oscillations, although a nonzero $`\delta `$ will have a smaller impact on the survival probability, as the constraints on $`\delta `$ are tighter.
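A minimal numerical illustration of this resonance shift is the following sketch. It uses the usual two-flavor resonance condition $`\delta m^2\mathrm{cos}2\theta _v/(2E)=V_eV_s`$ (the explicit $`2E`$ follows the standard convention) together with the structure of Eq. (11); the exponential density profile, the constant $`Y_e=0.7`$ and the mixing parameters are illustrative choices, not the values used in our figures.

```python
import numpy as np
from scipy.optimize import brentq

SQRT2_GF = 7.63e-14   # sqrt(2)*G_F in eV per (N_A/cm^3) of number density
R_SUN = 6.96e10       # cm

def n_nucleon(r):
    """Nucleon density [N_A/cm^3], numerically the mass density in
    g/cm^3; exponential approximation to the solar profile."""
    rho_c, r0 = 150.0, R_SUN / 10.54
    return rho_c * np.exp(-r / r0)

def delta_V(r, beta, Ye=0.7):
    """V_e - V_s following the structure of Eq. (11), in eV."""
    return 1.5 * SQRT2_GF * n_nucleon(r) * ((1 + 2 * beta / 3) * Ye - 1 / 3)

def resonance_radius(E_MeV, dm2=5e-6, cos2t=0.99, beta=0.0):
    """Radius [R_sun] where the resonance condition is met."""
    target = dm2 * cos2t / (2 * E_MeV * 1e6)   # eV
    return brentq(lambda r: delta_V(r, beta) - target,
                  1e-4 * R_SUN, R_SUN) / R_SUN

for beta in (0.0, 0.3, 0.5):
    print(f"beta={beta:.1f}: r_res(5 MeV) = "
          f"{resonance_radius(5.0, beta=beta):.3f} R_sun")
```

As expected from Eq. (13), increasing $`\beta `$ moves the resonance outward, to lower density.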
It is also important to take into account the effect of a non-zero $`\beta `$ in neutrino detectors. In the Zee model, the neutrinos have additional interactions only with other leptons and not with the quarks, owing to the weak charge of the scalar. Hence, the radiochemical solar neutrino experiments such as SAGE and GALLEX will not be impacted by the additional interactions of the singlet neutrino. However, they will register the low energy neutrino flux, which depends on $`\beta `$ as explained before. On the other hand, an experiment which detects neutrino-electron scattering, such as SuperKamiokande, will have some portion of its signal coming from singlet neutrino-electron scattering if singlet neutrinos are present. The cross section for singlet neutrino-electron scattering at first order is given by $$\frac{d\sigma }{dT}=\frac{G_F^2m_e}{2\pi }4\beta ^2,$$ (15) where T is the electron recoil energy. In comparison, the largest contribution to the standard model $`\nu _ee`$ scattering cross section is approximately given by $$\frac{d\sigma }{dT}=\frac{G_F^2m_e}{2\pi }\left[\left(1+2\mathrm{sin}^2\theta _W\right)^2+(2\mathrm{sin}^2\theta _W)^2\left(1-\frac{T}{E}\right)^2+𝒪(m_e/E)\right].$$ (16) The theoretical rate per electron recoil energy can be calculated by folding the survival probability for the electron neutrinos and the oscillation probability for sterile neutrinos with the cross sections and the fluxes of neutrinos. The detector rate may be estimated by folding the theoretical rate with an appropriate energy resolution function. If $`\nu _e\rightarrow \nu _s`$ transformation is the solution to the solar neutrino problem, then it can be readily seen that a constraint on $`\beta `$ comes from the overall number of events in SuperKamiokande. Various combinations of mixing parameters and values of $`\beta `$ will produce different total count rates. If any significant mixing of electron neutrinos and singlet neutrinos takes place in the sun, and the flux predicted by the standard solar model is correct, then $`\beta `$ must be smaller than 1/2. This limit is much stronger than the best limit from accelerator experiments of $`\beta <6`$, and is derived from the extreme situation where all electron neutrinos above 5 MeV are converted to singlet neutrinos. Figure 2 plots the ratio of rates with to without matter-enhanced flavor transformation, using the shape of the $`{}_{}{}^{8}\mathrm{B}`$ neutrino spectrum. Several curves are plotted, representing several mixing parameters and values of $`\beta `$. It is seen that increasing the value of $`\beta `$ causes a flattening of the recoil spectrum curve. It does not account for the upturn in event rate at high energy observed in the SuperKamiokande data. Correctly reproducing the overall rates in all of the solar neutrino experiments with a singlet neutrino-electron scattering interaction requires an adjustment of the usual sterile neutrino MSW mixing parameters. For example, if $`\beta =0.3`$, then $`\delta m^2`$ must be increased by $`30\%`$ (see Eq. (13)) in order to avoid reducing the fluxes of pp neutrinos. The mixing angle must be adjusted to take into account both the effect of the change in $`\delta m^2`$ on the survival probabilities of the $`{}_{}{}^{8}\mathrm{B}`$ neutrinos and the effect of the nonzero singlet neutrino-electron scattering cross section in Kamiokande and SuperKamiokande. For $`\beta =0.3`$, $`\mathrm{sin}^22\theta _v`$ must be increased by $`7\%`$. The survival probability and expected electron recoil spectrum for these parameters are shown by the dot-dashed lines in Figures 1 and 2. We turn now to the case of oscillations between $`\nu _\tau `$ and $`\nu _e`$. The $`\nu _\tau e`$ scattering cross section can increase by a maximum of a factor of 2, depending on the neutrino and electron energy, since the Zee model amplitudes and the standard model amplitudes add coherently. The scattering cross section in this case takes the form $$\frac{d\sigma }{dT}=\frac{G_F^2m_e}{2\pi }\left[\left(1-2\mathrm{sin}^2\theta _W+2\delta \right)^2+(2\mathrm{sin}^2\theta _W)^2\left(1-\frac{T}{E}\right)^2+𝒪(m_e/E)\right].$$ (17) For $`\delta =0`$, this reduces to the standard model neutral current cross section. We illustrate the effect for matter-enhanced flavor transformation with parameters $`\delta m^2=5.4\times 10^{-6}\mathrm{eV}^2`$ and $`\mathrm{sin}^22\theta _v=6.3\times 10^{-2}`$.
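The relative detector response to the different species follows directly from Eqs. (15)-(17); the short sketch below evaluates them in common units of $`G_F^2m_e/(2\pi )`$, dropping the $`𝒪(m_e/E)`$ terms. The recoil and neutrino energies chosen are arbitrary illustrative values.

```python
SIN2TW = 0.231  # sin^2(theta_W)

def dsigma_dT(kind, T, E, beta=0.0, delta=0.0):
    """Differential nu-e cross sections of Eqs. (15)-(17), in units of
    G_F^2 m_e / (2 pi); T = electron recoil energy, E = neutrino energy."""
    y = (1.0 - T / E) ** 2
    if kind == "nu_e":        # Eq. (16): charged + neutral current
        return (1 + 2 * SIN2TW) ** 2 + (2 * SIN2TW) ** 2 * y
    if kind == "nu_tau":      # Eq. (17): neutral current + Zee term
        return (1 - 2 * SIN2TW + 2 * delta) ** 2 + (2 * SIN2TW) ** 2 * y
    if kind == "nu_s":        # Eq. (15): pure Zee interaction (flat in T)
        return 4.0 * beta ** 2
    raise ValueError(kind)

T, E = 7.0, 10.0  # MeV, illustrative
se = dsigma_dT("nu_e", T, E)
print("nu_tau/nu_e:", round(dsigma_dT("nu_tau", T, E, delta=0.1) / se, 3))
print("nu_s/nu_e  :", round(dsigma_dT("nu_s", T, E, beta=0.3) / se, 3))
```

For $`\beta =0.3`$ the singlet contributes of order 15-20% of the $`\nu _e`$ response, which is why the recoil spectrum flattens as $`\beta `$ increases.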
In order to reproduce the observations in solar neutrino experiments with nonzero $`\delta `$, the change in the MSW potential will force a maximum increase in $`\delta m^2`$ of 10% above the $`\delta =0`$ solution. For this change in $`\delta m^2`$, the mixing parameter $`\mathrm{sin}^22\theta _v`$ retains approximately its original value, taking into account both the change in $`\delta m^2`$ and the increase in scattering at the detector. For comparison we consider the case of vacuum oscillations. There is no change in the survival probability of electron neutrinos due to the singlet interactions. However, the electron recoil spectrum will be affected. The singlet neutrino-electron scattering gives a signal similar to the neutral current scattering, although the size of the effect depends on the unknown magnitude of $`\beta `$. Figure 3 shows the electron recoil spectrum for a vacuum solution of $`\delta m^2=6.6\times 10^{-11}\mathrm{eV}^2`$ and $`\mathrm{sin}^22\theta _v=0.9`$. The active neutrino solution is very similar to the singlet neutrino solution with $`\beta =0.3`$. Therefore, in the vacuum case, as in the small angle case, it may be difficult to differentiate between singlet neutrino oscillations and active neutrino oscillations just by detecting neutrino-electron scattering. On the other hand, SNO may be able to distinguish the active-singlet solution from both the active-active oscillation and the active-sterile oscillation through the combination of the three reactions: (1) $`\nu _e+d\rightarrow p+p+e^{-}`$, (2) $`\nu _x+d\rightarrow p+n+\nu _x`$ and neutrino-electron scattering (3) $`\nu _x+e^{-}\rightarrow \nu _x+e^{-}`$. In reaction 2, $`\nu _x`$ can be $`\nu _e`$, $`\nu _\mu `$, or $`\nu _\tau `$. In reaction 3, $`\nu _x`$ includes the three active neutrinos and also the singlet neutrino. The expected event rates per kilotonne-year for the three reactions may be found in the literature. The standard solar model predicts 6500 events from reaction 1 above a 5 MeV threshold, and about 2200 would be seen for vacuum $`\nu _e\rightarrow \nu _\tau `$ oscillations. These numbers remain roughly unchanged in the presence of a singlet or sterile neutrino, although there will be some variation depending on the choice of mixing parameters. The number of events for reaction 2 is 710 from the standard solar model and remains unchanged in the case of active-active oscillations, although it should be reduced by about a factor of two in the case of active-sterile or active-singlet oscillations, depending on the mixing parameters. From reaction 3, about 320 events are expected for active-active oscillations, with about 78 coming from $`\nu _{\mu (\tau )}`$. The number of neutrino-singlet interaction events depends on the strength of the interaction $`\beta `$. Therefore, if reaction 2 indicates sterile or singlet neutrinos, then for large $`\beta `$ reaction 3 in combination with reaction 1 must be used to distinguish the two cases. In conclusion, we find that the extended Zee model, with its new lepton number violating interactions, gives rise to new neutrino-electron scattering mechanisms. The additional scattering occurs for muon, tau and singlet type neutrinos. It alters the small angle MSW $`\nu _e\rightarrow \nu _s`$ and $`\nu _e\rightarrow \nu _\tau `$ solar solutions by shifting the resonance position for neutrinos of a given energy. Furthermore, it increases the number of counts in water detectors and modifies the shape of the electron recoil spectrum produced by neutrino-electron scattering.
Taking these into account, we have obtained a limit on the singlet neutrino-electron interaction strength $`\beta `$ which is an order of magnitude better than that derived from current accelerator experiments. With non-zero Zee model interactions for the singlet and tau neutrinos, the apparent $`\delta m^2`$ as measured by solar neutrino experiments can differ by as much as $`50\%`$ and $`10\%`$, respectively, from that which would be measured by reactor experiments. This work is partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada.
# Influence of shape of quantum dots on their far-infrared absorption

## I Introduction

Ever since the discovery (the generalized Kohn theorem) that far-infrared radiation (FIR) can only be used to excite center-of-mass modes of electrons parabolically confined in circular quantum dots, ingenious ways have been sought to modify the confinement in order to excite internal modes. The modes caused by the relative motion of the electrons would allow for the exploration of interaction and general many-body effects. It was demonstrated that in circular dots with a slight radial deviation from the parabolic confinement, states with few electrons have a unique far-infrared spectrum. Dots with a higher number of electrons ($`N>10`$) commonly show only a simpler structure of Bernstein modes and an energy shift. Early on, it became clear that a certain anticrossing behavior, discovered in the higher of the two Kohn modes measured for quantum dots assumed to be circular, is indeed a signature of a slight square deformation of the electron system. The far-infrared absorption of strictly square-shaped quantum dots with hard walls has been modelled by exact numerical diagonalization for two electrons, and by a real-time simulation of the density oscillations described by a local spin-density approximation (LSDA) for many electrons. The generalized Kohn theorem has been further extended to describe elliptic quantum dots and related three-dimensional structures, ellipsoids. The production of arrays of isolated angularly deformed quantum dots has been hampered by technical difficulties. What little there is of experimental results from FIR absorption measurements on quantum dots with angular deviation has not found its way into the general physics literature, but is to be found in thesis work. The FIR absorption of single quantum rings has been studied in relatively large systems with many electrons, but recently small quantum dots with few electrons and a hole through their center have been produced and measured. In this publication we model the FIR absorption of single quantum dots with a slight elliptic or square deviation, assuming the radial confinement to be parabolic. The model is general enough to be applicable to most angular deviations of the confinement. In a circular quantum dot the electron-electron interaction does not break the angular symmetry, but in our model it can strongly modify the shape of the dot as soon as the circular symmetry is broken by the initial confinement. We apply our model to the FIR absorption of an elliptic dot in order to verify that it produces the results expected from Kohn's theorem. The calculated absorption spectra for a square-deviated quantum dot show both familiar results for a weak deviation, and effects reflecting internal electron motion that have not yet been observed in experiments. In addition, we calculate the FIR absorption for a small dot with few electrons with its center removed, to compare with the results of A. Lorke.

## II Model

As we allow for a very general angular shape of the quantum dot, neither the total angular momentum nor the angular momentum of the effective single particle states in a mean field approach is conserved. The Coulomb interaction thus “mixes up” all elements of the functional basis used, and we limit ourselves to the Hartree approximation (HA) in order to be able to calculate the absorption.
The quantum dot is modelled with the confinement potential $$V_{\text{conf}}(𝐫)=\frac{1}{2}m^{*}\omega _0^2r^2\left[1+\sum _{p=1}^{p_{max}}\alpha _p\mathrm{cos}(2p\varphi )\right],$$ (1) representing an elliptic confinement when $`\alpha _1\ne 0`$ and $`\alpha _p=0`$ for $`p\ne 1`$, and a square symmetric confinement when $`\alpha _2\ne 0`$ and $`\alpha _p=0`$ for $`p\ne 2`$. We use the Darwin-Fock basis, the eigenfunctions of the circular parabolic confinement potential, in which the natural length scale $`a`$ is given by $$a^2=\frac{\ell ^2}{\sqrt{1+4(\omega _0/\omega _c)^2}},\qquad \ell ^2=\frac{\hbar c}{eB},$$ (2) where $`\omega _c=eB/m^{*}c`$ is the cyclotron frequency of an electron with effective mass $`m^{*}`$ in a perpendicular homogeneous magnetic field $`B`$. The states are labelled by the radial quantum number $`n_r`$ and the angular quantum number $`M`$. The single electron spectrum of these states is shown in Fig. 1. To evaluate the FIR response of the quantum dot, the external potential is assumed to have the form $$\varphi ^{ext}(𝐫,t)=\varphi _0re^{i(N_p\varphi -(\omega +i\eta )t)},$$ (3) where $`N_p=\pm 1`$ and $`\eta \rightarrow 0^+`$, representing a spatially constant external electric field. The FIR response is found by a self-consistent method in the linear response regime: the time-dependent Hartree approximation. The power absorption is then given by $$𝒫(\omega )\propto \omega \,\mathrm{Im}\left\{\sum _{\alpha ,\beta }f^{\beta \alpha }(\omega )\left|\langle \beta |\varphi ^{sc}|\alpha \rangle \right|^2\right\},$$ (4) where $$f^{\beta \alpha }(\omega )=\frac{1}{\hbar }\left\{\frac{f_\alpha ^0-f_\beta ^0}{\omega +(\omega _\alpha -\omega _\beta )+i\eta }\right\}$$ (5) and $`f^0`$ is the equilibrium Fermi distribution. Self-consistency is obtained by calculating the linear response not to the external perturbation $`\varphi ^{ext}`$, but rather to the total (self-consistent) potential $`\varphi ^{sc}=\varphi ^{ext}+\varphi _H^{ind}`$, where $`\varphi _H^{ind}`$ is the induced Hartree potential. For a circular quantum dot, the parabolic potential has proven to be a realistic approximation in many cases. For such a circular harmonic dot, the dipole selection rule for the center-of-mass angular momentum is $`\mathrm{\Delta }M=N_p`$, and the resonance frequencies are given by $$\omega _\pm =\frac{1}{2}(\mathrm{\Omega }+N_p\omega _c),\qquad \mathrm{\Omega }=(\omega _c^2+4\omega _0^2)^{1/2}.$$ For an anisotropic harmonic confinement, the selection rule is still $`\mathrm{\Delta }M=\pm 1`$, but there is absorption into both $`\omega _+`$ and $`\omega _{-}`$ for each polarization. The resonance frequencies are then given by $$\omega _\pm ^2=\frac{\omega _x^2+\omega _y^2+\omega _c^2\pm [\omega _c^4+2\omega _c^2(\omega _x^2+\omega _y^2)+(\omega _x^2-\omega _y^2)^2]^{1/2}}{2},$$ (6) where $`\omega _x`$ and $`\omega _y`$ are the resonances at $`B=0`$ T, fulfilling $`\omega _x=(1+\alpha _1)^{1/2}\omega _0`$ and $`\omega _y=(1-\alpha _1)^{1/2}\omega _0`$ in our model.

## III Results

In the calculations reported here we use GaAs parameters, the effective mass $`m^{*}=0.067m_0`$ and the dielectric constant $`\kappa =12.4`$. The FIR absorption of a quantum dot with an elliptic deviation is seen in Fig. 2. The induced density for the oscillation modes was used to confirm that only center-of-mass modes are excited, and the dispersion with respect to $`B`$ can be reproduced as a simple difference of the dipole-active transitions in the Darwin-Fock energy spectrum for one electron in a parabolic elliptic confinement.
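For reference, the elliptic-dot dispersion of Eq. (6) is straightforward to evaluate. In the sketch below the confinement energy $`\hbar \omega _0`$ and the deviation $`\alpha _1`$ are hypothetical values, chosen only to illustrate the two-branch structure; they are not the parameters used in our figures.

```python
import numpy as np

HBAR_OMEGA0 = 3.37  # meV, hypothetical confinement energy
ALPHA1 = 0.1        # elliptic deviation parameter of Eq. (1)

def cyclotron_meV(B, m_eff=0.067):
    """hbar*omega_c in meV for a GaAs-like effective mass; B in tesla
    (hbar*e/m_0 = 0.11577 meV/T)."""
    return 0.11577 * B / m_eff

def resonances(B):
    """Dipole resonance energies of Eq. (6) for an elliptic dot [meV]."""
    wc = cyclotron_meV(B)
    wx2 = (1 + ALPHA1) * HBAR_OMEGA0**2
    wy2 = (1 - ALPHA1) * HBAR_OMEGA0**2
    root = np.sqrt(wc**4 + 2 * wc**2 * (wx2 + wy2) + (wx2 - wy2) ** 2)
    return [np.sqrt((wx2 + wy2 + wc**2 + s * root) / 2) for s in (+1, -1)]

for B in (0.0, 1.0, 2.0):
    wp, wm = resonances(B)
    print(f"B={B:.1f} T: hw+ = {wp:.2f} meV, hw- = {wm:.2f} meV")
```

At $`B=0`$ the two branches reduce to $`\omega _x`$ and $`\omega _y`$, i.e. two distinct absorption peaks, which is the behavior discussed below.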
The oscillator strengths comply with known analytic expressions. Interestingly, the external circularly polarized electric field leads to linear oscillations of the electron system parallel to the minor and major axes of the contours of the elliptic confinement potential. Figure 3 shows the Darwin-Fock energy for a single electron in a confining potential with square deviation. Clearly visible is the anticrossing of the state ($`M`$,$`n_r`$)$`=`$(+1,0) with the state (-3,0), which was found to be visible in the FIR absorption of quantum dots that were clearly of square shape in an electron micrograph. This anticrossing is present in the calculated absorption shown in Figs. 4 and 5 for two electrons in a dot. For this relatively large deviation ($`\alpha _2=0.2`$ or $`0.4`$) from circular symmetry, the absorption at $`B=0`$ for both circular polarizations ($`N_p=\pm 1`$) is split into two or more peaks. This is in contrast to the Darwin-Fock energy diagram in Fig. 3, from which we would have to conclude that the lowest energy center-of-mass mode should be unsplit at $`B=0`$. Certainly it has to be kept in mind that the states in Fig. 3 cannot be assigned any unique quantum numbers $`n_r`$ and $`M`$. The evolution from one absorption peak to many with increasing deviation is seen in Fig. 6. Observation of the induced density shows that all the modes here are a mixture of center-of-mass and relative modes to a varying extent. Some peaks can be identified with depolarization-shifted transitions in the interacting Hartree energy spectrum, while others are very close to transitions in the noninteracting Darwin-Fock diagram, reflecting center-of-mass modes. In Fig. 7 the FIR absorption is shown for three electrons ($`N=3`$). A splitting or anticrossing occurs in the absorption for $`N=3`$ and $`N_p=1`$, whereas it does not for $`N=2`$. This striking behavior can be understood from the Darwin-Fock energy diagram for three electrons (Fig. 8). The splitting can be identified as a transition from the third energy level to the fourth and fifth with a depolarization shift, showing its many-body character. This behavior cannot be seen for $`N=2`$, since the third level is then not occupied. Generally, the lower energy branch (seen in the $`N_p=-1`$ polarization in circular dots) is very stable against splitting, especially for dots with many electrons. One exception is a dot with some kind of a hole through its center. Such dots have been grown as self-organized InAs rings embedded in a GaAs/AlGaAs heterostructure. Here we try to model the qualitative behavior of such dots simply, by using GaAs parameters and by replacing the hole with a repulsive Coulomb center in the middle of the dot. The confinement of the electrons is thus not of the same type as was used to study the far-infrared absorption of a finite 2DEG with an embedded impurity. The calculated absorption is seen in Fig. 9 for two and twelve electrons in the quantum ring. To understand the absorption one has to consider the Darwin-Fock energy spectrum of the Hartree-interacting electrons in the ring, shown in Fig. 10. In the case of two electrons in the ring, the emptying of states with low angular quantum number $`M`$ close to the center with increasing magnetic field $`B`$ results in low energy “side branches” with polarization $`N_p=+1`$ to the $`N_p=-1`$ branch. The magnetic length $`a`$ decreases with increasing $`B`$, causing the electrons to “see”, in a sense, more of the repulsive Coulomb center (and each other), resulting in the emptying of states of low $`M`$.
It is our suggestion that A. Lorke is observing this same effect in his experiment. It should be mentioned here that the electron density in our calculation at the center of the ring is low for $`B=0`$ and vanishes as $`B`$ increases. In the case of twelve electrons, screening effects and an oscillating structure reminiscent of the emptying of Landau bands are quite clear in the Darwin-Fock energy diagram, but again the emptying of the $`M=0`$ state at $`B=4`$ T is visible in the absorption, together with a restructuring of the states with high $`M`$ at the outer edge.

## IV Summary

We have used a general two-dimensional multipolar expansion of the confinement potential of a single quantum dot to study the effects of shape on its properties. We have shown that the far-infrared absorption spectrum of quantum dots with few electrons strongly reflects their shape and exact number of electrons if the confinement potential does not allow the application of Kohn's theorem. Elliptic dots have two low energy peaks of equal strength at $`B=0`$, whereas dots with a slight square symmetry reveal only one peak. This behavior is connected to the different effects of shape on the ($`M`$,$`n_r`$)$`=`$($`\pm 1`$,1) degeneracy of circular dots at $`B=0`$. We find that this degeneracy can be lifted by the activation of relative modes in the far-infrared absorption spectrum of square shaped dots with increased deviation from the circular form. In quantum rings we observe a shift of the occupation of the states labelled by the angular momentum quantum number $`M`$ to higher values with increasing magnetic field. This is essentially caused by the changing ratio of the magnetic length $`a`$ to the radii of the dots. We identify structures seen in experiments with this effect. The Hartree approximation has been used here, rather than the more desirable Hartree-Fock approximation in some unrestricted version, in order to manage the size of the absorption part of the calculation, measured in GBytes and CPU time. On the other hand, the HA can be expected to give a valid description of a system whose modes of excitation are mainly of the center-of-mass type.

###### Acknowledgements. This research was supported by the Icelandic Natural Science Foundation and the University of Iceland Research Fund.
# Charge melting and polaron collapse in $`\mathrm{La}_{1.2}\mathrm{Sr}_{1.8}\mathrm{Mn}_2\mathrm{O}_7`$

## Abstract

X-ray and neutron scattering measurements directly demonstrate the existence of polarons in the paramagnetic phase of optimally-doped colossal magnetoresistive oxides. The polarons exhibit short-range correlations that grow with decreasing temperature, but disappear abruptly at the ferromagnetic transition because of the sudden charge delocalization. The “melting” of the charge ordering as we cool through $`T_C`$ occurs with the collapse of the quasi-static polaron scattering, and provides important new insights into the relation of polarons to colossal magnetoresistance.

Manganese oxides have attracted tremendous interest because they exhibit colossal magnetoresistance (CMR): a dramatic increase in the electrical conductivity when they order ferromagnetically. The basic relationship between ferromagnetism and conductivity in doped manganese oxides has been understood in terms of the double-exchange mechanism, where an itinerant $`e_g`$ electron hops between $`\mathrm{Mn}^{4+}`$ ions, providing both the ferromagnetic exchange and electrical conduction. In addition, an important aspect of the physics of manganese oxides is the unusually strong coupling among spin, charge, and lattice degrees of freedom. These couplings can be tuned by varying the electronic doping, electronic bandwidth, and disorder, giving rise to a complex phase diagram in which structural, magnetic, and transport properties are intimately intertwined. The charge-ordered phases represent one of the most intriguing results of balancing these couplings; they have been observed at low temperature in insulating, antiferromagnetically ordered manganites, but are incompatible with the double exchange-mediated ferromagnetism seen in optimally-doped CMR systems. In comparison to the cubic manganites such as $`\mathrm{La}_{1-x}\mathrm{A}_x\mathrm{MnO}_3`$ (A=Sr, Ca, Ba), the two-layer Ruddlesden-Popper compounds $`\mathrm{La}_{2-2x}\mathrm{Sr}_{1+2x}\mathrm{Mn}_2\mathrm{O}_7`$, where $`x`$ is the nominal hole concentration, are advantageous to study because the reduced dimensionality strongly enhances the spin and charge fluctuations. The crystal structure is body-centered tetragonal (space group $`I4/mmm`$) with $`a\approx 3.87`$ Å and $`c\approx 20.15`$ Å, and consists of $`\mathrm{MnO}_2`$ bilayers separated by (La,Sr)O sheets. In the intermediate doping regime ($`0.32\le x<0.42`$), the ground state is a ferromagnetic metal, and the magnetoresistance is found to be strongly enhanced near the combined metal-insulator and Curie transition at $`T_C`$ (112 K for the $`x=0.4`$ system of present interest). The present results reveal diffuse scattering associated with lattice distortions around localized charges, i.e. polarons, in the paramagnetic phase. The formation of lattice polarons above the ferromagnetic transition temperature $`T_C`$ has been inferred from a variety of measurements, but detailed observation via diffuse x-ray or neutron scattering in single crystals has been lacking until now. Through such measurements, we have observed the collapse of quasi-static polaron scattering when the metallic, ferromagnetic state is entered. Furthermore, we present evidence of the growth of relatively well developed short-range polaron correlations in the paramagnetic phase of this optimally-doped CMR material. However, the development of long-range charge ordering is preempted by the delocalization of the polarons themselves at $`T_C`$.
The measurements were performed on a single crystal of the double-layer compound $`\mathrm{La}_{1.2}\mathrm{Sr}_{1.8}\mathrm{Mn}_2\mathrm{O}_7`$, with dimensions $`6\times 4\times 1`$ mm$`^3`$, cleaved from a boule that was grown using the floating-zone technique. The x-ray data were taken on the 1-ID-C diffractometer at the Advanced Photon Source, mostly using a high-energy beam of 36 keV to provide enough penetration in transmission geometry. Additional measurements were taken in reflection geometry with 21 keV. The neutron measurements were performed on the BT-2 triple-axis spectrometer at the NIST research reactor, using both unpolarized (with either energy integration or energy analysis) and polarized neutron beams with an incident energy of 13.7 meV. For the measurements under magnetic field at BT-2, we employed a superconducting solenoid to provide fields up to 9 T applied in the $`ab`$ plane. A wide range of reciprocal space was explored, including the ($`h`$0$`l`$) and ($`hhl`$) planes. A polaron consists of a localized charge with its associated lattice distortion field, which gives rise to diffuse scattering around the Bragg peaks, known as Huang scattering. Figure 1(a) shows a contour plot of the diffuse x-ray scattering in the ($`h`$0$`l`$) plane around the (0, 0, 8), (0, 0, 10) and (0, 0, 12) reflections. A similar anisotropic pattern of diffuse scattering was observed around (2, 0, 0). This scattering has a strong temperature dependence, with a dramatic response at $`T_C`$, as illustrated in Fig. 1(b). The almost linear temperature dependence of the diffuse scattering below $`T_C`$ in Fig. 1(b) suggests that phonons dominate in this temperature regime, but the sudden change at $`T_C`$ cannot be due to conventional acoustic phonons. This is confirmed by neutron energy scans such as shown in Fig. 1(c), which reveal both quasi-elastic and inelastic (phonon) contributions. The phonon mode at about 2.4 meV is well separated from the quasi-elastic scattering and obeys the usual Bose thermal population factor, whereas the quasi-elastic intensity increases with decreasing temperature, but then collapses below $`T_C`$. If we subtract an elastic nuclear incoherent contribution measured at 10 K, we see that the change at $`T_C`$ is entirely due to the quasi-elastic scattering contribution, showing that the lattice distortions giving rise to it are quasi-static on a time scale $`\tau \sim \hbar /(2\mathrm{\Delta }E)\approx 1`$ ps set by the energy resolution of the instrument, i.e. they are static on the time scale of typical phonon vibrations. A good description of the q-dependence of this diffuse scattering can be obtained in terms of Huang scattering, consistent with a Jahn-Teller type distortion around the $`\mathrm{Mn}^{3+}`$ ions. Our results therefore provide direct evidence both for the existence of quasi-static polarons above $`T_C`$, and for their abrupt disappearance upon cooling below the ferromagnetic transition, where the charges delocalize.
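The quoted time scale follows from simple arithmetic. In the sketch below an instrumental energy resolution of roughly 0.33 meV is assumed, since the actual resolution is not quoted explicitly here; it is only meant to show how $`\tau \sim \hbar /(2\mathrm{\Delta }E)`$ maps onto $`\approx 1`$ ps.

```python
# Worked check of the quasi-static time scale tau ~ hbar/(2*dE).
HBAR = 6.582e-13   # meV * s
dE = 0.33          # meV, assumed instrumental energy resolution
tau = HBAR / (2.0 * dE)
print(f"tau = {tau:.2e} s (~1 ps)")   # -> ~1.0e-12 s
```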
(0, $`\pm ϵ`$, $`\pm 1`$) peaks are also observed, either because of the presence of ($`a`$,$`b`$) twin domains in a 1q-system, or because this is a 2q-system. The in-plane incommensurability is evident in the x-ray $`h`$-scans shown in Fig. 2(b) at different temperatures. Note that, similar to the quasi-elastic peak in Fig. 1(c), this peak increases and then rapidly decreases in intensity as we cool through $`T_C`$. Figure 2(c) shows various neutron scans along the $`l`$-direction through the incommensurate peak positions (2.3, 0, $`l`$). The red circles are for an energy-integrated scan that reveals two broad symmetric peaks at $`l=\pm 1`$, consistent with out-of-phase correlations between bilayers. Identical data are obtained in (energy integrated) x-ray scans. The orange circles depict an elastic neutron scan across one of the peaks, scaled by an instrumental factor. Energy scans at the peak positions have confirmed that the correlations giving rise to these peaks are once again quasi-static on a time scale $`\tau \gtrsim 1`$ ps. We have also performed polarized neutron experiments to probe the nature of this scattering. In the configuration where the neutron polarization P $`\parallel `$ Q (the neutron wave vector), we found that all the signal was non-spin-flip scattering (depicted by the green diamonds in Fig. 2(c)), while any magnetic scattering would be spin-flip (green triangles in Fig. 2(c)). Thus the incommensurate peaks are purely structural reflections. Figure 3(a) shows that the temperature dependence of the incommensurate peak intensity is remarkably similar to the Huang scattering, whether the latter is derived from the x-ray scattering by subtracting the estimated thermal diffuse scattering (straight line in Fig. 1(c)) or directly from the quasi-elastic neutron scattering. This indicates that both types of scattering are associated with the development of polarons above $`T_C`$. The incommensurate peak intensity falls slightly more rapidly than the Huang scattering with increasing temperature. This is consistent with ascribing the Huang scattering to individual polarons, and the incommensurate peaks to polaron correlations which become stronger with decreasing temperature. Below $`T_C`$ we observe a “melting” of the polaron correlations occurring simultaneously with the collapse of the polarons themselves. We note that the collapse of the polaron correlations also occurs under an applied magnetic field (see Fig. 3(b)). This behavior is expected because of the coupling between the charge and spin dynamics through the double exchange interaction. The incommensurate peaks are broader than the $`q`$ resolution, showing that the in-plane and out-of-plane charge correlations remain relatively short range at all temperatures. Detailed measurements in the ($`h,0,l`$) plane indicate that the correlation lengths are weakly temperature dependent and peak at the same temperature as the intensity, with $`\xi \approx 26.4`$ Å ($`\sim 6a`$) in-plane, and $`\xi \approx 10.4`$ Å ($`\sim c/2`$) out-of-plane. No higher harmonics have been observed, and no superlattice peaks have been found in the ($`hhl`$) plane. The $`l=\pm 1`$ component of the charge ordering wave vector is related to the presence of two MnO<sub>2</sub> bilayers per unit cell, and indicates that distortions produced by the modulation of the charge density are, on average, out of phase in adjacent bilayers. This results in a staggering of the charges from one bilayer to the next, as would be expected from Coulomb repulsion.
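The two conversions used above (energy resolution to a quasi-static time scale, and peak width to a correlation length) can be sketched numerically as follows; the resolution half-width and the Lorentzian peak width are assumed values, chosen only to reproduce the scales quoted above:

```python
import numpy as np

HBAR = 0.6582  # hbar in meV*ps

# Quasi-static bound: distortions are static on time scales
# tau >~ hbar/(2*dE), with dE the quasi-elastic energy resolution
# (half-width) of the spectrometer.  dE ~ 0.33 meV is assumed here
# purely for illustration; it reproduces the ~1 ps quoted above.
dE = 0.33  # meV
print(HBAR / (2 * dE))  # ~1.0 ps

# Correlation lengths from peak widths: for a Lorentzian lineshape,
# xi = 1/kappa with kappa the HWHM in inverse Angstroms; a width
# dh in (h,0,0) reciprocal-lattice units gives kappa = 2*pi*dh/a.
a = 3.87    # in-plane lattice constant, Angstrom
dh = 0.023  # illustrative HWHM in r.l.u.
print(a / (2 * np.pi * dh))  # ~27 Angstrom, i.e. ~6-7 lattice spacings
```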
The small $`c`$-axis correlation length ($`\sim `$ the separation between two bilayers) suggests that only two bilayers are correlated at most. The short-range nature of the charge correlations makes it difficult to collect enough integrated intensities at this stage to perform quantitative comparisons to specific charge ordering models. Charge and orbital ordering have been observed at low temperature in a number of insulating, antiferromagnetic cubic manganites at small and large ($`x\ge 0.5`$) doping, as well as in layered manganites with $`x`$=$`0.5`$. Commensurate charge modulations in the antiferromagnetic insulating phases are also a familiar scenario in the related nickel oxides. However, short-range charge ordering in the paramagnetic phase of an optimally-doped CMR ferromagnet is a novel feature observed here. The charge correlations result from Coulomb interactions between the polarons, coupled with the interaction of overlapping polaronic strain fields. In the present $`x`$=$`0.4`$ system they are not strong enough to win the competition with the double exchange interaction, and the charges delocalize at the ferromagnetic transition, where the charge peaks collapse and the lattice strain relaxes. It is the delicate balance between double exchange, Coulomb repulsion and the lattice strain field that dictates whether the material is a ferromagnetic metal or charge-ordered insulator at low temperatures. This work was supported by the U.S. Department of Energy, Basic Energy Sciences-Materials Sciences (W-31-109-ENG-38), NSF (DMR 97-01339), NSF-MRSEC (DMR 96-32521), and the Swiss National Science Foundation. We are grateful to N. Wakabayashi for drawing our attention to similar x-ray diffuse scattering observations, and to W.-K. Lee for his help with the x-ray measurements.
# Gravity coupling from micro-black holes ## 1 Introduction As is well known, the standard picture of supersymmetric GUT models predicts that the couplings of the three gauge interactions (electromagnetic, weak and color) unify to a good accuracy at an energy around $`2\times 10^{16}`$ GeV. Gravity, on the contrary, presents another natural energy scale: quantum fluctuations of the gravitational field seem to become important only when we observe them at the Planck length $`L_P=(G\hbar /c^3)^{1/2}=1.6\times 10^{33}`$ cm. Energy fluctuations of the order of $`E_P=\frac{1}{2}(\hbar c^5/G)^{1/2}=6\times 10^{18}`$ GeV create at this scale micro-black holes that modify the topology of the spacetime. At the same scale, according to the common view, the gravity coupling (that is, the “strength” of the gravitational interaction) should become comparable with those of the other three gauge interactions. During the last years, studies based on string theories have changed this vision . In the early days of superstring theory, it was usual to associate it with (sub)planckian physics, because the theory provided an ultraviolet regulator of quantum gravity. More recently, a number of authors have considered the possibility that the compactification energy scale is far lower, with a fundamental scale of string theory being as low as TeV. Since the gauge couplings are inversely proportional to the volume of the compactification space, this implies large compactification volumes and therefore large extra dimension(s). By modifying the compactification radius one can tune the couplings of gauge interactions, including gravity. In the “brane-world” picture the gauge interactions are localized on p-branes (with p $`\le `$ 9) while gravity propagates in different spacetime dimensions. Precisely, gauge fields and particles with gauge charges move only on the “walls” (i.e. on p-branes) while gravity moves also in the bulk, the spacetime region of which the p-branes are the boundary . In the Horava-Witten proposal the branes represent 3+1 dimensional walls in which all the standard model particles live, while gravity moves in the 4+1 dimensional bulk between the walls. The other six additional dimensions of string theory should be much smaller than that inhabited by gravity. The wall is then 9+1 dimensional in all and the complete spacetime is 10+1 dimensional. One can choose the size of the 11th dimension (the one inhabited only by gravity) so that the gravity coupling meets those of the other three forces at the same common scale, presumably the GUT scale . And the couplings of the other interactions remain untouched by this extra dimension. ## 2 Gravity coupling from micro-black holes The purpose of this section is to show how the threshold of quantum gravitational effects could be considerably lowered from $`M_{Planck}`$ to almost $`M_{GUT}`$, *without* the use of large extra dimensions, of 11 dimensional M-Theory or of the Horava-Witten model. Working in the framework of semiclassical 4-dim gravity, a careful analysis leads to the conclusion that quantum gravity effects could be present also at the GUT scale. Let us begin by recalling the process of creation of a planckian micro-black hole.
We can use the Heisenberg inequality $`\mathrm{\Delta }p\mathrm{\Delta }x\ge \hbar /2`$ cast in the form $`\mathrm{\Delta }E\mathrm{\Delta }x\ge \hbar c/2`$ (because $`\mathrm{\Delta }E\approx c\mathrm{\Delta }p`$ in our high energy situation) to observe that in a space region of width $`\mathrm{\Delta }x`$ the metric field can fluctuate with an amplitude in energy of $`\mathrm{\Delta }E\approx \hbar c/2\mathrm{\Delta }x`$. If the gravitational radius $`R_g=2G\mathrm{\Delta }E/c^4`$ associated with the energy $`\mathrm{\Delta }E`$ equals the width $`\mathrm{\Delta }x`$ of the space region, a micro-black hole originates. This happens when $`\mathrm{\Delta }x=L_P`$ and the typical energy of the process is of course the Planck energy $`E_P`$. Planckian micro-black holes are therefore typical quantum gravitational objects. We now want to calculate the lifetime of one of these microholes. This can be done in at least three different ways. The first is to follow the pure thermodynamical approach of the semiclassical Hawking decay . Of course many criticisms can be raised towards this approach, mainly because the extremely high energy situation seems to forbid any semiclassical calculation and even the use of the notion of temperature. The second method is to follow a microcanonical approach , which, using the microcanonical ensemble, avoids the problematic use of the concept of “temperature” at these very high energies. The third way is to make use of the semiclassical Hawking decay, but corrected with appropriate factors in order to account for the great number of decay channels (i.e. species of particles) that the microhole can radiate into during the final stages of its (short!) life. Although the microcanonical approach seems to be the more appropriate for this high energy situation, it is based on an exponentially rising density of states, which is interpreted by considering black holes as extended quantum objects (*p*-branes). We want on the contrary to work in a completely non-stringy environment and to remain in the realm of the standard model and 4-dim gravity, in order to test how non-stringy physics can tackle the problem of gravity coupling. Therefore we choose here the third method. Let us start with the thermodynamical approach and introduce afterwards the “multi-channels” correction factor. The lifetime of a black hole of given initial mass $`M_0`$, losing mass via Hawking radiation, can be calculated in a simple manner. Using the Stefan-Boltzmann law $$W=\sigma T^4$$ (1) we can say that the variation $`dM`$ of the mass of the hole in a time $`dt`$ is $$dM=\frac{\sigma T^4S}{c^2}dt$$ (2) where $`S`$ is the surface of the hole and $`T`$ is the Hawking temperature. For a Schwarzschild black hole we have $$S=4\pi R_g^2$$ (3) with $`R_g=2GM/c^2`$ and $`T=\hbar c^3/8\pi kGM`$. The equation in $`dM`$ becomes $$dM=\frac{\lambda }{M^2}dt$$ (4) with $$\lambda =\frac{\sigma \hbar ^4c^6}{2^8\pi ^3G^2k^4}\approx 3.93\times 10^{24}\mathrm{g}^3\mathrm{s}^{1}.$$ (5) This gives the mass of the hole at time $`t`$ $$M(t)=(M_0^33\lambda t)^{1/3}$$ (6) and therefore the lifetime of the hole is $$t_0=\frac{M_0^3}{3\lambda }=\frac{2^8\pi ^3G^2k^4}{\sigma \hbar ^4c^6}\frac{M_0^3}{3}.$$ (7) In particular for a micro-black hole of a Planck mass, $`M_0=M_P=E_P/c^2`$, we obtain for the lifetime $$t_{0P}=\frac{2^5}{3}\frac{\pi ^3k^4}{\sigma }\frac{1}{\hbar ^3c^2}\left(\frac{G\hbar }{c^5}\right)^{1/2}\approx 2.01\times 10^3\tau _P$$ (8) where $`\tau _P=(G\hbar /c^5)^{1/2}`$ is the Planck time. (Remember that $`E_P\tau _P=\hbar /2`$).
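Equations (5) and (8) can be checked numerically; the following sketch uses SI constants and the paper's convention $`M_P=E_P/c^2=\frac{1}{2}(\hbar c/G)^{1/2}`$:

```python
import numpy as np

# SI constants
G, c, hbar, kB = 6.674e-11, 2.998e8, 1.0546e-34, 1.381e-23
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# lambda of eq. (5), converted to g^3/s (1 kg^3 = 1e9 g^3)
lam = sigma * hbar**4 * c**6 / (2**8 * np.pi**3 * G**2 * kB**4)
print(f"lambda = {lam * 1e9:.2e} g^3/s")   # ~3.9e24, as in eq. (5)

# Planck-mass hole, with M_P = E_P/c^2 = (1/2)(hbar c/G)^(1/2)
M_P = 0.5 * np.sqrt(hbar * c / G)          # kg
tau_P = np.sqrt(G * hbar / c**5)           # Planck time, s
t0P = M_P**3 / (3.0 * lam)                 # eq. (7)
print(f"t0P / tau_P = {t0P / tau_P:.3e}")  # ~2.0e3, as in eq. (8)
```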
The preceding derivation, making use of the Stefan-Boltzmann law, takes into account only the pure electromagnetic black body emission (i.e. photons). In order to consider also other particle species, we now have to introduce a “multi-channels” correction factor, $`f_c`$. Equation (4) now reads $$dM=\frac{\lambda f_c}{M^2}dt$$ (9) where $`f_c`$ accounts for the degrees of freedom of each emitted particle species contributing to the energy loss. Various forms have been proposed for $`f_c`$ . In the standard model picture $`f_c`$ never exceeds a value of order $`10^2`$ (and also in a supersymmetric picture $`f_c`$ is again of this order of magnitude). Equation (8) hence becomes $$t_{0P}\approx \frac{2.01\times 10^3}{f_c}\tau _P.$$ (10) We note now that, in the absence of gravitational effects, an energy fluctuation of the same size as a planckian micro hole would have a lifetime of just one Planck time (if we take into account the Heisenberg principle only). The decay time is slowed down by the presence of the event horizon of the micro hole, which traps the energy and emits it only at a slow rate via Hawking evaporation. Thus we can consider the quantum micro hole, which has a lifetime of the order of $`20\tau _P`$, as a sort of *metastable quantum state*. From the theory of the decay of metastable states we can infer that the decay probability $`dP`$ of a state during the time interval $`(t,t+dt)`$ is proportional to $$\mathrm{exp}\left[\frac{\mathrm{\Gamma }}{\hbar }t\right]$$ (11) where $`\mathrm{\Gamma }`$ is the width in energy of the state. The mean lifetime of the state is $$\tau =\frac{\hbar }{\mathrm{\Gamma }}.$$ (12) We know that the lifetime of the metastable micro hole quantum state is $`t_{0P}`$ and from here we can calculate its width in energy $$\mathrm{\Gamma }_{0P}=\frac{\hbar }{t_{0P}}=\frac{2E_Pf_c}{2.01\times 10^3}\approx 6\times 10^{15}f_c\mathrm{GeV}.$$ (13) For $`f_c`$ of order $`10^2`$ we obtain $`\mathrm{\Gamma }_{0P}\approx 6\times 10^{17}`$ GeV. This width is so huge that it should allow the existence of planckian micro-black holes also at the GUT threshold or very near to it, quite below the Planck threshold. Incidentally, it is also interesting to note that, by reversing the argument, we can obtain an estimate of the number of particle species present in Nature. In fact, if we demand that $$\mathrm{\Gamma }_{0P}=E_PE_{GUT}$$ (14) we then get $`f_c\approx 996`$. ## 3 Conclusion The GUT energy scale seems to emerge in a natural way from a careful analysis of the micro-black hole metastable quantum state. This seems to indicate that typical quantum gravity objects can be present also at energies well below the Planck threshold, and this result is obtained in a completely non-stringy framework. The quantum gravity scale could be fixed at the GUT scale (almost) and no more at the Planck scale. This agrees also with the simplest inflationary potential invoked to explain the density fluctuations as measured by COBE (see Banks et al. ).
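A quick numeric cross-check of eqs. (13) and (14); the small difference from the quoted $`f_c\approx 996`$ comes from rounding of $`E_P`$ and of the $`2.01\times 10^3`$ coefficient:

```python
# Width of the metastable state: Gamma = hbar/t0P, with
# t0P = (2.01e3/f_c)*tau_P and E_P*tau_P = hbar/2, so
# Gamma = 2*E_P*f_c/2.01e3 (eq. 13).
E_P = 6.0e18    # GeV, Planck energy in the paper's convention
E_GUT = 2.0e16  # GeV

fc = 100.0
Gamma = 2.0 * E_P * fc / 2.01e3
print(f"Gamma(fc=100) = {Gamma:.1e} GeV")  # ~6e17 GeV

# Reversing the argument: demand Gamma = E_P - E_GUT (eq. 14)
fc_needed = (E_P - E_GUT) * 2.01e3 / (2.0 * E_P)
print(f"fc needed ~ {fc_needed:.0f}")      # ~1e3, cf. the ~996 quoted
```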
# Interquark potential, susceptibilities and particle density of two color QCD at finite chemical potential and temperature ## I Introduction A so far unique possibility to study a gauge model at finite density is afforded by two color QCD, which can be studied via numerical simulations even at non–zero chemical potential. In our first paper (hereafter HKLM) we have studied the model at zero temperature, presenting the first ab initio spectrum calculation at finite baryon density with full fermion feedback. Here we focus on the effect of temperature: we discuss the interplay of bulk thermodynamics observables, deconfinement and chiral symmetry by varying the chemical potential at different temperatures, and we contrast the nature of the phenomena along the temperature and density axes. Ultimately, we would like to understand the phase diagram of two color QCD, the nature of the critical lines separating the phases with different realisation of chiral symmetries and their interplay with the deconfinement and topological transitions. Finally, and ideally, we would also like to use some of the results on two color QCD to learn something about three color QCD. Clearly the present study, which addresses only a subset of these issues on a small $`6^4`$ lattice, is just exploratory: we stress that even the “simple” $`\mu =0`$ limit, whose study was initiated many years ago in , has a number of interesting open problems. A rich phenomenology at finite baryon density emerges from the analysis of model Hamiltonians with the same global symmetries as QCD. For two colors the predictions of these more sophisticated studies reduce to those expected of qualitative reasoning and early calculations . Simple models and qualitative reasoning, clearly, do not take fully into account details of the dynamics such as color forces and their temperature dependence: perhaps some of the lattice results can offer some quantitative control, and further insight into model studies, and, very optimistically, even the possibility of a first principle derivation of some of their relevant parameters (e.g. form factors). This provides a further motivation for this lattice study of two color QCD. An overview of the parameter space explored by the numerical simulations is given in Section II, while the results for the quark-(anti)quark potential are presented in Section III. Section IV discusses the particle number, and the “nuclear matter phase”. Section V presents the data for the susceptibilities, contrasting the high density and high temperature behaviour. In Section VI we further assess the rôle of the chemical potential in the gauge dynamics by carrying out a partially quenched calculation, and present some speculative comments on the relationships of two and three color QCD. We close with a summing up and outlook. Some of the results of this paper have been briefly mentioned in . ## II Overview of the parameter space Using the same algorithm as in HKLM we studied the thermodynamics of the system on a $`6^4`$ lattice at: * $`\beta =1.3`$, $`\mu =(0,.2,.6,.8)`$, mass = (.07, .05); * $`\beta =1.5`$, $`\mu =(0,.2,.6,.8,1.0)`$, mass = (.1, .07, .05); * $`\mu =0`$, $`\beta =(1.3,1.5,1.7,1.9,2.1,2.3)`$, mass = .1. The chemical potential explores the range of interest, up to the lattice saturation, while the temperature ranges from $`T\approx 0`$ to $`T>T_c`$. By contrasting the results obtained at $`\beta =1.5`$ with those obtained at $`\beta =1.3`$ we shall study how the temperature affects the chemical potential dependence.
By contrasting the results as a function of chemical potential with those obtained by varying the temperature we shall compare the nature of the high T and the high $`\mu `$ transitions. Wherever available we rely on the results for condensates and susceptibilities obtained in HKLM. Clearly, to study the chemical potential dependence at different temperatures we need to know which phase we are in at $`\mu =0`$. Our first task is then to locate (approximately) the critical line in the $`\beta `$–mass plane. To this end we measured the mass dependence of the chiral condensate. In Fig. 1 we show the chiral condensate as a function of the bare mass for different values of the chemical potential at $`\beta =1.5`$. We note that, while a linear extrapolation from masses .1 and .07 would give a non–zero condensate, an extrapolation which uses masses .07 and .05 would suggest that chiral symmetry is already restored. In addition to that, for mass = .05 we observed a clear two state signal in the HMD history of the chiral condensate, which, to a lesser extent, was also visible at mass = .07. Fig. 1 should be considered together with the analogous one (Fig. 3) of HKLM, from which we inferred that the points $`\beta =1.3`$, mass = .05 and .07 are in the phase where $`<\overline{\psi }\psi >\ne 0`$. Assuming a first order chiral transition in $`\beta `$ at $`m=0`$, we would conclude that the chiral transition line in the $`\beta m`$ plane runs close to the points $`(\beta ,m)=(1.5,.05),(1.5,.07)`$. We sketch such a line in Fig. 2, with the caveat that if the $`m=0`$ transition were instead second order, the line drawn in Fig. 2 would indicate a crossover. Given the exploratory nature of this study this distinction is immaterial. In the same Figure we also show the locations of the simulation points $`\beta =(1.3,1.5)`$, and $`m=(.05,.07,.1)`$: by switching on the chemical potential we should be able to observe the effect of a finite density of quarks on a variety of dynamical situations, ranging from “very cold” (filled points), to “critical” (shaded points) to “hot” (open point). In addition to this, the line $`m=0.1`$ has been explored at $`\mu =0`$ for several $`\beta `$'s. In Fig. 1 the dependence of the chiral condensate on the chemical potential is also shown. As expected, when $`\mu `$ is increased the chiral condensate extrapolates to zero in the chiral limit also for larger bare masses. We recall that, of course, this does not correspond to the restoration of the chiral symmetry, as the chiral condensate will be rotated into a baryonic diquark one (see the discussions in HKLM, and below, for more). Figs. 3 show the same $`<\overline{\psi }\psi >`$ data plotted as a function of the chemical potential. Again, the findings at this largest temperature should be contrasted with those obtained in HKLM, which we reproduce for the reader’s convenience in the upper part of the figure. At $`\beta =1.3`$, mass = .07 we observe a rather sharp drop in the chiral condensate at $`\mu \approx .3`$, close to half the pion mass, $`m_\pi /2\approx .3`$. We also note that at $`\beta =1.5`$, $`m=.1`$, $`m_\pi =.8`$, and $`\mu =m_\pi /2\approx .4`$ lies somewhere in the crossover region, as it should. The overall observation resulting from Fig. 1 is that the density transition shows the expected softening at smaller masses and higher temperatures.
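The sensitivity of the linear extrapolation to the pair of masses used, remarked on above, can be illustrated with a short sketch; the condensate values below are purely illustrative and are not the measured data:

```python
def extrapolate(m, pbp):
    """Linear extrapolation of <psibar psi>(m) to m -> 0
    through two data points."""
    slope = (pbp[1] - pbp[0]) / (m[1] - m[0])
    return pbp[0] - slope * m[0]

# Illustrative (not measured) condensate values at three bare masses:
pbp = {0.10: 0.40, 0.07: 0.31, 0.05: 0.16}

# The two heavier masses vs. the two lighter ones can give
# qualitatively different chiral-limit values:
print(extrapolate([0.10, 0.07], [pbp[0.10], pbp[0.07]]))  # +0.10, > 0
print(extrapolate([0.07, 0.05], [pbp[0.07], pbp[0.05]]))  # -0.215, < 0
```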
## III Quark-antiquark and quark-quark interactions Although the results for $`<\overline{\psi }\psi >`$ look physical, and consistent with expectations, we would have obtained similar ones by use of the quenched approximation: we are not really observing qualitative, dynamical $`\mu `$ effects. It is interesting to investigate the dynamics of the gauge fields, and to this end we measured the correlations $`<P(0)P^{}(z)>`$ of the zero momentum Polyakov loops, averaged over spatial directions. Remember that this quantity is related to the string tension $`\sigma `$ via $`lim_{z\to \mathrm{\infty }}<P(0)P^{}(z)>\sim e^{\sigma z}`$. Note that, as $`P`$ is a real quantity, the Polyakov loop correlator defining the strength of the quark-quark interaction, $`<P(0)P(z)>/<P(0)^2>`$, would be the same as the quark–antiquark one, $`<P(0)P^{}(z)>/<|P(0)|^2>`$. This would lead to the conclusion that the diquarks and mesons with the same spin and opposite parity are always degenerate, since their binding forces are the same. In principle this conclusion is limited to the heavy spectrum, however the numerical results presented in HKLM support this point of view: mesons and diquarks seem to remain degenerate in SU(2) at any nonzero density, once the Fermi level shift is taken into account, producing four degenerate particles at large $`\mu `$. We show the results for the Polyakov loop correlators in Fig. 4, where we compare the behaviour at various temperatures (upper) with that at various chemical potentials (lower). We note that the trends with temperature and chemical potential are quite similar: in both cases we have signs of long range ordering, i.e. deconfinement. The gap between the plateaus having $`\mu =.4`$ and $`\mu =.6`$ in Fig. 3b suggests increased fermion screening and the passage to a deconfined phase. We then have direct evidence of the effect of the chemical potential on the gauge fields. Finally, we show (Fig. 5) the dependence of the Polyakov loop itself on the chemical potential, contrasted, in the lower part of the figure, with that of the spatial Polyakov loop. This suggests that the ordering effects of the chemical potential only affect the temporal direction. We should also note that some distinction between the behaviour of the temporal and spatial Polyakov loops is also seen in the temperature dependence, and can be tentatively ascribed to an effect of the different boundaries for the fermion fields (periodic vs. antiperiodic) in the space and time directions. ## IV The nuclear matter of SU(2), and a “new vacuum”? According to the standard wisdom, for a chemical potential comparable with the baryon mass baryons start to be produced, thus originating a phase of cold, dense matter. For SU(2), baryons (diquarks) are bosons (as opposed to the fermionic baryons of real QCD). This has major implications for the physics of the dense phase. First, and obviously, the thermodynamics of (interacting) Bose and Fermi gases is different. In particular, it could be that the energetics favours the condensation of bosons. This would be revealed by a non–zero diquark condensate , which still breaks chiral symmetry, and should be reflected by a different functional dependence of the particle density on $`\mu `$, which can thus characterise the various phases. In this exploratory study we just (try to) fit the data to a cubic spline, appropriate for free massless fermions at zero temperature, and try to infer from that the effect of temperature, and the possibility of a condensed diquark phase.
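The fitting procedure just described amounts to a low-order polynomial fit of $`n(\mu )`$; a minimal sketch (with synthetic input generated from the $`\beta =1.5`$ fit quoted in the next paragraph, not the measured densities) is:

```python
import numpy as np

# Synthetic "data": number densities generated from the beta=1.5
# fit quoted below; with real data the coefficients come out of
# the fit rather than going in.
mu = np.array([0.3, 0.5, 0.7, 0.9, 1.1])
n = 2.07 * mu**3 - 0.01 * mu**2 + 0.01

coeffs = np.polyfit(mu, n, deg=3)  # [mu^3, mu^2, mu^1, const]
print(np.round(coeffs, 3))         # recovers ~[2.07, -0.01, 0.0, 0.01]

# A dominant pure mu^3 term signals a (nearly) free massless quark
# gas at T ~ 0; sizeable lower-order terms, as in the beta=1.3
# fits below, point to a more complicated mixed phase.
```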
In Figs. 6 we show the number density as a function of $`\mu `$ for the two $`\beta `$ values, with the fits described below superimposed. Let us consider $`\beta =1.5`$ (our warmer lattice, close to $`T_c`$) first: we note the following features. At small chemical potential there is a clear dependence on the bare mass, which is greatly reduced in the thermodynamical region. This could be ascribed to a loss of the dynamical mass. In the same region, the behaviour is close to $`\mu ^3`$ (the line connecting the points is the polynomial $`2.07\mu ^3.01\mu ^2+.01`$). We just note here that a simple, pure cubic term accommodates the data well, suggesting a free quark gas – somewhat surprisingly, the contribution of the temperature term seems to be small. At $`\beta =1.3`$ the situation is different. For $`\mu >\mu _c`$, we would expect diquark condensation. At the same time, we have observed long distance screening and deconfinement, so the behaviour might get closer to that of a cold phase of free quarks, which, however, is not supported by the data: as the diquark states appear to be bound (HKLM), their condensates can still influence the thermodynamics. The polynomial fits $`n(\mu )=.84\mu ^3+1.27\mu ^2.1`$ for mass = .05, $`n(\mu )=1.2\mu ^3+0.5\mu ^2.03`$ for mass = .07 are not amenable to any simple interpretation. We might well expect a rather complicated “mixed” nuclear matter phase, with an admixture of diquark gas (which, if alone, would constitute a “pure” SU(2) nuclear matter), condensed diquarks, free quarks, perhaps characterised by both types of condensates (such mixed phases are also predicted by more detailed instanton studies ). A direct measurement of the diquark condensate should completely clarify this point . A free massless quark phase, with complete restoration of chiral symmetry (i.e. $`<\overline{\psi }\psi >=<\psi \psi >=0`$) could be reached at even larger $`\mu `$ – as this region is dominated by lattice saturation artifacts, improved/perfect actions might be necessary to explore it. ## V Particle content and condensation Clearly, to better understand the thermodynamics results shown above it is useful to assess the particle content of the model, together with their possible condensation. Remember that at $`\mu =0`$ the model has a lattice analogue of the Pauli–Gürsey symmetry, mixing quarks and antiquarks, which belong to equivalent representations. Main consequences are 1. the only discernible condensate is $`<qq>^2+<q\overline{q}>^2`$, 2. the preferred direction of $`\chi SB`$ is picked by the explicit mass term 3. the diquark condensate is a natural object which does not break any more symmetry than the ordinary condensate 4. there are massless baryons (diquarks). When $`\mu \ne 0`$ the symmetry group is smaller, coincident with that of staggered fermions, and the number of Goldstone modes is reduced accordingly. We refer to HKLM for more detailed discussions, and just reproduce here the basic idea and definitions used in the spectroscopy calculations. As usual we form meson and diquark operators by taking correlations of the quark propagator in the appropriate sector of quantum numbers. We shall limit ourselves to the local sector of the spectrum and focus on the zero momentum connected propagators of the scalar and pseudoscalar mesons and diquarks. The scalar meson propagator will thus be an isovector, which we will call $`\delta `$, following QCD notation.
By applying a generic O(2f) transformation to the mass term $`\overline{\chi }\chi `$ we identify the basic set of operators which shall be used to build the spectrum: $`\text{scalar}\chi _1\overline{\chi }_1+\chi _2\overline{\chi }_2`$ $`\text{pseudoscalar}\epsilon (\chi _1\overline{\chi }_1+\chi _2\overline{\chi }_2)`$ (1) $`\text{scalar diquark }\chi _1\chi _2\chi _2\chi _1`$ $`\text{pseudoscalar diquark}\epsilon (\chi _1\chi _2\chi _2\chi _1)`$ (2) $`\text{scalar antidiquark}\overline{\chi }_1\overline{\chi }_2\overline{\chi }_2\overline{\chi }_1`$ $`\text{pseudoscalar antidiquark}\epsilon (\overline{\chi }_1\overline{\chi }_2\overline{\chi }_2\overline{\chi }_1)`$ (3) where the lower index labels colour. The first line displays the usual pseudoscalar and scalar operators. The second (third) line corresponds to diquark (antidiquark) operators, scalar and pseudoscalar. This simple minded quantum number assignment can be done by considering that quark–quark and quark–antiquark pairs have opposite relative parity, and is confirmed by a more rigorous analysis presented in HKLM. Consider now quark propagation from a source at 0 to the point $`x`$. The propagator $`G_{ij}`$ ($`i,j`$ color indices) is an SU(2) matrix: $$G_{ij}=\left(\begin{array}{cc}a& b\\ b^{}& a^{}\end{array}\right)$$ (4) The meson ($`q\overline{q}`$), diquark ($`qq`$) and antidiquark ($`\overline{q}\overline{q}`$) propagators at $`\mu =0`$ are constructed from $`G_{ij}`$ as follows: $`\text{pion}\text{tr}GG^{}`$ $`=`$ $`(|a|^2+|b|^2)`$ (5) $`\text{scalar meson}\epsilon \text{tr}GG^{}`$ $`=`$ $`\epsilon (|a|^2+|b|^2)`$ (6) $`\text{scalar }qq\text{det}G`$ $`=`$ $`(|a|^2+|b|^2)`$ (7) $`\text{scalar }\overline{q}\overline{q}\text{det}G^{}`$ $`=`$ $`(|a|^2+|b|^2)`$ (8) $`\text{pseudoscalar }qq\epsilon \text{det}G`$ $`=`$ $`\epsilon (|a|^2+|b|^2)`$ (9) $`\text{pseudoscalar }\overline{q}\overline{q}\epsilon \text{det}G^{}`$ $`=`$ $`\epsilon (|a|^2+|b|^2)`$ (10) The notable feature of the propagators at $`\mu =0`$ is the exact degeneracy of the pion, scalar $`qq`$ and scalar $`\overline{q}\overline{q}`$ and of the scalar meson, pseudoscalar $`qq`$ and pseudoscalar $`\overline{q}\overline{q}`$. We then identify two orthogonal directions in the chiral space: a) $`\pi `$ – scalar diquark – scalar antidiquark b) $`\delta `$ – pseudoscalar diquark – pseudoscalar antidiquark All this at $`T=\mu =0`$. By increasing the temperature the “Pauli–Gürsey” lattice symmetry is preserved, so the above degeneracies remain true. The chiral condensate is reduced without changing direction, so the six susceptibilities will progressively become degenerate, and there will be no residual Goldstone particle. This behaviour is demonstrated in Fig. 7 where we plot the susceptibilities as a function of $`\beta `$. The behaviour of the susceptibilities as a function of the chemical potential has been discussed in HKLM. This is suggestive of a rotation in chiral space of the condensate, from the direction “parallel” to $`<\overline{\psi }\psi >`$ to that “parallel” to $`<\psi \psi >`$. In HKLM we have noticed that “conventional” observables used to monitor chiral symmetry (pion, $`\delta `$, chiral condensate) display a behaviour similar to that observed at finite temperature. While we confirm this finding at the qualitative level, we also note a quantitative difference: pion and $`\delta `$ become exactly degenerate at high density, while at high temperature a splitting remains – not surprisingly, of course, since the bare mass is not zero.
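The exact degeneracy of eqs. (5) and (7) can be verified configuration by configuration; the sketch below draws a random propagator of the SU(2) form of eq. (4), $`G=\left(\begin{array}{cc}a& b\\ b^{}& a^{}\end{array}\right)`$ with a minus sign on $`b^{}`$ required by pseudoreality, and checks that $`\mathrm{det}G`$ equals $`\frac{1}{2}\mathrm{tr}GG^{}`$ (the factor 2 is just the trace normalization):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal() + 1j * rng.normal()
b = rng.normal() + 1j * rng.normal()

# SU(2)-like quark propagator, G = [[a, b], [-b*, a*]]
G = np.array([[a, b], [-np.conj(b), np.conj(a)]])

pion = np.trace(G @ G.conj().T).real  # tr G G^dagger = 2(|a|^2+|b|^2)
diquark = np.linalg.det(G).real       # det G = |a|^2 + |b|^2

# det G = (1/2) tr G G^dagger holds configuration by configuration:
# the pion and scalar-diquark correlators are strictly proportional,
# which is the exact degeneracy discussed above.
assert np.isclose(diquark, 0.5 * pion)
print(diquark, 0.5 * pion)
```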
It is the apparent exact degeneracy observed for bare masses of comparable value at high density, for both $`\beta `$ values, which is sort of puzzling. The qualitative trend observed at $`\beta =1.5`$ and $`\beta =1.3`$ is quite similar: see Fig. 8, to be contrasted with Fig. 5 of HKLM. In Figs. 9 the susceptibility in the scalar diquark channel is plotted as a function of the bare mass for various chemical potentials at $`\beta `$ = 1.3 and $`\beta `$ = 1.5. It looks as though the signal from the scalar (and pseudoscalar alike) diquark susceptibilities in the chiral limit would be, in any case, much smaller than $`<\overline{\psi }\psi >`$ at zero chemical potential. It has to be stressed that the meaning of this exercise is simply to check how strong the indication of “rotation” detected with this procedure would be in the chiral limit. In addition, the usual caveats associated with the small lattice size and poor mass sample apply. These warnings issued, there might well be two effects on the chiral condensate while increasing $`\mu `$: one is indeed a rotation in chiral space, one is a reduction of its magnitude similar to that observed at high temperature. ## VI The dynamical effects of the chemical potential: two vs. three colors - speculative comments To gain further insight into the “dynamical” role of the chemical potential, and its effect on the gauge fields, we can take a look at a Toy version of the model, obtained by inverting the Dirac operator at nonzero chemical potential in a background of gauge fields generated at zero chemical potential. For these configurations the potential is obviously always “hard”, identical to that at $`\mu =0.0`$. In Fig. 10 we show the scalar diquark propagator for $`\mu =(0,.6,.8)`$ evaluated on two different configurations. Interestingly, we have found that in this case the behaviour of the diquark propagators resembles that of the infamous quenched SU(3) “baryonic pions” measured in . This suggests that the nature of the interquark forces and the deconfinement transition might well play a major role in the condensation phenomena, and thus in the critical behaviour, for SU(2) and SU(3) alike. It is essential to have the correct quark–quark and quark–antiquark forces, since they control and soften diquark condensations, including the pathological ones (which of course should disappear in a correct SU(3) calculation). In a correct QCD algorithm the “baryonic pion” condensation would not take place as the potential becomes weaker, and the pathological onset at $`m_\pi /2`$ would disappear. ## VII Summary and outlook Our work in this paper stressed those aspects which are instrumental for studying the influence of the gauge field dynamics on the phenomena expected at finite baryon density. Temperature dependence is an important tool in this context. A “Toy” model, where the gauge field dynamics is not affected by the chemical potential, proved informative as well. Although our calculations have not been intended as anything more than exploratory (remember everything was carried out on a $`6^4`$ lattice), we feel we have nonetheless managed to learn something from them, which we summarise here. We have investigated the interquark potential and found evidence of enhanced screening when a nonzero density is induced in the system, as seen in a larger plateau at large spatial separation. We underscore that the plateau builds up at the onset of the thermodynamics, i.e. when the number density starts deviating appreciably from zero.
This supports the results of heavy quark SU(2) studies that a small density of quarks induces “deconfinement” – these observations are clearly semiqualitative, and in particular the nature and the very existence of a nuclear matter state in SU(2) should be subjected to more careful scrutiny, as discussed in the body of the paper. Temporal and spatial Polyakov loops behave in different ways, the latter being nearly insensitive to the chemical potential, suggesting the different spatial and temporal ordering predicted by instanton models. These observations might offer some further insight into Polyakov loop models . Simple inspection of the heavy quark potential symmetries supports the numerical findings of HKLM, that for any non–zero chemical potential the (rather heavy) particle degeneracies noted at $`\mu =0`$ remain true, once the Fermi level shift is taken into account. This would lead to the four degenerate particles at large chemical potential, when pion and $`\delta `$, and scalar and pseudoscalar diquark, are degenerate. Interestingly, this pattern emerges from a recent model study . The behaviour of the number density changes qualitatively with temperature. Despite the many uncertainties described in the text, it seems clear that screening and deconfinement compete against condensation, and this is better seen on a “warmer” lattice, close to, yet below, $`T_c`$: there $`n\propto \mu ^3`$, consistent with a free, massless quark gas, suggesting the existence of a critical temperature for diquark condensation (i.e. a temperature beyond which diquarks will not condense at any value of the chemical potential) smaller than $`T_c`$ itself. The behaviour of the susceptibilities suggests that there are two effects on the chiral condensate while increasing $`\mu `$: one is a rotation in chiral space, one is a reduction of its magnitude (occurring in the chiral limit) similar to that we have observed at high temperature. The caveats have been spelled out above, and probably only a direct measurement of the diquark condensate can clarify this point. Assuming it is confirmed, there are two possible consequences stemming from this observation. In the first place, again, this suggests that there is a critical temperature for diquark condensation: when the temperature is high enough the magnitude of the condensate will “shrink” to zero at rather small $`\mu `$, leading to a complete realisation of the chiral symmetry $`<\overline{q}q>=<qq>=0`$. Secondly, it could be that this effect ultimately leads to the same complete restoration of chiral symmetry at large density even in the cold phase. Concerning the comparison of high temperature and high density behaviour, we noticed two interesting, and to our mind puzzling, features: while at high density we observe a complete degeneracy among the chiral partners, at high temperature we observe the residual splitting expected of nonzero bare masses; we do not know where this difference (which is numerically quite clear) comes from. At the same time, we have not observed any significant difference between the Polyakov loop correlations at large T and large $`\mu `$. Instanton models predict different ordering at large T and $`\mu `$ – “instanton-anti-instanton” pairs at high T and “polymers” at high density (see second entry of , and ) – it would be nice to detect these phenomena in lattice simulations. This would require more sensitive observables, among which the topological ones, besides lattice instanton studies, seem to be particularly promising.
We are now exploring the behaviour of the topological susceptibility, and space–time correlations of the topological charge: this should hopefully shed light on these and other relevant aspects . We stress that what has been sketched here constitutes a possible coherent scenario not inconsistent with the present data, but certainly not implied by them. Further studies are necessary to sharpen our understanding, and this will certainly require a larger set of chemical potentials and simulations in the scaling region. It is also appropriate to reiterate that according to ref. the theory with two colors and eight continuum flavours should be in the conformal phase; however, we have not detected any indication of unconventional behaviour. Either we are too far from the continuum limit, so that the continuum results do not apply, and/or the nature of this new phase is not manifest in our calculation . On the dark side this issues yet another warning about these results; on the bright side it adds to the interest of the model. A challenging question remains as to whether we can learn anything useful for real QCD. Of course, as two color QCD baryons are completely degenerate with mesons, the system is not a good approximation to real QCD - the symmetries and spectrum of two and three color QCD are certainly dramatically different. However, there might well be a continuity of the dynamical effects from $`N_c=2`$ to $`N_c=\mathrm{\infty }`$. A good example comes from the behaviour of the diquark coupling . Another from the results of the last section, which suggested that similar dynamical effects should take place when going from “Toy” to full dynamics in two and three color QCD. In addition, if we accept that the instanton picture for “real” QCD is correct, and take into account that instantons should be described as configurations belonging to the SU(2) subgroup of SU(3) , we might even speculate that the two color results would be closer to the real world than expected. ## Acknowledgments This paper extends a study initiated in HKLM, and uses the same codes: I wish to thank Simon Hands, John Kogut and Susan Morrison. I also thank: the Zentrum für Interdisziplinäre Forschung, und Fakultät für Physik der Universität Bielefeld; the Dipartimenti di Fisica delle Università degli Studi di Pisa e di Roma I La Sapienza for hospitality during various stages of this project. This work was partly supported by the TMR network Finite Temperature Phase Transitions in Particle Physics, EU contract no. ERBFMRXCT97-0122.
# The QGP-Stall or Not? ## 1 The Framework And The QGP Stall at CERN/SPS Beam Energies Let us consider two quite different equations of state (EOS) of nuclear matter within a true relativistic hydrodynamic framework (i.e., HYLANDER-C) . The first one, EOS-I, has a phase-transition to a QGP at $`T_C`$ = 200 $`MeV`$ ($`ϵ_C`$ = 1.35 $`GeV/fm^3`$) . The second EOS, EOS-II, is a purely hadronic EOS, which has been extracted from the transport code RQMD (cf., Ref. ) under the assumption of fully achieved thermalization. If one assumes for each EOS different initial conditions before the hydrodynamical expansions, one can fit simultaneously hadronic single inclusive momentum spectra and BEC, which have been measured recently by the CERN/NA44 and CERN/NA49 Collaborations (cf., ), respectively. In particular, for the acceptance of the NA44 experiment a ratio $`R_{out}/R_{side}\approx 1.15`$ was found while using both EOS, EOS-I and EOS-II. Little difference was seen in the BEC of identical pion pairs while considering the two different EOS. ## 2 The QGP Stall At BNL/RHIC Beam Energies And Conclusions In the following, we shall assume for central Au+Au collisions at BNL/RHIC beam energies a set of fireball initial conditions, IC-I, which are similar to those described in Ref. . From these fireball initial conditions, IC-I, single inclusive momentum spectra have been calculated using EOS-I and EOS-II in the hydrodynamic expansions. We note that the rapidity spectra of the two calculations differ significantly in width and normalization . Fig. 1 shows that the isotherms of the transversely expanding fluids at longitudinal coordinate $`z=0`$ also differ significantly. Since there will be only one set of measured data, we shall fit the calculation using EOS-II to the single inclusive momentum spectra of the calculation using EOS-I. In doing so, we find new initial conditions, IC-II . But now the space-time picture of the evolving fireball at freeze-out is again very similar to the one using EOS-I with IC-I. If one calculates BEC of identical pion pairs or identical kaon pairs, one finds, when comparing the calculations using EOS-I with IC-I and EOS-II with IC-II, no significant differences in the extracted ratios $`R_{out}/R_{side}`$, regardless of the pair kinematics under consideration. In particular, the assumption of the PHENIX detector acceptance leads to a ratio $`R_{out}/R_{side}\approx 1.65`$ for both choices of EOS. In summary, the larger ratio $`R_{out}/R_{side}`$ at RHIC beam energies appears to be a consequence of the expected higher energy deposit in the fireball during the heavy-ion collision, and not an indicator of the presence or absence of a phase-transition to a QGP. Of course, more theoretical analysis is necessary, but there is strong evidence that BEC do not provide a good QGP signature, since we do not yet understand the initial state of a heavy-ion collision well enough.
# Stellar Mass Function From SIM Astrometry/Photometry ## 1 Introduction Microlensing observations toward the Galactic bulge are yielding important clues about the structure of the Milky Way (Udalski et al. 1994; Alcock et al. 1997). However, the only useful parameter that is usually extracted from a microlensing event is its timescale, $`t_e`$, which is a complicated combination of the three parameters one would like to know about the lens, its mass $`M`$, its distance $`d_l`$, and its proper motion relative to the observer-source line of sight $`\mu `$. Specifically, $$t_e=\frac{\theta _e}{\mu },\theta _e=\sqrt{\frac{4GM}{c^2D}},$$ (1) where $`\theta _e`$ is the angular Einstein radius (the characteristic angular size over which the lens has a significant effect), $$D\frac{d_ld_s}{d_sd_l},$$ (2) and $`d_s`$ is the distance to the source. Thus, if one wants to use microlensing observations, for example, to measure the bulge mass function (MF), one can analyze the distribution of timescales (Zhao, Spergel, & Rich 1995; Han & Gould 1996), but to do so one must make a whole series of model-dependent assumptions, such as the distributions of the source-lens relative velocities, the source distances, and the lens distances, and the proportion of events that are due to foreground lenses in the disk rather than in the bulge itself. The scientific return from bulge microlensing observations would be increased many fold if it were possible to measure $`M`$, $`D`$, and $`\mu `$, separately for each event, especially if this were combined with measurements of $`d_s`$ and $`\mu _s`$, the distance and proper motion of the source. With these additional pieces of information, one could determine both $`d_l`$ and the absolute transverse velocity of the lens. First, bulge and disk lenses could be separately identified (from their distances and kinematics) so that the bulge and disk MFs could be measured separately and unambiguously. Second, the relative normalizations of the bulge and disk MFs could be determined so that one would know how much of the Galactic potential was attributable to each structure. Third, it would be possible to measure the number of white dwarfs and neutron stars in the bulge (Gould 1999). These stars are substantially too faint to be detected optically in the crowded bulge fields, but they would be easily revealed in a census of masses of bulge microlensing events. White dwarfs would show up as a spike in the MF at $`M0.6M_{}`$, and neutron stars have masses that are higher than those of turnoff stars. Note that the sharp white dwarf feature in the MF is spread out to a fractional width of $`𝒪(1)`$ in the $`t_e`$ distribution, so the white dwarfs cannot be picked out from the timescales. The same is basically true of neutron stars since they are only $`1.5`$ times heavier than turnoff stars. Since white dwarfs and neutron stars are remnants of main-sequence stars with masses respectively $`M_{}<M_{\mathrm{ms}}<8M_{}`$ and $`M_{\mathrm{ms}}>8M_{}`$, the specific frequency of these remnants would in turn yield information about the initial MF to very high masses. Fourth, one could determine whether the bulge contains massive objects other than those associated with the observed stars. 
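To fix the scales in eq. (1), a short sketch with illustrative (assumed) lens parameters:

```python
import numpy as np

G_N, c = 6.674e-11, 2.998e8        # SI units
M_sun, kpc = 1.989e30, 3.086e19
mas = np.pi / (180 * 3600 * 1000)  # one milliarcsecond in radians
yr = 3.156e7                       # one year in seconds

def theta_E(M, d_l, d_s):
    """Angular Einstein radius of eq. (1), with D = d_l*d_s/(d_s-d_l)."""
    D = d_l * d_s / (d_s - d_l)
    return np.sqrt(4 * G_N * M / (c**2 * D))

# Illustrative bulge self-lensing event (all parameters assumed):
th = theta_E(0.3 * M_sun, 6.0 * kpc, 8.0 * kpc)
print(th / mas * 1e3)        # theta_e ~ 320 micro-arcsec

mu_rel = 4.0 * mas / yr      # ~4 mas/yr lens-source proper motion
print(th / mu_rel / 86400)   # t_e = theta_e/mu ~ 30 days
```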
A very puzzling (but often overlooked) fact is that kinematic studies of ellipticals and spiral bulges typically yield mass-to-light ratios $`M/L_V\sim 10h\sim 7`$, much higher than the only two populations for which we have unambiguous measurements: dynamical studies of globular clusters yield $`M/L_V\sim 2`$–3 (Pryor & Meylan 1993), and a complete census of stars in the disk (Gould, Bahcall, & Flynn 1997) combined with a surface brightness of the disk (Binney & Tremaine 1987) yield $`M/L_V\sim 2`$. It is usually assumed that the bulge MF differs dramatically from these other populations. However, it is quite possible that the bulge contains substantial quantities of dark matter, either in compact objects or in diffuse material (WIMPs). The MF of the luminous stars in the bulge has now been measured in both the optical (Holtzman et al. 1998) and the infrared (Zoccali et al. 1999), so that if the total MF were measured from microlensing it would be possible to distinguish among these various competing scenarios. ## 2 Decoding Microlensing Events with SIM SIM observations can completely solve for the physical parameters of the lens and source, $`M`$, $`D`$, $`\mu `$, $`d_s`$, and $`\mu _s`$, by combining two seemingly unrelated ideas: Boden, Shao, & Van Buren (1998) showed that it was possible to measure $`\theta _e`$ from astrometric measurements of the apparent source position. Gould (1995) showed that it was possible to measure the projected Einstein radius $`\stackrel{~}{r}_e`$ $$\stackrel{~}{r}_e=D\theta _e$$ (3) from photometric measurements of the event simultaneously from the Earth and a satellite in solar orbit. It is clear that if both $`\theta _e`$ and $`\stackrel{~}{r}_e`$ are measured, then one can measure $`M`$, $`D`$, and $`\mu `$. $$M=\frac{c^2}{4G}\stackrel{~}{r}_e\theta _e,D=\frac{\stackrel{~}{r}_e}{\theta _e},\mu =\frac{\theta _e}{t_e}.$$ (4) Moreover, as we will show below, in the course of measuring $`\theta _e`$ astrometrically, one automatically measures $`\mu _s`$ and $`d_s`$. Thus, if SIM can really carry out these two measurements simultaneously, bulge microlensing events can be completely solved. How does this work? ### 2.1 Astrometry Suppose that a lens and source are separated on the sky by $`𝐮\theta _e`$, where $`𝐮=(\tau ,\beta )`$ is the separation in units of the Einstein radius, $`\beta `$ is the impact parameter of the event, $`\tau =(tt_0)/t_e`$, and $`t_0`$ is the time of closest approach. Then the source will be split into two images with positions $`𝐮_\pm \theta _e`$ and magnifications $`A_\pm `$, $$𝐮_\pm =\left[\frac{u\pm \sqrt{u^2+4}}{2}\right]\frac{𝐮}{u},A_\pm =\frac{A\pm 1}{2},A=\frac{u^2+2}{u\sqrt{u^2+4}}.$$ (5) The separation between the images ($`2\theta _e`$) is of order 100s of $`\mu `$as and so is far too small to be resolved by SIM with its 10 mas central fringe. However, as Boden et al. (1998) showed, the displacement of the image centroid from the “true” position of the source is $$(A_+𝐮_++A_{}𝐮_{}𝐮)\theta _e=\frac{𝐮}{u^2+2}\theta _e,$$ (6) and therefore has a maximum of $`\theta _e/\sqrt{8}`$ (at $`u=\sqrt{2}`$) and so is well within SIM’s capabilities. Of course, just measuring the apparent position of the source does not by itself yield the displacement due to lensing. One must also know where the source would have appeared in the absence of lensing. To determine this, one must measure the distance $`d_s`$ (i.e.
the parallax $`\pi _s=\mathrm{AU}/d_s`$) and proper motion $`\mu _s`$ of the source at late times (when its apparent position is not influenced by the lens), then project its “true” position backwards to the time of the event. In principle, it is also possible to measure $`\stackrel{~}{r}_e`$ from astrometry, but since the deviations caused by Earth’s motion are a higher order effect, this is not the most practical method (Gould & Salim 1999). ### 2.2 Photometry The Einstein radius projected onto the plane of the observer is typically a few AU, and so the satellite and the Earth see significantly different events, with different impact parameters $`\beta `$ and different times of maximum $`t_0`$. (The timescales $`t_e`$ are also slightly different, but this is a higher order effect which we will ignore for the moment.) Hence, the position in the Einstein ring will differ by $$\mathrm{\Delta }𝐮=(\mathrm{\Delta }\tau ,\mathrm{\Delta }\beta )$$ (7) where $`\mathrm{\Delta }\tau =(t_{0,\mathrm{sat}}+t_{0,\oplus })/t_e`$ and $`\mathrm{\Delta }\beta =\beta _{\mathrm{sat}}\beta _{\oplus }`$. By measuring $`\beta `$ (from the peak magnification) and $`t_0`$ (from the time of peak magnification) from the Earth and satellite, one can therefore measure $`\mathrm{\Delta }𝐮`$. It is then possible to determine $`\stackrel{~}{r}_e`$ by using $$\stackrel{~}{r}_e=\frac{d_{\mathrm{sat}}}{\mathrm{\Delta }u},$$ (8) where $`d_{\mathrm{sat}}`$ is the distance to the satellite projected onto the plane of the sky. Actually, there is a bit of a complication in that the impact parameter can be on either side of the lens so that $`\beta _{\mathrm{sat}}`$ and $`\beta _{\oplus }`$ can each be of either sign, while the measurement of $`\beta `$ from the light curve is sensitive only to its square (i.e., its amplitude but not its sign) (Refsdal 1966; Gould 1994). Hence $`\mathrm{\Delta }\beta =\pm (\beta _{\mathrm{sat}}\pm \beta _{\oplus })`$ and so cannot be unambiguously determined simply by measuring $`\beta _{\mathrm{sat}}`$ and $`\beta _{\oplus }`$. However, Gould (1995) showed that this ambiguity could be resolved using the small difference in $`t_e`$ as measured by the Earth and satellite. ## 3 SIM: Simultaneous Astrometry and Photometry In fact, although SIM is designed to do astrometry, the astrometric measurements are done by counting photons over the central fringe. The sum of these photon counts is a photometric measurement. Thus SIM simultaneously does astrometry and photometry. Of course, for most purposes, photometry using 25 cm mirrors is not very interesting. However, in the present case, the fact that SIM is making the photometric measurement at several tenths of an AU from the Earth is what is crucial. For photon-limited measurements, the ratio of the fractional photometric error $`\sigma _{\mathrm{ph}}`$ to the astrometric error $`\sigma _\theta `$ is given by $$\sigma _{\mathrm{ph}}=\frac{\sigma _\theta }{\theta _f},\theta _f\equiv \frac{\lambda }{2\pi d}\approx 2.5\mathrm{mas},$$ (9) where $`d\approx 10`$ m is the distance between the mirrors and $`\lambda `$ is the wavelength of the light, in this case taken to be $`\lambda \approx 0.8\mu `$m, appropriate for bulge clump giants. Gould & Salim (1999) showed that by combining such photometric measurements, which can be generated simultaneously with SIM astrometric measurements, it should be possible to measure $`M`$, $`D`$, $`\mu `$, $`d_s`$, and $`\mu _s`$ all to better than 5% precision with about 5 hours of SIM time for bulge microlensing events with $`I=15`$ sources.
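The full decoding chain of eqs. (4), (6) and (8) can be sketched as follows; all input numbers are illustrative, not measured values:

```python
import numpy as np

G_N, c = 6.674e-11, 2.998e8
AU, M_sun = 1.496e11, 1.989e30
uas = np.pi / (180 * 3600 * 1e6)  # micro-arcsecond in radians

# Astrometric centroid shift, eq. (6): delta(u) = u/(u^2+2)*theta_e,
# maximal at u = sqrt(2), where it equals theta_e/sqrt(8):
u = np.sqrt(2.0)
print(u / (u**2 + 2), 1 / np.sqrt(8))  # both 0.35355...

# Decoding: theta_e from the astrometric fit, (dtau, dbeta) from the
# Earth-satellite light curves, then eqs. (8) and (4):
theta_e = 320.0 * uas        # assumed astrometric result
d_sat = 0.2 * AU             # assumed projected Earth-satellite baseline
delta_u = np.hypot(0.02, 0.03)
r_e_proj = d_sat / delta_u   # projected Einstein radius, eq. (8)
M = c**2 * r_e_proj * theta_e / (4 * G_N)
print(M / M_sun)             # ~0.2 solar masses
```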
Over the SIM lifetime, there should be of order 100 such events, so about 100 mass measurements are possible. ###### Acknowledgements. This work was supported in part by grant AST 97-27520 from the NSF and in part by grant NAG5-3111 from NASA.
# Warm Gas and Ionizing Photons in the Local Group ## 1 Introduction The standard Big Bang cosmological model makes remarkably precise predictions for the abundance of baryons in the universe: in terms of the critical density parameter $`\mathrm{\Omega }`$, the prediction is $`\mathrm{\Omega }_b\approx (0.068\pm 0.012)h^{2}`$, where we take the present-day Hubble constant to be $`H_0=100h\mathrm{km}\mathrm{s}^{1}\mathrm{Mpc}^{1}`$ (e.g., Olive et al. 1991; Schramm & Turner 1998). An inventory of baryons observed at high redshift ($`z\approx 2`$–3), chiefly in the form of the low-column density Ly$`\alpha `$-forest clouds, gives an estimate of $`\mathrm{\Omega }_b`$ which, although subject to substantial, systematic uncertainties, is in reasonable agreement with the standard prediction of Big Bang nucleosynthesis (see the summary in Fukugita, Hogan, & Peebles 1998). However, as has been noted repeatedly (e.g., Persic & Salucci 1992; Fukugita et al. 1998), at $`z\approx 0`$ only a small fraction of the expected number of baryons has been observed, suggesting that there is a substantial, even dominant reservoir of baryons which has not yet been characterized. A plausible suggestion for one reservoir of baryons is that loose groups of galaxies contain substantial masses of warm ($`T<10^7`$ K) ionized gas, an idea which appears to have originated with Kahn & Woltjer (1959; see also Oort 1970; Hunt & Sciama 1972). X-ray observations of poor groups of galaxies frequently detect intragroup gas at $`T\approx 1`$ keV (e.g., Pildis, Bregman, & Evrard 1995; Mulchaey et al. 1996). In general, only groups dominated by ellipticals are detected; spiral-rich groups tend to show only emission from individual galaxies. Although this may be due to the absence of gas in such groups, it is also plausible that the gas has not been seen because its temperature is too low: the velocity dispersions characterizing groups dominated by spiral galaxies are significantly smaller than those of compact, elliptical-dominated groups, and imply temperatures $`T\approx 0.2`$–0.3 keV (Mulchaey et al. 1996), making detection even at relatively soft X-ray wavelengths very difficult. Most recently, Blitz et al. (1998) have suggested that the majority of high-velocity clouds (HVCs; for a review, see Wakker & van Woerden 1997) are not associated with the Galactic ISM, but represent remnants of the formation of the Local Group (LG), as material continues to fall into the LG potential. In this scenario, some fraction of these infalling clouds will collide in the vicinity of the LG barycenter and shock up to the virial temperature, $`T\approx 2\times 10^6`$ K, producing a warm intragroup medium. In this paper, we explore the possibility that the Local Group contains such a reservoir of warm ionized gas. In particular, we examine whether significant constraints can be placed on the amount of gas through the detection of recombination lines from neutral gas within the Local Group. In the next section we briefly recapitulate the existing constraints on such an intragroup medium; in §3 we estimate the flux of ionizing photons; and §4 discusses the implications and additional constraints which can be imposed, in particular, mass flux due to cooling and the timing mass of the Local Group. ## 2 COBE and X-Ray Constraints on Local Group Gas Suto et al. (1996) suggested that a gaseous LG halo could significantly influence the CMB quadrupole moment observed by COBE.
Assume the Local Group contains an isothermal plasma at temperature $`T_e`$ whose electron number density is (for core density $`n_o`$ and core radius $`r_o`$) $$n_e(r)=n_o\frac{r_o^2}{r^2+r_o^2}\mathrm{cm}^{-3};$$ (1) i.e., the nonsingular isothermal sphere. Since we allow $`r_o`$ as well as $`n_o`$ to vary, the parameterization of equation (1) includes density distributions ranging from $`n_e`$ constant to $`n_e\propto r^{-2}`$. As in Suto et al. (1996), we calculate the resulting Sunyaev-Zeldovich temperature decrement as a function of angle, expand in spherical harmonics and average over the sky to obtain the monopole and quadrupole anisotropies. The COBE FIRAS data (Fixsen et al. 1996) imply that the Compton $`y`$-parameter $`|y|=T_{0,\mathrm{sz}}/2<1.5\times 10^{-5}`$ (95% CL), which imposes the constraint $$n_or_oT_{\mathrm{keV}}<7.4\times 10^{21}\theta _o^{-1}\frac{R}{r_o}\left(\frac{y}{1.5\times 10^{-5}}\right)\mathrm{cm}^{-2},$$ (2) where $`\theta _o\equiv \mathrm{tan}^{-1}(R/r_o)`$. Similarly, the COBE quadrupole moment requires $$n_or_oT_{\mathrm{keV}}<1.6\times 10^{20}Q_{\mu \mathrm{K}}\frac{R}{r_o}\left[\theta _o-3\left(\frac{r_o}{R}\right)+3\theta _o\left(\frac{r_o}{R}\right)^2\right]^{-1}\mathrm{cm}^{-2}$$ (3) where the rms quadrupole amplitude $`Q_{\mathrm{RMS}}=10^{-6}Q_{\mu \mathrm{K}}`$ K; the observed value is $`Q_{\mu \mathrm{K}}\simeq 6`$ (e.g., Bennett et al. 1996). Suto et al. (1996) argued that a LG corona which satisfied equation (2) could significantly affect the quadrupole term, as equation (3) is more restrictive than (2). However, Banday & Gorski (1996) showed there is no evidence for a LG corona in the COBE DMR skymaps. In addition, Pildis & McGaugh (1996) pointed out that the typical values of $`n_or_oT_{\mathrm{keV}}`$ observed in poor groups of galaxies, resembling the Local Group, are well below the limit (2), generally no more than a few $`\times 10^{20}\mathrm{cm}^{-2}`$. Furthermore, spiral-rich groups usually reveal no evidence for intragroup gas at X-ray energies; Pildis & McGaugh give upper limits of a few $`\times 10^{19}\mathrm{cm}^{-2}`$ for an assumed temperature $`T_{\mathrm{keV}}\sim 1`$. Thus, although the COBE constraints on a LG corona are in fact quite weak<sup>1</sup><sup>1</sup>1There is some confusion in the literature regarding the interpretation of the COBE limits. In evaluating eq. (9) of Suto et al. (1996), there is no numerical fudge factor suggested by Pildis & McGaugh (1996). Moreover, in Fig. 4 of Suto et al., the 6$`\mu `$K curve is displaced downwards by a factor of 3., analogy with similar poor groups suggests that the LG is unlikely to have a significant gaseous X-ray corona. However, as noted in §1, the lower temperature expected for the gas in spiral-rich groups significantly relaxes the X-ray constraints on warm gas in groups similar to the LG. If the product $`n_or_oT_{\mathrm{keV}}`$ in a LG corona is typical of that seen in more compact groups, merely at lower temperature, the mass in baryons can still be very substantial: for the density distribution (1), scaling $`n_or_oT_{\mathrm{keV}}`$ to $`10^{20}\mathrm{cm}^{-2}`$, the mass inside radius $`r`$ is approximately (assuming $`r/r_o\gtrsim `$ a few) $$M(r)\simeq 7\times 10^{11}\left(\frac{r_o}{100\mathrm{kpc}}\right)^2\left(\frac{r}{r_o}\right)\left(\frac{n_or_oT_{\mathrm{keV}}}{10^{20}\mathrm{cm}^{-2}}\right)\left(\frac{T_{\mathrm{keV}}}{0.2}\right)^{-1}M_{\odot };$$ (4) this could be a substantial fraction of the mass of the Local Group (see §4). Direct detection of emission from gas at such temperatures is exceedingly difficult.
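For orientation, the COBE bound (2) and the mass estimate (4) are simple to evaluate numerically. A minimal sketch (the parameter values in the example are illustrative, not taken from the paper):

```python
import numpy as np

def cobe_y_limit(r_o_kpc, R_kpc, y=1.5e-5):
    """Upper limit on n_o * r_o * T_keV [cm^-2] from eq. (2)."""
    ratio = R_kpc / r_o_kpc
    theta_o = np.arctan(ratio)
    return 7.4e21 / theta_o * ratio * (y / 1.5e-5)

def corona_mass(r_kpc, r_o_kpc=100.0, noroT=1e20, T_keV=0.2):
    """Approximate gas mass [M_sun] inside r from eq. (4), valid for r >> r_o."""
    return (7e11 * (r_o_kpc / 100.0)**2 * (r_kpc / r_o_kpc)
            * (noroT / 1e20) * (0.2 / T_keV))

# Example: a corona with r_o = 100 kpc, evaluated out to r = 1 Mpc
print(f"COBE monopole bound: {cobe_y_limit(100.0, 1000.0):.1e} cm^-2")
print(f"enclosed gas mass:   {corona_mass(1000.0):.1e} M_sun")
```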
Using deep ROSAT observations, Wang & McCray (1993) (WM) find evidence for a diffuse thermal component with T<sub>keV</sub> $`\sim `$ 0.2 and $`n_e\sim 1\times 10^{-2}x_{\mathrm{kpc}}^{-0.5}`$ cm<sup>-3</sup> (assuming primordial gas), where $`x_{\mathrm{kpc}}`$ is the line-of-sight depth within the emitting gas in kiloparsecs. In the next section we consider an indirect method of detection: the recombination radiation from neutral gas embedded in the corona, due to the ionizing photon flux generated by the corona gas. ## 3 Ionizing Photon Flux from a Local Group Corona We assume the density distribution (1). Approximating the surface of a cloud as a plane-parallel slab, the normally incident flux on the inner (facing $`r=0`$) cloud face is $$\varphi _i(r)\simeq \frac{\pi n_o^2r_o}{(1+r^2/r_o^2)^{1.5}}\xi _i\left[0.8+1.3(r/r_o)^{1.35}\right]\mathrm{phot}\mathrm{cm}^{-2}\mathrm{s}^{-1}$$ (5) where $`\xi _i`$ is the frequency-integrated ionizing photon emissivity and the term in brackets is accurate to 10% for $`10^{-3}\lesssim r/r_o\lesssim 12`$. (For $`r/r_o\gtrsim 2`$, the flux on the outer face of the cloud is insignificant.) To calculate $`\xi _i`$, we have used the photoionization/shock code MAPPINGS (kindly provided by Ralph Sutherland). Models have been calculated for metal abundances $`Z=0.01`$, $`0.1`$, and $`0.3`$ times solar, and for equilibrium and nonequilibrium ionization. For $`10^4<T<10^7`$ K, $`3\times 10^{-15}\lesssim \xi _i\lesssim 3\times 10^{-14}\mathrm{phot}\mathrm{cm}^3\mathrm{s}^{-1}`$ sr<sup>-1</sup>. Scaling to physical values, $$\varphi _i(r)\simeq 10^4n_3^2r_{100}\left(\frac{\xi _i}{10^{-14}}\right)\frac{\left[0.8+1.3(r/r_o)^{1.35}\right]}{(1+r^2/r_o^2)^{1.5}}\mathrm{phot}\mathrm{cm}^{-2}\mathrm{s}^{-1}$$ (6) where the central density $`n_o=10^{-3}n_3\mathrm{cm}^{-3}`$ and the core radius $`r_o=100r_{100}`$ kpc. Poor groups show a very broad range of core radii, from tens to hundreds of kpc (Mulchaey et al. 1996), and typical central densities $`n_o\sim `$ a few $`\times 10^{-3}\mathrm{cm}^{-3}`$ (Pildis & McGaugh 1996). In Fig. 1, we plot $`\varphi _i`$ as a function of core radius $`r_o`$ for densities $`n_o=(1,3,10)\times 10^{-3}`$ cm<sup>-3</sup>, for a metallicity $`Z=0.1`$ times solar; results differ by $`\lesssim 20\%`$ for the other values of $`Z`$. The value of $`\varphi _i`$ is evaluated at $`r=350`$ kpc, the assumed distance $`r_{\mathrm{MW}}`$ of the Galaxy from the center of the LG (solid lines), and at $`r=0`$ (dashed lines). The fluxes can be very large, exceeding $`10^6\mathrm{phot}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. However, for $`r_o\ll r_{\mathrm{MW}}`$, equation (6) shows that the incident flux at $`r_{\mathrm{MW}}`$ is greatly diminished compared to the peak value of $`\varphi _i`$. At distances $`r\sim 2r_o`$ or less, the ionizing flux produced by a LG corona could be large enough for detection in H$`\alpha `$: the emission measure is related to the normally incident photon flux by $`\mathcal{E}_m=1.25\times 10^{-2}(\varphi _i/10^4)`$ cm<sup>-6</sup> pc. However, to produce a significant flux, $`n_o`$ must be so large that the cooling time $`t_c`$ within $`r\sim r_o`$ is short, $`t_c\lesssim 10^9`$ years. Even though the LG may be, dynamically, considerably younger than a Hubble time, such a short cooling timescale makes it necessary to consider explicitly the fate of cooling gas. To estimate the mass cooling flux $`\dot{M}`$, we assume that the flow is steady, spherical, and subsonic, and that any gradients in the potential are small compared to the square of the sound speed.
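Before turning to the cooling estimate, note that the flux scaling (6) and the emission-measure relation above are straightforward to evaluate; a minimal sketch, with illustrative example parameters:

```python
def phi_i(r_kpc, r_o_kpc, n3=1.0, xi=1e-14):
    """Ionizing photon flux [phot cm^-2 s^-1] from eq. (6)."""
    x = r_kpc / r_o_kpc
    return (1e4 * n3**2 * (r_o_kpc / 100.0) * (xi / 1e-14)
            * (0.8 + 1.3 * x**1.35) / (1.0 + x**2)**1.5)

def emission_measure(phi):
    """Emission measure [cm^-6 pc] for a normally incident flux phi."""
    return 1.25e-2 * (phi / 1e4)

# flux at the Galaxy's assumed offset r = 350 kpc, for n_o = 3e-3 cm^-3, r_o = 200 kpc
phi = phi_i(350.0, 200.0, n3=3.0)
print(f"phi_i = {phi:.2e} phot cm^-2 s^-1, EM = {emission_measure(phi):.2e} cm^-6 pc")
```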
Under these assumptions the pressure is constant, and mass conservation requires that $`\dot{M}=4\pi \rho vr^2`$; $`v`$ is the inflow velocity. The cooling radius $`r_c`$ is set by the condition $`t_c\sim t_{\mathrm{LG}}`$, the Local Group age. The flow time from $`r_c`$ is $`t_f\sim r_c/v\sim 4\pi \rho _cr_c^3/\dot{M}`$, where $`\rho _c`$ is the gas density at $`r_c`$. We assume $`t_f\sim t_c`$, so that the gas has time to cool before reaching $`r=0`$. This sets $`v\sim r_c/t_{\mathrm{LG}}`$ at $`r_c`$. If the cooling function $`\mathrm{\Lambda }`$ does not vary rapidly with $`T`$, the density and temperature within $`r_c`$ scale nearly as $`\rho \propto r_c/r`$, $`T\propto r/r_c`$ (Fabian & Nulsen 1977). We have used these scalings to calculate $`\dot{M}`$ and $`\varphi _i`$, including the variation of $`\xi _i`$ and $`\mathrm{\Lambda }`$ with radius. Fig. 2 shows several models. The ionizing flux can be large for small $`\dot{M}`$ if $`r_o`$ is large and $`n_o`$ is low, but in many cases $`\dot{M}`$ is prohibitively large, ruling out any such coronae. However, there are several important caveats. Unless the LG is very old, it is unlikely that a steady-state flow has been established (e.g., Tabor & Binney 1993), especially as infall of gas into the LG is likely to be ongoing. (If a steady-state flow existed with substantial $`\dot{M}`$, one would expect the line luminosity – e.g., H$`\alpha `$ – from the cooled gas to be high: see Donahue & Voit 1991.) Furthermore, $`\dot{M}`$ is sensitive to the assumed density distribution. For a given metallicity (and therefore $`\mathrm{\Lambda }(T)`$) and LG age, there is a unique value of $`n_o`$ at which $`t_c=t_{\mathrm{LG}}`$ and $`\dot{M}\to 0`$. As $`n_o`$ is raised above this value $`\dot{M}`$ increases rapidly, since $`r_c`$ increases and $`\dot{M}\propto r_c^2`$. The value of $`\varphi _i`$ at a given value of $`\dot{M}`$ also depends on $`Z`$, since the reduced $`\mathrm{\Lambda }`$ for low $`Z`$ means that $`n_o`$ is larger for a fixed $`t_c`$. Given these uncertainties, it is not clear that the estimated values of $`\dot{M}`$ should be regarded as serious constraints. ## 4 Discussion The results of the previous section show that a warm Local Group corona could in principle generate a large enough ionizing photon flux to produce detectable H$`\alpha `$ emission from neutral hydrogen clouds embedded within it. This would offer an indirect probe of gas which is extremely difficult to observe in emission. Whether the flux seen by clouds at distances comparable to the offset of the Galaxy from the center of the Local Group is high enough for detection depends to a large extent on the core radius characterizing the gas distribution, due to the dropoff in flux for $`r`$ substantially greater than $`r_o`$. As shown in Fig. 1, for sufficiently large values of $`r_o`$ and $`n_o`$, $`\varphi _i`$ can be detectably large even at a few hundred kpc from the LG barycenter. These large-$`n_o`$, large-$`r_o`$ models run into insurmountable difficulties, however, when we examine the additional constraints which can presently be imposed on a LG corona. In Fig. 3 we show, shaded in gray, the range in ($`r_o,n_o`$) for which the resulting ionizing photon flux is between $`\varphi _i=10^4`$ and $`\varphi _i=10^5`$ phot cm<sup>-2</sup> s<sup>-1</sup>, for radial offsets $`r=0`$ (lower region) and $`r=r_{\mathrm{MW}}=350`$ kpc (upper region).
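Mapping such a region numerically is straightforward; a minimal sketch (the grid bounds are illustrative):

```python
import numpy as np

def phi_i(r_kpc, r_o_kpc, n3, xi=1e-14):
    # ionizing flux [phot cm^-2 s^-1], eq. (6)
    x = r_kpc / r_o_kpc
    return (1e4 * n3**2 * (r_o_kpc / 100.0) * (xi / 1e-14)
            * (0.8 + 1.3 * x**1.35) / (1.0 + x**2)**1.5)

r_o = np.logspace(1, 3, 200)      # core radius grid, 10-1000 kpc
n3 = np.logspace(-1, 2, 200)      # central density grid, in units of 1e-3 cm^-3
RO, N3 = np.meshgrid(r_o, n3)

for r in (0.0, 350.0):            # LG center and the Galaxy's offset
    flux = phi_i(r, RO, N3)
    band = (flux > 1e4) & (flux < 1e5)
    print(f"r = {r:5.0f} kpc: {band.sum()} grid points give 1e4 < phi_i < 1e5")
```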
The cosmic background is probably $`\varphi _{\mathrm{i},\mathrm{cos}}\sim 10^4`$ phot cm<sup>-2</sup> s<sup>-1</sup> (Maloney & Bland-Hawthorn 1999: MBH). We also plot the following constraints: (1) The assumption that any LG intragroup medium is “typical” (Mulchaey et al. 1996; Pildis & McGaugh 1996) constrains the product $`n_or_o\lesssim 1.5\times 10^{21}\mathrm{cm}^{-2}`$, assuming $`T_{\mathrm{keV}}\sim 0.2`$. This is plotted as the short-dashed line in Fig. 3. Any corona which is not unusually rich must lie to the left of this line. This restriction alone rules out any significant contribution to $`\varphi _i`$ at $`r_{\mathrm{MW}}`$. (2) Assuming that the relative velocity of approach of the Galaxy and M31 is due to their mutual gravitational attraction, one can estimate the mass $`M_T`$ of the Local Group (Kahn & Woltjer 1959; q.v., Zaritsky 1994). This ‘timing mass’ depends somewhat on the choice of cosmology; we take $`M_T=5\times 10^{12}M_{\odot }`$ within $`r=1`$ Mpc of the LG center. The timing mass constraint (using equation 4) is shown as the solid line in Fig. 3. As plotted, it is barely more restrictive than the COBE quadrupole constraint (the long-dashed line), and is only more stringent than restriction (1) for large core radii. However, realistically the $`M_T`$ constraint is much more severe, as the Milky Way and M31 undoubtedly dominate the mass of the Local Group, and so the timing mass curve in Fig. 3 should be moved downward in density by a factor of at least 5–10. (3) We possess some information on (more precisely, upper limits to) the actual electron densities at $`r\sim r_{\mathrm{MW}}`$. Constraints on $`n_e(r_{\mathrm{MW}})`$ come from two sources. Observations of dispersion measures $`𝒟_m`$ toward pulsars in the LMC and the globular cluster NGC 5024 (Taylor, Manchester & Lyne 1993) require a mean $`n_3\lesssim 1`$; this is a slightly weaker constraint than provided by $`M_T`$. However, most of this column must be contributed by the Reynolds layer, and some fraction of the $`𝒟_m`$ toward the LMC pulsars presumably arises within the LMC, so probably $`\lesssim 10\%`$ can be due to a LG corona. Second, a mean density of no more than $`n_3\sim 0.1`$ is allowed by models of the Magellanic Stream; otherwise, the Stream clouds would be plunging nearly radially into the Galaxy (Moore & Davis 1994). This limits the central density to $`n_3\lesssim 0.1\left[1+(r_{\mathrm{MW}}/r_o)^2\right]`$. The hatched region in Fig. 3 indicates the portion of $`(r_o,n_o)`$ space in which $`n_e(r_{\mathrm{MW}})\gtrsim 10^{-4}`$ cm<sup>-3</sup>. (4) As noted earlier (§2), WM found evidence for a thermal soft X-ray component at $`T_{\mathrm{keV}}\sim 0.2`$. If this emission arises in a LG corona, then the corresponding electron density as derived from the emission measure $`\mathcal{E}_m`$ is $`n_e\sim 3\times 10^{-4}x_{\mathrm{Mpc}}^{-0.5}`$ cm<sup>-3</sup>, where $`x`$ is the extent of the emitting region along the line of sight; the density would be $`\sim 3`$ times smaller for gas of solar rather than zero metallicity. This density constraint is comparable to the $`𝒟_m`$ constraint plotted in Fig. 3. Some of these constraints can be avoided if the corona gas is clumped. The estimates of mass (equation 4) and $`\varphi _i`$ assume a smooth density distribution.
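The simplest of these bounds can be checked directly for a trial corona; a minimal sketch using the threshold values quoted above (the test points are illustrative):

```python
KPC_CM = 3.086e21  # centimeters per kiloparsec

def corona_allowed(r_o_kpc, n3, T_keV=0.2, r_mw_kpc=350.0):
    """Check constraints (1) and (3) above for a trial (r_o, n_o) corona."""
    n_o = n3 * 1e-3                                      # cm^-3
    column = n_o * r_o_kpc * KPC_CM * T_keV              # n_o r_o T_keV [cm^-2]
    typical = column < 1.5e21                            # "typical" poor group
    stream = n3 < 0.1 * (1.0 + (r_mw_kpc / r_o_kpc)**2)  # Magellanic Stream limit
    return typical and stream

for r_o, n3 in [(100.0, 0.5), (100.0, 3.0), (600.0, 1.0)]:
    print(r_o, n3, corona_allowed(r_o, n3))   # True, False, False
```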
However, if the actual densities are a factor $`C`$ higher than the mean (smoothed) density at a given radius, $`\varphi _i`$ can be kept constant while reducing both the gas mass and $`𝒟_m`$ by $`1/C`$. This is ad hoc, but if the LG halo is being fueled by ongoing infall, it would not be at all surprising for the gas distribution to be nonuniform. However, the WM X-ray determination is unaffected by clumping, as it is derived from $`\mathcal{E}_m`$. The constraints on a LG corona shown in Fig. 3 rule out a significant contribution to the ionizing flux at $`r\sim r_{\mathrm{MW}}`$. If the core density $`n_o`$ is high, the core radius $`r_o`$ must be small; conversely, for large $`r_o`$, $`n_o`$ must be low. LG coronae within the allowed region of parameter space can produce fluxes $`\varphi _i\gtrsim \varphi _{\mathrm{i},\mathrm{cos}}`$, but only on scales of a few tens of kpc, at best. Thus the maximum volume in which a corona ionizing flux exceeds $`\varphi _{\mathrm{i},\mathrm{cos}}`$ is only of order $`1\%`$ of the LG volume, comparable to the volume which can be ionized by galaxies (MBH). This has important implications for the model of Blitz et al. (1998), in which most HVCs are remnants of the formation of the Local Group. If HVCs are at megaparsec distances, $`\varphi _i`$ will be dominated by the cosmic background. The resulting emission measures will be small: barring unusually favorable geometries, the expected H$`\alpha `$ surface brightnesses ($`\lesssim 10`$ mR) are at the limit of detectability. Any HVCs which are truly extragalactic and detectable in H$`\alpha `$ would need to lie close to the dominant spiral galaxies (within their “ionization cones”: Bland-Hawthorn & Maloney 1999a,b) or the LG barycenter. In summary, a warm LG corona which significantly dominates the UV emission within the Local Group is ruled out, although such a corona could contain a cosmologically significant quantity of baryons. More massive galaxy groups could well contain coronae that are cosmologically important and that dominate over the ionizing background. Such coronae could have major impact on the group galaxies through ionization and ram pressure stripping<sup>2</sup><sup>2</sup>2We note that, in principle, observations of the O VI doublet at 1032 and 1038 Å are extremely sensitive to the presence of such a corona: for the maximum allowed coronae of Fig. 3, the expected line fluxes could be as large as $`F\sim \mathrm{a}\mathrm{few}\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. However, the observational difficulties (absorption and scattering of the photons within the ISM of the Galaxy and the very large spatial extent of the source for a LG corona) are severe.. Finally, we note that, four decades later, the observational limits on a LG corona have yet to improve on the values suggested by Kahn & Woltjer (1959). PRM is supported by the Astrophysical Theory Program under NASA grant NAG5-4061.
# Radius-Mass Scaling Laws for Celestial Bodies R. Muradian, S. Carneiro and R. Marques Instituto de Física, Universidade Federal da Bahia 40210-340, Salvador, BA, Brasil ## Abstract In this letter we establish a connection between two-exponent radius-mass power laws for cosmic objects and previously proposed two-exponent Regge-like spin-mass relations. A new, particularly simple method for establishing the coordinates of the Chandrasekhar and Eddington points is proposed. In previous papers , Muradian has suggested the two-exponent Regge-like relation $$J=\hbar \left(\frac{m}{m_p}\right)^{1+1/n}$$ (1) between the observed mass $`m`$ and angular momentum $`J`$ of celestial bodies. In this relation $`\hbar `$ and $`m_p`$ stand, respectively, for the Planck constant and for the proton mass. The exponent $`n=3`$ for star-like objects and $`n=2`$ for multistellar ones, like galaxies and clusters of galaxies. Relation (1), besides fitting the observational data reasonably well (see Figure 1), presents two remarkable points: equating (1) to the Kerr limit $`J^{Kerr}=Gm^2/c`$ for the angular momentum of a rotating black hole, we obtain $$m=m_p\left(\frac{\hbar c}{Gm_p^2}\right)^{\frac{n}{n-1}}$$ (2) which, for $`n=3`$, can be identified with the Chandrasekhar mass $`m_{Ch}=m_p\left(\frac{\hbar c}{Gm_p^2}\right)^{3/2}`$ and, for $`n=2`$, with the Eddington mass $`m_E=m_p\left(\frac{\hbar c}{Gm_p^2}\right)^2`$. The corresponding limiting angular momenta can be obtained by substitution of these expressions into (1) (or into the Kerr relation), as has been shown in , : $`J_{Ch}=\hbar \left(\frac{\hbar c}{Gm_p^2}\right)^2`$ and $`J_E=\hbar \left(\frac{\hbar c}{Gm_p^2}\right)^3`$. In a recent paper , Pérez-Mercader has suggested the existence of a two-exponent scaling relation between the mass and radius of cosmic objects. Now we will try to establish a connection between such a relation and the above referred Regge-like trajectories in the $`J`$–$`m`$ plane. First of all, let us note that a theoretical relation between $`m`$ and $`r`$ should be valid, in particular, for the Chandrasekhar and Eddington points, for which the following relation holds $$r=r_p\left(\frac{m}{m_p}\right)^{1/n}=\frac{\hbar }{m_pc}\left(\frac{m}{m_p}\right)^{1/n}$$ (3) where $`r_p=\hbar /m_pc`$ stands for the proton radius. Here, for $`n=3`$ and $`m=m_{Ch}`$ we obtain the radius of a neutron star, $`r_{NS}=\frac{\hbar }{m_pc}\left(\frac{\hbar c}{Gm_p^2}\right)^{1/2}`$, while for $`n=2`$ and $`m=m_E`$ the radius of the observable Universe follows, $`r_U=\frac{\hbar }{m_pc}\frac{\hbar c}{Gm_p^2}.`$ In this last case relation (3) is just a possible expression for the well known large number coincidences , . In this way, (3) seems to be a good candidate for a theoretical two-exponent law relating the radius and mass of the primordial dense proto-objects from which the present day cosmic bodies originated, in the sense of Ambartsumian cosmogony (see and references therein). Another observation in favor of this suggestion is connected to the fact that the same relations for $`r_{NS}`$ and $`r_U`$ follow from the expression for half of the gravitational Schwarzschild radius, $`r=Gm/c^2`$, after substitution of the Chandrasekhar $`m_{Ch}`$ or Eddington $`m_E`$ masses. This is consistent with the above mentioned fact that the Chandrasekhar and Eddington points correspond in the $`J`$–$`m`$ Chew-Frautschi plane to maximally rotating black holes (see Figure 1).
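The coordinates of the Chandrasekhar and Eddington points follow directly from these combinations of constants; a minimal numerical sketch:

```python
hbar = 1.0546e-34   # J s
c = 2.998e8         # m s^-1
G = 6.674e-11       # m^3 kg^-1 s^-2
m_p = 1.6726e-27    # kg
M_sun = 1.989e30    # kg

alpha_g = hbar * c / (G * m_p**2)   # (hbar c)/(G m_p^2), about 1.7e38
r_p = hbar / (m_p * c)              # proton radius scale of eq. (3)

m_ch = m_p * alpha_g**1.5           # Chandrasekhar point, n = 3
m_ed = m_p * alpha_g**2             # Eddington point, n = 2
r_ns = r_p * alpha_g**0.5           # neutron-star radius of eq. (3)
r_u = r_p * alpha_g                 # Universe radius scale of eq. (3)

print(f"m_Ch = {m_ch/M_sun:.2f} M_sun, r_NS = {r_ns/1e3:.1f} km")
print(f"m_E  = {m_ed/M_sun:.2e} M_sun, r_U  = {r_u:.2e} m")
# both points lie on the black-hole line r = G m / c^2:
print(f"{G*m_ch/c**2/1e3:.1f} km, {G*m_ed/c**2:.2e} m")
```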
Relation (3) is plotted in Figure 2 together with the observational data and the line $`r=Gm/c^2`$. As expected, the neutron star and the Universe lie on this black hole line. The theoretical line $`r=\frac{\hbar }{m_pc}\left(\frac{m}{m_p}\right)^{1/2}`$ fits crudely the data relating to clusters of galaxies, but in the case of star-like objects the theoretical relation $`r=\frac{\hbar }{m_pc}\left(\frac{m}{m_p}\right)^{1/3}`$ is completely uncorrelated with the data (except for the point of the neutron star). The following reasoning can elucidate this disagreement. For (1) and (3) to be consistent, one needs $`J=mcr`$, which means that (3) refers to maximally rotating objects. As we have seen, this is the case for the Chandrasekhar and Eddington points, for which the equation $`mcr=Gm^2/c`$ is equivalent to $`r=Gm/c^2`$. But, in general, celestial bodies are far away from this limit and, in consequence, their radii are systematically distributed above the lines representing (3) (see Figure 2). But if (3) does not exactly represent the observational data, what does it represent? And why does its $`J`$–$`m`$ partner, relation (1), fit the data well? A possible answer to these questions is that relations (1) and (3) represent an initial dense stage in the evolution of the bodies, when they had maximum, Regge-like, angular momenta for given radii. So, as bodies evolve, their radii change, diverging from the original values given by (3). As indicated in , the fact that there are two radically different power laws for two classes of objects could serve as an indication that the objects within each class have a similar physical origin. A possible reason for the change of exponents is the different geometrical shape of the primordial objects: disk-like $`(n=2)`$ for multistellar objects and ball-like $`(n=3)`$ for stellar ones .
# Magnetic reconstruction at (001) CaMnO3 surface ## Abstract The Mn-terminated (001) surface of the stable anti-ferromagnetic insulating phase of cubic perovskite CaMnO<sub>3</sub> is found to undergo a magnetic reconstruction consisting of a spin-flip process at the surface: each Mn spin at the surface flips to pair with that of the Mn in the subsurface layer. In spite of very little Mn-O charge transfer at the surface, the surface behavior is driven by the $`e_g`$ states due to $`d_{xy}\to d_{z^2}`$ charge redistribution. These results, based on local spin density theory, give a double-exchange-like coupling that is driven by $`e_g`$ character, not additional charge, and may have relevance to CMR materials. Despite the abundance of work on manganese-based perovskites in the attempt to understand the rich panorama of their bulk properties, very little is known about their surfaces or interfaces. The physical mechanism inducing the so-called colossal magnetoresistance (CMR) in La<sub>1-x</sub>D<sub>x</sub>MnO<sub>3</sub> (with D a divalent alkaline earth ion, and $`x\sim 0.3`$) is yet to be fully understood, although it seems clear that the almost half-metallic nature (i.e. the complete spin-polarization of the electrons at the Fermi level) plays a decisive role. The clearest evidence so far of half-metallicity is from photoemission spectra, whose surface sensitivity makes it essential to know the electronic structure of the surface itself. The surface introduces the likelihood of square pyramidal coordinated Mn, which is also essential to the understanding of oxygen-deficient perovskite manganites. In addition, the interfacial behavior that is critical in producing low field CMR in polycrystalline material and trilayer junctions will involve closely related effects due to symmetry lowering of the Mn ion. Thus, studies of the surfaces of manganites are timely. The electronic structure of the stable phase of bulk CaMnO<sub>3</sub> has been actively investigated in recent years. CaMnO<sub>3</sub> is a G-type antiferromagnetic (AFM) semiconductor. The nominal ionic picture Ca<sup>2+</sup>Mn<sup>4+</sup>O<sub>3</sub><sup>2-</sup>, with its spherical Mn d<sup>3</sup> configuration, makes the cubic (fcc) phase stable over possible distortions observed for instance in LaMnO<sub>3</sub>. In the G-type arrangement all nearest neighbors in the simple-cubic sublattice of Mn have spin-antiparallel orientation. The chemical picture of Mn<sup>4+</sup> ions is represented by completely occupied majority-spin d t<sub>2g</sub> states. An energy gap of $`\sim `$ 0.4 eV separates them from the empty d e<sub>g</sub> orbitals. Hybridization with O p states reduces the 3 $`\mu _B`$ nominal magnetization of Mn to $`\sim `$ 2.5 $`\mu _B`$, whereas magnetic moments on O or Ca are zero by symmetry. In this paper we study the simplest manganite surface, the (001) surface of cubic CaMnO<sub>3</sub>, to determine the surface-induced changes of structural, electronic, and magnetic properties. We find unexpectedly rich effects of surface symmetry lowering: a spin-flip occurs on the surface Mn ions that can be traced to surface states that redistribute charge and spin amongst the various Mn $`d`$ suborbitals and render the surface metallic without doping. The net effect is a short-range double-exchange-like phenomenon that relates metallicity and spin alignment, analogous to the CMR phases. Calculations were done in a local-spin-density framework; the exchange-correlation potential formula by Perdew and Zunger was used.
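As an aside, the G-type arrangement described above is easy to generate and verify; a small illustrative sketch (independent of the density-functional calculation itself):

```python
import numpy as np

# G-type AFM on the simple-cubic Mn sublattice: the spin alternates in all
# three directions, s(i, j, k) = (-1)^(i + j + k)
L = 4
i, j, k = np.indices((L, L, L))
s = (-1) ** (i + j + k)

# every one of the six nearest neighbors is antiparallel:
for axis in range(3):
    assert np.all(s * np.roll(s, 1, axis=axis) == -1)

# second neighbors (e.g. along a face diagonal) are parallel:
assert np.all(s * np.roll(np.roll(s, 1, 0), 1, 1) == 1)
print("G-type: all nn antiparallel, all second neighbors parallel")
```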
A plane-wave basis with 30 Ryd cut-off energy and Vanderbilt pseudopotentials make the computation viable. To establish the accuracy of our methods, which have only recently been applied to magnetic materials, in Table I we report our results for bulk CaMnO<sub>3</sub> in different magnetic phases. As already shown, for manganese perovskites the local spin density approximation successfully predicts the observed stable phase not only against the strongly unfavoured paramagnetic (PM) phase, but also in competition with closer configurations like the ferromagnetic (FM) and the A-type AFM (made of (001) FM layers alternating along the [001] direction). Our results are in very good agreement with those of previous all-electron linear-augmented plane-wave calculations. Also, our calculated value for the equilibrium lattice constant of the G-type AFM phase (3.735 Å) is in almost perfect agreement with the experimental value (3.729 Å). The stacking along [001] consists of alternating MnO<sub>2</sub> and CaO units (see Fig. 1), and the surface unit cell of G-type AFM CaMnO<sub>3</sub> is $`\sqrt{2}\times \sqrt{2}`$ with respect to that of the bulk cubic cell (we neglect the very small structural distortion). The (001) layers are individually AFM and neutral, so the surface is formally non-polar. Surface formation produces two different surfaces, i.e. Mn-terminated and Ca-terminated. The presence of two inequivalent surfaces in a slab will produce fictitious fields in the vacuum that could affect the electronic and magnetic structure at the surface. We are interested in the Mn-terminated surface, since on it the effects of surface formation on the magnetic properties are most visible. Thus we use a slab containing two identical Mn-terminated surfaces, with mirror symmetry in the central Mn layer (in total a 46-atom slab with 9 layers of atoms and 3 of vacuum). Surface neutrality generally favours the stability of the ideal surface against reconstructions involving strong changes of symmetry and atomic density at the surface. Therefore in this work we consider the relaxed but structurally unreconstructed surface, with different types of magnetic order. Thus we will speak of ‘reconstruction’ in a purely magnetic sense: on the unreconstructed surface the spins are oriented as in the bulk (Figure 1), while the reconstructions involve spin-flips in the surface layer. The structures of the two configurations that can be obtained by flipping surface spins are pictured in Fig. 2. In the left panel all surface spins are flipped, so the vectors $`(\pm a,\pm a)`$ remain AFM translations but each surface spin is aligned with its subsurface neighbor (spin-flip AFM: sf-AFM). In the right panel only one (of two) surface spins is flipped, leaving a FM surface layer (spin-flip FM: sf-FM). Magnetic and relaxation energies and workfunctions for the three phases are reported in Table II. The $`\mathrm{\Delta }E`$’s reported in Table II are the energies gained by relaxing all the atoms in the slab from their ideal positions. They are small and reflect the small inward atomic displacements of $`\sim `$ 1% of the cubic lattice constant. This indicates a low excess stress due to the surface formation, and suggests that structural reconstructions are unlikely. The workfunction depends very little on the spin arrangement but is largest for the most stable surface. Most significantly, the sf-AFM surface is stable against the unreconstructed one, whereas the sf-FM is the most unfavoured.
Thus, a quite intriguing physical picture follows: at the surface each spin prefers to pair with the one in the subsurface layer, while still keeping the AFM arrangement in-plane. It is possible to express the energy differences for the differing types of magnetic order in terms of exchange constants in a Heisenberg model $$H=-\sum _{<ij>}J_{ij}\widehat{S}_i\cdot \widehat{S}_j$$ (1) where $`\widehat{S}_j`$ is a unit vector in the direction of the moment on site $`j`$, and the sum is over distinct pairs. For the bulk we get first and second neighbor constants $`J_1=-26`$ meV, $`J_2=4`$ meV. This small value of $`J_2`$ suggests that the nearest neighbor (nn) exchange constants contain the important contributions. From the surface energies we get the nn couplings parallel and normal to the surface: $`J^{\parallel }`$ = -22 meV, $`J^{\perp }`$ = 29 meV. While $`J^{\parallel }`$ is close to the bulk value, $`J^{\perp }`$ has the opposite sign and is larger in magnitude, indicating that the FM alignment of surface and subsurface spins is robust. The reversal of the surface-subsurface coupling can be traced to a redistribution of $`d`$ suborbital occupations compared to the bulk, due to the occurrence of surface states. The orbital-projected Mn 3$`d`$ density of states (DOS) of the stable sf-AFM phase near the Fermi level, shown in Fig. 3, makes evident the surface states that lie within the bulk band gap, which extends from -0.3 eV to 0.1 eV relative to E<sub>F</sub>. The surface states are of two distinct types, $`d_{z^2}`$ and $`d_{xy}`$, reflecting the strong symmetry lowering of both the $`e_g`$ ($`d_{z^2}`$, $`d_{x^2-y^2}`$) and $`t_{2g}`$ ($`d_{xy}`$, ($`d_{xz},d_{yz}`$)) manifolds. The surface states are almost completely polarized, a result of the large spin splitting $`\mathrm{\Delta }_{ex}=2`$ eV that strongly inhibits hopping between ions of different spin. Figure 4 presents the surface band structure, where for clarity, only the energy region of interest (roughly spanning the bulk gap) is shown. The $`d_{xy}`$ and $`d_{z^2}`$ surface states are easily identifiable. The $`d_{xy}`$ states at the surface are shifted upward by 1 eV and overlap with $`d_{z^2}`$ states that in the bulk hybridize strongly with the O $`p\sigma `$ orbitals and form low-lying bonding and high-lying antibonding bands (most of the d weight is in the latter). The two $`d_{z^2}`$ states (one from each surface of the slab) are split by 0.2 eV as a result of the interaction between Mn at opposite sides of the slab. (For a thicker slab they would converge to a single, doubly degenerate band, averaging the calculated $`d_{z^2}`$ states). The $`d_{xy}`$ band has a bandwidth of 1.4 eV and a dispersion that follows $$\epsilon _k^{xy}=2t[\mathrm{cos}(k_x+k_y)a+\mathrm{cos}(k_x-k_y)a]$$ (2) using the conventional perovskite coordinates. This dispersion arises from hopping between second nn, which are the nearest neighbors of like spin. The effective hopping amplitude is $`t`$ = 0.17 eV. The $`d_{z^2}`$ band is very narrow (0.2 eV) and its dispersion is not easily represented by a tight binding form, reflecting small competing hopping processes along the surface (and perhaps subsurface) that are not easily identified. Coupling of the $`d_{z^2}`$ state perpendicular to the surface is large, however, as reflected in the penetration of the state onto the fifth atomic layer (third Mn layer).
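The quoted bandwidth follows directly from this dispersion: the band spans $`8t\simeq 1.4`$ eV for $`t`$ = 0.17 eV. A minimal numerical check:

```python
import numpy as np

t, a = 0.17, 1.0   # hopping amplitude [eV]; lattice constant (arbitrary units)
k = np.linspace(-np.pi / a, np.pi / a, 201)
kx, ky = np.meshgrid(k, k)

# d_xy surface band dispersion of eq. (2)
eps = 2 * t * (np.cos((kx + ky) * a) + np.cos((kx - ky) * a))

print(f"bandwidth = {eps.max() - eps.min():.2f} eV")   # 8t = 1.36 eV
```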
In the solid the magnetic moment comes almost entirely from the t<sub>2g</sub> states, whereas on the surface the $`d_{z^2}`$ (surface) states contribute about 30% of the moment. Only the subsurface Mn in the sf-AFM phase experiences a net gain (0.07 $`\mu _B`$), due to the partially occupied $`d_{z^2}`$ state not being compensated by a depletion of $`d_{xy}`$ states. In the G-type bulk, magnetic moments are allowed only on Mn, by symmetry. With the surface formation, the O atoms in-plane with Ca acquire a magnetic moment as well. This is larger in the sf-AFM (0.11 $`\mu _B`$) than in the unreconstructed phase (0.06 $`\mu _B`$), because it is enhanced by the parallel alignment of two neighboring Mn spins. Most of the characteristics inferred from the DOS and band structure analysis can be better visualized by means of isosurface plots (Fig. 5) of the charge density and magnetization of the stable sf-AFM phase. The quantities shown are due only to states within the bulk gap (see Figure 4), and thus represent the charge and magnetization of the surface states. The charge density clearly shows both the $`d_{xy}`$ and $`d_{z^2}`$ characters of the charge on Mn, as well as a $`p_\pi `$-type contribution from O. The physical mechanism driving the changes in the exchange interaction parameters at the surface is related to that described by Solovyev et al. in investigating how the Jahn-Teller distortions (JTD) affect the magnetic ordering of LaMnO<sub>3</sub>. The basic driving force in the FM-to-AFM transition of LaMnO<sub>3</sub> vs. JTD is the decrease of $`d_{z^2}`$ occupancy occurring with JTD. The $`d_{z^2}`$–$`d_{z^2}`$ interaction is indeed a dominant positive (i.e. FM) contribution to $`J^{\perp }`$. Also positive are the $`d_{x^2-y^2}`$–$`d_{z^2}`$ and the much weaker $`d_{x^2-y^2}`$–$`d_{x^2-y^2}`$ interactions, whereas the t<sub>2g</sub> orbitals interact by superexchange, and their contribution is AFM. When the $`d_{z^2}`$ orbitals are sufficiently occupied to make $`J^{\perp }`$ larger than the next nearest neighbor interactions (favouring AFM), the order becomes FM along the $`\widehat{z}`$ axis. In the present case the picture follows analogously: surface formation, not JTD, results in dehybridization and partial filling of the $`d_{z^2}`$ states on the surface (and subsurface) Mn, and a partial depletion of the $`d_{xy}`$ orbitals. As a consequence $`J^{\perp }`$ changes sign and the magnetic ordering along $`\widehat{z}`$ reverses. A verification of this mechanism is given by a comparison of the band structures (or DOS) of the two competing phases (Figure 4): in the sf-AFM phase there is more $`d_{z^2}`$ occupation and less $`d_{xy}`$ depletion than in the sf-FM phase. Also, the slight occupation of $`d_{z^2}`$ states on the subsurface atom is larger for the reconstructed sf-AFM phase, but not sufficient to propagate the spin alignment further into the bulk. The considerable difference with respect to the LaMnO<sub>3</sub> JTD is that its distortion is extended, whereas at the surface the ordering is a local effect limited to the first two layers. This local spin-flip process is likely to be relevant to more general situations in manganites, such as at surfaces and interfaces of doped systems, where it may affect spin transport, and at Mn sites neighboring O vacancies, as in CaMnO<sub>3-x</sub>. To summarize, we have described a spin-flip process at the Mn-terminated (001) surface of CaMnO<sub>3</sub> that is driven by symmetry lowering due to surface formation, which causes the partial occupation of the $`e_g`$ $`d_{z^2}`$ surface states.
This partially occupied narrow $`d_{z^2}`$ band may display correlated electron behavior. This $`d_{z^2}`$ occupation reverses the magnetic alignment (from AFM to FM) at the surface in the direction orthogonal to the surface but conserves the AFM symmetry along the surface. The surface states are almost completely polarized, but AFM symmetry requires that both spin states occur in equal number, so this result may be difficult to verify experimentally. This research was supported by National Science Foundation grant DMR-9802076. Calculations were done at the Maui High Performance Computing Center.
# Upper limit on the $`K_S\to 3\pi ^0`$ decay ## Introduction At present CP-violation is observed only in the $`K_L\to 2\pi `$ and $`K_L\to \pi l\nu `$ decays, and a first indication of the effect in B-decays was recently reported . Another possible domain for CP-violation studies is given by the still unseen $`K_S\to \pi ^+\pi ^{-}\pi ^0`$ and $`K_S\to 3\pi ^0`$ decays, of which the latter must be a pure CP-violating process , because for three neutral pions only CP-odd states exist. CP-violation in the $`K_S\to 3\pi ^0`$ decay can be parameterized in terms of the $`\eta _{000}`$ parameter, which is defined as: $$\eta _{000}=\frac{A(K_S\to \pi ^0\pi ^0\pi ^0)}{A(K_L\to \pi ^0\pi ^0\pi ^0)}.$$ One can estimate the decay branching ratio : $`\mathrm{Br}(K_S\to 3\pi ^0)\simeq |ϵ_S|^2(\tau _S/\tau _L)\mathrm{Br}(K_L\to 3\pi ^0)\simeq 10^{-9}`$. The lowest existing experimental upper limit of $`\mathrm{Br}(K_S\to 3\pi ^0)<1.9\times 10^{-5}`$ was reported by the CPLEAR collaboration . In this paper results of the study of the $`K_S\to 3\pi ^0`$ decay with the SND detector are presented. ## Detector and experiment The experiment was performed in 1996–1998 at the VEPP-2M collider with the SND detector . The SND is a general purpose nonmagnetic detector. Its main part is a spherical electromagnetic calorimeter, consisting of 1632 NaI(Tl) crystals. The calorimeter energy resolution for photons is $`\sigma _E/E=4.2\%/\sqrt[4]{E(\mathrm{GeV})}`$, the angular resolution is $`\sigma _\varphi (\mathrm{degrees})=0.82/\sqrt{E(\mathrm{GeV})}\oplus 0.63`$, and the solid angle coverage is close to $`90\%`$ of $`4\pi `$ steradian . The presented analysis is based on experimental data collected in the center-of-mass energy region 980–1040 $`\mathrm{MeV}`$, with most of the data taken in the close vicinity of the $`\varphi (1020)`$ peak. The $`\varphi \to K_SK_L`$ decays were used as a source of $`K_S`$ mesons. The experimental data correspond to about $`2\times 10^7`$ produced $`\varphi `$ mesons or $`7\times 10^6`$ $`K_SK_L`$ decays. ## Event selection The search for the $`K_S\to 3\pi ^0`$ decay was performed using the process $$e^+e^{-}\to \varphi (1020)\to K_SK_L,K_S\to 3\pi ^0\to 6\gamma .$$ (1) In this process the $`K_L`$, having a momentum of about $`110\mathrm{MeV}`$ and a decay length of $`3.4\text{m}`$, may either produce signals in the detector due to nuclear interaction in the calorimeter or decay in flight, or it can punch through the detector unseen. The detection efficiency of $`K_L`$ mesons in the calorimeter material was studied in the process $$e^+e^{-}\to \varphi (1020)\to K_SK_L,K_S\to 2\pi ^0\to 4\gamma .$$ (2) With a probability of $`46\%`$ the $`K_L`$ produces a single cluster of hit crystals in the calorimeter, and, with a probability of $`29\%`$, more than one cluster. The clusters produced by $`K_L`$ mesons are interpreted as “photons” by the event reconstruction program. The remaining $`25\%`$ of $`K_L`$ mesons produce no signal in the calorimeter. Events with 6 or 7 reconstructed photons were used in the search for the $`K_S\to 3\pi ^0`$ decay. In order to reject background caused by stray particles from the accelerator and by cosmic events, constraints were imposed on the total energy deposition ($`E_{\mathrm{tot}}>0.35E_0`$) and the total momentum of the event ($`P_{\mathrm{tot}}<0.45E_0`$), where $`E_0`$ is the beam energy. To suppress the cosmic background even further, events where most of the hit crystals could be fitted by a single straight track were rejected. Due to the worse energy resolution near the calorimeter edges, the polar angle of all reconstructed photons was limited to $`30^{\circ }\le \vartheta \le 150^{\circ }`$.
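These primary cuts are simple to express in code; a minimal sketch (the event representation and the toy event are hypothetical, for illustration only):

```python
import math

def passes_primary_cuts(photons, E0):
    """photons: list of (E, theta_deg, phi_deg); E0 = beam energy.
    Implements E_tot > 0.35*E0, P_tot < 0.45*E0 and 30 <= theta <= 150 deg."""
    E_tot = sum(E for E, th, ph in photons)
    px = sum(E * math.sin(math.radians(th)) * math.cos(math.radians(ph))
             for E, th, ph in photons)
    py = sum(E * math.sin(math.radians(th)) * math.sin(math.radians(ph))
             for E, th, ph in photons)
    pz = sum(E * math.cos(math.radians(th)) for E, th, ph in photons)
    P_tot = (px**2 + py**2 + pz**2) ** 0.5
    angles_ok = all(30.0 <= th <= 150.0 for E, th, ph in photons)
    return (len(photons) in (6, 7) and E_tot > 0.35 * E0
            and P_tot < 0.45 * E0 and angles_ok)

# toy 6-photon event at the phi(1020) peak, E0 = 510 MeV
event = [(85.0, 90.0, 60.0 * i) for i in range(6)]
print(passes_primary_cuts(event, 510.0))   # True
```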
Remaining background comes mainly from two processes: $$e^+e^{-}\to \varphi (1020)\to \eta \gamma ,\eta \to 3\pi ^0\to 6\gamma $$ (3) and from (2) with the $`K_L`$ producing 2 or 3 extra “photons” due to nuclear interaction or decay. Kinematic fitting based on the $`\chi ^2`$-method was performed for the events satisfying the selection criteria described above. For each event two hypotheses were checked: * $`H_{3\pi }`$: the event is due to the process (1), i.e. there are 3 $`\pi ^0`$-s from the $`K_S`$ decay in the event; * $`H_{2\pi }`$: the event is due to the process (2), i.e. it contains 2 $`\pi ^0`$-s from the $`K_S`$ decay. As a result of the kinematic fitting the following parameters were evaluated: * $`\chi _{3\pi }^2`$ and $`\chi _{2\pi }^2`$ — the chi-square values for the two hypotheses; * $`P_{3\pi }`$ and $`P_{2\pi }`$ — the momentum of the reconstructed $`K_S`$; * $`\vartheta _{3\pi }`$ and $`\vartheta _{2\pi }`$ — the polar angle of the reconstructed $`K_S`$; * $`m_{3\pi ,i}`$ and $`m_{2\pi ,i}`$ — the raw invariant masses of the photon pairs attributed to pions during the kinematic fitting. In order to isolate events of the process (1), the following cuts were applied: * $`\chi _{3\pi }^2<20,\chi _{2\pi }^2>30,`$ * $`80<P_{3\pi }(\mathrm{MeV})<145,`$ * $`30^{\circ }<\vartheta _{3\pi }<150^{\circ }.`$ As a result a significant part of the background events from the processes (3) and (2) was rejected. A total of 19 6-photon and 15 7-photon events survived the cuts. The process (2) was used as a reference for the detection efficiency monitoring. Events with 4 and 5 reconstructed photons were selected using the same primary cuts as for the process (1). After the kinematic fitting the following additional cuts were applied: * $`\chi _{2\pi }^2<10,`$ * $`80<P_{2\pi }(\mathrm{MeV})<145,`$ * $`30^{\circ }<\vartheta _{2\pi }<150^{\circ }.`$ The $`\chi _{2\pi }^2`$ and $`P_{2\pi }`$ distributions for experimental and simulated events of the process (2) are shown in Fig. 1 and Fig. 2. Further analysis of the process (1) candidates was performed separately for events with 6 and 7 photons. The reference process (2) was studied in the 4- and 5-photon classes, respectively. ## Analysis of the events with detected $`K_L`$ In the events of the process (1) with 7 reconstructed photons, one of the “photons” must in fact be a $`K_L`$ meson, so additional cuts on its parameters can be imposed. In the analysis it was required that the energy deposition of the $`K_L`$ cluster is at least $`100\mathrm{MeV}`$, and the spatial angle between the cluster and the reconstructed $`K_S`$ direction is more than $`120`$ degrees. The following cuts were based on the energy deposition profiles in the calorimeter, which are specific to $`K_L`$ mesons. The parameters $`\xi _T`$ and $`\xi _L`$ were introduced to quantitatively describe the differences between the energy deposition profiles of photons and $`K_L`$ mesons. The parameter $`\xi _T`$ represents the likelihood of the hypothesis that the transverse profile of energy deposition in a cluster was produced by a single photon. The parameter $`\xi _L`$ has the same meaning, but for the longitudinal profile. Both parameters were studied in the process (2); their distributions are shown in Figs. 3 and 4. The figures show that the requirement of either $`\xi _T>10`$ or $`\xi _L>8`$ reliably identifies the $`K_L`$ meson. The same requirement was applied to the $`K_L`$ meson in 5-photon events of the reference process (2). As a result the number of selected events was $`N_{K_S\to 3\pi ^0}^1=0`$ for the process (1) and $`N_{K_S\to 2\pi ^0}^1=92676`$ for the process (2).
The detection efficiencies $`\epsilon _{K_S\to 3\pi ^0}^1=1.7\%`$ and $`\epsilon _{K_S\to 2\pi ^0}^1=5.3\%`$ for the processes (1) and (2) were calculated by Monte Carlo simulation using the UNIMOD2 package . The branching ratio of the $`K_S\to 3\pi ^0`$ decay was calculated as follows: $$\mathrm{Br}(K_S\to 3\pi ^0)=\mathrm{Br}(K_S\to 2\pi ^0)\frac{N_{K_S\to 3\pi ^0}^1}{N_{K_S\to 2\pi ^0}^1}\frac{\epsilon _{K_S\to 2\pi ^0}^1}{\epsilon _{K_S\to 3\pi ^0}^1}.$$ (4) An upper limit was obtained at the $`90\%`$ confidence level: $$\mathrm{Br}(K_S\to 3\pi ^0)<2.4\times 10^{-5}.$$ ## The analysis of events with undetected $`K_L`$ In 6-photon (1) candidates all detected particles must be photons. Thus the remaining (2) background can be suppressed by the following cuts on $`\xi _T`$ and $`\xi _L`$: $`\xi _T<0`$ and $`\xi _L<0`$ for all six particles. It was also required that the raw invariant masses of the photon pairs reconstructed as pions lie in the range $`120<m_{3\pi ,i}(\mathrm{MeV})<155`$. The same cuts were used in the parallel analysis of 4-photon events of the reference process (2). The detection efficiencies of the processes (1) and (2) were obtained by Monte Carlo simulation to be $`\epsilon _{K_S\to 3\pi ^0}^0=1.9\%`$ and $`\epsilon _{K_S\to 2\pi ^0}^0=4.3\%`$ respectively. $`N_{K_S\to 3\pi ^0}^0=0`$ events of the process (1) and $`N_{K_S\to 2\pi ^0}^0=57742`$ events of the process (2) survived the cuts. The upper limit at the confidence level of $`90\%`$ is: $$\mathrm{Br}(K_S\to 3\pi ^0)<2.8\times 10^{-5}.$$ ## Combined analysis By using the following relation between the numbers of found events of the processes (1) and (2): $$N_{K_S\to 3\pi ^0}^i=\mathrm{Br}(K_S\to 3\pi ^0)\frac{N_{K_S\to 2\pi ^0}^i/\epsilon _{K_S\to 2\pi ^0}^i}{\mathrm{Br}(K_S\to 2\pi ^0)}\epsilon _{K_S\to 3\pi ^0}^i,$$ (5) the results of both analyses can be combined: $$\mathrm{Br}(K_S\to 3\pi ^0)=\mathrm{Br}(K_S\to 2\pi ^0)\frac{(N_{K_S\to 3\pi ^0}^0+N_{K_S\to 3\pi ^0}^1)}{N_{K_S\to 2\pi ^0}^0\frac{\epsilon _{K_S\to 3\pi ^0}^0}{\epsilon _{K_S\to 2\pi ^0}^0}+N_{K_S\to 2\pi ^0}^1\frac{\epsilon _{K_S\to 3\pi ^0}^1}{\epsilon _{K_S\to 2\pi ^0}^1}}.$$ (6) The resulting upper limit according to Eq. (6) amounts to $$\mathrm{Br}(K_S\to 3\pi ^0)<1.3\times 10^{-5}$$ at the confidence level of $`90\%`$. The systematic error of the detection efficiency is determined mainly by the imprecise simulation of $`K_L`$ nuclear interactions. Its estimated value is $`25\%`$. Since the branching ratio of the process (1) depends on the ratio of the detection efficiencies for the processes (1) and (2), the common systematic error in the $`K_L`$ simulation cancels. The remaining systematic error in the efficiency ratio is determined mainly by the accuracy of the simulation of electromagnetic showers. In order to estimate this error, the process (3) was studied with cuts similar to those in the analysis of the process (1). The resulting branching ratio of $`\varphi \to \eta \gamma `$ is $`8\%`$ lower than its world-average value . This difference was taken as an estimate of the systematic error of the ratio of the detection efficiencies. The final result for the upper limit is then: $$\mathrm{Br}(K_S\to 3\pi ^0)<1.4\times 10^{-5}.$$ ## Conclusion The experiment was performed with the SND detector at the VEPP-2M $`e^+e^{-}`$ collider. A total statistics of $`2\times 10^7`$ $`\varphi `$ mesons was analyzed. As a result, no candidate events of the $`K_S\to 3\pi ^0`$ decay were found. An upper limit on the branching ratio, $`\mathrm{Br}(K_S\to 3\pi ^0)<1.4\times 10^{-5}`$ at the confidence level of $`90\%`$, was set. ## Acknowledgement This work is supported in part by the Russian Fund for Basic Researches (grant 96-15-96327) and STP “Integration” (No.274).
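As a numerical note, the quoted limits can be reproduced from the event counts and efficiencies above, using the 90% CL Poisson upper limit of 2.3 events for zero observed candidates; a minimal sketch:

```python
BR_2PI = 0.3139   # Br(K_S -> 2 pi^0) (PDG value of the time, an assumption here)
N_UP = 2.3        # 90% CL Poisson upper limit for zero observed events

# (N_2pi, eff_3pi, eff_2pi) for the two analyses
channels = {"6-photon": (57742, 0.019, 0.043),
            "7-photon": (92676, 0.017, 0.053)}

sens = {name: n * e3 / e2 for name, (n, e3, e2) in channels.items()}
for name, s in sens.items():
    print(f"{name}: Br < {BR_2PI * N_UP / s:.1e}")                 # 2.8e-5, 2.4e-5
print(f"combined: Br < {BR_2PI * N_UP / sum(sens.values()):.1e}")  # 1.3e-5
```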
## Figure captions 1. The distribution of $`\chi ^2`$ of the kinematic fit for $`K_S\to 2\pi ^0`$ events. The histogram represents the simulation, points — experimental data. 2. The distribution of the momentum of the reconstructed $`K_S`$ meson in $`K_S\to 2\pi ^0`$ events. The histogram represents the simulation, points — data. 3. The $`\xi _T`$ distributions for photons (clear histogram) and for $`K_L`$ mesons (shaded histogram) in the process (2). 4. The $`\xi _L`$ distributions for photons (clear histogram) and $`K_L`$ mesons (shaded histogram) in the process (2).
# Technihadron Production and Decay at LEP2 ## 1 The Technicolor Straw Man Model Strongly–coupled models of electroweak symmetry breaking are expected to have additional structure beyond the would–be Goldstone bosons that give mass to the $`W`$ and $`Z`$ bosons. In this study, we predict the lepton–collider production rates at the parton level of the lightest color–singlet technivector mesons $`\rho _T`$ and $`\omega _T`$ with masses around 200 GeV, which should be relevant to physics studies at LEP2. The basis for this analysis is the “Technicolor Straw Man Model,” or TCSM , which consists of a particle spectrum and an effective Lagrangian to describe the phenomenology of the lowest–lying states from a more complete theory of dynamical symmetry breaking. The complete theory is expected to contain some of the aspects of technicolor , extended technicolor , walking technicolor , top condensate models and topcolor-assisted technicolor (TC2) , and/or multiscale technicolor . Some signatures of low–scale technicolor in the TCSM have been considered at hadron and muon colliders . Here, we address the issue of what can be learned from the LEP2 collider operating at a center–of–mass energy $`\sqrt{s}\simeq 200`$ GeV. We concentrate on the challenging case when the $`\rho _T`$ and $`\omega _T`$ masses are larger than $`\sqrt{s}`$. In the TCSM, only the lowest-lying bound states of the lightest technifermion doublet, $`(T_U,T_D)`$, are considered. The technifermions are assumed to be color singlets and to transform under technicolor $`SU(N_{TC})`$ in a fundamental representation, with electric charges $`Q_U`$ and $`Q_D`$. The phenomenology considered here depends only on the sum of these charges, $`Q\equiv Q_U+Q_D`$. The bound states of the technifermions are the pseudoscalar isotriplet $`\mathrm{\Pi }_T^{\pm ,0}`$ and isosinglet $`\mathrm{\Pi }_T^{0^{\prime }}`$ mesons, and the vector isotriplet $`\rho _T^{\pm ,0}`$ and isosinglet $`\omega _T`$ mesons. The technihadron mass scale is set by the technipion decay constant $`F_T`$. In TC2 models, $`F_T\simeq F_\pi /\sqrt{N_D}`$, where $`F_\pi =246\mathrm{GeV}`$ and $`N_D`$ is the number of electroweak doublets of technifermions. In a specific model, $`N_D\simeq 10`$ and $`F_T\simeq 80\mathrm{GeV}`$ . The interaction states $`\mathrm{\Pi }_T`$ are admixtures of the electroweak Goldstone bosons $`W_L`$ and the mass eigenstates of pseudo–Goldstone technipions $`\pi _T^\pm ,\pi _T^0`$: $$|\mathrm{\Pi }_T\rangle =\mathrm{sin}\chi |W_L\rangle +\mathrm{cos}\chi |\pi _T\rangle ,$$ (1) where $`\mathrm{sin}\chi =F_T/F_\pi `$ ($`\simeq 1/\sqrt{10}`$ in the model mentioned above). Similarly, $`|\mathrm{\Pi }_T^{0^{\prime }}\rangle =\mathrm{cos}\chi ^{\prime }|\pi _T^{0^{\prime }}\rangle +\cdots `$, where $`\chi ^{\prime }`$ is another mixing angle and the ellipsis refers to the other technipions needed to eliminate the technicolor anomaly from the $`\mathrm{\Pi }_T^{0^{\prime }}`$ chiral current. If techni–isospin is a good approximate symmetry, $`\rho _T`$ and $`\omega _T`$, and, separately, $`\pi _T^0,\pi _T^{0^{\prime }},\pi _T^\pm `$, are nearly degenerate in mass. However, there may be appreciable $`\pi _T^0`$–$`\pi _T^{0^{\prime }}`$ mixing . If that is the case, the lightest neutral technipions are maximally–mixed $`\overline{T}_UT_U`$ and $`\overline{T}_DT_D`$ bound states. ### 1.1 Techniscalar decays Technipion decays are induced mainly by extended technicolor (ETC) interactions which couple them to quarks and leptons like Higgs bosons . With a few exceptions, technipions are expected to decay into the heaviest fermion pairs allowed.
One exception is that decays to top quarks are not enhanced, since ETC interactions only generate a few GeV of the top quark mass. Another exception is that the constituents of the isosinglet $`\pi _T^{0^{\prime }}`$ may include colored technifermions as well as color-singlets, so that decays into a pair of gluons are possible. Therefore, the important decay modes are $`\pi _T^+\to c\overline{b}`$, $`u\overline{b}`$, $`c\overline{s}`$, $`c\overline{d}`$ and $`\tau ^+\nu _\tau `$; $`\pi _T^0\to b\overline{b}`$, $`c\overline{c}`$, $`\tau ^+\tau ^{-}`$; and $`\pi _T^{0^{\prime }}\to gg`$, $`b\overline{b}`$, $`c\overline{c}`$, $`\tau ^+\tau ^{-}`$. Branching ratios are presented in Fig. 1 for $`\pi _T^0`$ (solid lines) and $`\pi _T^{0^{\prime }}`$ (dash–dot lines) using the expressions of Ref. and $`C_f=1`$, except $`C_t=m_b/m_t`$, $`C_{\pi _T}=4/3`$, $`N_{TC}=4`$, and $`F_T=82`$ GeV. The $`\pi _T^0`$ and $`\pi _T^\pm `$ branching ratios are fairly flat as a function of $`M_{\pi _T}`$, while the $`\pi _T^{0^{\prime }}`$ ones show more variation because of the $`gg`$ decay mode. In addition to these considerations, there may be light topcolor pions present in a realistic theory, and these can mix with the ordinary technipions. The topcolor pions couple preferentially to top quarks, but there can be flavor mixing and instanton effects . The neutral top pion $`\pi _t^0`$ can decay to $`t\overline{t}`$ above threshold; to $`t\overline{t}^{*}\to tbW`$ below threshold; to $`t\overline{c},t\overline{u}`$ through mixing; to $`b\overline{b}`$ through instanton effects; or to $`gg`$ through a top quark loop. The charged top pion can decay as $`\pi _t^+\to t\overline{b}`$ above threshold; $`t^{*}\overline{b}\to b\overline{b}W`$ below threshold; or $`b\overline{c}`$ (etc.) through mixing. Typical branching ratios for $`\pi _t^0`$ and $`\pi _t^\pm `$ decays are shown in Fig. 1 (short–dashed lines) with the toppion decay constant set to 82 GeV. For the mass range considered here, only the $`\pi _t^0`$ decays to the $`b\overline{b}`$ and $`gg`$ final states are important. Note that the off–shell decays $`\pi _t^\pm \to b\overline{b}W`$ can be competitive with the mixing–suppressed decay to $`bc`$ (the suppression was arbitrarily chosen as $`(.05)^2`$ for this plot). In the following, we ignore the complication of technipion–toppion mixing and assume that the technipions decay according to the expectations of the TCSM. ### 1.2 Technivector decays In the limit that the electroweak gauge couplings $`g,g^{\prime }\to 0`$, the isospin–conserving decays of $`\rho _T`$ and $`\omega _T`$ are fixed by the technipion mixing angle: $`\rho _T`$ $`\to `$ $`\mathrm{\Pi }_T\mathrm{\Pi }_T=\mathrm{cos}^2\chi (\pi _T\pi _T)+2\mathrm{sin}\chi \mathrm{cos}\chi (W_L\pi _T)+\mathrm{sin}^2\chi (W_LW_L);`$ $`\omega _T`$ $`\to `$ $`\mathrm{\Pi }_T\mathrm{\Pi }_T\mathrm{\Pi }_T=\mathrm{cos}^3\chi (\pi _T\pi _T\pi _T)+\cdots .`$ (2) Because of the lifting of the technipion masses by the hard technifermion masses, the TCSM assumes that the decays $`\omega _T\to \pi _T\pi _T\pi _T`$ are kinematically forbidden. In addition, we do not consider models where $`\omega _T\to W_LW_LZ_L`$ is possible. The isospin violating decay rates obey the relation $`\mathrm{\Gamma }(\omega _T\to \pi _A^+\pi _B^{-})=|ϵ_{\rho \omega }|^2\mathrm{\Gamma }(\rho _T^0\to \pi _A^+\pi _B^{-})`$, where $`ϵ_{\rho \omega }`$ is the isospin-violating $`\rho _T`$–$`\omega _T`$ mixing amplitude. In QCD, $`|ϵ_{\rho \omega }|\simeq 5\%`$, so the isospin violating decays in the TCSM are expected to be unimportant.
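The relative weights of the three $`\rho _T`$ decay channels implied by the decomposition (2) follow from the mixing angle alone; a schematic sketch that ignores phase-space factors:

```python
import math

def rho_T_weights(sin_chi):
    """Relative squared-amplitude weights of rho_T -> pi_T pi_T, W_L pi_T
    and W_L W_L from the decomposition (2); phase space is ignored."""
    s = sin_chi
    c = math.sqrt(1.0 - s * s)
    amps = {"pi_T pi_T": c**2, "W_L pi_T": 2 * s * c, "W_L W_L": s**2}
    norm = sum(a**2 for a in amps.values())
    return {mode: a**2 / norm for mode, a in amps.items()}

# sin(chi) = 1/3, the nominal value (F_T/F_pi = 82/246) used for most models below
for mode, w in rho_T_weights(1.0 / 3.0).items():
    print(f"{mode}: {w:.2f}")
```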
The technivectors also undergo 2–body decays to transverse gauge bosons and technipions ($`\gamma \pi _T`$, $`W\pi _T`$, etc.) and to fermion–antifermion pairs $`f\overline{f}`$. The decay rates to transverse gauge bosons are set by a vector or axial mass parameter, $`M_V`$ or $`M_A`$ respectively, which is expected to be of the same order as $`F_T`$, and are proportional to $`\mathrm{cos}^2\chi `$ or $`\mathrm{cos}^2\chi ^{\prime }`$. Decays where the mother and daughter techniparticle have the same isospin and electric charge are proportional to $`Q^2`$, and the decays to $`Z^0\pi _T`$ are of similar strength to the $`\gamma \pi _T`$ ones. The $`\rho _T`$ and $`\omega _T`$ decay to fermions because of the technifermion couplings to the standard model (SM) gauge bosons. In general, the branching ratios to fermions are small, and the $`\omega _T`$ decay rate is proportional to $`Q^2`$. ### 1.3 Direct technipion production The lightest technimeson states are difficult to produce directly at $`e^+e^{-}`$ colliders. The process $`e^+e^{-}\to \pi _T^0`$, whose rate $`\mathrm{\Gamma }(\pi _T^0\to e^+e^{-})\propto (\frac{m_e}{F_T})^2`$, is suppressed by a small coupling, while $`\gamma \gamma \to \pi _T^0`$, with rate $`\propto \mathrm{\Gamma }(\pi _T^0\to \gamma \gamma )`$, is one–loop suppressed. Additionally, the technipions have no tree level couplings to $`W`$ or $`Z`$, negating the usual Higgs boson production modes at lepton colliders. The charged technipion can be pair–produced through a virtual photon, but the production rates are not large. For a center–of–mass energy $`\sqrt{s}=200`$ GeV, the production cross section falls as (.169, .115, .063, .024, .011) pb for $`M_{\pi _T^\pm }=`$ (80, 85, 90, 95, 97) GeV. The SM $`W^+W^{-}`$ cross section is about 20 pb, and it is problematic whether an excess of events with heavy flavor can be observed above the backgrounds (because of TC2, such a light charged technipion is not constrained by top quark decays). Presently, LEP experiments set a 95% C.L. exclusion on a charged Higgs boson with mass in the range $`52`$–$`58`$ GeV . Therefore, we only consider models with technipions heavier than this limit. ### 1.4 Technivector production The explicit formulae for the cross sections of the technivector–mediated processes have been presented in Ref. . Unlike the technipions, the technirho and techniomega can have substantial mixing with the SM gauge bosons, and can be produced with electroweak strength. The mixing between $`\gamma ,Z`$ and $`\rho _T,\omega _T`$ is proportional to $`\sqrt{\alpha /\alpha _{\rho _T}}`$, where $`\alpha `$ is the fine structure constant and $`\alpha _{\rho _T}`$ is the technirho coupling, which is fixed in the TCSM by scaling the ordinary rho coupling by $`N_{TC}`$ ($`=4`$ in this analysis). The full expression for the mixing depends also on the masses and widths of the technivectors. In addition, the $`\gamma `$–$`\omega _T`$ and $`Z`$–$`\omega _T`$ mixing is proportional to $`Q`$. From the discussion of decay rates above, the technirho and techniomega are also expected to be narrow, which naively reduces the reach of a lepton collider to regions where the center–of–mass energy is close to the resonance mass. However, the resonances are not of the simple Breit–Wigner form, and the effects of mixing can be seen at energies more than a few total widths below the resonance mass. On resonance, the production cross sections are substantial ($`𝒪`$(nb) strength for the models considered here), and a tail may be visible even if the nominal mass of the resonance is $`10`$–$`20`$ GeV above the collider energy.
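Returning briefly to the pair-production rates of §1.3, their fall-off with $`M_{\pi _T^\pm }`$ tracks the P-wave phase-space factor $`\beta ^3`$; a minimal check, normalizing to the 80 GeV point:

```python
SQRT_S = 200.0                                  # GeV
masses = [80.0, 85.0, 90.0, 95.0, 97.0]         # GeV
listed = [0.169, 0.115, 0.063, 0.024, 0.011]    # pb, from the text

def beta_cubed(m):
    # velocity^3 of a pair-produced scalar of mass m at sqrt(s)
    return (1.0 - (2.0 * m / SQRT_S) ** 2) ** 1.5

norm = listed[0] / beta_cubed(masses[0])
for m, sigma in zip(masses, listed):
    print(f"M = {m:4.0f} GeV: listed {sigma:.3f} pb, "
          f"beta^3 scaling {norm * beta_cubed(m):.3f} pb")
```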
If the resonance mass is substantially below the center–of–mass energy, then the resonance is produced in radiative return events, and should be easily excluded . ## 2 Technivector models To estimate the prospects for observing technivectors at LEP2, we have to fix all of the TCSM parameters. The remaining important parameters are the mass splittings between the vectors and scalars $`\mathrm{\Delta }MM_{\rho _T}M_{\pi _T}`$, the technipion mixing angle $`\chi `$, and the sum technifermion charge $`Q`$. The choices of model parameters are outlined in Table 1. The vector and axial mass parameters are fixed at $`M_V=M_A=100`$ GeV, $`\mathrm{sin}\chi ^{}=\mathrm{sin}\chi `$, and $`M_\rho =M_\omega `$ for simplicity. While the choice is not exhaustive, the models are intended to illustrate basic patterns of signals. Model 1 has relatively heavy $`\rho _T`$ and $`\omega _T`$, and the decays $`\rho _TW_L^+W_L^{}`$ and $`W_L^\pm \pi _T^{}`$ are suppressed by mixing and phase space. The charge $`Q`$ is large, so that $`\omega _T`$ has a large branching ratio to $`\gamma \pi _T`$ and $`f\overline{f}`$ final states and a large $`\gamma \omega _T`$ and $`Z\omega _T`$ mixing. Model 2 has a lighter $`\rho _T`$ and $`\omega _T`$ and $`Q=0`$, so that the $`\omega _Tf\overline{f}`$ coupling and $`\gamma /Z\omega _T`$ mixing vanishes. Model 3 has a small mass splitting $`\mathrm{\Delta }M`$, so that $`\rho _TW_L^\pm \pi _T^{}`$ is forbidden on–shell, and $`Q=1`$, which yields similar $`\gamma /Z\rho _T`$ and $`\gamma /Z\omega _T`$ mixing. Model 4 has the maximal coupling to $`W_L^+W_L^{}`$, while Model 5 has the minimal coupling, but $`\rho _T\pi _T^+\pi _T^{}`$ is kinematically forbidden on–shell. Model 6 is similar to Model 5, but $`\rho _T\pi _T^+\pi _T^{}`$ is allowed on–shell. Finally, Models 7 and 8 have light technipions, with unsuppressed couplings to $`W_L^+W_L^{}`$ and $`\pi _T^+\pi _T^{}`$, respectively, but $`Q=0`$ to suppress $`\omega _Tf\overline{f}`$ couplings and $`\gamma /Z\omega _T`$ mixing. The decay widths for the $`\rho _T`$ ($`\omega _T`$) calculated from these parameters are shown in the next–to–last column of Table 1. The final column shows the symbols used in the figures to denote the Models 1–8. ## 3 Signatures We concentrate on four basic signatures. The first two, the Drell–Yan production of $`\mu ^+\mu ^{}`$ and $`W_L^+W_L^{}`$ pair production, contain only SM particles in the final state. The last two contain either two technipions or a technipion and an electroweak gauge boson. ### 3.1 Drell–Yan As explained above, the technirho and techniomega couple to final states containing fermion pairs through mixing with $`\gamma `$ and $`Z`$ bosons. We consider the the $`\mu ^+\mu ^{}`$ final state here, since this avoids the complication of Bhabha scattering. The expected cross sections for the various models as a function of the center-of-mass energy $`\sqrt{s}`$ is illustrated in Fig. 2. It is worth noting the sensitivity to resonances with pole masses above the energy $`\sqrt{s}`$, despite the fact that the resonances are narrow. In particular, Model 1, with $`M_{\rho _T}=210`$ GeV and $`\mathrm{\Gamma }_{\rho _T}=1.36`$ GeV,<sup>1</sup><sup>1</sup>1The input mass parameters for the technivectors are not pole masses, so the peak of the resonance is shifted. 
has $`S/B=.03,.06,.19`$ at $`\sqrt{s}=160,180,200`$ GeV in the $`\mu ^+\mu ^{}`$ final state, where $`S\mathrm{\Delta }\sigma `$ is the deviation from the standard model cross section times the integrated luminosity, and $`B`$ is the expected number of standard model events. The large values for $`\mathrm{\Delta }\sigma `$ (even sizeable 50 GeV from the resonance peak) is due to the large charge $`Q=5/3`$ in this model. If $`Q=1`$ ($`Q_U=0`$), then $`S/B=.02,.04,.12`$ with all other parameters fixed. Likewise, for $`Q=0`$, which is the limit that the $`\omega _Tf\overline{f}`$ coupling and $`\gamma /Z\omega _T`$ mixing vanishes, we have $`S/B=.01,.02,.07`$. Far from the resonance peak, measurements of such variations in the overall rate will have important systematic as well as statistical errors, so it is important to have a verification of an effect. Because of the SM quantum numbers (and the energy range considered), there is more sensitivity in the lepton pair final state than in the quark pair, and, for the same set of parameters, the effect in the $`b\overline{b}`$ final state is roughly half of that in the $`\mu ^+\mu ^{}`$ one. There is a also difference in the angular distribution of the decay products because of the interference between the various resonances, but this is not dramatic. The general feature that the cross section decreases before increasing on the resonance is true even for $`Q<0`$, since the $`\gamma \gamma ,ZZ`$ and $`\gamma Z`$ components of the inverse propagator are quadratic in $`Q`$. The only models that do not demonstrate a significant effect in the fermion pair final state are those where the technirho is fairly wide and $`Q=0`$, so that the $`\gamma /Z\omega _T`$ mixing vanishes (Models 7 and 8).<sup>2</sup><sup>2</sup>2In this extreme case, the techniomega appears to be unreasonably narrow. Small isospin–violating effects will have to be included, but they will not contribute significantly to the $`f\overline{f}`$ final state. In these cases, a substantial signal is expected in the $`\rho _T`$–mediated $`W_L^+W_L^{}`$ or $`\pi _T^+\pi _T^{}`$ channels. ### 3.2 $`W_L^+W_L^{}`$ If $`\mathrm{sin}\chi 1`$, the $`\rho _T`$ coupling to the $`W_L^+W_L^{}`$ final state can be important. This is illustrated in Fig 3, where only models that yield a visible signal are shown. The SM prediction for the $`W^+W^{}`$ cross section is shown for reference. Model 4 , with $`\mathrm{sin}\chi =1`$, has $`S/B=.09,0.5,3.8`$ at $`\sqrt{s}=180,190,200`$ GeV, and Model 8 has a similar behavior. (Note, in this figure, the TCSM signal should be added to the standard model component.) The technirho is fairly wide once the $`W_L^+W_L^{}`$ channel is unsuppressed, but there is no theoretical motivation for $`\mathrm{sin}\chi 1`$. On the other hand, there is no realistic theory, yet, so we present these results for completeness. The feature around $`\sqrt{s}=200`$ GeV in Model 4 arises from complicated $`\gamma /Z\rho _T`$ interference. When $`\mathrm{sin}\chi =1/3`$, the increase in cross section is limited to a region of several GeV around the peak position, since the technirho is much narrower. Clearly, on or near the peak, the effect is a striking increase in the total $`W^+W^{}`$ production cross section. Otherwise, the signature is a moderate excess of $`W_L^+W_L^{}`$ events on a potentially large background. 
### 3.3 $`W_L^\pm \pi _T^{}+\pi _T^+\pi _T^{}`$ If the technipion is light enough, the $`\rho _T`$ coupling to $`W_L^\pm \pi _T^{}`$ and $`\pi _T^+\pi _T^{}`$ as $`\mathrm{sin}\chi 0`$ is complementary to the $`W_L^+W_L^{}`$ coupling when $`\mathrm{sin}\chi 1`$. This is illustrated by comparing Model 7 in Fig. 4 to Model 8 in Fig. 3, which have large signals in one or the other channel. Both models yield the same $`S/B`$ at $`\sqrt{s}=180`$ GeV in their respective channels. Technirho and techniomega couplings to a transverse $`W`$ boson and $`\pi _T^\pm `$ also arise, but typically at reduced rates compared to $`W_L^\pm \pi _T^{}`$. Since $`\pi _T^\pm `$ decays preferentially to heavy flavor, $`W_L^\pm \pi _T^{}`$ or $`\pi _T^+\pi _T^{}`$ production will produce an excess of $`\tau `$ or $`b`$ and $`c`$–tagged events in the total $`W^+W^{}`$ data sample. The experimental sensitivity will be better if $`M_{\pi _T}`$ is sufficiently different from $`M_W`$. Note that the off–resonance production rate for $`\pi _T^+\pi _T^{}`$ is generally much larger than the usual charged Higgs boson pair production rate discussed earlier. ### 3.4 $`\gamma \pi _T^0,\gamma \pi _T^0`$ For $`\mathrm{sin}\chi 0`$, a significant $`\gamma \pi _T^0,\gamma \pi _T^0`$ signature can arise. $`Z\pi _T`$ production, while possible, is never important relative to $`\gamma \pi _T`$ from phase space considerations. The $`\omega _T`$ contribution to $`\gamma \pi _T`$ can be enhanced significantly if $`Q`$ is large, since the $`\gamma /Z\omega _T`$ mixing is proportional to $`Q`$. The expected signal rate is shown in Fig. 5 for the various models. We have not attempted to estimate the backgrounds, which may be prodigious if $`M_{\pi _T}M_Z`$. However, if $`M_\pi `$ is sufficiently different from $`M_Z`$, an off–resonance signal may be observable. The expected final states are $`\gamma b\overline{b}`$, $`\gamma \tau \tau `$ or $`\gamma gg`$. On resonance, the $`\gamma \pi _T`$ production rate can be $`𝒪`$(100 pb) or larger, and there is still some rate off resonance even when the $`\rho _T`$ and $`\omega _T`$ are narrow. Model 1 (with $`Q=5/3`$) yields a raw event rate of $`.18,.54,2.5`$ pb at $`\sqrt{s}=180,190,200`$ GeV. This drops to $`.06,.18,.90`$ pb if $`Q=1`$, and $`.01,.03,.15`$ pb for $`Q=0`$. These three choices for $`Q`$ represent $`\omega _T`$ domination, equal $`\rho _T`$ and $`\omega _T`$ contributions, and $`\rho _T`$ domination. Model 5, which has $`\mathrm{sin}\chi =0`$ and lighter $`\rho _T`$ and $`\omega _T`$, has a raw event rate of $`.53,2.6,271`$ pb. ## 4 Discussion and Conclusions We have presented examples of how several models of low–scale technicolor, in the framework of the TCSM, would manifest themselves at a lepton collider operating near $`\sqrt{s}=200`$ GeV. These can be used to guide searches at LEP2 to discover or constrain TCSM models. The actual limits will depend on the collider energy, the amount of delivered luminosity, and the SM backgrounds in each channel. For reference, it is quite possible that LEP2 will operate at $`\sqrt{s}=200`$ GeV, with 200 pb<sup>-1</sup> of data delivered to each experiment. In this case, each experiment will be sensitive to cross sections near 15 fb in channels which are relatively background free. The production rates shown have no event selection cuts and no effects of initial state radiation. A dedicated analysis at the particle–level is now under way based on the PYTHIA event generator . 
The details of how to study the TCSM using PYTHIA are included in the Appendix. Here, we review the results of our parton–level study. On or near resonance, there are substantial signals of technirho and techniomega production in one or more final states. The typical width of the $`\rho _T`$ considered is a few tenths to a few GeV, while the $`\omega _T`$ ranged from exceedingly narrow up to a few tenths of a GeV. When $`\mathrm{sin}\chi 1`$, the decays $`\rho _TW_L^+W_L^{}`$ are unsuppressed. Likewise, when $`\mathrm{sin}\chi 0`$, but $`\rho _T\pi _T^+\pi _T^{}`$ is kinematically allowed, a complementary signature arises in the $`\pi _T^+\pi _T^{}`$ final state, where $`\pi _T^\pm `$ decays predominantly to heavy flavor. For intermediate values of $`\mathrm{sin}\chi `$, decays to $`W_L^\pm \pi _T^{}`$ will occur when kinematically allowed. Also, there can be signals in $`f\overline{f}`$ or $`\gamma \pi _T`$ final states. These signatures should be unmistakable, since the on–resonance cross sections can be of $`𝒪`$(nb). For the same reason, we expect that technivectors with mass significantly below the center–of–mass energy can be easily excluded by searching for radiative return events, but this requires a detailed study . Because of the mixing between the technivector mesons and the electroweak gauge bosons, signatures are not limited to be near the resonance peak. In particular, the presence of the $`\rho _T`$ or $`\omega _T`$ may be inferred from a significant decrease in the $`\mu ^+\mu ^{}`$ rate, unless the $`\gamma /Z\omega _T`$ mixing is small or the $`\rho _T`$ has a width of several GeV. The $`b\overline{b}`$ final state would yield a similar effect, but at only about 1/2 the magnitude. Also, the $`\omega _T`$ and $`\rho _T`$ can mediate the $`\gamma \pi _T`$ final state, which may be observable above backgrounds, provided that $`M_{\pi _T}`$ is far enough from $`M_Z`$. Event rates of .18 pb are possible at 30 GeV below the resonance peak in the models considered here, depending on the technipion mass, the technifermion charge $`Q`$, and the technipion mixing angle $`\chi `$. Even rates closer to the resonance are much larger. The choices of TCSM parameters used in this analysis were motivated by the beam energy of LEP2. However, several technicolor–motivated analyses have emerged based on the Run I data sets at the Tevatron that constrain the properties of the color–singlet technirho and techniomega. In general, the technivector masses of the models considered here are beyond the sensitivity of these analyses, except for the techniomega search, which may exclude the models with large $`Q=5/3`$ at the 90% C.L. Therefore, it is expected that LEP2 can set stronger limits than Run I at the Tevatron for $`\rho _T^0`$ and $`\omega _T^0`$ signatures for certain choices of TCSM parameters. In conclusion, unless the technipion masses are fairly light compared to the technivector masses (which is not expected due to the enhancement of the hard technifermion masses), or the technipion mixing angle $`\mathrm{sin}\chi 1`$ (which is not expected due to the large number of technifermion doublets required in a model with a running coupling), technivector–mediated $`\mu ^+\mu ^{}`$ and $`\gamma \pi _T^0,\gamma \pi _T^0`$ final states can be studied at LEP2 to discover or constrain simple models of technicolor at collider energies substantially below the technivector masses. 
The actual limit will depend on a detailed background analysis, but the models studied here yield substantial effects at $`1020`$ GeV below $`M_{\rho _T}=M_{\omega _T}`$. The technirho alone can still produce visible effects in these channels, or (1) the $`W_L^\pm \pi _T^{}`$ final states, if kinematically allowed, (2) the $`W_L^+W_L^{}`$ final state, if $`\mathrm{sin}\chi 1`$, or (3) the $`\pi _T^+\pi _T^{}`$ final state, if $`\mathrm{sin}\chi 0`$ and the technipion is light. ## Acknowledgements I thank A. Kounine, G. Landsberg, and K. Lane for useful comments. This work was inspired by the “Strong Dynamics for Run II Workshop” at Fermilab. ## Appendix The simulation of the production and decays of technicolor particles has been substantially upgraded in Pythia v6.126, which is available at moose.ucdavis.edu/mrenna, along with documentation. The full set of processes are: ``` * Drell--Yan (ETC == Extended TechniColor) 194 f+fbar -> f’+fbar’ (ETC) 195 f+fbar’ -> f"+fbar"’ (ETC) ``` The final state fermions are $`e^+e^{}`$ and $`e^\pm \nu _e`$, respectively, which can be changed through the parameters KFPR(194,1) and KFPR(195,1), respectively. ``` * techni_rho0/omega * charged techni_rho 361 f + fbar -> W_L+ W_L- 370 f + fbar’ -> W_L+/- Z_L0 362 f + fbar -> W_L+/- pi_T-/+ 371 f + fbar’ -> W_L+/- pi_T0 363 f + fbar -> pi_T+ pi_T- 372 f + fbar’ -> pi_T+/- Z_L0 364 f + fbar -> gamma pi_T0 373 f + fbar’ -> pi_T+/- pi_T0 365 f + fbar -> gamma pi_T0’ 374 f + fbar’ -> gamma pi_T+/- 366 f + fbar -> Z0 pi_T0 375 f + fbar’ -> Z0 pi_T+/- 367 f + fbar -> Z0 pi_T0’ 376 f + fbar’ -> W+/- pi_T0 368 f + fbar -> W+/- pi_T-/+ 377 f + fbar’ -> W+/- pi_T0’ ``` All of the processes from 361 to 377 can be accessed at once by setting MSEL=50. The production and decay rates depend on several ”Straw Man” technicolor parameters ($`D`$ denotes the default value of a parameter): ``` * Techniparticle masses PMAS(51,1) (D=110.0 GeV) neutral techni_pi mass PMAS(52,1) (D=110.0 GeV) charged techni_pi mass PMAS(53,1) (D=110.0 GeV) neutral techni_pi’ mass PMAS(54,1) (D=210.0 GeV) neutral techni_rho mass PMAS(55,1) (D=210.0 GeV) charged techni_rho mass PMAS(56,1) (D=210.0 GeV) techni_omega mass Note: the rho and omega masses are not pole masses * Lagrangian parameters PARP(141) (D= 0.33333) $\sin\chi$, the mixing angle between technipion interaction and mass eigenstates PARP(142) (D=82.0000 GeV) F_T, the technipion decay constant PARP(143) (D= 1.3333) Q_U, charge of up-type technifermion; the down-type technifermion has a charge Q_D=Q_U-1 PARP(144) (D= 4.0000) N_TC, number of technicolors PARP(145) (D= 1.0000) C_c, coefficient of the technipion decays to charm PARP(146) (D= 1.0000) C_b, coefficient of the technipion decays to bottom PARP(147) (D= 0.0182) C_t, coefficient of the technipion decays to top PARP(148) (D= 1.0000) C_tau, coefficient of the technipion decays to tau PARP(149) (D=0.00000) C_pi, coefficient of technipion decays to gg PARP(150) (D=1.33333) C_pi’, coefficient of technipion’ decays to gg PARJ(172) (D=200.000 GeV) M_V, vector mass parameter for technivector decays to transverse gauge bosons and technipions PARJ(173) (D=200.000 GeV) M_A, axial mass parameter PARJ(174) (D=0.33300) $\sin\chi’$, the mixing angle between the technipion’ interaction and mass eigenstates PARJ(175) (D=0.05000) isospin violating technirho/techniomega mixing amplitude ``` Note, the decays products of the $`W`$ and $`Z`$ bosons are distributed according to phase space, regardless of their designation as $`W_L/Z_L`$ or transverse gauge 
bosons. The exact meaning of longitudinal or transverse polarizations in this case requires more thought.
no-problem/9907/cond-mat9907245.html
ar5iv
text
# Low-energy quasiparticle states near extended scatterers in d-wave superconductors and their connection with SUSY quantum mechanics ## Abstract Low-energy quasiparticle states, arising from scattering by single-particle potentials in d-wave superconductors, are addressed. Via a natural extension of the Andreev approximation, the idea that sign-variations in the superconducting pair-potential lead to such states is extended beyond its original setting of boundary scattering to the broader context of scattering by general single-particle potentials, such as those due to impurities. The index-theoretic origin of these states is exhibited via a simple connection with Witten’s supersymmetric quantum-mechanical model. PACS numbers: 74.62.Dh, 74.72.-h, 03.65.Sq, 11.30.Pb, 61.16.Ch Introduction: In the present work we shall explore the low-energy quasiparticle states available in d-wave superconductors due to the presence of an extended scatterer such as a boundary or an impurity more than a few Fermi wavelengths across. In the context of boundary scattering, such states represent an important signature of sign-variations of the superconducting order parameter, as they have been shown to originate in the possibility of scattering between momentum orientations that are subject to superconducting pair-potentials of differing sign. The main aims of our work are to extend the idea that sign-variations in the superconducting pair-potential lead to low-energy quasiparticle states to the context of scattering by general single-particle potentials, such as those due to impurities (i.e., beyond scattering by boundaries), and to explore the robustness of this effect. The theoretical framework that we shall adopt is the semiclassical approach to the quantum-mechanical problem of scattering from the single-particle potential, via which the eigenvalue problem at hand reduces to a family of effectively one-dimensional problems for the particle-hole dynamics in the presence of the superconducting pair-potential. Through this approach, we shall be able to see that the density of low-energy quasiparticle states (DOS) is determined solely by the classical scattering properties of the single-particle potential and, furthermore, that this DOS is insensitive to any suppression of the pair-potential that the impurity might cause. This approach also provides us with a framework for classifying and calculating corrections to the DOS at low energies, such as those due to diffraction during scattering from the single-particle potential itself, or due to any pair-potential modifications beyond mere suppression (such as the induction of any out-of-phase components of the pair-potential). Along the way, we shall discuss the fact that the emerging one-dimensional eigenproblem is a realization of Witten’s supersymmetric quantum-mechanical model which, via the Witten index , provides a natural setting in which to explore zero-energy states . Through this identification with Witten’s model we shall see that the conditions under which zero-energy states exist are indeed those mentioned above, viz., propagation between pair-potentials of differing signs. 
In addition, we shall examine the role played by the semiclassical approximation to the scattering problem vis-à-vis the existence of zero-energy states, and thus see how it is that going beyond this semiclassical approximation generically introduces transition amplitudes between classical scattering trajectories, thus causing the dispersion of the formerly zero-energy states, e.g., into one or more low-energy peaks in the DOS. We would like to stress at the outset that the issue of the origin of the low-energy states, viz., sign changes in the pair-potential, has already been soundly understood and extensively developed theoretically in several contexts: notable examples include the works of Buchholtz and Zwicknagl on p-wave superconductors near surfaces; and of Hu , Buchholtz et al. , and Fogelström et al. on d-wave superconductors near flat surfaces. Low-energy states have also received extensive experimental attention in the context of boundary-scattering in high-temperature superconductors. In particular, measurements of the (macroscopic) tunneling conductance have revealed a zero-bias anomaly indicative of the existence of low-energy states near boundaries. Apart from the effects of flat boundaries, theoretical research on low-energy quasiparticle resonances in d-wave materials has mostly been concerned with the effects of point-like impurities (i.e., impurities for which the size of the impurity is not much larger than the Fermi wavelength $`\lambda _\mathrm{F}`$). Of particular interest has been the effect of the impurity strength on the energies and wave functions of the resonances . More recently, attention has been paid to the effects on these resonances of impurity-induced suppression of the superconducting order parameter . Emerging from this body of work is a picture in which each strong, point-like impurity gives rise to a low-energy resonance. This resonance, which would show up in the tunneling DOS as a pair of peaks symmetrically located around zero energy, transforms (in the particle-hole symmetric case) into a single, marginal, bound state at zero energy in the unitary scattering limit. As the impurity strength is reduced, the energy of this resonance moves towards the gap maximum. Moreover, the quantitative details of the band structure and/or order parameter can play important roles . In particular, in particle-hole asymmetric systems the energies of the resonances no longer tend asymptotically to zero in the unitary limit. In contrast, the present work suggests that an extended (rather than point-like) impurity induces a zero-energy peak in the DOS with a weight of order the linear size of the impurity (measured in units of the Fermi wavelength). Moreover, the resulting low-energy DOS is much less sensitive to details such as the precise form of the band structure and any in-phase order parameter variations, i.e., the peak at zero energy is inert. In this respect, extended impurities behave more like flat boundaries than like point-like impurities. The theoretical distinctions between point-like and extended impurities raised in this Letter have, to some extent, been addressed experimentally via scanning tunneling spectroscopy on $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8`$ surfaces . Work on native defects , which often appear to be essentially point-like in STM imaging, yield weak signatures in the (smeared, local) DOS near each defect. 
Such signatures can each be interpreted as being induced by a point-like impurity that yields a resonance of unit weight. In contrast, the artificially-induced defects described in Ref. , which appear to be more extended in STM imaging, show much stronger signatures in the DOS. This is consistent with the idea that extended impurities produce many states, as the present work indicates they should. Bogoliubov-de Gennes eigenproblem: We regard the single-quasiparticle excitations as being described by the Bogoliubov-de Gennes (BdG) eigenproblem $$\left(\begin{array}{cc}\widehat{h}& \widehat{\mathrm{\Delta }}\\ \widehat{\mathrm{\Delta }}^{}& \widehat{h}\end{array}\right)\left(\begin{array}{c}u\\ v\end{array}\right)=E\left(\begin{array}{c}u\\ v\end{array}\right),$$ (1) where the components $`u(𝐱)`$ and $`v(𝐱)`$ of the energy eigenstate respectively give the amplitudes for finding an electron and a hole at the position $`𝐱`$, $`E`$ is the energy eigenvalue, and $`\widehat{h}=^2k_\mathrm{F}^2+V(𝐱)`$ is the one-particle hamiltonian, in which $`k_\mathrm{F}^2`$ is the chemical potential \[i.e., $`k_\mathrm{F}`$ ($`2\pi /\lambda _\mathrm{F}`$) is the Fermi wave vector\] and $`V`$ is the single-particle potential. We have adopted units in which $`\mathrm{}^2/2m=1`$, where $`m`$ is the (effective) mass of the electrons and holes. The operator $`\widehat{\mathrm{\Delta }}`$ (which should ultimately be determined self-consistently) is the pair-potential (integral) operator, whose action on the wave functions is specified by the (nonlocal) kernel $`\mathrm{\Delta }(𝐱,𝐱^{})`$ via: $`[\widehat{\mathrm{\Delta }}v](𝐱)=𝑑𝐱^{}\mathrm{\Delta }(𝐱,𝐱^{})v(𝐱^{})`$. We assume that sufficiently far from the scatterer $`\mathrm{\Delta }`$ returns to the value that characterizes the bulk superconductor (e.g., s-wave, d-wave, mixed, etc.). As we shall see below, our computation of the low-energy DOS is insensitive to the precise form of any suppression of the superconducting order induced by the single-particle potential, and therefore continues to hold when $`\mathrm{\Delta }`$ is replaced by its self-consistent value. However, as we shall also see below, induced modifications of the superconducting order parameter that go beyond simple suppression in a manner that causes local supercurrents \[i.e., via the addition of any intrinsically out-of-phase component to $`\mathrm{\Delta }`$\] spoil this robustness. Andreev’s approximation for a strong single-particle potential: To analyze the BdG eigenproblem we first apply a semiclassical approximation, which reduces the full problem to a family of first-order differential eigenproblems labeled by the classical trajectories of a particle at the Fermi energy in the presence of the full single-particle potential. This amounts to extending the Andreev approximation to situations in which there is a single-particle potential whose energy scale $`V_0`$ is not negligible compared with the Fermi energy. In technical terms, we are making an asymptotic approximation valid when $`k_\mathrm{F}^2(\mathrm{\Delta }_0,E)`$, $`V_0k_\mathrm{F}^2`$, and $`V(𝐱)`$ is slowly varying relative to $`\lambda _\mathrm{F}`$. To implement this approximation we consider the semiclassical solution of $$\left(^2k_\mathrm{F}^2+V(𝐱)\right)\left(𝒜(𝐱)\mathrm{e}^{ik_\mathrm{F}S(𝐱)}\right)=0,$$ (2) i.e., the “large” part of the BdG eigenproblem, where both $`𝒜(𝐱)`$ and $`S(𝐱)`$ are taken to be slowly varying (with respect to $`\lambda _\mathrm{F}`$. 
By retaining the first and second powers in $`k_\mathrm{F}`$ we obtain, from Eq. (2), the Hamilton-Jacobi equation $`\left|\mathbf{}S(𝐱)\right|^2=1k_\mathrm{F}^2V(𝐱)`$ and the conservation condition $`\mathbf{}\left(𝒜(𝐱)^2\mathbf{}S(𝐱)\right)=0.`$ We then use the resulting semiclassical solution, which is specified in terms of the incoming momentum orientation $`𝐧`$ via the asymptotic behavior $`S(𝐱;𝐧)𝐧𝐱`$ (for $`𝐱`$ far from the scattering center) and includes all of the fast (i.e., order of $`\lambda _\mathrm{F}`$) variations of the exact BdG eigenfunctions, to perform a generalized separation of rapidly and slowly varying components by writing $$\left(\begin{array}{c}u(𝐱)\\ v(𝐱)\end{array}\right)=𝒜(𝐱)\mathrm{e}^{ik_\mathrm{F}S(𝐱;𝐧)}\left(\begin{array}{c}\overline{u}(𝐱)\\ \overline{v}(𝐱)\end{array}\right),$$ (3) where $`\overline{u}`$ and $`\overline{v}`$ are assumed to be slowly varying relative to $`\lambda _\mathrm{F}`$. Then, by inserting this form into Eq. (1) we obtain $`\left[\widehat{h}\left(𝒜\mathrm{e}^{ik_\mathrm{F}S}\overline{u}\right)\right](𝐱)2ik_\mathrm{F}𝒜(𝐱)\mathrm{e}^{ik_\mathrm{F}S(𝐱;𝐧)}\left(\mathbf{}S\right)\left(\mathbf{}\overline{u}\right),`$ (4) for the action of $`\widehat{h}`$ on $`𝒜\mathrm{exp}\left(ik_\mathrm{F}S\right)\overline{u}`$. We now turn to the “small” part of the BdG eigenproblem, which involves the off-diagonal integral operator $`\widehat{\mathrm{\Delta }}`$. It is convenient to transform to relative and center-of-mass coordinates, $`𝐫`$ and $`𝐑`$: $$\overline{\mathrm{\Delta }}(𝐫,𝐑)\mathrm{\Delta }(𝐱,𝐱^{}),𝐫𝐱𝐱^{},𝐑\frac{𝐱+𝐱^{}}{2}.$$ (5) Then the action of $`\widehat{\mathrm{\Delta }}`$ can be asymptotically approximated (for $`k_\mathrm{F}^2\mathrm{\Delta }_0`$) as $`\left[\widehat{\mathrm{\Delta }}\left(𝒜\mathrm{e}^{ik_\mathrm{F}S}\overline{u}\right)\right](𝐱)={\displaystyle 𝑑𝐫\overline{\mathrm{\Delta }}(𝐫,𝐱𝐫/2)\overline{u}(𝐱𝐫/2)𝒜(𝐱𝐫/2)\mathrm{e}^{ik_\mathrm{F}S(𝐱𝐫/2;𝐧)}}\left(𝒜(𝐱)\mathrm{e}^{ik_\mathrm{F}S(𝐱;𝐧)}\right)\overline{u}(𝐱)\mathrm{\Delta }_{\mathrm{eff}}(𝐱;𝐧),`$ (7) $`\mathrm{\Delta }_{\mathrm{eff}}(𝐱;𝐧){\displaystyle 𝑑𝐫\overline{\mathrm{\Delta }}(𝐫,𝐱𝐫/2)\frac{𝒜(𝐱𝐫/2)}{𝒜(𝐱)}\mathrm{exp}\left(ik_\mathrm{F}S(𝐱𝐫;𝐧)ik_\mathrm{F}S(𝐱;𝐧)\right)},`$ (8) provided we assume that $`(\overline{u}(𝐱),\overline{v}(𝐱))`$ varies much more slowly than $`\lambda _\mathrm{F}`$. Thus the task of solving the full BdG eigenproblem (1) is reduced to the task of solving the (classical) Hamilton-Jacobi equation, along with the ($`2\times 2`$) first-order partial differential eigenproblem $$\left(\begin{array}{cc}2ik_\mathrm{F}\mathbf{}S\mathbf{}& \mathrm{\Delta }_{\mathrm{eff}}(𝐱;𝐧)\\ \mathrm{\Delta }_{\mathrm{eff}}^{}(𝐱;𝐧)& 2ik_\mathrm{F}\mathbf{}S\mathbf{}\end{array}\right)\left(\begin{array}{c}\overline{u}\\ \overline{v}\end{array}\right)=E\left(\begin{array}{c}\overline{u}\\ \overline{v}\end{array}\right).$$ (9) In fact, the eigenproblem is an ordinary rather than partial one. To see this, recall the element of Hamilton-Jacobi theory in which one establishes that the solution $`S`$ of the Hamilton-Jacobi equation (at least for classically allowed regions) is indeed the action computed along the classical trajectory $`𝐱_\mathrm{c}()`$ that solves Newton’s equation $`k_\mathrm{F}^2_s^2𝐱_\mathrm{c}(s)=\mathbf{}V(𝐱_\mathrm{c})`$ subject to the condition $`|_s𝐱_c(s)|1`$ as $`s\pm \mathrm{}`$ (so that the classical motion is at the Fermi energy). Owing to this connection between $`\mathbf{}S`$ and $`\dot{𝐱}_\mathrm{c}`$, Eq. 
(9) can be rewritten as $`\widehat{H}\left(\begin{array}{c}\overline{u}\\ \overline{v}\end{array}\right)=E\left(\begin{array}{c}\overline{u}\\ \overline{v}\end{array}\right),\widehat{H}\left(\begin{array}{cc}2ik_\mathrm{F}_s& \mathrm{\Delta }_{\mathrm{eff}}(s)\\ \mathrm{\Delta }_{\mathrm{eff}}^{}(s)& 2ik_\mathrm{F}_s\end{array}\right),`$ where $`\mathrm{\Delta }_{\mathrm{eff}}(s)`$ is defined to be $`\mathrm{\Delta }_{\mathrm{eff}}(𝐱_\mathrm{c}(s);𝐧)`$. This family of first-order ordinary differential eigenproblems is parametrized by $`𝐧`$ and the impact parameter $`b`$, which uniquely specify the classical trajectory $`𝐱_\mathrm{c}()`$ from amongst those having energy $`k_\mathrm{F}^2`$. Zero-energy states: To search for zero-energy states it is useful to reduce the eigenproblem via the following sequence of steps. We apply the unitary transformation (in electron-hole space) $`\widehat{U}\frac{1}{\sqrt{2}}\left(\genfrac{}{}{0pt}{}{1}{i}\genfrac{}{}{0pt}{}{1}{i}\right)`$, under which $`\widehat{H}`$ $``$ $`\widehat{H}^{}\widehat{U}^{}\widehat{H}\widehat{U}=\left(\begin{array}{cc}0& \widehat{A}\\ \widehat{A}^{}& 0\end{array}\right),`$ (11) $`\widehat{A}`$ $``$ $`2ik_\mathrm{F}_si\mathrm{\Delta }_{\mathrm{eff}}(s),\widehat{A}^{}2ik_\mathrm{F}_s+i\mathrm{\Delta }_{\mathrm{eff}}(s).`$ (12) We emphasize that it is not possible to arrive at this structure for values of $`\mathrm{\Delta }_{\mathrm{eff}}`$ that are intrinsically complex (i.e., cannot be made real by an elementary gauge transformation), as is the case, e.g., for supercurrent-carrying states. The virtue of the structure of Eqs. (11) and (12) is that it allows us to recognize that zero-energy eigenfunctions of $`\widehat{H}^{}`$ have the form $`\left(\genfrac{}{}{0pt}{}{\phi _+}{0}\right)`$ or $`\left(\genfrac{}{}{0pt}{}{0}{\phi _{}}\right)`$, where the functions $`\phi _\pm `$ obey $$\left(2k_\mathrm{F}_s\mathrm{\Delta }_{\mathrm{eff}}\right)\phi _\pm =0,$$ (13) provided they exist (i.e., are normalizable). Owing to their first-order nature, these (zero-energy) eigenproblems may readily be integrated to give $$\phi _\pm (s)\mathrm{exp}\left(\pm (2k_\mathrm{F})^1^s𝑑s^{}\mathrm{\Delta }_{\mathrm{eff}}(s^{})\right).$$ (14) However, the ability to normalize $`\phi _\pm `$, and therefore the existence of zero-energy eigenvalues, depends on the form of $`\mathrm{\Delta }_{\mathrm{eff}}`$ via the limiting values $`\mathrm{\Delta }_\pm lim_s\mathrm{}\mathrm{\Delta }_{\mathrm{eff}}(\pm s)`$ for a given semiclassical path $`𝐱_\mathrm{c}()`$. Specifically, for semiclassical paths for which $`\mathrm{\Delta }_+\mathrm{\Delta }_{}`$ is negative, one or other (but not both) of $`\phi _\pm `$ is normalizable and, therefore, for such paths provide precisely one zero-energy eigenvalue. On the other hand, for semiclassical paths for which $`\mathrm{\Delta }_+\mathrm{\Delta }_{}`$ is positive, neither of $`\phi _\pm `$ is normalizable, and therefore such paths provide no zero-energy eigenvalues. This diagnostic for when semiclassical paths lead to zero-energy states allows us to assemble the zero-energy contributions to the DOS. 
If, for the sake of concreteness, we restrict our attention to two-dimensional systems then our approximation to the low-energy DOS has the form $$\rho _{\mathrm{SC}}(E)=\delta (E)\frac{k_\mathrm{F}}{2\pi }𝑑𝐧𝑑b\left(1\mathrm{sgn}\mathrm{\Delta }_+\mathrm{sgn}\mathrm{\Delta }_{}\right).$$ (15) This formula should have corrections, which vanish as $`E`$ tends to zero, coming from the nodes in the gap of the homogeneous d-wave state, as well as suppression of the superconducting state near the impurity. Let us now highlight some features of Eq. (15). (i) The evaluation of Eq. (15) requires only knowledge of the classical scattering trajectories for $`V`$. (ii) The DOS peak is located at zero energy. Corrections to this result, owing inter alia to particle-hole asymmetry, are of relative order $`\mathrm{max}(1/k_\mathrm{F}R,\mathrm{\Delta }_0/k_\mathrm{F}^2)`$ (where $`R`$ is the characteristic extent of the impurity). For small $`\mathrm{\Delta }_0/k_\mathrm{F}^2`$ and extended impurities these corrections are small. (iii) Only the asymptotic signs of $`\mathrm{\Delta }`$ at the ends of the classical trajectories feature; the DOS is unchanged by deformations of the pair-potential, provided the asymptotic signs are preserved and no out-of-phase components are induced. (iv) The degeneracy of the zero-energy level is of order $`k_\mathrm{F}R`$, the constant of proportionality being dependent on the form of $`V`$. Connection with Witten’s model of supersymmetric quantum mechanics and index theory: Having seen, within the context of an explicit computation, the emergence (or otherwise) or zero-energy states, we now discuss the structure that underlies this issue, namely index theory . The relevant aspect of index theory is Witten’s index from Witten’s model of supersymmetric quantum mechanics (SUSY QM). The specific connection is as follows: $`\widehat{H}^2`$ (c.f. our 11) is Witten’s SUSY Hamiltonian; $`\mathrm{\Delta }_{\mathrm{eff}}`$ (our 8) is Witten’s SUSY potential; $`A`$ and $`A^{}`$ (our 12) are proportional to Witten’s annihilation and creation operators. Indeed, the analysis leading from Eq. (11) to the conditions for the existence of a zero-energy state, mirrors the (by now) standard SUSY QM analysis. In SUSY QM, an important tool is the Witten index, i.e., the number of zero-energy states of the form $`\left(\genfrac{}{}{0pt}{}{0}{\phi _{}}\right)`$ minus the number of the form $`\left(\genfrac{}{}{0pt}{}{\phi _+}{0}\right)`$. If the Witten index is nonzero then there certainly are zero-energy states (i.e., SUSY is good; see, e.g., Ref. , Sec. 2.1). If the Witten index is zero then there may or may not be zero-energy states, as contributions may cancel. In the present context, we are not prima facie concerned with the Witten index and its properties, but rather with ascertaining the number of zero-energy states. However, owing to the fact that there is at most one zero-energy state for any semiclassical trajectory (because the normalizability condition cannot be simultaneously satisfied by both $`\phi _+`$ and $`\phi _{}`$) the (modulus of the) Witten index does indeed permit the counting of the zero-energy states. Discussion and outlook: The condition on the existence of zero-energy states, together with Eq. (8), provide us with a way of calculating the DOS at low energies by a simple counting of the number of classical trajectories that start and end with different signs of the superconducting pair-potential \[see Eq. (15)\]. 
Thus, the DOS at low energies depends only on the classical scattering properties of the single-particle potential. As we have stressed earlier, this result is valid in the regime in which the single-particle potential is both spatially extended and strong and the pair-potential is much smaller than the Fermi energy. Before turning to a discussion (and classification) of the generic corrections to this result for the DOS, which arise upon the relaxation of these conditions, we remark that the foregoing approximation scheme and results also hold for spatially extended single-particle potentials that are weaker than the Fermi energy. Moreover, in the regime $`V_0\mathrm{\Delta }_0`$ our results can be extended to the case of rapidly-varying single-particle potentials (such as are due to point-like impurities). However, as the strength of the single-particle potential is diminished, the classical trajectories will tend towards straight lines and, hence, the number of trajectories that “see” different signs of the pair-potential will be reduced. This will result in a corresponding decrease in the degeneracy of the zero-energy level, in accordance with formula (15). Indeed, for $`V_0\mathrm{\Delta }_0`$ the trajectories are essentially straight lines. Thus, there would be no zero-energy states, but additional resonances (due to the impurity) may arise if the pair-potential is suppressed. By contrast, in the regime $`V_0k_\mathrm{F}^2`$ but $`V(𝐱)`$ rapidly varying (e.g., for strong, point-like impurities), the approximation scheme that enabled us to reduce the problem to a family of one-dimensional eigenproblems breaks down, due to the fact that the previously-neglected $`\mathbf{}𝒜`$ term becomes comparable to previously-retained $`\mathbf{}S`$ term. The former term introduces diffraction effects in the (quantum-mechanical) scattering from the single-particle potential, as well as tunneling through the classically-forbidden region. These effects can be viewed as consequences of nonzero transition amplitudes between states associated with the classical trajectories, and would result in the dispersion of the previously-degenerate zero energy states. Let us conclude by remarking that the presence of an impurity-induced subdominant component to the pair-potential, provided it is in-phase with the dominant component, would not change the picture presented here: specifically, formula (15) would continue to hold. On the other hand, if an out-of-phase component is induced (e.g., so that locally the state becomes d+is), this would cause the zero-energy peak in the DOS to split into two peaks of nonzero width , symmetrically disposed about zero energy, the lineshapes depending on the full (rather than solely the asymptotic) details of the pair-potential. If the out-of-phase component is small then the resulting lineshape can be computed via perturbation theory. Acknowledgments: Useful discussions with A. V. Balatsky and M. Stone are gratefully acknowledged. This work was supported by the Department of Energy, Award No. DEFG02-96ER45439, and by the Fulbright Foundation (A.S.).
no-problem/9907/cond-mat9907235.html
ar5iv
text
# ‘One Sided’ Log-normal Distribution of Conductances for a Disordered Quantum Wire ## Abstract We develop a simple systematic method, valid for all strengths of disorder, to obtain analytically for the first time the full distribution of conductance $`P(g)`$ for a quasi one dimensional wire in the absence of electron-electron interaction. We show that in the crossover region between the metallic and insulating regimes, $`P(g)`$ is highly asymmetric, given by an essentially ‘one sided’ log-normal distribution. For larger disorder, the tail of the log-normal distribution for $`g>1`$ is cut off by a Gaussian. Since the discovery of the absence of self averaging in mesoscopic disordered systems , the study of the full distribution of conductance has attracted a lot of attention . In particular, while the metallic regime is well described by a Gaussian distribution, the moments of the conductance fluctuations become of the same order of magnitude as the average conductance on approaching the localized regime. In such cases the average value becomes insufficient in describing properties of disordered conductors and the full distribution must be considered. Recently, numerical support for the existence of a new universal distribution at the metal-insulator transition , a broad distribution of the critical conductance at the integer quantum Hall transition , as well as the expected multifractal properties associated with the critical regime have increased the interest in the conductance distribution in the intermediate regime, between the well studied universal conductance fluctuations in the metallic limit and the log-normal distribution in the deeply insulating limit. However, even for a quasi one dimensional (1d) system where there is only a smooth crossover between the metallic and insulating regimes, there is no analytic result available for the conductance distribution in the crossover regime. So far only the first two moments have been obtained for all strengths of disorder , using the 1d supersymmetric nonlinear $`\sigma `$ model . This model has been shown to be equivalent, in the thick wire or quasi 1d limit, to the Dorokov-Mello-Pereyra-Kumar (DMPK) equation which describes the evolution of the distribution of the transmission eigenvalues with increasing wire length. In this work we develop a simple systematic method to evaluate directly the full distribution of conductance for a thick quasi 1d wire (mean free path $`l`$ the width), starting from the solution of the DMPK equation. The main result of the paper is that although there is no phase transition in quasi one dimension, the crossover region between metallic and insulating regimes is highly non trivial, and shows a remarkable ‘one-sided’ log-normal distribution. Recent numerical studies of a quasi 1d system in the quantum Hall regime have shown highly asymmetric log-normal distributions in the crossover region. We expect similar qualitative features to exist in the critical regimes in higher dimensions as well. Indeed, numerical studies near the integer quantum Hall transition in two dimensions as well as the Anderson transition in three dimensions also point to asymmetric distributions of the critical conductance . In addition, we predict that even the insulating regime should have a sharp cutoff in its log-normal tail near the (dimensionless) conductance $`g1`$. 
In particular, we show that the conductance distribution in the insulating regime (in the absence of time reversal symmetry) has the form $$P[\mathrm{ln}(g)]\{\begin{array}{cc}\sqrt{\frac{x_1\mathrm{sinh}2x_1}{1g}}e^{\mathrm{\Gamma }x_1^2},\hfill & g<1\text{;}\hfill \\ \sqrt{2}ge^{a(g1)^2},\hfill & g1\hfill \end{array}$$ (1) where $`x_1=\mathrm{cosh}^1(1/\sqrt{g})`$ and the parameter $`\mathrm{\Gamma }=\xi /L`$, where $`\xi =Nl`$ is the quasi 1d localization length, $`N`$ is the number of transmission channels, and $`Ll`$ is the length of the conductor. The parameter $`a`$ is the value of $`F^{\prime \prime }`$ given in (10) evaluated at $`x_2=2/\pi \mathrm{\Gamma }`$, and tends to $`\frac{3}{8}\mathrm{exp}[8/\pi \mathrm{\Gamma }]`$ for $`\mathrm{\Gamma }1`$ in insulators. Note that for $`g1`$, $`x_1\mathrm{ln}(2/\sqrt{g})`$, and the distribution is log-normal, centered at $`\mathrm{ln}g=1/\mathrm{\Gamma }`$. However, for $`g>1`$, the tail is cut off by an exponential function over an exponentially narrow scale in $`1/\mathrm{\Gamma }`$. The results for two different values of $`\mathrm{\Gamma }`$, $`0.7`$ and $`0.2`$, are plotted in fig. 1. The main point is that for very strong disorder, the typical value of $`g`$ is much smaller than unity, so the peak of the distribution $`P[\mathrm{ln}(g)]`$ is very far away from $`g1`$. In this case the exponential cutoff at $`g1`$ is less relevant. However, even for large disorder, a sharp cutoff in the tail for $`g>1`$ always exists, as shown for $`\mathrm{\Gamma }=0.2`$ in fig. 1. At intermediate strength of disorder, the peak of the distribution is close to the cutoff, and the distribution becomes highly asymmetric. In particular near the crossover between metallic and insulating behavior, the peak is at $`g1`$, and we obtain a ‘one-sided’ log-normal distribution, as shown for $`\mathrm{\Gamma }=0.7`$ in figure 1. As a check of the scope and validity of the method developed here, we obtain the exact universal conductance fluctuation in the weakly disordered metallic regime with the expected Gaussian distribution as well as the correct mean and variance of the log-normal distribution in the strongly disordered localized regime within the same unified framework. Systematic corrections as a function of disorder can be obtained from both the metallic and the insulating limits. Note however that the analytic results presented here near the crossover regime are only semi-quantitative, due to the approximate analytical evaluations of certain integrals. A more quantitative result is possible based on numerical evaluations of these integrals. We first briefly outline the method. The probability distribution $`p(\lambda )`$ of the $`N`$ variables $`\lambda _i`$, related to the transmission eigenvalues $`T_i`$ of an N-channel quasi 1d wire by $`\lambda _i=(1T_i)/T_i`$, satisfy the well known DMPK equation . The solution of this equation can be written in the general form $`p(\lambda )\mathrm{exp}[\beta H(\lambda )]`$, where $`H(\lambda )`$ may be interpreted as the Hamiltonian function of $`N`$ classical charges at positions $`\lambda _i`$. The symmetry parameter $`\beta `$=1,2 or 4 depending on the symmetry of the ensemble . The Hamiltonian depends on the parameters $`L`$, $`N`$ and $`l`$ only in the combination $`\mathrm{\Gamma }=Nl/L`$. We will consider the quasi 1d limit where both $`N`$ and $`L`$ approach infinity keeping $`\mathrm{\Gamma }`$ fixed. The dimensionless conductance in terms of $`\lambda _i`$ is given by $`g=_i^N\frac{1}{1+\lambda _i}`$ . 
The distributon of conductance can therefore be written as $$P(g)=\frac{1}{Z}_{\mathrm{}}^{\mathrm{}}\frac{d\tau }{2\pi }_0^{\mathrm{}}\underset{i=1}{\overset{N}{}}d\lambda _i\mathrm{exp}\left[i\tau (g\underset{i}{\overset{N}{}}\frac{1}{1+\lambda _i})\beta H\right],$$ (2) where $`Z`$ is a normalization factor. In the metallic regime $`g1`$, the $`\lambda _i`$ are all very close to each other so that a continuum description can be used with a density of $`\lambda `$ finite between zero and an upper cutoff given by the normalization condition. This approximation describes the universal conductance fluctuations in the metallic regime . In the deeply insulating regime on the other hand, all $`\lambda _i`$ are exponentially large and separated exponentially from each other, and the conductance is dominated by the smallest eigenvalue. This approximation describes the log-normal distribution in the deeply insulating regime . It is clear however that none of the above descriptions can be used in the crossover regime, where the smallest eigenvalue is neither zero, nor exponentially large. Nevertheless, it turns out that it is possible to combine the essential features of the two descriptions and develop a simple and systematic procedure to study the conductance distribution at intermediate regimes. For simplicity, we will discuss the case $`\beta =2`$ only. The basic idea is the following: 1) We first separate out the lowest eigenvalue $`\lambda _1`$ and treat the rest as a continuum with a lower bound at $`\lambda _2>\lambda _1`$. Note that this approximation can be systematically improved by separating out the lowest $`n>1`$ eigenvalues and treating the rest as a continuum. 2) The continuum part can be written as a functional integration on the generalized density $`\rho (\lambda )`$, and the distribution (2) can be rewritten as $$P(g)=\frac{1}{Z}_{\mathrm{}}^{\mathrm{}}\frac{d\tau }{2\pi }e^{i\tau g}_0^{\mathrm{}}𝑑\lambda _1_{\lambda _1}^{\mathrm{}}𝑑\lambda _2D[\rho (\lambda )]\mathrm{exp}[F(\lambda _1,\lambda _2;\rho (\lambda );\tau )].$$ (3) Here the ‘Free energy’ $$F(\lambda _1,\lambda _2;\rho (\lambda );\tau )=\beta H(\lambda _1,\lambda _2;\rho (\lambda ))+i\tau \left[\frac{1}{1+\lambda _1}+_{\lambda _2}^b𝑑\lambda \frac{\rho (\lambda )}{1+\lambda }\right]$$ (4) contains the ‘edge’ separating out $`\lambda _1`$ as well as the ‘source’ terms proportional to $`\tau `$, plus the continuum version of the Hamiltonian of the form $`H=_{i<j}^Nu(\lambda _i,\lambda _j)+_i^NV(\lambda _i)`$. The upper limit $`b`$ is given by the number conservation $`_{\lambda _2}^b𝑑\lambda \rho (\lambda )=N1`$. 3) We obtain the density by minimizing the Free energy with respect to $`\rho (\lambda )`$, keeping $`\lambda _1`$ and $`\lambda _2`$ fixed. This gives $$_0^{\mathrm{}}𝑑\lambda ^{}u(\lambda +\lambda _2,\lambda ^{}+\lambda _2)\rho _{sp}(\lambda +\lambda _2)=2V_{tot}(\lambda +\lambda _2),$$ (5) where we have shifted the lower limit to zero, and $`V_{tot}(\lambda )=V(\lambda )+\frac{i\tau /\beta }{1+\lambda }+u(\lambda _1,\lambda )`$. After taking a derivative on both sides of (5), the kernel can be inverted to obtain the saddle point density $`\rho _{sp}(\lambda )`$. 
4) From the density, we obtain the saddle point Free energy $$F_{sp}=\frac{\beta }{2}_{\lambda _2}^b𝑑\lambda V_{tot}(\lambda )\rho _{sp}(\lambda )+\beta V(\lambda _1)+\frac{i\tau }{1+\lambda _1}.$$ (6) 5) Since $`V_{tot}`$ and therefore $`\rho _{sp}`$ are both linear in $`\tau `$, the Free energy is quadratic in $`\tau `$ and can be written in the form $`F_{sp}=F^0+(i\tau )F^{}+\frac{(i\tau )^2}{2}F^{\prime \prime }`$. The integral over $`\tau `$ in eq. (3) can then be done exactly. The result is $$P(g)=\frac{1}{Z}_0^{\mathrm{}}𝑑\lambda _1_{\lambda _1}^{\mathrm{}}𝑑\lambda _2e^S;S=\frac{(gF^{})^2}{2F^{\prime \prime }}+F^0.$$ (7) 6) At this point, the integrals over $`\lambda _1`$ and $`\lambda _2`$ can be evaluated numerically. Instead, we use saddle point approximation to do the integrals in order to obtain an analytic expression for $`P(g)`$. Solving for $`\frac{S}{\lambda _i}=0`$ for $`i=1,2`$ to determine the saddlepoint values of $`\lambda _1`$ and $`\lambda _2`$, we obtain the distribution as a function of the conductance $`g`$, in terms of the parameter $`\mathrm{\Gamma }`$. In the above approach, if we set both $`\lambda _1`$ and $`\lambda _2`$ equal to zero, we obtain the correct universal value $`2/15\beta `$ for the variance of $`g`$. This is consistent with the picture that in the metallic regime, the eigenvalue density can be treated as a continuum from zero to an upper cut off $`b`$. As disorder is increased beyond the metallic regime, this picture starts to break down. In particular, the smallest eigenvalue is pushed to a finite distance from zero depending on the strength $`\mathrm{\Gamma }`$, so that the continuum picture at the edge no longer holds. The correction to the metallic behavior is captured in the present approach by evaluating the shifts in $`\lambda _1`$ and $`\lambda _2`$ within a variational scheme. Since the insulating regime is dominated by the smallest eigenvalue, this approach clearly captures the correct insulating behavior. In the crossover regime, both the separation of the smallest eigenvalue as well as the rest of the continuum become important. Note that if more accuracy is needed, one can in principle separate out more than one eigenvalue. We now give some details. From the exact solution of DMPK eqn., the two and one body terms in the Hamiltonian of (2) are known to be $$u(\lambda ,\lambda ^{})=\frac{1}{2}\mathrm{ln}|(\lambda \lambda ^{})(x^2(\lambda )x^2(\lambda ^{}))|;V(\lambda )=\frac{\mathrm{\Gamma }}{2}x^2(\lambda ),$$ (8) where $`x(\lambda )=\mathrm{sinh}^1\sqrt{\lambda }`$. Note that the difference $`\mathrm{\Delta }u=u(\lambda +\lambda _2,\lambda ^{}+\lambda _2)u(\lambda ,\lambda ^{})`$ is negligible in the insulating regime and is a small correction proportional to $`\lambda _2`$ in the metallic regime. Therefore to a first approximation, the shifted kernel $`u(\lambda +\lambda _2,\lambda ^{}+\lambda _2)`$ can be replaced by the unshifted kernel $`u(\lambda ,\lambda ^{})`$. One can then use the saddle point density obtained from the unshifted kernel to calculate the correction due to the change in the kernel from the shift. This can be rewritten as an additional term $`\lambda _2V_2+V_{tot}=V_{eff}`$ with an unshifted kernel in eq. (5). 
The unshifted kernel can then be inverted to give the saddle point density $$\rho _{sp}(\lambda +\lambda _2)=\frac{1}{\lambda (1+\lambda )}_{\mathrm{}}^{\mathrm{}}𝑑\lambda ^{}K^1(x(\lambda )x(\lambda ^{}))\frac{d}{d\lambda ^{}}V_{eff}(|\lambda ^{}|+\lambda _2),$$ (9) where the inverse of the unshifted kernel is $`K^1(t)=(1/2\pi ^2)_0^{\mathrm{}}𝑑q\mathrm{sin}(qt)(1e^{\pi q})`$. The condition $`\rho _{sp}(\lambda )0`$ for all $`\lambda `$ in eq. (9) requires $`\lambda _2\lambda _1>\lambda _c=(2/\mathrm{\Gamma }\pi )^2`$. The free energy can now be obtained from eq. (6). The integrals for $`F^{\prime \prime }`$ can be done exactly. In terms of the variables $`x_1`$, $`x_2`$, defined as $`\mathrm{sinh}^2x_i=\lambda _i,i=1,2`$, we get $$F^{\prime \prime }(x_2)=\frac{1}{\mathrm{sinh}^22x_2}\left[\frac{1}{3}+\frac{1}{4x_2^2}\frac{1}{\mathrm{sinh}^22x_2}\right].$$ (10) The integrals for $`F^{}`$ and $`F^0`$ can be evaluated analytically in two limits. For $`x_21`$, $$F^{}\mathrm{\Gamma }b_1\mathrm{\Gamma }x_2^2+\frac{32}{\pi ^3}\sqrt{x_2^2x_1^2};$$ $$F^0\frac{3\pi ^2}{8}\mathrm{\Gamma }^2x_2^22\pi \mathrm{\Gamma }\sqrt{x_2^2x_1^2}+\frac{3}{2}\mathrm{ln}(x_2^2x_1^2)\mathrm{ln}x_1,$$ (11) where $`b_10.89`$. In the other limit $`x_21`$, $$F^{}\frac{1}{\mathrm{cosh}^2x_1};F^0\mathrm{\Gamma }x_1^2\frac{1}{2}\mathrm{ln}(x_1\mathrm{sinh}2x_1)+\frac{1}{3}\mathrm{\Gamma }^2x_2^3\mathrm{\Gamma }x_2^2+x_2.$$ (12) In the metallic regime, $`\mathrm{\Gamma }1`$, and $`x_2`$ can be very small. Then eqs (10) and (11) give the correct mean conductance $`g=\mathrm{\Gamma }`$ and variance $`var(g)=1/15`$. When $`\mathrm{\Gamma }<1`$, $`\lambda _2\lambda _1>\lambda _c`$ requires $`x_21`$. In this case the limit $`x_11`$ corresponds to the insulating limit, but the limit $`x_11`$ corresponds to the intermediate case close to the crossover regime. We therefore study this regime within a saddle point approximation for the integrals (7). The condition $`\frac{S}{x_1}=0`$ has the solution $`\mathrm{cosh}x_1^{sp}=\frac{1}{\sqrt{g}}`$, while the condition $`\frac{S}{x_2}=0`$ has the solution $`x_2^{sp}=1/\mathrm{\Gamma }`$. This leads to the saddle point result $`S^{sp}`$, to which the contributions from the fluctuations $`S^{fl}=\mathrm{ln}|^2S/x_1^2|`$ have to be added, leading to eq. (1). In the deeply insulating regime $`\mathrm{\Gamma }1`$, the above expression leads to the known mean and variance $`\mathrm{ln}(1/g)=var[\mathrm{ln}(g)]/2=1/\mathrm{\Gamma }`$. However, note that since $`\mathrm{cosh}x_11`$, the saddle point solution exists only for $`g1`$. For $`g>1`$, the $`x_1`$ and $`x_2`$ integrals are dominated by the boundary values at $`x_1=0`$ and $`x_2=2/\pi \mathrm{\Gamma }`$, which has been incorporated in (1). In fig. 2 we show $`P(g)`$ as obtained from eq. (1) for $`\mathrm{\Gamma }=0.7`$. The skewed shape and the exponential drop at $`g1`$ compare well with numerical results of for a slightly smaller value of $`\mathrm{\Gamma }=0.5`$. The difference in the $`\mathrm{\Gamma }`$ values simulates to some extent the correction terms to eq. (1) expected for values of $`\mathrm{\Gamma }`$ approaching unity. Also shown in fig. 2 is the result of a numerical integration of eq. (7) using eq. (11) for $`\mathrm{\Gamma }=1.6`$ in the metallic regime. The Gaussian shape of $`P(g)`$ obtained for this rather small value of $`\mathrm{\Gamma }`$ is in very good agreement with the results of . 
We note that according to the relation $`g=\frac{1}{1+\lambda _i}`$, any non-negligible $`P(g1)`$ comes from the possibility that the smallest eigenvalue $`\lambda _1`$ can be close to the origin. However, given $`\lambda _11`$, the logarithmic repulsion between eigenvalues generated from (8) forces the rest of the eigenvalues exponentially far when $`\mathrm{\Gamma }1`$, so $`P(g>1)`$ is cutoff sharply. Since these arguments are quite general, we expect qualitatively similar features in higher dimensions as well, which should have important consequences for the universal conductance distribution in the critical regime. To summarize, we calculated the distribution of conductances $`P(g)`$ for a quasi 1d disordered system using known results for the DMPK equation. In this case $`P(g)`$ depends only on one parameter $`\mathrm{\Gamma }=\xi /L`$, where $`\xi `$ is the localization length. In the crossover regime $`\xi /L1`$, we find that $`P(g)`$ is given by a ‘one-sided’ log-normal distribution, cut off by a Gaussian tail on the metallic side ($`g>1`$). We believe that this behavior could be generic for $`P(g)`$ in the transition regime even in higher dimensions, provided the average of $`g`$ at the transition or crossover region is of order unity. Our results can not be directly compared to the work of in $`d=2+ϵ`$ dimensions, because for $`ϵ1`$, $`g=1/ϵ`$ is large, and the bulk of $`P(g)`$ is located deep in the metallic regime. As proposed in , the center of $`P(g)`$ is then Gaussian, with power law tails $`g^{2/ϵ}`$. The latter results are peculiar to the behavior in $`2+ϵ`$ dimensions. One should keep in mind that the DMPK approach does not contain the effects of wave function correlations in the transverse direction, which are expected to be important in higher dimensions. Nonetheless, the similarities of the shape of $`P(g)`$ in the crossover regime obtained here with the numerically determined $`P(g)`$ in 3d at the critical point appears to suggest that the generic behavior of $`P(g)`$ is that of a log-normal distribution for $`g<1`$ combined with a Gaussian cut off for $`g>1`$. We are grateful to A. Mirlin for stimulating discussions and bringing refs. and to our attention. We also thank M. Fogelstroem for his help regarding numerical evaluations. This work has been supported in part by SFB 195 der Deutschen Forschungsgemeinschaft. FIGURE CAPTIONS: Figure 1: Log-normal distribution of conductance $`P[ln(g)]`$ given by eq. (1) in the insulating regime for two strengths of disorder, $`\mathrm{\Gamma }=0.2`$ (dashed line) and $`\mathrm{\Gamma }=0.7`$ (solid line). Figure 2: Distribution of conductance $`P(g)`$ on the insulating and metallic sides of the crossover regime ($`\mathrm{\Gamma }1`$) for two strengths of disorder, $`\mathrm{\Gamma }=0.7`$ (solid line) and $`\mathrm{\Gamma }=1.6`$ (dashed line).
no-problem/9907/physics9907047.html
ar5iv
text
# Many-body and model-potential calculations of low-energy photoionization parameters for francium ## I Introduction Remarkable progress has been made recently in determining energies and lifetimes of low-lying states of the heaviest alkali-metal atom francium, motivated in part by the enhancement of parity non-conserving (PNC) effects in francium compared with other alkali-metal atoms. This experimental work has been accompanied by theoretical studies of properties of the francium atom , concerned mostly with energies and hyperfine constants of the ground and low-lying excited states or transitions between such states. In this work, we present two calculations of photoionization of francium for photon energies below 10 eV; the first is an ab-initio many-body calculation and the second is a model potential (MP) calculation. Experiments on photoionization of francium are planned for the Advanced Light Source at Berkeley. Ab-initio calculations of photoionization in alkali-metal atoms have proved to be a formidable challenge. Photoionization calculations in cesium based on the Dirac or Breit-Pauli equations accounted for the spin-orbit interaction, but not for shielding of the dipole operator by the core electrons or for core-polarization effects; whereas, relativistic calculations that included corrections from many-body perturbation theory (MBPT) at the level of the random-phase-approximation (RPA) accounted for the spin-orbit effects and for core shielding, but not core polarization. Predictions from these many-body calculations were in poor agreement with measurements of the Fano spin-polarization parameter $`P`$ by Heinzmann et al. , of the spin-polarization parameter $`Q`$ by Lubell et al. , and with the measurement of the angular-distribution asymmetry parameter $`\beta `$ by Yin and Elliott . The first quantitatively successful many-body calculation of photoionization of cesium was a relativistic many-body calculation that included both core polarization and core shielding corrections ; that method is applied to low-energy photoionization of francium in the present paper. Although successful many-body calculations of photoionization of heavy alkali-metal atoms are of recent vintage, nearly three decades ago, a number of increasingly sophisticated and successful model potential calculations of the photoionization of cesium appeared , culminating with that of Norcross . The latter calculation, which included the spin-orbit interaction, long-range polarization potentials, and shielding corrections to the dipole operator, gave quantitatively correct values for all of the measured photoionization parameters in cesium. A model potential similar to the one used in was developed recently to study transitions in francium and is used here to study low-energy photoionization in francium. Below, we sketch the important features of the theoretical methods. The photoionization cross sections for Fr $`7s`$ are calculated in Section II.B in both methods and are compared against one another. In Section II.E, we give results for the spin-polarization parameters and the angular distribution asymmetry parameter. Section III concludes our discussion of the francium photoionization. 
## II Theoretical Analysis & Discussion ### A Many-Body Perturbation Theory We start our many-body analysis from the Dirac-Hartree-Fock (DHF) $`V_{N1}`$ approximation, in which the DHF equations are solved self-consistently for core orbitals, and the valence orbitals are determined subsequently in the field of the “frozen core”. The total phase shift $`\overline{\delta }_\kappa `$ for a continuum state with angular quantum number $`\kappa `$ in the field of the core is a sum of rapidly varying Coulomb phase shift $`\delta _\kappa ^\mathrm{C}`$ and the short-range shift $`\delta _\kappa `$. The short-range DHF phase-shifts for $`p_{1/2}`$ and $`p_{3/2}`$ continuum wave functions are shown in Fig. 1. The $`p_{3/2}`$ wave function lags in phase compared to the $`p_{1/2}`$ wave function owing to the spin-orbit interaction, which is attractive for $`p_{1/2}`$ states and repulsive for $`p_{3/2}`$ states. The DHF approximation typically underestimates removal energies of bound electrons in heavy atoms such as francium by about 10%; similar accuracy is expected for phase shifts. To improve this level of accuracy one must take into account higher-order MBPT corrections. The clear advantage of the $`V_{N1}`$ approximation stems from the fact that one-body contributions to the residual Coulomb interaction vanish. This leads to a significant reduction in the number of terms in the order-by-order MBPT expansion. In particular, first-order corrections to the energy (or the phase shift) vanish and the perturbation expansion starts in second-order. The leading correlation contribution to the energy is the expectation value of the second-order self-energy operator $`\mathrm{\Sigma }^{(2)}`$, given diagrammatically by the Brueckner-Goldstone diagrams of Fig. 2. Solutions to the Dirac equation including the $`V_{N1}`$ potential and the self-energy operator are called Brueckner orbitals (BO). The non-local self-energy operator $`\mathrm{\Sigma }`$, in the limit of large $`r`$, describes the interaction of an electron with the induced electric moments of the core, $$\mathrm{\Sigma }(r,r^{},ϵ)\frac{\alpha _\mathrm{d}}{2r^4}\delta (rr^{}),$$ (1) where $`\alpha _\mathrm{d}`$ is the dipole polarizability of the core. We determine second-order correction to the phase shift perturbatively as $$\delta _\kappa ^{(2)}=\mathrm{sin}^1(\pi u_{ϵ\kappa }|\mathrm{\Sigma }^{(2)}|u_{ϵ\kappa }).$$ (2) Here, $`u_{ϵ\kappa }`$ is a continuum DHF wave function normalized on the energy scale. The resulting DHF+BO phase shifts are presented in Fig. 1. The attractive polarization potential draws in the nodes of the wave function, resulting in larger phase shifts. The change in the phase shift is approximately the same for both $`p_{3/2}`$ and $`p_{1/2}`$ continuum states, demonstrating that the self-energy correction is mainly due to the accumulation of phase outside of the core. ### B Model Potential The parametric model potential used in this work has the form $$V_{\mathrm{}}^{(j)}(r)=\frac{Z_\mathrm{}j(r)}{r}\frac{\alpha _d}{2r^4}[1e^{(r/r_c^{(j)})^6}],$$ (3) where $`\alpha _d`$ is the static dipole polarizability of the Fr<sup>+</sup> ionic core and the effective radial charge $`Z_\mathrm{}j(r)`$ is given by $$Z_\mathrm{}j(r)=1+(z1)e^{a_1^{(j)}r}+r(a_3^{(j)}+a_4^{(j)}r)e^{a_2^{(j)}r}.$$ (4) The angular momentum-dependent parameters, $`a_i^{(j)},i=1,\mathrm{},4`$ and the cut-off radius $`r_c^{(j)}`$ are obtained through a non-linear fit to one-electron Rydberg energy levels in francium . 
Because the spin-orbit effects are appreciable for heavy alkali metals, two separate nonlinear fits; one for each fine-structure series, $`j_+=\mathrm{}+\frac{1}{2}`$ and $`j_{}=\mathrm{}\frac{1}{2}`$ were performed. The static dipole polarizability was obtained from an extrapolation of the known core polarizabilities for the other alkali metals as $`\alpha _d(0)=23.2`$ a.u. . We note that an ab initio value for the francium core polarizability is now available. A comparison of short-range phase shifts calculated in the model-potential method and the MBPT is presented in Fig. 3. We find reasonable agreement between the two methods. The MP continuum wavefunctions are slightly lagging in phase compared to many-body wavefunctions. Such phase differences result in Cooper minima being shifted to higher photoelectron momentum in the model-potential calculation. ### C Quantum defects In quantum defect (QD) theory , the energy levels of the valence electron are described by a hydrogen-like Rydberg-Ritz formula $$ϵ_{n\kappa }=\frac{1}{2(n\mu _\kappa )^2}$$ (5) in terms of a quantum defect $`\mu _\kappa `$, which is represented as an expansion in powers of energy with constant coefficients $`\mu _\kappa ^{(i)}`$ $$\mu _\kappa =\mu _\kappa ^{(0)}+\mu _\kappa ^{(1)}ϵ_{n\kappa }+\mu _\kappa ^{(2)}(ϵ_{n\kappa })^2+\mathrm{}.$$ (6) The Rydberg-Ritz formula provides an accurate fitting expression for the bound spectrum of alkalis. The QD $`\mu _\kappa ^{(0)}`$ is related to threshold value of the phase shift as $`\mu _\kappa ^{(0)}=\delta _\kappa /\pi +\mathrm{integer}`$. The QD’s for Fr $`p`$ states are not known, since the relevant Rydberg series have not been observed experimentally. We use our ab-initio threshold phase shifts together with experimentally known energies for $`7p`$ and $`8p`$ states to predict QD’s; thereby approximating the entire Rydberg spectrum of Fr $`p`$ states. The predicted quantum defects are given in Table I. We assigned an error bar of $`0.5\%`$ to the threshold phase shift, based on the accuracy of an application of the many-body formalism employed here to the case of Cs . In Table I, we also present the MP values of quantum-defects obtained by fitting Rydberg series calculated with the potential in Eq. 3. We find generally good agreement for the leading order quantum-defect $`\mu _\kappa ^{(0)}`$, estimated in the two methods. Higher-order QD parameters, $`\mu _\kappa ^{(1)}`$ and $`\mu _\kappa ^{(2)}`$, calculated in the two methods do not agree well. This is due to the sensitivity of these parameters to the value of $`\mu _\kappa ^{(0)}`$. The values for $`\mu _\kappa ^{(0)}`$ obtained by fitting to the MP-calculated $`np`$ levels agree to four significant digits with the values, for $`\mu _\kappa ^{(0)}`$, extracted from the threshold phase shifts in Fig. 3. Using the calculated quantum defects, we predict energy levels for the lowest few $`np`$ states. Table II lists these energies and compares them with the present MP calculation and with a recent MBPT single-double (SD) calculation . The accuracy of our many-body calculation was estimated by exercising upper and lower bounds on $`\mu _\kappa ^{(0)}`$, and a consistent determination of $`\mu _\kappa ^{(1)}`$ and $`\mu _\kappa ^{(2)}`$ to fit $`7p`$ and $`8p`$ energies. MBPT results are in reasonable agreement with the MP calculations and SD predictions for these levels. 
### D Cross-section The total cross-section for photoionization of the valence electron $`v`$ is the sum of partial cross-sections $$\sigma =\underset{\kappa }{}\sigma _\kappa =\frac{4\pi ^2\alpha }{3}\omega \underset{\kappa }{}|D_\kappa |^2,$$ (7) where $`\omega `$ is the photon energy. The dipole transition amplitude for an ionization channel $`vϵ\kappa `$ is defined as $$D_\kappa =i^{l+1}e^{i\overline{\delta }_\kappa }u_{ϵ\kappa }𝐫u_v,$$ (8) where $`u_v`$ is the valence wave function and where the continuum wave function $`u_{ϵ\kappa }`$ is normalized on the energy scale. Here we have two ionization channels $`7sϵp_{1/2}`$, with $`\kappa =1`$ and $`7sϵp_{3/2}`$, with $`\kappa =2`$. The DHF results for the total cross-section are shown with dashed lines in Fig. 4. Since the DHF potential is non-local, the resulting amplitudes depend on the gauge of the electromagnetic field. The difference between length- and velocity-form values is especially noticeable in the near-threshold region. Second-order corrections, and the associated all-order sequence of random-phase approximation (RPA) diagrams, account for the shielding of the external field by the core electrons. Explicit expressions for the second-order MBPT corrections can be found, for example, in Ref. . Already in second order, the dipole operator with RPA corrections reduces at large $`r`$ to an effective one-particle operator $$𝐫_{\mathrm{eff}}=𝐫\left(1\frac{\alpha _\mathrm{d}(\omega )}{r^3}\right),$$ (9) where $`\alpha _\mathrm{d}(\omega )`$ is a dynamic polarizability of the core. The first term is associated with the applied electric field and the second with the field of the induced dipole moment of the atomic core; the valence electron responds to a sum of these two fields. We note that the induced field may become strong and reverse the direction of the total field. The RPA cross-section is presented with a thin solid line in Fig. 4. In contrast to DHF amplitudes, the RPA amplitudes are gauge-independent. Furthermore, we note the sudden upturn in the RPA cross-section for the photoelectron momenta $`p0.7`$ a.u. associated with a $`J=1`$ core excitation resonance. To predict the position of this resonance, we calculate the dynamic polarizability of Fr<sup>+</sup> within the framework of relativistic RPA, discussed in . The energy of the first resonance is at $`\omega _r=0.4024`$ a.u.. Using the DHF value of $`7s`$ threshold, 0.1311 a.u., we expect the first core excitation resonance to appear at $`p0.74`$ a.u.. The dynamic polarizability of Fr core $`\alpha _\mathrm{d}(\omega )`$ from this RPA calculation is plotted as a function of electron momentum $`p`$ in Fig. 5. To account for core-polarization corrections to the DHF wave function, discussed in the introduction, we evaluate the second-order corrections to the DHF wave functions of the valence electron due to the self-energy operator $`\mathrm{\Sigma }^{(2)}`$ $$u_v^{(2)}=\underset{iv}{}\frac{\mathrm{\Sigma }_{iv}^{(2)}}{ϵ_vϵ_i}u_i.$$ (10) The resulting orbital $`u_v+u_v^{(2)}`$ is the perturbative approximation to the valence-state Brueckner orbitals (BO). Approximate Brueckner orbitals for a continuum state ($`ϵ\kappa `$) are found by solving the inhomogeneous Dirac equation $$\left(h+V_{N1}ϵ\right)w_{ϵ\kappa }=\left(\pi \mathrm{sin}\delta _\kappa \mathrm{\Sigma }^{(2)}\right)u_{ϵ\kappa }$$ (11) normalized on the energy scale, where $`\delta _\kappa `$ is given in Eq. (2). 
Brueckner orbitals for the $`7s`$ valence state and a $`p_{1/2}`$ continuum state are compared with unperturbed DHF orbitals in Fig. 6. The BO corrections contribute to transition amplitudes starting from third order. Together with the RPA corrections, they provide the most important third-order contributions for bound-bound transitions, as discussed in . In the present approach, we modify the conventional RPA scheme by replacing the valence and continuum wave functions by the approximate Brueckner orbitals described above (RPA$``$BO). Such a modification accounts for the important second- and third-order correlation corrections and for a subset of fourth-order contributions to transition amplitudes. We note that this fourth-order subset brings the photoionization parameters in cesium into good agreement with available experimental data; therefore, we believe that this approach will provide reliable predictions for francium. The resulting cross section is shown with a heavy solid line in Fig. 4, and decomposed into partial cross-sections in Fig. 7. Calculations using length and velocity forms of transition operator lead to slightly different result in the modified RPA$``$BO scheme; we present the final result in the length form only. Both photoionization channels exhibit Cooper minima; $`\sigma _{p_{1/2}}`$ vanishes at $`p0.1`$ a.u. and $`\sigma _{p_{3/2}}`$ vanishes at $`p0.5`$ a.u.. Combining the two partial cross sections, leads to a broad minimum in the total cross-section slightly below $`p=0.45`$ a.u.. The total cross section in Fig. 7 is not very sensitive to the positions of Cooper minima in the $`p_{1/2}`$ and $`p_{3/2}`$ channels. Conversely, the spin-polarization and angular distribution measurements, discussed in the following section, provide information sensitive to fine details of individual transition amplitudes. Fig. 8 examines the total photoionization cross sections for francium, calculated in the two method. The label ”static” refers to the set of MP results with the core static dipole polarizability in Eq. 9. The shielding of the electron dipole operator is truncated in the MP calculations by introducing a cut-off term, similar to the exponential term in the one-electron potential in Eq. 3. The threshold cross sections in the $`p_{1/2}`$ and $`p_{3/2}`$ channels (not shown here) are, respectively, 1.74 and 0.02 Mb. Cooper minima appear in both channels at approximately, $`p0.15`$ a.u. and $`p0.75`$ a.u. and the maximum cross section in the $`p_{1/2}`$ channel is $`\sigma _{p_{1/2}}(\mathrm{max})0.2`$ Mb. The Cooper minimum in the $`p_{3/2}`$ photoelectron cross section calculated in the MP method with the static core dipole polarizability, occurs approximately where the first core resonance in Fig. 5 becomes excited. By including the dynamic core polarizability $`\alpha _d(\omega )`$ in the MP calculations, the curve labeled as ”dynamic” in Fig. 8 is obtained. The Cooper minima are moved to lower photoelectron momenta, resulting in a shallow minimum in the total cross section near $`p0.5`$ a.u. The comparison in Fig. 8 indicates that the cross sections calculated in the MP method are in general larger than the MBPT cross sections. The “MP dynamic” and the MBPT cross sections both rise for the photoelectron momenta $`p>0.5`$ a.u. to meet the first core-excited resonance near $`p0.75`$ a.u. ### E Polarization parameters Fano proposed a measurement of spin polarization $`P`$ of photoelectrons emitted from unpolarized Cs atoms illuminated by circularly polarized photons. 
The total spin polarization is expressed in terms of $`p_{1/2}`$ and $`p_{3/2}`$ transition amplitudes as <sup>*</sup><sup>*</sup>* There is a phase difference in the $`D_{1/2}D_{3/2}^{}`$ interference term in Eqs. 12,14 and the corresponding equations in Ref. , caused by the unconventional definition of reduced matrix elements used in that work. $$P=\frac{5|D_{3/2}|^22|D_{1/2}|^2+4\sqrt{2}\mathrm{}[D_{1/2}D_{3/2}^{}]}{6(|D_{3/2}|^2+|D_{1/2}|^2)}.$$ (12) The result of our RPA$``$BO calculation of the spin-polarization parameter $`P`$ is presented in Fig. 9, where it is seen that the polarization reaches 100% at momentum $`p0.3`$ a.u. The model-potential results for $`P`$ are also given in Fig. 9 and compare well with the RPA$``$BO calculation in Fig. 9. Maximum spin polarization in the MP method occurs at $`p0.35`$ a.u. The calculations with the static and dynamic core polarizabilities in Eq. 9 are similar and differ only after the maximum is reached. Lubell and Raith measured a different spin-polarization parameter $`Q`$ obtained from photoionization of polarized Cs atoms by a circularly polarized light. In the Lubell-Raith setup, the $`p_{3/2}`$ channel can be accessed individually, for example, by photoionization with left-circularly polarized light of the $`7s`$, electron prepared in the $`m_s=+\frac{1}{2}`$ substate. Combining the partial cross section $`\sigma _{p_{3/2}}`$ thereby obtained with the total cross-section, permits one to deduce the partial cross-section for the $`p_{1/2}`$ channel. The Lubell-Raith parameter $`Q`$ is defined as the ratio of the difference to the total of the photoabsorption intensities for two photon helicities $$Q=\frac{I_+I_{}}{I_++I_{}}=\frac{|D_{3/2}|^22|D_{1/2}|^2}{2(|D_{3/2}|^2+|D_{1/2}|^2)}.$$ (13) The limiting values for the Lubell-Raith parameter are $`1Q\frac{1}{2}`$. We stress that a measurement of $`Q`$ or of the (phase-insensitive) parameter $`P`$, together with a measurement of the total cross-section permits one to obtain information about absolute values of transition amplitudes. A further measurement of the phase-sensitive angular-distribution parameter $`\beta `$ , $$\beta =\frac{|D_{3/2}|^22\sqrt{2}\mathrm{}\left[D_{1/2}D_{3/2}^{}\right]}{|D_{3/2}|^2+|D_{1/2}|^2},$$ (14) would permit one to determine the relative phase between the $`p_{3/2}`$ and $`p_{1/2}`$ continuum amplitudes and would constitute an essentially complete description of the photoionization process. The many-body result for $`\beta `$ is shown in Fig. 9. The differential cross section is proportional to $`1\frac{1}{2}\beta P_2(\mathrm{cos}\theta )`$. The MP results for $`Q`$ and the asymmetry parameter $`\beta `$ are given also in Fig. 9. The comparison between MP and MBPT results is generally favorable; the results with the core dynamics polarizability in Eq. 9 are in better qualitative agreement with the MBPT calculations. We note that near $`p0.45`$ a.u., the photoelectron has the propensity to be ionized perpendicular to the photon polarization axis and near $`p0.6`$ a.u., the photoelectron is preferentially ejected in the $`j=\frac{1}{2}`$ channel, where $`Q1`$. A similar situation is evident from MBPT results at $`p0.5`$. The other limiting value is reached near threshold, where $`\sigma _{p_{1/2}}0`$. ## III Conclusion We have calculated the photoionization cross sections of the ground-state francium. 
Both many-body and model-potential approaches were employed to obtain the cross sections, quantum defects, spin-polarization parameters and photoelectron asymmetry parameter. We find Cooper minima in both $`p_{1/2}`$ and $`p_{3/2}`$ channels. The comparison between the MBPT and MP results are satisfactory. The Cooper minima predicted in the MP calculations are at higher photoelectron energies than those calculated in the MBPT method. The origin of this difference can be traced to the shielding of the valence-electron dipole by the core electrons. The induced dipole moment of the core manifests itself as a dynamic polarizability term. Upon replacing the static core polarizability with the dynamic polarizability, better quantitative agreement with the MBPT results is observed. We predict the energy-dependence of the photoelectron spin-polarization and asymmetry parameters which we hope will stimulate further experimental work in francium. ## Acknowledgments The work of AD and WRJ was supported in part by NSF Grant No. PHY 99-70666. HRS is supported by a grant by NSF to the Institute for Theoretical Atomic and Molecular Physics. The authors owe a debt of gratitude to Harvey Gould for describing his proposed measurement of $`\sigma `$ and $`Q`$ for francium.
no-problem/9907/hep-th9907122.html
ar5iv
text
# Untitled Document Figure 1. The $`\widehat{\theta }=1`$ region.
no-problem/9907/chao-dyn9907013.html
ar5iv
text
# A Robust Method for Detecting Interdependences: Application to Intracranially Recorded EEG ## 1 Introduction During the last years the analysis of synchronization phenomena received increasing attention. Such phenomena occur in nearly all sciences, including physics, astrophysics, chemistry, and even economy. Probably the most important applications are in biology and medical sciences. In living systems, synchronization is often essential in normal functioning, while abnormal synchronization can lead to severe disorders. Typical examples are from neurosciences, where synchronization under normal conditions seems to be essential for the binding problem , whereas epilepsies are related to abnormally strong synchronization. Synchronization can manifest itself in different ways. At one extreme are coupled identical deterministic chaotic systems, which can synchronize perfectly: once the coupling exceeds a critical value, both systems move along identical orbits . If the coupled systems are not identical, in general, they can not move along identical orbits. If they are both chaotic and noise-free, a strict relationship can still exist, provided the coupling is sufficiently strong. Let us denote by $`X=(x_1,\mathrm{},x_N)`$ and $`Y=(y_1,\mathrm{},y_N)`$ two time sequences from which state vectors $`𝐱_n`$ and $`𝐲_n`$ can be reconstructed, e.g., as delay vectors. Let us also assume that one of the systems, say $`X`$, is driving the other. By this we mean that the evolution of $`𝐱_n`$ is autonomous, while $`𝐲_{n+1}`$ is a function of $`𝐲_n,𝐱_n`$, and probably of some external noise . If there is no noise, and if the driving is non-singular, $`𝐲_{n+1}=𝐅(𝐱_n,𝐲_n)`$ with $`det(F_i/x_{nk})0`$, this relationship can always be inverted (at least locally) and can be written as $`𝐱_n=𝐆(𝐲_n,𝐲_{n+1})`$ or, after eventually increasing the embedding dimension of $`Y`$, as $`𝐱_n=\mathrm{\Phi }(𝐲_n)`$ . The opposite relation (probably with some time shift $`k`$) $$𝐲_n=\mathrm{\Psi }(𝐱_{nk})$$ (1) is not guaranteed, although it looks a priori more natural in view of the fact that $`X`$ is assumed to drive $`Y`$. If eq.$`(1)`$ holds for some finite $`k`$, i.e. if the state of the driven system is a unique function of the driver‘s state, this is referred to as ‘generalized synchronization’ . Here two cases have to be distinguished: strong generalized synchronization corresponds to smooth functions $`\mathrm{\Psi }`$, while weak generalized synchronization can lead to functions which may even be nowhere continuous . In the latter case it might be difficult to detect synchronization by observing $`X`$ and $`Y`$, while it is immediately seen when comparing two realizations $`Y^{(a)}`$ and $`Y^{(b)}`$ of the response system: if both are unique functions of the same $`X`$, then obviously $`Y^{(a)}=Y^{(b)}`$, i.e. they synchronize perfectly. Notice that this notion of ‘generalized synchronization’ is closer to the notion of interdependence, rather than to a mere time shift generating temporal coincidences (this is what the word synchronization actually means). If a softening of the concept of synchronization is accepted in this way, this ‘generalized synchronization’ is clearly not yet the weakest and most general form of synchronization. The weakest form is given just when $`X`$ and $`Y`$, considered as stochastic processes, are not independent. The problem of finding weak effects of synchronization is thus equivalent to find weak interdependences. 
This is particularly true for a system as complex as e.g., the brain, where the question wether eq.$`(1)`$ holds might be meaningless. Driver/response asymmetries, as mentioned in the above example, are indeed quite common also in stochastic systems. Distinguishing the driver from the responder is of course one of the central goals, particularly in medicine where it is of utmost importance to detect causal relationships. Unfortunately, no general method exists to detect such relationships unambiguously. Even if $`Y`$ follows the motion of $`X`$ with a time delay as in eq.$`(1)`$, so that $`Y`$ hardly could drive $`X`$, this does not proof that $`X`$ drives $`Y`$. Both systems might be driven by an unobserved third system $`Z`$. In particular, eq.$`(1)`$ by itself does not imply that $`X`$ drives $`Y`$. This is obvious in cases where $`\mathrm{\Psi }`$ is bijective, i.e. where also $`\mathrm{\Psi }^1()`$ is unique. If $`\mathrm{\Psi }`$ is not bijective (which, as we have seen, actually happens if $`Y`$ drives $`X`$ but fails to synchronize it), then, in general, there are several states of $`X`$ which map onto a single state of $`Y`$. This will typically happen if the state space of $`X`$ is larger than that of $`Y`$. For practical applications where strict equality cannot be observed but only closeness, this means that $`X`$ has a larger attractor dimension (i.e. more effective degrees of freedom) than $`Y`$. But this does not imply any causal relationship. Typical observables used for detecting interdependences and synchronization are mutual information and cross correlations. Closely related to cross correlations are cross spectra. The main disadvantage of the latter two is that they measure only linear dependences. Causal relationships can (with the above caveats) be tested using time delays, i.e. by comparing $`x_my_n`$ with $`x_ny_m`$. Mutual information is sensitive to all kinds of dependencies (it is zero only if $`X`$ and $`Y`$ are strictly independent), but its estimation imposes quite substantial requirements on the amount and quality of the data. In particular, if the suspected optimal embedding dimension is high, these requirements might be hard to meet. Finally, cross correlations and mutual information are symmetric in $`X`$ and $`Y`$, so that causal relationships can be detected only if they are associated with time delays. A priori, causal relationships might exist without detectable delays and, as we have pointed out, there might exist delays which do not reflect the naively expected causal relationship. A new class of asymmetric interdependence measures which might overcome some of these limitations has been proposed recently . These authors have assumed that a deterministic relationship as in eq.$`(1)`$ exists, and have therefore not optimized their observables so as to detect reliably weak interdependences in a noisy environment. Moreover, they assumed that eq.$`(1)`$ automatically implies a causal relationship. That this is not unproblematic was discussed above. It is also seen from the fact that the authors of and drew exactly the opposite conclusions from mutual predictabilities of $`X`$ and $`Y`$. Equation $`(1)`$ was interpreted in as indicating that $`Y`$ is the driver and $`X`$ the response, and that $`Y`$ can be better predicted from $`X`$ than vice versa. The opposite interpretation — namely that the response can be better predicted from the driver — was given in . Nevertheless, these observables have been applied successfully to neurophysiological problems . 
In the present paper we present another interdependence measure following closely references . But we do not assume eq.$`(1)`$ and we try to make our definition such as to be most robust. Our observable, together with several alternatives, is defined in the next section. Applications to EEG signals recorded from electrodes implanted under the skull of patients suffering from severe epilepsies are presented in Sec.3, while our conclusions are drawn in Sec.4. ## 2 Outline of the Method Let $`X=(x_1,x_2,\mathrm{},x_N)`$ and $`Y=(y_1,y_2,\mathrm{},y_N)`$ denote two different simultaneously observed time sequences. Typically, they will be measurements of different observables of the same complex system, or measurements taken at different positions of a spatially extended system. The internal dynamics of the system is not known. In particular, it is not known whether the system is deterministic or stochastic, but we are mostly interested in cases where the latter is more likely a priori, or where it is at least unlikely that the attractor dynamics is so low that methods developed specifically for chaotic deterministic systems would be applicable. Physical time is related to the index $`n`$ of $`x_n`$, respectively $`y_n`$ by $`t=t_0+ϵn`$. Time-delay embedding in an $`m`$-dimensional phase-space leads to phase-space vectors $`𝐱_n=(x_n,\mathrm{},x_{n(m1)\tau })`$ and $`𝐲_n=(y_n,\mathrm{},y_{n(m1)\tau })`$. The delay $`\tau `$ can be chosen as 1, but for oversampled sequences it might be useful to use some integer $`\tau >1`$. To simplify notation, we assume that also values $`x_{2m},\mathrm{},x_0`$ and $`y_{2m},\mathrm{},y_0`$ are given, so that all delay vectors with index $`1nN`$ can be formed, and the time sequences of delay vectors have $`N`$ elements each. The arrays of all delay vectors will be denoted $`𝐗=(𝐱_1,\mathrm{},𝐱_N)`$ and $`𝐘=(𝐲_1,\mathrm{},𝐲_N)`$. Let $`r_{n,j}`$ and $`s_{n,j}`$, $`j=1,\mathrm{},k`$ denote the time indices of the $`k`$ nearest neighbours of $`𝐱_n`$ and $`𝐲_n`$, respectively. Thus, the first neighbour distances from $`𝐱_n`$ are $`d(𝐗)_n^{(1)}𝐱_n𝐱_{r_{n,1}}=\mathrm{min}_q𝐱_n𝐱_q`$, $`d(𝐗)_n^{(2)}𝐱_n𝐱_{r_{n,2}}=\mathrm{min}_{qr_{n,1}}𝐱_n𝐱_q`$, etc., where $`𝐱𝐱^{}`$ is the Euclidean distance in delay space, and similar for $`𝐲_n`$. For each $`𝐱_n`$, the squared mean Euclidean distance to its $`k`$ closest neighbours is defined as $$R_n^{(k)}(𝐗)=\frac{1}{k}\underset{j=1}{\overset{k}{}}\left(𝐱_n𝐱_{r_{n,j}}\right)^2$$ (2) while the conditional mean squared Euclidean distance, conditioned on the closest neighbour times in the time series $`𝐘`$, is $$R_n^{(k)}(𝐗|𝐘)=\frac{1}{k}\underset{j=1}{\overset{k}{}}\left(𝐱_n𝐱_{s_{n,j}}\right)^2.$$ (3) Notice that the only difference between these two is that we used the ‘wrong’ time indices for the neighbours in eq.(3). Instead of summing over nearest neighbours, we sum over those points whose equal time partners are nearest neighbours of $`𝐲_n`$. Similarly we define $$R_n^{(k)}(𝐘)=\frac{1}{k}\underset{j=1}{\overset{k}{}}\left(𝐲_n𝐲_{s_{n,j}}\right)^2$$ (4) and $$R_n^{(k)}(𝐘|𝐗)=\frac{1}{k}\underset{j=1}{\overset{k}{}}\left(𝐲_n𝐲_{r_{n,j}}\right)^2.$$ (5) If the point cloud $`\{𝐱_n\}`$ has average squared radius $`R(𝐗)=R^{(N1)}(𝐗)`$ and effective dimension $`D`$ (for a stochastic time series embedded in $`m`$ dimensions, $`D=m`$), then $`R_n^{(k)}(𝐗)/R(𝐗)(k/N)^{2/D}1`$ for $`kN`$. The same is true for $`R_n^{(k)}(𝐗|𝐘)`$ if $`𝐗`$ and $`𝐘`$ are perfectly correlated, i.e. if there is a smooth mapping $`𝐱_n=\mathrm{\Psi }(𝐲_n)`$. 
On the other hand, if $`𝐗`$ and $`𝐘`$ are completely independent, then $`R_n^{(k)}(𝐗|𝐘)R_n^{(k)}(𝐗)`$. Accordingly, we introduce local and global interdependence measures $`S_n^{(k)}(𝐗|𝐘)`$ and $`S^{(k)}(𝐗|𝐘)`$ as $$S_n^{(k)}(𝐗|𝐘)\frac{R_n^{(k)}(𝐗)}{R_n^{(k)}(𝐗|𝐘)}$$ (6) and $$S^{(k)}(𝐗|𝐘)\frac{1}{N}\underset{n=1}{\overset{N}{}}S_n^{(k)}(𝐗|𝐘)=\frac{1}{N}\underset{n=1}{\overset{N}{}}\frac{R_n^{(k)}(𝐗)}{R_n^{(k)}(𝐗|𝐘)}.$$ (7) Since $`R_n^{(k)}(𝐗|𝐘)R_n^{(k)}(𝐗)`$ by construction, we have $$0<S^{(k)}(𝐗|𝐘)1.$$ (8) If $`S^{(k)}(𝐗|𝐘)(k/N)^{2/D}1`$, then obviously $`𝐗`$ and $`𝐘`$ are independent within the limits of accuracy. If, however, $`S^{(k)}(𝐗|𝐘)(k/N)^{2/D}`$, we say that $`𝐗`$ depends on $`𝐘`$, thereby without implying any causal relationship. This dependence becomes maximal when $`S^{(k)}(𝐗|𝐘)1`$. The opposite dependences $`S_n^{(k)}(𝐘|𝐗)`$ and $`S^{(k)}(𝐘|𝐗)`$ are defined in complete analogy. They are in general not equal to $`S_n^{(k)}(𝐗|𝐘)`$ and $`S^{(k)}(𝐗|𝐘)`$. Both $`S^{(k)}(𝐗|𝐘)`$ and $`S^{(k)}(𝐘|𝐗)`$ may be of order 1. Therefore $`𝐗`$ can depend on $`𝐘`$, and at the same time can $`𝐘`$ depend on $`𝐗`$. If $`S^{(k)}(𝐗|𝐘)>S^{(k)}(𝐘|𝐗)`$, i.e. if $`𝐗`$ depends more on $`𝐘`$ than vice versa, we say that $`𝐘`$ is more “active” than $`𝐗`$. Again we do not imply this to have any causal meaning, a priori. An important question is whether an active/passive relationship, as defined in this way, has a causal driver/response interpretation in certain circumstances. In order to understand the origin of active/passive relationships, we consider the simple case where both time sequences are identical, $`X=Y`$, but we use different embedding dimensions $`m_X`$ and $`m_Y`$ in the delay vector construction. More precisely, we take $`m_X<m_Y`$ and $`m_X<m_{\mathrm{opt}}`$, where $`m_{\mathrm{opt}}`$ is an optimal embedding dimension in the sense that for $`m<m_{\mathrm{opt}}`$ the point cloud $`\{𝐱_n\}`$ is not completely unfolded, while it is unfolded for $`mm_{\mathrm{opt}}`$. Thus each $`𝐱_n`$ can be considered as a singular projection of $`𝐲_n`$, $`𝐱_n=\mathrm{\Psi }(𝐲_n)`$ with non-unique inverse $`\mathrm{\Psi }^1`$. Assume now that $`𝐲_s`$ is a close neighbour of $`𝐲_n`$. Then also $`𝐱_s`$ must be a close neighbour of $`𝐱_n`$. But the opposite is not true: Closeness in $`𝐱`$ space does not imply closeness in $`𝐲`$ space. Therefore, conditioning on times $`s`$ where $`𝐲_s`$ are close neighbours of $`𝐲_n`$ has less effect for neighbours of $`𝐱_n`$ than vice versa, and $`S^{(k)}(𝐗|𝐘)>S^{(k)}(𝐘|𝐗)`$. Although this is not a mathematically rigorous argument, it shows clearly that the active/passive relationship, as defined above, mainly reflects the relative number of degrees of freedom and not a driver/response relationship. Systems with many degrees of freedom (high dimensional “attractors”) are more active than those with few. Notice, however, that $`S^{(k)}`$ is sensitive only to those degrees of freedom which are excited with amplitudes of order $`R^{(k)}`$. The latter depends, among others, on $`k`$ and on $`N`$. The tendency of (weakly) coupled systems to have degrees of freedom which are excited with very small amplitudes is well known . It often leads to wrong estimates of attractor dimensions, and it can make the observable active/passive relationship to depend on parameters such as $`k`$ and $`N`$ . It might be responsible for the contradictory results of . Before leaving this section, we point out several possible generalizations and alternatives. 
(a) Using the same Euclidean distance to define neighbours and in the sums in eqs.(2)-(5) is not necessary. Instead of the geometrical distance, in eqs.(2)-(5) we could have used any other dissimilarity measure between $`𝐱_n`$ resp. $`𝐲_n`$ and the point clouds $`\{𝐱_{r_{n,j}}\}`$ etc.. If we would have used forecasting errors in local forecasts based on these clouds, we would have arrived at interdependence measures very similar to those of . In , also ‘zero time step’ forecasting was studied. This is most closely related to our observables, but it uses only the distance between $`𝐱_n`$ and the center of mass of the point cloud $`\{𝐱_{s_{n,j}},j=1,\mathrm{}k\}`$, while we use all distances $`|𝐱_n𝐱_{s_{n,j}}|`$ individually. It is clear that the latter contains more information, and should therefore be more sensitive. (b) Instead of using arithmetic averages as in eqs.(2)-(5) and (7), we could have used geometric or harmonic averages. And we could have replaced the average of ratios in eq.(7) by a ratio of (arithmetic, geometric, or harmonic) averages. Again this could severely change sensitivity and robustness. We have not made an exhaustive test of all alternatives, but we checked that the above definitions are more robust than several alternatives. For instance, replacing eq.(7) by $$S^{(k)}(𝐗|𝐘)^{}\left[\frac{1}{N}\underset{n=1}{\overset{N}{}}\frac{R_n^{(k)}(𝐗|𝐘)}{R_n^{(k)}(𝐗)}\right]^1$$ (9) gave much more noisy results in the applications discussed in the next section which were also much harder to interpret physiologically. This is easily understood. In $`S^{}`$, occasional very small values of $`R_n^{(k)}(𝐗)`$ have much more influence than in $`S`$. Such small values are obtained if $`𝐱_n`$ depends abnormally weakly on $`Y`$, which might arise from some perturbation acting at time $`n`$. Thus $`S`$ is more robust against shot noise than $`S^{}`$. We found similar results when using harmonic averages in eqs.(2)-(5). The main difference between the present paper and is that these authors were interested in the case of noiseless deterministic attractors and strong interdependences where these considerations play no rôle, and they therefore did not try do find the most robust observable. Also, they dicussed only the case $`k=1`$. This gives the strongest signal, but it is also much stronger affected by noise than $`k>1`$. In the following applications we used $`k=10`$ which seemed to give the best signal to noise ratio (see below). (c) In eq.(6) we essentially compare the $`𝐘`$-conditioned mean squared distances to the mean squared nearest neighbour distances. Instead of this, we could have compared the former to the mean squared distances to random points, $`R_n(𝐗)=(N1)^1_{jn}(𝐱_n𝐱_j)^2`$. Also, let us use the geometrical average in the analogon of eq.(7), and define $$H^{(k)}(𝐗|𝐘)=\frac{1}{N}\underset{n=1}{\overset{N}{}}\mathrm{log}\frac{R_n(𝐗)}{R_n^{(k)}(𝐗|𝐘)}$$ (10) This is zero if $`𝐗`$ and $`𝐘`$ are completely independent, while it is positive if nearness in $`𝐘`$ implies also nearness in $`𝐗`$ for equal time partners. It would be negative if close pairs in $`𝐘`$ correspond mainly to distant pairs in $`𝐗`$. This is very unlikely but not impossible. Therefore, $`H^{(k)}(𝐗|𝐘)=0`$ suggests that $`𝐗`$ and $`𝐘`$ are independent, but does not prove it. This (and the asymmetry under the exchange $`𝐗𝐘`$) is the main difference between $`H^{(k)}(𝐗|𝐘)`$ and mutual information. The latter is strictly positive whenever $`𝐗`$ and $`𝐘`$ are not completely independent. 
As a consequence, mutual information is quadratic in the correlation $`P(𝐗,𝐘)P(𝐗)P(𝐘)`$ for weak correlations ($`P`$ are here probability distributions), while $`H^{(k)}(𝐗|𝐘)`$ is linear. This might make $`H^{(k)}(𝐗|𝐘)`$ useful in applications. (d) Instead of eq.(3) we could have defined the time shifted generalization $$R_n^{(k)}(𝐗|𝐘,l)=\frac{1}{k}\underset{j=1}{\overset{k}{}}\left(𝐱_n𝐱_{s_{n+l,j}}\right)^2,$$ (11) with some (positive or negative) integer $`l`$. The idea behind this definition is that it is not clear a priori that $`𝐱_n`$ is most closely related to the simultaneous vector $`𝐲_n`$. Rather, if there are some time delays in generating either $`x_n`$ or $`y_n`$, the ‘natural’ partner of $`𝐱_n`$ might be $`𝐲_{n+l}`$. In this way we can introduce a further element of asymmetry which could give additional hints on causal relationships. (e) Up to now, we have assumed in general that we use the same embedding for $`𝐗`$ and for $`𝐘`$. This is not necessary, and we could have used a different embedding dimension $`m`$ and a different delay $`\tau `$ for $`𝐘`$. We did not follow this path since $`𝐗`$ and $`𝐘`$ had similar characteristics in examples studied in the next section. But it is worth while to point out that we can use our interdependence measure for pairs of time series with completely different characteristics (amplitudes, spectra, etc.). Dependence does not imply similarity in any sense! (f) Instead of the Euclidean distance we could have used any other distance in defining neighbourhoods, e.g. the maximum norm. ## 3 Application ### 3.1 Data Acquisition We analyzed electroencephalographic signals (EEG) that were recorded in patients suffering from pharmacoresistant focal epilepsies. In these patients freedom of seizures can be obtained by resecting the part of the brain responsible for seizure generation. Taking such sort of data is mandatory as part of the presurgical analysis. The sensoring electrodes are left in the brain for typically 2 to 3 weeks. During this time the patients are also watched by video, so that EEG activity can be matched with behavior, and seizures can be identified from either. The analyses reported here were made after surgery had taken place, and after it had become clear from its success whether the localization of the epileptic focus had been correctly predicted. EEG was recorded from electrodes implanted under the skull, hence close to the epileptic focus and with high signal-to-noise ratio. In particular, we used two types of electrodes: rectangular flexible grids of $`8\times 8`$ contacts placed onto the cortex, and pairs of needle shaped depth electrodes with 10 contacts each, implanted into deeper structures of the brain (see fig. 1). EEG signals were sampled at 173 Hz using a 12 bit analog-to-digital (A/D) converter and filtered within a frequency band of 0.53 to 40 Hz. The cutoff frequency of the lowpass filter was selected to suppress possible contamination by the power line. For more details on the data and recording techniques, see and references given therein. The data sets analyzed in this study had a duration of 10 minutes each (cut out from much longer sequences) and were divided into segments of $`T`$ seconds each. Neighbours were searched only within the same segment. ### 3.2 Parameter Selection As is well known, details of the delay embedding such as choice of embedding dimension $`m`$ and delay $`\tau `$ can be very important. In principle, the theorems of Takens and Sauer et al. 
state that results should not depend on them if data are noiseless and $`N`$ is arbitrarily large, but reality tells different. Many methods have been proposed to find ”optimal” parameter values. However, appropriate choices of $`m`$ and $`\tau `$ strongly depend on specific aspects of the problem at hand (such as noise level, type of noise, intermittency, stationarity, etc.). Thus general recipes which do not take into account these factors can be misleading. This holds true in particular for estimates of $`m`$ based on false nearest neighbours . One of the most popular recipes for determining the optimal delay $`\tau `$ is based on minimizing the mutual information in a two-dimensional embedding. But in general the same $`\tau `$ does not minimize the mutual information in an embedding $`m`$ $`3`$ dimensions . The same comment applies to estimates of $`\tau `$ from the first zero of the autocorrelation function. Therefore we used none of these a priori estimates of “optimal” embedding parameters in this study. Instead, we approached the problem empirically by calculating $`S^{(k)}(𝐗|𝐘)`$ and $`S^{(k)}(𝐘|𝐗)`$ for different values of $`m`$, $`\tau `$, $`T`$, and $`k`$. In addition, we applied also a Theiler correction by restricting the nearest neighbour times $`r_{n,j}`$ and $`s_{n,j}`$ to $`|nr_{n,j}|\tau _{\mathrm{Theiler}}`$ and $`|ns_{n,j}|\tau _{\mathrm{Theiler}}`$, and tested several values for $`\tau _{\mathrm{Theiler}}`$. It is of course not feasible to make a systematic search for all possible combinations of these parameters, but we feel sure that our final choices are reasonable and not too far from the optimum. We made these optimizations out of sample, i.e. we used a well understood ‘training’ data set where we could judge the reasonability of our observables by comparing with the medical diagnosis. This training set was not used as test set in any of the subsequent analyses. The “optimal” parameters are $`m=10`$ (embedding dimension), $`\tau =5`$ (delay in units of sampling time), $`k=10`$ (neighborhood size), $`T=10`$ (segment length in seconds), and $`\tau _{\mathrm{Theiler}}=10`$. Indeed, somewhat better results were in some cases obtained with larger $`k`$ (up to $`k=100`$), but we stuck to the above because it was faster without too much loss of significance. The delay $`\tau =5`$ was implemented by simply decimating the time sequences, thereby reducing effectively the sampling rate from 173 Hz to 34.6 Hz. Thus, each segment contained 346 delay vectors. ### 3.3 Data Representation #### 3.3.1 Depth Electrodes From the 20 time sequences recorded via the depth electrodes 400 combinations have to be analyzed. Results can be arranged into a $`20\times 20`$ interdependence matrix $`S_{ij}=S^{(k)}(𝐗_i|𝐗_j)`$. We present our results graphically by means of encoding each pixel in a $`20\times 20`$ array using a grey scale. Pixel $`(i,j)`$ is black if $`S_{ij}=1`$ ($`𝐗_i`$ and $`𝐗_j`$ are identical; this happens on the diagonal), while it is white if $`S_{ij}=0`$. The numbering of channels and their arrangement in the matrix are explained in fig. 2. Quadrants I and IV represent interdependences between signals from the same (left resp. right) hemisphere, while quadrants II and III show interdependences between different hemispheres. More precisely, if a pixel $`(i,j)`$ in quadrant II is darker than its partner $`(j,i)`$ in quadrant III, the region around contact $`i`$ in the right hemisphere is more active than the region around contact $`j`$ in the left hemisphere. 
Of particular interest are also average values of $`S_{ij}`$, i.e. averaged over a region symmetric under reflection along the diagonal. The average darkness of such a region is a direct measure of its average interdependences with other parts of the brain involved. A typical example of a grey scale pattern is shown in fig. 3 exhibiting two regions of high interdependence in both the left hemisphere and the right hemisphere. In this case the depth electrodes were not placed in a completely symmetrical fashion. While the electrode in the left hemisphere had 4 contacts in the entorhinal cortex and 6 contacts in the hippocampus, the right electrode had 3 contacts in the entorhinal cortex and 7 in the hippocampus. This difference (confirmed by MRI images) is clearly seen in fig. 3. In addition, there is a stronger interdependence between entorhinal cortex and hippocampus on the left than on the right side, and the left hippocampus can be assumed to be more active than the right one. Interpretations of the latter will be given in sec. 3.4.1. #### 3.3.2 Grid Electrodes Since grid electrodes consisted of 64 contacts, it is not very practical to represent the data in the same way as for the depth electrodes. In addition, labeling the contacts by means of a single index will result in a loss of all neighbourhood information, and the patterns would be hard to interpret. A different representation is obtained by displaying each contact as a plaquette of an $`8\times 8`$ matrix, and indicating the activity patterns by arrows connecting these plaquettes . But also such a picture (which is optimal for a small number of electrodes) is too much packed with information for our present applications to be useful. We proceeded differently. We first averaged all 60 matrices obtained by cutting the 10 minutes recording into intervals of 10 seconds. The resulting time-averaged interdependences are called $`\overline{S^{(k)}(𝐗_{i_1,i_2}|𝐗_{j_1,j_2})}`$ where $`(i_1,i_2)`$ and $`(j_1,j_2)`$ are the coordinates of the contacts. We next perform a ranking of all entries in the $`64\times 64`$ matrix except the elements on the diagonal. Using the highest one percent of entries after ranking and taking the lower end as a cutoff $`S_c`$, we define for each contact $`(i_1,i_2)`$ an average activity $$A_{i_1,i_2}=\underset{j_1,j_2}{}\overline{S^{(k)}(𝐗_{j_1,j_2}|𝐗_{i_1,i_2})}\mathrm{\Theta }(\overline{S^{(k)}(𝐗_{j_1,j_2}|𝐗_{i_1,i_2})}S_c)$$ (12) and an average passivity $$P_{i_1,i_2}=\underset{j_1,j_2}{}\overline{S^{(k)}(𝐗_{i_1,i_2}|𝐗_{j_1,j_2})}\mathrm{\Theta }(\overline{S^{(k)}(𝐗_{i_1,i_2}|𝐗_{j_1,j_2})}S_c).$$ (13) The cutoff $`S_c`$ is introduced in order to eliminate the effect of contact pairs with very weak interdependence. For these pairs, $`\overline{S^{(k)}(𝐗_i|𝐗_j)}`$ is dominated by noise, and including them would mainly decrease the signal-to-noise ratio. Using the coordinates $`i_1`$ and $`i_2`$ we can finally represent $`A_{i_1,i_2}`$ and $`P_{i_1,i_2}`$ as $`8\times 8`$ grey scale matrices. Alternatively, we can add them and represent the sum $`A_{i_1,i_2}+P_{i_1,i_2}`$ as a grey scale matrix. An example is given in fig. 4 exhibiting a region with very strong interdependence near the lower right corner. Its interpretation will be given in the next section. ### 3.4 Results Our results are illustrated by three examples covering lateralization of the focal brain side, precise focus localization in neocortical epilepsies, and changes of interdependences before an impending seizure. 
These examples are quite typical. A more systematic study involving statistically significant samples is under way and will be presented elsewhere. #### 3.4.1 First Example We analyzed 10 minutes of an interictal (seizure-free interval) EEG of a patient suffering from a so called mesial temporal lobe epilepsy. The clinical workup suggested the epileptic focus to be located in the left hemisphere of the brain. We divided the EEG data set into 60 nonoverlapping consecutive 10 seconds segments and calculated a 20 x 20 $`S`$-matrix for each segment as described above. One of these matrices was already shown in fig. 3. This figure is typical for all 60 matrices in showing more interdependences in the left hemisphere than in the right. This concerns both interdependences within the hippocampus, and between hippocampus and adjacent cortex. Indeed, surgery on the left side resulted in complete seizure control of this patient. This suggests that our proposed measure might be able to lateralize the focal side of the brain. #### 3.4.2 Second Example We analyzed 10 minutes of interictal EEG data from a patient suffering from a neocortical lesional epilepsy. In this case an $`8\times 8`$ grid electrode was implanted covering the underlying brain lesion. Again the data set was subdivided as in example one. A typical activity-passivity matrix obtained by means of the procedure described in sec. 3.3.2 is shown in fig. 4. As already pointed out in sec. 3.3.2, we observed highest interdependences in regions near the lower right corner. Indeed, the patient was operated on exactly in this region (which had been identified during presurgical evaluation) and is now free of seizures. #### 3.4.3 Third example In contrast to the afore mentioned examples, where we used only EEG recordings from a seizure free interval and averaged the data over time, we now study $`S`$ as a function of time. Our time resolution is again $`T=10`$ sec. Of particular interest are changes of $`S`$ before an impending seizure, as this could finally lead to its prediction .<sup>1</sup><sup>1</sup>1The results of use a vague definition of the interictal period and might therefore be questionable. But also changes during seizures and during the postictal (after-seizure) period are of interest. A sequence of interdependence patterns taken before, during and after a seizure is shown in fig. 5. The pattern of interdependences within the right hemisphere remains almost constant, even during the course of the seizure. On the other hand, $`S`$-values of the left hemisphere change dramatically. As confirmed by successful surgery, the left hemisphere was the focal side in this case. During the preictal stage, $`S`$ decreases from a high initial level to almost zero. Notice, that $`S`$ is very low also in quadrant II directly before seizure onset, indicating that the left hemisphere is much less active. In frame #13, shortly before the onset of the seizure, interdependence builds up again on the left side. It reaches its maximum during the seizure and finally declines towards the interictal level. This coincides with findings of Lehnertz and Elger who found reduced complexity before an impending seizure. Notice that “activity” according to our definition essentially depends on the number of excited degrees of freedom, which is exactly what was measured in . The loss of activity before the seizure onset can be interpreted as a more or less hidden pathological synchronization phenomenon. 
It is assumed that seizure activity will be induced when a ”critical mass” of neurons is progressively involved in closely time-linked high-frequency discharging. This critical mass might be reached if the preceeding level of synchronization decreases, enabling neurons to establish a synchronization which is high enough to finally lead to seizure activity. At first sight it may therefore seem paradoxical that interdependences decrease before a seizure. But this might indeed be exactly what happens. In a healthy brain a critical mass is never reached because neurons are strongly tied into networks where they communicate with others. A critical stage may be reached when a large population is “idle” and therefore on the one hand uncorrelated with the rest of the network, but on the other hand, easily recruitable for subsequent coherent pathophysiological activity. ## 4 Discussion We have presented an observable which can detect dependences between simultaneously measured time sequences. It is similar to other synchronization measures proposed recently, but is somewhat simpler and more robust. With the other measures it shares the property of being asymmetric. In principle, it can be assumed that our measure can indicate causal relationships. This might be useful identifying the driver of the two subsystems emitting the sequences. We claim that such information might be obtainable in principle, but the interpretation is subtle and naive arguments can be quite misleading. Nevertheless, this asymmetry is very interesting. It mainly depends on the difference in ‘activity’ which measures the effective number of excited degrees of freedom. This effective number of active degrees of freedom depends on the scales to which the observable is most sensitive. In principle, in an asymmetric driver-response pair the attractor dimension of the response is always at least as high as that of the driver (if both are deterministic), but this might be relevant only at length scales which are too small to be resolved practically. Our measure could also be used to detect generalized synchronization, but we do not assume in our applications that the signals are chaotic with low dimensions. In contrast to recent attempts to detect phase synchronization in brain signals, our measure does not treat phase information different from amplitude information, and thus we cannot discuss phase or frequency locking. We applied our measure to intracranial multichannel EEG recordings taken from patients suffering from severe epilepsies. We found significant dependences between different recording sites, and these dependences were in general not symmetric. Due to the careful pre-operational screening of these patients and their observation after being operated, we could compare our results in detail with other neurophysiological findings. The most interesting preliminary results are the following: 1) During seizure-free intervals, the seizure generating area of the brain exhibited higher interdependences than other brain areas. 2) Some seizures analyzed here were preceeded by short periods (30 s to several minutes) during which extremely low dependences were confined to the seizure generating area. Although these results are very encouraging, a more systematic study is needed and is under way. In addition, a host of further investigations is imaginable. Obvious candidates are the influences of drugs, the effect of mental activity (epilepsy patients behave normal even with implanted electrodes), or of various stimuli. 
Another important problem is the determination of the ‘critical mass’ of neurons needed to trigger a seizure. Moreover, a more systematic comparison with other diagnostic tools is necessary beforehand. Finally, the present findings already suggest a number of physiological results whose interpretation demands a thorough theoretical study. For instance, it is a priori not clear whether a seizure is primarily triggered by a change of activity in the seizure generating area, or a change of susceptibility of the surrounding regions. We hope that the near future will show progress along these lines. Acknowledgements We thank J. Müller-Gerking, R. Quian Quiroga, T. Schreiber, and W. Burr for inspiring discussions and helpful comments during the study. Figure captions: Fig. 1: Schematic view of the two types of intracranial electrodes used in this paper. Grids were placed onto the cortex and have $`8\times 8`$ electrodes. Needle-shaped depth electrodes have ten contacts each and were always used pairwise in a left-right symmetrical fashion. In some cases, depth electrodes and grids were used together. Fig. 2: Scheme of subdivision of the 20x20 matrix $`S_{ij}`$. The indices $`L_1`$ to $`L_{10}`$ denote the contacts on the left depth electrode, from innermost ($`L_1`$) to outermost ($`L_{10}`$). Similarly, $`R_1`$ to $`R_{10}`$ correspond to the right depth electrode. The index $`i`$ runs horizontally, while $`j`$ runs vertically. E.g., quadrant II shows the effect of conditioning right hemispheric channels on the channels from the left hemisphere. Fig. 3: Example of a 20x20 $`S`$-matrix of a 10 sec segment recorded during the seizure-free interval using 10 depth electrodes on each side of the brain. Fig. 4: (A) Average activity pattern in an $`8\times 8`$ grid electrode; (B) average passivity and (C) normalized sum of both. Fig. 5: Sequence of interdependence patterns $`S_{ij}`$ including preictal (1-14), ictal (15-16) and postictal (17-20) brain electrical activity.
# The Steep Spectrum Quasar PG1404+226 with ASCA, HST and ROSAT ## 1 Introduction Narrow Line Active Galactic Nuclei form a distinct class of AGN on the basis of the properties of their optical/UV spectrum: full width at half maximum (FWHM) of the hydrogen lines and other lines in the range 500-2000 km s<sup>-1</sup>, intense high ionization lines, and intense FeII multiplets (Osterbrock & Pogge 1985; Shuder & Osterbrock 1981). The weakest of these AGN, the Narrow Line Seyfert 1 (NLS1), and the somewhat brighter AGN with absolute optical luminosity above but close to the lowest limit for quasars of $`M_v`$ = -23.4 have been extensively studied in the X-ray range (e.g. Laor, Fiore, Elvis et al., 1994; Boller, Brandt & Fink, 1996; Laor et al., 1997). Among all AGN, the Narrow Line AGN tend to have the steepest soft X-ray spectra (ROSAT), and some of them show fast, large amplitude soft X-ray variability with occasionally giant outbursts (e.g. Grupe 1996 and references therein; Boller et al 1996). In the harder 2-10 keV band a comparative study of a large sample of NLS1 and broad line Seyfert 1s revealed that the 2-10 keV ASCA spectral slopes of NLS1 are significantly steeper than those of broad line AGN (Brandt, Mathur & Elvis 1997). Recent BeppoSAX observations of a selected sample of bright NLS1 (Comastri, Fiore, Guainazzi et al. 1998; Comastri, Brandt, Leighly et al. 1998) over the broad 0.1-10 keV range indicate that a two component model provides an adequate description of the X–ray continuum. The relative strengths and slopes of the two components are different from those of broad line Seyfert 1s. The NLS1 are characterized by a stronger soft excess and, in general, have a steeper medium energy X-ray power law with respect to normal Seyfert 1s, but in PG 1404+226 the medium energy spectral index is not very different from that of classical Seyfert 1s while its soft excess is strong, very steep and rapidly variable like in other NLS1. The spectral behaviour of NLS1 suggests that the soft X-ray flux cannot be due only to disk reprocessing unless there is highly anisotropic emission. In the framework of the thermal models for the X-ray emission in Seyfert 1s (Haardt & Maraschi 1993) a strong soft component could lead to a strong Compton cooling of the hot corona electrons and to a steep hard tail. This hypothesis is also supported by the similarities between NLS1 spectra and those of Galactic black hole candidates in their high states, first suggested by Pounds, Done and Osborne (1995) and Czerny et al. (1996). The high states of Galactic black hole candidates are thought to be triggered by increases in the accretion rate, possibly reaching values close to the Eddington limit. In this case the disk surface is expected to be highly ionized, in good agreement with the observation of a H-like Fe edge in TonS180 (Comastri et al. 1998a). It should also be noted that a high $`L/L_{Edd}`$ ratio would be consistent with the narrowness of the optical lines in NLS1 if the optical line producing region is virialized, as suggested by Laor et al. (1997). The quasar PG 1404+226 ($`z`$ = 0.098, V = 15) is one of the brightest members of the class of Narrow Line AGN. The optical spectrum of PG 1404+226 displays the characteristics of NLS1 with FWHM (H$`\beta `$) $`\sim `$ 830 km s<sup>-1</sup> and strong Fe II emission (Boroson & Green 1992).
The observations with the ROSAT PSPC ($`\sim `$ 0.1–2.0 keV) showed a very steep spectrum ($`\mathrm{\Gamma }\sim 3`$) with rapid flux (a factor 2 in 10 hours) and spectral variability typical of NLS1 in the X-ray range, and revealed a complex absorption feature around 0.8-1.0 keV (Ulrich & Molendi 1996, hereafter UM96). Time resolved spectral analysis showed the data to be consistent with a shift of the absorption feature to higher energy when the source brightens (UM96). We have obtained a 40 ks ASCA observation of PG 1404+226 in order to investigate the X-ray spectrum over a larger energy range and at higher energy resolution than was possible with ROSAT. At the time of the ASCA observations the absorption feature was around 1 keV, at an energy definitely higher than that of the OVII and OVIII edges at 0.74 and 0.87 keV commonly seen in ASCA and ROSAT spectra of AGN (e.g. Reynolds 1997). Preliminary results can be found in Comastri, Molendi & Ulrich (1997, hereafter CMU97). The ASCA and ROSAT data have been fit separately since they have not been obtained at the same epoch. The challenge presented by PG 1404+226 is the identification of the absorption at $`1.1`$ keV. Brandt et al (1994) found a similar feature at 1.15 keV in Ark 564 and considered several interpretations: Neon edge, iron L edges, and outflowing material which would raise the energy edge of OVIII (0.870 keV) to the observed energy. They find none of them to be satisfactory: the neon edge because there is only a narrow range of ionization parameter where it would be stronger than the OVIII edge; the iron L edges because they would produce absorption at a somewhat higher energy (1.358 keV); and the outflow seems unlikely because the source of its kinetic energy is unclear. Otani et al (1996) found a feature at $`1`$ keV in IRAS 13224-3809 for which they suggest an interpretation in terms of outflow, or alternatively, considering that the X-ray flux of IRAS 13224-3809 varied by a factor 50 in two days, that “the ionization state of the medium is far from equilibrium due to the violent variability”. Krolik & Kriss (1995) drew attention to the fact that “because the ionization timescales of some ions may be as long as the variability timescales in AGNs, the ionic abundances indicated by the transmission spectra may not be well described by ionization equilibrium”. This point has recently been re-investigated by Nicastro et al. (1999a), who also pointed out that (i) recombination times can be longer than photoionization times, resulting in gaseous absorbers which are over-ionized with respect to the equilibrium ionization state, and (ii) collisional ionization could be of comparable importance to photoionization. Hayashida (1997) found an edge near 1 keV in H0707-495 for which he also suggests an identification with a blueshifted OVIII absorption edge. For PG 1404+226, CMU97 suggested an overabundance of iron, an interpretation which has the advantage of linking the origin of the 1 keV absorption to the intense FeII lines present in the optical/UV of this quasar and also of IRAS 13224-3809 and Ark 564. The interpretation where the absorption near $`1`$ keV originates in hot gas outflowing at velocities in the range 0.2 - 0.5c has also been proposed by Leighly et al (1997). This paper presents an analysis of the ASCA observations, a re-analysis of the ROSAT spectra with our warm absorber (WA) models calculated with CLOUDY (Ferland 1993), and an analysis of HST spectra which we have obtained in order to search for UV absorption/emission lines.
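The outflow interpretation can be made quantitative with a one-line estimate. For an absorber moving towards us, the longitudinal relativistic Doppler formula $`E_{obs}/E_0=\sqrt{(1+\beta )/(1-\beta )}`$ relates the rest-frame edge energy $`E_0`$ to the energy at which it is seen. The sketch below (a minimal calculation of our own, taking the ASCA best-fit edge energy of 1.07 keV quoted below to be in the quasar frame) inverts this relation.

```python
import math

def outflow_beta(e_obs, e_rest):
    """Velocity (in units of c) needed to blueshift an absorption
    edge from e_rest to e_obs via the longitudinal Doppler effect."""
    r = (e_obs / e_rest) ** 2          # r = (1 + beta) / (1 - beta)
    return (r - 1.0) / (r + 1.0)

# OVIII edge (0.87 keV) seen at the ASCA best-fit energy of 1.07 keV:
print(outflow_beta(1.07, 0.87))        # ~0.20, i.e. an outflow of ~0.2c
```

The required speed, $`\beta \approx 0.2`$, sits at the lower end of the 0.2 - 0.5c range quoted above.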
## 2 Re-analysis of the ROSAT data The data reduction was carried out in a standard manner and, as in UM96, the data were split into ‘high-state’ and ‘low-state’ depending on count rate. The analysis was performed with our warm absorber models based on CLOUDY. The warm absorber is assumed to be of constant density, of solar abundances (if not mentioned otherwise), and to be illuminated by the continuum of the central point-like energy source. The spectral energy distribution from the radio to the gamma-ray region consists of piecewise power laws with, in particular, an energy index in the EUV, $`\alpha `$<sub>uv,x</sub>, of 1.4 and an X-ray photon index $`\mathrm{\Gamma }`$<sub>x</sub> of 1.9. A black-body-like soft excess is added in some models. The column density N<sub>WA</sub> of the warm material (i.e. the total column density in Hydrogen) and the ionization parameter U are determined from X-ray spectral fits (Table 1). U is a measure of the number rate of ionizing photons above the Lyman limit and is defined by $$U=Q/(4\pi r^2n_\mathrm{H}c)$$ (1) where $`Q`$ is the number rate of incident photons above the Lyman limit, $`r`$ is the distance between central source and absorber, $`c`$ is the speed of light, and $`n_\mathrm{H}`$ is the hydrogen density (fixed to 10<sup>9.5</sup> cm<sup>-3</sup> unless noted otherwise). As detailed below, several models give acceptable fits to the low-state spectrum but none gives a really good fit to the high-state spectrum (which has a higher signal/noise ratio).
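Equation (1) ties a fitted ionization parameter to an absorber distance once $`Q`$ is specified. The sketch below inverts it for $`r`$; the adopted photon rate $`Q`$ is an illustrative assumption only (the true value follows from the spectral energy distribution), while the density is the value fixed above and the ionization parameter is the high-state value found below for model e.

```python
import math

C = 2.998e10  # speed of light [cm/s]

def absorber_distance(q_phot, u_ion, n_h):
    """Distance r [cm] from Eq. (1): U = Q / (4 pi r^2 n_H c)."""
    return math.sqrt(q_phot / (4.0 * math.pi * u_ion * n_h * C))

# Assumed ionizing photon rate Q ~ 1e54 s^-1 (illustrative only),
# n_H = 10^9.5 cm^-3 and log U = 0.6 as for model e (high state):
r = absorber_distance(1e54, 10**0.6, 10**9.5)
print(f"r ~ {r:.1e} cm")   # ~1.5e16 cm for these numbers
```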
### 2.1 Results with a simple power law A simple power law plus cold absorption gives a very poor fit. The cold absorbing column generally tends to slightly underpredict the Galactic value. This may reflect some low energy calibration uncertainties, or the presence of an additional spectral component in the form of a very soft excess. Adding a soft excess parameterized as a black body which contributes only at very low energies (e.g. in NGC4051, Komossa & Fink 1997; Ton S180, Fink et al. 1997) improves the fit which, however, remains unsatisfactory. ### 2.2 Results of the Spectral fits with Warm absorber models We have examined a family of models (labelled a,b,..e; Table 1) which include a warm absorber plus a power law component with a photon index $`\mathrm{\Gamma }`$<sub>x</sub> fixed at 1.9. #### 2.2.1 Model a and b: solar abundances Model a: The inclusion of a ‘standard’ warm absorber model clearly improves the fit but does not yield an acceptable $`\chi ^2`$, with systematic residuals remaining near 1 keV. The reason is that, even for very high ionization parameters, the model has a strong oxygen absorption at 0.87 keV, even stronger than the neon absorptions (NeVII at $`\sim `$ 1.1 keV and NeX at $`\sim `$ 1.36 keV). This is exacerbated at high-state because the high-state data require absorption around the location of the neon edges to dominate. (The same results hold for a steeper underlying power law.) Model b: The addition of the emission and reflection components of the warm material, calculated for a covering factor of 0.5, results in a slightly improved fit, but the high-state data are still not well matched. #### 2.2.2 Model c: non-solar abundances One way to make the neon absorption dominate over the oxygen absorption is to introduce deviations from solar abundances, with overabundant neon, or underabundant oxygen. Several deviation factors were studied, ranging from an oxygen abundance of O = 0.2 x solar to a neon abundance of Ne = 4 x solar. These models strongly improve the quality of the fit and the values of $`\chi ^2`$ reach acceptable values (for the ROSAT spectra). The best fit has an overabundance of neon of $`4`$ times the solar value. We note that while non-solar abundances have been reported in a number of AGN/quasars (e.g. Hamann & Ferland 1993, Netzer 1997) the ratio O/Ne is expected to remain close to its solar value in all known astrophysical situations (on the other hand, Netzer occasionally depletes oxygen only in his photoionisation models, e.g. Marshall et al. 1993). In any event, the model with overabundance of neon is not compatible with the ASCA data (Section 3.2.2). #### 2.2.3 Model d and e: additional soft excess (and solar abundances) Model d: Motivated by the ASCA evidence for the presence of a soft excess, a sequence of models was calculated with an additional hot BB component of kT = 0.1 keV. This component was included in the ionizing spectral energy distribution that illuminates the warm absorber, i.e. the change in ionization structure of the warm material was self-consistently calculated. A successful description of both high- and low-state data is possible with solar abundances. Note that two-component models without an edge, specifically power law + blackbody and power law + bremsstrahlung, were found to provide satisfactory fits of the ROSAT data at low state. For the high state data the power law + blackbody model gives $`\chi ^2`$ = 67/20 (UM96). Model e: U and N<sub>WA</sub> have been treated above as free parameters, and the fits tend to give slightly different column densities in low- and high-state data, whereas this quantity is not expected to vary within short time-scales (10 hours or less). We therefore have searched for the best fit model (model e) to the high- and low-state data in which the column density is identical in both states and only the ionization parameter U and the strength of the BB component are allowed to vary between states. We find a successful model with log(N<sub>WA</sub>) = 23.1, log(U<sub>high</sub>) = 0.6, log(U<sub>low</sub>) = 0.4, and a contribution of the blackbody to the ionization parameter of log(U<sub>BB</sub>) = log(U<sub>PL</sub>) - 0.9 in the high-state data and log(U<sub>BB</sub>) = log(U<sub>PL</sub>) - 1.5 in the low-state data (Fig. 1; U<sub>BB</sub> and U<sub>PL</sub> are the ionization parameters related to the blackbody component and power law component, respectively; see Eq. (1) for the definition of U). ## 3 ASCA Observations and Spectral Analysis ### 3.1 Observations PG 1404+226 was observed by ASCA (Tanaka, Inoue & Holt 1994) on July 13-14, 1994 with the Gas Imaging Spectrometers (GIS2/GIS3) for a total effective exposure time of 35000 s, and with the Solid–state Imaging Spectrometers (SIS0/SIS1) for about 29000 s. The SIS was operating in 1-CCD mode and all the data were collected in Faint mode. Standard criteria for the good selection intervals have been applied (i.e. cut-off rigidity $`>`$ 7 for GIS and $`>`$ 6 for SIS, minimum elevation angle from the earth $`>`$ 5 degrees and minimum bright earth angle $`>`$ 25 degrees for SIS) as well as DFE and echo corrections. The background subtracted count rates for PG 1404+226 are 0.049, 0.038, 0.014, 0.017 c/s in S0, S1, S2, S3 respectively. Large flux variations have been detected. In panels a and b of Fig. 2 we show the soft (0.5–2.0 keV) and hard (2.0–10 keV) SIS0 light curves. The SIS0 light curve in the soft band (Fig.
2a) shows a factor $`\sim `$ 4 of amplitude variability with a doubling timescale of $`\sim `$ 8000 s. The variability pattern in the two bands is well correlated, suggesting that the hard and soft fluxes varied in the same way. A detailed spectrally resolved temporal analysis is hampered by the low counting statistics, especially at high ($`>`$ 2 keV) energies. In order to have some indications on the spectral variability we have performed a hardness ratio analysis for the high (first part of the observation) and the low (second part of the observation) state shown in Figure 2. The hardness ratio (Fig. 2c) has been defined as HR = (H-S)/(H+S) where H and S are the counts in the 2–10 and 0.5–2 keV bands respectively. The results for the high and low state are $`HR_{Highstate}=-0.79\pm 0.06`$ and $`HR_{Lowstate}=-0.75\pm 0.04`$, suggesting that the spectrum is rather soft in both states. The 2–10 keV flux is rather low and as a consequence both the 2-10 keV light curve and the hardness ratio light curve are noisy, with no clear evidence of spectral variability, in contrast to the spectral change during the ROSAT observations. The spectral analysis has thus been performed on the spectrum accumulated over the entire duration of the ASCA observation. The average luminosity is comparable with the ROSAT low state but the absorption feature is at high energy as during the ROSAT high state. ### 3.2 Spectral Analysis GIS and SIS spectra were binned with more than 20 counts/bin in the 0.7–10 keV and 0.5–10 keV energy ranges respectively. Since the spectral parameters obtained by fitting the four detectors separately were all consistent within the errors, the data were fitted simultaneously to the same model. Given the very low Galactic column density toward PG 1404+226 ($`2\times 10^{20}`$ cm<sup>-2</sup>; Elvis, Lockman & Wilkes 1989) and the low sensitivity of ASCA detectors to small column densities, all the spectral fits have been performed with $`N_H=N_{HGal}`$. This choice is also corroborated by the ROSAT results (UM96). A single power law model clearly does not provide an acceptable fit (Table 2). The power law fits reported in Table 2 suggest that at least two components are required to fit the overall spectrum. We note that the slope of the hard power law, which could not have been detected by ROSAT, is consistent with the average quasar slope (Comastri et al. 1992) while the slope below 2 keV is consistent with the UM96 findings. In order to model the broad band (0.4–10 keV) spectrum we tried 4 types of double component fits (see Table 2 for details): a double power law, bremsstrahlung plus power law, blackbody plus power law, and cut-off blackbody plus power law (the cut-off is of the form exp$`[-(E-E_c)/E_f]`$ for energies larger than the cutoff energy $`E_c\simeq 0.94`$ keV; the depth of the cut-off is related to $`E_f`$, a small value corresponding to a very steep decline of the intensity; here $`E_f\simeq 0.05`$ keV). The first two models give unacceptable fits. The only two acceptable descriptions of the data are the cut-off blackbody plus power law (with a cut-off at 0.94 keV) and the blackbody plus power law plus absorption edge model around 1 keV. The addition of the absorption edge to the simple blackbody plus power law model produces an improvement in the fit which is significant at $`>99.99\%`$ (F-test) and the residuals are featureless, in agreement with the results of Leighly et al (1997).
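The edge model referred to above multiplies the continuum by the standard photoabsorption-edge transmission factor. A minimal sketch of such a blackbody plus power law plus edge model follows; the optical depth, normalizations and slope are placeholders to be set by a fit, and the $`(E/E_{edge})^{-3}`$ scaling is the usual approximation for the bound-free cross section above threshold (only the edge energy and kT are taken from the fits quoted in this paper).

```python
import numpy as np

def edge_transmission(e_kev, e_edge, tau):
    """exp[-tau (E/E_edge)^-3] above the edge, 1 below it."""
    t = np.ones_like(e_kev)
    above = e_kev >= e_edge
    t[above] = np.exp(-tau * (e_kev[above] / e_edge) ** -3)
    return t

def model(e_kev, kT, norm_bb, gamma, norm_pl, e_edge, tau):
    """Blackbody + power law photon spectrum, absorbed by an edge.
    Shapes only; the normalizations are arbitrary placeholders."""
    bb = norm_bb * e_kev**2 / (np.exp(e_kev / kT) - 1.0)   # photon BB
    pl = norm_pl * e_kev**-gamma
    return (bb + pl) * edge_transmission(e_kev, e_edge, tau)

e = np.linspace(0.5, 10.0, 500)
f = model(e, kT=0.14, norm_bb=1.0, gamma=2.0, norm_pl=0.01,
          e_edge=1.07, tau=0.5)   # kT ~ 140 eV, edge at 1.07 keV
```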
#### 3.2.1 Is iron overabundant? As absorption edges are the signature of warm absorbing material, we fit the warm absorber model available in XSPEC 9.0 (the model is ‘absori’ and the iron abundance is a free parameter; it was developed by P. Magdziarz & A. Zdziarski following Done et al 1992 and Zdziarski, Ghisellini, George et al. 1990). Given the relatively large number of free parameters in this model we have fixed the temperature of the warm material at $`T=10^5`$ K (see for example Reynolds & Fabian 1995; the fits are not sensitive to the temperature, a change of logT from 4.5 to 6 would make no significant difference) and the iron abundance at the solar value. In addition we have considered only the 0.5–3.0 keV energy range in order to avoid contamination from the high energy component. We were not able to find any acceptable solution, as can be judged from figure 3. Leaving the iron abundance free to vary, the improvement is significant at $`>99.9`$% (F-test) and the residuals featureless (Fig. 4). The resulting parameters are: $`\xi =3000_{-1000}^{+500}`$ erg cm s<sup>-1</sup>, $`N_H=8_{-1}^{+9}\times 10^{20}`$ cm<sup>-2</sup>, iron abundance $`>`$ 25 solar and power law photon slope of $`3.55\pm 0.12`$. (All the quoted errors are at 90% confidence for one interesting parameter, $`\chi _{min}^2`$ + 2.7). The shape of the residuals around 6–7 keV is suggestive of iron line emission (Fig. 5). With the addition of a narrow line the improvement in the fit is, however, significant only at the 2 $`\sigma `$ level. The derived parameters are $`E_{K\alpha }=6.5\pm 0.3`$ keV and EW = $`1290\pm 680`$ eV. It is interesting to note that the huge equivalent width of the iron line is qualitatively consistent (see figure 3 in Reynolds, Fabian & Inoue, 1995) with the supersolar iron abundance found by fitting the data with the warm absorber model. #### 3.2.2 Non-solar abundances of Neon or Oxygen? We have also attempted to fit some warm absorber models (CLOUDY based, and with blackbody + power law continuum) to the ASCA spectrum and have found no satisfactory fit - the reason being that the edge-like feature at 1 keV is too deep and too narrow for these models. #### 3.2.3 Resonant Absorption? It has been recently shown that the spectra emerging from ionized gas are strongly modified by resonant absorption if the dispersion velocity in the gas is of the order of 100 km s<sup>-1</sup> or greater (Nicastro, Fiore, Matt 1999b). The X–ray warm absorber features are also strongly dependent on the shape of the ionizing continuum. For soft X–ray spectra as steep as the one observed for PG 1404+226 several resonance absorption lines from Fe L, Mg, Si and S are predicted between 1 and 2 keV. Such a blend of lines would appear as negative residuals in low resolution ASCA spectra. The $`\sim `$ 1 keV absorption feature of PG 1404+226 could be, at least in part, accounted for by resonant absorption. Signatures of this process have been looked for in the HST UV spectrum (see below). #### 3.2.4 Is the $`1`$ keV absorption the blueshifted OVIII edge? Finally, we have also considered the possibility that the absorption at 1.07 keV is the blueshifted OVIII edge whose rest frame energy is 0.87 keV. We have calculated the optical depth expected for the Neon edges at rest energies 1.1 and 1.36 keV, which would be blueshifted to 1.29 and 1.55 keV respectively. For the 1.1 keV edge the instrumental upper limit on $`\tau `$ is 0.14 (90% confidence level) and is 0.27 for the 1.36 keV edge.
We note that with a ‘standard’ warm absorber in relativistic outflow the OVIII edge is never the “only” edge in the spectrum. If the absorber is lowly ionized, there is OVII co-existing, and if it is very highly ionized, OVII becomes negligible, but one cannot avoid having some Neon absorption. Although the exact ratios of the optical depths $`\tau `$ depend on the input parameters, we thought it would be useful to check whether we could detect any other edge, or check how strict the upper limits are. We take NGC 4051 as basis for comparison. In NGC 4051, the absorber is rather highly ionized, with OVIII being the strongest edge with $`\tau `$ = $`1.1\pm 0.4`$, then NeX with $`\tau `$ = $`0.8\pm 0.4`$ (the OVII edge is weaker, with $`\tau `$ = 0.35; Komossa & Fink, 1997). Using the ratio $`\tau `$<sub>OVIII</sub>/$`\tau `$<sub>NeX</sub> as typical for the relative depths in a highly ionized absorber, we can conclude that our non-detection of edges other than the one near 1.1 keV in PG1404+226 is still consistent with a standard warm absorber, i.e. our upper limits on $`\tau `$ are not strict enough to rule out a warm absorber in relativistic outflow. ## 4 HST and IUE Observations ### 4.1 Observations Four spectra were taken in February 1996 with HST/FOS and gratings G130, G190H, G270 and G400 with integration times 2300, 530, 120 and 120 seconds respectively. The total observed wavelength range covered is 1087 - 4773 Å with a nominal resolution of 1300 (Fig. 6). The spectra were taken through the 0.86 arcsec diameter aperture. Standard reduction procedures were performed. The wavelength scale of the spectrum taken with G190H was shifted by +1.5 Å, to be consistent with the other spectra. PG 1404+226 was observed with IUE in July 1994, 2 days after the ROSAT observations. The spectrum, SWP 51419, has a total integration time of 315 minutes (accumulated in 12 parts), and was taken through the large aperture and at low dispersion. Compared with the HST spectra taken 18 months later, the continuum flux in July 1994 is $`\sim `$ 1.3 times brighter in the common observed wavelength range 1265 to 1400 Å but the Ly$`\alpha `$ line kept the same intensity. The modest S/N and spectral resolution of the IUE spectrum would prevent the detection of the absorption lines seen in the HST spectra. No change in the emission/absorption profile of Ly$`\alpha `$ (Fig. 7) can be detected by comparing the IUE spectrum and the FOS-G130 spectrum rebinned at 2 Å. ### 4.2 Continuum and emission lines Table 3 lists the emission line intensities (H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>, q<sub>0</sub> = 0, distance = 617 Mpc). We note the presence of some weak emission lines: (1) an unidentified line at 1175 Å (rest wavelength 1070 Å) noticed in a few other quasar spectra (Laor et al 1995; Hamann et al 1997); (2) a line at 1290 Å which we identify with CIII1175.7. This line is seen in HST spectra of IZw1 and Laor et al (1997) suggest that it is produced by resonance scattering of continuum photons by CIII ions, a mechanism which requires large velocity gradients ($`\sim `$ 1000 km s<sup>-1</sup>) within each emitting cloud of the BLR. ### 4.3 The two systems of absorption lines and their likely origin We identify two absorption systems in the HST spectra which, in the source frame, are separated by $`1920`$ km s<sup>-1</sup>. In the ‘blue’ system the absorption lines appear in the blue flank of the Ly$`\alpha `$ and CIV emission lines at 800 km s<sup>-1</sup> from the peak.
In the ‘red’ system the absorption lines appear on the red flank of the emission lines at 1100 km s<sup>-1</sup> from the peak. It is known that in radio quiet AGN/Quasars the high ionization lines such as CIV are blueshifted with respect to the systemic velocity by a few hundred to a few thousand km s<sup>-1</sup> (e.g. van Groningen 1987, Corbin 1995, Sulentic et al. 1995), this blueshift being generally interpreted as evidence for a wind outflowing from the face of the accretion disk turned towards us. On this basis, we argue that the red absorption system is close to zero velocity in the source frame while the blue system originates from material which has an outflow velocity of $`1900`$ km s<sup>-1</sup> (towards us with respect to the quasar). Several factors hamper the measurement of the absorption lines: modest S/N, limited resolution of FOS and uncertainties affecting the profile of the emission lines. The absorption lines were measured assuming a plausible reconstruction of the top of the emission lines. Still it is not possible to obtain a sufficiently accurate Ly$`\alpha `$ absorption profile, and NV and CIV doublet ratios, to ascertain whether these lines are optically thin or thick and to estimate the covering factor (some of the galactic lines do not reach zero either). Higher spectral resolution is needed to elucidate these important points. With this caveat, the measures are given in Table 4. ### 4.4 The CIII1175 absorption line Particularly interesting is the absorption line at 1282.5 Å (FWHM of 2 Å and EW of $`\sim `$ 0.7 Å), which could be CIII1175 in the $`z`$ = 0.091 system. The agreement in redshift is good. This line is present in the IUE and HUT spectra of NGC 4151 (Bromage et al 1985, Kriss et al 1992). Bromage et al (1985) argue that absorption by the excited metastable level of CIII1175 and its strength relative to CIV 1549 require this level to be collisionally populated in a high density medium with N<sub>e</sub> = $`10^{10}`$ cm<sup>-3</sup>. In NGC 4151, the EW of CIII1175 is between 0.7 and 1.0 times the EW of CIV1548,1550 while in PG1404+226 it is between 0.35 and 0.5 times the EW of CIV1548,1550. Note that an appealing alternative for the identification of this line is the CIV doublet blueshifted by 0.3 with respect to PG1404+226. The line width, only $`\sim `$ 2 Å, however, argues against this interpretation. There is no other candidate for highly blueshifted absorption lines in the spectrum. ### 4.5 Origin of the UV absorption systems The $`z`$ = 0.098 system is consistent with being produced in the halo of the host galaxy or of a nearby companion but could also be intrinsic to the nucleus. As for the $`z`$ = 0.091 system, the probable presence (which needs verification) of the CIII1176 line suggests that the system is intrinsic to the quasar and forms in an outflowing wind with a velocity of 1900 km s<sup>-1</sup>. ## 5 Do the UV and X-ray absorption lines come from the same absorber? ### 5.1 UV absorption lines from the Warm Absorber (a) The EW of the UV absorption lines expected from the Warm Absorber have been calculated for the statistically acceptable models of the ROSAT data. They are given in Table 5, separately for low- and high-state, as a measure of the uncertainty arising from the continuum variability (non-simultaneous UV and X-ray observations, non-equilibrium of the gas, etc.; see Section 1 and Nicastro et al. 1999a). We find that the CIV1550 and NV 1240 absorption lines are weak at low state, and negligible at high state (because C and N are highly ionized).
There is always some absorption by hydrogen due to the large column density in H. (b) Following Spitzer (1978), a standard curve of growth was calculated for velocity parameters b = 20, 60, 100, 140 km s<sup>-1</sup> (Figs. 8a-d). The predicted equivalent widths of Ly$`\alpha `$, CIV, NV, and (OVI) were then compared with those derived from the analysis of the HST spectrum. We find that at the ROSAT low-state (which is close to the state of the source at the time of the ASCA observations), and for b about 60 km s<sup>-1</sup>, there is a rough match between the Ly$`\alpha `$ and CIV absorption lines produced by the warm absorber and the observed lines at $`z`$ = 0.098 and $`z`$ = 0.090 (models c-e). The NV absorption line from the warm absorber is, however, always weaker than observed (N is too highly ionized in the models). Including an additional EUV bump will further increase the level of ionization, thus not changing the above conclusions. The different models differ most strongly in OVI, so this may be the most restrictive line, but it falls just outside the HST range. In conclusion, we find no single-phase medium which can produce both the UV and the X-ray absorption lines. (c) In the case of the best fit to the ASCA data, i.e. the model with Fe overabundant by a factor of $`\sim `$ 22 over solar, the degree of ionization is not known but it is likely to be too high for the production of the CIII absorption line. Thus, in this case also, the UV and X-ray absorbers are very probably in different gaseous phases. ### 5.2 Emission lines from the Warm Absorber Table 5 gives the intensity of the strongest lines emitted by the absorber within the wavelength range of the HST spectra and in the optical range. The calculations were performed with a density of log n<sub>WA</sub> = 9.5. The line NeVIII$`\lambda `$774 was added to the list because of recent reports of its detection in high-$`z`$ quasars (Hamann et al. 1997). Table 5 is meant to provide an order of magnitude estimate of which lines may be important/detectable in the future. The actual strength of the lines depends on the covering factor of the warm absorber; total coverage was assumed for the values in Table 5. ### 5.3 Relation between the X-ray and the UV absorber In recent years it has been realized that gas outflow is ubiquitous in AGN. It occurs under different gas phases: broad emission line gas (the most highly ionized lines are the most blueshifted) and UV/optical absorption lines (always observed at rest or blueshifted). It is likely that the X-ray absorption features also originate in outflowing gas at velocities comparable with or higher than those of the UV emission/absorption gas - this cannot presently be ascertained because of the insufficient energy resolution of the X-ray instruments. \[We are not counting here the extraordinary blueshift of the X-ray absorbing gas if the 1 keV features are blueshifted OVII or OVIII edges.\] The question as to whether UV absorption lines and X-ray absorption lines/edges tend to be present together or separately in AGN has been addressed by Ulrich (1988) and more recently by a number of authors (e.g. Schartel et al. 1997, Crenshaw 1997, Mathur 1997, Shields & Hamann 1997, Kriss et al 1996). In some AGN, the data appear consistent with a single-phase, photoionized plasma producing the UV absorption and the OVII and OVIII edges (NGC 3783 Shields & Hamann 1997; 3C212, 3C351 and NGC 5548 Mathur 1997).
In contrast, in other AGN, the properties of the UV and the X-ray absorbers imply the presence of multiphase media (NGC 3516 Kriss et al 1996; MCG-6-30-15 Otani et al 1996). The present analysis indicates that PG1404+226 is another such case of a multiphase absorbing medium (but the non-simultaneity of the observations has to be kept in mind). With the detection of X-ray and UV absorption in PG1404+226 the statistical association between the presence of UV absorption lines and X-ray absorption edges becomes stronger (Ulrich 1988, Mathur, Wilkes & Elvis 1998). The two absorbers could be two different gaseous phases, partaking in the same outflow but differing by their physical conditions, velocity and radial distance to the central black hole. ## 6 Conclusions The main results of our analysis can be summarized as follows: 1) The X-ray spectrum of PG 1404+226 is variable by a factor 4 in $`3\times 10^4`$ s and is characterized by a strong soft excess below 2 keV whose luminosity ($`L_{Soft}\sim 7\times 10^{43}`$ erg s<sup>-1</sup> in the 0.4–2.0 keV energy range) is about a factor 3 greater than the 2–10 keV luminosity ($`L_{Hard}`$). The soft excess emission can be described with a high temperature optically thick blackbody ($`kT\sim 140`$ eV). Optically thin models are ruled out by combining the observed luminosity with the dimension of the region derived from the variability timescale. 2) The residuals around 1 keV can be best described by an absorption edge at $`E=1.07\pm 0.03`$ keV, not consistent with being caused by highly ionized oxygen at rest in the quasar frame. A possible explanation could be either in terms of iron overabundance, as suggested by the warm absorber fits and the extremely high EW of the iron K$`\alpha `$ line, or by resonant absorption in a turbulent gas. The interpretation of a blueshifted oxygen edge in a relativistically outflowing gas is less likely and not supported by the optical–UV data. X–ray observations of NLS1 at high spectral resolution with XMM and Chandra will make it possible to clarify the origin of the 1 keV absorption detected in PG 1404+226 and 3 other NLS1. 3) Two systems of absorption lines separated by $`1900`$ km s<sup>-1</sup> are identified in the FOS/HST spectra in the lines of Ly$`\alpha `$, CIV and NV. One system is located to the red of the emission line peaks. Considering that in most radio quiet QSOs the highly ionized emission lines are blueshifted (as part of an outflow from the face of the accretion disk turned toward us) we argue that this absorption system is nearly at rest in the AGN frame. Its properties are consistent with this absorber being produced in the halo of the host galaxy or a companion. As for the system blueshifted by $`1900`$ km s<sup>-1</sup>, the very probable detection of CIII$`\lambda `$1175 (which has been seen in absorption only in NGC 4151 - Bromage et al. 1985, Kriss et al. 1992) indicates that this system is intrinsic to the quasar. 4) With the detection of X-ray and UV absorption in PG1404+226 the statistical association between the presence of UV absorption lines and X-ray absorption edges first suggested by Ulrich (1988) is becoming clearer (Mathur, Wilkes & Elvis 1998). We may be seeing two different absorbers with different physical conditions and locations but both being parts of a grand design outflow.
The differences observed among absorption features in various AGN are likely to result from intrinsic differences in the properties of the gaseous outflows, from differences in the aspect angles and from the shape of the X–ray spectrum. ###### Acknowledgements. We thank Gary Ferland for providing Cloudy. AC acknowledges partial support by the Italian Space Agency (ASI) under the contract ASI-ARS-96-70 and by the Italian Ministry for University and Research (MURST) under grant Cofin98-02-32. St. K. acknowledges support from the Verbundforschung under grant No. 50 OR 93065, and P.C. acknowledges support from NASA through contract NAS5-26670.
# Neutrino magnetic moments, flavor mixing, and the SuperKamiokande solar data ## Abstract We find that magnetic neutrino-electron scattering is unaffected by oscillations for vacuum mixing of Dirac neutrinos with only diagonal moments and for Majorana neutrinos with two flavors. For MSW mixing, these cases again obtain, though the effective moments can depend on the neutrino energy. Thus, e.g., the magnetic moments measured with $`\overline{\nu }_e`$ from a reactor and $`\nu _e`$ from the Sun could be different. With minimal assumptions, we find a new limit on $`\mu _\nu `$ using the 825-days SuperKamiokande solar neutrino data: $`|\mu _\nu |\le 1.5\times 10^{-10}\mu _B`$ at 90% CL, comparable to the existing reactor limit. In the minimally-extended Standard Model, neutrinos of mass $`m_\nu `$ have tiny loop-induced magnetic moments $`\mu _\nu \simeq 3\times 10^{-19}\mu _B(m_\nu /1\mathrm{eV})`$, where $`\mu _B`$ is the Bohr magneton. In various extensions of the Standard Model, larger magnetic moments can occur without large neutrino masses. In the presence of flavor mixing, the fundamental magnetic moments are associated with the mass eigenstates (since either a boost or a magnetic moment can be used to reverse the helicity). In the mass eigenstate basis, Dirac neutrinos can have diagonal or off-diagonal (transition) moments, while Majorana neutrinos can only have transition moments . In the current experiments, the effects of neutrino magnetic moments can be searched for only in the recoil electron spectrum from neutrino-electron scattering . Below we consider the interplay between magnetic moments and flavor mixing for this process. We show how magnetic moments can be defined for beams that are initially neutrino flavor eigenstates. In some important cases these moments do not oscillate, i.e., they do not depend on distance from the source. However, in the presence of MSW mixing, these defined flavor moments can differ from the vacuum case and can depend on the neutrino energy, though not on the distance. As an illustration, we derive a new limit on the magnetic moment from the SuperKamiokande (SK) solar neutrino data . There are two incoherent contributions to neutrino-electron scattering: weak scattering, which preserves the neutrino helicity, and magnetic scattering, which reverses it. Thus the differential cross section is given by $$\frac{d\sigma }{dT}=\frac{2G_F^2m_e}{\pi }\left[g_L^2+g_R^2\left(1-\frac{T}{E_\nu }\right)^2-g_Lg_R\frac{m_eT}{E_\nu ^2}\right]+\mu _\nu ^2\frac{\pi \alpha ^2}{m_e^2}\frac{1-T/E_\nu }{T}.$$ (2) In Eq. (2), $`g_L=\mathrm{sin}^2\theta _W+1/2`$ for $`\nu _e`$, $`g_L=\mathrm{sin}^2\theta _W-1/2`$ for $`\nu _\mu `$ and $`\nu _\tau `$, and $`g_R=\mathrm{sin}^2\theta _W`$ for all flavors (for antineutrinos, exchange $`g_L`$ and $`g_R`$). The magnetic moment $`\mu _\nu `$ is expressed in units of $`\mu _B`$. Magnetic scattering, the second term in Eq. (2), grows rapidly with decreasing electron recoil kinetic energy $`T`$. In principle, there can be weak-magnetic interference effects. There is a negligible effect due to the fact that a massive neutrino is not a helicity eigenstate . Also, if the neutrinos have a transverse polarization, the electron azimuthal angle distribution can be affected ; we ignore this case, as the effects are presently unobservable.
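As a concrete illustration of Eq. (2), the sketch below evaluates the weak and magnetic contributions; the conversion factor to cm<sup>2</sup>, the value of $`\mathrm{sin}^2\theta _W`$, and the example energies are the only inputs added here.

```python
import math

G_F = 1.166e-11        # Fermi constant [MeV^-2]
M_E = 0.511            # electron mass [MeV]
ALPHA = 1.0 / 137.036  # fine-structure constant
HBARC2 = 3.894e-22     # (hbar c)^2 [MeV^2 cm^2], to convert to cm^2/MeV
SIN2_TW = 0.231        # weak mixing angle (standard value)

def dsigma_dT(T, E_nu, mu_nu, nu_e=True):
    """Eq. (2) in cm^2/MeV; mu_nu in units of the Bohr magneton."""
    gL = SIN2_TW + 0.5 if nu_e else SIN2_TW - 0.5
    gR = SIN2_TW
    weak = (2.0 * G_F**2 * M_E / math.pi) * (
        gL**2 + gR**2 * (1.0 - T / E_nu)**2 - gL * gR * M_E * T / E_nu**2)
    mag = mu_nu**2 * math.pi * ALPHA**2 / M_E**2 * (1.0 - T / E_nu) / T
    return (weak + mag) * HBARC2

# The 1/T rise of the magnetic term at low recoil kinetic energy:
for T in (0.5, 1.0, 5.0):   # MeV, all below T_max = 2E^2/(m_e + 2E)
    print(T, dsigma_dT(T, E_nu=10.0, mu_nu=1.5e-10))
```

For $`\mu _\nu `$ near the limits discussed below, the two terms are comparable at T of order 1 MeV, which is why the shape of the low-energy recoil spectrum is the sensitive observable.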
Vacuum Mixing: The effects of flavor mixing on the weak scattering are well-known. Whatever the composition of the neutrino beam, the different flavors are in principle distinguishable and hence their cross sections combine incoherently, weighted by the probabilities for the neutrino to be of each given flavor. We want to explore how neutrino oscillations affect the magnetic scattering. The shape of the electron recoil spectrum in magnetic scattering is universal (the same for all mass eigenstates). The only quantity that depends on the beam composition is the effective magnetic moment $`\mu _\nu `$. Let us assume that we begin with a beam of electron neutrinos. Under the usual oscillation hypothesis such a beam propagates over the distance $`L`$ from its source in vacuum according to $$|\nu _e(L)\rangle =\underset{k}{\sum }U_{ek}e^{-iE_kL}|\nu _k\rangle ,$$ (3) where $`U_{ek}`$ is an element of the unitary mixing matrix and $`k`$ labels the mass eigenstates. Similarly to above, whatever the composition of the neutrino beam, the different mass eigenstates are in principle distinguishable in the magnetic scattering, and hence their cross sections combine incoherently, weighted by the squares of the amplitudes for the neutrino to be of each mass after the scattering. Then the combined cross section for magnetic scattering has the form of Eq. (2) with magnetic moment squared $`\mu _\nu ^2`$ given by $$\mu _e^2=\underset{j}{\sum }\left|\underset{k}{\sum }U_{ek}e^{-iE_kL}\mu _{jk}\right|^2=\underset{j}{\sum }\underset{kk^{\prime }}{\sum }U_{ek}U_{ek^{\prime }}^{\ast }\mu _{jk}\mu _{jk^{\prime }}^{\ast }e^{-2\pi iL/L_{kk^{\prime }}},$$ (5) where the summations $`j,k,k^{\prime }`$ are over the mass eigenstates, and the subscript $`e`$ labels the initial flavor. We have made the usual relativistic expansion and have defined the oscillation length $`L_{kk^{\prime }}=4\pi E_\nu /\mathrm{\Delta }m_{kk^{\prime }}^2`$ for $`k\ne k^{\prime }`$ (there is no $`L`$-dependent phase for $`k=k^{\prime }`$). The quantities $`\mu _{jk}`$ in Eq. (5) are the fundamental constants (in units of $`\mu _B`$) that characterize the coupling of the neutrino mass eigenstates to the electromagnetic field. The summation over $`j`$ is outside the square because of the incoherence of the cross sections for different final masses. The expression for $`\mu _e^2`$ simplifies in some important cases. Let us assume first that the neutrinos are Dirac particles (with $`n`$ flavors) with only diagonal magnetic moments ($`\mu _{jk}=\mu _j\delta _{jk}`$); this is the scenario used by the Particle Data Group . Then $$\mu _e^2=\underset{j}{\sum }|U_{ej}|^2|\mu _j|^2,$$ (6) and there is no dependence on the distance $`L`$ or neutrino energy $`E_\nu `$. In this case one can characterize the magnetic scattering by the initial flavor index instead of the mass indices. (Hence we disagree with Ref. , in which the magnetic scattering depends on the final flavor index, i.e., it oscillates.) Measurements of all magnetic moments and mixing parameters would allow extraction of the “fundamental” moments $`\mu _j`$. Next consider the case of Majorana neutrinos, and assume that only two mass eigenstates are relevant. Then $$\mu _e^2=|\mu _{12}|^2(|U_{e1}|^2+|U_{e2}|^2)=|\mu _{12}|^2,$$ (7) which is not only independent of the source distance and the neutrino energy, but also of the mixing angle.
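The distance independence in Eq. (6) is easy to verify numerically. The sketch below evaluates Eq. (5) for a two-flavor example; the mixing angle, moments, and mass-squared values are arbitrary illustrative inputs, not fitted quantities.

```python
import numpy as np

def mu_e_squared(U_e, mu, m2, E_nu, L):
    """Eq. (5): effective moment-squared of an initial nu_e beam.
    U_e[k]: mixing amplitudes; mu[j,k]: moments in units of mu_B;
    m2[k]: mass-squared values; E_nu, L in matching natural units."""
    phase = np.exp(-1j * m2 * L / (2.0 * E_nu))   # relativistic expansion
    amps = mu @ (U_e * phase)    # amplitude to scatter into final mass j
    return np.sum(np.abs(amps) ** 2)

theta = 0.6                                       # illustrative mixing angle
U_e = np.array([np.cos(theta), np.sin(theta)])
mu_diag = np.diag([2e-10, 1e-10])                 # diagonal Dirac moments
for L in (0.0, 1e16, 5e16):                       # arbitrary distances
    print(mu_e_squared(U_e, mu_diag, np.array([0.0, 1e-10]), 1.0, L))
# identical results: Eq. (6), no L dependence for diagonal moments
```

Replacing `mu_diag` by a matrix with off-diagonal entries comparable to the diagonal ones makes the printed values oscillate with $`L`$, which is the situation addressed next.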
Under what circumstances does one have to worry about a dependence on distance and neutrino energy, and in particular, when can such terms be dominant? Clearly, at least one term of the type $`\mu _{jk}\times \mu _{jk^{\prime }}`$ with $`k\ne k^{\prime }`$ must be nonvanishing and as large as $`\mu _{jk}^2`$ or $`\mu _{jk^{\prime }}^2`$. In other words, in the $`3\times 3`$ matrix $`\mu _{jk}`$ there should be at least two comparable entries in the same row (and in the same column). For the Dirac case this implies that at least one nondiagonal magnetic moment is as large as the diagonal ones. For the Majorana case it implies that $`two`$ different nondiagonal magnetic moments are of a similar magnitude. Both of these cases seem unnatural. MSW Mixing: The above discussion must be modified for matter-enhanced oscillations (the MSW effect). First, the initial composition of the beam is governed not by the vacuum mixing angle $`\theta _v`$, but by the initial matter mixing angle $`\theta _m`$, which depends on $`\mathrm{\Delta }m^2/2E_\nu `$ and the electron density. If the initial density is well above the resonance density, as is true for the standard solutions to the solar neutrino problem, then $`\theta _m\simeq \pi /2`$ to an excellent approximation. Then initially, $`|\nu _e\rangle =\mathrm{cos}\theta _m|\nu _1\rangle +\mathrm{sin}\theta _m|\nu _2\rangle \simeq |\nu _2\rangle `$. Second, although a nearly pure $`|\nu _2\rangle `$ is produced in the solar center, if the passage through the resonance is nonadiabatic, then the final beam can be a mixture of $`|\nu _1\rangle `$ and $`|\nu _2\rangle `$. Most generally, the mass eigenstates evolve as $$|\nu _1\rangle \rightarrow c_1e^{+i\varphi _a}|\nu _1\rangle +c_2e^{+i\varphi _b}|\nu _2\rangle $$ (8) $$|\nu _2\rangle \rightarrow -c_2^{\ast }e^{-i\varphi _b}|\nu _1\rangle +c_1^{\ast }e^{-i\varphi _a}|\nu _2\rangle ,$$ (9) where $`|c_1|^2+|c_2|^2=1`$. The phases $`\varphi _a`$ and $`\varphi _b`$ (real functions that depend on integrals of the instantaneous mass basis eigenvalues) are irrelevant here, due to the non-interference of different mass eigenstates in the magnetic scattering. For the adiabatic case (e.g., the solar large-angle solution ), $`c_2=0`$. For the nonadiabatic case (e.g., the solar small-angle solution ), and a narrow resonance region (which naturally obtains), the probability of hopping from one mass eigenstate to the other is $`P_{hop}=|c_2|^2`$, which depends on the neutrino energy but not the distance from the source, e.g., for an exponential density profile with density scale height $`r_s`$, $$P_{hop}=\mathrm{exp}\left[-\pi \frac{\mathrm{\Delta }m^2}{2E_\nu }r_s(1-\mathrm{cos}2\theta _v)\right].$$ (10) Thus for two-flavor Dirac mixing with only diagonal moments, we obtain for the effective magnetic moment $$\mu _e^2=|c_2|^2|\mu _1|^2+|c_1|^2|\mu _2|^2=P_{hop}|\mu _1|^2+(1-P_{hop})|\mu _2|^2.$$ (12) Note that this is different from Eq. (6), even in the adiabatic case. However, for the two-flavor Majorana case, we again obtain $`\mu _e^2=|\mu _{12}|^2`$, as in Eq. (7). In both cases, since the initial state is a pure $`|\nu _2\rangle `$, there are no interference terms that depend on distance.
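A small numerical sketch of Eqs. (10) and (12) follows. The oscillation parameters and moments are illustrative placeholders (small-angle-solution-like numbers), and the solar density scale height is the one astrophysical input assumed here; only the energy dependence of the result matters.

```python
import math

K = 5.068e-2   # (1 eV^2 / MeV) expressed in cm^-1 (hbar = c = 1)

def p_hop(dm2_ev2, E_MeV, theta_v, r_s_cm=6.6e9):
    """Eq. (10): hopping probability for an exponential solar density
    profile; r_s ~ R_sun/10 is the assumed density scale height."""
    k = K * dm2_ev2 / (2.0 * E_MeV)               # Delta m^2 / 2E [1/cm]
    return math.exp(-math.pi * k * r_s_cm * (1.0 - math.cos(2.0 * theta_v)))

def mu_eff_sq(mu1, mu2, dm2_ev2, E_MeV, theta_v):
    """Eq. (12): effective moment for diagonal Dirac moments."""
    P = p_hop(dm2_ev2, E_MeV, theta_v)
    return P * mu1**2 + (1.0 - P) * mu2**2

# Illustrative small-angle-like parameters; note the E dependence:
for E in (1.0, 5.0, 10.0):   # MeV
    print(E, mu_eff_sq(1e-10, 2e-10, 5e-6, E, 0.03))
```

The output interpolates between $`|\mu _2|^2`$ at low energy (adiabatic) and a mixture at high energy (nonadiabatic), which is exactly the energy dependence referred to in the abstract.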
SK Data: The best direct limit on the neutrino magnetic moment, $`1.8\times 10^{-10}\mu _B`$ at 90% CL , comes from studies of neutrino-electron scattering with reactor antineutrinos. (See Ref. and references therein for the astrophysical limits.) As explained above, the meaning of the measured $`\mu _\nu `$ using solar neutrinos and reactor antineutrinos could in principle be different. Nevertheless, it is important to realize that a magnetic moment numerically equal to the current reactor limit would have a statistically significant effect on the solar neutrino data from SK. Since there is, as explained below, no evidence in the data for a nonvanishing magnetic moment, we derive, with a minimum of assumptions, a limit on what we call $`\mu _e^{sol}`$. If the expected weak scattering rate were known (as assumed in Ref. ), an observed excess in the total rate would indicate a nonzero magnetic moment. However, as the total weak rate is not known a priori, we instead look at the shape of the electron spectrum for the effects of a magnetic moment. The signature of a nonvanishing magnetic moment would be an enhancement, compared to the weak scattering alone, of the events at low recoil energies, with less enhancement at higher energies. That is not observed. Instead, as shown below, the electron spectra recorded by SK have, within the statistics, the shape that one expects from weak scattering alone (we show below that the deviations observed currently in the highest energy bins are irrelevant for our purpose). However, the total number of events is less than the standard solar model predicts, presumably due to neutrino oscillations. We do not need to know the value, or the mixing mechanism behind it, of this overall reduction of the scattering rate. We assume only that the shape of the measured spectrum is not due to a fortuitous cancellation between a magnetic moment effect rising at low energies and an oscillation effect rising at high energies. The Sudbury Neutrino Observatory will check the spectral shape and total flux of the $`\nu _e`$ component. The procedure we adopt uses the measured $`relative`$ errors by SK and the fact that the measured shape agrees with expectations. We calculate $`d\sigma /dT`$ by folding Eq. (2) with the $`^8\mathrm{B}`$ neutrino spectrum from Ref. . For both weak and magnetic scattering, we include the SK energy resolution , though it makes little difference in the final results. We histogram the results in 0.5 MeV bins in total electron energy, as in SK. Thus, as a function of the bin number $`i`$, we have constructed the expected spectra $`n_W(i)`$ and $`n_M(i)`$ for weak and magnetic scattering, respectively. In order to determine the upper limit of $`|\mu _e^{sol}|^2`$ we must take into account the statistical fluctuations in the SK data. While the data points divided by the solar model expectation are consistent with an energy independent reduction factor $`\alpha `$, the individual bins are distributed, presumably randomly, around that value. To take that into account we choose some reference values $`\alpha _{ref}`$ and $`\mu _{ref}`$ and create a set of $`simulated`$ data, $`n_S(i)`$, which are Gaussian-distributed around the theoretical expectation $`\alpha _{ref}n_W(i)+\mu _{ref}^2n_M(i)`$ with the relative errors $`\sigma (i)`$ given by SK. We then minimize the $`\chi ^2`$, $$\chi ^2=\underset{i}{\sum }\left[\frac{\alpha n_W(i)+\mu _\nu ^2n_M(i)-n_S(i)}{\sigma (i)n_S(i)}\right]^2,$$ (13) with respect to the fit parameters $`\alpha `$ and $`\mu _\nu ^2`$. For fixed $`\alpha _{ref}`$ and $`\mu _{ref}^2`$, we repeat this procedure many times and plot the frequencies with which given values of the fitted $`\alpha `$ and $`\mu _\nu ^2`$ appear. An example of the scatter plot of the fit parameters is shown in Fig. 1. One can see, naturally, that the most probable values of the fit are the reference values $`\alpha _{ref}`$ and $`\mu _{ref}`$. Also, the two variables are strongly anticorrelated (correlation coefficient $`r\simeq -0.9`$), i.e., larger $`\alpha `$ is accompanied by smaller $`\mu _\nu ^2`$.
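Because the model in Eq. (13) is linear in the two fit parameters, each simulated spectrum can be fitted by weighted linear least squares, and the loop over simulated spectra is a few lines of code. The sketch below reproduces the procedure in outline; the input spectra and relative errors are placeholders standing in for the $`n_W(i)`$, $`n_M(i)`$ and $`\sigma (i)`$ described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_once(n_W, n_M, sigma, alpha_ref, mu2_ref):
    """Simulate n_S around alpha_ref*n_W + mu2_ref*n_M, then minimize
    Eq. (13) for (alpha, mu^2) by weighted linear least squares."""
    model = alpha_ref * n_W + mu2_ref * n_M
    n_S = model * (1.0 + sigma * rng.standard_normal(len(n_W)))
    w = 1.0 / (sigma * n_S)                    # inverse absolute error
    A = np.column_stack([n_W * w, n_M * w])    # design matrix
    params, *_ = np.linalg.lstsq(A, n_S * w, rcond=None)
    return params                              # (alpha_fit, mu2_fit)

# Placeholder spectra and errors (the real ones come from the SK bins):
bins = np.arange(16)
n_W = np.exp(-0.3 * bins)          # weak shape, falling with energy
n_M = 2.0 * np.exp(-0.5 * bins)    # magnetic shape, steeper at low E
sigma = np.full(16, 0.05)          # 5% relative errors

fits = np.array([fit_once(n_W, n_M, sigma, 0.5, 0.0) for _ in range(2000)])
print(np.corrcoef(fits.T)[0, 1])   # strong anticorrelation, as in Fig. 1
```

Projecting the `fits` array onto its second column gives the Gaussian distributions of the fitted $`\mu _\nu ^2`$ discussed next.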
Dividing the numerator and denominator in Eq. (13) by $`n_W(i)`$, one sees that the $`\chi ^2`$ depends only on $`r_i=n_S(i)/n_W(i)`$, i.e., precisely on the quantities published by SK . By repeating the calculation for different values of $`\mu _{ref}`$ and projecting on the $`\mu _\nu ^2`$ axis, one gets the distributions shown in the upper panel of Fig. 2. These distributions are Gaussian, and their width is almost independent of the chosen value of $`\mu _{ref}`$. Based on them we obtain the lines in the lower panel of Fig. 2 signifying confidence levels at 10%, 50% (the mean), and 90%. For a given fitted $`\mu _\nu ^2`$ obtained in an experiment, these allow one to determine the likely range of the true $`\mu _\nu ^2`$. For example, if one found a fitted $`\mu _\nu ^2\simeq 4`$, then from Fig. 2, the most probable true value is $`\mu _\nu ^2\simeq 4`$, with the upper limit being $`\simeq 8`$ and the lower limit being $`\simeq 0`$. Similarly, for a fitted $`\mu _\nu ^2\simeq 0`$, the true $`\mu _\nu ^2`$ is $`\le 3.9`$. This would be the largest true $`\mu _\nu ^2`$, due to statistical fluctuations of the finite data, that could have given this fitted $`\mu _\nu ^2\simeq 0`$. In this way we solve problems associated with the statistical fluctuations as well as with the constraint $`|\mu _e^{sol}|^2\ge 0`$. Figure 2 was calculated with $`\alpha _{ref}=0.5`$, as observed in SK, but doesn’t change significantly for $`0.4<\alpha _{ref}<0.6`$. Note that the results summarized in Fig. 2 can be also obtained analytically, without generating many simulated spectra. The conclusions, in particular the lower panel of Fig. 2, simply follow from the properties of the individual sums in Eq. (13). Using the SK data , the fitted $`\alpha \simeq 0.5`$; the exact value is irrelevant since we are testing only the spectral shape, and not the normalization. The fitted values of $`\mu _\nu ^2`$ are slightly (but not significantly; see Fig. 2) negative: $`-5`$, $`-3`$, and $`-2`$ (in the units of Table I) for the 504-, 708-, and 825-days data sets. The slight (but diminishing with time) positive slope observed in the data cannot be caused by a magnetic moment (which causes an increase at low energies), though it could be caused by oscillations. The most conservative conclusion is therefore to say that the slope is not negative, i.e., that the fitted $`\mu _\nu ^2`$ values are not positive. That is, we obtain the limit by using Fig. 2 (and its analogs) and an assumed fitted value of $`\mu _\nu ^2=0`$. Thus the limits in Table I are slightly weaker than what is naively implied by the data, but are more robust. The sensitivity to $`|\mu _e^{sol}|`$ improves with time only as $`t^{1/4}`$, but the addition of more low-energy bins (e.g., the two added since the 504-days data) gives a more dramatic improvement. The uncertainties $`\delta \alpha `$ in Table I reflect the increase in the error in the parameter $`\alpha `$ when one allows magnetic scattering. Our procedure does not include the correlations between systematic errors in different bins and therefore will not reflect the full systematic uncertainty. Note that in the standard analysis one assumes $`\mu _\nu =0`$ and hence the uncertainty $`\delta \alpha `$ is reduced by the factor $`1/\sqrt{1-r^2}\simeq 2.3`$. Similarly, if the value of $`\alpha `$ were accurately and independently known, and we fit for $`\mu _\nu ^2`$ only, an identical improvement in the upper limit of $`|\mu _e^{sol}|^2`$ would result. Conclusions: In this paper, we present three new results.
First, that while neutrino magnetic moments are most fundamentally defined for mass eigenstates, in several cases of practical interest non-oscillating (i.e., independent of distance) effective magnetic moments can be defined for the flavor eigenstates. For Dirac neutrinos with only diagonal moments, these results are Eq. (6) for vacuum mixing and Eq. (12) for MSW mixing. For Majorana neutrinos with two flavors, the result is $`\mu _e^2=|\mu _{12}|^2`$, for either vacuum or MSW mixing. Second, that MSW mixing can change the definition of the effective magnetic moment (allowing a dependence on the neutrino energy), so that the measured moments using $`\overline{\nu }_e`$ from a reactor and $`\nu _e`$ from the Sun could be different. Third, that the shape of the SK recoil electron spectrum can be used to place a limit on the neutrino magnetic moment (note that we do not invoke any mechanism for neutrino interaction with the solar magnetic field). In general, this is a new limit, independent of the limit from reactor studies (with the same meaning only for solar vacuum oscillations). In any case, the limit obtained using the preliminary 825-days data, $`|\mu _e^{sol}|\le 1.5\times 10^{-10}\mu _B`$, is comparable to the existing reactor limit of $`1.8\times 10^{-10}\mu _B`$ . This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG03-88ER-40397. J.F.B. was supported by Caltech. We thank Boris Kayser, Bob Svoboda, and Mark Vagins for discussions, and the SuperKamiokande collaboration for supplying the 825-days results.
# Modulus Stabilization with Bulk Fields ## Abstract We propose a mechanism for stabilizing the size of the extra dimension in the Randall-Sundrum scenario. The potential for the modulus field that sets the size of the fifth dimension is generated by a bulk scalar with quartic interactions localized on the two 3-branes. The minimum of this potential yields a compactification scale that solves the hierarchy problem without fine tuning of parameters. preprint: CALT-68-2232 hep-ph/9907447 The Standard Model for strong, weak, and electromagnetic interactions based on the gauge group $`SU(3)\times SU(2)\times U(1)`$ has been extremely successful in accounting for experimental observations. However, it has several unattractive features that suggest new physics beyond that incorporated in this model. One of these is the gauge hierarchy problem, which refers to the vast disparity between the weak scale and the Planck scale. In the context of the minimal Standard Model, this hierarchy of scales is unnatural since it requires a fine tuning order by order in perturbation theory. A number of extensions have been proposed to solve the hierarchy problem, notably technicolor (or dynamical symmetry breaking) and low energy supersymmetry. Recently, it has been suggested that large compactified extra dimensions may provide an alternative solution to the hierarchy problem. In these models, the observed Planck mass $`M_{Pl}`$ is related to $`M`$, the fundamental mass scale of the theory, by $`M_{Pl}^2=M^{n+2}V_n`$, where $`V_n`$ is the volume of the additional compactified dimensions. If $`V_n`$ is large enough, $`M`$ can be of the order of the weak scale. Unfortunately, unless there are several large extra dimensions, a new hierarchy is introduced between the compactification scale, $`\mu _c=V_n^{-1/n}`$, and $`M`$. Randall and Sundrum have proposed a higher dimensional scenario to solve the hierarchy problem that does not require large extra dimensions. This model consists of a spacetime with a single $`S^1/Z_2`$ orbifold extra dimension. Three-branes with opposite tensions reside at the orbifold fixed points and, together with a finely tuned cosmological constant, serve as sources for five-dimensional gravity. The resulting spacetime metric contains a redshift factor which depends exponentially on the radius $`r_c`$ of the compactified dimension: $$ds^2=e^{-2kr_c|\varphi |}\eta _{\mu \nu }dx^\mu dx^\nu -r_c^2d\varphi ^2,$$ (1) where $`k`$ is a parameter which is assumed to be of order $`M`$, $`x^\mu `$ are Lorentz coordinates on the four-dimensional surfaces of constant $`\varphi `$, and $`-\pi \le \varphi \le \pi `$ with $`(x,\varphi )`$ and $`(x,-\varphi )`$ identified. The two 3-branes are located at $`\varphi =0`$ and $`\varphi =\pi `$. A similar scenario to the one described in ref. is that of Horava and Witten , which arises within the context of $`M`$-theory. Supergravity solutions similar to Eq. (1) are presented in ref. . In ref. , it is shown how this model may be obtained from string theory compactifications. The non-factorizable geometry of Eq. (1) has several important consequences.
For instance, the four-dimensional Planck mass is given in terms of the fundamental scale $`M`$ by $$M_{Pl}^2=\frac{M^3}{k}[1-e^{-2kr_c\pi }],$$ (2) so that, even for large $`kr_c,`$ $`M_{Pl}`$ is of order $`M.`$ Because of the exponential factor in the spacetime metric, a field confined to the 3-brane at $`\varphi =\pi `$ with mass parameter $`m_0`$ will have physical mass $`m_0e^{-kr_c\pi }`$ and for $`kr_c`$ around 12, the weak scale is dynamically generated from a fundamental scale $`M`$ which is on the order of the Planck mass. Furthermore, Kaluza-Klein gravitational modes have TeV scale mass splittings and couplings. Similarly, a bulk field with mass on the order of $`M`$ has low-lying Kaluza-Klein excitations that reside primarily near $`\varphi =\pi `$ and hence, from a four-dimensional perspective, have masses on the order of the weak scale . In the scenario presented in ref. , $`r_c`$ is associated with the vacuum expectation value of a massless four-dimensional scalar field. This modulus field has zero potential and consequently $`r_c`$ is not determined by the dynamics of the model. For this scenario to be relevant, it is necessary to find a mechanism for generating a potential to stabilize the value of $`r_c.`$ Here we show that such a potential can arise classically from the presence of a bulk scalar with interaction terms that are localized to the two 3-branes (other proposals for stabilizing the $`r_c`$ modulus can be found in ref. ). The minimum of this potential can be arranged to yield a value of $`kr_c\sim 10`$ without fine tuning of parameters. Imagine adding to the model a scalar field $`\mathrm{\Phi }`$ with the following bulk action $$S_b=\frac{1}{2}\int d^4x\int _{-\pi }^\pi d\varphi \sqrt{G}\left(G^{AB}\partial _A\mathrm{\Phi }\partial _B\mathrm{\Phi }-m^2\mathrm{\Phi }^2\right),$$ (3) where $`G_{AB}`$ with $`A,B=\mu ,\varphi `$ is given by Eq. (1). We also include interaction terms on the hidden and visible branes (at $`\varphi =0`$ and $`\varphi =\pi `$ respectively) given by $$S_h=-\int d^4x\sqrt{g_h}\lambda _h\left(\mathrm{\Phi }^2-v_h^2\right)^2,$$ (4) and $$S_v=-\int d^4x\sqrt{g_v}\lambda _v\left(\mathrm{\Phi }^2-v_v^2\right)^2,$$ (5) where $`g_h`$ and $`g_v`$ are the determinants of the induced metric on the hidden and visible branes respectively. Note that $`\mathrm{\Phi }`$ and $`v_{v,h}`$ have mass dimension $`3/2`$, while $`\lambda _{v,h}`$ have mass dimension $`-2.`$ Kinetic terms for the scalar field can be added to the brane actions without changing our results. The terms on the branes cause $`\mathrm{\Phi }`$ to develop a $`\varphi `$-dependent vacuum expectation value $`\mathrm{\Phi }(\varphi )`$ which is determined classically by solving the differential equation $`0`$ $`=`$ $`-{\displaystyle \frac{1}{r_c^2}}\partial _\varphi \left(e^{-4\sigma }\partial _\varphi \mathrm{\Phi }\right)+m^2e^{-4\sigma }\mathrm{\Phi }+4e^{-4\sigma }\lambda _v\mathrm{\Phi }\left(\mathrm{\Phi }^2-v_v^2\right){\displaystyle \frac{\delta (\varphi -\pi )}{r_c}}`$ (7) $`+4e^{-4\sigma }\lambda _h\mathrm{\Phi }\left(\mathrm{\Phi }^2-v_h^2\right){\displaystyle \frac{\delta (\varphi )}{r_c}},`$ where $`\sigma (\varphi )=kr_c|\varphi |.`$ Away from the boundaries at $`\varphi =0,\pi `$, this equation has the general solution $$\mathrm{\Phi }(\varphi )=e^{2\sigma }[Ae^{\nu \sigma }+Be^{-\nu \sigma }],$$ (8) with $`\nu =\sqrt{4+m^2/k^2}`$.
Putting this solution back into the scalar field action and integrating over $`\varphi `$ yields an effective four-dimensional potential for $`r_c`$ which has the form $`V_\mathrm{\Phi }(r_c)`$ $`=`$ $`k(\nu +2)A^2(e^{2\nu kr_c\pi }-1)+k(\nu -2)B^2(1-e^{-2\nu kr_c\pi })`$ (10) $`+\lambda _ve^{-4kr_c\pi }\left(\mathrm{\Phi }(\pi )^2-v_v^2\right)^2+\lambda _h\left(\mathrm{\Phi }(0)^2-v_h^2\right)^2.`$ The unknown coefficients $`A`$ and $`B`$ are determined by imposing appropriate boundary conditions on the 3-branes. We obtain these boundary conditions by inserting Eq. (8) into the equations of motion and matching the delta functions: $`k\left[(2+\nu )A+(2-\nu )B\right]-2\lambda _h\mathrm{\Phi }(0)\left[\mathrm{\Phi }(0)^2-v_h^2\right]=0,`$ (11) $`ke^{2kr_c\pi }\left[(2+\nu )e^{\nu kr_c\pi }A+(2-\nu )e^{-\nu kr_c\pi }B\right]+2\lambda _v\mathrm{\Phi }(\pi )\left[\mathrm{\Phi }(\pi )^2-v_v^2\right]=0.`$ (12) Rather than solve these equations in general, we consider the simplified case in which the parameters $`\lambda _h`$ and $`\lambda _v`$ are large. It is evident from Eq. (10) that in this limit it is energetically favorable to have $`\mathrm{\Phi }(0)=v_h`$ and $`\mathrm{\Phi }(\pi )=v_v`$ (the configuration that has both VEVs of the same sign has lower energy than the one with alternating signs and therefore corresponds to the ground state; clearly, the overall sign is irrelevant). Thus, from Eq. (8) we get for large $`kr_c`$ $`A`$ $`=`$ $`v_ve^{-(2+\nu )kr_c\pi }-v_he^{-2\nu kr_c\pi },`$ (13) $`B`$ $`=`$ $`v_h(1+e^{-2\nu kr_c\pi })-v_ve^{-(2+\nu )kr_c\pi },`$ (14) where subleading powers of $`\mathrm{exp}(-kr_c\pi )`$ have been neglected. Now suppose that $`m/k\ll 1`$ so that $`\nu =2+ϵ,`$ with $`ϵ\simeq m^2/4k^2`$ a small quantity. In the large $`kr_c`$ limit, the potential becomes $$V_\mathrm{\Phi }(r_c)=kϵv_h^2+4ke^{-4kr_c\pi }(v_v-v_he^{-ϵkr_c\pi })^2\left(1+\frac{ϵ}{4}\right)-kϵv_he^{-(4+ϵ)kr_c\pi }(2v_v-v_he^{-ϵkr_c\pi })$$ (15) where terms of order $`ϵ^2`$ are neglected (but $`ϵkr_c`$ is not treated as small). Ignoring terms proportional to $`ϵ`$, this potential has a minimum at $$kr_c=\left(\frac{4}{\pi }\right)\frac{k^2}{m^2}\mathrm{ln}\left[\frac{v_h}{v_v}\right].$$ (16) With $`\mathrm{ln}(v_h/v_v)`$ of order unity, we only need $`m^2/k^2`$ of order $`1/10`$ to get $`kr_c\simeq 10.`$ Clearly, no extreme fine tuning of parameters is required to get the right magnitude for $`kr_c.`$ For instance, taking $`v_h/v_v=1.5`$ and $`m/k=0.2`$ yields $`kr_c\simeq 12.`$ The stress tensor for the scalar field can be written as $`T_s^{AB}=T_k^{AB}+T_m^{AB},`$ where for large $`kr_c`$: $`T_k^{\varphi \varphi }`$ $`\simeq `$ $`{\displaystyle \frac{k^2}{2r_c^2}}\left[(4+ϵ)(v_v-v_he^{-ϵkr_c\pi })e^{-(4+ϵ)(kr_c\pi -\sigma )}-ϵv_he^{-ϵ\sigma }\right]^2,`$ (17) $`T_k^{\mu \nu }`$ $`\simeq `$ $`{\displaystyle \frac{k^2}{2}}e^{2\sigma }\eta ^{\mu \nu }\left[(4+ϵ)(v_v-v_he^{-ϵkr_c\pi })e^{-(4+ϵ)(kr_c\pi -\sigma )}-ϵv_he^{-ϵ\sigma }\right]^2,`$ (18) and $`T_m^{\varphi \varphi }`$ $`\simeq `$ $`{\displaystyle \frac{2k^2ϵ}{r_c^2}}\left[(v_v-v_he^{-ϵkr_c\pi })e^{-(4+ϵ)(kr_c\pi -\sigma )}+v_he^{-ϵ\sigma }\right]^2,`$ (19) $`T_m^{\mu \nu }`$ $`\simeq `$ $`2k^2e^{2\sigma }\eta ^{\mu \nu }ϵ\left[(v_v-v_he^{-ϵkr_c\pi })e^{-(4+ϵ)(kr_c\pi -\sigma )}+v_he^{-ϵ\sigma }\right]^2.`$ (20) As long as $`v_h^2/M^3`$ and $`v_v^2/M^3`$ are small, $`T_s^{AB}`$ can be neglected in comparison to the stress tensor induced by the bulk cosmological constant. It is therefore safe to ignore the influence of the scalar field on the background geometry for the computation of $`V(r_c)`$. A similar criterion ensures that the stress tensor induced by the bulk cosmological constant is dominant for $`kr_c\lesssim 1.`$
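As a numerical aside (ours, not part of the original text), one can check both the warp-factor hierarchy and the location of the minimum of Eq. (15) against the analytic estimate of Eq. (16); the fiducial Planck-scale mass $`m_0`$ below is an assumption, and the constant term $`kϵv_h^2`$ is subtracted before scanning so that the exponentially small $`r_c`$-dependent piece is not lost to floating-point roundoff:

```python
import math

# (i) Warp-factor hierarchy: a Planck-scale mass parameter m0 is redshifted
# to m0 * exp(-k r_c pi); m0 ~ 1.2e19 GeV is an assumed fiducial value.
m0 = 1.2e19  # GeV
print(f"kr_c = 12: physical mass ~ {m0 * math.exp(-12 * math.pi):.1e} GeV")  # ~5e2 GeV

# (ii) Minimum of Eq. (15) vs. Eq. (16), for v_h/v_v = 1.5 and m/k = 0.2
# (so eps = m^2 / 4k^2 = 0.01), with the r_c-independent constant dropped.
def V_shifted(krc, k=1.0, vh=1.5, vv=1.0, eps=0.01):
    u = math.exp(-eps * krc * math.pi)
    return (4 * k * math.exp(-4 * krc * math.pi) * (vv - vh * u) ** 2 * (1 + eps / 4)
            - k * eps * vh * math.exp(-(4 + eps) * krc * math.pi) * (2 * vv - vh * u))

eps = 0.01
print(f"Eq. (16): kr_c = {(4 / math.pi) * (1 / (4 * eps)) * math.log(1.5):.1f}")   # ~12.9
print(f"scan:     kr_c = {min((V_shifted(i / 100.0), i / 100.0) for i in range(500, 2500))[1]:.2f}")
```

The brute-force scan lands close to the analytic value, confirming that the terms of order $`ϵ`$ dropped in deriving Eq. (16) shift the minimum only slightly.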
One might worry that the validity of Eq. (15) and Eq. (16) requires unnaturally large values of $`\lambda _h`$ and $`\lambda _v`$. We will check that this is not the case by computing the leading $`1/\lambda `$ correction to the potential. To obtain this correction, we linearize Eq. (11) and Eq. (12) about the large $`\lambda `$ solution. Neglecting terms of order $`ϵ`$, the VEVs are shifted by $`\delta \mathrm{\Phi }(0)`$ $`=`$ $`{\displaystyle \frac{k}{\lambda _hv_h^2}}e^{-(4+ϵ)kr_c\pi }(v_v-v_he^{-ϵkr_c\pi }),`$ (21) $`\delta \mathrm{\Phi }(\pi )`$ $`=`$ $`-{\displaystyle \frac{k}{\lambda _vv_v^2}}(v_v-v_he^{-ϵkr_c\pi }),`$ (22) and thus (neglecting subleading exponentials of $`kr_c\pi `$) $`\delta A`$ $`=`$ $`-{\displaystyle \frac{k}{\lambda _vv_v^2}}e^{-(4+ϵ)kr_c\pi }(v_v-v_he^{-ϵkr_c\pi }),`$ (23) $`\delta B`$ $`=`$ $`e^{-(4+ϵ)kr_c\pi }(v_v-v_he^{-ϵkr_c\pi })\left[{\displaystyle \frac{k}{\lambda _vv_v^2}}+{\displaystyle \frac{k}{\lambda _hv_h^2}}\right].`$ (24) Hence, the correction to the potential is $$\delta V_\mathrm{\Phi }(r_c)=-\frac{4k^2}{\lambda _vv_v^2}e^{-4kr_c\pi }(v_v-v_he^{-ϵkr_c\pi })^2.$$ (25) This has the same form as the leading $`ϵ\to 0`$ behavior of Eq. (15) and therefore does not significantly affect the location of the minimum. Note that the forms of the potentials in Eq. (15) and Eq. (25) are only valid for large $`kr_c`$. For small $`kr_c`$, the potential becomes $$V_\mathrm{\Phi }(r_c)=\frac{(v_v-v_h)^2}{\pi r_c}$$ (26) when terms of order $`ϵ`$ and $`1/\lambda `$ are neglected. The singularity as $`r_c\to 0`$ is removed by finite $`\lambda `$ corrections which become large for small $`r_c`$, and yield $$V_\mathrm{\Phi }(0)=\frac{\lambda _h\lambda _v}{\lambda _h+\lambda _v}\left(v_v^2-v_h^2\right)^2.$$ (27) In the scenario of Randall and Sundrum, the action is the sum of the five-dimensional Einstein-Hilbert action plus world-volume actions for the 3-branes: $$S=\int d^4x\int d\varphi \sqrt{G}[-\mathrm{\Lambda }+2M^3R]-\int d^4x\sqrt{g_h}V_h-\int d^4x\sqrt{g_v}V_v.$$ (28) For Eq. (1) to be a solution of the field equations that follow from Eq. (28), one must arrange $`V_h=-V_v=24M^3k,`$ where $`\mathrm{\Lambda }=-24M^3k^2`$. This amounts to having a vanishing four-dimensional cosmological constant plus an additional fine tuning which causes the $`r_c`$ potential to vanish. However, imagine perturbing the 3-brane tensions by small amounts (it has been noted that given the action in Eq. (28), changes in the relation between the brane tensions and the bulk cosmological constant result in bent brane solutions; it is possible that there are higher dimension induced curvature terms in the brane actions that make it energetically favorable for them to stay flat, in which case, for $`V_h=-V_v=24M^3k,`$ Eq.
(1) remains a solution to the field equations in the presence of such terms): $`V_h\to V_h+\delta V_h,`$ (29) $`V_v\to V_v+\delta V_v.`$ (30) As long as $`|\delta V_h|`$ and $`|\delta V_v|`$ are small compared to $`\mathrm{\Lambda }/k,`$ these shifts in the brane tensions induce the following potential for $`r_c`$ $$V_\mathrm{\Lambda }(r_c)=\delta V_h+\delta V_ve^{-4kr_c\pi }.$$ (31) For $`\delta V_v`$ small, the sum of potentials $`V_\mathrm{\Phi }(r_c)+V_\mathrm{\Lambda }(r_c)`$ has a minimum for large $`kr_c.`$ The effective four-dimensional cosmological constant can be tuned to zero by adjusting the value of $`\delta V_h.`$ For $`\delta V_v<kϵv_v^2`$ the minimum is global, while for $`\delta V_v>kϵv_v^2`$ the minimum is a false vacuum since $`r_c\to \infty `$ is a configuration of lower energy. We have seen that a bulk scalar with a $`\varphi `$-dependent VEV can generate a potential to stabilize $`r_c`$ without having to fine tune the parameters of the model (there is still one fine tuning associated with the four-dimensional cosmological constant, however). This mechanism for stabilizing $`r_c`$ is a reasonably generic effect caused by the presence of a $`\varphi `$-dependent vacuum bulk field configuration. It may be worthwhile to work out other features of the specific toy model presented here, such as the back reaction of the scalar field and shifts of the brane tensions on the spacetime geometry. Also, the use of the large $`\lambda `$ limit was purely for convenience and the finite $`\lambda `$ case could be considered. With $`r_c`$ stabilized, the cosmology associated with this scenario should be standard for temperatures below the weak scale. However, for temperatures above this scale, it will be different (see ref. ) from the usual Friedmann cosmology. The scenario presented in ref. represents an attractive solution to the hierarchy puzzle. However, it also has some features that are less appealing than the Standard Model with minimal particle content. In this scenario, higher dimension operators are suppressed by the weak scale and, unlike in the Standard Model, where the suppression can be by powers of the GUT scale, there is no explanation for the smallness of neutrino masses and the long proton lifetime based simply on dimensional analysis. We thank R. Sundrum for several useful conversations. This work was supported in part by the Department of Energy under grant number DE-FG03-92-ER 40701.
no-problem/9907/cond-mat9907338.html
ar5iv
text
# Coupling strength of charge carriers to spin fluctuations in high-temperature superconductors In conventional superconductors, the most direct evidence of the mechanism responsible for superconductivity comes from tunnelling experiments in which a clear image of the electron-phonon interaction is revealed. The observed structure in the current-voltage characteristics at the phonon energies can be used to measure, through inversion of the Eliashberg equations, the electron-phonon spectral density $`\alpha ^2F(\omega )`$. The coherence length in conventional materials is long and the tunnelling process probes several atomic layers into the bulk of the superconductor. By contrast, in the high $`T_c`$ oxides, particularly for $`c`$-axis tunnelling, the coherence length can be quite short, whereas in an optical experiment or in neutron scattering experiments the bulk of the sample is probed. Therefore, these spectroscopies become the methods of choice for the investigation of mechanisms of high-$`T_c`$ superconductivity. Accurate reflectance measurements in the infrared range and precise polarized neutron scattering data are available for a variety of oxides. In this paper we show that conducting carriers studied by means of infrared spectroscopy reveal strong coupling to a resonance structure in the spectrum of spin fluctuations examined with neutron scattering. The coupling strength inferred from experiment is sufficient to account for high values of $`T_c`$, which signals the prominent role of spin excitations in the superconductivity of oxides. There have been many suggestions as to the mechanism involved in the superconductivity of the oxides. While no consensus has yet emerged, the state itself is widely accepted to have $`d`$-wave symmetry with the gap on the Fermi surface vanishing along the main diagonals of the two-dimensional Brillouin zone. In YBCO there also exist extensive spin polarized inelastic neutron scattering data. These experiments reveal that spin excitations persist on a large energy scale, over several hundred meV, but are mainly confined around the $`(\pi ,\pi )`$-point in momentum space. Also, in the superconducting state, a new peak, often referred to as the $`41`$meV resonance (Fig. 1), emerges out of, or is additional to, the spin excitation background. This peak has received much attention but its origin remains uncertain. In one view, it is caused by a readjustment in the spin excitation spectrum due to the onset of superconductivity. Such a readjustment of spectral weight with a reduction below twice the superconducting gap value $`\mathrm{\Delta }_0`$ is expected on general grounds and is generic to electronic mechanisms. A second view is that it is a resonance in the $`SO(5)`$ unification of magnetism and superconductivity. If the spin excitations are strongly coupled to the charge carriers they should be seen in optical experiments. The normal state optical conductivity $`\sigma (\omega )`$ as a function of frequency $`(\omega )`$ depends on the electron self-energy $`\mathrm{\Sigma }(\omega )`$ which describes the effect of interactions on electron motion.
In an electron-phonon system the electron-phonon interaction spectral density, $`\alpha ^2F(\omega )`$, is approximately (but not exactly) equal to $`W(\omega )`$, a second derivative of the inverse of the normal state optical conductivity $$\alpha ^2F(\omega )\simeq W(\omega )=\frac{1}{2\pi }\frac{d^2}{d\omega ^2}\left[\omega \mathrm{Re}\frac{1}{\sigma (\omega )}\right].$$ (1) In the phonon energy range, the correspondence is remarkably close and determines $`\alpha ^2F(\omega )`$ with good accuracy. At higher energies, additional, largely negative wiggles come into $`W(\omega )`$ which can simply be ignored as they are not part of $`\alpha ^2F(\omega )`$. Note that (1) is dimensionless and so determines the absolute scale of the electron-phonon interaction spectral density as well as its shape in frequency. This fact is important as it allowed Marsiglio et al. to determine the $`\alpha ^2F(\omega )`$ of K<sub>3</sub>C<sub>60</sub> from its optical conductivity by inversion of (1) and to conclude from a solution of the Eliashberg equations that it is large enough to explain the observed value of critical temperature. ($`T_c`$ is related to the mass enhancement factor $`\lambda `$, twice the first inverse moment of $`\alpha ^2F(\omega )`$.) The formalism for the normal state conductivity can also be applied to spin excitations. If we ignore anisotropy as a first approximation, we can proceed by introducing an electron-spin excitation spectral density denoted by $`I^2\chi (\omega )`$ with its scale set by the coupling strength to the charge carriers, $`I^2`$, and $`\chi (\omega )`$ the imaginary part of the spin susceptibility measured in spin polarized inelastic neutron scattering experiments averaged over all momenta in the Brillouin zone. At low temperatures $`\chi (\omega )`$ contains the $`41`$meV resonance observed in the superconducting state. To illustrate our main point we will use here $`\chi (\omega )`$ directly from experimental results on a YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.92</sub> sample with $`T_c=91`$K, near optimum doping and for which results exist at the temperatures $`T=100`$K and $`T=5`$K, both properly calibrated in units of $`\mu _B^2/eV`$ ($`\mu _B`$ is the Bohr magneton) as shown in Fig. 1. We multiply $`\chi (\omega )`$ at $`T=100`$K by a constant coupling $`I^2`$ fixed to get $`T_c=100`$K. The mass enhancement factor $`\lambda `$ (twice the first inverse moment of $`I^2\chi (\omega )`$) obtained is 2.6 and is, to within 10 percent, the same as obtained from the $`W(\omega )`$ derived from the normal state experimental data of Basov et al. in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.95</sub> and from our calculated $`W(\omega )`$ at $`T_c`$. In a preliminary attempt at such an inversion, Collins et al. found a $`\lambda `$ of three, which is greater than our value. Their twinned crystals exhibited a higher optical scattering rate than our untwinned crystal and consequently they obtained about 50% more weight in the main peak of $`I^2\chi (\omega )`$ around $`30`$meV. In order to access lower temperatures, we need to understand how the $`I^2\chi (\omega )`$ structure enters the superconducting state optical conductivity. To this end, we have done a series of calculations of the superconducting state $`\sigma (\omega )`$ for a $`d`$-wave superconductor including inelastic scattering. Details have been presented in the work of Schachinger et al.
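As an illustration of how the inversion in Eq. (1) is applied in practice (a sketch under our own assumptions, not the authors' code), $`W(\omega )`$ can be computed from conductivity data on a uniform frequency grid with finite-difference derivatives. A useful sanity check: for a Drude model with a frequency-independent scattering rate, $`\omega \mathrm{Re}\sigma ^{-1}`$ is linear in $`\omega `$ and $`W`$ vanishes identically, so any structure in $`W`$ reflects frequency dependence of the carrier self-energy:

```python
import numpy as np

def W(omega, sigma):
    """Eq. (1): W(w) = (1/2pi) * d^2/dw^2 [ w * Re(1/sigma(w)) ],
    with derivatives taken by central differences on the grid `omega`."""
    f = omega * np.real(1.0 / sigma)
    return np.gradient(np.gradient(f, omega), omega) / (2.0 * np.pi)

# Drude sanity check (illustrative parameters, not fits to YBCO data):
omega = np.linspace(1e-3, 0.4, 400)       # frequency grid, eV
sigma = 1.0 / (0.02 - 1j * omega)         # constant scattering rate of 0.02 eV
print(np.allclose(W(omega, sigma), 0.0, atol=1e-8))   # True: no boson structure
```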
We used their prescription to calculate the theoretical $`\sigma (\omega )`$ using the neutron data taken for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.92</sub> (at $`5`$K) as $`\chi (\omega )`$ multiplied by the same value of the coupling strength $`I^2`$ previously determined to obtain a $`T_c`$ of $`100`$K from the normal state neutron data. We then inverted this theoretical $`\sigma (\omega )`$ data using Eq. (1). The result of this inversion is compared in the top frame of Fig. 2 (solid line) with our input spectral density $`I^2\chi (\omega )`$ (solid triangles) shifted in energy by the gap $`\mathrm{\Delta }_0=27`$meV of our theoretical calculations. The absolute scale of $`I^2\chi (\omega )`$ in the resonance region is well given by the peak value in the solid curve. This peak is followed by negative wiggles which are not in the original input spectrum because $`W(\omega +\mathrm{\Delta }_0)`$ is not exactly $`I^2\chi (\omega )`$. Nevertheless, such a procedure allows us to see quite directly by spectroscopic means some of the features of $`I^2\chi (\omega )`$ and, more importantly, gives us information on its absolute value at maximum. The long tails in $`I^2\chi (\omega )`$ at higher energy extending well beyond the resonance are not resolved in $`W(\omega )`$ but cause $`\tau ^{-1}(\omega )`$, defined as $`\mathrm{Re}\{\sigma ^{-1}(\omega )\}`$, to rise in a quasilinear fashion at high frequencies in both normal and superconducting state, as is observed. This quasilinear rise was the motivation for the marginal Fermi liquid model which gives $`\tau ^{-1}(\omega )\propto \omega `$ and a constant spectral density for $`\omega >T`$ extending to high energies. If we approximate the normal state experimental $`\tau ^{-1}(\omega )`$ data at $`T_c`$ by a straight line for $`0\le \omega \le 200`$ meV, we get 0.3 as the weight of the spectral density for all frequencies $`\omega >T`$ and a $`\lambda `$ of 2.8 quite consistent with our previous estimates. It is important to realize that, inasmuch as the $`41`$meV resonance is near $`2\mathrm{\Delta }_0`$, the density of quasiparticle states (not shown here) has structure at $`3\mathrm{\Delta }_0`$ in our calculations, a well-established feature of tunnelling data particularly in Bi2212. In the bottom frame of Fig. 2 we show experimental results obtained from the data by Basov et al. on application of the prescription (1) to $`a`$-axis conductivity data on an untwinned single crystal. The $`41`$meV resonance is clearly resolved as a peak at approximately 69 meV in the solid curve (the gap is $`27`$meV). The height of this peak is about 3 and gives an absolute measure of the coupling between charge carriers and spin excitations. On comparison with the top frame of Fig. 2, we see that the coupling to the $`41`$meV resonance is larger in the experiment than the value assumed in the calculations that generated the theoretical results of that frame. This is not surprising since we have used the spin polarized inelastic neutron scattering data set measured on a near optimum $`91`$K sample of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.92</sub> while the neutron results for slightly overdoped YBCO are very different although the $`T_c`$ value is hardly affected. This large dependence of $`\chi (\omega )`$ on the sample can be used to argue against their role in establishing $`T_c`$.
However, the function that controls the conductivity is a complicated weighting of the spin susceptibility involving details of the Fermi surface and points in the Brillouin zone away from $`(\pi ,\pi )`$ as well as the coupling to the charge carriers. Thus, the correspondence between $`I^2\chi (\omega )`$ and $`\chi (\omega )`$ is complicated. What optical experiments reveal is that $`I^2\chi (\omega )`$ is not as strongly dependent on doping as is $`\chi (\omega )`$. In Fig. 2, bottom frame, we present experimental results for $`W(\omega )`$ in underdoped, untwinned YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.6</sub> (dashed line) and compare with the optimally doped case (solid line). It is interesting to note that the peak in the underdoped case is slightly reduced in height reflecting a reduction in $`T_c`$. It is also shifted to lower energies. Some experiments indicate a reduction in gap value with underdoping in YBCO while many experiments show an important increase in Bi2212. Even if the gap is assumed to stay the same at $`27`$meV, the spin polarized neutron resonant frequency is known to decrease with doping. Accounting for this gives almost exactly the downward shift observed in our experimental data of Fig. 2 (bottom frame). Very recently, inelastic neutron scattering data in Bi2212 have been published. They show a resonance peak at $`43`$meV in the superconducting state and establish a similarity with the earlier results in YBCO. We have inverted the optical data of Puchkov et al. in this case and find that coupling at low temperatures to the observed superconducting state spin resonance peak is a general phenomenon in both YBCO and Bi2212. Spin excitations are seen in an appropriately chosen second derivative of the superconducting state optical conductivity, and the strength of their coupling to the charge carriers can be determined from such data. The coupling to the excitations including the $`41`$meV resonance is large enough in YBCO that it can account for superconductivity at that temperature. At $`T_c`$ the spectrum obtained from experiment gives a value of the mass enhancement parameter $`\lambda `$ which is close to the value used in our model calculations to obtain a critical temperature of $`100`$K. Acknowledgments. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institute for Advanced Research (CIAR). Work at UCSD is supported by the NSF “Early Career Development” program. DNB is a Cottrell Scholar of the Research Corporation. We thank J.E. Hirsch, P.B. Hirschfeld, P.B. Littlewood, F. Marsiglio, E.J. Nicol, D. Scalapino, T. Timusk, and I. Vekhter for interest.
no-problem/9907/cs9907030.html
ar5iv
text
# Algorithms for Coloring Quadtrees ## 1 Introduction A quadtree is a data structure formed by starting from a single square and recursively dividing squares into four smaller squares. In this paper we consider problems of coloring quadtree squares so that no two neighboring squares have the same color. This quadtree coloring problem was introduced by Benantar et al., motivated by problems of scheduling parallel computations on quadtree-structured finite element meshes. There are several variants of the problem depending on the details of its definition. Quadtrees may be balanced (i.e. squares sharing an edge may be required to be within a factor of two of each other in size) or unbalanced. Balanced quadtrees are typically used in finite element meshes, but other applications may give rise to unbalanced quadtrees. Further, squares may be defined to be neighboring if they share a portion of an edge (edge adjacency), or if they share any vertex or portion of an edge (vertex adjacency). We can thus distinguish four variants of the problem: balanced with edge adjacency, unbalanced with edge adjacency, balanced with corner adjacency, and unbalanced with corner adjacency. (Other balance conditions may also be used, but we do not concern ourselves with them here.) Since quadtrees are planar, the four-color theorem for planar maps implies that edge-adjacent quadtrees require at most four colors, regardless of balance. Benantar et al. showed that with corner adjacency, balanced quadtrees require at most six colors and unbalanced quadtrees require at most eight colors. Benantar et al. also suggest that four colors may suffice, even for corner adjacency. Here, we tighten the upper bounds above, and show that balanced edge-adjacent quadtrees require only three colors while even unbalanced corner-adjacent quadtrees can be six-colored. We provide simple linear time algorithms that color quadtrees within these bounds, and that four-color edge-adjacent unbalanced quadtrees. We also provide lower bound examples showing that three colors are necessary for balanced edge adjacency, four colors are necessary for unbalanced edge adjacency, and at least five colors are necessary for balanced corner adjacency, refuting the suggested four-color bound of Benantar et al. ## 2 Balanced edge adjacency ###### Theorem 1 Any balanced quadtree can be colored with three colors so that no two squares sharing an edge have the same color. Proof: Imagine constructing the quadtree bottom-up, by starting with a regular grid of squares and then consolidating quadruples of squares of one size to make squares of the next larger size. We color the initial grid by a regular pattern of three colors, depicted in Figure 1(a). Then, when we consolidate four squares of one size to make squares of the next larger size, each larger square has only two colors among its smaller neighbors (Figure 1(b)), forcing it to take the third color. Connected sets of larger squares then end up colored by the same regular pattern used to color the smaller grid, so we can repeat this process of consolidation and coloring within each such set. We note that this process gives each square a color depending only on its size and position within the quadtree, and not depending on what subdivisions have occurred elsewhere in the quadtree. This coloring can be determined easily from the color the square’s parent would be given by the same process, so the coloring algorithm can be performed top-down in linear time.
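For concreteness, such a coloring admits a closed form. The sketch below is our reconstruction: the base pattern $`(x+y)\mathrm{mod}3`$ is an assumption standing in for Figure 1(a) (which may use a different but equivalent pattern); iterating the consolidation step of the proof on this base pattern gives a color that depends only on a square's size and position, exactly as the proof promises:

```python
def color(level: int, x: int, y: int) -> int:
    """Three-coloring of a balanced-quadtree square of side 2**level whose
    position is (x, y), measured in units of its own side length.  Obtained
    by iterating the consolidation step of Theorem 1 starting from the
    assumed base pattern (x + y) mod 3."""
    a = pow(2, level, 3)                # never 0 mod 3, so same-level
    return (a * (x + y + 1) - 1) % 3    # edge-neighbors always differ

# Same-level edge-neighbors get distinct colors because a != 0 (mod 3);
# a balanced neighbor one level up or down can be checked to see only the
# other two colors, which is the two-color property used in the proof.
```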
## 3 Unbalanced edge adjacency By the four-color theorem for planar maps, any unbalanced quadtree can be colored with four colors so that no two squares sharing an edge have the same color. Such a coloring is not difficult to find: ###### Theorem 2 Any unbalanced quadtree can be colored in linear time with four colors so that no two squares sharing an edge have the same color. Proof: We form the desired quadtree by splitting squares one at a time; at each step we split the largest square possible. Thus the four smaller squares formed by each split are, at the time of the split, among the smallest squares in the quadtree. As we perform this splitting process, we maintain a valid four-coloring of the quadtree. When we split a square, we color the four resulting smaller squares. We give the upper right and lower left squares the same color as their parent. Each of the other two squares has at most four neighbors, two of which are the same color. Therefore each has at most three neighboring colors, and at least one color remains available; we give each of these two squares one of the available colors (a code sketch of this procedure appears at the end of Section 4, below). As we now show, four colors may sometimes be necessary. ###### Theorem 3 There is an unbalanced quadtree requiring four colors for all colorings in which no two squares sharing an edge have the same color. Proof: An unbalanced quadtree is depicted in Figure 2, with some of its squares labeled. A simple case argument shows that it has no three-coloring: suppose for a contradiction that we are attempting to color it red, blue, and green. Since squares $`A`$, $`B`$, and $`C`$ are mutually adjacent, we may assume without loss of generality that they are colored red, blue, and green respectively. Since $`D`$ is adjacent to $`A`$ and $`C`$, it must be blue, and since $`E`$ is adjacent to $`B`$ and $`C`$, it must be red. Since $`F`$ is adjacent to $`D`$ and $`E`$, it must be green. But then $`G`$ is adjacent to a red square ($`E`$), a green square ($`F`$), and a blue square ($`B`$), so it cannot be given any of the three colors. Thus, four colors are required to color this quadtree. ## 4 Balanced corner adjacency ###### Theorem 4 There is a balanced quadtree requiring five colors for all colorings in which no two squares sharing an edge or a corner have the same color. Proof: A balanced quadtree is depicted in Figure 3. A simple case argument shows that it has no four-coloring: choose four different colors for the four squares $`C_\mathrm{1}`$, $`C_\mathrm{2}`$, $`C_\mathrm{3}`$, and $`C_\mathrm{4}`$ meeting in the center vertex. Then, choose a color for one of the diagonal neighbors, $`D_\mathrm{1}`$ and $`D_\mathrm{2}`$, of the two small center squares. Now repeatedly apply the following two coloring rules: 1. If some square $`s`$ has three differently colored neighbors, assign the remaining fourth color to $`s`$. 2. If some square $`s`$ has a corner shared by three other squares, each of which is adjacent to squares of some color $`a`$, assign color $`a`$ to $`s`$ since no other choice leaves enough free colors to the other squares sharing the corner. Figures 4 and 5 show the results of a partial application of these rules, for two choices of color for $`D_\mathrm{1}`$. The third possible choice is symmetric with Figure 5. No matter what color is chosen for $`D_\mathrm{1}`$, these rules lead to an inconsistency at $`D_\mathrm{2}`$: rule 2 applies in two different ways, forcing $`D_\mathrm{2}`$ to have two different colors. Therefore the overall quadtree cannot be colored.
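Returning to the incremental four-coloring of Theorem 2, it can be sketched as follows (our illustration; the `neighbors` callback, which must return the squares edge-adjacent to a given square at the time of the split, is assumed rather than specified in the paper):

```python
COLORS = (0, 1, 2, 3)

def split_and_color(parent, children, color, neighbors):
    """Color the four children of `parent` per the proof of Theorem 2.
    children = (NW, NE, SW, SE); `color` maps squares to colors; squares
    are assumed to be split largest-first, as in the proof."""
    nw, ne, sw, se = children
    # The upper-right and lower-left children inherit the parent's color;
    # they meet each other only at a corner, which edge adjacency permits.
    color[ne] = color[sw] = color.pop(parent)
    # Each remaining child has at most four colored neighbors, two of which
    # carry the (identical) parent color, so some color is always free.
    for child in (nw, se):
        used = {color[n] for n in neighbors(child) if n in color}
        color[child] = next(c for c in COLORS if c not in used)
```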
## 5 Unbalanced corner adjacency ###### Theorem 5 Any balanced or unbalanced quadtree can be colored in linear time with six colors so that no two squares sharing an edge or a corner have the same color. Proof: We form the adjacency graph of the squares in the quadtree, and apply the greedy algorithm: remove a minimum degree vertex from the graph, color recursively, then add back the removed vertex and give it a color different from its neighbors. If the maximum degree of a vertex removed at any step is $`d`$, this uses at most $`d+\mathrm{1}`$ colors. We can find the minimum degree vertex by maintaining for each $`i\le \mathrm{5}`$ a doubly linked list of the vertices currently having degree $`i`$; as we show below, at least one list will be nonempty, and it is straightforward to update these lists in constant time per step. Therefore, the overall time will be linear. Our bound of six colors then follows from the following lemma. Let $`Q`$ be a subset of the squares in a (not-necessarily balanced) quadtree. Define a big box to be a square that is not the smallest in $`Q`$, that has at most five neighbors which are also not the smallest in $`Q`$ (Figure 6(a)). Define a hanging box to be a square $`s`$ that is not the smallest in $`Q`$, that has at most three neighbors incident to the upper left corner, and at most two below or to the right; the below-right neighbors must also not be the smallest in $`Q`$ (Figure 6(b)). Define a good chain to be a set of one or more squares all the smallest in $`Q`$, with the following properties (Figure 6(c)): Each square in the chain must have at most one neighbor below it; except for the bottommost square in the chain, this neighbor must be another square in the chain, adjacent at the bottom left corner. The bottommost square in the chain can be adjacent to a square $`s`$ below it and outside the chain, but only if $`s`$ is larger than the squares in the chain. Similarly, each square in the chain must have at most one neighbor to the right of it; except for the topmost square in the chain, this neighbor must be another square in the chain, adjacent at the top right corner. The topmost square in the chain can be adjacent to a square $`s`$ to the right of it and outside the chain, but again only if $`s`$ is larger than the squares in the chain. If the chain has exactly one square in it, it may have neighbors both below and to the right, as long as both neighbors are larger. Finally, define a good configuration to be any one of these three patterns: a big box, a hanging box, or a good chain. Note that all three of these configurations give a degree-five square or squares. ###### Lemma 1 Let $`Q`$ be any subset of the squares of a quadtree. Then $`Q`$ has a good configuration. Proof: We use induction on the number of levels in $`Q`$. Let $`Q^{\prime }`$ be formed by replacing each smallest square in $`Q`$ by its parent. (We think of $`Q`$ as being formed by splitting some squares in $`Q^{\prime }`$ and removing some of the resulting children.) Let $`C`$ be a good configuration in $`Q^{\prime }`$. First, suppose $`C`$ is a big box in $`Q^{\prime }`$. Then it is also a big box in $`Q`$ since none of its neighbors can be subdivided. Next, suppose $`C`$ is a hanging box in $`Q^{\prime }`$. If none of its neighbors is subdivided to form $`Q`$, it is a big box in $`Q`$. If one of its neighbors is subdivided and has a child neighboring $`C`$ and not incident to the upper left corner of $`C`$, that child is a (singleton) good chain (its only below-right adjacency is to $`C`$ itself).
If $`C`$’s neighbors are subdivided but the only children neighboring $`C`$ are on the corner, $`C`$ remains a hanging box in $`Q`$. Finally, suppose $`C`$ is a good chain in $`Q^{\prime }`$. If some square of $`C`$ is subdivided, and its lower right child is in $`Q`$, that child is a (singleton) good chain in $`Q`$. If not, but some squares are subdivided and have upper right or lower left children, any maximal contiguous sequence of such children is a good chain in $`Q`$. If neither of these two cases holds, but some squares are subdivided and have only their upper left children in $`Q`$, then some sequence of such children and of lower right children of neighbors of $`C`$ forms a good chain in $`Q`$. If no squares in $`C`$ are subdivided and none of their upper or left neighbors are subdivided, each square in the chain becomes a big box in $`Q`$. If no squares in $`C`$ are subdivided, some upper or left neighbor is subdivided, and its lower right child is in $`Q`$, that child is a singleton good chain. In the remaining case, any subdivided neighbor has neighboring children only on the upper left corners of squares in $`C`$, and all squares in $`C`$ become hanging boxes in $`Q`$. By the lemma above, any graph formed by a subset of the quadtree squares has a vertex of degree at most five, so the greedy algorithm uses at most six colors. This concludes the proof of Theorem 5. ## 6 Conclusions We have shown that balanced edge-adjacent quadtrees require three colors, and unbalanced edge-adjacent quadtrees require four colors. Corner-adjacent quadtrees may require either five or six colors. It remains to close this gap in the corner-adjacent case and to determine whether the balance condition makes a difference in this case.
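As a final aside (ours), the greedy coloring at the heart of Theorem 5 is easy to state for an arbitrary adjacency graph. The sketch below scans for a minimum-degree vertex directly rather than maintaining the per-degree doubly linked lists described in the proof, so it is quadratic rather than linear time, but the coloring bound is the same:

```python
def greedy_color(adj, num_colors=6):
    """Color a graph each of whose subgraphs has a vertex of degree <= 5
    (Lemma 1), as in the proof of Theorem 5.  `adj` maps each vertex to the
    set of its neighbors; returns a proper coloring as a dict."""
    # Peel off a minimum-degree vertex at each step...
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    # ...then color in reverse removal order: when v is colored, only the
    # <= 5 neighbors it had at removal time are already colored, so one of
    # the six colors is always free.
    color = {}
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(num_colors) if c not in used)
    return color
```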
no-problem/9907/astro-ph9907226.html
ar5iv
text
# TRIPLETS OF GALAXIES: Some Dynamical Aspects ## 1. Introduction In celestial mechanics the 3-body problem has a long and rich history, while the literature on 3-galaxy systems is rather scarce (see review by Valtonen & Mikkola 1991). Most works to date have addressed the 3-galaxy problem using a point-like approach, although some explicit-physics simulations have been performed to simulate dynamical friction effects and merging processes (Zheng et al. 1993). These studies have provided important knowledge on the general behaviour of triplets of galaxies that are observed in the sky (Karachentsev 1999), and some accordance with observations has been obtained. However, galaxies are not point-like particles, but rather consist of a large number of stars that in turn can be approximated as point-like particles. Qualitative and quantitative differences result in the dynamics when the 3-galaxy problem is addressed self-consistently; i.e. when galaxies are able to redistribute energy and angular momentum among their stars. Since the use of non-self-consistent galaxies necessarily casts some doubt on earlier results, we address here the 3-galaxy problem self-consistently and compare some results with observations. A full report on this work is in preparation. ## 2. Numerical Experiments Galaxies were modeled after a Plummer sphere with $`N=3000`$ particles each. No explicit difference was made as to particles being luminous or dark. The units used here are such that $`G=M=R_0=1`$, where $`M`$ is the mass of each galaxy and $`R_0`$ its scale-length. To transform $`N`$-body results (‘n’) to astronomical ones (‘a’) we need to choose a real galaxy. We use here a galaxy similar to ours with $`M\approx 5.5\times 10^{11}\mathrm{M}_{\odot }`$ and $`R_{\mathrm{halo}}\approx 135`$ kpc (Kuijken & Dubinski 1995, Model B). The following transformations follow: $$\frac{r_\mathrm{a}}{r_\mathrm{n}}\approx 13.5\mathrm{kpc},\frac{m_\mathrm{a}}{m_\mathrm{n}}\approx 5.5\times 10^{11}\mathrm{M}_{\odot },\frac{t_\mathrm{a}}{t_\mathrm{n}}\approx 32\mathrm{Myr},\frac{v_\mathrm{a}}{v_\mathrm{n}}\approx 420\mathrm{km}/\mathrm{s}.$$ (1) The initial positions of the centre-of-mass of galaxies were sampled from a homogeneous spherical distribution of radius $`R_{\mathrm{max}}`$. The evolution of the triplet depends, obviously, on the size of this sphere. This radius is taken here as an approximate turn-around radius of a density perturbation with the mass of a triplet; galaxies are assumed to be already formed. We consider an initial virial ratio of $`2T/|W|=1/4`$ for this perturbation with velocity dispersion $`\sigma =V_0/2\sqrt{3},`$ where $`V_0=\sqrt{3GM_\mathrm{t}/5R_{\mathrm{max}}}`$ and $`M_\mathrm{t}=3M`$ (e.g. Barnes 1985). Numerical simulations of galaxy formation tend to indicate that the dark matter background at turn-around had $`\sigma \approx 20`$ km/s (Lake & Carlberg 1988). We take this as a fiducial value for $`\sigma `$ in triplets at maximum expansion. Hence for three Galaxy-like spirals we obtain $`R_{\mathrm{max}}\approx 1`$ Mpc. No common dark matter halo has been used in the present simulations. Note that if we increase the mass of the triplet by introducing a common dark matter halo, and retain cold initial conditions, this will increase $`R_{\mathrm{max}}`$ proportionally. The collapse time is taken here as $`\tau _{\mathrm{coll}}=\pi \sqrt{R_{\mathrm{max}}^3/2GM_\mathrm{t}}\approx 794`$ ($`\approx 20`$ Gyr). These IC’s are more appropriate for triplets turning around at this epoch. We also considered virialized initial conditions (IC’s), obtained by assuming galaxies to be point-masses.
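For convenience, the unit conversions of Eq. (1) can be wrapped in a small helper (our sketch; the factors are exactly the ones quoted above):

```python
# Conversion factors from Eq. (1) for the Galaxy-like model (G = M = R_0 = 1).
R_UNIT = 13.5       # kpc per N-body length unit
M_UNIT = 5.5e11     # M_sun per N-body mass unit
T_UNIT = 32.0       # Myr per N-body time unit
V_UNIT = 420.0      # km/s per N-body velocity unit

def to_astro(r=0.0, m=0.0, t=0.0, v=0.0):
    """Convert N-body (r, m, t, v) to (kpc, M_sun, Myr, km/s)."""
    return r * R_UNIT, m * M_UNIT, t * T_UNIT, v * V_UNIT

# e.g. R_max = 1 Mpc corresponds to ~74 N-body length units:
print(1000.0 / R_UNIT)   # ~74.1
```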
We made 30 simulations and estimated the 1-D velocity dispersion $`\sigma `$, mean harmonic radius $`R_\mathrm{H}`$, crossing time $`t_\mathrm{c}`$, and virial mass $`M_\mathrm{v}`$ (Nolthenius & White 1987), along three orthogonal projections; i.e. 90 ‘triplets’ are simulated for each of the IC’s considered. Simulations lasted $`\approx 20`$ Gyr from turn-around, and energy was conserved for all the runs to $`<1.5`$%. ## 3. Results In Figure 1 we show two typical outcomes of the simulations performed. Qualitatively, the predominant outcomes are situations where binaries form first, some of them leading to a rather quick triple merger (bottom) and others taking a much longer time to even form a binary merger (top). This resembles the instability found in the 3-body problem, where a binary forms first. However, when self-gravity is considered, effects such as a slingshot are not easy to reproduce due to the galaxies’ capability to absorb orbital energy. In Table 1 we present quantitative results for both types of IC’s, excluding mergers. Numbers are given in $`N`$-body units. For each time, the first row gives average values while the second gives medians. Times correspond respectively to $`\approx `$ 0.5, 5, 10, 13, 15, and 20 Gyr. As expected, initially virialized triplets evolve much slower than cold ones. Collapsing triplets yield a significant number of mergers in the time interval $`(10\text{–}13)`$ Gyr; i.e. $`\tau _{\mathrm{coll}}/2`$. The average and median mass values never overestimate the true mass of the system in collapsing triplets. The median underestimates the true mass by a factor of $`<3`$ during the wide time interval of $`(5\text{–}15)`$ Gyr for collapsing triplets; the agreement is much better for initially virialized systems. Some triplets provided an overestimate in mass along a particular line-of-sight. Results for the median $`R_\mathrm{H}`$ in collapsing triplets do not agree well with those of observations of present day compact triplets ($`\approx 65`$ kpc). This also happens for virialized systems, which have $`R_\mathrm{H}>400`$ kpc. On the other hand, the velocity dispersions are always $`\sigma <50`$ km/s for both types of IC’s, a value which is about half of the observed median ($`\approx 100`$ km/s). Nonetheless, about 10% of the simulated triplets have $`\sigma >100`$ km/s at $`t\approx 10`$ Gyr. Using a larger mass and halo size galaxy does not help much in increasing the astronomical median-$`\sigma `$ since velocities scale as $`v\propto \sqrt{M/R}`$. We recall that when comparing to observational data the assumption that Karachentsev’s catalog of compact triplets forms a homogeneous and well defined sample is implicit, but this is not so since e.g. it includes galaxies of different luminosities ($`\sim `$ mass) and morphological type. This needs to be considered in future studies to make more consistent comparisons with observations. On the other hand, there are some indications that triplets lie in the periphery of larger systems of galaxies. In a large-scale structure picture, triplets were probably not isolated from tidal perturbations before arriving at their present state. Hence it is of interest to estimate probable tidal effects on the dynamics of triplets in a Hubble time, even if only to first order. In Fig. 2 we present only the results of the $`\sigma `$-distribution under a tidal perturbation from a far-away ‘poor cluster’; the triplet was retained at the same initial distance to study the effects of a tidal force.
The agreement of the median and average $`\sigma `$ with observations is better when triplets are not considered ‘island universes’; e.g., $`\sigma \approx 100`$ km/s at $`t\approx 10`$ Gyr when the perturber in Fig. 2 is at 3 Mpc. Although this value depends obviously on the perturber mass and distance, the results manifest the importance that external fields can have for the dynamical properties of triplets otherwise assumed to be isolated. They also suggest that the environment could have introduced a ‘selection effect’ in allowing some initially wide triplets ($`R_\mathrm{H}\approx 630`$ kpc) to become compact, and disrupting others, in a Hubble time. ## 4. Final Comments Simulations of isolated triplets show, e.g., that on average rather low velocity dispersions are obtained when compared to observations. In this scenario an underestimate of mass will occur when using the bulk velocity and centre of galaxies, probably by a factor of $`\approx 3`$. This underestimate can be larger for particular triplets if strong signs of interactions are present and if galaxies have large dark halos; this situation can be similar for compact groups. Wide triplets could have evolved into compact triplets in a Hubble time, although their high $`\sigma `$ remains to be explained. Tidal perturbations on the evolution of a triplet, however, might have important effects on their dynamics over a Hubble time, and consequently on their mass estimation. We suggest that triplet dynamics is closely tied to cosmology, and that it is not straightforward to untangle internal effects from external ones. Observational studies that would search e.g. for possible correlations between $`\sigma `$ and the density of the triplets’ environment could shed light on these issues. ### Acknowledgments. The author is grateful to the Spanish Ministry of Foreign Affairs for financial support through its MUTIS Program. ## References Barnes, J. 1985, MNRAS, 215, 517 Karachentsev, I.D. 1999, these proceedings Kuijken, K., & Dubinski, J. 1995, MNRAS, 277, 1341 Lake, G., & Carlberg, R.G. 1988, AJ, 96, 1587 Nolthenius, R., & White, S.D.M. 1987, MNRAS, 235, 505 Valtonen, M.J., & Mikkola, S. 1991, ARAA, 29, 9 Zheng, J.Q., Valtonen, M.J., & Chernin, A.D. 1993, AJ, 105, 2047
no-problem/9907/astro-ph9907013.html
ar5iv
text
# A 4% geometric distance to the galaxy NGC4258 from orbital motions in a nuclear gas disk The accurate measurement of extragalactic distances is a central challenge of modern astronomy, being required for any realistic description of the age, geometry and fate of the Universe. The measurement of relative extragalactic distances has become fairly routine, but estimates of absolute distances are rare$`^\text{1}`$. In the vicinity of the Sun, direct geometric techniques for obtaining absolute distances, such as orbital parallax, are feasible, but heretofore such techniques have been difficult to apply to other galaxies. As a result, uncertainties in the expansion rate and age of the Universe are dominated by uncertainties in the absolute calibration of the extragalactic distance ladder$`^\text{2}`$. Here we report a geometric distance to the galaxy NGC4258, which we infer from the direct measurement of orbital motions in a disk of gas surrounding the nucleus of this galaxy. The distance so determined - $`7.2\pm 0.3`$ Mpc - is the most precise absolute extragalactic distance yet measured, and is likely to play an important role in future distance-scale calibrations. NGC4258 is one of 22 nearby AGN known to possess nuclear water masers (the microwave equivalent of lasers). The enormous surface brightnesses ($`10^{12}`$ K), small sizes ($`10^{14}`$ cm), and narrow linewidths (a few km s<sup>-1</sup>) of these masers make them ideal probes of the structure and dynamics of the molecular gas in which they reside. VLBI observations of the NGC4258 maser have provided the first direct images of an AGN accretion disk, revealing a thin, subparsec-scale, differentially rotating warped disk in the nucleus of this relatively weak Seyfert 2 AGN $`^{\text{3},\text{4},\text{5},\text{6}}`$. Two distinct populations of masers exist in NGC4258. The high-velocity masers amplify their own spontaneous emission and are offset $`\pm 1000`$ km s<sup>-1</sup> and 4.7-8.1 mas (0.16-0.28 pc for a distance of 7.2 Mpc) on either side of the disk center. The beautiful Keplerian rotation curve traced by these masers requires a central binding mass ($`M`$), presumably in the form of a supermassive black hole, of $`(3.9\pm 0.1)\times 10^7(D/7.2\text{ Mpc})(\mathrm{sin}i_s/\mathrm{sin}82^{\circ })^{-2}`$ solar masses ($`\mathrm{M}_{\odot }`$) where $`D`$ is the distance to NGC4258 and $`i_s`$ is the disk inclination. Because the high-velocity masers lie in the plane of the sky, they should to first order remain stationary as the disk rotates. The systemic masers, on the other hand, are positioned along the near edge of the disk and amplify the background jet emission evident in Figure 1 $`^{\text{12}}`$. A fundamental and as-yet untested prediction of the maser disk model is that the systemic masers should drift with respect to a fixed point on the sky by a few tens of $`\mu `$as yr<sup>-1</sup> as the disk rotates at $`\approx 1000`$ km s<sup>-1</sup>. NGC4258 was observed at 4–8 month intervals between 1994 and 1997 with the VLBA of the NRAO in order to search for the expected motions. Since the maser emission is essentially continuous across the envelope of systemic maser emission, we are forced to rely on structure in the systemic spectrum to isolate and track individual maser features.
The assumption that distinct peaks in the spectrum correspond to individual clumps of gas is justified by the successful tracking of maser accelerations in single-dish monitoring programs, and leads to an identification of 20–35 potentially trackable systemic masers in each epoch. These features are too densely packed in position and velocity to be individually tracked with any reliability. However, the resolution and sampling are sufficient to detect bulk rotation in the system, and we have developed a Bayesian pattern-matching analysis tool to track inter-epoch shifts in the positions and velocities of the systemic masers, as a whole$`^\text{7}`$. The analysis assumes the systemic masers are randomly and narrowly scattered about an average radius, $`r_s`$, of 3.9 mas, as indicated by the global disk-fitting analysis (see Figure 1). The precise magnitude of the radial scatter is set so as to maximize the overall likelihood of the tracking analysis. In order to evaluate the likelihood of a given bulk proper motion ($`\dot{\theta }_x`$) or acceleration ($`\dot{v}_{los}`$), the pattern-matching procedure must compute the likelihood that each individual maser has in fact moved by $`\dot{\theta }_x`$ or $`\dot{v}_{los}`$. This leads to robust estimates for the “trackability” of each maser. Figure 2 shows the best-fitting acceleration and proper motion tracks for the most reliably trackable systemic masers. Figure 3 shows the overall probability density functions (PDFs) for $`\dot{v}_{los}`$ and $`\dot{\theta }_x`$. The PDFs indicate a highly significant detection of bulk motion in the disk and from them we conclude $`\dot{v}_{los}=9.3\pm 0.3`$ km s<sup>-1</sup> yr<sup>-1</sup> and $`\dot{\theta }_x=31.5\pm 1`$$`\mu `$as yr<sup>-1</sup>, where these and all subsequent uncertainties are $`1\sigma `$ values. The latter result is consistent with expectations and is the first detection of transverse motion in the NGC4258 accretion disk. We note that the pattern-matching algorithm has been verified on a number of simulated datasets with feature densities and spectral and spatial resolutions comparable to those of the true data. In order to convert the maser proper motions and accelerations into a geometric distance, we express $`\dot{\theta }_x`$ and $`\dot{v}_{los}`$ in terms of the distance and four disk parameters: $$\dot{\theta }_x=31.5\left[\frac{D_6}{7.2}\right]^{-1}\left[\frac{\mathrm{\Omega }_s}{282}\right]^{1/3}\left[\frac{\mathcal{M}_{7.2}}{3.9}\right]^{1/3}\left[\frac{\mathrm{sin}i_s}{\mathrm{sin}82.3^{\circ }}\right]^{-1}\left[\frac{\mathrm{cos}\alpha _s}{\mathrm{cos}80^{\circ }}\right]\text{ }\mu \text{as\hspace{0.17em}yr}^{-1},$$ (1) and $$\dot{v}_{los}=9.2\left[\frac{D_6}{7.2}\right]^{-1}\left[\frac{\mathrm{\Omega }_s}{282}\right]^{4/3}\left[\frac{\mathcal{M}_{7.2}}{3.9}\right]^{1/3}\left[\frac{\mathrm{sin}i_s}{\mathrm{sin}82.3^{\circ }}\right]^{-1}\text{ km\hspace{0.17em}s}^{-1}\text{ yr}^{-1}.$$ (2) Here $`D_6`$ is the distance in Mpc, $`\alpha _s`$ is the disk position angle (East of North) at $`r_s`$, and $`\mathcal{M}_{7.2}`$ is $`M/D\mathrm{sin}^2i_s`$ as derived from the high-velocity rotation curve and evaluated at $`D=7.2`$ Mpc and $`i_s=82.3^{\circ }`$ (in units of $`10^7\mathrm{M}_{\odot }`$). $`\mathrm{\Omega }_s\equiv (G\mathcal{M}_{7.2}/r_s^3)^{1/2}`$ is the projected disk angular velocity at $`r_s`$ as determined by the slope of the systemic position-velocity gradient (in units of km s<sup>-1</sup> mas<sup>-1</sup>; see Figure 1).
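As a back-of-the-envelope check (ours, not the authors' analysis), the two distance estimates follow directly from the measured quantities: the proper-motion distance is roughly $`D=v/\dot{\theta }_x`$, while the acceleration distance compares the physical radius $`r_s=v^2/\dot{v}_{los}`$ with the angular radius 3.9 mas, taking $`v\approx \mathrm{\Omega }_s\times 3.9`$ mas. Projection factors such as $`\mathrm{sin}i_s`$ and $`\mathrm{cos}\alpha _s`$ are ignored here, so agreement at the few-percent level is all that should be expected:

```python
import math

MAS = math.radians(1.0 / 3.6e6)   # one milliarcsecond in radians
YR = 3.156e7                      # seconds per year
KM_PER_MPC = 3.086e19

v = 282.0 * 3.9                   # km/s: Omega_s * theta_s, ignoring sin(i_s)

mu = 31.5e-3 * MAS / YR           # 31.5 uas/yr in rad/s
D_pm = v / mu / KM_PER_MPC        # proper-motion distance

vdot = 9.3 / YR                   # km/s^2
D_acc = (v ** 2 / vdot) / (3.9 * MAS) / KM_PER_MPC   # acceleration distance

print(f"D(proper motion) ~ {D_pm:.1f} Mpc, D(acceleration) ~ {D_acc:.1f} Mpc")
# Both land near 7 Mpc, consistent with the full fit of 7.2 +/- 0.3 Mpc.
```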
A priori estimates for each of these disk parameters, derived directly from the positions and velocities of the masers, are included in the denominators of each of the terms of equations 1 and 2. When the a priori disk parameter estimates are used, the proper motions and accelerations yield independent distance estimates, through equations 1 and 2, of $`7.2\pm 0.2`$ Mpc and $`7.1\pm 0.2`$ Mpc, respectively. The quoted uncertainties are effectively the uncertainties in $`\dot{\theta }_x`$ and $`\dot{v}_{los}`$ recast in terms of distance, and as such are purely statistical in nature. The excellent agreement between the proper motion and acceleration distances for a priori values of the disk parameters is an impressive confirmation of the a priori disk model itself, and establishes the NGC4258 Keplerian disk as a fully self-consistent, dynamical model incorporating the positions, LOS velocities, proper motions, and accelerations of all the NGC4258 masers. We note that a preliminary geometric distance estimate, based on accelerations alone and a single VLBA epoch, yielded $`D=6.4\pm 0.9`$ Mpc $`^\text{4}`$. Hence, the old and new distances are consistent at the $`1\sigma `$ level. The discrepancy between the estimates is ultimately explained by the fact that the original distance estimate assumed an average systemic maser radius about 10% larger than the value indicated by the newer data and the more sophisticated disk models. Uncertainties in the disk parameters contribute to systematic uncertainties in the distance estimate. We derive a composite geometric distance estimate using equations 1 and 2, and from the $`\dot{v}_{los}`$ and $`\dot{\theta }_x`$ PDFs of Figure 3 and the a priori estimates for the disk parameters and their associated uncertainties. The result is a geometric distance estimate of $`7.2\pm 0.3`$ Mpc, where the quoted uncertainty now incorporates all statistical terms associated with tracking motions in the disk as well as systematics arising from disk parameter uncertainties. A $`\sim 5`$% uncertainty in $`\mathrm{\Omega }_s`$ is the dominant contributor to the latter, producing fractional uncertainties in the acceleration and proper motion distances of 6.7% and 1.7%, respectively. In total, disk-model systematics contribute an additional 0.26 Mpc (in quadrature) to the distance error budget derived from purely statistical considerations. The NGC4258 geometric distance is the most precise, absolute extragalactic distance measured to date and, being independent of all other distance indicators, it represents an important new calibration point for the extragalactic distance ladder. The geometric distance is consistent with pre-existing H-band Tully-Fisher ($`7.1\pm 1.1`$ Mpc $`^{\text{16}}`$), blue Tully-Fisher ($`7.9\pm 1.8`$ Mpc $`^{\text{17}}`$), and luminosity class ($`8.4\pm 2.2`$ Mpc $`^{\text{18}}`$) distance estimates, all of which rely on the Cepheid Period-Luminosity relationship for absolute calibration. Most importantly, efforts are underway to determine directly a Cepheid distance to NGC4258 from Hubble Space Telescope (HST) observations of the galaxy. The NGC4258 geometric distance is presently the most precise means for directly calibrating distances obtained by HST Cepheid observations. We emphasize that the above error budget considers statistical and systematic uncertainties within the framework of a thin Keplerian disk in which the masers trace the orbital motions of discrete clumps of gas.
As always, it is difficult to estimate any additional systematic uncertainties that might exist as a result of imperfections in the model itself. The potential impact of eccentricity on the distance error budget depends on the assumed distribution of accretion disk eccentricities in AGN in general. Viscous dissipation within such disks is expected to circularize orbits on relatively short timescales, and detailed modeling of the optical emission lines from a large sample of AGN suggests eccentricities less than about 0.5 $`^{\text{19}}`$. The eccentricity of the NGC4258 disk is further constrained by the symmetry of the maser emission about the disk center in both position and velocity$`^{\text{12}}`$. These additional constraints lead to an expected eccentricity of zero with a probable error of 0.1, a negligible bias in the distance estimate, and a systematic uncertainty in the distance of 0.4 Mpc. Hence our distance estimate, including this uncertainty in the eccentricity, is $`7.2\pm 0.5`$ Mpc. Finally, we cannot unambiguously rule out contamination of the maser dynamics by some non-kinematical contribution, such as traveling density waves within the disk. However, given the complexity (30 systemic masers across $`8^{\circ }`$ of disk azimuth) and the stability ($`\sim 70`$% of the features persisting) of the pattern we have tracked, orbital motion is certainly the simplest explanation. REFERENCES 1. Jacoby, G. H. et al. A critical review of selected techniques for measuring extragalactic distances. PASP 104, 599–662 (1992). 2. Madore, B. F. et al. The Hubble Space Telescope Key Project on the Extragalactic Distance Scale. XV. A Cepheid distance to the Fornax Cluster and its implications. Astrophys. J. 515, 29–41 (1999). 3. Watson, W. D. & Wallin, B. K. Evidence from masers for a rapidly rotating disk at the nucleus of NGC4258. Astrophys. J. 432, L35–L38 (1994). 4. Miyoshi, M. et al. Evidence for a black hole from high rotation velocities in a sub-parsec region of NGC4258. Nature 373, 127–129 (1995). 5. Greenhill, L. G., Jiang, R. D., Moran, J. M., Reid, M. J., Lo, K. Y., & Claussen, M. J. Detection of a subparsec diameter disk in the nucleus of NGC4258. Astrophys. J. 440, 619–627 (1995). 6. Herrnstein, J. R., Greenhill, L. J., & Moran, J. M. The Warp in the subparsec molecular disk in NGC4258 as an explanation for persistent asymmetries in the maser spectrum Astrophys. J. 468, L17–L20 (1996). 7. Herrnstein, J. R. PhD Dissertation, Harvard University, 1997. 8. Nakai, N., Inoue, M., Miyazawa, K., Miyoshi, M., & Hall, P. Search for extremely high velocity H<sub>2</sub>O maser emission in Seyfert galaxies. Pub. Astron. Soc. Japan 47, 771–799 (1995). 9. Greenhill, L. J., Henkel, C., Becker, R., Wilson, T. L., & Wouterloot, J. G. A. Centripetal acceleration within the subparsec nuclear maser disk of NGC4258. Astron. & Astrophys. 304, 21–33 (1995). 10. Bragg, A. E., Greenhill, L. J., Moran, J. M., & Henkel, C. Acceleration-derived positions of the high-velocity maser features in NGC4258. BAAS. 30, 1254 (1998). 11. Moran, J. M. et al. Probing active galactic nuclei with H<sub>2</sub>O megamasers. Proc. Natl. Acad. Sci. USA 92, 11427–11433 (1995). 12. Herrnstein, J. R. et al. Discovery of a subparsec jet 4000 Schwarzscild radii from the central engine of NGC4258. Astrophys. J. 475, L17–L21 (1997). 13. Herrnstein, J. R. et al. VLBA continuum observations of NGC4258: Constraints on an advection-dominated accretion flow. Astrophys. J. 497, L69–L73 (1998). 14. Cecil, G., Wilson, A. S., & DePree, C.
Hot shocked gas along the helical jets of NGC4258. Astrophys. J. 440, 181–190 (1995).
15. Zensus, J. A., Diamond, P. J., & Napier, P. J. Very Long Baseline Interferometry and the VLBA (Astr. Soc. Pacific, San Francisco, 1995).
16. Aaronson, M. et al. A catalog of infrared magnitudes and HI velocity widths for nearby galaxies. Astrophys. J. Suppl. 50, 241–262 (1982).
17. Richter, O.-G. & Huchtmeier, W. K. Is there a unique relation between absolute (blue) luminosity and total 21 cm linewidth of disk galaxies? Astron. & Astrophys. 132, 253–264 (1984).
18. Rowan-Robinson, M. The Cosmological Distance Ladder (W. H. Freeman and Co., 1985).
19. Eracleous, M., Livio, M., Halpern, J. P., & Storchi-Bergmann, T. Elliptical accretion disks in active galactic nuclei. Astrophys. J. 438, 610–622 (1995).

The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

CAPTIONS

Figure 1. – The upper panel shows the best-fitting warped disk model superposed on actual maser positions as measured by the VLBA of the NRAO, with top as North. The filled square marks the center of the disk, as determined from a global disk-fitting analysis$`^\text{7}`$. The filled triangles show the positions of the high-velocity masers, so called because they occur at frequencies corresponding to Doppler shifts of $`\pm 1000`$ km s<sup>-1</sup> with respect to the galaxy systemic velocity of $`470`$ km s<sup>-1</sup>. This is apparent in the VLBA total power spectrum displayed in the lower panel. The inset shows line-of-sight (LOS) velocity versus impact parameter for the best-fitting Keplerian disk, with the maser data superposed. The high-velocity masers trace a Keplerian curve to better than 1%. Monitoring of these features indicates that they drift by less than $`1`$ km s<sup>-1</sup> yr<sup>-1</sup> $`^{\text{8},\text{9},\text{10}}`$ and requires that they lie within $`5`$–$`10^{\circ }`$ of the midline, the intersection of the disk with the plane of the sky. The LOS velocities of the systemic masers are centered about the systemic velocity of the galaxy. The positions (filled circles of upper panel) and LOS velocities of these masers imply they subtend about $`8^{\circ }`$ of disk azimuth centered about the LOS to the central mass, and the observed 8–10 km s<sup>-1</sup> yr<sup>-1</sup> acceleration of these features$`^{\text{8},\text{9}}`$ unambiguously places them along the near edge of the disk. The approximately linear relationship between systemic maser impact parameter and LOS velocity demonstrates that the disk is very thin$`^{\text{11}}`$ (aspect ratio $`\sim 0.2\%`$) and that these masers are confined to a narrow annulus in the disk. The magnitude of the velocity gradient ($`\mathrm{\Omega }_s`$) implies a mean systemic radius, $`r_s`$, of 3.9 mas which, together with the positions of the high-velocity masers, constrains the disk inclination, $`i_s`$, to be $`82\pm 1^{\circ }`$ ($`90^{\circ }`$ for edge-on). Finally, VLBA continuum images$`^{\text{12},\text{13}}`$ are included as contours in the upper panel. The 22-GHz radio emission traces a sub-parsec-scale jet elongated along the rotation axis of the disk and well-aligned with a luminous, kpc-scale jet$`^{\text{14}}`$.

Figure 2. – Line-of-sight (LOS) velocities (a) and right ascensions (b) at the peaks of the systemic maser spectrum for each of the five VLBA epochs. Only those features deemed reliably trackable by the pattern-matching analysis are included.
All epochs included the Very Large Array, phased to act as a single large aperture. In addition, the final epoch utilized the Effelsberg 100-meter telescope. In each epoch the VLBA correlator provided cross-power spectra with channel spacings of 0.22 km s<sup>-1</sup>. The NGC4258 masers are characterized by linewidths of $`1`$–$`3`$ km s<sup>-1</sup>. All spectra were phase-referenced, via self-calibration, to a strong systemic maser to stabilize the interferometer against atmospheric pathlength fluctuations, and synthesis images were constructed for each spectral channel using conventional restoration techniques$`^{\text{15}}`$. Error bars have been omitted in order to avoid clutter. The uncertainties in individual LOS velocity estimates are dominated by line blending in the spectrum. There is an average scatter of about 0.4 km s<sup>-1</sup> about the best-fitting acceleration tracks. The masers are spatially unresolved and relative positions have been measured to a precision of $`0.5\mathrm{\Theta }_B/\text{SNR}`$, where SNR is the signal-to-noise ratio and $`\mathrm{\Theta }_B`$ represents the $`0.6\times 0.9`$ mas, approximately North-South synthesized beam. Relative positional accuracies typically ranged from 0.5 to 10 $`\mu `$as. All positions are relative to a fixed point along the systemic position-velocity gradient. Our ability to precisely align this structure amongst all epochs suggests that it does indeed remain fixed in time. The best-fitting acceleration and proper motion tracks (solid lines) indicate average drifts of $`9.3`$ km s<sup>-1</sup> yr<sup>-1</sup> and $`31.5`$ $`\mu `$as yr<sup>-1</sup> in velocity and position, respectively. The scatter in the individual proper motions and accelerations about these average values is consistent with a 0.2 mas scatter in the radii of the systemic masers about $`r_s`$.

Figure 3. – Systemic maser bulk proper motion ($`\dot{\theta }_x`$; a) and acceleration ($`\dot{v}_{los}`$; b) probability density functions (PDFs) as derived using the Bayesian pattern-matching analysis described in the text. The curves were generated using all the maser features in each epoch. The uncertainties in $`\dot{\theta }_x`$ and $`\dot{v}_{los}`$ as derived from these PDFs include measurement uncertainties in the maser positions and velocities, but they are dominated by ambiguities in tracking specific maser features.
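The numbers quoted above and in the main text are enough for a back-of-envelope cross-check of the geometric distance and its error budget. The following rough sketch (Python) assumes circular, edge-on orbits and a simple inverse-variance combination of the two distance estimates; it ignores the inclination, warp, and radial-spread corrections of the full disk model, so only approximate agreement should be expected:

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)  # milliarcseconds -> radians
YR_TO_S = 3.156e7                           # seconds per year
MPC_TO_M = 3.086e22                         # metres per megaparsec

# Quantities quoted in the text and figure captions.
theta_dot = 31.5e-3 * MAS_TO_RAD / YR_TO_S  # mean proper motion (rad/s)
accel = 9.3e3 / YR_TO_S                     # mean LOS acceleration (m/s^2)
theta_s = 3.9 * MAS_TO_RAD                  # mean systemic radius (rad)

# For circular orbits, v = D*theta_dot and accel = v^2/(theta_s*D);
# eliminating v gives a distance estimate independent of the rotation speed.
D = accel * theta_s / theta_dot**2
v = D * theta_dot
print(f"D ~ {D / MPC_TO_M:.1f} Mpc")          # ~7.7 Mpc (full model: 7.2)
print(f"orbital speed ~ {v / 1e3:.0f} km/s")  # ~1150 km/s, same ballpark as
                                              # the +-1000 km/s HV masers

# Error budget: combine the two statistical errors, then add the quoted
# disk-model (0.26 Mpc) and eccentricity (0.4 Mpc) systematics in quadrature.
s_stat = 0.2 / math.sqrt(2)                   # two estimates at +-0.2 Mpc each
s_disk = math.hypot(s_stat, 0.26)             # ~0.3 Mpc, as quoted
print(f"total error ~ {math.hypot(s_disk, 0.4):.2f} Mpc")  # ~0.5 Mpc, as quoted
```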
## 1 Introduction

Hadronic final state analyses in Deep-Inelastic Scattering (DIS) interactions at HERA allow novel stringent tests of the physics of Quantum Chromodynamics (QCD), the theory of the strong interactions, in a kinematical region of high parton densities which so far has not been accessible . The high Center of Mass System (CMS) energy ($`\sim `$ 300 GeV) of the HERA collider allows a region in Bjorken-$`x`$ of $`10^{-5}`$–$`10^{-4}`$ to be reached while keeping the momentum transfer, $`Q^2`$, larger than a few GeV<sup>2</sup>, hence remaining in the regime of perturbative QCD (pQCD). In DIS a parton in the proton can induce a QCD cascade consisting of several subsequent parton emissions before the final parton interacts with the virtual photon. The multiplicity and the $`x`$ distribution of these emitted partons differ significantly in different approximations of QCD dynamics at small $`x`$. At low $`x`$, pQCD evolution is complicated by the occurrence of two large logarithms in the evolution equations, namely $`\mathrm{ln}1/x`$ and $`\mathrm{ln}Q^2`$. In contrast, in the better tested region of pQCD at larger $`x`$ a summation of the leading $`\mathrm{ln}Q^2`$ terms is sufficient. A complete perturbative treatment in the low-$`x`$ region is not yet available, and different approximations are made, resulting in different parton dynamics. At high $`Q^2`$ and high $`x`$ pQCD requires the resummation of contributions of $`\alpha _s\mathrm{ln}(Q^2/Q_0^2)`$ terms, yielding the DGLAP (Dokshitzer-Gribov-Lipatov-Altarelli-Parisi) evolution equations. However, at small $`x`$ the contribution of large leading $`\mathrm{ln}1/x`$ terms may become important. Resummation of these terms leads to the BFKL (Balitsky-Fadin-Kuraev-Lipatov) evolution equation. Hence a pertinent and exciting question is whether these $`\mathrm{ln}1/x`$ contributions to the parton evolution can be observed experimentally. Differences between different dynamical assumptions for the parton cascade are expected to be most prominent in the phase space region towards the proton remnant direction, i.e. away from the scattered quark. Here we investigate the region of central rapidity in the CMS system of the virtual photon and the proton. In the HERA laboratory frame this corresponds to a region of small polar angles and has been generically termed the “forward region”<sup>1</sup><sup>1</sup>1 H1 uses a right-handed coordinate system with the $`z`$-axis defined by the incident proton beam and the $`y`$-axis pointing upward. . In previous H1 analyses results have been presented on forward jet and forward inclusive charged and neutral pion production , based on data collected in 1994. A measurement of forward jet production has been presented by the ZEUS collaboration . In this paper we study forward single $`\pi ^0`$ production for a considerably larger data sample than that of , collected in 1996, which allows the selection of particles with larger transverse momentum, $`p_T`$. The production of high $`p_T`$ particles is strongly correlated with the emission of hard partons in QCD and is therefore sensitive to the dynamics of the strong interaction . Advantages of studying single particles, as opposed to jets, are that no jet algorithm is needed and that smaller angles can be reached than is possible with jets of broad spatial extent. Furthermore, theoretical calculations at the parton level can be convoluted with known fragmentation functions, allowing a direct comparison of the measurements and theory.
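To give a feeling for the sizes involved, here is a small numerical sketch (Python) comparing the two logarithms across the HERA kinematic range quoted above; the starting scale $`Q_0^2`$ of a few GeV<sup>2</sup> is our illustrative choice:

```python
import math

Q0_SQ = 4.0  # GeV^2, an assumed starting scale of "a few GeV^2"

# ln(1/x) versus ln(Q^2/Q0^2) across the HERA low-x region:
for x, q2 in [(1e-2, 100.0), (1e-4, 10.0), (1e-5, 4.0)]:
    print(f"x = {x:.0e}, Q^2 = {q2:5.1f} GeV^2: "
          f"ln(1/x) = {math.log(1.0 / x):5.2f}, "
          f"ln(Q^2/Q0^2) = {math.log(q2 / Q0_SQ):5.2f}")
# At x ~ 1e-4 and moderate Q^2 the ln(1/x) terms dominate by an order of
# magnitude, which is what motivates the BFKL resummation.
```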
The selection of high $`p_T`$ particles is also inspired by the proposal of Mueller to select events where the photon virtuality $`Q^2`$ and transverse momentum squared of the parton emitted in the parton cascade, $`k_T^2`$, are of similar magnitude, thereby suppressing the $`k_T`$ ordered DGLAP evolution with respect to the non-$`k_T`$ ordered BFKL evolution. In this analysis $`\pi ^0`$’s are selected in DIS events at low $`x`$ in the region of momentum transfer 2 $`<`$ $`Q^2`$ $`<`$ 70 GeV<sup>2</sup>. The $`\pi ^0`$’s are required to have a polar angle in the lab frame between $`5^{\circ }`$ and $`25^{\circ }`$, and transverse momentum larger than 2.5 GeV in the hadronic CMS (contrary to the analysis in , where a minimum transverse momentum of 1 GeV was required in the laboratory frame). Large transverse momenta in the hadronic CMS, as opposed to the laboratory system, are more directly related to hard subprocesses, since in the quark parton model picture the current quark has zero $`p_T`$ in the hadronic CMS. The increased transverse momentum cut enhances the sensitivity to hard parton emission in the QCD cascade and provides a hard scale for perturbative calculations. It also reduces significantly the influence of soft hadronization. A calculation based on pQCD which uses the BFKL formalism for the perturbative part, and fragmentation functions for the hadronization, is available and will be compared with the data. In addition, models using $`𝒪(\alpha _s)`$ QCD matrix elements and parton cascades according to DGLAP evolution, and colour string hadronization, will be compared with the data.

## 2 Experimental Apparatus

A detailed description of the H1 detector can be found elsewhere . The following section briefly describes the components of the detector relevant for this analysis. The hadronic energy flow and the scattered electron are measured with a liquid argon (LAr) calorimeter and a backward SPACAL calorimeter, respectively. The LAr calorimeter extends over the polar angle range $`4^{\circ }<\theta <154^{\circ }`$ with full azimuthal coverage. It consists of an electromagnetic section with lead absorbers and a hadronic section with steel absorbers. With about $`\mathrm{44\hspace{0.17em}000}`$ cells in total, both sections are highly segmented in the transverse and the longitudinal direction, in particular in the forward region of the detector. The total depth of both sections varies between 4.5 and 8 interaction lengths in the region $`4^{\circ }<\theta <128^{\circ }`$. Test beam measurements of the LAr calorimeter modules showed an energy resolution of $`\sigma _E/E\approx 0.50/\sqrt{E[\mathrm{GeV}]}\oplus 0.02`$ for charged pions and of $`\sigma _E/E\approx 0.12/\sqrt{E[\mathrm{GeV}]}\oplus 0.01`$ for electrons , where $`\oplus `$ denotes addition in quadrature. The hadronic energy measurement is performed by applying a weighting technique in order to account for the non-compensating nature of the calorimeter. The absolute scale of the hadronic energy is presently known to $`4\%`$. The scale uncertainty for electromagnetic energies is 3% for the forward region relevant for this analysis. The SPACAL is a lead/scintillating fibre calorimeter which covers the region $`153^{\circ }<\theta <177.8^{\circ }`$ with an electromagnetic section and a hadronic section. The energy resolution for electrons is $`7.5\%/\sqrt{E}\oplus 2.5\%`$, the energy resolution for hadrons is $`30\%`$. The energy scale uncertainties are 1$`\%`$ and 7$`\%`$ for the electrons and hadrons respectively. The timing resolution of better than 1 ns in both sections of the SPACAL is exploited to form a trigger decision and reject background.
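As a quick illustration of what these resolution parametrizations mean in practice, a minimal sketch (Python) evaluating the quadrature sum at a representative energy; the choice of 10 GeV is ours:

```python
import math

def frac_resolution(E, stochastic, constant):
    """sigma_E/E = stochastic/sqrt(E) (+) constant, combined in quadrature."""
    return math.hypot(stochastic / math.sqrt(E), constant)

# LAr test-beam terms quoted above, evaluated at E = 10 GeV:
print(f"charged pions: {frac_resolution(10.0, 0.50, 0.02):.3f}")  # ~0.16
print(f"electrons:     {frac_resolution(10.0, 0.12, 0.01):.3f}")  # ~0.039
```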
The calorimeters are surrounded by a superconducting solenoid providing a uniform magnetic field of $`1.15`$ T parallel to the beam axis in the tracking region. Charged particle tracks are measured in the central tracker (CT) covering the polar angular range $`25^{\circ }<\theta <155^{\circ }`$ and the forward tracking (FT) system, covering the polar angular range $`5^{\circ }<\theta <25^{\circ }`$. The CT consists of inner and outer cylindrical jet chambers, $`z`$-drift chambers and proportional chambers. The jet chambers, mounted concentrically around the beam line, provide up to 65 space points in the radial plane for tracks with sufficiently large transverse momentum. A backward drift chamber (BDC) in front of the SPACAL with an angular acceptance of $`151^{\circ }<\theta <177.5^{\circ }`$ serves to identify electron candidates and to precisely measure their direction. Using information from the BDC, the SPACAL and the reconstructed event vertex, the polar angle of the scattered electron is known to about 0.7 mrad. The luminosity is measured using the reaction $`ep\to ep\gamma `$ with two TlCl/TlBr crystal calorimeters installed in the HERA tunnel. The electron tagger is located at $`z=-33`$ m and the photon tagger at $`z=-103`$ m from the interaction point in the direction of the outgoing electron beam.

## 3 Theoretical Predictions

Predictions for final state observables are available from Monte Carlo models using $`𝒪(\alpha _s)`$ matrix elements and parton cascades according to the DGLAP evolution, and from numerical calculations based upon the BFKL formalism. In the following we describe the models and calculations used.

### 3.1 Phenomenological QCD Models

Implementations of $`𝒪(\alpha _s)`$ matrix elements complemented by parton showers based on the DGLAP splitting functions are available in the programs LEPTO6.5 and HERWIG5.9 . The factorization and renormalization scales are set to $`Q^2`$. The predictions of these models should be valid in the region: $`\alpha _s(Q^2)\mathrm{ln}(Q^2/Q_0^2)\sim 1`$ and $`\alpha _s(Q^2)\mathrm{ln}(1/x)\ll 1`$. In LEPTO the Lund string model as implemented in JETSET7.4 is used to describe hadronization processes. LEPTO includes soft colour interactions in the final state which can lead to events with a large rapidity gap. HERWIG differs from LEPTO in that it also considers interference effects due to colour coherence and uses the cluster fragmentation model for hadronization. The versions of LEPTO6.5 and HERWIG5.9 used consider only DIS processes in which the virtual photon is point-like. Recently a model has been proposed (RAPGAP2.06 ) which is also based on the DGLAP formalism but includes contributions from processes in which the virtual photon entering the scattering process can be resolved. The relative contribution from resolved photon processes depends on the scale at which the virtual photon is probed. As in , the factorization and renormalization scale in this paper is taken to be $`Q^2+p_T^2`$ ($`p_T^2`$ of the partons from the hard subprocess). The model calculations in this paper were made with the CTEQ4M parton densities for the proton and the SAS-1D parton densities for the virtual photon. QED corrections are determined with the Monte Carlo program DJANGO6.2 . The contribution of photons emitted in the forward direction from QED processes originating from the quarks in the proton was found to be negligible. In a previous paper we compared the results with ARIADNE and LDCMC .
ARIADNE provides an implementation of the Colour Dipole Model (CDM) of a chain of independently radiating dipoles formed by emitted gluons . Unlike LEPTO, the cascade of the CDM is not ordered in transverse momentum. For the present analysis it was confirmed that the predictions depend strongly on the parameters controlling the “size” of the diquark and photon, and are therefore not explicitly compared with data in this paper<sup>2</sup><sup>2</sup>2 A good description of the data presented in this paper can be achieved (using ARIADNE4.10) e.g. choosing PARA(10)=1.7 and PARA(14)=1.0 . The linked dipole chain (LDC) model is a reformulation of the CCFM equation, which forms a bridge between the BFKL and DGLAP approaches. Calculations of the hadronic final state based on this approach are available with the LDCMC 1.0 Monte Carlo which matches exact first order matrix elements with the LDC-prescribed initial and final state parton emissions. The model however failed to describe the data in and is therefore not considered further in this paper.

### 3.2 BFKL Calculation

Recently $`\pi ^0`$ cross-sections have been calculated based on a modified BFKL evolution equation in order $`𝒪(\alpha _s)`$ convoluted with $`\pi ^0`$ fragmentation functions. The modified evolution equations include the so called “consistency constraint” which limits the gluon emission at each vertex in the cascade to the kinematically allowed region. It is argued that this constraint embodies a major part of the non-leading $`\mathrm{ln}(1/x)`$ contributions to the BFKL equation, which have been found to be very important . The predictions of this modified BFKL equation are therefore expected to be more reliable than those without this constraint. The parton densities and fragmentation functions used in the calculation are taken from and respectively. In this paper we compare “set (iii)” of to the data. In this set the scale for the strong coupling constant $`\alpha _S`$ is taken to be the transverse momentum squared of the emitted partons, $`k_T^2`$, and the infrared cut-off in the modified BFKL equation is taken to be 0.5 GeV<sup>2</sup>. Calculations with these parameters give a fair description of the forward jet cross-sections from when taking into account hadronization corrections. The predictions are labelled “mod LO BFKL” in the figures.

## 4 Measurement

### 4.1 Data Selection

The analysis is based on data representing an integrated luminosity of $`\mathcal{L}=5.8\mathrm{\ pb}^{-1}`$ taken by H1 during 1996. Deep-inelastic scattering events are selected and the event kinematics are calculated from the polar angle and the energy of the scattered positron. The four momentum transfer squared, $`Q^2`$, and the inelasticity, $`y`$, are related to these quantities (neglecting the positron mass) by $`Q^2=4E_eE_l\mathrm{cos}^2\frac{\theta _e}{2}`$ and $`y=1-\frac{E_e}{E_l}\mathrm{sin}^2\frac{\theta _e}{2}`$, where $`E_l`$ and $`E_e`$ are the energies of the incoming and the scattered positron respectively, and $`\theta _e`$ is the polar angle of the scattered positron. Bjorken-$`x`$ is then given by $`x=Q^2/(ys)`$, where $`s`$ is the square of the $`ep`$ center of mass energy. Experimentally the scattered positron is defined to be the highest energy cluster, i.e. localized energy deposit, in the SPACAL with a cluster radius of less than 3.5 cm and an associated track in the BDC.
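The kinematic reconstruction from the scattered positron alone can be written down directly from the formulas above. A minimal sketch (Python); the 27.5 GeV positron beam energy is our assumption for the 1996 HERA running, while the 820 GeV proton energy is quoted later in the text:

```python
import math

E_BEAM_E = 27.5   # GeV, incoming positron energy (assumed HERA 1996 value)
E_BEAM_P = 820.0  # GeV, incoming proton energy
S = 4 * E_BEAM_E * E_BEAM_P  # squared ep CMS energy, ~(300 GeV)^2

def dis_kinematics(E_e, theta_e):
    """x, y, Q^2 from scattered-positron energy (GeV) and polar angle (rad),
    with theta measured from the incident proton direction."""
    Q2 = 4 * E_BEAM_E * E_e * math.cos(theta_e / 2) ** 2
    y = 1 - (E_e / E_BEAM_E) * math.sin(theta_e / 2) ** 2
    x = Q2 / (y * S)
    return x, y, Q2

# A backward-scattered positron inside the SPACAL acceptance; the resulting
# values land in the analysis range 0.1 < y < 0.6 and 2 < Q^2 < 70 GeV^2:
x, y, Q2 = dis_kinematics(E_e=24.0, theta_e=math.radians(170.0))
print(f"x = {x:.2e}, y = {y:.3f}, Q2 = {Q2:.1f} GeV^2")
```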
Experimental requirements based on the energy and the polar angle of the scattered positron are used during the preselection but these are superseded by stronger kinematic cuts which restrict the data to the range $`0.1<y<0.6`$ and $`2<Q^2<70`$ GeV<sup>2</sup>. The restricted $`y`$-range ensures that the particles from the current quark are detected in the central detector, and not in the forward region, and that the DIS kinematics can be well determined from the measurement of the scattered positron. Photoproduction background is further reduced to a negligible level by requiring $`35<\sum _j(E_j-p_{z,j})<70\mathrm{\ GeV}`$ with $`E_j`$ and $`p_{z,j}`$ the energy and longitudinal momentum of a particle respectively, and where the sum extends over all detected particles in the event, except for those in the small angle electron and photon taggers. The reconstructed primary event vertex must have a $`z`$ coordinate not more than 35 cm away from the nominal interaction point. The trigger is based on energy depositions in the SPACAL and demands multiple track activity in the central tracker. For the events used in this analysis the efficiency of this trigger is around 80$`\%`$, determined using data from an independent second trigger. After the selection about 600K events are available for further analysis.

### 4.2 Forward $`\pi ^0`$-Meson Selection

A measurement of particle production at mid-rapidity in the hadronic CMS system requires small forward angles in the lab system. It is difficult to identify individual charged particles in the forward direction in an environment with a high density of charged particles. However, the finely segmented H1 LAr calorimeter allows the measurement of $`\pi ^0`$’s down to very small angles. They are measured using the dominant decay channel $`\pi ^0\to 2\gamma `$. The $`\pi ^0`$ candidates are selected in the region $`5^{\circ }<\theta _\pi <25^{\circ }`$, where $`\theta _\pi `$ is the polar angle of the produced $`\pi ^0`$. Candidates are required to have an energy such that $`x_\pi =E_\pi /E_{proton}>0.01`$, with $`E_{proton}`$ the proton beam energy (820 GeV), and a transverse momentum in the hadronic CMS, $`p_{T,\pi }^{*}`$, greater than 2.5 GeV. At the high $`\pi ^0`$ energies considered here, the two photons from the decay cannot be separated, but appear as one object (cluster) in the calorimetric response. Therefore, the standard method to identify $`\pi ^0`$-mesons by reconstructing the invariant mass from the separate measurement of the two decay photons is not applicable. In this paper, a detailed analysis of the longitudinal and transverse shape of the energy depositions is performed to separate electromagnetic from hadronic showers. This approach is based on the compact nature of electromagnetic showers as opposed to showers of hadronic origin, which are broader. The analysis of shower profiles is made possible by the fine granularity of the calorimeter in the forward direction. It has a typical lateral cell size of $`3.5\times 3.5\mathrm{\ cm}^2`$. This can be compared to the mean Molière radius $`\overline{R}_m`$ which is 3.6 cm and the mean radiation length $`\overline{X}_0`$ which is 1.6 cm. The calorimeter has a four-fold longitudinal segmentation for the electromagnetic section which has a thickness of 20 to 25 radiation lengths $`X_0`$. The main experimental challenge in this analysis is the high activity in this region of phase space, with hadronic showers “masking” the clear electromagnetic signature from the $`\pi ^0\to 2\gamma `$ decay.
The overlap of a $`\pi ^0`$ induced cluster with another hadron is mainly responsible for losses of $`\pi ^0`$ detection efficiency, since the distortion of the shower shape estimators it causes will, in many cases, lead to the rejection of the cluster candidate. The reconstruction of LAr data is optimized to contain all the energy of an electromagnetic shower in one cluster . A $`\pi ^0`$-meson candidate is required to be a cluster with more than 90$`\%`$ of its energy deposited in the electromagnetic part of the LAr calorimeter. A “hot” core consisting of the most energetic group of contiguous electromagnetic calorimeter cells of a cluster, which must include the hottest cell, is defined for each candidate . More than 50 $`\%`$ of the cluster energy is required to be deposited in this core. The lateral spread of the shower is quantified in terms of lateral shower moments calculated relative to the shower’s principal axis and required to be less than 4 cm . The longitudinal shower shape is used as a selection criterion via the fraction of the shower’s energy deposited in each layer of cells in the electromagnetic part of the calorimeter. The precise specifications of these layers can be found in . The part of the cluster’s energy measured in the second layer minus that measured in the fourth layer is required to be more than 40% of the total cluster energy. This selects showers which start to develop close to the calorimeter surface and are well contained in the electromagnetic part of the calorimeter, as expected for showers of electromagnetic origin. As mentioned above, with this selection one cannot distinguish photons from a $`\pi ^0`$-meson decay and photons from other sources. The high energy required in the selection, however, ensures that contributions from sources other than high energy $`\pi ^0`$-mesons (such as prompt photon production) are at a negligible level . The influence of $`\eta `$-meson production is corrected for in the analysis. Uncertainties in the relative $`\eta `$ and $`\pi ^0`$ production rates in the Monte Carlo models used have been studied and were found to have a negligible effect on the results. With this selection about 1700 (600) $`\pi ^0`$ candidates are found in the kinematic range $`5^{\circ }<\theta _\pi <25^{\circ }`$, $`x_\pi >0.01`$ and $`p_{T,\pi }^{*}>`$ 2.5 (3.5) GeV, with a detection efficiency better than 45$`\%`$. Monte Carlo studies, using a detailed simulation of the H1 detector for a sample of DIS events, yield a purity of about 70% for the selected $`\pi ^0`$-meson sample. The impurities are due to misidentified hadrons and to secondary interactions of charged hadrons with passive material in the detector (between one and two radiation lengths in the forward region). These studies show that less than 10$`\%`$ of the selected $`\pi ^0`$ candidates stem from secondary scattering of charged hadrons with passive material in the forward region, where the amount of material between the interaction point and the calorimeter surface is largest. The determination of the $`\pi ^0`$ acceptance and purity depends only on the particle density and energies in the forward calorimeter. To ensure that the different Monte Carlo models used are in reasonable agreement with the data in this respect, the transverse energy flow, $`E_T`$, around the $`\pi ^0`$ candidate clusters was studied in detail.
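Before turning to how well the simulations reproduce this environment, note that the shower-shape requirements just listed amount to a simple cascade of cuts. A schematic sketch (Python) on a hypothetical cluster record; the field names are ours and do not correspond to H1 software:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    e_total: float         # total cluster energy (GeV)
    e_em: float            # energy in the electromagnetic section
    e_core: float          # energy in the contiguous "hot core" cells
    e_layer2: float        # energy in the second e.m. layer
    e_layer4: float        # energy in the fourth e.m. layer
    lateral_moment: float  # lateral spread about the shower axis (cm)

def is_pi0_candidate(c: Cluster) -> bool:
    """Shower-shape selection for pi0 -> 2gamma candidates (schematic)."""
    if c.e_em < 0.90 * c.e_total:       # >90% electromagnetic energy fraction
        return False
    if c.e_core < 0.50 * c.e_total:     # >50% of the energy in the hot core
        return False
    if c.lateral_moment >= 4.0:         # compact lateral profile (<4 cm)
        return False
    # early, contained longitudinal development (layer 2 minus layer 4):
    return (c.e_layer2 - c.e_layer4) > 0.40 * c.e_total

print(is_pi0_candidate(Cluster(30.0, 29.0, 20.0, 16.0, 1.0, 2.5)))  # True
```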
Both the $`E_T`$ flow and the $`E_T`$ spectra in the tail of the $`E_T`$ flow distributions are reasonably well described by the models used to determine the detector corrections . Of the two models used, however, ARIADNE showed a higher particle density while LEPTO has a lower particle density than the data. Remaining differences of the detector corrections determined with the two Monte Carlo models are therefore used to estimate the systematic error. When the energy and transverse momentum requirements are lowered, the two photons from the $`\pi ^0`$ decay become separable and a clear $`\pi ^0`$ mass peak can be observed, which is also well reproduced by the H1 detector simulation. The same method of selecting $`\pi ^0`$-mesons as outlined above was used in a previous H1 analysis , where a measurement of charged particles in the same region was also performed. The measured $`\pi ^0`$ cross-sections were found to agree well with the average of the $`\pi ^+`$ and $`\pi ^{-}`$ cross sections. Furthermore, the results of the analysis of the 1994 data are found to be in good agreement with the present analysis in their overlapping phase space regions.

## 5 Results

The experimental results of the analysis are presented as differential $`ep`$ cross-sections of forward $`\pi ^0`$-meson production as a function of $`Q^2`$, and as a function of $`x`$, $`\eta _\pi `$ and $`p_{T,\pi }^{*}`$ in three regions of $`Q^2`$ for $`p_{T,\pi }^{*}>2.5`$ GeV. The pseudorapidity $`\eta _\pi `$ is given by $`-\mathrm{ln}\left[\mathrm{tan}\left(\theta /2\right)\right]`$ with $`\theta `$ being the polar angle of the $`\pi ^0`$ in the laboratory frame. In addition the $`\pi ^0`$ cross-sections as a function of $`x`$ and $`Q^2`$ are measured for data with the threshold of the $`\pi ^0`$ transverse momentum increased to $`p_{T,\pi }^{*}>`$ 3.5 GeV. An increased $`p_{T,\pi }^{*}`$ threshold is expected to enhance the sensitivity to hard parton emission in the parton cascade. The phase space is given by 0.1 $`<`$ $`y`$ $`<`$ 0.6, 2 $`<`$ $`Q^2`$ $`<`$ 70 GeV<sup>2</sup>, $`5^{\circ }<\theta _\pi <25^{\circ }`$ and $`x_\pi =E_\pi /E_{proton}>0.01`$, in addition to the $`p_{T,\pi }^{*}`$ thresholds given above. $`\theta _\pi `$, $`E_\pi `$ and $`E_{proton}`$ are measured in the H1 laboratory frame; $`p_{T,\pi }^{*}`$ is calculated in the hadronic CMS. The measurement extends down to $`x=4\times `$10<sup>-5</sup>, covering two orders of magnitude in Bjorken-$`x`$. All observables are corrected for detector effects and for the influence of QED radiation by a bin-by-bin unfolding procedure. The detector effects include the efficiency, purity and acceptance of the $`\pi ^0`$-meson identification as well as contributions from secondary scattering in passive material. The correction functions are obtained with two different models (ARIADNE and LEPTO) and a detailed detector simulation. The final correction is performed with the average of the two models. The remaining background from photoproduction in the data sample has been studied using a sample of photoproduction Monte Carlo events (PHOJET ) representing an integrated luminosity of about 1 pb<sup>-1</sup>. The contribution from such events is found to be negligible in all bins. The typical total systematic uncertainty is 15–25$`\%`$, compared to a statistical uncertainty of about 10$`\%`$.
Contributions to the systematic error include: the uncertainty of the luminosity measurement (1.8$`\%`$), the statistical uncertainty in the determination of the trigger efficiency (5$`\%`$), the uncertainty of the electromagnetic energy scale of the LAr (3%) and the SPACAL (1%) calorimeters, which each contribute 5–10$`\%`$, the variation of $`\pi ^0`$-meson selection and acceptance requirements within the resolution of the reconstructed quantities (5–10$`\%`$), and the model dependence of the bin-by-bin correction procedure using differences between ARIADNE and LEPTO (5–10$`\%`$). The cross-sections as a function of $`x`$, shown in Fig. 1(a), exhibit a strong rise towards small $`x`$. In this and the following figures the inner error bars give the statistical errors, while the outer error bars give the statistical and systematic errors added in quadrature. It is of interest to note that the rise in $`x`$ in Fig. 1(a) is similar to the rise of the total inclusive cross-section as measured e.g. in . This is demonstrated in Fig. 1(b), which shows the rate of $`\pi ^0`$-meson production in DIS as a function of $`x`$, obtained by dividing the cross-section shown in Fig. 1(a) by the inclusive $`ep`$ cross-section in each bin of $`x`$ and $`Q^2`$ . The inclusive cross-section is calculated by integrating the H1 QCD fit to the 1996 structure function data as presented in for every bin of inclusive $`\pi ^0`$-meson cross-sections. Note, however, that the $`\pi ^0`$ rate increases with increasing $`Q^2`$. The $`x`$-independence seen in Fig. 1(b) in a fixed $`Q^2`$ interval implies that the $`\pi ^0`$ rate for particles with a $`p_T`$ above the cut-off and within the selected kinematical region is independent of $`W`$, the hadronic invariant mass of the photon-proton system. The shapes of $`d\sigma _\pi /d\eta _\pi `$ and $`d\sigma _\pi /dp_{T,\pi }^{*}`$ (Fig. 2) show no significant dependence on $`Q^2`$ . The measurements of the latter extend to values of transverse momenta as high as 8 GeV. Since $`\eta _\pi `$ is measured in the laboratory frame, mid-rapidity in the hadronic CMS corresponds approximately to $`\eta _\pi =2`$ in Figure 2 (a). Figure 3 shows the inclusive $`\pi ^0`$-meson cross-section as a function of $`Q^2`$ . The cross-section falls steeply with increasing $`Q^2`$. Figure 4 finally shows $`d\sigma _\pi /dQ^2`$ and $`d\sigma _\pi /dx`$ for the higher threshold of $`p_{T,\pi }^{*}>`$ 3.5 GeV. No significant change in the shape of the distributions occurs when the $`p_{T,\pi }^{*}`$ threshold is raised, but the cross-sections are reduced by about a factor of three. All differential cross-sections are compared to three predictions based on different QCD approximations. The DGLAP prediction for pointlike virtual photon scattering (including parton showers) as given by LEPTO6.5 falls clearly below the data. There is still fair agreement with the data for the highest $`x`$ and $`Q^2`$ bins, shown in Fig. 1(a) and Fig. 3, but differences occur in the low-$`x`$ region. These are as large as a factor of five in the lowest $`x`$ region. LEPTO also fails to describe the ratio in Fig. 1(b), and shows a strong decrease with decreasing $`x`$. The mechanism of emitting partons according to the DGLAP splitting functions, combined with pointlike virtual photon scattering only, is clearly not supported by the data, in particular at low $`x`$. The LEPTO prediction is based on about seven times the integrated luminosity of the data.
Comparisons to HERWIG5.9 (not shown) lead to similar conclusions . A considerable improvement in the description of the data is achieved by a model which considers additional processes where the virtual photon entering the scattering process is resolved. This approach can be regarded as an effective resummation of higher order corrections. It also provides a smooth transition towards the limit of $`Q^2=0`$, i.e. photoproduction, where resolved processes dominate in the HERA regime. Such a prediction is provided by RAPGAP2.06 . In Fig. 1(a) RAPGAP2.06 predicts cross-sections very close to the measured distributions, with the exception of the lowest $`Q^2`$ bin where the prediction is too low. All predicted cross-sections increase by up to 30$`\%`$ when the scale in the hard scattering is increased from $`Q^2+p_T^2`$ to $`Q^2+4p_T^2`$ , and therefore do not improve the overall description significantly. Hence RAPGAP, with the parton distributions used here, does not describe the low-$`x`$ behaviour of the data over the full range. The RAPGAP prediction is based on approximately four times the integrated luminosity of the data. The ARIADNE model (not shown), with parameters as given before, can describe the data presented in this paper , but it remains to be shown whether this choice allows for a consistent description of other aspects of the DIS final state data. Moreover, a moderate variation of these parameters leads to large changes in the prediction. Next we compare the data with a prediction of the $`\pi ^0`$-meson cross-section based on a modified LO BFKL parton calculation convoluted with $`\pi ^0`$ fragmentation functions. The predictions obtained with these calculations turn out to be in good agreement with the neutral pion cross-sections measured in most of the available phase space, but are below the data at the lowest values of $`Q^2`$. The calculation also describes well the ratio shown in Fig. 1(b), except possibly at the largest $`Q^2,x`$ bin. This ratio has been calculated by using the corresponding prediction for the inclusive cross section, based on the BFKL formalism . The BFKL predictions involve a cut-off parameter in the transverse momentum squared $`k_T^2`$ of the partons: $`k_0^2`$=0.5 GeV<sup>2</sup>, and a choice of renormalization scale. It is shown in that a variation of $`k_0^2`$ by a factor of two leads to less than a 10% change of the cross-sections. The scale dependence is larger; a change from $`k_T^2`$ to $`k_T^2/4`$ leads to an approximate increase of 60% of the cross-sections. This affects mostly the normalization, but not the shape of the distributions. The good agreement between this prediction and the data suggests that the modified BFKL evolution equation, using the consistency constraints, is a good approximation for low-$`x`$ evolution in the considered phase space. This, in turn, can then be interpreted as a sign of the experimental manifestation of leading $`\mathrm{ln}1/x`$ terms which are anticipated in pQCD evolution.

## 6 Conclusions

Differential cross-sections of forward $`\pi ^0`$ production have been measured for particles with $`p_{T,\pi }^{*}>`$ 2.5 (3.5) GeV, $`5^{\circ }<\theta _\pi <25^{\circ }`$ and $`x_\pi =E_\pi /E_{proton}>0.01`$, for DIS events with $`0.1<y<`$ 0.6 and 2 $`<`$ $`Q^2`$ $`<`$ 70 GeV<sup>2</sup>. The data are sensitive to QCD parton dynamics at low $`x`$ (high parton density) and mid-rapidity in the hadronic CMS system.
They discriminate between different approximations to QCD evolution in the new regime opened up by HERA. The data show a strong rise of the cross section with decreasing $`x`$. This rise is similar to the rise of the inclusive cross section. Models using $`𝒪(\alpha _s)`$ QCD matrix elements and parton cascades according to the DGLAP splitting functions cannot describe the differential neutral pion cross-sections at low $`x`$. Inclusion of processes in which the virtual photon is resolved improves the agreement with the data, but does not provide a satisfactory description in the full $`x`$ and $`Q^2`$ range. A calculation based on the BFKL formalism is in good agreement with the data, particularly for the shape description, but the absolute normalization remains strongly affected by the scale uncertainty. So far the data in the phase space selected in this analysis could not be confronted with a next-to-leading-order (NLO) prediction – either for DGLAP or for BFKL. More definite conclusions therefore have to be delayed until such calculations become available.

## Acknowledgments

We wish to thank A.D. Martin, J.J. Outhwaite and A.M. Stasto for useful discussions. We are grateful to the HERA machine group whose outstanding efforts have made and continue to make this experiment possible. We thank the engineers and technicians for their work constructing and maintaining the H1 detector, our funding agencies for financial support, the DESY technical staff for continual assistance and the DESY directorate for the hospitality which they extend to the non-DESY members of the collaboration.
# Three Approaches to the Quantitative Definition of Information in an Individual Pure Quantum State

## 1 Introduction

While Kolmogorov complexity is the accepted absolute measure of information content in a classical individual finite object, a similar absolute notion is needed for the information content of a pure quantum state. <sup>1</sup><sup>1</sup>1 For definitions and theory of Kolmogorov complexity consult , and for quantum theory consult . Quantum theory assumes that every complex vector, except the null vector, represents a realizable pure quantum state.<sup>2</sup><sup>2</sup>2That is, every complex vector that can be normalized to unit length. This leaves open the question of how to design the equipment that prepares such a pure state. While there are continuously many pure states in a finite-dimensional complex vector space—corresponding to all vectors of unit length—we can finitely describe only a countable subset. Imposing effectiveness on such descriptions leads to constructive procedures. The most general such procedures satisfying universally agreed-upon logical principles of effectiveness are quantum Turing machines, . To define quantum Kolmogorov complexity by way of quantum Turing machines leaves essentially two options:

1. We want to describe every quantum superposition exactly; or
2. we want to take into account the number of bits/qubits in the specification as well as the accuracy of the quantum state produced.

We have to deal with three problems:

* There are continuously many quantum Turing machines;
* There are continuously many pure quantum states;
* There are continuously many qubit descriptions.

There are uncountably many quantum Turing machines only if we allow arbitrary real rotations in the definition of machines. Then, a quantum Turing machine can only be universal in the sense that it can approximate the computation of an arbitrary machine, . In descriptions using universal quantum Turing machines we would have to account for the closeness of approximation, the number of steps required to get this precision, and the like. In contrast, if we fix the rotation of all contemplated machines to a single primitive rotation $`\theta `$ with $`\mathrm{cos}\theta =3/5`$ and $`\mathrm{sin}\theta =4/5`$, then there are only countably many Turing machines and the universal machine simulates the others exactly . Every quantum Turing machine computation using arbitrary real rotations can be approximated to any precision by machines with fixed rotation $`\theta `$ but in general cannot be simulated exactly—just like in the case of the simulation of arbitrary quantum Turing machines by a universal quantum Turing machine. Since exact simulation is impossible by a fixed universal quantum Turing machine anyhow, but arbitrarily close approximations are possible by Turing machines using a fixed rotation like $`\theta `$, we are motivated to fix $`Q_1,Q_2,\mathrm{\ldots }`$ as a standard enumeration of quantum Turing machines using only rotation $`\theta `$. Our next question is whether we want programs (descriptions) to be in classical bits or in qubits. The intuitive notion of computability requires the programs to be classical. Namely, to prepare a quantum state requires a physical apparatus that “computes” this quantum state from classical specifications. Since such specifications have effective descriptions, every quantum state that can be prepared can be described effectively in descriptions consisting of classical bits.
Descriptions consisting of arbitrary pure quantum states allow noncomputable (or hard to compute) information to be hidden in the bits of the amplitudes. In Definition 2 we call a pure quantum state directly computable if there is a (classical) program such that the universal quantum Turing machine computes that state from the program and then halts in an appropriate fashion. In a computational setting we naturally require that directly computable pure quantum states can be prepared. By repeating the preparation we can obtain arbitrarily many copies of the pure quantum state. <sup>3</sup><sup>3</sup>3See the discussion in , pp. 49–51. If descriptions are not effective then we are not going to use them in our algorithms except possibly on inputs from an “unprepared” origin. Every quantum state used in a quantum computation arises from some classical preparation or is possibly captured from some unknown origin. If the latter, then we can consume it as conditional side-information or an oracle. Restricting ourselves to an effective enumeration of quantum Turing machines and classical descriptions to describe by approximation continuously many pure quantum states is reminiscent of the construction of continuously many real numbers from Cauchy sequences of rational numbers, the rationals being effectively enumerable. The second approach considers the shortest effective qubit description of a pure quantum state. This can also be properly formulated in terms of the conditional version of the first approach. An advantage of this version is that the upper bound on the complexity of a pure quantum state is immediately given by the number of qubits involved in the literal description of that pure quantum state. The status of incompressibility and degree of uncomputability is as yet unknown and potentially a source of problems with this approach. The third approach is to give programs for the $`2^{n+1}`$ real numbers involved in the precise description of the $`n`$-qubit state. Then the question reduces to the problem of describing lists of real numbers. In the classical situation there are also several variants of Kolmogorov complexity that are very meaningful in their respective settings: plain Kolmogorov complexity, prefix complexity, monotone complexity, uniform complexity, negative logarithm of universal measure, and so on . It is therefore not surprising that in the more complicated situation of quantum information several different choices of complexity can be meaningful and unavoidable in different settings.

## 2 Classical Descriptions

The complex quantity $`\langle x|z\rangle `$ is the inner product of vectors $`|x\rangle `$ and $`|z\rangle `$. Since pure quantum states $`|x\rangle ,|z\rangle `$ have unit length, $`|\langle x|z\rangle |=|\mathrm{cos}\theta |`$ where $`\theta `$ is the angle between vectors $`|x\rangle `$ and $`|z\rangle `$ and $`|\langle x|z\rangle |^2`$ is the probability of outcome $`|x\rangle `$ being measured from state $`|z\rangle `$, . The idea is as follows. A von Neumann measurement is a decomposition of the Hilbert space into subspaces that are mutually orthogonal, for example an orthonormal basis is an observable. Physicists like to specify observables as Hermitian matrices, where the understanding is that the eigenspaces of the matrices (which will always be orthogonal) are the actual subspaces. When a measurement is performed, the state is projected into one of the subspaces (with probability equal to the square of the projection). So the subspaces correspond to the possible outcomes of a measurement.
In the above case we project $`|z\rangle `$ on outcome $`|x\rangle `$ using projection $`|x\rangle \langle x|`$ resulting in $`\langle x|z\rangle |x\rangle `$. Our model of computation is a quantum Turing machine with classical binary program $`p`$ on the input tape and a quantum auxiliary input on a special conditional input facility. We think of this auxiliary input as being given as a pure quantum state $`|y\rangle `$ (in which case it can be used only once), as a mixture density matrix $`\rho `$, or (perhaps partially) as a classical program from which it can be computed. In the last case, the classical program can of course be used indefinitely often.<sup>4</sup><sup>4</sup>4We can even allow that the conditional information $`y`$ is infinite or noncomputable, or an oracle. But we will not need this in the present paper. It is therefore not only important what information is given conditionally, but also how it is described—like this is sometimes the case in the classical version of Kolmogorov complexity for other reasons that would additionally hold in the quantum case. We impose the condition that the set of halting programs $`𝒫_y=\{p:T(p|y)<\mathrm{\infty }\}`$ is prefix-free: no program in $`𝒫_y`$ is a proper prefix of another program in $`𝒫_y`$. Put differently, the Turing machine scans all of a halting program $`p`$ but never scans the bit following the last bit of $`p`$: it is self-delimiting. <sup>5</sup><sup>5</sup>5One can also use a model where the input $`p`$ is delimited by distinguished markers. Then the Turing machine always knows where the input ends. In the self-delimiting case the endmarker must be implicit in the halting program $`p`$ itself. This encoding of the endmarker carries an inherent penalty in the form of increased length: typically a prefix code of an $`n`$-length binary string has length about $`n+\mathrm{log}n+2\mathrm{log}\mathrm{log}n`$ bits, . <sup>6</sup><sup>6</sup>6 There are two possible interpretations for the computation relation $`Q(p,y)=|x\rangle `$. In the narrow interpretation we require that $`Q`$ with $`p`$ on the input tape and $`y`$ on the conditional tape halts with $`|x\rangle `$ on the output tape. In the wide interpretation we can define pure quantum states by requiring that for every precision $`\delta >0`$ the computation of $`Q`$ with $`p`$ on the input tape and $`y`$ on the conditional tape and $`\delta `$ on a tape where the precision is to be supplied halts with $`|x^{\prime }\rangle `$ on the output tape and $`|\langle x|x^{\prime }\rangle |^2\ge 1-\delta `$. Such a notion of “computable” or “recursive” pure quantum states is similar to Turing’s notion of “computable numbers.” In the remainder of this section we use the narrow interpretation.

###### Definition 1

The (self-delimiting) complexity of $`|x\rangle `$ with respect to quantum Turing machine $`Q`$ with $`y`$ as conditional input given for free is $$K_Q(|x\rangle |y):=\underset{p}{\mathrm{min}}\{l(p)+\lceil -\mathrm{log}(|\langle z|x\rangle |^2)\rceil :Q(p,y)=|z\rangle \}$$ where $`l(p)`$ is the number of bits in the specification $`p`$, $`y`$ is an input quantum state and $`|z\rangle `$ is the quantum state produced by the computation $`Q(p,y)`$, and $`|x\rangle `$ is the target state that one is trying to describe.

###### Theorem 1

There is a universal machine <sup>7</sup><sup>7</sup>7 We use “$`U`$” to denote a universal (quantum) Turing machine rather than a unitary matrix. $`U`$ such that for all machines $`Q`$ there is a constant $`c_Q`$ (the length of the description of the index of $`Q`$ in the enumeration) such that for all quantum states $`|x\rangle `$ we have $`K_U(|x\rangle |y)\le K_Q(|x\rangle |y)+c_Q`$. Proof.
There is a universal quantum Turing machine $`U`$ in the standard enumeration $`Q_1,Q_2,\mathrm{\ldots }`$ such that for every quantum Turing machine $`Q`$ in the enumeration there is a self-delimiting program $`i_Q`$ (the index of $`Q`$) and $`U(i_Qp,y)=Q(p,y)`$ for all $`p,y`$. Setting $`c_Q=l(i_Q)`$ proves the theorem. $`\mathrm{\square }`$

We fix once and for all a reference universal quantum Turing machine $`U`$ and define the quantum Kolmogorov complexity as $`K(|x\rangle |y):=K_U(|x\rangle |y)`$, $`K(|x\rangle ):=K_U(|x\rangle |ϵ)`$, where $`ϵ`$ denotes the absence of any conditional information. The definition is continuous: If two quantum states are very close then their quantum Kolmogorov complexities are very close. Furthermore, since we can approximate every (pure quantum) state $`|x\rangle `$ to arbitrary closeness, , in particular, for every constant $`ϵ>0`$ we can compute a (pure quantum) state $`|z\rangle `$ such that $`|\langle z|x\rangle |^2>1-ϵ`$. <sup>8</sup><sup>8</sup>8We can view this as the probability of the possibly noncomputable outcome $`|x\rangle `$ when executing projection $`|x\rangle \langle x|`$ on $`|z\rangle `$ and measuring outcome $`|x\rangle `$. For this definition to be useful it should satisfy:

* The complexity of a pure state that can be directly computed should be the length of the shortest program that computes that state. (If the complexity is less then this may lead to discontinuities when we restrict quantum Kolmogorov complexity to the domain of classical objects.)
* The quantum Kolmogorov complexity of a classical object should equal the classical Kolmogorov complexity of that object (up to a constant additive term).
* The quantum Kolmogorov complexity of a quantum object should have an upper bound. (This is necessary for the complexity to be approximable from above, even if the quantum object is available in as many copies as we require.)
* Most objects should be “incompressible” in terms of quantum Kolmogorov complexity.
* In a probabilistic ensemble the expected quantum Kolmogorov complexity should be about equal (or have another meaningful relation) to the von Neumann entropy. <sup>9</sup><sup>9</sup>9In the classical case the average self-delimiting Kolmogorov complexity equals the Shannon entropy up to an additive constant depending on the complexity of the distribution concerned.

For a quantum system $`|z\rangle `$ the quantity $`P(x):=|\langle z|x\rangle |^2`$ is the probability that the system passes a test for $`|x\rangle `$, and vice versa. The term $`-\mathrm{log}(|\langle z|x\rangle |^2)`$ can be viewed as the code word length to redescribe $`|x\rangle `$ given $`|z\rangle `$ and an orthonormal basis with $`|x\rangle `$ as one of the basis vectors using the well-known Shannon-Fano prefix code. This works as follows: For every state $`|z\rangle `$ in the $`N:=2^n`$-dimensional Hilbert space with basis vectors $`\mathcal{B}=\{|e_0\rangle ,\mathrm{\ldots },|e_{N-1}\rangle \}`$ we have $`\sum _{i=0}^{N-1}|\langle e_i|z\rangle |^2=1`$. If the basis has $`|x\rangle `$ as one of the basis vectors, then we can consider $`|z\rangle `$ as a random variable that assumes value $`|x\rangle `$ with probability $`|\langle x|z\rangle |^2`$. The Shannon-Fano code word for $`|x\rangle `$ in the probabilistic ensemble $`\mathcal{B},(|\langle e_i|z\rangle |^2)_i`$ is based on the probability $`|\langle x|z\rangle |^2`$ of $`|x\rangle `$ given $`|z\rangle `$ and has length $`\lceil -\mathrm{log}(|\langle x|z\rangle |^2)\rceil `$. Considering a canonical method of constructing an orthonormal basis $`\mathcal{B}=\{|e_0\rangle ,\mathrm{\ldots },|e_{N-1}\rangle \}`$ from a given basis vector, we can choose $`\mathcal{B}`$ such that $`K(\mathcal{B})=\mathrm{min}_i\{K(|e_i\rangle )\}+O(1)`$.
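A small numerical illustration of this redescription cost may help; the following sketch (Python with numpy) uses toy vectors of our own choosing:

```python
import numpy as np

def fidelity_penalty_bits(z, x):
    """Shannon-Fano code length -log2 |<z|x>|^2 for outcome |x> given |z>."""
    overlap = abs(np.vdot(x, z)) ** 2
    return -np.log2(overlap)

# |z> is an equal superposition over a 4-dimensional basis; |x> = |e_0>.
z = np.ones(4) / 2.0
x = np.array([1.0, 0.0, 0.0, 0.0])

# Probability 1/4 of observing |e_0>, so the penalty is 2 bits:
print(fidelity_penalty_bits(z, x))       # 2.0

# Two-part description cost of |x> via a program of length l computing |z>:
l = 10
print(l + fidelity_penalty_bits(z, x))   # 12.0 bits
```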
The Shannon-Fano code is appropriate for our purpose since it is optimal in that it achieves the least expected code word length—the expectation taken over the probability of the source words—up to 1 bit by Shannon’s Noiseless Coding Theorem.

### 2.1 Consistency with Classical Complexity

Our proposal would not be useful if it were the case that for a directly computable object the complexity is less than the shortest program to compute that object. This would imply that the code corresponding to the probabilistic component in the description is possibly shorter than the difference in program lengths for programs for an approximation of the object and the object itself. This would penalize definite description compared to probabilistic description and in case of classical objects would make quantum Kolmogorov complexity less than classical Kolmogorov complexity.

###### Theorem 2

Let $`U`$ be the reference universal quantum Turing machine and let $`|x\rangle `$ be a basis vector in a directly computable orthonormal basis $`\mathcal{B}`$ given $`y`$: there is a program $`p`$ such that $`U(p,y)=|x\rangle `$. Then $`K(|x\rangle |y)=\mathrm{min}_p\{l(p):U(p,y)=|x\rangle \}`$ up to $`K(\mathcal{B}|y)+O(1)`$. Proof. Let $`|z\rangle `$ be such that $$K(|x\rangle |y)=\underset{q}{\mathrm{min}}\{l(q)+\lceil -\mathrm{log}(|\langle z|x\rangle |^2)\rceil :U(q,y)=|z\rangle \}.$$ Denote the program $`q`$ that minimizes the righthand side by $`q_{\mathrm{min}}`$ and the program $`p`$ that minimizes the expression in the statement of the theorem by $`p_{\mathrm{min}}`$. By running $`U`$ on all binary strings (candidate programs) simultaneously dovetailed-fashion <sup>10</sup><sup>10</sup>10A dovetailed computation is a method related to Cantor’s diagonalization to run all programs alternatingly in such a way that every program eventually makes progress. On a list of programs $`p_1,p_2,\mathrm{\ldots }`$ one divides the overall computation into stages $`k:=1,2,\mathrm{\ldots }`$. In stage $`k`$ of the overall computation one executes the $`i`$th computation step of every program $`p_{k-i+1}`$ for $`i:=1,\mathrm{\ldots },k`$. one can enumerate all objects that are directly computable given $`y`$ in order of their halting programs. Assume that $`U`$ is also given a $`K(\mathcal{B}|y)`$ length program $`b`$ to compute $`\mathcal{B}`$—that is, enumerate the basis vectors in $`\mathcal{B}`$. This way $`q_{\mathrm{min}}`$ computes $`|z\rangle `$, the program $`b`$ computes $`\mathcal{B}`$. Now since the vectors of $`\mathcal{B}`$ are mutually orthogonal $$\sum _{|e\rangle \in \mathcal{B}}|\langle z|e\rangle |^2=1.$$ Since $`|x\rangle `$ is one of the basis vectors we have that $`-\mathrm{log}(|\langle z|x\rangle |^2)`$ is the length of a prefix code (the Shannon-Fano code) to compute $`|x\rangle `$ from $`|z\rangle `$ and $`\mathcal{B}`$. Denoting this code by $`r`$ we have that the concatenation $`q_{\mathrm{min}}br`$ is a program to compute $`|x\rangle `$: parse it into $`q_{\mathrm{min}},b,`$ and $`r`$ using the self-delimiting property of $`q_{\mathrm{min}}`$ and $`b`$. Use $`q_{\mathrm{min}}`$ to compute $`|z\rangle `$ and use $`b`$ to compute $`\mathcal{B}`$, determine the probabilities $`|\langle z|e\rangle |^2`$ for all basis vectors $`|e\rangle `$ in $`\mathcal{B}`$. Determine the Shannon-Fano code words for all the basis vectors from these probabilities. Since $`r`$ is the code word for $`|x\rangle `$ we can now decode $`|x\rangle `$. Therefore, $$l(q_{\mathrm{min}})+\lceil -\mathrm{log}(|\langle z|x\rangle |^2)\rceil \ge l(p_{\mathrm{min}})-K(\mathcal{B}|y)-O(1)$$ which was what we had to prove.
$`\mathrm{\square }`$

###### Corollary 1

On classical objects (that is, the natural numbers or finite binary strings that are all directly computable) the quantum Kolmogorov complexity coincides up to a fixed additional constant with the self-delimiting Kolmogorov complexity since $`K(\mathcal{B}|n)=O(1)`$ for the standard classical basis $`\mathcal{B}=\{0,1\}^n`$. <sup>11</sup><sup>11</sup>11 This proof does not show that it coincides up to an additive constant term with the original plain complexity defined by Kolmogorov, , based on Turing machines where the input is delimited by distinguished markers. The same proof for the plain Kolmogorov complexity shows that it coincides up to a logarithmic additive term. (We assume that the information about the dimensionality of the Hilbert space is given conditionally.)

###### Remark 1

Fixed additional constants are no problem since the complexity also varies by fixed additional constants due to the choice of reference universal Turing machine. $`\mathrm{\square }`$

### 2.2 Upper Bound on Complexity

A priori, in the worst case $`K(|x\rangle |n)`$ is possibly $`\mathrm{\infty }`$. We show that the worst case has a $`2n`$ upper bound.

###### Lemma 1

For all $`n`$-qubit quantum states $`|x\rangle `$ we have $`K(|x\rangle |n)\le 2n+O(1)`$. Proof. For every state $`|x\rangle `$ in the $`N:=2^n`$-dimensional Hilbert space with basis vectors $`|e_0\rangle ,\mathrm{\ldots },|e_{N-1}\rangle `$ we have $`\sum _{i=0}^{N-1}|\langle e_i|x\rangle |^2=1`$. Hence there is an $`i`$ such that $`|\langle e_i|x\rangle |^2\ge 1/N`$. Let $`p`$ be a $`K(i|n)+O(1)`$-bit program to construct a basis state $`|e_i\rangle `$ given $`n`$. Then $`l(p)\le n+O(1)`$. Then $`K(|x\rangle |n)\le l(p)-\mathrm{log}(1/N)\le 2n+O(1)`$. $`\mathrm{\square }`$

### 2.3 Computability

In the classical case Kolmogorov complexity is not computable but can be approximated from above by a computable process. The non-cloning property prevents us from copying an unknown pure quantum state given to us. Therefore, an approximation from above that requires checking every output state against the target state destroys the latter. To overcome the fragility of the pure quantum target state one has to postulate that it is available as an outcome in a measurement.

###### Theorem 3

Let $`|x\rangle `$ be the pure quantum state we want to describe. (i) The quantum Kolmogorov complexity $`K(|x\rangle )`$ is not computable. (ii) If we can repeatedly execute the projection $`|x\rangle \langle x|`$ and perform a measurement with outcome $`|x\rangle `$, then the quantum Kolmogorov complexity $`K(|x\rangle )`$ can be approximated from above by a computable process with arbitrarily small probability of error $`\alpha `$ of giving a too small value. Proof. The uncomputability follows a fortiori from the classical case. The semicomputability follows because we have established an upper bound on the quantum Kolmogorov complexity, and we can simply enumerate all halting classical programs up to that length by running their computations dovetailed fashion. The idea is as follows: Let the target state be $`|x\rangle `$ of $`n`$ qubits. Then, $`K(|x\rangle |n)\le 2n+O(1)`$. (The unconditional case $`K(|x\rangle )`$ is similar with $`2n`$ replaced by $`2(n+\mathrm{log}n)`$.) We want to identify a program $`x^{*}`$ such that $`p:=x^{*}`$ minimizes $`l(p)-\mathrm{log}|\langle x|U(p,n)\rangle |^2`$ among all candidate programs. To identify it in the limit, for some fixed $`k`$ satisfying (2) below for given $`n,\alpha ,ϵ`$, repeat the computation of every halting program $`p`$ with $`l(p)\le 2n+O(1)`$ at least $`k`$ times and perform the assumed projection and measurement.
For every halting program $`p`$ in the dovetailing process we estimate the probability $`q:=|\langle x|U(p,n)\rangle |^2`$ from the fraction $`m/k`$: the fraction of $`m`$ positive outcomes out of $`k`$ measurements. The probability that the estimate $`m/k`$ is off from the real value $`q`$ by more than an $`ϵq`$ is given by Chernoff’s bound: for $`0\le ϵ\le 1`$, $$P(|m-qk|>ϵqk)\le 2e^{-ϵ^2qk/3}.$$ (1) This means that the probability that the deviation $`|m/k-q|`$ exceeds $`ϵq`$ vanishes exponentially with growing $`k`$. Every candidate program $`p`$ satisfies (1) with its own $`q`$ or $`1-q`$. There are $`O(2^{2n})`$ candidate programs $`p`$ and hence also $`O(2^{2n})`$ outcomes $`U(p,n)`$ with halting computations. We use this estimate to upper bound the probability of error $`\alpha `$. For given $`k`$, the probability that some halting candidate program $`p`$ satisfies $`|m-qk|>ϵqk`$ is at most $`\alpha `$ with $$\alpha \le \underset{U(p,n)<\mathrm{\infty }}{\sum }2e^{-ϵ^2qk/3}.$$ The probability that no halting program does so is at least $`1-\alpha `$. That is, with probability at least $`1-\alpha `$ we have $$(1-ϵ)q\le \frac{m}{k}\le (1+ϵ)q$$ for every halting program $`p`$. It is convenient to restrict attention to the case that all $`q`$’s are large. Without loss of generality, if $`q<\frac{1}{2}`$ then consider $`1-q`$ instead of $`q`$. Then, $$\mathrm{log}\alpha \le 2n-(ϵ^2k\mathrm{log}e)/6+O(1).$$ (2) The approximation algorithm is as follows: Step 0: Set the required degree of approximation $`ϵ<1/2`$ and the number of trials $`k`$ to achieve the required probability of error $`\alpha `$. Step 1: Dovetail the running of all candidate programs until the next halting program is enumerated. Repeat the computation of the new halting program $`k`$ times. Step 2: If there is more than one program $`p`$ that achieves the current minimum then choose the program with the smaller length (and hence least number of successful observations). If $`p`$ is the selected program with $`m`$ successes out of $`k`$ trials then set the current approximation of $`K(|x)`$ to $$l(p)-\mathrm{log}\frac{m}{(1+ϵ)k}.$$ This exceeds the proper value of the approximation based on the real $`q`$ instead of $`m/k`$ by at most 1 bit for all $`ϵ<1`$. Step 3: Goto Step 1. $`\mathrm{}`$ ### 2.4 Incompressibility ###### Definition 2 A pure quantum state $`|x`$ is computable if $`K(|x)<\mathrm{\infty }`$. Hence all finite-dimensional pure quantum states are computable. We call a pure quantum state directly computable if there is a program $`p`$ such that $`U(p)=|x`$. The standard orthonormal basis—consisting of all $`n`$-bit strings—of the $`2^n`$-dimensional Hilbert space $`\mathcal{H}_N`$ has at least $`2^n(1-2^{-c})`$ basis vectors $`|e_i`$ that satisfy $`K(|e_i|n)\ge n-c`$. This is the standard counting argument. But what about nonclassical orthonormal bases? ###### Lemma 2 There is a (possibly nonclassical) orthonormal basis of the $`2^n`$-dimensional Hilbert space $`\mathcal{H}_N`$ such that at least $`2^n(1-2^{-c})`$ basis vectors $`|e_i`$ satisfy $`K(|e_i|n)\ge n-c`$. Proof. Every orthonormal basis of $`\mathcal{H}_N`$ has $`2^n`$ basis vectors and there are at most $`m\le \sum _{i=0}^{n-c-1}2^i=2^{n-c}-1`$ programs of length less than $`n-c`$. Hence there are at most $`m`$ programs available to approximate the basis vectors. We construct an orthonormal basis satisfying the lemma: The set of directly computed pure quantum states $`|x_0,\mathrm{\dots },|x_{m-1}`$ span an $`m^{}`$-dimensional subspace $`𝒜`$ with $`m^{}\le m`$ in the $`2^n`$-dimensional Hilbert space $`\mathcal{H}_N`$ such that $`\mathcal{H}_N=𝒜\oplus 𝒜^{}`$.
Here $`𝒜^{}`$ is a $`(2^n-m^{})`$-dimensional subspace of $`\mathcal{H}_N`$ such that every vector in it is perpendicular to every vector in $`𝒜`$. We can write every element $`|x\in \mathcal{H}_N`$ as $$\underset{i=0}{\overset{m^{}-1}{\sum }}\alpha _i|a_i+\underset{i=0}{\overset{2^n-m^{}-1}{\sum }}\beta _i|b_i$$ where the $`|a_i`$’s form an orthonormal basis of $`𝒜`$ and the $`|b_i`$’s form an orthonormal basis of $`𝒜^{}`$ so that the $`|a_i`$’s and $`|b_i`$’s form an orthonormal basis $`K`$ for $`\mathcal{H}_N`$. For every directly computable state $`|x_j\in 𝒜`$ and basis vector $`|b_i\in 𝒜^{}`$ we have $`|\langle x_j|b_i\rangle |^2=0`$ implying that $`K(|x_j|n)-\mathrm{log}|\langle x_j|b_i\rangle |^2=\mathrm{\infty }`$ and therefore $`K(|b_i|n)>n-c`$ ($`0\le j<m,0\le i<2^n-m^{}`$). This proves the lemma. $`\mathrm{}`$ We generalize this lemma to arbitrary bases: ###### Theorem 4 Every orthonormal basis $`|e_0,\mathrm{\dots },|e_{2^n-1}`$ of the $`2^n`$-dimensional Hilbert space $`\mathcal{H}_N`$ has at least $`2^n(1-2^{-c})`$ basis vectors $`|e_i`$ that satisfy $`K(|e_i|n)\ge n-c`$. Proof. Use the notation of the proof of Lemma 2. Assume to the contrary that there are $`>2^{n-c}`$ basis vectors $`|e_i`$ with $`K(|e_i|n)<n-c`$. Then at least two of them, say $`|e_0`$ and $`|e_1`$, and some pure quantum state $`|x`$ directly computed from a $`<(n-c)`$-length program satisfy $$K(|e_i|n)=K(|x|n)-\mathrm{log}|\langle e_i|x\rangle |^2.$$ (3) ($`i=0,1`$). This means that $`K(|x|n)<n-c-1`$ since not both $`|e_0`$ and $`|e_1`$ can be equal to $`|x`$. Hence for every directly computed pure quantum state of complexity $`n-c-1`$ there is at most one basis state of the same complexity (in fact only if that basis state is identical with the directly computed state.) Now eliminate all directly computed pure quantum states $`|x`$ of complexity $`n-c-1`$ together with the basis states $`|e`$ that stand in relation Equation 3. We are now left with $`>2^{n-c-1}`$ basis states that stand in relation of Equation 3 with the at most $`2^{n-c-1}-1`$ remaining directly computable pure quantum states of complexity $`n-c-2`$. Repeating the same argument we end up with $`>1`$ basis vector that stands in relation of Equation 3 with 0 directly computable pure quantum states of complexity $`0`$ which is impossible. $`\mathrm{}`$ ###### Corollary 2 The uniform probability $`\mathrm{Pr}\{|x:K(|x|n)\ge n-c\}\ge 1-1/2^c`$. ###### Example 1 We elucidate the role of the $`-\mathrm{log}|\langle x|z\rangle |^2`$ term. Let $`x`$ be a random classical string with $`K(x)\ge l(x)`$ and let $`y`$ be a string obtained from $`x`$ by complementing one bit. It is known (Exercise 2.2.8 in ) that for every such $`x`$ of length $`n`$ there is such a $`y`$ with complexity $`K(y|n)=n-\mathrm{log}n+O(1)`$. Now let $`|z`$ be a pure quantum state which has classical bits except the difference bit between $`x`$ and $`y`$ that has equal probabilities of being observed as “1” and as “0.” We can prepare $`|z`$ by giving $`y`$ and the position of the difference bit (in $`\mathrm{log}n`$ bits) and therefore $`K(|z|n)\le n+O(1)`$. Since from $`|z`$ we have probability $`\frac{1}{2}`$ of obtaining $`x`$ by observing the particular bit in superposition and $`K(x|n)\ge n`$ it follows $`K(|z|n)\ge n-O(1)`$ and therefore $`K(|z|n)=n+O(1)`$. From $`|z`$ we have probability $`\frac{1}{2}`$ of obtaining $`y`$ by observing the particular bit in superposition which (correctly) yields that $`K(y|n)\le n+O(1)`$.
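$`\mathrm{}`$

A small numerical check of the $`-\mathrm{log}|\langle x|z\rangle |^2`$ bookkeeping in Example 1 (our illustration; the string, its length and the flipped position are arbitrary choices):

```python
import numpy as np

x = "1011"                 # classical target string, n = 4
pos = 2                    # position of the bit in which x and y differ
y = x[:pos] + ("0" if x[pos] == "1" else "1") + x[pos + 1:]

# |z>: every bit definite except bit `pos`, which is (|0> + |1>)/sqrt(2),
# so exactly the two basis strings x and y carry amplitude 1/sqrt(2)
amps = {x: 1 / np.sqrt(2), y: 1 / np.sqrt(2)}

p = abs(amps[x]) ** 2      # probability of observing x when measuring |z>
print(-np.log2(p))         # -> 1.0: the single extra bit in describing x via |z>
```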
### 2.5 Conditional Complexity We have used the conditional complexity $`K(|x|y)`$ to mean the minimum sum of the length of a classical program to compute $`|z`$ plus the negative logarithm of the probability of outcome $`|x`$ when executing projection $`|x\rangle \langle x|`$ on $`|z`$ and measuring, given the pure quantum state $`y`$ as input on a separate input tape. In the quantum situation the notion of inputs consisting of pure quantum states is subject to very special rules. Firstly, if we are given an unknown pure quantum state $`|y`$ as input it can be used only once, that is, it is irrevocably consumed and lost in the computation. It cannot be copied or cloned without destroying the original. This phenomenon is subject to the so-called no-cloning theorem and means that there is a profound difference between giving a directly computable pure quantum state as a classical program or giving it literally. Given as a classical program we can prepare and use arbitrarily many copies of it. Given as an (unknown) pure quantum state in superposition it can be used as start of a computation only once—unless of course we deal with an identity computation in which the input state is simply transported to the output state. This latter computation nonetheless destroys the input state. If an unknown state $`|y`$ is given as input (in the conditional for example) then the no-cloning theorem of quantum computing says it can be used only once. Thus, for a non-classical pure quantum state $`|x`$ we have $$K(|x,|x||x)=K(|x)+O(1)$$ rather than $`K(x,x|x)=O(1)`$ as in the case for classical objects $`x`$. This holds even if $`|x`$ is directly computable but is given in the conditional in the form of an unknown pure quantum state. However, if $`|x`$ is directly computable and the conditional is a classical program to compute this directly computable state, then that program can be used over and over again. In the previous example, if the conditional $`|x`$ is directly computable, for example by a classical program $`p`$, then we have both $`K(|x|p)=O(1)`$ and $`K(|x,|x|p)=O(1)`$. In particular, for a classical program $`p`$ that computes a directly computable state $`|x`$ we have $$K(|x,|x|p)=O(1).$$ It is important here to notice that a classical program for computing a directly computable quantum state carries more information than the directly computable quantum state itself—much like a shortest program for a classical object carries more information than the object itself. In the latter case it consists in partial information about the halting problem. In the quantum case of a directly computable pure state we have the additional information that the state is directly computable and in case of a shortest classical program additional information about the halting problem. ### 2.6 Sub-Additivity Quantum Kolmogorov complexity of directly computable pure quantum states in simple orthonormal bases is sub-additive: ###### Lemma 3 For directly computable $`|x,|y`$ both of which belong to (possibly different) orthonormal bases of Kolmogorov complexity $`O(1)`$ we have $$K(|x,|y)\le K(|x||y)+K(|y)$$ up to an additive constant term. Proof. By Theorem 2 there is a program $`p_y`$ to compute $`|y`$ with $`l(p_y)=K(|y)`$ and a program $`p_{yx}`$ to compute $`|x`$ from $`|y`$ with $`l(p_{yx})=K(|x||y)`$ up to additional constants. Use $`p_y`$ to construct two copies of $`|y`$ and $`p_{yx}`$ to construct $`|x`$ from one of the copies of $`|y`$.
The separation between these concatenated binary programs is taken care of by the self-delimiting property of the subprograms. The additional constant term takes care of the couple of $`O(1)`$-bit programs that are required. $`\mathrm{}`$ ###### Remark 2 In the classical case we have equality in the theorem (up to an additive logarithmic term). The proof of the remaining inequality, as given in the classical case, doesn’t hold directly for the quantum case. It would require a decision procedure that establishes equality between two pure quantum states without error. While the sub-additivity property holds in case of directly computable states, it is easy to see that for the general case of pure states the subadditivity property fails due to the “non-cloning” property. For example for pure states $`|x`$ that are not “clonable” we have: $$K(|x,|x)>K(|x||x)+K(|x)=K(|x)+O(1).$$ $`\mathrm{}`$ We additionally note: ###### Lemma 4 For all directly computable pure states $`|x`$ and $`|y`$ we have $`K(|x,|y)\le K(|y)-\mathrm{log}|\langle x|y\rangle |^2`$ up to an additive logarithmic term. Proof. $`K(|x||y)\le -\mathrm{log}|\langle x|y\rangle |^2`$ by the proof of Theorem 2. Then, the lemma follows by Lemma 3. $`\mathrm{}`$ ## 3 Qubit Descriptions One way to avoid two-part descriptions as we used above is to allow qubit programs as input. This leads to the following definitions, results, and problems. ###### Definition 3 The qubit complexity of $`|x`$ with respect to quantum Turing machine $`Q`$ with $`y`$ as conditional input given for free is $$KQ_Q(|x|y):=\underset{p}{\mathrm{min}}\{l(|p):Q(|p,y)=|x\}$$ where $`l(|p)`$ is the number of qubits in the qubit specification $`|p`$, $`|p`$ is an input quantum state, $`y`$ is given conditionally, and $`|x`$ is the quantum state produced by the computation $`Q(|p,y)`$: the target state that one describes. Note that here too there are two possible interpretations for the computation relation $`Q(|p,y)=|x`$. In the narrow interpretation we require that $`Q`$ with $`|p`$ on the input tape and $`y`$ on the conditional tape halts with $`|x`$ on the output tape. In the wide interpretation we require that for every precision $`\delta >0`$ the computation of $`Q`$ with $`|p`$ on the input tape and $`y`$ on the conditional tape and $`\delta `$ on a tape where the precision is to be supplied halts with $`|x^{}`$ on the output tape and $`|\langle x|x^{}\rangle |^2\ge 1-\delta `$. Additionally one can require that the approximation finishes in a certain time, say, polynomial in $`l(|x)`$ and $`1/\delta `$. In the remainder of this section we can allow either interpretation (note that the “narrow” complexity will always be at least as large as the “wide” complexity). Fix an enumeration of quantum Turing machines like in Theorem 1, this time with Turing machines that use qubit programs. Just like before it is now straightforward to derive an Invariance Theorem: ###### Theorem 5 There is a universal machine $`U`$ such that for all machines $`Q`$ there is a constant $`c`$ (the length of a self-delimiting encoding of the index of $`Q`$ in the enumeration) such that for all quantum states $`|x`$ we have $`KQ_U(|x|y)\le KQ_Q(|x|y)+c`$. We fix once and for all a reference universal quantum Turing machine $`U`$ and express the qubit quantum Kolmogorov complexity as $`KQ(|x|y):=KQ_U(|x|y),`$ $`KQ(|x):=KQ_U(|x|ϵ),`$ where $`ϵ`$ indicates the absence of conditional information (the conditional tape contains the “quantum state” with 0 qubits). We now have immediately: ###### Lemma 5 $`KQ(|x)\le l(|x)+O(1)`$. Proof.
Give the reference universal machine $`|1^n0|x`$ as input where $`n`$ is the index of the identity quantum Turing machine that transports the attached pure quantum state $`|x`$ to the output. $`\mathrm{}`$ It is possible to define unconditional $`KQ`$-complexity in terms of conditional $`K`$-complexity as follows: Even for pure quantum states that are not directly computable from effective descriptions we have $`K(|x||x)=O(1)`$. This naturally gives: ###### Lemma 6 The qubit quantum Kolmogorov complexity of $`|x`$ satisfies $$KQ(|x)=\underset{p}{\mathrm{min}}\{l(|p):K(|x||p)\}+O(1),$$ where $`l(|p)`$ denotes the number of qubits in $`|p`$. Proof. Transfer the conditional $`|p`$ to the input using an $`O(1)`$-bit program. $`\mathrm{}`$ We can generalize this definition to obtain conditional $`KQ`$-complexity. ### 3.1 Potential Problems of Qubit Complexity While it is clear that (just as with the previous approach) the qubit complexity is not computable, it is unknown to the author whether one can approximate the qubit complexity from above by a computable process in any meaningful sense. In particular, the dovetailing approach we used in the first approach now doesn’t seem applicable due to the non-countability of the potential qubit program candidates. While it is clear that the qubit complexity of a pure quantum state is at least 1, why would it need to be more than one qubit since the probability amplitude can be any complex number? In case the target pure quantum state is a classical binary string, as observed by Harry Buhrman, Holevo’s theorem tells us that on average one cannot transmit more than $`n`$ bits of classical information by $`n`$-qubit messages (without using entangled qubits on the side). This suggests that for every $`n`$ there exist classical binary strings of length $`n`$ that have qubit complexity at least $`n`$. This of course leaves open the case of the non-classical pure quantum states—a set of measure one—and of how to prove incompressibility of the overwhelming majority of states. These matters have since been investigated by A. Berthiaume, S. Laplante, and W. van Dam (paper in preparation). ## 4 Real Descriptions A final version of quantum Kolmogorov complexity uses computable real parameters to describe the pure quantum state with complex probability amplitudes. This requires two reals per complex probability amplitude, that is, for $`n`$ qubits one requires $`2^{n+1}`$ real numbers in the worst case. Since every computable real number may require a separate program, a computable $`n`$-qubit state may require $`2^{n+1}`$ finite programs. While this approach does not allow the development of a clean theory in the sense of the previous approaches, it can be directly developed in terms of algorithmic thermodynamics—an extension of Kolmogorov complexity to randomness of infinite sequences (such as binary expansions of real numbers) in terms of coarse-graining and sequential Martin-Löf tests, completely analogous to Peter Gács’ theory. ## Acknowledgement The ideas presented in this paper were developed from 1995 through early 1998. Other interests prevented me from earlier publication. I thank Harry Buhrman, Richard Cleve, Wim van Dam, Barbara Terhal, John Tromp, and Ronald de Wolf for discussions and comments on QKC.
no-problem/9907/cond-mat9907228.html
ar5iv
text
# Stripe Phases in High Temperature Superconductors \[ ## Abstract Stripe phases are predicted and observed to occur in a class of strongly-correlated materials describable as doped antiferromagnets, of which the copper-oxide superconductors are the most prominent representative. The existence of stripe correlations necessitates the development of new principles for describing charge transport, and especially superconductivity, in these materials. \] Thirteen years ago, the discovery of superconductivity in layered copper-oxide compounds came as a great surprise, not only because of the record-high transition temperatures, but also because these materials are relatively poor conductors in the “normal” (i.e., nonsuperconducting) state. Indeed, these superconductors are obtained by electronically doping “parent” compounds that are antiferromagnetic Mott insulators, materials in which both the antiferromagnetism and the insulating behavior are the result of strong electron-electron interactions. Since local magnetic correlations survive in the metallic compounds, it is necessary to view these materials as doped antiferromagnets. A number of other related materials, such as the layered nickelates (which remain insulating when doped) and manganites (the “colossal” magnetoresistance materials), are also doped antiferromagnets, in this sense. The conventional quantum theory of the electronic structure of solids, which has been outstandingly successful at describing the properties of good electrical conductors (metals such as Cu and Al) and semiconductors (such as Si and Ge), treats the electronic excitations as a weakly interacting gas. This approach, known as “Fermi Liquid Theory”, breaks down when applied to doped antiferromagnets. New principles must be developed to deal with these problems, which are at the core of the study of “strongly correlated electronic systems”, one of the central and most intellectually rich branches of contemporary physics. One idea that has evolved over the last decade, and which offers a framework for interpreting a broad range of experimental results on copper-oxide superconductors and related systems, is the concept of a stripe phase. A stripe phase is one in which the doped charges are concentrated along spontaneously generated domain walls between antiferromagnetic insulating regions. Stripe phases occur as a compromise between the antiferromagnetic interactions among magnetic ions and the Coulomb interactions between charges (both of which favor localized electrons) and the zero-point kinetic energy of the doped holes (which tends to delocalize charge). Experimentally, stripe phases are most clearly detected in insulating materials (where the stripe order is relatively static), but there is increasingly strong evidence of fluctuating stripe correlations in metallic and superconducting compounds. The existence of dynamic stripes, in turn, forces one to consider new mechanisms for charge transport and for superconductivity. More generally, we will show that the concept of electronic stripe phases developed for transition-metal oxides is applicable to a broad range of materials. Theoretical Background. Doped antiferromagnets are a particularly important and well-studied class of strongly-correlated electronic materials. Here, the parent compound is insulating, even at elevated temperatures, because of the strong short-range repulsion between electrons. 
At sufficiently low temperatures, antiferromagnetic order develops in which there is a non-zero average magnetic moment on each site pointing in a direction that alternates from site to site. (See Fig. 1.) Frequently the doping process, “hole doping”, involves chemically modifying the material so that a small fraction of electrons is removed from the insulating antiferromagnet. Whereas the charge distribution in a doped semiconductor is homogeneous, in a doped antiferromagnet the added charge forms clumps: solitons in one dimension, linear “rivers of charge” in two dimensions, and planes of charge in three dimensions, as exemplified by organic conductors, cuprates or nickelates, and manganites respectively. Typically, these clumps form what are known as “topological defects” across which there is a change in the phase of the background spins or orbital degrees of freedom. In $`d`$ dimensions, the defects are $`(d1)`$-dimensional extended objects. Stripes in a two-dimensional (2D) system are illustrated schematically in Fig. 1. Self-organized local inhomogeneities were predicted theoretically. They arise because the electrons tend to cluster in regions of suppressed antiferromagnetism which produces a strong, short-range tendency to phase separation that is frustrated by the long-range Coulomb interaction. The best compromise between these competing imperatives is achieved by allowing the doped holes to be delocalized along linear stripes, while the intervening regions remain more-or-less in the undoped correlated insulating state. Experimental Evidence for Stripes. The most direct evidence for stripe phases in doped antiferromagnets has come from neutron scattering studies. Diffraction of a neutron beam by long-period spin and charge density modulations, extending over a few unit cells, as indicated in Fig. 1, yields extra Bragg peaks. The position of such a superstructure peak measures the spatial period and orientation of the corresponding density modulation, while the intensity provides a measure of the modulation amplitude. Since neutrons have no charge, they do not scatter directly from the modulated electron density, but instead are scattered by the ionic displacements induced by the charge modulation. The lattice modulation is also measurable with electron and x-ray diffraction. The antiferromagnetic order found in the parent compounds of the cuprate superconductors is destroyed rapidly as holes are introduced by doping. The first indications of long-period (“incommensurate”) spin-density modulations were provided by inelastic neutron scattering studies of superconducting La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, and by related measurements on the insulating nickelate analog. Following the discovery of “incommensurate” charge ordering in the latter system by electron diffraction, the proper connection between the magnetic and charge-order peaks was determined in a neutron diffraction study of La<sub>2</sub>NiO<sub>4.125</sub>. The positions of the observed peaks indicate that the charge stripes run diagonally through the NiO<sub>2</sub> layers (as opposed to the vertical stripes shown in Fig. 1). More recent experiments on La<sub>2-x</sub>Sr<sub>x</sub>NiO<sub>4</sub> have shown that the diagonal stripe ordering occurs for doping levels up to $`x\frac{1}{2}`$ (corresponding to a hole density of 1 for every 2 Ni sites), with the maximum ordering temperatures occurring at $`x=\frac{1}{3}`$. 
It is significant that the charge ordering is always observed at a higher temperature than the magnetic ordering, which is characteristic of a transition that is driven by the charge. It is also important to note that the period of the charge order is generally temperature dependent, which means that the hole concentration along each stripe also varies with temperature; this behavior is characteristic of structures that arise from competing interactions. These observations are consistent with the idea that the stripes are generated by the competition between the clustering tendency of the holes and the long-range Coulomb interactions. \[Weak density-wave order can occur in conventional solids under special conditions (“nested Fermi surfaces”), but the transitions tend to be “spin driven” and occur at a fixed “nesting” wave vector.\] Charge order is most easily detected when stripes are static, but perfect static charge order can be shown to be incompatible with the metallic behavior of the cuprates. Nevertheless, to get a better experimental handle on the charge order, one might hope to pin down fluctuating stripes with a suitably anisotropic distortion of the crystal structure. Just such a distortion of the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> structure is obtained by partial substitution of Nd for La. Neutron diffraction measurements on a Nd-doped crystal with the special Sr concentration of $`x\frac{1}{8}`$ revealed charge and spin order, consistent with the vertical stripes of Fig. 1. (An anomalous suppression of superconductivity, associated with the lattice distortion, is maximum for $`x\frac{1}{8}`$.) The charge order has since been confirmed by high-energy x-ray diffraction. As in the nickelates, the spin ordering occurs at lower temperatures than the charge order, and the hole concentration on a stripe varies as a function of the Sr concentration, $`x`$. Although it has been difficult to observe a direct signature of charge stripes in other cuprate families, the existing neutron scattering studies of magnetic correlations are certainly most easily understood in terms of the stripe-phase concept. The doping dependence of dynamic magnetic correlations in Nd-free La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> is found to be essentially the same as the static correlations in Nd-doped samples, and a comprehensive study of a Nd-free sample near “optimum” doping (i.e., maximum superconducting transition temperature) indicates that ordering may be prevented by quantum fluctuations. To keep things interesting, static magnetic order has been observed to set in near the superconducting transition temperature in La<sub>2</sub>CuO<sub>4+δ</sub>. Finally, a beautiful experiment on superconducting YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> has shown that the low energy magnetic correlations in that system have strong similarities with those in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. An example of planar domain walls in a 3D system occurs in nearly-cubic La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with $`x0.5`$. Charge order has been imaged by transmission electron microscopy. The ordering phenomena are somewhat more complex in this case because the occupied Mn $`3d`$ orbitals are degenerate. As a consequence, charge, spin, and orbital ordering are all involved, although, again, charge order sets in at a higher temperature than magnetic order. Electronic Liquid Crystals. 
Once the idea of stripe phases of a two-dimensional doped insulator has been established, a major question arises: How can a stripe phase become a high temperature superconductor, as in the cuprates, rather than an insulator, as in the nickelates? Typically, interactions drive quasi one-dimensional metals to an insulating ordered charge density wave (CDW) state at low temperatures (and quenched disorder only enhances the insulating tendency). However, we have shown that the CDW instability is eliminated and superconductivity is enhanced if the transverse stripe fluctuations have a large enough amplitude. To satisfy this condition, the stripes could oscillate in time or be static and meandering. They are then electronic (and quantum mechanical) analogues of classical liquid crystals and, as such, they constitute new states of matter, which can be either high temperature superconductors or two-dimensional anisotropic unconventional metals. Classical liquid crystals are phases that are intermediate between a liquid and a solid, and spontaneously break the symmetries of free space. Electronic liquid crystals are quantum analogues of these phases in which the ground state is intermediate between a liquid, where quantum fluctuations are large, and a crystal, where they are small. Because the electrons exist in a solid, it is the symmetry of the host crystal that is spontaneously broken, rather than the symmetry of free space. An electronic liquid crystal has the following phases: (i) a liquid, which breaks no spatial symmetries and, in the absence of disorder, is a conductor or a superconductor; (ii) a nematic, or anisotropic liquid, which breaks the rotation symmetry of the lattice and has an axis of orientation; (iii) a smectic, which breaks translational symmetry in one direction and, otherwise is an electron liquid; (iv) an insulator with the character of an electronic solid or glass. These classifications applied to stripe phases make the stripe notion, which is based on local electronic correlations, macroscopically precise. Neutron and x-ray scattering experiments give direct evidence of electronic liquid crystal phases (conducting stripe ordered phases) in the cuprate superconductors. Charge Transport. In the standard theory of solids, the electron’s kinetic energy is treated as the largest energy in the problem, and the effects of electron-electron interactions are introduced as an afterthought. As a consequence, the electronic states in normal solids are highly structured in momentum space (k-space), and therefore, according to the uncertainty principle, they are highly homogeneous in real space. Moreover, as the “normal” (metallic) state is continuously connected to the ground state of the kinetic energy, any phase transition to a low-temperature ordered phase is necessarily driven by the potential energy, inasmuch as it involves a gain in the interaction energy between electrons at a smaller cost of kinetic energy. For transport properties, the central concept of a mean free path $`l`$, i.e. the distance an electron travels between collisions, is well defined so long as $`l`$ is much larger than the electron’s de Broglie wavelength, $`\lambda _F`$, at the Fermi energy. A number of interesting synthetic metals discovered in the past few decades seem to violate the conventional theory. 
They are “bad metals” in the sense that their resistivities, $`\rho (T)`$, have a metallic temperature dependence \[$`\rho (T)`$ increases with the temperature $`T`$\] but the mean free path, inferred from the data by a conventional analysis, is shorter than $`\lambda _F`$, so the concept of a state in momentum space would be ill defined. Among the materials in question are the cuprate high temperature superconductors; other oxides including the ruthenates, the nickelates and the “colossal magnetoresistance materials” (manganites); organic conductors; and alkali-doped $`C_{60}`$. Most of these materials are doped correlated insulators, in which the short-range repulsive interaction between electrons is the largest energy in the system. However, the ground state of this part of the Hamiltonian is not unique, so the kinetic energy cannot simply be treated as a perturbation; such materials display substantial structure in both real space and momentum space. As a consequence, the conventional theory must be abandoned. Neither the kinetic energy nor the potential energy is totally dominant, and both must be treated on an equal footing. Superconductivity. The highly successful theory of superconductivity developed by Bardeen, Cooper, and Schrieffer in the fifties was designed for good metals, not for doped insulators. A key issue, therefore, is the relation of stripes to the mechanism of high temperature superconductivity. In fact, there is a strong empirical case for an intimate relation between these phenomena: (i) strongly condensed stripe order can suppress superconductivity (as it does in La<sub>1.6-y</sub>Nd<sub>y</sub>Sr<sub>x</sub>CuO<sub>4</sub>), (ii) weak stripe ordering can, at times, appear at the superconducting transition temperature $`T_c`$ (as it does in La<sub>2</sub>CuO<sub>4+δ</sub>), (iii) there is a simple, linear relation between the inverse stripe spacing and the superconducting $`T_c`$ observed in several materials (including La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>), and (iv) stripe structure and other features of the doped insulator, together with high temperature superconductivity, disappear as the materials emerge from the doped-insulator regime (“over doping”). Moreover, there is a clear indication that the optimal situation for high temperature superconductivity is stripe correlations that are not too static or strongly condensed, but also not too ethereal or wildly fluctuating. We have argued that the driving force for the physics of the doped insulator is the reduction of the zero-point kinetic energy. This proceeds in three steps: (i) the development of an array of metallic stripes lowers the kinetic energy along a stripe, (ii) hopping of pairs of electrons perpendicular to a stripe in the CuO<sub>2</sub> planes creates spin pairs on and in the immediate neighborhood of a stripe, and (iii) at a lower temperature, pair hopping between stripes creates the phase coherence that is essential for superconductivity. Steps ii and iii lower the kinetic energy of motion perpendicular to a stripe. Generality of the Stripe Concept. The physics of charge clustering in doped correlated insulators is general and robust, so one might expect that local stripe structures would appear in other related systems. 
Indeed, topological doping has long been documented in the case of quasi one dimensional charge-density-wave (CDW) systems, such as polyacetylene; it is an interesting open question whether it occurs in other higher dimensional systems. One recent, fascinating discovery is the observation that, under appropriate circumstances, quantum Hall systems (i.e. an ultra-clean 2D electron gas in a high magnetic field) spontaneously develop a large transport anisotropy on cooling below 150 mK. It is likely that this anisotropy is related to stripe formation on short length scales, and it apparently reflects the existence of an electronic nematic phase in this system. Stripe-like structures have also been observed (ref. 43 and references therein) in many other systems with competing interactions, on widely differing length scales. Beyond this generality, the existence of spontaneously generated local structures is clearly important for understanding all of the electronic properties of synthetic metals, including the anomalous charge transport and the mechanism of high temperature superconductivity. Many of these implications have already been explored in considerable detail, but many remain to be discovered. Here we content ourselves with a few general observations. The phenomena described above represent a form of “dynamical dimension reduction” whereby, over a substantial range of temperatures and energies, a synthetic metal will behave, electronically, as if it were of lower dimensionality. This observation has profound implications because conventional charge transport occurs in a high-dimensional state, and fluctuation effects are systematically more important in lower dimensions. In particular, in the quasi-two dimensional high temperature superconductors, stripes provide a mechanism for the appearance of quasi-one dimensional electronic physics, where conventional transport theory fails, and is replaced by such key notions as separation of charge and spin, and solitonic quasi-particles. At the highest temperatures (up to 1000 K), in what is often called the “normal state” of the high temperature superconductors, where coherent stripe-like structures are unlikely to occur, it is still probable that local charge inhomogeneities occur due to the strong tendency of holes in an antiferromagnet to phase separate. This behavior can lead to quasi zero-dimensional physics (quantum impurity model physics), which also produces a host of interesting, and well documented quantum critical phenomena and may be at the heart of much of the anomalous normal state behavior of these systems.
no-problem/9907/adap-org9907004.html
ar5iv
text
# A dynamical model of non regulated markets The main focus of this work is to understand the dynamics of non regulated markets. The present model can describe the dynamics of any market where the pricing is based on supply and demand. It will be applied here, as an example, to the German stock market represented by the Deutscher Aktienindex (DAX), which is a measure for the market status. The duality of the present model consists of the superposition of the two components - the long and the short term behaviour of the market. The long term behaviour is characterised by a stable development which is following a trend for time periods of years or even decades. This long term growth (or decline) is based on the development of fundamental market figures. The short term behaviour is described as a dynamical evaluation (trading) of the market by the participants. The trading process is described as an exchange between supply and demand. In the framework of this model the trading is modelled by a system of nonlinear differential equations. The model also allows to explain the chaotic behaviour of the market as well as periods of growth or crashes. PACS numbers: 01.75.+m, 05.40.+j, 02.50.Le Contribution to the technical seminar 22/12/98, DESY-IfH Zeuthen The traditional approaches of pricing models (indices, stocks, currencies, gold, etc.) are related to combinations of economic figures like profit or cash-flow and their expected development. Indeed, these fundamental figures are related to the approximate price. However, it is well known that similar objects (companies, goods, …) can be priced on the same market quite differently. One can observe quick changes in the pricing, which can’t be explained by any change of the underlying basic figures. The present model consists of two basic components: * (Long term trend) Scaling of the price (index) based on the long term development of basic figures * (Short term trend) Pricing by the exchange between buyers (optimists), sellers (pessimists) and neutral market members Studying, for example, the DAX $`I`$ for a time period of one decade one will recognise that the basic trend $`I_0(t)`$ shows an exponential behaviour with deviations (fig. 1.). This trend can be presented as: $$I_0(t)=\widehat{I}_0e^{\lambda t},\text{ with }\widehat{I}_0,\lambda =const.$$ (1) The parameter $`\widehat{I}_0`$ is the starting value: $`\widehat{I}_0\equiv I_0(t=0)`$. The growth rate $`\lambda `$ can vary between different markets. This parameter summarises all basic influences on the market, such as economic freedom, taxes, socio-economic parameters, infrastructure and others. Comparing different markets one will find that certain economies are growing (US, Europe) while others are declining over years (Japan <sup>1</sup><sup>1</sup>1A long term decline of national economies is often caused by massive regulations, reducing the economic freedom.). The values of the parameters $`\widehat{I}_0`$ and $`\lambda `$ can be fitted from the historical market data using the least squares method. The development $`I_0(t)`$ symbolises the average growth of the economy which is measured in various economic figures. The growth in (1) fulfils the Euler equation, describing the “natural” growth of unlimited systems: $$I_0^{}(t)-\lambda I_0(t)=0$$ (2) where $`y^{}(t)\equiv \frac{d}{dt}y(t)`$. The function (1) describes a real growth process.
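A minimal sketch of this least-squares determination of $`\widehat{I}_0`$ and $`\lambda `$ (our illustration: synthetic data stand in for the historical DAX series, and a log-linear fit is one standard way to implement the least squares method mentioned above):

```python
import numpy as np

# synthetic "historical" index: exponential trend with random deviations
t = np.linspace(0.0, 10.0, 500)                 # time in years
rng = np.random.default_rng(1)
I = 1000.0 * np.exp(0.08 * t + rng.normal(0.0, 0.05, t.size))

# least-squares fit of log I(t) = log I0_hat + lambda * t, cf. equation (1)
lam, log_I0 = np.polyfit(t, np.log(I), 1)
I0_hat = np.exp(log_I0)
print(f"I0_hat = {I0_hat:.1f}, lambda = {lam:.4f} per year")

# normalised index i(t) = I(t)/I0(t), used in the normalisation below
i = I / (I0_hat * np.exp(lam * t))
```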
Since there is no universal pricing model, the individual evaluations by the market participants differ and the price deviates from the fundamental average. These different evaluations, which change in time, lead to some kind of spontaneous oscillations. Since each market has a different scale, it is useful to normalise the market index (price) to make different markets better comparable: $$I(t)\to i(t)=\frac{I(t)}{I_0(t)}$$ (3) The function (3) performs a normalisation which will project all indices of real markets to a unitarian index $`i`$ with a constant basic trend: $`i_0(t)\equiv 1`$ and $`\lambda =0`$. This way the development of markets can be compared in a single scheme. For further discussions it is necessary to define the market structure. A market is the totality of all market members participating in the trading process. The total number of market members on normalised markets (3) is constant. The normalised DAX can be found in (fig. 2.). As already mentioned above, the subjective evaluations of the market status differ from each other. The market participants can be separated into three groups: optimists, pessimists and neutral market participants. Each group has a certain concentration which evolves in time $`c_k(t)`$. Based on the normalisation there is: $$c_o(t)+c_p(t)+c_n(t)=1,$$ (4) with $`c_o(t)`$, $`c_p(t)`$ and $`c_n(t)`$ as the corresponding concentrations <sup>2</sup><sup>2</sup>2The concentration is the weighted average of individual market members with a similar market view, but a different capitalization $$c_i(t)=\frac{1}{M(t)}\underset{k=1}{\overset{N_i}{\sum }}m_k(t)$$ where $`i=o,p,n`$ represents the corresponding market views of optimists, pessimists and neutral market members. $`m`$ is their individual capital and $`M`$ the total market capitalization. $`N_i`$ is the total number of individual market members with the same market view. The dynamics of the market is a result of the development of the $`c_k(t)`$ and the index $`i(t)`$. Each market group has certain features and reacts to market changes in a different way: * Optimists consider the market to be priced low. They want to buy. * Pessimists consider the market to be priced high. They want to sell. * Neutral market members consider the market to be priced fair. They are passive. The groups have different sizes. Comparing a typical daily trading volume with the total market capitalisation one will find that it is orders of magnitude smaller ($`<1\%`$). This leads to the following relation between the concentrations: $$c_o(t),c_p(t)\ll c_n(t).$$ (5) Using (4) the dimension of the problem reduces from 3 to 2 independent functions $`c_o(t)`$ and $`c_p(t)`$. The system dynamics can be written in the form of a system of differential equations: $$c_k^{}(t)=L_k(c_o(t),c_p(t),t),k=o,p$$ (6) Now it is necessary to describe in $`L`$ the structure of the market drivers, which determine the dynamics of trading. On non regulated markets the price is determined by supply and demand. The ratio of the concentrations of optimists and pessimists defines the price level. In general the functional relation between the concentrations of different market members and the index $`i`$ can be expressed in the following form: $$i(t)=f\left(\frac{c_o(t)}{c_p(t)}\right).$$ (7) At present it is not possible to derive the explicit form of $`f`$ from economic principles. The function $`f`$ expresses the subjective evaluations of market participants. Here and in the following, extensive use will be made of Taylor’s theorem.
Unknown functions will be expanded in Taylor series in order to parametrise them. Since the higher order terms of each expansion will be neglected, it is possible to define the function $`f`$ in the following form: $$i(t)=f\left(\frac{c_o(t)}{c_p(t)}\right)\approx \frac{c_o(t)}{c_p(t)}$$ (8) In the equilibrium state equation (8) gives sensible results: $$c_p(t)=c_o(t)\Rightarrow i(t)=i_0(t)\equiv 1$$ (9) After defining the basic concepts, we now study the development of the concentrations $`c_o(t)`$ and $`c_p(t)`$. Their changes in time can be expressed by the following system of equations: $`c_o^{}(t)`$ $`=`$ $`F_o(\mathrm{\Delta }i(t))+\xi _oU(t),`$ $`c_p^{}(t)`$ $`=`$ $`F_p(\mathrm{\Delta }i(t))-\xi _pU(t),`$ $`\mathrm{\Delta }i(t)`$ $`=`$ $`i(t)-1`$ (10) The system describes the exchange of concentrations as functions of the current index and of external influences. The functions $`F_k(\mathrm{\Delta }i(t)),k=o,p`$ describe the subjective evaluations of the market members as a function of supply (pessimists) and demand (optimists). The function $`U(t)`$ represents an “external field”. It models effects that influence the market, but which are not related to the present value of the index $`i(t)`$. Typical external influences could be related to interest rates, taxes, political events or persons. The constants $`\xi _k`$ describe the difference in perception of external influences by the different market groups. The external influences lead to periods of continued optimism or depression, as they are observed on real markets. In general the functions $`F_k`$ in (10) are unknown. They will be expanded in Taylor series around the equilibrium state $`i_0`$: $$F_k(\mathrm{\Delta }i(t))=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\alpha _{k,n}\left[\mathrm{\Delta }i(t)\right]^n$$ (11) In the following, the approach used is: $$F_k(\mathrm{\Delta }i(t))=\alpha _{k,1}\mathrm{\Delta }i(t)+O([\mathrm{\Delta }i(t)]^2).$$ (12) In the case without external influences, $`U(t)=0`$, it makes sense to assume that the system is symmetric concerning optimism and pessimism. Otherwise the system would follow a systematic trend, which has been already taken into account in (4). This leads to the relation $$F_p(\mathrm{\Delta }i(t))=-F_o(\mathrm{\Delta }i(t))\equiv -F(\mathrm{\Delta }i(t)).$$ (13) On ideal markets the perception of external influences would be symmetric too. Real markets show deviations from this symmetry, $`\xi _o\ne \xi _p`$. Performing a redefinition $`U(t)\to \xi _oU(t)`$ one can substitute the $`\xi `$ such that $`\xi _o=1`$ and $`\frac{\xi _p}{\xi _o}=1+\epsilon `$, where $`\epsilon `$ is an empirical parameter defining the asymmetry of perception of optimists and pessimists. Based on several reasonable assumptions, it has become possible to construct a nonlinear system of differential equations that reflects the market dynamics: $`c_o^{}(t)-\alpha \left[c_o(t)c_p^{-1}(t)-1\right]-U(t)=0,`$ $`c_p^{}(t)+\alpha \left[c_o(t)c_p^{-1}(t)-1\right]+(1+\epsilon )U(t)=0,`$ (14) with the starting conditions: $$c_o(0)=c_{o0},c_p(0)=c_{p0}.$$ (15) and $`\alpha \equiv \alpha _1`$. The equations of system (14) describe the principal relation between the concentrations and the market index, where the exchange between the concentration levels can be performed in infinitesimally small steps (continuum limit). That means that the ideal market would react to infinitesimally small deviations from the equilibrium with infinitesimally small trading reactions (exchange of fractions of stocks).
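A short sketch integrating system (14) with a simple Euler scheme (our illustration: the parameter values, the step size and the sign choice are ours; with the signs as written in (14), the linearised deviation obeys $`\frac{d}{dt}\mathrm{\Delta }i\approx 2\alpha \mathrm{\Delta }i/c_p`$, so a negative $`\alpha `$ damps deviations while a positive $`\alpha `$ amplifies them):

```python
import numpy as np

alpha, eps = -0.005, 0.1      # response strength (negative: mean-reverting)
U = lambda t: 0.0             # external field switched off here

dt, n_steps = 0.1, 2000
c_o, c_p = 0.012, 0.010       # starting conditions (15)
i_path = np.empty(n_steps)

for n in range(n_steps):
    di = c_o / c_p - 1.0      # deviation Delta i = i - 1, using (8)
    c_o += dt * ( alpha * di + U(n * dt))
    c_p += dt * (-alpha * di - (1.0 + eps) * U(n * dt))
    i_path[n] = c_o / c_p

print(i_path[0], i_path[-1])  # the index relaxes towards equilibrium i = 1
```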
Such infinitesimal adjustment is not possible on real markets, which react with the exchange of finite-sized trading units. This causes discontinuous changes of the index. Each new trading process is related to the former trading process which itself has caused a change of the index. One can realize this discontinuous trading behaviour by transforming the system of differential equations (14) into a system of logistic equations, where the trading process is described as a sequence of finite exchange transactions: $`c_o^{(n+1)}`$ $`=`$ $`c_o^{(n)}+\mathrm{\Delta }c_o^{(n)},`$ $`c_p^{(n+1)}`$ $`=`$ $`c_p^{(n)}+\mathrm{\Delta }c_p^{(n)},`$ $`\mathrm{\Delta }c_o^{(n)}`$ $`=`$ $`\alpha \left[c_o^{(n)}\left(c_p^{(n)}\right)^{-1}-1\right]+U^{(n)},`$ $`\mathrm{\Delta }c_p^{(n)}`$ $`=`$ $`-\alpha \left[c_o^{(n)}\left(c_p^{(n)}\right)^{-1}-1\right]-(1+\epsilon )U^{(n)},`$ $`U^{(n)}`$ $`\equiv `$ $`U(t_n),`$ $`n`$ $`=`$ $`0,1,\mathrm{\dots }`$ (16) with the starting conditions $$c_o^{(0)}=c_o(0),c_p^{(0)}=c_p(0).$$ (17) We now show the results of the application of the model to real markets. First, growth periods and crashes are studied, which are observed regularly on all financial markets. Using historical data of the DAX one can find that the growth periods are caused by an exponentially growing external optimism $$U(t)=U_0\left(e^{\beta (t-t_0)}-1\right)\to U^{(n)}=U_0\left(e^{\beta (t_n-t_0)}-1\right).$$ (18) As one can see in fig. 3, an exponentially growing external optimism leads to an exponentially growing index $`i(t)`$. Starting from a certain deviation the system starts to generate oscillations and becomes unstable. This fact may cause pessimism (or even panic) in a self-reinforcing process. After some time this leads to a collapse of the market. Therefore crashes are not only the result of changes in the external influences $`U`$, but they are caused by the internal instability when the system is far from the equilibrium. Even if the external optimism continued growing, the system would start to collapse starting from a critical deviation (DAX: critical deviation at $`\pm 35\%`$). An external potential of the type (18) is mathematically equivalent to a redefinition of the long term trend $`I_0(t)`$: $$I_0(t)=\widehat{I_0}e^{\beta (t-t_0)}\to I_0^{}(t)=\widehat{I_0^{}}e^{\beta ^{}(t-t_0)},\beta ^{}>\beta $$ (19) This “excited” state usually exists only for a certain time period, until the system reaches the critical deviation. After the onset of the collapse the external optimism vanishes and the system returns to the equilibrium state. This behaviour can be found in the historical data of the DAX and other markets. Phases of continuous growth over several months are followed by phases of decline. All of these periods show an exponential behaviour. The market system is very sensitive to changes in the neutral component of the market $`c_n`$. Relatively small external influences on the neutral component become enhanced by a leverage effect on the index. This effect is caused by the different orders of magnitude of the concentrations (5): $$\mathrm{\Delta }i(t)\propto \frac{c_n(t)}{c_{p,o}(t)}\mathrm{\Delta }U(t),\frac{c_n(t)}{c_{p,o}(t)}\sim 100\mathrm{\dots }1000$$ (20) Another essential feature of the dynamics of markets is the chaotic behaviour, for example in the daily changes of the index. The reason for the appearance of chaos is the feedback of the market to itself. The strength of response to deviations from the equilibrium is described by the model parameter $`\alpha `$.
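The discrete dynamics can be explored directly; here is a minimal sketch iterating system (16) (our illustration: with the signs as written above, a negative $`\alpha `$ produces the mean-reverting overcompensation described in the text, and all numerical values are our own choices):

```python
import numpy as np

def simulate(alpha, eps=0.0, U=lambda n: 0.0, n_steps=50,
             c_o0=0.011, c_p0=0.010):
    """Iterate the logistic trading system (16); return the index path."""
    c_o, c_p = c_o0, c_p0
    path = np.empty(n_steps)
    for n in range(n_steps):
        di = c_o / c_p - 1.0              # deviation of the index from 1
        c_o += alpha * di + U(n)          # finite trading transaction
        c_p += -alpha * di - (1.0 + eps) * U(n)
        path[n] = c_o / c_p
    return path

# weak response: deviations are damped; near |2 alpha / c_p| ~ 2 each
# transaction (over)compensates the previous one and the index oscillates
for a in (-0.002, -0.010):
    p = simulate(a)
    print(f"alpha = {a:+.3f}: i after 50 transactions -> {p[-4:].round(3)}")
```

Beyond this regime the overcompensation grows from transaction to transaction and the linear truncation (12) breaks down, which corresponds to the instability discussed above.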
Fig. 4 shows examples of the development of the market system (16) in dependence of $`\alpha `$. In fig. 4a the response of the market is relatively small, so that the market compensates the deviation after several transactions. If $`\alpha `$ reaches a critical value (fig. 4b) the reaction to a deviation $`\mathrm{\Delta }i`$ is so strong that it creates a new deviation of the same size but opposite sign. As a result the system starts to oscillate. A further increase of $`\alpha `$ causes a permanent overcompensation of the market deviations. The system becomes chaotic (fig. 4c). The parameter $`\alpha `$ is proportional to the volatility of markets. It is worth remarking that the market shows a typical feature of nonlinear problems - fractal patterns. The basic trend over years or decades has an exponential behaviour. The different fragments (medium term trends) have an exponential behaviour as well (but a different growth rate). In this work the model was applied to financial markets, but it can be generalised to all markets which are based on supply and demand. The model describes the long and the short term dynamics of markets within a single theoretical framework, using a few empirical parameters. The model can describe crashes as phase transitions, caused by its internal instability. Important features of real markets like chaotic behaviour and a fractal structure are described by a system of nonlinear differential equations. Using this model it is possible to determine basic parameters which can describe the status of the market in both the short and the long term trend. I would like to thank Gerhardt Bohm and Klaus Behrndt for their helpful support and discussion.
no-problem/9907/hep-ex9907055.html
ar5iv
text
Recently the WA102 collaboration has published the results of partial wave analyses of the centrally produced $`K^+K^{}`$, $`K_S^0K_S^0`$, $`\pi ^+\pi ^{}`$ and $`\pi ^0\pi ^0`$ channels. A striking feature of these analyses was the result that the $`f_J(1710)`$ has J = 0 (we shall refer to it as the $`f_0(1710)`$ hereafter). In these papers the S-wave from each channel was fitted independently using interfering Breit-Wigners and a background. In this present paper we will first show how the resulting parameters change if a different method of fitting is used, namely, a T-Matrix analysis and a K-Matrix analysis using the methods described in ref. . We then show that eventhough the lowest order analyses differ, the results are consistent. Next we will perform a coupled channel fit to the $`K^+K^{}`$ and $`\pi ^+\pi ^{}`$ final states in order to determine the pole positions and branching ratios of the observed mesons. Finally we will present information on the production kinematics of these resonances. In our previous publication a fit has been performed to the $`\pi ^+\pi ^{}`$ S-wave using a coherent sum of relativistic Breit-Wigner functions and a background of the form: $$A(M_{\pi \pi })=Bgd(M_{\pi \pi })+\underset{n=1}{\overset{N_{res}}{\sum }}a_ne^{i\theta _n}BW_n(M_{\pi \pi })$$ where the background has been parameterised as $$Bgd(M_{\pi \pi })=\alpha (M_{\pi \pi }-2m_\pi )^\beta e^{-\gamma M_{\pi \pi }-\delta M_{\pi \pi }^2}$$ where $`a_n`$ and $`\theta _n`$ are the amplitude and the phase of the $`n`$-th resonance respectively, $`\alpha `$, $`\beta `$, $`\gamma `$ and $`\delta `$ are real parameters, and $`BW(M_{\pi \pi })`$ is the relativistic Breit-Wigner function for a spin zero resonance. In order to describe the centrally produced $`\pi ^+\pi ^{}`$ mass spectrum the function $`|A(M_{\pi \pi })|^2`$ has been multiplied by the kinematical factor $`(M_{\pi \pi }^2-4m_\pi ^2)^{1/2}/M_{\pi \pi }^3`$. The resulting function is then convoluted with a Gaussian to account for the experimental mass resolution. In this present paper we use the Flatté formula to describe the $`f_0(980)`$; this is referred to as Method I. For the $`\pi ^+\pi ^{}`$ channel the Breit-Wigner has the form: $$BW(M_{\pi \pi })=\frac{m_0\sqrt{\mathrm{\Gamma }_i}\sqrt{\mathrm{\Gamma }_\pi }}{m_0^2-m^2-im_0(\mathrm{\Gamma }_\pi +\mathrm{\Gamma }_K)}$$ and in the $`K^+K^{}`$ channel the Breit-Wigner has the form: $$BW(M_{KK})=\frac{m_0\sqrt{\mathrm{\Gamma }_i}\sqrt{\mathrm{\Gamma }_K}}{m_0^2-m^2-im_0(\mathrm{\Gamma }_\pi +\mathrm{\Gamma }_K)}$$ where $`\mathrm{\Gamma }_i`$ is absorbed into the intensity of the resonance. $`\mathrm{\Gamma }_\pi `$ and $`\mathrm{\Gamma }_K`$ describe the partial widths of the resonance to decay to $`\pi \pi `$ and $`K\overline{K}`$ and are given by $$\begin{array}{c}\mathrm{\Gamma }_\pi =g_\pi (m^2/4-m_\pi ^2)^{1/2}\\ \\ \mathrm{\Gamma }_K=g_K/2[(m^2/4-m_{K^+}^2)^{1/2}+(m^2/4-m_{K^0}^2)^{1/2}]\end{array}$$ where $`g_\pi `$ and $`g_K`$ are the squares of the coupling constants of the resonance to the $`\pi \pi `$ and $`K\overline{K}`$ systems. The resulting fit is shown in fig. 1a) for the entire mass spectrum and in fig. 1b) for masses above 1 GeV.
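A short numerical sketch of this Flatté parameterisation (our illustration: the mass and the couplings $`g_\pi `$, $`g_K`$ below are typical $`f_0(980)`$-like numbers, not the fit results of this paper, and $`\mathrm{\Gamma }_i`$ is absorbed into the overall intensity as in the text):

```python
import numpy as np

m_pi, m_Kp, m_K0 = 0.1396, 0.4937, 0.4977  # masses in GeV
m0, g_pi, g_K = 0.98, 0.19, 0.40           # illustrative f0(980) parameters

def flatte_pipi(m):
    """Flatte amplitude BW(M_pipi); m is the pi pi invariant mass in GeV."""
    # partial widths; below the K Kbar threshold the square root turns
    # imaginary, which analytically continues Gamma_K
    G_pi = g_pi * np.emath.sqrt(m**2 / 4 - m_pi**2)
    G_K = g_K / 2 * (np.emath.sqrt(m**2 / 4 - m_Kp**2)
                     + np.emath.sqrt(m**2 / 4 - m_K0**2))
    return m0 * np.emath.sqrt(G_pi) / (m0**2 - m**2
                                       - 1j * m0 * (G_pi + G_K))

m = np.linspace(0.4, 1.4, 6)
print(np.abs(flatte_pipi(m))**2)           # S-wave intensity, up to Gamma_i
```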
The sheet II pole positions for the resonances are
| $`f_0(980)`$ | M $`=(983\pm 8)-i(58\pm 11)`$ MeV |
| $`f_0(1370)`$ | M $`=(1306\pm 18)-i(111\pm 23)`$ MeV |
| $`f_0(1500)`$ | M $`=(1502\pm 12)-i(65\pm 12)`$ MeV |
| $`f_0(1710)`$ | M $`=(1748\pm 22)-i(73\pm 22)`$ MeV |
These parameters are consistent with the PDG values for these resonances. To test the sensitivity of these results to the fitting method used we have also performed a fit to the $`\pi ^+\pi ^{}`$ mass spectrum using the T-Matrix parameterisation of Zou and Bugg. The invariant amplitude for $`\pi ^+\pi ^{}`$ central production can be expressed as $$A=\alpha _1(s)T_{11}+\alpha _2(s)T_{21}$$ (1) where $`T_{11}`$ and $`T_{21}`$ are the invariant amplitudes for elastic $`\pi \pi \to \pi \pi `$ and $`K\overline{K}\to \pi \pi `$ scattering and are parameterised by $$T_{11}=\frac{e^{2i\varphi }-1}{2i\rho _1}+\frac{g_1e^{2i\varphi }}{M_R^2-s-i(\rho _1g_1+\rho _2g_2)}$$ (2) $$T_{21}=\frac{\sqrt{g_1g_2}e^{i\varphi }}{M_R^2-s-i(\rho _1g_1+\rho _2g_2)}$$ (3) where $`\rho _1=(1-4m_\pi ^2/s)^{1/2}`$ and $`\rho _2=(1-4m_K^2/s)^{1/2}`$ are phase space factors and $`s`$ is the invariant mass squared of the $`\pi ^+\pi ^{}`$ channel. The background term is presumed to be coupled only to the $`\pi \pi `$ channel and has the form $$T_b=\frac{e^{2i\varphi }-1}{2i\rho _1}=\frac{1}{A(s)-i\rho _1}$$ which satisfies the unitarity condition. $`A(s)`$ is an arbitrary real function which has been taken to be of the form $$A(s)=\frac{1+a_1s+a_2s^2}{b_1(s-m_\pi ^2/2)+b_2s^2}$$ The real functions $`\alpha _i(s)`$ in equation (1) describe the coupling of the initial state to the channel $`i`$. These functions are approximated by the power expression: $$\alpha _i(s)=\underset{n=0}{\sum }\alpha _i^n\left\{\frac{s}{4m_K^2}\right\}^n$$ where the factor $`4m_K^2`$ is introduced as a convenient scaling. It has been found that $`n`$ = 3 is sufficient to describe the S-wave distribution. In order to describe the centrally produced $`\pi ^+\pi ^{}`$ mass spectrum, the function $`|A(M_{\pi \pi })|^2`$ has been multiplied by the kinematical factor $`(M_{\pi \pi }^2-4m_\pi ^2)^{1/2}/M_{\pi \pi }^3`$ and the resulting function is then convoluted with a Gaussian to account for the experimental mass resolution. The resulting fit is shown in fig. 1c) for the entire mass spectrum and in fig. 1d) for masses above 1 GeV. As can be seen the fit does not describe well the region above 1.0 GeV. The sheet II pole corresponding to the $`f_0(980)`$ is M $`=(993\pm 8)-i(38\pm 9)`$ MeV. There are two poles from the background term with $`M_1=(388\pm 55)-i(223\pm 28)`$ MeV and $`M_2=(1541\pm 32)-i(143\pm 21)`$ MeV. The first pole may be associated with the low mass S-wave.
The NA12/2 collaboration have previously shown that this region may be interpreted as being due to the $`\sigma `$ particle. The second pole would appear to be in the region of the $`f_0(1500)`$, but the width is very broad. This is due to the fact that the fit is not able to describe the region around 1.3 GeV properly. Adding one more term to equations (2) and (3) to describe the 1300 MeV mass region improves the fit considerably but still does not describe the region around 1700 MeV. In order to produce a satisfactory fit, two terms have to be added to equations (2) and (3) to account for the $`f_0(1370)`$ and $`f_0(1710)`$, which results in the fit shown in fig. 1e) for masses above 1 GeV. The new sheet II pole positions are

| Resonance | Sheet II pole position |
| --- | --- |
| $`f_0(980)`$ | M = (992 ± 6) − i (52 ± 9) MeV |
| $`f_0(1370)`$ | M = (1310 ± 30) − i (134 ± 23) MeV |
| $`f_0(1500)`$ | M = (1497 ± 17) − i (82 ± 21) MeV |
| $`f_0(1710)`$ | M = (1752 ± 15) − i (53 ± 12) MeV |

These parameters are consistent with the values from the fit using interfering Breit-Wigners and with the PDG values for these resonances. An alternative parameterisation is to use the K-matrix formalism. In this case the Lorentz invariant T-matrix is expressed as

$$\widehat{T}=\widehat{K}(1-i\widehat{\rho }\widehat{K})^{-1}$$

where, for the case of $`\pi \pi `$ and $`K\overline{K}`$ final states, $`\widehat{\rho }`$ is a 2-dimensional diagonal matrix and $`\widehat{K}`$ is a real symmetric 2-dimensional matrix of the form

$$K_{ij}=\frac{a_ia_j}{M_R^2-s}+\frac{b_ib_j}{s_b-s}+\gamma _{ij}$$

In the K-matrix formalism the background is assumed to couple to both the $`\pi \pi `$ and $`K\overline{K}`$ channels. In order to describe the centrally produced $`\pi ^+\pi ^-`$ mass spectrum, the function $`|A(M_{\pi \pi })|^2`$ has been multiplied by the kinematical factor $`(M_{\pi \pi }-4m_\pi ^2)^{1/2}/M_{\pi \pi }^3`$ and the resulting function is then convoluted with a Gaussian to account for the experimental mass resolution. Two coupled channel resonances are found from the fit, shown in fig. 1f) for the entire mass spectrum and in fig. 1g) for masses above 1 GeV, with their sheet II T-matrix poles at $`M_1`$ = (988 ± 18) − i (39 ± 12) MeV and $`M_2`$ = (1526 ± 22) − i (191 ± 53) MeV. As in the case of the T-matrix analysis, the fit fails in the 1.3 GeV region. Adding one additional pole improves the fit. However, in order to obtain a satisfactory fit it is found necessary to include two extra poles.
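A sketch of this K-matrix construction for the coupled $`\pi \pi `$/$`K\overline{K}`$ system, again with placeholder parameters and with $`\widehat{\rho }`$ the diagonal matrix of the phase space factors $`\rho _1`$ and $`\rho _2`$ defined earlier, could read:

```python
import numpy as np

M_PI, M_K = 0.1396, 0.4957

def t_matrix(s, poles, s_b, b, gamma):
    """T = K (1 - i rho K)^{-1}, K_ij = sum_R a_i a_j/(M_R^2 - s) + b_i b_j/(s_b - s) + gamma_ij.
    poles: list of (M_R, (a_pipi, a_KK)); b: (b_pipi, b_KK); gamma: constant 2x2 term."""
    K = np.array(gamma, dtype=complex)
    for MR, a in poles:
        a = np.asarray(a, dtype=float)
        K += np.outer(a, a) / (MR**2 - s)
    b = np.asarray(b, dtype=float)
    K += np.outer(b, b) / (s_b - s)
    rho = np.diag([np.sqrt(1 - 4 * M_PI**2 / s + 0j),
                   np.sqrt(1 - 4 * M_K**2 / s + 0j)])
    return K @ np.linalg.inv(np.eye(2) - 1j * rho @ K)

T = t_matrix(s=0.96, poles=[(0.99, (0.2, 0.4))], s_b=2.0,
             b=(0.3, 0.1), gamma=[[0.1, 0.0], [0.0, 0.1]])
print(T[0, 0], T[1, 0])   # pi pi -> pi pi and K Kbar -> pi pi at s = 0.96 GeV^2
```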
The fit, shown in fig. 1h) for masses above 1 GeV, results in sheet II pole positions of

| Resonance | Sheet II pole position |
| --- | --- |
| $`f_0(980)`$ | M = (982 ± 9) − i (38 ± 16) MeV |
| $`f_0(1370)`$ | M = (1290 ± 30) − i (104 ± 25) MeV |
| $`f_0(1500)`$ | M = (1510 ± 10) − i (56 ± 15) MeV |
| $`f_0(1710)`$ | M = (1709 ± 15) − i (75 ± 18) MeV |

These parameters are consistent with the values from the two previous fits. Finally, in order to perform a coupled channel fit to the $`\pi ^+\pi ^-`$ and $`K^+K^-`$ final states, a correct normalisation of the two data sets has to be performed. The fit has been modified to take into account the relative differences in geometrical acceptance, event reconstruction and event selection. The fit also includes corrections for the unseen decay modes, so that the $`\pi \pi `$ to $`K\overline{K}`$ branching ratio can be calculated. In addition to the resonances discussed above, the $`a_0(980)`$ can also contribute to the $`K\overline{K}`$ S-wave mass spectrum. The contribution from $`a_0(980)\to K^+K^-`$ has been calculated from the observed decay $`a_0(980)\to \eta \pi `$ and the measured branching ratio of the $`a_0(980)`$ . The calculated contribution (500 ± 120 events) has been included in the fit as a histogram. There is no evidence for any $`a_0(1450)`$ contribution in the $`\eta \pi `$ channel and hence it has not been included in the fit to the $`K\overline{K}`$ S-wave. The coupled channel fit has been performed using the three methods described above. The pole positions and branching ratios quoted below for the resonances are a mean from the three methods. The statistical error is the largest error from the three fits, and the systematic error quoted represents the spread of the values from the three methods. The results of the combined fit are shown in fig. 2. The sheet II pole positions are

| Resonance | Sheet II pole position |
| --- | --- |
| $`f_0(980)`$ | M = (987 ± 6 ± 6) − i (48 ± 12 ± 8) MeV |
| $`f_0(1370)`$ | M = (1312 ± 25 ± 10) − i (109 ± 22 ± 15) MeV |
| $`f_0(1500)`$ | M = (1502 ± 12 ± 10) − i (49 ± 9 ± 8) MeV |
| $`f_0(1710)`$ | M = (1727 ± 12 ± 11) − i (63 ± 8 ± 9) MeV |

These parameters are consistent with the PDG values for these resonances. For the $`f_0(980)`$ the couplings were determined to be $`g_\pi `$ = 0.19 ± 0.03 ± 0.04 and $`g_K`$ = 0.40 ± 0.04 ± 0.04.
The branching ratios for the $`f_0(1370)`$, $`f_0(1500)`$ and $`f_0(1710)`$ have been calculated to be:

$$\frac{f_0(1370)\to K\overline{K}}{f_0(1370)\to \pi \pi }=0.46\pm 0.15\pm 0.11$$

$$\frac{f_0(1500)\to K\overline{K}}{f_0(1500)\to \pi \pi }=0.33\pm 0.03\pm 0.07$$

$$\frac{f_0(1710)\to K\overline{K}}{f_0(1710)\to \pi \pi }=5.0\pm 0.6\pm 0.9$$

These values are to be compared with the PDG values of 1.35 ± 0.68 for the $`f_0(1370)`$ and 0.19 ± 0.07 for the $`f_0(1500)`$, which come from the Crystal Barrel experiment . The value for the $`f_0(1710)`$ is consistent with the value of 2.56 ± 0.9 which comes from the WA76 experiment, which assumed J = 2 for the $`f_J(1710)`$ . In our previous publications a study has been performed of the resonance production rate as a function of the difference in the transverse momentum vectors ($`dP_T`$) between the particles exchanged from the fast and slow vertices . It has been observed that all the undisputed $`q\overline{q}`$ states (i.e. $`\eta `$, $`\eta ^{\prime }`$, $`f_1(1285)`$ etc.) are suppressed as $`dP_T`$ goes to zero, whereas the glueball candidates $`f_0(1500)`$ and $`f_2(1950)`$ survive. In order to calculate the contribution of each resonance as a function of $`dP_T`$, the partial waves have been fitted in three $`dP_T`$ intervals with the parameters of the resonances fixed to those obtained from the fits to the total data using Method I. As an example of how the mass spectra change as a function of $`dP_T`$, fig. 3 shows the $`\pi ^+\pi ^-`$ S-wave and D-wave in three $`dP_T`$ intervals. The S-wave clearly shows that the region around 1.3 GeV is more enhanced at large $`dP_T`$, in contrast to the low mass part of the spectrum. This is not due to feed-through from the D-wave, which has been estimated to be less than 3 % in this mass region. In the D-wave the $`f_2(1270)`$ is suppressed at small $`dP_T`$, in contrast to the behaviour of the low mass part of the D-wave. Table 1 gives the percentage of each resonance in three $`dP_T`$ intervals, together with the ratio of the number of events for $`dP_T`$ $`<`$ 0.2 GeV to the number of events for $`dP_T`$ $`>`$ 0.5 GeV for each resonance considered. As can be seen from table 1, the $`\rho ^0(770)`$, $`f_2(1270)`$ and $`f_2^{\prime }(1525)`$ are suppressed at small $`dP_T`$, in contrast to the $`f_0(980)`$, $`f_0(1500)`$ and $`f_0(1710)`$. The azimuthal angle ($`\varphi `$) is defined as the angle between the $`p_T`$ vectors of the two protons. In order to determine the $`\varphi `$ dependence for the resonances observed, the partial waves have been fitted in 30 degree bins of $`\varphi `$ with the parameters of the resonances fixed to those obtained from the fits to the total data. The fraction of each resonance as a function of $`\varphi `$ is plotted in fig. 4. The $`\varphi `$ dependences are clearly not flat, and considerable variation is observed between the different resonances. In order to determine the four momentum transfer ($`t`$) dependence of the resonances observed in the $`\pi ^+\pi ^-`$ and $`K^+K^-`$ channels, the partial waves have been fitted in 0.1 $`\mathrm{GeV}^2`$ bins of $`t`$ with the parameters of the resonances fixed to those obtained from the fits to the total data. Fig. 5 shows the four momentum transfer from one of the proton vertices for these resonances. The distributions for the $`f_0(980)`$, $`f_0(1370)`$, $`f_0(1500)`$, $`f_0(1710)`$ and $`\rho (770)`$ have been fitted with a single exponential of the form $`e^{-b|t|}`$, and the values of $`b`$ found are given in table 2.
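For illustration, a hedged sketch of such a single-exponential fit is given below; the binned $`|t|`$ distribution here is invented for the example and is not WA102 data.

```python
import numpy as np
from scipy.optimize import curve_fit

t_centres = np.arange(0.05, 0.95, 0.1)    # |t| bin centres in GeV^2
counts = np.array([5200, 3100, 1900, 1150, 700, 430, 260, 160, 95])  # illustrative only

def model(t, norm, b):
    # single exponential of the form exp(-b |t|)
    return norm * np.exp(-b * np.abs(t))

popt, pcov = curve_fit(model, t_centres, counts, p0=(8000.0, 5.0), sigma=np.sqrt(counts))
print(f"b = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f} GeV^-2")
```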
The distributions for the $`f_2(1270)`$ and $`f_2^{\prime }(1525)`$ cannot be fitted with a single exponential. Instead they have been fitted to the form

$$\frac{d\sigma }{dt}=\alpha e^{-b_1t}+\beta te^{-b_2t}$$

The parameters resulting from the fit are given in table 3. In a previous publication by the WA76 collaboration the ratios of the production cross sections for the $`\rho (770)`$, $`f_0(980)`$ and $`f_2(1270)`$ were calculated at $`\sqrt{s}`$ = 12.7 and 23.8 GeV. However, the experiment at 300 GeV ($`\sqrt{s}`$ = 23.8 GeV) was only sensitive to $`\varphi `$ angles less than 90 degrees, and the acceptance program that had been used assumed a flat $`\varphi `$ distribution. Hence the cross sections at 300 GeV were underestimated for the $`\rho (770)`$ and $`f_2(1270)`$ and overestimated for the $`f_0(980)`$. After correcting for geometrical acceptances, detector efficiencies, losses due to cuts, and unseen decay modes, the ratios of the cross-sections for the $`\rho (770)`$, $`f_0(980)`$, $`f_2(1270)`$ and, in addition, the $`f_0(1500)`$ at $`\sqrt{s}`$ = 29.1 and 12.7 GeV are given in table 4. The cross sections for the $`f_0(980)`$, $`f_2(1270)`$ and $`f_0(1500)`$ are compatible with being independent of energy, which is consistent with their being produced via double Pomeron exchange . In summary, a coupled channel fit has been performed to the centrally produced $`\pi ^+\pi ^-`$ and $`K^+K^-`$ mass spectra. The pole positions and branching ratios of the $`f_0(980)`$, $`f_0(1370)`$, $`f_0(1500)`$ and $`f_0(1710)`$ have been determined using three different methods which give consistent results. An analysis of the $`dP_T`$ dependence of the resonances observed shows that the undisputed $`q\overline{q}`$ mesons are suppressed at small $`dP_T`$, in contrast to the enigmatic $`f_0(980)`$, $`f_0(1500)`$ and $`f_0(1710)`$. Considerable variation is observed in the $`\varphi `$ distributions of the produced mesons. Acknowledgements This work is supported, in part, by grants from the British Particle Physics and Astronomy Research Council, the British Royal Society, the Ministry of Education, Science, Sports and Culture of Japan (grants no. 04044159 and 07044098), the French Programme International de Cooperation Scientifique (grant no. 576) and the Russian Foundation for Basic Research (grants 96-15-96633 and 98-02-22032).
# Weighted Fixed Points in Self–Similar Analysis of Time Series ## I Introduction Time series analysis and forecasting have a long history and an abundant literature; to mention just a few sources, see Refs. \[1–5\]. When analysing time series, one usually aims at constructing a particular model that can represent the available historical data; after such a model is defined, one can use it for predicting the future. This kind of approach has been found to be quite reasonable for describing sufficiently stable evolution, but it fails in treating large fluctuations like those happening in stock markets. There is a growing understanding that this failure is caused by the principal inability of any given model to take into account the quite irregular evolution of markets, whose calm large-scale development is occasionally interrupted by sudden strong deviations resulting in booms and crashes . Such abrupt changes are not regular cyclic oscillations but rather chaotic events, akin to heterophase fluctuations in statistical systems . Similarly to the latter, strong market fluctuations are also of coherent nature, having their origin in the collective interactions of many trading agents. The coherent collective behaviour of traders is often termed crowd or herd behaviour \[9–11\], which ascribes a negative meaning to it, although one should remember that the process of price formation through the market mechanism is always collective. The motion of stock markets is essentially nonlinear and nonequilibrium, which makes them one of the most complex systems existing in nature, comparable with the human brain. Market crashes are somewhat analogous to critical phenomena in physical systems \[12–14\], with the precursor signals, reminiscent of heterophase fluctuations , being manifested as specific log-periodic oscillations . To our understanding, a market is a nonequilibrium system in which two trends, bearish and bullish, are competing. This competition sometimes results in random fluctuations, all of which are by their nature similar to heterophase fluctuations. The largest among them are called crashes or booms, while the smaller ones, usually accompanying the large fluctuations, are termed precursors and aftershocks, depending on the time of their occurrence with respect to the main large fluctuations. A market crash can also be compared with an avalanche transition between two different metastable states, as happens in random polymers , or with spinodal decomposition. However, these analogies are only qualitative and do not allow one to straightforwardly extend the methods of statistical physics to the quantitative description of markets. A novel approach to analysing and forecasting time series has been suggested recently \[18–20\]. This technique, being based on the self-similar approximation theory \[21–29\], can be called the self-similar analysis of time series. In this approach, instead of trying to construct a particular model imitating the dynamical system generating the time series, we assume that the evolution of the system is on average self-similar. This is the same as saying that the dynamics of the considered system are predominantly governed by its own internal laws, with external noise being a small perturbation. Since the observed time-series data are the product of such a self-governed evolution, information on some kind of self-similarity is hidden in these data. The role of the self-similar analysis is to extract this hidden information.
The way of doing this was advanced in our earlier works \[18–20\], where, however, an important point was missing, related to the intrinsically probabilistic nature of any forecast. Really, arbitrage opportunities, even those assumed to be practically riskless, have to be represented by probabilities . So, for instance, a crash is not a certain deterministic outcome of a bubble: the date of the crash is random , and the magnitude of a crash is also a random variable. Thus, the problem we need to solve is how to construct a priori probabilities characterizing the spectrum of possible forecasts in the frame of the self-similar analysis \[18–20\]. When one tries to model the stochastic process whose realization is a time series by a system of stochastic equations, one can often find the related probabilities as the solution of a Fokker-Planck-type equation. This problem, however, is far from trivial even for seemingly simple linear stochastic processes, which, in the case of multiplicative noise, can exhibit rather unexpected behaviour with large intermittent bursts . And the problem of dealing with nonlinear stochastic equations is incomparably more complicated. Moreover, some people advance the following objection of principle against the belief that all random processes, including those related to markets, can be modelled by stochastic differential or difference equations. The argument is that only relatively stable recurring processes, like seasonal variations, can be successfully modelled by particular equations. In contrast, such intricate organisms as stock markets cannot, because of their extreme complexity, be described over a substantially long period of time by any system of concrete stochastic equations. However, we do not think that stock markets, or any other statistical ensemble of interacting agents, are completely random and absolutely unpredictable. Rather, as any other complex organism, markets do possess some basic self-similar trends, the information on which is hidden in the past data. The aim of analysing time series should be to extract the hidden information about the basic tendencies of the process, whose knowledge would make it possible to forecast at least the near future. Since the analysed time series is usually a realization of a random process, it would be naive to expect that it is always feasible to predict everything for sure. Certainly not! But what is possible, and what should be the main aim of the analysis, is to present a spectrum of admissible forecasts weighted with the corresponding probabilities. In other words, the outcome of an analysis must be not just one number but a set of possible scenarios with probabilities assessing the related risks. In the present paper we make the necessary step in developing the self-similar analysis \[18–20\] by organizing it in a truly statistical form. We define the probabilities of different scenarios and show how the method works by considering several time series. As examples, we choose market time series, which are the most difficult case. And among them, we select events accompanied by the rise and blowing up of so-called bubbles, since such nonmonotonic cases are the hardest to describe. A time-series bubble is an event corresponding to a fast rise of the time-series values which abruptly changes to a burst, that is, to a sudden drop of the values, during a time of the order of the time-series resolution. In general, bubbles are universal and happen in various time series.
The time-series bubbles are mostly discussed in connection with markets, for which they are very common and for many participants quite dramatic. Keeping in mind pictures representing time series, one may talk about bubble temporal structures. ## II Statistical Self-Similar Analysis A time series is an ordered sequence $`X\equiv \{x_n|n=0,1,2,\dots \}`$ which is a representation of a stochastic process with discrete time $`t=0,1,2,\dots `$. A given set $`X_N\equiv \{x_n|n=0,1,2,\dots ,N\}`$ of $`N+1`$ elements representing historical data can be called the data base. The problem we consider is how, with the given data base $`X_N`$, to predict the value $`x_{N+\mathrm{\Delta }t}`$ that will occur at a later time $`t=N+\mathrm{\Delta }t`$. That is, forecasting is a sort of extrapolation procedure for stochastic processes. Let us define the triangle family of subsets of the data base $`X_N`$ in the following way:

$$𝚽_0\equiv \{\phi _{00}=x_N\},$$

$$𝚽_1\equiv \{\phi _{10}=x_{N-1},\phi _{11}=x_N\},$$

$$𝚽_2\equiv \{\phi _{20}=x_{N-2},\phi _{21}=x_{N-1},\phi _{22}=x_N\},$$ (1)

$$\vdots $$

$$𝚽_N\equiv \{\phi _{N0}=x_0,\phi _{N1}=x_1,\dots ,\phi _{NN}=x_N\}=X_N.$$

The sequence $`\{𝚽_k\}_{k=0}^N`$ of the subsets $`𝚽_k`$ forms a tower, since

$$𝚽_k\subset 𝚽_{k+1}\qquad (k=0,1,2,\dots ,N-1).$$

The ordered family (1) will be termed the data tower. For each member $`𝚽_k`$ of the data tower (1), we introduce a polynomial function

$$f_k(t)\equiv \sum _{n=0}^ka_nt^n\qquad (0\le t\le k)$$ (2)

of a continuous variable $`t`$, with the coefficients $`a_n`$ defined by the algebraic system of equations

$$f_k(n)=\phi _{kn}\qquad (n=0,1,2,\dots ,k).$$ (3)

This polynomial function uniquely represents the data, from $`x_{N-k}=\phi _{k0}`$ to $`x_N=\phi _{kk}`$, pertaining to the subset $`𝚽_k`$. Then predicting the values $`x_{N+\mathrm{\Delta }t}`$ of the time series $`X`$ is equivalent to extrapolating the function (2) to the region $`t>k`$. As a tool for extrapolation we employ the self-similar exponential approximants . To this end, starting with the polynomial function (2), we construct the nested exponential

$$F_k(t,\tau )=a_0\mathrm{exp}\left(\frac{a_1}{a_0}t\mathrm{exp}\left(\frac{a_2}{a_1}t\mathrm{}\mathrm{exp}\left(\frac{a_k}{a_{k-1}}\tau t\right)\right)\mathrm{}\right),$$ (4)

in which $`\tau \ge 0`$ is a control function playing the role of the minimal time necessary for reaching a fixed point. This control function $`\tau `$ will be called the control time. It is convenient here to use the fixed-point equation in the form of the minimal-difference condition

$$F_k(t,\tau )-F_{k-1}(t,\tau )=0\qquad (k\ge 2).$$ (5)

This equation defines the control time $`\tau _k(t)`$ as a function of $`t`$ for $`k\ge 2`$. For $`k=1`$, we put $`\tau _1\equiv 1`$. With the form (4), equation (5) results in the equation

$$\tau =\mathrm{exp}\left(\frac{a_k}{a_{k-1}}t\tau \right).$$ (6)

When $`a_k/a_{k-1}\le 0`$, Eq. (6) always possesses one real solution $`\tau _k(t)`$. But when $`a_k/a_{k-1}>0`$, there may be one, two, or no real solutions. If we have two real solutions, we need to select the minimal of them, remembering that $`\tau `$ is, by definition, the minimal time necessary for reaching a fixed point. If Eq. (6) has no real solutions, two ways are admissible. One would be to look for a minimum of the difference $`|F_k-F_{k-1}|`$, instead of accepting Eq. (5).
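Since the construction above is completely explicit, it can be sketched in a few lines of code. The following is a minimal sketch, not code from Refs. \[18–20\]: the coefficients $`a_n`$ are obtained by solving the Vandermonde system (3), the nested exponential (4) is evaluated from the innermost level outwards, and Eq. (6) is solved through the Lambert W function, with the branch chosen so as to give the minimal positive $`\tau `$ when two real solutions exist.

```python
import numpy as np
from scipy.special import lambertw

def poly_coeffs(data):
    """Coefficients a_0..a_k of f_k with f_k(n) = data[n] for n = 0..k (equation (3))."""
    k = len(data) - 1
    V = np.vander(np.arange(k + 1), k + 1, increasing=True)
    return np.linalg.solve(V, np.asarray(data, dtype=float))

def nested_exp(a, t, tau):
    """F_k(t, tau) of equation (4), evaluated from the innermost exponential out."""
    k = len(a) - 1
    inner = np.exp(a[k] / a[k - 1] * tau * t)
    for j in range(k - 1, 0, -1):
        inner = np.exp(a[j] / a[j - 1] * t * inner)
    return a[0] * inner

def control_time(a, t):
    """Minimal positive real solution of tau = exp((a_k/a_{k-1}) t tau), for k >= 2;
    the text sets tau_1 = 1. Returns None when equation (6) has no real solution."""
    c = a[-1] / a[-2] * t
    if c == 0:
        return 1.0
    w = lambertw(-c, k=0)        # principal branch gives the minimal tau when real
    if abs(w.imag) > 1e-9:
        return None              # no real solution; the fallbacks described above apply
    return float((-w / c).real)

data = [85.938, 93.750, 100.0]   # the subset Phi_2 of Example 1 in Section III below
a = poly_coeffs(data)
tau = control_time(a, t=3.0)     # t = k + Delta t with k = 2, Delta t = 1
print(nested_exp(a, 3.0, tau))   # approx. 109.4, cf. f_2*(3) = 109.355 in Table 1
```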
Another way, when there is no exact solution of Eq. (6), is to define an approximate solution to Eq. (5) by iterating the latter as follows:

$$F_k(t,\tau )=F_{k-1}(t,\tau _{k-1}),$$

which, with $`\tau _{k-1}(t)`$ known, defines an approximate value for $`\tau _k(t)`$. After the control time is found, substituting it into the nested exponential (4), we obtain the self-similar approximant

$$f_k^{*}(t)\equiv F_k(t,\tau _k(t)).$$ (7)

This form can be used for extrapolating the polynomial function (2) to times $`t>k`$. Thus, with a given data base $`X_N`$, we can construct a spectrum $`\{f_k^{*}(t)\}`$ of $`N`$ different forecasts suggesting different scenarios for the future behaviour of the time series considered. How can we characterize the probabilities of these scenarios? The answer to this question can be given by invoking stability analysis \[23–25\]. Define the function $`t_k(\phi )`$ by the equation

$$F_1(t,\tau _k(t))=\phi ,\qquad t=t_k(\phi ).$$

Substituting $`t_k(\phi )`$ into Eq. (7), we get

$$y_k^{*}(\phi )\equiv f_k^{*}(t_k(\phi )).$$

The family of endomorphisms $`\{y_k^{*}\}`$ can be considered as a cascade whose trajectory $`\{y_k^{*}(\phi )\}`$ is, by construction, bijective to the approximation sequence $`\{f_k^{*}(t)\}`$. For this approximation cascade, we may define the local multipliers

$$\mu _k^{*}(\phi )\equiv \frac{\partial }{\partial \phi }y_k^{*}(\phi ),$$ (8)

whose images in time are given by

$$m_k^{*}(t)\equiv \mu _k^{*}(F_1(t,\tau _k(t))).$$ (9)

Recall that here, and in everything that follows, $`k\ge 1`$. The local multiplier (9) can be presented in the form of the variational derivative

$$m_k^{*}(t)=\frac{\delta F_k(t,\tau _k(t))}{\delta F_1(t,\tau _k(t))},$$ (10)

which suggests the expression, convenient for practical purposes,

$$m_k^{*}(t)=\frac{d}{dt}F_k(t,\tau _k(t))/\frac{d}{dt}F_1(t,\tau _k(t)).$$ (11)

From these definitions, it follows that $`m_1^{*}=1`$, while from the fixed-point condition (5), one has $`m_2^{*}=1`$. So we always have

$$m_1^{*}(t)=m_2^{*}(t)=1.$$

The cascade trajectory at the time $`k`$ is stable provided that

$$|m_k^{*}(t)|\le 1,$$ (12)

where equality stands for neutral stability. It looks natural to assume that the most probable scenario corresponds to the most stable point of the cascade trajectory, that is, to the point $`k`$ where $`|m_k^{*}|`$ is minimal. When we are interested in the prediction for the time $`t=k+\mathrm{\Delta }t`$, the most stable point $`k=k^{*}`$ is given by the condition

$$\underset{k}{\mathrm{min}}|m_k^{*}(k+\mathrm{\Delta }t)|\;\Longrightarrow \;k=k^{*}.$$ (13)

This defines the forecast mode, that is, the most probable prediction (7). It is this course of thinking that was accepted in Refs. \[18–20\]. However, in real life it is not necessarily the most probable case that happens, because a time series is a realization of a random process. What we need for generalizing the approach is to be able to present the whole spectrum of possible scenarios weighted with the corresponding probabilities, which would allow us to calculate the statistical characteristics of the random process. Before going on to this generalization, let us make a note with regard to the usage of the self-similar exponentials (4). The form of the latter reminds us of the iterated exponentials introduced by Euler , which have been studied in the mathematical literature , where they are labelled by various names: iterated exponentials, infinite exponentials, continued exponentials, multiple exponentials, stacked exponents, exponential towers, hypertowers, hyperexponents, superexponents, endless exponents, power sequences, reiterated exponentials, and so on.
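Continuing the sketch started above (and under the same caveats), the time image (11) of the local multiplier can be estimated by central finite differences, re-solving Eq. (6) at each shifted value of $`t`$ so that the $`t`$-dependence of the control time is taken into account:

```python
def F1(a, t, tau):
    # F_1(t, tau) = a_0 exp((a_1 / a_0) tau t), the k = 1 case of equation (4)
    return a[0] * np.exp(a[1] / a[0] * tau * t)

def multiplier(a, t, h=1e-4):
    """m_k*(t) of equation (11); assumes control_time() finds a real tau near t."""
    def Fk_of_t(tt):
        return nested_exp(a, tt, control_time(a, tt))
    def F1_of_t(tt):
        return F1(a, tt, control_time(a, tt))
    dFk = (Fk_of_t(t + h) - Fk_of_t(t - h)) / (2 * h)
    dF1 = (F1_of_t(t + h) - F1_of_t(t - h)) / (2 * h)
    return dFk / dF1
```

For $`k=2`$ this construction returns 1 identically, as it must by the fixed-point condition (5), which is a convenient check of an implementation.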
Except for their form, our self-similar exponentials (4) are quite different from the Euler iterated exponentials . The difference is, first of all, in the origin. The Euler exponential is the iterative solution of a transcendental equation, while the self-similar exponentials are the outcome of the self-similar approximation theory , as applied to the polynomial function (2). The theory prescribes the relation between the coefficients $`a_n`$ of the latter function. A specific feature of the self-similar exponentials is the existence of the control time $`\tau _k`$ defining a fixed point of the approximation cascade \[23–25\]. In general , the self-similar exponentials can have a more complicated structure involving noninteger powers of the variable $`t`$. This kind of exponential with noninteger powers yields, in the first approximation, the so-called stretched exponentials that are often met in various applications . Let us now return to the problem of defining the scenario probabilities. Assume that we are interested in what happens at the time $`t=k+\mathrm{\Delta }t`$. For the latter, we can construct the set $`\{f_k^{*}(k+\mathrm{\Delta }t)\}`$, where $`k=1,2,\dots ,N`$, of the self-similar forecasts (7). We need to define the probability $`p_k(\mathrm{\Delta }t)`$ for the realization, at the time $`t=k+\mathrm{\Delta }t`$, of the forecast $`f_k^{*}(k+\mathrm{\Delta }t)`$, which is based on the self-similar analysis of $`k+1`$ terms from the subfamily $`𝚽_k`$ of the data base $`X_N`$. For brevity, we shall call $`p_k(\mathrm{\Delta }t)`$ the k-scenario probability. The idea of defining a probability $`p`$ comes from statistical mechanics, where a probability $`p`$ can be connected with entropy $`S`$ by the relation $`p\propto e^S`$. Another idea originates from dynamical theory, where there exists the notion of the so-called dynamical entropy or Kolmogorov-Sinai entropy rate . The latter, for a $`d`$-dimensional dynamical system, is given by the sum

$$h\equiv \sum _{i=1}^d\lambda _i\mathrm{\Theta }(\lambda _i)$$

of positive Lyapunov exponents $`\lambda _i`$. Since $`h`$ is an entropy rate, the entropy itself should be written as $`S=hk`$, where $`k=1,2,\dots `$ is discrete time. The Kolmogorov-Sinai entropy characterizes the asymptotic-in-time behaviour of unstable trajectories. There are two specific features of the case we are dealing with. First, we consider not the asymptotic-in-time properties of a dynamical system but its finite-time behaviour. And second, we need to characterize not only unstable trajectories but all of them, stable as well as unstable. Thus, we generalize the Kolmogorov-Sinai entropy rate by introducing the summary local Lyapunov exponent

$$\mathrm{\Lambda }_k\equiv \sum _{i=1}^d\lambda _{ik},$$

being the sum of all local Lyapunov exponents $`\lambda _{ik}`$, positive as well as negative. For a one-dimensional dynamical system, we have just one local Lyapunov exponent, $`\mathrm{\Lambda }_k=\lambda _k`$. The quantity $`S_k=-\mathrm{\Lambda }_kk`$ can be both negative and positive, hence it may be called a dynamical quasientropy. Retaining the relation $`p_k\propto e^{S_k}`$, we have $`p_k\propto e^{-\mathrm{\Lambda }_kk}`$.
The local Lyapunov exponent can be expressed through the local multiplier \[23–25\] as

$$\mathrm{\Lambda }_k=\frac{1}{k}\mathrm{ln}|m_k|.$$

Hence $`p_k\propto |m_k|^{-1}`$, which, with the normalization condition

$$\sum _{k=1}^Np_k(\mathrm{\Delta }t)=1,$$

results in the $`k`$-scenario probability

$$p_k(\mathrm{\Delta }t)=\frac{|m_k^{*}(k+\mathrm{\Delta }t)|^{-1}}{Z(\mathrm{\Delta }t)},\qquad Z(\mathrm{\Delta }t)\equiv \sum _{k=1}^N\frac{1}{|m_k^{*}(k+\mathrm{\Delta }t)|},$$ (14)

which mathematically expresses the intuitive inverse relation between stability and probability. The local multipliers here are defined in Eq. (11). In this way, the spectrum $`\{f_k^{*}(k+\mathrm{\Delta }t)\}_{k=1}^N`$ of possible scenarios is weighted with the scenario probabilities (14). The average forecast is

$$<f(\mathrm{\Delta }t)>\equiv \sum _{k=1}^Np_k(\mathrm{\Delta }t)f_k^{*}(k+\mathrm{\Delta }t).$$ (15)

As for any statistical analysis, we can define the dispersion

$$\sigma ^2(\mathrm{\Delta }t)\equiv <f^2(\mathrm{\Delta }t)>-<f(\mathrm{\Delta }t)>^2,$$ (16)

the standard deviation

$$\sigma (\mathrm{\Delta }t)\equiv \left[<f^2(\mathrm{\Delta }t)>-<f(\mathrm{\Delta }t)>^2\right]^{1/2},$$ (17)

having for markets the meaning of volatility, and the variance coefficient

$$\rho (\mathrm{\Delta }t)\equiv \frac{\sigma (\mathrm{\Delta }t)}{<f(\mathrm{\Delta }t)>}\times 100\%.$$ (18)

When the actually realized value $`x_{N+\mathrm{\Delta }t}`$ for the considered moment of time is known, one may find the percentage error of the average forecast (15) as

$$\epsilon (\mathrm{\Delta }t)\equiv \frac{<f(\mathrm{\Delta }t)>-x_{N+\mathrm{\Delta }t}}{|x_{N+\mathrm{\Delta }t}|}\times 100\%.$$ (19)

If one deals with a series of examples for which the data-base order $`N`$ and the prediction time $`\mathrm{\Delta }t`$ are fixed, one may simplify the notation by omitting the quantities $`N`$ and $`\mathrm{\Delta }t`$, for instance writing

$$<f>=<f(\mathrm{\Delta }t)>\qquad (N,\mathrm{\Delta }t\text{ fixed}).$$ (20)

The described procedure of analysing time series composes the statistical self-similar analysis; a compact numerical sketch of this pipeline is given after the examples below. ## III Examples of Market Bubbles To illustrate the developed procedure, we select several examples of market time series exhibiting bubbles, which, as mentioned in the Introduction, is the most difficult and most intriguing case for analysis. For uniformity, we take everywhere a sixth-order data base, that is, $`N=5`$, and for the prediction time we set $`\mathrm{\Delta }t=1`$. For convenience, the results are arranged in the form of tables. Example 1. The dynamics of the average index of the South African gold mining share prices in the period from the second quarter of 1986 till the third quarter of 1987. The latter index is taken as 100 (1987, III = 100). Let us make a forecast for the fourth quarter of 1987, comparing it with the actual value $`x_6=81.64`$. The data $`x_n`$ and the results for the self-similar forecasts $`f_n^{*}(n+1)`$, the related local multipliers $`m_n^{*}(n+1)`$, and the corresponding probabilities are given in Table 1. The average forecast (15), standard deviation (17), variance coefficient (18), and error (19) are, respectively,

$$<f>=82.926,\quad \sigma =2.25,\quad \rho =2.71\%,\quad \epsilon =1.58\%.$$

Example 2. Let the USA tobacco price index (all markets) be given from 1965 till 1970, and we predict what happens in 1971. The value for 1990 is taken as 100 (1990=100). The results are in Table 2. The other characteristics are

$$<f>=39.74,\quad \sigma =2.27,\quad \rho =5.71\%,\quad \epsilon =-4.93\%.$$
Example 3. The behaviour of the Bolivian zinc price index from 1979 till 1984 gives us an example of nonmonotonic growth. We make a forecast for 1985. The corresponding analysis is presented in Table 3, where the value for 1990 is taken as 100 (1990=100). We have

$$<f>=60.35,\quad \sigma =6.41,\quad \rho =10.6\%,\quad \epsilon =2.74\%.$$

Example 4. The average index of Spanish share prices from the second quarter of 1986 till the third quarter of 1987. The time of interest is the fourth quarter of 1987. The analysis is in Table 4, and

$$<f>=80.479,\quad \sigma =7.354,\quad \rho =9.14\%,\quad \epsilon =9.62\%.$$

Example 5. The Indian share price index from 1969 till 1974 (1985=100). The time of interest is 1975. The results of the analysis are in Table 5, and

$$<f>=33.5,\quad \sigma =3.44,\quad \rho =10.2\%,\quad \epsilon =4.8\%.$$

Example 6. The Mexican share price index from 1989 till 1994 (1990=100). The forecasting time is 1995. The analysis is in Table 6, and

$$<f>=365.0,\quad \sigma =47.8,\quad \rho =13.1\%,\quad \epsilon =-6.65\%.$$

Example 7. The Korean share price index from 1973 till 1978 (1985=100). The forecasting time is 1979. The results are in Table 7, with

$$<f>=84.8,\quad \sigma =16.4,\quad \rho =19.3\%,\quad \epsilon =-2.35\%.$$

Example 8. The UK copper price index from 1975 to 1980 (1990=100). The forecasting time is 1981. The analysis is in Table 8, and

$$<f>=67.9,\quad \sigma =34.1,\quad \rho =50.2\%,\quad \epsilon =3.56\%.$$

Example 9. The Denmark industrial share price index from 1968 till 1973 (1985=100). The time of interest is 1974. The analysis is in Table 9, and

$$<f>=18.1,\quad \sigma =11.1,\quad \rho =61.3\%,\quad \epsilon =-5\%.$$

Example 10. The World commodity price index from 1969 to 1974 (1990=100). The forecast time is 1975. The results are in Table 10, and

$$<f>=63.7,\quad \sigma =43.2,\quad \rho =67.8\%,\quad \epsilon =-10.2\%.$$

Example 11. The US silver price index from 1975 to 1980 (1990=100). The time of interest is 1981. The analysis is in Table 11, and

$$<f>=248.6,\quad \sigma =309.6,\quad \rho =125\%,\quad \epsilon =12.2\%.$$

Example 12. The gold price index from 1970 to 1975 (1990=100). The forecasting time is 1976. The results of the analysis are in Table 12, and

$$<f>=33.7,\quad \sigma =39.9,\quad \rho =118\%,\quad \epsilon =3.5\%.$$

Let us note that it is admissible to incorporate in the above analysis the no-change term by formally setting $`f_0^{*}(1)\equiv \phi _{00}=x_N`$ and ascribing to the latter the multiplier $`m_0^{*}(1)\equiv 1`$. In the examples considered, the probability $`p_0(1)`$ is always small, so that the no-change term practically does not contribute to the averages. In conclusion, we have generalized the self-similar analysis of time series \[18–20\] by making this approach statistical. Scenario probabilities are introduced. The method makes it possible to analyse the whole forecast spectrum by considering different outcomes characterized by their weights. The average forecast is defined as the average fixed point. The latter need not be very close to the most probable forecast or to the actually realized value, although in the majority of cases it is. Several examples of market time series exhibiting bubbles illustrate the approach. Since a time series is a realization of a random process, a bubble burst is a stochastic event that can be predicted only in a probabilistic way. The most that any forecasting theory can achieve is to define a forecast spectrum of possible scenarios weighted with the corresponding probabilities. But being able to produce such a statistical analysis means being in a position to use it.
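The promised end-to-end sketch of the procedure, reusing the functions poly_coeffs, nested_exp, control_time and multiplier from the sketches in Section II, is given below for the data of Example 1 ($`N=5`$, $`\mathrm{\Delta }t=1`$). It is a sketch under stated assumptions rather than production code: subsets on which Eq. (6) has no real solution are simply skipped here, which corresponds to the zero-probability entries of Table 1.

```python
x = [52.734, 69.141, 82.813, 85.938, 93.750, 100.0]   # South African gold mining index
N, dt = len(x) - 1, 1

forecasts, inv_m = {}, {}
for k in range(1, N + 1):
    a = poly_coeffs(x[N - k:])                    # subset Phi_k: the last k + 1 data points
    t = k + dt
    tau = 1.0 if k == 1 else control_time(a, t)   # tau_1 = 1 by definition
    if tau is None:
        continue                                  # no real tau: treated as |m| infinite, p = 0
    forecasts[k] = nested_exp(a, t, tau)
    m = 1.0 if k <= 2 else multiplier(a, t)       # m_1* = m_2* = 1 by construction
    inv_m[k] = 1.0 / abs(m)

Z = sum(inv_m.values())
p = {k: v / Z for k, v in inv_m.items()}                            # equation (14)
mean = sum(p[k] * forecasts[k] for k in p)                          # equation (15)
sigma = (sum(p[k] * forecasts[k]**2 for k in p) - mean**2) ** 0.5   # equation (17)
print(mean, sigma)   # close to <f> = 82.926 and sigma = 2.25 quoted in Example 1
```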
In this short communication we could not (and did not plan to) explain all the technical details of the practical usage of the statistical self-similar analysis. This is a separate story. Our main aim here has been to demonstrate the principal way of constructing such a statistical analysis of time series. Acknowledgement Useful advice and criticism from E.P. Yukalova are gratefully appreciated. Table Captions
Table 1. Statistical self-similar analysis of scenarios for the South African gold mining share price in 1987, IV, based on the data from 1986, II, till 1987, III.
Table 2. Self-similar analysis of scenarios for the USA tobacco price index in 1971, based on the data from 1965 till 1970.
Table 3. Analysis of scenarios for the Bolivian zinc price index in 1985, with the data base from 1979 till 1984.
Table 4. Analysis of scenarios for the average index of Spanish share prices in 1987, IV, with the data base from 1986, II, till 1987, III.
Table 5. Analysis of scenarios for the Indian share price index in 1975, based on the data from 1969 till 1974.
Table 6. Scenarios for the Mexican share price index in 1995, with the data base from 1989 till 1994.
Table 7. Scenarios for the Korean share price index in 1979, based on the data from 1973 till 1978.
Table 8. Scenarios for the UK copper price index in 1981, based on the data from 1975 to 1980.
Table 9. Scenarios for the Denmark industrial share price index in 1974, with the data from 1968 to 1973.
Table 10. Scenarios for the World commodity price index in 1975, based on the data from 1969 to 1974.
Table 11. Scenarios for the US silver price index in 1981, based on the data from 1975 to 1980.
Table 12. Analysis of scenarios for the gold price index in 1976, based on the data from 1970 to 1975.

Table 1

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 52.734 | 69.141 | 82.813 | 85.938 | 93.750 | 100 | $`\mathbf{81.640}`$ |
| $`f_n^{*}(n+1)`$ | – | 107.122 | 109.355 | 82.812 | 132.501 | ∞ | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | $`4\times 10^{-4}`$ | 0.233 | ∞ | |
| $`p_n(1)`$ | – | $`4\times 10^{-4}`$ | $`4\times 10^{-4}`$ | 0.997 | 0.002 | 0 | |

Table 2

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 33.9 | 36.8 | 37.1 | 37.9 | 39.5 | 45.9 | $`\mathbf{41.8}`$ |
| $`f_n^{*}(n+1)`$ | – | 54.6 | 37.5 | 38.4 | 36.5 | 41.4 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | 0.023 | 0.152 | −0.024 | |
| $`p_n(1)`$ | – | 0.011 | 0.011 | 0.464 | 0.070 | 0.445 | |

Table 3

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 53.5 | 53.6 | 61.2 | 58.3 | 54.6 | 68.4 | $`\mathbf{58.7}`$ |
| $`f_n^{*}(n+1)`$ | – | 90.5 | 44.7 | 61.3 | 59.3 | 47.0 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.027 | −0.620 | −0.288 | |
| $`p_n(1)`$ | – | 0.023 | 0.023 | 0.839 | 0.037 | 0.079 | |

Table 4

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 58.721 | 62.338 | 63.906 | 79.243 | 76.589 | 100 | $`\mathbf{73.026}`$ |
| $`f_n^{*}(n+1)`$ | – | 141.147 | 63.083 | 94.207 | 43.978 | 78.060 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.029 | 1.109 | −0.005 | |
| $`p_n(1)`$ | – | 0.004 | 0.004 | 0.145 | 0.004 | 0.843 | |

Table 5

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 30.9 | 33.4 | 32.3 | 31.8 | 35.1 | 38.8 | $`\mathbf{31.9}`$ |
| $`f_n^{*}(n+1)`$ | – | 43.3 | 46.3 | 31.4 | 33.7 | 32.4 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.170 | 0.132 | −0.101 | |
| $`p_n(1)`$ | – | 0.039 | 0.039 | 0.232 | 0.299 | 0.390 | |

Table 6

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 57.7 | 100 | 190.1 | 291.3 | 325.6 | 442.1 | $`\mathbf{389.3}`$ |
| $`f_n^{*}(n+1)`$ | – | 666.0 | 288.9 | 510.0 | ∞ | 350.1 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | 0.020 | ∞ | −0.002 | |
| $`p_n(1)`$ | – | 0.002 | 0.002 | 0.091 | 0 | 0.906 | |

Table 7

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 57.6 | 56.7 | 63.0 | 77.3 | 81.7 | 103.4 | $`\mathbf{86.8}`$ |
| $`f_n^{*}(n+1)`$ | – | 139.0 | 74.3 | 94.2 | 52.5 | 60.6 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | 0.013 | 0.256 | −0.038 | |
| $`p_n(1)`$ | – | 0.009 | 0.009 | 0.698 | 0.035 | 0.239 | |

Table 8

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 46.5 | 52.7 | 49.2 | 51.3 | 74.1 | 82.1 | $`\mathbf{65.5}`$ |
| $`f_n^{*}(n+1)`$ | – | 92.0 | 155.9 | 46.5 | 59.7 | 365.9 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.269 | 0.213 | 49.371 | |
| $`p_n(1)`$ | – | 0.096 | 0.096 | 0.356 | 0.450 | 0.002 | |

Table 9

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 12 | 14 | 13 | 12 | 17 | 26 | $`\mathbf{19}`$ |
| $`f_n^{*}(n+1)`$ | – | 49 | 25 | 12 | 15 | 454 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.349 | 0.126 | 467 | |
| $`p_n(1)`$ | – | 0.078 | 0.078 | 0.224 | 0.620 | $`2\times 10^{-4}`$ | |

Table 10

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 40.4 | 39.7 | 37.1 | 42.8 | 69.2 | 83.9 | $`\mathbf{70.2}`$ |
| $`f_n^{*}(n+1)`$ | – | 105.8 | 202.6 | 37.4 | 58.3 | 37.1 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.320 | 0.171 | −0.392 | |
| $`p_n(1)`$ | – | 0.074 | 0.074 | 0.231 | 0.432 | 0.189 | |

Table 11

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 91.7 | 90.3 | 95.9 | 112.1 | 230.1 | 426.9 | $`\mathbf{218.3}`$ |
| $`f_n^{*}(n+1)`$ | – | 1273 | 918.6 | 94.5 | 182.4 | 76.0 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | −0.149 | 0.116 | −1.93 | |
| $`p_n(1)`$ | – | 0.056 | 0.056 | 0.376 | 0.483 | 0.029 | |

Table 12

| $`n`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`x_n`$ | 9.4 | 10.6 | 15.2 | 25.4 | 41.5 | 42.0 | $`\mathbf{32.5}`$ |
| $`f_n^{*}(n+1)`$ | – | 42.5 | 127.5 | ∞ | 84.0 | 9.3 | |
| $`m_n^{*}(n+1)`$ | – | 1 | 1 | ∞ | 1.811 | −0.176 | |
| $`p_n(1)`$ | – | 0.121 | 0.121 | 0 | 0.067 | 0.690 | |
# Fitting ideals for finitely presented algebraic dynamical systems ## 1. Introduction A natural family of measure–preserving $`Z^d`$–actions is provided by commuting automorphisms of compact abelian groups. Such actions are amenable to analysis using methods from commutative algebra and commutative harmonic analysis. The resulting theory, described in the papers , , , , , and the monograph , associates to such a dynamical system a module over the ring of Laurent polynomials in $`d`$ variables with integer coefficients, and then relates various dynamical properties of the action to algebraic or geometric properties of the corresponding module. This singles out for attention the class of systems corresponding to Noetherian modules (the systems satisfying the Descending Chain Condition of Kitchens and Schmidt ) and raises the problem of computing the set of associated primes of such modules. Our purpose here is to exploit standard methods from commutative algebra to study the dynamical systems corresponding to Noetherian modules described via a finite presentation. Before describing this we recall the algebraic description of such actions in . Let $`R=Z[u_1^{\pm 1},\dots ,u_d^{\pm 1}]`$ be the ring of Laurent polynomials with integral coefficients in the commuting variables $`u_1,\dots ,u_d`$. If $`\alpha `$ is a $`Z^d`$–action by automorphisms of the compact abelian group $`X`$, then the dual (character) group $`M=\widehat{X}`$ of $`X`$ is an $`R`$-module under the dual $`R`$–action

$$fa=\sum _{𝕞\in Z^d}c_f(𝕞)\beta _𝕞(a)$$

for all $`a\in M`$ and $`f=\sum _{𝕞\in Z^d}c_f(𝕞)u^𝕞\in R`$, where $`u^𝕟=u_1^{n_1}\cdots u_d^{n_d}`$ for every $`𝕟=(n_1,\dots ,n_d)\in Z^d`$, and where $`\beta _𝕟=\widehat{\alpha _𝕟}`$ is the automorphism of $`M=\widehat{X}`$ dual to $`\alpha _𝕟`$. In particular,

$$\widehat{\alpha _𝕟}(a)=\beta _𝕟(a)=u^𝕟a$$

for all $`𝕟\in Z^d`$ and $`a\in M`$. Conversely, if $`M`$ is an $`R`$-module, and

$$\beta _𝕟^M(a)=u^𝕟a$$

for every $`𝕟\in Z^d`$ and $`a\in M`$, then we obtain a $`Z^d`$-action

$$\alpha ^M:𝕟\mapsto \alpha _𝕟^M=\widehat{\beta _𝕟^M}$$

on the compact abelian group

$$X^M=\widehat{M}$$

dual to the $`Z^d`$-action $`\beta ^M:𝕟\mapsto \beta _𝕟^M`$ on $`M`$. The dynamical system $`\alpha ^M`$ on $`X_M`$ satisfies the Descending Chain Condition (any decreasing sequence of closed $`\alpha ^M`$–invariant subgroups of $`X_M`$ stabilizes) if and only if the $`R`$–module $`M`$ is Noetherian, by Theorem 11.4 in . We assume from now on that $`M`$ is a Noetherian module, in which case it has a finite presentation of the form (1)

$$M=M_A\cong R^k/AR^n,$$

where $`M_A`$ is generated as an $`R`$–module by a subset with $`k`$ elements and the $`k\times n`$ matrix $`A`$ defines the various relations in $`M_A`$. Since free modules are not very interesting, we assume that the rank of $`A`$ is $`k`$. If this is not the case, then $`M_A`$ has a free submodule $`L`$ with the property that $`M_A/L`$ has a finite presentation in the form (1) with $`\mathrm{rank}(A)=k`$. In accordance with the spirit of the monograph , we would then like to be able to describe the dynamical properties of the $`Z^d`$–action $`\alpha ^{M_A}`$ in terms of the matrix $`A`$. Roughly speaking, we are able to (describe how to) compute all the associated primes of $`M_A`$ from $`A`$ using Auslander–Buchsbaum theory. This is enough to describe – in principle – the dynamical properties of $`\alpha ^{M_A}`$.
For the special case $`k=n`$ or, more generally, of principal associated primes, we are also able to find the multiplicities of the various associated primes, which allows the entropy of $`\alpha ^{M_A}`$ to be computed. This means in particular that the entropy of $`\alpha ^{M_A}`$ can be computed, and the expansiveness of $`\alpha ^{M_A}`$ can be decided, without computing any syzygy modules. Methods taken from commutative algebra are standard and may all be found, for example, in Eisenbud's book . We are grateful to Prof. Rodney Sharp for pointing us to the right part of . By "entropy" we mean topological entropy, as defined in Section 13 of Schmidt's monograph . ## 2. Language from commutative algebra Let $`S`$ be a commutative ring (recall that $`R`$ is the ring of Laurent polynomials in $`d`$ variables with integer coefficients). The basic terminology for an $`S`$–module $`M`$ may be found in any commutative algebra book. A prime ideal $`P\subset S`$ is associated to $`M`$ if there is an element $`m\in M`$ with the property that (2)

$$P=\mathrm{Ann}_M(m)=\{f\in S\mid fm=0\in M\}.$$

The module $`M`$ is Noetherian if each submodule is finitely generated (the ring $`S`$ is Noetherian if it is a Noetherian $`S`$–module), and for Noetherian rings this holds if and only if $`M`$ has a finite presentation (1). The set $`\mathrm{Ass}(M)`$ of associated primes of a Noetherian module is finite (see Theorem 6.5 in ). A Noetherian module is free if it has a presentation (1) in which the matrix $`A`$ comprises zeros, and is cyclic if it has a presentation (1) with $`k=1`$. A finite free resolution of a Noetherian module $`M`$ is an exact sequence of $`S`$–modules and $`S`$–module homomorphisms (3)

$$0\to F_n\stackrel{\varphi _n}{\to }\cdots \stackrel{\varphi _2}{\to }F_1\stackrel{\varphi _1}{\to }F_0\to M\to 0$$

in which each $`F_i`$ is a free $`S`$–module. A subset $`U`$ of $`S`$ is multiplicative if it is closed under multiplication. Each multiplicative subset $`U\subset S`$ defines a localization (4)

$$S^U=\left\{\frac{s}{u}\mid s\in S,u\in U\right\},$$

where two fractions $`\frac{s}{u}`$ and $`\frac{s^{}}{u^{}}`$ are identified if there is an element $`u^{\prime \prime }\in U`$ with $`u^{\prime \prime }(u^{}s-us^{})=0`$. The notation is altered for one special case: if $`P`$ is a prime ideal in $`S`$, then write $`S^{(P)}`$ for $`S^{S\backslash P}`$, called the localization at the prime $`P`$. For a module $`M`$, the same definition as (4) works and defines a localized module $`M^U`$ or $`M^P`$. If the ideal $`P=\langle \pi \rangle `$ is principal, write $`M^{(\pi )}=M^P`$. The dimension $`dim(S)`$ of $`S`$ is the supremum of the lengths of chains of distinct prime ideals in $`S`$, and this coincides with the supremum of $`dim(S^{(P)})`$ over all prime ideals $`P`$. The dimension of the localization $`S^{(P)}`$ is also known as the codimension of $`P`$, and coincides with the supremum of lengths of chains of prime ideals descending from $`P`$. A ring is Noetherian if every ascending chain of ideals stabilizes, is a local ring if it has just one maximal ideal, and is regular if it is Noetherian and the localization at every prime ideal is a regular local ring. A local ring is a regular local ring if its maximal ideal is generated by exactly $`d`$ elements, where $`d`$ is the dimension of the local ring. It is clear that $`R`$ is a regular ring, and it follows (see Chapter 19 of ) that any Noetherian $`R`$–module has a finite free resolution (3).
Notice that the presentation (1) is itself the start of a finite free resolution of $`M_A`$:

$$\cdots \to R^n\stackrel{A}{\to }R^k\to M_A\to 0.$$

If $`M`$ is a Noetherian $`R`$–module with associated primes $`\mathrm{Ass}(M)=\{P_1,\dots ,P_r\}`$, then there is a prime filtration of $`M`$, (5)

$$M=M_{\ell }\supset M_{\ell -1}\supset \cdots \supset M_1\supset M_0=\{0\},$$

in which each quotient $`M_j/M_{j-1}\cong R/Q_j`$ for some prime $`Q_j\supseteq P_i`$ for some $`i`$ (see for example Corollary 2.2 in ). The number of times a given prime $`P_i`$ appears (that is, the number of $`j`$ for which $`Q_j=P_i`$) is the multiplicity of $`P_i`$ in the filtration (5). If the prime ideal in question is principal and the module $`M`$ has no free submodules, then the multiplicity with which $`P_i`$ appears is independent of the filtration, and we will therefore speak of the multiplicity of $`P_i`$ in $`M`$ (see Proposition 6.10 in ). ## 3. Dynamical properties Let $`M`$ be any countable $`R`$–module, with associated $`Z^d`$–action $`\alpha ^M`$ on $`X_M=\widehat{M}`$. The following result shows how the dynamical properties of $`\alpha ^M`$ may be deduced from the associated primes $`\mathrm{Ass}(M)`$ of $`M`$. All these results are in ; we state them here for completeness. A generalized cyclotomic polynomial is an element of $`R`$ of the form $`u_1^{n_1}\cdots u_d^{n_d}c(u_1^{m_1}\cdots u_d^{m_d})`$ for some cyclotomic polynomial $`c`$ and $`𝕟,𝕞\in Z^d`$. Write $`V(P)`$ for the set of common zeros of the elements of $`P`$ in $`C^d`$. ###### Theorem 3.1. The dynamical system $`\alpha ^M`$ on $`X_M`$: (a) satisfies the Descending Chain Condition on closed $`\alpha ^M`$–invariant subgroups if and only if $`M`$ is Noetherian; (b) is ergodic if and only if $`\{\left(u_1^{n_1}\cdots u_d^{n_d}\right)^k-1\mid 𝕟\in Z^d\}\not\subset P`$ for every $`k\ge 1`$ and every $`P\in \mathrm{Ass}(M)`$; (c) is mixing if and only if $`u_1^{n_1}\cdots u_d^{n_d}-1\notin P`$ for each $`𝕟\in Z^d\backslash \{0\}`$ and every $`P\in \mathrm{Ass}(M)`$; (d) is mixing of all orders if and only if either $`P=pR`$ for a rational prime $`p`$, or $`P\cap Z=\{0\}`$ and $`\alpha ^{R/P}`$ is mixing, for every $`P\in \mathrm{Ass}(M)`$; (e) has positive entropy if and only if there is a $`P\in \mathrm{Ass}(M)`$ that is principal and not generated by a generalized cyclotomic polynomial; (f) has completely positive entropy if and only if $`\alpha ^{R/P}`$ has positive entropy for every $`P\in \mathrm{Ass}(M)`$; (g) is isomorphic to a Bernoulli shift if and only if it has completely positive entropy; (h) is expansive if and only if $`M`$ is Noetherian and $`V(P)\cap \left(S^1\right)^d=\mathrm{\varnothing }`$ for every $`P\in \mathrm{Ass}(M)`$; (i) has a unique maximal measure if and only if it has finite completely positive entropy. ###### Proof. For (a) see Theorem 11.4 in ; (b) and (c) are in Theorem 11.2 in ; (d) follows from Theorem 3.1 in and Theorem 3.3 in ; (e), (f) and (i) are in ; (h) is Theorem 3.9 in ; (g) is Theorem 1.1 in . ∎ ## 4. Principal associated primes and entropy In this section we use localization to find the entropy of $`\alpha ^{M_A}`$. ###### Definition 4.1. Let $`A`$ be a $`k\times n`$ matrix of rank $`k`$ over $`R`$. The determinantal ideal of $`A`$, $`J_A\subset R`$, is the ideal

$$J_A=\langle f_1,\dots ,f_{\binom{n}{k}}\rangle $$

generated by all the $`k\times k`$ subdeterminants $`\{f_1,\dots ,f_{\binom{n}{k}}\}`$ of $`A`$.
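The generators of $`J_A`$ are straightforward to compute symbolically. The following is a small illustrative sketch; the matrix $`A`$ is an invented example, not one taken from the literature.

```python
import sympy as sp
from itertools import combinations

u1, u2 = sp.symbols('u1 u2')
A = sp.Matrix([[2, u1, 0],
               [0, u2, 2]])    # a 2 x 3 presentation matrix over R = Z[u1^{+-1}, u2^{+-1}]
k, n = A.shape
minors = [A[:, list(cols)].det() for cols in combinations(range(n), k)]
print(minors)                  # [2*u2, 4, 2*u1]: the generators of J_A
print(sp.gcd_list(minors))     # gcd(J_A) = 2; cf. Theorem 4.1 below
```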
For a polynomial $`f\in R`$, the logarithmic Mahler measure of $`f`$ is defined to be (6)

$$m(f)=\int _0^1\cdots \int _0^1\mathrm{log}|f(e^{2\pi is_1},\dots ,e^{2\pi is_d})|\text{d}s_1\cdots \text{d}s_d.$$

For brevity, define $`m(0)`$ to be $`\infty `$. Recall from that the entropy of the dynamical system given by the cyclic module $`R/P`$, where $`P`$ is a prime ideal, is given by (7)

$$h\left(\alpha ^{R/P}\right)=\begin{cases}0,&\text{if }P\text{ is non–principal;}\\ m(f),&\text{if }P=\langle f\rangle ,f\ne 0.\end{cases}$$

More generally, since $`R`$ is a UFD, for any ideal $`Q\subset R`$ there is a well–defined greatest common divisor, and

$$h(\alpha ^{R/Q})=h(\alpha ^{R/\langle \mathrm{gcd}(Q)\rangle }),$$

which is zero if $`\mathrm{gcd}(Q)=1`$ and equal to $`m(f)`$ if $`\mathrm{gcd}(Q)=f`$ (see Lemma 4.5 in ). For Noetherian modules, (8)

$$h(\alpha ^M)=\sum _{j=1}^{\ell }h(\alpha ^{R/Q_j})$$

where the prime ideals $`Q_j`$ are the primes appearing in the filtration (5). ###### Theorem 4.1. The entropy of $`\alpha ^{M_A}`$ is given by (9)

$$h\left(\alpha ^{M_A}\right)=m\left(\mathrm{gcd}(J_A)\right).$$

Before proving this, we indicate some examples. ###### Example 4.1. (a) Taking $`k=n=1`$ and $`A=[f]`$ with an irreducible polynomial $`f`$, we recover the formula (7) in the cyclic case with a principal prime ideal. (b) Taking $`k=1`$ and $`n\ge 1`$ we recover the general cyclic case. (c) If $`k=n`$ then formula (9) simply reduces to $`m(det(A))`$, which was shown in Section 5 of . (d) Let $`k`$ be an algebraic number field with ring of integers $`𝒪_k`$, and $`f`$ a Laurent polynomial in $`d`$ variables with coefficients in $`𝒪_k`$. The $`Z^d`$–dynamical system $`\beta `$ dual to multiplication by $`u_1,\dots ,u_d`$ on the $`𝒪_k[u_1^{\pm 1},\dots ,u_d^{\pm 1}]`$–module $`𝔐=𝒪_k[u_1^{\pm 1},\dots ,u_d^{\pm 1}]/\langle f\rangle `$ is studied in . Taking an integral basis for $`𝒪_k`$ shows that $`𝔐`$ as an $`R`$–module is of the form (1) with $`n=k`$, and by (c) we see that

$$h(\beta )=m(det(A))=m\left(N_{k:Q}f\right),$$

recovering Theorem 3.10 in . ###### Lemma 4.1. Each associated prime of $`M_A`$ contains $`J_A`$. ###### Proof. Let $`B`$ be a $`k\times k`$ submatrix of $`A`$. Then for any $`𝕧\in R^k`$ we have $`det(B)𝕧=BB^{\mathrm{adj}}𝕧\in AR^n`$, since the columns of $`B`$ are columns of $`A`$. Therefore the annihilator of any element of $`M_A\cong R^k/AR^n`$ contains $`det(B)`$, and hence also contains $`J_A`$. ∎
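Theorem 4.1 makes the entropy computable in practice: one computes $`\mathrm{gcd}(J_A)`$ symbolically, as in the sketch above, and then evaluates the integral (6) numerically. A minimal sketch, using plain grid averaging over the torus (a crude quadrature, but adequate for a few digits despite the integrable logarithmic singularities at zeros of $`f`$ on $`(S^1)^d`$), is:

```python
import numpy as np
import sympy as sp

def mahler_measure(f, symbols, grid=500):
    """Approximate m(f) of equation (6) by averaging log|f| over an offset grid on the torus."""
    fn = sp.lambdify(symbols, f, 'numpy')
    axes = [np.exp(2j * np.pi * (np.arange(grid) + 0.5) / grid) for _ in symbols]
    mesh = np.meshgrid(*axes, indexing='ij')
    return float(np.mean(np.log(np.abs(fn(*mesh)))))

u1, u2 = sp.symbols('u1 u2')
print(mahler_measure(1 + u1 + u2, (u1, u2)))   # approx. 0.323 (Smyth's value 0.32306...)
```

For the toy presentation matrix in the previous sketch, $`\mathrm{gcd}(J_A)=2`$, so Theorem 4.1 gives entropy $`m(2)=\mathrm{log}2`$ with no numerical work at all.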
Localize (11) at the prime ideal $`\pi _1`$: the pair $`M_{j1}M_j`$ localizes to the pair $`M_{j1}^{(\pi _1)}M_j^{(\pi _1)}`$, with quotient $$R^{(\pi _1)}/Q_j^{(\pi _1)}=\{\begin{array}{cc}R/\pi _1\hfill & \text{if }Q_j=\pi _1\text{;}\hfill \\ 0.\hfill & \text{if not.}\hfill \end{array}$$ So (11) collapses to a shortened filtration of $`R^{\pi _1}`$–modules and we see that the multiplicity of $`\pi _i`$ in $`M`$ coincides with the multiplicity of $`\pi _i`$ in $`M^{(\pi _i)}`$ for each $`i=1,\mathrm{},r`$. We are therefore reduced to studying the local case: let $`\pi `$ be any one of the $`\pi _i`$’s, and write $$M_A^{(\pi )}=\left(R^{(\pi )}\right)^k/A\left(R^{(\pi )}\right)^n.$$ We can change our matrix by invertible (over $`R^{(\pi )}`$) elementary row operations and this gives us an isomorphic module with the same subdeterminants. In $`R^{(\pi )}`$ define $`\mathrm{ord}(f)=\mathrm{ord}_\pi (f)`$ to be the number of times that $`\pi `$ divides into $`f`$. Write $`fg`$ if $`\mathrm{ord}(f)\mathrm{ord}(g)`$, and with respect to this partial ordering find (one of) the smallest entries in $`A`$. Permute rows and columns in $`A`$ so that $`a_{11}`$ is a smallest entry. Then $`a_{i1}a_{11}`$ for $`2ik`$ so the quotient $`a_{i1}/a_{11}`$ is an element of $`R^{(\pi )}`$ and we can subtract multiples of the first row from the others to get a matrix of the form $$A_1=\left[\begin{array}{cccc}a_{11}& \mathrm{}& & \\ 0& \\ \mathrm{}& & \\ 0\end{array}\right]$$ Repeat with $`a_{22}`$ and so on to produce a matrix of the form $$A_{}=\left[\begin{array}{cc}a_{11}& \mathrm{}\\ 0& a_{22}& \mathrm{}\\ \mathrm{}& 0& \\ 0& \mathrm{}& 0& a_{kk}& \mathrm{}\end{array}\right]$$ in which each $`a_{jj}`$ is in turn the smallest non–zero element of the submatrix $`(a_{st})_{s,tj}`$. Let $`\mathrm{ord}(a_{jj})=e_{jj}`$ for each $`j`$. Now let $`𝕧=(1,0,\mathrm{},0)^t`$, so that $$\mathrm{Ann}(𝕧+A_{}R^n)=a_{11},$$ since the other columns of the matrix have a first component which is divisible by $`a_{11}`$. The map $`ff𝕧M^{(\pi )}`$ gives a filtration $$𝕧R^{(\pi )}\pi 𝕧R^{(\pi )}\mathrm{}\pi ^{(e_{11}2)}𝕧R^{(\pi )}\pi ^{(e_{11}1)}𝕧R^{(\pi )}0$$ of submodules of $`M_A^{(\pi )}`$. As the same argument works for the other standard basis vectors it follows that the multiplicity of $`\pi `$ in $`M_A^{(\pi )}`$ is $`e_{jj}`$. Calculating all the subdeterminants shows that the greatest common divisor is equal to the product $`_ja_{jj}=\pi ^{_je_{jj}}`$. So the multiplicity of $`\pi `$ in $`\mathrm{gcd}(J_A)`$ is equal to the multiplicity of the associated prime $`(\pi )`$ in a prime filtration of $`M_A`$. ∎ ###### Proof of Theorem 4.1. Use Lemma 4.2 and Section 2 to find the principal associated primes and their multiplicites; the result follows by (8).∎ For an ideal $`P`$ in $`R`$, recall that $`V(P)=\{𝕫C^df(𝕫)=0fP\}`$ denotes the set of common zeros of $`P`$. Write $`V(f)`$ for $`V(f)`$. By Theorem 3.1, $`\alpha ^M`$ is expansive if and only if $`V(P)(S^1)^d=\mathrm{}`$ for each associated prime $`P\mathrm{Ass}(M)`$. ###### Theorem 4.2. Let $`M_A`$ be a finitely presented module with $`A`$ of rank $`k`$. Then $`\alpha ^{M_A}`$ is expansive if and only if (12) $$(S^1)^d\left(\underset{j=1,\mathrm{},\left(\genfrac{}{}{0pt}{}{n}{k}\right)}{}V(det(B_j))\right)=\mathrm{},$$ where $`\{B_j\}`$ is the set of $`k\times k`$ subdeterminants of $`A`$. ###### Proof. 
Assume first that $$𝕫(S^1)^d\left(\underset{j=1,\mathrm{},\left(\genfrac{}{}{0pt}{}{n}{k}\right)}{}V(det(B_j))\right).$$ Assume that for every associated prime $`P`$ of $`M_A`$ there is a polynomial $`f_PP`$ for which $`f_P(𝕫)0`$. Then let $`f=_{P\mathrm{Ass}(M)}f_P`$ (so $`f(𝕫)0`$). From a prime filtration of $`M_A`$ it is clear that for some power $`m`$, (13) $$f^mM=0.$$ On the other hand, since $`𝕫`$ was chosen to lie in the set of common zeros of all the subdeterminants, in the ring $$M(𝕫)=\frac{Z[𝕫^{\pm 1}]^k}{A(𝕫)Z[𝕫^{\pm 1}]^n}$$ we have that all $`k\times k`$ subdeterminants of $`A(𝕫)`$ vanish, so $`\mathrm{rank}(A(𝕫))<k`$, and in particular $`M(𝕫)0`$, contradicting (13). It follows that if the intersection in (12) contains a point $`𝕫`$ then this point must lie in $`V(P)`$ for some associated prime $`P`$, showing that $`\alpha ^{M_A}`$ is not expansive by Theorem 3.1. Conversely, if $`\alpha ^{M_A}`$ is not expansive, then there is an associated prime $`P\mathrm{Ass}(M_A)`$ with $`V(P)(S^1)^d𝕫`$ say. However the associated prime $`P`$ must contain all the subdeterminants by Lemma 4.1. so $`𝕫_{j=1,\mathrm{},\left(\genfrac{}{}{0pt}{}{n}{k}\right)}V(det(B_j)).`$ ## 5. The square case As remarked in Theorem 3.1, various dynamical properties of systems of the form $`\alpha ^M`$ are governed by properties of the set $`\mathrm{Ass}(M)`$ of associated primes of $`M`$. In this section we show that the associated primes of a finite presentation with $`k=n`$ (the “square case”) are all visible in the determinant of the matrix of relations, so the dynamics are as easy to deduce as in the case of a cyclic module with a single principal associated prime. We also calculate the periodic points because a priori one needs more information than the associated primes to calculate this (see Section 7 of ). ###### Lemma 5.1. If the finitely–presented module $`M_A`$ has $`k=n`$ and $`A`$ has maximal rank, then the associated prime ideals of $`M`$ are all given by irreducible factors of $`det(A)`$. ###### Proof. Let $`det(A)=\pi _1^{e_1}\mathrm{}\pi _r^{e_r}`$ be the decomposition into irreducibles. By linear algebra over the quotient field of $`R`$ we know that $`wAR^n`$ if and only if $`\frac{1}{detA}A^{\mathrm{adj}}wR^n`$. If an element $`𝕧+AR^n`$ has $`\mathrm{Ann}(𝕧+AR^n)=P`$ for some $`P\mathrm{Ass}(M_A)`$, then $$P=\{fR\frac{f}{det(A)}A^{\mathrm{adj}}𝕧R^k\}.$$ Now in $`\frac{1}{det(A)}A^{\mathrm{adj}}v`$ after all possible cancellations there must be some $`\pi _i`$ in the denominator (since $`𝕧AR^k`$). Let this denominator be $`g`$ say; then $`g`$ must divide $`f`$ for all $`fP`$, so $`\mathrm{Ann}(𝕧+AR^n)=g`$. As $`P`$ is prime the element $`g`$ must be irreducible. It follows that all the associated primes of $`M_A`$ are principal and arise as factors of the determinant of $`A`$. It is easy to see that the argument above also proves that each irreducible factor of $`detA`$ gives an associated prime (or use Lemma 4.2) for the reverse inclusion. ∎ ###### Corollary 5.1. The dynamical system $`\alpha ^{M_A}`$ for a square matrix $`A`$ is ergodic, mixing, mixing on a shape $`F`$, mixing of all orders, $`K`$, if and only if the corresponding cyclic system $`\alpha ^{R/det(A)}`$ has the same property. We are also able to compute directly the periodic points in such systems. A period for a $`Z^d`$–action $`\alpha `$ on $`X`$ is a lattice $`\mathrm{\Lambda }Z^d`$ of full rank; the size of the period is the (finite) index $`|Z^d/\mathrm{\Lambda }|`$. 
The set of points of period $`\mathrm{\Lambda }`$ is $$\mathrm{Fix}_\mathrm{\Lambda }(\alpha )=\{xX\alpha _𝕟x=x𝕟\mathrm{\Lambda }\}.$$ Since the (multiplicative) dual group of $`Z^d`$ is $`(S^1)^d`$, the annihilator $`\mathrm{\Lambda }^{}`$ of $`\mathrm{\Lambda }`$ is a subgroup of $`(S^1)^d`$ with cardinality $`|Z^d/\mathrm{\Lambda }|`$. ###### Lemma 5.2. If $`A`$ is a square matrix of maximal rank, then $$\mathrm{Fix}_\mathrm{\Lambda }\left(\alpha ^{M_A}\right)=\{\begin{array}{cc}\mathrm{}\hfill & \text{if }_{𝕫\mathrm{\Lambda }^{}}|det(A)(𝕫)|=0;\hfill \\ _{𝕫\mathrm{\Lambda }^{}}|det(A)(𝕫)|\hfill & \text{if not.}\hfill \end{array}$$ ###### Proof. For brevity we prove this for square periods $`\mathrm{\Lambda }_n=nZ^d`$; the general case is similar but notationally unpleasant. We follow the method used in , Section 7, exactly. An element $`𝕩X=\widehat{R^k/AR^k}`$ is periodic with respect to $`\mathrm{\Lambda }_n`$ if it annihilates $`J(\mathrm{\Lambda }_n)R^k`$ where $`J(\mathrm{\Lambda }_n)=u_1^n1,\mathrm{},u_d^n1`$. So the periodic points are exactly the elements in the dual group of (14) $$R^k/(AR^k+J(\mathrm{\Lambda }_n)^k).$$ Therefore the number of periodic points is equal to the number of elements in (14) whenever this quantity is finite or is infinite if not. As $`R/J(\mathrm{\Lambda }_n)`$ is isomorphic to $`Z^{n^d}`$ we see that the module (14) is isomorphic to $`(Z^F)^k/B(Z^F)^k`$ where $`F=\{1,\mathrm{},n\}^d`$ and $`B`$ is obtained from $`A`$ by interpreting the variable $`u_{\mathrm{}}`$ as the shift of the $`\mathrm{}`$-th coordinate in $`F`$. The number of periodic points in $`X`$ is now given by the determinant of $`B`$ (or is infinite if $`detB=0`$). We calculate this quantity using a suitable basis of $`(C^F)^k`$. The elements in this vector space have the form $`(w_{\left(\genfrac{}{}{0pt}{}{𝕖}{i}\right)})_{\left(\genfrac{}{}{0pt}{}{𝕖F}{i[1,k]}\right)}`$; use the basis $$v_𝕗^j=(\delta _{ij}\omega ^{f_1e_1+\mathrm{}+f_de_d})_{\left(\genfrac{}{}{0pt}{}{𝕖}{i}\right)}.$$ where $`\omega `$ is a primitive $`n`$-th root. In this basis the matrix $`B`$ becomes $$C=(a_{ij}(\omega ^{e_1},\mathrm{},\omega ^{e_d})\delta _{𝕖𝕗})_{\left(\genfrac{}{}{0pt}{}{𝕖}{i}\right)\left(\genfrac{}{}{0pt}{}{𝕗}{j}\right)}$$ because the shift of the $`\mathrm{}`$-th coordinates in $`F`$ has $`v_𝕗^j`$ as eigenvector with eigenvalue $`\omega ^f_{\mathrm{}}`$. The determinants are given by $$det(B)=det(C)=\underset{𝕖F}{}det(A)(\omega ^{e_1},\mathrm{},\omega ^{e_d}),$$ Because the matrix $`C`$ can be viewed as being of the form $$\left[\begin{array}{ccccc}D_1& 0& 0& \mathrm{}& 0\\ 0& D_2& 0& \mathrm{}& 0\\ \mathrm{}& & & & \mathrm{}\\ 0& & \mathrm{}& 0& D_{|F|}\end{array}\right]$$ where the submatrices $`D_j`$ are obtained from $`A`$ by evaluation at $`(\omega ^{e_1},\mathrm{},\omega ^{e_d})`$ for some $`(e_1,\mathrm{},e_d)F`$. The determinant of such a matrix is the product of the determinants of the submatrizes. ∎ For a lattice $`\mathrm{\Lambda }`$, let $`g(\mathrm{\Lambda })=\mathrm{min}_{𝕟\mathrm{\Lambda }\backslash \{0\}}\{𝕟\mathrm{𝟘}\}`$. The characterization of expansiveness and Lemma 5.2 gives a very simple proof of the general result that the growth rate of periodic points coincides with the entropy for expansive algebraic $`Z^d`$–actions (see Section 7 of ) for finitely presented systems with $`k=n`$. ###### Corollary 5.2. 
If $`A`$ is a square matrix of maximal rank, and $`V(det(A))(S^1)^d=\mathrm{}`$, then the growth rate of periodic points is equal to the entropy: $$\underset{g(\mathrm{\Lambda })\mathrm{}}{lim}\frac{1}{|Z^d/\mathrm{\Lambda }|}\mathrm{log}\mathrm{Fix}(\alpha ^{M_A})=h(\alpha ^{M_A}).$$ ## 6. The general case In this section we simply describe the appropriate results from commutative algebra and indicate by examples how they may be used to compute associated primes in the general case. Fix the finite presentation (1) of a Noetherian $`R`$–module $`M`$. For each $`R`$–module map $`\varphi :R^aR^b`$ define $`J(\varphi )`$ to be the ideal generated by the $`\mathrm{rank}(\varphi )\times \mathrm{rank}(\varphi )`$ subdeterminants of a matrix for $`\varphi `$. For maps $`\varphi `$ appearing in a finite free resolution, these ideals are the Fitting ideals of the module. By convention $`0\times 0`$ determinants give the trivial ideal $`1`$. ###### Theorem 6.1. Let (15) $$0F_n\stackrel{\varphi _n}{}\mathrm{}\stackrel{\varphi _2}{}F_1\stackrel{\varphi _1}{}F_0M_A0$$ be a finite free resolution of the $`R`$–module $`M_A`$. Let $`P`$ be a prime ideal of $`R`$ with $`dim(R^P)=\mathrm{}`$. Then $`P\mathrm{Ass}(M_A)`$ if and only if $`PJ(\varphi _{\mathrm{}}).`$ ###### Proof. This is proved in Corollary 20.14 of with the condition $`dim(R^P)=\mathrm{}`$ replaced by $`\text{depth}(PR)=\mathrm{}`$ (see Chapter 18 of for this notion). By Theorem 18.7 of we have that since $`R`$ is regular (and hence Cohen–Macaulay by Section 18.5 of ), $`\text{depth}(P)=\text{height}(P):=dim(R^P)`$, so the result follows. ∎ Notice that the first Fitting ideal $`J(\varphi _1)`$ is exactly the ideal $`J_A`$ used above. We now describe several examples to illustrate the kind of calculations involved and some of the phenomena that may arise. ###### Example 6.1. (a) Let $`P=f`$ be a prime ideal. Then a finite free resolution of $`R/P`$ is given by $$0R\stackrel{[f]}{}RR/P0.$$ By Theorem 6.1, we see that the associated primes of $`R/P`$ comprise exactly $`\{P\}`$. (b) Let $`f`$ be irreducible, and let $`M=R/2f.`$ Then $$0R\stackrel{[2f]}{}RM0.$$ is a free resolution of $`M`$. If $`dim(P)=1`$ then $`P\mathrm{Ass}(M)`$ if and only if $`PJ([2f])=2f`$, so $`P=2`$ or $`f`$. Notice that $`J(\varphi _2)=1`$ so there are no further primes, so $`\mathrm{Ass}(M)=\{2,f\}`$. (c) The simplest setting in which a higher Fitting ideal appears is Ledrappier’s example. Let $`M=R/2,1+u_1+u_2`$. A simple syzygy calculation gives the free resolution $$0R\stackrel{\varphi _2}{}R^2\stackrel{\varphi _1}{}RM0,$$ where $`\varphi _1=\left[\begin{array}{c}1+u_1+u_2,2\end{array}\right]`$ and $`\varphi _2=\left[\begin{array}{c}1+u_1+u_2\\ 2\end{array}\right].`$ If $`dim(P)=1`$ then $`P\mathrm{Ass}(M)`$ if and only if $`PJ([1+u_1+u_2,2])=2,1+u_1+u_2`$, so there are no primes here. If $`dim(P)=2`$ then $`P\mathrm{Ass}(M)`$ if and only if $`PJ(\left[\begin{array}{c}1+u_1+u_2\\ 2\end{array}\right])=2,1+u_1+u_2`$, giving the one associated prime $`2,1+u_1+u_2`$. (d) Let $`A=\left[\begin{array}{ccc}2& u_2^25& 0\\ 0& u_1u_27u_1+u_2& 3\end{array}\right]`$. Then the first Fitting ideal $`J(A)`$ is generated by the set $`\{2u_1u_214u_1+2u_2,6,3u_2^215\}`$. A principal prime ideal which contains $`J(A)`$ must contain $`6`$, and must therefore be generated by $`2`$ or $`3`$: in either case it cannot contain the other two generators of $`J(A)`$. This proves that no principal ideals are associated to the module $`R^2/AR^3`$. 
Using the special form of the matrix we see that the kernel of $`A`$ in $`R^3`$ is generated by the vector $$v=\left[\begin{array}{c}3u_2^215\\ 6\\ 2u_1u_214u_1+2u_2\end{array}\right].$$ The second Fitting ideal $`J(v)`$ is equal to the first. Assume $`P`$ is prime with $`dim(R^P)=2`$ and $`PJ(v)`$. Then this prime contains either $`2`$ or $`3`$. If $`3P`$ then $`P`$ lies above the prime $`3,u_1u_27u_1+2u_2`$ which is the only one with $`dim(R^P)=2`$. For the case $`2P`$ we have also $`u_2^25P`$ but this element is modulo $`2`$ congruent to $`(u_21)^2`$; this means that the only prime with the correct local dimension containing $`2`$ is $`P=2,u_21`$. The only associated primes of $`M=R^2/AR^3`$ are therefore $`P_1=3,u_1u_27u_1+2u_2`$ and $`P_2=2,u_21`$. The corresponding dynamical system is expansive and ergodic but not mixing, and has zero entropy. (e) Let $`A=\left[\begin{array}{ccc}2& 3u_2+5& 3u_13u_2\\ u_14& u_11& 3u_16\end{array}\right]`$. Then the first Fitting ideal is generated by $$3u_1+183u_1u_2+12u_2,$$ $$18u_1123u_1^2+3u_1u_212u_2\text{ and}$$ $$21u_2303u_1^2+18u_1+12u_1u_2.$$ The only principal ideal above $`J(A)`$ is $`3`$. With a computer algebra system one can calculate the kernel of the map $`A`$: it is generated by the vector $$v=\left[\begin{array}{c}7u_210u_1^2+6u_1+4u_1u_2\\ 4+u_1^26u_1u_1u_2+4u_2\\ 6u_1u_2+4u_2u_1\end{array}\right].$$ The second Fitting ideal $`J(v)`$ is generated by the components of this vector and one can calculate that $$\{u_13u_24,3u_2^2+3u_22\}$$ is also a generating set. So $`J(v)`$ is a prime with local dimension $`2`$. The only associated primes of the module $`R^2/AR^3`$ are $`3`$ and $`u_13u_24,3u_2^2+3u_22`$. The entropy of the corresponding dynamical system is $`\mathrm{log}3`$, but the system does not have completely positive entropy. The ring $`R/J(v)`$ is isomorphic to a subring of $`Q[\sqrt{\frac{11}{12}}]`$ via the map sending $`u_2`$ to $`\frac{1}{2}+\sqrt{\frac{11}{12}}`$ (a root of $`3y^2+y2=0`$), and $`u_1`$ to $`\frac{5}{2}+\sqrt{\frac{33}{4}}`$. The field-theoretic norms of those two elements are $`2`$ and $`\frac{2}{3}`$ respectively. It follows that there can be no nontrivial $`(n_1,n_2)Z^2`$ such that $`u_1^{n_1}u_2^{n_2}1J(v)`$ because this would yield $`2^{n_1}(\frac{2}{3})^{n_2}=1`$. The dynamical system is therefore mixing of all orders and ergodic. (f) Even in the square ($`n=k`$) case the first Fitting ideal does not contain enough information to construct a prime filtration of the module. The following type of example is well–known (see for example Remark 6(5) in or Example 5.3(2) in ). Let $`A=\left[\begin{array}{cc}4u_1& 1\\ 1& u_1\end{array}\right]`$ and $`B=\left[\begin{array}{cc}3u_1& 2\\ 2& 1u_1\end{array}\right]`$. Then $`det(A)=det(B)`$ so the systems $`\alpha ^{M_A}`$ and $`\alpha ^{M_B}`$ have the same entropy, number of periodic points, and in fact are both isomorphic to Bernoulli shifts and hence measurably isomorphic. Both modules $`M_A`$ and $`M_B`$ have similar finite free resolutions, $$0R^2\stackrel{\mathit{\varphi }}{}R^2M0,$$ where $`\varphi =A`$ for $`M=M_A`$ and $`\varphi =B`$ for $`M=M_B`$. It is easy to check that $`M_AR/u_1^24u_11`$, so that $$M_A0$$ is a prime filtration. On the other hand, the shortest filtration of $`M_B`$ is of the form $$M_BN0,$$ with first quotient $`N/\{0\}R/u_1^24u_11`$, and second quotient $$M/NR/u_1^24u_11+Q$$ for some ideal $`Qu_1^24u_11.`$ (g) Let $`f,g,h`$ be co–prime elements of $`R`$, and consider the module $`M=R/[f,g,h]R^3`$. 
Then a finite free resolution is given by the Koszul complex $$0R\stackrel{\varphi _1}{}R^3\stackrel{\varphi _2}{}R^3\stackrel{\varphi _3}{}RM0,$$ in which $`\varphi _1=\left[\begin{array}{c}f\\ g\\ h\end{array}\right]`$, $`\varphi _2=\left[\begin{array}{ccc}0& h& g\\ h& 0& f\\ g& f& 0\end{array}\right]`$ and $`\varphi _3=[f,g,h]`$. (h) An example in which the rank of the presenting matrix is too small is given by $`A=\left[\begin{array}{c}2\\ 1+u_1+u_2\end{array}\right].`$ Let $`M=M_A`$; then $`\mathrm{Ann}_M\left(\begin{array}{c}1\\ 0\end{array}\right)=0`$, so $`M`$ has a free submodule $`L=\left(\begin{array}{c}1\\ 0\end{array}\right)R`$. The corresponding dynamical system therefore has as a factor the full shift with circle alphabet, so $`h(\alpha ^M)=\mathrm{}`$. The quotient $`M/LR/1+u_1+u_2`$ is then of the form (1). Of course the free submodule sits inside $`M`$ in many different ways, so there is no “canonical” quotient $`M/L`$. (i) The simplest examples of algebraic dynamical systems without finite presentation are certain non–expansive automorphisms of solenoids, as studied in and . Let $`X=\widehat{Z[\frac{1}{6}]}`$, and let $`\alpha `$ be the automorphism of $`X`$ dual to $`x2x`$ on $`Z[\frac{1}{6}]`$ (here $`d=1`$). The $`R`$–module corresponding to the dynamical system then has a chain of submodules $$Z[\frac{1}{2}]\frac{1}{3}Z[\frac{1}{2}]\frac{1}{9}Z[\frac{1}{2}]\mathrm{},$$ each of which is isomorphic as an $`R`$–module to $`R/u_12`$, that never stabilizes. It follows that the corresponding module is not Noetherian.
no-problem/9907/astro-ph9907407.html
ar5iv
text
# 1 INTRODUCTION ## 1 INTRODUCTION The inner regions of many planetary nebulae (PNs) and proto-PNs show much larger deviation from sphericity than the outer regions (for recent catalogs of PNs and further references see, e.g., Acker et al. 1992; Schwarz, Corradi, & Melnick 1992; Manchado et al. 1996; Sahai & Trauger 1998; Hua, Dopita, & Martinis 1998). By “inner regions” we refer here to the shell that was formed from the superwind$``$the intense mass loss episode at the termination of the AGB, and not to the rim that was formed by the interaction with the fast wind blown by the central star during the PN phase (for more on these morphological terms see Frank, Balick, & Riley 1990). This type of structure, which is observed in many elliptical PNs, suggests that there exists a correlation, albeit not perfect, between the onset of the superwind and the onset of a more asymmetrical wind. In extreme cases the inner region is elliptical while the outer region (outer shell or halo) is spherical (e.g., NGC 6826, Balick 1987). Another indication of this correlation comes from spherical PNs. Of the 18 spherical PNs listed by Soker (1997, table 2), $`75\%`$ do not have superwind but just an extended spherical halo. The transition to both high mass loss rate and highly non-spherical mass loss geometry may result from the interaction with a stellar or substellar companion (Soker 1995; 1997), or from an internal property of the AGB star. Regarding the second possibility, the change in the mass loss properties has been attributed to the decrease in the envelope density (a recent different suggestion for this behavior made by Garcia-Segura et al. is criticized in $`\mathrm{\S }3.2`$ below). In an earlier paper (Soker & Harpaz 1992) we proposed a mode-switch to nonradial oscillations. This scenario is related to the fast decrease in the Kelvin-Helmholtz time of the envelope, eventually becoming shorter than the fundamental pulsation period. For a low mass envelope the Kelvin-Helmholtz time is $$\tau _{\mathrm{KH}}\left(\mathrm{envelope}\right)\frac{GM_{\mathrm{core}}M_{\mathrm{env}}}{RL}=0.6\left(\frac{M_{\mathrm{core}}}{0.6M_{}}\right)\left(\frac{M_{\mathrm{env}}}{0.1M_{}}\right)\left(\frac{R}{300r_{}}\right)^1\left(\frac{L}{10^4L_{}}\right)^1\mathrm{yrs},$$ (1) where $`M_{\mathrm{core}}`$ and $`M_{\mathrm{env}}`$ are the core and envelope mass, respectively, $`R`$ the stellar radius and $`L`$ stellar luminosity. As the envelope expands the fundamental mode period increases, and due to mass loss and increase in luminosity, the Kelvin-Helmholtz time decreases. When the envelope mass becomes $`0.1M_{}`$, the Kelvin-Helmholtz time becomes shorter than the fundamental pulsation period. We noted (Soker & Harpaz 1992), though, that even this “single-star” mechanism requires a binary companion to spin-up the AGB envelope in order to fix the symmetry axis, and make some non-radial modes more likely than others (if all non-radial modes exist, then the overall mass loss geometry is still spherical). The widely accepted model for the high mass loss rate on the upper AGB includes strong stellar pulsations coupled with efficient dust formation (e.g., Wood 1979; Bedijn 1988; Bowen 1988; Bowen & Willson 1991; Fleischer, Gauger, & Sedlmayr 1992; Höfner & Dorfi 1997). In a recent paper, one of us (Soker 1998) proposed another mechanism for asymmetrical mass loss based on the decrease of the envelope density. This mechanism is based on the above model of mass loss due to pulsations coupled with dust formation. 
In that scenario which was further developed by Soker & Clayton (1999), Soker assumes that a weak magnetic field forms cool stellar spots, which facilitate the formation of dust closer to the stellar surface. Dust formation above cool spots, due to large convective elements (Schwarzschild 1975), or magnetic activity, enhances the mass loss rate (Frank 1995). If spots due to the dynamo activity are formed mainly near the equatorial plane (Soker 1998; Soker & Clayton 1999), then the degree of deviation from sphericity increases. Soker (1998) claims, based on a crude estimate, that this mechanism, of dust formation above cool magnetic spots, operates for slowly rotating AGB stars, having angular velocities of $`\omega 10^4\omega _{\mathrm{Kep}}`$, where $`\omega _{\mathrm{Kep}}`$ is the equatorial Keplerian angular velocity. Such angular velocities could be gained from a planet companion of mass $`>0.1M_{\mathrm{Jupiter}}`$, which deposits its orbital angular momentum to the envelope at late stages, or even from single stars which are fast rotators on the main sequence. The advantage of the late tidal interaction with a binary companion is that the companion can explain the formation of jets (Soker 1992, 1997; Soker & Livio 1994), while in models based on the decrease in the envelope mass (where a companion is required only to spin-up the envelope at earlier stages), there is no satisfactory model for the formation of jets. In any case, even for a late tidal interaction the mechanism of dust formation above cool magnetic spots may be significant. The scenario of asymmetrical mass loss via cool magnetic spots contains several assumptions and speculative effects, e.g., about the dynamo activity. To strengthen this scenario, we conduct numerical calculations to study the properties of the envelopes of upper AGB stars. We find that the entropy gradient becomes steeper, while the density profile becomes shallower. The steeper entropy profile increases the convective pressure, and makes the envelope, inside and outside the magnetic flux tubes, more prone to any convective motion. We speculate that these changes will result in a more efficient amplification of the envelope magnetic field, both through global dynamo activity and local concentration of magnetic flux tubes by the convective motion. The evolutionary simulation is described in $`\mathrm{\S }2`$, where we follow the evolution of an AGB star, from the late AGB to the post-AGB phase. We present the structure of the envelope at five evolutionary points, and show that the envelope’s density and entropy profiles changed significantly, and therefore may be the properties which determine the large change in the behavior of the mass loss rate and geometry at the end of the AGB. We would like to stress that we do not propose a new mass loss mechanism. We accept that pulsations coupled with radiation pressure on dust is the mechanism for mass loss (e.g., Bedijn 1988; Bowen 1988), and that the luminosity, radius, and mass of the AGB star are the main factors which determine the mass loss rate (e.g., Bowen & Willson 1991; Höfner & Dorfi 1997). We only suggest that magnetic cool spots ease the formation of dust, and that their concentration near the equator causes the mass loss geometry to deviate from sphericity (Soker 1998; Soker & Clayton 1999). 
In $`\mathrm{\S }3`$ we discuss the general behavior of the envelope properties, in particular the density and entropy profiles, and speculate on the way by which they may enhance the formation of cool magnetic spots. We summarize in $`\mathrm{\S }4`$. ## 2 ENVELOPE PROPERTIES In this section we describe the results of a numerical simulation of an AGB stellar model, as it evolves on the upper AGB, and turns to a post-AGB star. The stellar evolutionary code is similar to the one described by Harpaz, Kovetz, & Shaviv (1987), and we describe it here briefly. The equations of evolution are replaced by the difference equations for each mass shell: $$T\left(SS^0\right)=\left[q\frac{\mathrm{\Delta }L}{\mathrm{\Delta }m}+T\underset{a}{}\left(\frac{S}{x_a}R_a\right)\right]\delta t,$$ (2) $$\left(vv^0\right)=\left[4\pi r^2\frac{\mathrm{\Delta }P}{\mathrm{\Delta }m}\frac{Gm}{r^2}\right]\delta t,$$ (3) $$\frac{\mathrm{\Delta }\left(4\pi r^3/3\right)}{\mathrm{\Delta }m}=\frac{1}{\rho },$$ (4) and $$\left(x_ax_a^0\right)=\left[R_a(\rho ^0,T^0,x^0)\right]\delta t,$$ (5) where $`\delta t`$ is the selected time step, $`\mathrm{\Delta }`$ denotes space difference, superfix $`0`$ denotes variables evaluated at the beginning of the time step, and other variables are calculated at the end of the time step. $`T`$, $`S`$, $`P`$, $`\rho `$, $`L`$ denote the temperature, the entropy, the pressure, the density and the luminosity respectively, $`q`$ denotes the rate of nuclear energy production (minus the energy carried by neutrinos), $`v`$ is the velocity of matter, and $`x_a`$ and $`R_a`$ are the chemical composition and the reaction rate of the chemical isotope $`a`$, respectively. While quasi-hydrostatic evolution holds, the left side of equation (3) is equated to zero. The fully dynamical version can be used during fast evolutionary phases, which are observed whenever $`\left(vv^0\right)/\delta t`$ exceeds a few percent of $`Gm/r^2`$ anywhere. Nuclear reactions are calculated for five elements ($`H,He,C+N,O,Ne`$). The equations are implicit, and are solved by iterations, using the full Heney code. Convection is calculated by the mixing length prescription: $$L_{conv}=4\pi r^2\rho C_pv_cl_p\mathrm{\Delta }T,$$ (6) where $$\mathrm{\Delta }T=\left(\left|\frac{dT}{dr}\right|_{star}\left|\frac{dT}{dr}\right|_{ad}\right),$$ (7) and $$v_c^2=l_p^3\mathrm{\Delta }T.$$ (8) Here $`v_c`$ is the convective velocity, $`l_p`$ is the pressure scale height, and $`C_p`$ is the heat capacity per unit mass at constant pressure. The convection velocity is given in units of the local isothermal sound velocity $`c_s=\left(kT/\mu m_H\right)^{1/2}`$. From these equations we derive the the ratio of convective to thermal pressure $$\frac{P_{\mathrm{conv}}}{P_{th}}=\frac{v_c^2}{c_s^2}=\left(\frac{gl_p}{2TC_p}\right)^{2/3}\frac{F_{\mathrm{conv}}^{2/3}}{c_s^2}\rho ^{2/3}.$$ (9) We limited the value of $`v_c/c_s`$ to unity, and in the zone in which this value equals unity, the convective ram pressure is actually of the same magnitude as the thermal pressure. The atmosphere of the star was calculated by using Eddington approximation for grey atmosphere, hence we do not discuss details of the photosphere structure, and the values presented in the figures below are not accurate close to the stellar surface. Several notes should be made regarding the numerical calculation. First, we are interested mainly in the relative changes in the properties of the evolving envelope. 
Therefore, changing numerical values of physical variables that influence all evolutionary models in the same sense, e.g., the mixing length, will not affect our results. We use values that were found by us in previous works to give the best results. Second, the numerical code does not include dust formation in the atmosphere, and we do not have dust opacity. This code was developed using opacities from Alexander 1975. New opacity tables (e.g., Alexander, Rypma & Johnson 1983) were incorporated as they became available since this code was first used by us (Harpaz & Kovetz 1981), when they were found to be significantly different from former tables. Third, we do not model a full evolutionary track, but only calculate the structure of the envelope at five evolutionary points. Only in moving from the first to second point we have also evolved the core, for an average mass loss rate of $`10^6M_{}\mathrm{yr}^1`$ . In the last 4 evolutionary points the core mass was held constant (see below). Hence, we do not have a mass loss rate formula, but mechanically remove mass from the envelope until the desired envelope mass is achieved. Using this numerical code, we evolve a stellar model along the the AGB and beyond. In Figures $`15`$ we present some of the envelope properties versus the radius, at five points along the evolution. The quantities that are plotted are the temperature $`T`$ (in Kelvin), density $`\rho `$ ($`\mathrm{g}\mathrm{cm}^3`$), the mass $`m`$ ($`M_{}`$), the entropy $`S`$ (in relative units), the convection velocity $`v_c`$ (in units of the local sound speed $`c_s`$), the thermal pressure $`P`$ (dyne$`\mathrm{cm}^2`$), the moment of inertia $`I`$ ($`M_{}R_{}^2`$), and the pressure scale height $`l_p`$ ($`R_{}`$). The envelope masses at the five evolutionary points are $`M_{\mathrm{env}}=0.5`$, $`0.3`$, $`0.1`$, 0.033, and $`0.015M_{}`$, respectively. At an early time, when the mass is $`1.1M_{}`$, the core mass is $`0.58M_{}`$. We assume that when the envelope mass becomes $`0.3M_{}`$ the superwind started, and the core mass does not evolve much further. In order to isolate the influence of the envelope parameters during this evolutionary phase, we kept the core constant by switching off the chemical evolution. The core mass was kept at $`0.6M_{}`$. ## 3 DISCUSSIONS ### 3.1 The Density Profile The most striking changes in the envelope due to the mass loss are the decrease in the density below the photosphere and the changes in the density and entropy profiles (figs. 6 and 7, respectively). To emphasize this behavior, we present the density profile of the five models in Figure 6. This can be understood as follows. The photospheric pressure $`P_p`$ and density $`\rho _p`$ are determined by the stellar effective temperature $`T_p`$, luminosity, and photospheric opacity $`\kappa `$, and are given by (e.g., Kippenhahn & Weigert 1990, $`\mathrm{\S }10.2`$) $$P_p=\frac{2}{3}\frac{GM}{R^2}\frac{1}{\kappa },$$ (10) and $$\rho _p=\frac{2}{3}\frac{GM\mu m_H}{k_B}\frac{1}{R^2\kappa T_p},$$ (11) where $`M`$ is the stellar mass, $`R`$ is the photospheric radius, $`k_B`$ is the Boltzmann constant, and $`\mu m_H`$ is the mean mass per particle. In deriving these expressions we have used the definition of the photosphere as the place where $`\kappa ł\rho _p=2/3`$, where $`l`$ is the density scale height. 
At the level of accuracy of these expressions, we can take the pressure and density scale height at the photosphere to be equal (we do not consider here the density inversion region below the photosphere). In the range of the effective temperatures $`2800<T_p<3600`$ and the typical photospheric density of AGB stars, we find from Table 6 of Alexander & Ferguson (1994) that we can take the photospheric opacity to be (for a solar composition) $`\kappa 4\times 10^4(T/3,000)^q\mathrm{cm}^2\mathrm{g}^1`$, with $`q4`$. Substituting typical values for an upper AGB star, we find the ratio of the photospheric density to the average envelope density $`\rho _a=3M_{\mathrm{env}}/\left(4\pi R^3\right)`$ to be $$\frac{\rho _p}{\rho _a}0.25\left(\frac{R}{300R_{}}\right)\left(\frac{M}{10M_{\mathrm{env}}}\right)\left(\frac{T_p}{3,000\mathrm{K}}\right)^{q1}.$$ (12) As the star evolves along the AGB, the three factors of equation (12) contribute to the increase in the ratio of the photospheric density to the average density, with the envelope mass being the most influential. Therefore, the increase in this ratio is quite fast on the upper AGB, as the envelope mass decreases. This ratio continues to increase even as the envelope starts to contract, when its mass is $`M_{\mathrm{env}}0.1M_{}`$. We find here (see also fig. 3 by Soker 1992) that in the range $`5\times 10^3\left(M_{\mathrm{env}}/M_{}\right)0.1`$ the envelope radius goes as $`RM_{\mathrm{env}}^{0.2}`$. For a constant luminosity post-AGB star, the temperature goes as $`R^{1/2}`$, and with $`q=4`$ for the opacity dependence, we find $$\frac{\rho _p}{\rho _a}\left(\frac{M_{\mathrm{env}}}{0.1M_{}}\right)^{0.3}\left(\frac{\rho _p}{\rho _a}\right)_{M_{\mathrm{env}}=0.1}.$$ (13) This predicts a very shallow density profile for post-AGB stars with $`M_{\mathrm{env}}5\times 10^3M_{}`$, as is indeed found by Soker (1992; see his fig. 1) and the previous section here. During this stage the radius is still large, $`R150R_{}`$, and the temperature low enough for dust to form quite easily. Therefore, the deviation from spherical mass loss, if facilitated by the shallower density profile, may increases during the early post-AGB phase. ### 3.2 The Angular Momentum Problem The changes in the envelope’s properties should more than compensate for the fast decrease in the angular velocity of the star. The angular velocity decreases very fast due to the intense mass loss rate. For an envelope density profile of $`\rho r^2`$, the decrease goes as (for a constant radius on the upper AGB and a solid body rotation in the envelope) $`\omega M_{\mathrm{env}}^2`$ (Harpaz & Soker 1994). Since the density profile is steeper than $`\rho r^2`$ during most of the early AGB phase, the decrease in the angular velocity will be even faster. Let us refer to the dynamo activity required for the formation of cool magnetic spots (Soker 1998). At this point we cannot predict the angular velocity which is required to operate an efficient dynamo in AGB stars. However, based on the strong convective motion, we speculate that the required angular velocity is very low. Soker (1998), based on the work of Soker & Harpaz (1992), crudely estimates the required equatorial surface angular velocity to be $`\omega 10^4\omega _{\mathrm{Kep}}`$, where $`\omega _{\mathrm{Kep}}`$ is the Keplerian velocity on the equator. For angular velocity of $`\omega 10^2\omega _{\mathrm{Kep}}`$, a massive companion is required to spin up the envelope, and other effects become important. 
Therefore, the mechanism of cool magnetic spots, though it can be very effective for fast rotations, was introduced in order to explain axisymmetrical mass loss from slow AGB rotators, i.e., $`\omega 10^2\omega _{\mathrm{Kep}}`$. An angular velocity of $`\omega 10^4\omega _{\mathrm{Kep}}`$ can be attained even by fast rotating main sequence stars. However, due to the angular momentum loss mentioned above, in order for the dynamo to stay effective, the AGB star should be spun-up by a companion, and/or the dynamo must be effective even for $`\omega 10^5\omega _{\mathrm{Kep}}`$. As pointed out by Soker (1998), for the envelope spin-up, if it occurs on the upper AGB, a planet companion of mass $`0.1M_{\mathrm{Jupiter}}`$ is sufficient. As we mention below, born-again AGB stars may hint that the mechanism is efficient even for $`\omega 0.3\times 10^4\omega _{\mathrm{Kep}}`$. As pointed out by Soker (1998) and Soker & Clayton (1999), possible support for the influence of the low density in the envelope, and the effectiveness of the mechanism for axisymmetrical mass loss, comes from the PN A30. This PN has a large and almost spherical halo, with optically bright hydrogen-deficient blobs in the inner region, which are arranged in a more or less axisymmetrical shape. The blobs are thought to result from a late helium shell flash (i.e., a born-again AGB star). After a late helium flash, the star expands to a radius of $`R100R_{}`$, and since it has a very low envelope mass, the density profile will be very shallow. This may explain the axisymmetrical knots of A30, despite its almost spherical halo. Soker & Clayton (1999), by using the dust formation above cool magnetic spots, point to a possible connection between the mass loss behavior of R Coronae Borealis stars, which are thought to be born-again stars and AGB stars. Heber, Napiwotzki, & Reid (1997) find that single WDs rotate very slowly, $`v_{\mathrm{rot}}<50\mathrm{km}\mathrm{s}^1`$. A central star of a PN which rotates at a velocity of $`10\mathrm{km}\mathrm{s}^1`$, when expanding as a born-again AGB star to $`100R_{}`$, will rotate at $`10^3\mathrm{km}\mathrm{s}^10.3\times 10^4`$$`v_{\mathrm{Kep}}`$, where $`v_{\mathrm{Kep}}`$ is the Keplerian rotation velocity. A shallow density profile with some dynamo activity may result in dust formation (as in R Coronae Borealis stars which have similar radii), and axisymmetrical mass loss. Soker (1998) extensively discusses the advantage of the cool magnetic spots mechanism over models which require fast AGB rotation (his section 2). Despite this, in a recent paper Garcia-Segura et al. (1999) propose a scenario which they claim can operate for single AGB stars. They argue for an equatorial rotation velocity of $`>1\mathrm{km}\mathrm{s}^1`$ for an AGB star of $`150R_{}`$. We think their model can operate only if a stellar companion of mass $`M_20.1M_{}`$ spins-up the envelope. But then other effects of the companion (e.g., Livio 1997; Soker 1997; Mastrodemos & Morris 1998; 1999), become more significant. More specifically, we think their model is wrong for the following reasons. (1) Their mechanism to spin-up the envelope on the upper AGB is a transfer of a large amount of angular momentum from the core to the envelope of mass $`0.1M_{}`$. However, a coupling between the core and the envelope occurs at early stages of the evolution, i.e., first and second dredge-up, on the RGB and early AGB, respectively. 
(2) Many bipolar PNs have equatorial mass concentration with a mass of $`0.1M_{}`$, for which their mechanism of spin-up is not efficient. (3) Even for an envelope mass of $`0.1M_{}`$, the required angular momentum of the core means that it rotates, before its coupling to the envelope, at an angular velocity of $`0.3`$ times the Keplerian velocity at the core’s surface. This seems to be too fast. Balbus & Hawley (1994) maintain that the powerful weak-field MHD instability is likely to force a solid body rotation even in the radiative zone of stars. We think that this MHD instability is likely to transport most of the core’s angular momentum to the envelope, even if the region is stable to the Hoiland criterion. (4) For the formation of bipolar PNs their model requires the rotation velocity to be $`6.9\mathrm{km}\mathrm{s}^1`$ (their model D), for which they get an equatorial to polar density ratio of 112. By the time the rotation velocity is $`6.36\mathrm{km}\mathrm{s}^1`$ (their model C), the density ratio decreases to 9. However, as we discussed above, a decrease of the envelope mass by only $`4\%`$ will bring their model D to model C. A loss of $`30\%`$ of the envelope will bring them to model B, a rotation velocity of $`3.5\mathrm{km}\mathrm{s}^1`$ and a density contrast of $`<2`$. (5) To support their model, they cite the similarity between the structure of the Hourglass PN (also named MyCn 18 and PN G 307.5-04.9) and the nebula around $`\eta `$ Carinae. However, it seems that the central star of $`\eta `$ Carinae has a binary companion at an orbital separation of $`15\mathrm{AU}`$ (e.g., Lamers et al. 1998). (6) The problems with the magnetic activity during the late post-AGB phase were listed by Soker (1998, his section 2.2), and we will not repeat the arguments here. Basically, this mechanism requires substantial spin-up by a massive binary companion. We think that each of the first four points above is strong enough to make their model questionable. ### 3.3 Implications for Magnetic Cool Spots Our knowledge of magnetic cool spots comes mainly from the sun, for which there is a huge amount of theoretical work (e.g., Priest 1987), although there is no complete acceptable theory yet for the formation and evolution of spots. However, some basic ingredients seem to be common to all models. 1) First, there is a need for dynamo activity to amplify the magnetic field, and replace the magnetic flux which reaches the solar surface and escapes. In the previous subsection we presented some indications that the mechanism for axisymmetrical mass loss may operate for very slow rotation $`\omega 3\times 10^5\omega _{\mathrm{Kep}}`$. We suggest that due to the strong convective motion, i.e., high convective pressure, which results from the steep entropy gradient, dynamo activity in AGB stars is possible even when the rotation is very slow. No theory for dynamo activity exists which allows us to calculate the magnetic activity, and therefore at present we cannot expand on this point. 2) Another basic ingredient for the formation of cool magnetic spots is the process by which the motion of convection cells concentrates magnetic flux to form a strong vertical magnetic field, which then suppresses the vertical convective heat transport, hence leading to a cool spot. We suggest that the high convective pressure makes this process more efficient. 
3) In addition to the external convective pressure that concentrates the magnetic field, there is another stage in the amplification of the magnetic field inside the tube. In this stage material cools, because of the reduced heat transfer, and sinks inside the tube (Meyer et al. 1974; Priest 1987. $`\mathrm{\S }\mathrm{8.6.1}`$). The steeper entropy profile (fig. 7) makes the envelope, inside and outside the tube, more prone to any convective motion. Hence, the sinking of cool gas mechanism to enhance the magnetic field may become more efficient. 4) For the formation of a stable spot the flux tube should be vertical. The sensitivity of the envelope to convective motion (i.e. to the sinking and rising motion of blobs), because of the steep entropy gradient, may cause rising flux tubes to become vertical more frequently. That the convective pressure increases can be seen from simple considerations that led to equation (9). From all factors in that equation, only the density changes fast during the intense mass loss rate on the upper AGB. The density decreases, and hence the convective to thermal pressure ratio increases. As can be seen from figures 1-5, the ratio of convective to isothermal sound speed becomes equal to unity (i.e., $`v_c/c_s=1`$) deeper and deeper in the envelope. This, we suggest, results in strong magnetic activity in the outer envelope. The solar magnetic field may be used as a hint on the expected behavior of AGB stars magnetic field, despite the large differences between the types of stars (Soker & Clayton 1999). This is implied from our assumption that dynamo mechanism amplifies the AGB magnetic field, as is the case for the solar magnetic field. Most relevant to the present work are the 4 basic ingredients listed above, and the expected concentration of cool magnetic spots toward the equator of the star. Finally, two other interesting properties should be noted. First, the location of the inner boundary of the convective region moves outward. For the envelope masses of $`0.5`$, $`0.3`$, $`0.1`$, $`0.033`$, and $`0.015M_{}`$, the inner boundaries are at $`0.3`$, $`0.4`$, $`1`$, $`4`$, and $`6R_{}`$, respectively. Second, the photospheres of cool magnetic spots in AGB stars are above the rest of the photosphere (Soker & Clayton 1999), contrary to the sun where cool spots are several hundred kilometers deeper than their surroundings. This complication should be included in future detailed calculations of the structure of magnetic cool spots in AGB stars. ## 4 SUMMARY In the present paper we examined the possibility that the transition from a spherical mass loss to an axisymmetrical mass loss on the upper AGB is caused by changes in some of the envelope’s properties. The transition to axisymmetrical mass loss is inferred from the structure of many elliptical planetary nebulae. Our main results can be summarized us follows. (1) As the envelope mass decreases on the upper AGB and early post-AGB phases, the density quickly decreases and its profile becomes very shallow. At the same time the entropy profile becomes very steep. This suggests that the transition to axisymmetrical mass loss which is inferred from observations, may be related to these properties, unless it is due to a late interaction with a stellar or a planet companion. (2) We qualitatively discussed a few processes by which the changes in the density and entropy profiles may lead to an enhanced formation of magnetic cool spots, which we assumed to be concentrated near the equatorial plane. 
We suggest that this in turn will lead to an enhanced formation of dust and a higher mass loss rate near the equatorial plane. We also claim that such magnetic activity requires only slow rotation, as the magnetic field is not globally dynamically important. ACKNOWLEDGMENTS: We would like to thank the referee, S. Höfner, for detailed and helpful comments. This research was supported in part by a grant from the University of Haifa and a grant from the Israel Science Foundation. FIGURE CAPTIONS Figure 1: The envelope structure of an AGB star model with a total mass of $`1.1M_{}`$, a core mass of $`0.58M_{}`$, and a luminosity of $`8,400L_{}`$. The quantities that are plotted versus the radius are: temperature $`T`$, density $`\rho `$, the mass $`m`$ ($`M_{}`$), the entropy $`S`$, the convection velocity $`v_c`$ (in units of the local sound speed), the thermal pressure $`P`$, the moment of inertia $`I`$ ($`M_{}R_{}^2`$), and the pressure scale height $`L_p`$ ($`R_{}`$). $`T`$, $`\rho `$, and $`P`$ are in cgs units, and $`S`$ is in relative units. Note that we treat the region near the photosphere using the Eddington approximation of grey atmosphere, and therefore the values of the density, pressure, and temperature very close to the surface (photosphere) are not accurate. Figure 2: Like figure 1, but at a later time when the total mass is $`0.9M_{}`$ and the core mass is $`0.6M_{}`$. The stellar luminosity is $`8,700L_{}`$. Figure 3: Like figure 2 (the same core mass and luminosity), but at a later time when the total mass is $`0.7M_{}`$. Figure 4: Like figure 2, but at a later time when the total mass is $`0.63M_{}`$. Figure 5: Like figure 2, but at a later time when the total mass is $`0.615M_{}`$. Figure 6: The density profile of the five models presented in figs. 1-5. The thick lines represent the model during the contraction phase, while the thin solid line is the model presented on fig. 1 (total mass of $`1.1M_{}`$). The masses of the models are: $`0.9M_{}`$ (dot, dot, dot-dash line), $`0.7M_{}`$ (dot-dash line), $`0.633M_{}`$ (dotted line), and $`0.615M_{}`$ (solid line). Figure 7: The entropy profile of the five models presented in figs. 1-5. The lines represent the same models as in figure 6.
no-problem/9907/hep-ex9907018.html
ar5iv
text
# 1 Introduction ## 1 Introduction As has been recently pointed out in the literature \[1-4\], the analysis of the precision data on the decays $`\mathrm{Z}\mathrm{f}\overline{\mathrm{f}}`$ from LEP and SLD has shown good agreement with the predictions of the Standard Electroweak Model (SM) with the exception of the parameter $`A_\mathrm{b}`$ defined as: $$A_\mathrm{b}=\frac{2(\sqrt{14\mu _\mathrm{b}})\overline{r}_\mathrm{b}}{14\mu _\mathrm{b}+(1+2\mu _\mathrm{b})\overline{r}_\mathrm{b}^2}$$ (1.1) where $$\overline{r}_\mathrm{b}=\overline{v}_\mathrm{b}/\overline{a}_\mathrm{b}$$ Here $`\overline{v}_\mathrm{b}`$ and $`\overline{a}_\mathrm{b}`$ are the effective b quark coupling constants and $`\mu _\mathrm{b}=(\overline{m}_\mathrm{b}(M_\mathrm{Z})/M_\mathrm{Z})^21.0\times 10^3`$ . Since 1995, the LEP+SLD average value of $`A_\mathrm{b}`$ has differed from the SM prediction of 0.935 by between 2.4 and 3.1 standard deviations. The evolution with time of the LEP+SLD average value of $`A_\mathrm{b}`$ is shown in Table 1 and Fig.1 \[1,7-13\]. It is important to note that, in the SM, the prediction for $`A_\mathrm{b}`$ is essentially a fixed number with no significant dependence on the values of the masses of of the top quark or the Higgs boson (see Figs 5-7 below). Combining the $`A_\mathrm{b}`$ measurement with that of $`R_\mathrm{b}`$, which shows relatively good agreement with the SM, enables the effective b quark couplings $`\overline{v}_\mathrm{b}`$, $`\overline{a}_\mathrm{b}`$ or $`\overline{g}_\mathrm{b}^L`$, $`\overline{g}_\mathrm{b}^R`$ to be extracted \[2-4\]. When this is done, the largest deviation from the SM prediction is found to be in the right handed effective coupling $`\overline{g}_\mathrm{b}^R`$ which is about 40$`\%`$ and three standard deviations higher than the SM prediction. The aim of the present note is a thorough study of the data dependence of the LEP+SLD average value of $`A_\mathrm{b}`$. Important questions concern the consistency of individual measurements, and the effect of one or a few ‘outlying’ measurements on the average. At SLD the parameter $`A_\mathrm{b}`$ is measured directly from the forward/backward, left/right asymmetry of tagged b quarks. Three different types of measurement are made. The b quarks are tagged using a decay vertex and the jet charge, a semi-leptonic weak decay, or a $`\mathrm{K}^\pm `$ tag . The LEP value of $`A_\mathrm{b}`$ is instead derived from the $`Z`$-pole forward/backward charge asymmetry, related to $`A_\mathrm{b}`$ by the expression: $$A_{FB}^{0,\mathrm{b}}=\frac{3}{4}A_\mathrm{e}A_\mathrm{b}$$ (1.2) where $`A_e`$ is the parameter defined similarly to $`A_\mathrm{b}`$ (Eqn.(1.1)) for the electron. In general lepton universality i.e. $`A_{\mathrm{}}=A_\mathrm{e}=A_\mu =A_\tau `$, is assumed. Each of the four LEP experiments measures $`A_{FB}^{0,\mathrm{b}}`$ using either a lepton tag or the combination of decay vertex and jet charge measurements. Thus there are eight separate (though not completely uncorrelated) LEP measurements of $`A_{FB}^{0,\mathrm{b}}`$. Using the LEP+SLD average value of $`A_{\mathrm{}}`$ ($`A_{\mathrm{}}=0.1490\pm 0.0017`$) and Eqn.(1.2) the corresponding values of $`A_\mathrm{b}`$ for each LEP experiment and each analysis method may be calculated. These results are shown, together with the three direct SLD measurements, in Table 2 and Fig.2. The data shown are the most recent (Spring 1999) available at the time of writing. 
They are essentially the same as those presented at the 1998 Vancouver conference except for the recent important update of the SLD jet charge measurement which yields an SLD average value of $`A_\mathrm{b}`$ that is consistent, at the one standard deviation level, with the SM prediction. Because the LEP value of $`A_\mathrm{b}`$ depends directly on the LEP+SLD average value of $`A_{\mathrm{}}`$, it is of interest to compare the different measurements of this quantity. Each of the four LEP experiments measures $`A_{\mathrm{}}`$ either via the forward/backward leptonic charge asymmetry: $$A_{\mathrm{}}=\sqrt{\frac{4A_{FB}^{0,\mathrm{}}}{3}},(l=\mathrm{},\mu ,\tau )$$ (1.3) or by the analysis of $`\tau `$-polarisation. The angular average of the $`\tau `$-polarisation measures $`A_\tau `$, whereas the angular distribution of the polarisation is also sensitive to $`A_e`$. Combining, for each LEP experiment, under the assumption of lepton universality, the measurements of $`A_\tau `$ and $`A_e`$, and including $`A_e`$ as measured at SLD by the left/right electron beam polarisation asymmetry, leads to the nine independent measurements of $`A_{\mathrm{}}`$ shown in Table 3 and Fig.3. Very good consistency can be seen in Table 2 and Fig.2 between the 11 different measurements of $`A_\mathrm{b}`$ ($`\chi ^2/dof=4.5/10,CL=92\%`$ for consistency of the measurements with their weighted mean). The LEP and SLD average values agree within 0.2$`\sigma `$. As noted also for the 1996 data set , the mutual consistency of the different $`A_{\mathrm{}}`$ measurements is somewhat less satisfactory. Although the $`\chi ^2`$ test gives: $`\chi ^2/dof=10.7/8,CL=22\%`$ which is acceptable, three measurements (OPAL $`A_{FB}^{0,\mathrm{}}`$ and the $`\tau `$-polarisation measurements of DELPHI and OPAL) all show negative deviations of 1.5$`\sigma `$ or more from the weighted average value. In contrast, all the positive deviations are $``$ 1$`\sigma `$. The average value of $`A_{\mathrm{}}`$, and hence the derived LEP value of $`A_\mathrm{b}`$ is thus sensitive to the inclusion or exclusion of these data, as will be discussed below. The situation concerning the consistency of the $`\tau `$-polarisation measurements, both with each other, and with the other determinations of $`A_{\mathrm{}}`$, discussed in detail for the 1996 data set in reference , has recently been improved by the new, more precise, ALEPH measurement (see Fig 3). ## 2 Effect of Individual Measurements on the Average Value of $`A_\mathrm{b}`$ In this Section the sensitivity of the $`A_\mathrm{b}`$ value to the different data contributing to the world average is examined. The results of this study are presented in Table 4. The ALEPH jet charge $`A_\mathrm{b}`$ value is the only one that lies above the SM prediction. The probability that ten or more out of eleven measurements of a quantity all lie either above or below the expected value is 1.2$`\%`$. Removing the ALEPH jet charge measurement increases the deviation from -2.4$`\sigma `$ to -2.8$`\sigma `$. The $`A_{FB}^{0,\mathrm{b}}`$ measurement with the largest weight in reducing the average value of $`A_\mathrm{b}`$ is the OPAL lepton measurement. Excluding this datum gives $`A_\mathrm{b}=0.902(19)`$ only 1.7$`\sigma `$ below the SM prediction. This single measurement gives, therefore, a significant contribution to the overall deviation of $`A_\mathrm{b}`$. As discussed in detail in Ref. 
, apparent inconsistencies exist between the $`\tau `$-polarisation measurements of $`A_{\mathrm{}}`$ by the different LEP experiments. Currently two measurements (ALEPH and L3) show good agreement with the Weighted Average (WA) value, whereas the other two (OPAL and DELPHI) show rather large (1.5-2.0$`\sigma `$) deviations as shown in Fig.3 and Table 3. Removing the latter measurements gives a small increase of the deviation from the SM to -2.6$`\sigma `$. Removing both the ALEPH and the DELPHI $`\tau `$-polarisation measurements and the ALEPH jet charge $`A_{FB}^{0,b}`$ result increases the deviation to -2.9$`\sigma `$, whereas removing the same $`\tau `$-polarisation measurements and the OPAL lepton $`A_{FB}^{0,\mathrm{b}}`$ result reduces the deviation to -1.9$`\sigma `$. Thus exclusion of ‘marginal’ data results in a variation of the $`A_\mathrm{b}`$ deviation from -1.7$`\sigma `$ to -2.9$`\sigma `$ as compared to the all data deviation of -2.4$`\sigma `$. One may remark however that, in general, removal of the data with the largest deviations from the average values (OPAL for $`A_{FB}^{0,l}`$, DELPHI and OPAL $`\tau `$-polarisation for $`A_{\mathrm{}}`$; ALEPH jet charge for $`A_{FB}^{0,\mathrm{b}}`$) tends to increase, not decrease the deviation from the SM. As mentioned above, the single measurement with the largest weight in the deviation is the OPAL lepton measurement of $`A_{FB}^{0,\mathrm{b}}`$. The average $`A_\mathrm{b}`$ value given by the LEP jet charge measurements, 0.913(28), shows good agreement with the SM prediction and is somewhat higher than the similar average of the lepton measurements, 0.880(26). However, the difference is mainly due to the high value of ALEPH measurement. Excluding this gives, for the jet charge average, 0.890(35), which agrees with the lepton average within 0.2$`\sigma `$. In the last two rows of Table 3 are shown the results of calculating $`A_\mathrm{b}`$ using either (i) only the measurements of each raw observable with the smallest total error, or (ii) the remaining data. The most accurate measurements are: ALEPH($`A_{FB}^{0,\mathrm{}}`$), ALEPH( $`\tau `$-polarisation), SLD($`A_{LR}`$), SLD jet charge ($`A_\mathrm{b}`$) and OPAL lepton ($`A_{FB}^{0,\mathrm{b}}`$). Although the weighted average error of the average using only the ‘most accurate’ measurements is 70$`\%`$ larger than for all data, the resulting value of $`A_\mathrm{b}=0.868(27)`$ still shows a -2.5$`\sigma `$ deviation from the SM. On the other hand, the remaining data with a weighted error only 38$`\%`$ larger than that for all data, gives a deviation of only -0.82$`\sigma `$ from the SM prediction. The poor consistency between these two sets of data evidently raises the question whether the systematic errors of some, or all, of the ‘most accurate’ measurements may have been under-estimated. If this is the case, the significance of the apparent deviation from the SM prediction may be much reduced. ## 3 The $`A_{\mathrm{}}`$ and $`A_\mathrm{b}`$ Measurements of the Different LEP and SLD Experiments The values of $`A_{\mathrm{}}`$ and $`A_\mathrm{b}`$ as measured separately by the four LEP experiments, and by SLD are presented in Table 5. For each LEP experiment $`A_\mathrm{b}`$ is calculated in two different ways: (i) by use of the world average value of $`A_{\mathrm{}}`$ in Eqn.(1.2), or (ii) by use, instead, of the value of $`A_{\mathrm{}}`$ measured by the experiment itself. In each case the deviation of $`A_\mathrm{b}`$ from the SM prediction is shown. 
It may be noticed that, although ALEPH provides two out of the five ‘most accurate’ measurements, which together yield a -2.5$`\sigma `$ deviation from the SM (see Table 4), the ALEPH measurement itself, for both cases (i) and (ii), is in good agreement with the SM. DELPHI shows small deviations of -0.92$`\sigma `$, -0.52$`\sigma `$ in the cases (i) and (ii), whereas L3 shows a larger deviation for case (ii) (-1.9$`\sigma `$) than for case (i) (-1.4$`\sigma `$). An interesting case is OPAL, which shows the largest deviation of any experiment (-1.9$`\sigma `$) in case (i), but a value quite consistent with the SM (0.40$`\sigma `$ deviation) in case (ii). This is easy to understand from Figs. 1 and 2. The OPAL lepton measurement gives, as mentioned above, the most significant deviation of $`A_\mathrm{b}`$ from the SM for the case (i) (see Fig. 1). However, it can be seen in Fig. 2 that the OPAL values of $`A_{\mathrm{\ell }}`$, as determined from $`A_{FB}^{0,\mathrm{\ell }}`$ and the $`\tau `$-polarisation measurement, lie well below the WA value. The combined effect is so large that, for the case (ii), the deviations of $`A_{FB}^{0,\mathrm{b}}`$ and $`A_{\mathrm{\ell }}`$ cancel almost exactly, giving an $`A_\mathrm{b}`$ value, calculated via Eqn. (1.2), in agreement with the SM prediction.
## 4 The LEP and SLD Measurements of $`A_\mathrm{b}`$
The separate LEP and SLD measurements of $`A_\mathrm{b}`$ are given in Table 2. They differ, respectively, from the SM prediction of 0.935 by -2.3$`\sigma `$ and -1.0$`\sigma `$. The data are compared in more detail in Figs. 5, 6, 7, which show plots of the measured values of $`A_\mathrm{b}`$ and $`A_{\mathrm{\ell }}`$ for LEP, SLD and LEP+SLD respectively. In Figs. 5 and 7 the LEP average $`A_{FB}^{0,\mathrm{b}}`$ measurement is shown as a diagonal band. In each case results of fits to $`A_\mathrm{b}`$ and $`A_{\mathrm{\ell }}`$ are shown, as well as the SM prediction for a range of values of $`m_t`$ and $`m_H`$. In Figs. 5 and 6 the dark square marked ‘WA’ shows the World Average best fit value: $`A_\mathrm{b}=0.894`$, $`A_{\mathrm{\ell }}=0.1487`$.
## 5 The Effect of Systematic Errors on the $`A_\mathrm{b}`$ Measurement
The different errors on the combined SLD and LEP measurements of $`A_\mathrm{b}`$ as estimated by the LEP/SLD Heavy Flavour Working Group are presented in Table 5. It can be seen that, even with the full LEP1 data set of all four experiments, the error on the LEP average value remains statistics dominated, and that the systematic error is about 50$`\%`$ correlated. In contrast, the SLD statistical and systematic errors are roughly equal and the correlated component of the systematic error is relatively small. Since the forward/backward b quark asymmetry measurements at SLD and LEP are very similar, and the systematic error related to the beam polarisation measurement gives only a small contribution, it is reasonable to hope for a considerable reduction in the SLD systematic error. Indeed, the smaller systematic error at LEP is largely due to the much larger statistics of Z-decays at LEP, permitting systematic effects related to quark fragmentation to be estimated from the data itself. Because of the large statistical weight of the LEP measurement, whose error is statistics dominated, the treatment of systematic errors is not expected to play a major rôle concerning the size of the $`A_\mathrm{b}`$ deviation. Even so, it is interesting to investigate the effect of different treatments of the systematic error on the $`A_\mathrm{b}`$ deviation.
It must not be forgotten that the estimation of systematic errors is, perhaps, more of an art than a science, so that all confidence levels estimated on the assumption that the systematic errors are both correct and gaussian should be taken cum grano salis. Here the effects are investigated of (i) using a uniform rather than a gaussian distribution for the systematic errors, (ii) an improvement in the systematic error of the SLD $`A_\mathrm{b}`$ measurement, (iii) optimism or conservatism in the assignment of systematic errors. A simple Monte Carlo program was used to generate ensembles of $`A_\mathrm{b}`$ measurements distributed according to the statistical and systematic errors of the different LEP and SLD experiments as shown in Tables 2 and 6. The correlated and uncorrelated components of the different $`A_\mathrm{b}`$ and $`A_{FB}^{0,\mathrm{b}}`$ measurements were properly taken into account. In all cases except one (see below) the systematic errors were modeled according to gaussian functions with RMS equal to the quoted errors. The error on the LEP+SLD average value of $`A_{\mathrm{\ell }}`$ used to extract the LEP values of $`A_\mathrm{b}`$ according to Eqn. (1.2) was taken to be gaussian and 100$`\%`$ correlated between the different measurements. The Standard Model value of $`A_\mathrm{b}`$ (0.935) was assumed, and the fraction, $`f`$, of ensembles of measurements with a simple mean value of $`A_\mathrm{b}`$ less than that given by the data ($`A_\mathrm{b}=0.886`$) was noted. In Table 6 the values of $`f`$ (corresponding to a one-sided CL) are shown for several different hypotheses concerning the errors. The first row corresponds to the quoted errors and assumes gaussian distributions. In the second row, all systematic errors are chosen according to uniform distributions with RMS equal to the quoted errors. In the third (fourth) row the effect is shown of increasing (decreasing) the systematic errors of all the LEP experiments by a factor 1.5. In the fifth row is shown the effect of reducing the systematic errors of the SLD experiments by a factor 2.7, so that the average systematic error becomes equal to the uncorrelated LEP systematic error. Finally, in the last two rows an additional scale factor of 1.5 or 1/1.5 is applied to the systematic errors of all experiments. As anticipated above, different scenarios for the systematic errors do not have a dramatic effect on the significance of the observed deviation. Use of a uniform distribution instead of a gaussian one (expected to reduce the tails of the distribution) in fact only gives a 2$`\%`$ relative change in $`f`$. Assuming that the SLD systematic error is reduced to the same level as the current LEP one, overestimation (underestimation) of all systematic errors by a factor 1.5 gives CLs of 0.44$`\%`$ (1.4$`\%`$) that the observed fluctuation is purely statistical, to be compared with 1.2$`\%`$ for the nominal errors. It may finally be remarked that a previous study of Z decay measurements showed a clear tendency to overestimate point-to-point systematic errors and to underestimate correlated ones. Correcting for the first effect would increase the significance of any deviation, while correcting for the second would tend to decrease it. Unfortunately, there are insufficient independent measurements to perform a similar analysis in the present case.
## 6 Summary and Outlook
This paper has studied, in detail, the data dependence of the parameter $`A_\mathrm{b}`$.
The individual measurements of both $`A_\mathrm{b}`$ and the related (for LEP) parameter $`A_{\mathrm{\ell }}`$ show quite good internal consistency. For $`A_\mathrm{b}`$ the largest positive deviation from the WA is given by the ALEPH jet charge measurement. Removing this increases the $`A_\mathrm{b}`$ deviation from -2.4$`\sigma `$ to -2.8$`\sigma `$. The single measurement with the largest weight tending to increase the size of the deviation is the OPAL lepton $`A_{FB}^{0,\mathrm{b}}`$ measurement. Removing this reduces the $`A_\mathrm{b}`$ deviation to -1.7$`\sigma `$. For $`A_{\mathrm{\ell }}`$ it may be noted that the $`A_{FB}^{0,\mathrm{\ell }}`$ measurement of OPAL and the $`\tau `$-polarisation measurements of DELPHI and OPAL all lie about 2$`\sigma `$ below the WA. Excluding these measurements slightly increases the $`A_\mathrm{b}`$ deviation to -2.6$`\sigma `$. The deviation observed is much larger (-2.5$`\sigma `$) if only the most accurate measurements of each raw observable are used than for all the remaining measurements (-0.82$`\sigma `$). This is a possible hint that the systematic errors of the ‘most accurate’ measurements may be underestimated, leading to an overestimation of the deviation from the SM for these data. The independent measurements of $`A_\mathrm{b}`$ for each LEP experiment give smaller deviations for all experiments, except L3, than when the world average value of $`A_{\mathrm{\ell }}`$ is used to extract $`A_\mathrm{b}`$. The naive WA (neglecting error correlations) of the individual measurements of $`A_\mathrm{b}`$ of the four LEP experiments and SLD shows only a -1.3$`\sigma `$ deviation. Using the world average value of $`A_{\mathrm{\ell }}`$ to extract $`A_\mathrm{b}`$ from the LEP experiments yields a deviation of -2.1$`\sigma `$, to be compared with -1.0$`\sigma `$ for the combined SLD experiments. A study of the modelling and the degree of optimism/conservatism in the estimation of systematic errors shows essentially identical results for gaussian or uniform distributions, and values of the CL for agreement with the SM that vary from 0.44$`\%`$ to 1.9$`\%`$, as compared to the nominal value of 1.2$`\%`$. In the future, some improvement may be expected in the SLD values of $`A_{\mathrm{\ell }}`$ and $`A_\mathrm{b}`$, mainly due to an improved understanding of systematic errors . On the other hand, no significant improvement is to be expected from the LEP results which, although many are still ‘preliminary’, are almost entirely based on the full LEP1 statistics. It may be noted that a recent summary of the SLD data found slightly different values for the LEP and SLD average values of $`A_\mathrm{b}`$, of 0.877(21) and 0.898(29) respectively (compare with the values given in Table 2). The small differences from the values used above do not affect any of the conclusions of this study. This paper is based on the precision electroweak data available in Spring 1999. In the Summer 1999 update , the values 0.881(20), 0.905(26) were given for the LEP, SLD average values, respectively, of $`A_\mathrm{b}`$. A fit to the combined LEP+SLD data for $`A_{\mathrm{\ell }}`$ and $`A_\mathrm{b}`$, similar to those shown in Figs. (5-7) of this paper, yielded the values: $`A_{\mathrm{\ell }}=0.1493(16)`$, $`A_\mathrm{b}=0.889(16)`$. Thus, in the most recent data, the significance of the $`A_\mathrm{b}`$ deviation has increased to 2.9$`\sigma `$. Finally, the deviation in $`A_\mathrm{b}`$, although interesting and possibly suggestive of new physics, is still of only marginal statistical significance.
If there is no fresh data from SLD it may be some decades before it is known for sure if the effective couplings of the b quarks are, or are not, in agreement with the SM predictions! Acknowledgements We thank Simon Blyth and Michael Dittmar for their careful readings of the paper and their helpful comments, and Franz Muheim for discussions of the LEP/SLD Heavy Flavour Working Group averages.
# Amplification of Gamma Radiation from X-Ray Excited Nuclear States<sup>1</sup>
<sup>1</sup>Published in Revue Roumaine de Physique 27, 559 (1982).
## 1 Introduction
The observed trend $`[1,2]`$ toward short-wavelength coherent sources of electromagnetic radiation from stimulated atomic transitions is presently confronted with difficulties inherent in the shorter atomic life-times of the excited states, smaller cross sections for the stimulated emission, and the lack of resonators appropriate at these frequencies. On the other hand, because of their small dimensions, the atomic nuclei have comparatively long-lived excited states, while the Breit-Wigner cross section for the interaction with the electromagnetic radiation is essentially the same for atoms and nuclei at a given wavelength. Considerable effort has therefore been directed toward the development of stimulated emission devices which would use the Mössbauer effect in nuclear transitions for the generation of electromagnetic radiation in the 1 …100 keV range of photon energy. The difficulties in the development of the coherent gamma ray sources and the perceived reward of success have been recently reviewed by Baldwin, Solem and Gol’danskii $`[2]`$ with the general conclusion that while no scientific principle has yet been shown to prohibit gamma ray lasers, further progress in this direction is dependent upon research and technological advances in many areas. Most of the earlier proposals for a gamma ray laser $`[3-12]`$ involved neutrons, either for in situ pumping $`[5,7-9]`$ or for the production of long-lived nuclear isomers $`[10,6,11,12]`$. However, the relatively low intensities of the available fluxes of neutrons and the difficulties inherent in the narrowing of the effective widths in the case of the isomers $`[13-17]`$ create major obstacles along these lines. In this context attention was given to the excitation of nuclear electromagnetic transitions, and among the processes considered are the use of bremsstrahlung radiation $`[18]`$, characteristic X radiation $`[19,20]`$, resonant Mössbauer radiation $`[9,21,22]`$, optical laser radiation $`[23-31]`$, and synchrotron radiation $`[32,33]`$. In this paper we shall discuss the possibility of the excitation of nuclear electromagnetic transitions by the absorption of X-ray quanta produced in the inner-shell atomic transitions, and the relevance of this process for the amplification of the gamma radiation from the excited nuclear states. It is shown that a significant level of nuclear excitation can be obtained by an appropriate choice of the atomic X-ray transition. The X-ray power required for the pumping of a gamma ray laser is compared with the parameters of existing X-ray flash devices. The nuclides whose level structure appears to be favorable for the gamma-ray amplification are tabulated together with the X-ray pumping transitions. It is concluded that the X-ray flash pumping technique might provide a useful approach for the development of a gamma ray laser, and motivates further investigation of the process of excitation of nuclear electromagnetic transitions.
## 2 Basic concepts
According to the general quantum mechanical description, the atomic nucleus can exist in a series of stationary states characterized by their energy, spin and parity. If the nucleus is in one of its excited states it generally undergoes a transition to a lower state which is accompanied by a corresponding energy transfer to the radiation field, atomic electrons, or other particles.
When the transition energy is converted into a gamma-ray photon, a small amount of the total energy appears in the final state as kinetic energy of the whole nucleus. Since this recoil energy is generally large compared to the nuclear level width, the gamma-ray emission and absorption lines of the free nuclei are shifted and broadened, and the cross section for the resonant interaction is correspondingly reduced. This difficulty can be avoided by making use of the Mössbauer effect $`[34]`$ for nuclei bound in a crystalline lattice, for transition energies which do not exceed 100 keV. The dominant multipolarity of low-energy gamma-ray transitions is M1, although E1 and E2 transitions also occur, and the typical life-time of these states is in the nanosecond range. The conventional method for the excitation of low-lying nuclear states is through beta decay or orbital electron capture from the contiguous nuclides. However, since the life-times of the decaying nuclei are very much longer than the life-times of the excited states, the concentration of the latter is extremely small. Another excitation technique would be the irradiation with a flux of neutrons, but again impractically large neutron fluxes are required to obtain significant population of the nanosecond excited states. On the other hand, the excitation of the nuclear states with electromagnetic radiation of appropriate energy appears to produce a significant level of population in the upper state. The electromagnetic radiation can be produced in X-ray transitions between the states of the atomic electron shell, and since the X-ray line spectrum involves energies up to about 100 keV for the heavy elements it seems appropriate for the excitation of the Mössbauer transitions. The widths of the X-ray states are 0.1…1 eV, and therefore the very short-lived atomic states are not suitable for direct amplification of the radiation.
## 3 Amplification of gamma radiation
Provided that a reasonable matching between the nuclear and the atomic transition energies could be obtained, an X-ray flash might be used to raise the nuclei from the ground state, $`a`$, to an upper state, $`b`$, in a process analogous to the optical pumping of the atomic transitions (Fig. 1). Now the process of transition to a lower state, $`c`$, through the emission of a gamma ray photon would be greatly enhanced in the presence of a large population in the upper state, $`b`$. As pointed out by several authors $`[2]`$, the development of a gamma ray pulse would require the population inversion between the states $`b`$ and $`c`$, and also the resonant gain must exceed the non-resonant losses. Since it is generally accepted that the gamma ray laser would be a single pass device $`[2]`$, the number of photons in the cascade induced by the spontaneously emitted gamma ray quanta must be of the order of the total number of nuclei in the upper state $`b`$, $$e^{\sigma _{bc}n_bL}\sim n_bLa^2,$$ (1) where $`\sigma _{bc}`$ is the cross section for the transition $`b\rightarrow c`$, $`n_b`$ is the concentration of nuclei in the state $`b`$, $`L`$ is the length and $`a`$ is the transverse dimension of the nuclear sample. Moreover, in order to have negligible diffraction losses, it is necessary $`[2]`$ that $$L\lambda \lesssim \frac{1}{3}a^2,$$ (2) where $`\lambda `$ is the gamma ray wavelength. The conditions, Eqs. (1) and (2), define the upper state concentration $`n_b`$ and the area, $`a^2`$, as functions of the length $`L`$ of the nuclear sample.
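As a numerical illustration of how Eqs. (1) and (2) jointly fix the threshold, the sketch below takes the diffraction condition at equality, $`a^2=3L\lambda `$, and solves Eq. (1) for the threshold concentration $`n_b`$ by bisection. The cross section and wavelength are assumed values of the order discussed later in Section 5, not prescriptions of the text.

```python
import math

# Hedged sketch: solve sigma*n_b*L = ln(n_b*L*a^2), the logarithm of Eq. (1),
# with a^2 = 3*L*lambda from Eq. (2) taken at equality.  sigma and lam are
# assumed, illustrative values.

def threshold_nb(L, lam, sigma, lo=1e12, hi=1e30):
    a2 = 3.0 * L * lam
    def excess(nb):
        return sigma * nb * L - math.log(nb * L * a2)
    # lo is chosen above the trivial root near n_b*L*a^2 ~ 1, so the
    # bisection converges to the amplification threshold.
    for _ in range(200):
        mid = math.sqrt(lo * hi)       # bisection in log space
        if excess(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return mid, math.sqrt(a2)

sigma = 1e-18    # cm^2, assumed stimulated-emission cross section
lam = 1.24e-9    # cm, wavelength of a ~100 keV photon
for L in (0.3, 1.0, 3.0):
    nb, a = threshold_nb(L, lam, sigma)
    print(f"L = {L:3.1f} cm: n_b ~ {nb:.1e} cm^-3, a ~ {a:.1e} cm, "
          f"N_b ~ {nb * L * 3 * L * lam:.1e}")
```

The trend agrees with the statement above: longer samples allow lower threshold concentrations, while the total number $`N_b`$ of excited nuclei grows.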
Increasing the length $`L`$ results in a larger transverse dimension $`a`$, but generally in a lower threshold density of the nuclei in the upper state. However, the total number $`N_b=n_bLa^2`$ of nuclei in the upper state increases with $`L`$, provided that $`N_b\sigma _{bc}>L\lambda `$. On the other hand, increasing the wavelength $`\lambda `$ for fixed dimensions $`L,a`$ results in lower threshold values for the concentration $`n_b`$ and the total number of excited nuclei $`N_b`$.
## 4 The X-ray pumping of nuclear electromagnetic transitions
The building up of a large concentration of nuclei in the excited state $`b`$ is facilitated by the fact that the atomic electrons screen the nucleus from interactions which otherwise would lead to the broadening of lines. In fact, although the concentration of nuclei in the upper state turns out to be large compared with typical inversion densities at optical frequencies, it represents a small fraction of the ground state concentration. The probability, $`w`$, for the excitation of a nucleus by an X-ray pulse of $`𝒩_x`$ quanta per cm<sup>2</sup> having a spectral width $`\mathrm{\Gamma }_x`$ around the nuclear transition energy is $$w=\sigma _{ab}𝒩_x\frac{\mathrm{\Gamma }_b}{\mathrm{\Gamma }_x},$$ (3) where $`\sigma _{ab}`$ represents the cross section for resonant nuclear excitation and $`\mathrm{\Gamma }_b`$ is the width of the upper state $`b`$. In general, the X-ray lines are broad compared to the widths of the nuclear levels, and only a small fraction $`\mathrm{\Gamma }_b/\mathrm{\Gamma }_x`$ of the X-ray quanta is effective in the excitation process. On the other hand, the relatively large X-ray width facilitates the matching of the resonance condition between the X and the gamma transitions. Since it is the density of the X-ray photons which determines the pumping rate, Eq. (3), it seems that a suitable X-ray source would be a filamentary plasma in the immediate proximity of the nuclear sample, as represented in Fig. 2. Among the devices which produce such flashes are the vacuum sparks, the exploding wires, the plasma focus, and the laser focus $`[36]`$.
## 5 Case study and tabulation
The minimum fluence $`\mathcal{F}`$ of the X-ray energy that must be injected into the sample is proportional to the nuclear transition energy $`E_{ab}`$, to the population of the upper state, $`N_b`$, and to the ratio $`\mathrm{\Gamma }_x/\mathrm{\Gamma }_b`$ of the atomic and nuclear widths: $$\mathcal{F}=E_{ab}N_b\frac{\mathrm{\Gamma }_x}{\mathrm{\Gamma }_b}.$$ (4) The quantity $`\mathcal{F}`$ is represented in Fig. 3 for values of the parameters likely to be encountered for gamma ray devices. The total energy in the X-ray flash must probably be at least one order of magnitude higher than that represented in Fig. 3 because of solid angle problems. For comparison, the presently available X-ray devices provide flashes with a duration as short as 10 ps and X-ray powers in excess of $`10^{10}`$ watts $`[37]`$, thus being able to provide about $`10^{-1}`$ Joules in periods of time which are short relative to the nuclear life-times. Assuming that the cross section for stimulated emission of the gamma rays is $`10^{-18}`$ cm<sup>2</sup> and that the length $`L`$ of the nuclear sample is 1 cm, then in order to obtain a significant gain of 1/cm a concentration of $`n_b=10^{18}`$ nuclei/cm<sup>3</sup> is necessary. Since, according to Eq. (2), $`a=10^{-4}`$ cm, the number of excited nuclei is $`N_b=3\times 10^{10}`$.
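The order of magnitude of Eq. (4) can be checked with assumed parameters typical of the nanosecond Mössbauer transitions discussed in Section 2; the particular transition energy, lifetime and X-ray width below are assumptions for illustration only.

```python
# Hedged sketch of Eq. (4): minimum X-ray fluence F = E_ab * N_b * Gamma_x/Gamma_b.
# The transition energy, lifetime and X-ray width are assumed illustrative values.

HBAR_EV_S = 6.582e-16      # hbar, eV*s
EV_TO_J = 1.602e-19

E_ab_eV = 1.0e4            # assumed nuclear transition energy (~10 keV)
tau_b = 1.0e-9             # assumed lifetime of the upper state b, s
gamma_b = HBAR_EV_S / tau_b
gamma_x = 0.1              # assumed atomic X-ray line width, eV
N_b = 3.0e10               # from the gain example above

fluence = E_ab_eV * EV_TO_J * N_b * gamma_x / gamma_b
print(f"Gamma_x/Gamma_b ~ {gamma_x / gamma_b:.1e}")
print(f"minimum injected X-ray energy ~ {fluence:.1f} J")
```

The resulting joule-scale threshold is one to two orders of magnitude above the $`10^{-1}`$ Joules available from the flash devices quoted above, in line with the conclusions of Section 6.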
Therefore, the energy stored in the nuclear excitation is about $`6\times 10^{-6}`$ Joules, and the energy of the gamma ray pulse would be of the order of $`10^{-6}`$ Joules. The intensity of the gamma ray pulse at 10 cm from the source would be of the order of $`10^9`$ watts/cm<sup>2</sup>, or about $`10^{14}`$ Ci/cm<sup>2</sup>. Further tabulated are those nuclides which appear to be of interest for the electromagnetic excitation with atomic X rays.
## 6 Conclusions
We have shown in this paper that the X-ray excitation of nuclear electromagnetic transitions might provide a technique for the pumping of a gamma ray laser. The performance of existing pulsed X-ray sources is one or two orders of magnitude below the threshold for the gamma amplification. Additional investigation is needed to ensure that the nuclear sample is stable enough against the high intensity X-ray pulse. This is consistent with the general conclusion that further progress in this direction is dependent upon research and technological advances in many areas. Acknowledgment. This paper was supported in part by the National Science Foundation under Grant No. INT 76-18982 and in part by the Romanian State Committee for Nuclear Energy under the U.S.-Romanian Cooperative Program in Atomic and Plasma Physics.
REFERENCES 1. D. J. Nagel, Phys. Fenn. 9, 381 (Supplement 1) (1974). 2. G. C. Baldwin, J. C. Solem and V. I. Gol’danskii, Rev. Mod. Phys. 53, 687 (1981). 3. G. C. Baldwin, J. P. Neissel, J. H. Terhune, and L. Tonks, Trans. Am. Nucl. Soc. 6, 178 (1963). 4. L. A. Rivlin, Vopr. Radioelektron. 6, 42 (1963). 5. V. I. Gol’danskii and Yu. Kagan, Zh. Eksp. Teor. Fiz., 64, 90 (1973) (Sov. Phys.-JETP, 37, 49 (1973)). 6. V. S. Letokhov, Zh. Eksp. Teor. Fiz. 64, 1555 (1973) (Sov. Phys.-JETP, 37, 787 (1973)). 7. J. C. Solem, Los Alamos Scientific Laboratory, Report No. LA-7898-MS (1979). 8. L. Wood and G. Chapline, Nature 252, 447 (1974). 9. V. I. Gol’danskii, Yu. Kagan and V. A. Namiot, Zh. Eksp. Teor. Fiz. Pis’ma Red., 18, 34 (1973) (JETP Lett. 18, 34 (1973)). 10. G. C. Baldwin, J. P. Neissel and L. Tonks, Proc. IEEE, 51, 1247 (1963). 11. V. I. Gol’danskii and Yu. M. Kagan, Usp. Fiz. Nauk, 110, 445 (1973) (Sov. Phys.-Usp. 16, 563 (1974)). 12. V. Vali and M. Vali, Proc. IEEE, 51, 182 (1963). 13. R. V. Khokhlov, Zh. Eksp. Teor. Fiz. Pis’ma Red. 15, 580 (1972) (JETP Lett., 15, 414 (1972)). 14. Yu. A. Il’inskii and R. V. Khokhlov, Usp. Fiz. Nauk, 110, 448 (1974) (Sov. Phys.-Usp. 16, 565 (1974)). 15. V. I. Gol’danskii, S. V. Karyagin and V. A. Namiot, Zh. Eksp. Teor. Fiz. Pis’ma Red., 19, 625 (1974) (JETP Lett. 19, 324 (1974)). 16. V. A. Namiot, Zh. Eksp. Teor. Fiz. Pis’ma Red. 18, 369 (1973) (JETP Lett., 18, 216 (1973)). 17. S. V. Karyagin, Zh. Tekh. Fiz. Pis’ma 2, 500 (1976) (Sov. Phys.-Tech. Phys. Lett., 2, 196 (1976)). 18. D. Marcuse, Proc. IEEE, 51, 849 (1963). 19. V. S. Letokhov, Kvantovaya Elektron., 4, 125 (1973) (Sov. J. Quantum Electr., 3, 360 (1974)). 20. V. I. Vysotskii, Zh. Eksp. Teor. Fiz. 77, 492 (1979) (Sov. Phys.-JETP, 50, 250 (1979)). 21. G. C. Baldwin and J. C. Solem, J. Appl. Phys. 51, 2372 (1980). 22. S. V. Karyagin, Zh. Eksp. Teor. Fiz., 79, 730 (1980) (Sov. Phys.-JETP, 52, 370 (1980)). 23. J. W. Eerkens, U.S. Patent 3,430,046 (1969). 24. E. V. Bakhlanov and P. V. Chebotaev, Zh. Eksp. Teor. Fiz. Pis’ma Red. 21, 286 (1975) (JETP Lett. 21, 131 (1975)). 25. L. A. Rivlin, Kvantovaya Elektron., 4, 676 (1977) (Sov. J. Quantum Electron., 7, 380 (1977)). 26. C. B. Collins, S. Olariu, M. Petrascu and I. Iovitzu Popescu, Phys.
Rev. Lett., 42, 1379 (1979). 27. C. B. Collins, S. Olariu, M. Petrascu and I. Iovitzu Popescu, Phys. Rev. C, 20, 1942 (1979). 28. B. Arad, S. Eliezer and A. Paiss, Phys. Lett. A, 74, 395 (1979). 29. S. Olariu, I. Iovitzu Popescu and C. B. Collins, Phys. Rev. C, 23, 50 (1981). 30. S. Olariu, I. Iovitzu Popescu and C. B. Collins, Phys. Rev. C, 23, 1007 (1981). 31. C. B. Collins, F. W. Lee, D. M. Shemwell, B. D. DePaola, S. Olariu, and I. Iovitzu Popescu, J. Appl. Phys., (1982). 32. V. F. Dmitriev and E. V. Shuryak, Zh. Eksp. Teor. Fiz., 67, 494 (1974) (Sov. Phys. -JETP, 40, 244 (1975)). 33. R. L. Cohen, G. L. Miller, K. H. West, Phys. Rev. Lett., 41, 381 (1978). 34. R. L. Mössbauer, Z. Physik, 151, 124 (1958). 35. L. Allen and G. I. Peters, J. Appl. Phys., A, 4, 564 (1971). 36. D. J. Nagel, in Advances in X-ray Analysis, vol. 18, edited by W. L. Pickles, C. S. Barrett, J. B. Newkirk and C. O. Rund, Plenum, 1974, p. 1. 37. D. J. Nagel and C. M. Dozier, Proc. 12<sup>th</sup> Int. Congr. High Speed Photography, Toronto, Canada, 1976, in Soc. Photo-Optical Instrum. Engrs., 1977, p. 2.
# Radio Detection of Old GRB Remnants in the Local Universe
## 1 Introduction
Since the detection of redshifted spectral features in gamma-ray burst (GRB) afterglows, it has become evident that GRB events do indeed occur at cosmological distances (Metzger et al. 1997; Djorgovski et al. 1998; Kulkarni et al. 1998a; Bloom et al. 1998; Bloom et al. 1999). The enormous energy release $`E\sim 10^{51}-10^{54}`$ ergs implied by the observed fluences and the cosmological distance scale could lead to a relativistically-expanding fireball, which produces the prompt ($`\lesssim 100`$ sec) $`\gamma `$-ray emission due to collisions between its internal shells (Paczyński & Xu 1994; Rees & Mészáros 1994; Pilla & Loeb 1997; Kobayashi, Piran, & Sari 1997). The fireball then enters the afterglow phase at later times ($`\gtrsim 1`$ day) when the expanding wind decelerates due to its interaction with the surrounding medium (e.g. Mészáros & Rees 1997; Waxman 1997a, b). During the early afterglow phase, the blastwave is still ultra-relativistic, and is hence well described by the self-similar Blandford-McKee solution (Blandford & McKee 1976). Within a month the expansion is no longer ultra-relativistic. After about a year the expansion becomes nonrelativistic and the gas dynamics is well approximated by the Sedov-Taylor self-similar solution (Taylor 1950; Sedov 1959), similarly to a supernova remnant (SNR). Given the BATSE-calibrated rate of occurrence of GRBs in the universe (Wijers et al. 1998) and the lifetime of their remnants, one infers that there should be hydrodynamic fossils of GRB remnants in any spiral galaxy at any given time (Loeb & Perna 1998). In fact, a subset of the so-called HI supershells observed in the Milky Way and other nearby galaxies might be old GRB remnants (Efremov et al. 1998; Loeb & Perna 1998). Rhode et al. (1999) have identified some HI holes in the nearby galaxy Holmberg II that lack the optical counterparts expected in alternative models involving multiple SNe or starbursts with a normal stellar mass function. There are two important physical differences which could in principle lead to ways of observationally distinguishing GRB remnants from SNRs. First, the radiation energy emitted by an isotropic GRB explosion is estimated to be up to 4 orders of magnitude higher than that released in a supernova (Kumar 1999). Hence if a nearby expanding remnant is discovered and the explosion energy (as derived from the expansion speed and the Sedov-Taylor solution) is too great to be attributable to a single supernova, then one must conclude that we are seeing the remnant of either a GRB or multiple supernovae. But a GRB releases its total energy promptly, while the explosion time of multiple supernovae in a star-forming region is not expected to be synchronized to better than $`\sim 10^6`$ years. Hence, GRB remnants should be much more energetic than SN remnants at early times. The existence of energetic X-ray remnants was recently inferred in deep ROSAT X-ray images of M101 (Wang 1999); but future observations with the XMM and Chandra X-ray satellites are necessary in order to identify the nature of these sources conclusively through high-resolution imaging and spectroscopy. Second, GRB afterglows include a strong UV flash, which on the timescale of a few hundreds of years creates an ionized bubble of radius $`\sim 100n_1^{-1/3}`$ pc, where $`n_1`$ is the ambient density in units of $`1\mathrm{cm}^{-3}`$ (Perna & Loeb 1998).
At ages $`\sim 10^4`$ years, the non-relativistic shock wave acquires a much smaller radius than the ionized bubble. This provides a potentially unique signature of young GRB remnants; namely, a relatively compact expanding blastwave embedded in a much larger ionized sphere. The optical-UV recombination lines from highly-ionized species in this region should provide a clear discriminant for recognizing GRB remnants (Perna, Raymond & Loeb 1999). The embedded blast wave emits both thermal and non-thermal radiation; the latter being an extension of the afterglow due to synchrotron emission by shock-accelerated electrons. In this Letter we calculate the synchrotron radio emission from the expanding GRB shock in old GRB remnants. Our goal is to examine whether a survey in the radio could efficiently identify old GRB remnants. Once a candidate has been identified in the radio, follow-up optical observations could search for the special recombination lines which are generic of GRB remnants. In §2 we review the blastwave hydrodynamics as extrapolated into the Sedov-Taylor regime, and calculate the synchrotron flux as a function of age, as well as the expected source counts in the Virgo cluster. Finally, we discuss our conclusions in §3.
## 2 Hydrodynamics and Emission
While the hydrodynamic evolution of GRB remnants through the early afterglow phase is well-described by the ultra-relativistic Blandford-McKee (1976) solution, at times much greater than a year we may to a good approximation use the non-relativistic Sedov-Taylor solution. It has been shown (Huang, Dai, & Lu 1998) that the two regimes may be smoothly matched to one another. We assume that the kinetic energy remaining in the fireball is comparable to the total energy $`E_0`$ which is radiated away in $`\gamma `$-rays, and that the energy release is isotropic; the effects of beaming in the initial energy release will be discussed in §3. The mean atomic weight of the surrounding (interstellar) medium is taken to be $`1.4m_p`$, where $`m_p`$ is the proton mass. The appropriate scaling laws for the shock radius $`r_s`$ and velocity $`v_s`$ in the non-relativistic regime are then $$r_s=1.17\left(\frac{E_0t^2}{1.4m_pn}\right)^{0.2}=1.56\times 10^{18}n_1^{-0.2}E_{52}^{0.2}t_{\mathrm{yr}}^{0.4}\mathrm{cm},$$ (1) $$v_s=0.47\left(\frac{E_0}{1.4m_pnt^3}\right)^{0.2}=1.99\times 10^{10}n_1^{-0.2}E_{52}^{0.2}t_{\mathrm{yr}}^{-0.6}\mathrm{cm}\mathrm{s}^{-1},$$ (2) where $`n=n_1\mathrm{cm}^{-3}`$ is the pre-shock particle number density, $`E_0=10^{52}E_{52}\mathrm{erg}`$ is the total burst energy, and $`t_{\mathrm{yr}}`$ is the age of the remnant in years. The state of the gas behind the shock is defined by the Rankine-Hugoniot jump conditions. For a strong shock, the post-shock (primed) particle density, bulk velocity, and energy density are, respectively, $$n^{\prime }=4n,$$ (3) $$v^{\prime }=\frac{3}{4}v_s,$$ (4) $$u^{\prime }=\frac{9}{8}nm_pv_s^2,$$ (5) where all quantities are measured in the frame of the unshocked material. As usual, we assume that the electrons are injected with a power-law distribution of kinetic energies, $$f(ϵ)=(p-1)ϵ_m^{p-1}ϵ^{-p},ϵ>ϵ_m,$$ (6) where $`f(ϵ)dϵ`$ is the fraction of shock-accelerated electrons with kinetic energies in the range $`(ϵ,ϵ+dϵ)`$. We assume a single power-law index at all energies, for simplicity. The spectral break that might exist at the transition to nonrelativistic electron energies occurs well below the electron energy which is responsible for the relevant radio emission at $`\sim 1\mathrm{GHz}`$.
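A direct numerical transcription of Eqs. (1)-(2) (a sketch in CGS units, using only the quantities defined above) reproduces the fiducial normalisations:

```python
# Sketch of the Sedov-Taylor scalings, Eqs. (1)-(2), in CGS units.

M_P = 1.67e-24          # proton mass, g
YR = 3.156e7            # seconds per year
PC = 3.086e18           # cm per parsec

def sedov(t_yr, E52=1.0, n1=1.0):
    E, t = 1e52 * E52, t_yr * YR
    r_s = 1.17 * (E * t**2 / (1.4 * M_P * n1)) ** 0.2   # Eq. (1)
    v_s = 0.47 * (E / (1.4 * M_P * n1 * t**3)) ** 0.2   # Eq. (2)
    return r_s, v_s

for t in (1.0, 1e2, 1e4):
    r, v = sedov(t)
    print(f"t = {t:7.0f} yr: r_s = {r:.2e} cm ({r / PC:5.2f} pc), "
          f"v_s = {v:.2e} cm/s")
```

At $`t=1`$ yr this returns $`r_s\approx 1.6\times 10^{18}`$ cm and $`v_s\approx 2.0\times 10^{10}`$ cm/s, matching the normalisations of Eqs. (1)-(2).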
The post-shock magnetic and electron energy densities are assumed to be given respectively as fixed fractions $`\xi _b`$ and $`\xi _e`$ of the total post-shock energy density: $$u_b^{\prime }=\frac{B^{\prime 2}}{8\pi }=\xi _bu^{\prime },$$ (7) $$u_e^{\prime }=n^{\prime }ϵ_m\frac{p-1}{p-2}=\xi _eu^{\prime },$$ (8) where $`m_e`$ is the electron mass. The magnetic field strength and minimum electron energy are thus given by $$B^{\prime }=0.14\xi _b^{0.5}n_1^{0.3}E_{52}^{0.2}t_{\mathrm{yr}}^{-0.6}\mathrm{G},$$ (9) $$ϵ_m=117\frac{p-2}{p-1}\xi _en_1^{-0.4}E_{52}^{0.4}t_{\mathrm{yr}}^{-1.2}\mathrm{MeV}.$$ (10) Before calculating the flux, we need to determine whether the Sedov-Taylor remnant is optically thin to synchrotron self-absorption. The shocked region has a thickness $`\eta r_s`$, with $`\eta \sim 0.1`$ from particle number conservation. We will adopt the value $`\eta =1/15`$, which is consistent with the density profile of the Sedov (1959) solution, and provides a number of shocked electrons which is a fair fraction of the total number swept up by the blast wave. The self-absorption optical depth at a photon frequency $`\nu `$ is given by (cf. Rybicki & Lightman 1979, p. 190) $$\tau (\nu )=\frac{\sqrt{3}e^3nB_{\perp }\eta r_s}{2\pi m_eϵ_m\nu ^2}(p-1)\mathrm{\Gamma }\left(\frac{p}{4}+\frac{11}{6}\right)\mathrm{\Gamma }\left(\frac{p}{4}+\frac{1}{6}\right)\left(\frac{ϵ_m}{m_ec^2}\right)^p\left(\frac{3eB_{\perp }}{2\pi m_ec\nu }\right)^{0.5p}.$$ (11) Here, $`m_e`$ is the electron mass, $`\mathrm{\Gamma }(x)`$ is the gamma function, and $`B_{\perp }\sim B^{\prime }`$ is the component of the magnetic field perpendicular to the electron velocity. Substituting the numerical factors into (11) gives $$\tau (\nu )=\frac{39.1\eta }{\nu _9^2}\left(\frac{61.6}{\nu _9}\right)^{0.5p}\alpha (p)\xi _b^{0.5+0.25p}\xi _e^{p-1}n_1^{1.5-0.25p}E_{52}^{0.5p}t_{\mathrm{yr}}^{1-1.5p},$$ (12) where $`\nu =10^9\nu _9\mathrm{Hz}`$, and $$\alpha (p)\equiv (p-1)\left(\frac{p-2}{p-1}\right)^{p-1}\mathrm{\Gamma }\left(\frac{p}{4}+\frac{11}{6}\right)\mathrm{\Gamma }\left(\frac{p}{4}+\frac{1}{6}\right)$$ (13) is of order unity. Thus, for typical power-law index values $`2.1<p<3.2`$ (Li & Chevalier 1999), we find that $`\tau (\nu )\lesssim 1`$ for $`\nu =1\mathrm{GHz}`$ at an age of $`\sim 1\mathrm{yr}`$, and that the remnant becomes increasingly optically thin thereafter. The flux from old remnants is therefore given by $$F_\nu =\left(\frac{4\pi r_s^2\eta r_s}{4\pi D^2}\right)\times P_\nu ,$$ (14) where $`D`$ is the distance to the remnant, and $`P_\nu `$ is the volume emissivity (in $`\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-3}\mathrm{Hz}^{-1}`$), given by (Rybicki & Lightman 1979, p. 180) $$P_\nu =\frac{4\sqrt{3}e^3nB_{\perp }}{m_ec^2}\frac{p-1}{p+1}\mathrm{\Gamma }\left(\frac{p}{4}+\frac{19}{12}\right)\mathrm{\Gamma }\left(\frac{p}{4}-\frac{1}{12}\right)\left(\frac{ϵ_m}{m_ec^2}\right)^{p-1}\left(\frac{3eB_{\perp }}{2\pi m_ec\nu }\right)^{0.5(p-1)}.$$ (15) Note that we are justified in using the synchrotron formula since the electrons which emit at GHz frequencies are ultra-relativistic: $$\frac{ϵ}{m_ec^2}=\left(\frac{4\pi m_ec\nu }{3eB_{\perp }}\right)^{0.5}=41.3\nu _9^{0.5}\xi _b^{-0.25}n_1^{-0.15}E_{52}^{-0.1}t_{\mathrm{yr}}^{0.3}.$$ (16) Coulomb collisions thermalize the electrons only at non-relativistic energies, well below the energies of interest for this discussion. Hence, we are justified in using the power-law spectral shape in equation (15). The flux should eventually show a cooling break at high frequencies, due to the fact that electrons with more than a threshold energy $`ϵ_c`$ will radiate away their energy faster than the dynamical time.
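Before following the cooling break quantitatively, the self-absorption estimate of Eqs. (12)-(13) can be evaluated directly (a sketch with the fiducial parameters $`\xi _b=\xi _e=0.1`$, $`n_1=E_{52}=1`$, $`\eta =1/15`$):

```python
import math

# Sketch of Eqs. (12)-(13): synchrotron self-absorption optical depth
# at nu = 1 GHz, for the fiducial parameters quoted in the text.

def alpha_p(p):
    return ((p - 1) * ((p - 2) / (p - 1)) ** (p - 1)
            * math.gamma(p / 4 + 11 / 6) * math.gamma(p / 4 + 1 / 6))

def tau(nu9, t_yr, p, xi_b=0.1, xi_e=0.1, n1=1.0, E52=1.0, eta=1 / 15):
    return (39.1 * eta / nu9**2 * (61.6 / nu9) ** (0.5 * p) * alpha_p(p)
            * xi_b ** (0.5 + 0.25 * p) * xi_e ** (p - 1)
            * n1 ** (1.5 - 0.25 * p) * E52 ** (0.5 * p)
            * t_yr ** (1 - 1.5 * p))

for p in (2.1, 2.6, 3.2):
    print(f"p = {p}: tau(1 GHz) ~ {tau(1.0, 1.0, p):.2f} at 1 yr, "
          f"~{tau(1.0, 600.0, p):.1e} at 600 yr")
```

For all three slopes the optical depth is already $`\lesssim 1`$ at one year and falls steeply afterwards, which is what justifies the optically thin flux formula of Eq. (14).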
The synchrotron cooling time is given by $`t_c=6\pi (m_ec^2)^2/\sigma _Tcϵ B_{\perp }^2`$, where $`\sigma _T`$ is the Thomson cross section. The cooling frequency is thus given by $$\nu _c=\left(\frac{3eB_{\perp }}{4\pi m_ec}\right)\left(\frac{6\pi m_ec}{\sigma _TB_{\perp }^2t}\right)^2=9.4\times 10^{11}\xi _b^{-1.5}n_1^{-0.9}E_{52}^{-0.6}t_{\mathrm{yr}}^{-0.2}\mathrm{Hz}.$$ (17) For remnant ages $`\lesssim 10^7\mathrm{yr}`$, the cooling cutoff occurs well above 1 GHz. Thus, the emission at GHz frequencies should fall on the simple $`P_\nu \propto \nu ^{(1-p)/2}`$ part of the spectrum. We may now calculate the numerical value of the expected flux $$F_\nu =\frac{5.2\times 10^6\eta }{D_{\mathrm{Mpc}}^2}\left(\frac{61.6}{\nu _9}\right)^{0.5(p-1)}\beta (p)\xi _b^{0.25(1+p)}\xi _e^{p-1}n_1^{0.95-0.25p}E_{52}^{0.3+0.5p}t_{\mathrm{yr}}^{2.1-1.5p}\mathrm{Jy},$$ (18) where $`D_{\mathrm{Mpc}}`$ is the distance to the source in Mpc, and $$\beta (p)\equiv \left(\frac{p-1}{p+1}\right)\left(\frac{p-2}{p-1}\right)^{p-1}\mathrm{\Gamma }\left(\frac{p}{4}+\frac{19}{12}\right)\mathrm{\Gamma }\left(\frac{p}{4}-\frac{1}{12}\right)$$ (19) is of order unity. In figure 1 we plot the flux at 1.6 GHz from equation (18) as a function of remnant age $`t_{\mathrm{yr}}`$ for $`p=2.1`$ ($`F_\nu \propto t^{-1.05}`$) and $`p=3.2`$ ($`F_\nu \propto t^{-2.7}`$), the two values which bracket the range of power-law indices seen in radio supernovae (Li & Chevalier 1999). We have assumed that the sources reside in the Virgo cluster, at a distance of $`D_{\mathrm{Mpc}}=16`$. We also assumed sub-equipartition energy density of the magnetic fields and nonthermal electrons in the post-shock gas with $`\xi _b=0.1`$, $`\xi _e=0.1`$; a typical interstellar medium density, $`n_1=1`$; and $`\eta =1/15`$. In panel (a), we take $`E_{52}=0.7`$, derived from the BATSE catalog for a source population with a redshift-independent burst rate per comoving volume, while in panel (b) we use $`E_{52}=14`$, which applies for a population which traces the star formation rate in galaxies. For comparison, the horizontal line shows the $`5\sigma `$ sensitivity threshold, $`F_{\mathrm{VLA}}=70\mu \mathrm{Jy}`$, for the Very Large Array (VLA) in the 1.6 GHz band, using a one-hour integration in the VLA’s most extended configuration (L. Greenhill 1999, private communication). Clearly the maximum age for a detectable remnant, and hence the number of detectable remnants, depend strongly on $`p`$. The vertical line, at $`t=600`$ yr and $`10^5`$ yr in panels (a) and (b) respectively, indicates the age at which we expect to detect at least one remnant in the Virgo cluster. The intermediate plots of $`F_\nu (t)`$ in panels (a) and (b) are for $`p=2.6`$ and 2.3, the respective maximum electron slopes for which a 600 or $`10^5`$-year-old Virgo remnant would be detectable by the VLA. To calculate the expected source counts, we note that the local GRB rate per $`L_{\ast }`$ (where $`L_{\ast }`$ is the characteristic stellar luminosity per galaxy in the local universe), assuming no beaming, is estimated to be $`\mathrm{\Gamma }\sim 2.5\times 10^{-8}L_{\ast }^{-1}\mathrm{yr}^{-1}`$ (Wijers et al. 1998). This value is derived assuming that the GRB rate traces the star formation history of galaxies; for a non-evolving burst rate, the inferred value is 150 times higher. The number of remnants per $`L_{\ast }`$ younger than $`t_{\mathrm{yr}}`$ [and hence brighter than $`F_\nu (t_{\mathrm{yr}})`$] at a given time is then $`\mathrm{\Gamma }t_{\mathrm{yr}}`$. The best place to search for optical emission from old GRB remnants is in the Virgo cluster (Perna, Raymond, & Loeb 1999).
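The flux estimate of Eqs. (18)-(19) is easily scripted; with the fiducial parameters of Fig. 1 the sketch below recovers, to rounding accuracy, the threshold crossings quoted for the two panels:

```python
import math

# Sketch of Eqs. (18)-(19): 1.6 GHz flux of a Virgo-distance (16 Mpc) remnant
# for the fiducial parameters of Fig. 1 (eta = 1/15, xi_b = xi_e = 0.1, n_1 = 1).

def beta_p(p):
    return ((p - 1) / (p + 1) * ((p - 2) / (p - 1)) ** (p - 1)
            * math.gamma(p / 4 + 19 / 12) * math.gamma(p / 4 - 1 / 12))

def flux_jy(t_yr, p, E52, nu9=1.6, D_mpc=16.0,
            xi_b=0.1, xi_e=0.1, n1=1.0, eta=1 / 15):
    return (5.2e6 * eta / D_mpc**2 * (61.6 / nu9) ** (0.5 * (p - 1))
            * beta_p(p) * xi_b ** (0.25 * (1 + p)) * xi_e ** (p - 1)
            * n1 ** (0.95 - 0.25 * p) * E52 ** (0.3 + 0.5 * p)
            * t_yr ** (2.1 - 1.5 * p))

F_VLA = 70e-6   # Jy, the 5-sigma VLA threshold quoted above
print(f"panel (a): F(600 yr; p=2.6, E52=0.7) ~ {flux_jy(600, 2.6, 0.7) * 1e6:.0f} uJy")
print(f"panel (b): F(1e5 yr; p=2.3, E52=14) ~ {flux_jy(1e5, 2.3, 14.0) * 1e6:.0f} uJy")
```

Both values come out close to the 70 $`\mu `$Jy threshold, consistent with $`p=2.6`$ and $`p=2.3`$ being the limiting slopes in panels (a) and (b).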
There are $`\sim 2500`$ galaxies brighter than $`B=19`$ in this cluster; at a distance of 16 Mpc, this limit corresponds to an absolute magnitude $`M_B=-12`$. For typical Schechter (1976) function parameters, $`\alpha =-1`$ and $`M_{\ast }=-19`$ (Loveday et al. 1992; Marzke et al. 1994), this yields a total luminosity for the Virgo cluster of $`L_{\mathrm{Vir}}=430L_{\ast }`$. Thus, for a non-evolving GRB population (with $`\mathrm{\Gamma }`$ now the 150 times higher non-evolving rate), we need to look for remnants as old as $`t=(\mathrm{\Gamma }L_{\mathrm{Vir}})^{-1}=600`$ yr to be reasonably confident of seeing at least one remnant in the Virgo cluster. For a GRB population which traces the cosmic star formation history, we need $`t=10^5`$ yr. As illustrated in Figure 1, for the burst parameters we have assumed, remnants this old could be detected by the VLA only for an electron power-law index $`p<2.6`$ in the non-evolving case and $`p<2.3`$ in the evolving case. For deeper volume-limited searches, e.g. of volumes probed by the Sloan Digital Sky Survey (SDSS; Gunn & Weinberg 1995), the increase in the number of galaxies surveyed, $`N\propto D^3`$, dominates over the decrease in detectable ages, $`t\propto D^{2/(2.1-1.5p)}`$, obtained by solving equation (18) for $`t_{\mathrm{yr}}`$. Thus, although at a distance of 250 Mpc we need to look for remnants which are younger than $`\sim 10^2`$ years, the galaxy luminosity within this volume is $`\sim 10^5L_{\ast }`$, leading to the prediction of at least one detectable young remnant for $`p=3.2`$ and many more for lower $`p`$ values.
## 3 Discussion and Conclusions
We have calculated the expected synchrotron flux from $`\sim 10^5`$-year-old GRB remnants in the Virgo cluster. Although the most revealing signature of a GRB remnant may be the presence of optical-UV recombination lines from high-ionization states of metals (Perna, Raymond, & Loeb 1999), it might be easier to search for nearby GRB remnants in the radio due to the lower background noise (from the sky plus the host galaxy) at radio frequencies. For a non-evolving GRB population, we find that if the spectral index of the shock-accelerated electrons $`p<2.6`$, then one could find at least one $`\sim 600`$-year-old remnant in the Virgo cluster at the VLA detection threshold. Such a remnant should be characterized by a synchrotron spectrum $`F_\nu \propto \nu ^{-0.8}`$ or flatter. (Although it appears that the energy flux $`\nu F_\nu \propto \nu ^{0.2}`$ then diverges at high frequencies, recall (cf. Eq. 17) that there is a break in the spectrum at the cooling frequency $`\nu _c`$, above which $`\nu F_\nu \propto \nu ^{-0.3}`$.) Perhaps the most poorly constrained parameter of the GRB sources is the beaming fraction $`f_b`$, the fraction of $`4\pi `$ steradians into which the initial $`\gamma `$-ray emission is emitted; the actual event rate may then be enhanced to $`f_b^{-1}\mathrm{\Gamma }`$. However, the total energy released in an event scales as $`f_bE_0`$, and so the synchrotron flux from the remnant is proportional to $`f_b^{0.3+0.5p}`$. The decrease in the required energy due to beaming may be counteracted by the fact that the efficiency for producing $`\gamma `$-rays in the initial event is very low (Kumar 1999). We note that even if the initial (impulsive) energy release is beamed, the deposited energy, $`f_bE_0`$, will be isotropized at the onset of the non-relativistic expansion phase. In addition, for a uniform distribution of sources in Euclidean space, the number of detectable remnants younger than a given age should decline with decreasing $`f_b`$ as $`f_b^{-1}(f_b^{0.3+0.5p})^{3/2}=f_b^{0.75p-0.65}`$.
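The counting argument can be packaged with the flux sketch above (this snippet reuses flux_jy and F_VLA from it): invert Eq. (18) for the maximum detectable age and multiply by the rate per unit luminosity and the Virgo luminosity.

```python
# Sketch of the source counts (reusing flux_jy and F_VLA from the previous
# sketch): N ~ Gamma * L_Vir * t_max, with t_max from F(t_max) = F_VLA.

GAMMA_SFR = 2.5e-8            # yr^-1 per L_star, SFR-tracing rate
GAMMA_FLAT = 150 * GAMMA_SFR  # non-evolving rate
L_VIR = 430.0                 # Virgo luminosity in units of L_star

def t_max(p, E52):
    return (flux_jy(1.0, p, E52) / F_VLA) ** (1.0 / (1.5 * p - 2.1))

for label, rate, E52, p in (("non-evolving", GAMMA_FLAT, 0.7, 2.6),
                            ("SFR-tracing ", GAMMA_SFR, 14.0, 2.3)):
    t = t_max(p, E52)
    print(f"{label}: t_max ~ {t:8.0f} yr, N(Virgo) ~ {rate * L_VIR * t:.1f}")
```

Both scenarios return of order one detectable remnant, matching the 600 yr and $`10^5`$ yr ages quoted above.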
Due to the as-yet uncertain nature of the progenitors, it is not clear whether we should expect the GRB rate to directly trace the star formation history. Hence we considered both the SFR-tracing and non-evolving cases in Figure 1. We summarize these results in Figure 2, which shows the upper bound on $`p`$ as a function of explosion energy for the two cases. The factor of 150 enhancement in the rate for a non-evolving population allows for a greater probability of seeing younger bursts with steeper spectral slopes. It is important to note that if a young ($`\lesssim 10^3`$ yr) remnant is detected in the Virgo cluster, our results will strongly suggest that the local GRB rate is consistent with the non-evolving scenario, but not with the star-formation-tracing scenario. This is the only immediate way to constrain the local burst rate (short of monitoring the $`\sim 10^6`$ SDSS galaxies over one year and searching for a GRB explosion). The young radio remnants detected in this case should be well-embedded in the bubble that was ionized by the initial UV flash, and should be detectable in optical recombination lines. If GRBs follow the global star formation history, however, we only expect to see a $`\sim 10^5`$-year-old remnant in Virgo, which is large and not well-embedded, and will also not be easily detectable at optical wavelengths (Perna, Raymond, & Loeb 1999). Is it justified to assume that the only significant flux at 1.6 GHz comes from the freshly shocked electrons within a distance $`\eta r_s`$ behind the shock? Might the electrons which were shocked at $`t\lesssim 1\mathrm{yr}`$ contribute a substantial amount of flux when the remnant is $`\sim 10^4`$ years old? The Sedov-Taylor similarity solution implies that the volume occupied by the material behind the shock front is increasing with time as $`r_s^3`$. Hence, the relativistic energy densities of the electrons and magnetic fields scale adiabatically as $`u_b^{\prime }\propto u_e^{\prime }\propto r_s^{-4}`$. The flux at a fixed frequency $`\nu `$ due to the old electrons is then $`F_\nu ^{\mathrm{old}}\propto t^{-0.8p_{\mathrm{old}}}`$, where $`p_{\mathrm{old}}`$ is the power-law index measured for afterglows at $`\sim 1\mathrm{yr}`$; typically $`p_{\mathrm{old}}\approx 3`$. Using this value, a comparison with equation (18) yields a ratio $`F_\nu ^{\mathrm{old}}/F_\nu ^{\mathrm{new}}=t^{1.5p_{\mathrm{new}}-4.5}`$, where $`F_\nu ^{\mathrm{new}}`$ is the flux from newly-shocked electrons. Thus, if $`p_{\mathrm{new}}>3`$, the flux from the old electrons will dominate at times $`t\gg 1\mathrm{yr}`$. For values in the range observed in radio SNRs (Li & Chevalier 1999), we are justified in neglecting the contribution from the old electrons. Our extrapolation to the non-relativistic regime may be tested by continuing to monitor the event GRB 970508, which is at $`z=0.835`$ and had a 1.4 GHz flux of $`249\pm 60\mu \mathrm{Jy}`$ at an age of 354 days (Frail et al. 1999, in preparation). Our discussion assumed that GRB sources release their energy impulsively in the form of the ultra-relativistic wind that produces the early $`\gamma `$-ray and afterglow emission. It is possible that more energy is released in the form of non-relativistic ejecta that catches up with the decelerating shock and re-energizes it at late times. In this case, if $`f_b=1`$ then the total hydrodynamic energy (and the associated synchrotron flux) may be much larger than we estimated based on the GRB energetics alone. However, the situation might be different if $`f_b\ll 1`$.
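The old-versus-new electron comparison above reduces to a one-line scaling; a quick sketch makes the $`p_{\mathrm{new}}=3`$ watershed explicit:

```python
# Sketch of the flux ratio F_old/F_new ~ t^{1.5*p_new - 4.5} (for p_old = 3).

def old_over_new(t_yr, p_new):
    return t_yr ** (1.5 * p_new - 4.5)

for p_new in (2.5, 3.0, 3.2):
    print(f"p_new = {p_new}: F_old/F_new at 1e4 yr ~ {old_over_new(1e4, p_new):.1e}")
```

For $`p_{\mathrm{new}}<3`$ the old electrons are negligible at $`10^4`$ yr, while for $`p_{\mathrm{new}}>3`$ they would dominate, as stated above.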
Recently, there have been several claims for a potential detection of supernova emission in the light curves of rapidly declining afterglows (Kulkarni et al. 1998b; Bloom et al. 1999; Reichart 1999). We emphasize that any association between supernova and GRB events could be explored more directly in the local Universe by examining the ionization and hydrodynamic structure of SN-like remnants in the interstellar medium of nearby galaxies. For example, one could search for extended ionization cones in young supernova remnants, as expected if the intense UV emission from the associated GRB afterglows is collimated. Complementary information about the shock structure and temperature can be obtained by observations in the radio band (probing the synchrotron emission) or the X-ray regime (probing thermal emission from the hot post-shock gas). In particular, follow-up observations of the energetic X-ray remnants discovered by Wang (1999) in M101, or the optically-faint HI holes discovered by Rhode et al. (1999) in Holmberg II, would be revealing as to the nature of these peculiar objects. The expanding spherical shock front will typically acquire a diameter of $`1^{\prime \prime }`$ after $`10^4`$ years at the distance of the Virgo cluster. With VLBI, one can achieve up to sub-milliarcsecond resolution and hence easily resolve the radio-emitting shock. Since the shocked gas occupies a thin shell, the source should be strongly limb-brightened and possibly highly polarized at the limb (Medvedev & Loeb 1999). In the simplest case of an isotropic point explosion, the radio-emitting region will appear embedded inside a much larger region which was ionized by the prompt UV emission from the GRB and which emits optical-UV recombination lines (Perna, Raymond, & Loeb 1999). This distinct structure of a shock embedded in an HII region with high ionization states of heavy elements is unique to GRB remnants, since only the optically-thin wind of a relativistic GRB fireball can give rise to the intense hard radiation which produces a highly-ionized bubble out to large distances, $`\sim 100`$ pc, in front of the shock. This is to be contrasted with ordinary supernovae, in which the UV emission from the optically-thick envelope is suppressed above the thermal cutoff. We thank Lincoln Greenhill for useful discussions. This work was supported in part by NASA grants NAG 5-7039 and NAG 5-7768.
# Experimental Conditions for the Gamma Optical Scattering (September 1979)<sup>1</sup>
<sup>1</sup>Typed in 1999 after the original September 1979 manuscript.
The interaction of gamma-ray photons with a nucleus, mediated by an electromagnetic field, was investigated in our paper, The Tuning of $`\gamma `$-Ray Processes with High Power Optical Radiation. (Note added in May 1999: the referenced work is S. Olariu et al., Phys. Rev. C 23, 50 (1981); it was submitted for publication on 30 November 1979.) In that paper little attention was given to the absolute location of the energy levels of the nuclei: it was rather a discussion of the resonance of the gamma ray energy and of the electromagnetic energy to the nuclear transition energy, where the energy of the gamma ray was considered as a parameter. In fact, there is a close relationship between the energy of the gamma ray from the nucleus in the source and the transition energy of the nucleus in the absorber. The positions of the energy levels of a free nucleus are affected by the interaction with the electric and magnetic fields in which the nucleus is immersed. (We follow in this paragraph G. K. Shenoy and F. E. Wagner, Editors, Mössbauer Isomer Shifts, North-Holland, 1978, the Introduction.) These fields can be created by external charge distributions or by atomic electrons. For example, the magnetic dipole interaction arises if an inner unpaired electron polarises the electron configuration, so that there is a difference in the energies corresponding to the various orientations of the nuclear magnetic moment, which is of the order of $`10^{-5}`$ eV. The electric quadrupole interaction appears for ellipsoidal nuclei, which have a dependence of their energy on the orientation in the spatially varying electric fields created by a non-spherical charge distribution. This interaction is again of the order of $`10^{-5}`$ eV. The isomeric monopole interaction arises from the dependence of the interaction energy between the external electronic charge and the nucleus on the nuclear radius, which is different for the ground and the excited state. The isomeric energy of interaction is of about $`10^{-7}`$ eV. The hyperfine interactions mentioned above produce either splittings, or shifts, of the levels. The binding of the nuclei in a lattice eliminates the Doppler and the recoil effects, but, as the chemical composition of the source and of the absorber is not generally the same, the internal fields will be different, and the emission and absorption lines will not overlap. We proposed in our previous papers to compensate this detuning by an applied electromagnetic field of appropriate energy. Since 1 eV corresponds to $`2.41\times 10^{14}`$ Hz, the magnetic dipole and electric quadrupole splittings are compensated by fields of frequencies in the range of several GHz, while the isomeric monopole shift is compensated by frequencies in the range of 10 MHz. As a matter of fact, the magnetic hyperfine interaction can be seen directly by Nuclear Magnetic Resonance techniques, the electric quadrupole interaction can be measured by Nuclear Quadrupole Resonance techniques, and the monopole interaction, usually called the isomer shift, is specific to the Mössbauer technique. Of course, each of these has its own limits of applicability, and a new technique may be efficient where the existing ones are not.
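The frequency scales quoted above follow from the 1 eV = $`2.41\times 10^{14}`$ Hz conversion; a short check:

```python
# Sketch of the energy-to-frequency conversions used above.

EV_TO_HZ = 2.41e14

for name, e_ev in (("dipole/quadrupole splitting", 1e-5),
                   ("isomeric monopole shift", 1e-7)):
    print(f"{name}: {e_ev:.0e} eV -> {e_ev * EV_TO_HZ:.1e} Hz")
```

This returns a few GHz for the $`10^{-5}`$ eV splittings and a few tens of MHz for the $`10^{-7}`$ eV isomeric shift, as stated.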
But the main interest in the Gamma-Optical technique is to obtain experimental evidence for the nuclear Raman scattering, which represents a step toward tunable sources of $`\gamma `$ radiation, and a possible way to the Gamma Ray Laser. The result of our paper, The Tuning of the $`\gamma `$-Ray Processes …, was that the ratio $`B`$ of the cross section of the enhanced scattering, $`\sigma ^{(2)}`$, to the single-photon, Breit-Wigner, cross section, $`\sigma ^{(1)}`$, is, for transitions mediated by the magnetic sublevels, proportional to the density of the energy flux, $`\mathrm{\Phi }_2`$, of the electromagnetic field, divided by the square of the frequency $`\omega _2`$, $$B\equiv \sigma ^{(2)}/\sigma ^{(1)}=\frac{4\pi e^2\mathrm{\Phi }_2g^2}{m^2c^3\omega _2^2}.$$ (1) It was also shown that the ratio, $`Z`$, of the gamma-optical cross section, $`\sigma ^{(2)}`$, to the off-resonance, single-photon cross section, $`\sigma _{\pm \omega _2}^{(1)}`$, is proportional to the energy flux, $`\mathrm{\Phi }_2`$, divided by the square of the width, $`\mathrm{\Gamma }`$, of the level, $$Z\equiv \sigma ^{(2)}/\sigma _{\pm \omega _2}^{(1)}=\frac{4\pi e^2\mathrm{\Phi }_2g^2}{m^2c^3\mathrm{\Gamma }^2}.$$ (2) The dependence $`1/\omega _2^2`$ of the two-photon cross section is valid with unsplit, Lorentzian profiles of the lines. The preparation of samples which provide such lines is certainly possible, by using lattices of appropriate chemical composition and internal symmetry. On the other hand, the internal fields, or applied external fields, produce splittings of the energy levels, and the $`1/\omega _2^2`$ dependence of the two-photon cross section is valid only for energies $`\hbar \omega _2`$ larger than the splitting. Below that limit, the cross section becomes independent of the frequency $`\omega _2`$ of the electromagnetic field. The $`\mathrm{\Phi }_2`$ dependence of the gamma-optical cross section is valid if the power broadening of the lines is smaller than, or comparable to, the effective linewidth. The Zeeman shift of the levels becomes comparable to the width of the lines at intensities of the magnetic field of about 100 Gauss (ibid., p. 567, Fig. 8d.1). At very large powers, we expect to see a saturation of the cross section, together with the modifications predicted in Ref. 15 of our third Gamma-Optical paper. (Note added in May 1999: this reference is M. N. Hack and M. Hamermesh, Nuovo Cimento XIX, 546 (1961).) Due to imperfect preparation of the samples, the cross section of the two-photon process could be lower than that predicted in Eq. 1 by the ratio of the natural linewidth, $`\mathrm{\Gamma }`$, to the effective linewidth, $`\mathrm{\Gamma }_e`$. On the other hand, the ratio of the cross sections $`\sigma ^{(2)}`$ and $`\sigma ^{(1)}`$ is determined by the natural linewidth, as written in Eq. 2, because the single-photon, Breit-Wigner cross section is also reduced by the same ratio $`\mathrm{\Gamma }/\mathrm{\Gamma }_e`$. A low frequency $`\omega _2`$ results in a lower power requirement $`\mathrm{\Phi }_2`$ for a given branching ratio $`B`$, as may be seen from Eq. 1, and a narrow linewidth $`\mathrm{\Gamma }`$ results in a lower power requirement for a given signal to noise ratio $`Z`$, as may be seen from Eq. 2. We have seen that the types of hyperfine interactions create two ranges of frequencies $`\omega _2`$.
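In the equivalent magnetic-field form used below (Eq. 3 and its analogue for $`Z`$), Eqs. (1)-(2) can be evaluated numerically. The sketch assumes a nuclear g-factor of order 2 and uses the <sup>181</sup>Ta numbers introduced in the next paragraphs; it recovers the $`B\sim 10^{-3}`$ and $`Z\sim 10`$ orders of magnitude quoted there, within the g-factor uncertainty.

```python
import math

# Hedged sketch of Eqs. (1)-(2) in the magnetic form B = (g*mu_N*H2/(hbar*omega2))^2
# and Z = B*(hbar*omega2/Gamma)^2, in CGS units; g ~ 2 is an assumption.

HBAR = 1.054e-27     # erg s
MU_N = 5.05e-24      # nuclear magneton, erg/G

def branching(g, H2, nu2):
    return (g * MU_N * H2 / (HBAR * 2 * math.pi * nu2)) ** 2

def signal_to_noise(g, H2, nu2, tau):
    gamma = HBAR / tau                     # natural width, erg
    return branching(g, H2, nu2) * (HBAR * 2 * math.pi * nu2 / gamma) ** 2

H2, nu2, tau = 100.0, 1.0e7, 6.8e-6        # 100 G, 10 MHz, 181Ta 6.2 keV lifetime
print(f"B ~ {branching(2.0, H2, nu2):.1e}")
print(f"Z ~ {signal_to_noise(2.0, H2, nu2, tau):.0f}")
```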
Beyond 1 GHz, which corresponds to a wavelength of 30 cm, in the region of the dipole and quadrupole interactions, the high values of the power density $`\mathrm{\Phi }_2`$ are most conveniently obtained by the microwave-cavity technique. This technique is not appropriate at the lower frequencies in the range of 10 MHz, because the size of the cavity, which is determined by the wavelength of the oscillating field, becomes impractically large. But, going back to the basis of the interaction between the gamma ray photon and the nucleus, mediated by an external applied field, we see that all that is necessary for the interaction mediated by the magnetic sublevels is the presence of a magnetic field $`H_2`$, oscillating with the frequency $`\omega _2`$. Currents oscillating in conductors create such magnetic fields, which are not electromagnetic waves. We have in fact an LC circuit, and, if we assume the volume of the capacitor and of the inductor to be comparable, then the amplitude of the magnetic field in the inductor is comparable (in CGS units) to the amplitude of the electric field in the capacitor. A magnetic field of 100 Gauss corresponds to an electric field of 100 statvolt/cm, or 30 kV/cm. The energy density periodically transferred between the inductor and capacitor is $`H^2/8\pi `$ erg/cm<sup>3</sup>. Assuming $`H`$=100 Gauss, a volume of $`10^2`$ cm<sup>3</sup>, and a $`Q`$-factor of the LC circuit of 100, the input power in the circuit is, at a frequency of 100 MHz, of about 1 kW. The equivalent energy flux $`\mathrm{\Phi }_2`$ corresponding to the field $`H_2`$ of 100 Gauss is of about $`10^6`$ W/cm<sup>2</sup>. For illustrative purposes we chose here the 6.2 keV, 6.8 $`\mu `$sec line of <sup>181</sup>Ta, which has a single-photon cross section of $`1.73\times 10^{-18}`$ cm<sup>2</sup>; <sup>181</sup>Ta has a good natural abundance (99.99 %), and the 6.2 keV level is populated by the decay of <sup>181</sup>W, which has a half-life of 140 days and is obtained by neutron irradiation of <sup>180</sup>W. The branching ratio $`B`$ corresponding to a field $`H_2`$ of 100 Gauss and a frequency $`\omega _2`$ of 10 MHz is $`B\sim 10^{-3}`$. The cross section of the two-photon process expected under these conditions is $`\sigma ^{(2)}\sim 10^{-22}`$ cm<sup>2</sup>, because of the broadening of the lines due to imperfect sample preparation and to the power broadening. The ratio $`Z=\sigma ^{(2)}/\sigma _{\pm \omega _2}^{(1)}`$ is, under these conditions, $`Z\sim 10`$. The expressions for the branching ratio $`B`$ and the signal to noise ratio $`Z`$ were derived by assuming Lorentzian profiles for the two modes of the electromagnetic field. If the samples are immersed in an oscillating magnetic field, it can be seen from the basic equations for the amplitude of a two-photon process that, if the transition is mediated by the magnetic sublevels, the branching ratio is $$B=\left(\frac{g\frac{e\hbar }{2mc}H_2}{\hbar \omega _2}\right)^2,$$ (3) which is similar to Eq. 1, since the equivalent energy flux is $`\mathrm{\Phi }_2=\frac{c}{8\pi }H_2^2.`$ The relatively high values of the branching ratio $`B`$ and the good values of the signal to noise ratio $`Z`$ obtained with a relatively low input power, which may be tuned by varying the capacitance in the LC circuit, suggest that the experimental approach to the gamma-optical absorption be focused on processes compensated by smaller frequencies, like the isomeric shift and the second order Doppler shift, and involving narrow lines.
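The LC-circuit estimates above can be checked in a few lines. Note that the dissipated-power convention ($`\omega W/Q`$ versus $`fW/Q`$) changes the answer by $`2\pi `$, so the kW figure quoted above should be read as an order of magnitude; the equivalent flux $`\mathrm{\Phi }_2`$ comes out as stated.

```python
import math

# Sketch of the LC-circuit numbers: stored energy density H^2/(8*pi),
# dissipated power ~ omega*W/Q (a convention; f*W/Q is 2*pi smaller),
# and equivalent flux Phi_2 = c*H^2/(8*pi), all in CGS.

C = 3e10                              # speed of light, cm/s
H, V, Q, f = 100.0, 1e2, 100.0, 1e8   # gauss, cm^3, quality factor, Hz

u = H**2 / (8 * math.pi)              # erg/cm^3
W = u * V                             # stored energy, erg
P = 2 * math.pi * f * W / Q           # erg/s
phi2 = C * u                          # erg s^-1 cm^-2

print(f"stored energy ~ {W * 1e-7:.1e} J, input power ~ {P * 1e-7:.1e} W")
print(f"Phi_2 ~ {phi2 * 1e-7:.1e} W/cm^2")
```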
We first have to obtain the resonance when the frequency of the oscillating magnetic field corresponds to the isomeric shift between the source and absorber. Then, if a static uniform magnetic field is superposed on the oscillating magnetic field, we expect to see a plateau of the two-photon cross section at lower values of $`H_2`$, and a decrease of the cross section as $`1/H^2`$ as the intensity of the static field is increased, corresponding to the fact that the Zeeman splitting becomes larger than the value of the frequency $`\omega _2`$ determined by the isomeric shift. Finally, without the static field, increasing the intensity of the oscillating field, we expect to obtain the saturation of the cross section and to see the changes in the shape of the line described in Ref. 15 of the third Gamma-Optical paper. There is, of course, complete symmetry between compensating the detuning in the source or in the absorber. Yet we suggest that the source be immersed in the oscillating field, in order to obtain the experimental confirmation of the nuclear Raman scattering, which is closer to a tunable gamma-ray source, and is somewhat related to a gamma-ray laser. Due to the relatively large values of the branching ratio $`B`$ (a value $`B=10^{-4}`$ is probably not difficult to obtain), there is apparently no problem with the counting rate, so that a weak radioactive source, say 1 mCi, can be used (1 Ci $`=3.7\times 10^{10}`$ disintegrations/sec). The absolute value of the two-photon cross section is in the range of $`10^{-23}`$ cm<sup>2</sup>, and is comparable to that of other scattering processes, like Compton scattering and the photoelectric effect. But we may take advantage of the fact that the fluorescence radiation has a definite energy, while the Compton-scattered radiation has a continuous spectrum, and discriminate energetically between the signal and the noise. The resolution of the gamma-ray detectors in the region of interest in Mössbauer spectroscopy is good; it seems to be $`\sim 100`$ eV. For the single-photon off-resonance scattering, the ratio $`Z`$ is higher than 1 at larger magnetic intensities.<sup>6</sup><sup>6</sup>6The intensity is limited by the condition that the power broadening be comparable to the natural width. At lower intensities where $`Z<1`$ other discrimination techniques are available, such as comparing the signals with the magnetic power off and on. Since the frequency $`\omega _2`$ is in the range of 10 MHz, the width of the line has to correspond to a lifetime in the range of 1 $`\mu `$sec. There are three narrow lines in Mössbauer spectroscopy, namely <sup>67</sup>Zn, 93.3 keV, 9.2 $`\mu `$sec, $`4.93\times 10^{-20}`$ cm<sup>2</sup>; <sup>73</sup>Ge, 13.3 keV, 3.9 $`\mu `$sec, $`0.76\times 10^{-20}`$ cm<sup>2</sup>; <sup>181</sup>Ta, 6.2 keV, 6.8 $`\mu `$sec, $`1.73\times 10^{-18}`$ cm<sup>2</sup>. The best candidate seems to be <sup>181</sup>Ta. In this case, an isomeric shift of 1 mm/sec is equivalent to $`2.07\times 10^{-8}`$ eV, or 5.0 MHz. Much useful information on the spectroscopy of the 6.2 keV gamma ray of <sup>181</sup>Ta and the related isomeric shifts can be found in the cited reference.<sup>7</sup><sup>7</sup>7ibid, pp. 563-591, p. 873, p. 879, p. 886 In conclusion, the gamma-optical experiment requires an LC circuit operating in the range of 10 MHz, with an inductor providing magnetic fields of the order of 10 Gauss, the circuit being tunable by the variation of the capacitance C; the narrow-line sources and absorbers; and conventional gamma-ray equipment.
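The quoted equivalence between isomeric shift and frequency follows from the first-order Doppler shift; a minimal check:

```python
# First-order Doppler shift behind the quoted "1 mm/sec = 2.07e-8 eV,
# or 5.0 MHz" for the 6.2 keV line of 181Ta; a minimal sketch.
E0_eV = 6.2e3        # transition energy [eV]
v     = 0.1          # 1 mm/s expressed in cm/s
c     = 2.998e10     # [cm/s]
h_eVs = 4.136e-15    # Planck constant [eV s]

dE = E0_eV * v / c   # Doppler shift [eV]
f  = dE / h_eVs      # equivalent frequency [Hz]
print(f"dE = {dE:.2e} eV, f = {f / 1e6:.1f} MHz")
```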
Since the frequency of 10 MHz is in the radio frequency range, difficulties could arise only from the magnitude of the field. Seemingly, a magnetic field of 1 Gauss, which is equivalent to 300 V/cm, is sufficient for the experiment.
no-problem/9907/astro-ph9907217.html
# Evidence for early stellar encounters in the orbital distribution of Edgeworth-Kuiper Belt objects ## 1 Introduction Stars commonly form in groups or clusters within turbulent molecular clouds on timescales of about a million years (Hillenbrand 1997). Typical young stellar aggregates have sizes of roughly 1 pc and consist of a few hundred stars. Recent observations have also shown that most stars form in eccentric binary systems and that the binary frequency of young stars is about two times higher than that of main sequence stars in the solar neighbourhood (Ghez et al. 1997, Köhler & Leinert 1998). This reflects the fact that secular dynamical processes within newly formed stellar groups tend to reduce their binary fraction over time. Recent numerical modeling (Kroupa 1995, 1998) demonstrates that encounters between binaries can lead to the dissolution of aggregates on timescales of several hundred million years and that stochastic close stellar encounters, which are in general very energetic, can lead to the dissociation of the widest binaries. Binary dissociation occurs at binary orbital periods greater than about 3000 yrs, corresponding to separations of order a few 100 AU. It is therefore reasonable to expect that most single main sequence stars actually formed as part of a wider binary system which was disrupted through interactions within a young stellar cluster. Even after a proto-star becomes detached from its companion, or if it is born as a single star, encounters with passing stars would occur before the dissolution of the stellar cluster. The timescale for encounters with pericenter distance $`q\stackrel{<}{}200`$ AU may be comparable to the dissolution timescale of the stellar cluster (Laughlin & Adams 1998). Thus, if the Sun formed in such a clustered environment, it most likely experienced a few close encounters with a transient binary companion or with passing stars at pericenter distances of order 100 AU, before the break-up of the stellar cluster. Laughlin & Adams (1998) have suggested that the large eccentricities of the extrasolar planets associated with 16 Cyg B and 14 Her could have been pumped up by interactions with passing binary systems in an open cluster. Here, we will consider interactions of a star (the proto-sun) having a protoplanetary system which encounters a passing single star. In general, interactions with a binary system are more disruptive to the protoplanetary system than those with a single star. Since we seek to model the Solar System, the interactions we consider are necessarily much less disruptive to the planetary system than those considered by Laughlin & Adams. (More distant encounters with passing binary systems may lead to similar results.) Such an encounter will generally affect the dynamical and material structure of the solar protoplanetary disk and, provided internal conditions allow, the planetesimal disk will remain imprinted with this signature over much of the main sequence lifetime of the star. In this Letter we study the dynamical effects of stellar encounters on protoplanetary disks and point out that the orbital distribution of Edgeworth-Kuiper Belt (EKB) objects may indicate that the Solar System has experienced close stellar encounters. We demonstrate that puzzling kinematical features in the orbital distribution of the EKB objects can be explained naturally if the Sun formed as a member of a stellar cluster and experienced a stellar encounter (or series of encounters) with $`q\sim 100`$–$`200`$ AU.
## 2 Dynamical structure of the Edgeworth-Kuiper Belt The EKB objects observed at multiple oppositions or over relatively long durations are shown in Fig. 1 (e.g. see Marsden’s web site, http://cfa-www.harvard.edu/graff/lists/TNOs.html). The increasing numbers of EKB objects being revealed by observations presently fall into three distinct groups. Firstly, many objects have semimajor axes close to the 3:2 resonance with Neptune’s orbit (located at 39.5 AU), and these display a wide range of eccentricities and inclinations (each up to $`\sim 0.35`$). Secondly, outside $`\sim 42`$ AU, the objects have slightly lower average eccentricity ($`\sim 0.1`$) and inclination ($`\sim 0.1`$ radian). At semimajor axes inside $`\sim 39`$ AU, and between $`\sim 40`$ AU and $`\sim 42`$ AU, there are unpopulated regions (hereinafter ”gaps”). The cut-off outside $`\sim 50`$ AU may imply depletion of objects, but it could also be due to the present observational sensitivity limit (Jewitt, Luu, and Trujillo 1998; Gladman et al. 1998). The third group comprises the ’scattered disk’ objects (Duncan and Levison 1997), which have experienced close approaches with Neptune. Pericenter for the scattered disk objects is located near Neptune’s orbit. An example is 1996 TL66 with $`e\sim 0.6`$ and $`a\sim 85`$ AU, which is outside the range of Fig. 1. Secular perturbations by the giant planets can account for the gap between $`\sim 40`$ AU and $`\sim 42`$ AU (Duncan, Levison, & Budd 1995). They cannot account for the other features (Duncan, Levison, & Budd 1995). The model of sweeping mean motion resonances due to Neptune’s outward migration successfully accounts for the concentrated distribution at the 3:2 resonance as well as for the gap inside $`\sim 39`$ AU (Malhotra 1995). This model also predicts that a large accumulation ought to occur at Neptune’s 2:1 resonance (located at $`\sim 47.8`$ AU), with a cleared gap interior to the present resonant location. If the number of objects captured by the 2:1 sweeping resonance is similar to that by the 3:2 resonance, it may be expected that more objects should now be detected near the 2:1 resonance (Jewitt, Luu, and Trujillo 1998). However, the current population near the 2:1 resonance is still poorly constrained owing to the observational sensitivity limit. The migration speed of Neptune also affects the relative population between the 3:2 and 2:1 resonances (Ida et al. 1999). In summary, the good agreement of the theoretical predictions by Malhotra (1995) with the observations for the objects near the 3:2 resonance supports the sweeping of mean motion resonances. The relatively high eccentricities and inclinations found outside $`\sim 42`$ AU cannot be accounted for by long-range secular perturbations of the planets. The velocity dispersion of the observed objects exceeds the surface escape velocity for most of them, which cannot be explained by internal gravitational scattering (Safronov 1969). The capture probability of the sweeping 3:2 resonance becomes small, and the gap inside 39 AU cannot be created, when the initial eccentricity exceeds $`\sim 0.05`$ (Malhotra 1995). The objects with $`e\stackrel{>}{}0.05`$ would not be swept and would remain inside 39 AU, although a clear gap is presently observed inside 39 AU.
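The resonance locations quoted in this section follow from Kepler's third law; a minimal check (assuming a_N = 30.1 AU as Neptune's present semimajor axis):

```python
# Locations of Neptune's mean motion resonances quoted above, from
# Kepler's third law: a_res = a_N * (p/q)^(2/3). a_N = 30.1 AU is an
# assumed present-day value for Neptune.
a_N = 30.1  # [AU]

for p, q in [(3, 2), (2, 1)]:
    a_res = a_N * (p / q) ** (2.0 / 3.0)
    print(f"{p}:{q} resonance at {a_res:.1f} AU")
# -> 39.4 AU and 47.8 AU, matching the 3:2 and 2:1 locations cited above
```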
Thus, the mechanism to pump up the velocity dispersion outside $`\sim 42`$ AU should satisfy the condition of having occurred in a highly localized manner, so as to keep $`e`$ and $`i`$ small enough inside 39 AU, although we note that objects with $`e\stackrel{>}{}0.1`$ inside 39 AU can be destabilized by planetary perturbations within the age of the Solar System (Duncan, Levison, & Budd 1995). Some models have been proposed to account for the high $`e`$ and $`i`$ outside $`\sim 42`$ AU. The Earth-sized bodies that are thought to have once existed in the formation stage and were subsequently ejected might have been able to pump up the velocity dispersion (Stern 1991; Morbidelli and Valsecchi 1997; Petit, Morbidelli, and Valsecchi 1999). Partial trapping by sweeping of the 2:1 resonance might also have pumped up the eccentricities outside $`\sim 42`$ AU (Hahn and Malhotra 1999). Here we propose another mechanism, stellar encounters, to dynamically heat the planetesimal disk outside $`\sim 42`$ AU. While the former two mechanisms are associated with processes occurring after the formation of Neptune, the stellar encounter model can operate before Neptune’s formation as well. Although all these mechanisms may be able to account for the dynamical heating of the velocity dispersion between the 3:2 and 2:1 resonances, the predicted velocity dispersion beyond the 2:1 resonance (which has not been observed up to now) is expected to be quite different in our model, as discussed below. ## 3 Modeling We have investigated the possibility that stellar encounters with the young solar nebula could have increased the eccentricity $`e`$ and inclination $`i`$ of EKB objects presently located outside $`\sim 42`$ AU. In our modeling we assume:
1. A single star passes by the proto-sun on a nearly parabolic orbit and perturbs the planetesimal system. The passing star may be weakly bound to the proto-sun, in which case we can consider a series of encounters.
2. The pericenter distance ($`q`$) of the encounter(s) is on the order of 100 AU.
3. Planetesimals with the present EKB object mass ($`10^{22}`$–$`10^{23}`$ g) are formed on low-$`e`$ and low-$`i`$ orbits prior to the first encounter that pumps up $`e`$ and $`i`$ significantly.
As discussed in the introduction, assumptions 1 and 2 are consistent with recent observations and numerical modeling. If we were only concerned with the effects of stellar encounters on the protoplanetary system, assumption 3 would not be necessary. However, to apply our results to the EKB, assumption 3 is needed in our model because the induced velocity dispersion is larger than the surface escape velocity, which we would expect to halt planetesimal agglomeration (see below). According to conventional models (e.g. Safronov 1969; Goldreich & Ward 1973; Hayashi, Nakazawa, & Nakagawa 1985), dust grains settle to the equatorial plane of the nebula and subsequent gravitational instability of the dust layer results in planetesimal formation. The dust grain sedimentation timescale may be only $`10^3`$–$`10^5`$ yrs (e.g., Hayashi, Nakazawa, & Nakagawa 1985) and the gravitational instability operates over a timescale that is comparable to the orbital period (reviewed by Papaloizou & Lin 1995). First-born planetesimals have masses of a few $`\times 10^{22}(a/40\mathrm{A}\mathrm{U})^{3/2}`$ g (e.g. Hayashi, Nakazawa, & Nakagawa 1985), which is already comparable to the masses of the present EKB objects.
However, nebula turbulence may prevent dust grains from settling onto the equatorial plane, so that the gravitational instability does not occur (e.g. Weidenschilling & Cuzzi 1993). If this is the case, planetesimal accretion up to the present size of the EKB objects would require $`10^8`$–$`10^9`$ yrs (e.g. Stern & Colwell 1997), so that assumption 3 may be too restrictive for our model of repeated encounters in an eccentric binary, and only the model of flyby stellar encounters (before dissolution of a stellar aggregate) would be allowed. A series of numerical simulations to test the effect of stellar companion encounters on protoplanetary disks has been performed. We consider collisionless particles (corresponding to planetesimals), orbiting initially on coplanar circles around a primary star (the proto-sun). This particle disk encounters a hypothetical companion star. The orbital changes of the test particles are integrated, taking into account the gravitational forces of the primary and the companion star, using a fourth order predictor-corrector scheme. Many different encounter geometries and companion masses have been examined. If the scale length is defined by the pericenter distance $`q`$ of the encounter, each encounter is characterized by the companion mass ($`M_c`$), the inclination angle of the companion orbit relative to the initial disk ($`\theta _c`$), and the orbital energy or eccentricity of the perturber ($`e_c=1`$) (Ostriker 1994). In the models, typically $`10^4`$ test particles were initially distributed in the region $`a/q=0.05`$–$`0.8`$, where $`a`$ denotes semimajor axis. The initial surface number density $`n_{s0}`$ is proportional to $`a^{-1.5}`$. Since we consider test particles which do not interact with one another, the particular choice of disk mass or surface number density profile does not affect the generality of the results. The initial eccentricity and inclination ($`e_0`$ and $`i_0`$) of the particles are taken to be $`\stackrel{<}{}0.01`$. Figure 2 shows the eccentricity and inclination of the particles after the encounter as a function of $`a/q`$, in the case with $`e_0=i_0=0`$ and $`M_c=M_p`$. The inclination angle $`\theta _c`$ is (a) 5 degrees, (b) 30 degrees, and (c) 150 degrees, with the line of nodes along the $`x`$-axis. The spatial distribution in the case of $`\theta _c=30`$ degrees is shown in Fig. 3. As shown in Fig. 2, the encounter leads to a strong increase in $`e`$ and $`i`$ in the outer parts of the disk. In the case of $`\theta _c=30`$ degrees, $`e`$ and $`i`$ are pumped up only slightly ($`\stackrel{<}{}0.01`$) at $`a/q\stackrel{<}{}0.2`$, while they are pumped up strongly ($`\stackrel{>}{}0.1`$) at $`a/q\stackrel{>}{}0.25`$. Note that the former condition is similar to the orbital stability condition of bound three-body systems (e.g. Black 1982). At $`a/q\stackrel{>}{}0.3`$ a large fraction of the particles are ejected: respectively 70% and 95% of objects with initial $`a/q\sim 0.5`$ and $`\sim 0.7`$. The remaining particles at $`a/q\stackrel{>}{}0.3`$ have large eccentricities. The other $`\theta _c`$ cases show similar features. The spike in $`i`$ at $`a/q\sim 0.3`$ coincides with the 3:1 commensurability of the unperturbed disk orbital frequency and the companion’s orbital frequency at pericenter. Since the companion’s angular velocity at pericenter is $`[2G(M_p+M_c)q^{-3}]^{1/2}`$, the 3:1 commensurability is located at $`a/q=[2(1+M_c/M_p)\times 3^2]^{-1/3}\approx 0.30`$, for prograde encounters.
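The quoted location of the 3:1 commensurability can be checked directly; the sketch below assumes a parabolic encounter and an equal-mass companion:

```python
# Location of the 3:1 commensurability between a particle's orbital
# frequency n(a) and the companion's angular velocity at pericenter,
# as in the formula above; a sketch for a parabolic encounter.
def a31_over_q(mass_ratio):
    """a/q where n(a) = 3 * Omega_peri, for mass_ratio = M_c/M_p."""
    # n^2 = G M_p / a^3 and Omega_peri^2 = 2 G (M_p + M_c) / q^3
    return (2.0 * (1.0 + mass_ratio) * 3 ** 2) ** (-1.0 / 3.0)

print(f"a/q = {a31_over_q(1.0):.2f}")   # ~0.30 for M_c = M_p, as quoted
```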
Thus the highly localized character of the disk response in this region appears to be associated with a corotation resonance occurring near pericenter (Korycansky & Papaloizou 1995). As shown in Fig. 3, long-lived features in the spatial distribution are the inclined bar-like envelope and the prominent one-armed spiral, both due to close correlations in the longitudes of perihelion and ascending node. Precession of the longitudes due to, for example, the nebula potential would gradually destroy these features. If the velocity dispersion is greater than the surface escape velocity of a planetesimal, collisions between planetesimals are too destructive (Backman, Dasgupta, & Stencel 1995) and the growth of planetesimals is halted. The surface escape velocity of a planetesimal with mass $`m`$ and density $`\rho `$ is $`0.5\times 10^5(m/10^{24}\mathrm{g})^{1/3}(\rho /1\mathrm{g}\mathrm{c}\mathrm{m}^{-3})^{1/6}`$ cm/s. The velocity dispersion is $`(e^2+i^2)^{1/2}v_{\mathrm{Kep}}\simeq 0.5\times 10^6(e^2+i^2)^{1/2}(a/40\mathrm{A}\mathrm{U})^{-1/2}(M_p/M_{\mathrm{}})^{1/2}`$ cm/s, where $`v_{\mathrm{Kep}}`$ is the Keplerian velocity and $`M_{\mathrm{}}`$ is the solar mass. As a result, in the region where the pumped-up $`e`$ or $`i\stackrel{>}{}0.1`$, planetesimal growth would be inhibited. The steep radial gradient of $`e`$ and $`i`$ seen in Fig. 2 indicates that there exists a well defined boundary for planetesimal growth at $`a\sim (0.2`$–$`0.3)q`$: outside this region planetesimal growth is greatly inhibited, while it is not affected at all inside. For different encounter parameters the distribution of the pumped-up $`e`$ and $`i`$ is generally very similar to Fig. 2, except for the length scale $`q`$. In other words, for different encounters the distribution of particles in Fig. 2 shifts towards larger or smaller values of $`a/q`$, except that $`i`$ is not pumped up in the special case of a coplanar encounter. In general, more massive companions and lower inclination encounters yield stronger interactions. For example, the distribution shifts as $`a/q(M_c/M_p)^{-(0.2-0.25)}`$. Higher energy (i.e. more eccentric) encounters result in stronger effects further into the disk owing to the improved coupling there. Encounters with $`\theta _c`$ closer to 90 degrees result in higher $`i`$ relative to $`e`$. The $`\theta _c=150`$ degree encounter has the same inclination amplitude as that with $`\theta _c=30`$ degrees. Hence, the pumped-up $`i`$ is similar, but $`e`$ is smaller (see Fig. 2), because $`\theta _c=150`$ degrees gives a retrograde encounter, and the relative velocity between disk particles and the passing star is therefore significantly larger, resulting in only very weak coupling. As mentioned above, if the proto-sun had a transient binary companion, it may have experienced a few close encounters with the companion before the binary system broke up. In this case, the individual encounters would have similar parameters, and $`e`$ and $`i`$ would be pumped up cumulatively with each encounter, so that the perturbed forms of $`e`$ and $`i`$ would be preserved except for shifts towards smaller values of $`a/q`$. We shall now consider an encounter that gives the required $`e`$ and $`i`$ distributions for the inner EKB. As stated above, such encounters with $`q`$ on the order of 100 AU may be reasonable for the protosolar system. We performed a simulation similar to that presented in Fig. 2b ($`\theta _c=30`$ degrees) except with $`\langle e_0^2\rangle ^{1/2}=\langle i_0^2\rangle ^{1/2}=0.01`$. The overall features of the pumped-up $`e`$ and $`i`$ are quite similar to Fig. 2b (see Fig. 4a).
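The two scaling formulas above can be evaluated directly; a sketch in CGS units:

```python
# Direct evaluation of the two scaling formulas above (CGS units).
import math

G     = 6.674e-8    # gravitational constant [cgs]
M_sun = 1.989e33    # [g]
AU    = 1.496e13    # [cm]

def v_escape(m, rho=1.0):
    """Surface escape velocity [cm/s] of a body of mass m [g]."""
    r = (3.0 * m / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
    return math.sqrt(2.0 * G * m / r)

def v_dispersion(e, i, a_AU, M_p=M_sun):
    """(e^2 + i^2)^(1/2) * v_Kep [cm/s] at semimajor axis a_AU [AU]."""
    v_kep = math.sqrt(G * M_p / (a_AU * AU))
    return math.hypot(e, i) * v_kep

print(f"v_esc (m = 1e24 g)         : {v_escape(1e24):.2e} cm/s")
print(f"v_disp (e = i = 0.1, 40 AU): {v_dispersion(0.1, 0.1, 40):.2e} cm/s")
# The dispersion (~7e4 cm/s) already exceeds the escape velocity
# (~5e4 cm/s), so accretion stalls wherever e or i is pumped above ~0.1.
```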
With $`q=160`$ AU, we randomly selected $`\sim 500`$ particles in the range 30 AU $`<a<`$ 65 AU from the results, to compare them with the observed numbers of EKB objects. The selected distribution is shown in Fig. 4a. For $`a\stackrel{>}{}42`$ AU, $`e`$ and $`i`$ are as large as those of the observed EKB objects. Some objects that originally had larger $`a`$ are scattered into this region with very high $`e`$ and $`i`$. However, at $`a\stackrel{<}{}39`$ AU, $`e`$ is still small enough ($`\stackrel{<}{}0.05`$) to allow the formation of a gap inside $`\sim 39`$ AU via resonance sweeping, without the need for any other processes, e.g. long-term orbital destabilization. In order to study the sweeping mean motion resonances we also performed simulations similar to those of other authors (Malhotra 1995, Ida et al. 1999), starting from the resultant distribution of particles after the stellar encounter (Fig. 4a). The proto-Neptune, with a mass of $`10^{29}`$ g (comparable to the present Neptunian mass), was artificially moved from $`\sim 23`$ AU to $`\sim 30`$ AU (therefore the 3:2 resonance moved from 30 AU to 39.5 AU), on a circular zero-inclination orbit. We assumed a time dependence for the semimajor axis evolution given by $`30\times [1-(7/30)\mathrm{exp}(-t/5\times 10^5\mathrm{yrs})]`$ AU, and a migration timescale $`a/\dot{a}\simeq 2\times 10^6`$ yrs. If we choose a longer migration time, more particles are captured by the 2:1 resonance and a gap is created interior to the resonance, while the capture probability of the 3:2 resonance remains much as before (Ida et al. 1999). The result after the sweeping is shown in Fig. 4b. The objects between $`\sim 40`$ AU and $`\sim 42`$ AU would be destabilized by a long-term secular resonance (Duncan, Levison, & Budd 1995). The objects that have high eccentricity and are not trapped by mean-motion resonances may experience close encounters with Neptune and go to the ’scattered disk’. Sweeping secular resonances, which we do not include in our simulations, may alter the inclination distribution both near the 3:2 resonance and beyond 42 AU (Malhotra, Duncan, & Levison 1999). Thus, our result is consistent with the observed distribution in Fig. 1. In particular, the puzzling high values of $`e`$ at $`a\stackrel{>}{}42`$ AU are explained without diminishing the capture probability of the sweeping 3:2 resonance. A different encounter geometry, multiple encounters, or an encounter with a passing binary system might result in an even better match. The typical damping time of $`e`$ and $`i`$ due to hydrodynamic gas drag at 40 AU is $`10^9(m/10^{22}\mathrm{g})^{1/3}(e/0.1)^{-1}`$ yrs (Adachi, Hayashi, & Nakazawa 1976), for a typical minimum mass solar nebula model (Hayashi 1981). This is much longer than the lifetime of the disk gas inferred from observations, which is of order $`10^6`$–$`10^7`$ yrs (e.g. Zuckerman, Forveille, & Kastner 1995). Also, the two-body relaxation time and collision time for the presently estimated surface density at 40 AU are longer than the Solar System age (Stern 1995, 1996; Davis & Farinella 1996). Hence, the orbital elements of the present EKB objects should not have changed significantly after the orbital perturbation, and the orbital distribution in $`e`$, $`i`$ and $`a`$ after the encounter should be reflected in that observed today.
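The assumed migration law can be tabulated as a quick check of the quoted timescale; a sketch:

```python
# The assumed migration law for proto-Neptune, tabulated as a quick
# check of the quoted timescale; a sketch with tau = 5e5 yr.
import math

def a_neptune(t, tau=5.0e5):
    """Semimajor axis [AU]: 30 * [1 - (7/30) exp(-t/tau)], t in years."""
    return 30.0 * (1.0 - (7.0 / 30.0) * math.exp(-t / tau))

for t in [0.0, 5.0e5, 2.0e6, 1.0e7]:
    print(f"t = {t:8.0f} yr : a = {a_neptune(t):.1f} AU")
# a grows from 23 AU toward 30 AU; at t = 0, a/adot = 23/(7/tau)
# ~ 1.6e6 yr, consistent with the ~2e6 yr migration timescale above.
```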
## 4 Discussion Our simulations show that early stellar encounters would lead to interesting features in the young solar nebula that might explain the structure of the outer part of the EKB. The stellar encounters would occur on the timescale of the dissolution of stellar aggregates, which is of the order of $`10^8`$ yrs. This may allow the EKB objects to grow to their observed sizes before the encounters. The objects initially inside 30 AU would be strongly scattered to form the ’scattered disk’ during Neptune’s migration (Duncan & Levison 1997). The objects with initial $`a`$ from 30 AU to 40 AU would be captured by the sweeping of the 3:2 resonance, with resultant high $`e`$ and $`i`$. Outside 40 AU, the stellar perturbations are strong enough to pump up $`e`$ and $`i`$ to $`\stackrel{>}{}0.1`$. Once their velocity dispersion is pumped up to more than the surface escape velocity, collisions between the EKB objects would produce copious amounts of dust particles, which would be removed by gas drag, Poynting-Robertson drag, and radiation pressure-driven ejection (Stern 1995; Backman, Dasgupta, & Stencel 1995). The initial surface density is therefore eroded by virtue of its dynamical state, the present EKB objects being remnants that have avoided significant erosion (Stern 1996; Davis & Farinella 1996). This result could explain the fact that the observationally inferred surface density in the EKB is much lower than that extrapolated from a minimum mass solar nebula model (e.g. Stern 1995; Weissman and Levison 1997). Detailed numerical modeling of the subsequent collisional evolution of the perturbed EKB is required to test this hypothesis. Our model predicts that there should be a steep increase of $`e`$ and $`i`$ with semimajor axis. In contrast, stirring by Earth-sized bodies would predict a decrease of $`e`$ and $`i`$ with semimajor axis, while partial trapping by Neptune’s sweeping 2:1 resonance predicts a ’cold’ disk beyond 50 AU. Future observations can discriminate between these models through the trend of the radial dependence of $`e`$ and $`i`$. If $`e`$ and $`i`$ systematically increase beyond 50 AU, our model is supported. The high eccentricities and inclinations that follow immediately from such an encounter also have a number of consequences for extrasolar planetary systems. Firstly, as stated previously, the augmented velocity dispersion amongst planetesimals promotes the production of dust particles. This can significantly increase the dust replenishment rates and lead to more prominent circumstellar disks around some main sequence stars (Stern 1995; Kalas & Jewitt 1996; Holland et al. 1998). The existence of such dust disks may reflect stellar encounters in the formation epoch. Secondly, as stated above, planetesimal growth could be forestalled in the outer region of the disk by a stellar encounter. This situation could be reflected in the fact that Neptune marks the outer boundary of our planetary system at $`\sim 30`$ AU. Thus the existence of substantial planetary bodies outside $`\sim 50`$ AU would be inconsistent with our model. Finally, we comment that recent advances in star formation theory and observation suggest that such stellar encounters with disks as those considered here should not be viewed as unique catastrophic events but as an integral part of the star- and planetary-system formation process. We thank the anonymous referee for helpful comments and useful suggestions. SI acknowledges the hospitality of the MPIA during his stay. JL is grateful to Dr. P. Kalas for comments on dust disks.
Adachi, I., Hayashi, C., & Nakazawa, K. 1976, Prog. Theor. Phys., 56, 1756
Backman, D. E., Dasgupta, A., & Stencel, R. E. 1995, ApJ, 450, L35
Black, D. C. 1982, AJ, 87, 1333
Davis, D. R. & Farinella, A. P. 1996, Icarus, 125, 50
Davis, D. R. & Farinella, A. P. 1998, LPSC abstract
Duncan, M. J., Levison, H. F., & Budd, S. M. 1995, AJ, 110, 3073
Ghez, A. M., McCarthy, D. W., Patience, J. L., & Beck, T. L. 1997, ApJ, 481, 378
Gladman, B. et al. 1998, AJ, 116, 2042
Goldreich, P. & Ward, W. R. 1973, ApJ, 183, 1051
Hahn, J. M. & Malhotra, R. 1999, AJ, in press
Hayashi, C. 1981, Prog. Theor. Phys. Suppl., 70, 35
Hayashi, C., Nakazawa, K., & Nakagawa, Y. 1985, in Protostars and Planets II, eds. D. C. Black & M. S. Matthew (Tucson: Univ. of Arizona Press), 1100
Hillenbrand, L. A. 1997, AJ, 113, 1733
Holland, W., Greaves, J., Zuckerman, B., Webb, R., McCarthy, C., Coulson, I., Walther, D., Dent, W., Gear, W., & Robson, I. 1998, Nature, 392, 788
Ida, S., Bryden, G., Lin, D. N. C., & Tanaka, H. 1999, ApJ, submitted
Jewitt, D., Luu, J., & Trujillo, C. 1998, AJ, 115, 2125
Kalas, P. & Jewitt, D. 1996, AJ, 111, 1374
Kroupa, P. 1995, MNRAS, 277, 1507
Kroupa, P. 1998, MNRAS, 298, 231
Köhler, R. & Leinert, C. H. 1998, A&A, 331, 977
Laughlin, G. & Adams, F. C. 1998, ApJ, 508, L171
Malhotra, R. 1995, AJ, 110, 420
Malhotra, R., Duncan, M. J., & Levison, H. F. 1999, in Protostars and Planets IV, eds. V. Mannings & A. Boss (Tucson: Univ. of Arizona Press), in press
Morbidelli, A. & Valsecchi, G. B. 1997, Icarus, 128, 464
Ostriker, E. C. 1994, ApJ, 424, 292
Papaloizou, J. C. B. & Lin, D. N. C. 1995, ARA&A, 33, 505
Petit, J.-M., Morbidelli, A., & Valsecchi, G. B. 1999, Icarus, in press
Safronov, V. S. 1969, Evolution of the Protoplanetary Cloud and Formation of the Earth and Planets (Moscow: Nauka Press)
Stern, S. A. 1991, Icarus, 90, 271
Stern, S. A. 1995, AJ, 110, 856
Stern, S. A. 1996, AJ, 112, 1203
Stern, S. A. & Colwell, J. E. 1997, ApJ, 490, 879
Weidenschilling, S. J. & Cuzzi, J. N. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (Tucson: Univ. of Arizona Press), 1031
Zuckerman, B., Forveille, T., & Kastner, J. H. 1995, Nature, 373, 494
no-problem/9907/hep-ex9907063.html
This note provides a brief overview of four separate analyses performed by the SLD Collaboration to measure the parity-violation parameter $`A_b`$ in polarized $`Z^0`$ decays, and a description of how the analyses are combined to form an overall SLD result. The reader is referred to the detailed notes available for each analysis for specific information on how each analysis is performed. The most statistically powerful analysis selects $`b\overline{b}`$ events using an inclusive topological vertexing technique and forms the momentum-weighted jet charge of all selected events to identify the quark direction. This analysis was most recently updated at Moriond ’99 to include the full 1993-8 SLD dataset. The updated systematic errors are reproduced in Table 1. The combined jet-charge result is: $$A_b=0.882\pm 0.020(\text{stat})\pm 0.029(\text{syst})(\text{jet-charge}).$$ (1) The next analysis uses identified high-momentum muons and electrons to tag heavy flavor ($`b,c`$) events and then employs a number of kinematic and vertexing variables to try to distinguish leptons arising from $`b`$-hadron decays from those arising from $`c`$-hadron decays. The lepton sign is used to sign the quark direction, and $`A_b`$ and $`A_c`$ are measured simultaneously. This analysis was most recently updated at Moriond ’99 to include the full 1993-8 SLD dataset. The updated systematic errors are reproduced in Tables 2 ($`\mu ^\pm `$ tag) and 3 ($`e^\pm `$ tag). The combined lepton-tag result is: $$A_b=0.924\pm 0.032(\text{stat})\pm 0.026(\text{syst})(\text{leptons}).$$ (2) Another analysis uses identified $`K^\pm `$ associated with separated topological vertices to sign the quark direction, exploiting the dominant $`b\to c\to s`$ decay chain. In the original version of this analysis, the error in the result was dominated by the experimental uncertainty in the relative rates of $`B\to K^+X`$ vs. $`B\to K^-X`$ decays. This analysis has been updated at this conference to include data from the 1997-8 data run and now employs a self-calibration technique which removes the reliance on the relative production rates of $`K^\pm `$ in $`B`$ decays. The combined $`K^\pm `$-tag result is: $$A_b=0.960\pm 0.040(\text{stat})\pm 0.056(\text{syst})(\text{kaons}).$$ (3) The last analysis uses the charge of the separated topological vertices themselves to assign the quark direction. The vertex charge is weighted in the analysis based on the mass of the reconstructed vertex, which gives an indication of the fraction of the $`B`$ decay tracks which have been correctly assigned to the vertex. This analysis, first presented at this conference, includes data from the 1996-8 data run and also employs a self-calibration technique to determine the correct-sign probability directly from the data. The vertex-charge result is: $$A_b=0.897\pm 0.027(\text{stat})_{-0.034}^{+0.036}(\text{syst})(\text{vertex-charge}).$$ (4) We have combined these four results as follows. The statistical overlap between the analyses was determined by explicitly tabulating events used by the four analyses for a subset of the total data which is common to all four and was marked by stable detector performance. Each event in this dataset used by a given analysis was assigned a weight by that analysis based on its estimated $`b`$-hadron purity, correct-signing probability, and reconstructed polar angle.
The statistical correlations between analyses for this dataset were then determined from the overlapping event fractions, the fractions of events where different tags assigned the same (opposite) quark directions, and the individual event weights. This statistical correlation was then diluted to account for the fact that not all analyses use the same dataset. The statistical correlations extracted range from 10–30% depending on the pair of analyses considered. The largest correlation (28%) was observed between the jet-charge and vertex-charge analyses, as expected; due to its statistical power the jet-charge analysis has significant overlap with all three other analyses. The smallest correlation (8%) was between the lepton tag and vertex charge analyses. Correlations between analyses due to common systematic error sources have been treated in the standardized fashion developed by the LEP Electroweak Working Group . Since three of the four analyses (all but the lepton tag) use self-calibration techniques based on the data, most of the quoted systematic errors are in fact dominated by data statistics and thus (mostly) uncorrelated. For the purposes of this combination, we assume $`A_c`$ is fixed at its Standard Model value. The analyses are then combined in a weighted average using the individual analysis errors and the statistical correlation matrix. Each analysis receives a weight in the overall combination based on its statistical and uncorrelated systematic error. Statistical and uncorrelated systematic errors are combined in quadrature and correlated systematic errors are combined linearly. The final analysis weights are 38% (jet-charge), 30% (leptons), 22% (vertex-charge), and 10% (kaons). The combined SLD preliminary result obtained with this procedure is: $$A_b=0.905\pm 0.017(\text{stat})\pm 0.020(\text{syst})(\text{combined}).$$ (5) This result differs slightly from the LEP Electroweak Working Group fit of the same data due to correlations between the $`A_b`$ and $`A_c`$ results, which enter here primarily through the lepton-tag analysis. We explicitly ignore such correlations in our average, whereas the LEP global fits include them. Our average result for $`A_b`$ agrees well with the Standard Model expectation of 0.935, and also with that derived from the current combination of LEP results ($`0.892\pm 0.024`$) used in the global electroweak fit. The combined LEP and SLD results, however, imply that $`A_b`$ deviates from the Standard Model at the $`2.5\sigma `$ level; this intriguing situation has persisted since 1996 despite significant improvements in statistical and systematic errors. One recent analysis of the world’s $`A_b`$ data shows no evidence of systematic bias or underestimated errors. Thus the experimental question of possible anomalies in the $`Zb\overline{b}`$ coupling remains unresolved. The SLD Collaboration Kenji Abe,<sup>(21)</sup> Koya Abe,<sup>(33)</sup> T. Abe,<sup>(29)</sup> I. Adam,<sup>(29)</sup> T. Akagi,<sup>(29)</sup> H. Akimoto,<sup>(29)</sup> N.J. Allen,<sup>(5)</sup> W.W. Ash,<sup>(29)</sup> D. Aston,<sup>(29)</sup> K.G. Baird,<sup>(17)</sup> C. Baltay,<sup>(40)</sup> H.R. Band,<sup>(39)</sup> M.B. Barakat,<sup>(16)</sup> O. Bardon,<sup>(19)</sup> T.L. Barklow,<sup>(29)</sup> G.L. Bashindzhagyan,<sup>(20)</sup> J.M. Bauer,<sup>(18)</sup> G. Bellodi,<sup>(23)</sup> A.C. Benvenuti,<sup>(3)</sup> G.M. Bilei,<sup>(25)</sup> D. Bisello,<sup>(24)</sup> G. Blaylock,<sup>(17)</sup> J.R. Bogart,<sup>(29)</sup> G.R. Bower,<sup>(29)</sup> J.E.
Brau,<sup>(22)</sup> M. Breidenbach,<sup>(29)</sup> W.M. Bugg,<sup>(32)</sup> D. Burke,<sup>(29)</sup> T.H. Burnett,<sup>(38)</sup> P.N. Burrows,<sup>(23)</sup> R.M. Byrne,<sup>(19)</sup> A. Calcaterra,<sup>(12)</sup> D. Calloway,<sup>(29)</sup> B. Camanzi,<sup>(11)</sup> M. Carpinelli,<sup>(26)</sup> R. Cassell,<sup>(29)</sup> R. Castaldi,<sup>(26)</sup> A. Castro,<sup>(24)</sup> M. Cavalli-Sforza,<sup>(35)</sup> A. Chou,<sup>(29)</sup> E. Church,<sup>(38)</sup> H.O. Cohn,<sup>(32)</sup> J.A. Coller,<sup>(6)</sup> M.R. Convery,<sup>(29)</sup> V. Cook,<sup>(38)</sup> R.F. Cowan,<sup>(19)</sup> D.G. Coyne,<sup>(35)</sup> G. Crawford,<sup>(29)</sup> C.J.S. Damerell,<sup>(27)</sup> M.N. Danielson,<sup>(8)</sup> M. Daoudi,<sup>(29)</sup> N. de Groot,<sup>(4)</sup> R. Dell’Orso,<sup>(25)</sup> P.J. Dervan,<sup>(5)</sup> R. de Sangro,<sup>(12)</sup> M. Dima,<sup>(10)</sup> D.N. Dong,<sup>(19)</sup> M. Doser,<sup>(29)</sup> R. Dubois,<sup>(29)</sup> B.I. Eisenstein,<sup>(13)</sup> I.Erofeeva,<sup>(20)</sup> V. Eschenburg,<sup>(18)</sup> E. Etzion,<sup>(39)</sup> S. Fahey,<sup>(8)</sup> D. Falciai,<sup>(12)</sup> C. Fan,<sup>(8)</sup> J.P. Fernandez,<sup>(35)</sup> M.J. Fero,<sup>(19)</sup> K. Flood,<sup>(17)</sup> R. Frey,<sup>(22)</sup> J. Gifford,<sup>(36)</sup> T. Gillman,<sup>(27)</sup> G. Gladding,<sup>(13)</sup> S. Gonzalez,<sup>(19)</sup> E.R. Goodman,<sup>(8)</sup> E.L. Hart,<sup>(32)</sup> J.L. Harton,<sup>(10)</sup> K. Hasuko,<sup>(33)</sup> S.J. Hedges,<sup>(6)</sup> S.S. Hertzbach,<sup>(17)</sup> M.D. Hildreth,<sup>(29)</sup> J. Huber,<sup>(22)</sup> M.E. Huffer,<sup>(29)</sup> E.W. Hughes,<sup>(29)</sup> X. Huynh,<sup>(29)</sup> H. Hwang,<sup>(22)</sup> M. Iwasaki,<sup>(22)</sup> D.J. Jackson,<sup>(27)</sup> P. Jacques,<sup>(28)</sup> J.A. Jaros,<sup>(29)</sup> Z.Y. Jiang,<sup>(29)</sup> A.S. Johnson,<sup>(29)</sup> J.R. Johnson,<sup>(39)</sup> R.A. Johnson,<sup>(7)</sup> T. Junk,<sup>(29)</sup> R. Kajikawa,<sup>(21)</sup> M. Kalelkar,<sup>(28)</sup> Y. Kamyshkov,<sup>(32)</sup> H.J. Kang,<sup>(28)</sup> I. Karliner,<sup>(13)</sup> H. Kawahara,<sup>(29)</sup> Y.D. Kim,<sup>(30)</sup> M.E. King,<sup>(29)</sup> R. King,<sup>(29)</sup> R.R. Kofler,<sup>(17)</sup> N.M. Krishna,<sup>(8)</sup> R.S. Kroeger,<sup>(18)</sup> M. Langston,<sup>(22)</sup> A. Lath,<sup>(19)</sup> D.W.G. Leith,<sup>(29)</sup> V. Lia,<sup>(19)</sup> C.Lin,<sup>(17)</sup> M.X. Liu,<sup>(40)</sup> X. Liu,<sup>(35)</sup> M. Loreti,<sup>(24)</sup> A. Lu,<sup>(34)</sup> H.L. Lynch,<sup>(29)</sup> J. Ma,<sup>(38)</sup> M. Mahjouri,<sup>(19)</sup> G. Mancinelli,<sup>(28)</sup> S. Manly,<sup>(40)</sup> G. Mantovani,<sup>(25)</sup> T.W. Markiewicz,<sup>(29)</sup> T. Maruyama,<sup>(29)</sup> H. Masuda,<sup>(29)</sup> E. Mazzucato,<sup>(11)</sup> A.K. McKemey,<sup>(5)</sup> B.T. Meadows,<sup>(7)</sup> G. Menegatti,<sup>(11)</sup> R. Messner,<sup>(29)</sup> P.M. Mockett,<sup>(38)</sup> K.C. Moffeit,<sup>(29)</sup> T.B. Moore,<sup>(40)</sup> M.Morii,<sup>(29)</sup> D. Muller,<sup>(29)</sup> V. Murzin,<sup>(20)</sup> T. Nagamine,<sup>(33)</sup> S. Narita,<sup>(33)</sup> U. Nauenberg,<sup>(8)</sup> H. Neal,<sup>(29)</sup> M. Nussbaum,<sup>(7)</sup> N. Oishi,<sup>(21)</sup> D. Onoprienko,<sup>(32)</sup> L.S. Osborne,<sup>(19)</sup> R.S. Panvini,<sup>(37)</sup> C.H. Park,<sup>(31)</sup> T.J. Pavel,<sup>(29)</sup> I. Peruzzi,<sup>(12)</sup> M. Piccolo,<sup>(12)</sup> L. Piemontese,<sup>(11)</sup> K.T. Pitts,<sup>(22)</sup> R.J. Plano,<sup>(28)</sup> R. Prepost,<sup>(39)</sup> C.Y. Prescott,<sup>(29)</sup> G.D. 
Punkar,<sup>(29)</sup> J. Quigley,<sup>(19)</sup> B.N. Ratcliff,<sup>(29)</sup> T.W. Reeves,<sup>(37)</sup> J. Reidy,<sup>(18)</sup> P.L. Reinertsen,<sup>(35)</sup> P.E. Rensing,<sup>(29)</sup> L.S. Rochester,<sup>(29)</sup> P.C. Rowson,<sup>(9)</sup> J.J. Russell,<sup>(29)</sup> O.H. Saxton,<sup>(29)</sup> T. Schalk,<sup>(35)</sup> R.H. Schindler,<sup>(29)</sup> B.A. Schumm,<sup>(35)</sup> J. Schwiening,<sup>(29)</sup> S. Sen,<sup>(40)</sup> V.V. Serbo,<sup>(29)</sup> M.H. Shaevitz,<sup>(9)</sup> J.T. Shank,<sup>(6)</sup> G. Shapiro,<sup>(15)</sup> D.J. Sherden,<sup>(29)</sup> K.D. Shmakov,<sup>(32)</sup> C. Simopoulos,<sup>(29)</sup> N.B. Sinev,<sup>(22)</sup> S.R. Smith,<sup>(29)</sup> M.B. Smy,<sup>(10)</sup> J.A. Snyder,<sup>(40)</sup> H. Staengle,<sup>(10)</sup> A. Stahl,<sup>(29)</sup> P. Stamer,<sup>(28)</sup> H. Steiner,<sup>(15)</sup> R. Steiner,<sup>(1)</sup> M.G. Strauss,<sup>(17)</sup> D. Su,<sup>(29)</sup> F. Suekane,<sup>(33)</sup> A. Sugiyama,<sup>(21)</sup> S. Suzuki,<sup>(21)</sup> M. Swartz,<sup>(14)</sup> A. Szumilo,<sup>(38)</sup> T. Takahashi,<sup>(29)</sup> F.E. Taylor,<sup>(19)</sup> J. Thom,<sup>(29)</sup> E. Torrence,<sup>(19)</sup> N.K. Toumbas,<sup>(29)</sup> T. Usher,<sup>(29)</sup> C. Vannini,<sup>(26)</sup> J. Va’vra,<sup>(29)</sup> E. Vella,<sup>(29)</sup> J.P. Venuti,<sup>(37)</sup> R. Verdier,<sup>(19)</sup> P.G. Verdini,<sup>(26)</sup> D.L. Wagner,<sup>(8)</sup> S.R. Wagner,<sup>(29)</sup> A.P. Waite,<sup>(29)</sup> S. Walston,<sup>(22)</sup> S.J. Watts,<sup>(5)</sup> A.W. Weidemann,<sup>(32)</sup> E. R. Weiss,<sup>(38)</sup> J.S. Whitaker,<sup>(6)</sup> S.L. White,<sup>(32)</sup> F.J. Wickens,<sup>(27)</sup> B. Williams,<sup>(8)</sup> D.C. Williams,<sup>(19)</sup> S.H. Williams,<sup>(29)</sup> S. Willocq,<sup>(17)</sup> R.J. Wilson,<sup>(10)</sup> W.J. Wisniewski,<sup>(29)</sup> J. L. Wittlin,<sup>(17)</sup> M. Woods,<sup>(29)</sup> G.B. Word,<sup>(37)</sup> T.R. Wright,<sup>(39)</sup> J. Wyss,<sup>(24)</sup> R.K. Yamamoto,<sup>(19)</sup> J.M. Yamartino,<sup>(19)</sup> X. Yang,<sup>(22)</sup> J. Yashima,<sup>(33)</sup> S.J. Yellin,<sup>(34)</sup> C.C. Young,<sup>(29)</sup> H. Yuta,<sup>(2)</sup> G. Zapalac,<sup>(39)</sup> R.W. Zdarko,<sup>(29)</sup> J. Zhou.<sup>(22)</sup> (The SLD Collaboration) <sup>(1)</sup>Adelphi University, Garden City, New York 11530, <sup>(2)</sup>Aomori University, Aomori , 030 Japan, <sup>(3)</sup>INFN Sezione di Bologna, I-40126, Bologna, Italy, <sup>(4)</sup>University of Bristol, Bristol, U.K., <sup>(5)</sup>Brunel University, Uxbridge, Middlesex, UB8 3PH United Kingdom, <sup>(6)</sup>Boston University, Boston, Massachusetts 02215, <sup>(7)</sup>University of Cincinnati, Cincinnati, Ohio 45221, <sup>(8)</sup>University of Colorado, Boulder, Colorado 80309, <sup>(9)</sup>Columbia University, New York, New York 10533, <sup>(10)</sup>Colorado State University, Ft. Collins, Colorado 80523, <sup>(11)</sup>INFN Sezione di Ferrara and Universita di Ferrara, I-44100 Ferrara, Italy, <sup>(12)</sup>INFN Lab. 
Nazionali di Frascati, I-00044 Frascati, Italy, <sup>(13)</sup>University of Illinois, Urbana, Illinois 61801, <sup>(14)</sup>Johns Hopkins University, Baltimore, Maryland 21218-2686, <sup>(15)</sup>Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720, <sup>(16)</sup>Louisiana Technical University, Ruston,Louisiana 71272, <sup>(17)</sup>University of Massachusetts, Amherst, Massachusetts 01003, <sup>(18)</sup>University of Mississippi, University, Mississippi 38677, <sup>(19)</sup>Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, <sup>(20)</sup>Institute of Nuclear Physics, Moscow State University, 119899, Moscow Russia, <sup>(21)</sup>Nagoya University, Chikusa-ku, Nagoya, 464 Japan, <sup>(22)</sup>University of Oregon, Eugene, Oregon 97403, <sup>(23)</sup>Oxford University, Oxford, OX1 3RH, United Kingdom, <sup>(24)</sup>INFN Sezione di Padova and Universita di Padova I-35100, Padova, Italy, <sup>(25)</sup>INFN Sezione di Perugia and Universita di Perugia, I-06100 Perugia, Italy, <sup>(26)</sup>INFN Sezione di Pisa and Universita di Pisa, I-56010 Pisa, Italy, <sup>(27)</sup>Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX United Kingdom, <sup>(28)</sup>Rutgers University, Piscataway, New Jersey 08855, <sup>(29)</sup>Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309, <sup>(30)</sup>Sogang University, Seoul, Korea, <sup>(31)</sup>Soongsil University, Seoul, Korea 156-743, <sup>(32)</sup>University of Tennessee, Knoxville, Tennessee 37996, <sup>(33)</sup>Tohoku University, Sendai 980, Japan, <sup>(34)</sup>University of California at Santa Barbara, Santa Barbara, California 93106, <sup>(35)</sup>University of California at Santa Cruz, Santa Cruz, California 95064, <sup>(36)</sup>University of Victoria, Victoria, British Columbia, Canada V8W 3P6, <sup>(37)</sup>Vanderbilt University, Nashville,Tennessee 37235, <sup>(38)</sup>University of Washington, Seattle, Washington 98105, <sup>(39)</sup>University of Wisconsin, Madison,Wisconsin 53706, <sup>(40)</sup>Yale University, New Haven, Connecticut 06511.
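To make the combination procedure described in this note concrete, here is a minimal numerical sketch of a correlated weighted average ("BLUE"-style) of the four quoted results. The correlation coefficients below are illustrative placeholders chosen within the 10–30% range quoted above, not the analyses' actual covariance, so the weights and combined value only roughly reproduce Eq. (5).

```python
# Minimal sketch of a correlated weighted average ("BLUE") of the four
# A_b results quoted above, with stat and syst errors added in
# quadrature (the asymmetric vertex-charge systematic is symmetrized).
# The correlation coefficients in rho are illustrative placeholders,
# NOT the actual covariance of the analyses.
import numpy as np

labels = ["jet-charge", "leptons", "vertex-charge", "kaons"]
values = np.array([0.882, 0.924, 0.897, 0.960])
errors = np.array([np.hypot(0.020, 0.029), np.hypot(0.032, 0.026),
                   np.hypot(0.027, 0.035), np.hypot(0.040, 0.056)])

rho = np.array([[1.00, 0.15, 0.28, 0.15],
                [0.15, 1.00, 0.08, 0.10],
                [0.28, 0.08, 1.00, 0.10],
                [0.15, 0.10, 0.10, 1.00]])
cov = rho * np.outer(errors, errors)

ones = np.ones(len(values))
w = np.linalg.solve(cov, ones)        # BLUE weights: C^-1 1 / (1' C^-1 1)
w /= w.sum()
A_b, err = w @ values, np.sqrt(w @ cov @ w)

for lab, wi in zip(labels, w):
    print(f"{lab:13s} weight {wi:5.2f}")
print(f"A_b = {A_b:.3f} +- {err:.3f}")   # roughly reproduces Eq. (5)
```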
no-problem/9907/nucl-th9907001.html
# THEORETICAL PREDICTIONS OF RESIDUE CROSS SECTIONS OF SUPERHEAVY ELEMENTS <sup>3</sup><sup>3</sup>3Invited talk given at Nuclear Shells-50 years, Dubna, April 21-24, 1999 YITP-99-41, June 1999 Y. Abe<sup>1</sup>, K. Okazaki<sup>2</sup>, Y. Aritomo<sup>3</sup>, T. Tokuda<sup>2</sup>, T. Wada<sup>2</sup> and M. Ohta<sup>2</sup> <sup>1</sup>Yukawa Institute for Theoretical Physics, Kyoto Univ., Kyoto 606-01, Japan <sup>2</sup>Department of Physics, Konan Univ., Kobe 658, Japan <sup>3</sup>Flerov Laboratory of Nuclear Reactions, JINR, Dubna 141980, Russia Dynamical reaction theories are reviewed for the synthesis of superheavy elements. Characteristic features of formation and surviving are discussed with reference to possible incident channels. Theoretical predictions are presented on favorable incident channels and on optimum energies for the synthesis of Z = 114. 1. Introduction Superheavy elements around Z = 114 (or 126) and N = 184 have been believed to exist according to theoretical predictions of stability given by the shell correction energy in addition to the average nuclear binding energy<sup>1)</sup>. This means that heavy atomic nuclei with fissility parameter $`x\simeq 1`$ could be stabilized against fission by a huge barrier which results from the additional binding of the shell correction energy around the spherical shape. In other words, if superheavy compound nuclei (C.N.) are formed in such high excitation that the closed shell structure is mostly destroyed, they have no barrier against fission and thus are inferred to decay very quickly, though time scales of fission are now believed to be much longer than those of the Bohr-Wheeler formula, due to a strong friction for the collective motion<sup>2)</sup>. Therefore, the point is how to reach the ground state of the superheavy nuclei, or how to make a soft landing onto them. In order to minimize fission decays of C.N., or to maximize their survival probabilities, so-called cold fusion reactions have been used, which succeeded in synthesizing SHEs up to Z = 112<sup>3)</sup>. They have the merit of large survival probabilities, but suffer from the demerit of small formation probabilities because of the sub-barrier fusion. On the other hand, so-called hot (warm) fusion reactions have the merit of expected large formation probabilities and the demerit of small survival probabilities due to the relatively high excitation of the C.N. formed. Anyway, an optimum condition for large residue cross sections of SHEs is a balance or a compromise between formation and survival probabilities as a function of incident energy or excitation energy of the C.N. formed, over possible combinations of projectiles and targets<sup>4)</sup>. 2. Two Reaction Processes: Formation and Surviving (Decay) They are not always independent, especially in so-called massive systems, but for simplicity we briefly discuss them separately. Formation of C.N. is by the fusion reaction. Fig. 1 reminds us of its characteristic features, depending on the system. In lighter systems, i.e., those with $`Z_1Z_2\stackrel{<}{}1800`$, they undergo fusion if they have enough energy to overcome the Coulomb barrier (say, the Bass barrier<sup>5)</sup>), while in heavier systems, they have to overcome the so-called conditional saddle to get fused even after overcoming the Coulomb barrier.
Since the systems are under the action of strong nuclear interactions, their incident kinetic energies are quickly transformed into internal motions, and thereby much more energy than the difference between the barrier and the saddle point is required for formation of C.N., which corresponds to the extra-push or extra-extra-push energy<sup>6)</sup>. One more point to notice is that the potential energy surface for a SHE has almost no pocket, as schematically shown in Fig. 1, if the C.N. formed are in rather high excitation. This would be the reason why a simple practical formula is not available for the SHE formation probability. A dynamical framework had been called for until the recent works appeared<sup>4)</sup>. It is also worth mentioning that Fig. 1 is just a one-dimensional schematization. Real processes are in many dimensions, including the mass asymmetry degree of freedom etc. in addition to the elongation or the separation between the two fragments. An important case that we will discuss below is that where the incident channel has $`Z_1Z_2\stackrel{>}{}1800`$ and the compound nucleus has Z = 114. The potential energy surface for the compound nucleus has almost no minimum, like that shown in Fig. 1, due to excitation, while the Bass barrier is high and lies quite far inside, close to the point where the energy surface becomes flat.
Figure 1 Figure 2
We have calculated formation probabilities in the following way.<sup>7)</sup> If the incident energy is below the barrier, we take into account the barrier penetration factor using the WKB approximation. The potential (barrier) is calculated with the Coulomb and the nuclear proximity potentials<sup>8)</sup> between the incident ions, where effects of deformations etc. are not taken into account in order to simply see a general trend. After the incident ions reach the contact point, evolutions of the shapes of the total system are under the dissipation-fluctuation dynamics, as mentioned above. We have employed a multi-dimensional Langevin equation to describe trajectories in the three-dimensional space where the distance (or elongation) degree of freedom is taken into account as well as the mass asymmetry and the fragment deformation. Some trajectories go to the spherical shape of the compound nucleus and its neighborhood, while others go to reseparation after random walks in the space. Examples of calculated formation probabilities are shown in Fig. 2 for Z = 114 C.N. with the possible incident channels at the incident energies corresponding to their Bass barriers. We can readily see that the larger the mass asymmetry ($`\alpha `$) is, the larger the formation probability (P<sub>for</sub>) is. This is just consistent with the feature of the $`Z_1Z_2`$ dependence of fusion reactions mentioned above. The small P<sub>for</sub>’s in the less mass-asymmetric cases correspond qualitatively to the “heavier systems” in Fig. 1, i.e., are due to the strong friction for the collective motions. What is noticeable here is the great increase of several orders of magnitude as a function of $`\alpha `$. This indicates that mass asymmetric incident channels do not suffer much from the dissipation and are extremely favorable for the formation of C.N., but on the other hand, as shown in Fig. 3, C.N. formed with mass asymmetric channels have higher excitation energies than those with smaller asymmetries, due to Q-values, which means that asymmetric channels are unfavorable for surviving.
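The Bass barriers referred to above can be estimated crudely with a touching-point Coulomb formula; the sketch below, with r0 = 1.4 fm, is a rough proxy assumption, not the proximity potential used in the actual calculation.

```python
# Crude touching-point estimate of the Coulomb barrier for two of the
# Z = 114 entrance channels, to set the scale of the Bass barriers
# referred to above. The formula and r0 = 1.4 fm are a rough proxy
# assumption, not the proximity potential used in the actual calculation.
def coulomb_barrier(Z1, A1, Z2, A2, r0=1.4):
    """Barrier height [MeV]; e^2 = 1.44 MeV fm in these units."""
    R = r0 * (A1 ** (1.0 / 3.0) + A2 ** (1.0 / 3.0))  # touching distance [fm]
    return 1.44 * Z1 * Z2 / R

print(f"48Ca + 244Pu : {coulomb_barrier(20, 48, 94, 244):.0f} MeV")
print(f"44Ar + 250Cm : {coulomb_barrier(18, 44, 96, 250):.0f} MeV")
```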
In order to know more precisely the excitation-energy dependence of the survival probability (P<sub>sur</sub>) of SHEs, we have to take into account the effects of cooling speeds, which are essential for SHEs, because superheavy C.N. can be stabilized only by the restoration of the shell correction energy, which is determined by the cooling, i.e., mainly by neutron evaporation. For particle evaporations we have used the statistical theory as usual. One more crucial factor in determining P<sub>sur</sub> is the time scale of fission.
Figure 3 Figure 4
Since we know fission of excited nuclei is a dynamical process under strong friction, we have employed a one-dimensional Smoluchowski equation for describing the evolution of the fissioning degree of freedom, which is known to be accurate enough for the present purpose.<sup>2)</sup> Results of P<sub>sur</sub> for Z = 114 are shown in Fig. 4 as a function of excitation energy (E) for several mass numbers A. It is surprising i) that the P<sub>sur</sub>’s decrease very quickly as E increases and ii) that the mass number dependence of the decrease is enormous. This means that C.N. with large mass numbers are favorable for surviving. This is essentially due to quick cooling in neutron-rich C.N., where the neutron separation energies B<sub>n</sub> are small. Thus, unfavorably large E’s could be somehow compensated by the quick cooling if the C.N. have small B<sub>n</sub>, of course with the aid of the rather long time scales of fission. On the other hand, if we initially form neutron-deficient isotopes, cooling speeds are slow and thereby their survival probabilities drop very rapidly as E increases. In such cases, we have to form C.N. in as low excitation as possible in order to obtain large residue cross sections, which is qualitatively consistent with the GSI experiments.<sup>3)</sup> 3. Examples of the Calculated Cross Sections We have calculated excitation functions of evaporation residue cross sections by combining the two reaction processes, formation and surviving. Results for possible incident channels to form Z = 114 isotopes are shown in Fig. 5 as a function of E. The increases on the left-hand side toward the peaks are due to the formation probabilities, i.e., the barrier penetration and the dynamical evolution toward fusion, while the decreases on the right-hand side are due to the E dependence of the survival probabilities. The arrows with the numbers show the positions of the Bass barriers in the respective channels. The incident channels <sup>250</sup>Cm + <sup>44</sup>Ar and <sup>244</sup>Pu + <sup>48</sup>Ca are predicted to have cross sections of more than 1 pb, which is thought to be the limit of measurements. The importance of larger neutron numbers is readily understood. It is extremely interesting that the Dubna group has recently observed an event which could be related to a synthesis of Z = 114 with the latter channel<sup>9)</sup>.
Figure 5
4. Remarks It would be worth mentioning again i) that the important point is a balance between formation and surviving and ii) that the neutron separation energies B<sub>n</sub>, which determine cooling speeds, are other important quantities in addition to the magnitude of the shell correction energy. The second point encourages us to explore exotic targets and projectiles with more neutron excess. For a more precise quantitative prediction of residue cross sections, the one-dimensional WKB approximation for the penetration factor should be improved so as to accommodate effects of the deformations of the incident ions etc.
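The statement that friction lengthens fission timescales can be illustrated with the standard Kramers reduction factor for the Bohr-Wheeler rate; the numbers below are illustrative, not the parameters of the Smoluchowski calculation used here.

```python
# Why strong friction lengthens fission timescales: the Kramers factor
# that reduces the Bohr-Wheeler rate in the overdamped regime,
#   Gamma_K/Gamma_BW = sqrt(1 + (beta/2w)^2) - beta/2w,
# with beta the reduced friction and w the barrier curvature frequency.
# The numerical values are illustrative, not the paper's parameters.
import math

def kramers_factor(beta, omega):
    x = beta / (2.0 * omega)
    return math.sqrt(1.0 + x * x) - x

omega = 1.0e21  # barrier frequency [1/s], a typical nuclear scale
for beta in [0.0, 2.0e21, 1.0e22]:
    print(f"beta = {beta:.0e} 1/s : Gamma_K/Gamma_BW = "
          f"{kramers_factor(beta, omega):.2f}")
# Large beta/omega strongly suppresses the rate, i.e. fission is much
# slower than the Bohr-Wheeler estimate, as stated above.
```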
The last remark concerns more mass-symmetric incident channels, which are not shown here. They generally suffer more from the effects of dissipation, which disfavors fusion, but on the other hand, if neutron-rich C.N. could be formed, there is again a hope of obtaining rather large residue cross sections.<sup>4)</sup>
References:
[1] P. Möller et al., At. Data Nucl. Data Tables 59 (1995) 185. S. Cwiok et al., Nucl. Phys. A611 (1996) 211.
[2] T. Wada, Y. Abe and N. Carjan, Phys. Rev. Lett. 70 (1993) 3538. Y. Abe et al., Phys. Rep. 275 (1996) 49.
[3] S. Hofmann et al., Z. Phys. A354 (1996) 229.
[4] Y. Abe et al., J. Phys. G23 (1997) 1275. Y. Aritomo et al., Phys. Rev. C55 (1997) R1011, and ibid. C59 (1999) 796.
[5] R. Bass, Nuclear Reactions with Heavy Ions (Springer, 1980).
[6] W. J. Swiatecki, Nucl. Phys. A376 (1982) 275. S. Bjornholm and W. J. Swiatecki, Nucl. Phys. A391 (1982) 471.
[7] T. Wada et al., Proc. DANF98, Slovakia, Oct. 1998. K. Okazaki et al., publication under preparation.
[8] J. Blocki et al., Ann. Phys. (N.Y.) 105 (1977) 427.
[9] Yu. Oganessian et al., preprint JINR, E7-99-53.
# Oscillatory Tunnel Splittings in Spin Systems: A Discrete Wentzel-Kramers-Brillouin Approach

## Abstract

Certain spin Hamiltonians, whose tunnel splittings have been viewed in terms of interfering instanton trajectories, are restudied using a discrete WKB method, which is more elementary and also yields wavefunctions and preexponential factors for the splittings. A novel turning point inside the classically forbidden region is analysed, and a general formula is obtained for the splittings. The result is applied to the Fe<sub>8</sub> system. A previous result for the oscillation of the ground state splitting with external magnetic field is extended to higher levels.

The magnetic properties of the molecular cluster \[(tacn)<sub>6</sub>Fe<sub>8</sub>O<sub>2</sub>(OH)<sub>12</sub>\]<sup>8+</sup> (or just Fe<sub>8</sub> for short) are governed by a Hamiltonian

$$\mathcal{H}=-k_2J_z^2+(k_1-k_2)J_x^2-g\mu _B𝐉\cdot 𝐇,$$ (1)

where $`𝐉`$ is a dimensionless spin operator, $`𝐇`$ is an externally applied magnetic field, $`J=10`$, $`k_1\approx 0.33`$ K, and $`k_2\approx 0.22`$ K. The zero-field Hamiltonian has biaxial symmetry with easy, medium, and hard axes along $`z`$, $`y`$, and $`x`$ respectively. Very recently, Wernsdorfer and Sessoli have seen a new effect in this system, viz., an oscillation in the Landau-Zener transition rate between Zeeman levels, as a function of the applied field along $`\widehat{𝐱}`$. These oscillations reflect an oscillation in the underlying tunneling matrix element between the levels in question and are, in this author's view, the only unambiguous evidence to date for quantum tunneling of a spin of such a large size in a solid state system. This effect is not seen, e.g., in the closely related Mn<sub>12</sub> cluster. Oscillations as a function of $`H_x`$ in the ground state tunnel splitting $`\mathrm{\Delta }`$ of the Hamiltonian (1) were in fact predicted earlier, on the basis of an instanton calculation. For $`𝐇\parallel \widehat{𝐱}`$, there are two instantons, with complex actions differing by a Berry phase that sweeps through odd multiples of $`\pi `$ as $`H_x`$ is varied, leading to a complete quenching of tunneling. In this view the effect arises from destructive interference between spin trajectories. A different perspective was provided in earlier work by noting that $`\mathcal{H}`$ is invariant under a $`180^{\circ }`$ rotation about $`\widehat{𝐱}`$ when $`H_z=0`$. The oscillation is then due to a symmetry-allowed crossing of levels with different parity under the rotation.

Although the oscillations with $`H_x`$ are easily seen by direct numerical diagonalization of $`\mathcal{H}`$, it is of interest to understand these features analytically. Since the spin $`J`$ is large, it is natural to use the semiclassical, or $`J\to \mathrm{}`$, approximation. To some extent, the instanton method already does this. In this paper, we will try to make further progress using a discrete WKB method, which has several advantages. First, it is very difficult to find the next-to-leading terms in the $`J\to \mathrm{}`$ asymptotic expressions for various physical quantities (such as $`\mathrm{\Delta }`$) using instantons. Second, Wernsdorfer and Sessoli also see an oscillatory rate in the presence of a dc field along $`\widehat{𝐳}`$ that is such as to align the ground level in one well with an excited level in the other. The splitting is now never perfectly quenched, as the symmetry of $`\mathcal{H}`$ is destroyed. We do not (although others may) know how to solve this problem with instantons.
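Since the oscillations are, as noted above, easily seen by direct numerical diagonalization, here is a minimal numerical sketch of that check. The values of $`k_1`$ and $`k_2`$ are those quoted above; the Zeeman conversion $`g\mu _B/k_B\approx 1.34`$ K/T (i.e. $`g=2`$) is an illustrative assumption, not a value taken from this paper.

```python
import numpy as np

# Minimal sketch (not from the paper): diagonalize Eq. (1) for J = 10 and
# watch the ground-state tunnel splitting oscillate with Hx.  k1, k2 are the
# values quoted above (in kelvin); the Zeeman scale g*mu_B/k_B ~ 1.34 K/T
# (i.e. g = 2) is an illustrative assumption.
J, k1, k2, gmuB = 10, 0.33, 0.22, 1.34

m = np.arange(-J, J + 1).astype(float)
Jz = np.diag(m)
Jp = np.diag(np.sqrt(J*(J + 1) - m[:-1]*(m[:-1] + 1)), k=-1)  # raising operator
Jx = 0.5*(Jp + Jp.T)

def splitting(Hx):
    H = -k2*Jz@Jz + (k1 - k2)*Jx@Jx - gmuB*Hx*Jx
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

for Hx in np.linspace(0.0, 3.0, 61):
    print(f"Hx = {Hx:4.2f} T   Delta = {splitting(Hx):.3e} K")
# Delta dips to near-zero at a sequence of roughly equally spaced fields,
# the oscillation analysed later via Eqs. (19) and (22).
```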
Third, with an eye to the future, the method provides wavefunctions in addition to energies, which may be used to calculate matrix elements of various perturbations to Eq. (1) that are present in the actual physical system, and thus to study their influence. Specifically, we will derive a general result \[see Eq. (19)\] for the tunnel splitting between degenerate pairs of levels in a symmetric problem when oscillations are present. Our result is expressed in terms of two action integrals. In the course of doing this, we will encounter a novel feature that does not arise in previous discrete WKB studies, namely, a turning point in the classically forbidden region! We will apply our result to the Hamiltonian (1) for $`H_z=0`$, focussing in detail on quenching fields for ground and excited state splittings. The results for the latter are new. The study of the imperfect quenching of tunneling that occurs when $`H_z\ne 0`$, because the potential is then asymmetric, is much more involved and will be published separately.

Let us first briefly review the discrete WKB formalism. The starting point is to write Schrödinger's equation in the $`J_z`$ basis. Let $`\mathcal{H}|\psi \rangle =E|\psi \rangle `$, $`J_z|m\rangle =m|m\rangle `$, $`\langle m|\psi \rangle =C_m`$, $`\langle m|\mathcal{H}|m\rangle =w_m`$, and $`\langle m|\mathcal{H}|m^{}\rangle =t_{m,m^{}}`$ ($`m\ne m^{}`$). Then we have

$$\underset{n\ne m}{\sum }t_{m,n}C_n+w_mC_m=EC_m.$$ (2)

We assume that the matrix $`t_{m,n}`$ is real and symmetric, $`t_{m,n}=t_{n,m}`$. In the present problem, we need matrix elements that are off-diagonal by 1 ($`t_{m,m\pm 1}`$) and by 2 ($`t_{m,m\pm 2}`$). This makes Eq. (2) a recursion relation involving five terms, as opposed to three terms in previous work. The physical idea is to view Eq. (2) as a tight-binding model for an electron hopping on a one-dimensional lattice, and to use the approximation of semiclassical electron dynamics. This would be exact if the matrix elements of $`\mathcal{H}`$ were constant with $`m`$, and it is systematically justifiable if they are slowly varying with $`m`$. Formally, the latter means that we can find functions $`w(m)`$, $`t_1(m)`$, and $`t_2(m)`$ of a continuous variable $`m`$, such that on the discrete eigenset of $`J_z`$,

$`w(m)`$ $`=`$ $`w_m,`$ (3)
$`t_\alpha (m)`$ $`=`$ $`(t_{m,m+\alpha }+t_{m,m-\alpha })/2,\alpha =1,2,`$ (4)

and further, that if $`m/J`$ is regarded as a quantity of order $`J^0`$, then $`\dot{w}(m)\equiv dw/dm=O(w(m)/J)`$, with similar restrictions on $`\dot{t}_1(m)`$ and $`\dot{t}_2(m)`$. For Eq. (1), these conditions are met if $`J\gg 1`$. If $`w_m`$, $`t_{m,m\pm 1}`$, and $`t_{m,m\pm 2}`$ were constant, the eigenstates of $`\mathcal{H}`$ would be states with $`C_m=e^{iqm}`$, and $`E=w+2t_1\mathrm{cos}q+2t_2\mathrm{cos}(2q)`$. Now we seek a solution in the form $`C_m=e^{i\mathrm{\Phi }(m)}`$ with $`\mathrm{\Phi }=\mathrm{\Phi }_0+\mathrm{\Phi }_1+\mathrm{\Phi }_2+\mathrm{}`$, where $`\mathrm{\Phi }_n=O(J^{1-n})`$, and $`\dot{\mathrm{\Phi }}_n=O(\mathrm{\Phi }_n/J)`$.
Then, one can show that up to terms of order $`J^0`$ in $`\mathrm{\Phi }`$, the solution is given by linear combinations of the form

$$C_m\sim \frac{1}{\sqrt{v(m)}}\mathrm{exp}\left(i\int ^mq(m^{})dm^{}\right),$$ (5)

where $`q(m)`$ is a local wavevector that obeys the eikonal or Hamilton-Jacobi equation,

$$E=w(m)+2t_1(m)\mathrm{cos}q+2t_2(m)\mathrm{cos}(2q)\equiv \mathcal{H}_{\mathrm{sc}}(q,m),$$ (6)

and $`v(m)`$ is the associated semiclassical electron velocity, which obeys the transport equation

$$v(m)=\partial \mathcal{H}_{\mathrm{sc}}/\partial q=-2\mathrm{sin}q(m)\left(t_1(m)+4t_2(m)\mathrm{cos}q(m)\right).$$ (7)

To talk of tunneling, we must first understand the classically allowed and forbidden regions in the $`m`$ space. As a function of $`q`$ for fixed $`m`$, the semiclassical Hamiltonian $`\mathcal{H}_{\mathrm{sc}}(q,m)`$ can be viewed as a band energy curve, and its minimum and maximum values define local band-edge functions $`U_\pm (m)`$. The classically accessible region for any energy $`E`$ is thus defined by $`U_{-}(m)\le E\le U_+(m)`$. \[The first consequence of having five terms in the recursion relation shows up here. In the three-term case, the band edges always occur at $`q=0`$ or $`\pi `$. Now, they can occur at values other than these if $`|t_1(m)/4t_2(m)|<1`$. These functions are sketched in Fig. 1 for Eq. (1) with $`H_z=0`$. The minimum, $`U_{-}`$, is attained at $`q=0`$ for $`|m|\ge m^{}`$, and at $`q\ne 0`$ for $`|m|<m^{}`$. The curve $`U_{-}(m)`$ is smooth at $`m=\pm m^{}`$, and the formula for $`m^{}`$ is unimportant.\] Thus, for the energies $`E_a`$ and $`E_b`$ drawn in Fig. 1, the central region is classically forbidden and allowed, respectively. We will focus on states of the first type in what follows.

The next step is to derive a generalization of Herring's formula for the tunnel splitting $`\mathrm{\Delta }`$ for a pair of levels whose mean energy is $`E`$. Proceeding in exact analogy with earlier work, we consider a solution $`C_m`$ to Eq. (2) with energy $`E`$ that is (a) localized in the left well of $`U_{-}(m)`$ and decays away from that well everywhere, including the region near $`m=0`$, and (b) normalized to unit total probability. The behavior of this solution near the right well need not be specified or examined too closely. Up to an irrelevant overall sign, we find

$$\mathrm{\Delta }=\{\begin{array}{cc}2\left[t_{01}C_0(C_1-C_{-1})+t_{02}C_0(C_2-C_{-2})+t_{1,-1}(C_1^2-C_{-1}^2)\right],\hfill & \text{integer }J\text{,}\hfill \\ 2t_{\frac{1}{2},-\frac{1}{2}}\left(C_{\frac{1}{2}}^2-C_{-\frac{1}{2}}^2\right)+4t_{\frac{3}{2},-\frac{1}{2}}\left(C_{\frac{1}{2}}C_{\frac{3}{2}}-C_{-\frac{1}{2}}C_{-\frac{3}{2}}\right),\hfill & \text{half-integer }J\text{.}\hfill \end{array}$$ (8)

To apply Eq. (8), we must find $`C_m`$ in the central region. In principle the procedure is straightforward, and follows conventional WKB. We first find $`C_m`$ in the allowed region, near $`-m_0`$, and then use connection formulas to extend it into the forbidden region. For a three-term recursion relation this has been done before. In the present case, we encounter a new difficulty. To see this, we consider points at which $`v(m)`$ vanishes. At all such points, which may be called turning points, the solution (5) diverges, indicating a breakdown of the WKB approximation. Let us now consider a point strictly inside the classically allowed region in the $`E`$-$`m`$ plane. At such a point $`q`$ is not an extremum of $`\mathcal{H}_{\mathrm{sc}}`$ for fixed $`m`$, i.e., $`v(m)\ne 0`$.
It is a simple corollary that the points $`E=U_\pm (m)`$ are turning points, corresponding to $`q=0`$, $`\pi `$, or $`\mathrm{cos}^{-1}(-t_1/4t_2)`$. These turning points are of the same physical character as those in conventional WKB, and the $`q=0`$ or $`\pi `$ ones are the only ones that arise with a three-term recursion relation. For our five-term recursion, however, $`v(m)`$ can also vanish if $`\mathrm{cos}q=-t_1/4t_2`$, even though $`E\ne U_\pm (m)`$. To see how this can happen, we solve Eq. (6) to get

$$\mathrm{cos}q(m)=\frac{-t_1(m)\pm [t_1^2(m)-4t_2(m)f(m)]^{1/2}}{4t_2(m)},$$ (9)

where $`f(m)=w(m)-2t_2(m)-E`$. Thus, such a turning point may arise when the discriminant of the quadratic equation for $`\mathrm{cos}q(m)`$ vanishes. Since, by exclusion, such points must necessarily lie in a classically forbidden region, where $`q(m)`$ is not real, it follows that they can only arise in problems where $`|t_1(m)/4t_2(m)|>1`$ for some $`m`$. This fact and Eq. (9) then imply that at such a point $`\mathrm{cos}q`$ changes from real to complex, i.e., $`q`$ changes from pure imaginary to complex, and the wavefunction accordingly changes from an exponential decay with one sign to a decay with an oscillating sign.

Since WKB breaks down at the forbidden-region turning points, we need connection formulas at these points just as for ordinary ones. We will publish the derivation of these formulas elsewhere, and here we only give the result. Let the discriminant in Eq. (9) vanish at $`m=m_c`$, and let $`\mathrm{cos}q`$ be real for $`m<m_c`$, and complex for $`m>m_c`$. It is convenient to define $`q(m)=i\kappa (m)`$ with $`\kappa >0`$ in the region $`m<m_c`$, and to write $`s(m)=-iv(m)`$ everywhere. (This definition renders $`s(m)>0`$ for $`m>m_c`$.) We consider the decaying WKB solution in the region $`m<m_c`$:

$$C_m=\frac{A}{2\sqrt{s(m)}}\mathrm{exp}\left(\int _{m_c}^{m}\kappa (m^{})dm^{}\right),m<m_c,$$ (10)

where $`A`$ is chosen to be real. For $`m>m_c`$, we must consider linear combinations of the type (5), with two choices for $`q(m)`$ which we write as

$$q_{1,2}(m)=i\kappa (m)\pm \chi (m).$$ (11)

For the solution to continue decaying, we must still have $`\kappa >0`$, and we also choose $`\chi >0`$. Then, both $`\kappa (m)`$ and $`\chi (m)`$ have a kink at $`m=m_c`$. We further define $`s_{1,2}(m)=-iv(q_{1,2}(m))`$ via Eq. (7), so that $`s_2=s_1^{}`$. The WKB solution which connects to (10) is then given by

$$C_m=\mathrm{Re}\frac{A}{\sqrt{s_1(m)}}\mathrm{exp}\left(i\int _{m_c}^{m}q_1(m^{})dm^{}\right),m>m_c.$$ (12)

Note that this is explicitly real, as the reality of Eq. (2) requires. Also, Eqs. (10) and (12) only hold for $`|m_c-m|\gg J^{1/3}`$. The connection formula for the growing solution is similar, but is not needed for our present purpose.

The result (8) is exact, but does not reveal the physically important barrier penetration factor. To remedy this, we substitute Eq. (12) in Eq. (8). We consider a situation as in Fig. 1, with minima in $`U_{-}(m)`$ at $`\pm m_0`$, and forbidden-region turning points at $`\pm m_1`$. The key, clearly, is to simplify Eq. (12) in the region $`|m|<m_1`$. To this end, we substitute Eq. (11) for $`q`$ in Eq. (9) and separate the real and imaginary parts.
This yields

$`\mathrm{cosh}\kappa \mathrm{cos}\chi `$ $`=`$ $`-t_1/4t_2,`$ (13)
$`\mathrm{sinh}\kappa \mathrm{sin}\chi `$ $`=`$ $`(4t_2f-t_1^2)^{1/2}/4t_2.`$ (14)

Using these results, it follows that

$$s_1=8t_2(m)\mathrm{sinh}\kappa (m)\mathrm{sin}\chi (m)\mathrm{sin}q_1(m).$$ (15)

We now specialize to the case of integer $`J`$; the other case is similarly analysed, and yields the same result, Eq. (18) below. For Eq. (8), we only need $`C_m`$ for $`m=0`$, $`\pm 1`$, and $`\pm 2`$. The variations in $`\kappa `$, $`\chi `$, $`t_1`$, $`t_2`$, and $`q_1`$ between these points may be ignored, as they are of order $`J^{-2}`$. Hence, to sufficient accuracy one may write (for $`|m|<2`$)

$$C_m=\mathrm{Re}A_2\frac{e^{i(\mathrm{\Omega }+mq_{10})}}{\sqrt{\mathrm{sin}q_{10}}},$$ (16)

where $`\mathrm{\Omega }=\int _{-m_1}^{0}q_1(m^{})dm^{}`$, and $`A_2=(8t_{20}\mathrm{sinh}\kappa _0\mathrm{sin}\chi _0)^{-1/2}A`$. The suffix 0 denotes quantities evaluated at $`m=0`$; thus $`q_{10}=q_1(0)`$, $`\kappa _0=\kappa (0)`$, etc. To the same accuracy as Eq. (16) one may write $`t_{01}=t_1(0)`$, and $`t_{02}=t_{1,-1}=t_2(0)`$ in Eq. (8). If we use Eq. (13), and write everything in terms of $`t_{20}`$, $`\kappa _0`$, and $`\chi _0`$, then a certain amount of algebra leads to

$`\mathrm{\Delta }`$ $`=`$ $`{\displaystyle \frac{1}{2}}A^2(e^{2i\mathrm{\Omega }}+e^{-2i\mathrm{\Omega }^{}})`$ (17)
$`=`$ $`A^2\mathrm{exp}\left(-{\displaystyle \int _{-m_1}^{m_1}}\kappa (m^{})dm^{}\right)\mathrm{cos}\left({\displaystyle \int _{-m_1}^{m_1}}\chi (m^{})dm^{}\right).`$ (18)

The cosine factor clearly shows the possibility of oscillations. The next step is to match the WKB wavefunction (10) in the ordinary decaying region to the wavefunction in the allowed region. It is plain that $`A`$ will contain an additional barrier penetration factor $`\mathrm{exp}(-\int _{m_1}^{m_t}\kappa (m)dm)`$, where $`\pm m_t`$ are the ordinary turning points. Omitting the details of the calculation, which are very much like those in conventional WKB, we find that for the $`n`$th pair of levels, provided $`n\ll J`$,

$$\mathrm{\Delta }=\frac{2\omega _0}{\pi }g_n\mathrm{exp}\left(-\int _{-m_t}^{m_t}\kappa (m^{})dm^{}\right)\mathrm{cos}\left(\int _{-m_1}^{m_1}\chi (m^{})dm^{}\right),$$ (19)

where $`\omega _0`$ is the small oscillation frequency in the wells near $`\pm m_0`$, and $`g_n=\sqrt{2\pi }\,\overline{n}^{\overline{n}}e^{-\overline{n}}`$ (with $`\overline{n}=n+\frac{1}{2}`$). It need hardly be said that $`m_t`$ and $`m_1`$ depend on the energy and hence on $`n`$. Equation (19) is a general formula for the splitting in the presence of interference effects. As opposed to an "exponentially accurate" calculation, which gives an asymptotically correct result for $`\mathrm{ln}\mathrm{\Delta }`$ as $`J\to \mathrm{}`$, it is correct for $`\mathrm{\Delta }`$ itself.

We now apply Eq. (19) to the Hamiltonian Eq. (1). The problem is now merely one of quadrature, so we will focus only on the cosine factor. (The full expression for the non-oscillatory part of $`\mathrm{\Delta }`$, including the exact prefactor, is exceedingly lengthy and unilluminating. A partial result for the WKB exponent, or Gamow factor, may be found elsewhere.) In doing the quadratures, the first step is to find $`w(m)`$, $`t_\alpha (m)`$, etc. Here, any function that reproduces the first two terms in a series in $`1/J`$ is adequate, since Eq. (5) represents only the two leading terms in $`\mathrm{\Phi }(m)`$.
We define $`\overline{J}=J+\frac{1}{2}`$, $`\mu =m/\overline{J}`$, $`H_c=2k_1J/g\mu _B`$, $`h_x=JH_x/\overline{J}H_c`$, $`\lambda =k_2/k_1`$, and measure all energies in units of $`k_1\overline{J}^2`$. Then, $`t_1=-h_x(1-\mu ^2)^{1/2}`$, $`4t_2=(1-\lambda )(1-\mu ^2)`$, $`2w=(1+\lambda )(1-\mu ^2)`$, and $`\omega _0=2[\lambda (1-h_x^2)]^{1/2}/\overline{J}`$. Further, let us denote the argument of the cosine in Eq. (19) by $`\mathrm{\Lambda }`$. The turning point $`\mu _1(E)`$ is given by

$$\mu _1^2(E)=1-\frac{h_x^2}{1-\lambda }-\frac{E}{\lambda }.$$ (20)

Secondly, from Eq. (14), we see that $`\chi \sim (\mu _1(E)-\mu )^{1/2}`$, so that to relative order $`1/J`$ we may write

$$\mathrm{\Lambda }=2\overline{J}\int _0^{\mu _{10}}\left(\chi (\mu ,E=0)+E\frac{\partial \chi }{\partial E}\Big|_{E=0}\right)d\mu ,$$ (21)

where $`\mu _{10}=\mu _1(0)`$, and $`E=(n+\frac{1}{2})\omega _0`$ for the $`n`$th pair of levels. At $`E=0`$, we have $`\mathrm{cos}^2\chi =(1-\mu _{10}^2)/(1-\mu ^2)`$ and $`\partial \chi /\partial E=\mathrm{cot}\chi /2(1-h_x^2-\mu ^2)`$. Doing the integrals, one obtains

$$\mathrm{\Lambda }=\pi J\left(1-\frac{H_x}{\sqrt{1-\lambda }H_c}\right)-n\pi .$$ (22)

The result for $`n=0`$ is the same as found earlier, while for $`n\ne 0`$ it is new. To order $`1/J`$, the vanishing points for higher pairs are the same as those for the lowest one. It must be remembered, however, that since we demanded $`\chi >0`$, one must have $`\mathrm{\Lambda }>0`$. (Finding $`\mathrm{\Lambda }<0`$ means that the oscillatory forbidden region has disappeared.) Thus the highest-field level crossing is successively eliminated as $`n`$ increases, and (including zero and negative values) there are $`2(J-n)`$ fields in all where $`\mathrm{\Delta }`$ vanishes.

###### Acknowledgements.

This work is supported by the NSF via grant number DMR-9616749.
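As a quick numerical illustration (not part of the original text), the fields at which the cosine factor in Eq. (19) vanishes can be listed directly from Eq. (22) and compared with the splitting minima produced by the diagonalization sketch given earlier; the numerical value of $`H_c`$ again assumes $`g=2`$.

```python
import numpy as np

# Fields where cos(Lambda) = 0 according to Eq. (22):
# Hx = sqrt(1 - lambda) * Hc * (1 - (n + k + 1/2)/J), k = 0, 1, ...
J, lam = 10, 0.22/0.33
Hc = 4.93            # 2*k1*J/(g*muB) in tesla, assuming g = 2

for n in (0, 1, 2):
    fields = [np.sqrt(1 - lam)*Hc*(1 - (n + k + 0.5)/J) for k in range(J - n)]
    print(f"n = {n}:", ["%.2f" % h for h in fields if h > 0], "T")
```

The positive quenching fields are equally spaced with separation $`\sqrt{1-\lambda }H_c/J`$, and one crossing is lost for each increase of $`n`$, consistent with the $`2(J-n)`$ counting above.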
# About Some Distinguishing Features of the Weak Interaction

Kh. M. Beshtoev

Joint Institute for Nuclear Research, Joliot Curie 6, 141980 Dubna, Moscow region, Russia

Abstract

In this work it is shown that, in contrast to the strong and electromagnetic theories, additive conserved numbers (such as lepton, aromatic and other numbers) and the $`\gamma _5`$ anomaly do not appear in the standard weak interaction theory. It means that in this interaction the additive numbers cannot be conserved. These results are a consequence of the specific character of the weak interaction: the right-handed components of the spinors do not participate in this interaction. Schemes for the violation of the aromatic and lepton numbers are considered.

PACS: 12.15.-y

Keywords: weak interaction, additive numbers, $`\gamma _5`$-anomaly

1. Introduction

The strong and electromagnetic interaction theories are left-right symmetric theories (i.e., all components of the spinors participate in these interactions symmetrically). In contrast, only the left-handed components of the fermions participate in the weak interaction. This work is dedicated to some consequences deduced from this specific feature of the weak interaction.

2. Distinguishing Features of Weak Interactions

As is well known from the Noether theorem, conserved currents appear under global and local Abelian and non-Abelian gauge transformations. For local gauge transformations these are: the electromagnetic current $`j^\mu `$,

$$j^\mu =e\overline{\mathrm{\Psi }}\gamma ^\mu \mathrm{\Psi },$$ (1)

where $`e`$ is the electric charge; and the current of the strong interaction, $`j^{a\mu }`$,

$$j^{a\mu }=q\overline{\mathrm{\Psi }}T^a\gamma ^\mu \mathrm{\Psi },$$ (2)

where $`q`$ is the charge of the strong interaction, $`T^a`$ is an $`SU(3)`$ matrix, and $`a`$ is a color index. The current $`S_i^\mu `$ obtained from a global Abelian transformation is

$$S_i^\mu =i(\overline{\mathrm{\Psi }}_i\partial ^\mu \mathrm{\Psi }_i),$$ (3)

(where $`i`$ characterizes the type of the gauge transformation) and the corresponding conserved quantity (the fourth component of $`S_i^\mu `$) is

$$I_i=\int S_i^0d^3x=\int ϵ\overline{\mathrm{\Psi }}_i\mathrm{\Psi }_id^3x,$$ (4)

where $`ϵ`$ is the energy of the fermion $`\mathrm{\Psi }_i`$. The conserved values of global gauge transformations are then: the electric number $`Q`$,

$$Q=e\int ϵ\overline{\mathrm{\Psi }}\mathrm{\Psi }d^3x;$$ (5)

the baryon number $`B`$,

$$B=\int ϵ\overline{\mathrm{\Psi }}_B\mathrm{\Psi }_Bd^3x;$$ (6)

the lepton numbers $`l_i(i=e,\mu ,\tau )`$,

$$l_i=\int ϵ\overline{\mathrm{\Psi }}_{l_i}\mathrm{\Psi }_{l_i}d^3x;$$ (7)

the aromatic numbers, etc.

In the vector (electromagnetic and strong interaction) theories, all components of the spinors ($`\mathrm{\Psi }_L,\mathrm{\Psi }_R`$) participate in the interactions. In contrast to the strong and electromagnetic theories, the right-handed components of the spinors ($`\overline{\mathrm{\Psi }}_R,\mathrm{\Psi }_R`$) do not participate in the weak interaction, i.e., this interaction is not a chiral theory (in a chiral theory the left and right components of the fermions participate in the interaction in an independent manner). This character of the weak interaction leads to certain consequences: the impossibility of generating fermion masses, and the problem of joining this interaction to the strong and electromagnetic interactions. Let us consider other consequences of this specific feature of the weak interaction.
The local conserved current $`j^{\mu i}`$ of the weak interaction has the following form:

$$j^{\mu i}=\overline{\mathrm{\Psi }}_L\tau ^i\gamma ^\mu \mathrm{\Psi }_L,$$ (8)

where $`\overline{\mathrm{\Psi }}_L,\mathrm{\Psi }_L`$ are lepton or quark doublets

$$\left(\begin{array}{c}e\\ \nu _e\end{array}\right)_{iL}$$

$$\left(\begin{array}{c}q_1\\ q_2\end{array}\right)_{iL},i=1,2,3.$$

If we now take into account that the right-handed components of the fermions $`\overline{\mathrm{\Psi }}_{iR},\mathrm{\Psi }_{iR}`$ do not participate in the weak interaction, then from (4) for the Abelian currents we get

$$I_i=\int ϵ\overline{\mathrm{\Psi }}_{iL}\mathrm{\Psi }_{iL}d^3x\equiv 0,$$ (9)

i.e. (in contrast to the strong and electromagnetic interactions), no conserved additive numbers appear in the weak interaction. It is clear that the lepton and aromatic numbers appear outside the weak interaction, and it is obvious that the interaction where these numbers appear must be a left-right symmetric one. It is also clear that, since no conserved additive numbers appear in the weak interaction, the additive (aromatic, lepton, etc.) numbers can be violated in the weak interaction. Thus, the violation scheme of the aromatic numbers is, as is well known, the Cabibbo-Kobayashi-Maskawa matrix

$$V=\left(\begin{array}{ccc}V_{ud}& V_{us}& V_{ub}\\ V_{cd}& V_{cs}& V_{cb}\\ V_{td}& V_{ts}& V_{tb}\end{array}\right),$$ (10)

where $`u,d,s,c,b,t`$ are quarks. An analogous scheme can be used to describe the violation of the lepton numbers:

$$V=\left(\begin{array}{ccc}X_{ee}& X_{e\mu }& X_{e\tau }\\ X_{\mu e}& X_{\mu \mu }& X_{\mu \tau }\\ X_{\tau e}& X_{\tau \mu }& X_{\tau \tau }\end{array}\right),$$ (11)

where $`e,\mu ,\tau `$ are leptons. It is necessary to stress that, probably, in the weak interaction there is no conserved baryon number $`B`$, third projection of the weak isospin ($`I_3^w`$), etc. (see Eqs. (7), (9)), but this does not lead to any contradictions, since the local electric, strong and weak currents are conserved. All the above violations of these numbers in the weak interaction are direct violations.

Now we consider the problem: can the $`\gamma _5`$ anomaly appear in the weak interaction? For this purpose we first use the functional-integral measure method for the vector theory, considered by K. Fujikawa, and then apply this method to the weak interaction. Under the $`\gamma _5`$ transformations

$$\mathrm{\Psi }(x)\to \mathrm{exp}(i\alpha (x)\gamma _5)\mathrm{\Psi },$$

$$\overline{\mathrm{\Psi }}(x)\to \overline{\mathrm{\Psi }}\mathrm{exp}(i\alpha (x)\gamma _5),$$ (12)

we get the following increment to the Lagrangian $`\mathcal{L}`$:

$$\partial _\mu \alpha (x)\overline{\mathrm{\Psi }}\gamma ^\mu \gamma _5\mathrm{\Psi }-2mi\alpha (x)\overline{\mathrm{\Psi }}\gamma _5\mathrm{\Psi },$$ (13)

where

$$\mathcal{L}=\overline{\mathrm{\Psi }}(i\widehat{D}-m)\mathrm{\Psi }+\left(\frac{g^2}{2}\right)\mathrm{tr}F^{\mu \nu }F_{\mu \nu }$$

and $`\alpha (x)`$ is an infinitesimal parameter. The functional-integral measure is defined by the following equation:

$$d\mu \equiv \underset{x}{\prod }DA_\mu (x)D\overline{\mathrm{\Psi }}(x)D\mathrm{\Psi }(x).$$ (14)

Under the infinitesimal transformation this measure does not remain invariant, and we get (see Appendix)

$$d\mu ^{}=d\mu \,\mathrm{exp}\left[i\int \alpha (x)\left(\frac{1}{8\pi ^2}\right)\mathrm{tr}{}^{}F^{\mu \nu }F_{\mu \nu }dx\right],$$ (15)

where $`{}^{}F^{\mu \nu }=ϵ^{\sigma \rho \mu \nu }F_{\sigma \rho }`$.
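The anomaly density in Eq. (15) ultimately rests on the trace identity $`\mathrm{tr}[\gamma _5\gamma ^\mu \gamma ^\nu \gamma ^\rho \gamma ^\sigma ]=-4iϵ^{\mu \nu \rho \sigma }`$ (the overall sign depends on conventions). A minimal numerical check of this identity, assuming the Dirac representation of the $`\gamma `$ matrices and $`ϵ^{0123}=+1`$, is:

```python
import numpy as np

# Numerical check of tr[g5 g^mu g^nu g^rho g^sigma] = -4i eps^{mu nu rho sigma},
# assuming the Dirac representation (eps^{0123} = +1; the sign is convention
# dependent).  This is the identity behind the *F F anomaly term.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

g = [np.block([[I2, Z2], [Z2, -I2]])] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

def eps(*p):
    """Levi-Civita symbol via the sign of the permutation."""
    p = list(p)
    if len(set(p)) < 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(3):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                sign = -sign
    return sign

for idx in [(0, 1, 2, 3), (1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 2, 2)]:
    t = np.trace(g5 @ g[idx[0]] @ g[idx[1]] @ g[idx[2]] @ g[idx[3]])
    print(idx, "trace =", np.round(t, 12), "  -4i*eps =", -4j * eps(*idx))
```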
From the requirement of measure invariance under this infinitesimal transformation, we obtain

$$\partial _\mu (\overline{\mathrm{\Psi }}\gamma ^\mu \gamma _5\mathrm{\Psi })=2mi\overline{\mathrm{\Psi }}\gamma _5\mathrm{\Psi }-i\frac{1}{8\pi ^2}ϵ^{\sigma \rho \mu \nu }F_{\sigma \rho }F_{\mu \nu }.$$ (16)

The second term on the right-hand side of (16) is the $`\gamma _5`$ anomaly term. In the case of the weak interaction $`\mathrm{\Psi }_R=\overline{\mathrm{\Psi }}_R\equiv 0`$ and

$$\overline{\mathrm{\Psi }}\to \overline{\mathrm{\Psi }}_L,\mathrm{\Psi }\to \mathrm{\Psi }_L,$$ (17)

and then the change of the functional-integral measure vanishes (see Appendix),

$$d\mu ^{}\equiv d\mu ,$$ (18)

so the $`\gamma _5`$ anomaly term on the right-hand side of Eq. (16) is also zero. So, we see that in the weak interaction the $`\gamma _5`$ anomaly does not appear, and the analogue of Eq. (16) for the weak interaction has the following form:

$$\partial _\mu (\overline{\mathrm{\Psi }}_L\gamma ^\mu \gamma _5\mathrm{\Psi }_L)\equiv 0.$$ (19)

3. Conclusion

In this work it was shown that, in contrast to the strong and electromagnetic theories, additive conserved numbers (such as lepton, aromatic and other numbers) and the $`\gamma _5`$ anomaly do not appear in the standard weak interaction theory. It means that in this interaction the additive numbers cannot be conserved. These results are a consequence of the specific character of the weak interaction: the right-handed components of the spinors do not participate in this interaction. Schemes for the violation of the aromatic and lepton numbers were considered.

Appendix

Under the chiral transformation (12),

$$\mathrm{\Psi }(x)\to \mathrm{exp}(i\alpha (x)\gamma _5)\mathrm{\Psi },$$

the coefficients $`a_n`$ of the expansions

$$\begin{array}{c}\mathrm{\Psi }(x)=\underset{n}{\sum }a_n\varphi _n\\ \overline{\mathrm{\Psi }}(x)=\underset{n}{\sum }\varphi _n^+\overline{b}_n\end{array}$$ (a.1)

$$d\mu =\underset{x}{\prod }[DA_\mu (x)]\underset{m,n}{\prod }d\overline{b}_mda_n,$$

(where $`\widehat{D}\varphi _n(x)=\lambda _n\varphi _n`$, $`\int \varphi _n^+(x)\varphi _m(x)d^4x=\delta _{n,m}`$, and $`a_n,b_m^+`$ are elements of the Grassmann algebra) transform as

$$a_{n}^{}=\underset{m}{\sum }\int \varphi _n^+\mathrm{exp}(i\alpha (x)\gamma _5)\varphi _mdx\,a_m=\underset{m}{\sum }c_{nm}a_m.$$ (a.2)

Then

$$\underset{n}{\prod }da_{n}^{}=(detC_{k,l})^{-1}\underset{n}{\prod }da_n,$$ (a.3)

where

$$(detC_{k,l})^{-1}=det\left(\delta _{k,l}+i\int \alpha (x)\varphi _k^+(x)\gamma _5\varphi _l(x)dx\right)^{-1}=\mathrm{exp}\left(-i\int \alpha (x)\underset{k}{\sum }\varphi _k^+(x)\gamma _5\varphi _k(x)dx\right).$$

The summation in this exponent is an ill-defined quantity; evaluating it by introducing a cutoff $`M`$ ($`\lambda _k\le M`$) we have

$$\begin{array}{c}\underset{k}{\sum }\varphi _k^+(x)\gamma _5\varphi _k(x)=\underset{M\to \mathrm{}}{lim}\underset{k}{\sum }\varphi _k^+\gamma _5\mathrm{exp}\left[-\left(\frac{\lambda _k}{M}\right)^2\right]\varphi _k(x)=\\ =\underset{M\to \mathrm{},y\to x}{lim}\mathrm{tr}\gamma _5\mathrm{exp}\left[-\left(\frac{\widehat{D}}{M}\right)^2\right]\delta (x-y)=\\ =\underset{M\to \mathrm{},y\to x}{lim}\int \frac{d^4k}{(2\pi )^4}\mathrm{tr}\gamma _5\mathrm{exp}\left[-(D^\mu D_\mu +\frac{1}{4}[\gamma ^\mu ,\gamma ^\nu ]F_{\mu \nu })/M^2\right]e^{ik(x-y)}=\\ =\underset{M\to \mathrm{}}{lim}\frac{1}{16}\mathrm{tr}\gamma _5([\gamma ^\mu ,\gamma ^\nu ]F_{\mu \nu })^2\frac{1}{2M^4}\int \frac{d^4k}{(2\pi )^4}e^{-k^2/M^2}.\end{array}$$ (a.4)

After the integration one obtains

$$\underset{k}{\sum }\varphi _k^+(x)\gamma _5\varphi _k(x)=-\frac{1}{16\pi ^2}\mathrm{tr}{}^{}F^{\mu \nu }F_{\mu \nu }.$$ (a.5)

One obtains the same result for the $`\overline{b}_n`$ transformation, and as a result one gets Eq. (15), i.e.
$$d\mu ^{}=d\mu \,\mathrm{exp}\left(i\int \alpha (x)\frac{1}{8\pi ^2}\mathrm{tr}{}^{}F^{\mu \nu }F_{\mu \nu }dx\right).$$

It is clear that in the case of the weak interaction, since $`\varphi _{Rk}^+(x)=\varphi _{Rk}\equiv 0`$, we have

$$\underset{k}{\sum }\varphi _{Lk}^+(x)\gamma _5\varphi _{Lk}\equiv 0.$$ (a.6)

References

N.N. Bogolubov and D.V. Shirkov, Introduction to the Quantum Field Theory (Nauka, Moscow, 1986); G. Kane, Modern Elementary Particle Physics (Addison-Wesley, 1987).
Kh.M. Beshtoev, JINR Commun. 2-93-44, Dubna, 1993; JINR Commun. E2-93-167, Dubna, 1993; Chinese Journal of Phys. 34 (1996) 979.
Kh.M. Beshtoev, JINR Commun. E2-94-221, Dubna, 1994.
N. Cabibbo, Phys. Rev. Lett. 10 (1963) 531.
M. Kobayashi and K. Maskawa, Prog. Theor. Phys. 49 (1973) 652.
Kh.M. Beshtoev, JINR Commun. E2-94-293, Dubna, 1994; Turkish Journ. of Physics 20 (1996) 1245; JINR Commun. E2-95-535, Dubna, 1995; JINR Commun. P2-96-450, Dubna, 1996; JINR Commun. E2-97-210, Dubna, 1997.
K. Fujikawa, Phys. Rev. Lett. 42 (1979) 1195.
# Primordial hadrosynthesis in the Little Bang

## 1 Heavy-ion data and the nuclear phase diagram

Relativistic heavy-ion collisions are studied with the goal of creating hot and dense hadronic matter and investigating the nuclear phase diagram at high temperatures and densities, including the expected phase transition to a color-deconfined quark-gluon plasma. But even if the energy deposited in the reaction zone is quickly randomized and the fireball constituents reach an approximate state of local thermal equilibrium, a simple connection between heavy-ion observables and the phase diagram is still not straightforward: the pressure generated by the thermalization process blows the fireball apart, causing a strong time dependence of its thermodynamic conditions which is difficult to unfold from the experimental observations. There are therefore two fundamental issues to be settled before one can extract information on the nuclear phase diagram from heavy-ion experiments: (1) To what degree does the fireball approach local thermal equilibrium? (2) Which observables are sensitive to which stage(s) of its dynamical evolution, and which is the most reliable procedure for extracting the corresponding thermodynamic information?

Combining microscopic models for the dynamical fireball evolution with macroscopic thermal models for the analysis of heavy-ion data, significant progress has recently been made in answering both of these questions. Crucial for this achievement was the dramatically improved quantity and quality of hadron production data from the analysis of collisions between very heavy nuclei (Au+Au, Pb+Pb) from SIS to SPS energies. Fig. 1 shows a compilation by Cleymans and Redlich of hadronic freeze-out points in the nuclear phase diagram from various collision systems and beam energies. The upper set of points, parametrized by a constant average energy per particle $`\langle E\rangle /\langle N\rangle \approx 1`$ GeV, is obtained from measured hadron yields. They indicate the average thermodynamic conditions at chemical freeze-out, when the hadron abundances stopped evolving. The lower set of points, compared with lines of constant energy and particle density, is obtained from analyses of hadron momentum spectra and/or two-particle momentum correlations. They indicate thermal freeze-out, i.e. the decoupling of the momentum distributions. The chemical and thermal freeze-out points at the SPS and AGS, respectively, are connected by isentropic expansion trajectories with $`S/B\approx 36`$-38 for the SPS and 12-14 for the AGS.

Figure 1. Compilation by Cleymans and Redlich of chemical and thermal freeze-out points. The legend refers to the symbols for the thermal freeze-out points; for the original references for all the data points, see that compilation.

My first goal is a critical discussion of how these freeze-out parameters were extracted from the data and how reliable Fig. 1 is. Following that is a more detailed study of the fireball properties at chemical freeze-out, taking into account additional information not contained in Fig. 1, and a discussion of a consistent dynamical picture which can explain Fig. 1. My main conclusion, based on a chain of arguments developed and sharpened over the last few years, is given in the abstract; similar conclusions were reached and recently publicized by R. Stock and are also found in E. Shuryak's talk.

## 2 Thermal freeze-out, "Hubble"-flow, and the Little Bang

Let me begin with a discussion of the thermal freeze-out points.
Freeze-out marks the transition from a strongly coupled system, which evolves from one state of local thermal equilibrium to another, to a weakly coupled one of essentially free-streaming particles. If this transition happens quickly enough, the thermal momentum distributions (superimposed by collective expansion flow) are frozen in, and the temperature and collective flow velocity at the transition "point" can be extracted from the measured momentum spectra. In high energy heavy-ion collisions the freeze-out process is triggered dynamically by the accelerating transverse expansion and the very rapid growth of the mean free paths as a result of the fast dilution of the matter. Idealizing the kinetic freeze-out process by a single point in the phase diagram is therefore not an entirely unreasonable procedure.

As in the Big Bang, the observed momentum spectra mix the thermal information with the collective dynamics of the system. In the Big Bang, the observed microwave background radiation has a Bose-Einstein energy spectrum with an "effective temperature" (inverse slope) which is redshifted by cosmological expansion down from the original freeze-out temperature of about 3000 K to an observed value of only 2.7 K. In the Little Bang, where we observe the thermal hadron radiation from the outside, the transverse momentum spectra are blueshifted by the collective transverse motion towards the observer. Simple approximate expressions which capture this effect are<sup>1</sup> $`T_{\mathrm{slope}}\approx T_{\mathrm{therm}}+\frac{1}{2}m\langle v_{\perp }\rangle ^2`$ (which applies for $`p_{\perp }\ll m`$) and $`T_{\mathrm{slope}}\approx T_{\mathrm{therm}}\sqrt{(1+\langle v_{\perp }\rangle )/(1-\langle v_{\perp }\rangle )}`$ (which is good for $`p_{\perp }\gg m`$). For a given species (fixed $`m`$) the measured slope of the spectrum is thus ambiguous: temperature and flow cannot be separated. The ambiguity can be lifted in two ways: (i) One performs a simultaneous fit of the $`m_{\perp }`$-spectra of hadrons with different rest masses, thereby exploiting the mass dependence in the first of these two expressions. This makes the implicit assumption that thermal freeze-out happens simultaneously for all particle species. For Au+Au collisions at the AGS this works well and gives $`T_{\mathrm{therm}}\approx 93`$ MeV and $`\langle v_{\perp }\rangle \approx 0.5c`$ at midrapidity. Or (ii) one concentrates on a single particle species and correlates (as described in detail by U. Wiedemann) their spectra with their two-particle Bose-Einstein correlations. The $`M_{\perp }`$-dependence of the transverse HBT radius parameter<sup>2</sup> $`R_{\perp }(M_{\perp })\approx R/\sqrt{1+\xi \langle v_{\perp }\rangle ^2\frac{M_{\perp }}{T_{\mathrm{therm}}}}`$ then provides an orthogonal correlation between temperature and flow, allowing for their separation. For pions from Pb+Pb collisions at the SPS this leads again to kinetic freeze-out temperatures of 90-100 MeV and average transverse flow velocities of 0.5-0.55$`c`$ (perhaps even somewhat higher at midrapidity). This fixes the position of the freeze-out point along the $`T`$-axis in Fig. 1, but what about $`\mu _\mathrm{B}`$?

Footnote 1: This is accurate for non-relativistic particles from a Gaussian source with a linear transverse velocity profile $`v_{\perp }(r)=\overline{v}_{\perp }r/r_{\mathrm{rms}}`$, where $`\overline{v}_{\perp }`$ is the radial velocity at the rms radius $`r_{\mathrm{rms}}^2=\langle x^2+y^2\rangle `$. An analogous formula in the literature lacks the factor $`\frac{1}{2}`$ in the second term, since it uses the radial velocity at $`r=r_{\mathrm{rms}}/\sqrt{2}`$.

Footnote 2: $`\xi =𝒪(1)`$ accounts for different transverse density and flow profiles; $`\xi =\frac{1}{2}`$ for the case described in Footnote 1.
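A minimal numerical sketch of the two relations just quoted illustrates why a single slope is degenerate in temperature and flow, while the $`M_{\perp }`$-dependence of $`R_{\perp }`$ lifts the degeneracy; all numbers here are illustrative choices, not fitted values.

```python
import numpy as np

# Illustrative sketch of the two footnote relations: the blueshifted slope
# T_slope ~ T_therm + m<v_perp>^2/2 (for p_perp << m) and the HBT radius
# R_perp(M_perp) ~ R / sqrt(1 + xi <v_perp>^2 M_perp / T_therm).
T, v, R, xi = 0.095, 0.55, 6.0, 0.5   # GeV, units of c, fm; all assumed values

for name, m in [("pion", 0.138), ("kaon", 0.494), ("proton", 0.938)]:
    print(f"{name:7s} T_slope ~ {T + 0.5*m*v**2:.3f} GeV")
# The proton slope alone is degenerate: (T, <v>) = (0.140, 0.455) gives nearly
# the same value.  The M_perp dependence of R_perp breaks this degeneracy:
for M in (0.3, 0.6, 1.0):             # GeV
    print(f"M_perp = {M:.1f} GeV:  R_perp ~ {R/np.sqrt(1 + xi*v**2*M/T):.2f} fm")
```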
Since chemical equilibrium is already broken earlier, at $`T_{\mathrm{chem}}\approx 170`$-180 MeV (see below), $`\mu _\mathrm{B}`$ is, strictly speaking, not well-defined. In order to still be able to associate a point in the phase diagram with kinetic freeze-out, one commonly adjusts $`\mu _\mathrm{B}`$ in such a way that the deviations between the observed particle ratios and their chemical equilibrium values at $`T_{\mathrm{therm}}`$ are minimized. This is acceptable if the deviations are small; in practice they can approach a factor of 2 or so. This clearly causes irreducible systematic uncertainties in the baryon chemical potential at thermal freeze-out which are usually not evaluated and are not included in the horizontal error bars in Fig. 1 (where given).

Let us nonetheless accept these thermal freeze-out parameters and now ask the question: how did the system get there? Does the implied picture of a rapidly expanding, locally thermalized fireball, the Little Bang, make sense? These questions can be studied by microscopic kinetic simulations of RQMD, URQMD and HSD type; even if they do not include quark-gluon degrees of freedom during the very dense initial stages and thus may parametrize the initial hadron production incorrectly (see below), they can be used to explore the effects of scattering among the hadrons before kinetic freeze-out and the evolution of collective flow. A detailed study of thermalization by rescattering was recently performed by the URQMD group for Au+Au and Pb+Pb collisions from AGS to SPS energies (Fig. 2). After an initial non-equilibrium stage lasting for about 8-10 fm/$`c`$, these systems reach a state of approximate local thermal equilibrium which expands and cools at roughly constant entropy for another 10 fm/$`c`$ before decoupling. During the adiabatic expansion stage strong collective flow builds up. Thermalization is driven by intense elastic rescattering, dominated by resonances (e.g. $`\pi +N\to \mathrm{\Delta }\to \pi +N`$); inelastic processes are much rarer and lead only to minor changes in the chemical composition of the fireball. As a result, significant deviations from chemical equilibrium occur which increase with time; most importantly, at thermal freeze-out one sees a large pion excess which can only partially be accounted for by the initial string fragmentation process. Remarkably, these deviations from chemical equilibrium produce very little entropy. The $`S/B`$ values extracted from the URQMD simulations agree with those from the thermal model analysis of the data (cf. Figs. 1 and 2).

Figure 2. Expansion trajectories from URQMD simulations. Open and closed symbols denote the pre-equilibrium and hydrodynamic stages, respectively, of the collision in steps of 1 fm/$`c`$. The filled symbols lie on lines of constant entropy per baryon, $`S/B`$ = 38, 20, 12 for 160, 40, 10.7 $`A`$ GeV, respectively. The shaded region indicates the expected parameter range for the deconfining phase transition.

Hadron momentum spectra and two-particle correlations thus provide strong evidence for the existence of the Little Bang: thermal hadron radiation with $`T_{\mathrm{therm}}\approx 90`$-100 MeV and strong 3-dimensional ("Hubble-like") expansion with transverse flow velocities $`\langle v_{\perp }\rangle \approx 0.5`$-0.55$`c`$ (and even larger longitudinal ones). These two observations play a similar role here as the discovery of the Hubble expansion and the cosmic microwave radiation played for the Big Bang. But is there also a heavy-ion analogy to primordial Big Bang nucleosynthesis?
In the following section I will argue that we have indeed evidence for "primordial hadrosynthesis" in the Little Bang.

## 3 Thermal models for chemical freeze-out and "primordial hadrosynthesis"

Chemical reactions, which exploit small inelastic fractions of the total cross section, are typically much slower than the (resonance dominated) elastic processes. One thus expects chemical freeze-out to occur before thermal freeze-out ($`T_{\mathrm{chem}}>T_{\mathrm{therm}}`$), but on an expansion trajectory with roughly the same entropy per baryon $`S/B`$. Fig. 1 suggests that this is indeed the case. This is analogous to the Big Bang, where nucleosynthesis happened after about 3 minutes at $`T_{\mathrm{chem}}\approx 100`$ keV, whereas the microwave background decoupled much later, after about 300000 years at $`T_{\mathrm{therm}}\approx \frac{1}{4}`$ eV. The much smaller difference between the two decoupling temperatures in the Little Bang is mainly due to its much (about 18 orders of magnitude) faster expansion rate.

Before discussing implications of the chemical freeze-out points in Fig. 1, I first explain how they were obtained. Can thermal models be used to analyze chemical freeze-out? I discussed this question in some detail last year in Padova and thus will be short here. The first difficulty arises from the collective expansion, which strongly affects the shape of the $`m_{\perp }`$- and $`y`$-spectra in a way which depends on the particle mass. However, many experiments measure the particle yields only in small windows of $`m_{\perp }`$ and $`y`$. A chemical analysis of particle ratios from such experiments depends very strongly on model assumptions about the fireball dynamics. Static fireball fits yield chemical freeze-out parameters which are quite sensitive to the rapidity window covered by the data. Flow effects drop out, however, from $`4\pi `$-integrated particle ratios as long as freeze-out occurs at constant $`T`$ and $`\mu `$. $`4\pi `$ yields thus minimize the sensitivity to the collective fireball dynamics and are preferable for thermal model analyses.

Figure 3. Comparison between thermal model predictions and data for 158 $`A`$ GeV Pb+Pb collisions, after optimizing the model parameters $`T_{\mathrm{chem}}`$ = 170 MeV, $`\mu _\mathrm{B}`$ = 270 MeV, $`\gamma _\mathrm{s}`$ = 1. Discrepancies between model and data remain below the systematic uncertainties of the model and among different data sets.

Of course, dynamic systems never freeze out at constant $`T`$ and $`\mu `$. While the steep $`T`$-dependence of the particle densities (which determine the local scattering rates and control freeze-out) prohibits strong temperature variations across the freeze-out surface, incomplete baryon number stopping causes, at higher energies, significant longitudinal variations of $`\mu _\mathrm{B}`$. A global thermal fit replaces the freeze-out distributions $`T_{\mathrm{chem}}(x)`$, $`\mu _\mathrm{B}(x)`$ by average values $`T_{\mathrm{chem}}`$, $`\mu _\mathrm{B}`$. A recent study by Sollfrank, in which he performed a global thermal fit to particle yields from a hydrodynamic calculation, showed that after optimizing $`T_{\mathrm{chem}}`$ and $`\mu _\mathrm{B}`$ the thermal model predicted yields which differed by up to 15% from the actual ones, although hydrodynamic simulations implement perfect (local) chemical equilibrium by construction.
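For orientation, the following is a minimal sketch of the Boltzmann-approximation yields that underlie such global fits, $`n_i\sim g_im_i^2TK_2(m_i/T)e^{(B_i\mu _B+S_i\mu _S)/T}`$. Resonance feed-down and exact quantum statistics, both essential in real fits, are ignored here, and the value of $`\mu _S`$ is an illustrative assumption (in real fits it is fixed by overall strangeness neutrality).

```python
import numpy as np
from scipy.special import kn   # modified Bessel function K_n of integer order

# Minimal Boltzmann-gas sketch of thermal-model particle ratios (no resonance
# feed-down, no quantum statistics): n_i ~ g_i m_i^2 T K2(m_i/T) e^{mu_i/T}.
T, muB, muS = 0.170, 0.270, 0.070   # GeV; muS is an illustrative assumption

def density(m, g, B=0, S=0):
    return g * m**2 * T * kn(2, m/T) * np.exp((B*muB + S*muS)/T)

p     = density(0.938, 2, B=+1); pbar = density(0.938, 2, B=-1)
Kplus = density(0.494, 1, S=+1); Kmin = density(0.494, 1, S=-1)
Lam   = density(1.116, 2, B=+1, S=-1); Lbar = density(1.116, 2, B=-1, S=+1)

print("pbar/p           =", pbar/p)       # exp(-2 muB/T) ~ 0.04
print("K+/K-            =", Kplus/Kmin)
print("Lambdabar/Lambda =", Lbar/Lam)
```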
Given such irreducible discrepancies, thermal model fits can never be expected to be perfect; without detailed dynamical assumptions, local variations of the thermal parameters in the real collision can never be fully absorbed by the model. Discrepancies between model and data at the 15-30% level are inside the systematic uncertainty band of the thermal model approach. While I am impressed by how thermal models can reproduce the measured particle ratios at this level of accuracy over up to 3 orders of magnitude (see Fig. 3), I am thus deeply suspicious of "perfect" thermal model fits. A second lesson to be learned from this exercise is that the $`\chi ^2`$/d.o.f. resulting from such a fit is not very useful as an absolute measure for the quality of the fit: since discrepancies between the real yields and the predictions from the global thermal model cannot be avoided, $`\chi ^2`$/d.o.f. becomes larger and larger as the data become more and more accurate. While $`\chi ^2`$ minimization can still be used to identify the optimal model parameters within a given model, one should be very careful in using the absolute value of $`\chi ^2`$/d.o.f. to judge the relative quality of different model fits.

Let me note that the ideal system for thermal model fits of particle ratios will be provided by heavy-ion collisions at RHIC and LHC: hadron formation will happen at the confinement transition, and near midrapidity the baryon density is so low that $`T_\mathrm{c}`$ is nearly independent of $`\mu _\mathrm{B}`$. Replacing $`T(x)`$ by $`T`$ will then be an excellent approximation. Due to longitudinal boost-invariance near midrapidity, knowledge of $`dN/dy`$ will be good enough for a reliable chemical analysis. Finally, transverse flow can only be stronger at RHIC and LHC than at the SPS, so freeze-out will happen even more quickly after hadronization, strengthening the primordial character of the observed particle ratios.

Figure 4. Upper part: time dependence of midrapidity hadron densities for Au+Au collisions at RHIC, calculated in a combined hydrodynamic + URQMD simulation. At the hadronization temperature $`T_\mathrm{c}`$ hadrons are created from the hydrodynamic phase with chemical equilibrium abundances and are then evolved kinetically by URQMD. Lower part: final hadron abundances at the end of the kinetic stage (circles) and if the calculation is stopped and all resonances are decayed directly at $`T_\mathrm{c}`$ (squares).

These expectations are borne out by a recent analysis by Bass and Dumitru, who combined a hydrodynamic description of the dense early stage with a URQMD simulation of the late hadronic stage. Fig. 4 (bottom) shows that chemical freeze-out indeed occurs quickly after hadronization: the yields at hadronization (squares) and after the last elastic scattering (circles) differ by less than 30%, in spite of many collisions in between. The fit of the Pb+Pb data in Fig. 3 yields a chemical freeze-out temperature $`T_{\mathrm{chem}}\approx 170`$ MeV at full strangeness saturation ($`\gamma _\mathrm{s}`$ = 1). In a different analysis the same data are fit with $`T_{\mathrm{chem}}=144\pm 2`$ MeV and a strongly oversaturated strange phase-space ($`\gamma _\mathrm{s}=1.48\pm 0.08`$). The authors also allow for oversaturation of the light quarks and find $`\gamma _\mathrm{q}=1.72\pm 0.08`$, which allows the fit to absorb the large pion multiplicity at a low value of $`T_{\mathrm{chem}}`$.
This "chemical non-equilibrium" fit underpredicts $`\overline{\mathrm{\Omega }}/\overline{\mathrm{\Xi }}`$ by 40% and $`\mathrm{\Omega }/\mathrm{\Xi }`$ by 60%, a problem which disappears at $`T_{\mathrm{chem}}=170`$ MeV (Fig. 3). More importantly, since $`\gamma _\mathrm{q}^2=e^{\mu _\pi /T}`$, these freeze-out parameters imply a very large pion chemical potential $`\mu _\pi =156`$ MeV $`>m_\pi `$; this invalidates the assumption that Bose statistics for pions can be accounted for by considering only the first correction to the Boltzmann term. Hence, while the authors of that analysis prefer their fit on the basis of a low $`\chi ^2`$/d.o.f., it has systematic uncertainties which far exceed the quoted statistical errors.

Having established the location in the phase diagram where chemical freeze-out occurs, we should again ask the question: how did the system get there? Since $`T_{\mathrm{chem}}`$ turns out to be very close to the predicted critical value for the hadronization phase transition, there is clearly no time between hadron formation and chemical freeze-out for kinetic equilibration of the hadron abundances by inelastic hadronic rescattering. The observed hadronic chemical equilibrium at $`T_{\mathrm{chem}}`$ must therefore be pre-established: it reflects a statistical occupation of the hadronic phase-space, following the principle of maximum entropy, by the hadronization process. Hadrons form from a prehadronic stage by filling each available phase-space cell with equal probability, subject only to the constraints of energy, baryon number and strangeness conservation (the latter includes a possible overall suppression of strangeness). Afterwards, the chemical composition decouples essentially immediately, without major modifications by hadronic rescattering. The parameter $`T_{\mathrm{chem}}`$ is thus not a hadronic temperature in the usual sense, i.e. not a result of hadronic kinetic equilibration. In the maximum entropy spirit it should be interpreted as a Lagrange multiplier which regulates the hadron abundances in accordance with the conservation laws and is directly related to the critical energy density at which hadronization can proceed. $`T_{\mathrm{chem}}\approx 170`$ MeV translates into $`ϵ_\mathrm{c}\approx 1`$ GeV/fm<sup>3</sup>. This also naturally explains Becattini's observation of hadronic chemical equilibrium at the same value of $`T_{\mathrm{chem}}`$ in $`e^+e^{-}`$, $`pp`$ and $`p\overline{p}`$ collisions at essentially all collision energies, although there hadronic final state interactions are completely absent. Whether the constituents of the prehadronic stage themselves thermalize before or during hadronization is a question which final hadron abundances cannot answer; as likely as their thermalization may appear, it is not necessary for an explanation of the observed phenomena.

The concept of statistical hadron formation from a pre-existing state of color-deconfined, completely uncorrelated quarks and antiquarks is supported by a recent analysis<sup>3</sup> by Bialas which follows similar earlier arguments by Rafelski but formulates them more generally, such that they do not require thermalization.

Footnote 3: An improved argument, taking into account global flavor conservation during hadronization, has also been given.
Bialas points out that by considering baryon/antibaryon (or, generally, particle/antiparticle) ratios, the unknown effects on hadronization from the internal hadron structure drop out, and one can check directly whether the observed hadron abundances can be fully understood by just counting their conserved quantum numbers (carried by their valence quarks), or whether additional correlations among the quarks exist. He finds that in S+S and Pb+Pb collisions at the SPS the former is true, while hadron production yields in p+Pb collisions point to correlations among the quarks.

## 4 Early memories: strangeness enhancement

The one decisive feature which distinguishes heavy-ion from elementary particle collisions is the strangeness content in the hadronic final state: the global strangeness fraction of the produced quark-antiquark pairs, $`\lambda _\mathrm{s}=2\langle s\overline{s}\rangle /\langle u\overline{u}+d\overline{d}\rangle |_{\mathrm{produced}}`$, is about 2 times higher in nuclear collisions. This cannot be reproduced by hadronic rescattering models and must thus be a feature of the prehadronic state. Here I would like to discuss in more detail the specific enhancement factors for $`K,\overline{K},\mathrm{\Lambda },\overline{\mathrm{\Lambda }},\mathrm{\Xi },\overline{\mathrm{\Xi }},\mathrm{\Omega },\overline{\mathrm{\Omega }}`$, and $`\varphi `$ reported recently and during this meeting.

Figure 5. Centrality dependence of strangeness enhancement as measured by WA97. The strange particle yields per participating nucleon in Pb+Pb collisions at the SPS are compared to the same ratio in p+Be and p+Pb collisions.

Fig. 5 shows that the relative strangeness enhancement between Pb+Pb and p+Be collisions is the stronger, the more strange quarks the hadron contains. This is perfectly consistent with the above picture of statistical hadronization: disregarding other phase-space constraints, an $`\mathrm{\Omega }`$, for example, which contains 3 strange quarks, is expected to be enhanced by a factor $`2^3=8`$ if strange quarks are enhanced by a factor 2. On the other hand, this pattern contradicts expectations from final state hadronic rescattering: since hadrons with more strange quarks are heavier and strangeness must be created in pairs, the production of stranger particles is suppressed by increasingly higher thresholds.

An interesting observation from Fig. 5 is the apparent centrality independence of the specific strangeness enhancement factors: the enhancement appears to be already fully established in semiperipheral Pb+Pb collisions with about 100 participating nucleons. In fact, the global enhancement by a factor 2 was already seen in S+S collisions by NA35. Since it must be a prehadronic feature, but the lifetime of the prehadronic stage is shorter for smaller collision systems, this points to a new fast strangeness production mechanism in the prehadronic stage. Exactly this was predicted for the quark-gluon plasma. At this meeting we saw new data on the centrality dependence of hadron yields. Unfortunately, different centrality measures and prescriptions to determine $`N_{\mathrm{part}}`$ have been used. This needs clarification before the pattern in Fig. 5 can be considered confirmed.

Much recent effort went into trying to explain these observations within microscopic simulations based on string breaking followed by hadronic rescattering. All such attempts have failed.
VENUS and RQMD give more strangeness enhancement for more central collisions, however not from hadronic rescattering, but mostly from the non-linear rise of the formation probability for quark matter droplets and color ropes. Strangeness enhancement is thus put in as an initial condition; unlike Fig. 5, it rises monotonically with $`N_{\mathrm{part}}`$. HIJING/$`B\overline{B}`$ uses baryon junction loops to enhance strange baryon production near midrapidity. Again this puts the enhancement into the initial conditions. The measured $`N_{\mathrm{part}}`$-dependence is not reproduced. The model also disagrees with the observed pattern $`\overline{\mathrm{\Omega }}/\mathrm{\Omega }>\overline{\mathrm{\Xi }}/\mathrm{\Xi }>\overline{\mathrm{\Lambda }}/\mathrm{\Lambda }`$ (which the statistical hadronization picture explains nicely). Finally, the "improved dual parton model", which gets some fraction of the enhancement in the initial state from "diquark-breaking collisions" and claims to obtain an even larger additional enhancement from hadronic final state interactions, suffers from a severe violation of detailed balance: it only includes inelastic channels (like $`\pi +\mathrm{\Xi }\to \mathrm{\Omega }+K`$) which increase multistrange baryons, but neglects the (at least equally important) strangeness exchange processes (like $`\pi +\mathrm{\Omega }\to \overline{K}+\mathrm{\Xi }`$) which destroy them. Consequently it also fails to reproduce the apparent saturation of the enhancement factors seen in Fig. 5. The observed strangeness enhancement pattern thus cannot be generated by hadronic final state interactions, but must be put in at the beginning of the hadronic stage. No working model which does so in agreement with the data is known, except for the conceptually simplest one, the statistical hadronization model.

## 5 Summary

The analysis of soft hadron production data at the SPS indicates that hadron formation proceeds by statistical hadronization from a prehadronic state of uncorrelated (color-deconfined) quarks. This leads to pre-established apparent chemical equilibrium among the formed hadrons at the confinement temperature $`T_\mathrm{c}`$; it is not caused by kinetic equilibration through hadronic rescattering. After hadronization the hadron abundances freeze out more or less immediately. The chemical freeze-out temperature thus coincides with the critical temperature, $`T_{\mathrm{chem}}\approx T_\mathrm{c}\approx 170`$-180 MeV, corresponding to a critical energy density $`ϵ_\mathrm{c}\approx 1`$ GeV/fm<sup>3</sup>, as predicted by lattice QCD.

The prehadronic state in $`A+A`$ ($`A\ge 32`$) collisions contains about twice more strangeness than in $`e^+e^{-}`$ and $`pp`$ collisions. This strangeness enhancement appears to be already fully established in nuclear collisions with 60 or more participant nucleons and cannot be generated by hadronic final state interactions. This suggests a fast $`s\overline{s}`$ creation mechanism in the prehadronic stage, as predicted for a quark-gluon plasma.

A clear hierarchy between chemical ($`T_{\mathrm{chem}}\approx 170`$-180 MeV) and thermal ($`T_{\mathrm{therm}}\approx 90`$-100 MeV) freeze-out is observed in Pb+Pb collisions at the SPS; the gap is somewhat smaller ($`\approx 130`$-140 MeV vs. $`\approx 90`$ MeV) at the AGS. In both cases thermal decoupling is accompanied by strong radial collective flow. The smaller inverse slopes of the $`\mathrm{\Omega }`$ $`m_{\perp }`$-spectra suggest that a considerable fraction (but probably not all) of this flow is generated by strong elastic hadronic rescattering after hadronization.
I conclude that we have seen the Little Bang in the laboratory, and that most likely it is initiated by a quark-gluon plasma. Acknowledgements: I thank S. Bass, J. Cleymans and R. Lietava for providing me with Figs. 1, 4, and 5. Several fruitful discussions with R. Stock are gratefully acknowledged. A comment by Zhangbu Xu led to the clarification presented in Footnote 1.
# Theoretical Model for Kramers-Moyal’s Description of Turbulence Cascade ## Abstract We derive the Kramers-Moyal equation for the conditional probability density of velocity increments from the theoretical model recently proposed by V. Yakhot in the limit of high Reynolds number. We show that the higher order ($`n\ge 3`$) Kramers–Moyal coefficients tend to zero and the velocity increments are evolved by the Fokker–Planck operator. Our result is compatible with the phenomenological description by R. Friedrich and J. Peinke, developed to explain the experiments recently done by J. Peinke et al. PACS numbers: 47.27.Ak, 47.27.Gs, 47.27.Eq The problem of the scaling behavior of the longitudinal velocity difference $`U=|u(x_1)-u(x_2)|`$ in turbulence and of the probability density function of $`U`$, i.e. $`P(U)`$, attracts a great deal of attention \[4-10\]. The statistical theory of turbulence was put forward by Kolmogorov , and further developed by others \[12-15\]. The approach is to model turbulence using stochastic partial differential equations. Kolmogorov conjectured that the scaling exponents are universal, independent of the statistics of large–scale fluctuations and the mechanism of the viscous damping, when the Reynolds number is sufficiently large. However, recently it has been found that there is a relation between the probability distribution function (PDF) of the velocity and that of the external force (see for more detail). In this direction, Polyakov has recently offered a field theoretic method to derive the probability distribution or density of states in (1+1)-dimensions in the problem of the randomly driven Burgers equation \[16-17\]. In one dimension, turbulence without pressure is described by the Burgers equation (see also concerning the relation between the Burgers equation and the KPZ equation). In the limit of high Reynolds number, using the operator product expansion (OPE), Polyakov reduces the problem of computation of correlation functions in the inertial subrange to the solution of a certain partial differential equation \[19-20\]. Yakhot recently generalized the Polyakov approach to three dimensions and found a closed differential equation for the two-point generating function of the “longitudinal” velocity difference in strong turbulence (see also for a closed equation for the PDF of the velocity difference in two- and three-dimensional turbulence without pressure). On the other hand, recently, from a detailed analysis of experimental data of a turbulent free jet, R. Friedrich and J. Peinke have been able to obtain a phenomenological description of the statistical properties of a turbulent cascade using a Fokker-Planck equation. In other words, they have seen that the conditional probability density of velocity increments satisfies the Chapman-Kolmogorov equation. Mathematically this is a necessary condition for the velocity increments to be a Markovian process in terms of length scales. By fitting the observational data they have succeeded in finding the different Kramers-Moyal (K.M.) coefficients, and they find that the approximations of the third and fourth order coefficients tend to zero whereas the first and second coefficients have well defined limits. Then, invoking the implications of Pawula’s theorem, they arrive at a Fokker-Planck evolution operator.
As an evolution equation for the probability density function of velocity increments, the Fokker-Planck equation has been used to describe the changing shape of the distribution as a function of the length scale. By this strategy the information on the observed intermittency of the turbulent cascade is verified. In their description, and based on simplified assumptions on the drift and diffusion coefficients, they have considered two possible scenarios in order to indicate that both the Kolmogorov 41 and 62 scalings are recovered as possible behaviors in their phenomenological theory. In this paper we derive the Kramers–Moyal equation from the Navier–Stokes equation and show how the higher order ($`n\ge 3`$) Kramers–Moyal coefficients tend to zero in the high Reynolds number limit. Therefore we find the Fokker–Planck equation from first principles. We show that the breakdown of Galilean invariance is responsible for the scale dependence of the Kramers–Moyal coefficients. Finally, using the path-integral expression for the PDF, we show how small-scale statistics are affected by the PDF at large scales and confirm Landau’s remark that the large-scale fluctuations of turbulence production in the integral range can invalidate the Kolmogorov theory \[12-13\]. Our starting point is the Navier–Stokes equations: $$𝐯_t+(𝐯\cdot \nabla )𝐯=\nu \nabla ^2𝐯-\frac{\nabla p}{\rho }+𝐟(𝐱,t),\nabla \cdot 𝐯=0$$ (1) for the Eulerian velocity $`𝐯(𝐱,t)`$ and the pressure $`p`$ with viscosity $`\nu `$, in N–dimensions. The force $`𝐟(𝐱,t)`$ is the external stirring force, which injects energy into the system on a length scale $`L`$. More specifically one can take, for instance, a Gaussian distributed random force, which is identified by its two moments: $$<f_\mu (𝐱,t)f_\nu (𝐱^{^{}},t^{^{}})>=k(0)\delta (t-t^{^{}})k_{\mu \nu }(𝐱-𝐱^{^{}})$$ (2) and $`<f_\mu (𝐱,t)>=0`$, where $`\mu ,\nu =x_1,x_2,\mathrm{},x_N`$. The correlation function $`k_{\mu \nu }(r)`$ is normalized to unity at the origin and decays rapidly enough once $`r`$ becomes greater than or equal to the integral scale $`L`$. The force free N-S equation is invariant under space–time translation, parity and scaling transformation. Also it is invariant under the Galilean transformation, $`x\to x+Vt`$ and $`v\to v+V`$, where $`V`$ is the constant velocity of the moving frame. Both boundary conditions and forcing can violate some or all of the symmetries of the force free N-S equation. However, it is usually assumed that in high Reynolds number flow all symmetries of the N-S equation are restored in the limit $`r\to 0`$ and $`r>>\eta `$, where $`\eta `$ is the dissipation scale at which the viscous effects become important. This means that in this limit the root–mean–square velocity fluctuation $`u_{rms}=\sqrt{<v^2>}`$, which is not invariant under the constant shift $`V`$, cannot enter the relations describing moments of the velocity difference. Therefore the effective equations for the inertial–range velocity correlation functions must have the symmetries of the original N-S equations. For many years this assumption was the basis of turbulence theories. But based on the recent understanding of turbulence, some of the constraints on the allowed turbulence theories can be relaxed. Polyakov’s theory of the large–scale random force driven Burgers turbulence was based on the assumption that weak small–scale velocity difference fluctuations (i.e.
$`|v(x+r)-v(x)|<<u_{rms}`$ and $`r<<L`$), where $`L`$ is the integral scale of the system, obey a $`G`$–invariant dynamic equation, meaning that the integral scale and the single–point $`u_{rms}`$ induced by random–forcing cannot enter the resulting expression for the probability density. According to , it has been shown how the $`u_{rms}`$ enters the equation for the PDF and therefore breaks the G-invariance in Polyakov’s limited sense. We are interested in the scaling of the longitudinal structure function $`S_q=<(u(x+r)-u(x))^q>=<U^q>`$, where $`u(x)`$ is the $`x`$-component of the three-dimensional velocity field and $`r`$ is the displacement in the direction of the $`x`$-axis, and the probability density $`P(U,r)`$ for homogeneous and isotropic turbulence. Let us define the generating function $`\widehat{Z}`$ for the longitudinal structure functions, $`\widehat{Z}=<e^{\lambda U}>`$. According to , in spherical coordinates the advective term in eq. (1) involves terms of order $`O(\frac{\partial ^2\widehat{Z}}{\partial \lambda \partial r})`$, $`O(\frac{1}{r}\frac{\partial \widehat{Z}}{\partial \lambda })`$, $`O(\frac{1}{\lambda }\frac{\partial \widehat{Z}}{\partial r})`$ and $`O(\frac{\widehat{Z}}{\lambda r})`$. It is noted that the advection contributions are accurately accounted for in the equation for $`\widehat{Z}`$, but it is not closed due to the dissipation and pressure terms. Using Polyakov’s OPE approach, Yakhot has shown that the dissipation term can be treated easily while the pressure term has an additional difficulty. The pressure contribution leads to effective energy redistribution between components of the velocity field and has a non-trivial effect on the dynamics of the N-S equation. Proceeding to find a closed equation for the generating function of the longitudinal velocity difference, $`\widehat{Z}`$, the dissipation and pressure terms in eq. (1) give contributions, and the longitudinal part of the dissipation term renormalizes the coefficient in front of the $`O(\frac{1}{\lambda })`$ term in the equation for $`\widehat{Z}`$ . Also it generates a term of order $`O(u)`$ which can be written in terms of $`\widehat{Z}`$ as $`\lambda \frac{\partial \widehat{Z}}{\partial \lambda }`$. Taking into account all the possible terms and using the symmetry of the PDF, i.e. $`P(U,r)=P(-U,r)`$, the following closed equation for $`\widehat{Z}`$ can be found, $$\frac{\partial ^2\widehat{Z}}{\partial \lambda \partial r}-\frac{B_0}{\lambda }\frac{\partial \widehat{Z}}{\partial r}=\frac{A}{r}\frac{\partial \widehat{Z}}{\partial \lambda }-C\lambda \frac{\partial \widehat{Z}}{\partial \lambda }+3r^2\lambda ^2\widehat{Z}$$ (3) where the parameters $`A`$, $`B`$ and $`C`$ are to be determined from the theory. Also we suppose that $`k_{\mu \nu }`$ has the structure $`k_{\mu \nu }(𝐫_{𝐢,𝐣})=k(0)[1-\frac{|𝐫_{𝐢,𝐣}|^2}{2L^2}\delta _{\mu ,\nu }-\frac{(𝐫_{𝐢,𝐣})_\mu (𝐫_{𝐢,𝐣})_\nu }{L^2}]`$ with $`k(0)=1`$ and $`𝐫_{𝐢,𝐣}=𝐱_𝐢-𝐱_𝐣`$. The Gaussian assumption for the “single-point” probability density fixes the value of the coefficient $`C=\frac{u_{rms}}{L}`$, and the $`C`$-term corresponds to the breakdown of G–invariance in Polyakov’s limited sense. The $`A`$-term is responsible for the interaction of the transverse components of the velocity field with the longitudinal component and produces an effective source and friction for the longitudinal correlation. In the limit $`r\to 0`$ the equation for the probability density is derived from eq. (3) as, $$\frac{\partial }{\partial U}U\frac{\partial P}{\partial r}-B_0\frac{\partial P}{\partial r}=\frac{A}{r}\frac{\partial }{\partial U}UP+\frac{u_{rms}}{L}\frac{\partial ^2}{\partial U^2}UP$$ (4) Using the exact result $`S_3=-\frac{4}{5}ϵr`$ at small scales ($`ϵ`$ is the mean energy dissipation rate), one finds $`A=\frac{3+B}{3}`$, where $`B=B_0>0`$ .
It is easy to see that eq. (4) can be written as $`\partial _rP=(\partial _UU-B_0)^1[(A/r)\partial _UU+(u_{rms}/L)\partial _U^2U]P`$, and so its solution can obviously be written as a scalar–ordered exponential, $`P(U,r)=𝒯(e_+^{\int _{r_0}^r𝑑r^{}L_{KM}(U,r^{})}P(U,r_0))`$, where $`L_{KM}`$ can be obtained formally by computing the inverse operator. Using the properties of scalar–ordered exponentials, the conditional probability density will satisfy the Chapman-Kolmogorov equation. Equivalently we derive that the probability density, and as a result the conditional probability density of velocity increments, satisfies a K.M. evolution equation: $$\frac{\partial P}{\partial r}=\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}(-1)^n\frac{\partial ^n}{\partial U^n}(D^{(n)}(r,U)P)$$ (5) where $`D^{(n)}(r,U)=\frac{\alpha _n}{r}U^n+\beta _nU^{n-1}`$. We have found that the coefficients $`\alpha _n`$ and $`\beta _n`$ depend on $`A`$, $`B`$, $`u_{rms}`$ and the integral length scale $`L`$, and are given by recursion relations. We scale the velocities as $`\stackrel{~}{U}=\frac{U}{(\frac{r}{L})^{1/3}}`$ and introduce a logarithmic length scale $`\lambda =\mathrm{ln}(\frac{L}{r})`$ which varies from zero to infinity as $`r`$ decreases from $`L`$ to $`\eta `$. Thus the forms of $`\stackrel{~}{D^{(1)}}(\stackrel{~}{U},r)`$ and $`\stackrel{~}{D^{(2)}}(\stackrel{~}{U},r)`$ in the equivalent description would be $`\stackrel{~}{D}^{(1)}(\stackrel{~}{U},r)=(\frac{A}{1+B})\stackrel{~}{U}`$ and $`\stackrel{~}{D^{(2)}}(\stackrel{~}{U},r)=(\frac{A}{(2+B)(1+B)})\stackrel{~}{U}^2-(\frac{r}{L})^{2/3}u_{rms}(\frac{1}{2+B})\stackrel{~}{U}`$. The drift and diffusion coefficients for various scales $`\lambda `$, determined in the theory of Yakhot, show the same functional form as the coefficients calculated from experimental data . In comparison with the phenomenological theory of Friedrich and Peinke, we are able to construct a K.M. equation for velocity increments that is analytically derived from the Yakhot theory, which is based only on general underlying symmetries and the OPE conjecture. Furthermore, this viewpoint on equation (4) gives the expressions for the scale dependence of the coefficients in the K.M. equation. The important result is that the scale-dependent K.M. coefficients are proportional to $`u_{rms}`$, which suggests a relationship between the breakdown of G-invariance and the scale dependence of the K.M. coefficients in the equivalent theory. The two unknown parameters $`A`$ and $`B`$ in the theory are reduced to one by fitting $`\xi _3=1`$, so all the scaling exponents and the $`D^{(n)}`$’s are described by one parameter, $`B`$. Considering the results in , from which the value of $`B`$ is obtained, we have used the value $`B\approx 20`$ and have calculated the numerical values of the K.M. coefficients. Ratios of the first coefficients $`\alpha _n`$ and $`\beta _n`$ are $`\alpha _3/\alpha _2=0.04,\alpha _4/\alpha _2=0.001,\beta _3/\beta _2=0.04,\beta _4/\beta _2=0.001`$. From the comparison of the numerical values of the higher order coefficients we find that the series can be cut safely after the second term, and a good approximation for the evolution operator of velocity increments is a Fokker-Planck operator. According to , the value of the parameter $`B\approx 20`$ is calculated numerically in the limit of infinite Reynolds number. Using this value for the calculation of the numerical values of $`\stackrel{~}{D_1}`$ and $`\stackrel{~}{D_2}`$, we find that the contribution of the scale-dependent terms is essentially negligible.
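As a quick numerical cross-check of these statements, the closed-form scaling exponents implied by the theory (the expression $`\xi _n=\frac{(3+B)n}{3(n+B)}`$, quoted at the end of this paper) can be evaluated directly at $`B\approx 20`$. The short Python sketch below does this; only the formula and the value of $`B`$ are taken from the text, everything else is illustrative.

```python
# Structure-function exponents implied by the theory, evaluated at the
# value B ~ 20 used above.  The closed form xi_n = (3+B)n/(3(n+B)) is the
# one quoted at the end of this paper; K41 would give xi_n = n/3.
B = 20.0

def xi(n, B=B):
    return (3.0 + B) * n / (3.0 * (n + B))

for n in range(1, 9):
    print(f"n={n}: xi_n={xi(n):.3f}  (K41: {n/3.0:.3f})")
# xi_3 = 1 exactly (this fixes A = (3+B)/3); the growing deficit of xi_n
# relative to n/3 at large n is the intermittency correction.
```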
This correspondence was referred to in , concerning the behavior of the diffusion coefficients in the limit of infinite Reynolds number. As is well known, the Fokker-Planck description of the probability measure is equivalent to the Langevin description, written as $`\frac{\partial \stackrel{~}{U}}{\partial \lambda }=\stackrel{~}{D}^{(1)}(\stackrel{~}{U},\lambda )+\sqrt{\stackrel{~}{D}^{(2)}(\stackrel{~}{U},\lambda )}\eta (\lambda )`$, where $`\eta (\lambda )`$ is a white noise and the diffusion term acts as a multiplicative noise. By considering the Ito prescription and using the path-integral representation of the Fokker-Planck equation, we can give an expression for all the possible paths in the configuration space of velocity differences and thus demonstrate the change of the measure under the change of scale, i.e. $$P(\stackrel{~}{U}_2,\lambda _2|\stackrel{~}{U}_1,\lambda _1)=\int 𝒟[\stackrel{~}{U}]e^{-\int _{\lambda _1}^{\lambda _2}𝑑\lambda \frac{(\frac{\partial \stackrel{~}{U}}{\partial \lambda }-\stackrel{~}{D}^{(1)}(\stackrel{~}{U},\lambda ))^2}{4\stackrel{~}{D}^{(2)}(\stackrel{~}{U},\lambda )}}$$ (6) The measure of the path integral is meaningful only when some form of discretization is chosen , but we have written it in a formal way. Using the forms of $`\stackrel{~}{D}^{(1)}`$ and $`\stackrel{~}{D}^{(2)}`$ and approximating them with scale-independent ones in the infinite Reynolds number limit, one can easily see that the transition functional can be written in terms of $`\mathrm{ln}\stackrel{~}{U}`$. It is an easy way to see how the large scale ($`\lambda \to 0`$) Gaussian probability density can change its shape when going to small scales ($`\lambda \to \mathrm{\infty }`$) and consequently give rise to intermittent behavior. Instead of working with the probability functional of velocity increments, the formal solution of the Fokker-Planck equation as a scalar–ordered exponential can be converted to an integral representation for the probability measure of velocity increments when $`\stackrel{~}{D}^{(1)}\approx \alpha _1(\lambda )\stackrel{~}{U}`$ and $`\stackrel{~}{D}^{(2)}\approx \alpha _2(\lambda )\stackrel{~}{U}^2`$, i.e. $$P(\stackrel{~}{U},\lambda )=\frac{e^{\gamma _0(\lambda )}}{\sqrt{4\pi \gamma (\lambda )}}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}e^{-\frac{s^2}{4\gamma (\lambda )}}\varphi (\stackrel{~}{U}e^{\gamma _1(\lambda )-s})𝑑s$$ (7) where $`\gamma _0(\lambda )=\int _0^\lambda (\alpha _1(\lambda ^{})+2\alpha _2(\lambda ^{}))𝑑\lambda ^{}`$ and $`\gamma _1(\lambda )=\int _0^\lambda (\alpha _1(\lambda ^{})+3\alpha _2(\lambda ^{}))𝑑\lambda ^{}`$ and $`\gamma (\lambda )=\int _0^\lambda \alpha _2(\lambda ^{})𝑑\lambda ^{}`$, and $`\varphi (\stackrel{~}{U})`$ is the probability measure at the integral length scale ($`\lambda \to 0`$). We consider the Gaussian distribution $`\varphi (\stackrel{~}{U})\propto e^{-m\stackrel{~}{U}^2}`$ at the integral scale, which is a reasonable choice (experimental data show that up to third moments the PDF at the integral scale is consistent with a Gaussian distribution), and derive the dependence of the variance of the probability density on the scale in the limit when the original distribution satisfies the condition $`m\ll 1`$. The result shows an exponential dependence like $`m\to me^{2\zeta }`$ where $`\zeta =3\alpha _2-\alpha _1`$. The picture consistent with the shape change of the probability measure with scale is that when $`\lambda `$ grows the width decreases, and vice versa, as reported in previous simulation and experimental results .
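The integral representation (7) is straightforward to evaluate numerically. The sketch below does so for the Gaussian initial measure; the values of $`\alpha _1`$, $`\alpha _2`$ and $`m`$ are illustrative assumptions (the text fixes the coefficients only through $`A`$ and $`B`$), and the sign conventions follow our reading of the source, so this is a qualitative illustration of the shape change rather than a quantitative prediction.

```python
import numpy as np

# Numerical sketch of the integral representation (7) for a Gaussian
# initial measure phi(U) ~ exp(-m U^2) at the integral scale.
# alpha1, alpha2 (taken scale-independent) and m are illustrative
# assumptions; gamma(lambda) must stay positive, so alpha2 > 0.
alpha1, alpha2, m = 0.3, 0.05, 0.5

def pdf(U, lam, s=np.linspace(-20.0, 20.0, 4001)):
    g0 = (alpha1 + 2.0 * alpha2) * lam      # gamma_0(lambda)
    g1 = (alpha1 + 3.0 * alpha2) * lam      # gamma_1(lambda)
    g = alpha2 * lam                        # gamma(lambda)
    kern = np.exp(-s**2 / (4.0 * g))
    phi = np.exp(-m * (U * np.exp(g1 - s))**2)
    return np.exp(g0) / np.sqrt(4.0 * np.pi * g) * np.trapz(kern * phi, s)

U = np.linspace(-8.0, 8.0, 1601)
for L_over_r in (1.5, 5.0, 20.0):
    lam = np.log(L_over_r)
    P = np.array([pdf(u, lam) for u in U])
    norm = np.trapz(P, U)                   # should stay ~1 (sanity check)
    var = np.trapz(U**2 * P, U) / norm
    flat = np.trapz(U**4 * P, U) / norm / var**2
    print(f"L/r={L_over_r:5.1f}: norm={norm:.3f} var={var:.3f} flatness={flat:.2f}")
# With these assumed signs the core narrows while the flatness grows above
# the Gaussian value 3 as lambda increases: the qualitative deformation of
# the PDF shown schematically in Figure 1.
```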
Moreover, we should emphasize that the shape change is somewhat more complex, with corrections of order $`O(m^2\stackrel{~}{U}^4)`$ arising even in this simplifying limit, i.e. $`m\ll 1`$. Starting with a Gaussian measure at the integral scale and using the calculated scale-independent Fokker-Planck coefficients, we have numerically calculated the PDFs for fully developed turbulence and Burgers turbulence at different length scales; their plots in Figs. 1 and 2 are completely compatible with experimental and simulation results . The extreme case of the Burgers problem (i.e. $`B\to 0`$) shows an ever-localizing behavior, tending in the limit $`\lambda \to \mathrm{\infty }`$ to a Dirac delta function, which again is consistent with what is known about the Burgers problem . Clearly, eqs. (4) and (5) give the same result for the multifractal exponents of the structure functions, i.e. with $`S_n(r)=A_nr^{\xi _n}`$ one derives $`\xi _n=\frac{(3+B)n}{3(n+B)}`$ . In summary, we have constructed a theoretical bridge between two recent theories involving the statistics of longitudinal velocity increment fluctuations in fully developed turbulence. On the basis of the recent theory proposed by V. Yakhot, we showed that the probability density of longitudinal velocity increments satisfies a Kramers-Moyal equation, which encodes the Markovian property of these fluctuations in a necessary way. We are able to give the exact form of the Kramers-Moyal coefficients in terms of the basic parameter $`B`$ of the Yakhot theory. The qualitative behavior of the drift and diffusion terms is consistent with the experimental outcomes . As the most prominent result of our work, we could find the form of the path probability functional of the velocity increments in scale, which naturally encodes the scale dependence of the probability density. This gives a clear picture of the intermittent nature of fully developed turbulence. We should emphasize that the derivation of the K.M. equation is not restricted to Polyakov’s specific approach. One can show that similar results could be obtained by conditional averaging methods \[24-25\]. Clearly, the K.M. coefficients $`D^{(n)}`$ can then be estimated numerically, but an analytic derivation is not possible . Our work might be generalized to give a theoretical basis for the Markovian fluctuations of the moments of height difference in surface growth problems like KPZ, and we believe that it would be possible to derive the Kramers-Moyal description for the statistics of energy dissipation. Acknowledgement: We would like to thank A. Aghamohamadi, B. Davoudi, R. Ejtehadi, M. Khorrami, A. Langari and S. Rouhani for helpful discussions and useful comments. FIGURE CAPTIONS Figure 1. Schematic view of the logarithm of the PDF at different length scales. These graphs are numerically obtained from the integral representation of the PDF in the Fokker-Planck approximation. The curves correspond to the scales $`L/r=1.5,2,5,10,20`$. Figure 2. Schematic view of the logarithm of the PDF in Burgers turbulence ($`B\to 0`$) at different length scales. These graphs are numerically obtained from the integral representation of the PDF in the Fokker-Planck approximation. The scales are $`L/r=1.5,2,5,10,20`$.
# Magnetic field induced localization in a two-dimensional superconducting wire network ## Abstract We report transport measurements on superconducting wire networks which provide the first experimental evidence of a new localization phenomenon induced by magnetic field on a 2D periodic structure. In the case of a superconducting wave function this phenomenon manifests itself as a depression of the network critical current and of the superconducting transition temperature at a half magnetic flux quantum per tile. In addition, the strong broadening of the resistive transition observed at this field is consistent with enhanced phase fluctuations due to this localization mechanism. In a recent paper a novel case of extreme localization induced by a transverse magnetic field was predicted for non-interacting electrons in a two-dimensional (2D) periodic structure. This new phenomenon, due to a subtle interplay between lattice geometry and the magnetic field, differs from Anderson localization in two essential points: it occurs in a pure system, without disorder, and the system eigenstates are not localized but non-dispersive states. In a tight-binding (TB) approach, it can be simply understood in terms of the Aharonov-Bohm effect which, at half a flux quantum per unit tile (half-flux), leads to fully destructive quantum interferences. For this flux, the set of sites visited by an initially localized wave-packet will be bounded in Aharonov-Bohm cages . This effect is absent on other regular periodic lattices at half-flux, such as the square and the triangular lattices. Superconducting wire networks are well suited to address phase interference phenomena driven by a magnetic field. These systems are extremely sensitive to the phase coherence of the superconducting order parameter over the network sites, which is exclusively determined by the competition between the external field and the network geometry. Besides, the quantum regime is accessible even in low $`T_c`$ diffusive superconductors: since all Cooper pairs condense in a quantum state, the relevant wavelength is associated with the macroscopic superfluid velocity and can be much larger than the lattice elementary cell. Also, the magnetic field corresponding to one superconducting flux quantum $`\mathrm{\Phi }_o=hc/2e`$ is easily accessible: it is about 1 mT for a network cell of $`1\mu `$m<sup>2</sup>, in contrast to the unattainable 10<sup>3</sup> T for an atomic lattice. In addition, some features of the TB spectrum, namely the Hofstadter butterfly, are experimentally accessible in the model system of a superconducting wire network. As shown by de Gennes and Alexander, the linearized Ginzburg-Landau (GL) equations for a superconducting wire network can be mapped onto the eigenvalue equation of a TB Hamiltonian for the same geometry. This mapping is of particular relevance since one of the remarkable findings of Ref. is the total absence of dispersion in the TB spectrum at half-flux. In the context of a superconducting network, the localization effect is expressed by the inability of the superconducting wave function to carry phase information throughout the network, and therefore transport anomalies are expected. In this Letter we present transport measurements on 2D superconducting networks with the so-called $`T_3`$ geometry (see inset of Fig. 1). Our results allow us to confirm some of the exotic features of the $`T_3`$ energy spectrum related to the localization mechanism.
The field-temperature (H,T) superconducting transition line is determined and related to the ground state of the $`T_3`$ spectrum. We also compare the critical current as a function of the magnetic field with calculations of the group velocity. The striking behavior found at half-flux is discussed as a possible signature of localization effects. The strong broadening of the normal to superconductor transition supports this interpretation. Very few experiments have been reported so far on localization phenomena in superconducting networks, and only the issues of irrational magnetic flux or disorder have been addressed . The network pattern was defined on a 600 nm thick layer of positive UV3 resist using an e-beam writer Leica VB6-HR. A $`100`$ nm thick layer of pure aluminum was e-beam evaporated in an ultra-high vacuum chamber, followed by the resist lift-off. We designed two series with a large patterned area: Star 600 defined on a $`0.6\times 1`$ mm<sup>2</sup> surface and Star 20 defined on a surface of $`0.02\times 1`$ mm<sup>2</sup>, which required stitching of $`200\times 200\mu `$m<sup>2</sup> writing fields. The elementary tile side length is $`a=1\mu `$m, the wires having 100 nm width and 100 nm thickness. Dynamic resistance measurements were performed using a 33 Hz ac four terminal resistance bridge with an ac measuring current of 20 nA. Sample probes were connected to the cryostat terminals by ultrasonic bonding of $`25\mu `$m gold wires. Non-invasive voltage probes were placed at 0.2 mm from the current pads. The zero field transition temperatures $`T_c(0)`$ were 1.234 K for Star 600 and 1.240 K for Star 20, using as resistance criterion half the normal state resistance $`R_n`$ at 1.25 K, which is $`4.20\mathrm{\Omega }`$ and $`63.56\mathrm{\Omega }`$, respectively. The resistive transition width in zero field is 3 mK ($`10\%`$–$`90\%`$) for both samples, indicating a good homogeneity of the networks. The field dependent transition temperature $`T_c`$(H) was monitored by locking the temperature controller to keep the sample resistance at $`0.5R_n`$ as the magnetic field is varied. The experimental data are to be compared with the lowest energy solution of the network linear GL equations, which is given in terms of the ground state eigenvalue $`ϵ_g(f)`$ of the TB spectrum by, $$1-\frac{T_c(f)}{T_c(0)}=\frac{\xi (0)^2}{a^2}\mathrm{arccos}^2\left(\frac{ϵ_g(f)}{\sqrt{18}}\right),$$ (1) where $`\xi (0)`$ is the superconducting coherence length at zero temperature and $`f`$ the frustration. Neglecting field screening effects, $`f=\mathrm{\Phi }/\mathrm{\Phi }_o`$ where $`\mathrm{\Phi }=Ha^2\sqrt{3}/2`$ is the magnetic flux through a rhombus tile. The transition line of Star 600 is plotted in Fig. 1 in reduced units $`1-T_c(f)/T_c(0)`$ as a function of frustration. Since the transition line is periodic in $`\mathrm{\Phi }_o`$, we only display it in the field range $`0<f<1`$. A small parabolic background due to field penetration in the wires was subtracted from the experimental $`T_c(f)`$. We also display the theoretical $`T_c(f)`$ obtained using Eq. (1) and $`\xi (0)=157`$ nm, the only adjustable parameter. The fine field structure of the experimental data is very well described by the theoretical curve. Distinct downward cusps are visible at low order rationals $`f=1/q`$, for $`q=3,4,6`$, and 2/9. They reflect the long range phase ordering of the order parameter among network sites, established at fields commensurate with the underlying lattice. These features were discussed previously .
The novel feature of the transition line occurs at $`f=1/2`$, where the maximum $`T_c(f)`$ depression (30 mK) is achieved, associated with an inversion of the concavity of the field modulation. This anomalous cusp persists distinctly for all criteria used in the $`T_c(f)`$ determination, from $`0.06R_n`$ to $`0.87R_n`$, though the downward cusps at other rationals fade out with increasing temperature. This cusp is similar to the $`T_c`$ variation in a single loop geometry close to $`f=1/2`$ , and is characteristic of quantum effects determined on a finite length scale. It indicates that at half-flux the network transition is determined by fluxoid quantization at independent tiles. The maximum depression of $`T_c(f)`$ at half-flux shows the strong incommensurability at this field. To our knowledge, these results are the first experimental observation of such an effect on an extended periodic network. Besides, they indicate that 2D periodicity is not a sufficient condition for a commensurate state to exist at rational $`f`$. We also observed a strong broadening of the resistive transition $`\mathrm{\Delta }T_{width}`$ at half-flux, as displayed in Fig. 2 for Star 600. $`\mathrm{\Delta }T_{width}`$ is obtained as the difference between the $`T_c(f)`$ curves taken for criteria $`0.6R_n`$ and $`0.1R_n`$, respectively. The anomalous enhancement (up to 12 mK) at half-flux, twice the average width over most of the field range, confirms the singular behavior found at this field. At the strong commensurate fields $`f=0,\frac{1}{6},\frac{1}{3}`$, $`\mathrm{\Delta }T_{width}`$ is sharply reduced to a few mK, as expected for a phase ordered system. Close to these fields, the phase of the order parameter at the network sites is able to “lock” into the nearest commensurate state with the creation of a few mobile defects, slightly broadening the transition. Close to half-flux no commensurate state is available, thus phase correlations between network sites cannot be established, leading to a strong broadening of the transition. In fact, the $`T_3`$ tiling geometry can be viewed as an ensemble of three coupled triangular sublattices, two formed by the 3-fold sites and another by the 6-fold sites. The singular properties of the $`T_3`$ spectrum $`ϵ(f)`$ at frustration $`f`$ are simply revealed by the transformation: $$ϵ^2(f)-6=2\mathrm{cos}(\pi f)ϵ_T(f_T)$$ (2) that relates $`ϵ(f)`$ to the triangular lattice eigenvalues $`ϵ_T(f_T)`$ at frustration $`f_T=3f/2`$. At half-flux, due to the cancellation of the cos($`\pi f`$) prefactor, all the energy levels collapse into two highly degenerate discrete levels at $`ϵ=\pm \sqrt{6}`$, forming flat, non-dispersive bands, in addition to the $`ϵ=0`$ flat band. Due to the mapping of the TB problem onto the linearized GL approach, the superfluid velocity can be expressed in terms of the group velocity of the band spectrum close to the ground state. In the context of a superconducting wave function, a non-dispersive state cannot carry phase information through the network, contrary to a Bloch state. Therefore, critical current measurements give information on the network’s ability to sustain a supercurrent, i.e., both a finite order parameter and a finite superfluid velocity. The critical current was studied as a function of field from the dynamic resistance characteristics vs increasing dc bias current at temperatures close to $`T_c(0)`$. The criterion used was the threshold current for which the dynamic resistance exceeds $`0.2\%R_n`$.
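As a consistency check of Eqs. (1) and (2), one can feed the collapsed half-flux level into the transition-line formula together with the fitted parameters quoted above. The minimal sketch below assumes that the relevant $`ϵ_g(1/2)`$ entering Eq. (1) is the collapsed level $`\sqrt{6}`$ (the band top $`\sqrt{18}`$ being reached at $`f=0`$), which is our reading of the text.

```python
import numpy as np

# Consistency check of Eqs. (1) and (2).  At f = 1/2 the cos(pi f)
# prefactor of Eq. (2) vanishes, so the spectrum collapses to eps = +-sqrt(6)
# (at f = 0 the top of the band is sqrt(18)).  Feeding eps_g(1/2) = sqrt(6)
# into Eq. (1) with the fitted xi(0) = 157 nm, a = 1 um, Tc(0) = 1.234 K
# should reproduce the ~30 mK depression reported for Star 600.
xi0, a, Tc0 = 157e-9, 1.0e-6, 1.234          # m, m, K

def dTc(eps_g):
    return Tc0 * (xi0 / a)**2 * np.arccos(eps_g / np.sqrt(18.0))**2

print(f"Tc depression at f=1/2: {1e3 * dTc(np.sqrt(6.0)):.1f} mK")
# -> about 28 mK, consistent with the measured maximum depression of 30 mK.
```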
Within the sensitivity limits of our measurements, it corresponds to the maximum current that the circuit is able to carry without dissipation. To avoid heating effects due to feeding a large current, we used sample Star 20 with a width of 23 cells (20 $`\mu `$m). The critical current density per wire $`J_c(T,f)`$ is obtained from the network critical current divided by the number of parallel wires (25) and the wire cross section. Close to $`T_c`$, we expect the critical current to follow a $`3/2`$ power law that generalizes the depairing current of a one-dimensional superconducting wire to a superconducting network: $$J_c(T,f)=J_nC(f)\left(\frac{T_c(f)-T}{T_c(0)}\right)^{3/2},$$ (3) where $`J_n`$ is the zero field depairing current density at T = 0 K. The field dependent coefficient $`C(f)`$ is derived from the band curvature $`\partial ^2ϵ_g/\partial k^2`$ close to the ground state $`ϵ_g(f)`$ by, $$C^2(f)=\frac{1}{a^2\sqrt{18-ϵ_g^2}}\frac{\partial ^2ϵ_g}{\partial k^2}\mathrm{arccos}\frac{ϵ_g}{\sqrt{18}}$$ (4) In Fig. 3 is displayed the field dependence of $`J_c`$, at $`T=0.96T_c(0)`$ (1.185 K). Sharp peaks are obtained for the same frustrations as the downward cusps observed in the transition line. The remarkable finding is the total absence of a peak in the critical current at the lowest order rational $`f=1/2`$, which instead exhibits a clear minimum at this field. For all studied temperatures the critical current was always found to exhibit the lowest values at $`f=1/2`$. In the inset of Fig. 3 is displayed the theoretical $`J_c`$ obtained using (3) and (4) for $`f=p/q`$, $`q<50`$. We used $`T=0.96T_c(0)`$ and $`J_n=7\times 10^6`$ A cm<sup>-2</sup>, estimated from a 3/2 power law fit of the zero field $`J_c(T,0)`$ data. The critical current consists of successive $`\delta `$-functions at rational frustration $`f=p/q`$ except at 1/2. The highest values of $`J_c`$ were obtained for $`f=0,\frac{1}{6},\frac{1}{3}`$ and the symmetric values. For these rational frustrations the spectrum is band-like and the group velocity is finite (Bloch states). The same applies to other regular periodic lattices, such as the square lattice, where a checkerboard commensurate state leads to a pronounced peak at f=1/2 . However, in the $`T_3`$ case at half-flux the group velocity is strictly zero due to the absence of dispersive states, and the critical current vanishes. This situation is unprecedented for an infinite tiling and is due to the special $`T_3`$ geometry. A similar effect is found when $`f`$ approaches irrational frustration (for example, at small frustration $`f=1/q`$, with large $`q`$): the bandwidth then becomes exponentially small and therefore the group velocity and the critical current are suppressed. The experimental data follow the same qualitative behavior vs field as the theoretical predictions. The $`3/2`$ power dependence of $`J_c`$ vs temperature at constant magnetic field was observed in the temperature range $`T_c(f)-T<20`$ mK. Namely, the field dependent coefficient $`C(f)`$ at half-flux is reduced to $`17\%`$ of its zero field value. The depression of $`C(f)`$, and thus of $`J_c`$, reflects the effect of the band structure on the superfluid velocity, and provides strong evidence of the non-dispersive character of the state at $`f=1/2`$, although the measured critical current does not vanish. One possible explanation for the incomplete suppression of $`J_c`$ is the finite size of the network.
A current carrying state (an edge state similar to surface superconductivity in finite type-II superconductors) exists along each edge of the finite network and is expected to lead to a non-zero supercurrent. A second possible origin for finite $`J_c`$ is the influence of the GL non-linear term, which was neglected in Eq. (3). Presumably, the non-linear terms in the GL formulation are responsible for degrading the fine features of the band structure and therefore give a finite critical current. The critical current observed in Fig. 3 at small frustrations, for example close to zero, may have the same origin. To go further, an exact solution of the non-linear GL equations would be needed. Nevertheless, as demonstrated by Abrikosov, good physical insight into the superconducting properties can be obtained from the eigenstates of the linearized GL equation. This phenomenon suggests interesting properties of the vortex sublattice. In this context, the coupling between network sites can be expressed as a landscape of energy barriers against vortex motion. For example, at $`f=1/3`$, a periodic vortex configuration can be easily constructed, perfectly matching the underlying lattice. This configuration is strongly pinned and very stable against driving currents, leading to a large critical current. The decoupling of some network sites at $`f=1/2`$ suggests that, in the absence of pinning, the vortex configuration will be highly disordered. Therefore, a significant dissipation is expected for small driving currents, as revealed in our experiments by the suppression of the critical current and the anomalous transition broadening. These considerations are supported by preliminary experiments on vortex decoration which indicate a highly disordered vortex distribution at $`f=1/2`$ and will be addressed elsewhere. More subtle is the commensurate state at $`f=1/6`$, which corresponds to the $`f_T=1/4`$ state of the triangular lattice formed by the 6-fold sites (see Eq. 2). As shown in Ref. , the uniformly frustrated XY model on a triangular lattice at $`f_T=1/4`$ presents an accidental degeneracy of the ground state with zero-energy domain walls, which can weaken the global phase coherence. In our experiments we do observe a critical current peak at $`f=1/6`$, almost as large as at $`f=1/3`$. The singular behavior observed experimentally at $`f=1/2`$ ($`f_T=3/4`$) is completely absent at $`f=1/6`$ ($`f_T=1/4`$) and therefore cannot be simply related to the triangular lattice problem. Besides, it is not clear if the accidental degeneracy persists for tight-binding coupling. In summary, the anomalous transport behavior of the $`T_3`$ superconducting networks at half-flux is consistent with the localization effect predicted in Ref. . The transition line is in excellent agreement with the related $`T_3`$ ground state. The broad transition width at half-flux indicates a strong enhancement of phase fluctuations, which we assign to destructive quantum interference at this field. The reduction of the critical current at $`f=1/2`$, which had never been observed before in periodic superconducting networks, illustrates the inability of the network to sustain a transport current. This behavior is the analog, in the superconductor case, of the metal-insulator transition predicted in Ref. . We acknowledge B. Douçot, R. Mosseri and O. Buisson for fruitful discussions. The e-beam lithography was carried out with PLATO organization teams and tools. C.C.A.
is supported by a grant from the Portuguese Ministry for Science and Technology. Discussions within the TMR n<sup>o</sup>FMRX-CT97-0143 are acknowledged.
# 1 Introduction If a system of nuclear spins is to be of real practical use as an NMR quantum computer \[1-13\] it should consist of tens of coupled nuclei, otherwise the accessible algorithms will afford no real advantage over those that can be readily implemented on a classical electronic computer. Unfortunately the number of suitable (and non-radioactive) spin-1/2 nuclei is strictly limited; the prime candidates are <sup>1</sup>H, <sup>13</sup>C, <sup>15</sup>N, <sup>19</sup>F, <sup>29</sup>Si and <sup>31</sup>P, while chemical considerations appear to rule out the two noble gases <sup>3</sup>He and <sup>129</sup>Xe, together with most of the heavy metals. Consequently, an NMR quantum computer with a very large number of qubits is only likely to be attainable if it includes extensive homonuclear systems of coupled spins. These spins must all form part of the same molecule; consequently, chemical bonding constraints favour the nuclei <sup>1</sup>H, <sup>13</sup>C, <sup>19</sup>F. Thus a key task is to understand how to perform a logic operation on a homonuclear system, for example an array of coupled <sup>13</sup>C spins. This is the difficult part of the problem, whether or not heteronuclear spins are also involved, and is the subject of this Letter. A recent paper has emphasized this new “do nothing” aspect of NMR quantum computation. For a total of $`N`$ homonuclear spins, all coupled to each other, we select either one “active” spin to perform a rotation, or two “active” spins to evolve under their spin-spin coupling operator, leaving $`N-1`$ or $`N-2`$ “spectator” spins to be returned to their initial states at the end of the sequence. Because each logic gate has an appreciable duration, normally measured in tens of milliseconds, the “do nothing” feature is non-trivial, involving the refocusing of all chemical shift and spin-spin coupling interactions of the spectator spins, and the couplings between the active spin(s) and all the spectators. A fundamental constraint is that no two coupled spins should experience simultaneous soft pulses. Although spin-spin coupling can be neglected during a short hard radiofrequency pulse, this is not the case for simultaneous pulses that are selective in the frequency domain. If the duration of the two soft pulses is comparable with the reciprocal of the coupling constant, undesirable antiphase magnetization and multiple-quantum coherences are generated \[14-17\]. This so-called double-resonance two-spin effect “TSETSE” is irreversible and interferes with the proper operation of the logic gate. This constraint dictates the form of the refocusing pulse sequence, which can become highly complex for large numbers of spins if they are all coupled together with appreciable coupling constants. The soft refocusing pulses eventually “collide” in the time domain, setting a lower limit on the duration of the logic gate. Higher applied magnetic fields mitigate the problem by increasing chemical shift differences and hence permitting shorter (less selective) soft pulses. Two recent papers describe pulse sequences that are more efficient in that the total number of $`\pi `$ pulses does not increase exponentially with $`N`$ for a fully-coupled system of $`N`$ spins.
In both reports the analogy with Hadamard matrices is stressed, since to refocus chemical shift or spin-spin interactions requires equal periods of “positive” and “negative” evolution under the $`I_z`$ or $`2I_zS_z`$ operators, and the Hadamard matrices provide an elegant formulation of this requirement, suggesting the most efficient recursive expansion procedure. Unfortunately the proposed schemes involve several pairs of simultaneous $`\pi `$ pulses and are therefore unsuitable for homonuclear systems. In general the design of suitable homonuclear pulse sequences can be based on traditional NMR “refocusing” considerations, or more formally on patterns derived from the Hadamard matrices. We draw up a section of a conventional Hadamard matrix in which the rows represent the different spins ($`I,S,R,Q`$, etc.) while the columns indicate the sense of nuclear precession ($`+`$ or $``$) in the different time segments. Spin-spin coupling between any two representative spins is refocused if the corresponding rows are orthogonal, the characteristic property of a Hadamard matrix. Consequently, for a four spin system, the pattern: $`\begin{array}{ccccc}I:& +& +& +& +\\ S:& +& +& -& -\\ R:& +& -& +& -\\ Q:& +& -& -& +\end{array}`$ (5) using the $`4\times 4`$ Hadamard matrix, will ensure that spin $`I`$ evolves only according to its chemical shift, with no splittings due to $`S,R`$ and $`Q`$, while $`S,R`$ and $`Q`$ have chemical shifts and spin-spin splittings refocused. However this matrix does not satisfy the constraint that no two soft $`\pi `$ pulses are simultaneous. A possible pattern which does satisfy the constraint is as follows: $`\begin{array}{ccccccccc}I:& +& +& +& +& +& +& +& +\\ S:& +& +& +& +& -& -& -& -\\ R:& +& +& -& -& -& -& +& +\\ Q:& +& -& -& +& +& -& -& +\end{array}`$ (10) This matrix involves selecting four rows from the $`8\times 8`$ Hadamard matrix. We see that one possible pattern of refocusing pulses forms a (1-2-4) cascade as illustrated in Fig. 1. To take a concrete example, consider first of all the case of five homonuclear spins ($`ISRQT`$) all interacting with each other with appreciable coupling constants. Suppose we wish to construct a controlled-not (CNOT) gate, which is written in terms of product operators as $`e^{-i\frac{\pi }{2}I_y}e^{-i\frac{\pi }{2}(I_z+S_z)}e^{+i\frac{\pi }{2}(2I_zS_z)}e^{+i\frac{\pi }{2}I_y}`$ (11) where, by convention, the operators are set out in time-reversed order (and we have ignored an irrelevant overall phase). The key step is the evolution of the $`I`$ and $`S`$ spins under the spin-spin coupling operator $`2I_zS_z`$. The overall duration of this sequence is determined by the “antiphase” condition: $$\tau =(2n+1)/(2J_{IS})$$ (12) where $`n`$ is an integer, normally zero. For the case of the CNOT as set out in (11), $`n`$ should be even. The $`R`$, $`Q`$, and $`T`$ spectator spins must be returned to their initial states at the end of the sequence. Refocusing of the appropriate chemical shifts and spin-spin interactions is achieved by two hard $`\pi `$ pulses and the (1-2-4) cascades of selective $`\pi `$ pulses shown in Fig. 1. Each spin experiences an even number of $`\pi `$ pulses, and the soft $`\pi `$ pulses are never applied simultaneously. The extension to further coupled nuclear spins is straightforward but daunting. Additional stages are added to the cascades, and each new stage doubles the number of time segments and contains twice as many soft $`\pi `$ pulses.
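The row properties claimed for matrix (10) are easy to verify programmatically. The sketch below assumes that a soft π pulse on a given spin sits at each boundary where its row changes sign (the two global hard pulses, which act on all spins at once, are ignored here).

```python
import numpy as np

# Programmatic check of matrix (10).  For a fully coupled system we need
# (i) pairwise orthogonal rows, so all couplings are refocused, and
# (ii) no shared sign-change boundary, so no two coupled spins ever
# receive simultaneous soft pulses.
M = np.array([[+1, +1, +1, +1, +1, +1, +1, +1],   # I
              [+1, +1, +1, +1, -1, -1, -1, -1],   # S
              [+1, +1, -1, -1, -1, -1, +1, +1],   # R
              [+1, -1, -1, +1, +1, -1, -1, +1]])  # Q

def flips(row):
    """Segment boundaries at which this spin receives a soft pi pulse."""
    return {i for i in range(len(row) - 1) if row[i] != row[i + 1]}

for i in range(4):
    for j in range(i + 1, 4):
        assert M[i] @ M[j] == 0                   # coupling refocused
        assert not (flips(M[i]) & flips(M[j]))    # no simultaneous pulses
print("soft pulses per spin:", [len(flips(r)) for r in M])   # [0, 1, 2, 4]
```

The pulse counts per row, 1-2-4, are exactly the cascade of Fig. 1; matrix (5) fails the same test because rows R and S (and R and Q) change sign at common boundaries.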
Since these pulses have an appreciable duration, a point is eventually reached where the overall length of the sequence has to be increased to accommodate so many soft pulses without overlap in the time domain. This would be implemented by increasing $`n`$ in Eq. (4). Herein lies the principal drawback of the method, for a long sequence would be subject to appreciable decoherence effects. The onset of this condition is determined by the chemical shift dispersion of the nucleus under investigation at the field strength of the spectrometer. This favours <sup>13</sup>C or <sup>19</sup>F nuclei in the highest possible field, because this permits the shortest soft $`\pi `$ pulses. Fortunately in practice spin-spin couplings are relatively local, and in a large spin system many of the longer-range interactions are vanishingly small. This puts a quite different complexion on the “do nothing” feature. One can then find pulse patterns for the logic gate that do not incur an exponential increase in the number of pulses required as the number of spins is increased. Consider the practical case of a system of spins disposed along a “straight” chain (no branching) where spin-spin couplings are limited to one-, two- and three-bond interactions, neglecting the rest on the grounds that they would be too weak to cause significant TSETSE effects. This would usually be a good approximation if we decide to study coupled <sup>13</sup>C spins in an isotopically enriched compound. For simplicity of illustration the active spins $`IS`$ have been assumed to be at the end of the chain, but identical conclusions can be drawn for spins near the middle of a chain. Simultaneous $`\pi `$ pulses are now allowed, provided that the spins in question are separated by four or more chemical bonds, and provided that the two soft pulses are not too close in frequency . This affords a dramatic simplification and there is no longer an exponential growth in complexity as more spins are added to the chain. Consider a chain of nine coupled spins ($`ISRQTUVWX`$). All the relevant splittings can be refocused by the application of repeated (1-2-4) cascades of soft $`\pi `$ pulses separated by a stage where one spin ($`U`$ in this case) has no soft $`\pi `$ pulses at all (Fig. 2). This recursive expansion can be continued indefinitely without increasing the number of time segments in the sequence beyond sixteen. Note that at no time are two soft $`\pi `$ pulses applied simultaneously to spins less than four bonds apart, but that all shorter-range splittings are refocused. The number of soft pulses increases essentially linearly with the total number of spins, while the overall duration of the sequence remains constant. The simplification can be taken one step further by neglecting all except one- and two-bond interactions. Then a sequence made up of only eight time segments suffices and (1-2) cascades can be employed (Fig. 3). Simultaneous soft $`\pi `$ pulses are only applied to spins separated by at least three bonds, but all shorter-range interactions are refocused. Finally, in practical situations where all except the one-bond couplings can be neglected, an even simpler sequence of single soft $`\pi `$ pulses can be used. These ideas can be expressed in a slightly different form for coupled systems of protons or fluorine nuclei. The possible topologies comprise the two-bond geminal couplings, three-bond vicinal couplings, and the corresponding longer-range interactions. 
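A minimal sketch of this construction, under the assumption that the Fig. 3 pattern amounts to cycling the spectator rows through a one-flip row, a two-flip row and a pulse-free row, shows that the constraints are met for arbitrary chain length with a fixed eight segments:

```python
import numpy as np

# Schematic reading of Fig. 3: for a chain where only one- and two-bond
# couplings are appreciable, spectator rows repeat with period three, so
# the number of soft pulses grows only linearly with the chain length.
# Treating the active pair I,S as flip-free is part of this assumption.
ONE  = np.array([+1, +1, +1, +1, -1, -1, -1, -1])   # one soft pulse pair
TWO  = np.array([+1, +1, -1, -1, -1, -1, +1, +1])   # two soft pulses
FLAT = np.array([+1] * 8)                           # no soft pulses

def chain_rows(n_spins):
    rows = [FLAT, FLAT]                             # active spins I and S
    cycle = [ONE, TWO, FLAT]
    rows += [cycle[k % 3] for k in range(n_spins - 2)]
    return rows

def flips(row):
    return {i for i in range(7) if row[i] != row[i + 1]}

rows = chain_rows(12)
for i in range(12):
    for j in range(i + 1, 12):
        if j - i <= 2 and (i, j) != (0, 1):         # a coupled pair
            assert rows[i] @ rows[j] == 0           # coupling refocused
            assert not (flips(rows[i]) & flips(rows[j]))
print("all couplings over <= 2 bonds refocused; simultaneous soft pulses "
      "only occur for spins >= 3 bonds apart")
```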
Since the coupling constants do not necessarily fall off monotonically with the number of intervening bonds, we consider them in order of decreasing magnitude, neglecting all those below a predetermined threshold, thus avoiding the awkward and unrealistic case where every spin interacts with every other with an appreciable coupling constant. An experimental test of these proposals was carried out on six coupled protons in inosine (Fig. 4) dissolved in dimethylsulfoxide-$`d_6`$ containing some heavy water. The hydroxyl resonances were removed by exchange with deuterium. As before, the $`I`$ and $`S`$ spins were allowed to evolve under the scalar coupling operator, while $`R`$, $`Q`$, $`T`$, and $`U`$ were passive spectators. A modified pulse sequence (the top six rows of Fig. 5) was used because it incorporates all the soft $`\pi `$ pulses in pairs, an arrangement well known to compensate pulse imperfections . The sign matrix corresponding to left-hand side of the sequence shown in Fig. 5 is not a Hadamard matrix but is constructed by repeating sections of a Hadamard matrix. The first six rows of the sign matrix are $`\begin{array}{ccccccccc}I:& +& +& +& +& +& +& +& +\\ S:& +& +& +& +& +& +& +& +\\ R:& +& & & & & +& +& +\\ Q:& +& +& +& & & & & +\\ T:& +& +& +& +& +& +& +& +\\ U:& +& & & & & +& +& +\end{array}`$ (19) Note that no soft $`\pi `$ pulses are applied to spin $`T`$; row $`T`$ is therefore not orthogonal to rows $`I`$ or $`S`$. This is allowed since we have assumed that $`J_{IT}`$ and $`J_{ST}`$ are negligible. The sequence for a controlled-not gate, (3), employs evolution under the $`2I_zS_z`$ operator for a period $`(2n+1)/(2J_{IS})`$, giving a multiplet that is antiphase with respect to the $`J_{IS}`$ splitting, but which requires a $`\pi /2`$ phase shift (the $`I_z`$ and $`S_z`$ terms) to convert from dispersion to absorption. Sequences are available to implement the requisite $`z`$ rotation while returning the spectator spins to their initial states. For the purposes of illustration we demonstrate only the evolution under the $`IS`$ coupling, obtaining the phase shift by resetting the receiver reference phase when recording the $`I`$ and $`S`$ responses. The soft pulse scheme set out in Fig. 5 is designed to refocus vicinal (three-bond) and long-range (four-bond) splittings. In practice only the vicinal couplings are well resolved for inosine in the rather viscous solvent. The resulting spectra are shown in Fig. 6. The soft $`\pi `$ pulses had a duration of 64 ms and were flanked by 1 ms delays, giving an overall duration of 528 ms for the sequence. The geminal coupling between $`I`$ and $`S`$ is 12.4 Hz, which requires $`n=6`$ in Eq.(4) to achieve the antiphase condition while still accommodating all the soft $`\pi `$ pulses. Although $`I`$ and $`S`$ show some effects of strong coupling (AB pattern) the antiphase condition is quite clear, while the four spectator spins have chemical shift and spin-spin couplings all refocused at the end of the sequence. This implements the key step of the controlled-not logic gate. Systems with larger numbers of coupled spectator spins are readily handled by extending the pulse patterns of Figs. 2, 3 or 5, without increasing the overall duration of the sequence. The number of soft $`\pi `$ pulses only increases linearly with the number of spectator spins. To summarise – there are not enough suitable spin-1/2 nuclear species to be able to construct an entirely heteronuclear NMR quantum computer with sufficient qubits to make it useful. 
Hence any viable device must contain extensive networks of coupled homonuclear spins, for example protons, fluorine or carbon-13. This imposes constraints which rule out some “efficient” schemes which are appropriate for heteronuclear systems, as the latter involve simultaneous soft pulses on pairs of coupled spins, a procedure well known to generate undesirable multi-spin coherences. We have demonstrated new sequences for the construction of a quantum logic gate with homonuclear spins, where the spectator spins undergo no net evolution for the duration of the gate. When applied to the most realistic case where each spin is coupled to a restricted set of neighbours (neglecting long-range couplings) they have the important feature that the total duration of the sequence does not increase as further spins are added to the system. Analogous considerations apply to systems made up of both homonuclear and heteronuclear spins; the latter are refocused with conventional hard $`\pi `$ pulses of negligible duration. We show experimental results for a six qubit system: the six coupled protons in inosine. Figure Captions Fig. 1. Refocusing scheme for a system of five homonuclear spins where each pair of spins has an appreciable spin-spin coupling. The ellipses represent frequency-selective inversion pulses. The active spins $`I`$ and $`S`$ evolve under the $`2I_zS_z`$ operator and the duration of the sequence is set to $`\tau =1/(2J_{IS})`$. The spectator spins ($`R`$, $`Q`$, and $`T`$) are returned to their initial states at the end of the sequence. Note that the introduction of an additional spectator spin would necessitate a further stage of sixteen soft $`\pi `$ pulses. Fig. 2. Refocusing scheme for a chain of nine homonuclear spins for the case that spin-spin coupling over more than three chemical bonds can be neglected. $`I`$ and $`S`$ are the active spins evolving under the $`2I_zS_z`$ operator. The (1-2-4) cascade pattern would be repeated as more spectator spins are added. Consequently the complexity increases essentially linearly with the number of spectator spins. Fig. 3. Refocusing scheme similar to that shown in Fig. 2, except that spin-spin coupling over more than two chemical bonds is neglected. The (1-2) cascade pattern may be repeated indefinitely as more spectator spins are introduced without increasing the overall duration of the sequence. Fig. 4. The six-spin system of inosine with active protons labelled $`I`$ and $`S`$ and the spectators $`R,Q,T`$ and $`U`$. Fig. 5. A refocusing scheme equivalent to that shown in Fig. 3 incorporating soft $`\pi `$ pulses in pairs to compensate pulse imperfections. The first six rows of this sequence were used to obtain the experimental results shown in Fig. 6. Fig. 6. (a) Part of the conventional 600 MHz spectrum of the six protons of inosine. (b) The spectrum obtained after the pulse sequences of Fig. 5, showing the $`I`$ and $`S`$ responses in antiphase dispersion with respect to $`J_{IS}=12.4`$ Hz. (c) Individual multiplets expanded in frequency 3.6 times, with a $`\pi /2`$ phase shift applied to $`I`$ and $`S`$ to restore the absorption mode. Note that the spectators ($`R`$, $`Q`$, $`T`$ and $`U`$) are essentially unchanged at the end of the sequence, apart from some minor attenuation through spin-spin relaxation.
# Comment on “Quantum Suppression of Shot Noise in Atom-Size Metallic Contacts” In a recent letter , van den Brom and van Ruitenbeek found a pronounced suppression of the shot noise in atom-size gold contacts with conductances near integer multiples of $`G_0=2e^2/h`$, revealing unambiguously the quantized nature of the electronic transport. However, the ad hoc model they introduced to describe the contribution of partially-open conductance channels to the shot noise is unable to fit either the maxima or the minima of their shot noise data. Here we point out that a model of quantum-confined electrons with disorder quantitatively reproduces the measurements of Ref. . We model a nanocontact in a monovalent metal as a deformable constriction in an electron gas, with disorder included via randomly distributed delta-function potentials . For convenience, the system is taken to be two-dimensional. The transmission probabilities $`T_n`$ of the conducting channels are obtained via a modified recursive Green’s function algorithm . The dimensionless shot noise $`s_I`$ at zero temperature is $$s_I\equiv \frac{P_I}{2eI}=\frac{\sum _nT_n(1-T_n)}{\sum _nT_n},$$ (1) where $`P_I`$ is the shot noise spectral density and $`I`$ is the time-average current. Plotting $`s_I`$ versus the conductance $`G=G_0\sum _nT_n`$ eliminates the dependence on dimensionality for an ideal contact, provided no special symmetries are present. Starting from the numerical data that were used to generate the conductance histogram in Ref. , we compute the mean and standard deviation of $`s_I`$ and $`T_n`$ as functions of $`G`$. The averages are taken over an ensemble of impurity configurations and contact shapes. The agreement between the experimental results for particular contacts and the calculated distribution of $`s_I`$ shown in Fig. 1(b) is extremely good: 67% of the experimental points lie within one standard deviation of $`s_I`$ and 89% lie within two standard deviations . It should be emphasized that no attempt has been made to fit the shot-noise data: the numerical data of Ref. , where the length of the contact and the strength of the disorder (mean-free path $`k_F\mathrm{}=70`$) were chosen to give qualitative agreement with experimental conductance histograms for gold , have simply been re-analyzed to calculate $`s_I`$. Contrary to the model of Ref. , the minima of $`s_I`$ do not occur at integer multiples of $`G_0`$, but are shifted to lower values, which correspond not to maxima of the conductance histogram, but rather to maxima of $`T_n`$. Fig. 1(c) shows that the number of partially-open channels increases in proportion to $`G`$. For comparison, the shot noise for a contact with only one partially-open channel, which sets a lower bound, is shown as a dashed curve in Fig. 1(b). The presence of several partially open channels for $`G>G_0`$ increases $`s_I`$ above this lower bound, leading to an apparent saturation at $`s_I\approx 0.18`$ for larger contacts. Neither the maxima nor the minima of the experimental data can be fit by the model of Ref. , which includes only two partially-open conductance channels. The excellent agreement between the shot-noise data of Ref. and our model calculation suggests that quantum transport in gold nanocontacts can be well described by a model which includes only two essential features, quantum confinement and coherent backscattering from imperfections in the contact. J. B. acknowledges support from Swiss National Foundation PNR 36 “Nanosciences” grant # 4036-044033. J. Bürki<sup>1,2,3</sup> and C. A.
Stafford<sup>1</sup> <sup>1</sup>University of Arizona, Tucson, Arizona 85721 <sup>2</sup>Université de Fribourg, 1700 Fribourg, Switzerland <sup>3</sup>IRRMA, EPFL, 1015 Lausanne, Switzerland Submitted to Phys. Rev. Lett. on 3 June 1999 PACS numbers: 72.70.+m, 72.15.Eb, 73.23.Ad, 73.40.Jn
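The suppression factor of Eq. (1) is simple to evaluate numerically from a set of channel transmissions. A minimal sketch follows; the transmission values are illustrative assumptions, not the numerical data of Ref. .

```python
import numpy as np

def shot_noise_suppression(T):
    """Dimensionless shot noise s_I = P_I/(2eI) of Eq. (1), computed from
    the transmission probabilities T_n of the conducting channels."""
    T = np.asarray(T, dtype=float)
    return np.sum(T * (1.0 - T)) / np.sum(T)

# Example: one fully open channel plus one partially open channel
# (hypothetical values chosen for illustration).
T = [1.0, 0.4]
G = sum(T)  # conductance in units of G_0 = 2e^2/h
print(G, shot_noise_suppression(T))  # -> 1.4  0.1714...
```

Note that $`s_I`$ vanishes only when every channel is either fully open or fully closed, which is why the measured minima track the maxima of $`T_n`$ rather than the integer conductance values.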
no-problem/9907/nucl-ex9907008.html
ar5iv
text
# Essay on the Gamma Ray Laser September 1979<sup>1</sup><sup>1</sup>1Typed in 1999 after the original September 1979 manuscript The coherent, high-intensity sources of light, known as lasers, represented a great progress in modern optics. Whenever a sufficiently high population can be stored in a higher state of a quantal system which also has a lower, relatively unpopulated, energy level, the process of stimulated emission confers to that source the character of a laser. We say “character”, because strictly speaking, an essential component of a laser is the resonant cavity. But if the particles in the higher state are so dense and the cross section of the process so big as to give rise to large gains, the upper state is depopulated by a travelling wave: we are dealing in this case not with a true laser, but rather with a superradiant source. There is a property of the laser radiation which is conserved even with superradiant sources: the high intensity of the radiation; to a certain extent, the directionality of the radiation is also conserved. Usually the optical cavity consists of two mirrors with good reflectivity at the operating frequency of the laser. The property of reflection is related at optical frequencies to the interaction of the electromagnetic field with the free electrons characteristic of the internal structure of the mirror. Reflective materials can be found which cover the whole range of optical frequencies. The basic concepts related to the optical laser, like discrete energy levels, populating of the higher states, stimulated emission, inversion and gain, are in principle applicable to any quantal system. Practically, it is not always easy to accommodate the requirements for inversion and gain. Lasers are built which make use of atomic and molecular transitions, and which operate in the infrared, in the visible, and in the ultraviolet. The upper limit of the photon energy attained with the excimer lasers is of about 10 eV, or roughly 120 nm. Attempts to obtain laser radiation in the X-ray region from atomic transitions to the inner electronic shells have failed because of the extremely short life-times of the corresponding excited states. The level of structure of the matter, next to the molecular and atomic one, is represented by the atomic nucleus. A given nucleus can exist in the ground state, or in one of its excited states. The transitions between these states are accompanied by the emission or absorption of electromagnetic radiation. The radiations characteristic of nuclear transitions are called gamma rays and their energy ranges from keV’s to tens of MeV. The life-times of the nuclear excited states cover an extraordinarily wide range of values: states with a life-time of $`10^{-18}`$ sec, and excited states with life-times of years, are known; a frequent life-time of a nuclear level in the 1 MeV region is 1 nsec. Because the nucleus is relatively isolated from the other surrounding nuclei by the electronic shell, the width of the levels is not affected by collisional processes. Therefore, the cross section for many resonant transitions is the Breit-Wigner one, which is proportional to the square of the wavelength of the electromagnetic radiation, and lower-energy gamma rays have cross sections as large as $`10^{-18}`$ cm<sup>2</sup>. Another consequence of the screening between the nuclei, mediated by the electron cloud, is the existence of long-lived excited states, called isomeric states, which correspond to highly forbidden gamma ray transitions.
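The quoted magnitude of the resonant cross section follows directly from the wavelength scaling. A rough numerical sketch, ignoring spin-statistics and internal-conversion factors (which reduce the value somewhat):

```python
import math

# Rough Breit-Wigner resonant cross section, sigma ~ lambda_bar^2, where
# lambda_bar = hbar*c/E is the reduced wavelength of the gamma transition.
HBARC_EV_CM = 1.9733e-5  # hbar*c in eV*cm

def sigma_resonant(E_eV):
    lam_bar = HBARC_EV_CM / E_eV  # reduced wavelength in cm
    return lam_bar ** 2           # cm^2, order of magnitude only

# The 14.4 keV Moessbauer transition of Fe-57 gives ~2e-18 cm^2,
# consistent with the 1e-18 cm^2 scale quoted in the text.
print(f"{sigma_resonant(14.4e3):.1e} cm^2")
```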
A nucleus may be found in its excited state as a result of the decay of another unstable nucleus, and a sample of such unstable nuclei represents the common gamma ray source. The unstable nuclei are obtained from other, stable nuclei by irradiation with fluxes of particles, which are most conveniently neutrons. Sometimes, the absorption by a nucleus of another particle, like a neutron, leaves the resulting nucleus in one of its excited states, and represents an in situ method for having nuclei in excited states. The thermal motion of the nuclei produces a Doppler broadening of the lines, and the energy shift known as the recoil shift, due to the momentum carried away by the nucleus in the process of transition, often puts the transition off resonance. These two effects, which are proportional to the transition energy and to its square, respectively, become important for gamma ray transitions, and produce a diminishing of the transition cross section by many orders of magnitude. Yet for transition energies not higher than 100 keV, the binding of the nuclei in crystalline lattices eliminates both the Doppler effect and the recoil energy shift, through what is known as the Mössbauer effect. Since a typical solid-state particle density is some $`10^{22}`$/cm<sup>3</sup>, since there are means of populating higher nuclear states, and since the cross section for the induced emission can be, under favorable conditions, some $`10^{-18}`$ cm<sup>2</sup>, there are no constraints in principle on the construction of a device operating as a laser at gamma-ray frequencies. Such a device would be a superradiating gamma ray source, rather than a laser, because it is not possible to construct the analogue of the optical cavity at gamma ray frequencies. The purpose of this paper is to investigate the conditions for the amplification of the gamma rays. Despite this apparently favorable situation, the attempts to obtain stimulated gamma radiation have failed. If one tries to populate the excited states by the conventional method of the decay of unstable nuclei, the particle density in the excited state is some $`10^{22}`$/cm<sup>3</sup>, corresponding to a sample consisting only of the unstable species, times the ratio of the life-time of the level to the half-life of the unstable nucleus. Even in the very favorable situation of a level of 10 $`\mu `$sec and a half-life of $`10^3`$ seconds, the particle density in the excited state cannot exceed $`10^{15}`$/cm<sup>3</sup>, and the resulting gain, $`10^{-3}`$/cm, is not sufficient for amplification. Levels with longer life-times cannot be used, because they are broadened by imperfect sample preparation, which diminishes the cross section; and unstable nuclei with half-lives lower than $`10^3`$ sec are difficult to concentrate up to $`10^{22}`$/cm<sup>3</sup>. Or one could try to populate the upper level by irradiating the sample with neutrons. The fraction of the nuclei in the excited state is the number of neutrons per cm<sup>2</sup> sec times the neutron capture cross section times the life-time of the level. <sup>2</sup><sup>2</sup>2It is implicit that the excitation by absorption of neutrons creates a chemically different nucleus which is in the excited state; the same holds for the decay of an unstable nucleus.
The highest neutron flux available today is some $`10^{15}`$/cm<sup>2</sup> sec, and assuming $`10^{-22}`$ cm<sup>2</sup> (100 barns) for the neutron capture cross section and $`10^{-5}`$ sec for the life-time of the level, the fraction becomes $`10^{-12}`$, which corresponds to an upper-state particle density not exceeding $`10^{11}`$/cm<sup>3</sup>, and a gain of $`10^{-7}`$/cm. The case of a neutron pulse appears more favorable: the fraction of the nuclei in the upper, excited state is the product of the number of neutrons per cm<sup>2</sup> times the neutron capture cross section. For a neutron capture cross section of 100 barns, the neutron density resulting in a gain of 1/cm is $`10^{18}`$/cm<sup>2</sup>, or at least $`10^{20}`$ neutrons per $`10^{-5}`$ sec pulse duration, and that is not a laboratory experiment. A gain of $`10^{-3}`$/cm is not, strictly speaking, without significance, but the samples appropriate for the assumed very narrow lines are probably sensitive to the high irradiation level required to obtain the population of the upper state. The previous estimates show that states with life-times below 10 $`\mu `$sec do not allow enough populating to obtain relevant gains. High particle concentrations could be obtained in the isomeric states. For long-lived states, one can think, in principle, of samples consisting only of isomeric nuclei, that is of densities of some $`10^{22}`$/cm<sup>3</sup>. The difficulty here is that the cross section of the transition to a lower state is the Breit-Wigner, $`10^{-18}`$ cm<sup>2</sup>, cross section, reduced by the ratio of the life-time of the lower level to the half-life of the isomeric state; and, as before, imperfect sample preparation and the concentration make this ratio something like $`10^{-5}/10^3`$, and that results in a gain not exceeding $`10^{-3}`$/cm, under most favorable conditions. One could think of using the transition between the upper, isomeric state, and the lower, ground state, but the imperfect sample preparation reduces the Breit-Wigner cross section in the ratio of the effective to the natural life-time of the isomeric state. We are confronted here with the puzzling problem of having a large concentration of nuclei in the upper, isomeric state, which seemingly cannot be transferred to the lower state. We have sometimes thought that a two-photon transition could provide a way to avoid the small matrix elements connecting the isomeric state with the lower states. The two-photon nuclear transitions mediated by the magnetic sublevels, which generally have cross sections up to $`10^{-20}`$ cm<sup>2</sup>, are not appropriate to this situation because their cross section contains the same small matrix element, which in fact defines the existence of the isomeric state. Other types of multiphoton processes are clearly not useful, because their cross section is definitely lower than $`10^{-23}`$ cm<sup>2</sup>. But one possibility remains: the existence of states nearly degenerate with the nuclear isomeric state.
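The scaling behind all of these estimates is simply gain ≈ (excited-state density) × (stimulated-emission cross section). A minimal sketch, using the order-of-magnitude densities quoted above and an assumed Breit-Wigner cross section of $`10^{-18}`$ cm<sup>2</sup>:

```python
SIGMA_BW = 1e-18  # assumed Breit-Wigner cross section, cm^2

def gain_per_cm(n_excited_per_cm3, sigma_cm2=SIGMA_BW):
    """Small-signal gain ~ n * sigma, in 1/cm."""
    return n_excited_per_cm3 * sigma_cm2

# Excited-state densities quoted in the text for the pumping schemes:
print(gain_per_cm(1e15))  # decay-populated sample  -> ~1e-3 /cm
print(gain_per_cm(1e11))  # steady neutron flux     -> ~1e-7 /cm
print(gain_per_cm(1e18))  # density needed for the significant gain of 1 /cm
```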
Roughly speaking, if the nuclei in the sample are initially in the isomeric state, an electromagnetic field irradiating the sample, of frequency resonant to the transition energy between the isomeric and the nearly degenerate state, will populate the nearly degenerate state; the larger the cross section for the single-photon, resonant transition from the isomeric to the nearly degenerate state, and the larger the power density in the electromagnetic field, the higher the particle concentration in the state degenerate with the isomeric state. The transition from the level nearly degenerate with the isomeric state, which level will be called the normal upper level, to a lower state has a Breit-Wigner cross section, provided the life-times of the normal and of the lower levels are not too long. This will also be a Mössbauer transition, because it is the energy transferred between the nucleus and the field which determines the Mössbauer character of the process, and not the absolute location of the levels. Now, if the gain resulting from the particle concentration in the normal upper state is large enough, a wave travelling in the active sample will be amplified and the upper level will be depleted in the creation of a gamma ray pulse. The size of the sample is limited by its absolute radioactivity. A 1 Ci sample consisting of nuclei in the isomeric state of half-life $`10^6`$ sec ($`10`$ days) would contain about $`3\times 10^{16}`$ isomeric nuclei. Assuming complete conversion of the isomeric nuclei to the lower state, the gamma-ray pulse would carry an energy of about 50 Joules. We also note that a global recoil momentum becomes important due to the high concentrations in the sample. The many intersections which occur in the Nilsson diagrams suggest that nearly degenerate states have a real existence. The fact that one of the states has to be isomeric, and the selection rule requirements, represent, of course, additional constraints. In any case, two nearly degenerate states with an energy separation corresponding to optical frequencies are beyond the resolving power of the existing gamma-ray spectroscopy. Spontaneous transitions between such states are also imperceptible because their probability is proportional to the third power of the frequency of transition. The cross section of a single-photon radiative exchange between the states $`n,n_0`$ of a quantal system and the mode of frequency $`\omega `$ and width $`\gamma `$ of the electromagnetic field is, apart from numerical coefficients, $$\sigma \sim \frac{e^2}{\hbar c}\frac{\omega }{\gamma }|v_{nn_0}|^2,$$ (1) where $`ev`$ is the reduced matrix element $`v_\alpha `$ defined in Eq. 44 of the Third Gamma Optical Paper.<sup>3</sup><sup>3</sup>3Note added in May 1999: the referenced work is S. Olariu et al., Phys. Rev. C 23, 50 (1981); it was submitted for publication on 30 November 1979. Since the natural width of a level is proportional to $`\frac{e^2}{\hbar c^3}\omega ^3|v_{nn_0}|^2,`$ and if the width of the mode is the same as the width of the level, then substituting the above expression in the expression, Eq. 1, for $`\sigma `$ gives the $`\lambda ^2`$ dependence of the Breit-Wigner cross section. In our case, the width $`\gamma `$ of the normal upper state is determined by the transition to the lower state (see Figure). The width is roughly $$\gamma \sim \frac{e^2}{\hbar c^3}\omega _{nl}^3|v_{nl}|^2.$$ (2) The cross section for a single-photon transition between the isomeric and the normal upper state is, according to Eq.
1, $$\sigma _{ni}\sim \frac{e^2}{\hbar c}\frac{\omega _{ni}}{\gamma }|v_{ni}|^2.$$ (3) Substituting in Eq. 3 the expression of the width of the normal upper state, Eq. 2, gives for the cross section $`\sigma _{ni}`$, $$\sigma _{ni}\sim \left(\frac{c}{\omega _{nl}}\right)^2\frac{\omega _{ni}}{\omega _{nl}}\frac{|v_{ni}|^2}{|v_{nl}|^2}.$$ (4) The first factor represents the Breit-Wigner cross section between the states $`n`$ and $`l`$, and Eq. 4 states that the cross section for a single-photon transition between the nearly degenerate states, $`n`$ and $`i`$, is the typical, $`10^{-18}`$ cm<sup>2</sup>, Breit-Wigner cross section, reduced in the ratio of the energy separation to the energy of transition and by a factor depending on the type of transitions between the states $`n`$-$`i`$ and $`n`$-$`l`$. For example, the transition $`i`$-$`l`$ could be E3 and the transitions $`n`$-$`i`$, $`n`$-$`l`$ be E1 and E2; depending on which of them is the E1, the ratio of the matrix elements is larger or smaller than 1. It is, therefore, not possible to obtain an a priori estimate of the cross section $`\sigma _{ni}`$. If the sample consisting of nuclei in the isomeric state $`i`$ is illuminated by a pulse of optical radiation, the probability of having the nucleus in the normal upper state $`n`$ is the number, $`N_{/cm^2}`$, of optical photons per cm<sup>2</sup> which crosses the sample, multiplied by the transition cross section $`\sigma _{ni}`$, provided the duration of the pulse is shorter than or comparable to the life-time of the state $`n`$. In order to obtain the significant gamma ray gain of 1/cm with a gamma-ray cross section of $`10^{-18}`$ cm<sup>2</sup> we have to have a particle density in the state $`n`$ of $`10^{18}`$ nuclei/cm<sup>3</sup>. That means that the product $`N_{/cm^2}\sigma _{ni}`$ has to be about $`10^{-4}`$. We have explained previously that the radioactivity from the isomeric state limits the number of the nuclei in the sample to about $`10^{18}`$ nuclei. If the length of the sample is 1 cm, its transverse area will be about $`10^{-3}`$ cm<sup>2</sup>. A pulse of 10 Joule of photon energy 1 eV corresponds to about $`10^{20}`$ photons, or $`10^{23}`$ photons/cm<sup>2</sup> crossing the sample per pulse. That means that a cross section $`\sigma _{ni}`$ of about $`10^{-27}`$ cm<sup>2</sup> would result in the significant gain of 1/cm. From Eq. 4, we see that the ratio $`\omega _{ni}/\omega _{nl}`$ is about $`10^{-4}`$, while the ratio of the matrix elements may be small as well as large. It is therefore not unreasonable to expect cross sections $`\sigma _{ni}`$ in the range of $`10^{-24}`$ cm<sup>2</sup>. The fine investigation of the nuclear structure in the vicinity of a given state has, of course, a fundamental importance. We have argued here that, if a state nearly degenerate with long-lived isomeric states exists, which is related to this and to the lower states by appropriate selection rules, this occurrence will open a way toward the gamma ray laser. The high-resolution investigation of the nuclear structure near an isomeric state is a surprisingly simple problem. The idea is that, if the sample consisting of isomeric nuclei is immersed in tunable optical radiation, the resonance of the optical radiation with the sought-for nearly degenerate transition energy would result in an increase of the radioactivity of the sample.
That will happen whenever the product $`N_{/cm^2}\sigma _{ni}/\tau _n`$, which represents the probability per unit time of the emission of gamma rays through the process $`i\to n\to l`$, is larger than the probability of spontaneous emission from the state $`i`$, $`1/\tau _i`$; $`\tau _n`$ and $`\tau _i`$ are the life-times of the states $`n`$ and $`i`$, respectively: $$N_{/cm^2}\sigma _{ni}>\tau _n/\tau _i.$$ (5) Values of $`\tau _n`$ of practical interest are $`\tau _n<10\mu `$sec (because of the difficulties in the preparation of the samples), and for $`\tau _i`$ we assume $`10^6`$ seconds (10 days). Because we need radiation tunable over a wide range of frequencies, it seems appropriate to take $`N_{/cm^2}\sim 10^{18}`$/cm<sup>2</sup> per pulse. Then the lower limit of measurable $`\sigma _{ni}`$ is of about $`10^{-30}`$ cm<sup>2</sup>. Of course, this technique of measurement of $`\sigma _{ni}`$ requires comparing the signals obtained with the optical field on and off. Weak gamma ray sources (1 mCi) and very small particle concentrations of isomeric nuclei can be used. This method is different from the radiofrequency resonance techniques, because it is not the absorption of the optical power which matters, but its effect on the large, gamma ray, transition. The nuclides with isomeric states of half-life longer than 10 days are listed below: half-life $`>`$ 10 d: Nb<sup>92</sup>, Nb<sup>93</sup>, Tc<sup>95</sup>, Tc<sup>97</sup>, Rh<sup>102</sup>, Ag<sup>108</sup>, Ag<sup>110</sup>, Cd<sup>115</sup>, Sn<sup>117</sup>, Sn<sup>119</sup>, Sn<sup>121</sup>, Te<sup>121</sup>, Te<sup>123</sup>, Te<sup>125</sup>, Te<sup>127</sup>, Te<sup>129</sup>, Xe<sup>131</sup>, Pm<sup>148</sup>, Ho<sup>166</sup>, Lu<sup>174</sup>, Lu<sup>177</sup>, Hf<sup>178</sup>, Hf<sup>179</sup>, Re<sup>184</sup>, Re<sup>186</sup>, Ir<sup>192</sup>, Ir<sup>193</sup>, Ir<sup>194</sup>, Bi<sup>210</sup>, Am<sup>242</sup> Since the tunability range of the optical power, multiplied by the number of isomeric states and divided by the average separation between the nuclear levels, represents the statistical probability of finding a convenient degenerate pair, there is a few percent chance for the existence of the pair. But, as very little is known about the fine actual nuclear structure, a search for degenerate states could equally well reveal no structure or surprising things.
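Both the pumping requirement and the detection condition (5) are one-line estimates; a small sketch, using only the order-of-magnitude values quoted in the text:

```python
# Pump requirement: excited fraction = (photons/cm^2) * sigma_ni.
E_PHOTON_J = 1.6e-19              # 1 eV optical photon
n_photons = 10.0 / E_PHOTON_J     # 10 J pulse -> ~1e20 photons
fluence = n_photons / 1e-3        # sample area ~1e-3 cm^2 -> ~1e23 /cm^2
needed_fraction = 1e18 / 1e22     # density in state n over total density
print(f"sigma_ni for gain 1/cm: {needed_fraction / fluence:.1e} cm^2")  # ~1e-27

# Detection condition, Eq. (5): N_{/cm^2} * sigma_ni > tau_n / tau_i.
tau_n, tau_i, N = 1e-5, 1e6, 1e18
print(f"smallest measurable sigma_ni: {(tau_n / tau_i) / N:.1e} cm^2")
# ~1e-29 cm^2 for these values; shorter tau_n lowers the threshold
# toward the 1e-30 cm^2 quoted in the text.
```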
no-problem/9907/astro-ph9907310.html
ar5iv
text
# Spectroscopy of the post-AGB star HD 101584 (IRAS 11385-5517) Based on observations obtained at the European Southern Observatory (ESO), Chile, and the Vainu Bappu Observatory, Kavalur, India ## 1 Introduction Humphreys and Ney (1974) found a near-infrared excess in HD 101584 and suggested that it is a massive F-supergiant with an M-type binary companion star (Humphreys 1976). However, HD 101584 (V=7.01, F0 Iape (Hoffleit et al. 1983)) was found to be an IRAS source (IRAS 11385-5517) (Parthasarathy and Pottasch 1986). On the basis of its far-infrared colors, flux distribution and detached cold circumstellar dust shell, Parthasarathy and Pottasch (1986) suggested that it is a low mass star in the post-Asymptotic Giant Branch (post-AGB) stage of evolution. CO molecular emission lines at millimeter wavelengths were detected by Trams et al. (1990). The complex structure of the CO emission shows large Doppler velocities of 130 km s<sup>-1</sup> with respect to the central velocity of the feature, indicating a very high outflow velocity. Te Lintel Hekkert et al. (1992) reported the discovery of OH 1667 MHz maser emission from the circumstellar envelope of HD 101584. The OH spectrum has a velocity range of 84 km s<sup>-1</sup> and shows two unusually broad emission features. Te Lintel Hekkert et al. (1992) found, from the images obtained with the Australia Telescope, that the OH masers are located along the bipolar outflow. The post-AGB nature of HD 101584 is also suggested by the space velocity of the star derived from the central velocity of the CO and OH line emission. This velocity of V<sub>rad</sub> = 50.3 $`\pm `$ 2.0 km s<sup>-1</sup> does not agree with the galactic rotation curve if the star is assumed to be a luminous, massive Population I F supergiant. Bakker et al. (1996a) studied the low and high resolution ultraviolet spectra and the high resolution optical spectra of HD 101584. Based on the strengths of the HeI (see also Morrison and Zimba 1989), NII and CII lines and Geneva photometry, Bakker et al. (1996a) suggest that HD 101584 is a B9 II star of T<sub>eff</sub> = 12000K $`\pm `$ 1000K and log g = 3.0. Bakker et al. (1996b) also found small amplitude light and velocity variations and suggested that HD 101584 is a binary with an orbital period of 218 days. The optical spectrum of HD 101584 is very complex and shows many lines in emission. In this paper we report an analysis of the high resolution optical spectrum of HD 101584. ## 2 Observations and analysis High resolution and high signal-to-noise ratio spectra of HD 101584 were obtained with the European Southern Observatory (ESO) Coude Auxiliary Telescope (CAT) equipped with the Coude Echelle Spectrograph (CES) and a CCD as detector. The spectra cover the wavelength regions 5360-5400Å, 6135-6185Å, 6280-6320Å, 6340-6385Å, 6540-6590Å, 7090-7140Å, 7420-7480Å, 8305-8365Å and 8680-8740Å. The spectral resolution ranged from 0.165Å at 6150Å to 0.210Å at 8700Å. We have also obtained 2.5Å resolution spectra of HD 101584 from 3900Å to 8600Å with the 1m telescope and UAGS spectrograph and a CCD as detector at the Vainu Bappu Observatory (VBO), Kavalur, India. In addition we obtained CCD spectra with the same telescope and Coude Echelle spectrograph, covering the wavelength region 4600Å to 6600Å with a resolution of 0.4Å. All spectra mentioned above were used in this analysis. All the spectra were analyzed using IRAF software. The equivalent widths of lines were found by fitting a Gaussian.
For blended lines, de-blending was done by fitting multiple Gaussians. We carried out spectrum synthesis calculations using Kurucz (1994) stellar models. The SYNSPEC code (Hubeny et al. 1985) was used for calculating the theoretical line profiles. The gf values were taken from Wiese et al. (1966), Wiese and Martin (1980), Hibbert et al. (1991), Parthasarathy et al. (1992) and Reddy et al. (1997 and references therein). For the analysis of forbidden lines we have used the IRAF software package NEBULAR under STSDAS. ## 3 Description of the spectrum The remarkable characteristic of the optical spectrum of HD 101584 is the fact that different spectral regions resemble different spectral types. The spectrum in the UV region is similar to that of $`\alpha `$ Lep, which is an F supergiant (Bakker 1994). The optical spectrum in the range 3600Å-5400Å is dominated by absorption lines. Most of them are due to lines of neutral and singly ionized Ti, Cr and Fe. The CaII H and K absorption lines are strong. The strengths of the absorption lines are similar to those observed in an A2 supergiant. In the yellow and red spectral regions, most of the lines are in emission (Fig. 1). The emission lines show complex line profiles. The absorption lines of NI, OI, CII and SiII are broad. The Paschen lines are in absorption. Some of these absorption lines are blended with emission lines and many have asymmetric profiles. The OI lines at 6156Å are blended with emission lines of FeI. The NI lines are strong and show asymmetric line profiles; the blue wing is shallow compared to the red wing. The CII lines at 6578Å and 6582Å are weak. The Na D lines, the KI 7700Å line (Fig. 2), the CaII IR triplet lines (Fig. 3), and the \[OI\], \[CI\] and MgI 6318.7Å lines are found in emission. The OI triplet lines (Fig. 2) are very strong, indicating an extended atmosphere and NLTE effects. ### 3.1 P-Cygni profiles The H$`\alpha `$ line has a very strong P-Cygni profile, indicating an outflow. The profile looks very complex; it shows at least 6 velocity components. The FeII line at 6383Å is in emission and its profile is very similar to that of H$`\alpha `$ (Fig. 4). Similar behaviour of the 6383Å FeII line and the H$`\alpha `$ line is also noticed in the post-AGB F supergiant IRAS 10215-5916 (García-Lario et al. 1994). The H$`\alpha `$ and the FeII 6383Å lines show an outflow velocity of 100$`\pm `$10 km s<sup>-1</sup>. The H$`\beta `$ line also shows a P-Cygni profile. It has a broad emission wing at the red end, indicating that the line forming region is extended. The H$`\beta `$, NaI D1, D2 and the CaII IR triplet lines (Fig. 3) show an outflow velocity of 75$`\pm `$20 km s<sup>-1</sup>. The velocity structure seen in these P-Cygni profiles could be due to emission from different shells formed during episodic mass-loss events. ### 3.2 FeI and FeII emission lines The presence of numerous emission lines of FeI and FeII makes it possible to derive the physical conditions of the line forming region. From the curve of growth analysis of the FeI and FeII emission lines (Viotti 1969), we have derived T<sub>exi</sub>=6300$`\pm `$1000K and 5550$`\pm `$1700K respectively (Fig. 5). The scatter found could be due to the fact that the lines are not optically thin. On the other hand, there are only a few emission lines of FeII present in the spectra, and thus the estimate from FeII might not be accurate. In order to determine whether the large scatter observed in Fig.
5 is reflecting optical thickness effects, we have performed a self-absorption curve (SAC) analysis (Friedjung and Muratorio 1987) of the FeI emission lines. The SAC is a kind of curve of growth applied to emission lines, but it has certain advantages compared to the classical emission line curve of growth analysis: this method of analysis is valid also for optically thick lines. It deals with each transition separately, so that it is possible to get the population of different levels without assuming a Boltzmann distribution. In this curve, a function of the line flux emitted in the different transitions of a given multiplet is taken in such a way that it is constant for an optically thin uniform medium. As the optical thickness increases, the curve moves towards a straight line inclined at -45<sup>o</sup>. The shape of the SAC in Fig. 6(a) shows that the lines are optically thick. The shape of the SAC is obtained by shifting all the multiplets with respect to a reference multiplet; here we have taken multiplet 207 as the reference. The X and Y shifts of each multiplet give the relative populations of the lower and upper levels with respect to the reference multiplet. Figs. 6(b) and 6(c) show the Y and X shifts versus the upper and lower excitation potentials, from which we derive T<sub>exi</sub>=6100$`\pm `$200K. ### 3.3 Forbidden lines The forbidden emission lines of neutral oxygen at 5577Å, 6300Å and 6363Å are present in the spectra. The forbidden line of neutral carbon at 8727Å is also seen. The 6300Å line is blended with a ScII line and the 5577Å line is very weak. We have calculated the ratio \[I(6300)+I(6363)\]/I(5577) to be 13.3. From this flux ratio we can calculate T<sub>e</sub> (Osterbrock 1989). The flux ratio is not very accurate because of the very weak 5577Å line and the poor signal-to-noise ratio of the spectrum. For the flux ratio of 13.3 we derived a relation between the electron density N<sub>e</sub> and the temperature T<sub>e</sub>. Figure 7 shows the N<sub>e</sub> and T<sub>e</sub> contours for different values of the flux ratio around 13.3. Since we do not see any other forbidden lines which are sensitive to the electron density, we could not fix both N<sub>e</sub> and T<sub>e</sub> uniquely. But assuming the temperature derived from the Fe emission lines, an electron density of 1 x 10<sup>7</sup> cm<sup>-3</sup> is obtained. For this value of the electron density and temperature, a ratio C/O = 0.5$`\pm `$ 0.2 has been obtained. ## 4 Radial velocities There are very few absorption lines, and most of these are affected by emission and/or a shell component; therefore we derived the average radial velocity from the well defined emission lines. The average radial velocity from the emission lines is found to be 50 $`\pm `$ 2 km s<sup>-1</sup>. Morrison and Zimba (1989), using the 14 best absorption lines, found the radial velocity to be 69$`\pm `$ 1 km s<sup>-1</sup>. From the equivalent widths of the FeI absorption lines given by Rosenzweig et al. (1997) we find no correlation between log gf - $`\chi \mathrm{\Theta }`$ and heliocentric radial velocity (Fig. 8). However, Bakker et al. (1996a) found a correlation between log gf - $`\chi \mathrm{\Theta }`$ and the heliocentric radial velocities of HD 101584 in the UV. The discrepancy could be due to the poor resolution of the Rosenzweig et al. (1997) data compared to that of Bakker et al. (1996a). The large scatter seen in the radial velocities could be due to pulsation. Similar velocity variations were noticed in other post-AGB supergiants (García-Lario et al. 1997, Hrivnak 1997).
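Excitation temperatures of the kind quoted in Sec. 3.2 follow from the slope of level populations versus excitation potential. A minimal sketch of such a Boltzmann-plot fit, which is closely related to (though simpler than) the curve-of-growth and SAC methods actually used; the line data below are made-up placeholders, not the measured FeI fluxes:

```python
import numpy as np

K_EV = 8.617e-5  # Boltzmann constant in eV/K

def t_exi(chi_eV, y):
    """Excitation temperature from a Boltzmann plot: fit
    y = ln(relative level population) vs excitation potential chi;
    the slope of the line is -1/(k * T_exi)."""
    slope, _ = np.polyfit(chi_eV, y, 1)
    return -1.0 / (K_EV * slope)

chi = np.array([3.2, 3.9, 4.6, 5.4])      # upper-level potentials, eV
y = -chi / (K_EV * 6100.0) + 2.0          # synthetic data at T = 6100 K
print(f"T_exi = {t_exi(chi, y):.0f} K")    # recovers 6100 K
```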
## 5 Atmospheric parameters and chemical composition The UV (IUE) low resolution spectrum of HD 101584 matches well that of an A6 Ia star (HD 97534) (Fig. 10), indicating a T<sub>eff</sub> of 8400K (Lang 1992). The presence of the CII lines at 6578Å and 6582Å indicates T<sub>eff</sub> $`>`$ 8000K; for T<sub>eff</sub> $`\le `$ 8000K the CII lines would be very weak or absent. The Paschen lines also indicate a low gravity (Fig. 11). The luminosity class Ia also indicates a very low gravity. From the analysis of several nitrogen lines around 7440Å and 8710Å we derived the microturbulence velocity V<sub>turb</sub>=13 km s<sup>-1</sup>. We synthesised the spectral region from 4000Å to 4700Å (Fig. 9) with low gravity (log g = 1.5) models of Kurucz (1993) with temperatures 8000K, 8500K and 9000K. The best fit was found for T<sub>eff</sub> = 8500K, log g = 1.5, V<sub>T</sub>=13 km s<sup>-1</sup> and \[Fe/H\] = 0.0. The line at 5876Å was identified as a HeI line by Bakker et al. (1996a), who also state that the lines at 5047Å and 5045Å are due to HeI and NII respectively. However, we find that the 5047 and 5045 lines are in fact due to FeII. Except for HeI 5876Å, we have not found any other helium lines in the spectrum, nor have we found any NII or OII lines. In fact, Hibbert et al. (1991) indicate the presence of a CI line at 5876Å. It is likely that the line at 5876Å is due to CI instead of HeI. If we assume that the 5876Å line is due to HeI, then for a solar helium abundance and log g = 1.5, T<sub>eff</sub> = 9000K is found. Since we do not see any other helium lines, if the 5876Å line is due to helium, it is likely formed in the stellar wind or in the chromosphere of the star. On the basis of the presence of this helium line, Bakker et al. (1996a) suggested that HD 101584 is a B9II star of T<sub>eff</sub> 12000K. On the basis of the analysis of our spectra we have not found any evidence for such a high temperature. We have also analysed the equivalent widths of absorption lines in the spectrum of HD 101584 given by Bakker et al. (1996a). The final abundances of some of the elements are listed in Table 2. The abundances listed in Table 2 show that the star is overabundant in carbon and nitrogen. It appears that material processed by the triple-alpha process and the CN and ON cycles has reached the surface. ## 6 Discussion and Conclusions The optical spectrum of the post-AGB star HD 101584 is rather complex. We find several emission lines and P-Cygni profiles, indicating ongoing mass loss and the presence of a circumstellar gaseous envelope. From the analysis of the absorption lines we find the atmospheric parameters to be T<sub>eff</sub>=8500K, log g=1.5, V<sub>t</sub>=13 km s<sup>-1</sup> and \[Fe/H\]=0.0. Carbon and nitrogen are found to be overabundant, indicating that material processed by the triple-alpha process and the CN and ON cycles has reached the surface. Since our blue spectra are of relatively low resolution, and because of the presence of emission and shell components, it is difficult to estimate reliable abundances of the s-process elements. The OI line at 6156Å is blended with a weak FeI emission line. The OI triplet at 7777Å is very strong and affected by NLTE. In any case it appears that the oxygen abundance is nearly solar. An NLTE analysis of the high resolution OI 7777Å triplet may yield a more reliable oxygen abundance. The nitrogen abundance is based on 6 lines in the 7440Å and 8710Å region. We have not used the strong nitrogen lines. Nitrogen seems to be clearly overabundant.
The carbon abundance is based on the two CII lines at 6578Å and 6582Å. There is a clear indication that carbon is overabundant. The abundances of Mg, Ti, and Fe are nearly solar. The Ti abundance is based on 15 lines and the Fe abundance is based on 6 lines. Many of the other atomic lines are affected by emission and shell components. In our opinion, the line at 5876Å might be due to CI (Hibbert et al. 1991) and not to HeI, as previously suggested by Bakker et al. (1996a). We have not found any other HeI, NII or OII lines. Our analysis shows that T<sub>eff</sub> is 8500$`\pm `$500K. Bakker et al. (1996b) found small amplitude light and velocity variations and suggested that HD 101584 is a binary with an orbital period of 218 days. The radial velocity variations may instead be due to pulsation, macroturbulent motions or shock waves in the outer layers of the stellar atmosphere. Many post-AGB supergiants show small amplitude light and velocity variations (Hrivnak 1997). These variations need not be interpreted as due to the presence of a binary companion. Long term monitoring of the radial velocities is needed in order to understand the causes of these variations. The spectrum and the brightness of HD 101584 appear to have remained the same during the last two or three decades. There is no evidence for significant variations in brightness similar to those observed in Luminous Blue Variables (LBVs). The chemical composition and all the available multiwavelength observational data collected during the last two decades by various observers indicate that HD 101584 is most likely a post-AGB star. The presence of several P-Cygni lines with significant outflow velocities, the OH maser and CO emission profiles (Te Lintel Hekkert et al. 1992, Trams et al. 1990) and the IRAS infrared fluxes and colours (Parthasarathy and Pottasch 1986) indicate the possibility that HD 101584 is a post-AGB star with a bipolar outflow and a dusty disk. Since HD 101584 shows a strong H$`\alpha `$ emission line, high resolution imaging with the Hubble Space Telescope (HST) may reveal the bipolar nebula and the presence of a dusty disk similar to that observed in other post-AGB stars like IRAS 17150-3224 (Kwok et al. 1998) or IRAS 17441-2411 (Su et al. 1998).
no-problem/9907/cond-mat9907101.html
ar5iv
text
# Ageing and Rheology in Soft Materials ## 1 Introduction Many soft materials, such as foams, dense emulsions, pastes and slurries, display intriguing features in their low frequency shear rheology. In oscillatory shear, for example, their viscoelastic storage and loss moduli, $`G^{\prime }(\omega )`$ and $`G^{\prime \prime }(\omega )`$, are often weak power laws of shear frequency Mackley et al. (1994); Ketz et al. (1988); Khan et al. (1988); Mason et al. (1995); Panizza et al. (1996); Hoffmann and Rauscher (1993); Mason and Weitz (1995), while their nonlinear stress response $`\sigma `$ to shear strain of constant rate $`\dot{\gamma }`$ is often fit to the form $`\sigma =A+B\dot{\gamma }^n`$ (known as the Herschel-Bulkley equation, or when $`A=0`$, the power-law fluid) Holdsworth (1993); Dickinson (1992); Barnes et al. (1989). The fact that such a broad family of soft materials exhibits similar rheological anomalies is suggestive of a common cause, and it has been argued that these anomalies are symptomatic of the generic presence in such materials of slow, glassy dynamics Sollich et al. (1997); Sollich (1998). Indeed, all the above materials share features of structural disorder and metastability: large energy barriers impede reorganization into states of lower free energy because this would require rearrangement of local structural units, such as the droplets in a dense emulsion. The term “soft glassy materials” (SGM’s) has been proposed to describe such materials Sollich et al. (1997); Sollich (1998). Glassy dynamics are often studied using hopping (trap) models, in which single particle degrees of freedom hop by an activated dynamics, in an uncorrelated manner, through a random free energy landscape Bouchaud (1992); Monthus and Bouchaud (1996). By incorporating strain degrees of freedom into such a description, Sollich and coworkers Sollich et al. (1997); Sollich (1998) proposed a minimal model, called the “soft glassy rheology” (SGR) model, which appears to capture several of the rheological properties of SGM’s, although (for simplicity) all the tensorial aspects of viscoelasticity are discarded. The model exhibits various regimes depending on a parameter $`x`$ (discussed in more detail below) representing the “effective temperature” for the hopping process. When this is small ($`x<1`$) the model exhibits a glass phase which shows some interesting properties above and beyond the power-law anomalies in viscoelasticity mentioned above. Specifically, the model shows ageing behaviour: its properties depend on the elapsed time since a sample was prepared. This is because the population of traps visited never achieves a steady state; as time goes by, deeper and deeper traps dominate the behaviour (a phenomenon known as “weak ergodicity breaking”). Broadly speaking, the system behaves as though its longest relaxation time is of order its own age. The success of the SGR model in accounting for some of the generic flow properties of SGM’s suggests that a detailed investigation of its ageing behaviour, and the effect this has on rheology, is now worthwhile. Ageing has been intensively studied in the context of spin glasses Bouchaud and Dean (1995); Cugliandolo and Kurchan (1995); Bouchaud et al. (1998), although some of the earliest experimental investigations of it involved rheological studies of glassy polymers Struik (1978). But we know of no previous theoretical work that explores the link between ageing phenomena and rheological properties within an explicit constitutive model.
A particular added motivation is that detailed experiments on rheological ageing, in a dense microgel suspension, are now underway Cloître (1999). Although various kinds of ageing effects are often observable experimentally in soft materials, they have rarely been reported in detail. Instead they tend to be regarded as unwanted obstacles to observing the “real” behaviour of the system, and not in themselves worthy of study. But this may be illusory: ageing, when present, can form an integral part of a sample’s rheological response. For example, the literature contains many reports of viscoelastic spectra in which the loss modulus $`G^{\prime \prime }(\omega )`$, while remaining less than the (almost constant) storage modulus $`G^{\prime }(\omega )`$ in a measured frequency window, appears to be increasing as frequency is lowered (see Fig. 1). The usual explanation Kossuth et al. (1999) is that some unspecified relaxation process is occurring at a lower frequency still, giving a loss peak (dashed), whose true nature could be elucidated if only the frequency window was extended. This may often be the case, but an alternative explanation, based on our explicit calculations for the SGR model, is shown by the thin solid lines. No oscillatory measurement can probe a frequency far below the reciprocal of the sample’s age; yet in ageing materials, it is the age itself which sets the relaxation time of whatever slow relaxations are present. Accordingly, the putative loss “peak” can never be observed and is, in fact, a complete figment of the imagination. Instead, a rising curve in $`G^{\prime \prime }(\omega )`$ at low frequencies will always be seen, but with an amplitude that decreases as the system gets older (typically ensuring that $`G^{\prime \prime }(\omega )`$ never exceeds $`G^{\prime }(\omega )`$). Of course, we do not argue that all published spectra resembling those of Fig. 1 should be interpreted in this way; but we believe that many should be. The widespread reluctance to acknowledge the role of ageing effects in much of the rheological literature suggests that a full discussion of these could now be valuable. An exception has been in the literature on ageing in polymeric glasses, especially the monograph by Struik (1978): we return shortly to a brief comparison between that work and ours. The SGR model is simple enough to allow a fairly full exploration of the link between ageing and rheology. As well as providing some quantitative predictions of rheological ageing, this allows a broader discussion of the conceptual framework within which rheological data for ageing systems should be analysed and interpreted. This conceptual framework is broader than the SGR model itself; for example it is known that ageing concepts developed for spin-glass dynamics can also be applied to problems of domain-growth and coarsening Bouchaud et al. (1998). Many soft solids, such as polydomain defect textures in ordered mesophases of copolymers or surfactants, may show ageing through such coarsening dynamics, or through glassy rearrangement of domains, or both. While the SGR model is intended to address only the second feature, the broader conceptual framework we present can allow for both mechanisms (in which case a superposition of ageing dynamics with different timescales may result; see Eq. (22) below). Thus we begin in Secs. 2 and 3 by briefly introducing rheology and ageing respectively. Then in Sec. 4 we review the SGR model, and discuss the origin of its glass transition and the nature of the glass phase.
We also briefly describe its rheology under non-ageing conditions; this is discussed fully elsewhere Sollich et al. (1997); Sollich (1998). In Sec. 5 we give a general discussion of ageing within the SGR model, which sets the stage for our new results for the linear and nonlinear rheological response of the SGR model in regimes where ageing cannot be neglected. The results for controlled strain conditions are presented and discussed in Sec. 6; those for controlled stress, in Sec. 7. We close in Sec. 8 with a brief summary and our conclusions. We now discuss the connection between our work and that of Struik (1978) on polymeric glasses. Struik presented many experimental results for such systems, and gave a coherent qualitative explanation of their ageing in terms of a slow relaxation of the free volume in the system below the glass point. However, he did not propose a model constitutive equation for this or any other class of ageing material. He argued that the effective relaxation time of a system of age $`t_\mathrm{w}`$ (the ‘waiting time’ since sample preparation) varies as $`\tau (t_\mathrm{w})=t_\mathrm{w}^\mu \tau _0^{1-\mu }`$, where $`\tau _0`$ is a microscopic time and $`\mu \le 1`$; apart from this uniform rescaling of (all) the rheological relaxation time(s), the material properties are almost invariant in time. (This is his ‘time waiting-time superposition’ principle; we show below that the SGR model offers a concrete example of it, with $`\mu =1`$.) We do not expect the SGR model, which makes no mention of the free-volume concept, to be particularly relevant to polymeric glasses; nonetheless, various points of contact with Struik’s work are indicated below. ## 2 Rheology Here we review the basic principles of rheology. Unlike most in the literature, our formulation does not assume time translational invariance (TTI); parts of it may therefore be unfamiliar, even to rheologists. The formalism allows in principle an arbitrary dependence of the material properties on time; we defer to Sec. 3 a discussion of what more specific form this dependence might take in materials which exhibit actual ageing effects (rather than other, more trivial time dependencies). ### 2.1 Constitutive Properties Rheology is the study of the deformation and flow properties of materials. In general, deformation can comprise volume changes, extensional strain, and shear strain; here we consider incompressible materials and assume that only shear strains arise. A system’s shear stress $`\sigma (t)`$ then depends functionally on its strain rate history $`\dot{\gamma }(t^{\prime }<t)`$, where $`\dot{\gamma }`$ is the strain rate. Conversely, $`\gamma (t)`$ can be expressed as a functional of the preceding stress history. A specification of either type is referred to as a constitutive equation. In general, of course, the constitutive equation is a relationship between stress and strain tensors; see Doi and Edwards (1986) for an introduction. We ignore the tensorial aspects here, because the model we describe later is too simple to address them. ### 2.2 Step Strain A standard rheological test consists of suddenly straining a previously undeformed material by an amount $`\gamma _0`$. Suppose this to be done at time $`t_\mathrm{w}`$: then $`\gamma (t)=\gamma _0\mathrm{\Theta }(t-t_\mathrm{w})`$, where $`\mathrm{\Theta }`$ is the usual step function.
(For the moment, $`t_\mathrm{w}`$ is an arbitrary time label, but later we will take it as the time that the strain is applied, relative to the preparation of the sample in some prescribed state, at time zero.) In general the response can be written $$\sigma (t)=\gamma _0G(t-t_\mathrm{w},t_\mathrm{w};\gamma _0)$$ (1) thereby defining the step strain response, $`G(t-t_\mathrm{w},t_\mathrm{w};\gamma _0)`$. Note that, by causality, $`G`$ vanishes for negative values of its first argument. ### 2.3 Time Translation Invariance; Linearity If the material properties of a sample have TTI, then the time $`t_\mathrm{w}`$ of the initial step strain is irrelevant; the response function $`G(t-t_\mathrm{w},t_\mathrm{w};\gamma _0)`$ can be written $`G(t-t_\mathrm{w};\gamma _0)`$ and depends only on the elapsed time since the step strain was imposed. It is particularly important to recognize that TTI is a quite separate issue from the linearity of the material’s response to stress. Even when TTI is absent, in the small deformation limit ($`\gamma _0\to 0`$), a regime may exist for which $`\sigma `$ is linearly related to $`\gamma _0`$: $$\underset{\gamma _0\to 0}{lim}G(t-t_\mathrm{w},t_\mathrm{w};\gamma _0)=G(t-t_\mathrm{w},t_\mathrm{w})$$ (2) The system’s stress response is then linearly proportional to strain amplitude (in the sense that doubling the strain at all earlier times will cause the stress to be doubled), even if it also depends on (say) the sample’s age relative to an absolute time of preparation. Only by assuming both linearity and TTI do we obtain $$\sigma (t)=\gamma _0G(t-t_\mathrm{w})$$ (3) where the function $`G(t)`$ is called the time-dependent modulus, or the linear stress relaxation function, of the material. If a linear material with TTI is subjected to a small time-dependent strain $`\gamma (t)`$, then by decomposing this into a sequence of infinitesimal step strains, one finds $$\sigma (t)=\int _{-\infty }^tG(t-t^{\prime })\dot{\gamma }(t^{\prime })dt^{\prime }$$ (4) which is, for a linear material with TTI, the constitutive equation between stress and strain. In the steady state (i.e., for constant strain rate $`\dot{\gamma }`$) one recovers: $$\sigma =\dot{\gamma }\int _0^{\infty }G(t^{\prime \prime })dt^{\prime \prime }$$ (5) The integral, whenever it exists, defines the material’s zero-shear viscosity $`\eta `$. For many soft materials, however, $`G(t)`$ decays to zero so slowly that the integral diverges. In this case, there can be no regime of linear response in steady shear flow, although there may be a linear regime in, say, oscillatory shear. Note that there is no unique extension of (4) to the nonlinear case; only in the linear regime can one superpose the contributions from each small strain increment in this fashion. (In some models, the stress for a general flow is indeed written as an integral involving the nonlinear step strain response Bernstein et al. (1963); but this is not generally valid.) On the other hand, (4) is easily extended to the case where TTI is absent: $$\sigma (t)=\int _{-\infty }^tG(t-t^{\prime },t^{\prime })\dot{\gamma }(t^{\prime })dt^{\prime }$$ (6) which represents the most general form of a (nontensorial) linearized constitutive equation. ### 2.4 Behaviour of the Linear Response Function The principle of causality demands that the response function $`G(t-t_\mathrm{w},t_\mathrm{w})`$ is zero for times $`t<t_\mathrm{w}`$. At $`t=t_\mathrm{w}`$, when the strain is applied, $`G`$ typically increases very rapidly (in effect discontinuously) to a value $`G_0`$ which represents an instantaneous elastic response with modulus $`G_0`$.
Thereafter, $`G(t-t_\mathrm{w},t_\mathrm{w})`$ is (almost always) a decaying function of its first argument: the more nearly the material approximates a viscous liquid, the more rapidly will the stress decay. Specializing to the TTI case, we recall that for a purely Newtonian liquid of viscosity $`\eta `$, the function $`G(t)`$ approaches a delta function $`\eta \delta (t)`$. (This shows that $`G_0`$ can be infinite so long as the subsequent decay is rapid enough.) On the other hand an ideally Hookean elastic material has $`G(t)=G_0`$, the static shear modulus: in this case the induced stress will never decay. (Note that properly one should write $`G(t)=G_0\mathrm{\Theta }(t)`$; the extra factor of $`\mathrm{\Theta }(t)`$, implied by causality, is omitted here and below.) Both the Newtonian fluid and the Hookean solid are idealized limiting cases; most real materials display behaviour intermediate between these limits and are, on some timescale at least, viscoelastic. For the soft materials of interest to us, the relevant timescale is readily observable in rheological experiments. The simplest (TTI) example of viscoelasticity is the Maxwell fluid, which is solid-like at short times and liquid at longer ones, with a simple exponential response function $`G(t)=G_0\mathrm{exp}(-t/\tau )`$ connecting the two. (Its viscosity obeys $`\eta =G_0\tau `$.) This behaviour is seen in a few experimental systems Cates and Candau (1990), but more often one has $`G(t)=G_0\mu (t)`$ where the memory function $`\mu (t)`$ is not a single exponential. In many materials it is possible to identify a longest relaxation time via $`\tau _{\mathrm{max}}^{-1}=-lim_{t\to \infty }\mathrm{log}\mu (t)/t`$. However, in several important cases, such as a pure power law relaxation, $`\mu (t)\sim t^{-y}`$, the required limit does not exist; the longest relaxation time is infinite. ### 2.5 Creep Compliance Arguing along parallel lines to those developed above, one can in general write the strain response to a step stress $`\sigma (t)=\sigma _0\mathrm{\Theta }(t-t_\mathrm{w})`$ as $$\gamma (t)=\sigma _0J(t-t_\mathrm{w},t_\mathrm{w};\sigma _0)$$ (7) The linear creep compliance $`J(t-t_\mathrm{w},t_\mathrm{w})`$ is then found by letting $`\sigma _0\to 0`$ (assuming this limit exists). This is the main rheological function considered by Struik (1978) and the one most relevant to studies of ageing in polymeric glasses (since these are often used, for example, as structural components subject to steady loads). In the presence of TTI the linear compliance reduces to a function of one time variable, $`J(t-t_\mathrm{w})`$. For the examples of a viscous liquid, an elastic solid, and a Maxwell material we have (again omitting factors of $`\mathrm{\Theta }(t)`$) $`J(t)=t/\eta `$, $`J(t)=1/G_0`$, and $`J(t)=1/G_0+t/\eta `$, respectively. For any material with TTI, the zero-shear viscosity $`\eta `$ is defined experimentally as the limiting ratio of stress to strain rate long after application of an infinitesimal step stress; it therefore obeys $`\eta ^{-1}=lim_{t\to \infty }dJ(t)/dt`$, which may be shown<sup>1</sup><sup>1</sup>1The given limit may also be written $`\eta ^{-1}=lim_{\omega \to 0}i\omega J^{\ast }(\omega )`$ which, by reciprocity of $`J^{\ast }`$ and $`G^{\ast }`$, implies $`\eta =lim_{\omega \to 0}G^{\ast }(\omega )/i\omega `$. The last definition is equivalent to (5). See Sec. 2.6 for definitions of $`J^{\ast }`$ and $`G^{\ast }`$. to be equivalent to (5). A finite viscosity requires, of course, that the limit is finite; this is discussed further in Sec. 2.8 below.
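The linear constitutive equation (6) lends itself directly to numerical evaluation once a model for $`G(t-t^{\prime },t^{\prime })`$ is chosen. A minimal sketch, using an assumed TTI Maxwell kernel so that the steady-state stress can be checked against $`\eta \dot{\gamma }=G_0\tau \dot{\gamma }`$:

```python
import numpy as np

def stress(t_grid, gamma_dot, G):
    """Discretized Eq. (6): sigma(t) = int_0^t G(t-t', t') gamma_dot(t') dt',
    for flow started at t = 0, using a simple trapezoidal rule.
    G(dt, tw) and gamma_dot(t) are arbitrary callables."""
    sigma = np.zeros_like(t_grid)
    for i, t in enumerate(t_grid[1:], start=1):
        tp = t_grid[: i + 1]
        y = G(t - tp, tp) * gamma_dot(tp)
        sigma[i] = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tp))
    return sigma

# Example: steady shear rate 0.1 through a Maxwell kernel G = exp(-dt)
# (assumed values G_0 = tau = 1); sigma approaches eta*gdot = 0.1.
t = np.linspace(0.0, 20.0, 2001)
sig = stress(t, lambda tp: 0.1 + 0.0 * tp, lambda dt, tw: np.exp(-dt))
print(sig[-1])  # ~0.1
```

A genuinely ageing response is obtained simply by letting the second argument enter $`G`$; the same routine then evaluates the non-TTI case.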
### 2.6 Viscoelastic Spectra A common experiment is to apply a steady oscillatory strain and measure the resulting stress, or vice versa. For example, the choice $$\gamma (t)=\mathrm{\Theta }(t-t_\mathrm{s})\text{Re}\left[\gamma _0e^{i(\varphi +\omega t)}\right]$$ (8) describes an oscillatory flow started at time $`t_\mathrm{s}`$ and continued up to (at least) the time $`t`$ at which the stress is measured. Using the linear constitutive equation for a system with TTI (4), we have $`\sigma (t)`$ $`=`$ $`\text{Re}\left[\gamma _0i\omega {\displaystyle \int _{t_\mathrm{s}}^t}e^{i(\varphi +\omega t^{\prime })}G(t-t^{\prime })dt^{\prime }+\gamma _0e^{i(\varphi +\omega t_\mathrm{s})}G(t-t_\mathrm{s})\right]`$ (9) $`=`$ $`\text{Re}\left[\gamma _0e^{i(\varphi +\omega t)}\left(i\omega {\displaystyle \int _0^{t-t_\mathrm{s}}}e^{-i\omega t^{\prime \prime }}G(t^{\prime \prime })dt^{\prime \prime }+e^{-i\omega (t-t_\mathrm{s})}G(t-t_\mathrm{s})\right)\right]`$ where the second term accounts for any step strain arising at the switch-on time $`t_\mathrm{s}`$. As the number of cycles becomes very large ($`\omega (t-t_\mathrm{s})\gg 1`$), transient effects become negligible, and the stress settles to a simple oscillatory function of time. In this steady-state limit we can write $`\sigma (t)=\text{Re}\left[G^{\ast }(\omega )\gamma (t)\right]`$ where<sup>2</sup><sup>2</sup>2If $`G(t)`$ has a non-decaying contribution $`G(t\to \infty )>0`$, the form $`G^{\ast }(\omega )=G(0)+\int _0^{\infty }e^{-i\omega t}\dot{G}(t)dt`$, derived from (9) by integration by parts, should be used instead of (10). The same relation can be obtained from (10) by inserting a regularizing factor $`e^{-ϵt}`$ and taking the limit $`ϵ\to 0`$. This corresponds to an oscillatory strain that is switched on by very slowly increasing its amplitude. $$G^{\ast }(\omega )=i\omega \int _0^{\infty }e^{-i\omega t}G(t)dt$$ (10) which is, to within a factor $`i\omega `$, the Fourier transform of the stress relaxation modulus $`G(t)`$. Traditionally one writes $$G^{\ast }(\omega )=G^{\prime }(\omega )+iG^{\prime \prime }(\omega )$$ (11) where $`G^{\prime },G^{\prime \prime }`$ are called respectively the storage and loss moduli of the material, and measure the in-phase (elastic) and out-of-phase (dissipative) response to an applied strain<sup>3</sup><sup>3</sup>3Many commercial rheometers are configured to deliver the storage and loss spectra automatically, from a measurement of the amplitude and phase relations between stress and strain in steady state.. Clearly one can reach an identical steady state by applying a small amplitude oscillatory stress and measuring the resulting strain. This defines a new response function $`J^{\ast }(\omega )`$ via $`\gamma (t)=\text{Re}\left[J^{\ast }(\omega )\sigma (t)\right]`$, which is evidently just the reciprocal of $`G^{\ast }(\omega )`$. But by an argument similar to that given above for (10) one also has $`J^{\ast }(\omega )=i\omega \int _0^{\infty }e^{-i\omega t}J(t)dt`$. Hence, within the linear response regime of a system with TTI, knowledge of any one of $`G(t),J(t),G^{\ast }(\omega ),J^{\ast }(\omega )`$ is enough to determine the other three. (Of course, this ignores any practical limitations on the time and frequency domains accessible by experiment.) Beyond the linear response regime, it is sometimes useful to define $`G^{\ast }(\omega ;\gamma _0)`$ and $`J^{\ast }(\omega ;\sigma _0)`$ from the response to a finite amplitude oscillatory shear.
However, the interest in these quantities is more limited since, whenever the strain dependence is nontrivial, there is no analogue of (10) relating the nonlinear oscillatory response to that in step strain or step stress. ### 2.7 Viscoelastic Spectra without TTI The proper definition of linear viscoelastic spectra for systems without TTI is more subtle, and is to some extent a matter of choice. Let us envisage again the following idealized experiment: ($`i`$) the sample is prepared in a known state at time zero; ($`ii`$) a small amplitude oscillatory shear of amplitude $`\gamma _0`$ and phase $`\varphi `$ is started at later time $`t_\mathrm{s}`$, so that $`\gamma (t)=\mathrm{\Theta }(t-t_\mathrm{s})\text{Re}\left\{\gamma _0\mathrm{exp}\left[i(\varphi +\omega t)\right]\right\}`$; ($`iii`$) this is maintained up to (or beyond) a time $`t`$ at which point the stress is measured. Using the linear constitutive equation (6), we obtain $`\sigma (t)`$ $`=`$ $`\text{Re}\left[\gamma _0i\omega {\displaystyle \int _{t_\mathrm{s}}^t}e^{i(\varphi +\omega t^{\prime })}G(t-t^{\prime },t^{\prime })dt^{\prime }+\gamma _0e^{i(\varphi +\omega t_\mathrm{s})}G(t-t_\mathrm{s},t_\mathrm{s})\right]`$ $`\equiv `$ $`\text{Re}\left[\gamma _0e^{i(\varphi +\omega t)}G^{\ast }(\omega ,t,t_\mathrm{s})\right]`$ This unambiguously defines a time-varying viscoelastic spectrum<sup>4</sup><sup>4</sup>4Note that in principle, to identify by experiment the real and imaginary parts of $`G^{\ast }`$ for a particular $`\omega ,t,t_\mathrm{s}`$ one would require the experiment to be repeated for two different phases $`\varphi `$ (e.g. pure sine and cosine deformations). A more common procedure is, of course, to maintain the oscillatory strain over many cycles and record the “steady state” amplitude and phase response of the stress. For systems without TTI the latter are not uniquely defined. Only when material properties vary slowly enough will this give a definite result; whenever it does, it will coincide with (12). The required conditions are considered, for the SGR model, below. as $$G^{\ast }(\omega ,t,t_\mathrm{s})=i\omega \int _{t_\mathrm{s}}^te^{-i\omega (t-t^{\prime })}G(t-t^{\prime },t^{\prime })dt^{\prime }+e^{-i\omega (t-t_\mathrm{s})}G(t-t_\mathrm{s},t_\mathrm{s})$$ (12) A similar compliance spectrum, $`J^{\ast }(\omega ,t,t_\mathrm{s})`$, can be defined by exchanging stress and strain in this protocol. Since it depends on two time arguments as well as frequency, $`G^{\ast }(\omega ,t,t_\mathrm{s})`$ is a somewhat cumbersome object. However, simplifications can be hoped for in the limit $`\omega (t-t_\mathrm{s})\gg 1`$. In the TTI case, this condition eliminates simple transients, and allows one to relate $`G^{\ast }(\omega )`$ to the Fourier transform of $`G(t)`$ (see (9)). Corresponding simplifications are certainly not guaranteed in the absence of TTI. However, the transient dependence on $`t_\mathrm{s}`$ may become negligible<sup>5</sup><sup>5</sup>5For the SGR model, an additional requirement is that $`\omega t_\mathrm{s}\gg 1`$; see Sec. 6.1.2 below. when $`\omega (t-t_\mathrm{s})\gg 1`$, in which case we have $$G^{\ast }(\omega ,t,t_\mathrm{s})\approx G^{\ast }(\omega ,t)$$ (13) giving a viscoelastic modulus that depends only on the measurement time $`t`$.
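Equation (12) is easy to evaluate numerically for any assumed two-time response function, which is useful for building intuition about ageing spectra. A minimal sketch (the toy kernel, with relaxation time set by the age, is an assumption chosen for illustration, not the SGR result):

```python
import numpy as np

def G_star(omega, t, ts, G, n=4000):
    """Numerical Eq. (12): G*(omega, t, ts) for a two-time modulus G(dt, tw),
    using a trapezoidal rule on a uniform t' grid."""
    tp = np.linspace(ts, t, n)
    y = np.exp(-1j * omega * (t - tp)) * G(t - tp, tp)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tp))
    return 1j * omega * integral + np.exp(-1j * omega * (t - ts)) * G(t - ts, ts)

# Toy ageing kernel: stress relaxes on a timescale of order the age t_w.
G_toy = lambda dt, tw: np.exp(-dt / (1.0 + tw))

for t in (100.0, 1000.0):
    Gs = G_star(omega=1.0, t=t, ts=10.0, G=G_toy)
    print(t, Gs.real, Gs.imag)  # storage and loss parts drift with age
```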
If, in addition, the time evolution of the underlying material properties is negligible on the timescale of one oscillation, then $`G^{*}(\omega ,t)`$ may obey the relation $$G^{*}(\omega ,t)=i\omega \int _0^{\infty }e^{-i\omega t^{\prime }}G(t^{\prime },t)\,dt^{\prime }$$ (14) Similarly, for $`\omega (t-t_\mathrm{s})\gg 1`$ the compliance spectrum may become $`t_\mathrm{s}`$-independent, $`J^{*}(\omega ,t,t_\mathrm{s})\approx J^{*}(\omega ,t)`$, and may be related to the step stress response via $$J^{*}(\omega ,t)=i\omega \int _0^{\infty }e^{-i\omega t^{\prime }}J(t^{\prime },t)\,dt^{\prime }$$ (15) Finally, $`G^{*}(\omega ,t)`$ and $`J^{*}(\omega ,t)`$ may obey the conventional reciprocal relation $`G^{*}(\omega ,t)=1/J^{*}(\omega ,t)`$. Indeed, we shall find that all the above simplifying relationships are true for the SGR model studied below. As discussed at the end of Sec. 3, they may also hold more generally for systems with what we term there “weak long term memory”. However, we do not have a rigorous proof of this. Pending such a proof, the above simplifications remain hypotheses needing explicit verification for any constitutive model. Experimenters should likewise beware that, for systems without TTI, such textbook relationships between the oscillatory and step strain response functions cannot be assumed, but should be empirically verified, for each system studied. This prima facie breakdown of conventional linear viscoelastic relationships in ageing systems was emphasized by Struik (1978), though he argued that they are recovered in sufficiently ‘short-time’ measurements. It does not (as Struik seems to suggest) extend necessarily to a breakdown of linear superposition itself, which survives in the form of (6). ### 2.8 Steady State Response: The Flow Curve Consider now the ultimate state of a material with TTI long after an infinitesimal step stress of amplitude $`\sigma _0`$ has been applied. The ultimate deformation may involve a limiting strain $`\gamma =\sigma _0J(t\to \infty )`$, in which case the steady state (linear) Hookean elastic modulus is $`G_{\infty }=\sigma _0/\gamma `$. Alternatively, the ultimate state may involve a limiting strain rate, in which case the zero-shear viscosity is $`\eta =\sigma _0/\dot{\gamma }`$. However, neither outcome need occur. If, for example, one has “power law creep”, i.e., $`J(t)\sim t^y`$ with $`0<y<1`$, the material has both zero modulus (infinite compliance) and infinite viscosity in steady state. There is no rule against this, although it does require nonanalyticity of $`G^{*}(\omega )`$ at small frequencies Sollich et al. (1997); Sollich (1998), such that $`\tau _{\mathrm{max}}`$ is infinite. What if the stress amplitude is larger than infinitesimal? The ultimate steady state can again be that of a solid, a liquid, or something in between. In cases where a liquid-like response is recovered, it is conventional to measure the “flow curve”, which is a steady state relationship between stress and strain rate: $$\sigma _{\mathrm{ss}}=\sigma (\dot{\gamma })$$ (16) In many materials, the following limit, called the yield stress $$\sigma (\dot{\gamma }\to 0)=\sigma _\mathrm{y}$$ (17) is nonzero. (Footnote 6: The experimental existence of a true yield stress, as defined by this limit, is debatable Barnes et al. (1989); behaviour closely approximating this is, however, often reported. Note that our definition of yield stress, from the flow curve, is unrelated to that of Struik (1978), who defines a ‘tensile yield stress’ at constant strain rate.)
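In practice $`\sigma _\mathrm{y}`$ is estimated by extrapolating measured flow-curve data to $`\dot{\gamma }\to 0`$. A minimal sketch of this procedure (our own, using synthetic data and a hypothetical Herschel-Bulkley parametrization $`\sigma =\sigma _\mathrm{y}+c\dot{\gamma }^p`$, one common choice whose form matches the SGR glass-phase scaling quoted in Sec. 4.2.2 below):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
gdot = np.logspace(-4, 0, 30)                 # imposed shear rates
sigma_data = 0.5 + 0.8 * gdot**0.3            # synthetic "measurements", sigma_y = 0.5
sigma_data *= 1 + 0.01 * rng.standard_normal(gdot.size)   # 1% noise

def herschel_bulkley(gd, sigma_y, c, p):
    # sigma = sigma_y + c * gdot**p; sigma_y is the gdot -> 0 limit of Eq. (17)
    return sigma_y + c * gd**p

popt, _ = curve_fit(herschel_bulkley, gdot, sigma_data, p0=(0.1, 1.0, 0.5))
print("estimated yield stress sigma_y =", popt[0])
```

Whether the fitted intercept reflects a true yield stress or merely the limited range of accessible shear rates is, of course, exactly the ambiguity flagged in Footnote 6.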
Note, however, that the presence of a nonzero yield stress does not necessarily imply a finite Hookean modulus $`G_{\infty }`$: for $`\sigma <\sigma _\mathrm{y}`$, the material could creep forever, but at an ever decreasing rate. (Footnote 7: Alternatively, it could reach a steady strain $`\gamma `$ that is not linear in $`\sigma `$ even as $`\sigma \to 0`$.) Nor does the absence of a finite yield stress imply a finite viscosity; a counterexample is the power law fluid, for which $`\sigma \sim \dot{\gamma }^p`$. This has $`\sigma _\mathrm{y}=0`$ but, for $`p<1`$, infinite viscosity $`\eta =\lim _{\dot{\gamma }\to 0}\sigma (\dot{\gamma })/\dot{\gamma }`$. We now turn to materials without TTI. For these, no meaningful definition of “steady state response” exists in general. However, in the SGR model considered below, TTI is restored for nonzero $`\dot{\gamma }`$ Sollich et al. (1997); Sollich (1998), and this may be generic for certain types of ageing Sollich et al. (1997); Sollich (1998); Bouchaud and Dean (1995); Kurchan (1999). If so, the flow curve, including the value of the yield stress $`\sigma _\mathrm{y}`$ (but not the behaviour for $`\sigma <\sigma _\mathrm{y}`$), remains well-defined as a steady-state property. ## 3 Ageing So far, we have set up a general framework for describing the rheological properties of systems without TTI. Time translation invariance can be broken, in a trivial sense, by the transients that any system exhibits during equilibration. We now consider how such transients can be distinguished from ageing proper. To focus the discussion, we consider the linear step strain response function $`G(t-t_\mathrm{w},t_\mathrm{w})`$. The other response functions introduced above can be treated similarly. We define ageing (of the step strain response) as the property that a significant part of the stress relaxation takes place on timescales that grow with the age $`t_\mathrm{w}`$ of the system. If ageing is present, then in order to see the full stress relaxation we need to allow the time $`t`$ at which we observe the stress to be much larger than the time $`t_\mathrm{w}`$ at which the step strain has been applied. Formally, we need to consider $$\lim _{t\to \infty }G(t-t_\mathrm{w},t_\mathrm{w})$$ (18) at fixed $`t_\mathrm{w}`$. On the other hand, if there is no ageing, then the full stress relaxation is “visible” on finite timescales. This means that as long as $`\mathrm{\Delta }t=t-t_\mathrm{w}`$ is large enough, we observe the full stress relaxation whatever the age $`t_\mathrm{w}`$ of the system at the time when the strain was applied. Formally, we can take $`t_\mathrm{w}`$ to infinity first, and then make $`\mathrm{\Delta }t`$ large, which amounts to considering $$\lim _{\mathrm{\Delta }t\to \infty }\lim _{t_\mathrm{w}\to \infty }G(\mathrm{\Delta }t,t_\mathrm{w}).$$ (19) In the absence of ageing, the two ways (18) and (19) of measuring the final extent of stress relaxation are equivalent, and we have $$\lim _{t\to \infty }G(t-t_\mathrm{w},t_\mathrm{w})=\lim _{\mathrm{\Delta }t\to \infty }\lim _{t_\mathrm{w}\to \infty }G(\mathrm{\Delta }t,t_\mathrm{w}).$$ (20) If the system ages, on the other hand, this equality will not hold: the right-hand side allows only for the decay of stress by relaxation modes whose timescale does not diverge with the age of the system, and thus attains a limit which includes elastic contributions from all modes that do have age-related timescales.
It will be different from the left-hand side, which allows for relaxation processes occurring on all timescales, and thus attains a limit in which only completely non-decaying modes contribute. We therefore adopt the definition that a system ages if at least one of its response functions violates (20). By contrast, we refer to deviations from TTI in other systems (for which all significant relaxation processes can essentially be observed on finite timescales) as transients. We discuss this point further in the context of the SGR model in Sec. 6.1.1. Systems that violate (20) are referred to as having “long term memory” Cugliandolo and Kurchan (1995); Bouchaud et al. (1998); Cugliandolo and Kurchan (1993). They can be further subdivided according to the strength of this memory. To illustrate this distinction, imagine applying a (small) step strain to a system at time $`t_0`$ and switching it off again at some later time $`t_1`$. The corresponding stress at time $`t>t_1`$ is proportional to $`G(t-t_0,t_0)-G(t-t_1,t_1)`$. If this decays to zero at large times $`t`$, that is, if $$\lim _{t\to \infty }[G(t-t_0,t_0)-G(t-t_1,t_1)]=0$$ (21) [and (20) is violated] then we say that the system has “weak long term memory”, otherwise it has “strong long term memory”. (Footnote 8: There is a slight subtlety with the definition of long term memory for the linear step stress response. Eq. (21), applied literally to $`J(t-t_\mathrm{w},t_\mathrm{w})`$, suggests that even a Newtonian fluid with $`J(t-t_\mathrm{w},t_\mathrm{w})\propto t-t_\mathrm{w}`$ has strong long term memory, because its strain “remembers” stress applications in the arbitrarily distant past. This is clearly undesirable as a definition. The problem can be cured by “regularizing” the step stress response: one simply considers the material in question “in parallel” with an elastic spring of infinitesimal modulus.) Although the weakness condition (21) does not hold for all response functions in all ageing systems, it seems rather natural to expect it, in the rheological context, for most materials of interest. Indeed, a system with weak long term memory eventually forgets any perturbation that was only applied to it during a finite period. Thus, the treatment of a sample directly after it has been prepared (by loading it into the rheometer, preshearing, etc.) will not have a strong impact on the age-dependence of its rheological properties. This is the usual experience, and is obviously needed for the reproducibility of experimental results; likewise, it means that one can hope to make theoretical predictions which are not sensitive to minute details of the sample preparation. For the SGR model, any long term memory is indeed weak (as shown in Sec. 6.1.1 below); we consider this an attractive feature. Note in any case that a rheological theory for systems with strong long term memory might look very different from the SGR model. We have defined ageing as the property that a significant part of the stress relaxation $`G(t-t_\mathrm{w},t_\mathrm{w})`$ takes place on timescales that grow with the age $`t_\mathrm{w}`$ of the system. In the simplest case, there is only one such growing timescale, proportional to the age of the system itself. The (ageing part of the) stress relaxation then becomes a function of the scaled time difference $`(t-t_\mathrm{w})/t_\mathrm{w}`$. We will encounter such simple ageing behaviour in the glass phase of the SGR model, which is discussed below.
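The criterion (18)-(20) is easy to test numerically on candidate response functions. A minimal sketch (our own, using two hypothetical moduli: one that relaxes on a timescale set by the sample age, and one that is TTI apart from an equilibration transient):

```python
import numpy as np

def G_ageing(dt, tw):
    # relaxation time grows with the sample age tw: simple ageing
    return np.exp(-dt / tw)

def G_transient(dt, tw):
    # TTI relaxation on a timescale of order one, plus an equilibration transient
    return np.exp(-dt) * (1 + np.exp(-tw))

for G in (G_ageing, G_transient):
    lhs = G(1e8, 1e4)    # Eq. (18): t -> infinity at fixed tw
    rhs = G(1e4, 1e12)   # Eq. (19): tw -> infinity first, then Delta t large
    print(f"{G.__name__}: lhs = {lhs:.3g}, rhs = {rhs:.3g}, "
          f"ages: {not np.isclose(lhs, rhs)}")
```

For the ageing modulus the two limits disagree (0 versus 1), violating (20); for the transient one they coincide.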
More complicated ageing scenarios are possible, however: there may be several timescales that grow differently with the age of the system. This can be represented as $$G(t-t_\mathrm{w},t_\mathrm{w})=\sum _i𝒢_i\left[h_i(t)/h_i(t_\mathrm{w})\right]$$ (22) where the functions $`h_i(t)`$ define the different diverging timescales. If there is only a single term in the sum, with $`h(t)=t`$, then the simplest ageing scenario (shown by the SGR model) is recovered. On the other hand, for $`h(t)=\mathrm{exp}(t/\tau _0)`$ (where $`\tau _0`$ is a microscopic time) one has TTI. The more general form $`h(t)=\mathrm{exp}[(t/\tau _0)^{1-\mu }]`$ interpolates between these two limiting cases (and, for $`t-t_\mathrm{w}\ll t_\mathrm{w}`$, gives Struik’s general ‘time waiting-time superposition principle’ Struik (1978)). More generally, Cugliandolo and Kurchan have shown under fairly mild assumptions that (22) is the most general representation of the asymptotic behaviour of step response and correlation functions in systems with weak long term memory Cugliandolo and Kurchan (1994). Let us return now to the status of Eqs. (13,14,15). (These concern the lack of $`t_\mathrm{s}`$-dependence in $`G^{*}(\omega ,t,t_\mathrm{s})`$, the Fourier relationship between frequency and real-time spectra, and the reciprocity between $`G^{*}`$ and $`J^{*}`$.) As stated in Sec. 2.7, these equations have no general validity for systems without TTI. Indeed, one can easily construct theoretical model systems with strong long term memory which violate them. On the other hand, we speculate that systems with weak long term memory will generically have the properties (13,14,15). Plausibility arguments can be given to support this hypothesis Fielding (2000), but these do not yet amount to a proof. The cautionary remarks at the end of Sec. 2.7 therefore still apply. ## 4 The SGR model The SGR model is a phenomenological model which captures many of the observed rheological properties of soft metastable materials, such as foams, emulsions, slurries and pastes Mackley et al. (1994); Ketz et al. (1988); Khan et al. (1988); Mason et al. (1995); Panizza et al. (1996); Hoffmann and Rauscher (1993); Mason and Weitz (1995). It is based upon Bouchaud’s trap model of glassy dynamics, with the addition of strain degrees of freedom, and the replacement of the thermodynamic temperature by an effective (noise) temperature. It incorporates only those characteristics deemed common to all soft glassy materials (SGM’s), namely structural disorder and metastability. We now review its essential features. We conceptually divide a macroscopic sample of SGM into many mesoscopic elements. By mesoscopic we mean large enough that the continuum variables of strain and stress still apply for deformations on the elemental scale, and small enough that any macroscopic sample contains enough elements to allow the computation of meaningful “averages over elements”. We then assign to each element a local strain $`l`$, and corresponding stress $`kl`$, which describe deformation away from some local position of unstressed equilibrium relative to neighbouring elements. The macroscopic stress of the sample as a whole is defined to be $`k\langle l\rangle `$, where $`\langle \cdot \rangle `$ denotes averaging over elements. Note that, for simplicity, (shear-) stress and strain are treated as scalar properties. The model therefore does not predict, or allow for, the various normal stresses which can arise in real materials undergoing nonlinear shear Doi and Edwards (1986).
For a newly prepared, undeformed sample, we make the simplest assumption that $`l=0`$ for each element. (Physically, of course, $`\langle l\rangle =0`$ would be sufficient and is indeed more plausible.) The subsequent application of a macroscopic strain at rate $`\dot{\gamma }`$ causes each element to strain relative to its local equilibrium state and acquire a non-zero $`l`$. For a given element, this continues up to some maximal strain $`l_\mathrm{y}`$, at which point that element yields, and rearranges into a new configuration of local equilibrium with local strain $`l=0`$. (Footnote 9: This ignores possible “frustration” effects: an element may not be able to relax to a fully unstrained equilibrium position due to interactions with neighbouring elements. Such effects can be incorporated into the model, but are not expected to affect the results in a qualitative way Sollich (1998).) Under continued macroscopic straining, the yielded element now strains relative to its new equilibrium, until it yields again; its local strain (and stress) therefore exhibits a saw-tooth dependence upon time. The simplest assumption to make for the behaviour between yields is that $`\dot{\gamma }=\dot{l}`$: the material deformation is locally affine Doi and Edwards (1986). Yield events apart, therefore, the SGR model behaves as an elastic solid of spring constant $`k`$. Yields confer a degree of liquidity by providing a mechanism of stress relaxation. Although above we introduced yielding as a purely strain-induced phenomenon, we in fact model it as an “activated” process Sollich et al. (1997); Sollich (1998). We assume that an element of yield energy $`E=\frac{1}{2}kl_\mathrm{y}^2`$, strained by an amount $`l`$, yields with a certain rate; this defines the probability for yielding in a unit time interval. We write this rate as $`\tau ^{-1}`$, where the characteristic yield time $`\tau =\tau _0\mathrm{exp}\left[(E-\frac{1}{2}kl^2)/x\right]`$ is taken to be the product of an attempt time and an activation factor which is thermal in form. This captures the strain-induced processes described above, since any element strained beyond its yield point will yield exponentially quickly; but it also allows even totally unstrained elements to yield by a process of activation over the energy barrier $`E`$. These activation events mimic, within our simplified model, nonlinear couplings to other elements (the barrier heights depend on the surroundings, which are altered by yield events elsewhere). A more complete model would treat these couplings explicitly. However, in the SGR model, which does not, $`x`$ is regarded as an effective “noise” temperature to model the process. Because the energy barriers are (for typical foams, emulsions, etc.) large compared to the thermal energy $`k_BT`$, so are the energy changes caused by these nonlinear couplings; to mimic these, one expects to need $`x`$ of order the mean barrier height $`\langle E\rangle `$. (Footnote 10: Whether it is fully consistent to have a noise temperature $`x\gg k_BT`$ is a debatable feature of the model Sollich et al. (1997); Sollich (1998); however, we think the results are sufficiently interesting to justify careful study of the model despite any uncertainty over its interpretation. It is also intriguing to note that similar “macroscopic” effective temperatures (which remain nonzero even for $`k_BT\to 0`$) have recently been found in other theories of out-of-equilibrium systems with slow dynamics Kurchan (1999); Cugliandolo et al. (1997).)
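The element dynamics just described are simple to simulate directly. Below is a minimal Monte Carlo sketch (our own illustration, with arbitrary parameter values; it anticipates the exponential prior $`\rho (E)`$ and the units $`k=\tau _0=x_\mathrm{g}=1`$ introduced later in this section). Each element carries a yield energy $`E`$ and local strain $`l`$, strains affinely between yields, and yields with the activated rate $`\exp [-(E-\frac{1}{2}kl^2)/x]`$:

```python
import numpy as np

rng = np.random.default_rng(1)
x, k, gdot = 0.3, 1.0, 0.01       # noise temperature, spring constant, shear rate
N, dt, steps = 5000, 0.05, 4000   # elements, timestep, number of steps (t = 200)

E = rng.exponential(1.0, N)       # yield energies from the prior rho(E) = exp(-E)
l = np.zeros(N)                   # newly prepared sample: l = 0 for each element

for _ in range(steps):
    l += gdot * dt                                    # affine straining between yields
    rate = np.exp(-(E - 0.5 * k * l**2) / x)          # activated yield rate, tau_0 = 1
    yields = rng.random(N) < -np.expm1(-rate * dt)    # yield probability in [t, t+dt]
    l[yields] = 0.0                                   # yielded elements relax fully...
    E[yields] = rng.exponential(1.0, yields.sum())    # ...and draw a fresh yield energy

print("macroscopic stress k<l> =", k * l.mean())
```

Note that the single activated rate handles both spontaneous and strain-assisted yielding, which is the unified treatment discussed next.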
Note that the SGR model treats “noise-induced” yield events (where the strain is much below the yield strain $`l_\mathrm{y}`$, i.e., where $`\frac{1}{2}kl^2\ll E`$) and “strain-induced” yield events (where $`\frac{1}{2}kl^2\approx E`$) in a unified fashion. We will nevertheless find it useful below to distinguish between these two classes occasionally. The disorder inherent to SGM’s is captured by assuming that each element of a macroscopic sample has a different yield energy: a freshly yielded element is assigned a new yield energy selected at random from a “prior” distribution $`\rho (E)`$. This suggests the following alternative view of the dynamics of the SGR model, which is represented graphically in Fig. 2. Each material element of a SGM can be likened to a particle moving in a landscape of quadratic potential wells or “traps” of depth $`E`$. The depths of different traps are uncorrelated with each other and distributed according to $`\rho (E)`$. (Footnote 11: Because of this lack of correlation, it does not make sense to think of a particular spatial arrangement of the traps.) The bottom of each trap corresponds to the unstrained state $`l=0`$; in straining an element by an amount $`l`$, we then effectively drag its representative particle a distance $`\frac{1}{2}kl^2`$ up the sides of the trap, and reduce the effective yield barrier height ($`E\to E-\frac{1}{2}kl^2`$). Once the particle has got sufficiently close to the top of its trap ($`E-\frac{1}{2}kl^2\lesssim x`$), it can hop by activated dynamics to the bottom of another one. This process corresponds to the yielding of the associated material element. In the following, we shall use the terminology of both the “element picture” and the “particle picture” as appropriate. Thus, we will refer to $`\tau =\tau _0\mathrm{exp}\left[(E-\frac{1}{2}kl^2)/x\right]`$ as either the yield or relaxation time of an element, or as the lifetime of a particle in a trap. (Footnote 12: Sometimes this will be further abbreviated to “lifetime of a trap” or “lifetime of an element”. The inverse of $`\tau `$ is the rate at which an element yields/relaxes or a particle hops. However, we normally reserve the term yield rate or hopping rate for the average of these rates over the whole system, i.e., over all elements or particles. This quantity is denoted $`Y`$ and will occur frequently below.) A specific choice of $`\rho (E)`$ is now made: $`\rho (E)=(1/x_\mathrm{g})\mathrm{exp}(-E/x_\mathrm{g})`$, where $`x_\mathrm{g}=\langle E\rangle `$ is the mean height of a barrier chosen from the prior distribution $`\rho (E)`$. As shown by Bouchaud (1992), the exponential distribution, combined with the assumed thermal form for the activated hopping, is sufficient (Footnote 13: This is sufficient, but it is necessary only that the given exponential form be approached at large $`E`$.) to give a glass transition in the model. The transition is at $`x=x_\mathrm{g}`$ and divides the glass phase ($`x\le x_\mathrm{g}`$), in which weak ergodicity breaking occurs, from a more normal phase ($`x>x_\mathrm{g}`$). In the glass phase, the Boltzmann distribution (which is the only possible steady state for activated hopping dynamics, in the absence of strain), $$P_{\mathrm{eq}}(E)\propto \rho (E)\mathrm{exp}(E/x)$$ (23) is not normalizable: thus there is no steady state, and the system must age with time. (The converse applies for $`x>x_\mathrm{g}`$: there is then a unique equilibrium state, which is approached at long times. Hence ageing does not occur, though there may be transients in the approach to equilibrium.)
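The (non-)normalizability of (23) is easy to verify numerically. A minimal sketch (our own check, with $`x_\mathrm{g}=1`$): the normalization integral converges as the upper cutoff grows when $`x>x_\mathrm{g}`$, but grows without bound when $`x\le x_\mathrm{g}`$:

```python
import numpy as np

def Z(x, Emax, n=200001):
    # normalization integral of P_eq(E) ~ rho(E) exp(E/x) = exp(E (1/x - 1)), x_g = 1
    E, dE = np.linspace(0.0, Emax, n, retstep=True)
    f = np.exp(E * (1.0 / x - 1.0))
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dE   # trapezoid rule

for x in (1.5, 1.0, 0.7):
    print(f"x = {x}:  Z(Emax=100) = {Z(x, 100):.4g}   Z(Emax=200) = {Z(x, 200):.4g}")
```

For x = 1.5 the two cutoffs give the same finite answer; for x = 1.0 the integral grows linearly with the cutoff, and for x = 0.7 exponentially.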
Apart from our use of an effective temperature $`x`$, the only modification to Bouchaud’s original model of glasses lies in our introduction of dynamics within traps, coupled to strain. It may appear suspicious that, to obtain a glass transition at all, an exponential form of $`\rho (E)`$ is required Bouchaud (1992). In reality, however, the glass transition is certainly a collective phenomenon: the remarkable achievement of Bouchaud’s model is to represent this transition within what is, essentially, a single-particle description. Thus the chosen “activated” form for the particle hopping rates, and the exponential form of the trap depth distribution, should not be seen as two independent (and doubtful) physical assumptions, but viewed jointly as a tactic that allows glassy dynamics to be modelled in the simplest possible way Sollich et al. (1997); Sollich (1998). From now on, without loss of generality, we choose units so that both $`x_\mathrm{g}`$ and $`k`$ are unity. This means that the strain variable $`l`$ is defined in such a way that an element, drawn at random from the prior distribution, will yield at strains of order one. Since the actual value of the strain variable can be rescaled within the model (the difference being absorbed in a shift of $`k`$), this is purely a matter of convention. But our choice should be borne in mind when interpreting our results for nonlinear strains, given below: where strains “of order unity” arise, these are in fact of order some yield strain $`l_\mathrm{y}`$, which the model does not specify, but which may in reality be a few percent or less. In addition we choose by convention $`\tau _0=1`$; the timescale in the SGR model is thus scaled by the mesoscopic “attempt time” for the activated dynamics. The low frequency limit, which is the main regime of interest, is then defined by $`\omega \tau _0=\omega \ll 1`$. Note that, with our choice of units, $`\langle E\rangle =1`$, so that we expect the interesting physics to involve $`x\sim 1`$. ### 4.1 Constitutive Equation The SGR model is exactly solved by two coupled constitutive equations Sollich (1998), the first of which expresses stress as an integral over strain history, while the second embodies the conservation of probability. We assume that the sample is prepared (in a known initial state of zero stress and strain) at time zero and that a time dependent macroscopic strain $`\gamma (t)`$ is applied thereafter, so $`\gamma (t)=0`$ for $`t\le 0`$. The constitutive equations are then $$\sigma (t)=\gamma (t)G_0(Z(t,0))+\int _0^t\left[\gamma (t)-\gamma (t^{\prime })\right]Y(t^{\prime })G_\rho (Z(t,t^{\prime }))\,dt^{\prime }$$ (24) $$1=G_0(Z(t,0))+\int _0^tY(t^{\prime })G_\rho (Z(t,t^{\prime }))\,dt^{\prime }$$ (25) In these equations $$Z(t,t^{\prime })=\int _{t^{\prime }}^t\mathrm{exp}\left(\left[\gamma (t^{\prime \prime })-\gamma (t^{\prime })\right]^2/2x\right)dt^{\prime \prime }$$ (26) and $`G_\rho (Z)`$ and $`G_0(Z)`$ obey $$G_\rho (Z)=\int _0^{\infty }\rho (E)\mathrm{exp}\left(-Ze^{-E/x}\right)dE$$ (27) $$G_0(Z)=\int _0^{\infty }P_0(E)\mathrm{exp}\left(-Ze^{-E/x}\right)dE$$ (28) where $`P_0(E)`$ is the probability distribution for the yield energies (or trap depths) in the initial state of preparation of the sample at time $`t=0`$. We return below (Sec. 5.1) to the issue of how to choose this initial state. These equations can be understood by viewing yielding as a “birth and death” process: each time an element yields it dies and is reborn with zero stress, and with a yield energy selected randomly from the prior distribution $`\rho (E)`$.
The (average) yield rate at time $`t^{\prime }`$ is $`Y(t^{\prime })`$; the birth rate at time $`t^{\prime }`$ of elements of yield energy $`E`$ is therefore $`Y(t^{\prime })\rho (E)`$. The proportion of these which survive without yielding until time $`t`$ is $`\mathrm{exp}\left[-Z(t,t^{\prime })/\tau (E)\right]`$, where $`\tau (E)=\mathrm{exp}(E/x)`$ is the (mean) lifetime that an unstrained element with yield energy $`E`$ would have. The expression (26) for $`Z(t,t^{\prime })`$ reflects the fact that an element that has last yielded at time $`t^{\prime }`$ and has a yield energy $`E`$ will have a yield rate of $`\tau (E)^{-1}\mathrm{exp}\left(\left[\gamma (t^{\prime \prime })-\gamma (t^{\prime })\right]^2/2x\right)`$ at time $`t^{\prime \prime }`$. Here the exponential factor accounts for the lowering of the yield barrier by strain applied since the element last yielded (see Fig. 2). Note that this factor is unity under conditions where the local strain is everywhere negligible, in which case $`Z(t,t^{\prime })=t-t^{\prime }`$ (we return to this point below). More generally, $`Z(t,t^{\prime })`$ can be thought of as an effective time interval measured on an “internal clock” within an element, which allows for the effect of local strain on its yield rate, by speeding up the clock. This speeding-up effect, which describes strain-induced yielding, is the only source of nonlinearity within the SGR model. According to the above arguments, the number of elements of yield energy $`E`$, present at time $`t`$, which were last reborn at time $`t^{\prime }`$ is $$P(E,t,t^{\prime })=Y(t^{\prime })\rho (E)\mathrm{exp}\left[-Z(t,t^{\prime })/\tau (E)\right]$$ (29) Such elements each carry a local strain $`\gamma (t)-\gamma (t^{\prime })`$, and so the net contribution they make to the stress at time $`t`$ is $$s(E,t,t^{\prime })=\left[\gamma (t)-\gamma (t^{\prime })\right]Y(t^{\prime })\rho (E)\mathrm{exp}\left[-Z(t,t^{\prime })/\tau (E)\right]$$ (30) Integrating these expressions over $`t^{\prime }`$ from $`0`$ to $`t`$ and adding terms representing the contribution from elements which have survived from $`t=0`$ without yielding at all, we get respectively the number $`P(E,t)dE`$ of elements at time $`t`$ with yield energies between $`E`$ and $`E+dE`$: $$P(E,t)=P_0(E)\mathrm{exp}\left[-Z(t,0)e^{-E/x}\right]+\int _0^tP(E,t,t^{\prime })\,dt^{\prime }$$ (31) and the corresponding stress contribution $`s(E,t)dE`$ at time $`t`$ from such elements: $$s(E,t)=\gamma (t)P_0(E)\mathrm{exp}\left[-Z(t,0)e^{-E/x}\right]+\int _0^ts(E,t,t^{\prime })\,dt^{\prime }$$ (32) Integrating (31) and (32) over all yield energies $`E`$, we finally recover our constitutive equations (25) and (24) respectively. Below we will return to these two quantities, which will sometimes be expressed instead as a function of the lifetime $`\tau (E)=\mathrm{exp}(E/x)`$ of an unstrained element with yield energy $`E`$, so that $`P(\tau ,t)d\tau =P(E,t)dE`$, and likewise for $`s`$. Note that, because $`E\ge 0`$, these distributions are nonzero only for $`\tau \ge 1`$. We will not write this restriction explicitly below. Finally, the following alternative form of the first constitutive equation (24) is sometimes useful: $$\sigma (t)=\gamma (t)-\int _0^t\gamma (t^{\prime })Y(t^{\prime })G_\rho (Z(t,t^{\prime }))\,dt^{\prime }$$ (33) This is obtained by substituting (25) into (24). In the limit of small strains, $`Z(t,t^{\prime })`$ is again replaced by $`t-t^{\prime }`$. ### 4.2 Rheological Properties of the SGR Model Solution of the constitutive equations (24, 25) is relatively straightforward under conditions where TTI applies. Here we recall the main results thereby obtained Sollich et al. (1997); Sollich (1998).
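In zero strain, where $`Z(t,t^{\prime })=t-t^{\prime }`$ and (for a deep quench) $`G_0=G_\rho `$, Eq. (25) becomes a linear Volterra equation for the hopping rate $`Y(t)`$, which is easy to solve on a time grid. A minimal numerical sketch (our own discretization; with $`\rho (E)=e^{-E}`$ one can show $`G_\rho (Z)=xZ^{-x}\gamma (x,Z)`$ in terms of the lower incomplete gamma function). The late-time result can be compared against the $`x<1`$ asymptote quoted in Eq. (35) below:

```python
import numpy as np
from scipy.special import gammainc, gamma

x, dt, T = 0.5, 0.1, 400.0        # noise temperature, time step, total time
t = np.arange(0.0, T + dt, dt)

def G_rho(z):
    # G_rho(Z) for rho(E) = exp(-E): x * Z**(-x) * lower_incomplete_gamma(x, Z)
    z = np.asarray(z, dtype=float)
    out = np.ones_like(z)                       # G_rho(0) = 1
    nz = z > 0
    out[nz] = x * gamma(x) * gammainc(x, z[nz]) / z[nz]**x
    return out

# Solve Eq. (25), 1 = G_rho(t) + int_0^t Y(t') G_rho(t - t') dt', by the trapezoid rule
Y = np.zeros_like(t)
Y[0] = x / (1.0 + x)              # initial hopping rate: <exp(-E/x)> over the prior
K = G_rho(t)                      # kernel values K[i - j] = G_rho(t_i - t_j)
for i in range(1, len(t)):
    conv = dt * (0.5 * Y[0] * K[i] + np.dot(Y[1:i], K[1:i][::-1]))
    Y[i] = (1.0 - K[i] - conv) / (0.5 * dt)     # trapezoid weight at t' = t_i, K[0] = 1

print("numerical Y(T)     =", Y[-1])
print("Eq. (35) asymptote =", T**(x - 1) / (x * gamma(x) * gamma(1 - x)))
```

The two numbers should agree up to subdominant corrections in 1/T; the discretization is deliberately crude, a sketch rather than a production solver.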
#### 4.2.1 Linear Spectra A regime of linear rheological response arises whenever the effect of strain on the effective time interval $`Z(t,t^{\prime })`$ is small. This requires that the local strains in each element remain small; in oscillatory shear, where $`\gamma (t)=\gamma _0e^{i\omega t}`$, this is satisfied at low enough strain amplitudes $`\gamma _0`$ for any finite frequency $`\omega `$. (The same is not true in steady shear flow; we return to this in Sec. 4.2.2 below.) In the linear regime, the model’s internal dynamics are independent of the imposed deformation: the elements’ lifetimes are, to order $`\gamma _0`$, strain-independent. In the constitutive equations, $`Z(t,t^{\prime })`$ can then be replaced by the time interval $`t-t^{\prime }`$ (there is no strain-induced yielding). As described in Sec. 2.6 above, the conventional definition of the linear viscoelastic spectra $`G^{\prime }(\omega ),G^{\prime \prime }(\omega )`$ (Eqs. 10,11) requires not only linearity but also TTI. Thus they are well-defined only for an equilibrium state; in the SGR model, the latter exists only for $`x>1`$. But even at $`x>1`$ these spectra show interesting power law dependencies at low frequency (Footnote 14: Here and throughout this paper, “low frequency” in the SGR model means $`\omega \ll 1`$, that is, frequencies small compared to the mesoscopic attempt rate for activated hopping, $`\tau _0^{-1}=1`$ (in our chosen units).); these are summarized as follows (the prefactors are omitted, but discussed by Sollich et al. (1997); Sollich (1998)): $$\begin{array}{ccccccc}G^{\prime \prime }\hfill & \sim & \omega \hfill & \text{for }2<x,\hfill & \sim & \omega ^{x-1}\hfill & \text{for }1<x<2\hfill \\ G^{\prime }\hfill & \sim & \omega ^2\hfill & \text{for }3<x,\hfill & \sim & \omega ^{x-1}\hfill & \text{for }1<x<3\hfill \end{array}$$ (34) Throughout its glass phase ($`x\le 1`$), where the SGR model violates TTI, we must study instead the time dependent spectra $`G^{*}(\omega ,t,t_\mathrm{s})`$ as defined in Sec. 2.7 above; this is done in Sec. 6.1 below. An alternative, explored by Sollich et al. (1997); Sollich (1998); Evans et al. (1999), is to observe that TTI can be restored even for $`x\le 1`$ by introducing a cutoff $`E_{\mathrm{max}}`$ in the trap depth distribution $`\rho (E)`$. This gives interesting predictions for $`x<1`$: for example, one finds $`G^{\prime }(\omega )\sim \omega ^{1-x}`$ for $`\tau ^{-1}(E_{\mathrm{max}})\ll \omega \ll 1`$ Sollich et al. (1997); Sollich (1998). However, the role of this cutoff is to bring all ageing processes to a halt after a large finite time of order $`\tau (E_{\mathrm{max}})`$; formally there is no long term memory. Since in the present work we want to study the ageing regime itself, we assume instead that $`E_{\mathrm{max}}`$ is infinitely large, so that for $`x\le 1`$, ageing continues indefinitely. #### 4.2.2 Flow Curve The flow curve was defined in Sec. 2.8 as the nonlinear stress response $`\sigma (\dot{\gamma })`$ to a steady strain rate $`\dot{\gamma }`$. For the SGR model, it shows the following scalings: $$\begin{array}{ccccc}\sigma \hfill & \sim & \dot{\gamma }\hfill & \text{for}\hfill & x>2\hfill \\ \sigma \hfill & \sim & \dot{\gamma }^{x-1}\hfill & \text{for}\hfill & 1<x<2\hfill \\ \sigma -\sigma _\mathrm{y}\hfill & \sim & \dot{\gamma }^{1-x}\hfill & \text{for}\hfill & x<1\hfill \end{array}$$ Here $`\dot{\gamma }\ll 1`$ is assumed; prefactors are discussed by Sollich (1998). The flow curve exhibits two interesting features which are explored more fully in Secs. 6.2.2 and 7.2.1.
Firstly, for $`x<1`$ there is a yield stress $`\sigma _\mathrm{y}(x)`$ (whose value is plotted in Sollich (1998)). A linear response regime exists at $`\sigma \ll \sigma _\mathrm{y}`$; ageing can occur for all $`\sigma <\sigma _\mathrm{y}`$. For $`\sigma >\sigma _\mathrm{y}`$ the system achieves a steady state, and ageing no longer occurs. This is because any finite flow rate, however small, causes strain-induced yielding of elements even in the deepest traps. (Footnote 15: The time required to yield, with a steady flow present, is only power law, rather than exponential, in $`E`$.) Thus the ageing process is curtailed or “interrupted” by flow Sollich et al. (1997); Sollich (1998); the flow curve is well-defined (and independent of the choice of $`P_0`$ in the initial state) even in the glass phase. The second interesting feature is that, for $`1<x<2`$ (where ageing is absent), there is no linear response regime at all in steady shear: however small the applied stress, the behaviour is dominated by strain-induced yielding. There is an anomalous (power law) relation between stress and strain rate, and an infinite zero-shear viscosity (cf. Sec. 2.8 above). This also shows up in (34), where $`\eta =\lim _{\omega \to 0}G^{\prime \prime }(\omega )/\omega `$ is likewise infinite. ## 5 Ageing in the SGR model In this section we discuss some general features of ageing in the SGR model; in subsequent ones, we explore the rheological consequences of these phenomena. ### 5.1 Initial Preparation of Sample As noted above, to solve the constitutive equations (24,25) the initial distribution $`P_0(E)`$ of yield energies or trap depths at time zero must be specified. Since we are largely interested in the rheological properties of the glass phase ($`x\le 1`$), for which no steady-state distribution of yield energies exists in the absence of flow, we cannot appeal to equilibrium to fix $`P_0(E)`$. Instead, this should depend explicitly on the way the sample was prepared. For simplicity, we choose the case where $`P_0(E)=\rho (E)`$; this is equivalent to suddenly “quenching” the noise temperature $`x`$, at time zero, from a very large value ($`x\gg 1`$) to a value within the range of interest. We refer to it as a “deep quench”. The question of whether or not a deep quench is a good model for the sample preparation of a SGM remains open Sollich et al. (1997); Sollich (1998); since $`x`$ is not truly a temperature, it is not clear exactly how one would realize such a quench experimentally. (Footnote 16: One argument in its favour is that this choice minimizes the information content (maximizes the entropy) of the initial distribution $`P_0`$; it is therefore a legitimate default choice when no specific information about the preparation condition is available.) However, we expect that most interesting aspects of ageing behaviour are not too sensitive to the initial quench conditions $`P_0(E)`$, so that a deep quench is indeed an adequate model. A study of the effect of quench depth on the results for the SGR model is summarized in App. A.4; we find independence of quench depth so long as the final noise parameter $`x`$ is not too small. (Footnote 17: More precisely, if the “deep quench” specification is altered to one in which, at time zero, the system is quenched from equilibrium at $`x_0>1`$ to its final noise temperature $`x`$, the leading results are independent of $`x_0`$ so long as the final $`x`$ value obeys $`x>1/(2-1/x_0)`$. Note that this condition is never satisfied for $`x<1/2`$.)
More generally, a degree of insensitivity to the initial quench conditions is consistent with the weak long term memory scenario; a system whose response decays with a relaxation time of order its age will typically lose its memory of the initial state by a power law decay in time. This can then easily be swamped by larger, $`P_0`$-independent contributions, as indeed occurs in most regimes of the SGR model (App. A.4). Following the initial preparation step, the subsequent time evolution of the rheological response is, within the glass phase, characterized by an ageing process. To allow simpler comparisons with the non-ageing (but still slow) dynamics for $`1<x<2`$, below we shall also consider a similar quench from large $`x`$ to values lying in this range. ### 5.2 Ageing of the Lifetime Distribution We now (following Bouchaud (1992) and Monthus and Bouchaud (1996)) discuss in detail the way ageing affects the lifetime distribution (or equivalently the distribution of particle hopping rates) within the SGR model. We ignore the presence of a strain; the following results apply when there is no flow, and in the linear response regime, where strain-induced hops can be ignored. Under such conditions, the hopping rate $`Y(t)`$ is a strain-independent function of time, and is readily found from (25) by Laplace transform. This is done in App. A.2. For the case of a deep quench (as defined above), the exact asymptotic forms of $`Y`$ are as follows: $$\begin{array}{ccccc}Y(t)\hfill & =& \frac{x-1}{x}\hfill & \text{for}\hfill & x>1\hfill \\ Y(t)\hfill & =& \frac{1}{\mathrm{ln}(t)}\hfill & \text{for}\hfill & x=1\hfill \\ Y(t)\hfill & =& \frac{t^{x-1}}{x\mathrm{\Gamma }(x)\mathrm{\Gamma }(1-x)}\hfill & \text{for}\hfill & x<1\hfill \end{array}$$ (35) where $`\mathrm{\Gamma }(x)`$ is the usual Gamma function. These results assume $`t\gg 1`$, which we will usually take to be the case from now on (since timescales of experimental interest are expected to be much longer than the mesoscopic attempt time $`\tau _0=1`$). Note that the late-time asymptotes given here are subject to various subdominant corrections (see App. A.2), some of which are sensitive to the initial state of sample preparation. (Footnote 18: For a quench from initial noise temperature $`x_0`$, the relative order of the affected subdominant terms becomes $`t^{-x(1-1/x_0)}`$. Thus, unless one quenches from a point that is itself only just above the glass transition, or to a point that has $`x`$ only just above zero, the exact specification of the initial state is unimportant at late times.) A closely related quantity to the hopping rate $`Y`$ is the distribution of yield energies $`P(E,t)`$ – which obeys (31) – or equivalently the lifetime distribution $`P(\tau ,t)`$. As previously pointed out, in the absence of strain, the only candidate for a steady state distribution of yield energies $`P_{\mathrm{eq}}(E)`$ is the Boltzmann distribution: $`P_{\mathrm{eq}}(E)\propto \rho (E)\mathrm{exp}(E/x)`$, which translates to $`P_{\mathrm{eq}}(\tau )=P_{\mathrm{eq}}(E)\,dE/d\tau \propto \tau ^{-x}`$; in either language, the distribution is not normalizable for $`x<1`$, leading to broken TTI in the model Bouchaud (1992). Let us therefore consider a deep quench at time $`t=0`$, and define the probability distribution for trap lifetimes $`P(\tau ,t_\mathrm{w})`$ as a function of the time $`t_\mathrm{w}`$ elapsed since sample preparation. (In Sec. 6, we will identify $`t_\mathrm{w}`$ with the onset of a step strain.)
The initial lifetime distribution, $`P(\tau ,0)`$, describes a state in which the trap depths are chosen from the prior distribution, $`P(E,0)=\rho (E)`$; just after a quench to temperature $`x`$ the distribution of lifetimes is therefore $`P(\tau ,0)=\rho (E)\,dE/d\tau \propto \tau ^{-(1+x)}`$. Thereafter, by changing variable from $`E`$ to $`\tau `$ in (31), we find the following approximate expressions for $`P(\tau ,t_\mathrm{w})`$: $$\begin{array}{ccccccc}P(\tau ,t_\mathrm{w})\hfill & \approx & xY(t_\mathrm{w})\tau \rho (\tau )\hfill & \text{for}& \tau \ll t_\mathrm{w}\hfill & \text{and}& t_\mathrm{w}\gg 1\hfill \\ P(\tau ,t_\mathrm{w})\hfill & \approx & xY(t_\mathrm{w})t_\mathrm{w}\rho (\tau )\hfill & \text{for}& \tau \gg t_\mathrm{w}\hfill & \text{and}& t_\mathrm{w}\gg 1\hfill \end{array}$$ (36) For a quench temperature above the glass point ($`x>1`$), $`P(\tau ,t_\mathrm{w})`$ exhibits a transient decay; as $`t_\mathrm{w}\to \infty `$, we find (using the results in (35)) that $`P(\tau ,t)\to P_{\mathrm{eq}}(\tau )=(x-1)\tau ^{-x}`$, as expected. The nature of the approach to the long time limit is illustrated schematically in Fig. 3(a); the final distribution has most of its weight at $`\tau =O(1)`$, consistent with the fact that the hopping rate (35) is itself $`O(1)`$ in this phase of the model. For $`x<1`$, in contrast, $`P(\tau ,t_\mathrm{w})`$ evolves as in Fig. 3(b); the limit of $`P(\tau ,t_\mathrm{w})`$ is zero for any finite $`\tau `$ as $`t_\mathrm{w}\to \infty `$. Hence, the proportion of elements having yield time of order unity tends to zero as $`t_\mathrm{w}\to \infty `$; the bulk of the distribution’s weight is at $`\tau \sim t_\mathrm{w}`$. (Footnote 19: More formally, for $`x<1`$, we have $`\lim _{t_\mathrm{w}\to \infty }\int _1^bP(\tau ,t_\mathrm{w})\,d\tau =0`$ for any $`b>1`$, while for any $`a<1<b`$ we have instead $`\lim _{t_\mathrm{w}\to \infty }\int _{at_\mathrm{w}}^{bt_\mathrm{w}}P(\tau ,t_\mathrm{w})\,d\tau =O(1)`$.) This is consistent with the decay of the hopping rate as a power law of $`t_\mathrm{w}`$, and with the idea that, in a system undergoing ageing, the characteristic relaxation time is typically of the order of the age of the system itself. ### 5.3 Higher Moments The above analysis focuses on the time-evolution of the distribution of elements’ lifetimes, which is the usual quantity of interest in the formal analysis of ageing effects Bouchaud and Dean (1995); Bouchaud et al. (1998). Indeed, the latter are usually attributed to the divergence of the normalization integral, or zeroth moment, of $`P_{\mathrm{eq}}(\tau )`$ (undefined, within the Boltzmann distribution, when $`x\le 1`$). Formally, however, one can consider a series of critical $`x`$ values, $`x_n=n+1`$, below each of which the $`n`$th moment of $`P_{\mathrm{eq}}`$ becomes undefined Evans et al. (1999); Odagaki (1995). For $`n>0`$ this does not lead to ageing, in the sense defined in Sec. 3 above, but can lead to anomalous, slow time evolution in any experimental measurement that probes the $`n`$th moment. For example, in Sec. 6.1.3 below, we discuss the time-evolution of the distribution of stresses borne by elements in a steady-shear startup experiment. In steady state, the stress carried by an element whose lifetime is $`\tau `$ is of order $`\dot{\gamma }\tau `$. If $`P(\tau )=P_{\mathrm{eq}}(\tau )`$ and is unperturbed by flow (as a linear response analysis would assume), then the zero-shear viscosity is of order $`\int \tau P_{\mathrm{eq}}(\tau )\,d\tau `$, which diverges for $`x<2`$ (see Sec. 2.8 above).
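The moment hierarchy $`x_n=n+1`$ can be confirmed symbolically. A small sketch (our own check, using the unnormalized Boltzmann form $`P_{\mathrm{eq}}(\tau )\sim \tau ^{-x}`$): for each $`n`$, the moment integral is finite when $`x`$ lies above $`n+1`$ and divergent when it lies below:

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
# n-th moment of the (unnormalized) Boltzmann state P_eq(tau) ~ tau**(-x):
# finite only for x > x_n = n + 1
for n in (0, 1, 2):
    for x in (n + sp.Rational(3, 2), n + sp.Rational(1, 2)):
        moment = sp.integrate(tau**n * tau**(-x), (tau, 1, sp.oo))
        print(f"n = {n}, x = {x}: moment = {moment}")
```

For each `n` the first choice of `x` (above `n + 1`) returns a finite value and the second returns `oo`.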
## 6 Rheological Ageing: Imposed Strain In this and the following sections, we describe our new rheological results for the SGR model. We focus particularly on rheological ageing, which occurs in the glass phase ($`x<1`$); however, several new results for $`1<x<2`$, including anomalous transient behaviour, are also presented. The case $`x=1`$, which divides these regimes, shows its own especially weak (logarithmic) form of ageing and is, where necessary, treated separately below. For simplicity, we consider (for all $`x`$ values) only the idealized route to sample preparation described in Sec. 5.1 above: the system is prepared at time $`t=0`$ by means of a deep quench, so that $`G_0(Z(t,0))=G_\rho (Z(t,0))`$ in the constitutive equations (24, 25). Note that these constitutive equations for the SGR model are more readily solved to find the stress response to an imposed strain, rather than vice-versa. Accordingly, we focus first on strain-controlled experiments and defer to Sec. 7 our analysis of the stress-controlled case. ### 6.1 Linear Response As described in Sec. 4.2.1 above, when local strains are negligible, the SGR model displays a linear response regime. The effective time interval $`Z(t,t^{\prime })`$ in Eqs. (24,25) becomes the actual time interval $`t-t^{\prime }`$, and the hopping rate $`Y(t^{\prime })`$ a strain-independent function of time. For the deep quench considered here, $`Y(t^{\prime })`$ assumes the asymptotic forms summarized in (35). The stress response to any strain history then follows simply from (24), by integration. #### 6.1.1 Step Strain For a step strain, the amplitude $`\gamma _0`$ gives the maximum local strain experienced by any element. The condition for linearity in this case is therefore simply $`\gamma _0\ll 1`$. The linearized step strain response was defined in (2). It is found for the SGR model (Footnote 20: Note that by construction of the SGR model, the linear step strain response is actually identical to the correlation function defined by Bouchaud for his trap model Bouchaud (1992); Monthus and Bouchaud (1996).) using (33): $$G(t-t_\mathrm{w},t_\mathrm{w})=1-\int _{t_\mathrm{w}}^tY(t^{\prime })G_\rho (t-t^{\prime })\,dt^{\prime }$$ (37) As outlined in App. A.3, analytic limiting forms for $`G(t-t_\mathrm{w},t_\mathrm{w})`$ can be found when experimental timescales are large on the scale of the mesoscopic attempt time $`\tau _0=1`$, so that $`t-t_\mathrm{w}\gg 1`$ and $`t_\mathrm{w}\gg 1`$. In this limit we identify two distinct regimes: a short time interval regime $`t-t_\mathrm{w}\ll t_\mathrm{w}`$ and a long time interval regime $`t-t_\mathrm{w}\gg t_\mathrm{w}`$ (where the measure of “short” and “long” is not now $`\tau _0`$ but $`t_\mathrm{w}`$ itself). The limiting forms in each case depend on the value of $`x`$; our results are summarized in table 1. The asymptotic scalings apparent in the various entries of table 1 can be physically motivated by the following simple arguments. Upon the application of the strain at time $`t_\mathrm{w}`$ the local strain of each element exactly follows the macroscopic one, and the instantaneous response is elastic (Footnote 21: This is a general characteristic of the SGR model: whenever the macroscopic strain changes discontinuously by an amount $`\mathrm{\Delta }\gamma `$, the stress $`\sigma `$ also increases by $`\mathrm{\Delta }\gamma `$.): $`G(0,t_\mathrm{w})=1`$. In the time following $`t_\mathrm{w}`$, elements progressively yield and reset their local stresses $`l`$ back to zero.
The stress remaining at $`t`$ will be that fraction of elements which has survived from $`t_\mathrm{w}`$ without yielding, and hence roughly that fraction $`\int _{t-t_\mathrm{w}}^{\infty }P(\tau ,t_\mathrm{w})\,d\tau `$ which, at time $`t_\mathrm{w}`$, had time constants greater than $`t-t_\mathrm{w}`$. Hence in measuring the linear response to a step strain we are probing the properties of the system as they were at the time of strain application. Using the approximate expressions given in (36) above, we have $`P(\tau ,t_\mathrm{w})\sim \tau ^{-x}`$ for $`\tau \ll t_\mathrm{w}`$ and $`P(\tau ,t_\mathrm{w})\sim t_\mathrm{w}\tau ^{-(1+x)}`$ for $`\tau \gg t_\mathrm{w}`$. This gives, for short time intervals ($`t-t_\mathrm{w}\ll t_\mathrm{w}`$) $$G(t-t_\mathrm{w},t_\mathrm{w})\approx 1-\int _1^{t-t_\mathrm{w}}P(\tau ,t_\mathrm{w})\,d\tau \approx 1-x\frac{(t-t_\mathrm{w})^{1-x}-1}{t_\mathrm{w}^{1-x}-x}$$ and, for long time intervals ($`t-t_\mathrm{w}\gg t_\mathrm{w}`$) $$G(t-t_\mathrm{w},t_\mathrm{w})\approx \int _{t-t_\mathrm{w}}^{\infty }P(\tau ,t_\mathrm{w})\,d\tau \approx \frac{(1-x)t_\mathrm{w}(t-t_\mathrm{w})^{-x}}{t_\mathrm{w}^{1-x}-x}$$ In fact these estimates already approximate the numerical data quite well. Even better agreement is obtained by adjusting the prefactors to fit the asymptotic results in table 1: $$G(t-t_\mathrm{w},t_\mathrm{w})\approx 1-\frac{\mathrm{\Gamma }(x)(t-t_\mathrm{w})^{1-x}-1}{\mathrm{\Gamma }^2(x)\mathrm{\Gamma }(2-x)t_\mathrm{w}^{1-x}-1}\text{for }t-t_\mathrm{w}\ll t_\mathrm{w}$$ (38) $$G(t-t_\mathrm{w},t_\mathrm{w})\approx \frac{(x-1)\mathrm{\Gamma }(x)t_\mathrm{w}(t-t_\mathrm{w})^{-x}}{1-\mathrm{\Gamma }(x)\mathrm{\Gamma }(x+1)\mathrm{\Gamma }(2-x)t_\mathrm{w}^{1-x}}\text{for }t-t_\mathrm{w}\gg t_\mathrm{w}$$ (39) In the relevant time regimes, these formulae agree well with our numerical results (see Fig. 4), at least over the noise temperature range 0 to 2; they could therefore be used in a standard curve fitter for comparison with experimental data. In the limit $`t-t_\mathrm{w}\to \infty `$ and $`t_\mathrm{w}\to \infty `$, they reproduce (by construction) the results shown in table 1 for $`x>1`$ and $`x<1`$. The logarithmic terms at the glass point ($`x=1`$) can also be recovered by taking the limit $`x\to 1`$ first. Using the forms for $`G(t-t_\mathrm{w},t_\mathrm{w})`$ summarized in table 1, and substituting these in Eqs. (20,21), we see that the SGR model has short term memory for $`x>1`$ and weak long term memory for $`x\le 1`$. Thus we expect transients for $`x>1`$ and ageing for $`x\le 1`$. As elaborated in Fig. 5, this is indeed what we find. More generally, these step strain results for the SGR model show some interesting features of rheological ageing. Consider first the behaviour above the glass transition ($`x>1`$). Here the stress decay at short time intervals ($`t-t_\mathrm{w}\ll t_\mathrm{w}`$) depends only upon the time interval $`t-t_\mathrm{w}`$ between imposition of the strain and measurement of the stress, and not on the sample age $`t_\mathrm{w}`$. This is because the traps which contribute to stress decay during this interval are mainly those with lifetimes $`\tau <t-t_\mathrm{w}`$; and the population of these traps has already reached Boltzmann equilibrium before the strain is switched on (see Fig. 3(a)). Taking the limit $`t_\mathrm{w}\to \infty `$ at constant $`t-t_\mathrm{w}`$ (i.e., letting the system fully equilibrate before we apply the strain), we recover a TTI stress relaxation function which decays to zero on timescales of order one (the mesoscopic attempt time).
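Since (38,39) are intended for use in standard curve fitters, a direct transcription may be useful. A minimal sketch (our own, with the signs as reconstructed above; `dt` stands for the time interval t - t_w):

```python
import numpy as np
from scipy.special import gamma as Gamma

def G_short(dt, tw, x):
    # Eq. (38): linear step-strain modulus for 1 << dt << tw
    return 1 - (Gamma(x) * dt**(1 - x) - 1) / \
               (Gamma(x)**2 * Gamma(2 - x) * tw**(1 - x) - 1)

def G_long(dt, tw, x):
    # Eq. (39): linear step-strain modulus for dt >> tw
    return (x - 1) * Gamma(x) * tw * dt**(-x) / \
           (1 - Gamma(x) * Gamma(x + 1) * Gamma(2 - x) * tw**(1 - x))

x, tw = 0.7, 1e4
print(G_short(np.array([10.0, 100.0, 1000.0]), tw, x))   # slow, age-limited decay
print(G_long(np.array([1e5, 1e6]), tw, x))               # power-law tail ~ dt**(-x)
```

Either function could be handed directly to a fitting routine, with `x` and an overall prefactor as free parameters.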
On the other hand, for any finite waiting time $`t_\mathrm{w}`$, the stress decay at long enough times ($`t-t_\mathrm{w}\gg t_\mathrm{w}`$) violates TTI, since it is controlled by decay out of deep traps ($`\tau \gtrsim t_\mathrm{w}`$) which had not already equilibrated before $`t_\mathrm{w}`$. Note that even though this feature of the stress relaxation depends explicitly on $`t_\mathrm{w}`$, it is not an ageing effect according to our definition in Sec. 3. This is because the deviations from TTI and the dependence on $`t_\mathrm{w}`$ manifest themselves at ever smaller values of $`G`$ as $`t_\mathrm{w}`$ becomes large. Equivalently, if we assume that $`G(t-t_\mathrm{w},t_\mathrm{w})`$ can be measured reliably only as long as it remains greater than some specified value (a small fraction $`\epsilon `$ of its initial value $`G(0,t_\mathrm{w})=1`$, for example), then the results will become $`t_\mathrm{w}`$-independent for sufficiently large $`t_\mathrm{w}`$. Below the glass point ($`x\le 1`$) we see true ageing, rather than anomalous transient effects: a significant part of the stress relaxation $`G(t-t_\mathrm{w},t_\mathrm{w})`$ now takes place on timescales that increase with the sample age $`t_\mathrm{w}`$ itself. In fact, in the case of the SGR model, this applies to the complete stress relaxation, and $`t_\mathrm{w}`$ itself sets the relevant timescale: for $`x<1`$, $`G`$ depends on time only through the ratio $`(t-t_\mathrm{w})/t_\mathrm{w}`$. (Footnote 22: This is typical, but not automatic, for ageing systems; the case $`x=1`$, for example, does not have it. In general, the timescale for ageing can be any monotonically increasing and unbounded function of $`t_\mathrm{w}`$. There can also be parts of the stress relaxation which still obey TTI. An example is $`G(t-t_\mathrm{w},t_\mathrm{w})=g_1(t-t_\mathrm{w})+g_2((t-t_\mathrm{w})/t_\mathrm{w})`$, which exhibits ageing when $`g_2`$ is nonzero, but also has a TTI short time part described by $`g_1`$. Superpositions of relaxations with different ageing timescales are also possible; compare Eq. (22).) It is still true that stress decay during the interval $`t-t_\mathrm{w}`$ is dominated by traps for which $`\tau <t-t_\mathrm{w}`$, but no longer true that these traps have reached Boltzmann equilibrium by time $`t_\mathrm{w}`$: in an ageing system such equilibrium is never attained, even for a subset of shallow traps (see Fig. 3(b)). Instead, the population of such traps will gradually deplete with age, as the system explores ever-deeper features in the energy landscape. Decay from these deep traps becomes ever slower; the limit $`t_\mathrm{w}\to \infty `$ (for any finite $`t-t_\mathrm{w}`$) gives completely arrested behaviour in which all dynamics has ceased, and the system approaches a state of perfect elasticity ($`G=1`$). Even in an experiment that can only resolve values of $`G`$ above a threshold $`\epsilon `$ (see above), we would detect that the stress relaxation becomes slower and slower as $`t_\mathrm{w}`$ increases. The fact that $`G`$ depends on time only through the ratio $`(t-t_\mathrm{w})/t_\mathrm{w}`$ is a simple example of Struik’s ‘time ageing-time superposition’ principle Struik (1978): the relaxation curves for different $`t_\mathrm{w}`$ can be superposed by a rescaling of the time interval $`t-t_\mathrm{w}`$ by the sample age. However, as mentioned previously, Struik’s discussion allows a more general form in which the scale factor varies as $`t_\mathrm{w}^\mu `$, with $`\mu <1`$.
The case $`\mu =1`$, exemplified by the SGR model, is the only one in which, even at very long times, the relaxation time does not become short compared to the system age. #### 6.1.2 Oscillatory Strain In an oscillatory strain, the maximal local strain of any element is $`\gamma _0`$, the strain amplitude. Thus a linear regime in the SGR model is ensured whenever $`\gamma _0\ll 1`$. The linear viscoelastic spectrum, as defined in (12), can be found for the SGR model using (33): $$G^{*}(\omega ,t,t_\mathrm{s})=1-\int _{t_\mathrm{s}}^te^{-i\omega (t-t^{\prime })}Y(t^{\prime })G_\rho (t-t^{\prime })\,dt^{\prime }$$ (40) In principle, this quantity depends on $`t_\mathrm{s}`$, the time when the oscillatory strain was started. However, when the experimental timescales become large, we find (as shown in App. B) that this dependence on $`t_\mathrm{s}`$ is weak. In fact, within the SGR model, the conditions needed to make $`G^{*}`$ negligibly dependent on $`t_\mathrm{s}`$ (for low frequencies, $`\omega \ll 1`$) are that $`\omega (t-t_\mathrm{s})\gg 1`$ and $`\omega t_\mathrm{s}\gg 1`$. The first signifies merely that many cycles of oscillatory strain are performed before the stress is measured; the second ensures that transient contributions from the initial sample preparation stage (the quench at $`t=0`$) are negligible. Notably, these criteria do not depend on the noise temperature $`x`$, and therefore hold even in the glass phase ($`x\le 1`$); see Fig. 6. The fact that they are sufficient even in the glass phase is far from obvious physically, and requires a careful discussion: we give this in App. B. Broadly speaking, these criteria are satisfied in any experiment that would reliably measure a conventional $`G^{*}(\omega )`$ spectrum for systems with TTI. For the purposes of such experiments, we can therefore drop the $`t_\mathrm{s}`$ argument and define a time-dependent spectrum $`G^{*}(\omega ,t)`$. Our results for the long-time behaviour ($`t\gg 1`$) of this quantity are as follows (see App. A.3): $$\begin{array}{ccccc}G^{*}(\omega ,t)\hfill & =& \mathrm{\Gamma }(x)\mathrm{\Gamma }(2-x)(i\omega )^{x-1}\hfill & \text{for}\hfill & 1<x<2\hfill \\ G^{*}(\omega ,t)\hfill & =& 1+\frac{\mathrm{ln}(i\omega )}{\mathrm{ln}(t)}\hfill & \text{for}\hfill & x=1\hfill \\ G^{*}(\omega ,t)\hfill & =& 1-\frac{1}{\mathrm{\Gamma }(x)}(i\omega t)^{x-1}\hfill & \text{for}\hfill & x<1\hfill \end{array}$$ (41) For comparison with experimental results, the simple interpolating form $$G^{*}(\omega ,t)=1-\frac{\mathrm{\Gamma }(x)\mathrm{\Gamma }(2-x)\left(i\omega \right)^{x-1}-1}{\mathrm{\Gamma }^2(x)\mathrm{\Gamma }(2-x)t^{1-x}-1}$$ (42) may be useful; we have checked that it provides a good fit to our numerical data, at least over the noise temperature range 0 to approximately 1.3 (see Fig. 7). By measuring $`G^{*}(\omega ,t)`$ we are directly probing the properties of the system at the time of measurement, $`t`$. In light of this, the results of (41) are easily understood. In the ergodic phase ($`x>1`$), $`G^{*}(\omega ,t)`$ will reach a $`t`$-independent value within a time of $`O(1/\omega )`$ after the quench, as the relevant traps will then have attained their equilibrium population. The relaxation time is then of $`O(\tau _0)`$ (that is, $`O(1)`$ in our units) and the response $`G^{*}(\omega ,t)`$ is a function only of $`\omega `$. In contrast, below the glass point the characteristic relaxation time at the epoch of measurement is of order $`t`$, and the response is a function only of the product $`\omega t`$.
Since the losses in an oscillatory measurement arise from traps with lifetimes less than about $`1/\omega `$ (elements in deeper traps respond elastically), the overall response becomes more elastic as the system ages into traps with $`\tau >1/\omega `$. Numerical results for the viscoelastic spectrum $`G^{*}(\omega ,t)`$ at various measurement times $`t`$ for various $`x`$ are shown in Fig. 8. These indeed show a characteristic “hardening” of the glassy material as it ages: the storage modulus at low frequencies evolves upwards, and the loss modulus downward Sollich et al. (1997); Sollich (1998). Each spectrum terminates at frequencies of order $`\omega t\sim 1`$. This is because one cannot measure a true oscillatory response for periods beyond the age of the system (Footnote 23: And, although (40) still provides an unambiguous definition of $`G^{*}(\omega ,t,t_\mathrm{s})`$, this ceases to be independent of $`t_\mathrm{s}`$ in this regime, so $`G^{*}(\omega ,t)`$ is undefined.). Therefore, the rise at low frequencies in $`G^{\prime \prime }`$ spectra like Fig. 1 represents the ultimate rheological behaviour (Footnote 24: Note that this only applies for $`\mu =1`$ in Struik’s scheme, as exemplified by SGR. Whenever $`\mu <1`$, the region to the left of the loss peak can, in principle, be accessed eventually.). It is shown in App. B that the insensitivity of $`G^{*}(\omega ,t,t_\mathrm{s})`$ to $`t_\mathrm{s}`$ in practical measurements of the viscoelastic spectrum (where an oscillatory strain is maintained over many cycles) arises because (even when $`x\le 1`$) the most recently executed strain cycles dominate the stress response at time $`t`$. In essence, the result means that, as long as the oscillatory strain was started many cycles ago, there is no memory of when it was switched on; accordingly (by linearity) an oscillatory strain started in the distant past and then switched off at $`t_\mathrm{s}`$ will leave a stress that decays on a timescale comparable to the period of the oscillation. This is markedly different from non-oscillatory perturbations, where long term memory implies that the response to a step strain, applied for a long time, persists for a similarly long time after it is removed (see Sec. 3 above). Thus the fact that the SGR glass “forgets” the $`t_\mathrm{s}`$ argument of $`G^{*}(\omega ,t,t_\mathrm{s})`$ is directly linked to the oscillatory nature of the perturbation. As also shown in App. B, this forgetfulness means that, in the SGR model, a Fourier relationship between oscillatory and step strain responses is recovered; to a good approximation, one has the relation (14) $$G^{*}(\omega ,t)=i\omega \int _0^{\infty }e^{-i\omega t^{\prime }}G(t^{\prime },t)\,dt^{\prime }$$ (43) Apart from the explicit dependence on the measurement time (Footnote 25: Formally, $`t`$ appears as the time at which a step strain was initiated, or an oscillatory measurement ended. Thus $`G^{*}(\omega ,t)`$ is, to within a factor $`i\omega `$, the Fourier transform of the step strain response function $`G(\mathrm{\Delta }t,t)`$ that would be measured if a step strain were applied immediately after the oscillatory measurement had been done.) $`t`$, this is the usual (TTI) result. But here the result is nontrivial due to the presence of ageing effects. As discussed at the end of Sec. 3, we speculate that the relation (43) holds not only for the SGR model, but in fact for all systems which have only weak long term memory.
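Relation (43) is easy to illustrate on a toy ageing modulus. A minimal sketch (our own, not the SGR form): for $`G(t^{\prime },t)=\exp (-t^{\prime }/t)`$, whose relaxation time equals the sample age $`t`$, the right-hand side of (43) can be evaluated both numerically and in closed form; the result is a Maxwell-type spectrum $`i\omega t/(1+i\omega t)`$, a function of the product $`\omega t`$ alone, echoing the $`x<1`$ scaling in (41):

```python
import numpy as np

def G_step(tp, t):
    # toy ageing step-strain modulus: relaxation time equal to the sample age t
    return np.exp(-tp / t)

def G_star_from_step(omega, t, n=400001):
    # right-hand side of Eq. (43), truncated at an upper limit T >> t
    T = 200.0 * t
    tp, h = np.linspace(0.0, T, n, retstep=True)
    f = np.exp(-1j * omega * tp) * G_step(tp, t)
    return 1j * omega * ((f[0] + f[-1]) / 2 + f[1:-1].sum()) * h   # trapezoid rule

omega, t = 0.05, 100.0
print("numerical  :", G_star_from_step(omega, t))
print("closed form:", 1j * omega * t / (1 + 1j * omega * t))
```

The two outputs agree to the accuracy of the quadrature, illustrating that the Fourier relationship is perfectly compatible with an age-dependent modulus.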
#### 6.1.3 Startup of Steady Shear

Consider now a startup experiment in which a steady shear of rate $`\dot{\gamma }\ll 1`$ is commenced at time $`t_\mathrm{w}`$. So long as we restrict attention to times short enough that the total strain remains small ($`\dot{\gamma }(t-t_\mathrm{w})\ll 1`$) the system remains in a linear response regime.<sup>26</sup>This contrasts with the ultimate steady-state behaviour which, for $`x<2`$, is always nonlinear; the crossover to a nonlinear regime at late times is discussed in Sec. 6.2.2 below. Within the regime of linear response, any element’s lifetime is independent of strain and obeys $`\tau =\mathrm{exp}(E/x)`$. As described in Sec. 5.2 above, at a time $`t`$ after a deep quench, the distribution of lifetimes obeys $`P(\tau ,t)\propto \tau \rho (\tau )`$ for $`\tau \ll t`$ and $`P(\tau ,t)\propto t\rho (\tau )`$ for $`\tau \gg t`$. Since the local stress associated with a given trap is of order $`\dot{\gamma }\tau `$ for $`\tau \ll t-t_\mathrm{w}`$, and $`\dot{\gamma }(t-t_\mathrm{w})`$ for $`\tau \gg t-t_\mathrm{w}`$, we can construct an estimate of the macroscopic stress; for $`t-t_\mathrm{w}\ll t_\mathrm{w}`$,

$$\sigma (t)\approx \frac{\dot{\gamma }\left[\int _1^{t-t_\mathrm{w}}\tau ^2\rho (\tau )\,d\tau +(t-t_\mathrm{w})\int _{t-t_\mathrm{w}}^{t}\tau \rho (\tau )\,d\tau +(t-t_\mathrm{w})\,t\int _t^{\infty }\rho (\tau )\,d\tau \right]}{\int _1^{t}\tau \rho (\tau )\,d\tau +t\int _t^{\infty }\rho (\tau )\,d\tau }\approx \frac{\dot{\gamma }\left[x(t-t_\mathrm{w})^{2-x}+(x-2)(t-t_\mathrm{w})\,t^{1-x}+x(1-x)\right]}{(x-2)\left(t^{1-x}-x\right)}$$ (44)

This gives, for long times and in the linear response regime, $`\sigma (t)\propto \dot{\gamma }(t-t_\mathrm{w})`$ for $`x<1`$ (which is purely elastic behaviour), $`\sigma (t)\propto \dot{\gamma }(t-t_\mathrm{w})^{2-x}`$ for $`1<x<2`$ (which is an anomalous power law), and $`\sigma (t)\propto \dot{\gamma }`$ for $`x>2`$; repeating the same calculation with $`t-t_\mathrm{w}\gg t_\mathrm{w}`$ gives the same asymptotic scaling in each case. An asymptotic analysis of the constitutive equations confirms these scalings, with the prefactors as summarized in table 2. Because the results depend only on $`t-t_\mathrm{w}`$, any explicit dependence on $`t_\mathrm{w}`$ (ageing, or anomalously slow transients) must reside in subdominant corrections to these leading asymptotes. Accordingly, linear shear startup is not a good experimental test of such effects (but see Sec. 6.2.2 below). The power law anomaly for $`1<x<2`$ can be understood by examining which traps make dominant contributions to $`\sigma (t)=\int s(\tau ,t)\,d\tau `$. (Recall that $`s(\tau ,t)\,d\tau `$ is the stress contribution at time $`t`$ from elements of lifetime $`\tau `$; see Sec. 4.1.) For $`x>2`$, $`s(\tau ,t)`$ is weighted strongly toward traps of lifetime $`O(1)`$; hence $`\sigma (t)`$ tends to a finite limit (of order $`\dot{\gamma }`$) as $`t\to \infty `$, and the viscosity of the system is finite. For $`x<2`$, on the other hand, most of the weight in the $`s(\tau ,t)`$ distribution involves lifetimes of order $`t`$. As time passes, stress is carried by deeper and deeper traps, and (in the absence of flow-induced yielding) the mean stress diverges as $`t\to \infty `$.
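The two expressions in (44) can be cross-checked against each other by direct quadrature. The short Python sketch below is ours; it assumes the explicit deep-quench prior lifetime density $`\rho (\tau )=x\tau ^{-1-x}`$ for $`\tau \ge 1`$, a form consistent with the Laplace transform (56) used in App. A:

```python
import numpy as np
from scipy.integrate import quad

x, tw, t, gdot = 1.5, 1000.0, 1010.0, 1.0   # chosen so that t - tw << tw
T = t - tw
rho = lambda tau: x * tau**(-1.0 - x)

num = (quad(lambda u: u**2 * rho(u), 1, T)[0]
       + T * quad(lambda u: u * rho(u), T, t)[0]
       + T * t * quad(rho, t, np.inf)[0])
den = quad(lambda u: u * rho(u), 1, t)[0] + t * quad(rho, t, np.inf)[0]
print(gdot * num / den)   # first expression in (44)

closed = gdot * (x * T**(2 - x) + (x - 2) * T * t**(1 - x) + x * (1 - x)) \
         / ((x - 2) * (t**(1 - x) - x))
print(closed)             # second expression in (44); the two should agree
```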
In fact, as discussed in Sec. 5.3 above, just as the Boltzmann distribution for the relaxation times $`P_{\mathrm{eq}}(\tau )=P(\tau ,\infty )\propto \tau \rho (\tau )`$ is non-normalisable for $`x\le 1`$ (giving glassiness and ageing), so, in the absence of strain-induced yielding, is the ultimate distribution $`s(\tau ,\infty )\propto \tau ^2\rho (\tau )`$ of stresses residing in traps of lifetime $`\tau `$, whenever $`x<2`$. The zero shear viscosity $`\eta `$ is therefore infinite throughout this regime, as noted previously.

### 6.2 Nonlinear Response

We now turn to the nonlinear behaviour of the SGR model under imposed strain, starting with the step strain case.

#### 6.2.1 Step Strain

The nonlinear step strain response function was defined in (2). It is found for the SGR model from (33):

$$G(t-t_\mathrm{w},t_\mathrm{w};\gamma _0)=G_0(Z(t,0))+\int _0^{t_\mathrm{w}}Y(t^{\prime})\,G_\rho (Z(t,t^{\prime}))\,dt^{\prime}$$ (45)

where, using (26):

$$Z(t,t^{\prime})=(t-t_\mathrm{w})\,\mathrm{exp}\left(\gamma _0^2/2x\right)+(t_\mathrm{w}-t^{\prime})$$ (46)

On the other hand, in the linear regime we have:

$$G(t-t_\mathrm{w},t_\mathrm{w},\gamma _0\to 0)\equiv G(t-t_\mathrm{w},t_\mathrm{w})=G_0[(t-t_\mathrm{w})+(t_\mathrm{w}-0)]+\int _0^{t_\mathrm{w}}Y(t^{\prime})\,G_\rho \left[(t-t_\mathrm{w})+(t_\mathrm{w}-t^{\prime})\right]\,dt^{\prime}$$ (47)

Direct comparison of (45) and (47) reveals that:

$$G(t-t_\mathrm{w},t_\mathrm{w};\gamma _0)=G\left((t-t_\mathrm{w})\,\mathrm{exp}\left(\gamma _0^2/2x\right),t_\mathrm{w}\right)$$ (48)

This result generalizes that of Sollich (1998) for the non-ageing case ($`x>1`$). It can be understood as follows. Within the SGR model, the instantaneous response to a step strain at $`t_\mathrm{w}`$ is always elastic (that is, $`G(0,t_\mathrm{w};\gamma _0)=1`$); the fraction of stress remaining at time $`t>t_\mathrm{w}`$ is the fraction of elements which have survived from $`t_\mathrm{w}`$ to $`t`$ without yielding (see Sec. 6.1.1 above). The stress decay is therefore determined entirely by the distribution of relaxation times in the system just after the strain is applied at time $`t_\mathrm{w}`$. The effect of a finite strain is solely to modify the distribution of barrier heights, and hence to modify this distribution of relaxation times $`\tau `$; in fact (within the model) nonlinear strain reduces the yield time of every element by an identical factor of $`\mathrm{exp}(\gamma _0^2/2x)`$ Sollich (1998). Thus the relaxation after a nonlinear step strain at $`t_\mathrm{w}`$ is found from the linear case by rescaling the time interval $`t-t_\mathrm{w}`$ by this same factor. Accordingly, the asymptotic results given for $`G(t-t_\mathrm{w},t_\mathrm{w})`$ in table 1 can be converted to those for the nonlinear regime by replacing the time interval $`t-t_\mathrm{w}`$ by a strain-enhanced value $`(t-t_\mathrm{w})\,\mathrm{exp}(\gamma _0^2/2x)`$, wherever it appears there.
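Relation (48) lends itself to a one-line numerical recipe: a tabulated linear modulus can be reused at any strain amplitude simply by rescaling its time argument. A hedged sketch (the placeholder decay `G_lin` is ours, standing in for an actual tabulated or table-1 form):

```python
import numpy as np

x, gamma0 = 0.7, 1.0
G_lin = lambda dt: (1.0 + dt / 100.0)**(-x)      # placeholder linear G(dt, t_w)

# Nonlinear step-strain response via the time rescaling of (48)
G_nonlin = lambda dt: G_lin(dt * np.exp(gamma0**2 / (2 * x)))
print(G_lin(100.0), G_nonlin(100.0))             # nonlinear decay is faster
```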
#### 6.2.2 Startup of Steady Shear

In Sec. 6.1.3 we discussed the response to startup of steady shear (with $`\dot{\gamma }\ll 1`$) at time $`t_\mathrm{w}`$; we assumed there that a linear response was maintained. Let us now consider the effect of strain-induced yield events, which cause nonlinearity. Consider first what happens for $`x>2`$ (where the SGR model predicts Newtonian fluid behaviour for $`\dot{\gamma }\ll 1`$). Here the main stress contribution is from elements which, were they unstrained, would have lifetime $`\tau (E)=\mathrm{exp}(E/x)`$ of order unity. So, if the strain rate obeys $`\dot{\gamma }\ll 1`$, these elements will acquire only negligible stress before they yield spontaneously. Hence their lifetimes are not affected by strain, and the stress response remains linear at all times, including the steady state limit: $`\sigma (t\to \infty )=\eta \dot{\gamma }`$. In the following, we focus on the case $`x<2`$, where nonlinearities do appear. The dominant stress contributions in this noise temperature regime are from deep traps, i.e., elements with lifetimes of order $`t`$. Linearity applies only if such elements are unlikely to undergo strain-induced yielding before they yield spontaneously, after a time of order $`t`$. Such elements carry strains of order $`\dot{\gamma }t`$, which enhances their yield rate by a factor $`\mathrm{exp}[(\dot{\gamma }t)^2/2x]`$; we require this enhancement to be negligible, which holds only so long as $`\dot{\gamma }t\ll 1`$. Hence the predictions of the linear theory of Sec. 6.1.3 can be maintained to arbitrarily long times only by taking the limit $`\dot{\gamma }\to 0`$ before one takes the steady state limit $`t\to \infty `$. This means that the width of the linear response regime in steady flow is vanishingly small for $`x<2`$, as previously discussed. As mentioned in Sec. 6.1.3, throughout the linear period the startup curve shows no strong ageing or transient effects, even though the stress is evolving into deeper traps. At finite $`\dot{\gamma }`$, the linear period ends at $`t\approx 1/\dot{\gamma }`$ (within logarithmic terms, discussed below); at later times, the main stress-bearing elements will, during their lifetimes, become strongly strained. Indeed, at strain rate $`\dot{\gamma }`$, an element with yield energy $`E`$ will be strained to the top of its yield barrier in a time $`t_{\mathrm{int}}\sim E^{1/2}/\dot{\gamma }\sim (\mathrm{log}\tau )^{1/2}/\dot{\gamma }`$. The tendency of the stress distribution $`s(\tau ,t)`$ (and also, for any $`x<1`$, the lifetime distribution $`P(\tau ,t)`$) to evolve toward deeper and deeper traps is thereby interrupted: the lifetime of a deep trap is converted from $`\tau `$ to a much smaller value, of order $`(\mathrm{log}\tau )^{1/2}/\dot{\gamma }`$ Sollich et al. (1997); Sollich (1998). This truncation of the lifetime distribution is enough to ensure that these distributions are never dominated by the deep traps, and a steady state is recovered; accordingly, there are no ageing effects at late enough times either. Note, however, that the stress at the end of the linear regime can be higher than the steady state value, leading to an overshoot in the startup curve; see Sollich (1998). This overshoot region, unlike the two asymptotes, shows a significant dependence on the system age $`t_\mathrm{w}`$, as shown in Fig. 9. The physics of this is clear: the extent of the linear regime progressively gets larger as $`t_\mathrm{w}`$ is increased, because the system has aged into deeper traps (and because the SGR model assumes that within each trap the relation between stress and strain is linear). Thus the strain at which strong yielding sets in increases (roughly logarithmically) with $`t_\mathrm{w}`$; the height of the overshoot is accordingly increased before dropping onto the same, $`t_\mathrm{w}`$-independent, steady-shear plateau.

## 7 Rheological Ageing: Imposed Stress

We now analyse the SGR model’s predictions for various stress-controlled rheological experiments. (We continue to assume the sample to have been prepared at time $`t=0`$ by the idealized “deep quench” procedure defined in Sec. 5.1.)
As previously remarked, the structure of the constitutive equations makes the analysis more difficult for imposed stress than for imposed strain. The following discussion is therefore largely based on our numerical results, with asymptotic analysis of a few limiting cases. Our numerical method is outlined in App. C.2.

### 7.1 Linear Response

#### 7.1.1 Step Stress

The SGR model predicts that upon the application of a step stress there will be an instantaneously elastic response. Elements then progressively yield and reset their local stresses to zero; thus we must apply progressively more strain to maintain the macroscopic stress at a constant value. In this way strain is expected to increase with time (but at a rate that could tend to zero at long times). Potentially, therefore, individual elements can acquire large local strains and, just as in the shear startup case, linearity of the response need not be maintained at late times. As we did for shear startup, we therefore first proceed by assuming that the response is linear; we find the corresponding $`\gamma (t)`$ and then (in Sec. 7.2 below) consider a posteriori up to what time $`t`$ the linear results remain valid. In the linear regime the step stress response is described by the creep compliance $`J(t-t_\mathrm{w},t_\mathrm{w})`$ which was defined for non-TTI systems in Sec. 2.5. We computed this quantity numerically from the linearized form of the constitutive equation (33) for the SGR model, which for step stress may be written

$$1=J(t-t_\mathrm{w},t_\mathrm{w})-\int _{t_\mathrm{w}}^{t}J(t^{\prime}-t_\mathrm{w},t_\mathrm{w})\,Y(t^{\prime})\,G_\rho (t-t^{\prime})\,dt^{\prime}$$ (49)

In analysing our numerical results we first identify, as usual, regimes of short and long time interval between stress onset and measurement, $`t-t_\mathrm{w}\ll t_\mathrm{w}`$ and $`t-t_\mathrm{w}\gg t_\mathrm{w}`$ respectively.<sup>27</sup>As before, we apply the “macroscopic time” conditions $`t-t_\mathrm{w}\gg 1`$ and $`t_\mathrm{w}\gg 1`$. In these two regimes we find the time dependences summarized in table 3. For the long time interval regime ($`t-t_\mathrm{w}\gg t_\mathrm{w}`$), the results were in fact obtained as follows. Curves for $`J(t-t_\mathrm{w},t_\mathrm{w})`$ were first generated numerically; the observed scalings (for example, $`J\propto (t-t_\mathrm{w})^{x-1}`$ for $`1<x<2`$) were then taken as ansätze for analytic substitution into the constitutive equation (49). In each case this allowed us to confirm the given functional form, and to compute exactly the $`x`$-dependent prefactors shown. These prefactors were cross-checked by comparison with the numerical results; no discrepancies were found within available accuracy. To obtain results for short time intervals, we proceeded by assuming that the resulting compliance $`J(t-t_\mathrm{w},t_\mathrm{w})`$ is the same as if we first let $`t_\mathrm{w}\to \infty `$ (the dominant traps are in Boltzmann equilibrium; see Fig. 3a); this limits<sup>28</sup>For $`x<1`$, we find instead $`J=1+\text{const}\times \left[(t-t_\mathrm{w})/t_\mathrm{w}\right]^{1-x}`$ at very early times; but this breaks down as soon as the second term becomes comparable to the leading (elastic) result. the analysis to $`x>1`$. The resulting prediction of $`J(t-t_\mathrm{w},t_\mathrm{w}\to \infty )`$ was found analytically from $`G(t-t_\mathrm{w},t_\mathrm{w}\to \infty )`$ and the reciprocal relations between the corresponding Fourier transforms (see Sec. 2.6 above); these were again checked numerically.
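Equation (49) is again a Volterra problem, this time for $`J`$ with the hopping rate $`Y(t)`$ as input. A minimal trapezoidal sketch in Python is given below; it is our own illustration rather than the scheme actually used (App. C.2 describes that), and it substitutes the leading $`x>1`$ asymptote $`Y\approx 1/b_0=(x-1)/x`$ from App. A.2 for the true hopping rate, with $`G_\rho `$ evaluated by quadrature:

```python
import numpy as np
from scipy.integrate import quad

x, tw, N, dt = 1.5, 10.0, 200, 0.5

def G_rho(u):
    if u == 0.0:
        return 1.0
    return quad(lambda tau: x * tau**(-1 - x) * np.exp(-u / tau), 1, np.inf)[0]

Y = lambda t: (x - 1.0) / x        # stand-in: leading term of (60)

ts = tw + dt * np.arange(N + 1)
J = np.empty(N + 1); J[0] = 1.0    # instantaneous elastic response
for n in range(1, N + 1):
    w = np.full(n + 1, dt); w[0] = w[-1] = dt / 2      # trapezoid weights
    s = sum(w[k] * J[k] * Y(ts[k]) * G_rho(ts[n] - ts[k]) for k in range(n))
    J[n] = (1.0 + s) / (1.0 - w[n] * Y(ts[n]))         # uses G_rho(0) = 1
print(J[::40])     # should grow roughly as (t - tw)^(x-1) at long times
```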
Further insight into the results of table 3 can be gained as follows. In step stress, we need to keep applying larger and larger strains because elements progressively yield and reset their local stresses to zero. To maintain constant stress, the rate at which stress increases due to straining, which in our units is just the strain rate $`\dot{\gamma }`$, must match the rate at which stress is lost due to local yielding events. The latter defines a “stress-weighted hopping rate” $`Y_s=\int \tau ^{-1}s(\tau ,t)\,d\tau `$. For $`x>2`$, $`Y_s`$ remains a constant of order $`\sigma _0`$; stress remains in traps of lifetime $`\tau =O(1)`$ and the creep response is purely viscous. For $`x<2`$, however, $`Y_s`$ decays as a power law<sup>29</sup>In fact, $`Y_s\sim \dot{\gamma }\sim (t-t_\mathrm{w})^y`$ where $`y=x-2`$ for $`1<x<2`$ and $`y=-1`$ for $`x<1`$. of $`(t-t_\mathrm{w})`$; the stress distribution $`s(\tau ,t)`$ is dominated at time $`t`$ by traps with lifetimes $`\tau `$ of order $`t-t_\mathrm{w}`$, the time interval since the stress application. For $`1<x<2`$, the scenario given above for the time-dependence of $`Y_s`$ is closely analogous to that given in Sec. 5.2 above for the hopping rate $`Y=\int \tau ^{-1}P(\tau ,t)\,d\tau `$ in systems with $`x<1`$. Indeed, the evolution of $`Y_s`$ following a step stress, at noise temperature $`x`$, is closely related<sup>30</sup>More generally one can show for the SGR model that, for an equilibrium system whose noise temperature is $`x>1`$, the evolution of the stress distribution $`s(\tau ,t)`$ following application of a step stress at $`t=0`$ is, at long times, equivalent to that of the probability distribution $`P(\tau ,t)`$ in a system deep-quenched to a noise temperature $`x-1`$ at $`t=0`$. This result is connected with the discussion made in Sec. 5.3 above, of the variation with $`x`$ of the dynamics of successive moments of the lifetime distribution: at noise temperature $`x+n`$, the dynamics of the $`n`$th moment is like that of the zeroth moment at noise temperature $`x`$. to that of $`Y`$, following a quench, at noise temperature $`x-1`$. The ageing behaviour of the linear creep compliance $`J(t-t_\mathrm{w},t_\mathrm{w})`$ shows significant differences from the step strain modulus $`G(t-t_\mathrm{w},t_\mathrm{w})`$ discussed in Sec. 6.1.1 above<sup>31</sup>Since it refers to a shear measurement, one would not expect our result to resemble the empirical (stretched exponential) form measured by Struik (1978) for a wide range of materials in tensile creep; nonetheless, it shows upward curvature on a log-log plot before approaching the eventual logarithmic form (with downward curvature). The same applies in nonlinear creep; see Fig. 14 below. In the glass phase ($`x<1`$), the strain response to step stress indeed depends on age: it is a function of $`(t-t_\mathrm{w})/t_\mathrm{w}`$ as expected (see Fig. 10). However, the dependence (for long time intervals) is only logarithmic; $`J(t-t_\mathrm{w},t_\mathrm{w})\propto \mathrm{ln}\left((t-t_\mathrm{w})/t_\mathrm{w}\right)=\mathrm{ln}(t-t_\mathrm{w})-\mathrm{ln}t_\mathrm{w}`$ (see table 3), which means that in the long time interval limit ($`t-t_\mathrm{w}\gg t_\mathrm{w}`$) the explicit waiting time dependence ($`-\mathrm{ln}t_\mathrm{w}`$) represents formally a “small” correction to the leading behaviour $`\mathrm{ln}(t-t_\mathrm{w})`$.
This relatively slight $`t_\mathrm{w}`$-dependence in creep measurements is intuitively reasonable: the strain response at time $`t`$ to step stress is not determined purely by the relaxation spectrum at $`t_\mathrm{w}`$ (as was the case in step strain, table 1), but by the dynamics of the system over the entire interval between $`t_\mathrm{w}`$ and $`t`$. This decreases the sensitivity to the time $`t_\mathrm{w}`$ at which the perturbation was switched on. Similar remarks hold above the glass point ($`1<x<2`$, see Fig. 11): in step strain, we found for $`t-t_\mathrm{w}\gg t_\mathrm{w}`$ a slow transient behaviour which depended to leading order upon $`t_\mathrm{w}`$ (table 1). For step stress, however, the corresponding $`t_\mathrm{w}`$ dependence is demoted to lower order, and the late-time response is dominated by TTI terms.<sup>32</sup>We restate here why we call these effects for $`x>1`$ transient behaviour rather than ageing. As explained after eq. (21), a consistent definition of long term memory and ageing for the step stress response function $`J(t-t_\mathrm{w},t_\mathrm{w})`$ requires a form of “regularization” by considering the material in question in parallel with a spring of infinitesimal modulus $`g`$. This effectively puts an upper limit of $`J_{\mathrm{max}}=1/g`$ on the observable values of $`J(t-t_\mathrm{w},t_\mathrm{w})`$. Taking the limit $`t_\mathrm{w}\to \infty `$ for $`x>1`$ then results in a fully TTI step stress response, whatever the value of $`J_{\mathrm{max}}`$. On the other hand, for $`x<1`$, the (albeit weak, logarithmic) $`t_\mathrm{w}`$-dependence of the response remains visible even for finite values of $`J<J_{\mathrm{max}}`$.

#### 7.1.2 Oscillatory Stress

For the SGR model it was noted in Sec. 6.1.2 that ($`i`$) in the oscillatory strain response $`G^*(\omega ,t,t_\mathrm{s})`$, the $`t_\mathrm{s}`$ dependence is negligible for low frequencies ($`\omega \ll 1`$) whenever $`\omega (t-t_\mathrm{s})\gg 1`$ and $`\omega t_\mathrm{s}\gg 1`$; ($`ii`$) these conditions are satisfied in most conventional rheometrical measurements of the viscoelastic spectrum, where an oscillatory strain is maintained for many cycles; and ($`iii`$), perhaps surprisingly, these facts are true even in the glass phase, $`x\le 1`$, of the SGR model. We also noted that, because the response to oscillatory strain is dominated by memory of the few most recent cycles (over which the system has barely aged), $`G^*(\omega ,t)`$ is the Fourier transform (with respect to the time interval $`\mathrm{\Delta }t`$) of the step strain response function $`G(\mathrm{\Delta }t,t)`$ that would be measured if a step strain were applied immediately after the oscillatory measurement had been done (43). We have confirmed numerically that similar remarks apply to the oscillatory stress response function $`J^*(\omega ,t,t_\mathrm{s})`$.<sup>33</sup>Although unsurprising, this does require explicit confirmation since, for example, the transient effects from switching on the perturbation could be different in the two cases. This was defined in Sec. 2.7 as the strain response, measured at $`t`$, to an oscillatory stress initiated at time $`t_\mathrm{s}`$. Memory of the startup time $`t_\mathrm{s}`$ is indeed small in $`J^*(\omega ,t,t_\mathrm{s})`$ so long as $`\omega (t-t_\mathrm{s})\gg 1`$, $`\omega t_\mathrm{s}\gg 1`$ (and $`\omega \ll 1`$).
It appears that, just as in the case of a strain-controlled experiment, the strain response to oscillatory stress is dominated by memory of the most recent cycles, over which the system has barely aged. We may therefore suppress the $`t_\mathrm{s}`$ parameter, defining a compliance spectrum at time $`t`$ by $`J^*(\omega ,t)`$. Furthermore, $`J^*(\omega ,t)`$ is found numerically to be the reciprocal of $`G^*(\omega ,t)`$,

$$J^*(\omega ,t)\,G^*(\omega ,t)=1$$ (50)

just as it is (without the $`t`$ argument) in normal TTI systems. The numerical confirmation of this result is presented in Fig. 12. We emphasize that this result, like the previous one, has been confirmed here specifically for the SGR model; but it may hold more widely for systems with weak long term memory (see Sec. 3).

### 7.2 Nonlinear Response

#### 7.2.1 Step Stress

In Sec. 7.1.1 we argued that a step stress, $`\sigma (t)=\sigma _0\mathrm{\Theta }(t-t_\mathrm{w})`$, of size $`\sigma _0\ll 1`$, induces a strain response $`\gamma (t)`$ which increases over time, but remains linear in $`\sigma _0`$ for at least as long as the linearized constitutive equations predict $`\gamma (t)\ll 1`$. This is because $`\gamma (t)`$ provides an upper bound on the local strain of each element. Although sufficient to ensure linearity, this is not always necessary; we require only that the characteristic strain of those elements which dominate the stress is small. For $`x>2`$ (the Newtonian regime) the dominant elements have lifetimes $`O(1)`$ and so the response is linear to indefinite times so long as $`\sigma _0\ll 1`$ (ensuring $`\dot{\gamma }(t)\ll 1`$ for all times $`t`$). But, whenever $`x<2`$, the linear analysis of Sec. 7.1.1 indicates the dominant elements have lifetimes of order $`t-t_\mathrm{w}`$; so a self-consistently linear response is maintained only provided that $`\dot{\gamma }(t)(t-t_\mathrm{w})\ll 1`$, just as in startup of steady shear (see Sec. 6.2.2; here we make the additional assumption that $`\dot{\gamma }`$ changes only negligibly between $`t_\mathrm{w}`$ and $`t`$). Using the forms for $`J(t-t_\mathrm{w},t_\mathrm{w})`$ as summarized in table 3, we then find that for $`1<x<2`$ the strain response to step stress remains linear only for as long as $`t-t_\mathrm{w}\ll (1/\sigma _0)^{1/(x-1)}`$. Beyond this time we expect strain-induced yielding to become important. To confirm the predicted linearity at short times, and to extract the long time non-linear behaviour, we numerically solved the non-linear constitutive equations (24, 25) by an iterative method (see App. C.2); this was done first for $`1<x<2`$ (Fig. 13). The results show a linear regime of the expected temporal extent, followed by a crossover into a non-linear steady-state flow regime, in which $`\gamma (t)\sim \sigma _0^{1/(x-1)}t`$. The latter is in agreement with the flow curve (see Sec. 4.2.2). The same numerical procedure was then used for the glass phase, $`x<1`$, for which the flow curve shows a finite yield stress, $`\sigma _\mathrm{y}(x)`$. As expected, the numerical results for step stress of very small amplitude $`\sigma _0\ll \sigma _\mathrm{y}`$ show no crossover to a steady flow regime at late times. Instead, the system continues to creep logarithmically, according to the linear creep result (table 3):

$$\gamma (t)=\sigma _0\,J(t-t_\mathrm{w},t_\mathrm{w})=\frac{\sigma _0}{\psi (1)-\psi (x)}\,\mathrm{log}\left(\frac{t-t_\mathrm{w}}{t_\mathrm{w}}\right)$$ (51)

The resulting value of $`\dot{\gamma }(t)(t-t_\mathrm{w})`$ never becomes large; so this is self-consistent.
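For reference, the amplitude in (51) involves only the digamma function $`\psi `$, so the predicted linear creep curve is trivial to evaluate; a short sketch of ours:

```python
from numpy import log
from scipy.special import digamma

x, sigma0, tw = 0.3, 0.01, 100.0
gamma_of_t = lambda t: sigma0 * log((t - tw) / tw) / (digamma(1.0) - digamma(x))
print([gamma_of_t(t) for t in (200.0, 1e3, 1e5)])   # slow logarithmic growth
```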
Next we studied numerically the case where $`\sigma _0`$ was not small but remained less than the yield stress $`\sigma _\mathrm{y}`$. For stresses not too close to the yield stress, we found that the creep was still logarithmic to a good approximation, but now with a nonlinear dependence of its amplitude on stress: $`\gamma (t)\approx \sigma _0A(\sigma _0)\,J(t-t_\mathrm{w},t_\mathrm{w})`$. The prefactor $`A(\sigma _0)`$ increases rapidly as $`\sigma _0`$ approaches the yield stress $`\sigma _\mathrm{y}`$ from below. Very close to the yield stress, the creep ceases to be logarithmic; $`\gamma (t)`$ then grows more quickly, but with a strain rate that still decreases to zero at long times. On the basis of these observations, we suspect that for a given stress $`\sigma _0`$ the creep will be logarithmic for short times (where “short times” might mean the whole time window which is accessible numerically), but will gradually deviate from this for longer times. The deviation is expected to be noticeable sooner for stress values closer to yield. We attempted to verify this conjecture numerically, but were unable to access a large enough range of values of $`\mathrm{ln}\left((t-t_\mathrm{w})/t_\mathrm{w}\right)`$ to do so. Note that, for any $`\sigma _0<\sigma _\mathrm{y}`$, the system ages indefinitely, and there is no approach to a regime of steady flow. Finally, as expected from the flow curve, only for stress amplitudes exceeding the yield stress $`\sigma _\mathrm{y}`$ (which of course depends on $`x`$) did we see an eventual crossover from logarithmic creep to steady flow at long times; when that happened, we recovered numerically the flow-curve result, $`\gamma (t)\propto (\sigma _0-\sigma _\mathrm{y})^{1/(1-x)}(t-t_\mathrm{w})`$. Fig. 14 shows examples of our numerical results that illustrate the various features of nonlinear creep in the glass phase mentioned above.<sup>34</sup>Note that whereas for most other shear scenarios we chose to present glass phase results for a noise temperature $`x=0.7`$, we here chose $`x=0.3`$. The yield stress is larger at this value of $`x`$, giving us a larger window $`0<\sigma _0<\sigma _\mathrm{y}`$ over which we see ageing and creep uninterrupted by a crossover into flow.

## 8 Conclusion

In this paper we studied theoretically the role of ageing in the rheology of soft materials. We first provided, in Sec. 2, a general formulation of the linear and nonlinear rheological response functions suited to samples that show ageing, in which time translation invariance of material properties is lost. (Our analysis extends and, we hope, clarifies that of Struik (1978).) This was followed in Sec. 3 by a review of the concept of ageing, formally defined by the presence of long term memory, which can be either weak or strong. We suggested that for many rheological applications the main interest is in systems with weak long term memory: these have properties that are age-dependent, but not influenced by perturbations of finite duration that occurred in the distant past.
We conjectured that weak long term memory is sufficient to cause the age-dependent linear viscoelastic modulus to become independent of the start time $`t_\mathrm{s}`$ of the oscillatory shear ($`G^*(\omega ,t,t_\mathrm{s})\to G^*(\omega ,t)`$) while retaining a dependence on the system age $`t`$; for it to then obey the usual Fourier relation with the linear step strain response (likewise dependent on age $`t_\mathrm{w}`$); and for it to obey a reciprocal relation $`G^*(\omega ,t)\,J^*(\omega ,t)=1`$ with the time-varying compliance, similarly defined. Pending a general proof of these conjectures, all such relationships between age-dependent rheological quantities do however require empirical verification for each experimental system, or theoretical model, that one studies. Within this conceptual framework, we then explored rheological ageing effects in detail for the SGR model. After reviewing the basic rheological definition of the model in Sec. 4, we discussed in Sec. 5 its ageing properties from the point of view of the mean jump rate $`Y(t)`$, whose behaviour is radically different in the glass phase (noise temperature $`x<1`$) from that in the normal phase ($`x>1`$). The glass phase of the SGR model is characterized by “weak ergodicity breaking”, which means that the elastic elements that it describes evolve forever towards higher yield thresholds (deeper traps), causing a progression toward more elastic and less lossy behaviour. Within the glass phase, there is a yield stress $`\sigma _\mathrm{y}`$, and for applied stresses less than this, genuine ageing effects arise. These phenomena were explored in depth in Sec. 6 and Sec. 7 for the cases of imposed strain and imposed stress respectively. Ageing effects are distinguished from otherwise similar transient phenomena (arising, for example, when $`x>1`$) by the criterion that a significant part of the stress relaxation, following infinitesimal step strain, occurs on timescales that diverge with the age of the system at the time of strain application. This rheological definition appears appropriate for most soft materials and follows closely the definition of long-term memory in other areas of physics Cugliandolo and Kurchan (1995); Bouchaud et al. (1998); Cugliandolo and Kurchan (1993). In the glass phase of the SGR model, the nature of the ageing is relatively simple; for a step strain or stress applied at time $`t_\mathrm{w}`$, both the linear stress relaxation function $`G(t-t_\mathrm{w},t_\mathrm{w})`$ and the linear creep compliance $`J(t-t_\mathrm{w},t_\mathrm{w})`$ become functions of the scaled time interval $`(t-t_\mathrm{w})/t_\mathrm{w}`$ only. This scaling is a simple example of the ‘time waiting-time superposition’ principle postulated empirically by Struik (1978) (in the somewhat different context of glassy polymers). The time-dependent viscoelastic spectra $`G^{\prime}(\omega ,t)`$ and $`G^{\prime\prime}(\omega ,t)`$ have the characteristic ageing behaviour shown in Fig. 1: a loss modulus that rises as frequency is lowered, but falls with age $`t`$, in such a way that it always remains less than $`G^{\prime}(\omega ,t)`$ (which is almost constant by comparison). For $`x<1`$ such spectra collapse to a single curve (see Fig. 8) if $`\omega t`$, rather than $`\omega `$, is used as the independent variable. Note that in more complicated systems, Eq. (22) may be required instead, to describe ageing on various timescales that show different divergences with the sample age $`t_\mathrm{w}`$.
Even in simple materials, there may be an additional non-ageing contribution to the stress relaxation which the SGR model does not have; this will also interfere with the scaling collapse of both $`G(t-t_\mathrm{w},t_\mathrm{w})`$ and $`G^*(\omega ,t)`$. We found that, in its glass phase, the SGR model has weak long term memory, and we confirmed numerically that the conjectured relationships, Eqs. (13,14,15), among age-dependent linear rheological quantities indeed hold in this case. Significant ageing was also found for nonlinear rheological responses of the SGR model. For example, the nonlinear step-strain relaxation follows the same ageing scenario as the linear one, except that all relaxation rates are speeded up by a single strain-dependent factor (Eq. (48)). This form of nonlinearity is a characteristic simplification of the SGR model, and would break down if the elastic elements in the model were not perfectly Hookean between yield events. Another interesting case was startup of steady shear; here there is no significant ageing in either the initial (elastic) or the ultimate (steady flow) regime; yet, as shown in Fig. 9, the intermediate region shows an overshoot that is strongly dependent on sample age. For an old sample, the elastic elements have higher yield thresholds. The linear elastic regime therefore extends further before the imposed strain finally causes yielding, followed by a larger drop onto the same steady-shear plateau. The plateau itself is age-independent: the presence of a finite steady flow rate, but not a finite stress, is always enough to interrupt the ageing process within the SGR model. Finally, we found that the nonlinear creep compliance (Fig. 14) shows interesting dependence on both the stress level and the age of the sample; for small stresses we found logarithmic creep (for all $`x<1`$), crossing over, as the yield stress is approached, to a more rapid creep that nonetheless appears to have zero strain rate in the long time limit. Nonlinear creep gives challenging computational problems in the SGR model, which is otherwise simple enough, as we have shown, that almost all its properties can be calculated either by direct asymptotic analysis or using (relatively) standard numerics. Remaining drawbacks include (from a phenomenological viewpoint) the lack of tensorial elasticity in the model and (from a fundamental one) uncertainty as to the proper physical interpretation, if one indeed exists, of the noise temperature $`x`$ Sollich et al. (1997); Sollich (1998). Though obviously oversimplified, the SGR model as explored in this paper may provide a valuable paradigm for the experimental and theoretical study of rheological ageing phenomena in soft solids. More generally, the conceptual framework we have presented, which closely follows that developed to study ageing in non-flowing systems such as spin-glasses, should facilitate a quantitative analysis of rheological ageing phenomena across a wide range of soft materials.

## Appendix A Calculation of Linear Response Properties

### A.1 Initial Condition

In discussing the SGR model’s non-equilibrium behaviour (Secs. 6 and 7) we considered for definiteness a system prepared by a quench from an infinite noise temperature (see Sec. 5.1), i.e., with an initial distribution $`P_0(E)=\rho (E)`$ of yield energies or trap depths. For our predictions to be easily compared to experimental data, however, they must be largely independent of the details of sample preparation.
To test for such independence, we consider the extent to which our results would change if the pre-quench temperature, which we denote by $`x_0`$, were finite. This corresponds to an initial trap depth distribution

$$P_0(E)\propto \mathrm{exp}(E/x_0)\,\rho (E)$$ (52)

In this appendix, we restrict ourselves to the linear response regime, where the effects of finite $`x_0`$ (if any) are expected to be most pronounced; nonlinearity tends to eliminate memory effects. The same is true for high temperatures, and correspondingly we will find that the influence of $`x_0`$ on our results is confined mainly to final (post-quench) temperatures $`x`$ within the glass phase ($`x<1`$).

### A.2 Yield Rate

The yield or hopping rate is the basic quantity from which other linear response properties can be derived; see eqs. (37,40,49). It can be calculated from the second constitutive equation (25)

$$1=G_0(t)+\int _0^tY(t^{\prime})\,G_\rho (t-t^{\prime})\,dt^{\prime}$$ (53)

where we have replaced $`Z(t,t^{\prime})`$ by $`t-t^{\prime}`$, as is appropriate in the linear response regime. The function $`G_0(t)`$ is defined in (27); for the initial condition (52) it is related to $`G_\rho `$ via

$$G_0(t)=G_\rho (t,y),\qquad y=x(1-1/x_0)$$

where we have now included explicitly the noise temperature argument ($`y`$) in the argument list of $`G_\rho `$. Substituting this into (53), and taking Laplace transforms with $`\lambda `$ as our reciprocal time variable, we get:

$$\frac{1}{\lambda }=\overline{G}_\rho (\lambda ,y)+\overline{Y}(\lambda )\,\overline{G}_\rho (\lambda ,x)$$ (54)

and hence

$$\overline{Y}(\lambda )=\frac{\frac{1}{\lambda }-\overline{G}_\rho (\lambda ,y)}{\overline{G}_\rho (\lambda ,x)}$$ (55)

in which (taking Laplace transforms of (27))

$$\overline{G}_\rho (\lambda ,x)=x\int _1^{\infty }\frac{\tau ^{-x-1}}{\lambda +\tau ^{-1}}\,d\tau =x\int _1^{\infty }\frac{\tau ^{-x}}{1+\lambda \tau }\,d\tau $$ (56)

In its present form (55) cannot be inverted analytically. We will focus on the long time regime, however, where progress can be made by using an alternative expression for $`\overline{G}_\rho `$. From (56), $`\overline{G}_\rho (\lambda ,x)`$ has poles at $`\lambda =-\tau ^{-1}`$. Because of the integration over all $`\tau =1\ldots \infty `$, these poles combine into a branch cut singularity on the (negative) real axis between $`\lambda =-1`$ and $`\lambda =0`$. We will now derive an expression for $`\overline{G}_\rho `$ that is valid near this branch cut. This expression does introduce spurious singularities on the negative real axis for $`\lambda <-1`$. But after inversion of the Laplace transform these only give contributions to $`G_\rho (t)`$ decaying at least as fast as $`\mathrm{exp}(-t)`$; they can therefore be ignored in the long-time limit. We first write (56) as

$$\frac{1}{x}\overline{G}_\rho (\lambda ,x)=\int _0^{\infty }\frac{\tau ^{-x}}{1+\lambda \tau }\,d\tau -\int _0^1\frac{\tau ^{-x}}{1+\lambda \tau }\,d\tau $$ (57)

After the rescaling $`\lambda \tau \to \tau `$, the first term becomes a representation of the Beta function.<sup>35</sup>The rescaling can be carried out only when $`\lambda `$ is real and positive. But by analytic continuation, the result (58) also holds for complex $`\lambda `$ outside the branch cut of $`\lambda ^{x-1}`$, i.e., everywhere except on the negative real axis. In the second term, because now $`\tau \le 1`$, we can expand the denominator into a series that is convergent for $`|\lambda |<1`$.
This gives the desired expression

$$\overline{G}_\rho (\lambda ,x)=a(x)\,\lambda ^{x-1}+\underset{n=0}{\overset{\infty }{\sum }}b_n(x)\,\lambda ^n$$ (58)

in which

$$a(x)=x\mathrm{\Gamma }(x)\mathrm{\Gamma }(1-x),\qquad b_n(x)=\frac{x(-1)^{n+1}}{n+1-x}$$ (59)

This is valid for $`|\lambda |<1`$ and therefore in particular near the branch cut $`\lambda =-1\ldots 0`$; in the representation (58), this branch cut is apparent in the fractional power of $`\lambda `$ in the first term. The above derivation applies a priori only for $`x<1`$, because otherwise the integrals in (57) diverge at the lower end. However, using the relation

$$\frac{1}{x+1}\overline{G}_\rho (\lambda ,x+1)=\frac{1}{x}-\frac{\lambda }{x}\overline{G}_\rho (\lambda ,x)$$

which follows directly from (56), it can easily be shown that (58) holds for all $`x`$. (For integer $`x`$, there are separate singularities in the first and second term of (58), but these just cancel each other.) We can now substitute (58,59) into (55) and expand the denominator to find a readily invertible expression for $`\overline{Y}(\lambda )`$. Clearly the manner in which we perform the expansion depends on whether $`x>1`$ or $`x<1`$. Abbreviating $`a(x)=a`$, $`a(y)=a^{\prime}`$, and $`b_n(x)=b_n`$, we have for $`x>1`$:

$$\overline{Y}(\lambda )=\frac{1}{\lambda }\left[\frac{1}{b_0}-\frac{a}{b_0^2}\lambda ^{x-1}-\frac{a^{\prime}}{b_0}\lambda ^y+O(\lambda ^{2(x-1)},\lambda ^{y+x-1},\lambda ,\mathrm{…})\right]$$

which, upon inversion of the Laplace transform, gives:

$$Y(t)=\frac{1}{b_0}-\frac{1}{\mathrm{\Gamma }(2-x)}\frac{a}{b_0^2}t^{1-x}-\frac{1}{\mathrm{\Gamma }(1-y)}\frac{a^{\prime}}{b_0}t^{-y}+O(t^{2(1-x)},t^{1-x-y},\mathrm{…})$$ (60)

the first term of which is the asymptotic expression for $`Y(t)`$ above the glass point, as in (35). For $`x<1`$ on the other hand, we have

$$\overline{Y}(\lambda )=\frac{1}{\lambda }\left[\frac{\lambda ^{1-x}}{a}-\frac{b_0\lambda ^{2(1-x)}}{a^2}-\frac{a^{\prime}}{a}\lambda ^{y+1-x}+O(\lambda ^{3(1-x)},\lambda ^{y+2(1-x)},\mathrm{…})\right]$$ (61)

which can be inverted to give

$$Y(t)=\frac{1}{\mathrm{\Gamma }(x)}\frac{t^{x-1}}{a}-\frac{1}{\mathrm{\Gamma }(1+2(x-1))}\frac{b_0t^{2(x-1)}}{a^2}-\frac{a^{\prime}}{a\mathrm{\Gamma }(x-y)}t^{x-1-y}+O(t^{3(x-1)},t^{2(x-1)-y})$$ (62)

the first term again being in agreement with (35). Finally, to obtain $`Y(t)`$ at the glass point $`x=1`$ we rewrite (61) as:

$$\overline{Y}(\lambda )=-\frac{1}{b_0\lambda }\left[\underset{n=1}{\overset{p}{\sum }}z^n(\lambda )+O(\lambda ^y,\lambda ,\lambda ^{(p+1)(1-x)},\mathrm{…})\right]$$

in which $`z(\lambda )=-b_0\lambda ^{1-x}/a`$ and $`p`$ is the largest integer which is less than $`1/(1-x)`$. Inversion of the Laplace transform gives

$$Y(t)=-\frac{1}{b_0}\underset{n=1}{\overset{p}{\sum }}\frac{z^n(t)}{\mathrm{\Gamma }(1+n(x-1))}+O(t^{-y},t^{(p+1)(x-1)},\mathrm{…})$$

in which $`z(t)=-b_0t^{x-1}/a`$. The Gamma function can now be expanded around $`\mathrm{\Delta }=1-x=0`$; the sum over $`p`$ can be performed explicitly for each term in this expansion. Retaining only the dominant terms for small $`\mathrm{\Delta }`$, and also taking the limit $`\mathrm{\Delta }\to 0`$ of the quantities $`z(t)`$, $`a`$ and $`b_0`$, one finds eventually

$$\underset{x\to 1}{lim}Y(t)=\frac{1}{\mathrm{ln}(t)}+\frac{\mathrm{\Gamma }^{\prime}(1)}{\mathrm{ln}^2(t)}+O\left(\frac{1}{\mathrm{ln}^3(t)}\right)$$

as stated in (35).
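The expansion (58,59) is easy to validate against direct quadrature of (56), at least for real $`0<\lambda <1`$ and $`x<1`$; a short check of ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

x, lam = 0.7, 0.05

direct = x * quad(lambda tau: tau**(-x) / (1 + lam * tau), 1, np.inf)[0]

a = x * gamma(x) * gamma(1 - x)
b = lambda n: x * (-1)**(n + 1) / (n + 1 - x)
series = a * lam**(x - 1) + sum(b(n) * lam**n for n in range(12))
print(direct, series)   # should agree closely for |lambda| < 1
```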
Consider now the effect of the pre-quench temperature $`x_0`$ on the above results for the asymptotic behaviour of the hopping rate $`Y(t)`$. We note first that all the leading terms are independent of $`y`$ and hence of $`x_0`$. For $`x>1`$, the largest $`y`$-dependent subleading term ($`t^{-y}`$) in (60) becomes more important for smaller pre-quench temperatures $`x_0`$. However, provided we restrict ourselves to the regime $`x_0>x`$ (i.e., to a non-equilibrium situation in which a quench is actually performed; $`x=x_0`$ corresponds to equilibrium conditions), we see that $`y>x-1`$ and that, even to subleading order, $`Y(t)`$ is independent of $`x_0`$. (We note furthermore that in the case of the deep quench defined in Sec. 5.1, $`y=x`$ and the term $`t^{-y}`$ is very small.) For $`x<1`$, in (62), the relative importance of the largest $`y`$-dependent term ($`t^{x-1-y}`$) again depends upon the relative values of the pre- and post-quench temperatures. For a high enough pre-quench temperature (specifically, provided $`y>1-x`$, i.e., provided $`x_0>x/(2x-1)`$) the leading and subleading terms of $`Y(t)`$ are independent of $`x_0`$. For any post-quench temperature $`x<1/2`$, the subleading term necessarily depends upon $`x_0`$, since the condition defined above for independence cannot be satisfied. (This is physically intuitively reasonable, since in general we expect a system at a lower temperature to remember its initial condition more strongly.)

### A.3 Step Strain and Oscillatory Strain Response

Once the yield rate $`Y(t)`$ is known, the linear stress response $`G(t-t_\mathrm{w},t_\mathrm{w})`$ to a step strain can be calculated from (37). To get its asymptotic behaviour for $`t-t_\mathrm{w}\gg 1`$, $`t_\mathrm{w}\gg 1`$, the two regimes in which the time interval $`t-t_\mathrm{w}`$ is much less and much greater than the age at the time of strain application $`t_\mathrm{w}`$ have to be considered separately. In the first regime ($`t-t_\mathrm{w}\ll t_\mathrm{w}`$), one can Taylor expand the hopping rate $`Y`$ around its value at time $`t`$. In the second regime, we rewrite (37) as

$$G(t-t_\mathrm{w},t_\mathrm{w})=G_0(t)+\int _0^{t_\mathrm{w}}Y(t^{\prime})\,G_\rho (t-t^{\prime})\,dt^{\prime}$$ (63)

The first term on the right-hand side can then be shown to be subdominant (at least for $`x_0\to \infty `$; see Sec. A.4 below), and the second can be treated by expanding $`G_\rho (t-t^{\prime})`$ around $`t^{\prime}=0`$. To leading order, one then finds the results in table 1. The asymptotic behaviour of the stress response to oscillatory strain, $`G^*(\omega ,t,t_\mathrm{s})`$, is obtained in a similar manner from (40).

### A.4 Rheological Irrelevance of Initial Condition

In App. A.2 we discussed the influence of the initial state of the sample, as parameterized by the “pre-quench” temperature $`x_0`$, on the yield rate $`Y(t)`$. Now we consider the effects of $`x_0`$ on the various (linear) rheological observables, concentrating on the regime $`x<1`$ where such effects are expected to be most pronounced. We begin with the response to a step strain, $`G(t-t_\mathrm{w},t_\mathrm{w})`$. In the short time regime $`t-t_\mathrm{w}\ll t_\mathrm{w}`$, it follows directly from (37) that $`x_0`$ affects only subdominant terms (through its effect on $`Y(t)`$). In the long time regime $`t-t_\mathrm{w}\gg t_\mathrm{w}`$, we see similarly from (63) that any effect on the leading behaviour can only be through the first term on the right-hand side, $`G_0(t)=G_\rho (t,y)\sim t^{-y}`$. Comparing this with the second term, which from table 1 is $`\sim (t_\mathrm{w}/t)^x`$ (note that $`t\approx t-t_\mathrm{w}`$ in the long time regime), and using $`y=x(1-1/x_0)`$, one finds that the effect of $`x_0`$ is negligible up to $`t\sim t_\mathrm{w}^{x_0}`$.
For larger $`t`$, $`G(t-t_\mathrm{w},t_\mathrm{w})\approx G_0(t)\approx G_0(t-t_\mathrm{w})`$ and the response is TTI to leading order. An intuitive explanation for this behaviour can be found by analysing the evolution of the relaxation time distribution $`P(\tau ,t_\mathrm{w})`$ with $`t_\mathrm{w}`$ Fielding (2000). It can be shown that the initial condition $`P(\tau ,0)`$ is remembered in the long time tail of this distribution, $`\tau \gtrsim t_\mathrm{w}^{x_0}`$. For times $`t\gtrsim t_\mathrm{w}^{x_0}`$, these long relaxation times dominate the behaviour of $`G(t-t_\mathrm{w},t_\mathrm{w})`$ and cause the observed $`x_0`$-dependence. For the step stress response $`J(t-t_\mathrm{w},t_\mathrm{w})`$, we found in Sec. 7.1.1 that memory effects are rather weaker than for the step strain response. This is because $`J`$ is sensitive to the average behaviour of the relaxation time distribution $`P(\tau ,t^{\prime})`$ over the time interval $`t^{\prime}=t_\mathrm{w}\ldots t`$, while $`G`$ depends on $`P(\tau ,t_\mathrm{w})`$ only. Correspondingly, we also find that $`J(t-t_\mathrm{w},t_\mathrm{w})`$ is affected only weakly by the initial preparation of the system and hence by $`x_0`$. All effects are in subdominant terms; for the long time behaviour in the glass phase, for example, one finds that the asymptotic behaviour $`J(t-t_\mathrm{w},t_\mathrm{w})\sim \mathrm{ln}((t-t_\mathrm{w})/t_\mathrm{w})`$ is only changed by an $`x_0`$-dependent constant offset Fielding (2000). Finally, consider the oscillatory response functions $`G^*(\omega ,t,t_\mathrm{s})`$ and $`J^*(\omega ,t,t_\mathrm{s})`$. Any linear oscillatory perturbation effectively probes only those traps which have a relaxation time $`\tau <1/\omega `$. Provided such traps have attained an $`x_0`$-independent distribution by the time the perturbation is switched on at $`t_\mathrm{s}`$, $`G^*(\omega ,t,t_\mathrm{s})`$ and $`J^*(\omega ,t,t_\mathrm{s})`$ will be insensitive to $`x_0`$. It can be shown Fielding (2000) that the requirement for this is $`\tau \ll t_\mathrm{s}^{x_0}`$ for all $`\tau <1/\omega `$ and hence $`\omega t_\mathrm{s}^{x_0}\gg 1`$. We argue in App. B, however, that in order to get a sensible measurement of $`G^*`$ (and $`J^*`$) which is independent of the start time $`t_\mathrm{s}`$, we must ensure $`\omega t_\mathrm{s}\gg 1`$. This condition then automatically guarantees that the results are independent of $`x_0`$. In summary, the only significant effects of the initial sample preparation appear in the step strain response at long times ($`t\gtrsim t_\mathrm{w}^{x_0}`$). In the other linear response properties that we studied, the initial condition only affects subdominant terms. We reiterate our earlier statement that for nonlinear response, the initial sample condition should be even less important, because nonlinearities tend to wipe out memory effects.

## Appendix B Irrelevance of Switch-on Time in the Glass Phase

It was stated in Sec. 6.1.2 that $`G^*(\omega ,t,t_\mathrm{s})`$ does not depend on $`t_\mathrm{s}`$ so long as $`\omega (t-t_\mathrm{s})\gg 1`$ and $`\omega t_\mathrm{s}\gg 1`$. These criteria do not depend on the noise temperature $`x`$, and therefore hold even in the glass phase, $`x\le 1`$, where ageing occurs. This behaviour can be understood as follows. Consider a material which has not been strained since preparation except during a time window of duration $`t^*`$ before the present time $`t`$.
First write the linearized constitutive equation as:

$$\sigma (t)=-\int _{t-t^*}^{t}\gamma (t^{\prime})\,\frac{dG(t-t^{\prime},t^{\prime})}{dt^{\prime}}\,dt^{\prime}$$ (64)

where, for the SGR model

$$\frac{dG(t-t^{\prime},t^{\prime})}{dt^{\prime}}=-\delta (t-t^{\prime})+Y(t^{\prime})\,G_\rho (t-t^{\prime})$$ (65)

with<sup>36</sup>This result follows by differentiation of (37), respecting the fact that $`G(t-t_\mathrm{w},t_\mathrm{w})`$ vanishes for negative $`t-t_\mathrm{w}`$ (that is, it contains a factor $`\mathrm{\Theta }(t-t_\mathrm{w})`$ which is conventionally suppressed). $`G_\rho (t-t^{\prime})\sim (t-t^{\prime})^{-x}`$. Now consider the case of a step strain imposed at $`t-t^*`$, so that $`\gamma (t^{\prime})`$ is constant in (64). Because $`dG/dt^{\prime}`$ contains a contribution of order $`(t-t^{\prime})^{-x}`$, the integral has significant contributions from $`t^{\prime}`$ near $`t-t^*`$ whenever $`x\le 1`$; in fact in the absence of the factor $`Y(t^{\prime})`$ the integral would not even converge to a finite limit as $`t^*`$ becomes large. This is a signature of long-term memory: even the strain history in the distant past has an effect on the stress at time $`t`$. On the other hand, for an oscillatory strain (likewise switched on at $`t-t^*`$) one has (64) with $`\gamma (t)=\gamma _0e^{i\omega t}`$, and even without the factor $`Y(t^{\prime})`$ the integral would now converge to a finite limit so long as $`\omega t^*`$ is large. The convergence of the oscillatory integral follows from the mathematical result known as Jordan’s lemma Copson (1962) which, crudely speaking, states that inserting the oscillatory factor $`e^{i\omega t^{\prime}}`$ has a similar effect to converting the integrand, $`dG/dt^{\prime}`$, to $`\omega ^{-1}d^2G/dt^{\prime 2}`$. (Physically, this extra time derivative arises since the stress at $`t`$ due to any previously executed strain cycle must involve the change in $`dG/dt^{\prime}`$ over the cycle: if this change is small, the response to positive and negative strains will cancel.) Accounting for this extra time derivative, it is simple to check that the most recently executed strain cycles indeed dominate the response at time $`t`$, in contrast to the non-oscillatory case where the entire strain history contributes. This observation allows us to simplify (64) further by setting $`G(\mathrm{\Delta }t,t^{\prime})\approx G(\mathrm{\Delta }t,t)`$ where $`\mathrm{\Delta }t=t-t^{\prime}`$ (we assume $`\omega t\gg 1`$, so that the variation in the stress response function over a fixed number of recent cycles is negligible). Likewise, the limit of integration can safely be set to $`\mathrm{\Delta }t=\infty `$. Thus we have

$$G^*(\omega ,t)=\int _0^{\infty }e^{-i\omega \mathrm{\Delta }t}\,\frac{dG(\mathrm{\Delta }t,t)}{d\mathrm{\Delta }t}\,d\mathrm{\Delta }t$$ (66)

This can be integrated by parts to give (43) as required.

## Appendix C Numerical Methods

### C.1 Yield Rate in the Linear Regime

To obtain numerical results for the linear response properties of the SGR model, the yield rate $`Y(t)`$ has to be calculated first. A convenient starting point for this can be obtained by differentiating (53):

$$Y(t)=-G_0^{\prime}(t)-\int _0^tY(t^{\prime})\,G_\rho ^{\prime}(t-t^{\prime})\,dt^{\prime}$$ (67)

This is a Volterra integral equation of the second kind, which can in principle be solved by standard numerical algorithms Press et al. (1992). Such algorithms are based on discretizing the time domain into a grid $`t_0=0,t_1,\mathrm{…},t_n`$; the values $`Y_n=Y(t_n)`$ are then calculated successively, starting from the known value of $`Y_0`$.
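As an illustration of this strategy, here is a minimal uniform-grid trapezoidal sketch for (67) in the deep-quench case (so that $`G_0=G_\rho `$). It is our own toy version, and deliberately omits the geometric-grid and spline refinements discussed next:

```python
import numpy as np
from scipy.integrate import quad

x, N, dt = 0.7, 200, 0.1

def dG_rho(u):   # derivative of G_rho(u) = int_1^inf x tau^(-1-x) exp(-u/tau) dtau
    return -x * quad(lambda tau: tau**(-2 - x) * np.exp(-u / tau), 1, np.inf)[0]

ts = dt * np.arange(N + 1)
Y = np.empty(N + 1)
Y[0] = x / (x + 1.0)               # Y(0) = <1/tau> under the prior rho(tau)
for n in range(1, N + 1):
    w = np.full(n + 1, dt); w[0] = w[-1] = dt / 2      # trapezoid weights
    rhs = -dG_rho(ts[n]) - sum(w[k] * Y[k] * dG_rho(ts[n] - ts[k]) for k in range(n))
    Y[n] = rhs / (1.0 + w[n] * dG_rho(0.0))
print(Y[::50])   # decays towards the t^(x-1) regime of (35)
```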
The subtlety in our case is the choice of the grid: because for times $`t\gg 1`$ we expect the hopping rate to be a power law, we expect relative discretization errors given a time-step $`\mathrm{\Delta }t`$ to scale roughly as $`\mathrm{\Delta }t/t_n`$. Once we have chosen an acceptable (constant) value for the discretization error we are therefore at liberty to increase the time-step $`\mathrm{\Delta }t`$ linearly with the time $`t_n`$, which corresponds to using a geometric time grid. This allows us to generate data over many decades without too much computational effort. To improve accuracy, we also used a spline interpolation between the known points $`(t_0,Y_0),(t_1,Y_1),\mathrm{…},(t_{n-1},Y_{n-1})`$ when determining the next value $`Y_n`$.

### C.2 Strain Response to Finite Step Stress

The numerical scheme used to solve (24) and (25) in the case of an imposed step stress $`\sigma (t)=\sigma _0\mathrm{\Theta }(t-t_\mathrm{w})`$ is rather more complicated. This is because both the strain and the hopping rate, which are coupled through nonlinear integral equations, have to be calculated as functions of time. Again, we discretize time into a grid $`t_0,t_1,\mathrm{…},t_n`$, where $`t_0=t_\mathrm{w}^+`$, and proceed along the grid calculating the strain $`\gamma _n`$ and the hopping rate $`Y_n`$ for successive values of the index $`n`$.<sup>37</sup>Note that the integral form of the constitutive equations renders the strain and the hopping rate at any time step $`t_n`$ dependent upon the values of these quantities at all previous times – even times prior to stress application ($`0<t^{\prime}<t_\mathrm{w}`$). However, for such times the strain is clearly zero and the hopping rate is identical to that in the linear response regime, calculated previously. The first data point $`\gamma _0=\gamma (t_\mathrm{w}^+)`$ and $`Y_0=Y(t_\mathrm{w}^+)`$ on this grid is then obtained directly by treating the discontinuity at $`t_\mathrm{w}`$ “by hand”. At any subsequent time-step the two non-linear constitutive equations (24) and (25) are solved simultaneously. The first is essentially of the form:

$$0=f(\gamma _n,Y_n,\{\gamma _{n^{\prime}}\},\{Y_{n^{\prime}}\},t_\mathrm{w},\sigma _0)\qquad \text{for}\ 0\le n^{\prime}<n$$ (68)

while the second can be differentiated and rearranged to give

$$Y_n=g(\gamma _n,\{\gamma _{n^{\prime}}\},\{Y_{n^{\prime}}\},t_\mathrm{w},\sigma _0)\qquad \text{for}\ 0\le n^{\prime}<n$$ (69)

Because (68) cannot be solved explicitly for $`\gamma _n`$, we use an iterative process. At each time-step we start by placing sensible upper and lower bounds on $`\gamma _n`$, derived from physical expectations about the time dependence of the strain $`\gamma (t)`$. Each bound in turn is substituted into (69) (to find the corresponding value of $`Y_n`$) and (with its $`Y_n`$) into the function $`f`$ on the right hand side of (68). The secant method Press et al. (1992) is then used to update one of the bounds, and the new bound used to calculate a new $`Y_n`$ and $`f`$. This process is repeated until we obtain a sufficiently small value of $`f`$ ($`|f|<10^{-8}`$). The current values of $`\gamma _n`$ and $`Y_n`$ are then accepted and we proceed to the next time-step. We initially chose a geometric grid of time values $`t_0,t_1,\mathrm{…}`$, but this led to numerical instabilities. We therefore switched to an adaptive procedure which chooses time-steps such that the strain always increases by approximately the same amount in a given time-step.
Finally, note that at each iteration loop of each time-step we in principle need to evaluate double integrals of the form $`I=\int _0^th(Z(t,t^{\prime}))\,dt^{\prime}`$ in which

$$Z(t,t^{\prime})=\int _{t^{\prime}}^{t}dt^{\prime\prime}\,\mathrm{exp}\{[\gamma (t^{\prime\prime})-\gamma (t^{\prime})]^2/2x\}$$ (70)

Because this is very costly computationally, we first calculate at each loop $`Z(t,t^{\prime})`$ on a grid of $`t^{\prime}`$ values ranging from $`0`$ to $`t`$ and set up an interpolation over the calculated points. We are then left with single integrals of the same form as $`I`$, and look up the value of $`Z`$ whenever the integrand is called.
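A hedged sketch of that caching step (the names and the placeholder strain history are ours; in the real scheme $`\gamma `$ is known only on the grid $`t_0,\mathrm{…},t_n`$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

x, t = 0.7, 20.0
gamma_of = lambda u: 0.1 * np.log1p(u)   # placeholder strain history

def Z(tp):   # direct evaluation of (70) for one value of t'
    f = lambda tpp: np.exp((gamma_of(tpp) - gamma_of(tp))**2 / (2 * x))
    return quad(f, tp, t)[0]

grid = np.linspace(0.0, t, 41)
Z_interp = interp1d(grid, [Z(tp) for tp in grid], kind="cubic")
print(Z_interp(7.3), Z(7.3))   # cheap lookup vs. costly direct integral
```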
Table I: Exotic fermions in a 27-plet of E<sub>6</sub>.

DOE/ER/40561-61-INT99 EFI-99-33 hep-ph/9907438

MIXING OF CHARGE $`-1/3`$ QUARKS AND CHARGED LEPTONS WITH EXOTIC FERMIONS IN E<sub>6</sub><sup>1</sup>To be submitted to Physical Review Letters

Jonathan L. Rosner

Institute for Nuclear Theory, University of Washington, Seattle, WA 98195 and Enrico Fermi Institute and Department of Physics, University of Chicago, Chicago, IL 60637 <sup>2</sup>Permanent address.

ABSTRACT

> A way is suggested to understand why the average masses of $`(d,s,b)`$ quarks are smaller than those of $`(u,c,t)`$ quarks. In contrast to previously proposed mechanisms relying on different Higgs boson vacuum expectation values or different Yukawa couplings, the mass difference is explained as a consequence of mixing of $`(d,s,b)`$ with exotic quarks implied by the electroweak-strong unification group E<sub>6</sub>.

PACS Categories: 12.10.Dm, 12.10.Kt, 12.60.Cn, 14.80.-j

The currently known fermions consist of quarks $`(u,c,t)`$ with charge $`2/3`$, quarks $`(d,s,b)`$ with charge $`-1/3`$, leptons $`(e,\mu ,\tau )`$ with charge $`-1`$, and neutrinos $`(\nu _e,\nu _\mu ,\nu _\tau )`$. Some proposals address certain broad features of their masses. Specifically:

* The evidence that neutrino masses are non-zero but tiny with respect to those of other fermions may be evidence for large Majorana masses of right-handed neutrinos, which overwhelm Dirac mass terms and lead to extremely small Majorana masses for left-handed neutrinos .
* Many unified theories of the electroweak and strong interactions (see, e.g., ) imply a relation between the masses of charged leptons and quarks of charge $`-1/3`$ at the unification scale. Such a relation does seem to be approximately satisfied for the members $`\tau ,b`$ of the heaviest family.
* The larger (average) masses of the $`(u,c,t)`$ quarks with respect to the $`(d,s,b)`$ quarks could be a consequence of different vacuum expectation values in a two-Higgs-doublet model , where two different doublets are responsible for the masses of quarks of different charges. \[In such a picture we would view the masses of the lightest quarks, which have the inverted order $`m(d)>m(u)`$, as due, for example, to a radiative effect, and not characteristic of the gross pattern.\]

In the present paper we propose another potential source of difference between masses of quarks of different charges, which arises in a unified electroweak theory based on the gauge group E<sub>6</sub> . The fundamental (27-dimensional) representation of this group contains additional quarks of charge $`-1/3`$ and additional charged and neutral leptons, but no additional quarks of charge $`2/3`$. We have identified a simple mixing mechanism which can depress the average mass of $`(d,s,b)`$ quarks (and charged leptons) with respect to that of $`(u,c,t)`$ quarks without the need for different Higgs vacuum expectation values. This mixing can occur in such a way as to have minimal effect on the weak charged-current and neutral-current couplings of quarks and leptons, but offers the possibility of observable deviations from standard couplings if the new states participating in the mixing are not too heavy. This mechanism was first observed in Ref. . Similar mixing with isosinglet quarks was discussed in Ref. , but with a different emphasis (including a mechanism for understanding $`m_d>m_u`$). A related (“seesaw”) effect was used to describe the top quark mass in a particular theory of electroweak symmetry breaking .
We first recall some basic features of E<sub>6</sub> and mass matrices, and then describe a scenario in which $`(d,s,b)`$ masses (and those of charged leptons) can be depressed by mixing with their exotic E<sub>6</sub> counterparts. Some consequences of the mixing hypothesis are then noted. The fundamental 27-dimensional representation of E<sub>6</sub> contains representations of dimension 16, 10, and 1 of SO(10). We assume there exist three 27-plets, corresponding to the three quark-lepton families. We may regard ordinary matter (including right-handed neutrinos) of a single quark-lepton family as residing in an SO(10) 16-plet, with SU(5) content $`5^{*}+10+1`$. The additional (“exotic”) states in the 10-plet and singlet of SO(10) are summarized in Table I for one family. Here $`I_L`$ and $`I_{3L}`$ refer to left-handed isospin and its third component. All the new states are vector-like. They consist of an isosinglet quark $`h^c`$ of charge $`1/3`$, a lepton isodoublet $`(E^-,\nu _E)`$, the corresponding antiparticles, and a Majorana neutrino $`n_e`$. For simplicity we consider only mixings within a single family, which we shall denote $`(u,d,e,\nu _e)`$. We shall discuss only mass matrices of charged fermions. The neutral lepton sector is of potential interest since it contains possibilities for “sterile” neutrinos not excluded by the usual cosmological and accelerator-based experimental considerations . The simplest mass is that of the $`u`$ quark, which cannot mix with any other. We can describe its contribution to the Lagrangian (we omit Hermitian conjugates for brevity) in terms of a $`2\times 2`$ matrix $$\mathcal{M}^u=\left[\begin{array}{cc}0& m_u\\ m_u& 0\end{array}\right]$$ (1) sandwiched between Weyl spinors $`(u^c,u)`$ and $`(u^c,u)^T`$. The zeroes reflect charge and baryon number conservation. To diagonalize $`\mathcal{M}^u`$ it is most convenient to square it and note that the corresponding eigenvalues $`m_u^2`$ come in pairs. The simplest Higgs representation giving rise to $`m_u`$ belongs to the $`[27^{*},10,5^{*}]`$ of \[E<sub>6</sub>, SO(10), SU(5)\]. The corresponding mass matrix for quarks of charge $`-1/3`$ takes account of the possible mixing between non-exotic $`d`$ and exotic $`h`$ quarks. Its most general form in a basis $`(d^c,d,h^c,h)`$ can be written $$\mathcal{M}^d=\left[\begin{array}{cccc}0& m_2& 0& M_1\\ m_2& 0& m_3& 0\\ 0& m_3& 0& M_2\\ M_1& 0& M_2& 0\end{array}\right].$$ (2) Here small letters refer to $`\mathrm{\Delta }I_L=1/2`$ masses, which are expected to be of electroweak scale or less, while large letters refer to $`\mathrm{\Delta }I_L=0`$ masses, which can be of any magnitude (including the unification scale). We shall assume $`m_i\ll M_i`$. If the masses in Eq. (2) arise through vacuum expectation values of a Higgs $`27^{*}`$-plet (the simplest possibility), their transformation properties are summarized in Table II. Eq. (2) is diagonalized, as before, by squaring it. $`(\mathcal{M}^d)^2`$ decomposes into two separate $`2\times 2`$ matrices, referring to the bases $`(d^c,h^c)`$ and $`(d,h)`$. For each of these, the eigenvalues $`\lambda _1`$ and $`\lambda _2`$ satisfy $$\lambda _1+\lambda _2=m_2^2+m_3^2+M_1^2+M_2^2,$$ $$\lambda _1\lambda _2=(M_1m_3-M_2m_2)^2.$$ (3) Suppose, to begin with, that $`h`$ and $`h^c`$ pair up to form a Dirac particle with large mass $`M_2\gg (M_1,m_2,m_3)`$. Then the two eigenvalues are $`\lambda _1\approx m_2^2`$ and $`\lambda _2\approx M_2^2`$, corresponding to light and heavy Dirac particles $`d`$ and $`h`$, respectively.
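The decoupling just described is easy to check numerically. The following minimal sketch (not part of the original paper; the parameter values are invented purely for illustration) builds the mass matrix of Eq. (2), squares it, and verifies the sum and product rules of Eq. (3):

```python
import numpy as np

# Invented parameter values: electroweak-scale m's, heavy M's.
m2, m3 = 1.0, 2.0
M1, M2 = 300.0, 1000.0

# Mass matrix of Eq. (2) in the basis (d^c, d, h^c, h).
Md = np.array([[0.0, m2, 0.0, M1],
               [m2, 0.0, m3, 0.0],
               [0.0, m3, 0.0, M2],
               [M1, 0.0, M2, 0.0]])

# Squaring decouples the (d^c, h^c) and (d, h) sectors, so the eigenvalues
# of Md^2 come in two degenerate pairs (lambda_1, lambda_2).
evals = np.sort(np.linalg.eigvalsh(Md @ Md))
lam1, lam2 = evals[0], evals[2]

# Sum and product rules of Eq. (3):
print(lam1 + lam2, m2**2 + m3**2 + M1**2 + M2**2)  # should agree
print(lam1 * lam2, (M1 * m3 - M2 * m2)**2)         # should agree
```

With these (arbitrary) numbers the light eigenvalue also agrees with the angle parametrization derived next, $`\lambda _1\approx m^2\mathrm{cos}^2(\theta +\varphi )`$.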
If we label basis states with zeroes as subscripts, and physical states without subscripts, this solution corresponds to $`d=d_0`$, $`d^c=d_0^c`$, $`h=h_0`$, $`h^c=h_0^c`$. For the more general case where $`M_1`$ is not negligible in comparison with $`M_2`$, we can write $$M_1=M\mathrm{cos}\theta ,M_2=M\mathrm{sin}\theta ,$$ $$m_3=m\mathrm{cos}\varphi ,m_2=m\mathrm{sin}\varphi .$$ (4) Then for $`m\ll M`$, we have $$\lambda _1\approx m^2\mathrm{cos}^2(\theta +\varphi ),\lambda _2\approx m^2\mathrm{sin}^2(\theta +\varphi )+M^2.$$ (5) This is our central result. It is possible to choose $`\theta +\varphi `$ in such a way that the $`d`$ quark mass is arbitrarily small in comparison with $`m`$, whose scale is a typical electroweak scale (as in the case of $`m_u`$). The opposite situation, in which $`u`$-type quarks are lighter than $`d`$-type quarks, is unnatural in the present scheme. The physical (left-handed) $`(d^c,h^c)`$ states are eigenstates of the matrix $$\mathcal{M}_{d^c,h^c}^2=\left[\begin{array}{cc}m^2\mathrm{sin}^2\varphi +M^2\mathrm{cos}^2\theta & m^2\mathrm{cos}\varphi \mathrm{sin}\varphi +M^2\mathrm{cos}\theta \mathrm{sin}\theta \\ m^2\mathrm{cos}\varphi \mathrm{sin}\varphi +M^2\mathrm{cos}\theta \mathrm{sin}\theta & m^2\mathrm{cos}^2\varphi +M^2\mathrm{sin}^2\theta \end{array}\right].$$ (6) For $`M\gg m`$ the approximate eigenstates are $$d^c\approx \mathrm{sin}\theta \,d_0^c-\mathrm{cos}\theta \,h_0^c,h^c\approx \mathrm{cos}\theta \,d_0^c+\mathrm{sin}\theta \,h_0^c.$$ (7) In the limit $`\theta =\pi /2`$ in which $`M_2\gg M_1`$, leading to a large Dirac mass for the exotic quark $`h`$, one thus has $`d^c=d_0^c,h^c=h_0^c`$. The physical (left-handed) $`(d,h)`$ states are eigenstates of the matrix $$\mathcal{M}_{d,h}^2=\left[\begin{array}{cc}m^2& mM\mathrm{sin}(\theta +\varphi )\\ mM\mathrm{sin}(\theta +\varphi )& M^2\end{array}\right],$$ (8) specifically $$d\approx d_0-(m/M)\mathrm{sin}(\theta +\varphi )h_0,h\approx (m/M)\mathrm{sin}(\theta +\varphi )d_0+h_0.$$ (9) Thus, for $`m\ll M`$, there is little mixing between the isosinglet and isodoublet quarks, and hence little potential for violation of unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Some consequences of this mixing have been explored, for example, in Refs. . The mixing parameter $`\zeta \equiv (m/M)\mathrm{sin}(\theta +\varphi )`$ and the suppression of $`d`$-type masses are both maximal for $`\theta +\varphi =\pm \pi /2`$. Although the present mechanism for lowering the masses of down-type quarks does not require $`h`$ quarks to be accessible at present energies, it is interesting to speculate about this possibility. One effect of mixing between an ordinary $`d`$-type quark and its exotic $`h`$-type counterpart is the modification of couplings of the $`b`$ quark. The forward-backward asymmetry $`A_{FB}^b`$ in $`e^+e^-\to Z\to b\overline{b}`$, and the asymmetry parameter $`A_b`$ describing the couplings of the $`b`$ to the $`Z`$, are slightly different from the values expected in a standard electroweak fit, where $$g_{bL}=-\frac{1}{2}+\frac{1}{3}\mathrm{sin}^2\theta _W,g_{bR}=\frac{1}{3}\mathrm{sin}^2\theta _W,$$ (10) and $`A_b=(g_{bL}^2-g_{bR}^2)/(g_{bL}^2+g_{bR}^2)`$ is predicted to be 0.935 for $`\mathrm{sin}^2\theta _W=0.2316`$. To account for the experimental value of $`A_b=0.891\pm 0.017`$ while keeping $`g_{bL}^2+g_{bR}^2`$ fixed (since the total rate for $`Z\to b\overline{b}`$ is now in accord with standard model predictions) one must modify both $`g_{bL}`$ and $`g_{bR}`$ in such a way that $`g_{bL}\delta g_{bL}\approx -g_{bR}\delta g_{bR}`$ .
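For orientation, these coupling shifts can be quantified with a few lines of arithmetic (our own check, not from the paper; only $`\mathrm{sin}^2\theta _W=0.2316`$ and the target $`A_b=0.891`$ are taken from the text, everything else is derived):

```python
import math

s2w = 0.2316                    # sin^2(theta_W) quoted in the text
gbL = -0.5 + s2w / 3.0          # Eq. (10), left-handed b coupling
gbR = s2w / 3.0                 # Eq. (10), right-handed b coupling

Ab = (gbL**2 - gbR**2) / (gbL**2 + gbR**2)
print(round(Ab, 3))             # -> 0.935, the standard-model prediction

# Couplings reproducing the measured A_b = 0.891 at fixed gbL^2 + gbR^2:
s2, A_exp = gbL**2 + gbR**2, 0.891
gbL_new = -math.sqrt(s2 * (1.0 + A_exp) / 2.0)   # gbL stays negative
gbR_new = math.sqrt(s2 * (1.0 - A_exp) / 2.0)
print(round(gbL_new - gbL, 4), round(gbR_new - gbR, 4))  # required shifts
```

The required shift in $`g_{bR}`$ comes out comparable to $`g_{bR}`$ itself, which is the quantitative content of the remark below that a mechanism acting only on $`g_{bL}`$ cannot account for the discrepancy.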
The present scheme does not fill the bill, since it affects only left-handed couplings, mixing an isodoublet $`d`$ with an isosinglet $`h`$. We find $`\delta g_{bL}=\zeta ^2/2`$, $`\delta g_{bR}=0`$. Here we have assumed an unmixed $`Z`$. Severe constraints apply to the mixing of the $`Z`$ with a higher-mass $`Z^{\prime }`$ . The production of $`h\overline{h}`$ pairs in hadronic collisions should be governed by standard perturbative QCD, which gives a reasonable account of top quark pair production . For the data sample of approximately 100 pb<sup>-1</sup> obtained in $`p\overline{p}`$ collisions at a center-of-mass energy of 1.8 TeV in Run I at the Fermilab Tevatron, it should be possible to observe or exclude values of $`m(h)`$ well in excess of $`m(t)`$ . It may also be possible to produce or exclude $`h`$ quarks singly through the neutral flavor-changing interaction at LEP II via the reaction $`e^+e^-\to Z^{*}\to h+(\overline{d},\overline{s},\overline{b})`$. Both charged-current decays $`h\to W+(u,c,t)`$ and neutral-current decays $`h\to Z+(d,s,b)`$ should be characterized by multiple leptons and missing energy in an appreciable fraction of events. The mixing proposed here applies in an almost identical manner to the charged leptons under the replacements $`d^c\to e^-`$, $`d\to e^+`$, $`h^c\to E^-`$, $`h\to E^+`$. The charged leptons’ masses, just like those of the $`d`$-type quarks, thus may be depressed relative to their unmixed values. One could expect small modifications of right-handed lepton couplings since one is then mixing an isosinglet $`e`$ with an isodoublet $`E`$. To conclude, we have presented a mechanism which accounts for the depression in the average masses of down-type quarks and charged leptons relative to that of up-type quarks, without the need for differences in Higgs vacuum expectation values or in values of the largest Yukawa coupling for each type of fermion. This mechanism relies on mixings between ordinary fermions and their exotic counterparts in E<sub>6</sub> multiplets. It may be of use in building more realistic models of quark and lepton masses. Although the exotic E<sub>6</sub> fermions need not be accessible to present experimental searches in order for this mechanism to be effective, they could well be observable in forthcoming searches at the Fermilab Tevatron, the LEP II $`e^+e^-`$ collider, or the Large Hadron Collider under construction at CERN. I am indebted to T. André, F. del Aguila, B. Kayser, R. N. Mohapatra, S. T. Petcov, and L. Wolfenstein for useful discussions. I wish to thank the Institute for Nuclear Theory at the University of Washington for hospitality during this work, which was supported in part by the United States Department of Energy under Grant No. DE FG02 90ER40560.
no-problem/9907/astro-ph9907353.html
ar5iv
text
# Two measures of the shape of the Milky Way’s dark halo ## 1 Introduction The speed at which spiral galaxies rotate remains relatively constant out to large radii \[e.g., Rubin, Ford & Thonnard 1980, Bosma 1981\], which implies that they contain large amounts of dark matter. The nature of this material remains obscure, but one key diagnostic is provided by the shape of the dark matter halo, as quantified by its shortest-to-longest axis ratio, $`q=c/a`$. The roundest halos with $`q\sim 0.8`$ are predicted by galaxy formation models in which hot dark matter is dominant \[Peebles 1993\]. Cosmological cold dark matter simulations typically result in triaxial dark halos \[Warren et al. 1992\] while the inclusion of gas dynamics in the simulations results in somewhat flattened, oblate halos \[Katz & Gunn 1991, Udry & Martinet 1994, Dubinski 1994\] with $`q=0.5\pm 0.15`$ \[Dubinski 1994\]. Models for other dark matter candidates such as cold molecular gas \[Pfenniger, Combes & Martinet 1994\] and massive decaying neutrinos \[Sciama 1990\] require halos as flat as $`q\sim 0.2`$. Clearly, the determination of $`q`$ for real galaxies offers a valuable test for discriminating between these cosmological models. In Fig. 1, we summarize the existing estimates of halo shape, derived using a variety of techniques. It is evident from this figure that there is only a very limited amount of data available for measuring the distribution of $`q`$ in galaxies. Rather more worrying, though, is the fact that different techniques seem to yield systematically different answers. The warping gas layer method, in which the shape of the halo is inferred by treating any warp in a galaxy’s gas layer as a bending mode in a flattened potential \[Hofner & Sparke 1994\], seems to imply that dark halos are close to spherical, with $`q\sim 0.8`$. Conversely, the flaring gas layer technique, which determines the halo shape by assuming that the gas layer is in hydrostatic equilibrium in the galaxy’s gravitational potential and uses the thickness of the layer as a diagnostic for the distribution of mass in the galaxy \[Olling 1996b\], yields much flatter halo shape estimates with $`q\sim 0.4`$. Although the numbers involved are rather small, there do seem to be real differences between the results obtained by the different methods. Thus, either these techniques are being applied to systematically different classes of galaxy, or at least some of the methods are returning erroneous results. In order to determine which techniques are credible, we really need to apply several methods to a single galaxy, to see which produce consistent answers. As a first step towards such cross-calibration, this paper compares the shape of our own galaxy’s dark matter halo as inferred by two distinct techniques: 1. Stellar kinematics. Our position within the Milky Way means that we have access to information for this galaxy that cannot be obtained from other systems. In particular, it is possible to measure the total column density of material near the Sun using stellar kinematics. After subtracting off the other components, we can infer the local column density of dark matter. By comparing this density close to the plane of the Galaxy to the over-all mass as derived from its rate of rotation, we can obtain a direct measure of the shape of the halo. 2. The flaring gas layer method.
As outlined above, this technique assumes that the Hi emission in the Milky Way comes from gas in hydrostatic equilibrium in the Galactic potential, from which the shape of the dark halo is inferred. Since the results of previous applications of this method give results somewhat out of line with the other techniques, it is important to assess the method’s credibility. As well as providing a check on the validity of the flaring gas layer method, these analyses will also provide another useful datum on the rather sparsely populated Fig. 1. The remainder of the paper is laid out as follows. Both of the above methods rely on knowledge of the Milky Way’s rotation velocity as a function of Galactic radius – its “rotation curve” – so Section 2 summarizes the data available for estimating this quantity, and the dependence of the inferred rotation curve on the assumed distance to the Galactic centre and local rotation velocity. The analysis of the shape of the dark halo requires that we decompose the Milky Way into its visible and dark matter components, so Section 3 discusses the construction of a set of models consistent with both the photometric properties of the Galaxy, and its mass properties as inferred from the rotation curve. In Section 4 we show how these mass models can be combined with the local stellar kinematic measurements to determine the shape of the dark halo. Section 5 presents the application of the gas layer flaring technique to the mass models. Section 6 combines the results derived by the two techniques and assesses their consistency. The broader conclusions of this work are drawn in Section 7. ## 2 The observed rotation curve Our position within the Milky Way complicates the geometry when studying its structure and kinematics. It is therefore significantly harder to determine our own galaxy’s rotation curve, $`\mathrm{\Theta }(R)`$, than it is to derive those for external systems. More directly accessible to observation than $`\mathrm{\Theta }(R)`$ is the related quantity $$W(R)=R_0\left[\frac{\mathrm{\Theta }(R)}{R}-\frac{\mathrm{\Theta }_0}{R_0}\right],$$ (1) where $`R_0`$ and $`\mathrm{\Theta }_0`$ are the distance to the Galactic centre and the local circular speed. If one assumes that material in the galaxy is in purely circular motion, then some simple geometry shows that, for an object at Galactic coordinates $`\{l,b\}`$ with a line-of-sight velocity $`v_{\mathrm{los}}`$, $$W=\frac{v_{\mathrm{los}}}{\mathrm{sin}l\mathrm{cos}b}$$ (2) (Binney & Merrifield 1998, §9.2.3). Thus, if one measures the line-of-sight velocities for a series of objects at some radius $`R`$ in the Galaxy, one has an estimate for $`W(R)`$. By adopting values for $`R_0`$ and $`\mathrm{\Theta }_0`$, one can then use equation (1) to determine the rotation speed at that radius. In practice, the difficulty lies in knowing the Galactic radii of the objects one is looking at. One solution is to look at standard candles, whose distances can be estimated, and hence whose radii in the Galaxy can be geometrically derived. Alternatively, one can select the subset of some tracer – usually Hi or H<sub>2</sub> gas – whose line-of-sight velocities and Galactic coordinates place it at the same value of $`W`$, and hence at the same radius. One can then use the properties of this cylindrical slice through the Galaxy to infer its radius.
For example, in the inner Galaxy, all the material in a ring of radius $`R`$ will lie at Galactic longitudes of less than $`l_{\mathrm{max}}=\mathrm{sin}^{-1}(R/R_0)`$, so one can use the extent of the emission on the sky of each $`W`$-slice to infer its radius – an approach traditionally termed the “tangent point method” \[see, for example, Malhotra (1994,1995)\]. At radii greater than $`R_0`$ this method is no longer applicable as the emission will be visible at all values of $`l`$. However, by assuming that the thickness of the gas layer does not vary with azimuth, one can use the observed variation in the angular thickness of the layer with Galactic longitude to estimate the radii of such $`W`$ slices (Merrifield 1992). For the remainder of this paper, we use Malhotra’s (1994, 1995) tangent point analysis to estimate $`W(R)`$ in the inner Galaxy. For the outer Galaxy, we have combined Merrifield’s (1992) data with Brand & Blitz’s (1993) analysis of the distances to HII regions, from whose standard-candle distances the rotation curve can be derived. In order to convert $`W(R)`$ into $`\mathrm{\Theta }(R)`$ using equation (1), we must adopt values for the Galactic constants, $`R_0`$ and $`\mathrm{\Theta }_0`$. Unfortunately, there are still significant uncertainties in these basic parameters. In the case of $`R_0`$, for example, the extensive review by Reid (1993) discussed measurements varying between $`R_0=6.9\pm 0.6\mathrm{kpc}`$ and $`R_0=8.4\pm 0.4\mathrm{kpc}`$. Even more recently there have been few signs of convergence: Layden et al. (1996) used RR Lyrae stars as standard candles to derive $`R_0=7.2\pm 0.7\mathrm{kpc}`$, while Feast & Whitelock (1997) used a Cepheid calibration to obtain $`R_0=8.5\pm 0.5\mathrm{kpc}`$. The constraints on $`\mathrm{\Theta }_0`$ are similarly weak: a recent review by Sackett (1997) concluded that a value somewhere in the range $`\mathrm{\Theta }_0=210\pm 25\mathrm{km}\mathrm{s}^{-1}`$ provided the best current estimate. It should also be borne in mind that the best estimates for $`R_0`$ and $`\mathrm{\Theta }_0`$ are not independent. Analysis of the local stellar kinematics via the Oort constants gives quite a well-constrained value for the ratio $`\mathrm{\Theta }_0/R_0=26.4\pm 1.9\mathrm{km}\mathrm{s}^{-1}\mathrm{kpc}^{-1}`$ (Kerr & Lynden-Bell 1986), so a lower-than-average value of $`R_0`$ is likely to be accompanied by a lower-than-average value for $`\mathrm{\Theta }_0`$. Currently, the best available measure of the local angular velocity of the Milky Way is based on VLBI proper motion measurements of Sgr A*. Assuming that Sgr A* is at rest with respect to the Galactic centre, Reid et al. (1999) find $`\mathrm{\Theta }_0/R_0=27.25\pm 2.5\mathrm{km}\mathrm{s}^{-1}\mathrm{kpc}^{-1}`$, consistent with the value proposed by Kerr & Lynden-Bell. To illustrate the effect of the adopted values of $`\mathrm{\Theta }_0`$ and $`R_0`$ on the derived rotation curve, Fig. 2 shows $`\mathrm{\Theta }(R)`$ for two of the more extreme plausible sets of Galactic parameters. Clearly, the choice of values for these quantities affects such basic issues as whether the rotation curve is rising or falling in the outer Galaxy. Given the current uncertainties, it makes little sense to pick fixed values for the Galactic constants, so in the following analysis we consider models across a broad range of values, $`5.5\mathrm{kpc}<R_0<9\mathrm{kpc}`$, $`165\mathrm{km}\mathrm{s}^{-1}<\mathrm{\Theta }_0<235\mathrm{km}\mathrm{s}^{-1}`$.
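To make the sensitivity to the Galactic constants explicit, here is a minimal sketch (ours; the $`W(R)`$ values are invented for illustration and are not the data of Fig. 2) of the conversion implied by equations (1) and (2):

```python
import numpy as np

def W_from_obs(v_los, l_deg, b_deg):
    # Equation (2): W = v_los / (sin l cos b), assuming purely circular motion.
    l, b = np.radians(l_deg), np.radians(b_deg)
    return v_los / (np.sin(l) * np.cos(b))

def theta_from_W(R, W, R0, Theta0):
    # Equation (1) inverted: Theta(R) = (R / R0) * (W(R) + Theta0).
    return (R / R0) * (W + Theta0)

print(W_from_obs(40.0, 90.0, 0.0))   # a single tracer at l=90 deg, b=0 -> W=40

# Invented W(R) values (km/s) at a few radii expressed in units of R0:
R_over_R0 = np.array([1.2, 1.5, 2.0])
W = np.array([-35.0, -65.0, -105.0])

for R0, Theta0 in [(7.1, 185.0), (8.5, 220.0)]:
    print(R0, Theta0, np.round(theta_from_W(R_over_R0 * R0, W, R0, Theta0), 1))
# Small constants -> a flat-to-falling outer curve; large constants -> the
# very same W(R) data imply a rising curve.
```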
## 3 Mass models In order to relate the rotation curve to the shape of the dark halo, we must break the gravitational potential responsible for the observed $`\mathrm{\Theta }(R)`$ into the contributions from the different mass components. As is usually done in this decomposition \[e.g. van Albada & Sancisi 1986, Kent 1987, Begeman 1989, Lake & Feinswog 1989, Broeils 1992, Olling 1995, Olling 1996c, Dehnen & Binney 1998\], we adopt a model consisting of a set of basic components: 1. A stellar bulge. Following Kent’s (1992) analysis of the Galaxy’s K-band light distribution, we model the Milky Way’s bulge by a “boxy” density distribution, $$\rho _b(R,z)\propto K_0(s/h_b),\text{where }s^4=R^4+(z/q_b)^4.$$ (3) This modified Bessel function produces a bulge that appears exponential in projection. The observed flattening of the K-band light yields $`q_b=0.61`$, and its characteristic scalelength is $`h_b=670\mathrm{pc}`$ (Kent 1992). The constant of proportionality depends on the bulge mass-to-light ratio, $`\mathrm{\Upsilon }_b`$, which we leave as a free parameter. 2. A stellar disk. The disk is modelled by a density distribution, $$\rho _d(R,z)\propto \mathrm{exp}(-R/h_d)\mathrm{sech}(z/2z_d).$$ (4) The first term gives the customary radially-exponential disk. The appropriate value for the scalelength, $`h_d`$, is still somewhat uncertain – Kent, Dame & Fazio (1991) estimated $`h_d=3\pm 0.5\mathrm{kpc}`$, while Freudenreich (1998) found a value of $`2.5\mathrm{kpc}`$. We therefore leave this parameter free to vary within the range $`2\mathrm{kpc}\le h_d\le 3\mathrm{kpc}`$. The $`z`$-dependence adopts van der Kruit’s (1988) compromise between a $`\mathrm{sech}^2`$ isothermal sheet and a pure exponential. For simplicity, we fix the scaleheight at $`z_d=300\mathrm{pc}`$. However, the exact $`z`$-dependence of $`\rho _d`$ was found to have no impact on any of the following analysis. Once again, the constant of proportionality depends on the mass-to-light ratio of the disk, $`\mathrm{\Upsilon }_d`$, which we leave as a free parameter. 3. A gas disk. From the Hi data given by Burton (1988) and Malhotra (1995) and the H<sub>2</sub> column densities from Bronfman et al. (1988) and Wouterloot et al. (1990), we have inferred the density of gas as a function of radius in the Galaxy. This analysis treats the gas as an axisymmetric distribution, and neglects the contribution from ionized phases of the interstellar medium, but does include a 24% contribution by mass from helium (Olive & Steigman 1995). 4. A dark matter halo. We model the dark halo as a flattened non-singular isothermal sphere, which has a density distribution $$\rho _h(R,z)=\rho _h\frac{R_h^2}{R_h^2+R^2+(z/q)^2},$$ (5) where $`\rho _h`$ is the central density, $`R_h`$ is the halo core radius, and $`q`$ is the halo flattening, which is the key parameter in this paper. The procedure for calculating a mass model from these components is quite straightforward. For each pair of Galactic constants, $`R_0`$ and $`\mathrm{\Theta }_0`$, we vary the unknown parameters $`\{\mathrm{\Upsilon }_b,h_d,\mathrm{\Upsilon }_d,\rho _h,R_h,q\}`$ to produce the mass model that has a gravitational potential, $`\mathrm{\Phi }(R,z)`$, such that $$v(R)=\left(R\frac{\partial \mathrm{\Phi }}{\partial R}\bigg|_{z=0}\right)^{1/2}$$ (6) provides the best fit (in a minimum $`\chi ^2`$ sense) to the observed rotation curve, $`\mathrm{\Theta }(R)`$. Examples of two such best-fit models are shown in Fig. 2. As is well known \[e.g.
van Albada & Sancisi 1986, Kent 1987, Begeman 1989, Lake & Feinswog 1989, Broeils 1992, Olling 1995, Olling 1996c, Dehnen & Binney 1998\], such mass decompositions are by no means unique: there is near degeneracy between the various unknown parameters, so many different combinations of the individual components can reproduce the observed rotation curve with almost equal quality of fit. We have therefore searched the entire $`\{\mathrm{\Upsilon }_b,h_d,\mathrm{\Upsilon }_d,\rho _h,R_h,q\}`$ parameter space to find the complete subset of values that produce fits in which the derived value of $`\chi ^2`$ exceeded the minimum value by less than unity. ## 4 Halo flattening from local stellar kinematics Although the analysis of the previous section tells us something about the possible range of mass models for the Milky Way, it does not place any useful constraint on the shape of the dark halo: for any of the adopted values of $`R_0`$ and $`\mathrm{\Theta }_0`$, one can find acceptable mass models with highly-flattened dark halos ($`q\sim 0`$), round dark halos ($`q\sim 1`$), and even prolate dark halos ($`q>1`$). We therefore need some further factor to discriminate between these models. One such constraint is provided by stellar kinematics in the solar neighbourhood. Studies of the motions of stars in the Galactic disk near the Sun imply that the total amount of mass within 1.1 kpc of the Galactic plane is $`\mathrm{\Sigma }_{1.1}=(71\pm 6)M_{\odot }\mathrm{pc}^{-2}`$ (Kuijken & Gilmore 1991). Clearly, the value of $`\mathrm{\Sigma }_{1.1}`$ provides an important clue to the shape of the Milky Way’s dark halo: in general, a model with a highly-flattened dark halo will place a lot of mass near the Galactic plane leading to a high predicted value for $`\mathrm{\Sigma }_{1.1}`$, while a round halo will distribute more of the dark matter further from the plane, depressing $`\mathrm{\Sigma }_{1.1}`$. The dark halo is not the only contributor to $`\mathrm{\Sigma }_{1.1}`$. In particular, the stellar disk has a surface density in the solar neighbourhood, $`\mathrm{\Sigma }_{*}`$, which may contribute a significant fraction of $`\mathrm{\Sigma }_{1.1}`$. There must therefore be a fairly simple relation between the adopted value of $`\mathrm{\Sigma }_{*}`$ and the inferred halo flattening, $`q`$. Specifically, as one considers larger possible values of $`\mathrm{\Sigma }_{*}`$, the amount of dark matter near the plane must decrease so as to preserve the observed value of $`\mathrm{\Sigma }_{1.1}`$. Such a decrease can be achieved by increasing $`q`$, thus making the dark halo rounder. This inter-relation is illustrated in Fig. 3. Here, for one of the sets of possible Galactic constants, we have considered all the mass models that produce an acceptable fit to the rotation curve and reproduce the measured value of $`\mathrm{\Sigma }_{1.1}`$. For each acceptable model, we have extracted the value of the halo flattening $`q`$ and the mass of the stellar disk in the solar neighbourhood, $`\mathrm{\Sigma }_{*}`$. For the reasons described above, these quantities are tightly correlated, and Fig. 3 shows the linear regression between the two. Clearly, we have not yet derived a unique value for the halo flattening: by selecting models with different values of $`\mathrm{\Sigma }_{*}`$, we can still tune $`q`$ to essentially any value we want. However, $`\mathrm{\Sigma }_{*}`$ is not an entirely free parameter.
From star-count analysis, it is possible to perform a stellar mass census in the solar neighbourhood, and thus determine the local stellar column density. Until relatively recently, this analysis has been subject to significant uncertainties, with estimates as high as $`\mathrm{\Sigma }_{*}=145M_{\odot }\mathrm{pc}^{-2}`$ \[Bahcall 1984\]. However, there is now reasonable agreement between the various analyses, with more recent published values of $`35\pm 5M_{\odot }\mathrm{pc}^{-2}`$ (Kuijken & Gilmore 1989) and $`26\pm 4M_{\odot }\mathrm{pc}^{-2}`$ (Gould, Bahcall & Flynn 1997). If we adopt Kuijken & Gilmore’s slightly more conservative error bounds on $`\mathrm{\Sigma }_{*}`$, the shaded regions in Fig. 3 no longer represent acceptable models as they predict the wrong local disk mass density, so we end up with a moderately well-constrained estimate for the halo flattening of $`q=0.7\pm 0.1`$. Note, though, that although this analysis returns a good estimate for $`q`$, Fig. 3 only shows the value obtained for a particular set of Galactic constants. As discussed above, changing the adopted values for $`R_0`$ and $`\mathrm{\Theta }_0`$ alters the range of acceptable mass models, which, in turn, will alter the derived correlation between $`\mathrm{\Sigma }_{*}`$ and $`q`$. The uncertainties in the Galactic constants are still sufficiently large that the absolute constraints on $`q`$ remain weak. Nevertheless, it is to be hoped that the measurements of $`R_0`$ and $`\mathrm{\Theta }_0`$ will continue to improve over time, leading to a unique determination of $`q`$ by this method. Further, as we shall see below, it may be possible to attack the problem from the other direction by using other estimates of $`q`$ to help determine the values of the Galactic constants. ## 5 Halo flattening from gas layer flaring We now turn to the technique developed by Olling (1995) for measuring the shape of a dark halo from the observed thickness of a galaxy’s gas layer. In essence the approach is similar to the stellar-kinematic method described above: the over-all mass distribution of the Milky Way is inferred from its rotation curve, while the degree to which this mass distribution is flattened is derived using the properties of a tracer population close to the Galactic plane. In this case, the tracer is provided by the Hi emission from the Galactic gas layer. The thickness of the Milky Way’s gas layer is dictated by the hydrostatic balance between the pull of gravity toward the Galactic plane and the pressure forces acting on the gas. As the density of material in the Galaxy decreases with radius, the gravitational force toward the plane becomes weaker, so the equilibrium thickness of the layer becomes larger, giving the gas distribution a characteristic “flared” appearance. The exact form of this flaring depends on the amount of mass close to the plane of the Galaxy. Thus, by comparing the observed flaring to the predictions of the hydrostatic equilibrium arrangement of gas in the mass models developed in Section 3, we can see what degree of halo flattening is consistent with the observations. ### 5.1 The observed flaring of the gas layer Before we can apply this technique, we need to summarize the observational data available on the flaring of the Galactic Hi layer. Merrifield (1992) calculated the thickness of the gas layer across a wide range of Galactic azimuths in his determination of the outer Galaxy rotation curve.
As a check on the validity of that analysis, we have also drawn on the work of Diplas & Savage (1991) and Wouterloot et al. (1992), which derived the gas layer thicknesses across a more limited range of azimuths. For completeness, we have also included the data for the inner Galaxy as derived by Malhotra (1995). The resulting values for the full-width at half maximum (FWHM) of the density of gas perpendicular to the plane are given in the upper panel in Fig. 4. Note that, once again, the results depend on the adopted values of the Galactic constants – since the radii in the Galaxy of the various gas elements were derived from their line-of-sight velocities via equations (1) and (2), the values of $`R`$ in Fig. 4 depend on $`R_0`$ and $`\mathrm{\Theta }_0`$.<sup>1</sup> (<sup>1</sup>Merrifield’s method for determining the thickness of the gas layer also exposes some of the shortcomings of more traditional methods. If the wrong values for $`R_0,\mathrm{\Theta }_0`$ and $`\mathrm{\Theta }(R)`$ are chosen, the inferred thickness of the gas layer (at a given $`R`$) will show a systematic variation with Galactocentric azimuth. It is possible to correct published data for this effect, but only if the assumed rotation curve, Galactic constants, as well as the thickness of the gas layer as a function of azimuth are specified.) It would appear from this figure that there are some discrepancies between the various measurements in the outer Galaxy. After some investigation, we established that these differences can be attributed to the effects of the beam sizes of the radio telescopes with which the observations were made. Such resolution effects will mean that the FWHM of the gas will tend to be overestimated. In the lower panel, we show what happens when the appropriate beam correction is made to the data from Diplas & Savage (1991) and Wouterloot et al. (1992). No similar correction is required for the Merrifield (1992) analysis, as in that work the derived value of the FWHM was dominated by gas towards the Galactic anti-centre, which lies at relatively small distances from the Sun, and so the beam correction is small. Clearly, this correction brings the various data sets into much closer agreement. We therefore adopt the mean curve shown in this panel for the following analysis; the error bars shown represent the standard error obtained on averaging the various determinations. ### 5.2 Sources of pressure support In order to compare the observed gas layer flaring to the predictions for a gas layer in hydrostatic equilibrium, we must address the source of the pressure term in the hydrostatic equilibrium equation. The most obvious candidate for supporting the Hi layer comes from its turbulent motions. In the inner Galaxy, the Hi is observed to have a velocity dispersion of $`\sigma _g=9.2\mathrm{km}\mathrm{s}^{-1}`$ independent of radius \[Malhotra 1995\], and we assume that this value characterizes the turbulent motions of the gas throughout the Galaxy, providing a kinetic pressure term. Potentially, there may be other forces helping to support the Galactic Hi layer: non-thermal pressure gradients associated with magnetic fields and cosmic rays may also provide a net force to resist the pull of gravity on the Galactic Hi. However, the analysis we are doing depends most on the properties of the Hi layer at large radii in the Milky Way, where star formation is almost non-existent, so energy input into cosmic rays and magnetic fields from stellar processes is likely to be unimportant.
Our concentration on the properties of gas at large radii also eliminates another potential complexity. In the inner Galaxy, the interstellar medium comprises a complicated multi-phase mixture of molecular, atomic and ionized material. A full treatment of the hydrostatic equilibrium of such a medium is complicated, as gas can transform from one component to another, so all components would have to be considered when calculating hydrostatic equilibrium. For the purpose of this paper, it is fortunate that at the low pressures characteristic of the outer Galaxy it is not possible to maintain both the cold molecular phase and the warm atomic phase \[Maloney 1993, Wolfire et al. 1995\]. Braun & Walterbos (1992) and Braun (1997,1998) have shown that the cold phase disappears when the B-band surface brightness of a galaxy falls below the 25th magnitude per square arcsecond level, which occurs at $`1.5R_0`$ in the Milky Way (Binney & Merrifield 1998, §10.1). Further, the ionized fraction of the ISM is expected to decrease with distance as it is closely associated with sites of star formation \[Ferguson et al. 1996, Wang, Heckman & Lehnert 1997\]. Ultimately, the ionizing effects of the extragalactic background radiation field become significant, but only when the Hi column density falls below about $`1M_{\odot }\mathrm{pc}^{-2}`$ \[Maloney 1993, Dove & Shull 1994, Olling 1996c\], which lies well beyond the radii we are considering here. For the current analysis, it therefore seems reasonable to treat the Milky Way’s gas layer as a single isothermal component supported purely by its turbulent motions. Moreover, this assumption has been made in the previous implementations of the gas flaring method \[Olling 1996b, Becquaert, Combes & Viallefond 1997\]. A principal objective of this paper is to test the validity of those analyses by comparing results obtained by the flaring technique to those obtained by other methods. It is therefore important that we make the same assumption of a single isothermal component in the present study. ### 5.3 Fitting to model gas layer flaring We are now in a position to compare observation and theory. The technique for calculating the gas layer thickness in different mass models has been described in detail by Olling (1995). In brief, for each model, at each radius $`R`$, one integrates the hydrostatic equilibrium equation, $$\frac{\partial \mathrm{\Phi }}{\partial z}=-\frac{1}{\rho _g}\frac{\partial (\rho _g\sigma _g^2)}{\partial z},$$ (7) to obtain the gas density distribution, $`\rho _g(R,z)`$. The FWHM of this model gas distribution can then be compared directly with the observations. The results of such calculations are illustrated by the examples shown in Fig. 5. The basic trends in this analysis are clearly demonstrated by these examples. The decrease in total density with radius leads to a dramatic flaring in the model-predicted gas layer thickness, just as is seen in the observations. For a flatter model dark halo, the mass is more concentrated toward the plane of the Galaxy, squeezing the Hi layer into a thinner distribution. Once again, the results depend quite sensitively on the choice of Galactic constants, since these values affect both the gas distribution as inferred from observations and the acceptable mass models as inferred from the rotation curve. As is apparent from Fig. 5, none of the $`R_0=8.5\mathrm{kpc}`$ models fits the observations.
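The mechanics of this calculation can be illustrated with a toy version (ours, not Olling’s code): for an isothermal layer, Eq. (7) integrates to $`\rho _g(R,z)\propto \mathrm{exp}\{-[\mathrm{\Phi }(R,z)-\mathrm{\Phi }(R,0)]/\sigma _g^2\}`$, so for any model potential the FWHM follows directly. Below, the potential is crudely approximated as plane-parallel, the halo of Eq. (5) is the only mass component, and the $`1/q`$ rescaling of the central density is an ad hoc stand-in for refitting the rotation curve — only the qualitative trend should be trusted.

```python
import numpy as np

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def rho_halo(R, z, rho0, Rh, q):
    # Flattened pseudo-isothermal halo density of Eq. (5), in Msun/kpc^3.
    return rho0 * Rh**2 / (Rh**2 + R**2 + (z / q)**2)

def gas_fwhm(R, rho0, Rh, q, sigma_g=9.2, zmax=4.0, n=4001):
    # Plane-parallel toy version: integrate d2Phi/dz2 = 4 pi G rho twice,
    # then apply the isothermal solution of Eq. (7).
    z = np.linspace(0.0, zmax, n)
    dz = z[1] - z[0]
    rho = rho_halo(R, z, rho0 / q, Rh, q)   # crude 1/q boost of the central
                                            # density, mimicking a fixed rotation curve
    gz = 4.0 * np.pi * G * np.cumsum(rho) * dz       # dPhi/dz
    phi = np.cumsum(gz) * dz                         # Phi(z) - Phi(0)
    rho_g = np.exp(-phi / sigma_g**2)
    z_half = z[np.searchsorted(-rho_g, -0.5)]        # rho_g falls to one half
    return 2.0 * z_half                              # FWHM in kpc

for q in (0.3, 0.7, 1.0):   # invented halo parameters below
    print(q, round(gas_fwhm(R=20.0, rho0=1.0e7, Rh=8.0, q=q), 2))
# Flatter halos (smaller q) put more mass near the plane and squeeze the layer.
```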
In fact, to match the observed layer width one would require a substantially prolate dark matter halo with $`q\approx 1.5`$, which none of the current dark matter scenarios predict. For $`R_0=7.1\mathrm{kpc}`$, on the other hand, a very good fit is obtained for models with a halo flatness of $`q\approx 0.7`$. Such models even reproduce the observed inflection in the variation of the gas layer width with radius at $`R\approx 10\mathrm{kpc}`$. For each plausible set of Galactic constants, we can carry out a similar analysis to that in Section 4 to see what range of values of $`q`$ are consistent with the observed gas layer flaring. Because this technique relies on data from large radii in the Galaxy, where the dark matter halo is the dominant source of mass, the flaring predicted by the models depends very little on the properties of the disk and bulge. Unlike the stellar kinematic analysis, therefore, one cannot trade off the mass in the disk against the mass near the plane from a more-flattened dark halo. This difference is illustrated in Fig. 3, which shows the way that the value of $`q`$ inferred from the gas layer flaring depends on the properties of the stellar disk (as parameterized by the model’s column density of stars in the Solar neighbourhood). As for the stellar-kinematic analysis, there is a well-defined correlation between $`q`$ and $`\mathrm{\Sigma }_{*}`$, but, for the reasons described above, the trend for the current method is very much weaker, and, within the observationally-allowed range for $`\mathrm{\Sigma }_{*}`$, $`q`$ is tightly constrained to $`0.73\pm 0.03`$. Although this constraint is remarkably good, it should be borne in mind that it is still dependent on the adopted values for the Galactic constants. As we have seen above, larger values for $`R_0`$ lead to rounder, or even prolate estimates for halo shape, so we will not obtain an unequivocal measure for $`q`$ from this method until the Galactic constants are measured more accurately. ## 6 Combining the techniques The different slopes of the two relations in Fig. 3 raise an interesting possibility. Clearly, for a consistent picture, one must use a single mass model to reproduce both the stellar-kinematic constraint on the mass in the solar neighborhood and the observed flaring of the gas layer. Thus, although there are whole linear loci in this figure of models with different values of $`q`$ and $`\mathrm{\Sigma }_{*}`$ that satisfy each of these constraints individually, there is only the single point of intersection between these two lines where the model fits both the stellar-kinematic constraint and the observed flaring of the gas layer. Hence, for given values of $`R_0`$ and $`\mathrm{\Theta }_0`$, one predicts unique values for $`q`$ and $`\mathrm{\Sigma }_{*}`$. We have therefore repeated the analysis summarized in Fig. 3 spanning the full range of plausible values for $`R_0`$ and $`\mathrm{\Theta }_0`$, and calculated the mutually-consistent estimates for $`\mathrm{\Sigma }_{*}`$ and $`q`$ for each case. In order to reduce the computational complexity of this large set of calculations to manageable proportions, we made use of Olling’s (1995) fitting formula, which showed that, if self-gravity is negligible, one can approximate the model-predicted thickness of the gas layer by the relation $$\mathrm{FWHM}(R)\approx \sqrt{\frac{13.5q}{1.4+q}}\frac{\sigma _g}{v_{h,\infty }}\sqrt{R_h^2+R^2},$$ (8) where $`v_{h,\infty }`$ is the circular rotation speed of the dark halo component at large radii.
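Tabulating this fitting formula is straightforward; the sketch below (ours, with invented halo parameters $`R_h=8\mathrm{kpc}`$ and $`v_{h,\infty }=180\mathrm{km}\mathrm{s}^{-1}`$) shows how the predicted flaring grows with both $`R`$ and $`q`$:

```python
import numpy as np

def fwhm_fit(R, q, Rh=8.0, sigma_g=9.2, v_h_inf=180.0):
    # Equation (8): FWHM(R) ~ sqrt(13.5 q / (1.4 + q)) * (sigma_g / v_h_inf)
    #               * sqrt(Rh^2 + R^2); gas self-gravity neglected.
    return np.sqrt(13.5 * q / (1.4 + q)) * (sigma_g / v_h_inf) * np.sqrt(Rh**2 + R**2)

R = np.array([15.0, 20.0, 25.0])              # Galactocentric radii in kpc
for q in (0.3, 0.7, 1.5):
    print(q, np.round(fwhm_fit(R, q), 2))     # FWHM in kpc, grows with R and q
```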
Allowing for the self-gravity of the gas layer, one obtains a similar formula. Comparing the values derived from this formula to sample results obtained by the full integration process described above, we found that the approximation based on equation (8) matches the detailed integration for $`R>1.75R_0`$. We therefore only used the data from beyond this radius in the following analysis, in which the model gas layer thickness was estimated using the approximate formula. The values obtained for $`q`$ and $`\mathrm{\Sigma }_{*}`$ for each possible pair of Galactic constants are presented in Fig. 6. Thus, for example, for values of $`R_0=7.1\mathrm{kpc}`$ and $`\mathrm{\Theta }_0=185\mathrm{km}\mathrm{s}^{-1}`$, one finds $`\mathrm{\Sigma }_{*}=35M_{\odot }\mathrm{pc}^{-2}`$ and $`q=0.7`$, corresponding to the intercept that we previously calculated in Fig. 3. Figure 6 places some interesting limits on the properties of the Milky Way. For example, if we maintain our prejudice that the halo should be oblate ($`q<1`$), then, unless we adopt a particularly extreme value for $`R_0`$, we find that $`\mathrm{\Theta }_0`$ must be less than $`190\mathrm{km}\mathrm{s}^{-1}`$. If we also adopt Kuijken & Gilmore’s (1989) measurement of $`\mathrm{\Sigma }_{*}=35\pm 5M_{\odot }\mathrm{pc}^{-2}`$, we find that only models within the heavily-shaded region of Fig. 6 are acceptable, placing an upper limit on $`R_0`$ of $`7.6\mathrm{kpc}`$. Conversely, if one forces the Galactic constants to the IAU standard values of $`R_0=8.5\mathrm{kpc}`$ and $`\mathrm{\Theta }_0=220\mathrm{km}\mathrm{s}^{-1}`$ (Kerr & Lynden-Bell 1986), one finds barely-credible values of $`\mathrm{\Sigma }_{*}\approx 60M_{\odot }\mathrm{pc}^{-2}`$ and $`q\approx 1.5`$. ## 7 Conclusions The prime objective of this paper has been to check the validity of techniques for measuring halo flattening by asking whether two different techniques return consistent values when applied to the Milky Way. As we have seen in the last section, the answer is a qualified “yes.” The qualification is that consistency with the measured stellar column density requires values of the Galactic constants that differ from those conventionally adopted. However, as was discussed in Section 2, the true values of these constants remain elusive, with estimates spanning the ranges $`7\mathrm{kpc}<R_0<8.5\mathrm{kpc}`$ and $`185\mathrm{km}\mathrm{s}^{-1}<\mathrm{\Theta }_0<235\mathrm{km}\mathrm{s}^{-1}`$. With such gross uncertainties, it is quite straightforward to pick values that produce an entirely self-consistent picture. To underline this point, we have included on Fig. 6 the results of Olling & Merrifield’s (1997) estimates for $`R_0`$ and $`\mathrm{\Theta }_0`$ derived from an analysis of the Oort constants. If that analysis is valid, then we have a consistent model for the Milky Way in which the dark halo has a flattening of $`q\approx 0.8`$. Ultimately, this analysis will allow us to come to one of two conclusions: 1. If future studies confirm low values for $`R_0`$ and $`\mathrm{\Theta }_0`$ similar to those derived by Olling & Merrifield (1997), then the consistency of the two analyses for calculating $`q`$ implies that the gas layer flaring technique is valid, adding confidence to the previous determinations by this method. 2. Conversely, if we learn in future that $`R_0`$ and $`\mathrm{\Theta }_0`$ are closer to the more conventional larger values, then the implied values of $`\mathrm{\Sigma }_{*}`$ are so far from the observed estimates that one has to suspect that at least one of the techniques for measuring $`q`$ is compromised.
In this case, one would have to look more closely at some of the assumptions that went into the analysis. For example, perhaps the non-thermal pressure forces from cosmic rays and magnetic fields have a role to play even at large radii in galactic disks. Alternatively, perhaps the Hi layer is not close enough to equilibrium for the hydrostatic analysis to be valid. Finally, our assumption of azimuthal symmetry may be invalid. Strong departures from axisymmetry could mean that our determination of the thickness and column density of the gas is compromised, and that the locally determined values of $`\mathrm{\Sigma }_{1.1}`$ and $`\mathrm{\Sigma }_{*}`$ may not be representative for the Galactocentric radius of the Sun. Assuming for the moment that the analysis is valid, we have another datum to add to Fig. 1. Since the two previous flaring analyses returned systematically rather small values of $`q\sim 0.3`$, it is reassuring that the Milky Way seems to indicate a larger value of $`\sim 0.8`$ – it appears that the low values are simply a coincidence arising from the very small number statistics. This larger value is inconsistent with the very flat halos that are predicted by models in which the dark matter consists of either decaying neutrinos (Sciama 1990) or cold molecular hydrogen (Pfenniger, Combes & Martinet 1994). With the addition of the Milky Way to the data presented in Fig. 1, the only technique that stands out as giving systematically different values for $`q`$ is the bending mode analysis of warped gas layers. In this regard, it is notable that simulations cast some doubt on the validity of such analyses. The method assumes that warps are manifestations of persistent bending modes that occur when the flattening of a disk and the surrounding dark halo are misaligned. However, the simulations show that gravitational interactions between a misaligned disk and halo rapidly bring the two back into alignment, effectively suppressing this mechanism (e.g. Dubinski & Kuijken 1995, Nelson & Tremaine 1995, Binney, Jiang & Dutta 1998). The number of measurements is still rather small, but it is at least interesting that if one neglects the warped gas layer results, the remaining data appear entirely consistent with the dotted line in Fig. 1, which shows Dubinski’s (1994) prediction for the distribution of halo shapes that will be produced in a cold dark matter cosmology. ## acknowledgments We would like to thank Andy Newsam, Irini Sakelliou, Konrad Kuijken, Marc Kamionkowski and Jacqueline van Gorkom for useful discussions. We are also very grateful to the referee, James Binney, for his major contribution to the clarity of this paper.
no-problem/9907/quant-ph9907015.html
ar5iv
text
# “Haunted” quantum contextuality ## Abstract Two entangled particles in threedimensional Hilbert space (per particle) are considered in an EPR-type arrangement. On each side the Kochen-Specker observables $`\{J_1^2,J_2^2,J_3^2\}`$ and $`\{\overline{J}_1^2,\overline{J}_2^2,J_3^2\}`$ with $`[J_1^2,\overline{J}_1^2]\ne 0`$ are measured. The outcomes of measurements of $`J_3^2`$ (via $`J_1^2,J_2^2`$) and $`J_3^2`$ (via $`\overline{J}_1^2,\overline{J}_2^2`$) are compared. We investigate the possibility that, although formally $`J_3^2`$ is associated with the same projection operator, a strong form of quantum contextuality states that an outcome depends on the complete disposition of the measurement apparatus, in particular whether $`J_1^2`$ or $`\overline{J}_1^2`$ is measured alongside. It is argued that in this case it is impossible to measure contextuality directly, a necessary condition being a non-operational counterfactuality of the argument. http://tph.tuwien.ac.at/~svozil/publ/context.{ps,tex} Besides complementarity, contextuality is another, more subtle nonclassical feature of quantum mechanics. That is, one and the same physical observable may appear different, depending on the context of measurement; i.e., depending on the particular way it was inferred. Stated differently, the outcome of a physical measurement may depend also on other physical measurements which are coperformed. In Bell’s own words \[1, section 5\], “The result of an observation may reasonably depend not only on the state of the system … but also on the complete disposition of the apparatus.” This property is usually referred to as contextuality. Formally, contextuality may be related to the nonexistence of two-valued measures on Hilbert logics and the partial algebra of projection operators when the dimension of the Hilbert space is higher than two. Contextuality then expresses the impossibility of consistently constructing truth values for the whole physical system from any arrangement of truth values of its “proper parts.” The term “proper part” refers to any maximal number of independent comeasurable observables corresponding to commuting self-adjoint operators. In quantum logics , these are denoted by boolean subalgebras or “blocks” which can be represented by a single “maximal” observable. The entirety of all proper parts is then identified with the whole physical system. By definition, no union of different proper parts can itself be a proper part, because there are always observables in the constituents which are not comeasurable with another observable from any other different proper part. This does not exclude that, for Hilbert spaces of dimension greater than two, there may exist one or more elements of different proper parts which coincide. Indeed, we shall encounter a system with three observables $`A,B`$ and $`C`$ such that $`[A,B]=[A,C]=0`$, whereas $`[B,C]\ne 0`$. Therefore, although the proper parts are “classical mini-universes” by the way they are constructed, their whole is not, because it presupposes counterfactual reasoning. (Indeed, Specker \[16, in German\] was motivated by scholastic speculations about the so-called “infuturabilities,” or “possibilities.”) In what follows, we propose to test contextuality by an EPR-type measurement of one and the same observable, but with different comeasurable observables.
The difference to the usual EPR-type setup is the identity of the observables and the specific attention paid to other comeasurable observables, which are usually disregarded. (For similar considerations, see an article by Heywood and Redhead .) Any argument of this kind must necessarily involve Hilbert spaces of dimension higher than two, since because of orthogonality, for two or lower dimensions, if two observables are identical, all other comeasurable observables are identical as well. We shall adopt the original system of observables used by Kochen and Specker . These observables are defined in threedimensional Hilbert space and are based upon the spin one observable along an arbitrary unit vector $`(x_1,x_2,x_3)=(\mathrm{sin}\theta \mathrm{cos}\varphi ,\mathrm{sin}\theta \mathrm{sin}\varphi ,\mathrm{cos}\theta )\in 𝐑^3`$, with polar coordinates $`0\le \theta \le \pi `$ and $`0\le \varphi <2\pi `$. The radius is set to unity. The corresponding hermitian $`(3\times 3)`$-matrix is given by $$S(\theta ,\varphi )=\left(\begin{array}{ccc}\mathrm{cos}\theta & \frac{e^{-i\varphi }\mathrm{sin}\theta }{\sqrt{2}}& 0\\ \frac{e^{i\varphi }\mathrm{sin}\theta }{\sqrt{2}}& 0& \frac{e^{-i\varphi }\mathrm{sin}\theta }{\sqrt{2}}\\ 0& \frac{e^{i\varphi }\mathrm{sin}\theta }{\sqrt{2}}& -\mathrm{cos}\theta \end{array}\right).$$ (1) The spin state observables $`J_1,J_2,J_3`$ ($`\hbar =1`$) along the three cartesian coordinate axes $`x=(1,0,0)\equiv (\pi /2,0,1)`$, $`y=(0,1,0)\equiv (\pi /2,\pi /2,1)`$ and $`z=(0,0,1)\equiv (0,0,1)`$ are just given by (cf. Gudder \[17, pp. 54-57\]) $$J_1=S(\frac{\pi }{2},0),J_2=S(\frac{\pi }{2},\frac{\pi }{2}),J_3=S(0,0),$$ (2) and $$S^{*}(x_1,x_2,x_3)=x_1J_1+x_2J_2+x_3J_3,$$ (3) where the asterisk “$`*`$” indicates that the arguments are the usual cartesian coordinates. Spin state measurements along another orthogonal tripod $`\overline{x},\overline{y},\overline{z}`$ can be easily represented by $`S(\overline{x}),S(\overline{y})`$ and $`S(\overline{z})`$. Consider now the squares of the spin state observables introduced in Equation (2). $$J_1^2=\frac{1}{2}\left(\begin{array}{ccc}1& 0& 1\\ 0& 2& 0\\ 1& 0& 1\end{array}\right),J_2^2=\frac{1}{2}\left(\begin{array}{ccc}1& 0& -1\\ 0& 2& 0\\ -1& 0& 1\end{array}\right),J_3^2=\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right).$$ (4) Let us consider another system of $`\overline{J}_i^2`$’s rotated by $`\varphi \ne 0\text{ mod }\pi /2`$ along the $`z=(0,0,1)`$-axis. According to Equation (1), $$\overline{J}_1^2=\left(S(\frac{\pi }{2},\varphi )\right)^2,\overline{J}_2^2=\left(S(\frac{\pi }{2},\varphi +\frac{\pi }{2})\right)^2,\overline{J}_3^2=\left(S(0,0)\right)^2.$$ (5) By inspection, it can be verified that the $`J_i^2`$’s and $`\overline{J}_i^2`$’s form two mutually commuting systems; i.e., $`[J_1^2,J_2^2]=[J_1^2,J_3^2]=[J_2^2,J_3^2]=0,`$ $`[\overline{J}_1^2,\overline{J}_2^2]=[\overline{J}_1^2,\overline{J}_3^2]=[\overline{J}_2^2,\overline{J}_3^2]=0.`$ But not all $`J_i^2`$’s commute with all $`\overline{J}_i^2`$’s. For instance, $`[\overline{J}_1^2,J_1^2]\ne 0.`$ Indeed, only $`J_3^2`$ commutes with $`\overline{J}_3^2`$, because these operators are identical.
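The operator algebra above is easily verified numerically; the following sketch (ours, not from the paper) constructs $`S(\theta ,\varphi )`$ of Eq. (1), forms the squared observables of Eqs. (4)–(5), and checks the quoted commutation relations.

```python
import numpy as np

def S(theta, phi):
    # Spin-1 component along the direction (theta, phi), Eq. (1).
    c, s = np.cos(theta), np.sin(theta) / np.sqrt(2)
    e = np.exp(1j * phi)
    return np.array([[c,     s / e, 0    ],
                     [s * e, 0,     s / e],
                     [0,     s * e, -c   ]])

def comm(A, B):
    return A @ B - B @ A

pi = np.pi
J2 = [S(pi/2, 0) @ S(pi/2, 0),
      S(pi/2, pi/2) @ S(pi/2, pi/2),
      S(0, 0) @ S(0, 0)]
phi = 0.3   # any phi != 0 mod pi/2
Jb2 = [S(pi/2, phi) @ S(pi/2, phi),
       S(pi/2, phi + pi/2) @ S(pi/2, phi + pi/2),
       S(0, 0) @ S(0, 0)]

# Each triple is mutually commuting ...
print(all(np.allclose(comm(J2[i], J2[j]), 0) for i in range(3) for j in range(3)))
print(all(np.allclose(comm(Jb2[i], Jb2[j]), 0) for i in range(3) for j in range(3)))
# ... but the two contexts do not commute, except where the operators coincide:
print(np.allclose(comm(J2[0], Jb2[0]), 0))   # False: [J1bar^2, J1^2] != 0
print(np.allclose(comm(J2[2], Jb2[2]), 0))   # True: identical operators
```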
As has already been pointed out by von Neumann , for any system of mutually commuting self-adjoint operators $`H_1,H_2,\dots `$ there exists a maximal operator $`U`$ such that all $`H_i`$’s are functions $`f_i`$ of $`U`$; i.e., $$H_i=f_i(U).$$ (6) Applying this result to the two systems of mutually commuting operators $`\{J_1^2,J_2^2,J_3^2\}`$ and $`\{\overline{J}_1^2,\overline{J}_2^2,\overline{J}_3^2\}`$ yields two maximal operators $`U`$ and $`\overline{U}`$ and sets of functions $`f_i`$, $`\overline{f}_i`$ such that $$J_i^2=f_i(U)\mathrm{and}\overline{J}_i^2=\overline{f}_i(\overline{U}),i=1,2,3.$$ (7) In particular, $$J_3^2=f_3(U)=\overline{f}_3(\overline{U})=\overline{J}_3^2.$$ (8) More explicitly , let $`a\ne b\ne c\ne a`$ and $$U=aJ_1^2+bJ_2^2+cJ_3^2=\frac{1}{2}\left(\begin{array}{ccc}a+b+2c& 0& a-b\\ 0& 2a+2b& 0\\ a-b& 0& a+b+2c\end{array}\right),$$ (12) $$\overline{U}(\varphi )=\overline{a}\overline{J}_1^2+\overline{b}\overline{J}_2^2+\overline{c}\overline{J}_3^2=\frac{1}{2}\left(\begin{array}{ccc}\overline{a}+\overline{b}+2\overline{c}& 0& (\overline{a}-\overline{b})e^{-2i\varphi }\\ 0& 2\overline{a}+2\overline{b}& 0\\ (\overline{a}-\overline{b})e^{2i\varphi }& 0& \overline{a}+\overline{b}+2\overline{c}\end{array}\right).$$ (16) The diagonal form of $`U`$ and $`\overline{U}`$ is $`\mathrm{diag}(a+b,b+c,a+c)`$ and $`\mathrm{diag}(\overline{a}+\overline{b},\overline{b}+\overline{c},\overline{a}+\overline{c})`$ respectively. Measurement of $`U`$ can, for instance, be realized by a set of beam splitters ; or in an arrangement proposed by Kochen and Specker . Any such measurement will yield either the eigenvalue $`a+b`$ (exclusive) or the eigenvalue $`b+c`$ (exclusive) or the eigenvalue $`a+c`$. Since $`a,b,c`$ are mutually distinct, the eigenvalues of $`U`$ are nondegenerate. At the same time, $`J_1^2,J_2^2,J_3^2`$ are orthogonal projection operators in $`𝐑^3`$: they are idempotent, $`J_i^2J_i^2=J_i^2`$, with eigenvalues $`0`$ and $`1`$ for $`i=1,2,3`$. (The same is true for any system $`\overline{J}_1^2,\overline{J}_2^2,\overline{J}_3^2`$.) Alternatively, they can be identified with onedimensional orthogonal subspaces of $`𝐑^3`$ which in turn are spanned by the orthogonal vectors $`(1,0,0)`$, $`(0,1,0)`$ and $`(0,0,1)`$. In quantum logic, they can be identified with atomic propositions and can be conveniently represented by hypergraphs called “Greechie diagrams,” in which points represent atoms and all orthogonal atoms belonging to the same tripod are represented by edges or smooth curves. The spatial configuration of subspaces as well as the associated Greechie diagram of the combined systems $`J_1^2,J_2^2,J_3^2`$ and $`\overline{J}_1^2,\overline{J}_2^2,\overline{J}_3^2`$ are drawn in Figure 1. The $`J_3^2`$ and $`\overline{J}_3^2`$ are then polynomials of $`U`$ and $`\overline{U}`$, respectively; i.e., $$J_3^2=f_3(U)=\frac{1}{(c-b)(a-c)}\left[U^2-U(a+b+2c)+2(a+b)c𝐈\right],$$ (17) $$\overline{J}_3^2=\overline{f}_3(\overline{U})=\frac{1}{(\overline{c}-\overline{b})(\overline{a}-\overline{c})}\left[\overline{U}^2-\overline{U}(\overline{a}+\overline{b}+2\overline{c})+2(\overline{a}+\overline{b})\overline{c}𝐈\right].$$ (18) Furthermore, $$J_1^2+J_2^2+J_3^2=\overline{J}_1^2+\overline{J}_2^2+\overline{J}_3^2=2𝐈,$$ (19) indicating that, since the possible eigenvalues of any $`J_i^2`$ are either 0 or 1, the measured values of two of the three observables $`J_i^2`$ must be 1, and one must be 0.
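A quick numerical check of Eqs. (12), (17) and (19) (our sketch; the weights $`a,b,c`$ are arbitrary distinct numbers) confirms this bookkeeping:

```python
import numpy as np

# J_i^2 from Eq. (4); I is the 3x3 identity.
J1sq = 0.5 * np.array([[1, 0, 1], [0, 2, 0], [1, 0, 1]])
J2sq = 0.5 * np.array([[1, 0, -1], [0, 2, 0], [-1, 0, 1]])
J3sq = np.diag([1.0, 0.0, 1.0])
I = np.eye(3)

print(np.allclose(J1sq + J2sq + J3sq, 2 * I))   # Eq. (19)

a, b, c = 1.0, 2.0, 4.0              # arbitrary mutually distinct weights
U = a * J1sq + b * J2sq + c * J3sq   # maximal operator, Eq. (12)

# Eigenvalues should be {a+b, a+c, b+c}, all nondegenerate:
print(np.sort(np.linalg.eigvalsh(U)))           # -> [3. 5. 6.]

# Eq. (17): the polynomial f_3 applied to U reproduces the projector J_3^2.
f3U = (U @ U - U * (a + b + 2*c) + 2 * (a + b) * c * I) / ((c - b) * (a - c))
print(np.allclose(f3U, J3sq))                   # -> True
```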
Thus any measurement of the maximal operator $`U`$ yields $`a+b`$ associated with $`J_1^2=J_2^2=1`$, $`J_3^2=0`$ (exclusive) or $`a+c`$ associated with $`J_1^2=J_3^2=1`$, $`J_2^2=0`$ (exclusive) or $`b+c`$ associated with $`J_2^2=J_3^2=1`$, $`J_1^2=0`$. Now consider an EPR-type arrangement with two particles in an identical state $$\frac{1}{\sqrt{3}}\left[|a+b\rangle |a+b\rangle +|a+c\rangle |a+c\rangle +|b+c\rangle |b+c\rangle \right]\equiv \frac{1}{\sqrt{3}}\left[\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right)\otimes \left(\begin{array}{c}0\\ 1\\ 0\end{array}\right)+\frac{1}{2}\left(\begin{array}{c}1\\ 0\\ 1\end{array}\right)\otimes \left(\begin{array}{c}1\\ 0\\ 1\end{array}\right)+\frac{1}{2}\left(\begin{array}{c}1\\ 0\\ -1\end{array}\right)\otimes \left(\begin{array}{c}1\\ 0\\ -1\end{array}\right)\right]$$ (39) The quantum numbers $`a+b`$, $`a+c`$ and $`b+c`$ refer to the eigenstates of $`U`$ with eigenvalues $`a+b`$, $`a+c`$ and $`b+c`$, respectively. The eigenvalues and eigenstates of $`\overline{U}`$ are $`\overline{a}+\overline{b}`$, $`\overline{a}+\overline{c}`$, $`\overline{b}+\overline{c}`$ and $`(0,1,0)`$, $`(e^{2i\varphi },0,1)`$, and $`(e^{2i\varphi },0,-1)`$, respectively. Now consider the following question: assume $`U`$ and $`\overline{U}`$ are measured for the right and the left particle separately. (Of course, one may also successively measure $`\{J_1^2,J_2^2,J_3^2\}`$ and $`\{\overline{J}_1^2,\overline{J}_2^2,\overline{J}_3^2\}`$.) Would the outcome of the measurement of $`J_3^2=\overline{J}_3^2`$ be different, depending on whether it was derived from $`U`$ or $`\overline{U}`$? There are at least three alternative answers which will be discussed shortly. “Strong” contextuality assumption (I): Although $`J_3^2`$ and $`\overline{J}_3^2`$ are identical operators, $`U`$ and $`\overline{U}`$ are not; and the associated measurement results need not coincide, since $`J_3^2`$ has to be inferred from $`U`$ (or equivalently, comeasured with $`J_1^2`$ and $`J_2^2`$), while $`\overline{J}_3^2`$ has to be inferred from $`\overline{U}`$ (or equivalently, comeasured with $`\overline{J}_1^2`$ and $`\overline{J}_2^2`$). This would then make it necessary to add to each quantum number also, in Bell’s terms, the complete disposition of the measurement apparatus, which can be represented by the associated maximal operator. We may quantify “strong” contextuality by noticing that the $`J_i^2`$’s are dichotomic observables with eigenvalues $`0`$ and $`1`$. Therefore, in analogy to EPR-type experiments, it is possible to define a correlation function $$C(\varphi )=\lim _{N\to \mathrm{}}\frac{1}{N}\sum _{j=1}^{N}r_j(J_3^2)r_j(\overline{J}_3^2),$$ (40) where $`N`$ is the number of experiments, the index $`j`$ denotes the $`j`$’th experiment; and $`r(1)=+1`$ and $`r(0)=-1`$. $`\varphi `$ is the relative angle between the $`x`$\- and $`\overline{x}`$-axes. If both axes coincide, then $`C(0)=1`$. The proposed experiment tests “strong” contextuality in the following way. If $`C(\varphi )=1`$ for $`0<\varphi <\pi /2`$, then the system behaves noncontextually. This can be verified by considering formula (40): in the noncontextual case, $`r_j(J_3^2)`$ and $`r_j(\overline{J}_3^2)`$ are always the same ($`+1,+1`$ or $`-1,-1`$), hence their product always yields $`1`$. Contextuality manifests itself as $`C(\varphi )<1`$ for some value of $`\varphi `$. In this case, $`r_j(J_3^2)`$ and $`r_j(\overline{J}_3^2)`$ differ sometimes ($`+1,-1`$ or $`-1,+1`$), resulting in a negative product which reduces the overall sum in (40).
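The quantum prediction for this correlation function is easy to sketch numerically. The fragment below (a toy computation, reusing the S(θ,φ) helper above; the weights a, b, c are arbitrary distinct constants, an assumption for illustration) projects both particles of state (39) onto the eigenbases of $`U`$ and $`\overline{U}(\varphi )`$ and accumulates Equation (40). For the state (39) it returns $`C(\varphi )=1`$ for every $`\varphi `$, the “haunted” behaviour discussed below:

```python
import numpy as np

def S(theta, phi):
    s = np.sin(theta) / np.sqrt(2.0)
    return np.array([[np.cos(theta),      s*np.exp(1j*phi),  0.0],
                     [s*np.exp(-1j*phi),  0.0,               s*np.exp(1j*phi)],
                     [0.0,                s*np.exp(-1j*phi), -np.cos(theta)]])

def eigenbasis(phi):
    """Eigenvectors of U(phi) = a J1^2 + b J2^2 + c J3^2 and their r-values."""
    a, b, c = 1.0, 2.0, 4.0
    U = sum(w * (S(t, p) @ S(t, p))
            for w, (t, p) in zip((a, b, c),
                                 ((np.pi/2, phi), (np.pi/2, phi + np.pi/2), (0.0, 0.0))))
    vals, vecs = np.linalg.eigh(U)
    # r(J3^2): -1 if the eigenvalue is a+b (J3^2 = 0 there), +1 otherwise
    r = np.where(np.isclose(vals, a + b), -1.0, +1.0)
    return vecs, r

# the maximally entangled state (39): sum_i |u_i>|u_i>/sqrt(3) = sum_i |e_i>|e_i>/sqrt(3)
psi = np.eye(3).reshape(9) / np.sqrt(3.0)

def C(phi):
    V1, r1 = eigenbasis(0.0)    # measure U on one particle
    V2, r2 = eigenbasis(phi)    # measure U-bar on the other
    corr = 0.0
    for i in range(3):
        for j in range(3):
            amp = np.kron(V1[:, i], V2[:, j]).conj() @ psi
            corr += abs(amp)**2 * r1[i] * r2[j]
    return corr

for phi in (0.0, 0.3, 0.7, 1.2):
    print(phi, round(C(phi), 12))   # -> 1.0 for every phi
```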
In view of the highly counterintuitive consequences discussed for the contextual case, let us consider noncontextuality assumption (II): $`J_3^2`$ and $`\overline{J}_3^2`$ are identical operators which therefore always yield identical observations, independent of the way they have been derived. As innocent and evident as this statement may appear, it clashes with a theorem derived by Kamber , Zierler and Schlessinger , and Kochen and Specker . (For recent reviews, see , among others.) For then we could consider, instead of a two-particle EPR experiment, a, say, 17-particle entangled experiment characterized by a state similar to (39) and make measurements of $`U`$ (and thus of $`J_1^2,J_2^2`$ and $`J_3^2`$) along the 17 direction vectors $`(0,0,1)`$, $`(0,1,0)`$ and all coordinate permutations of $`(0,1,\sqrt{2})`$ and $`(1,\pm 1,\sqrt{2})`$. This system has been suggested by Peres , but the original Kochen-Specker configuration or any other proper system of vectors would do just as well (for a review, see, e.g., ). In this case, noncontextuality assumption (II) results in a complete contradiction: after measuring all 17 particles and checking the appropriate observables, it turns out that the outcomes of measurements of at least two observables which correspond to identical operators but are measured alongside different coobservables (blocks) are different. There is yet another, more subtle alternative which will be called “haunted” contextuality assumption (III): contextuality never manifests itself in its “strong” form (I) but only through counterfactual reasoning. Insofar as states of the form (39) are explicitly constructed to yield identical results on measurements of $`J_3^2`$, these measurement outcomes are independent of the way they have been inferred, and in particular of what observables have been measured alongside. This does not contradict the Kochen-Specker theorem, since obviously states obeying such perfect correlations can be constructed only in one direction, whereas the Kochen-Specker theorem necessarily requires directions associated with noncomeasurable, complementary observables. Thus the test of “strong” contextuality will fail; i.e., $`C(\varphi )`$ will always be unity. But this does not exclude (and indeed makes necessary) arguments involving counterfactuality, such as the Kochen-Specker theorem or the GHZ-theorem which, although of doubtless importance conceptually, bear a non-operational flavor in the sense of direct physical testability.
# Integrating the BeppoSAX Gamma-Ray Burst Monitor into the 3rd Interplanetary Network ## 1 Introduction It is now well known that the breakthrough in our understanding of cosmic gamma-ray bursts (GRBs) has come about through the multiwavelength identification of fading counterparts, and that this revolution was initiated by BeppoSAX observations and precise localizations of X-ray counterparts (e.g., Costa et al. 1997) with the Wide Field Camera (WFC). Less well known, however, is the fact that BeppoSAX has other ways of localizing bursts, in particular with the Gamma-Ray Burst Monitor (GRBM). Here we describe the results of integrating BeppoSAX into the 3rd Interplanetary Network of GRB instruments, and demonstrate that triangulation using the GRBM and Ulysses is capable of producing precise localization annuli whose accuracy can be verified using the locations of GRB counterparts determined from observations at other wavelengths. ## 2 Instrumentation The BeppoSAX GRBM (Feroci et al. 1997; Amati et al. 1997; Frontera et al. 1997) is the anticoincidence shield for the Phoswich Detection System. Briefly, this shield consists of four optically independent CsI(Na) scintillators, each 1 cm thick, 27.5 cm high, and 41.3 cm wide. Although the geometric area of a shield element is 1136 cm², the maximum effective area for a burst with a typical power law spectrum, arriving at normal incidence, is about 420 cm² for units 1 or 3 (the optimum units), when shadowing by spacecraft structures and the detector housing is taken into account. The GRBM records data in either a triggered or a real-time mode, in the energy range ∼40-700 keV. In the triggered mode, time resolutions up to 0.48 ms are available; in the real-time mode, the resolution is 1 s. For a triggered event, real-time data are also transmitted. As the spacecraft is in a near-equatorial orbit, it benefits from a very stable background. The Ulysses GRB detector (Hurley et al. 1992) consists of two 3 mm thick hemispherical CsI(Na) scintillators with a projected area of about 20 cm² in any direction. The detector is mounted on a magnetometer boom far from the body of the spacecraft, and therefore has a practically unobstructed view of the full sky. Because the Ulysses mission is in interplanetary space, the instrument also benefits from an exceptionally stable background. The GRB detector operates continuously, and over 97% of the data are recovered. The energy range is ∼25-150 keV, and like the GRBM, it takes data in both triggered and real-time modes, with time resolutions as fine as 8/1024 s and as coarse as 2 s. Also, like the GRBM, real-time data are transmitted for triggered events. ## 3 Scope of this work Because of the very different sensor areas, thicknesses, and energy ranges, it is not immediately obvious that burst time histories from these two instruments can be accurately cross-correlated, a necessary prerequisite for precise triangulation. (The triangulation technique is described in detail in Hurley et al., 1999). Our goal in this paper is to demonstrate that this is indeed the case. To accomplish this, we have selected bursts according to the following criteria: 1. the burst must have been observed by both Ulysses and the BeppoSAX GRBM, in either triggered or real-time data modes, and 2. the burst must have been independently localized to an accuracy equal to or better than that achieved by triangulation.
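For context, the geometry behind the triangulation technique referenced above (Hurley et al., 1999) can be sketched in a few lines of Python. A burst wavefront reaching two spacecraft separated by a baseline D with time delay Δt must arrive from a cone of half-angle θ = arccos(cΔt/D) about the baseline; the timing uncertainty widens this into an annulus of half-width δθ ≈ cσ(Δt)/(D sin θ). The numbers below are illustrative placeholders, not values from this paper:

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def annulus(baseline_km, dt_s, sigma_dt_s):
    """Triangulation annulus from a burst arrival-time delay between two spacecraft.

    Returns the annulus radius (angle between baseline and burst direction)
    and its 1-sigma half-width, both in degrees.
    """
    cos_theta = C * dt_s / baseline_km
    if abs(cos_theta) > 1.0:
        raise ValueError("delay exceeds light travel time along the baseline")
    theta = np.arccos(cos_theta)
    half_width = C * sigma_dt_s / (baseline_km * np.sin(theta))
    return np.degrees(theta), np.degrees(half_width)

# illustrative only: a ~5 AU Ulysses-Earth baseline and a
# cross-correlation timing uncertainty of ~25 ms
theta, hw = annulus(baseline_km=7.5e8, dt_s=1500.0, sigma_dt_s=0.025)
print(f"annulus radius {theta:.2f} deg, half-width {hw*3600:.1f} arcsec")
```

The arcsecond-scale half-width for interplanetary baselines is what makes the IPN annuli competitive with, and complementary to, the WFC and NFI error circles discussed below.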
These independent localizations include not only WFC observations, but also BeppoSAX Narrow Field Instrument (NFI) pointings, optical and radio counterparts, and Rossi X-Ray Timing Explorer (RXTE) All Sky Monitor detections. Many of the bursts which satisfy these criteria have also been observed by other GRB instruments, notably BATSE and Konus (see Table 1). In some, but not all cases, BATSE/Ulysses triangulation will result in somewhat more precise localizations than those presented here due to BATSE’s larger area and better statistics. We defer these results to another paper. ## 4 Observations and Results Table 1 lists the dates, times, Ulysses and GRBM data modes, and the other experiments which observed the bursts. Table 2 gives the right ascension and declination of the centers of the IPN annuli, their radii, and their 3 $`\sigma `$ half-widths. For those bursts for which an unambiguous counterpart was identified with greater precision than the annulus width, the last column in the table gives the angular distance $`\mathrm{\Delta }`$ between the counterpart and the centerline of the annulus, expressed as a number of $`\sigma `$ of the annulus width. Figures 1-16 show the localization maps. Figure 1 shows the IPN annulus and the BeppoSAX WFC error circle (Heise et al., 1998) for GRB970111. (All the WFC error circles in this paper are 99% confidence, or 2.58 $`\sigma `$ regions.) The NFI error circle indicates the position of a weak X-ray source. Since the source lies outside the annulus, this localization supports the hypothesis that the X-ray source and the burst are unrelated (Feroci et al. 1998). Two radio sources were detected in the vicinity of the WFC error box (Galama et al. 1997), one of which is shown in the figure (the other lies outside the boundaries). As neither displayed any fading behavior, they are not considered to be counterpart candidates. No fading optical sources were detected either. Figure 2 shows the BeppoSAX WFC and NFI error circles for GRB970228 (Heise et al., 1998; Costa et al., 1997). The IPN annulus was obtained from reprocessed Ulysses data, and differs very slightly from the one in Hurley et al. (1997). The position of the fading optical counterpart is also shown (van Paradijs et al. 1997). Figure 3 shows the IPN annulus for GRB970402. This burst was quite weak, and was only detected in the real-time, low resolution data mode of Ulysses; although the GRBM triggered on it, the trigger occurred late in the event and we have used the real-time data. A fading X-ray source was found in a BeppoSAX NFI observation (Nicastro et al. 1998a), but no optical counterpart was ever detected. Ulysses and GRBM time histories of this event are shown in Figure 17. Figure 4 shows the IPN annulus, and the BeppoSAX WFC and NFI error circles for GRB970508 (Heise et al., 1998; Piro et al. 1998a). The position of the optical counterpart is also indicated (Djorgovski et al. 1997). This burst too was quite weak, and although it triggered the BeppoSAX GRBM, it was only observed in the real-time data of Ulysses. Figure 5 gives the IPN annulus and the RXTE error box for GRB970815 (Smith et al. 1999). The RXTE error box is from the All Sky Monitor, and is defined by the response functions of two crossed collimators; the box therefore has equal probability per unit area everywhere. A slightly more precise IPN annulus, derived from BATSE and Ulysses, appears in Smith et al. (1999). Figure 6 shows the IPN annulus, and the BeppoSAX WFC (Heise et al. 1997) and NFI (Antonelli et al.
1997) error circles for GRB971214. The position of the optical counterpart (Halpern et al. 1998) is also shown. IPN and RXTE locations appeared in Kippen et al. (1997). Figure 7 gives the IPN annulus and the BeppoSAX WFC (Coletta et al. 1997) error circle for GRB971227. The burst was weak and was detected only marginally by Ulysses, and was detected in the real-time data of the GRBM. Consequently the annulus is subject to rather large systematic uncertainties. A weak X-ray source detected at the 4 $`\sigma `$ level in an NFI observation has been proposed as the fading X-ray counterpart by Piro et al. (1997) and Antonelli et al. (1999). No radio or optical counterpart was identified. The WFC error box is large due to poor attitude reconstruction for this event. Figure 8 gives the IPN annulus and the BeppoSAX WFC (in ’t Zand et al. 1998a) error circle for GRB980109; no NFI observation was carried out, due to the relatively large uncertainty in the WFC localization (again due to poor attitude reconstruction). A possible optical counterpart was initially identified, but is no longer considered to be related to the GRB (E. Pian and T. Galama, private communication). It lay within the preliminary IPN annulus, but it lies just outside the final one. Figure 9 shows the IPN annulus and the BeppoSAX WFC error circle (Celidonio et al. 1998) for GRB980326. No NFI observations were carried out, but Groot et al. (1998) identified an optical transient in the WFC error circle. A preliminary IPN annulus appeared in Hurley (1998a). Figure 10 gives the IPN annulus, and the BeppoSAX WFC (Frontera et al. 1998) and NFI (in ’t Zand et al. 1998b) error circles for GRB980329. A radio counterpart was identified by Taylor et al. (1998), and an optical counterpart was found at the same position by Palazzi et al. (1998). A preliminary IPN annulus appeared in Hurley (1998b). Figure 11 shows the IPN annulus, and the BeppoSAX WFC (Soffitta et al. 1998) and the two revised NFI (Piro et al. 1998b) error circles for GRB980425. The position of source 1 is consistent with that of the unusual supernova 1998bw (Galama et al. 1998a). However, because the burst was weak, it was detected only in the Ulysses real-time data, and the IPN annulus is wide; it cannot be used to determine which NFI source is associated with the GRB. Figure 12 shows the IPN annulus, the BeppoSAX WFC (Muller et al. 1998) and NFI (Nicastro et al. 1998b) error circles, and the position of the optical transient (Hjorth 1998) for GRB980519. Figure 13 shows the IPN annulus, the RXTE error box (Smith et al. 1999), the NFI source location (Vreeswijk et al. 1999) and the position of the optical and radio counterparts (Bloom et al. 1998) for GRB980703. This annulus is consistent with, but narrower than, the initial BATSE/Ulysses annulus (Hurley and Kouveliotou 1998). As for GRB970815, the RXTE error box is from the All Sky Monitor, and is defined by the response functions of two crossed collimators; the box therefore has equal probability per unit area everywhere. Figure 14 shows the IPN annulus and the RXTE/ASM error box (Smith et al. 1999) for GRB981220. A radio source was proposed as a possible counterpart (Galama et al. 1998b, Frail & Kulkarni 1998), but is no longer thought to be related to the GRB (Frail, Kulkarni & Taylor 1999; Hurley & Feroci 1999). A preliminary annulus has appeared in Hurley et al. (1999). Figure 15 shows the IPN annulus, the BeppoSAX WFC (Feroci et al. 1999) and NFI (Heise et al.
1999) error circles, and the position of the optical transient (Akerlof 1999) for GRB990123. A preliminary IPN annulus was circulated in Hurley (1999). Figure 16 shows the IPN annulus, the BeppoSAX WFC (Dadina et al. 1999) and NFI (Kuulkers et al. 1999) error circles, and the position of the optical transient (Galama et al. 1999) for GRB990510. We have selected the light curves of two bursts which were not observed by BATSE for display in Figures 17 and 18. These show the GRBM and Ulysses real-time and triggered data. ## 5 Discussion The BeppoSAX GRBM is clearly a sensitive burst detector which makes an important contribution to the IPN by providing high quality data for events which are not observed by BATSE. This is the case for three of the bursts discussed here. (This number is smaller than would have been predicted, based on a probability of 38% that BATSE will detect any burst above its threshold (Meegan et al. 1996); however, some of the events in this paper were in effect selected because of the knowledge of their detection by BATSE.) We have demonstrated that, despite the very different properties of the GRBM and the Ulysses GRB experiment, very accurate triangulations can be done. In the case of bright bursts, these annuli can be used to reduce or further constrain the WFC and NFI error boxes. KH is grateful to JPL for Ulysses support under Contract 958056, and to the NASA Astrophysics Data Program for supporting the integration of BeppoSAX into the IPN under NAG5-7766. BeppoSAX is a program of the Italian Space Agency, with participation of NIVR, the Dutch Space Agency.
# Progress in K→ππ Decays Work supported in part by TMR, EC-Contract No. ERBFMRX-CT980169 (EURODAΦNE). Talk given at PANIC99, Uppsala, 10-16 June 1999. LU TP 99-18, hep-ph/9907307, July 1999 ## 1 Introduction The qualitative feature that $`\mathrm{\Gamma }(K^0\to \pi ^0\pi ^0)\gg \mathrm{\Gamma }(K^+\to \pi ^+\pi ^0)`$ is one of the oldest problems in kaon decays that is not fully understood quantitatively. This is known as the $`\mathrm{\Delta }I=1/2`$ rule. The isospin-2 final state amplitude $`A_2`$ is much smaller than the isospin-0 amplitude $`A_0`$. Experimentally we have $`|A_0/A_2|=22.1`$; the precise definition used here can be found in , and a review of kaon physics is in . More references can be found in either of these two. The underlying standard model process is the exchange of a $`W`$-boson, but due to the large difference between the kaon and $`W`$ masses very large corrections can come into play and even normally suppressed contributions can be enhanced by large factors $`\mathrm{ln}(m_W^2/m_K^2)\approx 10`$. At the same time, at low energies the strong interaction coupling $`\alpha _S`$ becomes very large, which requires us to use non-perturbative methods at those scales. The resummation of large logarithms at short distance can be done using renormalization group methods. At a high scale the exchange of $`W`$-bosons is replaced by a sum over local operators. For weak decays these start at dimension 6. The scale can then be lowered using the renormalization group. The short-distance running is now known to two loops (NLO), which sums the $`\left(\alpha _S\mathrm{ln}(m_W/\mu )\right)^n`$ and $`\alpha _S\left(\alpha _S\mathrm{ln}(m_W/\mu )\right)^n`$ terms. A review of this can be found in the lectures by A. Buras . The major remaining problem is to calculate the matrix elements of the local operators at some low scale. I will address some progress on this issue in this talk. The main method was originally proposed in Ref. , arguing that $`1/N_c`$ counting could be used to systematically calculate the matrix elements. Various improvements have since been introduced. The correct momentum routing was introduced in . The use of the extended Nambu-Jona-Lasinio model as an improved low energy model was introduced for weak matrix elements in , and a short discussion of its major advantages and disadvantages can be found in . The results obtained were encouraging but a major problem remained. At NLO the short-distance running becomes dependent on the precise definition of the local operators. This dependence should also be reflected in the calculation of the matrix elements, as should a correct identification of the renormalization group scale in the matrix element calculation. The more precise interpretation of the scheme of , introduced in , was shown there at one loop to satisfy the second criterion. Here I present in the next section how this method satisfies it at NLO as well, and how it solves the first problem too. We call this method the $`X`$-boson method. The third section describes the numerical results we obtained in . Other recent work on matrix elements is that of and , using the $`1/N_c`$ method as well. A more model dependent approach is . ## 2 The $`X`$-boson method The basic underlying idea is that we know how to hadronize currents, or at least that this is a tractable problem. So we replace the effect of the local operators of $`H_W(\mu )=\sum _iC_i(\mu )Q_i(\mu )`$ at a scale $`\mu `$ by the exchange of a series of colourless $`X`$-bosons at a low scale $`\mu `$.
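Before describing the matching, it may help to sketch what the short-distance running referred to above does in the simplest case. The Python fragment below is illustrative only: it uses one-loop running with the number of flavours fixed at 3, an assumed Λ_QCD of 0.35 GeV, and the textbook LO anomalous dimensions γ±⁽⁰⁾ = ±6(N∓1)/N for the current-current operators Q± (see the Buras lectures cited above); the NLO machinery actually used in this work is considerably more involved. The scale μ appearing in H_W(μ) is where this evolution is stopped:

```python
import numpy as np

def alpha_s(mu, nf=3, lam=0.35):
    """One-loop strong coupling; lam is Lambda_QCD in GeV (assumed value)."""
    b0 = 11.0 - 2.0*nf/3.0
    return 4.0*np.pi / (b0 * 2.0*np.log(mu/lam))

def C_pm(mu, nf=3, mw=80.4):
    """LO Wilson coefficients C+(mu), C-(mu) with boundary C+- (MW) = 1."""
    b0 = 11.0 - 2.0*nf/3.0
    gp, gm = 4.0, -8.0        # gamma_+^(0) = +6*2/3, gamma_-^(0) = -6*4/3 for N=3
    ratio = alpha_s(mw, nf) / alpha_s(mu, nf)
    return ratio**(gp/(2.0*b0)), ratio**(gm/(2.0*b0))

for mu in (2.0, 1.0, 0.8):
    cp, cm = C_pm(mu)
    print(f"mu = {mu:.1f} GeV: C+ = {cp:.2f}, C- = {cm:.2f}")
```

The output reproduces the familiar pattern C− ≈ 2, C+ ≈ 0.7 near 1 GeV: the octet combination is enhanced at low scales, a partial short-distance origin of the ΔI = 1/2 rule, though far short of the full factor observed; the remainder must come from the matrix elements discussed next.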
The scale $`\mu `$ should be such that the $`1/N_c`$ suppressed contributions no longer have large logarithmic corrections. Let me illustrate the procedure in a simpler case of only one operator and neglecting penguin contributions. In the more general case all coefficients become matrices. $$C_1(\mu )(\overline{s}_L\gamma _\mu d_L)(\overline{u}_L\gamma ^\mu u_L)\to X_\mu \left[g_1(\overline{s}_L\gamma ^\mu d_L)+g_2(\overline{u}_L\gamma ^\mu u_L)\right].$$ (1) Summation over colour indices is understood inside the brackets. We now determine $`g_1`$, $`g_2`$ as functions of $`C_1`$. This is done by equating matrix elements of $`C_1Q_1`$ with the equivalent ones of $`X`$-boson exchange. The matrix elements are at the scale $`\mu `$ chosen such that perturbative QCD methods can still be used and thus we can use external states of quarks and gluons. To lowest order this is simple. The tree level diagram from Fig. 1(a) is set equal to that of Fig. 1(b) leading to $$C_1=\frac{g_1g_2}{M_X^2}.$$ (2) At NLO diagrams like those of Fig. 1(c) and 1(d) contribute as well, leading to $$C_1\left(1+\alpha _S(\mu )r_1\right)=\frac{g_1g_2}{M_X^2}\left(1+\alpha _S(\mu )a_1+\alpha _S(\mu )b_1\mathrm{log}\frac{M_X^2}{\mu ^2}\right).$$ (3) At this level the scheme-dependence disappears. The left-hand-side (lhs) is scheme-independent. The right-hand-side can be calculated in a very different renormalization scheme from the lhs. The infrared dependence of $`r_1`$ is present in precisely the same way in $`a_1`$, such that $`g_1`$ and $`g_2`$ are scheme-independent and independent of the precise infrared definition of the external state in Fig. 1. One step remains: we now have to calculate the matrix element of $`X`$-boson exchange between meson external states. The integral over $`X`$-boson momenta we split in two $$\int _0^{\mathrm{}}dp_X\frac{1}{p_X^2+M_X^2}=\int _0^{\mu _1}dp_X\frac{1}{p_X^2+M_X^2}+\int _{\mu _1}^{\mathrm{}}dp_X\frac{1}{p_X^2+M_X^2}.$$ (4) The second term involves a high momentum that needs to flow back through quarks or gluons and leads through diagrams like the one of Fig. 1(c) to a four-quark operator with a coefficient $$\frac{g_1g_2}{M_X^2}\left(\alpha _S(\mu _1)a_2+\alpha _S(\mu _1)b_1\mathrm{log}\frac{M_X^2}{\mu _1^2}\right).$$ (5) The four-quark operator thus needs to be evaluated only in leading order in $`1/N_c`$. The first term we have to evaluate in a low-energy model with as much QCD input as possible. The $`\mu _1`$ dependence cancels between the two terms in (4) if the low-energy model is good enough, and all dependence on $`M_X^2`$ cancels out to the order required as well. Calculating the coefficients $`r_1`$, $`a_1`$ and $`a_2`$ gives the required correction to the naive factorization method as used in previous $`1/N_c`$ calculations. It should be stressed that in the end all dependence on $`M_X`$ cancels out. The $`X`$-boson is a purely technical device to correctly identify the four-quark operators in terms of well-defined products of nonlocal currents. ## 3 Numerical results We now use the $`X`$-boson method with $`r_1`$ as given in , $`a_1=a_2=0`$ (the calculation of the latter is in progress), and $`\mu =\mu _1`$. For $`B_K`$ we can extrapolate to the pole for the real case ($`\widehat{B}_K`$) and in the chiral limit ($`\widehat{B}_K^\chi `$), and for $`K\to \pi \pi `$ we can get at the values of the octet ($`G_8`$), weak mass term ($`G_8^{\prime}`$) and 27-plet ($`G_{27}`$) coupling.
We obtain $$\widehat{B}_K=0.69\pm 0.10;\widehat{B}_K^\chi =0.25\text{-}0.40;G_8=4.3\text{-}7.5;G_{27}=0.25\text{-}0.40\text{ and }G_8^{\prime}=0.8\text{-}1.1,$$ (6) to be compared with the experimental values $`G_8\approx 6.2`$ and $`G_{27}\approx 0.48`$ . In Fig. 2 the $`\mu `$ dependence of $`G_8`$ is shown, and in Fig. 3 the contributions from the various different operators. ## 4 Conclusions I showed how the $`X`$-boson method allows a correct treatment of the NLO scheme dependence, and that using this method with the ENJL model at low energies reproduces the $`\mathrm{\Delta }I=1/2`$ rule quantitatively without any free parameters.
## 1 Introduction We present a search for a di-photon resonance produced in $`\mathrm{e}^+\mathrm{e}^{-}`$ collisions at LEP. The data were taken by the OPAL detector at centre-of-mass energies $`E_{\mathrm{cm}}`$ up to 189 GeV. The search is sensitive to the process $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y`$, with $`\mathrm{X}\to \gamma \gamma `$ and $`\mathrm{Y}\to \mathrm{f}\overline{\mathrm{f}}`$, where $`\mathrm{f}\overline{\mathrm{f}}`$ may be quarks, charged leptons, or a neutrino pair. In a Standard Model scenario, Y is a $`\mathrm{Z}^0`$ and X is a Higgs boson decaying into two photons. A more general search is achieved by removing the restriction that Y is a $`\mathrm{Z}^0`$. In the minimal Standard Model, the single Higgs boson can decay into two photons via a quark- or W-boson loop . The rate is too small for observation at existing accelerators even for a kinematically accessible Higgs boson, but other theoretical models can accommodate large $`\mathrm{h}^0\to \gamma \gamma `$ branching ratios . Particularly interesting are non-minimal Higgs sectors wherein some Higgs components couple only to bosons . This class of “fermiophobic” Higgs models includes the “Bosonic” Higgs model , and Type I Two-Higgs Doublet models with fermiophobic couplings . In Higgs triplet models , the particles formed from the triplet fields are fermiophobic. There are existing limits on the production of a di-photon resonance which couples to the $`\mathrm{Z}^0`$. Using data taken up to $`E_{\mathrm{cm}}`$=183 GeV, OPAL has set upper limits on the branching ratio $`\mathrm{h}^0\to \gamma \gamma `$ for masses up to 92 GeV and obtained a 95% confidence level (CL) lower mass limit of 90.0 GeV for a fermiophobic Higgs scalar. Other collaborations have recently reported limits on photonic Higgs boson decays. The lower mass region ($`M_{\gamma \gamma }`$$`<`$ 60 GeV) has been searched previously using data from LEP-I . ## 2 Data and Monte Carlo Samples The analysis is performed on the data collected with the OPAL detector during the 1998 LEP run. The data sample corresponds to an integrated luminosity of $`182.6\pm 0.8`$ pb⁻¹ collected at a luminosity-weighted $`E_{\mathrm{cm}}`$ of $`188.63\pm 0.04`$ GeV. To assess the sensitivity of the analysis to signals, two production models are considered: the Standard Model process $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{Z}^0`$, and Two Higgs Doublet models (2HDM) for $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{A}^0`$. The process $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{Z}^0`$, $`\mathrm{h}^0\to \gamma \gamma `$ was simulated for each $`\mathrm{Z}^0`$ decay channel using the HZHA generator . For the general search, mass grids were generated using the $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{Z}^0`$ and $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{A}^0`$ processes as models for $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y,\mathrm{X}\to \gamma \gamma ,\mathrm{Y}\to \mathrm{f}\overline{\mathrm{f}}`$. The dominant background to this search arises from the emission of two energetic initial state radiation (ISR) photons in hadronic events from $`\mathrm{e}^+\mathrm{e}^{-}\to (\gamma /\mathrm{Z})^{*}\to \mathrm{q}\overline{\mathrm{q}}`$. This process was simulated using the KK2f generator using CEEX ISR modelling, and with the set of hadronization parameters described in reference .
Other Standard Model backgrounds, particularly those from 4-fermion processes, primarily affect the leptonic and missing energy modes of the search. Four-fermion processes were modelled using the Vermaseren and grc4f generators implemented in the KORALW Monte Carlo program. The programs BHWIDE and TEEGG were employed to model the s- and t-channel backgrounds from Bhabha scattering. The processes $`\mathrm{e}^+\mathrm{e}^{-}\to \ell ^+\ell ^{-}`$ with $`\ell =\mu ,\tau `$ were simulated using KORALZ . The KORALZ program was also used to generate events of the type $`\mathrm{e}^+\mathrm{e}^{-}\to \nu \overline{\nu }\gamma (\gamma )`$. The process $`\mathrm{e}^+\mathrm{e}^{-}\to \gamma \gamma `$ was simulated using the RADCOR generator . Simulated events were processed using the full OPAL detector Monte Carlo and analyzed in the same manner as the data. ## 3 Event Selection In our searches for the generic process $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y`$, we consider three event topologies which are motivated mainly by the particular case where X is a generic Higgs boson decaying into a pair of photons, and Y is the $`\mathrm{Z}^0`$ boson decaying either into (1) a $`\mathrm{q}\overline{\mathrm{q}}`$ pair, or (2) a pair of oppositely charged leptons, or (3) a $`\nu \overline{\nu }`$ pair. The search topologies are therefore: * Two energetic photons recoiling against a hadronic system. * Two energetic photons produced in association with charged leptons. * Two energetic photons and no other significant detector activity. In the $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ search, the mass recoiling from the di-photon system is required to be consistent with the $`\mathrm{Z}^0`$ mass for all topologies, while in the general search this condition is not required. A background common to all search modes arises from events with two visible ISR photons, resulting in an on-shell $`\mathrm{Z}^0`$ recoiling from a di-photon system. The selection criteria employed in this search are very similar to those described in reference . For all topologies, charged tracks (CT) and unassociated electromagnetic calorimeter (EC) clusters are required to satisfy the criteria defined in reference . “Unassociated” EC clusters are defined by the requirement that no charged tracks point to the cluster. For each channel, preselection cuts are applied which employ the following measured quantities: * $`E_{\mathrm{vis}}`$ and $`\vec{p}_{\mathrm{vis}}`$: the scalar and vector sums of charged track momenta, unassociated EC and unassociated hadron calorimeter cluster energies. * $`R_{\mathrm{vis}}\equiv \frac{E_{\mathrm{vis}}}{E_{\mathrm{cm}}}`$. * Visible momentum along the beam direction: $`|\mathrm{\Sigma }p_z^{\mathrm{vis}}|`$, where all tracks and unassociated clusters are summed over. ### 3.1 Photon Pair Identification After channel-dependent preliminary cuts based on data quality and rudimentary event topology (described in the next sections), events are required to have a photon pair satisfying several criteria. Photon identification is accomplished by identifying clusters in the electromagnetic calorimeter. These EC clusters are combined with the information from the tracking detectors to identify photon candidates if the lateral spread of the clusters satisfies the criteria described in reference . The photon detection efficiency is increased by including photon conversions into $`\mathrm{e}^+\mathrm{e}^{-}`$ pairs using the methods described in reference .
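A minimal sketch of the photon-pair acceptance logic of Section 3.1, stated as code, is given below. The data layout (energy, cos θ pairs) and the helper name are hypothetical, and the ordering of fiducial versus energy cuts is one possible reading of the published criteria:

```python
def photon_pair_ok(photons, e_beam=94.5):
    """photons: iterable of (E_gamma [GeV], cos_theta) candidates.
    Keeps fiducial photons (|cos theta| < 0.875) and demands the two most
    energetic satisfy E/E_beam > 0.10 and > 0.05 respectively."""
    cands = sorted((p for p in photons if abs(p[1]) < 0.875), reverse=True)
    if len(cands) < 2:
        return False
    return cands[0][0] > 0.10 * e_beam and cands[1][0] > 0.05 * e_beam

print(photon_pair_ok([(20.0, 0.3), (7.0, -0.5), (3.0, 0.9)]))  # -> True
```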
The most significant background to all search channels comes from processes having two ISR photons. Photons from ISR are peaked along the beam direction, hence cutting on the photon polar angle $`|\mathrm{cos}(\theta _\gamma )|<0.875`$ is very effective in reducing the background acceptances without significantly decreasing the efficiencies for potential signals. Figure 1 shows the distribution of $`E_{\gamma 1}`$, the highest-energy photon, in events having hadronic activity (criterion A1 described below), where at least one photon has $`E_\gamma >0.05\times E_{\mathrm{beam}}`$ and $`|\mathrm{cos}(\theta _\gamma )|<0.875`$. Also shown is the simulated Standard Model background. The overall number of background photons, their energies, and their polar angle distribution (not shown) describe the data to better than 10%. The photon pair acceptance criteria are thus summarized by the following requirements on the two highest-energy photons in the event: * The two photon candidates are required to be in the fiducial region $`|\mathrm{cos}(\theta _\gamma )|<0.875`$, where $`\theta _\gamma `$ is the angle of the photon with respect to the $`\mathrm{e}^{-}`$ beam direction. * The higher energy photon is required to have $`E_\gamma /E_{\mathrm{beam}}>0.10`$ and the second-highest-energy photon is required to have $`E_\gamma /E_{\mathrm{beam}}>0.05`$. After the final channel cuts described in the next sections, there are no events in which more than one photon pair is found. ### 3.2 Hadronic Channel The hadronic channel is characterized by two photons recoiling against a hadronic system. In addition to double ISR, backgrounds also arise from radiative $`\mathrm{Z}^0\gamma `$ events where a decay product of the $`\mathrm{Z}^0`$, such as an isolated $`\pi ^0`$ or $`\eta `$ meson, mimics a photon, or there is an energetic final state radiation (FSR) photon. In these cases, the recoil mass against the di-photon system will tend to be lower than the $`\mathrm{Z}^0`$ mass; therefore, this background can be suppressed by requiring a recoil mass consistent with that of the $`\mathrm{Z}^0`$. In the general search for XY $`\to `$ $`\gamma \gamma +\mathrm{hadrons}`$, there is no recoil mass constraint to help suppress backgrounds from fake photons. The hadronic channel candidate selection is summarized in Table 1. Candidate events are required to satisfy the following criteria: * The standard hadronic event preselection described in reference with the additional requirements: + $`R_{\mathrm{vis}}>0.5`$ and $`|\mathrm{\Sigma }p_z^{\mathrm{vis}}|<0.6E_{\mathrm{beam}}`$; + at least 2 electromagnetic clusters with $`E/E_{\mathrm{beam}}>0.05`$. * The photon pair criteria described in Section 3.1. * To suppress the background from FSR and fake photons, the charged tracks and unassociated clusters were grouped into two jets using the Durham scheme , excluding the photon candidates. Both photon candidates are then required to satisfy $`p_{\mathrm{T},\mathrm{jet}\gamma }>5`$ GeV, where $`p_{\mathrm{T},\mathrm{jet}\gamma }`$ was defined as the photon momentum transverse to the axis defined by the closest jet. In the case of double ISR emission in the hadron channel, the photons tend to have a large energy difference.
We therefore employ the quantity $`\mathrm{\Delta }E=(E_{\gamma 1}-E_{\gamma 2})/E_o`$, where $`E_{\gamma 1}`$ and $`E_{\gamma 2}`$ are the first- and second-highest-energy photons in the event, and $`E_o=(E_{\mathrm{cm}}^2-M_\mathrm{Z}^2)/(2E_{\mathrm{cm}})`$ is the energy of a single photon recoiling from the $`\mathrm{Z}^0`$. * $`\mathrm{\Delta }E<0.5`$. * For the $`\mathrm{h}^0\mathrm{Z}^0`$ topology, the invariant mass recoiling from the di-photon must satisfy $`|M_{\mathrm{recoil}}-M_\mathrm{Z}|<20`$ GeV. For the general search topology, where no explicit recoil mass cut is made, 16 events are observed, while $`17.4\pm 1.7`$ are expected from Standard Model backgrounds. The uncertainty shown is for Monte Carlo statistics only (unless otherwise specified, all errors quoted are statistical only). After applying the cut (A5) on the recoil mass, 10 events remain, compared to an expectation of $`9.0\pm 1.3`$ events. The efficiencies for this analysis to accept events for Higgs masses of 30 to 100 GeV are shown in Table 4. ### 3.3 Charged Lepton Channel This channel searches for events in the $`\gamma \gamma \ell ^+\ell ^{-}`$ final state. Even in the case of $`\ell =\tau `$, this channel has a very clean signature, and therefore only one selection procedure is required for the $`\mathrm{e},\mu `$ and $`\tau `$ channels. Charged leptons are identified as low multiplicity jets formed from charged tracks and isolated EC clusters. A high efficiency is maintained for $`\tau `$ leptons by allowing single charged tracks to define a “jet” without requiring explicit lepton identification. This channel is sufficiently free of background to allow acceptance of events where one of the charged leptons was not reconstructed or was lost in uninstrumented regions of the detector. The most serious background comes from Bhabha scattering with initial and/or final state radiation. The leptonic channel event selection is summarized in Table 2. Leptonic channel candidates are required to satisfy the following criteria: * The low multiplicity preselection of reference and: + $`R_{\mathrm{vis}}>0.2`$ and $`|\mathrm{\Sigma }p_z^{\mathrm{vis}}|<0.8E_{\mathrm{beam}}`$; + number of EC clusters not associated with tracks: $`N_{\mathrm{EC}}\le 10`$; + number of charged tracks: $`1\le N_{\mathrm{CT}}\le 7`$; + at least 2 electromagnetic clusters with $`E/E_{\mathrm{beam}}>0.05`$. * The photon pair criteria described in Section 3.1. * For events having only one charged track, require: + the track not to be associated with a converted photon; + the track to have momentum satisfying $`p>0.2E_{\mathrm{beam}}`$; + direction of event missing momentum: $`|\mathrm{cos}\theta _{\mathrm{miss}}|>0.90`$. * For events having two or more charged tracks, the event is forced to have 2 jets within the Durham scheme, excluding the identified di-photon candidate. * For the $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ search, the recoil mass to the di-photon is required to be consistent with the $`\mathrm{Z}^0`$: $`|M_{\mathrm{recoil}}-M_\mathrm{Z}|<20`$ GeV. For the general search, 20 events survive cuts B1-B4, compared to $`25.6\pm 1.6`$ expected from Standard Model backgrounds. After the recoil mass requirement, the number of observed events is 7, with the background expectation of $`8.9\pm 1.0`$. The efficiencies for Higgs masses of 30 to 100 GeV are given in Table 4.
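To make the kinematics behind the ΔE and recoil-mass cuts concrete: in the e⁺e⁻ centre-of-mass frame, four-momentum conservation gives M²_recoil = E²_cm − 2E_cm E_γγ + M²_γγ. The sketch below (with purely illustrative photon four-vectors, not events from this analysis) evaluates E_o and the recoil mass at E_cm = 189 GeV:

```python
import numpy as np

ECM = 189.0   # centre-of-mass energy, GeV
MZ = 91.19    # Z0 mass, GeV

# single-ISR reference energy used in the Delta-E variable
E_o = (ECM**2 - MZ**2) / (2.0 * ECM)

def recoil_mass(p4_gamma1, p4_gamma2):
    """Mass recoiling against two photons; four-vectors as (E, px, py, pz)."""
    E = p4_gamma1[0] + p4_gamma2[0]
    p = np.add(p4_gamma1[1:], p4_gamma2[1:])
    m2_gg = E**2 - np.dot(p, p)
    m2_rec = ECM**2 - 2.0 * ECM * E + m2_gg
    return np.sqrt(max(m2_rec, 0.0)), np.sqrt(max(m2_gg, 0.0))

# toy massless photons, energies in GeV -- illustrative only
g1 = np.array([60.0, 50.0, 20.0, np.sqrt(60.0**2 - 50.0**2 - 20.0**2)])
g2 = np.array([45.0, -30.0, -25.0, np.sqrt(45.0**2 - 30.0**2 - 25.0**2)])
m_rec, m_gg = recoil_mass(g1, g2)
print(f"E_o = {E_o:.1f} GeV, M_gg = {m_gg:.1f} GeV, M_recoil = {m_rec:.1f} GeV")
```

Note that E_o evaluates to about 72.5 GeV at these energies, which is why the photon energy scale is quoted below for 72 GeV photons at the single-ISR peak.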
### 3.4 Missing Energy Channel The missing energy channel is characterized by two photons and no other significant detector activity. An irreducible Standard Model background is the process $`\mathrm{e}^+\mathrm{e}^{}\nu \overline{\nu }\gamma \gamma `$. Other potential backgrounds include $`\mathrm{e}^+\mathrm{e}^{}\gamma \gamma (\gamma )`$ and radiative Bhabha scattering with one or more unobserved electrons. These backgrounds tend to produce photons near the beam directions; therefore, they can be effectively dealt with by the restriction on the polar angles of the two photons and by requiring consistency with a di-photon recoiling from a massive object. The event selection for the missing energy channel is summarized in Table 3. Candidates in the missing energy channel are required to satisfy the following criteria: * The low multiplicity preselection of reference with the further requirement that the event satisfy the cosmic ray and beam-wall/beam-gas vetoes described in reference , and: + number of EC clusters not associated with tracks: $`N_{\mathrm{EC}}4`$; + number of charged tracks: $`N_{\mathrm{NCT}}3`$; + $`|\mathrm{\Sigma }p_z^{\mathrm{vis}}|<0.8E_{\mathrm{beam}}`$; + at least 2 electromagnetic clusters with $`E/E_{\mathrm{beam}}>0.05`$. * The photon pair criteria described in Section 3.1. * Consistency with the hypothesis that the di-photon system is recoiling from a massive body: + The momentum component of the di-photon system in the plane transverse to the beam axis: $`p_T(\gamma \gamma )>0.05E_{\mathrm{b}eam}`$. + The angle between the two photons in the plane transverse to the beam axis: $`|\varphi _{\gamma \gamma }180^{}|>2.5^{}`$. + The polar angle of the momentum of the di-photon system: $`|\mathrm{cos}\theta _{\gamma \gamma }|<0.966`$. * Events are required to have no charged track candidates (other than those associated with an identified photon conversion). * Veto on unassociated calorimeter energy: the energy observed in the electromagnetic calorimeter not associated with the 2 photons is required to be less than 3 GeV. * For the $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ search, the recoil mass against the di-photon is required to be consistent with the $`\mathrm{Z}^0`$: $`|M_{\mathrm{recoil}}M_\mathrm{Z}|<20`$ GeV. The number of events passing the general cuts C1-C5 is 8, compared to the Standard Model background expectation of $`11.2\pm 0.5`$. After application of the recoil mass cut (C6), 5 candidates remain compared to an expectation of $`7.1\pm 0.3`$ events from Standard Model sources. The efficiencies for Higgs masses from 30 to 100 GeV are summarized in Table 4. ### 3.5 Systematic Errors The dominant systematic uncertainty for acceptances arises from the photon detection efficiency, primarily due to the simulation of the photon isolation criteria . This uncertainty is estimated to be 3% of the acceptance from comparison of data with Standard Model backgrounds. Photon energies and angles are well measured and consequently lead to a systematic uncertainty on the efficiencies of 0.6%, as determined from the measured di-photon recoil mass distribution. The systematic error on the integrated luminosity of the data is 0.4% and contributes negligibly to the limits. The uncertainty from simulation Monte Carlo statistics is typically better than 4%. 
From the differences observed in the comparison of data and simulations of Standard Model backgrounds (particularly the KK2f modelling of ISR), the systematic uncertainty for backgrounds is taken to be 10%; this value is subtracted from the predicted background in the setting of limits. A systematic error on the photon energy scale is estimated to be 0.25 GeV for 72 GeV photons using the fitted single-photon ISR peak in Figure 1 compared to the expected value based on the precisely known beam energy and $`\mathrm{Z}^0`$ mass. This leads to a systematic uncertainty on the di-photon mass of 0.35 GeV at a mass of 100 GeV. The background events in the missing energy channel include a component from Compton scattering in the beams, which is modelled by the TEEGG Monte Carlo. The photons from this process have a high probability to be found near the cut on polar angle. The photon energy uncertainty is rather large (5-9 GeV) in these regions because of the corrections for passage through significant material. ## 4 Results Figure 2 shows the di-photon mass versus the recoil mass for all candidate events passing the general search cuts. The distribution of di-photon masses for the $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ search candidates is shown in Figure 3, together with the simulation of Standard Model backgrounds. Combining all three general search channels results in 44 observed events versus $`54.2\pm 2.5`$ expected from Standard Model sources. Summing over all three $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ channel decay modes and expected background sources yields $`25.0\pm 1.7`$ events expected versus 22 observed. ### 4.1 General Search Results Using only the data taken at $`E_{\mathrm{cm}}`$=189 GeV, we set limits for the production mode $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y`$, where X is any scalar resonance decaying into di-photons. The candidate events from the general search (no recoil mass cut) are used to set upper limits on $`\sigma (\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y)\times B(\mathrm{X}\to \gamma \gamma )\times B(\mathrm{Y}\to f\overline{f})`$. Such results are valid independent of the nature of Y, provided it decays to a fermion pair and has negligible width. The search is also restricted to X and Y masses above 10 GeV and below 180 GeV in order to allow the decay products to have sufficient energies and momenta to give reasonable search acceptances. The event candidates from the general search are used to calculate 95% CL upper limits on the number of events in 1 GeV \[$`M_\mathrm{X},M_\mathrm{Y}`$\] mass bins, where $`M_\mathrm{X}`$ corresponds to the di-photon mass and $`M_\mathrm{Y}`$ to the recoil mass. Efficiencies for signals were calculated using two grids of simulated signals which were interpolated from Monte Carlo samples generated in 10 GeV \[$`M_\mathrm{X},M_\mathrm{Y}`$\] steps using the $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{Z}^0`$ and the $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{A}^0`$ processes as models for the $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y\to \gamma \gamma +\mathrm{f}\overline{f}`$ final state. The grid was generated for X masses from 10 to 180 GeV and Y masses from 10 to 180 GeV such that $`M_\mathrm{X}+M_\mathrm{Y}>M_\mathrm{Z}`$. This latter constraint was motivated by the higher sensitivity of searches performed at $`E_{\mathrm{cm}}`$=$`M_\mathrm{Z}`$. For each \[$`M_\mathrm{X},M_\mathrm{Y}`$\] bin, the 95% CL upper limit on the number of signal events is computed using the frequentist method of reference .
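To illustrate the type of counting-experiment limit involved, the sketch below implements one standard background-subtracting frequentist recipe (the Helene prescription) for a Poisson process; the inputs are illustrative values, not numbers from this analysis, and the paper's cited method differs in detail:

```python
import math

def pois_cdf(n, lam):
    """Cumulative Poisson probability P(k <= n | mean lam)."""
    return math.exp(-lam) * sum(lam**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, b, cl=0.95):
    """95% CL upper limit on signal s with expected background b, solving
    P(k <= n | s+b) / P(k <= n | b) = 1 - cl (Helene 1983) by bisection."""
    target = (1.0 - cl) * pois_cdf(n_obs, b)
    lo, hi = 0.0, 50.0 + 5.0 * n_obs
    for _ in range(100):
        s = 0.5 * (lo + hi)
        if pois_cdf(n_obs, s + b) > target:
            lo = s
        else:
            hi = s
    return 0.5 * (lo + hi)

# convert an event limit to a cross-section limit (illustrative values)
n_obs, b, eff, lumi = 2, 1.5, 0.45, 182.6       # lumi in pb^-1
n95 = upper_limit(n_obs, b)
print(f"N95 = {n95:.2f} events -> sigma x BR < {1000.0 * n95 / (eff * lumi):.0f} fb")
```

The paper's actual procedure is more elaborate, as described next.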
This statistical procedure incorporates the di-photon mass resolution (typically less than 2 GeV for $`M_{\gamma \gamma }`$$`<`$100 GeV). The effect of the systematic error for efficiencies and background modelling is incorporated by reducing the subtracted background by the systematic uncertainty, but using an additional systematic uncertainty of 5% to account for interpolation error in the efficiency grid (especially near kinematic limits). Figure 4 shows the 95% CL upper limits on $`\sigma (\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y)\times B(\mathrm{X}\to \gamma \gamma )\times B(\mathrm{Y}\to f\overline{f})`$. To present the limits only as a function of $`M_\mathrm{X}`$, the figure shows the weakest limit obtained in each $`M_\mathrm{X}`$ bin as $`M_\mathrm{Y}`$ was scanned subject to the constraints mentioned above. For a scalar/vector hypothesis for X/Y, the efficiency is found to be the same to within 5% as that for a scalar/scalar hypothesis; the lower of these efficiencies is used in setting the limits. For the lepton search channel, the efficiency for Y $`\to \tau ^+\tau ^{-}`$ is used, as it turns out to have the lowest of the dilepton efficiencies. Cross section limits of 30 – 100 fb are obtained over $`10<M_\mathrm{X}<180`$ GeV. ### 4.2 Search for the Standard Model Higgs Boson The events passing all $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ cuts are used to set an upper limit on the di-photon branching ratio of a particle having the Standard Model Higgs boson production rate. For each 1 GeV di-photon mass bin, the 95% CL upper limit on the number of signal events is computed using the frequentist method and background subtraction as in the previous section, with the efficiencies now including the Standard Model $`\mathrm{Z}^0`$ branching fractions. Figure 5 shows the 95% CL upper limit for the di-photon branching ratio obtained by combining the $`E_{\mathrm{cm}}`$=189 GeV candidate events with those from OPAL searches at $`E_{\mathrm{cm}}`$=91–183 GeV , where the Standard Model $`\mathrm{h}^0\mathrm{Z}^{0(*)}`$ production cross section is assumed at each $`E_{\mathrm{cm}}`$. For masses lower than approximately 60 GeV, LEP-1 limits for $`B(\mathrm{h}^0\to \gamma \gamma )`$ have been inferred from references . The limits on $`B(\mathrm{h}^0\to \gamma \gamma )`$ are used to rule out Higgs bosons in certain non-minimal models. Shown in Figure 5 is the $`\mathrm{h}^0\to \gamma \gamma `$ branching ratio in the Standard Model computed using HDECAY with the fermionic couplings switched off. A 95% CL lower mass limit for such fermiophobic Higgs bosons is set at 96.2 GeV, where the predicted branching ratio crosses the upper-limit curve. ### 4.3 The Higgs Triplet Model It is possible that a non-minimal Higgs sector incorporates triplet fields; particles formed exclusively from such fields are fermiophobic. The minimal Higgs Triplet model (HTM) requires the inclusion of two triplet fields in order to have the $`\rho `$-parameter near unity. The model has 10 Higgs bosons in the form of a fiveplet ($`H_5`$), a threeplet ($`H_3`$), and two singlets ($`H_1`$). The $`H_5^0`$ and one of the singlets, $`H_1^{0\prime}`$, are formed from the triplet field, apart from possible mixing with doublet components. Akeroyd has shown that measurements constrain the mixing parameters so that the $`H_1^{0\prime}`$ is almost entirely fermiophobic, and therefore could be interpreted as the X in this search.
The process $`\mathrm{e}^+\mathrm{e}^{-}\to H_1^{0\prime}\mathrm{Z}^0`$ occurs at the Standard Model $`\mathrm{h}^0`$$`\mathrm{Z}^0`$ rate modified by the factor $`\frac{8}{3}\mathrm{sin}^2\theta _H`$, where the angle $`\theta _H`$ is a parameter of the model describing the mixing of the doublet and triplet fields. Limits on $`\theta _H`$ can therefore be inferred from Figure 5 by dividing the upper limit by the fermiophobic di-photon branching ratio. The limits on $`\theta _H`$ obtained from this experiment are more restrictive than limits inferred from the $`\mathrm{Z}^0`$ width up to an $`H_1^{0\prime}`$ mass of approximately 96 GeV. ## 5 Conclusions A search for the production of Higgs bosons and other new particles decaying to photon pairs has been performed using 182.6 pb⁻¹ of data taken at an average centre-of-mass energy of 188.6 GeV. Model independent upper limits are obtained on $`\sigma (\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{X}Y)\times B(\mathrm{X}\to \gamma \gamma )\times B(\mathrm{Y}\to f\overline{f})`$. Limits of 30 – 100 fb are obtained over $`10<M_\mathrm{X}<180`$ GeV, where $`10<M_\mathrm{Y}<180`$ GeV and $`M_\mathrm{X}+M_\mathrm{Y}>M_\mathrm{Z}`$, for Y either a scalar or vector particle, provided that the Y decays to a fermion pair. The results of this search have been combined with previous OPAL results to set limits on $`B(\mathrm{h}^0\to \gamma \gamma )`$ up to a Higgs boson mass of 100 GeV, provided the Higgs particle is produced via $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{h}^0\mathrm{Z}^0`$ at the Standard Model rate. A lower mass bound of 96.2 GeV is set at the 95% confidence level for Higgs particles which do not couple to fermions. Acknowledgements We particularly wish to thank the SL Division for the efficient operation of the LEP accelerator at all energies and for their continuing close cooperation with our experimental group. We thank our colleagues from CEA, DAPNIA/SPP, CE-Saclay for their efforts over the years on the time-of-flight and trigger systems which we continue to use. In addition to the support staff at our own institutions we are pleased to acknowledge the Department of Energy, USA, National Science Foundation, USA, Particle Physics and Astronomy Research Council, UK, Natural Sciences and Engineering Research Council, Canada, Israel Science Foundation, administered by the Israel Academy of Science and Humanities, Minerva Gesellschaft, Benoziyo Center for High Energy Physics, Japanese Ministry of Education, Science and Culture (the Monbusho) and a grant under the Monbusho International Science Research Program, Japanese Society for the Promotion of Science (JSPS), German Israeli Bi-national Science Foundation (GIF), Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie, Germany, National Research Council of Canada, Research Corporation, USA, Hungarian Foundation for Scientific Research, OTKA T-016660, T023793 and OTKA F-023259.
# Cosmic Microwave Background Observations in the Post-Planck Era ## I Scope of Report The Cosmic Microwave Background Future Missions Working Group (the authors of this report) was created by NASA headquarters to consider new missions to follow a successful Planck mission. The discussion was restricted to missions of cost greater than Explorer Class missions ($200M). This report presents the conclusions of the working group, arrived at after extensive discussions involving the CMB and high energy physics communities. This report is not intended to be a complete review of the literature on CMB science; a few references have been included, however, to serve as starting points for more thorough research. ## II Introduction The development of the hot-big-bang model for the early history of the universe is one of the crowning achievements of twentieth century science. It is remarkable that we may now understand what was going on in the universe just one second into the big bang expansion. A cornerstone in our understanding of modern cosmology has been the information gathered from measurements of the Cosmic Microwave Background (CMB) radiation. Just now, at the close of the twentieth century, an important piece of the big-bang puzzle is about to be put into place. Preliminary CMB results announced recently indicate that of the three possible geometries of the universe—open, flat, or closed—ours appears to be flat ($`\mathrm{\Omega }_{total}=1`$). At the same time, measurements of the flux from distant supernovae indicate that the universal expansion may actually be accelerating, that Einstein’s cosmological constant $`\mathrm{\Lambda }`$ may be significant. These are exciting times: measurements of the cosmological parameters are about to tell us the ultimate fate of the universe; meanwhile observations are confronting us with mysteries such as that of an accelerating universe. Soon, two CMB satellite missions already in the NASA and ESA pipelines (MAP and Planck) are expected to provide precise measurements of $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$. In addition, these experiments will rigorously test whether the large structures we find in the universe today, such as galaxy clusters and superclusters, grew from a nearly scale-invariant spectrum of tiny primordial density perturbations. They will test the idea that the largest structures in the universe began as minute quantum mechanical fluctuations during an inflationary epoch in the very early Universe. MAP and Planck are extraordinarily capable missions, but are they ultimate CMB missions? Are there important issues these two spacecraft do not address? That is the subject of this report. (For more on this issue, see also \[Halpern and Scott 1999\].) CMB images are maps of ancient temperature structure. A precise measurement of the CMB energy spectrum, carried out using the FIRAS instrument on COBE (\[Fixsen et al. 1994\]), showed that the energy spectrum closely follows a Planck curve. Because of this, the CMB is today widely accepted to be the thermal radiation relic of the hot big bang explosion. Under this interpretation we know that most cosmic background photons have traveled freely to us from a last scattering that occurred when the universe first became de-ionized, just 300,000 years after the start of the big bang explosion. The mapping of CMB sky structure therefore amounts to the measurement of small temperature variations that were present in the universe at that early time. 
CMB observations test gravitational collapse models of structure formation. Using DMR on COBE (\[Smoot et al. 1992\]; \[Kogut et al. 1995\]) the level of CMB sky structure has been measured from 10 to 180 degrees. The sky structure detected falls in line with measurements of large-scale structure in the distribution of galaxies in the universe today. It is therefore generally accepted that at ten degree scales the small temperature differences on the sky measured using CMB telescopes are the result of weak gravitational potential variations (and therefore density variations) in the early universe (\[Sachs and Wolfe 1967\]; \[Bennett et al. 1996\]; for a review, see \[White, Scott, and Silk 1994\]). These weak potential wells acted as seeds for the growth, by gravitational collapse, of the network of galaxy sheets and voids that we find in the universe today. Thus, CMB observations also provide a crucial test of the structure-growth-by-gravitational-collapse paradigm, a key tenet of cosmological theory. MAP and Planck should tell us the fate of the universe. As seen in Figure 1, at degree angular scales there is now good evidence that CMB sky structure is present at an amplitude about three times as large as that detected with COBE at large angular scales. That amplification is expected; it occurs naturally as a result of acoustic oscillations that took place while the universe was ionized, before age 300,000 years. These oscillations and the sky structure they produce are sensitive to the matter density in the universe today and to the other parameters of the cosmological world model. Measurement of degree angular scale CMB sky structure, at the sensitivity offered using the MAP and Planck spacecraft, is expected to provide, with percent-level precision, a measurement of the density of the universe $`\mathrm{\Omega }_{total}`$, the expansion rate H, the cosmological constant $`\mathrm{\Lambda }`$, as well as a measurement of the fractions of material present today in the forms of hot and cold dark matter. By accurately determining these cosmological parameters, CMB observations made with MAP and Planck have the potential to tell us the ultimate fate of the universe. Of course, MAP or Planck may find sky structure that is completely different from that expected. If this occurs, we may not be able to extract the cosmological parameters from the CMB data; instead, MAP and Planck will have presented for discussion an exciting new puzzle. ### Beyond MAP and Planck Following Planck there are two interesting new directions for CMB observations. A sensitive CMB polarization experiment can be used to detect the primordial gravitational waves produced during inflation. In addition, fine scale observations can provide images of the largest structures in the universe just as they were beginning to dissipate the heat of their gravitational collapse. Inhomogeneities in the universe that were present at age 300,000 years were themselves the result of earlier processes. By studying degree angular scale CMB structure we learn about processes, inflation for example, that may have produced gravitational potential perturbations. The careful study of CMB structure, in particular degree angular scale polarized sky structure, can be used to detect not just potential perturbations but also gravitational waves (\[Bond and Efstathiou 1984\]; \[Polnarev 1985\]; \[Kamionkowski, Kosowsky, and Stebbins 1997\]; \[Seljak and Zaldarriaga 1997\]; \[Keating et al. 1998\]).
1998\]) Unlike photons, these gravitational waves travel freely through the ionized early universe, so the study of CMB polarization may allow us to look much further back in time–and at much higher particle energies–than has been possible before. However, the sensitivity needed for these observations exceeds that attainable with the Planck satellite. As described in the Polarization section below, if a new instrument can be built with sufficient sensitivity and with sufficient control of systematic errors, the resulting observations have the potential to allow the study of physics at energy scales as high as $`10^{19}`$ GeV, far beyond the reach of the largest feasible particle accelerators (see, e.g., \[Kamionkowski and Kosowsky 1999\] for a recent review). On angular scales of one arcminute and finer, the CMB picks up an impression from the material it passes through. Since the CMB passed through the dark ages of cosmology, before stars or quasars formed, the study of arcminute scale CMB features is the only technique we know for viewing the very early processes of structure formation. Using the CMB as an intense, very high redshift backlight, we can study the early history of galaxy clusters and we can witness their gradual acceleration as they began to fall into the deepening potential wells around them. We should also be able to view directly, at the time of first formation, the precursors to the 100 Mpc scale sheets (e.g. the “Great Wall”) we find in the galaxy distribution today. No other known cosmological observable can provide such direct information on the great structures as they were forming. Because of the high angular resolution required, this work exceeds the capabilities of the MAP and Planck missions. These observations are described in the Fine Scale section below. ## III Polarization Polarization of the CMB contains a wealth of information on primordial perturbations that will not be provided by the temperature map. It was recognized early on that CMB sky structure should be polarized (\[Rees 1968\], \[Kaiser 1983\]), and theoretical studies of polarization continue. For a tutorial on CMB polarization see \[Hu and White 1997b\]. Using temperature information alone it will not be possible to confidently separate the three classes of initial perturbations: density, pressure, and gravitational waves. An example of this degeneracy is shown in Figure 2. Two simulated sky maps of temperature and polarization are shown. In one case the sky structure is due to gravitational waves, in the other case the structure is due to density perturbations. These two temperature maps differ by less than the cosmic variance so no CMB temperature observation can ever distinguish between them. Pressure perturbations can also create temperature structure indistinguishable from the two maps shown in Figure 2. Fortunately, the three sources of primordial perturbations produce distinctly different polarization patterns. Polarization information can break the perturbation-type degeneracy. Polarization information also allows fine selection among inflation models. Figure 3, from \[Kinney 1998\], shows the selective power of polarization information. Five different inflation models are shown. Without polarization information it is not possible to distinguish the models. However, a polarization experiment with a sensitivity three times better than Planck would allow a strong selection. 
Gravitational Waves from Inflation It is particularly exciting that gravitational waves are potentially detectable through measurement of CMB polarization. The clouds of ionized gas viewed with CMB telescopes were in motion at the time of last scattering. These motions produce patterns in the polarization in the CMB. Fractional polarization of CMB sky structure is expected at the few percent level in all models of the universe. In models with strong gravitational waves, however, swirling motions in the ionized gas produce polarization patterns that differ from gravitational infall patterns due to density perturbations. A map of polarization vectors on the sky can be decomposed into a curl and a curl-free part. Density perturbations produce scalar perturbations to the spacetime metric. Since density perturbations have no handedness, they cannot produce a curl. On the other hand, gravitational waves are tensor perturbations. They can have a handedness (there are right- and left-handed circularly-polarized gravitational waves), so they do produce a curl. Thus, a curl component in the polarization provides a smoking-gun signature for a gravitational-wave background (\[Kamionkowski, Kosowsky, and Stebbins 1997\]; \[Seljak and Zaldarriaga 1997\]). We will only understand the big bang when we also understand high energy physics. The current hot big bang model accounts for the universal expansion, the presence and energy spectrum of the CMB, the light-element abundances, and can certainly accommodate primordial perturbations, but the hot big bang model leaves many questions unanswered. For example, why is the universe flat? Why is it so smooth? Why was there a big bang in the first place? And where did the primordial inhomogeneities mapped with CMB telescopes come from? Just as the light element abundances in today’s universe could only be understood through an understanding of the underlying nuclear physics, we now appreciate that the answers to these current questions must be accompanied by development of our understanding of particle theory at very high energy scales. The answers to these questions will probably not be obtained without a concomitant understanding of the unification of the fundamental forces of Nature. Inflation, a proposed period of accelerated expansion in the early universe driven by the release of vacuum energy, offers answers to all the questions listed above. If MAP and Planck confirm that the universe is flat and if the spatial power spectrum of the CMB agrees with that expected from adiabatic initial perturbations, we may indeed be on the right track with inflation. If so, we must further test the inflationary hypothesis and attempt to determine the physics responsible for inflation. By precisely mapping CMB polarization we may be able to determine the energy scale of inflation. There is a very broad range of models of inflation that produce a flat universe and a nearly scale-invariant spectrum of primordial density perturbations. A priori, inflation could have occurred when the universe had a temperature anywhere from roughly electroweak scale, $`100\mathrm{GeV}/\mathrm{k}`$ to the Planck temperature, $`10^{19}\mathrm{GeV}/\mathrm{k}`$. However, it may be possible to determine the cosmic temperature at the end of the era of inflation because inflation also generically predicts the existence of a stochastic background of long-wavelength gravitational waves (\[Abbot and Wise 1984\]). 
The amplitude of this gravitational-wave background is proportional to the square of the inflationary energy scale. Detection of these gravitational waves would therefore provide a direct measurement of the energy scale of inflation and point to the new physics responsible for inflation. If inflation occurred at very high energies the resultant gravitational waves would have produced large amplitude temperature structure in the CMB. But the COBE sky structure is weak ($`\mathrm{\Delta }T/T10^5`$), so COBE data can already be used to constrain the energy scale of inflation to be less than $`10^{16}\mathrm{GeV}`$. However, since CMB temperature structure from gravitational waves cannot be unambiguously distinguished from that due to density perturbations, we don’t know right now which produced the CMB structure that we have detected. Polarization maps will be needed to settle this issue. If inflation has something to do with GUTs (unified theories of the electroweak and strong interactions), as most theorists surmise, then the inflation era probably occurred at temperature $`10^{16}`$ Gev and the gravitational-wave signal is detectable with a next-generation CMB polarization experiment. Detection of such a signal would be truly extraordinary: It would (1) allow us to penetrate the last scattering surface, giving us a glimpse of the universe as it was at a time $`10^{38}`$ seconds after the initial singularity; (2) provide a window to new physics at energy scales more than 12 orders of magnitude beyond those accessible at accelerator laboratories; (3) explain the origin of primordial perturbations. And, since gravitational waves and density perturbations are produced during inflation by an analog of Hawking radiation, detection of a gravitational-wave background would (4) be the first observation of the effects of quantum field theory in curved spacetimes, and thus would provide clues to the nature of quantum gravity. An observable polarization-curl signal is not guaranteed even if inflation did occur (it may have taken place at a lower energy scale), but the signal should be detectable if inflation took place near the GUT scale. This means that even a null result from a sensitive polarization experiment would be quite interesting. It would indicate that inflation did not arise from a GUT phase transition or from quantum gravity effects which also place the inflation epoch at a high temperature. High-sensitivity polarization maps will be used to study a wide range of interesting cosmological physics in addition to probing gravitational waves. For example, the polarization pattern can be used to isolate the peculiar-velocity contribution to degree-scale temperature anisotropy. This information will be essential to discriminate between various structure-formation models that give rise to the same temperature perturbations. Polarization information can also be used to constrain the ionization history of the universe and thus probe the earliest epochs of formation of gravitationally-collapsed objects in the universe. Polarization can also probe primordial magnetic fields. Perhaps most importantly, polarization data may contain surprises due to unanticipated physical processes. COBE provided valuable information on the origin of large-scale structure. It confirmed the notion that large-scale structure grew via gravitational infall from primordial density perturbations. 
Also, the COBE data now provide a normalization for the power spectrum of primordial perturbations in any gravitational growth model. MAP and Planck are designed to accurately measure the CMB temperature structure expected on the sky and thereby provide precise information on the primordial spectrum of density perturbations. Although both MAP and Planck will provide some polarization information they lack the sensitivity to detect the gravitational wave signal from GUT scale inflation. To complete our study of the primordial fluctuations we must also map the CMB polarization to small angular scales with a sensitivity at the cosmic-variance limit in the same way that Planck will map the temperature structure. Specifications of a Polarization Experiment Angular resolution: Although gravitational-wave-produced polarization structure in the CMB occurs over a range of angular scale from 100 to 0.1 degrees, angular resolution finer than one degree is required to see the predicted turnover in the power spectrum (\[Kamionkowski and Kosowsky 1998\]). Detecting this turnover will be valuable in discriminating between gravitational waves and an unsubtracted foreground (whose power spectrum would rise at smaller angular scales). A polarization experiment should have an angular resolution of 0.3. Sensitivity: As mentioned above, the amplitude of the gravitational-wave background—and therefore, the amplitude of the gravitational-wave-induced curl component of the polarization—depends on the energy scale of inflation, which is currently unknown. If inflation has something to do with grand unification of fundamental forces, then the polarization signal should be detectable with a next-generation experiment. If inflation had something to do with new physics at much lower energy scales, then the polarization-curl signal could be much too small to ever be detectable. Figure 3 can be used to make this argument more precise. Shown therein are predictions of the amplitude ($`r`$) of the gravitational-wave background (measured via the polarization-curl signal) for five classes of inflationary models. The four classes with the largest $`r`$ arise in models in which inflation is related to unification of fundamental forces. Generically, models based on grand unification predict $`r`$ values in this range. The fifth class of models (that with the smallest polarization) usually arises if inflation had something to do with lower-energy physics. To rule out all of the grand unification models shown requires a polarization sensitivity that will allow detection of a gravitational-wave amplitude as small as $`r0.001`$. A null result would then indicate that if inflation occurred, it must have arisen from new physics at a lower energy scale. To measure $`r`$ with sensitivity 0.001, in the presence of foregrounds, the polarimeter sensitivity required is $`0.1`$ $`\mu `$K$`\sqrt{\mathrm{sec}}`$, roughly 100 times that of Planck. Such a sensitivity level will be difficult to achieve in the short term. However, an order-of-magnitude improvement over Planck sensitivities should be achievable within the next decade, and this would allow an initial search for a gravitational-wave background and, in turn, a strong selection among inflation models. Foregrounds: Right now we don’t know whether our ability to detect gravitational waves will be limited by detector sensitivity or by foregrounds because the polarization foregrounds are currently very poorly understood. (\[Tegmark et al. 
1999\]) Even if MAP and upcoming ground- and balloon-based experiments do not have sufficient sensitivity to detect gravitational waves, they will be able to measure the polarization of foreground emission such as the dust grain emission polarization. These measurements will be essential in the design of a future experiment. Sky Coverage: The current lack of information on polarized foregrounds makes it difficult to determine the optimum sky coverage for a polarization experiment. If the subtraction of foregrounds is the main difficulty for a sensitive polarization experiment, all sky coverage may be essential. High sensitivity is also needed, however, and better per-pixel sensitivity can be achieved by limiting sky coverage. Site: A high sensitivity polarization experiment must be done from space for two reasons. First, the sensitivity required for these observations is not likely to be possible from within the atmosphere. Second, uniform coverage over a large region of sky and over a wide range of frequencies will be valuable in separating polarized CMB sky structure from polarized foreground emission. Multi-frequency uniform sky coverage was one of the spacecraft advantages that placed the COBE results head and shoulders above previous results. This advantage will also allow MAP and Planck to succeed where ground and balloon based experiments are struggling. (See Figures 1 and 6.) The same rules will hold for polarization experiments: although good progress will be made from the ground and from balloons, a spacecraft will eventually be needed. ## IV Fine Scale CMB Sky Structure As CMB photons travel from the surface of last scattering to the observer, secondary sky structure can arise due to the interaction of the CMB photons with intervening re-ionized matter. This sky structure is called “secondary” in contrast to the primary structure produced at age 300,000 years. One interesting and useful source of secondary structure is the Sunyaev-Zel’dovich effect (\[Sunyaev and Zel’dovich 1970\], \[Sunyaev and Zel’dovich 1972\]), a distortion of the CMB energy spectrum that occurs when the CMB photons interact with hot ionized gas. For example, in a rich cluster of galaxies approximately $`10`$% of the total mass is in the form of hot ($`10^8`$K) plasma. Compton scattering of CMB photons by electrons in this intra-cluster (IC) plasma can present an optical depth of $``$0.01, resulting in a distortion of the CMB spectrum at the mK level. See \[Sunyaev and Zel’dovich 1980\]; \[Rephaeli 1995\]; \[Birkinshaw 1999\] for reviews. Figure 4 shows a simulated fine scale observation. On fine scales it is the thermal S-Z effect that is expected to dominate CMB sky structure. There are two components of the S-Z effect which result from distinct velocity components of the scattering electrons. The thermal component is due to the thermal (random) motions of the scattering electrons. The kinetic component is due to the bulk velocity of the IC gas with respect to the rest frame of the CMB. In Figure 5, the spectral distortion produced by each of the two components is shown. As evident from the figure, the two S-Z components have distinct spectra which can be separated by observations at several millimeter wavelengths. When combined with a measurement of electron temperature, the ratio of the kinetic and thermal component amplitudes provides a direct measurement of the cluster’s radial peculiar velocity relative to the rest frame of the CMB. 
The observed surface brightness difference of both the thermal and kinetic components is independent of the cluster redshift, as long as the cluster is resolved. Clusters are large objects, typically of order $`1`$Mpc, and subtend an arcminute or more at any redshift. Therefore, accurate S-Z measurements can be made throughout the universe, all the way back to the epoch of formation of the hot IC gas. A sky survey of S-Z determined velocities would provide us a view of the motions of test masses (the clusters) throughout the entire Hubble volume, and show us the evolution of velocity structure over much of the history of the universe. In addition, the evolution of the number density of clusters provides a sensitive test of gravitational collapse models. Until recently it has been assumed that X-ray measurements would be needed to determine the electron temperature of the IC gas. But the IC electrons are mildly relativistic, and recent calculations of the relativistic S-Z effect indicate that with sufficiently precise CMB measurements it may be possible to determine the electron temperature, and the cluster velocity, without need for X-ray data (\[Fabbri 1981\]; \[Rephaeli 1995\]; \[Stebbins 1997\]; \[Challinor and Lasenby 1998\]; \[Itoh, Kohyama, and Nozawa 1998\]; \[Nozawa, Itoh, and Kohyama 1998\]; see Figure 5). This is particularly important for high redshift clusters since the received X-ray flux falls as $`(1+z)^4`$ while the S-Z brightness is redshift independent. Of course, for nearby clusters, for which X-ray data is available, both techniques should be used and the results compared. In the last few years, high signal-to-noise detections and images have been made of the thermal S-Z effect toward several distant clusters ($`z>0.15`$). Most of these observations have been made at centimeter wavelengths; observations at $`2`$mm, however, have also been successful (\[Birkinshaw 1999\]). All of these observations have been done using ground based telescopes. The thermal S-Z distortion is a measure of the thermal history of the universe. There are several potential sources of heating for the intergalactic medium: gravitational collapse, energy input from galaxy superwinds (\[Pen 1999\]), and energy input from quasars and AGNs (\[Natarajan and Sigurdsson 1999\]). Since the S-Z effect depends upon $`n_e`$ rather than $`n_e^2`$, we can use it to trace the thermal history not only of dense clusters but also of filaments. If the mean electron temperature is 1 keV, then a 1 Mpc wide filament produced by the collapse of a 10 Mpc wave should produce a distortion of 10 $`\mu `$K. These filaments, one arcminute wide and about a degree long, can be detected by searching for the S-Z thermal effect distortion they produce on the sky. The filaments will be difficult to detect through x-ray emission because the electron density in the filaments is too low; however, the S-Z thermal effect signal they produce should be detectable. The thermal S-Z effect is expected to be the strongest source of sky structure at arcminute angular scales but other sources of structure can also provide information from the dark ages. Once arcminute resolution multi-frequency CMB maps are available, the regions affected by the thermal S-Z effect can be identified. S-Z free regions can then be studied to detect other sky structure present. 
The physics responsible for fine scale structure include gravitational lensing, bulk motions of plasma (the Ostriker-Vishniac effect), evolution of the gravitational potentials during the passage of CMB photons (the Integrated Sachs Wolfe effect and the Rees-Sciama effect), and details of the ionization history of the universe. See \[Hu and White 1997a\], \[Refregier, Spergel, and Herbig 1998\], and references therein for details. Specifications of Fine Scale Observations Angular resolution: One arcminute resolution will be required. At high redshifts the cores of galaxy clusters subtend about one arcminute. In addition the S-Z filaments produced by gravitational collapse should be about an arcminute across, and as long as a degree. At the millimeter wavelengths needed for CMB observations, arcminute beamwidths translate to a telescope aperture (or longest baseline) of about 10 meters. Sensitivity: The brightest clusters can produce S-Z amplitudes $``$1mK. The kinetic effect is expected to be smaller, $``$200 $`\mu `$K. Filaments are expected to produce 10 $`\mu `$K thermal S-Z signals. Currently CMB instruments are measuring the $``$100 $`\mu `$K degree scale sky structure with 5 $`\mu `$K precision, so current technology is sufficient for S-Z cluster work. Better sensitivity will be needed to study filaments. Frequency Range: To separate S-Z thermal, kinetic, and relativistic-electron effects, maps covering the range from 30-400 GHz will be needed. Ideally all these maps should be made using the same instrument. Additional information at lower (5-10 GHz) frequencies and higher (1000-3000 GHz) frequencies will also be needed as an aid in removing emission from unrelated foreground and background sources. Telescope technology: Both filled aperture and interferometric telescopes can be used for observations at this angular scale and frequency. The advantages of interferometers include: rejection of telescope emission, insensitivity to amplifier gain fluctuations, and fault tolerance. The advantages of filled aperture telescopes include: wide detection bandwidth, wide observing frequency range, lower complexity, and lower cost. Site: As shown in Figure 6, observations of the thermal S-Z effect in clusters should be attempted from the ground before resorting to spacecraft observations. Currently ground based efforts are making rapid progress on this topic. However, attempts to measure electron temperatures in clusters via the relativistic S-Z effect and attempts to map peculiar velocities over large regions of sky may require spacecraft observations. S-Z filament observations, which require multi-frequency maps covering degrees, along with microkelvin sensitivity, may also benefit from observations done in space. ## V Summary After the Planck mission, we will need to make precise multi-frequency, all-sky maps of CMB polarization. CMB polarization measurements, because they may allow for the detection of a gravitational wave background, probe much further back in time than any other astronomical observation. In so doing, CMB polarization measurements allow the study of particle physics at energies far higher than are available with any earth-bound accelerator experiment. The processes that may have produced this background of gravitational waves probably involve the unification of the the strong and electroweak forces and may also involve quantum gravity. 
Because of the ability to constrain physics at enormous energy scales, CMB polarization observations have the potential to revolutionize our understanding of basic physics. Polarization information is also expected to help determine the class of primordial perturbations (density; pressure; gravitational wave), and will allow a strong selection among inflation models. After Planck it will also be important to make arcminute angular scale observations of the CMB because these observations can probe the dark ages of cosmology. Using fine scale CMB observations we can see the largest structures of the universe, galaxy clusters and galaxy sheets, as they were first forming. This committee has sought comment from over one hundred cosmologists, as well as from members of the particle physics and general relativity communities. The discussion has been wide-ranging, but the same comment was heard again and again. Because of the potential for discovery of dramatic new physics (gravitational waves from inflation), along with model-testing power, CMB polarization measurements should have the highest priority among CMB observations following Planck. It would be truly remarkable if, early in the twenty-first century, we could say that we understand what was happening in the universe just $`10^{38}`$ seconds after it began. ## VI Acknowledgements Chris Cantalupo produced Figures 4 and 6. Greg Griffin, Michael Vincent, Michael O’Kelly, Kurt Miller, and Gabrielle Walker provided editorial comments. Greg Wright carried out atmospheric emission calculations.
no-problem/9907/cond-mat9907416.html
ar5iv
text
# Dynamic correlations in doped 1D Kondo insulator - Finite-𝑇 DMRG study - \[ ## Abstract The finite-$`T`$ DMRG method is applied to the one-dimensional Kondo lattice model to calculate dynamic correlation functions. Dynamic spin and charge correlations, $`S_\mathrm{f}(\omega )`$, $`S_\mathrm{c}(\omega )`$, and $`N_\mathrm{c}(\omega )`$, and quasiparticle density of states $`\rho (\omega )`$ are calculated in the paramagnetic metallic phase for various temperatures and hole densities. Near half filling, it is shown that a pseudogap grows in these dynamic correlation functions below the crossover temperature characterized by the spin gap at half filling. A sharp peak at $`\omega =0`$ evolves at low temperatures in $`S_\mathrm{f}(\omega )`$ and $`N_\mathrm{c}(\omega )`$. This may be an evidence of the formation of the collective excitations, and this confirms that the metallic phase is a Tomonaga-Luttinger liquid in the low temperature limit. \] Heavy fermion systems have attracted much attention for more than a decade because of their enormous mass renormalization and diverse ground states including unconventional superconductivity. The periodic Anderson model (PAM) and the Kondo lattice model (KLM) are their canonical models, and their properties have intensively been investigated. In the framework of the PAM, hybridized band picture with strong renormalization is widely accepted for a scenario of the formation of heavy quasiparticles. The formation of heavy quasiparticles is supported by numerical studies on the PAM in infinite dimensions (D=$`\mathrm{}`$) and in one dimension (1D). On the other hand it is a nontrivial question how the heavy fermion state is formed in the KLM, since the charge degrees of freedom are completely suppressed for the localized $`f`$-electrons. For this problem it is important to understand the relation between the Kondo effect and the hybridization picture. An important energy scale is the coherence temperature of the heavy quasiparticles, $`T_{\text{coh}}`$, but several approaches seem to give controversial results. Gutzwiller approximation suggests that $`T_{\text{coh}}`$ is enhanced compared with the single-impurity Kondo temperature $`T_\text{K}`$, while Nozières predicted that $`T_{\text{coh}}`$ is rather suppressed due to the exhaustion mechanism. Several numerical calculations on the PAM in D=$`\mathrm{}`$ and 1D suggest the suppression of $`T_{\text{coh}}`$. Intersite correlations are believed to play an important role for the formation of heavy quasiparticles in real compounds, but they are not taken into account in D=$`\mathrm{}`$. 1D systems provide complementary information, since the intersite correlations generally become more dominant in lower dimensions. Precise numerical calculations are also feasible in 1D compared with 2D or 3D. Therefore the 1D KLM may be another appropriate starting point to obtain reliable information on dynamics, although the ground state may not be a Fermi liquid. Ground-state properties of the 1D KLM have been studied extensively and its magnetic phase diagram is now determined. At half filling the ground state is a Kondo spin-liquid insulator with a spin gap $`\mathrm{\Delta }_\text{s}`$, and the charge gap $`\mathrm{\Delta }_\text{c}`$ is much larger than $`\mathrm{\Delta }_\text{s}`$ due to the correlation effects on the gap formation. Upon finite hole doping $`\delta `$, both spin and charge gaps close and the ground state is expected to belong to the universality class of the Tomonaga-Luttinger liquid (TLL). 
Thermodynamic properties at finite $`\delta `$ have been studied by the finite-temperature density-matrix renormalization-group (finite-$`T`$ DMRG) method and it is found that there exist two crossover temperatures. The first crossover is observed as a broad peak in the $`T`$-dependence of the spin susceptibility $`\chi _\text{s}`$ when the temperature is lowered from infinity. The crossover temperature $`T_1^{}`$ may be defined by its peak position, and calculations for various $`J`$’s show that $`T_1^{}`$ is scaled by $`\mathrm{\Delta }_\text{s}`$. The charge susceptibility $`\chi _\text{c}`$ and $`\chi _\text{s}`$ both decrease below $`T_1^{}`$ but they turn to increasing again at further low temperatures. The susceptibilities are expected to finally approach the $`T=0`$ value determined by the velocities of the collective excitations of the TLL ground state, and this saturation is actually observed in $`\chi _\text{c}`$ for some $`J`$ and $`\delta `$. The second crossover temperature $`T_2^{}`$ may be defined to characterize this saturation, and is essentially determined by either spin or charge velocity depending on the channel. The zero temperature susceptibilities calculated by the DMRG method show a diverging behavior as $`\delta 0`$, and this means vanishing of $`T_2^{}`$. In the present work we calculate temperature and doping dependence of the dynamic correlation functions and clarify the character of these crossovers. It will be shown that the higher crossover temperature $`T_1^{}`$ corresponds to the characteristic temperature of pseudogap formation in the dynamic spin and charge structure factors. At the same time a sharp peak structure appears at $`\omega =0`$, indicating the formation of the collective excitations of the TLL at low temperatures. The lower crossover temperature $`T_2^{}`$ may correspond to the coherence temperature of these collective excitations. The Hamiltonian of the 1D KLM is described as $$=t\underset{i,s}{}\left(c_{is}^{}c_{i+1s}+\text{H.c.}\right)+J\underset{i,s,s^{}}{}𝐒_i\frac{1}{2}𝝈_{ss^{}}c_{is}^{}c_{is^{}}$$ (1) with standard notations. The density of conduction electron $`n_\text{c}`$ is unity at half filling, and hole doping ($`n_\text{c}=1\delta `$) is physically equivalent to electron doping ($`n_\text{c}=1+\delta `$) due to particle-hole symmetry. Dynamic correlation functions of local quantities can be calculated at finite temperatures for infinite-size systems by the DMRG method applied to the quantum transfer matrix. Imaginary-time correlation functions are first calculated from the eigenvector of the maximum eigenvalue of the quantum transfer matrix, and then they are analytically continued to real frequency using the maximum entropy method. The advantages of this method are that the finite-$`T`$ DMRG method has no statistical errors and that we do not need the extrapolation on the system size. This approach was first applied to the insulating phase of the 1D KLM and the many-body nature of the gap formation is revealed. The present study is the first application to a metallic phase. In our calculations we usually keep 50 states in the finite-$`T`$ DMRG procedure with the Trotter number 60. We have calculated temperature dependence of several dynamic correlation functions. The results of the local spin dynamics of the $`f`$-spins, $`S_\text{f}(\omega )=\frac{dq}{2\pi }S_\text{f}(q,\omega )`$, are shown for $`J/t=1.6`$ and $`\delta =0.2`$ in the inset of Fig. 1. 
Note that the complete suppression of charge fluctuations for $`f`$-spins imposes the sum rule, $`\text{d}\omega S_\text{f}(\omega )=1/4`$, independent of $`J`$, $`\delta `$ and $`T`$. We can see that a peak structure appears around $`\omega \mathrm{\Delta }_\text{s}=0.4t`$ at $`T0.2t`$, and similar peak structure is also observed for different Kondo coupling $`J/t=1.2`$ at $`T0.06t`$, where $`\mathrm{\Delta }_\text{s}=0.16t`$. Based on these results, we may conclude that the characteristic temperature of the peak formation is scaled by the higher crossover temperature $`T_1^{}`$ determined by $`\chi _\text{s}(T)`$. At the same time another peak structure grows at $`\omega =0`$, when $`\delta `$ is finite. It suggests the formation of the collective spin excitations of the TLL at low temperatures. We expect that the low energy part finally approaches $`|\omega |^{K_c}`$, which is predicted by the TLL theory at $`T=0`$. The doping dependence of $`S_\text{f}(\omega )`$ is shown at the low temperature $`T=0.04t\mathrm{\Delta }_\text{s}`$ in the main panel of Fig. 1. With increasing $`\delta `$, the peak intensity at $`\omega =0`$ grows, while the intensity around $`\omega =\mathrm{\Delta }_\text{s}`$ is reduced as a consequence of the sum rule discussed before. We have checked that the peak intensity at $`\omega =0`$ is $`0.25\delta `$ within a few percent. The energy scale estimated from the peak width at $`\omega =0`$ at $`T=0.04t`$ is smaller than the lowest temperature in our calculations. This means the weight around $`\omega =0`$ yields nearly free spin degrees of freedom with density $`0.25\delta `$ down to around this temperature $`T0.04t`$. This is consistent with the behavior of the static spin susceptibility, $`\chi _\text{s}=\delta /(4T)`$, observed at low temperatures. In contrast to $`S_\text{f}(\omega )`$ the $`\omega =0`$ peak is small in the conduction-electron spin correlations, $`S_\text{c}(\omega )=\frac{dq}{2\pi }S_\text{c}(q,\omega )`$, as shown in Fig. 2. This shows that the low-energy part of the spin degrees of freedom of the conduction electrons are nearly exhausted to form spin singlet with $`f`$-spins and thus a clear peak is formed at $`\omega =\mathrm{\Delta }_\text{s}`$. Therefore, the low energy spin dynamics is mostly dominated by the $`f`$-spin degrees of freedom. However, the intensity of the peak around $`\omega \mathrm{\Delta }_\text{s}`$ is less than $`1/3`$ of the corresponding peak in $`S_\text{f}(\omega )`$ and over a half of the total weight extends over higher frequencies $`t\omega 5t`$. This means that although the low energy part is dynamically coupled with $`f`$-spins, the majority part of the conduction spin degrees of freedom have another energy scale of almost the band width $`4t\mathrm{\Delta }_\text{s}`$ when the Kondo coupling is small $`J<4t`$. This shows that only the conduction electrons close to the Fermi level screen the $`f`$-spins as pointed out by Nozières. Rather surprisingly, even though the screening by conduction electrons is not complete, the intensity at $`\omega =0`$ in $`S_\text{f}(\omega )`$ is almost $`\delta /4`$, just like the $`J=\mathrm{}`$ case, where each conduction electron screens one $`f`$-spin. This shows the importance of the $`f`$-$`f`$ spin correlations for the formation of the Kondo singlet state. The doping dependence of the dynamic charge correlations, $`N_\text{c}(\omega )=\frac{dq}{2\pi }N_\text{c}(q,\omega )`$, is shown in Fig. 3. 
At half filling, the charge excitations are exponentially suppressed below the crossover temperature $`T_1^{}`$ at $`\omega <\mathrm{\Delta }_\text{c}`$. Upon hole doping, a sharp peak appears at $`\omega =0`$ as in $`S_\text{f}(\omega )`$. This indicates the formation of collective charge excitations of the TLL as discussed for $`S_\text{f}(\omega )`$. The peak intensity increases with $`\delta `$, and this means that the effective carrier density of the TLL is strongly renormalized from $`n_\text{c}=1\delta `$. The renormalization of carrier is naturally explained in the limit of strong $`J`$, where each conduction electron forms a local singlet with the $`f`$-spin on the same site. In this limit, effective carriers are introduced by hole doping, and their main component is the unscreened $`f`$-spins with density $`\delta `$, as discussed for $`S_\text{f}(\omega )`$. The TLL theory predicts that the low-energy asymptotic form of $`N_\text{c}(\omega )`$ is $`|\omega |^{\text{min}(K_c,4K_c1)}`$ near $`\omega =0`$, and we expect that the $`\omega =0`$ peak finally approaches this form in the low temperature limit. We now consider crossover behavior of the quasiparticle density of states, $`\rho (\omega )`$. Figure 4 shows the temperature dependence at $`\delta =0.2`$ and $`J/t=1.6`$. We can see that a pseudogap develops just above the Fermi level $`\omega =\mu `$ below $`T/t0.4`$. Similar behavior is observed also for $`J/t=1.2`$ below $`T/t0.15`$. Based on these results, we may conclude that the characteristic temperature of pseudogap formation is scaled by $`T_1^{}`$ defined from $`\chi _\text{s}(T)`$. This conclusion is consistent with the results obtained at half filling. The crossover behavior around $`T_1^{}`$ may be explained as follows. Below $`T_1^{}`$ thermal fluctuations of the $`f`$-spins are substantially suppressed, since the temperature is lower than the characteristic energy scale of the $`f`$-spin excitations $`\mathrm{\Delta }_\text{s}`$. When $`\delta `$ is small, the characteristic time scale of the dominant part of the $`f`$-spins is given by $`\mathrm{\Delta }_\text{s}^1`$, and this is much longer than the time scale of quasiparticle propagation $`\tau _{\text{qp}}`$, since $`\tau _{\text{qp}}`$ may be determined by the inverse of bare hopping energy and charge gap, $`t^1`$ and $`\mathrm{\Delta }_\text{c}^1`$. Therefore, concerning the quasiparticle excitations, the $`f`$-$`f`$ spin correlations may be assumed to be static. Because of their staggered nature in space, these almost static $`f`$-$`f`$ spin correlations induce the opening of a gap in $`\rho (\omega )`$. Of course, as shown in $`S(\omega )`$ and $`N(\omega )`$, both the $`f`$-spins and the conduction electrons have a slow dynamics when $`\delta `$ is finite, and therefore the gap discussed here is not a real gap but rather a pseudogap for finite $`\delta `$. Thus the pseudogap develops below $`T_1^{}`$. We next consider doping dependence to study small structures in $`\rho (\omega )`$ near the Fermi level $`\omega =\mu `$. The results for $`J/t=1.6`$ at $`T/t=0.04`$ are shown in Fig. 5 at various dopings. The pseudogap becomes more prominent with approaching half filling, where the clear quasiparticle gap, $`\mathrm{\Delta }_{\text{qp}}/t=0.7`$, exists. The sharp peak structure just below $`\omega =\mu `$ also grows with decreasing $`\delta `$ and seems to continuously connect to the gap edge structure at $`\delta =0`$. The nature of the structure near the Fermi level is not fully understood yet. 
One possible scenario is the mean-field type argument assuming the $`f`$-spin helical SDW order with wave number $`2k_F=\pi (1\delta )`$. Band mixing induces gap opening at $`k=\pi (\pm 1+\delta )/2`$, and two new van-Hove (vH) divergent singularities appear in each of the two hybridized bands. The Fermi level sits between these two new singularities in the lower hybridized band, as far as $`\delta `$ is small. A slightly different scenario is that the gap is induced by the short-range $`f`$-spin correlations with wave number $`\pi `$. Then, there appears only one vH singularity in each band. The peak-like behavior observed in Fig. 5 may be identified as the lower vH singularity in the first scenario, while in the second one it is a consequence of the shift of the Fermi level towards the top of the lower band as $`\delta 0`$. However, it is not straightforward to describe the behavior of $`\rho (\omega )`$ just above the Fermi level in terms of these pictures. Another quite different scenario is that this peak is due to the Kondo singlet formation and reminiscence of the Kondo resonance. This may also be expressed as the renormalized hybridized bands as proposed for the PAM, with the Fermi level inside the lower band. The renormalized hybridized bands are actually observed for the PAM in D=$`\mathrm{}`$ and 1D. Although the results of the 1D PAM indicate a quite symmetric $`\rho (\omega )`$ near $`\omega =\mu `$ for conduction electrons, which differs from our results, there is a considerable asymmetry in the results for the D=$`\mathrm{}`$ PAM, similar to our results. With decreasing $`\delta `$, the Fermi level shifts towards the top of the lower band, and $`\rho (\omega )`$ near the Fermi level grows accordingly, which is consistent with the behavior in Fig. 5. Therefore, this scenario is quite promising. Anyway, the slow dynamics of $`f`$-spin correlations may be important to understand the low energy dynamics of quasiparticle motion, and we will make further investigations in future study. As the temperature decreases down to zero, the interaction between the effective carriers becomes relevant and the low energy excitations are expected to be described as a TLL. The second crossover temperature is defined to characterize this behavior, as discussed before. Figure 6 shows $`\rho (\omega )`$ for the smaller coupling $`J/t=1.2`$. In this case, the second crossover temperature is relatively high compared with the case of $`J/t=1.6`$, and the asymptotic TLL behavior may be observed. For $`\delta =0.2`$ and $`0.3`$, a dip structure is now observed at the Fermi level $`\omega =\mu `$, which is absent in Fig. 5. It is known that the TLL has $`\rho (\omega )`$ with a dip structure at the Fermi level as $`\rho (\omega )|\omega \mu |^\alpha `$ at $`T=0`$ where $`\alpha =(K_\text{c}1)^2/(4K_\text{c})`$. Here $`K_\text{c}`$ is the Luttinger liquid parameter and less than $`1/2`$ in the 1D KLM. Thus the dip structure in Fig. 6 is consistent with the TLL picture. Such dip structure is not found for small doping $`\delta =0.1`$. This agrees with the previous study of the thermodynamics in the 1D KLM, which shows that $`T_2^{}`$ is lower for smaller $`\delta `$ and vanishes as $`\delta 0`$. To summarize we have calculated dynamic quantities at various temperatures and hole densities, and clarified characters of the two crossovers in the paramagnetic metallic phase. 
Below the first crossover temperature $`T_1^{}`$ it has been shown that a pseudogap develops in the density of states and dynamic correlation functions, and $`S_\text{f}(\omega )`$ and $`S_\text{c}(\omega )`$ both show a peak structure around $`\omega =\mathrm{\Delta }_\text{s}`$ as in the half-filling case. At the same time a peak structure appears at $`\omega =0`$ in $`S_\text{f}(\omega )`$ and $`N_\text{c}(\omega )`$, and its intensity increases with hole density $`\delta `$. The $`\omega =0`$ peak indicates that as a consequence of local Kondo singlet formation effective carriers are strongly renormalized to have density $`\delta `$ and small energy scale. The increase of the peak intensity with $`\delta `$ is naturally explained in the limit of strong $`J`$, where effective carriers of the TLL are the unscreened $`f`$-spins whose density is $`\delta `$. We note that this is consistent with the large Fermi surface. Below the second crossover temperature $`T_2^{}`$, the interaction between the effective carriers becomes relevant and the renormalized carriers are expected to evolve into a TLL. This is supported by several sets of the present results.
no-problem/9907/astro-ph9907159.html
ar5iv
text
# Centroids of Gamow-Teller transitions at finite temperature in fp-shell neutron-rich nuclei ## Abstract The temperature dependence of the energy centroids and strength distributions for Gamow-Teller (GT) $`1^+`$ excitations in several fp-shell nuclei is studied. The quasiparticle random phase approximations (QRPA) is extended to describe GT states at finite temperature. A shift to lower energies of the GT<sup>+</sup> strength is found, as compared to values obtained at zero temperature. preprint: Published in Physica Scripta 59, 352 (1999) Weak-interaction mediated reactions on nuclei in the core of massive stars play an important role in the evolutionary stages leading to a type II supernova. These reactions are also involved in r-process nucleosynthesis . Nuclei in the fp-shell participate in these reactions in the post- silicon burning stage of a pre-supernova star . The astrophysical scenarios, where these reactions can take place, depend upon various nuclear structure related quantities . Among them, the energy centroids for GT and IAS transitions can determine the yield of electron and neutrino captures. The dependence of such nuclear observables upon the stellar temperature is a matter of interest . In the present letter we are addressing the question about the temperature dependence of the energy-centroids of GT<sup>±</sup> transitions . We have performed microscopic calculations of these centroids using the finite-temperature quasiparticle random phase approximation and for temperatures (T) below critical values related with the collapse of pairing gaps (T$``$1 MeV ). These temperatures are near the values characteristic of the pre-supernova core . The consistency of the approach has been tested by evaluating, at each temperature, the Ikeda Sum Rule and total GT<sup>-</sup> and GT<sup>+</sup> strengths . The starting Hamiltonian is $$H=H_{\mathrm{sp}}+H_{\mathrm{pairing}}+H_{\mathrm{GT}}$$ (1) where by the indexes (sp),(pairing) and (GT) we are denoting the single-particle, pairing and Gamow-Teller ($`\sigma \tau `$.$`\sigma \tau `$) terms, respectively. For the pairing interaction, both for protons and neutrons, a separable monopole force is adopted with coupling constants $`G_p`$ and $`G_n`$ and for the residual proton-neutron Gamow-Teller interaction $`H_{\mathrm{GT}}`$ the form given by Kuzmin and Soloviev is assumed. As shown in the context of nuclear double $`\beta `$ decay studies the Hamiltonian (1) reproduces the main features found in calculations performed with realistic interactions. The structure of the residual interaction-term can be defined as the sum of particle-hole ($`\beta ^\pm `$) and particle-particle ($`P^\pm `$) terms of the $`\sigma \tau ^\pm `$ operators, as shown in , namely: $$H_{\mathrm{GT}}=2\chi (\beta ^{}\beta ^+)2\kappa (P^{}P^+)$$ (2) in standard notation. To generate the spectrum of $`1^+`$ states associated with the Hamiltonian (1) we have transformed it to the quasiparticle basis, by performing BCS transformations for proton and neutrons channels separately, and then diagonalized the residual interaction between pairs of quasiprotons (p) and quasineutrons (n) in the framework of the pn-QRPA . This procedure leads to the definition of phonon states in terms of which one can write both the wave functions and the transition matrix elements for $`\sigma \tau ^{}`$ and $`\sigma \tau ^+`$ excitations of the mother nucleus. 
Since the procedure can be found in textbooks we shall omit further details about it and rather proceed to the description of the changes which are needed to account for finite temperature effects. Like before one has to treat pairing correlations first, to define the quasiparticle states at finite temperature, and afterwards transform the residual interaction into this basis to diagonalize the pn-QRPA matrix. The inclusion of thermal effects on the pairing Hamiltonian is performed by considering thermal averages in dealing with the BCS equations. Details of this procedure can be found in . The most notable effect of thermal excitations on the pairing correlations is the collapse of the pairing gaps, at temperatures of the order of half the value of the gap at zero temperature. For a separable pairing force the finite temperature gap equation is written $$\mathrm{\Delta }(T)=G\underset{\nu }{}u_\nu v_\nu (12f_\nu (T))$$ (3) where the factors $`f_\nu (T)=(1+e^{E_\nu /T})^1`$ are the thermal occupation factors for single quasiparticle states. The quasiparticles energies $`E_\nu =\sqrt{(e_\nu \lambda )^2+\mathrm{\Delta }(T)^2}`$ are now functions of the temperature, as well as the occupation factors $`u`$ and $`v`$. The thermal average procedure of accounts for the inclusion of excited states in taking expectation values at finite temperature. It has also been used to describe two-quasiparticle excitations and QRPA states at finite temperature . In the basis of unlike(proton-neutron)-two-quasiparticle states the thermal average gives, for the commutator between pairs, the expression: $$<[\alpha _{\nu ,n}\alpha _{\mu ,p},\alpha _{\rho ,p}^{}\alpha _{\gamma ,n}^{}]>=\delta _{\nu ,\gamma }\delta _{\mu ,\rho }(1f_{\nu ,n}f_{\nu ,p})$$ (4) These factor have to be included in the pn-QRPA equations in taking the commutators which lead to the pn-QRPA matrix, as they have been considered in dealing with pairs of like-(neutrons or protons)-quasiparticles . More details about this procedure, for the particular case of proton-neutron excitations, will be given in . The single particle basis adopted for the present calculations consists of the complete f-p and s-d-g shells and the corresponding intruder state $`0h_{11/2}`$, both for protons and neutrons. In this single particle basis, with energies obtained from a fit of the observed one-particle spectra at the begining of the fp-shell, and taking <sup>40</sup>Ca as an inert core we have solved temperature dependent BCS equations for temperatures 0 MeV $``$ T $``$ 0.8 MeV. The coupling constants $`G_p`$ and $`G_n`$, of the proton and neutron monopole pairing channels of Eq.(1), have been fixed to reproduce the experimental data on odd-even mass differences. In Table 1 the calculated neutron and proton pairing gaps at T=0 MeV are compared to the experimental values extracted from . Once the pairing coupling constants are determined one can calculate the standard zero temperature pn-QRPA equations of motion to reproduce the known systematics of GT<sup>±</sup> energies and strengths. ¿From the comparison between the calculated and experimental values for GT<sup>±</sup> energies and B(GT<sup>±</sup>) strengths we have fixed the values of the coupling constants $`\chi `$ and $`\kappa `$ of the Hamiltonian Eq.(2). The consistency of the pn-QRPA basis is also given by the ratios between the calculated and expected values of the Ikeda’s sum rule 3(N-Z). The values of the above quantities are shown in Table 2. 
The experimental values of the B(GT<sup>+</sup>) strengths have been approximated by using the expression $$\frac{\mathrm{B}(\mathrm{GT}^+)}{\mathrm{Z}_{\mathrm{valence}}(20\mathrm{N}_{\mathrm{valence}})}=a+b(20\mathrm{Z}_{\mathrm{valence}})\mathrm{N}_{\mathrm{valence}}$$ (5) where a=3.48 10<sup>-2</sup> and b=1.0 $`10^4`$ (see also ). The overall agreement between calculated and experimentally determined values at zero temperature, both for pairing and GT observables, is rather good. We are now in a position to calculate these observables at finite temperature. At a given value of T we have solved the pairing gap equations and the pn-QRPA equations. With the resulting spectrum of $`1^+`$ states, both for GT<sup>-</sup> and GT<sup>+</sup> excitations, and the corresponding transition matrix elements of the $`\sigma \tau ^+`$ operator we have obtained the values shown in Table 3, where from the energy-centroids $$\mathrm{E}(\mathrm{T})=\frac{_\mathrm{n}\mathrm{E}_\mathrm{n}(1^+)<1_\mathrm{n}^+\sigma \tau ^+\mathrm{g}.\mathrm{s}>^2}{_\mathrm{n}<1_\mathrm{n}^+\sigma \tau ^+\mathrm{g}.\mathrm{s}>^2}$$ (6) we have extracted the temperature dependent shifts $$\delta _\mathrm{E}(\mathrm{T})=\mathrm{E}(\mathrm{T}=0)\mathrm{E}(\mathrm{T})$$ (7) Since the changes of the calculated centroids for GT<sup>-</sup> excitations at different temperatures are minor we are showing only the quantities which correspond to GT<sup>+</sup> transitions. Let us discuss some features shown by the result of the present calculations by taking the case of <sup>56</sup>Fe as an example. As known from previous studies , the repulsive effects due to the proton-neutron residual interactions affect both the GT<sup>-</sup> and the GT<sup>+</sup> unperturbed strength distributions, moving them up to higher energies. The large upwards- shift, as compared to the strength distribution of the unperturbed proton-neutron two-quasiparticle states, is exhibited by the GT<sup>-</sup> distribution . At finite temperatures two different effects become important, namely: the vanishing of the pairing gaps and the thermal blocking of the residual interactions. In order to distinguish between both effects we have computed GT-strength distributions for the case of the unperturbed proton-neutron two-quasiparticle basis. The pairing gaps, for proton and neutrons, collapse at temperatures T$``$ 0.80 MeV. At temperatures below these critical values (T= 0.7 MeV) the neutron and pairing gaps decrease to about $`50\%`$ and $`40\%`$ of the corresponding values at T=0, respectively. At this temperature (T=0.7 MeV) these changes amount to a lowering of the centroid for GT<sup>+</sup> transitions of the order of 1 MeV. When the residual interaction is turned on the resulting shift to lower energies is $``$1.20 MeV. ¿From these results it can be seen that the total downward shift of the GT<sup>+</sup> centroid is not solely due to pairing effects but also due to the thermal blocking of the proton-neutron residual interactions. This result can be understood as follows. Since GT<sup>+</sup> transitions are naturally hindered by the so-called Pauli blocking effect the smearing out of the Fermi surface due to pairing correlations, at zero temperature, tends to favour them. When temperature dependent averages are considered, Eq.(3), the pairing correlations are gradually washed-out as the temperature increases. This in turn leads to a sharpening of the Fermi surface thus decreasing the value of the energy of the unperturbed proton-neutron pairs. 
In addition, from the structure of the pn-QRPA equations at finite temperature, it can easily be seen that factors such as in Eq.(4) will appear screening the interaction terms. This additional softening of the repulsive GT interaction adds to the decrease of the unperturbed proton-neutron energies and the result is a larger shift of the GT<sup>+</sup> centroids. It should be noted that the position of the centroid of the GT<sup>-</sup> transitions is less sensitive to these effects, as we have mentioned before. The calculated shifts for these centroids are of the order of (or smaller than) 0.5 MeV. This result, concerning GT<sup>-</sup> centroids, is understood by noting that the collapse of the proton pairing gap does not affect the BCS unoccupancy factor ($`u_p`$) for proton levels above the Fermi surface as well as the BCS occupancy factor ($`v_n`$) for neutron levels below the Fermi surface and the energy of the unperturbed proton-neutron pairs remains nearly the same. Table 3 shows similar features for the changes of the GT<sup>+</sup> energy-centroids in other cases. To conclude, in this work we have presented the result of temperature dependent QRPA calculations of GT transitions in some neutron-rich nuclei in the fp-shell. The energy centroids of these transitions have been calculated at temperatures below the critical values associated with the collapse of the pairing correlations. The inclusion of thermal averages on the QRPA equations of motion leads to the softening of the repulsion induced by the Gamow-Teller interaction on proton-neutron pairs as well as to the sharpening of the proton and neutron Fermi surfaces. The combined effects of both mechanisms leads to a downwards-shift of the GT<sup>+</sup> strength while the GT<sup>-</sup> centroids remain largely unaffected. We have observed the constancy of the total GT<sup>+</sup> strength, as a function of temperature, in agreement with previously reported results of the Monte Carlo Shell Model Method . The shift of the GT<sup>+</sup> centroids at finite temperatures will effectively lower the energy thresholds for electron capture reactions in stellar enviroments leading to more neutronization at lower stellar densities during gravitational collapse. On the other hand the neutrino induced r-process reactions will remain relatively unaffected by the small ( $``$ 0.5 MeV) thermally induced shifts of GT<sup>-</sup> centroids. Considering that the empirically determined energies of the GT<sup>+</sup> centroids are known with an accuracy of the order of 0.43 MeV the temperature-dependent effects reported here may be significant for astrophysical rate calculations and their consequences for pre-supernova stellar evolution and gravitational collapse. Work is in progress to predict GT<sup>+</sup>(GT<sup>-</sup>) centroids and strengths, by using the pn-QRPA method at finite T, for nuclei for which experimental data through charge-exchange (p,n)( (n,p))-reactions in the opposite direction is available to constrain the former one. We thank George Fuller for fruitful discussions and the Institute for Nuclear Theory at the University of Washington for its hospitality and the DOE for partial support during the completion of this work. (O.C) is a fellow of the CONICET (Argentina) and acknowledges the grant ANPCYT-PICT0079; (A. R) is a (U.S.) National Research Council sponsored Resident Research Associate at NASA/Goddard Space Flight Center.
# Plasma and Warm Dust in the Collisional Ring Galaxy VII Zw 466 from VLA and ISO Observations

## 1 Introduction

Collisions and interactions between galaxies provide a special opportunity for astronomers to study galaxies in non-equilibrium states (e.g. Schweizer 1997). Of particular interest are collisional systems which have well defined initial conditions and clear observational consequences. Models of “head-on” collisions, which began with the pioneering work of Lynds & Toomre (1976) and Toomre (1978), and have been extended considerably in both complexity and scope (see the review by Appleton & Struck-Marcell 1996; Struck 1997), fulfill those requirements. The radially expanding density waves driven into the disk of the “target” galaxy by such a collision can not only explain the phenomenon of the collisional ring galaxy, as Toomre had originally suggested (the “Cartwheel” is perhaps the best known example), but can also be useful as a means of exploring other aspects of the galactic disk affected by the compression of the ISM. In particular, in slightly off-center collisions, the expanding wave driven by the passage of the “intruder” through the disk can advance with greater strength in one direction than another, allowing for tests of such phenomena as star formation thresholds (see Appleton & Struck-Marcell 1987; Charmandaris, Appleton & Marston 1993) and studies of the compression of plasma, such as the cosmic-ray populations trapped in the disk. We present radio continuum, Mid-IR and CO-line observations of the particularly well studied northern ring galaxy VII Zw 466, first discovered by Cannon et al. (1970). As we shall show, the radio continuum emission traces the thermal and non-thermal (relativistic) particles in the disk of the ring galaxy, whereas the Mid-IR emission highlights unusually warm dusty regions associated, most likely, with the neutral/molecular gas boundaries surrounding the powerful O/B associations found in the ring. The observations will underline the differences between the fluid-like components of the ISM (in this case the cosmic-ray “gas”) and the star-formation regions that have developed as a result of the compression by the radially expanding wave. VII Zw 466 is one of the best studied northern rings (see Thompson & Theys 1978) and, unlike its relative in the South, the “Cartwheel” (see Higdon 1993, 1996; Charmandaris et al. 1999a), it comprises a single blue ring of star formation with no inner ring or obvious nucleus. VII Zw 466 was observed, along with a small sample of other galaxies, as part of a major study of the star formation properties and metal content of northern ring galaxies (Marston & Appleton 1995; Appleton & Marston 1997; Bransford et al. 1998, hereafter BAMC). VII Zw 466 was also mapped with the VLA in the neutral hydrogen line, and these observations provided information about the kinematics and dynamics of the cool gas in the ring galaxy and its nearby companions (Appleton, Charmandaris & Struck 1996, hereafter ACS). The H I observations suggested that the ring was formed by a collision between the progenitor of VII Zw 466 and a gas-rich dwarf spiral. A numerical model of the interaction was able to reproduce well the H I filaments extending from VII Zw 466, and a similar “plume” extending back from G2 towards the ring galaxy. In this paper we present ISO Mid-IR, VLA radio continuum, and Onsala Space Telescope millimeter observations of VII Zw 466 and its inner group. In §2 we describe the details of the observations.
In §3 we present the intensity maps, and in §4 the broad-band spectral energy distributions and radio spectral index maps of VII Zw 466 and its companions. In §5 we discuss the implications of the radio results, especially in the context of the compression of the ISM and the implications for the efficiency of the synchrotron emission from relativistic electrons. In §6 we explore the consistency between optically determined star formation rates and the radio observations, allowing us to estimate the number of UV photons heating the dust seen in the ISO observations. This well constrained UV flux is discussed in the context of the “warm” dust distribution seen with ISO. We attempt to develop a complete picture of VII Zw 466 as a collisionally compressed galaxy in §7. We state our conclusions in §8. In the Appendix we present upper limits to the molecular content of VII Zw 466 and one of its companions. Throughout this paper we will adopt a Hubble constant of 80 km s<sup>-1</sup> Mpc<sup>-1</sup>. Using the H I systemic velocity of 14,465 km s<sup>-1</sup> (ACS), we therefore assume a distance to the VII Zw 466 group of 180 Mpc.

## 2 The Observations

### 2.1 VLA Radio Continuum Observations

The $`\lambda `$6 cm (4.86 GHz = C-band) radio observations were made with the D-array of the VLA on August 25, 1992. Further observations were made at $`\lambda `$20 cm (1.425 GHz = L-band) and $`\lambda `$3.6 cm (8.43 GHz = X-band) with the C-array on October 28, 1994. AIPS was used to do the standard calibration and data reduction. Images were made and CLEANed using the AIPS routine MX. The resulting synthesized beam sizes for the $`\lambda `$$`\lambda `$20 cm, 6 cm and 3.6 cm observations were 18$`\times `$12 arcsecs, 18$`\times `$10 arcsecs, and 13.3$`\times `$8.7 arcsecs respectively. Thus, through the choice of appropriate VLA configurations, the resolutions of the $`\lambda `$6 cm and $`\lambda `$20 cm observations were closely matched. The $`\lambda `$3.6 cm observations were made using natural weighting of the interferometer data in order to provide the maximum sensitivity to faint features.

### 2.2 OSO Millimeter-Wave Observations

In order to attempt to detect molecular gas in VII Zw 466 and its spiral companion G2, we searched for <sup>12</sup>CO(1-0) emission towards those galaxies using the 20 m radio telescope at Onsala Space Observatory, Sweden. The observations were made in April 1999. At 110 GHz, the telescope half-power beamwidth is 34″ and the main-beam efficiency is $`\eta _{mb}=T_A^{*}/T_{mb}=0.55`$. We used a SIS receiver and a filterbank with a total bandwidth of 512 MHz and a resolution of 1 MHz. During the observations the typical system temperature was 500 K. The pointing was regularly checked on nearby SiO masers, and the pointing offsets were always below 10″. The spectra were smoothed to a final velocity resolution of 21.8 km s<sup>-1</sup>, and first-order baselines were subtracted. The total on-source integration time was 400 minutes for VII Zw 466 and 90 minutes for G2. Since the galaxies were not detected, we present in an Appendix the salient upper limits to the total molecular mass and a comparison with other work.

### 2.3 Mid-IR Observations

VII Zw 466 was observed by ISOCAM (Cesarsky et al. 1996) on June 28, 1996 (ISO revolution 224) as part of an ISO-GO observation of bright northern ring galaxies.
The galaxy was imaged through 4 filters, LW1 (4.5 [4.0-5.0] $`\mu `$m), LW7 (9.62 [8.5-10.7] $`\mu `$m), LW8 (11.4 [10.7-12.0] $`\mu `$m), and LW9 (15.0 [14.0-16.0] $`\mu `$m), with an on-source time of ∼9.3 minutes per filter. A lens giving a 3″ pixel field of view was used to create a 3$`\times `$2 raster map in a “microscan” mode in one direction. To improve the signal to noise ratio, the ISOCAM images were smoothed to yield a point-spread-function with a FWHM of 5.4 arcsecs. The overall field of view was 102$`\times `$102 arcsec<sup>2</sup>. The standard data reduction procedures described in the ISOCAM manual were followed (Delaney 1998). (The ISOCAM data presented in this paper were analyzed using “CIA”, a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matière, CEA, France.) Dark subtraction was performed using a model of the secular evolution of the ISOCAM dark currents (Biviano et al. 1997). Because of the well-known transient response to cosmic-ray events in the data, cosmic ray removal was performed using a combination of multi-resolution median filtering and time-series glitch removal. Memory effects in the detector were corrected using the so-called IAS transient correction algorithm (Abergel et al. 1996). These methods and their consequences are discussed in detail in Starck et al. (1999). The memory effects were worst in the LW9 (long-wavelength) filter, which was observed first (sequence LW9, LW8, LW7 and LW1) since we had no knowledge of the source structure prior to our target acquisition. Hence our signal to noise ratio is worse in this filter, reflecting the assumed arbitrary model point source for the state of the detectors prior to our observations. The rms noise in our ISOCAM images varied slightly over each image and was typically 5 $`\mu `$Jy/pixel in the LW1 image, 2 $`\mu `$Jy/pixel in LW7, 4 $`\mu `$Jy/pixel in LW8 and 6 $`\mu `$Jy/pixel in LW9.

## 3 The ISOCAM and VLA Imaging of VII Zw 466 and its Inner Galaxy Group

In Figures 1a-d we show the B-band greyscale image from Appleton & Marston (1997) of the inner VII Zw 466 group. On the figure we indicate the ring galaxy, the companion galaxies VII Zw 466 (Cl):G1 (elliptical, hereafter G1) and VII Zw 466 (Cl):G2 (edge-on disk and likely “intruder”, hereafter G2), and the background galaxy VII Zw 466 (Cl):B1, hereafter B1. (The names of the galaxies presented here are consistent with the names given to them by Appleton, Charmandaris & Struck (1996) and Appleton & Marston (1997), and their coordinates are given in Table 1 of the current paper.) Superimposed over the optical image, in Figures 1a-c, we present contour maps of the more interesting, longer-wavelength ISOCAM images, $`\lambda `$9.6 $`\mu `$m, $`\lambda `$11.4 $`\mu `$m and $`\lambda `$15.0 $`\mu `$m. In Figure 1d we show contours of the $`\lambda `$3.6 cm radio emission, again superimposed over the B-band image of the group. Three of the galaxies, VII Zw 466, G2, and B1, are detected in these longer wavelength bands. Only the elliptical galaxy G1 is detected at $`\lambda `$4.5 $`\mu `$m, as an unresolved point source (this map is not shown). Upper limits for the non-detections and the fluxes of the galaxies in each band are given in Table 1.
The distributions of Mid-IR emission at $`\lambda `$9.6 $`\mu `$m and $`\lambda `$11.4 $`\mu `$m shown in Figures 1a and b are very similar, except that the signal to noise level is higher in the $`\lambda `$9.6 $`\mu `$m image. At these wavelengths, the emission in VII Zw 466 is concentrated in three main regions, one in the north, one in the east and one in the western quadrant of the ring (the eastern source is missing in the $`\lambda `$11.4 $`\mu `$m image, but this is most likely because the source is weaker and just drops below the detection threshold at this wavelength). The distribution of Mid-IR emission at these wavelengths follows quite closely the brightest H$`\alpha `$ sources in the ring. Figure 2a shows the distribution of H$`\alpha `$ emission contours superimposed on a grey-scale representation of the $`\lambda `$9.6 $`\mu `$m emission. Strong emission in both bands is seen from the background galaxy B1 and the edge-on disk G2. The emission from G2 is elongated along the major axis of the galaxy and unresolved along the minor axis. The elliptical galaxy (G1) is detected at $`\lambda `$4.5 $`\mu `$m, and is marginally detected at $`\lambda `$9.6 $`\mu `$m. The background galaxy B1 is known to be at approximately twice the redshift of the VII Zw 466 group, and its Mid-IR emission provides a working point-spread-function for our observations because it is unresolved at all wavelengths at which it is detected ($`\lambda `$$`\lambda `$9.6, 11.4 and 15 $`\mu `$m). In Figure 1c we show the $`\lambda `$15.0 $`\mu `$m emission from the group. This wavelength was the first observed in the observing sequence, and is therefore of lower sensitivity than the other observations (see §2). The emission from VII Zw 466 is quite faint and is only detected at the 5$`\sigma `$ level when averaged over the whole galaxy. Nevertheless, the brightest emission knot is seen centered on the western quadrant of the ring, and faint emission is also seen from the eastern knot. Very faint emission may be detected from the interior of the ring in the north, but the emission is weak and would require higher sensitivity observations to confirm its reality. Oddly, the northern ring source, seen at both $`\lambda `$9.6 $`\mu `$m and $`\lambda `$11.4 $`\mu `$m, is absent in these observations. Considering that the northern knot is associated with the brightest H$`\alpha `$ emitting regions, the failure to detect it at $`\lambda `$15 $`\mu `$m is surprising (see Fig. 2a), but may also reflect the poorer quality of the $`\lambda `$15 $`\mu `$m observation (see the discussion of memory effects in §2.3). We note that the emission from the edge-on disk galaxy (G2) at $`\lambda `$15 $`\mu `$m breaks up into three discrete sources. In this region, a line of bad detector pixels crossed the galaxy position just south-east of the nucleus of G2. Flux from this region could not be recovered by the micro-scan technique because the galaxy was too close to the edge of the detector array. Hence the total flux for G2 in this band is probably underestimated in our observations by up to 20%. In Figure 1d we present the radio continuum emission at $`\lambda `$3.6 cm. The radio emission is superficially similar to the emission at $`\lambda `$9.6 $`\mu `$m. However, it is significantly more crescent-like and, unlike the $`\lambda `$9.6 $`\mu `$m image, shows no emission associated with the eastern star-formation complex.
The brightest radio emission comes from the north-west quadrant and, unlike the Mid-IR maps, does not show a close correspondence with individual H II regions, although it follows roughly the shape of the ring. There is a tendency for the radio emission to lie on the inside edge of the ring defined by the stars and H II regions. Figure 2b shows the $`\lambda `$3 cm radio emission contours superimposed on a greyscale representation of the H$`\alpha `$ emission. The ridge of radio emission along the south-western edge and the northern part of the ring peaks inside the ring. Only in the western ring is there a close correspondence between the peak of the radio emission and the peak in the H$`\alpha `$. We will argue from spectral index considerations that this is where the radio emission is predominantly thermal. In Figures 3a and b we present the radio continuum maps from the longer-wavelength 20 and 6 cm VLA observations. Although these observations are of lower resolution, they show the same crescent-shaped distribution seen in Figure 1d, with no evidence for emission from the eastern quadrant of the ring. Radio fluxes and globally averaged radio spectral indices for VII Zw 466 and the other detected galaxies are given in Table 2. The galaxy G2, the edge-on spiral believed to be the “intruder” galaxy, is strongly detected (see Figures 1d and 3a,b), and fainter emission is seen from the known background galaxy B1. Galaxy G2 shows a flattening of its radio spectrum towards higher frequencies (Table 2), whereas the globally averaged values of the spectral index are similar for the ring galaxy and the background galaxy B1. The steeper values of the spectral index are typical of a predominantly synchrotron spectrum whereas, for G2, the flattening of the spectrum may be the result of an increasingly thermal component, perhaps resulting from a powerful starburst. In Figures 4a and b we present the radio spectral index maps of the galaxies (computed between $`\lambda `$$`\lambda `$20 and 6 cm and between $`\lambda `$$`\lambda `$6 and 3.6 cm). Although the resolution in these maps is poor (we convolved the higher frequency observation to the resolution of the lower frequency one in each case), the ring galaxy shows some structure. The “horns” of the crescent in both maps show radio spectral indices typical of synchrotron emission, with values of $`\alpha `$ = -0.6 to -0.7 (close to the average for the whole galaxy; Table 2). However, in the center of the “crescent”, which corresponds to the western “hotspot” in Figure 1d, the radio spectrum shows a change of slope, from a spectral index which is quite steep between $`\lambda `$20 and 6 cm, $`\alpha `$ = -1.0, to a flat spectrum at the shorter wavelengths of $`\lambda `$6 to 3 cm ($`\alpha `$ = 0.0). Such a change in slope is typical of a radio spectrum which shifts from one dominated by synchrotron emission to a thermal spectrum. The radio observations therefore suggest that there is a strong source of hot plasma in the western part of the ring. Optically, this region does not show strong H$`\alpha `$ emission, but lies between two moderately bright H II regions.
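The two-point spectral indices quoted above follow from a simple ratio of fluxes at two frequencies (S ∝ ν<sup>α</sup>). A minimal sketch, with illustrative flux values chosen only to reproduce the steep and flat cases discussed in the text:

```python
import numpy as np

def spectral_index(S1, nu1_GHz, S2, nu2_GHz):
    """Two-point spectral index alpha, with the convention S ~ nu**alpha."""
    return np.log(S1 / S2) / np.log(nu1_GHz / nu2_GHz)

# Illustrative fluxes (arbitrary units), not measured values:
# a synchrotron-dominated pixel steepens between 20 cm and 6 cm ...
print(spectral_index(1.00, 1.425, 0.30, 4.86))   # ~ -1.0 (steep)
# ... while a thermally dominated pixel is flat between 6 cm and 3.6 cm
print(spectral_index(0.30, 4.86, 0.30, 8.43))    #   0.0 (flat)
```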
## 4 Spectral Energy Distributions and the Origin of the Mid-IR Emission

In Figure 5 we present spectral energy distributions (SEDs) for the four galaxies observed, based on our previous optical/IR photometry (Appleton & Marston 1997), the low-resolution Faint Source Catalog of IRAS (Moshir, Kopman & Conrow 1992), and the current radio data. The emission detected in the Mid-IR is an order of magnitude lower than the previous IRAS upper limits to the $`\lambda `$12 $`\mu `$m flux. The emission from the elliptical companion G1 at $`\lambda `$4.5 $`\mu `$m and $`\lambda `$9.6 $`\mu `$m is quite consistent with an extrapolation of the stellar continuum seen at shorter wavelengths. It is known that the Mid-IR spectrum of elliptical galaxies is dominated by their evolved stellar population, and can be modeled fairly well using a blackbody continuum with a temperature of ∼4500 K (Madden 1997). This is not the case for the other galaxies, where there is a clear Mid-IR excess. Although we explored a pure dust model of the Mid-IR SED of VII Zw 466 and obtained a satisfactory fit to these data with a dust temperature of $`T_D`$ = 226 K, we have decided not to present these results here, since it is very likely that the spectrum is affected by emission from the Unidentified Infrared Bands (UIBs). These broad lines (centered at $`\lambda `$$`\lambda `$6.2, 7.7, 8.6 and 11.3 $`\mu `$m) are commonly seen in active star-forming regions, and are usually attributed to emission from PAHs, since they can be produced by the stretching of C-C and C-H bonds (Léger & Puget 1984; Verstraete, Puget & Falgarone 1996). As a result, we find that our LW7 (9.6 $`\mu `$m) and LW8 (11.4 $`\mu `$m) filters are likely to be significantly affected by the 7.7-8.6 and 11.3 $`\mu `$m UIBs, after taking into account the redshift of the galaxies. Indeed, it is likely that in these bands much of the emission we see originates in the UIB features. In order to facilitate a comparison between our observations and other active star-forming regions in which UIBs are present, we show in Figure 6 the LW7, LW8 and LW9 fluxes for VII Zw 466 superimposed on two contrasting spectra, taken with ISO, of the well known Antennae galaxies (Mirabel et al. 1998; Vigroux et al. 1997). The spectra have been redshifted to that of VII Zw 466. It can be seen from Figure 6 that the UIBs at $`\lambda `$7.7 and $`\lambda `$8.6 $`\mu `$m are partially shifted into the LW7 ($`\lambda `$9.6 $`\mu `$m) filter. Similarly, the rather brighter UIB at $`\lambda `$11.3 $`\mu `$m, if present, is expected to affect the LW8 ($`\lambda `$11.4 $`\mu `$m) flux measurement. Because of the redshift of VII Zw 466, the LW9 bandpass (centered on $`\lambda `$15.0 $`\mu `$m) falls between the [NeII] and [NeIII] emission lines and effectively gives an uncontaminated measurement of the thermal continuum at $`\lambda `$15 $`\mu `$m. A comparison between our flux measurements for VII Zw 466 and the Antennae spectra shows that our measurements are consistent with a spectral energy distribution similar to that of Knot B in the Antennae. The main difference between Antennae Knots A and B is the strong enhancement of the $`\lambda `$12-16 $`\mu `$m thermal continuum, which is very powerful in Knot A as compared with the strength of the UIBs. In Knot B, the contributions to the spectrum from the UIBs and from a mild thermal continuum are much more comparable.
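The filter contamination described above is a simple matter of redshift bookkeeping. The sketch below checks which UIB features land in each ISOCAM band at the redshift of the group; the band edges are taken from §2.3, and the redshift follows from the H I systemic velocity (this is an illustrative check of ours, not part of the original analysis):

```python
# Redshift from the H I systemic velocity of 14,465 km/s
z = 14465.0 / 2.998e5

uib_rest = [6.2, 7.7, 8.6, 11.3]                       # UIB wavelengths (micron)
bands = {"LW7": (8.5, 10.7), "LW8": (10.7, 12.0), "LW9": (14.0, 16.0)}

for name, (lo, hi) in bands.items():
    hits = [w for w in uib_rest if lo <= w * (1.0 + z) <= hi]
    print(name, hits)
# -> LW7 catches the 8.6 um feature (the 7.7 um feature, shifted to 8.07 um,
#    sits just below the band edge), LW8 catches 11.3 um, and no UIB lands
#    in LW9.
```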
To quantify this similarity with the Antennae spectra further, we present in Table 3 the LW8/LW7 and LW9/LW7 flux ratios that would be expected for various input spectra obtained from publicly available ISO data sets. We take four different input spectra: three from the Antennae galaxies (Knots A, B and Z from Mirabel et al. 1998 and Vigroux et al. 1997), and one from the ISO Mid-IR spectrum of Arp 220, which shows a very strong silicate absorption feature (Charmandaris et al. 1999b). Table 3 shows that the flux ratios most consistent with those observed for VII Zw 466 are those of Knot B of the Antennae. This is a result of the contaminating influence of the $`\lambda `$11.3 $`\mu `$m UIB relative to the $`\lambda `$7.7 and $`\lambda `$8.6 $`\mu `$m UIB features. In most other cases the LW8/LW7 flux ratio is significantly greater than unity, due to the combination of the strength of the $`\lambda `$11.3 $`\mu `$m UIB feature and a relatively strong rising thermal continuum beyond $`\lambda `$10 $`\mu `$m. An extreme case is Arp 220, where the LW9/LW7 ratio is large, reflecting a combination of a rapidly increasing thermal continuum at longer wavelengths and a strong dip in the spectrum around $`\lambda `$10 $`\mu `$m due to silicates seen in absorption. (We also compared our results with spectra of the rare sources in which silicates are seen in emission rather than absorption, as is the case for some unusual H II regions observed recently by ISO (J. Lequeux, private communication). Again, this did not yield the LW8/LW7 ratio seen in VII Zw 466.) Table 3 shows that the Mid-IR spectra of G2 and the background galaxy B1 are similar to that of VII Zw 466 in their LW8/LW7 and LW9/LW7 ratios. In the case of G2, the latter ratio is not reliable because of the uncertain flux of G2 in the LW9 band (see §3).

## 5 Explaining the Radio Morphology: Compression of the ISM and/or an Increased Supernova Rate?

### 5.1 Amplification and Trapping of Cosmic-Ray Particles in a Compressed Disk?

Models of slightly off-center collisions between galaxies lead to the prediction that the ISM will be compressed more on one side of the target galaxy than on the other (Appleton & Struck-Marcell 1987). This effect was used by Charmandaris, Appleton & Marston (1993) to explore threshold star formation behavior in Arp 10. In VII Zw 466, the crescent-shaped distribution of the radio emission at $`\lambda `$3 cm is very suggestive of both compression and trapping of relativistic particles in the disk. In order to explore this idea further, we briefly consider the possibility that the crescent-shaped enhancement in the radio emission can be explained in terms of a modest (approximately 5-10 times) compression of the galactic disk on one side of the galaxy compared with the other. Such density enhancements are similar to those predicted in the model by ACS used to explain the H I fingers extending from VII Zw 466. Helou & Bicay (1993) showed that the radio synchrotron luminosity of a relativistic electron in a galactic disk is
$$L_{synch}=\chi L_{cr}(t_x/(1+t_x))$$ (1)
where L<sub>cr</sub> is the luminosity in the cosmic ray population, and t<sub>x</sub> is the ratio of two critical time-scales, t<sub>x</sub> = t<sub>esc</sub>/t<sub>synch</sub>.
Here t<sub>esc</sub> is the time taken for a cosmic ray electron to diffuse vertically within the disk to a point where it can escape confinement, t<sub>synch</sub> is the synchrotron lifetime of an electron in an average galactic magnetic field, and $`\chi `$ is a constant close to unity. To extract the maximum available luminosity from the electron before it loses too much energy, we require t<sub>esc</sub> ≫ t<sub>synch</sub> (or t<sub>x</sub> ≫ 1), and then L<sub>synch</sub> ≈ L<sub>cr</sub>. On the other hand, if t<sub>esc</sub> $`<`$ t<sub>synch</sub> (or t<sub>x</sub> $`<`$ 1), as a result of the electron escaping from the disk before it can deposit all its energy, then L<sub>synch</sub> $`<`$ L<sub>cr</sub> and the radio synchrotron luminosity will be significantly reduced. Helou & Bicay estimate the values of the two time-scales as follows:
$$t_{esc}\approx 10^7\,[h_{disk}/1\,\mathrm{kpc}]^2\,[1\,\mathrm{pc}/l_{mfp}]\,(\mathrm{cos}\,\varphi )^{-1}\;\mathrm{yr}$$ (2)
and
$$t_{synch}\approx 8\times 10^9\,[B/1\,\mu \mathrm{G}]^{-2}\,[E_o/1\,\mathrm{GeV}]^{-1}\,(\mathrm{sin}\,\varphi )^{-2}\;\mathrm{yr}$$ (3)
Here h<sub>disk</sub> is the scale-height of the magnetically confined galactic disk, l<sub>mfp</sub> is the mean free path of the electrons in the disk, B is the magnetic field strength, $`\varphi `$ is the angle of the electron momentum vector to the magnetic field vector, and E<sub>o</sub> is the initial injection energy of the particle. The escape time corresponds to a random walk out of the disk by scattering off the magnetic irregularities which set the scale of l<sub>mfp</sub>. Putting realistic values into the above expressions (h<sub>disk</sub> = 1 kpc, l<sub>mfp</sub> = 1 pc, B = 5 $`\mu `$G, E<sub>o</sub> = 5 GeV) we find that t<sub>esc</sub> = 10<sup>7</sup> yr and t<sub>synch</sub> = 6 $`\times `$ 10<sup>7</sup> yr, and so for cosmic ray electrons with energies of a few GeV, t<sub>esc</sub> is shorter than the synchrotron lifetime and we are in the regime in which L<sub>synch</sub> $`<`$ L<sub>cr</sub>. (In our galaxy, the break in the cosmic ray electron spectrum in the range 1-10 GeV (see the discussion by Longair 1994) is most likely a result of the escape of the electrons from the Galactic disk at the lower energies; hence the most efficient synchrotron production occurs for those that remain trapped, with energies $`>`$ 5 to 10 GeV.) We can now ask the question: what happens if the disk, containing a steady-state cosmic-ray population of relativistic electrons before the collision, is compressed by a factor of 5-10 in part of the ring-wave for about 50-100 million years? This compression timescale comes from the width of the stellar ring (3.6 arcsec, or approximately 3 kpc), divided by the radial expansion velocity of the ring, V<sub>rad</sub> = 32 km s<sup>-1</sup>, based on H I observations (see ACS). The compression has a two-fold effect. Firstly, it is likely that the component of the magnetic field perpendicular to the compressional shock wave will be amplified (B proportional to $`\rho `$), resulting in a decrease in the synchrotron lifetime by a factor of 25 to 100. Secondly, the compression will also order the random component of the magnetic field and reduce the average distance between magnetic irregularities in the disk in a linear way, increasing the time needed for the electron to escape from the disk by a factor of 5 to 10. The net effect is to increase t<sub>x</sub> by a factor of a few hundred, perhaps enough to allow the synchrotron emission from our canonical 5 GeV electron to reach its maximum efficiency, L<sub>synch</sub> ≈ L<sub>cr</sub>.
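The trapping argument of Eqs. (2) and (3) is easy to check numerically. The sketch below uses the canonical parameter values quoted above (with the geometrical factors cos φ and sin φ set to unity) and a nominal factor-of-7 compression standing in for the 5-10 range; the factor-of-a-few-hundred growth of t<sub>x</sub> scales roughly as the cube of the compression:

```python
def t_esc_yr(h_kpc=1.0, l_mfp_pc=1.0):
    """Eq. (2), with cos(phi) = 1: vertical random-walk escape time."""
    return 1e7 * h_kpc**2 / l_mfp_pc

def t_synch_yr(B_uG=5.0, E_GeV=5.0):
    """Eq. (3), with sin(phi) = 1: synchrotron lifetime."""
    return 8e9 / (B_uG**2 * E_GeV)

tx_before = t_esc_yr() / t_synch_yr()       # ~ 0.16: the electron leaks out

c = 7.0                                     # nominal compression factor
tx_after = t_esc_yr(l_mfp_pc=1.0 / c) / t_synch_yr(B_uG=5.0 * c)

print(tx_before, tx_after, tx_after / tx_before)   # t_x grows by c**3 ~ 340
```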
The compression of the disk in the ring wave temporarily decreases the energy at which disk CR electrons will just be trapped by the disk, from typically E<sub>o</sub> ≈ 10 GeV to around 1 GeV. The effect on the overall synchrotron luminosity will be a significant increase in the radio luminosity in the region just behind the peak compression of the wave. In the models of off-center collisions, just such a crescent-shaped compression is expected, and the fact that the radio emission is offset a little towards the inner edge of the ring is confirmation that the basic idea is correct. One (rather untestable!) prediction of our observations would be that if it were possible to actually measure the electron energy spectrum of cosmic rays in the enhanced region of VII Zw 466, a spectral break should occur at lower energies (more like 1 GeV) than that seen in our own Galactic disk (5-10 GeV; e.g. Longair 1994).

### 5.2 Supernova Rates in VII Zw 466: An Alternative to Compression?

In the last section we explored the possibility that disk compression alone is responsible for the crescent-shaped radio distribution. An alternative explanation for the enhanced radio emission on the western half of VII Zw 466 might be the generation of new CR particles by Type II supernovae resulting from the triggered star formation in the ring. Although this does not contradict the analysis in the previous section (we did not say where the CR electrons originated), it does mean that the relativistic electrons do not need to pre-date the collision, but can be created in situ as a direct consequence of the massive star formation process itself. In this picture, the compression need not be the main factor responsible for the enhanced radio emission; rather, enhanced star formation would lead to a higher SN rate on one side of the galaxy than on the other. Condon & Yin (1990) provide an estimate of the supernova rate from the non-thermal component of the radio synchrotron luminosity using the formula
$$L_{NT}=1.3\times 10^{23}\,(\nu /1\,\mathrm{GHz})^{\alpha }\,(R_{SN}/\mathrm{yr}^{-1})\;\mathrm{W\,Hz}^{-1}$$ (4)
where $`\nu `$ is the radio frequency of the observation, $`\alpha `$ is the spectral index, and R<sub>SN</sub> is the rate of Type II supernovae per year. Assuming that 70% of the radio emission from VII Zw 466 at $`\lambda `$6 cm is non-thermal (a figure based on a simple fit to the spectrum of the galaxy), we find that L<sub>NT</sub> = 4.4 $`\times `$ 10<sup>21</sup> W Hz<sup>-1</sup>. With this figure, we find a supernova rate R<sub>SN</sub> = 0.048 yr<sup>-1</sup>. How reasonable is this number? One way to check it is to see whether this supernova rate agrees with the expected rate based on our knowledge of the star formation rates in VII Zw 466. The total H$`\alpha `$ luminosity of VII Zw 466 is 7.2 $`\times `$ 10<sup>41</sup> ergs s<sup>-1</sup>, or 7.2 $`\times `$ 10<sup>34</sup> W (Marston & Appleton 1994). Based on the models of Kennicutt (1983) we can calculate from the H$`\alpha `$ luminosity the rate of star formation for stars more massive than 10 M<sub>☉</sub> to be 0.97 M<sub>☉</sub>/yr for VII Zw 466 as a whole. If we assume that all stars more massive than 10 M<sub>☉</sub> explode as supernovae, then the number of stars being born per year with masses $`>`$ 10 M<sub>☉</sub> will be equivalent to the Type II supernova rate. The rate will increase with time, since the main contribution to the supernova rate comes from the more populous lower mass stars, which have longer main sequence lifetimes.
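The IMF bookkeeping behind this estimate is sketched below: the number of stars formed per solar mass in the 10-80 M<sub>☉</sub> window, for a power-law IMF, multiplied by the massive-star formation rate of 0.97 M<sub>☉</sub>/yr. The numerical integration is ours; it reproduces the rates quoted in the text to within a few percent.

```python
import numpy as np

def stars_per_msun(slope=-2.35, m_lo=10.0, m_up=80.0):
    """Stars formed per solar mass in [m_lo, m_up] for xi(m) ~ m**slope."""
    m = np.linspace(m_lo, m_up, 200000)
    number = np.trapz(m ** slope, m)            # integral of xi(m) dm
    mass = np.trapz(m ** (slope + 1.0), m)      # integral of m * xi(m) dm
    return number / mass

sfr_massive = 0.97                              # Msun/yr in stars above 10 Msun
for slope in (-2.35, -3.0):                     # Salpeter and a bottom-heavy IMF
    print(slope, stars_per_msun(slope) * sfr_massive)
# -> about 0.046/yr (Salpeter) and 0.055/yr, bracketing the radio-derived
#    value of 0.048/yr and close to the 0.044-0.053/yr quoted in the text.
```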
However, this H$`\alpha `$-based rate will be reasonable if it is averaged over a few $`\times `$ 10<sup>7</sup> yr, which is also the approximate propagation time of the ring wave through a given region of the gas disk. For an assumed upper mass cut-off of 80 M<sub>☉</sub> and an IMF slope of -2.35 (Salpeter), the rate of formation of massive stars is 0.045 $`\times `$ dM/dt, or 0.044 stars/yr. This would correspond to the supernova rate (R<sub>SN</sub> = 0.044 yr<sup>-1</sup>) under the assumptions given above. This figure is in close agreement with the value derived from the non-thermal radio emission, although the exactness of the agreement is fortuitous. For example, a change in the slope of the IMF to -3 (a bottom-heavy IMF) would increase the supernova rate to 0.053/yr. A realistic value for the Type II supernova rate derived from the H$`\alpha `$ fluxes is therefore R<sub>SN</sub> = 0.045 $`\pm `$ 0.01, or about one supernova every 20 years. The lower limit to the star formation rate implied by the H$`\alpha `$ observations (uncorrected for extinction) is therefore capable of delivering the observed radio flux through in situ Type II supernovae associated with the star-forming regions. The agreement also suggests that the H$`\alpha `$ flux is relatively free of absorption in VII Zw 466. These facts make VII Zw 466 an interesting place to look for supernova explosions at optical and radio wavelengths, and we would urge that the galaxy be monitored every few years for signs of explosions in the outer ring. (We also note that we obtain a very similar result, R<sub>SN</sub> = 0.046, from a composite optical/radio method (see equation 12 of Condon & Yin 1990). Since this method depends not only upon the H$`\alpha `$ flux but also on the ratio of the thermal to non-thermal radio emission, the agreement between that result and the one presented in the main text is confirmation that we have correctly separated the thermal from the non-thermal radio emission (70% to 30%) in VII Zw 466. The value of R<sub>SN</sub> derived by this method is sensitive to the assumed ratio of thermal to non-thermal emission.)

## 6 The Thermal Ultraviolet Continuum

The Mid-IR observations suggest that dust grains are being heated by the hot young stars. What limits can be placed on the strength of the UV emission? We are in the fortunate position, in VII Zw 466, of having two relatively independent methods of measuring the UV continuum, namely the radio continuum and the optical hydrogen recombination line flux (the Far-IR flux can also provide a third, rough measure of the UV flux). The first measure is from the H$`\alpha `$ flux, which can be used to estimate the strength of the Lyman continuum. From the total H$`\alpha `$ luminosity (Marston & Appleton 1995) we can estimate the total number of ionizing UV photons, N<sub>uv</sub> = L/1.36 $`\times `$ 10<sup>-19</sup> = 5.3 $`\times `$ 10<sup>53</sup> s<sup>-1</sup>, where L is the H$`\alpha `$ luminosity in W (assuming an electron temperature of 10<sup>4</sup> K). Since no internal reddening correction has been made to the H$`\alpha `$ flux, this UV flux is a lower limit. Secondly, we can make a crude decomposition of the radio flux into thermal and non-thermal components, based on the spectral index radio maps, and use this to estimate the UV emission. Approximately 30% of the total radio flux at $`\lambda `$6 cm is estimated to be thermal.
Hence, again from Condon & Yin, we can determine the number of UV photons from
$$N_{uv}=6.3\times 10^{32}\,(L_T/\mathrm{W\,Hz}^{-1})\,(\nu /\mathrm{GHz})^{0.1}\,T_4^{-0.45}\;\mathrm{s}^{-1}$$ (5)
From the $`\lambda `$6 cm radio flux we obtain a value of N<sub>uv</sub> = 8 $`\times `$ 10<sup>53</sup> s<sup>-1</sup> if T<sub>4</sub> = 1 (the electron temperature in units of 10<sup>4</sup> K). Hence there is excellent agreement between the H$`\alpha `$ determination and the radio determination of the number of ionizing photons. The radio result suggests an upper limit to the optical extinction near $`\lambda `$6563 Å of $`<`$ 0.4 mag. This would translate to A<sub>v</sub> $`<`$ 1 mag for a Galactic extinction curve, and is very consistent with the low value of A<sub>v</sub> = 0.2 found for all of the H II regions observed spectroscopically by BAMC, except for one knot inside the ring on the western side. (The Far-IR flux provides an additional check on the UV flux if we assume that the FIR luminosity contains a large component from re-processed UV radiation from star-forming regions. The total FIR luminosity for VII Zw 466 can be estimated from the IRAS flux, L<sub>FIR</sub> = 7.99 $`\times `$ 10<sup>36</sup> W; VII Zw 466 is F12297+6641 in the IRAS Faint Source Catalog, with log(FIR) = -13.69 W m<sup>-2</sup>. This compares with a UV luminosity derived from the radio estimate above of L(radio)<sub>uv</sub> = 2.7 $`\times `$ 10<sup>36</sup> W, if the average UV photon energy is 20 eV, characteristic of a typical 50,000 K O star.)

## 7 Building a Complete Picture of the Activity in VII Zw 466

Previous observations of VII Zw 466 have shown that the outer ring consists of star clusters and H$`\alpha `$ emission regions consistent with a young (10-20 million year old) population of stars recently laid down by the passage of a radially expanding wave through the disk (BAMC, ACS). In this paper we have presented some new ingredients, namely the radio continuum emission and the Mid-IR emission from ISO. We have determined that the thermal component of the radio emission is in good agreement with the global H$`\alpha `$ emission in providing a quantitative estimate of the ultraviolet continuum from the star formation. This observation allows us to rule out the possibility that VII Zw 466 is heavily obscured by dust at optical wavelengths. It is this same UV continuum which is presumably responsible for the heating of the Mid-IR emission regions seen by ISO, at least in the $`\lambda `$9.6 and $`\lambda `$11.4 $`\mu `$m ISOCAM bands. The UV continuum must also heat the grains responsible for the Far-IR emission, and we have seen that the fluxes we have calculated are at least comparable with the total Far-IR flux for reasonable mean UV photon energies. Based on the fluxes given in Table 1, we can estimate the total Mid-IR luminosity of VII Zw 466, L<sub>MIR</sub> = 2.39 $`\times `$ 10<sup>35</sup> W, which is approximately 10% of the estimated available UV luminosity (L<sub>uv</sub> = N<sub>uv</sub>$`<`$E<sub>uv</sub>$`>`$, where we assume an average UV photon energy $`<`$E<sub>uv</sub>$`>`$ = 20 eV). The distribution of H II regions in VII Zw 466 has already been shown in Figures 2a and 2b. The observations, first presented by Marston & Appleton (1995), are further analysed here. In Figure 2a we label the knots, and we present the H$`\alpha `$ and $`\lambda `$9.6 $`\mu `$m fluxes in Table 4. As mentioned earlier, the 9.6 $`\mu `$m emission follows closely the H II region distribution.
Even Knot 3, the eastern knot, is detected at $`\lambda `$9.6 $`\mu `$m. From Table 4 it can be concluded that the fluxes of the ISO emission do not scale linearly with the fluxes of the H II region complexes. Since significant optical extinction is not responsible, it is more likely that the variations from one region to another are due to an irregular filling factor for the dust in the UV radiation field (we know that only 10% is absorbed on average, but this could vary wildly from one region to another). Alternatively, if the grains responsible for the emission are small, they may be subject to thermal spiking by single UV photons. In this case, there is no reason to expect that the IR flux from a collection of such grains would scale linearly with the incident UV flux. Figure 2b also emphasizes the difference between the radio and H$`\alpha `$ distributions: the radio emission does not follow the H II regions as faithfully as the $`\lambda `$9.6 $`\mu `$m emission does. This may be because the radio emission follows the region of maximum compression of the hydrodynamic disk (see §5), whereas the star formation is more stochastic, showing hot-spots as different star clusters form at slightly different times around the ring. The observations show that, even in a galaxy in which many of the overall parameters of the collision are quite well known, the distribution of the various phases of the ISM and of the star formation which develops is quite complex. An unexpected aspect of the ISOCAM observations is the distribution of $`\lambda `$15 $`\mu `$m emission in the ring galaxy (Figure 1c, from the ISOCAM LW9 filter). Only the H$`\alpha `$ knots 3 and possibly 6 (see Figure 2a for the labeling of the knots) are coincident with corresponding knots at $`\lambda `$15 $`\mu `$m. It is not clear whether this is because of the poor signal to noise ratio of this ISO band, or whether the emission really does avoid the sources of UV radiation. This result is consistent with the behavior of the majority of the H II region complexes in the Cartwheel galaxy (Charmandaris et al. 1999a), which are also absent at $`\lambda `$15 $`\mu `$m but are detected at shorter wavelengths. Only one extremely powerful H II region complex in the Cartwheel was detected at $`\lambda `$15 $`\mu `$m, and it was suggested that this was because of its unusually large H$`\alpha `$ luminosity. It is possible that strong winds and radiation pressure may lift the grains responsible for the $`\lambda `$15 $`\mu `$m emission to large distances from the H II regions, causing them to radiate in the Far-IR, unless the H II regions are especially luminous. None of the H II regions in VII Zw 466 are comparable in luminosity with the very bright complex in the Cartwheel (see BAMC). However, we caution that the emission from VII Zw 466 in the LW9 filter is weak, and this filter was more prone to memory effects than the shorter wavelength filters. Higher S/N ratio observations will be required to confirm the above results.

## 8 Conclusions

The observations at radio and IR wavelengths have shown that:

1) Emission from the ring galaxy is detected in three ISOCAM bands ($`\lambda `$$`\lambda `$9.6, 11.4 and 15 $`\mu `$m; filters LW7, LW8 & LW9). The total Mid-IR luminosity L<sub>MIR</sub> is approximately 10% of the available UV luminosity from star formation found by two independent methods. The emission at $`\lambda `$9.6 and $`\lambda `$11.4 $`\mu `$m follows, but does not precisely scale with, the flux of the H$`\alpha `$ emitting regions.
This is consistent with the dust grains being heated in clumpy, somewhat irregular distributions surrounding the H II region complexes, or with the grains being distributed uniformly but being small, thermally-spiked grains. The $`\lambda `$15 $`\mu `$m emission is only marginally detected, but seems to be poorly correlated with the H II regions, and some of its emission may lie inside the ring.

2) The Mid-IR spectral energy distribution is most consistent with emission from grains warmed by young stars. The shorter wavelength bands observed around $`\lambda `$8-11 $`\mu `$m are contaminated by emission from Unidentified Infrared Bands (probably thermally spiked PAHs), whereas the longer wavelength band at 15 $`\mu `$m shows a weak thermal continuum similar to that of Knot B in the Antennae galaxies.

3) Radio observations, made at $`\lambda `$$`\lambda `$3, 6 and 20 cm, reveal a crescent-shaped distribution of emission which peaks on the inside edge of the ring, but is not as closely associated with the H II region complexes as the Mid-IR emission. Spectral index variations around the ring suggest that about 30% of the emission at $`\lambda `$6 cm is thermal, and this is consistent with the observed H$`\alpha `$ flux, suggesting that little optical extinction is present in the galaxy. The non-thermal component, which dominates the radio emission, is a synchrotron component which we associate with a cosmic ray (CR) population trapped in the disk of VII Zw 466. The enhancement in the radio flux on one side of the ring (giving it an apparent crescent-shaped distribution) is caused by either: i) compressional amplification and trapping of the synchrotron emitting particles by the compression wave, which is stronger on one side of the galaxy than the other (a result of the off-center nature of the collision), or ii) an enhancement in the number of Type II supernovae on one side of the galaxy relative to the other. Both mechanisms seem plausible and both may play a role in defining the radio morphology.

4) By a number of different methods we calculate the Type II supernova rate in the ring of VII Zw 466 to be R<sub>SN</sub> = 0.045 $`\pm `$ 0.01 yr<sup>-1</sup> (approximately one every 20 years). A comparison between the optical and radio observations suggests that VII Zw 466 has unusually low optical extinction (a result also suggested by optical spectroscopy; BAMC). Hence this galaxy would be an ideal candidate for automated supernova searches and for repeated radio imaging for radio supernovae.

5) The difference in morphology between the radio and optical/Mid-IR distributions may result from the different ways in which the various components of a galaxy respond to the compressional wave which is expected to pass through the “target” disk in a ring-galaxy collision. The strength of the radio emission may respond directly to the overdensity and compression of the ISM (especially the cosmic-ray “fluid”), whereas the star formation and its associated dust are more stochastic, resulting from parts of the disk being pushed into a star-formation mode in a non-uniform way, leading to a more scattered distribution of emission centers around the ring.

6) The galaxy G2, an edge-on disk a few diameters away from the ring, is strongly detected at Mid-IR and radio wavelengths, and has a spectrum similar to that of the ring galaxy. It is likely that the star formation activity in that galaxy has been enhanced because of the interaction with VII Zw 466.
Previous H I observations and modelling have suggested that G2 is the “bullet” responsible for the formation of the ring in VII Zw 466.

P. N. Appleton is grateful for the hospitality shown by F. Mirabel and V. Charmandaris (Paris), and C. Horellou (Onsala) in the summer of 1997, when the data reduction for the ISOCAM observations was performed, and to F. Ghigo (NRAO) for similar hospitality at NRAO Green Bank in 1995. We are grateful to an anonymous referee for helpful comments on the manuscript. The authors have enjoyed discussions with C. Struck (ISU), J. H. Black (Onsala Space Observatory) and F. Combes (Paris Obs.). This work is supported in part by NASA/NAG 5-3317 (ISOCAM) and NSF grant AST-9319596 (radio observations). V. Charmandaris would like to acknowledge financial support from a Marie Curie fellowship grant (ERBFMBICT960967).

## Appendix A Upper Limits to the Molecular and Total Gas Content of VII Zw 466 and its Companion VII Zw 466 (Cl):G2

In §2.2 we described the OSO observations of VII Zw 466 and G2. No significant CO emission was detected from either of the two galaxies. Upper limits can be determined from the noise in the final spectra ($`\sigma _{mb}`$ = 2.3 mK for VII Zw 466 and 10 mK for G2) and the H I linewidths ($`\mathrm{\Delta }v_{HI}=202`$ km s<sup>-1</sup> for VII Zw 466 and 84 km s<sup>-1</sup> for G2; see ACS). The inferred limits on the H<sub>2</sub> masses given in Table A1 were calculated using a standard CO-to-H<sub>2</sub> conversion factor (N(H<sub>2</sub>)/I(CO) = 2.3 $`\times `$ 10<sup>20</sup> mol cm<sup>-2</sup> (K km s<sup>-1</sup>)<sup>-1</sup>; Strong et al. 1988). We caution that it is not known whether this factor is appropriate for collisional systems. The limits on log(M(H<sub>2</sub>)/L<sub>B</sub>) are $`<`$ -1.46 for VII Zw 466 and $`<`$ -0.93 for G2 (L<sub>B</sub> is based on the CCD photometry of Appleton & Marston 1997). Hence VII Zw 466 lies slightly, though not significantly, below the average value measured in ring galaxies ($`<`$log(M(H<sub>2</sub>)/L<sub>B</sub>)$`>`$ = -1.18 $`\pm `$ 0.41; Horellou et al. 1995). Given the star formation rate of 0.97 M<sub>☉</sub>/yr derived from the H$`\alpha `$ imaging, it would seem that VII Zw 466 would take at least 1 Gyr to deplete its molecular gas, and this is longer than the dynamical time for the ring to propagate out of the disk (∼10<sup>8</sup> yr; see ACS). The results also allow us to limit the total gas mass relative to the stellar luminosity. In ACS, it was suggested that the low neutral hydrogen to optical luminosity ratio of G2 may be evidence for significant gas stripping as a result of its passage through the disk of VII Zw 466 in the past. The molecular-line observations provide constraints on any molecular gas which may be present in both galaxies. Based on the H I observations of ACS, we can now place limits on the total gas mass in both galaxies, and on various gas properties of the galaxies. These are presented in Table A1. The results show that VII Zw 466 and G2 have similar total gas-to-light ratios. The molecular upper limit on G2 is not sufficiently stringent to support the idea that G2 is gas poor (the current limit on molecular hydrogen is a factor of two higher than the detected H I mass).
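For reference, the chain of unit conversions behind limits of this kind can be sketched as follows. The 3σ line-intensity convention below (3σ√(Δv δv)) and the Gaussian-beam area are our assumptions for illustration; the exact convention used for Table A1 is not restated in the text, so the number printed here should be read as an order-of-magnitude check rather than a reproduction of the tabulated limit.

```python
import numpy as np

# Inputs quoted in the Appendix (VII Zw 466)
sigma_mb = 2.3e-3          # K, rms main-beam temperature
dv_line  = 202.0           # km/s, adopted H I linewidth
dv_chan  = 21.8            # km/s, smoothed channel width
X_co     = 2.3e20          # cm^-2 (K km/s)^-1, Strong et al. (1988)
D_Mpc    = 180.0           # Mpc, adopted distance
theta    = 34.0            # arcsec, OSO 20 m beam FWHM at 110 GHz

# Assumed 3-sigma convention for an undetected line of known width
I_co = 3.0 * sigma_mb * np.sqrt(dv_line * dv_chan)          # K km/s
N_h2 = X_co * I_co                                          # cm^-2 (beam average)

# Physical area of a Gaussian beam (solid angle ~ 1.133 theta^2) at distance D
cm_per_pc = 3.086e18
R_beam_cm = (theta / 206265.0) * D_Mpc * 1e6 * cm_per_pc    # beam FWHM in cm
area_cm2 = 1.133 * R_beam_cm**2

m_H2, M_sun = 2.0 * 1.6726e-24, 1.989e33                    # grams
M_h2 = N_h2 * area_cm2 * m_H2 / M_sun
print(f"I_CO(3sigma) < {I_co:.2f} K km/s  ->  M(H2) < {M_h2:.1e} Msun")
# -> of order 10^9 Msun, consistent with the ~1 Gyr depletion time above
```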