no-problem/9909/cond-mat9909194.html
# Anomalous magneto-oscillations in two-dimensional systems
## Abstract
The frequencies of Shubnikov-de Haas oscillations have long been used to measure the unequal population of spin-split two-dimensional subbands in inversion asymmetric systems. We report self-consistent numerical calculations and experimental results which indicate that these oscillations are not simply related to the zero-magnetic-field spin-subband densities.
Spin degeneracy of the electron states in a solid is the combined effect of inversion symmetry in space and time. Both symmetry operations change the wave vector $`𝐤`$ into $`-𝐤`$, but time inversion also flips the spin, so that combining both we have a two-fold degeneracy of the single-particle energies, $`E_+(𝐤)=E_{-}(𝐤)`$ (Ref. ). When the potential through which the carriers move is inversion asymmetric, however, the spin-orbit interaction removes the spin degeneracy even in the absence of an external magnetic field $`B`$. This $`B=0`$ spin splitting is the subject of considerable interest because it concerns details of the energy band structure that are important in both fundamental research and electronic device applications ( and references therein).
The spin splitting of the single particle energies yields two spin subbands with different populations $`N_\pm `$. The frequencies $`f_\pm ^{\mathrm{SdH}}`$ of longitudinal magnetoresistance oscillations in small magnetic fields perpendicular to the plane of the system, known as Shubnikov-de Haas (SdH) oscillations, have often been used to measure the $`B=0`$ spin-subband densities $`N_\pm `$ following
$$N_\pm =\frac{e}{h}f_\pm ^{\mathrm{SdH}}.$$
(1)
Here $`e`$ is the electron charge and $`h`$ is Planck’s constant. Eq. (1) is based on a well-known semiclassical argument due to Onsager which relates the cyclotron motion at $`B>0`$ with extremal cross sections of the Fermi surface at $`B=0`$. In this paper, we test both experimentally and theoretically the validity of this procedure. We obtain good agreement between experimental and calculated SdH oscillations. On the other hand the calculated $`B=0`$ spin splitting differs substantially from the predictions of Eq. (1). We will show that this difference reflects the inapplicability of conventional Bohr-Sommerfeld quantization for systems with spin-orbit interaction.
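Eq. (1) amounts to a simple unit conversion between an SdH frequency and a 2D sheet density. As a quick numerical illustration (the frequency value below is hypothetical, not taken from the data):

```python
# Convert a Shubnikov-de Haas frequency (tesla) to a 2D subband density
# via Eq. (1), N = (e/h) f.  Since 1 T = 1 V s / m^2, e/h has units of
# m^-2 per tesla.  The input frequency here is illustrative only.
E_CHARGE = 1.602176634e-19   # elementary charge, C
H_PLANCK = 6.62607015e-34    # Planck constant, J s

def sdh_density(f_tesla):
    """2D carrier density in cm^-2 for an SdH frequency given in T."""
    n_per_m2 = (E_CHARGE / H_PLANCK) * f_tesla
    return n_per_m2 * 1e-4   # m^-2 -> cm^-2

# An SdH frequency of ~6.8 T corresponds to ~1.6e11 cm^-2.
print(f"{sdh_density(6.8):.3e} cm^-2")
```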
The subject of our investigation is two-dimensional (2D) hole systems in modulation-doped GaAs quantum wells (QW’s). We use GaAs because high-quality samples can be grown which allow the observation of many SdH oscillations, and because the band-structure parameters are well known, so that accurate numerical calculations can be performed. The crystal structure of GaAs is zinc blende, which is inversion asymmetric. Furthermore, a QW structure can be made asymmetric if an electric field $`E_{\perp}`$ is applied perpendicular to the plane of the well. Therefore, at a given 2D hole density, the $`B=0`$ spin splitting in these systems has a fixed part due to the bulk inversion asymmetry (BIA) and a tunable part due to the structure inversion asymmetry (SIA).
Figure 1 highlights the main findings of our paper. It shows the Fourier spectra of the calculated \[Fig. 1(a)\] and measured \[Fig. 1(b)\] SdH oscillations as well as the expected peak positions $`(h/e)N_\pm `$ according to the calculated spin-split densities $`N_\pm `$ at $`B=0`$ \[open circles in Fig. 1(a)\] for a 2D system with constant hole density $`N=N_++N_{-}=3.3\times 10^{11}`$ cm<sup>-2</sup> but with varying $`E_{\perp}`$. Even around $`E_{\perp}=0`$, when we have only BIA but no SIA, the open circles indicate a significant spin splitting $`\mathrm{\Delta }N=N_+-N_{-}`$. However, the Fourier spectra in Figs. 1(a) and (b), while in good agreement with each other , deviate substantially from the zero-$`B`$ spin splitting: for nearly all values of $`E_{\perp}`$ the splitting $`(h/e)\mathrm{\Delta }N`$ is significantly larger than $`\mathrm{\Delta }f=f_+^{\mathrm{SdH}}-f_{-}^{\mathrm{SdH}}`$. In particular, near $`E_{\perp}=0`$ only one SdH frequency is visible in both the measured and calculated spectra, whereas we would expect to obtain two frequencies . In the following we will show how one can understand these results. We will briefly describe some details of our calculations and experiments and then discuss the physical origin of when and why Eq. (1) fails.
Our calculations are based on the methods discussed in Refs. . A multiband Hamiltonian containing the bands $`\mathrm{\Gamma }_6^c`$, $`\mathrm{\Gamma }_8^v`$ and $`\mathrm{\Gamma }_7^v`$ is used to calculate the hole states in the QW. It fully takes into account the spin splitting due to BIA and SIA. The Poisson equation is solved self-consistently in order to obtain the Hartree potential. We obtain two spin-split branches of the energy dispersion $`E_\pm (𝐤_{\parallel})`$ as a function of the in-plane wave vector $`𝐤_{\parallel}`$. However, we do not call these branches spin-up or spin-down because the eigenstates are not spin polarized, i.e., they contain equal contributions of up and down spinor components. (This reflects the fact that for $`B=0`$ the system has a vanishing magnetic moment.) From $`E_\pm (𝐤_{\parallel})`$ we obtain the populations $`N_\pm `$ of these branches .
For the calculation of SdH oscillations we use the very same Hamiltonian discussed above, so that the results for $`B=0`$ and $`B>0`$ are directly comparable. We introduce the magnetic field by replacing the in-plane wave-vector components with Landau raising and lowering operators in the usual way . From the Landau fan chart, using a Gaussian broadening, we obtain the oscillatory density of states at the Fermi energy, which is directly related to the electrical conductivity . In order to match the experimental situation the Fourier spectra in Fig. 1(a) were calculated for $`B`$ between 0.20 and 0.85 T ($`B^{-1}`$ between 1.17 and 5.0 T<sup>-1</sup>). We note that the positions of the peaks in the Fourier spectra in Fig. 1(a) depend only on the Landau fan chart as determined by the multiband Hamiltonian . A single peak in the Fourier spectrum corresponds to the situation that at the Fermi energy the spacing between Zeeman-split Landau levels is a fraction $`\alpha `$ of the spacing between Landau levels with adjacent Landau quantum numbers $`n`$ and $`n+1`$, with a constant $`\alpha `$ independent of $`B`$.
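A rough numerical sketch of this procedure (not the multiband calculation itself, which has no simple decoupling): build a toy Landau fan for two decoupled parabolic subbands, broaden the levels with a Gaussian, and Fourier transform the resulting density of states on a uniform $`B^{-1}`$ grid. All parameter values are illustrative.

```python
import numpy as np

# Toy SdH analysis: two decoupled, unequally populated 2D subbands,
# Gaussian-broadened DOS at a fixed Fermi energy, sampled uniformly in
# 1/B over the window quoted in the text (1.17-5.0 T^-1), then FFT'd.
HBAR, E_CH, M0 = 1.0546e-34, 1.6022e-19, 9.109e-31
m_eff = 0.38 * M0                      # heavy-hole-like mass (illustrative)
N_sub = (1.9e15, 1.4e15)               # subband densities in m^-2 (illustrative)
gamma = 2.0e-24                        # Gaussian level broadening, J

inv_B = np.linspace(1.17, 5.0, 4096)   # uniform grid in 1/B (T^-1)
B = 1.0 / inv_B

def dos_at_fermi(B, density):
    """Broadened DOS at the (fixed) Fermi energy of one parabolic subband."""
    omega_c = E_CH * B / m_eff                       # cyclotron frequency
    e_fermi = 2 * np.pi * HBAR**2 * density / m_eff  # spin-resolved 2D E_F
    n = np.arange(400)[:, None]                      # Landau indices
    levels = HBAR * omega_c * (n + 0.5)
    return np.exp(-((levels - e_fermi) ** 2) / (2 * gamma**2)).sum(axis=0)

sig = sum(dos_at_fermi(B, N) for N in N_sub)
sig -= sig.mean()
spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
freqs = np.fft.rfftfreq(sig.size, d=inv_B[1] - inv_B[0])   # in tesla

mask = freqs > 2.0                     # skip low-frequency leakage
f_peak = freqs[mask][np.argmax(spec[mask])]
print("strongest SdH frequency:", round(float(f_peak), 2), "T")
print("Eq.(1) positions (h/e)N:", [round(2*np.pi*HBAR/E_CH*N, 2) for N in N_sub])
```

For this decoupled toy model the Fourier peaks do land at $`(h/e)N_\pm `$; the point of the paper is precisely that the real, spin-orbit-coupled system does not.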
For measurements, we use Si modulation doped GaAs QW’s grown by molecular beam epitaxy on the (113)A surface of an undoped GaAs substrate. The well width of the sample in Fig. 1 is 200 Å. Photolithography is used to pattern Hall bars for resistivity measurements. The samples have metal front and back gates that control both the 2D hole density and $`E_{}`$. Measurements are done at a temperature of 25 mK. In order to vary $`E_{}`$ while maintaining constant density we first set the front gate ($`V_{\mathrm{fg}}`$) and back gate ($`V_{\mathrm{bg}}`$) biases and measure the resistivities as a function of $`B`$. The total 2D hole density $`N`$ is deduced from the Hall coefficient. Then, at small $`B`$, $`V_{\mathrm{fg}}`$ is increased and the change in the hole density is measured. $`V_{\mathrm{bg}}`$ is then reduced to recover the original density. This procedure changes $`E_{}`$ while maintaining the same density to within 3%, and allows calculation of the change in $`E_{}`$ from the way the gates affect the density.
In Fig. 1(b) we show the Fourier spectra for the measured magnetoresistance oscillations. Keeping in mind that we may not expect a strict one-to-one correspondence between the oscillatory density of states at the Fermi energy \[Fig. 1(a)\] and the magnetoresistance oscillations \[Fig. 1(b)\] the agreement is very satisfactory. However, these experimental and theoretical results indicate a surprising discrepancy between $`f_\pm ^{\mathrm{SdH}}`$ and $`(h/e)N_\pm `$. In the following we will discuss possible explanations of these results.
The common interpretation of SdH oscillations in the presence of inversion asymmetry is based on the intuitive idea that for small $`B`$ the Landau levels can be partitioned into two sets which can be labeled by the two spin subbands. Each set gives rise to an SdH frequency which is related to the population of the respective spin subband according to Eq. (1). However, a comparison between the (partially) spin polarized eigenstates at $`B>0`$ and the unpolarized eigenstates at $`B=0`$ shows that in general such a partitioning of the Landau levels is not possible. This reflects the fact that the orbital motion of up and down spinor components is coupled in the presence of spin-orbit interaction, i.e., it cannot be analyzed separately.
For many years, anomalous magneto-oscillations have been explained by means of magnetic breakdown . In a sufficiently strong magnetic field $`B`$ electrons can tunnel from an orbit on one part of the Fermi surface to an orbit on another, separated from the first by a small energy gap. The tunneling probability was found to be proportional to $`\mathrm{exp}(-B_0/B)`$, with a breakdown field $`B_0`$, similar to Zener tunneling . This brings into existence new orbits which, when quantized, correspond to additional peaks in the Fourier spectrum of the SdH oscillations. However, if the anomaly of the SdH oscillations reported in Fig. 1 were due to magnetic breakdown, for $`E_{\perp}=0`$ we would expect several frequencies $`f^{\mathrm{SdH}}`$ with different values rather than the observed single frequency. In a simple, semiclassical picture a single frequency could be explained by two equivalent orbits in $`𝐤_{\parallel}`$ space as sketched in Fig. 2. However, the latter would imply that the tunneling probabilities at the junctions $`j_1`$ and $`j_2`$ are equal to one (and thus independent of $`B`$). We remark that de Andrada e Silva et al. studied anomalous magneto-oscillations for spin-split electrons in a 2D system. Their semiclassical analysis based on magnetic breakdown mispredicted $`B_0`$ by up to a factor of three and $`\mathrm{\Delta }N`$ by up to 17% (see Table III in Ref. ).
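The Zener-like breakdown probability $`\mathrm{exp}(-B_0/B)`$ varies strongly across the experimental field range unless $`B_0`$ is negligible, which is why $`B`$-independent unit tunneling probabilities at $`j_1`$ and $`j_2`$ would be required. A quick numerical illustration (the trial $`B_0`$ values are arbitrary):

```python
import numpy as np

# Zener-like breakdown probability P = exp(-B0/B) evaluated across the
# experimental field range 0.20-0.85 T for a few trial breakdown fields
# B0 (values illustrative).  Only B0 -> 0 gives P ~ 1 independent of B.
B = np.array([0.20, 0.50, 0.85])
for B0 in (0.1, 0.5, 2.0):
    print(f"B0 = {B0} T: P =", np.round(np.exp(-B0 / B), 3))
```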
In order to understand the deviation from Eq. (1) visible in Fig. 1 we need to look more closely at Onsager’s semiclassical argument, which underlies Eq. (1). It is based on Bohr-Sommerfeld quantization of the semiclassical motion of Bloch electrons, which is valid for large quantum numbers. However, spin is an inherently quantum mechanical effect, for which the semiclassical regime of large quantum numbers is not meaningful. Therefore Bohr-Sommerfeld quantization cannot be carried through in the usual way for systems with spin-orbit interaction. In a semiclassical analysis of such systems we have to keep spin as a discrete degree of freedom, so that the motion in phase space becomes a multicomponent vector field , i.e., the motions along the spin-split branches of the energy surface are coupled with each other and cannot be analyzed separately. In this problem geometric phases like Berry’s phase enter in an important way, which makes the semiclassical analysis of the motion of a particle with spin much more intricate than the conventional Bohr-Sommerfeld quantization.
One may ask whether we can combine the older idea of magnetic breakdown with the more recent ideas on Bohr-Sommerfeld quantization in the presence of spin-orbit interaction. Within the semiclassical theory of Ref. spin-flip transitions may occur at the so-called mode-conversion points which are points of spin degeneracy in phase space. Clearly these points are related to magnetic breakdown. However, mode-conversion points introduce additional complications in the theory of Ref. so that this theory is not applicable in the vicinity of such points.
Clearly we can circumvent the complications of the semiclassical theory by doing fully quantum mechanical calculations as outlined above. We have performed extensive calculations and further experiments which confirm that the results reported here are quite common for 2D systems. In Ref. the spin splitting of holes was analyzed for two GaAs QW’s which had only a front gate. Here $`V_{\mathrm{fg}}`$ changes both the total density $`N=N_++N_{-}`$ in the well and the asymmetry of the confining potential. For these QW’s we obtain excellent agreement between the measured and calculated frequencies $`f_\pm ^{\mathrm{SdH}}`$ versus $`N`$, including the observation of a single SdH frequency near $`N=3.8\times 10^{11}`$ cm<sup>-2</sup> when the QW becomes symmetric. However, there is again a significant discrepancy between $`\mathrm{\Delta }f`$ and $`(h/e)\mathrm{\Delta }N`$.
Our results apply to other III-V and II-VI semiconductors whose band structures are similar to GaAs in the vicinity of the fundamental gap . Our calculations indicate that the deviations from Eq. (1) are related to the anisotropic terms in the Hamiltonian. If the Hamiltonian is axially symmetric Eq. (1) is fulfilled. This is consistent with the semiclassical analysis of spin-orbit interaction in Ref. where it was found that in three dimensions no Berry’s phase occurs for spherically symmetric problems. We note that for holes in 2D systems the anisotropy of $`_\pm (𝐤_{})`$ is always very pronounced . It is also a well-known feature of the Hamiltonian for electrons, in particular for semiconductors with a larger gap . Up to now most experiments have analyzed spin splitting and SdH oscillations for 2D electron systems . To lowest order in $`𝐤`$ the SIA induced spin splitting in these systems is given by the so-called Rashba term which has axial symmetry. For this particular case it can be shown analytically that Eq. (1) is fulfilled.
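For the axially symmetric Rashba model just mentioned, the $`B=0`$ subband densities follow directly from the areas of the two Fermi circles. A minimal sketch with illustrative electron-like parameters (not related to the hole samples studied in this paper):

```python
import numpy as np

# B=0 spin-subband densities for the Rashba model E_±(k) = ħ²k²/2m ± αk.
# The Fermi radii are k_± = ∓k_a + sqrt(k_a² + 2mE_F/ħ²) with
# k_a = mα/ħ², and each spin-resolved branch contributes N_± = k_±²/4π.
# All parameter values below are illustrative.
HBAR, E_CH, M0 = 1.0546e-34, 1.6022e-19, 9.109e-31
m = 0.067 * M0                 # InGaAs-like electron mass
alpha = 1.0e-11 * E_CH         # Rashba coefficient ~1e-11 eV m
E_F = 0.010 * E_CH             # Fermi energy, 10 meV

k_a = m * alpha / HBAR**2      # characteristic Rashba wavevector
k_f = np.sqrt(k_a**2 + 2 * m * E_F / HBAR**2)
k_plus, k_minus = k_f - k_a, k_f + k_a   # Fermi radii of the two branches

N_plus, N_minus = k_plus**2 / (4 * np.pi), k_minus**2 / (4 * np.pi)
f_plus, f_minus = ((2 * np.pi * HBAR / E_CH) * N_plus,
                   (2 * np.pi * HBAR / E_CH) * N_minus)

print(f"N_-/N_+ = {N_minus / N_plus:.3f}")
print(f"SdH frequencies via Eq. (1): {f_plus:.2f} T and {f_minus:.2f} T")
```

For this axially symmetric case the frequencies $`(h/e)N_\pm `$ computed here coincide with the actual SdH frequencies, consistent with the analytic statement in the text.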
For different crystallographic growth directions spin splitting and SdH oscillations behave rather differently. Moreover, these quantities depend sensitively on the total 2D hole density $`N=N_++N_{-}`$ in the well. In Fig. 3 we have plotted the calculated SdH Fourier spectra versus $`E_{\perp}`$ for a GaAs QW with growth direction and $`N=3.0\times 10^{11}`$ cm<sup>-2</sup> \[Fig. 3(a)\] and $`N=3.3\times 10^{11}`$ cm<sup>-2</sup> \[Fig. 3(b)\]. Open circles mark the expected peak positions $`(h/e)N_\pm `$ according to the spin splitting $`N_\pm `$ at $`B=0`$ . Again, the peak positions in the Fourier spectra differ considerably from the expected positions $`(h/e)N_\pm `$. Close to $`E_{\perp}=0`$ there is only one peak at $`(h/2e)N`$. Around $`E_{\perp}=1.0`$ kV/cm we have two peaks, but at even larger fields $`E_{\perp}`$ the central peak at $`(h/2e)N`$ shows up again. At $`E_{\perp}\approx 2.25`$ kV/cm we have a triple-peak structure consisting of a broad central peak at $`(h/2e)N`$ and two side peaks at approximately $`(h/e)N_\pm `$. In Fig. 3 we have a significantly smaller linewidth than in Fig. 1. Basically, this is due to the fact that for the Fourier transforms shown in Fig. 3 we used a significantly larger interval of $`B^{-1}`$ (10.0 T<sup>-1</sup> as compared with 3.83 T<sup>-1</sup>) in order to resolve the much smaller splitting for growth direction . We note that for $`E_{\perp}=0`$ the SdH oscillations are perfectly regular over this large range of $`B^{-1}`$ with just one frequency, which makes it rather unlikely that the discrepancies between $`\mathrm{\Delta }f`$ and $`(h/e)\mathrm{\Delta }N`$ could be caused by a $`B`$-dependent rearrangement of holes between the Landau levels.
Results similar to those shown in Figs. 1 and 3 have also been obtained for growth direction , but the spectra were more complicated with, e.g., several SdH frequencies for $`E_{\perp}=0`$. Our calculations for holes are based on the fairly complex multiband Hamiltonian of Ref. . We obtained qualitatively the same results by analyzing the simpler $`2\times 2`$ Hamiltonian of Ref. . However, this model is appropriate for electrons in large-gap semiconductors, where spin splitting is rather small, so that it is more difficult to observe these effects experimentally.
In summary, we have both measured and calculated the SdH oscillations of 2D hole systems in GaAs QW’s. As opposed to the predictions of a semiclassical argument due to Onsager, we conclude that the $`B=0`$ spin splitting is not simply related to the SdH oscillations at low magnetic fields . This is explained by the inapplicability of conventional Bohr-Sommerfeld quantization for systems with spin-orbit interaction.
R. Winkler benefitted from stimulating discussions with O. Pankratov, M. Suhrke and U. Rössler, and he is grateful to S. Chou for making available his computing facilities. The work at Princeton University is supported by the NSF and ARO. M. Shayegan acknowledges support by the Alexander von Humboldt Foundation.
no-problem/9909/hep-th9909053.html
# Inflation and Gauge Hierarchy in Randall-Sundrum Compactification
## Abstract
We obtain the general inflationary solutions for the slab of five-dimensional AdS spacetime where the fifth dimension is an orbifold $`S^1/Z_2`$ and two three-branes reside at its boundaries, of which the Randall-Sundrum model corresponds to the static limit. The investigation of the general solutions and their static limit reveals that the RS model recasts both the cosmological constant problem and the gauge hierarchy problem into the balancing problem of the bulk and the brane cosmological constants.
preprint: KAIST-TH 99/07, hep-th/9909053
The huge gap between the electroweak scale $`M_\mathrm{W}`$ and the Planck scale $`M_\mathrm{P}`$, $`M_\mathrm{P}/M_\mathrm{W}\sim 10^{16}`$, has been a long-standing puzzle in unifying the standard model and gravity. Recently the extra-dimensional models addressing it have drawn much attention . Introducing extra dimensions has a long history, from Kaluza-Klein theory to string theory. What is new is that the ordinary matter is confined to our four-dimensional brane while the gravity propagates in the whole spacetime. One motivation for such models is the heterotic M theory, whose field theory limit is the 11-dimensional supergravity compactified on $`S^1/Z_2`$ with supersymmetric Yang-Mills fields living on the two boundaries .
The large-extra-dimension model brings the fundamental gravitational scale down to around the weak scale to solve the gauge hierarchy problem, and reduces the strength of four-dimensional gravity through the large volume of the extra dimensions, while avoiding conflict with experiments by confining the standard model fields to a 3-brane in the extra dimensions. This model translates the gauge hierarchy into the hierarchy between the fundamental scale and the size of the extra dimensions.
More recently, Randall and Sundrum (RS) proposed a five-dimensional model with a nonfactorizable geometry supported by a negative bulk cosmological constant and oppositely signed boundary 3-brane cosmological constants. In this model, the gauge hierarchy problem can be explained by the exponential warp factor even for a small extra dimension. The model is quite interesting and has drawn much attention because it might be realizable in supergravity and superstring compactifications . However, its geometry is based on a very specific relation between the bulk and the brane cosmological constants. The question then arises how precisely this relation should hold to preserve the necessary geometry. In this Letter, we try to answer this question by finding the cosmological inflationary solutions with general sets of the bulk and the brane cosmological constants and comparing them with the static solution given by RS. The cosmological aspects of extra-dimensional models have been discussed by many authors . In particular, inflationary solutions were obtained for the flat bulk geometry , and for the AdS bulk geometry , where a condition is imposed among the parameters such that the extra dimension does not inflate. Here we obtain the general inflationary solutions for the AdS bulk geometry and focus on the connection to the gauge hierarchy problem.
We consider the five-dimensional spacetime with coordinates $`(\tau ,x^i,y)`$, where $`\tau `$ and $`x^i,i=1,2,3`$ denote the usual four-dimensional spacetime and $`y`$ is the coordinate of the fifth dimension, which is an orbifold $`S^1/Z_2`$ where the $`Z_2`$ action identifies $`y`$ and $`-y`$. We choose the range of $`y`$ to be from $`-1/2`$ to $`1/2`$. We consider two 3-branes extending in the usual four-dimensional spacetime and residing at the two orbifold fixed points $`y=0`$ and $`y=1/2`$, so that they form the boundaries of the five-dimensional spacetime. This five-dimensional model is described by the action
$`S`$ $`=`$ $`{\displaystyle \int _M}d^5x\sqrt{-g}\left[{\displaystyle \frac{M^3}{2}}R-\mathrm{\Lambda }_b\right]+{\displaystyle \sum _{i=1,2}}{\displaystyle \int _{M^{(i)}}}d^4x\sqrt{-g^{(i)}}\left[\mathcal{L}_i-\mathrm{\Lambda }_i\right],`$ (1)
where $`M`$ is the fundamental gravitational scale of the model, $`\mathrm{\Lambda }_b`$ and $`\mathrm{\Lambda }_i`$ are the bulk and the brane cosmological constants, and $`\mathcal{L}_i`$ are the Lagrangians for the fields confined to the branes. Since we are interested in cosmological solutions, we assume that the three-dimensional spatial section is homogeneous and isotropic. Further, we take it to be flat for simplicity. The most general metric satisfying this can be written as
$$ds^2=-n^2(\tau ,y)d\tau ^2+a^2(\tau ,y)\delta _{ij}dx^idx^j+b^2(\tau ,y)dy^2$$
(2)
For the above action and metric, we obtain the following Einstein equations corresponding to (00), (ii), (55), (05) components respectively:
$`{\displaystyle \frac{3}{n^2}}{\displaystyle \frac{\dot{a}}{a}}\left({\displaystyle \frac{\dot{a}}{a}}+{\displaystyle \frac{\dot{b}}{b}}\right)-{\displaystyle \frac{3}{b^2}}\left[{\displaystyle \frac{a^{\prime \prime }}{a}}+{\displaystyle \frac{a^{\prime }}{a}}\left({\displaystyle \frac{a^{\prime }}{a}}-{\displaystyle \frac{b^{\prime }}{b}}\right)\right]`$ (3)
$`=M^{-3}\left[\mathrm{\Lambda }_b+{\displaystyle \frac{\delta (y)}{b}}\left(\mathrm{\Lambda }_1+\rho _1\right)+{\displaystyle \frac{\delta (y-\frac{1}{2})}{b}}\left(\mathrm{\Lambda }_2+\rho _2\right)\right],`$ (4)
$`{\displaystyle \frac{1}{n^2}}\left[2{\displaystyle \frac{\ddot{a}}{a}}+{\displaystyle \frac{\ddot{b}}{b}}-{\displaystyle \frac{\dot{a}}{a}}\left(2{\displaystyle \frac{\dot{n}}{n}}-{\displaystyle \frac{\dot{a}}{a}}\right)-{\displaystyle \frac{\dot{b}}{b}}\left({\displaystyle \frac{\dot{n}}{n}}-2{\displaystyle \frac{\dot{a}}{a}}\right)\right]`$ (5)
$`-{\displaystyle \frac{1}{b^2}}\left[{\displaystyle \frac{n^{\prime \prime }}{n}}+2{\displaystyle \frac{a^{\prime \prime }}{a}}+{\displaystyle \frac{a^{\prime }}{a}}\left(2{\displaystyle \frac{n^{\prime }}{n}}+{\displaystyle \frac{a^{\prime }}{a}}\right)-{\displaystyle \frac{b^{\prime }}{b}}\left({\displaystyle \frac{n^{\prime }}{n}}+2{\displaystyle \frac{a^{\prime }}{a}}\right)\right]`$ (6)
$`=M^{-3}\left[\mathrm{\Lambda }_b+{\displaystyle \frac{\delta (y)}{b}}\left(\mathrm{\Lambda }_1-p_1\right)+{\displaystyle \frac{\delta (y-\frac{1}{2})}{b}}\left(\mathrm{\Lambda }_2-p_2\right)\right],`$ (7)
$`{\displaystyle \frac{3}{n^2}}\left[{\displaystyle \frac{\ddot{a}}{a}}-{\displaystyle \frac{\dot{a}}{a}}\left({\displaystyle \frac{\dot{n}}{n}}-{\displaystyle \frac{\dot{a}}{a}}\right)\right]-{\displaystyle \frac{3}{b^2}}{\displaystyle \frac{a^{\prime }}{a}}\left({\displaystyle \frac{n^{\prime }}{n}}+{\displaystyle \frac{a^{\prime }}{a}}\right)=M^{-3}\mathrm{\Lambda }_b,`$ (8)
$`3\left({\displaystyle \frac{\dot{a}}{a}}{\displaystyle \frac{n^{\prime }}{n}}+{\displaystyle \frac{\dot{b}}{b}}{\displaystyle \frac{a^{\prime }}{a}}-{\displaystyle \frac{\dot{a}^{\prime }}{a}}\right)=0,`$ (9)
where the dot and the prime represent the derivatives with respect to $`\tau `$ and $`y`$, respectively. The equations with bulk and boundary sources are equivalent to the equations with bulk sources and proper boundary conditions. To give non-singular geometry, $`n`$, $`a`$ and $`b`$ must be continuous along the extra dimension. But (3) and (5) imply that $`n^{\prime }`$ and $`a^{\prime }`$ are discontinuous at $`y=0,\pm \frac{1}{2}`$ so that $`n^{\prime \prime }`$ and $`a^{\prime \prime }`$ have delta function singularities there. Applying $`\int _{0^{-}}^{0^+}𝑑y`$ and $`\int _{\frac{1}{2}^{-}}^{-\frac{1}{2}^+=\frac{1}{2}^+}𝑑y`$ to (3) and (5), we obtain the boundary conditions
$`{\displaystyle \frac{n^{\prime }}{n}}|_{0^{-}}^{0^+}=-{\displaystyle \frac{b(\tau ,0)}{3M^3}}(\mathrm{\Lambda }_1-2\rho _1-3p_1),{\displaystyle \frac{a^{\prime }}{a}}|_{0^{-}}^{0^+}=-{\displaystyle \frac{b(\tau ,0)}{3M^3}}(\mathrm{\Lambda }_1+\rho _1)`$ (10)
$`{\displaystyle \frac{n^{\prime }}{n}}|_{-\frac{1}{2}^+}^{\frac{1}{2}^{-}}=+{\displaystyle \frac{b(\tau ,\frac{1}{2})}{3M^3}}(\mathrm{\Lambda }_2-2\rho _2-3p_2),{\displaystyle \frac{a^{\prime }}{a}}|_{-\frac{1}{2}^+}^{\frac{1}{2}^{-}}=+{\displaystyle \frac{b(\tau ,\frac{1}{2})}{3M^3}}(\mathrm{\Lambda }_2+\rho _2).`$ (11)
Since $`b^{\prime \prime }`$ does not appear in the equations, no boundary condition is imposed on $`b^{}`$.
The equations (3)–(9) and the boundary conditions (11) constitute the starting point for the cosmology of the five-dimensional model considered in this paper. It is difficult to solve the full bulk equations with generic sources, but at the brane boundaries the (55) and (05) equations give the Friedmann-like equation
$$\left(\frac{\dot{a}}{a}\right)^2=\left(\frac{\mathrm{\Lambda }_b}{6M^3}+\frac{\mathrm{\Lambda }_i^2}{36M^6}\right)+\frac{\mathrm{\Lambda }_i}{18M^6}\rho _i+\frac{1}{36M^6}\rho _i^2.$$
(12)
The implications of this equation are quite interesting , but we will not pursue them here.
In this paper, we consider the cosmological constant dominated cases, and neglect the matter and radiation energy densities of the branes. Then the above equations allow a static solution, the so-called RS solution , when the bulk cosmological constant is negative ($`\mathrm{\Lambda }_b<0`$) and related to the brane cosmological constants by
$$k=k_1=k_2,$$
(13)
where $`k=(-\mathrm{\Lambda }_b/6M^3)^{1/2}`$, $`k_1=\mathrm{\Lambda }_1/6M^3`$ and $`k_2=-\mathrm{\Lambda }_2/6M^3`$. We take the brane with the positive cosmological constant to be at $`y=0`$. The metric of the static solution is given by
$$ds^2=e^{-2kb_0|y|}\eta _{\mu \nu }dx^\mu dx^\nu +b_0^2dy^2,$$
(14)
where $`b_0`$ is a constant which determines the length of the extra dimension. In this model, the 4-dimensional Planck scale is given by
$$M_\mathrm{P}^2=\frac{M^3}{k}[1-e^{-kb_0}],$$
(15)
and for $`\frac{1}{2}kb_0\gg 1`$ it is $`k`$ rather than $`\frac{1}{2}b_0`$ that determines it. RS argued that, due to the warp factor $`e^{-kb_0|y|}`$ which has different values at the hidden brane ($`y=0`$) and at the visible brane ($`y=\frac{1}{2}`$), any mass parameter $`m_0`$ on the visible brane corresponds to a physical mass $`m=m_0e^{-\frac{1}{2}kb_0}`$, and the moderate value $`\frac{1}{2}kb_0\simeq 37`$ can produce the huge ratio $`M_P/M_W\sim 10^{16}`$. Thus the gauge hierarchy problem is converted into a problem related to geometry, namely fixing the size of the extra dimension. Then it is an important question how precise the relation (13) must be in order for the RS solution to work, since the exact relation is assumed a priori to get (14).
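The warp-factor arithmetic behind this estimate can be checked directly; this uses nothing beyond the RS relations just quoted:

```python
import math

# From the RS static solution: a mass on the visible brane is redshifted
# to m = m0 * exp(-k*b0/2), so the hierarchy M_P/M_W ~ 1e16 requires
# k*b0/2 = ln(1e16) ~ 37, a moderate number.
kb0_half = math.log(1e16)
print(f"k*b0/2 = {kb0_half:.1f}")          # ~36.8

# 4D Planck mass from (15), M_P^2 = (M^3/k)[1 - e^(-k b0)], in units of
# M^3/k: the bracket saturates at 1, so M_P is set by k, not by b0.
bracket = 1 - math.exp(-2 * kb0_half)      # this is [1 - e^(-k b0)]
print(f"M_P^2 * k / M^3 = {bracket:.6f}")
```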
Now we try to answer this question by solving the Einstein equations with the bulk and the brane cosmological constants, but without the fine tuned condition (13). Boundary condition (11) suggests $`n=a`$, and we first try a separable function $`n=a=g(\tau )f(y)`$. Then the (05) equation yields $`b=b(y)`$. After separate coordinate transformations of $`\tau `$ and $`y`$, we come to an ansatz
$$n=f(y),a=g(\tau )f(y),b=b_0,$$
(16)
where $`b_0`$ is a constant. Now subtracting the (00) equation from the (ii) equation, we obtain $`(\dot{g}/g)\dot{}=0`$. So we define $`(\dot{g}/g)\equiv H_0=\mathrm{constant}`$. Then the (55) equation gives
$$\left(\frac{f^{}}{b_0}\right)^2=H_0^2+k^2f^2,$$
(17)
and the (00) and (ii) equations just give a redundant equation. For $`\mathrm{\Lambda }_b<0`$, the solution to this equation consistent with the orbifold symmetry is
$$f=\frac{H_0}{k}\mathrm{sinh}(c_0-kb_0|y|).$$
(18)
The boundary condition (11) imposes
$$k_1=k\mathrm{coth}(c_0),k_2=k\mathrm{coth}(c_0-\frac{1}{2}kb_0).$$
(19)
Therefore, the solution is allowed when $`k_1,k_2`$ and $`k`$ satisfy $`k<k_1<k_2`$ and they are related to the length of the extra dimension $`L_5=\frac{1}{2}b_0`$ by
$$L_5=\frac{1}{2}b_0=\frac{1}{2k}\mathrm{ln}\left[\frac{k_2k}{k_1k}\frac{k_1+k}{k_2+k}\right].$$
(20)
The metric of this solution is
$`ds^2`$ $`=`$ $`\left({\displaystyle \frac{H_0}{k}}\right)^2\mathrm{sinh}^2(c_0-kb_0|y|)\left[-d\tau ^2+e^{2H_0\tau }\delta _{ij}dx^idx^j\right]+b_0^2dy^2.`$ (21)
We arrive at the static limit by taking $`H_0\rightarrow 0`$ and $`c_0\rightarrow \mathrm{\infty }`$ while keeping the ratio $`\frac{H_0}{2k}e^{c_0}\rightarrow 1`$ fixed, and the metric (21) becomes (14). For $`H_0\neq 0`$, $`H_0`$ is not a physical quantity and can be set to $`k`$, which corresponds to a shift of the initial value of $`\tau `$. Then by the coordinate transformation $`dy=d\stackrel{~}{y}/\mathrm{sinh}(kb_0|\stackrel{~}{y}|+\stackrel{~}{c}_0)`$, the metric becomes
$$ds^2=\frac{-d\tau ^2+e^{2k\tau }\delta _{ij}dx^idx^j+b_0^2d\stackrel{~}{y}^2}{\mathrm{sinh}^2(kb_0|\stackrel{~}{y}|+\stackrel{~}{c}_0)}.$$
(22)
The metric (21) describes inflation of the spatial dimensions with the length of the extra dimension fixed. At a given $`y`$, we can perform a four-dimensional coordinate transformation to bring the four-dimensional metric into the form $`ds_{(4)}^2=-dt^2+e^{2H(y)t}\delta _{ij}dx^idx^j`$. Then we get the Hubble parameter
$$H(y)=k\mathrm{csch}(c_0-kb_0|y|).$$
(23)
In particular, at the two boundaries we have
$$H(0)=\sqrt{k_1^2k^2},H(\frac{1}{2})=\sqrt{k_2^2k^2},$$
(24)
respectively. We see that inflation occurs when the bulk and the brane cosmological constants deviate from the relation (13), as can also be seen directly from the boundary equation (12). The condition (20) means that to keep the length of the extra dimension fixed while the spatial dimensions inflate, we must fine-tune the bulk and the brane cosmological constants; in other words, for given $`k_1`$, $`k_2`$ and $`k`$ we must put the two branes a distance $`L_5`$ apart.
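Relations (20) and (24) are easy to explore numerically. The sketch below (arbitrary units with $`k=1`$, and illustrative detunings of $`k_1`$, $`k_2`$) shows the brane Hubble rates vanishing toward the static limit while $`L_5`$ tends to a finite value set by the ratio in which $`k_1`$ and $`k_2`$ approach $`k`$:

```python
import math

def L5(k, k1, k2):
    """Brane separation (20) that keeps the extra dimension static."""
    return (1 / (2 * k)) * math.log(((k2 - k) / (k1 - k)) * ((k1 + k) / (k2 + k)))

def hubble(ki, k):
    """Brane Hubble parameter from (24)."""
    return math.sqrt(ki**2 - k**2)

k = 1.0
for eps in (1e-2, 1e-4, 1e-6):
    k1, k2 = k * (1 + eps), k * (1 + 2 * eps)   # keeps k < k1 < k2
    print(f"eps={eps:.0e}: L5={L5(k, k1, k2):.4f}, "
          f"H(0)={hubble(k1, k):.3e}, H(1/2)={hubble(k2, k):.3e}")
```

With this particular detuning ratio $`L_5`$ approaches $`\mathrm{ln}(2)/2k`$ no matter how small $`\epsilon `$ is, while a different ratio would give a different limit.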
For the more general case that the distance between two branes is larger or smaller than $`L_5`$, the solution can be obtained with the non-separable function
$$n(\tau ,y)=a(\tau ,y)=\frac{1}{\tau f(y)+g_0},b(\tau ,y)=kb_0\tau a(\tau ,y),$$
(25)
where $`b_0`$ and $`g_0`$ are constants. The metric for this general case can be found to be
$$ds^2=\frac{-d\tau ^2+\delta _{ij}dx^idx^j+(kb_0\tau )^2dy^2}{\left[k\tau \mathrm{sinh}(kb_0|y|+c_0)+g_0\right]^2},$$
(26)
where $`b_0`$ and $`c_0`$ are given by
$`c_0`$ $`=`$ $`\mathrm{cosh}^1\left({\displaystyle \frac{k_1}{k}}\right),`$ (27)
$`kb_0`$ $`=`$ $`2\left[\mathrm{cosh}^1\left({\displaystyle \frac{k_2}{k}}\right)\mathrm{cosh}^1\left({\displaystyle \frac{k_1}{k}}\right)\right].`$ (28)
When $`g_0=0`$, this metric becomes (22) by the coordinate transformation $`\frac{1}{\tau }d\tau =-kd\stackrel{~}{\tau }`$. When $`g_0>0`$ ($`g_0<0`$), the distance between the two branes is smaller (larger) than $`L_5`$, and the above metric describes two branes that approach each other and finally meet (move away from each other) while the spatial dimensions are inflating. The boundary condition is not affected by the presence of $`g_0`$, and $`H(0)`$ and $`H(\frac{1}{2})`$ are the same as (24), as seen in (12).
For the special case $`k_1=k_2>k`$, $`L_5`$ becomes 0 and the above solutions do not cover it. Instead, we can find a solution with a different ansatz $`n(\tau ,y)=a(\tau ,y)=b(\tau ,y)`$. The metric is given by
$$ds^2=\frac{-d\tau ^2+\delta _{ij}dx^idx^j+dy^2}{\left[(k_1^2-k^2)^{1/2}\tau +k_1|y|+c_0\right]^2},$$
(29)
where $`c_0`$ is a constant. This metric describes inflation in both the spatial dimensions and the extra dimension.
Let us consider the connection between the inflationary solutions and the RS static solution. The static limit in which both $`k_1`$ and $`k_2`$ approach $`k`$ in the inflationary solutions corresponds to the RS solution. Suppose that the five-dimensional universe underwent inflation in the early epoch and finally settles down to the static RS model. Then the relation (13) now holds approximately but not exactly. This situation is most likely described by the static limit of the inflationary solutions. Then the current observations of the Hubble constant restrict the visible-brane Hubble parameter $`H(\frac{1}{2})`$:
$$H(\frac{1}{2})=\sqrt{k_2^2-k^2}\lesssim 10^{-60}M_\mathrm{P}$$
(30)
where $`k=𝒪(M)=𝒪(M_\mathrm{P})`$ is assumed. Therefore, the bulk and the visible brane cosmological constants must cancel each other to very high precision. This is a five-dimensional version of the well-known cosmological constant problem, and the RS condition (13) is nothing but the condition for a vanishing four-dimensional effective cosmological constant on both branes. We do not attempt to solve this notorious problem in this paper. Instead, we concentrate on the gauge hierarchy problem within this context.
The key point of the RS solution to the gauge hierarchy problem is the size of the extra dimension appearing in the exponential warp factor. If the RS condition (13) holds exactly, this size is not determined in this framework and remains a flat direction in moduli space. For bulk and brane cosmological constants that do not satisfy the RS condition, there is a critical size $`L_5`$ for which the extra dimension remains constant. Our general solution shows that this is an unstable stationary configuration: a slight deviation will make the extra dimension shrink or grow. This is a generic consequence of gravity, and it can be overcome by including extra dynamics for the modulus $`b`$ beyond simple gravity. This is a five-dimensional version of the modulus stabilization problem. Toward the solution of this problem, an attempt using a bulk scalar field was made recently in , and an earlier attempt within the compactified heterotic M theory without a bulk cosmological constant was made in , using membrane instanton effects and the racetrack mechanism. We will not discuss this further, as it is beyond the scope of this paper.
With this in mind, let us look at the RS solution to the gauge hierarchy problem. In the static limit $`k_1,k_2\to k`$ of the inflationary solutions with fixed extra dimension size, we can see from (20) that the extra dimension size does not have a unique value but varies depending on how $`k_1`$ and $`k_2`$ approach $`k`$. This is precisely the modulus stabilization problem stated above. Near the static limit, the length of the extra dimension is expressed in terms of the Hubble parameters of the two branes by
$$kL_{\mathrm{RS}}=\frac{1}{2}kb_{\mathrm{RS}}=\mathrm{ln}\frac{H(\frac{1}{2})}{H(0)}.$$
(31)
Since $`H(\frac{1}{2})`$ is very small, keeping $`\frac{1}{2}kb_{\mathrm{RS}}`$ of order 1 requires adjusting the bulk and hidden brane cosmological constants to the same accuracy as we did for the bulk and visible brane cosmological constants. The RS solution to the gauge hierarchy problem demands $`\frac{1}{2}kb_{\mathrm{RS}}\simeq 37`$, and this number seems quite moderate at first sight. In fact, however, it requires finer tuning of the hidden brane Hubble parameter than of the visible one:
$$H(0)=\sqrt{k_1^2-k^2}\sim 10^{-16}H(\frac{1}{2})\lesssim 10^{-76}M_\mathrm{P}.$$
(32)
Thus, to solve the gauge hierarchy problem in the context of the RS model, the balance between the bulk and the hidden brane cosmological constants must hold with $`10^{16}`$ times more accuracy than that between the bulk and the visible brane cosmological constants. At any rate, the RS model again converts the gauge hierarchy into a fine tuning of the bulk and brane cosmological constants. It will be interesting to examine the problem in a four-dimensional effective theory and see whether there is a stabilization mechanism, using interactions of the $`b`$ modulus beyond gravity, by which the above argument can be avoided.
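The numbers in this fine-tuning chain can be checked directly. The sketch below takes the factor $`10^{16}`$ as the Planck-to-weak hierarchy assumed in the text and reproduces both the warp exponent and the required cancellations:

```python
import math

# Numerical check of the fine-tuning chain (a sketch; the ratio 1e16
# between the brane Hubble parameters is the assumed hierarchy).
ratio = 1e16                          # H(1/2) / H(0)
half_kb_rs = math.log(ratio)          # (1/2) k b_RS = ln(H(1/2)/H(0))
print(f"(1/2) k b_RS = {half_kb_rs:.1f}")        # ~36.8, the quoted ~37

# With k ~ M_P, the required cancellations in k_i^2/k^2 - 1 = (H_i/k)^2:
print(f"visible brane: ~{(1e-60)**2:.0e}")       # ~1e-120
print(f"hidden brane:  ~{(1e-76)**2:.0e}")       # ~1e-152
```

The hidden-brane cancellation is thus 32 orders of magnitude more delicate than the visible-brane one, which is the quantitative content of the hierarchy argument above.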
In summary, we found the general inflationary solutions for a slab of 5-dimensional AdS spacetime bounded by two 3-branes and viewed the RS model as the static limit of these solutions. Both the cosmological constant problem and the gauge hierarchy problem are then recast as fine tuning problems of the bulk and brane cosmological constants. The cosmological constant problem appears as the fine tuning between the bulk and the visible brane cosmological constants, and the gauge hierarchy problem as the more severe fine tuning between the bulk and the hidden brane cosmological constants. Including matter on the brane boundaries might alter this conclusion, but we expect the change to be small. A real solution of these problems surely requires ingredients beyond the 5-dimensional model considered in this paper, but the implications of this simple model are interesting enough to deserve further study. The magic bullet for the cosmological constant problem and the gauge hierarchy problem may be found in a mechanism correlating the bulk and brane cosmological constants, which reminds us of a recent development in string theory, holography. We also leave the modulus stabilization problem, which is connected to the issues addressed in this letter, for future work.
###### Acknowledgements.
We would like to thank Jai-chan Hwang, Youngjai Kiem, Yoonbai Kim for useful discussions and especially Kiwoon Choi for his valuable comments.
# Slowing and cooling molecules and neutral atoms by time-varying electric field gradients
## I Introduction
Time-invariant electric field gradients have long been used to deflect beams of molecules and neutral atoms. However, as we will show in this paper, time-varying electric field gradients can be used to accelerate, slow, cool, or bunch these same beams. We demonstrate slowing, cooling, and bunching of cold cesium atoms in a fountain, measure these effects and find good agreement with calculation. The possible applications of the time-varying electric field gradient technique include slowing and cooling thermal beams of molecules and atoms, launching cold atoms from a trap into a fountain, beam transport, and measuring atomic dipole polarizabilities.
The principle behind time-varying electric field gradient slowing is that an electric field gradient exerts a force on an electric dipole (thus accelerating or decelerating it) but a spatially uniform electric field, even if it is time-varying, exerts no force on an electric dipole. Thus, an atom with an induced electric dipole moment or a molecule with a ‘permanent’ electric dipole moment (with negative interaction energy in an electric field) will accelerate when it enters an electric field and decelerate back to its original velocity when it leaves the electric field. If we add a uniform electric field region between the entrance and exit, as in a pair of parallel electric field plates (Fig. 1), we can delay turning on the electric field until the atom or molecule is in this uniform electric field. The atom or molecule will not have accelerated entering the electric field plates but will decelerate when it leaves the electric field, thus slowing. Longitudinal cooling is achieved by applying a decreasing electric field, so that in a pulse of atoms or molecules, the fastest ones, arriving first, experience the greatest slowing (Fig. 2).
The time-varying electric field gradient technique can be useful for slowing and cooling thermal beams of atoms with large dipole polarizabilities and polar molecules with large electric dipole moments. Many atoms and most molecules are not amenable to laser slowing and cooling, and presently few alternative techniques exist. Slow molecules have application to molecular beam spectroscopy, the study of chemical reactions, low energy collisions, surface scattering, and trapping.
In molecular beam spectroscopy of long-lived states, slowing the molecules to increase the transit time through the observation region can improve the spectroscopic resolution, yielding better separation of spectroscopic features. In thermal detection of molecular beams, the fraction of molecules in long-lived vibrational states is measured by the extra energy that they contribute to a thermal detector. The sensitivity of this method can be increased by decreasing the kinetic energy of the beam. Similarly, for control of chemical reaction pathways, reducing the kinetic energy of a beam of molecules to just above the reaction threshold energy may enhance the effect of orientation or state preparation.
In the study of elastic scattering from surfaces, helium and a few other light gases are the most used projectiles because a small energy transfer to the surface and a narrow velocity distribution are essential. Similarly, other forms of surface scattering also utilize light atoms and light molecules (such as H<sub>2</sub>, HD, CO, NO, and Ne). Heavier atoms and molecules that have been slowed and cooled will also meet the requirements of many surface scattering experiments. Thus, time-varying electric field gradient slowing and cooling could make many more species available for surface scattering experiments.
The trapping of molecules can increase confinement, observation, or interaction time by orders of magnitude, create high densities, or allow the molecules to be cooled by evaporative cooling or other slow cooling methods. With the possible exception of a toroidal storage ring trap, slow cold molecules are a necessary prerequisite for all proposed or existing neutral molecule traps and would be an asset for a storage ring. Time-varying electric field gradient slowing and cooling can provide beams of slow, cold polar molecules in vacuum and is compatible with all of these proposed or existing methods for trapping molecules.
The remainder of the paper is organized as follows. The interaction energies of atoms and molecules in electric fields and the principles of slowing, cooling, and bunching molecules and atoms with time-varying electric field gradients is discussed in detail in Section II. Our experiment is described and its results are compared with calculation in Section III. And finally, in Section IV, we examine how the time-varying electric field gradient method can be applied to slowing of thermal atoms and molecules, measurements of atomic dipole polarizabilities, atom optics, and launching atoms from traps.
## II Atoms and molecules in electric fields
### A Neutral atoms in electric fields
Time-varying electric field gradient slowing utilizes the shift in an atom’s potential energy as it travels through a time- and spatially-varying electric field. The effect of an electric field on an atom’s potential energy is described, to lowest order in the electric field, by the dipole polarizability of the atom, defined as the ratio of the induced electric dipole moment to the external electric field. Although the dipole polarizability is a tensor, the non-scalar terms are usually small, producing only negligible variations in the polarizability of the different ground state sublevels, and do not affect the processes that we will be discussing. Thus the induced dipole moment is, to a good approximation, a scalar, and the potential energy is given, to lowest order in the electric field, by $`-\alpha E^2/2`$, where $`E`$ is the magnitude of the electric field and $`\alpha `$ is the scalar dipole polarizability. Because $`\alpha `$ is a scalar, the potential energy depends only on the electric field’s magnitude and not its direction.
In a spatially varying electric field, the force is $`𝐅=(1/2)\alpha ∇(E^2)`$, which, for all ground state atoms, is in the direction of increasing electric field magnitude (strong field seeking). As with any conservative potential, the change in an atom’s kinetic energy, as it travels between two points in space, is path independent and equal to the change in potential energy between those points.
For example, a Cs atom traveling from a region of no field ($`E_i=0`$) to a region of field $`E_f=10^7`$ V/m gains kinetic energy, KE, by an amount $`\mathrm{\Delta }\text{KE}=\alpha (E_f^2-E_i^2)/2=3.3\times 10^{-25}\text{J}=24\text{mK}`$, where we have used the value of $`6.63\times 10^{-39}\text{J/(V/m)}^2`$ (or $`59.6\times 10^{-24}`$ cm<sup>3</sup>) for the dipole polarizability of Cs. Since we will be interested in atomic beams from thermal sources (see Section IV), we will use energy units of kelvin with the conversion $`7.243\times 10^{22}`$ K/J. The velocity of an atom after traversing the potential satisfies $`v_f^2=2(\mathrm{\Delta }\text{KE})/M+v_i^2`$, where $`v_i`$ and $`v_f`$ are the initial and final velocities, respectively, and $`M`$ is the mass. For $`v_i=0`$, the final velocity in the example given above would be 1.70 m/s.
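This example can be reproduced numerically. The following sketch assumes the Cs polarizability quoted above and the Cs atomic mass, in SI units, with the Boltzmann constant used for the joule-to-kelvin conversion:

```python
import math

alpha_cs = 6.63e-39          # Cs dipole polarizability, J/(V/m)^2
k_B = 1.381e-23              # Boltzmann constant, J/K
m_cs = 132.9 * 1.6605e-27    # Cs mass, kg

E_f, E_i = 1e7, 0.0          # final and initial field, V/m
dKE = 0.5 * alpha_cs * (E_f**2 - E_i**2)
print(f"Delta KE = {dKE:.2e} J = {dKE / k_B * 1e3:.0f} mK")  # 3.32e-25 J, 24 mK

v_i = 0.0
v_f = math.sqrt(2 * dKE / m_cs + v_i**2)
print(f"v_f = {v_f:.2f} m/s")   # ~1.7 m/s (the text quotes 1.70 m/s)
```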
### B Polar molecules in electric fields
In addition to a dipole polarizability, polar molecules have an intrinsic separation of charge that produces a dipole that can align with an external electric field to yield a large net electric dipole moment . When the interaction of the electric dipole moment, $`d_e`$, of a linear polar molecule with an external electric field is large compared to the molecular rotational energy, rotation is suppressed in favor of libration about the direction of the electric field. The potential energy of the low-lying rotational levels then approaches $`-d_eE`$ and is always negative. In a spatially varying electric field, the resulting force is $`𝐅=d_e∇E`$, which, as for ground state atoms, is in the direction of increasing electric field magnitude (strong field seeking).
As an example, consider cesium fluoride, which has a very large dipole moment of $`d_e=2.65\times 10^{-29}`$ J/(V/m) (or 7.88 Debye, where 1 Debye $`=3.36\times 10^{-30}`$ J/(V/m)) and a small rotational constant of $`B_e=`$ 0.27 K (or 0.188 cm<sup>-1</sup>). In its lowest angular momentum state ($`J=0`$), and traveling from a region of no field to a region of $`10^7`$ V/m, CsF gains kinetic energy, KE, by roughly the amount $`\mathrm{\Delta }\text{KE}=d_e(E_f-E_i)=2.65\times 10^{-22}\text{J}=19\text{K}`$. A more accurate value, calculated using the formulas from Von Meyenn, is 16 K. This is about 640 times larger than for Cs, as discussed earlier.
As in the atomic case, the final velocity of the CsF molecule, after traversing the potential, is $`v_f^2=2(\mathrm{\Delta }\text{KE})/M+v_i^2`$, where $`v_i`$ and $`v_f`$ are the initial and final velocities, respectively, and $`M`$ is the mass. For $`v_i=0`$, the final velocity for the example given above, would be 45.7 m/s. Equivalently, a 45.7 m/s CsF molecule traveling from a region of $`10^7`$ V/m electric field, to a region of no field, would be slowed to rest.
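The rigid-dipole estimate for CsF can likewise be checked. The sketch below uses the dipole moment quoted above and the CsF mass; note that it reproduces the rigid-dipole upper estimate of about 19 K, not the more accurate 16 K value from Von Meyenn's formulas:

```python
import math

d_e = 2.65e-29                        # CsF electric dipole moment, J/(V/m)
k_B = 1.381e-23                       # Boltzmann constant, J/K
m_csf = (132.9 + 19.0) * 1.6605e-27   # CsF mass, kg

dKE = d_e * (1e7 - 0.0)               # rigid-dipole estimate, J
print(f"Delta KE ~ {dKE:.2e} J = {dKE / k_B:.0f} K")   # 2.65e-22 J, 19 K

v_f = math.sqrt(2 * dKE / m_csf)      # final speed for v_i = 0
print(f"v_f ~ {v_f:.1f} m/s")         # ~46 m/s (the text quotes 45.7 m/s)
```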
### C Slowing molecules and atoms
A practical apparatus for slowing should have the electric field gradient perpendicular to the field. Otherwise, a beam of molecules or atoms traversing the electric field gradient is likely to strike one of the surfaces used to form the electric field. A simple apparatus that meets this requirement is a set of parallel electric field plates (Fig. 1) attached to a voltage source that can quickly be ramped from zero.
To operate a set of electric field plates as a time-varying electric field gradient slowing apparatus, we do the following. A neutral atom in its ground state enters the region between the electric field plates with no field (the term atom also applies to clusters and molecules in strong field seeking states). With the atom between the electric field plates, the voltage is turned on producing a uniform electric field. The potential energy of the atom is lowered by the electric field, but a spatially-uniform, time-varying electric field does no work on a dipole, so there is no change in the kinetic energy of the atom. As the atom exits the plates, passing through an electric field gradient, to a zero field region outside, it gains potential energy and loses kinetic energy. To accelerate an atom, the field is turned on before the atom enters the electric field plates and is then turned off before the atom exits. The latter arrangement can also be used to slow a weak field seeking molecule.
The slowing process may be repeated by arranging a series of electric field plates, each having a voltage applied once the atom has entered the uniform electric field region. The energy change of the atom, traversing the sequence, is then cumulative. If a sufficient number of electric field plate sections are assembled, it should be possible to slow a thermal beam of atoms to near rest.
This slowing process is analogous to, but the reverse of, the acceleration of charged particles in linear accelerators and cyclic accelerators , where charged particles accelerate through a sequence of small voltage gradients. After each voltage gradient, the charged particles drift through a time-varying, but spatially uniform voltage, in which the voltage changes or reverses. This establishes a new voltage gradient without requiring successively higher voltages.
The same slowing principle can also be applied using large magnetic field gradients on atoms or paramagnetic molecules. However, it is more difficult to switch strong magnetic fields.
### D Cooling and bunching
A decrease in the longitudinal velocity spread of the beam can be achieved by applying an electric field that decreases in time to atoms that have been arranged according to their velocity (Fig. 2). The first atoms exiting the plates are slowed more than the atoms exiting at later times, when the electric field, and hence the electric field gradient, has decreased. A beam of atoms becomes ordered by velocity when a short pulse is allowed to spread.
This is a form of cooling even though the initial and final velocity distributions might not be Maxwell-Boltzmann. The process conserves phase space (the area enclosed in a plot of the relative velocity of each particle versus its relative position) and is analogous to debunching of a charged particle beam in an accelerator. The process can also be reversed and used to bunch a beam so that more atoms arrive at a selected point at the same time. Bunching can reproduce or even compress the original longitudinal spatial distribution of a pulse of atoms – a useful technique for detecting weak signals.
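As an illustrative check of this cooling mechanism, the following one-dimensional toy model propagates a spread of Cs velocities through an exit gradient whose field decays exponentially in time. The field strength, decay time, and geometry are made-up but representative numbers, not the experimental values:

```python
import math

# Toy 1-D model of exit-gradient cooling: the field decays as
# E(t) = E0 * exp(-t/tau) after the fastest atoms reach the exit,
# so atoms exiting earlier (the faster ones) lose more energy.
alpha = 6.63e-39             # Cs polarizability, J/(V/m)^2
mass = 2.21e-25              # Cs mass, kg
E0, tau = 3.5e6, 2e-3        # initial field (V/m), decay time (s)
L = 0.05                     # distance from pulse center to plate exit, m

velocities = [1.9 + 0.02 * i for i in range(11)]   # 1.9 ... 2.1 m/s
t0 = L / max(velocities)                           # first exit time
final = []
for v in velocities:
    field = E0 * math.exp(-(L / v - t0) / tau)     # field seen at exit
    dU = 0.5 * alpha * field**2                    # energy lost on exit, J
    final.append(math.sqrt(v**2 - 2 * dU / mass))

spread_in = max(velocities) - min(velocities)
spread_out = max(final) - min(final)
print(f"spread: {spread_in:.3f} -> {spread_out:.3f} m/s")  # 0.200 -> ~0.12
```

With these assumed parameters the velocity spread shrinks from 0.200 m/s to roughly 0.12 m/s; if the time ordering were reversed, so that the later (slower) atoms lost more energy, the same mechanism would instead increase the spread.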
## III Experimental results
### A Experimental arrangement
To test the principle of slowing and cooling with time-varying electric field gradients, we slowed, cooled, and bunched packets of Cs atoms, initially traveling 2 m/s, using a single set of electric field plates with fields of up to $`5\times 10^6`$ V/m. The low initial velocity and large polarizability of Cs made it easy to observe and measure the slowing (0.20 m/s at $`5\times 10^6`$ V/m), cooling, and bunching effects with a single electric field region. A schematic of the apparatus is shown in Fig. 3.
Packets of Cs atoms were launched at rates of 0.25 Hz to 0.33 Hz from a vapor-capture magneto-optic trap constructed along the lines described in Ref.. The laser beams that formed the x-y plane of the trap were oriented at 45 degrees to the vertical. This made it easier to perform measurements on the atoms after they had been launched. The trap temperature, determined by observing the expansion of the Cs cloud, was about 30 $`\mu `$K. The trap laser system, which was stable and reliable, used an external cavity grating configuration with piezo-electric tuning, and a Spectra Diode Labs model 5410-C diode laser with an antireflection coating on the front facet. To launch the Cs atoms, a pair of acousto-optic modulators blue (red) shifted the upward (downward) pointing laser beams by 5 MHz to form a moving molasses.
The tower into which the atoms were launched extends 55 cm above the trapping region. At 20 cm above the trap, a 1.3 cm aperture restricts the horizontal dimensions of the packet. At 27 cm above the trap, a pair of stainless steel electric field plates, each 2.2 cm tall by 1.7 cm wide, straddle the center line. Each electric field plate is supported by a rod extending through a high voltage vacuum feed-through mounted on a bellows. The bellows allowed us to vary the spacing between the plates. Large gap spacings of 6 mm and 8 mm were chosen to allow the maximum number of atoms through the plates, and to minimize defocusing effects at the edges of the plates. See section III D for a discussion of the defocusing effects.
Two high-voltage pulsed power supplies, one positive and one negative, were used to charge the electric field plates. The heart of each power supply is an automobile ignition coil driven by a low current DC power supply that charges a capacitor in series with the input of the ignition coil. Discharging the capacitor supplies the input pulse to the coil. For cooling and bunching experiments, a decaying voltage was produced by an RC circuit at the ignition coil output. The components of this RC circuit are the high voltage coaxial cable (about 100 pF/m) from the ignition coil output to the high voltage feed through, and a resistor to ground. For slowing measurements, a high resistance was chosen to make the time constant long compared to the transit time of the atoms through the electric field plates.
For Cs atom time-of-flight velocity measurements, we formed probe laser beams using a small fraction of the light from the trapping laser. One probe beam passed 0.5 cm below the electric field plates and a second probe beam, perpendicular to the electric field, passed 14 cm above the first. The probe beam intensities were measured with photo diodes. The signal for the atoms passing through a probe beam was the attenuation of the probe beam due to scattering by the passing atoms.
The launched atoms arrived at the electric field plates in a packet 1.5 cm long – longer than the uniform region of the electric field plates. It was easier to understand the results of slowing measurements if the packet of Cs atoms fit entirely within the uniform region. The packet was trimmed, using the lower probe beam, which deflected the atoms sufficiently, so that they were not detected by the second probe. To accept only the center of the packet, the laser was shifted out of resonance for a few milliseconds. We were thus able to reduce the vertical size of the packet at the lower probe from 1.5 cm to about 0.3 cm. The arrival time of the atoms at the upper probe was measured relative to the launch time. The stability of the launch was checked by periodically measuring the arrival time at the lower probe.
### B Slowing
The time of arrival of the packet at the upper probe, as a function of the applied electric field, is shown in Fig. 4. The electric field was turned on after the Cs atoms entered the uniform field region of the plates, and was kept nearly constant as the atoms exited the plates. Increasing the electric field delayed the arrival of the Cs atoms at the upper probe. The width of the packet increased because the packet had more time to spread.
For a quantitative measurement of the slowing, we calculated the loss in kinetic energy, based on the increase in transit time, and plotted this quantity as a function of the square of the electric field. This is shown in Fig. 5. Two plate spacings, 6 mm and 8 mm, were used. We compare these data points with the expected energy loss, calculated from $`\alpha E^2/2`$ for $`\alpha =6.63\times 10^{-39}\text{J/(V/m)}^2`$ . The effect is clearly quadratic in the electric field and is close to the predicted size. The systematic error is consistent with the large uncertainty in our measurement of the electric field.
### C Cooling and compression
To cool and bunch the Cs atoms we matched the decay time of the electric field with the transit time of the atoms through the electric field gradient. In addition, we used the full vertical size of the packet, which at the plates was 1.5 cm and, without cooling or bunching, was 2.5 cm at the upper probe. With this short decay time, we were able to utilize two methods to cool and bunch the packets. In the first, the electric field was turned on once the whole packet had entered the uniform region. Here, the faster atoms, exiting first, were slowed more than the slower atoms, reducing the velocity spread. In the second method, we turned on the electric field once the faster atoms in the leading edge of the packet had entered the uniform region of the field, but while the slower atoms were still in the electric field gradient. The slower atoms were accelerated into the plates, while the fastest atoms, already in the uniform field region, were unaffected. On exiting the plates, the field had decayed sufficiently so as not to affect the distribution. However, with plates of the proper length, additional cooling could be achieved, on exit, using the first method. Bunching occurs as the packet evolves, if during the process, the (initially) slow atoms are accelerated to a velocity greater than that of the fast atoms.
As an example, Fig. 6 shows the arrival time of the Cs atoms at the upper probe as a function of electric field. The electric field is turned on when roughly half of the packet reaches the uniform field region. The rest of the packet is still in the electric field gradient. The electric field decays with an RC time constant of 5 ms, and the packet velocity at this point is roughly 2 m/s. The effects of cooling and then compression in Fig. 6 are striking. The transit time of the atoms through the upper probe is reduced from about 16 ms (FWHM) at zero field, to 3 ms at $`3.5\times 10^6`$ V/m. With a packet velocity of 1.2 m/s at the upper probe, the transit time corresponds to a packet length of about 3.6 mm, reduced from the original 1.5 cm at the electric field plates (2.5 cm at the upper probe).
To compare the experimental results with calculation, we modeled the time evolution of the longitudinal phase space of the Cs atoms for the experimental conditions in Fig. 6. The results are shown in Fig. 7. The electric field along the center line between the plates, used to determine the potential energy of the Cs atoms at each point, was calculated by a two-dimensional finite-element analysis program. The resulting potential energy is shown, superimposed on phase space diagrams, in Fig. 7b.
In Fig. 8 we compare the observed Cs beam profile with the calculation in Fig. 7. The calculation is done in one dimension and assumes that the initial spatial distribution of atoms is Gaussian. The calculated spatial distribution has been converted into time, translated by about 5 ms, and scaled to align it with the data. There is good agreement between experiment and calculation, except for a small difference between the width of the calculated and observed peaks (possibly due to the simple assumptions used in the calculation). We conclude that one can make reliable calculations of the effects of time-varying electric field gradients on the phase space evolution of atoms.
### D Defocusing
So far we have only discussed electric field configurations in one dimension. For atoms on the midplane between two parallel plates there are no additional forces. However, for atoms in the fringe field of the plates, and not on the midplane, there is a force toward the nearest plate. The magnitude of this force, which transversely defocuses the packet, depends on the shape of the edges of the electric field plates. In general, any convex (concave) surface on a field plate produces a local increase in the electric field gradient towards (away from) the surface. It should be stressed that the change in kinetic energy of the atoms is determined by conservation of energy. All atoms will have their kinetic energy reduced, by the same amount, after exiting the electric field, even though some may have slightly changed direction.
The defocusing effects can be minimized by using an electric field plate gap that is large compared to the width of the beam of atoms. However, increasing the gap increases the voltage needed to produce the same electric field, and reduces the maximum electric field that can be sustained. For simplicity, in this experiment, we chose a small gap-to-beam-width ratio and tolerated some defocusing. However, there are more elaborate field configurations for which our calculations show very small defocusing effects. In Fig. 9, we compare one such set of electric field plates with field plates having a simple parallel plate geometry. For an atom slightly off the midplane, the transverse force is reduced by about a factor of six compared to the simple plate geometry. We will discuss details of focusing and defocusing in a future paper.
## IV Applications
### A Slowing thermal beams of atoms and molecules
#### 1 Electric fields
While electric fields of $`10^7`$ V/m or higher can be maintained by ordinary metal electrodes with a small gap spacing, much stronger fields can be maintained by heated glass cathodes. Glass cathode systems have been used to produce large Stark effects in beams of Cs and Tl atoms. A set of 75 cm long all-glass electric field plates have operated at $`4.5\times 10^7`$ V/m. Short electric field plates with a heated glass cathode at ground potential, and a metal anode have sustained electric fields of $`5\times 10^7`$ V/m and higher.
#### 2 Slowing ground state atoms
Atoms that are of interest for slowing and cooling by time-varying electric field gradients are those with large dipole polarizabilities that can not be laser slowed and cooled. The dipole polarizabilities are largest in alkali metals and alkaline earths. However, actinides, lanthanides, and transition elements near an alkali also have polarizabilities above $`1\times 10^{-39}\text{J/(V/m)}^2`$, compared to $`6.63\times 10^{-39}\text{J/(V/m)}^2`$ for Cs, which has the highest known ground state polarizability.
As an example, we consider slowing a thermal beam of neutral americium (atomic number 95) to near rest. With a polarizability of $`2.59\times 10^{-39}\text{J/(V/m)}^2`$, each $`5\times 10^7`$ V/m slowing section will reduce the kinetic energy by 0.23 K. It is impractical to slow atoms from the peak of the velocity distribution at the 1500 K needed to form a beam of Am. However, if one is willing to sacrifice intensity, the slower atoms, from the low velocity tail of the thermal velocity distribution, are available. The low velocity atoms from thermal distributions are often used to load magneto-optic traps, either from a vapor inside the chamber (as we have done), or from an atomic beam. Recently, Ghaffari et al. developed an atomic low-pass velocity filter that passes slow atoms and blocks fast atoms.
For the Am velocity distribution tail at 1 K (8.4 m/s), only about five time-varying electric field gradient slowing sections would be needed to bring the atoms to near rest. The maximum energy spread that can be accepted by a single section is about equal to the energy decrease in one section, which for Am is about 0.23 K. Based upon a Maxwell-Boltzmann velocity distribution inside a 1500 K effusive oven, with a thin orifice, roughly $`10^{-7}`$ of the atoms in a beam will be in the energy range from 0.89 K to 1.1 K. (About $`4\times 10^{-7}`$ of the atoms in a beam will be within the energy range 3.60 K to 3.83 K.)
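The quoted tail fraction can be checked against the flux-weighted Maxwell-Boltzmann beam distribution. The sketch below assumes an ideal effusive beam, $`f(v)=2v^3/a^4\mathrm{exp}(v^2/a^2)`$ with $`a=\sqrt{2k_BT/M}`$, and the Am-243 mass (the isotope choice is an assumption):

```python
import math

k_B = 1.381e-23              # Boltzmann constant, J/K
M = 243.0 * 1.6605e-27       # Am-243 mass, kg
T = 1500.0                   # oven temperature, K

def beam_cdf(E_kelvin):
    # Fraction of the beam below the speed with kinetic energy E (in K).
    # Substituting s = v^2/a^2 = E/T gives integral_0^x s e^-s ds
    # = 1 - (1 + x) exp(-x).
    x = E_kelvin / T
    return 1.0 - (1.0 + x) * math.exp(-x)

v_1K = math.sqrt(2 * 1.0 * k_B / M)
frac = beam_cdf(1.1) - beam_cdf(0.89)
print(f"speed at 1 K: {v_1K:.1f} m/s")          # ~8.3 m/s
print(f"fraction in 0.89-1.1 K: {frac:.1e}")    # ~1e-7, as quoted
```

The closed-form CDF follows because the energy window is far below the oven temperature; the result, about $`9\times 10^8`$, rounds to the $`10^{-7}`$ quoted above.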
The final velocity and the minimum length of the electric field plates will determine the repetition rate for slowing packets of atoms. At the last field plate section, where the distance between pulses will be at a minimum, one pulse of atoms must exit before the next pulse enters. The total number of atoms that reach the end of the apparatus thus depends on the final velocity, the length of the apparatus, the initial velocity, and on any focusing available.
A method for focusing strong field seeking atoms and molecules using alternating electric field gradients has been applied to molecular beams. The range of beam energies that can be accepted can also be increased by using a design that cools over several electric field sections before slowing. We will discuss some of these details in a future paper.
For clusters, the polarizability per atom of small homonuclear alkali clusters is close to the atomic polarizability, and decreases to a value of about 0.4 times the polarizability per atom for bulk samples. It should be possible to slow and cool clusters in the same way as atoms.
We also note that the metastable states of noble gases have dipole polarizabilities that are of order $`10\times 10^{-39}\text{J/(V/m)}^2`$. This would allow noble gas atoms in the metastable states to be slowed and cooled, much the same way as ground state (Cs) atoms. They do, however, have large tensor polarizabilities that may permit effective slowing and cooling of only a single angular momentum state.
#### 3 Slowing Rydberg atoms
The properties of Rydberg atoms, atoms in states of very high principal quantum number $`n`$, are similar for all elements and include very large dipole polarizabilities that can be either positive or negative. This makes Rydberg states worth considering for slowing atoms. Slowing of Rydberg atoms in inhomogeneous electric fields was proposed by Breeden and Metcalf , who analyzed the case of time independent inhomogeneous electric fields, and atoms in short-lived Rydberg levels.
To slow Rydberg atoms using time-varying electric field gradients, a number of conditions must be met. The lifetimes of the states must be long enough for the atoms to pass through the apparatus, and the electric field must neither quench the state nor ionize the atom, but should still be large enough to produce significant slowing. The lifetime of a state in a quasi-hydrogenic Rydberg atom, with principal quantum number $`n`$ and angular momentum $`l`$, has been calculated by Chang , who finds for high angular momentum, $`\tau =93n^3(l+0.5)^2`$, where $`\tau `$ is the lifetime in ps. For $`n=30`$, the lifetimes are about 2.2 ms for $`l=29`$ and about 1.1 ms for $`l=20`$. An external electric field mixes different values of $`l`$ having the same z component of angular momentum $`m`$. Thus, only the sublevels with large values of $`m`$ will have unquenched lifetimes, since the lower $`m`$ states mix with low $`l`$ states that have much shorter lifetimes.
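The quoted lifetimes can be reproduced directly from Chang's fit:

```python
def rydberg_lifetime_ms(n, l):
    """Quasi-hydrogenic Rydberg lifetime from Chang's fit quoted in the text:
    tau = 93 * n^3 * (l + 0.5)^2 picoseconds, returned in milliseconds."""
    tau_ps = 93.0 * n**3 * (l + 0.5)**2
    return tau_ps * 1e-9  # ps -> ms


print(rydberg_lifetime_ms(30, 29))  # ~2.2 ms
print(rydberg_lifetime_ms(30, 20))  # ~1.1 ms
```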
The critical electric field for ionizing a Rydberg atom is given classically as $`E_{cr}=1/(16n^4)`$, where $`E_{cr}`$ is in atomic units (of $`5.14\times 10^{11}`$ V/m). High m states are more circular and thus require a higher field to ionize. The Stark effect also modifies the critical field and for blue shifted levels the critical field may be closer to $`1/(12n^4)`$.
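A short helper evaluating the classical ionization field (the $`1/(12n^4)`$ figure for blue-shifted Stark levels is the one used in the next paragraph):

```python
AU_FIELD = 5.14e11  # atomic unit of electric field, V/m (value from the text)


def critical_field(n, blue_shifted=False):
    """Classical ionization field for a Rydberg state, in V/m.
    Blue-shifted Stark levels survive to roughly 1/(12 n^4) atomic units
    instead of the classical 1/(16 n^4)."""
    denom = 12.0 if blue_shifted else 16.0
    return AU_FIELD / (denom * n**4)


print(critical_field(30))  # ~4.0e4 V/m for n = 30
```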
The change in energy levels with the electric field has been calculated by Bethe and Salpeter . For a hydrogenic case they find, in atomic units (1 au $`=1.0973\times 10^7`$ m<sup>-1</sup>), $`W=-\frac{1}{2n^2}+\frac{3}{2}En(n_1-n_2)-\frac{1}{16}E^2n^4[17n^2-3(n_1-n_2)^2-9m^2+19]`$, where $`n_1`$ and $`n_2`$ are parabolic quantum numbers that satisfy the equation $`n=n_1+n_2+|m|+1`$. Setting $`E`$ equal to $`1/(12n^4)`$, we find that for $`n=30`$, $`l=20`$ the energy change at the maximum field that does not ionize the atom is about 5.6 K or 7.6 K, depending on whether the red or blue shifted level is chosen. For the circular state, $`|m|=n-1`$ and $`n_1=n_2=0`$, the electric field cannot mix states from the same $`n`$ or lower $`n`$ and the shift is much smaller. For $`n=30`$, $`l=29`$, the maximum energy change before ionization is about 0.66 K.
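A sketch of this Stark expression (atomic units; the kelvin values quoted above also depend on the unit conversion adopted in the original, so only the formula itself is implemented here):

```python
def stark_energy(n, n1, n2, m, E):
    """Hydrogenic Stark energy (atomic units) through second order:
    W = -1/(2 n^2) + (3/2) E n (n1 - n2)
        - (E^2 n^4 / 16) * (17 n^2 - 3 (n1 - n2)^2 - 9 m^2 + 19),
    with the parabolic quantum numbers obeying n = n1 + n2 + |m| + 1."""
    assert n == n1 + n2 + abs(m) + 1
    q = n1 - n2
    return (-0.5 / n**2
            + 1.5 * E * n * q
            - (E**2 * n**4 / 16.0) * (17 * n**2 - 3 * q**2 - 9 * m**2 + 19))


# Zero-field limit recovers the Bohr energy -1/(2 n^2):
print(stark_energy(30, 0, 0, 29, 0.0))  # -1/1800, about -5.56e-4 au
```

At fixed field the blue-shifted level ($`n_1>n_2`$) lies above the red-shifted one by twice the linear term, which is the origin of the red/blue asymmetry quoted above.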
#### 4 Slowing polar molecules
Time-varying electric field gradient slowing and cooling of polar molecules with large electric dipole moments can be very efficient. As examples we consider two linear diatomic molecules with large dipole moments: cesium fluoride, which has a small rotational constant, and lithium hydride, which has a very large rotational constant.
As discussed in section II, a rigid rotor model calculation of CsF, in its lowest rotational state ($`J=m=0`$), shows that CsF would lose about 16 K of energy exiting each $`10^7`$ V/m electric field section. The next few higher rotational levels (Fig. 10a) would also experience large changes in kinetic energy and could be efficiently slowed and cooled. With a $`5\times 10^7`$ V/m electric field, the change in kinetic energy (see Fig. 10a), for the lowest rotational level, would be about 89 K, or equivalently a molecule traveling 98 m/s could be brought to rest. Longitudinal cooling, of about 89 K, could also be achieved in a single time-varying electric field gradient section.
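The scale of these rigid-rotor numbers can be checked against the strong-field ("pendular") limit, in which the interaction energy approaches $`-d_eE`$, so $`d_eE/k_B`$ caps the per-section energy change. A sketch using a literature-style CsF dipole moment (the numerical dipole value is an assumption, not given in the text):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K
D_CSF = 2.6e-29     # CsF dipole moment, J/(V/m), ~7.9 debye (assumed value)


def dipole_bound_K(d, e_field):
    """Upper bound, in kelvin, on the per-section energy change for a
    strong-field-seeking pendular state: |W| <= d * E.  Rigid-rotor
    zero-point motion reduces the actual figure somewhat, as in the
    text's 16 K and 89 K values."""
    return d * e_field / K_B


print(dipole_bound_K(D_CSF, 1e7))  # ~19 K, consistent with the quoted ~16 K
print(dipole_bound_K(D_CSF, 5e7))  # ~94 K, consistent with the quoted ~89 K
```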
In a thermal beam of CsF formed at 850 K, about one-half percent of the molecules have a kinetic energy of 89 K or less. A disadvantage of using a thermal beam of molecules with a small rotational constant is that only a very small fraction of the molecules are found in any single rotational state. Only about one in 15,000 CsF molecules is found in the $`J=m=0`$ state in a thermal molecular beam at this temperature.
An alternative approach is to form a supersonic beam of CsF. The supersonic beam has a much lower internal temperature, which greatly increases the population of low rotational levels, and it is a very directional beam with a narrow velocity distribution. However, the beam velocity of a supersonic source is higher than the most probable velocity from an effusive source of the same temperature and has no low velocity tail. Slowing a supersonic CsF beam would require approximately twenty electric field sections – still very doable.
Lithium hydride and other metal hydrides have both very large rotational constants and large dipole moments. For LiH, $`B_e=`$ 11 K (7.5 cm<sup>-1</sup>) and $`d_e=2.0\times 10^{-29}`$ J/(V/m) (5.9 Debye). Even at $`5\times 10^7`$ V/m, the electric field does not completely suppress rotation (Fig. 10b), and for the $`m=0`$, $`J\ne 0`$ states, the interaction energy is both large and positive. As shown in Fig. 10b, a LiH molecule in the $`J=1`$, $`m=0`$ state entering an electric field of $`3.5\times 10^7`$ V/m would lose about 7 K of energy. And since the rotation constant is large, a significant fraction of the molecules in a thermal beam are in low rotation states. Approximately $`2\times 10^{-5}`$ of the LiH molecules in a beam from a 1200 K effusive oven will have an energy of 7 K or less. Those in the $`J=1`$, $`m=0`$ state could be slowed to rest with a single electric field section.
Lithium hydride in the low-lying $`m=0`$, $`J\ne 0`$ states and other molecular states that have positive interaction energies are weak field seeking and can be transversely focused by static multipole electric fields. (For focusing strong field seeking states see Ref.). It may also be possible to trap weak field seeking states in electric field traps or a laser trap. An electric field trap could, in principle, be up to 7.5 K deep for the $`J=1`$, $`m=0`$ state of LiH, depending on the electric field that can be sustained in the trap geometry.
The $`J=1`$, $`m=0`$ and $`J=2`$, $`m=0`$ rotational levels of CsF have negative interaction energies at strong electric fields, but positive interaction energies at weaker fields (Fig. 10a). Thus, CsF in these states could be efficiently slowed using strong electric field gradients, then focused and trapped in a weaker electric field. The change in kinetic energy exiting a $`5\times 10^7`$ V/m electric field would be 82 K (74 K) for the $`J=1`$, $`m=0`$ ($`J=2`$, $`m=0`$) state, while in an electric field of $`2.5\times 10^6`$ V/m ($`4\times 10^6`$ V/m) the $`J=1`$, $`m=0`$ ($`J=2`$, $`m=0`$) state would be weak field seeking with an interaction energy of 0.5 K (1 K). Alternatively, CsF could be slowed in the $`J=1`$, $`m=1`$ state and then, in a weak electric field, transferred to the $`J=1`$, $`m=0`$ state by an rf transition.
### B Application to cold atoms
#### 1 Measurement of the ground state dipole polarizability of atoms
Fig. 5 demonstrates the sensitivity of time-varying electric field gradient slowing to the static dipole polarizability. Each data point in this figure represents less than 300 seconds of counting. Key quantities in a time-varying electric field gradient measurement of dipole polarizabilities are the electric field strength, the beam velocities, and the electric field profile at the plate exit and/or entrance. The field profile is needed to determine the drift lengths before and after slowing for a time-of-flight velocity measurement. We expect that with a moderate effort, polarizability measurements on alkali, alkaline earth and other slow, cold atoms can be made to an accuracy of a few parts per thousand.
Although dipole polarizability is related to many important physical and chemical properties , the ground state dipole polarizability has been measured in fewer than 20 percent of the known elements. And of these, only for the noble gases and sodium has an accuracy of one percent been surpassed. The traditional method for measuring the static dipole polarizabilities of condensible atoms is the elegant electric-magnetic field gradient balance technique (E-H balance) , which uses thermal beams of atoms. Slow, cold atoms would also allow a significantly improved accuracy for E-H balance and other deflection based methods. However, we anticipate that time-varying electric field gradient slowing (or acceleration) will be easier to perform and poses fewer challenges to understanding the distributions of electric field and atoms.
#### 2 Beam transport
Inhomogeneous magnetic fields are used for transverse focusing of laser-cooled atom packets, and Cornell, Monroe, and Wieman have used time varying inhomogeneous magnetic fields to radially and axially focus atoms being transferred between traps. Time-varying electric field gradients are a useful complement to these atom optic elements because they are insensitive to the magnetic or hyperfine substates and, when edge effects are small or absent, they act only in the longitudinal direction.
One possible application of the time-varying electric field gradient to beam transport is to longitudinally spread atoms in a cold Cs atom atomic clock, to reduce collisional frequency shifts. In such a clock, the atoms would be spread before they passed through the first rf region (or the first passage through the single rf region in a fountain clock) and, if necessary, to sharpen the detection signal, rebunched after they passed through the second rf region (after their return through the rf region in a fountain clock). The magnetic fields associated with turning on and off the electric field can be made small so as not to influence the magnetic shielding environment of the clock.
#### 3 Launching atoms
If a set of electric field plates is turned on near a cloud of cold confined atoms in the ground state, the atoms will accelerate into the plates. Turning off the field when the atoms are in the uniform field region then allows the atoms to exit the plates with a net velocity. Cesium atoms entering an electric field of $`4\times 10^7`$ V/m will accelerate from rest to 6.8 m/s.
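The quoted launch speed follows from energy conservation, $`\frac{1}{2}mv^2=\frac{1}{2}\alpha E^2`$. A sketch (the cesium polarizability used here is an assumed literature-style value, not given in the text):

```python
from math import sqrt

AMU = 1.66054e-27   # atomic mass unit, kg
ALPHA_CS = 6.6e-39  # Cs ground-state polarizability, J/(V/m)^2 (assumed)


def launch_velocity(alpha, mass_kg, e_field):
    """Speed gained by an atom pulled from rest into a uniform-field
    region: (1/2) m v^2 = (1/2) alpha E^2, so v = E * sqrt(alpha / m)."""
    return sqrt(alpha / mass_kg) * e_field


print(launch_velocity(ALPHA_CS, 133 * AMU, 4e7))  # ~6.9 m/s, vs ~6.8 quoted
```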
The electric field plates can be positioned to launch atoms in any direction. To launch horizontally, a vertical gap will provide a fringe field at the bottom that can help keep the atoms from falling out of the plates. To launch vertically, the electric field gradient at the initial location of the atoms needs to be large enough to overcome gravity. In microgravity, the initial acceleration needs only to be large enough that the cloud of atoms does not expand beyond the dimensions of the electric field plate gap (to prevent the loss of atoms). It would also be easy to vary the launch velocity and direction. One possible arrangement of electric field plates is to invert the plate configuration shown in Fig. 9.
## V Acknowledgements
We are grateful to Douglas McColm for many stimulating and fruitful discussions and for his help in clarifying several key concepts. We thank Timothy Page, Andrew Ulmer, Christopher Norris, Karen Street, and Otto Bischof for assistance in constructing the apparatus, and thank C.C. Lo for developing a very functional and cost-effective time-varying high voltage power supply. One of us (JM) thanks the Environment, Health, and Safety Division at LBNL, and especially Rick Donahue and Roberto Morelli, for help with computing resources, and one of us (HG) thanks Alan Ramsey for timely inspiration. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. One of us (JM) is partially supported by a National Science Foundation Graduate Fellowship.
# Hubble Space Telescope Observations of the CfA Seyfert 2s: The Fueling of Active Galactic Nuclei

Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
## 1 Introduction
An important unsolved problem in the study of active galactic nuclei (AGN) is how the central massive black hole is fueled. In particular, if the primary fuel source is interstellar gas and dust in the host galaxy, how is that material transported from kiloparsec scales into the central parsecs of the galaxy and onto the supermassive black hole? In order to fuel the AGN, this material must somehow lose nearly all of its angular momentum. Estimates of the mass accretion rate to power AGN range from more than $`1M_{\odot }`$ per year for the most luminous quasars to $`0.01M_{\odot }`$ per year for the Seyfert galaxies discussed here. Thus, over an AGN’s lifetime (10<sup>7-8</sup> years?) a significant amount of material must be transported inwards to feed the central black hole.
A number of dynamical mechanisms have been proposed that could remove angular momentum from the host galaxy’s gas and thus supply the fueling rates required to match the observed luminosities of AGN. These include tidal forces caused by galaxy-galaxy interactions (Toomre & Toomre 1972) and stellar bars within galaxies (Schwartz 1981).
Numerical simulations of galaxy interactions (Hernquist 1989; Barnes & Hernquist 1991; Mihos, Richstone, & Bothun 1992; Bekki & Noguchi 1994; Bekki 1995) have shown that strong perturbations on a galaxy disk due to a close encounter with another galaxy can trigger the infall of large amounts of gas into the central 1 kpc to 10 pc, depending on the simulation resolution, the type of encounter, and the properties of the simulation. Further, relatively minor encounters (“minor mergers”, Hernquist 1989; Mihos & Hernquist 1994; Hernquist & Mihos 1995) can lead to significant gas infall, so one need not invoke disruptive encounters. Observationally, there is ample evidence of past or on-going tidal encounters and companions among nearby luminous QSOs (Hutchings & Neff 1988, 1992, 1997; Bahcall et al. 1995a,b, 1997), but at most only a mild statistical excess of interactions or close companions is seen among the lower-luminosity Seyfert galaxies (Adams 1977; Petrosian 1982; Dahari 1984; Keel et al. 1985; Keel 1996; DeRobertis, Hayhoe, & Yee 1998).
In barred galaxies, the triaxial stellar potential leads to a family of orbits for the stars and gas that can transport material into the inner 100$``$1000 pc of the galaxy. Models of orbits using 2-D and 3-D hydrodynamic simulations of gas and stars in barred potentials (Athanassoula 1992; Friedli & Benz 1993; Piner, Stone, & Teuben 1995) have shown the formation of a shock front at the leading edge of a bar. Material builds up in this shock and falls into the nuclear region, generally forming a nuclear ring with a diameter approximately equal to the bar’s minor axis ($`>`$100$``$1000 pc). Observations of apparent shock fronts at the leading edges of bars and velocity distributions in barred galaxies are in general agreement with inflow models (Quillen et al. 1995; Regan, Vogel, & Teuben 1997), and provide evidence that the dynamical behavior predicted by the models is realistic.
However, the typical semiminor axis of a bar is still $``$100$``$1000 pc, and thus while bars might plausibly transport material reasonably close to the center, an additional mechanism is needed to move the material into the nucleus proper. A mechanism proposed to complete the transport of material into the nucleus is the ‘bars within bars’ scenario (Shlosman, Frank, & Begelman 1989; Maciejewski & Sparke 1997; Erwin & Sparke 1998). In this model, a large-scale bar transports material into a kiloparsec-scale disk. This gaseous disk then becomes unstable to bar formation, creating a miniature ‘nuclear’ bar that transports the gas to within approximately 10 pc of the galactic nucleus, which is approximately where the central supermassive black hole’s potential can take over. While large-scale bars in galaxies are readily observable due to their large angular size, small-scale, nuclear bars (Shaw et al. 1995) are a factor of 5 to 10 smaller and thus are more difficult to resolve in distant galaxies. Because of their smaller physical size, nuclear bars were initially discovered in relatively nearby galaxies (de Vaucouleurs 1975; Kormendy 1979; Buta 1986a,b).
Observationally, investigations of the relative fraction of bars in AGN as compared to non-active galaxies have failed to turn up a significant excess of bars among spirals harboring AGN (Kotilainen et al. 1992; McLeod & Rieke 1995; Alonso-Herrero, Ward, & Kotilainen 1996; Mulchaey & Regan 1997). Similarly, searches for nuclear bars using the Hubble Space Telescope (HST) have failed to find them in numbers sufficient to salvage the bar fueling picture (Regan & Mulchaey 1999). We shall reach a similar conclusion in this paper. At most, these observations have shown that nuclear bars are present in only a minority of Seyferts.
This is not to say that bars and interactions are not viable fueling mechanisms as clearly they are operating in a number of Seyferts. The point is that neither is the dominant mechanism, nor necessarily responsible for fueling the AGN in all cases. What is responsible for fueling an AGN in the majority of cases where there is no evidence for either interactions or bars? One way of addressing this question is to assay the amount of potential fuel in the form of interstellar dust and gas in the inner kiloparsec of nearby Seyfert galaxies. Indeed, while Regan & Mulchaey (1999) failed to find a preponderance of nuclear bars in their HST imaging sample of 12 Seyferts, they did find one common morphological structure: spiral arms of dust, of greater or lesser degrees of organization, in the inner few hundred parsecs.
In this paper we present new, high angular resolution HST near-infrared NICMOS images combined with archival WFPC2 images of the nuclear regions of a representative sample of 24 Seyfert 2s to search for dynamical and morphological signatures of AGN fueling. In section 2 we describe our sample selection, and our observations in section 3. In section 4 we describe the nuclear morphology derived from direct flux and color maps to search for nuclear bars and circumnuclear dust structures. Section 5 describes the possible role of these structures in fueling the AGN, and we summarize our results and conclusions in section 6.
## 2 Sample Selection
The history of the study of interactions among Seyfert galaxies serves to illustrate the main difficulty in drawing conclusions on the nature of AGN from studies of statistical samples, namely the problem of defining an unbiased sample and identifying a suitable control group. As discussed by Keel (1996), the reexamination of interacting galaxy samples illustrated the importance of matching host galaxy types and luminosities (Fuentes-Williams & Stocke 1988). In the last decade, better defined samples of AGN, for example the Center for Astrophysics (CfA) sample (Huchra & Burg 1992) and the Revised Shapley-Ames (Sandage & Tammann 1987) sample of Seyfert galaxies (Maiolino & Rieke 1995), have greatly assisted statistical studies of the AGN population.
The CfA Redshift Survey (Huchra et al. 1982) obtained spectra of a complete sample of 2399 galaxies down to a limiting photographic magnitude of $`m_{Zw}\le 14.5`$. This relatively unbiased survey avoids many of the problems of traditional surveys for AGN, particularly those based on ultraviolet excesses, which are biased against reddened AGN and thus preferentially detect Seyfert 1s and Seyferts in host galaxies with active star formation. This sample also has the often overlooked advantage of a uniform set of spectral classifications obtained with high signal-to-noise ratio that provides comparable detection limits for weak broad-line components (Osterbrock & Martel 1993). This is essential for making meaningful classifications of Seyfert types 1.8, 1.9, and 2 (Osterbrock 1981) and for distinguishing objects with and without obvious broad-line components.
Studies of the relative frequency of close companions among the CfA Seyferts have shown that this sample has only a marginal excess of companions, although a larger fraction of the Seyferts do appear to be currently undergoing the final stages of a past interaction (DeRobertis, Hayhoe, & Yee 1998). In contrast, this same study emphasizes that a small but significant number of the CfA Seyferts show no morphological evidence for recent interactions. Infrared wavelength imaging also shows no signature of a violent interaction history in this sample (McLeod & Rieke 1995), nor does it detect an excess of nuclear bars.
For this study, we have chosen 24 of the 25 Seyfert 2s (including types 1.8 and 1.9 as classified by Osterbrock & Martel 1993) in the CfA sample. We are concentrating on just the Seyfert 2 galaxies because they in general have fainter nuclear point sources than Seyfert 1s (Nelson et al. 1996; Malkan, Gorjian, & Tam 1998), and thus the circumnuclear environment is relatively unaffected by the contamination from the complex HST point-spread function (PSF). This is especially true with NICMOS where the PSF is very complex (MacKenty et al. 1997).
## 3 Observations
Our observations comprise archival visible-band and new near-infrared HST images. The visible wavelength WFPC2 images include all of the Seyfert 2 galaxies in the CfA sample with the exception of NGC 4388 and Mrk 461 (neither has been observed to date with broad-band filters). Most of the visible images were from a snapshot survey of AGN (GO-5479, see Malkan, Gorjian, & Tam 1998) and are single 500s exposures taken through the F606W filter. The remaining archival images were taken through the F547M filter from a variety of programs. Table 1 lists the properties of the sample galaxies and which of these two visible band filters (hereafter referred to collectively as the $`V`$ filter) was used to obtain the $`V`$-band galaxy image used in this investigation. The images were all obtained with the nucleus roughly centered on the PC1 detector of WFPC2, which has a plate scale of 0$`\stackrel{}{\mathrm{.}}`$04553 pixel<sup>-1</sup> and a total effective field of view of 35$`\stackrel{}{\mathrm{.}}`$6 (Biretta et al. 1996).
We obtained near-infrared $`J`$- and $`H`$-band images of 23 of our Seyfert sample with the HST NICMOS Camera 1 during Cycle 7 (Project GO-7867). The remaining galaxy in our sample, NGC 1068, was observed by the NICMOS team as part of their Guaranteed Time Observations. We chose Camera 1 as its plate scale (0$`\stackrel{}{\mathrm{.}}`$043 pixel<sup>-1</sup>) is closest to that of the PC1 camera ($``$ 0$`\stackrel{}{\mathrm{.}}`$046 pixel<sup>-1</sup>), although with a substantially smaller field of view ($``$11″). The near match in plate scale between the visible and infrared images is helpful for matching images from the two cameras and achieving the best sampling of the resolutions of the two cameras. Further, the surface brightness of the underlying host galaxies falls off sufficiently fast that we would have obtained less signal-to-noise in the outer regions of NICMOS Camera 2 at a cost of coarser pixel sampling (0$`\stackrel{}{\mathrm{.}}`$075 pixel<sup>-1</sup>). Our images were taken through both the F110W and F160W filters (hereafter $`J`$ and $`H`$, respectively), which are each approximately 2000 Å wide and are centered at 1.1 $`\mu `$m and 1.6 $`\mu `$m.
In each filter we acquired images at four dither positions separated by 1″ (SPIRAL-DITH pattern) so that the effects of bad pixels and other detector artifacts could be eliminated. At each dither position we used an exposure ramp (STEP128) for a total of 256 seconds per position. This allows us to correct for any saturated pixels that might occur if the nucleus is brighter at near-IR wavelengths than suggested by the archival WFPC2 imaging (as would be expected if the nuclei are dusty). The final shifted and combined frames have a cumulative exposure time of 1024 seconds per filter. This was done for each galaxy, resulting in a homogeneous set of relatively deep near-infrared images of the central 11″ of these galaxies.
To calibrate this dataset we had to perform several additional processing steps beyond the standard HST data-reduction pipeline. For the WFPC2 images, this consisted of cosmic$``$ray removal and absolute flux calibration. The flux calibration included transforming our F547M and F606W to the Johnson/Cousins system by using the STSDAS SYNPHOT package to convolve a series of galaxy templates with both Johnson $`V`$ and the HST filters.
For the infrared images we shifted and added the four individual exposures from the CALNICA part of the data reduction pipeline (MacKenty et al. 1997) after masking out individual bad pixels and detector artifacts. We chose not to use the CALNICB part of the standard data reduction pipeline as this task attempts to subtract the background from the final mosaic image. As our observations were not through filters contaminated by thermal emission, this step was not necessary. Furthermore, the background subtraction attempted by the CALNICB pipeline was in fact subtracting the flux level due to extended emission from these galaxies, a problem that was particularly acute for our brighter targets that covered the entire array. We finally flux-calibrated our data using the transformations derived by Stephens et al. (2000) to place the near-infrared images on the CIT system. We will present the full atlas of our observations in a future publication.
## 4 Nuclear Morphology
### 4.1 $`V-H`$ Color Maps
If there is cold gas flowing into the nuclear ($`<10`$ pc) regions from the host galaxy, we should see evidence of this material in the form of dust lanes extending from large scales into the nuclear regions. Dust is a useful tracer of this material as it is generally well-mixed with gas and can be detected by its attenuation of light from background stars. Our infrared images primarily trace the stellar distribution in these galaxies, while the visible wavelength images trace both the stellar distribution and the dust. We combined these images to form $`V-H`$ color maps of all of the galaxies for which we had visible band HST data to map the dust distribution.
Figure 1 shows the $`V-H`$ color maps for the Seyfert 1.8s and 1.9s in our sample, while Figure 2 shows the $`V-H`$ color maps for the Seyfert 2s. In Figure 3 we show the $`J-H`$ color maps for Mrk 461 and NGC 4388, the two galaxies for which we do not have visible-band HST imaging. These images are 5″ on a side, corresponding to projected spatial sizes between 2.7 kpc and 125 pc in the galaxies for $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. The 2$``$pixel resolution element is 0$`\stackrel{}{\mathrm{.}}`$091, corresponding to projected spatial scales between 50 pc and 2.3 pc. In Table 1, column 8 we list the physical scale corresponding to 1″ at the distance of each galaxy. To provide a visual reference, a bar showing 100 pc projected distance at the galaxy is drawn in the bottom left corner of each frame.
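These projected scales follow from a simple Hubble-law distance. A sketch with the adopted $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>:

```python
ARCSEC_RAD = 4.8481e-6  # radians per arcsecond
H0 = 75.0               # km/s/Mpc, as adopted in the text


def pc_per_arcsec(cz_km_s):
    """Projected scale for a galaxy at recession velocity cz, using a
    Hubble-law distance D = cz / H0 (valid at low redshift)."""
    d_pc = (cz_km_s / H0) * 1e6  # distance in pc (Mpc -> pc)
    return d_pc * ARCSEC_RAD


# The text's 5-arcsec fields span 125 pc to 2.7 kpc, i.e. 25-540 pc/arcsec,
# corresponding to cz between roughly 390 and 8400 km/s:
print(pc_per_arcsec(390))   # ~25 pc/arcsec
print(pc_per_arcsec(8400))  # ~540 pc/arcsec
```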
To exactly match the plate scales of the two images, we rebinned our NICMOS images to the plate scale of the PC1 chip using a geometric transformation. Though the resolutions of the two cameras are different, we did not smooth our $`V`$ and $`H`$ images to a common resolution. As a result our $`V-H`$ color maps, particularly those of galaxies with a bright nucleus at $`H`$, show artifacts of the resolution mismatch as small, “red” rings at the position of the nucleus (e.g. Mrk 334, Mrk 744) as well as “red” spikes from the NICMOS diffraction pattern (e.g. Mrk 334). We did not smooth the $`V`$ and $`H`$ images to a common resolution as we found that this substantially decreased the contrast of the dust features in these color maps. This resolution mismatch, however, introduces small uncertainties into our estimate of the amount of reddening present in the color maps. We discuss this further in §4.3 below.
The images shown in Figures 1, 2 & 3 reveal a wide range of morphologies, including nuclear spiral patterns, stellar bars, and irregular, clumpy dust distributions. In Table 2 we list our morphological classification for each galactic nucleus. The most common morphology we find in these images are nuclear spiral dust lanes, generally extending into the inner 10$``$100 pc. These nuclear spiral arms are wound in the same direction as the main disk spiral arms, but they do not appear to be continuations of the main disk spiral arms. In many of the galaxies with large-scale bars, these nuclear dust spirals are clearly connected to dust lanes along the leading edge of the bar, similar to what is seen in the hydrodynamic simulations of Athanassoula (1992). The most striking examples are Mrk 471, UGC 12138, NGC 5674, NGC 3362, NGC 5347, and NGC 7674.
The nuclear spiral dust lanes in Figures 1, 2, & 3 exhibit a variety of spiral morphologies, ranging from relatively tightly wound arms (e.g., NGC 1144 and UGC 6100) to loosely wound arms (e.g., UGC 12138 and Mrk 573). We find examples of one-sided spiral arms (e.g., NGC 3982 and Mrk 270), two-arm, “grand-design” spirals (e.g., UGC 12138 and NGC 7682), and multi-armed, “flocculent” spirals (e.g., Mrk 334 and NGC 5674). Often it is easier to see spiral arms on one half of the image than the other, indicating that the gas disks containing the spiral structure are inclined to the line of sight and we are more clearly seeing the dust lanes on the near side of the galaxy (e.g. Mrk 744 and NGC 1144).
In some of our color maps we also see regions with very blue colors. These include diffuse “blue” regions that are due to extended emission-line gas and bright, unresolved knots of blue light that are likely regions of OB star formation. In particular, these latter regions are found in Mrk 334, Mrk 744, Mrk 266SW, and NGC 7682 (Figures 1 & 2). While we do not have UV imaging to confirm that there is significant star formation in these sources, some authors (Heckman et al. 1995, 1997; González Delgado et al. 1998) have proposed that massive star formation on 100 pc scales may be a significant contributor to the continuum emission in some Seyfert 2s. These galaxies may therefore be candidates for such Seyfert 2–Starburst galaxies.
### 4.2 Nuclear Bars
Nuclear stellar bars have been implicated as a possible mechanism for removing angular momentum from gas on 100’s of parsec scales and transporting it to the inner 10’s of parsecs, where the central supermassive black hole begins to dominate the gravitational potential. Recent ground-based (Mulchaey & Regan 1997) and HST observations (Regan & Mulchaey 1999) have shown that most Seyfert galaxies do not have nuclear bars, implying that some other mechanism is responsible for transporting interstellar gas to the central engine. Our data confirm and extend their conclusions with a larger, representative sample of Seyfert 2s. We searched for nuclear bars by qualitatively examining the isophotal contour maps of our $`V`$, $`J`$ and $`H`$ images. In several galaxies we also found straight dust lanes in the color maps, but in all cases the bar signature was clearly visible in the isophotal contour maps (see Figure 4). In a future paper we will present a more quantitative analysis of bar selection in these galaxies.
We find at most 5 nuclear bars among the 24 galaxies in our sample (see Figure 4). Mrk 573 is the best example of a double-barred galaxy in our sample. It has both a host-galaxy bar and a nuclear bar as seen in previous ground-based and HST visible-wavelength imaging (Pogge & DeRobertis 1993; Capetti et al. 1996), and there is a clear straight dust lane going into the nucleus in visible and visible/near-IR color maps (Pogge 1996; Capetti et al. 1996; Quillen et al. 1999). The appearance of this dust lane in the $`V-H`$ color map is consistent with dust lanes at the leading edge of the bar (cf. Quillen et al. 1999). The nuclear bar in Mrk 270 is apparent at visible and infrared wavelengths, and there is a straight dust lane in the $`V-H`$ color map oriented nearly perpendicular to the bar major axis. NGC 5929 has a distinct nuclear bar in the near-infrared images, but it is hidden by dust in the visible-light images. The $`V-H`$ color map of this galaxy shows the nuclear region is very dusty with an irregular dust morphology. Mrk 471 shows evidence for a nuclear bar in both the near-infrared surface brightness image and through straight dust lanes in the $`V-H`$ color map. The lowest surface brightness contours in Figure 4 also show the large-scale bar in Mrk 471.
Regan & Mulchaey (1999) report a nuclear bar in NGC 5347. While this galaxy has a clear large-scale bar seen at visible and IR wavelengths (McLeod & Rieke 1995), the nuclear bar is less obvious in the $`H`$band image (also shown in Figure 4). We will retain the classification of this galaxy as nuclear barred in the interests of setting a strong upper limit on the nuclear bar fraction in Seyferts, but we consider this the weakest case for a nuclear bar of the 5 we have found.
Peletier et al. (1999) have suggested that there is a nuclear bar perpendicular to the main galaxy bar in NGC 5674 at visible wavelengths, but we do not see it in our near-infrared images (Figure 4). Instead, we find a great deal of dust in this galaxy along the semiminor axis on both sides of the nucleus, and thus the “bar” apparent at visible wavelengths is most likely an artifact of the circumnuclear dust distribution. This underscores the advantage of using near-infrared images to search for and confirm suspected nuclear bars, as dust can greatly distort the nuclear surface brightness profiles at visible wavelengths.
Finally, nuclear bars that are heavily enshrouded by dust may not reveal their presence with straight dust lanes in $`VH`$ color maps. If nuclear bars are generally hidden by large quantities of dust, however, straight dust lanes may be visible in $`JH`$ color maps. We have created $`JH`$ color maps for all of the galaxies in our sample, but find no additional evidence for nuclear bars in the $`JH`$ color maps that were not detected in our $`V`$, $`H`$, or $`VH`$ images.
### 4.3 Circumnuclear Dust Lanes
Seyferts, like more quiescent spirals, have a central stellar component that consists of a disk and a central bulge. In the nuclear regions there is additional light from the AGN and the extended narrow-line region. This latter region is partially resolved in many HST images of Seyferts (Bower et al. 1994; Simpson et al. 1997; Malkan et al. 1998). The near-infrared continuum emission from the unresolved active nucleus proper may also include a contribution from hot dust (e.g., Glass & Moorwood 1985; Alonso-Herrero et al. 1998; Glass 1998).
The presence of circumnuclear dust manifests itself in our images in two ways. First, we expect there to be a relatively uniform distribution of diffuse dust spread throughout the volume of the nuclear bulge and disk of the host galaxy. This component will be difficult to measure as it is expected to be well mixed with the stars. Second, we expect discrete clouds of gas and dust, most likely organized into a disk confined to a plane (or warped slightly), which could give rise to the spiral dust lanes seen in the color maps (Figures 1, 2, & 3).
Diffuse gas and dust uniformly distributed in the central regions of the galaxies would have a relatively low integrated mass, and so is unlikely to be a significant contributor to the fuel reserves of the active nuclei. The discrete, dense dust structures we see in the nuclear disks, however, particularly in the spiral dust lanes, are a more probable fuel source. As these spiral dust lanes are likely to be confined to a disk, their net extinction can be estimated by treating them as a thin, obscuring slab of material and using the change in $`VH`$ color between the arm and interarm regions as an estimate of the total extinction, $`A_V`$. We define the color excess due to dust in the nuclear disk as:
$$E(VH)=(VH)_{arm}(VH)_{interarm}.$$
This excess color is used to estimate the total visual extinction, $`A_V`$, via a standard interstellar extinction curve (Mathis 1990). The $`JH`$ color excess could also be used for this purpose, but due to the close proximity of these filters in wavelength, they do not provide as precise a measure of $`A_V`$. By relating this $`A_V`$ to a column density of hydrogen atoms, $`N_H=1.87\times 10^{21}A_V`$ cm<sup>-2</sup> for an $`R_V=3.1`$ reddening law (Bohlin, Savage, & Drake 1978), and converting this into a mass column density, we can obtain a rough estimate of the surface density of these disks:
$$\mathrm{\Sigma }=15\times A_VM_{\mathrm{}}\mathrm{pc}^{-2}.$$
The disk surface density derived by this technique actually estimates the maximum surface density rather than the average. However, as discussed in detail below, this measure of $`E(VH)`$ will tend to underestimate the true value of $`A_V`$ due to uncertainties in the dust geometry and instrumental effects. We therefore chose this conservative approach to estimating the average disk surface density so as to minimize the likelihood that we have significantly underestimated the surface density.
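The conversion chain above can be sketched numerically. This is a minimal illustration only; the function name and the example color excess are ours, while the conversion factors are those quoted in the text (including the $`A_V1.2E(VH)`$ relation given below for an $`R_V=3.1`$ law).

```python
# Chain from arm-interarm color excess E(V-H) to disk surface density,
# using the conversions quoted in the text: A_V ~= 1.2 E(V-H) for an
# R_V = 3.1 reddening law (Mathis 1990), N_H = 1.87e21 A_V cm^-2
# (Bohlin, Savage, & Drake 1978), and Sigma = 15 A_V Msun pc^-2.

def surface_density(e_vh):
    """Return (A_V [mag], N_H [cm^-2], Sigma [Msun pc^-2])."""
    a_v = 1.2 * e_vh
    n_h = 1.87e21 * a_v
    sigma = 15.0 * a_v
    return a_v, n_h, sigma

# Example: the reddest arms in this sample reach E(V-H) ~ 1.1 mag
a_v, n_h, sigma = surface_density(1.1)
print(a_v, n_h, sigma)   # ~1.3 mag, ~2.5e21 cm^-2, ~20 Msun pc^-2
```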
While this estimate is computationally straightforward, there are several systematic effects at work that combine to make these estimates lower limits on the true gas surface density. First, we have used a very simple foreground screen as our dust-lane model. A more physically realistic model would be a dust screen sandwiched between two layers of stars, corresponding to the stars behind and in front of the dust relative to the line of sight. For an equal amount of starlight on either side of the dust lane, the arm-interarm $`E(VH)`$ defined above reaches a maximum of $`E(VH)\approx 0.45`$ at $`A_V\approx 1.5`$ for a smooth dust layer. A patchy or clumpy distribution, as discussed below, would increase these numbers. Scattering is another vital component of a realistic dust model. Dust particles are strongly forward scattering and more likely to scatter blue light than red light, causing a “bluing” of the starlight that could counteract, in part, the reddening due to absorption. Significant scattering would reduce the observed $`E(VH)`$, thus leading us to underestimate the total column density of material in these nuclear spiral arms. We estimated the magnitude of scattering by incorporating the “Dusty Galactic Nuclei” model of Witt, Thronson, & Capuano (1992) into our dust sandwich model. We find that scattering can decrease the observed $`E(VH)`$ by a factor of two, corresponding to an increase in the disk surface density by the same factor. The simple foreground screen model we have used to estimate the disk surface density therefore results in an underestimate of the true disk surface density by a factor which depends on the true dust geometry, but is useful as it sets a lower limit on the surface density.
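The saturation of the arm-interarm color excess in the sandwich geometry can be illustrated with a short numerical sketch. This is our own simplified version of the argument, neglecting scattering and assuming $`A_H/A_V0.176`$ for the Mathis (1990) $`R_V=3.1`$ curve; the peak value it yields agrees with the $`0.45`$ mag quoted above, though the $`A_V`$ at which the peak occurs depends on the assumed $`A_H/A_V`$ ratio.

```python
import numpy as np

# "Dust sandwich": a dust layer between two equal layers of stars. The
# front half of the starlight is unreddened, so the apparent arm-interarm
# E(V-H) saturates far below the true A_V. Assumes A_H = 0.176 * A_V for
# an R_V = 3.1 extinction curve (Mathis 1990); scattering is neglected.

def observed_excess(a_v):
    a_h = 0.176 * a_v
    dim = lambda a: -2.5 * np.log10(0.5 * (1.0 + 10.0 ** (-0.4 * a)))
    return dim(a_v) - dim(a_h)      # apparent E(V-H) through the sandwich

a_v = np.linspace(0.0, 10.0, 1001)
e_obs = observed_excess(a_v)
print(e_obs.max())   # saturates near ~0.45 mag and then declines
```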
Another complication to our estimate of the disk surface density is that our images do not resolve small-scale dust structures in these dust lanes. Studies of the arm-interarm extinction in overlapping galaxies (White & Keel 1992; White, Keel, & Conselice 1996; Berlind et al. 1997) show that while the large-scale dust lanes are optically thin, the extinction law in the spiral arms tends to be grayer than a standard Galactic extinction law, primarily due to unresolved, optically thick clumps (Berlind et al. 1997). Unresolved clumps of dust in these nuclear spirals, particularly in the more distant galaxies, would lead to a grayer extinction law than assumed here, resulting in an underestimate of the total dust mass.
A second resolution effect is due to the finite resolution of the WFPC2 and NICMOS cameras. Many of the dust lanes in our $`VH`$ color maps are unresolved or only marginally resolved. We have therefore constructed a simple model of a dust lane to estimate the magnitude of this effect. We constructed models of dust lanes by creating a series of artificial images at $`V`$ and $`H`$ of “ideal” dust lanes in which the dust lane was a step function in flux at a range of widths and $`E(VH)`$ (note that $`A_V=1.2E(VH)`$ for $`R_V=3.1`$; Mathis 1990). We then convolved these images with the HST PSF for the PC1 camera and NICMOS Camera 1 from TinyTim (Krist & Hook 1999) and measured the difference $`E(VH)_{model}E(VH)_{conv}`$ as a function of the width of the unconvolved dust lane in units of PC1 pixels. In Figure 5 we plot the difference in $`E(VH)`$ to illustrate the effect of unresolved and marginally resolved dust lanes on measurements of the extinction (solid lines). The lines correspond to input $`E(VH)`$ values of 1.1, 0.8, and 0.4 with the top line corresponding to the largest $`E(VH)`$ we measured. This figure shows that unresolved dust lanes may cause us to underestimate $`A_V`$ by up to a factor of three for the most heavily reddened dust arms in this sample.
A further complication is the resolution mismatch between the PC1 and NICMOS Camera 1, which corresponds to different widths for the $`V`$ and $`H`$ PSFs. Creating a $`VH`$ color map corresponds to dividing a $`V`$band image of the galaxy that has been convolved with the narrower WFPC2 PSF by an $`H`$band image that has been convolved with the broader NICMOS PSF. The result is a weak unsharp masking effect that causes the $`VH`$ color to be slightly larger in unresolved or marginally resolved dust lanes. To measure the size of this effect, we repeated the exercise discussed above, but convolved both the $`V`$ and $`H`$band images with the same PSF (in this case the PSF for the F606W filter and PC1 detector). We have plotted the results of this test (dotted lines) on Figure 5 for the same $`E(VH)`$ values and, as expected, the model dust lanes convolved with different PSFs are systematically redder than the model dust lanes convolved with identical PSFs. The magnitude of this effect, however, is much smaller than that of convolving the model dust lanes with a PSF in the first place.
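The resolution test can be mimicked in one dimension with Gaussian stand-ins for the two PSFs. This is an illustrative sketch only; the FWHM values and the $`A_H/A_V=0.176`$ ratio are our assumptions, not the TinyTim PSFs used in the actual analysis.

```python
import numpy as np

# One-dimensional sketch of the resolution test described in the text: an
# "ideal" dust lane (a step in transmission against a uniform stellar
# background) is convolved with Gaussian stand-ins for the PC1 and NICMOS
# Camera 1 PSFs, and the measured arm-interarm E(V-H) is compared with
# the input value.

def measured_excess(e_in, lane_pix, fwhm_v=2.0, fwhm_h=3.5, n=2001):
    a_v = e_in / (1.0 - 0.176)          # so a fully resolved lane returns e_in
    a_h = 0.176 * a_v
    x = np.arange(n) - n // 2
    lane = np.abs(x) < lane_pix / 2.0

    def band(a, fwhm):
        img = np.where(lane, 10.0 ** (-0.4 * a), 1.0)   # foreground screen
        xk = np.arange(-30, 31)
        psf = np.exp(-0.5 * (xk / (fwhm / 2.3548)) ** 2)
        return np.convolve(img, psf / psf.sum(), mode="same")

    f_v, f_h = band(a_v, fwhm_v), band(a_h, fwhm_h)
    # arm-interarm color excess, measured at the lane center
    return -2.5 * np.log10(f_v[n // 2]) + 2.5 * np.log10(f_h[n // 2])

for width in (1, 2, 4, 16):
    print(width, measured_excess(1.1, width))  # E -> 1.1 as the lane resolves
```

Narrow lanes return only a fraction of the input excess, which is the sense of the solid lines in Figure 5.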
A final factor that introduces systematic uncertainties into the measurement of $`E(VH)`$ is the possible presence of diffuse line emission. As mentioned above, we are resolving the narrow-line region in several of these galaxies. In particular, Mrk 270, Mrk 573, NGC 3362, NGC 7682, and UGC 6100 all show bright, extended line emission in the F606W images that results in “blue”, and often cone-shaped, structures in their $`VH`$ color maps. The bow-shaped emission-line region in Mrk 573 and those of other Seyfert 2s have been previously studied with HST by Falcke, Wilson & Simpson (1998). Capetti et al. (1996) have seen similar structures in HST images of the narrow-line regions in Seyfert 2s. In addition to these discrete structures, there may also be faint, diffuse line-emitting gas spread throughout the circumnuclear regions. Since the F606W filter contains the bright \[O III\] $`\lambda 5007`$ and H$`\alpha `$ lines, significant emission-line contamination is possible. We estimated the contribution of emission lines to a variety of Seyfert 2 spectra with the STSDAS SYNPHOT package. This exercise showed that if these lines have a total equivalent width of 100 Å, they would still only increase the $`V`$band surface brightness by $`\mathrm{\Delta }V=0.07`$ magnitudes. The contribution to the total flux in the F606W filter is small because the filter has an effective width of approximately 1500 Å. The brightest contributors to the $`H`$band surface brightness, the hydrogen Brackett lines, have significantly lower equivalent widths than their visible-wavelength counterparts, and therefore the emission-line contribution to the $`H`$band is expected to be unimportant. In addition to requiring a very high equivalent width to affect the $`V`$band flux, this emission would have to be correlated or anticorrelated with the dust lanes to lower or raise, respectively, the measured $`E(VH)`$.
To summarize our results, our ($`VH`$) color maps of these galaxies exhibit dust lanes with a range in $`E(VH)`$ corresponding to inferred mass surface densities of $`1`$–$`20M_{\mathrm{}}`$ pc<sup>-2</sup>. While we have overestimated the average surface density by measuring the maximum value of $`E(VH)`$, the dust model and resolution effects could offset this as they lead to an underestimate of the total column density. These surface densities imply that these nuclear disks contain at least $`10^6M_{\mathrm{}}`$ of interstellar material in the inner $`100`$–$`200`$ pc. As Seyfert galaxies require mass accretion rates of order $`0.01M_{\mathrm{}}`$ yr<sup>-1</sup> (e.g. Peterson 1997) to fuel their nuclear luminosities, these circumnuclear disks are massive enough to act as potential reservoirs for fueling the AGN.
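The reservoir argument can be checked at the order-of-magnitude level; the specific input values below are illustrative points within the ranges quoted above, not measurements.

```python
import math

# Order-of-magnitude check of the fuel-reservoir argument: a disk of
# radius R and surface density Sigma holds M = Sigma * pi * R^2. The
# inputs span the ranges quoted in the text (Sigma = 1-20 Msun pc^-2,
# R = 100-200 pc).

def disk_mass(sigma, radius_pc):
    return sigma * math.pi * radius_pc ** 2    # in Msun

m_lo = disk_mass(5.0, 100.0)     # ~1.6e5 Msun
m_hi = disk_mass(20.0, 200.0)    # ~2.5e6 Msun

# Time to exhaust a ~1e6 Msun reservoir at the ~0.01 Msun/yr a Seyfert needs:
t_fuel = 1.0e6 / 0.01            # ~1e8 yr
print(f"{m_lo:.2e} {m_hi:.2e} {t_fuel:.0e}")
```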
## 5 Nuclear Spirals and Fueling
Nuclear spirals like those we have shown here have been observed in quiescent spirals (Phillips et al. 1996; Carollo, Stiavelli, & Mack 1998; Elmegreen et al. 1998; Laine et al. 1999), as well as in Seyfert galaxies (Quillen et al. 1999; Regan & Mulchaey 1999). These nuclear spirals are dynamically distinct from the large-scale spiral arms seen in spiral galaxies as they may lie inside the inner Lindblad Resonance of the outer spiral arms (Elmegreen, Elmegreen, & Montenegro 1992) and they may also be shielded by the dynamical influence of the galaxy’s nuclear bulge (Bertin et al. 1989). The nuclear spiral dust lanes appear to differ from the large-scale spiral structure of their hosts in two important ways: they are unlikely to be the result of gravitational instabilities in self-gravitating disks, and their sound speed is likely to be comparable to the orbital speed.
Elmegreen et al. (1998) estimated the value of Toomre’s (1981) $`Q`$ parameter for the nuclear disk in NGC 2207 and determined that the nuclear disk was not self-gravitating. We have applied a similar analysis to the galaxies in our sample after rewriting the expression for $`Q`$ in the form:
$$Q=1.5\times \left(\frac{a}{10\mathrm{km}\mathrm{s}^{-1}}\right)^2\left(\frac{\mathrm{\Sigma }}{M_{\mathrm{}}\mathrm{pc}^{-2}}\right)^{-1}\left(\frac{h}{\mathrm{pc}}\right)^{-1}$$
where $`a`$ is the sound speed, $`\mathrm{\Sigma }`$ is the azimuthally-averaged disk surface density in $`M_{\mathrm{}}`$ pc<sup>-2</sup>, and $`h`$ is the disk scale height in pc. This expression for $`Q`$ assumes that the nuclear disks are in solid–body rotation, as is the case in NGC 2207 (Rubin & Ford 1983) and in the circumnuclear gas disks in galaxies with rapidly rotating cores in the Virgo Cluster (Rubin et al. 1997). We have estimated $`a`$ to be 10 km s<sup>-1</sup>, which is a reasonable value for the inner disks of spirals (e.g. Spitzer 1978; Elmegreen et al. 1998), although some authors report values of $`3`$–$`10`$ km s<sup>-1</sup> for the main disks of spirals (e.g. Kennicutt 1989; van der Kruit & Shostak 1984). We place an upper limit on $`h`$ by assuming that the disks are thinner than the spiral arms are wide, as is true for spiral arms in galaxy disks proper. Most of our estimates of the arm widths are also upper limits, as at their narrowest points nearly all of the nuclear spiral arms are unresolved. Our estimates of $`h,\mathrm{\Sigma }`$, and $`Q`$ are summarized in Table 3.
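The scaling form of $`Q`$ above can be encoded directly; the input values below are illustrative placeholders only, since Table 3 is not reproduced here.

```python
# The scaling form of Toomre's Q used in the text, for a disk in
# solid-body rotation. Inputs are illustrative, not values from Table 3.

def toomre_q(a_kms, sigma, h_pc):
    """Q = 1.5 (a / 10 km s^-1)^2 (Sigma / Msun pc^-2)^-1 (h / pc)^-1."""
    return 1.5 * (a_kms / 10.0) ** 2 / (sigma * h_pc)

print(toomre_q(10.0, 1.5, 1.0))   # 1.0: the marginal-stability boundary
print(toomre_q(5.0, 1.5, 1.0))    # 0.25: Q scales as the sound speed squared
```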
Values of $`Q<1`$ would imply that the spiral arms are formed by gravitational instabilities in self-gravitating disks (e.g. Binney & Tremaine 1987), whereas values of $`Q>1`$ would correspond to spiral arms formed in non-selfgravitating disks by hydrodynamic instabilities. The $`Q`$ values we find for the spiral arms in our Seyfert galaxies range from $`5`$ to $`200`$ (Table 3, column 4). We thus conclude, as did Elmegreen et al. (1998) for NGC 2207, that these nuclear spiral arms do not occur in self-gravitating gas disks. Since $`Q`$ scales as the square of the sound speed, a lower value of $`a`$ could greatly reduce $`Q`$. As most of our estimates for $`Q`$ are significantly above 1, however, $`a`$ would have to be quite small to push our results toward self-gravitating $`Q`$ values. Faster sound speeds, in contrast, would increase $`Q`$ as $`a^2`$. Our estimates of $`\mathrm{\Sigma }`$ are rough values that could vary due to several competing effects. As mentioned above, we may be underestimating the column density in these dust lanes due to scattering and the effects of unresolved dust lanes. In contrast, by measuring $`E(VH)`$ from the maximum arm-interarm $`VH`$ contrast, we may be overestimating the disk surface density. Furthermore, if the spiral arms are shock fronts they may correspond to density enhancements of up to a factor of 4 (for nonradiating, adiabatic shocks) or higher (for radiating shocks; e.g. Spitzer 1978) over the azimuthally-averaged disk surface density. In addition to the uncertainties in $`a`$ and $`\mathrm{\Sigma }`$, our estimate of the scale height is an upper limit, as the spiral arms are generally unresolved. However, decreasing $`h`$ would increase $`Q`$, and the upper limit on $`h`$ thus only strengthens the argument that these disks are not self-gravitating.
The fact that many of these nuclear spirals are multi-arm spirals further supports the notion that they are unlikely to be due to gravitational instabilities in self-gravitating disks (Lin & Shu 1964).
Spiral arms can form through hydrodynamic shocks in gaseous disks that are not self-gravitating. One possibility is that these shocks could be caused by turbulent gas motions driven by the inflow of material from larger radii (Struck 1997). Englmaier & Shlosman (1999) have also shown that shocks in non-selfgravitating nuclear disks can form grand-design, two-arm spiral structure in galaxies with a large-scale stellar bar. Since the orbital speeds in the inner regions of galaxies are of order the sound speed, it is also possible that the arms are formed by acoustic shocks (Elmegreen et al. 1998; Montenegro, Yuan, & Elmegreen 1999). Shocks in these gaseous disks could be a viable means of removing angular momentum and energy from the gas, and thus these nuclear spirals could be signposts of the fueling mechanisms for Seyfert galaxies. While investigations have traditionally looked for large-scale signatures of interaction or bars in the host galaxies, the actual funneling of gas over the last few hundred parsecs into the central black hole may be mediated by the hydrodynamics of non-selfgravitating gas disks on small scales. This mechanism could be common, regardless of whether large-scale bars or interactions transported the gas from large radii to 100’s of parsec scales.
## 6 Summary and Conclusions
These $`H`$-band images and $`VH`$ color maps of the centers of a representative sample of 24 Seyfert 2 galaxies rule out the presence of nuclear bars in all but at most 5 of the 24 galaxies in our sample. While this is not to say that nuclear bars cannot play some role in fueling an AGN if present, they are found in only a minority of Seyferts. This strengthens and extends similar results found with a smaller sample of Seyferts by Regan & Mulchaey (1999). While this sample is composed of only Seyfert 2s, a study of all of the Seyfert 1s and 2s in the CfA Seyfert sample with HST WFPC2 imaging did not reveal an excess of bars among Seyfert 1s relative to Seyfert 2s (Pogge & Martini 2000). We note, however, that the circumnuclear environments of Seyfert 1s are more difficult to study given their strong nuclear PSFs and they have not been imaged in the near-infrared, which was needed to find 2 of the 5 possible bars in our sample.
Our $`VH`$ color maps of the centers of these Seyfert galaxies have shown that 20 of 24 have nuclear spiral structure. Of the 4 that do not show nuclear spiral structure, 2 of these (NGC 4388 and NGC 5033) are nearly edge-on systems and thus the geometry is unfavorable for viewing any kind of nuclear spiral structure. The remaining two, Mrk 266SW and NGC 5929, are both in strongly interacting systems and have very chaotic dust morphologies, perhaps reflecting disordered delivery of host galaxy gas due to the tidal interaction. We have used the $`VH`$ color maps to estimate the amount of material in these spiral arms and have found that the nuclear disks in which they reside contain enough gas to serve as a fuel reservoir for the AGN. The surface densities of these disks, along with our estimates of the disk scale height and surface density, also suggest that these disks are not self-gravitating. The nuclear spiral structure is therefore probably formed by shocks propagating in the disk, via either an acoustic mechanism or other hydrodynamic process, and these shocks could dissipate energy and angular momentum from the gas.
Our observations strongly rule out small-scale nuclear bars as the primary means of removing angular momentum from interstellar gas to fuel the AGN in our representative sample of Seyfert 2s. Instead, we find that the most common morphological features in the centers of AGN are nuclear spirals of dust. These spirals may be caused by shock waves in a disk or may be streamers of mass falling inwards from the innermost orbital resonance. The large fraction of Seyfert 2s with dusty nuclear spirals demonstrates the importance of shocks and gas dynamical effects in removing angular momentum from the gas and feeding this material into the nucleus and hence into the supermassive black hole.
An outstanding question posed by these nearly ubiquitous nuclear spirals is how their host nuclear disks formed. As proposed by Rubin et al. (1997) for nuclear disks in some Virgo Cluster galaxies, we conclude that two likely scenarios are the standard processes previously invoked for fueling AGN: bars and interactions. Both large-scale bars and interactions can remove angular momentum from gas and drive it into the inner 100’s of parsecs, where it settles into a nuclear disk. Shocks or other hydrodynamic effects in these disks then complement the action of bars and interactions by removing angular momentum from this accumulated material and feeding the fuel into the active nucleus.
Two future areas of work are suggested by our results. First, similar observations of a large sample of quiescent spiral galaxies should be obtained to determine if the nuclear spirals found in Seyferts are present in all spirals. This work should also be augmented by spectroscopic data to provide kinematic information on the gas and dust in the inner regions. Second, theoretical predictions of accretion rates from spiral structure in both self-gravitating and non-selfgravitating disks are needed. Answers to these questions are needed before we can understand the role played by dusty nuclear spirals in fueling AGN.
We would like to thank Alice Quillen for many useful discussions and comments and Brad Peterson and Barbara Ryden for their helpful comments on the manuscript. We would also like to thank the referee, Mike Regan, for his helpful comments that have improved and clarified this paper. Support for this work was provided by NASA through grant numbers GO-07867.01-96A and AR-06380.01-95A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Status of VLBI Observations at 1 mm Wavelength and Future Prospects
S. Doeleman<sup>1</sup> and T.P. Krichbaum<sup>2</sup>
<sup>1</sup> NEROC-MIT Haystack Observatory, Westford, MA., USA
<sup>2</sup> Max-Planck-Institut für Radioastronomie, Bonn, Germany
Introduction
One of the main motivations for high angular resolution imaging at wavelengths shorter than 3 mm comes from the need to image the innermost structures of AGN and their jets on scales as close as possible to the Schwarzschild radii of the central supermassive objects. At a wavelength of 1.3 mm ($`230`$ GHz), VLBI observations with transcontinental baselines yield angular resolutions as small as 25 micro-arcseconds ($`\mu `$as). This corresponds to a spatial resolution of 52 light days for 3C 273 (z=0.158, $`H_0=100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0=0.5`$). Since most compact radio sources are self-absorbed at longer wavelengths (or scatter broadened in the case of Sgr A\*), VLBI imaging at 1 or 2 mm wavelength should offer a clear view of the nucleus, less affected by opacity effects.
While VLBI imaging at 3 mm with the CMVA (Coordinated Millimeter VLBI Array) is now fairly standard (maps with dynamic ranges of a few hundred are obtained from experiments involving up to 12 stations), VLBI observations at shorter wavelengths are still limited to single-baseline detection experiments. With the foreseeable detection of 1 mm fringes on short baselines on both sides of the Atlantic, the combination of the European and American sub-arrays can be envisaged as a next step. With a global array operating near $`\lambda =`$1 mm ($`\nu =230`$ GHz) and utilizing phased interferometers (at Plateau de Bure, OVRO, and Hat Creek) as sensitive elements, micro-arcsecond VLBI imaging should become possible within the next few years.
Summary of Previous Experiments
The first VLBI tests at the highest frequencies yet attempted ($`1.3`$–$`1.4`$ mm) were carried out in 1990, 1994, and 1995. These tests were mainly technically driven and were performed to demonstrate the feasibility of 1 mm VLBI. A first experiment performed in 1990 yielded weak (SNR$`=5`$) fringes on 3C 273 on the 845 km baseline OVRO-Kitt Peak (Padin et al., 1990). After this, two VLBI experiments were performed in 1994 and 1995, using the 1150 km baseline between the 30 m MRT at Pico Veleta (Spain) and a single 15 m antenna of the IRAM interferometer on Plateau de Bure (France). On this baseline fringes with a fringe spacing of $`0.2`$–$`0.4`$ mas are obtained. With system equivalent flux densities of SEFD$`=2800`$ Jy for Pico Veleta and SEFD$`=11500`$ Jy for Plateau de Bure, respectively, the single-baseline detection threshold (7 $`\sigma `$) is 1.3 Jy for incoherent averaging and 0.5 Jy if coherent averaging can be applied (see below).
The first of these two experiments was made in December 1994. This experiment was solely technically driven and was performed to demonstrate the feasibility of 1 mm VLBI using the two IRAM instruments. After observations at 86 GHz which were used to determine the station clock offsets, the sources 3C 273 ($`\mathrm{S}_{215\mathrm{G}\mathrm{H}\mathrm{z}}=13.5`$ Jy), 3C 279 ($`\mathrm{S}_{215\mathrm{G}\mathrm{H}\mathrm{z}}=10.5`$ Jy), and 2145+067 ($`\mathrm{S}_{215\mathrm{G}\mathrm{H}\mathrm{z}}=5.6`$ Jy) were observed and detected with signal-to-noise ratios in the range of SNR=$`7`$–$`10`$ (Greve et al., 1995). 1823+568, which had a flux of only 1.5 Jy, was not seen.
A second observation took place in March 1995, using the same antennas and observational setup (MK III mode A, 112 MHz bandwidth). In this experiment, which was of longer duration, a sample of 8 bright AGN and the Galactic Center source Sgr A\* were observed (Krichbaum et al. 1997 & 1998). From this small sample, only the faintest source, 4C 39.25, which had a flux of $`\mathrm{S}_{215\mathrm{G}\mathrm{H}\mathrm{z}}=3.5`$ Jy, was not detected. For the remaining 8 objects (3C 273, 3C 279, 1334-127, 3C 345, NRAO 530, Sgr A\*, 1749+096, 1921-293) clear fringes were found with signal-to-noise ratios in the range of SNR=$`6`$–$`35`$.
In 1994 and 1995 3C 273 and 3C 279 were observed at similar interferometric hour angles (IHA). This facilitates a comparison of their correlated flux densities between the two epochs. Whereas for 3C 279 the total flux density and the visibility amplitudes were similar in both experiments ($`\mathrm{S}_{\mathrm{corr}}=2.4`$–$`2.8`$ Jy at IHA$`=4`$), the correlated flux in 3C 273 (at IHA$`=2`$–$`3`$) increased by a factor of two, from $`\mathrm{S}_{\mathrm{corr}}=0.5`$ Jy in December 1994 to $`\mathrm{S}_{\mathrm{corr}}=1.0`$ Jy in March 1995. On the other hand, the total flux of 3C 273 decreased from 13.5 Jy to 9.2 Jy. This, together with the superluminal motion seen in 3C 273 at longer wavelengths, can be regarded as evidence for structural variations in the jet of 3C 273 at 215 GHz as well.
The highest correlated flux, about 4 Jy, was seen in 3C 279. This corresponds to a visibility (or compactness) of about 40%. For the other sources the visibilities were lower, ranging between 10 and 30%. At present it is unclear if these lower visibilities are due to residual calibration uncertainties, or if they indicate angular resolution effects. All of the sources were observed in snapshot mode for only a limited time range. The beating in the visibility amplitude, which is caused by the mas- to sub-mas-scale structure of the individual sources and which is often quite pronounced in the 3 mm data, could easily cause an underestimate of the correlated flux density, which would then represent only a lower limit to the compactness.
Recent 3 mm maps of 3C 273 (T. Krichbaum et al., this conference) show a one-sided core-jet structure with a compact core of $`80`$ $`\mu `$as size and a brightness temperature $`\mathrm{T}_\mathrm{B}=2.1\times 10^{11}`$ K, close to the theoretically expected inverse Compton limit ($`\mathrm{T}_\mathrm{B}\approx 10^{12}`$ K). Using this measured brightness temperature, the expected source size at 230 GHz would be $`\theta =10.5\sqrt{\mathrm{S}_{\mathrm{Jy}}}`$ $`\mu `$as. With a total flux density of $`\mathrm{S}=10`$ Jy, the expected source size would be $`33`$ $`\mu `$as. This yields a visibility of $`V=0.96`$ or $`\mathrm{S}_{\mathrm{corr}}=9.6`$ Jy at 700 M$`\lambda `$ (Pico – PdBure) and $`V=3.9\times 10^{2}`$ or $`\mathrm{S}_{\mathrm{corr}}=0.4`$ Jy at 6000 M$`\lambda `$ (transatlantic baselines).
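These visibility estimates follow from a circular Gaussian source model, and can be checked numerically. The sketch below uses the standard Gaussian brightness-temperature relation T_B = 1.22e12 S_Jy / (nu_GHz^2 theta_mas^2) K, which reproduces the 10.5 sqrt(S_Jy) microarcsecond scale quoted above; the function names are ours.

```python
import math

# Check of the visibility estimates in the text for a circular Gaussian
# core: size from the brightness temperature, then the Gaussian
# visibility on a given baseline length.

def gaussian_size_mas(s_jy, nu_ghz, t_b):
    return math.sqrt(1.22e12 * s_jy / (nu_ghz ** 2 * t_b))

def visibility(theta_mas, baseline_mlambda):
    theta_rad = theta_mas * 1e-3 * math.pi / (180.0 * 3600.0)
    u = baseline_mlambda * 1e6                  # baseline in wavelengths
    return math.exp(-((math.pi * theta_rad * u) ** 2) / (4.0 * math.log(2.0)))

theta = gaussian_size_mas(10.0, 230.0, 2.1e11)   # ~0.033 mas = 33 uas
print(theta * 1e3)                 # size in micro-arcseconds
print(visibility(theta, 700.0))    # ~0.96 on Pico - PdBure
print(visibility(theta, 6000.0))   # ~0.04 on transatlantic baselines
```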
In order to detect compact sources like 3C 273 on the long transatlantic baselines, a detection sensitivity of $`0.4`$ Jy is needed. There might be sources which are more compact, but they will be fainter. For a 1 Jy source with a size of $`10`$ $`\mu `$as, a correlated flux of $`0.7`$–$`0.8`$ Jy could be expected, relieving the sensitivity requirements by a factor of two. To reach the necessary sensitivity, future 1 mm VLBI will require the participation of antennas with large collecting areas (phased interferometers like Plateau de Bure, OVRO, BIMA, and the future MMA and ALMA), high observing bandwidths ($`\mathrm{\Delta }\nu \geq 256`$ MHz), and the ability to correct for atmospheric phase fluctuations, which, if uncorrected, lead to coherence times that are too short (see Tahmoush & Rogers, this conference).
A 1 mm-VLBI Experiment in February 1999
Past 1mm-VLBI detections and scientific results are encouraging but have, so far, been limited to single baselines. For AGN studies at 1 mm, imaging arrays are needed that include many more antennas. Observations of compact masers at the highest VLBI frequencies require relatively compact arrays with baselines less than $`1G\lambda `$. A group of 1mm-equipped mm-wave dishes in the Southwest United States can potentially deliver a scientifically useful 1mm-VLBI array. This group includes: the Berkeley-Illinois-Maryland Array (Redding, CA), the Owens Valley Radio Observatory (Bishop, CA), the NRAO 12m (Kitt Peak, AZ), and the Heinrich Hertz Telescope (Mt. Graham, AZ).
A 1mm-VLBI experiment was carried out with the above array plus the IRAM 30m on Pico Veleta during the winter season in 1999. Good weather is crucial for experiments at high frequencies, especially since the smaller antenna sizes typical of this array raise detection thresholds. Recording began at 0600 UT on 17 Feb and ended at 1800 UT on 19 Feb. All sites recorded in MKIII compatible modes, with the VLBA sites (Kittpeak, OVRO) using 7 BBCs each 8 MHz wide (56 MHz BW), and all other sites using a full complement of 14 BBCs for a total of 112 MHz bandwidth. The center observing frequency was 230.5 GHz.
All sites other than the HHT routinely participate in CMVA 3mm-VLBI sessions, so special preparations were required at the HHT. A MKIII VLBI electronics rack and a tape recorder were shipped to Mt. Graham a month early and set up for testing. An H-maser was borrowed from the Harvard-SAO and shipped along with the VLBI equipment. Standard phase tests carried out at the HHT confirmed that 1mm test tones injected into the receiver feed recorded properly on tape and could be recovered at the Haystack correlator. As a further check, the CO J=2-1 line at 230.5 GHz was observed towards the cold core L1512, recorded using the VLBI system, and its spectrum generated by autocorrelating the tape at Haystack. Geodetic GPS measurements were made to determine the HHT position to within 1 meter. Personnel from the MPIfR created the important software links from the VLBI field system to the HHT pointing computer.
Target sources included the brightest compact AGN, with emphasis on 3C279 and 3C273B, which were both at $`10`$ Jy. Historically, both have been at higher flux density levels. The compact source SgrA\* was also included, as it has a rising spectrum from 3mm to 1mm.
Bad weather at OVRO and BIMA made phasing those arrays difficult, and we estimate that these sites were effectively unphased during the times of good mutual visibility of bright sources on the VLBI array. Baselines to Pico Veleta, while potentially the most sensitive, were observed at very low elevations from the US antennas, with correspondingly higher Tsys values. The table shows the antenna sensitivities corresponding to the best times of mutual visibility on 3C273B and 3C279.
| Site | Diam | Tsys (K) | SEFD (Jy) |
| --- | --- | --- | --- |
| HHT | 10m | 375 | 22000 |
| Kitt Peak | 12m | 450 | 24000 |
| Pico Veleta | 30m | 300 | 2500 |
| BIMA | $`9\times 6`$m | 700 | $`74000^{*}`$ |
| OVRO | $`4\times 8`$m | 1000 | $`59000^{*}`$ |

$`^{*}`$ Unphased array.
Searching for Fringes
Tapes were shipped to the Haystack correlator and fringes were searched for on all baselines. Searches concentrated on the Kitt Peak-HHT baseline, which observed 3C279 and 3C273B at optimal elevations and during periods of good weather. Station clocks determined using GPS receivers at each site limited fringe searches in delay to a few microseconds, but delay windows of up to $`\pm 28\mu sec`$ were searched. No sources were detected using a combination of coherent and incoherent detection methods.
Sensitivity
An expression for the coherent detection threshold for a single baseline can be written as:
$$D_c=\frac{7\sqrt{\text{SEFD}_1\text{SEFD}_2}}{L\sqrt{2B\tau _c}}$$
(1)
where $`L\approx 0.5`$ is the loss due to 1-bit sampling, B is the bandwidth, and $`\tau _c`$ is the coherent integration time. For the Kitt Peak-HHT baseline, the detection level for a coherence time of 10 seconds is 9.6 Jy. This threshold can be lowered by averaging many coherent segments (Rogers, Doeleman & Moran 1995), which is very useful in the high frequency regime where $`\tau _c`$ can easily be less than 10 seconds. This incoherent detection threshold ($`D_i`$) can be expressed as $`D_i\approx D_c/N^{1/4}`$, where N is the number of coherent segments averaged. For scans of 6.5 minute length, the incoherent detection threshold is lowered to $`D_i=3.2`$ Jy. The obvious question of why there were no detections with a detection threshold well below the source flux densities leads us to consider sources of loss in the VLBI systems.
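As a cross-check of the numbers above, the two thresholds can be evaluated directly. This is a minimal sketch (function and variable names are ours); it assumes the 56 MHz bandwidth common to the VLBA-mode stations on the Kitt Peak-HHT baseline.

```python
import math

def coherent_threshold(sefd1, sefd2, bandwidth_hz, tau_c, loss=0.5, snr=7.0):
    """Single-baseline 7-sigma coherent detection threshold in Jy."""
    return snr * math.sqrt(sefd1 * sefd2) / (loss * math.sqrt(2.0 * bandwidth_hz * tau_c))

def incoherent_threshold(d_c, n_segments):
    """Averaging N coherent segments lowers the threshold roughly as N^(1/4)."""
    return d_c / n_segments ** 0.25

# Kitt Peak (SEFD 24000 Jy) - HHT (SEFD 22000 Jy), 56 MHz, 10 s coherence time.
d_c = coherent_threshold(24000.0, 22000.0, 56e6, 10.0)  # ~9.6 Jy, as quoted
# 39 ten-second segments fit in a 6.5 min scan; the quoted 3.2 Jy figure
# additionally folds in details of the incoherent search statistics.
d_i = incoherent_threshold(d_c, 39)
```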
Test tones traced through the receiver systems at both Kitt Peak and the HHT revealed no more than a 20% signal loss. This loss alone would cause $`D_i`$ to increase by a factor of 1.25. A more severe loss of signal can come from decorrelation due to phase noise on the maser reference. The maser coherence loss is $`\mathrm{exp}(-\sigma ^2/2)`$ with $`\sigma =2\pi \nu \sigma _y(\tau )\tau `$ where $`\sigma _y(\tau )`$ is the Allan standard deviation at the coherence time $`\tau `$ and $`\nu `$ is the observing frequency. Investigation into the performance of the HHT H-maser shows that it may have had an Allan deviation in excess of 8e-14 for a 10 second coherence time, causing a 50% loss in signal. By comparison, the Kitt Peak H-maser, with a deviation of 2.4e-14, has a loss of only 4%. Combining these two sources of loss raises $`D_i`$ to 6 Jy. This leaves even the incoherent detection method with only a marginal chance of detecting the source on this baseline. If we consider other possible factors, such as source resolution on the $`200M\lambda `$ baseline or coherence times less than 10 seconds, then the situation becomes even worse. We conclude that the sum of these losses, combined with uncooperative weather, raised detection thresholds on this baseline above the flux densities of our brightest targets.
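The maser decorrelation figures can be reproduced from the coherence-loss expression above. A sketch (the function name is ours; the Allan deviations are the values quoted in the text):

```python
import math

def maser_coherence(nu_hz, sigma_y, tau_s):
    """Fractional amplitude retained: exp(-sigma^2/2), sigma = 2*pi*nu*sigma_y*tau."""
    sigma = 2.0 * math.pi * nu_hz * sigma_y * tau_s
    return math.exp(-sigma ** 2 / 2.0)

hht = maser_coherence(230.5e9, 8e-14, 10.0)   # ~0.51: roughly a 50% loss
kp = maser_coherence(230.5e9, 2.4e-14, 10.0)  # ~0.94: only a few percent loss
```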
Future Experiments - 2mm
A compromise between the elevated detection thresholds at 230 GHz and the need to explore VLBI at higher frequencies may be to attempt 2mm-VLBI. The sensitivity advantages are clear. The atmospheric opacity is much lower and coherence times are longer than at 1mm. System temperatures decrease while source flux densities rise: at 150 GHz the flux density of 3C273B is 13 Jy (down from its historical level of 18 Jy). Effects of phase noise in the maser reference also decrease. Using 2mm SEFDs of 17500 Jy and 19500 Jy for the HHT and Kitt Peak respectively, we find that $`D_i`$(10 sec)=3.3 Jy with all the above losses accounted for. For the spectral line case, if we assume a line width of 0.5 km/s, then $`D_c`$(10 sec)=100 Jy. A number of SiO masers in evolved stars exceed this flux density in the 2mm range and may be the best class of fringe finders at the higher frequencies. Plans are underway for a 2mm-VLBI test involving HHT-Kitt Peak-Pico Veleta in early 2000.
Acknowledgements
The Feb 1999 observations required a great deal of preparation and hard work from people at the following observatories: BIMA, Haystack, HHT, IRAM, MPIfR, NRAO, and OVRO. Their efforts are crucial to the success of these high frequency experiments and we are grateful for their assistance and dedication.
References
> Greve, A., Torres, M., Wink, J.E., et al., 1995, Astron. Astrophys. Lett., 299, L33.
> Krichbaum, T.P., Graham, D.A., Greve, A., et al., 1997, Astron. Astrophys. Lett., 323, L17.
> Krichbaum, T.P., Graham, D.A., Witzel, A., et al., 1998, Astron. Astrophys. Lett., 335, L106.
> Padin, S., Woody, D.P., Hodges, M.W., et al., 1990, ApJ Let., 360,11.
> Rogers, A.E.E., Doeleman, S.S. & Moran, J.M., 1995, AJ, 109, 1391.
# A supernova remnant coincident with the slow X-ray pulsar AX J1845–0258
## 1. Introduction
It is becoming increasingly apparent that isolated neutron stars come in many flavors besides traditional radio pulsars. In recent years, the neutron star zoo has widened to include $``$10 radio-quiet neutron stars (Brazier & Johnston (1999)), six anomalous X-ray pulsars (AXPs; Mereghetti (1999)) and four soft $`\gamma `$-ray repeaters (SGRs; Kouveliotou (1999)). There is much uncertainty and debate as to the nature of these sources; one way towards characterizing their properties is through associations with supernova remnants (SNRs). An association with a SNR gives an independent estimate of a neutron star’s age and distance, while the position of the neutron star with respect to the SNR’s center can be used to estimate the transverse space velocity of the compact object.
A case in point are the AXPs. Some authors propose that the AXPs are accreting systems (van Paradijs, Taam, & van den Heuvel (1995); Mereghetti & Stella (1995); Ghosh, Angelini, & White (1997)), while others argue that AXPs are “magnetars”, isolated neutron stars with very strong magnetic fields, $`B10^{14}`$ G (Thompson & Duncan (1996); Heyl & Hernquist (1997); Melatos (1999)). However, the association of the AXP 1E 1841–045 with the very young ($``$2 kyr) SNR G27.4+0.0 (Vasisht & Gotthelf (1997)) makes the case that 1E 1841–045 is a young object. Assuming that the pulsar was born spinning quickly, it is difficult to see how accretion could have slowed it down to its current period in such a short time. This result thus favors the magnetar model for 1E 1841–045, and indeed the magnetic field inferred from its period and period derivative, and assuming standard pulsar spin-down, is $`B8\times 10^{14}`$ G.
AX J1845–0258 (also called AX J1844.8–0258) is a 6.97 sec X-ray pulsar, found serendipitously in an ASCA observation of the (presumably unassociated) SNR G29.7–0.3 (Gotthelf & Vasisht (1998), hereafter GV98; Torii et al. (1998), hereafter T98). The long pulse period, low Galactic latitude and soft spectrum of AX J1845–0258 led GV98 and T98 to independently propose that this source is an AXP (a conclusion which still needs to be confirmed through measurement of a period derivative). The high hydrogen column density inferred from photoelectric absorption ($`N_H10^{23}`$ cm<sup>-2</sup>) suggests that AX J1845–0258 is distant; T98 put it in the Scutum arm, with a consequent distance of 8.5 kpc, while GV98 nominate 15 kpc.
Because AX J1845–0258 was discovered at the very edge of the ASCA GIS field-of-view, its position from these observations could only be crudely estimated, with an uncertainty of $`3^{}`$. A subsequent (1999 March) 50 ks on-axis ASCA observation has since been carried out (Vasisht et al. (1999)). No pulsations are seen in these data, but a faint point source, AX J184453.3–025642, is detected within the error circle for AX J1845–0258. Vasisht et al. (1999) determine an accurate position for AX J184453.3–025642, and argue that it corresponds to AX J1845–0258 in a quiescent state. Significant variations in the flux density of AX J1845–0258 were also reported by T98.
The region containing AX J1845–0258 has been surveyed at 1.4 GHz as part of the NVSS (Condon et al. (1998)). An image from this survey shows a $`5^{}`$ shell near the position of the pulsar. We here report on multi-frequency polarimetric observations of this radio shell, at substantially higher sensitivity and spatial resolution than offered by the NVSS. Our observations and analysis are described in §2, and the resulting images are presented in §3. In §4 we argue that the radio shell coincident with AX J1845–0258 is a new SNR, and consider the likelihood of an association between the two sources.
## 2. Observations and Data Reduction
Radio observations of the field of AX J1845–0258 were made with the D-configuration of the Very Large Array (VLA) on 1999 March 26. The total observing time was 6 hr, of which 4.5 hr was spent observing in the 5 GHz band, and the remainder in the 8 GHz band. 5 GHz observations consisted of a 100 MHz bandwidth centered on 4.860 GHz; 8 GHz observations were similar, but centered on 8.460 GHz. Amplitudes were calibrated by observations of 3C 286, assuming flux densities of 7.5 and 5.2 Jy at 5 GHz and 8 GHz respectively. Antenna gains and instrumental polarization were calibrated using regular observations of MRC 1801+010. All four polarization products (RR, LL, RL, LR) were recorded in all observations. To cover the entire region of interest, observations were carried out in a mosaic of 2 (3) pointings at 5 (8) GHz.
Data were edited and calibrated in the MIRIAD package. In total intensity (Stokes $`I`$), mosaic images of the field were formed using uniform weighting and maximum entropy deconvolution. The resulting images were then corrected for both the mean primary beam response of the VLA antennas and the mosaic pattern. The resolution and noise in the final images are given in Table 1. Images of the region were also formed in Stokes $`Q`$, $`U`$ and $`V`$. These images were made using natural weighting to give maximum sensitivity, and then deconvolved using a joint maximum entropy technique (Sault, Bock, & Duncan (1999)). At each of 5 and 8 GHz, a linear polarization image $`L`$ was formed from $`Q`$ and $`U`$. Each $`L`$ image was clipped where the polarized emission or the total intensity was less than 5$`\sigma `$.
In order to determine a spectral index from these data, it is important to ensure that the images contain the same spatial scales. We thus spatially filtered each total intensity image (see Gaensler et al. (1999)), removing structure on scales larger than $`5\mathrm{}`$ and smoothing each image to a resolution of $`15\mathrm{}`$. The spatial distribution of spectral index was then determined using the method of “T–T” (temperature-temperature) plots (Turtle et al. (1962); Gaensler et al. (1999)).
## 3. Results
Total intensity images of the region are shown in Figure 1. At both 5 and 8 GHz, a distinct shell of emission is seen, which we designate G29.6+0.1; observed properties are given in Table 1. The shell is clumpy, with a particularly bright clump on its eastern edge. In the east the shell is quite thick (up to 50% of the radius), while the north-western rim is brighter and narrower. Two point sources can be seen within the shell interior. At 5 GHz, the shell appears to be sitting upon a plateau of diffuse extended emission; this emission is resolved out at 8 GHz.
Significant linear polarization at 5 GHz is seen from much of the shell, particularly in the two brightest parts of the shell on the eastern and western edges. Where detected, the fractional polarization is 2–20%. At 8 GHz, linear polarization is seen only from these two regions, with fractional polarization 5–40%. No emission was detected in Stokes $`V`$, except for instrumental effects associated with the offset of the VLA primary beam between left- and right-circular polarization.
Meaningful T–T plots were obtained for three regions of the SNR, as marked in Figure 1; the spectral index, $`\alpha `$ ($`S_\nu \nu ^\alpha `$), for each region is marked. There appear to be distinct variations in spectral index around the shell, but all three determinations fall in the range $`0.7\alpha 0.4`$.
Two point sources are visible within the field. The more northerly of the two is at $`18^\mathrm{h}44^\mathrm{m}55\stackrel{}{\mathrm{.}}11`$, $`02\mathrm{°}55\mathrm{}36\stackrel{}{\mathrm{.}}9`$ (J2000), with $`S_{5\mathrm{GHz}}=0.8\pm 0.1`$ mJy and $`\alpha =+0.5\pm 0.3`$, while the other is at $`18^\mathrm{h}44^\mathrm{m}50\stackrel{}{\mathrm{.}}59`$, $`02\mathrm{°}57\mathrm{}58\stackrel{}{\mathrm{.}}5`$ (J2000) with $`S_{5\mathrm{GHz}}=2.0\pm 0.3`$ mJy and $`\alpha =0.4\pm 0.1`$. Positional uncertainties for both sources are $`0\stackrel{}{\mathrm{.}}3`$ in each coordinate. No emission is detected from either source in Stokes $`Q`$, $`U`$ or $`V`$.
## 4. Discussion
The source G29.6+0.1 is significantly linearly polarized and has a non-thermal spectrum. Furthermore, the source has a distinct shell morphology, and shows no significant counterpart in 60 $`\mu `$m IRAS data. These are all the characteristic properties of supernova remnants (e.g. Whiteoak & Green (1996)), and we thus classify G29.6+0.1 as a previously unidentified SNR.
### 4.1. Physical Properties of G29.6+0.1
Distances to SNRs are notoriously difficult to determine. The purported $`\mathrm{\Sigma }`$–$`D`$ relation has extremely large uncertainties, and this source is most likely too faint to show H i absorption. So while we cannot determine a distance to G29.6+0.1 directly, we can attempt to estimate its distance by associating it with other objects in the region. Indeed hydrogen recombination lines from extended thermal material have been detected from the direction of G29.6+0.1 (Lockman, Pisano, & Howard (1996)), at systemic velocities of $`+42`$ and $`+99`$ km s<sup>-1</sup>. Adopting a standard model for Galactic rotation (Fich, Blitz, & Stark (1989)), these velocities correspond to possible distances of 3, 6, 9 or 12 kpc, a result which is not particularly constraining.
Nevertheless, G29.6+0.1 is of sufficiently small angular size that we can put an upper limit on its age simply by assuming that it is located within the Galaxy. At a maximum distance of 20 kpc, the SNR is $`27.5\pm 1.5`$ pc across. For a uniform ambient medium of density $`n_0`$ cm<sup>-3</sup>, the SNR has then swept up $`(260\pm 40)n_0`$ $`M_{\mathrm{}}`$ from the ISM which, for typical ejected masses and ambient densities, corresponds to a SNR which has almost completed the transition from free expansion to the adiabatic (Sedov-Taylor) phase (see e.g. Dohm-Palmer & Jones (1996)). Thus expansion in the adiabatic phase acts as an upper limit, and we can derive a maximum age for G29.6+0.1 of $`(13\pm 4)\left(n_0/E_{51}\right)^{1/2}`$ kyr, where $`E_{51}`$ is the kinetic energy of the explosion in units of $`10^{51}`$ erg. For a typical value $`n_0/E_{51}=0.2`$ (Frail, Goss, & Whiteoak (1994)), we find that the age of the SNR must be less than 8 kyr. For distances nearer than 20 kpc, the SNR is even younger. For example, at a distance of 10 kpc, the SNR has swept up sufficiently little material from the ISM that it is still freely expanding, and an expansion velocity of 5000 km s<sup>-1</sup> then corresponds to an age of 1.4 kyr.
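The swept-up mass and Sedov-age figures above follow from standard expressions; the following is our own sketch (constant values and the scaling $`R_{\mathrm{pc}}0.31(E_{51}/n_0)^{1/5}t_{\mathrm{yr}}^{2/5}`$ are our assumptions, and a pure-hydrogen ambient medium is assumed):

```python
import math

PC_CM = 3.086e18   # cm per parsec
M_H = 1.673e-24    # hydrogen atom mass, g
MSUN_G = 1.989e33  # solar mass, g

def swept_mass_msun(radius_pc, n0):
    """Hydrogen mass inside a sphere of the given radius, ambient density n0 cm^-3."""
    volume = (4.0 / 3.0) * math.pi * (radius_pc * PC_CM) ** 3
    return volume * n0 * M_H / MSUN_G

def sedov_age_kyr(radius_pc, n0_over_e51):
    """Invert the Sedov scaling R_pc ~ 0.31 (E51/n0)^(1/5) t_yr^(2/5) for the age."""
    return (radius_pc / 0.31) ** 2.5 * math.sqrt(n0_over_e51) / 1e3

mass = swept_mass_msun(13.75, 1.0)  # ~270 Msun, cf. the quoted (260 +/- 40) n0 Msun
age = sedov_age_kyr(13.75, 0.2)     # ~6 kyr, within the quoted (13 +/- 4) sqrt(n0/E51) kyr
```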
### 4.2. An association with AX J1845–0258?
G29.6+0.1 is a young SNR in the vicinity of a slow X-ray pulsar. If the two can be shown to be associated, and if we assume that AX J1845–0258 was born spinning rapidly, then the youth of the system argues that AX J1845–0258 has slowed down to its current period via electromagnetic braking rather than accretion torque, and that it is thus best interpreted as a magnetar (cf. Vasisht & Gotthelf (1997)). Indeed if one assumes that the source has slowed down through the former process, its inferred dipole magnetic field is $`9t_3^{-1/2}\times 10^{14}`$ G, for an age $`t_3`$ kyr. For ages in the range 1.4–8 kyr (§4.1 above), this results in a field in the range (3–8)$`\times 10^{14}`$ G, typical of other sources claimed to be magnetars.
But are the two sources associated? Associations between neutron stars and SNRs are judged on various criteria, including agreements in distance and in age, positional coincidence, and evidence for interaction. Age and distance are the most fundamental of these, but unfortunately existing data on AX J1845–0258 provide no constraints on an age, and suggest only a very approximate distance of $``$10 kpc (GV98; T98).
The source AX J184453.3–025642 (Vasisht et al. (1999)) is located well within the confines of G29.6+0.1, less than $`40\mathrm{}`$ from the center of the remnant (see Figure 1). Vasisht et al. (1999) argue that AX J1845–0258 and AX J184453.3–025642 are the same source; if we assume that this source is associated with the SNR and was born at the remnant’s center, then we can infer an upper limit on its transverse velocity of $`1900d_{10}/t_3`$ km s<sup>-1</sup>, where the distance to the system is $`10d_{10}`$ kpc. In §4.1 we estimated $`d_{10}/t_3`$ to be 0.3–0.7, and so the inferred velocity falls comfortably within the range seen for the radio pulsar population (e.g. Lyne & Lorimer (1994); Cordes & Chernoff (1998)). Alternatively, if we assume a transverse velocity of $`400v_{400}`$ km s<sup>-1</sup>, we can infer an age for the system of $`<5d_{10}/v_{400}`$ kyr, consistent with the determinations above. There is no obvious radio counterpart to the X-ray pulsar — both radio point sources in the region are outside all of the X-ray error circles. At the position of AX J184453.3–025642, we set a 5$`\sigma `$ upper limit of 1 mJy on the 5 GHz flux density of any point source.
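The velocity limit is simple kinematics and can be checked directly; this is our own sketch (names and constants are ours):

```python
import math

ARCSEC_RAD = math.pi / (180.0 * 3600.0)  # radians per arcsecond
KM_PER_PC = 3.086e13                     # km per parsec
SEC_PER_KYR = 3.156e10                   # seconds per kyr

def transverse_velocity_kms(offset_arcsec, dist_kpc, age_kyr):
    """Mean transverse speed needed to cover the angular offset in the given time."""
    offset_km = offset_arcsec * ARCSEC_RAD * dist_kpc * 1e3 * KM_PER_PC
    return offset_km / (age_kyr * SEC_PER_KYR)

# 40 arcsec offset at 10 kpc, age 1 kyr -> ~1900 km/s, i.e. 1900 d10/t3 km/s.
v = transverse_velocity_kms(40.0, 10.0, 1.0)
```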
We also need to consider the possibility that the positional alignment of AX J184453.3–025642 and G29.6+0.1 is simply by chance. The region is a complex part of the Galactic Plane — there are 15 catalogued SNRs within 5°— and it seems reasonable in such a region that unassociated SNRs and neutron stars could lie along the same line of sight (Gaensler & Johnston (1995)). Many young radio pulsars have no associated SNR (Braun, Goss, & Lyne (1989)), so there is no reason to demand that even a young neutron star be associated with a SNR.
The first quadrant of the Galaxy is not well-surveyed for SNRs, so we estimate the likelihood of a chance association by considering the fourth quadrant, which has been thoroughly surveyed for SNRs by Whiteoak & Green (1996). In a representative region of the sky defined by $`320^{}l355^{}`$ and $`|b|1.5^{}`$, we find 44 SNRs in their catalogue. Thus for the $``$10 radio-quiet neutron stars, AXPs and SGRs at comparable longitudes and latitudes, there is a probability $`1.6\times 10^3`$ that at least one will lie within $`40\mathrm{}`$ of the center of a known SNR by chance. Of course in the present case we have carried out a targeted search towards a given position, and so the probability of spatial coincidence is somewhat higher than for a survey; nevertheless, we regard it unlikely that AX J184453.3–025642 should lie so close to the center of an unrelated SNR, and hence propose that the pulsar and the SNR are genuinely associated.
There is good evidence that magnetars power radio synchrotron nebulae through the injection of relativistic particles into their environment (Kulkarni et al. (1994); Frail, Kulkarni, & Bloom (1999)). The two such sources known are filled-center nebulae with spectral indices $`\alpha 0.7`$, and in one case the neutron star is substantially offset from the core of its associated nebula (Hurley et al. (1999)). In Figure 1, the clump of emission with peak at $`18^\mathrm{h}44^\mathrm{m}56^\mathrm{s}`$, $`02\mathrm{°}57\mathrm{}`$ (J2000) has such properties, and one can speculate that it corresponds to such a source. Alternatively, compact steep-spectrum features are seen in other shell SNRs, and may be indicative of deceleration of the shock in regions where it is expanding into a dense ambient medium (Dubner et al. (1991); Gaensler et al. (1999)).
## 5. Conclusions
Radio observations of the field of the slow X-ray pulsar AX J1845–0258 reveal a linearly polarized non-thermal shell, G29.6+0.1, which we classify as a previously undiscovered supernova remnant. We infer that G29.6+0.1 is young, with an upper limit on its age of 8000 yr. The proposed quiescent counterpart of AX J1845–0258, AX J184453.3–025642, is almost at the center of G29.6+0.1, from which we argue that the pulsar and SNR were created in the same supernova explosion. The young age of the system provides further evidence that anomalous X-ray pulsars are isolated magnetars rather than accreting systems, although we caution that the apparent flux variability of AX J1845–0258 raises questions over both its classification as an AXP and its positional coincidence with G29.6+0.1. Future X-ray measurements should be able to clarify the situation.
There are now six known AXPs, three of which have been associated with SNRs. In every case the pulsar is at or near the geometric center of its SNR. This result is certainly consistent with AXPs being young, isolated neutron stars, as argued by the magnetar hypothesis. If one considers the radio pulsar population, the fraction of pulsars younger than a given age which can be convincingly associated with SNRs drops as the age threshold increases. The age below which 50% of pulsars have good SNR associations is $``$20 kyr, and for several of these the pulsar is significantly offset from the center of its SNR (e.g. Frail & Kulkarni (1991); Frail et al. (1996)). Thus if the SNRs associated with both AXPs and radio pulsars come from similar explosions and evolve into similar environments, this seems good evidence that AXPs are considerably younger than 20 kyr. Indeed all of the three SNRs associated with AXPs have ages $`<`$10 kyr (Gotthelf & Vasisht (1997); Parmar et al. (1998); §4.1 of this paper). While the sample of AXPs is no doubt incomplete, this implies a Galactic birth-rate for AXPs of $`>`$0.6 kyr<sup>-1</sup>. This corresponds to $`(5\pm 2)`$% of core-collapse supernovae (Cappellaro et al. (1997)), or 3%–20% of the radio pulsar population (Lyne et al. (1998); Brazier & Johnston (1999)).
There is mounting evidence that soft $`\gamma `$-ray repeaters (SGRs) are also magnetars (Kouveliotou et al. (1999)). However of the four known SGRs, two (0526–66 and 1627–41) are on the edge of young SNRs (Cline et al. (1982); Smith, Bradt, & Levine (1999)), a third (1900+14) is on the edge of an old SNR (Vasisht et al. (1994)), and the fourth (1806–20) has no associated SNR blast wave (Kulkarni et al. (1994)). This suggests that SGRs represent an older, or higher velocity, manifestation of magnetars than do AXPs.
B.M.G. thanks Bob Sault for advice on calibration. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. B.M.G. acknowledges the support of NASA through Hubble Fellowship grant HF-01107.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555. E.V.G. & G.V.’s research is supported by NASA LTSA grant NAG5–22250.
# Imaging the Haro 6–10 Infrared Companion
## 1 Introduction
Among the low-mass pre-main sequence binary systems in nearby active star-forming regions, there exist a handful in which one of the two stars is reminiscent of a protostar, radiating primarily at infrared wavelengths and faint or undetected in visible light. These objects are referred to as “infrared companions” (IRCs), despite being in most cases more luminous than their primaries. Their bolometric temperatures, which measure the “center of mass” of the spectral energy distribution and are correlated with evolutionary status for young stars (Myers & Ladd 1993), tend to lie in a transition region between true embedded sources on the one hand and classical or weak-lined T Tauri stars on the other. A variety of models have been proposed for the IRCs, but they are generally taken to be dust-shrouded stars which are either less evolved than their primaries and have yet to dissipate their natal envelopes (e.g., Dyck, Simon, & Zuckerman 1982) or are experiencing episodes of enhanced accretion, perhaps due to interactions with the primary or with a circumbinary disk (e.g., Koresko, Herbst, & Leinert 1997; hereafter KHL). The prototype IRC is the companion to T Tauri itself.
Early studies of Haro 6–10 by Elias (1978) revealed a nonstellar photographic appearance, a spectrum displaying prominent forbidden line emission, and photometric variations of a factor $`3`$ at 2.2 $`\mu `$m. The infrared companion was found 1″.2 north of the visible star by Leinert & Haas (1989; hereafter LH), who used slit-scanning speckle interferometry to measure the ratio of the brightnesses of the two stars at wavelengths between 1.65 and 4.8 $`\mu `$m. As in the T Tauri system, the IRC was fainter than the visible star at wavelengths shorter than $`4`$ $`\mu `$m, but brighter in the mid-infrared. The Haro 6–10 IRC is the reddest object studied by KHL, with a bolometric temperature of only 210 K, compared to 490 K for the T Tauri IRC.
In addition to the IRC, LH found evidence of extended emission, in the form of visibility amplitude curves which fell below unity at high frequencies, for scans taken perpendicular to the axis of the binary. By contrast, scans taken along the binary axis showed only the oscillations typical of a pair of pointlike stars. LH argued that this extended emission was probably associated with one of the two stars. With the advent of two-dimensional infrared detectors on large telescopes, together with refined speckle imaging techniques, it has become practical to directly image the components of young binaries to search for faint tertiary components and diffuse circumstellar material. This Letter presents the results of a speckle holographic imaging study of the Haro 6–10 IRC.
## 2 Observations
The new high-resolution images were taken in the K (2.2 $`\mu `$m) photometric band at the 10 m Keck 1 telescope on three nights in November 1997, March 1998, and November 1998 (see Table 1) using the Near-Infrared Camera (Matthews & Soifer 1994). The NIRC Image Converter (Matthews et al. 1996) produced a magnified pixel spacing of 0″.02, approximately Nyquist-sampling the diffraction limit at 2 $`\mu `$m. The observations consisted of thousands of exposures of the visual binary, with integration times of $`0.15`$ sec for each frame. This short exposure time partially “froze” the atmospheric seeing, so that the point-spread function (PSF) consisted of distinguishable speckles. The 1″.2 separation of the binary was large enough that the PSFs did not significantly overlap in frames with good seeing.
Individual frames were calibrated in the standard way by subtracting mean sky frames, dividing by flatfield images, and “fixing” bad pixels. A model was computed for the “bleed” signal which extended along and perpendicular to the readout direction, and this was subtracted from the calibrated frame. For each frame, a measurement of the instantaneous point-spread function (PSF) was made by masking all pixels on the side of the frame toward the IRC, leaving only the primary. Frames in which the instantaneous seeing was too poor for this procedure to cleanly separate the stars were rejected. For the rest, the Fourier power spectrum of the PSF frames, and the cross-spectrum (i.e., the Fourier transform of the cross-correlation) of the masked frame with the unmasked frame, were computed. If the primary star is unresolved, then in principle the ratio of the cross-spectrum to the PSF frame’s power spectrum is the Fourier transform of the diffraction-limited image.
Raw frames produced by NIRC suffer from semi-coherent electronic pattern noise which is typically concentrated in a small regions of the Fourier domain. If not corrected, this noise limits the sensitivity of the holography technique. To identify the contaminated frequencies, a noise frame was constructed by replacing the pixels containing the stars with copies of an empty region near a corner of the field. The power spectrum of the noise frame was computed, and frequencies containing more power in the noise frame than the average over the series of unmasked frames at similar frequencies were marked as bad.
The PSF power and the cross-spectrum at uncontaminated frequencies were accumulated over the whole series of frames. A final image was reconstructed from them, with the use of an apodizing function to suppress high-frequency noise. The image was then rotated to standard orientation. The apodizing function chosen was the product of a Gaussian and a Hanning function. It produced a final image resolution of 81 mas (FWHM).
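The heart of the reconstruction — dividing the accumulated cross-spectrum by the accumulated PSF power spectrum — can be illustrated with a toy simulation. This is our own minimal sketch, not the NIRC pipeline: the scene and speckle model are invented, and the PSF is taken as known exactly rather than estimated by masking one star.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Toy scene: a bright "primary" and a fainter offset "companion".
scene = np.zeros((n, n))
scene[32, 20] = 1.0
scene[32, 44] = 0.3

cross = np.zeros((n, n), dtype=complex)  # accumulated cross-spectrum
power = np.zeros((n, n))                 # accumulated PSF power spectrum

for _ in range(200):
    psf = (rng.random((n, n)) < 0.01).astype(float)  # crude random speckle pattern
    P = np.fft.fft2(psf)
    F = np.fft.fft2(scene) * P  # speckle frame = scene convolved with PSF
    cross += F * np.conj(P)
    power += np.abs(P) ** 2

# The ratio recovers the Fourier transform of the diffraction-limited image
# (a small epsilon guards against division by zero at empty frequencies).
recovered = np.real(np.fft.ifft2(cross / (power + 1e-12)))
```

In the real data an apodizing function is applied before the inverse transform to suppress high-frequency noise; it is omitted here for brevity.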
The resulting images for each of the three nights are presented in Figure 1. The image from each epoch is scaled to the maximum pixel level in the IRC. These images display a dynamic range, measured as the brightness of the peak of the primary star in units of noise, of at least $`2000`$. This is much larger than is typical for a speckle image; the improvement is due to the holography technique’s use of instantaneous PSF measurements instead of relying on a statistical consistency of the atmosphere between separate target and reference-star frames as in normal speckle interferometry. The technique does produce a residual artifact in the form of an apparent ridge of emission which runs approximately midway between the two stars and parallel to the direction of the mask used to make the PSF frames. This is probably due to a small amount of flux from the primary which spills over into the masked region. In addition, in the final epoch there is a narrow strip which extends from the IRC in a direction approximately north and exactly parallel to the detector readout.
A direct K-band image of the region was taken without the image converter, so that the raw field was 38″ ($`5000`$ AU). Frames were taken with the binary in two well-separated locations, and the reduction was done by computing the difference between these frames and pixel-fixing. The resulting image, presented in Figure 2, has much lower resolution but higher flux sensitivity than the holographic image. It shows the two stars surrounded by an arc of nebulosity which curves gently from the primary star.
## 3 Results
The Haro 6–10 infrared companion is clearly resolved in the image taken on each of the three nights. Its peak is marginally resolved in all three images, with a deconvolved FWHM of $`35`$ mas, or 5 AU, along cuts in the East-West direction. The peak is surrounded by a complex and irregular structure, the brightest part of which consists of a nebulous “tail” that extends some 300 mas to the south. A fainter peak, which we designate Haro 6–10 IRC–SW, appears 400 mas to the southwest of the IRC. Finally, a narrow “arm” extends 500 mas westward from a point just north of the peak.
The IRC underwent a significant morphological change during the observations, especially between the second and third epochs. Between the first and third epochs, a new “tail” appears to the north of the peak, reaching an extent of $`200`$ mas. It is fainter but wider than the southern tail. At the same time, IRC–SW dimmed by a factor of $`2`$ compared to the IRC’s peak.
The 2.2 $`\mu `$m brightness of the IRC, measured as a fraction of the brightness of the primary, more than doubled from 0.15 to 0.39 (see Table 1). For comparison, LH found it to be 0.13 in September 1988, and KHL found 0.04 in October 1994.
In order to compare the holographic images with the one-dimensional (1-D) speckle measurements by LH, we projected the holographic images along the position angle perpendicular to 355, computed the 1-D Fourier amplitudes, and divided them by the amplitude of a projected point-source image made with the same apodization function. The first and third epochs show excellent agreement with the K-band amplitude curve of LH, while the middle epoch shows a small ($`5\%`$) depression in the zero-frequency level.
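The projection-and-divide procedure can be sketched numerically. The function below is an illustrative reconstruction (the function name and the tiny test image are ours, not from the paper): it collapses a 2-D image into a 1-D profile and returns normalized Fourier amplitudes; dividing a binary's amplitudes by those of a point source projected with the same apodization then isolates the binary fringe pattern.

```python
import cmath

def projected_amplitudes(image, axis=0):
    """Collapse a 2-D image (list of rows) along one axis and return the
    moduli of its 1-D discrete Fourier transform, normalized so that the
    zero-frequency amplitude is 1."""
    if axis == 0:
        profile = [sum(row) for row in image]        # project rows together
    else:
        profile = [sum(col) for col in zip(*image)]  # project columns together
    N = len(profile)
    amps = [abs(sum(profile[x] * cmath.exp(-2j * cmath.pi * f * x / N)
                    for x in range(N)))
            for f in range(N)]
    return [a / amps[0] for a in amps]

# Two equal point sources separated by N/2 pixels give cosine fringes:
# unit normalized amplitude at even frequencies, zero at odd ones.
img = [[0.0] * 8 for _ in range(8)]
img[0][3] = img[4][3] = 1.0
fringes = projected_amplitudes(img, axis=0)
```

In practice the denominator would be the point-source reference rather than the zero-frequency term, but the normalization shown already reproduces the fringe structure a 1-D speckle amplitude curve measures.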
## 4 Discussion
It would be natural to suppose that the circumstellar material surrounding the IRC may be in the form of an optically-thick disk of gas and dust. Its “peak with tails” morphology in the final epoch qualitatively resembles simulations of a nearly edge-on disk (e.g., Wood et al. 1998), with the southern and northern tails tracing light scattering in the upper regions of the Sunward-facing side of the disk, and the star located somewhere slightly to the west of the emission peak. Such a disk is seen in HK Tauri, in which the secondary star is completely obscured and the disk is traced at visible and near-infrared wavelengths via the starlight it scatters (Stapelfeldt et al. 1998; Koresko 1998). In the case of Haro 6–10, the disk axis would be nearly perpendicular to the line joining the two stars, as one would expect if the disk lies in the orbital plane of the binary, and in contrast to the probable geometry of the HK Tauri system.
The large extinction which would be produced by an edge-on disk is consistent with previous infrared spectroscopic and spectrophotometric results. A deep mid-infrared silicate absorption feature was seen toward the IRC by van Cleve et al. (1994), while no such absorption was seen toward the primary. The low-resolution K-band spectrum measured by Herbst, Koresko, & Leinert (1995) is a featureless continuum except for a molecular hydrogen v = 1–0 S(1) emission line at 2.12 $`\mu `$m. This suggests that the 2 $`\mu `$m light originates primarily in dust, which may be heated either by stellar photons or by gravitational energy released via disk accretion. The lack of a detectable v = 2–1 S(1) line was taken to indicate that the hydrogen is excited in a shock, presumably associated with either accretion or outflow, rather than pumped by ultraviolet photons.
However, the simple edge-on disk picture alone fails to account for several important features of the system. In particular, it is not obviously consistent with the existence of Haro 6–10 IRC–SW and the west-facing arm, or with the lack of a northern tail in the first epochs. If most of the circumstellar mass does reside in a disk, then it appears that the disk may have suffered strong perturbations which have disrupted its outer regions.
The timescale over which the IRC’s morphology changes is too short to correspond to orbital or freefall motions in the material in the outer regions of the nebula, where the orbital period is on the order of centuries. This suggests that the changes are the result of changing illumination of the distant material due, e.g., to shadowing of the central star by material orbiting within $`1`$ AU of the star, or by starspots (e.g., Wood & Whitney 1998).
Additional clues to the structure of the IRC are provided by the outflows associated with the system. A giant Herbig-Haro flow which extends about 1.6 pc (39 arcmin) along a position angle close to 222 was found by Devine et al. (1999). These authors suggest that the flow originates from the IRC, which may be problematic in the context of the disk model, as the outflow’s position angle is some 45 degrees from the sky-projected axis of the putative disk. Movsessian & Magakian (1999) report the discovery of a jet which curves away from the binary at a position angle of 195, although they could not identify its source with either of the stars. This jet is presumably the structure seen in our direct image, in which it appears to be associated with the primary.
If the IRC is the source of the giant H-H flow, then Haro 6–10 IRC–SW lies suggestively along the flow axis. But this object shows no sign of the $`300`$ mas outward motion which would be expected over the $`1`$ yr span of the observations if it moved at the $`200`$ km s<sup>-1</sup> typical of the pattern speed in such a flow (e.g., Eisloffel & Mundt 1992). By contrast, the free-fall velocity would produce an undetectable motion. We conclude that IRC–SW is more likely to be orbiting the IRC or associated with envelope material than to be part of the outflow.
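The quoted ~300 mas of expected motion follows from simple arithmetic. The sketch below assumes a distance of ~140 pc (our assumption, consistent with the 35 mas ≈ 5 AU scale used earlier in the text, rather than a figure stated here):

```python
# Angular motion of a knot moving at v km/s for t years, seen from d parsecs.
KM_PER_AU = 1.495978707e8   # kilometers in one astronomical unit
SEC_PER_YR = 3.15576e7      # seconds in one Julian year

def proper_motion_mas(v_km_s, years, distance_pc):
    au_travelled = v_km_s * years * SEC_PER_YR / KM_PER_AU
    # small-angle rule: 1 AU at 1 pc subtends 1 arcsec
    return 1000.0 * au_travelled / distance_pc

# Pattern speed of ~200 km/s over ~1 yr at the assumed ~140 pc:
print(proper_motion_mas(200.0, 1.0, 140.0))   # about 300 mas
```

A 200 km/s knot covers roughly 42 AU per year, which at 140 pc is indeed about 0.3 arcsec, far above the resolution of these images.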
If IRC–SW were a stellar companion to the IRC, it could easily perturb the IRC’s disk, and any disk of its own would be similarly perturbed, perhaps accounting for its photometric variability. A triple system would likely be unstable to significant orbital evolution or even ejection on timescales shorter than the age of Haro 6–10 (Pendleton & Black 1983). On the other hand, IRC–SW’s surface brightness is roughly consistent with a model in which an isotropically-scattering dusty surface is illuminated by unextincted light from the central star of the IRC, which is taken to have $`T\approx 5400`$ K and $`L\approx 6\mathrm{L}_{\odot }`$ (KHL). The low extinction seen by IRC–SW could result from its position along the outflow axis of the giant H-H flow, which would clear a path through the IRC envelope; the scattering surface would then be either an irregularity in the surface of the outflow cavity or a knot of material within it. Instabilities in the outflow (Devine et al. 1999) might account for the apparent variability of the object.
## 5 Conclusions
The complex structure seen in the Haro 6–10 IRC is in stark contrast to the simple, beautifully regular shape of the nearly edge-on disk which surrounds HK Tauri B. While it would be premature to draw strong conclusions from this about the IRC class as a whole, it does suggest that, at least in this object, the characteristic low infrared color temperature may be the product of reprocessing and scattering in a disk which has been strongly perturbed, at least in its outer regions. The origin of this perturbation is not clear, but possibilities include interactions with the giant Herbig-Haro flow, with residual infalling cloud material, or with a possible star embedded in IRC–SW.
It is a pleasure to thank A. Bouchez for his assistance with the observations, F. Roddier for a useful suggestion regarding the data reduction, and the anonymous referee for suggestions which improved the interpretation of the results. Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. This research was supported by the National Aeronautics and Space Administration.
|
no-problem/9909/astro-ph9909186.html
|
ar5iv
|
text
|
# Mid-IR imaging of Toomre’s Merger Sequence
## 1. Introduction
One of the major steps in the understanding of galaxy evolution was the realization that tails and bridges are the result of galaxy interactions (Toomre & Toomre 1972). The subsequent proposal by Toomre (1977) of using the morphology of the tidal features to create a “merging sequence” of 11 NGC galaxies triggered numerous multi-wavelength studies of those systems (e.g., Hibbard 1995). As a result, several observational characteristics have been tested as alternative ways of assigning an “age” to the interaction event (e.g., Schweizer 1998). Moreover, the discovery by IRAS of the class of luminous IR galaxies, and the later revelation that they are also interacting/merging systems, attracted further attention to this problem (see Sanders & Mirabel 1996 for a review).
## 2. Discussion: The Mid-IR perspective
To improve our knowledge of the properties of interacting galaxies in the mid-IR, we used ISOCAM to perform deep spectral imaging observations in the 5–16 $`\mu m`$ range of a sample including most of the well-known nearby active/interacting systems (Laurent et al. 1999). The analysis of the spectral characteristics of our sample revealed that in galaxies where an active nucleus does not contribute detectably to the mid-IR flux, one can use the flux ratio of the LW3 (12–18 $`\mu m`$) to LW2 (5–8.5 $`\mu m`$) ISOCAM filters as an indicator of the intensity of the star formation activity. This ratio samples the mid-IR continuum emission originating from very small dust grains (radius $`<`$ 10 nm) heated to high temperatures due to their close proximity to OB stars (Désert et al. 1990).
In Fig. 1 we present eight galaxies of our sample found in increasing stages of interaction: from NGC 4676 to Arp 220 and NGC 7252. We observe that the ISOCAM LW3/LW2 diagnostic traces the star formation activity in the galaxies and that it is well correlated with the corresponding IRAS flux ratios (Charmandaris et al. 1999). This suggests that even though the bolometric luminosity of luminous infrared galaxies is found at $`\lambda \gtrsim 40\mu m`$, the study of the mid-IR spectral energy distribution is a powerful tool in understanding their global star formation history.
## References
Charmandaris, V., Laurent, O., Mirabel, I.F., et al. 1999, (in preparation)
Désert, F.-X, Boulanger, F., & Puget, J.L. 1990, A&A, 237, 215
Laurent, O., Mirabel, I.F., Charmandaris, V., et al. 1999, A&A. (submitted)
Hibbard, J., 1995 Ph.D. Thesis, Columbia Univ.
Sanders, D.B., & Mirabel, I.F. 1996, ARA&A, 34, 749
Schweizer, F. 1998, in “Galaxies: Interactions and Induced Star Formation”, Saas-Fee Advanced Course 26, 105
Toomre, A. 1977, in “The Evolution of Galaxies and Stellar Population”, eds. B.M. Tinsley & R.B. Larson, 401
Toomre, A., & Toomre, J. 1972, ApJ, 178, 623
|
no-problem/9909/math-ph9909032.html
|
ar5iv
|
text
|
# Theorem 1
The Levels of Quasiperiodic Functions on the Plane, Hamiltonian Systems and Topology
S.P. Novikov <sup>1</sup><sup>1</sup>1University of Maryland, IPST and Math Department, College Park, MD, 20742-2431 and Moscow 117940, Kosygina 2, Landau Institute for Theoretical Physics, e-mail novikov@ipst.umd.edu; This work is going to appear in the Russian Math Surveys (Uspekhi Math Nauk), 1999, n 6. It is partially supported by the NSF Grant DMS 9704613
Abstract: The topology of levels of quasiperiodic functions with $`m=n+2`$ periods on the plane is studied. For the case of functions with $`m=4`$ periods a full description is obtained for an open everywhere dense family of functions. This problem is equivalent to the study of Hamiltonian systems on the $`n+2`$-torus with a constant rank 2 Poisson bracket. In the cases under investigation we prove that this system is topologically completely integrable in some natural sense, where interesting integer-valued locally stable topological characteristics appear. The case of 3 periods has been extensively studied in recent years by the present author, Zorich, Dynnikov and Maltsev for the needs of solid state physics (”Galvanomagnetic Phenomena in Normal Metals”); the case of 4 periods might be useful for quasicrystals.
Let us consider a periodic function $`f(x),x=(x^1,\mathrm{},x^m),m=n+2,`$ in the space $`R^{n+2}`$ with the group of periods $`Z^m\subset R^m,`$ i.e. a function on the torus $`f:T^m\to R,T^m=R^m/Z^m`$. We may think that the lattice $`Z^m`$ is generated by the standard basis vectors $`e_j=(0,\mathrm{},0,1,0,\mathrm{})`$. For every plane $`R^2\subset R^{n+2}`$ with linear coordinates $`(y^1,y^2)`$ given by the system of linear equations $`l_i=b_i,i=1,\mathrm{},n,`$ where each $`l_i`$ is a linear form on $`R^m`$, we have a restriction $`g(y)`$ of the function $`f(x)`$ to the plane, $`g_l(y)=f(x(y))`$. Such a function $`g_l(y)`$ is called quasiperiodic with $`m`$ periods on the plane.
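For concreteness, here is a minimal worked example of such a restriction (our illustration; a trigonometric sum of this kind also appears in the closing remarks of this note):

```latex
% An explicit quasiperiodic function with m = 4 periods on a plane:
% restrict a Z^4-periodic f to the plane x(y) = x_0 + y^1 e + y^2 e'.
\[
  f(x) \;=\; \sum_{i=1}^{4} \cos(2\pi x^i),
  \qquad
  x(y^1,y^2) \;=\; x_0 + y^1 e + y^2 e',
\]
\[
  g_l(y) \;=\; f\bigl(x(y)\bigr)
  \;=\; \sum_{i=1}^{4} \cos\!\bigl(2\pi (x_0^i + y^1 e^i + y^2 e'^i)\bigr),
\]
% which is genuinely quasiperiodic (not periodic) in y for generic
% direction vectors e, e' with rationally independent components.
```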
Problem: What can be said about Topology of the levels $`g_l(y)=const`$?
The case $`m=3`$ has been extensively studied in connection with the galvanomagnetic phenomena in single-crystal normal metals since the late 1950s by the school of I. Lifshitz (M. Azbel, M. Kaganov, V. Peschanski and others; see the survey and references in ). Topological investigations of this problem were started after the 1982 article in the present author’s seminar (see for the main topological and physical results of our group).
The space $`T^3=R^3/Z^3`$ here is the space of quasimomenta, $`Z^3`$ is the reciprocal lattice, and the function $`f(x)`$ is the dispersion relation. The standard notation for quasimomenta would be $`k`$ or $`p`$, but we use the letter $`x`$ for them here. The most important level is the Fermi level $`f=ϵ_F`$, which defines the Fermi surface in $`R^3/Z^3`$. In a magnetic field $`B`$, ”semiclassical” electrons move in the planes $`R^2`$ orthogonal to $`B`$ in the space of quasimomenta. This family of planes leads to quasiperiodic functions with 3 periods on the planes.
The case $`m=4`$ is the main subject of this work. Our results may be useful for the theory of quasicrystals. By definition, a Rational Plane is one given by two linear equations $`l_1=b_1,l_2=b_2`$ where both linear forms $`l_1,l_2`$ have rational coefficients in the standard basis of the lattice above (with no restrictions on the right-hand sides $`b_1,b_2`$).
###### Theorem 1
There exist nonempty open neighborhoods $`U_j\subset RP_3`$ of the rational directions $`l_j\in U_j,j=1,2`$ such that:
For all directions $`l_j^{}\in U_j,j=1,2`$ the connectivity component of the level $`g_l^{}(y)=const`$ is either compact or belongs to a strip of finite width between two parallel lines in $`R^2`$. Except on a codimension one subset of ”nongeneric levels”, this situation is stable under the variation of all parameters involved (including variation of the directions $`l^{}`$ and of the periodic function $`f`$ in $`R^4`$). The direction of the strip is the intersection of the 2-plane $`l^{}`$ with some 3-dimensional hyperplane $`R^3\subset R^4`$ which is integral in the standard lattice basis above and stable under small variations of parameters.
We may consider this problem from a different point of view. Suppose a constant, degenerate Poisson bracket is given on the torus $`T^4`$, with annihilator (Casimir) generated by two multivalued functions $`l_1^{},l_2^{}`$. The Hamiltonian function $`f(x)`$ generates a flow whose trajectories coincide with our curves, the sections of the level $`M_a^3:f=a`$ by the family $`l^{}`$ of 2-planes $`l_1^{}=b_1,l_2^{}=b_2`$. Remove from the level $`M_a^3`$ all compact nonsingular trajectories (CNST):
$$M_a^3=(CNST)\cup \underset{i}{\bigcup }M_i$$
Now fill in all boundaries of the $`M_i`$ by families of 2-discs in the corresponding 2-planes whose boundaries are the separatrix trajectories. We arrive at piecewise smooth 3-manifolds $`\overline{M}_i^3\subset T^4`$ representing 3-cycles $`z_i\in H_3(T^4,Z)`$. Under the same restrictions as in the previous theorem, we have the following
###### Theorem 2
All cycles $`z_i`$ are nonzero in $`H_3(T^4,Z)`$, equal to each other up to sign, and their sum is equal to zero. Every manifold $`\overline{M}_i^3`$ and corresponding cycle $`z_i`$ is represented as an image of a 3-torus $`T^3\to T^4`$. These cycles are stable under small variations of parameters.
The idea of the proof is the following. Take any pair of rational directions $`l_1,l_2`$. Assume that $`l_1=x^4`$, and that the corresponding levels are tori $`x^4=c`$. Take a small enough generic variation $`l_2^{}`$ of the direction $`l_2`$. The intersections of the hyperplanes $`l_2^{}=b_2`$ with the tori $`T_c^3:x^4=c`$ lead us to a one-parametric family of the previously solved problems about plane sections of the Fermi surfaces $`M_a^3\subset T_c^3`$ in the 3-tori $`T_c^3`$. We may also meet a finite number of singular sections $`M_c^3`$. In the nonsingular sections $`M_c^3`$ there are compact trajectories (i.e. closed in the universal covering space $`R^3`$) and open trajectories lying on 2-tori $`T_{i,c}^2\subset T_c^3`$ nonhomologous to zero in the homology group $`H_2(T^4,Z)`$. All other trajectories are singular limits of these types.
Both these types are topologically stable. So we may have one-parametric family of compact trajectories or one-parametric family of 2-tori. Let this family be generic. The nonsingular compact trajectories remain compact after small perturbations (in particular, after replacing the direction $`l_1`$ by $`l_1^{}`$). They will be removed from our manifold according to the construction above.
One-parametric family of 2-tori $`T_{i,c}^2`$ may have generic singularities. The pair of 2-tori may meet each other in the nonsingular level $`c`$. The structure of this process can be extracted from the arguments given in the work : it is the same picture as one when the pair of 2-tori are meeting each other in the generic boundary point of the stability zone. These tori have the opposite homology classes, so they are homotopic in the torus $`T^3`$. After meeting they leave a tail consisting of the compact orbits. So all our picture is covered by the map $`T^2\times IM_c^3`$. It looks like 2-torus ”turned back” being reflected by this level. Another situation might happen when 2-torus meets a singular level $`M_c^3`$ (i.e. the closed 1-form $`dx_4`$ has a Morse critical point). The local and minima play no role here. We concentrate on the case of the Morse index equal to 2 (the case of index 1 corresponds to the inverse process). The vanishing cycle on the torus $`T_c^2`$ should be homologous to zero in $`T^2`$ because it is homologous to zero in $`T^4`$. By that reason we shall come to the union $`T^2S^2`$ after surgery. However all plane sections of the topological 2-sphere are compact. So our torus passed critical level homotopically unchanged giving raise to the tail of compact orbits. Let us mention that such tails of compact orbits might appear also from the more trivial reasons. Now we perturb the direction $`l_1`$ replacing it by the generic small perturbation $`l_1^{}`$. Cutting off all compact nonsingular sections and filling in all boundary separatrix compact curves by 2-discs in the corresponding plane,we are coming to the natural images of the 3-tori $`T^3\overline{M}^3T^4`$. Here the map is monomorphic in homology groups. After that both our theorems follow from the same arguments as before, for the 3-dimensional case.
Remarks: 1. It became clear after a discussion of the present author with Dynnikov that no generalization of these results is possible for a number of periods greater than 4, for directions not close to the rational ones. 2. For the case of 4 periods our conjecture is that Theorems 1 and 2 are valid for a measure-one family of 2-planes in the Grassmannian, but in the generic case the Hausdorff codimension of the exceptional set is less than one. For the even (and therefore nongeneric) cases like $`\mathrm{\Sigma }_i\mathrm{cos}(x^i)=0`$ these theorems are probably not true. 3. Some generalization of our results for directions close to a rational one is probably possible for any number of periods.
|
no-problem/9909/cond-mat9909165.html
|
ar5iv
|
text
|
# Mean-field solution of the small-world network model
## The continuum model as the <br>small-$`\varphi `$ limit of the discrete model
In this appendix, we rederive Equations (3) and (6) from the behavior of the discrete version of the small-world model to leading order in the shortcut density $`\varphi `$.
Consider a neighborhood of sites which are within distance $`r`$ of a given starting site in the discrete model. By analogy with the continuous case, let $`m`$ be the number of sites on the lattice which are not in this neighborhood and $`n`$ be the number of “gaps” between clusters of occupied sites around the ring. In fact, in the spirit of our mean-field approximation, $`m`$ and $`n`$ should be thought of as the averages of these quantities over all possible realizations of the lattice. This means that they may have non-integer values. Here we treat them as integers for combinatorial purposes, but our formulas are easily extended to non-integer values by replacing the factorials by $`\mathrm{\Gamma }`$-functions.
When we increase $`r`$ by one, the value of $`m`$ decreases for two reasons: first because the gaps between clusters shrink and second because of new sites which are reached by traveling down shortcuts encountered on the previous step. We write
$$m^{}=m-\mathrm{\Delta }m$$
(25)
where
$$\mathrm{\Delta }m=\mathrm{\Delta }m_g+\mathrm{\Delta }m_s,$$
(26)
with the two terms representing the shrinking of gaps and the shortcut contribution respectively.
To calculate $`\mathrm{\Delta }m_g`$, we note that the probability of any particular gap having size $`j`$ is
$$p_j=\frac{\left(\genfrac{}{}{0pt}{}{m-j-1}{n-2}\right)}{\left(\genfrac{}{}{0pt}{}{m-1}{n-1}\right)},$$
(27)
and the average number of such gaps is $`n`$ times this. When we increase $`r`$ by one, gaps of size $`2k`$ or larger shrink by $`2k`$, while gaps smaller than $`2k`$ vanish altogether. Thus
$$\mathrm{\Delta }m_g=n\left[\underset{j=1}{\overset{2k}{\sum }}jp_j+2k\left(1-\underset{j=1}{\overset{2k}{\sum }}p_j\right)\right]=m-\frac{(m-2k)!(m-n)!}{(m-1)!(m-n-2k)!}.$$
(28)
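The closed form in Eq. (28) can be checked against direct summation over the gap-size distribution of Eq. (27). The following verification script is ours, not part of the paper, and is valid for integer parameters with $`m-n-2k\ge 0`$:

```python
from math import comb, factorial, isclose

def delta_m_g_sum(m, n, k):
    """Left-hand expression of Eq. (28): average shrinkage of the unfilled
    region, summed over the gap-size probabilities p_j of Eq. (27)."""
    denom = comb(m - 1, n - 1)
    p = {j: comb(m - j - 1, n - 2) / denom for j in range(1, 2 * k + 1)}
    return n * (sum(j * pj for j, pj in p.items())
                + 2 * k * (1 - sum(p.values())))

def delta_m_g_closed(m, n, k):
    """Right-hand closed form of Eq. (28)."""
    return m - (factorial(m - 2 * k) * factorial(m - n)
                / (factorial(m - 1) * factorial(m - n - 2 * k)))

# The two expressions agree for arbitrary admissible integer parameters.
for m, n, k in [(10, 2, 1), (30, 4, 2), (50, 7, 3)]:
    assert isclose(delta_m_g_sum(m, n, k), delta_m_g_closed(m, n, k))
```

The identity follows from the "hockey-stick" summation of binomial coefficients; for example, with $`m=10,n=2,k=1`$ both sides equal $`34/9`$.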
To calculate the contribution $`\mathrm{\Delta }m_s`$ from the shortcuts, we note that the probability of encountering the end of a shortcut at any given site is $`2/\xi =2k\varphi `$, just as in the continuum case, so the number of new shortcuts encountered when we increase $`r`$ by one is $`2k\varphi \mathrm{\Delta }m`$. $`\mathrm{\Delta }m_s`$ in fact depends on the number of shortcuts encountered on the previous increase in $`r`$, so we need to write $`2k\varphi \mathrm{\Delta }m^{(r-1)}`$. Only those shortcuts which land at one of the $`m-\mathrm{\Delta }m_g`$ unoccupied sites, a fraction $`(m-\mathrm{\Delta }m_g)/L`$ of all landing sites, contribute to $`\mathrm{\Delta }m_s`$, so
$$\mathrm{\Delta }m_s=2k\varphi \mathrm{\Delta }m^{(r-1)}\frac{m-\mathrm{\Delta }m_g^{(r)}}{L}.$$
(29)
Substituting Eqs. (28) and (29) into Eq. (26) we get our complete expression for $`\mathrm{\Delta }m`$. Now we note that, except when the lattice is very nearly full, the number of unfilled sites $`m`$ is of order $`L`$. The number of clusters of filled sites can be no greater than the number of shortcuts on the lattice plus one for the initial cluster around the starting point. Thus $`n\le \varphi L+1`$, and the ratio $`(n-1)/m`$ is a quantity of order $`\varphi `$. Expanding in powers of this quantity and assuming the number of sites $`m`$ to be much greater than $`2k`$ then gives
$$\mathrm{\Delta }m=2kn,$$
(30)
plus terms of order $`\varphi `$. Physically, the reason for the simplicity of this expression is that, to first order in $`(n-1)/m`$, most gaps have size $`2k`$ or larger, and the contribution from new shortcuts to $`\mathrm{\Delta }m`$ can be neglected, since most sites are connected only to their local neighbors.
The change in the value of $`n`$ has also two contributions. First, the number of gaps increases when a shortcut creates a new cluster, and divides a gap into two new ones. As we have already shown this happens $`\mathrm{\Delta }m_s`$ times on average when we increase $`r`$ by one, where $`\mathrm{\Delta }m_s`$ is given by Eq. (29). Second, gaps disappear when their edges meet. When we increase $`r`$ by one, a gap will close if its size is $`2k`$ or less. Thus the change in $`n`$ is
$$n^{}=n+\mathrm{\Delta }n,$$
(31)
where
$$\mathrm{\Delta }n=\mathrm{\Delta }m_s-n\underset{j=1}{\overset{2k}{\sum }}p_j=\mathrm{\Delta }m_s-n\left[1-\frac{(m-n)!(m-2k-1)!}{(m-1)!(m-n-2k)!}\right],$$
(32)
where $`\mathrm{\Delta }m_s`$ is given by Eq. (29). Expanding to lowest order in $`(n-1)/m`$, taking $`m\gg 2k`$ again, and combining the result with Eq. (29) gives
$$\mathrm{\Delta }n=\frac{4k^2\varphi mn}{L}-\frac{2kn(n-1)}{m}.$$
(33)
Changing Eqs. (30) and (33) from difference equations to differential ones and dividing by $`L`$ to transform from $`m`$ and $`n`$ to $`\mu `$ and $`\nu `$ gives
$`{\displaystyle \frac{d\mu }{dr}}`$ $`=`$ $`-2k\nu ,`$ (34)
$`{\displaystyle \frac{d\nu }{dr}}`$ $`=`$ $`4k^2\varphi \mu \nu -{\displaystyle \frac{2k\nu (\nu -1/L)}{\mu }}.`$ (35)
Recalling that $`\xi =1/(k\varphi )`$, we can see that these equations are identical to Equations (3) and (6).
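As a sanity check, Eqs. (34)-(35) can be integrated numerically. The forward-Euler sketch below is ours; it assumes the natural initial condition of a single occupied site, $`\mu =(L-1)/L`$ and $`\nu =1/L`$, and simply confirms that the unfilled fraction decreases while the gap density grows in the early regime:

```python
def integrate_mean_field(L, k, phi, r_max, dr=1e-3):
    """Forward-Euler integration of d(mu)/dr = -2k*nu and
    d(nu)/dr = 4k^2*phi*mu*nu - 2k*nu*(nu - 1/L)/mu."""
    mu, nu = (L - 1) / L, 1.0 / L
    r = 0.0
    while r < r_max and mu > 0:
        dmu = -2 * k * nu
        dnu = 4 * k**2 * phi * mu * nu - 2 * k * nu * (nu - 1.0 / L) / mu
        mu, nu = mu + dmu * dr, nu + dnu * dr
        r += dr
    return mu, nu

# L = 10^4 sites, k = 1, shortcut density phi = 0.01 (so xi = 100):
mu, nu = integrate_mean_field(L=10_000, k=1, phi=0.01, r_max=50.0)
```

For these parameters the neighborhood is still far from filling the ring at $`r=50`$, with $`\mu `$ only a few percent below 1, consistent with the growth being exponential on the scale $`\xi `$.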
|
no-problem/9909/quant-ph9909072.html
|
ar5iv
|
text
|
# Nonlocal Aspects of a Quantum Wave
## I INTRODUCTION
There are literally thousands of papers about nonlocality in quantum theory. However, there are still some aspects of nonlocality which have not been fully explored and the connection between various aspects have not been clarified. In this paper we will analyze particular nonlocal aspects which are different for quantum waves of bosons and fermions. This is a development of ideas originated in the works of one of us . In order to put these nonlocality aspects in the proper perspective we will give a brief review of other aspects of nonlocality of quantum theory.
An important nonlocality aspect which will not be discussed in this paper is related to the concept of nonlocal variables. Measurements of nonlocal variables cannot be reduced to measurements of local variables . Probably the simplest example of a nonlocal variable is the sum of spin components of two separated spin-$`\frac{1}{2}`$ particles, $`\sigma _{Az}+\sigma _{Bz}`$. According to the postulates of the quantum theory, if a system is in an eigenstate of a measured variable, ideal measurement of this variable should not alter this eigenstate. For example, the singlet state of the two spins, frequently named the Einstein-Podolsky-Rosen (EPR) state,
$$|\mathrm{\Psi }_{EPR}=\frac{1}{\sqrt{2}}(|\mathrm{\uparrow }_A|\mathrm{\downarrow }_B-|\mathrm{\downarrow }_A|\mathrm{\uparrow }_B),$$
(1)
is an eigenstate of the operator $`\sigma _{Az}+\sigma _{Bz}`$ with an eigenvalue $`0`$. Thus, measurement of $`\sigma _{Az}+\sigma _{Bz}`$ must leave state (1) unchanged. Note that measurements of local variables, $`\sigma _{Az}`$ and $`\sigma _{Bz}`$, invariably change the state.
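This eigenvalue statement is easy to verify directly. The short script below is an illustration of ours, not from the paper: it builds $`\sigma _{Az}+\sigma _{Bz}`$ as a 4×4 matrix in the basis |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩ and applies it to the singlet.

```python
import math

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

sigma_z = [[1, 0], [0, -1]]   # Pauli z
I2 = [[1, 0], [0, 1]]

# sigma_Az + sigma_Bz acting on the two-spin Hilbert space
S = [[a + b for a, b in zip(ra, rb)]
     for ra, rb in zip(kron(sigma_z, I2), kron(I2, sigma_z))]

singlet = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]
result = matvec(S, singlet)   # every entry vanishes: eigenvalue 0
```

In this basis $`\sigma _{Az}+\sigma _{Bz}`$ is diagonal with entries (2, 0, 0, -2), so the singlet, supported on the middle two components, is annihilated exactly.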
Some of the eigenstates of the nonlocal variable $`\sigma _{Az}+\sigma _{Bz}`$ are entangled states. It is interesting that there are nonlocal variables which have only product-state eigenstates. They are nonlocal in the sense that they cannot be measured using only local measurements in the space-time regions A and B . Moreover, nonlocal variables with product-state eigenstates have recently been found which cannot be measured even when measurements are performed at different times in the space locations A and B and unlimited classical communication between the sites is allowed.
Very important nonlocal variables are modular variables . Many surprising effects related to evolution of spatially separated systems can be effectively analyzed using them. The dynamical equations of modular variables are nonlocal.
However, the nonlocality issues related to nonlocal variables are intertwined: it is not easy to separate which part of the nonlocality in the dynamics is due to the intrinsic nonlocality of the quantum world and which part is due to the nonlocality introduced by the definition of the variable. In this paper we limit ourselves to the analysis of relations between results of measurements of local variables.
The plan of the paper is as follows. In Section II we introduce the basic framework of our analysis. In Sections III-V we discuss three types of nonlocality. This discussion provides the frame of reference for the analysis of nonlocality. In Section VI we give a more detailed explanation of the framework. Following this preparatory introduction, we analyze the nonlocality of the boson quantum wave in Section VII and the nonlocality of the fermion quantum wave in Section VIII. Section IX is devoted to an apparent causality paradox arising from nonlocality of the boson wave. In Section X we discuss a related issue of collective measurement which is relevant mostly for the fermion quantum wave. Finally, in Section XI we summarize the main results of the paper.
## II THE FRAMEWORK OF THE ANALYSIS
The formalism of non-relativistic quantum theory allows introducing arbitrary Hamiltonians, in particular, Hamiltonians corresponding to nonlocal interactions. However, such interactions have not been observed in experiments. In the framework of our analysis of nonlocality we will assume that the Hamiltonian describes only local interactions. This is a basic assumption of our analysis.
Any wave in space is, in some sense, a nonlocal object. A classical wave, however, can be considered as a collection of local properties. What makes the quantum wave genuinely nonlocal is that it cannot be reduced to a collection of local properties. In order to analyze this aspect of a quantum wave we will concentrate on a particular simple case: a quantum wave which is an equal-weights superposition of two localized wave packets in two separate locations:
$$|\mathrm{\Psi }=\frac{1}{\sqrt{2}}(|a+e^{i\varphi }|b).$$
(2)
We will analyze various simultaneous (in a particular Lorentz frame) measurements performed in these two locations; see Fig. 1. We will denote by A and B the space-time regions of these measurements. The wave packet $`|a`$ is localized inside the spatial region of A and the wave packet $`|b`$ is localized inside the spatial region of B.
Fig. 1. Space-time diagram of the measurements performed on the quantum wave (2).
In this paper we will show that there is a profound difference in the nonlocal properties of the quantum wave of the form (2) between fermion and boson particles. The boson state leads to statistical correlations between results of measurements in A and in B that cannot be explained by local classical physics. The fermion state does not lead to such correlations but it has a different nonlocality aspect. The fermion quantum state cannot be measured using only local measurements in A and B even if we are given an ensemble of results of measurements performed on identical particles in the state (2). In particular, the relative phase $`\varphi `$ of the fermion state does not lead to locally measurable effects. This phase has a physical meaning: it influences the result of interference experiments in which the parts of the quantum state in A and in B are brought together. Existence of a physical quantity which does not manifest itself through local measurements is the nonlocality aspect of a fermion wave. In contrast, a boson state can be found from the ensemble of results of local measurements: it can be identified from the nonlocal correlations mentioned above.
## III NONLOCALITY OF THE COLLAPSE OF A QUANTUM STATE
In a situation in which a particle (boson or fermion) is described by the state (2), neither region, A nor B, can separately be described by a pure quantum state. By introducing the vacuum states $`|0_A`$ and $`|0_B`$ which describe the regions A and B without the particle, we can rewrite the state (2) in the following form
$$|\mathrm{\Psi }=\frac{1}{\sqrt{2}}(|1_A|0_B+e^{i\varphi }|0_A|1_B),$$
(3)
where $`|1_A|a`$ and $`|1_B|b`$. This form allows us to write down the complete quantum description of region B (as well as region A) by means of the density matrix
$$\rho _0=\left(\begin{array}{cc}\frac{1}{2}\hfill & 0\\ 0& \frac{1}{2}\hfill \end{array}\right).$$
(4)
In the framework of standard quantum theory, a measurement instantaneously collapses the quantum state of a system. Thus, an action in A can change the density matrix in B. After a measurement of the projection operator in A, i.e., after observing whether the particle is in A, the density matrix in B is changed instantaneously to the density matrix of one of the pure states:
$$\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)\mathrm{or}\left(\begin{array}{cc}0& 0\\ 0& 1\end{array}\right),$$
(5)
in anti-correlation to the corresponding density matrices in A.
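The density matrices (4) and (5) follow from a partial trace over region A. The sketch below is ours; it assumes the basis ordering |0_A 0_B⟩, |0_A 1_B⟩, |1_A 0_B⟩, |1_A 1_B⟩, computes $`\rho _B`$ for the state (3) and for the two post-measurement alternatives, and incidentally shows that $`\rho _0`$ is independent of the phase $`\varphi `$:

```python
import cmath, math

def rho_B(psi):
    """Partial trace over A of |psi><psi| for a two-mode state, with basis
    ordering |0_A 0_B>, |0_A 1_B>, |1_A 0_B>, |1_A 1_B>."""
    rho = [[psi[i] * psi[j].conjugate() for j in range(4)] for i in range(4)]
    return [[rho[0][0] + rho[2][2], rho[0][1] + rho[2][3]],
            [rho[1][0] + rho[3][2], rho[1][1] + rho[3][3]]]

phi = 0.7   # arbitrary relative phase
psi = [0.0, cmath.exp(1j * phi) / math.sqrt(2), 1 / math.sqrt(2), 0.0]

before = rho_B(psi)                       # diag(1/2, 1/2) for any phi
found_in_A = rho_B([0.0, 0.0, 1.0, 0.0])  # particle seen in A -> B is empty
not_in_A = rho_B([0.0, 1.0, 0.0, 0.0])    # A found empty -> particle in B
```

That `before` carries no trace of $`\varphi `$ is exactly the point developed later for fermions: the relative phase of (3) produces no locally measurable effect in B alone.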
According to the collapse interpretation, the measurement in A changes the state of affairs in B. Before the action in A the outcome of a possible measurement of the projection operator in B was undetermined not only to the observer in B, but to all. Nothing in nature could give an indication about the outcome of the experiment. The outcome is genuinely random with probability $`\frac{1}{2}`$ both for finding region B empty and for finding the particle there. After the measurement in A, the observer in B still does not know the outcome, but nature (in particular, the observer in A) has this information: the probabilities for the results of the measurement in B change to either 1 and 0 or to 0 and 1 according to the outcome in A.
There is no other example in physics in which a local action changes the state of affairs in a space-like separated region. Thus, this aspect of nonlocality provides an argument in favor of adopting one of the interpretations which does not have the collapse of a quantum state. We now briefly describe these interpretations.
According to the pragmatic approach , quantum theory is limited to providing a recipe for predicting probabilities in quantum experiments, i.e. frequencies of the outcomes in the experiments. In this approach the density matrix is a statistical concept. An observer in B, who does not know which outcome is obtained in A, considers the mixture of the two possibilities (5) as described by the statistical density matrix $`\rho _0`$ even after the measurement in A.
The causal interpretation of Bohm has no collapse and therefore it lacks the nonlocality aspect of an instantaneous change of a quantum state. The result of the measurement of projection operators on region B is predetermined by a “Bohmian position” and, therefore, the measurement in A changes nothing in B. For a single particle, Bohmian theory is a local hidden variable theory which completes quantum mechanics without contradicting the statistical predictions of the latter. However, for systems consisting of more than one particle, the evolution of “Bohmian positions” of the particles is nonlocal. The Bohmian theory is nonlocal in a robust sense: an action in A can change the outcome in B. For example, consider the EPR state of two spin-$`\frac{1}{2}`$ particles (1). Consider Bohmian positions which are such that if a particular $`\sigma _z`$ measurement is performed on either particle alone, it must yield $`\sigma _z=1`$. However, if these $`\sigma _z`$ measurements are performed on both particles, the results will be different: the earlier measurement of $`\sigma _z`$ in A will change the outcome of the subsequent measurement in B to $`\sigma _z=-1`$. The details of this example are given in Ref. .
The non-collapse interpretation which one of us (L.V.) finds most appealing is the many-worlds interpretation (MWI) . In the physical universe, due to the measurement in A, the quantum state of the particle and the measuring device in A changes in the following way:
$`{\displaystyle \frac{1}{\sqrt{2}}}(|1_A|0_B+e^{i\varphi }|0_A|1_B)|\mathrm{ready}_{MD_A}`$ (6)
$`\rightarrow {\displaystyle \frac{1}{\sqrt{2}}}(|1_A|0_B|\mathrm{click}_{MD_A}+e^{i\varphi }|0_A|1_B|\mathrm{no}\mathrm{click}_{MD_A}),`$ (7)
but the density matrix in B is still $`\rho _0`$. Note that, relative to an observer in A, who belongs to a world with a particular reading of the measuring device, the density matrix of the particle in B is that of one of the pure states (5). Only from the point of view of an external observer, who is not correlated to a particular outcome in A, is the density matrix in B unchanged.
If we now add an observer in B who measures the projection operator there, then in A there is a mixture of two worlds with and without the particle in A and, similarly, in B there is a mixture of two worlds with and without the particle in B. These mixtures were created locally by the decisions of the observers to make these particular measurements. What remains nonlocal in this picture are the “worlds”: the observer in A who found the particle, in his travel to B, will meet there the observer that has not found the particle, and vice versa in the other world.
One of us (Y.A.) strongly prefers an interpretation which does not require a multitude of worlds. The two-state vector formalism of quantum theory allows a covariant description of the collapse. This picture suggests a radical change in the concept of time which will avoid statements made above such as: “According to the collapse interpretation, the measurement in A changes the state of affairs in B.” These ideas will be presented elsewhere.
## IV NONLOCALITY OF CORRELATIONS
In the framework of standard quantum theory the (anti)correlations between finding particles in the two regions A and B described above are nonlocal in the sense that the theory does not yield a causal explanation for them. The complete quantum description does not specify the results of measurements and it does not yield a local causal explanation for these correlations. One might imagine that quantum theory can be completed by a deeper theory which will provide a local causal explanation for the results of measurements. In fact, the Bohmian theory mentioned above provides a local explanation for the anti-correlations in finding the particle in the regions A and B, but for some other experiments performed in these space-like separated regions it is impossible to find a local hidden variable theory. In particular, the statistics of the results of spin measurements performed on two separated spin-$`\frac{1}{2}`$ particles in a singlet state (the setup for which Bohmian theory is not local) cannot be explained by a local hidden variables theory. This is the content of the celebrated Bell-inequalities paper .
There are numerous proofs that quantum correlations cannot have local causes. We present here one more argument of this kind inspired by the work of Mermin . However, the reader can choose any other proof of this statement in order to proceed with the line of argumentation of this paper.
The argument presented here assumes the principle of counterfactual definiteness , i.e., that in any physical situation the result of any experiment which can be performed has a definite value. We will analyze again the EPR state (1). Consider measurements of the spin components in $`N+1`$ directions for the particle in A and in $`N`$ different directions for the particle in B. These directions are in the $`\widehat{x}\widehat{z}`$ plane and they are characterized by the angle $`\theta _i`$ with respect to the $`\widehat{z}`$ axis,
$$\theta _i\equiv \frac{i\pi }{2N},i=0,1,\dots ,2N.$$
(8)
Note that the measurement in the direction $`\theta _{2N}(=\pi )`$ is physically equivalent to the measurement in the direction $`\theta _0(=0)`$, but the result has to be multiplied by $`-1`$, i.e., $`\sigma (\pi )=-\sigma (0)`$.
Spin measurement of one particle in a given direction (effectively) collapses the spin state of the other particle to the opposite direction and, therefore, quantum theory predicts the same probability for all the following relations between the results of measurements, if performed:
$`\sigma _A(\theta _{2n})=-\sigma _B(\theta _{2n+1}),`$ (9)
$`\sigma _A(\theta _{2n+2})=-\sigma _B(\theta _{2n+1}),`$ (10)
where $`n=0,1,\dots ,N-1`$. The probability is
$$p=\mathrm{cos}^2(\frac{\theta _{i+1}-\theta _i}{2})=\mathrm{cos}^2(\frac{\pi }{4N}).$$
(11)
From the principle of counterfactual definiteness and the locality assumption, according to which local measurements yield the same outcomes independently of what has been measured in the other location, it follows that identical expressions in the equations (9) and (10) must correspond to equal values. Thus, we can use all these $`2N`$ equations together. The correctness of all the equations leads to a contradiction. Indeed, we obtain $`\sigma _A(\theta _0)=\sigma _A(\theta _{2N})`$ contrary to the fact that these expressions represent the same measurement in opposite directions: $`\sigma _A(0)=-\sigma _A(\pi )`$. Therefore, at least one out of the $`2N`$ equations (9) and (10) must fail to be satisfied. On the other hand, irrespective of what correlations (compatible with quantum mechanics) follow from a hidden variable theory, the probability that at least one of these relations fails to be satisfied cannot be more than the probability that a single one fails multiplied by the number of equations:
$$\mathrm{prob}(\mathrm{fail})\le 2N(1-p)=2N(1-\mathrm{cos}^2(\frac{\pi }{4N})).$$
(12)
This expression, however, is smaller than 1 even for $`N=2`$ and for large $`N`$ it goes to zero as $`\frac{\pi ^2}{8N}`$.
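The numbers quoted here are easy to verify. A short numerical check (a Python sketch; the sampled values of $`N`$ are arbitrary):

```python
import numpy as np

def failure_bound(N):
    # Bound (12): prob(fail) <= 2N(1 - p), with p = cos^2(pi/4N) from (11)
    p = np.cos(np.pi / (4 * N)) ** 2
    return 2 * N * (1 - p)

# Already smaller than 1 for N = 2, although at least one relation must fail
print(failure_bound(2))
# For large N the bound decays like pi^2 / (8 N)
for N in (10, 100, 1000):
    print(N, failure_bound(N), np.pi ** 2 / (8 * N))
```

Since the bound is strictly below 1 while the chain argument forces at least one relation to fail with certainty, no local hidden variable model can reproduce the quantum probabilities (11).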
Recently, Greenberger, Horne and Zeilinger (GHZ) have found an even more robust example (improved by Mermin ) of such nonlocality. While in our example we have several relations which, according to quantum theory, each hold with high probability, in spite of the fact that they cannot all be true, in the GHZ example we have four relations which must be true with probability 1, but, nevertheless, they cannot all be true together. However, in the GHZ (Mermin) example we have to consider three, instead of two, space-like separated regions.
Note that the Bell and the GHZ arguments do not hold without the principle of counterfactual definiteness, i.e., it is not applicable in the framework of the many-worlds interpretation in which, in general, quantum measurements do not possess single outcomes.
## V THE AHARONOV-BOHM TYPE NONLOCALITY
Another nonlocality aspect of quantum theory is related to the Aharonov-Bohm (AB) effect. The effect has a topological basis. The wave-function of a particle enters two space regions tracing out trajectories in space-time which start and end together. An interference pattern which depends upon a field is observed in spite of the fact that locally, inside these regions, it is impossible to make measurements which can specify the result of the interference experiment. The main aspect of the effect is that it exists even when there is no field inside the regions during the whole time of the experiment.
In this paper we consider measurements in two space-time regions. This is different from the AB effect for which a closed trajectory in space is required. What is relevant to our discussion is the feature of a particle inside the two space-time regions A and B which will eventually be manifested in the results of the interference experiment. The AB nonlocality is the existence of a physical property (a property which has observable consequences) which does not have any manifestation in local measurements.
A simple example is a particle wave-packet which splits into a superposition of two wave packets (2) and is later brought back again to the same region for an interference experiment. This can be achieved in a one-dimensional model of a wave packet arriving at a barrier at time $`t_0`$; see Fig. 2a. The barrier is such that the particle has probability $`\frac{1}{2}`$ to pass through and probability $`\frac{1}{2}`$ to be reflected. Two reflecting walls at equal distance from the barrier return the two wave packets back to the barrier at the same time, and the result of the interference experiment is observed by finding the particle on the left or on the right side of the barrier at a later time. The time-dependent (scalar) AB effect is obtained by changing the relative potential between the two parts of the wave during the time they are separated. For a charged particle this can be achieved by moving two large oppositely charged parallel plates located between the wave packets; see Fig. 2b. The two plates are placed originally one on top of the other, i.e., there is no charge distribution and, therefore, there is no electric field anywhere. The plates are then moved a short distance apart and then they are brought back. We will call such an operation “opening a condenser”.
Fig. 2. Scalar Aharonov-Bohm effect interference experiment. a). One-dimensional interference experiment. The particle in the wave packet $`|in`$ splits at the barrier into a superposition of the two wave packets, $`|a`$ and $`|b`$, which are reflected from the walls and reunited to interfere at the barrier. b). Parallel-plate condenser with charged plates, originally one on top of the other, is opened (by moving the plates apart) for a short time while the wave packets $`|a`$ and $`|b`$ are far apart. This operation introduces change in the electric potentials between the locations of $`|a`$ and $`|b`$ which generates the AB phase.
A naive answer to the question, “What is the nonlocal feature of the two regions A and B?” (the feature of the two parts of the wave after they are separated) would be the quantum phase $`\varphi `$ appearing in the equation (2). Indeed, we will argue, discussing fermions in Section VIII, that in certain circumstances the quantum phase is a nonlocal feature in the sense that it cannot be found through local experiments in A and in B. However, the statement is not correct for bosons. Moreover, the phase is not a gauge invariant concept. The physical effect of interference is of course gauge invariant since it is a topological property of the whole trajectory. Still, there is a property of the system in A and B which specifies the final outcome of the interference experiment given fixed circumstances. The quantum phase does characterize this property provided we are careful enough to fix the gauge in the problem.
This and preceding sections described nonlocality aspects which are very different: here we discuss an observable property of a system in two locations which does not have any local manifestation, while in the previous section we discussed results of local measurements which do not allow local-cause explanation. It is possible to perform analysis of these nonlocalities using different terms, such as local action, separability, etc. Then the differences between the nonlocalities discussed in the two sections might not be as sharp as stated above . However, such analysis strongly depends on the interpretation of quantum theory and is less helpful for the purpose of the present paper.
## VI THE DETAILED FRAMEWORK OF THE ANALYSIS
Our goal in this paper is to perform an analysis of nonlocal aspects of the quantum state (2). The main question is: “What are the physical consequences of the presence of this quantum wave in the space-time regions A and B?” One of the questions is: “Can we find the quantum phase $`\varphi `$ through local measurements in A and B?” In order to be able to make such analysis we have to specify exactly the meaning of space-time regions A and B. Are the positions of A and B fixed relative to each other or are they fixed relative to an external reference frame? Are there fixed directions in A and B such that measuring devices can be aligned according to them? Is the time in A and B defined relative to local clocks, or relative to an external clock? What are the measuring devices which are available in A and B? All these questions are relevant. We have to specify what is given in A and B prior to bringing the quantum wave there in order to distinguish effects related to the quantum wave from the effects arising from our preparation and/or definition of the sites A and B.
We make the following assumptions:
(i) There is an external inertial frame which is massive enough so that it can be considered classical.
(ii) There is no prior entanglement of physical systems between the sites A and B. The two laboratories in A and B are also massive enough so that the measurements performed on the quantum wave can be considered measurements performed with classical apparatuses. However, for various aspects of our analysis we will have to consider the two laboratories as quantum systems. We assume that relative to the external reference frame the two laboratories are initially described by a product quantum state $`|\mathrm{\Psi }_A|\mathrm{\Psi }_B`$.
(iii) There is no entanglement between the location of the apparatuses in A and the wave packet $`|a`$ (nor between the location of the apparatuses in B and the wave packet $`|b`$). Instead, the fact that apparatus A measures $`|a`$ and apparatus B measures $`|b`$ is achieved via localization relative to the external frame. The measuring devices and the wave packets are well localized at the same place. This can be expressed in the equations
$`\langle a|\widehat{x}|a\rangle =\widehat{x}_{MD_A},`$ (13)
$`\langle b|\widehat{x}|b\rangle =\widehat{x}_{MD_B},`$ (14)
where $`x_{MD_A}`$ ($`x_{MD_B}`$) is the variable which describes the location of the interaction region of the measuring devices in A (in B). It is assumed that the wave packet $`|a`$ remains in the space region A (and $`|b`$ remains in B) during the time of measurements.
(iv) Measurements in A and in B are performed by local measuring devices activated by local clocks, say, at the internal time $`\tau _A=\tau _B=0`$. The clocks are well synchronized with the time $`t`$ of the external (classical) clock:
$$\tau _A(t)=\tau _B(t)=t,$$
(15)
and the spreads of the clock pointer variables $`\mathrm{\Delta }\tau _A,\mathrm{\Delta }\tau _B`$ are small during the experiment. Again, as stated in (ii), there is no entanglement between clocks in A and in B.
The assumptions can be summarized as follows: a measurement in A, the space-time point relative to an external classical frame, means a measurement performed by local apparatuses in A triggered by the local clock. The apparatuses and clocks in A are not entangled with the apparatuses and the clocks in B.
Given all apparatuses in A and B, but without the quantum particle (2), it is impossible to observe the nonlocality of the collapse described in Section III. Since the quantum state of all systems (measuring apparatuses, clocks, etc.) is the product state of a quantum state in spatial location A times a quantum state in spatial location B, there are no correlations between the results of measurements in A and in B. This requirement need not be so strong: the crucial feature is the absence of quantum correlations (following from entanglement between the systems in A and in B). Here, for simplicity of the analysis we forbid any initial correlation between measuring devices in the two sites.
There is a somewhat more complicated situation in relation to the nonlocality discussed in Section V. Clearly there is no quantum phase which characterizes the devices in A and B: these systems are in the product state. But the operational definition of the AB nonlocality of Section V was a feature which cannot be found through local experiments in A and B, the feature which leads to observable effects when the systems from locations A and B are brought together. If we restrict ourselves to measurements using local measuring devices, then there are many features which cannot be found locally, for example, the relative orientation of the measuring devices in A and in B. The observer in A (or in B) making measurements using local devices cannot find out his (or her) orientation. However, if we have other observers in the product state in regions A and B with well defined known orientation, they can measure locally the orientation of the system in A and orientation of the system in B. The question of what can and what cannot be measured from within the system itself is interesting , but we will not discuss it here. Here we allow all possible measuring devices provided that they do not possess entanglement between A and B.
In our discussion we assume that measurements are performed on a single system. But, for the question of finding the phase, the question of obtaining non-classical correlations, etc., we assume that we have an ensemble of experiments on identical single systems. Collective measurements on the ensemble of particles are not allowed: clearly, the results of such experiments can manifest properties of the composite system of many particles which are not intrinsic properties of each particle. (We will briefly discuss collective measurements in Section X.)
After stating here precisely the “rules of the game” we now proceed to discuss the nonlocality of the quantum wave (2) for various particles.
## VII SINGLE-PHOTON NONLOCALITY
Let us start by considering a photon in the state (2). There have been several proposals for how to obtain quantum correlations based on such and similar systems . The photon in the state (2) exhibits the nonlocality of the EPR correlations described in Section IV. The state of the photon, if we write it in the form (3), is isomorphic to the EPR state (1).
In order to get the EPR-type correlations we must be able to perform measurements on the photon analogous to spin measurements in arbitrary directions. The analog of the spin measurement in the $`\widehat{z}`$ direction is trivial: it is observing the presence of the photon in a particular location. A gedanken experiment yielding the analog of the spin measurements on the EPR pair in arbitrary directions is as follows . Let us consider, in addition to the photon, a pair of spin-$`\frac{1}{2}`$ particles, one located in A and one in B; see Fig. 3. Both particles are originally in a spin “down” state in the $`\widehat{z}`$ direction. In the locations A and B there are magnetic fields in the $`\widehat{z}`$ direction such that the energy difference between the “up” and “down” states equals exactly the energy of the photon. Then we construct a physical mechanism of absorption and emission of the photon by the spin which is described by the unitary transformation in each site:
$`|1|\downarrow \rightarrow |0|\uparrow ,`$ (16)
$`|1|\uparrow \rightarrow |1|\uparrow ,`$ (17)
$`|0|\downarrow \rightarrow |0|\downarrow .`$ (18)
Fig. 3. Swapping of the single-photon state with the entangled state of two spin-$`\frac{1}{2}`$ particles.
This transformation swaps the quantum state of the photon and the quantum state of the pair of spin-$`\frac{1}{2}`$ particles as follows:
$`{\displaystyle \frac{1}{\sqrt{2}}}(|1_A|0_B+e^{i\varphi }|0_A|1_B)|\downarrow _A|\downarrow _B`$ (19)
$`\rightarrow {\displaystyle \frac{1}{\sqrt{2}}}|0_A|0_B(|\uparrow _A|\downarrow _B+e^{i\varphi }|\downarrow _A|\uparrow _B).`$ (20)
Thus, we can obtain the nonlocal correlations of the EPR state starting with a single photon, swapping its state to the state of the pair of spin-$`\frac{1}{2}`$ particles, and then making appropriate spin-component measurements. Statistical analysis of the correlations between the results of spin measurements in A and in B allows us to find the phase $`\varphi `$. For example, the probabilities for coincidence and anti-coincidence in the $`x`$ spin measurements are given by
$`\mathrm{prob}(|\uparrow _x|\uparrow _x)=\mathrm{prob}(|\downarrow _x|\downarrow _x)={\displaystyle \frac{1}{8}}|1+e^{i\varphi }|^2,`$ (21)
$`\mathrm{prob}(|\uparrow _x|\downarrow _x)=\mathrm{prob}(|\downarrow _x|\uparrow _x)={\displaystyle \frac{1}{8}}|1-e^{i\varphi }|^2.`$ (22)
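These rates follow from projecting the swapped two-spin state of (20) onto the $`x`$-basis. A quick numerical verification (numpy sketch; the test phase is an arbitrary choice):

```python
import numpy as np

phi = 1.2                                    # arbitrary test phase
up = np.array([1.0, 0.0]); dn = np.array([0.0, 1.0])
# Entangled spin state of (20): (|up,dn> + e^{i phi} |dn,up>) / sqrt(2)
psi = (np.kron(up, dn) + np.exp(1j * phi) * np.kron(dn, up)) / np.sqrt(2)

# Eigenstates of sigma_x for each spin
up_x = (up + dn) / np.sqrt(2)
dn_x = (up - dn) / np.sqrt(2)

def prob(sa, sb):
    """Probability of the joint outcome (sa in A, sb in B)."""
    return abs(np.vdot(np.kron(sa, sb), psi)) ** 2

coinc = prob(up_x, up_x) + prob(dn_x, dn_x)
anti = prob(up_x, dn_x) + prob(dn_x, up_x)
# Total coincidence rate |1 + e^{i phi}|^2 / 4, split evenly between the
# two coincident outcomes; similarly for the anti-coincidences.
print(coinc, abs(1 + np.exp(1j * phi)) ** 2 / 4)
print(anti, abs(1 - np.exp(1j * phi)) ** 2 / 4)
```

Scanning the coincidence rate as a function of the measured correlations thus recovers $`\varphi `$ from purely local data, once the results from A and B are brought together.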
We have shown that, in principle, the nonlocality of a single photon is equivalent to the nonlocality of the EPR pair. Now we turn to the question of how this nonlocality can be manifested in real experiments and explore the nature of this equivalence.
We are not aware of experiments in which a spin in a magnetic field absorbs a photon with high efficiency. However, there is an equivalent operation which is performed in laboratories. Recently there has been very significant progress in microwave cavity technology and there are experiments in which Rydberg atoms, which operate as two-level systems, absorb and emit photons into a microwave cavity with very high efficiency . The excited state $`|e`$ and the ground state $`|g`$ of the atom are isomorphic to the $`|\uparrow `$ and $`|\downarrow `$ states of a spin-$`\frac{1}{2}`$ particle. For the atom, measuring the analog of the $`z`$ spin component is trivial: it is the test whether the atom is in the excited state or the ground state. For measurements analogous to the spin measurements in other directions there is an experimental solution too. Using appropriate laser pulses, the atom state can be “rotated” in the two-dimensional Hilbert space of ground and excited states in any desired way. Thus, any two orthogonal states can be rotated to the $`|e`$ and $`|g`$ states and, then, a measurement which distinguishes between the ground and excited states distinguishes, in fact, between the original orthogonal states.
The Hamiltonian which leads to the required interactions can be written in the following form:
$$H=a^{\dagger }|g\rangle \langle e|+a|e\rangle \langle g|,$$
(23)
where $`a^{\dagger }`$, $`a`$ are creation and annihilation operators of the photon. This Hamiltonian is responsible for the two needed operations. First, such coupling between the photon in the cavity in A and the atom in A, together with a similar coupling in B, swaps the state (3) to the state of two Rydberg atoms:
$`{\displaystyle \frac{1}{\sqrt{2}}}(|1_A|0_B+e^{i\varphi }|0_A|1_B)|g_A|g_B`$ (24)
$`\rightarrow {\displaystyle \frac{1}{\sqrt{2}}}|0_A|0_B(|e_A|g_B+e^{i\varphi }|g_A|e_B).`$ (25)
The same Hamiltonian can also lead to an arbitrary rotation of the atomic state. To this end the atom has to be coupled to a cavity with a coherent state of photons,
$$|\alpha =e^{-\frac{|\alpha |^2}{2}}\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\frac{\alpha ^n}{\sqrt{n!}}|n.$$
(26)
The phase of $`\alpha `$ specifies the axis of rotation and the absolute value of $`\alpha `$ specifies the rate of rotation. For example, the time evolution of an atom starting at $`t=0`$ in the ground state is:
$$|\mathrm{\Psi }(t)=\mathrm{cos}(|\alpha |t)|g+\frac{\alpha }{i|\alpha |}\mathrm{sin}(|\alpha |t)|e.$$
(27)
This is correct when we make the approximation $`a^{\dagger }|\alpha \approx \alpha ^{*}|\alpha `$, which is precise in the limit of large $`|\alpha |`$. The Hamiltonian (23) is actually implemented in laser-aided manipulations of Rydberg atoms passing through microwave cavities.
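The quality of this approximation can be probed by evolving the full atom-field state under (23) numerically. The following is a sketch (numpy/scipy; the Fock-space cutoff, the value $`\alpha =6`$, and the sample times are our arbitrary choices, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

dim = 80          # Fock-space cutoff; ample for mean photon number 36
alpha = 6.0       # real alpha: fixed rotation axis, rotation rate |alpha|

# Field operators on the truncated Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)          # annihilation
adag = a.conj().T

# Atom: |g> = (1, 0), |e> = (0, 1)
g = np.array([1.0, 0.0]); e = np.array([0.0, 1.0])
ge = np.outer(g, e)                                   # |g><e|
eg = np.outer(e, g)                                   # |e><g|

# Hamiltonian (23): H = a^dagger |g><e| + a |e><g|
H = np.kron(adag, ge) + np.kron(a, eg)

# Coherent state |alpha> (amplitudes computed in log space for stability)
n = np.arange(dim)
coh = np.exp(-abs(alpha) ** 2 / 2 + n * np.log(alpha) - 0.5 * gammaln(n + 1))

def prob_ground(t):
    """Probability to find the atom in |g> after evolving |alpha>|g> for time t."""
    psi = expm(-1j * H * t) @ np.kron(coh, g)
    return np.sum(np.abs(psi.reshape(dim, 2)[:, 0]) ** 2)

# For large |alpha| the exact dynamics tracks the rotation (27):
# prob_ground(t) is close to cos^2(|alpha| t)
for t in (np.pi / 24, np.pi / 12):
    print(prob_ground(t), np.cos(alpha * t) ** 2)
```

The residual deviation shrinks as $`|\alpha |`$ grows, consistent with the large-$`|\alpha |`$ limit invoked in the text; at much longer times the exact evolution dephases, which is the finite-$`|\alpha |`$ correction to the ideal rotation.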
Conceptually, the above scheme can be applied to any type of bosons (instead of photons), even charged bosons. An example of a (gedanken) Hamiltonian for this case describes a proton $`|p`$ which turns into a neutron $`|n`$ by absorbing a negatively charged meson:
$$H=a_m^{\dagger }|p\rangle \langle n|+a_m|n\rangle \langle p|,$$
(28)
where $`a_m^{\dagger }`$, $`a_m`$ are creation and annihilation operators of the meson. This Hamiltonian swaps the state of the meson (now written in the form (3)) and the state of the nucleon pair:
$`{\displaystyle \frac{1}{\sqrt{2}}}(|1_A|0_B+e^{i\varphi }|0_A|1_B)|p_A|p_B`$ (29)
$`\rightarrow {\displaystyle \frac{1}{\sqrt{2}}}|0_A|0_B(|n_A|p_B+e^{i\varphi }|p_A|n_B).`$ (30)
Since there is no direct measurement of a superposition of proton and neutron, we need again a procedure which rotates the superposition states of a nucleon to a neutron or proton state. This rotation requires coherent states of mesons, which would be, in this case, a coherent superposition of states with different charge. Due to the strong electromagnetic interaction the coherent state will decohere very fast. This is essentially an environmentally induced “charge super-selection rule” which prevents stable coherent superpositions of states with different charge. It is important that there is no exact charge super-selection rule which would prevent, in principle, performing the experimental scheme presented above. Indeed, Aharonov and Susskind (AS) proposed a method for measuring the relative phase between states with different charge, thus showing that there is no exact charge super-selection rule. In their method one can measure the phase even if the whole system (the observed particle and the measuring device) is in an eigenstate of charge. This corresponds to initial entanglement between the measuring devices in A and B and thus will not be suitable for the present procedure. Here we assume the existence of superpositions of different charge states: only then is it possible for the quantum state of the measuring devices in A and B to be a product state.
There are some arguments that the total charge of the universe is zero and, therefore, we cannot have a product of coherent states of charged particles in A and in B. A more sophisticated analysis has to be performed: since the observable variables are only relative variables, the final conclusion will be as in the AS paper : conceptually, there is no constraint on a measurement of the relative phase of a charged boson, but decoherence will prevent the construction of any realistic experiment. See also the very different arguments against an exact super-selection rule by Giulini .
## VIII NONLOCALITY OF A FERMION QUANTUM WAVE
As we have shown above, the nonlocality properties of the boson quantum state (2) are equivalent to the nonlocality of the EPR pair. In contrast, the nonlocality properties of the fermion quantum state (2) are very different from those of the EPR pair. We cannot generate quantum correlations between the results of local measurements performed in A and in B which violate Bell inequalities.
The reason why the method which was applicable to bosons fails for fermions is that there is no coherent state of fermions. The number state $`|n`$ exists only for $`n=0`$ and $`n=1`$ .
The intuitive understanding of the role of the coherent state is as follows. If, in addition to the measuring devices, there is an auxiliary identical particle in a known superposition of localized wave packets in A and B, then the phase $`\varphi `$ can be found using local measurements. We consider the superposition of $`|a^{\prime }`$ and $`|b^{\prime }`$ positioned near $`|a`$ and $`|b`$ respectively; see Fig. 4. We choose the phase of the auxiliary particle to be equal to zero,
$$|\mathrm{\Psi }^{\prime }=\frac{1}{\sqrt{2}}(|a^{\prime }+|b^{\prime }),$$
(31)
i.e., we have a composite system of two identical particles in the state
$$|\mathrm{\Psi }|\mathrm{\Psi }^{\prime }=\frac{1}{2}(|a+e^{i\varphi }|b)(|a^{\prime }+|b^{\prime }).$$
(32)
Fig. 4. Space-time diagram of local measurements which allow finding the phase $`\varphi `$ of a quantum wave when an auxiliary identical particle with known phase is given.
The phase $`\varphi `$ controls the rate of coincidence counting in the measurements of a local variable in A with eigenstates
$$|a_+\equiv \frac{1}{\sqrt{2}}(|a+|a^{\prime }),|a_{-}\equiv \frac{1}{\sqrt{2}}(|a-|a^{\prime }),$$
(33)
and a local variable in B with eigenstates
$$|b_+\equiv \frac{1}{\sqrt{2}}(|b+|b^{\prime }),|b_{-}\equiv \frac{1}{\sqrt{2}}(|b-|b^{\prime }).$$
(34)
In the case that one particle was found on each side, the probabilities are (compare with (21)):
$`\mathrm{prob}(|a_+|b_+)=\mathrm{prob}(|a_{-}|b_{-})={\displaystyle \frac{1}{8}}|1+e^{i\varphi }|^2,`$ (35)
$`\mathrm{prob}(|a_+|b_{-})=\mathrm{prob}(|a_{-}|b_+)={\displaystyle \frac{1}{8}}|1-e^{i\varphi }|^2.`$ (36)
The method described in the previous paragraph is applicable both for bosons and fermions. However, the existence of a particle described by (31) as a part of our measuring devices contradicts our assumption that sites A and B do not possess an entangled physical system prior to bringing in the test particle. For bosons we can consider a coherent state of particles described by state (31); it is equal to the product of local coherent states of bosons in A and in B:
$`e^{-\frac{|\alpha |^2}{2}}{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{\alpha ^n}{\sqrt{n!}}}{\displaystyle \frac{1}{\sqrt{2}^n}}(|a^{\prime }+|b^{\prime })^n=`$ (37)
$`e^{-\frac{|\alpha |^2}{4}}{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{(\alpha /\sqrt{2})^n}{\sqrt{n!}}}(|a^{\prime })^ne^{-\frac{|\alpha |^2}{4}}{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{(\alpha /\sqrt{2})^n}{\sqrt{n!}}}(|b^{\prime })^n.`$ (38)
Thus, this state has no entanglement between the sites but it provides the reference for measuring the phase $`\varphi `$ of the state (2) via methods described in the previous section.
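The factorization of the coherent state of the superposed mode into local coherent states (with amplitude $`\alpha /\sqrt{2}`$ at each site) can be verified in a truncated two-mode Fock space. A numerical sketch (numpy/scipy; the cutoff and the value of $`\alpha `$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

dim = 25                 # per-mode Fock cutoff (generous for alpha = 1.5)
alpha = 1.5

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # single-mode annihilation
I = np.eye(dim)

def coherent(op, z, ground):
    # Displaced vacuum: |z> = exp(z op^dag - z* op) |0>
    return expm(z * op.conj().T - np.conj(z) * op) @ ground

vac1 = np.zeros(dim); vac1[0] = 1.0
vac2 = np.zeros(dim * dim); vac2[0] = 1.0

# Mode c annihilates the superposition (|a'> + |b'>)/sqrt(2)
A = np.kron(a, I); B = np.kron(I, a)
c = (A + B) / np.sqrt(2)

lhs = coherent(c, alpha, vac2)                          # coherent state of mode c
rhs = np.kron(coherent(a, alpha / np.sqrt(2), vac1),    # product of local
              coherent(a, alpha / np.sqrt(2), vac1))    # coherent states

overlap = abs(np.vdot(lhs, rhs))
# One dominant Schmidt coefficient confirms the absence of entanglement
sv = np.linalg.svd(lhs.reshape(dim, dim), compute_uv=False)
print(overlap, sv[0])
```

The unit overlap and the single dominant Schmidt coefficient confirm that this reference state contains no entanglement between the sites, as the text asserts.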
Again, if we assume that there is no prior entanglement between the sites A and B, the phase $`\varphi `$ of the fermion quantum state (2) cannot be measured locally. Quantum correlations which violate Bell’s inequality cannot be obtained. The only type of nonlocality for a fermion wave (apart from the collapse nonlocality) is the AB nonlocality. The quantum phase manifests itself only in interference experiments in which the wave packets $`|a`$ and $`|b`$ are brought together.
The impossibility of local measurement of the phase $`\varphi `$ is due to the anti-commutation of fermion operators: the operator $`a_A^{\dagger }+a_A`$ does not commute with the operator $`a_B^{\dagger }+a_B`$. The eigenstates of the operator $`a^{\dagger }+a`$ are $`\frac{1}{\sqrt{2}}(|0\pm |1)`$; we used measurements of such an operator for finding the phase $`\varphi `$ of the boson wave in Section VII. A measurement in site A of $`a_A^{\dagger }+a_A`$ leads to an observable change in the results of a measurement of $`a_B^{\dagger }+a_B`$, where $`a_A^{\dagger }`$, $`a_A`$, $`a_B^{\dagger }`$, $`a_B`$ are creation and annihilation operators of the fermion in A and in B, respectively. This means that the possibility of such measurements would lead to superluminal communication.
Another question which can be asked is: “Can we measure locally the phase $`\varphi `$ of a superposition of a pair of fermions?” The quantum state is:
$$|\mathrm{\Psi }=\frac{1}{\sqrt{2}}(|2_A|0_B+e^{i\varphi }|0_A|2_B),$$
(39)
where, for example, $`|2_A`$ might represent two electrons in the same spatial state inside A, in a singlet spin state. Since $`a_A^{}a_A^{}+a_Aa_A`$ commutes with $`a_B^{}a_B^{}+a_Ba_B`$, the argument presented in the preceding paragraph for the unmeasurability of the phase of a superposition of single-fermion wave packets does not hold in this case. In fact, a pair of fermions is, in a sense, a boson. We can construct a procedure for measuring the phase $`\varphi `$ of the state (39) similar to the procedure which was previously described (for a photon (23)-(27) and for a charged meson (28), (29)) in Section VII. A difficulty is that the coherent state of pairs of fermions which is required for our procedure can only be constructed approximately.
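Representing two modes per site (e.g. the two spin states of the electrons) with Jordan-Wigner matrices confirms that the pair operators at A and B commute, in contrast to the single-fermion case (a sketch; the operator names are ours):

```python
import numpy as np
from functools import reduce

sm = np.array([[0.0, 1.0], [0.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
op = lambda mats: reduce(np.kron, mats)
dag = lambda M: M.conj().T

# Four fermionic modes: a1, a2 at site A; b1, b2 at site B
a1 = op([sm, I2, I2, I2])
a2 = op([sz, sm, I2, I2])
b1 = op([sz, sz, sm, I2])
b2 = op([sz, sz, sz, sm])

PA = dag(a1) @ dag(a2) + a2 @ a1    # pair creation + annihilation at A
PB = dag(b1) @ dag(b2) + b2 @ b1    # same at B

print(np.allclose(PA @ PB - PB @ PA, 0))   # True: even operators at disjoint sites commute
```

Operators containing an even number of fermion operators at one site carry no net Jordan-Wigner string, so they commute with any even operator at the other site.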
## IX IS IT POSSIBLE TO CHANGE THE PHASE IN A NONLOCAL WAY?
The main message of Section VII is that the phase $`\varphi `$ for boson state (2) is locally measurable. Given an ensemble of bosons with identical phase $`\varphi `$ we can generate a set of numbers (results of measurements) in A and another set of numbers in B, such that the two sets together yield $`\varphi `$. This sounds paradoxical, in particular, because $`\varphi `$ is not a gauge invariant parameter.
Moreover, it seems that this phase can be changed non-locally. Indeed, it has been described in Section VIII how opening a condenser for a period of time in the space between the locations of a charged particle, A and B, changes the phase: this is the scalar AB effect. Thus, it seems that by an action in a localized region we can send information to a space-like separated region. Opening or not opening a condenser apparently changes correlations in the results of measurements in A and B; see Fig. 5.
Fig. 5. Apparent sending signals to a space-like separated region. Operation in O, opening the condenser for a period of time, apparently changes the correlations between measurements in A and B. No signal is sent from O, neither to A nor to B, but the signal is sent to the union of A and B. The intersection of light cones originated at A and at B lies inside the light cone originated at O. Therefore, the action of the condenser falls into the category of “jammers” considered in Ref. .
It has been shown that such an action, if possible, cannot lead to a paradoxical causal loop similar to the one generated by a possibility of sending signals from one localized space-time region to another space-like separated local region. In our case the region to which we send the information consists of two space-like separated regions. There is no local observer who receives superluminal signals.
In spite of the fact that we cannot reach a causality paradox if such an operation is possible, it clearly contradicts the spirit, if not the letter, of special relativity. And, in fact, it is impossible. It is incorrect that the opening of a condenser will change correlations between results of measurements in A and B. It must be incorrect because we should be able to use a covariant gauge in which changes in the potentials take place only inside the light cone. However, we can explain this phenomenon also in a standard (Coulomb) gauge. In our scheme the measurements in local sites include interactions with coherent states of auxiliary particles, particles which are identical to the particle in a superposition. Therefore, if the particle in question is charged, the auxiliary particles are also charged, and opening the condenser changes the phase of the coherent state in such a way that the correlations are not changed. The gauge which we choose changes the description of the auxiliary particles too, so that the probabilities for results of measurements remain gauge invariant.
Consider now a neutral boson state. A massive plate in between the regions A and B which we either move or do not move toward one of the sites will introduce a phase shift in complete analogy with the scalar AB effect. (The difference here is that the gravitational fields in the regions A and B are not zero, but the fields are not affected by the motion of the plate.) In a scenario where the boson is absorbed by spins in a magnetic field and the correlations are obtained from the spin measurements, it is not obvious how the measuring devices will be influenced by the movement of the massive plate. The resolution of the paradox in this case is similar to the resolution of Einstein’s paradox of an exact energy of an exact clock . The explanation is that the pointers of the local clocks are shifted. Simultaneity between A and B is altered due to the action of the massive plate. Since in our case local clocks activate the measurements, the shifts in the pointers will lead to a change in the results. This change compensates exactly the phase change of the boson.
## X COLLECTIVE MEASUREMENTS
In this paper we have considered the results of measurements on an ensemble of identical particles in an unknown state. We allow measurements to be performed only on single members of the ensemble, so that we will have an ensemble of results of measurements performed on single particles. We believe that this is the proper approach for the analysis of the nature of a quantum wave of a particle; however, it might be interesting to consider a related question: “Are there any changes to the questions posed in this paper if collective measurements are allowed?” Note that there is a recent result showing that collective measurements do make a difference for similar questions regarding the nonlocality of an ensemble of pairs of spin-$`\frac{1}{2}`$ particles in a particular mixed state .
For bosons we do not expect any difference because, even for single-particle measurements, we obtained the answers to our questions: (i) statistical analysis of the results of measurements allows us to find the phase $`\varphi `$ and (ii) there are measurements in A and in B such that the results are characterized by correlations which cannot have local causes. For single-particle measurements on fermions, neither (i) nor (ii) is true, and this raises the question of the status of (i) and (ii) when collective measurements are allowed.
Let us start this analysis by assuming that our particle is an electron and, contrary to the assumption of no prior entanglement, we now have an auxiliary particle, a positron, in a known superposition in A and B, say, of the form (31). In this case both (i) and (ii) are true: the fermion state is measurable via local measurements, and some measurements in A and B exhibit correlations which have no local causes.
Indeed, we can apply an interaction such that the positron and the electron located in the same site annihilate and create a photon. Such interaction will lead to the following transformation
$`{\displaystyle \frac{1}{\sqrt{2}}}(|e^{-}_A+e^{i\varphi }|e^{-}_B){\displaystyle \frac{1}{\sqrt{2}}}(|e^+_A+|e^+_B)`$ (40)
$`\rightarrow {\displaystyle \frac{1}{2}}(|e^{-}_A|e^+_B+e^{i\varphi }|e^{-}_B|e^+_A+|\gamma _A+e^{i\varphi }|\gamma _B).`$ (41)
After testing and not finding the electron and the positron in the sites A and B, the remaining state will be:
$$\frac{1}{\sqrt{2}}(|\gamma _A+e^{i\varphi }|\gamma _B),$$
(42)
which is a different notation for a single-photon state of the form (2). For a single photon we know that (i) and (ii) are true: the phase of a single-photon state (which is the original phase $`\varphi `$ of the fermion) can be found, and quantum correlations breaking Bell inequalities can be obtained.
However, we do not have a positron in a state (31). Instead, we have an ensemble of electrons in a state (2). So, the first step is to swap the state of the electron with the state of a positron. If we have an entangled state of a composite system which has two parts, one in A and another in B, such as the EPR state of two spin-$`\frac{1}{2}`$ particles located in A and B, and we want to transfer this entangled state to another pair of particles in A and B, then all we have to do is to perform a local operation in each site which swaps the local quantum state of one particle from one pair with that of one particle from the other pair located in the same site . Linearity of quantum mechanics will ensure that swapping of local states, i.e. the states of the parts of the systems, will lead to swapping of the quantum state of the whole systems.
In this paper we are interested in swapping a nonlocal state of a single particle to another single particle. The method described above cannot be applied directly because it is assumed that we have no other particle in a superposition of being in A and B (that would constitute entanglement). Therefore, the other particle is not present in at least one of the sites and, consequently, the “local swapping interaction” with this particle is meaningless. However, if the particles are bosons, then the swapping operation is possible. It can be done by transferring the quantum state to the entangled state of a composite system: a single-photon state can be transferred to two spin-$`\frac{1}{2}`$ particles in a magnetic field in the gedanken scenario described in Section VII or to two atoms in a real experiment using microwave cavities. After that, the quantum state can be swapped back to “another” photon.
Let us come back to the question of transferring the quantum state of the electron to a positron. Again, since we assumed no prior entanglement, the positron cannot be in a superposition of being in A and in B. Therefore, we will consider a situation in which there are two positrons, one in A and another in B. We apply an interaction such that the positron and the electron which are in the same site annihilate and create a photon. This is described by the equation
$`{\displaystyle \frac{1}{\sqrt{2}}}(|e^{-}_A+e^{i\varphi }|e^{-}_B)|e^+_A|e^+_B`$ (43)
$`\rightarrow {\displaystyle \frac{1}{\sqrt{2}}}(|\gamma _A|e^+_B+e^{i\varphi }|\gamma _B|e^+_A).`$ (44)
Now, the procedure described in Section VII allows measurements of local superpositions of the vacuum and single-photon states. In particular, there is a nonzero probability to find the state $`\frac{1}{\sqrt{2}}(|0_A+|\gamma _A)`$ in A and a similar state $`\frac{1}{\sqrt{2}}(|0_B+|\gamma _B)`$ in B. When this occurs, the final situation is that the electron and one of the positrons are annihilated and a positron appears in a superposition of being in two places
$$\frac{1}{\sqrt{2}}(|e^+_B+e^{i\varphi }|e^+_A).$$
(45)
Thus, we can obtain a positron in a superposition from an electron in a superposition. If we are allowed to perform collective measurements, we can now annihilate this positron with another electron in the ensemble:
$`{\displaystyle \frac{1}{\sqrt{2}}}(|e^{-}_A+e^{i\varphi }|e^{-}_B){\displaystyle \frac{1}{\sqrt{2}}}(e^{i\varphi }|e^+_A+|e^+_B)`$ (46)
$`\rightarrow {\displaystyle \frac{1}{2}}(|e^{-}_A|e^+_B+e^{i2\varphi }|e^{-}_B|e^+_A+e^{i\varphi }|\gamma _A+e^{i\varphi }|\gamma _B).`$ (47)
We do not obtain a relative phase between the photon wave packets in the two places which would allow us to find the phase $`\varphi `$, but we do obtain a superposition of a photon in A and B with a known (zero) phase. This superposition can generate quantum correlations without local causes as described above.
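The chain (43)-(45) can be verified with elementary linear algebra: projecting each photon mode of the state (44) onto $(|0\rangle+|\gamma\rangle)/\sqrt{2}$ leaves exactly the positron superposition (45), with the phase intact. Below is a minimal numerical sketch with an arbitrary test value of the phase:

```python
import numpy as np

phi = 0.7                                   # illustrative test phase
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# State (44); mode ordering: photon A, photon B, positron A, positron B
psi = (kron(ket1, ket0, ket0, ket1)
       + np.exp(1j * phi) * kron(ket0, ket1, ket1, ket0)) / np.sqrt(2)

# Find (|0> + |gamma>)/sqrt(2) in both A and B; keep the positron factor
plus = (ket0 + ket1) / np.sqrt(2)
res = np.einsum('a,b,abcd->cd', plus, plus, psi.reshape(2, 2, 2, 2))
res /= np.linalg.norm(res)

# res[0,1] ~ |e+>_B and res[1,0] ~ e^{i phi}|e+>_A, as in (45)
print(np.angle(res[1, 0] / res[0, 1]))      # recovers phi
```

The post-selected positron state carries the full phase information originally stored in the electron superposition.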
If we are allowed to perform collective measurements, we can consider measurements on the pairs of fermions from our ensemble. The phase of pairs of fermions is $`2\varphi `$ and, in general, it can be found by the method described in Section VII. However, as we mentioned above, all statements about measurability using collective measurements do not describe the nature of a quantum wave of a single particle.
## XI CONCLUSIONS
In this paper we have analyzed nonlocal aspects of a simple quantum wave which is an equal-weights superposition (2) of wave packets in A and in B. For this analysis we assumed that we are given non-entangled laboratories in A and B which are described quantum mechanically by a product state of systems in A and systems in B.
We have shown that the presence of an ensemble of bosons in a superposition $`\frac{1}{\sqrt{2}}(|a+e^{i\varphi }|b)`$ leads to correlations in the results of single-particle local measurements in A and in B which break Bell’s inequality. These results, collected from a large ensemble, allow us to find the phase $`\varphi `$. Thus, the boson quantum wave exhibits the EPR-type nonlocality. For a photon state this is not just a theoretical statement: the EPR nonlocality can be observed in an ensemble of measurements carried out on single photons. In principle, the statement applies to any boson state. However, an environmentally induced superselection rule prevents such experiments with charged bosons. Also, experiments with neutral massive bosons do not seem to be feasible.
The presence of an ensemble of fermions in a superposition $`\frac{1}{\sqrt{2}}(|a+e^{i\varphi }|b)`$, with the restriction that we perform separate measurements on each fermion, does not lead to correlations in the results of the local measurements in A and in B which violate Bell’s inequality. We do get correlations between the results of local measurements in A and B, but these correlations are of the kind which allows a local causal explanation. These results do not allow us to find the phase $`\varphi `$. The phase $`\varphi `$ has observable consequences in interference experiments. A fermion quantum wave exhibits the AB nonlocality, which is the unobservability of this phase via local single-particle measurements.
ACKNOWLEDGMENTS
It is a pleasure to thank Lior Goldenberg, Jacob Grunhaus, Benni Reznik, Sandu Popescu, Asher Peres and especially Philip Pearle for helpful discussions. This research was supported in part by grant 471/98 of the Basic Research Foundation (administered by the Israel Academy of Sciences and Humanities) and NSF grant PHY9601280. Part of this work was done during the 1999 ESF-Newton Institute Conference in Cambridge.
# The Structure of Projected Center Vortices at Zero and Finite Temperature
## 1 INTRODUCTION
Using the direct version of maximal center gauge we identify projected (P-)vortices by center projection . We map the SU(2) link variables $`U_\mu (x)`$ to $`Z_2`$ elements $`Z_\mu (x)=\text{sign Tr}[U_\mu (x)]`$. The plaquettes with $`Z_{\mu \nu }(x)=Z_\mu (x)Z_\nu (x+\widehat{\mu })Z_\mu (x+\widehat{\nu })Z_\nu (x)=1`$ we call “P-plaquettes.” The corresponding dual plaquettes form a closed surface in 4 dimensions.
## 2 ZERO TEMPERATURE
### 2.1 String tension and smoothing
The distribution of P-vortices in space-time determines the string tension $`\sigma `$ in center projection which agrees very well with $`\sigma `$ from full Wilson loops . If $`p`$ is the probability that a plaquette belongs to a P-vortex, we get for the expectation value of a Wilson loop of size $`A=I\times J`$ assuming the independence of piercings of the loop
$`W_{cp}(I,J)`$ $`=`$ $`\left[(1-p)\cdot 1+p\cdot (-1)\right]^A`$
$`=`$ $`(1-2p)^A=e^{-\sigma _{cp}A}\approx e^{-2pA},`$
where the string tension in center projection is
$$\sigma _{cp}=-\text{ln}(1-2p)\approx 2p.$$
(2)
$`p`$ scales nicely with the inverse coupling $`\beta `$. However, for small vortices the independence assumption of piercings is not fulfilled, simply because one piercing is always correlated with another piercing nearby. Hence, small vortices do not contribute to the string tension, nor do small fluctuations of the P-vortex surface. This can be seen from Fig. 1: $`p`$ – labeled with “unsmoothed” – scales, but it is higher than the fraction of P-plaquettes $`f`$ inferred from the measured string tension $`\sigma `$ using $`f=(1-e^{-\sigma })/2`$ and assuming independent piercings.
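A toy Monte Carlo of independent piercings reproduces the relation (2) directly (a sketch; the value of p and the loop areas are illustrative, not taken from the simulations described here):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.06                       # piercing probability per plaquette (illustrative)
trials = 300_000

for A in (4, 9, 16):           # loop areas I x J
    pierced = rng.random((trials, A)) < p          # independent piercings
    W = ((-1.0) ** pierced.sum(axis=1)).mean()     # center flux factor (-1)^n
    sigma_cp = -np.log(W) / A
    print(A, round(sigma_cp, 4))   # each close to -ln(1 - 2p) = 0.1278
```

The extracted string tension is independent of the loop area, as it must be when the piercings are genuinely uncorrelated; correlated piercings from small vortices break exactly this behaviour.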
To understand the discrepancy between $`p`$ and $`f`$ in more detail we remove short-range fluctuations which are unimportant for $`\sigma `$. We introduce several smoothing steps which are depicted in Fig. 2. First, scanning through the lattice, we iteratively identify cubes with 6 P-plaquettes and remove them; this is called 0-smoothing. In the next steps we replace cubes with 5 or 4 P-plaquettes by cubes with 1 or 2 complementary P-plaquettes, called 1- and 2-smoothing, respectively.
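A toy version of 0-smoothing on a small 3-dimensional sub-lattice illustrates the cube-removal step (hypothetical helper code, not the programs used for this analysis):

```python
from itertools import product

def cube_faces(x):
    """The 6 plaquettes bounding the unit cube at site x = (i, j, k)."""
    i, j, k = x
    return {
        ((i, j, k), 'xy'), ((i, j, k + 1), 'xy'),
        ((i, j, k), 'xz'), ((i, j + 1, k), 'xz'),
        ((i, j, k), 'yz'), ((i + 1, j, k), 'yz'),
    }

def zero_smooth(pvortex, sites):
    """Iteratively remove cubes all 6 faces of which are P-plaquettes."""
    changed = True
    while changed:
        changed = False
        for x in sites:
            faces = cube_faces(x)
            if faces <= pvortex:
                pvortex -= faces
                changed = True
    return pvortex

sites = list(product(range(4), repeat=3))
pv = cube_faces((1, 1, 1)) | cube_faces((3, 3, 3))  # two isolated minimal vortices
print(len(zero_smooth(set(pv), sites)))             # 0: both cubes removed
```

Isolated single-cube vortices vanish completely, while any configuration with fewer than 6 faces on every cube is left untouched; the 1- and 2-smoothing steps generalize this by substituting complementary faces instead of removing them.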
As can be seen from Fig. 1, the value of $`p`$ nicely approaches $`f`$ with increasing smoothing step, especially for larger values of $`\beta `$ where P-plaquettes get less dense. Further we check the Creutz ratios extracted from P-configurations after various smoothing steps. It is clearly seen that 0- and 1-smoothing do not change the Creutz ratios, only 2-smoothing shows a deviation of $`5\%`$ for small $`\beta `$ (Fig. 3).
### 2.2 Topology
P-vortices have to percolate in order to give the Wilson loop an area law behaviour resulting in a finite string tension. Thus we check the size of P-vortices. The result is that around 90% of all P-plaquettes are part of one huge P-vortex. All other P-vortices are rather small and should not contribute to $`\sigma `$. After 2-smoothing this huge vortex contains almost all (over 99%) of all P-plaquettes.
Next we calculate the type of homomorphy of the surface of the dominating P-vortex. It is determined by $`a`$) the orientation behaviour, $`b`$) the Euler characteristic $`\chi `$.
The simulation shows that without exception large vortices are unorientable even after smoothing; apparently the smoothing procedure does not remove all of the local structures (e.g. “cross-caps”) responsible for the global non-orientability.
The Euler characteristic $`\chi `$ is defined as $`\chi =𝒩_0-𝒩_1+𝒩_2`$ where $`𝒩_i`$ are the numbers of vertices, links and plaquettes, respectively. $`\chi `$ is related to the genus $`g`$ by $`\chi =2-g`$; an unorientable surface of genus $`g`$ is homeomorphic to a sphere with $`g`$ attached “cross-caps”. Fig. 4 shows that after 2-smoothing $`g`$ roughly scales. This is not compatible with a self-similar short-range structure below the confinement length scale, i.e. a fractal structure.
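As a concrete check of the definition, $`\chi `$ can be evaluated for two simple plaquette surfaces (a sketch; the helper is ours):

```python
def euler(n_vertices, n_links, n_plaquettes):
    return n_vertices - n_links + n_plaquettes

# Surface of a single lattice cube: 8 vertices, 12 links, 6 plaquettes -> sphere
chi_cube = euler(8, 12, 6)
print(chi_cube)               # 2, i.e. genus 0

# L x L plaquette grid with periodic boundaries -> torus (orientable: chi = 2 - 2g)
L = 4
chi_torus = euler(L * L, 2 * L * L, L * L)
print(chi_torus)              # 0, i.e. one handle
```

Counting the same quantities for the dual plaquettes of a P-vortex yields its genus; the relevant relation is chi = 2 - 2g for orientable surfaces and chi = 2 - g for unorientable ones.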
## 3 FINITE TEMPERATURE
In this section we will extend our study of P-vortex topology, and the effect of our smoothing steps on P-vortices, to the finite temperature case; see also . In the deconfinement phase there is a strong asymmetry of P-plaquette distributions. The density of space-time plaquettes decreases explaining why the string tension of timelike Wilson loops is lost and $`\sigma `$ of spatial loops is preserved . The dominance of the largest P-vortex is weaker in deconfinement, but as expected it is still there.
In contrast to the zero temperature case P-vortices become orientable in the deconfinement phase. The dual P-plaquettes form cylinders in the time direction, closed via the periodicity of the lattice. For high temperatures the Euler characteristic $`\chi `$ approaches $`0`$ as shown in Fig. 5. The largest P-vortex has the topology of a torus. Fig. 6 shows a cut through a typical field configuration at $`\beta =2.6`$ on a $`2\times 12^3`$-lattice.
## 4 CONCLUSIONS
We have investigated the size and topology of P-vortices in SU(2) lattice gauge theory; P-vortices are surfaces on the dual lattice which lie at or near the middle of thick center vortices. In the confined phase the four-dimensional lattice is penetrated by a single huge P-vortex of very complicated topology. It is unorientable and has many handles. There also exist a few very small vortices. These, and short-range fluctuations of the large P-vortex, do not contribute to the string tension. Keeping the Creutz ratios constant, we could remove those fluctuations by a smoothing procedure.
In the deconfined phase, we found a strong space-time asymmetry. P-vortices at finite temperature are mainly composed of space-space plaquettes forming time-like surfaces on the dual lattice. They are orientable, closed via the periodicity of the lattice in the time direction and have the topology of a torus. The dominance of the largest vortex is not as strong as in the zero temperature case.
Further details and an expanded discussion can be found in .
# TeV Observations of Markarian 501 with the Milagrito Water Cherenkov Detector
## 1 Introduction
Very High Energy (VHE) $`\gamma `$-ray astronomy studies the sky at energies above 100 GeV. To date, 4 galactic and 4 extragalactic sources have been identified as VHE sources (see Ong (1998) and Hoffman et al. (1999) for recent reviews). The four extragalactic sources, Markarian 421 (Mrk 421, $`z=0.031`$) (Punch et al. (1992)), Mrk 501 ($`z=0.034`$) (Quinn et al. (1996)), 1ES 2344+514 ($`z=0.044`$) (Catanese et al. (1998)) and PKS 2155-304 ($`z=0.117`$) (Chadwick et al. (1999)) are relatively nearby objects of the BL Lac subclass of active galactic nuclei. A characteristic feature of BL Lac objects is their rapid flux variability at all wavelengths. Flaring activity at TeV energies has been observed both from Mrk 421 and Mrk 501, with variability time scales ranging from minutes (Gaidos et al. (1996)) to months (Protheroe et al. (1997)).
Source detections and analyses at VHE energies are currently dominated by the highly successful atmospheric Cherenkov technique. Cherenkov telescopes are excellent tools for the detailed study of point sources and their sensitivity has been significantly improved over the past few years. Strong flaring activity of Mrk 421 and Mrk 501 can be detected with less than an hour of observation time per night.
To complement the pointed atmospheric Cherenkov telescopes, there is a strong case for wide-aperture instruments monitoring the sky with a high duty cycle and performing an unbiased search for new sources and source classes. The price to pay for overcoming the limitations of atmospheric Cherenkov telescopes is a loss in sensitivity for individual sources. To date, no unambiguous detection of a steady TeV source has been established with an air shower detector.
The Milagro water Cherenkov detector (McCullough et al. (1999)) near Los Alamos, New Mexico, at latitude $`35.9^\mathrm{o}`$ N and longitude $`106.7^\mathrm{o}`$ W, a first-generation all-sky monitor operating with an effective energy threshold below 1 TeV, started data taking in 1999. Milagrito (Atkins et al. (1999)), a smaller, less sensitive prototype of the top layer of Milagro, took data between February 1997 and May 1998. Milagrito was located at the same site and served mainly as a test run for studying specific design questions for the Milagro detector. Nevertheless, Milagrito operated as a fully functioning detector and took data during the strong, long-lasting flare of Mrk 501 in 1997. During this flare, Mrk 501 was intensively studied with several atmospheric Cherenkov telescopes (see Protheroe et al. (1997)). Detailed flux and spectral studies have been published from data taken by the Whipple telescope on Mt. Hopkins (Arizona) between February and June 1997 (Samuelson et al. (1998)), and the HEGRA stereo system of Cherenkov telescopes on La Palma (Canary Islands) between March and October 1997 (Aharonian et al. (1999)). Although they do not cover the same observation times, the average fluxes measured by Whipple and HEGRA agree extremely well in both shape and magnitude, and they both indicate an energy spectrum that deviates significantly from a simple power law. Using an average flux as measured by Whipple,
$$J_\gamma (E)=(8.6\pm 0.3\pm 0.7)\times 10^{-7}\left(\frac{E}{\mathrm{TeV}}\right)^{-2.20\pm 0.04\pm 0.05-(0.45\pm 0.07)\mathrm{log}_{10}(E/\mathrm{TeV})}\mathrm{m}^{-2}\mathrm{s}^{-1}\mathrm{TeV}^{-1},$$
(1)
simulations suggest that the observation of a statistically significant excess from Mrk 501 is within the reach of Milagrito.
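As a rough cross-check, the integral flux implied by Eq. (1) above 1 TeV can be evaluated numerically, using the central values only (a sketch; the grid and the 100 TeV upper cutoff are arbitrary choices):

```python
import numpy as np

def flux(E):
    """Eq. (1) with central values only: m^-2 s^-1 TeV^-1, E in TeV."""
    return 8.6e-7 * E ** (-2.20 - 0.45 * np.log10(E))

E = np.logspace(0.0, 2.0, 4001)                    # 1 to 100 TeV
y = flux(E)
F = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))    # trapezoid rule
print(F)                                           # roughly 6e-7 m^-2 s^-1
```

The curvature term steepens the spectrum with increasing energy, so the integral converges faster than for a pure $`E^{-2.2}`$ power law.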
Observations with atmospheric Cherenkov telescopes do not cover the time between October 1997 and February 1998, when Mrk 501 was visible only during the day time. When atmospheric Cherenkov telescopes resumed observations in 1998, they observed a relatively high flux for a few days at the beginning of March, but the flux quickly decreased and was considerably lower for the rest of 1998 than in 1997 (Quinn et al. (1999)). As an instrument that is insensitive to sunlight, Milagrito continued to monitor Mrk 501 in late 1997 and early 1998. In this Letter, we present the results of an analysis of Milagrito data on Mrk 501.
## 2 The Milagrito Detector
As source spectra tend to be falling power laws, a large detector area is essential for a sufficient rate from point sources in the VHE region. The restriction to earth-bound detectors makes the detection of the primary $`\gamma `$-rays considerably more complicated, as the primary particle generates a cascade of secondary particles in the atmosphere, an “air shower.” Air shower detectors have to reconstruct the properties of the primary $`\gamma `$-ray from the secondary particles reaching the detector level, and any $`\gamma `$-ray signal has to be observed in the presence of a large isotropic background from cosmic rays. To achieve sufficient sensitivity at TeV energies, a high altitude location and the ability to detect a large fraction of particles falling within the detector area are crucial.
Milagrito was a water Cherenkov detector of size $`35\mathrm{m}\times 55\mathrm{m}\times 2\mathrm{m}`$, located at 2650 m above sea level ($`750\mathrm{g}\mathrm{cm}^{-2}`$ atmospheric overburden) in the Jemez Mountains near Los Alamos, New Mexico. The project took advantage of an existing man-made rectangular 21 million liter pond. A layer of 228 submerged photomultiplier tubes on a $`2.8\mathrm{m}\times 2.8\mathrm{m}`$ grid detected the Cherenkov light produced by secondary particles entering the water, allowing the shower direction and thus the direction of the primary particle to be reconstructed. The detector and the detector simulation used to study its sensitivity are described in detail elsewhere (Atkins et al. (1999)).
The water Cherenkov technique uses water both as the detection medium and to transfer the energy of air shower photons to charged particles via pair production or Compton scattering. Consequently, a large fraction of shower particles can be detected, leading to a high sensitivity even for showers with primary energies below 1 TeV.
Milagrito operated with a minimum requirement of 100 hit tubes per event, where the discriminator threshold was set so that a photomultiplier signal of about 0.25 photoelectrons would fire the discriminator. The direction of the shower plane is determined with an iterative least squares ($`\chi ^2`$) fitter using the measured times and positions of the photomultiplier tubes. Only tubes with pulses larger than 2 photoelectrons are used in the fit, and in subsequent iterations, tubes with large contributions to $`\chi ^2`$ are removed. The resulting angular resolution is a strong function of the number of tubes remaining in the final iteration of the fit, $`n_{fit}`$. If no restriction is made on $`n_{fit}`$, Monte Carlo simulations indicate that the median space angle between the fitted and the true shower direction is about $`1.1^\mathrm{o}`$ for a source at the declination of Mrk 501.
The optimal cut on $`n_{fit}`$ and the optimal bin size for a point source search depend upon the observed $`n_{fit}`$ distribution and the angular resolution as a function of $`n_{fit}`$. Since the point spread function of Milagrito is not well characterized by a two-dimensional Gaussian, the standard formulae are inappropriate. To estimate the angular resolution as a function of $`n_{fit}`$, the detector is divided into two independent, interleaved portions (similar to a checkerboard). For each band of $`n_{fit}`$, the distribution of space angle differences between the two portions of the detector is stored. In the absence of systematic effects, these distributions can be interpreted as twice the point spread function of the detector for the given band of $`n_{fit}`$ (Alexandreas et al. (1992)). Under the assumption that the point spread function for $`\gamma `$-ray showers is identical to that of hadron-induced air showers, one can use the above distributions to determine the optimal cut on $`n_{fit}`$ and the optimal size of the angular bin. Figure 1 shows the expected significance of a source as a function of angular bin size for three different cuts on $`n_{fit}`$. The analysis indicates that requiring $`n_{fit}>40`$ with a bin size of radius $`1.0^\mathrm{o}`$, which on average contains $`57\%`$ of the source events, is optimal for a binned analysis. As shown in Figure 1, for a rather wide range of cuts, the significance of an excess depends only weakly on the chosen source bin size.
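The split-detector argument can be illustrated with a small-angle Gaussian toy model: the split-half space-angle distribution is roughly twice as wide as the point spread function of the combined fit (a sketch; the per-half resolution is an arbitrary input):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_half = 1.0                  # per-half angular error in degrees (illustrative)
n = 200_000

d1 = rng.normal(0.0, sigma_half, (n, 2))       # sub-detector 1 direction errors
d2 = rng.normal(0.0, sigma_half, (n, 2))       # sub-detector 2 direction errors

delta = np.linalg.norm(d1 - d2, axis=1)        # split-half space angle
full = np.linalg.norm((d1 + d2) / 2, axis=1)   # combined-detector error

print(np.median(delta) / np.median(full))      # ~2: the difference distribution
                                               # is twice the point spread function
```

The difference of two independent estimates is wider by $\sqrt{2}$, while averaging them narrows the error by $\sqrt{2}$, which combines to the factor of two quoted above.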
As the detector is much smaller than the typical lateral size of a shower, the shower core, i.e. the point where the primary particle would have struck the detector had there been no atmosphere, is outside the sensitive detector area for a large fraction of showers fulfilling the trigger condition. Assuming a differential flux following $`E^{-2.8}`$ for the proton background and $`E^{-2.5}`$ for a typical $`\gamma `$-source, $`16\%`$ of the proton showers and $`21\%`$ of the $`\gamma `$-showers triggering the Milagrito detector have their cores on the pond. This leads to a broad distribution of detected events with no well defined threshold energy. Monte Carlo simulations using the Mrk 501 spectrum given in Equation 1 predict a distribution starting at energies as low as 100 GeV, with $`90\%`$ of the detected events having an energy in excess of 0.8 TeV. The median energy of detected showers depends on the declination $`\delta `$ and the spectral index of the source, and typical values are 3 TeV for $`\delta =39.8^\mathrm{o}`$ (Mrk 501) and 7 TeV for $`\delta =22.0^\mathrm{o}`$ (Crab nebula, assuming an $`E^{-2.5}`$ spectrum).
Detector performance is best evaluated by observations of well-known sources. The standard candle of VHE astronomy is the Crab nebula. Simulations indicate that the expected statistical significance of the excess above background from the Crab nebula in Milagrito is too small to be used for testing Milagrito’s performance, and indeed no significant excess from this source was observed. However, the large average flux of Mrk 501 during its flaring state in 1997 results in an expected event rate from Mrk 501 of 3.6 times the Crab rate for Milagrito. Mrk 501 can therefore be used to measure the sensitivity of Milagrito and to test the reliability of the detector simulation.
## 3 Results
Milagrito took data on Mrk 501 from February 8, 1997 to May 7, 1998. The effective exposure time was about 370 days, with most of the downtime being due to power outages, detector maintenance, and upgrades. Milagrito started operation with about 0.9 m of water above the tubes. The water level was increased starting in November 1997 to study how the sensitivity changes with water depth. The trigger rate was about 300 Hz with 0.9 m of water and increased to 340 Hz (400 Hz) at a depth of 1.5 m (2.0 m).
The measured rate of $`2420\pm 80`$ reconstructed events per day for 0.9 m water depth in a typical bin with $`1.0^\mathrm{o}`$ radius at the same declination as Mrk 501 is in good agreement with the predicted rate of $`2460_{-90}^{+160}`$ events per day from protons, Helium, and CNO nuclei (the error accounts for the uncertainty in the measured flux). The contributions from He and CNO to this predicted rate are $`27\%`$ and $`4\%`$, respectively.
The isotropic cosmic ray background flux exceeds the $`\gamma `$-signal from Mrk 501 by several orders of magnitude. The expected background flux in the source bin must be subtracted from the measured one in order to obtain the number of excess events from the source. Since the background in the source bin depends on its exposure and the detector efficiency in local angular coordinates, the background is calculated directly from the data (Alexandreas et al. (1993)). For each detected event, “fake” events are generated by keeping the local zenith and azimuth angles ($`\theta ,\varphi `$) fixed and calculating new values for right ascension using the times of 30 events randomly selected from a buffer that spans about 2 hours of data taking. The background level is then calculated from the number of fake events falling into the source bin. By using at least 10 fake events per real event, the statistical error on the background can be kept sufficiently small.
Figure 2 shows the significance of the observed signal as a function of right ascension and declination in a $`6^\mathrm{o}\times 6^\mathrm{o}`$ region with the Mrk 501 position in the center. For each bin, the significance is calculated for the area of the circle with radius $`1.0^\mathrm{o}`$ and the bin center as the central point, hence neighboring bins are highly correlated.
At the source position, 918 954 events are observed with an average expected background of $`\mathrm{915\hspace{0.17em}330}\pm 250`$ events. The excess of $`3624\pm 990`$ events corresponds to a significance of 3.7 sigma. We interpret this result as a reconfirmation of Mrk 501 as a TeV $`\gamma `$-ray source during this period. The corresponding excess rate averaged over the lifetime of Milagrito is $`(9.8\pm 2.7)\mathrm{day}^1`$. The excess rate measured between February and October 1997 can be directly compared to the $`\gamma `$-rate expected using the average flux measured by atmospheric Cherenkov telescopes during this period. Using the flux given in Equation 1, Monte Carlo simulations of a full source transit predict a $`\gamma `$-rate of $`(12.5\pm 3.8)\mathrm{day}^1`$, which is in good agreement with the measured rate during this period of $`(13.1\pm 4.0)\mathrm{day}^1`$.
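A quick consistency check of the quoted numbers, using simple Gaussian error propagation (an illustration only, not the collaboration's exact error treatment):

```python
import math

n_on = 918954        # events observed in the source bin
n_bg = 915330        # expected background in the bin
bg_err = 250         # uncertainty on the background estimate

excess = n_on - n_bg
# Gaussian error: Poisson fluctuation of the counts plus the background error
err = math.sqrt(n_on + bg_err**2)
significance = excess / err
```

This matches the quoted $`3624\pm 990`$ excess events and the 3.7 standard deviation significance to within rounding.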
Figure 3 shows excess divided by background for the lifetime of Milagrito. At Milagrito’s level of sensitivity, the flux is consistent with being constant in time.
The analysis was extended to 10 other nearby blazars ($`z<0.06`$) in Milagrito’s field of view, including Mrk 421, but Mrk 501 remains the only analyzed source with a significance in excess of 3 $`\sigma `$. Results from this blazar sample are reported elsewhere (Westerhoff et al. (1998)).
## 4 Conclusions and Outlook
Milagrito, the first TeV air shower detector based on the water Cherenkov technique, observed an excess with a statistical significance of $`3.7\sigma `$ from the direction of Mrk 501 between February 1997 and May 1998. The excess is in agreement with expectations based on simulations and indicates that the technique is working as anticipated.
Milagrito served as a prototype for the full Milagro detector. In its final stage, Milagro has a size of $`60\mathrm{m}\times 80\mathrm{m}\times 8\mathrm{m}`$ and two layers of photomultiplier tubes, an upper layer with 450 tubes at a depth of 1.5 m, and an additional layer with 273 tubes at a depth of 6.2 m. With its larger effective area and the ability to reject some of the cosmic ray background, Milagro will be at least 5 times as sensitive as Milagrito. Data taking began in early 1999.
This research was supported in part by the National Science Foundation, the U.S. Department of Energy Office of High Energy Physics, the U.S. Department of Energy Office of Nuclear Physics, Los Alamos National Laboratory, the University of California, the Institute of Geophysics and Planetary Physics, The Research Corporation, and the California Space Institute.
# General equation for Zeno-like effects in spontaneous exponential decay
## 1 Introduction
The term “quantum Zeno paradox” was introduced in . It was argued there that an unstable particle which is continuously observed to see whether it decays will never be found to decay. Analogous ideas were also discussed in some earlier works (for a review see ). Continuous observations (or measurements) of a system were described phenomenologically in , as a sequence of very frequent instantaneous collapses of the system’s wave function. Later the idea of the quantum Zeno paradox was developed in two main directions. First, the idea was applied to forced transitions of Rabi type between discrete levels and was experimentally confirmed in this form . As a result, the quantum Zeno paradox is now regarded as a real phenomenon (the quantum Zeno effect, QZE), not as a paradox. Second, it was recognized that the phenomenological description of the QZE (using the projection postulate and wave-function collapses) is not necessary. Dynamical treatments of the QZE were presented and it was shown that the main features of the QZE found earlier may be reproduced. It was also argued that a dynamical treatment of the QZE could reveal some essentially new features of this phenomenon .
The situation of continuous observation of spontaneous exponential decay is especially interesting. It was argued that it is impossible to observe the QZE in spontaneous decay . But using a dynamical approach to the QZE, it was speculated that perturbations of the decay rate could take place in principle and, moreover, could be strong . The mechanisms of decay-rate perturbation considered in these three papers seem quite different: one analyzed the inelastic interaction of emitted particles with a particle detector, another studied the forced electromagnetic transition from the final discrete state of the decay to another (third) discrete state, and a third considered the spontaneous decay of the final discrete state to another discrete state. Three different formulae describing the QZE were deduced for these three mechanisms. But these processes have common features. In all cases the measurement mechanism has no direct influence on the initial state of the system (the undecayed particle). But as soon as the system reaches the final state of the decay, it starts to interact with some outer systems (devices or fields), which rapidly leads to destruction of the final state of the decay. Here the final state of the decay means the joint state of the decaying system and the outer systems; consequently, both a change of the states of the outer systems and a change of the state of the decaying system after the decay amount to a destruction of the final state. In the present paper we show that all three mechanisms of decay-rate perturbation described above are special cases of a general mechanism connected with destruction of the final decay state, and we deduce a general formula for it. The formulae mentioned above then turn out to be special cases of our new general formula.
## 2 General formula for Zeno-like effects
Let us consider a quantum system $`S`$ which is in the initial state $`|\mathrm{\Psi }_0`$ at the time $`t=0`$. The undisturbed Hamiltonian of the system $`S`$ is $`H_0`$, with $`H_0|\mathrm{\Psi }_0=\mathcal{E}_0|\mathrm{\Psi }_0`$, where $`\mathcal{E}_0`$ is the initial eigenenergy. Under the influence of a perturbation $`V`$, the system goes over from the initial discrete state $`|\mathrm{\Psi }_0`$ to a continuum of states $`\{|\xi \}`$ which is orthogonal to $`|\mathrm{\Psi }_0`$. We consider that
$$V=\underset{\xi }{}\left(|\xi \mathrm{\Psi }_0|v(\xi )+|\mathrm{\Psi }_0\xi |v^{}(\xi )\right),$$
where $`v(\xi )=\xi |V|\mathrm{\Psi }_0`$ is the matrix element for the transition. We can suppose without loss of generality that $`\mathrm{\Psi }_0|V|\mathrm{\Psi }_0=0`$. We suppose also that the perturbation $`V`$ is time-independent and small, so that the transition $`|\mathrm{\Psi }_0\to \{|\xi \}`$ may be approximated by a spontaneous exponential decay for sufficiently large times.
Let us suppose that in addition to the small perturbation $`V`$ there exists another interaction Hamiltonian $`W(t)`$, which is not small and depends on time. Thus, the full Hamiltonian of the system $`S`$ is $`H=H_0+V+W(t)`$. The interaction $`W(t)`$ has the feature
$$W(t)|\mathrm{\Psi }_0=0,$$
i. e. it does not influence the initial state of the system. But $`W(t)`$ may cause a transition from the subspace $`\{|\xi \}`$ to another subspace $`\{|\eta \}`$ which is orthogonal to both $`|\mathrm{\Psi }_0`$ and $`\{|\xi \}`$. Let $`\mathrm{\Gamma }`$ be the decay constant of the state $`|\mathrm{\Psi }_0`$. So, what is the perturbed value of $`\mathrm{\Gamma }`$ if we take into account the interaction $`W(t)`$?
We find the no-decay amplitude
$$F(t)=\mathrm{\Psi }_0|\mathrm{\Psi }(t)e^{i\mathcal{E}_0t}(\hbar =1).$$
(1)
To find it we solve the Schrödinger equation
$$|\dot{\mathrm{\Psi }}(t)=-i(H_0+V+W(t))|\mathrm{\Psi }(t);|\mathrm{\Psi }(0)=|\mathrm{\Psi }_0.$$
(2)
We cannot apply perturbation theory to $`W(t)`$, because this interaction is not small, but we can do so with respect to $`V`$. First let us consider the Schrödinger equation for the Hamiltonian without the interaction $`V`$:
$$|\dot{\mathrm{\Psi }}(t)=-i(H_0+W(t))|\mathrm{\Psi }(t)$$
(3)
and let the solution of eq. (3) be
$$|\mathrm{\Psi }(t)=U(t,t_1)|\mathrm{\Psi }(t_1).$$
Let us introduce the interaction picture as
$$|\mathrm{\Psi }_I(t)=U^+(t,0)|\mathrm{\Psi }(t).$$
Then eq. (2) may be rewritten as
$$|\dot{\mathrm{\Psi }}_I(t)=-iV_I(t)|\mathrm{\Psi }_I(t),|\mathrm{\Psi }_I(0)=|\mathrm{\Psi }_0$$
(4)
where $`V_I(t)`$ is the potential $`V`$ in the interaction picture:
$$V_I(t)=U^+(t,0)VU(t,0).$$
In the second order of perturbation theory we easily find from eq. (4) an equation for the derivative of the no-decay amplitude:
$$\frac{dF}{dt}=-\int _0^t𝑑t_1\mathrm{\Psi }_0|V_I(t)V_I(t_1)|\mathrm{\Psi }_0.$$
(5)
Let $`\omega _\xi `$ be the eigenenergy of the state $`|\xi `$: $`H_0|\xi =\omega _\xi |\xi `$. It is not difficult to show that the matrix element under the integral in eq. (5) may be rewritten as
$$\mathrm{\Psi }_0|V_I(t)V_I(t_1)|\mathrm{\Psi }_0=\underset{\xi }{\sum }e^{i(\mathcal{E}_0-\omega _\xi )(t-t_1)}|v(\xi )|^2D(t,t_1),$$
(6)
where we have introduced the dissipation function:
$$D(t,t_1)=\frac{\mathrm{\Psi }_0|VU(t,t_1)V|\mathrm{\Psi }_0}{\mathrm{\Psi }_0|V\mathrm{exp}[-iH_0(t-t_1)]V|\mathrm{\Psi }_0}$$
(7)
The dissipation function describes the dissipation of the final decay states caused by the interaction $`W(t)`$. It is easy to see that if $`W(t)=0`$ then $`D(t,t_1)\equiv 1`$. Let the index $`\xi `$ of the state be the set of the eigenenergy $`\omega `$ of the state and some other quantum numbers $`\alpha `$: $`|\xi =|\omega ,\alpha `$. Let us introduce the function $`M(\omega )`$ as follows:
$$M(\omega )=\underset{\alpha }{\sum }|v(\omega ,\alpha )|^2$$
and then change the sum $`\underset{\omega }{\sum }`$ to the integral $`\int 𝑑\omega `$. Then eq. (6) becomes
$$\mathrm{\Psi }_0|V_I(t)V_I(t_1)|\mathrm{\Psi }_0=\int 𝑑\omega M(\omega )e^{i(\mathcal{E}_0-\omega )(t-t_1)}D(t,t_1).$$
(8)
For sufficiently large (but not very large) times $`F(t)=\mathrm{exp}(-\gamma t)\approx 1-\gamma t`$, consequently $`\gamma =-dF/dt`$. To obtain the decay constant of the state $`|\mathrm{\Psi }_0`$, $`\mathrm{\Gamma }=2\mathrm{Re}\gamma `$, we substitute eq. (8) in eq. (5) and formally let $`t`$ tend to infinity, supposing that this limit exists. We deduce
$$\mathrm{\Gamma }=2\pi \int 𝑑\omega M(\omega )\mathrm{\Delta }(\omega -\mathcal{E}_0)$$
(9)
where the function $`\mathrm{\Delta }(ϵ)`$ is defined as
$$\mathrm{\Delta }(ϵ)=\frac{1}{\pi }\mathrm{Re}\underset{t\to \infty }{lim}\int _0^t𝑑t_1e^{-iϵ(t-t_1)}D(t,t_1).$$
(10)
If $`W(t)=0`$, it is easy to see that $`\mathrm{\Delta }(ϵ)=\delta (ϵ)`$, where $`\delta (ϵ)`$ is the usual Dirac delta-function. Then eq. (9) reduces to $`\mathrm{\Gamma }_0=2\pi M(\mathcal{E}_0)`$, i. e. to the usual Fermi Golden Rule, as one could expect. Thus, eq. (9) is a generalization of the usual Golden Rule to the case of unstable final states of decay. The main difference between the usual Golden Rule and eq. (9) is that in the first case $`\mathrm{\Gamma }_0`$ is expressed through a single value of the function $`M(\omega )`$, while in the second case $`\mathrm{\Gamma }`$ is expressed through the convolution of $`M(\omega )`$ with the spread-out function $`\mathrm{\Delta }(\omega -\mathcal{E}_0)`$. Now we use eq. (9) to study some particular systems.
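The $`W(t)=0`$ limit can be made explicit numerically. At finite $`t`$, eq. (10) with $`D\equiv 1`$ gives $`\mathrm{sin}(ϵt)/\pi ϵ`$, a nascent delta-function: it integrates to unity and concentrates at $`ϵ=0`$ as $`t`$ grows. The following sketch (with arbitrary numerical ranges) checks both properties:

```python
import math

def delta_t(eps, t):
    """Finite-time Δ of eq. (10) for D ≡ 1 (the W = 0 case):
    (1/π) Re ∫_0^t dτ e^{-iετ} = sin(εt)/(πε), a nascent delta-function."""
    if eps == 0.0:
        return t / math.pi
    return math.sin(eps * t) / (math.pi * eps)

def integral(t, lo, hi, n):
    """Midpoint-rule integral of delta_t over [lo, hi]."""
    h = (hi - lo) / n
    return sum(delta_t(lo + (i + 0.5) * h, t) for i in range(n)) * h
```

As $`t`$ increases, more and more of the unit weight sits in an ever narrower window around $`ϵ=0`$, which is exactly the delta-function limit quoted above.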
## 3 Detection of emitted particles
Let us suppose that some system $`X`$ (for example, an atom) goes over spontaneously from the initial excited state $`|x_0`$ to the ground state $`|x_1`$ emitting some particle $`Y`$ (a photon or an electron). We consider this particle as a separate quantum system, which is initially in the ground (vacuum) state $`|y_0`$ and then goes over to the continuum $`|\omega ^Y,\alpha ^Y`$, where $`\omega ^Y`$ is the energy of the state and $`\alpha ^Y`$ represents all other quantum numbers. Particle $`Y`$ scatters inelastically off a third system $`Z`$ (a surrounding medium or a particle detector) due to a time-independent interaction $`W`$. As a result, system $`Z`$ goes over from the initial ground state $`|z_0`$ to the continuum $`|\zeta `$, and this process may be considered as a registration of the decay. We suppose that the interaction $`V`$ does not act on system $`Z`$ and that the interaction $`W`$ does not act on system $`X`$. This is a special case of the situation described in Section 2 and we can write:
$`S`$ $`=`$ $`XYZ`$ (11)
$`H_0`$ $`=`$ $`H_0^XI_YI_Z+I_XH_0^YI_Z+I_XI_YH_0^Z`$ (12)
$`|\mathrm{\Psi }_0`$ $`=`$ $`|x_0|y_0|z_0\equiv |x_0y_0z_0`$ (13)
$`\mathcal{E}_0`$ $`=`$ $`\omega _0^X+\omega _0^Y+\omega _0^Z`$ (14)
$`V`$ $`=`$ $`V_{XY}I_Z;W=I_XW_{YZ}`$
$`\{|\xi \}`$ $`=`$ $`\{|x_1|\omega ^Y,\alpha ^Y|z_0\};\{|\eta \}=\{|x_1|\omega ^Y,\alpha ^Y|\zeta \}`$
$`\omega `$ $`=`$ $`\omega _1^X+\omega ^Y+\omega _0^Z`$ (15)
$`U(t,t_1)`$ $`=`$ $`\mathrm{exp}[-i(H_0+W)(t-t_1)]`$ (16)
where the notation is obvious. Using the notations
$`H_0^{YZ}=H_0^YI_Z+I_YH_0^Z,`$
$`|\stackrel{~}{y}={\displaystyle \int 𝑑\omega ^Y\underset{\alpha ^Y}{\sum }v(\omega ^Y,\alpha ^Y)|\omega ^Y,\alpha ^Y}`$
one can obtain the expression for the dissipation function from eq. (7):
$$D(t,t_1)\equiv D_s(t-t_1)=\frac{\stackrel{~}{y}z_0|e^{-i(H_0^{YZ}+W_{YZ})(t-t_1)}|\stackrel{~}{y}z_0}{\stackrel{~}{y}z_0|e^{-iH_0^{YZ}(t-t_1)}|\stackrel{~}{y}z_0}$$
(17)
and the expression for the function $`\mathrm{\Delta }_s(ϵ)`$ from eq. (10):
$$\mathrm{\Delta }_s(ϵ)=\frac{1}{\pi }\mathrm{Re}\int _0^{\infty }D_s(\tau )e^{-iϵ\tau }𝑑\tau =\frac{1}{2\pi }\int _{-\infty }^{+\infty }D_s(\tau )e^{-iϵ\tau }𝑑\tau .$$
(18)
The subscript $`s`$ is an abbreviation of “scattering”. As one can see from eq. (15), $`M(\omega )`$ in eq. (9) depends on $`\omega ^Y`$ only. Thus, we can rewrite eq. (9) as
$$\mathrm{\Gamma }=2\pi \int 𝑑\omega ^YM(\omega ^Y)\mathrm{\Delta }_s(\omega ^Y-\omega _f^Y)$$
(19)
where $`\omega _f^Y=\omega _0^Y+\omega _0^X-\omega _1^X=\omega _0^Y+\omega _{01}`$ is the expected value of the final energy of particle $`Y`$ in accordance with the energy conservation law. The formulae (19), (18), (17) give the perturbed value of the decay rate for the problem considered and coincide with the formulae (26), (27) and (18) of for the same case. Further analysis of these formulae can be found in .
## 4 Decay onto unstable level
Let us consider a three-level system $`X`$ which makes a cascade transition from the initial state $`|x_0`$ to the state $`|x_1`$ and then to the state $`|x_2`$ with the emission of two particles $`Y`$ and $`Z`$, respectively. What is the influence of the instability of the state $`|x_1`$ on the decay constant of the state $`|x_0`$? Again, this is a particular case of the general situation described in Section 2. We consider the particles $`Y`$ and $`Z`$ as separate systems which are in the initial vacuum states $`|y_0`$ and $`|z_0`$ at the moment $`t=0`$ and then go over to the continua $`\{|\omega ^Y,\alpha ^Y\}`$ and $`\{|\omega ^Z,\alpha ^Z\}`$, respectively. The transition $`|x_0\to |x_1`$ is caused by the interaction $`V=V_{XY}I_Z`$, and the transition $`|x_1\to |x_2`$ is caused by the interaction $`W=W_{XZ}I_Y`$, where
$`V_{XY}`$ $`=`$ $`{\displaystyle \int 𝑑\omega ^Y\underset{\alpha ^Y}{\sum }|x_1|\omega ^Y,\alpha ^Yx_0|y_0|v(\omega ^Y,\alpha ^Y)}+\mathrm{h}.\mathrm{c}.`$ (20)
$`W_{XZ}`$ $`=`$ $`{\displaystyle \int 𝑑\omega ^Z\underset{\alpha ^Z}{\sum }|x_2|\omega ^Z,\alpha ^Zx_1|z_0|w(\omega ^Z,\alpha ^Z)}+\mathrm{h}.\mathrm{c}.`$ (21)
It is seen from eqs. (20), (21) that $`V`$ causes the transition $`|x_0\to |x_1`$ only and $`W`$ causes the transition $`|x_1\to |x_2`$ only. This process is characterized by the relations
$$\{|\xi \}=\{|x_1|\omega ^Y,\alpha ^Y|z_0\};\{|\eta \}=\{|x_2|\omega ^Y,\alpha ^Y|\omega ^Z,\alpha ^Z\}$$
and by a number of relations, coinciding with eqs. (11, 12, 13, 14, 15, 16).
It is not difficult to show that the dissipation function eq. (7) now has the form
$$D(t,t_1)\equiv D_u(t-t_1)=x_1z_0|e^{-i(H_0^{XZ}+W_{XZ})(t-t_1)}|x_1z_0e^{i\mathcal{E}_0^{XZ}(t-t_1)},$$
(22)
where $`H_0^{XZ}=H_0^XI_Z+I_XH_0^Z`$ and $`\mathcal{E}_0^{XZ}=\omega _1^X+\omega _0^Z`$. The subscript $`u`$ is an abbreviation of “unstable”. We see from eq. (22) that the dissipation function now has a simple physical meaning: it is nothing more than the no-decay amplitude of the state $`|x_1`$ with respect to decay under the influence of the interaction $`W`$. Thus, we can use the approximation
$$D_u(\tau )=e^{-\lambda \tau },$$
(23)
where $`\lambda `$ is the complex decay constant of the level $`|x_1`$. Let $`\lambda =\lambda _r-i\lambda _i`$, where $`\lambda _r`$ and $`\lambda _i`$ are real numbers. Then we obtain from eq. (23) and eq. (10)
$$\mathrm{\Delta }_u(ϵ)=\frac{1}{\pi }\frac{\lambda _r}{\lambda _r^2+(ϵ\lambda _i)^2}$$
(24)
and we obtain from eq. (9) and eq. (24)
$$\mathrm{\Gamma }=2\pi \int 𝑑\omega ^YM(\omega ^Y)\frac{1}{\pi }\frac{\lambda _r}{\lambda _r^2+\left[\omega ^Y-\left(\omega _f^Y+\lambda _i\right)\right]^2}.$$
(25)
with the same notations as in eq. (19). For the special case $`\lambda _i=0`$, $`\lambda _r\ll \omega _f^Y`$, $`M(\omega ^Y)=\mathrm{const}`$ for $`\omega ^Y>0`$ we obtain a formula similar to eq. (20) in .
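Eq. (25) is easy to explore numerically. The sketch below convolves the Lorentzian with an illustrative form factor $`M(\omega )=\omega ^3e^{-\omega /50}`$ — the cubic growth mimics dipole photon emission, and the exponential cutoff is an extra assumption added only to make the integral converge; neither choice comes from the paper:

```python
import math

def gamma_ratio(lam_r, omega_f, lam_i=0.0, cut=50.0, hi=500.0, n=400000):
    """Γ/Γ0 from eq. (25) for M(ω) = ω³ exp(-ω/cut), ω > 0, evaluated
    by the midpoint rule; Γ0 = 2π M(ω_f) is the unperturbed Golden Rule
    value, so the return value is the ratio of decay constants."""
    M = lambda w: w**3 * math.exp(-w / cut)
    h = hi / n
    s = 0.0
    for i in range(n):
        w = (i + 0.5) * h
        s += M(w) * lam_r / (math.pi * (lam_r**2 + (w - omega_f - lam_i)**2))
    return s * h / M(omega_f)
```

A narrow final level (small $`\lambda _r`$) reproduces the Golden Rule, a very broad one freezes the decay in the Zeno manner, while intermediate widths can even enhance $`\mathrm{\Gamma }`$ — in line with the discussion in Section 6.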
## 5 Rabi transition from final state of decay
Now we analyze the last particular case of the general problem of Section 2. The situation is similar to that described in Section 4, but the instability of the state $`|x_1`$ is caused by a forced resonant Rabi transition to another state $`|x_2`$. We describe the Rabi transition semiclassically by the time-dependent interaction
$$W_X(t)=\mathrm{\Omega }(|x_1x_2|+|x_2x_1|)\mathrm{cos}\omega _{21}t$$
where $`\omega _{21}=\omega _2^X-\omega _1^X`$ and $`\mathrm{\Omega }`$ is the Rabi frequency. The spontaneous transition $`|x_0\to |x_1`$ is described in the same manner as in the previous sections. We have
$`S`$ $`=`$ $`XY`$
$`H`$ $`=`$ $`H_0^XI_Y+I_XH_0^Y+V_{XY}+W_X(t)I_Y`$
$`|\mathrm{\Psi }_0`$ $`=`$ $`|x_0y_0;\mathcal{E}_0=\omega _0^X+\omega _0^Y`$
$`\{|\xi \}`$ $`=`$ $`\{|x_1|\omega ^Y,\alpha ^Y\};\{|\eta \}=\{|x_2|\omega ^Y,\alpha ^Y\}`$
$`\omega `$ $`=`$ $`\omega _1^X+\omega ^Y.`$
Based on eq. (7), it’s not difficult to show that the dissipation function is
$$D(t,t_1)=x_1|x(t)e^{i\omega _1^X(t-t_1)}$$
(26)
where $`|x(t)`$ is the solution of equation
$$|\dot{x}(t)=-i\left[H_0^X+W_X(t)\right]|x(t),|x(t_1)=|x_1.$$
(27)
Using the rotating wave approximation we find from eq. (27)
$$x_1|x(t)=\mathrm{cos}\frac{\mathrm{\Omega }}{2}(t-t_1)e^{-i\omega _1^X(t-t_1)}.$$
(28)
Substituting the scalar product from eq. (28) in eq. (26) we obtain
$$D(t,t_1)\equiv D_R(t-t_1)=\mathrm{cos}\frac{\mathrm{\Omega }}{2}(t-t_1).$$
(29)
The subscript $`R`$ is an abbreviation of “Rabi”. Using eq. (29) and eq. (10) we find
$$\mathrm{\Delta }_R(ϵ)=\frac{1}{2}\left[\delta \left(ϵ-\frac{\mathrm{\Omega }}{2}\right)+\delta \left(ϵ+\frac{\mathrm{\Omega }}{2}\right)\right]$$
and then we obtain from eq. (9)
$$\mathrm{\Gamma }=\pi \left[M\left(\omega _f^Y-\frac{\mathrm{\Omega }}{2}\right)+M\left(\omega _f^Y+\frac{\mathrm{\Omega }}{2}\right)\right].$$
(30)
The resultant eq. (30) coincides with the conclusion of (see eq. (2.31) there). But the method by which that conclusion was obtained differs significantly from ours. The forced transition from the level $`|x_1`$ (in the terms of the present paper) to the level $`|x_2`$ was described there by a fully quantum method, using a quantized electromagnetic field rather than a semiclassical potential. Instead of the value $`\mathrm{\Omega }/2`$ in our eq. (30), the quantity $`B`$ arises in eq. (2.31):
$$B=|\mathrm{\Phi }_0|\sqrt{N_0}$$
where $`\mathrm{\Phi }_0`$ is the transition matrix element and $`N_0`$ is the number of field quanta in resonance with the $`|x_1\to |x_2`$ transition. But it is not difficult to show that the value $`B`$ is precisely half the Rabi frequency. Thus, the fully quantum and semiclassical methods produce the same result. One can note that the fully quantum description of the Rabi transition can be treated within the general formalism of Section 2 as well.
## 6 Discussion
It is easily seen from the formulae (19), (25), (30) that, in the formal limit of very fast dissipation of the final state of decay caused by interaction with the environment, the spontaneous exponential decay is frozen. Indeed, the function $`M(\omega )`$ has only a finite width, while very fast dissipation of the final state means that the function $`\mathrm{\Delta }(ϵ)`$ becomes very wide. But $`\int \mathrm{\Delta }(ϵ)𝑑ϵ=1`$ in all cases. Thus, the integral on the right-hand side of eqs. (19), (25), (30) tends to zero as the width of the function $`\mathrm{\Delta }(ϵ)`$ tends to infinity, and consequently $`\mathrm{\Gamma }\to 0`$. This is an expression of the Zeno paradox in the dynamical treatment (without using the projection postulate). But dissipation of the final states of decay does not necessarily decrease the decay constant. When the dissipation is not very strong, the behavior of the decay constant may be rather complex and depends on fine features of the function $`M(\omega )`$. For example, if we consider the realistic case $`\mathrm{\Omega }\ll \omega _{01}`$ for a Rabi transition from the final state of a usual electromagnetic transition, the ratio of the disturbed to the undisturbed decay constant is
$$\frac{\mathrm{\Gamma }}{\mathrm{\Gamma }_0}=1+\frac{3}{4}\frac{\mathrm{\Omega }^2}{\omega _{01}^2}$$
i. e. $`\mathrm{\Gamma }>\mathrm{\Gamma }_0`$. Only in the case of very fast dissipation of the final state does the decay constant start to decrease.
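This ratio follows from eq. (30) once one takes $`M(\omega )\propto \omega ^3`$, as for one-photon dipole emission (an inference; the step is not spelled out above). A short check:

```python
def rabi_ratio(omega01, big_omega):
    """Γ/Γ0 from eq. (30) with M(ω) ∝ ω³ (one-photon dipole emission):
    the two shifted lines at ω01 ± Ω/2 replace the single line at ω01."""
    M = lambda w: w**3
    return (M(omega01 - big_omega / 2.0) + M(omega01 + big_omega / 2.0)) / (2.0 * M(omega01))
```

Expanding the cubes shows $`(\omega -\mathrm{\Omega }/2)^3+(\omega +\mathrm{\Omega }/2)^3=2\omega ^3(1+\frac{3}{4}\mathrm{\Omega }^2/\omega ^2)`$, so for this form factor the relation holds exactly, not only for $`\mathrm{\Omega }\ll \omega _{01}`$.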
Let us note that the decay-constant perturbation by a Rabi transition from the final decay state (Section 5) does not seem to be a usual QZE, because there are no irreversible changes in the environment following such a transition and consequently no event of measurement. This is clearly seen from the semiclassical picture of the phenomenon. But this phenomenon is closely related to the QZE and we can consider it a Zeno-like effect. Consequently, our main formula (9) describes not only the Zeno effect itself but a wide class of Zeno-like effects. There are also other phenomena which can be described within the general theory of Section 2 but have been excluded from our consideration: spontaneous oscillations in the final states of decay or combinations of the different mechanisms considered above are examples of such phenomena.
ACKNOWLEDGMENTS
The author acknowledges the fruitful discussions with M. B. Mensky. The work was supported in part by the Russian Foundation of Basic Research, grant 98-01-00161.
## 1. Introduction
The basic idea of the Extended Chiral Quark Model (ECQM) consists in using the degrees of freedom which are relevant at each energy scale. It is built in terms of colored current quark fields $`\overline{q}_i(x),q_i(x)`$ with momenta restricted to be below the CSB scale $`\mathrm{\Lambda }_{CSB}\simeq 1.3`$ GeV, and colorless chiral fields $`U(x)=\mathrm{exp}\left(i\pi (x)/F_0\right)`$ which are $`SU(N_F)`$ matrices (herein $`N_F=2`$) and which appear below $`\mathrm{\Lambda }_{CSB}`$. The quarks are endowed with a ‘constituent’ mass $`M_0`$ multiplied by the chiral field $`U(x)`$ without manifestly breaking chiral symmetry. The information on modes with momenta larger than $`\mathrm{\Lambda }_{CSB}`$ as well as the effect of residual gluon interactions is contained in the coefficients of the effective lagrangian. The ECQM truncation of the QCD effective action happens to be an extension of both the chiral quark model and the Nambu–Jona-Lasinio one.
The external sources are included in the QCD quark lagrangian in order to compute the correlators of the corresponding quark currents
$$\widehat{D}\equiv i\gamma _\mu (\partial _\mu +\overline{V}_\mu +\gamma _5\overline{A}_\mu )+i(\overline{S}+i\gamma _5\overline{P}),$$
(1)
where $`\overline{S}=m_q`$, the matrix of current quark masses.
The low-energy effective lagrangian $`\mathcal{L}_{ECQM}`$ is built to be invariant under left and right $`SU(2)`$ rotations of the quark, chiral and external fields.
It is convenient to introduce the ‘rotated’, ‘dressed’ or ‘constituent’ quark fields
$$Q_L\equiv \xi q_L,Q_R\equiv \xi ^{\dagger }q_R,\xi ^2\equiv U,$$
(2)
which transform nonlinearly under $`SU_L(2)\otimes SU_R(2)`$ but identically for left and right quark components.
Changing to the ‘dressed’ basis implies the following replacements in the external vector, axial, scalar and pseudoscalar sources
$`\overline{V}_\mu \to v_\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\xi ^{\dagger }\partial _\mu \xi -\partial _\mu \xi \xi ^{\dagger }+\xi ^{\dagger }\overline{V}_\mu \xi +\xi \overline{V}_\mu \xi ^{\dagger }-\xi ^{\dagger }\overline{A}_\mu \xi +\xi \overline{A}_\mu \xi ^{\dagger }\right),`$
$`\overline{A}_\mu \to a_\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\xi ^{\dagger }\partial _\mu \xi -\partial _\mu \xi \xi ^{\dagger }-\xi ^{\dagger }\overline{V}_\mu \xi +\xi \overline{V}_\mu \xi ^{\dagger }+\xi ^{\dagger }\overline{A}_\mu \xi +\xi \overline{A}_\mu \xi ^{\dagger }\right),`$
$`\overline{\mathcal{M}}\to \mathcal{M}`$ $`=`$ $`\xi ^{\dagger }\overline{\mathcal{M}}\xi ^{\dagger }.`$ (3)
In these variables the relevant part of the ECQM action can be represented as
$$\mathcal{L}_{ECQM}=\mathcal{L}_{ch}+\mathcal{L}_m+\mathcal{L}_{vec},$$
(4)
where $`\mathcal{L}_{ch}`$ accumulates the interaction of chiral fields and quarks in the chiral limit in the presence of vector and axial-vector external fields, $`\mathcal{L}_m`$ extends the description to external scalar and pseudoscalar fields and, in particular, to massive quarks, and $`\mathcal{L}_{vec}`$ contains operators generating meson states in the vector and axial-vector channels.
In more detail, in (5) we have retained the numerically most important operators and only those four-quark vertices which induce scalar isosinglet and pseudoscalar isotriplet meson states.
$`\mathcal{L}_{ch}`$ $`=`$ $`i\overline{Q}\left(\overline{)}D+M_0\right)Q-{\displaystyle \frac{f_0^2}{4}}\mathrm{tr}(a_\mu ^2)`$ (5)
$`+{\displaystyle \frac{G_{S0}}{4N_c\mathrm{\Lambda }^2}}(\overline{Q}_LQ_R+\overline{Q}_RQ_L)^2-{\displaystyle \frac{G_{P1}}{4N_c\mathrm{\Lambda }^2}}(\overline{Q}_L\stackrel{}{\tau }Q_R+\overline{Q}_R\stackrel{}{\tau }Q_L)^2,`$
where
$$Q\equiv Q_L+Q_R,\overline{)}D\equiv \overline{)}\partial +\overline{)}v-\gamma _5\stackrel{~}{g}_A\overline{)}a,$$
(6)
with the ‘bare’ pion decay constant $`f_0`$ and the ‘bare’ axial coupling $`\stackrel{~}{g}_A\equiv 1-\delta g_A`$. These ‘bare’ contributions to the chiral effective lagrangian are renormalized after the integration over low-energy quark fields.
The massive part $`\mathcal{L}_m`$ looks as follows
$`\mathcal{L}_m`$ $`=`$ $`i({\displaystyle \frac{1}{2}}+ϵ)\left(\overline{Q}_R\mathcal{M}Q_L+\overline{Q}_L\mathcal{M}^{\dagger }Q_R\right)`$ (7)
$`+i({\displaystyle \frac{1}{2}}-ϵ)\left(\overline{Q}_R\mathcal{M}^{\dagger }Q_L+\overline{Q}_L\mathcal{M}Q_R\right)`$
$`+\text{tr}\left(c_0\left(\mathcal{M}+\mathcal{M}^{\dagger }\right)+c_5(\mathcal{M}+\mathcal{M}^{\dagger })a_\mu ^2+c_8\left(\mathcal{M}^2+\left(\mathcal{M}^{\dagger }\right)^2\right)\right),`$
where the chiral couplings $`c_0,c_5,c_8`$ are ‘bare’ ones, different from their physical values, which are controlled by the CSR rules (see below).
The chiral invariant quark self-interactions in the vector and axial-vector channels, $`\mathcal{L}_{vec}`$, are
$`\mathcal{L}_{vec}`$ $`=`$ $`{\displaystyle \frac{G_{V1}}{4N_c\mathrm{\Lambda }^2}}\overline{Q}\stackrel{}{\tau }\gamma _\mu Q\overline{Q}\stackrel{}{\tau }\gamma _\mu Q-{\displaystyle \frac{G_{A1}}{4N_c\mathrm{\Lambda }^2}}\overline{Q}\stackrel{}{\tau }\gamma _5\gamma _\mu Q\overline{Q}\stackrel{}{\tau }\gamma _5\gamma _\mu Q`$ (8)
$`+c_{10}\text{tr}\left(U\overline{L}_{\mu \nu }U^{\dagger }\overline{R}_{\mu \nu }\right).`$
Their inclusion leads to the appearance of vector and axial-vector isotriplet meson resonances . $`c_{10}`$ is a ‘bare’ chiral coupling.
In total, the effective action suitable for the derivation of two-point correlators contains 13 parameters to be determined by matching to QCD: $`M_0`$, $`\mathrm{\Lambda }`$ (cutoff), the bare chiral constants $`f_0,c_0,c_5,c_8,c_{10}`$, the axial pion-quark coupling $`\stackrel{~}{g}_A`$, the mass asymmetry $`ϵ`$ and the four-fermion coupling constants $`G_{S0},G_{P1},G_{V1},G_{A1}`$.
## 2. Bosonization
We incorporate auxiliary fields $`\mathrm{\Phi }`$ in the scalar and pseudoscalar channels, $`\mathrm{\Sigma },\mathrm{\Pi }^a`$, and in the vector and axial-vector channel, $`iW_\mu ^{(\pm )a}`$, and replace the four-fermion operators
$$\frac{G_C}{4N_c\mathrm{\Lambda }^2}\overline{Q}\mathrm{\Gamma }Q\overline{Q}\mathrm{\Gamma }Q;\mathrm{\Gamma }=1;i\gamma _5\tau ^a;\gamma _\mu \tau ^a;\gamma _5\gamma _\mu \tau ^a,$$
(9)
by
$$i\overline{Q}\mathrm{\Gamma }\mathrm{\Phi }Q+N_c\mathrm{\Lambda }^2\frac{\mathrm{\Phi }^2}{G_C};C=S0;P1;V1;A1,$$
(10)
with an integration over new variables (see ).
Due to vacuum polarization effects (quark loops) the auxiliary fields obtain kinetic terms and propagate, i.e. interpolate resonance states. One fulfills the confinement requirement for a finite number of resonances if one retains only that part of the quark loop which contains the leading logarithm of the cutoff $`\mathrm{\Lambda }`$. In this approach one coherently neglects both the threshold part of quark loop (‘continuum’) and (the infinite number of) heavier resonance poles. This is supported by the large-$`N_c`$ approximation which associates all momentum dependence in the bosonized effective action solely with meson resonances.
The actual value of the constituent mass $`<\mathrm{\Sigma }>=\mathrm{\Sigma }_0`$ is determined by the mass-gap equation
$$\frac{\mathrm{\Lambda }^2}{G_{S0}}\left(\mathrm{\Sigma }_0-M_0\right)=\frac{\mathrm{\Sigma }_0^3}{4\pi ^2}\mathrm{ln}\frac{\mathrm{\Lambda }^2}{\mathrm{\Sigma }_0^2}\equiv \mathrm{\Sigma }_0^3I_0.$$
(11)
From this it is evident that the natural scale for the four-fermion interaction is set by $`\mathrm{\Sigma }_0`$ rather than by $`\mathrm{\Lambda }`$, and it is useful to redefine the related coupling constants: $`\overline{G}_C=G_CI_0\frac{\mathrm{\Sigma }_0^2}{\mathrm{\Lambda }^2}`$, so that the weak-coupling regime is characterized by $`\overline{G}_C\ll 1`$.
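For illustration, eq. (11) can be solved numerically for $`\mathrm{\Sigma }_0`$. Only $`\mathrm{\Lambda }=1.3`$ GeV is taken from the text; $`M_0`$ and $`G_{S0}`$ below are arbitrary placeholders, so the resulting $`\mathrm{\Sigma }_0`$ is not the paper’s fit:

```python
import math

def gap_rhs(sigma, lam):
    """Right-hand side of eq. (11): Σ0³/(4π²) ln(Λ²/Σ0²) = Σ0³ I0."""
    return sigma**3 / (4 * math.pi**2) * math.log(lam**2 / sigma**2)

def solve_gap(m0, lam, g_s0, lo, hi):
    """Bisection for Σ0 in (Λ²/G_S0)(Σ0 - M0) = Σ0³ I0; the bracket
    [lo, hi] is assumed to contain exactly one root (units: GeV)."""
    f = lambda s: (lam**2 / g_s0) * (s - m0) - gap_rhs(s, lam)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With a large enough $`G_{S0}`$ the dynamical mass $`\mathrm{\Sigma }_0`$ comes out well above the current mass $`M_0`$, as the model intends.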
The leading-log part of the quark loop allows one to find analytically both the mass spectrum and the decay coupling constants of pions, heavy pions, scalar, vector and axial-vector resonances.
In particular, the physical axial coupling in the pion-quark vertex is $`g_A=\stackrel{~}{g}_A/(1+\overline{G}_A)`$.
The masses of vector mesons are evaluated to be
$$m_V^2=\frac{6\mathrm{\Sigma }_0^2}{\overline{G}_V},m_A^2=6\mathrm{\Sigma }_0^2\frac{1+\overline{G}_A}{\overline{G}_A}.$$
(12)
Among others, their coupling constants to external vector fields are of main importance
$$f_V^2=\frac{N_cI_0}{6};f_A=g_Af_V.$$
(13)
The pion decay constant can be found by taking into account the bare pion kinetic term (5)
$$F_0^2=f_0^2+N_c\mathrm{\Sigma }_0^2I_0g_A\stackrel{~}{g}_A.$$
(14)
The pion mass is set by the quark condensate
$$C_q=\left(2c_0+\frac{N_c}{4\pi ^2}\mathrm{\Sigma }_0^3\mathrm{ln}\frac{\mathrm{\Lambda }^2}{\mathrm{\Sigma }_0^2}\right)\equiv B_0F_0^2,$$
(15)
according to the Gell-Mann-Oakes-Renner formula, $`m_\pi ^2=2m_qB_0`$ and the masses of the $`u,d`$ quarks are taken equal for simplicity.
Correspondingly, the heavy $`\mathrm{\Pi }`$ mass is found to be
$$m_\mathrm{\Pi }^2=\frac{2\mathrm{\Sigma }_0^2\stackrel{~}{g}_A}{\delta ^2g_A}(\frac{1}{\overline{G}_P}+1),\delta \equiv \frac{f_0}{F_0}.$$
(16)
The weak decay coupling constant for the heavy $`\mathrm{\Pi }`$ meson reads
$$F_\mathrm{\Pi }=F_0d_1\frac{m_\pi ^2}{m_\mathrm{\Pi }^2(0)};d_1=\frac{\sqrt{1-\delta ^2}}{\delta }\left(\frac{2\mathrm{\Sigma }_0ϵ}{\overline{G}_Pg_AB_0}+1\right).$$
(17)
The scalar meson mass is obtained in the form
$$m_\sigma ^2=2\mathrm{\Sigma }_0^2(\frac{1}{\overline{G}_S}+3).$$
(18)
The matching to QCD yields further relations.
## 3. CSR matching
Let us exploit the constraints based on chiral symmetry restoration in QCD at high energies. We focus on two-point correlators of colorless quark currents
$$\mathrm{\Pi }_C(p^2)=\int d^4x\mathrm{exp}(ipx)T\left(\overline{q}\mathrm{\Gamma }q(x)\overline{q}\mathrm{\Gamma }q(0)\right),$$
(19)
with the notations (9) and (10). In the chiral limit the scalar correlator and the pseudoscalar one coincide at all orders in perturbation theory and also at leading order in the non-perturbative O.P.E. (see also ). The same is true for the difference between the vector and axial-vector correlators . As the above differences decrease rapidly with increasing momenta, one can expect that the lowest lying resonances included in the ECQM will successfully saturate the constraints from chiral symmetry restoration.
In the scalar channel one obtains the following sum rules
$`c_8+{\displaystyle \frac{N_c\mathrm{\Sigma }_0^2I_0}{8\overline{G}_S}}{\displaystyle \frac{4ϵ^2N_c\mathrm{\Sigma }_0^2I_0}{8\overline{G}_P}}=0,`$ (20)
$`Z_\sigma =Z_\pi +Z_\mathrm{\Pi },Z_\pi =4B_0^2F_0^2,`$ (21)
$`Z_\sigma m_\sigma ^2-Z_\mathrm{\Pi }m_\mathrm{\Pi }^2-24\pi \alpha _sC_q^2\simeq 0,`$ (22)
where $`Z_\sigma ,Z_\pi ,Z_\mathrm{\Pi }`$ stand for the residues of the resonance pole contributions in the scalar and pseudoscalar correlators. The first relation unambiguously fixes the bare constant $`c_8`$; the last one is essentially saturated by the heavy pion parameters. As a result of the CSR sum rules one determines the chiral constant
$$L_8\simeq \frac{F_0^2}{16}\left(\frac{1}{m_\sigma ^2}+\frac{1}{m_\mathrm{\Pi }^2}\right),$$
(23)
as well as the asymmetry
$$2ϵ=\frac{\overline{G}_P}{\overline{G}_S}\sqrt{\frac{g_A}{\stackrel{~}{g}_A}}\left(\beta \sqrt{1-\delta ^2}\pm \delta \sqrt{1-\beta ^2}\right),$$
(24)
where $`\beta \equiv \sqrt{1-(m_\sigma ^2/m_\mathrm{\Pi }^2)}`$ and $`\delta =f_0/F_0`$.
In the vector channel one derives the relations
$`c_{10}=0,`$ (25)
$`f_V^2m_V^2=f_A^2m_A^2+F_0^2,`$ (26)
$`f_V^2m_V^4=f_A^2m_A^4,`$ (27)
where the last two represent the Weinberg sum rules. With the help of the first relation one obtains the chiral constant:
$$L_{10}=\frac{1}{4}\left(f_A^2-f_V^2\right).$$
(28)
From the second one and eq.(13) we find
$$f_V^2=\frac{F_0^2}{m_V^2(1-g_A^2\xi )}=\frac{N_cI_0}{6},\xi =\frac{m_A^2}{m_V^2},$$
(29)
and from the last one and eq.(13) it follows that
$$\xi =\frac{m_A^2}{m_V^2}=\frac{1}{g_A}.$$
(30)
The last QCD requirement we adopt concerns the CSR for the three-point correlator of one scalar and two axial-vector currents . It eventually determines $`c_5`$ and the chiral constant $`L_5`$
$$c_5\simeq 0;L_5\simeq \frac{N_c\mathrm{\Sigma }_0I_0g_A^2}{8B_0(1+3\overline{G}_S)}.$$
(31)
## 4. Fit and discussion
Let us specify the input parameters. As such we take $`F_0=90\mathrm{MeV}`$ and $`m_\pi =140\mathrm{MeV}`$. We adopt $`\widehat{m}_q(1\mathrm{GeV})6\mathrm{MeV}`$, $`B_0(1\mathrm{GeV})1.5\mathrm{GeV}`$, and employ the phenomenological value of the heavy pion mass, $`m_\mathrm{\Pi }1.3`$ GeV . We also take the vector and axial-vector meson masses, $`m_\rho =770\mathrm{MeV}`$ and $`m_{a_1}1.2`$ GeV, as known parameters. Then the parameter $`\xi 2.4`$.
Let us now perform an optimal fit, applying in the vector channel only (25) and (26). For $`m_\sigma 1`$ GeV one finds $`\beta 0.64`$ and $`L_80.8\times 10^3`$. For $`g_A=0.55`$ one obtains $`L_5=1.2\times 10^3`$ ($`L_5,L_8`$ to be compared with ) and $`\mathrm{\Sigma }_0200`$ MeV. Therefrom one derives $`\overline{G}_V0.25,\overline{G}_A0.2,\stackrel{~}{g}_A0.66`$. With these values, $`I_00.1`$ and $`\mathrm{\Lambda }1.3`$ GeV. Then the bare pion coupling is $`f_062`$ MeV, and for the rest of the parameters we find $`\delta 0.7`$, $`\overline{G}_S0.11`$, $`\overline{G}_P0.13`$, and either $`ϵ0.05`$ or $`ϵ0.51`$. We see that the four-fermion coupling constants $`\overline{G}_S`$ and $`\overline{G}_P`$, as well as $`\overline{G}_V`$ and $`\overline{G}_A`$, are indeed slightly different, and their values, all smaller than unity, signify the weak-coupling regime. We remark that for the value $`g_A=0.55`$ the last sum rule (30) is imprecise: 2.4 vs. 1.8 .
The vector and axial-vector couplings are $`f_V=0.22`$ and $`f_A=0.12`$, in good agreement with the experimental values from the electromagnetic decays of the $`\rho ^0`$ and $`a_1`$ mesons. Two more predictions can be obtained: $`F_\mathrm{\Pi }=0.8\times 10^2F_\pi ,F_\sigma =\frac{\sqrt{Z_\sigma }}{2B_0}=1.6F_0`$. These constants have not yet been measured experimentally.
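As an internal consistency check (our illustration, not part of the original analysis), the rounded fit values quoted above can be substituted back into eqs. (16), (18), (23), (29) and (30); a short Python sketch, with all dimensionful quantities in GeV:

```python
# Substitute the rounded fit values back into eqs. (16), (18), (23), (29), (30).
# Values are the rounded ones quoted in the text, so only approximate
# agreement is expected. All dimensionful quantities in GeV.
import math

Sigma0, F0, I0, Nc = 0.2, 0.09, 0.1, 3
gA, gAt = 0.55, 0.66          # g_A and tilde-g_A
GS, GP = 0.11, 0.13           # bar-G_S and bar-G_P
delta = 0.7                   # f_0 / F_0

m_sigma = math.sqrt(2 * Sigma0**2 * (1 / GS + 3))                       # eq. (18): ~0.98
m_Pi = math.sqrt(2 * Sigma0**2 * gAt / (delta**2 * gA) * (1 / GP + 1))  # eq. (16): ~1.3
L8 = F0**2 / 16 * (1 / m_sigma**2 + 1 / m_Pi**2)                        # eq. (23): ~0.8e-3
fV = math.sqrt(Nc * I0 / 6)                                             # eq. (29): ~0.22
xi_masses = (1.2 / 0.77)**2   # from the physical m_A, m_V: ~2.4
xi_sumrule = 1 / gA           # from the sum rule (30):     ~1.8
```

The recovered values reproduce the numbers quoted in the text ($`m_\sigma 0.98`$ GeV, $`m_\mathrm{\Pi }1.3`$ GeV, $`L_80.8\times 10^3`$, $`f_V0.22`$), while the mismatch 2.4 vs. 1.8 in $`\xi `$ is exactly the imprecision of (30) remarked upon above.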
Thus we have estimated all parameters of the ECQM effective lagrangian and made certain predictions. We conclude that the ECQM supplied with the CSR matching conditions proves to be a systematic way to describe hadron properties at low and intermediate energies starting from QCD.
An alternative scheme exists for modelling the QCD effective action at intermediate energies, based on a manifestly chiral invariant, quasilocal many-quark interaction. Like the simple NJL model, it exploits the hypothetical CSB mechanism due to strong attraction in scalar channels and yields a rather light scalar meson.
D.Becirevic (Orsay): Could you comment on why you did not use the last sum rule in the vector channel? How its inclusion may affect the scalar meson mass?
A.A.Andrianov: We have, in fact, performed the fit employing the sum rule (30). As a result, the mass of the axial-vector meson comes out too low, 1 GeV or less, while the other parameters change only slightly: $`g_A`$ grows and $`\mathrm{\Sigma }_0`$ decreases. Thus we have disfavoured (30), not being satisfied with such a large discrepancy between the physical and large-$`N_c`$ values for the $`a_1`$ mass. As for the scalar meson, its mass is governed by the scalar sum rules and the chiral constant $`L_8`$ and is thereby not affected by the inclusion or neglect of (30).
# Coherent Transport of Angular Momentum: The Ranque–Hilsch Tube as a Paradigm
## 1. Introduction
It is not an exaggeration to say that angular momentum transport is one of the most important, yet poorest understood, phenomena in astrophysics. Furthermore, the angular momentum problem is ubiquitous, appearing not only in the formation of stars from the proto-stellar nebula, but in particular in the Sun and the planets, in galaxies and their central black holes (including our own Galaxy), and in X-ray sources powered by accretion from disks (e.g. Dubrulle 1993, Papaloizou & Lin 1995). There is just too much initial angular momentum. No one has yet provided a full understanding of how it is transported outward so fast and without concomitant excessive heating, as indicated by the astronomical observations, and by our very existence on this planet. Possible exceptions involve situations with an external magnetic field and the necessary ionization and conductivity (e.g. Hawley, Gammie & Balbus 1995), which do not exist in all situations where enhanced angular momentum transport is required.
Countless theories and many hundreds of theoretical papers in astrophysics have sought this explanation, only to parameterize the lack of understanding by the value of $`\alpha `$, as in the ubiquitous $`\alpha `$-viscosity (e.g. Dubrulle 1993, Papaloizou & Lin 1995). This viscosity is orders of magnitude greater than would be expected from laminar flow. A time-independent, $`kϵ`$ turbulence model of the Ranque-Hilsch tube (see below) also shows a need to artificially scale up the turbulent Prandtl number. It is clear that a complete understanding of angular momentum transport requires full time-dependent hydrodynamic modeling. However, the problem is even more difficult for the astrophysicist, because gravity strongly stabilizes the accretion disk, and the lack of a linear instability compounds the difficulty of finding the mechanism that fuels turbulence and large-scale structures such as vortices in the disk. One can even arbitrarily introduce turbulence in a numerical model of an accretion disk (Balbus & Hawley, 1998, Hawley, Gammie & Balbus, 1995) and observe it to rapidly decay to laminar flow.
Coherent X-ray active structures have been reported on the surfaces of accretion disks (Abramowicz et al. 1992), and in fact it has already been proposed (Bath, Evans & Papaloizou 1984) that the short-term flickering of cataclysmic variables and X-ray binaries has its origin in large-scale vortices.
At Los Alamos, Lovelace, Li, Colgate & Nelson (1998) and Li, Finn, Colgate & Lovelace (1999) have derived analytically a linearly growing, azimuthally nonsymmetric instability in the disk, provided one starts with a finite initial radial entropy or pressure gradient. The Ranque-Hilsch tube is subject to this same Rossby instability, where a radial pressure gradient is induced by the tangential injection of compressed air (or gas). The pressure gradient induces a nonuniform distribution of vorticity or angular velocity, which in turn is a sufficient condition for the induction of the Rossby instability.
The necessity for this nonuniform distribution of vorticity is shown in detail for baroclinic flows by Staley & Gall (1979). It is discussed by them in the context of tornados, but the conditions are similar to the Ranque-Hilsch tube where the radial pressure gradient presumably plays an identical role in exciting the Rossby wave instability. The same criterion of a local maximum or minimum of vorticity is necessary for the instability to occur in Keplerian flows. Thus one expects this instability to be the basis of the Ranque-Hilsch tube and further expects that it is the nonlinear interaction of these vortices that produces the weird effect of refrigeration. Refrigeration is the most dramatic experimental effect of the Ranque-Hilsch tube. Later we will offer an explanation of this effect in terms of these semi-coherent vortices.
By observing, understanding, and modeling this phenomenon one will have a laboratory example of the most likely mechanism of the enhanced transport of angular momentum in a Keplerian accretion disk.
We review next some of the history of the Ranque-Hilsch tube.
## 2. The Ranque – Hilsch Tube
The Ranque-Hilsch tube (or vortex tube) was invented by Ranque (1933) and improved by Hilsch (1946). It is made up of a cylinder into which gas (air) is injected tangentially, at several atmospheres, through a nozzle of smaller area than the tube, which sets up strong vortical flow in the tube. Two exit ports of comparable area to the nozzle allow the gas to escape, one port being on axis and the second at the periphery. The air flow is shown schematically in Fig. 1. The surprising result is that one stream is hot and the other stream is cold. The question is which one is which, and why. Gas injected into a static chamber of whatever form would in general be expected to exit through any arbitrary ports and return to its original temperature when brought to rest, assuming, of course, that no heat is added or removed through the walls of the chamber. The contrary result for the Ranque-Hilsch tube has confounded physicists and engineers for decades. The device, when adjusted suitably, is impressive, with the hot side becoming too hot to touch and the cold side icing up! However, the thermodynamic efficiency is poor, $`20\%`$ to $`30\%`$ of a good mechanical refrigerator.
Fig. 1. Schematic of the Ranque-Hilsch tube
Vortex tubes are commercially available, both for practical applications (e.g. cooling of firemen’s suits) and for laboratory demonstrations. Completely erroneous explanations are unfortunately frequently offered (for example, that the tube separates the hot and cold molecules - a Maxwell demon! - clearly in violation of the second law of thermodynamics).
Previous experimental and theoretical work suggests that the Ranque-Hilsch tube operates through the induction of co-rotating vortices in rotational flow. The reasons for this belief depend upon a review of the theory and measurements described in the next section, but in summary these are: 1.) the unreasonableness of the high turbulent Prandtl number required to explain an axisymmetric model; 2.) the high frequency, large amplitude modulation observed in pressure measurements and in the associated acoustic spectrum; 3.) the reversibility of the temperature profile by entropy injection; and 4.) the analogy to the temperature profile expected of a free running, radial flow turbine.
We will review many of these theories and experiments later, but the point for now is that there is no consensus on how this could happen, or to what extent it happens, based upon current understanding of the solutions to the Navier-Stokes equations. The literature, both theory and experiment, has recently been surveyed by Ahlborn & Groves (1997), who conclude: “This implies that none of these mechanisms altogether explains the Ranque-Hilsch effect”.
A recent numerical simulation of the Ranque-Hilsch tube in an axisymmetric approximation (Fröhlingsdorf & Unger, 1999) shows that reproducing the measured temperature difference of the two exit streams requires the extraordinary value of $`9`$ for the turbulent Prandtl number, compared to unity in the standard $`kϵ`$ turbulence model. It is precisely this very large departure from the ”standard model” that makes this device a paradigm for efficient and coherent angular momentum transport. The vortex tube has an uncanny ability to efficiently transport angular momentum and mechanical energy outward while severely limiting a counterbalancing heat flow inward, a property shared by astrophysical accretion disks.
It is fortunate that we have a laboratory device that can be used as a paradigm for studying coherent transport of angular momentum. It is true that the flow field is extreme compared to that in astrophysical disks, but on the other hand we do not have the complicating effects of gravity, nor of explaining the origin of the vortices (instability and nonlinear growth), since they are externally induced by the geometry of the tube.
Efficient turbine engines have very expensive blades that must withstand high temperatures and stresses. If the Navier-Stokes equations ”know” a better way of transporting angular momentum, then perhaps we could learn how to do so, and to make a more efficient and cheaper engine. We suggest that numerical modeling combined with laboratory observations are the best way to find out and improve our knowledge along the way.
## 3. Angular Momentum and the Excitation of Rossby Vortices
Let us first consider the flow in the cylinder under the assumption that the flow is laminar. The Reynolds number in typical experiments is $`R_e10^5`$, and so without turbulence, friction would be negligible with the exception of the Ekman layer flow, to be considered later. The primary dynamical constraint under these circumstances is the conservation of angular momentum, or $`Rv_\varphi =R_ov_{\varphi ,o}`$, so that the centrifugal acceleration, with conserved angular momentum, becomes: $`a=v_\varphi ^2/R=(v_oR_o)^2/R^3`$. Since the centrifugal force must be balanced by the pressure gradient, we obtain
$$dP/dR=a\rho $$
where $`\rho `$ is the gas density. Using the adiabatic law
$$P=P_o(\rho /\rho _o)^\gamma $$
where $`\gamma `$ is the ratio of specific heats $`=1.4`$ for air, and integrating we have:
$$\left[1-\left(\frac{\rho }{\rho _o}\right)^{\gamma -1}\right]=Q\left[\left(\frac{R_o}{R}\right)^2-1\right]$$
where
$$Q=\frac{\gamma -1}{\gamma }\frac{\rho _o}{P_o}\frac{v_o^2}{2}$$
Thus, in this approximation the density would vanish at finite radius unless $`Q`$ is very small, i.e. high input pressure compared to the input kinetic energy. Under normal operating conditions of the tube there would be no way for the gas to reach velocities sufficient to carry even a small fraction of the input mass flow out the axial hole, when one considers that $`v_o(1/2)c_s`$.
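The scale of the problem is easy to quantify. Taking $`v_o=c_s/2`$ as a representative injection speed (an illustrative choice on our part), $`Q`$ and the radius at which the density formally vanishes become:

```python
# Evaluate Q and the radius where the density formally vanishes, for the
# representative injection speed v_0 = c_s/2 (illustrative choice only).
import math

gamma = 1.4                       # ratio of specific heats for air
# With v_0 = c_s/2 and c_s^2 = gamma P_0 / rho_0, one has
# rho_0 v_0^2 / (2 P_0) = gamma / 8, hence:
Q = (gamma - 1) / 8               # = 0.05
# The density vanishes where (R_0/R)^2 - 1 = 1/Q:
R_vanish = 1 / math.sqrt(1 + 1 / Q)   # ~0.22, in units of R_0
```

So with conserved angular momentum the gas could not penetrate inside roughly a fifth of the injection radius without shedding angular momentum.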
Hence, the only way for the gas to exit the central port is to rid itself of angular momentum as it spirals toward the axis. Standard small-scale turbulence cannot do that without excessive concomitant heating. Some large-scale eddy structure is needed, and our suggestion is that the flow has non-azimuthally symmetric vorticity.
Two-dimensional rotational flow in an incompressible medium is known to be unstable to the formation of ”Rossby” waves (Nezlin & Snezhkin, 1993). The general criterion for the instability is the existence of a local maximum or minimum in an otherwise monotonic radial distribution of vorticity. Such a ”bump” in vorticity is created by an entropy or pressure bump (cf. Fig. 1 of Li et al. 1999). When the waves grow to the nonlinear regime, they form co-rotating vortices. Such vortices act like particles in the sense that they carry or transport a conserved mass in their cores. Staley & Gall (1979) analyzed this instability and the structure of the vortices in the nonlinear regime for tornados and found remarkable agreement between theoretical analysis and the observations of 5 to 6 co-rotating vortices. They conjectured, but could not prove, the enhanced transport of angular momentum by these vortices. The latter are of particular interest in the atmospheric sciences because they are responsible for most of the damage caused by tornados and hurricanes. The excitation of these vortices is also studied in planetary atmospheres, where theory (Marcus, 1988, 1990) and experiment (Sommeria, Meyers, & Swinney, 1988) demonstrate remarkable agreement with the observations of the ”red spot” of the Jovian atmosphere. Multiple vortices can also be excited in laboratory experiments where a thin layer of fluid is co-rotated in equilibrium within a parabolic vessel into which vorticity is injected or removed at a local radius (Nezlin & Snezhkin, 1993), but again the enhanced transport of angular momentum in the fluid was not measured.
We expect that these same vortices must be induced in the Ranque-Hilsch tube, because the flow at the innermost radius should be unstable owing to the steep gradients in density and temperature. One of us (SAC) performed an experiment to prove this at Lawrence Livermore National Lab as a basis for an applied vortex reactor (Colgate, 1964). Here a standard Ranque-Hilsch tube produced the standard temperature ratios of a cold flow from the axial port and a hot stream from the periphery. We suspected that if the instability was due to the steep gradient in density and temperature, then a large change in the entropy of the rotating gas stream at an intermediate radius would make a significant change in the Ranque-Hilsch tube characteristics. Consequently we injected a flammable gas, acetylene, through a small hypodermic needle at a flow rate close to stoichiometric at half the radius of the tube. With ignition of a flame, the radial temperature gradient was inverted by an order of magnitude, and the typical Ranque-Hilsch tube exit temperature ratio was inverted. The axial exit stream became hot enough to melt tungsten, $`4000`$ deg, and the outer walls and peripheral exit stream returned to the input stream temperature. We thus became convinced that the vortex flow field could be stabilized by an entropy gradient and, conversely, that the vortex flow without a strong entropy gradient was unstable, and that this instability was fundamental to the refrigerator action of the Ranque-Hilsch tube.
The most reasonable explanation is that this instability induces axially aligned vortices that act like semi-rigid vanes or turbine blades in the flow. These rigid members then transport mechanical work from the faster rotating (higher vorticity) inner flow to the periphery, where friction converts this mechanical energy to heat. The peripheral exit flow then removes this heat. This is just what would be expected of a free-running, mechanically unloaded, radial-flow turbine. The rotor and blades would remove mechanical energy from the axial exit stream and convert it into higher-velocity, frictionally heated flow at the peripheral walls.
We would like to know whether and how these vortices can transport angular momentum analogously to turbine blades. Before discussing these measurements we need to take note of another source of enhanced frictional torque on the fluid, namely the Ekman layers. Ekman layer flow is fluid in frictional contact with the two stationary end walls of the tube. The consequent lack of rotation is also a lack of centrifugal force, and so such stationary fluid is highly buoyant relative to the rotating flow. Hence a gas or fluid layer close to the end walls will flow rapidly toward the axis. The velocity of the radial flow is limited by the same friction that has slowed its rotation, and so an equilibrium, steady-state flow is reached, i.e. the Ekman layer. The thickness of this flow is typically $`R_oR_e^{-1/2}`$ and its velocity $`v_{Ekm}1/2v_o`$, so the fractional loss due to the Ekman layers at the two ends will be $`R_e^{-1/2}3\times 10^{-3}`$, small compared to the primary angular momentum transport phenomenon sought. However, whether this source of frictional torque can be totally neglected, as in nearly all analyses, needs to be checked more carefully.
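The Ekman scaling quoted here is a one-line estimate (order-unity prefactors omitted):

```python
# Order-of-magnitude Ekman layer estimate for the typical Re ~ 1e5.
Re = 1.0e5
ekman_scale = Re ** -0.5   # layer thickness in units of R_0, and the
                           # fractional angular-momentum loss: ~3.2e-3
```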
Because of the short time scales associated with the Ranque-Hilsch tube, it is difficult to make measurements of the flow field. All experimental work has therefore been limited to measuring time averaged quantities, thus necessarily hiding the putative sub-vortices that we think are responsible for the efficient transport of angular momentum and mechanical energy.
There are, however, a number of direct and indirect indications that unsteadiness plays a major role in the dynamics of the Ranque-Hilsch tube, and that the temperature separation is strongly influenced by ’fluctuations’.
1. A measurement of the acoustic spectrum (Ahlborn & Groves 1997) shows a continuum stretching from 500 Hz to above 25 kHz, but with some very strong features near 19 kHz. For this particular experiment this frequency corresponds to about 1/5 of the angular period of the flow. This is precisely what we would expect from about 5 swirling vortices.
2. Measurements by Kurosaka (1982) show that the suppression of the vortex whistle leads to a decrease of the energy separation.
3. Fröhlingsdorf & Unger (1999), who model the flow in a steady, axisymmetric approximation with a $`kϵ`$ turbulence model, require an artificial enhancement of the Prandtl number by a staggering factor of 10 to fudge the unsteadiness.
We suggest here that the unsteady features are actually a set of 5 or 6 swirling vortices into which the unstable axisymmetric vortex has broken up.
An a priori ’obvious’ measurement of the flow field with laser scattering from suspended particles, e.g. smoke, does not work because the high centrifugal forces prevent the seed particles from reaching the inner region. The only viable approach seems to be Schlieren photography with an ultrafast camera, together with correlated (in axis and radius) pressure measurements with small in situ probes up to 10 MHz.
The most sophisticated numerical approach that has been applied to the study of the vortex tube (Fröhlingsdorf & Unger, 1999) is the industrial flow software CFX. In our opinion these calculations are inadequate in two ways: first, they assume azimuthal symmetry (about the axis of the tube), and second, they assume a steady flow. Both of these assumptions hide the interesting physics that we need to understand. Proper 3D, or at least 2D, numerical simulations are possible, and are required to remedy these obvious deficiencies. The flow is expected to be subsonic, although we cannot rule out shocks a priori and have to keep them in mind. Molecular heat transport is negligible. The flow problem is thus, mathematically speaking, simple, in the sense that it involves only the Navier-Stokes equations. However, Reynolds numbers can be large, and the outer boundary layer plays an important role. From a physical and numerical point of view one is thus faced with a doable, although very difficult, problem.
## 4. Conclusion
The Ranque-Hilsch tube has been a long-standing scientific puzzle since the first half of the century (Ranque 1933, Hilsch 1946), and many dozens of theoretical and experimental papers have been written attempting to understand its action. The frustration of the lack of clear results has often relegated the topic to ”curiosity” status. However, the equally enigmatic and similar phenomenon in astrophysics of the Keplerian accretion disk has been confronted in literally thousands of papers with similar results. The observational fact of these high angular momentum transporting flows has become indisputable, so the research effort has increased. One result of this effort has been to link theoretically the plausible explanation of the Ranque-Hilsch tube to the Keplerian accretion disk. The results have become promising. The expected excitation of axially aligned ”Rossby” vortices has been predicted analytically, and the linear growth has been modeled with numerical codes. The excitation and action of these vortices represent new physics. One is reluctant to claim new physics without experimental proof. One can obtain this proof in the laboratory and thereby, it is hoped, complete the last major non-understood domain of the Navier-Stokes equations.
## 5. Acknowledgements
This work has been aided by discussions with many colleagues among whom are Hui Li, Richard Lovelace, David Ramond, Howard Beckley, and Van Romero. It has been partially supported by New Mexico Tech and by the Department of Energy under contract w-7405-ENG-36, and by NSF (AST 9528338, and AST 9819608) at UF.
## References
Abramowicz, M.A., A. Lanza, E.A. Spiegel, & E. Szuszkiewicz, 1992, Nature 356, 41.
Ahlborn, B. & S. Groves, 1997, Secondary Flow in a Vortex Tube, Fluid Dyn. Res. 21, 73–86.
Balbus, S.A. & J. F. Hawley, 1998, Instability, Turbulence, and Enhanced Transport in Accretion Disks, Rev. Mod. Phys. 70, 1.
Bath, G. T., W. D. Evans, & J. Papaloizou, 1984, MNRAS, 176, 7.
Colgate, S. A., 1964, Vortex Gas Accelerator, AIAA J., 2, No. 12, 2138-2142.
Dubrulle, B., 1993, Differential Rotation as a Source of Angular Momentum Transfer in the Solar Nebula, Icarus, 106, 59-76.
Dubrulle, B. & L. Valdettaro, 1992, Consequences of Rotation in Energetics of Accretion Disks, Astron. & Astrophys. 263, 387–400.
Fröhlingsdorf, W. & H. Unger, 1999, Numerical investigations of the compressible flow and the energy separation in the Ranque-Hilsch vortex tube, Int. Jour. of Heat and Mass Transfer, 42, 415.
Hawley, J. F., C. F. Gammie & S. A. Balbus, 1995, Local three-dimensional magnetohydrodynamic simulations of accretion disks, Astrophysical Journal, 440, 742.
Hilsch, R., 1946, Die Expansion von Gasen im Zentrifugalfeld als Kälteprozess, Zeitung für Naturforschung, 1, 208–214; and 1947 Rev. Sci. Instr. 18, 108–113.
Kurosaka, M., 1982, Acoustic Streaming in Swirling Flow and the Ranque-Hilsch Effect, J. Fluid Mechanics 124, 139–172.
Lovelace, R. V. E., H. Li, S. A. Colgate & A. F. Nelson, 1998, Rossby Wave Instability of Keplerian Accretion Disks, submitted to the Astrophysical Journal.
Li, H., J. H. Finn, S. A. Colgate & R. V. E. Lovelace, 1999, Linear theory of an Instability in thin Keplerian accretion disks, submitted to the Astrophysical Journal.
Marcus, P. S., 1988, Numerical simulation of Jupiter’s Great Red Spot, Nature, 331, 693.
Marcus, P. S., 1990, Vortex dynamics in shearing zonal flow, J. Fluid Mech., 215, 393.
Nezlin, M. V. & E. N. Snezhkin, 1993, Rossby Vortices, Spiral Structures, Solitons, Springer Verlag, New York.
Papaloizou, J. C. B. & D. N. C. Lin, 1995, ‘Theory of Accretion Disks: Angular Momentum Transfer Processes, Annual Review of Astronomy and Astrophysics 33, 505–540.
Ranque, G. J., 1933, Expériences sur la détente giratoire avec production simultanée d’un échappement d’air chaud et d’un échappement froid, Physique du Radium, 4, 112-114.
Sommeria, J., S. D. Meyers & H. L. Swinney, 1988, Laboratory simulation of Jupiter’s great red spot, Nature, 331, 689.
Staley, D. O. & R. L. Gall, 1979, Barotropic Instability in a Tornado Vortex J. Atmos. Sci., 36, No. 6, 973.
# Spectroscopy of Globular Cluster Candidates in the Sculptor Group Galaxies NGC 253 and NGC 55
## 1 Introduction
The search for globular clusters in external galaxies has progressed to the point where globular cluster systems have now been studied in over 100 galaxies (e.g. Harris 1999). In most cases these globular cluster systems are identified as a statistical excess of images above background, clustered around the parent galaxy. If the known globular clusters in the Galaxy are assumed to be representative of old cluster populations in external galaxies, then only in the nearest galaxies are the brightest globular clusters expected to be resolved in ground–based images (see Harris et al. 1984). The Sculptor group of galaxies forms a loose physical association of about 15 members (de Vaucouleurs 1959, 1978). At a distance of 2.5 Mpc \[Graham 1982\] it is generally believed to be the nearest aggregate of galaxies to our own Local Group \[de Vaucouleurs 1975\]. Despite its relative proximity, very little is known of the globular cluster systems surrounding the major galaxies in the Sculptor group (NGC 45, 55, 247, 253, 300 and 7793). Da Costa & Graham (1982) obtained spectroscopy of three resolved cluster candidates in the field of NGC 55, an SB(s)m galaxy with $`M_B=-19.7`$ \[de Vaucouleurs et al. 1991\], and found all three to have velocities which agreed with that of NGC 55 itself. A visual search for globular cluster candidates around NGC 55 and NGC 253 has been made by Liller & Alcaino (1983a,b) using plates from the ESO 3.6 m telescope. These authors found a total of 114 slightly diffuse objects with the $`B-V`$ colours, magnitudes and sizes appropriate to those of a globular cluster population similar to that in the Galaxy. A more quantitative selection of candidates was later derived by Blecha (1986) using profile analysis of images from the Danish 1.5 m telescope.
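The resolvability argument can be made concrete: at the Sculptor distance a Galactic-like globular cluster, with a characteristic radius of a few to ten parsecs, subtends well under an arcsecond (a rough small-angle estimate of ours):

```python
# Angular size of a Galactic-like globular cluster at the Sculptor distance.
ARCSEC_PER_RAD = 206265.0
d_pc = 2.5e6                      # adopted distance to the Sculptor group, in pc

def theta_arcsec(r_pc):
    """Small-angle size of a physical radius r_pc seen from d_pc."""
    return ARCSEC_PER_RAD * r_pc / d_pc

theta_small = theta_arcsec(3.0)   # ~0.25 arcsec
theta_large = theta_arcsec(10.0)  # ~0.83 arcsec
```

These angles are comparable to good ground-based seeing, which is why the brightest clusters appear only slightly diffuse rather than clearly resolved.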
The true nature of these globular cluster candidates, however, can only be established via spectroscopy to determine accurate radial velocities. Since these objects are marginally resolved, there is not the same level of confusion with Galactic stars as is the case with more distant cluster systems. Moreover, because the parent galaxies have low systemic velocities ($`V_{\mathrm{N55}}`$=129 kms<sup>-1</sup>; $`V_{\mathrm{N253}}`$ = 245 kms<sup>-1</sup>) \[Da Costa et al. 1991\], there is little uncertainty in identifying background galaxies (see $`\mathrm{\S }4`$). In this paper we present a spectroscopic survey of the Liller & Alcaino (1983a,b) and Blecha (1986) samples. The plan of the paper is as follows: $`\mathrm{\S }2`$ contains the observations and data reduction, with the radial velocity analysis presented in $`\mathrm{\S }3`$. In $`\mathrm{\S }4`$ we discuss the spectroscopically confirmed clusters and use this information together with COSMOS digitised plate scans to define a new sample of globular cluster candidates in $`\mathrm{\S }5`$. Conclusions are summarized in $`\mathrm{\S }6`$.
## 2 Observations and Data Reduction
### 2.1 Sample Selection
The sample was selected from the lists of globular cluster candidates published by Liller & Alcaino (1983a) for NGC 55, and by Liller & Alcaino (1983b) and Blecha (1986) for NGC 253. The candidates were taken within the radial limits $`0<R<20`$ arcmin from the centre of each galaxy and magnitude limits $`18<B<20.5`$. For completeness, our candidates included twenty extra objects, six labelled ’bright’ and fourteen labelled ’blue’ in the Liller & Alcaino (1983a,b) papers. Accurate astrometric positions ($`\pm `$ 0.3 arcsec) for each target were obtained using a PDS measuring machine and reference stars from the Perth70 catalogue. At this stage any obvious galaxies (usually low surface brightness objects showing spiral structure) were expunged from the lists. The final list of candidates contained 57 objects in NGC 55 and 58 in NGC 253.
### 2.2 Observations
The observations were made with the 3.9 m Anglo–Australian Telescope (AAT), using the AUTOFIB fibre positioner \[Parry & Sharples 1988\] to obtain intermediate dispersion spectra of up to 64 objects within a 40 arcminute diameter field. Several fibres were used to monitor the sky background spectrum and the faint background light from the parent galaxy halo. Using a 600 lines mm<sup>-1</sup> grating in first order with the RGO spectrograph, we obtained spectra covering the range 3850–5700 Å at a resolution of 4 Å, with the Image Photon Counting System (IPCS) as the detector. For each galaxy $`5\times 3000`$ second exposures were interspersed with 200 second exposures of a Cu–Ar–He calibration lamp and 300 second exposures of blank sky regions. Most of the observations were obtained in rather poor seeing conditions (2–3 arcsec), so the final spectra have S/N ratios typically in the range 4–20.
### 2.3 Data Reduction
The data have been primarily reduced using the FIGARO data reduction package with standard techniques to extract the individual spectra from the data frame and wavelength calibrate them using the exposures of the Cu–Ar–He hollow cathode lamp. The rms residuals of the wavelength calibration were typically 0.2 Å. Sky subtraction was based on the dedicated sky fibres in each frame, with the fibre–to–fibre transmission variations and vignetting along the spectrograph slit being removed using the blank sky exposures. In addition, we have corrected for the fibre–to–fibre spectral response (most of which is introduced when the spectra are extracted) using exposures of the twilight sky spectrum whose shape is assumed to be constant across the field. During the run, exposures were obtained of several bright radial velocity standards by offsetting the stars into individual fibres. The spectra consisted of 925 channels at 2 Å pixel<sup>-1</sup> and prior to cross–correlation analysis were rebinned onto a logarithmic wavelength scale with a velocity step of 127 kms<sup>-1</sup> per bin.
Table 1 summarizes the template objects observed for the two galaxies.
## 3 Radial Velocities
Radial velocities were determined from the spectra by one of two methods. For obvious emission–line objects, identified lines were fitted with a gaussian and the position of the peak measured. The final velocity is the mean of these measurements for the different lines, with the uncertainty being the rms between measurements. For spectra with no obvious emission lines, radial velocities were determined by cross–correlation of the object spectrum against a template \[Tonry & Davis 1979\] with the task fxcor in iraf. Emission lines more than 4 $`\sigma `$ from the pseudo–continuum (determined by fitting a polynomial to the spectra) were either interpolated across by hand or removed with the lineclean task in iraf. Fig. 1 shows three extracted spectra of globular cluster candidates in NGC 253.
It was required that the cross–correlations have normalised peak heights $`>`$ 0.1; below this threshold the returned velocities were found to be unreliable and were therefore discarded. Furthermore, each spectrum was required to have at least two reliable cross–correlations against different templates. The final velocities are the mean velocity weighted by the cross–correlation peak height of each template, and have been corrected to heliocentric values. The quoted uncertainties are the rms between the velocities derived from the different templates. Tables 2 and 3 show the final velocities obtained from the candidate cluster spectra for the two galaxies. The external uncertainties cannot be assessed directly: none of these objects has previously been observed spectroscopically, and since each field was observed in only one fibre configuration there are no repeat measurements of the same object for comparison.
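The cross–correlation step lends itself to a compact illustration. On the logarithmic wavelength grid of §2.3 a Doppler shift is a uniform shift in pixels, each bin spanning 127 kms<sup>-1</sup>, so the velocity follows from the lag of the cross–correlation peak. The sketch below is illustrative only (it is not the fxcor implementation; the toy spectrum and its line positions are invented):

```python
import numpy as np

KMS_PER_BIN = 127.0    # velocity step of the log-lambda rebinning (from the text)

def make_template(n=925, lines=(100, 300, 550, 800)):
    """Toy absorption-line spectrum on a log-wavelength grid."""
    spec = np.ones(n)
    x = np.arange(n)
    for c in lines:
        spec -= 0.4 * np.exp(-0.5 * ((x - c) / 3.0) ** 2)
    return spec

def xcorr_velocity(obj, template):
    """Velocity from the peak of the cross-correlation of two
    continuum-subtracted spectra binned at KMS_PER_BIN per pixel."""
    o = obj - obj.mean()
    t = template - template.mean()
    cc = np.correlate(o, t, mode="full")
    lag = cc.argmax() - (len(t) - 1)   # pixel shift of the peak; zero lag at n-1
    return lag * KMS_PER_BIN

template = make_template()
# Shift the template by an integer number of bins to fake a redshifted object
obj = np.roll(template, 3)            # 3 bins * 127 km/s = 381 km/s
print(xcorr_velocity(obj, template))  # 381.0
```

A positive lag corresponds to the object spectrum being shifted to longer wavelengths, i.e. a recession velocity.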
## 4 Identification of Globular Clusters
All our globular cluster candidates have been classified as extended objects on the basis of visual inspection of photographic plates (Liller & Alcaino 1983a,b) or profile analysis (Blecha 1986). The main source of contamination in the samples should therefore be from background galaxies. The systemic velocity of the Sculptor group is low ($`V_{\mathrm{N55}}=129`$ kms<sup>-1</sup>; $`V_{\mathrm{N253}}=245`$ kms<sup>-1</sup>), so we take a velocity cut at $`v_\mathrm{h}=`$ 1000 kms<sup>-1</sup>, with all objects above this threshold being assumed to be background galaxies. A large number of the spectra exhibited emission lines of \[O ii\], \[O iii\], H$`\beta `$ or H$`\gamma `$ and except for candidate #64 in NGC 253 (see below), these objects all had $`v_\mathrm{h}>`$ 1000 kms<sup>-1</sup>. Removing background galaxies in such a manner should effectively leave a sample consisting of objects which are either residual contaminating foreground Galactic stars, or globular clusters within the Sculptor group. For old clusters in the Milky Way, Armandroff (1989) gives a velocity dispersion of $`\sigma `$ = 100 kms<sup>-1</sup>. Assuming that mass scales with $`\sigma ^2`$ and that the mass–to–light ratios of the Milky Way and the two Sculptor spirals are comparable, the expected velocity dispersion for the halo clusters in NGC 55 and NGC 253 is $`\sim `$ 70 kms<sup>-1</sup>. The velocity ranges for possible clusters ($`\pm 3\sigma `$ from the mean velocity) are then taken to be $`-80\le V_{\mathrm{N55}}\le 340`$ kms<sup>-1</sup> and $`35\le V_{\mathrm{N253}}\le 455`$ kms<sup>-1</sup> for NGC 55 and NGC 253 respectively. Since at some level there will be an overlap between high velocity foreground stars and globular clusters in the Sculptor group, distinguishing between these two cases relies on identifying true clusters as marginally extended objects.
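The arithmetic behind the membership windows is easily checked. In the sketch below the halo-mass ratio is an assumption chosen only to reproduce the quoted $`\sim `$ 70 kms<sup>-1</sup>; the text does not state it explicitly:

```python
import math

sigma_mw = 100.0                 # Galactic halo GC dispersion (Armandroff 1989), km/s
mass_ratio = 0.5                 # assumed Sculptor-spiral / Milky Way halo mass ratio
sigma = sigma_mw * math.sqrt(mass_ratio)   # mass ~ sigma^2  =>  sigma ~ sqrt(mass)
print(round(sigma))              # 71, i.e. the ~70 km/s adopted in the text

# Membership windows: systemic velocity +/- 3 sigma, with sigma = 70 km/s
for name, v_sys in [("NGC 55", 129.0), ("NGC 253", 245.0)]:
    print(name, v_sys - 3 * 70.0, v_sys + 3 * 70.0)
# NGC 55: -81 to 339 (quoted, rounded, as -80 to 340); NGC 253: 35 to 455
```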
Fifteen NGC 253 cluster candidates fall within the velocity range expected for globular clusters, and appear marginally resolved on images from the Digitized Sky Survey (see Lasker & Mclean 1994). Object #64 has $`v_\mathrm{h}=`$ 404 $`\pm `$ 10 kms<sup>-1</sup>, but shows emission lines of \[O iii\], H$`\beta `$ and H$`\gamma `$ and has therefore been excluded from further consideration. Two of the candidates, #25 and #27, have radial velocities which fall just short of the velocity cut. The appearance of their images and their COSMOS image parameters (see §5) point towards a Galactic origin; on this basis they have been identified as foreground stars. Objects #40 and #44 both have large radial velocities in comparison to the mean systemic velocity of NGC 253, with $`v_\mathrm{h}=`$ 401 $`\pm `$ 117 kms<sup>-1</sup> and $`v_\mathrm{h}=`$ 447 $`\pm `$ 102 kms<sup>-1</sup> respectively. However, the large errors reflect the significant scatter between templates for cross–correlations close to the normalised peak-height cut–off of 0.1, so these two objects have been left in the sample. Table 4 lists those objects in NGC 253 that are identified as globular clusters. The sample has mean velocity $`\overline{v_\mathrm{h}}=`$ 297 kms<sup>-1</sup> with velocity dispersion $`\sigma =`$ 74 kms<sup>-1</sup>. Omitting objects #40 and #44 yields $`\overline{v_\mathrm{h}}=`$ 276 kms<sup>-1</sup> and $`\sigma =`$ 55 kms<sup>-1</sup>, values entirely consistent with those expected for the NGC 253 globular cluster system. Twelve of the clusters fall into the ’classical’ colour region for globular clusters, with 0.5 $`<BV<`$ 1.25. Cluster #7 (B1) is very blue, with $`BV=`$ 0.19; its spectrum shows strong Balmer absorption lines, and it may be analogous to the blue clusters seen in the Magellanic Clouds (e.g. Bica, Dottori, & Pastoriza 1986). Cluster #41 has $`BV=`$ 1.8, and is probably highly reddened due to its proximity to the disk of NGC 253 (see Blecha 1986).
In the NGC 55 sample, there are six objects that are not galaxies and fall within the velocity cut for globular clusters. However, all but one of these appear effectively point–like and lie on the stellar locus of the COSMOS plate scans (see §5). Object #53 has $`v_\mathrm{h}=`$ 246 $`\pm `$ 58 kms<sup>-1</sup> and lies within 2 $`\sigma `$ of the galaxy’s systemic velocity. Images of this object show some elongation, but an isophotal plot indicates a round, marginally extended source blended with another object.<sup>5</sup><sup>5</sup>5 In their catalogue, Liller & Alcaino \[Liller & Alcaino 1983a\] indicate that object #53 (LA43) is separated by 5$`\stackrel{}{.}`$9 from a fainter companion. On this basis, it has been classified as a likely globular cluster. At $`B=17.1`$ ($`BV=0.76`$), cluster #53 is bright but not unreasonably so. Adopting a distance modulus to NGC 55 of $`(mM)_o=`$ 26.5 gives the cluster $`M_V`$ $`=`$ -10.2, the same as the most luminous Galactic globular cluster $`\omega `$ Cen \[Harris 1996\].
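The quoted luminosity of cluster #53 follows directly from the adopted distance modulus; a quick arithmetic check:

```python
B, B_minus_V = 17.1, 0.76     # photometry of cluster #53 (LA43)
dist_mod = 26.5               # adopted (m - M)_o for NGC 55
V = B - B_minus_V             # apparent V magnitude, 16.34
M_V = V - dist_mod
print(round(M_V, 1))          # -10.2, comparable to omega Cen
```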
Object #11 has been identified as a broad absorption line (BAL) QSO with redshift $`z`$2.7. Its spectrum is shown in Fig. 2. Emission in N v and C iv is shown, though absorption shortward of N v is so strong that no Ly$`\alpha `$ emission is observed. This type of spectrum is occasionally seen in ’peculiar’ BAL QSOs (e.g. Korista et al. 1995).
Fig. 3 shows the spatial distribution of the 14 objects identified as globular clusters in NGC 253. The field is approximately 40 arcminutes on a side. The distribution of cluster velocities is interesting: the clusters predominantly recede with respect to the galaxy rest–frame in the SW part of the galaxy and approach in the NE. This is consistent with the direction of rotation of the galaxy, as measured from H$`\alpha `$ rotation curves \[Pence 1980\]. However, with so few clusters the level of rotation is not statistically significant.
## 5 Definition of a New Sample of Globular Clusters
### 5.1 COSMOS plate scans
The cluster samples selected by Liller & Alcaino (1983a,b) are based on visual inspection of photographic plates. Clearly such selection is prone to subjective errors and saturation effects. A more quantitative approach was taken by Blecha (1986) using electronographic plates, but as can be seen from Table 2, even this sample is contaminated by both foreground stars and background galaxies. In this section we explore the use of image parameters measured by the COSMOS measuring machine (Beard, MacGillivray & Thanisch 1990) to identify new samples of globular clusters around NGC 253 and NGC 55.
AAT prime focus plates of the two Sculptor group galaxies were raster scanned with the COSMOS facility using the mapping mode, with a step size of 16 $`\mu `$m and a 16 $`\mu `$m spot size. The image area threshold was set to 10 pixels (plate scale 15.3 arcsec mm<sup>-1</sup>), with all pixels above this threshold being grouped into discrete objects. Details of the plates are listed in Table 5.
The COSMOS image analysis software \[Stobie 1982\] was then run on each digitized frame to provide a list of objects with information on position, magnitude, orientation, axial ratio (major and minor axis lengths) and area. The COSMOS instrumental magnitudes were calibrated using the photoelectric sequences of Hanes & Grieve (1982), and Alcaino & Liller (1984).<sup>6</sup><sup>6</sup>6It should be noted for future reference that the star designated as V on their finding chart 38 for NGC 253 is tabulated as P in the paper. Alcaino & Liller give $`V`$ magnitudes and colours for 24 stars down to $`V`$ = 16.95 ($`BV`$ = 0.59) in the field of NGC 55, and for 19 stars to $`V`$ = 16.53 ($`BV`$ = 0.74) in the vicinity of NGC 253. Those from Hanes & Grieve are somewhat brighter, in the range $`9\le V\le 14`$ for the two galaxies. There are a total of ten overlaps between their photometric sequences, with six in the NGC 253 field and four in the NGC 55 field. The two sequences agree well, with a mean $`B`$ magnitude offset, $`\mathrm{\Delta }_B=0.025`$ mag, $`\sigma =0.06`$ and $`\mathrm{\Delta }_B=0.045`$ mag, $`\sigma =0.03`$ for NGC 253 and NGC 55 respectively. The cluster candidates reach down to $`B\simeq 21`$, so for calibration at the faint end a faint star ($`m_B=21.3`$) from the New Luyten Two–Tenths catalogue (NLTT) \[Luyten 1980\] was also used in the calibration of plate J1739.<sup>7</sup><sup>7</sup>7 There is significant disagreement between Blecha’s and Liller & Alcaino’s photometry. In some instances the data differ by a full magnitude for the same object. This is seen in the COSMOS data where Blecha’s candidates are systematically fainter – see Blecha (1986) for more details. For internal consistency, only Liller & Alcaino’s photometry was used for the calibration of the J1739 plate. Fig. 4 shows the best–fitting calibration curve for the COSMOS instrumental magnitudes for NGC 55. A least–squares fit to the data yields an rms of 0.2 mag.
The photometric calibration for NGC 253 shown in Fig. 5 shows greater scatter, with $`\sigma =0.5`$ mag. The origin of this larger scatter in the NGC 253 plate is unknown, but is seen over the entire magnitude range.
An astrometric solution for the COSMOS scans was obtained so as to locate spectroscopically identified objects from this work within the COSMOS datasets. Typical astrometric plate residuals were of order 0.3 arcseconds.
### 5.2 A New Cluster Sample
Globular clusters should appear as round, marginally extended objects on the photographic plates, whereas stars should be effectively point–like.
The ellipticity for each COSMOS object is derived from its measured image moments, where ellipticity is defined as $`ϵ=1b/a`$ ($`b`$ and $`a`$ are the measured semi–minor and semi–major axes respectively). For both plates an ellipticity limit was set at approximately three times the rms stellar ellipticity, above which threshold objects were classified as being either truly elliptical or a blend of two or more objects. These were consequently excluded from the new cluster candidate sample. Object ellipticity showed only a weak dependence on magnitude, in the sense that at fainter magnitudes the numbers of elliptical objects *marginally* increased. This was not a strong trend and fainter globular clusters should not be discriminated against in this scheme (the limits which define elliptical images increase towards fainter magnitudes.)
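The moment-based ellipticity cut can be illustrated as follows. The COSMOS analyser itself is not reproduced here; this is an illustrative reimplementation of $`ϵ=1b/a`$ from second central image moments, tested on two synthetic pixel masks:

```python
import numpy as np

def ellipticity(mask):
    """e = 1 - b/a from the second central moments of a thresholded
    pixel mask; a and b are the sqrt of the covariance eigenvalues."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    cov = np.array([[(x * x).mean(), (x * y).mean()],
                    [(x * y).mean(), (y * y).mean()]])
    lo, hi = np.linalg.eigvalsh(cov)      # eigenvalues in ascending order
    return 1.0 - np.sqrt(lo / hi)

yy, xx = np.mgrid[0:101, 0:101]
disc    = (xx - 50) ** 2 + (yy - 50) ** 2 < 20 ** 2           # round blob
ellipse = ((xx - 50) / 2.0) ** 2 + (yy - 50) ** 2 < 20 ** 2   # 2:1 axis ratio
print(ellipticity(disc), ellipticity(ellipse))  # ~0.0 and ~0.5
```

A round image gives $`ϵ0`$ regardless of its size, so the cut rejects blends and edge-on galaxies without discriminating against extended but round cluster images.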
A powerful diagnostic to test for the extended nature of the candidates is the measured COSMOS image area as a function of magnitude. Figs. 6 and 7 show the logarithm of the image area (in pixels) against magnitude for COSMOS objects in the magnitude range $`16\le B\le 22`$. The sharply defined sequence is the stellar locus arising from foreground stars. Extended objects show an excess of area for their magnitude, and are raised from this sequence to some degree. Spectroscopically identified objects from this work which were located in the COSMOS data are shown. Da Costa & Graham (1982) identify three bright star clusters with $`V\sim 17`$ near the centre of NGC 55, one of which was returned in the COSMOS data and is indicated in Fig. 6. The two remaining clusters, along with a number of cluster candidates, were not recovered. These objects are all seen in projection close to the disks of the two galaxies, and were presumably lost in the bright local background. Also, it should be noted that whilst the spectroscopic sample went out to radii comparable with those of the plate scans, NGC 253 was slightly offset from the centre of the COSMOS field, and consequently some candidates fell outside the scanning limits.
In order to quantify the excess in area shown by genuinely extended objects, and to differentiate between object types (stars, galaxies and globular clusters), a line was fitted to the stellar sequence and a residual area, $`\delta A`$, calculated for each object. Figs. 8 and 9 show the results of this exercise, where $`\delta A=`$ 0 corresponds to the stellar locus. Several points are evident from these figures. The scatter in $`\delta A`$ is somewhat greater in NGC 253 than in NGC 55, with the stellar locus being less well constrained (reflecting perhaps the larger photometry residuals – see $`\mathrm{\S }4`$). Those objects which lie on the stellar locus all have radial velocities consistent with foreground stars, and appear point–like on Sky Survey images. Galaxies are located well above the stars, and follow a similar distribution for both fields, with their residual areas increasing strongly as a function of magnitude. Identified globular clusters largely inhabit a parameter space *between* the stars and galaxies, though some overlap is apparent.
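The locus-fitting and residual-area selection described above can be sketched compactly. The catalogue below is synthetic (the slope, scatter and offsets are illustrative, not the COSMOS values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalogue: stars define a tight log(area)-magnitude sequence,
# extended objects sit 0.2-0.8 dex above it.
mag_star  = rng.uniform(16.0, 22.0, 500)
logA_star = 6.5 - 0.25 * mag_star + rng.normal(0.0, 0.03, 500)
mag_ext   = rng.uniform(16.0, 20.5, 40)
logA_ext  = 6.5 - 0.25 * mag_ext + rng.uniform(0.2, 0.8, 40)

# Fit the stellar locus and measure the residual area dA for every object
coef = np.polyfit(mag_star, logA_star, 1)
dA_star = logA_star - np.polyval(coef, mag_star)
dA_ext  = logA_ext  - np.polyval(coef, mag_ext)

# Candidate cut: more than 3 sigma above the stellar locus and B <= 20.5
cut = 3.0 * dA_star.std()
selected = (dA_ext > cut) & (mag_ext <= 20.5)
print(selected.sum(), "of", len(mag_ext), "extended objects selected")
```

With a well-populated stellar sequence the fitted slope and the 3 $`\sigma `$ threshold are recovered robustly, which is what makes the $`\delta A`$ diagnostic quantitative rather than visual.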
Overplotted are the model predictions of Harris et al. \[Harris et al. 1984\], scaled to the distance of the Sculptor group. These are produced by applying the COSMOS algorithm to a set of model cluster images with a range of magnitudes. The model clusters are all generated with King profiles \[King 1962\]. Those shown in Figs. 8 and 9 have $`r_t/r_c=30`$, though models with other concentration parameters ($`r_t/r_c=6`$–$`100`$) show similar characteristics. Model clusters shown with size = 1 possess core radii of 1.5 pc and tidal radii of 45 pc (0.15 and 4.6 arcsec respectively at 2 Mpc). Sizes 2 and 3 are correspondingly twice and three times as large. Six of the spectroscopically identified globular clusters in NGC 253 lie below the size = 3 model line, with the remaining three lying marginally above this. The apparent size of clusters on the sky will be sensitive to the assumed distance to the two galaxies, and as pointed out by Harris et al. \[Harris et al. 1984\], each plate exhibits slightly different behaviour due to saturation effects and its individual isophotal threshold (in fact they indicate that magnitudes can be in error by as much as 0.6 mag for the most extended globular cluster candidates). The cluster of Da Costa & Graham (1982) shown in Fig. 8 shows characteristics more like those of the galaxies than of the other globular clusters, though DSS images of this object show an extremely bright and crowded background which could possibly confuse the analysis software.<sup>8</sup><sup>8</sup>8There is little chance of a misidentification by Da Costa & Graham between their globular cluster and a background galaxy. They obtain a radial velocity for their cluster of $`v_\mathrm{h}=106\pm 8`$ kms<sup>-1</sup> and it is seen in projection against the stellar light of NGC 55.
The regions of parameter space which should contain many globular clusters are delimited in Figs. 8 and 9 by the long dashed lines. The lower limit (above the stellar locus) is set to be 3 $`\sigma `$ from the mean stellar area residual. The upper limit corresponds to approximately size = 4 model clusters and was chosen so as to include as many cluster candidates as possible, whilst minimizing contamination from background galaxies. The degree of crowding increases significantly at fainter magnitudes, so a magnitude cut at $`m_B=`$ 20.5 has also been applied. As a final measure, images of each of the candidates have been visually examined for obvious stellar appearance or any structure indicative of a background galaxy. Some 20 objects in total were identified as being obviously ’non–cluster–like’ and were removed from the sample. From the initial datasets, 91 cluster candidates have been identified in NGC 253 and 84 in NGC 55. Table 6 lists the positions and approximate $`B`$ magnitudes of the new cluster candidate samples.
An idea of the contamination of the new globular cluster samples by background galaxies can be gained by scaling the relative numbers of spectroscopically identified objects within the defined parameter space. For the NGC 253 sample, there are 7 globular clusters and 7 background galaxies. Assuming that there are no foreground stars in the new cluster samples, we expect to obtain $`\sim `$ 45 clusters in the new NGC 253 sample. In the case of NGC 55, there are 6 galaxies and one globular cluster within the sample limits, giving an expected $`\sim `$ 14 globular clusters in our sample.
The specific frequencies of the two galaxies may be calculated from an estimate of their total cluster populations. The turnover in the Galactic globular cluster luminosity function (GCLF) occurs at $`M_V`$ $`=`$ -7.6, $`\sigma =`$ 1.2 \[Harris 1999\]. At a distance of 2.5 Mpc for the Sculptor group, this corresponds to $`B=19.9`$ ($`BV=0.5`$). The cluster candidate selection cut therefore reaches some $`\sim `$ 0.6 magnitudes past the turnover. This implies a total cluster population of approximately 60 for NGC 253, which, with $`M_V`$ $`=`$ -20.0, yields a specific frequency $`S_\mathrm{N}`$ $`=`$ 0.6. Applying the same arguments to NGC 55, an SB(s)m galaxy with $`M_V`$ $`=`$ -19.5, yields an expected total cluster population of approximately 20 and $`S_\mathrm{N}`$ $`\sim `$ 0.3.
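These numbers can be verified with the standard specific-frequency definition, $`S_\mathrm{N}=N_t\mathrm{10}^{0.4(M_V+15)}`$ (Harris & van den Bergh 1981):

```python
import math

def specific_frequency(n_total, M_V):
    """S_N = N_t * 10**(0.4 * (M_V + 15)), Harris & van den Bergh (1981)."""
    return n_total * 10 ** (0.4 * (M_V + 15.0))

# GCLF turnover (M_V = -7.6) at the adopted Sculptor distance of 2.5 Mpc:
dist_mod = 5 * math.log10(2.5e6) - 5          # distance modulus, ~27.0
B_turnover = -7.6 + dist_mod + 0.5            # + (B-V) = 0.5

print(round(B_turnover, 1))                       # 19.9
print(round(specific_frequency(60, -20.0), 1))    # 0.6  (NGC 253)
print(round(specific_frequency(20, -19.5), 1))    # 0.3  (NGC 55)
```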
## 6 Conclusions
We have identified 14 globular clusters in the spiral galaxy NGC 253, and one possible globular cluster belonging to NGC 55. Using digitized plate scans combined with spectroscopically identified stars, galaxies and globular clusters has allowed us to create new samples for the two galaxies which we expect will contain many new globular clusters.
Automated image searching techniques provide an efficient and, more importantly, quantitative way of identifying globular clusters from digitized wide–field photographic plates. Nevertheless, locating a small number of objects which can exhibit a range of properties within an initially large dataset is not straightforward, and automated searches *still* need to be supplemented by visual examination to minimize contamination from other sources. The level of contamination in the Liller & Alcaino (1983a,b) and Blecha (1986) samples indicates the difficulty faced when undertaking searches of this type based on geometrical and/or photometric properties, especially in relatively poor cluster systems.
# Dopant-induced crossover from 1D to 3D charge transport in conjugated polymers
## Abstract
The interplay between inter- and intra-chain charge transport in bulk polythiophene in the hopping regime has been clarified by studying the conductivity $`\sigma `$ as a function of frequency $`\omega /2\pi `$ (up to 3 THz), temperature $`T`$ and doping level $`c`$. We present a model which quantitatively explains the observed crossover from quasi-one-dimensional transport to three-dimensional hopping conduction with increasing doping level. At high frequencies the conductivity is dominated by charge transport on one-dimensional conducting chains.
PACS numbers: 71.20.Rv, 71.55.Jv, 72.60.+g, 72.80.Le
The charge transport mechanisms in conjugated polymers, although extensively studied over the last two decades, are still far from completely understood. Not only the behavior around the insulator-to-metal transition (IMT), which can be induced in several polymer materials upon appropriate doping, but also the nature of hopping transport in the deeply insulating regime are not yet resolved. While some studies indicate that transport is dominated by hops between three-dimensional (3D), well conducting regions, in other cases the strongly one-dimensional (1D) character of the polymer systems appears to be a crucial factor.
In investigating the nature of hopping transport in conjugated polymers, studying the temperature and doping level dependence of the DC conductivity is an important tool. Since the DC conductivity is determined by the weakest links in the conducting path spanning the sample, the study of $`\sigma _{DC}(T)`$ gives insight in the slowest relevant transport processes in the system.
On the insulating side of the IMT, the DC conductivity is predicted by many models to follow the well-known hopping expression
$$\sigma _{DC}=\sigma _0e^{-(T_0/T)^\gamma }$$
(1)
where the value of $`\gamma `$ and the interpretation of $`T_0`$ depend on the details of the model. The original Mott theory for 3D variable range hopping with a constant density of states (DOS) at the Fermi energy predicts $`\gamma =1/4`$, while several modifications of the model have been proposed to describe the frequently observed value $`\gamma =1/2`$. Studying the dependence of $`\gamma `$ and $`T_0`$ on doping level $`c`$ provides the opportunity to discriminate between the various hopping models and extract parameters determining the conductive properties like the DOS and the localization length.
While the DC conductivity is sensitive to the slowest transport processes, the AC conductivity $`\sigma (\omega )`$ provides information about processes occurring at time scales $`\tau \sim \omega ^{-1}`$. Especially in conjugated polymers, where intra-chain and inter-chain transition rates may differ by orders of magnitude, knowledge of $`\sigma (\omega )`$ at high frequencies can help to clarify the properties of charge transport on a polymer chain.
In this Letter, we present a systematic study of the charge transport in a conjugated polymer far away from the IMT, as a function of frequency, temperature and doping level. By selecting a polymer system with very low inter-chain mobility, a separation of inter-chain and intra-chain contributions to the conductivity can be made when the applied frequency is varied over 12 decades. At low frequencies, transport between chains is studied, while at frequencies well above the inter-chain transition rate, intra-chain conduction is probed.
Experimental – The experiments were performed on discs (thickness between 0.4 and 1.0 mm) of pressed powders of the conjugated polymer poly(3,4-di-\[(R,S)-2-methylbutoxy\]thiophene), abbreviated as PMBTh, the synthesis of which is described elsewhere. The samples were doped with FeCl<sub>3</sub> in a dichloromethane solution at doping levels $`0.01<c<0.22`$; here $`c`$ is the number of doped carriers per thiophene ring, which was determined with the aid of Mössbauer spectroscopy. After doping, the solvent was evaporated and the resulting powders were vacuum dried overnight. The conductivity of the samples remained unchanged in an ambient atmosphere for several weeks. Contact resistances were less than 5% of the sample resistance. The conductivity data were taken in the range 5 Hz–3 THz with the aid of several experimental methods, which are described elsewhere. The conductivity in the range 0.3–3 THz was determined from the transmission measured with a Bruker FTIR spectrometer.
DC conductivity – The temperature-dependent static conductivity of samples with doping levels ranging from 0.01–0.22 is plotted in Fig. 1. Here $`\mathrm{log}\sigma _{DC}`$ is plotted vs. $`T^{-1/2}`$, so that data sets following Eq. (1) with $`\gamma =1/2`$ fall on a straight line. In the inset, the value of $`\gamma `$ is plotted vs. $`c`$. The exponent $`\gamma `$ has been determined by plotting the logarithm of the reduced activation energy $`W=d\mathrm{ln}\sigma /d\mathrm{ln}T`$ versus $`\mathrm{ln}T`$ and fitting it to a straight line; the slope of this line gives $`\gamma `$. The data show a clear transition in the DC behavior as a function of doping level around $`c_0=0.12`$. For low doping $`c<c_0`$, $`\gamma `$ values are grouped around $`1/2`$, whereas for $`c>c_0`$, $`\gamma `$ is close to the Mott value $`1/4`$. The conductivity data for $`c<c_0`$ are now fitted to Eq. (1) with $`\gamma `$ fixed at $`1/2`$ (solid lines), while the data for $`c>c_0`$ are fitted with a fixed $`\gamma =1/4`$ (dashed lines). Note that as the conductivity is many orders of magnitude below the minimum metallic conductivity of $`\sim `$ 100 S/cm and depends very strongly on $`T`$, PMBTh is far away from the IMT at all doping levels.
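The reduced-activation-energy method used here can be demonstrated on synthetic data. For Eq. (1), $`W=d\mathrm{ln}\sigma /d\mathrm{ln}T=\gamma (T_0/T)^\gamma `$, so $`\mathrm{ln}W`$ versus $`\mathrm{ln}T`$ is a straight line of slope $`\gamma `$. The parameter values below are illustrative:

```python
import numpy as np

gamma_true, T0 = 0.5, 4000.0           # illustrative hopping parameters
T = np.linspace(20.0, 300.0, 200)      # temperature grid, K
ln_sigma = -(T0 / T) ** gamma_true     # ln(sigma / sigma_0) from Eq. (1)

# Reduced activation energy W = d ln(sigma) / d ln(T), computed numerically
W = np.gradient(ln_sigma, np.log(T))
slope, _ = np.polyfit(np.log(T), np.log(W), 1)
print(round(-slope, 2))                # 0.5: the input gamma is recovered
```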
Many authors have reported the conductivity in conjugated polymers to follow Eq. (1) with $`\gamma =1/2`$, and several models have been proposed to explain this value. In disordered systems, the single particle DOS around the Fermi energy has a parabolic shape when long-range Coulomb interactions between charge carriers are dominant. Inserting a quadratic DOS in the original Mott argument yields the exponent $`\gamma =1/2`$. Data on various conjugated polymers close to the IMT have been interpreted within this Efros and Shklovskii Coulomb-gap model. Alternatively, it has been argued that polymer materials can be viewed as 3D granular metallic systems when the strong, inhomogeneous disorder in polymer materials leads to the formation of well conducting 3D regions separated by poorly conducting barriers. For granular metals, $`\sigma _{DC}(T)`$, though still not completely understood, is widely accepted to follow Eq. (1) with $`\gamma =1/2`$. Close to the IMT both models predict a crossover from $`\gamma =1/2`$ to $`\gamma =1/4`$. Such transitions have indeed been observed in a number of systems . However, because PMBTh is far away from the IMT at all $`c`$, these models cannot be applied here.
To explain our data we will use another approach, which is an extension of the quasi-1D hopping model of Nakhmedov et al.. In their model, the charge carriers are supposed to be strongly localized on single chains (or 1D bundles of chains). Variable-range hopping is possible along the chains, while perpendicular to the chains, where the inter-chain overlap is vanishingly small, only nearest-neighbor hopping is allowed. Although the quasi-1D model predicts $`\gamma =1/2`$ strictly speaking only for the anisotropic effective conductivity perpendicular to the chains, it has been successfully applied to randomly oriented polymer systems. In a closely related approach, the polymer system is viewed as a fractal structure with a fractal dimension $`d_f`$ slightly greater than one. The static conductivity on such a nearly 1D fractal is also calculated to follow Eq. (1) with $`\gamma =1/2`$.
Model – To include a transition from quasi-1D hopping to 3D hopping as a function of doping level, the models mentioned above need to be extended. In the quasi-1D model such a transition is expected when the transverse overlap is sufficiently increased. In the nearly 1D fractal model, an increase in the fractal dimension would eventually lead to a decrease of $`\gamma `$ down to $`1/4`$. So in both cases, a transition to 3D hopping is induced by an increase of the inter-chain connectivity. Before we present our calculations, let us show how this can be qualitatively understood. Upon chemically doping a conjugated polymer, not only charge carriers, but also dopant counter-ions are added to the system. The counter-ions locally decrease the inter-chain potential seen by the carriers, and thus considerably enhance the hopping rate. Since the Fermi energy is shifted upwards upon doping, the counter-ions must even be considered as hopping sites when the distance of the dopant site energy to the Fermi level becomes of the order of the hopping energy. In this view, the sharply defined crossover at $`c_0`$ can be interpreted as the point where the dopant sites start to play an active role in the hopping process, thereby enhancing the density of hopping sites and making variable range hopping in the direction perpendicular to the chains possible.
We now show quantitatively that the conductivity in both doping regimes and the crossover with doping level can be explained in a single variable-range hopping model. We assume hops within an energy interval $`E`$ over a distance $`(X,Z)`$ in the combined parallel ($`x`$) and orthogonal ($`z`$) directions, with localization lengths $`L_x`$ and $`L_z`$ respectively. For an electron on a chain, the local DOS $`n`$ contains two contributions, $`n_0`$ from the chain itself and $`n_1`$ from the neighboring chains and from intermediate dopant sites. As the latter contribution depends more strongly on $`c`$ than the former, $`n_1/n_0`$ rises with doping level. Following the usual variable range hopping arguments, the conductivity $`\sigma \propto \mathrm{exp}[-2X/L_x-2Z/L_z-E/k_BT]`$ should be maximized under the condition $`2X(2Z)^2nE\simeq 1`$, where $`n`$ is the DOS averaged over the volume $`(2X,2Z,2Z)`$, for which we write $`n=n_0(L_z/2Z)^2+n_1`$. We introduce $`\xi =2X/L_x`$, $`\zeta =2Z/L_z`$ and $`ϵ=E/k_BT`$ and note that $`n`$ does not depend on $`X`$, leading to $`ϵ=\xi `$, i.e. $`\sigma \propto \mathrm{exp}[-2\xi -\zeta ]`$. Optimizing in the ‘high doping’ limit $`\zeta ^2\gg n_0/n_1`$ we find $`2\xi =\zeta `$ and $`\xi ^4=1/(4k_BTL_xL_z^2n_1)`$, which gives $`\sigma \propto \mathrm{exp}[-4\xi ]=\mathrm{exp}[-(T_0^{high}/T)^{1/4}]`$ with $`T_0^{high}=64/(k_BL_xL_z^2n_1)`$. For the ‘low-doping’ regime $`\zeta ^2\ll n_0/n_1`$, we get $`\xi ^2=1/(k_BTL_xL_z^2n_0)`$ and $`\zeta =0`$, which leads to $`\sigma \propto \mathrm{exp}[-2\xi ]=\mathrm{exp}[-(T_0^{low}/T)^{1/2}]`$ with $`T_0^{low}=4/(k_BL_xL_z^2n_0)`$. Note that although in this limit the $`T`$-dependence ($`\gamma =1/2`$) is determined by the dominating hops in the chain direction, occasional hops between chains will still happen. In the high doping regime, hops in all directions are equally likely and become long-ranged at low $`T`$.
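The optimization can be checked numerically. The sketch below works in reduced units ($`k_BL_xL_z^2=1`$, an assumption made for illustration only): the constraint fixes $`ϵ`$ as a function of $`\xi `$ and $`\zeta `$, and a grid search minimizes the hopping exponent, recovering $`\gamma =1/4`$ when $`n_1`$ dominates and $`\gamma =1/2`$ when $`n_0`$ dominates:

```python
import numpy as np

def min_exponent(T, n0, n1):
    """Minimum of the hopping exponent u = xi + zeta + eps on a grid,
    with eps = 1 / (T * xi * (n0 + n1*zeta**2)) from the site-counting
    constraint (reduced units k_B * Lx * Lz^2 = 1)."""
    xi = np.logspace(-2, 2, 400)[:, None]
    zeta = np.logspace(-2, 2, 400)[None, :]
    eps = 1.0 / (T * xi * (n0 + n1 * zeta ** 2))
    return (xi + zeta + eps).min()

Ts = np.logspace(-3, 0, 12)

# High-doping limit (n1 dominates): expect u ~ T**(-1/4), i.e. gamma = 1/4
u_hi = [min_exponent(T, n0=0.0, n1=1.0) for T in Ts]
s_hi, _ = np.polyfit(np.log(Ts), np.log(u_hi), 1)

# Low-doping limit (n0 dominates): expect u ~ T**(-1/2), i.e. gamma = 1/2
u_lo = [min_exponent(T, n0=1.0, n1=0.0) for T in Ts]
s_lo, _ = np.polyfit(np.log(Ts), np.log(u_lo), 1)

print(round(-s_hi, 2), round(-s_lo, 2))   # 0.25 0.5
```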
AC conductivity – In Fig. 2, $`\sigma (\omega )`$ at room temperature is plotted for three samples with doping levels $`0.03<c<0.22`$. At low frequencies, the conductivity is seen to be independent of frequency and equal to the DC value. At the onset frequency $`\omega _0/2\pi \sim 1`$ MHz, the conductivity starts to rise, following an approximate power law $`\sigma \propto \omega ^s`$ with $`s<1`$. An extra upturn in the conductivity is observed around a second frequency $`\omega _1/2\pi \sim 10`$ GHz. The temperature dependence of the high frequency (200–600 GHz) conductivity was also measured between 4 and 300 K (not shown), revealing that the frequency dependence $`\sigma \propto \omega ^s`$ with $`s\simeq 1.6`$ is independent of temperature. The absolute value of the conductivity shows only a weak (30%) decrease going from 300 to 150 K, and is constant with temperature below 150 K.
As was discussed above, the conductivity of a system of coupled polymer chains generally consists of contributions due to both inter- and intra-chain transport. When the chains are only weakly coupled, i.e. the inter-chain hopping rate $`\mathrm{\Gamma }_{inter}`$ is low, the conductivity $`\sigma (\omega )`$ at frequencies $`\omega \gg \mathrm{\Gamma }_{inter}`$ is dominated by charge transport processes within single polymer chains. It is well known that in a 1D chain any impurity causes the states to be localized, so the chain has zero conductivity in the limit $`\omega \to 0`$, $`T\to 0`$. At $`T\to 0`$ and $`\omega >0`$, the conductivity of the chain is finite, stemming from resonant photon-induced transitions between the localized states, and is at low frequencies $`\omega \tau \ll 1`$ given by
$$\sigma _0(\omega )=\frac{4}{\pi \hbar b^2}e^2v_F\tau (\omega \tau )^2\mathrm{ln}^2(1/\omega \tau )$$
(2)
where $`v_F`$ is the Fermi velocity on the chain, $`\tau `$ is the backward scattering time and $`b`$ is the inter-chain separation. Following Ref., this may be written as $`\sigma _0(\omega )=(\pi /2b^2)e^2g_0^2L_x^3\hbar \omega ^2\mathrm{ln}^2(1/\omega \tau )`$, where $`g_0=n_0L_z^2`$ is the on-chain DOS per unit length. At finite temperatures, an extra contribution is present due to phonon assisted hopping within the chain. The phonon assisted conductivity is given by the 1D pair approximation
$$\sigma _1(\omega ,T)=\frac{\pi ^3}{128b^2}e^2g_0^2L_x^3k_BT\omega \mathrm{ln}^2(\nu _{ph}/\omega )$$
(3)
valid for frequencies $`\omega `$ below the phonon ‘attempt’ frequency $`\nu _{ph}`$. The total conductivity at temperature $`T`$ is now given by the sum of the two contributions, $`\sigma (\omega ,T)=\sigma _0(\omega )+\sigma _1(\omega ,T)`$ given by Eqs. (2) and (3). The data at $`\omega /2\pi >10`$ MHz have been fitted with this $`\sigma (\omega )`$, as is shown in the inset of Fig. 2. Here, the dielectric loss function $`ϵ^{\prime \prime }(\omega )=\sigma (\omega )/ϵ_0\omega `$ is plotted at frequencies between 10 MHz and 3 THz, together with the fitting line. The fits are excellent at high frequencies $`\omega /2\pi >200`$ MHz. The deviations below 200 MHz indicate that either multiple intra-chain hopping or inter-chain transitions have significant contributions in the MHz regime, which is consistent with the variable range hopping description at low frequencies.
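The two-component fit function is straightforward to evaluate; the sketch below follows Eqs. (2) and (3) in the DOS form quoted above (SI units), and the parameter values used in checking it are illustrative rather than the fitted ones:

```python
import math

HBAR, KB, E_CH = 1.055e-34, 1.381e-23, 1.602e-19  # SI constants

def sigma0(w, g0, Lx, b, tau):
    """Resonant intra-chain term in its DOS form:
    (pi/2 b^2) e^2 g0^2 Lx^3 hbar w^2 ln^2(1/(w*tau)); needs w*tau << 1."""
    return (math.pi / 2) * (E_CH ** 2 / b ** 2) * g0 ** 2 * Lx ** 3 \
        * HBAR * w ** 2 * math.log(1.0 / (w * tau)) ** 2

def sigma1(w, T, g0, Lx, b, nu_ph):
    """Phonon-assisted intra-chain hopping, Eq. (3); needs w < nu_ph."""
    return (math.pi ** 3 / 128) * (E_CH ** 2 / b ** 2) * g0 ** 2 * Lx ** 3 \
        * KB * T * w * math.log(nu_ph / w) ** 2

def sigma_total(w, T, g0, Lx, b, tau, nu_ph):
    """Total intra-chain conductivity sigma0 + sigma1."""
    return sigma0(w, g0, Lx, b, tau) + sigma1(w, T, g0, Lx, b, nu_ph)
```

The resonant term carries the $`\omega ^2\mathrm{ln}^2`$ rise that dominates at high frequency and low temperature, while the phonon term supplies the part linear in $`T`$ and (up to logarithms) in $`\omega `$.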
Parameter values – From the fits of the AC data, the parameters $`\tau =10^{-14}`$ s and $`\nu _{ph}=2\times 10^{12}`$ s<sup>-1</sup> are extracted. While the phonon frequency $`\nu _{ph}`$ is in good agreement with commonly suggested values of $`10^{12}`$ s<sup>-1</sup>, the scattering time $`\tau `$ is an order of magnitude longer than reported values for other (highly conducting) conjugated polymers, which is likely due to a smaller $`v_F`$ resulting from the low band filling. The typical time scales now follow directly from the fits, since phonon mediated hops between two sites within a chain occur at rates up to $`\mathrm{\Gamma }_{ph}=\nu _{ph}\mathrm{exp}[-2X/L_x]`$. With $`X/L_x\approx 1`$, we have $`\mathrm{\Gamma }_{ph,max}\approx 10^{11}`$ s<sup>-1</sup>, equivalent to a local diffusion constant $`D=10^{-7}`$ m<sup>2</sup>/s.
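The quoted rate and diffusion constant are one-line order-of-magnitude estimates; the arithmetic (with the fitted $`\nu _{ph}`$ and the $`L_x`$ obtained below) is:

```python
import math

nu_ph = 2e12      # phonon attempt frequency from the fit, s^-1
L_x = 10e-10      # on-chain localization length, m
X = L_x           # the fastest phonon-assisted hops have X ~ L_x
gamma_max = nu_ph * math.exp(-2.0 * X / L_x)  # ~ 3e11 s^-1, i.e. of order 1e11
D_local = gamma_max * X ** 2                  # ~ 3e-7 m^2/s, i.e. of order 1e-7
```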
Assuming the localization lengths $`L_x`$ along the chain and $`L_z`$ perpendicular to the chain to be independent of doping level, they can be determined from the combined DC and high frequency conductivity data. From the samples in the low-doping regime, $`L_x`$ and $`g_0`$ can be extracted using $`T_0^{low}=4/(k_BL_xg_0)\approx 10^5`$ K and Eq. (3). This gives $`L_x=10`$ Å, indicating that carriers are localized on the chains in regions consisting of two to three rings; furthermore $`g_0=0.1`$ levels/(eV ring) for $`c=0.03`$. For the typical hopping distance along the chain we find $`X=(L_x/4)(T_0^{low}/T)^{1/2}=50`$ Å. At the onset of the high-doping regime $`\zeta ^2\simeq 2n_0/n_1`$; using $`Z\approx b`$ and $`T_0^{high}=64/(k_BL_xL_z^2n_1)\approx 10^8`$ K, we find $`L_z=1.4`$ Å, close to reported values for other conjugated polymers lying deep in the insulating regime, and $`n_1=2\times 10^{26}`$ eV<sup>-1</sup>m<sup>-3</sup>, implying a DOS of 0.6 states per eV per dopant molecule. Within the localized regions, the carriers move in the chain direction with velocity $`v_x`$, which can be estimated using the fact that in 1D conductors the mean free path $`l_x=v_x\tau \approx L_x`$; this gives $`v_x\approx 10^5`$ m/s, similar to values observed in 1D organic conductors and other conjugated polymers.
In summary, we measured $`\sigma (\omega )`$ in a conjugated polythiophene with small inter-chain overlap. We developed a model that allows a consistent analysis of the $`\sigma (\omega )`$ data in terms of inter- and intra-chain transport. From the low frequency results we have found that carriers are strongly localized on 1D chains with $`L_z=1.0`$ Å, and no 3D metallic islands are present. The high frequency data show 1D transport along the polymer chains with a scattering time $`\tau =10^{-14}`$ s, while intra-chain phonon assisted hopping proceeds at rates $`\mathrm{\Gamma }_{ph}\approx 10^{11}`$ s<sup>-1</sup>.
It is a pleasure to acknowledge B.F.M. de Waal who prepared the undoped polythiophene samples, G.A. van Albada who assisted in the far-infrared experiments, A. Goossens who performed the Mössbauer measurements, and L.J. de Jongh and O. Hilt who were involved in the discussions. This research is sponsored by the Stichting Fundamenteel Onderzoek der Materie, which is a part of the Dutch Science Organization.
# A New Angle on Intersecting Branes in Infinite Extra Dimensions
## 1 Introduction
String/M-theory requires the existence of nine or ten spatial dimensions. Traditionally it has been assumed that all but three spatial dimensions are compact, with size comparable to the Planck length. Recently it has been proposed that at least some of the extra dimensions are compact but very large, with size between a Fermi and a millimeter<sup>1</sup><sup>1</sup>1For earlier work on TeV<sup>-1</sup> sized new dimensions, see ref. , but that the fields of the Standard Model are confined to a $`3+1`$ dimensional subspace, or “3-brane”. Large new dimensions in which the standard model fields do not propagate may allow for the unification of gravity with the other forces and strong quantum gravity effects below the Planck scale, perhaps as low as a few TeV. An alternative picture due to Randall and Sundrum (RS) also assumes that the Standard Model is confined to a 3-brane, but allows for a non-factorizable metric with $`4`$ dimensional Minkowski space embedded in a slice of $`5`$ dimensional Anti-deSitter (AdS) space<sup>2</sup><sup>2</sup>2These solutions parallel earlier solutions found in 4 dimensions reviewed in ref. . The exponential dependence of the coefficient of the 4 dimensional metric tensor on an additional coordinate (the “warp factor”) can provide an explanation of the weak/Planck hierarchy without a large extra dimension. Randall and Sundrum have pointed out that such a warp factor also allows for a non-compact extra dimension, with a normalizable 0-mode of the graviton providing effective 3+1 dimensional Einsteinian gravity at long distances, even though there is a continuous, gapless spectrum of Kaluza-Klein gravitational modes<sup>3</sup><sup>3</sup>3See also refs. . Such a normalizable zero mode is a general feature of any space-time metric that has a 3+1 dimensional Minkowski subspace and a warp factor which decreases sufficiently fast in all new directions.
Arkani-Hamed, Dimopoulos, Dvali and Kaloper (ADDK) have extended the RS work to $`n`$ intersecting $`(2+n)`$ branes separating sections of $`4+n`$ dimensional AdS space<sup>4</sup><sup>4</sup>4As this work was being completed a paper appeared which further generalizes the RS solution.. Both the RS and the ADDK solutions require a specific relationship between the brane tensions and the bulk cosmological constants. In addition, the ADDK solution requires that there is no additional contribution to the tension of the intersection from brane-brane interactions. In this note I further generalize the ADDK solution for the case of 2 new dimensions, to the case of AdS space containing $`n`$ AdS branes with almost arbitrary brane tension meeting at a 3+1 dimensional Minkowski junction with localized gravity. One fine-tuning is required as there is a constraint on the value of the brane-brane interaction contribution to the tension of the intersection.
## 2 AdS Branes in AdS Space Intersecting at Angles
The following metric is a solution to Einstein's equations in 5+1 dimensions with a bulk cosmological constant and a 4-brane located along the $`y`$ axis.
$`ds^2`$ $`=`$ $`\mathrm{\Omega }^2(\eta _{\mu \nu }dx^\mu dx^\nu +dy^2+dz^2)`$ (1)
$`\mathrm{\Omega }`$ $`=`$ $`1/\left[1+k\left(y\mathrm{cos}\varphi +|z|\mathrm{sin}\varphi \right)\right]`$
Here the usual 3+1 dimensions are denoted by $`x^\mu `$ and the 2 new dimensions by $`y`$ and $`z`$. The brane glues together 2 semi-infinite slices of Anti-deSitter (AdS) space. In units of the 6 dimensional Newton’s constant, the bulk cosmological constant is $`-10k^2`$ while the brane tension is $`8k\mathrm{sin}\varphi `$. For $`\varphi =90^o`$ this solution trivially generalizes the Randall-Sundrum metric to one additional dimension, and the induced metric on the brane is 5 dimensional Minkowski. Otherwise the induced metric is 5 dimensional AdS. Note that the angle $`\varphi `$ is determined by the relationship between 2 apparently nondynamical parameters: the brane tension and the bulk cosmological constant. Now consider $`n`$ branes, intersecting on a 3+1 dimensional subspace. Between adjacent pairs of branes is a wedge of 6 dimensional AdS space. The metric within one wedge may be written as in eq. 1 in a coordinate system where there is one semi-infinite brane located along the positive $`y`$ axis, and another extending from the origin in the $`z>0`$ half-plane along the line $`y\mathrm{cos}(2\varphi )-z\mathrm{sin}(2\varphi )=0`$. The metric is reflection symmetric about the line $`y\mathrm{cos}(\varphi )-z\mathrm{sin}(\varphi )=0`$. For $`\varphi =45^o`$, a solution pasting together 4 such wedges was found by ADDK. The ADDK solutions require a fine-tuned relation between the bulk cosmological constant and the brane tension, and also that there be no brane-brane interaction tension localized at the intersection. Generally, however, one would not expect $`\varphi =180^o/n`$ for integer $`n`$. One could still patch together $`n`$ sections of six dimensional AdS space along $`n`$ AdS 4-branes, by allowing a global “deficit angle” (in analogy with the case of the gauge string metric) of $`360^o-2n\varphi `$.
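The deficit-angle bookkeeping is elementary; a small illustrative sketch (angles in degrees):

```python
def deficit_angle_deg(n, phi_deg):
    """Global deficit angle left after gluing n AdS wedges of opening
    angle 2*phi between n branes meeting at the 3+1 dimensional junction."""
    return 360.0 - 2.0 * n * phi_deg
```

For the ADDK case $`n=4`$, $`\varphi =45^o`$ the deficit vanishes and no extra tension is needed at the intersection; a generic $`\varphi `$ leaves a nonzero deficit, which must be supplied by the brane-brane interaction contribution to the tension of the intersection.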
In a spacetime patch around each brane, in a coordinate system rotated such that the brane lies along the positive $`y`$ axis, the metric is given by eq. 1. Thus this patched-together metric is automatically a solution to Einstein’s equations everywhere except in the core of the intersection. For a global solution to exist the brane-brane interaction must produce a specific contribution to the tension of the intersection region, needed to match the global deficit angle. Thus it still appears that one fine tuning of a nondynamical parameter (or a remnant supersymmetry) is required. Perhaps a better understanding of brane interactions will shed some light on the cosmological constant question.
## 3 Outlook on Extra Dimensions and the Cosmological Constant
The idea that noncompact higher dimensions with a nonfactorizable spacetime might be consistent with our successful effective long distance theories and explain why the cosmological constant is zero has been around for a long time. The hope is that the metric can adjust such that in equilibrium an apparent cosmological term will only affect the part of the metric corresponding to the additional dimensions. For instance in the case of a gauge string, the vacuum energy of any effective degrees of freedom which live only on the string contributes to the string tension, but the induced metric on the string remains Minkowski, with any change in the string tension simply changing the deficit angle. Extra dimensional theories typically have new light degrees of freedom. New light fields are needed to understand any cosmological constant adjustment mechanism from an effective field theory point of view. It is conceivable that new degrees of freedom associated with an extra-dimensional metric somehow cleverly avoid the usual problems of adjustment mechanisms. Since the effective four dimensional Newton’s constant typically depends on the extra dimensional metric, perhaps a higher dimensional configuration can be found where apparent low energy contributions to the four dimensional cosmological constant (e.g. the QCD phase transition) actually just renormalize Newton’s constant. The configuration described in this note does not, however, seem so clever.
Acknowledgments
While this work was in progress ref. appeared, with overlapping results and speculations. I thank David Kaplan, Patrick Fox, and Andrew Cohen for useful conversations and the Aspen Center for Physics for hospitality during the inception of this work. The work was supported in part by DOE grant DE-FG03-96ER40956.
ASYMPTOTIC SERIES FOR
SOME PAINLEVÉ VI SOLUTIONS
V.Vereschagin
P.O.Box 1233, ISDCT RAS, Irkutsk 664033, RUSSIA
Introduction.
The study of the asymptotic properties of Painlevé transcendents (as the independent variable tends to a singular point) is one of the most important fields in the modern theory of integrable nonlinear ODEs. The Painlevé equations are known to be integrable in the sense of commutative matrix representation (Lax pairs). One has six matrix equations
$$D_zL_j-D_xA_j+[L_j,A_j]=0,j=1,2,\mathrm{\dots },6,$$
(1)
where $`D_x=d/dx;L_j=L_j(y,y^{\prime },x,z),A_j=A_j(y,y^{\prime },x,z)`$ are $`2\times 2`$ matrices that depend rationally on the spectral parameter $`z,`$ and the j-th Painlevé equation $`y^{\prime \prime }-P_j(y,y^{\prime },x)=0`$ is equivalent to (1). The matrices $`L_j,A_j`$ were written in paper .
The goal of this paper is to analyze the asymptotic behavior of the sixth Painlevé transcendent using the so-called Whitham method. The PVI case is tedious due to the large amount of calculations, so it is easier to illustrate the basic ideas of the method (which are the same for all six equations) on the technically simplest case of PI.
The matrices $`L_1`$ and $`A_1`$ look as follows:
$$L_1=\left(\begin{array}{cc}0& 1\\ y-z& 0\end{array}\right),A_1=\left(\begin{array}{cc}-y^{\prime }& 2y+4z\\ -x-y^2+2yz-4z^2& y^{\prime }\end{array}\right).$$
(2)
Introduce now a new variable $`X`$ and replace all the variables $`x`$ explicitly entering formula (2) by $`X`$: $`L_j=L_j(y,y^{\prime },X,z),A_j=A_j(y,y^{\prime },X,z).`$ For such matrices we have the following Lemma.
Lemma 1. Let $`ϵ`$ be some real positive number. Then system
$$ϵD_zL_j-D_xA_j+[L_j,A_j]=0$$
(3)
is equivalent to system
$$D_xX=ϵ,\quad y^{\prime \prime }-P_j(y,y^{\prime },X)=0.$$
(4)
Proof can be obtained via direct computation. So, for PI the system (4) has the form
$$D_xX=ϵ,\quad y^{\prime \prime }-3y^2-X=0.$$
Calculations for the other Painlevé equations are principally analogous and can be extracted from paper .
Lemma 2. Solution of equation (3) as $`ϵ=0`$ and $`X=const`$ can be represented by the following formula:
$$y_0(x)=f_j(\tau +\mathrm{\Phi };\stackrel{\to }{a}),j=1,2,\mathrm{\dots },6,$$
(5)
where $`\tau =xU,U=U(\stackrel{\to }{a});f_j`$ are periodic functions which can be explicitly written out in terms of Weierstrass or Jacobi elliptic functions for any of the six Painlevé equations. The vector $`\stackrel{\to }{a}(X)`$ consists of parameters that determine the elliptic function $`f_j`$; $`\mathrm{\Phi }`$ is some phase shift.
The proof uses the latter equation of system (4) where $`X`$ is put to a constant value. In the case of the first Painlevé equation the function $`f_1`$ is the Weierstrass $`\wp `$-function:
$$f_1=2\wp (x+\mathrm{\Phi };g_2,g_3);\quad g_2=-X,\quad g_3=-F_1/4,$$
(6)
where $`F_1`$ is some parameter. The formula (6) was first written out in paper .
Now admit that the number $`ϵ`$ is positive and small. We look for solutions to equation (3) in the form of a formal series in the parameter $`ϵ:`$
$$y(x)=y_0(x)+ϵy_1(x)+\mathrm{},$$
(7)
so that parameters determining the elliptic function $`y_0=f_j`$ obey some special nonlinear ODE usually called Whitham equation or modulation equation. Thus, we look for the main term of series (7) in the form
$$y_0(\tau ,X)=f_j(ϵ^{-1}S(X)+\mathrm{\Phi }(X);\stackrel{\to }{a}(X)),\quad D_XS=U.$$
Lemma 3. The Whitham equation can be written in the following form:
$$D_XdetA_j=\overline{a_{22}D_zl_{11}}+\overline{a_{11}D_zl_{22}}-\overline{a_{12}D_zl_{21}}-\overline{a_{21}D_zl_{12}},$$
(8)
where $`A_j=\left(a_{mn}\right),L_j=\left(l_{mn}\right),m,n=1,2,`$ the bar means averaging over period of the elliptic function (5).
Proof. One can easily see that equation (3) at $`ϵ=0`$ implies that the spectral characteristics of the matrix $`A_j`$ are independent of the variable $`x`$. So the condition $`D_xdetA_j=0`$ holds. Formal introduction of the variable $`X`$ changes the differentiation rule: $`D_x\to UD_\tau +ϵD_X,`$ where the parameter $`ϵ`$ is put to be small and positive. Further, the condition (3) yields the equation
$$a_{n,m}^{\prime }=ϵD_zl_{n,m}+[L_j,A_j]_{n,m},n,m=1,2.$$
Substituting this into equality
$$D_xdetA_j=a_{11}^{\prime }a_{22}+a_{22}^{\prime }a_{11}-a_{12}^{\prime }a_{21}-a_{21}^{\prime }a_{12},$$
we change the differentiating rule and obtain the following:
$$\left(UD_\tau +ϵD_X\right)detA_j=ϵ\left(a_{22}D_zl_{11}+a_{11}D_zl_{22}-a_{12}D_zl_{21}-a_{21}D_zl_{12}\right)+O\left(ϵ^2\right).$$
Now average, i.e. integrate over the period (in ”fast” variable $`\tau `$). The averaging kills complete derivatives in $`\tau `$ which gives the claim.
Corollary 1. There exists a unique coefficient of the polynomial $`detA_j`$ with non-trivial dynamics in $`X`$ in force of the modulation equation. Denote this coefficient $`F_j.`$ Thus the Whitham system can be written as a single ODE for $`F_j.`$
The corollary can be verified via direct calculations for all the six equations. For PI we have the following:
$$detA_1=16z^3+4Xz-F_1,$$
where $`F_1=\left(y^{\prime }\right)^2-2y^3-2yX.`$ The modulation equation (8) takes the form
$$D_XdetA_1=4z+2\overline{y}$$
and can be rewritten as $`D_XF_1=-2\overline{y}.`$ Taking into account the solution (6), we obtain:
$$D_XF_1=2\eta /\omega =2e_1+2(e_3-e_1)E/K,$$
where $`E=E(k),K=K(k)`$ are complete elliptic integrals:
$$K=\int _0^1\frac{dz}{\sqrt{\left(1-z^2\right)\left(1-k^2z^2\right)}},E=\int _0^1\sqrt{\frac{1-k^2z^2}{1-z^2}}dz,k^2=\frac{e_2-e_3}{e_1-e_3},$$
$`e_{1,2,3}`$ are roots of the Weierstrass polynomial $`R_3(t)=4t^3-g_2t-g_3;g_2=-X,g_3=-F_1/4.`$
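For numerical work with the modulation equation, $`K`$ and $`E`$ are easy to evaluate by quadrature; the sketch below uses the substitution $`z=\mathrm{sin}t`$, which removes the endpoint singularity, and can be validated against the Legendre relation $`EK^{\prime }+E^{\prime }K-KK^{\prime }=\pi /2`$:

```python
import math

def ellip_K(k, n=20000):
    """Complete elliptic integral of the first kind, midpoint rule."""
    h = (math.pi / 2) / n
    return sum(h / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
               for i in range(n))

def ellip_E(k, n=20000):
    """Complete elliptic integral of the second kind, midpoint rule."""
    h = (math.pi / 2) / n
    return sum(h * math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
               for i in range(n))
```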
Corollary 2. The simplest way of obtaining the elliptic ansatz $`f_j`$ is to solve the equations
$$F_j=const_1,X=const_2.$$
(9)
Lemma 4. The elliptic ansatz (5) forms the main term $`y_0`$ of the series (7) in the small parameter $`ϵ`$ for the solution to system (3).
To prove this one should check that the solution to system (3) with $`ϵ=0`$ is perturbed continuously as $`ϵ`$ acquires a small non-zero value. The appropriate elementary calculations are illustrated here on the simplest example of PI. So, for $`ϵ>0`$ system (3) is $`y^{\prime \prime }=3y^2+X_0+ϵx,`$ where $`X_0`$ is a constant. Via simple manipulations this can be reduced to the condition
$$dx=\frac{dy}{\sqrt{2y^3+2X_0y+const}}+O(ϵ)$$
which means that the main term of the series (7) is the function $`f_1`$ (see (6)) on condition that $`x`$ does not belong to small neighborhoods of singularities of the elliptic function $`f_1.`$
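These PI relations are easy to verify numerically: along solutions of $`y^{\prime \prime }=3y^2+X`$ with $`D_xX=ϵ`$, the quantity $`F_1=(y^{\prime })^2-2y^3-2yX`$ is conserved for $`ϵ=0`$ and drifts at the slow rate $`D_xF_1=-2yϵ`$ otherwise. A minimal RK4 sketch (initial data chosen inside a bounded oscillatory regime):

```python
def rhs(state, eps):
    y, v, X = state                      # v = y', and D_x X = eps
    return (v, 3.0 * y * y + X, eps)

def rk4_step(state, h, eps):
    def shift(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = rhs(state, eps)
    k2 = rhs(shift(state, k1, h / 2), eps)
    k3 = rhs(shift(state, k2, h / 2), eps)
    k4 = rhs(shift(state, k3, h), eps)
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

def F1(y, v, X):
    """The slowly varying quantity F1 = (y')^2 - 2y^3 - 2yX."""
    return v * v - 2.0 * y ** 3 - 2.0 * y * X
```

With $`X=-1`$ the point $`y=-1/\sqrt{3}`$ is a center of the unperturbed flow, so trajectories started nearby stay bounded and the drift of $`F_1`$ can be tracked cleanly.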
Now we can prove the following theorem.
Theorem 1. The function $`y_0`$ determined by formulas (5) and (8) forms the main term of the asymptotic series for the solution of the appropriate Painlevé equation as $`\left|x\right|`$ tends to infinity.
Proof. The scale transformation $`x\to ϵx`$ leads to the change $`D_xX=ϵ\to D_xX=1`$ in formula (4). Therefore the expansion (7) in the small parameter $`ϵ`$ turns into a series in negative powers of the large variable $`x.`$
2. PVI and the Whitham method.
The sixth (and the most common) Painlevé equation
$$y^{\prime \prime }=\frac{1}{2}\left(\frac{1}{y}+\frac{1}{y-1}+\frac{1}{y-x}\right)\left(y^{\prime }\right)^2-\left(\frac{1}{x}+\frac{1}{x-1}+\frac{1}{y-x}\right)y^{\prime }+$$
(10)
$$\frac{y(y-1)(y-x)}{x^2(x-1)^2}\left(\alpha +\beta \frac{x}{y^2}+\gamma \frac{x-1}{(y-1)^2}+\delta \frac{x(x-1)}{(y-x)^2}\right),$$
where the Greek letters denote free parameters, can be obtained as the compatibility condition of the following linear system of equations:
$$D_zY=A_6(z,x)Y(z,x),D_xY=L_6(z,x)Y(z,x),$$
(11)
where
$$A_6(z,x)=\left(\begin{array}{cc}a_{11}(z,x)& a_{12}(z,x)\\ a_{21}(z,x)& a_{22}(z,x)\end{array}\right)=\frac{A^0}{z}+\frac{A^1}{z-1}+\frac{A^x}{z-x},$$

$$A^i=\left(\begin{array}{cc}u_i+\theta _i& -\omega _iu_i\\ \omega _i^{-1}\left(u_i+\theta _i\right)& -u_i\end{array}\right),i=0,1,x,\quad L_6(z,x)=-\frac{A^x}{z-x}.$$
(12)
Put
$$A^{\mathrm{\infty }}=-\left(A^0+A^1+A^x\right)=\left(\begin{array}{cc}k_1& 0\\ 0& k_2\end{array}\right),$$

$$k_1+k_2=-\left(\theta _0+\theta _1+\theta _x\right),k_1-k_2=\theta _{\mathrm{\infty }},$$

$$a_{12}(z)=-\frac{\omega _0u_0}{z}-\frac{\omega _1u_1}{z-1}-\frac{\omega _xu_x}{z-x}=\frac{k(z-y)}{z(z-1)(z-x)},$$

$$u=a_{11}(y)=\frac{u_0+\theta _0}{y}+\frac{u_1+\theta _1}{y-1}+\frac{u_x+\theta _x}{y-x},$$
(13)
$$\widehat{u}=-a_{22}(y)=u-\frac{\theta _0}{y}-\frac{\theta _1}{y-1}-\frac{\theta _x}{y-x}.$$
Then $`u_0+u_1+u_x=k_2,\omega _0u_0+\omega _1u_1+\omega _xu_x=0,`$
$$\frac{u_0+\theta _0}{\omega _0}+\frac{u_1+\theta _1}{\omega _1}+\frac{u_x+\theta _x}{\omega _x}=0,(x+1)\omega _0u_0+x\omega _1u_1+\omega _xu_x=k,x\omega _0u_0=ky,$$
which are solved as
$$\omega _0=\frac{ky}{xu_0},\omega _1=-\frac{k(y-1)}{(x-1)u_1},\omega _x=\frac{k(y-x)}{x(x-1)u_x},$$

$$u_0=\frac{y}{x\theta _{\mathrm{\infty }}}S_1,$$
where
$$S_1=y(y-1)(y-x)\widehat{u}^2+\left[\theta _1(y-x)+x\theta _x(y-1)-2k_2(y-1)(y-x)\right]\widehat{u}+$$

$$k_2^2(y-x-1)-k_2\left(\theta _1+x\theta _x\right),$$
$$u_1=\frac{y-1}{(x-1)\theta _{\mathrm{\infty }}}S_1,$$
where
$$S_1=y(y-1)(y-x)\widehat{u}^2+\left[(\theta _1+\theta _{\mathrm{\infty }})(y-x)+x\theta _x(y-1)-2k_2(y-1)(y-x)\right]\widehat{u}+$$

$$k_2^2(y-x)-k_2\left(\theta _1+x\theta _x\right)-k_1k_2,$$
(14)
$$u_x=\frac{y-x}{x(x-1)\theta _{\mathrm{\infty }}}S_{\mathrm{\infty }},$$
where
$$S_{\mathrm{\infty }}=y(y-1)(y-x)\widehat{u}^2+\left[\theta _1(y-x)+x(\theta _x+\theta _{\mathrm{\infty }})(y-1)-2k_2(y-1)(y-x)\right]\widehat{u}+$$

$$k_2^2(y-1)-k_2\left(\theta _1+x\theta _x\right)-xk_1k_2.$$
The compatibility condition for (11) implies
$$y^{\prime }=\frac{y(y-1)(y-x)}{x(x-1)}\left(2u-\frac{\theta _0}{y}-\frac{\theta _1}{y-1}-\frac{\theta _x-1}{y-x}\right).$$
(15)
Thus $`y`$ satisfies PVI with the parameters
$$\alpha =\frac{1}{2}\left(\theta _{\mathrm{\infty }}-1\right)^2,\beta =-\frac{1}{2}\theta _0^2,\gamma =\frac{1}{2}\theta _1^2,\delta =\frac{1}{2}\left(1-\theta _x^2\right).$$
Now we apply the ideas described in the previous section to the asymptotic analysis of the sixth Painlevé transcendent. First calculate the determinant of the matrix $`A_6.`$ Using formulas (13), (14) one obtains:
$$a_{11}(z)=-R^{-1}(z)S,$$
where
$$S=k_1z^2+z\left[x\left(u_0+u_1+\theta _0+\theta _1\right)+u_0+\theta _0+u_x+\theta _x\right]+O\left(z^0\right),$$
$$a_{22}(z)=-R^{-1}(z)\left\{k_2z^2-z\left[(x+1)u_0+xu_1+u_x\right]+O\left(z^0\right)\right\},$$
where $`R(t)=t(t-1)(t-x)`$; $`O\left(z^j\right)`$ means powers of $`z`$ of order not higher than $`j.`$ The entries $`a_{12},a_{21}`$ yield terms of lower order in $`z,`$ so they can be ignored while computing the two highest terms of the polynomial $`detA_6.`$ Therefore we have the following:
$$detA_6=R^{-2}(z)S,$$
where
$$S=k_1k_2z^4$$
$$-z^3\left[k_1\left(u_0(x+1)+u_1x+u_x\right)-k_2\left(x\left(u_0+u_1+\theta _0+\theta _1\right)+u_0+\theta _0+u_x+\theta _x\right)\right]+$$
$$O\left(z^2\right).$$
Setting
$$detA_6=R^{-2}(z)\left[k_1k_2z^4+F_6z^3+O\left(z^2\right)\right]$$
(16)
we get the coefficient $`F_6`$ that determines the Whitham dynamics:
$$F_6=\left(k_1-k_2\right)\left(u_1+xu_x\right)-x\left(2k_1k_2+\theta _x\right)-2k_1k_2-k_2\theta _1.$$
(17)
The current goal is to extract the constraint on the elliptic function from condition (17). To do this we use (14):
$$\theta _{\mathrm{\infty }}\left(u_1+xu_x\right)=R(y)\widehat{u}^2+$$

$$\widehat{u}\left[(y-1)(y-x)\left(2k_2+\theta _{\mathrm{\infty }}\right)-\theta _1(y-x)-x\theta _x(y-1)\right]+$$

$$k_1k_2(x+1-y)+k_2\left(\theta _1+x\theta _x\right),$$
which via (15) and (13) turns to the following:
$$\theta _{\mathrm{\infty }}\left(u_1+xu_x\right)=\frac{x^2(x-1)^2}{4R(y)}\left(y^{\prime }\right)^2+$$

$$\frac{1}{2}y^{\prime }x(x-1)\left\{-B+\frac{1}{R(y)}\left[(y-1)(y-x)\left(2k_2+\theta _{\mathrm{\infty }}\right)-\theta _1(y-x)-x\theta _x(y-1)\right]\right\}+$$

$$\frac{1}{4}R(y)B^2+\frac{1}{2}B\left[x\theta _x(y-1)-(y-1)(y-x)\left(2k_2+\theta _{\mathrm{\infty }}\right)+\theta _1(y-x)\right]+$$

$$k_1k_2(x-y+1)+k_2\left(\theta _1+x\theta _x\right),$$
where
$$B=\frac{\theta _0}{y}+\frac{\theta _1}{y-1}+\frac{\theta _x+1}{y-x}.$$
Now substitute this into (17) and obtain the final constraint on the genus-one Riemann surface $`(y^{\prime },y)`$ and the appropriate elliptic uniformization (considering $`x`$ and $`F_6`$ as parameters):
$$x^2(x-1)^2\left(y^{\prime }\right)^2-2y^{\prime }x(x-1)y(y-1)+y^4\left[1-\left(k_1-k_2\right)^2\right]+$$

$$2y^3\left[\left(k_1+k_2\right)C-1+2x\theta _x\left(1-k_2\right)+2F_6\right]$$

$$-y^2S+$$

(18)

$$2yx\left[2k_1k_2(x+1)+2x\theta _x\left(1-k_2\right)+2F_6-\theta _0C\right]-x^2\theta _0^2=0,$$
where $`C=(x+1)\left(k_1+k_2\right)+x\theta _x+\theta _1,`$
$$S=C^2-1-2x\theta _0\left(k_1+k_2\right)+4k_1k_2\left(x^2+x+1\right)+4x(x+1)(1-k_2)\theta _x+4(x+1)F_6.$$
To start the Whitham asymptotic analysis we also need the modulation equation in addition to the ansatz (18). It can be found in the following way. First replace the variable $`x`$ by $`X`$ in formula (16) and differentiate it in $`X:`$
$$D_XdetA_6=\frac{z^4}{(z-X)R^2(z)}\left(2k_1k_2+D_XF_6\right)+O\left(z^3\right).$$
(19)
On the other hand we have condition (8) which is to be studied now. Thus we have the following:
$$D_zL_6(z,x)=\frac{A^x}{(z-X)^2},D_zl_{22}=-\frac{u_x}{(z-X)^2},$$
whence obtain:
$$a_{11}D_zl_{22}=\frac{u_x}{R(z)(z-X)^2}\left[z^2k_1+O(z)\right],$$

$$a_{22}D_zl_{11}=-\frac{u_x+\theta _x}{R(z)(z-X)^2}\left[z^2k_2+O(z)\right],$$
substitute into (8) and get:
$$D_XdetA_6=\frac{z^2\left[\overline{u}_x\left(k_1-k_2\right)-k_2\theta _x\right]+O(z)}{R(z)(z-X)^2},$$
(20)
where bar means averaging. Comparison of formulas (19) and (20) yields the modulation equation:
$$D_XF_6=\overline{u}_x\left(k_1-k_2\right)-k_2\theta _x-2k_1k_2.$$
(21)
One can as well rewrite (21) in the initial coordinates $`y,X.`$ To do this use
$$u_x\left(k_1-k_2\right)=\frac{y-X}{X(X-1)}S,$$
(22)
where
$$S=R(y)\widehat{u}^2+\widehat{u}\left[\theta _1(y-X)+X\left(\theta _x+\theta _{\mathrm{\infty }}\right)(y-1)-2k_2(y-1)(y-X)\right]+$$

$$k_2^2(y-1)-k_2\left(\theta _1+X\theta _x\right)-Xk_1k_2.$$
Now substitute (22) into (21), again utilize (13), (14), (15), simplify and finally obtain the modulation equation:
$$D_XF_6=\frac{1}{2}\left(k_1-k_2\right)D_X\overline{y}+\frac{\left(k_2-k_1\right)\left(k_2-k_1+1\right)}{2X(X-1)}\overline{y^2}+$$

(23)

$$\frac{\overline{y}}{X(X-1)}S+$$

$$\frac{1}{2(X-1)}\left[\theta _0\left(k_2-k_1\right)+2X\left(2k_1k_2+\theta _x\right)+2k_2\left(k_1+k_2+\theta _1\right)+2F_6\right]$$

$$-k_2\theta _x-2k_1k_2,$$
where
$$S=\frac{1}{2}\left(k_2-k_1\right)\left[X\left(k_2-k_1-\theta _x\right)+\theta _0+\theta _x+1\right]$$

$$-X\left(2k_1k_2+\theta _x\right)-k_2\left(k_1+k_2+\theta _1\right)-F_6.$$
Here $`\overline{y}`$ denotes the mean for elliptic function $`y`$ specified by equation (18) where $`F_6`$ and $`X`$ (instead of $`x`$) are considered as parameters.
3. Partial solutions for the modulation equation and PVI.
Analysis of the system (18), (23) in generic form is cumbersome; moreover, there is the question of the phase shift $`\mathrm{\Phi }`$ within the elliptic ansatz (5). This is why we start with the simplest solutions, which correspond to a strongly degenerate surface (18). While trying to find a partial solution to (18), (23) among elementary functions one can note the asymptotic homogeneity of formula (18) for large $`x`$. Denote $`y=x\xi `$ and rewrite (18) in the variables $`x\xi ^{\prime }`$ and $`\xi `$. The discriminant of this polynomial looks as follows:
$$D=\xi ^4\left(k_2-k_1\right)^2-2\xi ^3\left[\left(k_2+k_1\right)^2+2\theta _x\left(1-k_2\right)+2F_6x^{-1}+O\left(x^{-1}\right)\right]+$$

(24)

$$\xi ^2\left[\left(k_2+k_1\right)^2+4k_1k_2+4\theta _x\left(1-k_2\right)+4F_6x^{-1}+O\left(x^{-1}\right)\right]$$

$$-2\xi \left[2F_6x^{-2}+O\left(x^{-2}\right)\right]+O\left(x^{-2}\right).$$
Seeking the condition for strong degeneracy of the Riemann curve (18) one finds that the polynomial (24) tends to have two double roots if
$$\theta _x=0\quad \mathrm{and}\quad F_6=-2k_1k_2X+o(X),\quad X\to \mathrm{\infty },$$
(25)
$$D=\xi ^2(\xi -1)^2\left(k_2-k_1\right)^2+O\left(X^{-1}\right).$$
In this case the four branch points of the Riemann surface (18) asymptotically coincide pairwise (that is what we call double or strong degeneracy). Substituting condition (25) into (18) one easily obtains the appropriate asymptotics for the solution: $`\xi =1+o(1)`$ and, therefore,
$$y=x+o(x),\quad x\to \mathrm{\infty },$$
(26)
where $`o(x)`$ denotes terms that grow no faster than $`\mathrm{log}x.`$ One can also easily verify that the solution (25), (26) satisfies the modulation equation (23).
So we have the following theorem.
Theorem 2. In the case of $`\theta _x=0(\delta =1/2)`$ the sixth Painlevé equation has a solution with asymptotics (26)<sup>1</sup><sup>1</sup>1Such a solution for PVI under $`\alpha =(2\mu -1)/2,\beta =\gamma =0,\delta =1/2`$ and half-integer $`\mu `$ was found in paper .
To prove this one should note, in addition to what was mentioned above, that strong degeneracy of the elliptic ansatz (18) transforms the phase shift $`\mathrm{\Phi }`$ in formula (5) into a shift in the variable $`x`$, which can be found via a simple iterative procedure computing the terms of the series (26).
# Thermodynamic Signature of a Two-Dimensional Metal-Insulator Transition
\[
## Abstract
We present a study of the compressibility, $`\kappa `$, of a two-dimensional hole system which exhibits a metal-insulator phase transition at zero magnetic field. It has been observed that $`\frac{d\kappa }{dp}`$ changes sign at the critical density for the metal-insulator transition. Measurements also indicate that the insulating phase is incompressible for all values of $`B`$. Finally, we show how the phase transition evolves as the magnetic field is varied and construct a phase diagram in the density-magnetic field plane for this system.
\]
Recently, a growing body of experimental evidence has emerged supporting a metal-insulator quantum phase transition in a number of two-dimensional electron and hole systems in which Coulomb interactions are strong and particle mobility is quite high. These experiments are of interest because the prevailing theory of non-interacting particle systems in two dimensions states that only insulating behavior should be seen, at all densities, for even the smallest amount of disorder in the system. In order to further understand the nature of this unusual phase transition, it is important to study the thermodynamic properties near the transition. One particular question is whether there is any signature of the phase transition in a thermodynamic measurement. Theoretically, within the framework of Fermi liquid theory, one does not expect any qualitative change in the thermodynamic properties. On the other hand, recent theories for strongly interacting systems have predicted that there should be profound consequences in thermodynamic measurements.
In this paper, we address this issue by presenting a measurement of one of the fundamental thermodynamic quantities: the thermodynamic density of states (or equivalently, the compressibility) of a strongly interacting two-dimensional hole system (2DHS). We report evidence that the compressibility measurement indeed provides an unambiguous signature for the metal-insulator transition (MIT). The insulating phase is incompressible. Furthermore, we show that the phase transition at $`B=0`$ is intimately related to the quantum Hall state to insulator transition for the lowest Landau level in a finite magnetic field.
Traditionally, to obtain the density of states (DOS) or the compressibility of a 2D electron system, the capacitance between the 2D electrons and the gate is measured. This capacitance can be modeled as the geometrical capacitance in series with a quantum capacitance. The quantum capacitance per unit area $`c_q`$ is related to the DOS $`\left(\frac{dn}{d\mu }\right)`$ by $`c_q=e^2\left(\frac{dn}{d\mu }\right)`$, or to $`\kappa `$ by $`c_q=n^2e^2\kappa `$, where $`\mu `$ is the chemical potential and $`n`$ is the carrier density. One major drawback of this method is that, in the low magnetic field limit, the quantum capacitance is much larger than the geometric capacitance; for two capacitors in series, a small uncertainty in the geometric capacitance can then lead to a large quantitative error (even a sign error) in the extracted $`\kappa `$. In a pioneering experiment, Eisenstein et al. used a double quantum well sample in which the penetration of the electric field through the 2D electrons in one well was detected by the 2D electrons in the other well. This penetration field measures the screening ability of the electrons and was shown to be inversely proportional to $`\kappa `$. For the present study, we extend the field penetration method to a more conventional heterostructure with only a single layer of carriers.
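To make the error amplification explicit (our own one-line estimate, not part of the original analysis): writing the measured series capacitance per unit area as $`C_{meas}^{-1}=C_{geo}^{-1}+c_q^{-1}`$ and solving, $`c_q=C_{meas}C_{geo}/(C_{geo}-C_{meas})`$, so that

$$\frac{\delta c_q}{c_q}\simeq \frac{c_q}{C_{geo}}\frac{\delta C_{geo}}{C_{geo}}\text{ for }c_q\gg C_{geo}.$$

A small fractional uncertainty in $`C_{geo}`$ is thus amplified by the large ratio $`c_q/C_{geo}`$, and can even flip the sign of the extracted $`\kappa `$.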
The wafer used for the experimental devices was a p-type MBE grown GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As single heterostructure. A $`400`$Å Al<sub>.45</sub>Ga<sub>.55</sub>As undoped spacer was used to separate the Be-donors ($`1\times 10^{18}\text{ cm}^{-3}`$) from the 2DHS. Just below the 2DHS is a $`5000`$Å undoped GaAs buffer layer and beneath that, a $`5000`$Å Al<sub>.72</sub>Ga<sub>.28</sub>As layer which was used as the etch-stop layer for substrate removal. The mobility of this particular sample was roughly $`123,000\text{ cm}^2/\text{V-s}`$ with a hole density of $`p=2.60\times 10^{11}\text{ cm}^{-2}`$. The device for the compressibility measurement was fabricated by sandwiching the 2DHS between two metallic electrodes. To form the top electrode ($`2500`$Å from the 2DHS), NiCr was evaporated on the surface of the sample. To form the bottom electrode, the GaAs substrate was totally removed so that another NiCr electrode could be placed on the bottom in close proximity to the 2DHS ($`10000`$Å). Details of the substrate removal can be found elsewhere. To measure the compressibility, we applied a 10 mV AC excitation voltage $`V_{ac}`$ to the bottom electrode (Gate 1), as shown in Fig. 1a. A DC voltage $`V_g`$ was superimposed to vary the carrier density. The 2DHS was grounded to screen the electric field from the bottom electrode. A lock-in amplifier was used to detect the penetrating electric field as current from the top electrode (Gate 2) to ground. By modeling the system as a distributed circuit, both the quantum capacitance $`C_q`$ and the resistance $`R_s`$ of the channel for the 2DHS could be extracted individually based on the measured values for the in-phase $`I_x`$ and $`90^{\circ }`$ out-of-phase $`I_y`$ current components. The model for the circuit is an extension of the two wire transmission line problem. The exact expression for the current is as follows:
$$I=\frac{i\omega C_1C_2V_{ac}}{C_1+C_2}\left[1\left(\frac{C_q}{C_1+C_2+C_q}\right)\frac{\mathrm{tanh}(\alpha )}{\alpha }\right];$$
$$\alpha =\sqrt{i\omega \frac{C_q(C_1+C_2)}{C_1+C_2+C_q}R_s}.$$
In this expression, $`\omega `$ is the frequency of the excitation voltage on gate 1 and $`V_{ac}`$ is its amplitude. $`C_1`$ (136 pF) and $`C_2`$ (541 pF) are the geometric capacitances between gate 1 and gate 2 and the 2DHS, respectively. This is a complex equation in which the real and imaginary parts are coupled with respect to $`R_s`$ and $`C_q`$; these values were obtained by solving the two parts simultaneously. In the low-frequency limit, $`I_x`$ is directly proportional to $`R_s`$, the dissipation of the 2DHS, and $`I_y`$ is proportional to $`1/C_q`$, the inverse compressibility.
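The low-frequency proportionalities follow directly from the expression above: expanding $`\mathrm{tanh}(\alpha )/\alpha \simeq 1-\alpha ^2/3`$ for $`|\alpha |\ll 1`$ gives

$$I\simeq -\omega ^2\frac{C_1C_2C_q^2R_s}{3(C_1+C_2+C_q)^2}V_{ac}+i\omega \frac{C_1C_2}{C_1+C_2+C_q}V_{ac},$$

so that $`I_x`$ is linear in $`R_s`$ (and negative), while for $`C_q\gg C_1+C_2`$ the quadrature part $`I_y\simeq \omega C_1C_2V_{ac}/C_q`$ is proportional to $`1/C_q`$.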
Figs. 1b and 1c show typical traces of $`C_q`$ and $`R_s`$ as a function of the gate voltage at several different frequencies. It is apparent that there is no frequency dependence over the entire range of gate voltage. Furthermore, we found that, for frequencies up to 200 Hz, $`I_x`$ and $`I_y`$ are indeed directly proportional to $`R_s`$ and $`1/C_q`$, respectively. We can therefore rule out depletion of the channel as the origin of the divergence in both channels, since depletion would produce a noticeable frequency dependence on the high gate voltage side of the maximum in $`C_q`$.
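As a cross-check of these proportionalities, the forward model above can be evaluated numerically. The values of $`C_q`$, $`R_s`$ and the drive frequency below are illustrative assumptions, not fitted experimental values; $`C_1`$ and $`C_2`$ are the geometric capacitances quoted above:

```python
import cmath
import math

def penetration_current(omega, Vac, C1, C2, Cq, Rs):
    """Complex current through gate 2 in the distributed-circuit model."""
    Cs = C1 + C2
    alpha = cmath.sqrt(1j * omega * Cq * Cs * Rs / (Cs + Cq))
    shape = cmath.tanh(alpha) / alpha if abs(alpha) > 1e-12 else 1.0
    return 1j * omega * C1 * C2 * Vac / Cs * (1.0 - Cq / (Cs + Cq) * shape)

C1, C2 = 136e-12, 541e-12        # geometric capacitances (F), from the text
Vac = 10e-3                      # excitation amplitude (V)
omega = 2 * math.pi * 100.0      # 100 Hz, inside the frequency-independent window

I_a = penetration_current(omega, Vac, C1, C2, Cq=1e-9, Rs=1e6)
I_b = penetration_current(omega, Vac, C1, C2, Cq=1e-9, Rs=2e6)

# The in-phase (real) part tracks -Rs; the out-of-phase part is nearly
# Rs-independent and set by 1/Cq.
print(I_a.real / I_b.real)       # close to 0.5
print(I_a.imag / I_b.imag)       # close to 1
```

Doubling $`R_s`$ roughly doubles the in-phase component while barely moving the quadrature component, which is the separation exploited in the measurement.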
Fig. 2 shows both the inverse compressibility and the dissipation signals as a function of the density at $`B=0`$ for different temperatures ranging from 0.33 K to 1.28 K. Two main features are immediately noticeable in the $`1/\kappa `$ channel. First, $`1/\kappa `$ is negative at high densities and becomes more negative as the density is decreased, as others have seen in dilute electron and hole systems. The negative compressibility is known to be due to the strong exchange energy contribution to the total chemical potential. Secondly, a sharp turn-around occurs at $`p=5.5\times 10^{10}\text{ cm}^{-2}`$ as $`\frac{d\kappa }{dp}`$ changes sign. As the density is further reduced, $`1/\kappa `$ becomes positive and diverges rapidly (extraction of $`C_q`$ shows that $`\kappa `$ rapidly approaches zero in the low density limit). Focusing now on the dissipation channel, one can find a temperature-independent crossing point which we believe to be the critical density for the MIT in this sample (as can also be seen in transport measurements). Although the qualitative shape of $`I_y`$ as a function of gate voltage was seen by others, the minimum in this signal was never recognized as the MIT. Notice the temperature dependence on the high density side of the crossing point: as the temperature increases, $`I_x`$ becomes more negative, meaning that the 2DHS becomes more resistive (metallic behavior). The opposite is seen on the low density side of this crossing point, where the characteristic temperature dependence is that of an insulator. This crossing point occurs where $`\frac{d\kappa }{dp}`$ changes sign, precisely at the minimum of $`1/\kappa `$. We therefore believe there is a clear signature of the metal-insulator phase transition at $`B=0`$ in this thermodynamic measurement. Since $`\kappa `$ tends toward zero in the insulating phase, the data also suggest that the insulating phase is incompressible.
Theoretically, it has been argued that the insulating phase is a Wigner glass phase which is incompressible for strongly interacting systems.
Having identified the signature of the MIT in $`\kappa `$ at $`B=0`$, we would like to see how this critical point evolves as the magnetic field is increased. We found $`1/\kappa `$ vs. $`p`$ to be independent of magnetic field up to about 1.5 T. Fig. 3 shows how $`1/\kappa `$ and $`R_s`$ evolve as a function of density in a higher magnetic field, where variations in both $`I_x`$ and $`I_y`$ are more dramatic due to the presence of Landau levels. The data shown were taken at 3 T and at five different temperatures from 0.33 K to 1.27 K, just as in the $`B=0`$ case. As seen in the figure, $`I_y`$ shows a local maximum at integer filling factors, where the compressibility, which is also proportional to the DOS, tends to zero between two adjacent Landau levels. The DOS is zero only at $`T=0`$, so the peaks become more pronounced at lower temperatures. Conversely, $`I_y`$ reaches a minimum when the DOS peaks at a Landau level center, where the delocalized states (states whose electronic wavefunction is expected to extend spatially throughout the sample) reside. Meanwhile, $`I_x`$ also undergoes oscillations. It is important to note that $`I_x`$ at high field is not proportional to $`\rho _{xx}`$ but rather to a constant term plus a term that goes like 1/$`\sigma _{xx}`$ in the limit of high conductivity. Because $`\sigma _{xx}`$ also goes to zero in the quantum Hall liquid regime, this gives a peak in $`I_x`$ when the DOS tends to zero. At 3 T, we again see a temperature-independent point in both channels, at the same place where the inverse compressibility reaches a minimum. In this case, however, the temperature-independent point marks the phase boundary between the insulator and the $`\nu =1`$ quantum Hall state.
On the insulator side, $`1/\kappa `$ diverges just as in the $`B=0`$ case and so one cannot distinguish between the insulating phase at $`B=0`$ versus the insulator at finite field.
As we vary the magnetic field from 0 to 12 T, we keep track of where the $`1/\kappa `$ minima (i.e., the phase boundaries) occur in density. Fig. 4 is a phase diagram in the density-magnetic field plane. We can see how the DOS peaks evolve as one goes from the high field to the low field regime. There are a number of interesting features. First, the phase boundary for the lowest Landau level flattens out as $`B`$ is reduced. If we assume that the delocalized states occur at the DOS maximum, we can draw the conclusion that the delocalized states do not float up as $`B`$ tends toward zero for the strongly interacting 2DHS. The $`r_s`$ value, the ratio of the Coulomb energy to the Fermi energy, reaches about 20 near the MIT. For non-interacting electron systems with low $`r_s`$ values, the lowest delocalized states are found to “float up” in energy as $`B\rightarrow 0`$. This energy divergence means that the Fermi energy can only be tuned through localized states, which do not contribute to current but give only insulating behavior at $`B=0`$ (i.e., the resistivity diverges as the temperature decreases). The current observation is consistent with past transport studies of the 2DHS. This implies that for the 2DHS there is indeed a metallic regime (as shown) in the thermodynamic limit which does not exist in the phase diagrams for the 2DES. We note that the data in Fig. 4 were taken from another 2DHS sample, cut from the same wafer but with slightly lower density; there the lowest delocalized state terminates at a density of roughly $`4.0\times 10^{10}\text{ cm}^{-2}`$ at $`B=0`$ rather than the $`5.5\times 10^{10}\text{ cm}^{-2}`$ seen in Fig. 2.
Although we have only clearly seen the phase boundary between the metallic and insulating states in low magnetic field as we cross the critical density, we can see from our data that the metallic phase exists all the way to our highest density of $`2.6\times 10^{11}\text{ cm}^{-2}`$ at low magnetic fields. We have also seen that this metallic phase is preserved to at least 1 T in our sample. The shaded vertical band marks an ill-defined region between the metallic phase and the quantum Hall phases; this region needs to be explored in more detail at lower temperatures, when the quantum Hall plateaus are better resolved. The insulating regime is also shaded in the diagram for clarity. Secondly, the data suggest that the insulator to $`\nu =1`$ transition is related to the $`B=0`$ MIT, based upon the similar behavior of the compressibility. In fact, the same argument has been made based on the tracking of temperature-independent crossing points in a transport experiment.
In summary, we have found, using an improved electric field penetration technique, that the compressibility of a strongly interacting hole system undergoes a qualitative change on traversing the MIT. More importantly, we have observed that the local maximum in the compressibility occurs precisely at the critical density for the MIT at $`B=0`$. The divergence of the inverse compressibility shows that the insulating phase is incompressible for all values of magnetic field. The phase transition at $`B=0`$, in fact, evolves into the quantum Hall to insulator transition at high $`B`$. We believe the observations reported here cannot be explained by simple non-interacting models; a theoretical analysis of this strongly interacting system is called for to understand the thermodynamic properties presented here.
The authors would like to thank S. Chakravarty, Q. Shi, J. Simmons, S. Sondhi, and C. Varma for helpful discussions, and B. Alavi for technical assistance. This work is supported by NSF under grant # DMR 9705439.
# Enhancement of Josephson quasiparticle current in coupled superconducting single-electron transistors
## 1 INTRODUCTION
Superconducting single-electron transistors (SSET’s) are small islands of superconducting material isolated from an external circuit by tunnel barriers (Josephson junctions). The normal tunnel barrier resistances ($`R>h/4e^2`$ ≈ 6.5 k$`\mathrm{\Omega }`$) are sufficient to constrain the excess charge on the island to integer multiples of $`e`$. At equilibrium, adding an electron to an electrically neutral island costs a charging energy $`E_c=e^2/2C`$, where $`C`$ is the island’s total capacitance. This “Coulomb blockade” may be lifted by applying a gate voltage to the SSET, or it may be surmounted by applying a sufficient source-drain bias voltage.
At zero voltage bias, a supercurrent of Cooper pairs may flow coherently through the SSET, while for a large voltage bias ($`|eV|>4\mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is the superconducting gap), the current is dominated by successive charging and discharging of the island by quasiparticles. Within a range of moderate bias ($`2\mathrm{\Delta }+E_c<|eV|<2\mathrm{\Delta }+3E_c`$), current flows via a hybrid transport mechanism termed the “Josephson quasiparticle cycle” (JQP)<sup>1,2</sup>. In each turn of the cycle, a Cooper pair (CP) resonantly tunnels into the SSET through one junction, accompanied by a quasiparticle (QP) tunneling event through the other junction, leaving one extra QP on the island. This extra QP then also tunnels through the second junction, and the cycle starts anew.
In this paper we show some intriguing measurements of the JQP current flowing through a SSET that is strongly coupled to a nearby, independently voltage-biased SSET. The two islands are coupled by a large capacitance $`C_m`$ supplied by an overlap capacitor instead of a tunnel junction, so there is no Josephson coupling between the islands themselves. The charging energy associated with this mutual capacitance is given by:
$$E_m=\frac{e^2C_m}{C_1C_2-C_m^2}<E_c$$
(1)
where $`C_i`$ is the total capacitance of island $`i`$ (including $`C_m`$). The strong capacitive coupling ensures that the charge of one island influences the QP and CP tunneling rates for the other island, since $`E_m>>kT`$.
The islands were formed by the standard procedure of double angle evaporation of Al, with an oxidation step to form AlO<sub>x</sub> tunnel barriers between each island and its leads (Fig. 1(b)). The gate electrodes and the central metal strip coupling the two islands were created in an underlying Au layer, with an intermediate insulation layer of SiO (32 nm). From normal-state measurements, the junction capacitances were determined to be ≈ 0.3 fF, while $`C_m`$ ≈ 0.6 fF. The series resistances of the SSET’s were 7 and 13 M$`\mathrm{\Omega }`$, so the Josephson energies of the tunnel junctions are expected to be very small ($`E_J<`$ 0.1 $`\mu `$eV) compared to the charging energy ($`E_c`$ ≈ 80 $`\mu `$eV) and $`E_m`$ ≈ 40 $`\mu `$eV.
Measurements of the device were carried out at low electron temperature (27 mK) in a helium dilution refrigerator. The source-drain voltage biases for the two SSET’s ($`V_1,V_2`$) were applied asymmetrically, with one lead from each SSET tied to ground (Fig. 1(a)).
## 2 NORMAL STATE MEASUREMENTS
Before discussing the JQP experiments, we will discuss some relevant properties of the system in the normal state. After applying a 1 T magnetic field to suppress superconductivity, we measured low-bias ($`V_1`$ = 20 $`\mu `$V) Coulomb oscillations in the left SET for various values of $`V_2`$. The results are plotted in Figure 2. We observe that when $`V_2`$ is sufficient to allow single-electron transport through the right SET, the Coulomb peaks spread, develop small sidepeaks, and diminish in amplitude. These measurements agree with our simulations based on the orthodox theory of single-electron tunneling.
The sidepeaks are due to the discrete charging of the right SET. The presence of an extra electron induces a charge of ≈ 0.25$`e`$ on the left SET via $`C_m`$; this shifts the Coulomb peak to a different $`V_{g1}`$, resulting in the extra peaks marked by arrows in Fig. 2(b). Each peak can thus be indexed by the charge of the right SET ($`n_2`$).
Note that the peaks (and sidepeaks) are not strictly $`e`$-periodic ($`\mathrm{\Delta }V_{g1}=e/C_{g1}`$ ≈ 4 mV), since $`V_{g1}`$ has a cross-capacitance to the right SET. This effect can be cancelled by countersweeping $`V_{g2}`$ in proportion to $`V_{g1}`$.
## 3 JQP ENHANCEMENT
During our experiments in the superconducting state, the left SSET was biased to be in the JQP regime. When $`V_2`$ was grounded, sweeping the gate voltage $`V_{g1}`$ produced a series of small peaks (≈ 0.5 pA) periodic with gate charge of $`e`$. Typically one expects two peaks per period, corresponding to CP resonances in either junction. In our device, the second JQP peak was barely observable, suggesting that the tunnel barrier resistances (and thus the Josephson energies) were very dissimilar.
The effects of biasing the right SSET are shown in Figure 3, where $`I`$ is plotted as a function of $`V_{g1}`$ for various values of $`V_2`$. $`V_{g2}`$ was counterswept to cancel the cross-capacitance from $`V_{g1}`$ to the right SSET, fixing the induced charge of the right SSET in each gate sweep. Capacitive effects due to the changing asymmetric bias voltage were not cancelled, however, resulting in a slight leftward shift of the peaks for increasing $`V_2`$.
For small $`V_2`$, only one (or two) peaks are visible per period. Beginning at a bias of about 470 $`\mu `$V, up to four distinct peaks are seen per period. This bias is too small to allow successive QP tunneling, so the appearance of extra peaks may indicate a parallel JQP cycle in the right SSET. As an example of the complex behavior seen in this regime, at $`V_2`$ = 570 $`\mu `$V (Fig. 3(b)), peaks corresponding to an even $`n_2`$ are all larger than those with odd $`n_2`$. The four-peak structure continues up to $`V_2`$ ≈ 800 $`\mu `$V (≈ $`4\mathrm{\Delta }/e`$). At higher biases the peaks disappear one by one, then re-emerge with about three times their low-bias amplitude. The peak heights continue to increase for even higher $`V_2`$; no saturation in the peak height was seen in similar experiments where $`V_2`$ was swept up to 5 mV. This enhancement of the JQP peaks is in stark contrast to the normal state measurements, where the peak height only diminished with increasing $`V_2`$ (Fig. 2(b)).
## 4 MODEL AND DISCUSSION
Although we cannot presently account for each peak’s behavior, it is clear from the sharp sidepeaks that each SSET is sensitive to the other’s charge. In this section we introduce a possible mechanism for the peak enhancement in the left SSET above $`V_2`$ ≈ $`4\mathrm{\Delta }/e`$, where the current through the right SSET is carried predominantly by quasiparticles.
Part of the explanation for the enhancement must take into account events involving the simultaneous transfer of charge in both SSET’s. The two processes of interest in our model are depicted in Figure 4, for the case when a CP can resonantly tunnel into the left SSET only if an extra QP is resident in the right SSET. Using the notation ($`n_1`$,$`n_2`$) to refer to the combined charge state having $`n_1`$ and $`n_2`$ extra electrons on the two respective islands, the states (0,1) and (2,1) are resonant, but (0,0) and (2,0) are not.
In the usual JQP process (Fig. 4(a)), the CP tunnels into the left SSET while a QP tunnels off with rate $`\mathrm{\Gamma }_1`$, so that (2,1) decays to (1,1). The cycle is completed when (1,1) decays to (0,1). In coupled SSET’s, the tunneling of the CP can also be coincident with a QP decay from the $`right`$ SSET, so that (2,1) decays to (2,0) with rate $`\gamma _1`$, leaving an extra CP in the left SSET. This CP then spontaneously decays via two QP decays to ground, and the cycle can restart after a QP tunnels into the right SSET and brings the left SSET back into resonance.
Just as the charge of the right SSET affects the CP resonances in the left SSET, the charge of the left SSET affects the QP tunneling rates of the right SSET. The voltage across a junction must exceed $`2\mathrm{\Delta }/e`$ for a QP to tunnel across it (ignoring the small subgap conductance). Due to the influence of the strong mutual coupling, some of these QP tunneling events can be blocked depending on the charge of the other island. Our model assumes that the right SSET can only undergo QP decay to ground when the left SSET has a charge $`n_1\geq 2`$ (i.e., if a CP is resident), but a QP can tunnel into the right SSET for $`n_1\leq 2`$. The allowed (and disallowed) transitions are summarized in Figure 5(a). Transitions in the left (right) SSET are denoted by $`\mathrm{\Gamma }_i`$ ($`\gamma _i`$).
Since $`E_c<<2\mathrm{\Delta }`$, one can show that the (allowed) QP rate through each junction, regardless of the charge state under consideration, is well-approximated (within a factor of ≈ 2) by $`\mathrm{\Gamma }_{QP}\simeq V/eR\simeq 2\mathrm{\Delta }/e^2R`$, where $`V`$ is the voltage across the junction, and $`R`$ is the normal-state tunneling resistance. The $`\mathrm{\Gamma }_i`$ rates all occur through the lower left junction, so we set: $`\mathrm{\Gamma }_1=\mathrm{\Gamma }_2=\mathrm{\Gamma }_3=\mathrm{\Gamma }_4`$. Likewise, the $`\gamma _i`$ rates (except $`\gamma _1`$) all occur through the upper right junction, so $`\gamma _2=\gamma _3=\gamma _4`$. The $`\mathrm{\Gamma }_i`$ rates are fixed, since $`V_1`$ is fixed, but the $`\gamma _i`$ rates will increase along with $`V_2`$.
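As a rough numerical illustration (taking $`\mathrm{\Delta }`$ ≈ 200 $`\mu `$eV from the observed $`4\mathrm{\Delta }/e`$ ≈ 800 $`\mu `$V, and assuming, as a guess in the absence of individual junction values, that each SSET's series resistance is split evenly between its two junctions):

```python
e = 1.602176634e-19        # elementary charge (C)
Delta = 200e-6 * e         # superconducting gap of Al, ~200 ueV, in J

def qp_rate(R_junction):
    """Gamma_QP ~ 2*Delta/(e^2*R): allowed QP tunneling rate, good to a factor ~2."""
    return 2.0 * Delta / (e ** 2 * R_junction)

# Series resistances of 7 and 13 MOhm, split evenly -> ~3.5 and ~6.5 MOhm/junction.
for R in (3.5e6, 6.5e6):
    print(f"R = {R / 1e6:.1f} MOhm -> Gamma_QP ~ {qp_rate(R):.1e} 1/s")
```

This puts the allowed QP rates in the few $`\times 10^8`$ s$`^{-1}`$ range for this device.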
We will assume that $`\mathrm{\hbar }\mathrm{\Gamma }_1<E_J`$, corresponding to asymmetric tunnel barrier resistances for the left SSET (as mentioned in Section 3). This condition means that the JQP current is bottlenecked by slow QP decay. We will also assume that $`\gamma _1>\gamma _2`$, corresponding to asymmetric resistances for the right SSET. The asymmetries are necessary to produce peak enhancement in our model.
Based on these assumptions, we have calculated the peak JQP current as a function of $`\gamma _1`$ (Fig. 5(b)), using a density matrix formulation. The plot shows a clear enhancement of $`I_1`$ for $`\gamma _1>0`$. The enhancement is even more pronounced if the asymmetry between $`\gamma _1`$ and $`\gamma _2`$ is stronger.
The enhancement can be understood by considering the effect of $`\gamma _1`$ on the charge state populations. The current can be shown to be equal to:
$$I_1=2e(P_{21}\mathrm{\Gamma }_1+P_{20}\mathrm{\Gamma }_3)\simeq 2e(P_{21}+P_{20})\mathrm{\Gamma }_1$$
(2)
where $`P_{ij}`$ is the population of the ($`i`$,$`j`$) charge state, corresponding to the appropriate diagonal element of the density matrix. The current is directly related to the probability of finding a CP in the left SSET. We find that $`P_{20}`$ increases dramatically with $`\gamma _1`$, while $`P_{21}`$ is only slightly suppressed. In other words, the left SSET becomes more likely to contain a CP when the right SSET carries a current of QP’s. Due to the coherent nature of CP tunneling, one cannot be sure that the CP is on the island until a QP is emitted. In the usual JQP process this QP tunnels off the island itself with rate $`\mathrm{\Gamma }_1`$ (Fig. 4(a)), immediately destroying the CP. The larger this rate is, the less likely that the SSET contains an extra CP since the lifetime of the state is shorter and the resonance is weakened. In the coupled-SSET system, however, the presence of the CP can also be detected by the emission of a QP with rate $`\gamma _1`$ from the other SSET (Fig. 4(b)). The larger this rate is (up to a point), the more likely that the left SSET contains an extra CP, since this tunneling event does not change the charge of the left SSET.
In conclusion, we have observed a striking modification of the JQP current flowing through one SSET as a result of its strong Coulomb interaction with another SSET. We interpret these results as being a quantum measurement effect, since the tunneling rates for each SSET are highly sensitive to the other SSET’s charge, and thus one SSET can detect the presence or absence of a Cooper pair in the other SSET.
The authors gratefully acknowledge the input of Caspar van der Wal, K. K. Likharev, and M. Wegewijs. This research was supported by CHARGE, Esprit Project No. 22953, and by Stichting voor Fundamenteel Onderzoek der Materie (FOM).
# The Halo Formation Rate and its link to the Global Star Formation Rate
## 1. Calculating the Halo Formation Rate
In our analysis, a halo formation event is considered to have occurred when all the mass in the halo is assembled. Note that this differs from the definition previously used by some authors (Lacey & Cole 1993). We wish to answer the question: ‘Given that a halo of mass M forms at some time, what is the probability $`P(t|M)`$dt that it forms in the time interval (t,t+dt)?’
Standard PS theory calculates $`P(M|t)`$dM, the distribution of halo mass at a fixed epoch, and we have shown that it is possible to calculate $`P(t|M)`$dt from this using Bayes’ theorem. We can also derive the same formula using intrinsic properties of the Brownian random walks invoked in PS theory. The mass of the halo in which a small volume element resides at time t is given by the first upcrossing of the line $`\delta =\delta _c`$ by a Brownian random walk in ($`\delta `$,$`\sigma _M^2`$) space, where $`\delta _c`$ is a function of time and $`\sigma _M^2`$ is a function of mass. Using the theory of random walks we can calculate the distribution of first upcrossings at $`\sigma _M^2`$, P($`\delta _c|\sigma _M^2`$), from which a simple change of variables can be used to obtain $`P(t|M)`$dt:
$$P(t|M)dt=\frac{\delta _c}{\sigma _M^2}\mathrm{exp}\left(-\frac{\delta _c^2}{2\sigma _M^2}\right)\left|\frac{d\delta _c}{dt}\right|dt.$$
(1)
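One way to make the Bayes step explicit (with, as an assumption made here, a prior uniform in $`\delta _c`$): the first-upcrossing distribution at fixed barrier height is

$$P(\sigma _M^2|\delta _c)d\sigma _M^2=\frac{\delta _c}{\sqrt{2\pi }(\sigma _M^2)^{3/2}}\mathrm{exp}\left(-\frac{\delta _c^2}{2\sigma _M^2}\right)d\sigma _M^2,$$

and normalising $`P(\delta _c|\sigma _M^2)\propto P(\sigma _M^2|\delta _c)`$ over all $`\delta _c>0`$ gives

$$P(\delta _c|\sigma _M^2)=\frac{\delta _c}{\sigma _M^2}\mathrm{exp}\left(-\frac{\delta _c^2}{2\sigma _M^2}\right),$$

which reproduces the result above once the Jacobian $`\left|\frac{d\delta _c}{dt}\right|`$ converts the distribution in $`\delta _c`$ into one in $`t`$.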
This equation is in good agreement with Monte-Carlo realisations of Brownian random walks (Fig. 1). We have also run a large N-body simulation, using the Hydra N-body hydrodynamics code (Couchman, Thomas & Pearce 1995). Groups of between 45 and 47 particles ($`1.3\times 10^{13}\mathrm{M}_{\odot }`$) were identified using a standard friends-of-friends algorithm at 362 output times. The number which could have formed in each time interval is compared to the expected distribution in Fig. 1.
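A minimal Monte-Carlo realisation of the random-walk check (cf. Fig. 1) can be sketched as follows; the barrier value and discretisation are illustrative, and the finite step size slightly biases the first-crossing fraction:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

delta_c = 1.686            # spherical-collapse barrier
S_max = delta_c ** 2       # follow each walk out to sigma_M^2 = delta_c^2
n_steps, n_walks = 2000, 20000
dS = S_max / n_steps

# Brownian random walks in (delta, sigma_M^2): independent Gaussian increments.
pos = np.zeros(n_walks)
crossed = np.zeros(n_walks, dtype=bool)
for _ in range(n_steps):
    pos += rng.normal(0.0, math.sqrt(dS), size=n_walks)
    crossed |= pos >= delta_c

# Fraction of walks that have upcrossed delta_c by S_max, versus the analytic
# first-crossing probability erfc(delta_c / sqrt(2 S_max)).
frac = float(crossed.mean())
exact = math.erfc(delta_c / math.sqrt(2.0 * S_max))
print(frac, exact)
```

Binning the walks by their first-crossing scale, rather than taking the cumulative fraction, gives the first-upcrossing distribution that enters the derivation above.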
Now suppose we are only interested in a subset of formation events - e.g. those which involve similar mass objects merging together. The formation rate from such mergers is the same as that derived above, because all walks which pass through a given point can be thought of as new walks starting from that point. Consequently the mass distribution of progenitors immediately prior to the formation event is independent of the formation epoch. Turning this argument around, placing constraints on the progenitors of halos immediately prior to their formation doesn’t affect the distribution of formation times, although this might affect the bias (see later).
## 2. The Star Formation Rate
It is interesting to compare the halo formation rate with the observed global star formation rate. At $`z<1`$ the rate of halo formation falls off very rapidly with cosmic time, independently of halo mass, and it is likely that this effect is primarily responsible for the observed rapid evolution in the CFRS. Such strong evolution is not seen in semi-analytic models that only include a quiescent component of star formation (Guiderdoni et al. 1998).
## 3. The Clustering of Lyman-break Galaxies
The strong clustering of Lyman-break galaxies has been explained as being due to the high bias of the most massive overdensities, and consideration of the abundance of Lyman-break galaxies and their clustering leads to an interpretation of them as being associated with massive halos, $`M`$ ≈ $`8\times 10^{11}h^{-1}\mathrm{M}_{\odot }`$ (Adelberger et al. 1998) for an $`\mathrm{\Omega }=0.3`$ flat universe. If star formation is initiated after halo formation then these most massive halos form too late in the universe to reproduce the observed evolution in SFR in a simple way: we would require the efficiency of star-formation to evolve with redshift. Conversely, if the Lyman-break galaxies have lower mass but are associated with newly-formed, merging halos, then we might suppose that such mergers are also highly biased, and recent simulations indicate this to be the case (Kolatt et al. 1999, Knebe & Müller 1999).
## 4. Conclusions
We have presented the key points involved in deriving a simple formula for the rate of formation of new halos using Press-Schechter theory. It agrees with Monte-Carlo and N-body simulation results. We have argued that the strong cosmological evolution observed in the SFR is primarily driven by the cosmic variation in the rate of halo formation. Given that quiescent star formation does not provide enough evolution (Guiderdoni et al. 1998) we suggest that merger-induced starbursts are extremely important for star formation at $`z\gtrsim 1`$ and are perhaps the principal sites of the observed star formation at high redshifts. At high z, a more physically-motivated model is needed to deduce the relative contributions of a range of halo masses, but we have shown that a simple combination of such a range can produce evolution consistent with present data. Recent results indicate that such merging-halo systems are also sufficiently highly biased to explain the strong clustering of Lyman-break galaxies at $`z\sim 3`$.
The work highlighted here is more comprehensively covered in our recent paper available as astro-ph/9906204. We are also continuing to work on the bias of merging halos from numerical simulations.
## References
Adelberger K., Steidel C., Giavalisco M., Dickinson M., Pettini M., Kellogg M., 1998, ApJ, 505, 18
Bruzual A. G., Charlot S., 1993, ApJ, 405, 538
Connolly A.J., Szalay A.S., Dickinson M., SubbaRao M.V., Brunner R.J., 1997, ApJ, 486, L11
Couchman H. M. P., Thomas P.A., Pearce F.R., 1995, ApJ, 452, 797
Gallego J., Zamorano J., Aragon-Salamanca A., Rego M., 1995, ApJ, 455, L1
Glazebrook K., Blake C., Economou F., Lilly S., Colless M., 1998, MNRAS submitted, astro-ph/9808276
Guiderdoni B., Hivon E., Bouchet F.R., Maffei B., 1998, MNRAS, 295, 877
Hughes D. et al., 1998, Nat, 394, 241
Knebe A., Müller V., 1999, astro-ph/9809255

Kolatt T.S. et al., 1999, astro-ph/9906104
Lacey C., Cole S., 1993, MNRAS, 262, 627
Lilly S.J., Le Fevre O., Hammer F., Crampton D., 1996, ApJ, 460, L1
Madau P., Pozzetti L., Dickinson M., 1998, ApJ, 498, 106
Pettini M., Steidel C.C., Adelberger K.L., Kellogg M., Dickinson M., Giavalisco M., 1997, ‘Origins’, Astron. Soc. Pacific Conference Series
Steidel C., Adelberger K., Giavalisco M., Dickinson M., Pettini M., 1998, ApJ submitted, astro-ph/9811399
# 𝛽-function, Renormalons and the Mass Term from Perturbative Wilson Loops UPRF-99-15; BICOCCA-FT-99-29
## Outlook
Wilson Loops (WL) were the historic playground (and success …) of Numerical Stochastic Perturbation Theory (NSPT) for Lattice Gauge Theory . Having by now an increased computing power available, we are computing high perturbative orders of various WL on various lattice sizes in Lattice $`SU(3)`$. Physical motivations range over a variety of issues (not every one within reach, at the moment): Renormalons and Lattice Perturbation Theory (LPT), the Mass Term in Heavy Quark Effective Theory (HQET) and the Lattice $`\beta `$-function.
## 1 Renormalons and LPT
In earlier work, WL of sizes $`1\times 1`$ and $`2\times 2`$ were computed in LPT via NSPT up to order $`\beta ^{-8}`$. The expected renormalon contribution was found according to the formula ($`Q=a^{-1}`$)
$$W_0^{\mathrm{ren}}=𝒩\int _{r\mathrm{\Lambda }^2}^{Q^2}\frac{k^2dk^2}{Q^4}\alpha _𝗌(k^2).$$
(1)
The normalization is fixed by dimensional and Renormalization Group considerations. Eq. (1) can be cast in a form from which a power expansion can be easily extracted (two-loop asymptotic scaling is assumed)
$`z=z_0\left(1-{\displaystyle \frac{\alpha _𝗌(Q^2)}{\alpha _𝗌(k^2)}}\right)`$ , $`z_0={\displaystyle \frac{1}{3b_0}}`$
$`\overline{z}_0=z_0\left(1-{\displaystyle \frac{\alpha _𝗌(Q^2)}{\alpha _𝗌(r\mathrm{\Lambda }^2)}}\right)`$ , $`\gamma =2{\displaystyle \frac{b_1}{b_0^2}}`$
$`W_0^{\mathrm{ren}}`$ $`=`$ $`C^{\prime }{\displaystyle \int _0^{\overline{z}_0}}dz\,e^{-\beta z}(z_0-z)^{-1-\gamma }`$
$`=`$ $`{\displaystyle \underset{\ell =1}{\sum }}\beta ^{-\ell }\left\{C^{\prime }\mathrm{\Gamma }(\ell +\gamma )z_0^{-\ell }+𝒪(\mathrm{\Lambda }^4/Q^4)\right\},`$
Actually (1) refers to some continuum scheme. On a finite lattice (which is what NSPT needs) one has to deal with
$$W_0^{\mathrm{ren}}(s,N)=C\int _{Q_0^2(N)}^{Q^2}\frac{k^2dk^2}{Q^4}\alpha _𝗌(sk^2).$$
(2)
The factor $`s`$ in the argument of $`\alpha _𝗌`$ accounts for the lattice–continuum matching and is expected to be of order $`s\sim \left(\frac{\mathrm{\Lambda }_{lat}}{\mathrm{\Lambda }_{cont}}\right)^2`$ with respect to some continuum scheme. An explicit IR cut-off is present, dependent on the lattice size ($`Q_0(N)=2\pi (Na)^{-1}`$, where $`N`$ is the number of points in any direction). Eq. (2) results in a new power expansion
$$W_0^{\mathrm{ren}}(s,N)=\underset{\ell }{\sum }C_{\ell }^{\mathrm{ren}}(C,s,N)\,\beta ^{-\ell }$$
(3)
whose coefficients $`C_{\ell }^{\mathrm{ren}}(C,s,N)`$ are given in terms of incomplete $`\mathrm{\Gamma }`$-functions. In earlier work it was shown (via a slightly different, equivalent formalism) that this renormalon contribution can account for the growth of the first $`8`$ coefficients in the perturbative expansion of the plaquette
$`W^{1\times 1}={\displaystyle \underset{\ell =1}{\overset{8}{\sum }}}c_{\ell }^{1\times 1}\beta ^{-\ell }`$
i.e. values $`C^{\prime }`$ and $`s^{\prime }`$ were fitted such that the $`c_{\ell }^{1\times 1}`$ were recognized to be asymptotically the same as $`C_{\ell }^{\mathrm{ren}}(C^{\prime },s^{\prime },8)`$ (in that work computations were performed on a rather small $`N=8`$ lattice); the $`C_{\ell }^{\mathrm{ren}}(C^{\prime },s^{\prime },8)`$ (with a different choice for $`C`$) were also shown to fit the expansion of $`W^{2\times 2}`$.
Control of this (renormalon) perturbative contribution is crucial in the analysis just quoted: once its resummation is subtracted from Monte Carlo data for the plaquette, one is left with an $`𝒪(\mathrm{\Lambda }^2/Q^2)`$ contribution rather than the expected $`𝒪(\mathrm{\Lambda }^4/Q^4)`$. In order to trust this remarkable result, one would like to further test the asymptotic formula (3) by going to even higher orders in the perturbative expansion on various lattice sizes.
By now we know the expansion of the basic plaquette up to order $`\beta ^{-10}`$ on both $`N=8`$ and $`N=24`$ lattices. We do not present the definitive results here (they will be published soon elsewhere), but we do show how well the new results are described by $`C_{\ell }^{\mathrm{ren}}(C^{\prime },s^{\prime },N)`$ (i.e. by the values for $`C`$ and $`s`$ fitted previously). In the figure the expected $`C_{\ell }^{\mathrm{ren}}(C^{\prime },s^{\prime },24)`$ are plotted together with the computed $`c_{\ell }^{1\times 1}`$ ($`\ell =1,\mathrm{},10`$) for the $`N=24`$ lattice. The renormalon contribution is indeed there, just as described above. Work is in progress to repeat the whole analysis on the $`N=24`$ lattice.
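The factorial growth encoded in the asymptotic coefficients $`C_{\ell }^{\mathrm{ren}}\propto \mathrm{\Gamma }(\ell +\gamma )/z_0^{\ell }`$ is easy to exhibit numerically. The sketch below uses illustrative placeholder values for $`z_0`$ and $`\gamma `$ (not the fitted parameters of the analysis) and checks that successive coefficient ratios grow linearly in $`\ell `$, the hallmark of a renormalon:

```python
import math

def renormalon_coeffs(n_max, z0, gamma, C=1.0):
    """Leading asymptotic coefficients C * Gamma(l + gamma) / z0**l."""
    return [C * math.gamma(l + gamma) / z0**l for l in range(1, n_max + 1)]

# Illustrative parameter values (placeholders, not fitted values).
z0, gamma = 0.2, 0.5
c = renormalon_coeffs(10, z0, gamma)

# Successive ratios grow linearly in l: c_{l+1}/c_l = (l + gamma)/z0.
for l in range(1, 10):
    ratio = c[l] / c[l - 1]
    assert math.isclose(ratio, (l + gamma) / z0, rel_tol=1e-12)
```

Because of this growth the series is only asymptotic, which is why the resummation discussed above has to be handled with care.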
## 2 The Mass Term in HQET
Consider WL of various sizes (in particular square $`L\times L`$ loops). From their renormalization properties one has to expect
$$W(L)=e^{-4ML}w(L)$$
(4)
The first factor is the exponential of the perimeter times the mass term one has to deal with in HQET (an additive, linearly divergent mass renormalization). $`w(L)`$ contains logarithmic divergences; in particular, only those connected to the renormalization of the coupling would be present if there were no “corners”, which is not the case for our loops. As stressed in Hashimoto’s review at this conference (after ), the mass term is a fundamental building block in a renormalon-safe determination of the b-quark mass from the lattice.
The mass term could of course be determined from the heavy quark propagator. The determination we are going to report on (again after ) goes through the computation of various WL and is well suited for NSPT as it is gauge invariant . By computing perturbative expansions of WL of various sizes
$`W(L)=1-W_1(L)\beta ^{-1}-W_2(L)\beta ^{-2}+\dots `$
and therefore
$`\mathrm{log}W(L)`$ $`=`$ $`-W_1(L)\beta ^{-1}-\left(W_2(L)+{\displaystyle \frac{1}{2}}W_1(L)^2\right)\beta ^{-2}+\dots `$
one can extract from each order the leading (linear) behaviour in $`L`$, that is, the mass term
$$4ML=4\left(M_1\beta ^{-1}+M_2\beta ^{-2}+\dots \right)L.$$
(5)
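The re-expansion of $`\mathrm{log}W(L)`$ in $`\beta ^{-1}`$ is elementary but sign-prone; a quick numerical check (with made-up coefficients) confirms that $`-W_1\beta ^{-1}-(W_2+\frac{1}{2}W_1^2)\beta ^{-2}`$ reproduces the logarithm up to $`𝒪(\beta ^{-3})`$:

```python
import math

def logW(W1, W2, beta):
    """Exact log of the truncated series W = 1 - W1/beta - W2/beta^2."""
    return math.log(1 - W1 / beta - W2 / beta**2)

def logW_expanded(W1, W2, beta):
    """Re-expanded form: -W1/beta - (W2 + W1^2/2)/beta^2."""
    return -W1 / beta - (W2 + 0.5 * W1**2) / beta**2

W1, W2 = 0.7, 0.3          # illustrative coefficients, not measured values
for beta in (10.0, 100.0):
    # The two expressions must agree up to a remainder of order beta^-3.
    assert abs(logW(W1, W2, beta) - logW_expanded(W1, W2, beta)) < 2.0 / beta**3
```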
At the moment we have results for square loops on a $`(N=24)^4`$ lattice. Results on bigger lattices ($`N=28,32`$) and rectangular loops are expected soon. They will be crucial to control finite-size effects and subleading (logarithmic) contributions. Since $`M_1`$ and $`M_2`$ are already known, recovering them provides a check of the method. Consider for example $`M_1`$, which is obtained by fitting $`W_1(L)`$ to
$`W_1(L)=4M_1L+k_1\mathrm{log}L+k_0`$
The analytical result is $`M_1=1.01`$ and we get $`M_1=1.00\pm 0.02`$. From the literature one learns the second coefficient $`M_2=2.54`$, while from our fits we get $`M_2=2.43\pm 0.15`$. The errors we quote depend on both finite-size effects and the fitting of subleading contributions. Note that since $`W_3(L)`$ contains a $`(\mathrm{log}L)^2`$ contribution, the latter point is even more crucial in the determination of $`M_3`$. At the moment we can pin down a preliminary number $`M_3=8\pm 1`$, which will turn into a definite number as soon as we get definite results not only for square loops, but also for ratios of rectangular loops.
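Since the fit function is linear in the parameters $`(M_1,k_1,k_0)`$, the extraction above is a linear least-squares problem. A minimal sketch on synthetic data (all numbers below are made up for illustration, not our measured loops):

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.arange(2.0, 13.0)                      # loop sizes (illustrative)
M1_true, k1_true, k0_true = 1.01, 0.4, 0.1    # placeholder "true" values
w1 = 4*M1_true*L + k1_true*np.log(L) + k0_true + 1e-3*rng.standard_normal(L.size)

# Design matrix for W1(L) = 4*M1*L + k1*log(L) + k0.
A = np.column_stack([4*L, np.log(L), np.ones_like(L)])
(M1, k1, k0), *_ = np.linalg.lstsq(A, w1, rcond=None)

assert abs(M1 - M1_true) < 1e-2               # linear term recovered
```

In practice the delicate part is not the solver but the control of the subleading $`\mathrm{log}L`$ terms, as stressed above.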
## 3 The Lattice $`\beta `$-function
We now turn to describe what could possibly be another application of our perturbative computations. Ratios of WL combined in such a way that the corner and mass contributions cancel out were introduced several years ago by Creutz in order to study (among other things) the non-perturbative Lattice $`\beta `$-function.
Consider rectangular WL of sizes $`(L,L)`$, $`(2L,L)`$, $`(2L,2L)`$ on a $`(4L)^4`$ lattice and form the ratio
$`R(L)`$ $`=`$ $`{\displaystyle \frac{W(L,L)W(2L,2L)}{W(L,2L)^2}}`$
$`=`$ $`1+c_1(L)\alpha _0+c_2(L)\alpha _0^2+\mathrm{}`$
where $`\alpha _0`$ is the bare lattice coupling. The only scale is $`L`$ and the only renormalization needed is that of the coupling, so that (for example)
$$\alpha _{CR}(L)=\frac{1}{c_1(L)}\mathrm{log}(R(L))$$
(6)
is supposed to be an acceptable coupling running with $`L`$. Since the perturbative matching between couplings in different schemes
$`\alpha _1(s\mu )=\alpha _2(\mu )+d_1(s)\alpha _2(\mu )^2+\mathrm{}`$
contains all the information about the perturbative $`\beta `$-functions in both schemes, one could in principle study the perturbative lattice $`\beta `$-function by computing the perturbative expansion of $`\alpha _{CR}(L)`$ in $`\alpha _0`$ at different values of $`N`$, where $`L=Na`$ (i.e. by computing different WL on different lattice sizes).
Within this application the big issue is accuracy. Due to large cancellations of the mass contribution, a per-mille accuracy on the $`c_i(L)`$ (which is within reach) immediately degrades when one computes, for example, $`d_1(s)`$ (still, we were able to recover $`b_0`$, the first universal coefficient of the $`\beta `$-function). In view of this, it is quite unlikely that one can attain the formidable accuracy which would be needed to obtain the first unknown information. One should most probably look for a smarter definition of the coupling.
## Conclusions
Preliminary results were reported, coming from the computation of perturbative expansions of various WL. The basic plaquette is now known up to order $`\beta ^{-10}`$, and the results on renormalons are indeed to be trusted. Work is in progress to gain further confidence in the earlier results as well. A preliminary result at third order in the computation of the mass term has been reported, and a definite result will be published soon. In principle the perturbative lattice $`\beta `$-function could also be studied via WL in NSPT, even if the accuracy needed to get the first unknown result is quite unlikely to be attained.
no-problem/9909/hep-lat9909023.html
# Potential between external monopole and antimonopole in SU(2) lattice gluodynamics Presented by Ch. Hoelbling. This research was supported in part under DOE grant DE-FG02-91ER40676 and by the U.S. Civilian Research and Development Foundation for Independent States of FSU (CRDF) award RP1-187.
## 1 Introduction
Since it was suggested by ’t Hooft and Mandelstam that monopole condensation in nonabelian gauge theories may be an explanation for the confinement mechanism, many studies have been devoted to the magnetic properties of these theories .
In the present study, we probe the vacuum structure of pure SU(2) gauge theory by inserting a static monopole-antimonopole pair into the vacuum and measuring its free energy at different separations and different temperatures. While in the classical theory there is a Coulomb potential between SU(2) monopoles, we expect the quantum theory to show a Yukawa-potential if there is a monopole condensate, and a Coulomb-potential, if quantum fluctuations do not produce a magnetically screening object.
## 2 Simulation method
The method we use for inserting a static SU(2) monopole pair on the lattice was put forward by ’t Hooft and others and essentially amounts to introducing a magnetic flux tube by multiplying the couplings with an element of the center of the gauge group ($`\beta \to -\beta `$ for SU(2)) on a string of plaquettes in every timeslice (schematically shown in fig.1).
This tube carries a flux of $`\mathrm{\Phi }=\pi \sqrt{\beta }`$, so at each end of the flux tube sits an elementary cube with net magnetic outflux $`\mathrm{\Phi }=\pm \pi \sqrt{\beta }`$. But since for a compact lattice theory magnetic flux is only defined modulo $`2\pi \sqrt{\beta }`$, the two monopoles are indistinguishable and each coincides with its own antimonopole.
In order to measure the free energy of a monopole pair, we split the SU(2) action into two parts with different couplings
$$S(\beta ,\beta ^{})=-\frac{1}{2}\left(\beta \underset{P\notin M}{\sum }\text{Tr}(U_P)+\beta ^{}\underset{P\in M}{\sum }\text{Tr}(U_P)\right)$$
(1)
where $`M`$ is the set of all the plaquettes in the monopole string. In the case where $`\beta ^{}=\beta `$, (1) is the action of a plain SU(2) theory, and for $`\beta ^{}=-\beta `$ it is the action of SU(2) with the static monopole pair. So the free energy difference upon a monopole pair insertion is
$$\mathrm{\Delta }F=-T\mathrm{ln}\frac{Z(\beta ,-\beta )}{Z(\beta ,\beta )}$$
(2)
where $`T=1/N_ta`$ is the temperature of the system and $`Z(\beta ,\beta ^{})`$ is the partition function
$$Z(\beta ,\beta ^{})=\underset{𝒞}{\sum }e^{-S(\beta ,\beta ^{})}$$
(3)
We can calculate $`\mathrm{\Delta }F`$ using the Ferrenberg-Swendsen multihistogram method on histograms of the flux tube energy
$$E^{}=-\frac{1}{2}\underset{P\in M}{\sum }\left\langle \text{Tr}(U_P)\right\rangle _{\beta ^{}}$$
(4)
for a set of $`\beta ^{}`$’s between $`\beta `$ and $`-\beta `$ (where the separation between successive $`\beta ^{}`$’s has to be small enough that neighboring histograms have a substantial overlap).
Our simulations were performed with couplings between $`\beta =2.476`$ and $`\beta =2.82`$ on systems with volumes ranging from $`N_x\times N_y\times N_z=16^2\times 32`$ to $`N_x\times N_y\times N_z=32^2\times 64`$. The time extent in our simulation varied between $`N_t=2`$ and $`N_t=16`$. For each system, we measured the free energy at monopole separations from $`a`$ to $`6a`$.
We used a combined overrelaxation and 3-hit Metropolis update algorithm. For every simulation, we started at $`\beta ^{}=\beta `$ with $`5000`$ thermalization updates followed by $`200`$ to $`800`$ independent measurements of $`E^{}`$, separated by $`50`$ updates. We then decreased $`\beta ^{}`$ in $`11`$ to $`61`$ steps to $`\beta ^{}=-\beta `$ and performed $`500`$ thermalization updates followed again by the measurement updates. Here, a single $`E^{}`$ measurement consisted of an average over $`384`$ configurations of the plaquettes on the monopole string on a fixed background configuration.
## 3 Results
In fig. 2 we plot the free energy vs. monopole pair separation for a $`N_x\times N_y\times N_z=20^2\times 40`$ system at $`\beta =2.6`$ and different temperatures. The lines are fits to a Yukawa-potential
$$F(r)=F_0-c\frac{e^{-mr}}{r}$$
(5)
Table 1 shows the screening masses obtained from this fit. A Coulombic behavior ($`m=0`$) is clearly ruled out. The quality of a Coulomb fit is $`Q\sim 10^{-7}`$ for $`N_t=16`$ and even worse ($`Q<10^{-15}`$) in all other cases.
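The Yukawa form of Eq. (5) is a three-parameter nonlinear least-squares problem. The sketch below performs such a fit on synthetic data; all parameter values are illustrative placeholders, not our measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit

def yukawa(r, F0, c, m):
    """Screened potential F(r) = F0 - c*exp(-m*r)/r of Eq. (5)."""
    return F0 - c * np.exp(-m * r) / r

rng = np.random.default_rng(1)
r = np.arange(1.0, 7.0)                # separations a..6a (lattice units)
F0_t, c_t, m_t = 2.0, 1.5, 0.9         # placeholder parameters
F = yukawa(r, F0_t, c_t, m_t) + 1e-3 * rng.standard_normal(r.size)

popt, pcov = curve_fit(yukawa, r, F, p0=[2.0, 1.0, 0.8])
F0, c, m = popt
assert abs(m - m_t) < 0.05             # screening mass recovered
```

A Coulomb fit would correspond to freezing $`m=0`$; on data generated with a nonzero screening mass it leaves large residuals, which is the content of the small fit qualities $`Q`$ quoted above.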
At low temperature, we can compare our screening masses to the known mass of the lightest glueball state (table 2). The masses are roughly in agreement, especially when one ignores the data point at separation $`a`$, which is most affected by discretization errors.
In fig. 3 we plot the screening masses vs. temperature. There clearly is an increase of the screening mass with temperature, but we can see no signal of the phase transition.
## 4 Conclusion
We have studied the free energy of a monopole pair in pure SU(2) gauge theory at finite temperature. We find that in both phases it exhibits a screened behavior. At low temperature, the screening mass is roughly in agreement with the mass of the lightest glueball state. At high temperature, we observe an increase in the screening mass with no apparent discontinuity at the phase transition.
no-problem/9909/hep-th9909136.html
## Abstract
Recently the long-standing puzzle about counting the Witten index in $`N=1`$ supersymmetric gauge theories was resolved. The resolution was based on the existence (for higher orthogonal $`SO(N),N\ge 7`$, and exceptional gauge groups) of flat connections on $`T^3`$ which have commuting holonomies but cannot be gauged to a Cartan torus. A number of papers have been published which studied moduli spaces and some topological characteristics of those flat connections. In the present letter an explicit description of such a flat connection for the basic case of $`Spin(7)`$ is given.
ITEP/TH-49/99
hep-th/9909…
On flat connections with non-diagonalizable holonomies
K.Selivanov
ITEP, Moscow, 117259, B.Cheryomushkinskaya 25
Recently the long-standing paradox with counting the Witten index in $`N=1`$ supersymmetric gauge theory has been resolved . The essence of the paradox was that different ways of computing the Witten index gave different results for the higher orthogonal ($`SO(N),N\ge 7`$) and the exceptional gauge groups.
The first way was to put the gauge theory into a finite spatial box and to count the number of supersymmetric vacuum states, which resulted in $`\mathrm{Tr}(-1)^F=r+1`$, where $`r`$ is the rank of the gauge group. For higher orthogonal and exceptional groups, this result disagrees with the one based on counting of gluino zero modes in the instanton background and also on the analysis of weakly coupled theories with additional matter super-multiplets .
$$\mathrm{Tr}(-1)^F=h^{\vee },$$
(1)
where $`h^{\vee }`$ is the dual Coxeter number of the group (see e.g. , Chapt. 6; it coincides with the Casimir $`T^aT^a`$ in the adjoint representation when a proper normalization is chosen). For $`SO(N\ge 7)`$, $`h^{\vee }=N-2>r+1`$. Also for the exceptional groups $`G_2,F_4,E_{6,7,8}`$, the index (1) is larger than Witten’s original estimate.
In resolving the paradox, Witten found a flaw in his original arguments and showed that, for $`SO(N\ge 7)`$, the vacuum moduli space is richer than previously thought, so that the total number of quantum vacua is $`N-2`$, in accordance with the result (1). The original derivation was based on the assumption that a flat connection on the 3d torus $`T^3`$ can be gauged to a Cartan sub-algebra of the corresponding Lie algebra. This assumption seems quite natural, since for a connected, simply connected gauge group a connection on $`T^3`$ is flat when and only when it has commuting holonomies $`\mathrm{\Omega }_j,j=1,2,3`$ over three independent nontrivial cycles of $`T^3`$, and it is very natural to expect that commuting holonomies are representable as exponentials of commuting Lie algebra elements. Nevertheless, this is not true. For higher orthogonal and exceptional gauge groups there are triples of commuting holonomies which cannot be represented as exponentials of Cartan sub-algebra elements. This fact is at the heart of the resolution of the paradox.
Interestingly, the existence of such triples has been known to topologists for a long time (examples of such triples were constructed early on).
After Witten’s work (see also its interpretation for pedestrians in ), new interest in such triples has arisen. Moduli spaces of such triples (that is, additional components of moduli spaces of flat connections on $`T^3`$ for exceptional gauge groups) have since been described, and these results were later re-derived and extended in some respects.
The purpose of this letter is to explicitly describe flat connections corresponding to such triples (actually, only the basic case of $`Spin(7)`$ is considered here; the description is likely generalizable to other cases). Although the explicit description is not needed in the problem of computing the Witten index, it may be useful elsewhere, in particular in answering the question whether the existence of the new components of the vacuum moduli space affects only the Witten index or also some other observables in supersymmetric gauge theories with orthogonal or exceptional gauge groups. It may also be interesting per se.
For $`Spin(7)`$ group there is a unique (up to conjugation) nontrivial triple. It can be chosen in the form ,
$`\mathrm{\Omega }_1=\gamma _{1234}`$
$`\mathrm{\Omega }_2=\gamma _{1256}`$ (2)
$`\mathrm{\Omega }_3=\gamma _{1357},`$
where and in what follows we use the notation
$$\gamma _{ijkl\mathrm{}}=\gamma _i\gamma _j\gamma _k\gamma _l\mathrm{}$$
(3)
and $`\gamma _i,i=1,\mathrm{},7`$ stand for the gamma-matrices. The $`\mathrm{\Omega }`$’s in Eq. (2) mutually commute and cannot be conjugated to the Cartan torus (see, e.g. ).
Let $`x,y,z`$ be coordinates on the cube in $`R^3`$ which gives $`T^3`$ upon the identifications $`xx+1`$, $`yy+1`$, $`zz+1`$. We would like to explicitly describe a flat $`Spin(7)`$ connection $`A_i,i=1,2,3`$ on $`T^3`$ which has the holonomies $`\mathrm{\Omega }_i,i=1,2,3`$ from Eq. (2), that is
$`\mathrm{\Omega }_1=Pexp\left({\displaystyle _0^1}A_1(x,0,0)𝑑x\right)`$
$`\mathrm{\Omega }_2=Pexp\left({\displaystyle _0^1}A_2(0,y,0)𝑑y\right)`$ (4)
$`\mathrm{\Omega }_3=Pexp\left({\displaystyle _0^1}A_3(0,0,z)𝑑z\right).`$
It will be represented in the form
$$A_i(x,y,z)=g(x,y,z)^1\frac{}{x^i}g(x,y,z)$$
(5)
where $`g(x,y,z)`$ takes values in the group and has the following properties:
$`g(1,y,z)=\mathrm{\Omega }_1g(0,y,z)`$
$`g(x,1,z)=\mathrm{\Omega }_2g(x,0,z)`$ (6)
$`g(x,y,1)=\mathrm{\Omega }_3g(x,y,0).`$
with $`\mathrm{\Omega }`$’s from Eq. (2).
Obviously, with $`g(x,y,z)`$ obeying Eq. (6) the flat connection $`A_i`$ from Eq. (5) is periodic on $`T^3`$ and has the appropriate holonomies.
Now introduce the following definitions:
$`g_1=e^{\frac{\pi }{2}x(\gamma _{12}+\gamma _{34})}`$
$`g_2=e^{\frac{\pi }{2}y(\gamma _{15}-\gamma _{26})}`$ (7)
$`g_3=e^{\frac{\pi }{2}z(\gamma _{13}+\gamma _{57})}.`$
Each $`g_i`$ of Eq. (7) produces the monodromy $`\mathrm{\Omega }_i`$ when the corresponding coordinate changes by 1. The specific choice of the $`g_i`$’s is related to the following property:
$$[g_i,\mathrm{\Omega }_{i+1}]=0$$
(8)
By explicit computation one verifies that
$$g_i\mathrm{\Omega }_{i-1}=\mathrm{\Omega }_{i-1}g_i^{-1}$$
(9)
Introduce also $`\stackrel{~}{g}_3`$ such that it commutes with $`g_2`$ (and, consequently, with $`\mathrm{\Omega }_2`$),
$$\stackrel{~}{g}_3=e^{\frac{\pi }{2}z(\gamma _{15}-\gamma _{37})}.$$
(10)
$`\stackrel{~}{g}_3`$ as well as $`g_3`$ produces $`\mathrm{\Omega }_3`$ when the corresponding coordinate ($`z`$) changes by 1.
One can also verify that
$$\stackrel{~}{g}_3\mathrm{\Omega }_1=\mathrm{\Omega }_1(\stackrel{~}{g}_3)^{-1}.$$
(11)
Using all these properties, one can see that the following $`g(x,y,z)`$
$$g(x,y,z)=g_1g_2g_1^{-1}g_3\stackrel{~}{g}_3^{-1}g_1g_2^{-1}g_1^{-1}\stackrel{~}{g}_3g_1g_2$$
(12)
obeys the key property Eq. (6), and hence the corresponding flat connection, Eq. (5), is periodic and has the monodromies $`\mathrm{\Omega }_i`$.
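All the properties used above are finite statements about $`8\times 8`$ matrices and can be checked directly. The sketch below builds one standard representation of the seven gamma-matrices from Pauli tensor products (any faithful representation would do, since the identities are Clifford-algebra statements) and verifies that the triple (2) commutes and that $`g(x,y,z)`$ of Eq. (12) obeys the boundary conditions (6):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# One standard 8x8 representation of the Cl(7) gamma-matrices.
g = [None,  # pad so that g[1]..g[7] are the gammas
     kron3(s1, I2, I2), kron3(s2, I2, I2),
     kron3(s3, s1, I2), kron3(s3, s2, I2),
     kron3(s3, s3, s1), kron3(s3, s3, s2), kron3(s3, s3, s3)]

def gam(*idx):
    m = np.eye(8, dtype=complex)
    for i in idx:
        m = m @ g[i]
    return m

O1, O2, O3 = gam(1, 2, 3, 4), gam(1, 2, 5, 6), gam(1, 3, 5, 7)
for A, B in [(O1, O2), (O1, O3), (O2, O3)]:
    assert np.allclose(A @ B, B @ A)          # the holonomies commute

def rot(B, t):
    # exp(t*B) for a bivector B with B^2 = -1.
    return np.cos(t) * np.eye(8) + np.sin(t) * B

def g1(x): return rot(gam(1, 2), np.pi*x/2) @ rot(gam(3, 4), np.pi*x/2)
def g2(y): return rot(gam(1, 5), np.pi*y/2) @ rot(gam(2, 6), -np.pi*y/2)
def g3(z): return rot(gam(1, 3), np.pi*z/2) @ rot(gam(5, 7), np.pi*z/2)
def g3t(z): return rot(gam(1, 5), np.pi*z/2) @ rot(gam(3, 7), -np.pi*z/2)

for gi, Oi in [(g1, O1), (g2, O2), (g3, O3), (g3t, O3)]:
    assert np.allclose(gi(1), Oi)             # monodromy at coordinate 1

def gfun(x, y, z):
    inv = np.linalg.inv
    return (g1(x) @ g2(y) @ inv(g1(x)) @ g3(z) @ inv(g3t(z)) @ g1(x)
            @ inv(g2(y)) @ inv(g1(x)) @ g3t(z) @ g1(x) @ g2(y))

x, y, z = 0.3, 0.6, 0.8
assert np.allclose(gfun(1, y, z), O1 @ gfun(0, y, z))
assert np.allclose(gfun(x, 1, z), O2 @ gfun(x, 0, z))
assert np.allclose(gfun(x, y, 1), O3 @ gfun(x, y, 0))
```

The checks pass because each generator is a sum of two commuting bivectors, so the exponentials factor into plane rotations; the non-diagonalizability statement concerns conjugation within $`Spin(7)`$ and is not tested here.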
I would like to thank A.Gorsky and A.Rosly for discussions and A.Smilga for discussions and for convincing me to publish this letter. This work was partially supported by INTAS-96-0482.
no-problem/9909/cond-mat9909035.html
# Andreev Reflection in Strong Magnetic Fields
## Abstract
We have studied the interplay of Andreev reflection and cyclotron motion of quasiparticles at a superconductor–normal-metal interface with a strong magnetic field applied parallel to the interface. Bound states are formed due to the confinement introduced both by the external magnetic field and the superconducting gap. These bound states are a coherent superposition of electron and hole edge excitations similar to those realized in finite quantum-Hall samples. We find the energy spectrum for these Andreev edge states and calculate transport properties.
Rapid progress in fabrication techniques has made it possible to investigate phase-coherent transport in a variety of mesoscopic conducting devices . In recent years, the study of hybrid systems consisting of superconductors in contact with normal metals has continued to attract considerable interest, mainly because of the novel effects observed in superconductor–semiconductor microjunctions . Many of the unusual experimental findings arise due to the phenomenon of Andreev reflection, i.e., the (partial) retroreflection of an electron incident on a superconductor (S) – normal-metal (N) interface as a hole . Phase coherence between the electron and hole states is maintained during the reflection process. Hence, coupled-electron-hole (Andreev) bound states having energies within the superconducting gap are formed in mesoscopic devices with multiple interfaces, e.g., S–N–S systems , or S–N–I–N–S structures . (The symbol ‘I’ denotes an insulating barrier.) Recently, measurements of transport across the interface between a superconductor and a two-dimensional electron gas (2DEG) were performed with a strong magnetic field $`H`$ applied in the direction perpendicular to the plane of the 2DEG . While the magnetic field did not exceed the critical field of the superconductor, it was still large enough such that the Landau-level quantization of the electronic motion in the 2DEG was important . With these experiments, a link has been established between mesoscopic superconductivity and quantum-Hall physics which needs theoretical exploration.
In this Letter, we study a novel kind of Andreev bound state that is formed at a single S–N interface in a strong magnetic field . This bound state is a coherent superposition of an electron and a hole propagating along the interface in a new type of current-carrying edge state that is induced by the superconducting pair potential. Andreev reflection gives rise to the contribution
$$G_{\text{AR}}=\frac{e^2}{\pi \hbar }\underset{n=1}{\overset{n^{\ast }}{\sum }}B_n$$
(1)
to the small-bias conductance, which we obtained by generalizing the familiar Büttiker description of transport in quantum-Hall samples. In Eq. (1), the summation is over Andreev-bound-state levels that intersect the chemical potential, and $`B_n`$ is the hole probability for a particular bound-state level. It turns out that $`n^{\ast }`$ is twice the number of orbital Landau levels occupied in the bulk of the 2DEG, and $`B_n\approx 1/2`$ depends weakly on magnetic field $`H`$ for an ideal interface but oscillates strongly with $`H`$ for a non-ideal interface. $`G_{\text{AR}}`$ can be measured directly as the two-terminal conductance in an S–2DEG–S system . Our treatment in terms of Andreev edge states provides a clear physical description of transport in such devices and explains oscillatory features in the conductance that were observed experimentally and also obtained in previous numerical studies .
Let us start by recalling the classical and quantum-mechanical descriptions of electron dynamics in an external magnetic field. When considered to be classical charged particles, bulk-metal electrons execute periodic cyclotron motion with a frequency $`\omega _\text{c}=eH/(mc)`$. A surface that is parallel to the magnetic field interrupts the cyclotron orbits of nearby electrons and forces them to move in skipping orbits along the surface . Within the more adequate quantum-mechanical treatment, the kinetic energy for electronic motion in the plane perpendicular to the magnetic field is quantized in Landau levels which are at constant eigenvalues $`\mathrm{}\omega _\text{c}(n+1/2)`$ for electron states localized in the bulk but are bent upward in energy for states localized close to the surface . Applying the classical picture of cyclotron and skipping orbits to an S–N interface , one finds that Andreev reflection leads to electrons and holes alternating in skipping orbits along the interface. \[See Fig. 1(a).\] In what follows, we provide a full quantum-mechanical description of these alternating skipping orbits in terms of current-carrying Andreev bound states. \[See Fig. 1(b).\]
We now provide details of our calculation. A planar interface is considered, located at $`x=0`$ between a semi-infinite region ($`x<0`$) occupied by a type-I superconductor and a semi-infinite normal region ($`x>0`$). A uniform magnetic field is applied in $`z`$ direction, which is screened from the superconducting region due to the Meissner effect. Neglecting inhomogeneities in the magnetic field due to the existence of a finite penetration layer , we assume an abrupt change of the magnetic-field strength at the interface: $`H(x)=H\mathrm{\Theta }(x)`$, where $`\mathrm{\Theta }(x)`$ is Heaviside’s step function. The energy spectrum of Andreev bound states at the S–N interface is found by solving the Bogoliubov–de Gennes (BdG) equation ,
$$\left(\begin{array}{cc}H_{\text{0},+}+U_{\text{ext}}& \mathrm{\Delta }\\ \mathrm{\Delta }^\text{*}& -H_{\text{0},-}-U_{\text{ext}}\end{array}\right)\left(\begin{array}{c}u\\ v\end{array}\right)=E\left(\begin{array}{c}u\\ v\end{array}\right),$$
(2)
with spatially non-uniform single-electron/hole Hamiltonians $`H_{\text{0},\pm }`$ and pair potential $`\mathrm{\Delta }(x)=\mathrm{\Delta }_\text{0}\mathrm{\Theta }(x)`$. We introduced the potential $`U_{\text{ext}}(x)=U_\text{0}\delta (x)`$ to model scattering at the interface, and allow the effective mass and the Fermi energy to be different in the superconducting and normal regions. Choosing the vector potential $`\stackrel{}{A}(x)=xH\mathrm{\Theta }(x)\widehat{y}`$, we have
$$H_{\text{0},\pm }=\{\begin{array}{cc}\frac{p_x^2+p_z^2}{2m_\text{N}}+\frac{m_\text{N}\omega _\text{c}^2}{2}\left(x-X_{p_y}\right)^2-ϵ_\text{F}^{\text{(N)}}& x>0\hfill \\ \frac{p_x^2+p_y^2+p_z^2}{2m_\text{S}}-ϵ_\text{F}^{\text{(S)}}& x<0\hfill \end{array}.$$
(3)
The operator $`X_{p_y}=p_y\ell ^2\,\mathrm{sgn}(eH)/\hbar `$ is the guiding-center coordinate in $`x`$ direction for cyclotron motion of electrons in the normal region, and $`\ell =\sqrt{\hbar c/|eH|}`$ denotes the magnetic length. Uniformity in the $`y`$ and $`z`$ directions suggests the separation ansatz
$`u(x,y,z)`$ $`=`$ $`f_X(x)e^{iyX/\ell ^2}e^{ikz}/\sqrt{L_yL_z},`$ (5)
$`v(x,y,z)`$ $`=`$ $`g_X(x)e^{iyX/\ell ^2}e^{ikz}/\sqrt{L_yL_z}.`$ (6)
The lengths $`L_y`$, $`L_z`$ are the sample sizes in $`y`$ and $`z`$ directions, respectively. Solutions of Eq. (2) for the S–N junction are found by matching appropriate wave functions that are solutions in the normal and superconducting regions, respectively . The motion in $`z`$ direction can trivially be accounted for by renormalized Fermi energies $`\stackrel{~}{ϵ}_\text{F}^{\text{(N,S)}}=ϵ_\text{F}^{\text{(N,S)}}-\hbar ^2k^2/(2m_{\text{N,S}})`$. Non-trivial matching conditions arise only in the $`x`$ direction. For fixed $`X`$ and $`|E|<\mathrm{\Delta }_\text{0}`$, we have to match at $`x=0`$ the wave function
$$\left(\begin{array}{c}f_X\\ g_X\end{array}\right)_{x>0}=\left(\begin{array}{c}a\chi _{\epsilon _+}(x-X)\\ b\chi _{\epsilon _{-}}(x+X)\end{array}\right),$$
(8)
corresponding to a coherent superposition of an electron and a hole in the normal region, to that of evanescent excitations in the superconductor,
$$\left(\begin{array}{c}f_X\\ g_X\end{array}\right)_{x<0}=d_+\left(\begin{array}{c}\gamma \\ 1\end{array}\right)e^{ix\lambda _{-}}+d_{-}\left(\begin{array}{c}\gamma ^{\ast }\\ 1\end{array}\right)e^{-ix\lambda _+}.$$
(9)
The parameters $`\gamma `$ and $`\lambda _\pm `$ are defined in the usual way . The functions $`\chi _{\epsilon _\pm }(\xi )`$ solve the familiar one-dimensional harmonic-oscillator Schrödinger equation,
$$-\frac{\ell ^2}{2}\frac{d^2\chi _{\epsilon _\pm }}{d\xi ^2}+\left[\frac{\xi ^2}{2\ell ^2}-\frac{\epsilon _\pm }{\hbar \omega _\text{c}}\right]\chi _{\epsilon _\pm }=0,$$
(10)
with eigenvalues $`\epsilon _\pm =ϵ_\text{F}^{\text{(N)}}\pm E\mathrm{}^2k^2/(2m_\text{N})`$ and are assumed to be normalized to unity in the normal region. Hence, they are proportional to the fundamental solutions of Eq. (10) that are well-behaved for $`x\mathrm{}`$; these are the parabolic cylinder functions $`U(\frac{\epsilon _\pm }{\mathrm{}\omega _\text{c}},\frac{\sqrt{2}\xi }{\mathrm{}})`$. The matching conditions yield a homogeneous system of four linear equations for the coefficients $`a,b,d_\pm `$ whose secular equation determines the allowed values of $`E`$.
It is straightforward to calculate the probability and charge currents for any particular Andreev-bound-state solution of the BdG equation (2) that is labeled by guiding-center coordinate $`X`$ and energy $`E`$. It turns out that currents flow parallel to the interface. The total (integrated) quasiparticle probability current is given by
$$I_X^{\text{(P)}}=\frac{1}{\hbar }\frac{\ell ^2}{L_y}\frac{\partial E}{\partial X}.$$
(11)
The total charge current can be written as the sum of three contributions, $`I_X^{\text{(Q)}}=I_X^{\text{(Q,n)}}-I_X^{\text{(Q,a)}}+I_X^{\text{(Q,s)}}`$, where
$`I_X^{\text{(Q,n)}}`$ $`=`$ $`{\displaystyle \frac{e}{\hbar }}{\displaystyle \frac{\ell ^2}{L_y}}{\displaystyle \frac{\partial E}{\partial X}},`$ (13)
$`I_X^{\text{(Q,a)}}`$ $`=`$ $`{\displaystyle \frac{e}{\hbar }}{\displaystyle \frac{\ell ^2}{L_y}}{\displaystyle \frac{\partial E}{\partial X}}\,2{\displaystyle \int _x}\left|g_X(x)\right|^2,`$ (14)
$`I_X^{\text{(Q,s)}}`$ $`=`$ $`{\displaystyle \frac{e}{\hbar }}{\displaystyle \frac{\ell ^2}{L_y}}\,2\mathrm{\Delta }{\displaystyle \int _x}\mathrm{\Theta }(x)\left[g_X^{\ast }{\displaystyle \frac{df_X}{dX}}-f_X^{\ast }{\displaystyle \frac{dg_X}{dX}}\right].`$ (15)
Note that $`I_X^{\text{(Q,n)}}`$ is the current that would flow in an ordinary quantum-Hall edge state , i.e., due to normal reflection at the interface. The existence of Andreev reflection is manifested in the contribution $`I_X^{\text{(Q,a)}}`$ to the Hall current; it is proportional to the hole probability $`B(X)=\int _x|g_X(x)|^2`$. The part $`I_X^{\text{(Q,s)}}`$ of the quasiparticle charge current is converted into a supercurrent.
Numerical implementation of the matching procedure is straightforward. More detailed insight is gained, however, by considering the limit $`|X|\ll \sqrt{\epsilon _\pm /(2m_\text{N}\omega _\text{c}^2)}`$ for which analytical progress can be made. Using an asymptotic form for the parabolic cylinder functions , the secular equation can be written as
$$\mathrm{cos}(\phi _+)+\mathrm{\Gamma }(\phi _{})=\frac{2s}{s^2+w^2+1}\frac{E\mathrm{sin}(\phi _+)}{\sqrt{\mathrm{\Delta }_\text{0}^2E^2}}.$$
(16)
Here we used the Andreev approximation ($`E,\mathrm{\Delta }_\text{0}\ll \stackrel{~}{ϵ}_\text{F}^{\text{(N)}},\stackrel{~}{ϵ}_\text{F}^{\text{(S)}}`$), and the abbreviations
$`\phi _+`$ $`=`$ $`\pi {\displaystyle \frac{E}{\mathrm{}\omega _\text{c}}}+{\displaystyle \frac{2X}{\mathrm{}}}\sqrt{2m_\text{N}\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}},`$ (18)
$`\phi _{}`$ $`=`$ $`\pi {\displaystyle \frac{\nu }{2}}+{\displaystyle \frac{EX}{\mathrm{}}}\sqrt{{\displaystyle \frac{2m_\text{N}}{\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}}}},`$ (19)
$`\mathrm{\Gamma }(\alpha )`$ $`=`$ $`{\displaystyle \frac{[s^2+w^21]\mathrm{sin}(\alpha )+2w\mathrm{cos}(\alpha )}{s^2+w^2+1}}.`$ (20)
The variable $`\nu =2\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}/(\mathrm{}\omega _\text{c})`$ coincides with the filling factor of quantum-Hall physics when the N region is a 2DEG. The parameter $`s=[\stackrel{~}{ϵ}_\text{F}^{\text{(S)}}m_\text{N}/(\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}m_\text{S})]^{1/2}`$ measures the Fermi-velocity mismatch for the junction, and $`w=[2m_\text{N}U_\text{0}^2/(\mathrm{}^2\stackrel{~}{ϵ}_\text{F}^{\text{(N)}})]^{1/2}`$ quantifies interface scattering. We briefly discuss results for two limiting cases .
(a) Ideal interface. In the absence of scattering at the S–N interface ($`w=0`$) and for perfectly matching Fermi velocities ($`s=1`$), $`\mathrm{\Gamma }(\alpha )`$ vanishes identically. The energy spectrum is found from solutions of the transcendental equation $`\mathrm{cot}(\phi _+)=E/\sqrt{\mathrm{\Delta }_\text{0}^2E^2}`$. It consists of several bands, and states within each band are labeled by their guiding-center quantum number $`X`$. It turns out that $`a^2=b^2`$ at any energy, and the band dispersion is
$$\frac{E}{X}=\frac{2\sqrt{2m_\text{N}\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}}}{\mathrm{}}\frac{\sqrt{\mathrm{\Delta }_\text{0}^2E^2}}{1+\pi \sqrt{\mathrm{\Delta }_\text{0}^2E^2}/(\mathrm{}\omega _\text{c})}.$$
(21)
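For case (a), the transcendental equation $`\mathrm{cot}(\phi _+)=E/\sqrt{\mathrm{\Delta }_\text{0}^2E^2}`$ is easy to solve numerically. The sketch below is our own illustration, not code from the paper: it rewrites the condition in a pole-free form and bisects for the dimensionless level energies $`u=E/\mathrm{\Delta }_\text{0}`$; the ratio $`r`$ and the $`X`$-dependent phase are arbitrary illustrative choices.

```python
import math

def andreev_residual(u, r, phi_x):
    """Residual of cot(phi_+) = E/sqrt(Delta_0^2 - E^2), with u = E/Delta_0.

    Written pole-free as cos(phi_+)*sqrt(1 - u^2) - u*sin(phi_+) = 0, where
    phi_+ = pi*u*r + phi_x, r = Delta_0/(hbar*omega_c), and phi_x collects
    the X-dependent phase 2*X*sqrt(2*m_N*eps_F)/hbar.
    """
    phi = math.pi * u * r + phi_x
    return math.cos(phi) * math.sqrt(1.0 - u * u) - u * math.sin(phi)

def andreev_levels(r, phi_x, n_grid=2000, tol=1e-12):
    """Scan u in (-1, 1) for sign changes of the residual and bisect each bracket."""
    us = [-1.0 + 2.0 * k / n_grid for k in range(n_grid + 1)]
    roots = []
    for a, b in zip(us, us[1:]):
        fa, fb = andreev_residual(a, r, phi_x), andreev_residual(b, r, phi_x)
        if fa * fb > 0.0:
            continue
        lo, hi = a, b
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if andreev_residual(lo, r, phi_x) * andreev_residual(mid, r, phi_x) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

# Illustrative parameters: r = Delta_0/(hbar*omega_c) = 2, arbitrary phase 0.3.
levels = andreev_levels(r=2.0, phi_x=0.3)
```

Each returned root is a subgap bound-state energy in units of the gap; several bands appear because the phase sweeps through many multiples of $`\pi `$ across the subgap interval.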
(b) Non-ideal (S–I–N) interface at low energies. For $`E\ll \mathrm{min}\{\mathrm{\Delta }_\text{0},\mathrm{}\sqrt{\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}/(2m_\text{N}X^2)}\}`$, the dependence of $`\phi _{}`$ on $`E`$ and $`X`$ can be neglected. We find
$$E=\mathrm{\Delta }_\text{0}\frac{(2n+1)\pi \pm \mathrm{arccos}(\mathrm{\Gamma }_\text{0})2X\sqrt{2m_\text{N}\stackrel{~}{ϵ}_\text{F}^{\text{(N)}}}/\mathrm{}}{q+\pi \mathrm{\Delta }_\text{0}/(\mathrm{}\omega _\text{c})},$$
(22)
where $`\mathrm{\Gamma }_\text{0}=\mathrm{\Gamma }(\pi \nu /2)`$ and $`q=2s/(s^2+w^2+1)`$. For $`s=1`$ and $`w=0`$, the small-energy spectrum of the ideal interface emerges. In the opposite limit of a very bad interface ($`s^2+w^2\to \mathrm{\infty }`$), we recover the spectrum of the Landau-level edge states close to a hard wall . In general, $`\mathrm{\Gamma }_\text{0}`$ oscillates as a function of filling factor $`\nu `$. Hence, unlike in the ideal case, the bound-state energies of Eq. (22) vary in an oscillatory manner with $`\nu `$. The same is true for the hole probability $`B=b^2\le 1/2`$, for which we find
$$B=\frac{1}{2}\frac{q^2/(1\mathrm{\Gamma }_\text{0}^2)}{1+\sqrt{1q^2/(1\mathrm{\Gamma }_\text{0}^2)}}.$$
(23)
The minima in the oscillatory dependence of $`B`$ on filling factor $`\nu `$ occur whenever $`\mathrm{tan}(\pi \nu /2)=2w/(1s^2w^2)`$.
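A short numerical sketch (our own, with illustrative parameter values rather than ones taken from the paper) evaluates Eq. (23) together with the definition of $`\mathrm{\Gamma }`$ in Eq. (20); the identity $`q^2+\mathrm{\Gamma }_{max}^2=1`$ guarantees that the square-root argument stays non-negative.

```python
import math

def hole_probability(nu, s, w):
    """Hole probability B(nu) from Eqs. (20) and (23).

    Gamma_0 = Gamma(pi*nu/2); s is the Fermi-velocity mismatch and w the
    interface-scattering strength.  One can check q^2 + Gamma_max^2 = 1,
    so x = q^2/(1 - Gamma_0^2) never exceeds 1 (up to rounding).
    """
    alpha = math.pi * nu / 2.0
    d = s * s + w * w + 1.0
    gamma0 = ((s * s + w * w - 1.0) * math.sin(alpha) + 2.0 * w * math.cos(alpha)) / d
    q = 2.0 * s / d
    x = q * q / (1.0 - gamma0 * gamma0)
    # max(...) guards against tiny negative arguments from floating-point rounding
    return 0.5 * x / (1.0 + math.sqrt(max(0.0, 1.0 - x)))
```

For an ideal interface ($`s=1`$, $`w=0`$) this returns exactly $`B=1/2`$ for every filling factor, while for finite $`w`$ it oscillates below $`1/2`$ as a function of $`\nu `$.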
Results obtained in the approximate analytical treatment sketched above are expected to be valid only as long as $`X`$ is not too large. It turns out, however, that they actually provide a good description at $`E\approx 0`$ for Andreev levels intersecting the Fermi energy, which are important for small-bias transport. In particular, we obtained a non-vanishing dispersion $`E/X`$ close to the interface which leads to a finite Hall current $`I_X^{\text{(Q,n)}}-I_X^{\text{(Q,a)}}`$. \[See Eqs. (11).\] It is clear that, far away from the interface, i.e., for $`|X|\gg r_\text{c}`$, no coupling of electrons and holes via the pair potential is possible and dispersionless Landau levels are solutions of the BdG equation. However, as the guiding-center coordinate $`X`$ gets close to the interface, these Landau levels are bent upward and become Andreev-bound-state levels for $`|E|<\mathrm{\Delta }_\text{0}`$. This is seen in the exact numerical calculation of the spectrum (Fig. 2), which also provides a crucial piece of information that is elusive within the approximate analytical treatment: the number $`n^{}`$ of Andreev levels intersecting the Fermi energy. We find that $`n^{}`$ is twice the integer part of $`\nu /2`$.
We now apply our findings to study transport in S–2DEG–S structures . In experiment, two S–N interfaces are linked by ordinary quantum-Hall edge channels whose local chemical potentials differ by $`\delta \mu `$. Generalizing the Büttiker formalism for edge-channel transport, the following picture emerges. (See inset of Fig. 3.) From the lower edge channel, a current $`\delta I=\delta \mu e/h`$ impinges on the left S–N interface. This current divides up into a part $`\delta I_{\parallel }`$ flowing parallel to the interface in the Andreev edge states studied above, and a part $`\delta I_{\perp }`$ which flows across the interface. Chirality of the edge states (both Andreev and ordinary) and conservation of quasiparticle probability current completely determine the current parallel to the interface to be $`\delta I_{\parallel }=(1-2B_n)\delta \mu e/h`$. The upper edge channel collects $`\delta I_{\parallel }`$ and returns it to the right interface, where a similar discussion applies. Hence, the two-terminal conductance $`e\delta I_{\perp }/\delta \mu `$ of the S–2DEG–S device equals $`G_{\text{AR}}`$ \[given in Eq. (1)\]. Using hole amplitudes obtained from the exact numerical matching procedure, we determined the filling-factor dependence of $`G_{\text{AR}}`$ (shown in Fig. 3 for $`2\le \nu \le 18`$ and various values of $`w`$). As anticipated from the approximate analytical result $`B_n\approx 1/2`$, the ideal interface exhibits almost perfect conductance steps in units of $`2e^2/h`$. For finite scattering at the interface, oscillations appear in the conductance whose amplitude increases as the interface quality worsens. However, for certain values of $`\nu `$, the ideal conductance is reached even at a bad interface. The location of the minima and maxima in the field dependence of the conductance can be obtained from our analytical calculation and compares well with results of previous numerical studies based on a representation in terms of scattering states.
We thank W. Belzig, C. Bruder, T. M. Klapwijk, A. H. MacDonald, and A. D. Zaikin for useful discussions, and Sonderforschungsbereich 195 der DFG for support.
|
no-problem/9909/gr-qc9909092.html
|
ar5iv
|
text
|
# LIMITING SENSITIVITY OF SUPERCONDUCTIVE LC - CIRCUIT PLACED IN WEAK GRAVITATIONAL WAVE
## Abstract
It is demonstrated that the sensitivity of a superconductive LC circuit placed in a weak gravitational wave is limited by two factors. One is the quantization of the magnetic flux through the circuit; the second is the fractional elementary charge (the Laughlin-Stormer-Tsui effect). The possibility of using a superconductive LC circuit as a detector of weak gravitational waves is discussed.
PACS number(s): 04.80
The attractive goal of gravitational astronomy stimulates us to enlarge the set of gravitational wave antennae based on different physical principles, since it is evident that a diversity of experimental devices can help us obtain complete and reliable information.
As was proved in Fo , if an RLC circuit is placed in the wave-front plane of a monochromatic gravitational wave (GW), electromagnetic oscillations can be excited in it, so the RLC circuit is a gravitational wave detector. The relative positions of the RLC-circuit elements and the gravitational wave front are shown in Fig. 1.
The incoming GW changes the mutual positions of the circuit conductors and their cross sections; consequently, the values of the resistance, inductance, and capacitance also change in phase with the GW. Their variations are proportional to the GW amplitude. In Fo it was obtained that in the orthogonal TT frame (the frame in which the RLC circuit is positioned relative to the GW front)
$$\{\begin{array}{ccc}\hfill R(t)& =& R[1+\epsilon _R(t)],\hfill \\ \hfill L(t)& =& L[1+\epsilon _L(t)],\hfill \\ \hfill \frac{1}{C_i\left(t\right)}& =& \frac{1}{C_i}[1\epsilon _{C_i}(t)],i=1,2,\hfill \end{array}$$
(1)
where $`R,L,C_i`$ are initial magnitudes
$$\epsilon _{\mathrm{}}(t)=ϵ_\mathrm{}_+h_+(t)+ϵ_\mathrm{}_\times h_\times (t),$$
(2)
where $`g_{\mu ,\nu }=\eta _{\mu ,\nu }+h_{\mu ,\nu }`$, $`ϵ`$ is a parameter, $`\mathrm{}=R,L,C_i`$, and $`+`$ or $`\times `$ denotes the polarization of the monochromatic GW. For the monochromatic GW we suppose $`h_{+,\times }(t)=A_{+,\times }\mathrm{cos}(\omega _gt+\varphi _{+,\times })`$.
The assumptions for deriving Eqs. (1) are as follows: the oscillation frequency is less than $`c/d`$ ($`c`$ is the velocity of light, $`d`$ is the maximum length of the circuit), and the field of the GW is uniform, i.e. $`\lambda _g\gg d`$ ($`\lambda _g`$ is the wavelength of the GW).
Under these assumptions one can write the equation for the oscillations of the charge $`Q`$ in the RLC circuit in the presence of the GW:
$$\ddot{Q}+2\gamma \dot{Q}+\omega _0^2Q=v_+h_+(t)+v_\times h_\times (t),$$
(3)
where
$$\begin{array}{ccc}\hfill v_{+,\times }& =& \omega _0^2Q_0\frac{a}{(a+1)^2}[ϵ_{C_{1_{+,\times }}}ϵ_{C_{2_{+,\times }}}],\hfill \\ \hfill \omega _0^2& =& \frac{a+1}{a}\frac{1}{LC},\hfill \\ \hfill Q_1& =& aQ_2=\frac{a}{a+1}Q_0,\hfill \\ \hfill C_1& =& aC_2=aC.\hfill \end{array}$$
Eq. (3) is derived in the approximations $`|\epsilon _{\mathrm{}}|\ll 1`$ and $`|\dot{\epsilon }_{\mathrm{}}|\ll 1`$.
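Eq. (3) is a damped, harmonically driven oscillator, so its resonant response can be checked by direct integration. The sketch below is our own illustration in arbitrary units (not the physical values quoted later); it integrates the equation with a fourth-order Runge-Kutta step and recovers the familiar steady-state amplitude $`f_0/(2\gamma \omega _0)`$ at resonance.

```python
import math

def drive_response(omega0, gamma, f0, omega_g, t_end, dt=0.01):
    """Integrate Q'' + 2*gamma*Q' + omega0^2*Q = f0*cos(omega_g*t) by RK4
    from rest, and return the peak |Q| over the final drive period,
    a proxy for the steady-state oscillation amplitude."""
    def deriv(t, q, p):
        return p, f0 * math.cos(omega_g * t) - 2.0 * gamma * p - omega0 ** 2 * q
    q = p = t = 0.0
    peak = 0.0
    period = 2.0 * math.pi / omega_g
    while t < t_end:
        k1q, k1p = deriv(t, q, p)
        k2q, k2p = deriv(t + 0.5 * dt, q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
        k3q, k3p = deriv(t + 0.5 * dt, q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
        k4q, k4p = deriv(t + dt, q + dt * k3q, p + dt * k3p)
        q += dt / 6.0 * (k1q + 2.0 * k2q + 2.0 * k3q + k4q)
        p += dt / 6.0 * (k1p + 2.0 * k2p + 2.0 * k3p + k4p)
        t += dt
        if t > t_end - period:
            peak = max(peak, abs(q))
    return peak

# At resonance (omega_g = omega0) the steady-state amplitude is f0/(2*gamma*omega0) = 10.
peak = drive_response(omega0=1.0, gamma=0.05, f0=1.0, omega_g=1.0, t_end=200.0)
```

The integration time of ten damping times lets the transient die out before the amplitude is read off.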
So, the most prominent effect of the action of the GW is the appearance of an electromotive force in the RLC circuit. Its value is determined by the amplitude of the GW. In order to simplify the following calculations it is supposed that only the ”plus” polarization of the GW acts on the circuit. The plates of the capacitors $`C_1`$ and $`C_2`$ are mutually orthogonal, and $`C_1=C_2=C`$. So $`a=1`$, $`v_+=1/2\omega _g^2Q_0`$.
It is obvious that in the case of resonance, $`\omega _0=\omega _g`$, losses in the RLC circuit limit the amplitude of the electric oscillations. In a usual RLC circuit at temperature $`T`$ the sensitivity is restricted by thermal noise,
$$A_n=\frac{2}{U_0}\sqrt{\frac{\kappa T}{𝒬C\omega _0\tau }},$$
(4)
where $`U_0=Q_0/C`$, $`\kappa `$ is the Boltzmann constant, $`𝒬`$ is the quality factor, $`𝒬`$=$`\omega _0/4\gamma `$, and $`\tau `$ is the registration interval. For $`U_0=10^5V`$, $`T=4K`$, $`\omega _0=120\pi `$, $`𝒬=10^3`$, and $`\tau =10^7s`$, the minimal detectable amplitude of the GW is Fo
$$A_n=7.6\times 10^{-22}.$$
(5)
Let us now discuss the sensitivity of the superconductive LC circuit, which can be estimated from the following reasoning. The magnetic flux through the superconductive inductance $`L`$ is quantized, and the value of the minimum flux is
$$\mathrm{\Phi }_0=\frac{\pi \hbar }{e}=2.07\times 10^{-15}\mathrm{Wb}.$$
(6)
According to Faraday's law, if the magnetic flux changes by $`\mathrm{\Delta }\mathrm{\Phi }=\mathrm{\Phi }_0`$ in the time interval $`\mathrm{\Delta }t=T_g/2`$ ($`T_g=2\pi /\omega _g`$), then an electromotive force is created:
$$E_L=\frac{\omega _g}{\pi }\mathrm{\Phi }_0.$$
(7)
This electromotive force in the LC circuit is compensated by changes of the voltages across $`C_1`$ and $`C_2`$ (by Kirchhoff's law). The latter are created by the action of the GW. One can conclude that
$$\mathrm{\Delta }U=U_0A𝒬_s,$$
(8)
where $`𝒬_s`$ is the quality factor of the LC circuit (see p. 57 of ) and $`A`$ is the amplitude of the GW. Then, using Kirchhoff's law, one has
$$E_L+\mathrm{\Delta }U=0.$$
(9)
After substituting (7) and (8) into (9) we find
$$\frac{\omega _g\mathrm{\Phi }_0}{\pi }=U_0A𝒬_s.$$
(10)
Consequently, the boundary value of the GW amplitude that can be detected with the LC circuit is
$$A_{min_s}=\frac{\omega _g\mathrm{\Phi }_0}{\pi U_0𝒬_s},$$
(11)
or, in other terms,
$$A_{min_s}=\frac{\mathrm{}\omega _g}{𝒬_seU_0}.$$
(12)
Taking the above-mentioned values and supposing that $`𝒬_s=10^6`$ and $`\omega _g=120\pi `$, one obtains
$$A_{min_s}=2.4\times 10^{-24}.$$
(13)
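The quoted numbers follow directly from Eqs. (6) and (12) with CODATA values of the constants; a quick check in SI units (our own sketch, not part of the original text) reproduces $`\mathrm{\Phi }_0\approx 2.07\times 10^{-15}`$ Wb and a limiting amplitude of about $`2.5\times 10^{-24}`$, consistent with Eq. (13) to rounding.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def flux_quantum():
    """Superconducting flux quantum Phi_0 = pi*hbar/e, Eq. (6)."""
    return math.pi * HBAR / E_CHARGE

def min_amplitude(omega_g, q_factor, u0):
    """Limiting GW amplitude A_min = hbar*omega_g/(Q_s*e*U_0), Eq. (12)."""
    return HBAR * omega_g / (q_factor * E_CHARGE * u0)

phi0 = flux_quantum()                             # about 2.07e-15 Wb
a_min = min_amplitude(120.0 * math.pi, 1e6, 1e5)  # about 2.5e-24
```

The slight difference from the quoted 2.4×10⁻²⁴ is at the level of the rounding used in the text.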
As was shown recently , if one lowers the temperature of the $`LC`$-circuit below $`0.4K`$ and applies an external magnetic field of more than $`10T`$, a fraction of the elementary charge $`e`$ can be used in the experimental measurement. Consequently, we have to rewrite formula (12) so that this effect is included. It is easy to show that with such technology one can reach the sensitivity
$$A_{min_s}=\frac{\mathrm{}\omega _g}{𝒬_seU_0}n,$$
(14)
where $`n`$ is the fraction of the elementary charge (in particular 1/3, 1/5, and so on).
In conclusion, we note that such a device, with a sensitivity comparable to that of a laser interferometer, can be exploited as an effective gravitational wave detector.
One of the authors (G. N. I.) should like to thank the INFN - Sezione di Ferrara for financial support of his staying in Ferrara.
|
no-problem/9909/cond-mat9909423.html
|
ar5iv
|
text
|
# Magnetization of noncircular quantum dots
## I Introduction
The internal electron structure of quantum dots has been explored by far-infrared (FIR) absorption, Raman scattering, and tunneling. The FIR absorption can only be used to detect center-of-mass oscillations of the whole electron system if it is parabolically confined in a circular quantum dot that is much smaller than the wavelength of the incoming radiation, i.e. the absorption is in accordance with the extended Kohn theorem. Collective excitations due to relative motion of the electrons are observed in dots where either the circular symmetry is broken, the radial confinement is not parabolic, or Raman scattering with radiation of shorter wavelength is used to inelastically transfer finite momentum to the electron system. A similar blocking effect has been found in the transport excitation spectroscopy of a quantum dot, where correlation between the multi-electron states suppresses most of the energetically allowed tunneling processes involving excited dot states. Transitions in Raman scattering measurements do follow certain selection rules, but there is no general mechanism blocking a wide selection of collective modes comprised of relative motion of the electrons. Together these three methods have proven invaluable for the exploration of excitations in quantum dots and have supplied important indirect information about the ground state of the electron system.
Recently it has been realized that new or improved methods to measure the magnetization of a two-dimensional electron gas (2DEG) give important information about the structure of the many-electron ground state. We are aware of an effort to extend these measurements to nanostructured 2DEGs, arrays of quantum wires or dots.
The magnetization has been calculated for large ”semi-classical“ circular quantum dots with noninteracting electrons, and for two ”exactly“ interacting electrons in a square quantum dot with hard walls and an impurity in the center, in order to investigate how the impurity modifies the effect of the electron-electron interaction on the magnetization.
In this paper we show that magnetization measurements of isolated dots can reveal information about the shape of a dot and the number of electrons in it. In addition, the magnetization is shown to be very sensitive to the electron-electron interaction.
## II Model
We consider a very general angular shape of the quantum dot; thus neither the total angular momentum of the system nor the angular momentum of the effective single-particle states in a mean-field approach is conserved. The Coulomb interaction mixes all elements of the functional basis used, and we limit ourselves to the Hartree approximation (HA) in order to be able to calculate the magnetization for up to five electrons. The confinement potential of the electrons in the quantum dot is expressed as
$$V(r,\varphi )=\frac{1}{2}m^{}\omega _0^2r^2\left[1+\underset{p=1}{\overset{p_{max}}{}}\alpha _p\mathrm{cos}(2p\varphi )\right],$$
(1)
representing an elliptic confinement when $`\alpha _10`$ and $`\alpha _p=0`$ for $`p1`$, and a square symmetric confinement when $`\alpha _20`$ and $`\alpha _p=0`$ for $`p2`$. We use the Darwin-Fock basis; the eigenfunctions of the circular parabolic confinement potential in which the natural length scale, $`a`$, is given by
$$a^2=\frac{\ell ^2}{\sqrt{1+4(\frac{\omega _0}{\omega _c})^2}},\ell ^2=\frac{\hbar c}{eB},$$
(2)
where $`\omega _c=eB/m^{}c`$ is the cyclotron frequency of an electron with an effective mass $`m^{}`$ in a perpendicular homogeneous magnetic field $`B`$. The states are labelled by the radial quantum number $`n_r`$ and the angular quantum number $`M`$.
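As a rough numerical illustration of the length scale in Eq. (2), the sketch below (our own, in SI units, where $`\ell ^2=\mathrm{}/(eB)`$ replaces the Gaussian-unit $`\mathrm{}c/(eB)`$) evaluates $`a`$ for the GaAs parameters used later in the paper ($`m^{}=0.067m_0`$, $`\mathrm{}\omega _0=3.37`$ meV).

```python
import math

HBAR = 1.054571817e-34   # J s
E_CH = 1.602176634e-19   # C
M0 = 9.1093837015e-31    # electron rest mass, kg

def length_scale(b_field, hbar_omega0_mev, m_eff=0.067):
    """Natural length a of the Darwin-Fock basis, Eq. (2), in SI units."""
    m = m_eff * M0
    l2 = HBAR / (E_CH * b_field)                     # squared magnetic length
    omega_c = E_CH * b_field / m                     # cyclotron frequency
    omega_0 = hbar_omega0_mev * 1e-3 * E_CH / HBAR   # confinement frequency
    return math.sqrt(l2 / math.sqrt(1.0 + 4.0 * (omega_0 / omega_c) ** 2))

# GaAs dot at B = 1 T with hbar*omega_0 = 3.37 meV: a comes out near 13 nm,
# smaller than the bare magnetic length because the confinement squeezes the states.
a = length_scale(b_field=1.0, hbar_omega0_mev=3.37)
```

In the limit $`\omega _0\to 0`$ the function reduces to the bare magnetic length, as it should.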
The total magnetization with an orbital contribution $`M_o`$ defined in terms of the quantum thermal average of the current density, and the spin contribution $`M_s`$ derived from the average value of the spin density is defined as
$$M_o+M_s=\frac{1}{2}_{𝐑^2}𝑑𝐫(𝐫\times 𝐉(𝐫))\widehat{𝐧}g\mu _B_{𝐑^2}𝑑𝐫\sigma _𝐳(𝐫),$$
(3)
where $`\mu _B`$ is the Bohr magneton. The equilibrium local current is evaluated as the quantum thermal average of the current operator,
$$\widehat{𝐉}=\frac{e}{2}\left(\widehat{𝐯}|𝐫𝐫|+|𝐫𝐫|\widehat{𝐯}\right),$$
(4)
with the velocity operator $`\widehat{𝐯}=[\widehat{𝐩}+(e/c)\widehat{𝐀}(𝐫)]/m^{}`$, $`\widehat{𝐀}`$ being the vector potential. For the finite electron system of a quantum dot the total magnetization can equivalently be expressed via the thermodynamic formula
$$M_o+M_s=\frac{}{B}(E_{\text{total}}TS),$$
(5)
where $`S`$ and $`E_{\text{total}}`$ are the entropy of the system and its total energy, respectively. In GaAs $`M_s`$ is a small contribution and within the HA it is a trivial one. Thus, we neglect the spin degree of freedom here, but admittedly the spin can be of paramount importance in connection with exchange effects on the orbital magnetization.
In order to check the numerical results we have verified that both definitions (3) and (5) give identical results within our numerical accuracy for $`T=1`$ K, even when the entropy term of (5) is neglected.
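The thermodynamic route of Eq. (5) lends itself to a simple finite-difference evaluation once the free energy is tabulated on a grid of magnetic fields. The sketch below is our own illustration of that step; in the paper the free energy comes from the self-consistent Hartree calculation.

```python
def magnetization_from_energy(b_values, free_energy):
    """Estimate M = -d(E_total - T*S)/dB, Eq. (5), by central differences
    from a tabulated free energy F(B) on a grid of fields b_values.
    Returns one value per interior grid point."""
    m = []
    for i in range(1, len(b_values) - 1):
        df = free_energy[i + 1] - free_energy[i - 1]
        db = b_values[i + 1] - b_values[i - 1]
        m.append(-df / db)
    return m
```

Central differences are exact for a quadratic $`F(B)`$, which makes the routine easy to validate before applying it to computed spectra.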
## III Results
In the numerical calculations we use GaAs parameters, $`m^{}=0.067m_0`$, and $`\kappa =12.4`$. Furthermore, we select the confinement frequency $`\mathrm{}\omega _0=3.37`$ meV in order to study quantum dots with few electrons in the regime where the energy scale of the Coulomb interaction is of same order of magnitude or larger than the quantization energy due to the geometry and the magnetic field.
To make clear the information about the structure of the ground state discernible in the curves of the magnetization versus the magnetic field $`B`$ we start by investigating a dot with noninteracting electrons. The magnetization of 2-5 electrons in an elliptic quantum dot is shown in Fig. 1 for different degree of deviation from a circular shape. For comparison the total energy $`E_{\text{total}}`$ for the same number of electrons, and the single-electron spectrum for a dot with an elliptical shape is presented in Fig. 2. Sharp jumps in the magnetization can be correlated with ”discontinuities“ in the derivative of $`E_{\text{total}}(B)`$ reflecting crossing of single-electron states. A jump represents a change in the electron structure of the dot. In a circular dot, or a nearly circular dot, each single-electron state can be assigned a definite quantum number $`M`$ for the angular momentum. As the magnetic field is increased the occupation of a state with a higher angular momentum is energetically favorable. The mean value of the radial electron density moves away from the center, the moment of inertia increases, and in order to conserve the total energy the equilibrium current is reduced leading to weaker magnetization. In addition to this over-simplified semiclassical picture the persistent equilibrium current, even in a circular dot, has a nontrivial structure as a function of the radial coordinate $`r`$ that can lead to the sign change of the magnetization.
The range of $`B`$ in which the effects of the geometry are strongest can be understood in the following way: For low magnetic field $`B\to 0`$ the increased elliptic shape results in a change of the curvature of some of the single-electron energy levels, i.e. those that are degenerate for circular dots. If we look at the magnetization for $`N_s=2`$ it only differs at low $`B`$ for different $`\alpha _1`$, reflecting the fact that the lowest occupied single-electron state has almost unchanged curvature for low $`B`$, but the second state is affected. In the case of three electrons the change in the curvature of the second and the third state for low $`B`$ cancels, leaving the magnetization unaffected by the change in the shape for low magnetic field. Instead, the magnetization changes with $`\alpha _1`$ around $`B\approx 1`$ T where the third and the fourth single-electron state cross. The crossing point varies with $`\alpha _1`$, shifting the location of the jump in the magnetization. In this same sense all the variation in the magnetization can be traced back to the single-electron energy spectrum, and thus the total energy.
So, what changes do we observe when we vary the degree of square deviation of a quantum dot instead of the elliptical shape? The corresponding graphs for the magnetization and the energy of square-deviated dots can be seen in Figs. 3 and 4, respectively. The square deviation does not lift the degeneracy of the second and the third single-electron energy levels at $`B=0`$, and it does not strongly shift the crossing point between the third and the fourth energy level. The magnetization for $`N_s=2`$ and $`3`$ is thus not strongly affected by an increased square shape of the quantum dot. On the other hand, the magnetization is strongly affected by the change in the shape of dots with four or five electrons. The large change in the anticrossing seen in the single-electron spectrum (at $`B\approx 2.3`$ T) with increasing $`\alpha _2`$ is clearly visible in the magnetization.
The main difference in the magnetization of a quantum dot with an elliptical or square shape comes from the fact that the elliptical deviation has nonzero matrix elements between single electron states with a dominant contribution of basis states of a circular dot satisfying $`|\mathrm{\Delta }M|=2`$, whereas the square shape can only connect states with the dominant contribution satisfying $`|\mathrm{\Delta }M|=4`$. For weak deviation the square shaped dot thus needs the occupation of more states than the elliptical one to show effects in the magnetization different from the magnetization of a circular quantum dot.
Within the Hartree approximation the essential structure of the effective single electron spectrum remains, i.e. the anticrossing typical for the square shape and the lifting of the degeneracy of the second and third energy level typical for the elliptic shape. The finer details of the spectrum depend on the number of electrons in the dot as is expected in an effective potential. The electron-electron interaction in a circular dot can not change the angular symmetry of the dot, but as soon as this symmetry is broken by the confinement potential the interaction further modifies the angular shape of the dot. In Fig. 5 and 6 the magnetization for three and four interacting electrons is compared to the magnetization of noninteracting electrons. The Coulomb repulsion between the electrons causes changes in the electron structure to occur for lower magnetic field $`B`$. It is energetically favorable for electrons to occupy states associated with higher angular momentum earlier when $`B`$ is increased in order to reduce the overlap of their wavefunctions. The jumps in the magnetization are thus shifted toward lower $`B`$.
In the noninteracting case the magnetization of three electrons in a square-shaped dot did not vary much with increased deviation from a circular shape, see Fig. 3. Quite the opposite happens for three interacting electrons in the same system, the reason being that the interaction enhances the deviation and thus the size of the anticrossing energy gap in the effective single-electron spectrum. This behavior is only observed in dots with few electrons; in larger dots the interaction generally smooths the angular shape of the dots.
## IV Summary
The calculation of the magnetization for few electrons in a quantum dot shows that it depends on the shape of the dot and the exact number of electrons present. This is in contrast to the results for large dots with many electrons, where the magnetization assumes properties reminiscent of a homogeneous 2DEG. It is certainly harder to justify the use of the Hartree approximation here than in the case of a calculation of the far-infrared absorption, where to a large degree classical modes, i.e. center-of-mass modes, dominate the excitation spectrum. We are fully aware that exchange and correlation effects will be important in the present system, but they will not qualitatively change our conclusion that magnetization measurements are ideal to investigate the many-electron structure of quantum dots.
###### Acknowledgements.
This research was supported by the Icelandic Natural Science Foundation and the University of Iceland Research Fund. In particular we acknowledge additional support from the Student Research Fund (I. M.).
|
no-problem/9909/nucl-ex9909005.html
|
ar5iv
|
text
|
# Thermalization and Flow in 158 𝐴GeV Pb+Pb Collisions
## 1 Introduction
Ultra-relativistic heavy-ion collisions produce dense matter which is expected to form, at sufficiently high energy densities, a deconfined phase of quarks and gluons, the Quark Gluon Plasma. A necessary condition for such a phase transition is local equilibrium, which might be achievable through rescattering of produced particles. Hints of thermalization can most easily be identified by studying the observables as a function of centrality. High-$`p_T`$ pion production is expected to be dominated by hard parton scattering but has recently been shown to also be explainable by a thermal model with hydrodynamic expansion. The comparison of the WA98 $`\pi ^0`$ data with hydrodynamic models provides constraints on the partition of excitation energy in terms of temperature and an average flow velocity. Such a finite thermalized system without any external pressure will necessarily expand, and the thermal pressure will generate collective motion which will be reflected in the momentum spectra of the final hadrons. Thus a part of the thermal excitation energy will be converted into collective motion of the hadrons.
## 2 Scaling of Global Variables
There seem to be qualitative changes in the behaviour of heavy-ion reactions once a certain system size is attained. Strangeness production is enhanced in S+S reactions compared to p+p, but seems to saturate for even larger nuclei. Recent results from the WA98 experiment show a significant change in the shape of the $`\pi ^0`$ $`p_T`$ spectrum in peripheral Pb+Pb collisions compared to p+p data. However, from semi-central Pb+Pb reactions with about 50 participating nucleons up to the most central reactions the shape remains unchanged. In the present analysis the centrality has been selected with the transverse energy $`E_T`$ measured in the calorimeter MIRAC. Fig. 1 shows the scaling behaviour of the charged particle multiplicity $`dN_{ch}/d\eta `$ with the number of participants $`N_{part}`$. It can be seen that $`dN_{ch}/d\eta `$ follows a power law in the number of participants. The exponent extracted from the data is $`\alpha =1.07`$. The bottom part of Fig. 1 displays the same analysis performed with VENUS 4.12. While the simulation result also roughly obeys a power-law scaling, the agreement is not as good, and the scaling exponent appears to be significantly larger than that obtained from the experimental data.
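A scaling exponent of this kind is typically extracted by a power-law fit. The following minimal sketch (our own illustration; the data below are synthetic, not the WA98 points) performs such a fit by linear regression in log-log space.

```python
import math

def fit_power_law(n_part, dn_deta):
    """Least-squares fit of dN_ch/deta = c * N_part**alpha in log-log space,
    returning the exponent alpha and prefactor c."""
    xs = [math.log(n) for n in n_part]
    ys = [math.log(d) for d in dn_deta]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    alpha = sxy / sxx                  # slope in log-log space = exponent
    c = math.exp(my - alpha * mx)      # intercept gives the prefactor
    return alpha, c

# Synthetic data generated with alpha = 1.07 and an arbitrary prefactor.
n_part = [50.0, 100.0, 200.0, 300.0, 400.0]
dn_deta = [2.5 * n ** 1.07 for n in n_part]
alpha, c = fit_power_law(n_part, dn_deta)
```

On noiseless synthetic data the fit recovers the generating exponent exactly; on measured points the same regression yields the quoted best-fit value.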
## 3 Neutral Pion Spectra
The neutral meson spectra are mainly influenced by thermal and chemical freeze-out in the final state. In the analysis of central reactions of Pb+Pb at 158 $`A`$GeV it is seen that both predictions of perturbative QCD and hydrodynamical parameterizations can describe the measured neutral pion spectra very well. It is particularly astonishing to observe that on the one hand the pQCD calculation gives a good description also at relatively low momenta while on the other hand the hydrodynamical parameterization would yield a sizable contribution even at very high momenta.
Fig. 2a shows a comparison of the neutral pion spectra to a fit of a hydrodynamical parameterization including transverse flow and resonance decays . Using the default Gaussian profile, the best fit is obtained with $`T=185\mathrm{MeV}`$ and $`\beta _T=0.213`$. Fig. 2b shows the best fit parameters as a filled circle; the corresponding $`2\sigma `$ contour is also shown. The figure also contains the $`1\sigma `$ allowed region from the $`m_T`$ dependence of the transverse radii extracted by negative pion interferometry with the WA98 negative tracking arm . The interferometry constraints are very similar to those given in ; they favour relatively large transverse flow velocities. Such large velocities are only compatible with the neutral pion spectra if one assumes a very different spatial profile. However, this would result in rather low temperatures; these parameters are thus very sensitive to the profile used.
## 4 Collective Flow
If the initial state of the evolution is azimuthally asymmetric, as in semi-central heavy-ion collisions, this property will be reflected in the azimuthal asymmetry of the final-state particle distributions. The strength of the collective flow will yield information on the nuclear equation of state during the expansion. Collective flow development follows the time evolution of pressure gradients in the hot, dense matter. Thus, collective flow can serve as a probe providing information on the initial state and on the extent to which the reaction zone might be thermalized. In particular, the formation of a Quark Gluon Plasma during the early stages of the collision is expected to result in reduced pressure gradients due to a softer nuclear equation of state, which results in reduced collective motion.
Since the Plastic Ball Detector in the WA98 experiment is azimuthally symmetric, it is ideally suited for the analysis of azimuthal anisotropies.
Fig. 4 shows the centrality dependence of the directed flow in terms of the average transverse momentum $`p_x`$. For protons the maximum directed flow is observed in reactions of intermediate centrality. The corresponding impact parameter of $`b\approx 8\mathrm{f}\mathrm{m}`$ is twice as large as that observed at AGS energies. Since the observed $`p_x`$ of pions is positive, the pions are preferentially emitted away from the target spectators; this is called anti-flow. The rapidity dependence of the directed flow is given in fig. 4. In addition, pion data measured with the tracking arm of the WA98 experiment at midrapidity and data near midrapidity measured by the NA49 collaboration are shown. The maximum flow is observed in the fragmentation regions, while it rapidly decreases near midrapidity. The data follow a Gaussian distribution.
Hence for a complete description of the rapidity distribution of the collective flow the formerly used slope at midrapidity (e.g. $`dp_x/dy|_{y=0}`$) is not sufficient. It is more reasonable to use the three parameters of the Gaussian distribution to describe the data. The peak position reflects the beam momentum, the peak height gives the strength of the flow and the width of the distribution provides information on how much the participants and the spectators are involved in the collectivity.
|
no-problem/9909/cond-mat9909092.html
|
ar5iv
|
text
|
# Hysteresis and Spikes in the Quantum Hall Effect
## Abstract
We observe sharp peaks and strong hysteresis in the electronic transport of a two-dimensional electron gas (2DEG) in the region of the integral quantum Hall effect. The peaks decay on time scales ranging from several minutes to more than an hour. Momentary grounding of some of the contacts can vastly modify the strength of the peaks. All these features disappear under application of a negative bias voltage to the backside of the specimen. We conclude, that a conduction channel parallel to the high mobility 2DEG is the origin for the peaks and their hysteretic behavior.
The hallmarks of the integral and fractional quantum Hall effects are wide regions of vanishing magnetoresistance and wide plateaus in Hall resistance. These features are centered around magnetic fields that correspond to integral or fractional fillings of Landau levels of a two-dimensional electron gas (2DEG). Their origin is quantization of the electron orbits into Landau levels and the formation of localized states in a real, slightly disordered 2DEG in the presence of a high magnetic field, B. Electrons in the localized states provide a reservoir, which is in equilibrium with the current-carrying, delocalized states and keeps their occupation constant over wide stretches of B. While in the integral quantum Hall effect (IQHE) the ingredients of this picture are of single-particle origin, they are of many-particle origin in the fractional quantum Hall effect (FQHE).
Two recent experiments have observed hysteretic behavior and/or peak formation in electronic transport of 2DEG in the regime of the FQHE. Minor discrepancies in data taken on opposite field sweeps are common and are usually attributed to the large inductance of the magnet and the resulting time delays or to slight temperature differences between both sweeps. However, the recently observed effects are very large. Kronmueller *et al.* observed the appearance of a huge spike at the position of the $`\nu =2/3`$ minimum when the magnetic field is ramped very slowly. The time scale for development of this feature can be as long as hours, which suggested the involvement of nuclear spins in its creation. Cho *et al.* have observed hysteretic behavior in resistance traces taken on several fractions around filling factor $`\nu =1/2`$ and attribute it to non-equilibrium phases of composite fermions in this regime. The origin of these observations remains puzzling and the nature of the underlying non-equilibria remains unclear.
We have observed strong hysteresis and the formation of sharp peaks in magneto-transport experiments on 2DEGs in quantum wells in the regime of the IQHE at temperatures of $`\approx `$0.1K. To our recollection, we have never observed such features in a traditional single heterojunction interface sample. The characteristic decay times for the sharp peaks can be as long as several hours. Their lifetime can be strongly altered by momentary grounding of the contacts. Hysteresis and spikes disappear on application of a voltage bias to an electrode on the back side of the specimen. We conclude that the origin of hysteretic behavior and spike formation in our samples is the result of a non-equilibrium charge distribution, which arises due to the coexistence and dynamic exchange of electrons between the high-mobility 2DEG and a low-mobility parallel conduction channel in the vicinity of the doping layer.
Our samples are modulation-doped GaAs/AlGaAs quantum well structures grown by molecular-beam epitaxy (MBE). A high density 2DEG resides in a 300Å wide quantum well 2000Å below the surface. The well is $`\delta `$-doped on both sides with silicon impurities at a distance of 950Å in sample A and 750Å in sample B. Samples are cleaved into 4mm$`\times `$4mm squares and eight indium contacts are diffused at the corners and the middle of the edges. Transport measurements are performed using standard lock-in techniques in a dilution refrigerator with a base temperature of 70mK. A 100nA current is used in most of the measurements. Both samples have mobilities higher than $`13\times 10^6cm^2`$/Vsec. The density is $`2.3\times 10^{11}cm^{-2}`$ in sample A and $`3.2\times 10^{11}cm^{-2}`$ in sample B.
As an example of the hysteresis and resistance spikes that arise at many IQHE positions, we show in Fig.1 the magnetoresistance, $`R_{xx}`$, of sample A in the vicinity of filling factor $`\nu =3`$, measured at a slow sweep rate of 0.05T/min. Solid and dash-dotted lines represent traces taken on upward and downward field sweeps, respectively. Both traces largely reproduce, although there is a slight discrepancy in the position of the high-field flank. Most remarkably, however, a sharp spike appears in the central part of the $`\nu =3`$ minimum. This peak is only $`\approx `$0.01T wide, comparable in height to the surrounding $`R_{xx}`$, and it is completely missing on the up-sweep.
Hysteretic resistance peaks similar to the one observed around $`\nu =3`$ occur in our sample at the positions of most resolved integral filling factors. Fig.2 shows an extended $`R_{xx}`$ trace of sample A. A superposition of two oscillations is evident. A set of sharper Shubnikov-de Haas (SdH) oscillations is superimposed on a slowly oscillating background. In spite of this complication, we can clearly identify the positions of the IQHE minima in the SdH oscillations, which we have labeled $`\nu =3`$ through 9 (see also inset on an expanded scale). Similar to the occurrence of the sharp peak in the $`\nu =3`$ minimum, such spikes are also present at these higher filling factors (see circles), most notably at $`\nu =5`$, where the spike dominates over any other features of this trace. The details of the hysteresis vary. The peaks appear on the up-sweep ($`\nu =9,7,6,5`$) or on the down-sweep ($`\nu =3`$) and in some cases in both directions, but with very different amplitudes ($`\nu =8`$). Instead of being absent in one direction, the previous peak position is sometimes marked by a small dip. At fields higher than that of $`\nu =3`$ and up to the highest field of 14T (not shown), traces from both field sweeps largely reproduce and there are no further spikes observed. In this high-field regime all the regular features of the IQHE and FQHE are well developed.
Sample B behaves similarly to sample A, despite the difference in electron density. The shapes of the low field envelopes in both samples resemble one another. Their maxima and minima occur roughly at the same field values. The backgrounds in both samples reach their highest values near 2.5T and gradually disappear above it. In sample A, the region of most pronounced hysteresis occurs for $`9\ge \nu \ge 3`$ while in sample B, this occurs for $`12\ge \nu \ge 4`$.
To examine the time dependence of the peaks, we swept the field slowly to their maxima and stopped at the summit. The inset of Fig.1 shows the subsequent time evolution of the peak at $`\nu =5`$. After a rapid drop the decay becomes exponential with a time constant of $`\tau \approx `$2.7min. The time constant is sample-dependent, $`\nu `$-dependent and depends on the contact configuration. The typical value for sample A is a few minutes and for sample B, a few hours, with a maximum of $`\approx `$10 hours. These are enormously long time scales for the decay of an electrical resistance in a 2DEG.
We made a critical observation during such decay experiments in sample B. A quick grounding and subsequent un-grounding of some of the contacts led to a dramatic decrease of the amplitude of the peaks. Although this observation was not reproducible during all cool-downs, it points clearly to the existence of some non-uniform, non-equilibrium configuration, that can be equilibrated by the redistribution of charge within the specimen.
To characterize peak creation and decay we performed several additional experiments. The amplitudes of the peaks increase with increasing sweep rate, but much less than proportional. They increase by less than a factor of two when the sweep rate is raised twenty times from 0.02T/min to 0.4T/min. The decay of the peaks with time probably accounts for the small difference in amplitude. Raising the temperature weakens the observed hysteresis and increases the background, while the amplitudes of the peaks and dips shrink. The hysteresis at $`\nu =5`$ in sample A disappears at about 400mK. Sample B shows similar behavior. Both AC and DC current excitations were used to conduct the experiments and data from both largely resemble each other. The data remain essentially independent of AC current amplitude from 10nA to 1$`\mu `$A.
The oscillatory background in the low field data strongly suggests the existence of a conduction path in parallel to the 2DEG channel. The occupation of a second subband in the quantum well can safely be ruled out on the basis of a simple calculation. Another possible source for a parallel current path is the silicon modulation-doping layers on both sides of the quantum well. A fraction of the electrons can remain at the site of this layer and provide a conducting path in parallel to the 2DEG. At high enough mobility, such an impurity channel can exhibit its own magneto-transport, superimposed on $`R_{xx}`$ from the 2DEG.
To investigate the origin of the parallel channel, we applied a voltage, $`V_g`$, to a backside electrode (gate) placed under the substrate. Fig.3 shows the magnetoresistance of sample A at different backgate biases. Here we have chosen a current direction perpendicular to the one used in Figs.1 and 2. Although the hysteresis is less pronounced in this current configuration, several hysteretic peaks are clearly visible in panel (a), at zero bias. A negative voltage of -50V on the backgate across the 0.5mm thick substrate has a dramatic effect on the trace in panel (b). While the $`R_{xx}`$ background weakens, the previously small hysteresis peaks grow enormously in amplitude and the sharp spikes at $`\nu =4,5`$ and 6 dominate the graph. Uniformly, peaks occur on the down-sweep, while the up-sweeps either show the customary IQHE minima ($`\nu =4,5`$) or sharp downward cusps that approach vanishing $`R_{xx}`$ ($`\nu =6,7,8`$). The last and cleanest hysteretic peak has moved from its previous position at $`\nu =3`$ in panel (a) to $`\nu =4`$ in panel (b). In general, with increasing back bias, we see such a progression from higher to lower magnetic field. Eventually, all hysteresis effects disappear in panel (c) at a backgate bias of -150V. The specimen shows a clean $`R_{xx}`$ behavior as is customary for very high mobility samples. Not only is the fragile state at $`\nu =7/2`$ visible, but the data also show the recently discovered anisotropic states at $`\nu =9/2`$ and 11/2, manifested by deep minima in Fig.3 (c) and the well-documented clear maxima in Fig.2.
Backgate bias does not change the electron density in the 2DEG, as is evident from the stationary B-positions of its IQHE and FQHE features. The oscillations of the background, on the other hand, are steadily moving to lower field. Sample B behaves similarly to sample A, and its data resemble those of panel (c) at a higher bias of -170V. The backgate bias experiments on both samples provide strong evidence for the existence of a parallel conducting path in the form of a two-dimensional impurity channel (2DIC) on the substrate side of the quantum well and for its role in the appearance of the background as well as the hysteretic spikes. The symmetric doping channel on the top side of the sample does not provide such a parallel channel, probably due to depletion by the nearby surface of the sample.
From the shift of the minimum in the background at $`\approx `$0.95T in Fig.3 (a) with increasing $`V_g`$ and a gate capacitance of $`C_g\approx 22pF/cm^2`$ we derive an initial density of $`n_{2DIC}\approx 5.7\times 10^{10}cm^{-2}`$ in the impurity channel. This identifies the 0.95T-minimum in Fig.2 as the $`\nu =2`$ IQHE of the 2DIC. The other minima in the background follow quite well the usual 1/B sequence, with the strong minimum at B$`\approx `$1.9T representing $`\nu =1`$. This indicates at least a moderate mobility for this 2DIC, since spin-splitting is just resolved. At -150V the density of the 2DIC has been depleted to $`3.0\times 10^{10}cm^{-2}`$ and has probably fallen below the conduction limit of such a disordered channel. Therefore, parallel conduction has vanished and a clean $`R_{xx}`$ trace is observed.
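These numbers can be cross-checked with a simple parallel-plate estimate (a sketch, not the authors' analysis; the GaAs dielectric constant $`ϵ_r\approx 12.9`$ is an assumed input):

```python
# Backgate capacitance across the 0.5 mm GaAs substrate and the
# 2DIC density removed by a -150 V bias: delta_n = C_g * |V_g| / e.
EPS0 = 8.854e-12        # F/m, vacuum permittivity
EPS_GAAS = 12.9         # assumed GaAs dielectric constant
E_CHARGE = 1.602e-19    # C, elementary charge

d = 0.5e-3                                # substrate thickness, m
c_per_cm2 = EPS0 * EPS_GAAS / d / 1e4     # F/cm^2
print(c_per_cm2 * 1e12)                   # ~22 pF/cm^2, as quoted

delta_n = c_per_cm2 * 150.0 / E_CHARGE    # cm^-2 removed at -150 V
print(delta_n)                            # ~2e10 cm^-2, comparable to n_2DIC
```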
In the remainder of the paper, we develop a model that can account for many of our observations. We regard our system as consisting of two parallel sheets of conductor, a high-mobility 2DEG and a low-mobility 2DIC. Both are connected via eight contacts at the perimeter of the specimen and are coupled by a mutual capacitance of $`C=120nF/cm^2`$. At any given magnetic field a complex current distribution emerges. The situation with the 2DEG in a quantum Hall state is shown as an inset to Fig.3. We do not differentiate between resistance, R, and resistivity, $`\rho `$, since both differ only by a factor of order unity in our square sample. Following the values of the Hall resistances, the total current $`I^{tot}`$ divides between both layers according to their density ratio, $`n^{2DIC}/n^{2DEG}`$. Therefore, about 1/5 of the total current is flowing through the 2DIC. In addition, due to the different resistivities of the 2DEG and the 2DIC, an interlayer current $`I^{int}`$ is induced, which contributes to the voltage $`V_{xx}`$. Whenever the 2DEG is in the IQHE, a simple calculation shows that the measured $`R_{xx}=V_{xx}/I^{tot}`$ simply reflects $`\rho _{xx}^{2DIC}`$, attenuated by the square of the ratio of electron densities in both layers. From a value of $`100\mathrm{\Omega }`$ for the smoothly varying background in Fig.2, we deduce $`\rho _{xx}^{2DIC}\approx `$2.5k$`\mathrm{\Omega }`$. This establishes the parallel channel as a moderately good conductor.
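The estimate of $`\rho _{xx}^{2DIC}`$ follows directly from this two-layer picture (a sketch using the density ratio of about 1/5 quoted above):

```python
# With the 2DEG in a quantum Hall state, the measured R_xx reflects
# rho_xx of the impurity channel attenuated by the squared density
# ratio: R_xx ~ rho_2dic * (n_2dic / n_2deg)**2.
n_ratio = 1.0 / 5.0         # n_2DIC / n_2DEG (from the text)
r_xx_measured = 100.0       # ohm, smooth background level in Fig. 2

rho_2dic = r_xx_measured / n_ratio**2
print(rho_2dic)             # 2500 ohm, i.e. ~2.5 kOhm
```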
As the magnetic field is swept, the Fermi levels of both layers oscillate due to Landau quantization, creating an oscillating imbalance in the chemical potential between the layers. The resulting potential difference is particularly drastic in the regime of the IQHE of the 2DEG, where its Fermi energy changes abruptly by $`\mathrm{\hbar }\omega _c`$. To keep the Fermi levels in equilibrium, a charge $`Q\approx C\mathrm{\hbar }\omega _c/e`$ needs to be transferred between the layers. At B$`\approx 3`$T, $`Q\approx 4\times 10^9e/cm^2`$ or $`\approx 10\%`$ of the charge of the 2DIC. The relevant series conductance to charge or discharge the 2DEG sheet from its edge, where the contacts reside, is its diagonal conductivity $`\sigma _{xx}`$. In the regime of the IQHE this parameter tends toward zero. The resulting RC time constant for equilibration can assume values as long as hours if $`\sigma _{xx}\approx 10^{-11}\mathrm{\Omega }^{-1}`$, which is not uncommon in a high mobility 2DEG sample.
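The orders of magnitude quoted here can be reproduced numerically (a sketch; the GaAs effective mass $`m^{}\approx 0.067m_e`$ is assumed, and the series resistance is taken crudely as $`1/\sigma _{xx}`$):

```python
# Charge per cm^2 needed to realign the Fermi levels when the 2DEG
# Fermi energy jumps by hbar*omega_c, and the resulting RC time.
HBAR = 1.055e-34        # J s
E_CHARGE = 1.602e-19    # C
M_E = 9.109e-31         # kg, free electron mass
m_star = 0.067 * M_E    # assumed GaAs effective mass

b_field = 3.0                              # tesla
omega_c = E_CHARGE * b_field / m_star      # cyclotron frequency, rad/s
c_per_cm2 = 120e-9                         # F/cm^2, interlayer capacitance
q_density = c_per_cm2 * HBAR * omega_c / E_CHARGE   # C/cm^2
print(q_density / E_CHARGE)                # ~4e9 electrons/cm^2

sigma_xx = 1e-11                           # ohm^-1, deep in the IQHE
area = 0.4 * 0.4                           # cm^2 (4 mm x 4 mm sample)
rc_time = (1.0 / sigma_xx) * c_per_cm2 * area
print(rc_time / 60.0)                      # ~30 min; longer for smaller sigma_xx
```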
The combination of a finite field sweep rate and a long time constant gives rise to a density inhomogeneity in the 2DEG. While the edges of the 2DEG and the 2DIC are quickly equilibrated, the center of the 2DEG lags far behind and maintains a higher or lower electron density depending on the field sweep direction. The resulting radial density gradient in the 2DEG is imprinted with opposite sign onto the 2DIC due to the electrostatic interaction between both layers. Since the diagonal resistivity, $`\rho _{xx}^{2DIC}`$, in the low density, disordered 2DIC depends strongly on electron density, the sudden change of the Fermi level in the 2DEG and the resulting density gradient in the 2DIC can abruptly alter the local $`\rho _{xx}^{2DIC}`$ and hence the current pattern. This leads to a non-equilibrium $`V_{xx}`$ which is observed in the experiment as a time-dependent spike. In particular, if the electron system in the 2DIC is near one of the metal-insulator transitions, as they arise close to the edge of a Landau level, the change in $`\rho _{xx}^{2DIC}`$ and hence in $`V_{xx}`$ can be dramatic. This explains why exceedingly sharp spikes always occur on a very low or completely absent background such as at $`\nu =3,5`$ in Fig.2. It also accounts for the enormous growth of the spikes with decreasing carrier density in the 2DIC in Fig.3 (b). To predict the direction a particular spike is pointing (up or down) requires knowledge of the transient non-equilibrium current distribution. This pattern can be very complex since it depends on the local $`\rho _{xx}^{2DIC}`$, which is sensitive to the position of the Fermi level with respect to the density of states in the disordered 2DIC. The resulting current pattern is difficult to assess in detail.
The characteristic time of the phenomenon is the RC time of the electric discharge between 2DEG and 2DIC. However, it is not the actual discharge current that is observed in $`V_{xx}`$. There is simply insufficient charge in the capacitor to account for the observed, minute-long interlayer current. What is rather observed is the influence of the induced non-equilibrium charge distribution in the 2DEG on the resistivity pattern in the 2DIC and the resulting transient redistribution of currents in the specimen. The narrowness of the spikes is a result of the narrowness of the regions within the IQHE over which $`\sigma _{xx}`$ takes on sufficiently low values to create sufficiently long RC times to be observable on the time scale of our experiment. Outside of these narrow regions of exceedingly small $`\sigma _{xx}`$, charge transfer between 2DEG and 2DIC happens rapidly, both layers maintain equilibrium and $`V_{xx}`$ is time independent. Raising the temperature increases $`\sigma _{xx}`$; therefore, the peaks disappear at high temperature. The long decay times of the peaks are a direct result of the long RC times. Our model also explains the hysteresis of the spikes. Obviously, opposite field sweeps generate radial density gradients of opposite signs which lead to different current patterns and hence hysteretic behavior.
In conclusion, the strong spikes and large hysteresis in the magnetoresistance of our 2DEG in the regime of the IQHE are the result of a non-equilibrium charge distribution, caused by the long RC times required to modify the electron density of the 2DEG in the IQHE regime. The origin of these spikes is a parallel impurity channel. We can explain our observations as resulting from the capacitive coupling between the 2DEG and this neighboring impurity channel and the time-dependent current distribution it creates.
# Simulations of Quantum Logic Operations in Quantum Computer with Large Number of Qubits
G.P. Berman<sup>1</sup>, G.D. Doolen<sup>1</sup>, G.V. López<sup>2</sup>, and V.I. Tsifrinovich<sup>3</sup>
<sup>1</sup>Theoretical Division and CNLS, Los Alamos National Laboratory, Los Alamos,
New Mexico 87545
<sup>2</sup> Departamento de Física, Universidad de Guadalajara, Corregidora 500, S.R. 44420, Guadalajara, Jalisco, México
<sup>3</sup>Department of Physics, Polytechnic University, Six Metrotech Center, Brooklyn NY 11201
## Abstract
We report the first simulations of the dynamics of quantum logic operations with a large number of qubits (up to 1000). A nuclear spin chain in which selective excitation of spins is provided by the gradient of the external magnetic field is considered. The spins interact with their nearest neighbors. We simulate the quantum CONTROL-NOT (CN) gate implementation for remote qubits, which provides long-distance entanglement. Our approach can be applied to any implementation of quantum logic gates involving a large number of qubits.
1. Introduction
The field of quantum computation has achieved three important milestones: the first quantum algorithm, the first error correction code, and the first experimental implementation of quantum logic. The next promising important step is the implementation of quantum logic in solid-state systems with a large number of qubits, say 1000 qubits. It is not clear which system will be the most feasible for quantum computation: nuclear spins, electron spins, quantum dots, or Josephson junctions. For all of these implementations, the design of a quantum computer requires simulations of the quantum computation dynamics on a conventional digital computer to test the quantum computer experimental devices.
To simulate a general quantum computation involving $`N`$ qubits, one must solve time-dependent equations involving all $`2^N`$ states of a quantum computer. Generally, this problem cannot be solved on a digital computer. However, it is possible to simulate quantum logic operations which involve a limited number of states. These simulations can give insight into the dynamical properties of a quantum computer. Simulations of experimental implementations of quantum logic operations can explore the advantages and disadvantages of experimental devices long before they are built. In this paper, we report the first simulation of quantum logic for a large number of qubits (up to 1000). In Sec. 2, we describe the nuclear spin quantum computer which we simulate. In Sec. 3, we consider the Hamiltonian of the nuclear spin chain and the equations of motion for the amplitudes of the quantum states. In Sec. 4, we discuss resonant and non-resonant transitions in the spin chain under the action of radio-frequency (rf) pulses. In Sec. 5, we describe a quantum CN gate which entangles the two qubits at opposite ends of the spin chain. In Sec. 6, we give an analytical analysis of the CN gate. In Sec. 7, the results of our simulations are presented. In the Conclusion we summarize our results.
2. Nuclear spin quantum computer
We consider a chain of identical nuclear spins placed in a high external magnetic field, $`B_0`$ (Fig. 1). We suppose that these spins are initially polarized along the direction of the external field ($`z`$-direction). The NMR frequency is $`f_0=(\gamma /2\pi )B_0`$, where $`\gamma `$ is the nuclear gyromagnetic ratio. For example, for a proton in the field $`B_0=10`$T, one has the NMR frequency $`f_0\approx 430`$MHz.
Next, we suppose that the external magnetic field is slightly non-uniform, $`B_0=B_0(z)`$. Suppose that the frequency difference of two neighboring spins is $`\mathrm{\Delta }f\approx 10`$kHz. Thus, if the frequency of the edge spin is $`430`$MHz, the frequency of the other edge spin is $`440`$MHz. Then, the value of $`B_0`$ increases by $`\mathrm{\Delta }B_0=0.23`$T along the spin chain. Taking the distance between the neighboring spins, $`a\approx 2\AA `$, we obtain the value for the gradient of the magnetic field, $`|\partial B_0/\partial z|\approx 0.23\mathrm{T}/(1000a\mathrm{cos}\theta )`$, where $`\theta `$ is the angle between the direction of the chain and the $`z`$-axis (Fig. 1). Below we will take $`\mathrm{cos}\theta =1/\sqrt{3}`$. Thus, the gradient of the magnetic field is $`|\partial B_0/\partial z|\approx 2\times 10^6`$T/m.
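The field scales above are easy to verify (a sketch; the proton gyromagnetic ratio $`\gamma /2\pi \approx 42.58`$ MHz/T is an assumed input):

```python
# Total field variation for 10 kHz per-neighbor detuning over 1000
# spins, and the implied gradient for a = 2 Angstrom spacing.
GAMMA_OVER_2PI = 42.58e6    # Hz/T, proton value (assumed)

n_spins = 1000
df = 10e3                   # Hz, neighbor frequency difference
a = 2e-10                   # m, inter-spin distance
cos_theta = 1.0 / 3**0.5    # chain tilted to the magic angle

delta_b = n_spins * df / GAMMA_OVER_2PI            # ~0.23 T
gradient = delta_b / (n_spins * a * cos_theta)     # ~2e6 T/m
print(delta_b, gradient)
```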
Next, we discuss the interaction between spins. In a large external magnetic field, $`B_0`$, the stationary states of the chain can be described as a combination of individual states of nuclear spins, for example,
$$|00\mathrm{\cdots }00,|00\mathrm{\cdots }01,$$
and so on, where the state $`|0`$ corresponds to the direction of a nuclear spin along the direction of the magnetic field and the state $`|1`$ to the opposite direction. The magnetic dipole field on nucleus $`j`$ in any stationary state is much less than the external field. So, only the $`z`$-component of the dipole field, $`B_{dz}=B_{dz}(j)`$, affects the energy spectrum,
$$B_{dz}(j)=\underset{k=0}{\overset{N-1}{\sum }}\frac{3\mathrm{cos}^2\theta -1}{r_{kj}^3}\mu _{kz},(k\ne j),$$
$`(1)`$
where $`\mu _{kz}`$ is the $`z`$-component of the nuclear magnetic moment, $`r_{kj}`$ is the distance between the nuclei $`k`$ and $`j`$. To suppress the dipole interaction, one should choose the angle $`\theta \approx 54.7^o`$, for which $`\mathrm{cos}\theta =1/\sqrt{3}`$. Then, for any stationary state, the $`z`$-component of the dipole field disappears.
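The magic-angle cancellation of the angular factor in Eq. (1) can be checked directly:

```python
import math

# The angular factor (3*cos^2(theta) - 1) of the dipole field
# vanishes when cos(theta) = 1/sqrt(3), i.e. theta ~ 54.7 degrees.
theta = math.acos(1.0 / math.sqrt(3.0))
angular_factor = 3.0 * math.cos(theta) ** 2 - 1.0
print(math.degrees(theta))     # ~54.7
print(angular_factor)          # ~0 (to rounding)
```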
We assume that the main interaction between the nuclear spins (when the dipole interaction is suppressed) is an Ising-type interaction mediated by chemical bonds. This situation is observed in liquids, where the dipole-dipole interaction is suppressed by the rotational motion of the molecules. Nuclear spins in liquids were used for quantum computations involving a small number of qubits.
3. The Hamiltonian and equations of motion
The Hamiltonian for the chain of spins considered can be written in the form,

$$\mathcal{H}=-\underset{k=0}{\overset{N-1}{\sum }}\omega _kI_k^z-2J\underset{k=0}{\overset{N-2}{\sum }}I_k^zI_{k+1}^z+V,$$
$`(2)`$
where $`\omega _k`$ is the Larmor frequency of the $`k`$-th spin (neglecting interactions between spins), $`\omega _k=\gamma B_0(z_k)`$, $`J`$ is the constant of the Ising interaction, $`I_k^z`$ is the operator of the $`z`$-component of spin 1/2, the operator, $`V`$, describes the interaction with pulses of the rf field, $`z_k`$ is the $`z`$-coordinate of the $`k`$-th spin, and we set $`\mathrm{\hbar }=1`$.
Below, we assume that the characteristic values of the parameters in the Hamiltonian (2) are,
$$\omega _k/2\pi =f_0+k\mathrm{\Delta }f,f_0\approx 430\mathrm{MHz},\mathrm{\Delta }f\approx 10\mathrm{kHz},J/2\pi \approx 100\mathrm{Hz},N\approx 1000.$$
$`(3)`$
The operator $`V`$ for the $`n`$-th rf pulse can be written,

$$V=\frac{\mathrm{\Omega }^{(n)}}{2}\underset{k=0}{\overset{N-1}{\sum }}\left[I_k^{-}\mathrm{exp}(-i\omega ^{(n)}t)+I_k^{+}\mathrm{exp}(i\omega ^{(n)}t)\right],$$
$`(4)`$
where $`\mathrm{\Omega }^{(n)}`$ is the Rabi frequency of the $`n`$-th pulse, $`I_k^\pm =I_k^x\pm iI_k^y`$, and $`\omega ^{(n)}`$ is the frequency of the $`n`$-th pulse. We choose the value of $`\mathrm{\Omega }\approx 0.1`$J. (We assume that the rf field is circularly polarized in the $`xy`$ plane.)
In the interaction representation, the wave function, $`\mathrm{\Psi }`$, of the spin chain can be written as,
$$\mathrm{\Psi }=\underset{p}{\sum }C_p|p\mathrm{exp}(-iE_pt),$$
where $`E_p`$ is the energy of the state $`|p`$. Substituting the expression for the wave function $`\mathrm{\Psi }`$ into the Schrödinger equation, we obtain the equation of motion for the amplitude $`C_p`$,
$$i\dot{C}_p=\underset{m=0}{\overset{2^N-1}{\sum }}C_mV_{pm}^{(n)}\mathrm{exp}[i(E_p-E_m)t+ir_{pm}\omega ^{(n)}t],$$
$`(5)`$
where $`r_{pm}=\mp 1`$ for $`E_p>E_m`$ and $`E_p<E_m`$, respectively, $`V_{pm}^{(n)}=\mathrm{\Omega }^{(n)}/2`$ for the states $`|p`$ and $`|m`$ which are connected by a single-spin transition, and $`V_{pm}^{(n)}=0`$ for all other states.
4. Resonant and non-resonant transitions
As the number of spins, $`N`$, increases, the number of states increases exponentially, but the number of resonant frequencies in our system is $`3N-2`$ because only single-spin transitions are allowed by the operator $`V`$ in (4). The resonant frequencies of our spin chain are,
$$\omega _k\pm J,(k=0\mathrm{\ or\ }k=N-1),$$

$`(6)`$

$$\omega _k,\omega _k\pm 2J,(1\le k\le N-2).$$
For edge spins with $`k=0`$ and $`k=N-1`$, the upper and lower signs correspond to the states $`|0`$ or $`|1`$ of the only neighboring spin. For inner spins with $`1\le k\le N-2`$ the frequency $`\omega _k`$ corresponds to having nearest neighbors whose spins point in opposite directions to each other. The “+” sign corresponds to having the nearest neighbors in their ground state. The “-” sign corresponds to having the nearest neighbors in their excited state.
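A small helper can enumerate these resonance conditions and confirm the count of resonant frequencies (a sketch; frequencies are represented directly as numbers):

```python
def resonant_frequencies(omega, j):
    """All single-spin transition frequencies of the Ising chain:
    edge spins contribute omega_k +/- J, inner spins contribute
    omega_k and omega_k +/- 2J, depending on the neighbor states."""
    n = len(omega)
    freqs = set()
    for k in range(n):
        if k == 0 or k == n - 1:
            freqs.update({omega[k] + j, omega[k] - j})
        else:
            freqs.update({omega[k], omega[k] + 2 * j, omega[k] - 2 * j})
    return freqs

# With well-separated Larmor frequencies (2*pi*df >> J) no two
# conditions coincide, so the count is 2*2 + 3*(N - 2) = 3N - 2.
two_pi = 6.283185307179586
omega = [two_pi * (430e6 + k * 10e3) for k in range(10)]
count = len(resonant_frequencies(omega, two_pi * 100.0))
print(count)    # 28 = 3*10 - 2
```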
Now we consider any basic stationary state,
$$|q_{N-1}q_{N-2}\mathrm{\cdots }q_1q_0,$$
$`(7)`$
where the subscript indicates the position of the spin in a chain, and $`q_k=0,1`$. If one applies to the spin chain a resonant rf pulse of a frequency, $`\omega `$, from (6) one has two possibilities:
1). The frequency of the pulse $`\omega `$ is the resonant frequency of the state (7).
2). The frequency $`\omega `$ differs from the closest resonant frequency of the state (7) by the value $`2J`$ or $`4J`$.
In the first case, one has a resonant transition. For the second case, one has a non-resonant transition. If $`J\ll 2\pi \mathrm{\Delta }f`$ we can neglect all other non-resonant transitions for the state (7). Below, we will write a rigorous condition for $`\mathrm{\Delta }f`$ which is required in order to neglect all other non-resonant transitions.
Thus, considering the transformation of any basic state under the action of an rf pulse with a frequency, $`\omega `$, from (6), we should take into consideration only one transition. This transition will be either a resonant one or a non-resonant one with the frequency difference $`2J`$ or $`4J`$.
This allows us to simplify equations (5) to the set of two coupled equations,
$$i\dot{C}_p=(\mathrm{\Omega }^{(n)}/2)\mathrm{exp}[i(E_p-E_m-\omega ^{(n)})t]C_m,$$

$`(8)`$

$$i\dot{C}_m=(\mathrm{\Omega }^{(n)}/2)\mathrm{exp}[i(E_m-E_p+\omega ^{(n)})t]C_p,$$
where $`E_p>E_m`$, $`|p`$ and $`|m`$ are any two stationary states which are connected by a single-spin transition and whose energies differ by $`\omega ^{(n)}`$ or $`\omega ^{(n)}\pm 2J`$ or $`\omega ^{(n)}\pm 4J`$.
The solution of equations (8) for the case when the system is initially in a stationary state $`|m`$ can be written,

$$C_m(t_0+\tau )=[\mathrm{cos}(\mathrm{\Omega }_e\tau /2)+i(\mathrm{\Delta }/\mathrm{\Omega }_e)\mathrm{sin}(\mathrm{\Omega }_e\tau /2)]\times \mathrm{exp}(-i\tau \mathrm{\Delta }/2),$$

$`(9)`$

$$C_p(t_0+\tau )=-i(\mathrm{\Omega }/\mathrm{\Omega }_e)\mathrm{sin}(\mathrm{\Omega }_e\tau /2)\times \mathrm{exp}(it_0\mathrm{\Delta }+i\tau \mathrm{\Delta }/2),$$
$$C_m(t_0)=1,C_p(t_0)=0.$$
In Eqs (9) we omitted the upper index “$`n`$” which indicates the number of the rf pulse, $`t_0`$ is the time of the beginning of the pulse, $`\tau `$ is its duration, $`\mathrm{\Delta }=E_p-E_m-\omega `$, and $`\mathrm{\Omega }_e=(\mathrm{\Omega }^2+\mathrm{\Delta }^2)^{1/2}`$ is the NMR frequency in the rotating frame. If the system is initially in the upper state, $`|p`$, ($`C_m(t_0)=0`$, $`C_p(t_0)=1`$), one can obtain the solution of Eqs (8) by changing the sign of $`\mathrm{\Delta }`$ and setting $`m\to p`$ and $`p\to m`$ in (9).
For the resonant transition ($`\mathrm{\Delta }=0`$) the expressions (9) transform into the well-known equations for the Rabi transitions,
$$C_m(t_0+\tau )=\mathrm{cos}(\mathrm{\Omega }\tau /2),C_p(t_0+\tau )=-i\mathrm{sin}(\mathrm{\Omega }\tau /2).$$
$`(10)`$
In particular, for $`\mathrm{\Omega }\tau =\pi `$ (a $`\pi `$-pulse), Eqs (10) describe the complete transition from the state $`|m`$ to the state $`|p`$.
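The two-level solutions (9)–(10) can be checked by direct numerical integration of Eqs (8) (a sketch using a simple fixed-step Euler scheme; accuracy is limited by the step size):

```python
import cmath
import math

def evolve(rabi, delta, tau, steps=100000):
    """Euler-integrate Eqs (8): i*dCp/dt = (W/2) e^{i*delta*t} Cm,
    i*dCm/dt = (W/2) e^{-i*delta*t} Cp, with delta = Ep - Em - omega,
    starting from Cm = 1, Cp = 0 at t = 0."""
    cm, cp = 1.0 + 0.0j, 0.0j
    dt = tau / steps
    for n in range(steps):
        t = n * dt
        dcm = -1j * (rabi / 2.0) * cmath.exp(-1j * delta * t) * cp
        dcp = -1j * (rabi / 2.0) * cmath.exp(1j * delta * t) * cm
        cm, cp = cm + dcm * dt, cp + dcp * dt
    return cm, cp

w = 1.0
# Resonant pi-pulse (delta = 0, W*tau = pi): complete transfer to |p>.
cm, cp = evolve(w, 0.0, math.pi / w)
# "2*pi*k" condition with k = 1 (delta = sqrt(3)*W, so Omega_e*tau = 2*pi):
# the non-resonant transition probability returns to ~0.
cm2, cp2 = evolve(w, math.sqrt(3.0) * w, math.pi / w)
print(abs(cp) ** 2, abs(cp2) ** 2)
```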
For non-resonant transitions, expressions (9) include two characteristic parameters: $`\mathrm{\Omega }/\mathrm{\Omega }_e`$ and $`\mathrm{sin}(\mathrm{\Omega }_e\tau /2)`$. If either of these two parameters is zero, the probability of a non-resonant transition disappears. The second parameter is equal to zero when $`\mathrm{\Omega }_e\tau =2\pi k`$ ($`k=1,2,..`$), where $`k`$ is the number of revolutions of a non-resonant (average) spin about the effective field in the rotating frame. This is the basis of the “$`2\pi k`$”-method for eliminating non-resonant transitions. (See , Chapter 22, and .)
5. A Control-Not gate involving remote qubits and their long-distance entanglement
A pure Control-Not ($`CN_{ab}`$) gate is a unitary operator which transforms the basic state,
$$|q_{N-1}\mathrm{\cdots }q_a\mathrm{\cdots }q_b\mathrm{\cdots }q_0,$$

into the state,

$$|q_{N-1}\mathrm{\cdots }q_a\mathrm{\cdots }\overline{q}_b\mathrm{\cdots }q_0,$$

where $`\overline{q}_b=1-q_b`$ if $`q_a=1`$; and $`\overline{q}_b=q_b`$ if $`q_a=0`$. The $`a`$-th and $`b`$-th qubits are called the control and the target qubits of the $`CN_{ab}`$ gate. A modified CN gate performs the same transformation accompanied by phase shifts which are different for different basic states. It is well-known that the CN gate can produce an entangled state of two qubits, which cannot be represented as a product of the individual wave functions.
We shall consider an implementation of the CN gate in the Ising spin chain with the left end spin as the control qubit and the right end spin as the target qubit, i.e. $`CN_{N-1,0}`$, for spin chains of 200 and of 1000 qubits. Using this gate we will create entanglement between the end qubits of the spin chain. We start with the ground state. Then we apply a $`\pi /2`$-pulse with frequency $`\omega _{N-1}`$ to produce a superpositional state of the $`(N-1)`$-th (left) qubit,
$$\mathrm{\Psi }=|0\mathrm{\cdots }0+i|1\mathrm{\cdots }0.$$
$`(11)`$
(Here and below the normalization factor $`1/\sqrt{2}`$ is omitted.) To implement a modified $`CN_{N-1,0}`$ gate we apply to the spin chain $`L=397`$ $`\pi `$-pulses if $`N=200`$, and $`L=1997`$ pulses if $`N=1000`$. The first $`\pi `$-pulse has the frequency $`\omega =\omega _{N-2}`$. For the second $`\pi `$-pulse $`\omega =\omega _{N-3}`$. For the third $`\pi `$-pulse, $`\omega =\omega _{N-2}-2J`$, etc.
6. An analytic solution
An analytical expression for the wave function, $`\mathrm{\Psi }`$, after the action of the $`\pi /2`$\- and $`L`$ $`\pi `$-pulses, can be easily derived if for all $`\pi `$-pulses $`\mathrm{\Omega }_e\tau =2\pi k`$, with the same value of $`k`$:
$$\mathrm{\Psi }=C_0|\mathrm{00..0}+C_1|10\mathrm{}1,$$
$`(12)`$
$$C_0=(-1)^{kL}\mathrm{exp}(-i\pi L\sqrt{4k^2-1}/2),C_1=1,$$
$`(13)`$
For $`k\gg 1`$, we get the same solution for odd and even $`k`$: $`C_0\approx 1`$. This result is easy to understand. For a $`\pi `$-pulse, the Rabi frequency is $`\mathrm{\Omega }=|\mathrm{\Delta }|/\sqrt{4k^2-1}`$. Large values of $`k`$ correspond to small values of the parameter $`\mathrm{\Omega }/|\mathrm{\Delta }|`$. If $`\mathrm{\Omega }/|\mathrm{\Delta }|`$ approaches zero, the non-resonant pulse cannot change the quantum state.
For a small value of $`k`$, the non-resonant pulse can change the phase of a state. For example, for $`k=1`$ we have,
$$C_0=\mathrm{exp}[i\pi L(1-\sqrt{3}/2)].$$
After the first $`\pi `$-pulse, the phase shift is approximately $`24^o`$, but it grows as the number of $`\pi `$-pulses, $`L`$, increases.
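The quoted per-pulse phase shift follows directly from the expression for $`C_0`$ (a one-line numerical check):

```python
import math

# Phase of C_0 acquired per non-resonant pi-pulse for k = 1:
# arg(C_0) / L = pi * (1 - sqrt(3)/2)
phase_per_pulse = math.pi * (1.0 - math.sqrt(3.0) / 2.0)   # radians
phase_deg = math.degrees(phase_per_pulse)                  # about 24 degrees
```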
Now, we shall mention an important point. If we consider the probability of non-resonant transition, the small parameter of the problem is:
$$ϵ=(\mathrm{\Omega }/\mathrm{\Omega }_e)^2\mathrm{sin}^2(\mathrm{\Omega }_e\tau /2).$$
(It follows from (9) that the expression for $`|C_m|^2`$ can be written in the form $`|C_m|^2=1-ϵ`$, and $`|C_p|^2=ϵ`$.) If we take into consideration the change of the phase of the non-resonant state, the small parameter of the problem is $`\mathrm{\Omega }/|\mathrm{\Delta }|\approx \mathrm{\Omega }/\mathrm{\Omega }_e`$.
Next, we will discuss the probability of non-resonant transitions using perturbation theory. The analytical expression for probabilities $`|C_0|^2`$ and $`|C_1|^2`$ can be easily found in the first non-vanishing approximation of perturbation theory:
$$|C_0|^2=1-Lϵ,|C_1|^2=1.$$
$`(14)`$
The decrease of the probability $`|C_0|^2`$ is caused by the generation of unwanted states. One can see that the deviation from the value $`|C_0|^2=1`$ accumulates as the number of $`\pi `$-pulses, $`L`$, increases. This means that the small parameter of the problem is $`Lϵ`$ rather than $`ϵ`$. Consider first those non-resonant transitions which we ignore in this paper. For the number of qubits $`N=1000`$, using the characteristic parameters from (3) and $`L\approx 2000`$, $`\mathrm{\Omega }\approx 0.1J`$, $`|\mathrm{\Delta }|=2\pi \mathrm{\Delta }f`$, we obtain $`Lϵ\sim 10^{-3}`$. Thus, as was already pointed out, one can neglect non-resonant transitions whose frequency differences are of the order of $`\mathrm{\Delta }f`$.
Now we consider the non-resonant transitions which are included in our simulations. Setting $`|\mathrm{\Delta }|=2J`$, we obtain $`Lϵ\approx 2.5`$. Thus, deviations from the $`2\pi k`$ condition can produce large distortions of the desired wave function. To study these distortions we have to use computer simulations.
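The estimate $`Lϵ\approx 2.5`$ can be reproduced by replacing the oscillating $`\mathrm{sin}^2`$ factor in $`ϵ`$ by its mean value 1/2 (this averaging is our own assumption; the parameter values are those of the text, in units of $`J`$):

```python
import math

Omega = 0.1          # Rabi frequency, in units of J
J = 1.0
Delta = 2.0 * J      # detuning of the nearest non-resonant transition
L = 1997             # number of pi-pulses for N = 1000

Omega_e = math.sqrt(Omega**2 + Delta**2)
# eps = (Omega/Omega_e)^2 * sin^2(Omega_e*tau/2); replace sin^2 by its mean 1/2
L_eps = L * 0.5 * (Omega / Omega_e) ** 2    # accumulated error parameter, ~2.5
```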
7. Computer simulations
We have developed a numerical code which allowed us to study the dynamics of quantum states with probabilities no less than $`10^{-6}`$ for a spin chain with up to 1000 qubits. (Because we omitted in (11) the normalization factor $`1/\sqrt{2}`$, all probabilities in this section, including those in Figs 2-14, are doubled.) The sum of the probabilities of all these states was equal to $`2-O(10^{-6})`$ (the normalization condition).
Next, in Figs 2-14, we present the results of computer simulations with $`N=200`$ and with $`N=1000`$ qubits. Fig. 2 shows the probability of the excited unwanted states after implementation of the $`CN_{199,0}`$ gate, for $`N=200`$ and $`\mathrm{\Omega }=0.14`$. On the horizontal axis the unwanted states are shown in the order of their generation. A total of $`7385`$ unwanted states were generated which had probability $`P\geq 10^{-6}`$. (In all Figs 2-14 only the states with $`P\geq 10^{-6}`$ are taken into account.) The probability distribution of unwanted states clearly contains two “bands”. One group of these states has probability $`P\sim 10^{-6}`$ (the bold “line” near the horizontal axis). The second group of states has probability $`P\sim 10^{-3}`$ (the upper “curve” in Fig. 2). Fig. 3 shows an enlargement of the upper “band” of Fig. 2. One can see some sub-structure of this “band”. Fig. 4 shows the sub-structure in the lower “band” of Fig. 2. Figs 5(a,b) show the sub-structure of the upper and lower “bands” shown in Fig. 4. One can see a hierarchy in the structure of the distribution function of generated unwanted states. Figs 6(b-j) show the typical structure of unwanted states of the spin chain. Fig. 6a shows the ground state of the spin chain (all qubits are in their ground state). The value of $`P`$ in Fig. 6 indicates the probability of the states. All states in Figs 6(b-j) belong to the upper “band” shown in Fig. 2, i.e. they have probability $`P\sim 10^{-3}`$. It is interesting to note that the group of unwanted states with high probabilities contains high-energy states of the spin chain (many-spin excitations). Typical unwanted states of the lower “band” in Fig. 2 (with $`P\sim 10^{-6}`$) are shown in Figs 7(a-j) and in Figs 8(a-j). It is important to note that typical unwanted states for both groups (Figs 6-8) contain highly correlated spin excitations. Fig. 9 shows the total number of unwanted states (with probability $`P\geq 10^{-6}`$) and the probability of the ground state, $`|C_0|^2`$, as a function of the Rabi frequency, $`\mathrm{\Omega }`$. The maximum value of $`|C_0|^2`$ and the minimal total number of unwanted states correspond to the values of $`\mathrm{\Omega }`$ which satisfy the $`2\pi k`$-condition for $`396`$ of the $`397`$ $`\pi `$-pulses. (The third $`\pi `$-pulse does not satisfy a $`2\pi k`$-condition.)
Next, we have studied the generation of unwanted states for the case when only a group of pulses had values of the Rabi frequency which deviated from the $`2\pi k`$-condition. We changed the value of $`\mathrm{\Omega }`$ for all $`\pi `$-pulses from $`k_1=10`$ to $`k_2=(10+\mathrm{\Delta }k)`$. Fig. 10 shows the number of unwanted states and the probability of the ground state, $`|C_0|^2`$, as a function of the number of distorted pulses, $`\mathrm{\Delta }k`$, for $`N=1000`$. The value $`\mathrm{\Omega }\approx 0.100`$ in Fig. 10 corresponds to the $`2\pi k`$-condition for all pulses (except the $`3`$-rd $`\pi `$-pulse); for the $`\mathrm{\Delta }k`$ distorted pulses, $`\mathrm{\Omega }=0.101`$. Figs 11 and 12 demonstrate the same quantities (the number of generated unwanted states and $`|C_0|^2`$) for the case when $`\mathrm{\Omega }`$ is a random parameter for a group of pulses. (Both figures show typical realizations for definite distributions of $`\mathrm{\Omega }`$.) Fig. 13 shows the dependence of the number of unwanted states and $`|C_0|^2`$ on the location of the group of distorted pulses. Fig. 14 demonstrates the ground state and examples of unwanted states generated due to the distortion of this group of pulses. One can see again that high-energy states of the spin chain, with many-spin excitations, are generated.
Conclusion
In this paper we presented the results of simulations of a quantum Control-Not gate, $`CN_{N-1,0}`$, between remote qubits, the $`(N-1)`$-st and the $`0`$-th, and of the creation of long-distance entanglement in a nuclear spin quantum computer having a large number of qubits (up to 1000). The quantum computer considered is a one-dimensional nuclear spin chain placed in a slightly non-uniform magnetic field and oriented in such a direction that the dipole interaction between spins is suppressed, so that only the Ising interaction comes into play, as in the case of liquid NMR. We used two essential assumptions:
1. The nuclear spin chain is prepared initially in the ground state.
2. The frequency difference between neighboring spins due to the inhomogeneity of the external magnetic field is much larger than the Ising interaction constant.
Using these assumptions, we developed a numerical method which allowed us to simulate the dynamics of quantum logic operations taking into consideration all quantum states with probability no less than $`P=10^{-6}`$. For the case when the $`2\pi k`$-condition is satisfied (the $`\pi `$-pulse for the resonant transition is at the same time a $`2\pi k`$-pulse for non-resonant transitions), we obtained an analytic solution for the evolution of the nuclear spin chain. In the case of small deviations from the $`2\pi k`$-condition, the error accumulates, so that perturbation theory becomes invalid even for small deviations. In this case, numerical simulations are necessary.
The main results of our simulations are the following:
1. The unwanted states exhibit band structure in their probability distributions. There are two main “bands” in the probability distribution of unwanted states. The unwanted states in these “bands” have significantly different probabilities, $`P_{low}/P_{upper}\sim 10^{-3}`$. Each of these two bands has its own structure.
2. A typical unwanted state is a state of highly correlated spin excitations. An important fact is that the unwanted states with relatively high probability include high energy states of the spin chain (many-spin excitations).
3. The method developed allowed us to study the generation of unwanted states and the probability of the desired states as a function of the distortion of the rf pulses. This can be used to formulate the requirements for acceptable errors in quantum computation.
The results of this paper can be used to design experimental implementations of quantum logic operations and to estimate (benchmark) the quality of experimental quantum computing devices. Our approach can be extended to other types of quantum computers.
Acknowledgments
This work was supported by the Department of Energy under contract W-7405-ENG-36 and by the National Security Agency.
Figure captions
Fig. 1: Nuclear spin quantum computer (the ground state of nuclear spins). $`B_0`$ is the permanent magnetic field; $`B_1`$ is the radio-frequency field. The chain of spins makes the angle $`\theta `$ with the direction of the field $`B_0`$.
Fig. 2: Probabilities of unwanted states. The total number of qubits: $`N=200`$; $`\mathrm{\Omega }=0.14`$. The number of unwanted states with probabilities $`|C_n|^2\geq 10^{-6}`$ is $`7385`$. The states are presented in the order of their generation.
Fig. 3: The upper band of Fig. 2, shown on a larger scale.
Fig. 4: The lower band of Fig. 2, shown on a larger scale.
Fig. 5a: Sub-structure of the upper band of Fig. 4, shown on a larger scale.
Fig. 5b: Sub-structure of the lower band of Fig. 4, shown on a larger scale.
Fig. 6: (a) The ground state of the spin chain; (b-j) Typical unwanted states with probabilities $`\sim 10^{-3}`$. The horizontal axis shows the position of a qubit in the spin chain of $`N=200`$ spins; the vertical axis shows the state $`|0`$ or $`|1`$ of the qubit.
Fig. 7: Examples of “low energy” unwanted states from the lower band in Fig. 2.
Fig. 8: Examples of “intermediate energy” unwanted states from the lower band in Fig. 2.
Fig. 9: (a) Probability, $`|C_0|^2`$, as a function of $`\mathrm{\Omega }`$. The total number of qubits $`N=200`$; (b) The total number of unwanted states.
Fig. 10: (a) The total number of unwanted states as a function of $`\mathrm{\Delta }k=k_2-k_1`$ ($`k_1=10`$, $`N=1000`$); $`\mathrm{\Omega }=0.1`$ for all $`\pi `$-pulses except for the $`\pi `$-pulses with numbers $`k`$ in the range $`k_1<k<k_2`$, for which $`\mathrm{\Omega }=0.101`$; (b) The probability, $`|C_0|^2`$, for the same parameters as in (a).
Fig. 11: (a) The probability, $`|C_0|^2`$, as a function of the parameter $`\epsilon _0`$; $`N=1000`$; (b) The number of unwanted states as a function of $`\epsilon _0`$; $`\mathrm{\Omega }=0.1`$ for all pulses except those from the 10-th to the $`(10+\mathrm{\Delta }k)`$-th, for which $`\mathrm{\Omega }=0.1+\epsilon `$; $`\epsilon `$ is a random parameter: $`\epsilon \in [-\epsilon _0,\epsilon _0]`$.
Fig. 12: (a) The probability, $`|C_0|^2`$, as a function of $`\mathrm{\Delta }k=k_2-k_1`$; (b) The total number of unwanted states; $`\mathrm{\Omega }=0.1`$ for all $`\pi `$-pulses except for the $`\pi `$-pulses from the 10-th to the $`(10+\mathrm{\Delta }k)`$-th, for which $`\mathrm{\Omega }=0.1+\epsilon `$; $`\epsilon `$ is a random parameter in the range $`-0.05<\epsilon <0.05`$; $`N=1000`$.
Fig. 13: (a) The number of unwanted states as a function of the parameter $`k_1`$; $`\mathrm{\Omega }=0.1`$ for all $`\pi `$-pulses except for the $`\pi `$-pulses from the $`k_1`$-th to the $`(k_1+15)`$-th, for which $`\mathrm{\Omega }=0.1+\epsilon `$; $`\epsilon `$ is a random parameter, $`\epsilon \in [-0.005,0.005]`$; (b) The probability, $`|C_0|^2`$, for the same parameters as in (a); $`N=1000`$.
Fig. 14: (a) The ground state of the chain; (b-j) Examples of unwanted states ($`N=1000`$); $`\mathrm{\Omega }=0.1`$ for all $`\pi `$-pulses except for the $`\pi `$-pulses from the 10-th to the 40-th, for which $`\mathrm{\Omega }=0.1+\epsilon `$, where $`\epsilon `$ is a random parameter in the range $`-0.05<\epsilon <0.05`$.
# The Distribution of X-ray Dips with Orbital Phase in Cygnus X-1
## 1 Introduction
### 1.1 Cygnus X-1
Cygnus X-1 is well known as a black hole binary, with a mass of the compact object in the range 4.8–14.7 M$`_{\odot }`$ (Herrero et al. 1995). It is a High Mass X-ray Binary, and a member of the subclass of Supergiant X-ray Binaries (SXBs) consisting of a neutron star or black hole and an OB supergiant companion. When the compact object is a neutron star, it orbits within a few stellar radii of the OB star, and so is deeply embedded in the stellar wind. For the larger mass of the compact object in Cygnus X-1, the orbit is not so close. Cygnus X-1 is extremely variable over a large range of timescales. First, there are transitions between the low and high luminosity states; Cygnus X-1 spends most of its time in the low luminosity state, but in May 1996 it underwent a transition from the Low State to an intermediate or Soft State (Cui et al. 1996), returning after three months to the Low State. On short timescales it is also very variable, due to the well-known rapid aperiodic variability or flickering, the nature of which is not well understood. The X-ray spectrum of Cygnus X-1 in the Low State can be described by a hard underlying power law plus a reflection component (Done et al. 1992) and a weak soft excess (Bałucińska and Hasinger 1991) which has been identified with blackbody emission from the accretion disk (Bałucińska-Church et al. 1995). In the Soft State, the spectrum is dominated by a strong thermal component produced by enhanced emission from the accretion disk (Dotani et al. 1996; Cui et al. 1997).
### 1.2 X-ray Dip Properties
X-ray dips, which usually last several minutes but have been up to 8 hr in length, are also observed in Cygnus X-1. During the dips, there is a spectral hardening and the K absorption edge of iron may be seen, showing that they are due to photoelectric absorption (e.g. Kitamoto et al. 1984). Spectral fitting of the X-ray spectra of Cygnus X-1 has shown that the column density can increase from the non-dip value of $`6.0\times 10^{21}`$ H atom cm<sup>-2</sup> to $`1\times 10^{23}`$ H atom cm<sup>-2</sup> in dipping (Bałucińska-Church et al. 1997; Kitamoto et al. 1984). Spectral fitting of dip data, for example from ASCA, is consistent with a neutral absorber, although discriminating between cold and warm absorber was not possible (Bałucińska-Church et al. 1997). Kitamoto et al. (1984), from analysis of a high-quality Tenma spectrum of a dip, found an Fe edge implying an ionization state $`<`$ Fe V. It was realised at an early stage that dipping tends to occur at about phase zero of the 5.6 d orbital cycle, i.e. near to superior conjunction of the black hole, but dipping was also seen, for example, at $`\varphi \sim 0.71`$ (Remillard & Canizares 1984). Possible causes of dipping that have been suggested (see Remillard & Canizares 1984) are: 1) absorption associated with Roche lobe overflow, the matter however having to be far out of the orbital plane; 2) absorption taking place in the stream flowing from the companion towards the compact object, inferred from He II $`\lambda `$ 4686 measurements (Bolton 1975; Treves et al. 1980); 3) absorption in the wind of the companion; 4) absorption in blobs in the wind of the companion (Kitamoto et al. 1984). Kitamoto et al. used the duration of short dips to estimate the size of an absorbing cloud as $`\sim 10^9`$ cm, i.e. a relatively small region, and hence associated the absorber with clouds or “blobbiness” in the stellar wind of the companion.
The present position is that the physical state and origin of the absorber is not at all well understood.
This contrasts with X-ray dipping in low-mass X-ray binaries, in which it is generally accepted that dipping is due to absorption in the bulge in the outer accretion disc where the accretion flow from the companion impacts (White & Swank 1982). Spectral evolution in dipping could not be explained in terms of absorption of a single emission component, since in particular sources the spectrum may become harder, remain energy-independent or even become softer during dipping. However, this behaviour has been explained by assuming two emission regions: point-like blackbody emission from the surface of the neutron star plus extended Comptonized emission from the accretion disk corona (Church & Bałucińska-Church 1995; Church et al. 1997, 1998a, 1998b). This model is able to explain the varied and complex spectral evolution in dipping in different sources. The presence of X-ray eclipses in XBT 0748-676 shows that deepest dipping occurs at orbital phase $`\sim 0.9`$, consistent with the position of impact on the disk of an accretion flow trailing sideways from the inner Lagrangian point in the binary frame. Thus, dipping is much better understood in LMXBs than in Cygnus X-1.
### 1.3 Supergiant Stellar Winds
SXBs in general have strong stellar winds and exhibit strong orbital-related decreases in X-ray intensity due to absorption in the wind. In an EXOSAT observation of the archetypal eclipsing SXB, X 1700-371, it was found that large, smooth increases in column density took place between orbital phases 0.8 and 1.2 (Haberl, White & Kallman 1989; hereafter HWK89). This observation was useful since a full orbital cycle of 3.41 d was covered. In X 1700-371, the binary separation is the smallest known, with the compact object orbiting the primary star at 1.4 stellar radii (van Genderen 1977), leading to the dramatic changes in absorption with orbital phase. The increases in column density could be well modelled by absorption in a stellar wind obeying a CAK velocity law (Castor, Abbott & Klein 1975). An additional sharp increase in column density at phase $`\sim 0.6`$ could be well modelled as a gas stream originating on the companion, possibly on a tidal bulge (HWK89). In the companion of Cygnus X-1, HDE 226868, the radial velocity curve of the He II $`\lambda `$ 4686 emission line is shifted by about 120° with respect to absorption in the companion, also indicating the presence of an accretion stream (Hutchings et al. 1973). In the case of Cygnus X-1, a strong decrease in X-ray intensity with orbital phase has not been seen; however, Kitamoto et al. (1990) showed evidence for a dependence of the column density of the quiescent (non-dip) spectra on orbital phase, but with the column density increasing from $`6\times 10^{21}`$ H atom cm<sup>-2</sup> to only $`2\times 10^{22}`$ H atom cm<sup>-2</sup>, at least one order of magnitude less than in X 1700-371. Apart from this, the only absorption events seen are the X-ray dips. There has been no systematic study of the phase of dipping, and in this work we present such a systematic study and compare dipping in Cygnus X-1 with absorption effects in other SXBs.
### 1.4 Survey of X-ray Dips
It has not previously been possible to make a survey of the distribution of dips with orbital phase because of uncertainties in the available ephemeris. The ephemeris previously available of Gies & Bolton (1982) gave an orbital period of 5.59974 $`\pm `$0.00008 days, and this precision implies an uncertainty in phase of $`\pm `$0.02 cycle at the present time. However, Ninkov, Walker & Yang (1987) presented evidence for a period increase with time, and phases calculated using their ephemeris now differ by 0.5 from Gies & Bolton values, so that phase becomes completely indeterminate at the present time. In response to this, a new definitive ephemeris has recently been produced based on high quality radial velocity data on HDE 226868, by LaSala et al. (1998). This work concludes that there is no evidence for an evolving period, and the accuracy of the period determination of P = 5.5998 $`\pm `$ 0.0001 days in the new ephemeris allows accurate retrospective determination of orbital phase back many years from the present epoch; in fact, 85 years are required before an uncertainty of 0.1 cycle accumulates. The epoch of the new ephemeris is given for the superior conjunction of the black hole, i.e. $`\varphi `$ = 0 with the companion between the observer and the black hole, and thus corresponds to the orientation in which X-ray dips are thought to take place.
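The 85-year figure quoted above follows from simple error propagation on the period (a minimal sketch; the 365.25 d/yr conversion is ours):

```python
P = 5.5998        # orbital period, days (LaSala et al. 1998)
dP = 0.0001       # period uncertainty, days

# Phase error accumulated over a baseline T: dphi = (T/P) * (dP/P) cycles,
# so dphi = 0.1 cycle is reached after T = 0.1 * P**2 / dP.
T_days = 0.1 * P**2 / dP
T_years = T_days / 365.25      # roughly 85-86 years
```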
We are thus able to present a survey of the distribution of dipping with orbital phase, and this is based on two sources of data. First, a survey has been made using a large body of archival data from Copernicus, Einstein, EXOSAT, TENMA, GINGA, ROSAT and ASCA. Secondly, we have used data from the Rossi-XTE All Sky Monitor (ASM) obtained between September 1996 and the present.
This distribution shows a smooth variation with orbital phase which is strikingly similar to the observed variation of column density with phase in the HMXB X 1700-371 (HWK89); superimposed on the smooth variation is evidence for a stream. Consequently, we attempt simple modelling of the wind of HDE 226868, assuming that most of the wind is highly ionized by the X-ray flux from the black hole, and from the density of the neutral component of the wind we derive the variation of column density along lines-of-sight with orbital phase.
### 1.5 Wind-Driving Mechanisms
The SXBs, including Cygnus X-1 of spectral type O9.7Iab, consist of OB supergiants and a compact object, and the massive companion has a strong stellar wind as do isolated OB stars. The stellar wind in such systems is driven by the radiation pressure of UV photons on the gas. Calculations of the radiation force have been made by Lucy & Solomon (1970) based on resonance lines from a few elements, by Castor, Abbott and Klein (CAK; 1975) based on the combined effect of weak lines, and by Abbott (1982) who concluded that the correct approach uses the strong lines of many elements. However, in the case of binary systems, the X-ray luminosity L may result in photoionization which will suppress the radiative driving force. For high L, radiative acceleration will be totally suppressed on the side of the companion exposed to X-rays. A two-dimensional hydrodynamic code was used by Blondin et al. (1990) to include effects such as radiative driving of the wind, suppression by photoionization, X-ray heating, radiative cooling, and gravitational and rotational forces in the binary system. By application of this code to high X-ray luminosity systems such as Cen X-3 and SMC X-1, Blondin (1994) produced two-dimensional maps of particle density for L between $`10^{36}`$ and $`10^{38}`$ erg s<sup>-1</sup>, showing dramatically the suppression of the normal wind, and also demonstrated the production of a photoionization wake where the normal wind meets a stalled wind, and of a shadow wind originating in the X-ray shadow and moving sideways in the binary system towards the compact object on the upstream side. The simulations of particle density are then used to calculate column density values along lines-of-sight to the X-ray source on the assumption that material having ionization parameter $`\xi `$ $`<`$ 2000 will contribute to photoelectric absorption.
In the case of Cyg X-1, we take a simple, approximate approach of calculating the density of the recombined component of a wind that is assumed to be fully ionized and from this derive column densities as a function of orbital phase for comparison with the dip distribution.
## 2 Observations
### 2.1 Archival data
For the first part of our analysis, we have included the data from a wide range of X-ray missions as shown in Table 1, which gives the date of the start of each observation, the observatory mission, and the authors. In using data published by other authors, we have used tables and figures showing the times of dipping from the literature. In other cases, the data were analysed by ourselves: Einstein MPC: Bałucińska (1988); EXOSAT: Bałucińska and Hasinger (1992); GINGA: present work; ROSAT: Bałucińska-Church et al. (1995); and ASCA: Bałucińska-Church et al. (1997).
### 2.2 Rossi-XTE ASM data
The All-Sky Monitor (ASM; Levine et al. 1996) on board Rossi-XTE (Bradt, Rothschild & Swank 1993) scans the sky in 3 energy bands: 1.5–3, 3–5 and 5–12 keV with $`\sim `$90 s exposure. Any source is scanned 5–20 times per day. We have extracted data from the archive in these 3 energy bands, from September 6th, 1996 until the present (October 1998), i.e. the data labelled as days 250–998 from the start of ASM operation. The start time was chosen so as to exclude the period between 1996 May and August when Cygnus X-1 was in the Soft State; thus all of the results presented relate to the Low State of the source.
## 3 Analysis and Results
### 3.1 The Historical Data
In all cases of archival data, orbital phases were calculated using the new ephemeris of LaSala et al. (1998). First, the data were sorted into 100 phasebins, and a count was added to each bin in which dipping activity was observed. In Cygnus X-1 a succession of narrow X-ray dips usually occurs, and so we add a single count to a given phasebin when dipping is seen, not a count for each narrow dip.
There would be a strong selection effect due to the fact that coverage of phases has deliberately concentrated on phases close to zero. To compensate for this, we also made a count of the number of times each phasebin was observed, and the histogram of dip frequency versus phase was normalized by dividing the dip count by the phasebin count. A selection effect may remain, as the data not analysed by ourselves were selected by the various authors as suitable for publication. Finally, to improve statistics, phasebins were grouped together in fives to give 20 bins per orbital cycle, and this is shown in Fig. 1.
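The binning and normalization procedure above can be sketched as follows (a minimal sketch; the function and variable names are our own):

```python
import numpy as np

def dip_phase_histogram(dip_phases, covered_phases, nbins=100, group=5):
    """Normalized dip-frequency histogram versus orbital phase.

    dip_phases:     phases (0..1) at which dipping activity was observed
    covered_phases: phases (0..1) of all observed phase bins
    Returns nbins/group grouped bins per orbital cycle (20 for the defaults).
    """
    dips, _ = np.histogram(dip_phases, bins=nbins, range=(0.0, 1.0))
    cover, _ = np.histogram(covered_phases, bins=nbins, range=(0.0, 1.0))
    # divide the dip count by the coverage count to remove the selection effect
    norm = np.where(cover > 0, dips / np.maximum(cover, 1), 0.0)
    # group phasebins in fives to improve statistics
    return norm.reshape(-1, group).sum(axis=1)
```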
It can be seen that the distribution peaks at about phase 0.95 with a full width at half maximum of $`\sim `$0.25. This effect, i.e. the peak of the distribution being offset from phase zero, could also be seen in individual datasets; for example, Remillard and Canizares (1984) noted that short dips tended to occur before phase zero, whereas one 8 hr dip was centred on phase zero. This will be discussed fully after the next section on results from the RXTE ASM.
### 3.2 RXTE All Sky Monitor Data
Data were extracted from the ASM archive and stored as entries in a file, each entry corresponding to a single 90 s observation. Each entry consisted of the time and the count rate in each of the 3 standard bands: 1.5–3 keV, 3–5 keV and 5–12 keV. From these, hardness ratios were constructed using the ratio of count rates in the 3–5 keV and 1.5–3 keV bands, designated HR1, and the ratio of count rates in the 5–12 keV and 3–5 keV bands, designated HR2. The lightcurve in the total energy band of the ASM, together with HR1 and HR2, is shown in Fig. 2. A brief period of enhanced intensity can be seen at $`\sim `$ day 920; however, these data are excluded by our procedure of selecting dip data using hardness ratios (below). Although the count rate varies between 15 and $`\sim `$140 count s<sup>-1</sup>, mostly due to flaring activity, the hardness ratios HR1 and HR2 were very stable, with mean values 1.10$`\pm 0.77`$ and 1.52$`\pm 0.35`$, respectively, and were unaffected by flaring. The points with hardness ratio significantly larger than these means are dip data, for which HR1 can increase to 40 and HR2 can increase to 10.
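For reference, the two hardness ratios are simply ratios of the band count rates (the function name is ours; band labels as defined above):

```python
def hardness_ratios(r_soft, r_mid, r_hard):
    """HR1 = (3-5 keV)/(1.5-3 keV), HR2 = (5-12 keV)/(3-5 keV),
    from count rates in the three ASM standard bands."""
    return r_mid / r_soft, r_hard / r_mid
```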
Dip events can be seen in the light curve, but are more obvious in HR1 and HR2. The data folded on the orbital period are shown in Fig. 3. There is a clear anti-correlation between the hardness ratios and the count rate, with a reduction in count rate occurring just before phase zero, and an associated hardening of the spectrum showing that it is due to absorption. Larger increases of hardness ratio in the lower energy bands would be expected for simple absorption; however, as partial covering takes place, the decrease in intensity at low energies is reduced by the presence of the uncovered part of the emission. The data can also be plotted as a colour-colour diagram, i.e. as HR2 against HR1, and this is shown in Fig. 4. We next discuss how spectral simulations were used to elucidate the behaviour seen in Fig. 4, and how the selection of dip data was made.
#### 3.2.1 Flux calibration of the ASM and simulations of dipping in the ASM colour-colour diagram
To understand the spectral changes taking place in the data plotted in Fig. 4, we have simulated non-dip and dip spectra and obtained the fluxes in the 3 ASM bands. These were then converted to count rates in the 3 bands, and from these, the two hardness ratios HR1 and HR2 were derived. To calibrate the flux/count rate relations in each of the bands, not having an instrument response for the ASM, we have used ASM data from the Crab Nebula. The X-ray flux of the Crab in each of the standard bands was calculated and compared with the mean value of the count rate in each band integrated over a long period. This gives calibration factors of $`3\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> per count s<sup>-1</sup> for each of the 3 energy bands. This calibration makes the assumption that the spectral shape of the source does not differ markedly from that of the Crab Nebula over the energy range of the ASM.
Next, we simulated dipping in Cygnus X-1 for a wide range of spectral parameter values as follows. Spectral fitting of the source in the Low State showed that the contribution of the blackbody soft excess emission from the accretion disk is small, and so the source may be modelled with a power law spectrum which is partially covered during dipping (Bałucińska-Church et al. 1997). Good fits were obtained using a neutral absorber with solar abundances (cross-sections of Morrison & McCammon 1983). Dip spectra are normally made by intensity selection to accumulate sufficient counts, and so include dip data due to a succession of short dips, plus non-dip data between the dips, and this accounts for the partial covering required in fitting. The parameters f, the partial covering fraction, and $`N_\mathrm{H}`$, the column density, can cover wide ranges of values and, in simulations, we spanned the whole of this two-dimensional parameter space in a set of calculations making small steps in the parameter values, calculating the flux in each of the ASM bands, and plotting the results on a colour-colour diagram of HR2 versus HR1. These simulated data show all the features of the actual colour-colour diagram of Cygnus X-1 data obtained from the ASM (Fig. 4), and allow us to interpret the real data in different positions on the plot.
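A minimal sketch of such a simulation follows. The $`E^3`$ scaling of the photoelectric cross-section and all normalizations below are illustrative assumptions of ours; the actual fits used the Morrison & McCammon cross-sections and a calibrated response:

```python
import numpy as np

E = np.linspace(1.5, 12.0, 1000)      # keV grid spanning the ASM range
dE = E[1] - E[0]

def sigma(energy):
    """Rough photoelectric cross-section per H atom (cm^2), ~E^-3 scaling
    (illustrative; not the Morrison & McCammon 1983 cross-sections)."""
    return 2.0e-22 * energy ** -3

def band_rates(f, NH, gamma=1.7):
    """Count-rate proxies in the three ASM bands for a power law of photon
    index gamma, partial covering fraction f and column density NH."""
    spec = E ** -gamma * (f * np.exp(-NH * sigma(E)) + (1.0 - f))
    rates = []
    for lo, hi in [(1.5, 3.0), (3.0, 5.0), (5.0, 12.0)]:
        m = (E >= lo) & (E < hi)
        rates.append(spec[m].sum() * dE)
    return rates

def hardness(f, NH):
    s, mid, h = band_rates(f, NH)
    return mid / s, h / mid           # (HR1, HR2)

# Complete covering (f = 1): increasing NH hardens the spectrum (HR1 grows)
hr1_nondip, _ = hardness(1.0, 0.0)
hr1_dip, _ = hardness(1.0, 1.0e23)
```

Stepping f and NH over a grid and plotting HR2 against HR1 traces the loops described in the text.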
First, the limited number of real dip points moving horizontally in HR1 with little increase in HR2 correspond to dipping with partial covering fraction f = 1.0, i.e. simple absorption in which the source region is completely covered by absorber. Along this horizontal line, $`N_\mathrm{H}`$ increases from low values to a maximum of $`25\times 10^{22}`$ H atom cm<sup>-2</sup>. Other positions on the colour-colour diagram have f $`<`$ 1. For a given f value, increasing $`N_\mathrm{H}`$ traces a loop on the plot with column density increasing in an anticlockwise sense. For very high $`N_\mathrm{H}`$, the count rate in the highest energy band begins to be affected, and points move on a vertical line at small HR1, so that as $`N_\mathrm{H}`$ continues to increase, the data points move vertically downwards.
Based on the above understanding of the spectral changes taking place in the source revealed in Fig. 4, we selected dip data by taking points with HR1 $`>`$ 2 and HR2 $`>`$ 2.5. The plots of HR1 and HR2 against time were very flat (Fig. 2), allowing this selection method to be used. Kitamoto et al. (1990) found evidence for a variation of the non-dip column density with orbital phase from $`6\times 10^{21}`$ H atom cm<sup>-2</sup> to $`2\times 10^{22}`$ H atom cm<sup>-2</sup>, which they associated with varying $`N_\mathrm{H}`$ in the stellar wind. Our simulations show that such increases in $`N_\mathrm{H}`$ will be excluded from the selection of dip data.
For each point selected from the total data set in Fig. 4 as dip data, the orbital phase was calculated using the new ephemeris (LaSala et al. 1998), and the distribution is shown in Fig. 5. The data are not normalised by the time spent at each phase, as the coverage of orbital phases was almost uniform. In principle, it might be possible to derive $`N_\mathrm{H}`$ values for each data point using our simulations; however, this is made difficult by the fact that each position on the colour-colour diagram corresponds to particular $`N_\mathrm{H}`$ and f values, and adjacent points can have very different $`N_\mathrm{H}`$. Although we cannot produce a plot of $`N_\mathrm{H}`$ versus phase, plotting HR1 against phase for the selected data shows that the points at $`\varphi `$ ∼ 0 have the largest values of HR1 and thus the largest column densities.
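The phase assignment itself is a simple fold of the observation times on the ephemeris; a minimal sketch is given below. The 5.5998-d period is the LaSala et al. (1998) value, but the epoch used here is a placeholder, not the published one.

```python
import numpy as np

P_ORB = 5.5998      # orbital period in days (LaSala et al. 1998)
T0 = 2450235.0      # epoch of phase zero in HJD -- placeholder value

def orbital_phase(t_hjd):
    """Fold observation times (HJD) on the orbital ephemeris."""
    return np.mod((np.asarray(t_hjd, dtype=float) - T0) / P_ORB, 1.0)

def phase_histogram(dip_times, nbins=20):
    """Count dip events per orbital-phase bin (the Fig. 5 distribution)."""
    counts, edges = np.histogram(orbital_phase(dip_times),
                                 bins=nbins, range=(0.0, 1.0))
    return counts, edges

# Example: events just before phase zero all land in the last bin
t = T0 + P_ORB * (np.arange(10) + 0.975)
counts, _ = phase_histogram(t)
print(counts[-1])   # 10
```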
#### 3.2.2 The distribution of dipping with phase
The results shown in Fig. 5 show a similar effect to the historical data, i.e. the peak of dipping occurs offset from phase zero, at phase ∼ 0.95. In addition, a peak at phase 0.6 can be seen. The statistics are much improved compared with the archival data. A total of 12519 90-s ASM observations were extracted, at an average of 16.7 observations per day, of which about 3% were dip events, giving a total of 379 dip events compared with 55 events in the archival data. The two effects seen in the distribution against orbital phase are very similar to the effects seen in SXBs in measurements of the column density of the X-ray spectrum as a function of orbital phase, e.g. in X 1700-371 (HWK89), where an asymmetry in $`N_\mathrm{H}`$ about phase zero and a peak in $`N_\mathrm{H}`$ are seen. Thus there is evidence that the distribution of dipping relates ultimately to the properties of the stellar wind and the existence of a stream. It is thus important to know the variation of neutral wind density with orbital phase in Cygnus X-1, and we have carried out modelling of the stellar wind as discussed in the next section.
### 3.3 Wind modelling
We wish to derive column densities as a function of orbital phase for comparison with our results on the distribution of dipping with phase. Cygnus X-1 is luminous and we may expect suppression of the radiative wind driving force in particular spatial regions. Consequently we make approximate calculations of wind density and ionization parameter $`\xi `$ to investigate the extent of the suppression, and then estimate values of $`\mathrm{N}_\mathrm{H}`$.
First we calculate the ionization parameter $`\xi `$ = $`\mathrm{L}/\mathrm{nr}^2`$, where n is the particle density, as a function of distance r from the black hole to determine the extent of suppression of the wind driving force. This is done for different orbital phases along the lines of sight that will be used to integrate particle number density to give column density, and for simple assumptions about the ionization state of the wind, i.e. about whether acceleration is suppressed or not. Assuming first that the wind is generally highly ionized, we calculate the densities of neutral atoms $`\mathrm{n}_0`$ and of electrons $`\mathrm{n}_\mathrm{e}`$ in a wind which drifts with a constant velocity of ∼ 30 km s<sup>-1</sup>, typical of turbulent motions in the atmosphere of HDE 226868 (Gies & Bolton 1986). The total particle density in the wind $`\mathrm{n}_\mathrm{w}`$ ($`\mathrm{n}_\mathrm{w}`$ = $`\mathrm{n}_0`$ \+ $`\mathrm{n}_\mathrm{e}`$) is given by
$$\mathrm{n}_\mathrm{w}=\dot{\mathrm{M}}/4\pi \mathrm{m}_\mathrm{H}\mathrm{R}^2\mathrm{v}$$
where $`\dot{\mathrm{M}}`$ is the mass loss rate, R is the distance from the centre of the primary, $`\mathrm{m}_\mathrm{H}`$ is the mass of the hydrogen atom and v is the wind velocity. Spherical symmetry of the wind is assumed. A simple approach was taken, considering only the equilibrium between production of ions by photoionization of a solar abundance mixture and radiative recombination of H<sup>+</sup> ions. This results in a neutral component of the order of 1% of the total wind density. We neglect ionization by the primary (Abbott 1982), which would reduce the neutral fraction substantially further. However, it is only the X-ray flux which suppresses radiative driving. If the number density of X-ray photons is $`\mathrm{n}_\gamma `$, the photoionization rate $`\mathrm{dn}_\mathrm{e}/\mathrm{dt}`$ is $`\mathrm{k}\mathrm{n}_0\mathrm{n}_\gamma `$, and the recombination rate is $`\alpha \mathrm{n}_\mathrm{e}^2`$, where the photoionization rate coefficient k is related to the cross-section $`\sigma `$ via k = $`\sigma \mathrm{c}`$ (c is the velocity of light) and $`\alpha `$ is the recombination coefficient. The photoionization cross-sections of Morrison & McCammon (1983) were used. From Allen (1973), $`\alpha `$ = $`3\times 10^{-11}\mathrm{Z}^2\mathrm{T}^{-1/2}`$, having the value $`\alpha `$ = $`2.0\times 10^{-13}`$ for T = 10<sup>4</sup> K. In equilibrium,
$$\frac{\mathrm{n}_\mathrm{e}^2}{\mathrm{n}_0}=\frac{\sigma \mathrm{c}\mathrm{n}_\gamma }{\alpha }$$
The photon number density $`\mathrm{n}_\gamma `$ is given by $`\mathrm{n}_\gamma `$ = $`\mathrm{L}/4\pi \mathrm{r}^2\overline{\mathrm{E}}`$ where $`\overline{\mathrm{E}}`$ is the mean energy, assumed to be 1 keV, and L = $`2\times 10^{37}`$ erg s<sup>-1</sup>, typical of the Low State. From the above, a simple quadratic equation for $`\mathrm{n}_\mathrm{e}`$ follows, allowing $`\mathrm{n}_\mathrm{e}`$, $`\mathrm{n}_0`$, the fractional ionization and $`\xi `$ to be obtained. Fig. 6 (lower curves) shows that $`\xi `$ is high close to the black hole, but falls to $`<`$ 100 at distances greater than 0.3 of the binary separation; thus the Strömgren region where radiative acceleration is suppressed does not extend down to the surface of the companion. In view of this, we also derived $`\xi `$ curves using a CAK law assuming no ionization of the wind, as shown in the upper curves of Fig. 6. This assumed that the velocity v(R) increases as $`\mathrm{v}_{\mathrm{\infty }}(1-\mathrm{R}_{*}/\mathrm{R})^\alpha `$ where $`\mathrm{v}_{\mathrm{\infty }}`$ = 2100 km s<sup>-1</sup>, $`\mathrm{R}_{*}`$ = 17.0 R$`_{\odot }`$, $`\dot{\mathrm{M}}`$ = $`3.0\times 10^{-6}`$ M$`_{\odot }`$ yr<sup>-1</sup> (Herrero et al. 1995) and $`\alpha `$ is 0.5. In this case, $`\xi `$ is greater than 1000 even at distances of several binary separations from the black hole. With these two extreme models $`\mathrm{N}_\mathrm{H}(\varphi )`$ was obtained by integrating $`\mathrm{n}_0`$ along lines of sight to the X-ray source for a range of orbital phases $`\varphi `$ between 0 and 0.5, for an inclination angle of 35° and a binary separation of 40.2 R$`_{\odot }`$ (Herrero et al. 1995).
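The chain of estimates above (wind density from mass continuity, a quadratic for $`\mathrm{n}_\mathrm{e}`$ from the photoionization–recombination balance, then $`\xi `$) is straightforward to reproduce numerically. The sketch below uses the parameter values quoted in the text; the single 1-keV cross-section is a crude stand-in for the Morrison & McCammon tables, and $`\mathrm{n}_\gamma `$ is treated as a photon number density (the flux divided by c), so the numbers are order-of-magnitude only.

```python
import numpy as np

# Constants (cgs)
M_SUN, YEAR = 1.989e33, 3.156e7
M_H, C_LIGHT, KEV = 1.673e-24, 3.0e10, 1.602e-9

# Parameters quoted in the text
MDOT = 3.0e-6 * M_SUN / YEAR    # mass-loss rate (g/s)
V_DRIFT = 30e5                  # constant drift velocity (cm/s)
L_X = 2e37                      # Low State luminosity (erg/s)
E_MEAN = 1.0 * KEV              # assumed mean photon energy
SIGMA = 2.4e-22                 # ~1 keV cross-section per H atom (assumption)
ALPHA = 2.0e-13                 # recombination coefficient at 10^4 K

def n_wind(R):
    """Total wind density n_w = Mdot / (4 pi m_H R^2 v), R measured from the star."""
    return MDOT / (4.0 * np.pi * M_H * R ** 2 * V_DRIFT)

def split_densities(R, r):
    """n_e and n_0 from n_e^2/n_0 = sigma*c*n_gamma/alpha with n_0 = n_w - n_e;
    R is distance from the star, r is distance from the black hole."""
    nw = n_wind(R)
    n_gamma = L_X / (4.0 * np.pi * r ** 2 * C_LIGHT * E_MEAN)  # photon number density
    K = SIGMA * C_LIGHT * n_gamma / ALPHA
    ne = 0.5 * (-K + np.sqrt(K * K + 4.0 * K * nw))            # positive root
    return ne, nw - ne

def xi(r, n):
    """Ionization parameter xi = L / (n r^2)."""
    return L_X / (n * r ** 2)

# Evaluate along the line of centres for a binary separation a = 40.2 R_sun
a = 40.2 * 6.96e10
for frac in (0.1, 0.3, 0.5):
    r, R = frac * a, (1.0 - frac) * a
    ne, n0 = split_densities(R, r)
    print(f"r = {frac:.1f} a: n_w = {ne + n0:.2e} cm^-3, xi = {xi(r, ne + n0):.0f}")
```

With these inputs $`\xi `$ falls from several hundred close to the black hole to a few tens beyond roughly 0.2–0.3 of the binary separation, qualitatively the behaviour of the lower curves in Fig. 6; $`\mathrm{N}_\mathrm{H}(\varphi )`$ then follows by integrating $`\mathrm{n}_0`$ along the inclined line of sight.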
It can be seen that the two models produce a similar variation with phase. The high values of $`\mathrm{N}_\mathrm{H}`$ from the drift model would be easily measurable from the X-ray spectra. The only observational evidence (Kitamoto et al. 1990) shows a variation of similar shape between 6 and $`20\times 10^{21}`$ H atom cm<sup>-2</sup>. The correct description clearly lies between our two extremes; here we concentrate on the relation between the dipping and the $`\mathrm{N}_\mathrm{H}`$ variation. In each case the variation is strong (a factor of 6 for the drift model and a factor of 10 in the other case), suggesting that blob formation, and thus dip formation, depends on the neutral density component in the wind.
## 4 Discussion
Modelling of the stellar wind of HDE 226868 should take into account two main effects: radiative driving of the wind and suppression of the driving force by photoionization due to the X-ray source. To calculate the column density appropriate to X-ray observations of Cygnus X-1 at various orbital phases would require three-dimensional calculations of the particle number density in the wind combined with calculations of the ionization parameter $`\xi `$. Suppression of the radiative driving force takes place for $`\xi `$ $`>`$ 100 (Blondin 1994). Detailed modelling using a two-dimensional hydrodynamic code was carried out for the wind density in luminous SXBs by Blondin (1994). In high luminosity systems such as Cen X-3 and SMC X-1, strong effects due to the X-ray source were seen, with a Strömgren region about the X-ray source in which radiative driving was suppressed. In Cen X-3, for L $`>`$ $`10^{37}`$ erg s<sup>-1</sup>, this region extended to all of the X-ray illuminated wind (i.e. the wind not in the shadow of the companion). Column densities were obtained by integrating the particle densities where $`\xi `$ was $`<`$ 2000, i.e. assuming that all material not fully ionized contributes to $`\mathrm{N}_\mathrm{H}`$. In our case, the X-ray source is bright even in the Low State, with L ∼ $`2\times 10^{37}`$ erg s<sup>-1</sup>, and it might be thought that strong suppression of radiative acceleration takes place. However, the binary separation of 40.2 $`\mathrm{R}_{\odot }`$ in the black hole binary is larger than the 19 $`\mathrm{R}_{\odot }`$ in Cen X-3. Thus the flux of the X-ray source is reduced by a factor of 4 compared with Cen X-3, so that simple scaling implies total suppression of radiative driving in the X-ray illuminated wind only for L $`>`$ $`4\times 10^{37}`$ erg s<sup>-1</sup>. This depends primarily on $`\xi `$, and our simple calculations show that a wind with totally suppressed driving force would have $`\xi `$ $`>`$ 100 only relatively close to the black hole.
Thus it is clear that detailed hydrodynamic simulations of Cygnus X-1 are required to delineate regions where radiative driving is suppressed. This would also reveal the details of the high density photoionization wake where normal wind in the X-ray shadow impacts on stalled wind (Fransson & Fabian 1980; Blondin 1994), and of the shadow wind which emerges from the X-ray shadow and may contribute to absorption (Blondin 1994).
We have shown that there is a strong dependence of the frequency of dipping in Cygnus X-1 on orbital phase, and that this correlates approximately with the variation of the column density of the neutral component of the stellar wind with phase, $`N_\mathrm{H}(\varphi )`$. This suggests that dipping is caused by blobs of largely neutral material, the formation of which may depend simply on the neutral density at any point in the wind. Our previous spectral fitting has shown that the column density is typically between 2 and $`20\times 10^{22}`$ H atom cm<sup>-2</sup> in dip spectra (Bałucińska-Church et al. 1997). For a blob diameter of $`10^9`$ cm (Kitamoto 1984) this gives densities of ∼ $`10^{12}`$–$`10^{13}`$ cm<sup>-3</sup>. Our estimates for the wind density have maximum values of total wind density between a few $`10^9`$ and $`10^{11}`$ cm<sup>-3</sup> for no suppression of radiative driving and total suppression, respectively. A more realistic typical value might be $`10^{10}`$ cm<sup>-3</sup>. Thus the blob density is greater than the ambient wind density by factors of 100–1000. In such a higher density region, $`\xi `$ will be reduced, so that a high value in the ambient wind of 1000 would be reduced to 1–10, such that the photoionizing effects of the X-ray source are markedly reduced. One possibility for blob formation is that neutral material in the wind can act as a nucleus for blob growth, since in the X-ray shadow of a small blob, photoionization will be reduced and recombination into the blob will be rapid. Other possibilities for blob formation also exist. For example, the interaction of normal wind with stalled wind can lead to a high density region trailing behind the compact object (Fransson & Fabian 1980; Blondin 1994). In this region, instabilities and density enhancements may form. However, this would not be expected to produce dipping having the basic symmetry about $`\varphi `$ ∼ 0.5, i.e. the line of centres, that we see.
Similarly, this would not be expected to account for the peak in dipping seen at $`\varphi `$ ∼ 0.6, as the high density region extends over a large range of angles with respect to the primary, whereas a stream produced by Roche-lobe overflow or tidal enhancement does not.
In Cygnus X-1, for the first time we find definite evidence for a stream from X-ray data in the enhanced number of dips at $`\varphi `$ ∼ 0.6, and the source appears to be similar to X 1700-371, in which it was concluded that a stream was the cause of enhanced absorption at phase 0.6 (HWK89). Petterson (1978) showed that HDE 226868 is filling its Roche lobe and will therefore produce a stream, which may be expected to produce absorption effects at $`\varphi `$ ∼ 0.6. Even if the star were only close to filling its Roche lobe, Blondin et al. (1991) have shown that a stream will still develop by tidal enhancement of the stellar wind caused by the compact object. In the neutron star systems modelled, the formation of a stream depends on the binary separation. However, a clear result of this work was that the stream is produced at phase $`\varphi `$ ∼ 0.6. In either case, a stream is produced at a phase similar to that found here. Blob formation in a stream would, of course, be easier than in the wind, because the density is already increased over the wind density, reducing $`\xi `$.
In summary, we have shown that the distribution of dipping with orbital phase correlates approximately with the variation with phase of the column density of the neutral component of the stellar wind of HDE 226868. There is, in addition, extra dipping at $`\varphi `$ ∼ 0.6. These effects resemble the asymmetry of absorption in the wind of Supergiant X-ray Binaries, suggesting that the formation of absorbing blobs depends on the neutral density, and thus reflects the $`N_\mathrm{H}(\varphi )`$ variation.
## Acknowledgments
This paper uses data made publicly available, and we thank the ASM/RXTE Team, including members at MIT and NASA/GSFC. We thank Dr. J. Lochner for useful discussions on the flux/count rate calibrations for the ASM. MBC and MJC thank the British Council and the Royal Society for financial support.
## 1 Introduction
Understanding the hierarchy problem of the Standard Model in four dimensions is a classic and important problem. Ideas for solving it have so far focused mainly on improving the particle physics sector around the TeV scale. Recently, however, motivated by studies of non-perturbative superstring theory and M-theory, a completely new proposal has been made in the gravity sector, in which the huge Planck scale is reduced all the way to the TeV scale by assuming the existence of extra dimensions .
In particular, Randall and Sundrum have considered a solution for the five-dimensional Einstein equation with the cosmological constant and two 3-branes . This solution involves a ”red-shift” (or ”warp”) factor in the metric tensor, which was found to play a critical role in explaining the vast disparity between the Planck scale and the electroweak scale in a natural way . Subsequently, they proposed an alternative scenario for trapping of gravity .
Here it is worth summarizing the basic setups and aims of the two papers, since they propose two distinct new scenarios in terms of the same solution as stated above. In both papers, two 3-branes, in other words domain walls, are placed at the boundaries of the fifth dimension, which are the fixed points of an orbifold $`S^1/Z_2`$, in the five-dimensional anti-de Sitter space-time . One 3-brane, with positive brane tension, is located at the origin of the fifth dimension, while the other 3-brane, with brane tension opposite to the first, is located at the other fixed point at some distance from the origin. Note that one has to make such a fine-tuning of brane tensions to obtain the Randall-Sundrum solution for the Einstein equation. (This issue will be argued in some detail around the end of section 2.)
In the first paper , our universe is assumed to be the 3-brane with negative tension, whereas the other 3-brane, with positive tension, is regarded as a hidden universe. If a natural scale on the hidden world is of order the Planck scale, the electroweak scale is generated on our universe from the red-shift factor by selecting a rather small size for the extra dimension. Thus, the purpose of the paper is to present a resolution of the hierarchy problem.
On the other hand, in the second paper , the setup is the converse of that in the first paper . That is, our universe is assumed to be the 3-brane with positive tension at the origin, whereas the other 3-brane, with negative tension, is regarded as a hidden universe. Moreover, the 3-brane with negative tension is moved to infinity, thus providing an example of a non-compact extra dimension. A remarkable thing here is that even without a mass gap between the massless graviton and the continuous Kaluza-Klein spectrum, the four-dimensional Newton’s law is reproduced to more than adequate precision on our universe, thereby implying the trapping of gravity on our universe. The reason why the 3-brane with positive tension is taken to be our universe is that the 3-brane with positive tension supplies the $`\delta `$-function potential with a negative coefficient supporting the massless graviton, which will be discussed in section 4. Of course, in this setup, we cannot solve the hierarchy problem.
From these considerations, it is natural to ask whether one can construct a model with the same geometry as the Randall-Sundrum metric solution such that the problems of mass hierarchy and of trapping of gravity on our 3-brane universe are simultaneously solved. Indeed, in a recent work , Lykken and Randall have explored such a possibility by considering a model including only 3-branes with positive tension. Their setup is similar to that in the second paper , but is distinct in that it includes more than one positive tension 3-brane. But it is unclear, at least to the present author, whether the introduction of many 3-branes with positive tension in addition to a single 3-brane with negative tension (”regulator brane”) is consistent with the Einstein equation. One of the motivations of this paper is to show explicitly that the above setup is indeed compatible with the Einstein equation. To do so, however, at the outset we need to introduce into the model the same number of negative tension 3-branes as positive tension 3-branes, and afterwards take a suitable limit in order to move the negative tension 3-branes to infinity. Of course, in the extreme case where all the negative tension 3-branes are coincident, our model would become equivalent to that of Lykken and Randall.
The other motivations behind the study at hand are as follows. In this paper, we make use of solutions of the Einstein equation which were found in our previous work . As stated there, these new solutions describe many domain walls standing along the fifth dimension, with topology $`S^1`$, in five-dimensional anti-de Sitter space-time, so they realize a many-universe cosmology. But these solutions involve the same number of positive tension 3-branes as negative tension 3-branes, so the total number of 3-branes is always even. It is quite unfair and against democracy that only an even number of universes can exist in such a model. It will be shown later that we can construct a more plausible model with any number of positive tension D3-branes by moving the negative tension O3-planes to infinity.
Another important motivation of this study is relevant to a recent interesting work by Dienes et al . In their paper, some phenomenological difficulties associated with the Randall-Sundrum scenario and its extension were pointed out, and possible resolutions to these puzzles were speculated upon. Specifically, they concluded that in the Randall-Sundrum model we cannot simultaneously generate the Planck/electroweak hierarchy (mass hierarchy) $`\mathrm{𝑎𝑛𝑑}`$ explain gauge coupling unification. This conclusion is physically sensible, of course, since in the original Randall-Sundrum model there exists only a single red-shift factor, depending on the relative distance between the two 3-branes. This red-shift factor operates universally on all mass scales in our universe, so it is difficult to generate two mass scales, such as the electroweak scale and the GUT scale, on our universe without recourse to additional mechanisms. One interesting resolution of this puzzle is to consider a model with at least three universes, where one D3-brane is regarded as our universe and the remaining two D3-branes are taken to be hidden universes. As there are two red-shift factors in this model, one could explain the Planck/electroweak hierarchy as well as the existence of the GUT scale on our universe at the same time .
The organization of the paper is as follows: in section 2 we review our previous study , presenting the new solutions of the Einstein equation. Moreover, we explain in detail why it is difficult to construct a model with only positive tension branes. In section 3, based on the solutions reviewed in section 2, we construct a model with two positive tension 3-branes in a concrete way. The generalization to an arbitrary number of positive tension branes is also discussed. In section 4, we investigate mass hierarchy and trapping of gravity along similar lines of argument to the previous works . The final section is devoted to discussions and future directions of this work.
## 2 Review of many domain wall model
We begin by briefly reviewing a model of Ref.. This will enable us to establish our notations and conventions and explain why it is difficult to construct a model consisting of only D3-branes with positive tension.
Our starting action is the Einstein-Hilbert action with the cosmological constant in five dimensions plus an action describing many domain walls in four dimensions:
$`S={\displaystyle \frac{1}{2\kappa ^2}}{\displaystyle \int d^4x\int _0^{2L}𝑑z\sqrt{-g}\left(R-2\mathrm{\Lambda }\right)}+{\displaystyle \sum _{i=1}^{n}}{\displaystyle \int _{z=L_i}}d^4x\sqrt{-g_i}\mathcal{L}_i,`$ (1)
where the cosmological constant $`\mathrm{\Lambda }`$ is taken to be a negative number, which implies that the geometry of the five-dimensional bulk is anti-de Sitter space-time. The fifth dimension $`z`$ is assumed to be compact with length $`2L`$, but is later regarded as effectively non-compact by taking the limit $`L\rightarrow \mathrm{\infty }`$. Moreover, $`\kappa `$ denotes the five-dimensional gravitational constant, with $`\kappa ^2=8\pi G_N=\frac{8\pi }{M_{*}^3}`$, where $`G_N`$ and $`M_{*}`$ are the five-dimensional Newton constant and the five-dimensional Planck scale, respectively. Throughout this article we follow the standard conventions and notations of the textbook of Misner, Thorne and Wheeler . Note one important distinction between our model (1) and the Randall and Sundrum model . In the Randall and Sundrum model the geometry of the fifth dimension is taken to be the singular orbifold $`S^1/Z_2`$ , whereas in our model the topology of the fifth dimension is a circle $`S^1`$, because the existence of a many domain wall solution of the Einstein equation requires us to choose this smooth manifold.
Variation of the action (1) with respect to the five-dimensional metric tensor leads to the Einstein equation:
$`\sqrt{-g}\left(R^{MN}-{\displaystyle \frac{1}{2}}g^{MN}R\right)=-\sqrt{-g}g^{MN}\mathrm{\Lambda }+\kappa ^2{\displaystyle \sum _{i=1}^{n}}\sqrt{-g_i}g_i^{\mu \nu }\delta _\mu ^M\delta _\nu ^N\mathcal{L}_i\delta (z-L_i),`$ (2)
where $`M,N,\mathrm{}`$ denote five-dimensional indices, whereas $`\mu ,\nu ,\mathrm{}`$ do four-dimensional ones. Provided that we adopt a metric ansatz
$`ds^2`$ $`=`$ $`g_{MN}dx^Mdx^N`$ (3)
$`=`$ $`u(z)^2\eta _{\mu \nu }dx^\mu dx^\nu +dz^2,`$
with $`\eta _{\mu \nu }`$ denoting the four-dimensional Minkowski metric, the Einstein equation (2) reduces to two combined differential equations for the unknown function $`u(z)`$:
$`\left({\displaystyle \frac{u^{\prime }}{u}}\right)^2=k^2,`$ (4)
$`{\displaystyle \frac{u^{\prime \prime }}{u}}=k^2-{\displaystyle \frac{\kappa ^2}{3}}{\displaystyle \sum _{i=1}^{n}}\mathcal{L}_i\delta (z-L_i),`$ (5)
where the prime denotes differentiation with respect to $`z`$ and we have defined $`k`$ as
$`k=\sqrt{-{\displaystyle \frac{\mathrm{\Lambda }}{6}}}.`$ (6)
In the previous paper , we sought special solutions of the form
$`u(z)=e^{-kf(z)},`$ (7)
and we presented two distinct solutions with simple and manageable forms, although other, more complicated solutions could also be constructed. These solutions describe an even number of domain walls standing at intervals along $`S^1`$ in five-dimensional anti-de Sitter space-time. One solution describes $`\frac{n-1}{2}`$ domain walls located at the points $`L_{2i}`$ and is concretely given by
$`f(z)`$ $`=`$ $`|z|+{\displaystyle \sum _{i=1}^{\frac{n-1}{2}}}(-1)^i|z-L_{2i}|-L,`$
$`f^{\prime }(z)`$ $`=`$ $`{\displaystyle \sum _{i=1}^{\frac{n-1}{2}}}(-1)^i\epsilon (z-L_{2i})+1,`$
$`f^{\prime \prime }(z)`$ $`=`$ $`2{\displaystyle \sum _{i=1}^{\frac{n-1}{2}}}(-1)^i\delta (z-L_{2i}),`$ (8)
for which $`\mathcal{L}_{2i}(i=1,2,\mathrm{\dots },\frac{n-1}{2})`$ must satisfy the relations
$`\mathcal{L}_{2i}=(-1)^{i+1}{\displaystyle \frac{6k}{\kappa ^2}}.`$ (9)
The other solution describes $`n-1`$ domain walls located at $`L_i(i=1,2,\mathrm{\dots },n-1)`$ and takes the form
$`f(z)`$ $`=`$ $`{\displaystyle \sum _{i=2}^{n-1}}(-1)^{i+1}|z-L_i|+L,`$
$`f^{\prime }(z)`$ $`=`$ $`{\displaystyle \sum _{i=1}^{n-1}}(-1)^{i+1}\epsilon (z-L_i)-1,`$
$`f^{\prime \prime }(z)`$ $`=`$ $`2{\displaystyle \sum _{i=1}^{n-1}}(-1)^{i+1}\delta (z-L_i),`$ (10)
for which, this time, $`\mathcal{L}_i(i=1,2,\mathrm{\dots },\frac{n-1}{2})`$ must satisfy the relations
$`-\mathcal{L}_{2i}=\mathcal{L}_{2i-1}={\displaystyle \frac{6k}{\kappa ^2}}.`$ (11)
Moreover, in the both solutions, $`L_i`$ satisfies the relations
$`L_{2i}={\displaystyle \frac{L_{2i-1}+L_{2i+1}}{2}},L_1\equiv 0,L_n\equiv 2L,`$ (12)
with $`i=1,2,\mathrm{\dots },\frac{n-1}{2}`$.
To close this section, it is worthwhile to argue why it is difficult to construct a model consisting only of D3-branes with positive tension. This problem is indeed closely related to various important problems associated with the models under consideration. Before doing that, let us recall that in the original Randall-Sundrum model , there are two 3-branes with opposite signs of brane tension at the boundaries of an orbifold $`S^1/Z_2`$. Also in our model, there are the same number of 3-branes with positive tension and negative tension on a circle $`S^1`$. It was stressed in Ref. that the necessity for negative energy objects might be one of the disadvantages of these setups, since not only is it believed that the Standard Model is placed on a D3-brane, which is certainly a positive energy object, but some problematic facts have also been pointed out in the cosmological context. For instance, a Friedmann-like expanding universe does not arise if the Standard Model is placed on the 3-brane with negative brane tension . Actually, in Ref., the Standard Model was located on such a negative energy 3-brane in order to explain the exponential mass hierarchy.
Keeping these facts in mind, let us now turn to our question: ”Why is it difficult to construct a model consisting of only many D3-branes with positive tension?” The answer can easily be found from considerations of electromagnetics in a compact space. It is well known that we cannot put a single point charge in a $`\mathrm{𝑐𝑜𝑚𝑝𝑎𝑐𝑡}`$ space, since the electric flux lines have no place to go in a compact space. In other words, the field equation does not admit the existence of such a configuration. To remedy this, the simplest way is to introduce another point charge of the same size but opposite sign to the first, so that the flux lines close exactly. The other simple way is to consider a non-compact space, obtained by taking the limit of infinitely large size. In a non-compact space, the flux lines arising from a single point charge can run to infinity, so we could have a consistent model with any number and configuration of 3-branes. In fact, this latter procedure has recently been taken by Lykken and Randall , at least implicitly, in order to make a model with only positive tension 3-branes.
Incidentally, it is of interest to observe that a similar situation also occurs in the context of D-brane theory . The D-brane, unlike the fundamental string, carries positive R-R charge. We cannot therefore put a single (or many) D-brane(s) in a compact region, owing to the non-zero total R-R charge. To cancel the R-R charge exactly, we are led to introduce objects which carry negative R-R charge of the same size. They are nothing but orientifolds! From this analogy, it is interesting to regard the 3-branes with positive and negative tension as D3-branes and O3-planes, respectively. We think that this analogy should be pursued further in the future, in order to understand a long-standing problem, the cosmological constant problem.
## 3 A model with positive tension D3-branes
In this section, on the basis of the solutions given in the previous section, we present a concrete model which simultaneously realizes mass hierarchy and trapping of gravity on our universe (the ”visible 3-brane”). The essential aspects of the two phenomena are very similar in the two distinct solutions, Eqs.(8) and (10), so we limit ourselves to solutions of the type of Eq.(10) in this paper. Moreover, in this section and the next, we first give a model in the case of two positive tension 3-branes and then extend it to the more general case of an arbitrary number of positive tension 3-branes.
Let us start with a model with two positive tension branes and two negative tension branes, which is a specific example ($`n=5`$) of the solution (10):
$`f(z)`$ $`=`$ $`-|z-L_2|+|z-L_3|-|z-L_4|+L,`$
$`f^{\prime }(z)`$ $`=`$ $`\epsilon (z)-\epsilon (z-L_2)+\epsilon (z-L_3)-\epsilon (z-L_4)-1,`$
$`f^{\prime \prime }(z)`$ $`=`$ $`2\left[\delta (z)-\delta (z-L_2)+\delta (z-L_3)-\delta (z-L_4)\right].`$ (13)
Of course, $`\mathcal{L}_i`$ must satisfy the relation (11) specialized to the $`n=5`$ case:
$`\mathcal{L}_1=-\mathcal{L}_2=\mathcal{L}_3=-\mathcal{L}_4={\displaystyle \frac{6k}{\kappa ^2}}.`$ (14)
Here, instead of Eq.(12) we require slightly modified relations for $`L_i`$:
$`L=L_2-L_3+L_4,L_1\equiv 0,L_n\equiv 2L,L_3>2L_2.`$ (15)
The reason is that, as mentioned in the paper , the solution (10) with the relation (12) has the characteristic feature $`f(L_{2i-1})=0`$ (or equivalently, $`u(L_{2i-1})=1`$), which is an undesired feature for explaining mass hierarchy. Thus, in order to avoid this feature, we have imposed the relation (15), specifically $`L_3>2L_2`$, on the solution (13). Indeed, as shown in the next section, this type of solution has the desired features with respect to both mass hierarchy and trapping of gravity. Note that the positive tension domain walls are located at $`z=L_1\equiv 0,L_3`$ while the negative tension domain walls are at $`z=L_2,L_4`$, as seen in Eq.(14). For comparison with Ref., according to their terminology, we may call the positive tension 3-branes at $`z=L_1\equiv 0`$ and at $`z=L_3`$ the ”TeV brane” and the ”Planck brane”, respectively <sup>2</sup><sup>2</sup>2Our setup is different from that of Ref. where the ”Planck brane” and the ”TeV brane” are put at $`z=L_1\equiv 0`$ and at $`z=L_3`$, respectively. One can modify the present setup to coincide with their setup without any difficulty.. It might appear difficult to move only the two negative tension branes to infinity without changing the essential contents of the model, since the negative and the positive tension branes are adjacent in the solution (13). But the analysis in the next section reveals that this procedure can be carried out by taking the limit $`L_2,2L-L_4\rightarrow \mathrm{\infty }`$ while keeping $`L_3-2L_2`$ finite. In this way, we can construct a model consisting of only two positive tension domain walls separated at some distance along a non-compact fifth dimension.
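The bookkeeping behind these statements is easy to check numerically: with $`L=L_2-L_3+L_4`$, the exponent f of the solution (13) vanishes at $`z=0`$, takes the finite value $`-(L_3-2L_2)`$ at $`z=L_3`$, and equals $`L_2`$ and $`2L-L_4`$ at the negative tension branes, so it grows without bound in the limit just described. A small sketch with illustrative brane positions (the sign conventions are those adopted here for Eq. (13)):

```python
import numpy as np

def f(z, L2, L3, L4):
    """Piecewise-linear exponent of the n = 5 solution, Eq. (13),
    with the constant fixed by L = L2 - L3 + L4 (relation (15))."""
    L = L2 - L3 + L4
    return -np.abs(z - L2) + np.abs(z - L3) - np.abs(z - L4) + L

# Illustrative positions obeying L3 > 2*L2 (here L = 20, so L4 < 2L)
L2_, L3_, L4_ = 3.0, 10.0, 27.0

print(f(0.0, L2_, L3_, L4_))    # 0.0: the exponent vanishes on the "TeV brane"
print(f(L3_, L2_, L3_, L4_))    # -4.0 = -(L3 - 2*L2), the hierarchy exponent
print(f(L2_, L2_, L3_, L4_))    # 3.0 = L2: diverges as the negative brane recedes

# |f'| = 1 between the branes (pure anti-de Sitter slices), as Eq. (4) requires
z = np.linspace(0.5, 26.5, 1001)
slope = np.gradient(f(z, L2_, L3_, L4_), z)
print(np.mean(np.abs(np.abs(slope) - 1.0) < 1e-6) > 0.98)   # True away from kinks
```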
Next, let us present a model containing a general number $`\frac{n-1}{2}`$ of positive tension domain walls, which is a straightforward generalization of the model with two positive tension domain walls. This model is given by Eq.(10) satisfying Eq.(11), but with modified relations among the $`L_i`$ compared to Eq.(12), together with a limiting procedure. For convenience, let us write down this general model with $`\frac{n-1}{2}`$ positive tension D3-branes explicitly:
$`f(z)`$ $`=`$ $`{\displaystyle \underset{i=2}{\overset{n-1}{\sum }}}(-1)^{i+1}|z-L_i|+L,`$
$`f^{}(z)`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{n-1}{\sum }}}(-1)^{i+1}\epsilon (z-L_i)-1,`$
$`f^{\prime \prime }(z)`$ $`=`$ $`2{\displaystyle \underset{i=1}{\overset{n-1}{\sum }}}(-1)^{i+1}\delta (z-L_i),`$
$`\mathrm{\Lambda }_{2i}`$ $`=`$ $`-\mathrm{\Lambda }_{2i-1}=-{\displaystyle \frac{6k}{\kappa ^2}},`$
$`L`$ $`=`$ $`{\displaystyle \underset{i=2}{\overset{n-1}{\sum }}}(-1)^iL_i,L_1\equiv 0,L_n\equiv 2L,`$ (16)
with $`i=1,2,\mathrm{},\frac{n-1}{2}`$. The limiting procedure is given by
$`f(L_{2i})\to +\infty ,`$ (17)
while keeping $`f(L_{2i-1})`$ at finite, negative values.
## 4 Exponential mass hierarchy and Newton’s law
In the previous section, we have presented a concrete model, so we are now ready to consider how this model resolves the problems of the mass hierarchy and the trapping of gravity. As in the previous section, let us start with the model with two positive tension domain walls. Following the formula given in Ref. , it is straightforward to evaluate the mass scale $`m(0)`$ on the ”TeV brane” located at $`z=L_1\equiv 0`$ from the mass scale $`m(L_3)`$ on the ”Planck brane” located at $`z=L_3`$, to which the Planck mass scale is allocated. The result is of the form
$`m(0)=e^{-k(L_3-2L_2)}m(L_3).`$ (18)
As stated in the previous section, since we have taken $`L_3-2L_2`$ to be a positive and finite value, we can resolve the mass hierarchy if $`L_3-2L_2`$ is of order $`10`$ (in units of $`1/k`$). In other words, the mass scale in our universe (the ”TeV brane”) is of order the electroweak scale, thanks to the red-shift factor in the geometry, when the Planck mass scale is assigned to another positive tension domain wall (the ”Planck brane”) at $`z=L_3`$ and $`L_3-2L_2`$ is of order $`10`$.
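To get a feel for the numbers in Eq. (18), a minimal sketch (all values hypothetical) shows the size of the red-shift factor needed to bridge the Planck and electroweak scales:

```python
import math

# Illustration of Eq. (18): m(0) = exp(-k*(L3 - 2*L2)) * m(L3).
# Hypothetical numbers: with k*(L3 - 2*L2) ~ 37 the Planck scale assigned
# to the z = L3 brane red-shifts down to the TeV range on the z = 0 brane.
k_dL = 37.0                  # k*(L3 - 2*L2), dimensionless
m_planck = 1.2e19            # GeV, mass scale on the "Planck brane"
m0 = math.exp(-k_dL) * m_planck
print(f"m(0) ~ {m0:.3g} GeV")   # roughly a TeV
```

A suppression of sixteen orders of magnitude thus corresponds to $`k(L_3-2L_2)\simeq 37`$, i.e. a separation of a few tens in units of $`1/k`$.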
Next, let us turn our attention to the problem of trapping of gravity on the ”TeV brane”. To do so, we consider the linearized approximation of the metric tensor, examine small fluctuations $`h_{\mu \nu }`$ around the four-dimensional Minkowski metric on the brane, and determine the graviton spectrum as well as the Kaluza-Klein spectrum. Accordingly, we assume the form
$`ds^2`$ $`=`$ $`g_{MN}dx^Mdx^N`$ (19)
$`=`$ $`e^{-2kf(z)}\left(\eta _{\mu \nu }+h_{\mu \nu }(x,z)\right)dx^\mu dx^\nu +dz^2.`$
Then, with the gauge conditions $`h_\mu ^\mu =\partial ^\mu h_{\mu \nu }=0`$, up to the leading order of $`h_{\mu \nu }`$, the field equation is of the form
$`-{\displaystyle \frac{1}{2}}e^{2kf(z)}\mathrm{}h_{\mu \nu }-{\displaystyle \frac{1}{2}}h_{\mu \nu }^{\prime \prime }+2k^2h_{\mu \nu }-kf^{\prime \prime }(z)h_{\mu \nu }=0,`$ (20)
where $`\mathrm{}`$ denotes the flat space-time four-dimensional Laplacian operator, and $`f(z)`$ is defined as in Eq.(13). At this stage, if we consider the plane wave fluctuations
$`h_{\mu \nu }(x,z)=e^{ip_\mu x^\mu }h_{\mu \nu }(z),p_\mu ^2=-m^2,`$ (21)
Eq.(20) reduces to the differential equation for $`h_{\mu \nu }(z)`$
$`\left[-{\displaystyle \frac{m^2}{2}}e^{2kf(z)}-{\displaystyle \frac{1}{2}}\partial _z^2+2k^2-kf^{\prime \prime }(z)\right]h_{\mu \nu }(z)=0.`$ (22)
It is more convenient to rewrite Eq.(22) as a one-dimensional Schrödinger equation by making the change of variables $`y=\frac{1}{k}e^{kf(z)}`$:
$`\left[-{\displaystyle \frac{1}{2}}\partial _y^2+V(y)\right]\mathrm{\Psi }(y)={\displaystyle \frac{m^2}{2}}\mathrm{\Psi }(y),`$ (23)
where we have defined
$`h_{\mu \nu }(z)\equiv k^{\frac{1}{2}}y^{-\frac{1}{2}}\mathrm{\Psi }_{\mu \nu }(y),\mathrm{\Psi }_{\mu \nu }(y)\equiv \mathrm{\Psi }(y).`$ (24)
The one-dimensional potential is now given by
$`V(y)={\displaystyle \frac{15}{8}}{\displaystyle \frac{1}{y^2}}-{\displaystyle \frac{2}{y}}\left[\delta (y-y_1)-\delta (y-y_2)+\delta (y-y_3)-\delta (y-y_4)\right],`$ (25)
where $`y_i(i=1,2,3,4)`$ are of the form
$`y_1`$ $`=`$ $`{\displaystyle \frac{1}{k}},`$
$`y_2`$ $`=`$ $`{\displaystyle \frac{1}{k}}e^{kL_2},`$
$`y_3`$ $`=`$ $`{\displaystyle \frac{1}{k}}e^{-k(L_3-2L_2)},`$
$`y_4`$ $`=`$ $`{\displaystyle \frac{1}{k}}e^{k(2L-L_4)},`$ (26)
thus we have a relation
$`0<y_3<y_1<y_2<y_4,`$ (27)
where, without loss of generality, we have assumed $`y_2<y_4`$. Here an interesting thing has happened owing to Eq.(15). Namely, the change of variables from $`z`$ to $`y`$ has caused a rearrangement of the positions of the domain walls: the two positive tension 3-branes are now located to the left of the two negative tension 3-branes, so we can move the two negative tension 3-branes to infinity by letting $`L_2`$ and $`2L-L_4`$ tend to infinity while keeping $`L_3-2L_2`$ finite. Note that the two positive tension 3-branes stay at the same points in this limit.
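The rearrangement can be made concrete with a small numerical sketch of the closed forms (26) (illustrative values for $`k`$ and the $`L_i`$, chosen so that $`L_3>2L_2`$ and $`y_2<y_4`$):

```python
import math

# Brane positions in the y coordinate, y = (1/k) e^{k f(z)}, from Eq. (26).
# Values are illustrative; they satisfy L3 > 2*L2 as required by Eq. (15).
k, L2, L3, L4 = 1.0, 1.0, 5.0, 10.0
L = L2 - L3 + L4

y1 = 1.0 / k                              # positive-tension brane (z = 0)
y2 = math.exp(k * L2) / k                 # negative-tension brane
y3 = math.exp(-k * (L3 - 2 * L2)) / k     # positive-tension brane (z = L3)
y4 = math.exp(k * (2 * L - L4)) / k       # negative-tension brane

assert 0 < y3 < y1 < y2 < y4              # the ordering (27)
print(y3, y1, y2, y4)
# Letting L2 and 2L - L4 grow pushes y2 and y4 to infinity while y1 and y3
# stay fixed: the negative-tension branes decouple, as described in the text.
```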
The one-dimensional Schrödinger equation (23) yields much useful information about the graviton and the tower of Kaluza-Klein modes. Fortunately, the potential $`V(y)`$ has a form similar to that in Refs. , so that we can take over the results obtained there to the present case with an appropriate modification. First of all, the potential $`V(y)`$ has $`\delta `$-functions at $`y=y_1,y_3`$ with negative coefficients, which means that these $`\delta `$-functions support a normalizable bound state mode, which is of course nothing but the massless graviton. Incidentally, the ”regulator” branes with negative brane tension located at $`y=y_2,y_4`$, though they are moved to infinity, induce $`\delta `$-functions with positive coefficients in the potential $`V(y)`$, so these branes cannot support such a massless graviton. This is why, in Ref. , Randall and Sundrum regarded the 3-brane with positive tension as our universe for the trapping of the graviton on the brane.
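Away from the $`\delta `$-functions, the $`m=0`$ equation reduces to $`-\frac{1}{2}\mathrm{\Psi }^{\prime \prime }+\frac{15}{8y^2}\mathrm{\Psi }=0`$, whose two solutions behave as $`y^{-3/2}`$ (normalizable at large $`y`$; this is the graviton profile) and $`y^{5/2}`$. A quick finite-difference check:

```python
# Bulk (delta-functions neglected) zero-mode check for Eq. (23):
# -psi''/2 + (15/8) psi / y^2 = 0 has solutions psi ~ y^(-3/2) and y^(5/2).
def residual(psi, y, h=1e-4):
    """Finite-difference evaluation of -psi''/2 + (15/8) psi / y**2."""
    d2 = (psi(y + h) - 2.0 * psi(y) + psi(y - h)) / h**2
    return -0.5 * d2 + (15.0 / 8.0) * psi(y) / y**2

for psi in (lambda y: y**-1.5, lambda y: y**2.5):
    assert abs(residual(psi, 2.0)) < 1e-5
print("both y^(-3/2) and y^(5/2) solve the bulk m = 0 equation")
```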
Second, let us focus our attention on the properties of the continuous KK modes. Here, to leading approximation, we can safely neglect the existence of the $`\delta `$-functions in the potential $`V(y)`$. Then it turns out that the squared mass $`m^2`$ of the KK modes is positive definite, as desired. In the non-compact limit, the potential falls off to zero, so there is no mass gap between the massless graviton and the KK modes. At first sight, this could be a signal of danger, since a tower of KK modes would give measurable effects through the modification of Newton’s law, but as shown shortly, they give rise to only small corrections to Newton’s law . We can easily write down the general solution for the continuous KK modes as
$`\mathrm{\Psi }_m(y)\simeq {\displaystyle \frac{m^{\frac{5}{2}}}{k^2}}\sqrt{y}\left[Y_2(my)+{\displaystyle \frac{4k^2}{\pi m^2}}J_2(my)\right],`$ (28)
where $`Y_2`$ and $`J_2`$ denote the Bessel functions of order 2. From the above wave function, it is straightforward to evaluate the corrections to Newton’s law from the continuous KK modes . In fact, the gravitational potential between two masses $`m_1`$ and $`m_2`$ separated by a distance $`r`$ takes the form
$`U(r)=G_N{\displaystyle \frac{m_1m_2}{r}}+{\displaystyle \int _0^{\infty }}{\displaystyle \frac{dm}{k}}{\displaystyle \frac{m}{k}}G_N{\displaystyle \frac{m_1m_2e^{-mr}}{r}}=G_N{\displaystyle \frac{m_1m_2}{r}}\left(1+{\displaystyle \frac{1}{k^2r^2}}\right).`$ (29)
Accordingly, it turns out that an observer living on the ”TeV brane” (and also on the ”Planck brane”) sees gravity as essentially four-dimensional, since the corrections to Newton’s law from the KK modes are very small. (Note that $`k`$ is taken to be of order the Planck scale.)
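The $`1/(k^2r^2)`$ correction in Eq. (29) can be verified numerically. The sketch below assumes the Randall-Sundrum weighting of the KK continuum, in which the measure and couplings combine to a factor $`m/k^2`$ in the integrand, so the integral over $`m`$ of $`me^{-mr}`$ gives exactly $`1/r^2`$:

```python
import math

# Numerical check of the KK integral in Eq. (29), assuming the
# Randall-Sundrum weighting (an extra factor m/k relative to dm/k):
# integral_0^inf dm (m/k^2) e^(-m r) = 1/(k^2 r^2).
def kk_correction(k, r, n=100000, m_max_over_r=50.0):
    m_max = m_max_over_r / r            # e^(-50) tail is negligible
    h = m_max / n
    s = 0.0
    for i in range(n + 1):
        m = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * (m / k**2) * math.exp(-m * r)   # trapezoidal rule
    return s * h

k, r = 2.0, 3.0
assert abs(kk_correction(k, r) - 1.0 / (k * r) ** 2) < 1e-6
print("relative correction to Newton's law:", 1.0 / (k * r) ** 2)
```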
In the above, the case of two D3-branes with positive tension has been investigated in detail. The analysis of the general model, Eqs.(16) and (17), is quite straightforward and very similar to the two-brane case, so we will not repeat it in this paper. For instance, in the Schrödinger equation (23) the only modification lies in the form of the potential $`V(y)`$, where $`n-1`$ $`\delta `$-functions appear instead of 4. Hence we can again show the trapping of gravity on the D3-branes. As for the hierarchy problem, one can also show that the electroweak scale is generated on our universe when the Planck scale is assigned to a hidden universe and an appropriate value of the distance between the two universes is chosen. However, in this general model this is not the whole story, since there are many red-shift factors arising from the relative distances between our universe and the remaining hidden universes. This issue is worthy of further study and will be reported in detail in a separate publication .
## 5 Discussions
In this paper, we have investigated the problems of both the mass hierarchy and the trapping of gravity on our universe by using the solutions describing many domain walls which were previously found by the present author . Our model in general consists of any number of positive tension D3-branes in a non-compact extra dimension, and exhibits the exponential mass hierarchy and the trapping of gravity on our universe. In this sense, the study at hand shows that the scenario by Lykken and Randall is indeed realized.
Since various matter and gauge fields are also localized on our D3-brane universe in terms of the mechanism in string theory, our model satisfies the necessary conditions for a realistic model. Thus, future directions of this work would be to apply our model to other unsolved problems. In fact, we can easily point out that our model has the following advantages over the original Randall-Sundrum model. In the context of cosmology, since there exist only positive tension D3-branes in our model, the lack of a Friedmann-like expanding universe is certainly resolved . And, in the context of phenomenology, some of the puzzles stated in a recent study also seem to be resolved in terms of our general model including many D3-branes, since we have not a single but many ”red-shift” factors associated with the many domain walls. These problems will be reported in a separate publication .
# The LCO/Palomar 10,000 km s-1 Cluster Survey
## 1. Introduction
To understand the origins of the LP10K project, consider for a moment where the field of large-scale flows stood in late 1990. The “Seven Samurai” (7S) group (Lynden-Bell et al. 1988) had published their then-stunning result that elliptical galaxies in the nearby universe were streaming coherently in the direction of the Great Attractor (GA) at $`cz\sim 4000\mathrm{km}\mathrm{s}^{-1},`$ $`l\sim 300^{\circ },`$ $`b\sim 10^{\circ }.`$ The effective depth of the 7S survey, $`\sim 3000\mathrm{km}\mathrm{s}^{-1},`$ was too small to know for sure whether the motion was due simply to the gravitational pull of the GA, or part of a much larger-scale bulk flow. Hints that the latter might be the case came from the large TF study of Mathewson et al. (1992), who found galaxies in the GA itself to be outwardly streaming like the 7S ellipticals more nearby, and from my own TF sample of Perseus-Pisces spirals at $`cz\sim 5000\mathrm{km}\mathrm{s}^{-1},`$ which appeared to be infalling on the opposite side of the sky from the GA (Willick 1990). It certainly seemed plausible in 1990 that bulk flows might be coherent over much larger volumes than the $`\sim 5000\mathrm{km}\mathrm{s}^{-1}`$-radius sphere probed up to that time.
It was in this scientific atmosphere that I planned the LP10K study. It’s worth noting that I was unaware of the survey by Lauer and Postman of brightest cluster galaxies (BCGs) that was then nearing completion. Their work was, of course, soon to burst on the scene with their 1992 announcement of a high-amplitude ($`\sim 700\mathrm{km}\mathrm{s}^{-1}`$) bulk flow in which all Abell clusters out to the huge distance of 15,000 km s<sup>-1</sup> appeared to partake (Lauer & Postman 1994; LP). Subsequent to the announcement of the LP result, I often portrayed my survey as “testing Lauer-Postman” for simplicity, but in fact it was part of a larger picture with deeper roots.
The LP10K survey is described in two journal articles (Willick 1999ab, hereafter Papers I and II), to which the interested reader should refer for technical details. In this conference proceeding paper, I list some of the salient features of the survey, describe its main results, and discuss how it fits into the picture of cosmic flows that is taking shape as a result of the Cosmic Flows 1999 conference.
## 2. The LP10K Survey: Observations and Data Reduction
The outlines of the survey took shape in early 1991, when I was completing my PhD thesis. I recognized then that to test for bulk flow one needed full-sky data. This is not easy to come by for a postdoc, but I had the good fortune to become a Carnegie Postdoctoral Fellow. Carnegie’s unique resources were its ample access to Palomar Observatory telescopes<sup>1</sup><sup>1</sup>1Carnegie’s Palomar privileges ended in 1995, just as my survey was completed. for Northern Hemisphere objects, and to telescopes at the LCO for objects in the Southern sky.
My initial plan, which I more or less followed, was to observe galaxies in 15 Abell clusters distributed around the sky in a relatively narrow redshift range, $`9000\le cz\le 12,000\mathrm{km}\mathrm{s}^{-1}.`$ I aimed to get TF data for at least 15 galaxies in each cluster. The redshift slice was selected so as to focus on a depth where there was little or no extant data pertaining to cosmic flows. I searched the ACO catalog (Abell, Corwin, & Olowin 1989) for clusters in this redshift range, and came up with about 35. I then winnowed this list to 15 by randomly selecting an “isotropic” sample—that is, one that left no large portion of sky unsampled, and which did not include groups of clusters all in the same place on the sky. Figure 1 shows the positions of the LP10K clusters on the sky, coding the size of the TF samples and their median redshifts by the point type.
In each cluster I obtained $`R`$-band CCD imaging of 1–2 square degrees of sky centered on the cluster core. These moderately deep ($`R\stackrel{<}{\sim }21.5`$ mag) CCD frames were then analyzed using FOCAS, and all galaxies brighter than about $`m_R=17.5`$ were visually inspected for suitability as TF galaxies.
The skeptical reader might pause here and ask what “suitable” means in this context. Initially, it meant something rigorous such as “brighter than a certain apparent magnitude and more inclined than a certain minimum inclination.” I quickly learned that I could not afford to be so picky, however. If I wanted galaxies whose rotation would be detected, I needed to pick galaxies that were emitting a lot of H$`\alpha `$ radiation. The only way to ensure this was to choose objects primarily on morphological appearance: they had to look like they had a lot of star formation going on. Basically, this means well-defined spiral structure, generally late-type morphology, and some visual evidence of H II regions. Moreover, I couldn’t apply a single magnitude limit for all clusters; rather, I had to go much deeper in clusters with fewer bright, suitable spirals, in order to obtain a sufficient number of TF galaxies. This, I confess, is not a very rigorous selection procedure, but it was the one that had to be followed to make the program feasible.
Once the spirals were selected as described above, they were observed spectroscopically at the LCO 2.5 m or the Palomar 5 m telescopes. Long-slit spectra were acquired at about 2 Å resolution, in the portion of the spectrum containing H$`\alpha `$ out to $`z=0.1`$. Despite my efforts at preselecting objects that would exhibit H$`\alpha `$ emission, I suffered through many, many nondetections. This included objects with nuclear H$`\alpha `$ emission, fine for redshift purposes, but not extended enough to get a rotation velocity. This should serve as a cautionary warning to observers hoping to do deep, optical TF studies: hone your selection criteria in advance so you don’t waste too much large telescope time. Detectable, extended H$`\alpha `$ emission is by no means guaranteed from faint spirals.
Because I selected TF galaxies from new CCD imaging and not from a catalog, I didn’t know in advance what their redshifts would be, but I assumed they would usually be close to the nominal values of their parent clusters. This is where I got my second rude surprise (after nondetections): a large fraction ($`30`$–$`40\%`$) of the TF sample lay well in the background of the cluster. In the end, fully two-thirds of the galaxies with extended H$`\alpha `$ emission were found to have $`cz>15,000\mathrm{km}\mathrm{s}^{-1},`$ and a not insignificant number had $`cz\sim 30,000\mathrm{km}\mathrm{s}^{-1}.`$ I intend at a later date to explore the implications of these galaxies, which are perfectly good Tully-Fisher objects, for monopole distortions to the Hubble expansion, which is of considerable scientific interest. For the immediate goal of bulk flow on a 100 Mpc scale, of course, these objects were not especially useful. In Figure 1, the point type codes the median redshift of the TF sample. Note that the clusters with the greatest preponderance of high-redshift objects lie preferentially near the South Galactic Cap; this result merits further attention, as it could be indicative of very large-scale structure.
In the original survey plan, I hoped to obtain Fundamental Plane (FP) data for 15 ellipticals in each cluster to supplement, and provide an independent check on, the TF distances. Indeed, an auxiliary scientific goal of the survey was to use the spiral/elliptical cross-check as a test for environmental effects—significant E/S distance discrepancies that correlated with cluster properties would provide evidence for these. Unfortunately, the elliptical galaxy spectroscopic data are still very much in the reduction process, in the case of LCO ellipticals, which I could observe using the du Pont telescope multifiber spectrograph, and are still to be acquired in the case of Northern sky ellipticals. If the LP10K elliptical results ever come out, it will probably be via a healthy amount of merging with other data sets. In hindsight, I can see that including ellipticals in the survey plan was a proverbial case of “biting off more than I could chew.” I say this not for the purpose of public self-flagellation but as a cautionary tale for ambitious postdocs contemplating a massive observational program. In any case, readers should keep in mind that the results presented in Papers I and II, and here, are based on Tully-Fisher data only.
## 3. Formulating the Optical TF Relation
### 3.1. Defining the TF Rotation Velocity
A key question that the LP10K survey data addressed was, “Given that we have rotation curve (RC) data, $`V(R),`$ for each galaxy, what particular velocity $`V_{\mathrm{TF}}`$ is it that enters into the TF relation?”. While similar questions arose in the older, HI-based TF surveys, where one had to extract a width from an unresolved 21 cm profile, the problem then was largely algorithmic. With long-slit spectroscopy, however, we have resolved data, and the issue of “what rotation velocity enters into the TF relation” becomes a physically meaningful one.
For LP10K I took the following approach. First, the RCs were fitted using a two-paremeter functional form:
$$V(R)=\frac{2V_a}{\pi }\mathrm{tan}^1\left(\frac{R}{R_t}\right).$$
The arctangent form approximates the canonical S-shaped rotation curve, but can also represent the quasi-linear RCs, still rising at the outermost point, that are frequently encountered. It is useful to refer to $`V_a`$ as the “asymptotic” velocity, and to $`R_t`$ as the RC scale length or “turnover radius.” It is important to bear in mind, however, that the velocity $`V_a`$ is not necessarily reached by the sampled RC, and may in fact not be reached at all, so its name, while evocative, is not rigorous. Note also that for a quasi-linear RC, only the ratio $`V_a/R_t`$ is well-determined from the fit. Still, bearing these caveats in mind, the arctan fit proved quite adequate for all LP10K galaxies.
The photometric data also yields a scale length for the galaxy, viz., $`R_e,`$ in this case the effective exponential scale length (see Paper I for more detail on how this was computed; it was not from an exponential fit). With it, one can pose the question of “what is $`V_{\mathrm{TF}}`$” in this way: at how many exponential scale lengths should one evaluate the rotation curve $`V(R)`$ to get the best (lowest-scatter) TF relation? Let us parametrize this question by writing $`V_{\mathrm{TF}}=V(f_sR_e),`$ where the right side is evaluated from the arctan fit, and take $`f_s`$ as a free parameter. I like to write this in the following way:
$$V_{\mathrm{TF}}=\frac{2V_a}{\pi }\mathrm{tan}^1\left(\frac{f_s}{x_t}\right),$$
where $`x_t\equiv R_t/R_e`$ is a dimensionless shape parameter which measures the ratio of the dynamical to luminous scale lengths of the galaxy. Roughly speaking, $`x_t\ll 1`$ is a classical S-shaped RC, with the flat part well sampled, while $`x_t\stackrel{>}{\sim }1`$ is a quasi-linear RC.
We next define the velocity width parameter, in the usual way, $`\eta (f_s)=\mathrm{log}(2V_{\mathrm{TF}})-2.5,`$ and write the inverse TF relation $`\eta (f_s)=-e(M-D).`$ This is a three-parameter TF relation, with slope $`e`$ and zero point $`D`$ supplemented by $`f_s.`$ The values of all the parameters can be determined by maximum likelihood (see Papers I and II), with absolute magnitude given by a simple Hubble Flow model (in the CMB frame). It is illustrative to do this for a range of fixed values of $`f_s,`$ maximizing likelihood with respect to $`e`$ and $`D`$ at each value and obtaining a corresponding TF scatter and fit likelihood. The results of this exercise are presented in Figure 2. The plot shows that the “correct” value of $`f_s`$—i.e., the one that minimizes TF scatter—is quite tightly constrained at $`f_s\simeq 2.0.`$ In particular, one sees that taking $`f_s\gg 1,`$ corresponding to taking $`V_{\mathrm{TF}}=V_a,`$ leads to a poor TF relation. This is an important point, for it demonstrates that the canonical wisdom—that the TF relation involves the velocity on the flat part of the RC—is wrong. In Paper I a heuristic physical explanation for this effect is offered, but undoubtedly the correct explanation is much deeper, and I encourage theorists to delve more deeply into this issue.
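The logic of this scan is easy to reproduce on synthetic data. The toy sketch below (pure Python; all numbers are illustrative, not the LP10K fits) generates galaxies whose TF velocity is the arctan RC evaluated at $`f_s=2`$ scale lengths, then scans trial values of $`f_s`$; the inverse-TF residual scatter is minimized near the true value, mimicking the exercise behind Figure 2:

```python
import math, random

random.seed(1)
e_true, D_true, f_true = 0.12, -21.0, 2.0   # illustrative TF parameters
gals = []
for _ in range(400):
    M = random.uniform(-23.0, -19.0)                  # absolute magnitude
    x_t = random.uniform(0.3, 3.0)                    # shape parameter R_t/R_e
    eta = -e_true * (M - D_true) + random.gauss(0.0, 0.02)
    v_tf = 0.5 * 10.0 ** (eta + 2.5)                  # eta = log(2 V_TF) - 2.5
    v_a = v_tf * math.pi / (2.0 * math.atan(f_true / x_t))
    gals.append((M, x_t, v_a))

def scatter(f_s):
    """rms inverse-TF residual after a linear fit of eta(f_s) on M."""
    pts = [(M, math.log10(2.0 * (2.0 * va / math.pi) * math.atan(f_s / xt)) - 2.5)
           for M, xt, va in gals]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((M - mx) * (y - my) for M, y in pts)
             / sum((M - mx) ** 2 for M, _ in pts))
    zero = my - slope * mx
    return math.sqrt(sum((y - (zero + slope * M)) ** 2 for M, y in pts) / n)

best = min((i / 10.0 for i in range(5, 41)), key=scatter)
print("scatter-minimizing f_s:", best)   # lands near f_true = 2.0
```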
### 3.2. A Surface Brightness Dependence of the TF Relation
Another galaxy property which turned out to enter into the TF relation was the effective surface brightness $`I_e`$ (calculated via the same method that produced the scale length $`R_e`$—see Paper I for details). Specifically, we can write the TF relation
$$\eta (f_s)=-e(M-D)-\alpha \mu _e,$$
where $`\mu _e`$ is the magnitude equivalent of $`I_e.`$ When this was done, a small but statistically significant reduction in the TF scatter was found; the coefficient was found to be $`\alpha \simeq 0.05,`$ which when combined with $`e\simeq 0.12`$ leads to a power-law scaling relation of the form $`V_{\mathrm{TF}}\propto I_e^{0.13}L^{0.3}.`$ This relation is reminiscent of the FP relations for elliptical galaxies. The surface brightness dependence of the TF relation has significant implications for galaxy structure.
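For the record, the quoted exponents follow from simple arithmetic, assuming the sign conventions above and the magnitude definitions $`M=\mathrm{const}-2.5\mathrm{log}L`$ and $`\mu _e=\mathrm{const}-2.5\mathrm{log}I_e`$:

```python
# eta ~ log10 V_TF, M = const - 2.5 log10 L, mu_e = const - 2.5 log10 I_e,
# so eta = -e(M - D) - alpha*mu_e implies V_TF ∝ L^(2.5 e) * I_e^(2.5 alpha).
e, alpha = 0.12, 0.05
print("L exponent:", 2.5 * e, " I_e exponent:", 2.5 * alpha)  # 0.3 and 0.125
```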
It is only fair to note here that my claim of an SB dependence of the TF relation is not universally accepted. Riccardo Giovanelli made the very good point at the conference that by introducing a scale length into the definition of velocity width, I may have induced a spurious SB dependence of the TF relation. My Shellflow colleague Stéphane Courteau has argued that the particular method I employ for determining $`R_e`$ and $`I_e`$ from the surface brightness profiles could produce the dependence, whereas in a more standard approach the dependence would not show up. As of this writing (8/31/99) these possibilities have not been fully investigated. I hope to study this issue more closely in the coming months with the Shellflow data, and in the meantime I encourage input from members of the community who have done such experiments with their own data.
### 3.3. The Intrinsic TF Scatter
The likelihood fits used in Papers I and II made it possible to constrain the individual contributions to the overall TF scatter: measurement errors and intrinsic scatter. Although the rotation velocity measurement errors were not well determined a priori, they can be separated from the intrinsic scatter because the TF error they induce scales as $`\delta \eta \propto \delta v/v.`$ (Photometric errors are fairly well determined, and are small in any case.) Thus, if one assumes a fixed velocity measurement error, the overall $`\eta `$ error scales as $`V_{\mathrm{TF}}^{-1}.`$ Using this model, I found that (i) the characteristic rotation velocity measurement error was $`\sim 17\mathrm{km}\mathrm{s}^{-1},`$ a reasonable value consistent with repeat observation comparisons; (ii) the observed decrease of TF scatter with increasing luminosity can be fully accounted for in this way—i.e., we are not required to assume the intrinsic TF scatter decreases with increasing luminosity—and (iii) the intrinsic TF scatter itself is $`0.28\pm 0.07`$ mag. This value is consistent with what was found by Willick et al. (1996) via an independent approach, and is also consistent with findings from the Shellflow survey (Courteau et al., this volume). Reproducing this level of scatter is an important challenge for galaxy formation theory.
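The error model can be sketched in a few lines. A fixed velocity error $`\delta v`$ maps into an $`\eta `$ error $`\delta v/(V_{\mathrm{TF}}\mathrm{ln}10)`$, which shrinks for fast rotators, while the intrinsic piece does not; the conversion of the intrinsic magnitude scatter with the inverse slope below is my own illustrative assumption:

```python
import math

delta_v = 17.0           # km/s, characteristic velocity measurement error
sigma_int_mag = 0.28     # intrinsic TF scatter, magnitudes
e = 0.12                 # illustrative inverse-TF slope for the conversion
sigma_int_eta = e * sigma_int_mag

for v in (80.0, 150.0, 300.0):                  # rotation velocities, km/s
    sigma_meas = delta_v / (v * math.log(10))   # d(log10 2V) = dV / (V ln 10)
    total = math.hypot(sigma_int_eta, sigma_meas)
    print(v, round(sigma_meas, 4), round(total, 4))
# the measurement term dominates for slow rotators (faint galaxies) and
# becomes subdominant for fast rotators, reproducing the observed trend
# of decreasing scatter with luminosity without a luminosity-dependent
# intrinsic term
```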
## 4. On Bulk Flows
To determine the best-fitting bulk flow vector $`\stackrel{}{v},`$ I calculated absolute magnitude as $`M=m-5\mathrm{log}[(cz-\stackrel{}{v}\cdot \widehat{𝐧})/H_0]-25,`$ and then maximized the TF likelihood as before. Paper II gives many details about these fits, but the upshot is that the derived flow vector has an amplitude of $`700\pm 300\mathrm{km}\mathrm{s}^{-1}`$ in the direction quoted in the Abstract. The relevant question now is, what should we make of this? As those who heard my talk at the conference, or heard about it, already know, I no longer consider this result to be “correct.”
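The flow vector enters the fit only through this absolute-magnitude formula. A minimal sketch (illustrative numbers; $`\widehat{𝐧}`$ is the unit vector toward the galaxy, and the $`H_0`$ value is purely illustrative):

```python
import math

def abs_mag(m, cz, n_hat, v_flow, H0=70.0):
    """M = m - 5 log10[(cz - v.n)/H0] - 25, with cz and v in km/s, d in Mpc."""
    v_dot_n = sum(a * b for a, b in zip(v_flow, n_hat))
    return m - 5.0 * math.log10((cz - v_dot_n) / H0) - 25.0

n_hat = (1.0, 0.0, 0.0)                 # line of sight to the galaxy
M_no_flow = abs_mag(15.0, 10000.0, n_hat, (0.0, 0.0, 0.0))
M_flow = abs_mag(15.0, 10000.0, n_hat, (700.0, 0.0, 0.0))
# a bulk flow toward the galaxy reduces the inferred distance, so the
# inferred absolute magnitude is numerically larger (fainter)
print(M_no_flow, M_flow)
```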
Unfortunately, this seems to have led to a perception that I am “disavowing my own data.” This is untrue. The data is perfectly good, or at least as good as it can be given the observational challenges described above. Rather, my point of view is that this result is noisy—the flow is detected at the $`2.3\sigma `$ level—and thus would require extensive corroboration to be accepted. Looking at the other data sets around—most of them presented at the Cosmic Flows workshop—the prevailing picture is one of convergence to the CMB frame on smaller scales. Most convincing to me, because I worked on it, are the Shellflow results, which clearly show convergence at 6000 km s<sup>-1</sup>. Also very persuasive in this regard are the new supernova results presented by Adam Riess.
To believe the LP10K bulk flow result, then, I would have to accept that the Hubble flow converges to the CMB frame at 6000 km s<sup>-1</sup>, and then starts drifting away from the CMB frame beyond this distance. This would be blatantly unphysical. By and large I harbor few prejudices about how the universe should behave, but I basically do believe that gravitational instability drives structure formation and flows, and that the universe approaches homogeneity on progressively larger scales. If this is true, and Shellflow and the other shallower surveys are correct, the LP10K flow result cannot be. That is not a disavowal of my own data, but quite basic and, I hope, sound scientific reasoning.
#### Acknowledgments.
First and foremost, I once again thank my friend and collaborator Stéphane Courteau for organizing this fantastic conference. I also thank Felicia Tam, a Stanford undergraduate, for invaluable assistance in reducing the LP10K data over the last two years.
## References
Abell, G.O., Corwin, H.G., & Olowin, R.P. 1989, ApJS, 70, 1
Lauer, T.R., & Postman, M. 1994, ApJ, 425, 418 (LP)
Lynden-Bell, D., Faber, S.M., Burstein, D., Davies, R.L., Dressler, A., Terlevich, R., & Wegner, G. 1988, ApJ, 302, 536
Mathewson, D. S., Ford, V. L., & Buchhorn, M. 1992, ApJS, 81, 413
Willick, J. A. 1990, ApJ, 351, L5
Willick, J. A., Courteau, S., Faber, S. M., Burstein, D., Dekel, A., & Kolatt, T. 1996, ApJ, 457, 460
Willick, J.A. 1999a, ApJ, 516, 47 (Paper I)
Willick, J.A. 1999b, ApJ, in press (Paper II, astro-ph/9812470)
# Spin Dependence of Massive Lepton Pair Production in Proton-Proton Collisions
## I Introduction and Motivation
Both massive lepton-pair production, $`h_1+h_2\to \gamma ^{}+X;\gamma ^{}\to l\overline{l}`$, and prompt real photon production, $`h_1+h_2\to \gamma +X`$, are valuable probes of short-distance behavior in hadron reactions. The two reactions supply critical information on parton momentum densities, in addition to the opportunities they offer for tests of perturbative quantum chromodynamics (QCD). Spin-averaged parton momentum densities may be extracted from spin-averaged nucleon-nucleon reactions, and spin-dependent parton momentum densities from spin-dependent nucleon-nucleon reactions. An ambitious experimental program of measurements of spin-dependence in polarized proton-proton reactions will begin soon at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) with kinematic coverage extending well into the regions of phase space in which perturbative quantum chromodynamics should yield reliable predictions.
Massive lepton-pair production, commonly referred to as the Drell-Yan process , provided early confirmation of three colors and of the size of next-to-leading contributions to the cross section differential in the pair mass Q. The mass and longitudinal momentum (or rapidity) dependences of the cross section (integrated over the transverse momentum $`Q_T`$ of the pair) serve as a laboratory for measurement of the antiquark momentum density, complementary to deep-inelastic lepton scattering, from which one gains information on the sum of the quark and antiquark densities. Inclusive prompt real photon production is a source of essential information on the gluon momentum density. At lowest order in perturbation theory, the reaction is dominated at large values of the transverse momentum $`p_T`$ of the produced photon by the “Compton” subprocess, $`q+g\to \gamma +q`$. This dominance is preserved at higher orders, indicating that the experimental inclusive cross section differential in $`p_T`$ may be used to determine the density of gluons in the initial hadrons .
In two previous papers , we addressed the production of massive lepton-pairs as a function of the transverse momentum $`Q_T`$ of the pair in unpolarized nucleon-nucleon reactions, $`h_1+h_2\to \gamma ^{}+X`$, in the region where $`Q_T`$ is greater than roughly half of the mass of the pair, $`Q_T>Q/2`$. We demonstrated that the differential cross section in this region is dominated by subprocesses initiated by incident gluons. Correspondingly, massive lepton-pair differential cross sections in unpolarized nucleon-nucleon reactions are a valuable, heretofore overlooked, independent source of constraints on the spin-averaged gluon density.
Turning to longitudinally polarized proton-proton collisions in this paper, we study the potential advantages that the Drell-Yan process may offer for the determination of the spin-dependence of the gluon density. To be sure, the cross section for massive lepton-pair production is smaller than it is for prompt photon production. However, just as in the unpolarized case, massive lepton pair production is cleaner theoretically, since long-range fragmentation contributions are absent, as are the experimental and theoretical complications associated with isolation of the real photon. Moreover, the dynamics of spin-dependence in hard-scattering processes is a sufficiently complex topic, and its understanding is at a sufficiently early stage of development, that several defensible approaches for extracting polarized parton densities deserve to be pursued, with the expectation that consistent results must emerge.
There are notable similarities and differences in the theoretical analyses of massive lepton-pair production and prompt real photon production. At first order in the strong coupling strength, $`\alpha _s`$, the Compton subprocess and the annihilation subprocess $`q+\overline{q}\rightarrow \gamma +g`$ supply the transverse momentum of the directly produced prompt photons. Identical subprocesses, with the real $`\gamma `$ replaced by a virtual $`\gamma ^{}`$, are responsible at $`𝒪(\alpha _s)`$ for the transverse momentum of massive lepton-pairs. An important distinction, however, is that fragmentation subprocesses play a very important role in prompt real photon production at collider energies. In these long-distance fragmentation subprocesses, the photon emerges from the fragmentation of a final parton, e.g., $`q+g\rightarrow q+g`$, followed by $`q\rightarrow \gamma +X`$. The necessity to invoke phenomenological fragmentation functions and the infrared ambiguity of the isolated cross section in next-to-leading order raise questions about the extent to which isolated prompt photon data may be used for fully quantitative determinations of the gluon density. It is desirable to investigate other physical processes for extraction of the gluon density that are free from these systematic uncertainties. Fortunately, no isolation would seem necessary in the case of virtual photon production (and subsequent decay into a pair of muons) in typical collider or fixed target experiments. Muons are observed only after they have penetrated a substantial hadron absorber. Thus, any hadrons within a typical cone about the direction of the $`\gamma ^{}`$ will have been stopped, and the massive lepton-pair signal will be entirely inclusive.
Another significant distinction between massive lepton-pair production and prompt real photon production is that interest in $`h_1+h_2\gamma ^{}+X`$ has been drawn most often to the domain in which the pair mass $`Q`$ is relatively large, justifying a perturbative treatment based on a small value of $`\alpha _s(Q)`$ and the neglect of inverse-power high-twist contributions (except near the edges of phase space). The focus in prompt real photon production is directed to the region of large values of $`p_T`$ where $`\alpha _s(p_T)`$ is small. Interest in the transverse momentum $`Q_T`$ dependence of the massive lepton-pair production cross section has tended to be limited to small values of $`Q_T`$ where the cross section is largest. Fixed-order perturbation theory is applicable for large $`Q_T`$, but it is inadequate at small $`Q_T`$, and all-orders resummation methods have been developed to address the region $`Q_T<<Q`$.
As long as $`Q_T`$ is large, the perturbative requirement of small $`\alpha _s(Q_T)`$ can be satisfied without a large value of $`Q`$. We therefore explore and advocate the potential advantages of studies of $`d^2\sigma /dQdQ_T`$ as a function of $`Q_T`$ for modest values of $`Q`$, $`Q\sim 2`$ to 3 GeV, below the range of the traditional Drell-Yan region. There are various backgrounds with which to contend at small $`Q`$ such as the contributions to the event rate from prompt decays of heavy flavors, e.g., $`h_1+h_2\rightarrow c+\overline{c}+X;c\rightarrow l+X`$. These heavy flavor contributions may be estimated by direct computation and/or bounded through experimental measurement of the like-sign-lepton distributions.
In Sec. II, we review perturbative QCD calculations of the transverse momentum distribution for massive lepton-pair production in the case in which the initial nucleon spins are polarized as well as in the spin-average case. In Sec. III, we present next-to-leading order predictions for the transverse momentum dependence of the cross sections for massive lepton-pair and real prompt photon production in unpolarized proton-proton collisions at energies typical of the RHIC collider. Predictions for spin dependence are provided in Sec. IV. Our conclusions are summarized in Sec. V.
## II Massive Lepton Pair Production and Prompt Photon Production at Next-to-leading Order
In inclusive hadron interactions at collider energies, $`h_1+h_2\rightarrow \gamma ^{}+X`$ with $`\gamma ^{}\rightarrow l\overline{l}`$, lepton pair production proceeds through partonic hard-scattering processes involving initial-state light quarks $`q`$ and gluons $`g`$. In lowest-order QCD, at $`𝒪(\alpha _s^0)`$, the only partonic subprocess is $`q+\overline{q}\rightarrow \gamma ^{}`$. At $`𝒪(\alpha _s)`$, both $`q+\overline{q}\rightarrow \gamma ^{}+g`$ and $`q+g\rightarrow \gamma ^{}+q`$ participate, with the recoil of the final parton balancing the transverse momentum of the lepton-pair. These processes are shown in Figs. 1(a) and 2(a). Calculations of the cross section at order $`𝒪(\alpha _s^2)`$ involve virtual loop corrections to these $`𝒪(\alpha _s)`$ subprocesses (Figs. 1(b) and 2(b)) as well as contributions from a wide range of $`2\rightarrow 3`$ parton subprocesses (of which some examples are shown in Figs. 1(c) and 2(c)).
The physical cross section is obtained through the factorization theorem,
$$\frac{d^2\sigma _{h_1h_2}^\gamma ^{}}{dQ_T^2dy}=\sum _{ij}\int dx_1\int dx_2f_{h_1}^i(x_1,\mu _f^2)f_{h_2}^j(x_2,\mu _f^2)\frac{sd^2\widehat{\sigma }_{ij}^\gamma ^{}}{dtdu}(s,Q,Q_T,y;\mu _f^2).$$
(1)
It depends on the hadronic center-of-mass energy $`S`$ and on the mass $`Q`$, the transverse momentum $`Q_T`$, and the rapidity $`y`$ of the virtual photon; $`\mu _f`$ is the factorization scale of the scattering process. The usual Mandelstam invariants in the partonic system are defined by $`s=(p_1+p_2)^2,t=(p_1-p_\gamma ^{})^2`$, and $`u=(p_2-p_\gamma ^{})^2`$, where $`p_1`$ and $`p_2`$ are the momenta of the initial state partons and $`p_\gamma ^{}`$ is the momentum of the virtual photon. The indices $`ij\in \{q\overline{q},qg\}`$ denote the initial parton channels whose contributions are added incoherently to yield the total physical cross section. Functions $`f_h^j(x,\mu )`$ denote the usual spin-averaged parton distribution functions.
The partonic cross section $`\widehat{\sigma }_{ij}^\gamma ^{}(s,Q,Q_T,y;\mu _f^2)`$ is commonly obtained from fixed-order QCD calculations through
$$\frac{d^2\widehat{\sigma }_{ij}^\gamma ^{}}{dtdu}=\alpha _s(\mu ^2)\frac{d^2\widehat{\sigma }_{ij}^{\gamma ^{},(a)}}{dtdu}+\alpha _s^2(\mu ^2)\frac{d^2\widehat{\sigma }_{ij}^{\gamma ^{},(b)}}{dtdu}+\alpha _s^2(\mu ^2)\frac{d^2\widehat{\sigma }_{ij}^{\gamma ^{},(c)}}{dtdu}+𝒪(\alpha _s^3).$$
(2)
The tree, virtual loop, and real emission contributions are labeled (a), (b), and (c) as are the corresponding diagrams in Figs. 1 and 2. The parameter $`\mu `$ is the renormalization scale. It is set equal to the factorization scale $`\mu _f=\sqrt{Q^2+Q_T^2}`$ throughout this paper.
The cross section for $`h_1+h_2\rightarrow l\overline{l}+X`$, differential in the invariant mass of the lepton pair $`Q^2`$ as well as its transverse momentum and rapidity, is obtained from Eq. (1) by the relation
$$\frac{d^3\sigma _{h_1h_2}^{l\overline{l}}}{dQ^2dQ_T^2dy}=\left(\frac{\alpha _{em}}{3\pi Q^2}\right)\frac{d^2\sigma _{h_1h_2}^\gamma ^{}}{dQ_T^2dy}(S,Q,Q_T,y),$$
(3)
where $`Q^2=(p_l+p_{\overline{l}})^2`$, and $`p_l,p_{\overline{l}}`$ are the four-momenta of the two final leptons. The Drell-Yan factor $`\alpha _{em}/(3\pi Q^2)`$ is included in all numerical results presented in this paper.
While the full next-to-leading order QCD calculation exists for massive lepton-pair production in the case of unpolarized initial nucleons, only a partial calculation is available in the polarized case. Correspondingly, we present spin-averaged differential cross sections at next-to-leading order, but we calculate spin asymmetries at leading order. Spin asymmetries are obtained by dividing the spin-dependent differential cross section by its spin-averaged counterpart. For prompt photon production, comparisons of asymmetries computed at next-to-leading order with those at leading order show only modest differences, whereas the cross sections themselves are affected more significantly. Given the similarity of prompt photon production and massive lepton-pair production in the region of $`Q_T`$ of interest to us, we expect that the leading-order asymmetries will serve as a useful guide for massive lepton-pair production.
Rewriting Eq. (3) and integrating over an interval in $`Q^2`$, we calculate the spin-averaged differential cross section $`Ed^3\sigma _{h_1h_2}^{l\overline{l}}/dp^3`$ as
$$\frac{Ed^3\sigma _{h_1h_2}^{l\overline{l}}}{dp^3}=\frac{\alpha _{em}}{3\pi ^2S}\sum _{ij}\int _{Q_{min}^2}^{Q_{max}^2}\frac{dQ^2}{Q^2}\int _{x_1^{min}}^1\frac{dx_1}{x_1-\overline{x}_1}f_{h_1}^i(x_1,\mu _f^2)f_{h_2}^j(x_2,\mu _f^2)s\frac{d\widehat{\sigma }_{ij}^\gamma ^{}}{dt}.$$
(4)
In Eq. (4), $`Q_{max}^2`$ and $`Q_{min}^2`$ are the chosen upper and lower limits of integration for $`Q^2`$, and $`x_1^{min}=(\overline{x}_1-\tau )/(1-\overline{x}_2)`$. The value of $`x_2`$ is determined from $`x_2=(x_1\overline{x}_2-\tau )/(x_1-\overline{x}_1)`$, with
$$\overline{x}_1=\frac{Q^2-U}{S}=\frac{1}{2}e^y\sqrt{x_T^2+4\tau },$$
(5)
and
$$\overline{x}_2=\frac{Q^2-T}{S}=\frac{1}{2}e^{-y}\sqrt{x_T^2+4\tau }.$$
(6)
We use $`P_1`$ and $`P_2`$ to denote the four-vector momenta of the incident nucleons; $`S=(P_1+P_2)^2`$. The invariants in the hadronic system, $`T=(P_1-p_\gamma ^{})^2`$ and $`U=(P_2-p_\gamma ^{})^2`$, are related to the partonic invariants by
$$(t-Q^2)=x_1(T-Q^2)=-x_1\overline{x}_2S,$$
(7)
and
$$(u-Q^2)=x_2(U-Q^2)=-x_2\overline{x}_1S.$$
(8)
The scaled variables $`x_T`$ and $`\tau `$ are
$$x_T=\frac{2Q_T}{\sqrt{S}},\tau =\frac{Q^2}{S}.$$
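The kinematic chain of Eqs. (5)-(8), together with the definitions of $`x_T`$ and $`\tau `$, can be checked numerically. The short Python sketch below (helper names are ours, for illustration only) builds $`\overline{x}_1`$, $`\overline{x}_2`$, $`x_2`$, and the partonic invariants, and verifies that $`s+t+u=Q^2`$, as required for a $`22`$ process with a single massive photon in the final state:

```python
import math

def hadronic_vars(S, Q, QT, y):
    """xbar1, xbar2 of Eqs. (5)-(6), plus x_T and tau."""
    tau = Q**2 / S
    x_T = 2.0 * QT / math.sqrt(S)
    r = 0.5 * math.sqrt(x_T**2 + 4.0 * tau)
    return r * math.exp(y), r * math.exp(-y), x_T, tau

def partonic_invariants(S, Q, QT, y, x1):
    """x2 from momentum conservation and partonic t, u via Eqs. (7)-(8)."""
    xbar1, xbar2, _, tau = hadronic_vars(S, Q, QT, y)
    x2 = (x1 * xbar2 - tau) / (x1 - xbar1)  # fixes the second parton momentum
    t = Q**2 - x1 * xbar2 * S               # (t - Q^2) = -x1 * xbar2 * S
    u = Q**2 - x2 * xbar1 * S               # (u - Q^2) = -x2 * xbar1 * S
    return x2, t, u

# sanity check: s + t + u must equal Q^2
S, Q, QT, y, x1 = 200.0**2, 5.0, 6.0, 0.5, 0.2
x2, t, u = partonic_invariants(S, Q, QT, y, x1)
s = x1 * x2 * S
assert abs(s + t + u - Q**2) < 1e-6 and t < 0.0 and u < 0.0
```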
When the initial nucleons are polarized longitudinally, we can compute the difference of cross sections
$$\mathrm{\Delta }\sigma =\sigma _{++}-\sigma _{+-},$$
(9)
where $`+,-`$ denote the helicities of the incident nucleons.
In analogy to Eq. (4), we find
$$\frac{Ed^3\mathrm{\Delta }\sigma _{h_1h_2}^{l\overline{l}}}{dp^3}=\frac{\alpha _{em}}{3\pi ^2S}\sum _{ij}\int _{Q_{min}^2}^{Q_{max}^2}\frac{dQ^2}{Q^2}\int _{x_1^{min}}^1\frac{dx_1}{x_1-\overline{x}_1}\mathrm{\Delta }f_{h_1}^i(x_1,\mu _f^2)\mathrm{\Delta }f_{h_2}^j(x_2,\mu _f^2)s\frac{d\mathrm{\Delta }\widehat{\sigma }_{ij}^\gamma ^{}}{dt}.$$
(10)
The functions $`\mathrm{\Delta }f_h^j(x,\mu )`$ denote the spin-dependent parton distribution functions, defined by
$$\mathrm{\Delta }f_h^i(x,\mu _f)=f_{h,+}^i(x,\mu _f)-f_{h,-}^i(x,\mu _f);$$
(11)
$`f_{h\pm }^i(x,\mu _f)`$ is the distribution of partons of type $`i`$ with positive $`(+)`$ or negative $`(-)`$ helicity in hadron $`h`$. Likewise, the polarized partonic cross section $`\mathrm{\Delta }\widehat{\sigma }^\gamma ^{}`$ is defined by
$$\mathrm{\Delta }\widehat{\sigma }^\gamma ^{}=\widehat{\sigma }^\gamma ^{}(+,+)-\widehat{\sigma }^\gamma ^{}(+,-),$$
(12)
with $`+,-`$ denoting the helicities of the incoming partons.
The hard subprocess cross sections in leading order for the unpolarized and polarized cases are
$$s\frac{d\widehat{\sigma }_{q\overline{q}}}{dt}=-s\frac{d\mathrm{\Delta }\widehat{\sigma }_{q\overline{q}}}{dt}=e_q^2\frac{2\pi \alpha _{em}C_F}{N_C}\frac{\alpha _s}{s}\left[\frac{u}{t}+\frac{t}{u}+\frac{2Q^2(Q^2-u-t)}{ut}\right],$$
(13)
$$s\frac{d\widehat{\sigma }_{qg}}{dt}=-e_q^2\frac{\pi \alpha _{em}}{N_C}\frac{\alpha _s}{s}\left[\frac{s}{t}+\frac{t}{s}+\frac{2Q^2u}{st}\right],$$
(14)
and
$$s\frac{d\mathrm{\Delta }\widehat{\sigma }_{qg}}{dt}=e_q^2\frac{\pi \alpha _{em}}{N_C}\frac{\alpha _s}{s}\left[\frac{2u+s}{t}-\frac{2u+t}{s}\right].$$
(15)
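Two structural properties of these leading-order $`qg`$ expressions can be verified numerically: the spin-averaged bracket is positive over the whole physical region ($`s>Q^2`$, $`t,u<0`$, $`s+t+u=Q^2`$), and the polarized bracket never exceeds it in magnitude, so the parton-level asymmetry satisfies $`|\widehat{a}_{LL}|\leq 1`$. The sketch below writes out the brackets with all relative signs made explicit and drops the common prefactor $`e_q^2\pi \alpha _{em}\alpha _s/(N_Cs)`$ (illustrative only; function names are ours):

```python
import random

def qg_unpol(s, t, u, Q2):
    # LO q g -> gamma* q bracket, common prefactor dropped;
    # equals [(s - Q2)^2 + (t - Q2)^2] / (-s*t) > 0
    return -(s / t + t / s + 2.0 * Q2 * u / (s * t))

def qg_pol(s, t, u, Q2):
    # LO polarized q g -> gamma* q bracket, same prefactor;
    # equals (s - t) * (u + Q2) / (s * t)
    return (2.0 * u + s) / t - (2.0 * u + t) / s

random.seed(7)
for _ in range(2000):
    Q2 = random.uniform(0.5, 25.0)
    s = random.uniform(1.01 * Q2, 200.0)
    t = random.uniform(Q2 - s + 1e-6, -1e-6)  # t in (Q^2 - s, 0)
    u = Q2 - s - t                            # enforces s + t + u = Q^2
    sig, dsig = qg_unpol(s, t, u, Q2), qg_pol(s, t, u, Q2)
    assert sig > 0.0                              # positive spin-averaged rate
    assert abs(dsig) <= sig * (1.0 + 1e-9) + 1e-12  # |a_LL| <= 1 at parton level
```

In the real-photon limit $`Q^20`$ the ratio of the two brackets reduces to $`(s^2t^2)/(s^2+t^2)`$, the familiar Compton asymmetry.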
Our results on the longitudinal spin dependence are expressed in terms of the two-spin longitudinal asymmetry $`A_{LL}`$, defined by
$$A_{LL}=\frac{\sigma ^\gamma ^{}(+,+)-\sigma ^\gamma ^{}(+,-)}{\sigma ^\gamma ^{}(+,+)+\sigma ^\gamma ^{}(+,-)},$$
(16)
where $`+,-`$ denote the helicities of the incoming protons.
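Experimentally, $`A_{LL}`$ is formed from yields in the two relative-helicity configurations and corrected for incomplete beam polarization. A hypothetical estimator of this kind (standard practice, but not spelled out in the text; names are ours):

```python
def a_ll(sigma_pp, sigma_pm):
    """Two-spin asymmetry of Eq. (16) from helicity cross sections."""
    return (sigma_pp - sigma_pm) / (sigma_pp + sigma_pm)

def a_ll_from_counts(n_pp, n_pm, pol1, pol2):
    """Count-based estimator, diluted by beam polarizations pol1, pol2."""
    raw = (n_pp - n_pm) / float(n_pp + n_pm)
    return raw / (pol1 * pol2)

assert a_ll(3.0, 1.0) == 0.5
# a 3% raw count asymmetry with two 70%-polarized beams corresponds to ~6%
print(round(a_ll_from_counts(1030, 970, 0.7, 0.7), 3))  # -> 0.061
```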
## III Unpolarized Cross Sections
We turn in this Section to explicit evaluations of the differential cross sections as functions of $`Q_T`$ at collider energies. We work in the $`\overline{\mathrm{MS}}`$ renormalization scheme and set the renormalization and factorization scales equal. We employ the MRST set of spin-averaged parton densities and a two-loop expression for the strong coupling strength $`\alpha _s(\mu )`$, with five flavors and appropriate threshold behavior at $`\mu =m_b`$; $`\mathrm{\Lambda }^{(4)}=300`$ MeV. The strong coupling strength $`\alpha _s`$ is evaluated at a hard scale $`\mu =\sqrt{Q^2+Q_T^2}`$. We present results for three values of the center-of-mass energy, $`\sqrt{S}=`$ 50, 200, and 500 GeV.
For $`\sqrt{S}=`$ 200 GeV, we present the invariant inclusive cross section $`Ed^3\sigma /dp^3`$ as a function of $`Q_T`$ in Fig. 3. Shown in this figure are the $`q\overline{q}`$ and $`qg`$ perturbative contributions to the cross section at leading order and at next-to-leading order. We average the invariant inclusive cross section over the rapidity range -1.0 $`<y<`$ 1.0 and over the mass interval 5 $`<Q<`$ 6 GeV. For $`Q_T<`$ 1.5 GeV, the $`q\overline{q}`$ contribution exceeds that of the $`qg`$ channel. However, for values of $`Q_T>`$ 1.5 GeV, the $`qg`$ contribution becomes increasingly important. As shown in Fig. 4(a), the $`qg`$ contribution accounts for about 80 % of the rate once $`Q_T\sim Q`$. The results in Fig. 4(a) also demonstrate that subprocesses other than those initiated by the $`q\overline{q}`$ and $`qg`$ initial channels contribute negligibly.
In Fig. 4(b), we display the fractional contributions to the cross section as a function of $`Q_T`$ for a larger value of Q: 11 $`<Q<`$ 12 GeV. In this case, the fraction of the rate attributable to $`qg`$ initiated subprocesses again increases with $`Q_T`$. It becomes 80 % for $`Q_T\sim Q`$.
For the calculations reported in Figs. 3 and 4(a,b), we chose values of Q in the traditional range for studies of massive lepton-pair production, viz., above the interval of the $`J/\psi `$ and $`\psi ^{}`$ states and either below or above the interval of the $`\Upsilon `$’s.
For Fig. 4(c), we select the interval 2.0 $`<Q<`$ 3.0 GeV. In this region, one would be inclined to doubt the reliability of leading-twist perturbative descriptions of the cross section $`d\sigma /dQ`$, integrated over all $`Q_T`$. However, for values of $`Q_T`$ that are large enough, a perturbative description of the $`Q_T`$ dependence of $`d^2\sigma /dQdQ_T`$ ought to be justified. The results presented in Fig. 4(c) demonstrate that, as at higher masses, the $`qg`$ incident subprocesses dominate the cross section for $`Q_T\sim Q`$.
The calculations presented in Figs. 4 show convincingly that data on the transverse momentum dependence of the cross section for massive lepton-pair production at RHIC collider energies should be a valuable independent source of information on the spin-averaged gluon density.
In Fig. 5, we provide next-to-leading order predictions of the differential cross section as a function of $`Q_T`$ for three values of the center-of-mass energy and two intervals of mass $`Q`$. Taking $`Ed^3\sigma /dp^3=10^{-3}\mathrm{pb}/\mathrm{GeV}^2`$ as the minimum accessible cross section, we may use the curves in Fig. 5 to establish that the massive lepton-pair cross section may be measured to $`Q_T=`$ 7.5, 14, and 18.5 GeV at $`\sqrt{S}=`$ 50, 200, and 500 GeV, respectively, when 2 $`<Q<`$ 3 GeV, and to $`Q_T=`$ 6, 11.5, and 15 GeV when 5 $`<Q<`$ 6 GeV. In terms of reach in the fractional momentum $`x_{gluon}`$ carried by the gluon, these values of $`Q_T`$ may be converted to $`x_{gluon}\sim x_T=2Q_T/\sqrt{S}=`$ 0.3, 0.14, and 0.075 at $`\sqrt{S}=`$ 50, 200, and 500 GeV when 2 $`<Q<`$ 3 GeV, and to $`x_{gluon}\sim `$ 0.24, 0.115, and 0.06 when 5 $`<Q<`$ 6 GeV. On the face of it, the smallest value of $`\sqrt{S}`$ provides the greatest reach in $`x_{gluon}`$. However, the reliability of fixed-order perturbative QCD as well as dominance of the $`qg`$ subprocess improve with greater $`Q_T`$. The maximum value $`Q_T\sim `$ 7.5 GeV attainable at $`\sqrt{S}=50`$ GeV argues for a larger $`\sqrt{S}`$.
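The conversion between the largest measurable $`Q_T`$ and the reach in $`x_{gluon}`$ quoted above is simply $`x_T=2Q_T/\sqrt{S}`$; a quick numerical check of the quoted values (illustrative sketch):

```python
def x_reach(QT, sqrtS):
    """Gluon-momentum reach x_gluon ~ x_T = 2*Q_T/sqrt(S)."""
    return 2.0 * QT / sqrtS

# 2 < Q < 3 GeV: Q_T reach 7.5, 14, 18.5 GeV at sqrt(S) = 50, 200, 500 GeV
assert abs(x_reach(7.5, 50.0) - 0.30) < 5e-3
assert abs(x_reach(14.0, 200.0) - 0.14) < 5e-3
assert abs(x_reach(18.5, 500.0) - 0.075) < 5e-3  # exact value 0.074

# 5 < Q < 6 GeV: Q_T reach 6, 11.5, 15 GeV
assert abs(x_reach(6.0, 50.0) - 0.24) < 5e-3
assert abs(x_reach(11.5, 200.0) - 0.115) < 5e-3
assert abs(x_reach(15.0, 500.0) - 0.06) < 5e-3
```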
It is instructive to compare our results with those expected for prompt real photon production. In Fig. 6, we present the predicted differential cross section for prompt photon production for three center-of-mass energies. We display the result with full fragmentation taken into consideration (upper line) and with no fragmentation contributions included (lower line). Comparing the magnitudes of the prompt photon and massive lepton pair production cross sections in Figs. 5 and 6, we note that the inclusive prompt photon cross section is a factor of 1000 to 4000 greater than the massive lepton-pair cross section integrated over the mass interval 2.0 $`<Q<`$ 3.0 GeV, depending on the value of $`Q_T`$. This factor is attributable in large measure to the factor $`\alpha _{em}/(3\pi Q^2)`$ associated with the decay of the virtual photon to $`\mu ^+\mu ^{}`$. Again taking $`Ed^3\sigma /dp^3=10^{-3}\mathrm{pb}/\mathrm{GeV}^2`$ as the minimum accessible cross section, we may use the curves in Fig. 6 to establish that the real photon cross section may be measured to $`p_T=`$ 14, 33, and 52 GeV at $`\sqrt{S}=`$ 50, 200, and 500 GeV, respectively. The corresponding reach in $`x_T=2p_T/\sqrt{S}=`$ 0.56, 0.33, and 0.21 at $`\sqrt{S}=`$ 50, 200, and 500 GeV is two to three times that of the massive lepton-pair case.
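The quoted suppression factor of 1000 to 4000 is consistent with the leptonic decay factor alone: integrating $`\alpha _{em}/(3\pi Q^2)`$ over $`dQ^2`$ between $`Q=2`$ and 3 GeV gives roughly 1/1600 (sketch; $`\alpha _{em}`$ taken at its Thomson value, and the residual spread reflects the different $`Q_T`$ shapes of the two cross sections):

```python
import math

ALPHA_EM = 1.0 / 137.036  # fine-structure constant

def dy_factor_integral(q_min, q_max):
    """Integral of alpha_em/(3*pi*Q^2) dQ^2 = (alpha_em/3pi)*ln(q_max^2/q_min^2)."""
    return ALPHA_EM / (3.0 * math.pi) * math.log(q_max**2 / q_min**2)

suppression = dy_factor_integral(2.0, 3.0)
print(round(1.0 / suppression))  # -> 1593, inside the quoted 1000-4000 range
```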
The breakdown of the real photon direct cross section at $`\sqrt{S}=`$ 200 GeV into its $`q\overline{q}`$ and $`qg`$ components is presented in Fig. 7. As may be appreciated from a comparison of Figs. 4 and 7, dominance of the $`qg`$ contribution in the massive lepton-pair case is as strong as in the prompt photon case. The significantly smaller cross section in the case of massive lepton-pair production means that the reach in $`x_{gluon}`$ is about a factor of two to three smaller, depending on $`\sqrt{S}`$ and $`Q`$, than that potentially accessible with prompt photons in the same sample of data. Nevertheless, it is valuable to be able to investigate the gluon density with a process that has reduced experimental and theoretical systematic uncertainties from those of the prompt photon case.
In our previous papers we compared our spin-averaged cross sections with available fixed-target and collider data on massive lepton-pair production at large values of $`Q_T`$, and we were able to establish that fixed-order perturbative calculations, without resummation, should be reliable for $`Q_T>Q/2`$. The region of small $`Q_T`$ and the matching region of intermediate $`Q_T`$ are complicated by some level of phenomenological ambiguity. Within the resummation approach, phenomenological non-perturbative functions play a key role in fixing the shape of the $`Q_T`$ spectrum at very small $`Q_T`$, and matching methods in the intermediate region are hardly unique. For the goals we have in mind, it would appear best to restrict attention to the region $`Q_T\geq Q/2`$.
## IV Predictions for Spin Dependence
Given theoretical expressions derived in Sec. II that relate the spin-dependent cross section at the hadron level to spin-dependent partonic hard-scattering matrix elements and polarized parton densities, we must adopt models for spin-dependent parton densities in order to obtain illustrative numerical expectations. For the spin-dependent densities that we need, we use the three different parametrizations suggested by Gehrmann and Stirling (GS) . We have verified that the positivity requirement $`\left|\mathrm{\Delta }f_h^j(x,\mu _f)/f_h^j(x,\mu _f)\right|1`$ is satisfied.
The current deep inelastic scattering data do not constrain the polarized gluon density tightly, and most groups present more than one plausible parametrization. Gehrmann and Stirling present three such parametrizations, labelled GSA, GSB, and GSC. In the GSA and GSB sets, $`\mathrm{\Delta }G(x,\mu _o)`$ is positive for all $`x`$, whereas in the GSC set $`\mathrm{\Delta }G(x,\mu _o)`$ changes sign. After evolution to $`\mu _f^2=100`$ GeV<sup>2</sup>, $`\mathrm{\Delta }G(x,\mu _f)`$ remains positive for essentially all $`x`$ in all three sets, but its magnitude is small in the GSB and GSC sets.
In this Section, we present two-spin longitudinal asymmetries for massive lepton-pair production as a function of transverse momentum. Results are displayed for $`pp`$ collisions at the center-of-mass energies $`\sqrt{S}=`$ 50, 200, and 500 GeV typical of the Brookhaven RHIC collider.
In Figs. 8(a-c), we present the two-spin longitudinal asymmetries, $`A_{LL}`$, as a function of $`Q_T`$. As noted earlier, these asymmetries are computed at leading order. More specifically, we use leading-order partonic subprocess cross sections $`\widehat{\sigma }`$ and $`\mathrm{\Delta }\widehat{\sigma }`$ with next-to-leading order spin-averaged and spin-dependent parton densities and a two-loop expression for $`\alpha _s`$. The choice of a leading-order expression for $`\mathrm{\Delta }\widehat{\sigma }`$ is required because the full next-to-leading order derivation of $`\mathrm{\Delta }\widehat{\sigma }`$ has not been completed for massive lepton-pair production. Experience with prompt photon production indicates that the leading-order and next-to-leading order results for the asymmetry are similar so long as both are dominated by the $`qg`$ subprocess. Results are shown for three choices of the polarized gluon density. The asymmetry becomes sizable for large enough $`Q_T`$ for the GSA and GSB parton sets but not in the GSC case. Comparing the three figures, we note that $`A_{LL}`$ is nearly independent of the pair mass $`Q`$ as long as $`Q_T`$ is not too small. This feature should be helpful for the accumulation of statistics; small bin-widths in mass are not necessary, but the $`J/\psi `$ and $`\Upsilon `$ resonance regions should be excluded.
As noted above, the $`qg`$ subprocess dominates the spin-averaged cross section. It is interesting and important to inquire whether this dominance persists in the spin-dependent situation. In Figs. 9 and 10, we compare the contribution to the asymmetry from the polarized $`qg`$ subprocess with the complete answer for all three sets of parton densities. The $`qg`$ contribution is more positive than the full answer for values of $`Q_T`$ that are not too small; the full answer is reduced by the negative contribution from the $`q\overline{q}`$ subprocess for which the parton-level asymmetry $`\widehat{a}_{LL}=-1`$. At small $`Q_T`$, the net asymmetry may be driven negative by the $`q\overline{q}`$ contribution and, based on our experience with other calculations, by processes such as $`gg`$ that contribute in next-to-leading order. For the GSA and GSB sets, we see that once it becomes sizable (e.g., 5% or more), the total asymmetry from all subprocesses is dominated by the large contribution from the $`qg`$ subprocess.
As a general rule in studies of polarization phenomena, many subprocesses can contribute small and conflicting asymmetries. Asymmetries are readily interpretable only in situations where the basic dynamics is dominated by one major subprocess and the overall asymmetry is sufficiently large. In the case of massive lepton-pair production that is the topic of this paper, when the overall asymmetry $`A_{LL}`$ itself is small, the contribution from the $`qg`$ subprocess cannot be said to dominate the answer. However, if a large asymmetry is measured, similar to that expected in the GSA case at the larger values of $`Q_T`$, Figs. 9 and 10 show that the answer is dominated by the $`qg`$ contribution, and data will serve to constrain $`\mathrm{\Delta }G(x,\mu _f)`$. If $`\mathrm{\Delta }G(x,\mu _f)`$ is small and a small asymmetry is measured, such as for the GSC parton set, or at small $`Q_T`$ for all parton sets, one will not be able to conclude which of the subprocesses is principally responsible, and no information could be adduced about $`\mathrm{\Delta }G(x,\mu _f)`$, except that it is small.
In Figs. 10 (a) and (b), we examine the energy dependence of our predictions for two different intervals of mass $`Q`$. For $`Q_T`$ not too small, we observe that $`A_{LL}`$ in massive lepton pair production is well described by a scaling function of $`x_T=2Q_T/\sqrt{S}`$, $`A_{LL}(\sqrt{S},Q_T)\simeq h_\gamma ^{}(x_T)`$. In our discussion of the spin-averaged cross sections, we took $`Ed^3\sigma /dp^3=10^{-3}\mathrm{pb}/\mathrm{GeV}^2`$ as the minimum accessible cross section. Combining the results in Fig. 5 with those in Fig. 10, we see that longitudinal asymmetries $`A_{LL}=20\%,\mathrm{\hspace{0.17em}7.5}\%,\mathrm{and}\mathrm{\hspace{0.17em}3}\%`$ are predicted at this level of cross section at $`\sqrt{S}=`$ 50, 200, and 500 GeV when 2 $`<Q<`$ 3 GeV, and $`A_{LL}=11\%,\mathrm{\hspace{0.17em}5}\%,\mathrm{and}\mathrm{\hspace{0.33em}2}\%`$ when 5 $`<Q<`$ 6 GeV. For a given value of $`Q_T`$, smaller values of $`\sqrt{S}`$ result in greater asymmetries because $`\mathrm{\Delta }G(x)/G(x)`$ grows with $`x`$.
The predicted cross sections in Fig. 5 and the predicted asymmetries in Fig. 10 should make it possible to optimize the choice of center-of-mass energy at which measurements might be carried out. At $`\sqrt{S}=`$ 500 GeV, asymmetries are not appreciable in the interval of $`Q_T`$ in which event rates are appreciable. At the other extreme, the choice of $`\sqrt{S}=`$ 50 GeV does not allow a sufficient range in $`Q_T`$. Accelerator physics considerations favor higher energies since the instantaneous luminosity increases with $`\sqrt{S}`$. Investigations in the energy interval $`\sqrt{S}=`$ 150 to 200 GeV would seem preferred.
In Fig. 11, we display predictions for $`A_{LL}`$ in prompt real photon production for three values of the center-of-mass energy. These calculations are done at next-to-leading order in QCD. Dominance of the $`qg`$ contribution is again evident as long as $`A_{LL}`$ is not too small. So long as $`Q_T\sim Q`$, we note that the asymmetry in massive lepton-pair production is about the same size as that in prompt real photon production, as might be expected from the strong similarity of the production dynamics in the two cases. As in massive lepton-pair production, $`A_{LL}`$ in prompt photon production is well described by a scaling function of $`x_T=2p_T/\sqrt{S}`$, $`A_{LL}(\sqrt{S},p_T)\simeq h_\gamma (x_T)`$. For $`Ed^3\sigma /dp^3=10^{-3}\mathrm{pb}/\mathrm{GeV}^2`$, we predict longitudinal asymmetries $`A_{LL}=31\%,\mathrm{\hspace{0.17em}17}\%,\mathrm{and}\mathrm{\hspace{0.33em}10}\%`$ in real prompt photon production at $`\sqrt{S}=`$ 50, 200, and 500 GeV.
## V Discussion and Conclusions
In this paper we focus on the $`Q_T`$ distribution for $`p+p\rightarrow \gamma ^{}+X`$. We present and discuss calculations carried out in QCD at RHIC collider energies. We show that the differential cross section in the region $`Q_T\geq Q/2`$ is dominated by subprocesses initiated by incident gluons. Dominance of the $`qg`$ contribution in the massive lepton-pair case is as strong as in the prompt photon case, $`p+p\rightarrow \gamma +X`$. As our calculations demonstrate, the $`Q_T`$ distribution of massive lepton pair production offers a valuable additional method for direct measurement of the gluon momentum distribution. The method is similar in principle to the approach based on prompt photon production, but it avoids the experimental and theoretical complications of photon isolation that beset studies of prompt photon production.
As long as $`Q_T`$ is large, the perturbative requirement of small $`\alpha _s(Q_T)`$ can be satisfied without a large value of $`Q`$. We therefore explore and advocate the potential advantages of studies of $`d^2\sigma /dQdQ_T`$ as a function of $`Q_T`$ for modest values of $`Q`$, $`Q\sim 2`$ GeV, below the traditional Drell-Yan region.
For the goals we have in mind, it would appear best to restrict attention to the region in $`Q_T`$ above the value at which the resummed result falls below the fixed-order perturbative expectation. A rough rule-of-thumb based on our calculations is $`Q_T\geq Q/2`$. Uncertainties associated with resummation make it impossible to use data on the $`Q_T`$ distribution at small $`Q_T`$ to extract precise information on parton densities.
In this paper we also present a calculation of the longitudinal spin-dependence of massive lepton-pair production at large values of transverse momentum. We provide polarization asymmetries as functions of transverse momenta that may be useful for estimating the feasibility of measurements of spin-dependent cross sections in future experiments at RHIC collider energies. The Compton subprocess dominates the dynamics in longitudinally polarized proton-proton reactions as long as the polarized gluon density $`\mathrm{\Delta }G(x,\mu _f)`$ is not too small. As a result, two-spin measurements of inclusive massive lepton-pair production at large $`Q_T`$ in polarized $`pp`$ scattering should constrain the size, sign, and Bjorken $`x`$ dependence of $`\mathrm{\Delta }G(x,\mu _f)`$. Significant values of $`A_{LL}`$ (i.e., greater than 5 %) may be expected for $`x_T=2Q_T/\sqrt{S}>0.10`$ if the polarized gluon density $`\mathrm{\Delta }G(x,\mu _f)`$ is as large as that in the GSA set of polarized parton densities. If so, the data could be used to determine the polarization of the gluon density in the nucleon. On the other hand, for small $`\mathrm{\Delta }G(x,\mu _f)`$, dominance of the $`qg`$ subprocess is lost, and $`\mathrm{\Delta }G(x,\mu _f)`$ is inaccessible.
## Acknowledgments
Work in the High Energy Physics Division at Argonne National Laboratory is supported by the U.S. Department of Energy, Division of High Energy Physics, Contract W-31-109-ENG-38. This work was supported in part by DOE contract DE-AC05-84ER40150 under which Southeastern Universities Research Association operates the Thomas Jefferson National Accelerator Facility.
# Two-finger selection theory in the Saffman-Taylor problem
## Abstract
We find that solvability theory selects a set of stationary solutions of the Saffman-Taylor problem with coexistence of two unequal fingers advancing with the same velocity but with different relative widths $`\lambda _1`$ and $`\lambda _2`$ and different tip positions. For vanishingly small dimensionless surface tension $`d_0`$, an infinite discrete set of values of the total filling fraction $`\lambda =\lambda _1+\lambda _2`$ and of the relative individual finger width $`p=\lambda _1/\lambda `$ are selected out of a two-parameter continuous degeneracy. They scale as $`\lambda -1/2\sim d_0^{2/3}`$ and $`|p-1/2|\sim d_0^{1/3}`$. The selected values of $`\lambda `$ differ from those of the single finger case. Explicit approximate expressions for both spectra are given.
PACS numbers: 47.54.+r, 47.20.Ma, 47.20.Ky, 47.20.Hw
The Saffman-Taylor problem has played a central role in the field of interfacial pattern selection in the last few decades. It deals with the morphological instability of the interface between two immiscible fluids confined in a quasi two-dimensional (Hele-Shaw) cell, when the less viscous fluid is displacing the more viscous one in a channel geometry. In particular, in their seminal work Saffman and Taylor called attention to the so-called selection problem, namely the fact that a unique finger-like steady state solution is observed whereas a continuum of solutions is possible if surface tension is neglected. Full analytical understanding of the subtle role of surface tension acting as the relevant selection mechanism was not achieved until much more recently. The resulting scenario of selection has been shown to apply with some genericity to other interfacial pattern forming systems, most remarkably in dendritic growth. On the other hand, despite the relative analytical tractability of the problem, the dynamics of competing fingers is far from being understood even at a qualitative level. Recently, it has been shown that in general surface tension may affect the long time dynamics in an essential way. In the case of the dynamics of finger arrays, the effect of surface tension becomes particularly dramatic, showing that the qualitative picture of finger competition based solely on the concept of screening of the Laplacian field or the global instability of a periodic finger array is insufficient.
Existence of multifinger stationary solutions of the zero surface tension problem has been known for a long time. In Ref. it has been emphasized that multifinger stationary solutions are relevant to the issue of the dynamical role of surface tension. In particular the equal-finger fixed point has been pointed out as the relevant saddle-point to describe competition dynamics. In connection with the phase flow structure around this fixed point, the problem of existence of unequal-finger fixed points with nonzero surface tension has been posed. Here we will extend selection theory to search for such solutions in the case of two fingers. We will follow the formulation of Hong and Langer, which is based on a Fredholm solvability analysis of a non-self-adjoint problem defined through linearization about the zero surface tension solution, together with WKB and steepest descent techniques. Despite the admitted objections to the full quantitative validity of the approximations involved in this method, it has been shown to lead to the correct qualitative picture of selection and the correct scaling of solutions. For its relative simplicity of calculation and presentation, it is therefore suitable for a first exploration of novel situations such as the present one.
Our starting point is the dynamical equation for the conformal mapping $`f(w,t)`$ which maps the interior of the unit circle in the complex plane $`w`$ into the viscous fluid region, with the unit circle $`w=e^{i\varphi }`$ being mapped into the interface. Without loss of generality, we will assume a channel width $`W=2\pi `$ in the $`y`$ direction (with periodic boundary conditions) and a velocity $`U_{\mathrm{\infty }}=1`$ of the fluid at infinity. We define the velocity of the stationary solutions of the interface as $`U=\frac{1}{\lambda }`$ where $`\lambda `$ is the total filling fraction of the channel by the invading fluid. The Cartesian coordinates in the frame moving with velocity $`U`$ are given by $`z=x+iy=f(w,t)-Ut`$. The mapping $`f(w,t)`$ contains a logarithmic singularity which is due to the fact that we are mapping an unbounded domain (the semi-infinite strip) into the unit circle, in such a way that $`f(w,t)+\mathrm{ln}w`$ is always an analytic function in the interior of the unit circle. The exact dynamical equation for the mapping can be written in the form
$$\mathrm{Re}(i\partial _\varphi f(\varphi ,t)\partial _tf^{*}(\varphi ,t))=1-Ud_0\partial _\varphi H_\varphi [\kappa ]$$
(1)
which can be easily derived, for instance, from Ref.. Here $`d_0`$ is a dimensionless surface tension defined as $`d_0=\frac{\sigma b^2}{12\mu U}`$, where $`\sigma `$ is the surface tension, $`b`$ is the gap thickness and $`\mu `$ is the viscosity. The curvature $`\kappa `$ can be expressed in terms of $`x(\varphi )`$ and $`y(\varphi )`$ as
$$\kappa (\varphi )=\frac{\partial _\varphi ^2x\,\partial _\varphi y-\partial _\varphi ^2y\,\partial _\varphi x}{[(\partial _\varphi y)^2+(\partial _\varphi x)^2]^{\frac{3}{2}}}.$$
(2)
$`H`$ is a linear integral operator (Hilbert transform) which acts on a real $`2\pi `$-periodic function $`g(\varphi )`$ according to the definition
$$H_\varphi [g]=\frac{1}{2\pi }P\int _0^{2\pi }g(s)\mathrm{cotg}\frac{1}{2}(\varphi -s)ds.$$
(3)
It follows from Eq.(3) that the function $`A(w)`$ defined by $`A(e^{i\varphi })=g(\varphi )+iH_\varphi [g]`$ is analytic in the interior of the unit circle, and $`Im(A(0))=0`$.
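As a consistency check on this operator algebra: on the unit circle the cotangent kernel of Eq.(3) acts, mode by mode, as the Fourier multiplier $`-i\,\mathrm{sgn}(k)`$ on $`e^{ik\varphi }`$ (annihilating the mean), which is precisely what makes $`g+iH_\varphi [g]`$ the boundary value of a function analytic in the unit disk, and gives $`H_\varphi ^2[g]=-g`$ for zero-mean $`g`$. A minimal numerical sketch (stdlib Python with a naive DFT; this is only an illustration, not the method used in the paper):

```python
import cmath
import math

def hilbert_circle(samples):
    """Periodic Hilbert transform of Eq. (3) acting on uniform samples
    g(phi_j), phi_j = 2*pi*j/N, via the Fourier multiplier -i*sgn(k)
    (the mean, k = 0, is annihilated).  Naive O(N^2) DFT, stdlib only."""
    n = len(samples)
    out = [0.0] * n
    for k in range(-(n // 2) + 1, n // 2):
        if k == 0:
            continue
        # Fourier coefficient c_k = (1/N) sum_j g_j exp(-i k phi_j)
        c = sum(samples[j] * cmath.exp(-1j * k * 2 * math.pi * j / n)
                for j in range(n)) / n
        mult = -1j if k > 0 else 1j          # -i * sgn(k)
        for j in range(n):
            out[j] += (mult * c * cmath.exp(1j * k * 2 * math.pi * j / n)).real
    return out

n = 64
phi = [2 * math.pi * j / n for j in range(n)]
g = [math.cos(t) + 0.5 * math.sin(3 * t) for t in phi]   # zero-mean test function
hg = hilbert_circle(g)
# With this kernel convention, H[cos(k phi)] = sin(k phi), H[sin(k phi)] = -cos(k phi):
expected = [math.sin(t) - 0.5 * math.cos(3 * t) for t in phi]
assert max(abs(a - b) for a, b in zip(hg, expected)) < 1e-9
# H^2[g] = -g for zero-mean g, the identity invoked when inverting Eq. (5):
hhg = hilbert_circle(hg)
assert max(abs(a + b) for a, b in zip(hhg, g)) < 1e-9
```

With the opposite kernel orientation the multiplier flips sign, but the key property $`H^2=-1`$ on zero-mean functions holds either way.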
In the steady state we will have $`\partial _tf^{*}(\varphi ,t)=U`$, and Eq.(1) will read
$$U\frac{dy}{d\varphi }=1-Ud_0\frac{dH_\varphi [\kappa ]}{d\varphi }$$
(4)
We search for solutions of the generic type described in Fig.1. The total filling fraction $`\lambda `$ is split into two contributions $`\lambda _1+\lambda _2=\lambda `$, and we define as a new selection parameter the relative finger width $`p=\lambda _1/\lambda `$. For simplicity we will consider fingers which are axisymmetric and for convenience we will fix the tip positions at $`\varphi =\pi /2,3\pi /2`$, for all $`\lambda `$ and $`p`$. The filling fraction $`\lambda `$ ranges from $`0`$ to $`1`$. We take $`\lambda _2\le \lambda _1`$ so that $`p`$ ranges from $`1/2`$ to $`1`$. Under these conditions the two fingers correspond to the intervals $`\varphi _1=\frac{\pi }{2}(1-2p)`$ to $`\varphi _2=\frac{\pi }{2}(1+2p)`$ and $`\varphi _2`$ to $`\varphi _3=2\pi +\varphi _1`$ respectively.
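With the tips fixed at $`\varphi =\pi /2`$ and $`3\pi /2`$, the boundary angles $`\varphi _1=\frac{\pi }{2}(1-2p)`$, $`\varphi _2=\frac{\pi }{2}(1+2p)`$, $`\varphi _3=2\pi +\varphi _1`$ imply that finger 1 occupies a fraction $`p`$ of the unit circle and finger 2 the remaining $`1-p`$, with each tip at the midpoint of its interval. A quick sanity check of this bookkeeping (Python; the sampled $`p`$ values are arbitrary):

```python
import math

def boundaries(p):
    """Boundary angles of the two axisymmetric fingers on the unit circle,
    with tips fixed at phi = pi/2 and 3*pi/2 as in the text."""
    phi1 = (math.pi / 2) * (1 - 2 * p)
    phi2 = (math.pi / 2) * (1 + 2 * p)
    phi3 = 2 * math.pi + phi1
    return phi1, phi2, phi3

for p in (0.5, 0.6, 0.75, 0.9):
    phi1, phi2, phi3 = boundaries(p)
    # finger 1 occupies a fraction p of the circle, finger 2 the remaining 1-p
    assert abs((phi2 - phi1) / (2 * math.pi) - p) < 1e-12
    assert abs((phi3 - phi2) / (2 * math.pi) - (1 - p)) < 1e-12
    # each tip sits at the midpoint of its interval
    assert abs((phi1 + phi2) / 2 - math.pi / 2) < 1e-12
    assert abs((phi2 + phi3) / 2 - 3 * math.pi / 2) < 1e-12
```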
After integration over $`\varphi `$ of Eq.(4) we obtain
$$Uy(\varphi )=\varphi -Ud_0H_\varphi [\kappa ]+c(\varphi )$$
(5)
where $`c(\varphi )`$ is a piecewise constant function. The values it takes on the intervals $`(\varphi _1,\varphi _2)`$ and $`(\varphi _2,\varphi _3)`$ differ by the finite amount $`\pi (U-1)`$, which accounts for the discontinuity of $`y`$ and the finite flux at the points $`\varphi _1,\varphi _2`$.
After Hilbert transform of Eq.(5) and using that $`H_\varphi ^2[g]=-g(\varphi )`$, we obtain
$`d_0\kappa (\varphi )+x(\varphi )=x_0(\varphi )`$ (6)
$`y(\varphi )+\varphi =const+H_\varphi [x].`$ (7)
where Eq.(7) is just an expression of analyticity of $`f(w,t)+\mathrm{ln}w`$. The function $`x_0(\varphi )`$ is found explicitly as $`x_0(\varphi )=H_\varphi [g]`$ with $`g(\varphi )=(\lambda -1)\varphi +c(\varphi )`$, and by construction it corresponds to the solution of the zero surface tension case. In our case it reads
$`x_0(\varphi )=(1-\lambda )\mathrm{ln}(2|\mathrm{sin}\varphi -\mathrm{cos}p\pi |).`$ (8)
Completed with $`y_0(\varphi )=\lambda (\varphi +c(\varphi ))`$, this gives a two-parameter class of exact solutions of the type of Fig.1 for $`d_0=0`$. Both $`\lambda `$ and $`p`$ can be varied continuously within their natural ranges. The difference $`\mathrm{\Delta }_x`$ between the x-coordinates of the two tips is given by
$$\mathrm{\Delta }_x=(1-\lambda )\mathrm{ln}\frac{1-\mathrm{cos}p\pi }{1+\mathrm{cos}p\pi }.$$
(9)
These solutions are precisely those studied in Ref..
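Equations (8) and (9) can be cross-checked directly: evaluating the zero-surface-tension profile at the two tips $`\varphi =\pi /2`$ and $`3\pi /2`$ reproduces the tip difference exactly. A short Python sketch (the sampled $`(\lambda ,p)`$ values are arbitrary):

```python
import math

def x0(phi, lam, p):
    """Zero-surface-tension interface, Eq. (8):
    x0 = (1 - lam) * ln(2 |sin(phi) - cos(p*pi)|)."""
    return (1 - lam) * math.log(2 * abs(math.sin(phi) - math.cos(p * math.pi)))

def delta_x(lam, p):
    """Tip separation, Eq. (9)."""
    c = math.cos(p * math.pi)
    return (1 - lam) * math.log((1 - c) / (1 + c))

for lam, p in ((0.55, 0.6), (0.6, 0.75), (0.7, 0.9)):
    # x-difference of the two tips (phi = pi/2 and 3*pi/2) reproduces Eq. (9)
    gap = x0(math.pi / 2, lam, p) - x0(3 * math.pi / 2, lam, p)
    assert abs(gap - delta_x(lam, p)) < 1e-12
    # for p > 1/2 (cos(p*pi) < 0) the tip separation is positive
    assert delta_x(lam, p) > 0
```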
The present formulation has some interesting advantages over the traditional approach of McLean and Saffman: the zeroth-order solution is obtained naturally as an explicit outcome of the method, and the formulation is more amenable to generalization, for instance to a larger number of fingers.
We now proceed by assuming $`x(\varphi )=x_0(\varphi )+d_0x_1(\varphi )`$ and linearizing on $`x_1(\varphi )`$ but keeping all singular terms necessary for selection. (Nonlinear effects are expected to introduce only a slight quantitative correction to the final spectrum of selection.) Using the relation $`y_1(\varphi )=H_\varphi [x_1]`$ we get
$`d_0{\displaystyle \frac{d^2x_1}{d\varphi ^2}}+d_0p(\varphi ){\displaystyle \frac{d^2H_\varphi [x_1]}{d\varphi ^2}}+r(\varphi )x_1=\mu (\varphi )`$ (10)
where $`r(\varphi )`$ and $`p(\varphi )`$ are given by
$`r(\varphi )=\lambda ^2{\displaystyle \frac{|q(\varphi )|}{q^4(\varphi )}}[(q(\varphi ))^2+{\displaystyle \frac{1}{\beta ^2}}\mathrm{cos}^2\varphi ]^{\frac{3}{2}}`$ (11)
$`p(\varphi )={\displaystyle \frac{1}{\beta }}{\displaystyle \frac{\mathrm{cos}\varphi }{q(\varphi )}}`$ (12)
with $`q(\varphi )=\mathrm{sin}\varphi -\mathrm{cos}p\pi `$ and $`\beta =\frac{\lambda }{1-\lambda }`$. Explicit knowledge of $`\mu (\varphi )`$ is not necessary for the solvability analysis. First order derivatives are subdominant as $`d_0\to 0`$ and have been omitted.
The linear operator on the lhs of Eq.(10) can be seen as a $`2\times 2`$ matrix operator acting on a vector of two components $`x_1^{+}(\varphi )`$ and $`x_1^{-}(\varphi )`$ which are defined respectively on the intervals $`(\varphi _1,\varphi _2)`$ and $`(\varphi _2,\varphi _3)`$. Inserting an ansatz of WKB form with a point of stationary phase of the solution in the upper (or lower) complex plane one can show, using steepest descent techniques, that the off-diagonal terms of Eq.(10) lead to exponentially small contributions. As a consequence, to leading order the problem is decoupled into two separate problems defined on two disjoint intervals. Similarly, neglecting exponentially small terms, the integral part of the diagonal terms takes a purely differential form in the complex plane. The change of variables $`\eta =\frac{1}{\beta }\frac{\mathrm{cos}\varphi }{\mathrm{sin}\varphi -\mathrm{cos}p\pi }`$ maps each of the two disjoint intervals separately onto the whole real axis $`\eta \in (-\mathrm{\infty },+\mathrm{\infty })`$. Therefore, to leading order we end up with two (complex) differential equations of the form
$`d_0{\displaystyle \frac{d^2x_1^\pm }{d\eta ^2}}+Q_\pm (\eta )x_1^\pm =R_\pm (\eta ).`$ (13)
which are mutually independent but linked through the dependence on the parameters $`\lambda `$ and $`p`$. More details of this derivation will be presented elsewhere. We define two solvability functions as
$`\mathrm{\Lambda }_\pm (\lambda ,p;d_0)={\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}\stackrel{~}{x}^\pm (\eta )R_\pm (\eta )d\eta `$ (14)
where $`\stackrel{~}{x}^\pm (\eta )`$ are eigenfunctions of the null space of the adjoint operators of the respective homogeneous equations. To enforce solvability we now have to impose the simultaneous vanishing of the two solvability functions $`\mathrm{\Lambda }_\pm (\lambda ,p;d_0)=0`$. These two conditions will fix the discrete spectra of possible values of both $`\lambda `$ and $`p`$.
Within the WKB approximation, the two solvability functions take the form
$`\mathrm{\Lambda }_\pm (\lambda ,p;d_0)={\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}G_\pm (\eta )e^{\frac{1}{\sqrt{d_0}}\mathrm{\Psi }_\pm (\eta )}d\eta `$ (15)
where
$`\mathrm{\Psi }_\pm (\eta )=i\lambda \beta {\displaystyle \int _0^\eta }{\displaystyle \frac{(1-i\eta ^{\prime })^{\frac{1}{4}}(1+i\eta ^{\prime })^{\frac{3}{4}}}{1+\beta ^2\eta ^2}}`$ (16)
$`\times \left(1\mp {\displaystyle \frac{\mathrm{cos}p\pi }{\sqrt{1+\eta ^2\beta ^2\mathrm{sin}^2p\pi }}}\right)\mathrm{d}\eta ^{\prime }.`$ (17)
In order to estimate the solvability functions in the steepest descent approximation, only the form of $`\mathrm{\Psi }_\pm (\eta )`$ is required. The singularity structure of $`\mathrm{\Psi }_\pm (\eta )`$ is such that the cases $`p=1/2`$ and $`p\ne 1/2`$ must be treated separately. The first case (two identical fingers) degenerates into the usual single finger problem. For $`p>\frac{1}{2}`$, a more complicated singularity structure is revealed. In the upper half complex plane of $`\eta `$, we find that $`\frac{d\mathrm{\Psi }_+(\eta )}{d\eta }`$ has a new branch point at $`\eta =i/\beta \mathrm{sin}p\pi `$, in addition to the singularities that were present in the single finger problem, namely, a branch point at $`\eta =i`$ and a pole at $`i/\beta `$. On the other hand, $`\frac{d\mathrm{\Psi }_{-}(\eta )}{d\eta }`$ has the branch point at $`\eta =i`$ and the new one at $`\eta =i/\beta \mathrm{sin}p\pi `$, whereas the pole at $`i/\beta `$ is suppressed. Since $`1/\beta \mathrm{sin}p\pi >1/\beta `$, we obtain that $`\beta >1`$ is a necessary condition for the first solvability function $`\mathrm{\Lambda }_+(\lambda ,p;d_0)`$ to oscillate, and therefore generate zeroes. We thus recover the condition $`\lambda >1/2`$ of the single finger case, but now for the total filling fraction. The equivalent condition for $`\mathrm{\Lambda }_{-}(\lambda ,p;d_0)`$ is $`\beta \mathrm{sin}p\pi >1`$ so that the new singularity at $`\eta =i/\beta \mathrm{sin}p\pi `$ lies below $`\eta =i`$. This condition also implies that in the contour integration for $`\mathrm{\Lambda }_+(\lambda ,p;d_0)`$ we will always pick up a contribution from this new singularity.
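The change of variables $`\eta =\frac{1}{\beta }\frac{\mathrm{cos}\varphi }{\mathrm{sin}\varphi -\mathrm{cos}p\pi }`$ introduced above can itself be checked numerically: on each finger interval it is strictly monotonic, diverges at the interval endpoints (where $`q=\mathrm{sin}\varphi -\mathrm{cos}p\pi `$ vanishes), and is zero at the tip, so each interval is indeed mapped onto the whole real axis. A sketch for one illustrative $`(\lambda ,p)`$:

```python
import math

def eta(phi, lam, p):
    """eta = (1/beta) cos(phi) / (sin(phi) - cos(p*pi)), beta = lam/(1-lam)."""
    beta = lam / (1 - lam)
    return math.cos(phi) / (beta * (math.sin(phi) - math.cos(p * math.pi)))

lam, p = 0.55, 0.7                      # illustrative values with beta > 1
phi1 = (math.pi / 2) * (1 - 2 * p)      # boundaries of the first finger
phi2 = (math.pi / 2) * (1 + 2 * p)
eps = 1e-6
grid = [phi1 + eps + (phi2 - phi1 - 2 * eps) * j / 2000 for j in range(2001)]
vals = [eta(phi, lam, p) for phi in grid]
# eta decreases monotonically from +infinity to -infinity over the interval ...
assert all(a > b for a, b in zip(vals, vals[1:]))
assert vals[0] > 1e4 and vals[-1] < -1e4
# ... and vanishes at the tip phi = pi/2
assert abs(eta(math.pi / 2, lam, p)) < 1e-12
```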
By deforming the contour integral as indicated in Fig.2, and following Ref. in identifying the crossover from oscillating to non-oscillating behaviour of the solvability functions, we obtain the scaling of both $`\lambda `$ and $`p`$ with $`d_0`$ to be $`(\lambda -\frac{1}{2})\sim d_0^{2/3}`$ and $`|p-\frac{1}{2}|\sim d_0^{1/3}`$. According to Eq.(9), the resulting scaling for the tip difference is $`\mathrm{\Delta }_x\sim d_0^{1/3}`$.
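The tip-difference scaling follows from Eq.(9): with $`\lambda -\frac{1}{2}\sim d_0^{2/3}`$ and $`p-\frac{1}{2}=b\,d_0^{1/3}`$, one has $`\mathrm{cos}p\pi \approx -\pi (p-\frac{1}{2})`$ and $`1-\lambda \to \frac{1}{2}`$, so $`\mathrm{\Delta }_x\to \pi b\,d_0^{1/3}`$. A numerical check (the amplitudes $`a`$ and $`b`$ below are illustrative; $`b`$ is taken from the $`n=3`$ branch quoted later for concreteness):

```python
import math

def delta_x(lam, p):
    """Tip separation, Eq. (9)."""
    c = math.cos(p * math.pi)
    return (1 - lam) * math.log((1 - c) / (1 + c))

a, b = 1.0, 0.3886                      # illustrative branch amplitudes
ratios = []
for d0 in (1e-4, 1e-6, 1e-8):
    lam = 0.5 + a * d0 ** (2 / 3)
    p = 0.5 + b * d0 ** (1 / 3)
    ratios.append(delta_x(lam, p) / d0 ** (1 / 3))
# Delta_x / d0^{1/3} tends to the constant pi*b as d0 -> 0 ...
assert abs(ratios[-1] - math.pi * b) < 1e-3
# ... approaching it monotonically over the sampled decades
assert abs(ratios[-1] - math.pi * b) < abs(ratios[0] - math.pi * b)
```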
An explicit (approximate) discrete spectrum of selected values of $`\lambda `$ and $`p`$ for small $`d_0`$ is given by the condition $`\mathrm{cos}(\frac{\mathrm{\Psi }_\pm (i+0)-\mathrm{\Psi }_\pm (i-0)}{2i\sqrt{d_0}})=0`$.
From the condition $`\mathrm{\Lambda }_{-}(\lambda ,p;d_0)=0`$ we thus obtain
$`{\displaystyle \frac{1}{\sqrt{d_0}}}{\displaystyle \frac{(1-\lambda )^2}{\lambda }}I(\beta ,p)=m-{\displaystyle \frac{1}{2}}`$ (18)
with $`m=1,2,\mathrm{}`$ and where
$`I(\beta ,p)={\displaystyle \frac{1}{2\pi }}\mathrm{cotg}p\pi {\displaystyle \int _0^{u_1}}{\displaystyle \frac{u^{\frac{3}{4}}H(u;\beta ,p)}{(u_3-u)(u_1-u)^{\frac{1}{2}}}}du`$ (19)
with the regular part of the integrand $`H(u;\beta ,p)=(2-u)^{\frac{1}{4}}(u_4-u)^{-1}(u_2-u)^{-\frac{1}{2}}`$ and $`u_{1,2}=1\mp 1/\beta \mathrm{sin}p\pi `$, $`u_{3,4}=1\mp 1/\beta `$.
Finally, from condition $`\mathrm{\Lambda }_+(\lambda ,p;d_0)=0`$, expressing Eq.(18) to leading order and using properties of hypergeometric functions, the two selection conditions can be combined to read
$`{\displaystyle \frac{1}{\sqrt{d_0}}}(2\lambda -1)^{\frac{3}{4}}=n`$ (20)
$`S(\alpha )={\displaystyle \frac{1}{n}}(m-{\displaystyle \frac{1}{2}})`$ (21)
with $`n=1,2,\mathrm{}`$ and where
$$S(\alpha )=\frac{3\sqrt{2\pi }}{5\mathrm{\Gamma }^2(\frac{1}{4})}_2F_1(\frac{5}{4},\frac{1}{2};\frac{9}{4};1-\alpha )(1-\alpha )^{\frac{5}{4}}.$$
(22)
$`{}_2F_1`$ is a hypergeometric function and $`\alpha =\frac{\pi ^2}{4}\frac{(p-\frac{1}{2})^2}{2\lambda -1}`$ is of order $`(d_0)^0`$ and ranges from $`0`$ to $`1`$.
Eq.(20) determines a set of discrete values of $`\lambda `$. Notice that these are given independently of $`p`$, but they are interleaved with those of the single finger case ($`p=1/2`$), which in the same approximation are given by $`\frac{1}{\sqrt{d_0}}(2\lambda -1)^{\frac{3}{4}}=n-\frac{1}{2}`$ in place of Eq.(20). On the other hand, the lhs of Eq.(21) is a monotonically decreasing function of $`\alpha `$ which varies continuously from $`1/4`$ (at $`\alpha =0`$) to $`0`$ (at $`\alpha =1`$). Solving Eq.(21) for $`\alpha `$ produces solutions with $`p\ne 1/2`$. These will exist whenever $`\frac{1}{n}(m-\frac{1}{2})<\frac{1}{4}`$. For a given $`n`$, the solutions are labeled by $`m=1,2,\mathrm{}`$ up to the integer part of $`(n+1)/4`$. Therefore, the first solution with $`p\ne 1/2`$ appears at $`n=3`$ and gives $`|p-\frac{1}{2}|\approx 0.3886d_0^{1/3}`$ to leading order. For fixed $`m`$, $`p`$ is an increasing function of $`n`$ (like $`\lambda `$), but for fixed $`n`$, $`p`$ has its maximum value at $`m=1`$ and then decreases with $`m`$ (see Fig.3). The spectra derived here must be taken with some caution, since they are only approximate. An exact calculation to lowest order in $`d_0`$ should include nonlinear effects and a proper treatment of the turning points in the WKB analysis, but the corrections are expected to be quantitatively small. More details will be presented elsewhere.
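The numbers quoted above can be reproduced from Eqs.(20)–(22) alone. A Python sketch (stdlib only; `scipy.special.hyp2f1` would serve equally well — here $`{}_2F_1`$ is evaluated through Euler's integral representation): it checks $`S(0)=1/4`$ in closed form via Gauss's summation theorem, and recovers the coefficient $`0.3886`$ for the first unequal-finger solution ($`n=3`$, $`m=1`$):

```python
import math

def hyp2f1_54_12_94(z):
    """2F1(5/4, 1/2; 9/4; z) from Euler's integral representation
    (valid since c > b > 0), with t = s**2 substituted to remove the
    endpoint singularity; composite Simpson rule, stdlib only."""
    pref = math.gamma(9 / 4) / (math.gamma(1 / 2) * math.gamma(7 / 4))
    n = 4000                               # even number of Simpson intervals
    h = 1.0 / n
    def f(s):
        return 2 * (1 - s * s) ** 0.75 * (1 - z * s * s) ** -1.25
    total = f(0.0) + f(1.0)
    for j in range(1, n):
        total += (4 if j % 2 else 2) * f(j * h)
    return pref * total * h / 3

def S(alpha):
    """Eq. (22)."""
    C = 3 * math.sqrt(2 * math.pi) / (5 * math.gamma(0.25) ** 2)
    return C * hyp2f1_54_12_94(1 - alpha) * (1 - alpha) ** 1.25

# S(0) = 1/4 follows in closed form from Gauss's summation theorem:
S0 = (3 * math.sqrt(2 * math.pi) / (5 * math.gamma(0.25) ** 2)) * \
     math.gamma(9 / 4) * math.gamma(0.5) / math.gamma(7 / 4)
assert abs(S0 - 0.25) < 1e-12

# First unequal-finger solution (n = 3, m = 1): solve S(alpha) = 1/6 by
# bisection, using that S decreases monotonically on (0, 1).
lo, hi = 0.02, 0.98
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if S(mid) > 1 / 6:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
# From alpha = (pi^2/4)(p - 1/2)^2/(2*lambda - 1) and Eq. (20),
# |p - 1/2| = (2/pi) sqrt(alpha) n^{2/3} d0^{1/3}; the text quotes 0.3886 for n = 3.
coeff = (2 / math.pi) * math.sqrt(alpha) * 3 ** (2 / 3)
assert abs(coeff - 0.3886) < 1e-3
```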
Concerning the stability of these solutions, it is reasonable to presume that, in general, they will be globally unstable, as established numerically for the equal finger array ($`p=1/2`$) in Ref.. This implies that they would only be directly observable as a transient slowing down of the competition dynamics whenever an initial condition is prepared close to one of these solutions. From a dynamical systems point of view, we would like to emphasize that the knowledge of the fixed points, even if unstable, is always relevant to elucidate the topological features of the phase space flow, and therefore to gain insight and qualitative understanding of the dynamics. In particular the location of these new fixed points will definitely affect the path in phase space describing the transient dynamics from a nearly equal finger array towards the single finger attractor. This point of view was developed in Ref. to study the dynamical role of surface tension. In that spirit it was pointed out that a generalized solvability scenario of selection could hold to some extent for the dynamics.
We conclude by remarking that, although the present solvability analysis is not a rigorous proof of existence of solutions, it reveals by itself a quite unexpected richness of the problem. It would be interesting to search for these solutions numerically or by other more rigorous means. The sole existence of the predicted solutions and their presumable generalization to a larger number of fingers has important consequences for the physical picture of finger competition, which turns out to be much more complex than common arguments of Laplacian screening seem to suggest. The common picture, according to which fingers slightly ahead escape from their neighbors, is not necessarily valid in general because of the existence of growth modes with unequal non-competing fingers. For vanishingly small surface tension, however, these modes collapse and only the equal-finger multifinger mode ($`p=1/2`$) survives as a stationary state. Finally, given the genericity of the solvability mechanism of selection, this opens the possibility of finding similar solutions in related problems such as needle crystal growth or viscous fingering in circular geometry.
We acknowledge financial support from the Dirección General de Enseñanza Superior (Spain) under Project PB96-1001-C02-02. F.X.M. also acknowledges financial support from the Comissionat per a Universitats i Recerca (Generalitat de Catalunya).
Fig. 1. Typical configuration of a two-finger stationary solution.
Fig. 2. Deformation of the steepest descent contour of integration in the complex $`\eta `$ plane with $`\beta >1`$ and $`p>\frac{1}{2}`$ (a) for $`\mathrm{\Lambda }_+`$; (b) for $`\mathrm{\Lambda }_{}`$.
Fig. 3. Spectrum of $`p`$ as a function of $`n`$ for different values of $`m`$.
# Two nearby M dwarf binaries from 2MASS
## 1 Introduction
Nearby binary stars have considerable importance for astronomical studies. The nearby star sample is the basis for determining the properties of the field disk population, such as the luminosity and mass functions, kinematics, binary fraction, and chromospheric and coronal activity. Furthermore, binary systems offer special opportunities to astronomers. Since the components of a system can be assumed to have the same age and composition, studies of one component can be used to constrain the properties of the other component. This is especially important for the coolest M dwarfs, whose complicated atmospheres are difficult to model. G dwarf primaries allow many otherwise unmeasurable properties of their M dwarf secondaries to be constrained, but surprisingly few wide systems suitable for ground-based follow-up are known.
We have recently begun a search using the Two Micron All Sky Survey (2MASS) aimed at identifying the nearest isolated M and L dwarfs in the solar neighborhood. Here we report the discovery, in the course of that search, of two nearby wide M dwarf systems of special interest.
## 2 Data
Candidate nearby M dwarfs were identified using the pairing of 2MASS detections with USNO PMM scans (Monet, private comm.) of the Second Palomar Sky Survey (POSSII) plates \[Reid et al. 1991\]. On the basis of the resulting BRIJHK magnitudes, nearby M dwarfs can be identified with high confidence. More details are given in Gizis et al. (1999, in prep.). The selection criteria are designed to produce a complete sample of M8 and cooler dwarfs, but necessarily include many M6.5 and M7 dwarfs. Objects with $`K_s<12`$, $`J-K_s>1.0`$ were selected for initial follow-up.
In the course of compiling the nearby star candidate list, we noted that two of the M dwarfs appeared to be companions to brighter stars and marked them for priority in follow-up spectroscopy. The 2MASS J2000 positions and magnitudes are given in Table 1. The first, 2MASSW J1000503+315545, is close (133 arcseconds east, 20 arcseconds north) to the nearby G star HD 86728, which appears in the nearby star catalog as Gl 376 \[Gliese & Jahreiß 1991\]. Measurement of the POSSI and POSSII plates reveals the 2MASS source’s proper motion is identical to Gl 376. Indeed, it is a little surprising that this source has not been noticed before. It is easily visible on the Palomar Sky Survey plates, and its proper motion is sufficient to be included in the LHS Catalogue (Luyten 1979), although it is within the halo of the bright star. We estimate $`R\approx 14.5\pm 0.5`$ from the uncalibrated PMM scans. The common proper motion indicates it is associated, and we therefore denote this object Gl 376B. We henceforth refer to the primary (HD 86728) as Gl 376A.
A spectrum of Gl 376B was obtained on 04 March 1999 with the LRIS spectrograph \[Oke et al. 1995\] on the Keck II telescope using the setup described in Kirkpatrick et al. (1999). A second spectrum was obtained at the Hale 200 in. telescope at Palomar Mountain on 24 May 1999. The Keck spectrum is shown in Figure 1. The spectral type of Gl 376B is M6.5 V, with an H$`\alpha `$ equivalent width of 16 Å in the Keck spectrum and 13 Å in the lower resolution Palomar spectrum. Spectral types are on the Kirkpatrick, Henry & McCarthy (1991) system, and were determined by visually comparing our spectrum to stars of known spectral type taken with the same setup.
Our second system consists of the M dwarfs 2MASSW J1047126+402643 and 2MASSW J1047138+402649. The two sources show common proper motion of $`\mu _\alpha =0.30`$ and $`\mu _\delta =0.03`$ arcseconds per year relative to other stars in the field, as measured from the POSSI and 2MASS images, and have a separation of 14.5 arcseconds. Both the position and the proper motion agree with the two Luyten \[Luyten 1979-1980\] NLTT sources LP 213-67 and LP 213-68, and we adopt those names henceforth. The two M dwarfs were observed at Keck II on 05 March 1999. The primary, LP 213-67, has a spectral type of M6.5 V and an H$`\alpha `$ equivalent width of 6.9 Å. The secondary, LP 213-68, is an M8 V with an H$`\alpha `$ equivalent width of 3.7 Å.
## 3 Discussion
Gl 376A has been extensively studied, and as a result its properties are well determined. The Hipparcos parallax places it at 14.9 parsecs \[Perryman et al. 1997\]. Taylor’s (1995) analysis of all metallicity determinations of the star indicates that $`[Fe/H]=+0.170\pm 0.052`$. Using the Hipparcos distance and Edvardsson et al.’s (1993) photometry, Ng & Bertelli (1998) find $`M=1.02`$ M<sub>⊙</sub> and $`\mathrm{log}age=9.854\pm 0.083`$. This age is in good agreement with Barry’s (1988) estimate of $`\mathrm{log}age=9.93`$ based on the CaII chromospheric emission lines. Duquennoy, Mayor & Halbwachs (1991) find a constant radial velocity of +55.83 km/s, and both the Hipparcos observations \[Perryman et al. 1997\] and ground based astrometry \[Heintz 1988\] indicate that there is no close companion. Although there is an IDS and CCDM entry for a bright, nearby companion to Gl 376A in SIMBAD, this entry has been deleted from the more recent Washington Visual Double Star Catalog \[Worley et al. 1996\] and there is clearly no such double on the Palomar plates. We note that Gl 376B should have been included in the Duquennoy & Mayor (1991) analysis of the frequency of binary companions, though it appears in a separation range in which they assumed 50% incompleteness and therefore, in a sense, was accounted for by their analysis.
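For reference, the two quoted log ages convert to linear ages as follows (a trivial check that the "good agreement" between the two independent methods amounts to roughly a 20% difference, i.e. ~0.08 dex):

```python
# Convert the two independent log-age estimates for Gl 376A to linear ages.
age_ng_bertelli = 10 ** 9.854 / 1e9     # isochrone fit, in Gyr
age_barry = 10 ** 9.93 / 1e9            # CaII chromospheric age, in Gyr
assert 7.0 < age_ng_bertelli < 7.3      # ~7.1 Gyr
assert 8.4 < age_barry < 8.6            # ~8.5 Gyr
# The two estimates differ by ~0.08 dex, within the quoted 0.083 uncertainty:
assert abs(9.930 - 9.854) < 0.1
```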
The Hipparcos parallax and 2MASS photometry imply $`M_K=8.19`$ for Gl 376B. However, the typical absolute magnitude of an M6.5 dwarf is $`M_K=9.60`$ \[Kirkpatrick & McCarthy 1994\]. Furthermore, our estimate of $`R\approx 14.5`$ implies $`M_R\approx 13.4`$, which is brighter than the typical M6.5 value of $`M_R=15.13`$ \[Kirkpatrick & McCarthy 1994\], but consistent with the $`K_s`$ discrepancy. The apparent overluminosity of Gl 376B may be partially explained if it is actually a near-equal luminosity binary (or triple!). The high metallicity of the system may also partially account for the apparent overluminosity of the M dwarf with respect to the typical field main sequence.
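The arithmetic behind "partially explained" is simple: an unresolved equal-luminosity pair is $`2.5\mathrm{log}_{10}2\approx 0.75`$ mag brighter than either component, which closes only about half of the ~1.4 mag gap to the typical M6.5 sequence. A sketch:

```python
import math

M_K_obs = 8.19    # Gl 376B, from the Hipparcos distance and 2MASS K_s (text)
M_K_typ = 9.60    # typical M6.5 dwarf (Kirkpatrick & McCarthy 1994)

# An unresolved equal-luminosity pair is brighter than either component by:
binary_correction = 2.5 * math.log10(2)          # ~0.75 mag
assert abs(binary_correction - 0.753) < 1e-3

# Even as an equal pair, each component would still sit ~0.7 mag above the
# typical field sequence, so binarity alone does not explain the offset:
residual = M_K_typ - (M_K_obs + binary_correction)
assert 0.5 < residual < 0.8
```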
We may compare Gl 376B’s H$`\alpha `$ emission strength to similar known M dwarfs observed by Hawley, Gizis & Reid (1996). Despite its great age, Gl 376B is one of the most active nearby field M dwarfs. Since strong emission was observed twice, it is unlikely that a flare is responsible. We note that the ROSAT All-Sky Survey \[Voges et al. 1999\] source 1RXSJ 100050.9+315555 is coincident with Gl 376B. Adopting the calibration given by Fleming et al. (1993), we find $`f_X=1.4\times 10^{-12}`$ erg s<sup>-1</sup> cm<sup>-2</sup> if the M dwarf is the X-ray source. With an implied luminosity of $`\mathrm{log}L_X=27.5`$ (or 27.2 if it is an equal-luminosity binary), Gl 376B is very active compared to the limited sample of late M dwarfs observed by Fleming et al. While we cannot rule out the possibility that some of the flux is due to the G primary, the ROSAT position with its 30 arcsecond uncertainty does not match the G star, and Barry \[Barry 1988\] finds little chromospheric activity, which suggests that much or all of the observed X-ray flux arises from the M dwarf. Hünsch et al. \[Hünsch et al. 1999\] find nearby old G stars with $`\mathrm{log}L_X<27`$, and they do not attribute the ROSAT source to Gl 376A.
The high activity level cannot be due to youth, since the G primary is known to be old from two independent techniques. This strongly suggests that Gl 376B is a short-period binary which maintains a rapid rotation rate via tidal interaction \[Young et al. 1987\]. An unresolved companion would thus explain both the high activity level and the apparent overluminosity of Gl 376B. The postulated companion, Gl 376C, should be detectable via high resolution spectroscopy and radial velocity monitoring.
Both members of the LP 213-67/68 system show H$`\alpha `$ emission. We note that the M6.5 primary has stronger emission than the M8 secondary. This is consistent with the general observation that, in the Pleiades and the field, chromospheric activity levels become weaker as the hydrogen burning limit is approached \[Tinney & Reid 1998\], although the hotter component could be more active due to chance or the influence of an unseen companion.
We have derived the relationship $`M_K=7.59+2.25\times (J-K)`$ using trigonometric parallax data for M7 and later dwarfs (Gizis et al. 1999, in prep.). This implies a distance of 16 parsecs for the M8 secondary. Another estimate may be obtained using Kirkpatrick & McCarthy’s (1993) value of $`M_K=9.6`$ for M6.5 dwarfs. In this case, the estimated distance for the primary is 14.5 parsecs. The tangential velocity of $`23`$ km s<sup>-1</sup> does not give a strong clue to the age of this system, but an age of a few gigayears is likely given the M6.5 dwarf’s activity level.
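The photometric distance follows from the quoted relation together with the distance modulus $`m-M=5\mathrm{log}_{10}(d/10\,\mathrm{pc})`$. A sketch (the 2MASS magnitudes below are hypothetical placeholders, since Table 1 is not reproduced here):

```python
def photometric_distance(K_s, J):
    """Distance in pc from the text's relation M_K = 7.59 + 2.25*(J - K)
    and the distance modulus K - M_K = 5*log10(d / 10 pc)."""
    M_K = 7.59 + 2.25 * (J - K_s)
    return 10 ** ((K_s - M_K + 5) / 5)

# Hypothetical magnitudes for illustration only (not the measured values):
d = photometric_distance(K_s=11.0, J=12.0)   # J - K = 1.0  ->  M_K = 9.84
assert abs(d - 17.1) < 0.1
```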
## 4 Conclusion
We have discovered an M dwarf member of the Gl 376 system. This offers the opportunity to study an old, metal-rich M dwarf with known properties. Most active, metal-rich objects are young (such as the Hyades) and therefore it is difficult to disentangle the effects of youth and metallicity. It is perhaps unfortunate that Gl 376B is likely an unresolved double, because we cannot presently use its position to measure the shift of the main sequence with metallicity. On the other hand, detection of Gl 376C would allow the mass ratio of the M dwarf pair to be determined, giving an additional constraint for models, and would allow the luminosity ratio of the system to be measured. Of course, if a direct orbital mass determination proves to be possible, or if the system proves to be eclipsing, this system would provide a unique constraint on stellar structure, atmospheric, and coronal models near the hydrogen burning limit.
We have also identified a nearby pair of cool M dwarfs. LP 213-67 and LP 213-68 appear to be a normal pair of M dwarfs just above the hydrogen burning limit. We note that the two components are easily resolved and would have simultaneously measurable parallaxes. With a separation of $`230`$ AU, an orbital period determination is not likely in the near future, but the system should allow comparative study of M6.5 and M8 dwarfs.
## acknowledgments
We thank Hartmut Jahreiß for discussion of the nature of Gl 376A and James Liebert for comments. This work was funded in part by NASA grant AST-9317456 and JPL contract 960847. JDK and AJB acknowledge the support of JPL, Caltech, which is operated under contract with NASA. INR and JDK acknowledge funding through a NASA/JPL grant to 2MASS Core Project science; AJB acknowledges support through this grant. This publication makes use of data products from 2MASS, which is a joint project of the University of Massachusetts and IPAC, funded by NASA and NSF. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the Caltech, the University of California and NASA. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. This research has made use of the Simbad database, operated at CDS, Strasbourg, France.
# CMBology
## 1. Seeing Sounds in the CMB (see Figure 1)
As the Universe cools it goes from being radiation dominated to matter dominated. The boundary is labelled ‘$`z_{eq}`$’ on the right side of Fig. 1. As the Universe cools further, electrons and protons combine, thereby decoupling from photons during the redshift interval ‘$`\mathrm{\Delta }z_{dec}`$’. The opaque universe becomes transparent. Present observers, on the left of Fig. 1, look back and see hot and cold spots on the surface of last scattering. But where did the hot and cold spots come from?
At $`z_{eq}`$, dark matter over-densities begin to collapse gravitationally. The photon-baryon fluid (grey) falls (inward pointing arrows) into the dark matter potential wells – gets compressed (dark grey) and then rebounds (outward pointing arrows) due to photon pressure support – leaving less dense regions (white) at the bottoms of the wells, then recollapses and so on. In the observable interval $`\mathrm{\Delta }z_{dec}`$, the phases at which we see these oscillations depend on their physical size. Four different sizes with four different phases are shown in Fig. 1. From top to bottom: maximum Doppler inward velocities, maximum adiabatic compression, maximum Doppler outward velocities, maximum adiabatic rarefaction. The corresponding power spectrum of the CMB, $`C_\ell `$, is shown on the left. Notice that the peaks in the total power spectrum are due to adiabatic compression and rarefaction, while the valleys are filled in by the relatively smaller Doppler peaks. Although we have used the example of dark matter over-densities, we are in the regime of small amplitude linear fluctuations and so dark matter under-densities produce the same power spectrum, i.e., the first and largest peak in the total spectrum is produced by equal numbers of hot and cold spots on the surface of last scattering. When we see hot and cold spots in the CMB we are seeing sound: acoustic adiabatic compressions and rarefactions, visible across 13 billion years of vacuum.
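The phase bookkeeping of Fig. 1 can be caricatured in a toy model: adiabatic compressions/rarefactions contribute as $`\mathrm{cos}^2`$ of the acoustic phase, Doppler velocities as a smaller $`\mathrm{sin}^2`$ term, so the total has peaks at integer multiples of $`\pi `$ with partially filled valleys. This is only a cartoon of the phase relations, not a Boltzmann calculation (the relative Doppler amplitude 0.3 is an arbitrary choice):

```python
import math

def toy_power(x, doppler_amp=0.3):
    """Toy acoustic contributions at phase x (~ k * sound horizon):
    adiabatic compressions/rarefactions ~ cos^2(x), Doppler ~ sin^2(x)."""
    return math.cos(x) ** 2 + doppler_amp * math.sin(x) ** 2

# Peaks of the total sit at the adiabatic extrema, x = n*pi ...
for n in (1, 2, 3):
    assert toy_power(n * math.pi) > toy_power(n * math.pi - 0.3)
    assert toy_power(n * math.pi) > toy_power(n * math.pi + 0.3)
# ... while the Doppler term keeps the valleys at half-integer phases nonzero:
assert abs(toy_power(math.pi / 2) - 0.3) < 1e-12
```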
## 2. The CMB Data (see Figure 2)
Since the COBE-DMR discovery of hot and cold spots in the CMB (Smoot et al. 1992), anisotropy detections have been reported by more than a dozen groups with various instruments, at various frequencies and in various patches and swathes of the microwave sky. The top panel of Fig. 2 is a compilation of recent measurements. The COBE-DMR points (on the left) are at large angular scales while most recent points are trying to constrain the position and amplitude of the dominant first peak at $`0.5`$ degrees.
The three curves are: $`(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })=(1.0,0.0),(0.3,0.0)`$ and $`(0.3,0.7)`$ (SCDM, OCDM and $`\mathrm{\Lambda }`$CDM, respectively). The $`\mathrm{\Lambda }`$CDM model fits the position and amplitude of the dominant first peak quite well. The largest feature in the data which doesn’t fit this model is the low values in the $`20<\ell <100`$ region. Blame the PythV points. The SCDM model has a peak amplitude much too low. Lowering $`H`$ to $`50`$ would bring the peak down a further 10% to 20%. The OCDM model has the peak at angular scales too small to fit the data and is strongly excluded by a fuller analysis (cf. Fig. 3A but see Ratra et al. (1999) for a dissenting view).
The positions and amplitudes of the features in model $`C_{\ell }`$’s (Zaldarriaga & Seljak 1996) depend on cosmological parameters. This dependence allows measurements of the CMB power spectrum to constrain the parameters. The results I report here are based on such an analysis and may be compared with Tegmark (1999), Efstathiou et al. (1999) and Bond & Jaffe (1999).
A major concern of CMB measurements is galactic foreground contamination (see the monograph “Microwave Foregrounds”, de Oliveira-Costa & Tegmark 1999 for a review and update). Another concern is the analysis methods used to convert the measurements into constraints on parameters. The data points in the top panel of Fig. 2 are almost independent. A few are looking at overlapping patches of sky in similar $`\mathrm{}`$-bands, thus cosmic variance components of the error bars are correlated. Observers use similar instrumentation which can produce correlated systematic errors. Some calibrate on the same sources and therefore may share the same systematic calibration error. Although “there is no systematic way to handle systematic errors” (to quote Paul Richards), we do know that partially correlated error bars reduce the effective number of degrees of freedom. However, the $`\chi ^2/DOF`$ of the best fit is marginally too good. Thus there is some room to reduce the $`DOF`$ and still find good-fitting best-fits.
Another concern is the assumptions we make. CMB results are subject to several well-motivated assumptions; we assume gaussian adiabatic initial conditions. This means that when we input initial gaussian fluctuations of the density field, we put hot baryon/photon fluid in potential wells and cold baryon/photon fluid on potential hills, as if the compression/rarefaction mechanism discussed in Section 1 had already been active on all scales, including super-horizon scales. An alternative, isocurvature initial conditions, seems to be disfavored by the data. We also assume that the topological defect mechanisms have not played an important role. These and other assumptions are beginning to be looked at more carefully.
## 3. Cosmological Parameters from the CMB, SN, <br>Galaxy Clusters $`\mathrm{}`$ and Even Lenses (See Figure 3)
Fig. 3 contains multiple constraints in the $`\mathrm{\Omega }_m`$–$`\mathrm{\Omega }_\mathrm{\Lambda }`$ plane and is difficult to read. I took Fig. 3 of Lineweaver (1999) and overlaid all the lensing constraints I could find in the literature. In A, the elongated triangle (upper left to lower right) is the $`68\%`$ confidence level (CL) region preferred by the CMB data (the 95% and 99% CL regions are also shown). The dark grey triangle in the lower right is the $`68\%`$ CL from lensing reported in Falco et al. (1998). The region extends to the x axis but is hidden behind the CMB 68% CL. The light grey region (upper right to lower left) is the $`95\%`$ CL from Falco et al. (1998). The small elongated almond shape in F is the $`68\%`$ CL region based on the CMB and 4 other sets of observations (B,C,D & E). When this region is projected onto the $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ axes it yields
* $`\mathrm{\Omega }_\mathrm{\Lambda }=0.65\pm 0.13`$
* $`\mathrm{\Omega }_m=0.23\pm 0.08`$
These results do not include lensing constraints because there seems to be some disagreement about what the lensing constraints are. Notice that in A, the CMB and lensing constraints overlap in the region around $`(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })=(1.0,0.0)`$. However in this region the age of the Universe is $`10`$ Gyr (too young) and this age problem can only be remedied by making the Hubble constant $`50`$ (too low). The dark and light grey regions in B through E are other published lensing results: B: Chiba & Yoshii 1999 and Cheng & Krauss 1998, C: Quast & Helbig 1999 and Helbig et al. 1999, D: Cooray (2000), E: Cooray (1999a,b). In general the lensing constraints are parallel to the lines of constant age (and also to the SN constraints in B). Only D gives a lensing result with little $`\mathrm{\Omega }_\mathrm{\Lambda }`$ dependence.
To a large extent these lensing constraints depend on the same or similar data so they are not independent. The constraints in A, C and E are very similar; standard CDM is favored. The lensing constraints in B and D however are in very good agreement with the results shown in F. Kochanek et al. (1999) have criticized the panel B results based largely on different estimates of the Schechter function parameters $`\alpha `$ and $`B_{*}`$.
Suppose that $`\mathrm{\Lambda }`$CDM is the correct cosmology, i.e., that the combination of data that produced the contours in F are correct. What mistake has been made in interpreting the lensing data, producing the constraints in A, C and E? Kochanek seems to think that people are barking up the wrong tree: lensing models. They should be barking up the galactic evolution tree, i.e., that our understanding of the lensed population is more important than the degree of central concentration or other aspects of the lens models.
I came to this conference hoping to hear what the latest new $`\mathrm{\Omega }_m\mathrm{\Omega }_\mathrm{\Lambda }`$ constraints from lenses were. It seems that people are trying hard to increase the sample size and conscientiously wrestling with systematics of the assumptions required to convert lensing number counts into constraints on $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. I was hoping to hear some heated discussion about the different lensing constraints in the $`\mathrm{\Omega }_m\mathrm{\Omega }_\mathrm{\Lambda }`$ plane, but the authors of the lensing constraints in B were not at this conference to defend themselves against the collective derision of rival data analysts. We can all agree however that we need more lenses. The JVAS/CLASS survey and the CASTLES follow-up survey seem to be doing just that.
### 3.1. The Age of the Universe (See Figures 4 & 5)
The age of the Universe can be determined from General Relativity via the Friedmann equation. Estimates for three parameters are needed: the mass density, the cosmological constant and Hubble’s constant; that is, $`t_o=f(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda },h)`$. The focus of Lineweaver (1999) was to jointly constrain $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ and $`h`$ using the CMB and 6 other independent data sets. The result, $`t_o=13.4\pm 1.6`$ Gyr, is shown in Fig. 4 and can be compared to the ages of various models in Fig. 5.
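The age integral itself is easy to evaluate numerically. The sketch below is a minimal illustration of $`t_o=f(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda },h)`$ from the Friedmann equation, not the joint-likelihood machinery of Lineweaver (1999); the integrator and step count are arbitrary choices.

```python
import numpy as np

def age_gyr(omega_m, omega_l, h, n=100_000):
    """t0 = (1/H0) * Integral_0^1 da / sqrt(Om/a + Ok + Ol*a^2),
    with curvature Ok = 1 - Om - Ol and Hubble time 1/H0 = 9.78/h Gyr."""
    omega_k = 1.0 - omega_m - omega_l
    a = np.linspace(1e-6, 1.0, n)  # scale-factor grid (avoid a = 0)
    f = 1.0 / np.sqrt(omega_m / a + omega_k + omega_l * a**2)
    integral = (a[1] - a[0]) * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())  # trapezoid
    return 9.78 / h * integral  # age in Gyr

print(age_gyr(1.0, 0.0, 0.65))    # Einstein-de Sitter: 2/3 of the Hubble time, ~10 Gyr
print(age_gyr(0.23, 0.65, 0.68))  # the favored parameters give an age near 13-14 Gyr
```

For $`(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })=(1.0,0.0)`$ one recovers the $`10`$ Gyr age problem mentioned in the lensing discussion above.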
#### Acknowledgments.
I acknowledge a Vice-Chancellor’s Research Fellowship at the University of New South Wales. I thank Tereasa Brainerd for coherently lensing my geodesic into Boston for a well-organized friendly conference.
## Discussion
Prof Turner: We’ve been seeing likelihood contour plots like yours for years – they always change when new data comes out and represent some kind of cosmology du jour, but not much more.
Dr Lineweaver: Maybe you can point me to the references offline.
Prof Turner: In any case, it’s not wise to combine inhomogeneous data sets. Bad, proemial data compromises the results. You should only use the best data. For example, did you use Paul’s low H value?
Dr Lineweaver: I have used $`68\pm 10`$ to represent our best estimate of H. I have been tempted to trust my better judgement and throw out data I don’t like but I’ve resisted.
Dr Bridle: You seem to have selectively chosen your data sets. For example, you have not included constraints from Eke’s work on cluster evolution.
Dr Lineweaver: I wanted to include his work but he didn’t publish it in terms of constraints on $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ which I could easily combine with the likelihoods I presented. His work would, I think, broaden the cluster evolution constraints in Fig. 3D out to the right.
Dr Bridle: We have done a similar analysis of the CMB data but we have done a full integration to properly marginalize over the nuisance parameters.
Dr Lineweaver: I have not marginalized by integrating. I have followed the peaks in likelihood space. A good 1-D analogy which represents our different techniques is that you are using the mean to represent the distribution while I use the mode. One can argue about which is better or more robust in this context. I favor the mode because it allows me to include more parameters which I don’t have to condition on, $`\mathrm{\Omega }_bh^2`$ for example.
Prof Schechter: In the top panel of Figure 2, is one of those curves the best-fit?
Dr Lineweaver: No, I just wanted to show the three most popular models in a simple way. I didn’t minimize the $`\chi ^2`$ values for each model with respect to the other parameters. For example, all three have $`h=0.65`$, $`\mathrm{\Omega }_bh^2=0.025`$ and $`Q=18\mu `$K.
(polite applause)
## References
Bond, J.R. & Jaffe, A. 1999, astro-ph/9809043
Cheng, Y.N. & Krauss L.M. 1998, astro-ph/9810393
Chiba,M. & Yoshii, Y. 1999, astro-ph/9808321 (lensed QSO’s)
Cooray, A., Quashnock, J.M. & Miller, M.C. 1999a, astro-ph/9806080 & 9811115 (lenses in HDF)
Cooray, A. 1999b, A&A submitted, astro-ph/9811448 (CLASS + VLA)
Cooray, A. 2000, Ap.J., 999, L1, astro-ph/9904245 (optical arcs around clusters)
de Oliveira-Costa,A. & Tegmark, M. 1999, “Microwave Foregrounds”, A.S.P. Conf. Ser. Vol 181
Efstathiou, G. et al. , 1999, MNRAS, 303, 47, astro-ph/9812226
Falco, E., Kochanek, C. & Munoz, J.A. 1998, Ap.J., 494 (radio galaxies and QSO’s, JVAS survey)
Helbig, P. et al. 1999, A&A Suppl. Ser. 136, 297, astro-ph/9904175
Hu, W., Sugiyama, N. & Silk, J. 1996, Nature, 386, 37
Kochanek, C. et al. 1999, astro-ph/981111
Lineweaver, C.H. 1997, A.S.P. Conf. Ser. Vol. 126, ‘From Quantum Fluctuations to Cosmological Structures’, edt. D. Valls-Gabaud et al. p 185
Lineweaver, C.H. 1998, ApJ, 505, L69
Lineweaver, C.H. 1999, Science, 284, 1503-1507
Navarro, J.F. & Steinmetz, M. 1999, Ap.J., in press, astro-ph/9908114
Quast, R. & Helbig, P. 1999, A & A344, 721
Ratra, B. et al. 1999, Ap.J., 517, 549
Seljak, U. & Zaldarriaga, M. 1996, Ap.J., 469, 437
Smoot, G.F. et al. , 1992, Ap.J., 396, L1
Tegmark, M. 1996, Varena, astro-ph/9511148
Tegmark, M. 1999, Ap.J., 514, L69
# From Massively Parallel Algorithms and Fluctuating Time Horizons to Non-equilibrium Surface Growth
## Abstract
We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a non-equilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable.
To efficiently utilize modern supercomputers requires massively parallel implementations of dynamic algorithms for various physical, chemical, and biological processes. For many of these there are well-known and routinely used serial Monte Carlo (MC) schemes which are based on the realistic assumption that attempts to update the state of the system form a Poisson process. The parallel implementation of these dynamic MC algorithms belongs to the class of parallel discrete-event simulations, which is one of the most challenging areas in parallel computing and has numerous applications not only in the physical sciences, but also in computer science, queueing theory, and economics. For example, in lattice Ising models the discrete events are spin-flip attempts, while in queueing systems they are job arrivals. Since current special- or multi-purpose parallel computers can have $`10^4`$–$`10^5`$ processing elements (PE), it is essential to understand and estimate the scaling properties of these algorithms.
In this Letter we introduce an approach to investigate the asymptotic scaling properties of an extremely robust parallel scheme . This parallel algorithm is applicable to a wide range of stochastic cellular automata with local dynamics, where the discrete events are Poisson arrivals. Although attempts have been made to estimate its efficiency under some restrictive assumptions , the mechanism which ensures the scalability of the algorithm in the “steady state” was never identified. Here we accomplish this by noting that the simulated time horizon is analogous to a growing and fluctuating surface. The local random time increments correspond to the deposition of random amounts of “material” at the local minima of the surface. This correspondence provides a natural ground for cross-disciplinary application of well-known concepts from non-equilibrium surface growth and driven systems to a certain class of massively parallel computational schemes. To estimate the efficiency of this algorithm one must understand the morphology of the surface associated with the simulated time horizon. In particular, the efficiency of this parallel implementation (the fraction of the non-idling processing elements) exactly corresponds to the density of local minima in the surface model. We show that the steady-state behavior of the macroscopic landscape is governed by the Edwards-Wilkinson (EW) Hamiltonian , implying that the density of the local minima does not vanish when the number of PEs goes to infinity. This ensures that the simulated time horizon propagates with a non-zero average velocity in the steady state. Thus the algorithm is asymptotically scalable! Further, based on the strong analogy between the evolution of the simulated time horizon and the single-step surface growth model , we describe the asymptotic scaling properties of the parallel scheme.
The difficulty of parallel discrete-event simulations is that the discrete events (update attempts) are not synchronized by a global clock. The traditional dynamic MC algorithms were long believed to be inherently serial, i.e., in spin language, the corresponding algorithm could attempt to update only one spin at a time. Lubachevsky nevertheless presented an approach for parallel simulation of these systems without changing the dynamics of the underlying model . Applications of his scheme include modeling of cellular communication networks , ballistic particle deposition , and metastability and hysteresis in kinetic Ising models .
Here we consider the case of one-dimensional systems with only nearest-neighbor interactions (e.g., Glauber spin-flip dynamics) and periodic boundary conditions. We restrict ourselves to the case where each PE carries one site (e.g., one spin) of the underlying system. Non-zero efficiency for this algorithm implies non-zero efficiency for the case where a PE carries a block of sites . Also, the stochastic model for the simulated time horizon is an exact mapping only for the synchronous algorithm, where the main simulation cycles are executed in lock-step on each PE. Our goal is to show for this one-site-per-PE synchronous algorithm (which can be regarded as the worst-case scenario) that the efficiency does not go to zero as the number of PEs goes to infinity. The basic synchronous parallel scheme is as follows.
The size of the underlying system, and thus the number of PEs, is $`L`$. Update attempts at each site are independent Poisson processes with the same rate, independent of the state of the underlying system. Hence, the random time interval between two successive attempts is exponentially distributed. Without loss of generality we use time increments of mean one (in arbitrary units). The Poisson arrivals correspond to attempted instantaneous changes in the state of the site. In the parallel algorithm each PE generates its own local simulated time for the next update attempt, denoted by $`\tau _i(t)`$, $`i=1,2,\mathrm{\dots },L`$. Here, $`t`$ is the discrete index of the parallel steps simultaneously performed by each PE. Initially $`\tau _i(0)=0`$ for each site, and an initial configuration for the underlying system is specified. The simulated time of the first update attempt is determined by $`\tau _i(1)=\tau _i(0)+\eta _i(0)`$, where $`\{\eta _i\}`$ are independent exponential variables. For parallel steps $`t\ge 1`$, each PE must compare its local simulated time to the local simulated times of its neighbors. If $`\tau _i(t)\le \mathrm{min}\{\tau _{i-1}(t),\tau _{i+1}(t)\}`$, the change of state of the site is attempted (and decided by the rules of the underlying system), and its local simulated time is incremented by an exponentially distributed random amount, $`\tau _i(t+1)=\tau _i(t)+\eta _i(t)`$. Otherwise, the change of state is not attempted and the local simulated time remains the same, $`\tau _i(t+1)=\tau _i(t)`$, i.e., the PE waits (“idles”). The comparison of the nearest-neighbor simulated times, and waiting if necessary, ensures that information passed between PEs does not violate causality. The algorithm is obviously free from deadlock, since at worst the PE with the absolute minimum local time can make progress.
After the initial step, the probability density of the simulated time horizon $`\{\tau _i(t)\}`$ is a continuous measure, so the probability that updates for two nearest-neighbor sites are attempted at the same simulated time, is of measure zero. When modeling the efficiency, we ignore communication times between PEs, since they typically contribute to a scalable overhead. Thus, the efficiency is simply the fraction of non-idling PEs (inherent utilization). This exactly corresponds to the density of local minima of the simulated time horizon.
The question naturally arises: Is it possible that the fraction of non-idling PEs goes to zero in the $`L\to \mathrm{\infty }`$ limit? This would obviously make the algorithm unscalable and the performance of the actual implementation poor if not disastrous. To study this problem, we focus on the evolution of the simulated time horizon $`\{\tau _i(t)\}`$, which is completely independent of the underlying model. The above algorithmic steps can be compactly summarized as:
$`\tau _i(t+1)`$ $`=`$ $`\tau _i(t)`$ (1)
$`+`$ $`\mathrm{\Theta }\left(\tau _{i-1}(t)-\tau _i(t)\right)\mathrm{\Theta }\left(\tau _{i+1}(t)-\tau _i(t)\right)\eta _i(t).`$ (2)
The $`\eta _i(t)`$ are drawn from an exponential distribution independently at every time $`t`$ and site $`i`$, and independent of $`\{\tau _i(t)\}`$. Here $`\mathrm{\Theta }(\cdot )`$ is the Heaviside step-function. This stochastic evolution model is very simple and easily simulated on a serial computer. Alternatively, one can consider the evolution of the local slopes, $`\varphi _i=\tau _i-\tau _{i-1}`$:
$`\varphi _i(t+1)-\varphi _i(t)`$ $`=`$ $`\mathrm{\Theta }\left(-\varphi _i(t)\right)\mathrm{\Theta }\left(\varphi _{i+1}(t)\right)\eta _i(t)`$ (3)
$`-`$ $`\mathrm{\Theta }\left(-\varphi _{i-1}(t)\right)\mathrm{\Theta }\left(\varphi _i(t)\right)\eta _{i-1}(t).`$ (4)
The periodic boundary conditions for $`\{\tau _i\}`$ impose the constraint $`\sum _{i=1}^L\varphi _i=0`$. In this representation the operator for the density of local minima is
$$u(t)=\frac{1}{L}\sum _{i=1}^L\mathrm{\Theta }(-\varphi _i(t))\mathrm{\Theta }(\varphi _{i+1}(t)).$$
(5)
The process described by (3) is a microscopic realization of biased diffusion. The random amount of material “deposited” at a local minimum $`\tau _i`$ corresponds to the transfer of this amount from $`\varphi _{i+1}`$ to $`\varphi _i`$. Since the noise in (3) is independent of $`\{\varphi _i(t)\}`$, the average can be simply taken. This yields a transparent continuity equation, $`\langle \varphi _i(t+1)\rangle -\langle \varphi _i(t)\rangle =-(j_i-j_{i-1})`$, where the average current is $`j_i=-\langle \mathrm{\Theta }(-\varphi _i)\mathrm{\Theta }(\varphi _{i+1})\rangle `$. Translational invariance implies that $`\langle u\rangle =\langle \mathrm{\Theta }(-\varphi _i)\mathrm{\Theta }(\varphi _{i+1})\rangle `$, which is the same as the magnitude of the average current or the mean velocity of the surface.
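The update rule is indeed straightforward to reproduce on a serial machine. The sketch below illustrates the rule described above (system size, run length and random seed are arbitrary choices): it advances the time horizon and records the utilization, i.e., the density of local minima.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(L=1000, steps=2000):
    """Evolve the simulated time horizon: every site whose local time is
    <= both neighbours' times (periodic boundaries) advances by an
    exponential increment of mean one.  Returns u(t), the fraction of
    non-idling sites at each parallel step."""
    tau = np.zeros(L)
    u = np.empty(steps)
    for t in range(steps):
        # local minima of the horizon = sites allowed to update
        active = (tau <= np.roll(tau, 1)) & (tau <= np.roll(tau, -1))
        u[t] = active.mean()
        tau[active] += rng.exponential(1.0, size=int(active.sum()))
    return u

u = simulate()
print(u[0])             # 1.0: the flat initial horizon makes every site a minimum
print(u[-500:].mean())  # decays toward a constant slightly below 1/4
```

The late-time average of `u` is the steady-state utilization discussed below.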
To gain some insight into the evolution of the surface, we perform a naive coarse-graining by taking an ensemble average on (3) and replacing $`\mathrm{\Theta }(\varphi )`$ with a smooth representation. The procedure is independent of the actual form of the representation. We use $`\mathrm{\Theta }_\kappa (\varphi )=(1/2)[\mathrm{tanh}(\varphi /\kappa )+1]`$ so $`\mathrm{lim}_{\kappa \to 0}\mathrm{\Theta }_\kappa (\varphi )=\mathrm{\Theta }(\varphi )`$. To leading order in $`\varphi /\kappa `$,
$`\varphi _i(t+1)-\varphi _i(t)`$ $`=`$ $`{\displaystyle \frac{1}{4\kappa }}\left(\varphi _{i+1}-2\varphi _i+\varphi _{i-1}\right)`$ (6)
$`-`$ $`{\displaystyle \frac{1}{4\kappa ^2}}\varphi _i(\varphi _{i+1}-\varphi _{i-1}).`$ (7)
Taking the naive continuum limit one obtains
$$\partial _t\widehat{\varphi }=\frac{\partial ^2\widehat{\varphi }}{\partial x^2}-\lambda \frac{\partial }{\partial x}\widehat{\varphi }^2$$
(8)
for the coarse-grained field, $`\widehat{\varphi }`$, where roughly speaking $`\lambda `$, the coefficient of the nonlinear term, carries the details of the coarse-graining procedure. Equation (8) is the nonlinear biased diffusion or Burgers’ equation. Through $`\widehat{\varphi }=\partial \widehat{\tau }/\partial x`$ it is simply related to the deterministic part of the KPZ equation for the coarse-grained surface height fluctuations,
$$\partial _t\widehat{\tau }=\frac{\partial ^2\widehat{\tau }}{\partial x^2}-\lambda \left(\frac{\partial \widehat{\tau }}{\partial x}\right)^2$$
(9)
To capture the fluctuations one typically extends the above equations with appropriate noise, i.e., conserved for (8), and non-conserved for (9). This implies that the evolution of the simulated time horizon is KPZ-like. In one dimension the steady state of such systems (on coarse-grained length scales) is governed by the EW Hamiltonian, $`\mathcal{H}_{\mathrm{EW}}\sim \int dx\left(\partial \widehat{\tau }/\partial x\right)^2`$. This corresponds to a simple random-walk surface, where the coarse-grained slopes are independent in the thermodynamic limit, yielding $`1/4`$ for the density of local minima. Obviously this value will be different for our specific microscopic model. However it cannot vanish: a zero density of local minima in the $`L\to \mathrm{\infty }`$ limit would imply that it is zero at all length scales. This would contradict our finding that the steady state at the coarse-grained level is governed by the EW Hamiltonian. The non-zero density of local minima is an important characteristic of this (steady-state) universality class. It ensures that our specific microscopic surface propagates with a non-zero average velocity in the steady state. Models belonging to other universality classes do not necessarily have non-zero extremal-point densities. For example, we can show that the density of local minima vanishes for a one-dimensional curvature-driven random Gaussian surface.
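The $`1/4`$ follows directly from slope independence: a site is a local minimum when the incoming slope is negative and the outgoing slope is positive, each with probability $`1/2`$. A quick numerical check (the Gaussian slope distribution and sample size here are arbitrary choices; any symmetric distribution works):

```python
import numpy as np

rng = np.random.default_rng(2)
slopes = rng.normal(size=1_000_000)                # independent, symmetric slopes
minima = (slopes[:-1] < 0.0) & (slopes[1:] > 0.0)  # down-step followed by up-step
print(minima.mean())                               # close to 1/4
```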
We now present our MC results to test the coarse-graining approach. First we follow the time evolution of the width $`w^2(t)=\langle (1/L)\sum _{i=1}^L\left[\tau _i(t)-\overline{\tau }(t)\right]^2\rangle `$, where $`\overline{\tau }(t)=(1/L)\sum _{i=1}^L\tau _i(t)`$ and the average $`\langle \cdot \rangle `$ is taken over many independent runs. After the early-time regime, which is strongly affected by the intrinsic width, and before saturation, we find $`w^2(t)\sim t^{2\beta }`$ \[Fig. 1(a)\]. Although the system exhibits very strong corrections to scaling, for our largest system, $`L=10^5`$, we find $`\beta =0.326\pm 0.005`$, which includes within two standard errors the exact KPZ exponent, $`1/3`$. In the steady state the width is stationary and $`w^2\sim L^{2\alpha }`$ for large $`L`$. Here the corrections to scaling are somewhat smaller than in the earlier regime, and the above scaling is obeyed for $`L\gtrsim 10^3`$ with $`\alpha =0.49\pm 0.01`$. This agrees with the prediction that the long-distance behavior is governed by the EW Hamiltonian, for which $`\alpha =1/2`$. Further, plotting rescaled width $`w^2/L^{2\alpha }`$ vs rescaled time $`t/L^z`$, with $`z=\alpha /\beta `$, confirms dynamic scaling for the intermediate-to-late time crossover \[Fig. 1(a) inset\]. We also measured the average steady-state structure factor of $`\{\tau _i\}`$, finding the expected $`1/k^2`$ behavior for small wave vectors $`k`$. The spatial two-point correlations decay linearly for $`\{\tau _i\}`$ and are short ranged for the slopes, $`\{\varphi _i\}`$.
To further probe the universal properties of the surface in the steady state we construct the full width distribution, $`P(w^2)`$. The EW class is characterized by a universal scaling function, $`\mathrm{\Phi }(x)`$, such that $`P(w^2)=\langle w^2\rangle ^{-1}\mathrm{\Phi }(w^2/\langle w^2\rangle )`$ , where
$$\mathrm{\Phi }(x)=\frac{\pi ^2}{3}\sum _{n=1}^{\mathrm{\infty }}(-1)^{n-1}n^2e^{-\frac{\pi ^2}{6}n^2x}.$$
(10)
Systems with $`L\gtrsim 10^3`$ show convincing data collapse \[Fig. 1(b)\] onto this exact scaling function.
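The scaling function is easy to evaluate by truncating the alternating sum $`\mathrm{\Phi }(x)=(\pi ^2/3)\sum _{n\ge 1}(-1)^{n-1}n^2e^{-\pi ^2n^2x/6}`$. As a sanity check, the resulting curve should be a normalized density with unit mean, since it describes the distribution of $`w^2/\langle w^2\rangle `$. (Truncation order and integration grid below are arbitrary choices.)

```python
import numpy as np

def scaling_phi(x, nmax=400):
    """Truncated sum Phi(x) = (pi^2/3) * sum_{n>=1} (-1)^(n-1) n^2 exp(-(pi^2/6) n^2 x)."""
    x = np.asarray(x, dtype=float)
    s = np.zeros_like(x)
    for n in range(1, nmax + 1):
        s += (-1.0) ** (n - 1) * n * n * np.exp(-(np.pi**2 / 6.0) * n * n * x)
    return (np.pi**2 / 3.0) * s

# trapezoid integration of Phi and x*Phi over a wide grid
x = np.linspace(1e-3, 25.0, 10_000)
f = scaling_phi(x)
dx = x[1] - x[0]
norm = dx * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())
mean = dx * (0.5 * (x[0] * f[0] + x[-1] * f[-1]) + (x[1:-1] * f[1:-1]).sum())
print(norm, mean)  # both close to 1
```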
Next we estimate the efficiency of the algorithm. When simulating the system described by (1) and measuring the average local minimum density $`u(t)`$, we observe that for every system size it monotonically decreases as a function of time and approaches a constant slightly smaller than $`1/4`$ for large systems. Using the close similarity between our model and the single-step solid-on-solid surface-growth model , we can understand the scaling behavior for the steady-state average $`\langle u\rangle `$ and the fluctuations $`\sigma ^2=\langle u^2\rangle -\langle u\rangle ^2`$. In the single-step model the height differences (i.e., the local slopes) are restricted to $`\pm 1`$, and the evolution consists of particles of height $`2`$ being deposited at the local minima. The advantage of this model is that it can be mapped onto a hard-core lattice gas for which the steady-state probability distribution of the configurations is known exactly . This enables one to find arbitrary moments of the local minimum density operator, analogous to (5). For the single-step model we find that $`u_L=(1/4)(1-1/L)^{-1}=1/4+1/(4L)+𝒪(1/L^2)`$, and $`\sigma _L^2=1/(16L)+𝒪(1/L^2)`$. We propose that the scaling of the local minimum density for large $`L`$ in our model follows the same form, i.e.,
$$u_L-u_{\mathrm{\infty }}\sim L^{-1},\sigma _L\sim L^{-1/2}.$$
(11)
Our reasons are: (i) both models in their steady states belong to the EW universality class (short-range correlated local slopes), which guarantees that $`u_{\mathrm{}}`$ is non-zero; (ii) the constraint $`_{i=1}^L\varphi _i=0`$ in our model and the conservation of the number of particles in the lattice gas will produce similar finite-size effects. Our simulation results show very good agreement with (11) \[Fig. 2(a)\]. Further, extrapolating to $`L`$$``$$`\mathrm{}`$, yields $`u_{\mathrm{}}`$$`=`$$`0.24641\pm (7\times 10^6)`$. The scaling relation (11) also implies that $`u`$ is a self-averaging macroscopic quantity: its full distribution $`P_L(u)`$, for large $`L`$, is Gausian with the parameters in (11) \[Fig. 2(b)\]. Thus, in the $`L`$$``$$`\mathrm{}`$ limit it approaches a delta-function centered about $`u_{\mathrm{}}`$.
Finally, we point out the lack of up-down symmetry in our model in the steady-state. This is most easily noticeable at short distances, either by looking at a snapshot \[Fig. 3(a)\], or through the high degree of asymmetry in the nearest-neighbor two-slope distribution \[Fig. 3(b)\]: the hilltops are sharp and the valley-bottoms are flattened. Such stationary-state skewness is generally observable in one-dimensional KPZ growth, but has only recently received serious attention .
In summary, we studied the asymptotic scaling properties of a general parallel algorithm by regarding the simulated time horizon as a non-equilibrium surface. We conclude that the basic algorithm (one site per PE) is scalable for one-dimensional arrays. The same correspondence can be applied to model the performance of the algorithm for higher-dimensional logical PE topologies. While this will involve the typical difficulties of surface-growth modeling, such as an absence of exact results and very long simulation times, it establishes potentially fruitful connections between two traditionally separate research areas.
We thank S. Das Sarma, S. J. Mitchell, and G. Brown for stimulating discussion. We acknowledge the support of DOE through SCRI-FSU, NSF-MRSEC at UMD, and NSF through Grant No. DMR-9871455.
# The visible environment of gas-accreting galaxies
## 1. Introduction
The process of galaxy formation seems to continue, by means of accretion of matter from outside, at epochs much later than that at which a galaxy reached a stable configuration. Peculiar morphologies and kinematical configurations such as inclined, polar or warped rings, observed in roughly one galaxy in a hundred (Bertola & Galletta 1978, Whitmore et al. 1990), are widely attributed to a ’second event’ in the history of galaxies. The same origin is attributed to other peculiarities, such as counterrotation, a phenomenon in which gas and/or stars rotate in the opposite direction to most of the galaxy (Bettoni 1984, Galletta 1987, Ciri et al. 1995). These configurations, defined by two distinct and possibly opposite spins coexisting in the same galaxy, are not expected in the conventional picture of galaxy formation, driven by a sequential condensation under the effect of gravity.
If there are few doubts about the external origin of the gas or stars in inclined or retrograde orbits, several different hypotheses have been presented about the origin of this matter.
On one side, the new matter has been attributed to the acquisition and engulfment of satellite galaxies, destroyed after the merging (Quinn 1991, Rix & Katz 1991). But the existence of many systems with a large amount of accreted matter has weakened this hypothesis, since a dwarf galaxy is too small to supply the large detected masses in a single cannibalism event (Richter et al. 1994, Galletta et al. 1997). In addition, the presence of accreted matter in spiral galaxies adds the difficulty of explaining the accretion of a large amount of matter in a single event without heating or destroying the disk (Barnes 1992, Rix et al. 1992, Quinn et al. 1993).
On the other side, the possibility has been studied that a large quantity of gas may be accreted by means of a progressive infall of diffuse matter (Ostriker & Binney 1989, Thakar & Ryden 1996).
In the first hypothesis (accretion of dwarf galaxies in a particularly rich environment) it should be possible to detect some peculiarity of the environment by means of statistical studies, in visible wavelengths, of the region of sky around the gas-accreting galaxies. The second hypothesis, accretion of diffuse gas, is hard to test with statistical studies.
We have started a study aimed at a statistical analysis of the objects present in the sky around a set of accreting galaxies, separating the polar ring galaxies from the cases of counterrotation. We present here the preliminary results.
## 2. Data selection and analysis
The study of the environments of these galaxies was based on counts of the objects present in each field and on a statistical analysis of their properties, such as the projected distance r from the counterrotating or polar ring galaxy and the apparent diameter D of every small, diffuse object present in the field. The positions and diameters of such objects were extracted from the APM archive (Irwin et al. 1994) available at the Observatory of Edinburgh. We extracted data for the field – 200 kpc wide – of 31 polar ring galaxies (Brocca et al. 1997) and 43 counterrotating galaxies (Galletta 1996). As control samples for each of the previous types of peculiar galaxies, we adopted a sample of 48 galaxies without polar rings and another one of 53 galaxies without counterrotation. The latter sample was chosen by looking at the published rotation curves of gas and stars of all these stellar systems, starting from the catalog published by Prugniel et al. (1998).
According to similar studies (Heckman et al. 1985, Fuentes-Williams & Stocke 1988), we defined for each field around the peculiar galaxy the following density parameters:
$$\rho _{ij}=\underset{k}{\sum }r_k^{-i}D_k^{j}$$
where (i,j) assume the values (0,0), (0,1), (1,0), (1,1), (2,2) and (3,2.4). From the above formula, $`\rho _{00}`$ represents the number of neighboring galaxies, $`\rho _{01}`$ is the number weighted by the relative size, $`\rho _{10}`$ is weighted by proximity and $`\rho _{11}`$ is weighted by size and proximity. The parameter $`\rho _{22}`$ is proportional to the gravitational force exerted by the surrounding galaxies on the central object, while $`\rho _{3,2.4}`$ is proportional to the tidal interaction between the surrounding galaxies and the central one. The last two parameters amplify the effects present in the parameter $`\rho _{11}`$, and generally a variation of one of the independent parameters $`\rho _{00}`$, $`\rho _{01}`$ and $`\rho _{10}`$ induces changes in the connected parameters.
Finally, after determining the $`\rho _{ij}`$ parameters for the two samples, a Kolmogorov–Smirnov test was applied to the analyzed and control samples by means of a Fortran program that uses the IMSL library routine.
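As an illustration, the density parameters and the two-sample comparison can be sketched in a few lines of Python; the function names and the pure-Python Kolmogorov–Smirnov statistic below are our own, not the original Fortran/IMSL implementation:

```python
def density_params(neighbours,
                   exponents=((0, 0), (0, 1), (1, 0), (1, 1), (2, 2), (3, 2.4))):
    """rho_ij = sum_k r_k**(-i) * D_k**j for one field.

    `neighbours` is a list of (r, D) pairs: projected distance r and
    apparent diameter D of each small, diffuse object around the
    central galaxy.
    """
    return {(i, j): sum(r ** (-i) * D ** j for r, D in neighbours)
            for i, j in exponents}


def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: max |F_a(x) - F_b(x)|."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d
```

For each field one would accumulate the (r, D) pairs extracted from the APM archive, evaluate the density parameters, and then compare the distribution of each parameter over the peculiar sample with that of the control sample via the KS statistic.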
The results of this analysis are reported in the following Table.
## 3. Discussion
The study of the environment based on the analysis of the $`\rho _{i,j}`$ parameters indicates that there are no significant statistical differences between the optical environments of accreting galaxies (polar ring or counterrotating systems) and those of normal galaxies. None of the significance levels for the parameters is above 89%. This result also holds when the sample is separated according to the type of the central galaxy (elliptical, S0 or spiral) or according to the kind of counterrotation present (gas vs. stars or stars vs. stars).
Our data suggest that the accretion event generating the polar ring or the counterrotation has not left clear, detectable traces at the present epoch. According to the models existing in the literature, and as said in the introduction, two hypotheses may hold: 1) the accreted matter is not related to the presence of satellites and derives from a slow, non-traumatic gas infall; 2) the polar rings or the counterrotation were generated by a mass transfer from a companion galaxy or by a satellite ingestion, which happened in a remote epoch, leaving traces no longer visible inside and around the galaxies.
### Acknowledgments.
This work has been partially supported by the grant ’Formazione ed Evoluzione delle Galassie’, Fondi ex 40% of the Italian Ministry of University and Scientific and Technological Research (MURST).
## References
Barnes, J.E., 1992, ApJ, 393, 484
Brocca, C., Bettoni, D. & Galletta, G., 1997, A&A, 326, 907
Bertola, F. & Galletta, G. 1978, ApJ, 226, L115
Bettoni, D., 1984, The Messenger, 37, 17
Ciri, R., Bettoni, D. & Galletta, G. 1995, Nature, 375, 661
Fuentes-Williams, T. & Stocke, J.T. 1988, AJ, 96, 1235
Galletta, G. 1996, in Barred Galaxies, IAU Coll.157, ASP Conf.Ser. No.91, ed R. Buta, D.A. Crocker and B.G. Elmegreen, p.429
Galletta, G. 1987, ApJ, 318, 531
Galletta, G. Sage, L.J. & Sparke, L.S. 1997, MNRAS, 284, 773
Heckman, T.M., Carty, T.J. & Bothun, G.D. 1985, ApJ, 288, 122
Irwin, M., Maddox, S. & McMahon, R. 1994, Spectrum, 2, 14
Ostriker E.C., Binney J.J., 1989, MNRAS, 237, 785
Prugniel, Ph., Zasov, A., Busarello, G. & Simien, F. 1998, A&AS, 127, 117
Quinn, T. 1991, in Warped disks and inclined rings around galaxies ed S. Casertano, P. Sackett, F. Briggs. (Cambridge University Press), p. 143
Quinn, P.J., Hernquist, L., & Fullagar, D.P., 1993 ApJ, 403, 74
Richter, O.G., Sackett, P.D., & Sparke, L.S. 1994, AJ, 107, 99
Rix, H.-W., & Katz, N., 1991, in Warped disks and inclined rings around galaxies ed S. Casertano, P. Sackett, F. Briggs. (Cambridge University Press), p. 112
Rix, H.-W., Franx, M., Fisher, D., & Illingworth, G., 1992, ApJ, 400, L5
Thakar, A.R. & Ryden, B.S. 1996, ApJ, 461, 55
Whitmore, B.C., Lucas, R.A., McElroy, D.B., Steiman-Cameron, T.Y., Sackett, P.D. & Olling, R.P. 1990, AJ, 100, 1489
# Time distribution and loss of scaling in granular flow

This paper is dedicated to Professor Franz Schwabl on the occasion of his 60th birthday.
## I Introduction
The dynamics of granular materials represents an important practical and theoretical problem. A new theoretical approach to the problem of driven granular flow has been initiated in the past few years, motivated by the observed scaling behavior both in laboratory granular piles and in natural landslides. It has been recognized that the collective dynamics of grains may lead to self-organized critical (SOC) states, characterized by the scaling properties of sandslides (avalanches). Moreover, the dynamics may depend on various parameters, such as the dimension and mass of individual grains and the quality of their contact surfaces, and on the external conditions. By varying some of these parameters in a controlled manner, steady states with different characteristics are reached, and a phase transition to a steady state with no long-range correlations occurs when a parameter is varied through a certain critical value.
Various cellular automata models have been introduced so far to mimic stochastic variations in the conditions of toppling. One-dimensional rice-pile automata with stochastic critical slope rules have been useful in understanding the transport properties of rice piles. Relaxation rules in these models are a kind of branching process with internal stochasticity. In two dimensions, two models studied in Refs. utilize mixed dynamic critical slope (CS) and critical height (CH) rules, motivated by the observed nonuniversality of the emergent critical states in natural landslides (for a recent review see Refs. ).
In the present work we extend the study of the models of Refs. , which we term models A and B, respectively. In these models stochastic toppling by the CH mechanism is controlled by an external parameter—the probability of toppling $`p`$—which can be attributed to variations in the external conditions (e.g., wetting), or by internal kinetic friction, determined globally by the quality of the contact surfaces between grains. In contrast to the rice-pile models of Refs. , the present models are more appropriate for the evolution of landscapes, in which a variety of erosion mechanisms might be simultaneously active.
Two types of triggering mechanisms of landslides are recognized in the literature: (i) soil moisture, which is controlled by rainfall and water level, and (ii) ground motion, which influences slope variation. The local shear stress threshold may depend on both the slope angle and the soil properties. We assume that these triggering mechanisms are dynamically correlated. By wetting, the diffusion probability is lowered and grains stick together, thus building up local heights. However, when the difference between heights at neighboring sites exceeds a certain limit, the slope mechanism becomes activated.
A simplified picture of the natural mechanisms of erosion is taken into account by combined relaxation rules for the height transport on a two-dimensional square lattice, as follows: If at a site $`(i,j)`$ the local height $`h(i,j)\ge h_c`$, then the site relaxes with probability $`p`$ as $`h(i,j)\to h(i,j)-2`$; $`h(i+1,j_\pm )\to h(i+1,j_\pm )+1`$. If for finite $`p`$ some of the local slopes $`\sigma _\pm (i,j)\equiv h(i,j)-h(i+1,j_\pm )\ge \sigma _c`$, then the site relaxes with probability one by toppling one particle along each unstable slope, repeatedly, until all slopes are reduced below $`\sigma _c`$. Here $`(i+1,j_\pm )`$ are the positions of the two downward neighbors of the site $`(i,j)`$ on a square lattice oriented downward.
The system is driven by adding a grain from the outside at a random site along the first row, and the system is then let to relax according to the above rules. Another grain is added when the relaxation process stops. The internal time scale is measured by the number of steps during which the relaxation process proceeds on the lattice. At time step $`t=1`$ a site in the first row topples after a grain is added from outside. According to the above relaxation rules, one or two grains are toppled from that site, which then appear at one or two downward neighboring sites. Therefore, the mass flow is only downward. However, an instability may propagate to both downward and upward neighbors of a toppled site (except for the sites in the first row, which have no upward neighbors), thus triggering four new sites as candidates for toppling per each just-toppled site. In one time step we update in parallel all candidates for toppling. This comprises the usual definition of the time step in cellular automata models.
Since the system builds up unstable sites (with respect to the probabilistic CH rule), the above dynamic rules need to be supplemented by an additional rule, which makes the difference between the two models. In model A, all sites that are visited by an avalanche at least once are considered as candidates for toppling until the whole instability dies off. In this way a propagating instability has an internal correlation time $`t_c`$ which is determined by the dynamics itself. In model B, we set $`t_c=1`$. Therefore, only sites which are in the neighborhood of active sites at time $`t`$ may be candidates for toppling in the next time step $`t+1`$. It should be stressed that, since an avalanche is an extended object, in both models there are many sites which topple simultaneously and which are not neighbors in space. In model B next-toppled sites are neighbors only on the time scale but not in space, whereas in model A next-toppled sites are not necessarily neighbors on either the temporal or the spatial scale. However, all toppled sites are connected within the affected area in the space-time dimensions. In both models particles are added from the outside only at a random site in the first row and leave the system at the lower (open) boundary. The mass transport is unidirectional (down). However, since the above rules allow an instability to propagate both forward and backward on the 2-dimensional lattice, and to evolve on an internal time scale, both models are essentially $`2+1`$-dimensional, with the extra dimension representing the internal time scale. Differences in the additional relaxation rule lead to different emergent critical states, as explained below.
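The combined relaxation rules can be sketched in code. The following is a minimal, illustrative Python implementation in the spirit of model B ($`t_c=1`$), with an assumed lattice geometry (periodic across the slope, open at the bottom) and assumed threshold values; it is not the authors' code, and the sequential sweep over candidates only approximates a parallel update:

```python
import random

H, L = 32, 32           # rows (downhill) x columns; illustrative size
H_C, SIGMA_C = 2, 4     # critical height h_c and critical slope sigma_c (assumed values)


def down(i, j):
    """Two downward neighbours (i+1, j_+-); periodic across, open at the bottom."""
    return [(i + 1, j), (i + 1, (j + 1) % L)]


def avalanche(h, p, rng):
    """Drive with one grain and relax (model B: t_c = 1).  Returns (duration, mass)."""
    site = (0, rng.randrange(L))
    h[site] += 1
    candidates = {site}
    duration = mass = 0
    while candidates:
        toppled = []
        for (i, j) in candidates:
            if h[(i, j)] >= H_C and rng.random() < p:      # CH rule, probability p
                h[(i, j)] -= 2
                for n in down(i, j):
                    if n[0] < H:                           # grains leave at the bottom
                        h[n] += 1
                mass += 2
                toppled.append((i, j))
            else:                                          # CS rule, probability one
                active = False
                for n in down(i, j):
                    while n[0] < H and h[(i, j)] - h[n] >= SIGMA_C:
                        h[(i, j)] -= 1
                        h[n] += 1
                        mass += 1
                        active = True
                if active:
                    toppled.append((i, j))
        if not toppled:
            break
        duration += 1
        # t_c = 1: only up/down neighbours of just-toppled sites are next candidates
        candidates = set()
        for (i, j) in toppled:
            for n in down(i, j) + [(i - 1, j), (i - 1, (j - 1) % L)]:
                if 0 <= n[0] < H:
                    candidates.add(n)
    return duration, mass
```

Repeatedly calling `avalanche` on the same lattice and histogramming the returned durations and masses would give estimates of $`P_B(t,L)`$ and $`P_B(n,L)`$.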
In Fig. 1 two examples of large avalanches in model A (below) and model B (top) are shown for values of the control parameter $`p=p^{}`$ at the edge of the scaling region ($`p^{}\simeq 0.4`$, see Sec. IV for discussion). In both models the multiplicity of topplings at a site (a larger number of topplings is marked by a darker gray tone) is induced by the instability propagating back and forth due to the nonlocal relaxation rules. In model A the number of candidates for toppling at each time step is larger than in model B, due to the internal correlation time, typically $`t_c\gg 1`$, leading to a more efficient relaxation of unstable sites. On the other hand, $`t_c=1`$ in model B enables the building up of numerous unstable sites (with respect to the CH rule) for low values of $`p\gtrsim p^{}`$. Therefore huge avalanches with perpendicular extent comparable to the system size (cf. Fig. 1 (top)) occur frequently, indicating that the anisotropy of the relaxation events vanishes at $`p^{}`$.
In the limit $`p=1`$ both models reduce to the deterministic directed CH model introduced and solved exactly by Dhar and Ramaswamy in Ref. . In this limit slopes are restricted to $`\sigma _\pm (i,j)\le 1`$, and thus the CS rule remains inactive.
## II Model A: Multifractal scaling behavior of landslides
Correlation times $`t_c>1`$ in model A are motivated by varying toppling conditions after an avalanche has commenced, which represents a natural choice in the case of long relaxation times, such as the geological evolution of landslides. It has been shown that this type of temporal disorder is a relevant perturbation both for the evolution of landslides and for directed percolation processes. In this model each site develops an individual time scale of activity, which then contributes to the whole event (avalanche). As a consequence, the distribution of avalanche durations $`P_A(t,L)`$ in the scaling region exhibits multifractal scaling properties when the system size $`L`$ is varied, according to the expression:
$$P_A(t,L)=(L/L_0)^{\varphi _t(\alpha _t)};\alpha _t\equiv \left(\mathrm{ln}\frac{t}{t_0}\right)/\left(\mathrm{ln}\frac{L}{L_0}\right).$$
(1)
In Fig. 2 the distribution of durations of avalanches is shown for $`p=0.7`$ and various lattice sizes. In the inset the spectral function $`\varphi _t(\alpha _t)`$ vs. $`\alpha _t`$ is determined by the scaling plot according to Eq. (1), with $`t_0=1/4`$ and $`L_0=1/4`$. The integrated distribution of durations exhibits a power-law behavior $`P(t)\sim t^{-(\tau _t-1)}`$ in the entire region $`p^{}\le p<1`$, with the $`p`$-dependent exponent $`\theta \equiv \tau _t-1`$, which is shown in the inset to Fig. 3. Similar nonuniversal behavior, with scaling exponents decreasing with the parameter $`p`$, is found for the distributions of size, $`D(s)\sim s^{-(\tau _s-1)}`$, and mass of avalanches, $`D(n)\sim n^{-(\tau _n-1)}`$ (see Ref. for a detailed analysis). The slopes of the various curves in the main Fig. 3 determine the dynamic exponent $`z(p)`$, which is also shown in the inset to Fig. 3. For values of the control parameter $`p`$ below a critical value $`p^{}\simeq 0.4`$ (see below) the critical steady states are no longer accessible by the dynamics.
## III Model B: Nonuniversal scaling in granular piles
For finite correlation times, i.e., by setting $`t_c=1`$, avalanches have on average a reduced number of topplings per site, compared with model A at the same value of the control parameter $`p`$. This leads to a smaller incidence of large avalanches, and thus to an increase of the scaling exponents with decreasing toppling probability $`p`$. In Fig. 4 the probability distribution of avalanche durations is shown for a few values of $`p`$ in the scaling region. On the other hand, for short correlation times the balance between the CS and CH toppling mechanisms is altered: by lowering $`p`$ the system builds up heights faster than in model A, and thus the CS mechanism becomes more effective, and eventually prevails at the boundary of the scaling region at $`p^{}`$. We find numerically that the scaling behavior is lost at $`p^{}\simeq 0.5`$. The scaling behavior for $`p^{}\le p<1`$ is characterized by nonuniversal $`p`$-dependent scaling exponents (see inset to Fig. 5 and Ref. ).
The scaling properties of the distribution of avalanche durations are determined by using the following finite-size scaling form
$$P_B(t,L)\simeq L^{-\theta z}𝒫(tL^{-z}),$$
(2)
where $`\theta \equiv \tau _t-1`$ as above, and $`z`$ is the dynamic exponent, which also depends on $`p`$. The scaling plots of $`P_B(t,L)`$ for various values of $`p`$ in the scaling region, and for three system sizes at each value of $`p`$, are shown in the inset to Fig. 4. Similar scaling properties are found for the distributions of size and length of avalanches (see Ref. for a detailed discussion). In addition to the temporal distribution discussed above, here we also concentrate on the distribution of mass of avalanches, $`P_B(n,L)`$, satisfying the scaling form $`P_B(n,L)\sim n^{-(\tau _n-1)}𝒬(nL^{-D_n})`$, where the mass $`n`$ of an avalanche is determined as the total number of grains that slide during one avalanche. In Fig. 5 the distribution of mass of avalanches is shown for a few different values of the parameter $`p`$ in the scaling region. In the inset to Fig. 5 we plot the exponents $`\theta (p)`$ and $`\tau _n(p)-1`$, for duration and mass of avalanches, respectively, the dynamic exponent $`z(p)`$, and the fractal dimension of mass $`D_n(p)`$ against $`p`$. For $`p\ge 0.5`$ the following scaling relations are satisfied (cf. inset to Fig. 5): $`(\tau _n-1)D_n=z\theta =\alpha `$, where $`\alpha \equiv \tau _\ell -1`$ is the exponent of the length of avalanches, which is determined in Ref. . The dynamic exponent $`z`$ which appears in the scaling form (2) can also be determined from the slopes of the curves $`\langle T\rangle _\ell `$ vs. $`\ell `$, similarly to how it was determined in model A. The values obtained are in good agreement, within numerical error bars, with those obtained from the scaling plots in Fig. 4. Values of the exponents at $`p=0.4`$ are taken from the straight sections of the lines representing the distributions of duration and mass for smaller system sizes $`L\le 128`$. As indicated in the inset to Fig. 5, these values do not satisfy the scaling relations within error bars, indicating that $`p=0.4`$ is already beyond the edge of the scaling region in model B (see discussion in Sec. IV).
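The collapse implied by Eq. (2) is straightforward to perform numerically. The helper below is our own sketch: it rescales measured distributions so that curves for different $`L`$ overlap when $`\theta `$ and $`z`$ are chosen correctly (the exponential scaling function in the accompanying check is used only to fabricate self-consistent test data):

```python
def collapse(curves, theta, z):
    """Rescale P(t, L) according to P(t, L) ~ L**(-theta*z) * f(t * L**(-z)).

    `curves` maps system size L to a list of (t, P) points; returns
    {L: [(t * L**(-z), P * L**(theta*z)), ...]} -- the collapsed curves,
    which should fall on a single master curve for the right (theta, z).
    """
    return {L: [(t * L ** (-z), P * L ** (theta * z)) for t, P in pts]
            for L, pts in curves.items()}
```

Plotting the rescaled points for several system sizes and tuning $`(\theta ,z)`$ until the curves overlap reproduces the kind of procedure behind the inset of Fig. 4.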
## IV Universal criticality at the edge of the scaling region
When the control parameter $`p`$ is varied through a critical value $`p^{}`$ we find that the scaling behavior of the avalanche distributions is lost, indicating that self-organized critical states are no longer accessible by the dynamics. By numerically simulating various distributions and applying the appropriate scaling analysis, it was shown that critical steady states disappear below $`p=0.4`$ in model A, and below $`p=0.5`$ in model B. Here we argue that the dynamic rules with different correlation times $`t_c`$ in these models lead to different prevailing relaxation mechanisms at the edge of the scaling region, and thus to different values of $`p^{}`$ and to two different universality classes of scaling behavior. In particular, in model A we find that the scaling exponents of large avalanches $`\theta (p^{})`$, $`\tau _s(p^{})`$, etc., are close to the universality class of parity-conserving (PC) branching and annihilating random walks, whereas in model B the exponents at $`p^{}`$ reach the values of the mean-field SOC universality class.
Although the relaxation rules in both model A and model B are a complex interplay of the probabilistic critical-height and deterministic critical-slope rules, we may distinguish two basic types of local branching and annihilating processes that take part in propagating an avalanche in these models. Propagation of an avalanche may stop at a site to which one or two particles drop in time step $`t`$ in the following two cases: (1) A single dropped particle will not propagate further if the site had height zero; that is, “annihilation” $`A\to 0`$ occurs with probability $`1-\rho `$, where $`\rho `$ is the dynamically changing probability that a site has height $`h\ge 1`$. (2) When two particles drop to a site at time $`t`$, the avalanche may not proceed if the diffusion probability $`p`$ is too low, i.e., $`A+A\to 0`$ occurs with probability $`1-p`$. Note that since the number of particles is conserved by the processes in the interior of the pile, “annihilation” means accumulation of particles at a site, which will thus take part in future events, in contrast to the case of chemical reactions, where particles become extinct. When the conditions for toppling are fulfilled, propagation of an avalanche represents a branching process which consists of two steps. A site toppled at time $`t`$ transfers two particles forward; however, the instability is transferred to its four neighbors, while the site itself cannot topple in the next time step. Toppling of an isolated site away from the open boundaries by the critical height (CH) rule makes four neighboring sites candidates for toppling in the next time step, and if these four sites topple, they make nine new candidates for toppling, etc., along the chain $`1\to 4\to 9\to 16\to 25\to \dots `$. Since each toppled site, both initial and triggered sites, topples by two particles in the CH mechanism, this toppling chain represents a reaction $`A\to (m+1)A`$ with an odd number of offsprings $`m=3,5,7,9,\dots `$ per initial particle.
The same conclusion holds for topplings by the critical slope (CS) rule with two simultaneously unstable slopes. If, however, a site topples by the critical slope (CS) rule by dropping one particle along one unstable slope, it will trigger three neighboring sites to topple by the CS mechanism, and another toppling chain occurs as $`1\to 3\to 7\to \dots `$, i.e., $`m=2,4,6,\dots `$ offsprings per initial particle.
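The parity of the offsprings along these two chains is easy to verify; the few lines below are our own illustration, with the CS chain extrapolated from the quoted pattern as $`a_{k+1}=a_k+2k`$:

```python
def ch_chain(steps):
    """Candidates triggered by an isolated CH toppling: 1, 4, 9, 16, ... (squares)."""
    return [(k + 1) ** 2 for k in range(steps)]


def cs_chain(steps):
    """Candidates for a single-slope CS toppling: 1, 3, 7, ... (a_{k+1} = a_k + 2k)."""
    chain = [1]
    for k in range(1, steps):
        chain.append(chain[-1] + 2 * k)
    return chain


def offsprings(chain):
    """Offsprings per step: successive differences along a toppling chain."""
    return [b - a for a, b in zip(chain, chain[1:])]
```

The CH chain yields only odd offspring numbers (3, 5, 7, 9, ...) and the CS chain only even ones (2, 4, 6, ...), which is the parity distinction invoked in the argument above.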
Diffusion-limited branching annihilating random walks (BARW) have been studied by field-theory methods (for a recent review see and references therein). It has been recognized that $`d_c=2`$ is the upper critical space dimension, and that BARW with an even number of offsprings in $`d=1`$ belong to the PC universality class, whereas the directed percolation (DP) universality class is found in the case of an odd number of offsprings. Two examples of dynamical processes in $`1+1`$ dimensions belonging to the PC universality class have been studied numerically in Refs. and . It should be stressed that, in contrast to BARW and DP processes, the present models A and B are dynamical, and thus the propagation rules apply statistically and depend on the history of the state of the system. Recently an analogy between directed percolation and a stochastic dynamical model with critical height rules has been discussed in Ref. .
In model B, the short correlation time makes relaxation at a site less effective with decreasing $`p`$, and thus heights build up efficiently, leading eventually to $`\rho (p^{})\to 1`$. (A transverse section of the pile in model B at $`p^{}`$ is given in Fig. 6 (bottom).) Therefore, when an instability starts, it may trigger a mixture of the branching processes described above, making the instability transfer back and forth on the two-dimensional lattice and evolve in time. The fractal dimension associated with the mass of avalanches at the edge of the scaling region was found to be $`D_n(p^{})\simeq 2`$ (see inset to Fig. 5). Thus an avalanche appears to be compact in $`2`$-dimensional space and, since next-toppled sites are neighbors in time ($`t_c=1`$), it represents a connected object in $`2+1`$ dimensions. Starting an instability in a full lattice, i.e., with no threshold condition, will trigger an avalanche which propagates as a directed percolation cluster in 3 dimensions, until eventually too many sites have height zero and the avalanche stops. Thus $`p^{}`$ should coincide with the site-directed percolation threshold on a simple cubic lattice, $`p_c^{SDP}=0.435`$. The mass of an avalanche is defined as the number of particles that slide during the avalanche, and thus it is equivalent to the number of branchings. Therefore, since for $`p=p^{}`$ the effective dimension $`D_n(p)`$ reaches the upper critical dimension of BARW, we may expect the mean-field universality class for the scaling behavior of avalanches. Our numerical results listed in Table I confirm this conclusion. A schematic phase diagram is shown in Fig. 7.
The situation is different in model A (cf. Fig. 1), where with decreasing diffusion probability $`p`$ an avalanche is either extinguished quickly (short avalanches) or lives much longer (large avalanches), with large separation times. In turn, this leads to efficient topplings at each affected site due to many attempts within the correlation time $`t_c\gg 1`$. In the resulting steady state most of the sites have heights $`h<h_c`$ (cf. Fig. 6 (top)). Therefore, only toppling by the CH rule takes place and the threshold condition is still active, in contrast to model B. In model A a site $`(i,j)`$ toppled at time $`t`$ may trigger topplings at time $`t+1`$ at three neighboring sites, since the site toppled at time step $`t-1`$ will not fulfill the threshold condition ($`h\ge 2`$) at time $`t+1`$. It turns out that among the three neighbors fewer than two sites topple on average, therefore leading to a chain of toppled sites with few branches, which is embedded in $`2+1`$-dimensional space-time. However, affected sites which do not topple at the first attempt due to the low probability $`p`$ may topple in later time steps before the avalanche dies off, thus starting a new chain. The avalanche is made of a set of such chains, and has the fractal dimension $`D_n=1.48`$. We believe that this effectively low-dimensional BARW process, although it takes place in $`2+1`$-dimensional space-time, is the reason for the PC universality class in model A. Another reason for the occurrence of the PC universality class in reaction-diffusion processes might be the existence of more than one symmetric absorbing state, as discussed in and . The process is reminiscent of bond-directed percolation on a 3-dimensional simple cubic lattice; thus we also expect that $`p^{}\simeq p_c^{BDP}=0.382`$. In the phase diagram in Fig. 7 the phase boundaries for model A (dashed lines) separate the reactive phase from the critical and noncritical absorbing phases.
In the phase diagram in Fig. 7 the phase boundaries for model A (dashed lines) separate the nonconducting phase from the conducting critical and noncritical phases. In model B the noncritical conducting phase exists only along the line $`\rho =1`$ below the MF point, and a finite slope occurs via a phase transition at the SB point. On the other hand, in model A our results suggest that noncritical steady states occupy a finite region close to the right corner, and that a finite slope occurs asymptotically at $`p=0`$, $`\rho =1`$. Further analysis is necessary in order to find the precise location of the PC point and the nature of the phase transition between critical and noncritical conducting states. Along the phase boundaries between the points PC and DR, and between MF and DR, we have the nonuniversal criticality of model A and model B, respectively, discussed in the present work. The point marked DR at $`\rho =0.5`$, $`p=1`$ corresponds to the universal SOC of the Dhar–Ramaswamy model.
Sets of the exponents for $`p=0.4`$ are summarized in Table I. Exponents in model B at this value of $`p`$ are estimated from the straight sections of lines in the subcritical region for the smaller lattice size $`L=128`$. The value of the exponent $`\tau _s`$ is taken from Ref. . For comparison, also shown are the numerical values of the exponents for the PC universality class, from Ref. , and the mean-field self-organized criticality exponents, from Ref. . Note that our exponent $`\theta `$ corresponds to the survival probability distribution exponent $`\delta `$ in Ref. , and $`z\equiv 2\nu _{}/\nu _{}`$, and that the scaling relation $`\tau _s-1=\theta /(\theta +1)`$ holds. The exponent $`\tau _n`$ for the mass of avalanches and the roughness exponent $`\chi `$ are unique to granular piles, and cannot be defined in models of chemical reactions or damage spreading, considered in Refs. . We estimate the roughness exponent $`\chi `$ from the contour of several transverse sections of the pile (two examples are given in Fig. 6). For instance, by using the box-counting method we find the fractal dimension of the contour curve in model B to be $`d_f=1.179`$–$`1.183`$, and using the expression $`\chi =d_f-1`$ leads to the value listed in Table I.
## V Conclusions
We have shown that sandpile automata with mixed relaxation rules of stochastic diffusion and deterministic branching processes are capable of describing the nonuniversality of self-organized critical states and a loss of scaling at a critical value of the control parameter, in qualitative agreement with the observed behavior in natural and laboratory granular flow. Differences in the relaxation rules due to the internal correlation time lead to distinct dynamic critical states. In particular, the unlimited (within the lifetime of an avalanche) correlation time $`t_c`$ in model A leads to multifractal scaling behavior, and the scaling exponents of large avalanches decrease with decreasing values of the control parameter $`p`$. On the other hand, the finite correlation time $`t_c=1`$ in model B leads to an increase of the scaling exponents with decreasing $`p`$, and to finite-size scaling properties of avalanches in the entire scaling region $`p^{}\le p<1`$. At the edge of the critical region at $`p^{}`$, the dominating relaxation mechanism of modulo-two-conserving branching processes and the effectively low dimensionality of the process lead to criticality in the parity-conserving universality class in model A. In model B the building up of a global slope appears to be dominant on top of the above branching processes, which thus appear to have an effective dimension exceeding the upper critical dimension of BARW, and hence mean-field scaling exponents. It should be stressed that the numerical values of the exponents listed in Table I prove the closeness to these universality classes within numerical error bars, which we estimate as 0.03. The value of the exponent $`\tau _n=1.66`$ in mean-field models is known only numerically, whereas in branching processes, which are equivalent to sandpiles with a fixed number of grains per toppled site, there is the equality $`\tau _n=\tau _s=3/2`$.
Study of the details of the dynamic phase transition in these models, e.g., in terms of the order parameter and its fluctuations, is left out of the present paper (see Ref. for the appearance of a finite slope at the SB point in model B). However, due to the scaling relations among the various exponents at the transition, the observed different universality classes of avalanche statistics at $`p^{}`$ indicate that the exponents of the order parameter $`\beta `$ and of the correlation length $`\nu _{}`$ should belong to two distinct universality classes of dynamic phase transitions in these models. Our results suggest that although the basic relaxation rules in laboratory granular piles and natural landslides might be the same, details of the actual implementation of these rules, such as variation of the control parameter during the course of an avalanche, might lead to entirely different critical states.
## Acknowledgments
This work was supported by the Ministry of Science and Technology of the Republic of Slovenia. I would like to thank Uwe Täuber for helpful discussions.
# Using rigorous ray tracing to incorporate reflection into the parabolic approximation
## References
1. V. A. Fock, Electromagnetic Diffraction and Propagation Problems (Pergamon, New York, 1965) Chapter 11.
2. F. D. Tappert and R. H. Hardin, “Computer Simulation of Long Range Ocean Acoustical Propagation Using the Parabolic Equation Method”, Proceedings 8th International Congress on Acoustics Vol. II (Goldcrest, London, 1974) p. 452.
3. M. D. Collins and R. B. Evans, J. Acoust. Soc. Am. 91, 1357 (1992); J. F. Lingevitch and M. D. Collins, J. Acoust. Soc. Am. 104, 783 (1999).
4. E. R. Floyd, J. Acoust. Soc. Am. 60, 801 (1976); 75, 803 (1984); 79, 1741 (1986); 80, 877 (1986).
5. E. R. Floyd, Found. Phys. Lett. 9, 489 (1996), quant-ph/9707051.
6. E. R. Floyd, Phys. Essay 5, 130 (1992); 7, 135 (1994); An. Fond. L. de Broglie 20 263 (1995).
7. S. T. McDaniel, Am. J. Phys. 47, 63 (1979); J. Acoust. Soc. Am. 58, 1178 (1975).
8. P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Part I, (McGraw-Hill, New York, 1953) pp. 691, 706.
9. R. J. Urick, Principles of Underwater Sound for Engineers (McGraw-Hill, New York, 1967) pp. 187–234.
## 1 Introduction
The multiplicity function (hereafter MF) is a powerful tool to test the various cosmological scenarios (Combes & Boissé 1991) and provides an independent way to measure the index $`n`$ of the initial perturbation spectrum at the era of baryonic recombination (Turner & Gott III 1976).
The CRoNaRio project is a joint enterprise among Caltech and the astronomical observatories of Napoli, Roma and Rio de Janeiro, aimed at producing the first general catalogue of all objects visible on the DPOSS (Digitized Palomar Observatory Sky Survey). The final Palomar-Norris catalogue will include astrometric, photometric (in the three Gunn-Thuan bands g, r and i) and rough morphological information for an estimated $`2\times 10^9`$ stars and $`5\times 10^7`$ galaxies. More than $`60\%`$ of the catalogues are already available and are currently being used for many scientific applications.
## 2 The van Albada algorithm
In order to compile a catalogue of candidate groups of galaxies in the absence of redshift information, we implemented a slightly modified version of the van Albada algorithm (Soares 1989).
This algorithm makes use of apparent magnitude and projected position in the sky only, and gives, for each pair of adjacent galaxies, their probability of being physically related.
Assuming Poisson statistics, the probability that the angular distance from a fixed galaxy to its nearest non-physical companion lies between $`\theta `$ and $`\theta +d\theta `$ is given by:
$$P_1(\theta )d\theta =\mathrm{exp}\left[-\pi n\theta ^2\right]2\pi n\theta d\theta .$$
The introduction of a dimensionless distance $`x`$ (defined as the ratio of the observed to the expected mean distance to the nearest neighbour) allows one to combine the angular separations of different pairs into a single distribution (Fig. 1), removing the effects of clustering in the background.
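As an illustrative sketch (ours, not the actual CRoNaRio implementation; all function names are assumptions), the Poisson chance-alignment probability and the dimensionless separation $`x`$ can be written as:

```python
import math

def p_chance(theta, n):
    """Probability that a background (non-physical) neighbour of a
    Poisson field of surface density n lies closer than theta:
    P(<theta) = 1 - exp(-pi * n * theta**2)."""
    return 1.0 - math.exp(-math.pi * n * theta ** 2)

def pair_probability(theta, n):
    """Complementary quantity: the chance that NO background galaxy
    falls within theta, i.e. exp(-pi * n * theta**2); a large value
    suggests the observed close pair is physical."""
    return math.exp(-math.pi * n * theta ** 2)

def dimensionless_separation(theta, n):
    """x = observed separation / expected mean nearest-neighbour
    distance; for a 2-D Poisson field the mean NN distance is
    1 / (2 * sqrt(n))."""
    return theta / (0.5 / math.sqrt(n))
```

In such a sketch, close pairs would be retained as candidate physical companions when this score exceeds a probability threshold of the kind discussed in the next section.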
## 3 The accuracy of the algorithm
In order to test the accuracy (lost groups) and the reliability (spurious groups) of the algorithm, we tested it on $`70`$ realistically simulated sky fields.
We first produced the galaxy background by assuming a uniform distribution and the field luminosity function given by Metcalfe et al. (1994); we then added simulated groups of galaxies according to the multiplicity function of Turner & Gott III (1976), with redshifts computed according to:
$$N(z)=4\pi \frac{\rho _0}{3}\left[\frac{8}{H_0^3}\left(z+1-\sqrt{1+z}\right)^3\right]$$
and absolute magnitude of the brightest galaxy in the group taken from the cumulative luminosity function for groups of galaxies:
$$\mathrm{\Phi }(M)dM=\mathrm{\Phi }^{*}\left[10^{0.4\left(M^{*}-M\right)}\right]^{\alpha +1}\mathrm{exp}\left[-10^{0.4\left(M^{*}-M\right)}\right]dM$$
where $`M^{*}=-20.85`$ and $`\alpha =-0.83\pm 0.17`$. Other parameters which were varied in the course of the simulations were:
* the diameter of the group, drawn from a Gaussian distribution centered at $`D_0=0.26`$ Mpc;
* the maximum possible redshift for a group, which was assumed to fall in the range $`0.2\le z\le 0.4`$.
Different simulations were performed in order to optimize the lower limit on the probability at which two galaxies are considered physical companions by the algorithm. The optimal value, i.e. the value ensuring the best compromise between accuracy and reliability (Fig. 2), turned out to be $`p\simeq 0.6`$.
With this choice, the modified van Albada algorithm succeeds in retrieving, from the simulated catalogues, more than $`90\%`$ of the groups having more than three components.
## 4 A preliminary Multiplicity Function from CRoNaRio data
Figure 3 gives the MF obtained from $`13`$ CRoNaRio plates (subtending a total solid angle of $`483deg^2`$). In the same figure we compare it with six other MFs taken from the literature:
1. MFs derived from magnitude-limited surveys making use of redshift information: Garcia (Garcia et al. 1993), CfA (Geller & Huchra 1983), NorthCfA (Ramella & Pisani 1997), ESP (Ramella et al. 1998)
2. MF derived from a diameter-limited survey: Maia (Maia & Da Costa 1989)
3. MF derived from a magnitude-limited survey without redshift information: Turner (Turner & Gott 1976)
Agreement is found between our MF, obtained with our algorithm without any three-dimensional data, and those in the literature based on redshift information. Therefore, even in the absence of redshift information, our algorithm makes it possible to explore wide areas of the sky and to retrieve groups ($`N>2`$) with an efficiency comparable to that of 3-D surveys.
## 5 An estimate of $`n`$
For the groups with at least one redshift available, the MF (as a function of luminosity) was calculated. By fitting the observed data to the theoretical function
$$f(L)=1-\pi ^{-1/2}\int _{\left(\frac{L}{L_c}\right)^{\left(1+n_{oss}/3\right)}}^{\mathrm{\infty }}e^{-t}t^{\frac{1}{2}-1}dt.$$
(1)
we obtained: $`n_{oss}=-1.0\,(-1.2,-0.8)`$; $`c=L_c/L^{*}=3.5\,(3.2,3.8)`$.
This allows an estimate of the spectral index $`n`$:
$$n=1.5+1.5n_{oss}=0.0\,(-0.3,0.3).$$
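As a consistency check (ours, not part of the original analysis): with $`a=(L/L_c)^{1+n_{oss}/3}`$, the integral in Eq. (1) is the upper incomplete gamma function $`\mathrm{\Gamma }(1/2,a)=\sqrt{\pi }\mathrm{erfc}(\sqrt{a})`$, so that $`f(L)=\mathrm{erf}(\sqrt{a})`$, and the quoted bounds on $`n`$ follow directly from the linear mapping:

```python
import math

def f_mf(L, L_c, n_oss):
    """Eq. (1) rewritten via the error function: with
    a = (L / L_c)**(1 + n_oss / 3), f(L) = erf(sqrt(a))."""
    a = (L / L_c) ** (1.0 + n_oss / 3.0)
    return math.erf(math.sqrt(a))

def spectral_index(n_oss):
    """n = 1.5 + 1.5 * n_oss, as quoted in the text."""
    return 1.5 + 1.5 * n_oss

# Best fit and bounds from the fit: n_oss = -1.0 (-1.2, -0.8)
n, n_lo, n_hi = (spectral_index(v) for v in (-1.0, -1.2, -0.8))
```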
# Supersensitive avalanche silicon drift photodetector
Z. Ya. Sadygov, M. K. Suleimanov, T. Yu. Bokova
Joint Institute for Nuclear Research, Dubna,Russia
\*) E-mail: sadygov@cv.jinr.ru
ABSTRACT
Physical principles of operation and the main characteristics of a novel avalanche photodetector developed on the basis of MOS (metal-oxide-silicon) technology are presented. The photodetector contains a semitransparent gate electrode and a drain contact that provides a drift of the multiplied charge carriers along the semiconductor surface. A high photocurrent gain (more than $`10^4`$) was achieved owing to the local negative feedback effect realized at the $`Si-SiO_2`$ boundary.
Special attention is paid to the possibility of developing a supersensitive avalanche CCD (charge-coupled device) for the detection of individual photons in the visible and ultraviolet spectral regions. Experimental results obtained with a two-element CCD prototype are discussed.
INTRODUCTION
For the last few decades, researchers have sought solid-state alternatives to photomultiplier tubes (PMTs) for applications in physics experiments and medical tomography. However, there is still no adequate solid-state analog of the commercial PMT for detecting weak light pulses consisting of a few photons at room temperature. In recent years, we have investigated the possibility of making relatively cheap APDs on the basis of MRS (metal-resistive layer-semiconductor) structures fabricated on low-resistivity silicon wafers. The main peculiarity of the MRS APDs is a local negative feedback (LNF) effect, which results in a self-stabilized avalanche process in the semiconductor. These LNF APDs were developed for use in the red and infrared spectral regions. Recently, we reported some characteristics of another design of LNF APD with high sensitivity in the visible and ultraviolet spectral regions. This report is devoted to the physical principles of operation and the main characteristics of a novel design of the LNF APD.
DEVICE DESIGN AND PRINCIPLES OF OPERATION
The new LNF APD contains a semitransparent titanium layer (gate electrode) separated from the semiconductor surface by a silicon oxide layer, and a guard ring (drain electrode) that provides a drift of the multiplied charge carriers along the $`Si-SiO_2`$ boundary (see Fig. 1). The titanium gate and the drain electrode can be provided with individual or common (key $`K_1`$ closed) aluminum contacts for the voltage supply.
Avalanche multiplication of the photocurrent occurs in the p-n junction (under the gate electrode), where the breakdown potential is specially reduced by an additional ion implantation. The voltage applied to the LNF APD is divided between the depletion layer of the semiconductor and the oxide layer. The hole carriers produced by an avalanche in a given microregion of the p-n junction collect in a small area of the $`Si-SiO_2`$ boundary, reducing the voltage drop across this microregion of the semiconductor. As a result, each avalanche is self-quenched within a few nanoseconds in the given microregion. The drift of the hole carriers from the avalanche region to the drain contact is provided by a high-resistivity layer during about 100 ns after quenching of the given avalanche. The parameters (thickness and surface resistivity) of this resistive layer are field-dependent, so the LNF effect can be adjusted by the gate potential. This character of the avalanche process is called the local negative feedback effect.
MAIN PECULIARITIES OF DEVICE PERFORMANCE
Fig. 2 shows the measured photocurrent gain versus the negative bias on the drain electrode. The LNF APD sample had a photosensitive area of 1 mm diameter. An initial pulsed photocurrent with a duration of 0.4 ms and an amplitude of 6 nA was used in these measurements. One can see that the character of the avalanche process in the LNF APD is fully determined by the gate potential. At fixed gate biases, a regime can be reached in which the gain becomes independent of the drain potential. This peculiarity of the tested device points to the unique possibility of building multichannel avalanche photodetectors with a high spatial uniformity of gain.
Another mode of operation of this device involves a pulsed three-level supply of the gate electrode (for example, $`V_g`$ = -69.5 V, -55 V, -45 V at $`V_d`$ = -63 V = const). Under these conditions, a CCD (charge-coupled device) type of operation of the LNF APD was realized, and a high gain of about $`10^4`$ was obtained. A light-emitting diode with a wavelength of 450 nm and a pulse duration of 50 ns provided the light signal for detection.
The amplitude distributions of the signals detected by the LNF APD are presented in Fig. 3 for light pulses containing one photon on average. The photomultiplier FEU-130 (made in Russia) was used to count the average number of photons in the light pulses. One can see that a single-photon detection mode with high efficiency was realized at room temperature. If the threshold amplitude is set at the 150th channel, the detection efficiency of the LNF APD is about 25$`\%`$.
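For orientation (our illustration, not the authors' calibration procedure), Poisson statistics for light pulses with a mean occupancy of one photon give the fractions of empty and occupied pulses:

```python
import math

def poisson_pmf(k, mean):
    """Probability of exactly k photons in a pulse with the given
    mean occupancy."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

mean_photons = 1.0                      # "one photon on average"
p_empty = poisson_pmf(0, mean_photons)  # ~0.368: pulses with no photon
p_occupied = 1.0 - p_empty              # ~0.632: pulses with >= 1 photon
```

Under these illustrative assumptions, the quoted 25% overall efficiency would correspond to roughly 0.25/0.63, i.e. about 40%, per pulse that actually contains at least one photon.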
PERSPECTIVES
As shown above, there are real possibilities for developing supersensitive multichannel photodetectors, as well as an avalanche-type CCD for single-photon detection in the visible and ultraviolet spectral regions. In our view, these unique properties of the presented LNF APDs offer good prospects for future applications in high energy physics, high-speed photonics and medical tomography.
REFERENCES
1. Z. Sadygov et al., Proc. SPIE - The Intern. Soc. for Opt. Eng., 1621 (1991) 158-168.
2. Z. Sadygov et al., IEEE Trans. Nucl. Sci., 43, 3 (1996) 1009-1013.
3. Z. Sadygov, Russian patent N2086027, application of May 30, 1996.
4. N. Bacchetta, Z. Sadygov et al., Nucl. Instr. and Meth., A387 (1997) 225-230.
FIGURE CAPTIONS
Figure 1. Simplified structure of the LNF APD. 1- thick Al electrodes; 2- semitransparent Ti gate electrode; 3- dielectric layer ($`SiO_2`$); 4-surface drift layer; 5- p-Si layer; 6- n-Si layer with additional ion-implantation; 7- n-Si wafer; 8- p-Si guard (drain) ring.
Figure 2. Photocurrent gain as a function of drain voltage. 1- $`V_g`$=-68.5V=const; 2- $`V_g`$=-69.0V=const; 3- $`V_g`$=-69.5V= const; 4- $`V_g`$=$`V_d`$ (key $`K_1`$ is closed).
Figure 3. Amplitude spectrum of LNF APD output pulses. 1- dark condition; 2- single-photon mode; 3- double-photon mode.
# ON THE RAPID SPIN-DOWN AND LOW LUMINOSITY PULSED EMISSION FROM AE AQUARII
## 1 INTRODUCTION
AE Aqr, which has a spin period of $`P_{*}=33.08`$ s and an orbital period of 9.88 hr (Patterson 1979; Welsh, Horne, & Gomer 1995), is usually classified as a DQ Her-type magnetic cataclysmic variable (CV), or an intermediate polar. It consists of a magnetic white dwarf and a companion star of spectral type K3 - K5. The companion fills its Roche lobe and transfers matter to the white dwarf (Casares et al. 1996). However, optical observations (e.g., single-peaked Balmer emission lines) provide little evidence for a Keplerian accretion disk in the binary system (Welsh, Horne, & Gomer 1998 and references therein).
Pulsations at the spin period are clearly seen from optical to X-ray, but there are no radio pulsations at this period (Bastian, Beasley, & Bookbinder 1996). The optical & UV pulse profiles show a sinusoidal double peak, where the two peaks are separated by 0.5 in phase and their amplitudes are unequal (Eracleous et al. 1994). On the other hand, the X-ray pulse profile has a sinusoidal single peak which suggests that the X-rays have a different origin (Eracleous, Patterson, & Halpern 1991; Choi, Dotani, & Agrawal 1999). Flares are aperiodic and last for $`10`$ min to $`1`$ hr. The flares have been observed for AE Aqr in various wave bands (Patterson 1979; Bastian, Dulk, & Chanmugam 1988; Eracleous & Horne 1996; Choi, Dotani, & Agrawal 1999). During a large flare, the UV & X-ray luminosities increase by a factor of 3 ($`1\times 10^{32}`$ erg/s in UV and $`2\times 10^{31}`$ erg/s in X-ray) compared with the quiescent luminosities (the quiescent luminosity in UV is $`4\times 10^{31}`$ erg/s and $`7\times 10^{30}`$ erg/s in X-rays).
De Jager et al. (1994) found that AE Aqr is steadily spinning down at a rate of $`\dot{P}_{*}=5.64\times 10^{-14}`$ s/s. One interesting result from this observation is that the spin-down power, $`L_{sd}=I\mathrm{\Omega }_{*}\dot{\mathrm{\Omega }}_{*}\simeq 5\times 10^{34}M_{*,1}R_{*,9}^2`$ erg/s, where $`R_{*,9}`$ is the white dwarf radius in units of $`10^9`$ cm, $`M_{*,1}`$ is the stellar mass in units of the solar mass $`\mathrm{M}_{\odot }`$, and $`\mathrm{\Omega }_{*}=2\pi /P_{*}`$, is much greater than the observed quiescent UV and X-ray luminosities, or even the bolometric luminosity. This fact implies that the spin-down power must be mostly converted into forms of energy other than the observed emission. Eracleous & Horne (1996) and Wynn et al. (1997) proposed a magnetic propeller model in which most of the accreted matter is expelled from the binary system; in this picture the spin-down power is consumed in expelling the accreted matter.
Although the propeller model offers a good explanation for the observational features of AE Aqr, there is no direct evidence of a high-velocity gas stream escaping from the system. In addition, the propeller model requires a high mass-transfer rate, $`\dot{M}\simeq 7\times 10^{18}`$ g/s (e.g. Ikhsanov 1998), in order to account for the observed spin-down rate. This mass-transfer rate is greater by one order of magnitude or more than the rate expected from the empirical $`\dot{M}`$-period relation (Patterson 1984). Alternatively, several studies have invoked a pulsar-like spin-down mechanism, in which the spin-down power goes into electromagnetic dipole radiation and particle acceleration (e.g., de Jager 1994, Ikhsanov 1998). According to the recent study by Ikhsanov (1998), the rapid spin-down rate of AE Aqr can be explained by this mechanism if the white dwarf has a strong surface magnetic field of $`\sim `$50 MG. This field strength exceeds the upper limit of $`5`$ MG derived by Stockman et al. (1992). There have also been sporadic attempts to connect the observed spin-down power to the yet-to-be-confirmed TeV $`\gamma `$-ray emission of $`10^{32}`$ erg/s (Meintjes et al. 1992, 1994; Bowden et al. 1992), which are not convincing. The exact nature of the spin-down power in AE Aqr therefore remains to be clarified.
In this paper, we propose that gravitational radiation mainly drives the spin-down, while accretion, ejection, and electromagnetic radiation from the spinning white dwarf are responsible for the observed luminosities in the various energy bands. We construct a self-consistent picture for AE Aqr by removing the large spin-down power from the observable electromagnetic emission. For our numerical estimates, we adopt $`R_{*}=7\times 10^8`$ cm, $`M_{*}=0.8\mathrm{M}_{\odot }`$, and the moment of inertia $`I_{*}=3\times 10^{50}`$ g cm<sup>2</sup>.
## 2 SPIN-UP AND SPIN-DOWN IN AE AQR
AE Aqr contains a very rapidly spinning white dwarf with spin period $`P_{*}=33.08`$ s. It is an unusual white dwarf in that its spin period is quite close to the theoretical break-up spin period. Such a short spin period could be achieved through accretion only if the accreted mass $`\mathrm{\Delta }M`$ over time $`\mathrm{\Delta }t`$ is at least as high as
$$\mathrm{\Delta }M>I_{*}\mathrm{\Omega }_{*}/(GM_{*}R_{*})^{1/2}\simeq 0.1\mathrm{M}_{\odot },$$
(2-1)
where this estimate has to be taken as a lower bound, since we have assumed that the accreted material carries the specific angular momentum $`(GM_{*}R_{*})^{1/2}`$, which is realized only when the Keplerian accretion disk extends all the way down to the stellar surface. In reality, magnetic truncation could limit the specific angular momentum delivered to the star to a value much lower than the Keplerian value at the stellar surface (e.g. Frank, King, & Raine 1992; see below). For an accretion rate $`\dot{M}=10^{16}\dot{M}_{16}`$ g/s, the accretion of angular momentum has to occur for a duration of
$$\mathrm{\Delta }t>6\times 10^7(\mathrm{\Delta }M/0.1\mathrm{M}_{\odot })\dot{M}_{16}^{-1}\mathrm{yr}.$$
(2-2)
Although such an accretion is possible, it is unclear how the accretion flow – magnetosphere interaction could have affected the spin-up of AE Aqr. Presently, there is no indication of well-defined rotation of accreted material in the form of the accretion disk. If there is no accretion disk in AE Aqr, accretion could occur via the diskless accretion such as a ballistic accretion stream. The estimated spin-up time scale is short enough to be realized in binary systems similar to AE Aqr where the secondary is a K3 – K5 main sequence red dwarf.
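Eq. (2-1) can be checked numerically with the parameters adopted in Section 1 (a rough cgs sketch of ours, not part of the original text):

```python
import math

G = 6.674e-8          # gravitational constant, cgs
MSUN = 1.989e33       # solar mass, g

# Parameters adopted in the text
I_star = 3e50         # moment of inertia, g cm^2
M_star = 0.8 * MSUN
R_star = 7e8          # cm
Omega = 2 * math.pi / 33.08   # spin frequency, s^-1

# Eq. (2-1): minimum accreted mass, assuming the material carries the
# Keplerian specific angular momentum at the stellar surface
dM = I_star * Omega / math.sqrt(G * M_star * R_star)
dM_solar = dM / MSUN  # ~0.1
```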
There have been discussions on possible spin-down mechanisms without a clear favorite. The main problem for various spin-down mechanisms arises due to the fact that the inferred large spin-down power is not observed in any detectable forms such as high velocity gas or high luminosities. The relatively low quiescent luminosities also pose a serious problem for any mechanisms involving accretion. If the white dwarf’s dipole-type magnetic field is strong enough, the electromagnetic power due to dipole radiation could account for the rapid spin-down as discussed by Ikhsanov (1998). The electromagnetic power is estimated as
$$L_{em}=2\mu _{*}^2\mathrm{sin}^2\theta \mathrm{\Omega }_{*}^4/3c^3\simeq 2.5\times 10^{30}\mathrm{sin}^2\theta B_{*,6}^2R_{*,9}^6\mathrm{\Omega }_{*,1}^4\mathrm{erg}/\mathrm{s},$$
(2-3)
where $`\mu _{*}=B_{*}R_{*}^3`$ is the magnetic moment of the stellar dipole field, $`\mathrm{\Omega }_{*,1}=\mathrm{\Omega }_{*}/0.1`$ s<sup>-1</sup>, $`B_{*,6}`$ is the stellar polar surface field strength in units of $`10^6`$ G, and $`\theta `$ is the misalignment angle between the rotation axis and the magnetic axis. For AE Aqr, with $`\mathrm{\Omega }_{*}=0.19`$ s<sup>-1</sup>, we expect $`L_{em}\simeq 4\times 10^{30}B_{*,6}^2`$ erg/s, or, for the observed upper limit $`B_{*,6}\lesssim 5`$, $`L_{em}<1\times 10^{32}`$ erg/s, which is at least two orders of magnitude lower than the observed spin-down power.
The rapid spin-down has been widely attributed to the propeller action in which the inflowing material is flung out at a radius $`R_x`$. This radius is likely to be beyond the corotation radius and is conventionally understood as the magnetic truncation radius. The spin-down power due to the propeller action could be estimated as
$$L_{prop}\simeq \dot{M}R_x^2\left(GM_{*}/R_x^3\right)\simeq GM_{*}\dot{M}R_x^{-1},$$
(2-4)
where $`R_x`$ is the radius at which the accretion flow is expelled. If the propeller action occurs near the corotation radius
$$L_{prop}\simeq \dot{M}(GM_{*}\mathrm{\Omega }_{*})^{2/3},$$
(2-5)
and the observed spin-down power requires
$$\dot{M}\simeq L_{prop}/(GM_{*}\mathrm{\Omega }_{*})^{2/3}\simeq L_{sd}/(GM_{*}\mathrm{\Omega }_{*})^{2/3}\simeq 3\times 10^{17}\mathrm{g}/\mathrm{s}.$$
(2-6)
In this case, the expelled material is likely to be flung out at a characteristic speed of $`R_c\mathrm{\Omega }_{*}\simeq 2.7\times 10^8`$ cm/s. The expected high-velocity outflow has not been detected in AE Aqr. If the propeller action occurs at the magnetospheric radius $`R_{mag}\simeq 1.4\times 10^{10}\dot{M}_{16}^{-2/7}M_{*,1}^{-1/7}B_{*,6}^{4/7}`$ cm (cf. eq. 6-2 below), $`L_{sd}`$ is accounted for by the propeller action if $`L_{sd}\simeq GM_{*}\dot{M}R_o^{-1}`$, or
$$\dot{M}\simeq 8\times 10^{17}B_{*,6}^{4/9}\mathrm{g}/\mathrm{s}.$$
(2-7)
In short, the mass accretion rate required for the propeller action to account for the spin-down power is likely to be considerably higher than $`10^{17}`$ g/s, while the observed luminosities indicate that the mass accretion rate is considerably lower than $`10^{17}`$ g/s. Any magnetized model of the Ghosh-Lamb type (e.g., Frank et al. 1992, Yi 1995 and references therein) would lead to essentially the same conclusion, since the torque achieved in variations of the model is limited, on dimensional grounds, to the above propeller estimate. We therefore conclude that the observed luminosities and the spin-down power are incompatible if the spin-down power results in the emission of observable radiation.
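The scalings of Eqs. (2-5) and (2-6) can be sketched numerically (our illustration; we set $`L_{sd}=2\times 10^{34}`$ erg/s, the value obtained by evaluating the spin-down power expression of Section 1 with the adopted mass and radius):

```python
import math

G = 6.674e-8               # cgs
MSUN = 1.989e33
M_star = 0.8 * MSUN
Omega = 2 * math.pi / 33.08
L_sd = 2e34                # erg/s (assumed; see lead-in)

# Eq. (2-6): accretion rate needed if the propeller acts near corotation
Mdot_corot = L_sd / (G * M_star * Omega) ** (2.0 / 3.0)

# Corotation radius and the characteristic ejection speed R_c * Omega
R_c = (G * M_star / Omega ** 2) ** (1.0 / 3.0)
v_eject = R_c * Omega      # ~2.7e8 cm/s, as quoted in the text
```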
## 3 SPIN-DOWN DUE TO GRAVITATIONAL RADIATION
We have pointed out that electromagnetic dipole radiation with $`B_{*}\lesssim 5\times 10^6`$ G and the propeller action with $`\dot{M}\lesssim 10^{17}`$ g/s are unable to account for the observed spin-down torque. We propose that the spin-down power does not transform into the observed electromagnetic radiation, and we consider gravitational radiation as an alternative spin-down mechanism. This mechanism is attractive because the resulting spin-down power does not need to go into observable electromagnetic radiation, which effectively avoids the long-standing question of the non-detection of the large spin-down power. Gravitational radiation would be a particularly interesting spin-down mechanism if the spin evolution is highly stable and shows no signs of disturbances in the accretion flow or the stellar magnetosphere. This appears to be the case in AE Aqr.
Although the white dwarf in AE Aqr spins unusually fast, under normal circumstances the gravitational radiation emission requires a rather high non-zero quadrupole moment, i.e. a large eccentricity. If we define the eccentricity as $`ϵ=[1-(R_2/R_1)^2]/[1+(R_2/R_1)^2]`$, where $`R_2`$ and $`R_1`$ are the radial extents of the star in the plane perpendicular to the rotation axis, the gravitational radiation power becomes
$$L_{gr}=\frac{32G}{5c^5}ϵ^2I_{*}^2\mathrm{\Omega }_{*}^6.$$
(3-1)
The spin-down power of AE Aqr is accounted for by $`L_{gr}`$ when
$$4\pi ^2I_{*}\dot{P}_{*}P_{*}^{-3}=\frac{32G}{5c^5}I_{*}^2(2\pi /P_{*})^6ϵ^2,$$
(3-2)
or
$$ϵ=(5c^5\dot{P}_{*}P_{*}^3/32GI_{*})^{1/2}\sim 1,$$
(3-3)
which essentially implies that the non-axisymmetric distortion of the white dwarf has to be too large to account for the observed spin-down power.
We have argued that a significant mass accretion must have occurred in order to account for the observed unusual spin period (e.g. eqs. 2-1, 2-2). One of the plausible possibilities is that the accreted material mostly lands on a small fraction of the total surface area near the magnetic poles. If this is the case as expected in significantly magnetized accretion case, the accreted material would provide a source of non-zero quadrupole moment if the magnetic axis is misaligned with the rotation axis. That is, the magnetically channeled material would spread from the magnetic poles while its spread is partially hindered by the strong stellar magnetic field. Conceivably, in a steady state achieved in the high mass accretion rate episode while the rapid spin-up occurred, prior to the present spin-down, the accreted material could form accretion mounds at the magnetic poles (e.g. Inogamov & Sunyaev 1999 and references therein).
If we assume that the accreted material is present at the magnetic poles in the form of the spatially limited blobs or mounds while the rotation axis and the magnetic axis are misaligned by an angle $`\theta `$, the time averaged rate of gravitational radiation power could be estimated as
$$L_{gr}=\frac{336G}{5c^5}\delta m^2R_{*}^4\mathrm{sin}^4\theta \mathrm{\Omega }_{*}^6,$$
(3-4)
where $`\delta m`$ is the amount of mass accumulated on one magnetic pole. We have assumed for simplicity that the accumulated material exists at the magnetic poles without any significant spatial spread and it remains unperturbed during each stellar rotation. We note that the large coefficient in the formula effectively amounts to a large eccentricity despite the fact that the bulk of the stellar mass is not perturbed and distributed in such a way that the contribution to the quadrupole moment is negligible.
By comparing $`L_{gr}`$ calculated above and the observed spin-down power of AE Aqr,
$$4\pi ^2I_{*}\dot{P}_{*}P_{*}^{-3}=\frac{336G}{5c^5}\delta m^2R_{*}^4\mathrm{sin}^4\theta \left(\frac{2\pi }{P_{*}}\right)^6,$$
(3-5)
we get
$$\delta m\simeq 1.6\times 10^{-3}\mathrm{sin}^{-2}\theta \mathrm{M}_{\odot },$$
(3-6)
which is roughly $`1.6\times 10^{-2}`$ of the minimum mass required for the spin-up of AE Aqr during the rapid accretion phase.
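Eq. (3-5) can be solved for $`\delta m`$ numerically (an illustrative cgs sketch of ours, taking $`\mathrm{sin}\theta =1`$):

```python
import math

G, c = 6.674e-8, 2.998e10          # cgs
MSUN = 1.989e33
I_star, R_star = 3e50, 7e8         # g cm^2, cm
P, Pdot = 33.08, 5.64e-14          # spin period and its derivative
Omega = 2 * math.pi / P

# Observed spin-down power, L_sd = I * Omega * Omega_dot
Omega_dot = 2 * math.pi * Pdot / P ** 2
L_sd = I_star * Omega * Omega_dot

# Eq. (3-4) solved for delta m (sin(theta) = 1):
# L_gr = (336 G / 5 c^5) * dm^2 * R^4 * Omega^6
dm = math.sqrt(L_sd * 5 * c ** 5 / (336 * G * R_star ** 4 * Omega ** 6))
dm_solar = dm / MSUN               # ~1.6e-3
```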
This power becomes rapidly negligible as the star slows down due to the sensitive dependence of the gravitational radiation power on the spin frequency. For the AE Aqr parameters, $`L_{gr}>L_{em}`$ occurs if
$$\delta m>2.1\times 10^{-5}\mathrm{sin}^{-1}\theta B_{*,6}\mathrm{M}_{\odot },$$
(3-7)
or, for $`\delta m\simeq 10^{-3}\mathrm{M}_{\odot }`$,
$$P_{*}<0.4\mathrm{sin}\theta B_{*,6}^{-1}\mathrm{hr}.$$
(3-8)
AE Aqr could well have been continuously spun-down after reaching a high spin frequency resulting from the high accretion phase. The above estimate indicates that the AE Aqr’s current rapid spin-down could continue to periods much longer than the present short spin period.
What kinds of magnetized white dwarf systems could show large spin-down powers caused by the gravitational radiation emission as in AE Aqr? Obviously the candidate systems have to be rapidly spinning, which is likely when the systems were spun-up by preceding high mass accretion rate flows. With strong magnetic fields, a significant fraction of the accreted mass could reside in the polar regions while the magnetic axes have to be substantially misaligned with the rotation axes. For these systems, if the mass accretion becomes high and the magnetic fields are strong, the propeller-like torque is likely to dominate over the gravitational radiation emission. The accumulated mass near the magnetic poles could spread over the stellar surface although the details of the spreading process could be highly complicated (e.g. Inogamov & Sunyaev 1999). The gradually cooling accretion mounds could be a source of persistent UV emission which should show significant pulse signals (see below).
## 4 ELECTROMAGNETIC EMISSION COMPONENTS AND PHYSICAL PARAMETERS OF AE AQR
Based on the large pulse fraction, as high as $`\sim 80\%`$ during quiescence (Eracleous et al. 1994), the most likely UV production site is the magnetic poles. The total UV luminosity (pulsed and non-pulsed) in quiescence is $`L_{uv}\simeq 4\times 10^{31}`$ erg/s, which corresponds to a nominal polar accretion rate of $`\dot{M}\simeq 2\times 10^{14}`$ g/s. If the UV emission is the result of the low-$`\dot{M}`$ accretion occurring at the magnetic poles, the expected emission temperature in the form of thermalized radiation is $`T_{uv}\simeq 2\times 10^4a^{-1/4}(\dot{M}/10^{14}\mathrm{g}/\mathrm{s})^{1/4}`$ K, where $`a\ll 1`$ is the fraction of the stellar surface area accreting near the magnetic poles.
On the other hand, the X-ray pulse fraction is much lower ($`\sim 30\%`$) during quiescence (Choi, Dotani, & Agrawal 1999). The straightforward implication of this low pulse fraction is that the X-ray emission site is much more extended than the narrow region around the magnetic poles.
If the accretion occurs in the form of an accretion stream directly impacting the magnetosphere, only a small fraction of the accreted material lands on the magnetic poles. We call this component the high-altitude accretion flow; it constitutes a small fraction of the total accretion flow. That is, we consider a picture in which a large fraction of the accretion stream hits low altitudes and a comparable fraction hits high altitudes and travels directly to the poles (Figure 1).
### 4.1 Low Altitude Accretion
The power resulting from the low altitude stream-magnetosphere interaction region is simply
$$L_x\simeq GM_{*}\dot{M}_x/R_x,$$
(4-1)
where we denote the location and luminosity by subscript $`x`$ assuming that most of the energy is released in the X-ray range and a certain fraction of the inflowing matter is expelled. X-ray emission is possible if the accretion stream’s kinetic energy is virialized after the accreted material gets shocked at the impact region.
The X-ray emission region is characterized by the simple magnetospheric radius (e.g. eq. 6-2)
$$R_{mag}\simeq 1.4\times 10^{10}\dot{M}_{16}^{-2/7}M_{*,1}^{-1/7}B_{*,6}^{4/7}\mathrm{cm},$$
(4-2)
where $`\dot{M}_{16}=\dot{M}/10^{16}`$ g/s. At this radius, the low altitude stream’s ram pressure becomes roughly equal to the magnetic pressure of the magnetosphere. We have assumed that the stream is nearly free-falling and the stream’s geometric cross-section is comparable to the spherical surface area at the interaction region.
If most of the UV emission is due to the low $`\dot{M}`$ accretion occurring at high altitudes, we expect as before,
$$L_{uv}\simeq GM_{*}\dot{M}_{uv}/R_{*}.$$
(4-3)
Therefore, we get
$$L_{uv}/L_x\simeq (\dot{M}_{uv}/\dot{M}_x)(R_x/R_{*}),$$
(4-4)
or, for the magnetospheric radius of the low-altitude accretion, $`R_x\simeq R_{mag}`$.
Using the observed $`L_{uv}/L_x\simeq 6`$ (Eracleous et al. 1994; Choi, Dotani, & Agrawal 1999),
$$\dot{M}_{uv}/\dot{M}_x\simeq 0.3\dot{M}_{16}^{2/7}M_{*,1}^{1/7}B_{*,6}^{-4/7},$$
(4-5)
which is uncertain owing to the poorly known $`B_{*}`$ and $`\dot{M}`$ in AE Aqr.
If most of the propeller action occurs near the corotation radius $`R_c\simeq 1.4\times 10^9`$ cm, which would be close to the magnetospheric radius $`R_{mag}`$ if $`B_{*}\simeq 2\times 10^4\dot{M}_{16}^{1/2}M_{*,1}^{1/7}`$ G, then the accretion rates in the two regions compare as
$$\dot{M}_{uv}/\dot{M}_x\simeq R_{*}/R_c\simeq 0.5,$$
(4-6)
which implies that a large fraction of the accreted matter has to flow to the poles.
If the X-ray emission is from the shocked gas at the propeller action region where the accretion stream hits the magnetosphere and collides with the outflowing material creating localized shocks, the characteristic X-ray emission temperature from the optically thin gas is likely to be
$$T_x\simeq 3GM_{*}m_p/16kR_x\simeq 2\times 10^7\dot{M}_{16}^{2/7}B_{*,6}^{-4/7}\mathrm{K},$$
(4-7)
or $`kT_x\simeq 2\dot{M}_{16}^{2/7}B_{*,6}^{-4/7}`$ keV, which is very close to the observed X-ray emission temperature of $`<3`$ keV. Similarly, for propeller action occurring near the corotation radius, the expected X-ray emission temperature is $`kT_x\simeq 9`$ keV, which implies that propeller action occurring anywhere between the two regions, at radii $`\sim 10^{10}`$ cm and $`R_c\simeq 1.4\times 10^9`$ cm, can account for the bremsstrahlung emission in the X-ray band.
On the other hand, if the propeller action region is responsible for blackbody-like emission, the characteristic temperature could be as low as $`\simeq 1\times 10^4\dot{M}_{16}^{1/4}`$ K. This temperature corresponds to optical/UV emission, although the high pulse fraction observed in AE Aqr rules out the possibility that the dominant optical/UV emission arises from the propeller action region.
In the polar regions, the radially falling material lands on the surface of the white dwarf, where its kinetic energy can thermalize and be radiated as blackbody-like emission. If the accreting fraction of the white dwarf is $`a(\ll 1)`$ of the total stellar surface area, then the expected emission temperature is
$$T_{uv}\simeq 2\times 10^4a^{-1/4}\dot{M}_{14}^{1/4}\mathrm{K},$$
(4-8)
where $`\dot{M}_{14}=\dot{M}/10^{14}`$ g/s as noted earlier.
Using the observed X-ray luminosity and the X-ray emission temperature, we can estimate the mass accretion rate and the magnetic field strength. First, using the shock temperature in the propeller action region, we require that the shock temperature be close to the observed X-ray emission temperature $`3`$ keV
$$\dot{M}_{16}\simeq 4.1B_{*,6}^2.$$
(4-9)
Similarly, the observed X-ray luminosity should be close to the total accretion power (with the radiative efficiency $`\eta _x`$) $`10^{31}`$ erg/s or
$$\dot{M}_{16}\simeq 0.2\eta _x^{-7/9}B_{*,6}^{4/9}.$$
(4-10)
The two constraints are simultaneously satisfied only if $`B_{*}\simeq 1.4\times 10^5\eta _x^{-1/2}`$ G and $`\dot{M}\simeq 8\times 10^{14}\eta _x^{-1}`$ g/s.
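The simultaneous solution of Eqs. (4-9) and (4-10) can be sketched as follows (our illustration; the value $`\eta _x=0.1`$ is an assumption for demonstration):

```python
# Constraints from the text (Mdot_16 in units of 1e16 g/s, B6 in 1e6 G):
#   shock temperature, Eq. (4-9):  Mdot_16 = 4.1 * B6**2
#   X-ray luminosity,  Eq. (4-10): Mdot_16 = 0.2 * eta**(-7/9) * B6**(4/9)
eta = 0.1  # assumed X-ray radiative efficiency

# Equating the two expressions: B6**(14/9) = (0.2 / 4.1) * eta**(-7/9)
B6 = ((0.2 / 4.1) * eta ** (-7.0 / 9.0)) ** (9.0 / 14.0)
Mdot_16 = 4.1 * B6 ** 2

B_gauss = B6 * 1e6     # consistent with B ~ 1.4e5 * eta**(-1/2) G
Mdot = Mdot_16 * 1e16  # consistent with Mdot ~ 8e14 * eta**(-1) g/s
```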
Substituting the mass accretion rate and the field strength into the equation for the ratio $`\dot{M}_{uv}/\dot{M}_x`$, we arrive at
$$\dot{M}_{uv}\simeq 4\times 10^{14}\eta _x^{-1}\mathrm{g}/\mathrm{s},$$
(4-11)
which is a substantial fraction of the accreted material arriving at the propeller action region. We therefore conclude that a significant fraction of material manages to land on the surface of the white dwarf despite an ongoing propeller action in AE Aqr.
### 4.2 High Altitude Accretion
In the above calculations, we have considered that part of the accretion stream hits low magnetic altitudes and that most of the X-ray emission is produced by the shock-heated (optically thin) gas near the magnetospheric radius or the corotation radius. The high altitude accretion flow, on the other hand, travels directly to the poles. At low altitudes, the accreting stream remains mostly optically thick until it is shock-heated and expelled by the propeller action. At high altitudes, the accretion stream remains optically thin and is continuously heated in a roughly quasi-spherical accretion pattern. The propeller action is relatively weaker at higher altitudes, as in the case of diskless accretion, and hence a substantial fraction of the stream emits X-rays as it travels toward the magnetic poles, where the flow is thermalized and emits optically thick, blackbody-like UV radiation. In this picture, because the high altitude gas stream is adiabatically heated to an X-ray emitting temperature near the magnetic poles, we expect pulsed X-ray emission.
For the derived magnetic field strength, the dipole radiation power becomes $`L_{em}\approx 8\times 10^{28}\eta _x^{-1}`$ erg/s, which suggests that the observed radio emission could be accounted for by the dipole radiation if the X-ray efficiency is $`10\%`$ or less. Alternatively, the radio emission could result from electrons accelerated at the shock near $`R_x`$, although this is a remote possibility given that there exists no evidence of high-speed material in AE Aqr.
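A rough check of $`L_{em}`$ can be made with the standard vacuum magnetic dipole luminosity $`L=2\mu ^2\mathrm{\Omega }^4/3c^3`$, using the well-known 33 s spin period of AE Aqr and the same assumed $`R_{wd}=7\times 10^8`$ cm (neither is quoted in this excerpt):

```python
import math

# Vacuum dipole luminosity L = 2 mu^2 Omega^4 / (3 c^3), cgs.
# Assumed (not in this excerpt): P_spin = 33.08 s, R_wd = 7e8 cm.
c = 2.998e10
B_star, R_wd, P_spin = 1.4e5, 7e8, 33.08
mu = B_star * R_wd**3                  # magnetic moment, G cm^3
Omega = 2.0 * math.pi / P_spin
L_em = 2.0 * mu**2 * Omega**4 / (3.0 * c**3)
print(L_em)                            # → ~7e28 erg/s, close to the quoted 8e28
```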
Meintjes et al. (1992, 1994) and Bowden et al. (1992) reported the detection of TeV $`\gamma `$-rays from AE Aqr, pulsating at the spin period. However, according to a more recent observation by Lang et al. (1998), there is no evidence for any steady, pulsed, or episodic TeV emission. In the present picture, TeV emission at the claimed level of $`10^{32}`$ erg/s is unlikely, since the largest share of the spin-down power goes into gravitational radiation.
Using the mass accretion rate and the magnetic field strength, we estimate that the accretion stream is likely to be stopped at a radius
$$R_x\approx 1\times 10^{10}\mathrm{cm},$$
(4-12)
which may be compared with the nominal circularization radius for material traveling through the inner Lagrange point in the binary system (e.g. Frank et al. 1992)
$$R_{circ}\approx 3\times 10^{10}\mathrm{cm},$$
(4-13)
where we have adopted a secondary-to-primary mass ratio of $`0.6`$. For the estimated magnetospheric radius, it is possible that no accretion disk exists in this system, as has recently been argued based on the non-detection of the double-peaked H$`\alpha `$ emission line in AE Aqr. Even if a disk exists, it could occupy a very narrow radial zone in which the line emission is too weak to detect.
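The stopping radius $`R_x`$ in eq. (4-12) is consistent with a standard spherical Alfven-radius estimate evaluated with the derived parameters. A sketch, again with the assumed $`M_{wd}=0.9M_{\mathrm{}}`$ and $`R_{wd}=7\times 10^8`$ cm (not stated in this excerpt):

```python
# Spherical Alfven radius R_m = (mu^4 / (2 G M Mdot^2))**(1/7), cgs,
# using the derived B_* ≈ 1.4e5 G and Mdot ≈ 8e14 g/s.
G, M_sun = 6.674e-8, 1.989e33
M_wd, R_wd = 0.9 * M_sun, 7e8      # assumed white dwarf parameters
mu = 1.4e5 * R_wd**3               # magnetic moment
Mdot = 8e14                        # g/s
R_m = (mu**4 / (2.0 * G * M_wd * Mdot**2)) ** (1.0 / 7.0)
print(R_m)   # → ~1.2e10 cm, consistent with eq. (4-12)
```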
## 5 FLARES AND LOW ENERGY EMISSION
Simultaneous observations between optical and UV and between optical and X-ray bands (Osborne et al. 1995; Eracleous & Horne 1996) show some similarities in their light curves. For example, while UV and optical flares are well correlated, they are less correlated with the X-ray flares. Radio flares, however, are not correlated with the optical flares (Abada-Simon et al. 1995). The pulse amplitudes in both the UV and X-ray bands do not display any large variations compared with their quiescent values, nor do they follow the variations of the non-pulsed level. The difference between quiescent and flare spectra in the X-ray band is not significant, although a hint of spectral hardening is recognized (Choi, Dotani, & Agrawal 1999). These characteristics strongly suggest that the magnetic poles, or accretion columns above the white dwarf surface, are hardly connected with the flaring activities. The possibility that the flares are stellar flares occurring on the companion star has been ruled out because it could not account for the kinematic properties of the line-emitting gas in the UV data (e.g., Eracleous & Horne 1996). Flares arising from occasional encounters between propeller-expelled outgoing gas streams (or blobs) and the incoming accretion stream are also unrealistic, because the possible interaction region is too distant to account for the high energies observed in the flares.
The exact nature of the flaring activities is beyond the scope of the present study. If the flaring activities indeed occur in a region far beyond the polar area, then a possible site is again the impact region formed by the accretion stream interacting with the magnetosphere. Suppose the gas rotating between $`R_{circ}`$ and $`R_x`$ wraps the poloidal component, $`B_z`$, of the stellar field and amplifies the toroidal component $`B_\varphi `$ to an equilibrium value $`B_\varphi \approx (\gamma /\alpha )(\mathrm{\Omega }_{*}/\mathrm{\Omega }-1)B_z`$ at a rate $`\gamma (\mathrm{\Omega }_{*}-\mathrm{\Omega })B_z`$ (e.g. Yi 1995 and references therein). The amplification would then take $`t_{amp}\approx (\alpha \mathrm{\Omega })^{-1}`$, where $`\alpha \approx 0.1`$ is the usual $`\alpha `$ viscosity parameter, $`\mathrm{\Omega }_{*}=2\pi /P_{*}`$, $`\mathrm{\Omega }=(GM_{*}/R^3)^{1/2}`$ is the local rotational angular velocity, and $`\gamma `$ is the vertical velocity shear parameter of order unity. Here we have assumed that the amplification is balanced by magnetic diffusion with a magnetic Prandtl number of order unity. A simple-minded characteristic time scale for magnetic amplification is then $`10P_{*}\approx 300`$ s, provided that the amplification occurs near the corotation radius. On the other hand, based on the AE Aqr parameters $`\dot{M}\approx 10^{15}`$ g/s and $`B_{*}\approx 10^5`$ G, we estimate that $`R_{mag}\approx 8\times 10^9`$ cm and $`L_{sd}\approx 1\times 10^{31}`$ erg/s. The observed typical flare energy of $`10^{35}`$ erg requires accumulation of the spin-down energy (in the form of magnetic stress) for a duration of $`10^4`$ s, which is longer than the magnetic field amplification time scale by a factor of at least $`30`$. If the magnetic field amplification is balanced not by magnetic diffusion of the $`\alpha `$ type (Yi 1995) but by buoyant loss of the Parker type (e.g. Wang 1987), it is conceivable that the flare energy accumulation time scale could be accounted for, since the field can grow to a higher value on a longer time scale. The details of this issue are especially hard to describe given the uncertain and complex nature of the accretion flow pattern and density structure near the propeller action region.
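The flare-energy timing argument above amounts to a ratio of two time scales; an illustrative check using the numbers quoted in the text:

```python
# Time to accumulate E_flare ~ 1e35 erg of magnetic stress at the
# spin-down power L_sd ~ 1e31 erg/s, versus the field amplification
# time t_amp ~ 10 P_* ~ 300 s quoted in the text.
E_flare, L_sd, t_amp = 1e35, 1e31, 300.0
t_acc = E_flare / L_sd       # → 1e4 s
print(t_acc, t_acc / t_amp)  # accumulation exceeds t_amp by a factor > 30
```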
## 6 DISCUSSION AND SUMMARY
Various models have been considered to explain the pulsations and flares in AE Aqr. Among the proposed models, both an oblique rotator model (Patterson 1979, 1994) and a magnetospheric gating model (van Paradijs et al. 1989; Spruit & Taam 1993) have been ruled out because there is no observational evidence for an accretion disk (see Eracleous & Horne 1996 and Welsh, Horne, & Gomer 1998 for a detailed discussion). The propeller model is primarily based on this point.
The propeller model predicts high-velocity gas escaping from the system. However, Welsh, Horne, & Gomer (1998) did not detect any signatures of high-velocity gas, nor did they find the expected pattern in their trailed spectrograms. Without any evidence for high-velocity gas, it is therefore rather unlikely that the propeller action is mainly responsible for the spin-down of AE Aqr. We also note that in the propeller scenario mass loss from the companion star is inhomogeneous and intermittent. If this is true, it is unclear why the pulsed emission is sustained almost constantly and stably.
By removing the spin-down power from the observable luminosities, we have constructed a self-consistent model which in essence incorporates all the existing ingredients. Eracleous et al. (1994) analyzed 10 pulse profiles at different wavelengths obtained from simultaneous HST (UV) and ground-based (optical) observations. According to their results, there are no discernible phase shifts between pulses at different wavelengths, and the amplitude of the pulsations decreases with increasing wavelength. This implies that the emission region becomes broader at longer wavelengths (or lower energies). In our model, pulsed X-ray emission comes from the adiabatically heated gas traveling onto the poles in an extended region between the high altitude magnetosphere and the magnetic poles. While the gas travels toward the poles, it cools radiatively and UV emission becomes dominant near the polar region. After the gas has landed on the polar regions, it spreads over the stellar surface as discussed in § 3, cooling down further as it spreads. Therefore, it is natural that the emission region becomes broader at longer wavelengths. On the other hand, the incoming accretion stream is optically thick at low altitudes and is mostly expelled there. Optical/UV emission can therefore also arise in this region and partly contribute to the non-pulsed optical/UV emission. Alternatively, if cooling occurs rapidly during the infall phase, the kinetic energy of the radially falling material could thermalize at the polar region and be radiated as blackbody-like emission with a temperature of $`2\times 10^4`$ K, as estimated in eq. (4-8). The gas then follows similar spreading and cooling processes on the white dwarf surface.
We have considered the possibility that the flares of AE Aqr are due to the release of magnetic stress, reminiscent of solar flares. An alternative possibility, the magnetic pumping model, was proposed by Kuijpers et al. (1997) to account for radio outbursts or flares. In this model, the radio flares are caused by eruptions of bubbles of fast particles from a magnetosphere surrounding the white dwarf. The model also speculates that at relatively low accretion rates the conversion of spin energy into acceleration (rather than heating) of electrons and protons can be efficient. The accelerated fast particles remain trapped in the magnetosphere, and when their total energy becomes comparable to the magnetic field energy, an MHD instability sets in. Synchrotron radiation then occurs in the expanding plasmoid at radio and longer wavelengths.
The highly stable spin-down implies that the spin-down mechanism remains stable despite the propeller action and related possible dynamical instabilities. It has usually been interpreted as a sign of a stable accretion disk. If AE Aqr does not have an accretion disk, as often claimed, the spin-down could indeed be due to gravitational radiation emission, which is obviously quite stable as long as it is the dominant spin-down mechanism.
If there exists a Keplerian accretion disk interacting with the stellar magnetosphere, the accretion flow will exert a torque (Yi 1995)
$$N=\frac{7N_o}{6}\frac{1-(8/7)(R_{mag}/R_c)^{3/2}}{1-(R_{mag}/R_c)^{3/2}},$$
(6-1)
where $`N_o=\dot{M}(GM_{*}R_{mag})^{1/2}`$ and $`R_{mag}`$ is the magnetic truncation or the magnetospheric radius determined by
$$(R_{mag}/R_c)^{7/2}=A\left|1-(R_{mag}/R_c)^{3/2}\right|,$$
(6-2)
where
$$A=2(\gamma /\alpha )B_c^2R_c^3/\left[\dot{M}(GM_{*}R_c)^{1/2}\right],$$
(6-3)
with $`B_c=\mu _{*}/R_c^3`$. The observed spin-down will result if $`R_{mag}/R_c>0.9148`$ in this particular magnetized accretion model, or equivalently $`A>5.86`$, which for AE Aqr translates into
$$B_{*}>1\times 10^4(\alpha /\gamma )^{1/2}\dot{M}_{15}^{1/2}\mathrm{G}$$
(6-4)
where $`\dot{M}_{15}=\dot{M}/10^{15}`$ g/s. Therefore, for the derived AE Aqr parameters, a rotating accretion disk would always exert a spin-down torque, although the spin-down is dominated by the gravitational radiation mechanism.
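The spin-down threshold quoted above follows from the sign change of the torque in eq. (6-1); a short numerical check (illustrative):

```python
# The torque of eq. (6-1) changes sign where 1 - (8/7) x**1.5 = 0,
# with x = R_mag/R_c; eq. (6-2) then fixes the corresponding A.
x_crit = (7.0 / 8.0) ** (2.0 / 3.0)                  # → 0.9148
A_crit = x_crit**3.5 / abs(1.0 - x_crit**1.5)        # → 5.86
print(x_crit, A_crit)

def torque_sign(x):
    """Sign of N in eq. (6-1) for R_mag/R_c = x < 1."""
    return 1 if (1.0 - (8.0 / 7.0) * x**1.5) / (1.0 - x**1.5) > 0 else -1

print(torque_sign(0.90), torque_sign(0.95))  # spin-up below x_crit, spin-down above
```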
Our proposed model is essentially summarized as follows. (i) The spin-down is driven by gravitational radiation emission. (ii) The dipole radiation from the rapidly spinning white dwarf is responsible for the radio emission and possible TeV $`\gamma `$-ray emission. (iii) UV emission is from the magnetic poles at a low accretion rate or from a cooling accretion mound left by a previous high-accretion episode. A large amount of material present at the poles would be compatible with the current rapid white dwarf rotation. (iv) Accretion to the poles occurs at high altitudes, while at low altitudes the propeller action drives material outward. However, as long as the accretion rate is lower than $`10^{17}`$ g/s, the spin-down power of the propeller action is smaller than the observed spin-down. (v) X-rays are mostly from the accretion stream-magnetosphere boundary, where the propeller action drives material outward while shock-heating the accreted material. UV emission could also arise in this region if there exists optically thick gas. (vi) Flares are due to the release of magnetic stress, much like that seen in solar flares. (vii) In sum, $`L_{sd}\approx 2\times 10^{34}`$ erg/s mostly goes into $`L_{gw}`$, which is undetectable. $`L_{em}<10^{32}`$ erg/s goes into $`L_{radio}`$ and possibly $`L_\gamma `$. $`L_{uv}`$ is either from the $`\dot{M}\approx 2\times 10^{14}`$ g/s high altitude accretion or from the cooling accretion mound at the magnetic poles. The contribution from the disk-magnetosphere interaction region has to be small, as required by the high pulse fraction.
IY is grateful to John Bahcall and Craig Wheeler for hospitality and to Bill Welsh for useful information. IY thanks the Korea Research Foundation for its generous grant support (KRF 1998-001-D00365). CSC is grateful to the Korea Astronomy Observatory for its grant support (KAO 99-1-201-20).
# VLBA ABSORPTION IMAGING OF IONIZED GAS ASSOCIATED WITH THE ACCRETION DISK IN NGC 1275
## 1 Introduction
The dominant member of the Perseus Cluster is the galaxy NGC 1275, which has a Seyfert-like nucleus, but an unusual morphology. Minkowski (1957) found that it contains two velocity systems, one at 5300 km s<sup>-1</sup> and the other at 8200 km s<sup>-1</sup>. The low velocity system includes the main body of NGC 1275 (Strauss et al. 1992) and filaments of ionized gas thought to be a product of the cooling flow that is centered on the galaxy. CO and X-ray observations suggest the cooling flow rate is several hundred $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ (Lazareff et al. 1989; Allen & Fabian 1997). HST and Keck observations of globular clusters suggest that many may have been formed in a merger event several hundred million years ago (Carlson et al. 1998; Brodie et al. 1998). The high velocity system includes giant H II clouds. Hydrogen associated with the high velocity system is seen in absorption against the nuclear radio emission (DeYoung, Roberts, & Saslaw 1973; Romney 1978). This and other evidence indicate that the high velocity system is either infalling, or is a system that is already part way through the main galaxy (e.g. Kaisler et al. 1996; Nørgaard-Nielsen et al. 1993).
The radio source 3C 84 is associated with NGC 1275. The features coincident with the center of the galaxy constitute one of the strongest compact radio sources in the sky. This source began a major increase in activity in about 1959. It was below 10 Jy in the earliest observations, but was already rising towards peaks in excess of 50 Jy in the 1970’s and 1980’s (Dent 1966; Pauliny-Toth & Kellermann 1966; Pauliny-Toth et al. 1976; O’Dea, Dent, and Balonek 1984; Nesterov, Lyuty, & Valtaoja 1995). It has been high ever since, although in recent years the flux density has been decaying and is currently around 20 Jy at centimeter wavelengths.
The compact structure of 3C 84 has been the object of Very Long Baseline Interferometry (VLBI) observations since the early days of that technique (See, for example, Pauliny-Toth et al. 1976; Readhead et al. 1983; Romney et al. 1984; Marr et al. 1989; Biretta, Bartel, & Deng 1991; Krichbaum et al. 1993; Venturi et al. 1993; Dhawan, Kellermann, and Romney 1998; Romney, Kellermann, & Alef 1999). It has complex structure on parsec scales that frustrated imaging efforts in the early years. But early models, combined with more recent images made with the global networks and with the Very Long Baseline Array (VLBA — Napier et al. 1994) show that, south of the compact core, there is a bright region of emission that has been increasing in length by about 0.3 milliarcseconds yr<sup>-1</sup> (mas yr<sup>-1</sup>) and whose size extrapolates back to zero at about the time of the 1959 outburst (Romney et al. 1982; the rate is from Walker, Romney, and Benson 1994, hereafter WRB, and is based on their image and others from the literature). The bright region now extends about 15 mas south of the compact core. It is almost certainly related to a jet, but the morphology is somewhat like the radio lobes seen in many sources on much larger scales. This may indicate that the jet material from the 1959 event is not simply following earlier material down the jet channel, but is interacting strongly with either that material or the surrounding medium. To avoid a physical bias, we will simply call the bright regions “features”. For this study, they are only used as sources of background radiation with which to study the absorbing medium.
Beyond the bright southern feature, the remnants of earlier activity are clearly seen. Immediately south of the 15 mas bright feature, low frequency VLBI observations show a continuation of the jet to about 100 mas (e.g., Taylor & Vermeulen 1996; Silver, Taylor, & Vermeulen 1998, hereafter STV; this paper). STV also see a “millihalo” approximately 250 mas in size at 330 MHz. On scales of arcseconds and larger, 3C 84 has weak, somewhat asymmetric twin jet structure surrounded by a diffuse halo (Pedlar et al. 1990; Sijbring 1993).
In 1993, a feature extending to about 8 mas north of the compact core was discovered with the VLBA at 8.4 GHz by WRB and independently, using 1991 Global VLBI Network data at 22 GHz, by Vermeulen, Readhead, and Backer (1994, hereafter VRB). At 22 GHz, this feature was similar to, but smaller and somewhat weaker than the bright southern feature. It most likely is related to a jet on the far side of the AGN corresponding to the near side jet responsible for the southern feature. This source provides an especially favorable case for relating jet and counterjet features because of the dramatic changes in jet brightness dating to the 1959 outburst. WRB show that the ratio of distances to the ends of the bright emission in the north and south, the brightness ratio of the features at 22 GHz, and the measured speed of the southern feature, are consistent with a mildly relativistic jet ($`0.3c`$ to $`0.5c`$) at a moderate angle ($`30^{\circ }`$ to $`55^{\circ }`$) to the line-of-sight.
The most exciting result of WRB and VRB was the observation that the northern feature has a strongly inverted spectral index. It was very much brighter at 22 GHz than at 8.4 GHz, and it had not been seen at all in high dynamic range images of Biretta et al. (1991) at 1.7 GHz. But the 22 and 8.4 GHz observations were separated in time by about 2 years so there was some potential for confusion due to time dependent effects. VRB present persuasive arguments that the spectral index, if confirmed, is most likely the result of free-free absorption by ionized material along the line-of-sight to the northern feature. This absorbing material must be located in the inner few parsecs of the source — otherwise it is highly unlikely that it would absorb only the northern feature leaving the core and southern feature unaffected. An obvious geometry consistent with the data puts the ionized gas in, or associated with, a disk. Levinson, Laor, and Vermeulen (1995, hereafter LLV) discuss models involving accretion disks at some length. One problem is that, according to simple theory, the accretion disk should be too cold at parsec distances to contain ionized gas. They address a variety of mechanisms that would allow ionizing radiation from the central regions of the AGN to reach the outer disk and create an ionized atmosphere adequate to explain the observed absorption.
Apparent free-free absorption is also seen in this source at lower frequencies. O’Dea, Dent, and Balonek (1984) deduce, from flux density monitoring data, that the variable core component is absorbed below about 2.2 GHz. They suggest that the absorbing gas has a thickness of $`1.5<L<5`$ pc, $`T\approx 10^4`$ K, and $`n\approx 2\times 10^3`$ cm<sup>-3</sup>. More recently, STV found a feature 80 mas north of the core in 330, 612, and 1414 MHz VLBA observations that presumably corresponds to an earlier epoch of activity. The spectrum of this feature can be interpreted in terms of free-free absorption on a much larger scale.
Evidence for free-free absorption on parsec scales has been seen in a few other sources. Jones et al. (1996) find preferential absorption of counterjet emission in Cen A, indicating a geometry similar to what is seen in 3C 84. Jones and Wehrle (1997) see a feature in NGC 4261 that is possibly due to absorption by an inner accretion disk with a width of less than 0.1 pc, but they do not have spectral information on this feature. Kellermann, Vermeulen, Cohen, and Zensus (1999) report the presence of free-free absorption of the nucleus and receding jet in NGC 1052, with the absorption moving to lower frequencies over scales from 15 light days to more than 3 light years. Taylor (1996) attributes the spectral turnover seen in the parsec scale jet components in Hydra A to free-free absorption over the entire central region, probably due to gas on a somewhat larger scale. Parsec scale ionized gas associated with a disk may have been observed directly in NGC 1068 by Gallimore, Baum, & O’Dea (1997). The inferred temperature of the observed region is around $`10^{6.7}`$ K. But the case that this is ionized disk gas, rather than a jet, is not firmly established.
Ulvestad, Wrobel, & Carilli (1999) report evidence for absorption in Markarian 231, a powerful Seyfert 1/starburst galaxy at $`z=0.0422`$ whose compact emission regions look remarkably similar to those of 3C 84. In both magnitude and physical scale, the absorption observed in Mrk 231 is more comparable to that observed in 3C 84 by STV than to that reported in this paper. Unlike in 3C 84, a spectral gradient is reported for the southern feature, implying some absorption over the whole source. But this may be a result of alignment of the images on the core. At least in 3C 84, the centroid of the core shifts with frequency. Also, the 1.4 GHz image shows emission well north of any seen at higher frequencies, implying unphysically steep spectral indices or some imaging effect, such as much greater sensitivity to large structures at 1.4 GHz than at the other frequencies.
In this paper, we present results of the first two epochs of observations of 3C 84 designed to study the absorption and its two-dimensional distribution. The observations were made with the Very Long Baseline Array of the National Radio Astronomy Observatory, supplemented in some cases by a single antenna of the Very Large Array (VLA). At each epoch, a wide range of frequencies was observed, all within a period short enough to preclude uncertainties due to variability. High dynamic range images, all convolved to the same resolution, allow us to image the absorption over the extent of the northern feature.
The first, and simplest, result is that the apparent absorption is not due to temporal changes in the source. The quasi-instantaneous radio spectra at positions on the northern feature are consistent with free-free absorption. Minor deviations from a theoretical free-free spectrum are probably due to inhomogeneities in the medium. We show that the dominant spatial structure of the absorption is a strong decrease with radial distance from the central feature. The concept that the ionized gas is associated with the accretion disk and falls off in density from the center is consistent with the observations.
At $`v=5300`$ km s<sup>-1</sup>, $`1\mathrm{mas}=0.25h^{-1}`$ pc, where $`H_o=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. In this paper, we assume that $`h=0.75`$, so the scale of the images is very close to $`1\mathrm{pc}=3\mathrm{mas}`$.
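The angular-to-linear scale quoted here can be reproduced with a short script (illustrative; it uses the small-angle approximation and a pure Hubble-flow distance):

```python
import math

# Scale at v = 5300 km/s with H0 = 100 h km/s/Mpc, h = 0.75.
v, h = 5300.0, 0.75
D_Mpc = v / (100.0 * h)                               # Hubble distance, Mpc
pc_per_mas = D_Mpc * 1e6 * math.radians(1.0 / 3.6e6)  # 1 mas in pc
print(pc_per_mas, 1.0 / pc_per_mas)                   # → ~0.34 pc/mas, ~2.9 mas/pc
```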
## 2 The Observations
Observations of 3C 84 were made with the VLBA in 1995 January and October at 2.3, 5.0, 8.4, 15.4, 22, and 43 GHz. The specific dates are listed in Table 1. The observations were spread over several days at each epoch in order to gather sufficient data at each frequency to make high dynamic range images. However, all observations for each epoch were confined to a period of two weeks in order to avoid confusion by possible time variations. A single antenna of the VLA was included in the 22 and 43 GHz observations in October in order to enhance the coverage of short baselines. The day following the last of our October observations, 3C 84 was observed at 0.33, 0.61, and 1.4 GHz by STV. Therefore, near-simultaneous observations of this source in 1995 October are available in all bands for which the VLBA has receivers.
The northern feature seen at higher frequencies about 8 mas from the core was not detectable at 2.3 GHz to a limit of about 2 mJy beam<sup>-1</sup> in both January and October, presumably because it is too heavily absorbed. Also, the much more distant northern feature, at about 80 mas, that was reported by STV at low frequencies was not seen in our 2.3 GHz images, perhaps because of inadequate short baselines. The convolved images used to look for the distant features have off-source noise levels of about 1.5 mJy beam<sup>-1</sup> with beams of 19 by 15 mas. At 43 GHz, the feature 8 mas north of the core was detected, but most of the flux density appears to be missing. This feature is several mas in size, which is a size scale that falls in the range that is poorly measured at 43 GHz because of the gap in (u,v) coverage between the VLBA and the VLA. The feature is mostly resolved out on the VLBA, but is still a point source to the VLA. Therefore, the results presented here will be based almost entirely on the 5.0, 8.4, 15.4, and 22 GHz images, although we show the 2.3 GHz images and one 43 GHz image.
Details of the observations are presented in Table 1. The columns are the date within 1995, the antennas included, the bit rate, the polarizations recorded, and the number of effective baseline hours of data available for the final image. The total recorded bit rate was divided among 2, 4, or 8 baseband channels. The number of baseline hours listed is simply the integration time times the number of visibility records. The 2.3, 5.0, and 8.4 GHz observations were performed by band switching during a single day in both January and October. In January, but not in October, the 2.3 and 8.4 GHz data were obtained simultaneously using the VLBA’s dichroic system.
A priori amplitude calibration was done in the usual way using the system temperatures measured during the observations and gains measured by the NRAO staff during pointing observations. Most of the data were processed with AIPS. The 22 GHz image from January was made using DIFMAP (Shepherd, Pearson, & Taylor 1994). On 1995 October 20, VLA observations were made of 3C 84 and the compact source DA 193 (0552+398) to obtain flux densities. These were used to check and adjust the a priori amplitude calibration of the VLBA. As a result, the amplitude scales were adjusted by factors of 1.00, 0.91, 1.00, and 0.95 at 22, 15.4, 8.4, and 5.0 GHz, respectively. The remaining uncertainty in the flux scale is roughly estimated to be about 5% for the October data. For January, the a priori calibration had to be trusted, and there may be residual errors at the 10% level. The 2.3 GHz amplitudes from January were scaled by 0.87 to adjust for the effects of an inappropriate off-source position used in VLBA 2.3/8.4 GHz dual frequency gain measurements prior to 1997 November.
Montages of the contour plots of the images used in the analysis are shown in Figures 1 and 2. All images are convolved to a beam of 1.6 by 1.2 mas, elongated in position angle 0 degrees. This is about the actual resolution of the 5 GHz images, but represents a considerable smoothing of the higher frequency images. On each contour plot, a segmented line is shown which marks the slice used for some of the analysis below.
Contour plots of the 2.3 GHz images are shown in Figure 3. The resolution of these images is lower than for the others because of the lower frequency. These images show the southern feature well and make it clear that there is a weak, underlying structure extending well beyond the region of the features that resulted from the 1959 outburst. Those underlying structures are also seen weakly in the 5 GHz images, although only when plotted to deeper levels than were used in Figures 1 and 2. The northern feature is absent, presumably completely absorbed. In the 1995 October image at 22 GHz, the ratio of integrated flux density in the northern and southern features is about 3.7. In the 2.3 GHz image from the same epoch, the integrated flux density of the southern feature is about 21 Jy. If the intrinsic spectral index (without absorption) of the northern and southern features is the same, the expected flux density of the northern feature at 2.3 GHz would be 5.7 Jy. Instead it is below about 2 mJy, so there is very strong absorption — corresponding to an optical depth of at least 7.9. At 5 GHz, the northern feature had an integrated flux density of about 20 mJy in January and 34 mJy in October in a region smaller than the 2.3 GHz beam. Thus the observed spectral index between 2.3 and 5.0 GHz was at least 2.9 in January and 3.6 in October. While we do not use this information in the following analysis, it further supports the interpretation of the spectra as being due to the exponential cutoff resulting from free-free absorption toward the northern feature.
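The optical-depth and spectral-index limits quoted above follow from simple flux ratios; an illustrative check of the arithmetic:

```python
import math

# Limits implied by the 2.3 GHz non-detection of the northern feature
# (upper limit ~2 mJy), given the 21 Jy southern feature and the 22 GHz
# flux ratio of ~3.7 between the two features.
S_north_expected = 21.0 / 3.7                   # Jy, if the intrinsic spectra match
tau_min = math.log(S_north_expected / 0.002)    # minimum free-free optical depth
print(tau_min)                                  # → ~7.9

# Lower limits on the 2.3-5.0 GHz spectral index, alpha = ln(S5/S2.3)/ln(5/2.3),
# for the January (20 mJy) and October (34 mJy) integrated flux densities.
alphas = [math.log(S5 / 2.0) / math.log(5.0 / 2.3) for S5 in (20.0, 34.0)]
print(alphas)                                   # → ~2.9 (Jan) and ~3.6 (Oct)
```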
An image based on the 43 GHz VLBA data from 1995 January is shown in Figure 4. The image has been convolved to the same beam used in the analysis of the 5 to 22 GHz images. The northern and southern features are clearly present, but at much reduced flux densities because of the lack of baselines of a few tens to a few hundred km. Because of this missing flux density, this image cannot be used in the analysis. For more on VLBA monitoring of the 43 GHz structure of this source, see Dhawan, Kellermann, and Romney (1998).
For background information, full resolution images of the source in 1995 October are shown in Figure 5. The analysis in this paper will be based entirely on the images convolved to the resolution of the 5 GHz images. At higher resolutions, imaging artifacts that are not readily visible in the total intensity images become apparent in spectral index images and in the absorption analysis, which are based essentially on differences. These artifacts are the result of the inadequate (u,v) coverage of the VLBA at high frequencies for a source of this complexity and size. Future observations will include additional antennas to address this difficulty. The 22 GHz image was not made with the instrument’s full resolution, knowing that it was to be convolved to the lower resolution of the lower frequency observations. Note that the 5 GHz image, with contour levels that are deeper than those of Figure 2, shows the beginning of the extension, beyond 15 mas south of the core, that is seen at 2.3 GHz.
## 3 Analysis
The jet speed and angle results of WRB are based on the measured length ratio of the northern and southern features, on the long term expansion rate of the southern feature, and on the assumption that the northern and southern features both originated in the outburst of about 1959, so they have the same age. The 1995 data do not change the numbers. But the flux density ratio of the northern and southern features at 22 GHz was 9 in the data from VRB, used by WRB. In the 1995 October 22 GHz image, the ratio is about 3.7. The difference might be the result of the better coverage of short interferometer baselines in the 1995 data, or it might be the result of real changes in either the emission in the source or in the absorbing screen suggested by VRB and discussed below. By the beamed jet model of Blandford and Königl (1979), the flux density ratio $`R`$ and length ratio $`D`$ are related by $`R=D^\eta `$. The index $`\eta `$ is either $`(2-\alpha )`$ or $`(3-\alpha )`$, where $`\alpha `$ is the spectral index ($`S\propto \nu ^\alpha `$), for a continuous jet or a single component respectively. The data of VRB give $`\eta =3.7`$, which is consistent with the single component model and the spectral index observed in the southern feature. The 1995 data give $`\eta =2.2`$, which does not fit the simple model so well. Even with the continuous jet model, the spectral index would have to be flatter than is observed in the southern feature. But the flux density of the source has been decreasing with time for many years. In any one image, the northern feature is observed at an earlier time in its history than the southern feature because of light travel time effects — about 22 years earlier for the geometry of WRB and our assumed Hubble constant. In the early to mid 1970’s, the integrated flux density of 3C 84 at 22 GHz was roughly twice the 1995 value (Nesterov, Lyuty, & Valtaoja 1995).
Therefore, while the distribution of the emission at 22 GHz in the 1970’s is not known, it is reasonable that the northern feature would be stronger ($`\eta `$ lower) than would be expected by the simple, no-evolution beaming model.
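As a quick numerical check, the index can be recovered by inverting the beaming relation, $`\eta =\mathrm{ln}R/\mathrm{ln}D`$. The sketch below uses an illustrative helper function (not part of the analysis software) and the length ratio of about 1.8 from WRB:

```python
import math

def beaming_index(R, D):
    """Invert the beaming relation R = D**eta for the index eta."""
    return math.log(R) / math.log(D)

D = 1.8                                   # jet/counterjet length ratio (WRB)
print(round(beaming_index(9.0, D), 1))    # VRB 22 GHz flux ratio -> 3.7
print(round(beaming_index(3.7, D), 1))    # 1995 October ratio    -> 2.2
```

Both values quoted in the text follow directly from the measured ratios.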
The southern feature has a fairly uniform spectral index of $`\alpha \approx -0.7`$, a value rather typical of radio jets and hot spots in lobes. The northern feature is entirely different. It is far stronger at high frequencies and has spectral indices that vary both spatially and with frequency, reaching values of $`\alpha >+4`$. Even with less data, VRB argued that the steeply inverted spectrum, along with the extended size of the feature, effectively rules out explanations other than free-free absorption. Below we characterize and discuss this absorption. The core region also has an inverted spectrum and may be subject to some free-free absorption. However, that spectrum could also simply be due to synchrotron self-absorption in the most compact regions of the jets. Therefore we do not include the core region in our discussion of free-free absorption.
Most of the analysis for this study was done with a special purpose program that operates on the CLEAN components from the AIPS images. Very deep CLEANs were done, so essentially all of the flux density is represented in the CLEAN components. The program is used to shift the components to align the images, make any required final amplitude calibration adjustments, convolve the components with a Gaussian beam roughly equivalent to the resolution of the lowest frequency image, and fit for the absorption. The fits are done both in the image plane and for points along a segmented slice as shown in Figures 1 and 2.
In order to image the absorption, it is necessary to align the images at different frequencies very accurately. Because of the presence of steep emission gradients, relative shifts as small as 20 microarcseconds ($`\mu `$as) make small, but perceptible, changes in the distribution of the absorption. An alignment to that level of accuracy is desirable, but VLBI observations that do not utilize phase referencing retain no absolute position information. These observations were scheduled with occasional short scans on the nearby calibrator 0309+411. Positions accurate to about 1 mas could probably be obtained with the aid of these scans. But to obtain positions to 20 $`\mu `$as would require dual frequency observations to remove the effects of the ionosphere, and scans all over the sky to calibrate the troposphere.
In the case of 3C 84, it is possible to use the source itself to align the images very accurately. This requires the assumption that the southern feature is similar in structure at all frequencies — that there are not large variations in the observed spectral index. This assumption is supported by the images. The images could be aligned by finding the relative positions that minimize spectral index gradients. But a method that is at least as effective, and easier to automate, is to utilize the image cross correlation techniques previously developed for use on VLA data on 3C120 and M87 (Walker 1997 and references therein). The idea is to find the peak, as a function of position offset, of the cross correlation of portions of two images. For comparison, the software also does a least squares fit for the position offset of the image portions, generally obtaining similar results. For 3C 84, the regions more than 5 mas south of the core were cross-correlated for all 6 possible pairs of images at each epoch. The images were then shifted so as to minimize the offsets determined by the cross correlations. The 6 pairs overdetermine the problem, so the scatter of the results gives a crude estimate of the accuracy of the method. For each epoch, an alignment was found for which all reported offsets in RA and Dec were less than 5 $`\mu `$as, easily meeting our requirements. With these alignments, the uniformity of the spectral indices was better than had been achieved previously with a tedious, non-automated procedure based directly on the spectral indices.
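The core of such an alignment can be sketched as follows. This is an illustrative FFT-based implementation (hypothetical code, not the actual software used, and integer-pixel only, whereas the real offsets are determined at the sub-pixel level):

```python
import numpy as np

def align_offset(img1, img2):
    """Return the (dy, dx) shift to apply to img2 so that it aligns
    with img1, from the peak of the FFT-based cross correlation."""
    cc = np.fft.ifft2(np.fft.fft2(img1) * np.conj(np.fft.fft2(img2))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # unwrap the circular peak indices into signed shifts
    return tuple(int(p) - n if p > n // 2 else int(p)
                 for p, n in zip(peak, cc.shape))

# demo: recover a known shift from a synthetic image
rng = np.random.default_rng(1)
a = rng.standard_normal((64, 64))
b = np.roll(a, (3, -5), axis=(0, 1))
print(align_offset(a, b))   # (-3, 5): rolling b by this aligns it with a
```

In practice the correlation would be done on matched sub-images of the two frequencies, and the scatter among image pairs gives the accuracy estimate described above.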
With alignment nominally accurate to a few microarcseconds, the internal motions of the source start to be a concern. Historically, the southern lobe has been moving away from the core at about 0.3 mas yr<sup>-1</sup>. The counterjet would be expected to be moving at some fraction of that speed, probably lower by the length ratio of about 1.8. Thus we expect relative motions of the jet and counterjet at the level of about 1.3 $`\mu `$as per day. The January observing sequence spanned 15 days and the October sequence spanned 13 days, so relative motions of up to 19 $`\mu `$as are expected. This has very little effect on the final results. Nevertheless, an attempt has been made to account for it by moving the features more than 2 mas north of the core by an appropriate amount, depending on the observing date relative to the day of the 5 and 8.4 GHz observations. Rather than having an abrupt transition between the shifted and unshifted components, the region between -3.6 mas and +2 mas north of the core was stretched linearly. These distances are in proportion to the jet/counterjet length ratio.
The analysis program is capable of doing simultaneous fits for 22 GHz amplitude ($`I_o`$), emitted spectral index ($`\alpha _e`$), absorption, and a covering factor ($`f`$), all as a function of position. The equation of the fit is:
$$I_\nu =I_o\times \left(\frac{\nu }{2.2\times 10^{10}}\right)^{\alpha _e}\left[(1-f)+fe^{-\kappa }\right]$$
(1)
where $`I_\nu `$ is the flux density measured at frequency $`\nu `$ (Hz), and $`\kappa `$ is the absorption coefficient, which is the main item of interest. The absorption is given by:
$$\kappa =9.8\times 10^{-3}L_{pc}n_e^2T^{-1.5}\nu ^{-2}\left[17.7+\mathrm{ln}(T^{1.5}\nu ^{-1})\right]$$
(2)
where $`L_{pc}`$ is the path length through the absorbing medium in parsecs, $`n_e`$ is the electron density in cm<sup>-3</sup>, $`T`$ is the temperature in degrees K, and the terms in the square brackets are the Gaunt factor. The fits for absorption were actually done for the combination of parameters ($`L_{pc}n_e^2T_4^{-1.5}g_4`$), where $`T_4`$ is the temperature in units of $`10^4`$ K and $`g_4`$ is the ratio of the Gaunt factor to the Gaunt factor for $`T=10^4`$ K. Note that $`L_{pc}n_e^2`$ is simply the emission measure (EM), so for the case of a $`10^4`$ K constant temperature medium, the fit is simply for the emission measure.
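For orientation, the opacity law can be evaluated numerically. The sketch below assumes that the $`9.8\times 10^{-3}`$ coefficient is a per-cm absorption coefficient with $`\nu `$ in Hz (the standard radio free-free form; this units convention is our interpretation, not stated explicitly in the text), so an explicit parsec-to-cm factor converts the emission measure into an optical depth. With an emission measure of $`5.7\times 10^8`$, the fitted value near the middle of the northern feature derived below, it reproduces the very deep 5 GHz, but modest 22 GHz, absorption:

```python
import math

PC_IN_CM = 3.086e18

def tau_ff(em, T, nu_hz):
    """Free-free optical depth for emission measure em = L_pc * n_e**2
    (pc cm^-6), temperature T (K) and frequency nu_hz (Hz).  The 9.8e-3
    coefficient is treated as a per-cm opacity, hence the explicit
    pc -> cm conversion (an assumption about the intended units)."""
    gaunt = 17.7 + math.log(T**1.5 / nu_hz)
    return 9.8e-3 * em * PC_IN_CM * T**-1.5 * nu_hz**-2 * gaunt

em = 5.7e8                                   # fitted EM at ~2.5 pc (January)
print(round(tau_ff(em, 1e4, 5.0e9), 1))      # ~6.3: very deep at 5 GHz
print(round(tau_ff(em, 1e4, 2.2e10), 2))     # ~0.27: modest at 22 GHz
```

The roughly $`\nu ^{-2}`$ scaling is what makes the northern feature nearly disappear at 5 GHz while remaining only mildly attenuated at 22 GHz.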
The above parameterization has 4 unknowns. The observations reported here have 4 frequencies, and so have a maximum of 4 measurements at any point in the image or along the slice — sometimes fewer. Fitting for all 4 parameters was not really justified. After some experimentation, it was decided to fit only for $`I_o`$ and $`\alpha _e`$ along the southern feature, setting $`EM=0`$. On the northern feature, the subtle variations in emitted spectral index are overwhelmed by the strong absorption, so the fits were done for $`I_o`$ and absorption with $`\alpha _e=-0.7`$ and $`f=1`$. The results for the fits along the slice are shown in Figures 6 and 7. The results for the absorption as a function of position in the image are shown as the gray scales in Figures 8 and 9.
The quality of the fits is best assessed in Figures 6 and 7. In those figures, the top panel shows the results for $`\alpha _e`$ and absorption along the slice, in terms of radial distance from the core. The amplitudes from the 22 GHz slice are also shown for orientation. The lower panel shows the amplitudes from all 4 frequencies as lines and the fitted model amplitudes as crosses. Note the logarithmic scale and the very deep absorption, especially at 5 GHz. The fits are not perfect, in the sense that the observed absorption is not quite as sharp a function of frequency as free-free absorption in a uniform medium. Better, even near perfect, fits can be obtained by releasing either $`\alpha _e`$ or $`f`$. Fitting for $`\alpha _e`$ gives spectral indices that are close to zero, which is significantly different from what is seen in the southern jet; we do not consider this very likely. Fitting for $`f`$ gives values in the range 0.98-0.995; allowing less than 2% of the light to pass unabsorbed would give a near perfect fit. More likely, there are variations in the absorption over each beam area that make the observed spectrum slightly flatter than that of pure free-free absorption. A detailed model has not been attempted and would almost surely not be unique.
We are assuming that the origin of the jets, presumably in the vicinity of a massive black hole, is located at the northern end of the bright, compact regions seen at high frequency (Dhawan, Kellermann, and Romney 1998). We believe that this is consistent with the structures seen in our images, including the absence of any compact features farther north in the high resolution versions of the 43 GHz images. But we cannot completely exclude the possibility that the true base of the jets is significantly north of this point, as suggested by Nesterov, Lyuty, & Valtaoja (1995) based on time lags between optical and radio variations. If the core is displaced northwards, the lack of strong radio emission at its location could be because the radio emission has not yet turned on in the inner jet, as suggested by Nesterov, Lyuty, & Valtaoja, or because those inner regions are absorbed. The latter would be consistent with the fact that we do see structures in that region at the high frequencies, but not at lower frequencies, and with the possibility that the inverted spectrum in the region we call the core is due to absorption. A displaced core does not strongly affect the conclusions of this paper, but would affect the parameterization of the radial gradient of absorption. It would also affect the geometry derivation of WRB in the sense that the jets would lie closer to the line-of-sight and the simple relativistic beaming model would not work quite as well.
Our fit results show that the absorption has a strong gradient away from the core. This is seen clearly in the slices of Figures 6 and 7 and in the gray scale images of Figures 8 and 9. An attempt to characterize this gradient was made using the slice results by fitting a power law in $`r_{2.5}`$, the core distance in units of 2.5 pc. This reference distance was chosen so that the constant in the equation corresponds to the value at about the middle of the northern feature. The results are displayed on Figures 6 and 7. They are:
$$L_{pc}n_e^2T_4^{-1.5}g_4=5.7\times 10^8r_{2.5}^{-1.5}$$
(3)
in January and
$$L_{pc}n_e^2T_4^{-1.5}g_4=4.8\times 10^8r_{2.5}^{-1.8}$$
(4)
in October. The power law is clearly only a rough description of the results. The formal fit errors for the exponent are less than 0.1. But it is clear that there are large deviations from any power law, so the value of the exponent should not be taken too seriously. A wider range of core distances would be useful for any characterization of the gradient.
The feature (NN), seen about 80 mas north of the core at low frequencies by STV, also has a peaked spectrum that STV conclude is the result of free-free absorption. For this feature, the absorption is much weaker, since it occurs at a much lower frequency. The simple power laws fit to our higher frequency data in the region less than 10 mas from the core, if extrapolated to the more distant feature, would give far too much absorption. If the power law characterization is reasonable, the exponent must be in the vicinity of the $`-2.6`$ determined by STV. But a power law with a $`-2.6`$ exponent is not very consistent with our data in the 4 to 10 mas range. It is likely that the amount of ionized gas eventually falls off faster between 10 and 80 mas (3.2 and 26 pc) than it does over the 4 to 10 mas (1.3 to 3.2 pc) range over which we measure it. In fact, the existence of structure even closer to the core at 15 and 22 GHz (see Figure 5) suggests that the absorption does not continue to increase as fast with decreasing core distance inside the region of our measurements as it does further out.
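The tension can be quantified with the fitted October power law (a sketch; the NN distance of 80 mas, about 26 pc, is taken from the text, and the steeper exponent is shown only for comparison with the STV result):

```python
def em_power_law(r25, em0=4.8e8, p=1.8):
    """Emission measure at scaled radius r25 = r / 2.5 pc, EM = em0 * r25**-p."""
    return em0 * r25**-p

r25_nn = 26.0 / 2.5                      # NN feature at ~26 pc
flat = em_power_law(r25_nn, p=1.8)       # October fit exponent
steep = em_power_law(r25_nn, p=2.6)      # STV-like exponent
print(f"{flat:.1e} {steep:.1e} ratio={flat/steep:.1f}")
```

Extrapolating the shallower (fitted) exponent out to 26 pc predicts several times more emission measure than a $`-2.6`$ law, illustrating why a steepening of the fall-off beyond 10 mas is required.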
## 4 Discussion
The current data provide more frequencies than were available for previous discussions of the spectrum of the northern feature, and provide measurements at all frequencies taken at approximately the same time, so that time variability should not be a source of uncertainty. Based on the earlier 8.4 and 22 GHz data, with a 2 year separation, VRB and LLV argue that the inverted spectrum of the northern feature is not synchrotron self-absorption on the grounds that, given the spectrum and the size of the feature, the magnetic field energy density would have to be greater than the particle energy density by a factor of $`10^{14}`$, which is unlikely. Also, the total energy content of the jet with the implied magnetic field would be unreasonably high. The current measurements provide a rather simpler argument. The observed spectral index on the northern feature between 5.0 and 8.4 GHz is about $`\alpha =+4`$. This is significantly steeper than the optically thick synchrotron spectral index of 2.5. Therefore, synchrotron self-absorption cannot be the only cause of the inverted spectrum. LLV also consider stimulated Raman scattering as the cause of the inverted spectrum, but rule it out on the grounds that the brightness temperature is inadequate. That argument remains unchanged. The Razin-Tsytovich effect, which exponentially suppresses synchrotron radiation at frequencies below those at which the plasma phase velocity exceeds $`c`$, can cause a steeply inverted spectrum below $`\nu _r\approx 20n_e/B`$ Hz. But with only $`\nu ^{-1}`$ in the exponent, compared to the $`\nu ^{-2}`$ of free-free absorption, it cannot produce the change from about $`\alpha =0`$, seen between 15 and 22 GHz, to about $`\alpha =+4`$, seen between 5.0 and 8.4 GHz, without an unphysically steep underlying emitted spectral index.
Also, the Razin-Tsytovich effect operates in the emitting region rather than along the line-of-sight, so it is difficult to understand why there would be a very big difference between the northern and southern features. Free-free absorption, on the other hand, fits the spectra reasonably well, and, to the extent that there are deviations, they are what might be expected from non-uniformities in the absorbing medium.
The opacity fits presented above give an emission measure of about $`5\times 10^8`$ pc cm<sup>-6</sup> at 2.5 pc, with a fairly steep radial gradient, assuming a temperature of $`T=10^4`$ K. The path through the absorbing medium cannot be significantly larger than the projected distance of the line-of-sight from the core. In fact, if the geometry were that of a thin disk, the path would be significantly smaller than the projected offset. The density of ionized gas required is roughly $`n_e=2\times 10^4`$ cm<sup>-3</sup> for a thick (1 pc) medium at $`10^4`$ K, rising to significantly higher values for a thinner medium or a higher temperature.
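The density estimate is a one-line consequence of the fitted emission measure (a quick check; path lengths other than 1 pc are shown only for illustration):

```python
import math

def ne_from_em(em, L_pc):
    """Electron density (cm^-3) implied by EM = L_pc * n_e**2, uniform medium."""
    return math.sqrt(em / L_pc)

print(f"{ne_from_em(5e8, 1.0):.1e}")    # ~2.2e4 cm^-3 for a 1 pc path
print(f"{ne_from_em(5e8, 0.01):.1e}")   # ~2.2e5 cm^-3 for a 0.01 pc path
```

The required density scales as $`L_{pc}^{-1/2}`$, which is why a thin absorbing layer demands much denser gas.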
LLV point out that the Thomson optical depth along the line-of-sight must not significantly exceed unity, constraining the density to be
$$n_e\lesssim 5\times 10^5L_{pc}^{-1}\mathrm{cm}^{-3}$$
(5)
This density is not all that far from that implied by the free-free absorption measurements. If we insert the inequality of Equation 5 into Equations 3 and 4, we get:
$$T\lesssim 5.8\times 10^5r_{2.5}L_{pc}^{-2/3}$$
(6)
for January and
$$T\lesssim 6.5\times 10^5r_{2.5}^{1.1}L_{pc}^{-2/3}$$
(7)
for October. The fact that $`L_{pc}`$ is not well constrained leaves room for a fairly wide range of temperature limits. But for values of $`L_{pc}`$ between 0.01 and 1, the temperature at 2.5 pc cannot exceed about $`1.4\times 10^7`$ K and $`6.5\times 10^5`$ K, respectively, in October, and slightly less in January. These limits do not cause any problems for models that we are aware of, but it is clear that the temperature cannot greatly exceed the $`10^6`$ K called for in some cases (e.g., Königl and Kartje 1994; Gallimore, Baum, & O’Dea 1997).
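These limits follow from inserting the Thomson-limit density into the fitted emission-measure power law; the arithmetic can be sketched as follows (illustrative only; the Gaunt-factor ratio $`g_4`$ is set to 1):

```python
def t_max(L_pc, r25, em0=4.8e8, p=1.8):
    """Upper limit on T (K): combine n_e <= 5e5 / L_pc with
    L_pc * n_e**2 * (T / 1e4 K)**-1.5 = em0 * r25**-p  (g4 ~ 1 assumed)."""
    t4_pow_15 = (5e5 / L_pc)**2 * L_pc / (em0 * r25**-p)   # (T / 1e4)**1.5
    return 1e4 * t4_pow_15**(2.0 / 3.0)

print(f"{t_max(1.0, 1.0):.1e}")    # ~6.5e5 K  (October fit, 1 pc path)
print(f"{t_max(0.01, 1.0):.1e}")   # ~1.4e7 K  (October fit, 0.01 pc path)
```

The $`L_{pc}^{-2/3}`$ dependence of the limit is what makes the allowed temperature range so sensitive to the unknown path length.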
The region where free-free absorption is observed is a few parsecs from the central object in NGC 1275. LLV estimate the bolometric luminosity to be about $`4\times 10^{44}`$ ergs s<sup>-1</sup>, which is about the Eddington luminosity for a $`10^6M_{\mathrm{\odot }}`$ black hole. If that is the mass of the central object, a location 2.5 pc away is at roughly $`3\times 10^7R_G`$ (the gravitational radius is $`R_G=2GM/c^2`$). On the other hand, the same location would only be about $`3\times 10^4R_G`$ from a $`10^9M_{\mathrm{\odot }}`$ black hole. These distances are in the outer regions typically covered in accretion disk models, if not well beyond. In such regions, models are complicated by the possible importance of self gravity and of external illumination of unknown geometry. The temperature at these distances is expected to be too low for ionization without illumination by ionizing radiation from the central regions. Such illumination requires an appropriate geometry, such as a flaring or warped disk, or a source of radiation offset from the disk center, possibly from a jet or from scattering. LLV give an extensive discussion of the implications for disk models of the observation of free-free absorption in NGC 1275 — a discussion that is not significantly changed by the new data.
The idea that AGN disks are likely to be illuminated by ionizing radiation from the central compact region near the black hole has been explored as part of efforts to model the spectra of broad line regions and the structure of outer disks (see Collin and Huré 1999 and references therein including Collin-Souffrin & Dumont 1990). Generally the density of the outer disk is expected to be much higher than implied by the free-free absorption. However, much of the disk is sufficiently optically thick that the central regions are shielded from ionizing radiation. These regions will be neutral and will not contribute to the free-free absorption or to many of the observed optical emission lines. But there is likely to be an ionized “chromosphere” above the dense regions. At large radii, the density is too low to provide shielding and the disk is fully ionized and at a temperature of about 7000 K. This chromosphere, or the ionized regions at large distances, could be the absorbing region.
An alternative to the irradiated disk for the location of the ionized material responsible for the free-free absorption might be a wind off the surface of the disk. For example, Königl and Kartje (1994 — see also other references therein) discuss the properties of centrifugally driven winds in the AGN context. Winds of this type are the leading model for the origin of the bipolar outflows that are very common in young stellar objects. They seem likely to exist whenever a disk is threaded by magnetic fields — a situation that is probably hard to avoid. The temperatures and densities of some of the examples presented in Königl and Kartje are of the magnitude required by our data.
It seems likely that both of the above scenarios apply in an AGN. The central regions are a strong source of ionizing radiation which is very likely to affect some portion of the disk. And, given the almost certain presence of magnetic fields in any accretion disk, combined with the ubiquitous presence of magnetically driven winds from galactic examples of accretion disks, it seems hard to avoid having a magnetically driven wind. There is a considerable range of possible variations on these models, depending on the black hole mass, the accretion rate, the disk composition, the magnetic field strength, and other factors, so a variety of types of data will be required to determine anything like a unique model. Because of the large range of possible models, we have not attempted detailed comparisons with any specific models. But our data do provide some firm constraints that any model must match. We are attempting to obtain additional constraints by making multi-frequency observations that can be used at higher resolution than those presented here. Such observations will also be sensitive to temporal variations in the absorption since 1995. In addition, we are searching for recombination lines from the absorbing medium.
## 5 Conclusions
The primary conclusions of this work are:
* The suggestion from VRB and WRB, based on two frequencies and observations separated by 2 years, that there is free-free absorption of the northern feature in 3C 84 at a few parsecs from the central object, is confirmed. We present two separate epochs in which the absorption was observed nearly simultaneously at 5 frequencies. The observed spectral indices are sufficiently steep to preclude other absorption mechanisms.
* The free-free absorption shows a two dimensional structure dominated by a gradient with distance from the radio core. The absorption is greater near the core and falls off with distance. If the absorption goes as a power law with distance, the exponent is a bit above $`-2`$ over the range of the observations presented here (about 1.5 to 3.5 pc projected distance from the core). However, a somewhat steeper exponent is required to match up with the observations of STV at around 25 pc. Those observations were taken at the same time as ours.
* The observed absorption is consistent with the model proposed by WRB and VRB. In that model, the northern feature is on the far side of the system relative to the Earth. There is an accretion disk extending to the parsec scales observed here and that accretion disk has associated ionized gas that is responsible for the absorption. The amount of ionized gas falls off fairly rapidly with core distance.
Various models, including disk ionization by radiation from the central regions and disk-driven hydromagnetic winds, might provide the necessary ionized material in a geometry that would only affect the far-side jet. The free-free absorption results provide firm constraints on the ionized gas on parsec scales, including positional information, that any model of the central regions of NGC 1275 must match.
We would like to thank J. Benson and W. Alef for their contributions to these observations. We also thank J. Wrobel and G. Taylor for useful discussions. Finally, we thank the staff of the VLBA for their invaluable support. These observations would not have been possible without the major advances in frequency flexibility and image quality provided by the VLBA. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
# Remarks on Form Factor Bounds
## Abstract
Improved model independent upper bounds on the weak transition form factors are derived using inclusive sum rules. Comparison of the new bounds with the old ones is made for the form factors $`h_{A_1}`$ and $`h_V`$ in $`B\to D^{}`$ decays.
preprint:
A set of model independent bounds has been derived to provide a restriction on the shape of weak transition form factors. They have been extensively used to bound weak decay form factors and the decay spectra of heavy hadrons (see, however, the model independent parametrizations of the form factors). Here we provide a more stringent upper bound without any further assumptions. This upper bound differs from the one derived previously at order $`1/m_Q^2`$ or $`\alpha _s/m_Q`$. Though this is only a small improvement, it is worth making because it can give a tighter bound from above once higher order corrections are included.
The bounds are derived from sum rules that relate the inclusive decay rate, calculated using the operator product expansion (OPE) and perturbative QCD, to the sum of exclusive decay rates. To be complete, we will derive both the upper and lower bounds, though the lower bound is the same as the previous one.
Without loss of generality, we take as an example the decay of a $`B`$ meson into an $`H`$ meson, with the underlying quark process $`b\to f`$, where $`f`$ could be either a heavy or a light quark. First, consider the time ordered product of two weak transition currents taken between two $`B`$ meson states in momentum space,
$`T^{\mu \nu }`$ $`=`$ $`{\displaystyle \frac{i}{2M_B}}{\displaystyle d^4xe^{iqx}B(v)\left|T(J^\mu (x)J^\nu (0))\right|B(v)}`$ (1)
$`=`$ $`g^{\mu \nu }T_1+v^\mu v^\nu T_2+iϵ^{\mu \nu \alpha \beta }q_\alpha v_\beta T_3+q^\mu q^\nu T_4+(q^\mu v^\nu +v^\mu q^\nu )T_5,`$ (2)
where $`J^\mu `$ is a $`b\to f`$ weak transition current. The time ordered product can be expressed as a sum over hadronic or partonic intermediate states. The sum over hadronic states includes the matrix element $`H|J|B`$. After inserting a complete set of states and contracting with a four-vector pair $`a_\mu ^{}a_\nu `$, we obtain:
$`T(ϵ)`$ $`=`$ $`{\displaystyle \frac{1}{2M_B}}{\displaystyle \underset{X}{}}(2\pi )^3\delta ^3(\stackrel{}{p}_X+\stackrel{}{q}){\displaystyle \frac{\left|X\left|aJ\right|B\right|^2}{E_X-E_H-ϵ}}`$ (4)
$`+{\displaystyle \frac{1}{2M_B}}{\displaystyle \underset{X}{}}(2\pi )^3\delta ^3(\stackrel{}{p}_X\stackrel{}{q}){\displaystyle \frac{\left|B\left|aJ\right|X\right|^2}{ϵ+E_X+E_H-2M_B}},`$
where $`T(ϵ)\equiv a_\mu ^{}T^{\mu \nu }a_\nu `$, $`ϵ =M_B-E_H-vq`$, and the sum over $`X`$ includes the usual $`d^3p/2E_X`$ for each particle in the state $`X`$. We choose to work in the rest frame of the $`B`$ meson, $`p=M_Bv`$, with the $`z`$ axis pointing in the direction of $`\stackrel{}{q}`$. We hold $`q_3`$ fixed while analytically continuing $`vq`$ to the complex plane. $`E_H=\sqrt{M_H^2+q_3^2}`$ is the $`H`$ meson energy. There are two cuts in the complex $`ϵ`$ plane, $`0<ϵ<\mathrm{}`$, corresponding to the decay process $`b\to f`$, and $`-\mathrm{}<ϵ<-2E_H`$, corresponding to two $`b`$ quarks and a $`\overline{f}`$ quark in the final state. The second cut will not be important for our discussion.
The integral over $`ϵ`$ of the time ordered product, $`T(ϵ)`$, times a weight function, $`ϵ^nW_\mathrm{\Delta }(ϵ)`$, can be computed perturbatively in QCD. For simplicity, we pick the weight function $`W_\mathrm{\Delta }(ϵ)=\theta (\mathrm{\Delta }-ϵ)`$, which corresponds to summing over all hadronic resonances up to the excitation energy $`\mathrm{\Delta }`$ with equal weight. Relating the integral with the hard cutoff to the exclusive states requires local duality at the scale $`\mathrm{\Delta }`$. Therefore, $`\mathrm{\Delta }`$ must be chosen large enough so that the structure functions can be calculated perturbatively.
Taking the zeroth moment of $`T(ϵ)`$, we get
$`M_0`$ $`\equiv `$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _C}𝑑ϵ\theta (\mathrm{\Delta }-ϵ)T(ϵ)`$ (5)
$`=`$ $`{\displaystyle \frac{\left|H|aJ|B\right|^2}{4M_BE_H}}+{\displaystyle \underset{X\ne H}{}^{}}\theta (\mathrm{\Delta }-E_X+E_H)(2\pi )^3\delta ^3(\stackrel{}{q}+\stackrel{}{p}_X){\displaystyle \frac{\left|X|aJ|B\right|^2}{2M_B}},`$ (6)
where the primed summation means a sum over all the kinematically allowed states except the $`H`$ meson. So,
$`{\displaystyle \frac{\left|H|aJ|B\right|^2}{4M_BE_H}}`$ $`=`$ $`M_0-{\displaystyle \underset{X\ne H}{}^{}}\theta (\mathrm{\Delta }-E_X+E_H)(2\pi )^3\delta ^3(\stackrel{}{q}+\stackrel{}{p}_X){\displaystyle \frac{\left|X|aJ|B\right|^2}{2M_B}}.`$ (7)
On the other hand, the first moment of $`T(ϵ)`$ gives
$`M_1`$ $`\equiv `$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _C}𝑑ϵϵ\theta (\mathrm{\Delta }-ϵ)T(ϵ)`$ (8)
$`=`$ $`{\displaystyle \underset{X\ne H}{}^{}}\theta (\mathrm{\Delta }-E_X+E_H)(E_X-E_H)(2\pi )^3\delta ^3(\stackrel{}{q}+\stackrel{}{p}_X){\displaystyle \frac{\left|X\left|aJ\right|B\right|^2}{4M_BE_X}}`$ (12)
$`\{\begin{array}{cc}\le (E_{max}-E_H){\displaystyle \underset{X\ne H}{}^{}}\theta (\mathrm{\Delta }-E_X+E_H)(2\pi )^3\delta ^3(\stackrel{}{q}+\stackrel{}{p}_X){\displaystyle \frac{\left|X\left|aJ\right|B\right|^2}{4M_BE_X}},\hfill & \\ \ge (E_1-E_H){\displaystyle \underset{X\ne H}{}^{}}\theta (\mathrm{\Delta }-E_X+E_H)(2\pi )^3\delta ^3(\stackrel{}{q}+\stackrel{}{p}_X){\displaystyle \frac{\left|X\left|aJ\right|B\right|^2}{4M_BE_X}}.\hfill & \end{array}`$
where $`E_{max}`$ and $`E_1`$ denote the highest energy state kinematically allowed and the first excited state more massive than the $`H`$ meson, respectively. Here the validity of the second inequality relies on the assumption that multiparticle final states with energy less than $`E_1`$ contribute negligibly. This assumption holds in the large $`N_c`$ limit, and is also supported by current experimental data. The first inequality, however, is valid without any further assumption.
From Eq. (7) and the first inequality in Eq. (8), one can get an upper bound on the matrix element $`\left|H\left|aJ\right|B\right|^2/4M_BE_H`$,
$$\frac{\left|H\left|aJ\right|B\right|^2}{4M_BE_H}\le \frac{1}{2\pi i}_C𝑑ϵ\theta (\mathrm{\Delta }-ϵ)T(ϵ)\left(1-\frac{ϵ}{E_{max}-E_H}\right).$$
(13)
Dropping $`ϵ/(E_{max}-E_H)`$ on the right hand side gives the previously derived upper bound. Since $`E_{max}-E_H`$ is of order $`m_Q`$ and the first moment, $`M_1`$, is of order $`1/m_Q`$ and positive definite, this extra term makes the new upper bound smaller than the old one at order $`1/m_Q^2`$. Perturbative corrections will also modify the new bound at order $`\alpha _s/m_Q`$.
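In terms of the moments defined above, the improvement can be restated compactly (this is simply a rearrangement of the relations already derived, not a new result):

$$\frac{\left|H\left|aJ\right|B\right|^2}{4M_BE_H}\le M_0-\frac{M_1}{E_{max}-E_H}\le M_0,$$

where the rightmost expression is the old bound; since $`M_1`$ is positive and of order $`1/m_Q`$ while $`E_{max}-E_H`$ is of order $`m_Q`$, the subtracted term tightens the bound at order $`1/m_Q^2`$.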
Similarly, a lower bound can be formed by combining Eq. (7) and the second inequality in Eq. (8) to be
$$\frac{\left|H\left|aJ\right|B\right|^2}{4M_BE_H}\ge \frac{1}{2\pi i}_C𝑑ϵ\theta (\mathrm{\Delta }-ϵ)T(ϵ)\left(1-\frac{ϵ}{E_1-E_H}\right).$$
(14)
Therefore, we find the bounds
$`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _C}𝑑ϵ\theta (\mathrm{\Delta }-ϵ)T(ϵ)\left(1-{\displaystyle \frac{ϵ}{E_1-E_H}}\right)`$ $`\le `$ $`{\displaystyle \frac{\left|H(v^{})\left|aJ\right|B(v)\right|^2}{4M_BE_H}}`$ (15)
$`\le `$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _C}𝑑ϵ\theta (\mathrm{\Delta }-ϵ)T(ϵ)\left(1-{\displaystyle \frac{ϵ}{E_{max}-E_H}}\right).`$ (16)
Since $`1/(E_1-E_H)\sim 1/\mathrm{\Lambda }_{\mathrm{QCD}}`$, the lower bound is good to one less order in $`1/m_Q`$ than the upper bound.
As emphasized previously, the old upper bound is essentially model independent, while the lower bound relies on the assumption about the final state spectrum. The new upper bound provided here is also model independent. These bounds are valid for both heavy mesons and baryons. (For baryons, a spin sum $`\frac{M_H}{2j+1}_{S,S^{}}`$ needs to be included in front of the bounded factor.)
Great interest has been attracted by the semileptonic exclusive decay $`B\to D^{}l\overline{\nu }`$, from which $`|V_{cb}|`$ can be extracted. As an example, we now focus on the case in which $`H`$ is the $`D^{}`$ meson and give, in particular, the upper bounds on the form factors $`h_{A_1}`$ and $`h_V`$. The hadronic matrix element for the semileptonic decay of a $`B`$ meson into a vector meson $`D^{}`$ may be parameterized as
$`{\displaystyle \frac{D^{}(v^{},\epsilon )\left|V^\mu -A^\mu \right|B(p)}{\sqrt{M_{D^{}}M_B}}}`$ $`=`$ $`h_{A_1}(\omega )(\omega +1)\epsilon ^{*\mu }+\left[h_{A_2}(\omega )v^\mu +h_{A_3}(\omega )v^{\prime \mu }\right]v\epsilon ^{}`$ (18)
$`+ih_V(\omega )ϵ^{\mu \nu \alpha \beta }\epsilon _\nu ^{}v_\alpha ^{}v_\beta ,`$
where $`v^{}`$ is the velocity of the final state meson, and the variable $`\omega =vv^{}`$ is a measure of the recoil. One may relate $`\omega `$ to the momentum transfer $`q^2`$ by $`\omega =(M_B^2+M_{D^{}}^2-q^2)/(2M_BM_{D^{}})`$. Therefore, with a proper choice of the current $`J^\mu `$ and the four vector $`a^\mu `$, one may readily single out the form factors, $`h_{A_1}`$ and $`h_V`$, and establish the corresponding bounds, as has been done previously. Nonperturbative corrections to the structure functions, as well as the complete $`𝒪(\alpha _s)`$ corrections, are available in the literature.
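As a small numerical illustration of the kinematics, the recoil variable can be evaluated directly from this relation (the meson masses below are illustrative PDG-like values in GeV, not quantities taken from the text):

```python
def omega(q2, mB=5.279, mD=2.010):
    """Recoil variable omega(q^2) for B -> D*; masses in GeV, q2 in GeV^2."""
    return (mB**2 + mD**2 - q2) / (2.0 * mB * mD)

q2_zero_recoil = (5.279 - 2.010)**2   # q^2_max = (M_B - M_D*)^2
print(round(omega(q2_zero_recoil), 3))   # 1.0 at zero recoil, as expected
print(round(omega(8.0), 3))              # an intermediate recoil point
```

At $`q^2=(M_B-M_{D^{}})^2`$ the identity $`M_B^2+M_{D^{}}^2-(M_B-M_{D^{}})^2=2M_BM_{D^{}}`$ guarantees $`\omega =1`$, the zero recoil point where heavy quark symmetry normalizes $`h_{A_1}`$.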
To obtain the bounding curves within the kinematic range $`1<\omega \le 1.25`$, we will expand in $`\alpha _s`$, $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_Q`$ and $`\omega -1`$. For both the upper and lower bounds, we will keep perturbative corrections up to order $`\alpha _s(\omega -1)`$, but drop terms of order $`\alpha _s(\omega -1)^2`$, $`\alpha _s^2`$, and $`\alpha _s\mathrm{\Lambda }_{\mathrm{QCD}}/m_Q`$. We will calculate to order $`1/m_Q^2`$ for the upper bounds, but only to order $`1/m_Q`$ for the lower bounds.
Both the old and new upper bounds, along with the lower bound on $`h_{A_1}`$, are shown in Fig. 1. For the figures we take $`m_b=4.8\mathrm{GeV}`$, $`m_c=1.4\mathrm{GeV}`$, $`\alpha _s=0.3`$ (corresponding to a scale of about $`2\mathrm{GeV}`$), $`\overline{\mathrm{\Lambda }}=0.4\mathrm{GeV}`$, $`\lambda _1=-0.2\mathrm{GeV}^2`$, $`\lambda _2=0.12\mathrm{GeV}^2`$ and $`\mathrm{\Delta }=1\mathrm{GeV}`$. In this and the next example, the corresponding first excited state more massive than $`D^{*}`$ that contributes to the sum rule is the $`J^P=1^+`$ state, i.e., the $`D_1`$ meson, and $`E_{max}`$ is taken to be $`M_B`$ in the limit of no energy transfer to the leptonic sector.
The upper and lower bounds for $`(\omega ^2-1)\left|h_V(\omega )\right|^2/(4\omega )`$ are shown in Fig. 2.
In both diagrams, the thick solid (dashed) curve is the new (old) upper bound including perturbative corrections. The thin solid (dashed) curve is the upper bound without perturbative corrections. At large recoil, the new bound improves the upper limit by more than $`4\%`$ in Fig. 1 and by about $`3\%`$ in Fig. 2.
This work provides tighter upper bounds on weak decay form factors. The new upper bounds are compared with the old ones for, in particular, the $`B\to D^{*}`$ form factors, $`h_{A_1}`$ and $`h_V`$. Their difference is due to the $`1/m_Q^2`$ nonperturbative corrections and the $`\alpha _s`$ corrections that are suppressed by $`1/M_Q`$. The difference between the old and new bounds will become more significant at higher orders in $`1/m_Q`$.
###### Acknowledgements.
The author would like to thank Fred Gilman, Ira Rothstein and Adam Leibovich for useful comments and discussions. This work was supported in part by the Department of Energy under Grant No. DE-FG02-91ER40682.
## 1 Introduction
High-resolution $`N`$-body simulations with power-law initial power spectra suggest that the density profiles of dark halos over a large range of masses are well fitted by a simple universal formula
$$\frac{\rho (x)}{\rho _{\mathrm{crit},0}}=\frac{\delta _{\mathrm{char}}}{(r/r_\mathrm{s})(1+r/r_\mathrm{s})^2}$$
(1)
where $`\rho _{\mathrm{crit},0}`$ is the present critical density of the Universe and $`\delta _{\mathrm{char}}`$ is the characteristic density
$$\delta _{\mathrm{char}}=\frac{vc^3}{3[\mathrm{ln}(1+c)-c/(1+c)]},$$
(2)
with $`v=200`$. The scale radius $`r_\mathrm{s}`$ is related to the virial radius $`r_v`$ (the distance from the center of the halo within which the mean density is $`v`$ times the critical density) by $`r_\mathrm{s}=r_v/c`$ and $`c`$ is the concentration, the only fitting parameter in the formula.
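As a consistency check of the two formulas above (a numerical sketch, not part of the paper's analysis): with $`\delta _{\mathrm{char}}=vc^3/\{3[\mathrm{ln}(1+c)-c/(1+c)]\}`$, the mean density inside $`r_v=cr_\mathrm{s}`$ comes out to $`v=200`$ times critical for any concentration.

```python
import math

def mean_overdensity(c, v=200.0, steps=200000):
    """Mean density within r_v = c * r_s, in units of rho_crit,0.

    Integrates rho(x) * x^2 dx from 0 to c (trapezoid rule), where
    rho(x)/rho_crit = delta_char / (x (1+x)^2) and x = r/r_s."""
    delta_char = v * c**3 / (3.0 * (math.log(1.0 + c) - c / (1.0 + c)))
    h = c / steps
    total = 0.0
    for i in range(steps):
        x0, x1 = i * h, (i + 1) * h
        f0 = delta_char * x0 / (1.0 + x0)**2  # rho * x^2 = delta * x/(1+x)^2
        f1 = delta_char * x1 / (1.0 + x1)**2
        total += 0.5 * (f0 + f1) * h
    return 3.0 * total / c**3   # mean density in units of rho_crit,0

print(mean_overdensity(5.0))    # close to 200, independently of c
print(mean_overdensity(19.1))   # same, for the SIM value quoted later
```

The check works because the mass integral is analytic, $`M\propto \delta _{\mathrm{char}}[\mathrm{ln}(1+c)-c/(1+c)]`$, so the bracket cancels against the one in $`\delta _{\mathrm{char}}`$.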
The density profile was observed to steepen from $`r^{-1}`$ near the center of the halo to $`r^{-3}`$ at large distances. This result seemed to contradict the prediction of the analytical spherical infall model (hereafter SIM) which for $`\mathrm{\Omega }=1`$ (the only case considered here) finds the profiles to be power-laws of the form $`r^{-3(n+3)/(n+4)}`$ where $`n`$ is the index of the initial power spectrum of density fluctuations.
## 2 The modified spherical infall model
I will argue here that the discrepancy between the two approaches is mainly due to the oversimplifications applied in the SIM. While such assumptions as spherical symmetry of the initial density distribution and the absence of peculiar velocities will be kept, the shape of the initial density distribution can in fact be made more realistic. This distribution is usually described by the expected overdensity within $`r_\mathrm{i}`$ provided there is a peak (overdense region) of height $`a\sigma `$ at $`r_\mathrm{i}=0`$, where $`\sigma `$ is the rms fluctuation of the linear density field smoothed on scale $`R`$. If the initial probability distribution of fluctuations is Gaussian and the filter is Gaussian, the general form of this quantity as a function of $`x_\mathrm{i}=r_\mathrm{i}/R`$ can be found:
$$\mathrm{\Delta }_\mathrm{i}(x_\mathrm{i})=\frac{6a\sigma }{(n+1)x_\mathrm{i}^2}\left[{}_{1}F_{1}\left(\frac{n+1}{2},\frac{3}{2},-\frac{x_\mathrm{i}^2}{4}\right)-{}_{1}F_{1}\left(\frac{n+1}{2},\frac{1}{2},-\frac{x_\mathrm{i}^2}{4}\right)\right].$$
(3)
This function is flat near the center and only at large distances from the peak does it approach the $`x_\mathrm{i}^{-(n+3)}`$ power law applied in .
Although in the flat Universe any overdense region binds the mass up to infinite distance, in reality there are always neighbouring fluctuations that also gather mass. As a way to emulate these conditions I propose a second modification of the initial density distribution in the form of a cut-off. One can think of two ways of estimating the cut-off scale. First, such a scale could be found as a coherence scale of the overdense region, defined by the expected overdensity (3) being equal to its rms fluctuation. It turns out, however, that a more stringent constraint is induced by the presence of other peaks (see ). Therefore here the cut-off will be introduced at half the inter-peak separation $`x_{\mathrm{i},\mathrm{pp}}/2`$ for the most reasonable height of the peak, $`a=3`$. In the case of $`n=-1`$ we have $`x_{\mathrm{i},\mathrm{pp}}/2`$=6.45. The generalized initial density distribution with a cut-off will be modelled by
$$\mathrm{\Delta }_{\mathrm{i},\mathrm{cut}}(x_\mathrm{i})=\frac{\mathrm{\Delta }_\mathrm{i}(x_\mathrm{i})}{1+\mathrm{e}^{(x_\mathrm{i}-x_{\mathrm{i},\mathrm{pp}}/2)/w}}$$
(4)
with the width of the filter $`w=1`$.
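To make the initial profile concrete, here is a small numerical sketch. The illustrative choice $`n=0`$ (not one of the cases discussed in the text) is convenient because the confluent hypergeometric functions, with the negative arguments required for the profile to decay, then reduce to elementary functions.

```python
import math

A_SIGMA = 0.1   # a*sigma, as assumed in the text
N = 0           # illustrative spectral index (the text's examples differ)

def delta_i(x):
    """Expected overdensity profile, Eq. (3), for n = 0.

    For n = 0 one has 1F1(1/2,3/2;-z) = sqrt(pi)*erf(sqrt(z))/(2*sqrt(z))
    and 1F1(1/2,1/2;-z) = exp(-z), with z = x^2/4."""
    z = 0.25 * x * x
    f32 = math.sqrt(math.pi) * math.erf(math.sqrt(z)) / (2.0 * math.sqrt(z))
    f12 = math.exp(-z)
    return 6.0 * A_SIGMA / ((N + 1) * x * x) * (f32 - f12)

def delta_cut(x, x_pp_half=6.45, w=1.0):
    """Eq. (4): smooth cut-off at half the inter-peak separation."""
    return delta_i(x) / (1.0 + math.exp((x - x_pp_half) / w))

# near the center the profile is flat, delta_i -> a*sigma
print(delta_i(1e-3))
# the large-distance logarithmic slope should approach -(n+3) = -3
slope = (math.log(delta_i(30.0)) - math.log(delta_i(20.0))) / math.log(1.5)
print(slope)
```

The slope check confirms the $`x_\mathrm{i}^{-(n+3)}`$ asymptotics quoted above, and `delta_cut` suppresses the profile beyond the cut-off while leaving the inner region untouched.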
According to the SIM the subsequent shells numbered by the coordinate $`x_\mathrm{i}`$ will slow down due to the gravitational attraction of the peak, stop at the maximum radius and then collapse by some factor $`f`$ to end up at the final radius
$$x=\frac{x_\mathrm{i}f[\mathrm{\Delta }_{\mathrm{i},\mathrm{cut}}(x_\mathrm{i})+1]}{\mathrm{\Delta }_{\mathrm{i},\mathrm{cut}}(x_\mathrm{i})}.$$
(5)
The simplest versions of the SIM adopt the value $`f=1/2`$, motivated by the virial theorem, and it will also be assumed here; a more realistic description can be found in . The final profile of the virialized halo is then
$$\frac{\rho }{\rho _{\mathrm{crit},0}}=(1+a\sigma \varrho )(1+z_\mathrm{i})^3\left(\frac{x_\mathrm{i}}{x}\right)^2\frac{\mathrm{d}x_\mathrm{i}}{\mathrm{d}x}$$
(6)
where $`\varrho =\xi _R(r)/\sigma ^2`$ is the correlation coefficient.
## 3 Comparison with the universal profile
Since the measurements of halo properties from $`N`$-body simulations were done at the state corresponding to the present epoch, the same condition will be applied for SIM calculations. Once the initial redshift $`z_\mathrm{i}`$ is specified, equating the collapse time to the present age of the Universe determines the overdensity of the presently virializing shell which ends up at the virial radius of the halo. When we adopt the normalization of the initial power spectrum ($`\sigma _8=1`$) and the conditions $`a=3`$ and $`a\sigma =0.1`$ (for the linear theory to be valid) choosing the initial redshift $`z_\mathrm{i}`$ for a given spectral index $`n`$ gives the comoving smoothing scale $`R`$ with which the overdense regions are identified. The mass of the halo within the virial radius $`x_v`$ can then also be determined
$$M=\frac{800\pi }{3}\rho _{\mathrm{crit},0}\left(\frac{x_vR}{1+z_\mathrm{i}}\right)^3.$$
(7)
Figure 1 shows the density profile of a dark matter halo of galactic mass. The solid line presents the prediction of the SIM obtained from formula (6) for $`n=-1`$, $`z_\mathrm{i}=600`$ and $`R=0.188h^{-1}`$ Mpc. The final (virial) proper radius of the halo is $`r_v=0.231h^{-1}`$ Mpc and the mass is $`M=2.88\times 10^{12}h^{-1}M_{\odot }`$, which correspond to a galactic halo. We see that the result of the SIM can be well fitted by formula (1), but the SIM profile is significantly flatter than the corresponding one from the simulations. The concentration parameters, which measure the steepness of the profile (the higher $`c`$ the steeper the profile), are $`c=57.1`$ from the simulations and $`c=19.1`$ from the SIM.
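The quoted mass can be cross-checked from the virial radius alone, since by definition the mean density within $`r_v`$ is 200 times critical (a sketch; the value $`\rho _{\mathrm{crit},0}=2.775\times 10^{11}h^2M_{\odot }\mathrm{Mpc}^{-3}`$ is assumed here, not quoted in the text):

```python
import math

RHO_CRIT0 = 2.775e11   # assumed critical density, in h^2 M_sun / Mpc^3

def virial_mass(r_v):
    """M = (800*pi/3) * rho_crit,0 * r_v^3, i.e. the mass enclosing a
    mean density of 200*rho_crit,0.  r_v in h^-1 Mpc gives M in h^-1 M_sun."""
    return (800.0 * math.pi / 3.0) * RHO_CRIT0 * r_v**3

print(virial_mass(0.231))   # close to the quoted 2.88e12 h^-1 M_sun
```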
One of the main results of $`N`$-body simulations was the dependence of the shape of the density profiles of halos on their mass. On the other hand, the standard prediction of the SIM gives the same profile independently of mass. However, with the improvements introduced above it is possible to reproduce the dependence of the profiles on mass.
It is sometimes argued that if the density field is smoothed with a given scale $`R`$, lower peaks end up as galaxies and higher ones as clusters. This, however, would violate the hierarchical way of structure formation, since higher peaks collapse earlier. Another argument against such an assumption comes from the calculations based on the improved SIM: the reasonable range of peak heights $`a`$ between 2 and 4, which are most likely to produce halos, leads for a given smoothing scale to a range of masses spanning only one order of magnitude, while in $`N`$-body simulations halos with masses spanning a few orders of magnitude are observed. This suggests that the dependence on mass should rather be related to the initial smoothing scale.
The dependence of the shape of the profiles on mass obtained with these assumptions is shown in Figure 2. The solid lines give the values of the concentration parameter $`c`$ obtained by fitting the formula (1) to the results of SIM for different power spectra and the dashed lines are the corresponding values from the simulations . The overall trend of steeper profiles for smaller masses is reproduced and the agreement between the two approaches is significantly better for smaller masses.
The spherical infall model provides a simple understanding of the dependence of the shape of the halo profile on its mass: smaller halos start forming earlier and by the present epoch their virial radii reach the cut-off scale that accounts for the presence of the neighbouring fluctuations; more massive halos form later and their virial radii are not affected by the cut-off scale, so their virialized regions contain only the material that initially was quite close to the peak identified with the smoothing scale corresponding to the mass.
Acknowledgements. This work was supported in part by the Polish State Committee for Scientific Research grant No. 2P03D00815.
# Dark Energy and the CMB
## Abstract
We find that current Cosmic Microwave Background (CMB) anisotropy data strongly constrain the mean spatial curvature of the Universe to be near zero, or, equivalently, the total energy density to be near critical—as predicted by inflation. This result is robust to editing of data sets, and variation of other cosmological parameters (totaling seven, including a cosmological constant). Other lines of argument indicate that the energy density of non-relativistic matter is much less than critical. Together, these results are evidence, independent of supernovae data, for dark energy in the Universe.
Introduction. Cosmologists have long realized that there is more to the Universe than meets the eye. A wide variety of evidence points to the existence of dark matter in the Universe, matter which cannot be seen, but which can be indirectly detected by its contribution to the gravitational field. As observations have improved, the phenomenology of the “dark sector” has become richer. While dark matter was originally posited to explain what would otherwise be excessively attractive gravity, dark energy explains the accelerating expansion—an apparently repulsive gravitational effect. The most well-known argument for this additional dark component is based on inferences of the luminosity distances to high–$`z`$ supernovae. The anomalously large distances indicate that the Universe was expanding more slowly in the past than it is now; i.e., the expansion rate is accelerating. Acceleration only occurs if the bulk pressure is negative, and this could only be due to a previously undetected component.
Here we argue for dark energy based on another gravitational effect: its influence on the mean spatial curvature. This argument does not rely on the supernovae observations and therefore avoids the systematic uncertainties in the inferred luminosity distances. It is based on a lower limit to the total density, and a smaller upper limit on the density of non-relativistic matter. The lower limit comes from measurements of the anisotropy of the cosmic microwave background (CMB) whose statistical properties depend on the mean spatial curvature, which in turn depends on the mean total density. We find that the CMB strongly indicates that $`\mathrm{\Omega }>0.4`$, where $`\mathrm{\Omega }`$ is the ratio of the total mean density to the critical density (that for which the mean curvature would be zero). Upper limits to the density of non-relativistic matter come from a variety of sources which quite firmly indicate $`\mathrm{\Omega }_m<0.4`$.
The CMB sensitivity to curvature is due to the dependence on curvature of the angular extent of objects of known size, at known redshifts. CMB photons that are penetrating our galaxy today were emitted from a thin shell at a redshift of $`z\simeq 1100`$ (called the “last-scattering surface”) during the transition from an ionized plasma to a neutral medium. The “object” of known size at known redshift is the sound horizon of the plasma at the epoch of last scattering. Its observational signature is the location of a series of peaks in the angular power spectrum of the CMB.
One must be careful about using current CMB data to determine $`\mathrm{\Omega }`$ or any other cosmological parameters for several reasons. First, these are very difficult experiments, and the data sets they produce have low signal-to-noise ratios and limited frequency ranges, complicating the detection of systematic errors. Use of different calibration standards further increases the risk of underestimated systematic error. To counter these problems, we examine the robustness of our results to editing of data sets, and check that the distribution of model residuals is consistent with the stated measurement uncertainties.
Second, the CMB angular power spectra depend on a number of parameters other than the curvature. To some degree, a change in curvature can be mimicked by changes in other parameters. We therefore vary six parameters besides the curvature, placing mild prior constraints on some of these so as not to explore unrealistic regions of the parameter space.
Finally, existing data are insufficient to firmly establish the paradigm for structure formation which we have assumed: structure grew via gravitational instability from primordial adiabatic perturbations. Our conclusions depend on this assumption. At present, this counts as a possible source of systematic error. Fortunately, future CMB data will verify (or refute) the paradigm and will also allow for the determination of $`\mathrm{\Omega }`$ with greatly reduced model dependence.
The data. Present data are already so abundant that they must be compressed before they can serve as the basis for a multi-dimensional parameter search. Fortunately, all data sets have been compressed to constraints on the angular power spectrum, $`C_l\equiv 2\pi \int C(\theta )P_l(\mathrm{cos}\theta )\,d(\mathrm{cos}\theta )`$, where $`C(\theta )`$ is the correlation function. Because of the tremendous reduction in the size of the data sets, this data compression is called “radical compression” .
Here we use the radically compressed data from http://www.cita.utoronto.ca/~knox/radical.html. In this compilation the non-Gaussianity of the power spectrum uncertainties has been characterized for a number of experiments carrying a lot of the weight (including all those plotted with large symbols in Fig. 1); assuming Gaussianity leads to biases.
The Search Method. We search over a seven-dimensional parameter space specified by $`\mathrm{\Omega }`$, $`\mathrm{\Omega }_bh^2`$, $`\mathrm{\Omega }_{\mathrm{cdm}}h^2`$, $`\mathrm{\Omega }_\mathrm{\Lambda }h^2`$, $`\tau `$, $`n_s`$ and $`C_{10}`$, where $`\mathrm{\Omega }_i=\rho _i/\rho _c`$ and $`i=b,\mathrm{cdm},\mathrm{\Lambda }`$ is for baryons, cold dark matter and a cosmological constant respectively, $`\rho _c\equiv 3H_0^2/(8\pi G)`$ is the critical density, $`\tau `$ is the optical depth to Thomson scattering, $`n_s`$ is the power-law index of the primordial matter power spectrum, and $`C_{10}`$ serves as the normalization parameter. The Hubble constant, $`H_0\equiv 100h\mathrm{km}\mathrm{sec}^{-1}\mathrm{Mpc}^{-1}`$, is a dependent variable in this space, due to the sum rule: $`\mathrm{\Omega }_\mathrm{\Lambda }+\mathrm{\Omega }_b+\mathrm{\Omega }_{\mathrm{cdm}}=\mathrm{\Omega }`$. Note that, for specificity and simplicity, we have chosen the dark energy to be a cosmological constant; other choices (e.g., quintessence) would not significantly affect our curvature constraints.
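Because the densities enter as physical densities $`\mathrm{\Omega }_ih^2`$, the sum rule fixes $`h`$ once those and $`\mathrm{\Omega }`$ are specified. A minimal sketch (the physical-density values below are illustrative, chosen so that $`h`$ comes out near 0.65, and are not taken from the fits):

```python
import math

def hubble(omega_b_h2, omega_cdm_h2, omega_lambda_h2, omega_total):
    """Sum rule Omega_b + Omega_cdm + Omega_Lambda = Omega_total implies
    h^2 = (sum of the physical densities Omega_i h^2) / Omega_total."""
    return math.sqrt((omega_b_h2 + omega_cdm_h2 + omega_lambda_h2) / omega_total)

# illustrative values for a flat (Omega = 1) model
print(hubble(0.019, 0.112, 0.2915, 1.0))   # about 0.65
```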
For each value of $`\mathrm{\Omega }`$ we vary the 23 other parameters (six cosmological and 17 calibration—one for each experiment) to find the minimum value of $`\chi ^2=\chi _d^2+\chi _p^2`$. Here $`\chi _d^2`$ is the offset log-normal form explicitly given in Eq. 39-43 of , which was shown to be a good approximation to the log of the likelihood function. Information from non-CMB observations is included as a prior contribution, $`\chi _p^2`$. Unless otherwise stated, we assume that $`h=0.65\pm 0.1`$ (a reasonable interpretation of several measurements) and $`\mathrm{\Omega }_bh^2=0.019\pm 0.003`$ (from but with a 40% increase in their uncertainty). We use the Levenberg-Marquardt method to find the minimum value of $`\chi ^2`$ for each value of $`\mathrm{\Omega }`$. We stop the hunt when the new $`\chi ^2`$ is within $`0.1`$ of the old value. We tested this method on simulated data and recovered the correct results.
The likelihood of the best-fit model, $`(\mathrm{\Omega })`$ is proportional to $`\mathrm{exp}(\chi ^2/2)`$. Ideally we would marginalize over the non-$`\mathrm{\Omega }`$ parameters rather than maximizing over them. However, we note that in the limit that the likelihood is Gaussian, these two procedures are equivalent. More generally, in order for marginalization to give qualitatively different answers there would have to be, with decreasing $`\mathrm{\Omega }`$, a very rapid increase in the volume of parameter space in the non-$`\mathrm{\Omega }`$ direction with $`\chi ^2`$’s comparable to the minimum $`\chi ^2`$. Inspection of the Fisher matrix leads us to believe this is not the case.
The Results. Our main results are shown in Fig. 2: the relative likelihood ($`\mathrm{exp}(\chi ^2/2)`$) of the different values of $`\mathrm{\Omega }`$. Including all the data, the best-fit (minimum $`\chi ^2`$) $`\mathrm{\Omega }=1`$ model is $`2\times 10^7`$ times more probable than the best-fit $`\mathrm{\Omega }=0.4`$ model. $`\mathrm{\Omega }<0.7`$ is ruled out at the $`95\%`$ confidence level.
To test the robustness of this result, we edited out single data sets suspected of providing the most weight. Most of these editings produced little change. Only the omission of TOCO changes things substantially, and even then, the best-fit $`\mathrm{\Omega }=1`$ model is $`150`$ times more probable than the best-fit $`\mathrm{\Omega }=0.4`$ model. We also edited pairs of data sets: for no CAT and TOCO, no MSAM and CAT, and no MSAM and TOCO, we find $`\mathrm{\Omega }=1`$ to be $`120`$, $`2.5\times 10^6`$ and $`8`$ times more likely than $`\mathrm{\Omega }=0.4`$. Also shown, as measures of goodness-of-fit, are $`\chi ^2`$ and the degrees of freedom. The $`\chi ^2`$ value for the “All” case is a bit high, but one expects even higher ones over $`8\%`$ of the time, so there is no strong evidence for inconsistencies in the data. As further indication of the robustness of the result, one can see from the “TOCO” panel of Fig. 2 that it persists even when all but a single data set is removed.
For the “All” case, the best-fit $`\mathrm{\Omega }=1`$ model has $`\mathrm{\Omega }_bh^2=0.019`$, $`h=0.65`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.69`$, $`\tau =0.17`$ and $`n=1.12`$ and is plotted in Fig. 1. There are degeneracies among these parameters though and none of them is strongly constrained on its own. For example, an equivalently good fit (to just the CMB data) is given by the following model with no tilt or reionization: $`\mathrm{\Omega }=1`$, $`\mathrm{\Omega }_bh^2=0.021`$, $`h=0.65`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.65`$, $`\tau =0`$ and $`n=1`$.
We also covered the $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ plane, at each point finding the minimum $`\chi ^2`$ possible with variation of the remaining 5 parameters. Figure 3 shows the $`\mathrm{\Delta }\chi ^2=`$ 1, 4 and 9 contours in this plane (which, for a Gaussian, correspond to the 40%, 87% and 99% confidence regions, respectively).
Discussion. Figure 3 also shows constraints on $`\mathrm{\Omega }_m`$ from clusters. Although constraints on $`\mathrm{\Omega }_m`$ arise from a variety of techniques (for reviews see ) perhaps the most reliable are those based on the determination of the ratio of baryonic matter to dark matter in clusters of galaxies . With the assumption that the cluster ratio is the mean ratio (reasonable due to the large size of the clusters), and the baryonic mean density from nucleosynthesis, one can constrain the range of allowable values of $`\mathrm{\Omega }_m`$. Since only the baryonic intracluster gas is detected, the upper limits on $`\mathrm{\Omega }_m`$ from this method are better understood than the lower limits. Mohr et al. find, from a sample of 27 X-ray clusters, that (including corrections for clumping and depletion of the gas) $`\mathrm{\Omega }_m<(0.32\pm 0.03)/\sqrt{h/0.65}`$. Including the Hubble constant uncertainty ($`h=0.65\pm 0.1`$) this becomes $`\mathrm{\Omega }_m<0.32\pm 0.05`$. Assuming 10% of the baryons to be in galaxies as opposed to the gas, as estimated by , we find $`\mathrm{\Omega }_m=0.29\pm 0.05`$. Results from observations of the Sunyaev-Zeldovich effect in clusters are consistent, though less restrictive: $`\mathrm{\Omega }_m=0.31\pm 0.1`$ . Most other methods (those that do not rely on the cluster baryon fraction) generally result in formally stronger upper limits to $`\mathrm{\Omega }_m`$. This increases our confidence in the Mohr et al. $`\mathrm{\Omega }_m`$ upper limit, but we do not quote these stronger constraints due to our concerns that they are affected by systematic uncertainties that are more difficult to quantify than those in the baryon fraction method.
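The size of the quoted uncertainty can be reproduced with a rough error-propagation sketch (linearizing the $`1/\sqrt{h}`$ dependence; this is an illustration, not the paper's own procedure, and gives a slightly smaller error than the quoted $`\pm 0.05`$):

```python
import math

# Omega_m < (0.32 +/- 0.03) / sqrt(h/0.65), with h = 0.65 +/- 0.1
f, sig_f = 0.32, 0.03       # cluster baryon-fraction limit and its error
h, sig_h = 0.65, 0.10

omega_m = f / math.sqrt(h / 0.65)
# linear propagation: add fractional errors in quadrature, the h term
# carrying a factor 1/2 from the square root
sigma = omega_m * math.sqrt((sig_f / f)**2 + (0.5 * sig_h / h)**2)
print(omega_m, sigma)       # about 0.32 +/- 0.04
```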
There have been a number of other analyses of CMB anisotropy data which generally obtained weaker constraints on $`\mathrm{\Omega }`$. There are technical differences between our work and previous work: we account for the non-Gaussianity of the likelihood function, allow for calibration uncertainties, place “sanity” priors on the Hubble constant and the baryon density, and vary six parameters in addition to the curvature. Also, much of the strength of our argument comes from data reported within the last year.
The verdict from the CMB is now in. It does not depend on any one, or even any two, experiments. It clearly points towards a flat Universe and, together with cluster data, strongly indicates the existence of dark energy. These conclusions are consistent with, and independent of, the supernovae results. The completely different sets of systematic uncertainties in the two arguments further strengthen the case. Other constraints in the $`\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda }`$ plane were recently obtained by combining cosmic flow data with supernovae observations.
We have neglected several data sets, all of which, if included, would only strengthen our conclusions. Two of these are PythonV and Viper. PythonV and Viper together trace out a peak with centroid near $`l=200`$, and a significant drop in power by $`l=400`$. They have not been included because of the strong correlations in the existing reductions of the data; a new reduction with all correlations specified will soon be available for PythonV.
Any model without a drop in power from $`l=200`$ to $`l=400`$ has difficulties agreeing with all the data. Models fitting this description include the adiabatic models considered here with $`\mathrm{\Omega }<0.4`$ and also topological defect models, whose breadth is a consequence of the loss of the coherent peak structure .
We have been concentrating on implications of the peak location, but the height is also of interest. With fixed $`h`$, it is additional evidence for low $`\mathrm{\Omega }_m`$. The lower $`\mathrm{\Omega }_mh^2`$, the later the transition from a radiation-dominated Universe to a matter-dominated Universe and the larger the early ISW effect, which contributes in the region of the first peak . For flat models, the best fit is at $`\mathrm{\Omega }_m=0.4`$ with $`\mathrm{\Omega }_m=1`$ four times less likely.
Conclusions. We have shown that $`\mathrm{\Omega }=1`$ is strongly favored over $`\mathrm{\Omega }=0.4`$. This result is interesting for two reasons. First, $`\mathrm{\Omega }=1`$ is a prediction of the simplest models of inflation. Second, together with the constraint $`\mathrm{\Omega }_m<0.4`$, it is evidence for dark energy.
The CMB can say little about the nature of the dark energy. A cosmological constant fits the current data, but then so would many of the other forms of dark energy proposed over the past few years. Generation and exploration of new theoretical ideas as to the nature of this dark energy is clearly warranted.
Measurements of CMB anisotropy have already delivered on their promise to provide new clues towards an improved understanding of cosmological structure formation and fundamental physics. We look forward to greater clarification of the dark energy problem, as well as possibly new surprises, from improved CMB anisotropy measurements in the near future.
###### Acknowledgements.
We are grateful to J. Mohr for useful conversations and A. Jaffe for supplyings us with the SN data. We used CMBFAST many, many times. SD is supported by the DOE and by NASA Grant NAG 5-7092. LK is supported by the DOE, NASA grant NAG5-7986 and NSF grant OPP-8920223.
# SEMICLASSICAL THEORY OF h/e AHARONOV-BOHM OSCILLATION IN BALLISTIC REGIMES
## 1 Introduction
Electron transport through ballistic quantum billiards is an exceedingly rich experimental system, bearing the quantum signature of chaos. One of the interesting results that have emerged concerns the magneto-transport of doubly connected ballistic billiards, i.e., Aharonov-Bohm (AB) billiards. We have calculated the $`average`$ conductance for these systems and shown that the self-averaging effect causes the $`h/2e`$ Altshuler-Aronov-Spivak (AAS) oscillation, which is ascribed to interference between time-reversed coherent back-scattered classical trajectories. Moreover, we have shown that the AAS oscillation in these systems becomes an experimental probe of quantum chaos. Another interesting phenomenon in these systems is the $`h/e`$ AB oscillation of the $`nonaveraged`$ conductance. The results of numerical calculations indicated that the period of the energy-averaged conductance changes from $`h/2e`$ to $`h/e`$ when the range of the energy average $`\mathrm{\Delta }E`$ is decreased. However, little is known about the effect of chaos on the $`h/e`$ AB oscillation in AB billiards. In this paper, we shall calculate the correlation function $`C(\mathrm{\Delta }\varphi )`$ of the $`nonaveraged`$ conductance by using semiclassical theory and show that $`C(\mathrm{\Delta }\varphi )`$ is qualitatively different between chaotic and regular AB billiards.
## 2 Semiclassical Theory
In the following, we shall derive $`C(\mathrm{\Delta }\varphi )`$ separately for chaotic and regular AB billiards in which a uniform normal magnetic field $`B`$ (AB flux) penetrates only through the hollow. The transmission amplitude from a mode $`m`$ on the left to a mode $`n`$ on the right for electrons at the Fermi energy is given by
$$t_{n,m}=-i\hbar \sqrt{\upsilon _n\upsilon _m}\int dy\int dy^{\prime }\,\psi _n^{*}(y^{\prime })\psi _m(y)G(y^{\prime },y,E_F),$$
(1)
where $`\upsilon _m(\upsilon _n)`$ and $`\psi _m(\psi _n)`$ are the longitudinal velocity and transverse wave function for the mode $`m`$ ($`n`$) at a pair of lead wires attached to the billiards. In eq. (1), $`G`$ is the retarded Green’s function. In order to carry out the semiclassical approximation, we replace $`G`$ by the semiclassical Green function,
$$G^{sc}(y^{\prime },y,E)=\frac{2\pi }{(2\pi i\hbar )^{3/2}}\sum _{s(y,y^{\prime })}\sqrt{D_s}\,\mathrm{exp}\left[\frac{i}{\hbar }S_s(y^{\prime },y,E)-i\frac{\pi }{2}\mu _s\right]$$
(2)
where $`S_s`$ is the action integral along a classical path $`s`$, the pre-exponential factor is
$$D_s=\frac{m_e}{\upsilon _F\mathrm{cos}\theta ^{\prime }}\left|\left(\frac{\partial \theta }{\partial y^{\prime }}\right)_y\right|$$
(3)
with $`\theta `$ and $`\theta ^{\prime }`$ the incoming and outgoing angles, respectively, and $`\mu `$ the Maslov index. Substituting eq. (2) into eq. (1) and carrying out the double integrals by the saddle-point approximation, we obtain
$$t_{n,m}=-\frac{\sqrt{2\pi i\hbar }}{2W}\sum _{s(\overline{n},\overline{m})}\mathrm{sgn}(\overline{n})\,\mathrm{sgn}(\overline{m})\sqrt{\stackrel{~}{D}_s}\,\mathrm{exp}\left[\frac{i}{\hbar }\stackrel{~}{S}_s(\overline{n},\overline{m};E)-i\frac{\pi }{2}\stackrel{~}{\mu }_s\right],$$
(4)
where $`W`$ is the width of the hard-wall leads and $`\overline{m}=\pm m`$. In eq. (4), $`\stackrel{~}{S}_s(\overline{n},\overline{m};E)=S_s(y_0^{\prime },y_0;E)+\hbar \pi (\overline{m}y_0-\overline{n}y_0^{\prime })/W`$, $`\stackrel{~}{D}_s=(m_e\upsilon _F\mathrm{cos}\theta ^{\prime })^{-1}\left|(\partial y/\partial \theta ^{\prime })_\theta \right|`$ and $`\stackrel{~}{\mu }_s=\mu _s+H\left(-(\partial \theta /\partial y)_{y^{\prime }}\right)+H\left(-(\partial \theta ^{\prime }/\partial y^{\prime })_\theta \right),`$ respectively, where $`\theta =\mathrm{sin}^{-1}(\overline{n}\pi /kW)`$ and $`H`$ is the Heaviside step function.
The fluctuations of the conductance $`g=(e^2/\pi \hbar )T(k)=(e^2/\pi \hbar )\sum _{n,m}\left|t_{n,m}\right|^2`$ are defined by their deviation from the classical value; in the absence of any symmetries,
$$\delta g\equiv g-g_{cl}.$$
(5)
In this equation $`g_{cl}=(e^2/\pi \hbar )T_{cl}`$, where $`T_{cl}`$ is the classical total transmitted intensity. In order to characterize the $`h/e`$ AB oscillation, we define the correlation function of the oscillation in magnetic field $`B`$ by the average over $`B`$,
$$C(\mathrm{\Delta }B)\equiv \left\langle \delta g(B)\,\delta g(B+\mathrm{\Delta }B)\right\rangle _B.$$
(6)
With use of the ergodic hypothesis, $`B`$ averaging can be replaced by the $`k`$ averaging, i.e.,
$$C(\mathrm{\Delta }B)=\left\langle \delta g(k,B)\,\delta g(k,B+\mathrm{\Delta }B)\right\rangle _k.$$
(7)
The semiclassical correlation function of transmission coefficients is given by
$$C(\mathrm{\Delta }\varphi )=\left(\frac{e^2}{\pi \hbar }\right)^2\frac{1}{8}\left(\frac{\mathrm{cosh}\delta -1}{\mathrm{sinh}\delta }\right)^2\mathrm{cos}\left(2\pi \frac{\mathrm{\Delta }\varphi }{\varphi _0}\right)\left\{1+2\sum _{n=1}^{\infty }e^{-\delta n}\mathrm{cos}\left(2\pi n\frac{\mathrm{\Delta }\varphi }{\varphi _0}\right)\right\}^2,$$
(8)
where $`\delta =\sqrt{2T_0\gamma /\alpha }`$. In deriving eq. (8) we have used the exponential dwelling time distribution, $`N(T)\propto \mathrm{exp}(-\gamma T)`$, and the Gaussian winding number distribution for fixed $`T`$, i.e.,
$$P(w;T)=\sqrt{\frac{T_0}{2\pi \alpha T}}\mathrm{exp}\left(-\frac{w^2T_0}{2\alpha T}\right),$$
(9)
where $`T_0`$ and $`\alpha `$ are the system-dependent constants corresponding to the dwelling time for the shortest classical winding trajectory and the variance of the distribution of $`w`$, respectively.
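The harmonic sum appearing in the chaotic-case correlation function is a geometric series in $`e^{-\delta }`$. As a numerical sketch (a check, not part of the original derivation), one can verify that it resums to the closed form $`\mathrm{sinh}\,\delta /[\mathrm{cosh}\,\delta -\mathrm{cos}(2\pi \mathrm{\Delta }\varphi /\varphi _0)]`$, which makes the $`e^{-\delta n}`$ suppression of the higher harmonics explicit:

```python
import math

def harmonic_sum(dphi_over_phi0, delta, nmax=200):
    """S = 1 + 2 * sum_{n>=1} exp(-delta*n) * cos(2*pi*n*dphi/phi0),
    truncated at nmax (the tail is exponentially small)."""
    theta = 2.0 * math.pi * dphi_over_phi0
    return 1.0 + 2.0 * sum(math.exp(-delta * n) * math.cos(n * theta)
                           for n in range(1, nmax + 1))

def closed_form(dphi_over_phi0, delta):
    """Resummed geometric series: sinh(delta)/(cosh(delta) - cos(theta))."""
    theta = 2.0 * math.pi * dphi_over_phi0
    return math.sinh(delta) / (math.cosh(delta) - math.cos(theta))

# the truncated series and the closed form agree to machine precision
for x in (0.0, 0.1, 0.25, 0.5):
    assert abs(harmonic_sum(x, 1.0) - closed_form(x, 1.0)) < 1e-12
# successive harmonics carry weights exp(-delta*n), so for chaotic
# billiards the n = 1 (h/e) component dominates the oscillation
```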
On the other hand, for the regular cases, we use $`N(T)\propto T^{-\beta }`$. Assuming as well the Gaussian distribution of $`P(w;T)`$, we get
$`C(\mathrm{\Delta }\varphi )`$ $`=`$ $`C(0)\mathrm{cos}\left(2\pi {\displaystyle \frac{\mathrm{\Delta }\varphi }{\varphi _0}}\right)`$ (10)
$`\times `$ $`\left\{{\displaystyle \frac{1+2{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}F(\beta -{\displaystyle \frac{1}{2}},\beta +{\displaystyle \frac{1}{2}};-{\displaystyle \frac{n^2}{2\alpha }})\mathrm{cos}\left(2\pi n{\displaystyle \frac{\mathrm{\Delta }\varphi }{\varphi _0}}\right)}{1+2{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}F(\beta -{\displaystyle \frac{1}{2}},\beta +{\displaystyle \frac{1}{2}};-{\displaystyle \frac{n^2}{2\alpha }})}}\right\}^2,`$
where $`F`$ is the confluent hypergeometric function.
Next we examine in detail the difference in $`C(\mathrm{\Delta }\varphi )`$ between chaotic and regular AB billiards. In the chaotic AB billiard, the main contribution to the AB oscillation comes from the $`n=1`$ component. On the other hand, for regular cases, the amplitude of the AB oscillation decays algebraically, i.e., $`F\propto n^{-(2\beta -1)}`$ for large $`n`$. This behavior is caused by the power-law dwelling time distribution, i.e., $`N(T)\propto T^{-\beta }`$. Thus, in contrast to the chaotic cases, we can expect that the contribution of considerably higher harmonics causes a noticeable deviation of $`C(\mathrm{\Delta }\varphi )`$ from a cosine function. Therefore, the difference in $`C(\mathrm{\Delta }\varphi )`$ between these ballistic AB billiards can be attributed to the difference between chaotic and regular classical scattering dynamics.
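The contrast between the two cases can be made quantitative. The sketch below evaluates the regular-case harmonic weight $`F(\beta -1/2,\beta +1/2;-n^2/2\alpha )`$ through the Kummer transformation $`M(a,b;-x)=e^{-x}M(b-a,b;x)`$ (here $`b-a=1`$, so the transformed series has only positive terms) and compares its decay with the exponential chaotic weight $`e^{-\delta n}`$; the parameter values $`\beta =1.5`$, $`\alpha =1`$, $`\delta =1`$ are arbitrary illustrations:

```python
import numpy as np

def kummer_M1(b, x, tol=1e-15):
    """M(1, b; x) = sum_k x^k / (b)_k; all terms positive for x > 0."""
    term, s, k = 1.0, 1.0, 0
    while term > tol * s:
        term *= x / (b + k)
        s += term
        k += 1
    return s

def harmonic_regular(n, beta, alpha):
    """Weight of the n-th harmonic for a regular billiard:
    F(beta-1/2, beta+1/2; -n^2/(2*alpha)), via the Kummer transformation
    M(a, b; -x) = exp(-x) M(b-a, b; x) with b-a = 1."""
    x = n * n / (2.0 * alpha)
    return np.exp(-x) * kummer_M1(beta + 0.5, x)

# exponential (chaotic) vs power-law (regular) suppression of harmonics
beta, alpha, delta = 1.5, 1.0, 1.0
reg = [harmonic_regular(n, beta, alpha) for n in (1, 4, 8)]
cha = [np.exp(-delta * n) for n in (1, 4, 8)]
print(reg[2] / reg[0], cha[2] / cha[0])  # regular harmonics decay far more slowly
```

For $`\beta =1.5`$ the weight reduces to the closed form $`(1-e^{-x})/x`$, which is a convenient check of the series.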
## 3 Summary
We have investigated magneto-transport in single ballistic billiards whose structure forms an AB geometry, by use of semiclassical methods with particular emphasis on the derivation of the semiclassical formulas. The existence of the AB oscillation of the $`nonaveraged`$ magneto-conductance is predicted for single chaotic and regular AB billiards. Furthermore, we find that the difference in classical dynamics leads to qualitatively different behaviors of the correlation function. The AB oscillation in the ballistic regime will provide a new experimental testing ground for exploring quantum chaos.
## Acknowledgments
I would like to acknowledge K. Nakamura and Y. Takane for valuable discussions and comments.
no-problem/9909/cond-mat9909332.html
# Magnetization-controlled spin transport in DyAs/GaAs layers
## Abstract
Electrical transport properties of DyAs epitaxial layers grown on GaAs have been investigated at various temperatures and magnetic fields up to $`12T`$. The measured longitudinal resistances show two distinct peaks at fields around $`0.2`$ and $`2.5T`$ which are believed to be related to the strong spin-disorder scattering occurring at the phase transition boundaries induced by external magnetic field. An empirical magnetic phase diagram is deduced from the temperature dependent experiment, and the anisotropic transport properties are also presented for various magnetic field directions with respect to the current flow.
preprint: HEP/123-qed
With the development of molecular beam epitaxy (MBE) techniques, rare-earth monoarsenides (RE-As) can now be grown on GaAs substrate with high quality . Such magnetic semimetals host a number of interesting electronic and magnetic phases mostly derived from the rare-earth elements. Integration of magnetic semimetal with semiconductor, e.g. GaAs, yields a new type of low-dimensional quantum structures where both charge and spin transport are of interest. The magnetotransport properties of these compounds, however, have been relatively unexplored with the exception of GaAs/ErAs/GaAs , for which the properties of carrier, magnetic phases, and electronic band structure have been studied in some detail. Here we report the results for magnetization-controlled spin transport in DyAs thin layers measured at low-temperatures and high magnetic fields.
Samples were grown by MBE on semi-insulating GaAs $`(001)`$ substrates . The DyAs layer was grown at $`500^{}C`$ on top of a $`200nm`$ undoped GaAs buffer layer, and an undoped GaAs cap layer of about $`20nm`$ was subsequently grown on top of the DyAs. Three samples having DyAs layer thicknesses of $`70`$, $`270`$, and $`600nm`$ have been characterized. The structural quality of the DyAs layers is comparable with that of the GaAs/ErAs/GaAs grown by MBE . For the electrical transport measurements we used a Hall bar geometry (width $`500\mu m`$) defined by standard optical lithography. Indium contacts were alloyed at $`400^{}C`$ to provide electrical connections to both the electrons and the holes in the DyAs. The transport experiment was performed from room temperature down to $`0.4K`$ in a $`{}_{}{}^{3}He`$ refrigerator, and with a magnetic field $`H`$ up to $`12T`$; a standard low-frequency $`(<20Hz)`$ lock-in technique was employed for measurements of the magnetotransport coefficients. To study the anisotropic properties, the magnetic field was oriented either perpendicular to the plane of the DyAs, or in the plane with a varying direction relative to the current flow $`J`$.
Magnetoresistance characteristics are qualitatively similar for all three samples. In the following, we concentrate on the results from the sample with a DyAs layer thickness of $`270nm`$. In Fig.1, we present the longitudinal magnetoresistances, $`R_{xx}`$, measured at four typical magnetic field orientations at $`0.4K`$. Except for the case where the magnetic field is parallel to the current flow, a positive background is seen in the magnetoresistance. In each of the $`R_{xx}`$ traces, intrinsic signals from the magnetization manifest themselves as two distinct peaks, seen here at $`0.2T`$ and $`2.5T`$, respectively.
We note that the $`R_{xx}`$ trace, obtained here in a field sweep from the negative to the positive direction (from left to right on the $`H`$ axis), is strongly asymmetric about $`H=0`$. We have reversed the field sweep direction and found that the curve is an exact mirror image of the previous trace. Such observations are unambiguous evidence for magnetization-controlled electronic transport in the DyAs layer.
Based on the discussion of the results in ErAs , we relate both peaks to the strong spin-disorder scattering occurring in magnetic-phase-transition regimes induced by the external magnetic field. It should be mentioned that in GaAs/ErAs/GaAs only a single magnetoresistivity peak has been observed, at $`1T`$, and ascribed to the antiferromagnet-to-paramagnet transition . We tentatively attribute the peak observed around $`2.5T`$ in DyAs to a transition of similar nature. The origin of the anomaly at about $`0.2T`$ in our DyAs samples is so far unresolved, but the resistance peak could be an indication of a transition between two different configurations within the antiferromagnetic phase. The possibility of multiple configurations in the antiferromagnetic phase is consistent with the results from the temperature-dependent magnetization experiment, where two inflections showed up in the DyAs magnetization curves at temperatures around $`6`$ and $`8K`$, respectively . The sharp peak and the associated $`R_{xx}`$ change induced by a weak magnetic field of $`0.2T`$ are, to some extent, similar to the giant magnetoresistance observed in metal superlattices and granular materials .
In contrast with the largely isotropic magnetization properties of DyAs reported in , the electrical transport properties are essentially anisotropic: $`R_{xx}`$ is very sensitive to the field-current configuration, i.e., the peak amplitudes and their positions (especially for the peak around $`2.5T`$) vary with the field orientation. Furthermore, while it is absent at other field orientations, an additional peak in $`R_{xx}`$ emerges at a higher magnetic field of $`9.5T`$ for the in-plane magnetic field oriented at about $`35^{}`$ with respect to the current flow. The strongly anisotropic magnetoresistance in our experiments further indicates the effects of crystal-field interactions at low temperature. Such strong effects were suggested by Child et al in their study of the magnetic properties of a variety of RE-V compounds by neutron diffraction . As in other RE-V compounds , the magnetically aligned sheets in DyAs are expected to be perpendicular to the $`(111)`$ direction. Magnetotransport experiments on DyAs grown on $`(111)`$ GaAs substrates, in addition to the present $`(001)`$ data, are thus needed to clarify the issue.
The Hall resistance $`R_{xy}`$ measured for the magnetic field perpendicular to the DyAs layer is extremely small, of the order of $`10^{-3}\mathrm{\Omega }`$. Moreover, the overall shape of the Hall resistance is similar to that of $`R_{xx}`$, which may be caused by mixing of the transport coefficients. The extremely small Hall resistance is due either to a high carrier concentration or to electron-hole compensation. The lack of information on carrier density and mobility prevents us from a quantitative analysis of the experimental results. In particular, we were unable to assess the relative contributions of the electrons and the holes to the transport data, an issue demanding further study.
As shown in Fig.2, the temperature dependence of the resistivity before magnetization $`(H=0)`$ displays a dip around $`8.5K`$ followed by a sharp increase at lower temperature, with a tendency to saturate below $`4.4K`$. At the transition point from antiferromagnetism to paramagnetism, a divergence of the temperature derivative of the resistivity is expected, as described in Reference . The Néel temperature, $`T_N`$, can then be inferred from the maximum of $`dR_{xx}(T)/dT`$. The estimated value $`T_N\simeq 8K`$ is consistent with the magnetization measurements .
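The extraction of $`T_N`$ from a transport trace can be sketched as follows (the logistic resistivity curve below is a purely illustrative toy, not the measured $`R_{xx}(T)`$):

```python
import numpy as np

# Toy resistivity curve mimicking Fig. 2: a sharp rise of R_xx below the
# ordering temperature, saturating at low T (illustrative data only).
T = np.linspace(2.0, 14.0, 601)
T_true = 8.0
R = 1.0 / (1.0 + np.exp((T - T_true) / 0.5))

# T_N is read off from the extremum of dR_xx/dT, as described in the text
dRdT = np.gradient(R, T)
T_N = T[np.argmax(np.abs(dRdT))]
print(T_N)  # close to 8 K for this toy curve
```

For noisy experimental data the derivative would of course be smoothed before locating its extremum.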
The $`R_{xx}`$ traces have been recorded at different temperatures; several typical curves are shown in Fig.3 for the $`H`$ field perpendicular to the plane and swept from the negative to the positive side. Again, the asymmetry of $`R_{xx}`$ around $`H=0`$, which shows a strong $`T`$ dependence and eventually disappears above $`8K`$, reflects the intrinsic magnetization in the sample. It can also be seen from Fig.3 that both peak positions shift towards lower magnetic field as the temperature increases. The amplitude of the first peak remains nearly unchanged up to $`6.5K`$ while that of the second peak decreases monotonically within this temperature range. At temperatures above $`6.5K`$, a drastic decrease of the first peak is detected; this peak disappears at $`8.5K`$, which coincides with the Néel temperature $`T_N\simeq 8K`$. At the same time, the second peak shifts rapidly to the low-field side and then disappears at about $`11K`$.
In order to summarize our observations, we assume that the valley between the two peaks represents the transition from one type of antiferromagnetism configuration (i.e., AFM I) to another (AFM II), and the second peak the transition point from the antiferromagnetism to paramagnetism (PM), to arrive at an empirical magnetic phase diagram, as sketched in the inset of Fig.3. This phase diagram shows schematically the magnetic phase transition boundaries as critical magnetic field against temperature.
The temperature dependence of the magnetoresistance shows different behavior for the in-plane magnetic field. Here we consider the case of an in-plane magnetic field parallel to the current flow (not shown). The overall tendency of the first peak is nearly the same as that in Fig.3; it also disappears around $`8K`$. However, in contrast with the monotonic shift to the low-field side (as shown in Fig.3), the second peak first shifts to higher field up to $`8.5K`$, then moves down to the low-field side, and eventually approaches zero field at $`16.5K`$; i.e., a higher temperature is required to convert the antiferromagnetism to paramagnetism. The experimental results from a series of field-current configurations lead us to conclude that the AFM I configuration of the antiferromagnetism is weakly anisotropic while the AFM II is strongly anisotropic. Such anisotropy is believed to be caused by a combination of the strong crystal field and the strain/dislocations produced in the sample, since there is as large as a two percent lattice mismatch between DyAs and GaAs. The details of the magnetization-controlled anisotropic transport will be published elsewhere.
In summary, we have studied the magnetotransport properties of epitaxial DyAs layers grown on GaAs. The longitudinal magnetoresistance data show that the electronic transport is controlled by the magnetization. The Néel temperature $`T_N`$ deduced from the temperature dependence of $`R_{xx}`$ is about $`8K`$, consistent with the magnetization results . We have observed two distinct peaks in the magnetoresistance which are attributed to strong spin-disorder scattering at the magnetic phase boundaries. Our data suggest that there exists more than one type of antiferromagnetic configuration in DyAs grown on $`(001)`$ GaAs, qualitatively different from ErAs, where only one peak has been observed. The strongly anisotropic transport properties further support the notion that the crystal-field interaction plays an important role in the magnetism of an epitaxial DyAs layer on GaAs.
We thank S. J. Allen, Jr. for discussions. This work is partially supported by NSF.
no-problem/9909/hep-lat9909055.html
# Large rescaling of the Higgs condensate: theoretical motivations and lattice results
## 1 Introduction
To understand the scale dependence of the ‘Higgs condensate’ $`\langle \mathrm{\Phi }\rangle `$ let us define the $`\lambda \mathrm{\Phi }^4`$ theory in the presence of a lattice spacing $`a\equiv 1/\mathrm{\Lambda }`$. The basic problem is the relation between the bare “lattice” condensate $`v_B(\mathrm{\Lambda })\equiv \langle \mathrm{\Phi }\rangle _{\mathrm{latt}}`$ and its renormalized physical value $`v_R\equiv \frac{v_B}{\sqrt{Z}}`$ in the continuum limit $`\mathrm{\Lambda }\to \mathrm{}`$.
In the presence of spontaneous symmetry breaking, there are two basically different definitions: $`Z\equiv Z_{\mathrm{prop}}`$ from the propagator of the bare shifted ‘Higgs’ field $`h_B(x)\equiv \mathrm{\Phi }_{\mathrm{latt}}(x)-v_B`$
$$G(p^2)\simeq \frac{Z_{\mathrm{prop}}}{p^2+M_h^2}$$
(1)
and $`Z\equiv Z_\phi `$ where $`Z_\phi `$ is the rescaling of $`v_B^2`$ needed in the effective potential $`V_{\mathrm{eff}}(\phi _B)`$
$$\chi ^{-1}=\frac{d^2V_{\mathrm{eff}}}{d\phi _B^2}|_{\phi _B=v_B}\equiv \frac{M_h^2}{Z_\phi }$$
(2)
to match the quadratic shape at its absolute minima with the Higgs mass. The usual assumption $`Z_\phi \simeq Z_{\mathrm{prop}}`$ is equivalent to requiring a smooth limit $`p\to 0`$. This is not necessarily true in the presence of Bose condensation phenomena where one can have a very large particle density at zero momentum that compensates for the vanishing strength $`\lambda \sim 1/\mathrm{ln}\mathrm{\Lambda }`$ of the elementary two-body processes. In this case, one can have trivially free fluctuations so that $`Z_{\mathrm{prop}}\simeq 1`$ and $`h_B(x)=h_R(x)=h(x)`$, but a non-trivial effective potential with a divergent $`Z_\phi \sim 1/\lambda \sim \mathrm{ln}\frac{\mathrm{\Lambda }}{M_h}`$ . Therefore, the bare ratio $`R_{\mathrm{bare}}=\frac{M_h^2}{v_B^2}\to 0`$ consistently with the rigorous results of Euclidean field theory but $`R_\phi =\frac{M_h^2}{v_R^2}`$ remains finite and cannot be used to constrain the magnitude of $`\mathrm{\Lambda }`$.
## 2 The lattice simulation
The one-component $`(\lambda \mathrm{\Phi }^4)_4`$ theory becomes in the Ising limit
$$S=\kappa \underset{x}{}\underset{\mu }{}\left[\varphi (x+\widehat{e}_\mu )\varphi (x)+\varphi (x-\widehat{e}_\mu )\varphi (x)\right]$$
(3)
with $`\mathrm{\Phi }(x)=\sqrt{2\kappa }\varphi (x)`$ and where $`\varphi (x)`$ takes only the values $`+1`$ or $`-1`$.
We performed Monte-Carlo simulations of this Ising action using the Swendsen-Wang cluster algorithm. Lattice observables include: the bare magnetization $`v_B=\langle |\mathrm{\Phi }|\rangle `$ (where $`\mathrm{\Phi }\equiv \sum _x\mathrm{\Phi }(x)/L^4`$ is the average field for each lattice configuration), the zero-momentum susceptibility $`\chi =L^4\left[\langle |\mathrm{\Phi }|^2\rangle -\langle |\mathrm{\Phi }|\rangle ^2\right]`$, and the shifted-field propagator
$$G(p)=\underset{x}{}\mathrm{exp}(ipx)\langle (\mathrm{\Phi }(x)-v_B)(\mathrm{\Phi }(0)-v_B)\rangle ,$$
(4)
where $`p_\mu =\frac{2\pi }{L}n_\mu `$ with $`n_\mu `$ being a vector with integer-valued components, not all zero.
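For illustration, the observables defined above can be assembled from an ensemble of configurations as in the following sketch (plain NumPy; the random $`\pm 1`$ configurations are a stand-in for actual Swendsen-Wang output, so the numbers carry no physics):

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_conf, kappa = 4, 200, 0.076

# toy ensemble of Ising configurations phi(x) = +/-1 on an L^4 lattice
phi = rng.choice([-1.0, 1.0], size=(n_conf, L, L, L, L))
Phi = np.sqrt(2.0 * kappa) * phi              # Phi(x) = sqrt(2 kappa) phi(x)

Phi_bar = Phi.mean(axis=(1, 2, 3, 4))         # average field of each configuration
v_B = np.mean(np.abs(Phi_bar))                # bare magnetization <|Phi|>
chi = L**4 * (np.mean(np.abs(Phi_bar)**2) - v_B**2)   # zero-momentum susceptibility
print(v_B, chi)
```

The susceptibility is $`L^4`$ times the variance of $`|\mathrm{\Phi }|`$, hence non-negative by construction.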
When approaching the continuum limit, one can compare the lattice data for $`G(p)`$ to the 2-parameter formula
$$G_{\mathrm{fit}}(p)=\frac{Z_{\mathrm{prop}}}{\widehat{p}^2+m_{\mathrm{latt}}^2}$$
(5)
where $`m_{\mathrm{latt}}`$ is the dimensionless lattice mass and $`\widehat{p}_\mu =2\mathrm{sin}\frac{p_\mu }{2}`$. Actually, if “triviality” is true, there must be a region of momenta where Eq.(5) gives a good description of the lattice data and can be used to define the mass. However, since the determination of the mass is a crucial issue, it is worthwhile comparing the results of this method with the determination in terms of “time-slice” variables . To this end, let us consider a lattice with spatial size $`L^3`$ and temporal extent $`L_t`$, and the two-point correlator $`C_1(t,0;𝐤)`$. Parameterizing the correlator $`C_1`$ in terms of the energy $`\omega _k`$, the mass can be determined through the lattice dispersion relation
$$m_{\mathrm{TS}}^2=2(\mathrm{cosh}\omega _k-1)-2\underset{\mu =1}{\overset{3}{}}(1-\mathrm{cos}k_\mu ).$$
(6)
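A minimal sketch of the mass determination through eq. (6); the consistency check simply inverts the relation at zero 3-momentum for an illustrative mass value:

```python
import numpy as np

def m_timeslice(omega_k, k):
    """Time-slice mass from the lattice dispersion relation, eq. (6):
    m_TS^2 = 2(cosh(omega_k) - 1) - 2 sum_mu (1 - cos(k_mu))."""
    m2 = 2.0 * (np.cosh(omega_k) - 1.0) - 2.0 * np.sum(1.0 - np.cos(np.asarray(k)))
    return np.sqrt(m2)

# consistency check at zero 3-momentum: inverting the relation gives
# omega_0 = arccosh(1 + m^2/2), which must return the input mass
m_latt = 0.2141
omega0 = np.arccosh(1.0 + 0.5 * m_latt**2)
print(m_timeslice(omega0, [0.0, 0.0, 0.0]))  # -> 0.2141 (up to rounding)
```

In practice $`\omega _k`$ is extracted from the exponential fall-off of the time-slice correlator before being inserted here.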
## 3 Numerical results: symmetric phase
As a check of our simulations we started our analysis at $`\kappa =0.0740`$ in the symmetric phase, where the high-statistics results by Montvay & Weisz are available.
Fig. 1 displays the data for the scalar propagator, suitably re-scaled in order to show the very good quality of the fit of Eq. (5). The 2-parameter fit gives $`m_{\mathrm{latt}}=0.2141(28)`$ and $`Z_{\mathrm{prop}}=0.9682(23)`$. The value at zero momentum is defined as $`Z_\phi \equiv m_{\mathrm{latt}}^2\chi =0.9702(91)`$. Notice the perfect agreement between $`Z_\phi `$ and $`Z_{\mathrm{prop}}`$. We also measured the time-slice mass, Eq. (6), at several values of the 3-momentum. Our results are in good agreement with the corresponding result of Montvay & Weisz and with the value $`m_{\mathrm{latt}}=0.2141(28)`$ obtained from the fit to the propagator data.
In conclusion, our analysis of the symmetric phase is in good agreement with Ref. and supports, to high accuracy, the usual identifications $`Z_\phi \simeq Z_{\mathrm{prop}}`$ and $`m_{\mathrm{latt}}\simeq m_{\mathrm{TS}}`$. Note that our result for $`Z_{\mathrm{prop}}`$ is in excellent agreement with the 1-loop renormalization group prediction $`Z_{\mathrm{pert}}=0.97(1)`$.
## 4 Numerical results: broken phase
We now choose for $`\kappa `$ three successive values, $`\kappa =0.076,0.07512,0.07504`$, lying just above the critical $`\kappa _c\simeq 0.0748`$ . Thus, we are in the broken phase and approaching the continuum limit where the correlation length $`\xi `$ becomes much larger than the lattice spacing. To be confident that finite-size effects are sufficiently under control, we used a lattice size, $`L^4`$, large enough so that $`L/\xi \gtrsim 5`$ . We checked that our results for the magnetization and the susceptibility at $`\kappa =0.076`$ are in excellent agreement with the corresponding results of Jansen et al . Typical data for the re-scaled propagator are reported in Fig. 2. Unlike Fig. 1, the fit to Eq. (5), though excellent at higher momenta, does not reproduce the lattice data down to zero momentum. Therefore, in the broken phase, a meaningful determination of $`Z_{\mathrm{prop}}`$ and $`m_{\mathrm{latt}}`$ requires excluding the lowest-momentum points from the fit. The fitted $`Z_{\mathrm{prop}}`$ is slightly less than one. This fact is attributable to residual interactions since we are not exactly at the continuum limit, so that the theory is not yet completely “trivial.” This explanation is reasonable since we see a tendency for $`Z_{\mathrm{prop}}`$ to approach unity as we get closer to the continuum limit. Moreover, we find good agreement between our result, $`Z_{\mathrm{prop}}=0.9321(44)`$, and the Lüscher-Weisz perturbative prediction $`Z_{\mathrm{pert}}=0.929(14)`$ at $`\kappa =0.0760`$. The comparison of $`Z_{\mathrm{prop}}=0.9566(13)`$ with $`Z_{\mathrm{pert}}=0.940(12)`$ at $`\kappa =0.07504`$ is also fairly good. The quantity $`Z_\phi `$ is obtained from the product $`m_{\mathrm{latt}}^2\chi `$ and is shown in Fig. 3. According to conventional ideas $`Z_\phi `$ should be the same as the wavefunction-renormalization constant, $`Z_{\mathrm{prop}}`$, but clearly it is significantly larger. Note that there was no such discrepancy in Fig. 1 for the symmetric phase.
Moreover, our data show that the discrepancy gets worse as we approach the critical $`\kappa `$. Indeed, Fig. 3 shows that $`Z_\phi `$ grows rapidly as one approaches the continuum limit (where $`m_{\mathrm{latt}}\to 0`$). Thus, the effect cannot be explained by residual perturbative $`𝒪(\lambda _\mathrm{R})`$ effects that might cause $`G(p)`$ to deviate from the form in Eq.(5); such effects die out in the continuum limit, according to “triviality.” The results accord well with the “two $`Z`$” picture in which, as we approach the continuum limit, we expect to see the zero-momentum point, $`Z_\phi \equiv m_{\mathrm{latt}}^2\chi `$, become higher and higher.
## 5 Conclusions
We have reported new numerical evidence that the re-scaling of the ‘Higgs condensate’, $`Z_\phi \equiv m_{\mathrm{latt}}^2\chi `$, is different from the conventional wavefunction renormalization $`Z\equiv Z_{\mathrm{prop}}`$. Perturbatively, such a difference might be explicable if it became smaller and smaller when taking the continuum limit $`\lambda _R\to 0`$. However, our lattice data show that the difference gets larger as one gets closer to the continuum limit, $`m_{\mathrm{latt}}\to 0`$. Our lattice data are consistent with the unconventional picture of “triviality” and spontaneous symmetry breaking in which $`Z_\phi `$ diverges logarithmically, while $`Z_{\mathrm{prop}}\to 1`$ in the continuum limit. In this picture the Higgs mass $`M_h`$ can remain finite in units of the Fermi-constant scale $`v_R`$, even though the ratio $`M_h/v_B\to 0`$. The Higgs mass is then a genuine collective effect and $`M_h^2`$ is not proportional to the renormalized self-interaction strength. If so, then the whole subject of Higgs mass limits is affected.
no-problem/9909/hep-ph9909460.html
# Transient topological objects in high energy collisions.
## Abstract
I study the topology of quantum fluctuations which take place at the earliest stage of high-energy processes. A new exact solution of Yang-Mills equations with fractional topological charge and carrying a single color is found.
11.15.Kc, 11.27.+d, 12.38.Aw, 12.38.Mh
1. The problem of initial data for ultra-relativistic heavy ion collisions has been a sore subject for more than a decade. The roots of the problem penetrate deeply into the least explored areas of QCD like the nature of the QCD vacuum and hadronic structure. The parton picture of a nucleus completely disregards the properties of the vacuum and only partially respects the hadron structure by replacing the true bounded QCD state with an artificial flux of free quarks and gluons. For heavy ion collisions, the evolution of parton distributions is very different from the evolution in the simplest processes like ep-DIS or high-energy pp-collisions. In a series of papers we have studied the causal character of the QCD evolution and found that high-energy processes explore all possible quantum fluctuations that may develop before the collision and are consistent with a given inclusive probe. All these fields propagate forward in time and collapse at the vertex of interaction with the probe. These fluctuations are the snapshots of nuclei frozen by the measurement, and they cannot be arbitrary since the emerging final-state configurations must be consistent with all interactions that are effective on the time-scale of the emission process. In other words, we have proved that the QGP as the final state can be created only in a single quantum transition, as an ensemble of collective modes of expanding matter. The scale of the entire evolution process appeared to be set by the physical properties of the final state.
Besides a fair treatment of the final state-interactions, the theory has to rely on a realistic picture of the initial state. It must allow one to treat nuclei as Lorentz-contracted finite-size objects. It has to account for those interactions that keep nuclei intact before the collisions and are responsible for the coherence of the nuclei wave function. It is commonly accepted that the effective interactions responsible for the existence of hadrons are due to topological fluctuations in the QCD vacuum, i.e., instantons. These exist only in Euclidean space, and only spherically symmetric field configurations are used in the theory that describe stable hadrons. They cannot be directly transferred to Minkowsky space.
The solutions of the QCD field equations with non-trivial topology, by their design, rely heavily on the identification of the directions in physical (geometric) space with the directions of the internal (tangent) color space, so that local rotations of the geometric coordinates can be compensated by gauge transformations in color space. It is impossible to match the $`SU(2)\times SU(2)\sim O(4)`$ local color group with the Lorentz group, $`SL(2,C)\sim O(1,3)`$, since the first is a compact Lie group while the second is not. Which of them should be given priority, and what properties have to be sacrificed in order to identify the angular coordinates of the internal color and external physical spaces? An empirical answer to this question is already known. One must use the Euclidean metric and construct the self-dual solutions of the Yang-Mills equations (instantons). Acting in this way, we indeed achieve remarkable success in the description of many properties of stable hadrons . These successes are not accidental and should be considered as important physical input. However, this theory is incapable of describing moving hadrons. Motion is possible only in the Minkowsky world (where no self-dual fields exist). Therefore, we may ask whether this commonly used prescription is sufficiently motivated physically. It is perfectly clear that were such a motivation to exist, it could only be due to the nature of the measurement process: as viewed from the Minkowsky world of moving stable hadrons, the Euclidean calculations provide an effective theory in the rest frame of a hadron. If a precise resolution of the color field coordinate takes place, then (from the moment of the measurement) the Euclidean picture is no longer valid .
2. In this note, I show that solutions of the Yang-Mills equations which interpolate between the Euclidean and Minkowsky worlds do exist. Such an interpolation becomes possible because the two regimes are separated by the light-cone of the point-like interaction. In the Euclidean domain (before the interaction) the transient topological object has finite action and a fractional winding number. The fields of this object evolve in Euclidean proper time $`\tau `$, and after collapse at the time $`\tau =0`$, they can be continued as waves propagating in Minkowsky space. <sup>*</sup><sup>*</sup>* According to English orthography, the suffix -on in the name of this object (should it deserve a name) seems unavoidable. I would suggest ephemeron (rather than transiton) in order not to create an image of a particle and to emphasize the ephemeral nature of this field configuration. Constructing the Euclidean part of the solution, I map the $`[SU(2)\times SU(2)]_{color}`$ onto $`[O(4)]_{space}`$ and require that the spin connections of the metric and the gauge field potential (both taken in the same group representation) be identical. Acting in this way, I give priority to the geometry of the gauge group. Moreover, I insist that, before the interaction has resolved the color field on a sufficiently short scale, the space-time metric is defined by the internal Euclidean geometry of the gauge group.
The only tool capable of coping with this view of the relationship between the internal color dynamics and the geometry is the so-called tetrad formalism (see, e.g.,). Indeed, the vector and spinor fields are essentially defined in the tangent space. In a tetrad basis, the components of any tensor (e.g., $`A^\alpha (x)`$, $`\gamma ^\alpha `$) become scalars with respect to general coordinate transformations and behave like Lorentz tensors under local Lorentz group transformations. The usual tensors are then given by the tetrad decomposition, $`A^\mu (x)=e_\alpha ^\mu (x)A^\alpha (x)`$, $`\gamma ^\mu (x)=e_\alpha ^\mu (x)\gamma ^\alpha `$, etc. The covariant derivative of a tetrad vector includes two connections (gauge fields). One of them, $`\mathrm{\Gamma }_{\mu \nu }^\lambda (x)`$, is the gauge field which provides covariance with respect to general transformations of the coordinates. The second gauge field, the spin connection $`\omega _\mu ^{\alpha \beta }(x)`$, provides covariance with respect to local Lorentz rotations.
Let $`x^\mu =(\tau ,r,\varphi ,\eta )`$ be the contravariant components of the curvilinear coordinates that cover the past of the hyperplane $`t=0,z=0`$ of the interaction,
$`x^0=\tau \mathrm{cosh}\eta ,x^3=\tau \mathrm{sinh}\eta ,`$ (1)
$`x^1=r\mathrm{cos}\varphi ,x^2=r\mathrm{sin}\varphi ,`$ (2)
where $`x^\alpha =(t,x,y,z)\equiv (x^0,x^1,x^2,x^3)`$ are those of the flat Minkowsky space. \[In order to cover the full Minkowsky space, we have to build a chart with similar parameterization in each of four domains separated by the hypersurfaces, $`\tau ^2=t^2-z^2=0`$. The domains $`\tau ^2>0`$ can be split further into two parts by the light cone $`\tau ^2=r^2`$.\] For the coordinates (2), the four tetrad vectors $`e_\mu ^\alpha `$ form a matrix
$`e_\mu ^\alpha =\mathrm{diag}(1,1,r,\tau ).`$ (3)
These vectors correctly reproduce the curvilinear metric $`\mathrm{g}_{\mu \nu }`$ and the flat Minkowsky metric $`g_{\alpha \beta }`$, i.e.,
$`\mathrm{g}_{\mu \nu }=g_{\alpha \beta }e_\mu ^\alpha e_\nu ^\beta =\mathrm{diag}[-1,1,r^2,\tau ^2],`$ (4)
$`g^{\alpha \beta }=\mathrm{g}^{\mu \nu }e_\mu ^\alpha e_\nu ^\beta =\mathrm{diag}[-1,1,1,1].`$ (5)
The spin connection can be found from the condition that the covariant derivatives of the tetrad vectors equal zero , i.e.,
$`\omega _\mu ^{\alpha \beta }=[\mathrm{\Gamma }_{\mu \nu }^\lambda e_\lambda ^\alpha -\partial _\mu e_\nu ^\alpha ]e^{\beta \nu }.`$ (6)
Indeed, the tetrad vector $`e_\mu ^\alpha `$ is the coordinate vector and the Lorentz vector at the same time. (The Lorentz index $`\alpha `$ and the coordinate index $`\mu `$ are moved up and down by the local Minkowsky metric tensor $`g_{\alpha \beta }`$ and the global metric tensor $`\mathrm{g}_{\mu \nu }`$, respectively.) The only non-vanishing components of the connections are
$`\mathrm{\Gamma }_{\eta \eta \tau }^{}=-\mathrm{\Gamma }_{\tau \eta \eta }^{}=\tau ,\mathrm{\Gamma }_{\varphi \varphi r}^{}=-\mathrm{\Gamma }_{r\varphi \varphi }^{}=r,`$ (7)
$`\omega _\eta ^{30}=-\omega _\eta ^{03}=-1,\omega _\varphi ^{12}=-\omega _\varphi ^{21}=1.`$ (8)
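The non-vanishing connection components can be cross-checked numerically. The sketch below evaluates the Christoffel symbols of the first kind, $`\mathrm{\Gamma }_{\lambda \mu \nu }=\frac{1}{2}(\partial _\mu \mathrm{g}_{\lambda \nu }+\partial _\nu \mathrm{g}_{\lambda \mu }-\partial _\lambda \mathrm{g}_{\mu \nu })`$, by central differences of the metric (the coordinate ordering $`(\tau ,r,\varphi ,\eta )`$ is an assumption of the sketch; for the components checked here the sign convention for $`\mathrm{g}_{\tau \tau }`$ drops out):

```python
import numpy as np

def g(x):
    """Curvilinear metric in coordinates (tau, r, phi, eta); the sign of
    the tau-tau entry does not affect the components checked below."""
    tau, r = x[0], x[1]
    return np.diag([1.0, 1.0, r**2, tau**2])

def christoffel_first_kind(x, h=1e-5):
    """Gamma_{lam mu nu} = (1/2)(d_mu g_{lam nu} + d_nu g_{lam mu}
    - d_lam g_{mu nu}), from central differences of the metric."""
    x = np.asarray(x, float)
    dg = np.zeros((4, 4, 4))                 # dg[s] = d_s g
    for s in range(4):
        xp, xm = x.copy(), x.copy()
        xp[s] += h
        xm[s] -= h
        dg[s] = (g(xp) - g(xm)) / (2.0 * h)
    Gam = np.zeros((4, 4, 4))
    for l in range(4):
        for m in range(4):
            for n in range(4):
                Gam[l, m, n] = 0.5 * (dg[m][l, n] + dg[n][l, m] - dg[l][m, n])
    return Gam

tau, r = 2.0, 3.0                            # (phi, eta) values are irrelevant
G = christoffel_first_kind([tau, r, 0.5, 0.1])
print(G[3, 3, 0], G[0, 3, 3], G[2, 2, 1], G[1, 2, 2])  # tau, -tau, r, -r
```

The pairs $`(\mathrm{\Gamma }_{\eta \eta \tau },\mathrm{\Gamma }_{\tau \eta \eta })`$ and $`(\mathrm{\Gamma }_{\varphi \varphi r},\mathrm{\Gamma }_{r\varphi \varphi })`$ come out as $`(\tau ,-\tau )`$ and $`(r,-r)`$, exhibiting the relative signs of the two members of each pair.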
In the tetrad formalism, the transition to the Euclidean space is easily done by making the time-like tetrad vector $`e_\mu ^0`$ imaginary, $`e_\mu ^0\to (e_\mu ^0)_E=(i,0,0,0),(e_0^\mu )_E=(i,0,0,0)`$. Then Eqs. (5) take the form
$`\mathrm{g}_{\mu \nu }=g_{\alpha \beta }(e_\mu ^\alpha )_E(e_\nu ^\beta )_E=\mathrm{diag}[\pm 1,1,r^2,\tau ^2],`$ (9)
$`g^{\alpha \beta }=\mathrm{g}^{\mu \nu }(e_\mu ^\alpha )_E(e_\nu ^\beta )_E=\mathrm{diag}[1,1,1,1].`$ (10)
This formal step also leads to a set of standard prescriptions for the transition to the Euclidean version of the field theory: $`A_E^\tau =(e_0^\tau )_EA^0=iA^0`$. The same rule holds for the spin connection, $`(\omega _\mu ^{03})_M\to (\omega _\mu ^{03})_E=i(\omega _\mu ^{03})_M`$. These formulae indicate that we perform a transition to an imaginary proper time $`\tau `$. The choice between the signs in Eq. (10) is the subject of an independent analysis.
3. Assuming the Euclidean long-distance behavior, we employ the metric
$`ds^2=d\tau ^2+dr^2+r^2d\varphi ^2+\tau ^2d\eta ^2`$ (11)
with the only non-vanishing components of the spin connection being $`\omega _\eta ^{03}=1`$ and $`\omega _\varphi ^{12}=1`$. \[The Christoffel symbols $`\mathrm{\Gamma }`$ remain the same as in Eq. (7). Overall, we have four domains with the same Euclidean metric, which explains the results (48) and (49).\] Introducing the (iso)vector representation $`A_\mu ^{\alpha \beta }`$ of the gauge field of the $`O(4)`$ group, and insisting on a one-to-one mapping of the color and space directions, we require that
$`A_\mu ^{\alpha \beta }=h\omega _\mu ^{\alpha \beta },`$ (12)
where the factor $`h`$ is an arbitrary real number which defines the relationship between the cyclic components of space-time and color coordinates. The gauge fields of $`O(4)`$ have two projections on its two $`SU(2)`$-subgroups,
$`(A_\mu ^a)_\pm ={\displaystyle \frac{1}{2}}\eta _\pm ^{a\alpha \beta }A_\mu ^{\alpha \beta }=\pm A_\mu ^{0a}+{\displaystyle \frac{1}{2}}ϵ^{a\alpha \beta }A_\mu ^{\alpha \beta },`$ (13)
where $`\eta _\pm ^{a\alpha \beta }`$ are the ’t Hooft symbols , and the subscripts $`(\pm )`$ denote two chiral projections. Thus, we have
$`(A_\eta ^3)_\pm =\pm h\omega _\eta ^{03}=\pm h,(A_\varphi ^3)_\pm =h\omega _\varphi ^{12}=h,`$ (14)
which is compatible with the gauge condition $`A^\tau =0`$ that we adopt for both the Euclidean and the Lorentz regimes of the process. One can easily find a representation for this potential which manifests its pure gauge origin,
$`A_\mu (x)=(1/2)A_\mu ^a(x)\sigma ^a=S\partial _\mu S^{-1}.`$ (15)
Using the decomposition, $`S=iu_0\mathrm{𝟏}+u_a𝝈^a`$, and $`S^{-1}=-iu_0\mathrm{𝟏}+u_a𝝈^a`$ we arrive at
$`A_\mu (x)=(1/2)A_\mu ^c(x)𝝈^c`$ (16)
$`=(ϵ^{abc}u_a\partial _\mu u_b+u_0\partial _\mu u_c-u_c\partial _\mu u_0)𝝈^c.`$ (17)
By comparison with (14), and accounting for the unitarity, $`SS^{-1}=1`$, we obtain a system of equations,
$`2(u_1\partial _\eta u_2-u_2\partial _\eta u_1+u_0\partial _\eta u_3-u_3\partial _\eta u_0)=\pm h,`$ (18)
$`2(u_1\partial _\varphi u_2-u_2\partial _\varphi u_1+u_0\partial _\varphi u_3-u_3\partial _\varphi u_0)=h,`$ (19)
$`u_0^2+u_a^2=1,`$ (20)
which has a solution
$`(u_0)_\pm =2^{-1/2}\mathrm{cos}h\eta ,(u_3)_\pm =\pm 2^{-1/2}\mathrm{sin}h\eta ,`$ (21)
$`(u_1)_\pm =2^{-1/2}\mathrm{cos}h\varphi ,(u_2)_\pm =2^{-1/2}\mathrm{sin}h\varphi .`$ (22)
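As a numerical cross-check of the solution (21)-(22): the sketch below (Python with NumPy, an assumed dependency and not part of the original analysis) differentiates the $`u`$'s by central differences and confirms that the bilinear combinations on the left-hand sides of (18)-(19) reduce to $`\pm h`$ and $`h`$, with the overall normalization fixed by the unitarity condition (20).

```python
import numpy as np

def check_u_solution(h=0.7, eta=0.3, phi=1.1, eps=1e-6):
    """Verify eqs. (18)-(20) for the (+) chiral branch of the solution (21)-(22)."""
    c = 2 ** -0.5  # overall normalization fixed by u0^2 + u_a u_a = 1
    def u(eta, phi):
        # (u0, u1, u2, u3) as functions of the two cyclic coordinates
        return np.array([c * np.cos(h * eta),
                         c * np.cos(h * phi),
                         c * np.sin(h * phi),
                         c * np.sin(h * eta)])
    def bilinear(uu, du):
        # 2(u1 du2 - u2 du1 + u0 du3 - u3 du0), the structure of (18)-(19)
        return 2.0 * (uu[1] * du[2] - uu[2] * du[1] + uu[0] * du[3] - uu[3] * du[0])
    uu = u(eta, phi)
    du_eta = (u(eta + eps, phi) - u(eta - eps, phi)) / (2 * eps)  # d/d(eta)
    du_phi = (u(eta, phi + eps) - u(eta, phi - eps)) / (2 * eps)  # d/d(phi)
    return bilinear(uu, du_eta), bilinear(uu, du_phi), float(np.sum(uu ** 2))
```

Both bilinears evaluate to $`h`$ for the branch chosen here (the other branch flips the sign of the $`\eta `$ combination), and the norm is unity to machine precision.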
A non-trivial solution of the Yang-Mills equations is looked for in the form
$`(A_\eta ^3)_\pm =\pm E(\tau ,r)\to iS\partial _\eta S^{-1}=\pm 1,`$ (23)
$`(A_\varphi ^3)_\pm =\mathrm{\Phi }(\tau ,r)\to iS\partial _\varphi S^{-1}=1,`$ (24)
where the arrows point to the values of the potentials at $`\tau \to \mathrm{\infty }`$, where the field must approach a pure gauge. (For now, we assume that $`h=1`$.) Since the field has only one color component, the commutator in the definition, $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu -[A_\mu ,A_\nu ]`$, vanishes and the components of the field tensor are the same as in the Abelian case where
$`F_{\tau \eta }=\partial _\tau E,F_{\tau \varphi }=\partial _\tau \mathrm{\Phi },F_{\tau r}=0,`$ (25)
$`F_{r\eta }=\partial _rE,F_{r\varphi }=\partial _r\mathrm{\Phi },F_{\eta \varphi }=0.`$ (26)
The condition for the (anti)self-duality of the field tensor $`F_{\mu \nu }`$ reads as
$`\stackrel{~}{F}_{\mu \lambda }\equiv \mathrm{g}_{\mu \nu }\mathrm{g}_{\lambda \sigma }{\displaystyle \frac{ϵ^{\nu \sigma \rho \kappa }}{2\sqrt{\mathrm{g}}}}F_{\rho \kappa }=\pm F_{\mu \lambda }.`$ (27)
Note that this definition of the dual tensor differs from the familiar flat-space one. The modification is natural: in these coordinates the co- and contravariant tensor components even have different dimensions. The requirement of self-duality of the field (23)-(24) yields the system of equations,
$`{\displaystyle \frac{1}{r}}{\displaystyle \frac{\partial \mathrm{\Phi }}{\partial r}}=-{\displaystyle \frac{1}{\tau }}{\displaystyle \frac{\partial E}{\partial \tau }},\tau {\displaystyle \frac{\partial \mathrm{\Phi }}{\partial \tau }}=r{\displaystyle \frac{\partial E}{\partial r}}.`$ (28)
Two conditions of the self-consistency for this system are
$`\partial _r^2\mathrm{\Phi }-{\displaystyle \frac{1}{r}}\partial _r\mathrm{\Phi }=-(\partial _\tau ^2\mathrm{\Phi }+{\displaystyle \frac{1}{\tau }}\partial _\tau \mathrm{\Phi }),`$ (29)
$`\partial _r^2E+{\displaystyle \frac{1}{r}}\partial _rE=-(\partial _\tau ^2E-{\displaystyle \frac{1}{\tau }}\partial _\tau E).`$ (30)
This system is easily solved by separation of variables. The solution which obeys the original system (28), the condition for finiteness, the boundary condition of pure gauge at $`\tau \to \mathrm{\infty }`$, and the condition that $`A_\eta \to 0`$ at $`\tau \to 0`$, is as follows,
$`A_\eta ^3=E(\tau ,r)=1-\lambda \tau K_1(\lambda \tau )J_0(\lambda r),`$ (31)
$`A_\varphi ^3=\mathrm{\Phi }(\tau ,r)=1-\lambda rJ_1(\lambda r)K_0(\lambda \tau ).`$ (32)
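The Bessel profiles entering (31)-(32) can be checked against the self-consistency conditions (29)-(30) numerically. Since those conditions are linear and homogeneous, the check is insensitive to the additive constants and overall signs of $`E`$ and $`\mathrm{\Phi }`$. A sketch using SciPy's Bessel functions (an assumed dependency):

```python
import numpy as np
from scipy.special import j0, j1, k0, k1

lam = 1.0
# Bessel profiles of (31)-(32); additive constants drop out of (29)-(30)
E   = lambda t, r: lam * t * k1(lam * t) * j0(lam * r)
Phi = lambda t, r: lam * r * j1(lam * r) * k0(lam * t)

def d(f, t, r, nt, nr, h=1e-4):
    """Central finite difference: nt-th tau-derivative or nr-th r-derivative."""
    if nt == 1: return (f(t + h, r) - f(t - h, r)) / (2 * h)
    if nt == 2: return (f(t + h, r) - 2 * f(t, r) + f(t - h, r)) / h**2
    if nr == 1: return (f(t, r + h) - f(t, r - h)) / (2 * h)
    return (f(t, r + h) - 2 * f(t, r) + f(t, r - h)) / h**2

def residuals(t, r):
    # (29): Phi_rr - Phi_r/r + Phi_tautau + Phi_tau/tau = 0
    r29 = (d(Phi, t, r, 0, 2) - d(Phi, t, r, 0, 1) / r
           + d(Phi, t, r, 2, 0) + d(Phi, t, r, 1, 0) / t)
    # (30): E_rr + E_r/r + E_tautau - E_tau/tau = 0
    r30 = (d(E, t, r, 0, 2) + d(E, t, r, 0, 1) / r
           + d(E, t, r, 2, 0) - d(E, t, r, 1, 0) / t)
    return r29, r30
```

Both residuals vanish to finite-difference accuracy, and both profiles die off exponentially at large $`\tau `$, as required by the pure-gauge boundary condition.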
From these expressions for the potentials, we may easily find the field strengths of the ephemeron,
$`\tau ^{-1}F_{\tau \eta }^3=-r^{-1}F_{r\varphi }^3=\lambda ^2K_0(\lambda \tau )J_0(\lambda r),`$ (33)
$`r^{-1}F_{\tau \varphi }^3=\tau ^{-1}F_{r\eta }^3=\lambda ^2J_1(\lambda r)K_1(\lambda \tau ).`$ (34)
The geometry of this field is noteworthy. It has the symmetry of a torus. The magnetic field has two components, one along the torus pipe, and the second winding around the pipe. This is a well known configuration of a toroidal magnetic trap. Since $`r<\tau `$, both radii of the torus get smaller when $`\tau \to 0`$; the torus collapses at $`\tau =0`$. The electric field has two similar components, which are created in accordance with the Abelian induction law. Every pipe is mono-colored. The parameter $`\rho =\lambda ^{-1}`$ clearly plays the role of the “size” of the ephemeron.
4. Starting from the fields given by Eqs. (25) and (26), we may find the Euclidean action of the ephemeron:
$`S_E={\displaystyle \frac{1}{4g^2}}{\displaystyle \int d^4x\sqrt{\mathrm{g}}\mathrm{g}^{\mu \rho }\mathrm{g}^{\nu \sigma }F_{\mu \nu }^aF_{\rho \sigma }^a}`$ (35)
$`={\displaystyle \frac{2\pi ^2}{g^2}}{\displaystyle \int _0^{\mathrm{\infty }}}\tau d\tau {\displaystyle \int _0^\tau }{\displaystyle \frac{dr}{r}}\left[\left({\displaystyle \frac{\partial \mathrm{\Phi }}{\partial \tau }}\right)^2+\left({\displaystyle \frac{\partial \mathrm{\Phi }}{\partial r}}\right)^2\right].`$ (36)
In the same way, we compute the topological charge
$`Q={\displaystyle \frac{1}{32\pi ^2}}{\displaystyle \int d^4x\sqrt{\mathrm{g}}\frac{ϵ^{\mu \nu \rho \sigma }}{2\sqrt{\mathrm{g}}}F_{\mu \nu }^aF_{\rho \sigma }^a}`$ (37)
$`=\pm {\displaystyle \frac{1}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}\tau d\tau {\displaystyle \int _0^\tau }{\displaystyle \frac{dr}{r}}\left[\left({\displaystyle \frac{\partial \mathrm{\Phi }}{\partial \tau }}\right)^2+\left({\displaystyle \frac{\partial \mathrm{\Phi }}{\partial r}}\right)^2\right].`$ (38)
Thus, we have reproduced the standard relation between the instanton action and its winding number,
$`S_E={\displaystyle \frac{8\pi ^2}{g^2}}|Q|.`$ (39)
We now have to find the winding number $`Q`$.
We shall do it using the representation of topological charge via the divergence of the Chern-Simons current,
$`Q={\displaystyle \frac{1}{4\pi ^2}}{\displaystyle \oint d\sigma _\mu K^\mu },`$ (40)
where
$`K^\mu ={\displaystyle \frac{1}{4}}ϵ^{\mu \nu \rho \sigma }\left[A_\nu ^a\partial _\rho A_\sigma ^a+{\displaystyle \frac{g}{3}}ϵ_{abc}A_\nu ^aA_\rho ^bA_\sigma ^c\right].`$ (41)
The second term (usually the major one) identically vanishes since the ephemeron field has only one color component. In our geometry, only two components of $`K^\mu `$ survive,
$`K^\tau ={\displaystyle \frac{\pm 1}{4}}\left[E{\displaystyle \frac{\partial \mathrm{\Phi }}{\partial r}}-\mathrm{\Phi }{\displaystyle \frac{\partial E}{\partial r}}\right]={\displaystyle \frac{\mp 1}{8}}\left[{\displaystyle \frac{r}{\tau }}{\displaystyle \frac{\partial E^2}{\partial \tau }}+{\displaystyle \frac{\tau }{r}}{\displaystyle \frac{\partial \mathrm{\Phi }^2}{\partial \tau }}\right]`$ (42)
and
$`K^r={\displaystyle \frac{1}{4}}\left[E{\displaystyle \frac{\partial \mathrm{\Phi }}{\partial \tau }}-\mathrm{\Phi }{\displaystyle \frac{\partial E}{\partial \tau }}\right]={\displaystyle \frac{1}{8}}\left[{\displaystyle \frac{r}{\tau }}{\displaystyle \frac{\partial E^2}{\partial r}}+{\displaystyle \frac{\tau }{r}}{\displaystyle \frac{\partial \mathrm{\Phi }^2}{\partial r}}\right].`$ (43)
Correspondingly, the total flux of the vector $`K^\mu `$ over a closed surface can be split into the sum of integrals over three surfaces, $`\tau =\mathrm{}`$, $`r=0`$, and $`\tau =r`$,
$`Q={\displaystyle \int d\tau \int dr\,\theta (\tau -r)[\partial _\tau K^\tau +\partial _rK^r]}`$ (44)
$`={\displaystyle \int _0^{\mathrm{\infty }}}dr[K^\tau (\mathrm{\infty },r)-K^\tau (r,r)]`$ (45)
$`+{\displaystyle \int _0^{\mathrm{\infty }}}d\tau [K^r(\tau ,\tau )-K^r(\tau ,0)].`$ (46)
(The factor $`4\pi ^2`$ has come from two angular integrations.) Straightforward computation of the integrals leads to the following expressions,
$`{\displaystyle \int _0^{\mathrm{\infty }}}K^\tau (\mathrm{\infty },r)dr=0,{\displaystyle \int _0^{\mathrm{\infty }}}K^r(\tau ,0)d\tau ={\displaystyle \frac{1}{4}},`$ (47)
$`{\displaystyle \int _0^{\mathrm{\infty }}}[K^r(\tau ,\tau )-K^\tau (\tau ,\tau )]d\tau ={\displaystyle \frac{1}{8}}.`$ (48)
Eventually, we find
$`Q={\displaystyle \frac{1}{4\pi ^2}}{\displaystyle \oint d\sigma _\mu K^\mu }={\displaystyle \frac{1}{8}}.`$ (49)
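The second boundary integral in (47) admits an independent numerical check: along $`r=0`$ one has $`\mathrm{\Phi }=1`$ and $`\partial _\tau \mathrm{\Phi }=0`$, so $`|K^r(\tau ,0)|`$ reduces to $`(1/4)\lambda ^2\tau K_0(\lambda \tau )`$, and after substituting $`x=\lambda \tau `$ the integral becomes $`(1/4)\int _0^{\mathrm{\infty }}xK_0(x)dx`$. A SciPy sketch (an assumed dependency; sign conventions aside):

```python
from scipy.integrate import quad
from scipy.special import k0

# Along r = 0: |K^r(tau,0)| = (1/4) lam^2 tau K0(lam tau); substitute x = lam*tau.
bessel_integral, err = quad(lambda x: x * k0(x), 0.0, float("inf"))
boundary_term = 0.25 * bessel_integral  # magnitude of the second integral in (47)
```

The Bessel integral equals one exactly, reproducing the $`1/4`$ quoted in (47) up to the sign conventions discussed above.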
The transient topological gluon configuration carries a fractional topological charge of 1/8 in that part of the Euclidean space which is an image of the interior of the past light cone of the interaction vertex. In a full chart, we have several domains with similar Euclidean picture, and the sum over all of them gives $`Q_{\mathrm{tot}}=1`$.
5. The ephemeron field is defined by four parameters. Two of them, $`x_0`$ and $`y_0`$, are obvious and correspond to the preserved translation symmetry in the $`xy`$-plane. To include them explicitly, we must read $`r`$ as $`\sqrt{(x-x_0)^2+(y-y_0)^2}`$. Next, we have the ephemeron “radius” $`\rho =\lambda ^{-1}`$. The fourth parameter is the scale factor $`h`$, which was introduced in Eq. (12) and dropped from the explicit calculations after Eq. (22). In fact, the presence of the scale factor $`h`$ among the parameters of the ephemeron can be recovered from the observation that Eqs. (28) are homogeneous (and thus admit an arbitrary scale factor) and the fact that the solutions for the vector potential then have a pure gauge asymptote given by Eqs. (15)-(22). With $`h`$ explicitly retained, the topological charge is $`Q=h^2/8`$ and thus becomes a continuous parameter. This is in contrast with the case of the spherically symmetric BPST instantons, which can have only integer charges. Equations (17) and (22) explain the origin of the non-trivial topology of the ephemeron. The solutions with different values of $`h`$ correspond to different field configurations whose asymptotes at $`\tau \to \mathrm{\infty }`$ are different gauge transforms of the classical vacuum with $`A(x)=0`$.
The most impressive feature of the ephemeron is that it is one-dimensional both in physical and color spaces. This is not so surprising from a mathematical point of view since we map the two spaces onto one another. In physical space, the full spherical symmetry is corrupted by the interaction and only two rotations in the $`tz`$\- and the $`xy`$-planes survive as an actual symmetry. The high-precision measurement of the coordinate inside an object formed by the strong interaction necessarily resolves a mono-colored field pattern. Topologically, the ephemeron is a collapsing ring before the interaction, and an opened expanding string after it.
Existence of the transient topological field configurations poses several important questions.
(i) How does the toroidal geometry of the ephemeron affect the distribution of the gluons produced in high-energy collisions? Is the proper field of the resolved color charge Coulomb-like, or does it carry some remnants of the twisted geometry of the ephemeron fields?
(ii) The geometry of the electric and magnetic fields of the ephemeron implies strong spin polarization effects. All known evidence of $`P`$\- and $`CP`$-violation comes from the study of various decays, which are genuinely non-stationary processes. What is the role of the transient topological configurations in these decays? Formally, the ephemerons must be included into the path-integral representation of the point-to-point correlators of hadronic currents on an equal footing with the BPST instantons.
Finally, the border between perturbative and non-perturbative QCD is clearly defined by their relation to the non-trivial topological properties of the QCD vacuum. The very existence of the ephemeron solutions of the Yang-Mills equations undoubtedly indicates that there is a hope to bridge the gap.
This work was supported by the U.S. Department of Energy under Contract No. DE–FG02–94ER40831.
# Magneto-Transport Properties of the Rutheno-Cuprate RuSr2GdCu2O8
## 1 INTRODUCTION
The original motivation for the synthesis of RuSr<sub>2</sub>GdCu<sub>2</sub>O<sub>8</sub> was to incorporate a metallic interlayer between the CuO<sub>2</sub> planes, increasing their coupling and hence their critical current. However, soon after the first successful synthesis the material was found to display not only superconductivity, but coexisting ferromagnetism as well, first in the sister compound $`R_{1.4}`$Ce<sub>0.6</sub>RuSr<sub>2</sub>Cu<sub>2</sub>O<sub>10-δ</sub> ($`R`$=Gd and Eu), and then in Ru-1212 itself. Evidence has accumulated from static magnetisation and muon spin rotation ($`\mu `$SR) studies, and more recently from Gd ESR, indicating that the two phases coexist on a truly microscopic scale. In this paper we explore the interaction between the transport carriers in the CuO<sub>2</sub> layers and the ferromagnetic Ru moments. We analyse magnetisation and MR data for the same sample in terms of the s-d model, and derive a value for the exchange interaction energy.
## 2 EXPERIMENTAL
Phase-pure sintered pellets of RuSr<sub>2</sub>GdCu<sub>2</sub>O<sub>8</sub> were synthesized via solid-state reaction of a stoichiometric mixture of high-purity metal oxides and SrCO<sub>3</sub>. A final extended anneal at 1060°C in flowing high-purity O<sub>2</sub> produced a marked improvement in the crystallinity of the compound and a correspondingly lower residual resistivity (Fig. 1a).
Bars of approximate dimensions 4$`\times `$0.8$`\times `$0.4mm were cut using a diamond wheel and mounted on quartz substrates in a six-contact configuration allowing both resistance and Hall voltage to be measured simultaneously.
Field-sweeps were performed at constant temperature, the temperature being controlled with a capacitance thermometer. A small correction was made for the drift in capacitance, and hence temperature, with time (typically $`<`$150mK during one 0-11-0T field cycle). A Cernox thermometer was used to control the temperature sweeps for the resistivity measurements, for which alternating current densities of around 0.25Acm<sup>-2</sup> were used.
Magnetisation measurements were made on a sample of dimensions $``$ 10$`\times `$1$`\times `$1mm using a commercial SQUID magnetometer, with the field parallel to the long axis up to a maximum of 50 kOe.
## 3 RESULTS
### 3.1 Transport data
Figure 1a shows the zero-field resistivity as a function of temperature. It is metallic in character at high temperatures, with a slight upturn above $`T_c`$. In this sample both the size of the upturn and the residual resistance have been lowered relative to the as-grown sample by the extended anneal mentioned above, while the TEP and Hall coefficient show little change. This suggests that the upturn is due to grain-boundary effects.
The Hall effect data (Fig. 1b) are similar to those obtained for heavily under-doped polycrystalline YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>, which has $`R_H=6.1\times 10^{-9}`$m<sup>3</sup>C<sup>-1</sup> at 300K for an oxygen deficiency, $`\delta =0.62`$. The thermoelectric power (TEP) data are also characteristic of a heavily under-doped high-$`T_c`$ cuprate, with about 0.06 to 0.07 holes per Cu atom.
Figure 2 shows a selection of the MR data for temperatures above and below $`T_{Curie}`$. For $`T>T_{Curie}`$ the MR is always negative and it decreases as $`H^2`$ for temperatures well above $`T_{Curie}`$. This field dependence arises from the freezing out of spin-disorder scattering as the Ru moments become aligned with the magnetic field. Close to $`T_{Curie}`$ the MR becomes quite linear over the range of $`H`$ investigated, and for lower temperatures displays a positive peak at 15 to 20kOe (Fig. 2b).
### 3.2 Magnetisation Data
Magnetisation measurements were made both as a function of temperature and field. The temperature sweeps may be fitted using a ferromagnetic (Curie-Weiss) plus paramagnetic (Curie) expression of the form $`\chi (T)=\frac{C_1}{T-\theta }+\frac{C_2}{T}`$ . The ferromagnetic term, $`C_1`$, is associated with the Ru moments, while the Gd moments are paramagnetic. Fitting our $`\chi \left(T\right)`$ data in this way for temperatures between 150 and 300 K gives $`\mu _{Ru}=1.1\mu _B`$ and $`\mu _{Gd}=7.4\mu _B`$, in agreement with previous results. The $`C_2/T`$ term is later subtracted from data obtained via the field sweeps, leaving just the ferromagnetic component which we then compare with the MR data.
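The two-component fit described above can be sketched as follows; the parameter values below are illustrative placeholders rather than the measured ones, and SciPy is an assumed dependency:

```python
import numpy as np
from scipy.optimize import curve_fit

def chi(T, C1, theta, C2):
    """Ferromagnetic (Curie-Weiss) plus paramagnetic (Curie) susceptibility."""
    return C1 / (T - theta) + C2 / T

# synthetic chi(T) with hypothetical parameters -- NOT the measured values
T = np.linspace(150.0, 300.0, 60)
true_params = (0.35, 135.0, 5.5)   # C1, theta [K], C2
data = chi(T, *true_params)

popt, pcov = curve_fit(chi, T, data, p0=(1.0, 100.0, 1.0))
```

The fitted Curie constants then translate into effective moments in the usual way, with the $`C_2/T`$ term subtracted before comparison with the MR data.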
### 3.3 Exchange energy
We employ the Zener, or s-d model of a ferromagnetic metal, in which itinerant $`s`$ electrons interact with $`d`$ electrons localized at atomic sites. This model was extended by Kasuya to calculate the MR of dilute magnetic alloys. Assuming negligible potential scattering from the spatially ordered Ru moments, the magnetic contribution to the resistivity becomes
$$\mathrm{\Delta }\rho \simeq \frac{2\pi N_{ϵ_F}}{zN_V}\frac{m}{e^2\mathrm{\hbar }}cJ_{ex}^2S(S+1)\alpha ^2\frac{4}{27}$$
(1)
where $`N_{ϵ_F}`$ is the density of states per spin per unit cell and $`N_V`$ is the number of unit cells per unit volume, each containing $`z`$ conduction electrons of mass $`m`$. $`J_{ex}`$ is the exchange interaction between the spins and conduction electrons and $`c`$ is the spin concentration.
In the limit of high temperatures $`\left(\alpha =\frac{g\mu _BH}{k_BT}\ll 1\right)`$, $`S_z\simeq S(S+1)\alpha /3`$ and the magnetisation is $`M=\mu _BgS_z`$. Substituting into eqn. 1 gives the well-known experimental result $`\mathrm{\Delta }\rho \propto M^2`$ for dilute magnetic alloys.
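The small-$`\alpha `$ replacement for $`S_z`$ can be checked directly from the Boltzmann average over the $`2S+1`$ levels; the sketch below uses $`S=1`$ purely for illustration (Python/NumPy assumed):

```python
import numpy as np

def S_z_avg(S, alpha):
    """Thermal average of S_z over the 2S+1 levels with weights exp(m*alpha)."""
    m = np.arange(-S, S + 1.0)
    w = np.exp(m * alpha)
    return float(np.sum(m * w) / np.sum(w))

S, alpha = 1.0, 1e-3   # S = 1 is an illustrative choice, not a fitted value
exact = S_z_avg(S, alpha)
limit = S * (S + 1.0) * alpha / 3.0   # the high-temperature replacement
```

For small $`\alpha `$ the exact average and the $`S(S+1)\alpha /3`$ limit agree to $`O(\alpha ^2)`$, which is what makes $`\mathrm{\Delta }\rho \propto M^2`$ in this regime.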
Figure 3 shows the magnetoresistance plotted against magnetic field for three temperatures above T<sub>Curie</sub>. The square of the Ru magnetisation is also plotted, the paramagnetic Gd component having been subtracted.
Though this material is not a dilute alloy, the s-d model is a good fit to the data for temperatures well above T<sub>Curie</sub>, allowing an estimate of the exchange interaction $`J_{ex}`$ to be made. The Uemura relation gives $`n_s/m=4.8\times 10^{56}`$ based on a $`T_c`$ of 46K, and with $`N_{ϵ_F}\simeq 3\times 10^{19}`$ J<sup>-1</sup> we derive an exchange interaction of 27-47 meV depending on the choice of $`c`$, the higher value being for transport solely in the CuO<sub>2</sub> planes. These values are of the order of the superconducting energy gap and so would be expected to have a significant effect on the superconducting properties.
## 4 CONCLUSIONS
The above results lead us to two possible scenarios. The first is that the RuO<sub>2</sub> layer is a local-moment ferromagnet, the transport occurring in the CuO<sub>2</sub> layers. In this case it is easy to understand the resistivity, TEP and Hall data, but difficult to see why such a large exchange field between the carriers and Ru moments does not seriously affect the superconductivity. The second scenario is that significant current flows in the RuO<sub>2</sub> planes and the material is therefore an itinerant ferromagnet. In this case the MR is determined by interactions between Ru spins and RuO<sub>2</sub> carriers, which can be large without affecting the superconductivity. It is, however, more difficult to understand the resistivity, TEP and Hall data within this scenario.
## ACKNOWLEDGEMENTS
The authors wish to thank J.W. Loram for many helpful discussions. This work was supported by the UK EPSRC, grant RG26680.
# A Search for Chemical Signatures of Galactic Mergers
## 1 Overview
A number of very high-velocity metal-poor field stars have been discovered that have very unusual ratios of alpha-elements to iron. The stars discovered to date all have large apogalacticon distances, and so the unusual abundance ratios may suggest a chemical “signature” of previous merger or accretion events. That is, these stars may have originated within a satellite galaxy or galaxies that experienced a different nucleosynthetic chemical evolution history than the Milky Way and which were later accreted by it.
We have observed a sample of over two dozen high-velocity metal-poor field stars using high resolution echelle spectrographs at the KPNO, CTIO, and McDonald Observatories. The stars were selected from Ryan & Norris (1991), Carney et al (1994), as well as unpublished studies yielding private catalogs of metal-poor stars. So far, we have analysed the more metal-poor stars and, for those stars in common with previous studies, we obtain results that agree well with those in the literature. Included in our study are a re-analysis of BD+80 245, a star previously known to have low $`\alpha `$-element ratios (Carney et al 1997) and G4-36, a new low-$`\alpha `$ star discovered by James (1998).
## 2 BD+80 245: re-analysis of the “discovery” star
Carney et al (1997) reported the discovery of an $`\alpha `$-element poor star as a result of their search for low-metallicity disk stars. We have since acquired new spectra of the star, with a resolution of 60,000 and an SNR of $`\sim `$200. We have independently re-derived the abundances, using a different linelist, and largely confirm the previous analysis, as well as expand the abundance list to include additional key elements.
## 3 G4-36: a star of low Na, Mg, Ca, but high Ni
Among our observations, we included a star of unusual abundances discovered by James (1998). In addition to confirming the unusually low Na, Mg, Ca and Ba abundances determined by James (PhD Thesis in preparation, Univ. of Texas), we find significantly enhanced Ni. This is contrary to the results of Nissen & Schuster, who found the abundance of Ni to be strongly correlated with that of Na in their low-$`\alpha `$ stars.
## 4 Na vs. Ni
As previously found by Nissen & Schuster (1997), for most of the stars, there appears to be a good correlation between Na and Ni abundances with respect to iron. As seen in figure 2, this correlation is also found in the Stephens (1999) data. However, a few exceptions stand out: the low globular cluster abundances found by Brown et al (1997) for Rup 106 and Pal 12; the unusual star BD+80 245 found by Carney et al (1997); as well as BD+24 1676, a star which seems to show mild enhancements in Mg, Ca, and Ti with respect to its iron abundance. Including G4-36, yet another star of interesting abundance ratios, these “unusual” stars seem to fit together, and correspond to a Ni/Na correlation as well, albeit a very different one from the rest of the stars. Whether this is a real correlation, or simply a coincidence imposed by the small number of stars analysed so far, will be determined as the remainder of our two-dozen high velocity star sample is analysed. However, if the secondary correlation exists, a Ni/Na discriminant such as this could be extremely useful as a proxy indicator to target stars of other unusual abundance ratios, thus alerting us to their different nucleosynthetic histories.
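If the secondary correlation is real, the proxy discriminant amounts to nothing more than a least-squares slope and a Pearson coefficient in the \[Ni/Fe\]–\[Na/Fe\] plane. The sketch below uses synthetic stand-in points on an assumed linear relation, not our measured abundances (Python/NumPy assumed):

```python
import numpy as np

def na_ni_correlation(na_fe, ni_fe):
    """Least-squares slope/intercept and Pearson r of [Ni/Fe] versus [Na/Fe]."""
    slope, intercept = np.polyfit(na_fe, ni_fe, 1)
    r = np.corrcoef(na_fe, ni_fe)[0, 1]
    return slope, intercept, r

# purely synthetic stand-in points on a hypothetical linear trend -- NOT measured data
na = np.array([-0.6, -0.4, -0.2, 0.0, 0.2])
ni = 0.45 * na - 0.05
```

Applied to the full sample, a star falling far from the main locus (or onto a second locus) in this plane would be flagged for closer abundance scrutiny.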
## 5 Possible Correlations with Kinematics
The abundances of $`\alpha `$\- and iron-peak elements as a function of the iron abundance do show abundance trends, but these trends are for the halo stars only. Most of the disk star abundances are constant across the almost 3 dex range in metallicity. While no abundance trends are observed as a function of eccentricity, there does appear to be a larger scatter in the abundances with increasing eccentricity. This is consistent with the relationship of the $`\alpha `$\- and iron-peak elemental abundances with the iron abundance: stars defined as members of the disk population have less scatter in the derived abundances. A quantitative estimate of the scatter as a function of the kinematics will be derived once the abundance determinations have been made for our complete sample (for instance, our initial analyses suggest that the scatter in the abundances of $`\alpha `$\- and iron-peak elements as a function of the predicted perigalactocentric distances is larger for stars that get closer to the centre of the galaxy). Upon completion of the abundance analyses for the entire sample, we will also investigate whether the unusual \[$`\alpha `$/Fe\] ratios have been found in a large enough sample of stars to identify the kinematics of the progenitor galaxy or galaxies.
## Acknowledgements
We are indebted to Renee James for notifying us of the unusual nature of G4-36 prior to publication, as well as for supplying comparison abundance information. Ken Freeman has our appreciation for a helpful discussion and thoughtful suggestions regarding the analysis of the abundance trends, and Sofia Feltzing for her sharp eyes in spotting a misplaced point on our poster display. We also thank Daryl Wilmarth for his excellent technical help at the KPNO 4-m telescope, as well as the subsequent assistance he provided in using the new and improved Thorium-Argon Spectral Atlas (http://www.noao.edu/kpno/tharatlas/).
## References
> Brown, Wallerstein, & Vanture 1997, AJ 114, 180
>
> Carney et al 1994, AJ 107, 2240
>
> Carney et al 1997, AJ 114, 363
>
> Gratton & Sneden 1988, A&A 204, 193
>
> James 1998, BAAS 30, 1321
>
> James 1999, PhD Thesis in preparation, Univ. of Texas
>
> Magain 1989, A&A 209, 211
>
> Nissen & Schuster 1997, A&A 326, 751
>
> Ryan & Norris 1991, AJ 101, 1835
>
> Stephens 1999, AJ 117, 1771
# THE PHENOMENOLOGY OF THE LIGHTEST PSEUDO NAMBU GOLDSTONE BOSON AT FUTURE COLLIDERS<sup>1</sup>

<sup>1</sup>Talk given at the International Workshop on Linear Colliders, Sitges, Barcelona, Spain, April 28-May 5, 1999.
## 1 Introduction
Theories of the electroweak interactions based on dynamical symmetry breaking (DSB) avoid the introduction of fundamental scalar fields but generally predict many pseudo-Nambu-Goldstone bosons (PNGB’s) due to the breaking of a large initial global symmetry group $`G`$. Among the PNGB’s the colorless neutral states are the lightest ones. Direct observation of a PNGB would not have been possible at any existing accelerator, however light the PNGB’s are, unless the number of technicolors, denoted $`N_{TC}`$, is very large. The phenomenological analysis presented here is extracted from ref., where all the details can be found, and is based on a $`SU(8)\times SU(8)`$ effective low-energy Lagrangian approach. In the broad class of models considered, the lightest neutral PNGB $`P^0`$ is of particular interest because it contains only down-type techniquarks (and charged technileptons) and thus will have a mass scale that is most naturally set by the mass of the $`b`$-quark. The $`P^0`$ total width is typically in the few $`\mathrm{MeV}`$ range and dominant decay modes are $`b\overline{b}`$, $`\tau ^+\tau ^{}`$ and $`gg`$. Other color-singlet PNGB’s will have masses most naturally set by $`m_t`$, while color non-singlet PNGB’s will generally be even heavier.
Detection of the PNGB’s at the Tevatron and LHC colliders has been extensively considered . However, inclusive $`gg`$ fusion production of a neutral PNGB, followed by its decay to $`\gamma \gamma `$, was not given detailed consideration until recently . There it was noticed that for a particular class of models the ratio $`\mathrm{\Gamma }(P^0\to gg)B(P^0\to \gamma \gamma )/\mathrm{\Gamma }(H\to gg)B(H\to \gamma \gamma )`$, with $`H`$ being the SM Higgs and $`N_{TC}=4`$, is of the order of $`10^2`$ for $`50\lesssim m_{P^0/H}(\mathrm{GeV})\lesssim 150`$. Therefore, using the results of the Higgs analysis, we can conclude that, for $`N_{TC}=4`$, the $`P^0`$ can be detected in the $`gg\to P^0\to \gamma \gamma `$ mode for at least 30-50 GeV $`<m_{P^0}<`$ 150-200 GeV, and perhaps also at Tevatron RunII with $`S/\sqrt{B}\gtrsim 3`$ for $`m_{P^0}\lesssim 60\mathrm{GeV}`$.
## 2 $`e^+e^{-}`$ mode
The best mode for $`P^0`$ production at an $`e^+e^{-}`$ collider (with $`\sqrt{s}>m_Z`$) is $`e^+e^{-}\to \gamma P^0`$. Because the $`P^0Z\gamma `$ coupling-squared is much smaller than the $`P^0\gamma \gamma `$ coupling-squared, the dominant diagram is $`e^+e^{-}\to \gamma ^{\ast }\to \gamma P^0`$. Even when kinematically allowed, rates in the $`e^+e^{-}\to ZP^0`$ channel are substantially smaller, as we shall discuss. We will give results for the moderate value of $`N_{TC}=4`$. For $`\sqrt{s}=200\mathrm{GeV}`$, we find that, after imposing an angular cut of $`20^{\circ }\le \theta \le 160^{\circ }`$ on the outgoing photon (a convenient acceptance cut that also avoids the forward/backward cross section singularities but is more than 91% efficient), the $`e^+e^{-}\to \gamma P^0`$ cross section is below $`1\mathrm{fb}`$ for $`N_{TC}=4`$. Given that the maximum integrated luminosity anticipated is of order $`L\simeq 0.5\mathrm{fb}^{-1}`$, we conclude that LEP2 will not allow detection of the $`P^0`$ unless $`N_{TC}`$ is very large.
The cross section for $`e^+e^{-}\to \gamma P^0`$ at $`\sqrt{s}=500\mathrm{GeV}`$, after imposing the same angular cut, ranges from $`0.9\mathrm{fb}`$ down to $`0.5\mathrm{fb}`$ as $`m_{P^0}`$ goes from zero up to $`200\mathrm{GeV}`$. For $`L=50\mathrm{fb}^{-1}`$, we have at most 45 events with which to discover and study the $`P^0`$. The $`e^+e^{-}\to ZP^0`$ cross section is even smaller. Without cuts and without considering any specific $`Z`$ or $`P^0`$ decay modes, it ranges from $`0.014\mathrm{fb}`$ down to $`0.008\mathrm{fb}`$ over the same mass range. If TESLA is able to achieve $`L=500\mathrm{fb}^{-1}`$ per year, $`\gamma P^0`$ production will have a substantial rate, but the $`ZP^0`$ production rate will still not be useful. Since the $`\gamma P^0`$ production rate scales as $`N_{TC}^2`$, if $`N_{TC}=1`$ a $`\sqrt{s}=500\mathrm{GeV}`$ machine will yield at most 3 (30) events for $`L=50\mathrm{fb}^{-1}`$ ($`500\mathrm{fb}^{-1}`$), making $`P^0`$ detection and study extremely difficult. Thus, we will focus our analysis on the $`N_{TC}=4`$ case.
In order to assess the $`\gamma P^0`$ situation more fully, we must consider backgrounds. The dominant decay modes of the $`P^0`$ are typically to $`b\overline{b}`$, $`\tau ^+\tau ^{-}`$ or $`gg`$. For the $`b\overline{b}`$ and $`gg`$ modes, the backgrounds relevant to the $`\gamma P^0`$ channel are $`\gamma b\overline{b}`$, $`\gamma c\overline{c}`$ and $`\gamma q\overline{q}`$ ($`q=u,d,s`$) production. The cross sections for these processes, obtained after integrating over a 10 GeV bin size in the quark-antiquark mass, are, for $`10<m_{P^0}<80\mathrm{GeV}`$ and $`m_{P^0}\gtrsim 100\mathrm{GeV}`$, of the same order as the signal.
Results for $`S/\sqrt{B}`$, in the various tagged channels, for $`N_{TC}=4`$ and assuming $`L=100\mathrm{fb}^{-1}`$ (and $`L=500\mathrm{fb}^{-1}`$) at $`\sqrt{s}=500\mathrm{GeV}`$, are plotted in Fig. 1. We have assumed a mass window of $`\mathrm{\Delta }M_X=10\mathrm{GeV}`$ in evaluating the backgrounds in the various channels. Also shown in Fig. 1 is the largest $`S/\sqrt{B}`$ that can be achieved by considering (at each $`m_{P^0}`$) all possible combinations of the $`gg`$, $`c\overline{c}`$, $`b\overline{b}`$ and $`\tau ^+\tau ^{-}`$ channels. From the figure, we find for $`L=100\mathrm{fb}^{-1}`$ that $`S/\sqrt{B}\gtrsim 3`$ (our discovery criterion) for $`m_{P^0}\lesssim 75\mathrm{GeV}`$ and $`m_{P^0}\gtrsim 130\mathrm{GeV}`$, i.e. outside the $`Z`$ region. A strong signal, $`S/\sqrt{B}\gtrsim 4`$, is only possible for $`m_{P^0}`$ in the 20-60 GeV range. As the figure shows, the signal in any one channel is often too weak for discovery, and it is only the best channel combination that will reveal a signal. For the TESLA $`L=500\mathrm{fb}^{-1}`$ luminosity, $`S/\sqrt{B}`$ should be multiplied by $`\simeq 2.2`$ and discovery prospects will be improved. Tagging and mistagging efficiencies have been included .
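The “best channel combination” quoted above amounts to scanning all subsets of the tagged channels and maximizing the combined $`S/\sqrt{B}`$ of the merged sample. A sketch with purely hypothetical event counts (not the numbers underlying Fig. 1):

```python
from itertools import combinations
from math import sqrt

def best_combination(channels):
    """channels: {name: (S, B)}; return the subset maximizing combined S/sqrt(B)."""
    best, best_sig = None, 0.0
    names = list(channels)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            S = sum(channels[c][0] for c in subset)
            B = sum(channels[c][1] for c in subset)
            sig = S / sqrt(B) if B > 0 else 0.0
            if sig > best_sig:
                best, best_sig = subset, sig
    return best, best_sig

# hypothetical (S, B) event numbers per channel, for illustration only
demo = {"bb": (30, 100), "tautau": (8, 16), "gg": (20, 400), "cc": (5, 50)}
```

Merging a clean channel with a background-dominated one can lower the combined significance, which is why the scan over subsets, rather than a blanket sum, is the relevant quantity at each $`m_{P^0}`$.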
After discovery, one can determine branching fractions in various channels and couplings. The only channel with a reasonable ($`\sim 15\%`$) statistical error would be $`b\overline{b}`$, for $`L=500\mathrm{fb}^{-1}`$.
## 3 $`\gamma \gamma `$ mode
By folding the cross section for $`P^0`$ production at a given energy $`E_{\gamma \gamma }`$ of a $`\gamma \gamma `$ collider with the differential luminosity, one gets
$`N(\gamma \gamma \to P^0\to F)`$ $`=`$ $`{\displaystyle \frac{8\pi \mathrm{\Gamma }(P^0\to \gamma \gamma )B(P^0\to F)}{m_{P^0}^2E_{e^+e^{-}}}}\mathrm{tan}^{-1}{\displaystyle \frac{\mathrm{\Gamma }_{\mathrm{exp}}}{\mathrm{\Gamma }_{P^0}^{\mathrm{tot}}}}`$ (1)
$`\times \left(1+\lambda \lambda ^{\prime }\right)G(y_0)L_{e^+e^{-}},`$
where $`y_0=m_{P^0}/E_{e^+e^{-}}`$, $`\lambda `$ and $`\lambda ^{\prime }`$ are the helicities of the colliding photons, $`\mathrm{\Gamma }_{\mathrm{exp}}`$ is the mass interval accepted in the final state $`F`$, and $`L_{e^+e^{-}}`$ is the integrated luminosity for the colliding electron and positron beams. For initial discovery, one chooses laser polarizations $`P`$ and $`P^{\prime }`$ and $`e^+e^{-}`$ beam helicities $`\lambda _e`$ and $`\lambda _e^{\prime }`$ for a broad spectrum ($`2\lambda _eP\simeq +1`$, $`2\lambda _e^{\prime }P^{\prime }\simeq +1`$, $`PP^{\prime }\simeq +1`$) such that $`G>1`$ and $`\lambda \lambda ^{\prime }\simeq 1`$ (which suppresses $`\gamma \gamma \to q\overline{q}`$ backgrounds) over the large range $`0.1\lesssim y_0\lesssim 0.7`$. The $`P^0`$ is always sufficiently narrow that $`\mathrm{tan}^{-1}\simeq \pi /2`$. In this limit, the rate is proportional to $`\mathrm{\Gamma }(P^0\to \gamma \gamma )B(P^0\to F)`$. For the $`P^0`$, $`\mathrm{\Gamma }(P^0\to \gamma \gamma )`$ is large and the total production rate will be substantial.
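In the narrow-resonance limit just described, Eq. (1) reduces to an effective cross section times luminosity, which is easy to evaluate numerically. The sketch below uses the standard conversion between natural units (GeV^-2) and femtobarns; the width, branching ratio, and $`G(y_0)`$ values are illustrative placeholders, not numbers from this analysis.

```python
import math

GEV2_TO_FB = 3.894e11  # hbar^2 c^2: converts a GeV^-2 "area" to femtobarns

def gamma_gamma_events(gam_gg, br_f, m_p, e_ee, g_y0, lam_lam, lumi_fb):
    """Event yield N(gamma gamma -> P0 -> F) in the narrow-resonance
    limit of Eq. (1), where arctan(Gamma_exp / Gamma_tot) -> pi / 2.
    gam_gg is the two-photon width in GeV, lumi_fb in fb^-1."""
    eff_xsec_gev2 = (8.0 * math.pi * gam_gg * br_f / (m_p ** 2 * e_ee)
                     * (math.pi / 2.0) * (1.0 + lam_lam) * g_y0)
    return eff_xsec_gev2 * GEV2_TO_FB * lumi_fb

# Placeholder inputs: Gamma(P0 -> gamma gamma) = 10 keV, B(P0 -> F) = 0.6,
# m_P0 = 100 GeV, E = 500 GeV, G(y0) = 1, lambda*lambda' = 1, L = 100 fb^-1
n = gamma_gamma_events(1e-5, 0.6, 100.0, 500.0, 1.0, 1.0, 100.0)
print(f"{n:.0f} events")  # a few thousand events for these made-up inputs
```

The yield scales linearly with the two-photon width times the branching fraction, which is why the rate measurement determines exactly that product.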
Since it is well-established that the SM $`h`$ can be discovered in this decay mode for $`40<m_h<2m_W`$, it is clear that $`P^0`$ discovery in the $`b\overline{b}`$ final state will be possible up to at least $`200\mathrm{GeV}`$, and down to $`0.1\sqrt{s}\simeq 50\mathrm{GeV}`$ (at $`\sqrt{s}=500\mathrm{GeV}`$), below which $`G(y)`$ starts to get small. Discovery at lower values of $`m_{P^0}`$ would require lowering the $`\sqrt{s}`$ of the machine. For the $`b\overline{b}`$ channel, the statistical significance $`S/\sqrt{B}`$ is plotted in Fig. 2.
Once the $`P^0`$ has been discovered, either in $`\gamma \gamma `$ collisions or elsewhere, one can configure the $`\gamma \gamma `$ collision set-up so that the luminosity is peaked at $`\sqrt{s}_{\gamma \gamma }\simeq m_{P^0}`$. A very precise measurement of the $`P^0`$ rate in the $`b\overline{b}`$ final state will then be possible if $`N_{TC}=4`$. For example, rescaling the SM Higgs ‘single-tag’ results of Table 1 of Ref. (which assumes a peaked luminosity distribution with a total of $`L=10\mathrm{fb}^{-1}`$) for the $`106\mathrm{GeV}\le m_{jj}\le 126\mathrm{GeV}`$ mass window to the case of the $`P^0`$, we obtain $`S\simeq 5640`$ compared to $`B\simeq 325`$, after angular, topological tagging and jet cuts. This implies a statistical error for measuring $`\mathrm{\Gamma }(P^0\to \gamma \gamma )B(P^0\to b\overline{b})`$ of $`<1.5\%`$. Systematic errors will probably dominate. Following the same procedure for $`N_{TC}=1`$, we find (at this mass) a statistical error for this measurement of $`<5\%`$. Of course, for lower masses the error will worsen. For $`N_{TC}=4`$, we estimate an error for the $`b\overline{b}`$ rate measurement still below $`10\%`$ even at a mass as low as $`m_{P^0}=20\mathrm{GeV}`$ (assuming the $`\sqrt{s}`$ of the machine is lowered sufficiently to focus on this mass without sacrificing luminosity). For $`N_{TC}=1`$, we estimate an error for the $`b\overline{b}`$ rate measurement of order $`15`$–$`20\%`$ for $`m_{P^0}\lesssim 60\mathrm{GeV}`$.
## 4 Conclusions
We have reviewed the production and study of the lightest pseudo-Nambu-Goldstone state $`P^0`$ of a typical technicolor model at future colliders, focusing mainly on $`e^+e^{-}`$. For $`N_{TC}=4`$, discovery of the $`P^0`$ in the $`gg\to P^0\to \gamma \gamma `$ mode at the LHC will almost certainly be possible unless its mass is either very small ($`<30\mathrm{GeV}`$?) or very large ($`>200\mathrm{GeV}`$?), where the question marks are related to uncertainties in LHC backgrounds in the inclusive $`\gamma \gamma `$ channel.
In contrast, an $`e^+e^{-}`$ collider, while able to discover the $`P^0`$ via $`e^+e^{-}\to \gamma P^0`$ so long as $`m_{P^0}`$ is not close to $`m_Z`$ and $`N_{TC}\ge 3`$, is unlikely (unless the TESLA $`500\mathrm{fb}^{-1}`$ per year option is built or $`N_{TC}`$ is very large) to be able to determine the rates for individual $`\gamma F`$ final states ($`F=b\overline{b},\tau ^+\tau ^{-},gg`$ being the dominant $`P^0`$ decay modes) with sufficient accuracy to yield more than very rough indications regarding the important parameters of the technicolor model.
The $`\gamma \gamma `$ option at an $`e^+e^{-}`$ collider is actually a more robust means for discovering the $`P^0`$ than direct operation in the $`e^+e^{-}`$ collision mode. For $`N_{TC}=4`$, $`\gamma \gamma \to P^0\to b\overline{b}`$ should yield an easily detectable $`P^0`$ signal for $`0.1<\frac{m_{P^0}}{\sqrt{s}}<0.7`$. Once $`m_{P^0}`$ is known, the $`\gamma \gamma `$ collision set-up can be re-configured to yield a luminosity distribution that is strongly peaked at $`\sqrt{s}_{\gamma \gamma }\simeq m_{P^0}`$ and, for much of the mass range $`m_{P^0}<200\mathrm{GeV}`$, a measurement of $`\mathrm{\Gamma }(P^0\to \gamma \gamma )B(P^0\to b\overline{b})`$ can be made with statistical accuracy in the $`<2\%`$ range.
A $`\mu ^+\mu ^{-}`$ collider would be crucial for detecting a light $`P^0`$ ($`m_{P^0}<30\mathrm{GeV}`$) and would play a very special role with regard to determining key properties of the $`P^0`$ . In particular, the $`P^0`$, being, in the class of models we have considered, composed of $`D\overline{D}`$ and $`E\overline{E}`$ techniquarks, will naturally have couplings to the down-type quarks and charged leptons of the SM. Thus, $`s`$-channel production ($`\mu ^+\mu ^{-}\to P^0`$) is predicted to have a substantial rate for $`\sqrt{s}\simeq m_{P^0}`$. Because the $`P^0`$ has a very narrow width, in order to maximize this rate it is important that one operates the $`\mu ^+\mu ^{-}`$ collider so as to have extremely small beam energy spread, $`R=0.003\%`$. The complete analysis of how the precision $`\mu ^+\mu ^{-}`$ measurements of various channel rates, together with LHC and $`e^+e^{-}`$ measurements, can determine (up to a discrete set of ambiguities) the parameters of the effective low-energy Yukawa Lagrangian that determine $`T_3=-1/2`$ fermion masses and their couplings to the $`P^0`$ can be found in .
## Acknowledgments
I would like to thank R. Casalbuoni, S. De Curtis, A. Deandrea, R. Gatto and J. Gunion for the fruitful and enjoyable collaboration on the topics covered here and R. Rückl for interesting discussions.
## References
| no-problem/9909/astro-ph9909031.html | ar5iv | text |
# The Role of Mixing in Astrophysics
## 1 Introduction
In astronomy, mixing is important in two widely different situations. First, there is the mixing of chemically discrete materials. Here we consider the interstellar medium, and sufficiently cold environments that solid particles (grains) may survive (Clayton (1982)). This area is exciting now due to direct experimental identification of presolar grains (see Anders & Zinner (1993); Huss, et al. (1994); Bernatowicz & Zinner (1997), and references therein).
The second situation involves the mixing of plasma which differs in its isotopic composition; this is vital to the evolution of stars, which produce isotopic and nuclear variation by thermonuclear burning (Clayton (1968); Arnett (1996)). Thermonuclear burning is analogous to chemical combustion in many ways, and may be as complex. Mixing becomes important in determining whether a flame can spread to new fuel, or is choked by the build-up of ashes. Mixing, even in small degree, can provide indications of ashes which can be used to diagnose burning conditions.
## 2 Mixing
We sometimes forget that stars are really very large. Let us make an order-of-magnitude estimate of diffusion time scales in a dense stellar plasma. It is the nuclei, not the electrons, which define the composition. The Coulomb cross section for pulling ions past each other is of order $`\sigma \sim 10^{-16}\mathrm{cm}^2`$. For a number density $`N\sim 10^{24}\mathrm{cm}^{-3}`$, this implies a mean free path $`\lambda =1/\sigma N\sim 10^{-8}\mathrm{cm}`$. For a particle velocity $`v_d\sim 10^8\mathrm{cm}/\mathrm{s}`$, this gives a diffusion time $`\tau _d=(\mathrm{\Delta }r)^2/\lambda v_d\sim (\mathrm{\Delta }r)^2\mathrm{s}\mathrm{cm}^{-2}`$. For a linear dimension of stellar size, $`\mathrm{\Delta }r=10^{11}\mathrm{cm}`$, $`\tau _d\approx 3\times 10^{14}\mathrm{y}`$, or 3,000 Hubble times! While one may quibble about the exact numbers used, it is clear that pure diffusion is ineffective for mixing stars, except for extreme cases involving extremely long time scales and steep gradients.
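The estimate above can be reproduced in a few lines, using exactly the numbers quoted in the text:

```python
SECONDS_PER_YEAR = 3.156e7

sigma = 1.0e-16   # cm^2   Coulomb cross section for ion-ion scattering
n_ion = 1.0e24    # cm^-3  ion number density
v_d = 1.0e8       # cm/s   thermal ion velocity
delta_r = 1.0e11  # cm     stellar length scale

mfp = 1.0 / (sigma * n_ion)        # mean free path, ~1e-8 cm
tau_d = delta_r ** 2 / (mfp * v_d) # random-walk diffusion time, seconds
tau_d_yr = tau_d / SECONDS_PER_YEAR

print(f"mean free path = {mfp:.1e} cm")
print(f"diffusion time = {tau_d:.1e} s = {tau_d_yr:.1e} yr")
# ~3e14 yr, i.e. thousands of Hubble times: diffusion alone cannot mix a star
```

Note how the answer scales as the square of the length scale: this quadratic dependence is what makes the reduction of $`\mathrm{\Delta }r`$ by stirring so effective.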
Actually, we all know from common experience—such as stirring cream into coffee (tea)—that this discussion is incomplete. To diffusion must be added advection, or stirring. Stars may be stirred too. For example, rotation may induce currents, as may accretion and perturbations from a binary companion. However, the prime mechanism for stirring used in stellar evolutionary calculations is thermally induced convection. The idea is that convective motions will stir the heterogeneous matter, reducing the typical length scale $`\mathrm{\Delta }r`$ to a value small enough that diffusion can ensure microscopic mixing. For our stellar example above, this would require a reduction in scale of $`(\lambda /\mathrm{\Delta }r)^{1/2}\sim 10^{-8}`$. Convection is not perfectly efficient, so that the actual mixing time would still be finite. Given that such a limit exists, we must examine rapid evolutionary stages to see if microscopic mixing is a valid approximation. For presupernovae, the approximation is almost certainly not correct, so that these stars are not layered in uniform spherical shells, as conventionally assumed, but are heterogeneous in angle as well as radius.
## 3 What The Light Curves Tell Us
One of the most noticed aspects of SN1987A was the fact that the progenitor was not a red supergiant, as most stellar evolutionary calculations predicted, but had a smaller radius, $`r\simeq 3\times 10^{12}\mathrm{cm}`$. The nature of the HR diagram for massive stars in the LMC was already an old problem (El Eid, et al. 1987, Maeder 1987, Renzini 1987, Truran & Weiss 1987). In retrospect, this expectation of a red supergiant was due to the implicit assumption that semiconvective mixing was instantaneous, and that the Schwarzschild gradient was the one to use (this is more reasonable for lower mass stars, which evolve more slowly; see Chapter 7 in Arnett 1996). As luck would have it, my formulation of the stellar evolutionary equations gave the Ledoux criterion as the default, and the progenitor was a “blue” supergiant when the core collapsed (Arnett 1987). An example is shown in Figure 2, with the error box for the observed progenitor, Sk -69 202. This type of behavior is robust in the sense that, as long as the criterion is similar to the Ledoux one, LMC star models around 20 solar masses will loop back from the red giant branch when they undergo core carbon burning. The actual physical nature of this mixing process (presumably due to the double diffusion of ions and heat) does not seem well understood, at least in this context. The “blue” nature of the progenitor could, of course, be due to some other cause, such as interaction or merger with a binary companion, but no such complex scenario is required for this position in the HR diagram.
After the euphoria of realizing that SN1987A actually was a close supernova, I happily simulated the early light curve by running shocks through stellar models of appropriate radius and mass. This worked fine for the first twenty days of data (Arnett 1987), but then the agreement between the observed and computed light curves degraded, as shown in Figure 1. Further, synthesis of the spectra was no longer successful at this epoch (Lucy 1987). This was followed by the Bochum event in the evolution of the spectra (Hanuschik & Dachs 1987), and later by the early emergence of x-rays (Donati, et al. 1987, Sunyaev, et al. 1987) and the detection of $`\gamma `$-ray lines (Matz et al. 1987). Something more was happening than was implied by the spherically symmetric models.
While the powerful tools of radiation hydrodynamics were failing, a far simpler tool succeeded: after the first two weeks in which the effects of shock break-out were still felt, an analytic model (see Arnett 1996 and references therein) reproduced the observed light curve much more accurately. To obtain the analytic solution, it was necessary to assume that the opacity was relatively uniform and the radioactive Ni, while centrally concentrated, was distributed half-way to the surface! It seemed that some sort of mixing had occurred after the $`{}_{}{}^{56}\mathrm{Ni}`$ was synthesized in the explosion.
Various amounts of arbitrary mixing were added to the simulations to obtain a match to the observed light curve (ABKW 1989). It is easy to assume mixing as an ad hoc process, but considerably more demanding to make a plausible simulation of how the mixing actually occurs. Figure 1 illustrates some options. The observational data for the UVOIR light curve are shown as crosses. The analytic model fits from about $`t=10^6\mathrm{sec}`$ onward (see Figure 13.8 in Arnett 1996). A one-dimensional numerical model without mixing is labeled “unmixed.” As the photosphere moves into the burned layers, wild variations in luminosity occur as the ionization state, and hence the dominant opacity (which is due to Thomson scattering from free electrons), varies. More careful treatment of spherical effects will smooth this only slightly (over a time $`r/c\sim 3\times 10^4\mathrm{seconds}`$). If neighboring zones which are convectively unstable ($`\nabla \rho \times \nabla P<0`$) are instantaneously mixed, the curve labeled “locally mixed model” is obtained. This maximal local mixing case is inadequate to explain the result. This implies that multidimensional effects—in particular, advection—are in operation. The mixing is not defined solely by local properties, but rather is deeply nonlinear. Subsequent data from much later stages supports this conclusion (Wooden 1997, McCray 1997).
## 4 Mixing by Advection and/or Diffusion
Stars are thermonuclear reactors, so that the change of abundances both drives the evolution and provides a diagnostic of that process. The rate of change in nuclear abundance $`Y_i`$ is usually assumed to be governed by the set of equations,
$$dY_i/dt=-Y_iY_jR_{ij}-\dots +Y_kY_lR_{kl}+\dots $$
(1)
in which all participating species (denoted by indices $`i`$, $`j`$, $`k`$, $`l`$) are included. Only binary reactions are explicitly shown, for brevity. The nuclear reaction rates are denoted $`R_{ij}`$, with the indices $`i,j`$ running through the corresponding species. See Arnett (1996) for detail.
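Systems of the form of Eq. 1 can be integrated directly. The sketch below uses a made-up two-species network (a single reaction A + A → B with an arbitrary rate constant), not real nuclear rates, simply to illustrate the sign structure: destruction terms negative, production terms positive, with nucleon number conserved.

```python
def network_rhs(y):
    """dY/dt for a toy binary network A + A -> B (Eq. 1 sign pattern):
    two A nuclei destroyed and one B nucleus produced per reaction."""
    y_a, y_b = y
    r_aa = 1.0                      # illustrative rate constant
    return (-2.0 * y_a * y_a * r_aa,  # destruction of A (negative)
            +1.0 * y_a * y_a * r_aa)  # production of B (positive)

def rk4_step(y, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = network_rhs(y)
    k2 = network_rhs(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k1)))
    k3 = network_rhs(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k2)))
    k4 = network_rhs(tuple(yi + dt * ki for yi, ki in zip(y, k3)))
    return tuple(yi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

y = (1.0, 0.0)          # start with pure A
for _ in range(1000):   # integrate to t = 10
    y = rk4_step(y, 0.01)
# If B has twice the mass number of A, then Y_A + 2*Y_B is conserved.
print(y)
```

The analytic solution of this toy problem is $`Y_A(t)=1/(1+2t)`$, so the integrator can be checked against it; real networks differ only in having many more species and tabulated, temperature-dependent rates.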
If we are not dealing with homogeneous matter, complications arise (Arnett 1997). First, gradients are not zero, so that we have variations in both space and time. The ordinary differential equations become partial differential equations. As seen from a fixed frame, with material flowing past, the operator $`dY_i/dt`$ becomes $`\partial Y_i/\partial t+𝐯\cdot \nabla Y_i`$, where $`𝐯`$ is the fluid velocity. The new second term is advection. This gives,
$$\partial Y_i/\partial t+𝐯\cdot \nabla Y_i=-Y_iY_jR_{ij}-\dots +Y_kY_lR_{kl}+\dots ,$$
(2)
which couples the abundance distribution to the hydrodynamic flow. Energy release or absorption by nuclear burning further affect the flow. The system may now be heterogeneous.
Second, if gradients in composition (e.g., in $`Y_i`$) are present, then a new term is generated when we move to the fluid frame. The velocities of nuclei are split into a symmetric part around the center of momentum (characterized by a temperature $`T`$), and the fluid velocity $`𝐯`$. With a composition gradient, the flux of composition is nonzero, unlike the flux of momentum in the comoving frame. This gives rise to source terms due to diffusion (Landau & Lifshitz 1959), which in the continuity equation for species have the form $`-\frac{1}{\rho }\nabla \cdot 𝐅_𝐢`$, where the composition flux is $`𝐅_𝐢=\rho Y_i𝐯`$. Thus,
$$\partial Y_i/\partial t+𝐯\cdot \nabla Y_i=-Y_iY_jR_{ij}-\dots +Y_kY_lR_{kl}+\dots -\frac{1}{\rho }\nabla \cdot 𝐅_𝐢.$$
(3)
If the composition gradients are small, we approximate $`𝐅_𝐢\simeq -\rho D\nabla Y_i`$; this is the usual diffusion flux for composition, with the diffusion coefficient $`D\approx \lambda v_d/3`$, where $`\lambda `$ is the diffusion mean free path and $`v_d`$ the mean velocity of diffusing particles relative to the fluid frame. Notice that the diffusion and advection terms may act on strongly differing length scales in this equation.
We may recover the original simplicity of Eq. 1 if either (1) the region of interest (the computational “zone”) is really homogeneous, or (2) it is very well mixed (large diffusion coefficient $`D`$). In the limit of many mean free paths taken, diffusion approximates a random walk process. Because of the benign numerical properties of the diffusion operator, stellar evolutionists have often used some variety of diffusion to model convective mixing, assuming that $`\lambda `$ is approximately a mixing length $`\ell `$, and many paths were taken. Note that this involves the singular idea that scales of order $`\lambda \sim 10^{-8}\mathrm{cm}`$ are equivalent to those of order $`\ell \sim 10^{+8}\mathrm{cm}`$ or more. Ignoring the advection term, this gives,
$$\partial Y_i/\partial t\simeq -Y_iY_jR_{ij}-\dots +Y_kY_lR_{kl}+\dots -\frac{1}{\rho }\nabla \cdot 𝐅_𝐢,$$
(4)
which is an approximation to Eq. 3, and is commonly used in stellar evolutionary codes (e.g., Woosley & Weaver 1995). It ignores advection, which is the dominant mode of macroscopic mixing.
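As a concrete sketch of the diffusive-mixing picture of Eq. 4, the snippet below relaxes a step-function composition profile with an explicit 1-D scheme. The grid, diffusion coefficient, and time step are arbitrary illustrative choices, not values from any stellar model.

```python
def diffuse(y, d_coef, dx, dt, steps):
    """Explicit 1-D diffusion of a composition profile y:
    dY/dt = D * d2Y/dx2, with zero-flux (reflecting) boundaries."""
    y = list(y)
    alpha = d_coef * dt / dx ** 2
    assert alpha <= 0.5, "explicit scheme unstable"  # stability limit
    for _ in range(steps):
        padded = [y[0]] + y + [y[-1]]                # reflecting walls
        y = [yi + alpha * (padded[i] - 2.0 * yi + padded[i + 2])
             for i, yi in enumerate(y)]
    return y

# Sharp interface between "fuel" (Y = 1) and "ashes" (Y = 0):
profile = [1.0] * 20 + [0.0] * 20
mixed = diffuse(profile, d_coef=1.0, dx=1.0, dt=0.25, steps=400)
# Total composition is conserved; the interface spreads toward uniformity.
```

With reflecting boundaries the scheme conserves the total composition exactly; the benign numerical behavior on display here is precisely why the diffusion operator is so popular as a stand-in for convective mixing, even though it misrepresents advection.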
## 5 Applications to Stellar Hydrodynamics
In discussion of stellar evolution, one encounters the topics of rotation, convection, pulsation, mass loss, micro-turbulence, sound waves, shocks, and instabilities—to name a few—which are all just hydrodynamics. However, direct simulation of stellar hydrodynamics is limited by causality. In analogy to light cones in relativity, in hydrodynamics one may define space-time regions in which communication can occur by the motion of sound waves. To correctly simulate a wave traveling through a grid, the size of the time step must be small enough so that sound waves cannot “jump” zones. Thus the simulation is restricted to short time steps—an awkward problem if stellar evolution is desired. While simulations of the solar convection zone are feasible, the simulation time would be of order hours instead of the billions of years required for hydrogen burning. For the latter, a stellar evolution code is used, which damps out the hydrodynamic motion, obviating the need for the time step restriction. Any presumed hydrodynamic motion is then replaced by an algorithm (such as adiabatic structure and complete mixing in formally convective regions). Thus, stellar evolution deals with the long, slow phenomena, and stellar hydrodynamics has dealt with the short term.
However, the stages of evolution prior to a supernova explosion are fast and eventful. Here direct simulation is feasible (Bazan & Arnett (1998)). A key region for nucleosynthesis is the oxygen burning shell in a presupernova star. Besides producing nuclei from Si through Fe prior to and during the explosive event, it is the site at which the radioactive $`{}_{}{}^{56}\mathrm{Ni}`$ is made and is mixed. The conventional picture of this region relies upon the notion of thermal balance between nuclear heating and neutrino cooling in the context of complete microscopic mixing by convective motions.
This is usually treated by the mixing length scenario for convection, which assumes statistical (well developed) turbulence, random walk of convective blobs approximated by diffusion, subsonic motions, and almost adiabatic flow. These approximations are further constrained by a simplistic treatment of the boundaries of the convective region.
The time scales for the oxygen burning shell are unusual. The evolutionary time is $`\tau _{evol}\approx 4\times 10^3\mathrm{s}`$. The convective “turnover” time is $`\tau _{conv}\approx \mathrm{\Delta }r/v_{conv}\approx \tau _{evol}/10`$, while the sound travel time across the convective region is $`\tau _{sound}\approx \mathrm{\Delta }r/v_{sound}\approx \tau _{conv}/100`$. The burning time is $`\tau _{burn}=E/\epsilon \sim \tau _{conv}`$. Obviously the approximations of subsonic flow, well developed turbulence, complete microscopic mixing, and almost adiabatic flow are suspect.
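The quoted timescale hierarchy can be tabulated directly; the point is how few convective turnovers fit into the evolutionary time:

```python
tau_evol = 4.0e3              # s: evolutionary time of the O-burning shell
tau_conv = tau_evol / 10.0    # s: convective "turnover" time, ~ dr / v_conv
tau_sound = tau_conv / 100.0  # s: sound crossing time, ~ dr / v_sound

turnovers = tau_evol / tau_conv  # only ~10 turnovers in the whole stage
mach_mlt = tau_sound / tau_conv  # v_conv / v_sound implied by these ratios

print(f"convective turnovers during shell burning: {turnovers:.0f}")
print(f"Mach number implied by the timescale ratio: {mach_mlt:.2f}")
```

With only about ten turnovers, the statistical-turbulence and complete-mixing assumptions are hard to justify; note also that the Mach number implied by these mixing-length-style ratios is only about 0.01, far below the tens of percent found in the multidimensional simulations discussed in the text.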
These time scales are rapid enough to make the oxygen shell a feasible target for direct numerical simulations, and an extensive discussion has appeared (Bazan & Arnett (1998), and Asida & Arnett, in preparation). The two dimensional simulations show qualitative differences from the previous one dimensional ones. The oxygen shell is not well mixed, but heterogeneous in coordinates $`\theta `$ and $`\varphi `$ as well as $`r`$. The burning is episodic, localized in time and space, occurring in flashes rather than as a steady flame. The burning is strongly coupled to hydrodynamic motion of individual blobs, but the blobs are more loosely coupled to each other.
Acoustic and kinetic luminosities are not negligible, contrary to the assumptions of mixing length theory. The flow is only mildly subsonic, with Mach numbers of tens of percent. This gives nonspherical perturbations in density and temperature of several percent, especially at the boundaries of the convective region.
At the edges of convective regions, the convective motions couple to gravity waves, giving a slow mixing beyond the formally unstable region. The convective regions are not so well separated as in the one dimensional simulations; “rogue blobs” cross formally stable regions. A carbon rich blob became entrained in the oxygen convective shell, and underwent a violent flash, briefly out-shining the oxygen shell itself by a factor of 100. Significant variations in neutron excess occur throughout the oxygen shell. Because of the localized and episodic burning, the typical burning conditions are systematically hotter than in one dimensional simulations, sufficiently so that details of the nucleosynthesis yields will be affected.
The two dimensional simulations are computationally demanding. Our radial zoning is comparable to that used in one dimensional simulations, to which we add several hundred angular zones, giving a computational demand several hundred times higher. This has limited us to about a quarter of the final oxygen shell burning in a SN1987A progenitor model. Given the dramatic differences from one dimensional simulations, it is important to pursue the evolutionary effects to see exactly how nucleosynthesis yields, presupernova structures, collapsing core masses, entropies, and neutron excesses will be changed. It may be that hydrostatic and thermal equilibrium on average, and the temperature sensitivity of the different burning stages, taken together, tend to give a rough layering in composition, even if the details of how this happens are quite different.
## 6 Toward a Predictive Theory: Tests with the NOVA Laser
Ultimately simulations must be well resolved in three spatial dimensions. One of the great assets of computers is their ability to represent complex geometries. If we can implement realistic representations of the essential physics, then simulations should become tools to predict—not “postdict”—phenomena. An essential step toward that goal is the testing of computer simulations against reality in the form of experiment (Remington, Weber, Marinak, et al. (1995)). This is a venue in which we can alter conditions (unlike astronomical phenomena), and thereby understand the reasons for particular results. Experiments are intrinsically three dimensional, with two dimensional symmetry available with some effort, so that they provide a convenient way to assess the effects of dimensionality.
For Rayleigh-Taylor instabilities, the NOVA experiments not only sample temperatures similar to those in the helium layer of a supernova, but hydrodynamically scale to the supernova as well (Kane, et al. (1997)). In the same sense that aerodynamic wind tunnels have been used in aircraft design, these high energy density laser experiments allow us to precisely reproduce a scaled version of part of a supernova.
The NOVA laser is physically imposing. The building is larger in area than an American football field; the lasers concentrate their beams on a target about the size of a BB (or a small ball bearing). This enormous change in scale brings home just how high these energy densities are. Preliminary results show that the astrophysics code (PROMETHEUS) and the standard inertial confinement fusion code (CALE) both give qualitative agreement with the experiment. For example, the velocities of the spikes and bubbles are both in agreement with experiment, and with analytic theory which is applicable in this experimental configuration (Kane, et al. (1997)). The two codes give similar, but not identical, results. These differences will require new, more precise experiments to determine which is most nearly correct.
We are grateful to Grant Bazan, Jave Kane, and Bruce Remington for help and collaboration.
| no-problem/9909/cond-mat9909278.html | ar5iv | text |
# Many-Spin Effects and Tunneling Properties of Magnetic Molecules
## Abstract
Spin tunneling in molecular magnets has attracted much attention; however, theoretical considerations of this phenomenon have so far not taken into account the many-spin nature of molecular magnets. We present, to our knowledge, the first successful attempt at a realistic calculation of tunneling splittings for Mn<sub>12</sub> molecules, thus achieving a quantitatively accurate many-spin description of a real molecular magnet in the energy interval ranging from about 100 K down to 10<sup>-12</sup> K. Comparison with the results of the standard single-spin model shows that many-spin effects affect the tunneling splittings considerably. The values of the ground state splitting given by the single-spin and many-spin models differ from each other by a factor of five.
Progress in coordination chemistry has recently led to the synthesis of a completely new class of magnetic materials of nanometer size, molecular magnets, which has drawn the attention of physicists as well as chemists . In particular, these materials have proven to be very suitable for the study of mesoscopic quantum tunneling effects in magnetic materials. A number of impressive experimental results have been obtained recently, such as thermally-induced tunneling , ground state - to - ground state tunneling and topological phase effects in spin tunneling .
Among others, the molecular magnet $`\mathrm{Mn}_{12}\mathrm{O}_{12}(\mathrm{CH}_3\mathrm{COO})_{16}(\mathrm{H}_2\mathrm{O})_4`$ (below referred to as Mn<sub>12</sub>) constitutes a subject of great interest. The effect of resonant magnetization tunneling was first observed and studied in detailed experiments on Mn<sub>12</sub> , and, at present, a substantial amount of reliable experimental data has been collected. However, progress in understanding the properties of molecular magnets is greatly hampered by the lack of a comprehensive theoretical description. Recent experiments show that the conventional single-spin description of magnetic molecules, including Mn<sub>12</sub>, does not accurately describe their properties, and the constituent many-spin nature of these molecules should be taken into account. This can be particularly important for understanding tunneling properties, for which it has been demonstrated that the single-spin description of a many-spin system can give totally misleading results for tunneling splittings, differing by three orders of magnitude from the exact values.
Therefore, the development of an adequate many-spin description of molecular magnets constitutes an important theoretical problem, which becomes especially complicated if the tunneling properties are to be studied. For example, the magnetic cluster of the Mn<sub>12</sub> molecule consists of eight Mn<sup>3+</sup> ions with spin 2 and four Mn<sup>4+</sup> ions with spin 3/2 (see Fig. 1), coupled by anisotropic interactions, so that the Hilbert space of the corresponding spin Hamiltonian consists of 10<sup>8</sup> levels. Furthermore, the tunneling splittings in Mn<sub>12</sub> are very small, of the order of 10<sup>-12</sup> K. Obviously, the brute-force direct calculation of tiny tunneling splittings, even for several low-lying states, is far beyond the capabilities of modern computers. Although several methods have been developed to cope with this problem , we are not aware of any calculations of tunneling splittings made for realistic models of complex molecular magnets, so that the possibility of such a simulation has not been demonstrated yet.
In this paper, we present, to our knowledge, the first successful attempt to calculate the tunneling splitting for a realistic model of Mn<sub>12</sub> , thus explicitly showing the possibility of calculating tunneling splittings in rather complex molecular magnets. As a result, we achieve a quantitatively accurate many-spin description of a real system in the energy interval ranging from about 100 K down to 10<sup>-12</sup> K, covering 14 orders of magnitude.
The 8-spin model, which forms the basis for the present calculations, has been described and discussed thoroughly in Ref. . The central idea of the model is to use the natural hierarchy of interactions present in Mn<sub>12</sub>. Namely, the antiferromagnetic exchange interactions $`J_1\simeq 220`$ K (see Fig. 1) between the Mn<sup>3+</sup> and Mn<sup>4+</sup> ions are significantly stronger than the other isotropic exchange interactions ($`J_2`$, $`J_3`$ and $`J_4`$) and the anisotropic interactions, so at low temperatures they can be considered as absolutely rigid. In other words, the spins of the corresponding ions can be considered as forming dimers with a total rigid spin 1/2. The validity of this model has been supported by megagauss-field experiments : the states of dimers with spin higher than 1/2 (excitations of the dimers) come into play when the external magnetic field is about 400 T. The same conclusion can be drawn from the dependence of the magnetic susceptibility of Mn<sub>12</sub> versus temperature (see, e.g. Ref. ).
After consideration of different possible interactions, the following spin Hamiltonian has been proposed in Ref. to describe Mn<sub>12</sub> molecules:
$`ℋ`$ $`=`$ $`J\left({\displaystyle \sum _i}𝐬_i\right)^2-J^{\prime }{\displaystyle \sum _{k,l}}𝐬_k\cdot 𝐒_l-K_z{\displaystyle \sum _{i=1}^4}\left(S_i^z\right)^2`$ (2)
$`+{\displaystyle \sum _{i,j}}𝐃^{i,j}\cdot [𝐬_i\times 𝐒_j].`$
Here, $`𝐒_i`$ and $`𝐬_i`$ are the spin operators of the large spins $`S=2`$ and the small dimer spins $`s=1/2`$, respectively, where the subscript $`i`$ denotes the index of the spin. Isotropic exchange between the small spins and the large spins is described by the parameter $`J^{\prime }`$, whereas $`J`$ denotes the exchange of the small spins with each other. The third term describes the single-ion uniaxial anisotropy of the large spins. Finally, the fourth term describes the antisymmetric Dzyaloshinsky-Morya (DM) interactions in Mn<sub>12</sub>, and $`𝐃^{i,j}`$ is the Dzyaloshinsky-Morya vector describing the DM-interaction between the $`i`$-th small spin and the $`j`$-th large spin. The molecules of Mn<sub>12</sub> possess the fourfold rotary-reflection axis (the symmetry $`C_4`$), which imposes restrictions on the DM-vectors $`𝐃^{i,j}`$, so that the Dzyaloshinsky-Morya interactions can be described by only three parameters $`D_x\equiv D_x^{1,8}`$, $`D_y\equiv D_y^{1,8}`$, and $`D_z\equiv D_z^{1,8}`$.
The model discussed above has been demonstrated to describe the energy spectrum correctly up to about 100 K . It has allowed the explanation of a wide range of experimental data, such as the unexpected splitting of neutron scattering peaks , the results of EPR experiments and measurements of magnetic susceptibility versus temperature . This model has provided a quantitative description of the response of Mn<sub>12</sub> molecules to a transversal magnetic field (an external field applied perpendicular to the easy axis of the molecule). The following set of parameters has been determined for the spin Hamiltonian (2):
$`J=3.6\text{ K},J^{}=84\text{ K},K_z=5.69\text{ K}`$ (3)
$`D_x=25.3\text{ K},D_y=0.6\text{ K},D_z=2.0\text{ K}.`$ (4)
Having this model at hand, can we describe correctly the tunneling properties of Mn<sub>12</sub>?
In the Hamiltonian (2), only the fourth term, representing Dzyaloshinsky-Morya interactions, allows for tunneling. Indeed, the first two terms (describing isotropic exchange interactions) conserve both the total spin of the molecule $`𝒮`$ and its projection $`𝒮_z`$ (the $`z`$-axis is chosen to coincide with the $`C_4`$ axis of the molecule). Thus, these terms cannot lead to tunneling between the levels with different $`𝒮_z`$. The third term, representing an easy-axis anisotropy, also conserves this quantity. Therefore, even though the levels $`|𝒮_z=+M`$ and $`|𝒮_z=-M`$ are degenerate, in the absence of the Dzyaloshinsky-Morya term no tunneling between them can appear. But the DM-interaction mixes levels with different $`𝒮_z`$, thus giving rise to tunneling of the molecule’s spin. Below, for simplicity, we will denote the energy levels by the value of $`𝒮_z`$. Although it is not an exact quantum number in the system under consideration, we can formally consider the DM-interaction as a perturbation, and use perturbation-theory terminology.
The first question to pose concerns the precision of the level-splitting calculation. The parameters of the Hamiltonian are determined with some finite precision, and a small error (say, of the order of several Kelvin) affects the level energies by an amount of the order of a Kelvin, which is much larger than the very small tunneling splittings (of order 10<sup>-12</sup> K). Does this deprive the calculated results of all meaning? To answer this question, we note that the levels $`|𝒮_z=+M`$ and $`|𝒮_z=-M`$ are degenerate due to exact symmetry properties of the spin Hamiltonian and, in the absence of the DM-term, would remain degenerate at any values of the parameters. Therefore, the tunneling splittings $`\mathrm{\Delta }E_{+M,-M}`$ are governed only by the strength of the interaction which breaks the symmetry, i.e., the DM-interaction. Thus, if the parameters of the Hamiltonian are determined with a reasonably small relative error, and if the simulation is done with sufficient accuracy, then the relative error of the level splittings will also be small. The results of our calculations confirm this conclusion.
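The robustness argument above can be illustrated by a minimal two-level sketch (purely illustrative; the numbers below are not from the Mn<sub>12</sub> calculation): a degenerate doublet standing in for $`|𝒮_z=+M`$ and $`|𝒮_z=-M`$, coupled only by a small symmetry-breaking off-diagonal term standing in for the DM-interaction. The splitting is set entirely by the coupling, so shifting the (equal) diagonal energies — the analog of a parameter error of a few Kelvin — leaves it unchanged:

```python
import math

def doublet_splitting(a, b, d):
    """Energy splitting of the real symmetric 2x2 Hamiltonian
    [[a, d], [d, b]]: eigenvalues are (a+b)/2 +/- hypot((a-b)/2, d)."""
    return 2 * math.hypot((a - b) / 2, d)

# Degenerate doublet (a = b): the splitting is 2|d|, fixed entirely by the
# symmetry-breaking coupling d, no matter what the common energy is.
E0, delta = 100.0, 1e-12  # energies ~100 K, DM-like coupling ~1e-12 K
print(doublet_splitting(E0, E0, delta))              # 2e-12
print(doublet_splitting(E0 + 3.0, E0 + 3.0, delta))  # still 2e-12
```

A diagonal shift only moves both eigenvalues together; the splitting is untouched, which is why a parameter error of a few Kelvin does not spoil a splitting of order 10<sup>-12</sup> K.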
In the calculation scheme we employed , the most significant error comes from the fact that only a fraction of all the levels produced by the spin Hamiltonian (2) is used in the calculations, and the influence of the higher-energy levels is neglected. Therefore, to achieve reasonable accuracy, a sufficiently large number of levels should be taken into account. We studied the dependence of the resulting tunneling splittings $`\mathrm{\Delta }E_{+M,-M}`$ for different pairs of degenerate levels $`|𝒮_z=+M`$ and $`|𝒮_z=-M`$ on the number of lowest levels actually used in the calculations. The results are shown in Fig. 2. It can be seen that reasonable accuracy can be achieved by taking into account about 700 of the lowest levels.
Final results for the tunneling splittings $`\mathrm{\Delta }E_{+M,-M}`$ are presented in the Table, where the values obtained in the many-spin calculations are compared with those given by the single-spin model. It is important to mention that the tunneling splittings should be zero for the levels with odd values of $`M`$ (i.e., $`|𝒮_z=\pm 9`$, $`|𝒮_z=\pm 7`$, etc.) because the fourfold symmetry of the molecule imposes certain restrictions on the symmetry of the spin Hamiltonian and makes some matrix elements vanish. In the single-spin model of Mn<sub>12</sub> this property of the spin Hamiltonian is introduced explicitly, by retaining only those operators which possess the required fourfold symmetry. In the many-spin simulations, we obtain the same result independently: the energies of the levels $`|𝒮_z=\pm M`$ for odd $`M`$ coincide to an accuracy of order 10<sup>-28</sup> K, i.e., of the order of the computational error. This value is much less than the smallest of the splittings, $`\mathrm{\Delta }E_{+10,-10}=2.03\times 10^{-12}`$ K.
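The vanishing of the odd-$`M`$ splittings admits a simple combinatorial check: a spin Hamiltonian respecting the fourfold symmetry can contain only off-diagonal terms that change $`𝒮_z`$ by multiples of 4, so $`|𝒮_z=+M`$ and $`|𝒮_z=-M`$ can be connected, in any order of perturbation theory, only if repeated steps of $`\pm 4`$ lead from $`+M`$ to $`-M`$ — i.e., only for even $`M`$. The short sketch below (an illustration of the symmetry argument, not part of the actual calculation) makes this explicit for $`S=10`$:

```python
from collections import deque

def doublet_coupled(S, M, step=4):
    """True if |S_z=+M> and |S_z=-M> are connected by repeated S_z
    changes of +/-step within -S <= S_z <= S (any perturbative order)."""
    if M == 0:
        return True
    seen, queue = {M}, deque([M])
    while queue:
        m = queue.popleft()
        if m == -M:
            return True
        for nxt in (m + step, m - step):
            if -S <= nxt <= S and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

S = 10
print([M for M in range(1, S + 1) if doublet_coupled(S, M)])
# -> [2, 4, 6, 8, 10]: only even-M doublets can split, as stated above
```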
Analyzing the results presented in the Table, we see that the single-spin and many-spin models give rather close results for the higher energy levels $`M=\pm 2`$ and $`M=\pm 4`$. It has been shown that these values of the tunneling splittings make it possible to describe the currently available experimental data with good precision. On the other hand, for the ground-state levels $`M=\pm 10`$ the single-spin model predicts a tunneling splitting five times larger than the many-spin model. The possibility of such a difference has been predicted before , and, to our knowledge, Mn<sub>12</sub> is the first example of a real system where this difference has been found. Unfortunately, reliable experimental data concerning ground-state-to-ground-state tunneling in Mn<sub>12</sub> are absent, so the single-spin and many-spin models cannot be distinguished on the basis of experimental results. Nevertheless, a many-spin treatment can be important for a correct quantitative description of ground-state-to-ground-state tunneling in other molecular magnets, such as Fe<sub>8</sub>. Many-spin effects may be one possible explanation for the disagreement found in Ref. between the experimental results and the predictions of the single-spin model.
Summarizing, we evaluated the tunneling splittings in Mn<sub>12</sub> on the basis of a realistic many-spin model. We demonstrated that even tiny energy splittings, of order 10<sup>-12</sup> K, can be calculated with reasonable precision for a rather complex molecular magnet. The results obtained have been compared with the predictions of the single-spin model. We found that both models give close values for the splittings of the higher levels. Thus, the models cannot be distinguished on the basis of the experimental results available now, which provide information only about the upper-level splittings ($`M=\pm 4`$ and $`M=\pm 2`$). However, the ground-state splittings calculated using these two models differ by a factor of five. This difference may also be important for other molecular magnets, such as Fe<sub>8</sub>, where the ground-state splitting given by the single-spin model is three times smaller than the value obtained in experiment.
This work was carried out at the Ames Laboratory, which is operated for the U. S. Department of Energy by Iowa State University under Contract No. W-7405-82 and was supported by the Director for Energy Research, Office of Basic Energy Sciences of the U. S. Department of Energy.
arXiv: cond-mat/9909452
# Parallel magnetic field induced giant magnetoresistance in low density quasi-two dimensional layers
\[
## Abstract
We provide a possible theoretical explanation for the recently observed giant positive magnetoresistance in high mobility low density quasi-two dimensional electron and hole systems. Our explanation is based on the strong coupling of the parallel field to the orbital motion arising from the finite layer thickness and the large Fermi wavelength of the quasi-two dimensional system at low carrier densities.
PACS Number : 73.50.Jt; 71.30.+h, 73.50.Bk; 73.40.Hm; 73.40.Qv
\]
Recently an intriguing set of low temperature transport properties of low density, high mobility two dimensional (2D) electron (Si inversion layers) and hole (p-type GaAs heterostructures) systems, collectively referred to as the 2D metal-insulator-transition (M-I-T) phenomenon, has attracted a great deal of experimental and theoretical attention. Among the puzzling and interesting aspects of the 2D M-I-T observations is the strong parallel (to the 2D layer) magnetic field (B) dependence of the measured low temperature (T) resistivity $`\rho `$ of the apparent metallic phase at electron (we will refer to both electrons and holes by the generic name “electron” in this paper) densities ($`n`$) above the nominal M-I-T transition density ($`n_c`$). The unexpectedly strong parallel field (B in this paper refers exclusively to a 2D in-plane magnetic field with zero perpendicular field) dependence of the 2D resistivity is suppressed deep into the metallic phase ($`n\gg n_c`$), and the system behaves more like a “conventional metal”. The effect of B is dramatic: modest B fields (5–10 T) spectacularly increase the measured resistance by up to two orders of magnitude at low $`n`$, independent of whether the system is in the nominal metallic ($`n>n_c`$) or insulating ($`n<n_c`$) phase — in Si inversion layers $`\rho (B)`$ eventually seems to “saturate” at very high resistances of the order of $`\rho \sim 10^6\mathrm{\Omega }`$ (in the nominal “metallic” phase) for B ≳ 10 T, whereas in p-GaAs systems no such resistivity saturation has yet been reported. In addition to this giant magnetoresistance, the temperature dependence of $`\rho (T)`$ is also affected by B — in particular, increasing B modifies the sign of $`d\rho /dT`$, which is positive (“metallic” behavior) at low B and negative (“insulating” behavior) at high B.
In this Letter we develop a possible theoretical explanation for the observed parallel magnetic field dependence of the 2D resistivity at low T and $`n`$. Our theoretical explanation for the observed giant magnetoresistance and the associated “suppression” of the metallic phase is compellingly simple conceptually, and is robust in the sense that it is independent of whether the conducting state is a true $`T=0`$ novel 2D metallic phase or just an effective “metal” (at finite T) by virtue of the localization length being exceedingly large. Our calculated B-dependent resistivity is in good qualitative agreement with the experimental observations , particularly in the GaAs-based 2D systems, and can be systematically improved in the future. The agreement between our theory and experimental results in Si inversion layers is not particularly good, and we mostly concentrate on GaAs systems in this paper. In this Letter we use a minimal model implementing the basic theoretical idea to obtain the generic trends in $`\rho (B)`$ in order to demonstrate that we have found a possible explanation, at least in the GaAs hole systems , for the giant magnetoresistance reported in recent experiments . We also emphasize that the theory presented in this Letter is totally independent of our $`B=0`$ theory for $`\rho (T,n)`$ developed in Ref. , and the parallel field results presented herein are not an extension and/or application of our $`B=0`$ theory.
The new physics we introduce in this paper is the coupling of the parallel B-field to the orbital motion of the 2D carriers. It has so far been assumed that the observed parallel field induced giant magnetoresistance must necessarily be a spin effect because the orbital motion does not couple to a B field parallel to the 2D plane. This would certainly be true if the systems being studied were strictly 2D systems with zero thickness (i.e., perfect $`\delta `$-function confinement) in the direction (z) normal to the 2D (x-y) plane. In reality, however, the systems being studied are quasi-2D, with their average thickness $`z`$ in the z-direction being of the order of 30–300 Å, depending on the system, carrier density, and other parameters (e.g., depletion charge density) which are often not accurately known. Therefore, a parallel magnetic field in the x-y plane does couple to the z-orbital degree of freedom of the system, and such an orbital coupling could in fact be strong when $`l_c<z`$, where $`l_c=(\hbar c/eB)^{1/2}`$ is the magnetic length associated with the parallel field. Since $`z`$ depends on the electron density $`n`$ through the self-consistent confinement potential (increasing with decreasing $`n`$), and may be quite large at low electron densities, the condition $`l_c<z`$ is fulfilled in the low density experimental regime where the phenomenon of giant magnetoresistance is observed. In addition, the effect of the parallel field is enhanced at low carrier densities by the fact that the 2D Fermi wavelength $`\lambda _F`$ ($`=2\pi /k_F`$) at $`B=0`$ is substantially larger than the magnetic length $`l_c`$ associated with the typical B, indicating a massive nonperturbative orbital effect associated with the applied parallel field.
For example, for $`B\sim 5`$ T and $`n=5\times 10^{10}`$ cm<sup>-2</sup>, $`\lambda _F`$ ≈ 1600 Å (Si); 1100 Å (p-GaAs), and $`l_c`$ ≈ 100 Å — thus $`\lambda _F`$ ∼ 10–15 $`l_c`$, a situation which would be achieved in a simple metal only for astronomical fields around $`B\sim 10^7`$ T. One should therefore expect a strong orbital effect arising from the applied field, as reflected in the observed giant magnetoresistance in these systems. Since we are interested only in the orbital motion, we neglect any Zeeman spin splitting in our consideration, assuming a spin degeneracy of 2 throughout.
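The length scales quoted above follow from the standard 2D formulas, $`l_c=(\hbar c/eB)^{1/2}`$ and $`\lambda _F=2\pi /k_F`$; the short numerical check below reproduces them, assuming (our assumption, not stated explicitly in the text) valley degeneracies $`g_v=1`$ for p-GaAs holes and $`g_v=2`$ for Si electrons, with spin degeneracy 2:

```python
import math

hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C

def magnetic_length(B):
    """l_c = sqrt(hbar/(e*B)) in meters (SI form of (hbar*c/eB)**0.5)."""
    return math.sqrt(hbar / (e * B))

def fermi_wavelength(n, g_v):
    """2D Fermi wavelength 2*pi/k_F, with k_F = sqrt(2*pi*n/g_v) for
    spin degeneracy 2 and valley degeneracy g_v (n in m^-2)."""
    return 2 * math.pi / math.sqrt(2 * math.pi * n / g_v)

n = 5e10 * 1e4  # 5e10 cm^-2 in m^-2
angstrom = 1e-10
print(magnetic_length(5.0) / angstrom)    # ~115  ("l_c ~ 100 A")
print(fermi_wavelength(n, 2) / angstrom)  # ~1585 (Si, quoted ~1600 A)
print(fermi_wavelength(n, 1) / angstrom)  # ~1121 (p-GaAs, quoted ~1100 A)
```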
We note that the new physics arising from the coupling of the orbital motion to the parallel field in these quasi-2D inversion layer type systems has no analog in purely 3D or 2D systems. The zero field z-motion in these quasi-2D systems is quantized into subbands due to the confining potential — the coupling of the B field to the orbital motion thus necessarily involves intersubband dynamics or scattering, and this intersubband dynamics gets coupled with the in-plane 2D dynamics in the presence of the parallel field, leading to the giant magnetoresistance. The effect should in fact persist to high densities (i.e., deep in the so-called metallic phase), except that the size of the magnetoresistance should be only a few percent at high electron densities (where $`l_c`$ is much larger than both $`z`$ and $`\lambda _F`$), whereas in the nonperturbative strong field limit the effect can be huge. Another way of understanding this strong (and unusual) orbital coupling is to note that the cyclotron energy associated with $`B\sim 10`$ T is about 6 meV, which is larger than the zero field subband splitting and much larger than the 2D Fermi energy of the system in the low density regime. This interplay of cyclotron and subband dynamics is a novel feature of the quasi-2D system which has no purely 2D or 3D analog. The physics of orbital coupling introduced in this paper could be thought of as a parallel field induced effective 2D to 3D crossover in the low density quasi-2D systems.
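The “about 6 meV at $`B\sim 10`$ T” figure is just the cyclotron energy $`\hbar eB/m^{}`$; since no effective mass is quoted in the text, the value m* ≈ 0.2 m<sub>e</sub> used in this check is our assumption, chosen only to illustrate the order of magnitude:

```python
hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C
m_e = 9.1093837015e-31  # kg

def cyclotron_energy_meV(B, m_star):
    """hbar*omega_c = hbar*e*B/m* in meV, for effective mass m* in units of m_e."""
    return hbar * e * B / (m_star * m_e) / e * 1e3

# m* = 0.2 m_e is an illustrative assumption, not a value from the text:
print(cyclotron_energy_meV(10.0, 0.2))  # ~5.8 meV, i.e. "about 6 meV"
```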
We implement the above idea by assuming a simple harmonic confinement of the z wavefunction at $`B=0`$. We adjust the parabolic confinement potential variationally to obtain the best fit to the $`B=0`$ wavefunction and energy of the actual system at the appropriate density $`n`$. The harmonic confinement allows us to incorporate the effect of the parallel B field nonperturbatively , and in the high field limit (where $`l_c`$ is much smaller than the $`B=0`$ thickness $`z_{B=0}`$) our harmonic approximation (the Fock-Darwin levels) becomes almost exact. Any error arising from our somewhat inaccurate choice of the z-wavefunction at $`B=0`$ has only a small effect on the rather large magnetoresistance we calculate. We take the B-field direction to be the x-axis without any loss of generality, and denote by $`\rho _{xx}`$ ($`\rho _{yy}`$) the 2D resistivity associated with the electric current flowing along (perpendicular to) the direction of B. As we describe below in detail, one of our specific predictions is that $`\rho _{yy}>\rho _{xx}`$ due to the large 2D anisotropy introduced by the applied B-field, provided there is no other anisotropy in the system.
For our transport calculation we apply the Boltzmann theory (in presence of the parallel field which is treated nonperturbatively by including it explicitly in our one electron wavefunction and energy via the harmonic confinement model ) assuming scattering by short-range random impurities distributed uniformly throughout the quasi-2D layer. Our neglect of screening effects and the associated assumption of $`\delta `$-function impurity scattering potential arising from random impurity centers is done purely for the purpose of simplicity and in order to keep the number of free parameters at a minimum. Since the effect we are considering is a rather gross effect (involving a very large increase in magnetoresistance) any errors associated with our simplified scattering model are not of any qualitative significance in understanding the basic phenomenon. One will have to improve the model (both for confinement and for impurity scattering) if one is interested in precise quantitative agreement with the experimental data in a specific system — we believe that such improvements may be computationally extremely demanding. Details of our calculation will be given in a future long publication.
We show our calculated results in Figs. 1 and 2, concentrating on the p-GaAs samples of Ref. . In Fig. 1(a) we show our calculated $`\rho _{xx}`$ at $`T=50`$ mK for various carrier densities as a function of the applied B-field. The harmonic confinement at each density has been variationally adjusted to give the best wavefunction for the holes in the GaAs heterostructure appropriate for that density at $`B=0`$. The overall qualitative trends of our calculated results agree well with the experimental data (cf. Fig. 3 of Ref. ). In fact, the specific experimental results of Ref. agree very well with our calculations, as can be seen in Fig. 1(a), where we have included some representative experimental data. At low B ($`\omega _c<\omega _0`$), $`\mathrm{ln}(\rho _{xx})`$ shows a $`B^2`$ dependence, changing to a linear B dependence at high B ($`\omega _c>\omega _0`$), in agreement with experiment , where $`\omega _c=eB/mc`$ is the cyclotron frequency of the B field and $`\omega _0`$ is the curvature (or the subband splitting) of the confinement potential. We note that the overall resistivity scale in our results is set by the density $`N_b`$ of the $`\delta `$-function impurity scatterers, which uniquely defines the $`B=0`$ value of $`\rho _{xx}`$ in a particular sample. In Fig. 1(b) we show our calculated qualitative behavior for the Si inversion layer situation, where the impurity scattering centers, instead of being randomly distributed throughout the layer, are located at the Si–SiO<sub>2</sub> interface or mostly on the insulating side. In this situation (when the dominant scattering mechanism is planar, as due to the charged impurities at the interface and the surface roughness scattering at the Si–SiO<sub>2</sub> interface) elementary considerations show that the scattering effect must eventually saturate at high enough B fields, as can be seen in Fig. 1(b), with the approximate saturation field increasing with increasing electron density, consistent with experimental observations . The qualitative difference between the non-saturation behavior \[Fig. 1(a)\] and the saturation behavior \[Fig. 1(b)\] arises from the $`\delta `$-function random scattering centers being distributed three dimensionally in Fig. 1(a) and in a 2D plane at the interface in Fig. 1(b). The results in Fig. 1(a) correspond qualitatively to GaAs, where the main resistive scattering centers are the random impurities in GaAs, whereas the results in Fig. 1(b) correspond more to Si inversion layers, where the main scattering centers (charged impurities and surface roughness) are located in a plane near the interface. We have not adjusted the parameters used for Fig. 1(b) to get agreement with Si inversion layer data — our only point here is to demonstrate the qualitative physics underlying the saturation behavior. In principle, we can get semi-quantitative agreement with any given set of data by adjusting the confinement parameter $`\omega _0`$, but we do not believe such an exercise to be meaningful, particularly for the highly simplified model used in our theoretical calculations.
In Fig. 1(c) we show our predicted anisotropic magnetoresistivity with $`\rho _{yy}>\rho _{xx}`$ — note that the anisotropy can be very large, and decreases with increasing density. (The corresponding 2D Fermi surfaces, not shown here due to lack of space, are strongly anisotropic in shape, being elliptic rather than circular with the eccentricity increasing with increasing field.) The highest density results (the lowest set of curves) in Fig. 1(c) show another predicted feature of our theory: at relatively high density, if the first excited subband of the system is occupied by the carriers at $`B=0`$ (a situation which in principle is achievable), the calculated resistivity would in fact first exhibit a negative magnetoresistance, as the excited subband depopulates with increasing B, before showing the characteristic giant magnetoresistance phenomenon. This oscillatory feature in the lowest set of curves in Fig. 1(c) is the analog of the usual SdH oscillations in this problem. It is important to mention that the features predicted in Fig. 1(c) have already been observed experimentally in parabolic n-type GaAs structures at higher densities (where the overall magnetoresistivity is a factor of 6 for $`\rho _{yy}`$ and only a factor of 2 for $`\rho _{xx}`$), and our calculations for this structure agree well with the experimental findings.
It is important to emphasize that our predicted anisotropy (Fig. 1(c)), which has been observed in n-type GaAs systems , is not seen in Si inversion layers, where $`\rho _{xx}`$ ≈ $`\rho _{yy}`$ even in the presence of a strong applied parallel magnetic field. In this context we also point out that the saturation behavior of Fig. 1(b) calculated in our theory is not in particularly good agreement with the observed behavior in Si inversion layers, where the saturation sets in more abruptly than in our theory. On the other hand, our calculated magnetoresistance for GaAs systems, as shown in Fig. 1(a), is in excellent qualitative agreement with the corresponding GaAs results reported in Ref. . The reason for this difference in the observed experimental behavior between the Si- and GaAs-based systems is currently not known. Our current theory applies rather well to the GaAs-based 2D systems but not to Si systems, for reasons which are not clear at this stage. One possibility for the difference between the two systems could be spin effects neglected in our theory.
Finally, in Fig. 2 we provide the calculated temperature dependence of $`\rho (T,B,n)`$, which shows the interesting non-monotonicity (“the suppression of the metallic phase”) seen experimentally . We emphasize that since we neglect all screening effects (due to our $`\delta `$-function impurity scattering model) our calculated temperature dependence, which arises entirely from Fermi surface and thermal occupancy effects, is necessarily simplistic (compared, for example, with our $`B=0`$ theory for $`\rho (T,n)`$ as given in Ref. ). Nevertheless we believe that even this drastically simplified model catches the basic physics of the phenomenon, and explains on a qualitative level why $`d\rho /dT`$ changes sign from being positive (“metallic”) at low field to negative (“insulating”) at high fields. This is essentially a “quantum-classical crossover” type phenomenon where the strong modification of the Fermi surface, as shown in Fig. 1(d), leads to non-monotonic temperature dependence at various B values. The physics of the negative $`d\rho /dT`$ at high fields is entirely a Fermi surface effect. In Fig. 2 we plot $`\rho _{xx}(B)`$ for various fixed values of T, and we note that we obtain rough qualitative agreement with the experimental observations that there is a transition point $`B_c`$ \[around $`B_c`$ ≈ 1 T in Fig. 2\] where $`d\rho (B)/dT`$ changes its sign from being positive (“metallic”) for $`B<B_c`$ to being negative (“insulating”) for $`B>B_c`$ — in our theoretical results $`B_c`$ is not a sharp transition point but rather a rough transition regime, whereas in the experiment $`B_c`$ seems to be a sharp point. We see no particular reason for $`B_c`$ to be a sharp single transition point, since this phenomenon is obviously not a phase transition (both $`B<B_c`$ and $`B>B_c`$ are effective “metallic” phases), and the crossover behavior is only a Fermi surface (which is drastically distorted by high B values) effect.
We suggest more precise measurements to check whether $`B_c`$ is really a single transition point or more a rough transition regime.
We conclude by summarizing our theory and by briefly discussing our various approximations and limitations. We have shown that the observed giant positive magnetoresistance phenomenon in quasi-2D systems in the presence of a parallel magnetic field can be qualitatively explained as arising from the coupling of the field to the carrier orbital motion by virtue of the finite thickness and the low density of the layer (spin plays no role in our explanation). We predict a large anisotropy of resistivity in the 2D plane. Our main approximations are: (1) Boltzmann transport theory; (2) harmonic confinement; (3) $`\delta `$-function random impurity scattering. None of these approximations is qualitatively significant because we predict a very large (factors of 10–1000) and robust effect. One important corollary of our theory is that the same effect should persist on the insulating side as long as the localization length is larger than the magnetic length, except for the fact that $`d\rho /dT`$ should always be negative on the insulating side, which is what is experimentally observed. Two important approximations of our theory are that we have neglected all spin-related effects as well as all crystallographic anisotropy effects, which could, in principle, be added to our theory if future experiments demand such an improvement. The main (and an important) limitation of the theory is that our predicted anisotropy seems not to be consistent with the existing data in Si inversion layers, where no magnetoresistive anisotropy has been seen. We do not know the reason for this disagreement — one possibility being that there is an additional scattering mechanism, possibly spin-related, which also plays a role in the observed magnetoresistance and compensates for the anisotropy arising from orbital effects.
The other possibility is that screening could be anisotropic in the presence of a strong parallel field (since the 2D Fermi surface is highly anisotropic), leading to a cancellation of the transport anisotropy in the Si inversion layer where screening effects are typically very strong (our theory neglects screening as we assume short range impurity scattering). While more work is clearly needed to understand the quantitative details of the observed magnetoresistance (particularly in Si inversion layers) and spin-related effects may very well be playing an additional role in the experimental data, our work compellingly demonstrates the importance of orbital magnetoresistance in the presence of a parallel magnetic field in the low density limit which cannot be neglected in future work on the subject.
This work is supported by the U.S.-ARO and the U.S.-ONR.
arXiv: astro-ph/9909326
# Polarization of Thermal X-rays from Isolated Neutron Stars
## 1 Introduction
Optical and radio polarimetry has proven to be a powerful tool to elucidate properties of various astrophysical objects. For instance, virtually all our knowledge about the orientations of the magnetic and rotation axes of radio pulsars comes from analyzing the swing of polarization position angle within the pulse (see, e.g., Manchester & Taylor 1977; Lyne & Manchester 1988). On the other hand, X-ray polarimetry has remained an underdeveloped field of astrophysics. Although various X-ray polarimeters have been designed (e.g., Kaaret et al. 1990; Weisskopf et al. 1994; Elsner et al. 1997; Marshall et al. 1998), and the importance of X-ray polarimetry convincingly demonstrated (Mészáros et al. 1988), the most recent measurements of X-ray polarization were made as long ago as 1977, with the OSO 8 mission (Weisskopf et al. 1978). Nevertheless, it is expected that X-ray polarimeters will be launched in the near future (see, e.g., Tomsick et al. 1997). To develop efficient observational programs for forthcoming X-ray missions whose objectives will include X-ray polarimetry, the problem of polarization of various X-ray sources should be carefully analyzed, with the main emphasis on new astrophysical information to be inferred from such observations.
In the present paper we consider polarization of thermal X-ray radiation from isolated neutron stars (NSs) with strong magnetic fields. Recent observations with the ROSAT and ASCA missions have shown that several such objects are sufficiently bright for polarimetric observations — e.g., the radio pulsars PSR 0833–45 and PSR 0656+14 (see Ögelman 1995, and Becker & Trümper 1997, for reviews), and the radio-quiet NSs RX J0822–4300 (Zavlin, Trümper, & Pavlov 1999) and RX J1856.5–3754 (Walter, Wolk, & Neuhäuser 1996). Their soft X-ray radiation was interpreted as emitted from NS surface layers (atmospheres) with effective temperatures $`T_{\mathrm{eff}}`$ in the range of (0.3–3) × 10<sup>6</sup> K. Since photons of different energies escape from different depths of the NS atmosphere with temperature growing inward, the spectrum of the thermal radiation may substantially deviate from the blackbody spectrum (Pavlov & Shibanov 1978). Moreover, if there is a strong magnetic field in the NS atmosphere, such that the electron cyclotron energy $`E_{Be}=\hbar eB/m_ec=11.6(B/10^{12}\mathrm{G})`$ keV is comparable to or exceeds the photon energy $`E`$, then the radiation propagates as two normal modes (NMs) with different (approximately orthogonal) polarizations and opacities (Gnedin & Pavlov 1974). For typical magnetic fields, $`B\sim 10^{11}`$–$`10^{13}`$ G, the NMs at soft X-ray energies, $`E\ll E_{Be}`$, are linearly polarized in a broad range of wavevector directions, and the opacity $`\kappa _e`$ of the so-called extraordinary mode (polarized perpendicular to $`𝑩`$) is much smaller than that of the ordinary mode, $`\kappa _e\sim (E/E_{Be})^2\kappa _o`$. As a result, the extraordinary mode escapes from deeper and hotter layers, so that the emergent radiation acquires strong linear polarization perpendicular to the local magnetic field (Pavlov & Shibanov 1978). Polarization of the observed radiation depends on the distribution of magnetic field and temperature over the visible NS surface.
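The numbers in this paragraph are easy to verify; the sketch below evaluates the electron cyclotron energy $`E_{Be}`$ and the opacity ratio $`\kappa _e/\kappa _o\sim (E/E_{Be})^2`$ for a representative soft-X-ray photon (the 1 keV photon energy is our illustrative choice):

```python
hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C
m_e = 9.1093837015e-31  # kg

def E_Be_keV(B_gauss):
    """Electron cyclotron energy hbar*e*B/(m_e*c), in keV, for B in Gauss."""
    return hbar * e * (B_gauss * 1e-4) / m_e / e / 1e3

print(E_Be_keV(1e12))  # ~11.6, i.e. 11.6 keV per 10^12 G as quoted

# kappa_e ~ (E/E_Be)^2 * kappa_o: for a 1 keV photon at B = 10^12 G the
# extraordinary mode is over a hundred times more transparent.
print((1.0 / E_Be_keV(1e12)) ** 2)  # ~7.5e-3
```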
If these distributions are axisymmetric, the polarization is a function of the angle $`\mathrm{\Theta }`$ between the symmetry (magnetic) axis and the line of sight. If the direction of the magnetic axis varies due to NS rotation, the polarization patterns show pulsations with the period of rotation, so that measuring the polarization pulse profile allows one to constrain the orientations of the axes. Due to the gravitational bending of the photon trajectories, the visible fraction of the NS surface grows with increasing the NS mass-to-radius ratio, $`M/R`$, which reduces the net polarization because the observer sees additional regions with differently directed magnetic fields. On the other hand, the gravitational field affects the magnetic field geometry making the field more tangential (Ginzburg & Ozernoy 1965), which increases the observed polarization. These GR effects enable one, in principle, to constrain $`M/R`$ by measuring the X-ray polarization. We demonstrate that the expected X-ray polarization of the thermal NS radiation is high enough to be measured with soft-X-ray polarimeters in a modest exposure time, and these measurements can provide important new information on both the geometry of the magnetic field and the mass-to-radius ratio.
## 2 Description of calculations
The intensity $`I`$ and the Stokes parameters $`Q`$ and $`U`$ at a given point of the NS surface can be expressed as (Gnedin & Pavlov 1974)
$`I`$ $`=`$ $`I_o+I_e,`$ (1)
$`Q`$ $`=`$ $`(I_oI_e)p_L\mathrm{cos}2\chi _o,`$ (2)
$`U`$ $`=`$ $`(I_oI_e)p_L\mathrm{sin}2\chi _o,`$ (3)
where $`I_o`$ and $`I_e`$ are the intensities of the ordinary and extraordinary modes, $`p_L=(1-𝒫^2)/(1+𝒫^2)`$ is the degree of linear polarization of the NMs ($`𝒫`$ is the ellipticity, i.e., the ratio of the minor axis to the major axis of the polarization ellipse), and $`\chi _o`$ is the angle between the major axis of the polarization ellipse of the ordinary mode and the $`x`$ axis of a reference frame in which the Stokes parameters are defined.
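Equations (1)–(3) translate directly into code. The sketch below (illustrative intensities, not values from the paper) also shows that for fully linearly polarized modes ($`p_L=1`$) the net degree of linear polarization $`\sqrt{Q^2+U^2}/I`$ reduces to $`(I_e-I_o)/(I_e+I_o)`$, independent of $`\chi _o`$:

```python
import math

def stokes(I_o, I_e, p_L, chi_o):
    """Stokes parameters of the emergent two-mode radiation, eqs. (1)-(3)."""
    I = I_o + I_e
    Q = (I_o - I_e) * p_L * math.cos(2 * chi_o)
    U = (I_o - I_e) * p_L * math.sin(2 * chi_o)
    return I, Q, U

def lin_pol_degree(I, Q, U):
    """Net degree of linear polarization sqrt(Q^2 + U^2)/I."""
    return math.hypot(Q, U) / I

# Illustrative values: fully linearly polarized modes (p_L = 1) with the
# extraordinary mode three times brighter (it escapes from hotter layers).
I, Q, U = stokes(1.0, 3.0, 1.0, 0.3)
print(lin_pol_degree(I, Q, U))  # 0.5 = (I_e - I_o)/(I_e + I_o), any chi_o
```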
We calculate the local NM intensities with the aid of NS atmosphere models (e.g., Pavlov et al. 1994, 1995). In the present work we assume that the surface temperature is high enough for the atmospheric matter to be completely ionized. If the NS surface is covered with a hydrogen atmosphere, this assumption is justified at $`T_{\mathrm{eff}}\stackrel{>}{\sim }10^6`$ K for typical NS magnetic fields (Shibanov et al. 1993). The local intensities $`I_o`$ and $`I_e`$ depend on the photon energy, magnetic field, and direction of emission.
In the dipole approximation, valid at photon and particle energies much lower than $`m_ec^2`$, the degree of linear polarization of NMs can be expressed as
$$p_L=\frac{|q|\mathrm{sin}^2\theta ^{}}{\sqrt{4\mathrm{cos}^2\theta ^{}+q^2\mathrm{sin}^4\theta ^{}}}$$
(4)
where $`\theta ^{}`$ is the angle between the magnetic field $`𝑩`$ and the unit wavevector $`\widehat{𝒌}^{}`$ at the NS surface. The (angle-independent) parameter $`q`$ is determined by the components of the Hermitian part of the polarizability tensor in the coordinate frame with the polar axis along the magnetic field (Gnedin & Pavlov 1974). The parameter $`q`$ depends on photon energy and magnetic field (e.g., Pavlov, Shibanov, & Yakovlev 1980; Bulik & Pavlov 1996). For instance, if the hydrogen plasma is completely ionized, and the electron-positron vacuum polarization by the magnetic field can be neglected, this parameter equals
$$q=\frac{E^2(E_{Be}^2+E_{Bi}^2-E_{Be}E_{Bi})-E_{Be}^2E_{Bi}^2}{E^3(E_{Be}-E_{Bi})},$$
(5)
where $`E^{}`$ is the photon energy as measured at the NS surface, and $`E_{Bi}=(m_e/m_p)E_{Be}=6.32(B/10^{12}\mathrm{G})`$ eV is the ion (proton) cyclotron energy. If the photon energy is much greater than the ion cyclotron energy, the $`q`$ parameter is particularly simple: $`q=E_{Be}/E^{}`$. This means that the NMs are linearly polarized, $`p_L\approx 1`$, in a broad range of directions, $`\mathrm{sin}^2\theta ^{}\gg 2E^{}/E_{Be}`$, at photon energies much lower than the electron cyclotron energy. It should be mentioned that equations (1)–(4) imply that the NM polarizations are orthogonal to each other (in particular, $`\chi _e=\chi _o\pm \pi /2`$). This condition is fulfilled in a broad domain of photon energies and directions, except for a few special values of $`\theta ^{},E^{}`$ (e.g., Pavlov & Shibanov 1979; Bulik & Pavlov 1996). Within the same approximations, the angle $`\chi _o`$ coincides with the azimuthal angle of the magnetic field in a reference frame whose polar axis is parallel to $`\widehat{𝒌}^{}`$.
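As a sanity check on equations (4) and (5), the parameter $`q`$ and the mode polarization $`p_L`$ are easy to evaluate numerically. The Python sketch below is ours, not part of the original work; it assumes the quoted values $`E_{Bi}=6.32(B/10^{12}\mathrm{G})`$ eV and $`E_{Be}=(m_p/m_e)E_{Bi}\approx 11.6(B/10^{12}\mathrm{G})`$ keV, and the restored signs in equation (5) are checked only through the limits quoted in the text.

```python
import math

def q_param(e_surf, b12):
    """Polarizability parameter q of a fully ionized H plasma (eq. [5]).
    e_surf: photon energy at the NS surface, eV; b12 = B / 10^12 G."""
    e_be = 11600.0 * b12          # electron cyclotron energy, eV
    e_bi = 6.32 * b12             # proton cyclotron energy, eV
    num = e_surf**2 * (e_be**2 + e_bi**2 - e_be * e_bi) - e_be**2 * e_bi**2
    den = e_surf**3 * (e_be - e_bi)
    return num / den

def p_lin(e_surf, theta, b12):
    """Degree of linear polarization of the normal modes (eq. [4]);
    theta is the angle between B and the wavevector, in radians."""
    q = q_param(e_surf, b12)
    s2 = math.sin(theta) ** 2
    return abs(q) * s2 / math.sqrt(4.0 * math.cos(theta) ** 2 + q * q * s2 * s2)
```

In the regime $`E_{Bi}\ll E^{}\ll E_{Be}`$ this reproduces $`q\approx E_{Be}/E^{}`$, with $`p_L`$ close to unity away from the field direction, as stated above.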
To find the observed flux $`F_I`$ and the observed Stokes parameters $`F_Q`$ and $`F_U`$, one should sum contributions from all the elements of the visible NS surface. We assume the magnetic field and the temperature distribution are axially symmetric and define $`F_Q`$ and $`F_U`$ in the reference frame such that the axis of symmetry $`𝒎`$ lies in the $`xz`$ plane, the $`z`$ axis is directed along the line of sight. In such a frame $`F_U=0`$, and $`F_I`$, $`F_Q`$ are functions of the angle $`\mathrm{\Theta }`$ between $`𝒎`$ and $`\widehat{𝒛}`$. The ratio $`P_L=F_Q/F_I`$ gives the observed degree of linear polarization, $`|P_L|`$, and the observed position angle: the polarization direction is perpendicular or parallel to the projection of $`\widehat{𝒎}`$ onto the sky plane for $`F_Q>0`$ or $`F_Q<0`$, respectively.
Since the NS radius $`R`$ is comparable with the gravitational (Schwarzschild) radius $`R_g=2GM/c^2`$, the photon energy and the wavevector and polarization directions change in the course of propagation in the strong gravitational field. We will assume that the NS gravitational field is described by the exterior Schwarzschild solution. Since strong magnetic fields ($`B\stackrel{>}{\sim }10^{10}`$ G) are needed to obtain measurable polarization in the X-ray range, and all observed NSs with strong magnetic fields are not very fast rotators ($`P>10`$ ms), the effects of rotation on the metric and on the observed radiation are very small. For the Schwarzschild metric, the observed energy is redshifted as $`E=g_rE^{}`$, where $`g_r=(1-R_g/R)^{1/2}`$ is the redshift factor. The observed wavevector is inclined to the emitted wavevector by the angle $`K-\vartheta `$, $`\widehat{𝒌}^{}\mathbf{\cdot }\widehat{𝒛}=\mathrm{cos}(K-\vartheta )`$, where $`K`$ is the colatitude of the emitting point in the reference frame with the origin at the NS center and the $`z`$ axis directed towards the observer, and $`\vartheta `$ is the angle between the normal to the surface and the wavevector direction $`\widehat{𝒌}^{}`$ at the emitting point. The angle $`\vartheta `$ would coincide with $`K`$ in flat space-time. In the Schwarzschild geometry, $`K`$ always exceeds $`\vartheta `$, i.e., some part of the NS back hemisphere is visible. For instance,
$$K=a\int _0^{R_g/R}\frac{\mathrm{d}x}{\sqrt{1-a^2(1-x)x^2}}$$
(6)
for $`K\le \pi `$ ($`g_r\ge 0.65`$), where $`a=R\mathrm{sin}\vartheta /(R_gg_r)`$ is the impact parameter in units of $`R_g`$ (see, e.g., Zavlin, Shibanov, & Pavlov 1995). In particular,
$$K-\vartheta \simeq u\mathrm{tan}\frac{\vartheta }{2}+\frac{1}{16}u^2\left[\frac{15(\vartheta -\mathrm{sin}\vartheta )}{\mathrm{sin}^2\vartheta }+7\mathrm{tan}\frac{\vartheta }{2}\right]$$
(7)
for $`u\equiv R_g/R\ll 1`$.
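To make the bending relation concrete, equation (6) can be integrated numerically and compared with the expansion (7). The Python sketch below is ours (the function names and the Simpson-rule choice are not from the paper); it assumes a non-grazing ray, so that the integrand has no turning-point singularity on $`[0,R_g/R]`$.

```python
import math

def colatitude(theta, u, n=20000):
    """Colatitude K of the emitting point (eq. [6]) by Simpson's rule.
    theta: angle between the surface normal and the emitted wavevector (rad);
    u = R_g/R; a = R sin(theta)/(R_g g_r) is the impact parameter in units
    of R_g.  Valid only for rays without a turning point."""
    g_r = math.sqrt(1.0 - u)
    a = math.sin(theta) / (u * g_r)
    f = lambda x: a / math.sqrt(1.0 - a * a * (1.0 - x) * x * x)
    h = u / n
    s = f(0.0) + f(u)
    for i in range(1, n):
        s += f(i * h) * (4.0 if i % 2 else 2.0)
    return s * h / 3.0

def colatitude_approx(theta, u):
    """Weak-field expansion of K (eq. [7]), valid for u << 1."""
    t = math.tan(0.5 * theta)
    return theta + u * t + (u * u / 16.0) * (
        15.0 * (theta - math.sin(theta)) / math.sin(theta) ** 2 + 7.0 * t)
```

For $`u\to 0`$ the integral reduces to $`K=\vartheta `$ (flat space), and for small $`u`$ it agrees with equation (7); $`K>\vartheta `$ always, as stated in the text.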
The bending of the photon trajectories is associated with changing the direction of linear polarization. The polarization direction rotates in such a way that it keeps fixed orientation with respect to the normal to the trajectory plane (e.g., Pineault 1977), remaining perpendicular to the wavevector. Without this rotation, the angles $`\chi _o`$ and $`\chi _e`$ would be conserved: $`\chi _o^{\mathrm{obs}}=\varphi `$ at the observation point, where $`\varphi =\mathrm{tan}^{-1}(B_y/B_x)`$ is the azimuthal angle of the magnetic field at the emitting point in the $`x,y,z`$ frame. To find $`\chi _o^{\mathrm{obs}}`$ with allowance for the GR effects, it is convenient to introduce the frame $`x^{},y^{},z^{}`$ such that the $`z^{}`$ axis is parallel to $`\widehat{𝒌}^{}`$, and the photon trajectory is in the $`x^{}z^{}`$ plane (see Fig. 1). The unit vectors along the axes of the two frames are connected with each other as follows
$`\widehat{𝒙}^{}`$ $`=`$ $`\mathrm{cos}\phi \mathrm{cos}(K-\vartheta )\widehat{𝒙}+\mathrm{sin}\phi \mathrm{cos}(K-\vartheta )\widehat{𝒚}-\mathrm{sin}(K-\vartheta )\widehat{𝒛},`$ (8)
$`\widehat{𝒚}^{}`$ $`=`$ $`-\mathrm{sin}\phi \widehat{𝒙}+\mathrm{cos}\phi \widehat{𝒚},`$ (9)
$`\widehat{𝒛}^{}`$ $`=`$ $`\mathrm{cos}\phi \mathrm{sin}(K-\vartheta )\widehat{𝒙}+\mathrm{sin}\phi \mathrm{sin}(K-\vartheta )\widehat{𝒚}+\mathrm{cos}(K-\vartheta )\widehat{𝒛},`$ (10)
where $`\phi `$ is the azimuthal angle of the emitting point in the $`x,y,z`$ frame. Using the conditions that the angle between $`\widehat{𝒚}^{}`$ and the polarization direction is conserved, and that the polarization direction is perpendicular to $`\widehat{𝒛}`$ at the observation point, we obtain $`\chi _o^{\mathrm{obs}}=\phi +\varphi ^{}`$, where $`\varphi ^{}`$ is the azimuthal angle of the magnetic field in the $`x^{},y^{},z^{}`$ frame: $`\varphi ^{}=\mathrm{tan}^{-1}(B_y^{}/B_x^{})`$. The angle $`\varphi ^{}`$ depends on $`K-\vartheta `$ and $`\phi `$, and it tends to $`\varphi -\phi `$ when $`R_g/R\to 0`$.
With allowance for the above-described gravitational effects, the observed flux $`F_I`$ and the Stokes parameter $`F_Q`$ are given by the following integrals over the visible NS surface (see Zavlin et al. 1995):
$`F_I(E,\mathrm{\Theta })`$ $`=`$ $`{\displaystyle \frac{R^2}{d^2}}g_r{\displaystyle \int _0^1}\mu d\mu {\displaystyle \int _0^{2\pi }}d\phi (I_o+I_e)=F_o(E,\mathrm{\Theta })+F_e(E,\mathrm{\Theta }),`$ (11)
$`F_Q(E,\mathrm{\Theta })`$ $`=`$ $`{\displaystyle \frac{R^2}{d^2}}g_r{\displaystyle \int _0^1}\mu d\mu {\displaystyle \int _0^{2\pi }}d\phi (I_o-I_e)p_L\mathrm{cos}2(\phi +\varphi ^{}),`$ (12)
where $`d`$ is the distance, $`\mu =\mathrm{cos}\vartheta `$, and the integrands are taken at the photon energy $`E^{}=E/g_r`$.
To calculate the integrands, we should know the magnetic field at the NS surface as a function of $`\mu `$ and $`\phi `$. We consider a dipole magnetic field in the Schwarzschild metric. According to Ginzburg & Ozernoy (1965), the field equals
$$𝑩=B_p\frac{(2+f)(\widehat{𝒓}\mathbf{\cdot }\widehat{𝒎})\widehat{𝒓}-f\widehat{𝒎}}{2},$$
(13)
where $`B_p`$ is the field strength at the magnetic pole, $`\widehat{𝒓}`$ is the unit radius vector of a surface point, and $`\widehat{𝒎}`$ is the unit vector of the magnetic moment. The parameter
$$f=2\frac{u^2-2u-2(1-u)\mathrm{ln}(1-u)}{[u^2+2u+2\mathrm{ln}(1-u)]\sqrt{1-u}}$$
(14)
accounts for the GR effect. For $`u\ll 1`$ ($`R\gg R_g`$), we have $`f(u)\approx 1+u/4+11u^2/80`$. The radial and tangential components of the magnetic field are $`B_r=B_p\mathrm{cos}\gamma `$ and $`B_t=(B_p/2)f\mathrm{sin}\gamma `$, where $`\mathrm{cos}\gamma =\widehat{𝒓}\mathbf{\cdot }\widehat{𝒎}=\mathrm{sin}\mathrm{\Theta }\mathrm{sin}K\mathrm{cos}\phi +\mathrm{cos}\mathrm{\Theta }\mathrm{cos}K`$. Since $`f>1`$, the GR effect makes the magnetic field more tangential. The projections of $`𝑩`$ onto the $`x,y,z`$ and $`x^{},y^{},z^{}`$ axes can be easily found with the aid of equations
$`\widehat{𝒓}`$ $`=`$ $`\mathrm{sin}K\mathrm{cos}\phi \widehat{𝒙}+\mathrm{sin}K\mathrm{sin}\phi \widehat{𝒚}+\mathrm{cos}K\widehat{𝒛},`$ (15)
$`\widehat{𝒎}`$ $`=`$ $`\mathrm{sin}\mathrm{\Theta }\widehat{𝒙}+\mathrm{cos}\mathrm{\Theta }\widehat{𝒛},`$ (16)
and equations (8)–(10). The strength of the magnetic field is
$$B=(B_p/2)\left[(4-f^2)\mathrm{cos}^2\gamma +f^2\right]^{1/2}=B_pf\left[4-(4-f^2)\mathrm{cos}^2\theta _B\right]^{-1/2},$$
(17)
where $`\theta _B`$ is the angle between $`𝑩`$ and the normal to the surface $`\widehat{𝒓}`$.
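The field geometry of equations (13)–(17) is straightforward to code. This Python sketch is ours (the names are illustrative); it can be used to verify the quoted expansion $`f(u)\approx 1+u/4+11u^2/80`$ and the limiting values $`B=B_p`$ at the pole and $`B=B_pf/2`$ at the equator.

```python
import math

def f_gr(u):
    """GR correction factor f for a dipole field outside a
    Schwarzschild star (eq. [14]); u = R_g/R < 1."""
    num = u * u - 2.0 * u - 2.0 * (1.0 - u) * math.log(1.0 - u)
    den = (u * u + 2.0 * u + 2.0 * math.log(1.0 - u)) * math.sqrt(1.0 - u)
    return 2.0 * num / den

def b_surface(b_pole, gamma, u):
    """Local surface field strength at magnetic colatitude gamma (eq. [17])."""
    f = f_gr(u)
    return 0.5 * b_pole * math.sqrt((4.0 - f * f) * math.cos(gamma) ** 2 + f * f)
```

Note that both the numerator and denominator of $`f`$ are $`O(u^3)`$, so the expression suffers cancellation at very small $`u`$; for $`u\stackrel{>}{\sim }10^{-3}`$ double precision is ample.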
The integration over the NS surface (eqs. [11], [12]) proceeds as follows. For each point of the $`\mu ,\phi `$ grid, we calculate the colatitude $`K(\vartheta )`$ from equation (6) and the components of the radius vector $`\widehat{𝒓}`$ in the $`x,y,z`$ and $`x^{},y^{},z^{}`$ frames (eqs. [15] and [8]–[10]). This gives us the projections and strength of the local magnetic field $`𝑩`$ (eqs. [13] and [17]), the angles $`\varphi ^{}=\mathrm{tan}^{-1}(B_y^{}/B_x^{})`$, $`\theta ^{}=\mathrm{cos}^{-1}(B_z^{}/B)`$, and $`\theta _B=\mathrm{cos}^{-1}(𝑩\mathbf{\cdot }\widehat{𝒓}/B)`$, and the degree of NM polarization $`p_L`$ (eq. [4]). To obtain the intensities of the extraordinary and ordinary modes of radiation emitted to the observer from a given surface point, one needs to know the local depth dependences of the temperature and density in the NS atmosphere, determined by the local values of $`B`$, $`\theta _B`$, and $`T_{\mathrm{eff}}`$. These dependences are obtained by interpolation within a set of the diffusion atmosphere models (Pavlov et al. 1995). Then, the $`I_o`$ and $`I_e`$ intensities are computed as described by Pavlov et al. (1994) and Shibanov & Zavlin (1995). Subsequent numerical integration over $`\mu ,\phi `$ gives us $`F_I`$, $`F_Q`$, the degree of the observed linear polarization and the position angle.
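Structurally, the quadrature of equations (11) and (12) looks as follows. In this toy Python version (ours; a midpoint rule, with the prefactor $`R^2/d^2`$ dropped) all of the atmosphere and geometry computations listed above are hidden in a caller-supplied callback that returns $`(I_o,I_e,p_L,\phi +\varphi ^{})`$ for each surface element; it is a sketch of the bookkeeping, not the production code of the paper.

```python
import math

def observed_stokes(mode_data, u, n_mu=64, n_phi=64):
    """Midpoint-rule version of eqs. (11)-(12) with R^2/d^2 omitted.
    mode_data(mu, phi) -> (i_o, i_e, p_l, chi) supplies the local mode
    intensities, the NM polarization p_L, and chi = phi + phi'."""
    g_r = math.sqrt(1.0 - u)          # gravitational redshift factor
    f_i = f_q = 0.0
    dmu = 1.0 / n_mu
    dphi = 2.0 * math.pi / n_phi
    for i in range(n_mu):
        mu = (i + 0.5) * dmu
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            i_o, i_e, p_l, chi = mode_data(mu, phi)
            w = mu * dmu * dphi
            f_i += w * (i_o + i_e)
            f_q += w * (i_o - i_e) * p_l * math.cos(2.0 * chi)
    return g_r * f_i, g_r * f_q
```

An isotropic, unpolarized test pattern gives $`F_Q=0`$ and $`F_I=2\pi g_r`$ in the units of the callback, a convenient sanity check.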
## 3 Results
To demonstrate how the observed linear polarization depends on photon energy, magnetic field, and mass-to-radius ratio, we consider a NS covered with a hydrogen atmosphere with a uniform effective temperature and a dipole magnetic field. We present the results for $`T_{\mathrm{eff}}=1\times 10^6`$ K, $`B_p/(10^{12}\mathrm{G})=0.3`$, 1.0, 3.0 and 10.0. We choose a standard NS radius $`R=10`$ km and three NS masses, $`M/M_\odot =0.66`$, 1.40 and 1.92, from an allowed domain in the $`M`$–$`R`$ diagram (filled circles in Fig. 2). These masses correspond to the redshift parameters $`g_r=0.90`$, 0.77 and 0.66, and the surface gravitational accelerations $`g/(10^{14}\mathrm{cm}\mathrm{s}^{-2})=0.97`$, 2.43, and 3.89. It should be noted that the properties of the emitted radiation are almost independent of the $`g`$ value, so that the gravitational effects on the observed radiation are determined mainly by the redshift parameter $`g_r`$, i.e., by the mass-to-radius ratio.
The left panel of Figure 3 demonstrates the observed photon spectral fluxes $`F_I`$ (eq. ) from a NS with the magnetic field $`B_p=1\times 10^{12}`$ G at the magnetic pole and the magnetic axis perpendicular to the line of sight, $`\mathrm{\Theta }=90^{}`$. The flux is normalized to a distance of 1 kpc. The spectra are presented for the three values of the redshift parameter $`g_r`$. To demonstrate the effect of the interstellar absorption, we plot the spectra for the effective hydrogen column densities $`n_H=0`$ (unabsorbed flux), $`1\times 10^{20}`$ and $`1\times 10^{21}`$ cm<sup>-2</sup> (the latter two are shown for $`g_r=0.77`$ only). The effect of redshift is clearly seen, as well as that of the interstellar absorption: spectral maxima shift from $`0.15`$ keV at $`n_H=0`$ to $`0.6`$ keV at $`n_H=1\times 10^{21}`$ cm<sup>-2</sup>. The contributions from the extraordinary and ordinary modes (fluxes $`F_e`$ and $`F_o`$) to the unabsorbed spectrum $`F_I=F_e+F_o`$ are shown in the right panel of Figure 3 for $`g_r=0.77`$. At energies around the maxima of the flux spectra the radiative opacity of the ordinary mode significantly exceeds that of the extraordinary mode. Hence, the extraordinary mode is emitted from deeper and hotter atmosphere layers, providing the main contribution to the total flux (see Pavlov et al. 1995 for details). At higher energies ($`E^{}\text{ }\stackrel{>}{}3^{1/2}E_{Be}`$) the relation between the two opacities, and the two NM fluxes, is reversed (Kaminker, Pavlov, & Shibanov 1982). This leads to changing the sign of $`F_Q`$ (i.e., to the jump of the polarization position angle by $`\pi /2`$).
Several examples of the dependences of $`P_L=F_Q/F_I`$ on photon energy are presented in Figure 4, for $`B_p=1\times 10^{12}`$ G and different values of the angle $`\mathrm{\Theta }`$ and the redshift parameter $`g_r`$. In the soft X-ray range, where the thermal NS radiation is most easily observed, $`P_L`$ is positive, i.e., the polarization direction is perpendicular to the projection of the NS magnetic axis onto the image plane. In this energy range the ordinary mode is emitted from superficial layers with lower temperature, whereas the extraordinary mode is formed in deeper and hotter layers with a larger temperature gradient. As a result, the ratio $`F_e/F_o`$ grows with $`E`$ at low energies until this effect is compensated by the decrease of the difference between the extraordinary and ordinary opacities (see the right panel of Fig. 3). At higher energies the extraordinary and ordinary fluxes approach each other, so that $`F_e/F_o`$ decreases with increasing $`E`$, reaching unity at $`E\approx 0.3g_rE_{Be}`$ (hereafter, $`E_{Be}`$ and $`E_{Bi}`$ are the cyclotron energies for the magnetic field $`B_p`$). Since $`p_L\approx 1`$ at $`E_{Bi}\ll E^{}\ll E_{Be}`$, and $`\phi +\varphi ^{}`$ does not depend on $`E^{}`$, it follows from equations (11) and (12) that $`P_L\propto (F_e-F_o)/(F_e+F_o)`$, with a proportionality coefficient independent of $`E`$. This explains the energy dependence of $`P_L`$ in Figure 4. In particular, starting from energies $`E\approx g_rE_{Bi}`$, $`P_L`$ grows with $`E`$ until it reaches a maximum (at $`E\approx 1`$ keV for $`B_p=1\times 10^{12}`$ G). At higher energies the polarization spectra steeply decrease with increasing $`E`$, reach zero at $`E\approx 0.3g_rE_{Be}`$, where the contributions from the two NMs cancel each other, and become negative (the polarization direction becomes parallel to the $`𝒎`$ projection) at higher energies, where the flux decreases exponentially at the effective temperature chosen.
As expected, the degree of polarization grows with increasing $`\mathrm{\Theta }`$ from $`0^{}`$ to $`90^{}`$. It equals zero at $`\mathrm{\Theta }=0^{}`$ (the magnetic axis is parallel to the line of sight) because the contributions from the azimuthal angles $`\phi `$ and $`\phi +\pi /2`$ to $`F_Q`$ (eq. [12]) are polarized in orthogonal directions, being of the same magnitude ($`B_y^{}=0`$, $`\varphi ^{}=0`$, $`\mathrm{cos}2\phi =-\mathrm{cos}2(\phi \pm \pi /2)`$). The polarization is maximal at $`\mathrm{\Theta }=90^{}`$, when the magnetic lines are seen by the observer almost parallel to the magnetic axis on a substantial (central) part of the visible stellar disk.
We see from Figure 4 that the degree of polarization decreases with increasing $`M/R`$ (or decreasing $`g_r`$). The main reason is that the observer sees a larger fraction of the whole NS surface due to stronger bending of photon trajectories. As a result, the overall pattern of the magnetic lines on the visible NS disk becomes more nonuniform, which leads to additional “cancellation” of the mutually orthogonal polarizations emitted from parts of the surface with orthogonally oriented magnetic line projections. This effect is partly compensated by the other GR effect, the more tangential dipole magnetic field in the Schwarzschild metric (the parameter $`f`$ in eq. [14] equals 1.08, 1.14, and 1.22 for $`g_r=0.90`$, 0.77, and 0.66, respectively), but the effect of bending prevails.
Figure 5 demonstrates the effect of magnetic field strength on the degree of polarization. In particular, $`P_L`$ in the soft X-ray range grows with $`B_p`$ for typical NS magnetic fields. When the magnetic field is relatively small, $`B_p\text{ }\stackrel{<}{}1\times 10^{12}`$ G, the fast growth of $`P_L`$ is due to the increasing difference between the extraordinary and ordinary opacities (hence, increasing differences between the escape depths and between the NM intensities). Slower growth of $`P_L`$ at intermediate fields (around $`B_p1\times 10^{12}`$ G) is mainly due to the decrease of the surface temperature with increasing $`B`$ (Pavlov et al. 1995), which reduces the ordinary flux $`F_o`$ and the ratio $`F_o/F_e`$. With further increase of magnetic field, the surface temperature ceases to decrease, and $`P_L(B_p)`$ saturates in the soft X-ray range at $`B_p\text{ }\stackrel{>}{}3\times 10^{12}`$ G, until the field becomes so strong, $`B_p\text{ }\stackrel{>}{}3\times 10^{13}`$ G, that $`g_rE_{Bi}`$ becomes comparable with $`E`$, and the proton cyclotron spectral feature gets into the soft X-ray range.
The proton cyclotron feature in the polarization spectrum is shown in Figure 5 for $`B_p=1\times 10^{13}`$ G. The shape of the feature can be explained by the behavior of the parameter $`q`$ and the NM intensities near the proton cyclotron resonance. According to equation (5), $`q`$ grows with decreasing photon energy until $`E^{}`$ reaches $`3^{1/2}E_{Bi}`$; then it sharply decreases, crosses zero in the very vicinity of the proton cyclotron resonance, at $`E^{}\approx E_{Bi}(1+2m_e/m_p)`$, and tends to $`-\mathrm{\infty }`$ ($`q\approx -E_{Be}E_{Bi}^2/E^3`$ at $`E^{}\ll E_{Bi}`$). This means that $`p_L`$ reaches zero, the NM intensities $`I_e`$ and $`I_o`$ equal each other, and the integrand of equation (12) equals zero at an energy in the very vicinity of the proton resonance corresponding to the local magnetic field. If the direction of the local magnetic field is such that $`\mathrm{cos}2(\phi +\varphi ^{})>0`$ (see eq. [12]), which roughly corresponds to the projection of the local magnetic field onto the sky plane within $`\pm 45^{}`$ of the magnetic axis projection, then the integrand in the expression for $`F_Q`$ is positive at energies around the resonance, so that the energy dependence of the integrand for the corresponding surface points looks like an “absorption line” in a positive continuum, with its minimum (zero) value at the local resonance energy. Integration over the area with positive $`\mathrm{cos}2(\phi +\varphi ^{})`$ yields a positive contribution to $`F_Q`$, with an absorption line somewhat broadened, and its minimum above zero, because of the nonuniformity of the magnetic field. On the contrary, the energy dependence of the integrand in the area where $`\mathrm{cos}2(\phi +\varphi ^{})<0`$ looks like an “emission line” on a negative continuum, with its maximum equal to zero at the local resonance energy. The integral over this area gives a negative contribution to $`F_Q`$; its absolute values are minimal in the energy range which includes the local resonance energies.
If the strengths of the local magnetic fields are different in the areas of positive and negative $`\mathrm{cos}2(\phi +\varphi ^{})`$, the integration over the whole visible disk results in a complex feature in the $`F_Q(E)`$ and $`P_L(E)`$ spectra. In Figure 5 this feature is most pronounced at $`B_p=1\times 10^{13}`$ G and $`\mathrm{\Theta }=90^{}`$. It consists of two components: the absorption component with a sharp minimum at $`E\approx 28`$ eV, corresponding to the field at the magnetic equator ($`B_e=B_pf/2`$), and the emission component at energies below $`E\approx 48`$ eV, corresponding to the field at the magnetic pole. The feature is also clearly seen at the same $`B_p`$ and $`\mathrm{\Theta }=45^{}`$, whereas only the emission component of the feature is seen at $`E>10`$ eV in the curves of $`P_L(E)`$ for $`B_p=3\times 10^{12}`$ G. If the magnetic field is superstrong, $`B\approx 10^{14}`$–$`10^{15}`$ G, as suggested for anomalous X-ray pulsars and soft gamma repeaters, the proton cyclotron feature gets into the soft or medium X-ray range, and its detection would enable one to measure directly the magnetic field strength.
Generally, the polarization spectra are much more sensitive to the strength of the magnetic field than the intensity spectra (Fig. 6). The main effect of $`B`$ on the $`F_I(E)`$ spectra in the soft energy range is a shallow proton cyclotron absorption feature (see the spectrum for $`B_p=1\times 10^{13}`$ G in Fig. 6). At $`E\approx 0.2`$–1.0 keV the intensity spectra at different magnetic fields typical for NSs are almost indistinguishable, with the main contribution coming from the extraordinary mode whose spectrum is almost independent of $`B`$ at $`E_{Bi}\ll E^{}\ll E_{Be}`$.
If the angle $`\alpha `$ between the magnetic and rotation axes differs from zero, the projection of $`𝒎`$ onto the sky changes its orientation with the period of rotation. This means that the angle $`\mathrm{\Theta }`$ and the polarization position angle $`\delta `$ with respect to a fixed (nonrotating) direction also oscillate with the rotation period $`P`$:
$$\mathrm{cos}\mathrm{\Theta }=\mathrm{cos}\zeta \mathrm{cos}\alpha +\mathrm{sin}\zeta \mathrm{sin}\alpha \mathrm{cos}2\pi \mathrm{\Phi },$$
(18)
$$\mathrm{tan}\delta =\frac{\mathrm{cos}\alpha \mathrm{sin}\zeta -\mathrm{sin}\alpha \mathrm{cos}\zeta \mathrm{cos}2\pi \mathrm{\Phi }}{\mathrm{sin}\alpha \mathrm{sin}2\pi \mathrm{\Phi }},$$
(19)
where $`\zeta `$ is the angle between $`𝛀`$ (NS rotation axis) and the line of sight, $`\mathrm{\Phi }=t/P`$ is the rotation phase, and the angle $`\delta `$ is counted from the projection of $`𝛀`$ onto the sky. Equation (19) is applicable in the case of polarization perpendicular to the projection of $`𝒎`$ onto the sky plane ($`P_L>0`$); for $`P_L<0`$, the left-hand side is replaced by $`\mathrm{cot}\delta `$.
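Equations (18) and (19) give the oscillations of $`\mathrm{\Theta }`$ and $`\delta `$ directly. The Python sketch below is ours (angles in radians; the sign convention of the restored minus in eq. [19] is checked only against the limiting cases quoted in the text) and may help in reproducing the phase dependences of Figure 7.

```python
import math

def theta_of_phase(zeta, alpha, phase):
    """Angle between the magnetic axis and the line of sight (eq. [18]);
    phase = t/P is the rotation phase."""
    c = (math.cos(zeta) * math.cos(alpha)
         + math.sin(zeta) * math.sin(alpha) * math.cos(2.0 * math.pi * phase))
    return math.acos(c)

def position_angle(zeta, alpha, phase):
    """Position angle of the polarization w.r.t. the projected rotation
    axis (eq. [19]), for the case P_L > 0 (polarization perpendicular to
    the projection of the magnetic axis onto the sky plane)."""
    num = (math.cos(alpha) * math.sin(zeta)
           - math.sin(alpha) * math.cos(zeta) * math.cos(2.0 * math.pi * phase))
    den = math.sin(alpha) * math.sin(2.0 * math.pi * phase)
    return math.atan2(num, den)
```

For the orthogonal rotator ($`\zeta =\alpha =90^{}`$) this gives $`\mathrm{\Theta }=2\pi \mathrm{\Phi }`$ and a constant $`\delta =0`$, and for small $`\alpha `$ the position angle stays near $`\pm \pi /2`$, as described below.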
Figure 7 shows several characteristic dependences of $`P_L(\mathrm{\Phi })`$ and $`\delta (\mathrm{\Phi })`$ for $`E=0.3`$ keV, $`g_r=0.77`$, and a few sets of $`\zeta ,\alpha `$. In the particular case of an orthogonal rotator, $`\zeta =\alpha =90^{}`$, we have $`\mathrm{\Theta }=2\pi \mathrm{\Phi }`$, $`\delta =0`$, i.e., the degree of polarization oscillates between zero (at $`\mathrm{\Phi }=0`$, 0.5) and a maximum value, $`P_L=25\%`$ at $`\mathrm{\Phi }=0.25`$, 0.75, showing two pulses per period, while the position angle remains constant (the polarization is oriented along the direction of the rotation axis). For the case $`\zeta =60^{}`$, $`\alpha =50^{}`$, the minimum polarization, $`P_L=1\%`$ at $`\mathrm{\Phi }=0`$, 1, corresponds to $`\mathrm{\Theta }=\zeta -\alpha =10^{}`$. The polarization pulse has two maxima per period, $`P_L=25\%`$ at $`\mathrm{\Phi }=0.36`$, 0.64 (corresponding to $`\mathrm{\Theta }=90^{}`$) and a local minimum, $`P_L=22\%`$ at $`\mathrm{\Phi }=0.5`$ (corresponding to $`\mathrm{\Theta }=\zeta +\alpha =110^{}`$). The position angle oscillates around $`\pi /2`$ (or $`-\pi /2`$). For $`\zeta =\alpha =45^{}`$, $`P_L`$ has one broad maximum per period, of the same height as for the orthogonal rotator, when $`\mathrm{\Theta }=90^{}`$ at $`\mathrm{\Phi }=0.5`$. The position angle swings from 0 to $`\pi `$ during the period, crossing $`\pi /2`$ at the phases of maximum polarization. Finally, for $`\zeta =40^{}`$, $`\alpha =10^{}`$, the phase dependence of the degree of polarization is almost sinusoidal, with one maximum per period; it oscillates between $`7\%`$ and $`16\%`$. The position angle oscillates around $`\pi /2`$ (or $`-\pi /2`$) with a small amplitude because of the small value of $`\alpha `$. Thus, we see that a variety of the phase dependences of $`P_L`$ and $`\delta `$ can be obtained for various combinations of $`\zeta ,\alpha `$, which is potentially useful for evaluating these angles from polarimetric observations.
## 4 Discussion
As we see from the examples above, the degree of linear polarization of thermal NS radiation is quite high for typical NS parameters, up to 20%–50% in pulse peaks and roughly half that for the phase-averaged polarization. Optimal energies for observing the polarization are in the soft X-ray range, $`0.1`$–1 keV, for typical surface temperatures of young and middle-aged NSs. The polarization can be observed at these energies with the use of multilayer coated mirrors which provide high reflectivity at large grazing angles — see Marshall et al. (1998) for a concept for a satellite-borne polarimeter (PLEXAS) which would be able to measure the linear polarization of the brightest thermally emitting pulsars to an accuracy of 1%–3% during modest exposure times of $`30`$–100 ks. The polarization can be measured in several narrow energy bands. Complementing the spectral flux with the polarization spectral data and comparing both with the NS atmosphere models would considerably narrow the allowed ranges for NS parameters such as the magnetic field and effective temperature. Particularly strong constraints could be obtained if the X-ray polarimetric measurements are supplemented by measuring the optical-UV polarization of the same sources. Several middle-age pulsars have been observed successfully in the optical-UV range with the Hubble Space Telescope (e.g., Pavlov, Welty, & Córdova 1997, and references therein). Although the expected polarization in this range is lower than in soft X-rays, it still can be as high as 5%–15% (see Figures 4 and 5), so that UV-optical polarimetric observations of these sources seem quite feasible.
Using detectors with high timing resolution (e.g., microchannel-based photon counters) for polarimetric observations of pulsars would allow one to measure the phase dependences of the degree of polarization and the position angle. These observations would be most useful for determining the inclinations of the rotation and magnetic axes ($`\zeta `$ and $`\alpha `$). Although radio band polarization data have been widely used to constrain these angles, the results are often ambiguous because the same behavior of the position angle can be fitted with different combinations of $`\zeta `$ and $`\alpha `$. An additional difficulty in interpreting the radio polarization data is caused by the short duty cycle of radio pulsars — the radio flux is too low during a substantial fraction of the period to measure the polarization. Since the X-ray emission remains bright during the whole period, measuring the position angle and the degree of polarization in X-rays would enable one to infer the orientation of the pulsar axes with much greater certainty.
Of particular interest is the result that the observed polarization is sensitive to the NS mass-to-radius ratio, the most crucial parameter for constraining the still poorly known equation of state of the superdense matter in NS interiors. Since the radio emission of pulsars is generated well above the NS surface, this ratio cannot be constrained from polarimetric observations of pulsars in the radio band. Although some constraints can be obtained from the pulse profiles of the X-ray flux (Pavlov & Zavlin 1997), the polarization pulse profiles are more sensitive to the gravitational effects.
Primary targets for studying the X-ray polarization of thermal NS radiation are the X-ray brightest, thermally radiating pulsars such as PSR 0833–45 (Vela) and PSR 0656+14. A prototype of another class of promising targets, radio-quiet NSs in supernova remnants, is RX J0822–4300 in Puppis A. X-ray polarimetric measurements would be crucial to establish the strength and geometry of the magnetic field of this putative isolated X-ray pulsar (Pavlov, Zavlin, & Trümper 1999). Also it would be very interesting to study X-ray polarization of X-ray bright, radio-quiet isolated NSs which are not associated with supernova remnants. Particularly interesting is the object RX J1856.5–3754 which has a thermal-like soft X-ray spectrum but does not show pulsations (Walter et al. 1996). The lack of pulsations may be explained either by smallness of the magnetic inclination $`\alpha `$, if the magnetic field is typical for NSs ($`10^{11}`$$`10^{13}`$ G), or by a very low surface magnetic field. Polarimetric observations would enable one to distinguish between the two hypotheses — the polarization is expected to be high (and unpulsed) in the former case (unless $`\zeta `$ is also small), and it would be very low if the magnetic field is lower than $`10^{10}`$ G. Distinguishing between the two options is needed to choose either low-field or high-field NS atmosphere models — applications of these types of models to interpretation of the multiwavelength observations of this source yield quite different NS parameters (Pavlov et al. 1996).
It follows from our results that thermal radiation from millisecond pulsars, whose typical magnetic fields are $`10^8`$–$`10^9`$ G ($`E_{Be}\approx 1`$–10 eV), is not polarized in the X-ray range. However, their polarization is expected to be quite strong in the optical-UV range and can be measured in sufficiently deep observations. The best candidate for such observations is the nearest millisecond pulsar J0437–4715, whose magnetic field, $`B\approx 8\times 10^8`$ G, was estimated from radio observations. The thermal radiation from the NS surface, which is expected to be heated up to $`10^5`$ K, should prevail over the radiation from the very cool white dwarf companion at $`\lambda \stackrel{<}{\sim }2000`$ Å. This means that polarization from this pulsar could be observed in the far-UV range with the Hubble Space Telescope. It should be mentioned that the rapid rotation of millisecond pulsars may affect not only the intensity pulse shape (Braje, Romani, & Rauch 1999), but also the polarization, an effect neglected in the present paper.
In summary, our results demonstrate that including X-ray polarimeters in future X-ray observatories, or launching dedicated X-ray polarimetry missions, would be of great importance for studying NSs. The polarimetric observations would be useful for studying not only the thermal component of the NS X-ray radiation, but also the nonthermal component which dominates in many X-ray emitting pulsars, particularly at higher energies. Further theoretical investigations of the polarization of both thermal and nonthermal X-ray emission from NSs are also warranted to provide a firm interpretation of future observational data. In particular, it would be useful to consider the polarization with allowance for possible nonuniformity of the temperature distribution over the NS surface and to study the effects of the chemical composition of the NS atmosphere on the polarization.
We are grateful to Hermann Marshall for useful discussions of the capabilities of modern soft X-ray polarimeters. This work has been partially supported through NASA grant NAG5-7017.
# Large Scale Structure of the Universe: Current Problems
## 1 Introduction
According to current paradigms the Universe is homogeneous and isotropic on large scales, density perturbations grow from small random fluctuations generated in the early stage of the evolution (inflation), and the dynamics of the Universe is dominated by cold dark matter (CDM) with some possible mixture of hot dark matter (HDM). On small scales galaxies are associated in groups and clusters. Until recently it was assumed that the homogeneity of the Universe starts on scales above 50 $`h_{100}^{-1}`$ Mpc. However, there is growing evidence that the supercluster-void network has some regularity, and that homogeneity occurs on larger scales only. Broadhurst et al. (1990) measured redshifts of galaxies in a narrow beam towards the northern and southern Galactic poles and found that the distribution is periodic: high-density regions (which indicate superclusters of galaxies, see Bahcall 1991) alternate with low-density ones (voids) with a surprisingly constant interval of $`128`$ $`h_{100}^{-1}`$ Mpc (here $`h`$ is the Hubble constant in units of 100 km s<sup>-1</sup> Mpc<sup>-1</sup>). The distribution of galaxies in this beam can be characterized by a power spectrum which has a sharp peak. The power spectra derived on the basis of large cluster samples have maxima around the same scale (Einasto et al. 1997a, 1999a). Below we shall analyze observed power spectra of galaxies and clusters of galaxies and compare them with theoretical spectra found for various models.
## 2 The power spectrum of galaxies and its interpretation
We have analyzed recent determinations of power spectra of large galaxy and cluster samples. The mean power spectrum found from cluster samples (Einasto et al. 1997c, Retzlaff et al. 1998, Tadros et al. 1998) and the APM 3-D galaxy sample (Tadros and Efstathiou 1996) has a relatively sharp maximum at wavenumber $`k=0.05`$ $`h`$ Mpc<sup>-1</sup>, which corresponds to a scale of 120 $`h_{100}^{-1}`$ Mpc, and an almost exact power law with index $`n=-1.9`$ on scales shorter than the maximum. In contrast, the power spectrum found from deprojection of the 2-D distribution of APM galaxies (Peacock 1997, Gaztañaga & Baugh 1997) is shallower around the maximum, see Figure 1. We may expect that true 3-D and deeper surveys reflect the actual distribution of galaxies and clusters better, thus we assume that the power spectrum based on cluster data is characteristic of all galaxies in a fair sample of the Universe (Einasto et al. 1999a,b). The spectrum is derived in real space, then reduced to the amplitude of the spectrum of matter, and finally corrected for non-linear effects; it is determined from observations on scales $`\lesssim 200`$ $`h_{100}^{-1}`$ Mpc, while on very large scales it is extrapolated using theoretical model spectra.
In the right panel of Figure 1 we compare the empirical power spectrum with theoretical models. The best agreement is achieved with a mixed dark matter (MDM) model with cosmological constant. We have accepted parameters of models in agreement with recent data: Hubble constant $`h=0.6`$; baryon density $`\mathrm{\Omega }_b=0.04`$ (this gives $`\mathrm{\Omega }_bh^2=0.0144`$); hot dark matter density $`\mathrm{\Omega }_\nu =0.1`$. We use spatially flat models with cosmological constant, $`\mathrm{\Omega }_0+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. The matter density $`\mathrm{\Omega }_0=\mathrm{\Omega }_b+\mathrm{\Omega }_c+\mathrm{\Omega }_\nu `$ was varied between 1.0 and 0.25, the cold dark matter fraction $`\mathrm{\Omega }_c`$ and vacuum energy term $`\mathrm{\Omega }_\mathrm{\Lambda }`$ were chosen in agreement with restrictions given above. The amplitude of power spectra on large scales was normalized using four-year COBE data. All models are based on the basic assumption that the primordial power spectrum is a power law; we have calculated model spectra for power indices of $`1.0,1.1,\mathrm{}1.4`$; model spectra plotted in Figure 1 were derived for $`n=1.0`$.
Figure 1 shows that on scales shorter than the scale of the maximum the best agreement with observations is obtained with a model with density parameter $`\mathrm{\Omega }_0\approx 0.4`$. By fine tuning the density parameter $`\mathrm{\Omega }_0`$ and power index $`n`$ it is possible to get an almost exact representation of the empirical power spectrum on scales $`<120`$ $`h_{100}^{-1}`$ Mpc. The agreement is lost on large scales: the power spectra of low-density models continue to rise toward large scales, as seen in Figure 1. Agreement with the amplitude of the empirical spectrum on large scales is possible only for models with a high density parameter, $`\mathrm{\Omega }_0\approx 1`$; however, such models have amplitudes much higher than the empirical spectrum on smaller scales. It is impossible to satisfy the shape of the empirical power spectrum with models with any fixed density parameter simultaneously on large and small scales. This is the main conclusion obtained from the comparison of cosmological models with the data. The main reason for this disagreement is that the empirical power spectrum is narrower: its full width at half height of the maximum is about 0.8 dex, whereas conventional CDM models have this parameter in the range 1.10–1.26 dex, and MDM models in the range 1.00–1.15 dex (the lower value corresponding to the lower density parameter).
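The "width in dex" statistic used in this comparison is simple to compute from a tabulated spectrum: find the peak of $`P(k)`$ and measure the full width at half maximum in $`\mathrm{log}_{10}k`$. A minimal sketch (illustrative only; the function name and interpolation choice are ours, not the authors'):

```python
import math

def fwhm_dex(k, P):
    """Full width at half maximum of a peaked spectrum P(k), in dex.

    k and P are equal-length sequences with k sorted ascending; the two
    half-maximum crossings are located by linear interpolation in
    (log10 k, P).
    """
    logk = [math.log10(x) for x in k]
    pmax = max(P)
    half = 0.5 * pmax
    imax = P.index(pmax)

    def crossing(lo, hi):
        # interpolate the half-maximum crossing between samples lo and hi
        f = (half - P[lo]) / (P[hi] - P[lo])
        return logk[lo] + f * (logk[hi] - logk[lo])

    # walk left from the peak until P drops below half, then interpolate
    i = imax
    while i > 0 and P[i - 1] >= half:
        i -= 1
    left = crossing(i - 1, i) if i > 0 else logk[0]

    # same on the right side
    j = imax
    while j < len(P) - 1 and P[j + 1] >= half:
        j += 1
    right = crossing(j + 1, j) if j < len(P) - 1 else logk[-1]
    return right - left
```

For a Gaussian peak of dispersion σ in log10 k the routine returns 2σ√(2 ln 2) ≈ 2.35σ dex, so the quoted 0.8 dex corresponds to σ ≈ 0.34 dex.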
One possibility to explain this discrepancy and to decrease the width of the power spectrum is to increase the baryon fraction in the cosmic density budget (Eisenstein et al. 1998; Meiksin, White & Peacock 1999). In this case the amplitude of the Sakharov oscillations of the hot plasma before recombination increases, which decreases the width of the power spectrum and increases its amplitude near the peak. We have checked this possibility and calculated the power spectra for a range of baryon densities, varying also the Hubble parameter and the vacuum energy density (cosmological constant term). Figure 2 shows results for a set of MDM models with Hubble constant $`h=0.5`$ and $`h=0.6`$, vacuum energy density $`\mathrm{\Omega }_\mathrm{\Lambda }=0.0`$ and $`0.6`$, and HDM fraction $`\mathrm{\Omega }_\nu =0.1`$; the baryon density was varied between 0.05 and 0.20. The increase of the baryon fraction in models with high cosmological density (and zero cosmological constant) does not change the power spectrum considerably – the width of the spectrum remains too large. In models with a large cosmological constant an increase of the baryon fraction decreases the width of the power spectrum; however, the Sakharov oscillations of the spectrum become too large. Moreover, the shape of all theoretical power spectra is very different from the shape of the empirical spectra. The first peak of the Sakharov oscillations occurs on a scale of $`k\approx 0.1`$ $`h`$ Mpc<sup>-1</sup>; the location of the overall maximum of the power spectrum depends on the density parameter. For low-density models it is located near $`k\approx 0.01`$ $`h`$ Mpc<sup>-1</sup>; the observed maximum lies in between. Varying the Hubble constant does not change the overall picture, and essential differences between models and data remain.
Thus our calculations show that no combination of cosmological parameters enables us to obtain a good representation of the empirical power spectrum with theoretical models which are based on the assumption that the primordial power spectrum is a single power law. There remain two possibilities, either empirical data are in error or the single power law assumption is wrong.
## 3 Geometry of the distribution of clusters
Consider first the possibility that the observed power spectrum is not accurate enough, and that there is actually no discrepancy between models and data. Differences occur on scales near the maximum of the spectrum. Here density perturbations have the largest amplitudes, thus it is clear that maxima correspond to superclusters – large-scale regions of highest density in the Universe, and minima to large voids between superclusters – regions of lowest overall density. Differences in power spectra on these scales reflect differences in the spatial distribution of superclusters and voids. To understand the meaning of differences between observed and theoretical power spectra we shall compare the distribution of real and model superclusters and voids. The most suitable objects to investigate the distribution of superclusters are rich clusters of galaxies.
Figure 3 presents the distribution of Abell-ACO and APM clusters of galaxies located in rich superclusters with at least 4 or 8 member clusters (Toomet et al. 1999). To emphasize the distribution of regions of highest density, which define the power spectrum near the maximum, we plot only clusters in rich superclusters. Figure 3 shows that the distribution of rich clusters is quasi-regular: superclusters and voids form a honeycomb-like pattern. The diameter of a cell in this network is approximately 120 $`h_{100}^{-1}`$ Mpc, which is very close to the scale of the maximum of the power spectrum.
In contrast to the observed case the distribution of rich superclusters in CDM dominated models is almost random (Frisch et al. 1995). Mock catalogues with randomly distributed superclusters have power spectra with broad maxima similar to spectra of CDM-type models (Einasto et al. 1997b). The presence of broad maxima is an intrinsic property of all CDM-type models (if the baryon fraction is not too high).
The distribution of clusters can also be quantified using the correlation function of clusters of galaxies. While on small scales the correlation function characterizes the distribution of clusters within superclusters, on large scales it describes the distribution of superclusters themselves (Einasto et al. 1997a, 1999b). In Figure 4 we compare power spectra and correlation functions of the MDM model for a density parameter $`\mathrm{\Omega }_0=0.4`$ with the respective empirical data. We use the observed correlation function of clusters of galaxies located in rich superclusters, and for comparison the Fourier transform of the empirical power spectrum of matter, enhanced in amplitude to obtain a correlation function comparable with the function for rich superclusters. The observed correlation function of clusters in rich superclusters oscillates with a period equal to the wavelength of the maximum of the power spectrum. The Fourier transform of the empirical power spectrum has a similar property, only with a lower amplitude of the oscillations. This difference is due to the elongated form of the cluster sample, which enhances the amplitude of oscillations at large separations. These oscillations are due to the quasi-regular distribution of rich superclusters seen in Figure 3. The correlation function calculated for the MDM model has a completely different character on large scales, and corresponds to an almost random distribution of rich superclusters.
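The Fourier transform relating the two statistics is the standard spherical one, ξ(r) = (1/2π²) ∫₀^∞ P(k) k² [sin(kr)/(kr)] dk. A direct-quadrature sketch (ours, not the authors' code):

```python
import math

def xi_from_pk(r, pk, kmin=1e-4, kmax=10.0, n=20000):
    """xi(r) = 1/(2 pi^2) * Int_0^inf P(k) k^2 sin(kr)/(kr) dk,
    evaluated by the trapezoidal rule on a logarithmic k grid; adequate
    for smooth spectra that are negligible outside [kmin, kmax]."""
    lkmin, lkmax = math.log(kmin), math.log(kmax)
    h = (lkmax - lkmin) / n
    total, prev = 0.0, None
    for i in range(n + 1):
        k = math.exp(lkmin + i * h)
        x = k * r
        sinc = math.sin(x) / x if x != 0.0 else 1.0
        f = pk(k) * k**3 * sinc          # extra factor k from dk = k d(ln k)
        if prev is not None:
            total += 0.5 * (prev + f) * h
        prev = f
    return total / (2.0 * math.pi**2)
```

For a spectrum with oscillations of wavelength k₀ this transform produces a correlation function oscillating with period 2π/k₀, which is the connection exploited in the text.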
Finally, to describe the regularity of the cluster distribution we use a novel method which is sensitive to the geometry of the distribution (Toomet et al. 1999). The method is a 3-D generalization of the periodicity analysis of time series of variable stars. The space under study is divided into cubical trial cells of side length $`d`$. All objects in the individual cubical cells are stacked to a single combined cell, preserving their phases in the original cells. We then vary the side-length of the trial cube to search for the periodicity of the cluster distribution. We find the goodness of regularity for the side length $`d`$ of the trial cell; it is defined so that it has a maximum $`>1`$ if the length of the trial cell is equal to the period of the regularity, otherwise it is equal to unity. The goodness of regularity is shown in Figure 5, the left panel gives results for a mock catalogue (see Figure caption), the right panel for the actual Abell-ACO cluster sample.
The method is sensitive to the direction of the axes of the trial cubes. If clusters form a quasi-rectangular cellular network, and the search cube is oriented along the main axis of the network, then the period is found to be equal to the side-length of the cell. If the search cube is oriented at some non-zero angle with respect to the major axis of the network, then the presence of the periodicity and the period depend on the angle. If the angle is $`45^{\circ }`$, then the period is equal to the length of the diagonal of the cell. If the angle differs considerably from $`0^{\circ }`$ and $`45^{\circ }`$, the periodicity is weak or absent. As seen from Figure 5, the main axis of the supercluster-void network is approximately aligned with the axes of the supergalactic coordinate system. As the supergalactic $`Y`$ axis is very close to the direction of the Galactic poles, it is natural to expect a well defined periodicity in these directions, as actually observed by Broadhurst et al. (1990). Our periodicity analysis confirms earlier results on the presence of a high concentration of clusters and superclusters towards both the Supergalactic Plane (Tully et al. 1992) and the Dominant Supercluster Plane, which are oriented at right angles with respect to each other (Einasto et al. 1997d).
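The phase-folding idea behind the goodness-of-regularity statistic can be illustrated in one dimension (a toy analogue of the 3-D cubic-cell stacking; the function name and the χ²-based score are our simplifications, not the authors' definition): fold positions into a trial cell of size $`d`$, histogram the phases, and score how non-uniform the stacked distribution is. The score peaks when $`d`$ matches the true period.

```python
import random

def regularity(positions, d, nbins=8):
    """Goodness of regularity for trial cell size d (1-D toy version).

    Positions are folded into [0, d); the score is the reduced chi^2 of
    the phase histogram against uniformity: ~1 for no regularity, large
    when d equals the period of the point pattern."""
    counts = [0] * nbins
    for x in positions:
        phase = (x % d) / d              # phase of x inside the trial cell
        counts[int(phase * nbins) % nbins] += 1
    mean = len(positions) / nbins
    chi2 = sum((c - mean) ** 2 / mean for c in counts)
    return chi2 / (nbins - 1)

# points near a regular lattice of period 120 (plus small scatter):
# the score at the true period beats the score at an unrelated trial size
random.seed(1)
pts = [120 * i + random.gauss(0, 6) for i in range(40)]
assert regularity(pts, 120.0) > regularity(pts, 97.0)
```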
## 4 Primordial power spectra
The previous analysis has shown that there exist essential differences between the data and CDM-type models with scale-free primordial power spectra. To explain the difference between models and data we have to accept a non-conventional theoretical power spectrum. As we presently have no reason to assume that our understanding of the physical processes during the radiation-dominated era is wrong, we suppose that the peaked power spectrum originated during the earliest, inflationary phase of the evolution of the Universe. If we accept the transfer function (which describes the evolution of the power spectrum during the radiation-dominated era) according to the models described above, we derive the primordial power spectrum shown in Figure 6 (Einasto et al. 1999b).
The main features of primordial power spectra are the presence of a spike and the change of the power index at the same scale as that of the maximum of the empirical power spectrum. On scales shorter or larger than that of the spike, the primordial spectrum can be well approximated by a power law. The power indices of the approximation are different on small and large scales. Both alternative empirical power spectra lead to similar primordial power spectra, only the shape around the break is different. Broken-scale-invariant primordial power spectra have been studied by Starobinsky (1992), Adams, Ross & Sarkar (1997), and Lesgourgues et al. (1998), among others. It is too early to say which of these models describes the observational data better.
## 5 Conclusions
Our main conclusions are:
$``$ The empirical power spectrum of matter has a peak on scales near 120 $`h_{100}^{-1}`$ Mpc; on shorter scales it can be approximated by a power law with index $`n=-1.9`$.
$``$ Superclusters and voids form a quasi-regular lattice of mean cell size 120 $`h_{100}^{-1}`$ Mpc; the main axis of the lattice is directed along the supergalactic $`Y`$ axis.
$``$ On scales around 100 $`h_{100}^{-1}`$ Mpc the Universe is neither homogeneous nor isotropic.
$``$ The primordial power spectrum of matter is broken; its effective power index changes around the scale $`120`$ $`h_{100}^{-1}`$ Mpc.
I thank H. Andernach, F. Atrio-Barandela, M. Einasto, E. Kasak, A. Knebe, V. Müller, A. Starobinsky, E. Tago, O. Toomet and D. Tucker for fruitful collaboration and permission to use our joint results in this review article. This study was supported by the Estonian Science Foundation.
no-problem/9909/hep-lat9909133.html
# Heavy Quarkonia from Anisotropic and Isotropic Lattices

Talk presented by T. Manke.
## 1 INTRODUCTION
Heavy quarkonia play an important role for the theoretical understanding of QCD. Their non-relativistic character has frequently been used to perform efficient lattice simulations and has triggered many detailed studies of systematic errors such as lattice spacing artefacts, relativistic corrections and quenching effects. As an additional advantage there exists a wealth of experimental data, which provide an ultimate check on different improvement programmes. Moreover, lattice calculations have also resulted in predictions from first principles for heavy $`Q\overline{Q}g`$-states containing an additional gluonic excitation . However, those attempts were hampered by the rapidly decaying signal-to-noise ratio of such high-energetic hybrid states on conventional lattices.
More recently anisotropic lattices have been used to circumvent this problem by giving the lattice a fine temporal resolution whilst maintaining a coarse discretisation in the spatial direction . In a previous study we already reported on first NRQCD results for charmonium and bottomonium hybrid states from anisotropic lattices . In Section 2 we report on further applications of those methods and study also other excitations in heavy quarkonia more carefully.
In quenched simulations without dynamical sea quarks the strong coupling does not run as in nature and therefore one cannot reproduce experimental quantities at all scales. Observed deviations of the quenched hadron spectrum from experiment have been reported previously and an improvement has been noticed once dynamical quarks are inserted into the gluon background . Here we study unquenching effects for heavy quarkonia and report on our results from isotropic lattices in Section 3.
## 2 EXCITED QUARKONIA
To study excited states with small statistical errors it is mandatory to have a fine resolution in the temporal lattice direction, along which we measure the multi-exponential decay of meson correlators. To this end we employ an anisotropic and spatially coarse gluon action:
$`S=\beta {\displaystyle \sum _{x,\mathrm{i}>\mathrm{j}}}\xi ^{-1}\left\{{\displaystyle \frac{5}{3}}P_{\mathrm{ij}}-{\displaystyle \frac{1}{12}}\left(R_{\mathrm{ij}}+R_{\mathrm{ji}}\right)\right\}`$
$`-\beta {\displaystyle \sum _{x,\mathrm{i}}}\xi \left\{{\displaystyle \frac{4}{3}}P_{\mathrm{it}}-{\displaystyle \frac{1}{12}}R_{\mathrm{it}}\right\}.`$ (1)
Here $`(\beta ,\xi )`$ are two parameters, which determine the gauge coupling and the anisotropy of the lattice. Action (1) is Symanzik-improved and involves plaquette terms, $`P_{\mu \nu }`$, as well as rectangles, $`R_{\mu \nu }`$. It is designed to be accurate up to $`𝒪(a_s^4,a_t^2)`$, classically. To reduce the radiative corrections we used self-consistent mean-field improvement for both spatial and temporal links. With this prescription we expect only small deviations of $`\xi `$ from its tree-level value $`a_s/a_t`$.
For the heavy quark propagation in the gluon background we used the NRQCD approach on anisotropic lattices as described in . From the quark propagators we construct meson correlators for bound states with quantum numbers $`S(0,1)\times L(0,1,2)`$ and for hybrid states with additional gluonic excitation. For example, the spin-singlet operators read
$$\overline{Q}^{}Q,\overline{Q}^{}\mathrm{\Delta }_iQ,\overline{Q}^{}\mathrm{\Delta }_j\mathrm{\Delta }_kQ\text{ and }\overline{Q}^{}B_iQ.$$
(2)
Within the NRQCD approach it is paramount to establish a scaling region at finite lattice spacing. Our results in Fig. 1 demonstrate that we succeeded in finding such a window.
As we only measure excitation energies relative to the ground state, it is natural to present our results as the ratio $`R_X=(X-1S)/(1P-1S)`$, which gives the normalized splitting of state X above the 1S.
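As a trivial but explicit illustration, the ratio is computed directly from the fitted level energies (the numbers below are placeholders, not measured values from this work):

```python
def r_x(e_x, e_1s, e_1p):
    """Normalized splitting R_X = (X - 1S) / (1P - 1S)."""
    return (e_x - e_1s) / (e_1p - e_1s)

# placeholder energies in lattice units: a state lying twice as far
# above the 1S as the 1P does has R_X = 2
assert abs(r_x(0.90, 0.50, 0.70) - 2.0) < 1e-12
```

Because both numerator and denominator are differences of energies measured on the same ensemble, additive shifts (such as the NRQCD energy offset) cancel in $`R_X`$.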
For the lowest lying hybrid excitations, $`c\overline{c}g`$ and $`b\overline{b}g`$, our results from leading order NRQCD are in excellent agreement with previous calculations on isotropic lattices , but with much smaller errors. This is the combined success of anisotropic and coarse lattices with a clear signal over many timeslices at small computational cost. Here we have also checked the spin-averaged hybrid against possible finite volume errors, temporal lattice spacing artefacts and relativistic corrections, but we did not find any significant effect.
In Fig. 1 we have also shown our new results for higher radial excitations and D-states ($`L=2`$). Since all the spin corrections up to $`𝒪(mv^6)`$ are now included in our analysis we can also determine the spin structure very accurately. In particular, we were able to extract the exotic hybrid quarkonia, $`{}^{3}B_{1}^{+}`$, explicitly for the first time from NRQCD. Our data indicate that the spin splittings in hybrid states are enlarged compared to P-states, whereas the D-state splittings are much suppressed. A more detailed discussion of the spin structure in quarkonia is presented elsewhere .
The dominating systematic error for all the predictions in this section is an uncertainty in the scale as a result of the quenched approximation. This is not yet controlled and we find a variation of 10-20%, depending on which experimental quantity is used to set the scale.
## 3 SPIN STRUCTURE AND SEA- <br>QUARK EFFECTS
To study sea quark effects in heavy quarkonia we employ an ensemble of isotropic lattices, where the gauge configurations were generated from an RG improved gluon action and tadpole-improved SW fermions for two flavours of light sea-quarks . We then performed an NRQCD calculation of S- and P-states at two different gauge couplings of $`\beta =1.80`$ and $`\beta =1.95`$, corresponding to $`a\approx 0.2`$ fm and $`a\approx 0.15`$ fm. In addition to the chiral extrapolation from four different sea quark masses, we also compared our results directly to quenched simulations at the same lattice spacing.
In Fig. 2 we see an encouraging trend for the lattice spacings from different physical quantities to agree much better in the chiral limit. This is an improvement over quenched simulations, where one has large uncertainties in the scale.
Furthermore, we observed a clear shift upwards of the hyperfine splittings as the sea quark mass is decreased.
For the Bottomonium we find an effect of about 3 MeV in a direct comparison to a quenched calculation using an identical formulation of tadpole-improved NRQCD with accuracy $`𝒪(mv^6)`$. This is a 10-$`\sigma `$ effect (or 10%) as shown in Fig. 3. In Charmonium the hyperfine splitting is also raised by about $`+15\%`$, i.e. to around 60 MeV in the chiral limit on our coarsest lattice.
In P-states one expects a different situation, since their wavefunctions vanish at the origin. Indeed, the very small hyperfine splitting ($`\overline{{}^{3}P}-{}^{1}P_{1}`$) is further suppressed ($`\sim `$ 3-$`\sigma `$) on dynamical configurations and there is no resolvable shift for the fine structure, e.g. $`\overline{{}^{3}P}-{}^{3}P_{0}`$. This validates the quenched approximation for such quantities.
For the Bottomonium system we are presently performing a similar analysis at $`\beta =2.1`$ ($`a\approx 0.1`$ fm) to determine whether those observations still hold on finer lattices.
This work is supported in part by Grants-in-Aid of the Ministry of Education, Science and Culture (Nos. 09304029, 10640246, 10640248, 10740107, 11640250, 11640294, 11740162). SE and KN are JSPS Research Fellows. AAK, HPS and TM are supported by the Research for the Future Program of JSPS, and HPS also by the Leverhulme foundation.
no-problem/9909/hep-lat9909024.html
# Overlap Dirac Operator, Eigenvalues and Random Matrix Theory
## Abstract
The properties of the spectrum of the overlap Dirac operator and their relation to random matrix theory are studied. In particular, the predictions from chiral random matrix theory in topologically non-trivial gauge field sectors are tested.
An important property of massless QCD is the spontaneous breaking of chiral symmetry. The associated Goldstone pions dominate the low-energy, finite-volume scaling behavior of the Dirac operator spectrum in the microscopic regime, $`1/\mathrm{\Lambda }_{QCD}<<L<<1/m_\pi `$, with $`L`$ the length of the system . This behavior can be characterized by chiral random matrix theory (RMT). The RMT description of the low-energy, finite-volume scaling behavior is specified by symmetry properties of the Dirac operator and the topological charge sector being considered . The RMT predictions are universal in the sense that the symmetry properties, but not the form of the potential matters . Furthermore, the properties can be derived directly from the effective, finite-volume partition functions of QCD of Leutwyler and Smilga, without the detour through RMT , though RMT nicely and succinctly describes and classifies all these properties. The topological charge enters the RMT prediction via the number of fermionic zero modes, related to the topological charge through the index theorem. The symmetry properties of the Dirac operator fall into three classes, corresponding to the chiral orthogonal, unitary, and symplectic ensembles . Examples are, respectively, fermions in the fundamental representation of gauge group SU(2), fermions in the fundamental representation of gauge group SU($`N_c`$) with $`N_c3`$, and fermions in the adjoint representation of gauge group SU($`N_c`$).
The classification according to the three RMT ensembles is connected to the chiral properties of the fermions . A good non-perturbative regularization of QCD should therefore retain those chiral properties. Until recently such a regularization was not known. The next best thing were staggered fermions, which at least retained a reduced chiral-like symmetry on the lattice. Indeed, staggered fermions were used to verify predictions of chiral RMT, albeit with two important shortcomings: (i) staggered fermions in the fundamental representation of SU(2) have the symmetry properties of the symplectic ensemble, not the orthogonal ensemble as continuum fermions, while adjoint staggered fermions belong to the orthogonal ensemble, not the symplectic one. (ii) staggered fermions do not have exact zero modes at finite lattice spacing, even for topologically non-trivial gauge field backgrounds, and thus seem to probe only the $`\nu =0`$ predictions of chiral RMT.
The development of the overlap formalism for chiral fermions on the lattice led to the massless overlap Dirac operator, a lattice regularization for vector-like gauge theories that retains the chiral properties of continuum fermions on the lattice . In particular, the continuum predictions of chiral RMT should apply. Overlap fermions have exact zero modes in topologically non-trivial gauge field backgrounds , allowing, for the first time, verification of the RMT predictions in $`\nu 0`$ sectors. The nice agreement we shall describe further validates the chiral RMT predictions and strengthens the case for the usefulness of the Overlap regularization of massless fermions.
The massless overlap Dirac operator is given by
$$D=\frac{1}{2}\left[1+\gamma _5ϵ(H_w(m))\right].$$
(1)
Here, $`\gamma _5H_w(m)`$ is the usual Wilson-Dirac operator and $`ϵ`$ denotes the sign function. The mass $`m`$ has to be chosen to be positive and well above the critical mass for Wilson fermions but below the mass where the doublers become light. We are interested in the low lying eigenvalues of the hermitian operator $`H=\gamma _5D`$ described in Ref. . We will use the Ritz algorithm applied to $`H^2`$ to obtain the lowest few eigenvalues. The numerical algorithm involves the action of $`H`$ on a vector and for this purpose one will have to use a representation of $`ϵ(H_w(m))`$. We used the rational approximation discussed in Ref. .
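The structure of Eq. (1) can be illustrated numerically on a tiny random stand-in for $`H_w(m)`$ (our sketch; it replaces the rational approximation of the sign function by an exact eigendecomposition, which is only feasible for small matrices). Any operator of this form satisfies the Ginsparg-Wilson relation $`\gamma _5D+D\gamma _5=2D\gamma _5D`$, the lattice form of chiral symmetry, and its eigenvalues lie on the circle of radius 1/2 centred at 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # toy matrix dimension
g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))   # stand-in gamma_5

# random Hermitian stand-in for H_w(m) = gamma_5 D_w
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
hw = (a + a.conj().T) / 2

# sign function via exact eigendecomposition: eps(H) = V sign(L) V^dagger
w, v = np.linalg.eigh(hw)
eps = v @ np.diag(np.sign(w)) @ v.conj().T

d = 0.5 * (np.eye(n) + g5 @ eps)         # overlap operator of Eq. (1)

# Ginsparg-Wilson relation: {gamma_5, D} = 2 D gamma_5 D
assert np.allclose(g5 @ d + d @ g5, 2.0 * d @ g5 @ d)

# the spectrum lies on the circle |lambda - 1/2| = 1/2
assert np.allclose(np.abs(np.linalg.eigvals(d) - 0.5), 0.5)
```

Both properties follow algebraically from $`ϵ(H_w)^2=1`$ alone, independently of the details of $`H_w`$.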
We computed the distribution of the lowest lying eigenvalue of the overlap Dirac operator in the fundamental representation on pure gauge SU(2) configurations with $`\beta =1.8`$ as an example of the chiral orthogonal ensemble, on pure gauge SU(3) configurations with $`\beta =5.1`$ as an example of the chiral unitary ensemble, and in the adjoint representation on pure gauge SU(2) configurations with $`\beta =2.0`$ as an example of the chiral symplectic ensemble. The lattice size was $`4^4`$ in all cases. Chiral RMT predicts that these distributions are universal when they are classified according to the three ensembles and according to the number of exact zero modes $`\nu `$ within each ensemble and then considered as functions of the rescaled variable $`z=\mathrm{\Sigma }V\lambda _{\mathrm{min}}`$. Here $`V`$ is the volume and $`\mathrm{\Sigma }`$ is the infinite volume value of the chiral condensate $`\overline{\psi }\psi `$ determined up to an overall wave function normalization, which is dependent in part on the Wilson–Dirac mass $`m`$. RMT gives the distribution of the rescaled lowest eigenvalue. A collection of the necessary formulae for the distribution of the lowest eigenvalue, $`P_{\mathrm{min}}(z)`$, can be found in .
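For orientation: in the $`\nu =0`$ sector of the chiral unitary ensemble the prediction takes a particularly simple closed form quoted in the RMT literature, $`P_{\mathrm{min}}(z)=(z/2)e^{-z^2/4}`$ (the $`\nu >0`$ and orthogonal/symplectic cases involve Bessel functions). A quick numerical check of its normalization and mean:

```python
import math

def p_min_nu0(z):
    """chGUE (quenched, nu = 0) distribution of the rescaled lowest
    eigenvalue z = Sigma V lambda_min."""
    return 0.5 * z * math.exp(-z * z / 4.0)

# trapezoidal quadrature on [0, 12]; the neglected tail is ~e^{-36}
n, zmax = 100000, 12.0
h = zmax / n
zs = [i * h for i in range(n + 1)]
ps = [p_min_nu0(z) for z in zs]
norm = h * (sum(ps) - 0.5 * (ps[0] + ps[-1]))
mean = h * sum(z * p for z, p in zip(zs, ps))
assert abs(norm - 1.0) < 1e-6                      # density integrates to 1
assert abs(mean - math.sqrt(math.pi)) < 1e-4       # mean is sqrt(pi)
```

In a one-parameter fit of this form to a measured histogram, $`\mathrm{\Sigma }`$ only rescales the horizontal axis through $`z=\mathrm{\Sigma }V\lambda _{\mathrm{min}}`$.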
We compare the RMT predictions with our data in Fig. 1. If $`\mathrm{\Sigma }`$ is known, the RMT predictions for $`P_{\mathrm{min}}(z)`$ are parameter free. On the rather small systems that we considered here, we did not obtain direct estimates of $`\mathrm{\Sigma }`$. Instead, we made one-parameter fits of the measured distributions, obtained from histograms with jackknife errors, to the RMT predictions, with $`\mathrm{\Sigma }`$ the free parameter. Our results and some additional information are given in Table 1. We note the consistency of the values for $`\mathrm{\Sigma }`$ obtained in the $`\nu =0`$ and $`\nu =1`$ sectors of each ensemble. Alternatively, we could have used the value of $`\mathrm{\Sigma }`$ obtained in the $`\nu =0`$ sector, to obtain a parameter free prediction for the distribution of the rescaled lowest eigenvalue in the $`\nu =1`$ sector. Obviously, the predictions would have come out very well.
With the fermions in the fundamental representation, we found 81 (for SU(2)) and 147 (for SU(3)) configurations with two zero modes, and 1 and 3, respectively, with three zero modes. For the orthogonal ensemble, we are not aware of a prediction for $`P_{\mathrm{min}}(z)`$ in the $`\nu =2`$ sector, while for the unitary ensemble our data, albeit with very limited statistics, agree reasonably well with the parameter-free prediction with $`\mathrm{\Sigma }`$ from Table 1.
For fermions in the adjoint representation, we keep only one of each pair of degenerate eigenvalues so $`\nu =1`$ is the sector with two exact zero modes. Such configurations cannot be assigned an integer topological charge since integer charges give rise to zero modes in multiples of four , and we note there are a significant number of configurations with two zero modes as seen in Table 1. The good agreement with the RMT prediction found in this case lends further support to the existence of configurations with fractional topological charge .
We have tested the predictions of chiral random matrix theory using the overlap Dirac operator on pure gauge field ensembles. We find the distribution of the lowest eigenvalue in the different topological sectors fits well with the predictions of chiral RMT, with compatible values for the chiral condensate from the different topological sectors.
This research was supported by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979.
no-problem/9909/astro-ph9909478.html
# Density Profiles of Dark Matter Halo are not Universal
## 1 Introduction
Can one recover (some aspects of) the initial conditions of the universe from the distribution of galaxies at $`z\approx 0`$? A conventional answer to this question is affirmative, provided that the effect of a spatial bias is well understood and/or if it does not significantly alter the interpretation of the observed distribution. This consensus underlies the tremendous effort in the past and at present to extract the cosmological implications from the existing and future galaxy redshift surveys. The two-point correlation function $`\xi (r)`$ is a good example supporting this idea; on large scales it is trivially related to the primordial spectrum of mass fluctuations, $`P_\mathrm{i}(k)`$. Furthermore the effective power-law index of the two-point correlation function on sufficiently small scales is related to the initial power-law index $`n_\mathrm{i}`$ of $`P_\mathrm{i}(k)\propto k^{n_\mathrm{i}}`$ as $`\xi (r)\propto r^{-3(3+n_\mathrm{i})/(5+n_\mathrm{i})}`$ (e.g. Peebles (1980); Suginohara et al. (1991); Suto (1993)). In other words, the initial conditions of the universe are imprinted in the behavior of galaxies on small scales (again apart from the effect of bias). This is why the phenomenological fitting formulae for the nonlinear power spectrum (Hamilton et al., 1991; Jain, Mo, & White, 1995; Peacock & Dodds, 1996; Ma, 1998) turn out to be so successful. This fact, however, seems to be in conflict with the universal density profile proposed by Navarro, Frenk & White (1996,1997; hereafter NFW) for virialized dark matter halos. In their study, NFW selected halos which appear to be virialized, and found that the density profiles universally obey the NFW form $`\rho (r)\propto r^{-1}(r+r_s)^{-2}`$. It is yet unclear to what degree their results are affected by their selection criterion, which is not well-defined. In general, different halos should have experienced different merging histories depending on their environment and mass.
Thus even if the halos do have a universal density profile statistically (i.e., after averaging over many realizations), it is also natural that individual halo profiles are intrinsically scattered around the universal profile (Jing, 1999). Definitely this is a matter of semantics to a certain extent; the most important finding of NFW is that such halo-to-halo variations are surprisingly small.
A universal density profile was also reported by Moore et al. (1999) on the basis of high-resolution simulations of one cluster-mass halo and four galaxy-mass halos; they claim that the density profile behaves as $`\rho (r)\propto r^{-1.5}`$ in the innermost region. In what follows, we address the following specific, quantitative questions concerning the halo profile, especially its innermost region, using high-resolution $`N`$-body simulations: is the inner slope of the halo profile really described universally by $`\rho (r)\propto r^{-1}`$ or $`\rho (r)\propto r^{-1.5}`$, as NFW and Moore et al. (1999) claimed? If not, does the slope vary among different halos? Is there any systematic correlation between the slope and the mass of halos?
In fact, some of the above questions have been partially addressed previously with different approaches and methodologies (Fukushige & Makino, 1997; Evans & Collet, 1997; Moore et al., 1998; Syer & White, 1998; Nusser & Sheth, 1999; Jing, 1999; Avila-Reese et al., 1999). In order to revisit them in a more systematic and unambiguous manner, we have developed a nested grid P<sup>3</sup>M N-body code designed for the current problem so as to achieve the required numerical resolution within the available computer resources. This enables us to simulate 12 realizations of halos in a low-density cold dark matter (LCDM) universe with (0.5–1)$`\times 10^6`$ particles in the mass range $`10^{12}`$–$`10^{15}M_{}`$.
## 2 Simulation procedure
As Fukushige & Makino (1997) and later Moore et al. (1998) demonstrated, the inner profile of dark matter halos is substantially affected by the mass resolution of simulations. To ensure the required resolution (at least comparable to theirs), we adopt the following two-step procedure. A detailed description of the implementation and resolution test will be presented elsewhere.
First we select dark matter halos from our previous cosmological P<sup>3</sup>M N-body simulations with $`256^3`$ particles in a $`(100h^{-1}\mathrm{Mpc})^3`$ cube (Jing & Suto, 1998; Jing, 1998). To be specific, we use one simulation of the LCDM model with $`\mathrm{\Omega }_0=0.3`$, $`\lambda _0=0.7`$, $`h=0.7`$ and $`\sigma _8=1.0`$ according to Kitayama & Suto (1997). The mass of an individual particle in this simulation is $`7\times 10^9M_{}`$. The candidate halo catalog is created using the friends-of-friends grouping algorithm with a linking length of 0.2 times the mean particle separation.
We choose twelve halos in total from the candidate catalog so that they span the mass scales of clusters, groups, and galaxies (Table 1). Except for the mass range, the selection is random, but we had to exclude about 40% of the galactic-mass halos from the original candidates since they have a neighboring halo of much larger mass. We use the multiple mass method to re-simulate them. To minimize the contamination of the halo properties within the virial radius at $`z=0`$, $`r_{\mathrm{vir}}`$, by the coarse particles, we trace back the particles within $`3r_{\mathrm{vir}}`$ of each halo to their initial conditions at redshift $`z=72`$. This is more conservative than the procedure adopted in previous studies, and in fact turned out to be important for galactic-mass halos. Note that we define $`r_{\mathrm{vir}}`$ such that the spherical overdensity inside is $`18\pi ^2\mathrm{\Omega }_0^{0.4}\simeq 110`$ times the critical density, $`\rho _{\mathrm{crit}}(z=0)`$.
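As a quick arithmetic check of the overdensity threshold quoted above (a minimal sketch; the fitting form $`18\pi ^2\mathrm{\Omega }_0^{0.4}`$ is taken directly from the text, and the function name is ours):

```python
import math

def virial_overdensity(omega0):
    # Overdensity threshold, in units of the critical density,
    # used in the text to define r_vir: 18 * pi^2 * Omega_0^0.4
    return 18.0 * math.pi**2 * omega0**0.4

# For the LCDM model used here (Omega_0 = 0.3) this gives ~110,
# the value quoted in the text; for Omega_0 = 1 it reduces to 18*pi^2.
print(round(virial_overdensity(0.3)))  # -> 110
```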
Then we regenerate the initial distribution in the cubic volume enclosing these halo particles with a larger number of particles, by adding shorter-wavelength perturbations to the identical initial fluctuation of the cosmological simulation. Next we group fine particles into coarse particles (consisting of at most 8 fine particles) within the high-resolution region if they are not expected to enter the central halo region within $`3r_{\mathrm{vir}}`$. As a result, there are typically $`2.2\times 10^6`$ simulation particles, $`1.5\times 10^6`$ fine particles and $`0.7\times 10^6`$ coarse particles for each halo. Finally about (0.5–1)$`\times 10^6`$ particles end up within $`r_{\mathrm{vir}}`$ of each halo. Note that this number is significantly larger than those of NFW, and comparable to those of Fukushige & Makino (1997) and Moore et al. (1998, 1999). The contamination by coarse particles, measured by the ratio of the mass of the coarse particles within the virial radius to the total virial mass, is small: about $`10^{-4}`$, $`10^{-3}`$, and $`10^{-2}`$ for cluster, group, and galactic halos, respectively.
We evolve the initial conditions for the selected halos generated as above using a new code developed specifically for the present purpose. The code adds a nested-grid refinement feature to the original P<sup>3</sup>M N-body code of Jing & Fang (1994). Our code uses a constant gravitational softening length in comoving coordinates, and we change its value at $`z=4`$, 3, 2, and 1 so that the proper softening length (about 3 times the Plummer softening length) becomes 0.004 $`r_{vir}`$. Thus our simulations effectively employ a constant softening length in proper coordinates at $`z\le 4`$. The first refinement is placed to include all fine particles, and the particle-particle (PP) short range force is added to compensate for the larger softening of the particle-mesh (PM) force. When the CPU time of the PP computation exceeds twice that of the PM calculation as the clustering develops, a second refinement is placed around the center of the halo with a physical size about $`1/3`$ of that of the first refinement. The mesh size is fixed to $`360^3`$ for the parent periodic mesh and for the two isolated refinements. The CPU time for each step is about $`1.5`$ minutes at the beginning and increases to $`\sim 5`$ minutes at the final epoch of the simulation on one vector processor of Fujitsu VPP300 (peak CPU speed of 1.6 GFLOPS). A typical run of $`10^4`$ time steps, which satisfies the stability criteria (Couchman et al. 1995), takes $`\sim 700`$ CPU hours to complete.
## 3 Results
Figure 1 displays the snapshots of the twelve halos at $`z=0`$. Clearly all the halos are far from spherically symmetric, and are surrounded by many substructures and merging clumps. This is qualitatively similar to what Moore et al. (1998, 1999) found for their high-resolution halos in the $`\mathrm{\Omega }_0=1`$ CDM model. The corresponding radial density profiles are plotted in Figure 2. The halo center is defined as the position of the particle with the minimum potential among the particles within the sphere of radius $`r_{vir}`$ around the center-of-mass of the fine particles. In spite of the existence of apparent sub-clumps (Fig. 1), the spherically averaged profiles are quite smooth and similar to each other, as first pointed out by NFW. The inner slope of the profiles, however, is generally steeper than the NFW value, $`-1`$, in agreement with the previous findings of Fukushige & Makino (1997) and Moore et al. (1998). We have fitted the profiles to $`\rho (r)\propto r^{-\beta }(r+r_s)^{-3+\beta }`$ with $`\beta =1.5`$ (similar to the form used by Moore et al. 1999; the solid curves) and $`\beta =1`$ (the NFW form; the dotted curves) for $`0.01r_{200}\le r\le r_{200}`$, where $`r_{200}`$ is the radius within which the spherical overdensity is $`200\rho _{\mathrm{crit}}(z=0)`$. The resulting concentration parameter $`c`$, defined as $`r_{200}/r_s`$, is plotted in the left panel of Figure 3. This is the most accurate determination of the concentration parameter for the LCDM model. There is significant scatter in $`c`$ among halos of similar mass (Jing, 1999), and a clear systematic dependence on halo mass (NFW; Moore et al. 1999).
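A minimal sketch of the kind of profile fit described above, for illustration only (the paper's actual fit spans $`0.01r_{200}`$ to $`r_{200}`$ with its own binning and error weighting; the grid search and synthetic data here are our own simplifications). It builds a synthetic $`\beta =1.5`$ halo and checks that the $`\beta =1.5`$ form fits it better than the NFW ($`\beta =1`$) form:

```python
import math

def rho(r, beta, rs):
    # Generalized profile rho(r) ~ r^{-beta} * (r + r_s)^{-(3 - beta)}
    return r**(-beta) * (r + rs)**(-(3.0 - beta))

def fit_rs(radii, logrho, beta, rs_grid):
    """Grid-search r_s minimizing log-density residuals; the overall
    normalization is profiled out analytically as a mean offset."""
    best = None
    for rs in rs_grid:
        model = [math.log10(rho(r, beta, rs)) for r in radii]
        off = sum(d - m for d, m in zip(logrho, model)) / len(radii)
        chi2 = sum((d - m - off)**2 for d, m in zip(logrho, model))
        if best is None or chi2 < best[0]:
            best = (chi2, rs)
    return best  # (residual, best-fit r_s)

# Synthetic halo: beta = 1.5, r_s = 0.2 (radii in units of r_200)
radii = [0.01 * 1.3**i for i in range(18)]   # log-spaced, 0.01 .. ~0.9
data = [math.log10(rho(r, 1.5, 0.2)) for r in radii]
grid = [0.05 * k for k in range(1, 21)]      # scan r_s over 0.05 .. 1.0

chi_15, rs_15 = fit_rs(radii, data, 1.5, grid)
chi_10, rs_10 = fit_rs(radii, data, 1.0, grid)
# The correct inner slope recovers r_s and gives a smaller residual:
print(rs_15, chi_15 < chi_10)  # -> 0.2 True
```

The normalization is profiled out analytically, so only $`r_s`$ is scanned; a real fit would also propagate the measurement errors of the binned density.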
The most important result is that the density profiles of the 4 galactic halos are all well fitted by $`\beta =1.5`$, while those of the cluster halos are better fitted by the NFW form, $`\beta =1`$. This is in contrast with Moore et al. (1999), who concluded that both galactic and cluster halos have the inner density profile $`\rho (r)\propto r^{-1.5}`$, even though they considered only one cluster-mass halo. In fact, our current sample can address this question in a more statistical manner. CL1 has significant substructures, while the other three are nearly in equilibrium. Interestingly, the density profiles of CL2 and CL3 are better fitted by the NFW form, and that of CL4 is in between the two forms. The density profiles of the group halos are in between those of the galactic and cluster halos, as expected: one is better fitted by the NFW form, whereas the other three follow the $`\beta =1.5`$ form.
To examine this more quantitatively, we plot the inner slope, fitted to a power law over $`0.007<r/r_{200}<0.02`$, as a function of the halo mass in the right panel of Figure 3. This figure indicates two important features: a significant scatter of the inner slope among halos of similar mass, and a clear systematic trend of steeper profiles for smaller masses. For reference we plot the predictions for the slope, $`3(3+n)/(4+n)`$ by Hoffman & Shaham (1985) and $`3(3+n)/(5+n)`$ by Syer & White (1998), using for $`n`$ the effective power-law index $`n_{\mathrm{eff}}`$ of the linear power spectrum at the corresponding mass scale (Table 1). With a completely different methodology, Nusser & Sheth (1999) argue that the slope of the density profile within $`0.01r_{\mathrm{vir}}`$ lies between the above two values. On the basis of the slope–mass relation that we discovered, we disagree with their interpretation; for the galactic halos, the analytical predictions could be brought into agreement with our simulations only if the effective index were $`n\simeq -2`$, which is much larger than the actual value $`\simeq -2.5`$ on that scale.
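The two analytic slope predictions quoted above are easy to tabulate; the following quick evaluation (our own arithmetic check) illustrates the mismatch discussed in the text: reproducing the steep $`\beta =1.5`$ galactic-halo slopes with the Hoffman–Shaham form would require an effective index near $`-2`$ rather than the actual $`n_{\mathrm{eff}}\simeq -2.5`$ on galactic scales:

```python
def hoffman_shaham(n):
    # Predicted inner-slope magnitude 3(3+n)/(4+n), Hoffman & Shaham (1985)
    return 3.0 * (3.0 + n) / (4.0 + n)

def syer_white(n):
    # Predicted inner-slope magnitude 3(3+n)/(5+n), Syer & White (1998)
    return 3.0 * (3.0 + n) / (5.0 + n)

for n in (-2.5, -2.0):
    print(n, hoffman_shaham(n), syer_white(n))
# n = -2.5 gives 1.0 and 0.6; only n = -2.0 pushes Hoffman-Shaham to 1.5
```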
We would like to emphasize that our results are robust against numerical resolution effects, for the following reasons. Since we have used the same time steps and the same force softening length in terms of $`r_{200}`$, any resolution effect, which is generally expected to make the inner slope of $`\rho (r)`$ shallower, should influence the result for the galactic halos more than that for the cluster halos. In fact this is opposite to what we found in the simulations. Furthermore, our high-resolution simulation results agree very well with those of the lower-resolution cosmological simulations (open triangles) for the cluster halos on scales larger than their force softening length (short thin lines at the bottom of Figure 2). We have also repeated the simulations of several halos employing 8 times fewer particles and a 2 times larger softening length, and verified that the force softening length $`0.005r_{200}`$ (the vertical dashed lines of Figure 2) is a good indicator of the resolution limit.
## 4 Conclusion and Discussion
In this Letter we have presented the results of the largest systematic study to date of dark matter density profiles. This is the first study to simulate a dozen dark halos with about a million particles each in a flat low-density CDM universe. This enables us to address the profiles of the halos with unprecedented accuracy and statistical reliability. While the qualitative aspects of our results are not inconsistent with those reported by Moore et al. (1999), our larger sample of halos provides convincing evidence that the form of the density profiles is not universal; instead it depends on halo mass. Since mass and formation epoch are linked in hierarchical models, the mass dependence may reflect an underlying link to the age of the halo: older galactic halos more closely follow the $`\beta =1.5`$ form, while younger cluster halos have shallower inner density profiles fitted better by the NFW form. Whether this difference represents secular evolution remains to be investigated in future numerical experiments.
Our results are not fully anticipated by the existing analytical work. Although the analytical studies (Syer & White, 1998; Nusser & Sheth, 1999; Lokas, 1999) concluded that the inner profile should be steeper than $`-1`$, their interpretation and/or predicted mass dependence differ from our numerical results. This implies that while their arguments may capture some of the relevant physics, they do not fully account for the intrinsically complicated nonlinear dynamical evolution of non-spherical self-gravitating systems.
We also note that the small-scale power which was missed in the original cosmological simulation has been added to the initial fluctuations of the halos. The fact that each halo has approximately the same number of particles means that more (smaller-scale) power has been added to the low-mass halos than to the high-mass ones. It is not yet clear how much this numerical systematic effect contributes to the mass dependence of the inner slope found in this paper, and we will investigate this question in future work.
In summary, the mass dependence of the inner profile indicates the difficulty in understanding the halo density profile from the cosmological initial conditions in a straightforward manner. Even if the density profiles of dark halos are not universal to the extent which NFW claimed, however, they definitely deserve further investigation from both numerical and analytical points of view.
We thank J. Makino for many stimulating discussions and suggestions, and the referee for a very detailed report which significantly improves the presentation of this paper. Y.P.J. gratefully acknowledges support from a JSPS (Japan Society for the Promotion of Science) fellowship. Numerical computations were carried out on VPP300/16R and VX/4R at ADAC (the Astronomical Data Analysis Center) of the National Astronomical Observatory, Japan, as well as at RESCEU (Research Center for the Early Universe, University of Tokyo) and KEK (High Energy Accelerator Research Organization, Japan). This research was supported in part by the Grant-in-Aid by the Ministry of Education, Science, Sports and Culture of Japan (07CE2002) to RESCEU, and by the Supercomputer Project (No.99-52) of KEK.
# The Kinematics of the Outer Halo of M87

Based in large part on observations obtained at the W.M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California.
## 1 Introduction
Cohen & Ryzhov (1997) (henceforth CR) present an analysis of the kinematics of M87 based on radial velocities for 205 globular clusters (henceforth GCs) in the halo of M87. In that paper we adopted a rotation curve which rises linearly with radius from the center of M87 ($`R`$) until reaching 180 km s<sup>-1</sup> at a radius of 225 arcsec. Beyond that point, and particularly beyond $`R\sim 350`$ arcsec, our sample became too sparse to establish a meaningful rotation curve, and, following the example of the Galaxy, we adopted the fixed value of 180 km s<sup>-1</sup> for the rotation of the outer part of M87.
The rotation of M87 is important for two reasons. The total angular momentum of M87 is a clue to the mode of formation of this massive elliptical galaxy at the center of the Virgo cluster cooling flow. This parameter can help distinguish between formation via gravitational collapse, as advocated by Peebles (1969), and formation via mergers. In addition, a major goal of CR was the determination of the enclosed mass of M87 as a function of $`R`$. A proper dynamical mass estimate requires knowledge of both rotation and dispersion.
Kissler-Patig & Gebhardt (1998) suggested, based on a reanalysis of our previously published data, that the rotation in the outer part of M87 is large and decreases the observed $`\sigma _v`$ more than we allowed for in our previous work. Since our sample of GCs there is very sparse, the purpose of this paper is to present radial velocities for an additional group of M87 globular clusters selected to be in spatial positions which maximize their ability to constrain the rotation of the outer halo of M87. We then combine this new data with our previously published data to re-examine the issue of the rotation and velocity dispersion in the outer part of M87.
Whitmore et al. (1995) derive a distance to M87 of 17 Mpc from the turnover of the M 87 GC luminosity function. This is in excellent agreement with the mean of the Cepheid distances for Virgo cluster spirals from Pierce et al. (1994) and from Ferrarese et al. (2000) of 16 Mpc, ignoring NGC 4639 (Saha et al. 1996) as a background object. Hence we adopt 16.3 Mpc (80 pc/arcsec) as the distance to M87.
## 2 New Observations of M87 GCs
We observed a single slitmask of M87 GC candidates with the Low Resolution Imaging Spectrograph (henceforth LRIS) (Oke et al. 1995) at the Keck Telescope during the spring of 1999. This mask was designed to include objects located along the major axis of M87 $`\sim 400`$ arcsec SE of the nucleus, with the slit length running perpendicular to the major axis. It contains objects with $`R\sim 300`$ to 540 arcsec from the center of M87. Candidates from the catalog of Strom et al. (1981) in this area were checked on existing direct images to determine if they appeared to be M87 GCs. Those that are brighter than $`B=22`$ mag and that were not observed by CR were included here. In addition, our new sample is at the maximum radius from M87 of the Strom et al. survey. Since we wish to include M87 GCs at even larger radii, we selected several stellar objects from the area on these LRIS images beyond that of the Strom et al. survey which appeared to be M87 GCs. Finally, we included the object in this region with the most discordant $`v_r`$ from CR.
The slitmask was observed with the same instrumental configuration of LRIS as was used in CR, but the grating was tilted to center the spectra at H$`\alpha `$. Only H$`\alpha `$ was used to determine the radial velocities. Because of the plethora of night sky lines and the use of a 1 arcsec wide slit, these $`v_r`$ are more accurate than those of CR, with typical 1$`\sigma `$ errors of only $`\pm `$50 km s<sup>-1</sup>. Table 1 lists the new radial velocities. There are 20 entries, four of which are included in CR. Two of the M87 GCs were found through the procedure described above and have not been previously cataloged. They are assigned identifying numbers beginning with 6000; their location and brightness in the $`R`$ filter bandpass using the standard stars of Landolt (1992) are given in footnotes to the table. Both are more than 500 arcsec from the center of M87.
The $`v_r`$ from CR for the four cases in common are given in the last column of Table 1. The agreement is very gratifying and suggests yet again that the quoted error estimates for these GC radial velocities are realistic.
Several interlopers were also found among these new spectroscopic observations. Strom 81 is a galaxy showing strong H$`\alpha `$ emission with $`z=0.335`$, and Strom 87, 221 and 286 are galactic stars.
## 3 Rotation and Velocity Dispersion Analysis
The sample of 16 new GCs in M87 presented in Table 1 more than doubles the sample with spectroscopic $`v_r`$ with $`R>420`$ arcsec. This should produce credible measures of the rotation and velocity dispersion in the outer part of M87. The sample of M87 GC radial velocities we utilize below is that of CR as augmented and updated in Cohen, Blakeslee & Ryzhov (1998) plus the new material presented in Table 1. This gives a total of 222 objects believed to be M87 GCs.
### 3.1 Qualitative Results
To demonstrate in a simple yet convincing manner that significant rotation exists in the outer halo of M87, we assume that the rotation is about a fixed position angle, that characteristic of the isophotes of M87. A modern study of the isophotes of M87 by Zeilinger, Moller & Stiavelli (1993) finds the major axis of the galaxy to be at PA = 160° and the effective radius to be $`\sim 90`$ arcsec (7.2 kpc). The ellipticity they deduce increases with radius outside the core, reaching 0.2 at 1.3$`r_{eff}`$. (In early work on this galaxy, Cohen 1986 found $`ϵ\simeq 0.2`$ at $`R=230`$ arcsec with a position angle of 155°.) McLaughlin, Harris & Hanes (1994) establish that the PA of the M87 GC system is identical to that of the underlying galaxy halo light. Kundu et al. (1999) find the PA of the globular cluster system very close to the center of M87 to be somewhat larger, $`190^{\circ }`$. The analysis of the isophotes of this galaxy by Blakeslee (1999) covers this entire range of $`R`$ and shows the twisting of their major axis within the central 20 arcsec and the increase of $`ϵ`$ outward.
We further assume that the mean velocity at all radii is the systemic velocity of M87, 1277$`\pm `$5 km s<sup>-1</sup> (van der Marel 1994). Figure 1 shows the rotation curve deduced under these assumptions for all M87 GCs beyond $`R=380`$ arcsec, with separate symbols for the new observations presented here and for the published data of CR. The horizontal line indicates the systemic velocity while the curve shown is for $`v_{rot}sin(i)=300`$ km s<sup>-1</sup>. The sample at such large distances from the center of M87 is now reasonably large (31 M87 GCs), with good coverage near PA = 160° and 340°, i.e. along the major axis both towards the SE and the NW from the center of M87, and clearly demonstrates that the outer part of M87 is rotating rapidly. In addition, Figure 1 shows that the velocity dispersion of the M87 GC system is still very large, $`\sim 400`$ km s<sup>-1</sup>, even at the outermost point reached.
### 3.2 Quantitative Determination of the Rotation Curve of M87
Both the analysis given in CR and that of Kissler-Patig & Gebhardt (1998) suggest that the assumptions made above with respect to the position angle of the axis of rotation and the systemic velocity are correct, and we therefore adopt them for our detailed analysis of the rotation of M87.
GCs that are located on the minor axis constrain the mean velocity and velocity dispersion but contribute no information towards determining the amplitude of rotation. Having made the above assumptions, and given the large velocity dispersion of the M87 GC system compared to the expected rotation, the GCs near the minor axis of M87 contribute mostly noise to the determination of the amplitude of rotation. We thus do not include clusters with $`|cos(\theta -\theta _0)|<0.30`$ in the rotation solution, the choice of angle being somewhat arbitrary but based on the ratio of $`\sigma _v/v_{rot}`$. This excludes GCs in two arcs, each $`\sim 35\mathrm{deg}`$ long, centered on the two ends of the minor axis, or 19% of the total sample if the GCs are distributed uniformly in angle at all radii. All GCs, including the ones rejected here, are subsequently utilized to determine the velocity dispersion.
The rotation analysis is thus reduced to finding a suitable statistically accurate representation of the set of values $`\{v_{rot}(R)\}_i=(v_r(i)-v_{sys})/cos(\theta -\theta _0)`$ for GCs within a specified range of $`R`$. Within each radial bin considered, a two-step $`\chi ^2`$ minimization solution was implemented to solve for an appropriate value of the amplitude of the rotation, $`v_{rot}(R)`$. The errors in the individual terms on the right hand side of the above expression, even for a constant observational uncertainty in $`v_r`$, are highly variable and depend on $`\theta `$. The procedure adopted allows for these varying uncertainties. In the first pass, an initial guess at $`\sigma _v(R)`$ is used, and a solution for $`v_{rot}(R)`$ is found. This solution for $`v_{rot}(R)`$ is used to derive $`\sigma _v(R)`$. Since $`\sigma _v(R)`$ is used as the error estimate for each point, a second-pass solution, which only makes very small updates, is then carried out to derive $`v_{rot}(R)`$.
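The two-pass procedure can be sketched as follows (a simplified illustration, not the authors' code: it assumes constant measurement errors, a single radial bin, and the same minor-axis cut $`|cos(\theta -\theta _0)|<0.30`$; the synthetic numbers below are merely chosen to resemble the M87 values):

```python
import math, random

def fit_rotation(vr, theta, vsys, theta0, cut=0.30, passes=2):
    """Two-pass estimate of the rotation amplitude v_rot from
    v_r = v_sys + v_rot*cos(theta - theta0) + noise.  The first pass
    fits v_rot with an initial guess for sigma_v; sigma_v is then
    measured from the residuals (over ALL points, as in the text) and
    used as the per-point error in the second pass.  With uniform
    errors the second-pass update is small, as noted in the text."""
    c = [math.cos(t - theta0) for t in theta]
    y = [v - vsys for v in vr]
    keep = [i for i in range(len(vr)) if abs(c[i]) >= cut]
    sigma = 1.0
    for _ in range(passes):
        w = 1.0 / sigma**2
        vrot = (sum(w * y[i] * c[i] for i in keep)
                / sum(w * c[i]**2 for i in keep))
        resid = [y[i] - vrot * c[i] for i in range(len(vr))]
        sigma = math.sqrt(sum(r * r for r in resid) / len(resid))
    return vrot, sigma

# Synthetic bin: 200 GCs with v_rot = 300, sigma_v = 400 (km/s),
# theta0 = 2.79 rad (~160 deg), v_sys = 1277 km/s
rng = random.Random(42)
theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(200)]
vr = [1277.0 + 300.0 * math.cos(t - 2.79) + rng.gauss(0.0, 400.0)
      for t in theta]
vrot, sigma = fit_rotation(vr, theta, 1277.0, 2.79)
print(round(vrot), round(sigma))  # close to the input 300 and 400
```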
The GCs in the spectroscopic sample are sorted in ascending order in $`R`$. The analysis is carried out with 30 point bins at the extreme inward and outward points, increasing to bins with 50 GCs wherever possible. The bin center is shifted outward by 1 GC, and the solution is repeated. Figure 2 displays the resulting solution for $`v_{rot}(R_m)`$ as a function of the median distance $`R_m`$ from the center of M87 for the GCs in each bin. The errors are calculated assuming Gaussian statistics within each bin. The radial extents for a few typical bins are indicated by the horizontal lines in the Figure. The rotation curve is heavily oversampled; there are only five independent points on this Figure.
### 3.3 Determination of the Velocity Dispersion
The calculation of the velocity dispersion requires removal of the rotational velocity. This is done using a smoothed version of the rotation amplitude found as described above. The biweight estimator described in Beers, Flynn & Gebhardt (1990), which is strongly resistant to outliers, is used. The instrumental contribution to $`\sigma _v(R)`$ is also removed in quadrature. The same variable binning with $`R`$ used for the rotation solution is adopted here. The entire spectroscopic sample of M87 GCs is used.
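For reference, a compact implementation of the biweight scale estimator of Beers, Flynn & Gebhardt (1990) (a sketch using their standard tuning constant $`c=9`$; the short velocity sample below is invented for illustration):

```python
import statistics

def biweight_scale(x, c=9.0):
    """Outlier-resistant scale estimate S_BI (Beers et al. 1990):
    points more than c median-absolute-deviations from the median
    are smoothly down-weighted and finally excluded (|u| >= 1)."""
    m = statistics.median(x)
    mad = statistics.median([abs(v - m) for v in x])
    u = [(v - m) / (c * mad) for v in x]
    num = sum((x[i] - m)**2 * (1 - u[i]**2)**4
              for i in range(len(x)) if abs(u[i]) < 1)
    den = sum((1 - u[i]**2) * (1 - 5 * u[i]**2)
              for i in range(len(x)) if abs(u[i]) < 1)
    return (len(x) * num)**0.5 / abs(den)

# One wildly discordant velocity barely perturbs the biweight scale,
# while it badly inflates the ordinary standard deviation:
sample = [1277.0 + 400.0 * z for z in
          (-1.5, -1.0, -0.6, -0.3, -0.1, 0.1, 0.3, 0.6, 1.0, 1.5)]
clean = biweight_scale(sample)
dirty = biweight_scale(sample + [9999.0])
print(abs(dirty - clean) < 0.2 * clean)  # -> True
```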
The resulting radial profile of $`\sigma _v`$ is shown in Figure 2. The radial extent of a few typical bins is indicated by the horizontal lines in the Figure. As is the case for Figure 1, the velocity dispersion profile shown in this Figure is heavily oversampled.
## 4 Discussion
### 4.1 Comments on Galaxy Formation
Both the qualitative and the quantitative analyses show that the outer part of the halo of M87, at $`R\sim 400`$ arcsec, 32 kpc from the center of M87 and at 4.4$`r_{eff}`$, is rotating with a projected rotational velocity of $`\sim 300`$ km s<sup>-1</sup>. This is a very large rotational velocity to be found so far out in M87, implying that the total angular momentum of M87 is very large. It is interesting to note that the halo population of the Galactic globular cluster system shows a rotation of only 50 $`\pm `$23 km s<sup>-1</sup> while the disk population of Galactic GCs is highly flattened with $`v_{rot}`$ 152 $`\pm 29`$ km s<sup>-1</sup> (Zinn 1985).
Among the various theories of galaxy formation, there are several that are often applied to elliptical galaxies. The theory of dissipationless collapse from a single gas cloud through a gravitational instability, with acquisition of angular momentum through tidal torques, was worked out by Peebles (1969). Binney (1978) calculated the expected rotation velocities of elliptical galaxies under various assumptions regarding orbit anisotropy. Detailed N-body simulations along these lines have been carried out by several groups, including Barnes & Efstathiou (1987), Stiavelli & Sparke (1991) and Ueda et al. (1994). The relevant parameter for comparing observations with analytical and numerical models of galaxy formation is the spin parameter (Peebles 1971), a dimensionless combination of the total angular momentum, total mass, and total energy of a galaxy, $`\lambda =J|E|^{1/2}G^{-1}M^{-5/2}`$. This does not exceed 0.1 for such models, while Kissler-Patig & Gebhardt (1998) find $`\lambda \simeq 0.18`$ for M87, with the caution that calculating $`\lambda `$ from our existing observational material requires an extrapolation to the half-mass radius in the halo of M87, much further out in $`R`$. The complication of possible triaxial shapes rather than oblate isotropic rotators, reviewed by de Zeeuw & Franx (1991), further obscures the validity of such calculations.
The other major theory currently in vogue for the formation of elliptical galaxies is through the merger of several large gas-rich fragments, as originally suggested by Toomre & Toomre (1972). Such models come in various flavors, with the mergers happening relatively late and the pieces being entire galaxies as in Kormendy (1989) and Kormendy & Sanders (1992), or happening at early times and involving protogalactic clumps as in White & Rees (1977). Recent numerical simulations of this type of model, expanding on the earlier work of Barnes (1988), can be found in Hernquist & Bolte (1993) and in Bekki (1998). It is likely that such a model is more capable of reproducing the high rotation we find. Furthermore, the observation by Weil, Bland-Hawthorn & Malin (1997) of non-symmetric diffuse light in the halo of M87 extending 100 kpc from its center may also be explained by the recent accretion of a low mass galaxy.
### 4.2 The Effect of the Present Results on Those of Cohen & Ryzhov (1997)
Even with the large rotation we have found in the outer part of M87, the velocity dispersion there, after the observed values are corrected for rotation, is still high. $`\sigma _v`$ in the outermost bin of CR agrees to within 5% with the value found here. Only in one of the nine radial bins used by CR is $`\sigma _v`$ from the current solution smaller than the value given in CR. We therefore expect that the results of CR with respect to the distribution of mass within M87 are still approximately correct.
## 5 Summary
We have presented spectroscopic observations of a new sample of M87 GCs chosen to put maximum leverage on a determination of the rotation in the outer halo of M87. Using these data combined with our previously published data, we find that the rotation of M87 increases outward and reaches a value of $`\sim 300`$ km s<sup>-1</sup> at a distance of 400 arcsec (32 kpc, 4.4 $`r_{eff}`$) from the center of this galaxy, confirming the suggestion of high rotation made by Kissler-Patig & Gebhardt (1998) on the basis of our previously published set of M87 GC radial velocities. That is rather surprising. The velocity dispersion remains high even at that large $`R`$, and the enclosed total mass is very large, as our earlier work (see CR) suggested.
The high rotation may provide a clue to the mode of formation of M87, but the calculation of total spin is uncertain as it involves extrapolation to still larger radii. In addition, M87 is a located in a very special place, the center of a very large mass concentration, a large cooling flow, etc. Its history may be quite different from that of most ellipticals, even of most massive ellipticals.
The results of the initial spectroscopic studies of the dynamics of elliptical galaxies in the early 1980s were very surprising. Davies et al. (1983) showed that low luminosity ellipticals rotate as rapidly as spiral bulges and as rapidly as predicted by models with oblate figures and isotropic distributions of residual velocities. However, as was discovered by Illingworth (1977), high luminosity ellipticals show surprisingly small values of $`v_{rot}/\sigma _v`$. Our results for the outer part of M87, where $`v_{rot}/\sigma _v\simeq 0.6`$, include no correction to $`v_{rot}`$ for projection effects, which would only make this ratio larger. Similar suggestions of high rotation in the outer parts of luminous ellipticals near the centers of large clusters of galaxies have been obtained from the analysis of a sample of globular clusters in M49 by Sharples et al. (1998) and one in NGC 1399 by Kissler-Patig et al. (1999). This gives rise to some interesting questions. Is this large rotation in the outer parts of the M87 GC system also shared by the M87 stellar halo? How far out does this rotation continue? To be provocative, we might ask whether most luminous elliptical galaxies have $`v_{rot}/\sigma _v>0.5`$ in their outer parts, and whether this was missed in earlier studies due to limitations on slit length and on surface brightness in those spectroscopic studies.
One of the few ways to explore these issues is to attempt to find large samples of GCs even further out in the halo of M87. We have paved the way with a first identification of M87 GCs beyond the spatial limit of existing surveys.
The entire Keck/LRIS user community owes a huge debt to Jerry Nelson, Gerry Smith, Bev Oke, and many other people who have worked to make the Keck Telescope and LRIS a reality. We are grateful to the W. M. Keck Foundation, and particularly its late president, Howard Keck, for the vision to fund the construction of the W. M. Keck Observatory. We thank John Blakeslee and Patrick Côté for helpful discussions.
# An interpretation of Tsallis statistics based on polydispersity
## Abstract
It is argued that polydispersed systems like colloids provide a direct example where Tsallis’ statistical distribution is useful for describing the hierarchical nature of the system based on particle size.
05.20-y, 05.70-a, 61.25.Hq, 64.10.+h
It is believed that systems which have hierarchical organization in phase-space, involve long-range interactions, display spatio-temporal complexity or have (multi)fractal boundary conditions, are not adequately treated within the statistical framework of Boltzmann and Gibbs. Such systems are said to exhibit nonextensive behaviour and Tsallis has advanced a thermostatistical approach based on a generalization of Shannon entropy as
$$S_q^T=\frac{1-\sum _{i=1}^W(p_i)^q}{q-1}.$$
(1)
Under the generalized constraints , the maximum entropy principle yields the following equilibrium distribution
$$p_i\propto \{1-(1-q)\beta \epsilon _i\}^{1/(1-q)},$$
(2)
where $`\beta `$ is the Lagrange multiplier and $`\epsilon _i`$ is a random variable like energy, satisfying some constraint analogous to a mean value. For $`q\to 1`$, we recover the Boltzmann or canonical distribution. The Tsallis formalism is finding a growing number of applications which do lend support to the validity of the underlying postulates; still, it is not fully established what physical principles are involved therein. It has been conjectured that the Tsallis formalism alludes to a scale-invariant statistical mechanics. More precisely, a Tsallis-like distribution can be derived by assuming an ensemble which has replicas of the same system at different scales. Typical examples of these are polydispersed systems like polymers, commercial surfactants, colloidal suspensions and critical spin systems on hierarchical lattices. In this letter, we present an interpretation of the Tsallis distribution in terms of colloidal polydispersity.
Recently , for the case of Levy flights, the exponential distribution $`\mathrm{exp}(-x/\lambda )`$ was mapped onto the Tsallis distribution by considering fluctuations in the parameter $`\lambda `$ about the mean value $`\lambda _0`$. The appropriate distribution function turns out to be the Gamma distribution
$$f\left(\frac{1}{\lambda }\right)=\frac{1}{\mathrm{\Gamma }(\alpha )}(\alpha \lambda _0)^\alpha \left(\frac{1}{\lambda }\right)^{\alpha -1}\mathrm{exp}\left(-\alpha \frac{\lambda _0}{\lambda }\right).$$
(3)
Thus
$$\int _0^{\infty }e^{-x/\lambda }f\left(\frac{1}{\lambda }\right)d\left(\frac{1}{\lambda }\right)=\left(1+\frac{1}{\alpha }\frac{x}{\lambda _0}\right)^{-\alpha },$$
(4)
which, on identifying $`\alpha =1/(q-1)`$, becomes the Tsallis-like distribution, Eq. (2). The Tsallis index is given in terms of the relative variance
$$\omega =\frac{\left\langle \left(\frac{1}{\lambda }\right)^2\right\rangle -\left\langle \frac{1}{\lambda }\right\rangle ^2}{\left\langle \frac{1}{\lambda }\right\rangle ^2}=\frac{1}{\alpha }=(q-1).$$
(5)
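The mapping in Eqs. (3)–(4) can be verified numerically. The short sketch below is my own illustration (the parameter values are arbitrary choices, not taken from the letter): it integrates the Gamma-weighted mixture of exponentials by a simple Riemann sum and compares it with the closed Tsallis form.

```python
import math

def mixture(x, alpha, lam0, s_max=200.0, n=200_000):
    # Integrate exp(-x*s) against the Gamma density of Eq. (3), with s = 1/lambda
    norm = (alpha * lam0) ** alpha / math.gamma(alpha)
    h = s_max / n
    acc = 0.0
    for i in range(1, n + 1):
        s = i * h
        acc += math.exp(-x * s) * norm * s ** (alpha - 1) * math.exp(-alpha * lam0 * s)
    return acc * h

def tsallis(x, alpha, lam0):
    # Closed form of Eq. (4); the Tsallis index is q = 1 + 1/alpha
    return (1.0 + x / (alpha * lam0)) ** (-alpha)

alpha, lam0 = 2.5, 1.0          # arbitrary illustrative values, i.e. q = 1.4
for x in (0.1, 1.0, 5.0):
    assert abs(mixture(x, alpha, lam0) - tsallis(x, alpha, lam0)) < 1e-4
```

The agreement holds for any α > 1 within the quadrature error of the sum.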
Now thermodynamically, a polydispersed colloid can be treated as a system with a continuous infinity of species types -. The fluctuating parameter in this case may be taken as the particle size $`\sigma `$. Moreover, it is usual to describe particle sizes by the Gamma or Schulz distribution
$$p(\sigma )=\frac{1}{\gamma ^\alpha \mathrm{\Gamma }(\alpha )}\sigma ^{\alpha -1}e^{-\sigma /\gamma },$$
(6)
where we have put $`\gamma =\frac{\overline{\sigma }}{\alpha }`$ and $`\overline{\sigma }=\int \sigma p(\sigma )𝑑\sigma `$. Again using $`\alpha =1/(q-1)`$, we find that the Tsallis distribution over the variable $`\theta `$ can be defined as
$$(1-(1-q)\overline{\sigma }\theta )^{1/(1-q)}=\int e^{-\sigma \theta }p(\sigma )𝑑\sigma .$$
(7)
In a different language , the r.h.s. of the above equation defines the cumulant generating function $`h(\theta )`$ as
$$h(\theta )=\mathrm{ln}\int e^{-\sigma \theta }p(\sigma )𝑑\sigma .$$
(8)
In other words, the cumulant generating function for the Schulz distribution is the natural logarithm of the Tsallis distribution, Eq. (7).
Furthermore, as established in , $`h(\theta )`$ is related to the combinatorial entropy per particle (which appears in the joint free energy on mixing two systems; for details see ) by a Legendre transform
$$\mathrm{ln}P(m)/N=h(\theta )+m\theta ,$$
(9)
where $`N`$ is the number of particles in one subsystem and $`m=-\partial h/\partial \theta `$ is the moment variable, which was taken to be the mean size $`\sum _i\sigma _i/N`$.
From Eq. (9), we see that there is a direct relation between the Tsallis distribution and the combinatorial entropy. From standard thermodynamics, we know that entropy $`S`$ and free energy $`\mathrm{\Psi }`$ are related by the Legendre transform $`S(M)=\mathrm{\Psi }(\beta )+\beta M`$, where $`\beta `$ is called the intensity and the mean value $`M`$ is called the extensity. Thus, noting the correspondence $`\theta \leftrightarrow \beta `$, $`M\leftrightarrow m`$, we infer that $`h(\theta )`$ is a kind of generalized free energy per particle. Also, as the free energy $`\mathrm{\Psi }=\mathrm{ln}Z`$, where $`Z`$ is the partition function, we conclude from Eq. (8) that Eq. (7) represents a generalized partition function per particle.
We see from Eq. (5) that relative variance of particle size with respect to Schulz distribution is equal to $`(q1)`$. In other words, the degree of nonextensivity $`(q1)`$ is directly reflected in the polydispersity. As the relative variance goes to zero (or $`q1`$), we have monodisperse colloidal system and we expect that statistical description of the system goes from Tsallis to Boltzmann distribution.
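This correspondence can be made operational: given a sample of particle sizes, the relative variance yields the Tsallis index directly. The sketch below is illustrative only (the sample is synthetic, drawn from a Schulz distribution with shape α = 4 so that q = 1.25, and the function name is my own):

```python
import random

def q_from_sizes(sizes):
    # Tsallis index from polydispersity: q = 1 + Var(sigma)/<sigma>^2, per Eq. (5)
    n = len(sizes)
    mean = sum(sizes) / n
    var = sum((s - mean) ** 2 for s in sizes) / n
    return 1.0 + var / mean ** 2

random.seed(0)
alpha, scale = 4.0, 0.25     # Schulz (Gamma) parameters; mean size = alpha*scale = 1
sizes = [random.gammavariate(alpha, scale) for _ in range(200_000)]
q = q_from_sizes(sizes)
assert abs(q - 1.25) < 0.05  # q -> 1 + 1/alpha for a large sample
```

In the monodisperse limit the variance vanishes and the estimator returns q = 1, the Boltzmann case.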
Note that it is not implied that the statistical mechanics of polydispersity is completely equivalent to Tsallis statistics. Rather, we propose that the hierarchical nature of systems describable under the Tsallis formalism is captured by polydispersity. The Tsallis formalism also treats systems with long-range interactions, and it does not seem correct to say that such interactions are ”switched on” in going from a mono- to a polydisperse system, e.g. in charge-stabilised colloidal particles interacting through a screened Coulomb potential. On the other side, the statistical mechanics of polydispersity has been studied using different approaches -. The connection with the Tsallis formalism can be hoped to contribute to these approaches by incorporating the self-similar (hierarchical) nature of colloidal systems.
Finally, we make a remark about alternative approaches for mapping the exponential distribution to the Tsallis distribution. This has been done using the Hilhorst integral transformation . In a recent work , the equivalence of the Tsallis distribution with a modification of the exponential distribution, based on quantum groups, was studied. The latter distribution in the present case is given as $`q_g^{-\overline{\sigma }\theta /(q_g-1)}`$, where $`q_g>1`$ is the deformation or quantum group parameter. (For $`q_g\to 1`$, we recover $`e^{-\overline{\sigma }\theta }`$.) This function can be written as $`e^{-\sigma \theta }`$, where $`\sigma =\overline{\sigma }\frac{\mathrm{ln}q_g}{q_g-1}\equiv \overline{\sigma }u`$. Thus fluctuations in the parameter $`\sigma `$ as employed above can effectively be considered as fluctuations in $`u`$. In terms of $`u`$, one can then write the Tsallis distribution as
$$\left(1+\frac{\overline{\sigma }\theta }{\alpha }\right)^{-\alpha }=\int e^{-\overline{\sigma }\theta u}p(u)𝑑u,$$
(10)
where as before $`\alpha =1/(q-1)`$, and
$$p(u)=\frac{\alpha ^\alpha }{\mathrm{\Gamma }(\alpha )}u^{\alpha -1}e^{-u\alpha }.$$
(11)
Also note that $`\overline{u}=\int up(u)𝑑u=1`$. The above distribution actually defines Mellin’s transformation, which has been used for a similar purpose in the context of thermal field theory .
no-problem/9909/physics9909015.html
# Optimization of thermal noise in multi-loop pendulum suspensions for use in interferometric gravitational-wave detectors
## 1 Introduction
The thermal noise is expected to be one of the main limitations on the sensitivity of long-baseline interferometric gravitational-wave detectors like LIGO and VIRGO . Thermal fluctuations of internal modes of the interferometer’s test masses and of suspension modes will dominate the noise spectrum in the important frequency range between 50 and 200 Hz (seismic noise and photon shot noise dominate for lower and higher frequencies, respectively). The thermal fluctuations in pendulum suspensions were studied both theoretically and experimentally in a number of works (see, e.g. Refs. ). The predictions of the thermal-noise spectrum in interferometric gravitational-wave detectors combine theoretical models (with the fluctuation-dissipation theorem of statistical mechanics serving as a basis) and experimental measurements of quality factors of the systems and materials involved. It is usually assumed that losses in the suspensions will occur mainly due to the internal friction in the wires, which is related to anelasticity effects . This assumption will be correct only provided that all the losses due to interactions with the external world (friction in the residual gas, damping by eddy currents, recoil losses into the seismic isolation system, friction in the suspension clamps, etc.) are made insignificant by careful experimental design.
In the present work we consider a multi-loop pendulum suspension and study the dependence of the thermal-noise spectrum on properties of the wire material and on suspension parameters. The thermal-noise spectral density $`x^2(\omega )`$ depends strongly on the type of the internal friction in the wires. We consider two possibilities: (i) the wire internal friction with a constant loss function and (ii) the thermoelastic damping mechanism . The main conclusion is that the thermal noise can be reduced by increasing the number of suspension wires, especially in the case of the thermoelastic damping. This conclusion is valid as long as the dissipation due to the friction in the suspension clamps is insignificant.
## 2 Thermal-noise spectrum for a pendulum suspension
In interferometric gravitational-wave detectors, the test masses are suspended as pendulums by one or two loops of thin wires. We will consider a multi-loop suspension with the wires attached to the bob near the horizontal plane which cuts the bob through its center of mass. We will also assume that the mass of the wires is much smaller than the mass of the bob. In such a multi-loop suspension the rocking motion of the test mass is essentially suppressed and the main contribution to the thermal-noise spectrum is due to the pendulum mode and the violin modes. Then one can write the suspension thermal-noise spectral density as a sum,
$$x^2(\omega )=x_\mathrm{p}^2(\omega )+x_\mathrm{v}^2(\omega ),$$
(1)
of the pendulum-mode contribution, $`x_\mathrm{p}^2(\omega )`$, and of the violin-modes contribution, $`x_\mathrm{v}^2(\omega )`$.
According to the fluctuation-dissipation theorem, the pendulum-mode contribution can be expressed as
$$x_\mathrm{p}^2(\omega )=\frac{4k_B𝒯}{\omega M}\frac{\omega _\mathrm{p}^2\varphi _\mathrm{p}(\omega )}{(\omega _\mathrm{p}^2-\omega ^2)^2+\omega _\mathrm{p}^4\varphi _\mathrm{p}^2},$$
(2)
where $`k_B`$ is Boltzmann’s constant, $`𝒯`$ is the temperature, $`M`$ is the pendulum mass, $`\varphi _\mathrm{p}(\omega )`$ is the loss function, $`\omega _\mathrm{p}=(g/L)^{1/2}`$ is the pendulum frequency, $`g`$ is the acceleration due to the Earth gravity field, and $`L`$ is the pendulum length. Note that the spectral density $`x^2(\omega )`$ is written explicitly in terms of the angular frequency $`\omega `$, but in fact the density is with respect to the linear frequency $`f=\omega /2\pi `$ and $`x^2(\omega )`$ is measured in units of m<sup>2</sup>/Hz.
The loss function $`\varphi `$ is a measure of the energy dissipation. Let $`\mathcal{E}`$ be the total energy of a dissipative oscillator (assuming that the losses are small) and $`\mathrm{\Delta }\mathcal{E}`$ be the energy dissipated per cycle. Then
$$\varphi =\frac{\mathrm{\Delta }\mathcal{E}}{2\pi \mathcal{E}}.$$
(3)
The energy of the pendulum consists of two parts: the gravitational energy $`\mathcal{E}_{\mathrm{gr}}`$ and the elastic energy $`\mathcal{E}_{\mathrm{el}}`$ due to the bending of the wire. The gravitational energy is lossless; provided that all the losses due to interactions with the external world are made insignificant by careful experimental design, the losses are dominated by internal friction in the wire material. Consequently, $`\mathrm{\Delta }\mathcal{E}=\mathrm{\Delta }\mathcal{E}_{\mathrm{el}}`$, and one obtains
$$\varphi _\mathrm{p}=\xi _\mathrm{p}\varphi _\mathrm{w},$$
(4)
where $`\varphi _\mathrm{w}=\mathrm{\Delta }\mathcal{E}_{\mathrm{el}}/(2\pi \mathcal{E}_{\mathrm{el}})`$ is the loss function for the wire itself, which occurs due to anelastic effects in the wire material, and $`\xi _\mathrm{p}=(\mathcal{E}_{\mathrm{el}}/\mathcal{E}_{\mathrm{gr}})_\mathrm{p}`$ is the ratio between the elastic energy and the gravitational energy for the pendulum mode. The elastic energy depends on how many wires are used and how they are attached to the pendulum bob. In the multi-loop configuration we consider, the wires bend both at the top and the bottom, so $`\xi _\mathrm{p}\propto (k_eL)^{-1}`$, where $`k_e^{-1}\equiv (EI/T)^{1/2}`$ is the characteristic distance scale over which the bending occurs. Here, $`T`$ is the tension force in the wire, $`E`$ is the Young modulus of the wire material, and $`I`$ is the moment of inertia of the wire cross section ($`I=\frac{1}{2}\pi r^4`$ for a cylindrical wire of radius $`r`$). For a suspension with $`N`$ wires (the number of wires is twice the number of loops), $`T=Mg/N`$, and one obtains
$$\xi _\mathrm{p}\approx \frac{N\sqrt{TEI}}{MgL}=\frac{1}{L}\sqrt{\frac{EIN}{Mg}}.$$
(5)
For LIGO suspensions, $`f_\mathrm{p}=\omega _\mathrm{p}/2\pi `$ is about 1 Hz. This is much below the working frequency range (near 100 Hz), so we may assume $`\omega _\mathrm{p}/\omega \ll 1`$. Also, the loss function is very small, $`\varphi _\mathrm{p}<10^{-5}`$. Then the pendulum-mode contribution to the thermal noise spectrum is
$$x_\mathrm{p}^2(\omega )\approx \frac{4k_B𝒯\omega _\mathrm{p}^2\varphi _\mathrm{p}(\omega )}{M\omega ^5}=\frac{4k_B𝒯}{L^2}\sqrt{\frac{gEIN}{M^3}}\frac{\varphi _\mathrm{w}(\omega )}{\omega ^5}.$$
(6)
The contribution of the violin modes to the thermal noise spectrum is given by
$$x_\mathrm{v}^2(\omega )=\frac{4k_B𝒯}{\omega }\sum _{n=1}^{\infty }\frac{\mu _n^{-1}\omega _n^2\varphi _n(\omega )}{(\omega _n^2-\omega ^2)^2+\omega _n^4\varphi _n^2},$$
(7)
where $`n=1,2,3,\mathrm{}`$ is the mode number. The angular frequency of the $`n`$th mode is
$$\omega _n=\frac{n\pi }{L}\sqrt{\frac{T}{\rho }}\left[1+\frac{2}{k_eL}+\frac{1}{2}\left(\frac{n\pi }{k_eL}\right)^2\right],$$
(8)
where $`\rho `$ is the linear mass density of the wire. For heavily loaded thin wires like in LIGO, $`k_e^{-1}\ll L`$, so
$$\omega _n\approx \frac{n\pi }{L}\sqrt{\frac{T}{\rho }}.$$
(9)
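For orientation, Eq. (9) is easy to evaluate. The estimate below is my own sketch (it assumes the stainless-steel parameters quoted later in Sec. 3.2 and an eight-loop LIGO-like suspension; none of the numbers are results from the paper itself):

```python
import math

M, L, N, g = 10.8, 0.45, 16, 9.81              # LIGO-like suspension (Sec. 3)
rho_v, d, sigma_br = 8.0e3, 112e-6, 1.342e9    # stainless steel wire (Sec. 3.2)

T = M * g / N                       # tension per wire
A = math.pi * (d / 2) ** 2          # wire cross-section area
rho = rho_v * A                     # linear mass density
f1 = math.sqrt(T / rho) / (2 * L)   # f_1 = omega_1/(2*pi) = sqrt(T/rho)/(2L), Eq. (9)

assert 250 < f1 < 500                               # typical range quoted in the text
assert abs(T / A / (0.5 * sigma_br) - 1) < 0.01     # wires loaded at half breaking stress
```

The first violin resonance comes out near 322 Hz, consistent with the 250–500 Hz range quoted below, and the stress check confirms the chosen diameter corresponds to κ = 0.5.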
This is just the angular frequency of the $`n`$th transverse vibrational mode of an ideal spring. The effective mass of the $`n`$th violin mode is
$$\mu _n=\frac{1}{2}NM\left(\frac{\omega _n}{\omega _\mathrm{p}}\right)^2\approx \frac{\pi ^2M^2}{2\rho L}n^2,$$
(10)
where we took expression (9) for $`\omega _n`$ and $`T=Mg/N`$. This effective mass arises because the violin vibrations of the wire cause only a tiny recoil of the test mass $`M`$. The loss function for the $`n`$th violin mode is
$$\varphi _n=\xi _n\varphi _\mathrm{w},$$
(11)
where $`\xi _n=(\mathcal{E}_{\mathrm{el}}/\mathcal{E}_{\mathrm{gr}})_n`$ is the ratio between the elastic energy and the gravitational energy. This ratio is
$$\xi _n=\frac{2}{k_eL}\left(1+\frac{n^2\pi ^2}{2k_eL}\right).$$
(12)
Since $`k_eL\gg 1`$, for the first several modes the energy ratio is approximately
$$\xi _n\approx \xi _\mathrm{v}=\frac{2}{L}\sqrt{\frac{EIN}{Mg}}.$$
(13)
This expression takes into account only the contribution to the elastic energy due to wire bending near the top and the bottom. For higher violin modes, one should also consider the contribution due to wire bending along its length, which leads to Eq. (12).
Typical values of $`f_1=\omega _1/2\pi `$ are from 250 to 500 Hz. If we are interested in the thermal spectral density near 100 Hz, we can assume $`\omega ^2\ll \omega _n^2`$. Then we have approximately
$$x_\mathrm{v}^2(\omega )\approx \frac{8k_B𝒯\omega _\mathrm{p}^2}{NM\omega }\sum _{n=1}^{\infty }\frac{\varphi _n(\omega )}{\omega _n^4}\approx \frac{8k_B𝒯N\rho ^2L^3}{\pi ^4gM^3\omega }\sum _{n=1}^{\infty }\frac{\varphi _n(\omega )}{n^4}.$$
(14)
One can see that the contributions of higher violin modes are very small due to the factor $`n^{-4}`$ in the sum. Taking $`\varphi _n=\xi _n\varphi _\mathrm{w}`$ and assuming $`k_eL\gg 1`$, we find the following expression for the violin-mode contribution to the thermal-noise spectrum,
$$x_\mathrm{v}^2(\omega )\approx \frac{8}{45}k_B𝒯\rho ^2L^2\sqrt{\frac{EIN^3}{g^3M^7}}\frac{\varphi _\mathrm{w}(\omega )}{\omega }.$$
(15)
## 3 Dependence of thermal noise on wire material and suspension parameters
It can be seen from Eqs. (6) and (15) that the thermal noise increases with the area $`A`$ of the wire cross section. Therefore, it is desirable to use wires as thin as possible. However, the wire thickness may not be too small since the stress $`\sigma =T/A`$ in the wire may not exceed the breaking stress $`\sigma _{\mathrm{br}}`$. In fact, the wires are always operated at a fixed fraction of their breaking stress,
$$\sigma =\sigma _0=\kappa \sigma _{\mathrm{br}},$$
(16)
where $`\kappa `$ is a numerical coefficient. Typical values of $`\kappa `$ are from 0.3 to 0.5 (it is undesirable to have larger values of $`\kappa `$ because then events of spontaneous stress release will contribute excess noise ). Thus for a given type of the wire material, the cross-section area $`A`$ should be proportional to the pendulum mass $`M`$, according to the relation $`\sigma _0=Mg/(NA)`$. For a cylindrical wire, one has $`I=A^2/2\pi `$. Then we obtain
$$x_\mathrm{p}^2(\omega )=\frac{4k_B𝒯}{L^2}\left(\frac{g^3E}{2\pi MN\sigma _0^2}\right)^{1/2}\frac{\varphi _\mathrm{w}}{\omega ^5},$$
(17)
$$x_\mathrm{v}^2(\omega )=\frac{8}{45}k_B𝒯\rho _v^2L^2\left(\frac{g^3E}{2\pi MN^3\sigma _0^6}\right)^{1/2}\frac{\varphi _\mathrm{w}}{\omega },$$
(18)
where $`\rho _v=\rho /A`$ is the volume mass density of the wire which depends only on the material used.
All the parameters in Eqs. (17) and (18) are easily measured except for the wire loss function $`\varphi _\mathrm{w}`$. A number of experiments were recently performed to study internal losses of various wire materials (e.g., steel, tungsten, fused quartz, and some others). However, the exact frequency dependence of the wire loss function $`\varphi _\mathrm{w}(\omega )`$ is not yet completely understood. In many experiments $`\varphi _\mathrm{w}`$ was measured only at a few frequencies and the experimental uncertainty of the results was often quite large. Moreover, there are discrepancies between the results of different experiments. Therefore, it is sometimes difficult to make certain conclusions about the behavior of $`\varphi _\mathrm{w}(\omega )`$.
A well known dissipation mechanism for thin samples in flexure is the so-called thermoelastic damping . As a wire bends, one side contracts and heats and the other expands and cools. The resulting thermal diffusion leads to the dissipation of energy. The corresponding loss function is
$$\varphi _\mathrm{w}(\omega )=\mathrm{\Delta }\frac{\omega \overline{\tau }}{1+\omega ^2\overline{\tau }^2},$$
(19)
where $`\mathrm{\Delta }`$ is the relaxation strength and $`\overline{\tau }`$ is the relaxation time. The loss function has its maximum $`\varphi =\mathrm{\Delta }/2`$ at $`\omega =\overline{\tau }^1`$ (this is called the Debye peak). This behavior is characteristic for processes in which the relaxation of stress and strain is exponential and occurs via a diffusion mechanism. For the thermoelastic damping, one has
$$\mathrm{\Delta }=\frac{E𝒯\alpha ^2}{C_v},\overline{\tau }\sim \frac{d^2}{D},$$
(20)
where $`\alpha `$ is the linear thermal expansion coefficient, $`C_v`$ is the specific heat per unit volume, $`d`$ is the characteristic distance heat must flow, and $`D`$ is the thermal diffusion coefficient, $`D=\varrho /C_v`$, where $`\varrho `$ is the thermal conductivity. For a cylindrical wire of diameter $`d`$, the frequency of the Debye peak is
$$\overline{f}=\frac{1}{2\pi \overline{\tau }}\approx 2.6\frac{D}{d^2}.$$
(21)
For thin metallic wires ($`d\sim 100\mu `$m) at room temperature, the Debye peak frequency is typically from a few hundred Hz to a few kHz. Therefore, in the frequency range near 100 Hz, we are usually far below the Debye peak, and
$$\varphi _\mathrm{w}(\omega )\approx \mathrm{\Delta }\omega \overline{\tau }=\beta A\omega ,$$
(22)
where $`\beta \equiv \mathrm{\Delta }/(1.3\pi ^2D)`$.
According to a recent experiment by Huang and Saulson , internal losses in stainless steel wires are in good agreement with predictions of thermoelastic damping, with $`\varphi _\mathrm{w}(\omega )`$ exhibiting the characteristic frequency dependence of Eq. (19). On the other hand, the loss function for tungsten wires was nearly constant, increasing slightly at high frequencies (above 500 Hz). $`\varphi _\mathrm{w}`$ for tungsten wires increased with the wire cross-section area $`A`$, but the exact functional dependence of $`\varphi _\mathrm{w}`$ on $`A`$ is unclear as only three different wire diameters were examined. In some other experiments, the loss functions for various materials were found to be nearly constant over a wide frequency range. In a recent experiment by Cagnoli et al. , internal damping of a variety of metallic wires was found to be well modelled by the loss function of the form
$$\varphi _\mathrm{w}(\omega )=\varphi _0+\varphi _{\mathrm{ted}}(\omega ),$$
(23)
where $`\varphi _{\mathrm{ted}}(\omega )`$ is the thermoelastic-damping loss function of Eq. (19) and $`\varphi _0`$ is a frequency-independent term. Unfortunately, the dependence of $`\varphi _0`$ on the wire diameter was not examined. It can be assumed that the thermoelastic damping is a basic dissipation mechanism, but for some materials it is masked by other processes. When those additional losses (whose nature is still a matter of controversy) are small, the characteristic frequency dependence of Eq. (19) may be observed. However, when the losses due to the thermoelastic damping are very small (which happens, for example, in the case of thin Invar and tungsten wires), then additional losses prevail, leading to $`\varphi _\mathrm{w}`$ which is nearly constant far from the Debye peak.
In what follows we will consider two possibilities: (i) a constant loss function $`\varphi _\mathrm{w}`$ and (ii) the loss function of Eq. (22) which is characteristic for the thermoelastic damping at frequencies well below the Debye peak. We might assume that for some materials the true behavior is somewhere between these two extreme variants. For example, for tungsten wires, $`\varphi _\mathrm{w}`$ is nearly frequency-independent from 50 to 500 Hz, but still increases to some extent with the wire cross-section area $`A`$, as one should expect from Eq. (22).
### 3.1 A constant loss function
For a constant $`\varphi _\mathrm{w}`$, the dependence of the thermal-noise spectrum on various parameters is given directly by Eqs. (17) and (18). For the pendulum-mode contribution, we find
for constant $`M`$ and $`\sigma _0`$, $`x_\mathrm{p}^2\propto N^{-1/2}`$;
for constant $`M`$ and $`N`$, $`x_\mathrm{p}^2\propto \sigma _0^{-1}`$;
for constant $`N`$ and $`\sigma _0`$, $`x_\mathrm{p}^2\propto M^{-1/2}`$.
For the violin-modes contribution, we find
for constant $`M`$ and $`\sigma _0`$, $`x_\mathrm{v}^2\propto N^{-3/2}`$;
for constant $`M`$ and $`N`$, $`x_\mathrm{v}^2\propto \sigma _0^{-3}`$;
for constant $`N`$ and $`\sigma _0`$, $`x_\mathrm{v}^2\propto M^{-1/2}`$.
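These scalings follow mechanically from Eqs. (17) and (18) and can be checked by direct evaluation. The sketch below is my own (the helper names `x_p_sq`/`x_v_sq` and the tungsten-like input values are illustrative assumptions, not part of the paper):

```python
import math

KB, G = 1.380649e-23, 9.81   # Boltzmann constant, gravitational acceleration

def x_p_sq(omega, T, M, L, N, sigma0, E, phi_w):
    # Pendulum-mode spectral density, Eq. (17), for a constant wire loss phi_w
    return (4 * KB * T / L**2) * math.sqrt(
        G**3 * E / (2 * math.pi * M * N * sigma0**2)) * phi_w / omega**5

def x_v_sq(omega, T, M, L, N, sigma0, E, rho_v, phi_w):
    # Violin-mode spectral density, Eq. (18)
    return (8 / 45) * KB * T * rho_v**2 * L**2 * math.sqrt(
        G**3 * E / (2 * math.pi * M * N**3 * sigma0**6)) * phi_w / omega

args = dict(omega=2 * math.pi * 100, T=295.0, M=10.8, L=0.45,
            sigma0=0.5 * 1.671e9, E=3.4e11, phi_w=1 / 1.3e3)
r_p = x_p_sq(N=16, **args) / x_p_sq(N=4, **args)
r_v = x_v_sq(N=16, rho_v=1.93e4, **args) / x_v_sq(N=4, rho_v=1.93e4, **args)
assert abs(r_p - 0.25 ** 0.5) < 1e-12    # x_p^2 scales as N^(-1/2)
assert abs(r_v - 0.25 ** 1.5) < 1e-12    # x_v^2 scales as N^(-3/2)
```

The same two functions can be used to tabulate the full spectrum once a measured loss function is supplied.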
The allowed stress $`\sigma _0`$ is a property of the wire material (which is also true for $`E`$, $`\rho _v`$, and $`\varphi _\mathrm{w}`$), so changing $`\sigma _0`$ means taking wires made of different materials. Clearly, it is desirable to have a material with a large value of $`\sigma _0`$, but what decides is the value of the factor $`\mathrm{\Lambda }_\mathrm{w}=E^{1/2}\varphi _\mathrm{w}/\sigma _0`$ for the pendulum mode and $`\mathrm{\Lambda }_\mathrm{w}=\rho _v^2E^{1/2}\varphi _\mathrm{w}/\sigma _0^3`$ for the violin modes. The factor $`\mathrm{\Lambda }_\mathrm{w}`$ comprises all the parameters in $`x^2`$ which characterize the wire material.
One may see that taking multi-loop suspensions with large numbers of wires may help to reduce the thermal noise. As an example, let us consider tungsten wires of the type examined by Huang and Saulson . The relevant parameters are $`E\approx 3.4\times 10^{11}`$ Pa, $`\sigma _{\mathrm{br}}\approx 1671`$ MPa, $`\rho _v\approx 1.93\times 10^4`$ kg/m<sup>3</sup>. We also take $`M=10.8`$ kg, $`L=45`$ cm and $`\kappa =0.5`$ (the wires are operated at one half of their breaking stress), like in suspensions of the LIGO test masses. According to the data by Huang and Saulson , the loss function is nearly frequency-independent from 50 to 500 Hz, but depends on the wire diameter. For a two-loop suspension ($`N=4`$), the wire diameter should be $`d\approx 200`$ $`\mu `$m, and the corresponding quality factor $`Q_\mathrm{w}=\varphi _\mathrm{w}^{-1}`$ can be estimated to be $`Q_\mathrm{w}\approx 1.3\times 10^3`$. For an eight-loop suspension ($`N=16`$), the wire diameter should be $`d\approx 100`$ $`\mu `$m, and the corresponding quality factor can be estimated to be $`Q_\mathrm{w}\approx 4.0\times 10^3`$. In Fig. 1 we plot the thermal-noise displacement spectrum $`\sqrt{x^2(\omega )}`$ for the room temperature ($`𝒯=295`$ K) for three possibilities: (a) $`N=4`$, $`Q_\mathrm{w}=1.3\times 10^3`$; (b) $`N=16`$, $`Q_\mathrm{w}=1.3\times 10^3`$; (c) $`N=16`$, $`Q_\mathrm{w}=4.0\times 10^3`$. We see that for a constant loss function, the thermal noise is reduced by increasing the number of wires. The spectral density $`x^2(\omega )`$ scales as $`N^{-1/2}`$ for frequencies near 100 Hz (where the pendulum mode dominates), in accordance with our analysis. Also, if the decrease of $`\varphi _\mathrm{w}`$ with the wire diameter is taken into account, the increase in the number of wires is even more helpful.
### 3.2 Thermoelastic loss function
If we take the loss function of Eq. (22), then the thermal-noise spectrum is given by
$$x_\mathrm{p}^2(\omega )=\frac{4k_B𝒯}{L^2}\beta \left(\frac{g^5EM}{2\pi N^3\sigma _0^4}\right)^{1/2}\frac{1}{\omega ^4},$$
(24)
$$x_\mathrm{v}^2(\omega )=\frac{8}{45}k_B𝒯\beta \rho _v^2L^2\left(\frac{g^5EM}{2\pi N^5\sigma _0^8}\right)^{1/2}.$$
(25)
The dependence of the thermal-noise spectrum on various parameters can be characterized as follows. For the pendulum-mode contribution, we find
for constant $`M`$ and $`\sigma _0`$, $`x_\mathrm{p}^2\propto N^{-3/2}`$;
for constant $`M`$ and $`N`$, $`x_\mathrm{p}^2\propto \sigma _0^{-2}`$;
for constant $`N`$ and $`\sigma _0`$, $`x_\mathrm{p}^2\propto M^{1/2}`$.
For the violin-modes contribution, we find
for constant $`M`$ and $`\sigma _0`$, $`x_\mathrm{v}^2\propto N^{-5/2}`$;
for constant $`M`$ and $`N`$, $`x_\mathrm{v}^2\propto \sigma _0^{-4}`$;
for constant $`N`$ and $`\sigma _0`$, $`x_\mathrm{v}^2\propto M^{1/2}`$.
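As in the constant-loss case, these proportionalities can be verified directly from Eqs. (24) and (25). A hypothetical sketch of mine (function names and input values are illustrative assumptions):

```python
import math

KB, G = 1.380649e-23, 9.81

def x_p_sq_ted(omega, T, M, L, N, sigma0, E, beta):
    # Pendulum-mode density for thermoelastic damping, Eq. (24)
    return (4 * KB * T / L**2) * beta * math.sqrt(
        G**5 * E * M / (2 * math.pi * N**3 * sigma0**4)) / omega**4

def x_v_sq_ted(T, M, L, N, sigma0, E, rho_v, beta):
    # Violin-mode density for thermoelastic damping, Eq. (25): frequency-independent
    return (8 / 45) * KB * T * beta * rho_v**2 * L**2 * math.sqrt(
        G**5 * E * M / (2 * math.pi * N**5 * sigma0**8))

p = dict(T=295.0, M=10.8, L=0.45, sigma0=0.5 * 1.342e9, E=1.9e11, beta=68.6)
w = 2 * math.pi * 100
assert abs(x_p_sq_ted(w, N=16, **p) / x_p_sq_ted(w, N=4, **p) - 0.25**1.5) < 1e-12
assert abs(x_v_sq_ted(N=16, rho_v=8.0e3, **p) /
           x_v_sq_ted(N=4, rho_v=8.0e3, **p) - 0.25**2.5) < 1e-12
```

The quadrupling of the wire number thus buys a factor of 8 (pendulum) and 32 (violin) in spectral density under this loss model.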
Now, the dependence of $`x^2`$ on the wire material is given by the factor $`\mathrm{\Lambda }_\mathrm{w}=\beta E^{1/2}/\sigma _0^2`$ for the pendulum mode and $`\mathrm{\Lambda }_\mathrm{w}=\beta \rho _v^2E^{1/2}/\sigma _0^4`$ for the violin modes. So, the value of the allowed stress $`\sigma _0`$ in this situation is more important than for the case of constant $`\varphi _\mathrm{w}`$.
One may see that in the case of the thermoelastic damping the thermal noise may be reduced to a larger extent by increasing the number of wires, as compared to the case of constant $`\varphi _\mathrm{w}`$. As an example, let us consider wires made of stainless steel (AISI 302), which were examined by Huang and Saulson . The relevant parameters are $`E\approx 1.9\times 10^{11}`$ Pa, $`\sigma _{\mathrm{br}}\approx 1342`$ MPa, $`\rho _v\approx 8.0\times 10^3`$ kg/m<sup>3</sup>. The losses are dominated by the thermoelastic damping mechanism. Taking $`\alpha \approx 1.6\times 10^{-5}`$ 1/K, $`C_v\approx 4.8\times 10^6`$ J/(K m<sup>3</sup>), $`\varrho \approx 16.3`$ J/(K m s) and $`𝒯=295`$ K, one obtains $`\mathrm{\Delta }\approx 3.0\times 10^{-3}`$ and $`\beta \approx 68.6`$ s/m<sup>2</sup>. We also take $`M=10.8`$ kg, $`L=45`$ cm and $`\kappa =0.5`$, like in suspensions of the LIGO test masses. The thermal-noise displacement spectrum $`\sqrt{x^2(\omega )}`$ is plotted in Fig. 2 for three possibilities: (a) $`N=4`$ (then $`d\approx 224`$ $`\mu `$m and $`\overline{f}\approx 176`$ Hz); (b) $`N=8`$ (then $`d\approx 159`$ $`\mu `$m, and $`\overline{f}\approx 352`$ Hz); (c) $`N=16`$ (then $`d\approx 112`$ $`\mu `$m, and $`\overline{f}\approx 703`$ Hz). The conclusion is that the thermal noise may be significantly reduced by increasing the number of wires. The numerical results confirm that the proportionalities $`x_\mathrm{p}^2\propto N^{-3/2}`$ and $`x_\mathrm{v}^2\propto N^{-5/2}`$ are valid for frequencies well below the Debye peak $`\overline{f}`$.
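The quoted values of the relaxation strength, the prefactor and the Debye peak frequency follow from Eqs. (20)–(22); the short check below is my own arithmetic using the material constants just listed (with the thermal expansion coefficient taken as 1.6×10⁻⁵ 1/K):

```python
import math

E, alpha_th, Cv, k_th, T = 1.9e11, 1.6e-5, 4.8e6, 16.3, 295.0  # AISI 302 steel

Delta = E * alpha_th**2 * T / Cv       # relaxation strength, Eq. (20)
D = k_th / Cv                          # thermal diffusion coefficient
beta = Delta / (1.3 * math.pi**2 * D)  # prefactor of Eq. (22)
f_bar = 2.6 * D / (112e-6) ** 2        # Debye peak for d = 112 um, Eq. (21)

assert abs(Delta - 3.0e-3) < 1e-4
assert abs(beta - 68.6) < 0.5
assert abs(f_bar - 703) < 5
```

All three numbers reproduce the values used in the text for the N = 16 stainless-steel example.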
### 3.3 Comparison between different materials
We would like to compare the thermal-noise performance of a multi-loop suspension for different wire materials. For example, the tungsten wires examined by Huang and Saulson have rather low breaking stress of 1671 MPa. There exist tungsten wires with higher breaking stress; for example, Dawid and Kawamura experimented with tungsten wires for which they measured $`\sigma _{\mathrm{br}}=2037`$ MPa. It would be interesting to compare between tungsten wires with different breaking stress but with the same loss function. On the other hand, the comparison between wires made of tungsten and stainless steel will clarify how the difference in the loss mechanism (frequency-independent $`\varphi _\mathrm{w}`$ versus the thermoelastic damping) affects the thermal-noise spectrum. To this end, we also would like to consider a situation in which wires made of stainless steel have all properties as above except for the losses being dominated by a mechanism with frequency-independent $`\varphi _\mathrm{w}`$, instead of the thermoelastic damping.
We consider an eight-loop suspension ($`N=16`$), with $`M=10.8`$ kg, $`L=45`$ cm and $`\kappa =0.5`$ (like in LIGO), and examine four possibilities: (a) tungsten wires as considered in Sec. 3.1, with $`\sigma _{\mathrm{br}}=1671`$ MPa (this gives $`d\approx 100`$ $`\mu `$m) and $`Q_\mathrm{w}=4.0\times 10^3`$; (b) tungsten wires with different breaking stress, $`\sigma _{\mathrm{br}}=2037`$ MPa (this gives $`d\approx 91`$ $`\mu `$m) and the same quality factor, $`Q_\mathrm{w}=4.0\times 10^3`$; (c) stainless steel wires as considered in Sec. 3.2 ($`\sigma _{\mathrm{br}}=1342`$ MPa, $`d\approx 112`$ $`\mu `$m), with the thermoelastic damping mechanism ($`\beta \approx 68.6`$ s/m<sup>2</sup>, $`\overline{f}\approx 703`$ Hz); (d) stainless steel wires with the same parameters, but with a frequency-independent loss function, $`Q=2.0\times 10^3`$ (this value is close to the one given by the thermoelastic damping near 120 Hz). The resulting thermal-noise displacement spectra $`\sqrt{x^2(\omega )}`$ are shown in Fig. 3. One can see that the violin resonances of the stainless steel wires appear at higher frequencies (due to their smaller density). On the other hand, the tungsten wires exhibit smaller thermal fluctuations in the frequency range between 50 and 200 Hz. The thermal noise is reduced by using wires with larger breaking stress, provided the other parameters remain the same.
### 3.4 Optimization of the pendulum length
The thermal-noise spectrum depends on the pendulum length $`L`$. For frequencies well below the first violin resonance, $`\omega ^2\omega _1^2`$, the pendulum-mode contribution dominates and the spectral density $`x^2(\omega )`$ is proportional to $`L^2`$. However, by increasing $`L`$, one not only decreases the thermal fluctuations due to the pendulum mode, but also brings the violin resonances to lower frequencies, as $`\omega _nL^1`$. This effect is illustrated in Fig. 4, where the displacement spectrum $`\sqrt{x^2(\omega )}`$ is shown for an eight-loop suspension with stainless steel wires of various length. (We take $`M=10.8`$ kg, $`\kappa =0.5`$, and stainless steel wires with properties listed in Sec. 3.2.) Due to this competition between two opposite tendencies, the choice of the pendulum length is a delicate matter which depends on where in the spectrum the seismic perturbations and the photon shot noise prevail over the thermal fluctuations and on properties of expected gravitational-wave signals.
## 4 Discussion
Our analysis leads to the observation that the thermal noise in pendulum suspensions can be significantly reduced by using multi-loop configurations with a large number of wires. However, before implementing this conclusion one should consider a number of issues. First, our analysis is valid only if the losses are dominated by the internal friction in the pendulum wires and all other sources of dissipation are made negligible by careful experimental design. However, as was shown recently by Huang and Saulson , the sliding friction in the suspension clamps is often important as well. If this is the case, a large number of suspension loops will only worsen the dissipation and thereby increase the thermal fluctuations. Therefore, if one wants to use multi-loop suspensions, special care should be paid to the design of the clamps. Another technical problem is to make a suspension in which all the loops are equally loaded. One more issue which should be carefully studied is the effect which a large number of suspension wires may have on the internal resonances of the suspended test mass.
## Acknowledgments
This work would not have been possible without the great help of Malik Rakhmanov. I thank him for long hours of illuminating discussions and for encouraging me to enter the realm of thermal noise and anelasticity. I am also grateful to Peter Saulson and Gregg Harry for sending me their data on properties of wire materials. Financial support from the Lester Deutsch Fund in the form of a postdoctoral fellowship is gratefully acknowledged. The LIGO Project is supported by the National Science Foundation under the cooperative agreement PHY-9210038.
# Finite Volume Effects in Self Coupled Geometries
## I Introduction
In investigating the effects of self coupled boundaries in Monte Carlo lattice QCD simulations there is a need for guidance as to the expected size and direction of boundary-induced energy shifts as well as strategies to remove these effects. Starting with parameterizations of (quenched) lattice potentials within the context of a nonrelativistic two-body Schrödinger-Pauli Hamiltonian with relativistic corrections, a recent approach uses these potentials to produce genuine predictions of charmonium and bottomonium energy levels. These predictions are obtained from numerical solutions, using a purely spatial grid, of the wavefunctions plus perturbation theory. Using this approach, one may begin to understand finite size effects numerically by changing the size of the enclosing box, which is relatively easy to do, unlike the Monte Carlo simulations. Although the general case must still be studied numerically, this article shows that the asymptotic energy shifts from distant boundaries can be determined analytically in terms of the unbounded wavefunctions. Where these wavefunctions are unavailable, it provides functional forms to compare with in potential type simulations, and ultimately, directly to the lattice data once box sizes can confidently be said to be in the asymptotic region.
The systems studied here will be described by the Schrödinger equation in the interior of a volume, the surface of which is determined by a single parameter of a general separable curvilinear coordinate system. It will be assumed that the fields on the boundaries are self-coupled in the sense of there being either periodic or antiperiodic boundary conditions relating the antipodal points at $`𝐫`$ and $`-𝐫`$. We will see that periodic (antiperiodic) boundary conditions require Neumann (Dirichlet) boundary conditions for even parity states and Dirichlet (Neumann) boundary conditions for odd parity states for parity invariant potentials. These boundary conditions are in general not the usual ones applied in cubic geometries, which couple opposite walls rather than antipodal points. However, we will see that the asymptotic shifts when opposite walls are coupled are effectively antipodal. Thus the description here should apply to the situation of Refs., which study confining solutions (linear plus Coulomb) in a periodic box.
We will proceed to a derivation of the energy shift formulas in Sections II and III. The basic energy shift equations for Dirichlet and Neumann boundary conditions will be derived in Section IV. An asymptotic form of the wavefunction will be presented in Section V, which will then allow us to integrate the differential forms, resulting in simple equations for the energies themselves. It will be seen that the asymptotic energy shifts for Dirichlet and Neumann cases are equal in magnitude but opposite in direction. In Section VI we will make some simplifying observations regarding surface integrals in the energy shift formulas and in Section VII we will explicitly consider the usual lattice situation of cubic geometry with self coupled walls. As examples of the use of these formulas, in Section VIII we will study a one dimensional example, the harmonic oscillator, and a three dimensional example, the hydrogen atom. We will close with some general observations regarding qualitative behaviors of energy shifts in the general case as well as some summary comments.
## II Schrödinger Momentum Tensor Derivation
One way to relate boundary induced energy shifts to properties of the Schrödinger wavefunctions is to use the momentum tensor. The Lagrangian density for the Schrödinger case may be written
$$\mathcal{L}=\frac{1}{2m}\partial _i\psi ^{}\partial _i\psi +(V-E)\psi ^{}\psi .$$
(1)
The energy, $`E`$, can be viewed as a Lagrange multiplier associated with the normalization constraint, $`\int d^3x\psi ^{}\psi =1`$. The Schrödinger equation is recovered from the action defined by Eq.(1) by variation of $`\psi ^{}`$. (In the time dependent case one concludes $`\psi ^{}`$ and $`\psi `$ are related by complex conjugation only after independent variations on $`\psi ^{}`$ and $`\psi `$.) The mass parameter $`m`$ should be understood to represent the reduced mass in applications to two body systems.
The momentum tensor for complex fields may be written as
$$T_{ij}=\frac{\delta \mathcal{L}}{\delta (\partial _i\psi ^{})}\partial _j\psi ^{}+\frac{\delta \mathcal{L}}{\delta (\partial _j\psi ^{})}\partial _i\psi ^{}-\delta _{ij}\mathcal{L},$$
(2)
which is explicitly symmetric in $`i`$ and $`j`$. Using Eq.(1), this gives
$$T_{ij}=\frac{1}{2m}[\partial _i\psi \partial _j\psi ^{}+\partial _j\psi \partial _i\psi ^{}]-\frac{\delta _{ij}}{2m}\partial _k(\psi ^{}\partial _k\psi ).$$
(3)
As a check, we notice that
$$\partial _iT_{ij}=-\psi ^{}\partial _jV\psi ,$$
(4)
which is just a form of Ehrenfest’s theorem when integrated over the volume. The continuity equation
$$\frac{\partial g_i}{\partial t}+\partial _jT_{ji}=0,$$
(5)
is satisfied in the time dependent case, where the momentum density is given by
$$g_i=-\frac{i}{2}(\psi ^{}\partial _i\psi -\psi \partial _i\psi ^{}),$$
(6)
assuring momentum conservation. We will calculate energy shifts by evaluating the pressure exerted by the wavefunction on the surface. For this purpose consider the momentum tensor with locally normal indices:
$$T_{nn}=\frac{1}{2m}\partial _n\psi \partial _n\psi ^{}-\frac{1}{2m}\partial _T\psi \cdot \partial _T\psi ^{}-\frac{1}{2m}\psi ^{}\nabla ^2\psi .$$
(7)
Here
$$𝐧\cdot \mathbf{\nabla }\psi \equiv \partial _n\psi =\frac{1}{h_n}\frac{\partial \psi }{\partial n},$$
(8)
is a locally normal derivative ($`𝐧`$ is outwardly directed), $`h_n`$ is a possible scale factor, and $`\partial _T`$ are the transverse derivatives. In the context of integrations over a closed surface,
$$\frac{1}{2m}\psi ^{}\nabla ^2\psi +\frac{1}{2m}\partial _T\psi \cdot \partial _T\psi ^{}\approx \frac{1}{2m}\psi ^{}\partial _n^2\psi ,$$
(9)
giving
$$T_{nn}=\frac{1}{2m}|\partial _n\psi |^2-\frac{1}{2m}\psi ^{}\partial _n^2\psi .$$
(10)
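The one-dimensional version of this bookkeeping can be verified symbolically. The check below uses the harmonic oscillator ground state purely as an illustrative solution (with $`\mathrm{}=1`$, real $`\psi `$, and the sign convention $`\partial _iT_{ij}=-\psi ^{}\partial _jV\psi `$ for the conservation statement of Eq.(4)):

```python
import sympy as sp

x, m, w = sp.symbols('x m omega', positive=True)
V = m * w**2 * x**2 / 2
E = w / 2
psi = sp.exp(-m * w * x**2 / 2)     # unnormalized ground state, hbar = 1

# psi solves the stationary Schrodinger equation
assert sp.simplify(-sp.diff(psi, x, 2) / (2 * m) + (V - E) * psi) == 0

# 1D momentum tensor, cf. Eq.(3): T = (psi')^2/m - (psi psi')'/(2m)
T = sp.diff(psi, x)**2 / m - sp.diff(psi * sp.diff(psi, x), x) / (2 * m)

# Ehrenfest-type balance, cf. Eq.(4): dT/dx = -psi^2 dV/dx
assert sp.simplify(sp.diff(T, x) + psi**2 * sp.diff(V, x)) == 0
print("momentum balance verified")
```

The same two lines go through unchanged for any other potential with a known bound state, which is a quick way to check sign conventions before using the energy shift formulas below.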
Antipodal boundary conditions are given by
$`\psi (𝐫)|_s=\pm \psi (-𝐫)|_s,`$ (11)
$`\partial _n\psi (𝐫)|_s=\mp \partial _n\psi (-𝐫)|_s.`$ (12)
where the top signs give the periodic case and the bottom signs give the antiperiodic case. (One plugs in $`-𝐫`$ after the normal derivative is taken in (12).) For a parity invariant potential, one has
$$\psi |_s=0,$$
(13)
by continuity of the wavefunction at the surface for odd parity states, and
$$\partial _n\psi |_s=0,$$
(14)
by continuity of the first normal derivative of the wavefunction for even parity states in the periodic case. For antiperiodic boundary conditions, the conditions (13) and (14) apply instead to even and odd parity states, respectively. When the states cannot be classified by parity, as when an otherwise good parity system is not centered at the origin, Eqs.(11) and (12) can no longer be simplified.
For the Dirichlet condition Eq.(13), one then has
$$T_{nn}=\frac{1}{2m}|\partial _n\psi |^2,$$
(15)
and
$$T_{nn}=-\frac{1}{2m}\psi ^{}\partial _n^2\psi ,$$
(16)
for Neumann conditions, Eq.(14).
Finally, we relate the energy shift to an integration over $`T_{nn}`$ via
$$\mathrm{\Delta }E\equiv \beta (a_0)=-\int P𝑑V=-\int _{\mathrm{\infty }}^{a_0}𝑑a\int 𝑑sh_nP(a),$$
(17)
where
$$P(a)=T_{nn}(a).$$
(18)
Thus, in a differential sense one has
$$\frac{d\beta }{da}=-\int 𝑑sh_nT_{nn}(a),$$
(19)
resulting in
$$\frac{d\beta }{da}=-\frac{1}{2m}\int 𝑑sh_n|\partial _n\psi |^2,$$
(20)
in the Dirichlet case and
$$\frac{d\beta }{da}=\frac{1}{2m}\int 𝑑sh_n\psi ^{}\partial _n^2\psi ,$$
(21)
in the Neumann case. Again, in the general case without good parity, the form from Eq.(10) cannot be simplified.
These differential energy shift formulas are actually exact. The wavefunction, $`\psi `$, is a shorthand for $`\psi (\beta (a))|_s`$, denoting the exact wavefunction on the boundary determined at finite $`\beta `$, which is itself determined by the boundary parameter, $`a`$, for a given surface. As they stand, these equations are not yet useful because we have no information on the values of the wavefunction and its normal derivatives on $`S`$. A hint on how to proceed is provided by an alternate derivation of these results, which will now be presented.
## III Parameter Variation Derivation
Let us consider the variation of the Schrödinger equation,
$$-\frac{1}{2m}\nabla ^2\psi +V\psi =E\psi ,$$
(22)
with respect to the parameter (not the coordinate), $`a`$, which determines the self coupling surface, assuming the normalization condition, $`\int d^3x\psi ^{}\psi =1`$, is satisfied. (Usually the following procedure determines the normalization integral given $`E(a)`$, but here we use it in reverse.) We have
$$-\frac{1}{2m}\psi ^{}\nabla ^2\frac{\partial \psi }{\partial a}+\psi ^{}V\frac{\partial \psi }{\partial a}=\frac{d\beta }{da}\psi ^{}\psi +E\psi ^{}\frac{\partial \psi }{\partial a},$$
(23)
after variation followed by $`\psi ^{}`$ multiplication. On the other hand, complex conjugation of Eq.(22) followed by multiplication by $`\frac{\partial \psi }{\partial a}`$ results in
$$-\frac{1}{2m}\frac{\partial \psi }{\partial a}\nabla ^2\psi ^{}+\frac{\partial \psi }{\partial a}V\psi ^{}=E\frac{\partial \psi }{\partial a}\psi ^{}.$$
(24)
The difference produces a perfect differential form, which gives
$$\frac{d\beta }{da}=\frac{1}{2m}\int 𝑑s\left(\frac{\partial \psi }{\partial a}\partial _n\psi ^{}-\psi ^{}\partial _n\frac{\partial \psi }{\partial a}\right),$$
(25)
where $`\partial _n`$ are again local normal derivatives. (The order of the partial derivatives in the last term is immaterial since $`a`$ appears only in $`\psi `$.) We then have
$$\frac{d\beta }{da}=\frac{1}{2m}\int 𝑑s\frac{\partial \psi }{\partial a}\partial _n\psi ^{},$$
(26)
for Dirichlet boundary conditions, and
$$\frac{d\beta }{da}=-\frac{1}{2m}\int 𝑑s\psi ^{}\frac{\partial }{\partial a}\partial _n\psi ,$$
(27)
for the Neumann case. These results may be reconciled with Eqs.(20) and (21) by the use of expansions in the normal curvilinear coordinate, $`n`$, about the boundary. In the Dirichlet case since the wavefunction vanishes on the surface one must have
$$\psi \approx (n-a)\frac{\partial \psi }{\partial n}|_s,$$
(28)
resulting in
$$\frac{\partial \psi }{\partial a}|_s=-h_n\partial _n\psi |_s.$$
(29)
In the Neumann case since the first normal derivative vanishes one has
$$\partial _n\psi \approx (n-a)\frac{\partial }{\partial n}\partial _n\psi |_s,$$
(30)
giving
$$\frac{\partial }{\partial a}\partial _n\psi |_s=-h_n\partial _n^2\psi |_s.$$
(31)
This reconciles Eqs.(26) and (27) with (20) and (21). In the general case with no good parity, one must proceed differently. One may expand about the boundary the quantity,
$`\left({\displaystyle \frac{\partial _n\psi (𝐫)}{\psi ^{}(𝐫)}}+{\displaystyle \frac{\partial _n\psi (-𝐫)}{\psi ^{}(-𝐫)}}\right),`$ (32)
(for $`|\psi |_s`$ nonvanishing) which also vanishes on $`S`$. Then one learns that Eq.(19), with the general $`T_{nn}`$ from (10), and Eq.(25) are equivalent when the antipode terms are grouped together inside the surface integral. One also learns that one may replace
$$\partial _n^2\psi |_s\to \frac{1}{h_n^2}\frac{\partial ^2\psi }{\partial n^2}|_s,$$
(33)
anywhere in these expressions.
## IV Basic Formulas
The use of the $`a`$ parameter allows one to derive a differential equation for the energy shift, $`\beta `$, and simplifies the Neumann expression. We begin with a change of variables from $`a`$ to $`\beta (a)`$:
$$\frac{\partial \psi }{\partial a}=\frac{d\beta (a)}{da}\frac{\partial \psi }{\partial \beta }.$$
(34)
In the Dirichlet case one may expand the boundary condition, which determines the energy values, in $`\beta `$,
$$0=\psi |_s\approx \psi (0)|_s+\beta \frac{\partial \psi }{\partial \beta }|_s,$$
(35)
($`\psi (0)`$ specifies the unbounded wavefunction) allowing the approximate replacement
$$\frac{\partial \psi }{\partial \beta }|_s\approx -\frac{1}{\beta }\psi (0)|_s.$$
(36)
When (29), (34) and (36) are used in (26), the variables $`\beta `$ and $`a`$ separate and we have
$$\int ^{\beta (a_0)}\frac{d\beta }{\beta ^2}\approx -2m\int ^{a_0}\frac{da}{\int 𝑑sh_n^{-1}|\psi (0)|^2}.$$
(37)
This identifies the analytic form for $`1/\beta (a_0)`$ up to a constant. Thus, asymptotically one has
$$\beta ^D(a_0)\approx \left(2m\int ^{a_0}\frac{da}{\int 𝑑sh_n^{-1}|\psi (0)|^2}\right)^{-1}.$$
(38)
The Neumann case proceeds similarly, beginning with
$$\frac{\partial }{\partial a}\partial _n\psi =\frac{d\beta (a)}{da}\frac{\partial }{\partial \beta }\partial _n\psi .$$
(39)
The boundary condition is again expanded in $`\beta `$,
$$0=\partial _n\psi |_s\approx \partial _n\psi (0)|_s+\beta \frac{\partial }{\partial \beta }\partial _n\psi |_s,$$
(40)
resulting in
$$\frac{\partial }{\partial \beta }\partial _n\psi |_s\approx -\frac{1}{\beta }\partial _n\psi (0)|_s.$$
(41)
As we will see in the next Section under very general conditions for good parity states,
$$\psi (\beta (a))|_s\approx 2\psi (0)|_s.$$
(42)
When (39), (41) and (42) are used in (27), no integration is needed and we simply obtain
$$\beta ^N(a_0)\approx \frac{1}{m}\int 𝑑s_0\psi ^{}(0)\partial _n\psi (0),$$
(43)
for the energy shift. Note that we have not assumed that the wavefunctions are separable. For a general confining ellipsoidal surface it can be shown that in the asymptotic limit the appropriate scale factor goes to one. In the following we will also see that it is always possible to choose surfaces for which these factors are effectively unity. Therefore we will set $`h_n=1`$ in the following.
## V Alternate Approach
An alternate approach determines the energy shifts from the form of the asymptotic wavefunction. Assuming that the potential, $`V`$, is spherically symmetric far from the region where the unbounded wavefunction exists (measured by its average radius, $`<r>`$, say), we have the approximate wave equation
$$\left[\frac{d^2}{dr^2}-\kappa ^2(r)\right]\psi (\beta (a))=0,$$
(44)
which is solved by
$`\psi (\beta (a))=\psi (0)|_s(e^{-\int _{r_a}^r\kappa (r^{})𝑑r^{}}\pm e^{\int _{r_a}^r\kappa (r^{})𝑑r^{}}),`$ (45)
where $`r_a=r|_s`$ and where the upper sign goes with Neumann boundary conditions and the lower goes with Dirichlet, and the second term gives a vanishingly small contribution far from the boundary. In the immediate vicinity of the boundary, the unbounded wavefunction may be characterized as
$$\psi (0)\approx \psi (0)|_se^{-\int _{r_a}^r\kappa (r^{})𝑑r^{}}.$$
(46)
Thus we conclude in the Dirichlet case,
$$\partial _n\psi (\beta (a))|_s\approx 2\partial _n\psi (0)|_s,$$
(47)
giving the alternate expression,
$$\beta ^D(a_0)\approx -\frac{2}{m}\int _{\mathrm{\infty }}^{a_0}𝑑a\int 𝑑s\left|\partial _n\psi (0)\right|^2.$$
(48)
Similarly, in the Neumann case one has (42) above as well as
$$\partial _n^2\psi (\beta (a))|_s\approx 2\partial _n^2\psi (0)|_s,$$
(49)
giving the alternate form
$$\beta ^N(a_0)\approx \frac{2}{m}\int _{\mathrm{\infty }}^{a_0}𝑑a\int 𝑑s\psi ^{}(0)\partial _n^2\psi (0).$$
(50)
The next Section will show that use of the asymptotic forms of the wavefunctions allows the Dirichlet or Neumann energy shift formulas to be integrated and results in<sup>1</sup>
$$\beta ^D(a_0)\approx -\beta ^N(a_0)\approx -\frac{1}{m}\int 𝑑s_0\psi ^{}(0)\partial _n\psi (0).$$
(51)
This equation can also be interpreted as the energy shift in systems in higher or lower dimensions as long as the basic asymptotic wave equation (44) holds in the appropriate variable.
## VI Surface Integrals
In the last two Sections we have derived differently appearing formulas for the energy shifts. These will be reconciled in this Section. In the process we will learn about the surface integrals in these formulas in the general situation when the enclosing surface is not spherical.
The reconciliation of (43) and (50) in the Neumann case is accomplished by use of the asymptotic solutions to Eq.(44). The condition $`r_a\gg <r>`$ in each quantum state (or the appropriate statement in one dimension) and the existence of the asymptotic wave equation, Eq.(44), insures that we can ignore any $`r`$ dependence in the unbounded wavefunction, $`\psi (0)`$, when integrating or taking normal partial derivatives, other than from the exponential in Eq.(46). Writing the general asymptotic form of the unbounded wavefunction (still not assuming separability; $`\mathrm{\Omega }`$ represents the angular variables) as
$$\psi (0)=N(r,\mathrm{\Omega })e^{-\int _0^r𝑑r^{}\kappa (r^{})},$$
(52)
one can show that either (43) or (50) give the common form,
$$\beta ^N(a_0)\approx -\frac{1}{m}\int 𝑑s_0\kappa (r_a)\partial _nr|\psi (0)|^2,$$
(53)
This form may be further reduced because of the presence of the exponential in the surface integral. This will suppress contributions from regions of the surface not in the immediate vicinity of the points of closest approach to the potential center. Let us call this closest distance $`r_{}`$. Thus, we may pull the $`r`$ dependent quantities outside of the spatial integral, frozen at their values for $`r=r_{}`$,
$$\beta ^N(a_0)\approx -\frac{1}{m}\kappa (r_{})\frac{\partial r_{}}{\partial n}\int 𝑑s_0|\psi (0)|^2.$$
(54)
In the Dirichlet case, Eq.(48) may similarly be reduced to the negative of the right hand side of Eq.(54) using the asymptotic form, Eq.(52). On the other hand, starting from (38) one may write
$$\int 𝑑s|\psi (0)|^2=e^{-2\int _0^{r_{}}\kappa (r^{})𝑑r^{}}\int 𝑑s|N(r_a,\mathrm{\Omega })|^2e^{-2\int _{r_{}}^{r_a}\kappa (r^{})𝑑r^{}}.$$
(55)
The surface integral in (55) can be treated as a constant in the boundary integral in (38) (verified below), again resulting in the negative of the right hand side of Eq.(54) for $`\beta ^D(a_0)`$.
Eq.(54) is still unnecessarily complicated. Since the near points on the boundary will dominate the surface integral, there are three cases that can occur for the locus of these points: sphere, circle and point. In the sphere case one simply has
$$\beta _{sphere}^N(a_0)\approx -\frac{1}{m}\kappa (r_{})\int 𝑑s_0|\psi (0)|^2.$$
(56)
In the case where the near points are a circle, one can construct a local cylindrical surface for the integration, giving
$`{\displaystyle \int 𝑑s|N(r_a,\mathrm{\Omega })|^2e^{-2\int _{r_{}}^{r_a}\kappa (r^{})𝑑r^{}}}`$ $`\approx `$ $`r_{}{\displaystyle \int _0^{2\pi }}𝑑\varphi {\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}𝑑z|N(r_{},\mathrm{\Omega }_{})|^2e^{-2\kappa _{}(r_a-r_{})},`$ (57)
$`\approx `$ $`r_{}{\displaystyle \int _0^{2\pi }}𝑑\varphi |N(r_{},\mathrm{\Omega }_{})|^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}𝑑ze^{-\frac{\kappa _{}}{r_{}}z^2},`$ (58)
$`=`$ $`\sqrt{{\displaystyle \frac{\pi r_{}^3}{\kappa (r_{})}}}{\displaystyle \int _0^{2\pi }}𝑑\varphi |N(r_{},\mathrm{\Omega }_{})|^2.`$ (59)
This results in
$$\beta _{circle}^N(a_0)\approx -\frac{1}{m}\sqrt{\pi r_{}^3\kappa (r_{})}\int _0^{2\pi }𝑑\varphi |\psi _{}(0)|^2,$$
(60)
where $`\psi _{}(0)`$ is the wavefunction along the $`r_{}`$ points. Likewise, if the nearest equidistant points are discrete, one can consider a locally flat surface and evaluate
$`{\displaystyle \int 𝑑s|N(r_a,\mathrm{\Omega })|^2e^{-2\int _{r_{}}^{r_a}\kappa (r^{})𝑑r^{}}}`$ $`\approx `$ $`2\pi {\displaystyle \sum _i}|N_i(r_{},\mathrm{\Omega }_{})|^2{\displaystyle \int _0^{\mathrm{\infty }}}𝑑\rho \rho e^{-2\kappa _{}(r_a-r_{})},`$ (61)
$`\approx `$ $`2\pi {\displaystyle \sum _i}|N_i(r_{},\mathrm{\Omega }_{})|^2{\displaystyle \int _0^{\mathrm{\infty }}}𝑑\rho \rho e^{-\frac{\kappa _{}}{r_{}}\rho ^2},`$ (62)
$`=`$ $`{\displaystyle \frac{\pi r_{}}{\kappa (r_{})}}{\displaystyle \sum _i}|N_i(r_{},\mathrm{\Omega }_{})|^2,`$ (63)
where the sum is over the equidistant points. This results in
$$\beta _{points}^N(a_0)\approx -\frac{\pi r_{}}{m}\sum _i|\psi _i(0)|^2.$$
(64)
This last case is the appropriate one for a cubic box enclosure with antipodal boundary conditions.
## VII Boxes with Self Coupled Sides
The antipodal boundary conditions considered here are not the ones used in Monte Carlo lattice simulations since they distinguish one spatial point as the “center”. These simulations generally use periodic or antiperiodic couplings on opposite box sides. Nevertheless, the antipodal conditions are effectively implemented in such simulations because of the local nature of the surface integrals. Let us see how this comes about.
In general one should consider both continuity of $`\psi `$ and $`_n\psi `$ on the surfaces in constructing the energy shift formulas for boxes with self coupled sides. Let us consider the energy shift from the $`x=\pm a`$ surfaces of the box for a parity invariant, centered potential. The asymptotic wavefunction with antipodal boundary conditions in this case can be written
$$\psi (\beta (a))\approx \psi (r_a,+)e^{-\int _{r_a}^r\kappa (r^{})𝑑r^{}}\pm \psi (r_a,-)e^{\int _{r_a}^r\kappa (r^{})𝑑r^{}},$$
(65)
where the upper sign is for periodic and the lower is for the antiperiodic case, and where
$$\psi (r_a,+)\equiv N(r_a,\mathrm{\Omega })e^{-\int _0^{r_a}\kappa (r^{})𝑑r^{}},$$
(66)
gives the general form of the unbounded wavefunction with $`r`$ replaced by $`r_a=\sqrt{a^2+y^2+z^2}`$. $`\psi (r_a,-)`$ is the same as above except with $`\mathrm{\Omega }`$ reflected through the $`x=0`$ plane. The second term on the right of Eq.(65) will be exponentially suppressed in the interior of the box. Now using the general results, Eqs.(10) and (19), and integrating over $`a`$ using the asymptotic form (65), we have
$$\beta ^{box}(a_0)\approx -\frac{1}{m}\int 𝑑s_0\kappa (r_a)\partial _nr\mathrm{Re}(\psi ^{}(r_a,+)\psi (r_a,-)).$$
(67)
($`S_0`$ indicates both sides of the box.) We will get the full energy shift by integrating over the other sides of the box in a similar manner. Because of the local nature of these integrations, the contributing portions of the box sides are almost antipodal, and the factors $`\kappa (r_a)`$ and $`_nr`$ can come outside of the surface integral. Then, for wavefunctions with good parity,
$$\psi _{}(r_a,+)=\pm \psi _{}(r_a,-),$$
(68)
Eq.(67) in the periodic case just gives the results of the last Section for Neumann and Dirichlet boundary conditions, respectively. Antiperiodicity just reverses these results. Near the box edges the representation (65) fails to be correct, but the local nature of the surface integrations allows it as an approximation. When the potential is not centered or the states do not have good parity, similar considerations show that periodic and antiperiodic energy shifts are negative and equal, at least in the case where one point is much closer to the force center than the other.
## VIII Examples
We will consider two examples of the use of the energy shift formulas, the one dimensional harmonic oscillator and the hydrogen atom.
For the harmonic oscillator, the asymptotic form of the wavefunctions is given by ($`\overline{x}\equiv (m\omega )^{-1/2}`$; $`\stackrel{~}{x}\equiv x_0/\overline{x}`$; $`x_0`$ is the boundary distance from the origin)
$$\psi _N(\stackrel{~}{x})\approx \frac{\stackrel{~}{x}^Ne^{-\frac{1}{2}\stackrel{~}{x}^2}}{\sqrt{\overline{x}N!(2)^{-N}\sqrt{\pi }}}.$$
(69)
Use of (69) in Eq.(51) (the “surface” integral simply supplies a factor of two) gives immediately that
$$\beta _N^D(x_0)\approx \frac{(2)^{N+1}\stackrel{~}{x}^{2N+1}e^{-\stackrel{~}{x}^2}}{m\overline{x}^2N!\sqrt{\pi }}.$$
(70)
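Eq.(70) can be compared directly with a numerical solution of the bounded problem. The finite-difference sketch below is illustrative (the grid resolution and the choice $`x_0=3\overline{x}`$ are ours, not from the text), using units $`\mathrm{}=m=\omega =1`$:

```python
import numpy as np

def dirichlet_ground_energy(x0, npts=1500, m=1.0, w=1.0):
    """Ground state of the HO with psi(+-x0) = 0, second-order differences."""
    x = np.linspace(-x0, x0, npts + 2)[1:-1]      # interior points only
    h = x[1] - x[0]
    main = 1.0 / (m * h * h) + 0.5 * m * w * w * x * x
    off = -0.5 / (m * h * h) * np.ones(npts - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

x0 = 3.0                                           # in units of xbar = (m w)**-0.5
shift = dirichlet_ground_energy(x0) - 0.5          # unbounded ground energy is w/2
predicted = 2 * x0 * np.exp(-x0**2) / np.sqrt(np.pi)  # Eq.(70) with N = 0
print(shift, predicted)
```

At this $`x_0`$ the computed shift and Eq.(70) agree to better than ten percent, and the agreement improves rapidly as $`x_0`$ grows, as expected for an asymptotic formula.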
On the other hand, the explicit energy shifts for the harmonic oscillator may be found from the general forms of the solutions, proportional to confluent hypergeometic functions, using the boundary condition on the wavefunction. The form of the wavefunction is
$$\psi ^{even}(\stackrel{~}{x})\propto e^{-\frac{1}{2}\stackrel{~}{x}^2}F(\frac{1-\lambda }{4}|\frac{1}{2}|\stackrel{~}{x}^2),$$
(71)
for the even parity states, and the energies are $`E=\lambda /(2m\overline{x}^2)`$. Expanding $`\lambda `$ as
$$\lambda =(4N+1)+2m\overline{x}^2\mathrm{\Delta }E,$$
(72)
($`N=0,1,2,\mathrm{\dots }`$) one has in the Dirichlet case, $`\psi ^{even}(\stackrel{~}{x})=0`$, that
$$\mathrm{\Delta }E_N^D(x_0)\approx -\frac{F(-N|\frac{1}{2}|\stackrel{~}{x}^2)}{2m\overline{x}^2\frac{\partial }{\partial \lambda }F(-N-\frac{\lambda }{4}|\frac{1}{2}|\stackrel{~}{x}^2)|_{\lambda =0}}.$$
(73)
Asymptotically,
$$F(-N|\frac{1}{2}|\stackrel{~}{x}^2)\approx \frac{(-1)^N}{\left(\frac{1}{2}\right)_N}\stackrel{~}{x}^{2N},$$
(74)
where
$$(a)_N\equiv \prod _{i=0}^{N-1}(a+i)=\frac{\mathrm{\Gamma }(N+a)}{\mathrm{\Gamma }(a)}.$$
(75)
The method of taking derivatives of confluent hypergeometric functions in Ref. is used, neglecting polynomials in $`x_0`$ up to order $`2N`$ in $`\frac{\partial }{\partial \lambda }F(-N-\frac{\lambda }{4},\frac{1}{2},\stackrel{~}{x}^2)|_{\lambda =0}`$. After a shift in the summand and considerable reduction, this can be put into the form
$$\mathrm{\Delta }E_N^D(x_0)\approx \frac{2\stackrel{~}{x}}{m\overline{x}^2N!}\left[\sum _{\tau =2N+1}^{\mathrm{\infty }}\frac{\stackrel{~}{x}^{2\tau -4N+1}}{(N+\frac{1}{2})_{\tau -2N}(\tau -2N)_{N+1}}\right]^{-1}.$$
(76)
In dropping the polynomial contribution, we must have that the major contribution to the infinite sum in (76) occurs for $`\tau \gg 2N`$. Since $`<N|x^2|N>=\overline{x}^2(N+\frac{1}{2})`$ in the unbounded state $`N`$ and because the major contribution in the infinite sum actually occurs for $`\tau \sim \stackrel{~}{x}^2`$, we have that $`x_0^2\gg 2<N|x^2|N>`$ for these asymptotic forms to hold, as would be expected.
One notices that Eq.(70) is odd in $`x_0`$, while (76) is even, so a term by term comparison of the two expressions (expanding $`e^{-\stackrel{~}{x}^2}=(e^{\stackrel{~}{x}^2})^{-1}`$ in a power series) is not possible. However, the asymptotic limits of the two sums are identical. This can be established by using
$`(N+{\displaystyle \frac{1}{2}})_{\tau -2N}(\tau -2N)_{N+1}`$ $`\approx `$ $`{\displaystyle \frac{(\frac{1}{2})_{\tau +1}2^N}{(2N-1)!!}},`$ (77)
$`{\displaystyle \sum _{\tau =2N+1}^{\mathrm{\infty }}}{\displaystyle \frac{\stackrel{~}{x}^{2\tau +1}}{(\frac{1}{2})_{\tau +1}}}`$ $`\approx `$ $`\sqrt{\pi }e^{\stackrel{~}{x}^2}.`$ (78)
Eq.(77) holds for $`\tau \gg 2N`$ and Eq.(78) can be justified by using Eq.(75), replacing the sum on $`\tau `$ with an integral, and doing the shift $`\tau \to \tau -\frac{1}{2}`$ (neglecting a polynomial of order $`4N+1`$ in $`\stackrel{~}{x}`$). Finally, using (77) and (78) in (70), we now have that
$$\mathrm{\Delta }E_N^D(x_0)=\beta _{2N}^D(x_0),$$
(79)
as is appropriate for the even parity states. Going back to the general form of the wavefunction, one may also verify that asymptotically $`\mathrm{\Delta }E_N^N(x_0)=-\mathrm{\Delta }E_N^D(x_0)`$ from the Neumann boundary condition, $`\partial _x\psi ^{even}(x)|_{x=x_0}=0`$.
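The summation identity Eq.(78) invoked above is easy to probe numerically; the recursive evaluation of the Pochhammer symbols below is simply an implementation convenience:

```python
import math

def tail_sum(xt, tau_min, terms=400):
    """Sum of xt**(2*tau+1) / (1/2)_{tau+1} for tau >= tau_min."""
    term = xt / 0.5                       # tau = 0 term, since (1/2)_1 = 1/2
    total = term if tau_min <= 0 else 0.0
    for tau in range(1, terms):
        term *= xt**2 / (0.5 + tau)       # (1/2)_{tau+1} = (1/2)_tau * (tau + 1/2)
        if tau >= tau_min:
            total += term
    return total

xt, N = 4.0, 1
lhs = tail_sum(xt, 2 * N + 1)
rhs = math.sqrt(math.pi) * math.exp(xt**2)
print(lhs / rhs)                          # close to 1 for xt**2 >> 2*N
```

For $`\stackrel{~}{x}=4`$ and $`N=1`$ the ratio already differs from one by much less than a part in a thousand, consistent with the dropped terms being a low-order polynomial against an exponential.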
For our second example we consider a hydrogen atom bounded by a sphere. The asymptotic form of the radial wavefunctions for the hydrogen atom is ($`\stackrel{~}{r}\equiv r_0/Na_0`$; “$`a_0`$” is the Bohr radius)
$$R_{NL}(\stackrel{~}{r})\approx \left(\frac{2}{Na_0}\right)^{3/2}\frac{(2\stackrel{~}{r})^{N-1}e^{-\stackrel{~}{r}}}{\sqrt{2N(N+L)!(N-L-1)!}}.$$
(80)
This immediately implies the Dirichlet energy shifts
$$\beta _{NL}^D(r_0)\approx \frac{1}{ma_0^2N^3}\frac{(2\stackrel{~}{r})^{2N}e^{-2\stackrel{~}{r}}}{(N+L)!(N-L-1)!},$$
(81)
from Eq.(51). Obviously, the surface integral was trivial in this case.
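The same comparison as for the oscillator can be made numerically here, by diagonalizing the radial problem for $`u=rR`$ with $`u(0)=u(r_0)=0`$ in units $`m=a_0=1`$ (the grid choices are illustrative):

```python
import numpy as np

def bounded_hydrogen_ground(r0, npts=2500):
    """Ground state (N=1, L=0) of hydrogen inside a sphere of radius r0."""
    r = np.linspace(0.0, r0, npts + 2)[1:-1]   # interior grid; u(0) = u(r0) = 0
    h = r[1] - r[0]
    main = 1.0 / h**2 - 1.0 / r                # kinetic diagonal + Coulomb, m = a0 = 1
    off = -0.5 / h**2 * np.ones(npts - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

r0 = 7.0
shift = bounded_hydrogen_ground(r0) - (-0.5)   # unbounded 1s energy is -1/2
predicted = 4 * r0**2 * np.exp(-2 * r0)        # Eq.(81) with N = 1, L = 0
print(shift, predicted)
```

At this radius Eq.(81) reproduces the computed shift to within roughly twenty percent (it slightly overshoots), with the agreement improving as $`r_0`$ grows, again as expected for the leading asymptotic term.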
The explicit form of the energy shifts for the hydrogen atom have previously been evaluated in Ref. in the Dirichlet case. This was done by expanding the general form of the radial wavefunction
$$R(r)\propto e^{-\frac{r}{na_0}}r^LF(L-n+1|2L+2|\frac{2r}{na_0}),$$
(82)
with energy
$$E=-(2ma_0^2n^2)^{-1},$$
(83)
to first order in a Taylor series in the energy shift,
$$n\approx N+ma_0^2N^3\mathrm{\Delta }E,$$
(84)
($`N=1,2,3,\mathrm{\dots }`$) and then solving for $`\mathrm{\Delta }E_{NL}^D`$ with boundary condition $`R(r_0)=0`$. One obtains after reductions,
$$\mathrm{\Delta }E_{NL}^D(r_0)\approx \left[ma_0^2N^3(N+L)!(N-L-1)!\sum _{\tau =2N+1}^{\mathrm{\infty }}\frac{(\tau -2N-1)!(2\stackrel{~}{r})^{\tau -2N}}{(\tau -L-N-1)!(\tau +L-N)!}\right]^{-1},$$
(85)
using the asymptotic form of the wavefunctions and neglecting a finite polynomial series of order $`N-L-1`$ in $`r_0`$ in the denominator. For $`\tau \gg N`$ or $`L`$ (guaranteed for $`\stackrel{~}{r}\gg N^2`$), one has
$$\frac{(\tau -2N-1)!}{(\tau -L-N-1)!(\tau +L-N)!}\approx \frac{1}{\tau !},$$
(86)
which results in
$$\mathrm{\Delta }E_{NL}^D(r_0)=\beta _{NL}^D(r_0),$$
(87)
when another polynomial of order $`2N`$ in $`r_0`$ is neglected in forming the exponential, $`e^{2\stackrel{~}{r}}`$. Once again, one may verify that the Neumann shift is just the negative of the Dirichlet one.
## IX Comments and Summary
A number of comments may be made about energy shifts in general from the above formulas.
$``$ Averaging over Dirichlet and Neumann boundary conditions by averaging over periodic and antiperiodic boundary conditions in lattice simulations, is an effective way of removing the asymptotic energy shifts for nonrelativistic bound systems with good parity. This point can and should be tested numerically in potential simulations. It is the assumption of good parity which allows this statement to be made, and is not true for antipodal boundary conditions in general.
$``$ The exact formula, Eq.(20), shows that the energy shift is never negative in the case of Dirichlet boundary conditions, which corresponds to the usual meaning of a bounded system. This makes sense from the point of view of the extra kinetic energy generated from the Heisenberg uncertainty principle.
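Both sign statements (Dirichlet shifts upward, Neumann shifts downward, and the two are asymptotically equal and opposite) can be probed with a small numerical experiment on the bounded harmonic oscillator; the ghost-cell discretization below is an illustrative construction, not taken from the text:

```python
import numpy as np

def ho_ground(x0, bc, npts=1500, m=1.0, w=1.0):
    h = 2 * x0 / npts
    x = -x0 + (np.arange(npts) + 0.5) * h      # cell-centered grid, walls at +-x0
    main = 1.0 / (m * h * h) + 0.5 * m * w * w * x * x
    off = -0.5 / (m * h * h) * np.ones(npts - 1)
    edge = 0.5 / (m * h * h)
    if bc == "dirichlet":                      # ghost cell = -(edge cell): psi(wall) = 0
        main[0] += edge; main[-1] += edge
    else:                                      # ghost cell = +(edge cell): psi'(wall) = 0
        main[0] -= edge; main[-1] -= edge
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

x0 = 2.5
eD = ho_ground(x0, "dirichlet")                # raised above w/2
eN = ho_ground(x0, "neumann")                  # lowered below w/2
print(eD - 0.5, eN - 0.5)                      # opposite in sign, comparable in size
```

For $`x_0=2.5\overline{x}`$ the two shifts already agree in magnitude at the few-tens-of-percent level, and the agreement tightens as the wall recedes.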
$``$ The exact differential energy shift in the case of Neumann boundary conditions for a central potential bounded by a sphere can be written as
$$\frac{d\beta }{da}=\int 𝑑s\psi ^{}(\beta (a))\left[V(a)-E(a)+\frac{L(L+1)}{2ma^2}\right]\psi (\beta (a)).$$
(88)
Assuming the right hand side is dominated by the (positive) spherical confinement potential, $`V(a)`$, at large $`a`$, one immediately sees that the asymptotic energy shifts are negative as $`a`$ is decreased. For purely Coulombic systems the right hand side is instead dominated by $`E(a)`$, which is negative, leading again to asymptotic negative energy shifts. At extremely small $`a`$, one expects the positive Neumann box energies, $`E\sim 1/a^2`$, to dominate, leading to positive energy shifts. Thus, in the simplest scenario one expects a minimum in the energy at some intermediate value of $`a`$. This seems to be what is seen in the numerical solutions corresponding to Neumann boundary conditions (periodic even parity states) in Ref., although the confining geometry is a cube, not a sphere. Actual proof of this scenario in spherical and other geometries however is difficult, and more complicated types of behavior cannot be ruled out.
As illustrations of the derived formulas we considered two examples of centered potentials: a one dimensional harmonic oscillator and a spherically bounded three dimensional hydrogen atom. The asymptotic energy shifts were calculated by the use of the derived formulas and verified from the known forms of the wavefunctions for the even parity oscillator and general quantum number Coulomb systems. Although we have not illustrated the more general case of a confining nonspherical boundary, formulas have been given for the asymptotic surface integrals encountered. The common situation of a confining box with self coupled sides is also amenable to numerical calculation using a formula of Section VII. Thus, the formalism here may readily be applied to lattice simulations of nonrelativistic bound systems in the usual box geometry. When applied to truly periodic systems, not self coupled as in lattice studies, these results actually estimate the location of one edge of the Bloch energy bands which emerge.
## Acknowledgments
This work originated in a conversation with P. Boyle regarding the results in Ref.. The author gratefully acknowledges support from NSF Grant No. PHY-9722073, the Department of Physics and Astronomy, University of Kentucky, as well as from the Special Research Centre for the Subatomic Structure of Matter, University of Adelaide.
FOOTNOTE
1. An alternate unintegrated expression, coming from the Dirichlet expression, Eq.(38), is
$`\beta ^D(a_0)\approx {\displaystyle \frac{1}{2m}}{\displaystyle \int 𝑑s\left[\int ^{a_0}\frac{da}{|\psi (0)|^2}\right]^{-1}}.`$ (89)
## 1. Introduction
In 1941 Kolmogorov derived his famous scaling relations for the local structure of turbulence ; in particular he deduced that the second-order structure function in the inertial range was proportional to $`r^{2/3}`$, where $`r`$ is the separation of the points (for definitions and analysis, see below). Obukhov simultaneously determined the energy spectrum in the inertial range by similar means. Soon afterwards, Landau suggested that the Kolmogorov-Obukhov $`2/3`$ (–5/3) exponent may be modified by the presence of intermittency, and various proposals for Reynolds-number-independent modifications have been made since then (see e.g. ).
On the other hand, the Kolmogorov-Obukhov scaling argument has been reexamined through the prism of modern scaling theory which produced a Reynolds-number-dependent exponent in the structure function, tending to $`2/3`$ in the limit of vanishing viscosity. This argument is supported by the near-equilibrium statistical theory of turbulence , as well as by vanishing-viscosity asymptotics .
The Kolmogorov-Obukhov theory was not derived from first principles such as the Navier-Stokes equations, and contains additional assumptions which are open to debate. In the absence of general analytical solutions of the Navier-Stokes equations and of adequate computational data, the only way to check the theory is to subject it to experimental verification. Several experimentalists have claimed to have observed a correction to the $`2/3`$ exponent, and that it was independent of $`Re`$; among the influential papers in this direction are the papers of Benzi et al. . In (page 389), the authors state that “the exponents … are the same in all experiments” (i.e., they are independent of Reynolds number), and the exponent in the second order structure function is “close but definitely different from the value $`2/3`$ used by Obukhov”. Our goal here is to refute this statement: to the extent that the data exhibited in can be relied upon, they militate in favor of a Reynolds-number-dependent exponent with a $`2/3`$ vanishing-viscosity limit. The difference between the conclusions reached in and the conclusions reached below is apparently due to the data not having been examined carefully enough in for possible dependence on $`Re`$.
In the next section we provide a brief summary of scaling arguments as they apply to the structure functions in fully developed turbulence. We then present our analysis of the data and draw a conclusion.
## 2. Scaling in the local structure of turbulence
The quantities of interest in the local structure of turbulence are the moments of the relative velocity field, in particular the second order tensor with components
$$D_{ij}=\langle (\mathrm{\Delta }_𝕣)_i(\mathrm{\Delta }_𝕣)_j\rangle ,$$
$`2.1`$
where $`\mathrm{\Delta }_𝕣=𝕦(𝕩+𝕣)-𝕦(𝕩)`$ is a velocity difference between $`𝕩`$ and $`𝕩+𝕣`$. In incompressible flow that is in addition locally isotropic, all the components of this tensor are determined if one knows $`D_{LL}=\langle [u_L(𝕩+𝕣)-u_L(𝕩)]^2\rangle `$ where $`u_L`$ is the velocity component along the vector $`𝕣`$.
To derive an expression for $`D_{LL}`$ assume, following Kolmogorov, that for $`r=|𝕣|`$ small, it depends on $`\epsilon `$, the mean rate of energy dissipation per unit volume, $`r`$, the distance between the points at which the velocity is measured, a length scale $`\mathrm{\Lambda }`$, for example the Taylor macroscale $`\mathrm{\Lambda }_T`$, and the kinematic viscosity $`\nu `$ :
$$D_{LL}(r)=f(\epsilon ,r,\mathrm{\Lambda }_T,\nu ),$$
$`2.2`$
where the function $`f`$ is assumed to be the same for all developed turbulent flows. Introduce the Kolmogorov scale $`\mathrm{\Lambda }_K`$, which marks the lower bound of the “inertial” range of scales in which energy dissipation is negligible:
$$\mathrm{\Lambda }_K=\frac{\nu ^{3/4}}{\epsilon ^{1/4}};$$
$`2.3`$
Clearly, the appropriate velocity scale is
$$u=(\epsilon \mathrm{\Lambda }_T)^{1/3},$$
$`2.4`$
and this yields a Reynolds number
$$Re=\frac{(\epsilon \mathrm{\Lambda }_T)^{1/3}\mathrm{\Lambda }_T}{\nu }=\frac{\epsilon ^{1/3}\mathrm{\Lambda }_T^{4/3}}{\nu }=\left(\frac{\mathrm{\Lambda }_T}{\mathrm{\Lambda }_K}\right)^{4/3}.$$
$`2.5`$
Dimensional analysis yields the scaling law:
$$D_{LL}=(\epsilon r)^{\frac{2}{3}}\mathrm{\Phi }(\frac{r}{\mathrm{\Lambda }_K},Re),$$
$`2.6`$
where as before, the function $`\mathrm{\Phi }`$ is an unknown dimensionless function of its arguments, which have been chosen so that in the inertial range they are both large.
If one now subjects (2.6) to an assumption of complete similarity in both its arguments (i.e., one assumes that $`\mathrm{\Phi }`$ tends to a finite non-zero limit when its arguments tend to infinity, see ), one obtains the classical Kolmogorov $`2/3`$ law
$$D_{LL}=A_0(\epsilon r)^{\frac{2}{3}},$$
$`2.7`$
from which the Kolmogorov-Obukhov “5/3” spectrum can be obtained via Fourier transform. If one makes the assumption of incomplete similarity in $`r/\mathrm{\Lambda }_K`$ and no similarity in $`Re`$ (i.e., one assumes that as $`(r/\mathrm{\Lambda }_K)\to 0`$ the function $`\mathrm{\Phi }`$ has power-type asymptotics with $`Re`$-dependent parameters), as we have shown to be appropriate in certain shear flows , the result is
$$\frac{D_{LL}(r)}{(\epsilon r)^{2/3}}=C(Re)\left(\frac{r}{\mathrm{\Lambda }_K}\right)^{\alpha (Re)},$$
$`2.8`$
where $`C,\alpha `$ are functions of $`Re`$ only. Expand $`C`$ and $`\alpha `$ in powers of $`\frac{1}{\mathrm{ln}Re}`$, as is suggested by vanishing viscosity asymptotics (for another example of this expansion, see ), and keep the two leading terms; this yields
$$D_{LL}=(\epsilon r)^{2/3}\left(C_0+\frac{C_1}{\mathrm{ln}Re}\right)\left(\frac{r}{\mathrm{\Lambda }_K}\right)^{\alpha _1/\mathrm{ln}Re},$$
$`2.9`$
where $`C_0,C_1,\alpha _1`$ are constants and the zero-order term in the exponent has been set equal to zero so that $`D_{LL}`$ has a finite limit as $`\nu \to 0`$ . According to (2.9), the exponent in $`D_{LL}`$ is $`Re`$-dependent and converges to $`2/3`$ as $`Re\to \mathrm{\infty }`$. Note that the prefactor (the “Kolmogorov constant”) is also $`Re`$-dependent, as has indeed been observed experimentally .
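To make the $`Re`$-dependence in (2.9) concrete, here is a small numerical sketch. It is entirely ours: the value $`\alpha _1=0.3`$ is an illustrative constant chosen so that the slope is roughly $`0.7`$ at $`Re=6000`$; it is not a fitted parameter from the paper.

```python
import math

def effective_slope(Re, alpha1=0.3):
    """Effective exponent of D_LL per eq. (2.9): 2/3 + alpha1 / ln(Re).
    alpha1 = 0.3 is an illustrative value, not a fit to the data."""
    return 2.0 / 3.0 + alpha1 / math.log(Re)

# The slope decreases slowly with Re and tends to 2/3 from above:
slopes = [effective_slope(Re) for Re in (6_000, 18_000, 300_000)]
```

With this choice the effective slope decreases slowly from about 0.70 at $`Re=6000`$ towards $`2/3`$ as $`Re`$ grows, mirroring the trend discussed below in connection with Figure 2.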
A further possibility is to subject $`D_{LL}`$ to an assumption of complete similarity in $`Re`$ and incomplete similarity in $`r/\mathrm{\Lambda }_K`$, opening the door to $`Re`$-independent corrections to the $`2/3`$ power. This possibility, discussed in , is incompatible with the existence of a well-behaved vanishing-viscosity limit for the second-order structure function, in contradiction with the theory in . This assumption corresponds to the “extended similarity” discussed in .
The conclusion that the classical Kolmogorov-Obukhov value is obtained in the limit $`Re\mathrm{}`$ was reached in by a statistical mechanics argument. Furthermore, the usual explanation for possible departures of the exponent from the Kolmogorov-Obukhov value is the need to account for intermittency. However, it was shown in that the Kolmogorov-Obukhov value already takes intermittency into account; indeed, mean-field theories presented in give exponents which differ from $`2/3`$, and of course depend on the additional assumptions used to define a system to which a mean-field theory can be applied. In it was argued that the $`2/3`$ value corresponds to “perfect” intermittency, with the $`Re`$-dependent correction being created by the decrease in intermittency due to viscosity. This physical picture is mirrored by the fact that we obtained the $`2/3`$ exponent not by an assumption of complete similarity but as the vanishing-viscosity limit of a power-law derived from an assumption of incomplete similarity; for a more detailed explanation in a related problem, see .
Kolmogorov proposed similarity relations also for the higher-order structure functions:
$$D_{LL\mathrm{\cdots }L}(r)=\langle [u_L(𝕩+𝕣)-u_L(𝕩)]^p\rangle ,$$
where $`LL\mathrm{\cdots }L`$ denotes $`L`$ repeated $`p`$ times; the scaling gives $`D_{LL\mathrm{\cdots }L}=C_p(\epsilon r)^{p/3}`$. As is well-known, for $`p=3`$ the Kolmogorov scaling is valid with no corrections. We shall not be concerned here with higher-order structure functions, for which the validity of the vanishing-viscosity arguments is at present unknown , and whose very existence in the limit of vanishing viscosity is doubtful .
## 3. A reexamination of the data of Benzi et al.
Our starting point is the graph in Figure 3 of the paper , which contains a plot of the second order structure function $`\mathrm{log}D_{LL}`$ as a function of the logarithm of the third moment $`D_{LLL}`$, whose dependence on $`r`$ in the inertial interval is well-known to obey the appropriate Kolmogorov scaling and thus be proportional to $`r`$. This way of processing the data provides a longer interval in which the exponent can be seen, and also produces as an artifact a slope of $`2/3`$ for separations $`r`$ in the dissipation range, as is indeed carefully explained in . The data come from four series of experiments in a small wind-tunnel: experiments labeled J in which the turbulence was produced by a jet at $`Re=300,000`$ (based on the integral scale), experiments labeled C6 where the turbulence was produced by a cylinder at $`Re=6000`$, experiments labeled C18, with a cylinder and $`Re=18000`$, and experiments in which the turbulence was produced by a grid and $`Re`$ was not specified in the paper; we have found from referees that the Reynolds number of the grid data was low, with no precise value. The various experimental procedures are detailed in and we do not query them in any way.
The resulting values of $`\mathrm{log}D_{LL}`$ as a function of $`\mathrm{log}D_{LLL}`$ were plotted in Figure 3 of without regard to $`Re`$, and a line was fitted to them as if they came from a single experiment. That line had slope roughly equal to $`0.7`$, and this is the basis for the claim by Benzi et al. that the exponent is definitely larger than $`2/3`$. However, it is obvious even to the naked eye that the points that come from experiments with differing $`Re`$ do not line up well on that single line. We now show this lack of alignment in detail.
To our regret, Benzi et al. have not responded to our requests for data in digital form, so we scanned Figure 3 of with the modern equipment available at the Lawrence Berkeley Laboratory and obtained numerical values in this way. The set of values is incomplete because in certain regions of that Figure there are so many points that it is impossible to separate them properly; there are however enough scanned points so that the conclusions below are independent of the remainder.
In Figure 1 we display the resulting values of $`\mathrm{log}D_{LL}`$ as a function of $`\mathrm{log}D_{LLL}`$ separately for each run, together with lines that correspond to $`y=(2/3)x+a_1`$, $`y=0.7x+a_2`$, where $`x,y`$ are the coordinates in those graphs and the values of $`a_1,a_2`$ are the same as in . As one can see, the slope defined by the experimental points is almost exactly $`0.7`$ for the experiment $`C6`$ ($`Re=6000`$); it goes down as $`Re`$ increases first to 18,000 and then to 300,000. The grid data, which we are told belong to a low Reynolds number, are particularly instructive: They follow the $`2/3`$ line for a while, presumably while the separation $`r`$ is in the dissipation scale, and then they bend towards the $`0.7`$ line, as the separation emerges from the dissipation scale but the Reynolds number is not large enough to produce the asymptotic $`2/3`$ scaling. The information at our disposal is not sufficient to estimate a priori where the bend should be. It is quite clear from these figures that one cannot view all these points as lying on a single line, and the data are compatible with incomplete similarity in $`r/\mathrm{\Lambda }_K`$ and an absence of similarity in $`Re`$, so that the exponent in the power law for $`D_{LL}`$ and therefore the slopes of the lines in Figure 1 are slowly decreasing functions of $`Re`$ when the separation $`r`$ is in the inertial range.
To make this point another way, we display in Figure 2 the local slopes in these figures, defined as
$$s_i^{\text{local}}=\frac{y_i-y_1}{x_i-x_1},$$
$`3.1`$
where $`(x_i,y_i)`$ are the coordinates of the $`i`$-th point in the graph for a specific experiment, and $`(x_1,y_1)`$ is the first (leftmost) point in that graph. The local slopes that result from using successive points are too noisy for any conclusion to be drawn. Figure 2 clearly shows that the slopes decrease as $`Re`$ increases for separations $`r`$ outside the dissipation range. In particular, for the grid data (apparently lowest $`Re`$) the slope increases with the separation $`r`$ as that separation emerges from the dissipation range.
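As an illustration, the estimator (3.1) can be implemented in a few lines (our own sketch using synthetic points; the scanned data of are not reproduced here):

```python
def local_slopes(points):
    """Local slopes s_i = (y_i - y_1) / (x_i - x_1) relative to the
    leftmost point, as in eq. (3.1); points is a list of (x, y) pairs
    sorted by increasing x."""
    x1, y1 = points[0]
    return [(y - y1) / (x - x1) for x, y in points[1:]]

# Synthetic check: points on an exact line of slope 0.7 give constant
# local slopes equal to 0.7.
pts = [(x, 0.7 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
slopes = local_slopes(pts)
```

Using the leftmost point as the fixed reference, rather than successive differences, is what suppresses the noise mentioned above.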
## 4. Conclusion
Figures 1 and 2 show, we believe conclusively, that there is no reason to conclude with that the Kolmogorov-Obukhov exponent is “definitely” different from $`2/3`$ and independent of $`Re`$. The data as we presented them suggest to the contrary that the exponent slowly decreases with $`Re`$ and tends to $`2/3`$ as $`Re\to \mathrm{\infty }`$. The experimental uncertainties detailed in , the uncertainty about the Reynolds number of the grid data, the small differences between the slopes under discussion, and the added uncertainties of the scanning, deter us from making the statement more emphatic yet.
Figure Captions
Figure 1. Variation of $`\mathrm{log}D_{LL}`$ with $`\mathrm{log}D_{LLL}`$ plotted separately for the several values of $`Re`$:
> a. Experiment C6 ($`Re=6000`$); b. Experiment C18 ($`Re=18000`$); c. Experiment J ($`Re=300000`$); d. Grid turbulence.
Figure 2: Local slopes in Figure 1 for the several experiments. (Diamonds– C6; squares– C18; triangles– J; stars– grid turbulence). Note that the slope for grid turbulence starts from 2/3 and increases, as one may expect from the calculation of the dissipation range (see the text).
# Asymmetric particle systems on ℝ
## 1 Introduction and outline
In this paper we are concerned with systems of interacting particles moving on the real line. The models of interest can be described as follows: Let $`x_i`$ denote the position of the $`i`$-th particle. In an elementary move particle $`i`$ jumps to the right to a position $`x_i+\delta _i`$ between $`x_i`$ and $`x_{i+1}>x_i`$. In the absence of a lattice spacing, there are two natural ways of setting the scale for the jump distance $`\delta _i`$: It can be imposed externally through the choice of a fixed probability density $`f_i(\delta _i)`$, in which case moves with $`\delta _i>x_{i+1}-x_i`$ have to be rejected, or the scale can be set by the gap or “headway”
$$u_i=x_{i+1}-x_i$$
(1.1)
in front of particle $`i`$ by letting $`f_i`$ depend on the configuration $`𝒰\equiv \{u_i\}_i`$ as
$$f_i(\delta _i|𝒰)=u_i^{-1}\varphi (\delta _i/u_i),$$
(1.2)
where $`\varphi (r)`$ is a probability density with support on the unit interval. Equation (1.2) implies that the jump length $`\delta _i`$ is a random fraction $`r`$ of the headway $`u_i`$. The rate for the move is a function $`\gamma (u_i)`$ of the headway. The moves are executed in continuous time (in which case each particle is equipped with an exponential clock) or in discrete time; in the latter case the particle positions are updated either in parallel, or sequentially by going through the system against the direction of particle motion. A model is defined by specifying the functions $`f_i(\delta )`$ and $`\gamma (u)`$ as well as the type of dynamics (continuous time, parallel or sequential).
Two equivalent representations of the dynamics will prove to be useful. In terms of the headway variables $`u_i`$ the particle configuration may be visualized as a system of sticks located at the sites $`i`$ of the integer lattice, $`u_i`$ being the length of stick $`i`$. In an elementary move a fraction $`\delta _i`$ of stick $`i`$ is broken off and added to stick $`i-1`$ . Alternatively, the particle positions $`x_i(t)`$ can be taken to define the height of a one-dimensional interface above the point $`i`$. The asymmetric particle motion translates into a growth process, and the fact that particles cannot pass each other implies that the interface is a monotonically increasing staircase ($`x_{i+1}-x_i>0`$) at all times. We will refer to these two viewpoints as the stick representation and the interface representation, respectively.
For continuous time dynamics, a jump length distribution of the type (1.2) with $`\varphi `$ uniform, and $`\gamma (u)=u`$ the model reduces to the Hammersley process discussed in . In this case the invariant distribution of particle positions is Poisson. Here we are interested in obtaining similar results for other choices of $`f_i`$ and $`\gamma `$, and other types of dynamics. Our motivation is mainly conceptual: While a wealth of results are available for particle systems on the integer lattice such as the asymmetric simple exclusion process , little is known analytically for the case of continuous particle positions, although motion on the real line appears naturally e.g. in applications to highway traffic .
An important simplifying feature of the asymmetric exclusion process is the existence of stationary product measures. Here the analogous desirable property is the product form
$$𝒫(𝒰)=\prod _iP(u_i)$$
(1.3)
for the stationary probability of a configuration $`𝒰`$ of particle headways. Therefore a primary goal will be to find nontrivial examples of asymmetric particle systems on $``$ for which (1.3) holds.
We provide an outline of the paper. In the next section we explore the conditions for a Poisson distribution of particle positions (corresponding to an exponential distribution of interparticle spacings in (1.3)) to be invariant for continuous time dynamics. Our strategy is to consider a finite number $`N`$ of particles moving on a ring of length $`L`$, and to demand that the stationary measure gives the same weight to all allowed configurations; this then implies a Poisson measure for $`N,L\to \mathrm{\infty }`$ at fixed density $`\rho =N/L`$. Provided the jump rate $`\gamma `$ is independent of the headway, we find that the Poisson measure is invariant for arbitrary externally imposed (i.e., configuration and particle independent) jump length distributions $`f(u)`$. On the other hand, if the jump length is scaled to the headway as in (1.2), the Poisson measure is stationary only for a one-parameter family of power law functions $`\varphi `$ and $`\gamma `$, which have been identified previously in the context of (symmetric) stick models .
Sections 3 and 4, which constitute the main part of the paper, are devoted to models with constant jump rate, $`\gamma \equiv 1`$ independent of the headway, and jump length distributions of the type (1.2). In the interface representation these belong to the class of random average processes (RAP) studied by Ferrari and Fontes : The particle position $`x_i^{\prime }`$ after the move is an average
$$x_i^{\prime }=rx_i+(1-r)x_{i+1}$$
(1.4)
of the previous positions $`x_i`$, $`x_{i+1}`$, with a random weight $`r\in [0,1]`$ drawn from the probability density $`\varphi (r)`$. We therefore refer to these models as Asymmetric Random Average Processes (ARAP). Discrete time ARAP’s have been introduced previously to model force fluctuations in random bead packs . In that context the headway $`u_i(t)`$ represents the (scaled) force supported by bead $`i`$ at depth $`t`$ below the surface of a two-dimensional packing (see Section 3.2.1).
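A single elementary move of the random average process (1.4) can be sketched as follows (our own illustration; the function name and the fixed-weight check are ours, not from the paper):

```python
import random

def rap_move(x, i, draw_r=random.random):
    """Apply eq. (1.4) to particle i: its new position is a random
    average of its own position and that of its right neighbour, so
    it can never overtake particle i+1."""
    r = draw_r()                      # weight r in [0, 1], drawn from phi(r)
    x[i] = r * x[i] + (1 - r) * x[i + 1]

# Deterministic check with a fixed weight r = 0.25:
x = [0.0, 1.0]
rap_move(x, 0, draw_r=lambda: 0.25)   # x[0] becomes 0.25*0 + 0.75*1 = 0.75
```

Since $`r\in [0,1]`$, the new position always lies between the old positions, which is the ordering constraint of the particle picture.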
In Section 3.1.1 we show, for the case of continuous time dynamics, that the two-point correlation function of particle headways $`\langle u_iu_j\rangle `$ factorizes in the stationary state for any choice of $`\varphi (r)`$, and obtain the expression
$$\langle u^2\rangle -\langle u\rangle ^2=\frac{\mu _2}{\rho ^2(\mu _1-\mu _2)}$$
(1.5)
for the stationary variance of headways in terms of the moments
$$\mu _n=\int _0^1𝑑r\,r^n\varphi (r)$$
(1.6)
of $`\varphi (r)`$ and the particle density $`\rho `$. Similar results for the discrete time models are derived in Section 3.2.
More detailed information about the stationary headway distribution can be obtained when $`\varphi (r)`$ is the uniform distribution on $`[0,1]`$. Assuming that the factorization property of the two-point function implies pairwise independence of the $`u_i`$, we derive and solve stationarity conditions for their moments, which show that the invariant density of headways (normalized to $`\langle u_i\rangle =1`$) takes the form of a gamma distribution,
$$P_\nu (u)=\frac{\nu ^\nu }{\mathrm{\Gamma }(\nu )}u^{\nu 1}e^{\nu u}$$
(1.7)
where the parameter $`\nu `$ depends on the dynamics: For continuous time dynamics $`\nu =1/2`$, while sequential and parallel dynamics yield $`\nu =1`$ and 2, respectively. The result for parallel dynamics has been previously derived by Coppersmith et al. , who also gave an explicit proof of the factorization property (1.3). Equation (1.7) implies bunching of particles (enhanced density fluctuations compared to the Poisson measure) for continuous time dynamics ($`\nu =1/2`$) and antibunching for parallel dynamics ($`\nu =2`$). The associated nontrivial particle-particle correlations are explicitly computed in Section 3.3.
Based on numerical simulations, we conjecture that the stationary single particle headway distribution is exactly given by (1.7) for all three types of dynamics. For continuous time dynamics and a finite number of particles on a ring the assumption of an invariant product measure is examined in Section 3.1.2. Surprisingly, we find that the product measure is not invariant for the ARAP, although it is invariant for a related symmetric stick model. This conclusion agrees with recent results for the infinite system obtained by Rajesh and Majumdar .
Section 4 is devoted to the large scale, long time behavior of the ARAP. We derive a hydrodynamic equation of singular diffusion type, and compute the tracer diffusion coefficient using a Langevin approach. Since these results depend only on the stationary two-point function of headways, they are valid for any choice of the jump length distribution $`\varphi (r)`$. Finally, some conclusions and open questions are formulated in Section 5.
## 2 Models with invariant Poisson measures
### 2.1 Constant invariant measure on the ring
In this section we want to identify continuous time dynamics which leave a Poisson distribution of particle positions invariant. For this purpose we first consider $`N`$ particles moving in continuous time on a ring of length $`L`$, with density $`\rho =N/L`$. Allowed headway configurations then satisfy the constraint
$$\sum _{i=1}^{N}u_i=L$$
(2.1)
and the product measure (1.3) is required to hold on the set of configurations defined by (2.1). For an exponential distribution $`P(u)e^{\rho u}`$ this implies that all allowed headway configurations carry the same weight $`\mathrm{\Omega }(N,L)^1`$, where
$$\mathrm{\Omega }(N,L)=\frac{L^{N-1}}{(N-1)!}$$
(2.2)
denotes the volume of the set, i.e. the invariant measure is constant on allowed configurations. It is straightforward to check that this implies Poisson measure in the limit $`N,L\mathrm{}`$ at fixed density $`\rho `$. For example, the distribution of a single headway on the ring is given by
$$P_{N,L}(u)=\frac{\mathrm{\Omega }(N-1,L-u)}{\mathrm{\Omega }(N,L)}\to \rho e^{-\rho u},\quad N,L\to \mathrm{\infty }$$
(2.3)
while the joint distribution of the headways of two neighboring particles is
$$P_{N,L}(u_i,u_{i+1})=\frac{\mathrm{\Omega }(N-2,L-u_i-u_{i+1})}{\mathrm{\Omega }(N,L)}\to \rho ^2e^{-\rho (u_i+u_{i+1})},\quad N,L\to \mathrm{\infty }.$$
(2.4)
A similar argument can be carried out for the probability distribution of the particle positions on the ring.
Invariance of the constant measure requires the total transition rates for going into and out of any configuration to balance. This yields the condition
$$\sum _{i=1}^{N}\int _0^{u_{i-1}}𝑑w\,f_i(w|𝒰^{(i)}(w))\gamma (u_i+w)=\sum _{i=1}^{N}\int _0^{u_i}𝑑w\,f_i(w|𝒰)\gamma (u_i)$$
(2.5)
for any configuration $`𝒰`$, with the configuration $`𝒰^{(i)}(w)=\{u_j^{(i)}(w)\}_j`$ defined through
$$u_j^{(i)}(w)=\{\begin{array}{cc}u_i+w:\hfill & j=i\hfill \\ u_{i-1}-w:\hfill & j=i-1\hfill \\ u_j:\hfill & \mathrm{else}\hfill \end{array}$$
(2.6)
and periodic boundary conditions implied in the summation over $`i`$. Note the upper integration limits, which ensure that particles cannot pass each other ($`\delta _i\le u_i`$). Two examples of dynamics which satisfy (2.5) will be given in the following.
### 2.2 Configuration-independent jump length distributions
If the jump rate $`\gamma `$ is independent of headway, the invariance condition (2.5) is seen to hold for any jump length distribution $`f(w)`$ which is independent of the configuration and of the particle label $`i`$. The stationary speed $`\overline{v}`$ of particles at density $`\rho `$ is then computed from
$$\overline{v}=\gamma \rho \int _0^{\mathrm{\infty }}𝑑u\,e^{-\rho u}\int _0^u𝑑w\,wf(w),$$
(2.7)
and the current follows from $`j(\rho )=\rho \overline{v}(\rho )`$. For example, for jump lengths chosen uniformly in the unit interval one finds
$$j(\rho )=\frac{\gamma }{\rho }[1-(1+\rho )e^{-\rho }].$$
(2.8)
It should be noted that in general the Poisson distribution is not the unique invariant measure. For example, if $`f(w)=0`$ for $`w`$ less than some minimum jump length $`a`$, then all configurations with $`u_i<a`$ for all $`i`$ are trivially invariant. Numerical simulations indicate, however, that such “absorbing” states are typically not reached, even if the system is started very close to them. If $`f(w)=\delta (w-1)`$ and the particles are started on the integer lattice, the model reduces to the asymmetric exclusion process, which has a geometric (rather than exponential) headway distribution.
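The closed form (2.8) can be cross-checked against direct numerical integration of (2.7) for the uniform jump length distribution. The sketch below is our own (simple trapezoidal quadrature, not from the paper):

```python
import math

def j_closed(rho, gamma=1.0):
    # eq. (2.8): stationary current for jump lengths uniform on [0, 1]
    return (gamma / rho) * (1.0 - (1.0 + rho) * math.exp(-rho))

def j_numeric(rho, gamma=1.0, n=200_000):
    # eq. (2.7): vbar = gamma * rho * int_0^inf du e^{-rho u} g(u), where
    # g(u) = int_0^u dw w f(w) = u^2/2 for u < 1 and 1/2 otherwise (f uniform);
    # the current is j = rho * vbar.  Trapezoidal rule on [0, umax].
    umax = 50.0 / rho
    h = umax / n
    def integrand(u):
        g = 0.5 * u * u if u < 1.0 else 0.5
        return math.exp(-rho * u) * g
    s = 0.5 * (integrand(0.0) + integrand(umax))
    s += sum(integrand(k * h) for k in range(1, n))
    return rho * gamma * rho * h * s
```

The two agree to quadrature accuracy over a range of densities, and both reduce to $`j\approx \gamma \rho /2`$ at small $`\rho `$, i.e. mean speed $`\gamma /2`$ when collisions are rare.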
### 2.3 Scale-invariant models
When the scale of the jumps is set by the headways, inserting (1.2) into (2.5) and requiring the terms on both sides to cancel pairwise yields the following integral equation connecting the functions $`\varphi `$ and $`\gamma `$,
$$\int _0^u𝑑w\,\gamma (u^{\prime }+w)\frac{\varphi (w/(u^{\prime }+w))}{u^{\prime }+w}=\gamma (u),$$
(2.9)
which should be true for all $`u`$, $`u^{}`$. Taking the derivative with respect to $`u`$ this becomes a differential equation for $`\gamma `$,
$$\frac{d\gamma }{du}=\frac{\gamma (v)\varphi (u/v)}{v}$$
(2.10)
with $`v=u+u^{}u`$. Setting in particular $`v=u`$ we see that $`\gamma `$ has to be a power law function,
$$\gamma (u)=\gamma _0u^{\alpha -1},$$
(2.11)
where $`\gamma _0>0`$ is a constant and $`\alpha =1+\varphi (1)`$. Using (2.10) the jump length distribution is then found to be also a power law,
$$\varphi (v)=(\alpha -1)v^{\alpha -2}.$$
(2.12)
Normalizability of $`\varphi `$ requires $`\alpha >1`$.
Equations (2.11,2.12) define a one-parameter family of models for which the Poisson distribution of positions is invariant for an arbitrary number of particles $`N`$, the Hammersley process being given by $`\alpha =2`$. The corresponding symmetric stick models, in which the broken-off piece is distributed with equal probability to the left or right neighbor, were considered by Feng et al. . Since $`\gamma `$ is a power law, these models are scale invariant in the sense that the average particle spacing $`\langle u_i\rangle =1/\rho `$ is the only length scale in the problem. Therefore also the stationary particle current $`j`$ is a power law function of the density. To compute it, we note that the average particle speed is given by
$$\overline{v}=\langle \gamma (u_i)\delta _i\rangle =\rho \int _0^{\mathrm{\infty }}𝑑u\,e^{-\rho u}\gamma (u)u\int _0^1𝑑v\,v\varphi (v)=\gamma _0(1-1/\alpha )\mathrm{\Gamma }(\alpha +1)\rho ^{-\alpha }$$
(2.13)
and therefore
$$j(\rho )=\rho \overline{v}=\gamma _0(1-1/\alpha )\mathrm{\Gamma }(\alpha +1)\rho ^{1-\alpha }.$$
(2.14)
## 3 Asymmetric Random Average Processes
The asymmetric random average process is a scale-invariant model characterized by a jump length distribution of type (1.2), and a constant jump rate $`\gamma \equiv \gamma _0=1`$. The discussion is phrased most naturally in the stick representation, and begins with the continuous time models.
### 3.1 Continuous time dynamics
#### 3.1.1 Stationary headway correlations
Consider first the time evolution of the second moment $`\langle u_i^2\rangle `$. In a small time interval $`\mathrm{\Delta }t`$ two processes affecting $`u_i`$ may occur: A random fraction $`\delta _i`$ of $`u_i`$ may be lost to $`i-1`$, and a random fraction $`\delta _{i+1}`$ of $`u_{i+1}`$ may be gained from $`i+1`$. Both processes occur with probability $`\mathrm{\Delta }t`$. Thus
$$\langle u_i^2\rangle (t+\mathrm{\Delta }t)=\mathrm{\Delta }t[\langle (u_i-\delta _i)^2\rangle +\langle (u_i+\delta _{i+1})^2\rangle ]+(1-2\mathrm{\Delta }t)\langle u_i^2\rangle (t).$$
(3.1)
Stationarity then implies
$$-2\langle \delta _iu_i\rangle +\langle \delta _i^2\rangle +2\langle \delta _{i+1}u_i\rangle +\langle \delta _{i+1}^2\rangle =0.$$
(3.2)
Since $`\delta _j=r_ju_j`$ where $`r_j`$ is an independent random variable with mean $`\mu _1`$ and second moment $`\mu _2`$, we have that $`\langle \delta _iu_i\rangle =\mu _1\langle u_i^2\rangle `$, $`\langle \delta _i^2\rangle =\langle \delta _{i+1}^2\rangle =\mu _2\langle u_i^2\rangle `$ and $`\langle \delta _{i+1}u_i\rangle =\mu _1\langle u_iu_{i+1}\rangle `$. Thus (3.2) becomes
$$(\mu _1-\mu _2)\langle u_i^2\rangle =\mu _1\langle u_iu_{i+1}\rangle .$$
(3.3)
Similarly for the general two-point function $`C_k\equiv \langle u_iu_{i+k}\rangle `$ we obtain the stationarity condition
$$\mu _1(C_{k+1}+C_{k-1}-2C_k)=\mu _2C_0(\delta _{k,1}+\delta _{k,-1}-2\delta _{k,0}),$$
(3.4)
where translational invariance and symmetry ($`C_{-k}=C_k`$) of the correlations has been used. Solving eq.(3.4) starting from $`k=0`$ one finds
$$C_k=[1-(\mu _2/\mu _1)(1-\delta _{k,0})]C_0.$$
(3.5)
Imposing the boundary condition $`\mathrm{lim}_{k\to \mathrm{\infty }}C_k=\langle u_i\rangle ^2=1/\rho ^2`$ for an infinite system of density $`\rho `$, eq.(3.5) then shows that the two-point function factorizes for any $`k\ge 1`$ and the variance of headways is given by (1.5).
#### 3.1.2 Stationary headway distribution for uniform $`\varphi (r)`$
We now specialize to the case when the distribution of scaled jump lengths $`\varphi (r)`$ is uniform in $`[0,1]`$, and assume that the factorization property which was verified above for the two-point function implies the pairwise independence of the $`u_i`$. Then the stationarity condition for the $`n`$-th moment
$$\langle (u_i+\delta _{i+1})^n\rangle +\langle (u_i-\delta _i)^n\rangle =2\langle u_i^n\rangle .$$
(3.6)
yields (the index $`i`$ of $`u_i`$ is now dropped)
$$\sum _{k=0}^{n}\left(\genfrac{}{}{0pt}{}{n}{k}\right)\frac{1}{k+1}[\langle u^{n-k}\rangle \langle u^k\rangle +(-1)^k\langle u^n\rangle ]=2\langle u^n\rangle $$
(3.7)
which can be rewritten as a recursion relation,
$$\langle u^n\rangle =\frac{n+1}{n-1}\sum _{k=1}^{n-1}\left(\genfrac{}{}{0pt}{}{n}{k}\right)\frac{1}{k+1}\langle u^{n-k}\rangle \langle u^k\rangle .$$
(3.8)
Evaluating this expression for $`n=1,\mathrm{\dots },5`$ we find that the relation
$$\langle u^n\rangle =\left[\prod _{k=1}^{n}(2k-1)\right]\langle u\rangle ^n$$
(3.9)
appears to hold, which is characteristic of the gamma distribution (1.7) with parameter $`\nu =1/2`$.
To prove it, we first insert (3.9) into (3.8), and obtain
$$\left(\genfrac{}{}{0pt}{}{2n}{n}\right)=\frac{n+1}{n-1}\sum _{k=1}^{n-1}\frac{1}{k+1}\left(\genfrac{}{}{0pt}{}{2k}{k}\right)\left(\genfrac{}{}{0pt}{}{2(n-k)}{n-k}\right).$$
(3.10)
This can be verified using the binomial expansion
$$\frac{1}{2}(1-4x)^{-1/2}=\frac{1}{2}+\sum _{k=1}^{\mathrm{\infty }}\left(\genfrac{}{}{0pt}{}{2k-1}{k-1}\right)x^k.$$
(3.11)
Integrating with respect to $`x`$ we also have
$$-\frac{1}{4}(1-4x)^{1/2}=-\frac{1}{4}+\frac{x}{2}+\sum _{k=1}^{\mathrm{\infty }}\frac{1}{k+1}\left(\genfrac{}{}{0pt}{}{2k-1}{k-1}\right)x^{k+1}.$$
(3.12)
Since the product of the left hand sides is a constant, all coefficients of $`x^m`$ with $`m>0`$ in the series obtained by multiplying (3.11) and (3.12) must vanish. After rearranging terms this is seen to imply (3.10).
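The recursion (3.8) and the closed form (3.9) are also easy to confirm in exact rational arithmetic (a sketch of ours; the function names are not from the paper):

```python
from fractions import Fraction
from math import comb

def moments_by_recursion(nmax):
    # eq. (3.8), with <u> normalized to 1, iterated in exact arithmetic
    m = {0: Fraction(1), 1: Fraction(1)}
    for n in range(2, nmax + 1):
        s = sum(Fraction(comb(n, k), k + 1) * m[n - k] * m[k]
                for k in range(1, n))          # k = 1, ..., n-1
        m[n] = Fraction(n + 1, n - 1) * s
    return m

def double_factorial(n):
    # eq. (3.9): <u^n> = prod_{k=1}^n (2k - 1) = (2n - 1)!!
    p = 1
    for k in range(1, n + 1):
        p *= 2 * k - 1
    return p
```

The recursion reproduces the sequence 1, 3, 15, 105, 945, …, the moments of the gamma distribution (1.7) with $`\nu =1/2`$.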
In fact the relation (3.9) was first guessed on the basis of numerical simulations. Rather accurate numerical estimates for the stationary moments of $`u_i`$ can be obtained by starting from an ordered initial condition ($`u_i=1`$ for all $`i`$) and fitting the finite time data to the form
$$\langle u^n\rangle (t)=A_n+B_nt^{-1/2}$$
(3.13)
which is suggested by the fluctuation theory of Section 4.2 (see eq.(4.21)). The results shown in Table I strongly indicate that the stationary single particle headway distribution is exactly given by the $`\nu =1/2`$ gamma distribution.
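As an illustration of how such estimates can be obtained, here is a minimal Monte Carlo sketch (ours; the system size, run length and seed are arbitrary choices, and no fit of the form (3.13) is attempted, so the estimates carry a small residual finite-time bias):

```python
# Continuous-time ARAP with uniform phi(r): a randomly chosen stick passes
# a uniform fraction of its length to its left neighbour.  With <u> = 1 the
# stationary moments should approach <u^2> = 3 and <u^3> = 15.
import random

random.seed(1)
N = 1000
u = [1.0] * N                      # ordered initial condition u_i = 1

def sweep():
    for _ in range(N):             # N update attempts = one unit of time
        i = random.randrange(N)
        d = random.random() * u[i]
        u[i] -= d
        u[(i - 1) % N] += d

for _ in range(1500):              # relax toward the stationary state
    sweep()

s2 = s3 = 0.0
nsweeps = 1500
for _ in range(nsweeps):           # accumulate stationary averages
    sweep()
    s2 += sum(x * x for x in u)
    s3 += sum(x ** 3 for x in u)
m2 = s2 / (nsweeps * N)
m3 = s3 / (nsweeps * N)
```

Time-averaging over many sweeps reduces the statistical error well below the finite-time correction of eq. (3.13).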
To test the assumption of an invariant product measure underlying the derivation of (3.8), we proceed as in Section 2 and consider a finite number $`N`$ of particles on a ring. The condition for the product measure (1.3), restricted to the set (2.1) of allowed configurations, to be invariant now reads
$$\sum _{i=1}^{N}\int _0^{u_{i-1}}\frac{dw}{u_i+w}\,\frac{P(u_i+w)P(u_{i-1}-w)}{P(u_i)P(u_{i-1})}=\sum _{i=1}^{N}\gamma (u_i)=N.$$
(3.14)
Inserting the gamma distribution with parameter $`\nu =1/2`$ (eq.(1.7)) and noting that
$$\int _0^vdw\,(u+w)^{-3/2}(v-w)^{-1/2}=\frac{2\sqrt{v/u}}{u+v}$$
(3.15)
the condition (3.14) becomes
$$\sum _{i=1}^{N}\frac{2u_{i-1}}{u_i+u_{i-1}}=N,$$
(3.16)
with periodic boundary conditions, $`u_0=u_N`$. Equation (3.16) is satisfied for $`N=2`$, but not for general $`N`$. We conclude that the product measure (1.3) is not invariant for $`N`$ different from 2. It is however the exact invariant measure for the symmetric stick process obtained by transferring the piece broken off stick $`i`$ to $`i-1`$ or $`i+1`$ with equal probability. Indeed, in that case the left hand side of (3.16) becomes
$$\sum _{i=1}^{N}\left[\frac{u_{i-1}}{u_i+u_{i-1}}+\frac{u_{i+1}}{u_i+u_{i+1}}\right]=N.$$
(3.17)
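The two conditions are easily probed numerically. In the following Python sketch (ours) the symmetric condition (3.17) holds to machine precision for a random configuration, while the asymmetric condition (3.16) is violated:

```python
# Check (3.16) vs (3.17) for a random positive configuration on a ring of
# N sticks: (3.17) is an algebraic identity, (3.16) fails for generic u_i.
import random

random.seed(0)
N = 17
u = [random.uniform(0.1, 5.0) for _ in range(N)]

sym = sum(u[i - 1] / (u[i] + u[i - 1])
          + u[(i + 1) % N] / (u[i] + u[(i + 1) % N]) for i in range(N))
asym = sum(2.0 * u[i - 1] / (u[i] + u[i - 1]) for i in range(N))
```

Here Python's negative indexing (`u[-1]` is the last element) implements the periodic boundary condition for `i = 0`.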
While these arguments are restricted to finite systems, the conclusions agree with calculations carried out for the infinite system by Rajesh and Majumdar . Specifically, they show that the product measure ansatz for the continuous time ARAP breaks down at the level of three-point correlations, but is exact for the symmetric stick model.
### 3.2 Discrete time dynamics
#### 3.2.1 Parallel update
A discrete time version of the ARAP is obtained by writing
$$u_i(t+1)=u_i(t)-\delta _i(t)+\delta _{i+1}(t)$$
(3.18)
where $`\delta _j=r_ju_j`$ with independent random numbers $`r_j`$ distributed according to the density $`\varphi (r)`$. This is closely related to a model introduced by Coppersmith, Liu, Majumdar, Narayan and Witten for the description of force fluctuations in bead packs. To see the connection, let $`W(i,t)`$ denote the weight supported by bead $`i`$ in the $`t`$-th layer below the (free) surface of the packing. The key assumption of the model is that the beads are arranged on a regular lattice, and that each bead transfers its weight to exactly $`M`$ beads in the layer below. The fraction $`q_{ij}(t)\in [0,1]`$ of the weight of bead $`i`$ in layer $`t`$ which is transferred to bead $`j`$ in layer $`t+1`$ defines a matrix with random entries subject to the constraint $`\sum _jq_{ij}(t)=1`$. Assigning unit mass to each bead, the weights evolve according to
$$W(j,t+1)=1+\sum _{i}q_{ij}(t)W(i,t).$$
(3.19)
For large $`t`$ all weights increase linearly with $`t`$, which suggests introducing normalized variables $`U(i,t)=W(i,t)/t`$. Specializing to a two-dimensional lattice where the beads are labeled such that bead $`i`$ is connected to beads $`i`$ and $`i+1`$ in the layer below, we see that for $`t\to \infty `$ the evolution of the $`U(i,t)`$ reduces to (3.18) with the identification $`q_{ii}=1-r_i`$ and $`q_{i+1\,i}=r_{i+1}`$. In the context of bead packs $`q_{ii}`$ and $`q_{i+1\,i}`$ should have the same distribution, and hence strict equivalence between the two models holds only when $`\varphi (r)`$ is symmetric around $`r=1/2`$.
Let us first show that the stationary two-point headway correlations factorize for any $`\varphi (r)`$. Proceeding as above in Section 3.1.1, we obtain the stationarity condition
$$(\mu _1-\mu _1^2)(C_{k+1}+C_{k-1}-2C_k)=(\mu _2-\mu _1^2)C_0(\delta _{k,1}+\delta _{k,-1}-2\delta _{k,0}),$$
(3.20)
with the solution
$$C_k=[1-(\mu _2-\mu _1^2)/(\mu _1-\mu _1^2)\,(1-\delta _{k,0})]C_0.$$
(3.21)
As in the continuous time case this implies factorization for $`k\geq 1`$ in the infinite system, with the stationary variance of headways given by
$$\langle u^2\rangle -\langle u\rangle ^2=\frac{\mu _2-\mu _1^2}{\rho ^2(\mu _1-\mu _2)}.$$
(3.22)
For the case of a uniform distribution $`\varphi (r)`$, Coppersmith et al. (see also ) have shown explicitly that the stationary measure takes the product form (1.3), with the headway distribution $`P(u)`$ given by the gamma distribution (1.7) with $`\nu =2`$. The latter is easily derived along the lines of Section 3.1.2. Under the assumption of pairwise independence, the stationarity condition for general moments $`u_i^n`$ now reads
$$\langle u^n\rangle =\frac{1}{(n-1)(n+2)}\sum _{k=1}^{n-1}\binom{n+2}{k+1}\langle u^{n-k}\rangle \langle u^k\rangle .$$
(3.23)
A straightforward computation shows that this is solved by the expression
$$\langle u^n\rangle =2^{-n}(n+1)!\,\langle u\rangle ^n$$
(3.24)
for the moments of the gamma distribution (1.7) with parameter $`\nu =2`$.
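As for the continuous time case, the recursion can be checked in exact arithmetic; the following sketch (ours, with $`\langle u\rangle =1`$) verifies that (3.23) is solved by the gamma moments (3.24) up to $`n=10`$:

```python
# Exact check that the parallel-update recursion (3.23) is solved by
# <u^n> = 2^{-n} (n+1)!, eq. (3.24), in units where <u> = 1.
from fractions import Fraction
from math import comb, factorial

mom = {0: Fraction(1), 1: Fraction(1)}
for n in range(2, 11):
    mom[n] = Fraction(1, (n - 1) * (n + 2)) * sum(
        comb(n + 2, k + 1) * mom[n - k] * mom[k] for k in range(1, n)
    )

ok = all(mom[n] == Fraction(factorial(n + 1), 2 ** n) for n in range(11))
```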
#### 3.2.2 Ordered sequential update
In the context of traffic modeling it has been found useful to implement a different kind of discrete time dynamics, in which the particles are moved one by one, in the order of their positions in the system. This ordered sequential update can proceed either in the direction of particle motion (forward update) or against it (backward update). For the ARAP it is easy to see that the forward update is equivalent to the parallel dynamics discussed in Section 3.2.1, however the backward update is not.
In the stick representation, backward sequential update implies that stick $`i`$ first receives a random fraction of stick $`i+1`$, placing it in an intermediate state of length $`u_i^{\prime }`$, and subsequently transfers a random fraction $`\delta _i^{\prime }`$ of $`u_i^{\prime }`$ to stick $`i-1`$. It is important to note that, at the time of transfer of mass to stick $`i`$, stick $`i+1`$ has already received mass from $`i+2`$ and thus the amount transferred from $`i+1`$ to $`i`$ is a random fraction of $`u_{i+1}^{\prime }>u_{i+1}`$. The dynamics therefore proceeds in two steps,
$$u_i^{\prime }(t)=u_i(t)+\delta _{i+1}^{\prime }(t)$$
(3.25)
$$u_i(t+1)=u_i^{\prime }(t)-\delta _i^{\prime }(t),$$
(3.26)
where $`\delta _j^{\prime }`$ is a random fraction of $`u_j^{\prime }`$. Taking the average of both sides of (3.25) or (3.26) the stationary mean of $`u_i^{\prime }`$ is seen to be
$$\langle u^{\prime }\rangle =\frac{\langle u\rangle }{1-\mu _1}=\frac{1}{\rho (1-\mu _1)}.$$
(3.27)
Equation (3.26) implies the relation
$$C_k=[(1-\mu _1)^2+(\mu _2-\mu _1^2)\delta _{k,0}]C_k^{\prime }$$
(3.28)
between the stationary two-point functions $`C_k`$ of $`u_i`$ and $`C_k^{\prime }`$ of $`u_i^{\prime }`$. Using (3.25) it is easy to show that the stationarity condition for $`C_k^{\prime }`$ is identical to the condition (3.20) obtained in the case of parallel update. Therefore also $`C_k^{\prime }`$ factorizes in the infinite system, and through (3.28) this property carries over to $`C_k`$. For the stationary variance of the backward sequential update model we find the expression
$$\langle u^2\rangle -\langle u\rangle ^2=\frac{\mu _2-\mu _1^2}{\rho ^2(1-\mu _1)(\mu _1-\mu _2)}.$$
(3.29)
Turning to the stationary headway probability distribution $`P(u)`$, we again assume pairwise independence and note the functional equation
$$P(u)=\int _0^1dr\,r^{-1}\varphi (1-r)P^{\prime }(u/r)$$
(3.30)
relating $`P(u)`$ to the distribution $`P^{\prime }(u^{\prime })`$ of the intermediate state headway. For uniform $`\varphi (r)`$ the stationarity condition for the $`n`$-th moment of $`u_i^{\prime }`$ then reads
$$\langle (u_i^{\prime })^n\rangle =\langle (u_i+\delta _{i+1}^{\prime })^n\rangle =\sum _{k=0}^{n}\binom{n}{k}\frac{1}{k+1}\langle (u_i^{\prime })^k\rangle \langle u_i^{n-k}\rangle .$$
(3.31)
Using the relation $`\langle (u^{\prime })^n\rangle =(n+1)\langle u^n\rangle `$ obtained from (3.30) this reduces to
$$\langle u^n\rangle =\frac{1}{n+1}\sum _{k=0}^{n}\binom{n}{k}\langle u^k\rangle \langle u^{n-k}\rangle ,$$
(3.32)
which is solved by setting $`\langle u^n\rangle =n!\,\langle u\rangle ^n`$. We conclude that $`P(u)`$ is an exponential distribution (a gamma distribution (1.7) with $`\nu =1`$). This is confirmed by the numerical data shown in Table I.
From (3.30) the distribution of the intermediate state headway is found to be a $`\nu =2`$ gamma distribution with mean $`2\langle u\rangle =2/\rho `$,
$$P^{\prime }(u)=\rho ^2ue^{-\rho u}.$$
(3.33)
Given the equivalence between the intermediate state headway and the headway for parallel update which we found on the level of the two-point function, it is no surprise that (3.33) is identical, up to a scale factor, to the headway distribution $`P(u)`$ for parallel dynamics.
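A direct simulation sketch (ours; system size, run length and seed are arbitrary) of the backward sequential rule illustrates the exponential headway statistics:

```python
# Backward ordered sequential ARAP with uniform phi(r) on a ring.  Sweeping
# the sticks from right to left, each stick has already received mass from
# its right neighbour when its own transfer is drawn (up to one seam on the
# ring); the headways should become exponential, i.e. <u^2> = 2 <u>^2.
import random

random.seed(2)
N = 1000
u = [1.0] * N

def backward_sweep():
    for i in range(N - 1, -1, -1):
        d = random.random() * u[i]
        u[i] -= d
        u[i - 1] += d              # i - 1 = -1 wraps around the ring

for _ in range(1500):              # relax toward stationarity
    backward_sweep()

s2 = 0.0
for _ in range(1500):              # accumulate the stationary average
    backward_sweep()
    s2 += sum(x * x for x in u)
m2 = s2 / (1500 * N)
```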
### 3.3 Particle-particle correlations
In this section we illustrate how the product measure (1.3) with the headway distribution (1.7) translates into nontrivial particle-particle correlations when $`\nu \ne 1`$. For example, the probability density $`g(x)`$ for finding a particle at $`x`$, conditioned on having a particle at the origin, can be written as
$$g(x)=\sum _{n=1}^{\infty }P_n(x),$$
(3.34)
where $`P_n(x)`$ is the probability density for the $`n`$-th particle to be at $`x`$ when the $`0`$-th is at the origin or, equivalently, the probability that $`\sum _{i=0}^{n-1}u_i=x`$. The $`P_n`$ are obtained iteratively from $`P_1(x)=P(x)`$ through the convolution
$$P_n(x)=\int _0^xdy\,P_{n-1}(y)P(x-y).$$
(3.35)
Inserting the gamma distributions (1.7) with parameters $`\nu =1/2`$ and $`\nu =2`$, one finds that
$$P_n(x)=\rho \,(\mathrm{\Gamma }(n/2)2^{n/2})^{-1}(\rho x)^{n/2-1}e^{-\rho x/2}$$
(3.36)
for the continuous time case, and
$$P_n(x)=\frac{2^{2n}\rho }{(2n-1)!}(\rho x)^{2n-1}e^{-2\rho x}$$
(3.37)
for parallel dynamics.
In the parallel case the evaluation of the sum (3.34) is straightforward, and yields the expression
$$g(x)=\rho (1-e^{-4\rho x})$$
(3.38)
for the correlation function, which explicitly displays the tendency of particles to avoid each other at distances short compared to $`1/\rho `$.
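The summation can also be checked numerically; the sketch below (ours, with the arbitrary choice $`\rho =1`$) compares a truncated sum of the convolutions (3.37) with the closed form (3.38):

```python
# Deterministic check: the term-by-term sum of (3.37) reproduces (3.38).
from math import exp, factorial

rho = 1.0

def g_parallel(x, nmax=60):
    # partial sum of P_n(x) from (3.37); terms decay factorially
    return sum(2.0 ** (2 * n) * rho * (rho * x) ** (2 * n - 1)
               * exp(-2.0 * rho * x) / factorial(2 * n - 1)
               for n in range(1, nmax))

diffs = [abs(g_parallel(x) - rho * (1.0 - exp(-4.0 * rho * x)))
         for x in (0.1, 0.5, 1.0, 3.0)]
```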
To compute (3.34) with the $`P_n`$ given by (3.36), it is useful to write $`g`$ as the sum of two contributions $`g_{\mathrm{even}}`$ and $`g_{\mathrm{odd}}`$ from even and odd $`n`$, respectively. One finds that $`g_{\mathrm{even}}(x)=\rho /2`$ independent of $`x`$, while the odd part can be brought into the form
$$g_{\mathrm{odd}}(x)=P_1(x)+\frac{\rho }{2\sqrt{\pi }}e^{-\rho x/2}\sum _{m=1}^{\infty }\frac{(m-1)!}{(2m-1)!}(\sqrt{2\rho x})^{2m-1}.$$
(3.39)
To sum the series we write $`(m-1)!=\int _0^{\infty }dz\,z^{m-1}e^{-z}`$ and interchange the summation over $`m`$ with the integration over $`z`$. This yields finally
$$g(x)=\sqrt{\frac{\rho }{2\pi x}}e^{-\rho x/2}+\frac{\rho }{2}(1+\mathrm{erf}\sqrt{\rho x/2})$$
(3.40)
with the error function $`\mathrm{erf}(z)=(2/\sqrt{\pi })\int _0^zdt\,e^{-t^2}`$. For $`x\to 0`$ the correlation function is dominated by $`P_1(x)`$ and correspondingly diverges as $`1/\sqrt{x}`$, reflecting the tendency of particles to bunch together in the continuous time case. For $`x\to \infty `$ the deviation of $`g(x)`$ from its limiting value $`\rho `$ decays somewhat faster than exponentially, as
$$g(x)\approx \rho +\frac{\rho }{\sqrt{2\pi }(\rho x)^{3/2}}e^{-\rho x/2}.$$
(3.41)
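An analogous numerical check (ours, with $`\rho =1`$) confirms that the series of convolutions (3.36) indeed sums to the closed form (3.40):

```python
# Deterministic check: the partial sum of the nu = 1/2 convolutions (3.36)
# agrees with the erf expression (3.40), including the even part rho/2.
from math import exp, sqrt, pi, gamma, erf

rho = 1.0

def g_series(x, nmax=200):
    return sum(rho * (rho * x) ** (n / 2.0 - 1.0) * exp(-rho * x / 2.0)
               / (gamma(n / 2.0) * 2.0 ** (n / 2.0))
               for n in range(1, nmax))

def g_closed(x):
    return (sqrt(rho / (2.0 * pi * x)) * exp(-rho * x / 2.0)
            + 0.5 * rho * (1.0 + erf(sqrt(rho * x / 2.0))))

diffs = [abs(g_series(x) - g_closed(x)) for x in (0.2, 1.0, 2.5)]
```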
Alternatively the correlations between particles can be characterized through the variance $`\langle (\mathrm{\Delta }N_L)^2\rangle `$ of the number of particles $`N_L`$ in an interval of size $`L`$. When $`L`$ is small compared to the mean interparticle spacing $`N_L`$ is either 0 or 1, and $`\langle (\mathrm{\Delta }N_L)^2\rangle \approx \rho L`$. For $`L\gg 1/\rho `$ a central limit argument shows that
$$\langle (\mathrm{\Delta }N_L)^2\rangle \approx \chi L$$
(3.42)
where the “compressibility” $`\chi `$ (defined in analogy with equilibrium systems ) is given by
$$\chi (\rho )=\rho ^3(\langle u^2\rangle -\langle u\rangle ^2)=\rho /\nu ,$$
(3.43)
with the parameter $`\nu `$ of the headway distribution (1.7). Thus the slope of $`\langle (\mathrm{\Delta }N_L)^2\rangle `$ versus $`L`$ changes from unity for $`L\ll 1/\rho `$ to $`1/\nu `$ for $`L\gg 1/\rho `$, reflecting the increase (decrease) of particle number fluctuations for continuous time (parallel) dynamics, respectively. The compressibility is related to the pair correlation function (3.34) through
$$\chi =\rho \left(1+\int _{-\infty }^{\infty }dx\,(g(x)-\rho )\right).$$
(3.44)
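For the parallel case this relation is easily verified explicitly: inserting (3.38) into (3.44) gives $`\chi =\rho /2=\rho /\nu `$ with $`\nu =2`$, as the following quadrature sketch (ours, with $`\rho =1`$) confirms:

```python
# Insert the parallel-update correlation (3.38) into (3.44):
# g(x) - rho = -rho e^{-4 rho x}, taken symmetrically in x, integrates to
# -1/2, so chi = rho (1 - 1/2) = rho/2 = rho/nu for nu = 2.
from math import exp

rho = 1.0
dx, L = 1e-4, 20.0
integral = 2.0 * sum(-rho * exp(-4.0 * rho * (k + 0.5) * dx) * dx
                     for k in range(int(L / dx)))   # midpoint rule on [0, L]
chi = rho * (1.0 + integral)
```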
## 4 Large scale dynamics of the ARAP
### 4.1 Hydrodynamic equation
The average particle speed $`\overline{v}`$ in the ARAP is inversely proportional to the density, hence the current $`j=\rho \overline{v}`$ is independent of $`\rho `$. The dynamics on the Euler scale $`x\sim t`$ is therefore trivial, and one expects a hydrodynamic equation of diffusion type. A simple derivation will be given below. Throughout this section we consider a general scaled jump length distribution $`\varphi (r)`$.
#### 4.1.1 Continuous time dynamics
In the continuous time case the ensemble averaged particle positions $`X_i\equiv \langle x_i\rangle `$ evolve according to the linear equations
$$\frac{dX_i}{dt}=\mu _1(X_{i+1}-X_i).$$
(4.1)
This problem has been studied previously in the context of crystal growth , and the procedure can be directly applied to the present context.
To extract the long wavelength behavior, we introduce a scaling parameter $`ϵ`$ and a smooth function $`\xi (y,\tau )`$ such that
$$X_i(t)=\xi (ϵi,ϵt).$$
(4.2)
Inserting this into (4.1) and expanding to second order in $`ϵ`$ we obtain
$$\mu _1^{-1}\frac{\partial \xi }{\partial \tau }=\frac{\partial \xi }{\partial y}+\frac{ϵ}{2}\frac{\partial ^2\xi }{\partial y^2}.$$
(4.3)
In the scaling limit $`ϵ\to 0`$ this becomes a first order equation which describes simple translation to the left.
Here we will, however, postpone taking the limit, and first carry out a Lagrange transformation, which relates the Lagrangian description in terms of the particle positions $`X_i(t)`$ to the Eulerian evolution of the density field. The local density $`\rho `$ near the position of particle $`i`$ is estimated as $`(X_{i+1}-X_i)^{-1}`$, so using (4.2) we have the relation
$$\rho (\xi (y,\tau ),\tau )=ϵ^{-1}\left(\frac{\partial \xi }{\partial y}\right)^{-1}.$$
(4.4)
Differentiating this equation with respect to $`\tau `$ and using the evolution equation (4.3) for $`\xi (y,\tau )`$ one obtains, after some algebra,
$$\frac{\partial \rho }{\partial t}=ϵ\frac{\partial \rho }{\partial \tau }=\frac{\partial }{\partial x}\left(\frac{\mu _1}{2\rho ^2}\frac{\partial \rho }{\partial x}\right).$$
(4.5)
The scaling factor $`ϵ`$ cancels, and the collective diffusion coefficient is identified to be
$$D_\mathrm{c}(\rho )=\frac{\mu _1}{2\rho ^2}.$$
(4.6)
The $`\rho ^{-2}`$-dependence is dictated by scale invariance: The typical jump length in a region of density $`\rho `$ is $`\overline{\delta }=\mu _1/\rho `$, and $`D_\mathrm{c}\sim \gamma \overline{\delta }^2\propto \rho ^{-2}`$.
#### 4.1.2 Discrete time dynamics
For discrete parallel update eq.(4.1) is replaced by
$$X_i(t+1)-X_i(t)=\mu _1[X_{i+1}(t)-X_i(t)].$$
(4.7)
In the scaling limit $`ϵ\to 0`$ this results in the same coarse grained evolution equation (4.3), and thus also the nonlinear diffusion equation (4.5) is the same as in the continuous time case.
In the case of ordered sequential update one has to take into account that the new position of particle $`i`$ is a random average of its old position and the new position of particle $`i+1`$, hence
$$X_i(t+1)-X_i(t)=\mu _1[X_{i+1}(t+1)-X_i(t)].$$
(4.8)
Making the ansatz $`X_i(t)=i/\rho +\overline{v}t`$, we see that the average particle speed is
$$\overline{v}=\frac{\mu _1}{\rho (1-\mu _1)}>\frac{\mu _1}{\rho }.$$
(4.9)
The speedup compared to continuous time and parallel dynamics is due to the decrease of the local density near the update site, see for a discussion of similar effects in the asymmetric exclusion process. For the derivation of the hydrodynamic equation it is useful to incorporate the expected diffusive scaling from the outset and replace (4.2) by
$$X_i(t)=\xi (ϵi,ϵ^2t).$$
(4.10)
The expansion of (4.8) to second order in $`ϵ`$ then yields
$$\left(\frac{1-\mu _1}{\mu _1}\right)\frac{\partial \xi }{\partial \tau }=ϵ^{-1}\frac{\partial \xi }{\partial y}+\frac{1}{2}\frac{\partial ^2\xi }{\partial y^2}.$$
(4.11)
As before, the drift term disappears under the Lagrange transformation based on the relation (4.4), and one obtains
$$\frac{\partial \rho }{\partial t}=ϵ^2\frac{\partial \rho }{\partial \tau }=\frac{\partial }{\partial x}\left(\frac{\mu _1}{2(1-\mu _1)\rho ^2}\frac{\partial \rho }{\partial x}\right).$$
(4.12)
As far as the hydrodynamics is concerned, the different types of dynamics are seen to be equivalent up to a rescaling of time.
### 4.2 Tracer diffusion
Hydrodynamic equations of diffusion type are usually associated with symmetric (unbiased) particle systems . In one dimension the tracer diffusion coefficient in such systems typically vanishes, and the mean square displacement of a tagged particle grows subdiffusively as $`t^{1/2}`$ . By contrast, the biased random average process shows normal tracer diffusion when started from a random initial condition and subdiffusive behavior when the initial configuration is ordered . Here we provide a compact derivation of the two cases and compute the coefficient of the asymptotic law for different types of dynamics.
#### 4.2.1 Langevin approach for continuous time dynamics
We start the system in an initial condition without long wavelength fluctuations, such as $`x_i(0)=i/\rho `$ for all $`i`$, and denote the positional fluctuation of particle $`i`$ by
$$\zeta _i(t)=x_i(t)-\langle x_i\rangle =x_i(t)-x_i(0)-\overline{v}t.$$
(4.13)
For the purpose of extracting the long time behavior of fluctuations, a Langevin approximation to the dynamics of $`\zeta _i`$ is sufficient. Thus we add a phenomenological noise term $`\eta _i(t)`$ to the linear equation (4.1),
$$\frac{d\zeta _i}{dt}=\mu _1(\zeta _{i+1}-\zeta _i)+\eta _i.$$
(4.14)
The noise is taken Gaussian with zero mean and covariance
$$\langle \eta _i(t)\eta _j(t^{\prime })\rangle =\sigma \delta _{ij}\delta (t-t^{\prime }).$$
(4.15)
The noise strength $`\sigma `$ will eventually be matched to the variance of particle headways.
Equation (4.14) is solved by introducing the Fourier transformed fluctuations
$$\widehat{\zeta }(q,t)=\sum _{n}e^{-iqn}\zeta _n(t)$$
(4.16)
with wave numbers $`q`$ in the first Brillouin zone $`[-\pi ,\pi ]`$, and the corresponding Fourier transformed noise
$$\widehat{\eta }(q,t)=\sum _{n}e^{-iqn}\eta _n(t)$$
(4.17)
with covariance
$$\langle \widehat{\eta }(q,t)\widehat{\eta }(q^{\prime },t^{\prime })\rangle =2\pi \sigma \delta (q+q^{\prime })\delta (t-t^{\prime }).$$
(4.18)
The most general quantity of interest is the variance of the displacement between particle $`i`$ at time $`t`$ and particle $`j`$ at time $`t^{\prime }`$. By translational invariance this depends only on $`n=i-j`$ and is given by the correlation function
$$G_n(t,t^{\prime })=\langle (\zeta _0(t)-\zeta _n(t^{\prime }))^2\rangle .$$
(4.19)
Inserting (4.16) into (4.14), solving the equation for $`\widehat{\zeta }(q,t)`$ and averaging over the noise according to (4.18) one arrives at the expression
$$G_n(t,t^{\prime })=$$
$$\frac{\sigma }{2\pi }\int _0^\pi \frac{dq}{\omega (q)}\left(2-e^{-2\omega t}-e^{-2\omega t^{\prime }}-2\mathrm{cos}[qn-\mu (q)T]\,(e^{-\omega |T|}-e^{-\omega T^{\prime }})\right)$$
(4.20)
with $`\omega (q)=\mu _1(1-\mathrm{cos}(q))`$, $`\mu (q)=\mu _1\mathrm{sin}(q)`$, $`T=t^{\prime }-t`$ and $`T^{\prime }=t^{\prime }+t`$.
The evaluation is straightforward in the relevant limiting cases. Consider first the variance of the headways at time $`t=t^{\prime }`$. For large $`t`$ (4.20) yields
$$G_1(t,t)\approx \frac{\sigma }{\mu _1}\left(1-\frac{1}{2\sqrt{\pi \mu _1t}}\right).$$
(4.21)
This allows us to identify the noise strength $`\sigma `$ as
$$\sigma =\mu _1(\langle u^2\rangle -\langle u\rangle ^2),$$
(4.22)
and explicitly demonstrates the $`1/\sqrt{t}`$-approach to the stationary headway distribution alluded to in (3.13).
Next we focus on the dynamics of a single particle and set $`n=0`$ in (4.20). If we fix the time increment $`T=t^{\prime }-t`$ and let both $`t`$ and $`t^{\prime }\to \infty `$, $`G_0`$ represents the mean square displacement of a particle in the stationary regime. Evaluation of (4.20) gives $`G_0(t,t^{\prime })\approx \sigma |T|`$, which shows that $`\sigma `$ is precisely the tracer diffusion coefficient $`D_{\mathrm{tr}}`$. Combining this with (4.22) and (1.5) we obtain
$$D_{\mathrm{tr}}=\mu _1(\langle u^2\rangle -\langle u\rangle ^2)=\frac{\mu _1\mu _2}{\rho ^2(\mu _1-\mu _2)}.$$
(4.23)
In fact the first relation in (4.23) is easy to understand. The linear equation (4.3) shows that fluctuations in the particle positions drift backwards in “label space” $`y=ϵi`$. This translates the stationary distance fluctuations into temporal fluctuations, with a conversion factor given by the drift speed $`\mu _1`$. As was mentioned already, the existence of a nonvanishing tracer diffusion coefficient for models with a hydrodynamic equation of diffusion type is unusual in one dimension, since generically such an equation implies symmetric particle jumps, in which case the tracer particle displacement grows only subdiffusively due to the single file constraint . Here $`D_{\mathrm{tr}}`$ is nonzero because the particles move, at speed $`\overline{v}`$, relative to the (stationary) density fluctuations. A rigorous derivation of (4.23) has recently been presented by Schütz .
Since the hydrodynamic equations in the two cases are identical, the argument leading to the first relation in (4.23) carries over directly to discrete parallel update, and using (3.22) we conclude that the tracer diffusion coefficient in this case is given by
$$D_{\mathrm{tr}}^{\mathrm{par}}=\frac{\mu _1(\mu _2-\mu _1^2)}{\rho ^2(\mu _1-\mu _2)}.$$
(4.24)
Similarly the expression
$$D_{\mathrm{tr}}^{\mathrm{seq}}=\frac{\mu _1(\mu _2-\mu _1^2)}{\rho ^2(1-\mu _1)^2(\mu _1-\mu _2)}$$
(4.25)
is obtained for the backward sequential case by combining eqs.(3.29) and (4.9). Both (4.24) and (4.25) have been verified numerically for the case of uniform $`\varphi (r)`$.
Subdiffusive behavior is found in the mean square displacement of a particle starting from an initial configuration without long wavelength disorder. This is given by (4.20) with $`n=t^{\prime }=0`$. For large $`t`$ one obtains
$$\langle \zeta _0^2(t)\rangle =G_0(t,0)\approx \sigma \sqrt{\frac{t}{\pi \mu _1}}=\frac{\mu _2}{\rho ^2(\mu _1-\mu _2)}\sqrt{\frac{\mu _1t}{\pi }}.$$
(4.26)
Using (3.43) and (4.6) this is seen to agree with the expression
$$\langle \zeta _0^2(t)\rangle =\sqrt{2/\pi }\,(\chi /\rho ^2)\sqrt{D_\mathrm{c}t}$$
(4.27)
derived from hydrodynamic arguments .
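The large-$`t`$ law (4.26) can be checked against a direct quadrature of (4.20). In the sketch below (ours; the values of $`\sigma `$, $`\mu _1`$ and $`t`$ are arbitrary choices) the cosine term drops out because $`|T|=T^{\prime }=t`$ when $`n=t^{\prime }=0`$:

```python
# For n = 0, t' = 0 eq. (4.20) reduces to
#   G_0(t,0) = (sigma / 2 pi) * int_0^pi dq (1 - e^{-2 w(q) t}) / w(q),
# with w(q) = mu_1 (1 - cos q); for large t this approaches
# sigma * sqrt(t / (pi mu_1)), eq. (4.26).
from math import cos, exp, sqrt, pi

sigma, mu1, t = 1.0, 0.5, 4000.0
nq = 200000
dq = pi / nq
G0 = 0.0
for k in range(nq):                 # midpoint rule over [0, pi]
    q = (k + 0.5) * dq
    w = mu1 * (1.0 - cos(q))
    G0 += (1.0 - exp(-2.0 * w * t)) / w * dq
G0 *= sigma / (2.0 * pi)

asym = sigma * sqrt(t / (pi * mu1))
rel_err = abs(G0 - asym) / asym
```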
#### 4.2.2 The independent jump approximation
For the totally asymmetric simple exclusion process it is known that the motion of a tagged particle in the stationary state follows a Poisson process, and therefore the tracer diffusion coefficient is simply equal to the mean speed $`1-\rho `$. Here we show that the expressions (4.23)-(4.25) for the ARAP are consistent with a similar independent jump picture.
Consider first the case of discrete time dynamics, where the random choice of the jump length $`\delta _i`$ is the only source of disorder, and therefore the tracer diffusion coefficient for independent jumps is equal to the variance of $`\delta _i`$. For parallel update $`\delta _i`$ is a uniform random fraction of the particle headway $`u_i`$, hence $`\langle \delta ^2\rangle -\langle \delta \rangle ^2=\mu _2\langle u^2\rangle -\mu _1^2/\rho ^2`$, which is easily checked to coincide with (4.24). For the backward sequential case $`\delta _i`$ is a random fraction of the intermediate state headway $`u_i^{\prime }`$. Therefore, using eqs. (3.28), (3.27) and (3.29),
$$\langle \delta ^2\rangle -\langle \delta \rangle ^2=\mu _2\langle (u^{\prime })^2\rangle -\mu _1^2\langle u^{\prime }\rangle ^2=\frac{\mu _2\langle u^2\rangle }{1-2\mu _1+\mu _2}-\frac{\mu _1^2}{\rho ^2(1-\mu _1)^2},$$
(4.28)
which is also found to agree with (4.25).
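The parallel-update estimate can be illustrated by direct sampling (ours; $`\rho =1`$ is an arbitrary choice, and the $`\nu =2`$ gamma form of the headway distribution, which is the known invariant distribution for parallel update, is used as input):

```python
# Independent-jump estimate for parallel update: draw headways u from the
# nu = 2 gamma law (shape 2, mean 1/rho), set delta = r u with r uniform,
# and compare var(delta) with (4.24),
#   mu_1 (mu_2 - mu_1^2) / (rho^2 (mu_1 - mu_2)) = 1/4 at rho = 1.
import random

random.seed(3)
rho, M = 1.0, 400000
total = total2 = 0.0
for _ in range(M):
    u = random.gammavariate(2.0, 1.0 / (2.0 * rho))  # shape 2, scale 1/(2 rho)
    d = random.random() * u
    total += d
    total2 += d * d
var_delta = total2 / M - (total / M) ** 2
```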
In the continuous time case the random timing of jumps introduces an additional source of disorder. It is natural to assume, in analogy with the asymmetric exclusion process, that the jumps occur according to a Poisson process. In the independent jump approximation the particle displacement $`\mathrm{\Delta }x`$ in time $`t`$ is then given by
$$\mathrm{\Delta }x(t)=\sum _{l=1}^{n(t)}\delta ^{(l)}$$
(4.29)
where $`n(t)`$ is a Poisson random variable with mean $`t`$ and the jump lengths $`\delta ^{(l)}`$ are independent random fractions of the (independent, random) particle headways. It is straightforward to show that the variance of $`\mathrm{\Delta }x`$ is
$$\langle (\mathrm{\Delta }x)^2\rangle -\langle \mathrm{\Delta }x\rangle ^2=\langle \delta ^2\rangle t,$$
(4.30)
thus in this case the independent jump approximation to $`D_{\mathrm{tr}}`$ is $`\langle \delta ^2\rangle =\mu _2\langle u^2\rangle `$ in agreement with (4.23).
## 5 Summary and outlook
We have presented results for two classes of particle systems on $``$. The models considered in Section 2. have Poisson invariant measures and nonlinear current-density relations (see eqs.(2.8, 2.14)). Time-dependent fluctuations in these models are therefore expected to be governed by the noisy Burgers (or Kardar-Parisi-Zhang ) equation, which is not amenable to simple analysis. By contrast, the asymmetric random average processes introduced in Section 3. have nontrivial invariant measures, but the linearity of the jump rules allows for a detailed study of dynamic properties (Section 4.).
A central result for the ARAP is the dependence of the headway distribution (1.7) on the type of dynamics. The idea that parallel update reduces density fluctuations is familiar from earlier work on the asymmetric exclusion process and related models for traffic flow, however in that case the ordered sequential update produces the same (Bernoulli) invariant measure as the continuous time process .
Our study suggests that the invariant measure of the continuous time ARAP displays an unusual combination of features: The two-point headway correlations factorize, the single particle headway distribution appears to be exactly given by the expression (1.7) derived under the assumption of pairwise independence, but nevertheless the product measure (1.3) is not invariant. Rajesh and Majumdar have found the same features in a larger class of models which interpolate between continuous time and parallel update . It would be most interesting to find a simple “deformation” of the product measure which explains this behavior. The status of the product measure assumption for the ordered sequential update also remains to be clarified. The considerations of Section 3.2.2 indicate that it might be possible to exactly reduce this case to that of parallel update, for which the product measure is known to be invariant .
Another interesting direction for future work is the introduction of quenched random inhomogeneities. In asymmetric exclusion models it is possible to find invariant product measures also in the presence of random jump rates associated with particles . For the continuous time ARAP with jump rates $`\gamma _i`$ depending on the particle label $`i`$ (the position $`i`$ in the stick representation) preliminary numerical simulations indicate that the product measures discussed above do not persist. It is possible to write down a closed set of linear equations for the two-point function $`\langle u_iu_j\rangle `$ which depends on the disorder configuration $`\{\gamma _i\}`$ and which should yield insight into the emergence and nature of correlations. Here we merely remark that, since the mean speed of particle $`i`$ is proportional to $`\gamma _i\langle u_i\rangle `$, stationarity implies $`\langle u_i\rangle =C/\gamma _i`$ where the constant $`C`$ is fixed by the average headway. If the distribution of jump rates is chosen such that $`\langle 1/\gamma _i\rangle `$ exists, $`C\to \langle 1/\gamma _i\rangle ^{-1}`$ in the limit of infinite system size, and all headways have a finite mean. Otherwise (e.g. for a uniform distribution of jump rates) arbitrarily large headways will open in front of the slowest particles, similar to the low density phase of asymmetric exclusion models with particlewise disorder .
Acknowledgements. We are much indebted to Bernard Derrida, Pablo Ferrari, Herve Guiol, Satya Majumdar, Gunter Schütz and Timo Seppäläinen for useful discussions and remarks. This work was supported by DAAD and CAPES within the PROBRAL programme. J.K. acknowledges the hospitality of IME/USP, São Paulo, and the Erwin Schrödinger Institute for Mathematical Physics, Vienna, where part of the paper was written. J.G. acknowledges the hospitality of Universität GH Essen during the early stages of the project.
| Dynamics: | $`\langle u^2\rangle `$ | $`\langle u^3\rangle `$ | $`\langle u^4\rangle `$ | $`\langle u^5\rangle `$ |
| --- | --- | --- | --- | --- |
| Continuous | $`2.998\pm 1`$ | $`15.02\pm 1`$ | $`105.3\pm 2`$ | $`947\pm 3`$ |
| | (3) | (15) | (105) | (945) |
| Sequential | $`1.9997\pm 2`$ | $`5.995\pm 2`$ | $`23.93\pm 2`$ | $`119.2\pm 2`$ |
| | (2) | (6) | (24) | (120) |
| Parallel | $`1.4998\pm 1`$ | $`2.996\pm 1`$ | $`7.466\pm 3`$ | $`22.26\pm 2`$ |
| | (3/2) | (3) | (15/2) | (45/2) |
Table I. The Table contains numerical estimates of the first few moments of the stationary headway distribution for the ARAP with uniform $`\varphi (r)`$ and different kinds of update. The data were obtained from simulations of systems of $`2\times 10^5`$ particles which were started from an ordered initial condition, $`u_i=1`$ for all $`i`$, and allowed to evolve for $`10^4`$ time steps. To extrapolate to $`t\mathrm{}`$, each run was fitted to eq.(3.13), and the errors were estimated by taking an average over 10 runs (errors refer to the last digit shown). The numbers in parentheses are the conjectured values of the moments; for the case of parallel update these are known to be exact . The remaining discrepancies are in fact largest for parallel update, and can probably be attributed to residual finite time effects.
# Accretion in Taurus PMS binaries: a spectroscopic study
Based on observations made with the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique de France and the University of Hawaii
## 1 Introduction
During the past five years, many studies have addressed the issue of multiplicity in low mass star-forming regions. A majority of G-K main sequence (MS) dwarfs belong to multiple systems in the solar vicinity (Duquennoy & Mayor dm91 (1991)), and several studies (Leinert et al. leinert (1993), Reipurth & Zinnecker rz (1993), Ghez et al. ghez (1993), Simon et al simon2 (1995)) have shown that this is also the case among pre-Main Sequence (PMS) stars. The binary fraction can vary with star formation region (SFR), and in the Taurus cloud, the binary excess over MS stars is of the order of 1.7, indicating that binarity is a fundamental feature of stellar formation, at least in this SFR (see Duchêne, duch99 (1999)).
Amongst the various mechanisms proposed so far for binary star formation, fragmentation appears as the most likely to meet observational constraints (Boss boss93 (1993)). Numerical codes have been successful in reproducing the formation of binary or multiple systems (Bonnell et al. frag0 (1992), Sigalotti & Klapp 1997a, b, Boss frag3 (1997), Burkert et al. frag1 (1997)). However, current binary formation codes do not offer enough resolution and time span to follow the formation and evolution of circumstellar accretion disks. Only larger structures, which are not necessarily in equilibrium, are predicted, providing only indirect information about these disks, and the fate of the available circumstellar matter remains unclear.
Various authors have studied tidal interaction of circumstellar disks in binary systems for coplanar disks (see a review by Lin & Papaloizou linpapa (1993)), and demonstrated that Lindblad resonances create a gap in the binary environment, separating two circumstellar disks from a circumbinary one. Accretion from the outer disk onto the inner ones and, eventually, onto both stars is prevented by gravitational resonances. However, Artymowicz & Lubow (arty2 (1996)) showed that, under some hypotheses on the disk properties, matter could flow through one or two points of the inner ring of the circumbinary disk toward the central system. If both stars have similar masses, both circumstellar disks are replenished, while, in the case of very unequal masses, the accretion funnel is mainly directed toward the secondary. On the other hand, Bonnell et al. (frag0 (1992)) used a SPH code to study cloud fragmentation processes and concluded that fragmentation of an elongated cloud rotating around an arbitrary axis leads to parallel but non-coplanar accretion-disk-like structures. They find that, in low mass ratio systems ($`q\ll 1`$), accretion of low angular momentum material is directed toward the centre of mass, which is close to the most massive star. Thus, in these systems, the primary appears more obscured and reddened than its less active companion. The different conclusions about the more actively accreting star are likely due to the different approaches used in these studies: while Artymowicz & Lubow (arty2 (1996)) start with a star+disk system to which they add a second star, Bonnell & Bastien (bonnell (1992)) model the formation of such a binary from the onset of the gravitational collapse. Also, the different initial conditions used in these two studies imply different angular momentum values for the accreting material (see Bate & Bonnell bate-bonnell (1997)).
The study of accretion activity on both components of PMS binary systems brings insight into the way the residual matter flows onto the central stars. This activity can be traced by spectroscopic measurements. However, up to now, such studies of PMS binaries in Taurus have been limited to wide systems (Hartigan et al. hss (1994), hereafter H94) due to the limited spatial resolution of the observations. Monin et al. (monin (1998), hereafter paper I) have started a spectroscopic survey of wide young binaries in Taurus. In this paper, we extend this study to closer systems (down to $`0\stackrel{}{.}9`$), investigating the classification of both stars in these binaries as classical (C) or weak-line (W) TTS, along with a more detailed study of the spectroscopic signature of their accretion activity. We restrict ourselves to the Tau-Aur association and we complement our results with those of H94 and Prato & Simon (prato\_simon (1997), hereafter PS97) to extend this study to a wider range of systems.
In section 2, we present the observations and the data reduction process. The results and the classification of individual stars as C/W TTS are presented and discussed in section 3, and an evaluation of the random pairing hypothesis is presented in section 4. The accretion activity of each component within binaries is compared in section 5. A discussion and a summary are presented in section 6.
## 2 Observations
### 2.1 The sample
We have chosen our sample from the list of Mathieu (mathieu (1994)). In paper I, Monin et al. already presented spectroscopic measurements of five objects in this list, with separations ranging between $`2\stackrel{}{.}4`$ and $`5\stackrel{}{.}9`$. In this paper we present complementary observations of closer binaries from the same list. This new sample (see Table 1) now includes all the binaries in this list with separations ranging between $`0\stackrel{}{.}89`$ and $`3\stackrel{}{.}1`$, with the exception of HBC 411 (CoKu Tau/3) and HBC 389 (Haro 6-10).
### 2.2 New spectroscopic observations
The observations were conducted on 1996 November 5 and 6, and December 1, at the Canada-France-Hawaii Telescope on Mauna Kea. We used the STIS2 $`2048\times 2048`$ detector with a $`0\stackrel{}{.}16`$/pixel scale. Using SIS (the Subarcsecond Imaging Spectrograph), which provides tip-tilt correction, we obtained an angular resolution of about $`0\stackrel{}{.}6`$ to $`0\stackrel{}{.}8`$. Differential $`VRI`$ imaging photometry was also performed during the first two nights for some targets. For each system, the primary has been defined as the brightest star in the $`V`$band.
Long-slit spectra were obtained using a 1$`\mathrm{}`$ slit and a grism. The useful range of the spectra is 4000 to $`7800`$Å, yielding a $`1.8`$Å/pixel scale. However, the actual resulting spectral resolution is $`9.6`$Å, except for HBC 356–357 where it is $`12.5`$Å. Spectra of calibration lamps and of a spectrophotometric standard (Feige 110) were obtained every night. All spectra have been wavelength calibrated, cosmic-ray cleaned, flat fielded, sky emission subtracted and flux calibrated. All data reduction steps were performed with standard IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation routines. The two stellar spectra of each binary were deblended and extracted using a task fitting two Gaussians with the same FWHM profile. This reduction procedure is accurate as long as the separation remains larger than the seeing, which was the case for all our sources except FX Tau and UY Aur, the closest systems of our sample (see section 3.1 for details).
Our estimates of the spectral types are based on the strength of TiO bands for M stars, and on the relative strengths of Ca i $`\lambda \lambda `$6122,62, Na i $`\lambda 5893`$, CaH $`\lambda \lambda `$6350,80 and CaH $`\lambda \lambda `$6750–7050 for K stars. We used the standard grids from Allen & Strom (allen (1995)) and Kirkpatrick et al. (kirk (1991)), and we also observed a series of spectral type standards during the same nights as the binary targets. From these standard-star measurements, we find that our estimates are accurate to within one subclass for the whole sample. However, we are unable to determine spectral types later than M5, because most of the spectral features we use no longer change with effective temperature for such late type stars. Spectra at longer wavelengths are needed for the classification of the reddest objects.
Uncertainties on emission line equivalent widths (hereafter EWs) were estimated by using the maximum and minimum acceptable continuum values next to the lines. They are typically smaller than 5%, except for the weakest lines, where they are of the order of 0.1–0.2 Å. In the blue part of the spectrum, for the faintest stars, uncertainties can reach 10–15%.
We evaluated differential photometry for 8 of our sources in the $`VRI`$ bands. Uncertainties are usually smaller than 0.02 mag and never exceed 0.03 mag.
## 3 Results
The spectra are shown in Figure 1 and the corresponding results are summarized in Table 2, with the relative photometry given as $`\mathrm{\Delta }M=M_B-M_A`$. For some objects, we could also detect H$`\gamma `$, H$`\delta `$, \[O i\]$`\lambda 6363`$ and the \[S ii\]$`\lambda \lambda `$6716,31 doublet in emission (see Appendix A).
### 3.1 Comments on individual binaries
The spectral type of GK Tau B could not be determined due to a poor signal-to-noise ratio, but its spectrum does not show strong TiO absorption bands. The spectral type of RW Aur A is undetermined from our spectra because the star is heavily veiled by a hot continuum and does not show any photospheric feature; higher resolution spectra are needed to assess its spectral type (see Basri & Batalha baba (1990) or Chen et al. chen (1995)).
UY Aur is one of the closest binaries in our sample, leading to a possible contamination of the spectrum of the secondary by that of the primary. We have checked this point by performing careful cuts through the UY Aur spectrum perpendicular to the dispersion axis. These cuts show a systematic asymmetry whose position does not change with wavelength and which is not observed on the primary of any other system, even when observed with the same slit position. Furthermore, the separation we infer from the spectra ($`0\stackrel{}{.}90\pm 0\stackrel{}{.}05`$) is fully consistent with the result of near-infrared imaging (Close et al. uyao (1998)), and the resulting spectrum of the secondary displays different spectral features than the primary.
In the case of FX Tau, the raw spectrum clearly shows two separated peaks, but they are very close (the seeing was about $`0\stackrel{}{.}8`$ FWHM). The double Gaussian fitting procedure was unsuccessful, and we had to apply a line-by-line deconvolution process. The seeing is slightly better at longer wavelengths, and we could retrieve both components only in this part of the spectra. They show significantly different features, so we believe that we have resolved the system. This is enough to measure the H$`\alpha `$emission and to estimate the spectral type, though with a larger uncertainty (2 subclasses).
Optical spectra of the GG Tau/c binary were obtained by White et al. (white (1999)), who found spectral types M5 and M7 for the primary and secondary, respectively. This is in agreement with our findings for both components, although we could not accurately determine the spectral type of the secondary.
We have also determined an accurate estimate of the separation of HBC 356–357: $`1\stackrel{}{.}33\pm 0\stackrel{}{.}05`$. Walter et al. (walter (1988)) reported a somewhat larger separation (2$`\mathrm{}`$). However, these authors did not publish the uncertainty on their result, and we believe that this discrepancy is unlikely to be due to orbital motion.
In order to study the relative accretion activity of the individual components of the binaries of our sample, we first determined which stars actually accrete, i.e. the respective classification of the observed stars as CTTS or WTTS. In the following, we use every available piece of information to establish this classification.
### 3.2 TTS classification criteria
The first large scale surveys for TTS were objective prism surveys, and the “historical” criterion to identify a CTTS was to check whether its H$`\alpha `$EW was larger than 10 Å (e.g. Strom et al. S89 (1989)). The stars identified as TTS from their photometry, but with smaller H$`\alpha `$EWs, were classified as WTTS, i.e. non-active PMS stars. However, this threshold is not a sharp edge, and a more physically meaningful diagnostic would be the H$`\alpha `$flux (Cohen & Kuhi, ck (1979)). Moreover, Martín (martin98 (1998)) discussed the possibility that the H$`\alpha `$EW threshold varies with spectral type, later spectral type stars having a higher threshold. He proposed a 5 Å EW limit for K stars and 10 Å for early M stars. We adopt this criterion in our classification.
We have also checked this classification against other criteria, such as the \[O i\]$`\lambda 6300`$ emission line and the $`K-L`$ infrared excess. Edwards et al. (edwards (1993)) found that all stars with detectable \[O i\] emission or $`K-L`$ excess ($`>0.4`$ mag) systematically have H$`\alpha `$EWs larger than 10 Å.
However, in order to compare our newly classified TTS with previously known field TTS, the use of different criteria may lead to confusion and unexpected biases. This point will be carefully examined below (see section 4.2), but we stress that only one star out of the 31 listed in Table 2 has a discrepant classification when using different criteria (see also section 4.2.1).
### 3.3 Classification of individual stars
NTTS 040012+2545, 040047+2603, LkCa 7 and J 4872: no component in any of these systems shows evidence of accretion activity, as they all exhibit only low H$`\alpha `$emission and no other emission line, with the exception of HBC 358. These four systems are thus considered as WW binaries.
GK Tau, IT Tau, UY Aur, and RW Aur: all of these systems contain stars with moderate to strong H$`\alpha `$and H$`\beta `$ emission and, for some of the stars, metallic and forbidden lines. All the stars in these systems can thus be safely classified as CTTS.
FX Tau: the secondary shows very low H$`\alpha `$emission and is probably a WTTS. On the other hand, the primary shows moderate emission in this line, as well as H$`\beta `$ emission (Cohen & Kuhi ck (1979)). Furthermore, Strom et al. (S89 (1989)) and Skrutskie et al. (S90 (1990)) reported moderate $`\mathrm{\Delta }K`$ and $`\mathrm{\Delta }N`$ excesses for the system. All this evidence supports the idea that the primary is a CTTS.
UX Tau and HK Tau: UX Tau A was observed in paper I and classified as a WTTS from its H$`\alpha `$EW of $`9.5`$Å. For such a star, apparently “at the border” between C and WTTS, we reexamined this classification and propose to (re)classify it as a CTTS, because of its large H$`\alpha `$emission flux (for a spectral type K4, the CTTS threshold is only about $`5`$Å). Another clue is its significant $`\mathrm{\Delta }N`$ excess (Skrutskie et al. S90 (1990)). This post facto reconsideration of classification criteria can be misleading at first sight, but we stress that it is only done here to define a more accurate picture of an accreting T Tauri star. The classification of all other stars is identical to paper I. In particular, our previous classification of HK Tau B as a CTTS (H$`\alpha `$EW of 12.5 Å) has recently been confirmed by an HST image of this star showing a remarkable edge-on circumstellar disk (Stappelfeldt et al. stap98 (1998)).
## 4 CTTS - WTTS pairing within Taurus binaries
In the following, we call “twins” the systems where the TTS are of the same type (either CC or WW), and “mixed” the systems where the stars are different.
One of our targets (UX Tau) is a multiple system of physically associated stars. Although this system is not strongly hierarchical (separations of $`5\stackrel{}{.}9`$ and $`2\stackrel{}{.}7`$), we consider that it can be split into two “independent” binaries, leading to a total of 16 binaries in our sample. The validity of this assumption will be evaluated in Section 4.2.4.
### 4.1 Testing the random pairing hypothesis
The 16 binaries considered here can be divided into three categories: 9 binaries contain only CTTS ($`56`$%), 4 are formed of two WTTS (25 %), and 3 are mixed systems, all with a CTTS primary and two of them belonging to the same triple system, UX Tau, representing less than 19 % of our sample. Mixed systems appear to be rare in TTS binaries, and this is even more striking when we use the “historical” H$`\alpha `$$`10`$Å EW criterion: only one mixed system then remains among the 16 binaries, and the proportion drops to about 6%. We use this sample to address the question: are binary components taken at random from the TTS population?
If we want to compare this result with a distribution randomly taken from a single TTS population, we need to know the ratio of WTTS-to-CTTS in Taurus. In a study limited to the central parts of the Tau-Aur dark cloud, Hartmann et al. (h91 (1991)) found a ratio close to unity. Considering a larger sky area leads to an even larger WTTS-to-CTTS ratio, mainly because of the widespread $`ROSAT`$ population (e.g. Wichmann et al. wichmann (1996)). Since our sample mostly contains systems in the center of the molecular cloud, we conservatively adopt a W/C ratio $`=1`$.
Taking a fixed distribution of primaries (4 WTTS and 12 CTTS), the probability to get 3 mixed systems out of 16 binaries from randomly taken secondaries is $`\mathrm{C}_{16}^3(\frac{1}{2})^{16}1\%`$. We therefore reject the hypothesis that components of TTS binaries are randomly associated from the distribution of single stars. In other words, the TTS types of Taurus binary components are significantly correlated.
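The binomial arithmetic behind this estimate is easily reproduced. The sketch below (plain Python, our own illustration) evaluates the quoted probability together with the more conservative tail probability of at most 3 mixed systems, under the same assumption that each secondary is independently W or C with probability 1/2:

```python
from math import comb

# Probability of exactly 3 mixed pairs among 16 binaries, if each of the
# 16 secondaries is independently W or C with p = 1/2 (W/C ratio = 1):
p_exact = comb(16, 3) * 0.5**16
print(f"P(exactly 3 mixed) = {p_exact:.4f}")    # 0.0085, i.e. ~1%

# Tail probability (3 or fewer mixed pairs) -- still of order 1%:
p_tail = sum(comb(16, k) for k in range(4)) * 0.5**16
print(f"P(<= 3 mixed)      = {p_tail:.4f}")     # 0.0106
```

Either way, the chance that the observed distribution arises from random pairing stays at the percent level.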
### 4.2 Possible sources of bias
In this section, we discuss some possible sources of errors in our result on a preferential CC pairing in TTS binaries.
#### 4.2.1 The use of different classification criteria
In section 3.2, we have used complementary criteria to establish the C/W TTS nature of our sources. Considering only the “historical” $`10`$Å H$`\alpha `$EW classification criterion not only implies minor changes (only 1 star out of 31, in UX Tau, is reclassified), but also makes mixed binaries even rarer: only one mixed pair (FX Tau) out of 16 remains. The probability that the observed C/W distribution in our 16 binaries results from random pairing then falls to $`0.02\%`$.
#### 4.2.2 The case of WW pairs
The evolutionary status of the WTTS population identified from the $`ROSAT`$ All-Sky Survey is somewhat uncertain: some of these stars may be unrelated to the TTS population (e.g. Favata et al. favata (1997)). If they are young main sequence stars, we expect both components to mimic WTTS since they are too old to still be accreting. The observation of such binaries could then bias our study towards WW pairs. This potentially affects 2 binaries in our sample, which were first detected by $`ROSAT`$ (their names start with “NTTS”). If we exclude all WW binaries for safety, we end up with at most 3 mixed systems out of 12, yielding a proportion of 25 % mixed systems in TTS binaries. Then the probability that this distribution results from random associations is only $`5\%`$. We therefore conclude that the high proportion of twin binaries in our sample is not strongly affected by the presence of spurious WW binaries.
#### 4.2.3 Time evolution
Since the proportion of stars surrounded by a circumstellar disk decreases with age, we examine the possibility that our binary population is younger, on average, than the population of single stars. In such a case, we would expect to find more CTTS (“young and active”) than in the singles sample and, consequently, more CC binaries. In Simon & Prato’s (simon\_prato (1995)) study, the median age of their single stars sample is $`\mathrm{log}t_{\mathrm{SW}}\simeq 5.8`$. In our sample, we find that half of the primaries that have an age assigned by Simon & Prato are older than this value. We thus conclude that our study includes about as many young systems as old systems, and that time evolution effects do not affect our conclusion.
#### 4.2.4 Close companions and hierarchical systems
The issue of how to treat known binaries that we do not resolve is not straightforward. Moreover, currently undetected companions may exist around some of the stars in our sample. These unresolved companions may strongly affect the evolution and the accretion history of their associated star. Furthermore, considering a triple system as two independent binaries may not be a valid hypothesis.
To evaluate the impact of such multiple systems, we considered a subsample of binaries in which no third component, either spectroscopic (Mathieu mathieu (1994)), very tight visual (Simon et al. simon2 (1995)) or wider, is known so far. To our knowledge, only 7 binaries in our overall sample match this criterion: LkCa 7, FX Tau, DK Tau, HK Tau, IT Tau, HN Tau and UY Aur. This subsample contains 6 twin binaries and 1 mixed binary. Once again, only about $`15\%`$ of these binaries are mixed, leading us to think that the possible existence of additional companions does not significantly modify our results.
### 4.3 Complementary results from the literature
We have considered previous results in the literature providing information on the classification of the components of more PMS binaries in the Taurus SFR. We complement our results with those of H94 and PS97 and obtain a sample that contains over 90 % of all known binaries located in Taurus in the separation range $`0\stackrel{}{.}8`$$`13\mathrm{}`$. The list of these supplementary objects is given in Table 3.
To classify the members of the binaries studied by H94, we used their H$`\alpha `$EWs and spectral types together with the estimated near-infrared excesses. Both indicators agree well for all stars except V710 Tau S. This star presents an H$`\alpha `$EW hardly above the classical limit (11 Å), with a spectral type of M3, and no infrared excess. Moreover, Cohen & Kuhi (ck (1979)) measured an H$`\alpha `$EW of 3.3 Å and no emission in the forbidden lines, leading us to classify this star as a WTTS. V710 Tau consequently happens to be one of the few mixed pairs (CW) among TTS binaries.
For the two binaries studied by PS97 and included in our sample, all stars have $`K-L\geq 1.2`$ mag. Such high values are strong evidence for the presence of an optically thick accretion disk in the inner 0.5 AU around each star (the upper limit for photospheric colors is $`K-L\simeq 0.4`$ mag, Edwards et al. edwards (1993)), so that these stars can be safely classified as CTTS. It is also worth mentioning that for all systems common to PS97’s sample and ours (DK Tau and UY Aur with $`K-L`$ photometry and Haro 6-37 with Br$`\gamma `$ spectroscopy), their classification and ours are fully consistent.
If we take these complementary results into account, we obtain a sample of 26 binaries with 15 CC twins, 7 WW twins, and 4 mixed. The proportion of mixed systems is then $`15\%`$, and even only $`4\%`$ if we adopt the H$`\alpha `$EW criterion, yielding similar results as in Section 4.1.
## 5 Differential accretion in CC binaries
For all binaries where both stars have active accretion disks (CC pairs), we have used the available spectra to compare the accretion activity of each component, using their H$`\alpha `$flux as an accretion diagnostic. The H$`\alpha `$EW has already been shown to correlate well with the infrared excess in CTTS (e.g. Edwards et al. edwards (1993)). Moreover, recent studies in the near-infrared, where the extinction is about ten times smaller than at H$`\alpha `$wavelengths, have revealed tight correlations between the accretion luminosity and the Pa$`\beta `$ and Br$`\gamma `$ emission fluxes (Muzerolle et al. muzerolle (1998)). Hereafter, we therefore use the ratio of H$`\alpha `$fluxes in binaries, assuming that this flux is proportional to the energy dissipated in the accretion process, i.e. to the accretion luminosity.
We also assume that the extinction toward both components of a binary is the same, based on the tight correlation observed in the data of H94 between $`A_J`$ toward the primary and the secondary. We checked that this correlation still holds at smaller separations: we evaluated rough $`VR`$ photometry from our spectra and compared the results to dwarf colors. Due to observational uncertainties ($`\sigma _V\simeq \sigma _R\simeq 0.1`$ mag and 1 subclass for the spectral type), the final accuracy of the extinction is rather poor (typically, $`\sigma (A_V)\simeq 0.7`$ mag). However, we did not find any evidence that the correlation is modified. This correlation is likely due to the fact that both components of a binary system are equally embedded in the Taurus molecular cloud, but other explanations include the existence of a common circumbinary envelope and/or of circumstellar disks with similar orientations. Brandner & Zinnecker (bz (1997)) reported a similar correlation for close ($`<250`$AU) PMS binaries in southern SFRs.
For each binary, the ratio of the H$`\alpha `$luminosities is computed as follows:
$$\frac{F_{H_\alpha }^A}{F_{H_\alpha }^B}=\frac{EW_{H_\alpha }^A}{EW_{H_\alpha }^B}\times \frac{F_c^A}{F_c^B}\times 10^{0.4(A_V^A-A_V^B)}\simeq \frac{EW_{H_\alpha }^A}{EW_{H_\alpha }^B}\times \frac{F_c^A}{F_c^B}$$
where $`F_c`$ is the nearby continuum flux estimated from our spectra when available and from H94’s $`R`$ photometry otherwise.
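As a concrete illustration, the following sketch implements the ratio above; the function name and the numerical values are invented for the example and do not correspond to any star in Table 2:

```python
def halpha_flux_ratio(ew_a, ew_b, fc_a, fc_b, av_a=None, av_b=None):
    """F_Ha(A)/F_Ha(B): EW ratio times nearby-continuum flux ratio.
    The dereddening factor 10**(0.4*(A_V^A - A_V^B)) is applied only
    when both extinctions are given; it equals 1 when A_V^A = A_V^B."""
    ratio = (ew_a / ew_b) * (fc_a / fc_b)
    if av_a is not None and av_b is not None:
        ratio *= 10.0 ** (0.4 * (av_a - av_b))
    return ratio

# Hypothetical pair: primary EW twice as large, continuum 3x brighter.
print(halpha_flux_ratio(40.0, 20.0, 3.0, 1.0))            # 6.0
# A 1.5 mag extinction difference rescales the ratio by a factor ~4:
print(halpha_flux_ratio(40.0, 20.0, 3.0, 1.0, 1.5, 0.0))  # ~23.9
```

With equal extinctions the dereddening term drops out, which is exactly the approximation made in the equation above.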
Figure 2 shows a clear trend for the primaries to have a higher H$`\alpha `$flux than the secondaries, and thus a higher accretion luminosity. It is unlikely that this result is the consequence of a systematic bias introduced by the assumption that both extinctions are the same. This would imply that we systematically overestimated the H$`\alpha `$luminosity ratios by a factor of 4, which requires that the extinctions toward the secondaries be larger by about $`A_V^B-A_V^A\simeq 1.5`$ mag. Such a systematic trend would have been detected in H94 (the authors quote $`\sigma (\mathrm{\Delta }A_V)\simeq 0.3`$ mag), as well as in our data. This suggests that the accretion rate is larger on the more massive component of the system.
In a few cases, there are clues that the photosphere is not seen directly, and that we only detect scattered light. This is the case for UY Aur B (Close et al. uyao (1998)) and HK Tau B (Stapelfeldt et al. stap98 (1998)), where an edge-on disk has recently been detected. For these stars, the observed photometry is therefore only a lower limit to their actual flux, and we consequently underestimated their H$`\alpha `$flux. Arrows have been added accordingly in Fig. 2. No nebular structure has so far been seen at high angular resolution around any other star, and we assume that there is no object with strong scattering in our sample apart from these two stars.
## 6 Discussion and summary
We have shown that there exist only a few mixed systems among Taurus PMS binaries in the separation range 130 - 1800 AU. This result extends that of PS97, who did not find any mixed system in a sample including binaries with separations of 40–360 AU. This indicates that the accretion histories of the two stars are not independent, even for binaries with separations up to $`800`$AU (from our new spectroscopic observations) and even $`1800`$AU if we take into account the results from H94.
What can explain such a correlation in binaries with separations as large as $`1800`$AU? This “twinning” trend, together with the fact that circumstellar disk dissipation times from optically thick to optically thin are short (Simon & Prato simon\_prato (1995)), led PS97 to propose that both components of a close binary system accrete over the same time span because their circumstellar disks are replenished by material from a common (circumbinary) environment. As soon as this environment is cleared, both disks disappear over a short viscous timescale. However, the circumbinary environment hypothesis appears difficult to apply to wide binaries, and while such envelopes have been detected around a few close binaries, they generally remain elusive. Similarly, it appears improbable that the binary as a whole can sweep up enough material as it wanders through the parent cloud: at 1 km.s<sup>-1</sup>, a binary with a 100 AU radius sweeps up only $`10^{-12}M_{\odot }\mathrm{yr}^{-1}`$ in a $`10^2`$cm<sup>-3</sup> density cloud.
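This order-of-magnitude estimate is easy to verify. The sketch below redoes the arithmetic in cgs units; the mean particle mass of $`2.3m_\mathrm{H}`$ is our assumption for molecular gas at the quoted number density:

```python
import math

# Order-of-magnitude check: mass swept by a binary of geometric
# cross-section pi*R^2 moving at speed v through a uniform cloud of
# number density n.  All inputs in cgs; mean particle mass ~2.3 m_H.
M_SUN = 1.989e33          # g
AU = 1.496e13             # cm
M_H = 1.673e-24           # g
YEAR = 3.156e7            # s

R = 100 * AU              # binary "radius"
v = 1.0e5                 # 1 km/s
n = 1.0e2                 # cm^-3
rho = 2.3 * M_H * n       # mass density, g cm^-3

mdot = math.pi * R**2 * v * rho           # g/s
mdot_msun_yr = mdot * YEAR / M_SUN
print(f"swept mass ~ {mdot_msun_yr:.1e} M_sun/yr")   # a few 1e-12
```

The result, a few $`10^{-12}`$ solar masses per year, confirms that sweeping from the ambient cloud cannot replenish the circumstellar disks.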
On the other hand, we find that the primaries have larger H$`\alpha `$fluxes than their secondaries. We call ’primary’ the brightest component in the $`V`$ band, which always has an earlier spectral type than the secondary so that it is likely the most massive star. The H$`\alpha `$luminosity is assumed to be proportional to the accretion luminosity:
$$L_{\mathrm{H}\alpha }\propto L_{\mathrm{acc}}=\frac{GM_{*}\dot{M}}{R_{*}}$$
Baraffe et al.’s (isa (1998)) evolutionary models show that two 2 Myr-old TTS with masses of 1 $`M_{\odot }`$ and 0.1 $`M_{\odot }`$ have $`M_{*}/R_{*}`$ ratios differing only by a factor of 4 (the most massive star also has the larger radius). Our measured H$`\alpha `$luminosity ratios vary by over 2 orders of magnitude and therefore cannot be accounted for by extreme mass ratios. The difference in the accretion luminosities is thus likely to reveal that, in most cases, the primary accretes more than its companion: $`\dot{M}_A>\dot{M}_B`$. It is also noticeable that the mixed systems in our sample all have a CTTS primary, so that in the case of CW pairs, the more massive star again seems to be more active than its companion.
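The argument can be made quantitative in one line: if the H$`\alpha `$luminosity traces $`GM_{*}\dot{M}/R_{*}`$, the implied accretion-rate ratio is the luminosity ratio divided by the $`M_{*}/R_{*}`$ ratio. The sketch below is our illustration, using a representative luminosity ratio of 100 (the observed ratios span up to two orders of magnitude):

```python
# If L_Halpha ~ M*Mdot/R, then Mdot_A/Mdot_B is the observed luminosity
# ratio divided by the (M/R)_A / (M/R)_B ratio.  Even granting the
# extreme factor of 4 from the evolutionary models, a luminosity ratio
# of 100 still requires the primary to accrete 25 times faster.
def mdot_ratio(lum_ratio, m_over_r_ratio):
    return lum_ratio / m_over_r_ratio

print(mdot_ratio(100.0, 4.0))   # 25.0
```

This is why extreme mass ratios alone cannot explain the measured spread.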
If both components have similar circumstellar disk lifetimes ($`\tau _D=M_D/\dot{M}`$), these results suggest that the circumprimary disk is preferentially fed in the early binary formation process by a common circumbinary reservoir of mass. This is in agreement with the prediction of Bonnell et al.’s (frag0 (1992)) model.
Another possibility is that the accretion rate onto the star, $`\dot{M}`$, is proportional to the disk mass, itself related to the mass of the central star. In the canonical accretion disk theory, the accretion rate is related to the surface density $`\mathrm{\Sigma }`$, itself evidently linked to the disk mass. This mechanism would explain simultaneously why $`\dot{M}_A>\dot{M}_B`$ and why the disk lifetime $`\tau =M/\dot{M}`$ does not depend on the mass of the central star. If true, such an $`M`$-$`\dot{M}`$ relation should hold for single TTS, but current mass determinations lack the precision needed to study this point further.
Observations of closer binaries, down to separations of the order of the peak of the PMS separation distribution ($`50`$ AU, Mathieu mathieu (1994)), should shed more light on this question. Such observations are within reach of current adaptive optics systems equipped with spectroscopic capabilities. This peak separation is of the order of the size of a canonical accretion disk, and such observations would allow us to study systems where the star-disk and disk-disk interactions are strong, and where any leftover circumbinary environment has a major influence.
###### Acknowledgements.
We are grateful to Caroline Terquem and Mike Simon for enriching discussions. Comments from an anonymous referee helped to significantly improve this paper. This research has made use of the Simbad database, operated at CDS, Strasbourg, France, and of the NASA’s Astrophysics Data System Abstract Service.
## Appendix A Complementary line measurements
In our new spectroscopic observations, some emission lines not presented in Table 2 were detected in some of our targets. EW measurements for the H$`\gamma `$, H$`\delta `$, \[O i\]$`\lambda 6363`$ and \[S ii\]$`\lambda \lambda 6716,31`$ lines are given in Table 4 for the stars where these lines were detected.
# Cooling, Physical Scales and the Vacuum Structure of Y-M Theories.
## Abstract
We present a cooling method controlled by a physical cooling radius that defines a scale below which fluctuations are smoothed out while leaving physics unchanged at all larger scales. This method can be generally used as a gauge invariant low pass filter to extract the physics from noisy MC configurations. Here we apply this method to study topological properties of lattice gauge theories where it allows to retain instanton–anti-instanton pairs.
Since MC configurations are noisy at small distances, smoothing is needed to observe the physical structure. One approach is “cooling”, a local minimization of a given lattice action. Cooling works as a diffusion process, smoothing out increasingly large regions. Thereby the physical spectrum is affected; in particular, the string tension drops rapidly with cooling. Topological excitations may be distorted by bad scaling properties of the action and by instanton-anti-instanton (I-A) annihilation (pairs are not minima of the action). We thus need:
1) to use an action with practically scale invariant instanton solutions and a dislocation threshold to eliminate UV noise (dislocations), and
2) to control cooling by means of a physical scale such that no monitoring or engineering which may introduce uncertainties is necessary.
Restricted Improved Cooling (RIC) fulfills these requirements. RIC preserves physics at scales above a cooling radius $`r`$ which can be fixed unequivocally beforehand, while smoothing out the structure below $`r`$. In particular, the string tension is preserved beyond $`r`$, instantons are stable, dislocations are eliminated, and I-A pairs are retained above a threshold defined by $`r`$.
Properties of RIC. RIC uses the action of the Improved Cooling algorithm (IC) with 5 planar, fundamental Wilson loops. This action is correct to order $`𝒪(a^6)`$ and has a dislocation threshold $`\rho _0\simeq 2.3a`$, below which short range topological structure is smoothed out (note that $`\rho _0\to 0`$ in the continuum limit). Above $`\rho _0`$, instantons are stable to any degree of cooling (however, I-A pairs annihilate). The corresponding improved charge density, using the same combination of loops, leads to an integer charge already after a few cooling sweeps, stable thereafter.
Here we shall restrict ourselves to $`SU(2)`$; for the general case see . Recall that the cooling algorithm is derived from the equations of motion
$$U_\mu (x)W_\mu (x)^{\dagger }-W_\mu (x)U_\mu (x)^{\dagger }=0,$$
(1)
where $`W`$ is the sum of staples connected to the link $`U_\mu (x)`$ in the action. Cooling amounts to the substitution
$$U\to U^{\prime }=V=\frac{W}{\|W\|},\qquad \|W\|^2=\frac{1}{2}\mathrm{Tr}(WW^{\dagger }).$$
(2)
We define RIC by the constraint<sup>1</sup><sup>1</sup>1We thank F. Niedermayer for this suggestion. that only those links be updated, which violate the equation of motion by more than some chosen threshold:
$$U\to V\quad \mathrm{iff}\quad \mathrm{\Delta }_\mu (x)^2=\frac{1}{a^6}\mathrm{Tr}(1-UV^{\dagger })\geq \delta ^2.$$
(3)
We have $`\mathrm{\Delta }_\mu ^2(x)\to \mathrm{Tr}((D_\nu F_{\nu \mu }(x))^2)`$ in the continuum limit. Thus $`\delta `$ controls the energy of the fluctuations around classical solutions and acts as a filter for short wavelengths. Since it uses the same action, RIC has the same scaling properties as IC. However, since RIC does not update links already close to a solution, it changes fewer links with every iteration until the algorithm saturates. Eq. (3) defines a constrained minimization, and the smoothing is homogeneous over the lattice.
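A minimal numerical sketch of this restricted update for a single $`SU(2)`$ link may make the constraint concrete. Here the staple sum $`W`$ is mocked up from random $`SU(2)`$ matrices rather than computed from the improved action's loops, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix a0*1 + i a.sigma from a unit quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

def restricted_update(U, W, delta, a_lat=1.0):
    """Eqs. (2)-(3): replace U by V = W/||W|| only if the local
    violation Delta^2 = Tr(1 - U V^dagger)/a^6 reaches delta^2."""
    norm = np.sqrt(0.5 * np.trace(W @ W.conj().T).real)   # ||W||
    V = W / norm                                          # projected link
    delta2 = np.trace(np.eye(2) - U @ V.conj().T).real / a_lat**6
    return (V, True) if delta2 >= delta**2 else (U, False)

U = random_su2()
W = sum(random_su2() for _ in range(6))   # stand-in for the staple sum
U_new, updated = restricted_update(U, W, delta=0.1)
print("link updated:", updated)
```

For $`SU(2)`$ the staple sum is itself proportional to an $`SU(2)`$ matrix, which is why the simple normalization in Eq. (2) suffices; for larger groups a projection back to the group would be needed.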
The next step is to relate the parameter $`\delta `$, which defines the cooling, to a physical scale. For Yang-Mills theory the latter should involve the string tension $`\sigma `$. We calculate the “effective mass” $`M(t)`$ from correlation functions of spatial Polyakov loops separated by $`t`$ steps in time. Asymptotically $`M(t)\to N_s\sigma `$ up to finite size corrections.
For the calculations we generate configurations using the Wilson action on the lattices:
$`12^3\times 36`$, p.b.c., $`\beta =2.4`$ ($`a=0.12`$ fm): 800 configurations with 100 sweeps separation;
$`24^4`$, twist in time, $`\beta =2.6`$ ($`a=0.06`$ fm): 350 configurations with 200 sweeps separation.
We do 20000 thermalization sweeps. We use $`\delta =2.89,4.92,11.57,23.15`$ and $`46.30`$ fm<sup>-3</sup>.
For illustration we present in Fig. 1 $`M(t)`$ for the $`24^4`$ lattice. Defining $`r(\delta )`$ as the distance $`t`$ at which $`M(t)`$ on $`\delta `$-cooled configurations starts to agree with the uncooled value (obtained by fuzzing and fitted to a smooth function of $`t`$) we arrive at the results in Fig. 2, compatible with
$$r(\delta )\simeq 0.8\delta ^{-1/3},$$
(4)
showing the correct scaling behaviour, as expected since $`\delta `$ has a continuum limit.
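Numerically, and taking the dimensionally consistent reading $`r\propto \delta ^{-1/3}`$ (the quoted $`\delta `$ values are in fm<sup>-3</sup> and $`r`$ is a distance in fm), the thresholds used above map onto smoothing scales roughly as follows (a sketch, not the authors' analysis code):

```python
# Smoothing scales implied by the fit r(delta) ~ 0.8 * delta**(-1/3),
# evaluated at the threshold values (in fm^-3) used in the simulations.
deltas = [2.89, 4.92, 11.57, 23.15, 46.30]
radii = [0.8 * d ** (-1.0 / 3.0) for d in deltas]
# A larger threshold updates fewer links and probes shorter distances,
# so the smoothing scale shrinks from about 0.56 fm down to 0.22 fm.
```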
The behaviour of instantons is similar under RIC and IC, although the former has a somewhat smaller, $`\delta `$-dependent dislocation threshold $`\rho _0(\delta )`$. I-A pairs, however, do not annihilate under RIC the way they do under the other cooling methods (including IC): since the distortion of the partners in a pair depends on the overlap, RIC preserves pairs below some overlap threshold. More precisely, it turns out that there is a well defined relation between $`\mathrm{\Delta }`$ and $`S_{\mathrm{int}}^{\mathrm{IA}}=16\pi ^2-S^{\mathrm{IA}}`$, therefore RIC stabilizes I-A pairs with $`S_{\mathrm{int}}`$ below a threshold which is a function of $`\delta `$, hence of $`r`$.
SU(2) topological properties by RIC. The total, improved topological charge stabilizes to integer values after only a few IC or RIC sweeps. Therefore the topological susceptibility appears very stable if $`\delta `$ is small enough to eliminate dislocations, shows correct scaling and agrees with the Witten-Veneziano relation.
To describe the details of the instanton ensemble one needs to recognize the particular I’s and A’s. Since the objects can be very distorted, our description relies on two assumptions: a) instantons should appear as local self-dual peaks in the action and charge density, and b) only pairs with $`S_{\mathrm{int}}^{\mathrm{IA}}`$ considerably smaller than $`16\pi ^2`$ should be considered as such. We approximate the action and charge density by a superposition of I’s and A’s parameterized through the BPST formula for an instanton of size $`\rho `$. We measure the adequacy of this ansatz through the fraction of the total action (charge) reproduced in this way. We represent the data in physical units using the corresponding $`a`$ and the $`r(\delta )`$ obtained from the string tension analysis without any further fit or tuning.
The main results of our analysis are:
1. The density of instantons increases drastically with decreasing smoothing scale $`r`$.
2. Size distributions depend only weakly on $`r`$.
3. The typical I-A distance $`d_{IA}`$ (from an I to the nearest A) seems given by the smoothing scale.
4. The overlap $`(\rho _I+\rho _A)/2d_{IA}`$ increases strongly with decreasing $`r`$ and the fit quality deteriorates.
All these results are largely invariant under rescaling of the lattice spacing $`a`$. They are summarized in the Figures 3, 4 and 5.
It appears therefore that there are topological features of the vacuum structure, like susceptibility and charge distributions, and to a fair degree also size distributions, which are well defined, show correct scaling and are rather independent of the smoothing scale once the noise has been damped sufficiently. However, I(A)-density, overlap and I-A distance distributions are not of this kind: although these quantities seem to scale correctly with the cut-off, their behaviour with $`r`$ suggests that, looking at smaller and smaller scales, a continuous spectrum of fluctuations emerges and a description in terms of I’s, A’s and I-A pairs becomes less and less meaningful. This may explain both the agreements and the disagreements between various analyses – see, e.g. .
Generally, RIC proves to be a good instrument for probing different scales in a controlled way and for extracting physics from noisy configurations.
# Many-body approach to the dynamics of batch learning
## Abstract
Using the cavity method and diagrammatic methods, we model the dynamics of batch learning of restricted sets of examples, widely applicable to general learning cost functions, and fully taking into account the temporal correlations introduced by the recycling of the examples.
PACS numbers: 87.10.+e, 87.18.Sn, 07.05.Mh, 05.20.-y
The extraction of input-output maps from a set of examples, usually termed learning, is an important and interesting problem in information processing tasks such as classification and regression . During learning, one defines an energy function in terms of a training set of examples, which is then minimized by a gradient descent process with respect to the parameters defining the input-output map. In batch learning, the same restricted set of examples is provided for each learning step. There have been attempts using statistical physics to describe the dynamics of learning with macroscopic variables. The major difficulty is that the recycling of the examples introduces temporal correlations of the parameters in the learning history. Hence previous success has been limited to Adaline learning , linear perceptrons learning nonlinear rules , Hebbian learning and binary weights .
Recent advances in on-line learning are based on the circumvention of this difficulty. In contrast to batch learning, an independent example is generated for each learning step . Since statistical correlations among the examples can be ignored, the dynamics can be simply described by instantaneous dynamical variables. However, on-line learning represents an ideal case in which one has access to an almost infinite training set, whereas in many applications, the collection of training examples may be costly.
In this paper, we model batch learning of restricted sets of examples, by considering the learning model as a many-body system. Each example makes a small contribution to the learning process, which can be described by linear response terms in a sea of background examples. Our theory is widely applicable to any gradient-descent learning rule which minimizes an arbitrary cost function in terms of the activation. It fully takes into account the temporal correlations during learning, and is exact for large networks. Preliminary work has been presented recently .
Consider the single layer perceptron with $`N\gg 1`$ input nodes $`\{\xi _j\}`$ connecting to a single output node by the weights $`\{J_j\}`$ and often, the bias $`\theta `$ as well. For convenience we assume that the inputs $`\xi _j`$ are Gaussian variables with mean 0 and variance 1, and the output state $`S`$ is a function $`f(x)`$ of the activation $`x`$ at the output node, i.e. $`S=f(x)`$; $`x=\stackrel{}{J}\cdot \stackrel{}{\xi }+\theta `$. For binary outputs, $`f(x)=\mathrm{sgn}x`$.
The network is assigned to “learn” $`p=\alpha N`$ examples which map inputs $`\{\xi _j^\mu \}`$ to the outputs $`\{S_\mu \}(\mu =1,\mathrm{},p)`$. In the case of random examples, $`S_\mu `$ are random binary variables, and the perceptron is used as a storage device. In the case of teacher-generated examples, $`S_\mu `$ are the outputs generated by a teacher perceptron with weights $`\{B_j\}`$ and often, a bias $`\varphi `$ as well, namely $`S_\mu =f(y_\mu )`$; $`y_\mu =\stackrel{}{B}\cdot \stackrel{}{\xi }^\mu +\varphi `$.
Batch learning is achieved by adjusting the weights $`\{J_j\}`$ iteratively so that a certain cost function in terms of the activations $`\{x_\mu \}`$ and the output $`S_\mu `$ of all examples is minimized. Hence we consider a general cost function $`E=-\sum _\mu g(x_\mu ,y_\mu )`$. The precise functional form of $`g(x,y)`$ depends on the adopted learning algorithm. In previous studies, $`g(x,y)=-(S-x)^2/2`$ with $`S=\mathrm{sgn}y`$ in Adaline learning , and $`g(x,y)=xS`$ in Hebbian learning .
To ensure that the perceptron is regularized after learning, it is customary to introduce a weight decay term. In the presence of noise, the gradient descent dynamics of the weights is given by
$$\frac{dJ_j(t)}{dt}=\frac{1}{N}\sum _\mu g^{\prime }(x_\mu (t),y_\mu )\xi _j^\mu -\lambda J_j(t)+\eta _j(t),$$
(1)
where the prime represents partial differentiation with respect to $`x`$, $`\lambda `$ is the weight decay strength, and $`\eta _j(t)`$ is the noise term at temperature $`T`$ with $`\langle \eta _j(t)\rangle =0`$ and $`\langle \eta _j(t)\eta _k(s)\rangle =2T\delta _{jk}\delta (t-s)/N`$. The dynamics of the bias $`\theta `$ is similar, except that no bias decay should be present according to consistency arguments ,
$$\frac{d\theta (t)}{dt}=\frac{1}{N}\sum _\mu g^{\prime }(x_\mu (t),y_\mu )+\eta _\theta (t).$$
(2)
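For concreteness, a zero-temperature Euler discretization of Eqs. (1)–(2) can be sketched as follows, taking the Adaline choice $`g(x,y)=-(S-x)^2/2`$ so that $`g^{\prime }(x)=S-x`$ and the dynamics descends the squared training error (the function name, step size and initialization are our own illustrative assumptions):

```python
import numpy as np

def adaline_batch(xi, S, lam=0.1, dt=0.05, steps=2000, seed=0):
    """Euler integration of the T=0 batch dynamics, Eqs. (1)-(2), with
    the Adaline choice g(x, y) = -(S - x)^2 / 2, i.e. g'(x) = S - x.

    xi: (p, N) input matrix; S: p target outputs; lam: weight decay.
    Note that the bias theta carries no decay term, as in Eq. (2)."""
    p, N = xi.shape
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)  # random initial weights
    theta = 0.0
    for _ in range(steps):
        x = xi @ J + theta              # activations of all p examples
        gprime = S - x                  # g'(x_mu) for every example
        J += dt * (xi.T @ gprime / N - lam * J)
        theta += dt * gprime.sum() / N  # no bias decay
    return J, theta
```

With $`\lambda =0`$ and $`p<N`$ the training error relaxes towards zero, since a solution space of zero training error exists.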
Our theory is the dynamical version of the cavity method . It uses a self-consistency argument to consider what happens when a new example is added to a training set. The central quantity in this method is the cavity activation, which is the activation of a new example for a perceptron trained without that example. Since the original network has no information about the new example, the cavity activation is random. Here we present the theory for $`\theta =\varphi =0`$, skipping extensions to biased perceptrons. Denoting the new example by the label 0, its cavity activation at time $`t`$ is $`h_0(t)=\stackrel{}{J}(t)\cdot \stackrel{}{\xi }^0`$. For large $`N`$, $`h_0(t)`$ is a Gaussian variable. Its covariance is given by the correlation function $`C(t,s)`$ of the weights at times $`t`$ and $`s`$, that is, $`\langle h_0(t)h_0(s)\rangle =\langle \stackrel{}{J}(t)\cdot \stackrel{}{J}(s)\rangle \equiv C(t,s)`$, where $`\xi _j^0`$ and $`\xi _k^0`$ are assumed to be independent for $`j\ne k`$. For teacher-generated examples, the distribution is further specified by the teacher-student correlation $`R(t)`$, given by $`\langle h_0(t)y_0\rangle =\langle \stackrel{}{J}(t)\cdot \stackrel{}{B}\rangle \equiv R(t)`$.
Now suppose the perceptron incorporates the new example at the batch-mode learning step at time $`s`$. Then the activation of this new example at a subsequent time $`t>s`$ will no longer be a random variable. Furthermore, the activations of the original $`p`$ examples at time $`t`$ will also be adjusted from $`\{x_\mu (t)\}`$ to $`\{x_\mu ^0(t)\}`$ because of the newcomer, which will in turn affect the evolution of the activation of example 0, giving rise to the so-called Onsager reaction effects. This makes the dynamics complex, but fortunately for large $`p\sim N`$, we can assume that the adjustment from $`x_\mu (t)`$ to $`x_\mu ^0(t)`$ is small, and linear response theory can be applied.
Suppose the weights of the original and new perceptron at time $`t`$ are $`\{J_j(t)\}`$ and $`\{J_j^0(t)\}`$ respectively. Then a perturbation of (1) yields
$`\left({\displaystyle \frac{d}{dt}}+\lambda \right)(J_j^0(t)-J_j(t))={\displaystyle \frac{1}{N}}g^{\prime }(x_0(t),y_0)\xi _j^0`$ (3)
$`+{\displaystyle \frac{1}{N}}{\displaystyle \sum _{\mu k}}\xi _j^\mu g^{\prime \prime }(x_\mu (t),y_\mu )\xi _k^\mu (J_k^0(t)-J_k(t)).`$ (4)
The first term on the right hand side describes the primary effects of adding example 0 to the training set, and is the driving term for the difference between the two perceptrons. The second term describes the many-body reactions due to the changes of the original examples caused by the added example. The equation can be solved by the Green’s function technique, yielding
$$J_j^0(t)-J_j(t)=\sum _k\int dsG_{jk}(t,s)\left(\frac{1}{N}g_0^{\prime }(s)\xi _k^0\right),$$
(5)
where $`g_0^{\prime }(s)=g^{\prime }(x_0(s),y_0)`$ and $`G_{jk}(t,s)`$ is the weight Green’s function, which describes how the effects of a perturbation propagate from weight $`J_k`$ at learning time $`s`$ to weight $`J_j`$ at a subsequent time $`t`$. In the present context, the perturbation comes from the gradient term of example 0, such that integrating over the history and summing over all nodes gives the resultant change from $`J_j(t)`$ to $`J_j^0(t)`$.
For large $`N`$ the weight Green’s function can be found by the diagrammatic approach. The result is self-averaging over the distribution of examples and is diagonal, i.e. $`\mathrm{lim}_{N\to \mathrm{\infty }}G_{jk}(t,s)=G(t,s)\delta _{jk}`$, where
$`G(t,s)=`$ $`G^{(0)}(t-s)+\alpha {\displaystyle \int dt_1dt_2G^{(0)}(t-t_1)}`$ (7)
$`\langle g_\mu ^{\prime \prime }(t_1)D_\mu (t_1,t_2)\rangle G(t_2,s).`$
$`G^{(0)}(t-s)\equiv \mathrm{\Theta }(t-s)\mathrm{exp}(-\lambda (t-s))`$ is the bare Green’s function, and $`\mathrm{\Theta }`$ is the step function. $`D_\mu (t,s)`$ is the example Green’s function given by
$$D_\mu (t,s)=\delta (t-s)+\int dt^{\prime }G(t,t^{\prime })g_\mu ^{\prime \prime }(t^{\prime })D_\mu (t^{\prime },s).$$
(8)
Our approach to the macroscopic description of the learning dynamics is to relate the activations of the examples to their cavity counterparts. Multiplying both sides of (5) by $`\xi _j^0`$ and summing over $`j`$, we get
$$x_0(t)-h_0(t)=\int dsG(t,s)g_0^{\prime }(s).$$
(9)
The activation distribution is thus related to the cavity activation distribution, which is known to be Gaussian. In turn, the covariance of this Gaussian distribution is provided by the fluctuation-response relation
$`C(t,s)=`$ $`\alpha {\displaystyle \int dt^{\prime }G^{(0)}(t-t^{\prime })\langle g_\mu ^{\prime }(t^{\prime })x_\mu (s)\rangle }`$ (11)
$`+2T{\displaystyle \int dt^{\prime }G^{(0)}(t-t^{\prime })G(s,t^{\prime })}.`$
Furthermore, for teacher-generated examples, its mean is related to the teacher-student correlation given by
$$R(t)=\alpha \int dt^{\prime }G^{(0)}(t-t^{\prime })\langle g_\mu ^{\prime }(t^{\prime })y_\mu \rangle .$$
(12)
To monitor the progress of learning, we are interested in three performance measures: (a) Training error $`ϵ_t`$, which is the probability of error for the training examples. (b) Test error $`ϵ_{test}`$, which is the probability of error when the inputs $`\xi _j^\mu `$ of the training examples are corrupted by an additive Gaussian noise of variance $`\mathrm{\Delta }^2`$. This is a relevant performance measure when the perceptron is applied to process data which are the corrupted versions of the training data. When $`\mathrm{\Delta }^2=0`$, the test error reduces to the training error. (c) Generalization error $`ϵ_g`$ for teacher-generated examples, which is the probability of error for an arbitrary input $`\xi _j`$ when the teacher and student outputs are compared.
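For sign outputs and unbiased Gaussian inputs, the generalization error has the standard closed form $`ϵ_g=\mathrm{arccos}(\rho )/\pi `$, with $`\rho `$ the normalized teacher-student overlap — a textbook relation rather than a result of this paper:

```python
import numpy as np

def generalization_error(J, B):
    """epsilon_g = arccos(rho)/pi for sign-output perceptrons, where rho
    is the cosine of the angle between student J and teacher B (valid
    for unbiased perceptrons with Gaussian inputs)."""
    rho = np.dot(J, B) / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi
```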
The cavity method can be applied to the dynamics of learning with an arbitrary cost function. When it is applied to the Hebb rule, it yields results identical to . Here for illustration, we present the results for the Adaline rule. This is a common learning rule and bears resemblance to the more common back-propagation rule. Theoretically, its dynamics is particularly convenient for analysis since $`g^{\prime \prime }(x)=-1`$, rendering the weight Green’s function time-translation invariant, i.e. $`G(t,s)=G(t-s)`$. In this case, the dynamics can be solved by Laplace transform, and the cavity approach facilitates a deeper understanding than previous studies. Illustrative results are summarized with respect to the following aspects:
1) Overtraining of $`ϵ_g`$: As shown in Fig. 1, $`ϵ_g`$ decreases at the initial stage of learning. However, for sufficiently weak weight decay, it attains a minimum at a finite learning time before reaching a higher steady-state value. This is called overtraining since at the later stage of learning, the perceptron is focusing too much on the specific details of the training set. In this case $`ϵ_g`$ can be optimized by early stopping, i.e. terminating the learning process before it reaches the steady state. Similar behavior is observed in linear perceptrons .
This phenomenon can be controlled by tuning the weight decay $`\lambda `$. The physical picture is that the perceptron with minimum $`ϵ_g`$ corresponds to a point with magnitude $`|\stackrel{}{J}^{\ast }|`$. When $`\lambda `$ is too strong, $`|\stackrel{}{J}|`$ never reaches this magnitude and $`ϵ_g`$ saturates at a suboptimal value. On the other hand, when $`\lambda `$ is too weak, $`|\stackrel{}{J}|`$ grows with learning time and is able to pass near the optimal point during its learning history. Hence the weight decay $`\lambda _{ot}`$ for the onset of overtraining is closely related to the optimal weight decay $`\lambda _{opt}`$ at which the steady-state $`ϵ_g`$ is minimum. Indeed, at $`T=0`$ and for all values of $`\alpha `$, $`\lambda _{ot}=\lambda _{opt}=\pi /2-1`$; the coincidence of $`\lambda _{ot}`$ and $`\lambda _{opt}`$ was also observed previously . Early stopping for $`\lambda <\lambda _{ot}=\lambda _{opt}`$ can speed up the learning process, but cannot outperform the optimal result at the steady state. A recent empirical observation confirms that a careful control of the weight decay may be better than early stopping in optimizing generalization .
At nonzero temperatures, we find the new result that $`\lambda _{ot}`$ and $`\lambda _{opt}`$ may become different. While various scenarios are possible, here we only mention the case of sufficiently large $`\alpha `$. As shown in the inset of Fig. 1, $`\lambda _{opt}`$ lies inside the region of overtraining, implying that even the best steady-state $`ϵ_g`$ is outperformed by some point during its own learning history. This means the optimal $`ϵ_g`$ can only be attained by tuning both the weight decay and learning time. However, at least in the present case, computational results show that the improvement is marginal.
2) Overtraining of $`ϵ_{test}`$: This is best understood by considering the effects of tuning the input noise from zero, when $`ϵ_{test}`$ starts to increase from $`ϵ_t`$. At the steady state $`ϵ_t`$ is optimized by $`\lambda =0`$ for $`\alpha <1`$, and by a relatively small $`\lambda >0`$ for $`\alpha >1`$. This means that $`ϵ_{test}`$ is optimized with no or only little concern about the magnitude of $`J^2`$. However, when input noise is introduced, it adds a Gaussian noise of variance $`\mathrm{\Delta }^2J^2`$ to the activation distribution. The optimization of $`ϵ_{test}`$ now involves minimizing the error of the training set without using an excessively large $`J^2`$. Thus the role of weight decay becomes important. Indeed, at $`T=0`$, $`\lambda _{opt}=\alpha \mathrm{\Delta }^2`$ for random examples, whereas $`\lambda _{opt}\sim \mathrm{\Delta }^2`$ approximately for teacher-generated examples. This illustrates how the environment in anticipated applications, i.e. the level of input noise, affects the optimal choice of perceptron parameters.
Analogous to the dynamics of $`ϵ_g`$, overtraining can occur when a sufficiently weak $`\lambda `$ allows $`\stackrel{}{J}`$ to pass near the optimal point during its learning history. Indeed, at $`T=0`$ the onset of overtraining is given by $`\lambda _{ot}=\lambda _{opt}`$ for random examples, whereas $`\lambda _{ot}\approx \lambda _{opt}`$ for teacher-generated examples. At nonzero temperatures, $`\lambda _{ot}`$ and $`\lambda _{opt}`$ become increasingly distinct, and for sufficiently large $`\alpha `$, $`\lambda _{opt}<\lambda _{ot}`$ as shown in the inset of Fig. 2, so that the optimal $`ϵ_{test}`$ can only be attained by tuning both the weight decay and learning time.
3) Average dynamics: When learning has reached the steady state, the dynamical variables fluctuate about their temporal averages because of thermal noise. If we consider a perceptron constructed using the thermally averaged weights $`\langle J_j\rangle _{th}`$, we can then prove that it is equivalent to the perceptron obtained at $`T=0`$. This equivalence implies that for perceptrons with thermal noise, the training and generalization errors can be reduced by temporal averaging down to those at $`T=0`$.
We can further compute the performance improvement as a function of the duration $`\tau `$ of the monitoring period for thermal averaging, as confirmed by simulations in Fig. 2. Note that the Green’s function is a superposition of relaxation modes $`\mathrm{exp}(-kt)`$ whose rate $`k`$ lies in the range $`k_{min}\le k\le k_{max}`$, where $`k_{max}`$ and $`k_{min}`$ are $`\lambda +(\sqrt{\alpha }\pm 1)^2`$ respectively. For $`\alpha <1`$, there is an additional relaxation mode with rate $`\lambda `$, which describes the relaxation by weight decay inside the $`N-p`$ dimensional solution space of zero training error. Hence the monitoring period scales as $`k_{min}^{-1}`$ for $`\alpha >1`$, and $`\lambda ^{-1}`$ for $`\alpha <1`$. Note that this time scale diverges for vanishing weight decay at $`\alpha <1`$. The time scale for thermal averaging agrees with the relaxation time proposed for asymptotic dynamics in .
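The quoted band of rates can be checked directly: for the Adaline cost the linearized dynamics relax with rates given by $`\lambda `$ plus the eigenvalues of the input correlation matrix $`\xi ^T\xi /N`$, which for large $`N`$ fill $`[(\sqrt{\alpha }-1)^2,(\sqrt{\alpha }+1)^2]`$ (a numerical sketch under our reading of the model, not code from the paper):

```python
import numpy as np

def relaxation_band(alpha, lam):
    """Edges k_min, k_max = lam + (sqrt(alpha) -/+ 1)^2 of the band of
    relaxation rates; for alpha < 1 an extra mode at rate lam appears
    (weight decay inside the zero-training-error solution space)."""
    kmin = lam + (np.sqrt(alpha) - 1.0) ** 2
    kmax = lam + (np.sqrt(alpha) + 1.0) ** 2
    return kmin, kmax

# Monte Carlo check: the linearized Adaline rates are the eigenvalues
# of lam * I + xi^T xi / N for a random (p, N) input matrix xi.
rng = np.random.default_rng(0)
N, alpha, lam = 500, 2.0, 0.1
xi = rng.normal(size=(int(alpha * N), N))
rates = lam + np.linalg.eigvalsh(xi.T @ xi / N)
kmin, kmax = relaxation_band(alpha, lam)
```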
We remark that the relaxation time for steady-state dynamics may not be the same as the convergence time for learning in the transient regime. For example, for $`\alpha <1`$ and vanishing weight decay at $`T=0`$, significant reduction of $`ϵ_t`$ takes place on a time scale independent of $`\lambda `$, since the dynamics is dominated by a growth of the projection onto the solution space of zero training error. On the other hand, the asymptotic relaxation time diverges as $`\lambda ^{-1}`$.
4) Dynamics of the bias: For biased perceptrons, $`\theta (t)`$ approaches the steady-state value $`\mathrm{erf}(\varphi /\sqrt{2})`$. (The failure of the student to learn the teacher bias is due to the inadequacy of Adaline rule, and will be absent in other learning rules such as back-propagation.)
The absence of bias decay modifies the dynamics of learning. For $`\lambda <\sqrt{\alpha }-1`$, $`\theta (t)`$ consists of relaxation modes with rates $`k`$ in the range $`k_{min}\le k\le k_{max}`$, as in the evolution of the weights. Hence the weights and the bias learn at the same rate, and convergence is limited by the rate $`k_{min}`$. However, for $`\lambda >\sqrt{\alpha }-1`$, $`\theta (t)`$ has an additional relaxation mode with rate $`\stackrel{~}{\lambda }=\alpha \lambda /(1+\lambda )`$. Since $`\stackrel{~}{\lambda }<k_{min}`$, the bias learns slower than the weights, and convergence is limited by the rate $`\stackrel{~}{\lambda }`$, as illustrated in Fig. 3 which compares the evolution of the weight overlap $`R(t)`$ and $`\theta (t)`$. If faster convergence is desired, the learning rate of the bias has to be increased.
In summary, we have introduced a general framework for modeling the dynamics of learning based on the cavity method, which is much more versatile than existing theories. It allows us to reach useful conclusions about overtraining and early stopping, input noise and temperature effects, transient and average dynamics, and the convergence of bias and weights. We consider the present work as only the beginning of a new area of study. Many interesting and challenging issues remain to be explored. For example, it is interesting to generalize the method to dynamics with discrete learning steps of finite learning rates. Furthermore, the theory can be extended to multilayer networks.
This work was supported by the Research Grant Council of Hong Kong (HKUST6130/97P).
## 1 Introduction
Observational evidence suggests that much of the on-going star formation in the young universe takes place in a heavily obscured ISM and therefore must be hidden from extragalactic optical and IR surveys. Hence the transparent view of the high-$`z`$ universe provided by submm–mm wavelength observations ($`\lambda \sim 200`$–$`3000\mu `$m), which are insensitive to the obscuring effects of dust, and the strength of the negative k-correction, which enhances the observed submm–mm fluxes of starburst galaxies by factors of 3–10 at $`z>1`$ , offer obvious advantages.
The opportunity to conduct cosmological observations at submm–mm wavelengths has been realised in the last few years with the successful development and commissioning of sensitive bolometer arrays (e.g. SCUBA-I, SHARC-I, BoloCam-I, MPIfR 37-channel). These arrays, which will be upgraded within 1–2 years, operate on the largest submm and mm telescopes (15-m JCMT, 10-m CSO, 30-m IRAM).
## 2 Submillimetre Cosmological Surveys
By early 2001 the first series of extragalactic SCUBA (850 $`\mu `$m) surveys will be completed, covering areas of 0.002–0.12 deg<sup>2</sup> with respective $`3\sigma `$ depths in the range $`1.5\mathrm{mJy}<\mathrm{S}_\nu <8\mathrm{mJy}`$ , , , , . The evolution of the high-$`z`$ starburst galaxy population can be determined from an accurate measure of the integral submm source-counts, the FIR luminosities, star formation rates (SFRs) and redshift distribution of the submm selected galaxies. The contribution of the submm sources to the total FIR - mm background places an additional constraint on the competing models. To ensure these submm data are fully exploited, the current SCUBA surveys have been restricted to fields that have been extensively studied at other wavelengths (X-ray, optical, IR and radio) and have yielded the following preliminary results:
* The faint submm source-counts at 850$`\mu `$m are reasonably well determined between 1–10 mJy and significantly exceed a no-evolution model, requiring roughly $`(1+z)^3`$ luminosity evolution out to $`z\sim 2`$, but with poor constraints at higher redshifts (fig. 1). The submm background measured by COBE requires that the SCUBA source-counts must converge at $`\mathrm{S}_{850\mu \mathrm{m}}\sim 0.5`$ mJy. Approximately 30–50% of the submm background has been resolved into individual galaxies with flux densities $`S_{850\mu \mathrm{m}}>2`$ mJy.
* Submm sources generally appear to be associated with $`z>1`$ galaxies, although it is not clear whether they necessarily have optical, IR and radio counterparts. There is still much debate about the fraction of submm sources at $`z\sim 2`$, and the fraction of submm sources that contain an AGN.
* At high-redshift ($`2<z<4`$) the sub-mm surveys appear to find $`\sim 5`$ times the star formation rate observed in the optical surveys, although the effects of dust obscuration and incompleteness in the optical are still uncertain.
### 2.1 Limitations on an understanding of high-$`z`$ galaxy evolution
Despite the success of the first SCUBA surveys, we can identify the deficiencies in the submm data which prevent a more accurate understanding of the star-formation history of high-$`z`$ galaxies. This paper summarises these deficiencies and outlines the future observations which will alleviate the following problems.
2.1.1 Poorly constrained evolutionary models. To improve the constraints on the competing evolutionary models provided by the current submm source-counts, it is necessary to (1) extend the restricted wavelength range of the surveys, (2) increase the range of the flux densities over which accurate source-counts are measured and (3) increase the number of sources detected at a given flux level by surveying greater areas.
Ground-based surveys at mm wavelengths can take advantage of a more stable and transparent atmosphere, which will provide increased available integration time (to gain deeper survey sensitivity or greater survey area) and increased flux-calibration accuracy. Future surveys with more sensitive and larger-format arrays (e.g. BoloCam) operating at 200$`\mu `$m – 3 mm on airborne and ground-based telescopes will allow significantly greater areas to be covered (hence more sources detected) and will increase the range of the flux densities over which sources are detected (fig 1.). The deepest surveys to date are still only sensitive to high-$`z`$ galaxies with SFRs comparable to the most luminous local ULIRGs ($`\sim 200M_{\odot }\mathrm{yr}^{-1}`$). Furthermore, conducting surveys with larger diameter telescopes (e.g. 50-m Gran Telescopio Milimetrico (LMT), 100-m GBT) will reduce the beam-size, hence decrease the depth of the confusion limit (allowing deeper surveys), and improve the positional accuracy of detected sources.
2.1.2 Ambiguity in the optical counterparts & redshifts of submm galaxies. The current SCUBA surveys (with $`\sim 15^{\prime \prime }`$ resolution at 850$`\mu `$m) are struggling to unambiguously identify the submm sources with their optical/IR/radio counterparts. Hence the redshift distribution and luminosities of the submm sources are still uncertain. This results directly from the submm positional errors of $`2`$–$`3^{\prime \prime }`$ that are typical for even the highest S/N submm detections, and from the lack of submm data measuring the redshifted FIR spectral peak at 200–450 $`\mu `$m.
The positions of the brightest SCUBA sources ($`S_{850\mu m}>8`$ mJy) can be improved with mm-interferometric observations. However our IRAM Plateau de Bure follow-up of the brightest source in the Hubble Deep Field has demonstrated that even with $`\sim 2^{\prime \prime }`$ resolution and sub-arcsec positional errors, ambiguous optical identifications, and hence ambiguous redshifts, remain . It should be no surprise that submm selected galaxies, including those with mm-interferometric detections, do not always have optical counterparts, since high-$`z`$ galaxies observed in the earliest stages of formation may be heavily obscured by dust. Indeed this is the most compelling reason for conducting the submm surveys in the first instance, and therefore searches for the counterparts may be more successful at near-infrared wavelengths. This was recently demonstrated by Smail et al. who took deep near-IR ($`2\mu `$m) images of two lensed clusters previously observed by SCUBA . The original counterparts were identified as bright low-redshift ($`z\sim 0.4`$) galaxies 5–10 arcsecs distant from the submm sources. However the new IR images revealed two high-$`z`$ ($`z>2`$) IR galaxies, with no optical counterparts, within 2–3 arcsecs of the SCUBA sources. The obvious consequence of these misidentifications is an inaccurate determination of the star-formation history of high-$`z`$ starburst galaxies.
The uncertainty in the redshift distribution of submm-selected galaxies can be significantly reduced by measuring the mid-IR to radio SEDs of the individual sources. The power of using mid-IR to radio flux ratios (e.g. 15/850$`\mu `$m, 450/850$`\mu `$m, 850$`\mu `$m/1.4 GHz) as a crude measure of redshift was demonstrated by Hughes et al. during the SCUBA survey of the Hubble Deep Field and has since been described elsewhere , . Given sufficient sensitivity, the mid-IR–submm–radio colours of a submm source can discriminate between optical/IR counterparts which are equally probable on positional grounds alone, but which have significantly different redshifts, $`\delta z\sim 2`$ (fig. 2). This important technique, and the necessity for sensitive short-submm data (200–500$`\mu `$m) measuring the rest-frame FIR SEDs of the individual high-$`z`$ submm galaxies, without which it remains impossible to constrain their bolometric luminosities and SFRs, provide the major scientific justifications behind BLAST, a possible future long-duration Balloon-borne Large Aperture Submm Telescope (P.I. M. Devlin, University of Pennsylvania).
Alternative BLAST 300$`\mu `$m surveys based on the model shown in fig 1 are described in Table 1. Assuming a 3$`\sigma `$ 300$`\mu `$m confusion limit of $`20`$–$`30`$ mJy, a single 6 hour BLAST test-flight survey can follow-up the widest of the current SCUBA surveys, detecting all $`>\mathrm{\hspace{0.17em}4}\sigma `$ SCUBA sources ($`\sim 100`$ sources with $`S_{850\mu \mathrm{m}}>\mathrm{\hspace{0.17em}10}`$ mJy) in a 0.24 sq. deg. survey at all redshifts $`<3`$. Non-detections at 300$`\mu `$m imply $`z>3`$. Increasing the primary aperture of BLAST to 2.7-m and conducting a 50-hour survey during a long-duration balloon flight significantly increases the survey area and the number of sources detected to $`>1500`$, a number comparable to that detected by BoloCam in a future 50-hour 0.45 sq. degree 1100$`\mu `$m survey with the GTM/LMT.
An accurate determination of the redshift distribution of submm selected galaxies will ultimately be achieved through the measurement of mm-wavelength <sup>12</sup>CO spectral-line redshifts, without recourse to having first identified the correct optical or IR counterparts. This “CO redshift machine” requires a large instantaneous bandwidth ($`\mathrm{\Delta }\nu \sim 30`$–$`40`$ GHz) to take advantage of the reduced separation of adjacent mm-wavelength <sup>12</sup>CO transitions in the high-$`z`$ universe, $`\delta \nu _{J,J-1}\sim 115/(1+z)`$ GHz. Hence at redshifts $`>2`$ any adjacent pair of <sup>12</sup>CO lines is separated by $`<40`$ GHz, the frequency separation defining the precise redshift of the galaxy. The combination of data from BLAST, JCMT, CSO and the GTM/LMT will efficiently pre-select from submm surveys, using their FIR–mm colours, those galaxies with sufficiently high (but still unknown) redshifts that are suitable targets to follow-up with a “CO redshift machine”.
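The bandwidth argument is simple arithmetic on the CO rotational ladder, whose $`J=1`$–0 rest frequency is 115.27 GHz (a standard value, not quoted in this text):

```python
CO_10_GHZ = 115.27  # rest frequency of the 12CO(1-0) transition

def co_line_separation(z):
    """Observed spacing (GHz) of adjacent redshifted 12CO lines,
    delta_nu = 115.27/(1+z); the J -> J-1 ladder is evenly spaced."""
    return CO_10_GHZ / (1.0 + z)
```

At $`z=2`$ the spacing is about 38 GHz, so an instantaneous bandwidth of 40 GHz always covers at least one adjacent line pair for $`z>2`$; the measured spacing then fixes $`z`$.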
no-problem/9909/astro-ph9909466.html
# Gravitational Lensing: Recent Progress and Future Goals – Conference Summary
## 1 Confession
Predicting whether a conference will prove exciting or ho-hum has always been difficult for me. So it is with especially deep admiration that I congratulate the Scientific Organizing Committee on the superb job they have done (at least until this point) in selecting speakers for this meeting. We have been treated to an excellent smorgasbord of reviews, background talks and exciting new results. But on second thought, perhaps the SOC did not have so demanding a job after all. The poster presentations have been so outstanding, with such very high signal to noise, that a more or less random selection from among the submitted abstracts might have produced an equally good set of talks.
## 2 Question
This said, let me pose a question which may seem churlish: why did we have this meeting? After all, the cost of such a gathering is very considerable, especially in person hours spent preparing for it and in actual attendance, but also in the cost of travel and discomfort – including circling over Logan airport for several hours, being diverted to Hartford, New York or Washington, and being drenched in downpours.
We’ve heard ourselves referred to by several of our speakers as “the lensing community.” If there really is such an entity, it is one that spontaneously fragments into constituencies. There is a microlensing community and there are strong and weak lensing communities, with the latter again split into those who study weak lensing by identifiable objects and those who study the properties of the potential field with little regard for the objects responsible for it.
Were you to look at a list of astronomical meetings in any given year, you would find that they fall into two broad categories. The majority of meetings are organized around some class of astronomical phenomenon: peculiar A stars, high redshift galaxies, molecular clouds. But many meetings are organized around specific techniques rather than phenomena: long baseline interferometry, adaptive optics, TeV astronomy. Typically these are in areas where the technology is new or rapidly changing. And so there is a second question related to my first question. Is this meeting a phenomenological meeting or is it a technological meeting?
The list of the phenomena covered by participants at this meeting is very long. One is impressed at how large a fraction of the astronomical universe has been discussed: planets, stellar surfaces, quasars and their hosts, the microwave background. Nobody would ever organize a meeting with so wide a range of topics; should we conclude, by elimination, that this is a technique meeting?
In addressing this question it is helpful to look at the classic gravitational lensing diagram, the variants of which Virginia Trimble traced back over two hundred years. The figure has three sections – the source end, the observer end, and the middle, where the lensing takes place. Phenomenon oriented meetings are usually concerned with the source end of the diagram. Technique oriented meetings are usually concerned with the observer end of the diagram. What makes this meeting unusual, distinguishing it from most other meetings, is that the principal interest of most contributors at this meeting has been neither at the observer end of the diagram nor at the source end but the machinery in the middle. We would seem to be in a class by ourselves (though not quite if we count students of the interstellar medium).
The subject of our meeting goes by three different names, each of which carries somewhat different connotations. In French it is called “mirage gravitationnel,” which tends to emphasize the experience of the observer. Some of our speakers describe their use of “gravitational telescopes.” For them the effect is simply a tool which gives them more photons or higher resolution from an otherwise faint or small source. The term “gravitational lens” emphasizes the intermediate optics rather than the astronomical sources or the observer.
If I were to characterize where our contributors’ interests lie, I would guess that 25% are primarily interested in the sources and 15% are primarily interested in the detectors and analysis techniques with the remaining 60% interested in the lenses themselves, though of course there’s scarcely a person in this room who isn’t interested in all three.
The reason for interest in the lenses is manifest – observations of the deflections, distortions and delays introduced by lensing permit one to measure the masses of intervening objects. The circumstances under which astronomers can measure masses are so rare that they jump at the opportunity.
The history of the measurement of masses of clusters of galaxies drives this point home. Zwicky and Smith were the first to measure the mass of a cluster, but the answer they found was so far from the expectations of the day that the astronomical community chose to ignore it. By the mid-1970s measurements of X-ray gas profiles and temperatures more or less confirmed the optical velocity dispersion measurements, but the astronomical community was still in a state of denial about the consequences. Doubts and suspicions have lingered into the present, so people have seized upon gravitational lensing as a means of resolving the issue.
Our classic lensing diagram, as drawn here, is grossly exaggerated, and represents a rather bad case of wishful thinking (something to which we, as astronomers, are in no way immune). Our figure pretends to be a case of strong imaging. The widest separations, of order 1 arcminute – a third of a milliradian – are produced by clusters of galaxies. Strong lensing by galaxies produces deflections of several microradians. Microlensing within the Local Group produces deflections of nanoradians, and microlensing on cosmological scales gives deflections measured in picoradians. Even for the largest deflections we consider, this diagram collapses to a line if one tries to draw it to scale. The exceedingly small solid angle of influence of the lenses we seek to study drives us to extremes in terms of photometric accuracy, astrometric precision and in numbers of objects needed to produce the wanted effects. In some cases that quest borders on the quixotic. The fact that so many of us are willing to undertake such efforts is testimony to how important we believe the results might be.
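The hierarchy of deflection angles quoted above is a one-line unit conversion; the values for the smaller-scale regimes below are order-of-magnitude placeholders taken from the text, not measurements:

```python
import math

# Widest cluster-lensing image separations: of order one arcminute.
arcmin = math.radians(1.0 / 60.0)
print(f"1 arcmin = {arcmin:.2e} rad")  # ~2.9e-4 rad: "a third of a milliradian"

# Order-of-magnitude deflections for the other regimes mentioned in the text.
galaxy_strong = 5e-6        # several microradians (galaxy strong lensing)
local_group_micro = 1e-9    # nanoradians (Local Group microlensing)
cosmological_micro = 1e-12  # picoradians (cosmological microlensing)
# Even the largest of these is under 0.03% of a radian, so the classic
# lensing diagram collapses to a straight line if drawn to scale.
```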
I will use our classic diagram in reviewing what we’ve heard and seen in the last five days, grouping together results at the observer end, results at the source end and results in the middle.
## 3 The Observer
The lensing community can take considerable pride in the extent to which some of our members have led the larger astronomical community in experimental techniques. Chief among these has been the development of large-format CCD detectors. When Tony Tyson first undertook measurements of galaxy-galaxy lensing in the early 1980s, he used photographic plates. We have all had a good laugh at the old-fashioned darkroom timer that was called out of retirement to keep our speakers in line. But remember that well into the 1990s, the photographic plate, despite its 1% quantum efficiency and its horribly non-linear response, has remained a valuable tool in our field because of the small size of solid state detectors. The MACHO and EROS groups, Tyson and Bernstein, and more recently Gerry Luppino and his group have been world leaders in constructing large area solid state imagers.
A number of gravitational lens programs have been very large in scale, requiring a degree of organization and coordination rarely seen in astronomy. The MACHO and EROS collaborations, in particular, have brought the culture of particle physics to ground based astronomy. The CLASS collaboration (Myers), the MIT surveys (Winn) and the ACT effort (Prouton) have brought a new style to radio observations as well, with radio telescopes spending almost as much time slewing between objects as observing.
I wish I could say that optical astronomers have done as good a job as radio astronomers in searching for new lenses. Strong lensers (myself among them) have not been as effective in marshalling the resources necessary. There is a crying need for wide field optical telescopes of moderate size with silicon focal planes to carry out survey work. On a more positive note, the PLANET and G-MAN collaborations have been spectacularly successful in assembling the instruments necessary to carry out round the clock monitoring of exotic lensing events.
Lensers have also led the way in software. FOCAS was an early effort on the part of Tyson and his collaborators (Jarvis et al. 1981) to deal with unprecedentedly large numbers of images. The nearly total automation of the MACHO project may not seem particularly noteworthy to radio or X-ray astronomers, but it is quite remarkable among ground-based optical efforts. The OGLE program, we are told, is automated to the point where a program field is specified and the reduced data are emailed to a designated recipient. Image differencing is another development which, while straightforward in principle, has only now been made to work, and which promises major improvements in sensitivity. It is remarkable that Crotts and his collaborators and now Alard and Lupton (1998) have been able to press to the photon limit.
Alas one must worry not only about photon statistics but also systematic errors. The efforts of the various weak lensing groups to remove myriad sources of systematic image distortion have been nothing short of heroic. Chris Fassnacht and Leon Koopmans have likewise pushed the envelope in their exceedingly accurate radiometric measurements. We have also seen extraordinarily high dynamic range measurements in the ring of B0218+357 (Biggs) which will help in its modelling. The UH optical astronomers are the first I know of who have dared to show the Hubble Deep Field image side by side with their own. Theirs may be somewhat less deep, but it is certainly very much wider, and that is clearly what we need for weak lensing.
Some measure of the excitement generated by the phenomenon of gravitational lensing can be had by noting the prominent role it plays in the justification for many of the major projects now being evaluated by the US National Research Council’s decennial survey of astronomy. We have heard lensing invoked as a justification for NGST, for an 8m ground based “dark matter telescope”, for the VLA+ upgrade and for the Square Kilometer Array. Lensing likewise figures prominently in the programs for the Advanced Camera for Surveys on HST, Chandra, XMM, SIM and Planck.
## 4 Sources
Lensing has provided data about sources which could not otherwise have been obtained. Hans-Walter Rix has shown us that the hosts of high redshift quasars are surprisingly easy to see if one uses lensing to boost the resolution of NICMOS. It is not atypical to find that the increased resolution produced by a lens is more important than the increased photon count.
A number of speakers and presenters have shown us how microlensing can be used to set limits on the sizes of quasar components (Agol; Yonehara) both in the optical and in the radio. Some of our theorists have outlined how one might use a caustic moving across a quasar to study the structure of quasar accretion disks.
We have heard from Penny Sackett and others about how microlensing can be used to study the surfaces of stars, and in particular to check models of limb darkening and for the presence of starspots. Such stars have diameters (if I heard correctly) of 100 nanoseconds.
Bob Nemiroff told us how gravitational lenses give us information about otherwise elusive gamma ray bursts. It should be noted that one of the two subclasses of gamma ray bursts, the short ones, have not yet had host galaxies measured. Lensing may therefore provide the only limit on the redshifts of these objects, albeit a weak one.
And we have heard, en passant, about how gravitational lensing has twice given us the record holding high redshift galaxies, first in a CNOC cluster (Yee et al. 1996) and then in 1358+62 (Franx et al. 1997).
It is notable that we have not had a talk about the use of gravitational telescopes to improve the spatial resolution of the submillimeter bolometer array (SCUBA) on the JCMT in the study of dusty galaxies at high redshift. Some of the best work in that field has been done using lensing (e.g. Smail et al. 1997), and the people who did it have in years past been active participants at gravitational lens meetings. I doubt that our SOC slighted this work; rather, I suspect that these individuals treat gravitational telescopes as just another weapon in the astronomical armory, and that they feel their time is more wisely spent going to meetings where the high redshift universe is the principal focus.
## 5 The Lenses
The majority of the papers at this meeting have emphasized neither the sources nor the observing and detection but the deflection, distortion and delay of light by intervening masses. Until now the masses of astronomical objects have been measured by observing the bound orbits of gas, stars or galaxies in gravitational potentials. Now we study mass distributions by studying the unbound orbits of photons. Until now we have relied on there being stars, gas or galaxies present to study potentials. But now we know, at least on average, that we can count on a certain number of background sources to be lensed by the foreground objects we wish to study.
Lensing might not be quite so interesting were the universe not pervaded by dark matter. We suspect that 90% or more of the matter in the universe is non-baryonic, and that a major fraction of the baryonic matter may itself be dark (though not quite so totally and unrelentingly dark as the non-baryonic stuff). It is with a combination of embarrassment and frustration that we explain to people outside astronomy that we cannot observe 90% of the universe. Gravitational lensing offers us a chance to redeem ourselves.
What our friends outside astronomy don’t know is that luminous matter is at best a treacherous tracer of dark matter. We know that light fails to trace mass in the Milky Way and other galaxies, and we suspect that galaxies may fail to trace light in bound clusters of galaxies and in yet larger structures in the universe. So we are driven to gravitational lensing as the most reliable means of studying the distribution of dark matter.
There has been spirited discussion of the observation of gravitational microlensing toward the Magellanic clouds and its implication for the composition of the dark halo of the Milky Way. While the MACHO collaboration has argued that most of these events arise from compact objects in the galactic halo (Alcock et al. 1997), we have heard forceful arguments for self lensing by the LMC (Evans). Given theorists’ creativity in coming up with models, the issue is likely to be settled only with a very much larger set of events than we presently have. However the question is resolved, we will have learned an enormous amount from the microlensing searches.
Another subject which generated fascinating discussion was the gravitational potentials of galaxies for which time delays have been measured. There has been superb progress on the observational front. At the Melbourne IAU symposium (Kochanek and Hewitt 1996) even the time delay for B0957+561 was a matter of contention. Today there are 8 systems with measured time delays, with two of those delays reported for the first time at this meeting. This is the result of prodigious, painstaking effort on the part of radio and optical observers. It is easy to forget that a set of 50 data points demands 50 times the effort (perhaps even more, given the spacing requirements) of a single data point. The first reported delays for RX J0911+0551 and CLASS B1600+434, from data obtained by Ingunn Burud and collaborators with the NOT, were breathtaking. Tommy Wicklind’s confirmation of the time delay for PKS B1830-211 using single dish molecular absorption spectra was another tour de force.
There are several schools regarding the interpretation of time delays. There are those who choose parameterized models for potentials (Bernstein; Chae) and those who despair of adequate parameterization and instead adopt a non-parametric approach (Saha; Williams). There are those who insist that every detail of the gravitational potential (most importantly its logarithmic slope) must be measured from the lens itself. On the other hand are those who are willing to bring their knowledge of the dynamics of other galaxies to bear on the problem. The former are the perfectionists, members of the “golden lens” school. The latter are the compromisers, members of the “warts and all” school. As a member of the latter, I will exercise my prerogative as summarizer and note that if one adopts a simple model and observes a small scatter in the derived values of the Hubble constant, one might not be making so large an error in transferring one’s hard won knowledge of galaxy dynamics to galaxies for which the dynamics are nearly impossible to measure. In this regard my reaction to Liliya Williams’ non-parametric models was exactly the opposite of Roger Blandford’s. Where he drew the conclusion that the Hubble constant was hopelessly uncertain, I was pleased to see how little the Hubble constant depended on anything except the logarithmic slope of the potential, a result also emphasized by Olaf Wucknitz.
In his review, Ed Turner opined that lenses now give the best value of the Hubble constant. Considering the care that has gone into the HST Cepheid Key Project, especially in estimating their error budget, I don’t think we are yet in a position to claim this particular piece of high ground. But if we see redshifts for RX J0911+0551 and HE B1104–1805, and if in another year the present time delay results don’t change dramatically, it might be that even unbiassed observers (creatures rarer than unicorns) would agree with him.
Both those of the “golden lens” school and those of the “warts and all” school agree that many new lenses are needed. Survey work is the sine qua non of astronomy. CLASS (Browne; Myers; Rusin) has been gloriously successful in producing new cases, including two of those for which we now have delays. Optical searches have until now lagged behind, especially when one folds in the fact that 90% of quasars are radio quiet. The Sloan telescope in the north (Pindor) and the VST in the south may go part way toward redressing this imbalance, but for reasons which are in no way fundamental (e.g. pixel size, programmatic constraints) neither is ideally suited to the task of finding strong lenses.
The strong lenses have also given us a picture of the luminosity evolution of early type systems which is completely independent of the work done in clusters of galaxies. It is amazing that the results reported by Kochanek, determining parameters for the so-called “fundamental” plane using lensing galaxies, agree as well as they do with results obtained for clusters using conventional methods. Who would have thought that galaxies selected by mass would agree as well as they do with galaxies selected by light?
Brian McLeod spoke about the non-gravitational aspects of propagation of multiply imaged quasar light through lens galaxies, giving us a unique handle on the properties of the ISM at high redshift. In the course of that he was able to show that, for whatever mysterious reason, lensing has helped us to find two of the intrinsically reddest quasars known in the universe.
It must be remembered that strong gravitational lenses are poorly designed and, moreover, fabricated from inferior materials. The lens material typically exhibits huge variations in its index of refraction due to the substantial percentage of its mass in stars and MACHOs. The stars must introduce microlensing even if MACHOs do not. Here again we’ve begun to address questions which I would not have thought possible. While there is a near degeneracy between the rms mass of the microlenses and the fraction of the intervening mass in condensed objects, there is hope for separating these two effects in the higher order statistics of light curves. One need only remember that Sjur Refsdal’s two curves, one based on a peak and the other based on a plateau, did not have the same shape in his log-log “exclusion” diagram. Koopmans’ results on CLASS B1600+434 are all the more fascinating for being a case of observation not yet confirmed by theory. While microlensing is noise to those who wish to measure time delays, perhaps we must count ourselves lucky that at least some of our lenses suffer from it.
The developments in galaxy-galaxy lensing have been very encouraging. Several groups have described their efforts (Brainerd; Casertano; Fischer; Smith) and, quite remarkably, they all agree with each other. We still haven’t seen the cutoff expected in our isothermal sphere models and Hank Hoekstra’s result for groups leads me to suspect that we may never see one. But there are other things to be tried, including testing of the isothermality hypothesis. Phil Fischer showed that variations in the Sloan survey PSF were not so malignant as to swamp the galaxy-galaxy lensing signal.
Probably only at meetings on adaptive optics do point spread functions receive more attention than they did at this one. Hans-Walter Rix described the NICMOS PSF as one that only a mother could love. I suspect these words will find new application as people analyze the data obtained with new generations of wide field cameras now coming on line.
Weak lensing observations of clusters have moved from the regime of marginal detection to that of serious astrophysical tool. Nick Kaiser has shown us that there is surprisingly little radial bias in the luminosity profiles of clusters of galaxies – this from the man whose name is most closely associated with the concept. It’s far too soon to accede on this point – there are troubling differences between lensing results and those obtained from optical and X-ray data. I wonder whether we shouldn’t introduce a few weak lensing “standards”, in the same way we have adopted photometric standards, to ensure that everyone is on the same system. A point that was made many times, which may nonetheless have failed to penetrate the stubbornest of listeners, is that “seeing is everything.”
With the successful launch of Chandra and the promise of XMM and several CMB imagers, we have the prospect of comparing 4 different estimators of mass and substructure in clusters of galaxies. A crucial issue in this regard is the boundary between galaxy and cluster – where the galaxy ends and the cluster begins. Priya Natarajan’s results have whetted our appetite for further investigations of this sort.
On the scales so large that structure is still linear or weakly non-linear, scales on which one must study fields rather than objects, we have heard about mean polarizations (Wilson) and polarization correlations (Wittmann) and aperture masses (Schneider) as alternative vehicles for studying the power spectrum of mass fluctuations. The complementarity (a word that brings down the duck with $100) of weak lensing results and CMB measurements has been repeatedly emphasized as has the point that these probe large scale structure at different epochs (Jain; Seljak). It is a measure of how exciting these prospects are that people are willing to undertake huge programs of extraordinarily delicate measurement. The signal may already have been seen but the uncertainties, almost all of them systematic, are as yet too poorly understood for firm conclusions to be drawn.
Next there is the small matter of the universe itself. In addition to the dimensioned parameter $`H_0`$, lensing can in principle tell us about dimensionless parameters, the cosmological density parameter $`\mathrm{\Omega }_m`$, and the dimensionless version of the cosmological constant (or the vacuum energy density), $`\mathrm{\Omega }_\mathrm{\Lambda }`$. There are several approaches to measuring these. The largest effect involves the numbers of lensed systems (Helbig), but as yet the luminosity functions for unlensed objects and the mass functions (and shapes) of lenses are too poorly known for these to provide strong limits. There are other effects, such as comparison of the sizes of Einstein rings for objects at different redshifts behind a lens (Link). We may get lucky in this regard and find a lens with simple geometry and multiple sources each multiply imaged.
Finally let’s return to our own neighborhood and consider a different kind of dark matter – planets. We have seen that planet detection is a serious possibility (Dalil; Gaudi) and will be even more likely with the advent of SIM (Boden). We enjoyed an outlaw poster claiming a planet of 5 $`M_J`$ has already been observed in a microlensing event (Bennett et al. 1999).
## 6 Broad Issues
Several themes emerged in the course of the meeting which don’t fit easily into our observer, source or lens pigeonholes. The first of these regards a sea change in the way we do astronomy. Many of you saw the article in Sunday’s NY Times Magazine called “The Loneliness of the Long Distance Cosmologist” (Panek, 1999). It describes how the nature of the astronomical enterprise in general, and how measurement of $`H_0`$ in particular, has changed from the solo effort of a lone wolf carried out at the prime focus of a unique telescope to the concerted effort of a large team, often using multiple telescopes (which many members of the team may never even have seen). While there may still be room for lone wolves in gravitational lensing, team efforts, with the attendant headaches, the massaging of egos and the compromising on means and ends, seems to be the order of the day. Like it or not, we are destined to live in an era of cloying and lame acronyms.
A second recurring theme, not unrelated to the first, has been that of the “exclusion” diagram. We have seen many instances of observations that, while they rule out large volumes of model space, produce allowed volumes (error ellipsoids, to first order) whose principal axes happen not to lie parallel to the axes of the model space of interest. The lone wolves among us show a strong preference for measurements which produce nicely aligned error ellipsoids. The team players don’t care whether ellipsoids (singly or from more than one measurement) and axes are aligned or not, as long as the resulting volume is small. I can sympathize with the lone wolves on aesthetic grounds but the future belongs to the tilted ellipsoids.
## 7 Bread and Butter Issues
The success of the Scientific Organizing Committee has been surpassed only by that of the Local Organizing Committee. With the exception of a friendly visit by the local firefighters our meeting has proceeded seamlessly. The accommodations have been excellent, the meeting room and poster area ideal, the pastry and coffee far above average, and the dinner cruise up and down the Charles and around a moonlit Boston harbor most memorable. We owe the chair of the LOC, Tereasa Brainerd, our host institution, Boston University, and our sponsors, the US National Science Foundation, NASA, and Boston University, considerable thanks for making this meeting as productive as it has been.
Perhaps the most appropriate place to end is with a call for volunteers to organize a gravitational lens meeting in 2002!
no-problem/9909/cond-mat9909306.html
# Emergence of macroscopic temperatures in systems that are not thermodynamical microscopically: towards a thermodynamical description of slow granular rheology.
## I
Granular matter set into motion by shearing, shaking or tapping is one of the most interesting cases of macroscopic out of equilibrium systems. Given a granular system subjected to some form of power input that makes it perform stationary flow on average, a very natural question that arises is to what extent it resembles a thermodynamic system of interacting particles such as, for example, a liquid.
More specifically, many attempts have been made to define a ‘granular temperature’ (see e.g. ). In order to deserve its name, a temperature has to play the role of deciding the direction of heat flow: it must be connected to a form of the zero-th law. In order to pursue this line, however, one has to somehow take care of the characteristics of granular flow that distinguish it from usual kinetic theory:
1) Energy is not conserved, and, more generally, the motion does not have the very strong phase-space volume conservation properties typical of Hamilton’s equations. The dissipation is due to friction which is in general not linear in the velocity, and dependent upon the relative positions of the particles.
2) Power is supplied by tapping (which may be periodic in time) or by shearing, in a manner very different from that of the ‘collisions’ of a thermal bath.
Under these circumstances, there is no reason why the observables should be related to a Gibbs (or equivalent) ensemble, and the possibility of having thermodynamic concepts seems lost.
In this paper we shall consider situations with shear and friction, in the limit of weak shear. The effect of a coherent ‘tapping’ will be discussed in further work . In this limit of ‘slow rheology’, it will turn out that even though the rapid motion cannot be associated with thermal motion, there appears for the slow flow a natural temperature playing the usual role in thermometry and thermalisation .
The computation presented here can be done in a wide range of approximation schemes consisting in resumming a perturbative expression for the dynamics in several forms — and to higher and higher levels of approximation — in particular the so-called mode-coupling approximation. Here, for concreteness, I will carry it through for a simple model for which the (single mode) mode-coupling approximation is exact.
Multiple Thermalisation in Aging and Rheology
Granular systems have been recognised as being closely related to glassy systems . Accordingly, several recent developments and models have been borrowed from the field of glasses to understand their properties.
A picture has arisen in the last few years for aging or gently driven glasses involving multiple thermalisations at widely separated timescales (see for a review). In the simplest scheme, the situation is as follows: Given any two observables $`A`$ and $`B`$ belonging to the system, define the correlation function as:
$$\langle A(t)B(t^{})\rangle =C_{AB}(t,t^{})$$
(1)
and the response of $`A`$ to a field conjugate to $`B`$:
$$\frac{\delta }{\delta h_B(t^{})}\langle A(t)\rangle =R_{AB}(t,t^{})$$
(2)
For a pure relaxational (undriven) glass, the correlation breaks up into two parts:
$$C_{AB}(t,t^{})=C_{AB}^F(t-t^{})+\stackrel{~}{C}_{AB}\left(\frac{h(t^{})}{h(t)}\right)$$
(3)
with $`h`$ the same growing function for all observables $`A`$, $`B`$. The fact that $`C_{AB}(t,t^{})`$ never becomes a function of the time-differences means that the system is forever out of equilibrium, it ages. If instead the glass is gently driven (with driving forces proportional to, say, $`ϵ`$), aging may stop, and we have:
$$C_{AB}(t,t^{})=C_{AB}^F(t-t^{})+\stackrel{~}{C}_{AB}\left(\frac{t-t^{}}{\tau _o}\right)$$
(4)
where $`\tau _o`$ is a time scale that diverges as $`ϵ`$ goes to zero.
In the long time limit and in the small drive limit, the time scales become very separate. When this happens, it turns out that the responses behave as:
$$R_{AB}(t,t^{})=\beta \frac{\partial }{\partial t^{}}C_{AB}^F+\beta ^{}\frac{\partial }{\partial t^{}}\stackrel{~}{C}_{AB}$$
(5)
in the aging and the driven case.
The fast degrees of freedom behave as if thermalised at the bath temperature $`\beta `$. On the other hand, the effective, system-dependent temperature $`T^{}=1/\beta ^{}`$ indeed deserves its name: it can be shown that it is what a ‘slow’ thermometer measures, and it controls the heat flow and the thermalisation of the slow degrees of freedom. It is the same for any two observables at a given timescale, whether the system is aging or gently driven. Furthermore, it is macroscopic: it remains non-zero in the limit in which the bath temperature is zero.
If the system is not coupled to a true thermal bath, but energy is supplied by shaking and shearing while it is dissipated by a nonlinear, complicated friction, there is no bath temperature at all. What will be argued in what follows is that even so, the ‘slow’ temperature $`\beta ^{\prime }`$ survives, despite the fact that the fast motion is not thermal in that case. Indeed, if the correlation has fast and slow components:
$$C_{AB}(t,t^{\prime })=C_{AB}^F(t,t^{\prime })+\widehat{C}_{AB}^S(t,t^{\prime })$$
(6)
the response is of the form:
$$R_{AB}(t,t^{\prime })=R_{AB}^F(t,t^{\prime })+\beta ^{\prime }\frac{\partial }{\partial t^{\prime }}\widehat{C}_{AB}^S(t,t^{\prime })$$
(7)
with the fast response $`R_{AB}^F(t,t^{\prime })`$ bearing no general relation to the fast correlation $`C_{AB}^F(t,t^{\prime })`$.
The effective ‘slow’ temperature so defined is then found to be directly related to Edwards’ compactivity , but, in the spirit of Ref. , in the context of slowly moving rather than stationary systems. It also seems closely related to the macroscopic temperature driving activated processes in the SGR model.
A Simple Example
For concreteness, let us consider a variation of the standard mean-field glass model . The variables are $`x_i`$, $`i=1,\mathrm{},N`$, and are subject to an equation of motion:
$$m\ddot{x}_i+\frac{\delta E(𝒙)}{\delta x_i}+\mathrm{\Omega }x_i=ϵf_i^{\text{‘shear’}}(𝒙)-f_i^{\text{‘friction’}}(\dot{𝒙})$$
(8)
The left hand side is just Newtonian dynamics (with $`\mathrm{\Omega }`$ possibly time-dependent), with a ‘glassy’ potential which we can take, for example, as:
$$E(𝒙)=-\sum _{ijk}J_{ijk}x_ix_jx_k$$
(9)
where $`J_{ijk}`$ is a symmetric tensor of random quenched variables of variance $`1/N^2`$. These terms correspond to the p-spin glass . It was realised some ten years ago that this kind of model constitutes a mean-field caricature of a fragile glass , and in particular that above the glass transition its dynamics yields the simplified mode-coupling equations .
On the right-hand side of Eqn. (8) we have added two terms that mimic granular experiments. The forces $`f_i`$ do not derive from a potential; for example:
$$f_i^{\text{‘shear’}}(𝒙)=\sum _{jk}K_{jk}^ix_jx_k$$
(10)
with $`K_{jk}^i`$ a non-symmetric tensor with random elements of variance $`1/N`$. These forces pump energy into the system, and hence play a role similar to shearing. All our discussion will be restricted to weak driving, i.e. small $`ϵ`$. For the friction term we can take, instead of a linear term $`\dot{x}_i`$, a more complicated odd function $`f_i^{\text{‘friction’}}=f^{\text{‘friction’}}(\dot{x}_i)`$.
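The model is easy to integrate directly at small $`N`$ — a sanity check rather than the exact large-$`N`$ solution discussed below. In the sketch the spherical constraint $`\sum _ix_i^2=N`$ is enforced by rescaling, which plays the role of the (possibly time-dependent) $`\mathrm{\Omega }`$ term; the friction law and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps, m, dt = 32, 0.1, 1.0, 1e-3

# quenched couplings: symmetrised Gaussian J_ijk (variance of order 1/N^2),
# non-symmetric K^i_jk with variance 1/N
J = rng.normal(0.0, 1.0 / N, size=(N, N, N))
J = sum(J.transpose(p) for p in
        [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 1), (1, 0, 2), (2, 1, 0)]) / 6.0
K = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N, N))

def grad_E(x):                      # dE/dx_i for E = -sum_{ijk} J_ijk x_i x_j x_k
    return -3.0 * np.einsum('ijk,j,k->i', J, x, x)

def f_shear(x):                     # non-potential driving, eq. (10)
    return np.einsum('ijk,j,k->i', K, x, x)

def f_friction(v):                  # an odd, nonlinear friction law (illustrative)
    return v + 0.5 * v**3

x = rng.normal(size=N)
x *= np.sqrt(N / (x @ x))           # start on the sphere sum_i x_i^2 = N
v = np.zeros(N)
for _ in range(10000):
    a = (-grad_E(x) + eps * f_shear(x) - f_friction(v)) / m
    v += dt * a
    x += dt * v
    x *= np.sqrt(N / (x @ x))       # rescaling stands in for the Omega x_i term
```

Averaging $`x_i(t)x_i(t^{\prime })`$ over sites and over realisations of the couplings gives a numerical estimate of the correlation studied analytically below.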
The equations for the correlation $`C(t,t^{\prime })=\frac{1}{N}\sum _i<x_i(t)x_i(t^{\prime })>`$ and the response $`R(t,t^{\prime })=\frac{1}{N}\sum _i\delta x_i(t)/\delta h_i(t^{\prime })`$ can be solved exactly in the large-$`N`$ limit. One may do so by reducing the system to a self-consistent single-site equation
$$m\ddot{x}+\mathrm{\Omega }x=ϵf^{\text{‘shear’}}(t)-f^{\text{‘friction’}}(\dot{x})+6\int ^t𝑑t^{\prime }C(t,t^{\prime })R(t,t^{\prime })x(t^{\prime })+\rho (t)$$
(12)
$`f^{\text{‘shear’}}(t)`$ and $`\rho (t)`$ are independent coloured Gaussian noises that satisfy:
$$<f^{\text{‘shear’}}(t)f^{\text{‘shear’}}(t^{\prime })>=<\rho (t)\rho (t^{\prime })>=3C^2(t,t^{\prime })$$
(13)
Equation (12) is supplemented by the self-consistency conditions
$$<x(t)x(t^{\prime })>=C(t,t^{\prime });R(t,t^{\prime })=\frac{\delta x(t)}{\delta h(t^{\prime })}$$
(14)
where $`h(t)`$ is a field that acts additively in (12).
We now perform the usual step of separating ‘fast’ and ‘slow’ functions. Accordingly, we put:
$`C(t,t^{\prime })=C_F(t,t^{\prime })+C_S(t,t^{\prime })`$ ; $`R(t,t^{\prime })=R_F(t,t^{\prime })+R_S(t,t^{\prime })`$ (15)
$`f^{\text{‘shear’}}=f_F^{\text{‘shear’}}+f_S^{\text{‘shear’}}`$ ; $`\rho =\rho _F+\rho _S`$ (16)
The additive separation is such that in the region of large time-differences $`C_F(t,t^{\prime })`$ and the integral of $`R_F(t,t^{\prime })`$ tend to zero, and the induced noises are now divided into fast and slow:
$`<f_F^{\text{‘shear’}}(t)f_F^{\text{‘shear’}}(t^{\prime })>=<\rho _F(t)\rho _F(t^{\prime })>`$ $`=`$ $`3C_F^2(t,t^{\prime })`$ (17)
$`<f_S^{\text{‘shear’}}(t)f_S^{\text{‘shear’}}(t^{\prime })>=<\rho _S(t)\rho _S(t^{\prime })>`$ $`=`$ $`3C_S^2(t,t^{\prime })`$ (18)
We can now rewrite the single-site equation (12) in the following way
$$m\ddot{x}+\mathrm{\Omega }x=ϵf_F^{\text{‘shear’}}(t)-f^{\text{‘friction’}}(\dot{x})+6\int ^t𝑑t^{\prime }C_F(t,t^{\prime })R_F(t,t^{\prime })x(t^{\prime })+\rho _F(t)+Z(t)$$
(20)
where:
$$Z(t)=ϵf_S^{\text{‘shear’}}(t)+6\int ^t𝑑t^{\prime }C_S(t,t^{\prime })R_S(t,t^{\prime })x(t^{\prime })+\rho _S(t)$$
(21)
Consider equation (20): it describes a single degree of freedom which has nonlinear friction and a (short) memory kernel, plus a slowly varying field $`Z(t)`$. Under the assumption of a large separation of timescales (which will be valid if the system is weakly sheared and ‘old’), we can treat $`Z(t)`$ as adiabatic. However, because of the absence of detailed balance, we know that the distribution for fixed $`Z`$ is non-Gibbsian. The fast correlation and response functions will be of the form $`C_F(t,t^{\prime })=C_F(t-t^{\prime })`$ and $`R_F(t,t^{\prime })=R_F(t-t^{\prime })`$, but we cannot say anything in general about their relation. The average of $`x`$ over an interval of time large compared with the fast relaxation (the range of $`C_F`$ and $`R_F`$) is a certain function of $`Z`$:
$$x_Z=\frac{\partial F(Z)}{\partial Z}$$
(22)
which defines the single variable function $`F(Z)`$.
We now turn to the slow evolution. Because the memory kernels in equation (21) are slowly varying and smooth, we can replace $`x`$ by its average $`x_Z`$. We hence have:
$$Z(t)=ϵf_S^{\text{‘shear’}}(t)+6\int ^t𝑑t^{\prime }C_S(t,t^{\prime })R_S(t,t^{\prime })\frac{\partial F(Z)}{\partial Z}(t^{\prime })+\rho _S(t)+h^{\text{adiab}}(t)$$
(23)
where we have written explicitly the slow component of a field acting additively in (12). The self-consistency equations now read:
$$C_S(t,t^{\prime })=<\frac{\partial F(Z)}{\partial Z}(t)\frac{\partial F(Z)}{\partial Z}(t^{\prime })>;R_S(t,t^{\prime })=\frac{\delta }{\delta h^{\text{adiab}}(t^{\prime })}\frac{\partial F(Z)}{\partial Z}(t)
(24)
Equations (23) and (24) describe the slow part of the evolution. The only input of the fast equations is through the function $`F(Z)`$. The manner of solution depends little on the fact that the ‘fast’ evolution is not thermal, and is by now standard. Here I sketch the steps for completeness. Our aim is to show that they admit in the small $`ϵ`$ limit a solution of the form:
$`C_S(t,t^{\prime })=\stackrel{~}{C}\left({\displaystyle \frac{h(t^{\prime })}{h(t)}}\right);R_S(t,t^{\prime })=\beta ^{\prime }{\displaystyle \frac{\partial }{\partial t^{\prime }}}\stackrel{~}{C}\left({\displaystyle \frac{h(t^{\prime })}{h(t)}}\right)`$ (25)
where $`\beta ^{\prime }`$ is the effective (inverse) temperature that emerges for the slow dynamics, to be determined by the matching with the ‘fast’ time-sector. To show this, we first write $`\tau =\mathrm{ln}(h(t))`$, $`\tau ^{\prime }=\mathrm{ln}(h(t^{\prime }))`$, etc., and put
$`C_S(t,t^{\prime })`$ $`=`$ $`\widehat{C}(\tau -\tau ^{\prime })`$ (26)
$`R_S(t,t^{\prime })`$ $`=`$ $`\beta ^{\prime }{\displaystyle \frac{\partial }{\partial \tau ^{\prime }}}\widehat{C}(\tau -\tau ^{\prime })`$ (27)
$`Z(t)`$ $`=`$ $`\widehat{Z}(\tau )`$ (28)
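The logarithmic reparametrisation works because the aging scaling form depends on the two times only through the ratio $`h(t^{\prime })/h(t)`$; a one-line check (an expansion of the step above, not new material):

```latex
C_S(t,t^{\prime})
  = \tilde{C}\!\left(\frac{h(t^{\prime})}{h(t)}\right)
  = \tilde{C}\!\left(e^{\tau^{\prime}-\tau}\right)
  \equiv \widehat{C}(\tau-\tau^{\prime}),
\qquad \tau=\ln h(t),\quad \tau^{\prime}=\ln h(t^{\prime}),
```

so a function of the ratio of the $`h`$'s becomes a function of the difference of logarithmic times, and the slow sector looks time-translation invariant in $`\tau `$.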
Equations (23) and (24) now take the form, in the $`ϵ0`$ limit:
$$\widehat{Z}(\tau )=ϵf_S^{\text{‘shear’}}(\tau )+6\beta ^{\prime }\int ^\tau 𝑑\tau ^{\prime }\widehat{C}(\tau -\tau ^{\prime })\widehat{C}^{\prime }(\tau -\tau ^{\prime })\frac{\partial F(\widehat{Z})}{\partial \widehat{Z}}(\tau ^{\prime })+\rho _S(\tau )+h^{\text{adiab}}$$
(29)
where we have again written explicitly the slow component of a field acting additively in (12). The self-consistency equations read:
$$\widehat{C}(\tau -\tau ^{\prime })=<\frac{\partial F(\widehat{Z})}{\partial \widehat{Z}}(\tau )\frac{\partial F(\widehat{Z})}{\partial \widehat{Z}}(\tau ^{\prime })>$$
(30)
and
$$\beta ^{\prime }\frac{\partial }{\partial \tau ^{\prime }}\widehat{C}(\tau -\tau ^{\prime })=\frac{\delta }{\delta h^{\text{adiab}}(\tau ^{\prime })}\frac{\partial F(\widehat{Z})}{\partial \widehat{Z}}(\tau )$$
(31)
To prove that (28) is a solution of the system (29) - (31), and hence that (25) is a solution of (23) and (24) we can proceed as follows: We introduce an infinite set of auxiliary variables $`l_i(\tau )`$ and a set of dynamical variables $`y_i`$ satisfying an ordinary Langevin equation with inverse temperature $`\beta ^{}`$:
$$\left[m_j\frac{d^2}{d\tau ^2}+\mathrm{\Gamma }_j\frac{d}{d\tau }+\mathrm{\Omega }_j\right]y_j+l_j(\tau )=\mathrm{\Delta }_jy_j+l_j(\tau )=\xi _j(\tau )-\frac{\partial F\left(\sum _jA_jy_j\right)}{\partial y_j}$$
(32)
with
$$<\xi _i(\tau )\xi _j(\tau _w)>=2T^{\prime }\mathrm{\Gamma }_j\delta _{ij}\delta (\tau -\tau _w).$$
(33)
We can now choose the $`A_j`$ and the $`l_j`$ such that:
$`{\displaystyle \sum _j}A_j\mathrm{\Delta }_j^{-1}\xi _j(t)`$ $`=`$ $`\rho (t)`$ (34)
$`{\displaystyle \sum _j}A_j^2\mathrm{\Delta }_j^{-1}`$ $`=`$ $`\widehat{C}^{\prime }(\tau )\mathrm{\Theta }(\tau )`$ (35)
$`{\displaystyle \sum _j}A_j\mathrm{\Delta }_j^{-1}l_j(\tau )`$ $`=`$ $`h^{\text{adiab}}(\tau )`$ (36)
and check that the quantity $`\sum _jA_jy_j`$ obeys the same equation of motion as $`Z(\tau )`$.
Because the problem reduces to an ordinary (not glassy!) Langevin equation, we can be sure that the system thermalises at temperature $`T^{\prime }=1/\beta ^{\prime }`$, and hence verify that the ansatz closes.
We have followed essentially the same steps as in the treatment of glassy models coupled to a ‘good’ heat bath. Here, instead of having two time scales each with its own temperature, we have a temperature associated only with the low-frequency motion.
Conclusions
It has been recognised for some time that granular systems bear a deep similarity to glassy systems at essentially zero temperature. In order to introduce some agitation, both tapping and shearing have often been used. This certainly makes sand look more like a fluid, but at the same time it poses the problem of introducing (and dissipating) energy in a manner that is quite different from that of a thermal bath.
We have shown in this paper that, at least within an approximation scheme and for slowly flowing systems, one can still introduce thermodynamic concepts — provided one attempts to apply them only to the low frequency motion.
For very small power input, it turns out that the fluctuation-dissipation temperature of the slow degrees of freedom coincides in these models with Edwards’, defined on the basis of the logarithm of the number of stable configurations. For stronger driving power, an analogous definition needs the counting of stable states each composed of many configurations.
Acknowledgements
I wish to thank Anita Mehta for useful comments.
# Pattern formation by competition: a biological example
## 1 Introduction
Bacteria are unicellular organisms generally studied as isolated units; however, they are interactive organisms able to perform collective behaviour, and a clear marker of the presence of a multicellular level of organization is the formation of growth patterns . In particular, it has been pointed out that unfavorable conditions may lead bacteria to cooperative behavior, as a means of reacting to environmental constraints .
Many studies about the multicellular level of organization of bacteria have been proposed, and pattern formation during colony growth has been observed in Cyanobacteria , in Bacillus subtilis , in Escherichia coli , Proteus mirabilis and others. Some of these patterns have been studied by mathematical models , which explain the macroscopic patterns through the microscopic observations.
There is a group of bacteria that differs from those cited above because its normal morphological organization is clearly multicellular: the Actinomycetes, of which Streptomyces is a genus. Streptomycetes are gram-positive bacteria that grow as mycelial filaments in the soil, and whose mature colonies may contain two types of mycelia, the substrate (or vegetative) mycelium and the aerial mycelium, which have different biological roles . The vegetative mycelium absorbs the nutrients and is composed of a dense and complex network of hyphae usually embedded in the soil. Once the cell culture becomes nutrient-limited, the aerial mycelium develops from the surface of the vegetative mycelium. The role of this type of mycelium is mainly reproductive: indeed, the aerial mycelium develops the spores and puts them in a favorable position to be dispersed .
In our laboratory we have isolated a bacterial strain, identified by morphological criteria as belonging to Streptomyces. This strain is interesting because its growth pattern differs on maximal and minimal culture media. On maximal culture medium (LB, Luria Broth), after $`3`$–$`4`$ days of growth at $`30^{\circ }C`$, the strain shows typical bacterial growth with formation of the rounded colony characteristic of most bacterial strains (Fig. 1). On minimal culture medium (Fahreus) growth proceeds more slowly than on maximal media and a concentric ring pattern of aerial mycelium sets up (Fig. 2). The rings are centered on the first cell that sets up the colony - we call it the founder - where usually the aerial mycelium develops as well. The number of rings increases with time up to $`7`$–$`8`$ after $`20`$ days of growth at $`30^{\circ }C`$. In both cases the agar concentration was $`1.5\%`$.
The presence of concentric ring patterns is a quite common feature of bacterial and fungal colonies ; many models can originate such patterns , and a possible explanation was proposed in , where it is suggested that the interplay of front propagation and Turing instability can lead to concentric ring and spot patterns. A different approach, based on competition for resources, has recently been proposed to study species formation as pattern formation in the genotypic space. We consider a similar mechanism to investigate the spatial pattern formation observed in our laboratory in a Streptomyces colony.
## 2 The model
### 2.1 Biological constraints
Before introducing the mathematical model we have to go through some of the biological features of the system. Aerial mycelia are connected through the vegetative hyphae network. This network has a peculiar structure in the Streptomyces strain isolated in our laboratory: indeed, we observe that the growing boundary of the substrate mycelium is made of many hyphae extending radially from the founder, so that in this area the substrate mycelium has a radial polarity, even though the hyphae have many branching segments.
The substrate mycelium has the biological objective of finding nutrients to give rise to spores; we therefore expect that on minimal media a strong competition for the energetic resources arises between neighboring substrate mycelia, whereas on maximal media, where there are sufficient nutrients, the competition is weaker.
If the cells are connected mainly along the radial direction, then competition will be stronger along this direction than along the tangential one. In other words, at the growing edge of the colony the competition is not isotropic but, following the vegetative mycelium morphology, it will be stronger among cells belonging to neighboring circumferences (radial direction) than among cells of the same circumference (tangential direction), and we will keep track of these aspects in the model. Although the radial polarity is lost inside the colony, the asymptotic distribution of aerial mycelium is strongly affected by the initial spots derived from the growing boundary of the vegetative mycelium.
Finally, another important feature of the biological system is the presence of a founder. The founder behaves as every other aerial mycelium - it competes with the other cells - and moreover it is the center of every circle. This means that every hypha originates from the founder: it is the source of the vegetative hyphae, and as the colony grows the rings near the founder become increasingly densely packed. Moreover, during the enlargement of the colony no new center sets up, and therefore the substrate mycelium density is highest near the founder and decreases radially away from it.
To summarize, in our model we make the following assumptions based on the previous considerations.
* There is competition among the aerial mycelia for some substances that we assume, for the sake of simplicity, to be uniformly distributed over the culture.
* We consider only the aerial mycelium: we do not introduce the substrate mycelium explicitly, but we take it into account by assuming that
+ The competition is stronger along the radial direction than along the tangential one.
+ The probability for the aerial mycelium to appear is higher near the founder.
Assuming this framework, we show that a concentric ring pattern may be explained as a consequence of strong competition, and a rounded pattern as a consequence of weak competition. From the biological point of view this result implies that the formation of concentric ring patterns is a means that Streptomyces adopts to control growth.
### 2.2 The mathematical model
In the following we propose a mathematical model to reproduce the aerial mycelium growth patterns described in the Introduction. This model is derived from a similar model introduced, in a different framework (species formation in the genotypic space), in .
Let us consider a two-dimensional spatial lattice, which represents the Petri dish. Each point $`𝐱`$ is identified by two coordinates $`𝐱=(x_1,x_2)`$, and we study the temporal evolution of the normalized probability $`p(𝐱,t)`$ of having an aerial mycelium at position $`𝐱`$ at time $`t`$. The evolution equation for $`p(𝐱,t)`$ is of the form:
$$p(𝐱,t+1)=A(𝐱,p(𝐱,t))p(𝐱,t)$$
(1)
where $`A(𝐱,p(𝐱,t))`$ is the probability of formation of a new aerial mycelium in position $`𝐱`$ and we suppose it can depend also on the distribution $`p(𝐱,t)`$. According to the hypothesis described above, it is the product of two independent terms:
$$A(𝐱,p(𝐱,t))=\frac{A_1(𝐱)A_2(𝐱,p(𝐱,t))}{\overline{A}}$$
where $`A_1(𝐱)`$ is the so-called static fitness, and represents the probability of growth of an aerial mycelium in the presence of an infinite amount of resources (no competition). The founder is the source of every hypha, so we expect it to be a decreasing function of the distance $`|x|`$ from the founder, with $`|x|=\sqrt{x_1^2+x_2^2}`$, assuming the founder occupies the position $`(0,0)`$.
The second term, $`A_2(𝐱,p(𝐱,t))`$, is the competition term; in general it depends on the whole spatial distribution $`p(𝐱,t)`$, and we suppose that two aerial mycelia compete the more strongly the closer they are.
$`\overline{A}`$ is the average fitness and is necessary to keep $`p(𝐱,t+1)`$ normalized. It is defined as follows :
$$\overline{A}(t)=\int _𝐱A(𝐱,p(𝐱,t))p(𝐱,t)𝑑𝐱$$
Both terms are positive, therefore can be written in the exponential form
$$A_1(𝐱)A_2(𝐱,p(𝐱,t))=\mathrm{exp}\left(H_1(𝐱)-J\int _𝐲K(d(𝐱,𝐲))p(𝐲,t)𝑑𝐲\right)$$
where $`J`$ is the intensity of competition (it will be large in presence of strong competition, i.e. low resource level) and $`K(d(𝐱,𝐲))`$ is a decreasing function of the distance between two mycelia $`d(𝐱,𝐲)`$.
We also allow $`p(𝐱,t)`$ to diffuse to the nearest neighbors with diffusion coefficient $`\mu `$ (the presence of diffusion is necessary to allow the bacteria to populate the whole lattice).
Finally we get:
$$p(𝐱,t+1)=\frac{\mathrm{exp}\left(H_1(𝐱)-J\int _𝐲K(d(𝐱,𝐲))p(𝐲,t)𝑑𝐲\right)}{\overline{A}(t)}p(𝐱,t)+\mu \nabla ^2p(𝐱,t)$$
(2)
According to the assumptions stated in Section 2.1, we now introduce particular forms for $`H_1(𝐱)`$ and $`K(d)`$: a static fitness depending only on the distance from the founder, $`H_1(𝐱)=H_1(|x|)`$, and a competition kernel $`K(d)`$ depending on the distance $`d`$ between mycelia. As mentioned above, we expect the probability of growth of the aerial mycelium to be higher near the founder; therefore $`H_1(|x|)`$ has to be a decreasing function of $`|x|`$. For the sake of simplicity we have chosen a single-maximum, “almost linear” function,
$$H_1(|x|)=h_0+b\left(1-\frac{|x|}{r}-\frac{1}{1+|x|/r}\right)$$
(3)
that has a quadratic maximum in $`𝐱=(0,0)`$ (founder): in fact, close to $`𝐱=(0,0)`$ we have $`h(|x|)\simeq h_0-b|x|^2/r^2`$, and for $`|x|\to \mathrm{\infty }`$ it is linear, $`h(|x|)\simeq h_0+b(1-|x|/r)`$. $`b`$ and $`r`$ control the intensity of the static fitness.
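The two limits quoted above are easy to verify numerically (a quick consistency check; the parameter values are arbitrary):

```python
import numpy as np

h0, b, r = 0.0, 1.0, 4.0

def H1(x):
    # static fitness, eq. (3)
    return h0 + b * (1.0 - x / r - 1.0 / (1.0 + x / r))

# quadratic maximum near the founder: H1(|x|) ~ h0 - b |x|^2 / r^2
x = 1e-3
assert abs(H1(x) - (h0 - b * x**2 / r**2)) < 1e-9

# linear decrease far away: H1(|x|) ~ h0 + b (1 - |x| / r)
x = 1e6
assert abs(H1(x) - (h0 + b * (1.0 - x / r))) < 1e-5
```

In fact the bracket in eq. (3) can be rewritten exactly as $`-u^2/(1+u)`$ with $`u=|x|/r`$, which makes both limits immediate.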
The competition kernel $`K(d)`$ has to be a steep decreasing function of $`d`$; we expect to have a finite range of competition, i.e. two mycelia at distance $`d>R`$ do not compete (or compete very weakly). A possible choice is:
$$K(d)=\mathrm{exp}\left(-\frac{1}{4}\left|\frac{d}{R}\right|^4\right)$$
(4)
We have chosen the forms of the kernel (4) and of the static fitness (3) also because it is possible to derive some analytical results that assure us of the existence of a non-trivial spatial distribution for an exponential kernel with exponent greater than $`2`$; $`R`$ is the range of competition. All the numerical and analytical results described in this paper are obtained using (3)–(4), but we have also tested similar potentials, obtaining the same qualitative results.
Computing numerically from Eq. (2) the asymptotic probability distribution $`p(𝐱)\equiv \lim _{t\to \mathrm{\infty }}p(𝐱,t)`$, we get, for different values of the parameters, two types of spatial patterns. In particular, numerical and analytical studies (see Ref. ) show that the crucial parameter is $`G=\left(J/R\right)/\left(b/r\right)`$, i.e. the ratio between the intensity of competition and the intensity of the static fitness.
For small values of $`G`$, i.e. when the competition is rather weak or, in other words, we have a maximal medium, we get a single-peaked Gaussian-like distribution centered on the founder (similar to the one shown in Fig. 5 (left), with $`G=0.5`$).
For larger values of $`G`$ we get a multi-peaked distribution (see Fig. 3, $`G=248.0`$), where the central peak (founder) is still present, but we also get some other peaks at an approximate distance $`R`$, the range of competition, from each other. This is the expected pattern for an isotropic competition; in fact the presence of equally spaced spots is due to the competition term, which inhibits the growth of any aerial mycelium around another one.
To obtain spatial patterns similar to the concentric rings observed in our experiments, some features of the peculiar spatial structure of Streptomyces have to be added. As stated before, we hypothesize that, due to the substrate mycelium morphology, the competition is much stronger in the radial direction (along the hyphae) than in the tangential direction.
Therefore we decompose the distance between any two points $`𝐱`$ and $`𝐲`$ into a radial part $`d_R(𝐱,𝐲)`$ and a tangential part $`d_T(𝐱,𝐲)`$ (see Fig. 4):
$$d(𝐱,𝐲)^2=d_R(𝐱,𝐲)^2+\alpha d_T(𝐱,𝐲)^2$$
(5)
where $`\alpha `$ is a parameter that allows us to change the metric of our space.
For $`\alpha >1`$ the relative weight of the tangential distance is larger than one: due to the lack of cell communication along this direction, the competition is mainly radial, along the hyphae, because the mycelia do not compete if they are not directly connected by a hypha. For $`\alpha =1`$ we recover the usual Euclidean distance.
Using the distance (5) in Eq. (2) with $`\alpha >1`$ and strong competition, we are able to obtain a set of rings composed of equally spaced spots at fixed distances from the founder (see Fig. 5 (right), for $`\alpha =6`$), while in the presence of large resources we still have a single-peaked distribution (Fig. 5 (left)). For larger values of $`\alpha `$ the rings become continuous, while for low values, $`\alpha \simeq 1`$, the multi-peaked structure of $`p(𝐱)`$ reappears.
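A minimal numerical sketch of the iteration (2) with the anisotropic metric (5) is given below. Lattice size, parameter values, periodic boundaries for the diffusion step, and the choice of evaluating the radial direction at the first point of each pair (which makes the kernel slightly asymmetric) are all illustrative simplifications, not those used for the figures; with these parameters the competition is weak, so a single founder-centered peak emerges, and raising $`J`$ moves the system towards the multi-peaked regime.

```python
import numpy as np

L = 31                        # lattice side; founder at the centre (illustrative size)
h0, b, r = 0.0, 1.0, 4.0      # static-fitness parameters of eq. (3)
J, R, alpha = 2.0, 3.0, 6.0   # competition intensity, range, and metric parameter
mu = 1e-4                     # diffusion coefficient

xs = np.arange(L) - L // 2
X, Y = np.meshgrid(xs, xs, indexing='ij')
rad = np.hypot(X, Y)
H1 = (h0 + b * (1.0 - rad / r - 1.0 / (1.0 + rad / r))).ravel()   # eq. (3)

# anisotropic squared distance of eq. (5): split x - y into radial/tangential parts
pts = np.stack([X.ravel(), Y.ravel()], axis=1).astype(float)
diff = pts[:, None, :] - pts[None, :, :]
unit = pts / np.maximum(np.linalg.norm(pts, axis=1), 1e-9)[:, None]
d_rad = np.einsum('ijk,ik->ij', diff, unit)
d2 = d_rad**2 + alpha * (np.einsum('ijk,ijk->ij', diff, diff) - d_rad**2)
K = np.exp(-0.25 * (d2 / R**2) ** 2)          # eq. (4), using d^4 = (d^2)^2

p = np.full(L * L, 1.0 / (L * L))             # uniform initial probability
for _ in range(2000):
    A = np.exp(H1 - J * (K @ p))              # fitness, numerator of eq. (2)
    p = A * p / (A @ p)                       # growth step divided by mean fitness
    q = p.reshape(L, L)                       # diffusion to nearest neighbours
    lap = (np.roll(q, 1, 0) + np.roll(q, -1, 0) +
           np.roll(q, 1, 1) + np.roll(q, -1, 1) - 4.0 * q)
    p = (q + mu * lap).ravel()
    p /= p.sum()
```

Plotting `p.reshape(L, L)` for different `J` and `alpha` reproduces qualitatively the transition between the two kinds of patterns described above.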
These results are in agreement with those presented in Ref. , where a one-dimensional system is considered. In this case the genotypic space plays the role of the real space, and using a Gaussian kernel
$$K(d)=\mathrm{exp}\left(-\frac{1}{2}\left|\frac{d}{R}\right|^2\right)$$
it is possible to derive analytically the value $`G_c`$ of the transition between the two regimes (single-peaked and multi-peaked distribution). It is, for $`\mu \to 0`$ (slow diffusion) and $`\frac{r}{R}\to 0`$ (static fitness almost flat),
$$G_c\left(\frac{r}{R}\right)\simeq G_c(0)\frac{r}{R}$$
with $`G_c(0)=2.216\mathrm{}`$. Thus for $`G>G_c(\frac{r}{R})`$ we have a multi-peaked distribution, while for $`G<G_c(\frac{r}{R})`$ only the fittest one survives (single-peaked distribution).
## 3 Discussion and conclusions
We isolated a strain of Streptomyces that shows a dual pattern of growth of the aerial mycelium: it gives rise either to concentric rings centered on the founder cell or to the classic circular bacterial colony. The medium is the discriminating factor: on minimal media the first type of pattern arises, on maximal media the second one.
The substrate mycelium follows a different pattern: optical microscopy observations revealed that every hypha originates from the primordial central colony (the founder). Moreover, the growing edge of the substrate mycelium proceeds in the radial direction from the founder.
Using a simple mathematical model for the formation of the aerial mycelium we are able to simulate both aerial mycelium spatial patterns. The parameter we modulate to obtain these two different patterns is the competition intensity. Indeed, the main assumption of the model is that there is competition among the hyphae of vegetative mycelia for the energetic sources necessary for the formation of the aerial mycelium. In a medium with low nutrient concentration there is a strong competition for aerial mycelium formation - and the model produces concentric ring patterns - whereas in a maximal medium the competition is weaker - and the model produces the classic circular bacterial colony.
The aerial mycelium is derived from the substrate mycelium, so we derived the constraints of the model from the morphological observations concerning the substrate mycelium described in the Introduction. The system has a radial geometry centered on the founder (the probability of formation of aerial mycelium is higher near the founder), and we assumed that the competition is affected by this feature. Indeed, the competition is stronger along a hypha, due to the cell-cell communication typical of the “multicellular” organization of Streptomyces. This implies that the competition is stronger along the radial direction than along the tangential one, at least at the outer boundary of the colony.
The growth pattern description above refers to the presence of a single primordial colony. In the presence of two or more colonies close to one another we have observed different patterns, with additive and negative interactions among the colonies. Our minimal model is not able to reproduce these behaviors, because in the presence of many founders the simple assumption of radial growth centered on a single founder is no longer fulfilled.
In conclusion, we have found some peculiar spatial patterns of the aerial mycelium of Streptomyces. We have proposed a simple mathematical model to explain these patterns, assuming competition along the hyphae as the main ingredient that leads to pattern formation. Our numerical results are able to reproduce the spatial patterns obtained experimentally under different conditions (minimal and maximal medium), while to obtain more complex behavior (interference patterns, see Fig. 6) we expect that more “chemical” species have to be added to our minimal model.
## Acknowledgements
We wish to thank F. Bagnoli, M. Buiatti, R. Livi and A. Torcini for fruitful discussions. M.B. and A.C. thank the Dipartimento di Matematica Applicata “G. Sansone” for friendly hospitality.
# Formation of millisecond pulsars
## 1 Introduction
Millisecond pulsars are characterized by short rotational periods ($`P_{\mathrm{spin}}<30`$ ms) and relatively weak surface magnetic fields ($`B<10^{10}`$ G) and are often found in binaries with a white dwarf companion. They are old neutron stars which have been recycled in a close binary via accretion of mass and angular momentum from a donor star. The general scenario of this process is fairly well understood qualitatively (cf. review by Bhattacharya & van den Heuvel 1991), but there remain many details which are still uncertain and difficult to analyze quantitatively. It is our aim to highlight these problems in a series of papers and try to answer them using detailed numerical calculations with refined stellar evolution and binary interactions.
There are now more than 30 binary millisecond pulsars known in the Galactic disk. They can be roughly divided into three observational classes (Tauris 1996). Class A contains the wide-orbit ($`P_{\mathrm{orb}}>20`$ days) binary millisecond pulsars (BMSPs) with low-mass helium white dwarf companions ($`M_{\mathrm{WD}}<0.45M_{\odot }`$), whereas the close-orbit BMSPs ($`P_{\mathrm{orb}}\le 15`$ days) consist of systems with either low-mass helium white dwarf companions (class B) or systems with relatively heavy CO white dwarf companions (class C). The latter class evolved through a phase with significant loss of angular momentum (e.g. common envelope evolution) and descends from systems with a heavy donor star: $`2<M_2/M_{\odot }<6`$. The single millisecond pulsars are believed to originate from tight class B systems where the companion has been destroyed or evaporated – either from X-ray irradiation when the neutron star was accreting, or in the form of a pulsar radiation/wind of relativistic particles (e.g. Podsiadlowski 1991; Tavani 1992).
The evolution of a binary initially consisting of a neutron star and a main-sequence companion depends on the mass of the companion (donor) star and the initial orbital period of the system. If the donor star is heavy compared to the neutron star then the mass transfer is likely to result in a common envelope (CE) evolution (Paczynski 1976; Webbink 1984; Iben & Livio 1993) where the neutron star spirals in through the envelope of the donor on a very short timescale of less than $`10^4`$ yr. The observational paucity of Roche-lobe filling companions more massive than $`2M_{\odot }`$ has been attributed to their inability to transfer mass in a stable mode such that the system becomes a persistent long-lived X-ray source (van den Heuvel 1975; Kalogera & Webbink 1996). For lighter donor stars ($`<2M_{\odot }`$) the system evolves into a low-mass X-ray binary (LMXB) which evolves on a much longer timescale of $`10^7`$–$`10^9`$ yr. It has been shown by Pylyser & Savonije (1988,1989) that an orbital bifurcation period ($`P_{\mathrm{bif}}`$) separates the formation of converging systems (which evolve with decreasing orbital periods until the mass-losing component becomes degenerate and an ultra-compact binary is formed) from the diverging systems (which finally evolve with increasing orbital periods until the mass losing star has lost its envelope and a wide detached binary is formed). It is the LMXBs with $`P_{\mathrm{orb}}>P_{\mathrm{bif}}`$ ($`\simeq 2`$ days) which are the subject of this paper – the progenitors of the wide-orbit class A BMSPs.
In these systems the mass transfer is driven by the interior thermonuclear evolution of the companion star since it evolves into a (sub)giant before loss of orbital angular momentum dominates. In this case we get an LMXB with a giant donor. These systems have been studied by Webbink, Rappaport & Savonije (1983), Taam (1983), Savonije (1987), Joss, Rappaport & Lewis (1987) and recently Rappaport et al. (1995) and Ergma, Sarna & Antipova (1998). For a donor star on the red giant branch (RGB) the growth in core-mass is directly related to the luminosity, as this luminosity is entirely generated by hydrogen shell burning. As such a star, with a small compact core surrounded by an extended convective envelope, is forced to move up the Hayashi track, its luminosity increases strongly with only a fairly modest decrease in temperature. Hence one also finds a relationship between the giant’s radius and the mass of its degenerate helium core – almost entirely independent of the mass present in the hydrogen-rich envelope (Refsdal & Weigert 1971; Webbink, Rappaport & Savonije 1983). In the scenario under consideration, the extended envelope of the giant is expected to fill its Roche-lobe until termination of the mass transfer. Since the Roche-lobe radius $`R_\mathrm{L}`$ only depends on the masses and separation between the two stars, it is clear that the core-mass, from the moment the star begins Roche-lobe overflow, is uniquely correlated with the orbital period of the system. Thus also the final orbital period, $`P_{\mathrm{orb}}^\mathrm{f}`$ ($`2`$–$`10^3`$ days), is expected to be a function of the mass of the resulting white dwarf companion (Savonije 1987). It has also been argued that the core-mass determines the rate of mass transfer (Webbink, Rappaport & Savonije 1983). For a general overview of the evolution of LMXBs – see e.g. Verbunt (1990).
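The core-mass–period correlation sketched here can be illustrated with a back-of-the-envelope calculation: take a toy core-mass–radius relation for the giant (the $`RM_\mathrm{c}^4`$ scaling and its coefficient below are rough illustrative guesses, not the fits used in the papers cited), equate the giant's radius to its Roche-lobe radius via Eggleton's (1983) standard approximation, and apply Kepler's third law at the end of mass transfer, when the donor is reduced to essentially its core.

```python
import numpy as np

G = 6.674e-8                      # gravitational constant, cgs
Msun, Rsun = 1.989e33, 6.957e10   # solar mass (g) and radius (cm)
M_ns = 1.4                        # accretor (neutron star) mass in Msun, illustrative

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation for R_L / a, with q = M_donor / M_accretor."""
    return 0.49 * q ** (2 / 3) / (0.6 * q ** (2 / 3) + np.log(1 + q ** (1 / 3)))

def giant_radius_rsun(mc):
    """Toy core-mass--radius relation R ~ 3500 Mc^4 (coefficient illustrative only)."""
    return 3500.0 * mc ** 4

P_orb = {}
for mc in (0.25, 0.30, 0.35, 0.40, 0.45):
    q = mc / M_ns                 # at the end of mass transfer the donor ~ its core
    a = giant_radius_rsun(mc) * Rsun / roche_lobe_fraction(q)
    P = 2 * np.pi * np.sqrt(a ** 3 / (G * (mc + M_ns) * Msun)) / 86400.0
    P_orb[mc] = P
    print(f"M_WD = {mc:.2f} Msun -> P_orb ~ {P:.0f} d")
```

Even with this crude scaling, the resulting periods rise steeply with core mass and fall within the $`2`$–$`10^3`$ day range quoted above, showing why $`P_{\mathrm{orb}}^\mathrm{f}`$ is such a sensitive probe of the white dwarf mass.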
In this study we also discuss the final post-accretion mass of the neutron star and confront it with observations, as well as with the consequences of the new theory for kaon condensation in the cores of neutron stars, which results in a very soft equation-of-state and a corresponding maximum neutron star mass of only $`1.5M_{}`$ (Brown & Bethe 1994).
In Section 2 we briefly introduce the code, and in Sections 3 and 4 we outline the orbital evolution and the stability criteria for mass transfer. We present the results of our LMXB calculations in Section 5 and in Section 6 we discuss our results and compare with observations. Our conclusions are given in Section 7 and a summary table of our numerical calculations is presented in the Appendix.
## 2 A brief introduction to the numerical computer code
We have used an updated version of the numerical stellar evolution code of Eggleton. This code uses a self-adaptive, non-Lagrangian mesh-spacing which is a function of local pressure, temperature, Lagrangian mass and radius. It treats both convective and semi-convective mixing as a diffusion process and finds a simultaneous and implicit solution of both the stellar structure equations and the diffusion equations for the chemical composition. New improvements are the inclusion of pressure ionization and Coulomb interactions in the equation-of-state, and the incorporation of recent opacity tables, nuclear reaction rates and neutrino loss rates. The most important recent updates of this code are described in Pols et al. (1995; 1998) and some are summarized in Han, Podsiadlowski & Eggleton (1994).
We performed such detailed numerical stellar evolution calculations in our work since they should yield more realistic results than models based on complete, composite or condensed polytropes.
We have included a number of binary interactions in this code in order to carefully follow the details of the mass-transfer process in LMXBs. These interactions include losses of orbital angular momentum due to mass loss, magnetic braking, gravitational wave radiation and the effects of tidal interactions and irradiation of the donor star by hard photons from the accreting neutron star.
## 3 The equations governing orbital evolution
The orbital angular momentum for a circular<sup>1</sup><sup>1</sup>1We assume circular orbits throughout this paper – tidal effects acting on the near RLO giant star will circularize the orbit anyway on a short timescale of $`10^4`$ yr, cf. Verbunt & Phinney (1995). binary is:
$$J_{\mathrm{orb}}=\frac{M_{\mathrm{NS}}M_2}{M}\mathrm{\Omega }a^2$$
(1)
where $`a`$ is the separation between the stellar components, $`M_{\mathrm{NS}}`$ and $`M_2`$ are the masses of the (accreting) neutron star and the companion (donor) star, respectively, $`M=M_{\mathrm{NS}}+M_2`$ and the orbital angular velocity, $`\mathrm{\Omega }=\sqrt{GM/a^3}`$. Here $`G`$ is the gravitational constant. A simple logarithmic differentiation of this equation yields the rate of change in orbital separation:
$$\frac{\dot{a}}{a}=2\frac{\dot{J}_{\mathrm{orb}}}{J_{\mathrm{orb}}}-2\frac{\dot{M}_{\mathrm{NS}}}{M_{\mathrm{NS}}}-2\frac{\dot{M}_2}{M_2}+\frac{\dot{M}_{\mathrm{NS}}+\dot{M}_2}{M}$$
(2)
where the total change in orbital angular momentum is:
$$\frac{\dot{J}_{\mathrm{orb}}}{J_{\mathrm{orb}}}=\frac{\dot{J}_{\mathrm{gwr}}}{J_{\mathrm{orb}}}+\frac{\dot{J}_{\mathrm{mb}}}{J_{\mathrm{orb}}}+\frac{\dot{J}_{\mathrm{ls}}}{J_{\mathrm{orb}}}+\frac{\dot{J}_{\mathrm{ml}}}{J_{\mathrm{orb}}}$$
(3)
The first term on the right side of this equation gives the change in orbital angular momentum due to gravitational wave radiation (Landau & Lifshitz 1958):
$$\frac{\dot{J}_{\mathrm{gwr}}}{J_{\mathrm{orb}}}=-\frac{32G^3}{5c^5}\frac{M_{\mathrm{NS}}M_2M}{a^4}\mathrm{s}^{-1}$$
(4)
where $`c`$ is the speed of light in vacuum. The second term arises due to magnetic braking. This is a combined effect of a magnetically coupled stellar wind and tidal spin-orbit coupling which tend to keep the donor star spinning synchronously with the orbital motion. Observations of low-mass dwarf stars with rotational periods in the range of $`1`$–$`30`$ days (Skumanich 1972) show that even a weak (solar-like) wind will slow down their rotation in the course of time due to interaction of the stellar wind with the magnetic field induced by the differential rotation in the convective envelope. For a star in a close binary system, the rotational braking is compensated by tidal coupling so that orbital angular momentum is converted into spin angular momentum and the binary orbit shrinks. Based on this observed braking law, a correlation between rotational period and age, Verbunt & Zwaan (1981) estimated the braking torque and we find:
$$\frac{\dot{J}_{\mathrm{mb}}}{J_{\mathrm{orb}}}\simeq -0.5\times 10^{-28}f_{\mathrm{mb}}^{-2}\frac{IR_2^2}{a^5}\frac{GM^2}{M_{\mathrm{NS}}M_2}\mathrm{s}^{-1}$$
(5)
where $`R_2`$ is the radius of the donor star, $`I`$ is its moment of inertia and $`f_{\mathrm{mb}}`$ is a constant of order unity (see also discussion by Rappaport, Verbunt & Joss 1983). In order to sustain a significant surface magnetic field we required a minimum depth of $`Z_{\mathrm{conv}}>0.065R_{}`$ for the convective envelope (cf. Pylyser & Savonije 1988 and references therein). Since the magnetic field is believed to be anchored in the underlying radiative layers of the star (Parker 1955), we also required a maximum depth of the convection zone: $`Z_{\mathrm{conv}}/R_2<0.80`$ in order for the process of magnetic braking to operate. These limits imply that magnetic braking operates in low-mass ($`M_2\lesssim 1.5M_{}`$) stars which are not too evolved.
The third term on the right side of eq. (3) describes possible exchange of angular momentum between the orbit and the donor star due to its expansion or contraction. For both this term and the magnetic braking term we estimate whether or not the tidal torque is sufficiently strong to keep the donor star synchronized with the orbit. The tidal torque is determined by considering the effect of turbulent viscosity in the convective envelope of the donor on the equilibrium tide. When the donor star approaches its Roche-lobe tidal effects become strong and lead to synchronous rotation. The corresponding tidal energy dissipation rate was calculated and taken into account in the local energy balance of the star. The tidal dissipation term was distributed through the convective envelope according to the local mixing-length approximation for turbulent convection – see Appendix for further details.
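As a rough numerical cross-check of eqs (1) and (4), the minimal Python sketch below (SI constants; the $`1.3+1.0M_{}`$, 2-day binary is an illustrative choice, not one of our models) shows that the gravitational-wave term alone removes orbital angular momentum on a timescale far longer than a Hubble time, consistent with GWR being unimportant for the wide systems considered in this paper:

```python
import math

G, c = 6.674e-11, 2.998e8       # gravitational constant, speed of light [SI]
MSUN, YR = 1.989e30, 3.156e7    # solar mass [kg], year [s]

def j_orb(m_ns, m2, a):
    """Eq. (1): J = (M_NS*M2/M) * Omega * a^2 with Omega = sqrt(G*M/a^3)."""
    m = m_ns + m2
    return (m_ns * m2 / m) * math.sqrt(G * m / a**3) * a**2

def jdot_gwr_over_j(m_ns, m2, a):
    """Eq. (4): fractional loss rate of orbital angular momentum by GWR [1/s]."""
    m = m_ns + m2
    return -(32.0 * G**3) / (5.0 * c**5) * m_ns * m2 * m / a**4

# A 1.3 + 1.0 Msun binary with P_orb = 2 d (Kepler: a^3 = G*M*P^2/(4 pi^2)):
m_ns, m2 = 1.3 * MSUN, 1.0 * MSUN
p_orb = 2.0 * 86400.0
a = (G * (m_ns + m2) * p_orb**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

tau_gwr = 1.0 / abs(jdot_gwr_over_j(m_ns, m2, a)) / YR
print(tau_gwr > 1e12)   # True: a few times 1e12 yr, so GWR is negligible here
```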
Since we present calculations here for systems with $`P_{\mathrm{orb}}>2`$ days, the most significant contribution to the overall change in orbital angular momentum is caused by loss of mass from the system. This effect is given by:
$$\frac{\dot{J}_{\mathrm{ml}}}{J_{\mathrm{orb}}}=\frac{\alpha +\beta q^2+\delta \gamma (1+q)^2}{1+q}\frac{\dot{M}_2}{M_2}$$
(6)
Here $`q\equiv M_2/M_{\mathrm{NS}}`$ is the mass ratio of the donor over the accretor and $`\alpha `$, $`\beta `$ and $`\delta `$ are the fractions of mass lost from the donor in the form of a fast wind, the mass ejected from the vicinity of the neutron star and from a circumstellar coplanar toroid (with radius, $`a_\mathrm{r}=\gamma ^2a`$), respectively – see van den Heuvel (1994a) and Soberman, Phinney & van den Heuvel (1997). The accretion efficiency of the neutron star is thus given by: $`ϵ=1-\alpha -\beta -\delta `$, or equivalently:
$$\dot{M}_{\mathrm{NS}}=-(1-\alpha -\beta -\delta )\dot{M}_2$$
(7)
where $`\dot{M}_2<0`$. Note that these factors will also be functions of time as the binary evolves. Low-mass (1–2 $`M_{}`$) donor stars do not lose any significant amount of material in the form of a direct wind – except for an irradiated donor in a very close binary system, or an extended giant donor evolving toward the tip of the RGB which loses a significant amount of material in a wind. For the latter type of donors we used Reimers’ (1975) formula to calculate the wind mass-loss rate:
$$\dot{M}_{2\mathrm{wind}}=-4\times 10^{-13}\eta _{\mathrm{RW}}\frac{LR_2}{M_2}M_{}\text{ yr}^{-1}$$
(8)
where the mass, radius and luminosity are in solar units and $`\eta _{\mathrm{RW}}`$ is the mass-loss parameter. We assumed $`\eta _{\mathrm{RW}}=0.5`$ for our work – cf. Renzini (1981) and Sackmann, Boothroyd & Kraemer (1993) for discussions. The mass-loss mechanism involving a circumstellar toroid drains far too much orbital angular momentum from the LMXB and would be dynamically unstable, resulting in a runaway event and the formation of a CE. Also, the existence of binary radio pulsars with orbital periods of several hundred days excludes this scenario as being dominant.
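For orientation, eq. (8) can be evaluated directly; in this minimal Python sketch the giant’s parameters are round illustrative numbers, not taken from our models:

```python
def mdot_reimers(lum, r2, m2, eta_rw=0.5):
    """Eq. (8): Reimers (1975) wind mass-loss rate.
    Inputs in solar units; returns Msun/yr (negative = mass loss)."""
    return -4e-13 * eta_rw * lum * r2 / m2

# A 1 Msun giant with L = 1000 Lsun and R2 = 50 Rsun loses about 1e-8 Msun/yr,
# i.e. comparable to the Eddington accretion rate of a neutron star:
print(mdot_reimers(1000.0, 50.0, 1.0))   # about -1e-8
```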
Hence, for most of the work in this paper we have $`\alpha \ll \beta `$, and we shall assume $`\delta =0`$; for LMXBs with large mass-transfer rates the mode of mass transfer to consider is therefore the “isotropic re-emission” model. In this model all of the matter flows over conservatively from the donor star to an accretion disk in the vicinity of the neutron star, and a fraction $`\beta `$ of this material is then ejected isotropically from the system with the specific orbital angular momentum of the neutron star.
As mentioned above, since we present calculations here for systems with initial periods larger than 2 days, loss of angular momentum due to gravitational wave radiation and magnetic braking (requiring orbital synchronization) will in general not be very significant.
### 3.1 The mass-transfer rate
For every timestep in the evolution calculation of the donor star the mass-transfer rate is calculated from the boundary condition on the stellar mass:
$$\dot{M}_2=-1\times 10^3PS\left[\mathrm{ln}\frac{R_2}{R_\mathrm{L}}\right]^3M_{}\text{ yr}^{-1}$$
(9)
where $`PS[x]=0.5[x+abs(x)]`$ and $`R_\mathrm{L}`$ is the donor’s Roche-radius given by (Eggleton 1983):
$$R_\mathrm{L}=\frac{0.49q^{2/3}a}{0.6q^{2/3}+\mathrm{ln}(1+q^{1/3})}$$
(10)
The orbital separation $`a`$ follows from the orbital angular momentum balance – see eqs (1) and (3). All these variables are included in a Henyey iteration scheme. The above expression for the mass-transfer rate is rather arbitrary, as is the precise amount of Roche-lobe overfill for a given transfer rate; but the results are independent of the precise form, since they are determined by the response of the stellar radius to mass loss.
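Eqs (9) and (10) are simple enough to sketch directly; in the Python snippet below the 0.01% overfill is an arbitrary illustrative value:

```python
import math

def roche_radius_over_a(q):
    """Eq. (10), Eggleton (1983): R_L/a for mass ratio q = M2/M_NS
    (Eggleton quotes ~1% accuracy for all q)."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13**2 / (0.6 * q13**2 + math.log(1.0 + q13))

def mdot_rlo(r2, r_l):
    """Eq. (9): donor mass-loss rate [Msun/yr] from the Roche-lobe overfill;
    PS[x] = 0.5*(x + abs(x)) switches the transfer off when R2 < R_L."""
    x = math.log(r2 / r_l)
    ps = 0.5 * (x + abs(x))
    return -1e3 * ps**3

# Equal-mass case: R_L/a ~ 0.38.
print(round(roche_radius_over_a(1.0), 3))   # 0.379

# An overfill of ln(R2/R_L) = 1e-4 gives a typical sub-Eddington rate:
print(mdot_rlo(1.0001, 1.0))                # about -1e-9 Msun/yr
```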
## 4 Stability criteria for mass transfer
The stability and nature of the mass transfer is very important in binary stellar evolution. It depends on the response of the mass-losing donor star and of the Roche-lobe – see Soberman, Phinney & van den Heuvel (1997) for a nice review. If the mass transfer proceeds on a short timescale (thermal or dynamical) the system is unlikely to be observed during this short phase, whereas if the mass transfer proceeds on a nuclear timescale it is still able to sustain a high enough accretion rate onto the neutron star for the system to be observable as an LMXB for an appreciable interval of time.
When the donor evolves to fill its Roche-lobe (or alternatively, the binary shrinks sufficiently as a result of orbital angular momentum losses) the unbalanced pressure at the first Lagrangian point will initiate mass transfer (Roche-lobe overflow, RLO) onto the neutron star. When the donor star is perturbed by removal of some mass, it falls out of hydrostatic and thermal equilibrium. In the process of re-establishing equilibrium, the star will either grow or shrink – first on a dynamical (sound crossing) timescale, and then on a slower thermal (heat diffusion, or Kelvin–Helmholtz) timescale. The Roche-lobe also changes in response to the mass transfer/loss. As long as the donor star’s Roche-lobe continues to enclose the star the mass transfer is stable. Otherwise it is unstable and proceeds on a dynamical timescale. Hence the question of stability is determined by a comparison of the exponents in power-law fits of radius to mass, $`R\propto M^\zeta `$, for the donor star and the Roche-lobe, respectively:
$$\zeta _{\mathrm{donor}}\equiv \frac{\partial \mathrm{ln}R_2}{\partial \mathrm{ln}M_2}\qquad \zeta _\mathrm{L}\equiv \frac{\partial \mathrm{ln}R_\mathrm{L}}{\partial \mathrm{ln}M_2}$$
(11)
where $`R_2`$ and $`M_2`$ refer to the mass losing donor star. Given $`R_2=R_\mathrm{L}`$ (the condition at the onset of RLO) the initial stability criterion becomes:
$$\zeta _\mathrm{L}\le \zeta _{\mathrm{donor}}$$
(12)
where $`\zeta _{\mathrm{donor}}`$ is the adiabatic or thermal (or somewhere in between) response of the donor star to mass loss. Note that the stability might change during the mass-transfer phase so that initially stable systems become unstable, or vice versa, later in the evolution. The radius of the donor is a function of time and mass and thus:
$$\dot{R}_2=\frac{\partial R_2}{\partial t}|_{\mathrm{M}_2}+R_2\zeta _{\mathrm{donor}}\frac{\dot{M}_2}{M_2}$$
(13)
$$\dot{R}_\mathrm{L}=\frac{\partial R_\mathrm{L}}{\partial t}|_{\mathrm{M}_2}+R_\mathrm{L}\zeta _\mathrm{L}\frac{\dot{M}_2}{M_2}$$
(14)
The second terms follow from eq. (11); the first term of eq. (13) is due to expansion of the donor star as a result of nuclear burning (e.g. shell hydrogen burning on the RGB) and the first term of eq. (14) represents changes in $`R_\mathrm{L}`$ which are not caused by mass transfer such as orbital decay due to gravitational wave radiation and tidal spin-orbit coupling. Tidal coupling tries to synchronize the orbit whenever the rotation of the donor is perturbed (e.g. as a result of magnetic braking or an increase of the moment of inertia while the donor expands). The mass-loss rate of the donor can be found as a self-consistent solution to eqs (13) and (14) assuming $`\dot{R}_2=\dot{R}_\mathrm{L}`$ for stable mass transfer.
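Setting $`\dot{R}_2=\dot{R}_\mathrm{L}`$ in eqs (13) and (14), with $`R_2=R_\mathrm{L}`$ during RLO, gives the self-consistent rate in closed form. A minimal Python sketch (the input numbers are arbitrary illustrative values in solar units and years, not model output):

```python
def mdot_self_consistent(m2, r, dr2dt_nuc, drldt_ext, zeta_donor, zeta_l):
    """Solve dR2/dt = dR_L/dt (eqs 13-14) for the donor mass-loss rate,
    using R2 = R_L = r during stable Roche-lobe overflow."""
    if zeta_donor <= zeta_l:
        # Eq. (12) violated: the lobe shrinks faster than the star
        raise ValueError("zeta_L > zeta_donor: dynamically unstable")
    return m2 * (drldt_ext - dr2dt_nuc) / (r * (zeta_donor - zeta_l))

# A donor expanding by nuclear evolution (dR2/dt > 0) inside a Roche lobe of
# fixed size must lose mass (mdot < 0) to stay within it:
mdot = mdot_self_consistent(1.0, 10.0, 1e-8, 0.0, 0.5, -1.0)
print(mdot < 0.0)   # True
```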
### 4.1 The Roche-radius exponent, $`\zeta _\mathrm{L}`$
For binaries with orbital periods larger than a few days it is a good approximation that $`\dot{J}_{\mathrm{gwr}},\dot{J}_{\mathrm{mb}}\ll \dot{J}_{\mathrm{ml}}`$ and $`\alpha \ll \beta `$ during the RLO mass-transfer phase. Assuming $`\dot{J}_{\mathrm{gwr}}=\dot{J}_{\mathrm{mb}}=0`$ and $`\alpha =\delta =0`$ we can therefore use the analytical expression obtained by Tauris (1996) for an integration of eq. (2) to calculate the change in orbital separation during the LMXB phase (assuming a constant $`\beta `$):
$$\frac{a}{a_0}=\left(\frac{q_0(1-\beta )+1}{q(1-\beta )+1}\right)^{{\scriptscriptstyle \frac{3\beta -5}{1-\beta }}}\left(\frac{q_0+1}{q+1}\right)\left(\frac{q_0}{q}\right)^2\mathrm{\Gamma }_{\mathrm{ls}}$$
(15)
where the subscript ‘0’ denotes initial values. Here we have added an extra factor, $`\mathrm{\Gamma }_{\mathrm{ls}}`$:
$$\mathrm{\Gamma }_{\mathrm{ls}}=\mathrm{exp}\left(-2{\displaystyle \int _0}\frac{(dJ)_{\mathrm{ls}}}{J_{\mathrm{orb}}}\right)$$
(16)
to account for the tidal spin-orbit coupling since $`\dot{J}_{\mathrm{ls}}\ne 0`$. One aim of this study is to evaluate the deviation of $`\mathrm{\Gamma }_{\mathrm{ls}}`$ from unity.
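With $`\mathrm{\Gamma }_{\mathrm{ls}}=1`$, eq. (15) is easy to check against the familiar conservative-case result $`a\propto (M_{\mathrm{NS}}M_2)^{-2}`$ at constant total mass and $`J_{\mathrm{orb}}`$ (a minimal Python sketch; the masses are illustrative):

```python
def a_over_a0(q0, q, beta):
    """Eq. (15) with Gamma_ls = 1: orbital expansion as q = M2/M_NS
    evolves from q0 to q, for constant ejection fraction beta."""
    e = (3.0 * beta - 5.0) / (1.0 - beta)
    return ((q0 * (1.0 - beta) + 1.0) / (q * (1.0 - beta) + 1.0)) ** e \
           * (q0 + 1.0) / (q + 1.0) * (q0 / q) ** 2

# Conservative case (beta = 0): transfer from (M2, M_NS) = (1.0, 1.3) Msun to
# (0.3, 2.0) Msun conserves M and J, so a/a0 = (1.3*1.0 / (2.0*0.3))**2:
print(abs(a_over_a0(1.0 / 1.3, 0.3 / 2.0, 0.0) - (1.3 / 0.6) ** 2) < 1e-9)   # True
```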
If we combine eqs (7), (10) and (15), assuming $`\mathrm{\Gamma }_{\mathrm{ls}}=1`$, we obtain analytically:
$`\zeta _\mathrm{L}`$ $`=`$ $`{\displaystyle \frac{\partial \mathrm{ln}R_\mathrm{L}}{\partial \mathrm{ln}M_2}}=\left({\displaystyle \frac{\partial \mathrm{ln}a}{\partial \mathrm{ln}q}}+{\displaystyle \frac{\partial \mathrm{ln}(R_\mathrm{L}/a)}{\partial \mathrm{ln}q}}\right){\displaystyle \frac{\partial \mathrm{ln}q}{\partial \mathrm{ln}M_2}}`$ (17)
$`=`$ $`[1+(1-\beta )q]\psi +(5-3\beta )q`$
where
$$\psi =\left[-\frac{4}{3}-\frac{q}{1+q}-\frac{2/5+\frac{1}{3}q^{-1/3}(1+q^{1/3})^{-1}}{0.6+q^{-2/3}\mathrm{ln}(1+q^{1/3})}\right]$$
(18)
In the limiting case where $`q\to 0`$ (when the accretor is much heavier than the donor star) we find:
$$\underset{q\to 0}{lim}\zeta _\mathrm{L}=-5/3$$
(19)
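Eqs (17)–(19) are straightforward to verify numerically; a quick Python check of the limit and of the sign behavior discussed below:

```python
import math

def psi(q):
    """Eq. (18)."""
    q13 = q ** (1.0 / 3.0)
    return (-4.0 / 3.0 - q / (1.0 + q)
            - (2.0 / 5.0 + (1.0 / 3.0) / (q13 * (1.0 + q13)))
              / (0.6 + math.log(1.0 + q13) / q13**2))

def zeta_roche(q, beta):
    """Eq. (17): Roche-radius exponent zeta_L(q, beta)."""
    return (1.0 + (1.0 - beta) * q) * psi(q) + (5.0 - 3.0 * beta) * q

# Eq. (19): zeta_L -> -5/3 as q -> 0, independent of beta:
print(abs(zeta_roche(1e-9, 0.7) + 5.0 / 3.0) < 1e-2)         # True

# R_L grows (zeta_L < 0) for a light donor, shrinks for a heavy one:
print(zeta_roche(0.5, 0.0) < 0.0 < zeta_roche(1.5, 0.0))     # True
```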
The behavior of $`\zeta _\mathrm{L}(q,\beta )`$ for LMXBs is shown in Fig. 1. We note that $`\zeta _\mathrm{L}`$ does not depend strongly on $`\beta `$. This figure is quite useful to get an idea of the stability of a given mass transfer when comparing with $`\zeta `$ for the donor star. We see that in general the Roche-lobe radius $`R_\mathrm{L}`$ increases ($`\zeta _\mathrm{L}<0`$) when material is transferred from a light donor to a heavier NS ($`q<1`$) and correspondingly $`R_\mathrm{L}`$ decreases ($`\zeta _\mathrm{L}>0`$) when material is transferred from a heavier donor to a lighter NS ($`q>1`$). This behavior is easily understood from the bottom panel of the same figure where we have plotted $`\partial \mathrm{ln}(a)/\partial \mathrm{ln}(q)`$ as a function of $`q`$. The sign of this quantity is important since it tells whether the orbit expands or contracts in response to mass transfer (note $`\dot{q}<0`$). We notice that the orbit always expands when $`q<1`$ and it always shrinks when $`q>1.28`$, whereas for $`1<q<1.28`$ it can still expand if $`\beta >0`$. There is a point at $`q=3/2`$ where $`\partial \mathrm{ln}(a)/\partial \mathrm{ln}(q)=2/5`$, independent of $`\beta `$. It should be mentioned that if $`\beta >0`$ then, in some cases, it is actually possible to decrease the separation $`a`$ between the two stellar components while increasing $`P_{\mathrm{orb}}`$ at the same time!
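The $`\beta `$-independent point at $`q=3/2`$ is easy to verify with a central finite difference on eq. (15) (a quick Python check; the step size is arbitrary):

```python
import math

def ln_a(q, beta):
    """ln(a) from eq. (15) with Gamma_ls = 1, up to a q-independent constant."""
    return ((5.0 - 3.0 * beta) / (1.0 - beta)) * math.log(q * (1.0 - beta) + 1.0) \
           - math.log(q + 1.0) - 2.0 * math.log(q)

def dlna_dlnq(q, beta, h=1e-6):
    """Central finite difference of d(ln a)/d(ln q)."""
    return (ln_a(q * math.exp(h), beta) - ln_a(q * math.exp(-h), beta)) / (2.0 * h)

# d(ln a)/d(ln q) = 2/5 at q = 3/2 for any ejection fraction beta:
print(all(abs(dlna_dlnq(1.5, b) - 0.4) < 1e-6 for b in (0.0, 0.3, 0.7)))   # True
```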
## 5 Results
We have evolved a total of a few hundred different LMXB systems; 121 of these are listed in Table A1 in the Appendix. We chose donor star masses of $`1.0\le M_2/M_{}\le 2.0`$ and initial orbital periods of $`2.0\le P_{\mathrm{orb}}^{\mathrm{ZAMS}}/\mathrm{days}\le 800`$. We also evolved donors with different chemical compositions and mixing-length parameters. In all cases we assumed an initial neutron star mass of $`1.3M_{}`$.
In Fig. 2 we show the evolution of four LMXBs. As a function of donor star mass ($`M_2`$) or its age since the ZAMS, we have plotted the orbital period ($`P_{\mathrm{orb}}`$), the mass-loss rate of the donor as well as the mass-loss rate from the system ($`\dot{M}_2`$ and $`\dot{M}`$), the radius exponent ($`\zeta `$) of the donor and its Roche-lobe and finally the depth of the donor’s convection zone ($`Z_{\mathrm{conv}}/R_2`$). Note that we have zoomed in on the age interval which corresponds to the mass-transfer phase. As an example, we have chosen two different initial donor masses ($`1.0M_{}`$ and $`1.6M_{}`$) – each with two different initial orbital periods (3.0 and 60.0 days) of the neutron star (NS) and its ZAMS companion. The evolutionary tracks of the donor stars are plotted in the HR-diagram in Fig. 3. We will now discuss the evolution of each of these systems in more detail.
### 5.1 Fig. 2a
In Fig. 2a we adopted $`M_2^\mathrm{i}=1.0M_{}`$ and $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}=3.0`$ days. In this case the time it takes for the donor to become a (sub)giant and fill its Roche-lobe, to initiate mass transfer, is 11.89 Gyr. Before the donor fills its Roche-lobe the expansion due to shell hydrogen burning causes its moment of inertia to increase which tends to slow down the rotation of the star. However, the tidal torques act to establish synchronization by transferring angular momentum to the donor star at the expense of orbital angular momentum. Hence at the onset of the mass transfer ($`A`$) the orbital period has decreased from the initial $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}=3.0`$ days to $`P_{\mathrm{orb}}^{\mathrm{RLO}}=1.0`$ days and the radius is now $`R_2=R_\mathrm{L}=2.0R_{}`$.
We notice that the mass-loss rate of the donor star remains sub-Eddington ($`|\dot{M}_2|<\dot{M}_{\mathrm{Edd}}\simeq 1.5\times 10^{-8}M_{}\mathrm{yr}^{-1}`$ for hydrogen-rich matter) during the entire mass transfer<sup>2</sup><sup>2</sup>2 Strictly speaking $`\dot{M}_{\mathrm{Edd}}=\dot{M}_{\mathrm{Edd}}(R_{\mathrm{NS}})`$ is slightly reduced during the accretion phase since the radius of the neutron star decreases with increasing mass (e.g. for an ideal $`n`$-gas polytrope: $`R_{\mathrm{NS}}\propto M_{\mathrm{NS}}^{-1/3}`$). However, this only amounts to a correction of less than 20% for various equations-of-state, and thus we have not taken this effect into account.. Thus we expect all the transferred material to be accreted onto the neutron star, if disk instabilities and propeller effects can be neglected (see Sections 5.7 and 6.4). Therefore we have no mass loss from the system in this case – i.e. $`\dot{M}=0`$. The duration of the mass-transfer phase for this system is quite long: about 1.0 Gyr ($`11.89`$–$`12.91`$ Gyr). At an age of $`t\simeq 12.65`$ Gyr ($`P_{\mathrm{orb}}=5.1`$ days; $`M_2=0.317M_{}`$) the donor star detaches slightly from its Roche-lobe ($`d`$) and the mass transfer ceases temporarily for about 25 Myr – see next subsection for an explanation.
The Roche-radius exponent calculated from eq. (17) is plotted as a dotted line as a function of $`M_2`$ in the upper right panel. However, our numerical calculations (full line) show that tidal effects are significant and increase $`\zeta `$ by about 0.5–0.8 until $`M_2\simeq 0.54M_{}`$ ($`p`$). At this point the magnetic braking is assumed to switch off, since $`Z_{\mathrm{conv}}/R_2>0.80`$. Note that during the mass-transfer phase $`\zeta \simeq \zeta _\mathrm{L}`$ and, as long as the mass transfer is not unstable on a dynamical timescale, we typically have in our code: $`1\times 10^{-4}<\mathrm{ln}(R_2/R_\mathrm{L})<7\times 10^{-3}`$ and hence practically $`\zeta =\zeta _\mathrm{L}`$.
The final outcome for this system is a BMSP with an orbital period of $`P_{\mathrm{orb}}^\mathrm{f}=9.98`$ days and a He white dwarf (WD) with a mass of $`M_{\mathrm{WD}}=0.245M_{}`$ ($`B`$). The final mass of the NS is $`M_{\mathrm{NS}}=2.06M_{}`$, since we assumed all the material was accreted onto the NS given $`|\dot{M}_2|<\dot{M}_{\mathrm{Edd}}`$ during the entire X-ray phase. However, in Section 6 we will discuss this assumption and the important question of disk instabilities and the propeller mechanism in more detail.
### 5.2 Fig. 2b
In Fig. 2b we adopted $`M_2^\mathrm{i}=1.6M_{}`$ and $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}=3.0`$ days. The RLO is initiated at an age of $`t=2.256`$ Gyr when $`P_{\mathrm{orb}}^{\mathrm{RLO}}=2.0`$ days and $`R_2=3.8R_{}`$ ($`A`$). In this case the mass-transfer rate is super-Eddington ($`|\dot{M}_2|>\dot{M}_{\mathrm{Edd}}`$, cf. dashed line) at the beginning of the mass-transfer phase. In our adopted model of “isotropic re-emission” we assume all material in excess of the Eddington accretion limit to be ejected from the system, while carrying with it the specific orbital angular momentum of the neutron star. Hence $`|\dot{M}|=|\dot{M}_2|-\dot{M}_{\mathrm{Edd}}`$. Initially $`|\dot{M}_2|\simeq 10^2\dot{M}_{\mathrm{Edd}}`$ at the onset of the RLO and then $`|\dot{M}_2|`$ decreases from 10 to $`1\dot{M}_{\mathrm{Edd}}`$ at $`M_2\simeq 0.7M_{}`$ ($`e`$) before it becomes sub-Eddington for the rest of the mass-transfer phase. Mass loss from the system as a result of a Reimers wind in the red giant stage prior to RLO ($`A`$) is seen to be less than $`10^{-11}M_{}`$ yr<sup>-1</sup>. By comparing the different panels for the evolution, we notice that the initial super-Eddington mass-transfer phase ($`A`$–$`e`$) lasts for 22 Myr. In this interval the companion mass decreases from $`1.6M_{}`$ to $`0.72M_{}`$. Then the system enters a phase ($`e`$–$`d`$) of sub-Eddington mass transfer at $`P_{\mathrm{orb}}`$=5.31 days which lasts for 41 Myr. When $`M_2=0.458M_{}`$, and $`P_{\mathrm{orb}}`$=13.6 days, the system detaches and the X-ray source is extinguished for about 40 Myr ($`d`$), cf. gray-shaded area. The temporary detachment is caused by a transient contraction of the donor star when its hydrogen shell source moves into the hydrogen-rich layers left behind by the contracting convective core during the early main-sequence stage. At the same time the convective envelope has penetrated inwards to its deepest point, i.e. almost, but not quite, to the H-shell source.
The effect of a transient contraction of single low-mass stars evolving up the RGB, as a result of a sudden discontinuity in the chemical composition, has been known for many years (Thomas 1967; Kippenhahn & Weigert 1990) but has hitherto escaped attention in binary evolution. After the transient contraction the star re-expands enough to fill its Roche-lobe again and further ascends the giant branch. The corresponding final phase of mass transfer ($`d`$–$`B`$) is sub-Eddington ($`|\dot{M}_2|\simeq 0.2\dot{M}_{\mathrm{Edd}}`$) and lasts for 60 Myr. The end product of this binary is a recycled pulsar and a He-WD companion with an orbital period of 41.8 days. In this case we obtain $`M_{\mathrm{WD}}=0.291M_{}`$ and $`M_{\mathrm{NS}}=2.05M_{}`$.
The total duration of the mass-transfer phase during which the system is an active X-ray source is $`t_\mathrm{X}=123`$ Myr (excluding the quiescence phase of 40 Myr) which is substantially shorter compared to the case discussed above (Fig. 2a).
The relatively wide final orbit of this system, compared to the case discussed above with the $`1.0M_{}`$ donor, is caused by the super-Eddington mass transfer, during which a total of $`0.55M_{}`$ is lost from the system.
The numerical calculation of $`\zeta `$ for this donor star (full line) agrees very well with our simple analytical expression (dotted line), which indicates that the effects of the tidal spin-orbit interactions are not very significant in this case.
### 5.3 Fig. 2c
In this figure we adopted $`M_2^\mathrm{i}=1.0M_{}`$ and $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}=60.0`$ days. The RLO is initiated ($`A`$) at an age of $`t\simeq 12.645`$ Gyr. At this stage the mass of the donor has decreased to $`M_2^{\mathrm{RLO}}=0.976M_{}`$ as a result of the radiation-driven wind of the giant star. However, the orbital period has also decreased ($`P_{\mathrm{orb}}^{\mathrm{RLO}}=58.1`$ days) and thus the shrinking of the orbit due to tidal spin-orbit coupling dominates over the widening of the orbit caused by the wind mass loss.
The total interval of mass transfer is quite short, $`t_\mathrm{X}=13.3`$ Myr. The mass-transfer rate is super-Eddington during the entire evolution ($`|\dot{M}_2|\simeq 1`$–$`6\dot{M}_{\mathrm{Edd}}`$) and therefore the NS only accretes very little material: $`\mathrm{\Delta }M_{\mathrm{NS}}=\dot{M}_{\mathrm{Edd}}\mathrm{\Delta }t_{\mathrm{mt}}\simeq 0.20M_{}`$. The reason for the high mass-loss rate of the donor star is its deep convective envelope (see lower right panel). Since the initial configuration of this system is a very wide orbit, the donor will be rather evolved on the RGB when it fills its Roche-lobe ($`R_2=29.3R_{}`$ and $`P_{\mathrm{orb}}^{\mathrm{RLO}}=58.1`$ days). Hence the donor swells up in response to mass loss (i.e. $`\zeta <0`$) as a result of the super-adiabatic temperature gradient in its giant envelope. The radius exponent is well described by our analytical formula in this case. The final outcome of this system is a wide-orbit ($`P_{\mathrm{orb}}^\mathrm{f}=382`$ days) BMSP with a $`0.40M_{}`$ He-WD companion.
### 5.4 Fig. 2d
Here we adopted $`M_2^\mathrm{i}=1.6M_{}`$ and $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}=60.0`$ days. At the onset of the RLO the donor mass is $`M_2^{\mathrm{RLO}}=1.582M_{}`$. In this case we not only have a giant donor with a deep convective envelope; it is also (initially) heavier than the accreting NS. Both of these circumstances make it difficult for the donor to remain within its Roche-lobe once the mass transfer is initiated. It is likely that such systems, with huge mass-transfer rates, evolve into a phase where matter piles up around the neutron star and presumably forms a growing, bloated cloud engulfing it. The system could avoid a spiral-in if it manages to evaporate the bulk of the transferred matter from the surface of the (hot) accretion cloud via the liberated accretion energy. This scenario would require the radius of the accretion cloud, $`r_{\mathrm{cl}}`$, to be larger than $`R_{\mathrm{NS}}(|\dot{M}|/\dot{M}_{\mathrm{Edd}})`$ in order for the liberated accretion energy to eject the transferred material. However, if there is insufficient gas cooling $`r_{\mathrm{cl}}`$ could be smaller from an energetic point of view. At the same time $`r_{\mathrm{cl}}`$ must be smaller than the Roche-lobe radius of the neutron star (cf. eq. 10 with $`q=M_{\mathrm{NS}}/M_2`$) during the entire evolution. In that case our simple isotropic re-emission model would approximately remain valid. Assuming this to be the case we find that the mass-transfer rate is extremely high: $`|\dot{M}_2|\simeq 10^4\dot{M}_{\mathrm{Edd}}`$ and more than $`0.5M_{}`$ is lost from the donor (and the system) in only a few $`10^3`$ yr. The system survives and the orbital period increases from 54.5 days to 111 days during this short phase.
After this extremely short mass-transfer epoch, with an ultra-high mass-transfer rate, the donor star relaxes ($`d`$) and shrinks inside its Roche-lobe for 2.5 Myr when $`M_2=0.98M_{}`$. The mass transfer is resumed for another 7.5 Myr at a more moderate super-Eddington rate ($`d`$–$`B`$). The final outcome is a binary pulsar with a $`0.43M_{}`$ He-WD companion and an orbital period of 608 days. Though the NS only accretes $`0.10M_{}`$ as a result of the short integrated accretion phase, it will probably be spun up sufficiently to become a millisecond pulsar, since millisecond pulsars evidently are also formed in systems which evolve e.g. through a CE with similar (or even shorter) phases of accretion (van den Heuvel 1994b).
The initial extreme evolution of this system causes an offset in $`\zeta `$ until the more moderate mass-transfer phase ($`d`$–$`B`$) continues at $`M_2=0.98M_{}`$. It should be noted that a system like this is very unlikely to be observed in the ultra-high mass-transfer state due to the very short duration ($`<10^4`$ yr) of this phase.
### 5.5 $`M_2>2M_{}`$, runaway mass transfer and onset of a CE
The latter example above illustrates very well the situation near the threshold for unstable mass transfer on a dynamical timescale and the onset of a CE evolution<sup>3</sup><sup>3</sup>3 We notice that this very high mass-transfer rate might lead to hyper-critical accretion onto the neutron star and a possible collapse of the NS into a black hole if the equation-of-state is soft (cf. Chevalier 1993; Brown & Bethe 1994; Brown 1995). However, new results obtained by Chevalier (1996) including the centrifugal effects of a rotating infalling gas might change this conclusion.. If the donor star is heavier than $`1.8M_{}`$ a critical overflow is likely to occur, since the orbit shrinks in response to mass transfer ($`q>1.28`$, cf. Section 4). This is also the situation if $`P_{\mathrm{orb}}`$ is large, because the donor in that case develops a deep convective envelope which causes it to expand in response to mass loss, so that a runaway mass transfer sets in. When a runaway mass transfer sets in we were not able to prevent the donor from critically overflowing its Roche-lobe, and our code breaks down. At this stage the neutron star is eventually embedded in a CE with its companion and it will spiral in toward the center of its companion as a result of removal of orbital angular momentum by the drag force acting on it<sup>4</sup><sup>4</sup>4 However, even binaries with donor stars of 2–6$`M_{}`$ might survive the mass transfer avoiding a spiral-in phase in case the envelope of the donor is still radiative at the onset of the RLO.. The final result of the CE depends mainly on the orbital period and the mass of the giant’s envelope. If there is enough orbital energy available (i.e. if $`P_{\mathrm{orb}}`$ is large enough at the onset of the CE), then the entire envelope of the giant can be expelled as a result of the liberated orbital energy, which is converted into kinetic energy that provides an outward motion of the envelope, decoupling it from its core.
This leaves behind a tight binary with a heavy WD (the core of the giant) and a moderately recycled pulsar. There are five such systems observed in our Galaxy. They all have a CO-WD and $`P_{\mathrm{orb}}=6`$–$`8`$ days. These are the so-called class C BMSPs.
If there is not enough orbital energy available to expel the envelope, then the NS spirals in completely to the center of the giant and a Thorne-Żytkow object is formed. Such an object might evolve into a single millisecond pulsar (e.g. van den Heuvel 1994a) or may collapse into a black hole (Chevalier 1996).
### 5.6 The ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation
We have derived new ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlations based on the outcome of the 121 LMXB models calculated for this work. They are shown in Fig. 4a (the top panel). We considered models with donor star masses ($`M_2`$) in the interval $`1.0`$–$`2.0M_{}`$, chemical compositions ranging from Population I (X=0.70, Z=0.02) to Population II (X=0.75, Z=0.001) and convective mixing-length parameters $`\alpha \equiv l/H_\mathrm{p}`$ from 2 to 3 (here $`l`$ is the mixing length and $`H_\mathrm{p}`$ is the local pressure scaleheight). Following Rappaport et al. (1995) we chose as our standard model $`M_2=1.0M_{}`$, Population I composition and $`\alpha =2`$, cf. the thick line in Fig. 4. The upper limit of $`M_2`$ is set by the requirement that the mass transfer in the binary must be dynamically stable, and the lower limit by the requirement that the donor star must evolve off the main sequence within an interval of time given by $`t_{\mathrm{ms}}<t_{\mathrm{Hubble}}-t_{\mathrm{gal}}-t_{\mathrm{cool}}`$. Here $`t_{\mathrm{Hubble}}\simeq 15`$ Gyr is the age of the Universe, $`t_{\mathrm{gal}}\simeq 1`$ Gyr is the minimum time between the Big Bang and the formation of our Milky Way and $`t_{\mathrm{cool}}\simeq 3`$ Gyr is a typical low value of the WD companion cooling ages, following the mass-transfer phase, as observed in BMSPs (Hansen & Phinney 1998). We thus find $`M_2\gtrsim 1.0M_{}`$ as a conservative lower limit.
The first thing to notice is that the correlation is more or less independent of the initial donor star mass ($`M_2`$) – only for $`M_2=2.0M_{}`$ (where the mass transfer becomes dynamically unstable anyway for $`P_{\mathrm{orb}}^\mathrm{i}\gtrsim 4.2`$ days) do we see a slight deviation. This result is expected if $`M_{2\mathrm{core}}`$ (and therefore $`R_2`$ and $`P_{\mathrm{orb}}`$) is independent of $`M_2`$. We have checked this statement using our calculations for an evolved donor star on the RGB. As an example, in Table 1 we have listed $`L`$, $`T_{\mathrm{eff}}`$ and $`M_{2\mathrm{core}}`$ as a function of $`M_2`$ at the moment the donor has evolved to a radius of $`50.0R_{}`$. In addition we have listed the mass of the donor’s envelope at the moment $`R_2=50.0R_{}`$.
We conclude that, for a given chemical composition and mixing-length parameter, $`M_{2\mathrm{core}}`$ is practically independent of $`M_2`$ (to within a few per cent) and that mass loss from the envelope via RLO likewise has little influence on the ($`R_2,M_{2\mathrm{core}}`$) correlation. For other choices of $`R_2`$ the differences were found to be smaller. In Fig. 5 we have shown $`P_{\mathrm{orb}}`$ (which tracks $`R_2`$) as a function of $`M_{2\mathrm{core}}`$.
Much more important is the theoretical uncertainty in the value of the convective mixing-length parameter, and most important of all is the initial chemical composition of the donor star. We have estimated a ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation from an overall best fit to all the models considered in Table A1 and obtain ($`P_{\mathrm{orb}}`$ in days):
$$\frac{M_{\mathrm{WD}}}{M_{}}=\left(\frac{P_{\mathrm{orb}}}{b}\right)^{1/a}+c$$
(20)
where, depending on the chemical composition of the donor,
$$(a,b,c)=\{\begin{array}{cccc}4.50\hfill & 1.2\times 10^5\hfill & 0.120\hfill & \text{ Pop.I}\hfill \\ 4.75\hfill & 1.1\times 10^5\hfill & 0.115\hfill & \text{ Pop.I+II}\hfill \\ 5.00\hfill & 1.0\times 10^5\hfill & 0.110\hfill & \text{ Pop.II}\hfill \end{array}$$
(21)
This formula is valid for binaries with $`0.18\lesssim M_{\mathrm{WD}}^{\mathrm{He}}/M_{}\lesssim 0.45`$. The uncertainty in the initial chemical abundances of the donor star results in a spread of a factor $`\sim 1.4`$ about the median (Pop.I+II) value of $`P_{\mathrm{orb}}`$ at any given value of $`M_{\mathrm{WD}}`$. The spread in the ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation arises solely from the spread in the ($`R_2,M_{2\mathrm{core}}`$) correlation as a result of the different chemical abundances, and/or $`\alpha `$, of the giant donor star ascending the RGB.
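The correlation of eqs. (20)–(21) is straightforward to evaluate numerically. The sketch below is our own illustrative code (the function and table names are not from the paper); it returns the He-WD mass in solar units for an orbital period in days:

```python
import math

# Coefficients (a, b, c) of eq. (21) for the three donor compositions;
# P_orb is in days and masses are in units of M_sun.
COEFFS = {
    "Pop.I":    (4.50, 1.2e5, 0.120),
    "Pop.I+II": (4.75, 1.1e5, 0.115),
    "Pop.II":   (5.00, 1.0e5, 0.110),
}

def m_wd(p_orb_days, composition="Pop.I+II"):
    """White-dwarf mass from eq. (20): M_WD = (P_orb/b)**(1/a) + c."""
    a, b, c = COEFFS[composition]
    return (p_orb_days / b) ** (1.0 / a) + c

# Spread due to the unknown chemical composition at a fixed orbital period:
for comp in COEFFS:
    print(comp, round(m_wd(76.5, comp), 3))
```

At $`P_{\mathrm{orb}}=76.5`$ days, for instance (the period of PSR J2019+2425 discussed in Section 6.3), the three compositions give M_WD between roughly 0.31 and 0.35 M_sun, illustrating the composition spread discussed above.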
If we compare our calculations with the work of Rappaport et al. (1995) we find that our best fit results in significantly lower values of $`P_{\mathrm{orb}}`$ for a given mass of the WD in the interval $`0.18\lesssim M_{\mathrm{WD}}/M_{}\lesssim 0.35`$. It is also notable that these authors find a maximum spread in $`P_{\mathrm{orb}}`$ of a factor $`\sim 2.4`$ at fixed $`M_{\mathrm{WD}}`$. For $`0.35\lesssim M_{\mathrm{WD}}/M_{}\lesssim 0.45`$ their results agree with our calculations to within 20 %. A fit of our eq. (20) to the results of Rappaport et al. (1995) yields: $`a=5.75`$, $`b=8.42\times 10^4`$ and $`c=0`$ (to an accuracy within 1 % for $`0.18\lesssim M_{\mathrm{WD}}/M_{}\lesssim 0.45`$) for their donor models with Population I chemical composition and $`\alpha =2.0`$. For their Pop. II donors we obtain $`b=3.91\times 10^4`$ and the same values of $`a`$ and $`c`$ as above. We also obtain somewhat lower values of $`P_{\mathrm{orb}}`$, for a given mass of the WD, compared with the results of Ergma, Sarna & Antipova (1998).
The discrepancy between the results of the above-mentioned papers and our work is a result of different input physics for the stellar evolution (cf. Section 2). Ergma, Sarna & Antipova (1998) use models based on Paczyński’s code, and Rappaport et al. (1995) used an older version of Eggleton’s code than the one used for this work. In our calculations we have also included the effects of tidal dissipation. However, these effects cannot account for the discrepancy since in this paper we only considered binaries with $`P_{\mathrm{orb}}^\mathrm{i}>2`$ days and thus the effects of the tidal forces are relatively small (the contribution to the stellar luminosity from dissipation of tidal energy is only $`L_{\mathrm{tidal}}\sim 0.05L_{\mathrm{nuc}}`$ for $`P_{\mathrm{orb}}=2`$ days).
In analogy with Rappaport et al. (1995) and Ergma, Sarna & Antipova (1998) we find that, for a given value of $`M_{\mathrm{WD}}`$, $`P_{\mathrm{orb}}`$ decreases with increasing $`\alpha `$ and increases with increasing metallicity. We find $`M_{\mathrm{WD}}^{z=0.001}\simeq M_{\mathrm{WD}}^{z=0.02}+0.03M_{}`$, which gives a dependence on metallicity that is stronger, by a factor $`\sim 2`$, than in the work of Ergma, Sarna & Antipova (1998).
It should be noted that the ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation is independent of $`\beta `$ (the fraction of transferred material lost from the system), the mode of mass loss and the degree of magnetic braking since, as demonstrated above, the relationship between $`R_2`$ and $`M_{2\mathrm{core}}`$ of the giant donors remains unaffected by the exterior stellar conditions governing the process of mass transfer – see also Alberts et al. (1996). For an individual binary, however, $`P_{\mathrm{orb}}`$ and $`M_{\mathrm{WD}}`$ do depend on $`\beta `$ and increase with increasing values of $`\beta `$ (see e.g. the bottom panel of Fig. 1 for $`q<1`$, which always applies near the end of the mass-transfer phase).
As mentioned in our examples earlier in this section, there is a competition between the wind mass loss and the tidal spin-orbit interactions for determining the orbital evolution prior to the RLO-phase. This is demonstrated in Fig. 6 where we have shown the changes in $`P_{\mathrm{orb}}`$ and $`M_2`$, from the ZAMS stage to the onset of the RLO, as a function of the initial ZAMS orbital period. It is seen that only for binaries with $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}>100`$ days will the wind mass-loss be efficient enough to widen the orbit. For shorter periods the effects of the spin-orbit interactions dominate (caused by expansion of the donor) and loss of orbital angular momentum causes the orbit to shrink.
### 5.7 The ($`P_{\mathrm{orb}},M_{\mathrm{NS}}`$) anti-correlation
We now investigate the interesting relationship between the final mass of the NS and the final orbital period. In Fig. 4b (the bottom panel) we have plotted $`P_{\mathrm{orb}}`$ as a function of the potential maximum mass of the recycled pulsar, $`M_{\mathrm{NS}}^{}`$. This value is the final mass of the NS if mass loss resulting from instabilities in the accretion process is neglected. Another (smaller) effect which has also been ignored is the mass deficit of the accreted material as it falls deep down the gravitational potential well of the NS. The gravitational mass of a NS (as measured by a distant observer through its gravitational effects) contains not only the rest mass of the baryons, but also the mass equivalent of the negative binding energy, $`\mathrm{\Delta }M_{\mathrm{def}}=E_{\mathrm{bind}}/c^2<0`$. Depending on the equation-of-state, $`\mathrm{\Delta }M_{\mathrm{def}}`$ amounts to $`\sim `$10% of the gravitational mass (Shapiro & Teukolsky 1983). This is hence also the efficiency of radiative emission in units of available rest-mass energy incident on the NS. Thus we can express the actual post-accretion gravitational mass of a recycled pulsar as ($`dm_2<0`$):
$$M_{\mathrm{NS}}=M_{\mathrm{NS}}^\mathrm{i}+\left[-\int _{M_2}^{M_{\mathrm{WD}}}(1-\beta ^{\prime })𝑑m_2-\mathrm{\Delta }M_{\mathrm{dp}}\right]k_{\mathrm{def}}$$
(22)
Here $`\beta ^{\prime }\equiv \mathrm{max}\left((|\dot{M}_2|-\dot{M}_{\mathrm{Edd}})/|\dot{M}_2|,0\right)`$ is the fraction of material lost in a relativistic jet as a result of super-Eddington mass transfer; $`\mathrm{\Delta }M_{\mathrm{dp}}=\mathrm{\Delta }M_{\mathrm{disk}}+\mathrm{\Delta }M_{\mathrm{prop}}`$ is the sum of matter lost from the accretion disk (as a result of viscous instabilities or a wind corona) and matter ejected near the pulsar magnetosphere as a result of the centrifugal propeller effect; and finally $`k_{\mathrm{def}}=\frac{M_{\mathrm{NS}}}{M_{\mathrm{NS}}-\mathrm{\Delta }M_{\mathrm{def}}}\approx 0.90`$ is a factor that expresses the ratio of gravitational mass to rest mass of the material accreted onto the NS. $`M_{\mathrm{NS}}^{}`$ used in Fig. 4b is given by the expression above assuming $`\mathrm{\Delta }M_{\mathrm{dp}}=0`$ and $`\mathrm{\Delta }M_{\mathrm{def}}=0`$ ($`k_{\mathrm{def}}=1`$).
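The bookkeeping in eq. (22) can be made concrete with a discretized sketch. The following is our own toy code (the Eddington rate, timesteps and transfer rates are placeholder numbers, not values from the paper); it caps the retained accretion rate at $`\dot{M}_{\mathrm{Edd}}`$ through $`\beta ^{\prime }`$ and applies the $`\mathrm{\Delta }M_{\mathrm{dp}}`$ and $`k_{\mathrm{def}}`$ corrections:

```python
def post_accretion_ns_mass(m_ns_i, mdot2_history, dt,
                           mdot_edd=1.5e-8, dM_dp=0.0, k_def=0.90):
    """Post-accretion gravitational NS mass, following eq. (22).

    mdot2_history -- |Mdot_2| in M_sun/yr at successive steps of length dt (yr);
    beta' = max((|Mdot_2| - Mdot_Edd)/|Mdot_2|, 0) is lost in a jet,
    dM_dp models disk-instability/propeller losses, and k_def converts
    accreted rest mass into gravitational mass.
    """
    accreted = 0.0
    for mdot2 in mdot2_history:
        beta_prime = max((mdot2 - mdot_edd) / mdot2, 0.0) if mdot2 > 0 else 0.0
        accreted += (1.0 - beta_prime) * mdot2 * dt  # = min(mdot2, mdot_edd)*dt
    return m_ns_i + (accreted - dM_dp) * k_def

# Sub-Eddington transfer: the NS keeps essentially everything ...
print(post_accretion_ns_mass(1.30, [1.0e-9] * 100, dt=1.0e6))
# ... while a highly super-Eddington phase is capped at Mdot_Edd:
print(post_accretion_ns_mass(1.30, [1.0e-6] * 100, dt=1.0e5))
```

With $`k_{\mathrm{def}}=1`$ and $`\mathrm{\Delta }M_{\mathrm{dp}}=0`$ the first call would simply add the transferred 0.1 M_sun; the default $`k_{\mathrm{def}}=0.90`$ reduces the gain by the mass deficit.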
We see that the ($`P_{\mathrm{orb}},M_{\mathrm{NS}}^{}`$) anti-correlation is more or less independent of the chemical composition and $`\alpha `$ of the donor star, whereas it depends strongly on $`M_2`$ for $`P_{\mathrm{orb}}\lesssim 50`$ days. This anti-correlation between $`P_{\mathrm{orb}}`$ and $`M_{\mathrm{NS}}^{}`$ is quite easy to understand: binaries with large initial orbital periods will have giant donor stars with deep convective envelopes at the onset of the mass transfer; hence the mass-transfer rate will be super-Eddington and consequently a large fraction of the transferred material will be lost from the system. Therefore BMSPs with large values of $`P_{\mathrm{orb}}`$ are expected to have relatively light neutron stars – cf. Sections 5.3 (Fig. 2c) and 5.4 (Fig. 2d). Similarly, binaries with small values of $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}`$ will result in BMSPs with relatively small $`P_{\mathrm{orb}}`$ and large values of $`M_{\mathrm{NS}}^{}`$, since $`\dot{M}_2`$ will be sub-Eddington and thus the NS has the potential to accrete all of the transferred material – cf. Sections 5.1 (Fig. 2a) and 5.2 (Fig. 2b). Therefore, if disk instabilities, wind coronae and propeller effects were unimportant we would expect to find a ($`P_{\mathrm{orb}},M_{\mathrm{NS}}`$) anti-correlation among the observed BMSPs. However, below (Section 6.2) we demonstrate that mass ejection arising from these effects is indeed important and thus it is very doubtful whether a ($`P_{\mathrm{orb}},M_{\mathrm{NS}}`$) anti-correlation will be found in future observations.
## 6 Discussion and comparison with observations
### 6.1 Comparison with condensed polytrope donor models
Hjellming & Webbink (1987) studied the adiabatic properties of three simple families of polytropes by integrating the non-linear Lane-Emden equation in Lagrangian coordinates. The condensed polytropes, consisting of $`n=3/2,\gamma =5/3`$ (convective) envelopes with He-core point masses, are suitable for red giant stars. It is not trivial to directly compare our calculations with e.g. the stability analyses of Soberman, Phinney & van den Heuvel (1997) and Kalogera & Webbink (1996) since the donor does not restore (thermal) equilibrium after initiation of an unstable mass-transfer process. But it is important to point out that systems which initiate RLO with thermally unstable mass transfer could, in some cases, survive this temporary phase – even if $`|\dot{M}_2|`$ exceeds the Eddington accretion limit by as much as a factor $`\sim `$10<sup>4</sup> (see Fig. 2d). Similarly, systems which begin mass transfer on a thermal timescale may in some cases (if $`M_2`$ is large compared to $`M_{\mathrm{NS}}`$) eventually become dynamically unstable. These results were also found by Hjellming (1989) and Kalogera & Webbink (1996), and we refer to these papers for a more detailed discussion of the fate of thermally unstable systems. It is therefore not always easy to predict the final outcome of an LMXB system given its initial parameters – especially since the onset criterion for a CE phase is rather uncertain. Nevertheless, we can conclude that LMXBs with $`M_2\lesssim 1.8M_{}`$ will always survive the mass-transfer phase. Systems with donor stars of $`M_2\gtrsim 2M_{}`$ only survive if $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}`$ is within a certain interval.
Soberman, Phinney & van den Heuvel (1997) also used the polytropic models of Hjellming & Webbink (1987) to follow the mass transfer in binaries. The global outcome of such calculations is reasonably good. However, the weakness of the polytropic models is that, whereas they yield the radius-exponent at the onset of the mass transfer and the approximate stellar structure at that given moment, they do not trace the response of the donor very well during the mass-transfer phase. The structural changes of the donor star (e.g. the outward-moving H-shell and the inward-moving convection zone giving rise to the transient detachment of the donor from its Roche-lobe) can only be followed in detail by a Henyey-type iteration scheme for a full stellar evolutionary model.
### 6.2 The observed ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation
The companion mass, $`M_{\mathrm{WD}}`$, of an observed binary pulsar is constrained by its Keplerian mass function, which is obtained from the observables $`P_{\mathrm{orb}}`$ and $`a_\mathrm{p}\mathrm{sin}i`$:
$$f(M_{\mathrm{NS}},M_{\mathrm{WD}})=\frac{(M_{\mathrm{WD}}\mathrm{sin}i)^3}{(M_{\mathrm{NS}}+M_{\mathrm{WD}})^2}=\frac{4\pi ^2}{G}\frac{(a_\mathrm{p}\mathrm{sin}i)^3}{P_{\mathrm{orb}}^2}$$
(23)
Here $`i`$ is the inclination angle (between the orbital angular momentum vector and the line-of-sight toward the observer) and $`a_\mathrm{p}=a(M_{\mathrm{WD}}/(M_{\mathrm{NS}}+M_{\mathrm{WD}}))`$ is the semi-major axis of the pulsar in a c.m. reference frame. The probability of observing a binary system at an inclination angle $`i`$, less than some value $`i_0`$, is $`P(i<i_0)=1\mathrm{cos}(i_0)`$.
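Eq. (23) is easy to evaluate and invert numerically. The sketch below is our own illustrative code (the bisection bounds are arbitrary); it computes the mass function, inverts it for $`M_{\mathrm{WD}}`$ at a given inclination, and evaluates the probability $`P(i<i_0)=1-\mathrm{cos}(i_0)`$:

```python
import math

def mass_function(m_ns, m_wd, incl_deg):
    """Keplerian mass function of eq. (23), in units of M_sun."""
    s = math.sin(math.radians(incl_deg))
    return (m_wd * s) ** 3 / (m_ns + m_wd) ** 2

def companion_mass(f_obs, m_ns, incl_deg):
    """Invert eq. (23) for M_WD by bisection (f grows monotonically with M_WD)."""
    lo, hi = 1.0e-4, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mass_function(m_ns, mid, incl_deg) < f_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def prob_inclination_below(i0_deg):
    """P(i < i0) = 1 - cos(i0) for randomly oriented orbital planes."""
    return 1.0 - math.cos(math.radians(i0_deg))

# Minimum companion mass (i = 90 deg) for f = 0.0107 M_sun and M_NS = 1.35 M_sun:
print(companion_mass(0.0107, 1.35, 90.0))
print(prob_inclination_below(60.0))   # half of all random orientations
```

Lower inclinations require a heavier companion for the same observed mass function, which is why only the minimum $`M_{\mathrm{WD}}`$ (at $`i=90\mathrm{°}`$) is model-independent.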
As mentioned earlier, there are indeed problems with fitting the observed low-mass binary pulsars onto a theoretical core-mass–period relation. The problem is particularly pronounced for the very wide-orbit BMSPs. Although the estimated masses of the companions are quite uncertain (because of the unknown orbital inclination angles and $`M_{\mathrm{NS}}`$), no clear observed ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation seems to be present – contrary to what has been proposed by several authors (e.g. Phinney & Kulkarni 1994, Lorimer et al. 1996 and Rappaport et al. 1995). In Table A2 in the Appendix we have listed all the observed galactic (NS+WD) binary pulsars and their relevant parameters. It was noticed by Tauris (1996) that the five BMSPs with $`P_{\mathrm{orb}}>100`$ days all seem to have an observed $`M_{\mathrm{WD}}^{\mathrm{obs}}`$ which is lighter than expected from the theoretical correlation (at the $`\sim `$80 % confidence level on average). There do not seem to be any observational selection effects which can account for this discrepancy (Tauris 1996; 1998) – i.e. no reason why we should preferentially observe systems with small inclination angles (systematically small values of $`i`$, rather than a random distribution, would increase $`M_{\mathrm{WD}}^{\mathrm{obs}}`$ for the given observed mass functions and thus the observations would match the theory). Evaporation of the companion star, by a wind of relativistic particles after the pulsar turns on, also seems unlikely since the evaporation timescale (proportional to $`P_{\mathrm{orb}}^{4/3}`$) becomes larger than $`t_{\mathrm{Hubble}}`$ for such wide orbits. It is also worth mentioning that the orbital period change due to evaporation, or general mass loss in the form of a stellar wind, is at most a factor of $`\sim `$2, if one assumes the specific angular momentum of the lost matter is equal to that of the donor star.
Beware that the ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation is not valid for BMSPs with CO/O-Ne-Mg WD companions, as these systems did not evolve through a phase with stable mass transfer. The exception is the very wide-orbit systems with $`P_{\mathrm{orb}}^\mathrm{f}\gtrsim 800`$ days. PSR B0820+02 might be an example of such a system. From Table A1 (Appendix) it is seen that we expect a maximum orbital period of $`\sim 1400`$ days for the NS+WD binaries. Larger periods are, of course, possible but such binaries are then too wide for the neutron star to be recycled via accretion of matter.
It should also be mentioned that the recycling process is expected to align the spin axis of the neutron star with the orbital angular momentum vector as a result of $`>10^7`$ yr of stable disk accretion. Hence we expect (Tauris 1998) the orbital inclination angle, $`i`$ to be equivalent to (on average) the magnetic inclination angle, $`\alpha _{\mathrm{mag}}`$ defined as the angle between the pulsar spin axis and the center of the pulsar beam (viz. line-of-sight to observer).
### 6.3 PSR J2019+2425
PSR J2019+2425 is a BMSP with $`P_{\mathrm{orb}}=76.5`$ days and a mass function $`f=0.0107M_{}`$ (Nice, Taylor & Fruchter 1993). In a recent paper (Tauris 1998) it was demonstrated that for this pulsar $`M_{\mathrm{NS}}\le 1.20M_{}`$, if the ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation obtained by Rappaport et al. (1995) is taken at face value. This value of $`M_{\mathrm{NS}}`$ is significantly lower than any other estimated pulsar mass (Thorsett & Chakrabarty 1999). However, with the new ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation presented in this paper we obtain a larger maximum mass ($`i=90\mathrm{°}`$) for this pulsar: $`M_{\mathrm{NS}}^{\mathrm{max}}=1.39M_{}`$ or $`1.64M_{}`$ for a donor star of Pop.I or Pop.II chemical composition, respectively. This result brings the mass of PSR J2019+2425 inside the interval of typical estimated values of $`M_{\mathrm{NS}}`$.
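These maximum masses are easy to reproduce: combining eq. (20) with the mass function of eq. (23) at $`i=90\mathrm{°}`$ gives, in a short verification script of ours,

```python
import math

# (a, b, c) of eq. (21) for Pop.I and Pop.II donor compositions
COEFFS = {"Pop.I": (4.50, 1.2e5, 0.120), "Pop.II": (5.00, 1.0e5, 0.110)}
P_ORB, F_OBS = 76.5, 0.0107   # days and M_sun (Nice, Taylor & Fruchter 1993)

m_ns_max = {}
for comp, (a, b, c) in COEFFS.items():
    m_wd = (P_ORB / b) ** (1.0 / a) + c              # eq. (20)
    # eq. (23) with sin(i) = 1, solved for the pulsar mass:
    m_ns_max[comp] = math.sqrt(m_wd ** 3 / F_OBS) - m_wd

print(m_ns_max)   # approximately 1.39 (Pop.I) and 1.64 (Pop.II) M_sun
```

The closed form used in the loop follows from eq. (23) because $`\mathrm{sin}i=1`$ makes the mass function quadratic in $`M_{\mathrm{NS}}+M_{\mathrm{WD}}`$.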
### 6.4 $`M_{\mathrm{NS}}`$: dependence on the propeller effect and accretion disk instabilities
It is still an open question whether or not a significant amount of mass can be ejected from an accretion disk as a result of the effects of disk instabilities (Pringle 1981; van Paradijs 1996). However, there is clear evidence from observations of Be/X-ray transients that a strong braking torque acts on these neutron stars which spin near their equilibrium periods. The hindering of accretion onto these neutron stars is thought to be caused by their strong rotating magnetic fields which eject the incoming material via centrifugal acceleration – the so-called propeller effect (Illarionov & Sunyaev 1985).
For a given observed BMSP we know $`P_{\mathrm{orb}}`$ and, using eqs (20), (21), we can find $`M_{\mathrm{WD}}`$ for an adopted chemical composition of the donor star. Hence we are also able to calculate the maximum gravitational mass of the pulsar, $`M_{\mathrm{NS}}^{\mathrm{max}}`$ (which is found for $`i=90\mathrm{°}`$, cf. eq. 23), since we know the mass function, $`f`$, from observations. This semi-observational constraint on $`M_{\mathrm{NS}}^{\mathrm{max}}`$ can then be compared with our calculations of $`M_{\mathrm{NS}}^{}`$ (cf. Section 5.7). The interesting cases are those where $`M_{\mathrm{NS}}^{}>M_{\mathrm{NS}}^{\mathrm{max}}`$ (after correcting $`M_{\mathrm{NS}}^{}`$ for the mass deficit). These systems must therefore have lost matter ($`\mathrm{\Delta }M_{\mathrm{dp}}\ne 0`$), from the accretion disk or as a result of the propeller effect, in addition to what is ejected when $`|\dot{M}_2|>\dot{M}_{\mathrm{Edd}}`$. These binaries are plotted in Fig. 4b assuming an ‘intermediate’ Pop. I+II chemical composition for the progenitor of the white dwarf. We notice that in some cases we must require $`\mathrm{\Delta }M_{\mathrm{dp}}\gtrsim 0.50M_{}`$, or even more for $`M_2>1.0M_{}`$, in order to bring $`M_{\mathrm{NS}}`$ below the maximum limit ($`M_{\mathrm{NS}}^{\mathrm{max}}`$) indicated by the plotted arrow. We therefore conclude that mass ejection, in addition to what is caused by super-Eddington mass-transfer rates, is very important in LMXBs. Whether or not this conclusion is equally valid for super- and sub-Eddington accreting systems is difficult to answer, since systems which evolve through an X-ray phase with super-Eddington mass-transfer rates lose a large amount of matter from the system anyway and therefore naturally end up with small values of $`M_{\mathrm{NS}}^{}`$.
### 6.5 Kaon condensation and the maximum mass of NS
It has recently been demonstrated (Brown & Bethe 1994; Bethe & Brown 1995) that the introduction of kaon condensation sufficiently softens the equation-of-state of dense matter that NSs with masses of more than $`1.56M_{}`$ will not be stable and will collapse into black holes. If this scenario is correct, then we expect a substantial fraction of LMXBs to evolve into black-hole binaries – unless $`\mathrm{\Delta }M_{\mathrm{dp}}`$ is comparable to the difference between $`M_2`$ and $`M_{\mathrm{WD}}`$, as indicated above. However, it has recently been reported by Barziv et al. (1999) that the HMXB Vela X-1 contains a neutron star with a minimum mass of $`M_{\mathrm{NS}}>1.68M_{}`$ at the 99 % confidence level. It therefore remains uncertain at what critical mass a NS is expected to collapse into a black hole.
### 6.6 PSR J1603–7202
The maximum allowed value of the pulsar mass in this system is extremely low compared to other BMSP systems with He-WD companions. We find $`M_{\mathrm{NS}}^{\mathrm{max}}=0.96`$–$`1.11M_{}`$, depending on the chemical abundances of the white-dwarf progenitor. It is therefore quite suggestive that this system did not evolve like the other BMSPs with a He-WD companion. Furthermore (as noted by Lorimer et al. 1996), it has a relatively slow spin period of $`P_{\mathrm{spin}}=14.8`$ ms and $`P_{\mathrm{orb}}=6.3`$ days. Also its location in the ($`P`$,$`\dot{P}`$) diagram is atypical for a BMSP with a He-WD (Arzoumanian, Cordes & Wasserman 1999). All these characteristics are shared with BMSPs which possibly evolved through a CE evolution (van den Heuvel 1994b; Camilo 1996). We therefore conclude that this system evolved through a phase with critically unstable mass transfer (as in a CE) and hence most likely hosts a CO-WD companion rather than a He-WD companion. Which of the two it hosts depends on whether or not helium core burning was ignited, and thus on the values of $`P_{\mathrm{orb}}^\mathrm{i}`$ and $`M_2`$. Spectroscopic observations should answer this question.
## 7 Conclusions
* We have adapted a numerical computer code, based on Eggleton’s code for stellar evolution, in order to study carefully the details of mass transfer in LMXB systems. We have included, for the first time to our knowledge, tidal spin-orbit couplings other than magnetic braking, and have also considered wind mass loss during the red-giant stage of the donor star.
* We have re-calculated the ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation for binary radio pulsar systems using new input physics of stellar evolution in combination with detailed binary interactions. We find a correlation which yields a larger value of $`M_{\mathrm{WD}}`$ for a given value of $`P_{\mathrm{orb}}`$ compared to previous work.
* Comparison between observations of BMSPs and our calculated post-accretion $`M_{\mathrm{NS}}`$ suggests that a large amount of matter is lost from LMXBs, probably as a result of either accretion disk instabilities or the propeller effect. Hence it is doubtful whether observations will reveal the ($`P_{\mathrm{orb}},M_{\mathrm{NS}}`$) anti-correlation which would otherwise be expected from our calculations.
* The mass-transfer rate is a strongly increasing function of initial orbital period and donor star mass. For relatively close systems with light donors ($`P_{\mathrm{orb}}^{\mathrm{ZAMS}}<10`$ days and $`M_2<1.3M_{}`$) the mass-transfer rate is sub-Eddington, whereas it can be highly super-Eddington, by a factor of $`\sim 10^4`$, for wide systems with relatively heavy donor stars ($`1.6`$–$`2.0M_{}`$) as a result of their deep convective envelopes. Binaries with (sub)giant donor stars with masses in excess of $`2.0M_{}`$ are unstable to dynamical-timescale mass loss. Such systems will evolve through a common envelope evolution leading to a short ($`<10`$ days) orbital period BMSP with a heavy CO/O-Ne-Mg white dwarf companion. Binaries with unevolved heavy ($`>2M_{}`$) donor stars might be dynamically stable against a CE, but also end up with a relatively short $`P_{\mathrm{orb}}`$ and a CO/O-Ne-Mg WD.
* Based on our calculations, we present new evidence that PSR J1603–7202 did not evolve through a phase with stable mass transfer and that it is most likely to have a CO white dwarf companion.
* The pulsar mass of PSR J2019+2425 now fits within the standard range of measured values for $`M_{\mathrm{NS}}`$, given our new ($`P_{\mathrm{orb}},M_{\mathrm{WD}}`$) correlation.
###### Acknowledgements.
We would like to thank Ed van den Heuvel for several discussions on many issues; Guillaume Dubus for discussions on accretion disk instabilities; Jørgen Christensen-Dalsgaard for pointing out the well-known tiny loop in the evolutionary tracks of low-mass stars on the RGB and Lev Yungelson for comments on the manuscript. T.M.T. acknowledges the receipt of a Marie Curie Research Grant from the European Commission.
## Appendix A Tidal torque and dissipation rate
We estimate the tidal torque due to the interaction between the tidally induced flow and the convective motions in the stellar envelope by means of the simple mixing-length model for turbulent viscosity $`\nu =\alpha H_\mathrm{p}V_\mathrm{c}`$, where the mixing-length parameter $`\alpha `$ is adopted to be 2 or 3, $`H_\mathrm{p}`$ is the local pressure scaleheight, and $`V_\mathrm{c}`$ the local characteristic convective velocity. The rate of tidal energy dissipation can be expressed as (Terquem et al. 1998):
$$\frac{\mathrm{d}E}{\mathrm{d}t}=\frac{192\pi }{5}\mathrm{\Omega }^2_{R_i}^{R_o}\rho r^2\nu \left[\left(\frac{\xi _r}{r}\right)^2+6\left(\frac{\xi _h}{r}\right)^2\right]𝑑r$$
(24)
where the integration is over the convective envelope and $`\mathrm{\Omega }`$ is the orbital angular velocity, i.e. we neglect effects of stellar rotation. The radial and horizontal tidal displacements are approximated here by the values for the adiabatic equilibrium tide:
$$\xi _r=fr^2\rho \left(\frac{\mathrm{d}P}{\mathrm{d}r}\right)^1$$
(25)
$$\xi _h=\frac{1}{6r}\frac{\mathrm{d}(r^2\xi _r)}{\mathrm{d}r}$$
(26)
where for the dominant quadrupole tide ($`l=m=2`$) $`f=\frac{GM_2}{4a^3}`$.
The locally dissipated tidal energy is taken into account as an extra energy source in the standard energy balance equation of the star, while the corresponding tidal torque follows as:
$$\mathrm{\Gamma }=\frac{1}{\mathrm{\Omega }}\frac{\mathrm{d}E}{\mathrm{d}t}$$
(27)
The tidal angular momentum exchange $`dJ=\mathrm{\Gamma }dt`$ between the donor star and the orbit thus calculated during an evolutionary timestep $`dt`$ is taken into account in the angular momentum balance of the system. If the angular momentum exchange calculated in this way is larger than the amount required to keep the donor star synchronized with the orbital motion of the compact star, we adopt a smaller tidal angular momentum exchange (and a correspondingly smaller tidal dissipation rate in the donor star) that keeps the donor star exactly synchronous.
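As an illustration of how eqs. (24)–(26) may be evaluated on a tabulated envelope model, the following is our own toy discretization (cgs units; the derivative in eq. (26) is taken by finite differences, and any profile values supplied to it here are made up for demonstration, not from the stellar models of this paper):

```python
import math

def tidal_dissipation(r, rho, dPdr, nu, omega, f):
    """Trapezoidal evaluation of eq. (24) over a tabulated convective envelope.

    r, rho, dPdr, nu -- radius, density, pressure gradient and turbulent
    viscosity on a grid (cgs); omega -- orbital angular velocity;
    f = G*M_2/(4*a**3) -- quadrupole tidal amplitude of eqs. (25)-(26).
    """
    xi_r = [f * ri ** 2 * rhoi / dpi for ri, rhoi, dpi in zip(r, rho, dPdr)]
    r2xi = [ri ** 2 * x for ri, x in zip(r, xi_r)]
    xi_h = []
    for k in range(len(r)):
        km, kp = max(k - 1, 0), min(k + 1, len(r) - 1)
        xi_h.append((r2xi[kp] - r2xi[km]) / (r[kp] - r[km]) / (6.0 * r[k]))
    g = [rhoi * ri ** 2 * nui * ((xr / ri) ** 2 + 6.0 * (xh / ri) ** 2)
         for ri, rhoi, nui, xr, xh in zip(r, rho, nu, xi_r, xi_h)]
    integral = sum(0.5 * (g[k] + g[k + 1]) * (r[k + 1] - r[k])
                   for k in range(len(r) - 1))
    return (192.0 * math.pi / 5.0) * omega ** 2 * integral
```

Dividing the result by $`\mathrm{\Omega }`$ gives the tidal torque of eq. (27); note that $`\mathrm{d}E/\mathrm{d}t`$ scales as $`\mathrm{\Omega }^2`$, so the torque is linear in $`\mathrm{\Omega }`$ for fixed displacements.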
# Disorder Effects in the Bipolaron System Ti4O7 Studied by Photoemission Spectroscopy
## Abstract
We have performed a photoemission study of Ti<sub>4</sub>O<sub>7</sub> around its two transition temperatures so as to cover the metallic, high-temperature insulating (bipolaron-liquid), and low-temperature insulating (bipolaron-crystal) phases. While the spectra of the low-temperature insulating phase show a finite gap at the Fermi level, the spectra of the high-temperature insulating phase are gapless, which is interpreted as a soft Coulomb gap due to dynamical disorder. We suggest that the spectra of the high-temperature disordered phase of Fe<sub>3</sub>O<sub>4</sub>, which exhibits a charge order-disorder transition (Verwey transition), can be interpreted in terms of a Coulomb gap.
Since Mott proposed the idea of variable-range hopping and minimum metallic conductivity for disordered systems and Anderson raised the concept of localization due to disorder, the physical properties of disordered solids have been extensively studied. The influence of Coulomb interaction on the electronic density of states (DOS) near the Fermi level ($`E_F`$) of disordered systems is one of the most fundamental issues to be clarified. Efros and Shklovskii proposed that in disordered insulators long-range Coulomb interaction opens a soft Coulomb gap (SCG), whose DOS is proportional to $`(E-E_F)^2`$. So far, there have been tunneling spectroscopic confirmations of the SCG in some disordered systems such as doped semiconductors. Davies and Franz pointed out the possibility of an SCG opening in the photoemission spectra of Na<sub>x</sub>Ta<sub>y</sub>W<sub>1-y</sub>O<sub>3</sub>, but the experiments did not have sufficient energy resolution to critically address this problem. Another important issue is how short-range order (SRO) affects the DOS near $`E_F`$, that is, how the electronic structure evolves when a charge ordering develops from disorder to SRO to long-range order.
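The resolution requirement can be appreciated with a toy calculation: an ideal parabolic SCG vanishes exactly at $`E_F`$, but convolving it with a Gaussian instrumental response leaves finite weight there, so distinguishing a soft gap from a gapless spectrum demands high resolution. The sketch below is our own schematic code in arbitrary units, not a fit to any data:

```python
import math

def scg_dos(e, e_f=0.0, amp=1.0):
    """Ideal soft-Coulomb-gap density of states: g(E) = amp * (E - E_F)**2."""
    return amp * (e - e_f) ** 2

def broadened_dos(e, fwhm, e_f=0.0, n=400, half_width=0.5):
    """g(E) convolved with a Gaussian of the given FWHM (all energies in eV)."""
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    step = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        ep = -half_width + k * step
        total += scg_dos(ep, e_f) * math.exp(-0.5 * ((e - ep) / sigma) ** 2) * step
    return total / (sigma * math.sqrt(2.0 * math.pi))

# At E_F the ideal DOS is exactly zero, but 50 meV broadening fills it in:
print(scg_dos(0.0), broadened_dos(0.0, 0.050))   # 0.0 versus sigma**2 > 0
```

For a parabolic gap the broadened intensity right at $`E_F`$ equals $`\sigma ^2`$ in these units (about 4.5×10<sup>-4</sup> for a 50 meV FWHM), illustrating why finite energy resolution matters when deciding between a soft gap and a truly finite gap.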
Ti<sub>4</sub>O<sub>7</sub> is a suitable system to study the above problems: It undergoes successive phase transitions with decreasing temperature from a metal to a charge-ordered insulator via an insulator with SRO . It is a system with nominally 0.5 3d electron per Ti, allowing two possible valence states of Ti<sup>3+</sup> ($`3d^1`$) and Ti<sup>4+</sup> ($`3d^0`$), and it attracted particular attention in the 1970s as a system where bipolarons, or singlet pairs of two polarons, are formed in real space . Above $`T_{MI}=154`$ K, the system is in the metallic (M) phase and the Ti valence is believed to be a uniform 3.5+, as shown in Fig. 1 (a). That is, the electrical resistivity $`\rho (T)`$ is metallic with Pauli-paramagnetic $`\chi (T)`$. With decreasing temperature, singlet Ti<sup>3+</sup>-Ti<sup>3+</sup> pairs, namely bipolarons, are formed, resulting in the metal-to-insulator transition at $`T_{MI}`$ with a steep increase in $`\rho (T)`$ by three orders of magnitude. At the same time, $`\chi (T)`$ almost vanishes, reflecting the formation of the singlet bipolarons. We refer to this phase as the high-temperature insulating (HI) phase. Because the bipolarons are dynamically disordered in this phase, as has been established by EPR studies , the HI phase may be called a bipolaron liquid. Subsequently, bipolarons become ordered below $`T_{II}\simeq 140`$ K, as shown in Fig. 1 (a), with a further rise in $`\rho (T)`$ by three orders of magnitude while $`\chi (T)`$ remains unaffected. We refer to this phase as the low-temperature insulating (LI) phase. The transition from the liquid to the crystal of bipolarons is a kind of Verwey transition, and the bipolaron-liquid formation in the HI phase is a kind of SRO of charge carriers. The Verwey transition was originally found for Fe<sub>3</sub>O<sub>4</sub> at $`T_V\simeq 120`$ K.
Analogous to Ti<sub>4</sub>O<sub>7</sub>, Fe<sub>3</sub>O<sub>4</sub> has two possible valence states of Fe<sup>2+</sup> and Fe<sup>3+</sup> which are ordered below $`T_V`$ and are disordered above $`T_V`$, leading to the characteristic jump in the electrical resistivity.
In this Letter, we study the single-particle excitation spectra of the three phases of Ti<sub>4</sub>O<sub>7</sub> by means of photoemission spectroscopy (PES) with high energy resolution. In addition to following the variation of the electronic structure of Ti<sub>4</sub>O<sub>7</sub> as a function of temperature across $`T_{MI}`$ and $`T_{II}`$, we focus on general aspects of the single-particle excitation spectra of dynamically disordered systems and then make a comparison with the PES result of Fe<sub>3</sub>O<sub>4</sub> .
Single crystals of Ti<sub>4</sub>O<sub>7</sub> were grown by the vapor transport method . PES measurements were performed using an OMICRON hemi-spherical analyzer and a He lamp (He I: $`h\nu =`$21.2 eV). The energy calibration and the estimation of the instrumental resolution were done by measuring the Fermi edge of Au evaporated on the sample. The energy resolution was set to $`\sim `$50 meV. Samples were cleaved in situ. This always gave an irregular surface, an assembly of randomly oriented small facets. As the analyzer had an acceptance angle of $`\pm 8^{}`$, the obtained spectra can be regarded as angle-integrated PES spectra. The measurement temperature was controlled within an accuracy of $`\pm 0.2`$ K. The base pressure in the spectrometer was less than $`1\times 10^{-10}`$ Torr. Below, we show results reproducible for several cleaves and obtained within a few hours after cleavage.
The position and the width of the O 2p band (not shown) show almost no temperature dependence across the two phase transitions, as in the previous photoemission study . In contrast, the PES spectra in the Ti 3d band region show strong temperature dependence, as shown in Fig. 1 (b), consistent with the previous results for the M and LI phases obtained with a lower energy resolution . Here, the sample was first cooled and then heated as indicated in the figure, so as to cross each transition temperature twice. Two observations should be noted. First, as superimposed in the bottom panel, there are three kinds of spectra, which reflect the electronic structure of the M, HI, and LI phases of Ti<sub>4</sub>O<sub>7</sub> as discussed below. Second, the 142 K spectra show different spectral lineshapes between cooling and heating due to thermal hysteresis between the HI and LI phases, as has been observed in the electrical resistivity, thermopower , and EPR measurements . In Fig. 1 (c), we have plotted the temperature dependence of the integrated PES intensity within 0.5 eV of $`E_F`$ normalized to the intensity integrated from $`E_F`$ to binding energy $`E_B=1`$ eV. This hysteretic behavior was reproducible for several thermal cyclings across $`T_{MI}`$ and $`T_{II}`$.
Let us discuss the spectra of each phase in more detail. As shown in Fig. 1 (d), the M-phase spectra show a weak but finite Fermi edge, reflecting the metallic nature of this phase. The spectrum shows a broad maximum centered at $`E_B=0.75`$ eV, around which most of the spectral weight is distributed. A similar broad feature has been observed at $`E_B\sim 1`$–1.5 eV in the spectra of three-dimensional titanium and vanadium oxides with the $`3d^1`$ configuration, LaTiO<sub>3</sub> and YTiO<sub>3</sub> , and has been interpreted as the incoherent part of the spectral function accompanying the coherent quasi-particle excitations around $`E_F`$. The coherent part in Ti<sub>4</sub>O<sub>7</sub> is hardly separable from the incoherent part because of their strong overlap, leading to the pseudo-gap-like behavior at $`E_F`$. This implies that the M phase of Ti<sub>4</sub>O<sub>7</sub> is a strongly correlated metal, where the motion of conduction electrons is largely incoherent, resulting in the weak coherent part. In going from the M to the LI phase, the incoherent peak at $`E_B=0.75`$ eV becomes sharper and a clear gap of order $`\sim `$0.1 eV opens, as seen in Fig. 1 (d). The gap opening is attributed to the bipolaron ordering in this phase. The HI-phase spectra take an intermediate lineshape between those of the LI and M phases: The 0.75 eV peak is somewhat broader than in the LI phase. The spectra show neither a Fermi edge nor a real gap. Indeed, the PES intensity vanishes only at $`E_B=0.0`$ eV, as shown in Fig. 1 (d). Figure 2 (a) shows the same spectra plotted against $`E_B^2sgn(E_B)`$. The Fermi edge in the M-phase spectra is now clearer, while the LI spectra, which show a “hard gap”, are concave around $`E_F`$. Most remarkably, the HI-phase spectra form an almost straight line from $`E_B^2=0.00`$ to $`0.08`$ eV<sup>2</sup> ($`0.0\le E_B\le 0.3`$ eV), meaning that the spectra show a power-law behavior with an exponent of $`2`$.
To be more quantitative, we have performed a lineshape analysis for the LI- and HI-phase spectra taking the experimental resolution into account. The lineshape was assumed to be of the form $`I(E_B,T)=I_0(E_B,T)+I_{bg}(E_B)`$, where $`I_0(E_B,T)`$ is the intrinsic part expressed by $`I_0(E_B,T)=a_2E_B^2f(E_B,T)`$. $`f(E_B,T)=[\mathrm{exp}(-E_B/k_BT)+1]^{-1}`$ is the Fermi-Dirac distribution function at $`T`$, although the finite-temperature effect due to $`f(E_B,T)`$ is negligibly small because $`I_0(0,T)=0`$. $`I_{bg}(E_B)=b_0`$ represents a constant background of the PES spectra. For the LI phase, in order to represent the finite gap, we assumed $`I_0(E_B,T)=a_2(E_B-E_0)^2\theta (E_B-E_0)`$, where $`\theta (x)`$ is a step function and $`E_0`$ was allowed to take a finite positive value. The instrumental resolution was included through the convolution of $`I(E_B,T)`$ with a Gaussian of FWHM $`\sim 50`$ meV. $`a_2`$ and $`b_0`$ were treated as adjustable parameters. As shown in Fig. 2 (a), the result of the fitting is satisfactory, especially for the HI-phase spectra for $`0.1\le E_B\le 0.3`$ eV. One HI-phase spectrum is shown in Fig. 2 (b) on a log-log scale to emphasize the $`E_B^2`$ dependence and the excellent fit for small $`E_B`$. This makes a clear contrast to the LI-phase spectrum shown in Fig. 2 (c), which shows a finite gap $`E_0`$.
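The effect of the resolution convolution on a soft-gap lineshape is easy to reproduce numerically. The sketch below is illustrative only (the amplitude `a2` and the energy grid are arbitrary choices, not the fitted values): it convolves an intrinsic $`E_B^2`$ lineshape with a Gaussian of FWHM $`\sim 50`$ meV and shows that broadening transfers a small but finite weight to $`E_B=0`$, while leaving the spectrum far from $`E_F`$ essentially unchanged.

```python
import numpy as np

# Illustrative sketch (not the authors' fitting code): an intrinsic
# soft-gap lineshape I0(E_B) = a2 * E_B^2 * theta(E_B) convolved with a
# Gaussian instrumental resolution of FWHM ~ 50 meV.  The amplitude a2
# and the energy grid are arbitrary choices made for this example.
a2 = 1.0                                   # arbitrary units
fwhm = 0.050                               # eV, instrumental resolution
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

E = np.linspace(-0.5, 0.5, 1001)           # binding energy grid (eV)
I0 = a2 * E**2 * (E > 0)                   # intrinsic lineshape, zero at E_F

kernel = np.exp(-0.5 * (E / sigma) ** 2)
kernel /= kernel.sum()                     # normalized resolution function
I = np.convolve(I0, kernel, mode="same")   # resolution-broadened spectrum

i0 = np.argmin(np.abs(E))                  # index of E_B = 0
i_far = np.argmin(np.abs(E - 0.3))         # a point far from E_F
leak = I[i0]                               # finite weight at E_F from broadening
far_dev = abs(I[i_far] / I0[i_far] - 1.0)  # broadening is negligible far away
```

This is why a deconvolution-free fit must include the resolution explicitly: the measured intensity at $`E_F`$ is nonzero even when the intrinsic DOS vanishes there.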
We propose that the above behavior of the HI phase can be explained as an SCG in the DOS, $`N(E_B)=\frac{3}{\pi }(\frac{\kappa }{e^2})^3E_B^2`$ , where $`\kappa `$ is the dielectric constant. For Ti<sub>4</sub>O<sub>7</sub>, the magnitude of the SCG $`\mathrm{\Delta }_C=e^3(N_0/\kappa ^3)^{1/2}`$, where $`N_0`$ is the noninteracting DOS at $`E_F`$, is estimated to be $`\sim `$0.2 eV if we take $`N_0\sim 0.01`$ eV<sup>-1</sup>Å<sup>-3</sup> from the free-electron-like DOS around $`E_F`$ and $`\kappa \sim 10`$. As the estimated $`\mathrm{\Delta }_C`$ and the observed soft gap have the same order of magnitude, we may conclude that the HI-phase spectra are consistent with the opening of an SCG. Here, it should be noted that, even if there exists substantial SRO, i.e., bipolaron-liquid formation, in the HI phase of Ti<sub>4</sub>O<sub>7</sub>, the system is sufficiently disordered over long distances for an SCG to appear around $`E_F`$. That is, the length scale of the SRO, which is equal to the Ti-Ti distance $`\sim `$3 Å , is much shorter than the critical distance $`R_C\equiv e^2/\kappa \mathrm{\Delta }_C\sim 8`$ Å for the SCG to survive. The SRO would be reflected in high-energy spectral features, e.g., the sharpening of the $`E_B=0.75`$ eV peak in the HI phase. Experimentally, an SCG was observed in tunneling spectroscopy measurements of the doped semiconductor Si:B , whose gap was as small as 0.75 meV due to the low $`N_0`$. As for PES measurements, besides the aforementioned Na<sub>x</sub>Ta<sub>y</sub>W<sub>1-y</sub>O<sub>3</sub>, the possible existence of an SCG in Ti<sub>4</sub>O<sub>7</sub> and Fe<sub>3</sub>O<sub>4</sub> has been suggested , but no quantitative analysis of the experimental spectra had been performed so far. In this sense, the present result $`I\propto E_B^2`$ for the HI phase of Ti<sub>4</sub>O<sub>7</sub> is the first clear indication of an SCG using PES. Since our measurements were performed on cleaved surfaces, there are no extrinsic disorder effects induced by scraping.
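The quoted scales can be checked with a one-line estimate. The sketch below evaluates $`\mathrm{\Delta }_C=e^3(N_0/\kappa ^3)^{1/2}`$ and $`R_C=e^2/\kappa \mathrm{\Delta }_C`$ in Gaussian units, where $`e^2\approx `$ 14.4 eV·Å, with $`N_0`$ and $`\kappa `$ set to the rough values assumed in the text.

```python
import math

# Back-of-envelope check (a sketch, in Gaussian units where
# e^2 ~ 14.4 eV*Angstrom) of the Coulomb-gap scales quoted in the text.
e2 = 14.4          # eV * Angstrom
N0 = 0.01          # eV^-1 Angstrom^-3, noninteracting DOS at E_F
kappa = 10.0       # dielectric constant

# Delta_C = e^3 (N0/kappa^3)^{1/2} = (e^2)^{3/2} sqrt(N0) / kappa^{3/2}
delta_C = e2 ** 1.5 * math.sqrt(N0) / kappa ** 1.5    # ~ 0.2 eV
# Critical length R_C = e^2 / (kappa * Delta_C)
R_C = e2 / (kappa * delta_C)                          # ~ 8 Angstrom
```

Both numbers reproduce the orders of magnitude quoted above ($`\sim `$0.2 eV and $`\sim `$8 Å).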
It is worth comparing Ti<sub>4</sub>O<sub>7</sub> in the HI phase with Fe<sub>3</sub>O<sub>4</sub> above $`T_V\simeq 120`$ K because of the analogy between the two materials pointed out repeatedly . For this purpose, we reanalyzed the PES spectra of Fe<sub>3</sub>O<sub>4</sub> reported by Chainani et al. in the context of an SCG. Figure 3 (a) shows the PES spectra of Fe<sub>3</sub>O<sub>4</sub> measured with a resolution of $`\sim `$70 meV. They claimed that the intensity at $`E_F`$ in the metallic phase evolves as the temperature increases. In Fig. 3 (b), we have replotted the spectra against $`E_B^2sgn(E_B)`$, which makes their argument clearer. More interestingly, we find that the 140 K ($`>T_V`$) spectrum almost falls on a straight line at $`E_B0.07`$ eV. In contrast, the 100 K spectrum is slightly concave, signaling the opening of a hard gap. Fe<sub>3</sub>O<sub>4</sub> is, however, different from Ti<sub>4</sub>O<sub>7</sub> in that the 140 K spectrum, which would correspond to the HI-phase spectra of Ti<sub>4</sub>O<sub>7</sub>, does not vanish at $`E_F`$ but shows a finite Fermi edge. We have modeled the spectra $`I(E_B,T)`$ above $`T_V`$ as $`I(E_B,T)=I_0(E_B,T)+I_{bg}(E_B)`$, where $`I_0(E_B,T)=(a_2E_B^2+a_0)f(E_B,T)`$ ($`a_0`$: finite DOS at $`E_F`$). The 100 K spectrum was assumed to follow $`I_0(E_B,T)=a_2(E_B-E_0)^2\theta (E_B-E_0)`$ as in the LI phase of Ti<sub>4</sub>O<sub>7</sub>. $`I_{bg}(E_B)=b_1E_B+b_0`$ represents the linear background of the PES spectra . The instrumental resolution was also included.
The result of the fitting is satisfactory between $`E_B=-0.15`$ eV and 0.15 eV, as shown by the solid curves in Figs. 3 (a) and (b). The dashed curves in Fig. 3 (a) for $`T\ge 140`$ K represent the intrinsic DOS cut off at $`E_F`$, $`(a_2E_B^2+a_0)f(E_B,0)`$. The most remarkable point is that the finite intensity at $`E_F`$ evolves systematically with increasing temperature. Figure 3 (c) shows the ratio $`a_0/a_2`$ between the intensity at $`E_F`$, which contributes to the “metallic” conductivity, and the magnitude of the $`E_B^2`$ term, which represents the dynamical disorder. The electrical dc conductivity data of Fe<sub>3</sub>O<sub>4</sub> are superimposed in the figure. The $`a_0/a_2`$ values at 140, 200, and 300 K form a straight line which extrapolates to 0 at 119$`\pm 5`$ K $`\simeq T_V`$. This observation, namely, $`a_0/a_2\propto T-T_V`$, strongly indicates that $`I_0\propto E_B^2`$ just above $`T_V`$. As the temperature goes up above $`T_V`$, the finite intensity at $`E_F`$ grows in proportion to $`T-T_V`$. Park et al. reported that the spectra of Fe<sub>3</sub>O<sub>4</sub> at 130 K show no Fermi edge, which is naturally understood as due to the small $`a_0/a_2`$. We propose that an SCG exists in Fe<sub>3</sub>O<sub>4</sub> just above $`T_V`$ and continuously evolves into a pseudogap well above $`T_V`$, reflecting the crossover from the semiconducting ($`d\rho /dT<0`$) behavior just above $`T_V`$ to the metallic ($`d\rho /dT>0`$) one well above $`T_V`$. Just above $`T_V`$, charges are disordered with SRO, namely, the system is in a Wigner-glass state, as is the HI phase of Ti<sub>4</sub>O<sub>7</sub>. Here, it should be noted that the M-phase spectra of Ti<sub>4</sub>O<sub>7</sub> near $`E_F`$ ($`E_B<0.2`$ eV) may be fitted to $`(a_2E_B^2+a_0)f(E_B,T)`$ like the spectra of Fe<sub>3</sub>O<sub>4</sub> above $`T_V`$.
However, $`a_0/a_2=2`$–$`4\times 10^2`$ eV<sup>-2</sup> is then much larger than that in Fe<sub>3</sub>O<sub>4</sub> at 300 K, reflecting the strongly first-order character of the transition between the HI and M phases in Ti<sub>4</sub>O<sub>7</sub>.
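The $`T_V`$ estimate above follows from a simple linear extrapolation of $`a_0/a_2`$ versus $`T`$. The sketch below illustrates the procedure only: the ratio values are synthetic placeholders lying exactly on a line through 119 K, not the measured Fe<sub>3</sub>O<sub>4</sub> data.

```python
import numpy as np

# Sketch of the T_V extrapolation procedure ONLY: the ratio values
# below are synthetic placeholders lying on a line through 119 K,
# NOT the measured a0/a2 data of Fe3O4.
T = np.array([140.0, 200.0, 300.0])         # K, measurement temperatures
ratio = 0.05 * (T - 119.0)                  # synthetic a0/a2, linear in T

slope, intercept = np.polyfit(T, ratio, 1)  # least-squares straight line
T_V_est = -intercept / slope                # temperature where a0/a2 -> 0
```

With real data the scatter of the three points propagates into the quoted $`\pm 5`$ K uncertainty on the intercept.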
In conclusion, we have performed a PES study of Ti<sub>4</sub>O<sub>7</sub> covering its LI, HI, and M phases. The Ti 3d band spectra show a peculiar temperature dependence characteristic of the three phases, among which the HI-phase spectra are gapless and can be fitted to $`E_B^2`$ near $`E_F`$. We interpret this as an SCG resulting from disordered bipolarons. By reanalyzing the PES spectra of Fe<sub>3</sub>O<sub>4</sub>, an SCG was also found just above $`T_V`$, indicating the significant role of disorder and long-range Coulomb interaction. With increasing temperature, the SCG is found to continuously evolve into a pseudogap as the metallicity is gradually established, whereas the first-order HI-to-M transition in Ti<sub>4</sub>O<sub>7</sub> precludes the observation of the corresponding continuous change. Similar SCG behavior may presumably be observed in other systems such as $`\beta `$-Na<sub>x</sub>V<sub>2</sub>O<sub>5</sub> . The very recent scanning tunneling spectra of hole-doped manganites above $`T_C`$ may be interpreted in the same way as Fe<sub>3</sub>O<sub>4</sub> above $`T_V`$.
We would like to thank D. Khomskii, M. Abbate, and T. Mizokawa for informative discussions. We also thank Y. Aiura, K. Tobe, and Y. Ishikawa for technical support. This work was supported by the New Energy and Industrial Technology Development Organization (NEDO) and by a Special Coordination Fund from the Science and Technology Agency of Japan.
# Effect of the large scale environment on the internal dynamics of early-type galaxies
## 1. Background
It is generally accepted that the rate of star formation in early-type galaxies is enhanced in low-density environments (Schweizer & Seitzer 1992, de Carvalho & Djorgovski 1992, Guzmán et al. 1992, Rose et al. 1994, Jørgensen & Jønch-Sørensen 1998, Bernardi et al. 1998). This results in a population–density relation: the metallic features in the spectra (e.g. Mg<sub>2</sub>) are weaker and the Balmer lines stronger in low-density regions.
Recently (Prugniel et al. 1999) we have analyzed the population-density relation in low-density environments (isolated galaxies to poor clusters). We found that the early-type galaxies which are likely to contain a young sub-population are mostly found in the sparsest environments.
The approach used to diagnose this effect is to study the Mg<sub>2</sub>–$`\sigma _0`$ and the Fundamental Plane (FP) relations. Indeed, although both relations are sensitive to the dynamics and to the stellar content, departures resulting from one or the other origin will have opposite signs. On the one hand, if a galaxy is found below the Mg<sub>2</sub>–$`\sigma _0`$ relation (see Fig. 1), it may have either an unusually low velocity dispersion or a high Mg<sub>2</sub> index. On the other hand, a galaxy below the FP relation has either a low velocity dispersion or contains a young population.
Our analysis of the residuals shows that the environment has primarily a visible effect on the stellar content (not on the dynamics).
## 2. Dynamical evolution
It is not a surprise that an enhancement of the young population is detected: A relatively small fraction of young stars is sufficient to significantly modify the broad-band colors and the line strength indices.
However, the enhancement of the stellar formation should have a long term effect on the dynamics of the galaxies. This delayed star formation in early-type galaxies is mostly occurring in the central regions and hence modifies the mass balance in these galaxies.
Unfortunately, when we take into account the stellar population effect deduced from the residuals to the Mg<sub>2</sub>–$`\sigma `$ relation, no environmental effect persists in the FP analysis. We cannot find any dynamical evolution with this analysis. This is not unexpected: the FP analysis mostly diagnoses the equilibrium status of the galaxies, and the long-term evolution discussed above is not expected to significantly disturb this equilibrium. Nevertheless, the FP analysis is also sensitive to the details of the dynamics and of the structure (see Prugniel et al. 1997).
Using data collected in the Hypercat database, in particular the kinematic and photometric profiles, we are trying to study the systematics of these non-homologies and their relation with the environment.
## References
Bernardi, M., Renzini, A., da Costa, L.N. et al., 1998, ApJ 508, L143
De Carvalho, R., R., Djorgovski, S., 1992, ApJ 389, L49
Guzmán, R., Lucey, J. R., Carter, D., Terlevich, R. J., 1992, MNRAS 257, 187
Jørgensen, I., Jønch-Sørensen, H., 1998, MNRAS 297, 968
Prugniel, Ph., Simien, F., 1997, A&A 321, 111
Prugniel, Ph., Golev, V., Maubon, G., 1999, A&A 346, L25
Rose, J., Bower, R., Caldwell, N. et al., 1994, AJ 108, 2054
Schweizer, F., 1982, ApJ 252, 455
Schweizer, F., Seitzer, P., 1992, AJ 104, 1039
# High temperature meson propagators with domain-wall quarks.Talk presented by D. K. Sinclair. Supported by DOE contract W-31-109-ENG-38.
## 1 Introduction
We use Shamir’s formulation of lattice domain-wall fermions, which have exact chiral flavour symmetry for infinite 5th dimension ($`N_5`$), to study the chiral properties of quarks in high temperature QCD and their relation to topology.
The chirally symmetric Dirac operator should have a zero mode associated with each instanton, and obey the Atiyah-Singer index theorem. In addition, these modes yield the disconnected contributions which distinguish the $`\pi `$ from the $`\eta ^{}`$ screening propagators and the $`\sigma (f_0)`$ from the $`\delta (a_0)`$ propagators, and give related contributions to the connected propagators. Since we use quenched configurations, we cannot determine whether the anomalous $`U(1)`$ axial symmetry is restored in the high temperature phase of the massless $`N_f=2`$ theory, but we can check relations required for the $`U(1)`$ axial symmetry to remain broken.
We measured eigenmodes of the domain-wall Dirac operator as a function of $`N_5`$ on a set of quenched configurations in the high temperature phase. We check the Atiyah-Singer theorem for $`N_5=10`$ and calculate connected and disconnected scalar and pseudoscalar screening propagators. Early results were presented at Lattice’98 . Section 2 describes our analysis and tests of the index theorem. The meson screening propagators are discussed in section 3. Section 4 gives our discussions and conclusions.
## 2 Domain-wall quarks at high temperatures
For our study of domain-wall quarks at high temperatures we use $`16^3\times 8`$ quenched configurations at $`\beta =6.2`$ (170), $`\beta =6.1`$ (100) and $`\beta =6.0`$ (100). (At $`N_t=8`$, $`\beta _c\simeq 6.0`$.) On each configuration, we estimated the topological charge by the cooling method.
We studied eigenvalue trajectories of the hermitian Wilson Dirac operator $`\gamma _5D_W`$ as the bare mass was varied. These vanish at would-be zero modes associated with instantons . Based on these studies we chose the Shamir domain-wall mass parameter $`M=1.7`$ for all 3 $`\beta `$ values.
We calculated the lowest 2 eigenmodes of the hermitian domain-wall Dirac operator $`\gamma _5D_{dw}`$ for all $`\beta =6.2`$ configurations with non-trivial topology and for all $`\beta =6.1`$ and $`\beta =6.0`$ configurations, for $`N_5=4`$, $`6`$, $`8`$ and $`10`$. (A third $`N_5=10`$ eigenmode was calculated for each $`\beta =6.0`$ configuration with topological charge $`\pm 3`$.)
For $`\beta =6.2`$, there is a clear separation between would-be zero modes whose eigenvalues decrease exponentially with $`N_5`$ and non-zero modes which rapidly approach a constant. By $`\beta =6.0`$ the situation is less clear. For $`\beta =6.2`$, $`N_5=10`$ appears adequate.
We calculate the chiral condensates $`\overline{q}q`$ and $`\overline{q}\gamma _5q`$ for the case $`N_5=10`$, using an eigenmode enhanced stochastic estimator. For each configuration, exact chiral symmetry requires $`\overline{q}\gamma _5q`$ to obey the Atiyah-Singer index theorem:
$$m\underset{x}{\sum }\langle \overline{q}(x)\gamma _5q(x)\rangle _U=Q_U.$$
(1)
As figure 1 shows, the index theorem at $`\beta =6.2`$ is well approximated down to masses comparable to the smallest-magnitude eigenvalue of $`\gamma _5D_{dw}`$ at $`m=0`$.
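The cancellation behind Eq. (1) can be checked in a toy model. The sketch below is not the domain-wall operator itself: it builds an exactly chiral, anti-hermitian Dirac matrix $`D`$ with index $`Q=pq`$ by hand and verifies that $`m\mathrm{Tr}[\gamma _5(D+m)^1]`$ equals $`Q`$, since paired nonzero modes cancel and each zero mode contributes its chirality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the index theorem for an exactly chiral Dirac matrix
# (a sketch, not the domain-wall operator).  With gamma_5 = diag(+1, -1),
# D = [[0, W], [-W^dag, 0]] is anti-hermitian and anticommutes with
# gamma_5; a generic p x q block W with p > q gives exactly Q = p - q
# zero modes, all of positive chirality.
p, q = 7, 4                                # block sizes; index Q = p - q = 3
W = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
D = np.block([[np.zeros((p, p)), W],
              [-W.conj().T, np.zeros((q, q))]])
g5 = np.diag([1.0] * p + [-1.0] * q)

m = 0.05                                   # quark mass
# m * Tr[gamma_5 (D + m)^{-1}]: paired nonzero modes cancel exactly,
# each zero mode contributes its chirality, so the trace equals Q.
Q_meas = (m * np.trace(g5 @ np.linalg.inv(D + m * np.eye(p + q)))).real
```

At finite $`N_5`$ the would-be zero modes pick up small residual eigenvalues, which is why the measured relation only saturates once $`m`$ exceeds them.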
## 3 Meson screening propagators.
Screening propagators, i.e. propagators for spatial separations, describe the excitations of hadronic matter and the quark-gluon plasma at finite temperature. We measure the connected and disconnected parts of these propagators for scalar and pseudoscalar mesons with zero momenta transverse to their separation ($`Z`$).
The connected parts of these propagators are measured with a noisy point source on 1 $`z`$ slice for $`Q=0`$ and on each $`z`$ slice for $`Q\ne 0`$. The disconnected parts of these propagators are measured by separating out the contributions of the eigenmodes (projected onto the domain walls) and approximating the remainder of the required traces with a stochastic estimator. For a discussion of stochastic estimators for staggered quarks see.
The $`\beta =6.2`$ disconnected propagators behave as expected. For $`Q=1`$, the contribution is sizeable and increases as quark mass decreases (it should diverge as $`1/m^2`$ if chiral symmetry were exact), as shown in figure 2.
For $`Q=0`$ the contribution is extremely small compared with the connected contribution (on the scale of figure 2 all points would lie on the horizontal axis). This is consistent with the expectation that the $`Q=0`$ contribution vanishes in the chiral limit.
For the connected propagators, we consider the combinations $`\frac{1}{2}(P_\pi \pm P_\delta )`$ (Note: the connected part of the $`\eta ^{}`$ propagator is the $`\pi `$ propagator and the connected part of the $`\sigma `$ propagator is the $`\delta `$ propagator). For $`Q=0`$, flavour chiral symmetry restoration and the absence of a disconnected contribution would make $`\frac{1}{2}(P_\pi +P_\delta )`$ vanish in the chiral limit. The observed value is very small. The other combination, $`\frac{1}{2}(P_\pi P_\delta )`$, is non-vanishing and has a finite chiral limit, as expected.
In the $`Q=1`$ sector, $`\frac{1}{2}(P_\pi +P_\delta )`$ should equal the disconnected part of the $`\sigma `$ and $`\eta ^{}`$ propagators in the chiral limit, for chiral flavour symmetry restoration with broken $`U(1)`$ axial symmetry to be possible for $`N_f=2`$ . $`\frac{1}{2}(P_\pi +P_\delta )`$ is plotted in figure 3.
Figures 3 and 2 are consistent with this expectation.
## 4 Discussion and conclusions.
For quenched lattice QCD at $`N_t=8`$, domain-wall quarks with $`N_5\gtrsim 10`$ are a good approximation to chiral quarks at $`\beta =6.2`$ and suggest an approach to chiral symmetry which is exponential in $`N_5`$. The index theorem is well approximated and the $`\pi `$, $`\sigma `$, $`\eta ^{}`$ and $`\delta `$ propagators show the correct behaviour for the restoration of chiral $`SU(2)\times SU(2)`$ flavour symmetry for $`N_f=2`$. The disconnected parts of the $`\sigma `$ and $`\eta `$ propagators come entirely from the $`Q\ne 0`$ sector and have the behaviour needed to give a finite contribution when the $`N_f=2`$ determinant is included. Isolating the lowest eigenmodes greatly improves our stochastic estimation of traces.
Our preliminary results for $`\beta =6.0`$ indicate a more complex behaviour. It is not yet clear whether this indicates that $`N_5=10`$ is too small, or that the condensation of instanton-antiinstanton pairs, as observed by Heller et al. for $`N_t=4`$ using Ginsparg-Wilson fermions , is playing an important role.
A dynamical simulation is needed to determine if, for $`N_f=2`$, the $`U(1)`$ axial symmetry remains broken above the transition. Such simulations are being performed by the Columbia group at $`N_t=4`$ , and preliminary indications are that it does remain broken. These pioneering results are limited by the strong couplings dictated by $`N_t=4`$, and use of heavier quarks than would be desirable. To push closer to the chiral limit probably requires isolating the lowest eigenmode(s) in the simulations.
These calculations were performed on the C90 and J90’s at NERSC.
# Anomalous temperature behavior of resistivity in lightly doped manganites around a metal-insulator phase transition
## Abstract
An unusual temperature behavior of the resistivity $`\rho (T,x)`$ in $`La_{0.7}Ca_{0.3}Mn_{1-x}Cu_xO_3`$ has been observed at slight $`Cu`$ doping ($`0\le x\le 0.05`$). Namely, the introduction of copper results in a splitting of the resistivity maximum around the metal-insulator transition temperature $`T_0(x)`$ into two differently evolving peaks. Unlike the original $`Cu`$-free maximum, which steadily increases with doping, the second (satellite) peak remains virtually unchanged for $`x<x_c`$, increases for $`x\ge x_c`$, and finally disappears at $`x_m\simeq 2x_c`$ with $`x_c\simeq 0.03`$. The observed phenomenon is thought to arise from a competition between the substitution-induced strengthening of potential barriers (which hamper the charge hopping between neighboring $`Mn`$ sites) and the weakening of the carrier's kinetic energy. The data are well fitted assuming a nonthermal tunneling conductivity theory with randomly distributed hopping sites.
To clarify the underlying microscopic transport mechanisms in manganites exhibiting colossal magnetoresistance, numerous studies (both experimental and theoretical) have been undertaken during the past few years, which revealed a rather intricate correlation of structural, magnetic and charging properties in these materials based on a crucial role of the $`Mn^{3+}OMn^{4+}`$ network. In addition to the so-called double-exchange (DE) mechanism (allowing conducting electrons to hop from the singly occupied $`e_{2g}`$ orbitals of $`Mn^{3+}`$ ions to empty $`e_{2g}`$ orbitals of neighboring $`Mn^{4+}`$ ions), these studies emphasized the important role of the Jahn-Teller (JT) mechanism associated with the distortions of the network's bond angle and length and leading to polaron formation and electron localization in the paramagnetic insulating region. In turn, the onset of ferromagnetism below the Curie point increases the effective bandwidth, simultaneously dissolving spin polarons into band electrons and rendering the material more metallic. To modify this network, substitution effects on the properties of the most popular $`La_{0.7}Ca_{0.3}MnO_3`$ manganites have been studied, including the isotopic substitution of oxygen (“giant” isotope effect ), rare-earth (RE) and transition element (TE) doping at the $`Mn`$ site. In particular, an unusually sharp decrease of the resistivity $`\rho (T)`$ in $`La_{0.7}Ca_{0.3}Mn_{0.96}Cu_{0.04}O_3`$ due to just $`4\%`$ $`Cu`$ doping has been reported and attributed to the $`Cu`$-induced weakening of the carrier's kinetic energy $`E_0(x)`$. On the other hand, the opposite temperature behavior of the resistivity (that is, an increase of $`\rho `$ upon TE doping) can also be expected based on deactivation of the DE Zener mechanism. Indeed, this mechanism is effective when electrons can hop (tunnel) between nearest-neighbor TE ions without altering their spin or energy.
Hence, the observed lowering of the metal-insulator (M-I) transition temperature and of the hopping-based conductivity by TE substitution can be ascribed to an inequivalence of the ground-state energies of neighboring $`Mn`$ and TE ions, resulting in the appearance of a doping-dependent potential barrier $`U(x)`$. More precisely, this potential energy exceeds the polaron bandwidth (virtually weakening the DE interaction between neighboring TE and $`Mn`$ ions and thus impeding the possibility of energy-conserving coherent hops) and is defined as the difference between the binding energies of an electron on a TE ion (e.g., $`Cu`$) and on a $`Mn`$ ion, respectively.
In an attempt to pinpoint the above-mentioned potential-energy-controlled hopping mechanism and gain some insight into the barrier's doping profile, in this Letter we present a comparative study of resistivity measurements on $`Cu`$-doped polycrystalline manganite samples from the $`La_{0.7}Ca_{0.3}Mn_{1-x}Cu_xO_3`$ family for $`0\le x\le 0.05`$ over a wide temperature interval (from $`20K`$ to $`300K`$). As we shall see, the data are reasonably well fitted (for all $`T`$ and $`x`$) by a unique (nonthermal) tunneling expression for the resistivity, assuming a random (Gaussian) distribution of hopping sites and an explicit form for the temperature- and doping-dependent effective potential $`U_{eff}(T,x)=U(x)-E(T,x)`$. Besides, the $`Cu`$-doping-induced competition between the barrier's height profile $`U(x)`$ and the previously found behavior of the carrier's kinetic energy $`E_0(x)\equiv E(0,x)`$ results in the emergence of a satellite peak in the temperature behavior of the observed resistivity on the insulating side.
The samples examined in this study were prepared by the standard solid-state reaction from stoichiometric amounts of $`La_2O_3`$, $`CaCO_3`$, $`MnO_2`$, and $`CuO`$ powders. The necessary heat treatment was performed in air, in alumina crucibles, at $`1300^{}C`$ for 2 days to preserve the right phase stoichiometry. Powder X-ray diffraction patterns are characteristic of perovskites and show structures that reflect the presence of orthorhombic (or tetragonal) distortions induced by $`Cu`$ doping. It was confirmed that our data for the undoped samples are compatible with the best results reported by other groups, thus ensuring the quality of our sample-processing conditions and procedures.
The electrical resistivity $`\rho (T,x)`$ was measured using the conventional four-probe method. To avoid Joule and Peltier effects, a dc current $`I=1mA`$ was injected (as a one-second pulse) successively on both sides of the sample. The voltage drop $`V`$ across the sample was measured with high accuracy by a KT256 nanovoltmeter. Figure 1 presents the temperature behavior of the resistivity $`\rho (T,x)`$ for six $`La_{0.7}Ca_{0.3}Mn_{1-x}Cu_xO_3`$ samples, with $`0\le x\le 0.05`$. Notice the rather broad bell-like form of the resistivity for the undoped sample (Fig.1(a)), reaching a maximum at the so-called metal-insulator transition (peak) temperature $`T_0(0)=200K`$. Upon $`Cu`$ doping, two markedly different processes occur. First of all, the $`Cu`$-free (left) resistivity peak increases and becomes narrower (with $`T_0(x)`$ shifting towards lower temperatures). Secondly, at a higher temperature another (satellite) peak emerges, splitting from the original one. It remains virtually unchanged for small $`x`$ (up to $`x_c\simeq 0.03`$), starts to increase for $`x>x_c`$, and finally merges with the main (left) peak at the highest doping level of $`x=0.05`$.
Due to tangible microstructural changes (observed upon copper doping), the JT mechanism plays a decisive role in the above-described resistivity anomalies by assisting electron localization near the M-I transition temperature. Given the growing experimental evidence that polaronic distortions (evident in the paramagnetic state) persist in the ferromagnetic phase as well, we consider the observed resistivity to arise from tunneling of small spin polarons through the doping-created potential barriers. According to a conventional picture , the conductivity due to tunneling of a carrier through an effective barrier of height $`U_{eff}`$ and width $`R`$ reads
$$\sigma =\sigma _he^{-2R/L},$$
(1)
where $`L=h/\sqrt{2mU_{eff}}`$ is a characteristic length, with $`h`$ Planck's constant and $`m`$ an effective carrier mass.
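For orientation, this length is easy to evaluate numerically. The sketch below assumes a free-electron carrier mass and an illustrative barrier $`U_{eff}=0.1`$ eV (neither taken from the fits); note that reading $`h`$ in Eq. (1) literally, rather than $`\mathrm{}`$, rescales $`L`$ by $`2\pi `$.

```python
import math

# Order-of-magnitude sketch of the characteristic length in Eq. (1)
# (assumptions: a free-electron carrier mass and an illustrative
# barrier height U_eff = 0.1 eV, neither taken from the fits).
hbar_c = 1973.27     # eV * Angstrom
mc2 = 0.511e6        # eV, free-electron rest energy
U_eff = 0.1          # eV, illustrative effective barrier

L_hbar = hbar_c / math.sqrt(2.0 * mc2 * U_eff)  # using hbar: a few Angstrom
L_h = 2.0 * math.pi * L_hbar                    # using h literally: 2*pi larger
```

Either way the scale is a few interatomic distances, consistent with hopping between neighboring Mn sites.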
To account for the observed anomalous behavior of the resistivity in our samples, we assume that around the metal-insulator transition, in addition to the $`Cu`$-doping-induced slight modification ($`x\ll 1`$) of the barrier's height $`U(x)\equiv U_{eff}(T_0,x)\simeq xU_1+(1-x)U_2`$, the effective potential $`U_{eff}(T,x)=U(x)-E(T,x)`$ will also depend on the temperature via the corresponding dependence of the carrier's energy $`E(T,x)=h^2/2m\xi ^2(T,x)`$, with a characteristic length $`\xi (T,x)=\xi _0(x)/[1-T/T_0(x)]`$ (with $`\xi _0(x)\simeq \xi _0(0)/(1-x)^2`$) which plays the role of the charge-carrier localization length above $`T_0`$ (in the insulating phase) and of the correlation length below $`T_0`$ (in the metallic phase), so that $`\xi ^{-1}(T_0,x)=0`$. Furthermore, given the rather wide temperature dependence of the resistivity for the undoped sample (see Fig.1(a)), we adopt the effective medium approximation scheme and assume a random distribution of hopping distances $`R`$ with the normalized function $`f(R)`$, leading to
$$\rho =\langle \sigma ^{-1}\rangle =\frac{1}{Z}\int _0^{R_m}dR\,f(R)\,\sigma ^{-1}(R),$$
(2)
for the effective-medium resistivity, where $`Z=\int _0^{R_m}dR\,f(R)`$ with $`R_m`$ being the largest hopping distance. In what follows, for simplicity, we consider a Gaussian distribution of hopping distances (of width $`R_0`$) with the normalized function $`f(R)=(2\pi R_0^2)^{-1/2}e^{-R^2/2R_0^2}`$, resulting in the following expression for the observed resistivity
$$\rho (T,x)=\rho _he^{\gamma ^2}\left[\frac{\mathrm{\Phi }(\gamma )-\mathrm{\Phi }(\gamma -\gamma _m)}{\mathrm{\Phi }(\gamma _m)}\right],$$
(3)
where
$$\gamma (T,x)=\sqrt{\mu (x)-\gamma _0(x)\left[1-\frac{T}{T_0(x)}\right]^2},$$
(4)
with
$$\mu (x)=\frac{2mU(x)R_0^2}{h^2}\simeq \mu (0)+x\mathrm{\Delta }\mu ,$$
(5)
(which measures the substitution-induced potential barriers $`U(x)`$ hampering the charge hopping between neighboring $`Mn`$ sites) and
$$\gamma _0(x)=\frac{R_0^2}{\xi _0^2(x)}\simeq \gamma _0(0)(1-x)^4,$$
(6)
(which measures the effects due to the carrier's kinetic energy $`E_0(x)\equiv E(0,x)`$, see above). Here $`\rho _h=1/\sigma _h`$, $`\gamma _m=R_m/R_0`$, and $`\mathrm{\Phi }(\gamma )`$ is the error function.
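As a consistency check of Eqs. (3)-(6), the model resistivity can be evaluated directly. The sketch below (stdlib Python, illustrative only) uses the parameter values quoted later in the text: $`\mu (0)=\mu ^{-}=\sqrt{2}`$ for $`\gamma _m=1`$, $`\mathrm{\Delta }\mu =54`$, $`\gamma _0(0)=1.5`$, and the Eq. (7) fit $`T_0(0)=200K`$, $`T_m=73K`$, $`\tau =56`$; the normalization is fixed by $`\rho _0(0)=\rho _he^{\mu (0)}=4m\mathrm{\Omega }m`$.

```python
import math

# Illustrative parameter values quoted in the text: mu(0) = mu^- = sqrt(2)
# (for gamma_m = 1), Delta-mu = 54, gamma_0(0) = 1.5, and the Eq. (7) fit
# T0(0) = 200 K, T_m = 73 K, tau = 56.  rho_h is fixed by
# rho_0(0) = rho_h * exp(mu(0)) = 4 mOhm*m.
MU0, DMU, G0 = math.sqrt(2.0), 54.0, 1.5
T00, TM, TAU, GM = 200.0, 73.0, 56.0, 1.0
RHO_H = 4e-3 / math.exp(MU0)            # in Ohm*m

def t0(x):
    """Eq. (7): doping dependence of the main-peak temperature."""
    return T00 - TM * (1.0 - math.exp(-x * TAU))

def gamma(t, x):
    """Eq. (4), with mu(x) and gamma_0(x) from Eqs. (5)-(6)."""
    mu = MU0 + x * DMU
    g0x = G0 * (1.0 - x) ** 4
    arg = mu - g0x * (1.0 - t / t0(x)) ** 2
    return math.sqrt(max(arg, 0.0))     # clip to avoid a negative radicand far from T0

def rho(t, x):
    """Eq. (3): effective-medium resistivity (Phi = error function)."""
    g = gamma(t, x)
    return RHO_H * math.exp(g * g) * (math.erf(g) - math.erf(g - GM)) / math.erf(GM)
```

By construction $`\gamma `$, and hence $`\rho `$, is extremal at $`T=T_0(x)`$, and the peak value grows roughly as $`e^{x\mathrm{\Delta }\mu }`$ with doping, in line with Eq. (8).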
Turning to the discussion of the main (left) resistivity profile, we note that the $`Cu`$ induced changes of its peak temperature $`T_0(x)`$ are well fitted by the exponential law
$$T_0(x)=T_0(0)-T_m\left(1-e^{-x\tau }\right),$$
(7)
with $`T_0(0)=200K`$, $`T_m=73K`$ and $`\tau =56`$. At the same time, according to Eqs.(3)-(7) (and in agreement with the observations, see Fig.2), the corresponding peak resistivity $`\rho _0(x)\rho (T_0,x)`$ increases with $`x`$ as follows
$$\rho _0(x)=\rho _0(0)e^{x\mathrm{\Delta }\mu },$$
(8)
yielding $`\rho _0(0)=\rho _he^{\mu (0)}=4\,m\mathrm{\Omega }\,m`$ and $`\mathrm{\Delta }\mu =54`$ for the model parameters, and suggesting that $`\rho _0(x)\propto 1/T_0(x)`$. To further emphasize this similarity, Fig.2 depicts the extracted doping variation of the normalized quantities $`[T_0(0)-T_0(x)]/T_m`$ (open dots) and of the left-peak conductivity $`\sigma _0(x)/\sigma _0(0)=\rho _0(0)/\rho _0(x)`$ (solid dots), along with the fitting curves (solid lines) according to Eqs.(7) and (8).
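The two fits can be combined numerically. A minimal sketch, using the values quoted above ($`T_0(0)=200K`$, $`T_m=73K`$, $`\tau =56`$, $`\rho _0(0)=4m\mathrm{\Omega }m`$, $`\mathrm{\Delta }\mu =54`$; the code is only illustrative):

```python
import math

T0_0, T_M, TAU = 200.0, 73.0, 56.0   # Eq. (7) fit values quoted in the text
RHO0_0, DMU = 4e-3, 54.0             # rho_0(0) = 4 mOhm*m, Delta-mu = 54

def t0(x):
    """Eq. (7): main-peak temperature."""
    return T0_0 - T_M * (1.0 - math.exp(-x * TAU))

def rho0(x):
    """Eq. (8): main-peak resistivity."""
    return RHO0_0 * math.exp(x * DMU)

def dt_norm(x):
    """Normalized peak-temperature shift [T0(0) - T0(x)] / T_m plotted in Fig. 2."""
    return (T0_0 - t0(x)) / T_M

def sigma_norm(x):
    """Normalized left-peak conductivity sigma_0(x)/sigma_0(0) = rho_0(0)/rho_0(x)."""
    return RHO0_0 / rho0(x)
```

Both normalized quantities vary monotonically with $`x`$, which is the similarity emphasized in Fig. 2.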
A more careful analysis of Eq.(3) shows that, in addition to the main peak at $`T_0(x)`$, the equation $`\frac{d\rho (T,x)}{dT}=0`$ has two more conjugated extremum points at $`T=T_S^\pm (x)`$, intrinsically linked to the main peak, viz. $`T_S^{-}(x)=T_0(x)\left[1+\sqrt{\frac{\mu (x)-\mu ^{-}}{\gamma _0(x)}}\right]`$ and $`T_S^{+}(x)=T_0(x)\left[1+\sqrt{\frac{\mu ^{+}-\mu (x)}{\gamma _0(x)}}\right]`$ with $`\mu ^\pm =\sqrt{2}(2\pm \gamma _m)`$. To attribute these temperatures to the observed satellite (right) peak (see Fig.1), we first have to satisfy the "boundary conditions" at zero ($`x=0`$) and highest ($`x=x_m=0.05`$) doping levels by assuming $`T_S^{-}(0)=T_0(0)`$ and $`T_S^{+}(x_m)=T_0(x_m)`$, which lead to the following constraints on the model parameters: $`\mu ^{-}=\mu (0)`$ and $`\mu ^{+}=\mu (x_m)`$. Secondly, to correctly describe the observed evolution of the satellite peak with copper doping and to introduce a critical concentration $`x_c`$ into our model, we use the continuity condition $`T_S^{+}(x_c)=T_S^{-}(x_c)`$. As a result, we find that the satellite peak is governed by a single law over the whole doping interval, with
$`T_S^{-}(x)`$ $`=`$ $`T_0(x)\left[1+\sqrt{{\displaystyle \frac{x}{2x_c}}}\right],\qquad 0\le x<x_c,`$ (9)
$`T_S^{+}(x)`$ $`=`$ $`T_0(x)\left[1+\sqrt{1-{\displaystyle \frac{x}{2x_c}}}\right],\qquad x_c\le x\le x_m,`$ (10)
where $`x_m=2x_c`$ with $`x_c=\gamma _0(0)/\mathrm{\Delta }\mu `$. Noting that, according to Eqs.(4)-(10), $`\gamma ^2(T_S^{-})=\mu (0)`$ and $`\gamma ^2(T_S^{+})=\mu (0)+2(x-x_c)\mathrm{\Delta }\mu `$, it now follows from Eq.(3), in good agreement with the observations (see Fig.1), that the satellite peak indeed shows practically no change with $`x`$ (up to $`x\simeq x_c`$), since $`\rho _S^{-}(x)=\rho _h\mathrm{exp}[\gamma ^2(T_S^{-})]=\rho _0(0)`$, and starts to increase above the threshold (for $`x>x_c`$) as $`\rho _S^{+}(x)=\rho _0(0)\mathrm{exp}[2(x-x_c)\mathrm{\Delta }\mu ]`$, until it totally merges with the main peak at $`x\simeq x_m`$. By comparing the above expressions with our experimental data for the resistivity peaks at $`x=x_c`$ and $`x=x_m`$, we get $`\gamma _0(0)=R_0^2/\xi _0^2(0)\simeq 1.5`$, which (along with the value of $`\mathrm{\Delta }\mu `$ extracted above, see Eq.(8)) leads to $`x_c=\gamma _0(0)/\mathrm{\Delta }\mu =E_0(0)/\mathrm{\Delta }U\simeq 0.03`$ for the critical concentration of copper, in very good agreement with the observations. As expected, $`x_c`$ reflects the competition between the carrier's kinetic energy and the copper-induced potential barrier. In turn, assuming as usual $`R_0\simeq 5.5\AA `$ for the mean hopping distance and using the free-electron mass for $`m`$, the above estimates yield $`U_2=U(0)\simeq E_0(0)\simeq 0.1eV`$ and $`U_1\simeq \mathrm{\Delta }U\simeq 3eV`$ for the barrier heights of the undoped and maximally doped samples, respectively.
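The boundary and continuity conditions used to fix $`\mu ^\pm `$ and $`x_c`$ can be verified directly from Eqs. (7), (9) and (10). In the sketch below the '+' sign in front of the square roots is assumed (the satellite peak lies above $`T_0`$); the three identities tested for this block hold for either sign choice:

```python
import math

T0_0, T_M, TAU = 200.0, 73.0, 56.0   # Eq. (7) fit parameters
XC = 0.03                            # critical copper concentration
XM = 2.0 * XC                        # highest doping level x_m = 2 x_c

def t0(x):
    return T0_0 - T_M * (1.0 - math.exp(-x * TAU))

# Satellite-peak temperatures, Eqs. (9)-(10); the '+' in front of the square
# root is an assumption of this sketch.
def ts_minus(x):
    """Eq. (9), valid for 0 <= x < x_c."""
    return t0(x) * (1.0 + math.sqrt(x / (2.0 * XC)))

def ts_plus(x):
    """Eq. (10), valid for x_c <= x <= x_m."""
    return t0(x) * (1.0 + math.sqrt(1.0 - x / (2.0 * XC)))
```

The checks below express $`T_S^{-}(0)=T_0(0)`$, $`T_S^{+}(x_m)=T_0(x_m)`$, and the continuity of the two branches at $`x=x_c`$.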
Finally, given the above explicit dependencies for $`T_0(x)`$ and $`T_S^\pm (x)`$, along with the fixed model parameters, we are able to fit all the resistivity data with the single function $`\rho (T,x)`$ given by Eq.(3). The solid lines in Fig.1 are the best fits according to this equation, assuming the nearest-neighbor hopping approximation (with $`\gamma _m=1`$).
In summary, due to the competition between the copper-modified carrier kinetic energy $`E(0,x)`$ and the potential barriers $`U(x)`$ between the $`Mn^{3+}`$ and $`Mn^{4+}`$ dominated hopping sites, a rather unusual "double-peak" behavior of the resistivity $`\rho (T,x)`$ is observed in $`La_{0.7}Ca_{0.3}Mn_{1-x}Cu_xO_3`$ at slight $`Cu`$ doping around the metal-insulator transition temperature $`T_0(x)`$. The temperature and $`x`$ dependences of the resistivity are rather well fitted by coherent (nonthermal) tunneling of charge carriers, with heuristic expressions for the effective potential $`U_{eff}(T,x)=U(x)-E(T,x)`$ and the critical copper concentration $`x_c`$.
Part of this work has been financially supported by the Action de Recherche Concertées (ARC) 94-99/174. S.S. thanks FNRS (Brussels) for some financial support.
# Measurement of Differences Between 𝐽/𝜓 and 𝜓' Suppression in p-A Collisions
## Abstract
Measurements of the suppression of the yield per nucleon of $`J/\psi `$ and $`\psi ^{\prime }`$ production for 800 GeV/c protons incident on heavy relative to light nuclear targets have been made with very broad coverage in $`x_F`$ and $`p_T`$. The observed suppression is smallest at $`x_F`$ values of 0.25 and below and increases at larger values of $`x_F`$. It is also strongest at small $`p_T`$. Substantial differences between the $`\psi ^{\prime }`$ and $`J/\psi `$ are observed for the first time in p-A collisions. The suppression for the $`\psi ^{\prime }`$ is stronger than that for the $`J/\psi `$ for $`x_F`$ near zero, but becomes comparable to that for the $`J/\psi `$ for $`x_F>0.6`$.
Strong suppression of the yield per nucleon of heavy vector mesons produced in heavy nuclei relative to that in light nuclei has been observed in proton and pion-nucleus collisions . Similar effects have also been observed in heavy-ion collisions . This suppression exhibits strong kinematic dependences, especially with Feynman-$`x`$ ($`x_F`$) and transverse momentum ($`p_T`$) of the produced vector meson. Since the suppression of heavy vector meson production in heavy-ion collisions is predicted to be an important signature for the formation of the quark-gluon plasma (QGP), it is important to understand the mechanisms that can produce similar effects in the absence of a QGP. These mechanisms can be studied in proton-nucleus production of vector mesons where no QGP is presumed to occur. Many effects have been considered in attempting to describe the observed proton-induced charmonium yields from nuclear targets, e.g. absorption, parton energy loss, shadowing and feed-down from higher mass resonances, but it is clear that no adequate understanding of the problem has been achieved. Even the absolute cross sections are poorly understood due to poor knowledge of the production mechanism, and most models ignore or use naive pictures of the space-time evolution of the $`c\overline{c}`$ pair. Recognizing that the production and suppression mechanisms can be identified by their strong kinematic dependences, it is crucial to have new data with broad kinematic coverage to challenge comprehensive descriptions of charmonium production in nuclei.
Here we report new high-statistics measurements made in Fermilab E866/NuSea of the nuclear dependence of $`J/\psi `$ and $`\psi ^{\prime }`$ production for proton-nucleus collisions on Be, Fe, and W targets. Over $`3\times 10^6`$ $`J/\psi `$'s and $`10^5`$ $`\psi ^{\prime }`$'s with $`x_F`$ between $`-0.10`$ and 0.93 and $`p_T`$ up to 4 GeV/c were observed. Previous measurements in E772 and E789 have suffered from limited $`p_T`$ acceptance and limited statistics at larger values of $`x_F`$, both of which are greatly extended in these new data.
E866/NuSea used a 3-dipole magnet pair spectrometer employed in previous experiments (E605, E772, and E789), modified by the addition of new drift chambers and hodoscopes with larger acceptance at the first tracking station and a new trigger system . This spectrometer was also used for other measurements in E866/NuSea . An 800 GeV/c extracted proton beam of up to $`6\times 10^{11}`$ protons per 20 s spill bombarded the targets used in these measurements. A rotating wheel which was located upstream of either the first or second magnet held thin solid targets of Be, Fe and W with thicknesses corresponding to between 3% and 19% of an interaction length. After passing through the target, the remaining beam was absorbed in a copper beam dump located inside the large second magnet. Following the beam dump was a 13.4 interaction length absorber wall which filled the entire aperture of the magnet, eliminated hadrons, and assured that only muons traversed the spectrometer’s detectors. These muons were then tracked through a series of detector stations composed of drift chambers, hodoscopes and proportional tubes. Because of improvements in the trigger system, the coverage in $`p_T`$ was much broader than in previous experiments with this spectrometer (e.g. E772), extending to over 4 GeV/c. Beam intensity was monitored using secondary-emission detectors.
Three magnetic field and target location configurations were used to span the full range in $`x_F`$: small-$`x_F`$ (SXF, $`0.1x_F0.3`$), intermediate–$`x_F`$ (IXF, $`0.2x_F0.6`$) and large-$`x_F`$ (LXF, $`0.3x_F0.93`$). Detailed Monte Carlo simulations of the $`J/\psi `$ and $`\psi ^{}`$ peaks and of the Drell-Yan continuum were used to generate lineshapes in each bin in $`x_F`$ or in $`p_T`$. For the Drell-Yan calculations we use MRST NLO with EKS98 shadowing corrections. The contribution to the continuum from semi-leptonic decay to muons of open charm pairs was estimated using PYTHIA and a small correction, less than half the statistical uncertainties, was made for it in the SXF data set; but for the larger $`x_F`$ data sets it is negligible and no corrections were made. In addition, a detailed construction of random muon pairs using single-muon events (which also provided a good fit to the like-sign muon mass spectra) was used to account for the smooth random background underneath the peaks. A maximum-likelihood method was used for fitting that took into account the statistical uncertainty of the data and of the Monte Carlo and randoms. Figure 1 shows a typical fit to a mass spectrum using these components. Since the rates in the various detectors were nearly equal for the different targets, a correction for rate-dependent inefficiencies was not necessary.
We present our results in terms of $`\alpha `$, where $`\alpha `$ is obtained by assuming the cross section dependence on nuclear mass, $`A`$, to be of the form $`\sigma _A=\sigma _N\times A^\alpha `$, where $`\sigma _N`$ is the cross section on a nucleon. For the SXF data, $`\alpha `$ was obtained using Be and two different thickness W targets, while for the IXF and LXF data, Be, Fe and W targets were used. The SXF data from the two W targets verified that no corrections for secondary production were necessary. The $`p_T`$ dependence of $`\alpha `$ is shown in Fig. 2, where we see essentially the same increase in $`\alpha `$ for all $`x_F`$ ranges for both the $`J/\psi `$ and the $`\psi ^{\prime }`$, as well as for the 200 GeV/c NA3 data. This increase is characteristic of multiple scattering of the incident parton and of the nascent $`c\overline{c}`$ in the final state. Note that for the IXF data the $`p_T`$ acceptance is truncated at about 2 GeV/c because a more restrictive trigger was used.
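For orientation, $`\alpha `$ for a pair of targets follows directly from the assumed form $`\sigma _A=\sigma _N\times A^\alpha `$. A minimal two-target extraction might look as follows (the experiment itself fits three targets; the cross-section numbers below are purely illustrative):

```python
import math

A = {"Be": 9, "Fe": 56, "W": 184}    # mass numbers of the targets

def alpha_two_targets(yield_light, yield_heavy, a_light, a_heavy):
    """Exponent alpha from sigma_A = sigma_N * A**alpha, given per-nucleus yields on two nuclei."""
    return math.log(yield_heavy / yield_light) / math.log(a_heavy / a_light)

# Sanity check: exactly linear scaling with A (no suppression) gives alpha = 1.
sigma_n = 2.5                        # arbitrary nucleon cross section (illustrative)
y_be, y_w = sigma_n * A["Be"], sigma_n * A["W"]
```

Any deviation of the per-nucleus yields from linear $`A`$ scaling shows up as $`\alpha <1`$ (suppression) or $`\alpha >1`$ (enhancement).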
Previous experiments such as E772 have had a limited acceptance in $`p_T`$ which varied with $`x_F`$. Since the value of $`\alpha `$ depends strongly on $`p_T`$ this can cause a distortion of the apparent shape of $`\alpha `$ versus $`x_F`$. The improvements in the E866/NuSea trigger allowed a much broader $`p_T`$ acceptance than in these earlier measurements. However, for the lowest values of $`x_F`$ at each spectrometer setting our $`p_T`$ acceptance still becomes somewhat restricted. For the results presented here we have corrected the values of $`\alpha (x_F)`$ using a detailed simulation of our acceptance and a differential cross section shape versus $`p_T`$ derived from our data.
The resulting dependence of $`\alpha `$ on $`x_F`$ is shown in Fig. 3 and listed in Table I. The systematic uncertainty of $`1\%`$ in the corrected $`\alpha `$ is dominated by the $`p_T`$ acceptance correction. $`\alpha `$ for the $`J/\psi `$ is largest at values of $`x_F`$ of 0.25 and below but strongly decreases at larger values of $`x_F`$. For the $`\psi ^{\prime }`$, $`\alpha `$ is smaller than for the $`J/\psi `$ for $`x_F<0.2`$, remains relatively constant up to $`x_F`$ of 0.5 (becoming slightly larger than for the $`J/\psi `$) and then falls to values consistent with those for the $`J/\psi `$ for $`x_F>0.6`$. The significance of the overall $`J/\psi `$-$`\psi ^{\prime }`$ difference for $`x_F<0.2`$ is about 4 sigma with respect to the statistical and relative systematic uncertainties. This difference is consistent with less accurate results obtained by NA38 for p-A at 450 GeV/c, but is inconsistent with the quoted NA38 result that also included the p-p and p-d data from NA51. Although slightly larger $`\alpha `$ values for the $`\psi ^{\prime }`$ than for the $`J/\psi `$ can be seen near $`x_F=0.55`$, we should point out that if instead we emphasize the velocity of the $`c\overline{c}`$ and plot $`\alpha `$ versus rapidity, then the agreement is quite good in this region. The reduced $`\alpha `$ at small $`x_F`$ is also evident in Fig. 2, where $`\alpha `$ for the $`\psi ^{\prime }`$ falls consistently below that for the $`J/\psi `$ at low $`p_T`$ for the SXF data set.
Our results for the $`J/\psi `$ $`\alpha `$ can be represented for convenience by the simple parameterizations shown as solid lines in Figs. 2 and 3: $`\alpha (x_F)=0.960(1-0.0519x_F-0.338x_F^2)`$, and $`\alpha (p_T)=A_i(1+0.0604p_T+0.0107p_T^2)`$, where $`A_i=0.870`$, $`0.840`$, $`0.782`$ and $`0.881`$ for the SXF, IXF and LXF datasets and for the NA3 data, respectively.
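For quick evaluation, the two parameterizations can be coded directly. Note that the signs of the $`x_F`$ terms below are inferred from the stated behavior of $`\alpha (x_F)`$ (near its maximum at small $`x_F`$, strongly falling at large $`x_F`$) and should be checked against the published fit:

```python
# Parameterization coefficients as quoted in the text; the signs of the x_F
# terms are an assumption of this sketch.
def alpha_xf(xf):
    return 0.960 * (1.0 - 0.0519 * xf - 0.338 * xf ** 2)

A_I = {"SXF": 0.870, "IXF": 0.840, "LXF": 0.782, "NA3": 0.881}

def alpha_pt(pt, dataset="SXF"):
    return A_I[dataset] * (1.0 + 0.0604 * pt + 0.0107 * pt ** 2)
```

With these signs, $`\alpha (x_F)`$ falls monotonically over the measured range while $`\alpha (p_T)`$ rises with $`p_T`$, matching the trends of Figs. 2 and 3.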
A comparison of our results with earlier data from E772 at 800 GeV/c and also with NA3 at 200 GeV/c is shown in Fig. 4. It illustrates that the suppression seen for $`J/\psi `$ production scales with $`x_F`$ but not with $`p_{J/\psi }^{LAB}`$ above about 90 GeV/c, which corresponds to $`x_F>0.05`$ for our data and to $`x_F>0.4`$ for NA3. Also of interest in these figures is a comparison of our results with those of E772. At the small-$`x_F`$ end of the E772 data their published results drop significantly below our results. As was discussed above, the E772 data have severe narrowing of the $`p_T`$ acceptance for their smallest $`x_F`$ bins, and a large correction that could easily bring the E772 points into agreement with our data is expected. Similar arguments hold for the E789 $`J/\psi `$ data (not shown) at small to negative $`x_F`$, where we estimate about an 8% correction which would bring those results into agreement with ours. On the other hand, the large $`x_F`$ results from E789 appear to be high by more than their systematic uncertainty of 2.5%.
The suppression of $`J/\psi `$ production near $`x_F=0`$ is usually thought to be caused by absorption, the dissociation of the $`c\overline{c}`$ pair by interactions with the nucleus or comovers into separate quarks that eventually hadronize into $`D`$ mesons. This model is supported by both the increased suppression of the $`\psi ^{\prime }`$ that we observe near $`x_F=0`$ and the absence of suppression of $`D`$ meson production in the same kinematic region . At small $`x_F`$, the velocity of the $`c\overline{c}`$ pair is low enough that it may hadronize within the nucleus, so the larger $`\psi ^{\prime }`$ would be absorbed more strongly . However, the observed constancy of $`\alpha `$ for both the $`J/\psi `$ and the $`\psi ^{\prime }`$ up to $`y_{cm}\approx 1`$ complicates this interpretation, since these models predict that faster $`c\overline{c}`$ pairs above $`x_F\approx 0.1`$ would experience similar absorption, whether they eventually hadronize outside the nucleus into a $`J/\psi `$ or a $`\psi ^{\prime }`$. At larger values of $`x_F`$, above $`0.3`$, our data do show similar suppression for the $`J/\psi `$ and $`\psi ^{\prime }`$. Furthermore, if absorption by the nuclear medium is the dominant suppression mechanism, the effect should scale with $`p_{J/\psi }^{LAB}`$, but Fig. 4 shows that scaling breaks down in the middle of the region where we observe $`\alpha `$ to be constant.
Shadowing of the small-$`x`$ target gluon distributions is also thought to play a role in the observed suppression, but current estimates predict at most a few percent drop in $`\alpha (x_F)`$, even at the largest $`x_F`$ values. Also, as is seen for our data (but not shown) and was seen previously, there is a lack of scaling with $`x_2`$, which is related to that shown above for $`p_{J/\psi }^{LAB}`$ since $`x_2\propto 1/p_{J/\psi }^{LAB}`$. This appears to rule out large contributions from shadowing. Our studies show that for Drell-Yan the dominant nuclear effect is shadowing of the anti-quark distributions and that the energy-loss of the incident quark is small. Although the incoming gluon's energy loss is expected to be larger by a color factor of $`9/4`$ and the additional energy loss of the outgoing $`c\overline{c}`$ may be as large as that of a gluon, we still expect the overall contribution of energy loss to be small for resonance production.
In conclusion, we have presented new data for the suppression of $`J/\psi `$ and $`\psi ^{\prime }`$ production in heavy versus light nuclei for 800 GeV/c proton-nucleus collisions. The kinematic coverage in $`x_F`$ ($`-0.10`$ to 0.93) and $`p_T`$ (0 - 4 GeV/c) and the statistical accuracy surpass those of previous experiments. Corrections are made to the data to account for the narrowing $`p_T`$ acceptance at the smaller values of $`x_F`$. The largest value of $`\alpha `$ (integrated over $`p_T`$), about 0.95, is seen at $`x_F`$ near 0.25 and below, with strongly falling values at larger $`x_F`$. The most striking new result is that the suppression for the $`\psi ^{\prime }`$ is stronger than that for the $`J/\psi `$ at $`x_F`$ near zero.
We thank Ramona Vogt and Boris Kopeliovich for many useful discussions and the Fermilab Particle Physics, Beams and Computing Divisions for their assistance. This work was supported in part by the U.S. Department of Energy.
# Probability distribution of the order parameter for the 3D Ising model universality class: a high precision Monte Carlo study
## Abstract
We study the probability distribution $`P(M)`$ of the order parameter (average magnetization) $`M`$, for the finite-size systems at the critical point. The systems under consideration are the 3-dimensional Ising model on a simple cubic lattice, and its 3-state generalization known to have remarkably small corrections to scaling. Both models are studied in a cubic box with periodic boundary conditions. The model with reduced corrections to scaling makes it possible to determine $`P(M)`$ with unprecedented precision. We also obtain a simple, but remarkably accurate approximate formula describing the universal shape of $`P(M)`$.
This work is devoted to the study of the following problem. Consider a finite system belonging to the universality class of the three-dimensional (3D) Ising model, exactly at its critical point. Let the system have a non-conserved order parameter, cubic symmetry and periodic boundary conditions. For such a finite-size system the order parameter $`M`$ (for the Ising model, the sum of all spins, divided by the total number of spins in the system) will be a fluctuating quantity, characterized by the probability distribution $`P(M)`$ . In the scaling limit (system size going to infinity) this function is universal (up to rescaling of $`M`$) and can thus be considered a very interesting and informative characteristic of the given universality class. (One should bear in mind that $`P(M)`$ depends on the geometry of the box, and on the boundary conditions; in this study we always consider a cubic box with periodic boundary conditions). For example, $`P(M)`$ contains the information about all moments $`\langle M^k\rangle `$ of $`M`$, including universal ratios such as the Binder cumulant $`U=1-\langle M^4\rangle /(3\langle M^2\rangle ^2)`$, which has been a subject of many Monte Carlo studies . Moreover, the precise knowledge of $`P(M)`$ proved to be important for locating and characterizing the critical point in Monte Carlo studies of various systems, including the liquid-gas critical point , the critical point in the unified theory of weak and electromagnetic interactions and in quantum chromodynamics .
The first Monte Carlo computation of $`P(M)`$ for the 3D Ising model in a cubic box with periodic boundary conditions has been performed in Ref. , where its double-peak shape was established. A more accurate determination of $`P(M)`$ has been done in Ref. , also by Monte Carlo. Results reported for the 3D case in Ref. appear to be incorrect.
Despite considerable progress in computation of $`P(M)`$ by analytical methods , numerical simulation remains the main source of information about its properties.
Our aim was to compute $`P(M)`$ on a qualitatively new level of accuracy, in comparison to what has been done before , and to put the result into form convenient for further use. We would like to emphasize the following two features of our computation that made this possible.
1. The computation of Ref. used the 3D Ising model on a simple cubic lattice of sizes $`20^3`$ and $`30^3`$. As we will see, the shape of $`P(M)`$ obtained with these relatively small lattice sizes still differs noticeably from its scaling limit, due to nonnegligible corrections to scaling. To get over this difficulty, we employed, besides the 3D Ising model, the more sophisticated model in the same universality class, which was shown to have remarkably small corrections to scaling . This made it possible to determine the scaling limit of $`P(M)`$ with an accuracy far exceeding what would be achievable when one is restricted to the standard 3D Ising model.
2. The existing results for $`P(M)`$ were presented in the form of Monte Carlo-generated histograms . We present a simple 3-parameter formula which is suitable for quantitative applications. Its accuracy is about $`2\times 10^{-3}`$ of the maximum value of $`P(M)`$.
We have performed Monte Carlo simulations of two models. The first one is the standard 3D Ising model on the simple cubic lattice, defined by the partition function
$$Z=\sum _{\{s_i\}}\mathrm{exp}\left\{\beta \sum _{\langle ij\rangle }s_is_j\right\},\qquad s_i=\pm 1.$$
(1)
Here $`\langle ij\rangle `$ denotes the pairs of nearest neighbours, and the sum is over the $`2^N`$ possible configurations, where $`N`$ is the total number of spins. We simulate this model at the critical point, which we take to be at $`\beta _c=0.221654`$, using the Swendsen-Wang cluster algorithm , and lattice sizes ranging from $`12^3`$ to $`58^3`$ (with periodic boundary conditions).
The second model (with dramatically reduced corrections to scaling, as was shown in Ref. ) is the spin-1 Blume-Capel model . Here the spins can take 3 discrete values: $`1`$, 0, and $`+1`$. The model is defined by the partition function
$$Z=\sum _{\{s_i\}}\mathrm{exp}\left\{\beta \sum _{\langle ij\rangle }s_is_j-D\sum _ms_m^2\right\},\qquad s_i=-1,0,+1.$$
(2)
The sum thus includes $`3^N`$ possible configurations. The parameter $`D`$ is fixed to the special value $`D=\mathrm{ln}2`$ (as explained in Ref. ), and we perform the simulations at the critical point, which is taken to be $`\beta _c=0.393422`$ , using lattice sizes from $`12^3`$ to $`58^3`$. The simulations used a hybrid method, which alternates one Metropolis sweep with 5 or 10 Wolff steps, depending on the system size, as described in Ref. .
The probability distribution $`P(M)`$ is obtained as follows. For each configuration generated by the Monte Carlo algorithm, we determine the order parameter $`M=\frac{1}{N}\sum _{i=1}^{N}s_i`$, and increment the population of the corresponding bin of the histogram by 1.
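The histogram construction can be illustrated with a toy example: a single-spin-flip Metropolis simulation of a tiny $`4^3`$ Ising lattice. The study itself uses Swendsen-Wang and hybrid Wolff updates on much larger lattices; everything below is a sketch of the bookkeeping, not the production algorithm:

```python
import random
from math import exp

# Toy sketch: single-spin-flip Metropolis on a 4^3 Ising lattice with
# periodic boundary conditions, collecting the histogram of M.
L, BETA, SWEEPS = 4, 0.221654, 2000
N = L * L * L
random.seed(1)
spin = [1] * N

def idx(x, y, z):
    return (x % L) + L * (y % L) + L * L * (z % L)

def neighbours(i):
    x, y, z = i % L, (i // L) % L, i // (L * L)
    return (idx(x + 1, y, z), idx(x - 1, y, z), idx(x, y + 1, z),
            idx(x, y - 1, z), idx(x, y, z + 1), idx(x, y, z - 1))

hist = {}
for _ in range(SWEEPS):
    for i in range(N):
        h = sum(spin[j] for j in neighbours(i))
        dE = 2.0 * spin[i] * h                 # energy cost of flipping spin i
        if dE <= 0.0 or random.random() < exp(-BETA * dE):
            spin[i] = -spin[i]
    m = sum(spin) / N                          # order parameter of this configuration
    hist[m] = hist.get(m, 0) + 1               # one histogram count per configuration

prob = {m: c / SWEEPS for m, c in hist.items()}   # normalized estimate of P(M)
```

On such a small lattice the histogram is coarse and noisy; the point is only the counting scheme, one increment per generated configuration.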
We have found that the following ansatz gives a surprisingly good approximation to $`P(M)`$:
$$P(M)\propto \mathrm{exp}\left\{-\left(\frac{M^2}{M_0^2}-1\right)^2\left(a\frac{M^2}{M_0^2}+c\right)\right\}.$$
(3)
At the same time, the simpler ansatz
$$P(M)\propto \mathrm{exp}\left\{-c\left(\frac{M^2}{M_0^2}-1\right)^2\right\}$$
(4)
is clearly insufficient. This is illustrated in Fig. 1, using our highest-statistics data set for the $`20^3`$ lattice. One observes that the accuracy of approximation (3) is approximately 20 times higher than that of Eq. (4), and the residual discrepancy of Eq. (3) is comparable to the statistical noise, even with the high statistics used.
The ansatz (3) was motivated by the observation that $`M^6`$ plays an important role in the effective potential of the models in the 3D Ising universality class, while higher powers of the order parameter can usually be neglected . That is, the effective potential can in many cases well be approximated by a polynomial consisting of $`M^2`$, $`M^4`$ and $`M^6`$ terms. This is exactly what appears in the exponent in Eq. (3).
The approximate nature of the ansatz (3) manifests itself by its failure to correctly reproduce the large-$`M`$ behaviour of the tails of $`P(M)`$, which is governed by the critical index $`\delta `$,
$$P(M)\propto M^{(\delta -1)/2}\mathrm{exp}\{-\mathrm{const}\,M^{\delta +1}\}$$
(5)
(see Ref. ; for the discussion of the preexponential factor in a more general setting, see Ref. ). However, due to the fact that for the 3D Ising universality class the exponent $`\delta +1\approx 5.8`$ is close to 6, this does not prevent the ansatz (3) from accurately describing the main part of $`P(M)`$ (excluding the extremely-far-tail region).
The polynomial in the exponent of Eq. (3) has three parameters. Instead of simply parametrizing it by the coefficients in front of $`M^2`$, $`M^4`$ and $`M^6`$, we have chosen the parametrization so as to separate the scale-invariant parameters ($`a`$ and $`c`$) and the scale-dependent parameter $`M_0`$ (which parametrizes the position of the peak of the order parameter). The values of $`a`$ and $`c`$ in the scaling limit are universal and determine the “universal shape” of $`P(M)`$.
The results of our Monte Carlo simulations are collected in Tables I and II and shown in Figures 1, 2. For the spin-1 model, no deviations from scaling are observed on lattices $`16^3`$ and larger, while the simple cubic Ising model demonstrates pronounced corrections to scaling, which are, even on our largest lattices, much higher than both the statistical errors of our spin-1 simulations and the accuracy of approximation (3). Corrections to scaling make it difficult to extract accurate scaling-limit values of $`a`$ and $`c`$ from the simple cubic Ising model data, even if statistical errors are reduced by a higher-statistics simulation, due to the necessity of extrapolating to $`L\to \mathrm{}`$.
There is no such problem with the spin-1 model, and we obtain the universal parameters of Eq. (3),
$$a=0.158(2),c=0.776(2).$$
(6)
Here the errors take into account both statistical uncertainties and the systematic deviations inherent in the approximation (3). The latter are estimated from the lower right plot in Fig. 1.
From Eqs. (3) and (6) one can easily obtain any required property of $`P(M)`$. For example, one immediately learns that the ratio of the peak value of $`P(M)`$ to its value at $`M=0`$ is
$$e^c=2.173(4).$$
(7)
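For instance, evaluating the ansatz (3) with the parameters (6) (stdlib Python; $`M_0`$ is non-universal and set to 1 here, and the overall minus sign in the exponent is taken as in the double-peaked form above) reproduces the peak-to-valley ratio $`e^c`$ of Eq. (7) directly, and a simple numerical integration of the moments gives a Binder cumulant in the vicinity of 0.46-0.47:

```python
import math

A, C = 0.158, 0.776     # universal parameters of Eq. (6)
M0 = 1.0                # non-universal scale parameter, set to 1 here

def p_unnorm(m):
    """Ansatz (3), unnormalized."""
    u = (m / M0) ** 2
    return math.exp(-((u - 1.0) ** 2) * (A * u + C))

def moment(k, lo=-3.0, hi=3.0, n=6000):
    """k-th moment integral by the midpoint rule (tails beyond |M| = 3 are negligible)."""
    dm = (hi - lo) / n
    total = 0.0
    for i in range(n):
        m = lo + (i + 0.5) * dm
        total += (m ** k) * p_unnorm(m)
    return total * dm

norm = moment(0)
m2 = moment(2) / norm
m4 = moment(4) / norm
binder = 1.0 - m4 / (3.0 * m2 ** 2)   # Binder cumulant implied by the ansatz
```

The ratio $`p(M_0)/p(0)=e^c\approx 2.17`$ follows exactly from the ansatz, independently of $`a`$ and $`M_0`$.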
Summarizing, we have computed, with a higher accuracy than previously available, the scaling-limit form of the probability distribution $`P(M)`$ of the order parameter $`M`$ of systems with 3D Ising universality, in a cubic box with periodic boundary conditions. A convenient description of $`P(M)`$ is given by Eqs. (3) and (6), which deviates from the actual $`P(M)`$ by no more than $`2\times 10^{-3}`$ times its maximum value (Fig. 1, right).
###### Acknowledgements.
We thank INTAS (grant CT93-0023) and DRSTP (Dutch Research School for Theoretical Physics) for enabling one of us (M.T.) to visit Delft University.
|
no-problem/9909/cond-mat9909154.html
|
ar5iv
|
text
|
# Hopping motion of lattice gases through nonsymmetric potentials under strong bias conditions
## I Introduction
The motion of particles in potentials that do not have mirror reflection symmetry has attracted much attention in recent years for several reasons. The interest extends from fundamental problems concerning the validity of the Second Law of Thermodynamics to applications in biological and chemical systems , as well as to solid-state devices . Major efforts have been devoted to an understanding of molecular motors, where proteins move in nonsymmetric potentials under the influence of stochastic and/or other forces. One specific observation for transport in nonsymmetric potentials is the possibility of rectification effects if the forces on the particles are beyond the regime where linear-response theory is applicable . Rectification effects have been discussed in continuous as well as in hopping systems . If applications of effects of particle motion in nonsymmetric potentials are envisaged, then the question arises as to the influence of many-particle effects. The limit of single-particle motion is rarely realized; in real systems many particles are present that compete for the sites that can be occupied. Many-particle effects have been studied in continuous nonsymmetric periodic potentials in Ref., where interesting dependencies of the current on particle concentration and size were found. In this paper we will investigate hopping motion of lattice-gas particles in nonsymmetric hopping potentials under the influence of strong bias. We utilize the simple site-exclusion model where multiple occupancy of sites is excluded and direct our attention to nonlinear effects on the particle current.
The stationary current of a single particle performing a hopping motion in a nonsymmetric potential under an arbitrary bias is known exactly . The calculation of the stationary current of site-exclusion lattice gases in nonsymmetric potentials that lead to rectification effects in the single-particle case is a difficult problem. Extensive work has been devoted to the asymmetric site exclusion process including the totally asymmetric site exclusion process (TASEP) where the particles can only hop in one direction, corresponding to very strong bias. The case of uniform hopping potentials is now well understood , but the case of nonuniform potentials is not generally solved. Recent work has been devoted to the TASEP with disordered potentials . For the general asymmetric case one has to resort to numerical simulations; we are going to present simulation results for the stationary current of lattice-gas particles in nonsymmetric potentials, for various concentrations and values of the bias.
Nonetheless, some analytical treatment can be given. First, the case of very small periodic systems can be treated explicitly: the motion of 2 particles on a ring of period 4 can be solved by elementary means. Although this is a very simple system, conclusions can be drawn in the limit of very strong bias that are of interest for the totally asymmetric site exclusion process. The nonlinear current of site-exclusion lattice gases in extended systems with periodic repetitions of nonsymmetric segments can be derived in a mean-field approximation for strong bias conditions. Interesting symmetry properties have been pointed out for the TASEP in disordered hopping potentials . While a particle-vacancy symmetry is also present in our model, the case of inversion of the bias direction is different here.
In the following section the hopping motion of 2 particles on a ring of period 4 is solved and analyzed. In Sec. III a mean-field approximation for the stationary current of lattice gases under strong bias in nonsymmetric hopping potentials is presented and compared with simulation results in a sawtooth potential. The symmetry properties of the model are discussed in Sec. IV and concluding remarks are given in Sec. V.
## II Two Particles on a Ring with Four Sites
### A Solution of the stationary master equations
A very simple yet nontrivial model is given by a ring with 4 sites and 2 particles, cf. Fig. 1. The basic quantities for the description of the system are the joint probabilities $`P(i,j;t)`$ $`(i\neq j)`$ of finding one particle at site $`i`$ and the other particle at site $`j`$, at time $`t`$, for specified initial conditions. Since the particles are considered as indistinguishable, $`P(i,j;t)=P(j,i;t)`$. There are 6 different joint two-particle probabilities on the ring with 4 sites (generally $`L(L-1)/2`$ on rings with $`L`$ sites). Higher-order joint probabilities do not occur for 2 particles.
The probabilities $`P(i;t)`$ of finding a particle at site $`i`$ at time $`t`$ are given by
$$P(i;t)=\sum _{j\neq i}P(i,j;t)$$
(1)
For two particles they are normalized to
$$\sum _{i=1}^4P(i;t)=2.$$
(2)
This condition implies
$$\sum _{i<j}P(i,j;t)=1.$$
(3)
The master equations for the joint probabilities are easily written down,
$`{\displaystyle \frac{d}{dt}}P(1,2;t)`$ $`=`$ $`\delta _3P(1,3;t)+\gamma _4P(2,4;t)-(\gamma _2+\delta _1)P(1,2;t)`$ (4)
$`{\displaystyle \frac{d}{dt}}P(2,3;t)`$ $`=`$ $`\delta _4P(2,4;t)+\gamma _1P(1,3;t)-(\gamma _3+\delta _2)P(2,3;t)`$ (5)
$`{\displaystyle \frac{d}{dt}}P(3,4;t)`$ $`=`$ $`\delta _1P(1,3;t)+\gamma _2P(2,4;t)-(\gamma _4+\delta _3)P(3,4;t)`$ (6)
$`{\displaystyle \frac{d}{dt}}P(1,4;t)`$ $`=`$ $`\delta _2P(2,4;t)+\gamma _3P(1,3;t)-(\gamma _1+\delta _4)P(1,4;t)`$ (7)
$`{\displaystyle \frac{d}{dt}}P(1,3;t)`$ $`=`$ $`\gamma _2P(1,2;t)+\delta _2P(2,3;t)+\gamma _4P(3,4;t)`$ (8)
$`+`$ $`\delta _4P(1,4;t)-(\gamma _1+\delta _1+\gamma _3+\delta _3)P(1,3;t)`$ (9)
$`{\displaystyle \frac{d}{dt}}P(2,4;t)`$ $`=`$ $`\delta _1P(1,2;t)+\gamma _3P(2,3;t)+\delta _3P(3,4;t)`$ (10)
$`+`$ $`\gamma _1P(1,4;t)-(\gamma _2+\delta _2+\gamma _4+\delta _4)P(2,4;t).`$ (11)
The sum of the 6 master equations leads to the conservation law
$$\frac{d}{dt}\left[\sum _{i<j}P(i,j;t)\right]=0,$$
(13)
consistent with the relation (3) given above.
We are interested in the stationary solution of the system of master equations (4). The stationary values $`P(i,j;t\to \infty )`$ will be denoted by $`P_{ij}`$. The stationary joint probabilities for adjacent sites, e.g. $`P_{12}`$, can all be expressed by the stationary joint probabilities $`P_{13}`$ and $`P_{24}`$. For instance, the first line of Eq.(4) yields
$$P_{12}=\frac{1}{\gamma _2+\delta _1}(\delta _3P_{13}+\gamma _4P_{24}).$$
(14)
Three analogous relations follow from (4); they can be obtained by cyclically increasing the indices in Eq.(14). If the joint probabilities for adjacent sites are eliminated from the stationary master equations, two homogeneous equations remain which are equivalent. We write this equation as
$$a_{11}P_{13}+a_{12}P_{24}=0$$
(15)
with the coefficients
$`a_{11}`$ $`=`$ $`-\left({\displaystyle \frac{\delta _1\delta _3}{\gamma _2+\delta _1}}+{\displaystyle \frac{\gamma _1\gamma _3}{\gamma _3+\delta _2}}+{\displaystyle \frac{\delta _1\delta _3}{\gamma _4+\delta _3}}+{\displaystyle \frac{\gamma _1\gamma _3}{\gamma _1+\delta _4}}\right)`$ (16)
$`a_{12}`$ $`=`$ $`{\displaystyle \frac{\gamma _2\gamma _4}{\gamma _2+\delta _1}}+{\displaystyle \frac{\delta _2\delta _4}{\gamma _3+\delta _2}}+{\displaystyle \frac{\gamma _2\gamma _4}{\gamma _4+\delta _3}}+{\displaystyle \frac{\delta _2\delta _4}{\gamma _1+\delta _4}}.`$ (17)
The second equation for $`P_{13}`$ and $`P_{24}`$ is obtained from the normalization condition Eq.(3), after elimination of the joint probabilities of adjacent sites. It reads
$$a_{21}P_{13}+a_{22}P_{24}=1.$$
(18)
with the coefficients
$`a_{21}`$ $`=`$ $`{\displaystyle \frac{\delta _3}{\gamma _2+\delta _1}}+{\displaystyle \frac{\gamma _1}{\gamma _3+\delta _2}}+{\displaystyle \frac{\delta _1}{\gamma _4+\delta _3}}+{\displaystyle \frac{\gamma _3}{\gamma _1+\delta _4}}+1`$ (19)
$`a_{22}`$ $`=`$ $`{\displaystyle \frac{\gamma _4}{\gamma _2+\delta _1}}+{\displaystyle \frac{\delta _4}{\gamma _3+\delta _2}}+{\displaystyle \frac{\gamma _2}{\gamma _4+\delta _3}}+{\displaystyle \frac{\delta _2}{\gamma _1+\delta _4}}+1.`$ (20)
The solution of the two linear equations is
$`P_{13}`$ $`=`$ $`-{\displaystyle \frac{a_{12}}{a_{11}a_{22}-a_{12}a_{21}}},`$ (21)
$`P_{24}`$ $`=`$ $`{\displaystyle \frac{a_{11}}{a_{11}a_{22}-a_{12}a_{21}}}.`$ (22)
Since the joint probabilities for adjacent sites are obtained from $`P_{13}`$ and $`P_{24}`$ via Eq.(14), and the one-site stationary probabilities $`P_i\equiv P(i;t\to \infty )`$ follow from Eq.(1), Eq.(21) represents the complete solution of the stationary problem. Note that $`a_{11}<0`$, so the denominator in Eq.(21) is negative and both $`P_{13}`$ and $`P_{24}`$ are positive.
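As a numerical cross-check of the algebra above (in particular of the signs), the closed-form solution can be evaluated for arbitrary rates. The following sketch is our own illustration, not part of the original calculation, and the function name is ours: it builds the coefficients (16)-(20), recovers $`P_{13}`$ and $`P_{24}`$ from Eqs. (21)-(22), and reconstructs the adjacent-site probabilities from Eq. (14) and its cyclic analogues.

```python
def stationary_pair_probs(g, d):
    """Stationary joint probabilities for 2 hard-core particles on a
    4-site ring; g[i], d[i] are the right/left rates gamma_{i+1}, delta_{i+1}."""
    g1, g2, g3, g4 = g
    d1, d2, d3, d4 = d
    # Coefficients of Eqs. (16)-(20); note the overall minus sign in a11.
    a11 = -(d1*d3/(g2+d1) + g1*g3/(g3+d2) + d1*d3/(g4+d3) + g1*g3/(g1+d4))
    a12 = g2*g4/(g2+d1) + d2*d4/(g3+d2) + g2*g4/(g4+d3) + d2*d4/(g1+d4)
    a21 = d3/(g2+d1) + g1/(g3+d2) + d1/(g4+d3) + g3/(g1+d4) + 1.0
    a22 = g4/(g2+d1) + d4/(g3+d2) + g2/(g4+d3) + d2/(g1+d4) + 1.0
    det = a11*a22 - a12*a21
    P13, P24 = -a12/det, a11/det          # Eqs. (21)-(22)
    # Adjacent-site probabilities, Eq. (14) and its cyclic analogues.
    P12 = (d3*P13 + g4*P24) / (g2+d1)
    P23 = (d4*P24 + g1*P13) / (g3+d2)
    P34 = (d1*P13 + g2*P24) / (g4+d3)
    P14 = (d2*P24 + g3*P13) / (g1+d4)
    return {(1, 2): P12, (2, 3): P23, (3, 4): P34,
            (1, 4): P14, (1, 3): P13, (2, 4): P24}
```

For any positive rates the six probabilities sum to one, and the redundant stationary equation for $`P(2,4)`$, Eqs. (10)-(11), is then satisfied automatically; checking it is a nontrivial test of the coefficients.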
We derive the stationary current in the system by considering the bond connecting sites 1 and 2. The stationary current is given by
$$J=\gamma _1(P_1-P_{12})-\delta _2(P_2-P_{12})$$
(23)
The joint probabilities in Eq.(23) ensure exclusion of double occupancy of sites. Using Eq.(1) the current is expressed in terms of the joint probabilities,
$$J=\gamma _1(P_{13}+P_{14})-\delta _2(P_{23}+P_{24}).$$
(24)
Insertion of the stationary solution for the joint probabilities gives
$$J=\frac{\gamma _1+\gamma _3+\delta _2+\delta _4}{(\gamma _1+\delta _4)(\gamma _3+\delta _2)}(\gamma _1\gamma _3P_{13}-\delta _2\delta _4P_{24}).$$
(25)
The current may also be derived by considering the other bonds of the ring. Two equivalent forms of the current result; the second (equivalent) form reads
$$J=\frac{\gamma _2+\gamma _4+\delta _1+\delta _3}{(\gamma _2+\delta _1)(\gamma _4+\delta _3)}(\gamma _2\gamma _4P_{24}-\delta _1\delta _3P_{13}).$$
(26)
It can be shown that the current vanishes if the right and left transition rates fulfill the following condition,
$$\gamma _1\gamma _2\gamma _3\gamma _4=\delta _1\delta _2\delta _3\delta _4$$
(27)
corresponding to a detailed balance relation over the ring.
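The two equivalent forms (25) and (26) of the current, and the detailed-balance condition (27), lend themselves to a direct numerical check. The sketch below (our illustration; the names are not from the paper) recomputes $`P_{13}`$ and $`P_{24}`$ and evaluates both expressions, which must agree for any positive rates and vanish whenever condition (27) holds.

```python
def pair_current(g, d):
    """Stationary two-particle current on the 4-site ring via Eqs. (25)/(26)."""
    g1, g2, g3, g4 = g
    d1, d2, d3, d4 = d
    # Stationary solution, Eqs. (16)-(22).
    a11 = -(d1*d3/(g2+d1) + g1*g3/(g3+d2) + d1*d3/(g4+d3) + g1*g3/(g1+d4))
    a12 = g2*g4/(g2+d1) + d2*d4/(g3+d2) + g2*g4/(g4+d3) + d2*d4/(g1+d4)
    a21 = d3/(g2+d1) + g1/(g3+d2) + d1/(g4+d3) + g3/(g1+d4) + 1.0
    a22 = g4/(g2+d1) + d4/(g3+d2) + g2/(g4+d3) + d2/(g1+d4) + 1.0
    det = a11*a22 - a12*a21
    P13, P24 = -a12/det, a11/det
    # The two equivalent forms of the current.
    J25 = (g1+g3+d2+d4) / ((g1+d4)*(g3+d2)) * (g1*g3*P13 - d2*d4*P24)
    J26 = (g2+g4+d1+d3) / ((g2+d1)*(g4+d3)) * (g2*g4*P24 - d1*d3*P13)
    return J25, J26
```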
### B Solution for the sawtooth potential
The sawtooth potential including bias on a 4-site ring is defined by choosing
$`\gamma _1`$ $`=`$ $`\gamma _2=\gamma _3=\gamma _4=b\gamma ,`$ (28)
$`\delta _1`$ $`=`$ $`b^{-1}\gamma ^4,`$ (29)
$`\delta _2`$ $`=`$ $`\delta _3=\delta _4=b^{-1},`$ (30)
where $`b`$ represents the bias and $`\gamma <1`$ is a constant representing a transition rate to the right in the absence of a bias, cf. Fig.1(b). Note that the right transition rates are explicitly multiplied by the bias factor $`b`$ and the left transition rates by $`b^{-1}`$, respectively. Physically, $`b=\mathrm{exp}(\mathrm{\Delta }U/2k_BT)`$, where $`\mathrm{\Delta }U`$ represents the potential drop between two neighboring sites under the influence of the bias. For $`b=1`$ the system satisfies the detailed balance condition and the current $`J`$ vanishes. In what follows the current obtained in a system with $`M`$ particles will be denoted by $`J_M`$.
In Fig. 2 we present a plot of $`J_1`$ and $`J_2`$ as functions of the bias $`b`$ for the ring with 4 sites and $`\gamma =\mathrm{exp}(-2)`$. The result for the two-particle system was obtained using Eq. (26), and for a single-particle system we employed the exact solution derived in Ref. . We can see that the currents of the one- and two-particle systems behave qualitatively similarly. Of course, the current $`J_2`$ of two particles is larger than the one-particle current $`J_1`$. The inset shows the behavior of the current for smaller bias. The curves for the bias to the right and to the left become equal in the limit $`b\to 1`$, i.e., in the linear-response regime, for two particles on the ring with 4 sites, and also for one particle on this ring. However, the two-particle current is about $`17\%`$ larger than the one-particle current.
In the case of a strong bias to the right, $`b\gg 1`$, the two-particle current $`J_2`$ differs from $`J_1`$ by a constant factor. For the sawtooth potential this behavior can be understood as follows. If $`b\gg 1`$, only transitions to the right are important, and backward transitions can be neglected. In our model the transition rates to the right are all equal, $`\gamma _i=b\gamma `$ for $`i=1,\dots ,4`$. Hence for $`b\gg 1`$ all stationary site occupation probabilities become equal, $`P_i=1/2`$ for the 4-site ring and $`P_i=2/L`$ for a ring with $`L`$ sites. In the limit of a strong bias to the right all stationary joint probabilities also become equal, i.e., $`P_{ij}=1/6`$ for all pairs on the 4-site ring and, generally, $`P_{ij}=2/[L(L-1)]`$ (see Ref. ). Using expression (23) we thus expect that for $`b\gg 1`$
$$J\approx b\gamma \left[\frac{2}{L}-\frac{2}{L(L-1)}\right].$$
(31)
For $`L=4`$ we thus obtain $`J_2=b\gamma /3`$, which should be compared with the single-particle current $`J_1=b\gamma /4`$. Similarly, in the general case of an $`L`$-site ring we have
$$\lim _{b\to \infty }\frac{J_2}{J_1}=\frac{2(L-2)}{L-1}.$$
(32)
For the 4-site ring this limiting behavior can be easily derived from the exact formula (26). Indeed, for $`L=4`$, $`\gamma =\mathrm{exp}(-2)`$, and $`b=10`$ one finds $`J_2/J_1\approx 1.3337`$, in agreement with the above considerations.
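The quoted numbers are easy to reproduce. In the sketch below (our own check, with hypothetical function names), the one-particle current is obtained by solving the stationary 4-state master equation numerically, and the two-particle current from the closed-form solution of Sec. II A; at $`b=10`$ the ratio comes out close to the quoted 1.3337, and for very large $`b`$ it approaches $`2(L-2)/(L-1)=4/3`$.

```python
import numpy as np

def sawtooth_rates(b, gamma):
    """Biased sawtooth rates on the 4-site ring, Eqs. (28)-(30)."""
    g = [b * gamma] * 4
    d = [gamma**4 / b, 1.0 / b, 1.0 / b, 1.0 / b]
    return g, d

def single_particle_current(g, d):
    """Stationary current of one particle on the 4-site ring."""
    W = np.zeros((4, 4))
    for i in range(4):
        W[(i + 1) % 4, i] += g[i]      # hop right, i -> i+1
        W[(i - 1) % 4, i] += d[i]      # hop left,  i -> i-1
        W[i, i] -= g[i] + d[i]
    A = np.vstack([W, np.ones(4)])     # stationarity plus normalization
    rhs = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    p = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return g[0] * p[0] - d[1] * p[1]   # current through the bond (1,2)

def two_particle_current(g, d):
    """Closed-form stationary two-particle current, Eq. (25)."""
    g1, g2, g3, g4 = g
    d1, d2, d3, d4 = d
    a11 = -(d1*d3/(g2+d1) + g1*g3/(g3+d2) + d1*d3/(g4+d3) + g1*g3/(g1+d4))
    a12 = g2*g4/(g2+d1) + d2*d4/(g3+d2) + g2*g4/(g4+d3) + d2*d4/(g1+d4)
    a21 = d3/(g2+d1) + g1/(g3+d2) + d1/(g4+d3) + g3/(g1+d4) + 1.0
    a22 = g4/(g2+d1) + d4/(g3+d2) + g2/(g4+d3) + d2/(g1+d4) + 1.0
    det = a11*a22 - a12*a21
    P13, P24 = -a12/det, a11/det
    return (g1+g3+d2+d4) / ((g1+d4)*(g3+d2)) * (g1*g3*P13 - d2*d4*P24)
```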
Figure 2 also shows that $`J_2`$ becomes almost identical to $`J_1`$ in the case of a strong bias to the left, $`b\ll 1`$. To understand this phenomenon assume that $`\gamma \ll 1`$, so that $`\delta _1\ll \delta _2=\delta _3=\delta _4`$, i.e., site 1 acts as a “bottle-neck”. If $`b\ll 1`$ the particles are driven against the high barrier at site 1, which has a very small transition rate $`\delta _1`$ to the left. The second particle on site 2 has to wait until the first particle has jumped over the high barrier, and only then can it make an attempt to jump over that barrier. Soon after the first particle has managed to pass the bottleneck at site 1, the second particle will jump from site 2 to 1 and the first particle will quickly line up behind the second particle, waiting for it to jump over the high barrier. Consequently, the current becomes practically equal to that of a single-particle system. It is evident that in the limit of a large bias to the left the system behaves as a TASEP on a ring with one defect. If the defect is characterized, in a discrete-time dynamics, by the transition probability $`p\ll 1`$, the current of $`M`$ particles on a ring with $`L`$ sites ($`M<L`$) will approach the one-particle current. The above reasoning is confirmed by an explicit calculation of the current $`J_2`$ in the limit $`b\to 0`$. Using (26) we conclude, after some algebra, that $`J_2\approx 2b^{-1}\gamma ^4(1+\gamma ^4)(5\gamma ^8+5\gamma ^4+2)^{-1}`$. Since for a single-particle system the current $`J_1`$, for $`b\ll 1`$, is approximately equal to $`b^{-1}\gamma ^4(1+3\gamma ^4)^{-1}`$ (see Ref. ), we find that
$$\lim _{b\to 0}\frac{J_2}{J_1}=\frac{2(1+\gamma ^4)(1+3\gamma ^4)}{5\gamma ^8+5\gamma ^4+2}.$$
(33)
For $`\gamma \to 0`$, i.e., for a growing asymmetry of the sawtooth potential, this limit indeed approaches 1. In particular, for the value $`\gamma =\mathrm{exp}(-2)`$ used in Fig. 2 one finds $`\lim _{b\to 0}J_2/J_1\approx 1.0005`$.
Note, however, that in contrast to the case $`b\gg 1`$, for $`b\ll 1`$ the current depends on the parameter $`\gamma `$ characterizing the inhomogeneity of the sawtooth potential. In particular, for $`\gamma =1`$, which corresponds to a fully homogeneous system, $`\lim _{b\to 0}J_2/J_1=4/3`$. Actually, for $`\gamma =1`$ the ratio $`J_2/J_1`$ equals $`4/3`$ irrespective of the bias $`b`$ (see ).
## III Extended Nonsymmetric Potentials
### A The model
In this section lattice gases in extended potentials are considered that consist of periodic repetitions of nonsymmetric segments. First the situation of very strong bias is discussed and a mean-field approximation is given for the case where the particles experience periodically arranged high barriers. The analytical results are then compared with numerical simulations of the motion of lattice-gas particles in nonsymmetric hopping potentials for different concentrations and under various bias conditions. The hopping potential that is used in this section is the sawtooth potential as shown in Fig. 1(b), except that it is periodically repeated with period $`L`$. The nearest-neighbor transition rates from site $`l`$ to $`l\pm 1`$ are $`\mathrm{\Gamma }_{l,l\pm 1}`$. As a short notation we use $`\gamma _l\equiv \mathrm{\Gamma }_{l,l+1}`$ for the “right” and $`\delta _l\equiv \mathrm{\Gamma }_{l,l-1}`$ for the “left” transition rates. Without additional bias, the transition rates between neighboring sites fulfill detailed balance. Bias is introduced by multiplying all right transition rates by $`b`$, $`\gamma _l\to b\gamma _l`$, and all left transition rates by $`b^{-1}`$, $`\delta _l\to b^{-1}\delta _l`$.
The linear chain on which the model is defined shall have $`N=\nu L`$ sites where we consider $`\nu \gg 1`$ in this section. Periodic boundary conditions are introduced and the sites are occupied by $`M`$ particles. The concentration is then $`\rho =M/N`$. Multiple occupancy of the sites is excluded; no further interactions of the particles are taken into account.
### B Strong bias
#### 1 The case $`b\gg 1`$
For $`b\gg 1`$ we can apply essentially the same reasoning as in the case of the 2-particle system considered in Sec. II. In this limit transitions to the left are so rare that they can be ignored and the system essentially behaves like a TASEP with transition rates $`\gamma _i=b\gamma `$, $`i=1,\dots ,N`$. The current for such a system reads
$$J=b\gamma M\frac{N-M}{N-1}.$$
(34)
For large system sizes $`N\gg 1`$ this formula can be rewritten as
$$J(\rho )=b\gamma \rho (1-\rho ).$$
(35)
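Formula (34) can be verified exactly on small rings by enumerating all particle configurations and solving the stationary master equation of the TASEP directly. The sketch below is our own check (not from the paper); it encodes a configuration as the set of occupied sites and returns the total number of jumps per unit time in the stationary state, which for the ring TASEP coincides with the right-hand side of (34) because the stationary measure is uniform.

```python
from itertools import combinations
import numpy as np

def tasep_total_current(N, M, rate):
    """Exact stationary total current (jumps per unit time) of the TASEP
    with M hard-core particles on an N-site ring, by full enumeration."""
    configs = [frozenset(c) for c in combinations(range(N), M)]
    index = {c: k for k, c in enumerate(configs)}
    W = np.zeros((len(configs), len(configs)))
    for c in configs:
        k = index[c]
        for site in c:
            nxt = (site + 1) % N
            if nxt not in c:                       # right neighbour empty
                W[index[frozenset(c - {site} | {nxt})], k] += rate
                W[k, k] -= rate
    A = np.vstack([W, np.ones(len(configs))])      # stationarity + normalization
    rhs = np.zeros(len(configs) + 1)
    rhs[-1] = 1.0
    p = np.linalg.lstsq(A, rhs, rcond=None)[0]
    # total current = rate times the mean number of allowed jumps
    movable = np.array([sum((s + 1) % N not in c for s in c) for c in configs])
    return rate * float(movable @ p)
```

For $`N=6`$ and $`M=2`$, for instance, this reproduces $`\mathrm{rate}\cdot M(N-M)/(N-1)`$ exactly.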
#### 2 The case $`b\ll 1`$
For $`b\ll 1`$ we can neglect transition rates to the right, and so the system behaves like a TASEP with transition rates $`\delta _i=b^{-1}\gamma ^4`$ if $`i=1(\text{mod }L)`$ and $`\delta _i=b^{-1}`$ otherwise. If additionally $`\gamma =1`$, all $`\delta _i`$ are equal to each other and the current is given simply by (34).
A more complicated situation appears for $`\gamma \ll 1`$, a condition which will be assumed henceforth. In this case sites $`i=1,L+1,\dots ,N-L+1`$ act on the flow of particles as “bottlenecks”, for the mean time necessary to leave them is much larger than the time to leave any other site. Therefore the system, which consists of $`\nu `$ similar segments of length $`L`$, effectively behaves like a ring made up of $`\nu `$ similar “boxes”, each able to contain up to $`L`$ particles. A transition from a segment $`j`$ to $`j-1`$ occurs with a rate $`b^{-1}\gamma ^4`$ irrespective of the number of particles in each of the segments, provided, of course, that there is at least one particle in segment $`j`$ and at most $`L-1`$ particles in segment $`j-1`$.
Let $`Q_n`$ denote the probability that in the steady state there are $`n`$ particles in a given segment ($`n=0,\dots ,L`$). Let $`Q_{m,n}`$ denote the joint probability of finding, in the steady state, $`0\le m\le L`$ particles at a given segment $`j`$ and $`0\le n\le L`$ particles at $`j+1`$. Of course $`Q_n`$ and $`Q_{m,n}`$ do not depend on $`j`$, and the $`Q_n`$ satisfy
$`{\displaystyle \sum _{n=0}^L}Q_n`$ $`=`$ $`1,`$ (36)
$`{\displaystyle \sum _{n=0}^L}nQ_n`$ $`=`$ $`L\rho .`$ (37)
Let us assume a mean-field approximation: $`Q_{m,n}=Q_mQ_n`$. In the stationary state the mean number $`𝒩_n`$ of segments occupied by $`n`$ particles does not depend on time. As the particles hop between segments, $`𝒩_n`$ can decrease when one of the particles jumps from or to a segment occupied by $`n`$ particles. The corresponding rates are $`Q_n(1-Q_L)`$ and $`Q_n(1-Q_0)`$, respectively. The number of segments containing $`n`$ particles can also increase owing to jumps ending at segments containing $`n-1`$ particles or originating at segments with $`n+1`$ particles; the corresponding transition rates are $`Q_{n-1}(1-Q_0)`$ and $`Q_{n+1}(1-Q_L)`$, respectively. Consequently, the appropriate balance conditions read
$`(Q_n-Q_{n+1})(1-Q_L)`$ $`=`$ $`(1-Q_0)(Q_{n-1}-Q_n),`$ (38)
$`Q_1(1-Q_L)`$ $`=`$ $`(1-Q_0)Q_0,`$ (39)
$`Q_L(1-Q_L)`$ $`=`$ $`(1-Q_0)Q_{L-1},`$ (40)
where $`n=1,\dots ,L-1`$ in (38) and in (39) and (40) we have taken into account the fact that neither jumps from a segment containing $`0`$ particles nor transitions to a segment with $`L`$ particles are possible. Together with (36) and (37) these relations form $`L+3`$ equations for $`L+1`$ variables $`Q_n`$, $`n=0,\dots ,L`$, with the concentration $`\rho `$ being the only free parameter. This system of equations is easily shown to have a unique solution
$$Q_n=\frac{a^n}{1+a+\cdots +a^L},$$
(41)
where the parameter $`a`$ can be determined using
$$L\rho =\frac{\sum _{n=0}^Lna^n}{\sum _{n=0}^La^n}.$$
(42)
The concentration $`\rho `$ is a monotonic function of $`a`$, increasing from 0 for $`a=0`$ to 1 in the limit $`a\to \infty `$. The value $`a=1`$ corresponds to $`\rho =\frac{1}{2}`$ and, generally,
$$\rho (a)=1-\rho (1/a).$$
(43)
Having obtained $`Q_n`$ we can calculate the current as
$`J`$ $`=`$ $`b^{-1}\gamma ^4(1-Q_0)(1-Q_L)`$ (44)
$`=`$ $`b^{-1}\gamma ^4{\displaystyle \frac{a\left(\sum _{n=0}^{L-1}a^n\right)^2}{\left(\sum _{n=0}^La^n\right)^2}}.`$ (45)
Using (43) it is easy to see that
$$J(\rho )=J(1-\rho ).$$
(46)
Because for $`\rho \ll \frac{1}{2}`$ Eq. (42) implies $`a\approx L\rho `$, using our formula (44) we conclude that for small concentrations of particles the current $`J`$ grows linearly with $`\rho `$,
$$J\approx b^{-1}\gamma ^4L\rho .$$
(47)
For $`\rho =\frac{1}{2}`$ the mean-field theory (44) predicts
$$J(0.5)=\frac{b^{-1}\gamma ^4L^2}{(L+1)^2}.$$
(48)
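The mean-field predictions (44)-(48) are straightforward to evaluate numerically. The sketch below (our illustration; the log-space bisection is just one convenient way to invert Eq. (42)) determines the parameter $`a`$ for a given concentration and returns the current; it reproduces the symmetry (46) and the special value (48).

```python
import numpy as np

def meanfield_current(rho, L, b, gamma):
    """Mean-field current for strong bias to the left, Eqs. (41)-(48)."""
    n = np.arange(L + 1)

    def mean_occupancy(a):                 # right-hand side of Eq. (42)
        w = a ** n
        return float((n * w).sum() / w.sum())

    lo, hi = 1e-12, 1e12                   # mean_occupancy is monotonic in a
    for _ in range(200):
        mid = np.sqrt(lo * hi)             # bisection in log(a)
        if mean_occupancy(mid) < L * rho:
            lo = mid
        else:
            hi = mid
    a = np.sqrt(lo * hi)
    s_low = sum(a ** k for k in range(L))  # sum_{n=0}^{L-1} a^n
    s_all = s_low + a ** L
    return gamma ** 4 / b * a * s_low ** 2 / s_all ** 2
```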
### C Numerical simulations
In our simulations we used a lattice with $`N=400`$ sites consisting of $`\nu =100`$ segments, each of length $`L=4`$. We used a sawtooth potential with $`\gamma =\mathrm{exp}(-2)\approx 0.135`$. The number of particles in the system varied from $`M=1`$ to $`M=399`$. We carried out our simulations for $`t=10^6`$ Monte Carlo time steps per particle, and the results were averaged over $`10`$ different realizations of the process, which enabled us to estimate the statistical errors.
We first present simulation results for the current at a fixed concentration $`\rho =0.5`$, or for $`M=200`$, as a function of the bias parameter $`b`$ for bias to the right, and $`b^{-1}`$ for bias to the left, respectively. Figure 3 shows the current $`J`$ observed in simulations (symbols) together with a simple approximation obtained by multiplying a single-particle current $`J_1`$ by the number $`M`$ of particles in the system (free-particle approximation). One observes that the current in the case of a system with hard-core interactions is reduced as compared to the case of non-interacting particles, but the general behavior as a function of the bias parameter is practically the same. In particular, the rectification effects for particle motion in nonsymmetric potentials are qualitatively the same in both cases. The inset in Fig. 3 depicts the ratio $`J/(MJ_1)`$ as a function of the bias. Owing to (34) we expect that for $`b\gg 1`$ $`J/(MJ_1)=(N-M)/(N-1)\approx 1-\rho `$. For $`b=20`$ we found $`J/(MJ_1)=0.5014\pm 0.0001`$, in excellent agreement with the theoretical value $`200/399\approx 0.50125`$. For $`b\ll 1`$ our mean-field approximation (48) predicts $`J/(MJ_1)=L^2/[2(L+1)^2]=0.32`$; for $`b=1/20`$ our simulations yielded a slightly smaller value $`0.305\pm 0.001`$.
We now discuss the dependence of the current on concentration for selected values of the bias $`b>1`$, or $`b<1`$, respectively, and compare the results with the theoretical considerations of Sec. IIIB. In Fig. 4 we present results of our simulations for a bias to the right ($`b=30`$, 10 and 2). For a strong bias ($`b=30`$) the agreement with the theoretical prediction, Eq. (35), is very good.
The results obtained for a bias to the left ($`b=0.001`$, 0.1, 0.5, and 0.9) are depicted in Fig. 5. We can see that if the bias is strong ($`b\le 0.1`$), the agreement between the mean-field theory (solid line) and the simulation data (circles and crosses) is very good for concentrations close to 0 and 1. However, for $`\rho \approx \frac{1}{2}`$ we observe that the mean-field theory tends to overestimate the actual value of $`J`$ by approximately 5%, which is much more than the statistical errors of our data (the relative standard deviation at $`\rho =0.5`$ is about 0.33%). We repeated our simulations for a larger number of Monte Carlo time steps ($`t=5\times 10^6`$) and for different values of the bias $`b`$, but the difference between simulations and the theory remained practically the same. We thus conclude that it is not a numerical artifact. A similar discrepancy was observed by Tripathy and Barma , who considered a TASEP with random transition rates. However, in their model the mean-field approach underestimated the magnitude of the current obtained in simulations for $`\rho \approx 0.5`$. Moreover, they found that $`J(\rho )`$ has quite a broad plateau around $`\rho =0.5`$. This phenomenon is not observed in our case because the transition rates in our model are not random.
## IV Symmetry Properties
In this section we discuss the symmetry properties of our lattice-gas model with nonsymmetric potentials, and of related models. In the simulations, as well as in the mean-field approximation, the current exhibits a particle-vacancy symmetry,
$$J(\rho )=J(1-\rho ).$$
(49)
The symmetry properties of the TASEP have been analyzed in and the relation Eq.(49) has been established in this context. However, the model employed in those references differs in important aspects from our model. Hence a detailed discussion is in order.
The particle-vacancy symmetry of the current for the TASEP has been shown in Refs. for disordered hopping potentials where the transition rates are associated with the bonds between the sites. If the motion of the particles is reversed (symmetry operation T according to Refs. ), the particles experience the same set of transition rates as before, only the order of the rates has been changed. If the vacancies are interpreted as particles (symmetry operation C), they experience the same transition rates as the particles after the operation T. The symmetry under CT is evident; the nontrivial statement is the symmetry of the current (up to a sign) under the operations C, or T, separately.
The class of models for the hopping potential considered here does not correspond to bond disorder. The set of “right” transition rates is different from the set of “left” transition rates. If a strong bias $`b\gg 1`$ to the right is applied, leading approximately to a TASEP, the current is different from the case of strong bias to the left with $`b^{-1}\gg 1`$. In other words, the symmetry under reversal of motion T does not exist for the class of models leading to rectification, by their definition. If the vacancies are considered as particles, they experience the same set of transition rates as the original particles, see also below. We conjecture that symmetry under the operation C also exists for our models, if the limiting case of the TASEP is considered. Hence we expect Eq.(49) to be approximately valid for the models that lead to rectification effects, in the limit of very strong bias.
The sawtooth potential that is investigated in this paper has a special symmetry which will be described now. In the limit of concentration of the lattice gas approaching one, the particle problem is equivalent to the problem of hopping motion of single, independent vacancies. The hopping transitions of an isolated vacancy are reversed in comparison to the transitions of the particle that makes an exchange with the vacancy, e.g.,
$$\mathrm{\Gamma }_{l,l+1}^V=\mathrm{\Gamma }_{l+1,l}.$$
(50)
Using the rates Eq.(50) it is easy to reconstruct the hopping potential for single vacancies. If this construction is done for the extended sawtooth potential of Fig.1(b), a sawtooth potential is obtained for the vacancy which is mirror-symmetric with respect to the original sawtooth potential, see Fig. 6. If a bias is applied to the particles, expressed by the factor $`b`$ in the transition rates to the right, the factor $`b`$ appears in the transition rates of the vacancy to the left. It is evident from this consideration that the particle current for $`\rho \to 0`$ is identical to the one for $`\rho \to 1`$. It is obvious that a particle-vacancy symmetry pertains for the problem of motion of lattice gases in a sawtooth potential with the above symmetry property; hence we expect Eq.(49) to be valid for all values of the bias $`b`$.
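This construction can be made explicit in a few lines. The sketch below (our own illustration, with names of our choosing) builds one period of the biased sawtooth, derives the single-vacancy rates from Eq. (50), and confirms both statements made above: without bias the energy increments of the vacancy potential are the negatives of the particle ones (a mirror-reflected sawtooth), and with bias the factor $`b`$ moves to the vacancy's left transition rates.

```python
import numpy as np

def sawtooth_segment(b, gamma, L=4):
    """One period of the biased sawtooth: right rates g[l], left rates d[l]."""
    g = np.full(L, b * gamma)
    d = np.full(L, 1.0 / b)
    d[0] = gamma ** 4 / b                  # slow escape over the high barrier
    return g, d

def vacancy_rates(g, d):
    """Single-vacancy rates: the vacancy at l hops to l+1 exactly when the
    particle at l+1 hops to l, i.e. Gamma^V_{l,l+1} = Gamma_{l+1,l}."""
    L = len(g)
    gv = np.array([d[(l + 1) % L] for l in range(L)])   # vacancy right rates
    dv = np.array([g[(l - 1) % L] for l in range(L)])   # vacancy left rates
    return gv, dv
```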
We point out that the sawtooth potential represents a special case; general nonsymmetric potentials do not provide mirror-symmetric potentials for the vacancies in the limit $`\rho \to 1`$. For instance, if the potential corresponding to an Ehrlich-Schwoebel barrier (see, e.g., ) is transformed by using Eq.(50) into the corresponding hopping potential of a single vacancy, a different potential is obtained. As a consequence, the mobility of a single particle is different from the mobility of a single vacancy. Hence for this example $`J(\rho )\neq J(1-\rho )`$ for $`b`$ close to $`1`$. This example is sufficient to show that the particle-vacancy symmetry (49) cannot be generally valid, for arbitrary $`b`$. Another counterexample is provided by the random-trap model, see Ref. .
## V Concluding remarks
In this paper we investigated the motion of lattice-gas particles in hopping potentials that are composed of segments without mirror-reflection symmetry. We considered in particular the effects of exclusion of multiple occupancy of sites, under various bias conditions. We first studied the case of two particles on a ring of 4 sites with a sawtooth potential. The explicit solution of this simple system can be given, and interesting conclusions emerge in the limits of large bias to the right, or to the left. We point out that the ring with 4 sites is a model case for the treatment of 2 site-exclusion particles on a finite ring; larger systems can be solved in a similar manner, e.g., by using symbolic formula manipulation programs.
We then investigated the case of many particles on extended systems which consist of periodic repetitions of sawtooth potentials. These systems behave, for strong bias in one direction, as uniform systems where the result for the current of lattice gases is known. For strong bias in the reverse direction, the extended sawtooth potential acts as a periodic arrangement of weak links. A mean-field expression for the current can be derived for this case from the cluster dynamics of the particles on the segments, which shows similarities to the cluster dynamics of the bosonic lattice gases of Ref.. Good agreement with the numerical simulations was found for both cases under strong bias; deviations appear for smaller bias values. The results for the current exhibit a particle-vacancy symmetry as a consequence of a special particle-hole symmetry of the hopping processes in the sawtooth potential used.
Generally, the current per particle of a site-exclusion lattice gas shows the same qualitative behavior as a function of the strength and the direction of the bias parameter, as the current of independent particles. This observation is important for possible applications, for instance for transport through channels in membranes or through layered structures with suitable potential structures. It means that qualitative or even semiquantitative predictions of the effects of strong bias on the current can already be obtained from the single-particle description.
## Acknowledgments
We thank G. Schütz for discussions on the TASEP. Z.K. gratefully acknowledges partial support by the Polish KBN Grant Nr 2 P03B 059 12.
# Stripe Conductivity in La1.775Sr0.225NiO4
## Abstract
We report Raman light-scattering and optical conductivity measurements on a single crystal of La<sub>1.775</sub>Sr<sub>0.225</sub>NiO<sub>4</sub> which exhibits incommensurate charge-stripe order. The extra phonon peaks induced by stripe order can be understood in terms of the energies of phonons that occur at the charge-order wave vector, $`𝐐_𝐜`$. A strong Fano antiresonance for a Ni-O bond-stretching mode provides clear evidence for finite dynamical conductivity within the charge stripes.
Recent experiments have shown a difference in the conductivity behavior of the stripe ordered phases in the cuprates La<sub>2-x-y</sub>Nd<sub>x</sub>Sr<sub>y</sub>CuO<sub>4</sub> and the nickelates La<sub>2-x</sub>Sr<sub>x</sub>NiO<sub>4</sub>. Strong localization and binding of charges to the lattice in nickelates manifest themselves by the appearance of: 1) an insulating state and a charge gap in optical conductivity ; and 2) additional diffraction peaks due to ionic displacements that are induced below $`T_c,`$ the charge ordering temperature . In cuprates such a gap is not observed even though charge-order superlattice peaks have been detected by both neutron and x-ray scattering . The diffraction measurements have shown that the corresponding lattice modulation is much smaller in cuprates than in nickelates . At the same time both cuprates and nickelates have demonstrated incommensurate magnetic ordering . This by itself is important evidence of the stripe state because it implies a modulation of the charge density. Moreover, the coexistence of superconductivity and stripe order has been observed in La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> with $`x=0.12`$, 0.15, and $`0.20`$ .
The problem of whether or not stripes in cuprates and nickelates are insulating or metallic is fundamental to the physics of the stripe state. One can expect two possible scenarios which could lead to conductivity in the stripe state. According to the first, the stripes themselves are insulating but the system can be metallic due to fluctuations and motion of stripes . Alternatively, metallic conductivity may exist along the charge threads without a violation of stripe ordering as a whole. In the latter case, Coulomb interactions between neighboring stripes should lead to charge-density-wave order along the stripes at sufficiently low temperatures and in the absence of stripe fluctuations .
Previous studies have shown that the stripe-order in La<sub>2-x</sub>Sr<sub>x</sub>NiO<sub>4</sub> with $`x=\frac{1}{3}`$ has a short-period commensurability and a very large charge gap of 0.26 eV relative to the charge ordering temperature of 230 K . Thermodynamic measurements have indicated that $`x=\frac{1}{3}`$ may be somewhat special , while recent phonon density-of-states measurements have found doping-dependent changes of the in-plane Ni-O bond-stretching modes for $`x=\frac{1}{4}`$, $`\frac{1}{3}`$, and $`\frac{1}{2}`$ . Here we present results from Raman light scattering (RLS) and optical conductivity measurements on a single crystal with $`x=0.225`$, which was previously characterized by neutron and x-ray diffraction. The stripe order in this sample is incommensurate, and the hole density per Ni site along a stripe is significantly less than 1 (in contrast to $`x=\frac{1}{3}`$, where the density is exactly 1). We observe a strong Fano antiresonance in the optical conductivity at an energy well below the charge pseudo-gap, which is 0.105 eV at 10 K. From a careful analysis of the phonon spectra, we conclude that the energy of the antiresonance corresponds to Ni-O bond stretching motions along the stripes. It follows that the antiresonance, which results from electron-phonon coupling, provides strong evidence for finite conductivity along the stripes, at least at optical-phonon frequencies.
The light scattering measurements were carried out in quasi-backscattering geometry using 514.5-nm argon laser light. The incident laser beam of 10 mW or 15 mW power was focused onto a 0.1 mm diameter spot on the $`a`$-$`b`$ plane of the mirror-like polished crystal surface. The $`x,y,z`$ crystallographic axes in the $`I4/mmm`$ setting were determined by x-ray Laue diffraction. The incident photons were polarized along the $`x^{\prime}=x+y`$ or $`y^{\prime}=-x+y`$ diagonal directions between the in-plane Ni-O bonds. The scattered photons were polarized either parallel or perpendicular to the incident photons. Infrared (IR) reflectivity measurements in the 25–$`10^5`$ cm<sup>-1</sup> frequency region were carried out at normal incidence to the $`a`$-$`b`$ plane of the sample. Optical conductivity spectra, $`\sigma (\omega )`$, were obtained by Kramers-Kronig analysis of the reflectivity data.
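The Kramers-Kronig step can be illustrated with a minimal numerical sketch (our own illustration, not the analysis code used for these measurements; the function names and uniform grid are assumptions, and a real analysis must extrapolate the measured reflectivity outside the 25–10<sup>5</sup> cm<sup>-1</sup> window before transforming):

```python
import numpy as np

def kk_phase(w, refl):
    """Reflectivity phase from the Kramers-Kronig relation,
    theta(w) = -(w/pi) P int ln[R(w')/R(w)] / (w'^2 - w^2) dw'.
    Assumes a uniform frequency grid; the singular point is simply
    skipped (a crude principal-value treatment, adequate for a sketch)."""
    dw = w[1] - w[0]
    theta = np.empty_like(w)
    for i, wi in enumerate(w):
        with np.errstate(divide="ignore", invalid="ignore"):
            integrand = np.log(refl / refl[i]) / (w**2 - wi**2)
        integrand[i] = 0.0  # drop the singular point
        theta[i] = -(wi / np.pi) * np.sum(integrand) * dw
    return theta

def sigma1(w, refl):
    """Real part of the optical conductivity (Gaussian units, w in rad/s):
    complex reflectance r = sqrt(R) e^{i theta}, n = (1 + r)/(1 - r),
    eps = n^2, sigma1 = w * Im(eps) / (4 pi)."""
    r = np.sqrt(refl) * np.exp(1j * kk_phase(w, refl))
    n = (1.0 + r) / (1.0 - r)
    return w * (n**2).imag / (4.0 * np.pi)
```

A frequency-independent reflectivity gives zero phase and zero conductivity, which is a useful sanity check on the transform.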
Let us begin with the optical conductivity spectra shown in Fig. 1. The conductivity below 2000 cm<sup>-1</sup> clearly decreases as the temperature is lowered. If we linearly extrapolate the frequency dependence from above 1500 cm<sup>-1</sup>, as indicated by the dashed line for the 10 K spectrum, then, following Katsufuji et al. , we estimate a low-temperature gap of 840 cm<sup>-1</sup>. At 150 K, which is the charge-ordering temperature ($`T_c`$) indicated by diffraction , the conductivity has changed little. The extrapolated gap reaches zero somewhat closer to 200 K. Note that at low temperature, although $`\sigma (0)\rightarrow 0`$, the residual conductivity within the extrapolated gap is significantly higher than that found for $`x=\frac{1}{3}`$ .
The temperature-dependent Raman spectra for two polarizations are presented in Fig. 2. For both $`x^{\prime}x^{\prime}`$ and $`x^{\prime}y^{\prime}`$ polarizations there is a significant electronic background that shifts from low to high energy as the temperature decreases. Low-temperature scans in the $`x^{\prime}y^{\prime}`$ geometry reveal 2-magnon scattering bands at 739 cm<sup>-1</sup> and 1130 cm<sup>-1</sup>, frequencies remarkably similar to those in La<sub>5/3</sub>Sr<sub>1/3</sub>NiO<sub>4</sub> . For both $`x=0.225`$ and $`x=\frac{1}{3}`$, the 2-magnon features disappear at approximately the charge-ordering temperature determined by neutron diffraction, 150 K and 240 K, respectively . Analysis of the 2-magnon spectra will be presented elsewhere.
Next we consider the phononic features. In Fig. 2, the numeric labels denote the positions of lines (in cm<sup>-1</sup>) at low temperature, determined by curve fitting. An expanded view of the low-frequency range of the optical conductivity is shown in Fig. 3; again, the numbers label low-temperature peak positions.
To interpret the spectra, we begin by noting that RLS and IR measurements are sensitive only to modes with momentum transfer $`\hbar Q=0`$. One certainly expects a significant contribution from optically-active phonon modes corresponding to the average lattice structure. In addition, the occurrence of stripe order, with a characteristic wave vector $`𝐐_c`$, lowers the translational symmetry, and must lead to the appearance of extra lines in the RLS and IR spectra. Finally, the Sr dopant ions locally break the lattice symmetry, and hence can induce extra features, which are expected to be temperature-independent.
The symmetry of the mean tetragonal lattice is described by space group $`I4/mmm`$. The corresponding Raman-active phonons are distributed among the irreducible representations (IREPs) of the space group as $`2A_{1g}(153,447)+2E_g(90,250)`$, where the numbers in parentheses are phonon frequencies (in cm<sup>-1</sup>) determined for La<sub>2</sub>NiO<sub>4</sub> by neutron scattering . The $`A_{1g}`$ lines are allowed in the $`x^{\prime}x^{\prime}`$ geometry, and none are allowed in $`x^{\prime}y^{\prime}`$. The native IR-active phonon lines are represented as $`3A_{2u}(346,436,490)+4E_u(150,220,347,654)`$. Experimentally, the $`A_{1g}`$ and $`E_u`$ modes have been observed in previous RLS and FIR studies, respectively, on undoped La<sub>2</sub>NiO<sub>4</sub>, and the frequencies obtained are in good agreement with the neutron results . Features at similar frequencies are also prominent in measurements on our $`x=0.225`$ sample, as can be seen in Figs. 2 and 3.
The charge-stripe order is characterized (approximately ) by the wave vector $`𝐐_c=(ϵ,ϵ,1)`$ with $`ϵ=0.275`$, which has rotational symmetry $`mm2`$ with the twofold axis along the $`x^{\prime}`$-direction. The phonon states of the average structure at $`𝐐_c`$ are distributed among the IREPs of the $`mm2`$ wave-vector group in the following manner: $`7\mathrm{\Sigma }_1(x^{\prime})+3\mathrm{\Sigma }_2+5\mathrm{\Sigma }_3(y^{\prime})+6\mathrm{\Sigma }_4(z)`$. All of these states must be Raman-active, and the $`\mathrm{\Sigma }_1`$, $`\mathrm{\Sigma }_3`$, and $`\mathrm{\Sigma }_4`$ states are IR-active. The $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_3`$ modes must appear in the $`x^{\prime}x^{\prime}`$ and $`x^{\prime}y^{\prime}`$ RLS spectra, respectively, if charge ordering creates lattice distortions belonging to the $`\mathrm{\Sigma }_1`$ IREP. The intensities of these new lines depend strongly on the type and symmetry of the modes, and on the nature of the electron-phonon coupling. Stripe-order perturbations of the dynamical phonon matrix must also lead to a lowering of the rotational symmetry of the phonon states at $`𝐐=0`$ from the $`D_{4h}`$ group to the $`C_{2v}`$ group, and to a splitting of the two-fold degenerate $`E`$-states.
To identify the new peaks induced by stripe order, we make use of the phonon-dispersion curves determined in the neutron-scattering study of La<sub>2</sub>NiO<sub>4</sub> by Pintschovius et al. . The $`Q=0`$ optically-active modes are expected to shift by only small amounts due to doping, although some hardening is expected on cooling to low temperatures, as the neutron study was performed at room temperature. Ideally, we would like to compare with the dispersion curves along $`𝐐=(\xi ,\xi ,1)`$; however, since measurements have not been made along that direction, we will compare with $`(\xi ,\xi ,0)`$, and rely on the fact that dispersion along the $`c`$ direction is generally much smaller than in-plane dispersion .
In Fig. 4, we show the highest-energy phonon branches along $`(\xi ,\xi ,0)`$ from Ref. . The two upper branches involve Ni-O(1) (in-plane) bond-stretching motions, while the lower two derive from Ni-O(2) ($`c`$-axis) bond-stretching. The dashed line indicates the in-plane charge-ordering wave vector $`𝐐_c^{\prime}=(ϵ,ϵ,0)`$. In the following, we will associate the new features in the RLS and IR spectra induced by charge order with $`𝐐_c^{\prime}`$.
Starting with the Raman $`x^{\prime}x^{\prime}`$ spectra (Fig. 2), we observe that, at 150 K and below, several peaks appear in the vicinity of the native Ni-O(2) bond-stretching mode, which occurs at 447 cm<sup>-1</sup> for $`x=0`$. A somewhat similar set of split peaks was observed for $`x=\frac{1}{3}`$ in the charge-ordered phase . For $`x=0.225`$, significant peaks appear at 398, 453, 477, and 521 cm<sup>-1</sup>. We attribute the 453 cm<sup>-1</sup> feature to the $`Q=0`$ mode, and the 398 cm<sup>-1</sup> peak to the $`𝐐_c^{\prime}`$ mode, on the $`\mathrm{\Sigma }_1`$ branch involving the already-noted Ni-O(2) motion. The 477 cm<sup>-1</sup> peak appears to be close to the $`\mathrm{\Sigma }_4`$ branch, which would require electron-phonon coupling to make it Raman active. In contrast, the 521 cm<sup>-1</sup> peak is too high in energy to involve Ni-O(2) motion (contrary to the speculation in Ref. ). It must correspond to the $`𝐐_c^{\prime}`$ mode of the highest $`\mathrm{\Sigma }_1`$ branch, which involves Ni-O(1) bond-stretching motion.
For the Raman $`x^{\prime}y^{\prime}`$ spectra, there is a temperature-dependent peak at 580 cm<sup>-1</sup>. Symmetry arguments, together with the frequency, indicate that this is the $`𝐐_c^{\prime}`$ mode of the highest $`\mathrm{\Sigma }_3`$ branch, which also involves Ni-O(1) bond-stretching motion.
Finally, we arrive at the optical conductivity (Fig. 3). The peak at 356 cm<sup>-1</sup> corresponds to the native $`E_u`$ mode involving Ni-O(1)-Ni bond-bending motion. It was observed to split into at least 3 peaks in the charge-ordered phase at $`x=\frac{1}{3}`$ . The total splitting is comparable to the width of our peaks. Model calculations that we have performed using a modified rigid-ion model indicate that the splitting and shifts of this peak should be sensitive to the way in which the stripes are positioned with respect to the lattice. The lack of a clear splitting in the present case may indicate the absence of a unique positioning of the stripes, which would be consistent with the incommensurate wave vector and the finite correlation length for stripe order. Our calculations suggest that the peak at 677 cm<sup>-1</sup> (corresponding to the 654 cm<sup>-1</sup> peak at $`x=0`$) should also be sensitive to stripe pinning, but, again, no clear splitting is observed.
The most unusual feature in Fig. 3 is the strong dip at 577 cm<sup>-1</sup>. Because of the unusual nature of this apparent Fano antiresonance, the measurement of the reflectivity in this frequency range was repeated several times with different detectors and scanning rates in order to verify that it is not an artifact. The occurrence of a dip instead of a peak clearly indicates interference of a phonon mode with underlying electronic conductivity. The energy of this feature uniquely associates it with Ni-O(1) stretching motion.
Model calculations for stripes in a NiO<sub>2</sub> plane using the inhomogeneous Hartree-Fock plus random-phase-approximation approach by Yi et al. have demonstrated how the modulation of electronic hybridization by Ni-O(1) bond-stretching modes results in extra phonon branches and strong new IR modes. Our antiresonance mode is undoubtedly of this type. The model calculations were done for stripes with a hole density of one per Ni site, resulting in a large charge excitation gap . In contrast, for our $`x=0.225`$ sample the hole density per Ni site along a stripe is $`x/ϵ=0.225/0.275=0.82`$, which is consistent with a quasi-metallic character. The antiresonant behavior demonstrates the existence of a finite conductivity within the charge stripes at energies well below the pseudogap (0.105 eV), which is presumably associated with charge motion transverse to the stripes. The continued existence of the antiresonance at room temperature, well above $`T_c`$, suggests that stripe correlations do not disappear at $`T_c`$. Such a result should not be too surprising given that the low-frequency conductivity remains small well above $`T_c`$, indicating the continuing importance of strong correlations. The occurrence of stripe correlations without static order has significant implications for understanding the cuprates.
This work was supported in part by INTAS grant No. 96-0410 and DFG/SFB 341 and SFFR research grant No. 2.4/247 of Ukraine. JMT is supported by U.S. DOE Contract No. DE-AC02-98CH10886.
# A Method for Distinguishing Between Transiently Accreting Neutron Stars and Black Holes, in Quiescence
## 1 Introduction
Many black holes and neutron stars are in binaries where a steady-state accretion disk (one that supplies matter to the compact object at the same rate as the mass is donated from the companion) is thermally unstable Van Paradijs (1996); King et al. (1996). This thermal instability results in a limit cycle – as in dwarf novae – with matter accumulating in the outer disk for $`\sim `$months to decades until a thermal instability is reached (see Huang & Wheeler (1989); Mineshige & Wheeler (1989)), triggering rapid accretion onto the compact object and a substantial X-ray brightening (typically by $`\times 10^3`$ or more, up to $`10^{38}\mathrm{erg}\mathrm{s}^{-1}`$). Both neutron star (NS) and black hole candidate (BHC) systems exhibit these X-ray outbursts, separated by periods ($`\sim `$months to decades) of relative quiescence (for recent reviews, see Tanaka & Lewin (1995); Tanaka & Shibazaki (1996); Chen et al. (1997); Campana et al. (1998b)).
Deep pointed X-ray observations of these transients in quiescence have found that BHCs are, on average, less luminous in quiescence than their NS counterparts Barret et al. (1996); Chen et al. (1997); Narayan et al. (1997b); Asai et al. (1998). To explain the higher quiescent luminosities ($`10^{32}`$–$`10^{33}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$) of the transient NSs, Narayan et al. (1997b) suggested that matter continues to accrete during quiescence onto both classes of object, and that neutron stars re-radiate the in-falling matter’s kinetic energy, while black holes swallow most of the accreted mass-energy. In this picture, the required quiescent accretion rate, $`\dot{M}_q`$, onto the compact object in BHC systems must be substantially greater than in NS systems. Current spectral modeling (for an advection-dominated flow) of the few X-ray detected BHCs uses $`\dot{M}_q`$ of $`\sim 1/3`$ of the total mass transfer rate in the binary Narayan et al. (1997a), whereas the more efficient energy release onto a neutron star requires that $`\dot{M}_q`$ be smaller by 2-3 orders of magnitude. The cause of this difference in $`\dot{M}_q`$ between the BHC and NS systems is not easily explained (Menou et al. (1999)).
An alternative picture for the NS emission was put forward by Brown, Bildsten, & Rutledge (1998), who showed that NSs radiate a minimum luminosity even if accretion completely ceases during quiescence. This minimum luminosity comes from energy deposited in the inner crust (at a depth of $`\sim `$300 m) during the large accretion events. The freshly accreted material compresses the inner crust and triggers nuclear reactions that deposit $`\sim 1\mathrm{MeV}`$ per accreted baryon there Haensel & Zdunik (1990). This heats the NS core on a $`10^4`$-$`10^5`$ yr timescale, until it reaches a steady-state temperature $`4\times 10^7(\dot{M}/10^{-11}M_{\odot }\mathrm{yr}^{-1})^{0.4}\mathrm{K}`$ Bildsten & Brown (1997), where $`\dot{M}`$ is the time-averaged accretion rate in the binary. A core this hot makes the NS incandescent, at a luminosity $`L_q=1\mathrm{MeV}\dot{M}/m_p\approx 6\times 10^{32}(\dot{M}/10^{-11}M_{\odot }\mathrm{yr}^{-1})\mathrm{erg}\mathrm{s}^{-1}`$, even after accretion halts Brown et al. (1998). The NS is then a thermal emitter in quiescence, much like a young NS.
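The crustal-heating scalings above can be checked with a short numerical sketch (our illustration only; the constants are standard CGS values and the function names are hypothetical):

```python
# Evaluate the deep-crustal-heating scalings quoted above.
MSUN_G = 1.989e33    # solar mass in g
YR_S = 3.156e7       # year in s
MP_G = 1.673e-24     # proton mass in g
MEV_ERG = 1.602e-6   # erg per MeV

def quiescent_luminosity(mdot_msun_yr, e_per_baryon_mev=1.0):
    """L_q = <E> * Mdot / m_p, with <E> ~ 1 MeV per accreted baryon."""
    mdot_gs = mdot_msun_yr * MSUN_G / YR_S        # g/s
    return e_per_baryon_mev * MEV_ERG * mdot_gs / MP_G  # erg/s

def core_temperature(mdot_msun_yr):
    """Steady-state core temperature, 4e7 K * (Mdot / 1e-11 Msun/yr)^0.4."""
    return 4e7 * (mdot_msun_yr / 1e-11) ** 0.4    # K

print(f"{quiescent_luminosity(1e-11):.2e}")  # ~6e32 erg/s
print(f"{core_temperature(1e-11):.2e}")      # 4e7 K
```

For the fiducial time-averaged rate of 10<sup>-11</sup> solar masses per year, the sketch reproduces the quoted incandescent luminosity of about 6 × 10<sup>32</sup> erg s<sup>-1</sup>.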
Under the two hypotheses, the energy sources for the quiescent luminosities of the BHCs and NSs have different physical causes, making it meaningful to search for distinguishing spectral signatures.
### 1.1 X-Ray Spectra of Quiescent Neutron Stars
The first NS transient detected in quiescence was Cen X-4 Van Paradijs et al. (1987). More recently, quiescent X-ray spectral measurements have been made of Aql X-1 Verbunt et al. (1994) and 4U 2129+47 Garcia & Callanan (1999) with the ROSAT/PSPC; of EXO 0748-676 with the Einstein IPC Garcia & Callanan (1999); and of Cen X-4 and 4U 1608-522 with ASCA Asai et al. (1996b). The X-ray spectrum of Aql X-1 (0.4–2.4$`\mathrm{keV}`$) was consistent with a blackbody (BB) spectrum, a bremsstrahlung spectrum, or a pure power-law spectrum Verbunt et al. (1994). For 4U 1608-522, the spectrum (0.5–10.0$`\mathrm{keV}`$) was consistent with a BB ($`kT_{\mathrm{BB}}\approx 0.2`$–$`0.3\mathrm{keV}`$), a thermal Raymond-Smith model ($`kT=0.32_{-0.5}^{+0.18}\mathrm{keV}`$), or a very steep power-law (photon index $`6_{-2}^{+1}`$). Similar observations of Cen X-4 with ASCA found its X-ray spectrum consistent with these same models, but with an additional power-law component (photon index $`2.0`$) above 5.0$`\mathrm{keV}`$ (recent observations with BeppoSAX of Aql X-1 in quiescence also revealed a power-law tail; Campana et al. (1998a)). The origin of the observed power-law spectral components in Cen X-4 and Aql X-1 is not clear. While it has been suggested they may be due to magnetospheric accretion Campana et al. (1998b), spectral models of metallic NS atmospheres Rajagopal & Romani (1996); Zavlin et al. (1996) also predict hard tails. These warrant further observational investigation. The observation of EXO 0748-676 shows it to be more luminous (by a factor of 10–50) than the other four NSs.
In four of these five sources (the exception being EXO 0748-676), BB fits implied an emission area of radius $`\sim 1\mathrm{km}`$, much smaller than a NS. This had little physical meaning, however, as the emitted spectrum from a quiescent NS atmosphere with light elements at the photosphere is far from a blackbody. For a weakly-magnetic ($`B\lesssim 10^{10}`$G) pure hydrogen or helium<sup>1</sup><sup>1</sup>1The strong surface gravity will quickly (within $`\sim 10\mathrm{s}`$) stratify the atmosphere (Alcock & Illarionov (1980); Romani (1987)); for accretion rates $`\lesssim 2\times 10^{-13}M_{\odot }\mathrm{yr}^{-1}`$ (corresponding to an accretion luminosity $`\lesssim 2\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$), metals will settle out of the photosphere faster than the accretion flow can supply them (Bildsten, Salpeter, & Wasserman (1992)). As a result, the photosphere should be nearly pure hydrogen. atmosphere at effective temperatures $`kT_{\mathrm{eff}}\lesssim 0.5\mathrm{keV}`$, the opacity is dominated by free-free transitions (Rajagopal & Romani (1996); Zavlin et al. (1996)). Because of the opacity’s strong frequency dependence ($`\nu ^{-3}`$), higher energy photons escape from deeper in the photosphere, where $`T>T_{\mathrm{eff}}`$ Pavlov & Shibanov (1978); Romani (1987); Zampieri et al. (1995). Spectral fits of the Wien tail (which is the only part of the spectrum sampled with current instruments) with a BB curve then overestimate $`T_{\mathrm{eff}}`$ and underestimate the emitting area, by as much as orders of magnitude Rajagopal & Romani (1996); Zavlin et al. (1996)<sup>2</sup><sup>2</sup>2Application of H atmosphere models to the isolated neutron stars in the SNRs PKS 1209-52 and Puppis A produced a source distance consistent with that measured through other means (assuming a 10 km NS radius), a lower surface temperature, and an X-ray measured column density that was consistent with that measured from the extended SNR (while the column density measured with an assumed BB spectrum was not consistent with other measurements; Zavlin et al. (1998, 1999)).
Rutledge et al. (1999) showed that fitting the spectra of quiescent NS transients with these models yielded emitting areas consistent with a 10 km radius NS. In Fig. 1, we compare the measured H atmosphere and blackbody spectral parameters for the quiescent NSs. The data for 4U 2129+47 are analysed here, while the other sources were analysed previously Rutledge et al. (1999). The emission area radii from the H atmosphere spectra are larger by a factor of a few to ten, and are consistent with the canonical radius of a NS. There is thus both observational evidence and theoretical motivation that thermal emission from a pure hydrogen photosphere contributes to – and perhaps dominates – the NS luminosity at photon energies of 0.1–1 keV. This makes possible the use of quiescent NS X-ray spectra as an astrophysical tool.
### 1.2 Quiescent X-Ray Spectroscopy of NSs and BHCs
Distinguishing between a stellar-mass black hole and a neutron star in an X-ray binary is a non-trivial observational problem. Although there are X-ray phenomena unique to NSs (for example, type I X-ray bursts and coherent pulsations), as yet no X-ray phenomenon predicted to occur exclusively in BHs has been observed. In the absence of any distinctive NS properties, some X-ray transients are classified as BHCs if they display X-ray spectral and variability properties similar to those of other BHCs, such as $`\sim `$30% rms frequency-band-limited variability accompanied by a hard spectrum, or 3-12 Hz quasi-periodic oscillations while the source has high X-ray intensity, although there are NS systems which display these properties as well (see Van der Klis (1995) for a review). While a statistical distinction between the quiescent luminosities of BHCs and NSs has been demonstrated (see refs. in the Introduction), there is overlap in the observed luminosities; thus, while the different average quiescent luminosities support the hypothesis that the (pre-classified) objects belong to distinct classes, quiescent luminosity cannot be used to distinguish between a NS and a BHC on a case-by-case basis. A promising phenomenological distinction between NSs and BHCs is that, at X-ray luminosities $`>10^{37}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$, the 20-200 keV luminosity of BHCs is systematically higher than that of NSs. However, the physical origin of this difference is not known, and the weakness of a phenomenological distinction is its vulnerability to a single counter-example Barret et al. (1996).
By far the most solid technique is measuring or constraining the mass of the compact object via radial-velocity measurements of the optical companion. If the resulting optical mass function indicates that the compact object mass exceeds 3$`M_{\odot }`$, then it is likely to be a black hole, as the maximum mass of a NS has been calculated to be below 3.2$`M_{\odot }`$ Rhoades & Ruffini (1974); Chitre & Hartle (1976). The large amount of progress with this method has given us several very secure black holes McClintock (1998).
A new spectroscopic distinction between transient NSs and BHs – based on the presence of a neutron star’s photosphere – would be a valuable classification tool. We present the first such comparisons here, where we report a spectral analysis that uses an accurate emergent spectrum from a NS to fit the quiescent X-ray spectra of six transient BHCs with measured mass functions (GRO J0422+32, A0620-00, GS 1124-68, GRO J1655-40, GS 2000+25, and GS 2023+33) and four transient neutron stars.
We begin in §2 by describing the BHCs and NSs we have chosen for this comparison, as well as the hydrogen atmosphere models. Three of the NSs are from our previous study Rutledge et al. (1999): Aql X-1, Cen X-4, and 4U 1608-522; we discuss a fourth here, 4U 2129+47. We show in § 3 that although the neutron stars occupy a narrow range of effective temperatures and emitting area radii, no such relation is found among the BHCs. This implies that the H atmosphere spectrum may be used as a tool to distinguish between NSs and BHCs in quiescence (in the absence of other information). Section 4 discusses the state of our observational understanding of the energy source for the quiescent emission from the NSs. We conclude in § 5 by summarizing and briefly discussing the application of this work to X-ray sources in globular clusters.
## 2 Object Selection and Data Analysis
Our purpose in fitting the H atmosphere model – appropriate only for NSs – to the BHC data is to directly compare the measured spectral parameters of the BHCs to those of NSs. We selected BHCs for analysis from among those in Table 1 of Menou et al. (1999), which contains a list of 8 compact binary systems with implied masses $`>3M_{\odot }`$. Three of these systems (A0620-00, GS 2023+33, and GRO J1655-40) were detected in X-rays in quiescence (at luminosities $`\lesssim 10^{33}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$), the data for which we analyse; the remaining five have upper limits on their luminosity. We analyse the three with luminosity upper limits below $`10^{33}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (GRO J0422+32, GS 2000+25, and GS 1124-68) to investigate the constraints on emission area radius as a function of surface temperature. The remaining two sources, with high luminosity upper limits $`\gtrsim 10^{33}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (N Oph 1977 and 4U 1543-47), are consistent with or greater than the luminosities from NSs Rutledge et al. (1999), and therefore cannot be excluded as NSs on the basis of a spectral comparison; we do not investigate them here.
Brief descriptions of the analysed observations are given in Table 1. Data were obtained from the public archive at HEASARC/GSFC <sup>3</sup><sup>3</sup>3http://heasarc.gsfc.nasa.gov/. All were performed with the ROSAT/PSPC, except for those of GRO J1655-40, which were performed with ASCA, and one observation of GRO J0422+32, which was performed with the ROSAT/HRI.
For our spectral fits and derived parameters, we adopted column densities using the conversion $`N_{\mathrm{H},22}\equiv N_\mathrm{H}/10^{22}\mathrm{cm}^{-2}=0.179A_V`$ Predehl & Schmitt (1995) and $`A_V=3.1E(BV)`$ Schild (1977), except where noted. We extracted the data for each source as described in the Appendix, and fit each extracted spectrum with a spectral model of galactic absorption and a tabulated H atmosphere model Zavlin et al. (1996), using XSPEC Arnaud (1996).
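The adopted conversions chain together as follows (a trivial sketch of the arithmetic; the function names are ours):

```python
# Column-density conversions adopted in the text:
# N_H,22 = 0.179 * A_V  (Predehl & Schmitt) and A_V = 3.1 * E(B-V) (Schild).
def nh_from_av(a_v):
    """Equivalent hydrogen column N_H (cm^-2) from visual extinction A_V."""
    return 0.179 * a_v * 1e22

def nh_from_ebv(e_bv):
    """N_H (cm^-2) from the color excess E(B-V), via A_V = 3.1 E(B-V)."""
    return nh_from_av(3.1 * e_bv)

# e.g. E(B-V) = 0.4 mag -> A_V = 1.24 mag -> N_H ~ 2.2e21 cm^-2
print(f"{nh_from_ebv(0.4):.2e}")
```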
The H atmosphere model is determined by two spectral parameters: an effective temperature ($`kT_{\mathrm{eff}}`$) and the ratio of an emission area radius ($`r_e`$; the true radius, which would be measured as the circumference of the NS divided by 2$`\pi `$) to the source distance. In our results, we quote $`r_e`$ by assuming a source distance (which can be uncertain by a factor of 2, and therefore represents a considerable systematic uncertainty for both the NSs and BHCs). The H atmosphere spectrum we used assumes the surface gravity of a 1.4$`M_{\odot }`$, 10 km object. It is possible to adopt the surface gravity as an additional parameter of the H atmosphere model; however, the available data are of insufficient quality to constrain all 3 parameters simultaneously.
We discuss in the Appendix the column densities adopted for each individual source. In quiescence, the S/N of the data is typically not sufficient to measure the X-ray equivalent column density for the assumed spectrum. We thus adopt an historically measured value of $`N_\mathrm{H}`$, taken from one of: (1) X-ray absorption observed while the object is X-ray bright; (2) optical reddening, which has been measured to be proportional to the equivalent hydrogen column density; or (3) the neutral hydrogen column density from radio observations Dickey & Lockman (1990) taken from the W3NH tool at HEASARC<sup>4</sup><sup>4</sup>4http://heasarc.gsfc.nasa.gov/docs/frames/mb\_tools.html, which measures the integrated column density not just to the distance of the X-ray object, but through the entire galaxy. All of these methods, when applied to estimating $`N_\mathrm{H}`$ during X-ray quiescence, have systematic errors sufficient to produce $`>`$100% uncertainties in the implied emission area and surface temperature (see discussion in Rutledge et al. (1999)). In addition, there are observations suggesting a column density that varies on timescales of months to years between outbursts of transients; an example is a change of $`\mathrm{\Delta }N_\mathrm{H}=0.5\times 10^{22}\mathrm{cm}^{-2}`$ in 4U 1608-522 Penninx et al. (1989) over several months. This uncertainty in $`N_\mathrm{H}`$ can only be overcome by high-S/N data in X-ray quiescence, which would permit a direct measurement of $`N_\mathrm{H}`$ during the observation. Interpretation of the results of the present analysis must bear this systematic uncertainty in mind.
## 3 Comparison between BHCs and NSs in Quiescence
In Fig. 2, we compare the measured spectral parameters $`kT_{\mathrm{eff}}`$ and $`r_e`$ of quiescent NSs and BHCs. Error bars are 90% confidence, as are upper limits. For the four NSs in this analysis, $`kT_{\mathrm{eff}}`$ and $`r_e`$ were constrained, with best-fit $`kT_{\mathrm{eff}}`$ in the range 0.08–0.20 keV and $`r_e`$ in the range 8–12 km. The NSs exhibit a significant range in temperature, possibly due to different core temperatures (related to $`\dot{M}`$). The data are of sufficient quality to constrain $`r_e`$ to within a factor of 2 for the NSs (not accounting for the uncertainties in source distance), and the resulting $`r_e`$ are consistent with objects of 10 km radius. The two connected points for Aql X-1 are for two different distances (2.0 and 4.0 kpc). The two connected points for 4U 2129+47 are discussed in the analysis section A.7. We note that recent observations Callanan et al. (1999) have resolved the optical counterpart of Aql X-1 into two objects, at most one of which is associated with the X-ray system; most likely, this observation implies a distance to Aql X-1 greater than previous estimates, although a new distance estimate has not yet been produced.
The spectra of five of the six BHCs were of insufficient statistical quality to simultaneously constrain both $`r_e`$ and $`kT_{\mathrm{eff}}`$ of the H atmosphere model. The exception (GS 2023+33) was significantly harder than the NS spectra, which constrained its $`kT_{\mathrm{eff}}`$ to be above those observed from the NSs. The BHCs GRO J1655-40, GS 2000+25, GRO J0422+32, and GS 1124-68 have spectral parameters which are, within errors, consistent with those of the NSs. A0620-00 and GRO J0422+32, if we presumed these to be NSs of the same surface area and gravity, must be cooler ($`\lesssim `$0.05 keV) than the average observed transient NSs. The spectrum of A0620-00 is discrepant with those of the NSs, implying a substantially smaller emission area ($`r_e\approx 0.2`$–$`2\mathrm{km}`$) for the range of $`kT_{\mathrm{eff}}`$ observed from the NSs.
The addition of a power-law spectral component – such as the hard power-law tail observed from Cen X-4 in ASCA data and detected from Aql X-1 in BeppoSAX data – introduces enough uncertainty into the H atmosphere spectral parameters of the BHCs that all sources would be consistent with those we observe from the NSs. In noting this systematic uncertainty, we point out that additional spectral components are not demanded by the data we have used, although the data are largely of low bandwidth (0.4-2.4 keV), and wider-bandwidth data (such as from ASCA, Chandra, or XMM) may alter this. In practice, we have neglected this systematic uncertainty to investigate the question: if these BHC sources were discovered in quiescence, and their H atmosphere spectral parameters were compared with those from known NSs, would we conclude they are similar or dissimilar to the NSs? For GS 2023+33 and A0620-00, we find that they are dissimilar; while for GRO J1655-40, GS 1124-68, GS 2000+25, and GRO J0422+32, we find they are consistent (albeit with a wide range of allowed values). Higher signal-to-noise data, taken with an instrument of wider bandwidth, would permit greater certainty regarding the possible contributions of spectral components for which we have not here accounted.
Moreover, no BHC has well-constrained spectral parameters which would place it exclusively within the parameter space occupied by the NSs, as depicted in Fig. 2. This is a phenomenologically defined region, which would have to be expanded with the discovery of quiescent NSs outside of this box. Thus, we find no evidence that any of the BHCs has been misclassified and should be re-classified as a NS.
## 4 The Energy Source for the Neutron Star Quiescent Luminosity
There are presently no observational results which exclude the possibility that part of the quiescent luminosity of these NSs is due to accretion. Brown et al. (1998) noted a few observational tests which can be applied to determine whether accretion is active. First, as accretion will increase the source luminosity (at this low luminosity level), the spectra should be drawn from times when sources are at their lowest observed luminosity in the X-ray passband. Second, since the H atmosphere thermal flux is expected to be variable only on timescales longer than $`\sim `$months to $`10^4`$ yr, variability on shorter timescales ($`\sim `$days) likely indicates active accretion. Third, active accretion onto the NS surface will produce metal absorption lines in the spectrum (O and Fe, below 1 keV), which can be observed with Chandra and XMM, but not with ASCA or ROSAT. The presence of photospheric metal absorption lines in the spectrum – aside from being observationally important – will indicate active accretion onto the NS surface. These indicators should be used to ensure that the observed NS emission is not due to active accretion (in addition to being observationally consistent with the theoretical H atmosphere spectrum). Since present instrumentation is not capable of detecting the lines, we can only apply the variability and observed-luminosity criteria.
If accretion is occurring during quiescence, it will increase the $`kT_{\mathrm{eff}}`$ of an emergent H atmosphere spectrum and produce metal absorption lines; accretion will not affect $`r_e`$, unless emission originates from a surface other than the NS (such as an accretion disk).
Some observational evidence suggests that accretion is indeed occurring onto the NS surface during quiescence; long-term (months-years) variability in the observed flux has been reported (for 4U 2129+47, see App. A.7.1 and Garcia & Callanan (1999); for Cen X$``$4, Van Paradijs et al. (1987)). While this variability can be explained by a variable absorption column depth, active accretion during quiescence is also a possibility. However, recent observations of Aql X$``$1 at the end of an outburst showed an abrupt fading into quiescence Campana et al. (1998a), associated with a sudden spectral hardening Zhang et al. (1998a). This was followed by a period of $``$15 days, over which the source was observed (three times) at a constant flux level Campana et al. (1998a). This behavior was interpreted as the onset of the “propeller effect” Illarionov & Sunyaev (1975); Stella et al. (1986) in this object, which would inhibit – perhaps completely – accretion from the disk onto the NS. The energy source for the long-term, nearly constant flux is then a puzzle. Unlike the accretion-powered models, the work of Brown et al. (1998) makes a specific prediction for this constant flux level, relating it to the long-term time-averaged accretion rate.
So as to minimize any contributions from accretion, we only analyse observations made during periods of the lowest observed flux. We calculate the bolometric luminosity from the H atmosphere fits. The fits are given in terms of an (unredshifted) effective temperature ($`kT_{\mathrm{eff}}`$) and emitting area radius ($`r_e`$). The observed bolometric flux is $`F_{\mathrm{bol}}=(1+z)^{-2}(r_e/d_0)^2\sigma T_{\mathrm{eff}}^4`$, where $`d_0`$ is the source distance, and the observed bolometric luminosity is $`L_{\mathrm{bol}}=4\pi d_0^2F_{\mathrm{bol}}`$. For a NS of $`1.4M_{}`$ and $`10\mathrm{km}`$ radius, the surface redshift is $`1+z=(1-2GM/Rc^2)^{-1/2}=1.31`$. In Table 4 we give the unabsorbed, observed X-ray and bolometric luminosities assuming a H atmosphere spectrum and the surface redshift of a $`1.4M_{}`$, $`10\mathrm{km}`$ radius NS ($`1+z=1.31`$), measured as described in the analysis of the present work and previously Rutledge et al. (1999). The luminosity quoted for 4U 1608$``$522 in the previous work was not calculated for the distance stated there; the correct value is in this table. While there is considerable systematic uncertainty in these values, due to uncertain distances and $`N_\mathrm{H}`$, the luminosities cover an order of magnitude, at about $`10^{32}`$–$`10^{33}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$.
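The luminosity bookkeeping above can be sketched numerically. This is an illustrative check only, assuming standard cgs constants; the $`kT_{\mathrm{eff}}`$, $`r_e`$, and distance inputs below are example values chosen to land in the quoted luminosity decade, not fitted parameters from the paper:

```python
import math

# Physical constants (cgs)
G     = 6.674e-8      # gravitational constant
c     = 2.998e10      # speed of light
sigma = 5.670e-5      # Stefan-Boltzmann constant
Msun  = 1.989e33      # solar mass, g
keV_to_K = 1.1605e7   # temperature equivalent of 1 keV

def surface_redshift(M_g, R_cm):
    """1+z = (1 - 2GM/Rc^2)^(-1/2) for a NS of mass M and radius R."""
    return (1.0 - 2.0 * G * M_g / (R_cm * c**2)) ** -0.5

def L_bol(kT_keV, r_e_km, M_g, R_cm):
    """Bolometric luminosity L = 4*pi*r_e^2 * sigma*T_eff^4 / (1+z)^2,
    equivalent to L_bol = 4*pi*d0^2 * F_bol with
    F_bol = (1+z)^-2 * (r_e/d0)^2 * sigma*T_eff^4."""
    zp1 = surface_redshift(M_g, R_cm)
    T = kT_keV * keV_to_K
    return 4.0 * math.pi * (r_e_km * 1e5)**2 * sigma * T**4 / zp1**2

zp1 = surface_redshift(1.4 * Msun, 1e6)   # 1.4 Msun, 10 km
print(round(zp1, 2))                      # -> 1.31, as quoted in the text

# Illustrative inputs: kT_eff = 0.1 keV, r_e = 10 km
print(L_bol(0.1, 10.0, 1.4 * Msun, 1e6))  # ~7.6e32 erg/s, in the 10^32-33 decade
```

The redshift factor of 1.31 follows directly from the $`1.4M_{}`$, 10 km assumption; the example luminosity shows why a $``$0.1 keV, $``$10 km fit naturally lands in the quoted $`10^{32}`$–$`10^{33}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$ range.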
Using these new bolometric quiescent luminosities for Aql X-1, Cen X-4, and 4U 1608$``$522, we have remade a plot (Fig. 3) of $`L_q/L_o`$ as a function of $`t_r/t_o`$ Brown et al. (1998). Here $`L_q`$ and $`L_o`$ are the observed quiescent and average outburst luminosities, and $`t_r`$ and $`t_o`$ are the recurrence interval and outburst duration. We show this relation for the NSs (open circles) Aql X-1, Cen X-4, 4U 1608$``$522, and EXO 0748$``$676 and the BHCs (filled circles) H 1705$``$250, 4U 1543$``$47, Tra X-1, V 404 Cyg (GS 2023+33), GS 2000+25, and A 0620$``$00. We denote with an arrow those BHCs for which only an upper limit on $`L_q`$ is known. Because the recurrence time for the NS 4U 2129+47 is unknown, we do not show it here. The expected incandescent luminosity is plotted for two different amounts of heat per accreted nucleon stored in the core during an outburst: 1$`\mathrm{MeV}`$ (solid line) and 0.1$`\mathrm{MeV}`$ (dotted line). The use of bolometric quiescent luminosities moved Aql X-1, Cen X-4, and 4U 1608$``$522 to higher $`L_q/L_o`$ (upwards on this diagram). With the exception of these three objects and the Rapid Burster ($`L_q`$ is from Asai et al. (1996a)), the data in this plot are taken from Chen et al. (1997). For Aql X-1 and the Rapid Burster, $`L_o`$ and $`t_o`$ are accurately known (RXTE/All-Sky Monitor public data); for the remaining sources $`L_o`$ and $`t_o`$ are estimated from the peak luminosities and the rise and decay timescales.
Four of the five NSs are within the band where the quiescent luminosity is that expected when the emitted heat is between 0.1-1.0 MeV per accreted baryon. The fifth NS (EXO 0748$``$676) has a higher quiescent luminosity (by a factor of 10), which we interpret as being due to continued accretion, an interpretation reinforced by the observation of spectral variability during the quiescent observations with ASCA Corbet et al. (1994); Thomas et al. (1997), on timescales of $``$1000 sec and longer. (Garcia & Callanan (1999) measured $`L_x`$=1$`\times 10^{34}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$ from Einstein/IPC observations of this source.) The BHCs on this figure are more spread out across the parameter space, qualitatively indicating a statistical difference – although not one which is particular to each object – between the two classes of objects. This suggests the NS quiescent luminosity is more strongly related to the accreted energy than the BH quiescent luminosities are.
## 5 Conclusions
We have fit the quiescent X-ray spectra of four transient NSs and six transient BHCs (with measured mass functions) with a pure H atmosphere spectrum. We compared the emitting area radius and effective temperature of the ten sources and found that the NSs are clustered in ($`r_e`$, $`kT_{\mathrm{eff}}`$) parameter space. Two of the BHCs (A0620$``$00, GS 2023+33) are inconsistent with the NS spectra; the upper-limits of three more (GS 1124$``$68, GS 2000+25, GRO J0422+32), and the ($`r_e`$, $`kT_{\mathrm{eff}}`$) locus of the sixth (GRO J1655$``$40) overlap the NS parameter space. The upper-limits of the parameter space of GS 2000+25 and GRO J0422+32 are marginally consistent with the observed ($`r_e`$, $`kT_{\mathrm{eff}}`$) of the NSs, indicating that these BHCs, if interpreted as NSs, would have to be cooler than the average NS observed in quiescence.
We found that the X-ray spectra exclude A0620$``$00 and GS 2023+33 from being NSs of the type we used for comparison. These differences between the quiescent X-ray spectra of BHCs and NSs cannot be directly attributed to an observational selection effect: the NSs were identified by their type I X-ray bursts, while the BHCs were first identified by their X-ray variability behavior and later by the high ($``$3$`M_{}`$) implied mass of the compact object. The X-ray spectra of GS 1124$``$68, GS 2000+25, GRO J1655$``$40, and GRO J0422+32 are consistent with those of the NSs, but their parameter space is large, and better data are needed to determine whether they are spectrally similar or not.
Our method can be applied to objects which have low X-ray luminosities ($``$$`10^{34}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$), either to identify them as neutron stars – which possess an atmosphere from which the theoretical emission originates – or as objects which are not NSs, as we have done here. In addition to the transient field objects, this method can be readily applied to the low-luminosity X-ray sources observed in globular clusters Hertz & Grindlay (1983), which are thought to be cataclysmic variables Cool et al. (1995); Grindlay et al. (1995) but may also be transient neutron stars in quiescence Verbunt et al. (1984). X-ray spectroscopic determination can identify these objects as NSs radiating thermal emission from the atmosphere, or imply a different origin for the emission. As discussed above and elsewhere Brown et al. (1998), the quiescent luminosities of these sources are set by the time-averaged accretion rate. Thus, the low-luminosity ($`10^{31}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$) X-ray sources in globular clusters, if they were transient neutron stars in quiescence, would have $`\dot{M}\approx 2\times 10^{-13}`$ $`M_{}`$ $`\mathrm{yr}^{-1}`$; comparing this to Aql X-1, with $`\dot{M}\approx 1\times 10^{-10}`$ $`M_{}`$ $`\mathrm{yr}^{-1}`$ (estimated from the RXTE/ASM lightcurve history) and a mean outburst interval of $``$200 days, the low-luminosity X-ray sources would have Aql-like outbursts with recurrence times of $``$250 yr, assuming the quiescent luminosity is in steady-state with accretion. The estimated number of such sources down to this luminosity level is highly uncertain, about 1-10 per globular cluster Verbunt et al. (1995). For $``$200 GCs in the galaxy, one expects $``$1-10 such transients per year.
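The recurrence-time and rate estimates above can be checked with a short script. All input numbers are taken from the text; the linear scaling of recurrence time with inverse time-averaged accretion rate is the stated steady-state assumption:

```python
# Scaling of outburst recurrence time with time-averaged accretion rate,
# assuming the quiescent luminosity is in steady state with accretion.
mdot_gc  = 2e-13   # Msun/yr, implied for L_q ~ 1e31 erg/s globular-cluster sources
mdot_aql = 1e-10   # Msun/yr, Aql X-1 (from RXTE/ASM lightcurve history)
t_rec_aql_days = 200.0   # mean outburst interval of Aql X-1

# For Aql-like outbursts, recurrence time scales inversely with <Mdot>:
t_rec_gc_yr = t_rec_aql_days * (mdot_aql / mdot_gc) / 365.25
print(round(t_rec_gc_yr))   # -> 274, i.e. the ~250 yr quoted in the text

# Expected galactic transient rate from ~200 globular clusters hosting
# 1-10 such sources each:
n_gc = 200
for n_per_gc in (1, 10):
    rate = n_gc * n_per_gc / t_rec_gc_yr
    print(round(rate, 1))   # ~0.7 and ~7.3 per year, i.e. ~1-10 per year
```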
Even assuming only 1/4 of all GCs are within a distance at which an Aql X-1-like transient would be detectable with the RXTE/ASM (for a peak RXTE/ASM detection countrate of 10 c/s; Aql X-1 has an ASM peak countrate of $``$30 c/s), this produces an expected transient discovery rate of 0.3-3 per year, which is consistent with or greater than the observed discovery rate with the RXTE/ASM (no such sources over a four-year period). Stronger constraints could be made with more sensitive all-sky monitoring observations over longer time-baselines, or with a more tightly constrained luminosity function of the low-luminosity sources in globular clusters.
Higher quality X-ray data from the coming X-ray spectroscopy missions (Chandra/ACIS, XMM and ASTRO-E) will permit this analysis to be performed with greater accuracy, in particular by permitting the simultaneous measurement of the X-ray column density – a dominant systematic uncertainty. These will also provide the means to account for possible contributions due to a hard power-law component in the BHCs. For example, a 15 ksec Chandra observation of a source with photon power-law slope of 2, luminosity of 2$`\times 10^{31}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$, and $`N_{\mathrm{H},22}`$=0.2 at 1 kpc would produce a spectrum which can be excluded as a H atmosphere (or blackbody) with an (unconstrained) column depth, with probability=6$`\times 10^{-6}`$. Finally, the high spectral resolution and countrates of these instruments will permit a search for short-timescale variability and photospheric metal absorption lines, which would indicate ongoing accretion during X-ray quiescence Brown et al. (1998).
###### Acknowledgements.
This research was supported by NASA via grant NAGW-4517 and through a Hellman Family Faculty Fund Award (UC-Berkeley) to LB, who is also a Cottrell Scholar of the Research Corporation. EFB is supported by NASA GSRP Graduate Fellowship under grant NGT5-50052. GGP acknowledges support from NASA grants NAG5-6907 and NAG5-7017. This research was supported in part by the National Science Foundation under Grant No. PHY94-07194. We acknowledge use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. We gratefully acknowledge useful conversations with D. Fox and A. Prestwich regarding ROSAT/HRI response issues. We gratefully acknowledge helpful comments from J. McClintock and from the referee, J. Grindlay.
## Appendix A Sources
The assumed distances (used to calculate luminosity) and adopted $`N_\mathrm{H}`$ for the spectral fits performed here are listed in Table 2; values for the NSs we analysed previously (Aql X$``$1, Cen X$``$4, 4U 1608$``$522) are in the previous reference Rutledge et al. (1999). The results of the spectral fits are presented in Table 3, which contains: (1) the dataset number (cf. Table 1); (2) the $`N_\mathrm{H}`$; (3) the best-fit spectral parameters for the H atmosphere model, including the un-redshifted effective NS surface temperature ($`kT_{\mathrm{eff}}`$) and apparent emission area radius $`r_e`$(km), as well as the reduced $`\chi _\nu ^2`$ for that model. The H atmosphere model – appropriate only for NSs – is applied here to the BHCs, to produce spectral parameters which can then be directly compared with those observed from NSs.
Here, we describe our assumptions about individual objects, and provide specifics of the analyses of each observation.
### A.1 GRO J0422+32
The color excess, measured from an IUE spectrum of this object, was found to be $`E(BV)`$=0.40$`\pm `$0.07 ($`N_{\mathrm{H},22}`$$``$0.23 $`\pm `$0.04; Shrader et al. (1992)).
First, we analyse observation 1, a ROSAT/PSPC observation which had not previously been analysed (this observation is included for completeness, as this source was fainter during observation 2). We extracted the data from a 53″ circle about the source, and background from a 530″ annulus about the source, excluding another object in the FOV and 3 low-surface-brightness areas clustered on the SE side of the annulus (possibly background fluctuations). Due to the long exposure, this spectrum constrained all three parameters ($`N_\mathrm{H}`$, $`kT_{\mathrm{eff}}`$, and $`r_e`$). The resulting best fit was $`r_e=3.0_{-0.7}^{+2.4}`$ km, $`kT_{\mathrm{eff}}=0.22_{-0.05}^{+0.03}`$ keV, and $`N_{\mathrm{H},22}`$$`<0.12`$ (90%), for an implied luminosity of 7.8$`\times 10^{32}`$ (d/2.6 kpc)<sup>2</sup> $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$ (0.5-2.0 keV). In a fit holding the column density fixed at $`N_{\mathrm{H},22}`$=0.22, the best-fit model was not acceptable, at the p=1.8% level. For $`\mathrm{\Delta }\chi ^2`$=2.72, the parameter space would constrain $`r_e`$ to values similar to those found for the NSs ($`11_{-2.5}^{+4.7}`$ km).
We analysed the HRI observation 2. The observation was previously analysed, producing a luminosity upper-limit of log(L)$`<`$31.6 (d=2.6 $`\mathrm{kpc}`$, $`\alpha `$=2.1, and $`N_{\mathrm{H},22}`$=0.2; Garcia et al. (1997)). We extracted the spectrum from an 8″ radius about the source position – finding 7 photons, consistent with the expected and measured background count-rate. We extracted background counts from an annulus of 50″ and 10″ outer and inner radii, respectively. We used the Dec 1 1990 HRI spectral response from the GSFC/HEASARC calibration database. We find a slightly lower, but consistent, $`3\sigma `$ upper-limit to the unabsorbed source luminosity for the same assumed spectrum ($`\mathrm{log}(L)<31.4`$) as found previously. We assumed a series of temperatures and found upper-limits to $`r_e`$ for each assumed spectrum. For a temperature comparable to that derived from the PSPC analysis (above), the implied $`r_e<0.8`$ km – substantially below that found from the PSPC analysis, which indicates intensity variability at this low luminosity. As observation 2 has a lower luminosity than observation 1, we use observation 2 in our interpretations (as thermal emission from the NS surface should not vary by more than a few % over the timescales considered here, and it is only the lowest observed luminosity from each object which may be due in greatest part to the thermal surface emission).
### A.2 A0620$``$00
The optical color excess of the companion star has been measured as $`E(BV)`$=0.39$`\pm `$0.02 Wu et al. (1976); Oke & Greenstein (1977), corresponding to $`N_{\mathrm{H},22}`$=0.22 – somewhat below the $`N_{\mathrm{H},22}`$=0.41 Dickey & Lockman (1990) measured in this direction (which is the integrated value along this line of sight through the entire galaxy), consistent with a nearby object. We adopt the $`N_{\mathrm{H},22}`$=0.22 value for our spectral fits.
We analysed data taken by ROSAT/PSPC (observation 3); these data were previously analysed McClintock et al. (1995). An X-ray source was found in the extracted data at the source position. Counts from this source were extracted from a circle of radius 53″. Background was taken from an annulus about the source, 4′ outer radius, and 60″ inner radius; excluded from this annulus were circular areas (each only half-within the annulus, straddling the outer radius) about two unrelated sources, each with radius 53″. The source region contained a total of 116 counts. The background region contained 938 counts. We used data in the energy range 0.4-2.1 keV, rebinned into three energy bins 0.4-0.8, 0.8-1.2, and 1.2-2.1 keV. Using the same spectral model and assumptions as the previous work McClintock et al. (1995), we reproduce the best-fit blackbody temperature and source luminosity.
The data are of insufficient S/N to constrain the temperature and area independently. We held the temperature fixed at values $`kT_{\mathrm{eff}}`$$`=[0.04,0.54]`$ keV, and extracted the best fit emission area radii ($`r_e`$), which were found to range from 14 $`\pm `$2 km (at $`kT_{\mathrm{eff}}`$$`=0.04`$ keV) to 0.07 km (at $`kT_{\mathrm{eff}}`$$`=0.15`$ keV). The model $`\chi _\nu ^2`$ reaches a probability of 2% that the observed spectrum is produced by the model in a single random trial ($`\chi _\nu ^2=4.0`$, for 2 dof) at $`kT_{\mathrm{eff}}`$$`=0.265`$ keV, which sets our upper-limit on the effective temperature for this source.
### A.3 GS 1124$``$68
The X-ray-measured equivalent column density was found to have evolved, decreasing by $`N_{\mathrm{H},22}`$$``$0.2 during the outburst and settling to a value of $`N_{\mathrm{H},22}`$=0.16 Ebisawa et al. (1994). This is consistent with the optical reddening $`E(BV)`$=0.20$`\pm `$0.05 Gonzalez-Riestra et al. (1991), measured from a UV absorption line, and is below the measured hydrogen column density in the direction of this source, $`N_{\mathrm{H},22}`$=0.25 Dickey & Lockman (1990), consistent with an object a short distance away relative to the size of the galaxy. We adopt $`N_{\mathrm{H},22}`$=0.16 for our spectral fits.
The PSPC observation (number 4) has been previously analysed Greiner et al. (1994); Narayan et al. (1997b). For the source, we extracted counts from a 75″ radius circle about the source position, as the source position is about 15′ off-axis. For background, we used three circular areas on the detector, one centered at the source position, the other two centered at distances from the FOV center equal to that of the source position. From these three circular areas, we excluded the 75″ region about the source position, and one other apparent (that is, faint) source. There were 66 counts in the source region, and 2104 counts in the background spectrum. We generated an ancillary response file for this off-axis spectrum using the FTOOL pcarf V2.1.0. In the spectral analyses, we find a 3$`\sigma `$ upper-limit to the (unabsorbed) source flux of $`<`$3.5$`\times 10^{14}`$ $`\mathrm{erg}`$$`\mathrm{cm}^2`$$`\mathrm{s}^1`$(0.3-2.4 keV), assuming $`N_{\mathrm{H},22}`$=0.22, and a power-law photon spectral slope of $`\alpha =2.5`$, consistent with the previously found value.
There is no detectable quiescent source at the position of the optical source, consistent with previous results Greiner et al. (1994); Narayan et al. (1997b). Across the investigated range of NS surface temperatures, the upper-limits of the implied radii range from 82-0.13 km.
### A.4 GRO J1655$``$40
This ASCA observation has been analysed previously Ueda et al. (1998), in which the flux was measured to be 2$`\times 10^{-13}`$ $`\mathrm{erg}`$$`\mathrm{cm}^{-2}`$$`\mathrm{s}^{-1}`$ (2-10 keV). The interstellar column density was measured as $`N_{\mathrm{H},22}`$=0.74$`\pm `$0.03 in the X-ray high (bright) state, and $`<`$0.08 and $`<`$0.14 (90%) in the low (faint) state. The optical reddening, $`E(BV)`$=1.3$`\pm `$0.1 (cf. Orosz & Bailyn (1997); Horne et al. (1996)), is consistent with a value of $`N_{\mathrm{H},22}`$=0.74, which we adopt and hold fixed.
Using data taken with the GIS in PH mode, we extracted background counts from two circles, each 5′ in radius and centered 5′ from the image center – the same distance as the position of the object. The source X-ray spectra were extracted from a 5′ circle about the optical position. We excluded energy bins below 1.0 keV and above 5.0 keV from the fit.
For the SIS, data were taken from 2.5′ radius circles about the source. Background was taken from a square approximately 6′ on a side, excluding the 2.5′ source region. We used energy range of 1.0-10.0 keV for the GIS data, and 0.5-10.0 keV for the SIS data.
Based on these data, no single best fit is found for $`0.04\mathrm{keV}<`$$`kT_{\mathrm{eff}}`$$`<0.54\mathrm{keV}`$. As the assumed temperature (held fixed) is increased from $`kT_{\mathrm{eff}}`$$`=`$0.04 to 0.54 keV, $`r_e`$ decreases from 110$`\pm `$32 km to 0.19$`\pm `$0.02 km.
### A.5 GS 2000+25
The X-ray equivalent $`N_\mathrm{H}`$ for this object has been measured as $`N_{\mathrm{H},22}`$=1.14 Tsunemi et al. (1989), and optical reddening measured as $`E(BV)`$=1.5 Chevalier & Ilovaisky (1990) (corresponding to $`N_{\mathrm{H},22}`$=0.825), which are slightly discrepant from one another, though within the systematic uncertainties between these techniques. The measured hydrogen column density in the direction of this object is $`N_{\mathrm{H},22}`$=0.66 Dickey & Lockman (1990), although within a degree, values range between $`N_{\mathrm{H},22}`$=\[0.51,0.81\], and at the most proximate point, $`N_{\mathrm{H},22}`$=0.67. The higher measured optical reddening of the stellar companion and absorption of X-rays from the compact object, relative to the direct $`N_\mathrm{H}`$ measurement, may indicate significant absorption in the local environment of the binary, on the order of $`N_{\mathrm{H},22}`$=0.5. We use in turn $`N_{\mathrm{H},22}`$=0.66 and $`N_{\mathrm{H},22}`$=1.1.
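The reddening-to-column conversions quoted throughout this appendix (e.g., $`E(BV)`$=1.5 corresponding to $`N_{\mathrm{H},22}`$=0.825 above) are consistent with the standard gas-to-dust ratio $`N_\mathrm{H}`$ of roughly 5.5-5.8$`\times 10^{21}`$ $`E(BV)`$ $`\mathrm{cm}^{-2}`$; the exact ratio adopted per source is our inference, not stated in the text. A minimal sketch:

```python
def nh22_from_ebv(ebv, ratio=5.5e21):
    """Equivalent hydrogen column in units of 1e22 cm^-2 from color excess,
    assuming N_H = ratio * E(B-V); ratio ~ 5.5-5.8e21 cm^-2 mag^-1 is our
    inference from the numbers quoted in this appendix."""
    return ratio * ebv / 1e22

print(nh22_from_ebv(1.5))                      # -> 0.825 (GS 2000+25, this section)
print(round(nh22_from_ebv(0.40, 5.8e21), 2))   # -> 0.23 (GRO J0422+32, App. A.1)
```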
Two of the presently analysed observations (#6 & 7) have been analysed previously Verbunt et al. (1994). In the ROSAT images from all three observations (#6, 7, and 8), there is no detected object at the source position. We combine the data, which are from three widely separated epochs, to produce average upper limits on the spectral parameters.
In each of the three observations, we extract source counts from a 60″ radius circular region centered on the object. Background counts were taken from annuli about the source, with an inner radius of 63.75″ and an outer radius of 375″. We also excluded two circular regions (60″ radius) which overlapped the annuli and appear to contain sources. There were a total of 78 counts in the source region and 2505 counts in the background region. The resultant spectrum produces flux upper-limits consistent with, but about 25% below, the upper-limits found previously Verbunt et al. (1994).
We do not detect a source above the background countrate; we therefore derive 90% upper limits on $`r_e`$, which range from 0.13 to 28 km, depending on the assumed surface temperature over the range 0.04-0.54 keV.
### A.6 GS 2023+33
The optical reddening has been measured at $`A_V3.0`$ Charles et al. (1989), and a comparable $`E(BV)`$=1.03 Wagner et al. (1991), which imply $`N_{\mathrm{H},22}`$=0.54-0.59. This is below the $`N_{\mathrm{H},22}`$=0.81 measured in this direction Dickey & Lockman (1990), consistent with an object nearby relative to the size of the galaxy. We adopt a value of $`N_{\mathrm{H},22}`$=0.54.
This object was observed twice in quiescence: in 1994 with ASCA and in 1992 with ROSAT/PSPC (see Verbunt et al. (1994)). It was also observed and analysed in the ROSAT All-Sky Survey Verbunt et al. (1994), on Nov 5, with an average countrate of 0.028$`\pm `$0.009 c/s, in which it was found at a luminosity of about 1.5$`\times 10^{33}`$ (d/3.5 $`\mathrm{kpc}`$)<sup>2</sup> $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$ (0.4-2.4 keV).
From the ROSAT image, data were selected centered on the X-ray source, in a circle 60″ in radius. Background was taken from an annulus with an inner-radius of 63.8″ and an outer radius of 375″. The source region had 425 counts, and the background region had a total of 1365 counts.
At effective temperatures below $`kT_{\mathrm{eff}}`$$`=0.258`$ keV, the model becomes untenable ($`\chi _\nu ^2`$=4.0 for 2 dof, excluded at 98% confidence), which sets the lower limit of the acceptable effective-temperature parameter space for these data. The best-fit area was not well constrained from below (as the temperature increases to values above those for which this model is valid), but the 90% upper limit on the radius is 1.9 km (d/3.5 $`\mathrm{kpc}`$). We find the (0.4-2.4 keV) luminosity of the best-fit model to be 6$`\times 10^{32}`$ $`(d/3.5\mathrm{kpc})^2`$$`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$.
These results are consistent with those of an earlier analysis of these same data Wagner et al. (1994), although we note that intensity variability was found on $``$days timescales which, a priori, cannot be attributed to the NS surface flux from the hot (thermal) core. These results are also consistent with results from ASCA analysis of quiescent data taken at a different time Narayan et al. (1997a).
### A.7 4U 2129+47
We inadvertently did not include this NS transient in our previous study, so we analyse it here. This spectrum has been analysed previously with a black-body model Garcia & Callanan (1999), and the derived parameters were consistent with those we find here, under similar distance/column density assumptions.
The distance to this object is controversial with values between 1 and 6 kpc (see Cowley & Schmidtke (1990) for arguments on both sides). Associated with this distance uncertainty is an optical reddening uncertainty; if the optical counterpart is more distant, it must be more luminous, of earlier type, and bluer, in which case the reddening is $`E(BV)`$=0.3 ($`N_{\mathrm{H},22}`$=0.17; Cowley & Schmidtke (1990)); if less distant, the companion is redder, and $`E(BV)`$=0.5 ($`N_{\mathrm{H},22}`$=0.28; Thorstensen et al. (1979); Chevalier et al. (1989)). The total $`N_\mathrm{H}`$ measured in the direction of this object is $`N_{\mathrm{H},22}`$=0.38 Dickey & Lockman (1990), while the best fit column density obtained by Garcia and Callanan (with this same data; Garcia & Callanan (1999)) was $`N_{\mathrm{H},22}`$=21.5 (no quoted error).
In interpreting the data, we alternately adopt a distance of 1.5 kpc and $`E(BV)`$=0.5 ($`N_{\mathrm{H},22}`$=0.28), and 6.0 kpc and $`E(BV)`$=0.3 ($`N_{\mathrm{H},22}`$=0.17).
For the March 1994 PSPC observation, the source counts were extracted from a circular region 45″ in radius, with a total 279 counts. Background was extracted from an annulus about the source, 278″ and 45″ in outer and inner radius, respectively, for a total of 1959 counts. We fit to data in the 0.4-2.4 keV energy range.
For an assumed $`N_{\mathrm{H},22}`$=0.28 (d=1.5 kpc), the best-fit parameters were $`kT_{\mathrm{eff}}=0.08_{-0.015}^{+0.02}`$ keV and $`r_e=7.1_{-3.5}^{+6.0}`$ km. For $`N_{\mathrm{H},22}`$=0.17 (d=6.0 kpc), the best-fit parameters are $`kT_{\mathrm{eff}}=0.105_{-0.020}^{+0.025}`$ keV and $`r_e=12.0_{-4.9}^{+8.6}`$ km. The unabsorbed luminosities for the assumed ($`N_\mathrm{H}`$, d) combinations are 6.5$`\times 10^{31}`$ and 5.6$`\times 10^{32}`$ $`\mathrm{erg}`$ $`\mathrm{s}^{-1}`$ (0.4-2.4 keV), respectively.
#### A.7.1 An Observed Change in the Flux from 4U 2129+47
To compare the observed flux of 4U 2129+47 analysed here with one measured previously with the ROSAT/HRI during 7 ksec of observations (Nov-Dec 1992; Garcia et al. (1997)), we imposed $`N_{\mathrm{H},22}`$=0.5 and a photon power-law slope of 2.0, and measured the (absorbed) flux from the March 1994 observation, which was (6.7$`\pm `$0.5)$`\times 10^{-14}`$ $`\mathrm{erg}`$$`\mathrm{cm}^{-2}`$$`\mathrm{s}^{-1}`$ (0.3-2.4 keV) – a factor of 3.4$`\pm `$0.6 below that measured during the earlier observation by Garcia (we independently analysed the earlier HRI observation, and confirm Garcia’s flux measurement). We cannot simultaneously extract $`r_e`$ and $`kT_{\mathrm{eff}}`$ from the HRI observation, as only one spectral bin is available, and so we cannot compare the photon energy spectra. However, we jointly fit both observed spectra with the same H atmosphere and $`N_\mathrm{H}`$, and found that there is a 2.7% chance that the best-fit spectrum produced both observed spectra. Assuming that $`r_e`$ and $`N_\mathrm{H}`$ are the same between the two observations, the effective temperature would have to have dropped from 0.091$`\pm `$0.006 keV (90%) to 0.081$`\pm `$0.002 keV to explain the different spectra. The thermal timescale at the depth where the crust heating occurs is about a year Brown et al. (1998); the decrease in effective temperature could be a signature of a cooling crust. Another possible explanation is that the $`N_\mathrm{H}`$ was lower during the 1992 observation than during the 1994 observation; for example, if $`N_{\mathrm{H},22}`$=0.10 during the HRI observation and $`N_{\mathrm{H},22}`$=0.28 during the 1994 observation, but with the same underlying H atmosphere spectrum, this would account for the observed difference in flux. In addition, if accretion is active during quiescence, the different X-ray spectra may reflect a decrease in the accretion rate by a factor of (0.081/0.0906)<sup>4</sup>=0.64, assuming a thermal spectrum.
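The accretion-rate factor quoted at the end of this paragraph is simple arithmetic under the stated thermal assumption (luminosity scaling as $`T_{\mathrm{eff}}^4`$ at fixed $`r_e`$); a sketch:

```python
# If the quiescent luminosity tracks the accretion rate and the emission is
# thermal (L proportional to T_eff^4 at fixed r_e), the implied change in
# accretion rate between the 1992 and 1994 observations is:
kT_1992 = 0.0906   # keV (best joint-fit value; 0.091 +/- 0.006 quoted)
kT_1994 = 0.081    # keV
factor = (kT_1994 / kT_1992) ** 4
print(round(factor, 2))   # -> 0.64, as quoted in the text
```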
With the quality of spectra we have here, we cannot definitely state the cause of the discrepancy between the 1992 and 1994 observations, but it is consistent with either a difference in the intervening $`N_\mathrm{H}`$, a difference in the effective temperature, time-variable accretion, or a combination of the three.
# Caltech Faint Field Galaxy Redshift Survey IX: Source Detection and Photometry in the Hubble Deep Field Region Based on observations made at the Palomar Observatory, which is owned and operated by the California Institute of Technology; and with the NASA/ESA Hubble Space Telescope, which is operated by AURA under NASA contract NAS 5-26555.
## 1 Introduction
The Hubble Deep Field (HDF, Williams et al 1996) was chosen to be at high Galactic latitude, at low extinction, and free of bright or unusual sources. The Hubble Space Telescope (HST) images of the HDF are the deepest optical images of the sky ever taken, reaching source densities of roughly $`10^6\mathrm{deg}^2`$. The HDF has quickly become a “standard field” for the study of very faint extragalactic sources; it has been studied at all wavelengths from X-ray to radio. It is hoped that the huge multi-wavelength database which is developing for this field will lead to a new understanding of faint galaxy properties and evolution. In this paper are presented the catalogs of source fluxes and colors created for the Caltech Faint Galaxy Redshift Survey, in a region of the sky centered on the HDF.
The Caltech Faint Galaxy Redshift Survey is a set of magnitude-limited visual spectroscopic surveys, with visual- and near-infrared-selected samples of very faint sources in blank fields. The sources are observed spectroscopically with the Low Resolution Imaging Spectrograph (LRIS, Oke et al 1995) instrument on the Keck Telescope. The spectroscopic catalogs are presented in companion papers (Cohen et al 1999a, Cohen et al 1999c). Briefly, the survey comprises several fields which together contain many hundreds of sources with spectroscopically measured redshifts and multi-band photometry, down to $`R\sim 24.5`$ or $`K\sim 20`$ mag, the current practical limit of highly complete, magnitude-limited spectroscopic samples. (Of course the photometric catalogs, and some incomplete spectroscopic samples, reach fainter fluxes than these.) Early results include studies of galaxy groups out to redshifts $`z\sim 1`$ (Cohen et al 1996a, 1996b, 1999b, 1999c) and measurements of broad-band and emission line luminosity functions and their evolution to redshifts $`z\sim 1.5`$ (Hogg et al 1998a; Hogg 1998b; Cohen et al 1999b). The database of photometry and spectroscopy will be useful for studies of faint galaxies and stars.
The HST images of the HDF are very small, covering only about 5 arcmin<sup>2</sup>, so they are poorly matched to the 15 arcmin<sup>2</sup> spectroscopic field of the LRIS instrument. For this reason the spectroscopic surveys in the HDF are performed in a larger region of the sky surrounding the HST image, with sources selected with the ground-based data presented here. The HDF observations with HST also included short exposures (one or two orbits) for eight pointings surrounding the HDF; these are referred to as the “Flanking Fields” (FF). The potential for obtaining detailed morphological information on the brighter sources at the resolution of HST therefore exists for the photometric catalogs presented here, and the redshift catalogs presented elsewhere (Cohen et al 1996a, 1996b, 1999c).
## 2 Data
For visual data, $`U_n`$, $`G`$ and $``$ images taken with the COSMIC camera (Kells et al 1998) at the prime focus of the 200-inch Hale Telescope at the Palomar Observatory were used. The COSMIC camera has $`0.283\times 0.283\mathrm{arcsec}^2`$ pixels over a $`9\times 9\mathrm{arcmin}^2`$ field of view. The final, stacked images are $`8.6\times 8.7\mathrm{arcmin}^2`$, centered on 12 36 51.4 +62 13 13 (J2000), ie, roughly centered on the HST image of the HDF (Williams et al 1996). The visual images were taken in order to identify candidate $`z>3`$ galaxies; details of the observations, calibration, and reduction of these images are described in Steidel et al (in preparation). The $`U_n`$, $`G`$ and $``$ filters are described in Steidel & Hamilton (1993); briefly, they have effective wavelengths of 3570, 4830 and 6930 Å, FWHM bandpasses of 700, 1200 and 1500 Å, and zero-magnitude flux densities of roughly 1550, 3890 and 2750 Jy. These magnitudes are Vega-relative, not AB.
The southwest corner of the $``$ image was contaminated by an asteroid trail. The trail was removed by transforming less sensitive but higher-resolution Keck LRIS $`R`$-band images taken as set-up for the spectroscopic program in this field (Cohen et al 1999c) onto the same pixel scale, smoothing to the same seeing, and scaling to the same zeropoint. A strip of full width $`4.7\mathrm{arcsec}`$ along the straight trail was replaced with the smoothed, transformed Keck LRIS image. This thin strip of the $``$-band image has slightly higher-than-average noise.
For near-infrared data, an 8-arcmin diameter circular region centered on the HDF was imaged on 1997 March 19–21 using a $`K_s`$ filter with a near-infrared camera (Jarrett et al 1994) mounted at the prime focus of the 200-inch Hale Telescope. The instrument reimages the focal plane at 1:1 onto a NICMOS–3 $`256\times 256\mathrm{pixel}`$ HgCdTe array (produced by Rockwell), producing a $`0.494\times 0.494\mathrm{arcsec}^2`$ projected pixel size and a $`2.1\times 2.1\mathrm{arcmin}^2`$ instantaneous field of view. The $`K_s`$ filter has an effective wavelength of $`2.15\mu \mathrm{m}`$, a FWHM bandpass of $`0.3\mu \mathrm{m}`$, and a zero-magnitude flux density of roughly 708 Jy. Fourteen separate subfields, offset by 2 arcmin, were required in order to mosaic the entire circular field; each of these subfields was imaged once per night. For each subfield each night, 45 separate frames were taken; each frame consisted of six exposures of three seconds each, coadded in the electronics before writing to disk. The telescope was dithered by 5–15 arcsec between frames. As a result, each subfield was exposed for 810 s each night, or 2430 s for the three nights. The seeing was $`∼1.0`$ arcsec FWHM for most of the three nights. The first two nights were judged photometric, and were calibrated using the faint Solar-type standard stars of Persson et al (1998).
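The exposure bookkeeping above can be checked by multiplying out the quoted numbers; a trivial sketch:

```python
# Exposure accounting for the Ks mosaic, using the values quoted above.
frames_per_night = 45   # frames per subfield per night
coadds_per_frame = 6    # exposures coadded in the electronics
exposure_s = 3          # seconds per coadded exposure
nights = 3

per_night_s = frames_per_night * coadds_per_frame * exposure_s  # 810 s
total_s = per_night_s * nights                                  # 2430 s
```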
The $`K_s`$-band data were reduced by the method of Pahre et al (1997). Each subfield was reduced separately for each night. The third night’s data were rescaled by factors of between 1.1 and 1.5 in order to account for cirrus; the scaling factors were determined from a fit to a large number of sources. The subfields were then registered by aligning the objects in common with adjacent subfields in the overlap region. Individual pixels in a given field were weighted by the number of pointings contributing to that pixel. A background level was estimated at every pixel by median-filtering the mosaic with a wide filter and sigma–clipping. This background was subtracted in order to remove subfield-to-subfield variations in the sky brightness of the final mosaic. The final $`K_s`$–band mosaic is displayed in Figure 2.
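The background-subtraction step (wide median filter with sigma clipping) can be sketched as follows; the box size and clipping threshold here are illustrative assumptions, not the values used for the actual mosaic, and the clipping is done in a single pass rather than iteratively:

```python
import numpy as np

def background_map(image, box=9, nsigma=3.0):
    """Smooth background estimate: replace pixels deviating by more than
    nsigma standard deviations from the global median (i.e. sources) with
    the median, then take a running box-median at every pixel.
    Parameter values are illustrative only."""
    med = np.median(image)
    clipped = np.where(np.abs(image - med) > nsigma * image.std(), med, image)
    pad = box // 2
    padded = np.pad(clipped, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (box, box))
    return np.median(windows, axis=(2, 3))
```

Subtracting such a map from the mosaic flattens subfield-to-subfield sky variations while leaving compact sources essentially untouched.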
Table 1 gives the properties of the final, stacked images.
## 3 Source detection
Sources were detected in all four images independently to construct four catalogs, hereafter “$`U_n`$-selected”, “$`G`$-selected”, “$``$-selected” and “$`K_s`$-selected”. All catalogs were created with the SExtractor source detection and photometry package (Bertin & Arnouts 1996). The detection algorithm is as follows: Images are smoothed with a Gaussian filter which has roughly the same FWHM as the seeing (1.13 arcsec for the visual images and 1.5 arcsec for the $`K_s`$-band image). Sources in the smoothed image with central-pixel surface brightness above a certain limit are added to the catalog. If a source has multiple peaks within its 1.2-$`\sigma `$ isophotal area on the image (where $`\sigma `$ is the pixel-to-pixel root-mean-square fluctuation in the sky brightness), each peak is split into a separate catalog source if it contains at least one percent of the original source’s isophotal flux.
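The one-percent splitting criterion can be restated as a tiny filter; this is a toy expression of the rule described above, not SExtractor's actual deblending code:

```python
def peaks_to_split(peak_fluxes, isophotal_flux, min_frac=0.01):
    """Toy version of the deblending rule: each peak inside a source's
    isophotal area becomes a separate catalog source only if it carries
    at least min_frac (one percent) of the parent's isophotal flux."""
    return [f for f in peak_fluxes if f >= min_frac * isophotal_flux]
```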
The $``$-selected SExtractor catalog was augmented in two ways. (1) Several sources were added which, by eye, appear that they ought to be split off of brighter nearby objects but were not. These sources, when above the $``$-band flux limit, were added to the $``$-selected catalog directly. (2) Several very faint sources were compiled into what is hereafter the “supplemental” catalog, even though they are below the $``$-selected catalog’s flux limit, because they have successful redshift measurements in the companion paper Cohen et al (1999c). The fluxes for the supplemental catalog are all aperture magnitudes and the colors were measured as described below.
The noise in the $`K_s`$ image is much worse along the edges of the mosaic than at the center, which can lead to spurious detections. Sources in the high-noise edges were removed from the $`K_s`$-selected catalog, leaving a total area coverage of $`59.4\mathrm{arcmin}^2`$.
## 4 Calibration with HST imaging
To maintain a flux or magnitude system consistent with previous work in the HDF, the $`U_n`$, $`G`$ and $``$ images are calibrated by comparison with the extremely deep HST images of the HDF. The acquisition, reduction and calibration of the HST images are described in Williams et al (1996). In what follows, the Vega-relative calibrations of the HST images are used.
The absolute calibrations and effective wavelengths for the HST and ground-based filters are used to compute the following transformation equations under the assumption that the sources have roughly power-law spectral energy distributions:
$$U_n=0.53F300W+0.47F450W-0.29\mathrm{mag}$$
(1)
$$G=0.82F450W+0.18F606W-0.07\mathrm{mag}$$
(2)
$$=0.46F606W+0.54F814W-0.02\mathrm{mag}$$
(3)
where $`F300W`$, $`F450W`$, $`F606W`$ and $`F814W`$ are Vega-relative magnitudes in the HST bandpasses of the same name.
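Since the filter weights in each equation sum to unity, these are effective-wavelength interpolations between adjacent HST bandpasses plus a small constant offset. A direct transcription of Eqs. (1)–(3), taking the constant terms as subtracted:

```python
def hst_to_ground(f300w, f450w, f606w, f814w):
    """Approximate Un, G and R ground-based Vega magnitudes from
    Vega-relative HST bandpass magnitudes, per Eqs. (1)-(3).
    The constant offsets are assumed to enter with a negative sign."""
    u_n = 0.53 * f300w + 0.47 * f450w - 0.29
    g = 0.82 * f450w + 0.18 * f606w - 0.07
    r = 0.46 * f606w + 0.54 * f814w - 0.02
    return u_n, g, r
```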
The “Version 2” HST HDF images (Williams et al 1996) are transformed onto the $`U_n`$, $`G`$ and $``$ image coordinate system and all seven images are Gaussian-smoothed to have the same effective seeing. Aperture magnitudes were measured for the $``$-selected sample through matched, 2-arcsec diameter apertures. For calibration, the Vega-relative magnitude zeropoints were used instead of the “AB” zeropoints used by Williams et al (1996). The measured $`U_n`$, $`G`$ and $``$-band magnitudes are given zeropoints such that the comparison with transformed HST magnitudes in Figure 3 shows the best possible agreement. This HST-relative calibration ought to be good to roughly 5 percent.
## 5 Photometry
All catalog sources were photometered two ways: Isophotal magnitudes were measured down to the 2-$`\sigma `$ isophote (where $`\sigma `$ is the pixel-to-pixel root-mean-square fluctuation in the sky brightness). Aperture magnitudes were measured through apertures of diameter 1.7 arcsec for the visual images and 2.0 arcsec for the $`K_s`$-band image. Corrections to account for flux outside the aperture were added to the raw aperture magnitudes. The aperture corrections were measured from bright stars in the field and were found to be $`-0.13`$, $`-0.10`$, $`-0.10`$ and $`-0.12`$ mag for the $`U_n`$, $`G`$, $``$ and $`K_s`$ images respectively. These corrections convert aperture magnitudes to total magnitudes for point sources; no adjustment was made to account for galaxy size or extended structure because, although faint galaxies are not point sources, in these ground-based images there is almost no detectable difference between a faint galaxy and a star at the faintest levels. Each source in the catalogs was assigned a “total magnitude,” which is the brighter of the isophotal and corrected-aperture magnitudes. In practice, this is the isophotal magnitude for $`79.9`$ percent of sources to $`=24.5`$ mag ($`42.5`$ percent to $`=25.5`$ mag), and for $`98.5`$ percent of sources to $`K_s=19`$ mag ($`71.9`$ percent to $`K_s=20`$ mag). It should be noted that under this definition, the total magnitudes are not expected to represent entire source fluxes, because there may be significant flux at large radius and low surface brightness around these sources. Unfortunately it is not possible to accurately measure this low surface-brightness flux on a source-by-source basis.
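The "brighter of the two" rule, with the quoted aperture corrections, can be sketched as follows (the band keys are hypothetical labels; the corrections are taken as negative, since they brighten the raw aperture measurements):

```python
# Aperture corrections (mag) measured from bright stars (Section 5);
# negative values brighten the raw aperture magnitudes.
APERTURE_CORRECTION = {"Un": -0.13, "G": -0.10, "R": -0.10, "Ks": -0.12}

def total_magnitude(band, m_isophotal, m_aperture_raw):
    """The catalog 'total magnitude': the brighter (numerically smaller)
    of the isophotal and corrected-aperture magnitudes."""
    m_aperture = m_aperture_raw + APERTURE_CORRECTION[band]
    return min(m_isophotal, m_aperture)
```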
## 6 Color measurement
To measure unbiased colors, the visual images were smoothed with Gaussians to the same effective seeing as the $`K_s`$-band image. A catalog of over 500 objects common to the visual and $`K_s`$-band images were used to derive the fourth-order polynomial transformation mapping the visual images onto the $`K_s`$-band image and vice versa (with NOAO/IRAF tasks “geomap” and “geotran”). Colors were measured through matched apertures of diameter 2 arcsec. For the $`U_n`$, $`G`$ and $``$-selected catalogs, colors were measured in the smoothed visual image and the $`K_s`$-band image transformed onto the visual coordinates. For the $`K_s`$-selected catalog, colors were measured in the smoothed visual images transformed onto the $`K_s`$-band image coordinates and the $`K_s`$-band image.
Color distributions for the four main catalogs are shown in Figures 4 through 7. There are 1920 sources with $`U_n<25.0`$ mag in the $`U_n`$-selected catalog, 2863 with $`G<26.0`$ mag in the $`G`$-selected, 3607 with $`<25.5`$ mag in the $``$-selected, and 488 with $`K_s<20.0`$ mag in the $`K_s`$-selected. The full $``$-selected, $`K_s`$-selected, and supplemental catalogs are given in Tables 2, 3, and 4. \[For now the tables are available at
(http://www.sns.ias.edu/~hogg/Hogg.Rsel.txt), (http://www.sns.ias.edu/~hogg/Hogg.Ksel.txt), and (http://www.sns.ias.edu/~hogg/Hogg.extras.txt).\]
## 7 Astrometry
Absolute positions were assigned to the $``$-selected sources by comparison with the Williams et al (1996) and Phillips et al (1997) catalogs. In the HST-imaged portion of the field, absolute positions were found by identifying $``$-selected sources with those in the Williams et al catalog. In the flanking field, the $`∼100`$ sources in the Phillips et al catalog were identified with $``$-selected sources. A quadratic transformation was fit to the relation between COSMIC pixel locations and absolute positions for the identified sources. This transformation was used to assign absolute positions to all sources in the $``$-selected catalog. These positions are given in Tables 2, 3, and 4. Comparison with the radio maps of the HDF and flanking fields (Richards et al 1998) shows that the absolute positions have an rms accuracy of roughly 0.4 arcsec (Cohen et al 1999c).
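A quadratic pixel-to-sky transformation of this kind can be fit by linear least squares, because the model is linear in its six coefficients; a minimal sketch (the tangent-plane projection and other real-world astrometric details are omitted):

```python
import numpy as np

def design_matrix(x, y):
    """Quadratic basis in pixel coordinates: 1, x, y, x^2, xy, y^2."""
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_quadratic_transform(x, y, coord):
    """Least-squares quadratic mapping from pixel (x, y) to one sky
    coordinate, fit to matched astrometric reference sources."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(x, y), coord, rcond=None)
    return coeffs

def apply_transform(coeffs, x, y):
    return design_matrix(x, y) @ coeffs
```

One such fit per output coordinate (right ascension and declination) reproduces the procedure of fitting pixel locations to absolute positions.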
## 8 Completeness
It appears from Figures 4 through 7 that the catalogs are complete to roughly $`U_n=25`$, $`G=26`$, $`=25.5`$ and $`K_s=20`$ mag. No completeness simulations have been performed because the primary purpose of this study is to construct catalogs for spectroscopy, not to measure ultra-deep number counts. For the latter study, better data exist and have been analyzed. With typical colors, objects with $`>24`$ mag and $`K_s>20`$ mag cannot routinely, or with good completeness, be measured spectroscopically with the Keck Telescope, so this catalog is appropriate for selection of a complete spectroscopic sample.
## 9 Discussion
The results of this survey are entirely contained in Figures 4 through 7. However, they can be compared with the results of other authors. When divided by the solid angle of the $`8.6\times 8.7\mathrm{arcmin}^2`$ field, the integrated number of sources is $`1.3\times 10^5\mathrm{deg}^{2}`$ to $`=25.0`$ mag. This is consistent with number counts from similar studies (eg, Hogg et al 1997b). The color distributions are also consistent with the results of previous studies, in mean and scatter (Hogg et al 1997a, 1997b; Pahre et al 1997).
Number–flux relations of the power-law form $`d\mathrm{log}N/dm=Q`$, where $`Q`$ is a constant, can be fit to the $`U_n`$, $`G`$ and $``$-selected catalogs over the 4-magnitude range terminating at the completeness limits given in Section 8. In the $`K_s`$-selected catalog the fit is performed only over $`18<K<20`$ mag because many studies have shown that the slope changes significantly at $`K∼18`$ mag (eg, Gardner et al 1993; Djorgovski et al 1995). The resulting faint-end slopes are $`Q=0.42`$, $`0.33`$, $`0.27`$ and $`0.31`$ for the $`U_n`$, $`G`$, $``$ and $`K_s`$ counts respectively. These slopes are consistent with those found in previous studies (Djorgovski et al 1995; Metcalfe et al 1995; Hogg et al 1997b; Pahre et al 1997).
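These power-law fits are linear regressions of the logarithm of the binned counts on magnitude; a sketch (the data in the test are synthetic, for illustration only):

```python
import numpy as np

def count_slope(mags, counts):
    """Fit the number-flux relation d(log10 N)/dm = Q to binned counts
    by linear least squares; returns the slope Q."""
    Q, _intercept = np.polyfit(mags, np.log10(counts), 1)
    return Q
```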
Although all these observations are consistent with the results of previous observational studies, the bulk of the faint sources are significantly bluer than normal, bright galaxies would be if there were no evolution in galaxy spectra. For example, a non-evolving spiral galaxy would have $`-K_s∼3`$ mag at redshift $`z=0.6`$, and the bluest local galaxies would have $`-K_s∼2.5`$ mag, but in the samples presented here, where the median redshift is roughly 0.6 (Cohen et al 1999a, 1999c), there are many galaxies with $`-K_s<2`$ mag. The appearance of this extremely blue population in faint samples is a consequence of the high star formation rates at intermediate and high redshift relative to those in the present-day Universe, as inferred from metallicity in Lyman-alpha clouds (Pei & Fall 1995), ultraviolet luminosity density (Lilly et al 1996; Connolly et al 1997; Madau et al 1998) and emission line strengths (Hammer et al 1997; Heyl et al 1997; Small et al 1997; Hogg et al 1998). This evolutionary effect, the decrease in star formation rate since redshift unity, is perhaps the most widely and independently confirmed result in the study of field galaxy evolution.
The Hubble Deep Field (HDF) database was planned, taken, reduced and made public by a large team at Space Telescope Science Institute headed by Bob Williams. We thank the referee, Alan Dressler, for timely and helpful criticism. This study is based on observations made at the Palomar Observatory, which is owned and operated by the California Institute of Technology; and with the NASA/ESA Hubble Space Telescope, which is operated by AURA under NASA contract NAS 5-26555. Primary financial support was provided under NSF grant AST 95-29170. Some additional support was provided by Hubble Fellowship grants HF-01093.01-97A and HF-01099.01-97A from STScI, which is operated by AURA under NASA contract NAS 5-26555.
# Monte-Carlo Simulations of Globular Cluster Evolution. I. Method and Test Calculations.
## 1 Introduction
The dynamical evolution of dense star clusters is a problem of fundamental importance in theoretical astrophysics, but many aspects of the problem have remained unresolved in spite of years of numerical work and improved observational data. On the theoretical side, some key unresolved issues include the role played by primordial binaries and their dynamical interactions in the overall cluster dynamics and in the production of exotic sources (Hut *et al.* 1992), and the importance of tidal shocking for the long-term evolution and survival of globular clusters in the Galaxy (Gnedin, Lee & Ostriker 1999). On the observational side, we now have many large data sets providing a wealth of information on blue stragglers, X-ray sources and millisecond pulsars, all found in large numbers in dense clusters (e.g., Bailyn 1995; Camilo *et al.* 2000; Piotto *et al.* 1999). Although it is clear that these objects are produced at high rates through dynamical interactions in the dense cluster cores, the details of the formation mechanisms, and in particular the interplay between binary stellar evolution and dynamical interactions, are far from understood.
### 1.1 Overview of Numerical Methods
Following the pioneering work of Hénon (1971a,b), many numerical simulations of globular cluster evolution were undertaken in the early 1970’s, by two groups, at Princeton and Cornell, using different Monte-Carlo methods, now known as the “Princeton method” and the “Cornell method” (see Spitzer 1987 for an overview of the methods). In the Princeton method, the orbit of each star is integrated numerically, while the diffusion coefficients for the change in velocity $`\mathrm{\Delta }𝐯`$ and $`(\mathrm{\Delta }v)^2`$ (which are calculated analytically) are selected to represent the average perturbation over an entire orbit. Energy conservation is enforced by requiring that the total energy be conserved in each radial region of the cluster. The Princeton method assumes an isotropic, Maxwellian velocity distribution of stars to compute the diffusion coefficients, and hence does not take into account the anisotropy in the orbits of the field stars. One advantage of this method is that, since it follows the evolution of the cluster on a dynamical timescale, it is possible to follow the initial “violent relaxation” phase more easily. Unfortunately, for the same reason, it also requires considerably more computing time compared to other versions of the Monte-Carlo method. In the Cornell method, also known as the “Orbit-averaged Monte-Carlo method”, the changes in energy $`E`$ and angular momentum $`J`$ per unit time (averaged over an orbit) are computed analytically for each star. Hence, the time-consuming dynamical integration of the orbits is not required. In addition, since the diffusion coefficients are computed for both $`\mathrm{\Delta }E`$ *and* $`\mathrm{\Delta }J`$, the Cornell method does take into account the anisotropy in the orbits of the stars. The “Hénon method” is a variation of the Cornell method, in which the velocity perturbations are computed by considering an encounter between pairs of neighboring stars.
This also allows the local 2-D phase space distribution $`f(E,J)`$ to be sampled correctly. Our code is based on a modified version of Hénon’s method. We have modified Hénon’s algorithm for determining the timestep and computing the representative encounter between neighboring stars. Our method allows the timestep to be made much smaller in order to resolve the dynamics in the core more accurately. We describe the basic method and our modifications in more detail below in §2.
The Monte-Carlo methods were first used to study the development of the gravothermal instability (Spitzer & Hart 1971a,b; Hénon 1971a,b) and to explore the effects of a massive black hole at the center of a globular cluster (Lightman & Shapiro 1977). In those early studies, the available computational resources limited the number of particles used in the Monte-Carlo simulations to $`<10^3`$. Since this is much smaller than the real number of stars in a globular cluster ($`N∼10^5`$–$`10^6`$), each particle in the simulation represents effectively a whole spherical shell containing many stars, and the method provides no information about individual objects and their dynamical interactions. More recent implementations have used up to $`10^4`$–$`10^5`$ particles and have established the method as a promising alternative to direct $`N`$-body integrations (Stodólkiewicz 1986; Giersz 1998). Monte-Carlo simulations have also been used to study specific interaction processes in globular clusters, such as tidal capture (Di Stefano & Rappaport 1994), interactions involving primordial binaries (Hut, McMillan, & Romani 1992) and stellar evolution (Portegies Zwart *et al.* 1997). However, in all these studies the background cluster was assumed to have a fixed structure, which is clearly not realistic. The main goal of our study is to perform Monte-Carlo simulations of cluster dynamics treating both the cluster itself and all relevant interactions self-consistently, including all dynamical interactions involving primordial binaries. This idea is particularly timely because the latest generation of parallel supercomputers now makes it possible to do such simulations for a number of objects equal to the actual number of stars in a globular cluster. Using the correct number of stars in a cluster simulation ensures that the relative rates of different dynamical processes (which all scale differently with the number of stars) are correct.
This is crucial if many different dynamical processes are to be incorporated, as we plan to do in this study.
In addition to Monte-Carlo and $`N`$-body simulations, a new method was developed, mainly by Cohn and collaborators, based on the direct numerical integration of the orbit-averaged Fokker-Planck equation (Cohn 1979, 1980; Statler, Ostriker & Cohn 1987; Murphy & Cohn 1988). Unlike the Monte-Carlo methods, the direct Fokker-Planck method constructs the (smooth) distribution function of the system on a grid in phase space, effectively providing the $`N\mathrm{}`$ limit of the dynamical behavior. The original formulation of the method used a 2-D phase space distribution function $`f(E,J)`$ (Cohn 1979). However, the method was later reduced to a 1-D form using an isotropized distribution function $`f(E)`$ (Cohn 1980). The reduction of the method to one dimension speeded up the calculations significantly. In addition, the use of the Chang & Cooper (1970) differencing scheme provided much better energy conservation compared to the original 2-D method. The 1-D method provided very good results for isolated clusters, in which the effects of velocity anisotropy are small. The theoretically predicted emergence of a power-law density profile in the late stages of evolution for isolated single-component systems has been clearly verified using this method (Cohn 1980). Calculations that include the effects of binary interactions, including primordial binaries, have also allowed the evolution to be followed beyond core collapse (Gao *et al.* 1991). However, results obtained using the 1-D method showed substantial disagreement with $`N`$-body results for tidally truncated clusters, in which the evaporation rate is dramatically affected by the velocity anisotropy. Ignoring the velocity anisotropy led to a significant overestimate of the evaporation rate from the cluster, resulting in shorter core-collapse times for tidally truncated clusters (Portegies Zwart *et al.* 1998). 
A recent implementation of the Fokker-Planck method by Drukier *et al.* (1999) has extended the algorithm to allow a 2-D distribution function, while also improving the energy conservation. A similar 2-D method has also been developed by Takahashi (1995, 1996, 1997). The new implementations produce much better agreement with $`N`$-body results (Takahashi & Portegies Zwart 1998), and can also model the effects of mass loss due to stellar evolution (Takahashi & Portegies Zwart 1999), as well as binary interactions (Drukier *et al.* 1999).
For many years direct $`N`$-body simulations were limited to systems with $`N<10^3`$ stars. New, special-purpose computing hardware such as the GRAPE (Makino *et al.* 1997) now makes it possible to perform direct $`N`$-body simulations with up to $`N∼10^5`$ single stars (Hut & Makino 1999), but the inclusion of a significant fraction of primordial binaries in these simulations remains prohibitively expensive. The large dynamic range of the orbital timescales of the stars in the cluster presents a serious difficulty for $`N`$-body simulations. The orbital timescales can be as small as the periods of the tightest binaries. The direct integration of stellar orbits is especially plagued by this effect. These difficulties are overcome using techniques such as individual integration timesteps, and various schemes for regularizing binaries (see, e.g., Aarseth 1998 for a review). These short-cuts introduce specific selection effects, and complicate code development considerably. Instead, in the Monte-Carlo methods, individual stellar orbits are represented by their constants of the motion (energy $`E`$ and angular momentum $`J`$ for a spherical system) and perturbations to these orbits are computed periodically on a timestep that is a fraction of the relaxation time. Thus the numerical integration proceeds on the natural timescale for the overall dynamical evolution of the cluster. Note also that, because of exponentially growing errors in the direct integration of orbits, $`N`$-body simulations, just like Monte-Carlo simulations, can only provide a statistically correct representation of cluster dynamics (Goodman *et al.* 1993; Hernquist, Hut, & Makino 1993).
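The advantage of orbit-averaging rests on the separation between the relaxation and dynamical timescales, which scales roughly as $`N/\mathrm{ln}N`$ for an $`N`$-star cluster; a one-line sketch (order-unity prefactors omitted):

```python
import math

def relaxation_to_dynamical(n):
    """Order-of-magnitude ratio t_relax / t_dyn ~ N / ln N
    (prefactors of order unity omitted)."""
    return n / math.log(n)
```

For $`N∼10^5`$ stars the ratio is several thousand, so a timestep tied to the relaxation time skips over thousands of orbital times per star.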
A great advantage of the Monte-Carlo method is that it makes it particularly easy to add more complexity and realism to the simulations one layer at a time. The most important processes that we will focus on initially will be stellar evolution and mass loss through a tidal boundary. Interactions of single stars with primordial binaries, binary-binary interactions, stellar evolution in binaries, and a detailed treatment of the influence of the Galaxy, including tidal shocking of the cluster when it passes through the galactic disk, will be incorporated subsequently.
Recent improvements in algorithms and available computational resources have allowed meaningful comparisons between the results obtained using different numerical methods (see for example the “Collaborative Experiment” by Heggie *et al.* 1999). However, there still remain substantial unresolved differences between the results obtained using various methods. For example, the lifetimes of clusters computed recently using different methods have been found to vary significantly. Lifetimes of some clusters computed using direct Fokker-Planck simulations by Chernoff & Weinberg (1990) are up to an order of magnitude shorter than those computed using $`N`$-body simulations and a more recent version of the Fokker-Planck method (Takahashi & Portegies Zwart 1998). It has been found that, in many cases, the differences between the two methods can be attributed to the lack of an appropriate discrete representation of the cluster in the Fokker-Planck simulations. This can lead to an over-estimate of the mass-loss rate from the cluster, causing it to disrupt sooner. Recently, new calibrations of the mass loss in the Fokker-Planck method (Takahashi & Portegies Zwart 1999) that account for the slower mass loss in discrete systems have led to better agreement between the methods. The limitation of $`N`$-body simulations to small $`N`$ (especially for clusters containing a large fraction of primordial binaries) makes it particularly difficult to compare the results with Fokker-Planck calculations, which are effectively done for very large $`N`$ (Portegies Zwart *et al.* 1998, Heggie *et al.* 1999). This gap can be filled very naturally with Monte-Carlo simulations, which can be used to cover the entire range of $`N`$’s not accessible by other methods.
### 1.2 Astrophysical Motivation
The realization over the last 10 years that primordial binaries are present in globular clusters in dynamically significant numbers has completely changed our theoretical perspective on these systems (see. e.g., the review by Hut *et al.* 1992). Most importantly, dynamical interactions between hard primordial binaries and other single stars or binaries are now thought to be the primary mechanism for supporting a globular cluster against core collapse (McMillan, Hut, & Makino 1990, 1991; Gao *et al.* 1991). In addition, exchange interactions between primordial binaries and compact objects can explain very naturally the formation of large numbers of X-ray binaries and recycled pulsars in globular cluster cores (Sigurdsson & Phinney 1995; Davies & Hansen 1998; Portegies Zwart *et al.* 1997). Previously, it was thought that primordial binaries were essentially nonexistent in globular clusters, and so other mechanisms such as tidal capture and three-body encounters had to be invoked in order to form binaries dynamically during core collapse. However, these other mechanisms have some serious problems, and are much more likely to result in mergers than in the formation of long-lived binaries (Chernoff 1996; Kochanek 1992; Kumar & Goodman 1996).
Hubble Space Telescope (HST) observations have provided direct constraints on primordial binary fractions in clusters. The binary fraction is a key input parameter for any realistic study of cluster dynamics. For example, the recent observation of a broadened main sequence in NGC 6752, based on HST PC images of its core, suggest that the binary fraction is probably in the range 15%–38% in the inner core (Rubenstein & Bailyn 1997).
Despite the fact that binaries play a crucial role in the late phases of evolution of a cluster, the overall evolution of a binary population within a cluster, and its direct implications for the formation rate of observable binaries and blue stragglers, remain poorly understood. In addition, the relative importance of binaries in a cluster, like many other physical processes, may depend on the actual size ($`N`$) of the cluster. This makes it difficult to extend results obtained from smaller $`N`$-body simulations to realistic globular cluster models. When the initial primordial binary fraction is below a certain critical value, a globular cluster core can run out of binaries before the end of its lifetime, i.e., before being evaporated in the tidal field of the Galaxy (McMillan & Hut 1994). Without the support of binaries, the cluster will undergo a much deeper core collapse and so-called gravothermal oscillations (Sugimoto & Bettwieser 1983; Breeden *et al.* 1994; Makino 1996). At maximum contraction, the core density may increase by many orders of magnitude, leading to greatly enhanced interaction rates. Our new Monte-Carlo code will allow us to follow the evolution of a cluster through this phase, including in detail the dynamical interactions between the $`∼10^3`$ objects in the core.
Of particular interest is the possibility that successive collisions and mergers of MS stars might lead to a runaway process. The recent HST observations of stellar cusps in the cores of M15 (Guhathakurta *et al.* 1996, Sosin & King 1997) and NGC 6624 (Sosin & King 1995) have generated renewed interest in the possibility of massive black holes in globular clusters. The most significant unresolved theoretical issue concerns the manner in which such a black hole could form in a dense cluster. One of the likely routes, which we plan to examine with our simulations, is via the collisions and mergers of main-sequence stars, leading to the runaway build-up of a massive object and its eventual gravitational collapse (Portegies Zwart *et al.* 1999).
A very significant effect of the galactic environment on a cluster is the gravitational shock heating of the cluster due to passages close to the bulge and through the disk. When a cluster passes through the Galactic disk, it experiences a time-varying gravitational force that pulls the cluster toward the equatorial plane. The net effect of the shock is to induce an increase in the average energy of the stars, causing the binding energy of the cluster to decrease, and the rate of escape of stars through evaporation to increase (Chernoff, Kochanek, & Shapiro 1986). In addition, in some cases, “shock-induced relaxation” can be almost as important as two-body relaxation in the overall evolution of the cluster (Gnedin, Lee & Ostriker 1999; Gnedin & Ostriker 1997). Both the energy shift and the relaxation induced by tidal shocking can be incorporated in our Monte-Carlo method by assuming an orbit for the cluster around the Galactic center and introducing an appropriate perturbation to the energy of the stars each time the cluster passes through the disk. This can be done without adding much computational overhead to the problem, since tidal shocking only occurs twice during the orbital period of the cluster. The ability of the Monte-Carlo method to model such effects simultaneously with a realistic treatment of the internal dynamical evolution of the cluster makes it a very useful tool in verifying and extending previous results obtained using other methods.
The star-by-star representation of the system in Monte-Carlo simulations makes it easy to study the evolution of a particular population of stars within a cluster. For example, the evolution of a population of neutron stars could be followed closely, to help predict their properties and expected distributions within clusters. Of particular interest are M15 and 47 Tuc, which have both been the targets of several highly successful searches for pulsars (Anderson 1992; Robinson *et al.* 1995; Camilo *et al.* 2000). The observed properties of pulsars in these clusters are found to be very different. The pulsars in 47 Tuc are all millisecond pulsars, and most are in short-period binaries, while those in M15 are mostly single recycled pulsars with longer pulse periods. This suggests that these two clusters may provide very different dynamical environments for the formation of recycled pulsars.
## 2 The Monte-Carlo Method
### 2.1 Overview
Our basic algorithm for doing stellar dynamics is based on the “orbit-averaged Monte-Carlo method” developed by Hénon (1971a,b). The method was later used and improved by Stodólkiewicz (1982, 1985, 1986). It has also recently been used by Spurzem & Giersz (1996) to follow the evolution of hard three-body binaries in a cluster with equal point-mass stars. New results using Stodólkiewicz’s version of the method were also presented recently by Giersz (1998). In earlier implementations of the Monte-Carlo method with $`N\sim 10^3`$, each particle in the simulation was a “superstar,” representing many individual stars with similar orbital properties. In our implementation, with $`N\sim 10^5`$–$`10^6`$, we treat each particle in the simulation as a single star. We have also modified Hénon’s original algorithm to allow the timestep to be made much smaller in order to resolve the dynamics in the core more accurately.
In the simplest case of a spherical system containing $`N`$ point masses the algorithm can be summarized as follows. We begin by assigning to each star a mass, radius and velocity by sampling from a spherical and isotropic distribution function (for example, the Plummer model). Once the positions and masses of all stars are known, the gravitational potential of the cluster is computed assuming spherical symmetry. The energy and angular momentum of each star are then calculated. Energy and angular momentum are perturbed at each timestep to simulate the effects of two-body and three-body relaxation. The perturbations depend on each star’s position and velocity, and on the density of stars in its neighborhood. The timestep should be a fraction of the relaxation time for the cluster (which is larger than the dynamical time by a factor $`N/\mathrm{ln}N`$). The perturbation of the energy and angular momentum of a star at each timestep therefore represents the cumulative effect of many small (and distant) encounters with other stars. Under the assumption of spherical symmetry, the cross-sections for these perturbations can be computed analytically. The local number density is computed using a sampling procedure. Once a new energy and angular momentum is assigned to each star, a new realization of the system is generated by assigning to each star a new position and velocity in an orbit that is consistent with its new energy and angular momentum. In selecting a new position for each star along its orbit, each position is weighted by the amount of time the star spends around that position. Using the new positions, the gravitational potential is then recomputed for the entire cluster. This procedure is then repeated over many timesteps. After every timestep, all stars with positive total energy (cf. §2.7) are removed from the computation since they are no longer bound to the cluster and are hence considered lost from the cluster instantly on the relaxation timescale. 
The method allows stars to have arbitrary masses and makes it very easy to allow for a stellar mass spectrum in the calculations.
We now describe our implementation of the Monte-Carlo method in detail. For completeness, we also include some of the basic equations of the method. For derivations of these equations, and a more detailed discussion of the basic method, see Hénon (1971b), Stodólkiewicz (1982), and Spitzer (1987).
### 2.2 Initial Conditions
The initial model is assumed to be in dynamical equilibrium, so that the potential does not change on the crossing timescale. This is important since the Monte-Carlo method uses a timestep which is of the order of the relaxation time, and hence cannot handle the initial phase of “violent relaxation” during which the potential changes on the dynamical timescale. Under the assumption of spherical symmetry, the distribution function for such an equilibrium system can be written in the form $`f=\mathrm{\Psi }(E,J)`$, where $`E`$ and $`J`$ are the energy per unit mass, and angular momentum per unit mass,
$`E`$ $`=`$ $`\mathrm{\Phi }(r)+{\displaystyle \frac{1}{2}}(v_r^2+v_t^2),`$ (1)
$`J`$ $`=`$ $`rv_t.`$ (2)
Here $`r`$ is the distance from the cluster center, $`v_r`$ is the radial velocity, $`v_t`$ is the transverse velocity, and $`\mathrm{\Phi }(r)`$ is the gravitational potential. In principle, the initial distribution function $`\mathrm{\Psi }(E,J)`$ can be arbitrary. However, in practice, computing a self-consistent potential for an arbitrary distribution function can be quite difficult. Since the method requires the initial potential $`\mathrm{\Phi }(r)`$ to be known, a simple initial model is usually selected so as to allow the potential to be computed quasi-analytically. Common examples are the sequence of King models and the Plummer model.
Once the number of stars $`N`$ is selected, the initial condition is constructed by assigning to each star values for $`r`$, $`v_r`$, $`v_t`$, and $`m`$, consistent with the selected model. Once the positions and masses of all the stars are known, the gravitational potential $`\mathrm{\Phi }`$ is computed as a function of distance from the center. The energy per unit mass $`E`$, and angular momentum per unit mass $`J`$ of each star are then computed using equations (1) and (2).
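This construction can be illustrated with a minimal Python sketch (ours, not the production code) that samples one star from an isotropic Plummer model using the standard inverse-transform and rejection recipe; the function name and the 0.1 rejection envelope are our choices:

```python
import math
import random

def plummer_star(a=1.0, gm=1.0):
    """Sample (r, v_r, v_t) for one star of an isotropic Plummer model
    with scale length a and total G*M = gm."""
    # Radius: invert the cumulative mass fraction M(<r)/M = [1 + (a/r)^2]^(-3/2).
    x = random.random()
    x = min(max(x, 1e-12), 1.0 - 1e-12)   # avoid the measure-zero endpoints
    r = a / math.sqrt(x**(-2.0 / 3.0) - 1.0)
    # Speed: fraction q = v/v_esc drawn from g(q) = q^2 (1 - q^2)^(7/2) by
    # rejection; 0.1 is a valid envelope since max g is about 0.092.
    while True:
        q = random.random()
        if 0.1 * random.random() < q * q * (1.0 - q * q)**3.5:
            break
    v = q * math.sqrt(2.0 * gm / math.sqrt(r * r + a * a))
    # Isotropic velocity direction, split into radial and transverse parts.
    costh = 2.0 * random.random() - 1.0
    return r, v * costh, v * math.sqrt(1.0 - costh * costh)

stars = [plummer_star() for _ in range(1000)]
```

Every star drawn this way is bound, since its speed is strictly below the local escape speed.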
### 2.3 The Gravitational Potential
We compute the mean potential of the cluster by summing the potential due to each star, under the assumption of spherical symmetry. We use only the radial position $`r`$ of each star (since we assume spherical symmetry, we can neglect the angular positions of the stars, to a very good approximation). We begin by sorting all the stars by increasing radius. Then the potential at a point $`r`$, which lies between two stars at positions $`r_k`$ and $`r_{k+1}`$, is given by
$`\mathrm{\Phi }(r)`$ $`=`$ $`G\left({\displaystyle \frac{1}{r}}{\displaystyle \underset{i=1}{\overset{k}{}}}m_i+{\displaystyle \underset{i=k+1}{\overset{N}{}}}{\displaystyle \frac{m_i}{r_i}}\right).`$ (3)
For any two neighboring stars at distances $`r_k`$ and $`r_{k+1}`$, the mass contained within the radius $`r`$ remains constant for $`r_k<r<r_{k+1}`$. Hence, we can compute the potential at $`r`$, if the potentials $`\mathrm{\Phi }_k=\mathrm{\Phi }(r_k)`$ and $`\mathrm{\Phi }_{k+1}=\mathrm{\Phi }(r_{k+1})`$ are known, as
$`\mathrm{\Phi }(r)`$ $`=`$ $`\mathrm{\Phi }_k+\left({\displaystyle \frac{1/r_k1/r}{1/r_k1/r_{k+1}}}\right)\left(\mathrm{\Phi }_{k+1}\mathrm{\Phi }_k\right).`$ (4)
At each timestep, we store pre-computed values of $`\mathrm{\Phi }_k=\mathrm{\Phi }(r_k)`$, for each star $`k`$ in the cluster. The potential at an arbitrary point $`r`$ can then be quickly computed simply by finding the index $`k`$ such that $`r_krr_{k+1}`$ and then using equation (4).
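The two equations above translate directly into code. The following is a minimal Python sketch (ours, not the production code) of building the $`\mathrm{\Phi }_k`$ table and interpolating; note that the $`1/r`$ interpolation of equation (4) is exact between stars:

```python
import bisect

G = 1.0

def potential_table(r, m):
    """Phi_k = Phi(r_k) from eq. (3), for radii r sorted in increasing order."""
    n = len(r)
    m_in = []                      # cumulative mass interior to (and including) r_k
    tot = 0.0
    for mk in m:
        tot += mk
        m_in.append(tot)
    s_out = [0.0] * (n + 1)        # s_out[k] = sum over i >= k of m_i / r_i
    for k in range(n - 1, -1, -1):
        s_out[k] = s_out[k + 1] + m[k] / r[k]
    return [-G * (m_in[k] / r[k] + s_out[k + 1]) for k in range(n)]

def potential(rq, r, phi):
    """Interpolate Phi(rq) using eq. (4)."""
    k = bisect.bisect_right(r, rq) - 1
    if k < 0:                      # inside the innermost star: no interior mass
        return phi[0]
    if k >= len(r) - 1:            # outside all stars: Phi = -G M_tot / rq
        return phi[-1] * r[-1] / rq
    frac = (1.0 / r[k] - 1.0 / rq) / (1.0 / r[k] - 1.0 / r[k + 1])
    return phi[k] + frac * (phi[k + 1] - phi[k])
```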
We now describe the process of evolving the system through one complete timestep.
### 2.4 Two-Body Relaxation and Timestep Selection
We simulate the effect of interactions during each timestep $`\mathrm{\Delta }t`$ by perturbing the energy and angular momentum of each star in the cluster. The perturbations $`\mathrm{\Delta }E`$ and $`\mathrm{\Delta }J`$ for a star are determined by computing a single *effective* encounter between the star and its nearest neighbor (in terms of distance from the center, since we assume spherical symmetry). During such an encounter, the two stars exchange kinetic energy, but the total energy is conserved. In the center of mass frame of the two interacting stars, the magnitude of the velocity does not change; instead the velocity is deflected through an angle $`\beta `$.
In the original method described by Hénon (1971b), the timestep used was a small fraction of the relaxation time for *the entire cluster*. Although the timestep computed in this way is suitable for the outer regions of the cluster, it is too large to provide an accurate representation of the relaxation in the core, especially in the later stages of cluster evolution where the relaxation time in the core can be many orders of magnitude smaller than in the outer regions. This caused the inner regions of the cluster to be under-relaxed. The limited computational resources available at that time did not permit the timestep to be made much smaller, without slowing down the computation to a crawl. The greatly increased computational power available today allows us to use a timestep that is small enough to resolve the relaxation process in the core, even for systems with $`N>10^5`$.
To provide an accurate description of the overall relaxation of the cluster, each effective encounter should give the correct mean value of the change in energy at each position. We achieve this by selecting the effective deflection angle $`\beta _e`$ for the encounter (in the center of mass frame of the two interacting stars) as follows. If the masses of the two stars are $`m_1`$ and $`m_2`$, and their velocities $`v_1`$ and $`v_2`$, respectively, then the kinetic energy changes can be written as
$`\mathrm{\Delta }KE_1`$ $`=`$ $`m_1v_1\mathrm{\Delta }v_1+{\displaystyle \frac{1}{2}}m_1(\mathrm{\Delta }v_1)^2,`$ (5)
$`\mathrm{\Delta }KE_2`$ $`=`$ $`m_2v_2\mathrm{\Delta }v_2+{\displaystyle \frac{1}{2}}m_2(\mathrm{\Delta }v_2)^2,`$ (6)
where $`\mathrm{\Delta }v_1`$ and $`\mathrm{\Delta }v_2`$ are the changes in the velocities during the encounter. Since the total kinetic energy in each encounter is conserved, the mean value of the first terms on the RHS of equations (5) and (6) must equal the mean value of the second terms (with the opposite sign). This indicates that in order to get a good representation of the energy exchange between stars in the relaxation process, we must consider the mean value of $`m_1(\mathrm{\Delta }v_1)^2`$ during each timestep.
The change in velocity $`\mathrm{\Delta }v_1`$ during an encounter with a deflection angle $`\beta `$, can be calculated from elementary mechanics as (see, e.g., Spitzer 1987, eq. \[2-6\]),
$`(\mathrm{\Delta }v_1)^2=4{\displaystyle \frac{m_2^2}{(m_1+m_2)^2}}w^2\mathrm{sin}^2(\beta /2),`$ (7)
where $`w`$ is the relative speed of the two stars before the encounter. The mean overall *rate* of change in the velocity $`<(\mathrm{\Delta }v_1)^2>`$ due to many distant (weak) encounters of the star with other cluster stars can then be calculated by averaging over the impact parameter (cf. Spitzer 1987, eq. \[2-8\]). Using this, the mean change in the velocity in the time $`\mathrm{\Delta }t`$ is given by
$`<(\mathrm{\Delta }v_1)^2>=8\pi G^2\nu \mathrm{\Delta }t<m_2^2w^{1}>\mathrm{ln}\mathrm{\Lambda },`$ (8)
where $`\mathrm{ln}\mathrm{\Lambda }=\mathrm{ln}(\gamma N)`$ is the Coulomb logarithm ($`\gamma `$ is a constant $`\simeq 0.1`$; see §3.1), and $`\nu `$ is the local number density of stars. We obtain the correct mean value of $`m_1(\mathrm{\Delta }v_1)^2`$ by equating the RHS of equations (7) and (8), giving
$`<4{\displaystyle \frac{m_1m_2^2}{(m_1+m_2)^2}}w^2\mathrm{sin}^2(\beta /2)>=8\pi G^2\nu \mathrm{\Delta }t<m_1m_2^2w^{1}>\mathrm{ln}(\gamma N).`$ (9)
Equation (9) relates the timestep $`\mathrm{\Delta }t`$ to the deflection angle $`\beta `$ for the encounter. Thus, in order to get the correct mean value of $`m_1(\mathrm{\Delta }v_1)^2`$ for the star during the time $`\mathrm{\Delta }t`$, we can define the *effective* deflection angle $`\beta _e`$ for the representative encounter, as
$`\mathrm{sin}^2(\beta _e/2)=2\pi G^2{\displaystyle \frac{(m_1+m_2)^2}{w^3}}\nu \mathrm{\Delta }t\mathrm{ln}(\gamma N).`$ (10)
In addition to using the correct mean value of $`m_1(\mathrm{\Delta }v_1)^2`$, we can also require that its variance be correct. To compute the variance, we must calculate the mean value of $`(\mathrm{\Delta }v_1)^4`$. Using equation (7), we have
$`(\mathrm{\Delta }v_1)^4=16{\displaystyle \frac{m_2^4}{(m_1+m_2)^4}}w^4\mathrm{sin}^4(\beta /2).`$ (11)
We then use Spitzer’s equation (2-5), and again integrate over the impact parameter to get the mean value of $`(\mathrm{\Delta }v_1)^4`$ in the time $`\mathrm{\Delta }t`$,
$`<(\mathrm{\Delta }v_1)^4>=16\pi G^2{\displaystyle \frac{m_2^4}{(m_1+m_2)^2}}w\nu \mathrm{\Delta }t.`$ (12)
Comparing equations (11) and (12), we see that, in order to have the correct variance of $`m_1(\mathrm{\Delta }v_1)^2`$, we should have
$`\mathrm{sin}^4(\beta _e/2)=\pi G^2{\displaystyle \frac{(m_1+m_2)^2}{w^3}}\nu \mathrm{\Delta }t.`$ (13)
Consistency between equations (10) and (13) gives the relation between the number of stars in the system, and the effective deflection angle that must be used,
$`\mathrm{sin}^2(\beta _e/2)={\displaystyle \frac{1}{2\mathrm{ln}(\gamma N)}}.`$ (14)
This relation indicates that for large $`N`$, the effective deflection angle must be small, while as $`N`$ decreases, close encounters become more important. If the timestep is too large, then $`<\mathrm{sin}^2(\beta /2)>`$ is also too large, and the system is under-relaxed. Hence the timestep used should be sufficiently small so as to get a good representation of the relaxation process in the cluster. In addition, the local relaxation time varies greatly with distance from the cluster center. In practice we use the shortest relaxation time in the core to compute the timestep. We first evaluate the local density $`\rho _c`$ in the core and the approximate core radius $`r_c=(3v_c^2/4\pi G\rho _c)^{1/2}`$. We then compute the timestep $`\mathrm{\Delta }t`$ using equation (10) and requiring that the average value of $`\mathrm{sin}^2(\beta _e/2)`$ for the stars within the core radius $`r_c`$ be sufficiently small. The value of $`\mathrm{sin}^2(\beta _e/2)`$ given by equation (14) varies only slightly between $`0.046`$ and $`0.072`$ for $`N`$ between $`10^4`$ and $`5\times 10^5`$ (assuming $`\gamma \simeq 0.1`$). Hence for all our simulations, we require that $`\mathrm{sin}^2(\beta _e/2)<0.05`$.
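The quoted range follows directly from equation (14); a one-line evaluation (assuming $`\gamma =0.1`$):

```python
import math

def sin2_beta_e(n, gamma=0.1):
    """Eq. (14): sin^2(beta_e/2) = 1 / (2 ln(gamma N))."""
    return 1.0 / (2.0 * math.log(gamma * n))

print(sin2_beta_e(1.0e4), sin2_beta_e(5.0e5))   # ~0.072 and ~0.046
```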
Equation (10) is then used to compute the effective deflection angle for all stars in the cluster. The local number density $`\nu `$ is computed by averaging over the nearest $`p`$ stars. We find that using a value of $`p`$ between 20 and 50 gives the best results for $`N\sim 10^5`$; the difference in the core-collapse times obtained for various test models with $`p`$ in this range is less than 1%. Of course, $`p`$ should not be too large, so as to maintain a truly local estimate of the number density. We use $`p=40`$ in all our calculations, which gives consistently good agreement with published results.
### 2.5 Computing the Perturbations $`\mathrm{\Delta }E`$ and $`\mathrm{\Delta }J`$ during an Encounter
To compute the velocity perturbation during each timestep, a single representative encounter is computed for each star, with its nearest neighbor in radius. Selecting the nearest neighbor ensures that the correct local velocity distribution is sampled, and also accounts for any anisotropy in the orbits. Due to spherical symmetry, selecting the nearest neighbor in radius is equivalent to selecting the nearest neighbor in 3-D, since only the velocity (and not the position) of the nearest neighbor is used in the encounter. Following Hénon’s notation, we let ($`r`$, $`v_r`$, $`v_t`$) and ($`r^{}`$, $`v_r^{}`$, $`v_t^{}`$) represent the phase space coordinates of the two interacting stars, with masses $`m`$ and $`m^{}`$, respectively. In addition to these parameters, the angle $`\psi `$ of the plane of relative motion defined by ($`𝐫^{}𝐫`$, $`𝐯^{}𝐯`$) with some reference plane is selected randomly between 0 and $`2\pi `$, since the distribution of field stars is assumed to be spherically symmetric.
We take our frame of reference such that the $`z`$ axis is parallel to $`𝐫`$, and the $`(x,z)`$ plane contains $`𝐯`$. Then the velocities of the two stars are given by
$`𝐯=(v_t,0,v_r),𝐯^{}=(v_t^{}\mathrm{cos}\varphi ,v_t^{}\mathrm{sin}\varphi ,v_r^{}),`$ (15)
where $`\varphi `$ is also randomly selected between 0 and $`2\pi `$, since the transverse velocities are isotropic because of spherical symmetry. The relative velocity $`𝐰=(w_x,w_y,w_z)`$ is then
$`𝐰=(v_t^{}\mathrm{cos}\varphi v_t,v_t^{}\mathrm{sin}\varphi ,v_r^{}v_r).`$ (16)
We now define two vectors $`𝐰_1`$ and $`𝐰_2`$ with the same magnitude as $`𝐰`$, such that $`𝐰_1`$, $`𝐰_2`$, and $`𝐰`$ are mutually orthogonal. The vectors $`𝐰_1`$ and $`𝐰_2`$ are given by
$`𝐰_1`$ $`=`$ $`(w_yw/w_p,w_xw/w_p,0),`$ (17)
$`𝐰_2`$ $`=`$ $`(w_xw_z/w_p,w_yw_z/w_p,w_p),`$ (18)
where $`w_p=(w_x^2+w_y^2)^{1/2}`$. The angle $`\psi `$ is measured from the plane containing the vectors $`𝐰`$ and $`𝐰_\mathrm{𝟏}`$. The relative velocity of the two stars after the encounter is given by
$`𝐰^{}=𝐰\mathrm{cos}\beta +𝐰_1\mathrm{sin}\beta \mathrm{cos}\psi +𝐰_2\mathrm{sin}\beta \mathrm{sin}\psi ,`$ (19)
where $`\beta `$ is the deflection angle computed in §2.4. The new velocities of the two stars after the interaction are then given by
$`𝐯^{}`$ $`=`$ $`𝐯{\displaystyle \frac{m^{}}{m+m^{}}}(𝐰^{}𝐰),`$ (20)
$`𝐯^{}`$ $`=`$ $`𝐯^{}+{\displaystyle \frac{m}{m+m^{}}}(𝐰^{}𝐰).`$ (21)
The new radial and transverse velocities for the first star are given by $`v_r^{}=v_z^{}`$ and $`v_t^{}=({v_x^{}}^2+{v_y^{}}^2)^{1/2}`$, from which we compute the new orbital energy and angular momentum as $`E^{}=\mathrm{\Phi }(r)+\frac{1}{2}({v_r^{}}^2+{v_t^{}}^2)`$ and $`J^{}=rv_t^{}`$. Similar quantities $`E^{\prime \prime }`$ and $`J^{\prime \prime }`$ are also computed for the second star.
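The encounter of equations (15)–(21) conserves the total momentum and kinetic energy of the pair exactly, since only the direction of the relative velocity changes. A minimal Python sketch (ours; the function name and argument order are not from the production code):

```python
import math
import random

def encounter(m, vr, vt, mp, vrp, vtp, beta):
    """One effective encounter (eqs. 15-21): deflect the relative velocity
    by angle beta and return the new (v_r, v_t) of star 1 and star 2."""
    phi = random.uniform(0.0, 2.0 * math.pi)
    psi = random.uniform(0.0, 2.0 * math.pi)
    v1 = (vt, 0.0, vr)                                    # eq. (15)
    v2 = (vtp * math.cos(phi), vtp * math.sin(phi), vrp)
    w = tuple(b - a for a, b in zip(v1, v2))              # eq. (16)
    wmag = math.sqrt(sum(c * c for c in w))
    wp = math.hypot(w[0], w[1])                           # assumes wp != 0
    w1 = (w[1] * wmag / wp, -w[0] * wmag / wp, 0.0)       # eq. (17)
    w2 = (w[0] * w[2] / wp, w[1] * w[2] / wp, -wp)        # eq. (18)
    wn = tuple(w[i] * math.cos(beta)                      # eq. (19)
               + w1[i] * math.sin(beta) * math.cos(psi)
               + w2[i] * math.sin(beta) * math.sin(psi) for i in range(3))
    f1, f2 = mp / (m + mp), m / (m + mp)
    v1n = tuple(v1[i] - f1 * (wn[i] - w[i]) for i in range(3))   # eq. (20)
    v2n = tuple(v2[i] + f2 * (wn[i] - w[i]) for i in range(3))   # eq. (21)
    return ((v1n[2], math.hypot(v1n[0], v1n[1])),
            (v2n[2], math.hypot(v2n[0], v2n[1])))
```

Checking that the pair's total kinetic energy and momentum are unchanged to rounding error provides a useful internal consistency test.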
### 2.6 Computing New Positions and Velocities
Once the orbits of all the stars are perturbed, i.e., new values of $`E`$ and $`J`$ are computed for each star, a new realization of the system is generated, by selecting a new position for each star in its new orbit, in such a way that each position in the orbit is weighted by the amount of time that the star spends at that position. To do this, we begin by computing the pericenter and apocenter distances, $`r_{min}`$ and $`r_{max}`$, for each star. The orbit of a star in the cluster potential is a rosette, with $`r`$ oscillating between $`r_{min}`$ and $`r_{max}`$, which are roots of the equation
$`Q(r)=2E2\mathrm{\Phi }(r)J^2/r^2=0.`$ (22)
See Binney & Tremaine (1987; §3.1) for a general discussion, and see Hénon (1971b; Eqs. -) for a convenient method of solution. The new position $`r`$ should now be selected between $`r_{min}`$ and $`r_{max}`$, in such a way that the probability of finding $`r`$ in an interval $`dr`$ is equal to the fraction of time spent by the star in the interval during one orbit, i.e.,
$`{\displaystyle \frac{dt}{P}}={\displaystyle \frac{dr/|v_r|}{_{r_{min}}^{r_{max}}𝑑r/|v_r|}},`$ (23)
where $`P`$ is the orbital period, and $`|v_r|`$ is given by
$`|v_r|=[2E2\mathrm{\Phi }(r)J^2/r^2]^{1/2}=[Q(r)]^{1/2}.`$ (24)
Thus the value of $`r`$ should be selected from a probability distribution that is proportional to $`f(r)=1/|v_r|`$. Unfortunately, at the pericenter and apocenter points ($`r_{min}`$ and $`r_{max}`$), the radial velocity $`v_r`$ is zero, and the probability distribution becomes infinite. To overcome this problem, we make a change of coordinates by defining a suitable function $`r=r(s)`$ and selecting a value of $`s`$ from the distribution
$`g(s){\displaystyle \frac{1}{|v_r|}}{\displaystyle \frac{dr}{ds}}.`$ (25)
We must select the function $`r(s)`$ such that $`g(s)`$ remains finite in the entire interval. A convenient function $`r(s)`$ that satisfies these requirements is given by
$`r={\displaystyle \frac{1}{2}}(r_{min}+r_{max})+{\displaystyle \frac{1}{4}}(r_{max}r_{min})(3ss^3),`$ (26)
where $`s`$ lies in the interval -1 to 1. We then generate a value for $`s`$, which is consistent with the distribution $`g(s)`$, using the von Neumann rejection technique. Equation (26) then gives a corresponding value for $`r`$ which is consistent with the distribution function $`f(r)`$.
The magnitude of the new radial velocity $`v_r`$ is computed using equation (24), and its sign is selected randomly. The transverse velocity is given by $`v_t=J/r`$.
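As an illustration of this position-selection step, the following minimal Python sketch (ours) uses a Kepler potential $`\mathrm{\Phi }(r)=GM/r`$ so that the turning points of equation (22) have a closed form; in the actual method they are found numerically, and the rejection envelope used here is a crude numerical bound:

```python
import math
import random

def sample_position(E, J, gm=1.0):
    """Select r between r_min and r_max, weighted by time spent (eqs. 22-26),
    for a bound orbit (E < 0, J > 0) in a Kepler potential Phi(r) = -gm/r."""
    def Q(r):                                   # eq. (22)
        return 2.0 * E + 2.0 * gm / r - J * J / (r * r)
    # Turning points: Q(r) = 0 is quadratic in u = 1/r for this potential.
    disc = math.sqrt(gm * gm + 2.0 * E * J * J)
    r_min, r_max = J * J / (gm + disc), J * J / (gm - disc)
    def r_of_s(s):                              # eq. (26)
        return 0.5 * (r_min + r_max) + 0.25 * (r_max - r_min) * (3.0 * s - s**3)
    def g(s):                                   # eq. (25); finite at s = +/-1
        q = max(Q(r_of_s(s)), 0.0)
        drds = 0.75 * (r_max - r_min) * (1.0 - s * s)
        return drds / math.sqrt(q) if q > 0.0 else 0.0
    gmax = 1.1 * max(g(-1.0 + 0.002 * i) for i in range(1001))  # crude envelope
    while True:                                 # von Neumann rejection in s
        s = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, gmax) < g(s):
            return r_of_s(s)
```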
Once a new position is selected for each star using the above procedure, the gravitational potential $`\mathrm{\Phi }(r)`$ is recomputed as described in §2.3. This completes the timestep, and allows the next timestep to be started.
Note that the gravitational potential used to compute new positions and velocities of the stars is from the previous timestep. The new potential can only be computed *after* the new positions are assigned, and it is then used to recompute the positions in the next timestep. Thus the computed potential always lags slightly behind the actual potential of the system. The exact potential is known only at the initial condition. This only introduces a small systematic error in the computation, since the potential changes significantly only on the relaxation timescale.
A more important source of error, especially in computing the new energies of the stars after the potential is recomputed, is the random fluctuation of the potential in the core, which contains relatively few stars, but has a high number density. Since the derivative of the potential is also steepest in the core, a small error in computing a star’s position in the core can lead to a large error in computing its energy. As the simulation progresses, this causes a slow but consistent leak in the total system energy. The magnitude of this error (i.e., the amount of energy lost per timestep) depends partly on the number of stars $`N`$ in the system. For large $`N`$, the grid on which the potential is pre-computed (see §2.3) is finer, and the number of stars in the core is larger, which reduces the noise in the potential. The overall error in energy during the course of an entire simulation is typically of order a few percent for $`N=10^5`$ stars. In any realistic simulation, the actual energy gain or loss due to real physical processes such as stellar evolution, escape of stars through a tidal boundary, and interactions involving binaries, is at least an order of magnitude greater than this error. Hence we choose not to renormalize the energy of the system, or employ any other method to artificially conserve the energy of the system, which could affect other aspects of the evolution.
Another possible source of error in Monte-Carlo simulations, which was noted by Hénon (1971b), is the “spurious relaxation” effect. This is the tendency for the system to relax because of the potential fluctuations from one timestep to the next, even in the absence of orbital perturbations due to two-body relaxation. However, this effect is significant only for simulations done with very low $`N\sim 10^2`$–$`10^3`$. In test calculations performed with $`N\sim 10^4`$–$`10^5`$ and two-body relaxation explicitly turned off (by setting the scattering angle $`\beta _e=0`$ in eq. ), we find no evidence of spurious relaxation. Indeed Hénon (1971b) himself showed that spurious relaxation was not significant in his models for $`N>10^3`$.
### 2.7 Escape of Stars and the Effect of a Tidal Boundary
For an isolated system, the gradual evaporation of stars from the cluster is computed in the following way. During each timestep, after the perturbations $`\mathrm{\Delta }E`$ and $`\mathrm{\Delta }J`$ are computed, all stars with a positive total energy (given by eq. ) are assumed to leave the cluster on the crossing timescale. They are therefore considered lost immediately on the relaxation timescale, and removed from the simulation. The mass of the cluster (and its total energy) decreases gradually as a result of this evaporation process.
As a simple first step to take into account the tidal field of the Galaxy, we include an effective tidal boundary around the cluster, at a distance $`r_t\simeq R_g(M_{cluster}/3M_g)^{1/3}`$, where $`R_g`$ is the distance of the cluster from the Galactic center and $`M_g`$ is the mass of the Galaxy (approximated as a point mass). The tidal radius is roughly the size of the Roche lobe of the cluster in the field of the Galaxy. Once the initial tidal radius $`r_{t,0}`$ is specified, the tidal radius at a subsequent time $`t`$ during the simulation can be computed by $`r_t(t)=r_{t,0}(M_{cluster}(t)/M_{cluster}(0))^{1/3}`$. After each timestep, we remove all stars with an apocenter distance $`r_{max}`$ greater than the tidal radius, since they are lost from the cluster on the crossing timescale. As the cluster loses stars due to evaporation and the presence of the tidal boundary, its mass decreases, which causes the tidal boundary to shrink, in turn causing even more stars to be lost. The total mass loss due to a tidal boundary can be very significant, causing up to 90% of the mass to be lost (depending on the initial model) over the course of the simulation (see §3.2).
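A minimal sketch (ours) of this tidal-radius bookkeeping; since removing stars lowers $`M_{cluster}`$, which in turn shrinks $`r_t`$, the pruning can be iterated until no further stars are lost:

```python
def tidal_radius(rt0, m_now, m0):
    """Scale the tidal radius with cluster mass: r_t = r_t0 * (M/M0)^(1/3)."""
    return rt0 * (m_now / m0) ** (1.0 / 3.0)

def prune_escapers(stars, rt):
    """Drop stars whose apocenter r_max exceeds the tidal radius; return the
    survivors and their total mass.  Each star is a (mass, r_max) pair."""
    kept = [(m, rmax) for (m, rmax) in stars if rmax <= rt]
    return kept, sum(m for (m, rmax) in kept)
```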
### 2.8 Units
Following the convention of most previous studies, we define dynamical units so that $`[G]=[M_0]=[4E_0]=1`$, where $`M_0`$ and $`E_0`$ are the initial total mass and total energy of the system (Hénon 1971). Then the units of length $`L`$ and time $`T`$ are given by
$`L=GM_0^2(4E_0)^{1},\mathrm{and}T=GM_0^{5/2}(4E_0)^{3/2}.`$ (27)
We see that $`L`$ is basically the virial radius of the cluster, and $`T`$ is of the order of the initial dynamical (crossing) time. To compute the evolution of the cluster on a relaxation timescale, we rescale the unit of time to $`TN_0/\mathrm{ln}(\gamma N_0)`$, which is of the order of the initial relaxation time. Using this unit of time allows us to eliminate the $`\mathrm{ln}(\gamma N)`$ dependence of the evolution equations. The only equation that explicitly contains the evolution time is equation (10), which relates the timestep and the effective deflection angle. In our units, equation (10) can be written as,
$`\left[\mathrm{sin}^2(\beta _e/2)\right]=2\pi {\displaystyle \frac{([m_1]+[m_2])^2}{[w]^3}}[\nu ][\mathrm{\Delta }t]N,`$ (28)
where $`[q]`$ indicates a quantity $`q`$ expressed in our simulation units. Using a unit of time that is proportional to the initial relaxation time has the advantage that the evolution timescale is roughly independent of the number of stars $`N`$ once an initial model has been selected. This is only approximately true, and only for isolated systems of equal-mass stars with no other processes that depend explicitly on the number of stars (such as stellar evolution or mass segregation). For example, the half-mass relaxation time for the Plummer model,
$`t_{rh}={\displaystyle \frac{0.138N}{\mathrm{ln}(\gamma N)}}\left({\displaystyle \frac{r_h^3}{GM}}\right)^{1/2},`$ (29)
is always 0.093 in our units, independent of $`N`$.
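The value 0.093 follows directly from equation (29): with $`G=M=1`$ and the Plummer half-mass radius $`r_h\simeq 0.7686`$ in these units, the $`N`$ dependence cancels after rescaling the time unit. A quick check (assuming $`\gamma =0.1`$):

```python
import math

def trh_code_units(n, gamma=0.1, rh=0.7686):
    """Half-mass relaxation time of the Plummer model, eq. (29), with
    G = M = 1, expressed in the rescaled time unit T * N / ln(gamma N)."""
    t_dynamical = 0.138 * n / math.log(gamma * n) * rh**1.5
    return t_dynamical / (n / math.log(gamma * n))

print(trh_code_units(1.0e4), trh_code_units(1.0e6))   # both ~0.093
```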
The dynamical units defined above are identical to the standard $`N`$-body units (Heggie & Mathieu 1986). Hence to convert the evolution time from $`N`$-body time units to our Monte-Carlo units, we must simply multiply by a factor $`\mathrm{ln}(\gamma N_0)/N_0`$.
### 2.9 Numerical Implementation
We have implemented our Monte-Carlo code on the SGI/CRAY Origin2000 parallel supercomputer at the National Center for Supercomputing Applications (NCSA), and at Boston University. Our parallelized code can be used to get significant speedup of the simulations, using up to 8 processors, especially for large $`N`$ simulations. This ability to perform large $`N`$ simulations will be particularly useful for doing realistic simulations of very large globular clusters such as 47 Tuc (with $`N>10^6`$ stars). A simulation with $`N=10^5`$ stars can be completed in approximately 15–20 CPU hours on the Origin2000, which uses MIPS R10000 processors. For comparison, a simulation of this size would take $`\sim 6`$ months to complete using the GRAPE-4, which is the fastest available hardware for $`N`$-body methods.
The most computationally intensive step in the simulation is the calculation of the new positions of stars. The operation involves solving for the roots of an equation (eq. ) using the indexed values of the positions of the $`N`$ stars. We find that the most efficient method to solve for the roots in this case is the simple bisection method (e.g., Press *et al.* 1992), which requires $`\mathrm{log}_2N`$ steps per star to converge to the root. Hence the computation of the new positions and velocities of all stars scales as $`N\mathrm{log}_2N`$ in our method. The next most expensive operation is the evaluation of the potential at a given point $`r`$. As described in §2.3, this requires finding $`k`$ such that $`r_krr_{k+1}`$ and then using equation (4). This search can again be done easily using the bisection algorithm. However, since the evaluation of the potential is required several times for each star, in each timestep, it is useful to tabulate the values of $`k`$ on a fine grid in $`r`$ at the beginning of the timestep. This allows the required values of $`k`$ to be found very quickly, at the minor cost of using more memory to store the table. The rest of the steps in the simulation scale almost linearly with $`N`$. This makes the overall computation time scale (theoretically) as $`N\mathrm{log}_2N`$.
In Figure 1, we show the scaling of the wall-clock time with the number of processors, and also the scaling of the overall computation time with the number of stars $`N`$ in the simulation. The overall computation time is consistent with the theoretical estimate for $`N<10^5`$. For larger $`N`$, the computation time is significantly higher, because of the less efficient use of cache memory and other hardware inefficiencies that are introduced while handling large arrays. For $`N`$ in the range $`10^5`$ to $`5\times 10^5`$, we find that the actual computation time scales as $`N^{1.4}`$.
We find that we can easily reduce the overall computation time by a factor of $`\sim 3`$ by using up to 8 processors. The scaling is most efficient for 2–4 processors for simulations with $`N`$ between $`10^5`$ and $`5\times 10^5`$. The scaling gets progressively worse for more than 8 processors. This is in part caused by the distributed shared-memory architecture of the Origin2000 supercomputer, which allows very fast communication between the nearest 2-4 processors, but slower communication between the nearest 8 processors. Beyond 8 processors, the communication is even slower, since the processors are located on different nodes. The most suitable architecture for implementing the parallel Monte-Carlo code would be a truly shared memory supercomputer, with roughly uniform memory access times between processors. Our code is implemented using the Message Passing Interface (MPI) parallelization library, which is actively being developed and improved. The MPI standard is highly portable, and available on practically all parallel computing platforms in use today. The MPI library is optimized for each platform and automatically takes advantage of the memory architecture to the maximum extent possible. Hence we expect that future improvements in the communication speed and memory architectures will make our code scale even better. We are also in the process of improving the scaling of the code to a larger number of processors by designing a new algorithm for reducing the amount of communication required between processors. This will be described in detail in a subsequent paper, where we incorporate primordial binary interactions in our code.
## 3 Test Results
In this section, we describe our first results using the new Monte-Carlo code to compute the evolution of the Plummer and King models. We explore the evolution of the Plummer model in detail, and compare our results with those obtained using Fokker-Planck and $`N`$-body methods. We also compare core-collapse times and mass-loss rates for the series of King models ($`W_0=112`$), including a tidal radius, with similar results obtained by Quinlan (1996) using a 1-D Fokker-Planck method.
### 3.1 Evolution of an Isolated Plummer Model
We first consider the evolution of a cluster with the Plummer model (which is a polytropic model, with index $`n=5`$; see, e.g., Binney & Tremaine 1987) as the initial condition. Perhaps the best known result for single-component systems is the expected homologous evolution of the halo, leading to the eventual development of a power-law density profile between the core and the outer halo during the late phases of evolution. At late times the cluster evolves through a sequence of nearly self-similar configurations, with the core contracting and a power-law halo with density $`\rho \propto r^{-\beta }`$ expanding outward. The development of this power law has been predicted theoretically (Lynden-Bell & Eggleton 1980; Heggie & Stevenson 1988), and verified using direct Fokker-Planck integrations (Cohn 1980). The exponent $`\beta `$ is theoretically and numerically estimated to be about $`2.2`$ (Spitzer 1987). However, since the theoretical derivations are based on an analysis of the Fokker-Planck equation, it is not surprising that the numerical Fokker-Planck integrations (which solve the same equation numerically) reproduce the theoretical exponent exactly. Due to the difficulty of computing accurate density profiles with a small number of stars, this result has not been confirmed independently using an $`N`$-body simulation.
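The quoted exponent can be recovered from a snapshot by a simple least-squares fit of $`\mathrm{log}\rho `$ against $`\mathrm{log}r`$ over the power-law range. A minimal sketch (our own illustration on a synthetic profile, not the analysis code used for the figures):

```python
import math
import random

def fit_power_law_index(radii, densities):
    """Least-squares slope of log(rho) vs log(r); for rho ~ r^-beta
    the returned slope is -beta."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(d) for d in densities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic halo with rho ~ r^-2.2 plus a little log-normal scatter.
random.seed(1)
radii = [10.0 ** (-2.0 + 0.1 * i) for i in range(30)]
densities = [r ** -2.2 * math.exp(random.gauss(0.0, 0.02)) for r in radii]
beta = -fit_power_law_index(radii, densities)
```

For a real snapshot one would first bin the stars in radius and restrict the fit to radii between the core and the outer halo, where the power law holds.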
Here, we explore numerically for the first time the development of this power law using an independent method. Some early results were obtained using previous versions of the Monte-Carlo method, but with a small number of stars $`N10^3`$ (Duncan & Shapiro 1982). Although the Monte-Carlo method can be thought of as just another way of solving the Fokker-Planck equation, there are significant differences between solving the equation in the continuous limit ($`N\mathrm{}`$), as in direct Fokker-Planck integrations, and by using a discrete system with a finite $`N`$ as in our method. There are also many subtle differences in the assumptions and approximations made in the two methods, and even in different implementations of the same method.
In Figures 2a–c we show the density profile of the cluster at three different times during its evolution, up to core collapse. We start with an $`N=10^5`$ isolated Plummer model, and follow the evolution up to core collapse, which occurs at $`t=t_{cc}\simeq 15.2t_{rh}`$. The simulation took about 18 CPU hours on the SGI/Cray Origin2000. In our calculations, the core-collapse time is taken as the time when the innermost Lagrange radius (the radius containing 0.3% of the total mass of the cluster) becomes smaller than 0.001 (in our units, described in §2.8), at which point the simulation is terminated. Given the very rapid evolution of the core near core collapse, we find that we can determine the core-collapse time to within $`<1\%`$; the accuracy is limited mainly by noise in the core. The value we obtain for $`t_{cc}/t_{rh}`$ is in very good agreement with core-collapse times between $`15`$–$`16t_{rh}`$ reported for the Plummer model using other methods. For example, Quinlan (1996) obtains a core-collapse time of $`15.4t_{rh}`$ for the Plummer model using a 1-D Fokker-Planck method, and Takahashi (1993) finds a value of $`15.6t_{rh}`$ using a variational method to solve the 1-D Fokker-Planck equation.
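The diagnostics used above are straightforward to state in code: a Lagrange radius is read off a mass-ranked list of stars, and the run terminates when the 0.3% Lagrange radius drops below 0.001 in model units. A schematic version (our own sketch on a synthetic snapshot; the function names are not from the actual code):

```python
def lagrange_radius(masses, radii, fraction):
    """Radius of the sphere, centered on the cluster center, that encloses
    the given fraction of the total mass."""
    stars = sorted(zip(radii, masses))
    target = fraction * sum(masses)
    cum = 0.0
    for r, m in stars:
        cum += m
        if cum >= target - 1e-12:   # tolerant comparison for round-off
            return r
    return stars[-1][0]

def core_collapsed(masses, radii, threshold=0.001):
    """Termination criterion described in the text: the innermost Lagrange
    radius (0.3% of the total mass) has fallen below `threshold`."""
    return lagrange_radius(masses, radii, 0.003) < threshold

# Synthetic equal-mass snapshot: 1000 stars spaced 0.001 apart in radius.
masses = [1.0 / 1000] * 1000
radii = [0.001 * (i + 1) for i in range(1000)]
r_inner = lagrange_radius(masses, radii, 0.003)
```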
Figure 2a shows the density profile at an intermediate time $`t=11.4t_{rh}`$ during the evolution. The dotted line indicates the initial Plummer profile. At this point in the evolution, we still see a well defined core, with the core density increased by a factor of $`30`$ compared to the initial core density. We see the power-law density profile developing, with the best-fit index $`\beta =2.8`$. In Figure 2b, we show the density profile just before core collapse, at $`t=15t_{rh}`$. We see that the core density has now increased by a factor of $`10^4`$ over the initial core density. The power law is now clearly visible, with the best-fit index $`\beta =2.3`$. Finally, in Figure 2c, we show the density profile at core-collapse, $`t=15.2t_{rh}`$. The dashed line now indicates the *theoretical* power law with $`\beta =2.2`$. We see that the actual density profile seems to approach the theoretical profile asymptotically as the system approaches core collapse. At this point in the evolution, the core density as measured in our simulation is about $`10^6`$ times greater than the initial density. In a globular cluster with $`N=2\times 10^5`$, an average stellar mass $`<m>=0.5\mathrm{M}_{\odot }`$, and a mean velocity dispersion $`<v^2>^{1/2}=5\mathrm{km}\mathrm{s}^{-1}`$, this would correspond to a number density of $`2\times 10^9\mathrm{pc}^{-3}`$. Note that a real globular cluster is not expected to reach such high core densities, since the formation of binaries and the subsequent heating of the core due to binary interactions become significant at much lower densities. Numerical noise due to the extremely small size of the core makes it difficult to determine the core radius and density accurately at this stage. This also causes the numerical accuracy of the Monte-Carlo method to deteriorate, forcing us to stop the computation.
Thus, we find that the power-law structure of the density profile as the cluster approaches core collapse is consistent with theoretical predictions, and the power-law index approaches its theoretical value asymptotically during the late stages of core collapse.
Next, we look at the evolution of the Lagrange radii (radii containing constant fractions of the total mass), and we compare our results with those of an equivalent $`N`$-body simulation. In Figure 3, we show the evolution of the Lagrange radii for an $`N=16384`$ direct $`N`$-body integration by Makino (1996) and for our Monte-Carlo integration with $`N=10^5`$ stars. Time in the direct $`N`$-body integration is scaled to the initial relaxation time (the standard time unit in our Monte Carlo method) using equation (27) with $`\gamma =0.11`$ (see Heggie & Mathieu 1986; Giersz & Heggie 1994; Makino 1996). The agreement between the $`N`$-body and Monte Carlo results is excellent over the entire range of Lagrange radii and time. The small discrepancy in the outer Lagrange radii is caused in part by a different treatment of escaping stars in the two models. In the Monte Carlo model, escaping stars are removed from the simulation and therefore not included in the determination of the Lagrange radii, whereas in the $`N`$-body model escaping stars are not removed. The difference is further explained by the effect of strong encounters, which is greater in the $`N`$-body simulation by a factor $`\mathrm{ln}(10^5)/\mathrm{ln}(16384)`$, or about 20%. In an isolated cluster, the overall evaporation rate is very low (less than 1% of stars escape up to core collapse). In this regime, the escape of stars is dominated by strong interactions in the core. Since the orbit-averaged Fokker-Planck equation is only valid when the fractional energy change per orbit is small, it does not account for strong interactions. Hence, our Monte-Carlo simulations cannot accurately predict the rate of evaporation from an isolated cluster (see, e.g., Binney & Tremaine 1987, §8.4). This problem does not occur in tidally truncated clusters, where the escape rate is much higher, and is dominated by the diffusion of stars across the tidal boundary, and not by strong interactions.
In Figure 4 we show the evolution of various global quantities for the system during the same simulation as in Figure 3. The virial ratio ($`K/|W|`$, where $`K`$ and $`W`$ are the total kinetic and potential energies of the cluster) remains very close to 0.5 (within 1%), indicating that dynamical equilibrium is maintained very well during the entire simulation. The virial ratio provides a very good measure of the quality of our numerical results, since it is not controlled in our calculations (except for the initial model, which is constructed to be in equilibrium). We see that in the absence of a tidal radius, there is very little mass loss (less than 1%), and hence very little energy is carried away by escaping stars.
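That the initial model starts in equilibrium can be checked directly with the standard Plummer sampling recipe (positions from the inverted cumulative mass profile, speeds from the distribution function by rejection; see Aarseth, Hénon & Wielen 1974). The sketch below is our own illustration in units $`G=M=a=1`$, not the paper's initialization code; it draws a small realization and verifies that the virial ratio $`K/|W|`$ comes out near 0.5, with $`W`$ obtained by direct pairwise summation:

```python
import math
import random

random.seed(42)
N = 1200
m = 1.0 / N   # equal stellar masses

def random_direction():
    """Uniformly distributed unit vector."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return s * math.cos(phi), s * math.sin(phi), z

stars = []  # (x, y, z, vx, vy, vz)
for _ in range(N):
    # Radius from the inverted cumulative mass M(r)/M = r^3 (1 + r^2)^{-3/2}.
    u = random.random()
    while u <= 0.0:
        u = random.random()
    r = 1.0 / math.sqrt(u ** (-2.0 / 3.0) - 1.0)
    # Speed v = q * v_esc, with q drawn from g(q) ~ q^2 (1 - q^2)^{7/2}
    # by von Neumann rejection (max of g is about 0.092 < 0.1).
    while True:
        q = random.uniform(0.0, 1.0)
        if random.uniform(0.0, 0.1) < q * q * (1.0 - q * q) ** 3.5:
            break
    v = q * math.sqrt(2.0) * (1.0 + r * r) ** -0.25
    rx, ry, rz = random_direction()
    vx, vy, vz = random_direction()
    stars.append((r * rx, r * ry, r * rz, v * vx, v * vy, v * vz))

# Kinetic energy, and potential energy by direct pairwise summation.
K = sum(0.5 * m * (vx * vx + vy * vy + vz * vz) for _, _, _, vx, vy, vz in stars)
W = 0.0
for i in range(N):
    xi, yi, zi = stars[i][:3]
    for j in range(i + 1, N):
        xj, yj, zj = stars[j][:3]
        W -= m * m / math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
virial_ratio = K / abs(W)
```

With only 1200 stars the ratio fluctuates at the few-percent level around 0.5, consistent with the 1% drift quoted above for the full $`N=10^5`$ simulation.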
### 3.2 Evolution of Isolated and Tidally Truncated King models
King models (King 1966) have long been used to fit observed profiles of globular clusters. They usually provide a very good fit for most clusters, except for those which have reached core collapse. A King model has a well-defined, nearly constant-density core, and a “lowered Maxwellian” velocity distribution, which represents the presence of a finite tidal radius. A King model is usually specified in terms of the dimensionless central potential $`W_0`$ or, equivalently, the central concentration $`c=\mathrm{log}(r_t/r_c)`$, where $`r_t`$ is the tidal radius, and $`r_c`$ is the core radius.
We study the evolution of the entire family of King models from $`W_0=1`$ to $`W_0=12`$, in two different configurations. We first consider the evolution of an isolated cluster, i.e., even though the initial King model is truncated at its finite tidal radius, we do not enforce that tidal boundary during the evolution, allowing the cluster to expand indefinitely. We compute the core-collapse times for the entire sequence of King models. We then redo the calculations with a tidal boundary in place, to determine the enhanced rate of mass loss from the cluster and the final remaining mass at the time of core collapse. We compare our results for the sequence of King models with equivalent results obtained by Quinlan (1996) using direct Fokker-Planck integrations in 1-D. In Table 1, we show the core-collapse times for the various models, along with the equivalent results from Quinlan (1996). All our Monte-Carlo calculations were performed using $`N=10^5`$ stars. We see that the agreement in the core-collapse times for isolated clusters is excellent (within a few percent for the low-$`W_0`$ models, and within 10% up to $`W_0=9`$). For $`W_0>9`$, the agreement is still good, considering that the models start off in a highly collapsed state and therefore have very short core-collapse times, which leads to larger fractional errors.
In Figure 5, we show the evolution of the Lagrange radii for a tidally truncated King model with $`W_0=3`$. The initial tidal radius is $`3.1`$ times the virial radius. In this case, the mass loss through the tidal boundary is very significant, as is seen from the evolution of the outer Lagrange radii. The mass loss causes the tidal radius to constantly move inward, which further accelerates the process. Figure 6 shows the evolution of the total mass and energy of the tidally truncated cluster. Only 44% of the initial mass is retained in the cluster at core-collapse. Also, the binding energy of the cluster is significantly lower at core-collapse, since the escaping stars carry away mass as well as kinetic energy from the cluster. In contrast, the evolution of an isolated $`W_0=3`$ King model is very much like that of the isolated Plummer model described earlier, with a very low mass loss rate, and a longer core-collapse time of $`t_{cc}=17.7t_{rh}`$ (in excellent agreement with the value of $`17.6t_{rh}`$ computed by Quinlan 1996).
Our results for clusters with a tidal boundary show systematic differences from the 1-D Fokker-Planck results of Quinlan (1996). We find that the mass loss through the tidal boundary is significantly higher in the Fokker-Planck calculations for the low-concentration models ($`W_0<6`$). For the high-concentration ($`W_0>6`$) models, the difference between isolated and tidally truncated models is small, and the agreement between the methods remains very good. Hence, for low $`W_0`$, our models undergo core collapse at a much later time than the Fokker-Planck models, and retain more mass at core collapse. This discrepancy is caused by the 1-D nature of the Fokker-Planck models. In 1-D Fokker-Planck calculations, stars are considered lost from the cluster when their energy is greater than the energy at the tidal radius. This clearly provides an overestimate of the escape rate, since it assumes the most extended radial orbits for stars, and ignores stars on more circular orbits with high angular momentum, which would have much smaller orbits at the same energy. In contrast, in the Monte-Carlo method, the orbit of each star is computed using its energy *and* angular momentum, which allows the apocenter distance to be determined correctly. Stars are considered lost only if their apocenter distances from the cluster center are greater than the tidal radius. As stars on radial orbits are removed preferentially, this creates an anisotropy within the cluster, which affects the overall evolution. The artificially high rate of mass loss in 1-D Fokker-Planck simulations has also been pointed out recently in comparisons with $`N`$-body results (Portegies Zwart *et al.* 1998; Takahashi & Portegies Zwart 1999). These authors show that, with appropriate modifications, the results of 2-D Fokker-Planck calculations can be made to agree much better with those from $`N`$-body simulations.
Indeed, we find that our result for the $`W_0=3`$ model with a tidal boundary ($`t_{cc}=12.0t_{rh}`$, and $`M_{final}=0.44`$) agrees much better with that obtained using the improved 2-D Fokker-Planck method, which gives $`t_{cc}=11.3t_{rh}`$, and $`M_{final}=0.34`$ (Takahashi 1999, private communication). For further comparison, and to better understand the cause of the higher mass loss in the 1-D Fokker-Planck calculation, we have performed a Monte-Carlo simulation using the same energy-based escape criterion that is used in the 1-D Fokker-Planck integrations. We find that using the energy-based escape criterion for $`W_0=3`$ gives $`t_{cc}=10.9t_{rh}`$, and $`M_{final}=0.30`$, which agrees better with the 1-D Fokker-Planck result, but a significant discrepancy still remains. This is not surprising, since, even when using a 1-D escape criterion, our underlying method still remains 2-D. Again, our result agrees better with the corresponding result obtained by Takahashi (1999, private communication) using the energy-based escape criterion in his 2-D Fokker-Planck method, $`t_{cc}=10.2t_{rh}`$, and $`M_{final}=0.28`$. It is reassuring to note that the differences between our 2-D results and 1-D Fokker-Planck results are also mirrored in the 2-D Fokker-Planck calculations of Takahashi. Since our Monte-Carlo method is intrinsically 2-D, it is not possible for us to do a true 1-D (isotropic) calculation to compare results directly with 1-D Fokker-Planck calculations.
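The difference between the two escape criteria can be made concrete for a single orbit in a smooth potential. The sketch below is our own illustration (a Plummer potential with $`G=M=a=1`$ and a hypothetical tidal radius, not the actual cluster potential): the apocenter is the outer root of $`E=\mathrm{\Phi }(r)+J^2/2r^2`$, found here by bracketing and bisection, and the example star is flagged as an escaper by the energy criterion but retained by the apocenter criterion:

```python
import math

def phi(r):
    """Plummer potential with G = M = a = 1."""
    return -1.0 / math.sqrt(1.0 + r * r)

def apocenter(E, J, r_start, r_max=1e6):
    """Outer radial turning point: largest r with E = phi(r) + J^2/(2 r^2).
    Assumes f(r_start) >= 0 and f decreasing outward through the root."""
    f = lambda r: E - phi(r) - J * J / (2.0 * r * r)
    lo, hi = r_start, 2.0 * r_start
    while f(hi) > 0.0:            # expand the bracket outward
        lo, hi = hi, 2.0 * hi
        if hi > r_max:
            return math.inf       # effectively unbound in radius
    for _ in range(80):           # bisect down to machine-level precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_t = 5.0                         # hypothetical tidal radius
# Star on a mildly eccentric orbit at r = 3: circular speed plus a small
# radial velocity component.
r = 3.0
v_circ2 = r * r / (1.0 + r * r) ** 1.5
E = phi(r) + 0.5 * (v_circ2 + 0.02)
J = r * math.sqrt(v_circ2)

energy_escape = E > phi(r_t)      # 1-D Fokker-Planck style criterion
r_apo = apocenter(E, J, r)
apocenter_escape = r_apo > r_t    # criterion used in the Monte-Carlo code
```

Here the energy criterion would discard the star even though its apocenter (about $`r=3.9`$) lies well inside the tidal radius, illustrating why the 1-D treatment overestimates the escape rate.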
## 4 Summary and Future Directions
We have presented results obtained using our new Monte-Carlo code for the evolution of clusters containing $`10^5`$ stars, up to core collapse. We have compared our results with those of 1-D Fokker-Planck calculations (Quinlan 1996) for isolated as well as tidally truncated King models with $`W_0=1`$–$`12`$. We find very good agreement for the core-collapse times of isolated King models. For tidally truncated models (especially for $`W_0<6`$), we find that the escape rate of stars in our models is significantly lower than in the 1-D Fokker-Planck models. This is to be expected, since the 1-D Fokker-Planck models use an energy-based escape criterion, which does not account for the anisotropy in the orbits of stars, and hence overestimates the escape rate. This effect is most evident in tidally truncated clusters, since stars on radial orbits are preferentially removed, while those on more circular orbits (with the same energy) are not. In one case ($`W_0=3`$), we have verified that our results are in good agreement with those from new 2-D Fokker-Planck calculations (Takahashi 1999, private communication), which properly account for the velocity anisotropy, and use the same apocenter-based escape criterion as in our models. Further comparisons of our results with 2-D Fokker-Planck calculations will be presented in a subsequent paper (Joshi, Nave, & Rasio 1999). Our detailed comparison of the evolution of the Plummer model with an equivalent direct $`N`$-body simulation also shows excellent agreement between the two methods up to core collapse.
Our results clearly show that the Monte-Carlo method provides a robust, scalable and flexible alternative for studying the evolution of globular clusters. Its strengths are complementary to those of other methods, especially $`N`$-body simulations, which are still prohibitively expensive for studying large systems with $`N>10^5`$. The Monte-Carlo method requires more computational resources compared to Fokker-Planck methods, but it is several orders of magnitude faster than $`N`$-body simulations. The star-by-star representation of the system in this method makes it particularly well suited for studying the evolution of interesting sub-populations of stars within globular clusters, such as pulsars, blue stragglers, or black holes.
Our method also presents the interesting possibility of performing hybrid simulations that use the Monte-Carlo method for the bulk of the evolution of a cluster up to the core collapse phase, and then switch to an $`N`$-body simulation to follow the complex core-collapse phase during which the high reliability of the $`N`$-body method is desirable. The discreteness of the Monte-Carlo method, and the fact that it follows the same phase space parameters for a cluster as the $`N`$-body method, make it easy to switch from one method to the other during a single simulation.
In subsequent papers, we will present results for the dynamical evolution of clusters with different mass spectra, including the effects of mass loss due to stellar evolution. We are also in the process of incorporating primordial binaries in our Monte-Carlo code, in order to follow the evolution in the post-core collapse phase. Dynamical interactions involving binaries will be treated using a combination of direct numerical integrations of the orbits on a case-by-case basis and precomputed cross-sections. The cross-sections will be obtained from separate sets of scattering experiments as well as fitting formulae (Sigurdsson & Phinney 1995; Heggie, Hut, & McMillan 1996, and references therein). Our code will also incorporate a simple treatment of stellar evolution in binaries, using an extensive set of approximate recipes and fitting formulae developed recently for STARLAB (Portegies Zwart 1995). Simulations of clusters containing realistic numbers of stars and binaries will allow us for the first time ever to compute detailed predictions for the properties and distributions of all interaction products, including blue stragglers (from mergers of main-sequence stars), X-ray binaries and recycled pulsars (from interactions involving neutron stars) and cataclysmic variables (from interactions involving white dwarfs).
We thank Piet Hut and Stephen McMillan for many helpful discussions. We are grateful to Jun Makino and Koji Takahashi for kindly providing valuable data and answering numerous questions. This work was supported in part by NSF Grant AST-9618116 and NASA ATP Grant NAG5-8460. F.A.R. was supported in part by an Alfred P. Sloan Research Fellowship. F.A.R. also thanks the Theory Division of the Harvard-Smithsonian Center for Astrophysics for hospitality. This work was also supported by the National Computational Science Alliance under Grant AST970022N and utilized the SGI/Cray Origin2000 supercomputer at Boston University. NASA also supported this work through Hubble Fellowship grant HF-01112.01-98A awarded (to SPZ) by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. SPZ is grateful to MIT for its hospitality and to Tokyo University for the extensive use of their GRAPE systems.
## 1 Introduction
There has recently been some concern over the significant negative values obtained for the square of the neutrino mass, for both the electron neutrino measured in nuclear $`\beta `$-decay, and the muon neutrino measured in $`\pi ^+\to \mu ^+\nu _\mu `$ decay (we will refer to both neutrino and antineutrino simply as “neutrino”). In these measurements, the probability that the square of the electron neutrino mass $`m_{\nu _e}^2`$ is positive is only $`3\%`$, while from the most recent measurements the square of the muon neutrino mass $`m_{\nu _\mu }^2`$ is negative by 6.1 or 0.9 standard deviations, depending on the choice of the pion rest mass value.
While it may be argued that the negative values obtained for $`m_{\nu _e}^2`$ and $`m_{\nu _\mu }^2`$ are a consequence of systematic errors in these measurements which are still not understood, we must also investigate the possibility that the measured results have a physical basis. In this paper, we therefore propose a mechanism which, in principle, allows the measured square of the neutrino mass to be negative, while at the same time does not violate established physical mass limits. Our assumption is that the negative values of $`m_{\nu _e}^2`$ and $`m_{\nu _\mu }^2`$ are not consequences of the dynamics of the decay, but rather result from the geometry, or metric, of the small volume in which the weak interaction drives the decay.
The large masses of the intermediate vector bosons $`W^+`$, $`W^{}`$ and $`Z^0`$ result in a very short range for the weak interaction. The interaction volume is small enough to ensure the success of the Fermi $`\beta `$-decay theory, in which the interaction is assumed to be a four-particle coupling. One may expect that the metric valid in vacuum is not necessarily valid in a volume with such a small dimension, especially considering that, in accordance with the general theory of relativity, we already have a change in the vacuum metric in the presence of mass.
In this paper we study the consequence of the change of metric within the weak-interaction volume on the measured value of the square of the neutrino mass. In our model, we make minimal changes to the metric: as simple as possible, and only as large as necessary to make it differ from the vacuum metric while still having clear physical consequences.
## 2 General basis for the model
The basic idea of this model is essentially the same as that of the general theory of relativity. Translated for the purpose of this paper, it means that the metric of space deforms on a distance scale comparable to the range of the weak interaction. The majority of the necessary mathematical formalism can be taken from the theory of the deformation of continuous media, and extended to 4-dimensional Minkowski space. We can explore the consequences of this space deformation without needing to propose the exact mechanism which causes the deformation.
We define two 4-dimensional Minkowski spaces, an undeformed space $`C`$ and a deformed space $`C^{\prime }`$. Let us choose a point $`P(\xi ^0,\xi ^1,\xi ^2,\xi ^3)`$ in the undeformed space $`C`$. Under the deformation of the space $`C`$, the point $`P(\xi ^0,\xi ^1,\xi ^2,\xi ^3)`$ from the undeformed space is mapped into the point $`P^{\prime }(\xi ^{\prime 0},\xi ^{\prime 1},\xi ^{\prime 2},\xi ^{\prime 3})`$ of the deformed space $`C^{\prime }`$. The space deformation is defined by the equations analogous to the Lagrangian coordinate point of view:
$`\xi ^{\prime 0}=\xi ^{\prime 0}(\xi ^0,\xi ^1,\xi ^2,\xi ^3);\xi ^{\prime 1}=\xi ^{\prime 1}(\xi ^0,\xi ^1,\xi ^2,\xi ^3);`$
$`\xi ^{\prime 2}=\xi ^{\prime 2}(\xi ^0,\xi ^1,\xi ^2,\xi ^3);\xi ^{\prime 3}=\xi ^{\prime 3}(\xi ^0,\xi ^1,\xi ^2,\xi ^3),`$ (1)
or the equations analogous to the Euler coordinate point of view:
$`\xi ^0=\xi ^0(\xi ^{\prime 0},\xi ^{\prime 1},\xi ^{\prime 2},\xi ^{\prime 3});\xi ^1=\xi ^1(\xi ^{\prime 0},\xi ^{\prime 1},\xi ^{\prime 2},\xi ^{\prime 3});`$
$`\xi ^2=\xi ^2(\xi ^{\prime 0},\xi ^{\prime 1},\xi ^{\prime 2},\xi ^{\prime 3});\xi ^3=\xi ^3(\xi ^{\prime 0},\xi ^{\prime 1},\xi ^{\prime 2},\xi ^{\prime 3}).`$ (2)
These functions must be continuous and differentiable in the space $`C`$ ($`C^{}`$), because a discontinuity in these functions would imply a “rupture” of the space $`C`$ ($`C^{}`$).
Following the classical theory of elasticity we define the infinitesimal length $`ds`$ as a line element $`PQ`$ between two points $`P(\xi ^0,\xi ^1,\xi ^2,\xi ^3)`$ and $`Q(\xi ^0+d\xi ^0,\xi ^1+d\xi ^1,\xi ^2+d\xi ^2,\xi ^3+d\xi ^3)`$, and the infinitesimal length $`ds^{\prime }`$ as a line element $`P^{\prime }Q^{\prime }`$ between two points $`P^{\prime }(\xi ^{\prime 0},\xi ^{\prime 1},\xi ^{\prime 2},\xi ^{\prime 3})`$ and $`Q^{\prime }(\xi ^{\prime 0}+d\xi ^{\prime 0},\xi ^{\prime 1}+d\xi ^{\prime 1},\xi ^{\prime 2}+d\xi ^{\prime 2},\xi ^{\prime 3}+d\xi ^{\prime 3})`$. Under the deformation, the length $`ds`$ can either be elongated or contracted. The magnification of the deformation is defined as $`\frac{ds^{\prime }}{ds}`$. In the undeformed space $`C`$, $`ds`$ is defined as the square root of the invariant form
$$ds^2=g_{\mu \nu }d\xi ^\mu d\xi ^\nu ,$$
(3)
where $`g_{\mu \nu }`$ are elements of a symmetric tensor $`[g_{\mu \nu }]`$ defining the metric of the space $`C`$. The metric tensor $`[g_{\mu \nu }]`$,
$$[g_{\mu \nu }]=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& -1& 0& 0\\ 0& 0& -1& 0\\ 0& 0& 0& -1\end{array}\right],$$
(4)
defines four-dimensional Minkowski space in the orthogonal cartesian representation in the system where the speed of light $`c=1`$. The distance between two points $`P(t,x,y,z)`$ and $`Q(t+dt,x+dx,y+dy,z+dz)`$, where $`t`$ represents time and $`(x,y,z)`$ space, is the square root of the invariant form
$$ds^2=dt^2-(dx^2+dy^2+dz^2).$$
(5)
In the deformed space $`C^{\prime }`$, $`ds^{\prime }`$ is the square root of the invariant form
$$ds^{\prime 2}=g_{\mu \nu }d\xi ^{\prime \mu }d\xi ^{\prime \nu }.$$
(6)
In the deformed Minkowski space defined by time $`t^{\prime }`$ and space coordinates $`(x^{\prime },y^{\prime },z^{\prime })`$, the distance between two points $`P^{\prime }(t^{\prime },x^{\prime },y^{\prime },z^{\prime })`$ and $`Q^{\prime }(t^{\prime }+dt^{\prime },x^{\prime }+dx^{\prime },y^{\prime }+dy^{\prime },z^{\prime }+dz^{\prime })`$ is the square root of the invariant form
$$ds^{\prime 2}=dt^{\prime 2}-(dx^{\prime 2}+dy^{\prime 2}+dz^{\prime 2}).$$
(7)
Using again general notation, we can express the total differentials $`(d\xi ^{\prime 0},d\xi ^{\prime 1},d\xi ^{\prime 2},d\xi ^{\prime 3})`$ in terms of the total differentials $`(d\xi ^0,d\xi ^1,d\xi ^2,d\xi ^3)`$ by the transformation
$$d\xi ^{\prime \nu }=a^{\nu \mu }d\xi ^\mu ,$$
(8)
where
$$a^{\nu \mu }=\frac{\partial \xi ^{\prime \nu }}{\partial \xi ^\mu }.$$
(9)
The measure of deformation $`ds^{\prime 2}-ds^2`$ can then be calculated as
$`ds^{\prime 2}-ds^2`$ $`=`$ $`g_{\mu \nu }(d\xi ^{\prime \mu }d\xi ^{\prime \nu }-d\xi ^\mu d\xi ^\nu )`$ (10)
$`=`$ $`(g_{\iota \kappa }a^{\iota \mu }a^{\kappa \nu }-g_{\mu \nu })d\xi ^\mu d\xi ^\nu `$
$`=`$ $`ϵ_{\mu \nu }d\xi ^\mu d\xi ^\nu `$
or
$`ds^{\prime 2}-ds^2`$ $`=`$ $`g_{\mu \nu }(d\xi ^{\prime \mu }d\xi ^{\prime \nu }-d\xi ^\mu d\xi ^\nu )`$ (11)
$`=`$ $`(g_{\mu \nu }-g_{\iota \kappa }b^{\iota \mu }b^{\kappa \nu })d\xi ^{\prime \mu }d\xi ^{\prime \nu }`$
$`=`$ $`\eta _{\mu \nu }d\xi ^{\prime \mu }d\xi ^{\prime \nu }`$
where
$$b^{\nu \mu }=\frac{\partial \xi ^\nu }{\partial \xi ^{\prime \mu }}.$$
(12)
To simplify the calculation, we can rotate our coordinate system such that the coordinate axes correspond to the principal directions of the deformation tensors $`ϵ=[ϵ_{\mu \nu }]`$ and $`\eta =[\eta _{\mu \nu }]`$. The new coordinate system corresponds to orthogonal directions in the undeformed space which remain orthogonal after deformation. In this case, the quadratic forms in Eq.’s 10 and 11 reduce to their canonical forms, and the deformation tensors have diagonal form: $`ϵ=[ϵ_{\mu \mu }]`$ and $`\eta =[\eta _{\mu \mu }]`$. The deformation $`ds^{\prime 2}-ds^2`$ is now
$$ds^{\prime 2}-ds^2=ϵ_{\mu \mu }d\xi ^\mu d\xi ^\mu $$
(13)
or
$$ds^{\prime 2}-ds^2=\eta _{\mu \mu }d\xi ^{\prime \mu }d\xi ^{\prime \mu }.$$
(14)
We are now in the same position as in the general theory of relativity, where the existence of the gravitational potential changes the metric tensor, whose coefficients become functions of the local coordinates and can be written in the general form
$$[g_{\mu \nu }^{\prime }]=\left[\begin{array}{cccc}f_{00}(\xi )& 0& 0& 0\\ 0& -f_{11}(\xi )& 0& 0\\ 0& 0& -f_{22}(\xi )& 0\\ 0& 0& 0& -f_{33}(\xi )\end{array}\right].$$
(15)
$`f_{\mu \nu }(\xi )=f_{\mu \nu }(\xi ^0,\xi ^1,\xi ^2,\xi ^3)`$ are functions generating the deformation of four-dimensional space.
In this paper we address the effect of this space deformation on the kinematics in the deformed region. Generally, we can again define two spaces, an undeformed space $`𝒞`$ and a deformed space $`𝒞^{\prime }`$. Under the deformation, the point $`𝒫(\pi ^0,\pi ^1,\pi ^2,\pi ^3)`$ in the undeformed space $`𝒞`$ is mapped into the point $`𝒫^{\prime }(\pi ^{\prime 0},\pi ^{\prime 1},\pi ^{\prime 2},\pi ^{\prime 3})`$ of the deformed space $`𝒞^{\prime }`$. The invariant form is here defined as
$$dm^2=g_{\mu \nu }d\pi ^\mu d\pi ^\nu .$$
(16)
Translated into the orthogonal cartesian energy-momentum representation $`(E,p_x,p_y,p_z)`$, the distance between two points $`𝒫(E,p_x,p_y,p_z)`$ and $`𝒬(E+dE,p_x+dp_x,p_y+dp_y,p_z+dp_z)`$ is the square root of the same form as in Eq. 5, i.e.,
$$dm^2=dE^2-(dp_x^2+dp_y^2+dp_z^2).$$
(17)
The effect of the space deformation is analogous to the effect in Eq.’s 13 and 14,
$$dm^{\prime 2}-dm^2=\mathcal{E}_{\mu \mu }d\pi ^\mu d\pi ^\mu $$
(18)
and
$$dm^{\prime 2}-dm^2=𝒢_{\mu \mu }d\pi ^{\prime \mu }d\pi ^{\prime \mu }.$$
(19)
The transformation from the space $`𝒞`$ into the space $`𝒞^{\prime }`$ can be found in analogy with Eq.’s 8 and 9. To relate the transformation coefficients, we postulate that the Heisenberg relations hold even if the space is deformed, and that the number of possible states cannot be increased or decreased by the mechanism causing the space deformation. This means that, for each index $`\mu `$,
$$d\xi ^{\prime \mu }d\pi ^{\prime \mu }=d\xi ^\mu d\pi ^\mu .$$
(20)
As a result, the transformation is
$$d\xi ^{\prime \mu }d\pi ^{\prime \mu }=a^{\mu \nu }d\xi ^\nu b^{\mu \nu }d\pi ^\nu =\delta ^{\mu \nu }d\xi ^\nu d\pi ^\nu =d\xi ^\mu d\pi ^\mu ,$$
(21)
obviously $`b^{\mu \mu }=(a^{\mu \mu })^{-1}`$, and:
$$d\pi ^{\prime \mu }=b^{\mu \mu }d\pi ^\mu ;d\pi ^\mu =a^{\mu \mu }d\pi ^{\prime \mu }.$$
(22)
The coefficients $`a^{\mu \mu }`$ and $`b^{\mu \mu }`$ are defined in Eq.’s 9 and 12.
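The consistency requirement in Eq.’s 20–22 — phase-space cells are preserved, so the momentum coefficients are the inverses of the coordinate ones — is easy to check numerically for a diagonal transformation. A small sketch with arbitrary illustrative stretch factors (our own, not from the paper):

```python
# Diagonal transformation coefficients a^{mu mu} for the four axes
# (arbitrary illustrative values; the time axis is left undeformed).
a = [1.0, 0.8, 0.8, 0.8]
b = [1.0 / x for x in a]          # Eq. 22: b = a^{-1}, component-wise

dxi = [0.5, 1.0, 2.0, 3.0]        # coordinate differentials d(xi)^mu
dpi = [2.0, 4.0, 1.0, 0.5]        # momentum differentials d(pi)^mu

dxi_p = [a[m] * dxi[m] for m in range(4)]   # deformed coordinates
dpi_p = [b[m] * dpi[m] for m in range(4)]   # deformed momenta

# Heisenberg condition, Eq. 20: each product d(xi)^mu d(pi)^mu is unchanged,
# so the number of phase-space states is neither created nor destroyed.
products = [dxi[m] * dpi[m] for m in range(4)]
products_p = [dxi_p[m] * dpi_p[m] for m in range(4)]
```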
## 3 Application to the neutrino mass measurement
In this section we study the consequences, for the neutrino mass measurements, of the general formalism developed in the previous section. Because the claim of this paper is that the negative values of the squares of the electron and muon neutrino masses are a consequence of the change in metric in the weak-interaction volume, the same model should be applied to both types of neutrino. There are many ways to deform the volume, but for the sake of simplicity we choose a very simple model of deformation, with just one free parameter. The model transplants into our geometry the mathematical formalism of a mechanical deformation caused by hydrostatic pressure. We also assume that the “time” (or “energy”) component is not affected. In this case, our deformation tensor is given by
$$[\mathcal{E}_{\mu \mu }]=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& -ϵ& 0& 0\\ 0& 0& -ϵ& 0\\ 0& 0& 0& -ϵ\end{array}\right],$$
(23)
where $`ϵ`$ is a constant. We note here that in the limit $`ϵ\to 1`$, $`[\mathcal{E}_{\mu \nu }]\to [g_{\mu \nu }]`$, and the undeformed Minkowski space is recovered. This choice of transformation results in
$$m_\nu ^{\prime 2}=E^{\prime 2}-(p_x^{\prime 2}+p_y^{\prime 2}+p_z^{\prime 2})=E^2-ϵ(p_x^2+p_y^2+p_z^2).$$
(24)
The squares of the neutrino masses measured in nuclear $`\beta `$-decay and in $`\pi ^+\to \mu ^+\nu _\mu `$ decay are reconstructed using energy and momentum conservation assuming a free-space metric. If space is actually deformed but the deformation is not taken into account, reconstruction with the free undeformed-space metric can result in negative values for the squares of the neutrino masses. Because two separate experimental programs determine the electron neutrino and muon neutrino masses, while our model should describe both cases, we will use the results from the muon neutrino experiments to determine the parameter $`ϵ`$, and then use that value to see the consequences for the electron neutrino measurements.
The muon neutrino mass can be measured from the decay reaction $`\pi ^+\to \mu ^+\nu _\mu `$. For $`\pi ^+`$ decay at rest in undeformed space, the muon neutrino mass is determined from the kinematic relation
$$m_\pi =\sqrt{m_{\nu _\mu }^2+p_\mu ^2}+\sqrt{m_\mu ^2+p_\mu ^2},$$
(25)
where momentum conservation has been imposed. If the space is deformed, however, this relation becomes
$$m_\pi =\sqrt{m_{\nu _\mu }^{\prime 2}+p_\mu ^{\prime 2}}+\sqrt{m_\mu ^{\prime 2}+p_\mu ^{\prime 2}}.$$
(26)
Assuming that the “true” neutrino mass is zero (this assumption, even if not critical for the discussion in this paper, does have a basis in the experiments which deal with “free” neutrinos, such as neutrino oscillation experiments, and which suggest that the neutrino mass is very close to zero ), then $`m_{\nu _\mu }^2=0`$, and using the metric tensor defined in Eq. 23, we can calculate the value of the parameter $`ϵ`$ from Eqs. 25 and 26 via
$$\sqrt{m_{\nu _\mu }^2+p_\mu ^2}=p_\mu ^{\prime }=ϵ p_\mu .$$
(27)
One must notice that $`\sqrt{m_\mu ^{\prime 2}+p_\mu ^{\prime 2}}=\sqrt{m_\mu ^2+p_\mu ^2}`$ because of Eq. 23. The value of $`ϵ`$ is thus determined by the momentum of muons from the pion decay, measured to be $`p_\mu =29.79207\pm 0.00024`$ $`MeV/c`$ .
There are two solutions corresponding to two choices of the pion mass, which have been labeled Solution A and Solution B . Solution A, for which $`m_\pi =139.56782\pm 0.00037`$ $`MeV`$ and $`m_{\nu _\mu }^2=-0.143\pm 0.024`$ $`MeV^2`$ yields for the parameter $`ϵ`$ a value
$$ϵ=0.999988\pm 0.000004,$$
(28)
while Solution B with $`m_\pi =139.56995\pm 0.00035`$ $`MeV`$ and $`m_{\nu _\mu }^2=-0.016\pm 0.023`$ $`MeV^2`$ yields
$$ϵ=0.9999998\pm 0.0000004.$$
(29)
As should have been expected, the value of the parameter $`ϵ`$ is very close to 1, suggesting that the space deformation is not very large.
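The central value of $`ϵ`$ can be sketched directly from Eq. 27. The following minimal check (our own illustration, using Solution A's central values only; the published Eqs. 28 and 29 include the full error propagation, so the digits obtained here are only indicative) confirms that $`ϵ`$ falls just below 1:

```python
import math

# Back-of-the-envelope evaluation of Eq. 27 with Solution A's central values.
# These inputs are quoted from the text; the error propagation behind
# Eqs. 28-29 is not reproduced here.
p_mu = 29.79207   # MeV/c, muon momentum from pi+ decay at rest
m2_nu = -0.143    # MeV^2, Solution A's reconstructed (negative) mass squared

# Eq. 27: sqrt(m2_nu + p_mu^2) = p'_mu = eps * p_mu
eps = math.sqrt(p_mu ** 2 + m2_nu) / p_mu
```

As expected, $`ϵ`$ comes out marginally below 1, i.e. the implied space deformation is tiny.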
We now apply the same model to the measurement of the electron neutrino rest mass. The electron neutrino rest mass determined from tritium $`\beta `$-decay is obtained from the shape of the $`\beta `$-spectrum close to the end-point, expressed as
$$W(E)=AFp(E+m_e)\sum _iW_i(E_{0i}-E)\sqrt{(E_{0i}-E)^2-m_{\nu _e}^2},$$
(30)
where $`A`$ is an amplitude, $`F`$ is the Fermi function, $`m_e`$ and $`m_{\nu _e}`$ are the electron and neutrino rest masses, $`p`$ and $`E`$ are the electron momentum and kinetic energy, $`W_i`$ is the relative transition probability to the $`i`$th molecular final state of corresponding end-point energy $`E_{0i}`$. Fitting the nuclear $`\beta `$-decay data with Eq. 30 produces a significant negative value for the square of the electron neutrino mass .
We apply the metric deformation parameter $`ϵ`$ obtained from the muon neutrino mass measurement to the nuclear $`\beta `$-decay and electron neutrino mass measurement. We do not lose generality, but we simplify our calculation, by assuming only one molecular final state. In this case, the shape of the $`\beta `$-spectrum close to the end-point reduces to
$$W(E)\propto p(E+m_e)(E_0-E)\sqrt{(E_0-E)^2-m_{\nu _e}^2}.$$
(31)
If the decay happens in the deformed space $`𝒞^{\prime }`$, where, as in the case of the muon neutrino mass, we assume that the “true” neutrino mass $`m_{\nu _e}^2=0`$, then Eq. 31 becomes
$$W(E^{\prime })\propto p^{\prime }(E^{\prime }+m_e^{\prime })(E_0-E^{\prime })^2.$$
(32)
All the kinematic quantities in the deformed space can be calculated from the quantities in the undeformed space using the transformation defined by the tensor in Eq. 23 and the value of the parameter $`ϵ`$ as determined from the $`\pi ^+\to \mu ^+\nu _\mu `$ decay. Thus $`p^{\prime }=ϵ p`$, and, in a non-relativistic approximation, $`E^{\prime }=ϵ ^2E`$, and $`m_e^{\prime }=m_e`$. We do not transform $`E_0`$ because it is a parameter in the distribution, and therefore a constant. Then, Eq. 32 becomes
$$W(E,ϵ)\propto ϵ p(ϵ ^2E+m_e)(E_0-ϵ ^2E)^2.$$
(33)
Assuming the weak-interaction volume has a deformed metric and the “true” neutrino mass is zero, Eq. 33 represents the shape of the $`\beta `$-spectrum. By assuming that the parameter $`ϵ`$ is constant and that the transformation affects only momentum and not the total energy, as shown by Eq. 23, we restrict ourselves to a very simple model. Even under these simplifying assumptions, there is sufficient new information contained in Eq. 33 that we can study several experimental signatures resulting from this distribution:
* We verified that the shape of the distribution represented by Eq. 33 is consistent with existing measurements. In Fig. 1 we plot the deviation of this distribution for $`ϵ=0.999988`$ from the distribution for $`ϵ=1`$, corresponding to undeformed space, normalized to the undeformed distribution. It is clear that over the entire electron energy spectrum, except for the region very close to the end-point, the deviation is sufficiently small that it could not have been observed given the precision of existing experimental measurements. This is even more true for $`ϵ=0.9999998`$.
* We plot the distribution in the region close to the end-point in Fig. 2, where we show that the values corresponding to $`ϵ=0.999988`$ lie above the undeformed distribution, consistent with a distribution resulting in $`m_{\nu _e}^2<0`$.
* If we assume that the electron energy distribution is described by Eq. 33, but is fitted by the distribution with a shape described by Eq. 30 , there would be a mismatch at some point in the spectrum. In Fig. 3 one can easily see this mismatch close to the end-point for the case when the distribution is generated by Eq. 33 with parameter $`ϵ=0.999988`$, and then fit with the distribution with shape described by Eq. 30. This mismatch results in a bump close to the end-point, as shown in Fig. 4, experimentally corresponding to an overestimation of the counting rate. This effect is observed in many electron neutrino mass measurements .
* Finally, it has also been observed experimentally that $`m_{\nu _e}^2`$ is dragged further below zero as a function of the lower limit of $`E`$ in the fit interval . The $`W(E)`$ distribution generated using Eq. 33 with parameter $`ϵ=0.999988`$ and then fit using the distribution with shape described by Eq. 30 is plotted in Fig. 5, showing that the same effect is observed; namely that $`m_{\nu _e}^2`$ becomes more negative as the fit interval is increased by lowering the limit of $`E`$.
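The first two signatures can be checked numerically. The sketch below is a hypothetical illustration with tritium-like inputs ($`E_0=18.6`$ keV and $`m_e=511`$ keV are textbook numbers, not values from the paper's fits); it evaluates Eq. 33 against the undeformed $`ϵ=1`$ spectrum:

```python
import math

M_E = 511.0     # electron rest mass, keV (illustrative textbook value)
E0 = 18.6       # tritium end-point kinetic energy, keV (illustrative)
EPS = 0.999988  # Solution A deformation parameter

def W(E, eps):
    """Spectrum shape of Eq. 33, with the proportionality constant dropped."""
    p = math.sqrt((E + M_E) ** 2 - M_E ** 2)  # undeformed electron momentum
    return eps * p * (eps ** 2 * E + M_E) * (E0 - eps ** 2 * E) ** 2

def deviation(E):
    """Fractional deviation of the deformed spectrum from the eps = 1 one."""
    return W(E, EPS) / W(E, 1.0) - 1.0
```

With these inputs the deviation at mid-spectrum is at the $`10^5`$ level, far below existing experimental precision, while within about 1 eV of the end-point the deformed curve rises well above the undeformed one, which is the behavior expected of an apparent $`m_{\nu _e}^2<0`$.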
A more realistic deformation of space, in which there would be more than one free parameter, and whose parameters could be energy and momentum dependent (generally as in Eq. 15), would result in a different mass-energy relation, for which the calculation would be more complex and harder to relate to existing experimental results. Finding the exact deformation function is far beyond the scope of this paper, and our model is made as simple as possible. Despite its simplicity, this model still results in several significant consequences which already have been or could be experimentally tested.
## 4 Conclusion
In this paper we have suggested, in analogy to the general theory of relativity and the theory of deformation for continuous media, a simple mechanism in which the negative values for the square of the neutrino mass reported in most of the neutrino experiments could be the result of a change of metric in the small weak-interaction volume in the energy-momentum representation. We constructed a simple model in which the changes in the energy-momentum metric do result in $`m_\nu ^2<0`$, while at the same time no component of the model violated allowed physical limits. The goal of this paper was not to construct a complete theory of metric deformation, but rather to demonstrate that a negative value of the square of the neutrino mass should not immediately be discarded as unphysical, and could indicate new physical phenomena.
I would like to thank Steven P. Wells for long and useful discussions.
# Phase transitions in two dimensions - the case of Sn adsorbed on Ge(111) surfaces
## Abstract
Accurate atomic coordinates of the room-temperature ($`\sqrt{3}\times \sqrt{3}`$)R30 and low-temperature (3$`\times `$3) phases of 1/3 ML Sn on Ge(111) have been established by grazing-incidence x-ray diffraction with synchrotron radiation. The Sn atoms are located solely at T<sub>4</sub>-sites in the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure. In the low temperature phase one of the three Sn atoms per (3$`\times `$3) unit cell is displaced outwards by 0.26$`\pm 0.04`$ Å relative to the other two. This displacement is accompanied by an increase in the first to second double-layer spacing in the Ge substrate.
Phase transitions at surfaces have aroused considerable interest among both theoreticians and experimentalists because they impact a wide variety of fields ranging from industrially important catalytic processes to providing insights into phenomena observed in cuprate superconductors. The suggestion that a commensurate charge density wave can form in Pb and Sn overlayers on Ge(111) demonstrated the importance of such simple model systems as testing grounds for modern theories . Upon cooling both of these adsorbate systems undergo a structural phase transition from a surface reconstruction with a ($`\sqrt{3}\times \sqrt{3}`$)R30 periodicity at room temperature to a (3$`\times `$3) periodicity at low temperatures. For the Pb/Ge(111) system the (3$`\times `$3) structure is accompanied by a small gap opening up in the electronic band structure, indicative of a metal-insulator transition. The picture of a symmetry breaking transition was seriously questioned in two recent papers, which reported almost identical electronic structures for both phases . The transition was proposed to be of order/disorder type with the Sn atoms fluctuating between two positions at room temperature, but freezing into an ordered (3$`\times `$3) structure at low temperature with an outwards displacement of every third Sn atom . This throws into question the generally accepted T<sub>4</sub> model for the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure, in which the adsorbate atom is located at a single threefold hollow site above a second layer Ge atom as shown in Fig. 1. To add to the confusion the postulated Sn atom displacement in the (3$`\times `$3) structure was not found in a recent surface x-ray diffraction (SXRD) study .
Challenged by the discrepancies between the electronic and structural studies performed so far, we decided to undertake a thorough investigation using surface x-ray diffraction to determine the geometrical structure of the Ge(111)-Sn system both at room and low temperature and by comparison to determine unambiguously the nature of the phase transition.
The samples were prepared in an ultra high vacuum (UHV) system equipped with reflection high energy electron diffraction, low energy electron diffraction and a scanning tunneling microscope (STM). The substrates were cleaned using the standard procedure of repeated sputter-anneal cycles (500 eV Ar<sup>+</sup> ions, 450C) until good c(2$`\times `$8) diffraction patterns were observed. Tin was deposited from a calibrated effusion cell with the Ge(111) substrate held at room temperature; afterwards the sample was annealed to $``$ 150C. This procedure yielded a Ge(111)-($`\sqrt{3}\times \sqrt{3}`$)R30-Sn reconstruction with a tin coverage very close to the ideal value of 1/3 ML. STM measurements revealed well-ordered domains extending over $``$ 400-600 Å with a typical defect density of $``$4 %, and the absence of the low coverage “mosaic” phase with a mixture of Sn and Ge adatoms. The sample was then transferred in a portable UHV chamber equipped with a closed-cycle sample cooling system to the BW2 wiggler beamline at HASYLAB for the x-ray diffraction measurements. The x-ray photon energy was set to 8.8 keV and a glancing angle of incidence to 0.8 was used (i.e. above the critical angle to reduce the uncertainties in the measured intensities arising from mechanical displacements). A data-set consisting of 35 symmetry inequivalent in-plane reflections, 250 reflections along 14 fractional order rods and 62 reflections along three crystal truncation rods (CTRs) was recorded for the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure determination. After completing the room temperature measurements the sample was cooled until the temperature of the sample holder reached 20 K. For the low temperature (3$`\times `$3) phase 278 reflections along 17 fractional order rods and 19 reflections along one CTR were recorded. 
The three rods specific to the (3$`\times `$3) structure were rather weak and to optimize the signal to background ratio these were measured with the angle of incidence set to the critical angle and the data were scaled accordingly. The condition of the sample was checked by measuring a standard reflection at hourly intervals. The integrated intensities were corrected for the Lorentz factor, polarization factor, active sample area and the rod interception appropriate for the z-axis geometry . The width of the fractional order reflections from the ($`\sqrt{3}\times \sqrt{3}`$)R30 phase corresponded to domains about 500 Å in diameter and this value did not change upon cooling. The reflections specific to the (3$`\times `$3) structure were considerably broader corresponding to an average domain size of only $``$ 120 Å. This indicates that cooling does not change the basic structure of the surface reconstruction, but it is modified by the superposition of a less well-correlated distortion. In the following we use the conventional surface coordinate system with $`𝐚=1/2[10\overline{1}]_{\text{cubic}}`$, $`𝐛=1/2[\overline{1}10]_{\text{cubic}}`$ and $`𝐜=1/3[111]_{\text{cubic}}`$. The cubic coordinates are in units of the germanium lattice constant (5.66 Å at 300 K).
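As a quick geometric cross-check of this coordinate system (a standard crystallographic identity computed by us, not a result from the measurements), the lengths of the stated basis vectors in Å follow directly from their cubic components:

```python
import math

A_GE = 5.66  # Ge lattice constant in Angstrom at 300 K, as quoted above

def length(v, scale=A_GE):
    """Length of a vector given in cubic units of the lattice constant."""
    return scale * math.sqrt(sum(x * x for x in v))

a_in_plane = length([0.5, 0.0, -0.5])      # a = 1/2 [1 0 -1]_cubic
c_normal = length([1 / 3, 1 / 3, 1 / 3])   # c = 1/3 [1 1 1]_cubic
# a_in_plane is the (1x1) surface lattice parameter, close to 4.00 A;
# c_normal is one third of the [111] body diagonal, close to 3.27 A.
```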
A subset of the measured surface diffraction data is shown in Fig. 2. The rods for the ($`\sqrt{3}\times \sqrt{3}`$)R30 and the (3$`\times `$3) phase are very similar, but a careful inspection shows that there are important differences. Some of the rods are basically identical, as can be seen for the (2/3, 5/3) or (4/3, 1/3) rods, whereas for the (2/3, 8/3) or (7/3, 1/3) rods the intensities from the (3$`\times `$3) structure are significantly higher than those of the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure. These differences are due solely to the changes in the atomic positions as a function of temperature.
In order to pinpoint the differences we first determined the atomic positions of the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure using a least-squares refinement procedure. The atomic coordinates are given in Table I and a ball and stick model of the structure with the displacements relative to bulk-like positions is shown in Fig. 1b. The Ge-Ge bond lengths deviate less than 3 % from the bulk value of 2.45 Å. The Ge-Sn bonds with 2.83$`\pm 0.02`$ Å are slightly larger than the sum of the tetrahedral covalent radii for germanium and white tin (2.74 Å/2.82 Å) and larger than the value expected for grey tin ($`\alpha `$-Sn) and germanium (2.63 Å). The Sn bond angle is 82.0. The in-plane displacement of the first layer Ge atoms of 0.17 Å is significantly larger than the value of 0.05 Å given in Ref. and indicates that the earlier analysis was based on a too limited dataset. The results of a Keating energy minimization are incompatible with the smaller value so we are forced to conclude that the analysis presented in Ref. is incorrect. As shown in Fig. 2 the curves calculated using our structural model reproduce the experimental data extremely well and this is confirmed by the reduced $`\chi ^2`$ value of 1.6.
Next, we determined the atomic coordinates of the low-temperature (3$`\times `$3) reconstruction and obtained the values listed in Table I. The differences between the (3$`\times `$3) structure and the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure are illustrated in Fig. 1c. There are several important features to be noted: (i) One Sn atom is displaced out of the surface plane by 0.29 Å; this Sn atom is at a vertical position 0.26 Å higher than the average position of the two lower Sn atoms. (ii) The three nearest-neighbor Ge atoms partially follow this relaxation, mainly in the $`z`$-direction and not in-plane, contrary to what was reported in Ref. . (iii) The average layer spacing between the first and second Ge double-layer is expanded relative to the room temperature phase. For the room temperature phase this distance is 1.004 and from the second to the third double-layer 0.993 in lattice coordinates, i.e. an expansion and a contraction relative to the bulk value of 1.000. However, for the low-temperature phase the first to second double-layer distance is 1.026 and the second to third layer distance is 1.002, i.e. a considerable expansion in the upper two double-layers. (iv) The outwards displaced Sn atom has a very anisotropic atomic displacement parameter (adp) with an amplitude ten times larger in the $`z`$-direction than in-plane. This means that either the atom is performing a very anisotropic motion with a large amplitude or, as is more likely at low temperatures, there is some disorder in the $`z`$-position of this atom. The adp’s for the nearest-neighbor Ge atoms are also larger than at room temperature again indicative of disorder. This is not surprising since the position of these atoms must at least partially follow the Sn atoms. The reduced $`\chi ^2`$ for the low temperature data is 1.3; a subset of the fractional order rods is shown in Fig. 2.
A trial using a single isotropic adp for all Sn atoms resulted in an increased outward displacement of one Sn atom and an inward displacement of the two other Sn atoms with a total height difference of $``$ 0.45 Å between the Sn atoms. However, the three rods specific to the (3$`\times `$3) structure were not adequately described by this model.
Several tests were performed to ensure that the features of the low-temperature phase were real and not caused by artifacts or local minima in the $`\chi ^2`$ minimization. First, to check the sensitivity of the structure determination to changes in the relative weight of reflections we set the error bars on all measured reflections equal and re-optimized every parameter. Although there were some minor differences the main features of the outward displacement and highly anisotropic adp for one Sn atom remained. In the second test we optimized the Ge positions in the third to sixth layers using a Keating model to minimize the elastic strain energy . All deviations were less than 0.06 Å, so we can rule out the possibility that the good agreement between the measured and calculated intensities arises from unphysical atomic displacements in the substrate. In the third test we checked whether the low temperature displacements are dependent on the weak rods specific to the (3$`\times `$3) periodicity, which have larger relative uncertainties than the other rods. By excluding these rods from the data analysis and re-optimizing the parameters only minor changes, typically $`<`$ 0.03 Å, occurred. From these checks we are convinced that our data analysis has revealed the intrinsic features of the low-temperature (3$`\times `$3) phase.
Now we can address the classification of the transition between the ($`\sqrt{3}\times \sqrt{3}`$)R30 and the (3$`\times `$3) phase in more detail. Recently, it was proposed to be an order/disorder transition . This would require two different sites for the Sn atom with a height difference of about 0.26 Å even at room temperature. However, if this were the case, there would be no difference between the ($`\sqrt{3}\times \sqrt{3}`$)R30 specific rods in the ($`\sqrt{3}\times \sqrt{3}`$)R30 and (3$`\times `$3) phase apart from the thermal motion effects affecting all rods, in contrast to what we observed experimentally as shown in Fig. 2. To quantify this, we used the (3$`\times `$3) low temperature structure and optimized the displacements using the room temperature data. This gave a more isotropic adp for the outwards displaced Sn atom and a reduction of the outwards displacement to 0.07 Å. The reduced $`\chi ^2`$ in this test increased compared to the best fit for the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure from 1.6 to 1.7 due to the increase in the number of free parameters. Hence, we can conclude that if there is more than one site for the Sn atoms in the ($`\sqrt{3}\times \sqrt{3}`$)R30 structure the height difference is much less than that observed in the (3$`\times `$3) phase. The fact that the adp of the surface layer Ge atoms are similar in both phases is strong evidence against an order/disorder phase transition. At low temperatures one would normally expect both reduced thermal motion and disorder. The experimentally observed lattice distortion is reminiscent of a pseudo-Jahn-Teller-effect in which the energy of the system is lowered by a spontaneous symmetry-reducing displacement.
In summary, by performing a detailed analysis of comprehensive sets of x-ray diffraction data we have established definitive structural models for both the room-temperature Ge(111)-($`\sqrt{3}\times \sqrt{3}`$)R30-Sn and low-temperature Ge(111)-(3$`\times `$3)-Sn surface reconstructions. The atomic coordinates are given in Table I. The major feature of the (3$`\times `$3) structure is the outward displacement of one Sn atom by 0.26$`\pm 0.04`$ Å with respect to the average position of the other two Sn atoms per unit cell. The three nearest-neighbor Ge atoms bonding to the displaced Sn atom are also displaced outwards. In addition there is an increase in the average layer spacing between the first to second Ge double-layers compared to the ($`\sqrt{3}\times \sqrt{3}`$)R30 phase. We have shown that the phase transition from the ($`\sqrt{3}\times \sqrt{3}`$)R30 to the (3$`\times `$3) phase is not an order/disorder transition. We hope that the detailed structural information presented here will provide the foundation for a better theoretical understanding of this interesting model system.
Note added: Zhang et al. have re-analyzed their previously published SXRD data for the Ge(111)/Sn phases in conjunction with IV-LEED data and find an outward displacement of 0.37 Å for one Sn atom. However, the displacements in the substrate differ from the values reported in this letter. A comparable outward displacement of a Pb atom by $``$0.4 Å was described in a recent publication on the Ge(111)-(3$`\times `$3)-Pb structure by Mascaraque et al. .
We thank the staff of HASYLAB for their technical assistance. Financial support from the Danish Research Council through Dansync, the Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF) under project no. 05 SB8 GUA6 and the Volkswagen Stiftung is gratefully acknowledged.
# Semantic robust parsing for noun extraction from natural language queries
## 1 Introduction
The domain we are concerned with in our case study is the interaction through speech with information systems. The availability of a large collection of annotated telephone calls for querying the Swiss phone-book database (i.e. the Swiss French PolyPhone corpus ) allowed us to experiment with our recent findings in robust text analysis, obtained in the context of the Swiss National Fund research project ROTA (Robust Text Analysis) and the Swisscom-funded project ISIS (Interaction through Speech with Information Systems). Within this domain, the goal is to build a valid query to an information system, using limited world knowledge of the domain in question. Although a task like this may, at its simplest, be performed quite effectively using heuristic methods such as keyword spotting, such an approach is brittle and does not scale up easily to conducting a dialogue.
### 1.1 Problem specification
In this section we will give an informal specification for the problem of processing telephone calls for querying a phone-book database.
#### 1.1.1 Swiss French PolyPhone Database
This database contains 4293 simulated recordings related to the “111” Swisscom service calls (e.g. “rubrique 38” of the calling sheet ). Each recording consists of 2 files: one ASCII text file corresponding to the initial prompt and the information request, and one data file containing the sampled sound version. As far as the address fields are concerned, the data in the PolyPhone database are unfortunately not tagged and not even consistent. Prompts and information requests expressed by users have been extracted from the files and regrouped into a single representation in the following format:
> id:cd1/b00/f0000o06:sid17733
>
> prompt:1
>
> adr1:MOTTAZ MONIQUE
>
> adr2:rue du PRINTEMPS 4
>
> adr3:SAIGNELEGIER
>
> text: Bonjour j’aimerais un numéro de téléphone Saignelegier c’est Mottaz m o deux ta z Monique rue du printemps numéro quatre
>
> sample:0.200000:10.820000:88160:42801
where currently the corresponding lines in the text file are processed with the following heuristic:
* id identifies the original location of the file on the CD-ROM.
* prompt identifies the type of prompt asking the user to pose the query (e.g. n. 1 corresponds to “*Veuillez maintenant faire comme si vous étiez en ligne avec le 111 pour demander le numéro de téléphone de la personne imaginaire dont les coordonnées se trouvent ci-dessous:*”).
* adr1 corresponds to the *name*.
* adr2 corresponds to the *address* if adr3 is not empty and to the *town* otherwise.
* adr3 corresponds to the *town* if not empty.
* text corresponds to the text transcription. The number in square brackets is the total number of chars in the request.
* sample groups the information for the sampled sound version of the request.
This heuristic seems to perform quite well but a more thorough and exhaustive evaluation still needs to be carried out. The main problem remains in finding enough information about the *original* data in order to be able to perform the validation automatically.
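The heuristic can be sketched in a few lines of Python (a hypothetical illustration, not the project's actual code; the helper name `parse_record` is our own):

```python
def parse_record(raw: str) -> dict:
    """Split one PolyPhone record into its colon-delimited fields and
    apply the adr2/adr3 disambiguation described above."""
    fields = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    # adr2 is the street address only when adr3 (the town) is non-empty;
    # otherwise adr2 itself holds the town.
    if fields.get("adr3"):
        fields["address"], fields["town"] = fields.get("adr2", ""), fields["adr3"]
    else:
        fields["address"], fields["town"] = "", fields.get("adr2", "")
    fields["name"] = fields.get("adr1", "")
    return fields
```

Applied to the example record above, this yields the name "MOTTAZ MONIQUE", the address "rue du PRINTEMPS 4" and the town "SAIGNELEGIER".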
#### 1.1.2 The frame schema
Concerning the structure in the Swiss Phone-book database, we assumed it is the same as the one that appears on the web (e.g. http://www.ife.ee.ethz.ch/cgi-bin/etvq/f), namely (one field per line):
* Nom de famille / Firme
Prénom / Autres informations
No de téléphone
Rue, numéro
NPA, localité
We chose to provide further information which is not available at the web level but which can be used to form the query. The full frame description is given below (bracketed slots are optional):
(default: Person)
\[*yellow pages categories*\]
\[*yellow pages categories*\]
\[repres., direction, secretariat, …\]
(at least one of the sub-fields)
(default: standard) \[standard, privé, fax, natel\]
(default: ok) \[ok, ill-formed, missing-information, …\]
One point still remains unclear about the PolyPhone database (as no answers were found in ): what set of annotations was used for the transcription of utterances? Several speech annotations such as “\<hesitation\>” appear in the text. Was it systematic? Are there other such markers? Is it possible to rely on prosodic information? In the first phase of the project we simply skipped this information, but we guess that it could be of great help in disambiguating interpretations of strictly adjacent sequences of names, such as in utterances like “*j’aimerais le numéro de téléphone de Vedo-Moser Brigitte Brignon Baar-Nendaz*”.
### 1.2 Query analysis
The processing of the corpus data is performed at various linguistic levels by modules organized into a pipeline. Each module takes as input the output of the preceding module. The main goal of this architecture is to understand how far it is possible to go without using any kind of feedback or interaction among the different linguistic modules.
#### 1.2.1 Morpho-Syntactic analysis
At a first stage, morphological and syntactic processing is applied to the output from the *speech recognizer* module, which usually produces a huge word-graph hypothesis. Low-level processing (morphological analysis and tagging) was performed by ISSCO (Institute Dalle Molle, University of Geneva) using tools that were developed in the European Linguistics Engineering project MULTEXT. For syntactic analysis, ISSCO developed a Feature Unification Grammar based on the ELU formalism (i.e. an extension of PATRII grammars) and induced from a small sample of the Polyphone data. This grammar was taken by another of our partners (the Laboratory for Artificial Intelligence of the Swiss Federal Institute of Technology, Lausanne) and converted into a probabilistic context-free grammar, which was then initially trained with a sample of 500 entries from the Polyphone data. The forest of syntactic trees produced by this phase is used to achieve two goals:
1. The n-best analyses are used to disambiguate speech recognizer hypotheses
2. They serve as the input for the robust semantic analysis that we performed, whose goal is the production of query frames for the information system.
#### 1.2.2 Semantic annotations
While the semantic analysis will in general reduce the degree of ambiguity found after syntactic analysis, there remains the possibility that it might *increase* some degree of ambiguity due to the presence of coherent senses of words with the same syntactic category (e.g., the word “Geneva” can refer to either the canton or the city).
#### 1.2.3 Semantic robust analysis and frame filling
The component that deals with such input is generally referred to as a *robust analyzer*. Although robustness can be considered as being applied at either a syntactic or semantic level, we believe it is generally at the semantic level that it is most effective. This robust analysis needs a model of the domain in which the system operates, and a way of linking this model to the lexicon used by the other components. It specifies semantic constraints that apply in the world and which allow us to rule out incoherent requests (for instance). The degree of detail required of the domain model used by the robust analyzer depends upon the ultimate task that must be performed — in our case, furnishing a query to an information system. Taking the assumption that the information system being queried is relatively close in form to a relational database, the goal of the interpretative process is to furnish a query to the information system that can be viewed in the form of a frame with certain fields completed, the function of the querying engine being to fill in the empty fields.
One way in which the interface could interact with the querying system would be to submit such a frame at the end of the analysis process without performing any coherency checking. The advantage of this method is that the model of the domain of queries that is required by the interface can be limited. However, such an approach has two major disadvantages:
* the result of incorrectly formulated queries may be completely uninterpretable or erroneous, and the interface system would have no basis for evaluating the quality of such replies, or for deciding how to aid the user in formulating a better one;
* there might be a number of possible frames that could be submitted for any instance of a user utterance/query, and this number might be reducible by application of a model of coherent queries.
We will, therefore, presume that queries must be classified by the interface into three categories:
1. the query is correct — the fields of the frame which must be completed contain semantically valid data. The query may be submitted;
2. incomplete queries — certain necessary fields cannot be unambiguously filled in, and so a system-initiative dialogue can be invoked to furnish the necessary information to create a correct query;
3. incoherent queries — information in the fields of the frame is not coherent with the interface’s model of the domain. An error dialogue must be invoked.
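A minimal sketch of this three-way triage (with an assumed frame layout and a toy domain model; the field names and the town list are illustrative only, not the system's actual data):

```python
REQUIRED = ("name", "town")   # fields the query engine minimally needs
KNOWN_TOWNS = {"SAIGNELEGIER", "LAUSANNE", "GENEVE"}  # toy domain model

def classify(frame: dict) -> str:
    """Return 'correct', 'incomplete' or 'incoherent' for a query frame."""
    if any(not frame.get(f) for f in REQUIRED):
        return "incomplete"   # trigger a system-initiative dialogue
    if frame["town"] not in KNOWN_TOWNS:
        return "incoherent"   # violates the domain model: error dialogue
    return "correct"          # the frame may be submitted
```

In a real system the domain model would of course be the phone-book database itself rather than a fixed set, but the control flow is the same.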
The last query category is the most complex, since it requires a domain model sufficiently rich to decide whether a query is outside of the domain, or inside the domain but violating certain semantic constraints. In addition, it requires relatively complex dialogue management as the corrective dialogue may involve resolution of miscomprehension by either the system or the user.
## 2 Computational logic for robust analysis
The symbol-processing capability of *logic-based* programming languages, and the way they abstract from the actual implementation of the needed data structures, have long been considered advantages. *Definite Clause Grammars* come to mind when relating Logic Programming and Natural Language Processing. This is of course one of the best couplings between Computational Linguistics and Logic to support both (i) the development of linguistic models of Natural Language (Computational Linguistics) and (ii) the design of real-life applications (Language Engineering).
The main drawback of this approach is efficiency, but it is not the only one. In recent years several efforts have been made to improve the efficiency of logic and functional programming languages by means of powerful abstract machines and optimized compilers. Sometimes, efficiency recovery leads to the introduction of non-logical features into the language, and the programmer should be aware of them in order to exploit them in the development of his or her applications (e.g. cut in logic programming).
An important question to ask is: “how can computational logic contribute to robust discourse analysis?”. A partial answer is that current logic-based programming languages are able to integrate in a unifying framework all or most of the techniques necessary for robust text analysis. Furthermore, this can be done in a rigorous, “mathematical” fashion. In this sense robustness is related to correctness and provability with respect to the specifications. An NLP system developed within a logical framework has a predictable behavior, which is useful for checking the validity of the underlying theories.
### 2.1 Left-corner Head-driven Island Parser
LHIP is a system which performs robust analysis of its input, using a grammar defined in an extended form of the Definite Clause Grammar formalism used for implementation of parsers in Prolog. The chief modifications to the standard Prolog ‘grammar rule’ format are of two types: one or more right-hand side (RHS) items may be marked as ‘heads’, and one or more RHS items may be marked as ‘ignorable’.
LHIP employs a different control strategy from that used by Prolog DCGs, in order to allow it to cope with ungrammatical or unforeseen input. The behavior of LHIP can best be understood in terms of the complementary notions of span and cover. A grammar rule is said to produce an island which spans input terminals $`t_i`$ to $`t_{i+n}`$ if the island starts at the $`i^{th}`$ terminal, and the $`(i+n)^{th}`$ terminal is the terminal immediately to the right of the last terminal of the island. A rule is said to cover $`m`$ items if $`m`$ terminals are consumed in the span of the rule. Thus $`m\le n`$. If $`m=n`$ then the rule has completely covered the span.
As implied here, rules need not cover all of the input in order to succeed. More specifically, the constraints applied in creating islands are such that islands do not have to be adjacent, but may be separated by non-covered input. There are two notions of non-coverage of the input: unsanctioned and sanctioned non-coverage. The former case arises when the grammar simply does not account for some terminal. Sanctioned non-coverage means that special rules, called “*ignore*” rules, have been applied so that by ignoring parts of the input the islands are adjacent. Those parts of the input that have been *ignored* are considered to have been consumed. These *ignore* rules can be invoked individually or as a class. It is this latter capability which distinguishes *ignore* rules from regular rules, as they are functionally equivalent otherwise, but mainly serve as a notational aid for the grammar writer.
Strict adjacency between RHS clauses can be specified in the grammar. It is possible to define global and local thresholds for the proportion of the spanned input that must be covered by rules; in this way, the user of an LHIP grammar can exercise quite fine control over the required accuracy and completeness of the analysis.
A chart is kept of successes and failures of rules, both to improve efficiency and provide a means of identifying unattached constituents. In addition, feedback is given to the grammar writer on the degree to which the grammar is able to cope with the given input; in a context of grammar development, this may serve as notification of areas to which the coverage of the grammar might next be extended. Extensions of Prolog DCG grammars in LHIP permit:
1. nominating certain RHS clauses as heads;
2. marking some RHS clauses as being optional;
3. invocation of *ignore* rules;
4. imposing adjacency constraints between two RHS clauses;
5. setting a local threshold level in a rule for the fraction of spanned input that must be covered.
A threshold defines the minimum fraction of terminals covered by the rule in relation to the terminals spanned by the rule in order for the rule to succeed. For instance, if a rule spans terminals $`t_i`$ to $`t_{i+n}`$ covering $`j`$ terminals in that span, then the rule can only succeed if $`j/n\ge T`$.
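The span/cover threshold condition can be mirrored in a short Python sketch (purely illustrative: LHIP itself is a Prolog system, and the function name used here is ours):

```python
def rule_succeeds(span_start, span_end, covered, threshold):
    """LHIP threshold test: a rule spanning terminals t_i .. t_{i+n}
    and covering j of them can only succeed if j/n >= T."""
    n = span_end - span_start      # number of terminals spanned
    if n == 0:
        return True                # empty span: nothing left to cover
    return covered / n >= threshold

# A rule spanning 5 terminals while covering only 3 of them:
rule_succeeds(0, 5, 3, 0.5)   # True  (3/5 >= 0.5)
rule_succeeds(0, 5, 3, 1.0)   # False (uncovered terminals remain in the span)
```

With the threshold at 0 any partial cover is accepted, which is what allows the enumeration of all analyses obtainable by deleting input terminals.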
The following is an example of a LHIP rule. At first sight this rule appears left recursive. However, the sub-rule “conjunction(Conj)” is marked as a head and therefore is evaluated before either of “s(Sl)” or “s(Sr)”. Presuming that the conjunction-rule does not end up invoking (directly or indirectly) the s-rule, then the s-rule is not left-recursive.
* s(conjunct(Conj,Sl,Sr)) ~~\>
s(Sl),
\*conjunction(Conj),
s(Sr).
LHIP provides a number of ways of applying a grammar to input. The simplest allows one to enumerate the possible analyses of the input with the grammar. The order in which the results are produced will reflect the lexical ordering of the rules as they are converted by LHIP. With the threshold level set to 0, all analyses possible with the grammar by deletion of input terminals can be generated. Thus, supposing a suitable grammar, for the sentence *John saw Mary and Mark saw them* there would be analyses corresponding to the sentence itself, as well as *John saw Mary*, *John saw Mark*, *John saw them*, *Mary saw them*, *Mary and Mark saw them*, etc. By setting the threshold to 1, only those partial analyses that have no unaccounted-for terminals within their spans can succeed. Hence, *Mark saw them* would receive a valid analysis, as would *Mary and Mark saw them*, provided that the grammar contains a rule for conjoined NPs; *John saw them*, on the other hand, would not. As this example illustrates, a partial analysis of this kind may not in fact correspond to a true sub-parse of the input (since *Mary and Mark* was not a conjoined subject in the original). Some care must therefore be taken in interpreting results.
This rule illustrates a number of features: *negation*, and *optional forms*. The rule will only succeed if (with respect to the area of input in which it might occur) there is a noun with no determiner. In addition, there can be optional adjectives before the noun.
* np(propernoun(N,Mods)) ~~\>
~ determiner(\_),
(? adjectives(Mods) ?),
* noun(N).
This rule illustrates the use of disjunction and embedded Prolog code. It should be noted that within the scope of a disjunction or negation, a head is local to the disjunct or negation.
* noun(X) ~~\>
( * @pussy, (? @cat ?); * @cat),
{X=cat}.
This rule illustrates a typical use of adjacency, to specify compound nouns. Adjacency is not restricted to such a use, however, but may generally be used anywhere.
* noun(missionary\_camp) ~~\> @missionary : @camp.
A number of tools are provided for producing analyses of input by the grammar under certain constraints: for example, to find the set of analyses that provide maximal coverage over the input, to find the subset of the maximal-coverage set that have minimum spans, and to find the analyses that have maximal thresholds. In addition, other tools can be used to search the chart for constituents that have been found but are not attached to any complete analysis. The conversion of the grammar into Prolog code means that the user of the system can easily develop analysis tools that apply different constraints, using the given tools as building blocks.
## 3 Implementation of the semantic module
In our approach we try to integrate the above principles into our system in order to effectively compute hypotheses for the frame-filling task. This can be done by building a lattice of *frame filling hypotheses* and possibly selecting the best one. Hypotheses are typically sequences of proper names. The lattice of hypotheses is generated by means of an LHIP *discourse grammar*. This type of grammar is used to extract *name chunks* and assemble them into the hypothesized frame structure.
### 3.1 Tree-paths representation
Parse trees obtained from the previous module are encoded into a path representation which allows us to easily specify constraints over the tree structure. A *path-sentence* is a list of *path-words*, which in turn are compound terms of the form terminal(word, path), where *word* is a constant term and *path* is a list of arc identifiers, that is, compound terms ’cat’(#number\_of\_nodes, #node, #identifier), each uniquely identifying an arc in the parse tree. The functor ’cat’ is a category name and its arguments are positive integers. For instance the representation of the parse tree:
is given by:
\[terminal(ici,\[’ADV’(1,1,14),’P’(2,1,12),’P’(2,1,11)\]),
terminal(madame,\[’N’(1,1,19),’SN’(1,1,17),’SN’(2,1,16),’P’(2,2,15),’P’(2,1,11)\]),
terminal(’Plant’,\[’NPR’(1,1,24),’SNOMPR’(1,1,22),’SN’(1,1,21),’SN’(2,2,20),’P’(2,2,15),’P’(2,1,11)\])\].
Using this representation it is possible to define a grouping operator (e.g. group/2) which, given a sequence of adjacent names, finds a subsequence of words whose least common ancestor is closer (deeper in the tree) than the least common ancestor (e.g. lca/2) of the given sequence. These two operators are very useful for imposing structural knowledge constraints, and they are straightforwardly defined as Prolog programs by:
* lca(\[terminal(\_,W)\],W).
lca(\[terminal(\_,W)|R\],P) :-
lca(R,P1),
prefix\_path(P1,P),
prefix\_path(W,P),!.
group(\[\],\[\]).
group(L,X) :-
lca(L,P),
proper\_sublist(L,X), length(X,N), N\>1,
lca(X,P1),
proper\_sublist(P1,P).
prefix\_path(A,A).
prefix\_path(\[\_|B\],C) :-
prefix\_path(B,C).
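Since each path lists the arcs from a leaf up to the root, the least common ancestor is simply the longest common suffix of the paths. For readers less familiar with Prolog, the idea can be sketched in Python (arc identifiers are abbreviated for readability, and the function name is ours):

```python
def lca(paths):
    """Least common ancestor of leaf-to-root arc paths: the longest
    common suffix of all paths (cf. prefix_path/2 above, which checks
    that one path is a suffix of another)."""
    rev = [list(reversed(p)) for p in paths]   # root-to-leaf order
    common = []
    for arcs in zip(*rev):
        if all(a == arcs[0] for a in arcs):
            common.append(arcs[0])
        else:
            break
    return list(reversed(common))              # back to leaf-to-root order

# Abbreviated versions of the three paths from the example above:
ici    = ['ADV', 'P12', 'P11']
madame = ['N', 'SN17', 'SN16', 'P15', 'P11']
plant  = ['NPR', 'SNOMPR', 'SN21', 'SN20', 'P15', 'P11']
lca([ici, madame, plant])   # ['P11']
lca([madame, plant])        # ['P15', 'P11'] -- a deeper common ancestor
```

The group/2 operator then looks for a proper subsequence whose lca is strictly deeper than that of the whole sequence, as *madame* and *Plant* are here.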
### 3.2 Discourse markers
Discourse segments allow us to model dialogue by a set of pragmatic concepts (dialogue acts) representing what the user is expected to utter (for example initiation of a dialogue: *init*, expression of gratitude: *thank*, demand for information: *request*, etc.) and in that way are useful for reducing syntactic and semantic ambiguity. These are domain-dependent and must be defined for a given corpus. For their definition, we intend to follow the experiments done in the context of Verbmobil (see for example ). In our specific case, identifying special words serving both as separators among logical subparts of the same sentence and as introducers of semantic constituents allows us to search for name sequences to fill a particular slot only in the interesting parts of the sentence. One of the most important separators is the *announcement-query separator*. The LHIP clauses defining this separator can be one- or more-word covering rules like, for instance:
* ann\_query\_separator #1.0 ~~\>
@terminal(’téléphone’,\_).
ann\_query\_separator #1.0 ~~\>
( @terminal(’numéro’,\_):
@terminal(’de’,\_):
(? @terminal(’téléphone’,\_) ?)).
As an example of a semantic-constituent introducer we propose the rule:
* street\_intro(\[T,Prep,Det\],1) #1.0 ~~\>
* street\_type(T),
preposition(Prep),
determiner(Det).
which makes use of some world knowledge about street types coming from an external thesaurus, like:
* street\_type(terminal(X,P)) ~~\>
@terminal(X,P),
{thesaurus(street,W),member(X,W)}.
### 3.3 Generation of hypotheses
The generation of hypotheses for filling the frame is performed by: composing weighted rules, assembling chunks and filtering possible hypotheses.
#### 3.3.1 Weighted rules
The main assumption on which the probabilistic approach to NLP is based is that language is a random phenomenon with its own probability distribution function: *coverage* is often translated as *expectation* in a probabilistic sense. Changing perspective and considering language just as an *uncertain* and *imprecise* phenomenon, and understanding as a *perception* process, it is natural to think of *fuzzy* models of language (see and ). Recently, fuzzy reasoning has been partially integrated into a CLP paradigm (see ) in order to deal with so-called *soft constraints* in weighted *constraint logic grammars*. We took some inspiration from the above proposal for integrating fuzzy logic and parsing to compute the weights assigned to each frame-filling hypothesis. Each LHIP rule returns a confidence factor together with the sequence of names. The confidence factor for a rule can either be assigned statically (e.g. to pre-terminal rules) or be computed by recursively composing the confidence factors of sub-constituents. Confidence factors are combined by choosing the minimum among the confidences of the sub-constituents. It is possible that there is not enough information for filling a slot. In this case the grammar should provide a means of producing an empty constituent when all possible hypothesis rules have failed. This is possible using negation and epsilon-rules in LHIP, as shown in the following rules for dealing with street names.
* found\_street\_name(L,Conf) #1.0 ~~\>
* street\_intro(Intro,Conf),
name\_list(X),
{append(Intro,X,L)}.
found\_street\_name(X,0.3) ~~\>
* name\_list(X).
hyp\_street\_name(Street,Conf) ~~\>
* found\_street\_name(Street,Conf).
hyp\_street\_name(\[\],1) ~~\>
~found\_street\_name(\_,\_),
lhip\_true.
where name\_list(X) accounts for a sequence of adjacent proper names and lhip\_true corresponds to the empty sequence.
Observe that in this particular case there is no need to select the minimum confidence factor among the sub-constituents of the rule found\_street\_name, since street\_intro(Intro,Conf) is the only weighted sub-constituent and simply propagates its confidence factor.
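The fuzzy-AND combination of confidence factors, and the subsequent selection of the best-weighted hypothesis, can be summarized in a few lines of Python (illustrative only; in the system this happens inside the LHIP rules themselves, and the function names are ours):

```python
def combine(confidences):
    """A constituent is only as credible as its weakest sub-constituent:
    confidence factors are combined by taking the minimum (fuzzy AND)."""
    return min(confidences) if confidences else 1.0   # empty body: nothing to weaken it

def best_hypothesis(hypotheses):
    """Pick the frame-filling hypothesis with the highest weight;
    each hypothesis is a (slots, weight) pair."""
    return max(hypotheses, key=lambda h: h[1])

# A street-name hypothesis built from an introducer (conf 1.0) and a
# bare name list (conf 0.3) gets the weight of its weakest part:
combine([1.0, 0.3])   # 0.3

best_hypothesis([({'street': 'X'}, 0.3), ({'street': 'Y'}, 0.9)])
# selects the hypothesis with weight 0.9
```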
#### 3.3.2 Chunk assembling
The highest level constituent is represented by the whole frame structure which simply specifies the possible orders of chunks relative to slot hypotheses. A rule for a possible frame hypothesis is:
* frame(Caller\_title, Caller\_name,
Target\_title, Target\_name,
Street\_name, Street\_number,
Locality, Weight)
~~\> hyp\_caller(Caller\_title,Caller\_name,C1),
* ann\_query\_separator,
hyp\_target(Target\_title,Target\_name,C2),
* location\_intro,
hyp\_street\_name(Street\_name,C3),
hyp\_street\_number(Street\_number,C4),
hyp\_locality\_name(Locality,C5),
{minlist(\[C1,C2,C3,C4,C5\],Weight)}.
In this rule we specify a possible order of chunks interleaved with separators and introducers. The computation of the global weight may be more complex than in the above rule, which simply uses the minimum of the individual hypothesis confidence values. Here we did not impose any structural constraint (e.g. preferring name chunks belonging to the minimal common sub-tree, or those having the longest sequence of names belonging to the same sub-tree).
#### 3.3.3 Filtering and query generation
The obtained frame hypotheses can be further filtered by using both structural knowledge (e.g. constraints over the tree-path representation) and world knowledge. In order to combine the information extracted in the previous analysis step into the final query representation, which can be directly mapped into the database query language, we make use of a frame structure in which slots represent information units or attributes in the database. A simple notion of context can be useful to fill by default those slots for which we have no explicit information. For this type of *hierarchical reasoning* we exploit the meta-programming capabilities of logic programming, using a meta-interpreter which allows multiple inheritance among logical theories . More precisely, we make use of the special *retraction* operator “$`-`$” for composing logic programs, which allows us to easily model the concept of inheritance in hierarchical reasoning. The expression $`P-Q`$, where $`P`$ and $`Q`$ are meta-variables denoting arbitrary logic programs, means that the resulting logic program contains all the definitions of $`P`$ except those that are also defined in $`Q`$.
The definition of the *isa* operator is obtained by combining the retraction operator with the union operator (e.g. $`\cup `$), which simply makes the physical union of two logic programs:
$$P\;isa\;Q=P\cup (Q-P).$$
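Treating a theory, purely for illustration, as a mapping from predicate names to definitions, the union, retraction and derived *isa* operators can be sketched as follows (a simplification: real program composition operates on clause definitions, not key-value pairs):

```python
def union(p, q):
    """P u Q: union of two programs; on a name clash P's definition
    is kept, consistent with the isa construction below."""
    return {**q, **p}

def minus(q, p):
    """Q - P: retraction -- all definitions of Q except those also defined in P."""
    return {k: v for k, v in q.items() if k not in p}

def isa(p, q):
    """P isa Q = P u (Q - P): P inherits Q's definitions but its own win."""
    return union(p, minus(q, p))

query          = {'locality': 'Geneva'}
query_defaults = {'identification': 'person', 'phone_type': 'standard', 'loc_type': 'city'}

completed = isa(query, query_defaults)
# completed keeps 'Geneva' for locality and fills the other slots from the defaults
```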
As an example of the above definitions we provide some default theories which have been used to represent part of the world knowledge in our domain. The *rules* theory contains rules for inferring the locality or the locality type when they are not explicitly mentioned in the query.
* locality(City) :-
caller\_prefix(X),
prefix(X,City).
loc\_type(Type) :-
locality(City),
gis(City,Type).
where prefix/2 and gis/2 are world knowledge bases (i.e. a collection of facts grouped in a theory called *kb*) and caller\_prefix/1 can be easily provided from the answer system.
If some information is missing then the system tries to provide some default additional information to complete the query. The following theory contains definition for some mandatory slots which need to be filled in case of incomplete queries, like for instance in the theory *query\_defaults*:
* identification(person).
phone\_type(standard).
loc\_type(city).
Finally, starting from an incomplete query which does not account for all the required information, we can use deduction to generate the query completion, for instance by asking:
* ?- demo((query *isa* query\_defaults) $`\cup `$ rules $`\cup `$ kb, loc\_type(X)).
## 4 Conclusions
From even a superficial observation of the human language understanding process, it appears clear that no deep competence in the underlying structure of the spoken language is required in order to process acceptably distorted utterances. On the other hand, the more experienced the speaker, the more probable is a successful understanding of that distorted input. How can this kind of fault-tolerant behavior be reproduced in an artificial system by means of computational techniques? Several answers have been proposed to this question and many systems implemented so far, but none of them is capable of dealing with robustness as a whole.
As examples of robust approaches applied to dialogue systems we cite here two systems which are based on similar principles.
In the dialogos human-machine telephone system (see ), the robust behavior of the *dialogue management* module is based both on a contextual knowledge base of pragmatic expectations and on the dialogue history. The system identifies discrepancies between expectations and the actual user behavior, and in that case it tries to rebuild the dialogue consistency. Since both the domain of discourse and the user’s goals (e.g. railway timetable inquiry) are clear, it is assumed that the system and the user cooperate in achieving reciprocal understanding. Under this assumption the system pro-actively asks for the query parameters and is able to account for those spontaneously proposed by the user.
In the syslid project (see ), a robust parser constitutes the *linguistic component* (LC) of the *query-answering dialogue system*. An utterance is analyzed while its semantic representation is constructed at the same time. This semantic representation is further analyzed by the *dialogue control module* (DC), which then builds the database query. Starting from a *word graph* generated by the speech recognizer module, the robust parser produces a search path through the word graph. If no complete path can be found, the robust component of the parser, which is an island-based chart parser (see ), selects the maximal consistent partial results. The parsing process is also guided by a *lexical semantic knowledge base* component that helps the parser resolve structural ambiguities.
We can conclude that robustness in dialogue is crucial when the artificial system takes part in the interaction, since inability or low performance in processing utterances would cause unacceptable degradation of the overall system. As pointed out in , it is better to have a dialogue system that tries to guess a specific interpretation in case of ambiguity rather than ask the user for a clarification. If this first commitment later turns out to be a mistake, a robust behavior will be able to interpret subsequent corrections as repair procedures to be issued in order to reach the intended interpretation.
# Nonlinear Pulsations of Convective Stellar Models
## 1. Introduction
Richard Feynman is said to have once remarked something to the effect of “now that we have solved the problem of quantum electrodynamics, we will have to solve the real hard problems, such as how water flows in a pipe”. The stellar problem, because of convection on top of turbulence and the compressibility of the fluid, is even harder to tackle, and several generations of astrophysicists have tried to come to terms with it. Turbulence and convection (TC) are necessarily 3-dimensional phenomena, and with the development of faster computers increasingly realistic numerical simulations are being made, although it will be a long time before their spatial resolution approaches that required by the large stellar Rayleigh and small Prandtl numbers. In the meantime stellar physicists continue to attempt to reduce TC to a 1D recipe, and thus to a mere subroutine that can be used in stellar evolution or pulsation calculations (for a recent update on astrophysical convection c.f. e.g. Buchler & Kandrup 2000). In this paper we review some interesting recent developments in nonlinear pulsation calculations.
The seminal and most influential work has been the mixing length theory (MLT) of Erika Böhm-Vitense (c.f. Cox & Giuli 1968), and many of the newer recipes are extensions of MLT. In its original form it consists of an instantaneous, local approximation in which the convective flux is proportional to the 3/2 power of the convectively unstable entropy gradient, $`F_c\propto (ds/dr)^{3/2}`$.
The TC recipe that works best in stellar evolution is not necessarily the best for stellar pulsations. Indeed, in stellar evolution the convective timescales are typically many orders of magnitude smaller than the evolutionary time-scales. Furthermore, convective overshooting is very important because it mixes the chemical elements with often drastic consequences for nuclear burning and the subsequent evolution. In contrast, mixing plays no role in stellar pulsation because the pulsating envelopes are chemically homogeneous. But here we have large velocity fields and shear motions, so that time-dependence of TC may have to be taken into account. Thus the pulsation timescales, while generally longer than the convective time-scales, are sufficiently close so that there can be a feedback between pulsation and convection. This feedback is further enhanced because the convective regions that are caused by large opacity are also regions where pulsational driving occurs. An illustration of the time-dependence of the turbulent energy and convective flux during a pulsational cycle has been presented in Buchler, Yecko, Kolláth, & Goupil (1999, \[BYKG99\], Figs. 1 and 2).
How good are MLT and its extensions? An important 3D simulation by Cattaneo, Brummell & Toomre (1991) indicated first, that the convective flow is dominated by large downflows, but that these flows are ’convectively neutral’ in the sense that they carry as much kinetic energy downward as enthalpy upward, and second, that the convective flux is dominated by small scale upflows, precisely the type of picture that underlies MLT. However, for computational reasons, the Prandtl numbers used in the calculations were orders of magnitude larger than the stellar ones, and the Rayleigh numbers orders of magnitude smaller. Furthermore, the boundary conditions were fixed, whereas in a star convection has to adjust itself so that together with radiation it carries the given total energy flux (in a static context).
We recall the hydrodynamic equations in the context of radial stellar pulsation:
$$\frac{du}{dt}=-\frac{1}{\rho }\frac{\partial }{\partial r}(p+p_t+p_\nu )-\frac{GM}{r^2}$$
(1)
$$\frac{de}{dt}+p\frac{dv}{dt}=-\frac{1}{\rho r^2}\frac{\partial }{\partial r}\left[r^2(F_r+F_c)\right]-𝒞$$
(2)
For the hydrodynamics all we need is a recipe for the turbulent pressure $`p_t`$, the eddy viscous pressure $`p_\nu `$, the convective flux $`F_c`$ and the source and sink of turbulent energy $`𝒞`$.
## 2. The Turbulent Convective Model Equations
Many recipes have been suggested to compute these four quantities. A very nice, albeit dated, review is that of Baker (1987), and for an update see Montesinos et al. (1999). Many of these are far too complicated (e.g. up to 10 nonlinear PDEs) and numerically tricky to implement in hydrocodes. Since this review concerns primarily stellar pulsations with an emphasis on nonlinear calculations, we will limit ourselves to mentioning the time-dependent recipes that have actually been used in nonlinear hydrodynamics calculations. All these recipes involve the addition of a single time-dependent diffusion equation for the turbulent energy $`e_t`$
$$\frac{de_t}{dt}+(p_t+p_\nu )\frac{dv}{dt}=-\frac{1}{\rho r^2}\frac{\partial }{\partial r}(r^2F_t)+𝒞$$
(3)
The ancillary, defining equations are
| $`𝒞`$ | = | $`𝒮-ϵ`$ | |
| --- | --- | --- | --- |
| $`ϵ`$ | = | $`\alpha _de_t^{3/2}/\mathrm{\Lambda }`$ | |
| $`\mathrm{\Lambda }`$ | = | $`\alpha _\mathrm{\Lambda }H_p`$ | |
| $`H_p`$ | = | $`(d\mathrm{ln}p/dr)^{-1}=p/(\rho g)`$ | |
| $`p_t`$ | = | $`\alpha _p\rho e_t`$ | |
| $`p_\nu `$ | = | $`-\alpha _\nu \mathrm{\Lambda }\rho e_t^{1/2}\,r\,\partial (u/r)/\partial r`$ | |
| $`F_t`$ | = | $`-\alpha _t\alpha _\mathrm{\Lambda }\rho H_pe_t^{1/2}\,\partial e_t/\partial r`$ | (4) |
For the sake of simplicity these equations disregard some features of convection and therefore have their shortcomings. They are based on a diffusion approximation ($`F_c\propto -ds/dr`$ and $`F_t\propto -de_t/dr`$) and ignore nondiffusive transport, e.g. by plumes. They also disregard pressure fluctuations and are limited to subsonic convective velocities. Radiative losses in the convection, however, can easily be incorporated (Wuchterl & Feuchtinger 1998, Buchler & Kolláth 2000 \[BK00\]). It is only a comparison with observational constraints that will ultimately decide on the quality of the approximations in the context of stellar pulsations.
The recipes that have been used in the nonlinear codes fall into three groups, depending on the choice of the functional dependence of $`F_c`$ and $`𝒞`$ on $`e_t`$ and on the dimensionless entropy gradient, which we call $`𝒴\equiv -(H_p/c_p)\,ds/dr`$ (not to be confused with the helium abundance). We refer to BK00 for further details. They are (1.) the Stellingwerf (1982) \[S\] formulation, used by the Italian group (Bono & Stellingwerf 1994; Bono, Caputo, Castellani & Marconi 1997 \[BCCM97\]) and by Gehmeyr (1992); (2.) the Kuhfuß \[K\] formulation (1986, c.f. also Gehmeyr & Winkler 1992), used in the Vienna code (Feuchtinger 1998, 1999a \[F99a\]); and (3.) a hybrid Florida \[FL\] formulation (Yecko, Kolláth & Buchler 1998 \[YKB98\]) that has been used by Kolláth, Beaulieu, Buchler & Yecko (1998 \[KBBY98\]). The Florida hydrocode has recently been extended to run with all three schemes (see below) and, importantly, also to perform a linear stability analysis (linear periods and growth-rates).
| $`F_c`$ | = | $`A/B`$ | $`e_t(𝒴)^{1/2}`$ | | |
| --- | --- | --- | --- | --- | --- |
| $`𝒮`$ | = | $`\alpha _dB/\mathrm{\Lambda }`$ | $`e_t(𝒴)^{1/2}`$ | (S) | |
| $`F_c`$ | = | $`A`$ | $`e_t^{1/2}𝒴`$ | | |
| $`𝒮`$ | = | $`\alpha _dB^2/\mathrm{\Lambda }`$ | $`e_t^{1/2}𝒴`$ | (KGW) | |
| $`F_c`$ | = | $`A`$ | $`e_t^{1/2}𝒴`$ | | |
| $`𝒮`$ | = | $`\alpha _dB/\mathrm{\Lambda }`$ | $`e_t(𝒴)^{1/2}`$ | (YKB) | (5) |
where

| $`A`$ | = | $`\alpha _c\alpha _\mathrm{\Lambda }\rho c_pT`$ | |
| --- | --- | --- | --- |
| $`B`$ | = | $`\alpha _s\alpha _\mathrm{\Lambda }\sqrt{p\beta T/\rho }=\alpha _s\alpha _\mathrm{\Lambda }\sqrt{c_pT\nabla _a}=\alpha _s\alpha _\mathrm{\Lambda }\sqrt{\beta T/\mathrm{\Gamma }_1}\,c_s`$ | |
| $`\beta `$ | = | $`(\partial \mathrm{ln}v/\partial T)_p`$ | (6) |
The recipes involve a total of 7 dimensionless $`\alpha `$ parameters that are of order unity, but for which theory gives little guidance. They ultimately have to be calibrated by comparing the numerical results to the stellar observations.
The three schemes have been compared in BK00. In the stationary limit, and with overshooting disregarded (no $`F_t`$), eq. (3) reduces to $`𝒞=0`$. In other words, it gives an expression for $`e_t`$ in terms of the local, instantaneous physical quantities such as density and temperature and their gradients, and the ancillary equations (5) then provide the convective flux. This is then equivalent to standard MLT. However, in standard MLT $`p_t`$ and $`p_\nu `$ are omitted (which is not a good approximation; c.f. below), although they could readily be included once $`e_t`$ is known.
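With the (YKB) expressions listed above, this stationary limit can even be written in closed form: setting $`𝒮=ϵ`$ gives $`e_t=B^2𝒴`$, and hence $`F_c=Ae_t^{1/2}𝒴=AB𝒴^{3/2}`$, i.e. exactly the MLT scaling of the flux with the superadiabatic gradient. A minimal numerical sketch (symbol names follow eqs. 4-6; the numerical values are arbitrary):

```python
def stationary_et(B, Y):
    """Stationary, local limit of the YKB scheme: S = eps, i.e.
    alpha_d*(B/Lambda)*e_t*Y**0.5 = alpha_d*e_t**1.5/Lambda  =>  e_t = B**2 * Y."""
    return B**2 * Y

def convective_flux(A, B, Y):
    """F_c = A * e_t**0.5 * Y, which reduces to A*B*Y**1.5 here."""
    return A * stationary_et(B, Y)**0.5 * Y

# MLT scaling check: quadrupling the superadiabatic gradient Y
# multiplies the flux by 4**1.5 = 8.
f1 = convective_flux(A=1.0, B=2.0, Y=0.1)
f2 = convective_flux(A=1.0, B=2.0, Y=0.4)
f2 / f1   # ~ 8.0
```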
In the time-dependent context one might expect the three formulations to have a very different behavior. However, as shown in BK00 the growth-rates differ very little, and furthermore the three recipes give essentially the same limit cycles as well (Figs. 3 and 6 of BK00).
From these, albeit limited, comparisons one is tempted to conclude that most of the differences between the published nonlinear results have more to do with the choice of the $`\alpha `$ parameters (and with some modifications such as flux limiters (Wuchterl & Feuchtinger 1998), small Péclet number corrections (BK00), or sonic dissipation (Gehmeyr & Winkler 1992)) than with the choice of the time-dependent diffusion equation for the turbulent energy.
Fig. 1: Linear stability of a sequence of models. Comparison of different approximations with exact results. Left: fundamental mode; right: first overtone mode.
## 3. Linear stability properties
In evolutionary computations it is often customary to compute also the linear stability of the evolving models to delineate the instability strips. For expediency several approximations are often made. We want to stress again (YKB98, BYKG99) that some of these approximations are not very good, as we go on to show in Figure 1. From left to right we display the normalized growth-rates $`\eta =2\kappa P`$ of the fundamental and first overtone, respectively, for a sequence of Cepheid models (M=5 M<sub>⊙</sub>, L=2060 L<sub>⊙</sub>, Z=0.02) as a function of T$`_{eff}`$ ($`\kappa `$ is the growth-rate and P the period).
The solid lines denote the nonadiabatic growth-rates, consistent with the hydro and TC equations. The crossed solid lines correspond to the frequently used frozen-flux approximation: the MLT flux is included in the computation of the static equilibrium model, but it is ignored in the computation of the linear eigenvalues, and the eddy viscous pressure is also disregarded. Clearly this approximation misses even the blue edge by $`\sim `$600 K for the F mode and indicates instability for the O1 mode, which is solidly stable. In the next two approximations everything is linearized correctly except that (1) the perturbation of the turbulent pressure is disregarded (dotted line); (2) the perturbation of the convective flux is disregarded (long-dashed line); neither is a good approximation. The best ’simple’ approximation is (dashed line): the MLT expression for the flux (derived from $`𝒞=F_t=0`$), its linearization, and inclusion of the eddy viscous pressure. Of course it is also important to choose ’proper’ values of the $`\alpha `$ parameters. A survey of the model behavior makes it quite clear that both a convective flux and a turbulent pressure are needed if one wants to get a reasonable IS (instability strip). This importance of the eddy viscous pressure was already pointed out a dozen years ago by Baker (1987).
In addition it should be remembered that nonlinear effects can shift the linear IS boundaries by a few hundred degrees (c.f. Fig. 2 below).
## 4. RR Lyrae Models, RRab and RRc
There have been several recent large surveys of nonlinear pulsations of single-mode RR Lyrae, both fundamental and first-overtone pulsations, by Bono et al. \[BCCM97\] and by Feuchtinger (1999b) \[F99b\]. F99b also compares these results to each other and to the available observations. He finds that the light-curve Fourier decomposition coefficients of both calculations agree fairly well with observations, but that there are discrepancies, both between the two calculations and with observations, in the low temperature models, i.e. in the most convective models. The pulsation amplitudes of both calculations agree well with observations (though it should be noted that this is not as stringent a test as the Fourier parameters).
As far as the shapes of the lightcurves are concerned, BCCM97 obtain sharp, but unobserved spikes (cf. their Figs. 2 and 4). F99b (c.f. also Wuchterl & Feuchtinger 1998) shows that these spikes are due to the fact that the convective flux becomes larger than its physically allowed upper limit, viz. $`F_c<\rho c_pTu_{conv}`$. This is a result of the breakdown of the diffusion approximation that is inherent in the TC equations. They propose the introduction of a ’flux limiter’ to prevent the flux from exceeding this upper limit. As a result F99b obtains light curves that look much more like the observed ones.
The RR Lyrae light-curves obtained by the Florida group do not have any unphysical spikes either, despite the fact that no flux limiter has been used. Feuchtinger (private communication) has traced the absence of spikes to a different choice of $`\alpha `$ parameters, those of the FL group giving rise to lower overall turbulent energies and velocities.
The computed radial velocity curves do not agree very well with the observations (F99b). It seems that a more thorough calibration of the $`\alpha `$ parameters is required before the final word is in on whether such a simple 1D TC equation is capable of capturing the essence of convection.
We also note that there are further constraints on the $`\alpha `$’s that need to be taken into account. First, the observed temperature (color) variations must be taken into account. Second, the double-mode RR Lyrae (RRd) impose a number of sensitive additional constraints, namely the range of periods and of temperatures over which they occur, as well as the values of the component amplitudes. Finally, we recall that the very frequent Blazhko amplitude modulations have not yet been satisfactorily explained, but that they are likely to add constraints as well.
We note, apropos of the Blazhko effect, that a possible mechanism for this effect could be an interaction between pulsation and convection. This becomes particularly favored if normally real and stable convective (diffusion) modes become oscillatory. If these additional vibrational modes are only mildly stable, and if their frequencies are in an $`n:1`$ resonance with the excited pulsational mode, then the resonance condition could cause the convective mode to interact nonlinearly with the pulsation and lead to amplitude modulations. We have checked on realistic stellar models that convective modes can indeed become oscillatory, but we had to increase the time-scale for convection by a very large factor, and no nonlinear computations have yet been performed.
## 5. Morphology of the Instability Strip – Modal Selection
On the observational side the microlensing projects have produced global pictures of the Cepheid instability strips for the SMC that are absolutely stunning (Udalski et al. 1999, Beaulieu et al. 1995).
On the theoretical side, the turbulent convective hydrodynamics codes have shed new light on the problem of modal selection in both Cepheids and RR Lyrae stars. Simultaneously, but totally independently, Feuchtinger (1998) and KBBY98 found RR Lyrae models and Cepheid models, respectively, that pulsated in the fundamental and first overtone modes simultaneously, with stable and steady amplitudes (i.e. the models were NOT switching modes).
On the basis of these computations (cf. BYKG99 and Kolláth et al. in this Volume) we can infer the following (schematic) Cepheid instability strips (IS) in an HR diagram, as shown in Fig. 2. The left subfigure depicts a linear IS. The first overtone IS, in the form of a sugarloaf, becomes stable above a certain luminosity (and mass). For simplicity we have omitted the second overtone from the picture.
Of course, nonlinear effects change the domain in which the corresponding limit cycles are stable. The right-hand subfigure depicts a schematic of the nonlinear Cepheid instability diagram. The lines from the left-hand subfigure that are shown as solid lines are the blue and red boundaries of the fundamental (F) and first overtone (O) IS’s. (The now irrelevant parts of the previously shown linear edges are shown as thin dotted lines.) The new additional solid lines are the nonlinear blue and red edges for the fundamental and first overtone modes. Thus the overtone red edge is shifted somewhat to the left, but the fundamental blue edge can be shifted substantially to the right.
Double-mode behavior occurs in the lower wedge-shaped region, delineated on the left by the overtone red edge and on the right by a dashed line. Either fundamental or first overtone (F/O) behavior occurs in the higher luminosity region that is shown as dotted. There may be a narrow region at the interface of the DM and F regimes in which either DM or fundamental behavior can occur.<sup>2</sup><sup>2</sup>2In some computations, with different $`\alpha `$ parameters, slightly more complicated interfaces have also been obtained. We note that a good global understanding of all these regimes can be obtained with the help of the amplitude equation formalism (e.g. BYKG99).
Fig. 2: Schematic Cepheid instability strip, left: linear, right: nonlinear; c.f. text.
More specifically, for example at level 1 (high luminosity) in Fig. 2 only F pulsations can occur. At level 2 we have, going from high to low T$`_{eff}`$, first a regime of first overtone pulsations, then a regime of either O or F pulsations (hysteresis), and then F only pulsations. At low luminosities, level 3, there is first a regime of O only, then of DM only, with possibly a narrow regime of either DM or F, followed by F only pulsations.
The hydrodynamic calculations indicate that DM behavior occurs only at luminosities that can be noticeably lower than the tip of the overtone instability strip. This is in agreement with the SMC observations (Fig. 5 of Udalski et al. 1999) which show a higher luminosity regime in which both F and O Cepheids occur, and a lower luminosity one in which the DM Cepheids lie (with the exception of a single star).
The RR Lyrae stars, at least within a given cluster, have essentially the same luminosity, mass and composition. The modal selection diagram is therefore essentially the same as for a narrow horizontal strip in the lower part of Fig. 2.
## 6. Classical Cepheid Pulsations
The classical Cepheids are much more diverse than the RR Lyrae stars. They span a wide range of masses, luminosities and metallicities. There are a great many more observational constraints as well, many of which have been summarized in BK00.
For example, the overtone Cepheids have a maximum period $`P_1^{max}`$ which occurs at the high luminosity tip of the IS. Next, resonances play an important role, viz. a $`P_2/P_0=1/2`$ resonance at about 10 days and a $`P_4/P_1=1/2`$ resonance around 3–4.5 days. This is evidenced by the structure of the Fourier decomposition coefficients of the light and radial velocity curves. We note that in principle it is possible to obtain a purely ’pulsational’ mass–luminosity relation by taking advantage of these two resonances. The periods and T$`_{eff}`$ at which DM behavior can occur, as well as the F/O and O<sub>2</sub>/O<sub>1</sub> amplitude ratios, add very tight constraints as well.
Fig. 3: Period-radius relation for Galactic Cepheids
Radiative Cepheid models were found wanting in many respects, besides the obvious one of not providing a red edge (for a review c.f. Buchler 1998). In particular the discrepancies are largest for the low Z models for which the linear growth-rates and consequently the pulsation amplitudes are much too large. The resonance masses are also much too small to agree with stellar evolution calculations as was discussed in Buchler, Kolláth, Beaulieu & Goupil (1996). The question therefore arises whether convection can provide a differentially stronger dissipation for the low Z models.
There have been a number of Cepheid computations by the Italian, Vienna and Florida groups, but no comprehensive calculations have yet been made to see if all observational constraints can be simultaneously satisfied. However, there is some good news. For example, with the convective hydrocodes it seems possible to improve the light and radial velocity curves, and in particular to obtain the wide excursion in the observed $`\varphi _{21}`$ Fourier phase of the overtone Cepheids, where purely radiative models had failed. The most dramatic achievement though is the modelling of DM behavior in Beat Cepheids (KBBY98).
Interestingly, despite seven adjustable $`\alpha `$ parameters it was not possible both to impose the observed upper limit for the period of the first overtone Cepheids and to obtain a reasonable width of the fundamental instability strip! This problem was solved when we included the physically required correction for inefficient convection (small Péclet number) (BK00), but at the expense of an additional, eighth free $`\alpha `$ parameter.
Some properties, such as period-radius relations seem relatively insensitive to the values of the alphas. In Figure 3 we show the P-R relation that we obtain for Galactic Cepheid models, compared to the observational data. A very similar agreement has been obtained by Bono et al. (1999).
Observations show that the SMC, LMC and Galactic Cepheids are remarkably similar. For example, the maximum first overtone periods lie around 6 days.<sup>3</sup><sup>3</sup>3If one disregards V440 Per, which may be an oddball. They have approximately the same luminosity, the same pulsation amplitudes, the same Fourier decomposition coefficients, and the dominant F and O1 resonances are almost in the same place, i.e. near 10d and 3–4.5d, respectively.
The not-so-good news is that at this time it does not seem possible to obtain good models both for the Galactic and for the low Z Magellanic Cloud Cepheids with the same calibration of the eight $`\alpha `$ parameters. Turbulent convection does not provide larger dissipation for the low Z models. The difficulty that was encountered with the radiative models thus persists with the convective models, and one may wonder whether the difficulty still lies with the opacities, this time with H, He or with the lower temperature H<sup>-</sup> and molecular opacities.
## 7. Pop. II Cepheid Pulsations
Pop. II Cepheids are variable stars that have lower metallicity than their classical siblings. They are also believed to have much smaller masses for the same luminosity, which on the one hand makes them much more linearly unstable to pulsation, and on the other hand gives rise to much larger pulsation amplitudes.
Observationally these stars, known as W Vir and RV Tau stars, range from periodic at low periods to strongly irregular at cycling times of 70 days. The irregular behavior seems to set in at periods of about 25–30 days (Arp 1955; Pollard, this volume).
The recent nonlinear analysis of the AAVSO observational data of R Sct (Buchler, Kolláth, Serre & Mattei 1995) and of AC Her (Kolláth, Buchler, Serre & Mattei 1998) showed very clearly that the mechanism for the irregular behavior is the nonlinear interaction between the excited (linearly unstable) mode and a (linearly stable) overtone. Technically speaking, the dynamics takes place in a 4D subspace of phase space, and the pulsations are thus low-dimensional chaos.
This analysis corroborates numerical hydrodynamical results obtained a decade ago which showed that the irregular behavior of W Vir models was also due to low-dimensional chaos (Buchler & Kovács 1986, Kovács & Buchler 1987, Aikawa 1987, Buchler, Goupil & Kovács 1987). However, the onset of the irregular behavior occurred in these radiative models at periods as low as 8 days, i.e. much lower than observations indicate. Glasner & Buchler (1990) included a very simplistic MLT model in the hydro-code, and this pushed the onset of chaos to higher periods. More recent calculations with the TC Florida code also show a shift in the same direction.
The basic nature of the irregular behavior is now understood, but clearly more work is necessary to obtain more detailed agreement with the observations.
## 8. Mira Pulsations
Convection plays an essential role in the cool and very extended Mira variables, and they are hard to model with much confidence. There is still a debate about whether the stars pulsate in the fundamental or the first overtone mode.
Ya’ari & Tuchman (1996) have modelled the nonlinear pulsations of these stars with very interesting results (see also Dorfi & Feuchtinger in this Volume). The large amplitude pulsations that develop cause a structural rearrangement of the star. Consequently the nonlinear period is quite different from both the linear fundamental and first overtone periods. However, convection is treated with a standard time-independent MLT approach, and unfortunately eddy viscosity is ignored in their computations. The latter reduces the pulsational amplitude, and could cause a qualitative change in the results.
## 9. Conclusions
In recent years several groups have included a description of turbulent convection in their hydrocodes. The addition of a simple nonlinear time-dependent diffusion equation for turbulent energy with concomitant convective flux and eddy viscous pressure has led to important improvements in RR Lyrae and in Cepheid models. Most striking has been the ability of these codes to model DM pulsations in both RR Lyrae and Cepheids.
However, it is clear that small discrepancies remain in the RR Lyrae models, both in the single-mode RRab and RRc and in the double-mode RRd. The next step is to see if a better calibration of the free $`\alpha `$ parameters can bring us into better agreement with the plethora of observational data.
In the Cepheid modelling, one obtains reasonable agreement for the Galactic Cepheids, even with a preliminary crude calibration, but for the time being it remains a puzzle why the low Z models fail so strikingly.
## 10. Acknowledgements
It is a pleasure to acknowledge numerous valuable discussions with my collaborators Z. Kolláth, P. Yecko, M.-J. Goupil, J.-P. Beaulieu and M. Feuchtinger. This work has been supported by the National Science Foundation (AST9528338, AST9819608, INT9820805).
## References
Aikawa, T. 1987, ApSS, 139, 281
Arp, H. C. 1955, AJ 60, 1.
Baker, N. 1987, in Physical processes in comets, stars, and active galaxies, eds., W. Hillebrandt, E. Meyer-Hofmeister, and H.-C. Thomas, Springer, NY.
Beaulieu, J.P. et al. 1995, AA 303, 137
Bono, G., Stellingwerf, R.F. 1994, ApJ Suppl 93, 233–269
Bono, G., F. Caputo, V. Castellani & M. Marconi, 1997, AA 121, 327.
Bono, G., F. Caputo, V. Castellani, & M. Marconi, 1999, ApJ 512, 711
Buchler, J. R. 1998, in A Half Century of Stellar Pulsation Interpretations: A Tribute to Arthur N. Cox, eds. P.A. Bradley & J.A. Guzik, ASP 135, 220
Buchler & Kandrup, 2000, Proceedings of the Florida Workshop on Astrophysical Turbulence and Convection, An. N. Y. Acad. Sci. (in press)
Buchler, J. R. & Kolláth, Z. 2000, in Buchler & Kandrup, loc. cit.
Buchler, J.R., Kolláth, Z., Beaulieu, J.P., Goupil, M.J., 1996, ApJLett 462, L83
Buchler, J. R., Kolláth, Z., Serre, T. & Mattei, J. 1996, ApJ 462, 489
Buchler, J.R., Goupil, M.J. & Kovács, Z. 1986, Physics Lett. A 126, 177
Buchler, J.R. & Kovács, Z. 1986, ApJ 320, L57-L62
Buchler, J.R. & Kovács, Z. 1986, ApJ 308, 661
Buchler, J. R., Serre, T., Kolláth, Z. & Mattei, J. 1995, Phys. Rev. Lett. 74, 842
Buchler, J. R., Yecko, P., Kolláth, Z., Goupil, M. J. 1999, Turbulent Convection in Pulsating Stars, in Giménez et al. , loc. cit. ; (also astro-ph/9901188)
Buchler, Goupil & Kovács 1987
Cattaneo, F. Brummell, N. H. & Toomre, J. 1991, ApJ 370, 282.
Cox, J. P. & Giuli R. T. 1969, Principles of Stellar Structure (New York: Gordon and Breach)
Feuchtinger, M. U. 1998, AA 337, L29
Feuchtinger, M. U. 1999a, AA Suppl 136, 217
Feuchtinger, M. 1999b, A&A, in press
Gehmeyr, M. 1992, ApJ 399, 265
Gehmeyr, M. , Winkler, K.-H. A. 1992, AA 253, 92–100; ibid. 253, 101–112
Glasner, A. & Buchler, J. R. 1990, in The Numerical Modelling of Nonlinear Stellar Pulsations; Problems and Prospects, ed. J. R. Buchler, NATO ASI Ser. C302 (Dordrecht : Kluwer), p. 109
Kolláth, Z., Beaulieu, J.P., Buchler, J. R. & Yecko, P. 1998, ApJ 502, L55 \[KBBY98\]
Kolláth, Z., Buchler, J. R., Serre, T. & Mattei, J. 1998, A&A 329, 147
Kovács, G. & Buchler, J.R. 1987, ApJ 324, 971
Kuhfuß, R. 1986, AA 160, 116
Giménez, A., Guinan E.F. and Montesinos, B. 1999, Theory and Tests of Convection in Stellar Structure, ASP Conf. Ser. Vol 173.
Stellingwerf, R.F. 1982, ApJ 262, 330
Udalski et al. 1999, Acta Astronomica 49, 1 (astro-ph/9903391).
Wuchterl, G. & Feuchtinger, M. 1998, AA 340, 419.
Ya’ari, A. & Tuchman, Y., 1996, ApJ 456, 350
Yecko, P., Kolláth Z., Buchler, J. R. 1998, A&A 336, 553 \[YKB\]
# NORDITA-1999/59 HE LAPTH-749/99 hep-ph/9909519 September 27, 1999 𝜓' to 𝜓 Ratio in Diffractive Photoproduction
## I Introduction
Quarkonium production is a hard process in which the heavy quarks are produced with limited relative momentum. This kinematic restriction implies that the standard QCD factorization theorem does not apply, i.e., there is no guarantee that the cross section can be expressed as a product of universal parton distributions and a partonic subprocess. Quarkonium production is thus sensitive to the environment and yields new information about the dynamics of hard processes. The abundant data impose severe restrictions on theoretical models (for a discussion see Ref. ). In particular, the recent CDF data on charmonium polarization at large $`p_{}`$ in $`p\overline{p}`$ collisions disagree with the color octet model prediction.
The ratio of cross sections for radially excited states is sensitive to the time-scale of the production process. If all relevant interactions occur while the quark pair is compact, $`r_{}1/m_Q`$, the direct production cross section is proportional to $`|\mathrm{\Phi }(0)|^2`$, the square of the quarkonium wave function at the origin. Conversely, late interactions depend on the shape of the wave function out to its Bohr radius, $`r_{}𝒪(1/(\alpha _sm_Q))`$. The $`\sigma (\psi ^{})/\sigma (J/\psi )`$ ratio is thus a very interesting quantity which may give clues on the correct production mechanism.
In forward charmonium hadroproduction the ratio of $`\psi ^{}`$ to $`J/\psi `$ cross sections is found to be roughly independent of the kinematics and of the size of the nuclear target . Its value is moreover consistent with the expectation based on the proportionality to $`|\mathrm{\Phi }(0)|^2`$ ,
$$R^{hN}\equiv \frac{\sigma ^{hN}(\psi ^{\prime })}{\sigma _{dir}^{hN}(J/\psi )}=\frac{\mathrm{\Gamma }(\psi ^{\prime }\to e^+e^{-})}{\mathrm{\Gamma }(J/\psi \to e^+e^{-})}\,\frac{M_{J/\psi }^3}{M_{\psi ^{\prime }}^3}\simeq 0.24,$$
(1)
suggesting that late interactions are unimportant for quarkonia produced in hadron fragmentation regions. In nuclear fragmentation regions the ratio is measured to be smaller than in Eq. (1). This is explained by the larger break-up cross section of the $`\psi ^{}`$ in late interactions with the nuclear comovers.
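As a quick numerical check of Eq. (1), one can insert the leptonic widths and masses directly; the input values below are approximate PDG numbers supplied here for illustration, not quoted from the text:

```python
# Assumed inputs: leptonic widths in keV, masses in GeV (approximate PDG values).
gamma_psi2s = 2.33   # Gamma(psi' -> e+ e-)
gamma_jpsi = 5.55    # Gamma(J/psi -> e+ e-)
m_jpsi, m_psi2s = 3.097, 3.686

R = (gamma_psi2s / gamma_jpsi) * (m_jpsi / m_psi2s) ** 3
print(f"R^hN = {R:.3f}")  # ~0.25, consistent with the quoted 0.24
```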
Preliminary CDF results show that $`52\pm 12\%`$ of the $`\mathrm{{\rm Y}}(1S)`$ are directly produced. Given that the $`\mathrm{{\rm Y}}(3S)`$ is produced only directly one deduces from the published total cross sections that $`\sigma (3S)/\sigma _{dir}(1S)=0.4\pm 0.1`$, consistent with the expectation $`0.3\pm 0.05`$ based on Eq. (1) for bottomonium.
Diffractive charmonium electroproduction, $`\gamma ^{(*)}p\to \psi (nS)p`$, is believed to occur via two-gluon exchange . At large $`Q^2`$, the size of the quark pair in the virtual photon wave function is $`1/Q\ll 1/(\alpha _sm_c)`$, and the cross section is predicted to be proportional to $`|\mathrm{\Phi }_n(0)|^2`$. The $`\psi ^{\prime }`$ to $`J/\psi `$ ratio at large $`Q`$ is indeed measured to be consistent with $`|\mathrm{\Phi }_{\psi ^{\prime }}(0)/\mathrm{\Phi }_{J/\psi }(0)|^2\simeq 0.5`$–$`0.6`$ . On the other hand, for photoproduction the measured value
$$R_{el}^{\gamma N}\equiv \frac{\sigma (\gamma p\to \psi ^{\prime }p)}{\sigma (\gamma p\to J/\psi p)}=0.15\pm 0.034\qquad (Q^2=0)$$
(2)
is about a factor 3 below the value found at large $`Q^2`$.
It was pointed out that due to the moderate value of the charm quark mass and a factor $`r_{}^2`$ from the coupling of the two gluons, the photoproduction amplitude in fact probes the charmonium wave function at a transverse size $`r_{}`$ which is comparable to the Bohr radius. This reduces the $`\psi ^{}`$ contribution due to the node in its wave function. The range of transverse size which is probed decreases when the virtuality $`Q^2`$ of the photon increases. It is thus possible to measure the shape of the charmonium wave function using electroproduction data.
In Ref. an estimate of the node effect gave a value of 0.15 for the ratio (2). This value was obtained with harmonic oscillator wave functions for the bound states and with the weighted photon wave function parametrized as a sum of two Gaussian functions . We find that the result is very sensitive to the parametrization and is thus uncertain to at least 50%.
We report here a more quantitative calculation of the photoproduction ratio, in a light-cone framework using charmonium wave functions obtained from potential models. Surprisingly, the calculated ratio (2) turns out to be a factor 2 to 5 below the data. We discuss the implications of this for the charmonium wave function.
## II Evaluation of the Ratio
The measured $`\psi ^{}/J/\psi `$ photoproduction ratio is consistent with being independent of the photon energy . At the H1 energy the incoming photon fluctuates into the $`c\overline{c}`$ pair long before the target. We therefore work in the high energy regime where the transverse size $`r_{}`$ of the ($`S`$-wave) $`c\overline{c}`$ pair is frozen during its interactions in the target and distributed according to the photon wave function
$$\mathrm{\Phi }_\gamma (x,𝒓_{\perp })\propto \sqrt{x(1-x)}\,K_0(m_cr_{\perp }),$$
(3)
where $`x`$ is the light-cone momentum fraction carried by the $`c`$ quark.
Each exchanged gluon couples to the $`c\overline{c}`$ pair with a strength proportional to the color dipole length $`r_{}`$ (in the usual approximation where the gluon wavelengths are long compared to $`r_{}`$). The forward scattering amplitude is then given by an overlap with the charmonium wave function $`\mathrm{\Phi }_\psi `$:
$$\mathcal{M}_\psi \propto \int \frac{dx}{\sqrt{x(1-x)}}\int d^2𝒓_{\perp }\,\mathrm{\Phi }_\gamma (x,𝒓_{\perp })\,r_{\perp }^2\,\mathrm{\Phi }_\psi (x,𝒓_{\perp })^{*}.$$
(4)
We consider two models for $`\mathrm{\Phi }_\psi `$, obtained from the non-relativistic Buchmüller-Tye (BT) and Cornell potentials, respectively . In the non-relativistic limit there is a simple relation between the light-cone amplitude $`\mathrm{\Phi }_\psi (x,𝒓_{})`$ appearing in Eq. (4) and the equal-time wave function $`\mathrm{\Phi }_\psi ^{NR}(𝒓)`$ given by the Schrödinger equation (see, eg., Ref. ). In momentum space,
$$\mathrm{\Phi }_\psi (x,𝒑_{\perp })=\frac{2(p^2+m_c^2)^{3/4}}{(p_{\perp }^2+m_c^2)^{1/2}}\mathrm{\Phi }_\psi ^{NR}(𝒑),$$
(5)
where
$$x=\frac{1}{2}+\frac{p^z}{2\sqrt{p^2+m_c^2}}$$
(6)
and the relative factor is fixed by the normalization conditions.
Using
$`F(𝒑_{\perp })`$ $`\equiv `$ $`{\displaystyle \int \frac{d^2𝒓_{\perp }}{8\pi }e^{i𝒑_{\perp }\cdot 𝒓_{\perp }}r_{\perp }^2K_0(m_cr_{\perp })}`$ (7)
$`=`$ $`{\displaystyle \frac{m_c^2-p_{\perp }^2}{(p_{\perp }^2+m_c^2)^3}},`$ (8)
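As a sketch of a consistency check (not part of the paper), the transform pair (7)–(8) can be evaluated numerically using the 2D radial (Hankel) form of the Fourier integral; the charm mass value is an assumption:

```python
import numpy as np
from scipy.special import j0, k0

m_c = 1.5  # GeV (assumed)

def trap(y, x):
    # trapezoidal rule (avoids numpy-version differences in np.trapz)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def F_numeric(p):
    # F(p) = (1/(8*pi)) int d^2r e^{i p.r} r^2 K0(m r)
    #      = (1/4) int_0^inf dr r^3 J0(p r) K0(m r)
    r = np.linspace(1e-8, 25.0, 200_000)
    return 0.25 * trap(r**3 * j0(p * r) * k0(m_c * r), r)

def F_closed(p):
    return (m_c**2 - p**2) / (p**2 + m_c**2) ** 3

for p in (0.0, 0.5, 1.0, 2.0):
    print(f"p = {p}: numeric = {F_numeric(p):+.6f}, closed = {F_closed(p):+.6f}")
```

On this grid the numerical transform agrees with the closed form to better than a part in 10³; note the sign change at $`p_{}=m_c`$.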
Eq. (4) reads in momentum space
$$\mathcal{M}_\psi \propto \int d^3𝒑\,\frac{(p_{\perp }^2+m_c^2)^{1/2}}{(p^2+m_c^2)^{3/4}}F(𝒑_{\perp })\,\mathrm{\Phi }_\psi ^{NR}(𝒑)^{*}.$$
(9)
We use the Mathematica program of Lucha and Schöberl to solve the Schrödinger equation for the BT and Cornell potentials. Our results for the cross section ratio (2),
$$R_{el}^{\gamma N}=\frac{|\mathcal{M}_{\psi ^{\prime }}|^2}{|\mathcal{M}_{J/\psi }|^2}$$
(10)
are shown in the first line of Table I. The squared ratio of wave functions at the origin shown on the second line is the result predicted for highly virtual photons.
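To illustrate how the overlap integral (9) suppresses the 2S state through its node, the sketch below evaluates Eq. (9) with simple 3D harmonic-oscillator 1S/2S wave functions rather than the Buchmüller-Tye or Cornell ones used in the paper; the charm mass and oscillator parameter are assumed values, so the resulting number is qualitative only:

```python
import numpy as np

m_c, beta = 1.5, 0.6  # GeV; illustrative values, not taken from the paper

def trap(y, x):
    # 1D trapezoidal rule along the last axis
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def phi_1s(p):
    return np.exp(-p**2 / (2 * beta**2))

def phi_2s(p):
    # 2S state of the 3D oscillator: radial node at p = sqrt(3/2)*beta
    return (1.5 - (p / beta) ** 2) * np.exp(-p**2 / (2 * beta**2))

def overlap(phi):
    # Eq. (9) in cylindrical coordinates, d^3p = 2*pi*p_t dp_t dp_z
    pt = np.linspace(1e-4, 6.0, 400)
    pz = np.linspace(-6.0, 6.0, 801)
    PT, PZ = np.meshgrid(pt, pz, indexing="ij")
    P = np.sqrt(PT**2 + PZ**2)
    F = (m_c**2 - PT**2) / (PT**2 + m_c**2) ** 3  # Eq. (8)
    w = np.sqrt(PT**2 + m_c**2) / (P**2 + m_c**2) ** 0.75
    integrand = 2 * np.pi * PT * w * F * phi(P)
    return trap(trap(integrand, pz), pt)

def norm(phi):
    p = np.linspace(1e-4, 8.0, 4000)
    return np.sqrt(trap(4 * np.pi * p**2 * phi(p) ** 2, p))

R = (overlap(phi_2s) / norm(phi_2s)) ** 2 / (overlap(phi_1s) / norm(phi_1s)) ** 2
print(f"|M(2S)/M(1S)|^2 ~ {R:.3f}  (vs |Phi_2S(0)/Phi_1S(0)|^2 = 1.5 for this potential)")
```

The overall sign of $`F`$ cancels in the ratio (10); what matters is the relative weighting of the 2S wave function below and above its node.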
## III Discussion
Eq. (4) gives a cross section ratio which is an order of magnitude smaller than the ratio of wave functions at the origin. This means that the photon wave function, weighted by $`r_{}^2`$, probes the charmonium wave functions at relatively large separations. As seen from Fig. 1, $`r_{}^2\mathrm{\Phi }_\gamma `$ in fact gives similar weights to the $`\psi ^{}`$ wave function in the regions below and above the node, leading to a near cancellation in the $`\psi ^{}`$ integral. This is the reason for the small value of $`R_{el}^{\gamma N}`$ and for its sensitivity to the potential.
In addition, the result for the $`\psi ^{\prime }`$ overlap integral (4) is sensitive to the charmonium wave function in the relativistic domain: only 60% of the integral comes from the momentum region $`|𝒑|\le 0.9m_c`$ (the corresponding number for the $`J/\psi `$ is 95%). This means that the theoretical calculation is not reliable for the $`\psi ^{\prime }`$ amplitude, since the wave function was obtained from the non-relativistic Schrödinger equation. The sensitivity to relativistic momenta has previously been emphasized in Ref. .
We thus have to conclude that there is no reliable theoretical prediction for the cross section ratio $`R_{el}^{\gamma N}`$.
The weighted photon wave function $`r_{}^2\mathrm{\Phi }_\gamma `$ probes $`c\overline{c}`$ pairs with $`r_{}\sim 0.5`$ fm (cf. Fig. 1). Assuming that Eq. (4) is valid in this range implies that the light-cone charmonium wave functions we used are incorrect, since our result (Table I, first line) is a factor 2 to 5 below the data (2). On the other hand, the data may be used to constrain the physical wave functions. In particular,
* The range of the photon wave function narrows with $`Q^2`$, allowing a ‘scan’ of the wave functions .
* The nuclear target $`A`$-dependence of coherent $`J/\psi `$ photoproduction is consistent with $`\sigma ^{hA}=A^\alpha \sigma ^{hN}`$, with $`\alpha =4/3`$ . This implies a small rescattering cross section, ie., a small transverse size of the $`c\overline{c}`$ pairs which contribute to $`J/\psi `$ production. It would be important to measure the $`A`$-dependence also in coherent $`\psi ^{}`$ production.
* In the case of inelastic $`J/\psi `$ photoproduction only one (Coulomb) gluon is exchanged with the target. The overlap integral corresponding to Eq. (4) then has a single power of $`r_{}`$. This means that the charmonium wave function is effectively probed at lower values of $`r_{}`$ than in elastic photoproduction. In our calculation using potential model wave functions the effect of changing the power of $`r_{}`$ in the overlap integral is quite large, as shown in the last line of Table I.
* The mechanism of charmonium hadroproduction is still uncertain, but it is likely that a single gluon is exchanged with the target (this is a feature of all present models ). Just as in inelastic photoproduction, one-gluon exchange implies a single power of $`r_{}`$ in the overlap integral (4). This is qualitatively consistent with the measured hadroproduction ratio (1), which is larger than $`R_{el}^{\gamma N}`$ but smaller than the ratio in large $`Q^2`$ electroproduction.
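For orientation, the ∼0.5 fm scale mentioned above can be recovered as the mean transverse size under the $`r_{}^2`$-weighted photon wave function; the charm mass is again an assumed value:

```python
import numpy as np
from scipy.special import k0

m_c = 1.5          # GeV (assumed)
hbar_c = 0.1973    # GeV*fm conversion factor

# Mean r under the weight d^2r r^2 K0(m r), i.e. dr r^3 K0(m r)
r = np.linspace(1e-6, 25.0, 200_000)   # in GeV^-1
w = r**3 * k0(m_c * r)

def trap(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

mean_r = trap(r * w, r) / trap(w, r) * hbar_c
print(f"<r_perp> = {mean_r:.2f} fm")  # ~0.46 fm
```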
According to the above arguments, we expect the inelastic photoproduction ratio to be the same as that measured in (inelastic) hadroproduction, $`\sigma (\psi ^{\prime })/\sigma _{dir}(J/\psi )\simeq 0.24`$. The preliminary data on inelastic $`\psi ^{\prime }`$ photoproduction are still too imprecise for a definite conclusion.
It is perhaps not so surprising that the potential model results for the charmonium wave functions are inaccurate. In particular, those models take no account of the fact that the $`\psi ^{\prime }`$ lies only 44 MeV below the $`D\overline{D}`$ threshold. It follows from the uncertainty principle that the $`\psi ^{\prime }`$ wave function contains virtual $`D\overline{D}`$ pairs with a lifetime (and size) around 4 fm. The photoproduction amplitude, however, measures only the $`|c\overline{c}\rangle `$ component of the wave function. It is quite possible that this component is narrowly distributed in $`r_{}`$, while $`c\overline{c}`$ pairs at larger separations are part of higher Fock states which contain gluons and light quarks.
## Acknowledgments
We wish to thank S. J. Brodsky and B. Pire for discussions. We are grateful for a helpful communication with J. Nemchik, and thank F. Schöberl for sending us his program. We are also grateful for the hospitality of the theoretical physics group at Ecole Polytechnique, where this work was initiated. This work was supported in part by the EU/TMR contract EBR FMRX-CT96-0008.
# 1 Introduction
## 1 Introduction
Nowadays much attention is paid to the study and modeling of different systems and their dynamics . Usually in such investigations the structure of a system is supposed to be either fixed and constant, or totally random. On the other hand, there are a number of works devoted specifically to the study of structure formation, but, as a rule, they have a very narrow focus. As far as the evolution of a system is concerned, the dynamics undoubtedly depends on the geometry of the system and includes the structure formation as well. Moreover, this property of the system dynamics has to be regarded as fundamental and general for a wide class of phenomena. Thus the process of structure formation is one of the most important in the world: from physics (atomic structure) to cosmology (the structure of the Universe), from biology (evolution and morphogenesis) to the social sciences (the formation of different social groups).
The term structure can be understood in the sense of the spatial connection of an object’s parts, or of chains of interactions, as in biology: predator–prey relationships, food chains, gene networks. In our opinion, the idea is very attractive that the rise of a structure as a result of natural processes taking place within a system occurs in a way common to a wide class of phenomena .
As an example of the simplest model of spatial structure formation, let us take the diffusion-limited aggregation (DLA) model . As has been pointed out, such models demonstrate the formation of a spatial structure with fractal properties. Another attempt to reveal structure has been made in investigations devoted to the modeling of networks of gene expression. Much attention is paid to this question in graph theory . But all these studies are particular quests in narrow fields and, moreover, at present they seem to be completely disconnected from one another.
From the standpoint of theoretical physics, it is beyond question that topology plays a crucial role in determining what phenomena can take place in a model under study. So it would be interesting to construct a model in which a topology (a structure of interactions) arises in the course of the dynamics. In this work we propose a simple model describing structure formation in the dynamics of a stochastic system, and report the results of its study. Let us emphasize that even the simplest models reveal interesting regularities.
## 2 The Model
Our world is undoubtedly very complex, so one might guess that to gain a detailed description it is necessary to write ever more sophisticated equations and to include more parameters and variables. On the one hand, this is true. But on the other, it is important to find the right point of view: to reveal the fundamental mechanisms of a phenomenon and to check them with a model. In the latter case, the simpler (but still nontrivial) a model is, the more valuable it is, since we can then be sure that we gain a background for a more detailed description. So our aim is not the details but a general, comprehensive understanding of how complex systems evolve. It is desirable that models be free of fitting parameters, so that one can study the mechanism itself rather than parameter-dependent regimes of a model. In our model, besides the size of the system we have only one parameter, $`\alpha `$, which can be loosely called the factor of environmental influence.
The dynamics of the model is as follows. At the beginning we have a system which consists of $`N`$ nodes and zero links. On the second iteration step a link between a pair of elements appears with probability $`\alpha `$, and nothing happens with probability $`1\alpha `$. Some time later we shall have a number of clusters (connected groups of nodes) in the system. (The size of a cluster is the number of nodes which form the cluster.) At the K-th iteration step one of the nodes is randomly chosen. If it does not belong to any cluster, in other words has no links, a link appears with probability $`\alpha `$ and the situation is unchanged with probability $`1\alpha `$. If the chosen element belongs to a cluster, we ”check” each link going from the element: with probability $`\alpha `$ it is preserved, while it is destroyed with probability $`1\alpha `$. Thus at the K-th iteration step the creation of one new link and the destruction of several links are possible. And so on for the K+1, K+2, … iteration steps. Let us envisage the following analogy. One can consider our model as the formation of a network of channels in a medium which at the beginning is homogeneous (water in rock, electrical current in a medium with high resistance). By some means we increase the pressure/potential at a point of the medium. If there are no channels going from the point, then as a result of the action (if the pressure/potential exceeds some threshold, which corresponds to the parameter $`\alpha `$ of the model) a hole occurs and a new channel is created. If a chosen point already has channels, it seems fairly logical that the flow/current will use the existing ones rather than make a new hole.
Thus at the beginning we have a homogeneous medium; after some time we get a porous one.
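The update rule above is easy to put into code. The sketch below is ours and purely illustrative; in particular, the choice of a uniformly random partner for an isolated node is our assumption, since the text does not specify how the partner of a new link is picked.

```python
import random

def simulate(N, alpha, steps, seed=0):
    """Evolve N nodes under the rule: pick a random node; if it is
    free, it gains one link to a random partner with probability alpha;
    otherwise each of its links survives with probability alpha."""
    rng = random.Random(seed)
    adj = [set() for _ in range(N)]           # adjacency sets
    for _ in range(steps):
        i = rng.randrange(N)
        if not adj[i]:                        # free element: may gain a link
            if rng.random() < alpha:
                j = rng.randrange(N - 1)
                if j >= i:                    # uniform partner j != i (assumed rule)
                    j += 1
                adj[i].add(j)
                adj[j].add(i)
        else:                                 # "check" every existing link
            for j in list(adj[i]):
                if rng.random() >= alpha:     # destroyed with prob 1 - alpha
                    adj[i].discard(j)
                    adj[j].discard(i)
    return adj
```

For $`\alpha =0`$ no link can ever appear, while for $`\alpha =1`$ (the quenched case studied below) links are never destroyed once created.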
One can see that the system dynamics allows, in principle, the formation of a cluster of any size, up to the size of the system. Obviously, the larger a cluster is, the less stable it is. Let us also point out that in the general case the dynamics of such systems does not have attractors, i.e., it can be considered as ”evolution with an open end”.
The proposed model seems to be close to the models investigated in percolation theory . Taking into account that ’percolation theory deals with connectivity of a very large (macroscopic) number of elements under the condition that the relation of every element with its neighbors has a random, but quite definite character (for example, set up by means of a random number generator with definite properties)’ , we might emphasize that our model has neither a fixed structure of the system (to be understood in the sense of a neighbor-definition procedure) nor any determined space dimension; in this sense it is similar to infinite-range percolation . In the suggested model a structure arises in the process of the system evolution and is mobile; in this respect it is closer to a second-order phase transition . There is a chemical application, reversible gelation (cooking gelatine), where a similar creation and destruction of bonds occurs. Let us stress that the dynamics of our model essentially differs from : we consider neither a disordered lattice nor particle motion in a dynamically disordered medium, and we do not study a transport task . We envisage the processes of mobile structure creation. But at bottom we are interested in the same problem – the role of connectivity in a system.
## 3 Results of computer simulations
As the main characteristics of a forming structure we consider: the probability distribution of the system wiring (in this context the wiring is the ratio of the number of links to the number of elements), the average fraction of free elements (elements which have no neighbors) in the system, and the distribution of clusters as a function of their size.
We have studied these characteristics for different values of parameter $`\alpha `$.
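These characteristics can be computed directly from the adjacency structure; a possible sketch (function names are ours):

```python
from collections import deque

def wiring(adj):
    """Ratio of the number of links to the number of elements."""
    return sum(len(s) for s in adj) / 2 / len(adj)

def free_fraction(adj):
    """Fraction of elements having no neighbors."""
    return sum(1 for s in adj if not s) / len(adj)

def cluster_sizes(adj):
    """Sizes of the connected clusters, found by breadth-first search."""
    seen = [False] * len(adj)
    sizes = []
    for start in range(len(adj)):
        if seen[start] or not adj[start]:
            continue                      # skip visited nodes and free elements
        queue, size = deque([start]), 0
        seen[start] = True
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        sizes.append(size)
    return sizes
```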
It turns out that the probability distribution of the system wiring is excellently approximated by the Gaussian law, i.e.
$$p(x)=\frac{1}{\sigma \sqrt{2\pi }}e^{-\frac{(x-a)^2}{2\sigma ^2}}$$
where $`x`$ is the wiring (Fig. 1).
The distribution is localized on a very narrow interval of wiring values. The results for the average and the deviation for different $`\alpha `$ are given in TABLE 1. In Fig. 2 one can see the distributions for different values of $`\alpha `$.
Turning to the distribution of clusters as a function of their size, one can see that the character of the dependence is exponential and does not depend on the system size (Fig. 3):
$$n_s\sim e^{\beta s}$$
where $`n_s`$ is the number of clusters consisting of $`s`$ nodes. For $`1/4\le \alpha \le 3/4`$ the accuracy of approximation varies from 0.999067 ($`\alpha =3/4`$) to 0.999862 ($`\alpha =1/2`$), and for the two last cases it is lower: 0.998865 and 0.996851, respectively.
The $`\beta `$ indices for different values of $`\alpha `$ are also presented in TABLE 1.
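An exponent such as $`\beta `$ can be extracted from a simulated cluster-size histogram by a least-squares fit of $`\mathrm{log}n_s`$ against $`s`$; a minimal sketch (ours):

```python
import math

def fit_beta(hist):
    """Least-squares slope of log(n_s) vs s for n_s ~ exp(-beta*s);
    hist maps cluster size s -> count n_s.  Returns beta > 0 for a
    decaying distribution."""
    pts = [(s, math.log(n)) for s, n in hist.items() if n > 0]
    m = len(pts)
    sx = sum(s for s, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(s * s for s, _ in pts)
    sxy = sum(s * y for s, y in pts)
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    return -slope
```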
We have also studied the extreme case $`\alpha =1`$. In this situation, after some transient period of time the state of the system becomes statistically stable and the character of its structure no longer changes. In so doing, we have paid attention to the following characteristics: the relaxation time (the time of structure formation) t-rel, the wiring of the system w, and the maximal size of a cluster max-s.
It was revealed that t-rel and max-s depend on the system size, but w does not. Since max-s increases with N, and taking into account that in this case the cluster distribution is better approximated by a power law with exponent equal to -2.8 (accuracy 96%) than by an exponential law with exponent equal to -0.54 (accuracy 95%), we can suppose that in this quenched case our model is more similar to percolation tasks: an infinite cluster would have an infinite size. Let us note that the exponent of the critical cluster-size distribution in our model is greater (in absolute value) than that in percolation models in three-dimensional space, where according to it is equal to -2.2, but it has the same order.
The set of these values for different system size is presented in TABLE 2.
The wiring value of the system for $`\alpha =1`$ is equal to 0.691.
Since w does not depend on the system size, it should be the same in infinite systems; thus it can be envisaged as the percolation threshold in the quenched variant of our model. Comparing the obtained result with those represented in , one can see that ours is fairly close to the percolation threshold for the honeycomb lattice, but this is rather a coincidence. According to our rule of structure construction, one might suppose the resulting structure to be a variant of the Bethe lattice, but in our case, according to the formula $`p_c=1/q`$ (where $`q`$ is the number of links going from a node and $`p_c`$ is the percolation threshold, which in our case corresponds to w), $`q\approx 1.447`$, i.e., far from any integer, so one can conclude that the created structure of our system is a tree graph without an equal number of successors for every node. (According to the definition of the Bethe lattice, the number of successors of every node should be the same.)
Since we investigate not only the quenched variant but allow both the appearance and the destruction of links, the dynamics of structure formation is of great interest besides the characteristics of the arising structures. Namely, we are interested in such characteristics as the duration of deviations of the fraction of free elements in the system from its average.
After some transient time, the system comes to a stationary regime. Thus a statistical division into two fractions takes place, namely, the fraction of free elements and the fraction of elements having at least a single link. We count the average fraction of free elements in the system: $`\lambda =\frac{1}{NT}\sum _{i=1}^{T}n_i`$, where T is the total time of counting and $`n_i`$ is the number of free elements at moment i. Since each state of the system is not stable, i.e., processes of appearance and destruction of links permanently occur, this is mirrored by the change of the fraction of free elements in the system (dl), the deviation from $`\lambda `$. Namely, when a link appears, dl decreases; if one vanishes, dl increases. It seems obvious that deviations of different duration can be observed, both increases and decreases of dl. Such a deviation can be thought of as an avalanche in models of self-organized criticality or as a relaxation time of the system into the quasistationary state, analogously to fluctuation motions (spin fluctuations, order-parameter fluctuations) in the theory of phase transitions and critical phenomena . We investigated the structure destruction processes, i.e., the probability of a deviation of dl from the average towards larger values ($`dl_{up}`$) as a function of its duration ($`t`$). In so doing, the moments when $`n/N`$ crosses above and below $`\lambda `$ are considered as the start and the end of an event, respectively.
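The start/end convention just described translates directly into code; a minimal sketch (names are ours):

```python
def deviation_durations(series, lam):
    """Durations of excursions of the free-element fraction above its
    average lam: an event starts when the value crosses above lam and
    ends when it crosses back below."""
    durations, t0 = [], None
    for t, x in enumerate(series):
        if x > lam and t0 is None:
            t0 = t                    # event starts
        elif x < lam and t0 is not None:
            durations.append(t - t0)  # event ends
            t0 = None
    return durations
```

The histogram of the returned durations gives the $`dl_{up}`$ statistics; swapping the two comparisons gives the symmetric structure-arising events.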
As shown by computer simulations, this characteristic is well approximated by the power law
$$dl_{up}\sim t^\gamma $$
with index $`\gamma =1.421\pm 0.005`$. In our opinion, it is a remarkable result that for $`t>3`$ the curves are very close despite very different values of $`\alpha `$, from 1/4 to 3/4. This also holds for $`t>15`$ at $`\alpha `$=0.95 and for $`t>50`$ at $`\alpha =0.99`$, i.e., the exponent (critical index) does not depend on the system parameter $`\alpha `$. Thus we have parameter-independent critical dynamics in the system (Fig. 4).
In TABLE 3 one can see the indices $`\gamma `$ for different values of $`\alpha `$.
As for the cluster distributions, the best correspondence is achieved for $`\alpha =1/2`$. As the deviation from this value increases, for small durations one can observe a downward bending of the curves. In general, the accuracy of the approximation was near 99.7%.
We have also considered the opposite case, i.e., the processes of structure arising. In this situation the results turned out to be analogous to the former case, so the picture is symmetrical.
Apart from this, we calculated the distribution of deviation mass. By this term we denote the deviation from the average measured in the number of elements that became free or connected during an event. Analogously to the theory of phase transitions and critical phenomena, it can be envisaged as the size of a fluctuation. The results can be seen in Fig. 5.
Let us stress that this result is not the only one possible. Indeed, the average fraction of free elements in the system was calculated, as mentioned above, as $`\lambda =\frac{1}{NT}\sum _{i=1}^{T}n_i`$, so it can happen that a deviation is shorter in time and deeper in the number of elements in one direction than in the other, with the same value of the average.
## 4 Conclusion
It seems that in spite of the simplicity of its formulation, the model has a number of nontrivial characteristics.
After a transient period of time, a certain statistical mobile interaction structure is built up in the system. In the case of the quenched model ($`\alpha =1`$) the distribution of cluster sizes is better approximated by a power law than by an exponential one, as occurs for percolation models. Here a critical structure, a kind of tree graph without an equal number of successors, is formed.
Due to the very good correspondence of the distribution of the system wiring to the Gaussian law, and taking into account that the deviation is quite small, one can conclude that large fluctuations in the system occur rarely and rapidly vanish.
The critical character of the system dynamics does not depend on the introduced parameter $`\alpha `$, as follows from the distributions of the time duration of deviations from the average dl. The distributions turn out to be power functions with the same index $`\gamma =1.421\pm 0.005`$, so this model can be placed in the class of SOC systems. It looks as if the principle of structure formation employed in the model leads to a certain universality class.
Let us stress that, in the general case, while the system dynamics has turned out to be critical, the geometry of the forming structures does not have such properties, as shown by the exponential drop in the distribution of clusters. In addition, in this case the index depends on $`\alpha `$.
In this work we did not attempt to create a model of an SOC system as was done in , but we revealed such properties in the simplest model of structure formation. Perhaps this work can be considered as an approach to understanding the mechanisms of self-organization to criticality.
Acknowledgement. The author greatly acknowledges Konstantin Mardanov for his help during the preparation of the manuscript.
FIGURES
FIG. 1. Probability distribution of the system wiring p(x) for parameter $`\alpha =1/2`$ (squares) in comparison with Gaussian distribution with the same parameters (triangles). Size of the system N=1000.
FIG. 2. Probability distribution of the system wiring p(x) for parameters: $`\alpha =1/2`$ (line only); $`\alpha =1/4`$ (stars); $`\alpha =1/3`$ (cross); $`\alpha =2/3`$ (diamonds); $`\alpha =3/4`$ (squares); $`\alpha =0.95`$ (circles); $`\alpha =0.99`$ (triangles). Size of the system N=1000.
FIG. 3. Distribution of cluster sizes y(s) for parameters: $`\alpha =1/2`$ (dots); $`\alpha =1/4`$ (stars); $`\alpha =1/3`$ (cross); $`\alpha =2/3`$ (diamonds); $`\alpha =3/4`$ (squares); $`\alpha =0.95`$ (circles); $`\alpha =0.99`$ (triangles). Size of the system N=1000.
FIG. 4. Distribution of deviations of the free-element fraction from its average, y(x), for parameters: $`\alpha =1/2`$ (dots); $`\alpha =1/4`$ (stars); $`\alpha =1/3`$ (cross); $`\alpha =2/3`$ (diamonds); $`\alpha =3/4`$ (squares); $`\alpha =0.95`$ (circles); $`\alpha =0.99`$ (triangles). Size of the system N=500.
FIG. 5. Distribution of the depth of deviation of the free-element fraction from its average, g(n), for $`\alpha =3/4`$. Size of the system N=500.
TABLES
| $`\alpha `$ | dl | a | $`\sigma `$ | $`\beta `$ | max-s | range |
| --- | --- | --- | --- | --- | --- | --- |
| 1/4 | 0.764 | 0.127 | 0.01 | -1.88235 | 9 | \[0.082;0.177\] |
| 1/3 | 0.689 | 0.172 | 0.0115 | -1.60361 | 11 | \[0.125;0.224\] |
| 1/2 | 0.536 | 0.268 | 0.013 | -1.24915 | 14 | \[0.211;0.332\] |
| 2/3 | 0.378 | 0.378 | 0.0135 | -0.98363 | 16 | \[0.319;0.439\] |
| 3/4 | 0.295 | 0.442 | 0.013 | -0.909985 | 18 | \[0.383;0.498\] |
| 0.95 | 0.066 | 0.632 | 0.0095 | -0.725898 | 19 | \[0.590;0.669\] |
| 0.99 | 0.014 | 0.681 | 0.008 | -0.645233 | 21 | \[0.648;0.714\] |
TABLE 1
In this table, values characterizing the properties of a forming structure for different values of the parameter $`\alpha `$ are presented, where
$`\alpha `$ is the probability of link appearance
dl \- fraction of free elements in the system
a \- average value of the wiring
$`\sigma `$ \- deviation of the wiring
$`\beta `$ \- index of the cluster size distribution
max-s \- maximal size of a cluster
range \- range of non-zero values of the wiring
We investigated the systems of 1000 elements (N=1000).
| N | t-rel | max-s |
| --- | --- | --- |
| 100 | 446 | 6 |
| 200 | 1405 | 9 |
| 300 | 1124 | 11 |
| 400 | 2697 | 9 |
| 500 | 2502 | 12 |
| 600 | 2930 | 10 |
| 700 | 3969 | 8 |
| 800 | 4766 | 13 |
| 900 | 5700 | 10 |
| 1000 | 7012 | 10 |
| 1500 | 7675 | 12 |
| 2000 | 15367 | 13 |
| 2500 | 20632 | 11 |
| 3000 | 21785 | 11 |
| 3500 | 31820 | 16 |
TABLE 2
In this table the relaxation time (the time of structure construction) t-rel and the maximal size of a cluster max-s for different system sizes (N) in the border case ($`\alpha =1`$) are presented.
| $`\alpha `$ | $`\gamma `$ |
| --- | --- |
| 1/4 | -1.42006 |
| 1/3 | -1.42167 |
| 1/2 | -1.42349 |
| 2/3 | -1.41858 |
| 3/4 | -1.42415 |
| 0.95 | -1.4161 |
| 0.99 | -1.42407 |
TABLE 3
In this table one can see the critical indices $`\gamma `$ for different values of $`\alpha `$. N=500.
# LOOKING FOR EXTRA-DIMENSIONS AT THE WEAK SCALE: EXPERIMENTAL SEARCH FOR KALUZA-KLEIN STATES SIGNATURES AT THE 𝑒⁺𝑒⁻ LINEAR COLLIDER<sup>1</sup>

<sup>1</sup>Talk given in the working group session P6 at the International Workshop on Linear Colliders, Sitges, Barcelona, Spain, April 28 - May 5, 1999.
## 1 Introduction and motivations
Quantum gravity is presently best described within the framework of superstring theories. Superstrings , allowing one to unify gravity with the interactions described in the standard model and to remove divergences from quantum gravity, are known to live in 10 space-time dimensions. Furthermore, superstring dualities have taught us that the superstring scale may not be tied to the Planck scale but becomes a rather arbitrary parameter, which has been proposed to be possibly as small as the TeV scale . This observation already formally opens the possibility of the existence of more than 4 space-time dimensions at the weak scale.
The proposal that the standard model particles and interactions live in the usual 4-dimensional space-time while gravity propagates in a higher-dimensional space leads to a solution of the hierarchy problem. In this framework, quantum gravity is characterized by a fundamental scale $`M`$ of order $``$ TeV and gravity propagates in a space with $`\delta `$ extra-dimensions of size R. The Newtonian gravitational constant $`G_N`$ is then expressed as $`G_N^{-1}=M_P^2=8\pi R^\delta M^{2+\delta }`$, where $`M_P`$ is the Planck mass, so that $`M`$ can be seen as the effective Planck mass of the higher-dimensional theory. Such a picture of a standard model confined to a lower-dimensional space and gravity propagating in the bulk is naturally embedded within superstring theories . Furthermore, grand unification through extra-dimensions has been shown to be possible at scales as low as scales close to the weak scale .
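A quick numerical consequence: solving $`M_P^2=8\pi R^\delta M^{2+\delta }`$ for R gives the size the extra dimensions must have for a given fundamental scale. The sketch below is ours; the Planck-mass value and the GeV-to-meter conversion factor are assumed standard numbers.

```python
import math

M_PLANCK = 1.22e19        # Planck mass in GeV (assumed value)
HBARC = 1.9733e-16        # conversion: 1 GeV^-1 = 1.9733e-16 m

def extra_dim_size_m(M_gev, delta):
    """Size R of the delta extra dimensions, in meters, obtained from
    M_P^2 = 8*pi*R^delta*M^(2+delta)."""
    R_inv_gev = (M_PLANCK**2 / (8 * math.pi * M_gev**(2 + delta))) ** (1.0 / delta)
    return R_inv_gev * HBARC
```

For $`M=1`$ TeV this gives R below a millimeter for $`\delta =2`$, shrinking rapidly as $`\delta `$ grows.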
These observations lead to a wide spectrum of phenomenological consequences for conventional Newtonian gravitation, particle physics, astrophysics and cosmology . In the higher, i.e. 3+$`\delta `$, dimensional space, the graviton propagates as a massless, spin-2 particle. Projected onto the normal 3-dimensional space, where the standard model lives, it appears as a tower of massive Kaluza-Klein excitations.
In this study, we focus on the production of a Kaluza-Klein (KK) graviton in association with a photon at $`e^+e^{}`$ colliders, as suggested for a possible experimental test , and more specifically at the linear collider at $`\sqrt{s}=500`$ GeV. Note that in the early ’90s, a proposal for a search for new dimensions at a TeV had already been made .
## 2 Cross-Sections, Signature and backgrounds
The cross-section for the process $`e^+e^{}\gamma G`$ has been calculated (see also ), without the inclusion of initial state radiation (ISR) of photons, and yields:
$$\frac{d\sigma }{dx_\gamma d\mathrm{cos}\theta }(s)=\frac{\alpha }{64}\frac{2\pi ^{\frac{\delta }{2}}}{\mathrm{\Gamma }(\frac{\delta }{2})}(\frac{\sqrt{s}}{M})^{\delta +2}\frac{1}{s}f(x_\gamma ,\mathrm{cos}\theta )$$
(1)
with $`x_\gamma =\frac{2E_\gamma }{\sqrt{s}}`$. The angle $`\theta `$ is the angle between the photon and the beam direction. In equation 1, $`f(x,y)`$ is defined by:
$$f(x,y)=\frac{2(1-x)^{\frac{\delta }{2}-1}}{x(1-y^2)}\times [(2-x)^2(1-x+x^2)-3y^2x^2(1-x)-y^4x^4]$$
(2)
The cross-section 1 has divergences for $`x_\gamma \to 0`$ and $`\mathrm{cos}^2\theta \to 1`$, which means that the photon will be close to the beam, with an energy spectrum favouring very small energies with respect to the beam energy.
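The shape of Eq. (2) can be probed numerically; a small sketch (ours):

```python
def f_kin(x, y, delta):
    """Kinematic function f(x, y) of Eq. (2), with y = cos(theta)."""
    pref = 2 * (1 - x) ** (delta / 2 - 1) / (x * (1 - y * y))
    brack = ((2 - x) ** 2 * (1 - x + x * x)
             - 3 * y * y * x * x * (1 - x)
             - y ** 4 * x ** 4)
    return pref * brack
```

One can check directly that f grows without bound as $`x\to 0`$ (soft photons) and as $`|y|\to 1`$ (photons along the beam).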
In this work, the effect of ISR is included by introducing an energy-dependent $`e^+e^{}`$ luminosity function which can read :
$$L_{ee}(z)=[\beta (1-z)^{\beta -1}(1+\frac{3}{4}\beta )-\frac{\beta }{2}(1+z)]\times [1+\alpha _{em}(\frac{\pi }{3}-\frac{1}{2\pi })]$$
(3)
where:
$$\beta =\frac{2\alpha _{em}}{\pi }(\mathrm{ln}\frac{s}{m_e^2}-1)$$
(4)
In terms of this function, the total cross-section is then given by:
$$\sigma (s)=_0^1𝑑zL_{ee}(z)\sigma (zs)$$
(5)
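The normalization of this convolution can be checked numerically: the integrable $`(1-z)^{\beta -1}`$ endpoint singularity is absorbed by the substitution $`u=(1-z)^\beta `$, and with a constant test cross-section the integral must come out equal to 1, times the small $`O(\alpha _{em})`$ correction factor. The sketch below is ours:

```python
import math

ALPHA_EM = 1 / 137.036
M_E = 0.511e-3                      # electron mass in GeV

def beta_isr(s):
    """beta = (2 alpha_em / pi) (ln(s / m_e^2) - 1), with s in GeV^2."""
    return 2 * ALPHA_EM / math.pi * (math.log(s / M_E**2) - 1)

def sigma_isr(sigma, s, n=20000):
    """sigma_ISR(s) = integral_0^1 dz L_ee(z) sigma(z*s), midpoint rule
    in u = (1-z)^beta so the soft-photon endpoint is handled exactly."""
    b = beta_isr(s)
    corr = 1 + ALPHA_EM * (math.pi / 3 - 1 / (2 * math.pi))
    total = 0.0
    for k in range(n):
        u = (k + 0.5) / n
        z = 1 - u ** (1 / b)
        # hard term: beta (1-z)^(beta-1) dz  ->  du
        total += (1 + 0.75 * b) * sigma(z * s)
        # soft term: (beta/2)(1+z) dz  ->  (1/2)(1+z) u^(1/b - 1) du
        total -= 0.5 * (1 + z) * sigma(z * s) * u ** (1 / b - 1)
    return corr * total / n
```

At $`\sqrt{s}=500`$ GeV one finds $`\beta \approx 0.12`$, and sigma_isr applied to a constant cross-section indeed returns the correction factor times 1 to within the numerical accuracy.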
Figure 1 (left part) shows the total cross-sections in the domain $`E_\gamma >5`$ GeV and $`1^o<\theta <179^o`$, with and without the inclusion of ISR, as a function of the scale in TeV for various values of $`\delta `$, at $`\sqrt{s}=`$ 500 GeV. Including the ISR lowers the total cross-sections by an amount which can be of the order of 10 % or even more, depending on the domain in $`E_\gamma `$ and $`\theta `$ considered.
These cross-sections range from $`10^{-2}`$ pb up to several picobarns.
As the KK graviton G interacts very weakly with matter and has a very long lifetime, it can be considered a non-interacting stable particle. Consequently, the signature of the process $`e^+e^{}\gamma G`$ is characterized by the presence of a photon and missing energy (and, eventually, a photon from ISR).
## 3 Signal extraction
The main physical backgrounds from standard model processes for the above signature come from $`\nu \overline{\nu }\gamma (\gamma _{ISR})`$ production as well as, more marginally, from $`Z\gamma `$ and $`ZZ(\gamma _{ISR})`$ production where the Z boson decays into neutrinos. Including effects from the detector, such as inefficient measurements or loss of particles, other backgrounds may become relevant. This may be the case for Bhabha processes with an ISR photon or $`\gamma \gamma (\gamma _{ISR})`$ production, as well as $`WW(\gamma _{ISR})`$, $`We\nu (\gamma _{ISR})`$ and $`Zee(\gamma _{ISR})`$ production. In order to study the extraction of a KK graviton signal from these backgrounds at a typical detector at the linear collider, we perform a Monte-Carlo study in which the $`\nu \overline{\nu }\gamma (\gamma _{ISR})`$ events are generated by the NUNUGPV package and the Bhabha events are generated by the BHWIDE package . At $`\sqrt{s}=500GeV`$, the total cross-section for $`\nu \overline{\nu }\gamma (\gamma _{ISR})`$ is found to be 9.72 pb, with an uncertainty of the order of 20 %, and the total cross-section for Bhabhas is found to be 14.7 nb. The events corresponding to all the other processes quoted above are generated with the help of the PYTHIA 5.7 package with the following cross-sections at $`\sqrt{s}=500`$ GeV : 8.2 pb ($`Z\gamma `$), 0.55 pb ($`ZZ`$), 8.0 pb ($`\gamma \gamma `$), 7.7 pb ($`WW`$), 5.3 pb ($`We\nu `$) and 7.4 pb ($`Zee`$), all with ISR $`\gamma `$’s.
We have developed an event generator for the production of a KK graviton in association with a photon, which includes the effect of ISR according to the above formulae.
All the generated events are then passed through a fast simulation package of a typical detector at the linear collider, i.e., the SIMDET package in its version 3.1.
The most important parameters of this detector concern the electromagnetic calorimeter and the instrumented mask. They have been tuned (but not yet optimised) such that, for the electromagnetic calorimeter, the minimum deposited energy is 0.1 GeV, the electron misinterpretation probability is 0.01, the angular acceptance is 4.5<sup>o</sup>-175.5<sup>o</sup> and the energy resolution is 10 %. As for the instrumented mask, it extends the angular coverage from the electromagnetic calorimeter limit, i.e. 4.5<sup>o</sup>, down to 1<sup>o</sup>, with a minimum deposited energy of 10 GeV.
The events are selected by taking the information from the so-called BEST record of SIMDET 3.1, which gives the best estimate of the energy and direction of an object. A candidate photon is defined as a detected object having zero charge and zero mass. The selection then proceeds by requiring the presence of only one candidate photon in the event. Figure 1 (right part) shows the distribution of the transverse energy of this candidate photon for all the above backgrounds, assuming an integrated luminosity of 500 $`pb^{-1}`$ at $`\sqrt{s}=500`$ GeV. A signal for 2 extra-dimensions at a scale of 1 TeV is also shown. The particular contribution of the Bhabha background is singled out and shown to concern the part below 15 GeV of the transverse energy distribution of the candidate photon.
Requiring the transverse energy of the candidate photon to be greater than 15 GeV, in order to suppress the Bhabha background, and lower than 175 GeV, in order to reduce the main background from $`\nu \overline{\nu }\gamma (\gamma _{ISR})`$, allows one to extract the KK graviton signal, which will then appear as an excess of events with a single photon and missing energy.
Figure 2 shows the $`S/\sqrt{B}`$ ratio, where S stands for the signal and B for the total background, as a function of the scale $`M`$, extrapolated to an integrated luminosity of $`500fb^{-1}`$. Requiring the $`S/\sqrt{B}`$ ratio to be greater than 5 allows a reach in terms of M of the order of 3.66 TeV for 2 extra-dimensions and 2.12 TeV for 4 extra-dimensions.
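Since the signal scales as $`(\sqrt{s}/M)^{\delta +2}`$ while the backgrounds are fixed by the standard model, the way a fixed-significance reach moves with integrated luminosity can be estimated in one line; a sketch under that scaling assumption (ours):

```python
def reach(M0, lum_ratio, delta):
    """With S ~ L * (sqrt(s)/M)^(delta+2) and B ~ L, a fixed S/sqrt(B)
    requirement gives M_reach ~ L^(1 / (2*(delta+2)))."""
    return M0 * lum_ratio ** (1.0 / (2 * (delta + 2)))
```

The weak $`L^{1/(2(\delta +2))}`$ dependence shows why even a large luminosity increase improves the mass reach only by a modest factor, and why the gain is smaller for larger $`\delta `$.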
One can try to lower the 15 GeV cut on the transverse energy of the candidate photon down to 10 GeV or even 5 GeV. This increases the reach in terms of M up to values of the order of 4 TeV for 2 extra-dimensions, but the price is a precise control of the background coming from the low-energy part of the photon spectrum.
## 4 Conclusions and perspectives
At the $`e^+e^{}`$ linear collider, the search for the $`e^+e^{}\gamma G`$ process seems a promising way to look for extra-dimensions at the TeV scale. Although the inclusion of the effect of ISR lowers the total cross-sections, the present Monte-Carlo study, including a fast detector simulation, shows that this signal can be extracted from the physical and instrumental backgrounds, and that the exploration of the $`M=3.5`$ TeV - 4 TeV mass-scale domain for 2 extra-dimensions at $`\sqrt{s}=500`$ GeV with 500 fb<sup>-1</sup> is feasible. A very good photon measurement and identification as close to the beams as possible and a good hermeticity are among the crucial requirements on the detector at the $`e^+e^{}`$ linear collider for such a measurement. The study of beam polarization will be done in a future work, and we foresee an increase of the reach in terms of M, as anticipated in and . Complementary processes such as $`e^+e^{}Z`$ graviton, with the Z boson decaying into two fermions, may help in detecting/confirming the effect of extra-dimensions at the weak scale at the linear collider. Last but not least, processes such as $`e^+e^{}\gamma `$ dilaton (or even $`e^+e^{}(\gamma _{ISR})`$ dilaton dilaton processes), also leading to a signature with a single photon and missing energy, may help in revealing the stringy nature of quantum gravity. Fascinatingly enough, Kaluza-Klein dilaton production can be distinguished from Kaluza-Klein graviton production, since the angular spectrum of the detected single photon may show completely different features . Work in this direction is also underway.
## Acknowledgments
It is a pleasure to thank P. Checchia, M. Spira, S. Tkaczyk, F. Richard, R. Rueckel and G. Wilson for discussions and suggestions on a preliminary version of this work during the ECFA-DESY Linear Collider Workshop at Oxford. This work has also benefited from discussions with G. Giudice and J. Wells. Finally, I. Antoniadis, P. Binetruy, E. Dudas, G. Ovarlez and A. Sagnotti deserve special thanks for their patience in illuminating and fascinating explanations of the basics of superstring (and brane world) physics.
## References
# Finite-size effects and the stabilized spin-polarized jellium model for metal clusters
## I Introduction
Since the production and study of sodium clusters by Knight et al., the physics of metal clusters has attracted much interest. Metal clusters are composed of atoms and have properties that differ from both a single atom and the bulk metal. However, when the size of the cluster is increased, its properties evolve towards those of the bulk. The many-body technique suitable for these systems is density functional theory (DFT). It is a well-known fact that the properties of alkali metals are dominantly determined by the delocalized valence electrons. In these metals, the pseudopotentials of the ions do not significantly affect the electronic structure, because the Fermi wavelengths of the valence electrons are much larger than the metal lattice constants. This fact allows us to replace the discrete ionic structure by a homogeneous positive charge background. This approximation is known as the jellium model (JM). The simplest way of applying the JM to metal clusters is to replace the ions of an $`N`$-atom cluster by a sphere of uniform positive charge density and radius $`R=(zN)^{1/3}r_s`$, where $`z`$ is the valence of the atom and $`r_s`$ is the bulk value of the Wigner–Seitz radius of the metal. However, since the ionic density near the surface of a metal differs from that of the bulk region, one may resort to the diffuse jellium model (dif-JM), in which the density of the jellium background falls to zero, from the bulk value, within a length of a few atomic sizes. Application of the dif-JM to metal clusters results in better agreement between theory and experiment than the JM. However, in spite of its simplicity and success in predicting some properties of bulk metals and metal clusters, the JM, which was originally developed for bulk metals, has some drawbacks. In order to overcome the deficiencies of the JM, one should take some details of the ionic structure into account.
Among the various methods of improvement, the first attempts that kept the simplicity of the JM and overcame some of its deficiencies resulted in the development of the stabilized jellium model (SJM) or pseudojellium model. The SJM was applied to metal clusters and improved some results of the simple JM. Montag et al., in their structure-averaged jellium model (SAJM), added the ionic surface energy to the SJM energy functional, making it more suitable for metal clusters.
In other approaches, some researchers relax the spherical geometry and use the JM with spheroidal or ellipsoidal shapes, which are suitable for open-shell clusters. Relaxing the shape of the jellium background as well as its density distribution, while keeping charge neutrality at every point in space, is called the ultimate jellium model (UJM), introduced by Koskinen et al. Since the bulk density in the UJM ($`r_s=4.18`$) is close to that of sodium, its results can be compared with experiments on Na metal clusters. However, since the jellium density and the electron density in the UJM are locally equal everywhere, the UJM cannot describe ionized clusters.
In a recent work, by taking the spin degrees of freedom into account in the process of stabilization, we generalized the SJM to the stabilized spin-polarized jellium model (SSPJM). In the SJM, the equilibrium bulk density is a free parameter and the experimental value is used for it. However, in the SSPJM the equilibrium bulk density parameter, $`\overline{r}_s(\mathrm{\infty },\zeta )`$, is polarization dependent and, to the best of our knowledge, no experimental data are available for it. Therefore in the SSPJM we take
$$\overline{r}_s^\mathrm{X}(\mathrm{\infty },\zeta )=\overline{r}_s^\mathrm{X}(\mathrm{\infty },0)+\mathrm{\Delta }r_s^{\mathrm{EG}}(\mathrm{\infty },\zeta ),$$
(1)
in which $`\overline{r}_s^\mathrm{X}(\mathrm{\infty },0)`$ is the equilibrium bulk value for the spin-compensated system, which takes the experimental value for metal X, and $`\mathrm{\Delta }r_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$ is obtained by applying the local spin-density approximation (LSDA) to the infinite electron-gas system. All equations throughout this paper are expressed in Rydberg atomic units. It turns out that $`\mathrm{\Delta }r_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$ is an increasing function of $`\zeta `$. By taking $`\zeta =(N_\uparrow -N_\downarrow )/N`$ for a monovalent metal cluster and calculating the corresponding $`\mathrm{\Delta }r_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$, it is possible to determine the appropriate radius of the spherical jellium through $`R(N,\zeta )=N^{1/3}\overline{r}_s(\mathrm{\infty },\zeta )`$. Therefore, any variation in $`N_\uparrow `$ and $`N_\downarrow `$, keeping $`N=N_\uparrow +N_\downarrow `$ constant, leads to different values of $`\zeta `$ and thereby different values of $`R(N,\zeta )`$.
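As a small numerical illustration of this prescription (the polynomial form and coefficients of $`\mathrm{\Delta }r_s`$ below are purely hypothetical placeholders, not the fitted values of this work):

```python
def zeta(n_up, n_down):
    """Spin polarization of an N-electron cluster."""
    return (n_up - n_down) / (n_up + n_down)

def jellium_radius(N, z, rs_bulk, c2=0.1, c4=0.05):
    """R(N, zeta) = N^(1/3) * [r_s(inf, 0) + Delta_r_s(zeta)] in bohr,
    with Delta_r_s modeled as an even polynomial c2*z**2 + c4*z**4
    (coefficients illustrative only)."""
    return N ** (1 / 3) * (rs_bulk + c2 * z**2 + c4 * z**4)
```

With the bulk $`r_s`$ of Na, a fully polarized cluster then comes out slightly larger than the spin-compensated one at the same $`N`$, reflecting that $`\mathrm{\Delta }r_s`$ is an increasing function of $`\zeta `$.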
Application of this model to the energy calculation of metal clusters shows that the energy is minimized for a configuration with maximum spin-compensation (MSC). That is, for clusters with an even number of electrons, $`N`$, the number of up-spin electrons, $`N_{↑}`$, equals the number of down-spin electrons, $`N_{↓}`$; and for an odd number of electrons, $`N_{↑}-N_{↓}=1`$. This MSC rule leads to the fine structure in $`\mathrm{\Delta }_2(N)`$ (see Fig. 5 of Ref. ) and the odd–even alternations in the ionization energies \[Fig. 7(c) of Ref. \].
In the present paper, keeping the spherical geometry for the jellium, we have investigated the finite-size effects on the equilibrium $`r_s`$ values, and their consequences for the SSPJM calculations. First, using the LSDA, we have found $`\overline{r}_s(N,\zeta )`$, the equilibrium $`r_s`$ values of the closed-shell $`N`$-electron neutral and singly ionized “generic” clusters with $`N=2,8,18,20,34,40`$ for all possible polarizations. By generic we mean no specific metal but a simple jellium sphere which can assume its equilibrium size for any given value of $`N`$. By fitting the calculated results to a polynomial with even powers of $`\zeta `$ (in the absence of magnetic fields, the physical properties are invariant under the transformation $`\zeta \to -\zeta `$), we have found two analytic equations for $`\mathrm{\Delta }r_s(\zeta )`$, one for neutral and the other for singly ionized clusters. Employing these analytic equations in the SSPJM calculations for Na clusters shows that, as before, the MSC rule is at work and the results do not show any significant changes relative to our previous results on the ionization energies. In the next step, we have found the values $`\overline{r}_s(N,\zeta )`$ for all different neutral and singly ionized generic clusters ($`N\le 42`$) with different spin configurations ($`0\le \zeta \le 1`$), and have shown that $`\overline{r}_s(N,\zeta )`$ behaves differently for open-shell and closed-shell clusters. That is, we have found that for a closed-shell cluster (and its two nearest neighbors) it is an increasing function of $`\zeta `$ over the whole interval $`0\le \zeta \le 1`$, whereas for an open-shell cluster (except for the two nearest neighbors of a closed-shell cluster), it has a decreasing behavior over $`0\le \zeta \le \zeta _0`$, and an increasing behavior over $`\zeta _0\le \zeta \le 1`$. 
However, in both open-shell and closed-shell clusters, the global minimum of energy, i.e., the ground-state energy, corresponds to a configuration in which $`\overline{r}_s(N,\zeta )`$ is a minimum. Here, $`\zeta _0`$ corresponds to an electronic configuration for which Hund’s first rule is satisfied. By subtracting the ground-state energy of a singly ionized generic cluster from that of its neutral counterpart, we obtain the ionization energy of that generic cluster. Calculation of these ionization energies for $`N\le 42`$ shows good agreement with experimental results on Na clusters. We see that, in the ionization-energy plot, although the saw-toothed behavior remains, the pronounced shell effects near closed shells, seen in the simple JM results, are substantially reduced. Thanks to the appreciable reduction of the shell effects in the ionization-energy results of the above-mentioned calculations, we have performed the SSPJM calculations using the set of values $`\overline{r}_s(N,\zeta )`$. The results show that, instead of the MSC rule, Hund’s first rule governs the ground-state configuration. We have also performed simple JM-LSDA calculations for the ground-state energies of Na clusters but, instead of using the ordinary bulk $`r_s`$ value (3.99), we have used the increasing function $`\overline{r}_s(\mathrm{\infty },\zeta )`$ as in Eq. (1). The results show that, here also, Hund’s first rule remains at work and therefore, we conclude that with spherical geometries, the increasing behavior of $`\mathrm{\Delta }r_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$ is a necessary condition (but not sufficient) for realizing the MSC rule, and the energy corrections over the simple JM due to stabilization are also needed. Therefore, in order to improve the SSPJM results one should insist on the increasing behavior for $`\overline{r}_s(N,\zeta )`$. 
The only way which guarantees such behavior for all values of $`N`$ is to relax the spherical constraint on the geometry and consider ellipsoidal shapes for open shells. Using the LSDA and ellipsoidal geometry, one could obtain $`\overline{r}_s(N,\zeta )`$, for a given set of values of $`N`$ and $`\zeta `$, by finding the values of the ellipsoid axes which correspond to the minimum of energy. We expect that in this case $`\overline{r}_s(N,\zeta )`$ will be an increasing function of $`\zeta `$ for both open-shell and closed-shell clusters. Then, if one performs the SSPJM calculations for ellipsoidal clusters using these new increasing functions $`\overline{r}_s(N,\zeta )`$, one would obtain the MSC rule (and thereby the odd–even alternations) with improved results.
The organization of this paper is as follows. Section II is devoted to calculational schemes. In Sec. III we present the results of our calculations. In Sec. IV we conclude the work.
## II Calculational Schemes
The energy functional in the SSPJM is given by
$`E_{\mathrm{SSPJM}}[n_{↑},n_{↓},n_+]`$ $`=`$ $`E_{\mathrm{JM}}[n_{↑},n_{↓},n_+]+(\epsilon _\mathrm{M}(\overline{n})+\overline{w}_R(\overline{n})){\displaystyle \int 𝑑𝐫n_+(𝐫)}`$ (3)
$`+\delta v_{\mathrm{WS}}(\overline{n}){\displaystyle \int 𝑑𝐫\mathrm{\Theta }(𝐫)[n(𝐫)-n_+(𝐫)]},`$
in which $`E_{\mathrm{JM}}`$ is the energy functional of the simple JM, $`\epsilon _\mathrm{M}`$ is the Madelung energy, $`\overline{w}_R`$ is the average value of the repulsive part of the pseudopotential, and $`\delta v_{\mathrm{WS}}`$ is the average over the Wigner–Seitz cell of the difference potential $`\delta v`$, defined as the difference between the pseudopotential of a lattice of ions and the electrostatic potential of the jellium background. $`n_{↑}(𝐫)`$ and $`n_{↓}(𝐫)`$ are, respectively, the up-spin and down-spin electron densities, with the total electron density given by $`n(𝐫)=n_{↑}(𝐫)+n_{↓}(𝐫)`$; and $`\overline{n}`$ is the uniform jellium density, which is $`\zeta `$-dependent; for our previous SSPJM calculations we have used the values given by Eq. (1). In the spherical JM, the functions depend only on the radial variable. The function $`\mathrm{\Theta }(𝐫)`$ becomes a simple radial step function, $`\theta (R-r)`$, where $`R=N^{1/3}\overline{r}_s`$ with $`\overline{r}_s=(3/4\pi \overline{n})^{1/3}`$. In the case of jellium with a sharp boundary, $`n_+(r)=\overline{n}\theta (R-r)`$, whereas for the diffuse case we take
$$n_+(r)=\{\begin{array}{c}\overline{n}\{1-(R+t)e^{-R/t}[\mathrm{sinh}(r/t)]/r\},r\le R\hfill \\ \overline{n}\{1-((R+t)/2R)(1-e^{-2R/t})\}Re^{(R-r)/t}/r,r>R,\hfill \end{array}$$
(4)
where $`t`$ is a parameter related to the surface thickness.
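A minimal implementation of the diffuse background profile of Eq. (4). Reading the profile as $`\overline{n}\{1-(R+t)e^{-R/t}\mathrm{sinh}(r/t)/r\}`$ inside and $`\overline{n}\{1-((R+t)/2R)(1-e^{-2R/t})\}Re^{(R-r)/t}/r`$ outside (the signs are a reconstruction), the two branches match continuously at $`r=R`$, which the code can be checked against:

```python
import math

def n_plus(r, R, t, nbar=1.0):
    """Diffuse jellium background density, Eq. (4): flat (~nbar) deep
    inside the sphere, exponentially decaying with thickness t outside,
    continuous at the nominal radius r = R."""
    if r <= R:
        return nbar * (1.0 - (R + t) * math.exp(-R / t) * math.sinh(r / t) / r)
    return (nbar * (1.0 - (R + t) / (2.0 * R) * (1.0 - math.exp(-2.0 * R / t)))
            * R * math.exp((R - r) / t) / r)
```

Continuity follows from $`e^{-R/t}\mathrm{sinh}(R/t)=(1-e^{-2R/t})/2`$; in the sharp-boundary limit $`t\to 0`$ the interior value tends to $`\overline{n}`$, recovering $`\overline{n}\theta (R-r)`$.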
To evaluate the total energy of a cluster, we solve the Kohn–Sham (KS) equations self-consistently. The effective potential in the KS equations for the SSPJM calculations is given by
$$v_{\mathrm{eff}}^\sigma ([n_{↑},n_{↓},n_+];𝐫)=\varphi ([n,n_+];𝐫)+v_{xc}^\sigma ([n_{↑},n_{↓}];𝐫)+\delta v_{\mathrm{WS}}(\overline{n})\mathrm{\Theta }(𝐫),$$
(5)
where
$$\varphi ([n,n_+];𝐫)=2\int 𝑑𝐫^{}\frac{[n(𝐫^{})-n_+(𝐫^{})]}{|𝐫-𝐫^{}|}.$$
(6)
This potential appears in the electrostatic part of the simple JM energy functional
$`E_{\mathrm{JM}}[n_{↑},n_{↓},n_+]`$ $`=`$ $`T_s[n_{↑},n_{↓}]+E_{xc}[n_{↑},n_{↓}]`$ (8)
$`+{\displaystyle \frac{1}{2}}{\displaystyle \int 𝑑𝐫\varphi ([n,n_+];𝐫)[n(𝐫)-n_+(𝐫)]},`$
and
$$v_{xc}^\sigma ([n_{↑},n_{↓}];𝐫)=\frac{\delta E_{xc}}{\delta n_\sigma (𝐫)},\sigma =↑,↓.$$
(9)
For $`E_{xc}`$ we use the LSDA with the Perdew–Wang parametrization for the correlation part.
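As an illustration of the Perdew–Wang (PW92) correlation mentioned here, the sketch below evaluates only the spin-compensated ($`\zeta =0`$) channel with the standard PW92 parameters; the full LSDA used in the text also needs the $`\zeta =1`$ and spin-stiffness channels plus an interpolation in $`\zeta `$. Note the PW92 formulas are conventionally written in Hartree units, while this paper uses Rydberg units (1 Ha = 2 Ry):

```python
import math

def pw92_ec_unpolarized(rs):
    """Perdew-Wang 1992 correlation energy per electron (Hartree) of the
    uniform electron gas at zeta = 0, using the standard parameter set."""
    A, a1 = 0.031091, 0.21370
    b1, b2, b3, b4 = 7.5957, 3.5876, 1.6382, 0.49294
    q = 2.0 * A * (b1 * math.sqrt(rs) + b2 * rs
                   + b3 * rs ** 1.5 + b4 * rs ** 2)
    return -2.0 * A * (1.0 + a1 * rs) * math.log(1.0 + 1.0 / q)
```

The correlation energy is negative and its magnitude shrinks monotonically as the gas becomes more dilute (larger $`r_s`$).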
## III Results and Discussions
In the first step, by solving the KS equations for spherical geometries of the jellium in the LSDA, and finding the minimum energies, we have obtained $`\overline{r}_s(N,\zeta )`$, the equilibrium $`r_s`$ values of the closed-shell neutral and singly ionized jellium clusters with $`N=2,8,18,20,34,40`$ electrons. In the calculations we have considered all possible polarizations. That is, for an $`N`$-electron cluster ($`N=N_{↑}+N_{↓}`$), we have found $`\overline{r}_s(N,\zeta )`$ for all polarizations ($`0\le \zeta \le 1`$) corresponding to the configurations $`N_{↓}=N_{↑}`$, $`N_{↓}=N_{↑}-1`$, $`N_{↓}=N_{↑}-2`$, …, $`N_{↓}=0`$. The results are shown in Figs. 1(a) and 1(b). We see that for these closed-shell clusters, $`\overline{r}_s`$ is an increasing function of $`\zeta `$. By a least-squares fit of the results to the polynomial
$$r_s(\zeta )=a_0+a_2\zeta ^2+a_4\zeta ^4+a_6\zeta ^6,$$
(10)
we have obtained $`a_0=4.28`$, $`a_2=2.15`$, $`a_4=2.41`$, $`a_6=1.84`$ for neutral, and $`a_0=4.62`$, $`a_2=1.49`$, $`a_4=3.31`$, $`a_6=3.92`$ for singly ionized clusters. In Figs. 1(a) and 1(b), we have also shown plots using Eq. (10) for the two sets of coefficients and compared them with the bulk function $`\overline{r}_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$. The values of $`a_0`$ show that the simple JM with LSDA predicts a larger atomic spacing for a cluster than the bulk value ($`a_0>4.18`$). Regardless of the $`a_0`$ values (since we are studying the variations of $`r_s`$ with respect to $`\zeta `$), we have performed our SSPJM calculations using
$$\mathrm{\Delta }r_s(\zeta )=a_2\zeta ^2+a_4\zeta ^4+a_6\zeta ^6,$$
(11)
with the corresponding coefficients for neutral and singly ionized clusters. Figure 2 compares the plots of Eq. (11) for neutral and singly ionized clusters with $`\mathrm{\Delta }r_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$ of the bulk jellium. We see that they are increasing functions of $`\zeta `$. Using Eq. (11), we have performed the dif-SSPJM calculations (the SSPJM with the diffuse jellium density of Eq. (4)) for Na clusters with $`\overline{r}_s(\mathrm{\infty },0)=3.99`$. After solving the KS equations self-consistently, we have obtained the dif-SSPJM ground-state total energies of the neutral and singly ionized Na clusters ($`N\le 42`$). The calculations show that the MSC rule governs the ground-state configuration. In the dif-SSPJM calculations, we have taken $`t=1`$. In Fig. 3 we have plotted the ionization energies and compared them with our previous results and also with the experimental values. As is shown, there are no significant changes relative to the previous results.
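Eq. (11) with the fitted coefficients quoted above can be evaluated directly (coefficients and signs taken as printed in the text; the fit is only meaningful on $`0\le \zeta \le 1`$):

```python
def delta_rs(zeta, neutral=True):
    """Delta r_s(zeta) of Eq. (11), using the least-squares coefficients
    quoted in the text for neutral and singly ionized closed-shell fits."""
    a2, a4, a6 = (2.15, 2.41, 1.84) if neutral else (1.49, 3.31, 3.92)
    return a2 * zeta ** 2 + a4 * zeta ** 4 + a6 * zeta ** 6
```

By construction `delta_rs(0.0)` vanishes, so the bulk metal value $`\overline{r}_s(\mathrm{\infty },0)`$ is recovered for the spin-compensated configuration.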
In the second step, using the JM-LSDA we have found the values $`\overline{r}_s(N,\zeta )`$ for all different neutral and singly ionized jellium clusters ($`N\le 42`$) with different spin polarizations. Here, for a cluster with specified values of $`N`$ and $`\zeta `$, the value $`\overline{r}_s(N,\zeta )`$ minimizes the total energy of the cluster. In Fig. 4 we have shown the values of $`\overline{r}_s(N,\zeta )`$ and the corresponding total energies per electron, $`\overline{E}(N,\zeta )/N`$, as functions of the number of electrons, $`N`$, for neutral jellium clusters. The two plots show the same structure. That is, the maxima and minima of these two plots correspond to the same values of $`N`$. The values of the ground-state energies correspond to polarizations $`\zeta _0`$ consistent with Hund’s first rule and form the Hund curve. In each of the plots, the uppermost value for a given $`N`$ corresponds to the configuration with MSC. The values in between correspond to intermediate polarizations. We therefore conclude that for an open-shell cluster the functions $`\overline{r}_s(N,\zeta )`$ and $`\overline{E}(N,\zeta )/N`$ have minima for a polarization $`\zeta _0\ne 0`$ which is consistent with Hund’s first rule. That is, for open-shell clusters (except for the nearest neighbors to a closed-shell cluster) $`\overline{r}_s`$ and $`\overline{E}(N,\zeta )`$ are decreasing functions of $`\zeta `$ for $`0\le \zeta \le \zeta _0`$ and increasing functions for $`\zeta _0\le \zeta \le 1`$. In the bulk jellium, the function $`\overline{r}_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$ is an increasing function over the whole range $`0\le \zeta \le 1`$. In Fig. 5 we have compared the functions $`\overline{r}_s(27,\zeta )`$ and $`\overline{r}_s(34,\zeta )`$ with $`\overline{r}_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$. We see that for the closed-shell cluster ($`N=34`$) the volume is an increasing function of $`\zeta `$ over the whole range $`0\le \zeta \le 1`$. 
But for the open-shell cluster ($`N=27`$) the volume decreases with increasing $`\zeta `$ in the range $`0<\zeta \le 7/27`$ and expands with $`\zeta `$ in the range $`7/27\le \zeta \le 1`$. The value $`\zeta _0=7/27`$ is due to the half-filled shell ($`l=3`$) of the up-spin band. Also, we note that $`\overline{r}_s(27,\zeta )>\overline{r}_s^{\mathrm{EG}}(\mathrm{\infty },\zeta )`$ over the whole range $`0\le \zeta \le 1`$. Perdew et al. have also studied the equilibrium size of spherical clusters of stabilized jellium from a different point of view.
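The quoted value $`\zeta _0=7/27`$ for $`N=27`$ follows from filling spherical-jellium shells and applying Hund's first rule to the open shell. A sketch, assuming the standard level order 1s, 1p, 1d, 2s, 1f, 2p that reproduces the magic numbers 2, 8, 18, 20, 34, 40 used in the text:

```python
# Spherical-jellium shell ordering; a shell of angular momentum l holds
# 2*(2l + 1) electrons, giving closed shells at N = 2, 8, 18, 20, 34, 40.
SHELLS = [("1s", 0), ("1p", 1), ("1d", 2), ("2s", 0), ("1f", 3), ("2p", 1)]

def hund_zeta0(n_electrons):
    """Polarization zeta_0 = (N_up - N_dn)/N of the Hund's-first-rule
    configuration: closed shells contribute no unpaired spins, the open
    shell maximizes its spin (all-parallel up to half filling)."""
    left = n_electrons
    unpaired = 0
    for _, l in SHELLS:
        g = 2 * (2 * l + 1)
        occ = min(left, g)
        left -= occ
        half = g // 2
        unpaired += occ if occ <= half else g - occ
        if left == 0:
            break
    return unpaired / n_electrons
```

For $`N=27`$ the first four shells absorb 20 electrons and the remaining 7 half-fill the 1f shell ($`g=14`$), giving 7 unpaired spins and $`\zeta _0=7/27`$; a closed-shell cluster such as $`N=34`$ gives $`\zeta _0=0`$.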
Figure 6 shows the ground state $`\overline{E}/N`$ and its corresponding $`\overline{r}_s`$ for neutral and singly ionized jellium clusters as functions of $`N`$, the number of electrons. The energies correspond to the equilibrium $`r_s`$ values for configurations consistent with Hund’s first rule. Here, $`N`$ is the number of electrons which for neutral clusters equals the number of the ions, and for singly ionized clusters it is one less than the number of the ions. The ionization energy of an $`N`$-atom jellium cluster is given by
$$I(N)=\overline{E}^{\mathrm{ion}}(N-1,\zeta _0^{\prime })-\overline{E}^{\mathrm{neut}}(N,\zeta _0),$$
(12)
where $`\overline{E}^{\mathrm{neut}}(N,\zeta _0)`$ is the ground-state energy of the neutral $`N`$-electron jellium cluster, which occurs at a polarization $`\zeta _0`$ consistent with Hund’s rule, and $`\overline{E}^{\mathrm{ion}}(N-1,\zeta _0^{\prime })`$ is the ground-state energy of the singly ionized cluster with $`N-1`$ electrons and $`N`$ positive ions. Obviously, the polarizations $`\zeta _0`$ and $`\zeta _0^{\prime }`$ are not the same. In Fig. 7 we have plotted the ionization energies of the jellium clusters ($`N\le 42`$) obtained from Eq. (12), and compared them with the results obtained from the simple JM calculations (with $`r_s=3.99`$) and with experiment on Na clusters. The relevance of this comparison with the Na results is due to the fact that the equilibrium $`r_s`$ value of the bulk jellium, 4.18, is very close to that of bulk Na. Except for very small clusters, the results based on the equilibrium $`r_s`$ values show better agreement with experiment than the results of the simple JM calculations for Na clusters.
We saw, in the second step, that if the jellium clusters assume their individual equilibrium volumes, then the ionization energies improve. Therefore, we are naturally led to define a new set of $`\mathrm{\Delta }r_s`$ for our SSPJM calculations which are obtained from
$$\mathrm{\Delta }r_s(N,\zeta )=\overline{r}_s(N,\zeta )a_0.$$
(13)
Here, $`a_0=4.28`$ for neutral and $`a_0=4.62`$ for singly ionized clusters; $`\overline{r}_s(N,\zeta )`$ is the equilibrium $`r_s`$ value for a cluster with given values of $`N`$ and $`\zeta `$. Using Eq. (13), we have performed the SSPJM calculations for Na. The results show that here (in contrast to the MSC rule) the energy of a cluster is minimized for a configuration consistent with Hund’s first rule. This statement is valid for both neutral and singly ionized clusters. This behavior has its roots in the decreasing behavior of $`\overline{r}_s(N,\zeta )`$ over the range $`0<\zeta \le \zeta _0`$ for open-shell clusters. We have also checked whether taking account of the volume change as a function of polarization, as given by Eq. (1), in the simple JM calculations gives rise to the MSC rule. The results show that Hund’s first rule remains at work, and we therefore conclude that, in order to obtain the MSC configuration for the ground state, not only should $`\mathrm{\Delta }r_s`$ be an increasing function of $`\zeta `$, but one should also include the two corrections, due to the stabilization, in the simple JM energy. That is, one should use the SSPJM energy along with an increasing function for $`\mathrm{\Delta }r_s`$. This condition will be met if one relaxes the spherical constraint on the jellium and considers ellipsoidal shapes for open-shell clusters. It is then possible to obtain $`\overline{r}_s(N,\zeta )`$ for a given set of values of $`N`$ and $`\zeta `$ by finding the values of the ellipsoid axes that minimize the total energy. We expect that under these conditions $`\overline{r}_s(N,\zeta )`$ becomes an increasing function of $`\zeta `$ for all clusters. This expectation is due to the fact that the addition of a nonsymmetric perturbation to a spherical potential in a single-particle Hamiltonian lifts the orbital degeneracies, and each degenerate level splits into ($`2l+1`$) new levels with different energies. 
Each of these levels will contain at most two electrons with opposite spins. Then, any increment in the polarization is accompanied by a transition of a spin-down electron to an unoccupied level and a subsequent spin flip. This is the only way one can increase the polarization consistently with Pauli’s exclusion principle. This process resembles the process of increasing the polarization in a closed-shell spherical cluster, which results in an increase of its equilibrium $`r_s`$ value. It is a well-known fact that open-shell clusters lose their spherical geometry due to the Jahn–Teller effect, and that is why the MSC rule and the odd–even alternations are observed in experimental data for alkali metal clusters. In short, using a new set of increasing functions $`\overline{r}_s(N,\zeta )`$ (obtained using ellipsoidal geometries in the simple JM with the LSDA) in the SSPJM calculations for ellipsoidal geometries of the jellium will lead to the MSC configuration for the ground state of the cluster. Work in this direction is in progress.
Finally, it should be mentioned that in the context of the SSPJM one could fix, at the outset, the pseudopotential core radius for a bulk system of a given species with a given fixed polarization, and then use this value in the SSPJM energy functional for a finite cluster (transferability condition on the pseudopotential). One could then obtain the equilibrium radius of the jellium by finding the $`r_s`$ value which minimizes the total energy. We have performed such calculations and obtained further agreement with the experimental results, which will appear elsewhere.
## IV Conclusion
In this paper, keeping the spherical geometry, we have considered the finite-size effects on the equilibrium $`r_s`$ values. Our calculations show that for a given $`N`$-electron cluster, the quantity $`\overline{r}_s(N,\zeta )`$ behaves differently for an open-shell and a closed-shell cluster. That is, this equilibrium $`r_s`$ value is an increasing function of $`\zeta `$ over the whole range $`0\le \zeta \le 1`$ for a closed-shell cluster, whereas for an open-shell cluster it is a decreasing function over $`0<\zeta \le \zeta _0`$ and an increasing function over $`\zeta _0\le \zeta \le 1`$. Here, $`\zeta _0`$ is the polarization corresponding to a configuration consistent with Hund’s first rule. Our SSPJM calculations based on equilibrium $`r_s`$ values show that, in contrast to the MSC rule, Hund’s first rule is at work. This behavior is due to the fact that $`\overline{r}_s(N,\zeta )`$ has a decreasing part for an open-shell cluster. We therefore conclude that to realize the MSC rule in the SSPJM calculations with $`\overline{r}_s(N,\zeta )`$, and thereby the odd–even alternation, one should lift the spherical constraint on the jellium and let it assume ellipsoidal shapes.
###### Acknowledgements.
The author would like to thank John P. Perdew for reading the manuscript and his helpful discussions on the subject. He also thanks N. Nafari for his useful discussions.
# Pair creation of neutral particles in a vacuum by external electromagnetic fields in 2+1 dimensions (published in J. Phys. G 25 (1999) 1793-1795. © IOP Publishing Ltd. 1999)
## Abstract
Neutral fermions of spin $`\frac{1}{2}`$ with magnetic moment can interact with electromagnetic fields through nonminimal coupling. In 2+1 dimensions the electromagnetic field strength plays the same role to the magnetic moment as the vector potential to the electric charge. This duality enables one to obtain physical results for neutral particles from known ones for charged particles. We give the probability of neutral particle-antiparticle pair creation in the vacuum by non-uniform electromagnetic fields produced by constant uniform charge and current densities.
PACS number(s): 03.70.+k, 11.10.Kk, 11.15.Tk
Pair creation of charged particles in the vacuum by an external electric field was first studied by Schwinger several decades ago . Related problems have been discussed by many authors, for example, in Refs. \[2-6\]. Pure magnetic fields do not lead to pair creation. However, inclusion of a magnetic field changes the result for a pure electric field. This has been studied by several authors \[7-11\]. The probability of pair creation in the vacuum was also calculated in lower dimensions and in more general dimensions . Similar subjects are now widely discussed in string and black hole theory.
In this paper we extend the subject to neutral particles of spin $`\frac{1}{2}`$. Neutral particles with magnetic moment can interact with electromagnetic fields through nonminimal coupling. The Aharonov-Casher effect is a well known indication for this interaction, and has been observed in experiment . So we expect that neutral particle-antiparticle pairs may be created in the vacuum by external electromagnetic fields. In 2+1 dimensions this can be verified by explicit calculations. For certain electromagnetic fields the probability of pair creation can be calculated exactly. Indeed, the result can be obtained from that for charged particles by using the duality between charged and neutral particles, respectively in external vector potentials and electromagnetic fields. Unfortunately, the problem is rather complicated in 3+1 dimensions, and useful results are still not available.
Consider a neutral fermion of spin $`\frac{1}{2}`$ with mass $`m`$ and magnetic moment $`\mu _m`$, moving in an external electromagnetic field. The equation of motion for the fermion field $`\psi `$ is
$$(i\gamma \partial -\mu _m\sigma F/2-m)\psi =0,$$
(1)
where $`\gamma \partial =\gamma ^\mu \partial _\mu `$, $`\sigma F=\sigma ^{\mu \nu }F_{\mu \nu }`$, $`F_{\mu \nu }`$ is the field strength of the external electromagnetic field, and
$$\sigma ^{\mu \nu }=\frac{i}{2}[\gamma ^\mu ,\gamma ^\nu ].$$
(2)
In the following we work in 2+1 dimensions. The Dirac matrices $`\gamma ^\mu `$ satisfy the relation
$$\gamma ^\mu \gamma ^\nu =g^{\mu \nu }\mp iϵ^{\mu \nu \lambda }\gamma _\lambda ,$$
(3)
where $`g^{\mu \nu }=\mathrm{diag}(1,-1,-1)`$, $`ϵ^{\mu \nu \lambda }`$ is totally antisymmetric in its indices and $`ϵ^{012}=1`$. The minus sign in (3) corresponds to representations equivalent to the one
$$\gamma ^0=\sigma ^3,\gamma ^1=i\sigma ^1,\gamma ^2=i\sigma ^2,$$
$`(4a)`$
and the plus sign corresponds to representations equivalent to the one
$$\gamma ^0=-\sigma ^3,\gamma ^1=-i\sigma ^1,\gamma ^2=-i\sigma ^2,$$
$`(4b)`$
which is inequivalent to (4a). The difference between the two cases would not affect physical results. So we will work with the first case \[with the minus sign in (3)\]. The relation (3) is of crucial importance for the following discussion. Note that in other space-time dimensions there is no similar relation. Using (3) we have $`\sigma ^{\mu \nu }=ϵ^{\mu \nu \lambda }\gamma _\lambda `$. We define
$$f^\mu =\frac{1}{2}ϵ^{\mu \alpha \beta }F_{\alpha \beta },$$
(5)
and have
$$\sigma F/2=\gamma f.$$
(6)
Then (1) becomes
$$[\gamma (i\partial -\mu _mf)-m]\psi =0.$$
(7)
Compared with the equation of a charged fermion with spin $`\frac{1}{2}`$ and charge $`q`$ moving in an external electromagnetic vector potential $`A_q^\mu `$
$$[\gamma (i\partial -qA_q)-m_q]\psi _q=0$$
(8)
where a subscript $`q`$ is used to indicate the charged particle, one easily realizes that $`f^\mu `$ plays the same role for the neutral particle as $`A_q^\mu `$ does for the charged one. This duality has been noticed in the study of the Aharonov-Casher effect . Consequently, constant uniform fields $`F_{\mu \nu }`$ have no physical effect on neutral particles. To create particle-antiparticle pairs in the vacuum, non-uniform fields are necessary. Because of the above duality, we call $`f^\mu `$ the dual vector potential, and $`f_{\mu \nu }=\partial _\mu f_\nu -\partial _\nu f_\mu `$ the dual field strength. The dual magnetic field is
$$b=f_{12}=\partial _\lambda F^{\lambda 0}=\mathbf{\nabla }\cdot 𝐄,$$
$`(9a)`$
and the dual electric field is
$$e_x=f_{01}=\partial _\lambda F^{\lambda 2}=\partial _tE_y+\partial _xB,$$
$`(9b)`$
$$e_y=f_{02}=\partial _\lambda F^{\lambda 1}=\partial _tE_x+\partial _yB.$$
$`(9c)`$
Using the Maxwell equation $`_\lambda F^{\lambda \mu }=J^\mu `$ where $`J^\mu `$ is the external source producing the field $`F_{\mu \nu }`$, we have
$$b=J^0\equiv \rho ,$$
$`(10a)`$
$$e_x=J^2\equiv J_y,e_y=J^1\equiv J_x.$$
$`(10b)`$
This means that the current $`(J_y,J_x)`$ plays the same role for the neutral particles as an electric field does for the charged ones, and the charge density $`\rho `$ plays the same role as a magnetic field. We can now use the conclusions and results of without further calculations. Consider an external electric charge density $`\rho `$ and an electric current density $`𝐉`$, both constant and uniform. The field strength $`F_{\mu \nu }`$ produced by them is not uniform. If $`|𝐉|<|\rho |`$ there is no pair creation. When $`|𝐉|>|\rho |`$, we have the probability, per unit time and per unit area, of neutral particle-antiparticle pair creation in the vacuum
$$w=\frac{(|\mu _m|𝒥)^{\frac{3}{2}}}{4\pi ^2}\underset{n=1}{\overset{\mathrm{}}{}}\frac{1}{n^{\frac{3}{2}}}\mathrm{exp}\left(\frac{n\pi m^2}{|\mu _m|𝒥}\right),$$
(11)
where
$$𝒥=\sqrt{𝐉^2-\rho ^2}.$$
(12)
$`w`$ is independent of the space-time position as both $`\rho `$ and $`𝐉`$ are constant and uniform. Also note that the result is Lorentz invariant.
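Eq. (11) with the invariant $`𝒥`$ of Eq. (12) is straightforward to evaluate numerically by truncating the rapidly converging sum (natural units with $`\mathrm{}=c=1`$ are assumed):

```python
import math

def pair_creation_rate(mu_m, m, rho, J, terms=200):
    """Probability per unit time and area of neutral pair creation,
    Eq. (11), with the invariant sqrt(J^2 - rho^2) of Eq. (12).
    Returns 0 when |J| <= |rho|, where no pair creation occurs."""
    if abs(J) <= abs(rho):
        return 0.0
    jcal = math.sqrt(J * J - rho * rho)
    x = abs(mu_m) * jcal  # |mu_m| * script-J, sets the exponential scale
    s = sum(n ** -1.5 * math.exp(-n * math.pi * m * m / x)
            for n in range(1, terms + 1))
    return x ** 1.5 / (4.0 * math.pi ** 2) * s
```

As in the Schwinger mechanism for charged particles, the rate is non-perturbative in the source strength and exponentially suppressed for heavy particles.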
In conclusion, neutral particle-antiparticle pairs can be created in the vacuum by external electromagnetic fields in 2+1 dimensions, if these particles have nonvanishing magnetic moment. For electromagnetic fields produced by constant uniform sources, the probability of pair creation in the vacuum can be obtained from similar results for charged particles. The crucial point is the duality between magnetic moment and electric charge. It is expected that similar conclusions can be achieved in 3+1 dimensions. Unfortunately, in 3+1 dimensions there is no relation similar to (3), and the duality used here does not exist. Consequently, the calculations in 3+1 dimensions involve considerable mathematical difficulties. We cannot work out an explicit nontrivial result even for a field configuration similar to that in 2+1 dimensions, i.e., a field configuration where $`B_x=B_y=E_z=0`$ and everything is independent of $`z`$. At present, we can only calculate the simplest case for a constant uniform electric field, but the result turns out to be trivial. For a constant uniform magnetic field, the calculations can also be carried out analytically, but the final result involves divergent integrals which we still do not know how to regularize. It seems that some other techniques should be developed to deal with the subject in 3+1 dimensions.
This work was supported by the National Natural Science Foundation of China.
# Squeezed angular momentum coherent states: construction and time evolution (presented at ICSSUR ’99: 6th International Conference on Squeezed States and Uncertainty Relations, Napoli, 24-29 May 1999)
## Abstract
A family of angular momentum coherent states on the sphere is constructed using previous work by Aragone et al . These states depend on a complex parameter which allows an arbitrary squeezing of the angular momentum uncertainties. The time evolution of these states is analyzed assuming a rigid body hamiltonian. The rich scenario of fractional revivals is exhibited with cloning and many interference effects.
In this contribution we will concentrate on a family of coherent states on the sphere which can be proposed for the description of the rotation of simple quantum systems like rigid diatomic molecules or rigid nuclei. The relevant hamiltonian then depends only on the angular momentum $`I`$ and the energy spectrum is expressed in terms of a frequency $`\omega _0`$ by $`E_I=ℏ\omega _0I(I+1)`$. A general wave packet (WP) of the family depends on a parameter $`\eta `$ and on an integer $`k`$ and will be denoted $`\mathrm{\Psi }_{\eta k}(\theta ,\varphi )`$. The states with $`k`$ different from 0 are deduced from a parent state $`\mathrm{\Psi }_{\eta 0}`$ defined as
$$\mathrm{\Psi }_{\eta \mathrm{\hspace{0.17em}0}}(\theta ,\varphi )=\sqrt{\frac{N}{2\pi \mathrm{sinh}2N}}e^{N\mathrm{sin}\theta (\mathrm{cos}\varphi +\text{i}\eta \mathrm{sin}\varphi )}$$
(1)
For real $`\eta `$ the angular spread of the latter depends only on $`N`$ while the average value of $`L_z`$ is given by
$$\langle L_z\rangle =\eta (N\mathrm{coth}(2N)-\frac{1}{2})\stackrel{N\gg 1}{\to }\eta (N-\frac{1}{2})$$
(2)
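Reading Eq. (2) as $`\langle L_z\rangle =\eta (N\mathrm{coth}(2N)-1/2)`$ (the signs are a reconstruction), the large-$`N`$ limit $`\eta (N-1/2)`$ is reached extremely quickly, since $`\mathrm{coth}(2N)\to 1`$ exponentially:

```python
import math

def mean_lz(eta, N):
    """<L_z> for the parent state, Eq. (2): eta * (N*coth(2N) - 1/2)."""
    return eta * (N / math.tanh(2.0 * N) - 0.5)
```

Already for $`N=20`$ the exact value is indistinguishable in double precision from the asymptotic form $`\eta (N-1/2)`$.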
The states with $`k\ne 0`$ are obtained from (1) by application on $`\mathrm{\Psi }_{\eta \mathrm{\hspace{0.17em}0}}`$ of an operator $`(ℒ_+)^k`$. The operator $`ℒ_+`$ and two other ones which form an SU(2) algebra are defined by
$$ℒ_3=\frac{L_x+\text{i}\eta L_y}{\sqrt{1-\eta ^2}},ℒ_\pm =\pm \left(\frac{\eta L_x+\text{i}L_y}{\sqrt{1-\eta ^2}}\right)-L_z$$
(3)
Up to a normalization factor we have
$$\mathrm{\Psi }_{\eta k}(\theta ,\varphi )=(ℒ_+)^k\mathrm{\Psi }_{\eta \mathrm{\hspace{0.17em}0}}(\theta ,\varphi )$$
(4)
The states $`\mathrm{\Psi }_{\eta k}`$ have the following properties:
1. They are eigenstates of $`ℒ_3`$
$$ℒ_3\mathrm{\Psi }_{\eta k}=k\sqrt{1-\eta ^2}\mathrm{\Psi }_{\eta k}$$
(5)
2. The parameter $`\eta =|\eta |\mathrm{exp}(\text{i}\alpha )`$ is a squeezing parameter since one has
$$|\eta |^2=\frac{\mathrm{\Delta }L_x^2}{\mathrm{\Delta }L_y^2}$$
(6)
3. If $`\eta `$ is real the WP are minimum uncertainty states and in general we have
$$\mathrm{\Delta }L_x^2\mathrm{\Delta }L_y^2=\frac{1}{4}[\langle L_z\rangle ^2+|\langle \{L_x,L_y\}\rangle -2\langle L_x\rangle \langle L_y\rangle |^2]=\frac{1}{4}\frac{\langle L_z\rangle ^2}{\mathrm{cos}^2\alpha }$$
(7)
4. Changing $`k`$ enables one to change the average values of all the components of L.
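The SU(2) character of the operators in Eq. (3) can be checked numerically in the spin-1/2 representation, reading the definition as $`ℒ_\pm =\pm (\eta L_x+\text{i}L_y)/\sqrt{1-\eta ^2}-L_z`$ (the relative minus sign is a reconstruction); with that reading, $`[ℒ_3,ℒ_\pm ]=\pm ℒ_\pm `$ and $`[ℒ_+,ℒ_-]=2ℒ_3`$ hold for any real $`|\eta |<1`$:

```python
# Spin-1/2 angular momentum matrices (Pauli matrices / 2) as 2x2 lists.
def mat(a, b, c, d):
    return [[a, b], [c, d]]

LX = mat(0, 0.5, 0.5, 0)
LY = mat(0, -0.5j, 0.5j, 0)
LZ = mat(0.5, 0, 0, -0.5)

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(*terms):  # linear combination of matrices: terms = (coeff, M), ...
    return [[sum(c * M[i][j] for c, M in terms) for j in range(2)]
            for i in range(2)]

def comm(A, B):  # commutator [A, B] = AB - BA
    return lin((1, mmul(A, B)), (-1, mmul(B, A)))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

eta = 0.6
a = 1.0 / (1.0 - eta ** 2) ** 0.5
L3 = lin((a, LX), (1j * eta * a, LY))
Lp = lin((eta * a, LX), (1j * a, LY), (-1, LZ))
Lm = lin((-eta * a, LX), (-1j * a, LY), (-1, LZ))

assert close(comm(L3, Lp), Lp)             # [L3, L+] = +L+
assert close(comm(L3, Lm), lin((-1, Lm)))  # [L3, L-] = -L-
assert close(comm(Lp, Lm), lin((2, L3)))   # [L+, L-] = 2 L3
```

The same identities hold in any representation, since they follow from the su(2) commutators of $`L_x`$, $`L_y`$, $`L_z`$ alone.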
There exist intensive analytical studies devoted to the eigenstates of $`L^2`$ and $`ℒ_3`$. When $`\eta `$ is real they are called intelligent spin states, and quasi-intelligent spin states if $`\eta `$ is complex . These states extend the well known work of . Ref. has discussed fully the use of the SU(2) algebra (3). Obviously our WP are not eigenstates of $`L^2`$ but can be expanded in such a basis of intelligent or quasi-intelligent spin states with the freedom, by a convenient choice of $`N`$, to concentrate the WP on the sphere. More details on these WP can be found in our recent papers and .
Let us now sketch briefly the dynamics which take place if one takes these WP as initial WP at time zero and lets them evolve assuming a rigid-body spectrum. Here we rely fully on the work of Averbukh and Perelman . For times of the form $`t=(m/n)T_{\text{rev}}`$ $`(T_{\text{rev}}=2\pi /\omega _0)`$ the WP is subdivided into $`q`$ fractional WP ($`q=n`$ if $`n`$ is odd, $`q=n/2`$ if $`n`$ is even); the shape of these WP depends on the squeezing parameter $`\eta `$. By changing $`\eta `$ and $`k`$ one can modify the quantum angular spread and make it different for the variable $`\theta `$ and for the variable $`\varphi `$. For $`\eta =\pm 1`$ and for all $`m/n`$ the fractional WP are all clones of the initial one (upper part of Fig. 1 for $`m/n=1/3`$ and $`1/4`$). For different values their shape changes (we have called these WP mutants). These shapes are shown in the lower part of Fig. 1 and in Fig. 2.
The differences between real and imaginary $`\eta `$ are not very significant as shown in Fig. 2. Therefore there exist numerous possibilities for constructing angular coherent states using these intelligent and quasi intelligent spin states. Obviously the choice made in (1) of an exponential WP does not exhaust all possible ones. These remarks illustrate the richness of the rotation of a rigid body in quantum mechanics. The internal rotational degree of freedom (i.e. the use of $`D_{MK}^I`$ functions instead of $`Y_M^I`$) can be studied on a similar footing .
This work extends to the rigid rotor in three dimensions the revival mechanism discussed in for the hydrogen atom. The cloning mechanism, valid in our case only for $`\eta =\pm 1`$, was already investigated in ref for an infinite square well. A review of the evolution of localized WP is found in .
REFERENCES
Aragone C et al. 1974 J. Phys. A: Math. Nucl. Gen. 7 L149;
Aragone C et al. 1976 J. Math. Phys. 17 1963.
Rashid M A 1978 J. Math. Phys. 19 1391, 1397.
Radcliffe J M 1971 J. Phys. A 4 313.
Arvieu R and Rozmej P 1999 J. Phys. A: Math. Gen. 32 2645.
Rozmej P and Arvieu R 1998 Phys. Rev. A 58 4314.
Averbukh I Sh and Perelman N F 1989 Phys. Lett. A 139 449.
Dačić-Gaeta Z and Stroud C R 1990 Phys. Rev. A 42 6308.
Aronstein D L and Stroud C R 1997 Phys. Rev. A 55 4526.
Bluhm R et al. 1996 Am. J. Phys. 64 944.