no-problem/9910/astro-ph9910070.html
# The Role of the IILR in the Structure of the Central Gaseous Discs
## 1. Introduction
There are two types of rotation curves: one rises rapidly from the center, the other relatively slowly. A fast-rising rotation curve can usually host two Lindblad resonances, the outer and the inner (OLR and ILR). A slowly rising rotation curve, however, can host one extra inner Lindblad resonance. Thus we have an outer inner Lindblad resonance (OILR) and an inner inner Lindblad resonance (IILR). The importance of these Lindblad resonances lies in the fact that spiral density waves can be excited there by a rotating bar potential. These waves dominate the structure and evolution of the galactic discs.
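For readers who want to see the resonance condition at work, the short sketch below locates the inner Lindblad resonances of a model rotation curve using the standard condition $`\mathrm{\Omega }_p=\mathrm{\Omega }-\kappa /2`$ (the textbook definition of an ILR, not written out above). The rotation-curve form, the pattern speed, and all numerical values are illustrative assumptions, not parameters of any galaxy discussed here; the point is simply that a slowly rising curve makes $`\mathrm{\Omega }-\kappa /2`$ non-monotonic, so a bar pattern speed below its maximum intersects it twice (IILR and OILR), while a rapidly rising curve yields at most one ILR.

```python
import numpy as np

# Illustrative sketch (not from the paper): find the inner Lindblad resonances
# of a model rotation curve v(r) = v0 * r / sqrt(r^2 + rc^2).  A large core
# radius rc gives a slowly rising curve, so Omega - kappa/2 has a maximum and
# a bar pattern speed below that maximum cuts it twice (IILR and OILR); as
# rc -> 0 the curve rises fast and at most one ILR survives.

def omega_kappa(r, v0=200.0, rc=1.0):
    """Angular frequency Omega and epicyclic frequency kappa (km/s/kpc)."""
    omega = v0 / np.sqrt(r**2 + rc**2)
    # kappa^2 = r dOmega^2/dr + 4 Omega^2, derivative taken numerically
    dr = 1e-4
    om2 = lambda rr: (v0 / np.sqrt(rr**2 + rc**2))**2
    kappa2 = r * (om2(r + dr) - om2(r - dr)) / (2 * dr) + 4 * omega**2
    return omega, np.sqrt(kappa2)

def inner_lindblad_radii(pattern_speed, r):
    """Radii (kpc) where Omega - kappa/2 crosses the bar pattern speed."""
    omega, kappa = omega_kappa(r)
    curve = omega - kappa / 2.0
    sign_change = np.where(np.diff(np.sign(curve - pattern_speed)) != 0)[0]
    return r[sign_change]

r = np.linspace(0.05, 10.0, 4000)          # kpc, illustrative range
print(inner_lindblad_radii(15.0, r))       # two radii: the IILR and the OILR
```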
Recent NICMOS observations of the central regions of the nearby barred galaxies NGC 5383, NGC 1530, and NGC 1667 (Sheth et al. 1999; Regan et al. 1999; Mulchaey and Regan 1997) all show, in the unsharp-mask map, a pair of distinct trailing spirals in the very center (within ∼500 pc), which can be traced as originating from the OILR located at a radius of a few kpc from the center.
Preliminary analysis of the rotation curve based on the BIMA observations (Sheth et al. 1999) suggests that NGC 5383 has an IILR. If this is indeed the case, the inner spiral structure should be leading and separated from the exterior spiral structure. The fact that it shows a pair of distinct trailing spirals, connected continuously to the OILR spirals, poses a challenge to the theory. Two explanations are possible: (1) the IILR is ineffective because its Q-barrier may either be too close to the center or not exist at all, and (2) the rotation curve of NGC 5383 close to the center may not guarantee the existence of an IILR. We shall examine both cases here.
## 2. The Role of the IILR
The effectiveness of the IILR in exciting spiral density waves depends heavily on the location of the Q-barrier, which, in turn, depends on the self-gravitation of the disc. The self-gravitation shifts the IILR Q-barrier toward the galactic center or even out of existence. In the case of NGC 5383, a typical disc surface density of, say, 200 $`M_{\odot }\,\mathrm{pc}^{-2}`$ and a typical sound speed of $`8\,\mathrm{km/s}`$ can shift the Q-barrier of the IILR from 0.5 kpc to 0.2 kpc. If the surface density is a little higher and the disc is a little cooler, the Q-barrier could be moved “beyond” the center, i.e., out of existence.
Under such circumstances, the long leading waves excited by the bar potential at the IILR may never be reflected or refracted at the Q-barrier. Therefore, there would be no reflected short leading spirals to be observed. For the same reason, the incoming trailing spirals generated at the OILR cannot find the Q-barrier associated with the IILR, hence they can proceed to the center (Yuan and Kuo 1997).
Furthermore, if the Q-barrier were moved to 0.2 kpc, the disc thickness would be comparable to the disc radius there (as well as to the wavelengths of the density waves). Strong coupling between the waves and the disc thickness would result in strong damping of the waves and lead to effective wave absorption. Again, this would diminish the Q-barrier's role of reflecting waves, hence no short leading spirals.
## 3. The Rotation Curves near the Center
It is well known that the rotation curves near the centers of disc galaxies cannot be determined accurately, either because of the large velocity dispersion in the central region or because of the lack of angular resolution. Yet the Lindblad resonances depend sensitively on the derivative of the rotation curve. Although crude observational data often suggest rigid-body rotation near the center, and thus the existence of an IILR, it is not yet possible to rule out a Keplerian disc, which has no IILR. It is entirely possible that NGC 5383 does not have an IILR. Thus the waves generated at the OILR can (if viscosity is reasonably small) freely march all the way to, or close to, the center.
### Acknowledgments.
We are thankful to Drs. M.W. Regan, K. Sheth and J.S. Mulchaey for letting us use their unpublished observational results and for their comments on our work.
## References
Mulchaey, J.S., & Regan, M.W., 1997, ApJ, 482, 135
Regan, M.W., Sheth, K., & Vogel, S.N., 1999, preprint.
Sheth, K., Regan, M.W., Vogel, S.N., & Teuben, P.J., 1999, preprint.
Yuan, C., & Kuo, C.L., 1997, ApJ, 486, 750.
no-problem/9910/cond-mat9910489.html
# Neighbor-junction state effect on the fluxon motion in a Josephson stack
## I Introduction
Stacked long Josephson junctions (LJJ's) have recently received much attention since they show a variety of new physical phenomena in comparison with single LJJ's and have potential for applications as narrow-linewidth, powerful oscillators for the mm and sub-mm wavebands. The naturally layered high-$`T_c`$ superconductors (HTS) can be described as intrinsic stacks of Josephson junctions. Therefore, the study of fluxon dynamics in artificial stacks can help to understand the phenomena which take place in HTS.
The inductive coupling model describing the dynamics of the Josephson phases in $`N`$ inductively coupled LJJ's was derived by Sakai et al. Experimental investigation of stacked junctions became possible after the progress achieved in (Nb-Al-AlO<sub>x</sub>)<sub>N</sub>-Nb technology which, at the present stage, makes it possible to fabricate stacks with up to about 30 Josephson tunnel junctions having a parameter spread of less than 10% between them. Initially, interest was concentrated on the simplest symmetric fluxon states, since they are promising for oscillator applications. Later on, it was realized that the asymmetric states are also very interesting to understand because they show rather nontrivial nonlinear dynamics. Such asymmetric states are also of practical importance, because multilayered oscillators will most probably operate in a regime where most, but not all, of the junctions oscillate coherently, while the other junctions are in a resistive or unsynchronized flux-flow state.
In a recent work it has been shown that the dynamic state of one junction in a 2-fold stack affects the static properties of the other junction. As a next step, it is interesting and important to understand how different dynamic states in one LJJ affect the dynamics of fluxons in the other LJJ. In particular, the goal of this work is to study the dynamics of a fluxon in one LJJ when the neighboring LJJ is in the resistive state and to compare it with the case when the neighboring LJJ is in the Meissner state. We call the resistive state a “phase-whirling” state because, in a rough approximation, the Josephson phase difference rotates very fast and nearly uniformly. Such a dynamic state often occurs in experiments and, therefore, it is important to understand and describe it adequately. In the early experiments with stacks it was somewhat naively supposed that the voltage of the Flux-Flow Step (FFS) in one LJJ does not depend on the state (Meissner or resistive) of the other LJJ. In fact, this is true only for the asymptotic voltage of the FFS. Here we show that in the presence of the “phase-whirling” solution in one of the junctions the actual flux-flow voltage across the other LJJ gets lower.
In Section II we present the experimental data which clearly show that in a 2-fold stack the switching of one junction from the Meissner state to the phase-whirling state decreases the velocity of a fluxon moving in the other junction and, therefore, the dc voltage across it. The analytical approach which explains the observed decrease of the fluxon velocity in the limit of high fluxon density is developed in Section III. The results of numerical simulations confirm the analytical results and are presented in Section IV. The results of the work are summarized in Section V.
## II Experiment
In order to investigate the influence of the phase-whirling state in the neighboring junction on the fluxon dynamics, we have chosen the cleanest geometry: a ring-shaped (annular) LJJ stack. Due to magnetic flux quantization in a superconducting ring, the number of fluxons initially trapped in each annular junction of the stack is conserved. The fluxon dynamics can be studied here under periodic boundary conditions, which exclude possible complicated interference of the fluxon with the junction edges.
Experiments have been performed with 3 different (Nb-Al-AlO<sub>x</sub>)<sub>2</sub>-Nb annular LJJ stacks prepared in 2 different technological runs (2 samples in one run and the third sample in another run). The sample geometry is shown in Fig. 1. Two annular LJJ's are stacked one on top of the other, with bias leads attached to the top and bottom electrodes. The physical parameters of all samples, measured at $`T=4.2\mathrm{K}`$, are summarized in Tab. I. The stacks were designed with extra contacts to the middle superconducting electrode so that the voltages across each LJJ can be measured separately. The inner diameter of all stacks was $`D=122.5\mu \mathrm{m}`$ and the width $`W=10\mu \mathrm{m}`$. Due to the technological difficulties of making a stack of identical LJJ's with contacts to the middle electrode, the two stacked junctions had a rather substantial difference in quasiparticle (subgap) resistance $`R_{\mathrm{QP}}`$. The normalized circumference of the ring was $`\pi D/\lambda _J=L/\lambda _J\approx 15`$, where $`\lambda _J`$ is the Josephson penetration depth, which was approximately equal in both junctions. Measurements were performed in the temperature range from $`4.2`$ to $`5.8\mathrm{K}`$.
In stacked annular LJJ's, clean trapping of a single fluxon in a desired junction is rather difficult due to the asymmetry of the required state $`[1|0]`$. In the particular case of the 3 samples mentioned above, the asymmetry in the junctions' resistances allowed us to trap the fluxon in the desired $`[1|0]`$ state without much effort, simply by applying a small bias current through one of the junctions while cooling the sample below the critical temperature $`T_c`$. After every trapping attempt, the resulting state was checked. The $`I`$–$`V`$ characteristics (IVC) of both LJJ's were traced simultaneously in such a way that the current was applied through the whole structure (through the two junctions connected in series) and the voltages were measured individually across each LJJ. The wanted state $`[1|0]`$, with a fluxon in one junction and no fluxon in the other junction, was identified by the simultaneous observation of a small critical current $`I_c`$ and a fluxon step with the smallest asymptotic voltage $`\approx 20\mu \mathrm{V}`$ in LJJ<sup>A</sup>, and a large critical current in LJJ<sup>B</sup>. Both the current amplitude of the fluxon step $`I_{\mathrm{max}}^A(H)`$ and the critical current $`I_c^B(H)`$ are expected to have their maxima at zero applied magnetic field $`H=0`$. To check that we had clean fluxon trapping, i.e. that the fluxon was trapped in the LJJ and not accompanied by parasitic Abrikosov vortices in the superconducting films surrounding the LJJ, we checked the dependences $`I_c^{A,B}(H)`$ and $`I_{\mathrm{max}}(H)`$ after each trapping attempt and repeated the procedure until these dependences were symmetric.
The main experimental result of the paper is shown in Fig. 2. It shows the IVC's of both LJJ's of sample #2 traced at $`T\approx 5\mathrm{K}`$ using a rather complex current sweep sequence. Note that the voltage scales of the two IVC's in Fig. 2 are different and are shown on the bottom axis for $`V^A`$ and on the top axis for $`V^B`$. The sweep starts at the bias point A where $`I=0`$ and $`V=0`$ and a fluxon is trapped in LJJ<sup>A</sup> (state $`[1|0]`$). When the current is increased up to about $`I=0.69\mathrm{mA}`$ (point B in Fig. 2), LJJ<sup>A</sup> switches to the fluxon step, while LJJ<sup>B</sup> still remains in the Meissner state. Ideally, LJJ<sup>A</sup> should switch to the fluxon step at zero bias current, since any nonzero applied current should drive the fluxon around the stack. In our case the fluxon is pinned, most probably near one of the contacts to the middle electrode, and only a current $`I=0.69\mathrm{mA}`$ can tear it away from the pinning center. With further increase of the bias current, LJJ<sup>A</sup> follows the fluxon step which corresponds to the fluxon rotating in the ring, and the voltage across LJJ<sup>A</sup> is proportional to the fluxon rotation frequency according to the Josephson relation.
In an ideal single annular LJJ, the fluxon step has a relativistic nature and its slope approaches infinity when the fluxon moves with a velocity $`u`$ close to the Swihart velocity $`\overline{c}_0`$. In a stack with different $`j_c`$ or with inhomogeneities, the fluxon's velocity can exceed the Swihart velocity. This results in the emission of electromagnetic waves travelling behind the moving fluxon (Cherenkov radiation) and in a finite slope of the step at any velocity. If the length of the emitted radiation tail is comparable with the circumference of the LJJ, resonant structures (small steps) can appear on top of the fluxon step. Such steps are visible on top of the fluxon step in Fig. 2 and are outlined by the circle. At $`I\approx 2.67\mathrm{mA}`$ (point C in Fig. 2) both junctions simultaneously switch to the resistive state (gap voltage) with rapidly whirling Josephson phase. Such simultaneous switching is called current locking. We studied it in detail for stacks of linear geometry in Ref. . Analyzing the dependence of the critical current $`I_c^B(H)`$ and the maximum current of the fluxon step $`I_{\mathrm{max}}^A(H)`$ on magnetic field, we conclude that the current locking was driven (initiated) by LJJ<sup>A</sup> (if LJJ<sup>A</sup> is kept in the resistive state, $`I_c^B`$ is substantially higher).
When both LJJ's are in the resistive state, we reverse the direction of the sweep, i.e. start reducing the bias current. At $`I=1.08\mathrm{mA}`$ (point D in Fig. 2), LJJ<sup>A</sup> switches from the resistive state to the fluxon step while LJJ<sup>B</sup> still stays in the phase-whirling state. We denote such a state of the stack as $`[1|R]`$. The voltage $`V^A`$ in the $`[1|R]`$ state is about $`16\%`$ smaller than $`V^A`$ in the $`[1|0]`$ state at the same bias. In fact, this is one of the key observations of our study.
At this point there are two possibilities: first, to continue decreasing the bias current down to zero or, second, to increase the current and trace up the single-fluxon step of the $`[1|R]`$ state.
If we continue decreasing the bias current, at $`I=0.916\mathrm{mA}`$ (point E in Fig. 2) LJJ<sup>B</sup> switches from the resistive state (McCumber branch) to the Meissner state and the overall state of the stack becomes $`[1|0]`$. This causes the voltage $`V^A`$ to increase and become equal to the voltage of the fluxon step which we traced in the beginning of the bias current sweep. The fact that the switching of LJJ<sup>B</sup> caused a voltage jump across LJJ<sup>A</sup> is marked in Fig. 2 by a dotted arrow. Thus, we demonstrated experimentally that the change in the state of LJJ<sup>B</sup> affects the velocity of the fluxon moving in LJJ<sup>A</sup>. Further decrease of the bias current results in fluxon pinning at $`I=0.470\mathrm{mA}`$ (point F in Fig. 2) and in zero voltage across both LJJ's.
The second possibility is, starting from bias point D, to increase the bias current and trace the fluxon step of the $`[1|R]`$ state up. This step has a rather peculiar shape, as shown in Fig. 2. In addition to the common trend of having a smaller voltage than the fluxon step in the $`[1|0]`$ state, the step in the $`[1|R]`$ state bends back and forth, which possibly implies some interesting physics behind it. We also noticed that as we trace both IVC's in the $`[R|R]`$ state from bias point C down to bias point D and then in the $`[1|R]`$ state from point D up to bias point G, the voltage across LJJ<sup>B</sup> has a small hysteresis at voltages equal to the sum of the gap voltages of the superconducting electrodes constituting LJJ<sup>B</sup>. This small hysteresis is shown magnified in the inset of Fig. 2. The voltage across LJJ<sup>B</sup> is somewhat smaller in the $`[1|R]`$ state than in the $`[R|R]`$ state. As soon as LJJ<sup>A</sup> switches to the resistive state (point G in Fig. 2 and dotted arrow in the inset), this difference vanishes.
We propose the following explanation for the observed back bending. As we increase the current starting from point D up to $`I\approx 1.5\mathrm{mA}`$, corresponding to the nearly vertical slope of the fluxon step, the voltage $`V^B`$ increases from $`55\mu \mathrm{V}`$ up to $`1.9\mathrm{mV}`$, i.e. approaches the gap voltage. The fluxon motion in LJJ<sup>A</sup>, due to the coupling between the junctions, causes oscillations of the Josephson phase and, therefore, of the electric and magnetic fields in LJJ<sup>B</sup>. This leads to the photon-assisted tunneling (PAT) effect in LJJ<sup>B</sup>. This effect was observed earlier by Giaever, also using a stack of two junctions. The characteristic frequency $`\omega `$ of the photons absorbed in LJJ<sup>B</sup> is equal to the fluxon rotation frequency in LJJ<sup>A</sup>. Due to PAT, one expects to observe a step at the gap sum voltage decreased by $`\mathrm{\hbar }\omega /e=2V^A`$. In the low bias region shown in the inset of Fig. 2 the gap sum step is not well defined, which results in a somewhat weaker gap suppression. In fact, the maximum suppression of the voltage $`V^B`$ we have found is about $`20\mu \mathrm{V}`$, which corresponds to the top of the back-bending region at $`I\approx 2.6\mathrm{mA}`$. Since the Josephson voltage in LJJ<sup>A</sup> is also about $`20\mu \mathrm{V}`$, its effect on $`V^B`$ is only about 50% of the expected PAT step voltage change. Thus, as we increase the current, the gap voltage in LJJ<sup>B</sup> decreases due to photon-assisted tunneling. The resulting decrease of the fluxon step voltage of LJJ<sup>A</sup> is associated with the appearance of an additional dissipation channel due to PAT. Since the PAT step on the IVC is limited in voltage (by $`2V^A`$) and in current amplitude (proportional to the amplitude of the first Josephson harmonic which, in the case of fluxon motion, saturates at some bias), the bending to the right caused by Cherenkov radiation becomes stronger at $`I>2.6\mathrm{mA}`$, so that the fluxon step of LJJ<sup>A</sup> gains a positive slope again. The differential resistance at the top of the fluxon step in the $`[1|R]`$ state is rather high and no resonances are observed. This picture is typical for a fluxon with a Cherenkov radiation tail moving in a medium with high dissipation.
The negative-bias part of the IVC reproduces all the features described above for the positive half, except for the small hysteresis between bias points B and F. Very similar IVC's were found for the other 2 measured samples. The minor difference was in the particular values of the bias current at the points B, C, D, E, F and G, which were also dependent on $`T`$. All samples showed that the voltage $`V^A`$ of the fluxon step in the $`[1|0]`$ state is somewhat higher than the voltage of the same step at the same bias in the $`[1|R]`$ state. The fluxon step in the $`[1|R]`$ state showed back and forth bending for all samples. The hysteresis between points B and F intersected with the hysteresis between points D and E for some samples and temperatures, so we had to use an even more complex sweep sequence in order to trace out all possible dynamical states.
## III Theory
The main objective of this section is to analyze the origin of the decrease in the fluxon velocity in one LJJ due to the switching of the neighboring junction into the resistive state. Here we use the standard RSJ model which does not take into account the dependence of the dissipation on voltage at $`V\approx V_g\approx 2.4\mathrm{mV}`$. Thus the gap-related effects like the PAT discussed above are neglected.
The fluxon dynamics in the system under investigation can be described in the framework of the inductive coupling model which for the case of two coupled junctions takes the form:
$`{\displaystyle \frac{\varphi _{xx}}{1-S^2}}-\varphi _{tt}-\mathrm{sin}\varphi -{\displaystyle \frac{S\psi _{xx}}{1-S^2}}`$ $`=`$ $`\alpha \varphi _t-\gamma ;`$ (1)
$`{\displaystyle \frac{\psi _{xx}}{1-S^2}}-\psi _{tt}-\mathrm{sin}\psi -{\displaystyle \frac{S\varphi _{xx}}{1-S^2}}`$ $`=`$ $`\alpha \psi _t-\gamma ,`$ (2)
where $`\varphi `$ and $`\psi `$ are the Josephson phases across LJJ<sup>A,B</sup>, respectively, $`-1<S<0`$ is a dimensionless coupling parameter, $`\alpha \ll 1`$ is the damping coefficient describing the dissipation in the system due to quasiparticle tunneling, and $`\gamma =j/j_c`$ is the normalized density of the bias current flowing through the stack. The coordinate $`x`$ and time $`t`$ are measured, respectively, in units of the Josephson length $`\lambda _J`$ and the inverse plasma frequency $`\omega _p^{-1}`$ of a single-layer LJJ. For the sake of simplicity, most of the relevant parameters of the junctions, such as the effective magnetic thicknesses and specific capacitances, are assumed to be equal in both LJJ's.
To understand the origin of the additional friction force we start from the unperturbed Eqs. (1) and (2) (without the r.h.s.) and use the force balance equations to derive the shape of the IVC. We do not solve Eqs. (1) and (2) directly but, rather, use trial solutions for $`\varphi (x,t)`$ and $`\psi (x,t)`$ which approximate the Josephson phase profiles in the states $`[1|0]`$ or $`[1|R]`$. The choice of the trial functions is suggested by the results of the numerical simulations presented in the following section. To simplify the mathematics and concentrate attention on the physics, the dense fluxon chain approximation is used. For this case we adopt the following trial solutions:
$`\varphi (x,t)=H(x-ut)+A_{r,m}\mathrm{sin}\left[H(x-ut)\right];`$ (3)
$`\psi ^m(x,t)=B_m\mathrm{sin}\left[H(x-ut)\right];`$ (4)
$`\psi ^r(x,t)=\omega t-{\displaystyle \frac{1}{\omega ^2}}\mathrm{sin}(\omega t)+B_r\mathrm{sin}\left[H(x-ut)\right],`$ (5)
where $`\psi ^m`$ and $`\psi ^r`$ are the phases in LJJ<sup>B</sup> in the Meissner state and in the resistive state, respectively (these are the two cases which we are going to compare); $`u`$ is the velocity of the fluxon chain; $`A_{r,m}`$, $`B_{r,m}\ll 1`$ are the constants which we are going to determine for the resistive and Meissner states, respectively; $`H`$ is the average normalized magnetic field in LJJ<sup>A</sup>:
$$H=\frac{2\pi N}{\ell },$$
(6)
where $`N`$ is the number of fluxons trapped in the annular LJJ<sup>A</sup> (for the $`[1|0]`$ and $`[1|R]`$ states $`N=1`$), and $`\ell =L/\lambda _J`$ is the normalized length of the junctions. The dense fluxon chain approximation implies that $`H\gg 1`$.
Substituting Eqs. (3)–(5) into Eqs. (1) and (2), and using the following approximations (which are justified in our case)
$`\mathrm{sin}\varphi \approx \mathrm{sin}\left[H(x-ut)\right];`$ (7)
$`\mathrm{sin}\psi ^m\approx B_m\mathrm{sin}\left[H(x-ut)\right];`$ (8)
$`\mathrm{sin}\psi ^r\approx \mathrm{sin}(\omega t),`$ (9)
we arrive to the equations from which we can determine $`A_{r,m}`$ and $`B_{r,m}`$. The final result for the $`[N|R]`$ (resistive) state is
$`A_r`$ $`=`$ $`{\displaystyle \frac{D}{H^2Q}};`$ (10)
$`B_r`$ $`=`$ $`{\displaystyle \frac{S}{H^2Q}},`$ (11)
where we introduced notations
$`D`$ $`=`$ $`1-u^2(1-S^2);`$ (12)
$`Q`$ $`=`$ $`\left(1-u^2\right)^2-u^4S^2.`$ (13)
Note that both $`D>0`$ and $`Q>0`$ for $`u<\overline{c}_{-}`$.
The result for the $`[N|0]`$ (Meissner) state is
$`A_m`$ $`=`$ $`{\displaystyle \frac{DH^2+1-S^2}{H^2\left(QH^2+D\right)}};`$ (14)
$`B_m`$ $`=`$ $`{\displaystyle \frac{S}{QH^2+D}}.`$ (15)
To calculate IVC, $`\gamma (u)`$, we write the force balance equation
$$2\pi N\gamma =F_\alpha ^A+F_\alpha ^B.$$
(16)
Here $`F_\alpha ^{A,B}`$ are the friction forces which develop in LJJ<sup>A,B</sup>. The expression for the friction force is well known from the perturbation theory
$$F_\alpha =-\alpha \int _0^{\ell }\varphi _x\varphi _t\,dx,$$
(17)
where we will use $`\ell =2\pi N/H`$, which follows from (6). Since we are interested in the average friction force to get the IVC, we have to average the friction force over time:
$$\overline{F}_\alpha =\frac{1}{T}\int _0^TF_\alpha (t)\,dt.$$
(18)
Since there are two characteristic frequencies in the system, the fluxon (Josephson) frequency and the phase-whirling frequency of the resistive state, we have to choose the averaging interval $`T`$ in Eq. (18) so that it contains an integer number of periods of each frequency, i.e. $`T=2\pi k/\omega =2\pi m/(Hu)`$, with $`k`$ and $`m`$ integer constants. After averaging, we get the following expressions for the friction forces
$`\overline{F}_\alpha ^A`$ $`=`$ $`\pi N\alpha Hu\left(A_{r,m}^2+2\right);`$ (19)
$`\overline{F}_\alpha ^B`$ $`=`$ $`\pi N\alpha HuB_{r,m}^2.`$ (20)
All information about the actual state is contained in $`A_{r,m}`$ and $`B_{r,m}`$ calculated above for the Meissner and resistive state of the LJJ<sup>B</sup>.
Finally, we insert (19) and (20) into the force balance equation (16) and get IVC’s
$$\gamma _{r,m}(u)=\frac{\alpha Hu}{2}\left(A_{r,m}^2+B_{r,m}^2+2\right).$$
(21)
Now we can prove that IVC for the $`[N|R]`$ state is shifted to the region of lower velocities in comparison with the IVC for $`[N|0]`$ state, i.e. that
$$\delta (u)=\gamma _r(u)-\gamma _m(u)>0\text{ for all }|u|<\overline{c}_{-}.$$
(22)
Substituting $`A_{r,m}`$ and $`B_{r,m}`$ from Eqs. (10), (11), (14), (15) into the expression (21) and using the obtained expressions for $`\gamma _{r,m}(u)`$ in (22) we get
$$\delta (u)=\frac{\alpha u\left(X_1H^2+X_2\right)}{2H^3Q^2\left(H^2Q+D\right)^2},$$
(23)
where $`X_1`$ and $`X_2`$ are defined as
$`X_1`$ $`=`$ $`2QD\left[S^2-Q\left(1-S^2\right)+D^2\right];`$ (24)
$`X_2`$ $`=`$ $`D^2\left(D^2+S^2\right)-\left(1-S^2\right)^2Q^2,`$ (25)
Obviously, Eq. (23) is positive when both $`X_1`$ and $`X_2`$ are positive. To prove the latter, we express $`D^2`$ as a function of $`Q`$ using Eqs. (12) and (13)
$$D^2=Q\left(1-S^2\right)+S^2.$$
(26)
Substituting (26) into (24) and (25) we get
$`X_1`$ $`=`$ $`4QDS^2>0;`$ (27)
$`X_2`$ $`=`$ $`S^2\left[3Q\left(1-S^2\right)+2S^2\right]>0.`$ (28)
Thus (22) is proved and $`\gamma _r(u)>\gamma _m(u)`$ for any $`u<\overline{c}_{-}`$.
This result is in agreement with our experiment and simulations (see the following section). From a physical point of view, the origin of the effect lies in the difference between Eqs. (8) and (9), where the main term depends on the state of LJJ<sup>B</sup>, resulting in different phase profiles and different friction forces for the $`[1|0]`$ and $`[1|R]`$ states. The IVC's of the fluxon steps $`u^A(\gamma )`$ for the states $`[1|0]`$ and $`[1|R]`$ calculated using Eq. (21) are shown in Fig. 3. According to the calculations presented above, the difference $`\delta (u)`$ diverges as $`u`$ approaches $`\overline{c}_{-}`$. Actually, for this case of a single fluxon in our relatively long junction, the dense fluxon chain approximation is not fully valid, so one needs to perform a more exact analysis or numerical simulations. The region of validity of our approximation is $`A_r<1`$ and $`B_r<1`$. Using Eqs. (10) and (11) for the same parameters as in Fig. 3 we find that our approximation is valid up to $`u\approx 0.809`$, while $`\overline{c}_{-}\approx 0.816`$.
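As a cross-check on Eqs. (10)-(28), the inequality $`\gamma _r(u)>\gamma _m(u)`$ can also be verified numerically from Eq. (21) alone. The short Python sketch below does this; it is not the authors' code, and the parameter values are assumptions chosen only so that the characteristic velocity and the validity limit come out close to the numbers quoted above ($`|S|=0.5`$ gives $`\overline{c}_{-}=1/\sqrt{1+|S|}\approx 0.816`$ from the smaller root of $`Q(u)=0`$, and $`H=2\pi `$ corresponds to the $`N/L=1`$ chain used in the simulations).

```python
import numpy as np

# Numerical check of Eq. (21) for the [N|R] (resistive) and [N|0] (Meissner)
# states.  alpha, S and H below are illustrative assumptions, not fit values.
alpha = 0.1
S = -0.5
H = 2.0 * np.pi

def coefficients(u, state):
    """Amplitudes A and B of the trial solutions, Eqs. (10)-(15)."""
    D = 1.0 - u**2 * (1.0 - S**2)            # Eq. (12)
    Q = (1.0 - u**2)**2 - u**4 * S**2        # Eq. (13)
    if state == "resistive":                 # Eqs. (10)-(11)
        return D / (H**2 * Q), S / (H**2 * Q)
    return ((D * H**2 + 1.0 - S**2) / (H**2 * (Q * H**2 + D)),   # Eqs. (14)-(15)
            S / (Q * H**2 + D))

def gamma(u, state):
    """Normalized bias current versus chain velocity, Eq. (21)."""
    A, B = coefficients(u, state)
    return 0.5 * alpha * H * u * (A**2 + B**2 + 2.0)

c_minus = 1.0 / np.sqrt(1.0 + abs(S))        # smaller root of Q(u) = 0
u = np.linspace(0.01, 0.999 * c_minus, 2000)
delta = gamma(u, "resistive") - gamma(u, "meissner")
print("gamma_r(u) > gamma_m(u) everywhere:", bool(np.all(delta > 0)))

# The dense-chain expansion needs A_r < 1; with these assumed parameters the
# limit indeed falls close to the u ~ 0.809 quoted in the text.
A_r, _ = coefficients(u, "resistive")
print("A_r < 1 up to u ~", round(float(u[A_r < 1.0][-1]), 3))
```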
## IV Simulation
To check the limitations of the analysis presented above we performed direct numerical simulations. Our simulations show that the effect observed in the experiment and explained in the framework of the high fluxon density approximation exists for any fluxon density and any velocity, even very close to $`\overline{c}_{-}`$. Another advantage of the simulation over the analytical approach is that the simulation fully reproduces the dynamics of the inductive coupling model and, therefore, all the effects possible in its framework.
The numerical procedure works as follows. For a given set of LJJ parameters we simulate the IVC of the system, i.e. calculate $`\overline{V}^A(\gamma )`$ and $`\overline{V}^B(\gamma )`$ while increasing $`\gamma `$ from zero up to $`1`$. To calculate the voltages $`\overline{V}^A(\gamma )`$ and $`\overline{V}^B(\gamma )`$ for each value of $`\gamma `$, we simulate the dynamics of the phases $`\varphi ^{A,B}(x,t)`$ by solving Eqs. (1) and (2) with the periodic boundary conditions:
$`\varphi ^{A,B}(0,\stackrel{~}{t})`$ $`=`$ $`\varphi ^{A,B}(\ell ,\stackrel{~}{t})+2\pi N^{A,B}`$ (29)
$`\varphi _{\stackrel{~}{x}}^{A,B}(0,\stackrel{~}{t})`$ $`=`$ $`\varphi _{\stackrel{~}{x}}^{A,B}(\ell ,\stackrel{~}{t}),`$ (30)
numerically using an explicit method \[expressing $`\varphi ^{A,B}(t+\mathrm{\Delta }t)`$ as a function of $`\varphi ^{A,B}(t)`$ and $`\varphi ^{A,B}(t-\mathrm{\Delta }t)`$\], treating $`\varphi _{xx}`$ with a five-point and $`\varphi _{tt}`$ and $`\varphi _t`$ with a three-point symmetric finite-difference scheme. Numerical stability was checked by doubling the spatial and temporal discretization steps $`\mathrm{\Delta }x`$ and $`\mathrm{\Delta }t`$ and checking their influence on the fluxon profiles and on the IVC. The discretization values used for the simulation were $`\mathrm{\Delta }x=0.01`$, $`\mathrm{\Delta }t=0.0025`$. After simulating the phase dynamics for $`T_0=20`$ time units we calculate the average dc voltages $`\overline{V}^{A,B}`$ during this time interval as
$$\overline{V}^{A,B}=\frac{1}{T}\int _0^T\varphi _t^{A,B}(t)\,dt=\frac{\varphi ^{A,B}(T)-\varphi ^{A,B}(0)}{T}.$$
(31)
For faster convergence, we use the fact that $`\overline{V}^{A,B}`$ does not depend on $`x`$ and, therefore, we also take advantage of the spatial averaging of the phases $`\varphi ^{A,B}`$ in (31).
When the values of $`\overline{V}^{A,B}`$ are found from (31), the dynamics of the phases $`\varphi ^{A,B}(x,t)`$ is simulated further during $`1.2T_0`$ time units, the dc voltages $`\overline{V}^{A,B}`$ are calculated for this new time interval, and the result is compared with the previously calculated values. We repeat such iterations, further increasing the time interval by a factor of 1.2, until the difference in dc voltages $`|\overline{V}(1.2^{n+1}T)-\overline{V}(1.2^nT)|`$ obtained in two subsequent iterations becomes smaller than a given accuracy $`\delta V=10^{-3}`$. The particular value of the factor $`1.2`$ was found to be nearly optimal, providing fast convergence as well as effective averaging of low harmonics on subsequent steps. A very small value of this factor, e.g. $`1.01`$, can result in very slow convergence when $`\varphi (t)`$ contains harmonics with a period comparable to or larger than $`T`$. Large values of the factor, e.g. 2 or higher, consume a lot of CPU time already during the second or third iteration, even when the convergence is good. After the voltage averaging for the current $`\gamma `$ is complete, $`\gamma `$ is changed by a small amount $`\delta \gamma `$ to calculate the voltages at the next point of the IVC. As initial conditions here we use the distribution of phases (and their derivatives) obtained at the previous point of the IVC.
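A condensed illustration of this procedure is sketched below in Python. It is only a schematic reimplementation of the scheme just described, not the code used for the results of this paper: the discretization steps, $`T_0`$, the factor 1.2 and the accuracy $`10^{-3}`$ follow the values quoted above, whereas the damping, coupling, junction length, fluxon content and bias values are arbitrary assumptions.

```python
import numpy as np

# Schematic reimplementation of the explicit scheme for Eqs. (1)-(2) with the
# periodic boundary conditions (29)-(30).  dx, dt, T0, the 1.2 factor and the
# 1e-3 accuracy follow the text; alpha, S, length, N and gamma are assumptions.
alpha, S = 0.1, -0.3
length, n_fluxons = 10.0, (1, 0)          # state [1|0]: one fluxon in junction A
dx, dt, T0, tol = 0.01, 0.0025, 20.0, 1e-3
nx = int(round(length / dx))
x = np.arange(nx) * dx
windings = [2 * np.pi * n * x / length for n in n_fluxons]   # linear part of phi

def d2x(f):
    """Five-point periodic second derivative in x."""
    return (-np.roll(f, 2) + 16 * np.roll(f, 1) - 30 * f
            + 16 * np.roll(f, -1) - np.roll(f, -2)) / (12 * dx**2)

def step(theta, theta_old, gamma):
    """One explicit time step; theta[j] = phi_j minus its 2*pi*N*x/L winding."""
    new = []
    for j in (0, 1):
        th, th_old, other = theta[j], theta_old[j], theta[1 - j]
        rhs = ((d2x(th) - S * d2x(other)) / (1 - S**2)
               - np.sin(th + windings[j]) + gamma)
        # three-point phi_tt and phi_t, solved for the new phase
        new.append((rhs * dt**2 + 2 * th - th_old + 0.5 * alpha * dt * th_old)
                   / (1 + 0.5 * alpha * dt))
    return new

def dc_voltages(gamma, theta, theta_old):
    """Average voltages, Eq. (31), with the iterative 1.2-factor time window."""
    v_prev, horizon = None, T0
    while True:
        start = [theta[0].copy(), theta[1].copy()]
        nsteps = int(round(horizon / dt))
        for _ in range(nsteps):
            theta, theta_old = step(theta, theta_old, gamma), theta
        v = [np.mean(theta[j] - start[j]) / (nsteps * dt) for j in (0, 1)]
        if v_prev is not None and max(abs(a - b) for a, b in zip(v, v_prev)) < tol:
            return v, theta, theta_old
        v_prev, horizon = v, 1.2 * horizon

# Sweep the bias upward, reusing the final phases as the next initial condition.
theta = [np.zeros(nx), np.zeros(nx)]
theta_old = [np.zeros(nx), np.zeros(nx)]
for gamma in np.arange(0.05, 0.45, 0.05):
    (v_a, v_b), theta, theta_old = dc_voltages(gamma, theta, theta_old)
    print(f"gamma = {gamma:.2f}   V_A = {v_a:.3f}   V_B = {v_b:.3f}")
```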
An example of a calculated IVC is shown in Fig. 4. To trace both the Meissner and resistive states we use the following sweep sequence: $`\gamma `$ increases from 0 up to 1 with a step $`\delta \gamma =0.01`$, then decreases down to 0.5 with a step $`\delta \gamma =0.01`$, and further down with a step $`\delta \gamma =0.002`$ until the state $`[N|R]`$ is reached, as shown in Fig. 4. From this point, we either sweep further down to $`\gamma =0`$ or up to $`\gamma =1`$ until both junctions switch to the resistive state. The decrease of the voltage in the $`[1|R]`$ state in comparison with the $`[1|0]`$ state is very clearly seen in Fig. 4. We performed this kind of simulation for a wide range of parameters, i.e. for $`|S|=0.1, 0.2, \mathrm{}, 0.5`$ and $`N=1, 2, \mathrm{}, 5`$ (in total 25 pairs of IVC's), and found similar IVC's in all cases. For the relatively dense fluxon chain $`N/L=1`$ we also compared the IVC's obtained by means of numerical simulation with the IVC's derived analytically and found good agreement, as shown in Fig. 3. The small difference in the slope of the analytical and numerical IVC's is related to the finite density of fluxons in the simulation ($`H=2\pi `$), while the theoretical curve corresponds to $`H\gg 1`$.
The limitations of our model due to the voltage-independent loss term $`\alpha `$ prevented a proper calculation of the upper part of the fluxon step in the $`[1|R]`$ state. Here the numerical curve shows a series of small voltage jumps (see Fig. 4) in LJJ<sup>A</sup> that are not observed in the experiment. These jumps are related to the excitation of fluxon-antifluxon states in LJJ<sup>B</sup> at high voltages. Such states were not found in the experiment, most probably due to the increased damping in LJJ<sup>B</sup> at the gap voltage.
## V Conclusion
We investigated experimentally and theoretically the motion of a fluxon in one of two magnetically coupled long Josephson junctions. Two different cases are studied: (1) when the neighboring junction (the one which does not contain any fluxon) is in the Meissner state, and (2) when the neighboring junction is in a phase-whirling (resistive) state. We found that the phase-whirling state in LJJ<sup>B</sup> slows down the fluxon motion in LJJ<sup>A</sup> and results in a shift of the fluxon step to lower voltage. This effect is detected experimentally, reproduced in simulations based on the inductive coupling model, and derived analytically in the high fluxon density approximation. In addition, the experiment shows a quite peculiar back and forth bending of the fluxon step in the $`[1|R]`$ state, which we explain as a result of increased damping due to the photon-assisted tunneling effect in LJJ<sup>B</sup>. The results of our study are also relevant for the characterization of stacked Josephson junctions with a large number of layers.
###### Acknowledgements.
We thank Norbert Thyssen for the sample fabrication. Partial support of this work by the Deutsche Forschungsgemeinschaft (DFG) is also acknowledged.
no-problem/9910/hep-ex9910070.html
# New experimental data for the decays ϕ→𝜇⁺𝜇⁻ and ϕ→𝜋⁺𝜋⁻ from the SND detector
## 1 Introduction
Both decays, $`\varphi \to \mu ^+\mu ^{-}`$ and $`\varphi \to \pi ^+\pi ^{-}`$, give interference patterns in the energy dependences of the cross sections of the processes
$`e^+e^{-}\to \mu ^+\mu ^{-},`$ (1)
$`e^+e^{-}\to \pi ^+\pi ^{-}`$ (2)
in the region of the $`\varphi `$ resonance. The interference amplitude is determined by the branching ratio of the corresponding decay. The table value of the branching ratio $`\mathrm{BR}(\varphi \to \mu ^+\mu ^{-})=(2.5\pm 0.4)\times 10^{-4}`$ is based on photoproduction experiments. In $`e^+e^{-}`$ collisions one can measure the leptonic branching ratio of the $`\varphi `$ meson, $`B_{e\mu }=\sqrt{B(\varphi \to \mu ^+\mu ^{-})B(\varphi \to e^+e^{-})}`$. Such measurements were performed in Orsay and Novosibirsk, but their accuracy was not very high.
The experimental result on the decay $`\varphi \to \pi ^+\pi ^{-}`$ does not agree well with the theoretical predictions (see for example ), and an improvement in accuracy is needed.
Data collected at VEPP-2M with the SND detector in the vicinity of the $`\varphi `$ resonance allow us to improve the accuracy of the measurements of these decays. The result on the decay $`\varphi \to \mu ^+\mu ^{-}`$ was obtained using the 1996 data sample with a total integrated luminosity of $`2.61`$ pb<sup>-1</sup>, corresponding to about $`4.6\times 10^6`$ $`\varphi `$ mesons. The 1998 data were used to measure the decay $`\varphi \to \pi ^+\pi ^{-}`$. During 1998 about $`13.2\times 10^6`$ $`\varphi `$ mesons were produced, with an integrated luminosity of $`8.6`$ pb<sup>-1</sup>.
## 2 Event selection
The preliminary selections were the same for both processes. Events with two collinear charged tracks were selected. The cuts on the acollinearity angles in the azimuthal and polar directions were the following: $`\mathrm{\Delta }\phi <10^{\circ }`$, $`\mathrm{\Delta }\theta <25^{\circ }`$. To suppress the beam and cosmic background, the production point of the charged particles was required to be within $`0.5`$ cm of the interaction point in the azimuthal plane and within $`\pm 7.5`$ cm along the beam direction (the longitudinal size of the interaction region $`\sigma _z`$ is about $`2`$ cm). The polar angles of the charged particles were limited to the range $`45^{\circ }<\theta <135^{\circ }`$, corresponding to the acceptance angle of the outer system. The outer system allows one to distinguish between the processes (1) and (2) owing to the different probabilities for muons and pions to hit the outer system. These probabilities are shown in Fig. 2 and 2.
The data were divided into two samples (“muons” and “pions”) depending on the presence of a hit in the outer system. The admixture of events of the process (2) in the “muons” sample was about 3% at the $`\varphi `$-resonance energy. The process (1) gave a 15% contribution to the “pions” sample at the same energy.
Other sources of background are cosmic muons and the process $`e^+e^{-}\to e^+e^{-}`$. The cosmic background was significant only for the process (1) and was suppressed by the time-of-flight system, which is a part of the outer system. The events of the process $`e^+e^{-}\to e^+e^{-}`$ did not contribute to the “muons” sample due to the suppression by the outer system. To reduce the background from these events for the process (2), a procedure of $`e/\pi `$ separation based on the energy depositions in the calorimeter layers was used. The events of the process $`e^+e^{-}\to e^+e^{-}`$ were suppressed by a factor of $`3.6\times 10^4`$, while only 7% of the events of the process under study were lost. The remaining background from the process $`e^+e^{-}\to e^+e^{-}`$ was about 1.5% at $`2E_b=1020`$ MeV.
The processes $`e^+e^{-}\to \varphi \to \pi ^+\pi ^{-}\pi ^0`$ and $`e^+e^{-}\to \varphi \to K_SK_L`$ gave a resonance background to the process $`e^+e^{-}\to \pi ^+\pi ^{-}`$. The events of these decays were suppressed by restrictions on the energy depositions in the calorimeter layers. The contribution of the resonance background remained at the level of 0.5% at $`2E_b=1020`$ MeV. Because such background changes the visible interference pattern, special efforts were made to subtract it. The selected events were divided into two parts by a cut on the parameter $`\mathrm{\Delta }\phi `$: $`\mathrm{\Delta }\phi <5^{\circ }`$ and $`\mathrm{\Delta }\phi >5^{\circ }`$. To obtain the interference parameters, the events from the first part were used. The resonance background was determined from the second part, where its level is comparable with the visible cross section of the process (2). The relationship between the amounts of resonance background in the two parts was calculated by Monte Carlo simulation.
The detection efficiencies were obtained from the simulated data, with some corrections derived from the experimental data. The efficiencies for the processes (1) and (2) were 28% and 12%, respectively, in the region of the $`\varphi `$ resonance.
## 3 Data analysis
The energy dependence of the visible cross sections of the processes under study was fitted with the following formula: $`\sigma _{vis}(E)=\epsilon (E)\sigma _B(E)\beta (E)+\sigma _{bg}(E)`$, where $`\epsilon `$ is the detection efficiency, $`\sigma _B`$ is the Born cross section of the corresponding process, $`\beta `$ is a factor taking into account the radiative corrections, and $`\sigma _{bg}`$ is the background cross section. The Born cross section was factorized into non-resonant and resonant parts:
$`\sigma _B(E)=\sigma _{nr}(E)|Z|^2`$, $`Z=1-Qe^{i\psi }\frac{m_\varphi \mathrm{\Gamma }_\varphi }{m_\varphi ^2-E^2-iE\mathrm{\Gamma }(E)}`$, where $`Q`$ and $`\psi `$ are the modulus and phase of the interference amplitude, and $`m_\varphi `$ and $`\mathrm{\Gamma }_\varphi `$ are the mass and width of the $`\varphi `$ meson. The non-resonant cross section was taken in the form $`\sigma _{\mu \mu }=83.50\,(\mathrm{nb})\frac{m_\varphi ^2}{E^2}`$ for the process (1). A second-order polynomial was used to describe the cross section $`\sigma _{nr}`$ of the process (2).
The interference amplitude is related to the branching ratio of the decay $`\varphi \to \mu ^+\mu ^{-}`$ by the following expression: $`Q_\mu =\frac{3\sqrt{B(\varphi \to \mu ^+\mu ^{-})B(\varphi \to e^+e^{-})}}{\alpha }`$, where $`\alpha `$ is the fine structure constant. For the decay $`\varphi \to \pi ^+\pi ^{-}`$: $`Q_\pi =\frac{6\sqrt{B(\varphi \to \pi ^+\pi ^{-})B(\varphi \to e^+e^{-})}}{\alpha \beta _\pi ^{3/2}(m_\varphi )F_\pi }`$, where $`F_\pi `$ is the pion form factor at the maximum of the $`\varphi `$ resonance and $`\beta _\pi =(1-4m_\pi ^2/E^2)^{1/2}`$.
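For concreteness, the interference model entering the fit can be written down in a few lines. The Python sketch below evaluates $`\sigma _B(E)=\sigma _{nr}(E)|Z(E)|^2`$ for process (1); it assumes a constant width $`\mathrm{\Gamma }(E)=\mathrm{\Gamma }_\varphi `$ for simplicity (which the actual fit does not), and the $`\varphi `$ mass, width, and the values of $`Q`$ and $`\psi `$ used are approximate illustrative numbers rather than parameters of this analysis.

```python
import numpy as np

# Sketch of the Born cross section with the phi interference factor |Z(E)|^2.
# m_phi and gamma_phi are approximate illustrative values; Gamma(E) is taken
# constant for simplicity.  Q and psi are placeholders of the same order as
# the fit results reported in the next section.
m_phi, gamma_phi = 1019.4, 4.3                  # MeV

def born_cross_section(E, Q, psi, sigma_nr):
    """sigma_B(E) = sigma_nr(E) * |1 - Q e^{i psi} m Gamma / (m^2 - E^2 - i E Gamma)|^2."""
    bw = m_phi * gamma_phi / (m_phi**2 - E**2 - 1j * E * gamma_phi)
    Z = 1.0 - Q * np.exp(1j * psi) * bw
    return sigma_nr(E) * np.abs(Z)**2

# Non-resonant mu+mu- cross section quoted in the text: 83.50 nb * m_phi^2 / E^2.
sigma_mumu = lambda E: 83.50 * m_phi**2 / E**2

E = np.linspace(1010.0, 1030.0, 5)              # MeV (E = 2*E_b), around the resonance
print(born_cross_section(E, Q=0.13, psi=np.deg2rad(5.0), sigma_nr=sigma_mumu))
```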
Fig. 4 shows the cross section $`\sigma _B`$ of the process (1). The cross section $`\sigma _{vis}`$ for the process (2) is shown in Fig. 4.
The fit gave the following results: $`Q_\mu =0.129\pm 0.009`$, $`\psi _\mu =(4.5\pm 3.4)^{\circ }`$, $`F_\pi ^2=2.98\pm 0.02`$, $`Q_\pi =0.073\pm 0.005`$, $`\psi _\pi =(34\pm 4)^{\circ }`$. The systematic errors are not included in these numbers. From these values the leptonic branching ratio of the $`\varphi `$ meson, $`B_{e\mu }=(3.14\pm 0.22\pm 0.14)\times 10^{-4}`$, and the branching ratios $`B(\varphi \to \mu ^+\mu ^{-})=(3.30\pm 0.45\pm 0.32)\times 10^{-4}`$ and $`B(\varphi \to \pi ^+\pi ^{-})=(0.71\pm 0.11\pm 0.09)\times 10^{-4}`$ were obtained. The real and imaginary parts of the interference amplitude of the decay $`\varphi \to \pi ^+\pi ^{-}`$ are the following: $`\mathrm{Re}\,Z_\pi =(6.2\pm 0.5\pm 0.5)\times 10^{-2}`$, $`\mathrm{Im}\,Z_\pi =(4.2\pm 0.6\pm 0.4)\times 10^{-2}`$.
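As a quick arithmetic cross-check, the quoted $`B_{e\mu }`$ and $`B(\varphi \to \mu ^+\mu ^{-})`$ follow directly from $`Q_\mu `$ through the relation of the previous section; the only external inputs in the few lines below are the fine structure constant and the table value of $`B(\varphi \to e^+e^{-})`$ cited in the Conclusion.

```python
# Consistency check of the quoted numbers (central values only).
alpha_em = 1.0 / 137.036          # fine structure constant
Q_mu = 0.129                      # fitted interference amplitude from above
B_emu = Q_mu * alpha_em / 3.0     # inverting Q_mu = 3 * B_emu / alpha
B_ee = 2.99e-4                    # table value of B(phi -> e+ e-) cited below
B_mumu = B_emu**2 / B_ee          # since B_emu = sqrt(B_mumu * B_ee)
print(f"B_emu  ~ {B_emu:.2e}")    # ~3.1e-4, cf. (3.14 +- 0.22 +- 0.14) x 10^-4
print(f"B_mumu ~ {B_mumu:.2e}")   # ~3.3e-4, cf. (3.30 +- 0.45 +- 0.32) x 10^-4
```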
## 4 Conclusion
The measured value of $`B_{e\mu }`$ and the branching ratio $`B(\varphi \to \mu ^+\mu ^{-})`$ are in good agreement with the table branching ratio $`B(\varphi \to e^+e^{-})=(2.99\pm 0.08)\times 10^{-4}`$. The accuracy of the result for the decay $`\varphi \to \mu ^+\mu ^{-}`$ is comparable with the accuracy of the table value $`B(\varphi \to \mu ^+\mu ^{-})=(2.5\pm 0.4)\times 10^{-4}`$.
The measured value of $`B(\varphi \to \pi ^+\pi ^{-})`$ agrees with the table value $`B(\varphi \to \pi ^+\pi ^{-})=(0.8_{-0.4}^{+0.5})\times 10^{-4}`$ and has much better accuracy. There is a discrepancy between our result and the preliminary result of CMD-2. The measured $`\mathrm{Re}\,Z_\pi `$ is much lower than the VDM prediction with standard $`\rho `$–$`\omega `$–$`\varphi `$ mixing. Such a low real part can be explained by the existence of a direct decay of $`\varphi `$ to $`\pi ^+\pi ^{-}`$ or by non-standard $`\rho `$–$`\omega `$–$`\varphi `$ mixing.
## 5 Acknowledgement
The work is partially supported by RFBR (Grants No 96-15-96327, 99-02-17155, 99-02-16815, 99-02-16813) and STP “Integration” (Grant No 274).
no-problem/9910/astro-ph9910292.html
# E+A Galaxies in the near-IR: Field and Clusters
## 1. Introduction: what is an E+A galaxy?
Most E+A galaxies present mid- to early-type morphologies, as already shown by several authors. A small fraction of them have late Hubble types. However, their spectra are peculiar: they do not have emission lines, representative of ongoing star formation, but they do have strong Balmer absorption lines, representative of a young population (A and B spectral types), and also strong Mg b $`\lambda 5175`$, Ca H & K $`\lambda 3934`$, $`\lambda 3968`$ and Fe $`\lambda 5270`$ lines, indicating that they also have a rich population of G, K and M spectral types. The young population suggests that the E+A galaxies are 1 Gyr to 4 Gyr old. Are there other peculiar signatures in the spectra of the E+A galaxies? In particular, are their late-type star populations and their AGB populations also different from those of normal galaxies? If so, we can conclude that the E+A phenomenon also involves changes in their older population. In that case, models trying to explain the nature of the E+A galaxies should also fit the signatures observed in the old stellar content.
## 2. Near-IR photometry of E+A galaxies
In order to investigate the above questions, I have started a program to observe southern E+A galaxies in the near-IR. All the observations are being carried out at Las Campanas Observatory (LCO), using NICMOS3 HgCdTe arrays (256 $`\times `$ 256 pixels) at both the 1-m Swope telescope (0.599 arcsec/pix, 2.5 arcmin<sup>2</sup> FOV) and the 2.5-m du Pont telescope (0.42 arcsec/pix, 1.8 arcmin<sup>2</sup> FOV). All the observations are carried out under photometric conditions and seeing $`<1.0`$ arcsec, and include $`J`$, $`H`$ and $`K_s`$ imaging. The sample of galaxies includes E+As from nearby ($`z<0.05`$) and intermediate-redshift ($`z\sim 0.3`$) clusters, as well as E+As located in the field, at $`z\sim 0.15`$. Photometry is performed on the calibrated images using SExtractor. Total apparent magnitudes and colors are computed and compared with spectrophotometric models of galaxy evolution generated using GISSEL96 (see Figure 1). Rest-frame colors are extremely dependent on the models needed to compute K-corrections. Near-IR spectroscopy will be obtained soon in order to have reliable K-corrections, allowing robust interpretations to be derived from the rest-frame colors.
## References
Dressler, A., and Gunn, J. 1983, ApJ, 270, 7
Caldwell, N., and Rose, J. 1997, AJ, 113, 492
Dressler et al. 1999, ApJS, 122, 51
Couch, W., and Sharples, R. 1987, MNRAS, 229, 423
Zabludoff, A. et al. 1996, ApJ, 466, 104
Bertin, E., and Arnouts, S. 1996, A&AS, 117, 393
Charlot, S., Worthey, G., and Bressan, A. 1996, ApJ, 457, 625 (GISSEL96)
no-problem/9910/nucl-th9910046.html
# Shell Corrections of Superheavy Nuclei in Self-Consistent Calculations
## I Introduction
The stability of the heaviest and superheavy elements has been a long-standing fundamental question in nuclear science. Theoretically, the mere existence of the heaviest elements with $`Z>104`$ is entirely due to quantal shell effects. Indeed, for these nuclei the shape of the classical nuclear droplet, governed by surface tension and Coulomb repulsion, is unstable to the surface distortions driving these nuclei to spontaneous fission. That is, if the heaviest nuclei were governed by the classical liquid drop model, they would fission immediately from their ground states due to their large electric charge. However, in the mid-sixties, with the invention of the shell-correction method, it was realized that long-lived superheavy elements (SHE) with very large atomic numbers could exist due to the strong shell stabilization.
In spite of tremendous experimental effort, after about thirty years of the quest for superheavy elements the borders of the upper-right end of the nuclear chart are still unknown. However, it has to be emphasized that recent years have also brought significant progress in the production of the heaviest nuclei. During 1995-96, three new elements, $`Z`$=110, 111, and 112, were synthesized by means of both cold and hot fusion reactions. These heaviest isotopes decay predominantly by groups of $`\alpha `$ particles ($`\alpha `$ chains), as expected theoretically. Recently, two stunning discoveries have been made. Firstly, experiments performed in Dubna employing the <sup>48</sup>Ca+<sup>244</sup>Pu and <sup>48</sup>Ca+<sup>242</sup>Pu “hot fusion” reactions gave evidence for the synthesis of two isotopes ($`A`$=287 and 289) of the element $`Z`$=114. Secondly, the Berkeley-Oregon team, utilizing the “cold fusion” reaction <sup>86</sup>Kr+<sup>208</sup>Pb, observed three $`\alpha `$-decay chains attributed to the decay of the new element $`Z`$=118, $`A`$=293. The measured $`\alpha `$-decay chains of <sup>289</sup>114 and <sup>293</sup>118 turned out to be consistent with predictions of the Skyrme-Hartree-Fock (SHF) theory and the Relativistic Mean-Field (RMF) theory.
The goal of the present work is to study shell closures in SHE. To that end we use as a tool microscopic shell corrections extracted from self-consistent calculations. For medium-mass and heavy nuclei, self-consistent mean-field theory is a very useful starting point. Nowadays, SHF and RMF calculations with realistic effective forces are able to describe global nuclear properties with an accuracy comparable to that obtained in more phenomenological macroscopic-microscopic models based on the shell-correction method.
In previous work, shell energies for SHE were extracted by subtracting from the calculated HF binding energies the macroscopic Yukawa-plus-exponential mass formula with the parameters of Ref. . In another work, based on the RMF theory, shell corrections were extracted for the heaviest deformed nuclei using the standard Strutinsky method in which the positive-energy spectrum was approximated by quasi-bound states. Neither procedure can be considered satisfactory. A proper treatment of continuum states is achieved with a Green's function method. We employ this method for the present study of shell corrections of SHE.
The material contained in this study is organized as follows. The motivation of this work is outlined in Sect. II. Section III contains a brief discussion of the Strutinsky energy theorem on which the concept of the shell correction is based. The Green's function HF method used to extract the single-particle level density is presented in Sect. IV. Section V discusses the details of our HF and RMF models and describes the Strutinsky procedure employed. The results of the calculations for shell corrections in spherical SHE and for macroscopic energies extracted from self-consistent binding energies are discussed in Sec. VI. Finally, Sec. VII contains the main conclusions of this work.
## II Motivation
All the heaviest elements found recently are believed to be well deformed. Indeed, the measured $`\alpha `$-decay energies, along with complementary syntheses of new neutron-rich isotopes of elements $`Z`$=106 and $`Z`$=108, have furnished confirmation of the special stability of the deformed shell at $`N`$=162 predicted by theory. Beautiful experimental confirmation of large quadrupole deformations in this mass region comes from gamma-ray spectroscopy. Recent experimental works succeeded in identifying the ground-state band of <sup>254</sup>No (the heaviest nucleus studied in gamma-ray spectroscopy so far). The quadrupole deformation of <sup>254</sup>No, inferred from the energy of the deduced $`2^+`$ state, is in nice agreement with theoretical predictions. Still heavier and more neutron-rich elements are expected to be spherical due to the proximity of the neutron shell at $`N`$=184. This is the region of SHE which we will investigate here.
In spite of an impressive agreement with the available experimental data for the heaviest elements, theoretical uncertainties are large when extrapolating to unknown nuclei with greater atomic numbers. As discussed in Refs. , the main factors that influence the single-proton shell structure of SHE are (i) the Coulomb potential and (ii) the spin-orbit splitting. As far as the protons are concerned, the important spherical shells are the closely spaced $`1i_{13/2}`$ and $`2f_{7/2}`$ levels which appear just below the $`Z`$=114 gap, the $`2f_{5/2}`$ shell which becomes occupied at $`Z`$=120, the $`3p_{3/2}`$ shell which becomes occupied at $`Z`$=124, and the $`3p_{1/2}`$ and $`1i_{11/2}`$ orbitals whose splitting determines the size of the $`Z`$=126 magic gap. Interestingly, while the ordering of the single-proton states is practically the same for all the self-consistent approaches with realistic effective interactions (see Fig. 1 and the single-particle diagrams in Refs. ), their relative positions vary depending on the choice of force parameters. Since in the region of SHE the single-particle level density is relatively large, small shifts in the positions of single-particle levels can influence the strength of single-particle gaps and be crucial for determining the shell stability of a nucleus. As a result, there is no consensus among theorists concerning the next proton magic gap beyond $`Z`$=82. While most macroscopic-microscopic (non-self-consistent) approaches predict $`Z`$=114 to be magic, self-consistent calculations suggest that the center of the proton shell stability should be moved up to higher proton numbers, $`Z`$=120, 124 or 126. It is to be noted that the Coulomb potential mainly influences the magnitude of the $`Z`$=114 gap. (Here, the self-consistent treatment of the Coulomb energy is a key factor.) On the other hand, the spin-orbit interaction determines the position of the $`2f`$ and $`3p`$ shells which define the proton shell structure at $`Z>114`$.
The spherical neutron shell structure is governed by the following orbitals: $`1j_{15/2}`$ (below the $`N`$=164 gap), $`2g_{7/2}`$, $`3d_{5/2}`$, $`3d_{3/2}`$, and $`4s_{1/2}`$ and $`1j_{13/2}`$ whose splitting determines the size of the $`N`$=184 spherical gap (see Fig. 2 and Refs. ). Again, similar to the proton case, the order of the single-neutron orbitals between $`N`$=164 and 184 is rather robust, while sizes of single-particle gaps vary. For instance, the $`N`$=172 gap, predicted by the RMF calculations shown in Fig. 2, results from the large energy splitting between the $`2g_{7/2}`$ and $`3d_{5/2}`$ shells. In non-relativistic models, these two orbitals are very close in energy, and this degeneracy is related to the pseudo-spin symmetry . Interestingly, in the SHF calculations, the pseudo-spin degeneracy holds in most cases. Namely, certain neutron orbitals group in pairs (pseudo-spin doublets): ($`2g_{7/2}`$, $`3d_{5/2}`$), ($`3d_{3/2}`$, $`4s_{1/2}`$), and the same holds for proton orbitals, e.g., ($`2f_{5/2}`$, $`3p_{3/2}`$). Considering the fact that the idea of pseudo-spin has relativistic roots , it is surprising to see that this symmetry is so dramatically violated in the RMF theory. As a matter of fact, the presence of pronounced magic gaps at $`Z`$=120 and $`N`$=172 in RMF models (see below) is a direct manifestation of the pseudo-spin symmetry breaking.
As discussed in Ref. , neutron-deficient superheavy nuclei are expected to be unstable with respect to proton emission. Indeed, as seen in Fig. 1, the proton $`3p_{1/2}`$ shell has positive energy for $`Z\gtrsim 126`$, i.e., in these nuclei the $`3p_{1/2}`$ level is a narrow resonance. Due to the huge Coulomb barriers, superheavy nuclei with $`Q_p<1.5`$ MeV are practically proton-stable. However, the higher-lying single-proton orbitals are expected to have sizable proton widths.
In order to assess the magnitude of the shell effects determined by the bunchiness of the single-particle levels, it is useful to apply the Strutinsky renormalization procedure, which makes it possible to calculate the shell correction energy. Unfortunately, the standard way of extracting the shell correction breaks down for weakly bound nuclei, where the contribution from the particle continuum becomes important. Recently, a new method of calculating the shell correction, based on the correct treatment of resonances, has been developed. The improved method is based on the theory of Gamow states (eigenstates of the one-body Hamiltonian with purely outgoing boundary conditions), which can be calculated numerically for commonly used optical-model potentials. While this “exact” procedure cannot easily be adapted to the case of microscopic self-consistent potentials, its simplified version applying the Green's-function method can.
## III Shell Correction and the Energy Theorem
The main assumption of the shell-correction (macroscopic-microscopic) method is that the total energy of a nucleus can be decomposed into two parts:
$$E=\stackrel{~}{E}+E_{\mathrm{shell}},$$
(1)
where $`\stackrel{~}{E}`$ is the macroscopic energy (smoothly depending on the number of nucleons and thus associated with the “uniform” distribution of single-particle orbitals) and $`E_{\mathrm{shell}}`$ is the shell-correction term that fluctuates with particle number reflecting the non-uniformities (bunchiness) of the single-particle level distribution. In order to make a separation (1), one starts from the one-body HF density matrix $`\rho `$
$$\rho (𝒓^{\prime },𝒓)=\sum _in_i\varphi _i(𝒓^{\prime })\varphi _i^{*}(𝒓),$$
(2)
which can be decomposed into a “smoothed” density $`\stackrel{~}{\rho }`$ and a correction $`\delta \rho `$, which fluctuates with the shell filling
$$\rho =\stackrel{~}{\rho }+\delta \rho .$$
(3)
In Eq. (2), $`n_i`$ is the single-particle occupation coefficient which is equal to 1(0) if the level $`e_i`$ is occupied (empty). The smoothed single-particle density $`\stackrel{~}{\rho }`$ can be expressed by means of the smoothed distribution numbers $`\stackrel{~}{n}_i`$ :
$$\stackrel{~}{\rho }(𝒓^{\prime },𝒓)=\sum _i\stackrel{~}{n}_i\varphi _i(𝒓^{\prime })\varphi _i^{*}(𝒓).$$
(4)
When considered as a function of the single-particle energies $`e_i`$, the numbers $`\stackrel{~}{n}_i`$ vary smoothly over an energy interval of the order of the energy difference between major shells. The averaged HF Hamiltonian $`\stackrel{~}{h}_{\mathrm{HF}}`$ can be obtained directly from $`\stackrel{~}{\rho }`$. The expectation value of the HF Hamiltonian (containing the kinetic energy $`t`$ and the two-body interaction $`\overline{v}`$) can then be written in terms of $`\stackrel{~}{\rho }`$ and $`\delta \rho `$:
$$E_{\mathrm{HF}}=\mathrm{Tr}(t\rho )+\frac{1}{2}\mathrm{Tr}\mathrm{Tr}(\rho \overline{v}\rho )=\stackrel{~}{E}+E_{\mathrm{osc}}+O(\delta \rho ^2),$$
(5)
where
$$\stackrel{~}{E}=\mathrm{Tr}(t\stackrel{~}{\rho })+\frac{1}{2}\mathrm{Tr}\mathrm{Tr}(\stackrel{~}{\rho }\overline{v}\stackrel{~}{\rho })$$
(6)
is the average part of $`E_{\mathrm{HF}}`$ and
$$E_{\mathrm{osc}}=\mathrm{Tr}(\stackrel{~}{h}_{\mathrm{HF}}\delta \rho )\quad \text{with}\quad \stackrel{~}{h}_{\mathrm{HF}}=t+\mathrm{Tr}(\overline{v}\stackrel{~}{\rho })$$
(7)
is the first-order term in $`\delta \rho `$ representing the shell-correction contribution to $`E_{\mathrm{HF}}`$. If a deformed phenomenological potential gives a similar spectrum to the averaged HF potential $`\stackrel{~}{h}_{\mathrm{HF}}`$, then the oscillatory part of $`E_{\mathrm{HF}}`$, given by Eq. (7), is very close to that of the deformed shell model, $`E_{\mathrm{shell}}`$=$`E_{\mathrm{osc}}+O(\delta \rho ^2)`$. The second-order term in Eq. (5) is usually very small and can be neglected . The above relation, known as the Strutinsky Energy Theorem, makes it possible to calculate the total energy using the non-self-consistent, deformed independent-particle model; the average part $`\stackrel{~}{E}`$ is usually replaced by the corresponding phenomenological liquid-drop (or droplet) model value, $`E_{\mathrm{macro}}`$. It is important that $`E_{\mathrm{shell}}`$ must not contain any regular (smooth) terms analogous to those already included in the phenomenological macroscopic part. The numerical proof of the Energy Theorem was carried out by Brack and Quentin who demonstrated that Eq. (1) holds for $`E_{\mathrm{shell}}`$ defined by means of the smoothed single-particle energies (eigenvalues of $`\stackrel{~}{h}_{\mathrm{HF}}`$).
In this work, we use a simpler expression to extract the shell correction from the HF binding energy, which should also be accurate up to $`O(\delta \rho ^2)`$. Namely, as an input to the Strutinsky procedure we take the self-consistent single-particle HF energies, $`e_i^{\mathrm{HF}}`$. In this case, the shell correction is given by
$$E_{\mathrm{shell}}(\rho )=\underset{i}{\sum }(n_i-\stackrel{~}{n}_i)e_i+O(\delta \rho ^2).$$
(8)
The equivalent macroscopic energy can easily be computed by taking the difference
$$E_{\mathrm{macro}}\equiv \stackrel{~}{E}^{\mathrm{HF}}=E(\rho )-E_{\mathrm{shell}}(\rho ).$$
(9)
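To make this prescription concrete, the following minimal Python sketch evaluates Eq. (8) for a made-up single-particle spectrum. It uses a plain Gaussian smoothing to obtain the $`\stackrel{~}{n}_i`$ and omits the curvature-correction polynomial that an actual Strutinsky calculation requires, so the output is purely illustrative.

```python
import math

def smooth_occupations(levels, lam, gamma):
    # Gaussian-smoothed occupation numbers: n~_i = (1/2)[1 + erf((lam - e_i)/gamma)]
    return [0.5 * (1.0 + math.erf((lam - e) / gamma)) for e in levels]

def shell_correction(levels, n_particles, gamma):
    """E_shell = sum_i (n_i - n~_i) e_i, cf. Eq. (8), with sharp n_i and smoothed n~_i."""
    levels = sorted(levels)
    n_sharp = [1.0 if k < n_particles else 0.0 for k in range(len(levels))]
    # smoothed Fermi energy lam~ from particle-number conservation, found by bisection
    lo, hi = levels[0] - 10.0 * gamma, levels[-1] + 10.0 * gamma
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if sum(smooth_occupations(levels, lam, gamma)) < n_particles:
            lo = lam
        else:
            hi = lam
    n_smooth = smooth_occupations(levels, lam, gamma)
    return sum((n - ns) * e for n, ns, e in zip(n_sharp, n_smooth, levels))

# toy single-particle energies (MeV): two bunched "shells" separated by a gap
toy_levels = [-40.0, -39.0, -38.0, -37.0, -30.0, -29.0, -28.0, -27.0, -26.0]
print(shell_correction(toy_levels, n_particles=4, gamma=6.0))
```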
## IV Green’s Function Hartree-Fock Approach to Shell Correction
The HF equation is generally solved using a harmonic oscillator expansion method or by means of a discretization in a three-dimensional box. In both cases, a great number of unphysical states with positive energy appear. The effect of these quasi-bound states is disastrous for the Strutinsky renormalization procedure . Indeed, if one smoothes out the single-particle energy density,
$$g_{\mathrm{sp}}(e)=\underset{i}{\sum }\delta \left(e-e_i^{\mathrm{HF}}\right),$$
(10)
it would diverge at zero energy because of the presence of the unphysical positive-energy states. Consequently, the resulting shell correction becomes unreliable.
In order to avoid the divergence of $`g(e)`$ around the threshold, we apply the Green’s-function method for the calculation of the single-particle level density. In this method, the level density is given by the expression
$$g(e)=-\frac{1}{\pi }\mathrm{Im}\left\{\mathrm{Tr}\left[\widehat{G}^+(e)-\widehat{G}_{\mathrm{free}}^+(e)\right]\right\},$$
(11)
where $`\widehat{G}^+(e)=(e-\widehat{h}+i0)^{-1}`$ is the outgoing Green’s operator of the single-particle Hamiltonian $`\widehat{h}(\rho )`$, and $`\widehat{G}_{\mathrm{free}}^+`$ is the free outgoing Green’s operator that belongs to the “free” single-particle Hamiltonian. The latter is derived from the full HF Hamiltonian by keeping only those terms related to the kinetic energy density and to the direct Coulomb term. The interpretation of Eq. (11) is straightforward: the second term in Eq. (11) contains the contribution to the single-particle level density originating from the gas of free particles.
The single-particle level density defined by the Green’s-function expression (11) behaves smoothly around the zero-energy threshold; for finite-depth Hamiltonians this definition is the only meaningful way of introducing $`g(e)`$. The level density (11) automatically takes into account the effect of the particle continuum, which may influence the results of shell-correction calculations ; this effect is especially pronounced for systems where the Fermi level is close to zero, i.e., for drip-line nuclei.
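The content of Eq. (11) can be illustrated with a schematic one-dimensional example (not the spherical HF case treated in this work): a discretized Hamiltonian with a short-range attractive pocket, its "free" counterpart without the pocket, and the level density evaluated from the spectral representation of the outgoing resolvent. All parameters below are arbitrary.

```python
import numpy as np

# 1D toy model: h = -d^2/dx^2 + V(x) (units 2m = hbar = 1) discretized in a box of length L
L, n = 40.0, 400
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
kin = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
V = -5.0 * np.exp(-((x - 0.5 * L) ** 2) / 4.0)      # short-range attractive pocket
e_full = np.linalg.eigvalsh(kin + np.diag(V))       # spectrum of h (bound + box states)
e_free = np.linalg.eigvalsh(kin)                    # spectrum of the "free" Hamiltonian

def g_plus(e, spectrum, eta=0.2):
    # -(1/pi) Im Tr G^+(e), with G^+(e) = (e - h + i*eta)^(-1) in its spectral representation
    return (eta / np.pi) * np.sum(1.0 / ((e - spectrum) ** 2 + eta**2))

for e in np.linspace(-4.0, 3.0, 8):
    # Eq. (11): the free term removes the spurious positive-energy (box-state) background
    print(round(e, 2), round(float(g_plus(e, e_full) - g_plus(e, e_free)), 3))
```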
Because it is difficult to calculate the Green’s function exactly, in this work we applied the approximation introduced in Ref. . In this approach, the single-particle level density is expressed as
$$g(e)\approx \underset{i}{\sum }\delta \left(e-e_i^{\mathrm{HF}}\right)-\underset{i}{\sum }\delta \left(e-e_i^{\mathrm{HF},\mathrm{free}}\right),$$
(12)
where $`e_i^{\mathrm{HF},\mathrm{free}}`$ are the eigenvalues of the free one-body HF Hamiltonian. As usual in the Strutinsky procedure, the smooth level density can be obtained by folding $`g(e)`$ with a smoothing function $`f(x)`$:
$`\stackrel{~}{g}(e)`$ $`=`$ $`{\displaystyle \frac{1}{\gamma }}{\displaystyle \int _{-\infty }^{+\infty }}𝑑e^{\prime }g(e^{\prime })f\left({\displaystyle \frac{e^{\prime }-e}{\gamma }}\right)`$ (13)
$`=`$ $`\stackrel{~}{g}_0(e)-\stackrel{~}{g}_{\mathrm{free}}(e),`$ (14)
where $`\gamma `$ is the smoothing width, $`\stackrel{~}{g}_0(e)`$ is the smooth level density obtained from the HF spectrum (including the quasi-bound states), and $`\stackrel{~}{g}_{\mathrm{free}}(e)`$ is the contribution to the smooth level density from the particle gas.
In practice, $`\stackrel{~}{g}(e)`$ can be calculated in three steps. First, we solve the HF equations to determine the self-consistent energies $`e_i^{\mathrm{HF}}`$. In the next step, we calculate the positive-energy gas spectrum $`e_i^{\mathrm{HF},\mathrm{free}}`$ at the self-consistent minimum. In particular, we take the Coulomb force from the self-consistent calculation. Finally, we compute $`\stackrel{~}{g}_0(e)`$ and $`\stackrel{~}{g}_{\mathrm{free}}(e)`$ using the same folding function. The quality of approximation (12) was tested in Ref. where it was demonstrated that, when increasing the number of basis states, the resulting single-particle level density quickly converges to the exact result.
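A minimal sketch of this three-step procedure, with hypothetical level sequences standing in for $`e_i^{\mathrm{HF}}`$ and $`e_i^{\mathrm{HF},\mathrm{free}}`$ and a pure Gaussian folding function (no curvature correction), might look as follows:

```python
import numpy as np

def folded_density(levels, e, gamma):
    # Eq. (13) with a plain Gaussian folding function f(x) = exp(-x^2)/sqrt(pi);
    # a real Strutinsky smoothing also multiplies f by a curvature-correction polynomial.
    u = (e - np.asarray(levels)) / gamma
    return np.sum(np.exp(-u**2)) / (gamma * np.sqrt(np.pi))

# hypothetical spectra (MeV): HF levels including quasi-bound states, and the
# positive-energy "free gas" levels obtained from the same box/basis
e_hf = [-12.1, -10.8, -7.4, -6.9, -3.2, -1.0, 0.8, 1.5, 2.4, 3.1]
e_free = [0.6, 1.3, 2.2, 3.0, 3.9]

gamma = 1.2 * 41.0 / 300.0 ** (1.0 / 3.0)   # smoothing width of the order of the shell spacing
for e in (-8.0, -4.0, 0.0, 2.0):
    g_tilde = folded_density(e_hf, e, gamma) - folded_density(e_free, e, gamma)   # Eq. (14)
    print(e, round(float(g_tilde), 4))
```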
## V Self-consistent Models
### A Skyrme-Hartree-Fock Model
In the SHF method, nucleons are described as nonrelativistic particles moving independently in a common self-consistent field. Our implementation of the HF model is based on the standard ansatz . The total binding energy of a nucleus is obtained self-consistently from the energy functional:
$`\mathcal{E}=\mathcal{E}_{\mathrm{kin}}`$ $`+`$ $`\mathcal{E}_{Sk}+\mathcal{E}_{Sk,ls}`$ (15)
$`+`$ $`\mathcal{E}_C+\mathcal{E}_{\mathrm{pair}}-\mathcal{E}_{\text{CM}},`$ (16)
where $`\mathcal{E}_{\mathrm{kin}}`$ is the kinetic energy functional, $`\mathcal{E}_{Sk}`$ is the Skyrme functional, $`\mathcal{E}_{Sk,ls}`$ is the spin-orbit functional, $`\mathcal{E}_C`$ is the Coulomb energy (including the exchange term), $`\mathcal{E}_{\mathrm{pair}}`$ is the pairing energy, and $`\mathcal{E}_{\text{CM}}`$ is the center-of-mass correction.
Since there are more than 80 different Skyrme parameterizations on the market, the question arises as to which forces should actually be used when making predictions and comparing with the data. Here, we have chosen a small subset of Skyrme forces which perform well for the basic ground-state properties (masses, radii, surface thicknesses) and have sufficiently different properties which allow one to explore the possible variations among parameterizations. This subset contains: SkM , SkT6 , Z<sub>σ</sub> , SkP , SLy4 , and SkI1, SkI3, and SkI4 from Ref. . We have also added the force SkO from a recent exploration . Most of these interactions have been used for the investigation of the ground-state properties of SHE before . All the selected forces perform well concerning the total energy and radii. They all have comparable incompressibility $`K`$=210-250 MeV and comparable surface energy which results from a careful fit to ground-state properties . Variations occur for properties which are not fixed precisely by ground-state characteristics. The effective nucleon mass is 1 for SkT6 and SkP, 0.9 for SkO, around 0.8 for SkM and Z<sub>σ</sub>, and even lower, around 0.65, for SLy4, SkI1, SkI3, and SkI4. Isovector properties also exhibit large variations. For SkI3 and SkI4, the spin-orbit functional is given in the extended form of which allows a separate adjustment of the isoscalar and isovector spin-orbit forces. The standard Skyrme forces use the particular combination of isoscalar and isovector terms which was motivated by the derivation from a two-body zero-range spin-orbit interaction . (For a detailed discussion of the spin-orbit interaction in SHF we refer the reader to Refs. .)
### B Relativistic Mean-Field Model
In our implementation of the RMF model, nucleons are described as independent Dirac particles moving in local isoscalar-scalar, isoscalar-vector, and isovector-vector mean fields usually associated with $`\sigma `$, $`\omega `$, and $`\rho `$ mesons, respectively . These couple to the corresponding local densities of the nucleons which are bilinear covariants of the Dirac spinors similar to the single-particle density of Eq. (2).
The RMF is usually formulated in terms of a covariant Lagrangian; see, e.g., Ref. . For our purpose we prefer a formulation in terms of an energy functional that is obtained by eliminating the mesonic degrees of freedom in the Lagrangian. For a detailed discussion of the RMF as an energy density functional theory, see Refs. . The energy functional of the nucleus
$`\mathcal{E}_{\mathrm{RMF}}=\mathcal{E}_{\mathrm{kin}}`$ $`+`$ $`\mathcal{E}_\sigma +\mathcal{E}_\omega +\mathcal{E}_\rho `$ (17)
$`+`$ $`\mathcal{E}_\mathrm{C}+\mathcal{E}_{\mathrm{pair}}-\mathcal{E}_{\mathrm{CM}}`$ (18)
is composed of the kinetic energy of the nucleons $`\mathcal{E}_{\mathrm{kin}}`$, the interaction energies of the $`\sigma `$, $`\omega `$, and $`\rho `$ fields, and the Coulomb energy of the protons $`\mathcal{E}_\mathrm{C}`$. All these are bilinear in the nucleonic densities as in the case of non-relativistic models \[cf. Eq. (5)\]. Pairing correlations are treated in the BCS approach employing the same non-relativistic pairing energy functional $`\mathcal{E}_{\mathrm{pair}}`$ that is used in the SHF model. The center-of-mass correction $`\mathcal{E}_{\mathrm{CM}}`$ is also calculated in a non-relativistic approximation; see for a detailed discussion. The single-particle energies $`e_i`$ needed to calculate the shell correction are the eigenvalues of the one-body Hamiltonian of the nucleons which is obtained by variation of the energy functional (17).
In the context of our study, it is important to note that the spin-orbit interaction emerges naturally in the RMF from the interplay of scalar and vector fields . Without any free parameters fitted to single-particle data, the RMF gives a rather good description of spin-orbit splittings throughout the chart of nuclei .
As in SHF, there exist many RMF parameterizations which differ in details. For the purpose of the present study, we choose the most successful (or most commonly used) ones: NL1 , NL-Z , NL-Z2 , NL-SH , NL3 , and TM1 . All of them have been used for investigations of SHE .
The parameterization NL1 is a fit of the RMF along the strategy of Ref. used also for the Skyrme interaction Z<sub>σ</sub>. The NL-Z parametrization is a refit of NL1 where the correction for spurious center-of-mass motion is calculated from the actual many-body wave function, while NL-Z2 is a recent variant of NL-Z with an improved isospin dependence. The force NL3 stems from a fit including exotic nuclei, neutron radii, and information on giant resonances. The NL-SH parametrization was fitted with a bias toward isotopic trends and it also uses information on neutron radii. The force TM1 was optimized in the same way as NL-SH except for introducing an additional quartic self-interaction of the isoscalar-vector field to avoid instabilities of the standard model which occur for small nuclei. For SHE, the results obtained with NL-Z are not distinguishable from results obtained with the parameterization PL-40 that is fitted in exactly the same manner as NL-Z but uses a stabilized non-linearity of the scalar-isoscalar field . (PL-40 was employed in some recent investigations of the properties of superheavy nuclei .)
All the above parameterizations provide a good description of binding energies, charge radii, and surface thicknesses of stable spherical nuclei with the same overall quality as the SHF model. The nuclear matter properties of the RMF forces, however, show some systematic differences as compared to Skyrme forces. All RMF forces have comparable small effective masses around $`m^{*}/m\approx 0.6`$. (Note that the effective mass in RMF depends on momentum; hence the effective mass at the Fermi energy is approximately $`10\%`$ larger.) Compared with the SHF model, the absolute value of the energy per nucleon is systematically larger, with values around $`16.3`$ MeV, while the saturation density is always slightly smaller with typical values around 0.15 nucleons/fm<sup>3</sup>. The compressibility of the RMF forces ranges from low values around 170 MeV for NL-Z to $`K`$=355 MeV for NL-SH, which is rather high. There are also differences in isovector properties; the symmetry energy coefficient of all RMF forces is systematically larger than for SHF interactions, with values between 36.1 MeV for NL-SH and 43.5 MeV for NL1 (see discussion below).
### C Details of Calculations
In order to probe the single-particle shell structure of SHE, SHF and RMF calculations were carried out under the assumption of spherical geometry. By doing so we intentionally disregard deformation effects which make it difficult to compare different models and parametrizations. For the same reason, pairing correlations were practically neglected. (In order to obtain self-consistent spherical solutions for open-shell nuclei, small constant pairing gaps, $`\mathrm{\Delta }`$$`<`$100 keV were assumed; the corresponding pairing energies are negligible. This procedure is approximately equivalent to the filling approximation.)
The SHF calculations were carried out using the coordinate-space Hartree-Fock code of Ref. . The HF equations were solved by the discretization method. To obtain a proper description of quasi-bound states, it was necessary to take a very large box and a very dense mesh. The actual box size was chosen to be 21 fm and the mesh spacing was 0.3 fm. With this choice, the low-lying positive-energy proton states obtained in SHF perfectly reproduce proton resonances obtained by solving the Schrödinger equation for the HF potential with purely outgoing boundary conditions.
The Strutinsky procedure contains two free parameters, the smoothing parameter $`\gamma `$ and the order of the curvature correction $`p`$. In calculating the Strutinsky smooth energy, instead of the traditional plateau condition we applied the generalized plateau condition described in Ref. . The optimal values of $`\gamma `$ (in units of the oscillator frequency $`\hbar \omega _0`$=41/$`A^{1/3}`$ MeV) calculated for several nuclei turned out to be close to $`\gamma _p`$=1.54 and $`\gamma _n`$=1.66 for protons and neutrons, respectively; these values, together with $`p`$=10, were adopted in our calculations of shell corrections in SHF.
In the RMF approach, the shell correction can be extracted from the single-particle spectrum like in SHF. To demonstrate it, one proceeds along the steps discussed in Sec. III. The total RMF energy (17) can be decomposed into a smooth part and a correction that fluctuates according to the actual level density. Since the RMF energy functional is bilinear in the densities, the extracted shell correction should be accurate up to order $`O(\delta \rho ^2)`$.
The RMF calculations were carried out using the coordinate-space code of Ref. . As in the SHF case, the box size was chosen to be 21 fm with a mesh spacing of 0.3 fm.
As already mentioned, all successful RMF parameterizations give a rather small effective mass. This leads to a small level density around the Fermi surface which in turn requires a very large smoothing range $`\gamma `$ when calculating the smoothed level density $`\stackrel{~}{g}`$. The values for $`\gamma `$ are strongly correlated with the order of the curvature correction polynomial $`p`$ ; the value $`p`$=10 chosen here is large enough to provide in nearly all cases a sufficiently smooth $`\stackrel{~}{g}`$, but also small enough that we can restrict the model space to levels up to 60 MeV, which is much larger than the space used in usual RMF calculations. We have adjusted the smoothing range $`\gamma `$ to the actual level density of a large number of nuclei to fulfill a generalized plateau condition along the strategy of . This leads always to values around $`\gamma _p`$=2.0 for protons and $`\gamma _n`$=2.2 for neutrons. All results presented in this paper are calculated with $`p`$=10 and $`\gamma `$ fixed at these values.
## VI Results
### A Spherical Shell Corrections in Superheavy Nuclei
According to the SHF calculations of Ref. , the spherical magic neutron number in the SHE region is $`N`$=184; all the $`N`$=184 isotones have been predicted to have spherical shapes. The magicity of $`N`$=184 in SHF is confirmed in this study. Figure 3 displays neutron shell correction calculated in several SHF models as a function of $`N`$ for $`Z`$=120. The absolute minimum of shell energy always appears at $`N`$=184. The $`N`$=172 shell effect is also seen, but it exhibits a strong force-dependence (it is particularly pronounced for Z<sub>σ</sub>, SkI3, SkI4 and SLy4).
As already mentioned, the neutron levels have the same ordering for nearly all forces; all differences seen in the shell corrections are therefore caused by slight changes in the relative distances of the single-particle levels between the models. Forces with large effective masses like SkO, SkP, and SkT6, give a comparatively large level density which washes out the shell effects below $`N`$=184. Forces with small effective masses (i.e., smaller level density) are much more likely to show significant shell effects at lower neutron numbers around $`N`$=172.
At fixed $`Z`$, the proton shell correction changes rather gradually as a function of neutron number; this is illustrated in Fig. 3 for the Skyrme force SkM. (Most of the Skyrme forces give a similar result.) Note that the proton shell corrections are generally smaller than those for the neutrons. At a second glance, however, one sees that the slow variations of the proton shell correction with neutron number are correlated with neutron shell closures. For instance, the $`Z`$=120 shell correction is largest at neutron numbers around $`N`$=172 and it becomes reduced when approaching $`N`$=184. This is caused by the self-consistent rearrangement of single-particle levels according to the actual density distribution in the nucleus and cannot appear in macroscopic-microscopic models with assumed average potentials (see Refs. for more discussion related to this point).
Proton shell corrections for the $`N`$=184 and $`N`$=172 isotones, obtained in the SHF model, are displayed in Fig. 4 as a function of $`Z`$. For SkM, neutron shell corrections are also shown for the $`N`$=172 and $`N`$=184 isotones. The shift of the magic proton number with neutron number when going from $`N`$=172 to $`N`$=184 is clearly visible. For $`N`$=172 most of the Skyrme forces (exceptions are SkT6 and SkP) agree on a magic $`Z=120`$, while for $`N=184`$ the shell correction shows a minimum at $`Z`$=124–126 in all cases. (Actually, in most cases, shell-corrections slightly favor $`Z`$=124 over $`Z`$=126; this is related to the gradual increase of single-particle energies of 3$`p_{3/2}`$ and 3$`p_{1/2}`$ orbitals above $`Z`$=120.)
Proton shell corrections and the $`N`$=172 neutron shell corrections are systematically smaller than those for neutrons at $`N`$=184. This partly explains why spherical ground states of SHE are so well correlated with the magic neutron number $`N`$=184, see, e.g., . Note that for the majority of Skyrme forces the $`N`$=172 isotones are predicted to be deformed.
Skyrme forces with non-standard isospin dependence of the spin-orbit interaction are the only ones that give additional (but not very pronounced) shell closures. In the SkI4 model, there appears a secondary minimum at $`Z`$=114 for $`N`$=184, while SkI3 is the only Skyrme force which points at $`Z`$=120 also for $`N`$=184. A non-standard spin-orbit interaction, however, does not necessarily lead to shell closures other than $`Z`$=124-126 for $`N`$=184. For SkO, which has a spin-orbit force that is similar to SkI4, the $`Z`$=114 shell is only hinted at. It is to be noted that for several interactions such as Z<sub>σ</sub>, SkI$`x`$, and SkO, the shell correction changes rather slowly between $`Z`$=114 and $`Z`$=126. This indicates that none of the proton shell gaps in this region can be considered as truly “magic”. (The weak $`Z`$-dependence of the proton shell correction above $`Z`$=114 was pointed out in the early Ref. .)
The RMF results presented in Figs. 5 and 6 show a pattern that is internally consistent but different from that of SHF. The minimum of neutron shell correction is systematically predicted at $`N`$=172. Except for NL-SH and TM1, the shell effect at $`N`$=182-184 is also clearly seen. Note that the $`N`$=184 gap in the single-particle spectrum is in all cases larger than the one at $`N`$=182 (see Fig. 2). The gaps are separated by a single $`4s_{1/2}`$ level which contributes very weakly to the shell energy. To illustrate the variation of proton shell effects along the $`Z`$=120 chain, proton shell corrections in NL-Z2 are also displayed in Fig. 5. Their pattern is very similar to that obtained in SHF models.
Looking at the proton shell corrections along the chain of $`N`$=184 isotones, see Fig. 6, the strongest shell effect is now obtained for $`Z`$=120. When comparing the results for the $`N`$=184 and $`N`$=172 chains, it can be seen again that the proton shell correction at $`Z`$=120 is strongly correlated with neutron number $`N`$=172. However, unlike in SHF, the $`Z`$=120 shell does not vanish completely for $`N`$=184. Proton shell corrections obtained with NL1, NL-Z, and NL-Z2 at $`N`$=184 vary rather slowly between $`Z`$=120 and $`Z`$=126, and this resembles the pattern obtained in SHF. Again, as in the case of Skyrme forces, proton shell corrections in RMF are smaller than those for the neutrons (cf. NL-Z2 calculations in Fig. 6). The increase in the proton shell correction at very large values of $`Z`$ for TM1 is related to the spherical $`Z`$=132 shell predicted by this interaction .
Shell closures can also be analysed in terms of the two-neutron and two-proton shell gaps
$`\delta _{2n}`$ $`=`$ $`E(N+2,Z)-2E(N,Z)+E(N-2,Z),`$ (19)
$`\delta _{2p}`$ $`=`$ $`E(N,Z+2)-2E(N,Z)+E(N,Z-2)`$ (20)
discussed in Refs. . The pattern of shell corrections calculated in SHF and RMF nicely follows the behavior of neutron and proton shell gaps found there. In particular, the strong correlation between shell effects at $`Z`$=120 and $`N`$=172 in RMF is seen in both representations. While shell gaps are related (but not equivalent) to the gaps in the single-particle spectrum, the shell correction gives also a measure of the stabilizing effect of a shell closure on the nuclear binding energy.
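Operationally, Eqs. (19)-(20) are simple finite differences of total energies. The sketch below evaluates them for placeholder energies (not calculated values), only to fix the sign and index conventions.

```python
# placeholder total energies E(N, Z) in MeV -- NOT calculated values, only to fix conventions
E = {(182, 120): -2130.0, (184, 120): -2145.0, (186, 120): -2152.0,
     (184, 118): -2128.0, (184, 122): -2150.0}

def delta_2n(N, Z):
    return E[(N + 2, Z)] - 2.0 * E[(N, Z)] + E[(N - 2, Z)]   # Eq. (19)

def delta_2p(N, Z):
    return E[(N, Z + 2)] - 2.0 * E[(N, Z)] + E[(N, Z - 2)]   # Eq. (20)

print(delta_2n(184, 120), delta_2p(184, 120))
```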
### B Macroscopic Energies
By subtracting the shell correction from the calculated binding energy, one obtains a rough estimate for the associated macroscopic energy $`E_{\mathrm{macro}}`$ (9). The macroscopic part of the SHF and RMF energies for the $`N`$=184 isotones as a function of $`Z`$ is displayed in Fig. 7. The macroscopic energy of the Yukawa-plus-exponential mass formula of the Finite-Range Liquid Drop Model (FRLDM) of Ref. , with parameters of Ref. , is also shown for comparison. To illustrate $`Z`$-dependence, all energies were normalized to the value at $`Z`$=100. In general, the behavior of $`E_{\mathrm{macro}}`$ is similar in all cases. In particular, the macroscopic proton drip line is consistently predicted to be at $`Z`$$``$120-124. It is interesting to note that the only Skyrme force which agrees with FRLDM is SLy4; other forces deviate from it significantly. The RMF forces give qualitatively the same results; there are several forces (NL-Z, TM1, and NL-SH) which give values of $`E_{\mathrm{macro}}`$ close to the FRLDM.
In an attempt to understand the pattern shown in Fig. 7, we employed the simple liquid drop model expression
$`E_{\mathrm{macro},\mathrm{LDM}}=a_{\mathrm{vol}}A`$ $`+`$ $`a_{\mathrm{surf}}A^{2/3}`$ (21)
$`+`$ $`a_{\mathrm{sym}}{\displaystyle \frac{(N-Z)^2}{A}}+a_{\mathrm{Coul}}{\displaystyle \frac{Z^2}{A^{1/3}}}.`$ (22)
The parameters $`a_i`$ of Skyrme and RMF forces were calculated in the limit of symmetric nuclear matter; they are given in Table I, together with the values for the standard liquid drop model (LDM) of Ref. . \[Note that these values slightly change when including higher-order terms in the LDM expansion (21).\] Figure 8 shows the macroscopic energy (21) as a function of $`Z`$ for the $`N`$=184 isotones. The huge differences between results for various Skyrme and RMF parametrizations can be traced back to their different symmetry-energy coefficients. Indeed, for most of the forces discussed, $`a_{\mathrm{sym}}`$ is significantly greater than that of LDM, and this results in an increased slope of $`E_{\mathrm{macro},\mathrm{LDM}}`$. For the RMF forces the significantly larger $`a_{\mathrm{vol}}`$ even further increases the difference with respect to the LDM. Unfortunately, there is very little similarity between the results of microscopic calculations of Fig. 7 and the results of expansion (21). When comparing the energy scales of Figs. 7 and 8, one finds huge differences, of the order of 100 MeV, between $`E_{\mathrm{macro}}`$ and $`E_{\mathrm{macro},\mathrm{LDM}}`$. While for RMF the energy ordering remains the same in both cases, this feature does not hold for SHF. Only when looking at $`E_{\mathrm{macro},\mathrm{LDM}}`$, the results are ordered according to the corresponding values of $`a_{\mathrm{sym}}`$, as expected. All this indicates that even for very heavy nuclei with $`A`$$``$300, the simple leptodermous expansion with parameters taken from nuclear matter calculations is not going to work ; the finite-size effects are still very important for SHE.
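For orientation, the sketch below evaluates Eq. (21) along the $`N`$=184 chain and normalizes at $`Z`$=100, as in Fig. 8. The coefficients used here are generic textbook liquid-drop values, not the force-dependent entries of Table I, so only the qualitative trend is meaningful.

```python
# generic textbook liquid-drop coefficients in MeV (illustrative only; not Table I values)
a_vol, a_surf, a_sym, a_coul = -15.75, 17.8, 23.7, 0.711

def e_macro_ldm(N, Z):
    A = N + Z
    return (a_vol * A + a_surf * A ** (2.0 / 3.0)
            + a_sym * (N - Z) ** 2 / A + a_coul * Z**2 / A ** (1.0 / 3.0))   # Eq. (21)

N = 184
ref = e_macro_ldm(N, 100)
for Z in range(100, 128, 4):
    print(Z, round(e_macro_ldm(N, Z) - ref, 1))   # normalized to the Z=100 isotone, as in Fig. 8
```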
In spite of the fact that macroscopic energies extracted from different self-consistent models systematically differ, the corresponding shell corrections are similar. For instance, the general pattern and magnitude of shell energies displayed in Figs. 3 and 4 do not depend very much on the Skyrme interaction used, and the same is true for the RMF results shown in Figs. 5 and 6. This means that although the global properties of effective interactions employed in this work differ, their single-particle spectra are fairly similar. Hence, shell corrections extracted from self-consistent single-particle spectra are very useful measures of spectral properties of effective forces. Figure 7 also illustrates how dangerous it is to extrapolate self-consistent results in the region of SHE. The trends of relative binding energies (e.g., $`Q_\alpha `$ values) are expected to smoothly deviate from force to force. The nice agreement with experimental data for the heaviest elements obtained in the SHF calculations with SLy4 and in the macroscopic-microscopic calculations with the FRLDM indicates that the macroscopic energies of forces which are too far off the FRLDM values, i.e. SkM, SkI1, and NL1, are probably not reliable in this region.
Figure 7 shows that the power of a force for predicting total binding energies is fairly independent of its predictive power for shell effects. Forces with a similar (good) description of the smooth trends of binding energies can yield rather different magic numbers; compare, e.g., SLy4 and NL3.
## VII Conclusions
The recent experimental progress in the search for new superheavy elements opens a new window for systematic explorations of the limit of nuclear mass and charge. Theoretically, predictions in the region of SHE are bound to be extrapolations from the lighter systems. An interesting and novel feature of SHEs is that the Coulomb interaction can no longer be treated as a small perturbation atop the nuclear mean field; its feedback on the nuclear potential is significant.
The main objective of this study was to perform a detailed analysis of shell effects in SHE. Since many nuclei from this region are close to the proton drip line, a new method of calculating shell corrections, based on the Green’s function approach, had to be developed. This technique was applied to a family of Skyrme interactions and to several RMF parametrizations. This tool turned out to be extremely useful for analyzing the spectral properties of self-consistent mean fields.
It has been concluded that both the SHF and RMF calculations are internally consistent. That is, all the Skyrme models employed in this work predict the strongest spherical shell effect at $`N`$=184 and $`Z`$=124,126. On the other hand, all the RMF parametrizations yield the strongest shell effect at $`N`$=172 and $`Z`$=120. It is very likely that the main factor contributing to this difference is the spin-orbit interaction, or rather its isospin dependence . The question of the role of the spin-orbit potential in determining the stability of SHE was raised already in the seventies . The experimental determination of the centre of shell stability in the region of SHE will, therefore, be of extreme importance for pinning down the question of the spin-orbit force.
Another interesting conclusion of our work is that the pseudo-spin symmetry seems to be strongly violated in the RMF calculations for SHE. As a matter of fact, the $`N`$=172 and $`Z`$=120 magic gaps predicted in the relativistic model appear as a direct consequence of pseudo-spin breaking. This is quite surprising in light of several recent works on the pseudo-spin conservation in RMF .
Finally, from calculated masses we extracted self-consistent macroscopic energies. They show a significant spread when extrapolating to unknown SHE. This is expected to give rise to systematic (smooth) deviations between masses and mass differences obtained in various self-consistent models.
###### Acknowledgements.
This research was supported in part by the U.S. Department of Energy under Contract Nos. DE-FG02-96ER40963 (University of Tennessee), DE-FG05-87ER40361 (Joint Institute for Heavy Ion Research), DE-FG02-97ER41019 (University of North Carolina), DE-AC05-96OR22464 with Lockheed Martin Energy Research Corp. (Oak Ridge National Laboratory), the Polish Committee for Scientific Research (KBN) under Contract No. 2 P03B 040 14, NATO grant CRG 970196, and Hungarian OTKA Grant No. T026244.
# Clump Giant Distance to the Magellanic Clouds and Anomalous Colors in the Galactic Bulge
## 1 Introduction
Most of the extragalactic distance scale is tied to the LMC, and so the distance to the LMC ($`d_{LMC}`$) influences the Hubble constant, $`H_0`$. For many years now there has been a division between the so-called “short” and “long” distance scales to the LMC. Currently, the measured values of $`d_{LMC}`$ span a range of over 25% (see e.g., Feast & Catchpole 1997; Stanek, Zaritsky, & Harris 1998). Paczyński & Stanek (1998) pointed out that red clump giants should constitute an accurate distance indicator. Udalski et al. (1998a) and Stanek et al. (1998) applied the clump method and found a very short distance to the LMC ($`\mu _{LMC}\approx 18.1`$). In response, Cole (1998) and Girardi et al. (1998) suggested that clump giants are not standard candles and that their absolute $`I`$ magnitudes, $`M_I(RC)`$, depend on the metallicity and age of the population. Udalski (1998b, 1998c) countered this criticism by showing that the metallicity dependence is at a low level of about $`0.1`$ mag/dex, and that the $`M_I(RC)`$ is approximately constant for cluster ages between 2 and 10 Gyr. Stanek et al. (1999) and Udalski (1999) found a moderate slope of the $`M_I(RC)`$ – \[Fe/H\] relation of 0.15 mag/dex. The only clump determination which resulted in a truly long $`d_{LMC}`$ was a study of the field around supernova SN 1987A by Romaniello et al. (1999). However, they assumed a bright $`M_I(RC)`$ from theoretical models and, additionally, the use of the vicinity of SN 1987A may not be the most fortunate choice (Udalski 1999).
The value of $`M_I(RC)`$ in different stellar systems is a major systematic uncertainty in the clump method. It is very hard to prove the standard character of a candle’s luminosity. However, it should be possible to check whether other stellar characteristics of a candle behave in a predictable fashion. Therefore, in §2 I discuss the $`(VI)_0`$ colors of the clump giants and RR Lyrae stars in the Galactic bulge. After making photometric corrections, I argue that the remaining color discrepancy between the Baade’s Window and local stars might have been caused by an overestimated coefficient of selective extinction. Using corrected colors, in §3 I derive a new $`M_I(RC)`$ – \[Fe/H\] relation for red clump stars and show its substantial impact on the distances to the Magellanic Clouds. I summarize the results in §4.
## 2 Mystery of anomalous colors in the Galactic bulge
Paczyński (1998) tried to explain why the clump giants in the Baade’s Window have $`(VI)_0`$ colors which are approximately $`0.2`$ magnitudes redder than in the solar neighborhood (Paczyński & Stanek 1998). Paczyński (1998) suggested super-solar metallicities of the Galactic bulge stars as a possible solution. However, there is spectroscopic evidence (see Minniti et al. 1995) that the average metallicity of the bulge is \[Fe/H\] $`\in (-0.3,0.0)`$. Stutz, Popowski & Gould (1999) found a corresponding effect for the Baade’s Window RR Lyrae stars, which have $`(VI)_0`$ redder by about 0.17 than their local counterparts (Fig. 1).
A similar size of the color shift in RR Lyrae stars and clump giants suggests a common origin of this effect. Does there exists any physical mechanism that could be responsible for such behavior? The bulge RR Lyrae stars and clump giants both burn Helium in their cores, but the similarities end here. RR Lyrae stars pulsate, clump giants do not. RR Lyrae stars are metal-poor, clump giants are metal-rich. RR Lyrae stars are likely to be a part of an axisymmetric stellar halo (e.g., Minniti 1996; Alcock et al. 1998a), whereas clump giants form a bar (e.g., Stanek et al. 1994; Ng et al. 1996). For RR Lyrae stars, Stutz et al. (1999) suggested that their very red $`(VI)_0`$ might have resulted from an unusual abundance of $`\alpha `$\- elements. Why should a clump population which emerged in a different formation process share the same property?
The solutions to the anomalous colors proposed by Paczyński (1998) and Stutz et al. (1999) are not impossible but are rather unlikely. Alternatively, the effect might be unrelated to the physics of those stars. The investigated bulge RR Lyrae stars and clump giants share two things in common. First, photometry of both types of stars comes from the OGLE, phase-I, project. Indeed, Paczyński et al. (1999) showed that the OGLE-I V-magnitudes are 0.021 mag fainter and I-magnitudes 0.035 mag brighter than the better calibrated OGLE-II magnitudes. Therefore, the correct $`(VI)`$ colors should be 0.056 bluer. Additionally, the new $`(VI)_0`$ from the more homogeneous Baade’s Window clump is bluer than Paczyński’s & Stanek’s (1998) color even when reduced to OGLE-I calibration<sup>1</sup><sup>1</sup>1Udalski’s (1998b) data for the LMC, SMC, and Carina galaxy come from OGLE-II and therefore do not require any additional adjustment.. When the new OGLE-II photometry reported by Paczyński et al. (1999) is used, the $`(VI)_0`$ anomaly shrinks and the remaining unexplained shift amounts to $`0.11`$ both for the RR Lyrae stars and clump giants.
Second, Paczyński & Stanek (1998) and Stutz et al. (1999) use the same extinction map (Stanek 1996) and the same coefficient of conversion from visual extinction $`A_V`$ to color excess $`E(VI)`$. The absolute values of $`A_V`$s are likely approximately correct (see equation 1) because the zero point of the extinction map was determined from the $`(VK)`$ color and $`A_V/E(VK)`$ is very close to 1 (Gould, Popowski, & Terndrup 1998; Alcock et al. 1998b). However, $`R_{VI}=A_V/E(VI)`$ is not as secure and has a pronounced effect on the obtained color.
Most of the current studies of the Galactic bulge use $`R_{VI}=2.5`$. If a true $`R_{VI}`$ towards Baade’s Window equals $`\alpha `$ instead, then the adjusted Stanek’s (1996) V-band extinction, will be<sup>2</sup><sup>2</sup>2Equation (1) implicitly assumes that differential $`(VI)`$ colors from Stanek (1996) are correct. Whether it is the case is an open question.:
$$A_{V,\mathrm{adjusted}}=\frac{\alpha }{2.5}\left(A_V-A_{V,\text{0-point}}\right)+A_{V,\text{0-point}},$$
(1)
where most of the extinction, namely $`A_{V,\text{0-point}}`$ is excluded from the adjustment because it has been determined based on $`K`$-magnitudes<sup>3</sup><sup>3</sup>3To the zero-th order, a shape of the extinction curve is governed by one parameter, so that the change in $`R_{VI}`$ will affect $`A_V/E(VK)`$. In effect, Gould et al. (1998) and Alcock et al. (1998) might have overestimated (underestimated) $`A_{V,\text{0-point}}`$ by a few percent if $`\alpha <2.5`$ ($`\alpha >2.5`$). This change would propagate to equations (1) - (3) in several correlated ways. Thus, strictly speaking a value of $`\alpha `$ is additionally a function of $`A_V/E(VK)`$, which changes with $`\alpha `$. Therefore, solving this problem exactly requires iteration. I neglect this complication in further considerations.. The adjustment to the color, which follows from equation (1) is:
$$\mathrm{\Delta }(VI)_0=\frac{1}{2.5}A_V-\frac{1}{\alpha }A_{V,\mathrm{adjusted}}=\frac{\alpha -2.5}{2.5\alpha }A_{V,\text{0-point}}.$$
(2)
Therefore, for a color shift $`\mathrm{\Delta }(VI)_0`$ , one expects:
$$\alpha =\frac{2.5A_{V,\text{0-point}}}{A_{V,\text{0-point}}-2.5\mathrm{\Delta }(VI)_0}.$$
(3)
Using $`\mathrm{\Delta }(VI)_0\approx -0.11`$ as required to resolve the color conflict in Baade’s Window and $`A_{V,\text{0-point}}=1.37`$ (Gould et al. 1998; Alcock et al. 1998b), I find $`\alpha \approx 2.1`$ (Fig. 2). This $`R_{VI}=2.1`$ is certainly low, but not unreasonably so. Szomoru & Guhathakurta (1999) find that cirrus clouds in the Galaxy have extinctions consistent with $`A_V/E(BV)\lesssim 2`$, which is more extreme than the change suggested here. If the extinction towards Baade’s Window is in part provided by the cirrus clouds, then the low $`R_{VI}`$ would be expected rather than surprising.
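In numbers, Eqs. (1)-(3) with the quoted inputs give the following; the sample $`A_V`$ used in the last step is an arbitrary illustrative value, not a measured extinction.

```python
av_zero = 1.37      # zero point of the extinction map, fixed by the (V-K) color
d_vi = -0.11        # required shift of the dereddened color (bluer by 0.11 mag)

alpha = 2.5 * av_zero / (av_zero - 2.5 * d_vi)           # Eq. (3); gives ~2.1
print(round(alpha, 2))

a_v = 1.8                                                # arbitrary sample extinction value
a_v_adjusted = alpha / 2.5 * (a_v - av_zero) + av_zero   # Eq. (1)
print(round(a_v_adjusted, 2))
```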
The value and variation of $`R_{VI}`$ was thoroughly investigated by Woźniak & Stanek (1996). The essence of the Woźniak & Stanek (1996) method to determine differential extinction is an assumption that regions of the sky with a lower surface density of stars have higher extinction. Woźniak & Stanek (1996) used clump giants to convert a certain density of stars to an amount of visual extinction. To make a calibration procedure completely unbiased would require, among other things, that clump giants were selected without any assumption about $`R_{VI}`$; that absolute $`V`$-magnitudes of clump giants, $`M_V(RC)`$, do not depend on their color \[here $`(VI)_0`$\]; and that reddened and unreddened clump giants be drawn from the same parent population. None of these is true. A color-magnitude diagram (CMD) for dense Galactic fields does not allow one to unambiguously distinguish clump giants from other stars. Different parts of an intrinsically clean CMD overlap due to differential reddening and a range of stellar distances. Therefore, the selection of clump giants must involve some assumptions about $`R_{VI}`$. Woźniak & Stanek (1996) adopt $`R_{VI}=2.6`$. This procedure tends to bias the derived relation toward this predefined slope. Woźniak & Stanek (1996) were fully aware of this effect, and they performed a number of simulations, which are summarized in their Figure 4. In brief, in the range $`2.1<R_{VI}<3.1`$, the bias scales as $`\delta R_{VI}\approx 0.4(2.6-R_{VI})`$ and so may become very substantial for a low or high $`R_{VI}`$. In particular, if the true $`R_{VI}=2.1`$, Woźniak and Stanek (1996) would find $`R_{VI}=2.3`$. Therefore, this effect alone could account for half of the difference between the required and measured $`R_{VI}`$.
The intrinsic characteristics of the bulge clump stars are unknown, but I will assume they resemble the clump measured by Hipparcos (European Space Agency 1997). The fit to the local clump giants selected by Paczyński & Stanek (1998) gives $`M_V(RC)\propto 0.4(VI)_0`$. Therefore, the structure of the local clump itself acts similarly to extinction with $`R_{VI}=0.4`$. In an ideal case, when the CMD locations of the entire clump populations in different fields are compared, the $`M_V(RC)`$ – $`(VI)_0`$ dependence should not matter. However, when combined with the actual extinction and additionally influenced by the completeness function of a survey, this effect may additionally bias the value of $`R_{VI}`$.
Because the smaller selective extinction coefficient is not excluded by the current studies, one can assume $`R_{VI}=2.1`$ to match the $`(VI)_0`$ colors of the bulge with the ones in the solar neighborhood. The color is a weak function of \[Fe/H\], so this procedure is justified because the \[Fe/H\] of the bulge and solar neighborhood are similar. This change in $`R_{VI}`$ will decrease the I-mag extinction, $`A_I`$, by 0.11 mag. Therefore, the clump distance to the Galactic center would increase by the same amount.
## 3 Recalibration of the clump
What is the bearing of the bulge results on the distance to the LMC? Let $`\mathrm{\Delta }`$ indicate the difference between the mean dereddened I-magnitude of clump giants and the dereddened V-magnitude of RR Lyrae stars at the metallicity of RR Lyrae stars in the Galactic bulge. When monitored in several stellar systems with different clump metallicities, the variable $`\mathrm{\Delta }`$, introduced by Udalski (1998b), allows one to calibrate the $`M_I(RC)`$ \- \[Fe/H\] relation with respect to the baseline provided by RR Lyrae stars. The better photometry from Paczyński et al. (1999) and a possible modification of $`R_{VI}`$ influence the value of $`\mathrm{\Delta }`$ at the Galactic center ($`\mathrm{\Delta }_{BW}`$). It is important to note that one will face the same type of adjustment to $`\mathrm{\Delta }_{BW}`$ whenever the anomalous colors in the Baade’s Window are resolved at the expense of the modification of V- or I-magnitudes. That is, the modification of $`R_{VI}`$ is not a necessary condition! It is simply one of the options. As a result of the change in $`\mathrm{\Delta }_{BW}`$, the $`M_I(RC)`$ \- \[Fe/H\] relation for clump giants changes. Moreover, $`\mu _{LMC}`$ and $`\mu _{SMC}`$ will change as well because the $`M_I(RC)`$ – \[Fe/H\] relation is used to obtain the clump distances to the Magellanic Clouds.
Here, I will modify Udalski’s (1998b) $`\mathrm{\Delta }`$ versus \[Fe/H\] plot and derive a new $`M_I(RC)`$ \- \[Fe/H\] relation consistent with the new data and considerations from §2. I construct the Udalski (1998b) plot using his original points modified in the following way:
— To match the change in $`(VI)_0`$, I modify $`\mathrm{\Delta }_{BW}`$ by 0.17 mags (a combined change from photometry and some other, yet unrecognized, source, e.g., selective extinction coefficient).
— I modify the \[Fe/H\] of the Baade’s Window clump giants, so that $`[\mathrm{Fe}/\mathrm{H}]=0.0`$ (see e.g., Minniti et al. 1995 for a review on the bulge metallicity).
The possible improvement to the above procedure would be a construction of Udalski’s (1998b) diagram based on clump giants in the LMC and SMC clusters, which would reduce the uncertainties associated with the reddening to the field stars. This more complex treatment is beyond the scope of this paper.
I make a linear fit to the $`\mathrm{\Delta }`$ – \[Fe/H\] relation. I assume that a total error in dependent variable $`\mathrm{\Delta }`$ for the $`i`$-th point, $`\sigma _{total,i}`$, can be expressed as $`\sigma _{total,i}^2=\sigma _{\mathrm{\Delta },i}^2+\left(\frac{d\mathrm{\Delta }}{d[\mathrm{Fe}/\mathrm{H}]}|_{[\mathrm{Fe}/\mathrm{H}]_i}\sigma _{[\mathrm{Fe}/\mathrm{H}],i}\right)^2,`$ where $`\sigma _{\mathrm{\Delta },i}`$ and $`\sigma _{[\mathrm{Fe}/\mathrm{H}],i}`$ are the individual point errors in $`\mathrm{\Delta }`$ and metallicity, respectively. As a result, I obtain:
$$M_I(RC)=(-0.36\pm 0.03)+(0.19\pm 0.05)([\mathrm{Fe}/\mathrm{H}]+0.66)$$
(4)
Equation (4) is expressed in the form with uncorrelated errors and normalized to the local Hipparcos (European Space Agency 1997) result of $`M_I(RC)=-0.23\pm 0.02`$ at \[Fe/H\] $`=0.0`$ reported by Stanek & Garnavich (1998). Equation (4) is a good fit to the data with a $`\chi ^2/d.o.f=3.17/2`$ (Fig. 3). The slope from equation (4) is 0.10 mag/dex steeper than the one given by Udalski (1998b). However, it agrees well with the slope of $`0.15\pm 0.05`$ based on the spectroscopic data for the local clump (Udalski 1999). Also, the slope from equation (4) is in good agreement with the theoretical models (e.g., Cole 1998) and leads to $`M_{I,LMC}(RC)=-0.35`$ in the LMC field and $`M_{I,LMC}(RC)=-0.39`$ for the clusters (close to the values suggested by Girardi et al. 1998).
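For reference, evaluating Eq. (4) at a few metallicities reproduces the numbers quoted above; the \[Fe/H\] values adopted below for the LMC field and clusters are assumptions chosen only for this illustration, not measurements from this paper.

```python
def m_i_rc(feh):
    return -0.36 + 0.19 * (feh + 0.66)   # Eq. (4), statistical errors omitted

# assumed metallicities for the illustration only
for label, feh in [("local clump", 0.0), ("LMC field", -0.6), ("LMC clusters", -0.8)]:
    print(label, round(m_i_rc(feh), 2))   # -> about -0.23, -0.35, -0.39
```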
## 4 Summary
I demonstrated that the correction of the bulge $`(VI)_0`$ anomaly has a pronounced effect on the slope of the absolute magnitude – metallicity relation for clump giants. Introducing the color correction to the original Udalski (1998b) diagram, I find $`M_I(RC)=-0.23+0.19[\mathrm{Fe}/\mathrm{H}]`$. Consequently, Udalski’s (1998c) distance modulus of $`\mu _{LMC}=18.18\pm 0.06`$ is increased to $`\mu _{LMC}=18.27\pm 0.07`$. The distance modulus to the SMC increases from $`\mu _{SMC}=18.65\pm 0.08`$ to $`\mu _{SMC}=18.77\pm 0.08`$.
Even though my approach in this paper is only qualitative, there are two important characteristics of this study:
1) The calibration of $`M_I(RC)`$ – \[Fe/H\] relation, has been based on the homogeneous set of the OGLE-II photometry. Therefore, no corrections due to the use of different telescopes, instruments and reduction procedures are required. Unfortunately, this makes the above calibration vulnerable to unrecognized systematic problems of the OGLE photometry.
2) The $`M_I(RC)`$ value has been derived based on observational data and not simply picked from a family of possible theoretical models of stellar evolution.
Romaniello et al. (1999) provide an independent source of clump photometry in the LMC, but due to the importance of photometric homogeneity I am not able to use their data in a way consistent with the rest of my analysis. With reference to point 2), it is crucial to note that observationally calibrated $`M_I(RC)`$ is not subject to the modeling uncertainties which affect the Romaniello et al. (1999) distance to the LMC. However, my calibration is only as good as the assumptions and data that enter the analysis. Reddening corrections to the original Udalski’s (1998b) diagram, which is partly based on the field stars in the LMC and SMC, may be needed. Therefore, a more comprehensive study of the metallicity effect on $`M_I(RC)`$ is necessary. Udalski’s (1999) determination based on the local clump is an important step toward establishing a reliable $`M_I(RC)`$ – \[Fe/H\] relation.
Andy Becker deserves my special thanks for many stimulating discussions about the extinction issues in the Galactic bulge. I am deeply grateful to Andy Gould for his very careful reading of the original version of this paper and a number of insightful remarks. I also would like to thank Kem Cook for his valuable comments and discussions. Work performed at the LLNL is supported by the DOE under contract W7405-ENG-48.
## 1 Introduction
Quantum theory is not yet understood as well as e.g. classical mechanics or special relativity. Classical mechanics coincides well with our intuition and so is rarely questioned. Special relativity runs counter to our immediate insight, but can easily be derived by assuming constancy of the speed of light for every observer. And that assumption may be made plausible by epistemological arguments . Quantum theory on the other hand demands two premises. First, it wants us to give up determinism for the sake of a probabilistic view. In fact, this seems unavoidable in a fundamental theory of prediction, because any communicable observation can be decomposed into a finite number of bits. So predictions therefrom always have limited accuracy, and probability enters naturally. More disturbing is the second premise: Quantum theory wants us to give up the sum rule of probabilities by requiring interference instead. However, the sum rule is deeply ingrained in our thought, because of its roots in counting and the definition of sets: Define sets with no common elements, then define the set which joins them all. The number of elements in this latter set is just the sum of the elements of the individual sets. When deriving the notion of probability from the relative frequency of events we are thus immediately led to the sum rule, such that any other rule appears inconceivable. And this may be the reason why we have difficulties accepting the quantum theoretical rule, where probabilities are summed by calculating the square of the sum of the complex square roots of the probabilities. In this situation two views are possible. We may either consider the quantum theoretical rule as a peculiarity of nature. Or, we may conjecture that the quantum theoretical rule has something to do with how we organize data from observations into quantities that are physically meaningful to us. We want to adopt the latter position. Therefore we seek to establish a grasp of the quantum theoretical rule with the general idea in mind that, given the probabilistic paradigm, there may exist an optimal strategy of prediction, quite independent of traditional physical concepts, but resting on what one can deduce from a given amount of information. We will formulate elements of such a strategy with the aim of achieving maximum predictive power.
## 2 Representing Knowledge from Probabilistic Data
Any investigative endeavour rests upon one natural assumption: More data from observations will lead to better knowledge of the situation at hand. Let us see whether this holds in quantum experiments. The data are relative frequencies of events. From these we deduce probabilities from which in turn we derive the magnitudes of physical quantities. As an example take an experiment with two detectors, where a click is registered in either the one or the other. (We exclude simultaneous clicks for the moment.) Here, only one probability is measurable, e.g. the probablity $`p_1`$ of a click in detector 1. After N runs we have $`n_1`$ counts in detector 1 and $`n_2`$ counts in detector 2, with $`n_1+n_2=N`$. The probability $`p_1`$ can thus be estimated as
$$p_1=\frac{n_1}{N}$$
(1)
with the uncertainty interval
$$\mathrm{\Delta }p_1=\sqrt{\frac{p_1(1-p_1)}{N}}.$$
(2)
From $`p_1`$ the physical quantity $`\chi (p_1)`$ is derived. Its uncertainty interval is
$$\mathrm{\Delta }\chi =\left|\frac{\partial \chi }{\partial p_1}\right|\mathrm{\Delta }p_1=\left|\frac{\partial \chi }{\partial p_1}\right|\sqrt{\frac{p_1(1-p_1)}{N}}.$$
(3)
The accuracy of $`\chi `$ is given by the inverse of $`\mathrm{\Delta }\chi `$. With the above assumption we expect it to increase with each additional run, because we get additional data. Therefore, for any $`N`$, we expect
$$\mathrm{\Delta }\chi (N+1)<\mathrm{\Delta }\chi (N).$$
(4)
However, this inequality cannot be true for an arbitrary function $`\chi (p_1)`$. In general $`\mathrm{\Delta }\chi `$ will fluctuate and only decrease on the average with increasing $`N`$. To see this take a theory A which relates physical quantity and probability by $`\chi _A=p_1`$. In an experiment of $`N=100`$ runs and $`n_1=90`$ we get: $`\mathrm{\Delta }\chi _A(100)=.030`$. By taking into account the data from one additional run, where detector 2 happened to click, we have $`\mathrm{\Delta }\chi _A(101)=.031`$. The differences may appear marginal, but nevertheless the accuracy of our estimate for $`\chi _A`$ has decreased although we incorporated additional data. So our original assumption does not hold. This is worrisome as it implies that a prediction based on a measurement of $`\chi _A`$ may be more accurate if the data of the last run are not included. Let us contrast this to theory B, which connects physical quantity and probability by $`\chi _B=p_1^6`$. With $`N`$ and $`n_1`$ as before we have $`\mathrm{\Delta }\chi _B(100)=.106`$. Incorporation of the data from the additional run leads to $`\mathrm{\Delta }\chi _B(101)=.104`$. Now we obviously don’t question the value of the last run, as the accuracy of our estimate has increased.
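These numbers follow directly from eqs.(1)-(3); a short check, with the derivative $`d\chi /dp_1`$ inserted analytically for the two example theories, is given below.

```python
from math import sqrt

def dchi(N, n1, dchi_dp):
    p1 = n1 / N                          # eq.(1)
    dp1 = sqrt(p1 * (1.0 - p1) / N)      # eq.(2)
    return abs(dchi_dp(p1)) * dp1        # eq.(3)

# theory A: chi_A = p1, dchi/dp1 = 1;  theory B: chi_B = p1**6, dchi/dp1 = 6*p1**5
for N, n1 in [(100, 90), (101, 90)]:
    print(N,
          round(dchi(N, n1, lambda p: 1.0), 3),
          round(dchi(N, n1, lambda p: 6.0 * p**5), 3))
# -> 0.030 / 0.031 for theory A and 0.106 / 0.104 for theory B, as quoted in the text
```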
The lesson to be learnt from the two examples is that the specific functional dependence of a physical quantity on the probability (or several probabilities if it is derived from a variety of experiments) determines whether our knowledge about the physical quantity will increase with additional experimental data, and that this also applies to the accuracy of our predictions. This raises the question what quantities we should be interested in to make sure that we get to know them more accurately by doing more experiments. From a statistical point of view the answer is straightforward: choose variables whose uncertainty interval strictly decreases, and simply define them as physical. And from a physical point of view? Coming from classical physics we may have a problem, as concepts like mass, distance, angular momentum, energy, etc. are suggested as candidates for physical quantities. But when coming from the phenomenology of quantum physics, where all we ever get from nature is random clicks and count rates, a definition of physical quantities according to statistical criteria may seem more reasonable, simply because there is no other guideline as to which random variables should be considered physical.
Pursuing this line of thought we want to express experimental results by random variables whose uncertainty interval strictly decreases with more data. When using them in predictions, which are also expressed by variables with this property, predictions should automatically become more accurate with more data input. Now a few trials will show that there are many functions $`\chi (p_1)`$ whose uncertainty interval decreases with increasing $`N`$ (eq.(3)). We want to choose the one with maximum predictive power. The meaning of this term becomes clear when realizing that in general $`\mathrm{\Delta }\chi `$ depends on $`N`$ and on $`n_1`$ (via $`p_1`$). These two numbers have a very different status. The number of runs, $`N`$, is controlled by the experimenter, while the number of clicks, $`n_1`$, is solely due to nature. Maximum predictive power then means to eliminate nature’s influence on $`\mathrm{\Delta }\chi `$. For then we can know $`\mathrm{\Delta }\chi `$ even before having done any experimental runs, simply upon deciding how many we will do. From eq.(3) we thus get
$$\sqrt{N}\mathrm{\Delta }\chi =\left|\frac{\partial \chi }{\partial p_1}\right|\sqrt{p_1(1-p_1)}=\mathrm{constant},$$
(5)
which results in
$$\chi =C\mathrm{arcsin}(2p_1-1)+D$$
(6)
where C and D are real constants. The inverse is
$$p_1=\frac{1+\mathrm{sin}(\frac{\chi -D}{C})}{2},$$
(7)
showing that the probability is periodic in $`\chi `$. Aside from the linear transformations provided by $`C`$ and $`D`$ any other smooth function $`\alpha (\chi )`$ in real or complex spaces will also fulfill requirement (5) when equally sized intervals in $`\chi `$ correspond to equal line lengths along the curve $`\alpha (\chi )`$. One particular curve is
$$\alpha (\chi )=\mathrm{sin}(\frac{\chi }{2})e^{i\frac{\chi }{2}},$$
(8)
which is a circle in the complex plane with center at $`i/2`$. It exhibits the property $`\left|\alpha \right|^2=p_1`$ known from quantum theory. But note, that for instance the function $`\beta =\mathrm{sin}(\chi /2)`$ does not fulfill the requirement that the accuracy only depend on $`N`$. Therefore the complex phase factor in eq.(8) is necessary .
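A quick numerical check of this property: for $`\chi =\mathrm{arcsin}(2p_1-1)+\pi /2`$ the uncertainty interval of eq.(3) equals $`1/\sqrt{N}`$ for every value of $`p_1`$, i.e. it depends only on the number of runs. The values of $`p_1`$ and $`N`$ below are chosen arbitrarily for the illustration.

```python
from math import asin, sqrt, pi

def chi(p):                 # eq.(6) with C = 1, D = pi/2
    return asin(2.0 * p - 1.0) + pi / 2.0

def delta_chi(p1, N, h=1e-6):
    dchi_dp = (chi(p1 + h) - chi(p1 - h)) / (2.0 * h)        # numerical derivative
    return abs(dchi_dp) * sqrt(p1 * (1.0 - p1) / N)          # eq.(3)

N = 400
print([round(delta_chi(p, N), 4) for p in (0.1, 0.3, 0.5, 0.7, 0.9)])
# every entry equals 1/sqrt(N) = 0.05: the accuracy depends only on N, as required by eq.(5)
```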
## 3 Distinguishability
We have now found a unique transformation from a probability to another class of variables exemplified by $`\chi `$ in eq.(6). These unique variables always become better known with additional data. But can they be considered physical? We should first clarify what a physical variable is. A physical variable can assume different numerical values, where each value should not only imply a different physical situation, but should most of all lead to a different measurement result in a properly designed experiment. Within the probabilistic paradigm two measurement results are different when their uncertainty intervals don’t overlap. This can be used to define a variable which counts the principally distinguishable results of the measurement of a probability. Comparison of that variable to our quantity $`\chi `$ should tell us how much $`\chi `$ must change from a given value before this can be noticed in an experiment. Following Wootters and Wheeler the variable $`\theta `$ counting the statistically distinguishable results at detector 1 in $`N`$ runs of our above example is given by
$$\theta (n_1)=\int _0^{p_1(n_1)}\frac{dp}{\mathrm{\Delta }p(p)}=\sqrt{N}\left[\mathrm{arcsin}(2p_1-1)+\frac{\pi }{2}\right]_{p_1=\frac{n_1}{N}}$$
(9)
where $`\mathrm{\Delta }p`$ is defined as in eq.(2). When dividing $`\theta `$ by $`\sqrt{N}`$ it becomes identical to $`\chi `$ when in eq.(6) we set $`C=1`$ and $`D=\frac{\pi }{2}`$. This illuminates the meaning of $`\chi `$: It is a continuous variable associated with a probability, with the particular property that anywhere in its domain an interval of fixed width corresponds to an equal number of measurement results distinguishable in a given number of runs. With Occam’s dictum of not introducing more entities than are necessary for the description of the subject matter under investigation, $`\chi `$ would be the choice for representing physical situations and can rightly be called physical.
## 4 A Simple Prediction: The Superposition Principle
Now we return to our aim of finding a strategy for maximum predictive power. We want to see whether the unique class of variables represented by $`\chi `$ indicates a way beyond representing data and perhaps affords special predictions. For the sake of concreteness we think of the double slit experiment. A particle can reach the detector by two different routes. We measure the probability that it hits the detector via the left route, $`p_L`$, by blocking the right slit. In $`L`$ runs we get $`n_L`$ counts. In the measurement of the probability with only the right path available, $`p_R`$, we get $`n_R`$ counts in $`R`$ runs. From these data we want to make a prediction about the probability $`p_{tot}`$, when both paths are open. Therefore we make the hypothesis that $`p_{tot}`$ is a function of $`p_R`$ and $`p_L`$. What can we say about the function $`p_{tot}(p_L,p_R)`$ when we demand maximum predictive power from it? This question is answered by reformulating the problem in terms of the associated variables $`\chi _L`$, $`\chi _R`$ and $`\chi _{tot}`$, which we derive according to eq.(6) by setting $`C=1`$ and $`D=\frac{\pi }{2}`$. The function $`\chi _{tot}(\chi _L,\chi _R)`$ must be such that a prediction for $`\chi _{tot}`$ has an uncertainty interval $`\delta \chi _{tot}`$, which only depends on the number of runs, $`L`$ and $`R`$, and decreases with both of them. (We use the symbol $`\delta \chi _{tot}`$ to indicate that it is not derived from a measurement of $`p_{tot}`$, but from other measurements from which we want to predict $`p_{tot}`$.) In this way we can predict the accuracy of $`\chi _{tot}`$ by only deciding the number of runs, $`L`$ and $`R`$. No actual measurements need to have been done. Because of
$$\delta \chi _{tot}=\sqrt{\left|\frac{\partial \chi _{tot}}{\partial \chi _L}\right|^2\frac{1}{L}+\left|\frac{\partial \chi _{tot}}{\partial \chi _R}\right|^2\frac{1}{R}}$$
(10)
maximum predictive power is achieved when
$$\left|\frac{\partial \chi _{tot}}{\partial \chi _j}\right|=constant,\text{ }j=L,R.$$
(11)
We want to have a real function $`\chi _{tot}(\chi _L,\chi _R)`$, and therefore we get
$$\chi _{tot}=a\chi _L+b\chi _R+c,$$
(12)
where $`a`$, $`b`$ and $`c`$ are real constants. Furthermore we must have $`c=0`$ and the magnitude of both $`a`$ and $`b`$ equal to $`1`$, when we wish to have $`\chi _{tot}`$ equivalent to $`\chi _R`$ or to $`\chi _L`$ when either the one or the other path is blocked. So there is an ambiguity of sign with $`a`$ and $`b`$. When rewriting this in terms of the probability we get
$$p_{tot}=\mathrm{sin}^2(\frac{\chi _L\pm \chi _R}{2}).$$
(13)
This does not look like the sum rule of probability theory. Only for $`p_L+p_R=1`$ does it coincide with it. We may therefore conclude that the sum rule of probability theory does not afford maximum predictive power. But neither does eq.(13) look like the quantum mechanical superposition principle. However, this should not be surprising because our input was just two real-valued numbers, $`\chi _L`$ and $`\chi _R`$, from which we demanded to derive another real-valued number. A general phase as is provided in quantum theory could thus not be incorporated. But let us see what we get with complex representatives of the associated variables of probabilities. We take $`\alpha (\chi )`$ from eq.(8). Again we define in an equivalent manner $`\alpha _L`$, $`\alpha _R`$ and $`\alpha _{tot}`$. From $`p_L`$ we have for instance (from (8) and (7) with $`C=1`$ and $`D=\frac{\pi }{2}`$)
$$\alpha _L=\sqrt{p_L}\left(\sqrt{p_L}+i\sqrt{1-p_L}\right)$$
(14)
and
$$\mathrm{\Delta }\alpha _L=\left|\frac{\partial \alpha _L}{\partial p_L}\right|\mathrm{\Delta }p_L=\frac{1}{2\sqrt{L}}.$$
(15)
If we postulate a relationship $`\alpha _{tot}(\alpha _R,\alpha _L)`$ according to maximum predictive power we expect the predicted uncertainty interval $`\delta \alpha _{tot}`$ to be independent of $`\alpha _L`$ and $`\alpha _R`$ and to decrease with increasing number of runs, $`L`$ and $`R`$. Analogous to (11) we must have
$$\left|\frac{\partial \alpha _{tot}}{\partial \alpha _j}\right|=constant,\text{ }j=L,R,$$
(16)
yielding
$$\alpha _{tot}=s\alpha _L+t\alpha _R+u,$$
(17)
where $`s`$, $`t`$, and $`u`$ are complex constants. Now $`u`$ must vanish and $`s`$ and $`t`$ must both be unimodular when $`p_{tot}`$ is to be equivalent to either $`p_L`$ or $`p_R`$ when the one or the other route is blocked. We then obtain
$$p_{tot}=\left|\alpha _{tot}\right|^2=\left|s\alpha _L+t\alpha _R\right|^2=p_L+p_R+2\sqrt{p_Lp_R}\mathrm{cos}(\varphi ),$$
(18)
where $`\varphi `$ is an arbitrary phase containing the phases of $`s`$ and $`t`$. This is exactly the quantum mechanical superposition principle. What is striking is that with a theory of maximum predictive power we can obtain the general form of this principle, but cannot at all predict $`p_{tot}`$ even when we have measured $`p_L`$ and $`p_R`$, because of the unknown phase $`\varphi `$. So we are led to postulate $`\varphi `$ as a new measurable quantity in this experiment.
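The composition rule of eq.(18) can also be checked numerically. The sketch below is illustrative only (the random probabilities and phases are assumptions, not measured data); it builds $`\alpha _L`$ and $`\alpha _R`$ according to eq.(14), combines them with unimodular constants $`s`$ and $`t`$, and compares $`|s\alpha _L+t\alpha _R|^2`$ with the right-hand side of eq.(18).

```python
import numpy as np

def alpha(p):
    # eq. (14): alpha = sqrt(p) * (sqrt(p) + i*sqrt(1-p)), so |alpha|^2 = p
    return np.sqrt(p) * (np.sqrt(p) + 1j * np.sqrt(1.0 - p))

rng = np.random.default_rng(0)
for _ in range(3):
    pL, pR = rng.uniform(0.05, 0.95, size=2)
    s = np.exp(1j * rng.uniform(0, 2 * np.pi))   # unimodular constants
    t = np.exp(1j * rng.uniform(0, 2 * np.pi))
    aL, aR = alpha(pL), alpha(pR)
    p_tot = abs(s * aL + t * aR) ** 2
    phi = np.angle(s * aL) - np.angle(t * aR)    # the phase appearing in eq. (18)
    rhs = pL + pR + 2 * np.sqrt(pL * pR) * np.cos(phi)
    print(f"pL={pL:.3f} pR={pR:.3f}  |s aL + t aR|^2={p_tot:.6f}  eq.(18) rhs={rhs:.6f}")
```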
## 5 Conclusion
We have tried to obtain insight into the quantum mechanical superposition principle and set out with the idea that it might follow from a most natural assumption of experimental science: more data should provide a more accurate representation of the matter under investigation and afford more accurate predictions. From this we defined the concept of maximum predictive power which demands laws to be such that the uncertainty of a prediction is solely dependent on the number of experiments on which the prediction is based, and not on the specific outcomes of these experiments. Applying this to the observation of two probabilities and to possible predictions about a third probability therefrom, we arrived at the quantum mechanical superposition principle. Our result suggests nature’s law to be such that from more observations more accurate predictions must be derivable.
## 6 Acknowledgments
I thank the Austrian Science Foundation (FWF) for financial support of ion double slit experiments (Project P8781-PHY) whose analysis led to this paper.
# Uranium on uranium collisions at relativistic energies
> Deformation and orientation effects on compression, elliptic flow and particle production in uranium on uranium collisions (UU) at relativistic energies are studied within the transport model ART. The density compression in tip-tip UU collisions is found to be about 30% higher and lasts approximately 50% longer than in body-body or spherical UU reactions. The body-body UU collisions have the unique feature that the nucleon elliptic flow is the highest in the most central collisions and remains constant throughout the reaction. We point out that the tip-tip UU collisions are more likely to create the QGP at AGS and SPS energies, while the body-body UU collisions are more useful for studying properties of the QGP at higher energies.
PACS number(s):25.75.+r
To better understand the $`J/\psi `$ suppression mechanism in ultra-relativistic heavy-ion collisions, uranium on uranium (UU) collisions has been proposed recently to extend beyond Pb+Pb collisions at the CERN’s SPS. Many other outstanding issues regarding the corrections to hard processes, the relation between elliptic flow and equation of state, as well as the study of QCD tri-critical point may also be resolved by studying deformation and orientation effects in UU collisions at relativistic energies. One of the most critical factors to all of these issues is the maximum achievable energy density in UU collisions. Because of the deformation, UU collisions at the same beam energy and impact parameter but different orientations are expected to form dense matter with different compressions and lifetimes. In particular, the deformation of uranium nuclei lets one gain particle multiplicity and energy density by aligning the two nuclei with their long axes head-on (tip-tip). Based on a schematic mass scaling of the energy density in relativistic heavy-ion collisions, Braun-Munzinger found a factor of 1.8 gain in energy density in the tip-tip UU collisions compared to the central Au+Au reactions. More recently, Shuryak re-estimated this factor and found it is about 1.3 using particle production systematics and geometrical considerations of relativistic heavy-ion collisions. Using a simple Monte-Carlo model, Shuryak has also demonstrated that the orientation and the impact parameter between the two colliding uranium nuclei can be determined simultaneously using the experimentally accessible criteria. Given the exciting new physics opportunities with UU collisions and the obvious discrepancy in the estimated gain of energy density , more quantitative studies with more realistic models are necessary. In this Rapid Communication, we report results of such a study. Besides a critical examination of the achievable density compression, we also study the nucleon elliptic flow and particle production in UU collisions with different orientations.
Our study is based on the relativistic transport model ART for heavy ion collisions. We refer the reader to Ref. for details of the model and its applications in studying various aspects of relativistic heavy-ion collisions at beam energies from 1 to 20 GeV/A. Uranium is approximately an ellipsoid with long and short semi-axes
$$R_l=R(1+\frac{2}{3}\delta )$$
(1)
and
$$R_s=R(1\frac{1}{3}\delta ),$$
(2)
where $`R`$ is the equivalent spherical radius and $`\delta `$ is the deformation parameter. For $`{}_{}{}^{238}U`$, one has $`\delta =0.27`$ and thus a long/short axis ratio of about 1.3.
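As a quick numerical illustration of eqs. (1) and (2) (not part of the original text), the axis ratio follows directly from the deformation parameter:

```python
# Long/short axis ratio of 238U from eqs. (1)-(2), with delta = 0.27
delta = 0.27
ratio = (1 + 2 * delta / 3) / (1 - delta / 3)
print(f"R_l/R_s = {ratio:.3f}")   # about 1.30
```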
We have performed a systematic study of UU collisions at beam energies from 1 to 20 GeV/nucleon. We found similar deformation/orientation effects in the whole energy range studied. Typical results at beam energies of 10 and 20 GeV/nucleon will be presented in the following. Among all possible orientations between two colliding uranium nuclei, the tip-tip (with long axes head-on) and body-body (with short axes head-on and long axes parallel) collisions are the most interesting ones. Shown in Fig. 1 is the evolution of the central baryon density in UU collisions at a beam energy of 20 GeV/nucleon and impact parameters of $`0`$ and $`6`$ fm, respectively. In these calculations the cascade mode of the ART model is used. For comparison we have also included results for collisions between two gold or spherical uranium nuclei. Indeed, it is interesting to notice that the tip-tip UU collisions not only lead to higher compressions but also to longer reaction times. The body-body UU collisions, on the other hand, lead to density compressions comparable to those reached in the Au-Au and spherical UU collisions. More quantitatively, about 30% more compression is obtained in the tip-tip UU collisions at both impact parameters. The high density phase (i.e., with $`\rho /\rho _0\geq 5`$) in the tip-tip collisions lasts about 3-5 fm/c longer than in the body-body collisions. We have seen the same deformation and orientation effects in the total energy density, which also includes the newly produced particles. The higher compression and longer passage time render the tip-tip UU collisions the most probable candidates to form the Quark-Gluon-Plasma (QGP) at beam energies that are not very high, such as those currently available at the AGS/BNL and SPS/CERN.
At RHIC/BNL and LHC/CERN energies, the energy densities in colliding spherical heavy nuclei (e.g., Au and Pb) are already far above the predicted QCD phase transition density. A 30% increase in energy density due to deformation is probably not as critical as in fixed-target experiments at lower beam energies. A more important issue is how to detect signatures of the QGP and extract its properties. How may the deformation of uranium nuclei help to address this issue? To answer this question we have studied the nucleon elliptic flow in UU collisions with different orientations. Although our studies are only performed in the beam energy range of 1-20 GeV/nucleon, the deformation and orientation effects are found to be rather energy independent. Our results thus may have useful implications for heavy-ion collisions at even higher energies. The elliptic flow reflects the anisotropy in the particle transverse momentum ($`p_t`$) distribution at midrapidity, i.e.,
$$v_2=<(p_x^2p_y^2)/p_t^2>,$$
(3)
where $`p_x(p_y)`$ is the transverse momentum in (perpendicular to) the reaction plane and the average is taken over all particles in all events. The $`v_2`$ results from a competition between the “squeeze-out” perpendicular to the reaction plane and the “in-plane flow”. It has been shown recently in many studies that the elliptic flow is particularly sensitive to the equation of state of dense matter. Thus, the analysis of $`v_2`$ is one of the most promising tools for detecting signatures of the QGP and extracting its properties.
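As an illustration of eq. (3) (a sketch with toy data, not ART model output), $`v_2`$ can be estimated from the transverse momenta of midrapidity particles as follows; a wider in-plane than out-of-plane spread yields a positive $`v_2`$.

```python
import numpy as np

# Elliptic flow v2 from eq. (3), computed over a toy sample of midrapidity particles.
# px is taken in the reaction plane, py perpendicular to it (assumed synthetic data).
rng = np.random.default_rng(1)
px = rng.normal(0.0, 0.45, 5000)   # GeV/c, larger in-plane spread -> v2 > 0
py = rng.normal(0.0, 0.35, 5000)
v2 = np.mean((px**2 - py**2) / (px**2 + py**2))
print(f"v2 = {v2:.3f}")
```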
Shown in Fig. 2 is the evolution of the nucleon elliptic flow in UU collisions with different orientations at a beam energy of 10 GeV/nucleon and an impact parameter of 6 fm. We initialized the two uranium nuclei such that their long axes are in the reaction plane in both tip-tip and body-body collisions. It is seen that both the tip-tip and sphere-sphere collisions lead to a strong “in-plane flow” (positive $`v_2`$) while the body-body reactions result in a large “squeeze-out” (negative $`v_2`$). The tip-tip and sphere-sphere collisions cannot sustain the higher $`v_2`$ created around the maximum compression. This is due to the strong subsequent competition between the “in-plane flow” and “squeeze-out” of baryons. For the body-body collisions, on the other hand, the “squeeze-out” phenomenon dominates throughout the whole reaction because of the strong shadowing of matter in the reaction plane. The $`v_2`$ in body-body UU collisions can therefore sustain its early value. The elliptic flow in body-body UU collisions is therefore a better probe of the high density phase. This point is seen more clearly in the impact parameter dependence of the elliptic flow as shown in Fig. 3. Unique to the body-body UU collisions, the strength of elliptic flow is the highest in the most central collisions, where the shadowing effect in the reaction plane is the strongest. In tip-tip and sphere-sphere UU collisions, by contrast, the elliptic flow vanishes in the most central collisions due to symmetry. Therefore, the “squeeze-out” of particles, including newly created ones, perpendicular to the reaction plane in very central body-body UU collisions can provide direct information about the dense matter formed in the reaction. This is clearly an advantage of using the body-body collisions over the tip-tip collisions. Of course, at collider energies it is more important to study the elliptic flow at midrapidity for newly produced particles, such as pions, which are even more sensitive to the nuclear shadowing effects.
It is also of considerable interest to study deformation and orientation effects on particle production. Shown in Fig. 4 are the multiplicities of pions and positive kaons as a function of impact parameter. The maximum impact parameters for the tip-tip and body-body UU collisions are $`2R_s`$ and $`2R_l`$, respectively. As one expects, the central (with $`b\leq 5`$ fm) tip-tip UU collisions produce more particles due to the higher compression and the longer passage time of the reaction. At larger impact parameters, the smaller overlap volume in the tip-tip collisions leads to less particle production than in the body-body and sphere-sphere reactions. Also as one expects from the reaction geometry, the multiplicities in the body-body collisions approach those in the sphere-sphere collisions as the impact parameter reaches zero. In the most central collisions, the tip-tip UU collisions produce about 15% (40%) more pions (positive kaons) than the body-body and sphere-sphere UU collisions. These deformation and orientation effects on particle production are consistent with those on density compression shown in Fig. 1. Compared to pions, kaons are more sensitive to the density compression since most of them are produced from second chance particle (resonance)-particle (resonance) scatterings at the energies studied here.
In summary, using A Relativistic Transport model we have studied the deformation and orientation effects on the compression, elliptic flow and particle production in uranium on uranium (UU) collisions at relativistic energies. The compression in the tip-tip UU collisions is about 30% higher and lasts approximately 50% longer than in the body-body or spherical UU collisions. Moreover, we found that the nucleon elliptic flow in the body-body UU collisions has some unique features. We have pointed out that the tip-tip UU collisions are more likely to create the QGP at the AGS/BNL and SPS/CERN energies, while at RHIC/BNL and LHC/CERN energies the “squeeze-out” of particles in the central body-body collisions is more useful for studying properties of the QGP.
I would like to thank W.F. Henning for suggesting me to work on this project and stimulating discussions. I am also grateful to C.M. Ko, M. Murray, J.B. Natowitz, E.V. Shuryak, A.T. Sustich, and B. Zhang for helpful discussions. This work was supported in part by a subcontract S900075 from Texas A&M Research Foundation’s NSF Grant PHY-9870038.
# Implementation of the refined Deutsch-Jozsa algorithm on a 3-bit NMR quantum computer
## Abstract
We implemented the refined Deutsch-Jozsa algorithm on a 3-bit nuclear magnetic resonance quantum computer, which is a meaningful test of quantum parallelism because the qubits are entangled. All of the balanced and constant functions were realized exactly. The results agree well with theoretical predictions and clearly distinguish the balanced functions from the constant functions. Efficient refocusing schemes were proposed for the soft $`z`$-pulse and J-coupling, and it is proved that the thermal equilibrium state gives the same results as the pure state for this algorithm.
A quantum computer, which until recently was just a theoretical concept, has now been realized by nuclear magnetic resonance (NMR). Several methods have been proposed to realize quantum computers, such as ion traps, quantum dots, cavity QED, and Si-based nuclear spins, but NMR has given the most successful results. Several quantum algorithms have been implemented on NMR quantum computers, among which the Deutsch-Jozsa (D-J) algorithm has been studied most because it is the simplest quantum algorithm that shows the power of a quantum computer over a classical one. Most quantum algorithms, including the D-J algorithm, have been implemented only for functions of one and two bits. Because of the finite coherence time, the successful implementation of a quantum algorithm depends heavily on the number of basic operations, which increases with the number of qubits. Moreover, operations on more than two bits require more than two-body interactions, which do not exist in nature. It is possible to avoid such interactions, though not easy, but this again increases the total number of basic gates and coherence may break down during the computation. There have been few works that have performed real three-bit operations so far.
The D-J algorithm determines whether an $`n`$-bit binary function,
$$f:\{0,1\}^n\rightarrow \{0,1\},$$
(1)
is a constant function which always gives the same output, or a balanced function which gives 0 for half of the inputs and 1 for the remaining half. The D-J algorithm gives the answer with only one evaluation of the function, while a classical algorithm requires $`(2^{n-1}+1)`$ evaluations in the worst case. The function is realized in quantum computation by the unitary operation,
$$U|x\rangle |y\rangle =|x\rangle |y\oplus f(x)\rangle ,$$
(2)
where $`x`$ is an $`n`$-bit argument of the function and $`y`$ is one bit. If $`|y\rangle `$ is in the superposed state $`(|0\rangle -|1\rangle )/\sqrt{2}`$, then the result of the operation,
$$U|x\rangle \left(\frac{|0\rangle -|1\rangle }{\sqrt{2}}\right)=(-1)^{f(x)}|x\rangle \left(\frac{|0\rangle -|1\rangle }{\sqrt{2}}\right),$$
(3)
carries information about the function encoded in the overall phase. If $`|x\rangle `$ is also prepared in the superposition of all its possible states, $`(|0\rangle +|1\rangle +\cdots +|2^n-1\rangle )/\sqrt{2^n}`$, by applying an $`n`$-bit Hadamard operator $`H`$ to $`|x\rangle =|0\rangle `$, the relative phases of the $`2^n`$ states change depending on $`f`$. If $`f`$ is a constant function, then the relative phases are all the same and an additional application of $`H`$ restores $`|x\rangle `$ to $`|0\rangle `$. If $`f`$ is a balanced function, $`|x\rangle `$ cannot be restored to $`|0\rangle `$ by this operation. It is obvious that $`|y\rangle `$, being in the superposed state $`(|0\rangle -|1\rangle )/\sqrt{2}`$, plays a central role in the algorithm, but it is redundant in the sense that its state does not change.
This redundancy is removed in the refined D-J algorithm where the following unitary operator is used.
$$U_f|x\rangle =(-1)^{f(x)}|x\rangle .$$
(4)
It has been shown that $`U_f`$ is always reduced to a direct product of single-bit operators for $`n\leq 2`$. In this case, $`n`$ classical computers can do the same job by simultaneous evaluations because the qubits are never entangled. Therefore, meaningful tests of the D-J algorithm can occur if and only if $`n>2`$. Recently, a realization of the D-J algorithm for $`n=4`$ has been reported, but in that work only one balanced function was evaluated and the corresponding $`U_f`$ is reducible to a direct product of four single-bit operators. In this study, we investigated the refined D-J algorithm with 3-bit arguments to find the pulse sequences of the $`U_f`$’s, and implemented the algorithm on an NMR quantum computer for all the functions.
There are $`{}_{8}{}^{}\text{C}_{4}^{}=70`$ balanced and two constant functions among all 3-bit binary functions. We index the functions with their outputs, $`f(0)\cdots f(7)`$, expressed as hexadecimal numbers. For example, $`f_{\mathrm{𝟷}𝙴}`$ denotes the function of which the outputs are given by $`f(0)\cdots f(7)=00011110`$. Note that $`U_{f_𝚡}=-U_{f_{\mathrm{𝙵𝙵}-𝚡}}`$, where x is a hexadecimal number equal to or less than FF. The difference of overall phase cannot be distinguished in the experimental implementations. Therefore, there are 35 distinct unitary operators corresponding to the balanced functions, and one operator corresponding to the constant functions. Since the unitary operator corresponding to the constant functions, $`U_{f_{\mathrm{𝟶𝟶}}}`$, is just the identity matrix, there are 35 non-trivial and distinct $`U_f`$’s to be implemented.
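A short enumeration (illustrative only; the hexadecimal labelling convention follows the text, with $`f(0)`$ taken as the most significant bit) confirms the counting of 70 balanced functions and 35 distinct $`U_f`$'s:

```python
from itertools import combinations

# Enumerate the 70 balanced 3-bit functions, label each by the hexadecimal number
# built from its outputs f(0)...f(7), and pair each function with its complement
# FF - x (same U_f up to an overall sign).
balanced = []
for ones in combinations(range(8), 4):          # choose the 4 inputs mapped to 1
    x = sum(1 << (7 - i) for i in ones)          # f(0) is the most significant bit
    balanced.append(x)

pairs = sorted({tuple(sorted((x, 0xFF - x))) for x in balanced})
print(len(balanced), "balanced functions,", len(pairs), "distinct U_f's")
print("example pair:", [f"{a:02X}" for a in pairs[0]])
```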
The NMR Hamiltonian of the weakly interacting three spin system is given by
$$\mathcal{H}=\sum _{i=1}^{3}\mathrm{\Delta }\omega _iI_{iz}+\sum _{i<j}^{3}\pi J_{ij}2I_{iz}I_{jz}$$
(5)
in the rotating frame, where $`I_{iz}`$ is the $`z`$-component of the angular momentum operator of spin $`i`$. The first term represents the precession of spin $`i`$ about $`z`$-axis due to the chemical shift, $`\mathrm{\Delta }\omega _i`$, and the second term the spin-spin interaction between spin $`i`$ and $`j`$ with coupling constant $`J_{ij}`$. This Hamiltonian provides six unitary operators, $`I_{iz}(\theta )=\mathrm{exp}[ı\theta I_{iz}]`$ and $`J_{ij}(\theta )=\mathrm{exp}[ı\theta 2I_{iz}I_{jz}]`$. In combination with $`I_{iz}(\theta )`$, two other operators $`I_{ix}(\theta )`$ and $`I_{iy}(\theta )`$ produced by rf pulses can perform any single-bit operations. The coupling operator $`J_{ij}(\theta )`$ can be used to make a controlled-not operation. The combination of single-bit operations and controlled-not operations can generate any unitary operations .
Table I shows the sequences of the realizable operators for all the 35 non-trivial distinct $`U_f`$’s. In the table, the notations $`I_1`$, $`I_2`$ and $`I_3`$ were replaced by $`I`$, $`S`$ and $`R`$, respectively for convenience. Some of $`U_f`$’s are irreducible and require three-body interaction. The sequences of realizable operators in the table were obtained by following a general implementation procedure using generator expansion . This method includes the coupling order reduction technique which replaces an $`n`$-body interaction operator for $`n>2`$ by two-body ones. It is noticed that all $`U_f`$’s consist of the operators of the single-spin rotations about $`z`$-axis and spin-spin interactions only. From now on, we call pulses corresponding to these operators the soft $`z`$-pulse and J-coupling, respectively.
The balanced functions are classified into four types depending on the number of $`J_{ij}(\theta )`$’s included in their operation sequences. It is easy to see that no qubits are entangled in type-I functions and therefore, obviously they are not the cases of meaningful tests. In type-II functions, only two qubits out of three are entangled. So, type-II functions can be said to be the stepping stones to meaningful tests. In type-III and IV functions, all three qubits are entangled and the functions of these types can be tested only by a three-bit quantum computer. Therefore, the realization of type-III and IV functions demonstrates true quantum parallelism. Each sequence in Table I is not unique for a given function but we believe that they are optimal ones for implementation of the refined D-J algorithm.
The whole operation sequence for implementation of the refined D-J algorithm is given by $`H`$-$`U_f`$-$`H`$-$`D`$, to be read from left to right. The first and second $`H`$’s were realized by hard $`\pi /2`$ and $`-\pi /2`$ pulses about the $`y`$-axis, respectively. Since the read-out operation $`D`$ can be realized by a hard $`\pi /2`$ pulse about the $`y`$-axis, the second $`H`$ and $`D`$ cancel each other to make the sequence $`H`$-$`U_f`$.
The superposed input state is generated by the Hadamard operation on the pure state $`|0\rangle `$. Therefore, it is usually necessary to convert the thermally equilibrated spin state into the effective pure state. In the case of the refined D-J algorithm, however, the thermal equilibrium state gives the same results as the pure state. The deviation density matrix of the thermal equilibrium state, $`\rho _{\text{th}}`$, is approximated by
$$\rho _{\text{th}}=I_{1z}+I_{2z}+I_{3z}$$
(6)
for the Hamiltonian of Eq. 5, and the density matrix of $`|0\rangle `$, $`\rho _\text{p}`$, is given by
$$\begin{array}{ccc}\hfill \rho _\text{p}& =& I_{1z}+I_{2z}+I_{3z}\hfill \\ & & +2I_{1z}I_{2z}+2I_{2z}I_{3z}+2I_{1z}I_{3z}+4I_{1z}I_{2z}I_{3z}\hfill \\ & =& \rho _{\text{th}}+\mathrm{\Delta }\rho .\hfill \end{array}$$
(7)
The hard $`\pi /2`$ pulse for $`H`$ transforms terms of $`\rho _{\text{th}}`$ into single-quantum coherence and terms of $`\mathrm{\Delta }\rho `$ into multiple-quantum coherence. Since the sequences for the $`U_f`$’s consist of only the soft $`z`$-pulse(s) and J-coupling(s), which depend only on the $`z`$-components of the spin angular momenta, the $`U_f`$’s do not change the order of quantum coherence. Since only single-quantum coherence is observable, $`\rho _{\text{th}}`$ and $`\rho _\text{p}`$ give the same results in this case. In general, the thermal equilibrium state gives the same results as the pure state if the operation sequence after the first Hadamard operator does not change the order of quantum coherence.
The soft $`z`$-pulse and J-coupling were implemented by the time evolution under the Hamiltonian of Eq. 5 with refocusing $`\pi `$-pulses applied at suitable times during the evolution period. Since the refocusing $`\pi `$-pulse has the effect of time reversal, it can be used to make one term in the Hamiltonian evolve while the other terms “freeze” . We optimized this refocusing scheme as illustrated in Fig. 1 which shows the soft $`z`$-pulse on spin 1 and J-coupling between spin 1 and 2 as examples. The evolution time, $`T`$, is $`\theta /\mathrm{\Delta }\omega _i`$ for the soft $`z`$-pulse and $`\theta /(\pi J_{ij})`$ for the J-coupling. Previous schemes divide the evolution period into eight periods and require six pulses, or suffer from TSETSE effect because soft pulses exciting more than one but not all spins were used. Since the difficulty of experiment increases exponentially with increasing number of pulses, especially soft pulses, our scheme greatly enhances the possibility of successful implementation. Axes of successive $`\pi `$-pulses were chosen in the way to cancel imperfections of pulses. For example, four $`\pi `$-pulses in Fig. 1(a) were applied along $`x`$, $`x`$, $`x`$, and $`x`$-axes, respectively.
In our experiment, <sup>13</sup>C nuclear spins of 99% carbon-13 labeled alanine (CH<sub>3</sub>CH(NH<sub>2</sub>)CO<sub>2</sub>H) in D<sub>2</sub>O solvent were used as qubits. NMR signals were measured by using a Bruker DRX300 spectrometer. The chemical shifts of three different carbon spins are about 5670, $`3780`$, and $`6380`$ Hz, and coupling constants $`J_{12}`$, $`J_{23}`$, and $`J_{13}`$ are 54.06, 34.86, and 1.03 Hz, respectively. Protons were decoupled during the whole experiments. Gaussian shaped soft $`\pi `$-pulses were 2 ms in length and hard pulses were about a few microsecond. The length of the total pulse sequence was about 600 ms in the worst case.
We implemented all 35 balanced functions and one constant function exactly. Fig. 2 shows the results for the four functions belonging to the different types shown in Table I. The lines of the spectra for the remaining functions also indicate as clearly as the ones in the figure whether they are positive or negative. The balanced functions are distinguished from the constant function because some of the lines are negative. The peaks of spins 1 and 3 show up as doublets in Fig. 2(a), (b) and (c), while that of spin 2 is a quartet because $`J_{13}`$ is very small compared to $`J_{12}`$ and $`J_{23}`$. Fig. 2(d) shows, however, that the peaks of spins 1 and 3 are in fact quartets also. They look like dispersive doublets because the neighboring lines, split slightly by $`J_{13}`$, have different signs. These results agree well with the theoretical predictions obtained from
$$\text{Tr}\left(e^{-\imath \mathcal{H}t/\hbar }\rho e^{\imath \mathcal{H}t/\hbar }I_+\right),$$
(8)
where $`\rho `$ is the density matrix transformed by the operation sequence $`H`$-$`U_f`$ from $`\rho _{\text{th}}`$ and $`I_+=I_x+ıI_y`$.
In the implementation of the soft $`z`$-pulses and J-couplings shown in Fig. 1, the end of pulse sequence ($`t=T`$) can be clearly defined for the J-coupling but not for the soft $`z`$-pulse, because the last pulse of soft $`z`$-pulse is a soft pulse which is much longer than a hard pulse. Therefore, whole pulse sequence was arranged to finish with the J-coupling. Our refocusing scheme decreases the length of the total pulse sequence and therefore, reduces signal decay due to decoherence. Imperfection of soft pulses is thought to be the main source of the phase error and the decay of signal amplitude of some lines. This imperfection is more serious in the J-coupling than in the soft $`z`$-pulse because out-of-phase multiplets are produced in the former while in-phase multiplets are produced in the latter. Therefore, it is very important to calibrate soft pulses exactly especially for long sequences.
In summary, we implemented the complete refined D-J algorithm with 3-bit arguments which involves entanglement. All the operations were realized by the time evolution under Hamiltonian with refocusing $`\pi `$-pulses. The operation sequences best for our implementation were found using generator expansion. Experimental pulse sequences were made as simple as possible by using the thermal equilibrium state and the new refocusing scheme.
# Stellar winds, dead zones, and coronal mass ejections
## 1 Introduction
The solar wind outflow presents a major challenge to numerical modeling since it is a fully three-dimensional (3D), time-dependent physical environment, where regions of supersonic and subsonic speeds coexist in a tenuous, magnetized plasma. Ulysses observations (McComas et al. swoops (1998)) highlighted again that the solar wind about the ecliptic plane is fundamentally dynamic in nature, while the fast speed wind across both solar poles is on the whole stationary and uniform. Recent SOHO measurements (Hassler et al. hassler (1999)) demonstrated how the fast wind emanating from coronal holes is rooted to the ‘honeycomb’ structure of the chromospheric magnetic network, making the outflow truly 3D, while the daily coronal mass ejections are in essence highly time-varying. Moreover, one really needs to study these time-dependent, multi-dimensional aspects in conjunction with the coronal heating puzzle (Holzer & Leer holzer (1997)).
Working towards that goal, Wang et al. (wang (1998)) recently modeled the solar wind using a two-dimensional, time-dependent, magnetohydrodynamic (MHD) description with heat and momentum addition as well as thermal conduction. Their magnetic topology shows both open (polar) and closed (equatorial) field line regions. When heating the closed field region, a sharp streamer-like cusp forms at its tip as the region continuously expands and evaporates. A quasi-stationary wind model results where the emphasis is on reaching a qualitative and quantitative agreement with the observed latitudinal variation (reproducing, in particular, the sharp transition at roughly $`\pm 20^{}`$ latitude between fast and slow solar wind) by tuning the spatial dependence of the artificial volumetric heating and momentum sources.
We follow another route towards global solar wind modeling, working our way up stepwise from stationary 1D to 3D MHD configurations. In a pure ideal, stationary, axisymmetric MHD approach, numerical simulations can benefit greatly from analytical theory. This is demonstrated in Ustyugova et al. (love (1998)), where stationary magneto-centrifugally driven winds from rotating accretion disks were calculated numerically and critically verified by MHD theory.
In this paper, we extend the Wang et al. (wang (1998)) modeling efforts to 2.5D, by including toroidal vector components while remaining axisymmetric. This allows us to explore stellar wind regimes where rotation is also important. The magnetic field still has open and closed field line regions, but in ideal MHD, the closed field region is a ‘dead’ zone from where no plasma can escape. The unknown coronal heating is avoided by assuming a polytropic equation of state and dropping the energy equation all together. The stationary, axisymmetric, polytropic MHD models are analysed as in Ustyugova et al. (love (1998)).
In particular, we investigate the effects of (i) having both open and closed field line regions in axisymmetric stellar winds; and of (ii) time-dependent perturbations within these transonic outflows. While we still ignore the basic question of why there should be a hot corona in the first place, we make significant progress towards fully 3D, dynamic models. The advantages of a gradual approach towards such ‘final’ model were pointed out in Keppens & Goedbloed (1999a ). There, we initiated our effort to numerically model stellar outflows by gradually relaxing the assumptions inherent in the most well-known solar wind model: the isothermal Parker wind (Parker parker (1958)). In a sequence of stationary, 1D, 1.5D, and 2.5D, hydrodynamic and magnetohydrodynamic stellar wind models, all obtained with the Versatile Advection Code (VAC, see Tóth vacapjl (1996, 1997); Tóth & Keppens vac3 (1998); Keppens & Tóth 1999a , and http://www.phys.uu.nl/t̃oth), we demonstrated that we can now routinely calculate axisymmetric magnetized wind solutions for (differentially) rotating stars. An important generalization of previous modeling efforts (Sakurai sakuraiAA (1985, 1990)) is that the field topology can have both open and closed field line regions, so we model ‘wind’ and ‘dead’ zones self-consistently. In essence, our work extends the early model efforts by Pneuman & Kopp (pneu (1971)) in (i) going from an isothermal to a polytropic equation of state; (ii) allowing for stellar rotation; and (iii) including time-dependent phenomena. While we get qualitatively similar solutions for solar-like conditions, we differ entirely in the numerical procedure employed and in the way boundary conditions are specified. Keppens & Goedbloed (1999a ) contained one such MHD model for fairly solar-like parameter values. In this paper, we start with a critical examination of this ‘reference’ model. The obtained transonic outflow, accelerating from subslow to superfast speeds, must obey the conservation laws predicted by theory, by conserving various physical quantities along streamlines. This will be checked in Sect. 2. Section 3 continues with a physical analysis of the model and investigates the influence of the magnetic field strength and of the latitudinal extent of the ‘dead’ zone. These parameters have a clear influence on the global wind structure, especially evident in the appearance and location of its critical surfaces where the wind speed equals the slow, Alfvén, and fast magnetosonic speeds. We also present one such wind solution for a star which rotates twenty times faster than our sun. Finally, Sect. 4 relaxes the stationarity of the wind pattern, by forcing coronal mass ejections on top of the wind pattern. Conclusions are given in Sect. 5.
## 2 Reference model and conservation laws
### 2.1 Solution procedure
We recall from Keppens & Goedbloed (1999a ) that we solve the following conservation laws for the density $`\rho `$, the momentum vector $`\rho \text{v}`$, and the magnetic field B:
$$\frac{\partial \rho }{\partial t}+\nabla \cdot (\rho \text{v})=0,$$
(1)
$$\frac{\partial (\rho \text{v})}{\partial t}+\nabla \cdot [\rho \text{v}\text{v}+p_{tot}\text{I}-\text{B}\text{B}]=\rho \text{g},$$
(2)
$$\frac{\partial \text{B}}{\partial t}+\nabla \cdot (\text{v}\text{B}-\text{B}\text{v})=0.$$
(3)
Here, $`p_{tot}=p+\frac{1}{2}B^2`$ is the total pressure, I is the identity tensor, $`\text{g}=-(GM_{*}/r^2)\widehat{𝐞}_r`$ is the external stellar (mass $`M_{*}`$) gravitational field with $`r`$ indicating radial distance. We assume $`p\propto \rho ^\gamma `$ (dimensionless, we take $`p=\rho ^\gamma /\gamma `$), where in this paper we only construct models for specified polytropic index $`\gamma =1.13`$. This compares to the value 1.05 used in recent work by Wu, Guo, & Dryer (1997) and an empirically determined value of 1.46 derived from Helios 1 data by Totten, Freeman, & Arya (1995).
The discretized Eqs. (1)–(3) are solved on a radially stretched polar grid in the poloidal plane using a Total Variation Diminishing Lax-Friedrich discretization (see e.g. Tóth & Odstrčil 1996) with Woodward limiting (Colella & Woodward 1984). Stationary ($`\partial /\partial t=0`$) solutions are identified when the relative change in the conservative variables from subsequent time steps drops below a chosen tolerance (sometimes down to $`10^{-7}`$). We explained in Keppens & Goedbloed (1999a) how we benefitted greatly from implicit time integration (see Tóth, Keppens, & Botchev 1998; Keppens et al. 1999; van der Ploeg, Keppens, Tóth 1997) for obtaining axisymmetric ($`\partial /\partial \phi =0`$) hydrodynamic ($`\text{B}=0`$) stellar outflows characterized by $`\rho (R,Z)`$ and $`\text{v}(R,Z)`$, where $`(R,Z)`$ are Cartesian coordinates in the poloidal plane. Denoting the base radius by $`r_{*}`$, these hydrodynamic models cover $`r\in [1,50]r_{*}`$ and have as escape speed $`v_{\mathrm{esc}}=\sqrt{2GM_{*}/r_{*}}=3.3015c_s`$, with $`c_s`$ the base sound speed. They are also characterized by a rotational parameter $`\zeta =\mathrm{\Omega }_{*}r_{*}/c_s=0.0156`$ (if not specified otherwise), and impose boundary conditions at the base such that (i) $`v_\phi =\zeta R`$; and (ii) the poloidal base speed $`\text{v}_p`$ is in accordance with a prescribed radial mass flux $`\rho \text{v}_p=f_{\mathrm{mass}}\widehat{𝐞}_r/r^2`$. The value of the mass loss rate parameter $`f_{\mathrm{mass}}`$ is taken from a 1D polytropic, rotating Parker wind valid for the equatorial regions under identical parameter values. For $`\zeta =0.0156`$, we get $`f_{\mathrm{mass}}=0.01377`$. We clarify below in which way the values for the dimensionless quantities $`v_{\mathrm{esc}}/c_s`$, $`\zeta `$, and $`f_{\mathrm{mass}}`$ relate to the prevailing solar conditions.
To arrive at a ‘reference’ MHD wind solution, two more parameters enter the description which quantify the initial field strength and the desired extent of the ‘dead’ zone. A stationary, axisymmetric, magnetized stellar wind is the end result of a time stepping process which has the initial density $`\rho (R,Z)`$ and toroidal velocity component $`v_\phi (R,Z)`$ from the HD solution with identical $`\gamma `$, $`v_{\mathrm{esc}}`$, and $`\zeta `$ parameters. The poloidal velocity is also copied from the HD solution in a polar ‘wind’ zone where $`\theta <\theta _{\mathrm{wind}}`$ (upper quadrant with $`\theta =0`$ at pole), quantified by its polar angle $`\theta _{\mathrm{wind}}`$. The ‘dead’ zone is appropriately initialized by a zero poloidal velocity. The field is initially set to a monopole field in the ‘wind’ zone, where
$$B_R(R,Z;t=0)=B_0R/r^3,B_Z(R,Z;t=0)=B_0Z/r^3,$$
(4)
where $`r^2=R^2+Z^2`$, coupled to a dipole field in the ‘dead’ zone with
$$B_R(R,Z;t=0)=3a_d\frac{ZR}{r^5},B_Z(R,Z;t=0)=a_d\frac{(2Z^2-R^2)}{r^5}.$$
(5)
The strength of the dipole is taken $`a_d=B_0/(2\mathrm{cos}\theta _{\mathrm{wind}})`$ to keep the radial field component $`B_r`$ continuous at $`\theta =\theta _{\mathrm{wind}}`$. The initial $`B_\phi `$ component is zero throughout. Keppens & Goedbloed (1999a) took $`B_0=3.69`$ and $`\theta _{\mathrm{wind}}=60^{\circ}`$, so that the corresponding dead zone covered only a $`\pm 30^{\circ}`$ latitudinal band about the stellar equator. For the sun at minimum activity, the extent of the coronal hole is typically such that $`\theta _{\mathrm{wind}}=30^{\circ}`$, so it will be useful to vary this parameter in what follows (Sect. 3). We use a resolution of $`300\times 40`$ in the full poloidal halfplane, and impose symmetry conditions at both poles, and free outflow at the outer radius $`50r_{*}`$ (where all quantities are extrapolated linearly in ghost cells). At the stellar base, we similarly extrapolate density and all magnetic field components from their initial values, but let these quantities adjust in value while keeping this initial gradient in the ghost cells. This implies that the density and the magnetic field at the base are determined during the time stepping process to arrive at steady-state. We enforce the $`\nabla \cdot \text{B}=0`$ condition using a projection scheme (Brackbill & Barnes 1980), to end up with a physically realistic magnetic configuration (despite the ‘monopolar’ field in the wind zone). The stellar boundary condition for the momentum equation allows us to specify a differential rotation rate $`\zeta (\theta )`$ and latitudinally varying mass flux through $`f_{\mathrm{mass}}(\theta )`$. We set
$$\rho \text{v}_p=f_{\mathrm{mass}}(\theta )\widehat{𝐞}_r/r^2,v_\phi =\zeta (\theta )R+B_\phi v_p/B_p.$$
(6)
The reference model has a rigid rotation rate according to $`\zeta =0.0156`$, while $`f_{\mathrm{mass}}=0.01377`$ in the wind region and zero in the equatorial dead zone.
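For reference, the initial split monopole/dipole configuration of eqs. (4) and (5) can be written down compactly; the sketch below is illustrative only (the array layout and function name are our own choices, not part of the actual VAC setup), with the dipole strength $`a_d`$ chosen as in the text.

```python
import numpy as np

# Initial poloidal field of eqs. (4)-(5): monopole in the 'wind' zone, dipole in
# the equatorial 'dead' zone, with a_d = B0 / (2 cos theta_wind) as in the text.
def initial_field(R, Z, B0=3.69, theta_wind=np.deg2rad(60.0)):
    r = np.sqrt(R**2 + Z**2)
    theta = np.arctan2(R, Z)                       # polar angle measured from the pole
    a_d = B0 / (2.0 * np.cos(theta_wind))
    wind = (theta < theta_wind) | (theta > np.pi - theta_wind)
    BR = np.where(wind, B0 * R / r**3, 3 * a_d * Z * R / r**5)
    BZ = np.where(wind, B0 * Z / r**3, a_d * (2 * Z**2 - R**2) / r**5)
    return BR, BZ
```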
As emphasized in Keppens & Goedbloed (1999a ), our choice of boundary conditions is motivated by the variational principle governing all axisymmetric, stationary, ideal MHD equilibria (see Sect. 2.2). The analytic treatment shows that the algebraic Bernoulli equation, together with the cross-field momentum balance, really determine the density profile and the magnetic flux function concurrently. In keeping with this formalism, we impose a base mass flux and a stellar rotation, and let the density and the magnetic field configuration adjust freely at the base. In prescribing the stellar rotation, we exploit the freedom available in the variational principle by setting a flux function at the base. Noteworthy, the Pneuman & Kopp (pneu (1971)) model, as well as many more recent modeling efforts for stellar MHD winds, fix the base normal component of the magnetic field together with the density. Below, we demonstrate that our calculated meridional density structure compares well with recent observations by Gallagher et al. (gall (1999)).
The values for the dimensionless parameters $`v_{\mathrm{esc}}/c_s`$, $`\zeta `$ and $`B_0`$ (actually the ratio of the coronal Alfvén speed to $`c_s`$) are solar-like in the following sense. At a reference radius $`r_{*}=1.25R_{\odot}`$, we take values for the number density $`N_o\simeq 10^8\mathrm{cm}^{-3}`$, temperature $`T_o=1.5\times 10^6\mathrm{K}`$, coronal field strength $`B_o\simeq 2\mathrm{G}`$, and rotation rate $`\mathrm{\Omega }_{*}=2.998\times 10^{-6}\mathrm{s}^{-1}`$. For $`\gamma =1.13`$ and assuming a mean molecular weight $`\stackrel{~}{\mu }=0.5`$, the base sound speed then turns out to be $`c_s=167.241\mathrm{km}/\mathrm{s}`$, with all dimensionless ratios as used in the reference model. Further, the value for the mass loss rate parameter $`f_{\mathrm{mass}}=0.01377`$ is then in units of $`1.06\times 10^{13}\mathrm{g}/\mathrm{s}`$, so that a split-monopole magnetic configuration leads to a realistic mass loss rate $`\dot{M}\simeq 4\pi f_{\mathrm{mass}}\simeq 2.9\times 10^{-14}M_{\odot}\mathrm{yr}^{-1}`$. Since the reference model has a constant mass flux in its wind zone, the presence of the dead zone reduces this value by exactly $`(1-\mathrm{cos}\theta _{\mathrm{wind}})=1/2`$. Units enter through the reference radius $`r_{*}`$, the base sound speed $`c_s`$, and the base density $`\rho _{*}=N_om_p\stackrel{~}{\mu }`$ (with proton mass $`m_p`$).
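The conversion between these solar-like base values and the dimensionless parameters can be checked with a few lines. The sketch below (assuming standard values for the physical constants; not part of the original paper) recovers the quoted $`c_s`$, $`v_{\mathrm{esc}}/c_s`$, $`\zeta `$, mass flux unit, and mass loss rate.

```python
import numpy as np

# Dimensionless wind parameters from the quoted solar-like base values (cgs units).
G, Msun, Rsun = 6.674e-8, 1.989e33, 6.96e10
kB, mp = 1.381e-16, 1.673e-24
gamma, mu = 1.13, 0.5
r_star = 1.25 * Rsun                                  # reference radius
No, To, Omega = 1.0e8, 1.5e6, 2.998e-6                # cm^-3, K, s^-1

cs = np.sqrt(gamma * kB * To / (mu * mp))             # base sound speed
vesc = np.sqrt(2 * G * Msun / r_star)
zeta = Omega * r_star / cs
rho_star = No * mp * mu
flux_unit = rho_star * cs * r_star**2                 # unit of f_mass
Mdot = 4 * np.pi * 0.01377 * flux_unit                # split-monopole mass loss rate
print(f"c_s = {cs/1e5:.1f} km/s, v_esc/c_s = {vesc/cs:.3f}, zeta = {zeta:.4f}")
print(f"f_mass unit = {flux_unit:.2e} g/s, Mdot = {Mdot*3.156e7/Msun:.2e} Msun/yr")
```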
### 2.2 Streamfunctions
The final stationary wind pattern is shown below in Fig. 3 (see also Fig. 5 in Keppens & Goedbloed 1999a). The physical correctness of this numerically obtained ideal MHD solution can be checked as follows. All axisymmetric stationary ideal MHD equilibria are derivable from a single variational principle $`\delta L=\delta \int \mathcal{L}\,𝑑V=0`$ with Lagrangian density (Goedbloed, Keppens, Lifschitz 1998; Keppens & Goedbloed 1999b):
$$\mathcal{L}(M^2,\psi ,\nabla \psi ;R,Z)=\frac{1}{2R^2}(1-M^2)(\nabla \psi )^2-\frac{\mathrm{\Pi }_1}{M^2}+\frac{\mathrm{\Pi }_2}{\gamma M^{2\gamma }}-\frac{\mathrm{\Pi }_3}{1-M^2}.$$
(7)
To obtain an analytic ideal MHD solution, the minimizing Euler-Lagrange equations need to be solved simultaneously for the poloidal flux function $`\psi (R,Z)`$ and the squared poloidal Alfvén Mach number $`M^2(R,Z)\rho v_p^2/B_p^2`$. Here, $`\text{B}_p=(1/R)\widehat{𝐞}_\phi \times \psi `$. In contrast with the translationally symmetric case (Goedbloed & Lifschitz hans1 (1997); Lifschitz & Goedbloed hans2 (1997)), the governing variational principle contains factors $`R^2`$, while the profiles $`\mathrm{\Pi }_1`$ and $`\mathrm{\Pi }_3`$ are no longer flux functions. In particular,
$$\mathrm{\Pi }_1\equiv \chi ^{\prime 2}\left(H+\frac{R^2\mathrm{\Omega }^2}{2}+\frac{GM_{*}}{r}\right),$$
(8)
$$\mathrm{\Pi }_2\equiv \frac{\gamma }{\gamma -1}\chi ^{\prime 2\gamma }S,$$
(9)
$$\mathrm{\Pi }_3\equiv \frac{\chi ^{\prime 2}}{2}\left(R\mathrm{\Omega }-\frac{\mathrm{\Lambda }}{R}\right)^2.$$
(10)
where five flux functions $`H,\mathrm{\Omega },S,\mathrm{\Lambda },\chi ^{\prime }`$ enter. These direct integrals of the axisymmetric, stationary ideal MHD equations are:
* the Bernoulli function ($`\sim `$ energy)
$$H(\psi )\equiv \frac{1}{2}v^2+\frac{\rho ^{\gamma -1}\gamma S}{\gamma -1}-\frac{GM_{*}}{r}-v_\phi ^2+v_\phi B_\phi \frac{v_p}{B_p},$$
(11)
* the derivative of the stream function $`\chi ^{\prime }\equiv \partial \chi /\partial \psi `$. Indeed, the poloidal stream function $`\chi (R,Z)`$ necessarily obeys $`\chi (\psi )`$, provided that the toroidal component of the electric field vanishes. These are immediate checks on the numerical solution, namely $`v_RB_Z=v_ZB_R`$, or the fact that streamlines and field lines in the poloidal plane must be parallel (easily seen in Fig. 3).
* the entropy $`S`$, which for our polytropic numerical solutions is constant by construction: $`S\equiv 1/\gamma `$,
* a quantity related to the angular momentum flux $`\text{F}_{\mathrm{AM}}=\rho \text{v}_pRv_\phi -\text{B}_pRB_\phi \equiv \rho \text{v}_p\mathrm{\Lambda }`$, defined as
$$\mathrm{\Lambda }(\psi )\equiv Rv_\phi -RB_\phi \frac{B_p}{\rho v_p},$$
(12)
* and the derivative of the electric field potential
$$\mathrm{\Omega }(\psi )\equiv \frac{1}{R}\left(v_\phi -\frac{v_p}{B_p}B_\phi \right).$$
(13)
Various combinations of these flux functions can be made, for instance Goedbloed & Lifschitz (hans1 (1997)) used the following flux function (instead of $`\mathrm{\Lambda }`$)
$$I(\psi )\equiv RB_\phi -Rv_\phi \frac{\rho v_p}{B_p}=-\chi ^{\prime }\mathrm{\Lambda }.$$
(14)
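In practice, checking these conservation laws amounts to evaluating the integrals pointwise on the numerical solution and verifying that they are constant along poloidal field lines. A minimal sketch of such a diagnostic (our own illustration; the array names and call signature are assumptions, not the actual VAC routines) is:

```python
import numpy as np

def flux_functions(R, Z, rho, vR, vZ, vphi, BR, BZ, Bphi, S, GM, gamma=1.13):
    """Pointwise evaluation of eqs. (11)-(13) from 2D arrays of the MHD variables;
    each returned array should be constant along poloidal field lines for a valid
    stationary solution."""
    r = np.sqrt(R**2 + Z**2)
    vp = np.sqrt(vR**2 + vZ**2)
    Bp = np.sqrt(BR**2 + BZ**2)
    H = (0.5 * (vp**2 + vphi**2) + gamma * S * rho**(gamma - 1) / (gamma - 1)
         - GM / r - vphi**2 + vphi * Bphi * vp / Bp)          # eq. (11)
    Lam = R * vphi - R * Bphi * Bp / (rho * vp)               # eq. (12)
    Omega = (vphi - (vp / Bp) * Bphi) / R                     # eq. (13)
    return H, Lam, Omega
```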
Figure 1 presents gray-scale contour plots of three streamfunctions $`I`$ (actually $`\mathrm{log}I`$), $`\mathrm{\Lambda }`$, and $`H`$, calculated from the reference solution, with its poloidal field lines overlaid. Ideally, these contours must match the field line structure exactly: all deviations are due to numerical errors. Inspection shows that the agreement is quite satisfactory. A quantitative measure of the errors is given in the lower two frames. At bottom left, we plotted the relative deviation of $`\mathrm{\Omega }`$ from the value enforced at the base $`\zeta (\theta )`$ (constant to 0.0156 in this model). The solid line marks the 10% deviation, actual values range from \[0.0092, 0.0187\]. It should be noted that this is a very stringent test of the solution, since for the chosen parameters, the wind is purely thermally driven, and the stellar rotation is dynamically unimportant. The largest deviations are apparent at the rotation axis (symmetry axis) in $`\mathrm{\Omega }`$, which is not unexpected due to its $`1/R`$ dependence. Other inconsistencies are in the region which has drastically changed from its initial zero velocity, purely dipole magnetic field structure: open field lines coming from the polar regions are now draped around a dipolar ‘dead’ zone of limited radial extent. This dead zone simply corotates with the base angular velocity, and has a vanishing poloidal velocity $`v_p`$ and toroidal field $`B_\phi `$. Around that zone, the stellar wind traces the open field lines. The final bottom right frame shows $`E_\phi `$, virtually vanishing everywhere, and only at the very base are values of order $`𝒪(10^2)`$. For completeness, we show the plasma beta $`\beta =2p/B^2=1`$ contour which exceeds unity in an hourglass pattern that stretches out from the dead zone to large radial distances.
Overall, the obtained stationary numerical solution passes all criteria for being physically acceptable. We expect that most errors disappear when using a higher resolution. We already exploited a radial grid accumulated near the stellar surface, necessary for resolving the near-surface acceleration. However, we could benefit also from a higher resolution in polar angle, now only 40 points for the full half circle, by, for instance, using the up-down symmetry.
## 3 Extensions of the reference model
With the accuracy of the numerical solutions confirmed by inspection of the streamfunctions, we can start the discussion of the influence of the physical parameters $`B_0`$, $`\theta _{\mathrm{wind}}`$, and $`\zeta `$ on the global wind structure. First, we present a more detailed analysis of the reference solution itself.
Fig. 2 shows the density structure at left, where we plot number density as a function of polar angle for three fixed radial distances, namely at the base $`1.27R_{\odot}`$, at $`11.9R_{\odot}`$ and at $`12.7R_{\odot}`$. Keppens & Goedbloed (1999a) already demonstrated the basic effect visible here: the equatorial density is higher than the polar density. A recent determination of the $`(r,\theta )`$ dependence of the coronal electron densities within $`1R_{\odot}\leq r\leq 1.2R_{\odot}`$ by Gallagher et al. (1999) concluded that the density falloff is faster in the equatorial region than at the poles, and that the equatorial densities within the observed region are a factor of three larger than in the polar coronal hole. Their study lists number densities of order $`10^8\mathrm{cm}^{-3}`$, as in our reference model. Our base density at $`1.27R_{\odot}`$ (dotted in Fig. 2, left panel) has a distinct latitude variation reflecting the combined open-closed field line structure. Interestingly, the observations in Gallagher et al. (1999) show a similar structure, with quoted values of $`8.3\times 10^7\mathrm{cm}^{-3}`$ at $`1.2R_{\odot}`$ above the pole, increasing to $`1.6\times 10^8\mathrm{cm}^{-3}`$ at the same distance along the equator. In fact, a dip in the density variation was present due to an active region situated above the equator. Qualitatively, we recover this variation at the boundary of the dead zone. Again, we stress that the base density is calculated self-consistently, hence not imposed as a base boundary condition. The other two radial cuts situated beyond the dead zone agree quite well with the conclusions drawn by the observational study.
The reference wind solution also conforms with some well-known studies in MHD wind modeling. Suess & Nerney (1973) and Nerney & Suess (1975) pointed out how consistent axisymmetric stellar wind models which include magnetic fields and rotation automatically lead to a meridional flow away from the equator. At large radial distances, the flow profile should be of the form $`v_\theta \propto \mathrm{sin}(2\theta )`$, with a poleward collimation of the magnetic field. This variation in polar angle is general and independent of the precise base field structure. In the middle panel of Fig. 2, we show the latitude dependence of $`v_\theta `$ at $`50R_{\odot}`$ for the reference model. Note the perfect agreement with the predicted variation.
Since the solar rotation rate, quantified by the parameter $`\zeta `$, is low, the calculated wind solution in the poloidal plane should be similar to the one presented by Pneuman & Kopp (pneu (1971)). They constructed purely poloidal, isothermal and axisymmetric models of the solar wind including a helmet streamer (or ‘dead’ zone). An iterative technique was used to solve for the steady coronal expansion, while the density and the radial magnetic field were fixed at the base. They enforced a dipolar $`B_r`$ with a strength of $`1\mathrm{G}`$ at the poles, half the value we use at the initialization. Their uniform coronal temperature was taken to be $`1.56\times 10^6\mathrm{K}`$, almost identical to our base temperature $`T_o`$. Their base number density was imposed to be $`1.847\times 10^8\mathrm{cm}^3`$, independent of latitude, and they assumed a slightly higher value for the mean molecular weigth, namely $`\stackrel{~}{\mu }=0.608`$. This leads to a base density which is a factor of 2.246 higher than the one used in our model. With these differences in mind (together with our polytropic equation of state and the rotational effects), we show in the right panel of Fig. 2 the magnetic structure and the location of the sonic (where $`v_p=c_s`$) and the Alfvénic surface (where $`v_p=B_p/\sqrt{\rho }`$) in a manner used in the original publication of Pneuman & Kopp (pneu (1971)), their Fig. 4. In the $`(1/r,\mathrm{cos}\theta )`$ projection, the Alfvénic transition on the equator is at the cusp of the helmet structure before the sonic point, while the sonic surface is closer to the solar surface at the poles. The qualitative agreement is immediately apparent, although our solution method is completely different, most notably in the prescription of the boundary conditions. By calculating the base density and magnetic field configuration self-consistently, we generalize the solution procedure employed by Pneuman & Kopp (pneu (1971)) as we gain control of the size of the dead zone through our parameter $`\theta _{\mathrm{wind}}`$. This allows us to study the influence of the base topology of the magnetic field on the global wind acceleration pattern in what follows.
Fig. 3 confronts three steady-state wind solutions with our reference model, which differ in the latitudinal extent of the dead zone and/or in the magnetic field strength. With $`\theta _{\mathrm{wind}}=60^{\circ}`$ and $`B_0=3.69`$ (corresponding to a $`2\mathrm{G}`$ base coronal field strength) for the reference case A, we increased the latitudinal extent of the dead zone by taking $`\theta _{\mathrm{wind}}=30^{\circ}`$ in case B, doubled the field strength parameter $`B_0`$ in model C, and took both $`B_0=7.4`$ and $`\theta _{\mathrm{wind}}=30^{\circ}`$ to arrive at model D. We recall that $`B_0`$ specifies only the initial field strength used in the time-stepping process towards a stationary solution. The final base field strength turns out to be of roughly the same magnitude, but differs in its detailed latitudinal variation. The changes in the global wind pattern are reflected in the resulting deformations of the critical surfaces (hourglass curves) where the wind speed equals the slow, Alfvén, and fast speeds. The plotted region stretches out to $`18r_{*}\simeq 22.5R_{\odot}`$.
By enlarging the dead zone under otherwise identical conditions (from A to B), the polar, open field lines are forced to fan out more rapidly with radial distance. As a result, the acceleration of the plasma occurs closer to the stellar surface, and the critical curves become somewhat more isotropic in polar angle. The Alfvén surface moves inwards at the poles, and shifts outwards above the now larger dead zone at the equator, approaching a circle with an equatorial imprint of the dead zone. If we keep the dead zone small, but double the initial field strength $`B_0`$ (from A to C), the opposite behaviour occurs: the critical curves, hence the entire acceleration behaviour of the wind, become much more anisotropic. The most pronounced change is an inward shift of the polar slow transition, and an outward shift of the Alfvén and fast polar transition. This behaviour is in agreement with what a Weber-Davis model (Weber & Davis wd (1967)) predicts to happen when the field strength is increased (note that the Weber-Davis model only applies to the equatorial region). When both the field strength and the dead zone are doubled (from A to D), the resulting Alfvén and fast critical curves are rather isotropic due to the influence of the dead zone. The polar slow transition is displaced inward while the polar Alfvén and fast curve are shifted outward, as expected for the higher field. The detailed equatorial behaviour is clearly modulated by the existing dead zone. We note that all wind solutions presented are still thermally driven, since the solar-like rotation rate is rather low and the field strengths are very modest. The changes are entirely due to reasonable variations in magnetic field topology and only a factor of two in field strength. One could tentatively argue that such variations occur in the solar wind pattern within its 11-year magnetic cycle. In gray-scale, Fig. 3 shows the absolute value of the toroidal field component $`B_\phi `$ (this field changes sign across the equator). Note that the stellar rotation has wound up the field lines in a zone midway between the poles and the dead zone. For higher rotation rates (see below), the associated magnetic pressure build up due to rotation can influence the wind pattern and cause collimation (Trussoni, Tsinganos, & Sauty trusso (1997)). Due to the four-lobe structure, one can expect parameter regimes which lead to both poleward collimation (as in the monopole-field models of Sakurai sakuraiAA (1985)) and equatorward streamline bending.
Figure 4 compares the radial dependence of the poloidal velocity for the four models (A, B, C, D) at the pole (left panel) and the equator (middle panel). For comparison, we overplotted the same quantities in each panel for a solution with a split-monopole base field at the same parameter values. This monopolar field solution is identical in nature to the Sakurai (sakuraiAA (1985, 1990)) models and was shown in Keppens & Goedbloed (1999a , Fig. 4). At the pole, a faster acceleration to higher speeds as compared to the reference model results from either increasing the field strength or enlarging the dead zone. Moreover, all four models show a faster initial acceleration than the corresponding monopolar field model. Radio-scattering measurements of the polar solar wind speed (Grall et al. grall (1996)) indicated that the polar wind acceleration is almost complete by $`10R_{\odot }`$, much closer than expected. Our model calculations show that a fast acceleration can result from modest increases in the coronal field strength and dead zone extent (model D has a solar-like dead zone of $`\pm 60^{\circ }`$).
The middle panel of Fig. 4 shows the distinct decrease in equatorial wind speed due to the dead zone, when compared to a split-monopole solution. The equatorial velocities are reduced by $`10`$ to $`40\mathrm{km}/\mathrm{s}`$, depending on the size of the dead zone and the base field strength. Enlarging the dead zone reduces the wind speed significantly (compare A to B, and C to D). To a lesser degree, the same effect is true for an increase in coronal field strength (compare A to C, and B to D). Keppens & Goedbloed (1999a , Figure 6) contained a polar plot of the velocity and the density at fixed radial distance for the reference model, where at least qualitatively, a transition from high density, low speed equatorial wind to lower density, high speed polar wind is noticeable. As evidenced by Fig. 4, this difference in equatorial and polar wind is even more pronounced for larger dead zones. The velocities reached are too low for explaining the solar wind speeds – the Weber-Davis wind solution of identical parameters reaches $`263\mathrm{km}/\mathrm{s}`$ at 1 AU. However, this is a well-known shortcoming of a polytropic MHD description for modeling the solar wind. Wu et al. (wujgr99 (1999)) therefore resort to an ad hoc procedure where the polytropic index is an increasing function of radial distance, $`\gamma (r)`$, to attain a more realistic $`420\mathrm{km}/\mathrm{s}`$ wind speed at 1 AU, corresponding to the ‘slow’ solar wind. A more quantitative agreement with the observations at these distances must await models where we take the energy equation into account and/or model extra momentum addition as in Wang et al. (wang (1998)). The equatorial toroidal velocity profile is shown at right in Fig. 4. Note that increasing $`B_0`$ or enlarging the dead zone both negatively affect the degree to which the corona corotates with the star.
Keppens & Goedbloed (1999a ) also contained a hydrodynamic solution for a much faster rotation rate quantified by $`\zeta =0.3`$, or twenty times the solar rotation rate. The additional centrifugal acceleration moves the sonic transition closer to the star along the equator, and induces an equatorward streamline bending at the base at higher latitudes (see also Tsinganos & Sauty tsinhd (1992)). One could meaningfully ask what remains of this effect when a two-component field structure is present as well. Therefore, we calculate an MHD wind for this rotation rate, with $`B_0=3.69`$ and $`\theta _{\mathrm{wind}}=60^{\circ }`$ as in the reference case. The corresponding mass loss rate parameter (only used in the wind zone) is $`f_{\mathrm{mass}}=0.01553`$. Fig. 5 displays the wind structure as in Fig. 3, with the gray-scale indicating the logarithm of the density pattern. For this solution, we used the up-down symmetry to double the resolution in polar angle at the same computational cost. The shape of the critical curves has changed dramatically, with a significant outward poleward shift of the Alfvén curve, and a clear separation between Alfvén and fast critical curves – in agreement with a 1.5D Weber-Davis prediction. The actual position of the critical surfaces may be influenced by an interaction with the outer boundary at $`50r_{\ast }`$ in the time-stepping towards a stationary solution. In fact, the combined Alfvén-fast polar transition has shifted outside the computational domain, and the residual could not be decreased to arbitrary small values but stagnated at $`𝒪(10^{-7})`$ following this interaction. Within the plotted region of $`37R_{\odot }`$, the solution is acceptable as explained in Sect. 2. Note how the density structure shows an increase towards the equator, causing a very effective thermo-centrifugal acceleration of the equatorial wind above the dead zone. The equatorward streamline bending occurring in the purely hydrodynamic wind is still important, but now clearly affected by the presence of the dead zone. The toroidal magnetic pressure built up by the stellar rotation along the mid-latitude open field lines is shaping the wind structure as a whole. In those regions, we have $`\rho v_p^2/B_\phi ^2<1`$ together with $`2p/B_\phi ^2<1`$. Thereby, it also leads to streamline bending, both poleward as clearly seen in the high latitude field lines, and equatorward in the vicinity of the stellar surface. In this way, the magnetic topology consisting of a dead and a wind zone, combined with fast rotation, leads to magnetically dominated collimation along the stellar poles, together with magneto-rotational deflections along the equator. The latter leads to enhanced densities in the equatorial plane.
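The two dominance conditions quoted above lend themselves to a simple diagnostic mask when post-processing a solution. The sketch below is hedged: `rho`, `v_p`, `p` and `b_phi` are hypothetical 2D arrays taken from the fast-rotator solution in the code's dimensionless units.

```python
# Hedged diagnostic sketch: flag the regions where the toroidal magnetic
# pressure dominates both the poloidal ram pressure and the thermal pressure,
# i.e. rho*v_p**2/B_phi**2 < 1 and 2*p/B_phi**2 < 1 (dimensionless units).
import numpy as np

def toroidal_dominance_mask(rho, v_p, p, b_phi, tiny=1e-30):
    b2 = b_phi**2 + tiny        # avoid division by zero where B_phi vanishes
    return (rho * v_p**2 / b2 < 1.0) & (2.0 * p / b2 < 1.0)
```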
Figure 6 shows the radial dependence at a polar angle $`\theta =41.6^{\circ }`$ of the radial velocity $`v_r`$, sound speed $`c_s`$, radial Alfvén speed $`A_r=B_r/\sqrt{\rho }`$, and azimuthal speed $`v_\phi `$. Note that the radial velocity reaches up to $`400\mathrm{km}/\mathrm{s}`$ (compare with the $`200\mathrm{km}/\mathrm{s}`$ velocities reached under solar conditions as shown in Fig. 4), as a result of the additional centrifugal acceleration. The rotation plays a significant dynamical role here, in contrast to the ‘solar-like’ models discussed earlier and displayed in Fig. 3. The toroidal speed $`v_\phi `$ reaches above $`100\mathrm{km}/\mathrm{s}`$, a factor of $`10`$ higher than along the ecliptic as shown in Fig. 4 (right panel). The corotation obtained by the numerical procedure to find the stationary state can again be quantified by the relative error $`(\mathrm{\Omega }-\zeta )/\zeta `$: the bottom panel of Fig. 6 proves that it is less than 3 % along that same radial cut.
## 4 Triggering coronal mass ejections
In the process of generating a stationary wind solution, various dynamic phenomena take place which may have physical relevance. For instance, the equatorial conic section delineated by $`\theta [\theta _{\mathrm{wind}},\pi \theta _{\mathrm{wind}}]`$ which was initially static ($`v_p=0`$) and dipolar throughout, first gets ‘invaded’ by plasma emanating from the wind zone. The open field structure is dragged in towards the equator, and most of the dipolar field is moved out of the domain, except for the remaining ‘dead’ zone. One could qualitatively relate some of these changes in the global magnetic topology with observed coronal phenomena.
In reality however, coronal mass ejections represent major disturbances which happen on top of the stationary transonic solar wind. They are associated with sudden, significant mass loss and cause violent disruptions of the global field pattern. Most notably, one frequently observes the global coronal wind structure to return to its previous stationary state, after the passage of the CME. Within the realms of our stellar wind models, we can trigger CMEs on top of the outflow pattern, study their motion, and at the same time demonstrate that the numerical solutions indeed are stable to such violent perturbations by returning to a largely unchanged stationary state. We still restrict ourselves to axisymmetric calculations, so the geometry of our ‘CME’ events is rather artificial. In future work, we intend to model these CMEs in their true 3D setting.
As background stellar wind, we use a slightly modified model B from the previous section. Model B had a large dead zone, $`\theta _{\mathrm{wind}}=30^{\circ }`$, $`B_0=3.69`$, and a rigid rotation with $`\zeta =0.0156`$ corresponding to $`\mathrm{\Omega }_{\ast }\approx 3\times 10^{-6}\mathrm{s}^{-1}`$. We changed the boundary condition on $`v_\phi `$ to mimic a ‘solar-like’ differential rotation, by taking
$$\zeta (\theta )=\zeta _0+\zeta _2\mathrm{cos}^2\theta +\zeta _4\mathrm{cos}^4\theta ,$$
(15)
with $`\zeta _0=0.0156`$, $`\zeta _2=-0.00197`$, and $`\zeta _4=-0.00248`$. This enforces the equator to rotate faster than the poles in accord with the observations. As expected for this low rotation rate, this has no significant influence on the wind acceleration pattern. The coronal mass ejection is an equally straightforward modification of the boundary condition imposed on the poloidal momentum equation, namely $`\rho \text{v}_p=f_{\mathrm{mass}}(\theta ,t)\widehat{𝐞}_r/r^2`$ with
$$f_{\mathrm{mass}}(\theta ,t)=f_{\mathrm{wind}}(\theta )+g_{\mathrm{CME}}\mathrm{sin}\left(\frac{\pi t}{\tau _{\mathrm{CME}}}\right)\mathrm{cos}^2\left(\frac{\pi }{2}\frac{\theta -\theta _{\mathrm{CME}}}{a_{\mathrm{CME}}}\right),$$
(16)
for $`0\le t\le \tau _{\mathrm{CME}}`$ and $`\theta _{\mathrm{CME}}-a_{\mathrm{CME}}\le \theta \le \theta _{\mathrm{CME}}+a_{\mathrm{CME}}`$, and otherwise
$$f_{\mathrm{mass}}(\theta ,t)=f_{\mathrm{wind}}(\theta ).$$
(17)
The wind related mass loss rate $`f_{\mathrm{wind}}(\theta )`$ contains the polar angle dependence due to the dead zone, as before. The extra four parameters control the magnitude of the CME mass loss rate $`g_{\mathrm{CME}}`$, the duration $`\tau _{\mathrm{CME}}`$, and the location $`\theta _{\mathrm{CME}}`$ and extent $`0\le a_{\mathrm{CME}}\le \pi /2`$ in polar angle for the mass ejection. We only present one CME scenario for parameter values $`g_{\mathrm{CME}}=2`$, $`\tau _{\mathrm{CME}}=0.5`$, $`\theta _{\mathrm{CME}}=60^{\circ }`$, and $`a_{\mathrm{CME}}=30^{\circ }`$. Note that the up-down symmetry is hereby deliberately broken. This scenario mimics a mass ejection which detaches from the coronal base within 45 minutes and which has an associated mass flux of about $`2\times 10^{13}\mathrm{g}/\mathrm{s}`$. In fact, the total amount of mass lost due to the CME can be evaluated from
$$M_{\mathrm{lost}}^{\mathrm{CME}}=2g_{\mathrm{CME}}\tau _{\mathrm{CME}}\frac{\pi ^2}{\pi ^2-a_{\mathrm{CME}}^2}\left\{\mathrm{cos}(\theta _{\mathrm{CME}}-a_{\mathrm{CME}})-\mathrm{cos}(\theta _{\mathrm{CME}}+a_{\mathrm{CME}})\right\}.$$
(18)
For the chosen parameter values, this works out to be $`M_{\mathrm{lost}}^{\mathrm{CME}}=\frac{36}{35}\sqrt{3}`$, corresponding to $`0.98\times 10^{17}\mathrm{g}`$, a typical value for a violent event.
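For completeness, the perturbed boundary mass flux and the closed-form total mass of Eq. (18) can be cross-checked numerically. The sketch below is only an illustration: `f_cme` stands in for the extra (CME) part of $`f_{\mathrm{mass}}`$, and the surface integration assumes the flux prescription $`\rho \text{v}_p=f_{\mathrm{mass}}\widehat{𝐞}_r/r^2`$ integrated over a sphere.

```python
# Sketch of the time- and angle-dependent CME mass-flux modulation of
# Eqs. (16)-(17), and a numerical check of the total ejected mass against the
# closed form (18). Parameter values follow the scenario quoted in the text.
import numpy as np
from scipy.integrate import dblquad

g_cme, tau_cme = 2.0, 0.5
theta_cme, a_cme = np.deg2rad(60.0), np.deg2rad(30.0)

def f_cme(theta, t):
    """Extra (CME) part of f_mass(theta, t); zero outside the pulse."""
    inside = (0.0 <= t <= tau_cme) and (abs(theta - theta_cme) <= a_cme)
    if not inside:
        return 0.0
    return g_cme * np.sin(np.pi * t / tau_cme) * \
           np.cos(0.5 * np.pi * (theta - theta_cme) / a_cme) ** 2

# Total ejected mass: integrate the flux rho*v_r = f_mass/r**2 over a sphere
# (area element r**2 sin(theta) dtheta dphi) and over the pulse duration.
numeric, _ = dblquad(lambda th, t: 2.0 * np.pi * np.sin(th) * f_cme(th, t),
                     0.0, tau_cme,
                     lambda t: theta_cme - a_cme, lambda t: theta_cme + a_cme)

analytic = 2.0 * g_cme * tau_cme * np.pi**2 / (np.pi**2 - a_cme**2) * \
           (np.cos(theta_cme - a_cme) - np.cos(theta_cme + a_cme))
print(numeric, analytic, 36.0 / 35.0 * np.sqrt(3.0))   # all approx. 1.781
```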
Figure 7 shows the density difference between the evolving mass ejection and the background stellar wind at left, the magnetic field structure (middle), and the toroidal velocity component $`v_\phi `$ (right) at times $`t=1`$ (1hr 27’ after onset) and $`t=3`$ (4hr 20’ after onset). The region is plotted up to $`15r_{\ast }\simeq 18.75R_{\odot }`$. Although the event is triggered in the upper quadrant dead zone only, its violent character also disturbs the overlying open field (or wind) zone. The added plasma, trapped in the dead zone, even perturbs the lower quadrant wind zone at later times. Note that the CME induces global, abrupt changes in the toroidal velocity component. The outermost closed field lines get stretched out radially, pulling the dead zone along (see Figs. 7 and 8). In ideal MHD calculations, they can never detach through reconnection, although numerical diffusion can cause it to happen. We observed the outermost field lines of the dead zone to travel outwards without noticeable reconnection. The overall wind pattern in the first 15 base radii thereby approximately returns to its original stationary state, as shown in Figure 8 which gives the solution at $`t=5`$ (7hr 13’ after onset) and $`t=30`$ (more than 43 hrs after onset). Figure 9 shows how a hypothetical spacecraft at $`21r_{\ast }\simeq 26.23R_{\odot }`$, close to the ecliptic, would record the CME-passage as a sudden increase in density and poloidal velocity which eventually relax to their pre-event levels. The event is followed by an increased azimuthal flow regime and shows large amplitude oscillations in magnetic field strength and orientation.
As we assumed axisymmetry, this simulation serves as a crude model for CME-type phenomena. Interestingly, axisymmetric numerical simulations of toroidal flux ‘belts’ launched from within the dead zone of a purely meridional, polytropic MHD wind can relate favourably to satellite magnetic cloud measurements at 1AU (Wu et al. wujgr99 (1999)). Note that we triggered a ‘CME’ by prescribing a time and space dependent mass flux at the stellar base, where the density and the magnetic field components could adjust freely. Alternatively, as used in studies by Mikić & Linker (mikic (1994)), global coronal restructuring can be triggered by shearing a coronal arcade. Parameter studies of axisymmetric, but ultimately 3D solutions, could investigate the formation and appearance of various MHD shock fronts depending on plasma beta, Mach numbers, etc.
## 5 Conclusions and outlook
Continuing our gradual approach towards dynamic stellar wind simulations in three dimensions, we studied the influence of the magnetic field strength and topology (allowing for wind and dead zones), of the stellar rotation, and of sudden mass ejecta on axisymmetric MHD winds.
We demonstrated how reasonable changes in the coronal magnetic field (factor of two in field strength and in dead zone extent) influence the detailed acceleration behaviour of the wind. Larger dead zones cause effective, fairly isotropic acceleration to super-Alfvénic velocities since the polar, open field lines are forced to fan out rapidly with radial distance. The Alfvén transition moves outwards when the coronal field strength increases. The equatorial wind outflow is in these models sensitive to the presence and extent of the dead zone, but has, by construction, a vanishing $`B_\phi `$ and a $`\beta >1`$ zone from the tip of the dead zone to large radial distances. The parameter values for these models are solar-like, hence the winds are mostly thermally driven and, in particular, emanated from slowly rotating stars.
For a twenty times faster than solar rotation rate, the wind structure changes dramatically, with a clear separation of the Alfvén and fast magnetosonic critical curves. At these rotation rates, a pure hydrodynamic model predicts equatorward streamline bending from higher latitudes. Our MHD models show how this is now mediated by the dipolar dead zone. An equatorial belt of enhanced density stretches from above the dead zone outwards, where effective thermo-centrifugally driven outflow occurs. The magnetic field structure shows signs of a strong poleward collimation, due to the significant toroidal field pressure build up at these spin rates. As pointed out by Tsinganos & Bogovalov (tsing99 (1999)), this situation could apply to our own sun in an earlier evolutionary phase.
It could be of interest to make more quantitative parameter studies of the interplay between field topology, rotation rate, etc. in order to apprehend transonic stellar outflows driven by combinations of thermal, magnetic, and centrifugal forces. A systematic study of the angular momentum loss rates as a function of dead zone extent, magnetic field strength, and rotation rate can aid in stellar rotational evolution modeling (Keppens, Charbonneau, & MacGregor rotpaper (1995); Keppens binary (1997)). Specifically, Li (jianke (1999)) pointed out that the present solar magnetic braking rate is consistent with either one of two magnetic topologies: (i) one with the standard coronal field strength of $`\sim `$ 1 $`G`$ and a small $`<2R_{\odot }`$ dead zone; or (ii) one with a larger $`\sim `$ 5 $`G`$ dipole strength and a sizeable dead zone. When we calculate the torque exerted on the star by the magnetized winds A, B, C and D shown in Fig. 3 as
$$\tau _{\mathrm{wind}}=4\pi \int _0^{\frac{\pi }{2}}𝑑\theta \mathrm{\Lambda }\rho r^2v_R,$$
(19)
(noting the axi- and up-down symmetry), we find $`\tau _{\mathrm{wind}}^A\simeq 0.139\times 10^{31}\mathrm{dyne}\mathrm{cm}`$, $`\tau _{\mathrm{wind}}^B\simeq 0.062\times 10^{31}\mathrm{dyne}\mathrm{cm}`$, $`\tau _{\mathrm{wind}}^C\simeq 0.246\times 10^{31}\mathrm{dyne}\mathrm{cm}`$, and $`\tau _{\mathrm{wind}}^D\simeq 0.123\times 10^{31}\mathrm{dyne}\mathrm{cm}`$. This confirms Li’s result, since a simultaneous doubling of the coronal field strength and the dead zone extent (from model A to D) hardly changes the torque magnitude. As could be expected, only enlarging the dead zone lowers the braking efficiency (as pointed out in Solanki, Motamen, & Keppens samipap (1997)), while only raising the field strength leads to faster spin-down. Interestingly, a Weber-Davis prediction with governing parameters identical to the reference model A (as presented in Keppens & Goedbloed 1999a ) gives a value $`\tau _{\mathrm{wind}}=4\pi \rho _Ar_A^2v_{rA}\frac{2}{3}\mathrm{\Omega }_{\ast }r_A^2\simeq 2.387\times 10^{31}\mathrm{dyne}\mathrm{cm}`$ (all quantities evaluated at the Alfvén radius $`r_A`$), one order of magnitude larger! The same conclusion was reached by Priest & Pneuman (priest74 (1974)) by estimating the angular momentum loss rate from the purely meridional Pneuman & Kopp (pneu (1971)) model. Although the latter model does not include rotation (so that $`\mathrm{\Lambda }`$ in Eq. (19) is strictly zero for this model), Priest & Pneuman (priest74 (1974)) could estimate the torque for a solar rotation rate from the obtained variation of the poloidal Alfvén radius as a function of latitude (our Fig. 2, right panel). The resulting estimate was only 15 % of that for a monopole base field. Our exactly evaluated spin-down rates are 2.6 % to 10.3 % of a split-monopole case. The large difference arises due to the presence of the dead zone and the fact that $`B_\phi `$ vanishes across the equator for the wind solutions from Fig. 3. Indeed, evaluating the torque from Eq. (19) for the monopolar wind solution from Keppens & Goedbloed (1999a , Figure 4) gives $`\tau _{\mathrm{wind}}\simeq 2.326\times 10^{31}\mathrm{dyne}\mathrm{cm}`$, in agreement with the Weber-Davis estimate. Hence, it should be clear that full MHD modeling is a useful tool to further evaluate and constrain different magnetic braking mechanisms.
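A quadrature of Eq. (19) is straightforward once the angular momentum flux has been extracted from a solution. The sketch below is only illustrative: `Lambda_`, `rho` and `v_r` are hypothetical arrays sampled at a fixed radius (here `v_r` is taken to stand for the radial velocity entering the printed formula), and converting the result to dyne cm would require the model's unit normalisations, which are omitted.

```python
# Hedged sketch of evaluating the wind torque of Eq. (19) by simple quadrature.
# theta is a 1D array of colatitudes on [0, pi/2]; Lambda_, rho, v_r are the
# corresponding angular momentum flux, density and radial velocity samples.
import numpy as np

def wind_torque(theta, Lambda_, rho, v_r, r):
    """tau_wind = 4*pi * integral_0^{pi/2} dtheta Lambda rho r^2 v_r (code units)."""
    integrand = Lambda_ * rho * r**2 * v_r
    return 4.0 * np.pi * np.trapz(integrand, theta)
```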
We showed how CME events can be simulated on top of these transonic outflows. The detailed wind structure is stable to violent mass dumps, even when ejected in the dead zone. Note that we restricted ourselves to axisymmetric perturbations, and it will be of interest to show whether the axisymmetric solutions are similarly stable to non-axisymmetric perturbations (as recently investigated for shocked accretion flows on compact objects in Molteni, Tóth, & Kuznetsov molt (1999)). One could then focus on truly 3D mass ejecta and their parametric dependence (possibly allow for direct comparison with LASCO observations of coronal mass ejections), or even experiment with unaligned rotation and magnetic axes. A 3D time-dependent analytic model by Gibson & Low (gibson (1998)) can be used as a further check on the numerics. Alternatively, we may decide to zoom in on (3D) details of the wind structure at the boundaries of open and closed field line regions or about the ecliptic plane, to see whether shear flow driven Kelvin-Helmholtz instabilities (Keppens et al. kh2d (1999); Keppens & Tóth 1999b ) develop in these regions.
The Versatile Advection Code was developed as part of the project on ‘Parallel Computational Magneto-Fluid Dynamics’, funded by the Dutch Science Foundation (NWO) Priority Program on Massively Parallel Computing, and coordinated by JPG. Computer time on the Cray C90 was sponsored by the Dutch ‘Stichting Nationale Computerfaciliteiten’ (NCF). We thank Keith MacGregor for suggesting to compare torque magnitudes for different models and an anonymous referee for making several useful comments.
# Bifurcation of Periodic Instanton in Decay-Rate Transition
## I Introduction
Recently, much attention has been paid to the quantum-classical decay-rate transition in various branches of physics such as spin tunneling systems, field theoretical models, and cosmology. An important issue in this field is to determine the type of the decay-rate transition. At zero temperature the decay rate by quantum tunneling is governed by the instanton or bounce, and at intermediate temperatures the periodic instanton plays an important role. At high temperatures the decay takes place mainly by thermal activation, which is represented by the sphaleron. Usually the decay-rate transition means the transition between the quantum tunneling regime dominated by the periodic instanton and the thermal activation regime dominated by the sphaleron.
The type of the decay-rate transition is completely determined by the Euclidean action-temperature diagram as shown in Ref.. The typical action-temperature diagrams which appear frequently in quantum mechanical or field theoretical models can be classified into three types (see Fig. 1). Fig. 1(a) represents a smooth second-order transition in which there is no bifurcation of the periodic instanton and hence the decay rate varies smoothly with increasing temperature. In Fig. 1(b) there is one bifurcation point and the decay rate exhibits an abrupt change at the crossover from periodic instanton to sphaleron. Thus this is a typical diagram for the first-order transition. Fig. 1(c) has two bifurcation points and the decay-rate transition between the regimes dominated by periodic instanton and sphaleron, respectively, is second order. However, in this case there is a sharp change of the decay rate within the quantum tunneling regime. For convenience we call the types of decay-rate transition associated with Fig. 1(a), Fig. 1(b), and Fig. 1(c) type I, type II, and type III, respectively.
In order to determine the type of transition completely from action-temperature diagram one has to compute a periodic instanton explicitly. However, the explicit derivation of periodic instanton in field theoretical models is very complicated and sometimes impossible. Hence, it is important to develop a method which determines the type of decay-rate transition without computing the periodic instanton in the full range of temperature.
Research along this direction was carried out recently by using a nonlinear perturbation method or by counting the number of negative modes of the full Hessian operator around the sphaleron. Although these two methods start from completely different points of view, both derive the same criterion for the sharp first-order transition. Since, however, these two methods use only the periodic instanton near the sphaleron, it is impossible to distinguish a type III transition from type I with them.
The purpose of this letter is to develop a more powerful method which not only distinguishes type III from type I but also enables one to understand the whole behaviour of the decay-rate transition by exploring the properties of the bifurcation point. In Sec. II we will show analytically that multiple zero modes should arise at the bifurcation point. This fact is used to derive a criterion for the appearance of a bifurcation point in the action-temperature diagram, which will be explored in Sec. III. An application of this criterion to a simple quantum mechanical model will also be presented in the same section. It is shown that the criterion derived by the nonlinear perturbation method is only a special limit of our result. In the final section a brief conclusion is given.
## II Zero modes at bifurcation point
The decay rate at finite temperatures can be evaluated by Euclidean action of periodic instanton as
$$\mathrm{\Gamma }(T)=Ae^{-S_E[x]}$$
(1)
where $`A`$ is pre-exponential factor. The Euclidean action $`S_E[x]`$ is represented as
$$S_E[x]=\int _0^\beta 𝑑\tau [\frac{1}{2}\dot{x}^2+V(x)],$$
(2)
where $`\beta `$, the period in Euclidean time, is given by the inverse of the temperature, and we assume, for convenience, that the particle mass is unity. The equation of motion for the periodic instanton is
$$\ddot{x}=V^{\prime }(x),$$
(3)
where the prime denotes the coordinate derivative, i.e., $`V^{\prime }(x)=dV/dx`$, and the Euclidean energy $`E`$ is
$$E=V(x)-\frac{1}{2}\dot{x}^2.$$
(4)
Now, let us consider periodic instantons $`x_0(\tau )`$ with period $`\beta _0`$ and $`x_1(\tau )`$ with $`\beta _1=\beta _0+\delta \beta `$. Then we can write
$`x_1(\tau )`$ $`=`$ $`x_0({\displaystyle \frac{\beta _0}{\beta _1}}\tau )+{\displaystyle \frac{\delta \beta }{\beta _0}}\eta (\tau )`$ (5)
$`\simeq `$ $`x_0(\tau )-{\displaystyle \frac{\delta \beta }{\beta }}[\tau \dot{x_0}-\eta (\tau )]`$ (6)
where $`\eta (\tau )`$ is some periodic function which can be determined from the fact that $`x_0(\tau )`$ and $`x_1(\tau )`$ must be solutions of the equation of motion. Hence, $`\eta (\tau )`$ has to satisfy
$$\widehat{M}\eta =-2\ddot{x_0},$$
(7)
where $`\widehat{M}`$ is the fluctuation operator at $`x_0`$;
$$\widehat{M}=-\frac{d^2}{d\tau ^2}+V^{\prime \prime }(x_0).$$
(8)
Using Eqs. (4) and (6), one can obtain the ratio between energy and period differences;
$$\frac{\delta E}{\delta \beta }=\frac{1}{\beta }[V^{\prime }(x_0)\eta +\dot{x_0}^2-\dot{x_0}\dot{\eta }].$$
(9)
It is worthwhile noting that the right-hand side of Eq.(9) is a constant of motion. Since a bifurcation point occurs when $`\frac{d\beta }{dE}=0`$, the absolute value of the right-hand side of the above equation must diverge at this point. This means that $`\eta `$ must be a singular function at the same point. In order for $`\eta `$ to be singular we need another zero mode, different from the well-known one, i.e., $`\dot{x_0}`$, which originates from time translational symmetry. This fact can be easily shown as follows. If we expand the right-hand side of Eq.(7) in terms of eigenstates $`|\xi _n>`$ of $`\widehat{M}`$, the equation for $`\eta `$ becomes
$$\widehat{M}|\eta >={\sum _n}^{\prime }a_n|\xi _n>,$$
(10)
where $`a_n`$’s are expansion coefficients and the prime denotes that the sum excludes the known zero mode $`|\dot{x_0}>`$ due to its orthogonality to $`|\ddot{x_0}>`$. Then $`|\eta >`$ can be simply written as
$$|\eta >={\sum _n}^{\prime }\frac{a_n}{h_n}|\xi _n>,$$
(11)
where $`h_n`$’s are the corresponding eigenvalues of $`\widehat{M}`$. Here, it is shown that in order to get an infinite magnitude of $`|\eta >`$ at the bifurcation point, at least one of the $`h_n`$’s should be zero. This zero mode does not correspond to the known zero mode ($`\dot{x_0}`$) but to a new zero mode at the bifurcation point. This fact is numerically explored in Ref..
In the next section we derive a condition for the appearance of a bifurcation point in the action-temperature diagram.
## III Condition for bifurcation point
As mentioned in the previous section, the zero mode satisfies the following second-order ordinary differential equation;
$$-\frac{d^2y}{d\tau ^2}+V^{\prime \prime }(x_0)y=0.$$
(12)
Since this equation is second order, there are two independent solutions; one is the known one, $`y_1=\dot{x_0}`$, and another is
$$y_2(\tau )=y_1(\tau )\int ^\tau \frac{d\tau ^{\prime }}{y_1^2(\tau ^{\prime })}=\dot{x_0}(\tau )\int ^\tau \frac{d\tau ^{\prime }}{\dot{x_0}^2(\tau ^{\prime })}.$$
(13)
This fact does not mean that the periodic instanton $`x_0`$ always has two independent zero modes, because in general $`y_2(\tau )`$ does not satisfy the periodic boundary condition. In fact, $`y_2(\tau )`$ satisfies the physically relevant boundary condition only at the bifurcation point. From the symmetry of $`x_0(\tau )`$ one can conjecture that in order for $`y_2(\tau )`$ to be another zero mode, $`\dot{x_0}(\tau )`$ and $`y_2(\tau )`$ must have common turning points. Using Eq.(4), this conjecture can be mathematically written as
$$\int _{x_{-}}^{x_+}𝑑x\frac{1}{(V(x)-E)^{3/2}}=\frac{2}{V^{\prime }(x_{-})\sqrt{V(x_{-})-E}}-\frac{2}{V^{\prime }(x_+)\sqrt{V(x_+)-E}}$$
(14)
where $`x_+`$ and $`x_{}`$ are turning points. This is not a desirable expression for the condition of bifurcation since both sides of this equation are infinite. However, if one uses a relation
$$\frac{d}{dx}(\frac{1}{\sqrt{V(x)-E}})=-\frac{V^{\prime }(x)}{2(V(x)-E)^{3/2}},$$
(15)
the condition for the appearance of bifurcation becomes
$`f(E)`$ $`\equiv `$ $`V^{\prime }(x_{-}){\displaystyle \int _{x_s}^{x_+}}𝑑x{\displaystyle \frac{V^{\prime }(x_+)-V^{\prime }(x)}{(V(x)-E)^{3/2}}}+V^{\prime }(x_+){\displaystyle \int _{x_{-}}^{x_s}}𝑑x{\displaystyle \frac{V^{\prime }(x_{-})-V^{\prime }(x)}{(V(x)-E)^{3/2}}}+{\displaystyle \frac{2[V^{\prime }(x_{-})-V^{\prime }(x_+)]}{\sqrt{V(x_s)-E}}}`$ (16)
$`=`$ $`0,`$ (17)
where $`x_s`$ is the sphaleron solution, i.e., the position of the barrier top. It is easily shown that the values of $`f(E)`$ at the minimum energy $`E=E_{min}`$ and at the sphaleron energy $`E=E_s`$ are positive and zero, respectively, i.e., $`f(E_{min})>0`$ and $`f(E_s)=0`$. Since bifurcations take place at the zeros of $`f(E)`$, we can determine the type of decay-rate transition from the number of zeros of $`f(E)`$ in the range $`E_{min}<E<E_s`$; non-existence of zeros means type I, while one and two zeros correspond to type II and type III, respectively.
Now, let us apply our criterion (17) to a simple quantum mechanical model whose potential is
$$V(x)=\frac{4+\alpha }{12}-\frac{1}{2}x^2-\frac{\alpha }{4}x^4+\frac{\alpha +1}{6}x^6-\frac{\gamma }{3}x^3.$$
(18)
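A direct numerical evaluation of the criterion is simple to set up. The sketch below is only an illustration of the procedure, assuming the signs of the reconstructed potential (18); the chosen $`(\alpha ,\gamma )`$ values and the bracketing intervals for the root searches are hypothetical and would need adjustment for other parameter regions.

```python
# Numerical sketch of the bifurcation criterion: evaluate f(E) of Eq. (17) for
# the model potential (18) and count its zeros for E_min < E < E_s. quad()
# tolerates the integrable (turning-point)^(-1/2) endpoint singularities.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

alpha, gamma = 1.0, 0.1          # hypothetical point in the (alpha, gamma) plane

def V(x):
    return ((4 + alpha) / 12.0 - x**2 / 2.0 - alpha * x**4 / 4.0
            + (alpha + 1) * x**6 / 6.0 - gamma * x**3 / 3.0)

def dV(x):
    return -x - alpha * x**3 + (alpha + 1) * x**5 - gamma * x**2

# outer well minima (sketch brackets valid for moderate alpha, gamma)
xw_p = brentq(dV, 0.3, 1.8)
xw_m = brentq(dV, -1.8, -0.3)
E_s = V(0.0)                                   # sphaleron energy (barrier top at x_s = 0)
E_lo = max(V(xw_m), V(xw_p)) + 1e-3            # both turning points must exist

def turning_points(E):
    """x_- < 0 < x_+ with V(x_+-) = E."""
    xp = brentq(lambda x: V(x) - E, 1e-9, xw_p)
    xm = brentq(lambda x: V(x) - E, xw_m, -1e-9)
    return xm, xp

def f_of_E(E):
    xm, xp = turning_points(E)
    I1, _ = quad(lambda x: (dV(xp) - dV(x)) / (V(x) - E) ** 1.5, 0.0, xp, limit=200)
    I2, _ = quad(lambda x: (dV(xm) - dV(x)) / (V(x) - E) ** 1.5, xm, 0.0, limit=200)
    return dV(xm) * I1 + dV(xp) * I2 + 2.0 * (dV(xm) - dV(xp)) / np.sqrt(E_s - E)

energies = np.linspace(E_lo, E_s - 1e-3, 200)
values = np.array([f_of_E(E) for E in energies])
n_zeros = int(np.sum(values[:-1] * values[1:] < 0))
print("sign changes of f(E):", n_zeros)   # 0 -> type I, 1 -> type II, 2 -> type III
```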
Numerical calculation using the criterion (17) yields Fig. 2, which shows the type of decay-rate transition as a function of the potential parameters $`\alpha `$ and $`\gamma `$. The dashed line distinguishing type III from type I cannot be obtained by the criterion of Ref. since types I and III have the same behavior in the vicinity of the sphaleron. Now, let us derive the previous result of Ref. by imposing a simple restriction on $`f(E)`$. Since the sufficient criterion for the first-order transition has been derived by nonlinear perturbation at the sphaleron, it can be obtained from a simple restriction on the value of $`f(E)`$ near the sphaleron;
$$f(E_s-ϵ)<0,$$
(19)
where $`ϵ`$ is an infinitesimal positive number.
Since one can put $`x_s=0`$ without loss of generality, the two integrals of Eq.(17) can be properly expanded in terms of the following integrals;
$`G_n`$ $`\equiv `$ $`{\displaystyle \int _0^{x_+}}𝑑x{\displaystyle \frac{x^n}{\sqrt{x_+-x}(x-x_{-})^{3/2}}},`$ (20)
$`H_n`$ $`\equiv `$ $`{\displaystyle \int _{x_{-}}^0}𝑑x{\displaystyle \frac{x^n}{\sqrt{x-x_{-}}(x_+-x)^{3/2}}}.`$ (21)
Since $`G_n`$ and $`H_n`$ can be exactly evaluated in terms of $`x_\pm `$, we can expand Eq.(19) as an increasing power series in $`x_\pm `$. Then we get the first non-vanishing term as
$$-\frac{5}{3}\frac{V^{\prime \prime \prime }(0)^2}{V^{\prime \prime }(0)}+V^{\prime \prime \prime \prime }(0)<0,$$
(22)
where we leave out an irrelevant factor. This is the same as the result of the nonlinear perturbation method, which means that the previous result is a special case of our condition for bifurcation.
## IV Conclusion
From the fact that an additional zero mode appears at bifurcation points, we derived the condition Eq.(17) for the occurrence of bifurcation in the action-temperature diagram. Using this condition, one can count the number of bifurcations in the allowed energy range and hence understand the whole behaviour of the decay-rate transition. The sufficient criterion for the first-order transition can also be derived by imposing a restriction near the sphaleron, without use of nonlinear perturbation or negative-mode considerations. From this restriction it can be understood that the previous criterion is a condition to have an odd number of bifurcations, so that with that criterion one cannot distinguish type III from type I. We hope that the idea for bifurcation in this paper can be generalized to be applicable to field theoretical models, which, however, might be non-trivial. Work in this direction is in progress.
# A semiclassical approach to the ground state and density oscillations of quantum dots
## I Introduction
Two-dimensional quantum dots, and semiconductor nanostructures in general, are examples of artificial systems in which, ultimately, we may seek to produce and control electronic quantum properties. Sometimes they are referred to as artificial atoms and in fact, it has been proved that the electronic structure of circular quantum dots resembles in some aspects the shell structure found in atoms and nuclei. Particularly relevant in this sense are the Coulomb blockade measurements that provide direct access to the electronic energy levels by adding electrons one by one to the quantum dot . The magic numbers measured in vertical quantum dots , corresponding to maxima in the addition energies, have also shown clear evidence of shell structure. Absorption measurements in the far-infrared region as well as experiments using sophisticated light scattering techniques have probed both charge density (CDE) and spin density (SDE) excitations and have shown that parabolic confinement is closely satisfied for small size dots.
Interest in the quantum dot community has recently focussed on the properties of deformed nanostructures. For instance, elliptic quantum dots have been investigated in Refs. . Particularly, in Refs. some of us have addressed the collective oscillations of deformed dots where, in addition to density and spin modes, it has been predicted the existence of orbital current modes at low energies. This interest in deformed dots is motivated by the advances in nanofabrication techniques, that allow to produce quantum dots of many different shapes.
Microscopic theoretical approaches, like Hartree , Hartree-Fock and density functional , let alone exact diagonalizations , get very demanding computationally for increasing number of electrons, especially in the symmetry unrestricted case. It is thus interesting to develop a semiclassical approach, dealing only with the total density, for which the computational effort is not much dependent on the number of electrons. This is our purpose in this paper. Semiclassical Thomas-Fermi models have been already used in the field, for instance in Refs. . In particular, Ref. provides a quite rigorous presentation of the theory. However, previous works deal only with the semiclassical ground state and mostly for circular symmetry. Here we emphasize the application to deformed structures and concentrate on the oscillation modes. We do not include in this paper magnetic field since this requires a non trivial generalization of the theory that we plan to develop in a separate contribution.
The structure of the paper is as follows. In Sec. II we discuss the ground state in our semiclassical approach and compare with available microscopic Kohn-Sham (KS) results. Section III introduces the formalism for the time-dependent oscillations. Results for the oscillation modes are discussed in Sec. IV. Finally the conclusions are drawn in Sec. V.
## II Ground states
### A Definition of energy functional
Using Density Functional Theory in a local approximation we write the total energy in terms of the electronic density $`\rho (𝐫)`$ as $`E[\rho ]=\int 𝑑𝐫\mathcal{E}[\rho ]`$, with the energy density separating as
$$\mathcal{E}[\rho ]=\tau [\rho ]+\frac{1}{2}v_H(𝐫)\rho +\mathcal{E}_{XC}(\rho )+v_{\mathrm{e}\mathrm{x}\mathrm{t}}(𝐫)\rho .$$
(1)
The different terms are the kinetic energy density $`\tau [\rho ]`$, the Hartree potential $`v_H(𝐫)=\int 𝑑𝐫^{\prime }\frac{\rho (𝐫^{\prime })}{|𝐫-𝐫^{\prime }|}`$, the exchange-correlation energy density $`\mathcal{E}_{XC}(\rho )`$ and the external confining potential $`v_{\mathrm{e}\mathrm{x}\mathrm{t}}(𝐫)`$. The Thomas-Fermi approximation in two dimensions (2D) yields the following kinetic energy density
$$\tau _{TF}[\rho ]=\frac{\mathrm{\hbar }^2}{2m}\left(\pi \rho ^2+\frac{5}{12}\nabla ^2\rho \right).$$
(2)
By analogy with the Weizäcker term for the 3D kinetic energy functional, we have added a gradient term that gives the exact kinetic energy for a single electron
$$\tau _W[\rho ]=\frac{\mathrm{\hbar }^2}{2m}\lambda \frac{(\nabla \rho )^2}{\rho },$$
(3)
with $`\lambda =1/4`$. It is worth pointing out that the first non-vanishing gradient correction in 2D is not known from a rigorous semiclassical expansion and therefore we have introduced empirically the Weizsäcker-like term. We will show later that the results are not sensitive to the precise value of the coefficient $`\lambda `$. Our total Thomas-Fermi-Weizsäcker (TFW) kinetic functional is then $`\tau =\tau _{TF}+\tau _W`$. For the sake of comparison, we recall here that the KS method provides the exact kinetic energy by means of a set of single particle orbitals $`\{\phi _i\}`$ as
$$\tau (𝐫)=\frac{\mathrm{\hbar }^2}{2m}\underset{i,\mathrm{o}\mathrm{c}\mathrm{c}.}{\sum }|\nabla \phi _i(𝐫)|^2.$$
(4)
For the exchange-correlation energy $`_{XC}`$ we have used the same functional of Ref. , based on the Tanatar and Ceperley calculation for the uniform gas . Specifically, in modified atomic units , it is written as
$$\mathcal{E}_{XC}(\rho )=-\frac{4}{3}\sqrt{\frac{2}{\pi }}\rho ^{3/2}+\frac{1}{2}a_0\rho \frac{1+a_1x}{1+a_1x+a_2x^2+a_3x^3},$$
(5)
where $`x=(\pi \rho )^{-1/4}`$ and the $`a_i`$ coefficients are given in Ref. . Notice that in this formalism we always assume perfect spin degeneracy in the ground state in order to obtain a single chemical potential for both spin components (see below).
The ground state density is determined from the minimization (Euler-Lagrange) equation, with the contraint of particle number conservation
$$\frac{\delta }{\delta \rho }\left(E[\rho ]-\mu \int 𝑑𝐫\rho \right)=0.$$
(6)
Specifically, this reads
$`-2\lambda {\displaystyle \frac{\mathrm{\hbar }^2}{2m}}\nabla ^2\rho `$ $`+`$ $`\lambda {\displaystyle \frac{\mathrm{\hbar }^2}{2m}}{\displaystyle \frac{(\nabla \rho )^2}{\rho }}`$ (7)
$`+`$ $`\left(v_{\mathrm{e}\mathrm{x}\mathrm{t}}+v_H+{\displaystyle \frac{\partial \mathcal{E}_{XC}}{\partial \rho }}+2\pi {\displaystyle \frac{\mathrm{\hbar }^2}{2m}}\rho \right)\rho =\mu \rho .`$ (8)
By making the transformation $`\psi =\sqrt{\rho }`$ this equation may be written in the familiar form of a Schrödinger like equation
$$-4\lambda \frac{\mathrm{\hbar }^2}{2m}\nabla ^2\psi +V\psi =\mu \psi ,$$
(9)
where we have defined the semiclassical effective potential $`V=v_{\mathrm{e}\mathrm{x}\mathrm{t}}+v_H+\frac{\partial \mathcal{E}_{XC}}{\partial \rho }+2\pi \frac{\mathrm{\hbar }^2}{2m}\rho `$. The chemical potential $`\mu `$ plays the role of the eigenvalue in Eq. (9). Written in this way, we may now use the algorithms developed to solve the KS equation in arbitrary 2D confinements . Our method is based on a discretization of the $`xy`$ plane in a grid of uniformly spaced points. Then, the total number of grid points, not the number of electronic orbitals as in KS theory, determines the computational cost of the problem. Typically, we use grids ranging from $`50^2`$ to $`100^2`$ points.
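The grid machinery involved can be illustrated with a minimal sketch. It is not the production solver: the example below only shows a 5-point finite-difference Laplacian and one damped (imaginary-time style) relaxation step for Eq. (9), assuming effective atomic units, and it leaves out the self-consistent rebuilding of the effective potential from $`\rho =\psi ^2`$ at every iteration.

```python
# Minimal sketch: uniform 2D grid, 5-point Laplacian (zero-boundary), and one
# relaxation step of  -4*lam*(hbar^2/2m)*lap(psi) + V*psi  toward the ground
# state, followed by renormalisation to the required electron number.
# V is a hypothetical array; it must be recomputed from rho = psi**2 in a
# self-consistent loop (Hartree + exchange-correlation), omitted here.
import numpy as np

def laplacian(f, h):
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
                       - 4.0 * f[1:-1, 1:-1]) / h**2
    return lap

def imaginary_time_step(psi, V, h, n_electrons, dt=1e-3, lam=0.25):
    hbar2_over_2m = 0.5                       # effective atomic units, hbar = m* = 1
    psi = psi - dt * (-4.0 * lam * hbar2_over_2m * laplacian(psi, h) + V * psi)
    norm = np.sum(psi**2) * h**2              # grid approximation of the integral of rho
    return psi * np.sqrt(n_electrons / norm)
```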
### B Results and comparison with Kohn-Sham
In this subsection we show results for the semiclassical ground states in different confining potentials, focussing especially on the comparison with the corresponding microscopic KS results in order to prove the validity of the approximations. Since the microscopic calculation for medium and large sizes is computationally feasible only when circular symmetry is imposed to the system, we will begin by considering the cases of circular parabolas and disks of jellium. In these cases we compare with the KS radial solution . Then we analyze one case with deformation, namely the deformed parabola with $`N=20`$, and compare with a symmetry unrestricted microscopic calculation .
#### 1 Circular parabolas
The confining potential in this case is (we will use modified atomic units for the rest of the paper)
$$v_{\mathrm{e}\mathrm{x}\mathrm{t}}(r)=-A_0+\frac{1}{2}\omega _0^2r^2.$$
(10)
The constants $`A_0`$ and $`\omega _0`$ are usually parameterized to reproduce the potential and its curvature at the origin of a jellium disk (see below) with $`N_p`$ positive charges as $`A_0=2\sqrt{N_p}/r_s`$ and $`\omega _0^2=1/(\sqrt{N_p}r_s^3)`$, with $`r_s`$ the radius per unit charge. Figure 1 shows the densities obtained taking $`r_s=1.51a_0^{\ast }`$. Panels (a) and (b) prove that the semiclassical density closely adjusts to the microscopic one for varying electron number in a fixed parabola and for a varying parabola curvature at fixed electron number, respectively. As is well known from atomic physics, the TFW density averages the microscopic shell oscillations in the inner region. Panel (c) shows that the coefficient $`\lambda `$ controls the surface width of the semiclassical density. The value $`\lambda =1/4`$ provides a reasonable approximation to the KS density tails in all cases. However, it is clear that if this coefficient is allowed to vary a better fit of the densities may be obtained. We have not followed this procedure since the fitted value of $`\lambda `$ would be different for each type of confining potential (see next subsections).
In Fig. 2 we compare total energies per electron in the two approaches. The fit of energies is remarkably good; the deviation of the TFW energies with respect to KS being less than 0.5% for all cases shown in Fig. 2. Although the high precision of the TFW model for this quantity may seem a bit surprising, we recall that the method looks for the variational minimum of total energy. Therefore, this is a priori the best value of the model. We have made the comparison for magic electron numbers, corresponding to closed shell configurations. This is the reason why shell oscillations with size are not visible in the microscopic results of Fig. 2(a). The total energies are almost independent of $`\lambda `$ for reasonable values of this coefficient. The results for $`N=42`$ and $`\lambda =0.25`$, 0.5 and 1.5 are $`E/N=-4.033`$, -4.025 and -4.014 H, respectively.
#### 2 Jellium disks
Another external potential with circular symmetry is that of a uniformly charged disk. Defining the positive charge density (jellium density) in terms of the $`r_s`$ parameter $`\rho _j=1/(\pi r_s^2)`$ and for a disk with radius $`R`$, the potential is
$$v_{\mathrm{e}\mathrm{x}\mathrm{t}}(r)=\{\begin{array}{cc}-\frac{4}{\pi }\frac{R}{r_s^2}E(r/R)\hfill & \mathrm{if}r\le R\hfill \\ -\frac{4}{\pi }\frac{r}{r_s^2}\left[E(R/r)-\left(1-\left(\frac{R}{r}\right)^2\right)K(R/r)\right]\hfill & \mathrm{if}r\ge R\hfill \end{array},$$
(11)
where $`E`$ and $`K`$ are the elliptic integrals. Assuming a uniformly charged disk the number of positive charges $`N_p`$ is related to the disk radius by $`R=r_s\sqrt{N_p}`$.
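A short numerical sketch of this confinement is given below. It is hedged: the modulus convention $`k=r/R`$ is assumed for the elliptic integrals, SciPy's `ellipk`/`ellipe` take the parameter $`m=k^2`$, and the attractive (negative) sign follows the reconstructed Eq. (11); the default `r_s` and `n_p` values are illustrative.

```python
# Sketch of the jellium-disk confinement of Eq. (11) in effective atomic units.
import numpy as np
from scipy.special import ellipe, ellipk

def v_disk(r, r_s=1.51, n_p=58):
    """External potential of a uniformly charged disk of N_p positive charges."""
    R = r_s * np.sqrt(n_p)
    r = np.atleast_1d(np.asarray(r, dtype=float))
    v = np.empty_like(r)
    inside = r <= R
    v[inside] = -(4.0 / np.pi) * (R / r_s**2) * ellipe((r[inside] / R) ** 2)
    ro = r[~inside]
    k2 = (R / ro) ** 2
    v[~inside] = -(4.0 / np.pi) * (ro / r_s**2) * (ellipe(k2) - (1.0 - k2) * ellipk(k2))
    return v

# continuity check at the disk edge and the Coulomb tail (approx. -N_p/r) far away
R = 1.51 * np.sqrt(58)
print(v_disk([0.99 * R, 1.01 * R, 500.0]))
```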
Contrary to the parabola, the jellium potential forces the electronic density to saturate inside the dot. This is obviously due to the charge screening effect, which energetically favors the cancellation of charges. Figure 3 shows the electronic densities of neutral disks with $`r_s=1.51a_0^{\ast }`$ as a function of size. The agreement between both models is rather good. As for the parabolas, the TFW densities average the oscillations of the KS ones but now they rapidly saturate inside the dot. Close to the edge, the TFW densities present a small oscillation. This is similar to the Friedel-type oscillations found in metal surfaces and is enhanced by potentials that abruptly vanish at the edge, as the jellium one. Also shown in Fig. 3 (panel b) are the energies per electron of the neutral disks, that are also very well reproduced by the semiclassical model.
From Fig. 3 we see that the TFW density tails slightly overestimate the KS ones, with the used value of $`\lambda `$. This implies that the number of electrons that are spilling out of the jellium disk will be slightly enhanced in the semiclassical method. This spill out mechanism is known from cluster physics to be of fundamental importance for a proper description of the optical properties . The fact that our TFW method correctly includes spill out gives us some confidence in its use for the description of time dependent density oscillation (see Sec. IV).
#### 3 Deformed parabola for N=20
The third case we have considered is an elliptical dot, confined by an anisotropic parabola,
$$v_{\mathrm{e}\mathrm{x}\mathrm{t}}(𝐫)=\frac{1}{2}\omega _0^2\frac{4}{(1+\beta )^2}(x^2+\beta ^2y^2).$$
(12)
The parameter $`\beta `$ gives the ratio of parabola coefficients in $`y`$ and $`x`$ directions, i.e., writing the external potential as $`v_{\mathrm{e}\mathrm{x}\mathrm{t}}(𝐫)=\frac{1}{2}(\omega _x^2x^2+\omega _y^2y^2)`$, we have $`\beta =\omega _y/\omega _x`$. At the same time the centroid $`(\omega _x+\omega _y)/2`$ is kept fixed at the value $`\omega _0`$, defined as for the circular parabola $`\omega _0^2=1/(r_s^3\sqrt{N_p})`$.
Since the KS problem is much more involved than for the preceding circular potentials, we restrict here the comparison to the $`N=20`$ electrons case, with $`N_p=20`$, and consider four deformations $`\beta =0.875`$, 0.75, 0.625 and 0.5. Figure 4 shows the densities corresponding to these four deformations as well as the circularly symmetric case ($`\beta =1`$) for completeness. It is seen that, quite nicely, the elliptic contour lines of the TFW density follow on average the equidensity regions of the KS result. For $`\beta =1`$ and $`\beta =0.875`$ the structure of equidensity regions of both models is very similar. For lower values of $`\beta `$ the microscopic model yields an incipient electron localization that the semiclassical model is obviously not able to reproduce; it gives nevertheless the correct average value. We conclude that the semiclassical model reproduces the average density distributions also in non circular systems. Total energies, given in Tab. I, are also in excellent agreement with the KS ones.
## III Time dependent equations
To derive the time dependent equations in the semiclassical approximation we will follow the fluid dynamical approach. This method has been used in nuclear physics and, more recently, also for electronic oscillations in atomic clusters . The variational derivation of these equations can be found in the book by Ring and Schuck . Here we will just point out the essential ingredients and particular details for the application to our case.
The essential assumption is that all the single particle orbitals evolve in time with a common complex phase as
$$\phi _i(𝐫,t)=\phi _i^{(0)}(𝐫,t)e^{is(𝐫,t)},$$
(13)
where both $`\phi _i^{(0)}(𝐫,t)`$ and $`s(𝐫,t)`$ are assumed to be real functions. With (13) the kinetic energy separates in two contributions
$$T=T_{\mathrm{i}\mathrm{n}\mathrm{t}\mathrm{r}}+\frac{1}{2}\int 𝑑𝐫\rho 𝐮^2,$$
(14)
an intrinsic one $`T_{\mathrm{i}\mathrm{n}\mathrm{t}\mathrm{r}}=\frac{1}{2}\sum _i|\nabla \phi _i^{(0)}|^2`$ and another associated to the common velocity field $`𝐮=\nabla s`$. Generalizing this separation we may write for the total energy (expectation value of the Hamiltonian $`H`$)
$$H=E_{\mathrm{i}\mathrm{n}\mathrm{t}\mathrm{r}}+\frac{1}{2}\int 𝑑𝐫\rho 𝐮^2.$$
(15)
In our density functional approach the intrinsic energy is in fact given by the energy functional $`E_{\mathrm{i}\mathrm{n}\mathrm{t}\mathrm{r}}=E[\rho ]`$. Noticing now that $`\rho `$ and $`s`$ are conjugated canonical variables, Hamilton’s equations are $`\dot{\rho }=\frac{\delta H}{\delta s}`$, $`\dot{s}=-\frac{\delta H}{\delta \rho }`$. Specifically, these read
$`\dot{\rho }`$ $`=`$ $`-\nabla \cdot (\rho \nabla s)`$ (16)
$`\dot{s}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}(\nabla s)^2-{\displaystyle \frac{\delta E[\rho ]}{\delta \rho }}.`$ (17)
The two Eqs. (17) yield the time evolution of the semiclassical variables $`\rho `$ and $`s`$, and thus they are our required input to model a semiclassical dynamics in quantum dots. As happens for the ground state equation, we may still transform Eqs. (17) to a form similar to the microscopic equation. In fact, defining the time dependent complex function $`\psi =\sqrt{\rho }e^{is}`$ both Eqs. (17) reduce to
$$i\frac{\partial \psi }{\partial t}=-\frac{1}{2}\mathrm{\Delta }\psi +\left(\frac{\delta E[\rho ]}{\delta \rho }+\frac{1}{2}\frac{\mathrm{\Delta }\sqrt{\rho }}{\sqrt{\rho }}\right)\psi ,$$
(18)
i.e., an equation identical to the time-dependent KS equation if we identify the contribution within brackets in the right hand side with the potential term.
In the preceding discussion we have assumed, for simplicity, that both spin densities are oscillating in phase, and thus identically to the total density. The formalism, however, can also account for the situation in which both spin densities $`\rho _\eta `$ ($`\eta =\uparrow ,\downarrow `$) oscillate out of phase, as happens for instance in spin modes. In this case we just need to assume the semiclassical approximation for each spin fluid, with the Coulombic coupling terms between them and generalize the exchange-correlation contribution to the locally polarized case using the exchange interpolation formula . Defining $`\psi _\eta =\sqrt{\rho }_\eta e^{is_\eta }`$ the two equations are
$$i\frac{\partial \psi _\eta }{\partial t}=-\frac{1}{2}\mathrm{\Delta }\psi _\eta +\left(\frac{\delta E[\rho _{\uparrow },\rho _{\downarrow }]}{\delta \rho _\eta }+\frac{1}{2}\frac{\mathrm{\Delta }\sqrt{\rho }_\eta }{\sqrt{\rho }_\eta }\right)\psi _\eta .$$
(19)
In this paper we apply the time dependent equations to obtain the linear response frequencies corresponding to dipole charge density (CDE) and spin density (SDE) excitations of general 2D nanostructures. These are normal modes of oscillation and the technique to obtain them is simply to apply an small instant perturbation on the ground state and take this as initial condition for the time simulation. For the CDE the initial perturbation is simply a rigid translation of the total electronic density, given by operator $`𝒯(𝐚)`$, with $`𝐚`$ the vector displacement. For the SDE we apply opposite translations for spin up and down densities, $`𝒯_\eta (𝐚_\eta )`$, with $`𝐚_{\uparrow }=-𝐚_{\downarrow }`$. After this, we keep track of the time evolution of the dipole moments $`d_\eta (t)=𝐫_\eta \cdot \widehat{𝐞}`$, where $`\widehat{𝐞}`$ is a unitary vector in the direction of the displacement, and
$$𝐫_\eta =\frac{1}{N_\eta }\int 𝑑𝐫\rho _\eta (𝐫,t)𝐫,$$
(20)
with $`N_{\uparrow }`$, $`N_{\downarrow }`$ the number of electrons with spin up and down, respectively. A frequency analysis of the total dipole moment $`d_{\uparrow }+d_{\downarrow }`$ for the CDE, and of the spin dipole moment $`d_{\uparrow }-d_{\downarrow }`$ for the SDE, finally yields the oscillation frequencies .
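The frequency-analysis step can be illustrated with a minimal sketch. It is not the analysis code used for the figures; the dipole signal `d` and the time step `dt` are hypothetical inputs, and the conversion of frequency to energy assumes effective atomic units with $`\mathrm{\hbar }=1`$.

```python
# Hedged sketch: extract the dominant oscillation frequencies from a real-time
# dipole signal d(t) sampled with a uniform step dt. Use d_up + d_down for the
# CDE and d_up - d_down for the SDE.
import numpy as np

def dipole_spectrum(d, dt, n_peaks=3):
    d = np.asarray(d) - np.mean(d)               # remove the static offset
    window = np.hanning(d.size)                  # reduce finite-window leakage
    power = np.abs(np.fft.rfft(d * window)) ** 2
    freqs = np.fft.rfftfreq(d.size, dt)          # cycles per time unit
    energies = 2.0 * np.pi * freqs               # angular frequency = energy (hbar = 1)
    top = np.argsort(power[1:])[::-1][:n_peaks] + 1   # skip the DC bin
    return energies[top], power[top]
```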
## IV Results of density oscillations
We present in this section the spectra obtained within the semiclassical formalism. We consider density and spin dipole oscillations and, as in Sec. I, we emphasize the comparison with the corresponding KS results. One case is taken as representative for the three types of confining potentials for which we have already discussed the ground state.
#### 1 Circular parabola with $`N=56`$
Figure 5 shows the CDE and SDE for the parabola with $`N=56`$ and $`N_p=40`$. The results have been plotted in a logarithmic arbitrary vertical scale (each tick marks an order of magnitude). First, we notice that in the CDE the semiclassical spectrum (and also the KS one) yield a single frequency that coincides with the parameter $`\omega _0`$ of the confining potential. We thus conclude that time-dependent TFW satisfies well the generalized Kohn’s theorem . This theorem states that the exact dipole CDE for parabolic confinement is a single peak at the parabola frequency $`\omega _0`$. The more intense SDE’s lie at lower energy because the residual interaction is attractive in this channel . The semiclassical model does not include the coupling to particle-hole excitations, that leads to an important fragmentation of the KS spectrum. Nevertheless, it reproduces some fragmentation of the collective strength, contained in two dominant peaks at $`0.05`$ H and $`0.19`$ H, that nicely correspond to very intense KS excitations.
We remark that the microscopic results in Fig. 5 (and also those of Fig. 6 below) have been obtained by using the perturbative random-phase approximation (RPA) in a particle-hole basis while the TFW results were obtained, as explained in Sec. IV, from a frequency analysis of the real time signal. The RPA calculation provides a very high frequency resolution, even for low intensity peaks. On the contrary, the analysis of the time signal is not able in some cases to discriminate the low intensity peaks because of the limitations of a discrete time sampling and a finite total time window.
#### 2 Circular jellium with $`N=58`$
This is shown in Fig. 6. In this case generalized Kohn’s theorem does not hold since the external potential is not of parabolic type. As a consequence the dipole charge oscillation couples to the relative motion and the spectrum is generally fragmented, with peak energies depending on the number of electrons . The disk potential behaves quadratically close to the disk center but it deviates for points closer to the edge. Fig. 6 shows the $`\omega _0`$ value for the quadratic behaviour at the origin. The semiclassical model reproduces the blue shift from $`\omega _0`$ although it slightly underestimates its quantitative value. This may be traced back to the slight difference in spill out mentioned in Sec. II.B.2. The TFW model yields a greater spill out and thus produces a softer oscillation mode. The absence of an exact Kohn mode manifests with an important fragmentation, partly reproduced in the TFW model. In the SDE of Fig. 6 the situation is similar to the circular parabola. The semiclassical spectrum reproduces the dominant collective peaks, and has less fragmentation than KS.
#### 3 Deformed parabola
Figure 7 shows the results for an anisotropic parabola with $`N=20`$, $`N_p=20`$ and different deformations. The two upper panels correspond to the CDE for $`\beta =0.5`$ and 0.75. They prove that also in deformed parabolas the generalized Kohn’s theorem is well satisfied by time dependent TFW. In this case the parabolas in $`x`$ and $`y`$ directions are different and, in fact, the oscillation in these two directions is at frequency $`\omega _x`$ and $`\omega _y`$, respectively. For the two other deformations not shown in the figure ($`\beta =0.625`$, 0.875) generalized Kohn’s theorem is equally satisfied. Therefore, as for TDLDA, TDTFW fulfills the exact property for dipole charge oscillations saying that the two center of mass coordinates $`X=\sum _ix_i`$ and $`Y=\sum _iy_i`$ oscillate with the frequencies $`\omega _x`$ and $`\omega _y`$, respectively .
The four lower panels of Fig. 7 show the SDE spectra for the four deformations considered. The $`x`$-$`y`$ splitting in the spin channel is nicely reproduced, as a function of deformation within the TFW model. Fragmentation is present in both models, although TFW is overestimating the collective strength at high energy for $`\beta =0.75`$ and $`\beta =0.875`$. We attribute this to the rather small electron number of this dot ($`N=20`$), for which TFW is surely less accurate than for large sizes. We remark again that the lower energy (and more intense) peak, as well as the $`x`$-$`y`$ splitting are correctly reproduced.
To finish this section and in order to emphasize the power of the semiclassical method we reproduce in Fig. 8 a case that is beyond the present capability of microscopic KS calculations. It corresponds to 72 electrons in a deformed parabola, with $`N_p=72`$ and $`r_s=1.51a_0^{}`$. Since the CDE corresponds only to $`\omega _x`$ and $`\omega _y`$, according to Kohn’s theorem, we display the result of the SDE for $`\beta =0.75`$, as well as the circular one for comparison.
## V Conclusions
In this paper we have discussed a semiclassical approach to the ground state and density oscillations of 2D nanostructures. The method has been implemented for the general case in which no spatial symmetry is required. The validity of both the ground state and the dipole oscillation descriptions has been checked by systematically comparing the semiclassical TFW results with the corresponding KS ones. We have shown that the TFW densities closely follow the KS ones, averaging over the shell oscillations, for different types of external confining potential: circular and deformed parabolas, and jellium disks for which the density saturates. Moreover, the TFW model reproduces the KS energies with great accuracy. The dependence of the energy on the value of the Weizsäcker coefficient is very weak. This coefficient controls the density tail and the electronic spill out for jellium disks; we have shown that the value $`\lambda =1/4`$ provides a good overall fit.
Dipole charge density and spin density oscillations have been analyzed by using the time-dependent semiclassical equations. For circular parabolic confinements we have shown that the semiclassical spectra satisfy the generalized Kohn’s theorem for charge density excitations well, while for neutral jellium disks they yield a blue shift similar to the KS one. In elliptic dots the generalized Kohn’s theorem is satisfied as well. For spin density excitations the TFW model is able to reproduce the dominant peaks of the spectrum and the splitting associated with deformation in elliptic dots.
In general, we have shown that the semiclassical Thomas-Fermi-Weizsäcker model provides a reliable tool to obtain accurate approximations to the ground state and linear oscillations of quantum dots. This opens the possibility to use it in order to explore a great variety of confining potentials with different geometries and for large sizes.
This work was performed under Grant No. PB95-0492 from CICYT, Spain.
# Acknowledgements
V.I.M. thanks Dipartimento di Scienze Fisiche Universitá di Napoli “Federico II” for kind hospitality and Russian Foundation for Basic Research for partial support.
# The Low Resolution Spectrograph of the Hobby-Eberly Telescope II. Observations of Quasar Candidates from the Sloan Digital Sky Survey
Based on observations obtained with the Sloan Digital Sky Survey, which is owned and operated by the Astrophysical Research Consortium, and on observations obtained with the Hobby-Eberly Telescope, which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximillians-Universität München, and Georg-August-Universität Göttingen.
## 1 Introduction
The Hobby-Eberly Telescope (HET), located at McDonald Observatory in west Texas, is the first optical/IR 8-m class telescope to employ a fixed altitude (Arecibo-type) design (Ramsey, Sebring, and Sneden (1994); Hill (1995); Ramsey et al. (1998)). The spherical primary, consisting of 91 identical hexagonal mirrors, is 11.1 m across and is oriented 35° from the zenith; full azimuth motion allows access to all declinations between −10° and +72°. During an observation the azimuth of the telescope is fixed and objects are followed by a tracker assembly located 13.08 m above the primary, riding at the top of the telescope structure (Booth, Ray, and Porter (1998)). The tracker carries a four-mirror corrector which delivers a 4′ diameter field of view, and can follow an object continuously for between 40 minutes and 2.5 hours, depending on the source declination. Only for a fraction of this time, however, does the 9.2-m diameter entrance pupil fall entirely on the primary mirror; the minimum equivalent aperture at the track extremes is 6.8 m, and the average equivalent aperture for a “typical” observation is approximately 8 m.
Groundbreaking for the HET occurred in March 1994, and the telescope was dedicated on 8 October 1997. The first HET facility instrument, the Marcario Low Resolution Spectrograph (LRS; Hill et al. 1998a,b, 2000a; Cobos et al. (1998)) was installed in the tracker in April 1999. Commissioning of the LRS took place during the dark time in April, May, and June 1999; this paper, along with Hill et al. (2000b), presents initial science results from these observations.
Spectra of ten high-redshift quasar candidates from the Sloan Digital Sky Survey (SDSS; Gunn and Weinberg (1995); SDSS Collaboration (1996); York et al. (1999)) were obtained with the LRS during the Spring 1999 campaign. The results demonstrated that although the image quality of the HET primary had not yet reached design specifications, ten-minute exposures of $`r^{}`$ 19–20 quasars were adequate to measure redshifts, and a twenty-five minute exposure of an $`r^{}\sim 23`$, $`i^{}\sim 20.4`$ L dwarf yielded an accurate stellar classification.
## 2 Observations
### 2.1 Sloan Digital Sky Survey
The SDSS is using a CCD camera (Gunn et al. (1998)) on a dedicated 2.5-m telescope (Siegmund et al. (1999)) at Apache Point Observatory, New Mexico, to obtain images in five broad optical bands over 10,000 deg<sup>2</sup> of the high Galactic latitude sky centered approximately on the North Galactic Pole. The five filters (designated $`u^{}`$, $`g^{}`$, $`r^{}`$, $`i^{}`$, and $`z^{}`$) cover the entire wavelength range of the CCD response (Fukugita et al. (1996)). Photometric calibration is provided by simultaneous observations with a 20-inch telescope at the same site. The survey data processing software provides the astrometric and photometric calibrations, as well as identification and characterization of objects (Pier et al. (1999); Lupton et al. 1999b ).
The high photometric accuracy of the SDSS images and the information provided by the $`z^{}`$ filter (central wavelength of 9130 Å) makes the SDSS data an excellent source for identification of high-redshift quasar candidates. If the redshift of a quasar exceeds $``$ 3.5, the combination of the strong Ly $`\alpha `$ line (typical observed equivalent width of 400 Å) and absorption produced by intervening neutral hydrogen (at $`z`$ = 4 approximately half of the radiation shortward of the Ly $`\alpha `$ emission line is absorbed) causes the optical colors of high-redshift quasars to radically deviate (often by more than a magnitude; see Fan (1999); Fan et al. (1999)) from the colors of stars.
Fan et al. (1999) were able to identify 15 new quasars at redshifts larger than 3.65 (including four with $`z>4.5`$) from early SDSS commissioning data; recent work (Fan et al. 2000a ) has increased the number of SDSS high-redshift quasars to over 35. During the course of these investigations, observations of a number of quasar candidates have revealed a significant number of objects cooler than M-type stars (Fan et al. 2000b ), including the identification of the first field methane dwarf (Strauss et al. (1999)).
Quasar candidates were selected, using the multicolor technique similar to that of Fan et al. (1999a), from point sources in two equatorial SDSS strips taken in March 1999. Both low ($`z<3.5`$, from the “$`u^{}g^{}r^{}`$” diagrams) and high ($`z>3.5`$, from the “$`g^{}r^{}i^{}`$” and “$`r^{}i^{}z^{}`$” diagrams) redshift quasar candidates were chosen. Since the photometric measurements of this commissioning data have not yet been placed on the final SDSS system, we use the symbols $`u^{}`$, $`g^{}`$, $`r^{}`$, $`i^{}`$, and $`z^{}`$ to indicate that the photometry is similar but not identical to the final SDSS photometric system.
### 2.2 Spectroscopy of Quasar Candidates
Spectra of ten of the SDSS quasar candidates were obtained with the LRS between April and June 1999. Details of the optical and mechanical design and the performance of the LRS are provided in the companion paper (Hill et al. 2000a ); below is a brief description of the instrument as employed for the present observations.
The LRS is mounted in the Prime Focus Instrument Package, which rides on the HET tracker. The image scale is 4.89″ mm⁻¹ at the entrance aperture to the LRS; the observations of SDSS objects were taken with long slits with widths of 2″ or 3″. The dispersive element was a 300 line mm⁻¹ grism blazed at 5500 Å. The detector is a thinned, antireflection-coated 3072 $`\times `$ 1024 Ford Aerospace CCD. The pixel size is 15$`\mu `$m; the scale on the detector is 0.25″ pixel⁻¹. The CCD has a gain of 2.5 $`e^{-}`$ ADU⁻¹ and a read noise of approximately 7 $`e^{-}`$. The CCD was binned $`2\times 2`$ during readout of the SDSS observations; this produced a data frame size of 1568 $`\times `$ 512 and an image scale of 0.50″ pixel⁻¹.
LRS wavelength calibration was provided by Ne, Cd, and Ar comparison lamps. The wavelength calibration between 4400–10,700 Å is well fit (rms residuals of about a tenth of a pixel) by a fourth-order polynomial. The dispersion ranges from 4.00 Å pixel⁻¹ at the blue end to 4.76 Å pixel⁻¹ in the near infrared. The resolution with the 2″ slit is 20 Å ($`R`$ = 300 at 6000 Å).
The observing conditions varied from nearly photometric to scattered cloud cover. Typically 86 of the 91 segments of the primary were in operation during the data acquisition, and the image quality ranged from 1.6″ (FWHM) to over 5″. The exposure time per object ranged from 10 to 25 minutes. The relative flux calibration was provided by observations of spectrophotometric standards, usually those of Oke and Gunn (1983). Absolute spectrophotometric calibration was carried out by scaling each spectrum so that $`i^{}`$ magnitudes synthesized from the spectra matched the SDSS photometric measurements.
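As an illustration of this last calibration step, the sketch below (Python/NumPy) synthesizes an AB magnitude from a flux-calibrated spectrum through a filter and rescales the spectrum so that the synthetic magnitude matches the photometric one; the spectrum, bandpass, and target magnitude are idealized stand-ins rather than the actual LRS data or the SDSS $`i^{}`$ response.
```python
import numpy as np

C_ANGSTROM = 2.99792458e18  # speed of light [Angstrom/s]

def synth_ab_mag(wave, flam, filt_wave, filt_trans):
    """Synthetic AB magnitude of a spectrum f_lambda [erg/s/cm^2/A] through a filter."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    fnu = flam * wave**2 / C_ANGSTROM          # convert f_lambda -> f_nu [erg/s/cm^2/Hz]
    # photon-weighted mean f_nu over the bandpass
    mean_fnu = np.trapz(fnu * trans / wave, wave) / np.trapz(trans / wave, wave)
    return -2.5 * np.log10(mean_fnu) - 48.6

# --- idealized example (not real LRS/SDSS numbers) ---
wave = np.linspace(4500.0, 9200.0, 2000)                 # Angstrom
flam = 2.0e-17 * (wave / 7000.0)**(-1.5)                 # toy spectrum
filt_wave = np.array([6800.0, 7000.0, 8200.0, 8400.0])   # toy "i'-like" bandpass
filt_trans = np.array([0.0, 1.0, 1.0, 0.0])

m_synth = synth_ab_mag(wave, flam, filt_wave, filt_trans)
m_phot = 19.5                                            # hypothetical photometric magnitude
flam_calibrated = flam * 10**(-0.4 * (m_phot - m_synth)) # rescale spectrum to match photometry
print(m_synth, synth_ab_mag(wave, flam_calibrated, filt_wave, filt_trans))
```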
Six of the ten SDSS quasar candidates had interesting spectra; finding charts are given in Figure 1. The official names for the sources are SDSSp Jhhmmss.ss+ddmmss.s, where the coordinate equinox is J2000. For brevity, the objects will be referred to as simply SDSS hhmm+dd throughout most of this paper. The spectra of the remaining four objects were basically featureless; given the signal-to-noise ratio of these four spectra, one can state that they are not quasars or late-type stars.
The spectra from 4500–9200 Å for the six sources are displayed in Figure 2 (The data have been binned so that there are approximately two pixels per resolution element). The data for all objects were taken through the 2<sup>′′</sup> slit except for the spectrum of SDSS 1624$``$00 (3<sup>′′</sup> slit); the spectrum of SDSS 1405$``$00 was acquired with an OG515 blocking filter.
Prominent spectral features are labelled in the figure. Four of the sources are definitely quasars with redshifts between 2.92 and 4.15 (also see Fan et al. 2000a for observations of SDSS 1310−00 and SDSS 1447−00), one (SDSS 1347+00) is probably a quasar at $`z\sim 3.8`$, and one (SDSS 1430+00) is a very cool star or substellar object.
## 3 Discussion
A summary of the observations of the six SDSS sources is given in Table 1. The table contains the object name, photometry with 1-$`\sigma `$ errors from the SDSS data, the UT date and exposure time of the LRS observation, and the redshift of the source.
The photometry in Table 1 is quoted in asinh magnitudes (Lupton, Gunn, and Szalay 1999a ) that are based on the $`AB`$ system of Oke and Gunn (1983). Asinh magnitudes are essentially identical to the standard definition of magnitudes when the flux levels are well above zero; at low signal-to-noise ratio asinh magnitudes are linear in flux and do not diverge at zero (or negative) flux. The zero flux levels for the $`u^{}`$, $`g^{}`$, $`r^{}`$, $`i^{}`$, $`z^{}`$ bands were set to 24.24, 24.91, 24.53, 23.89, and 22.47, respectively, for the SDSS data reported here. For example, the $`g^{}`$ magnitude for 1420+00 (24.94) indicates that a small negative flux was measured in this band.
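For reference, a minimal sketch of the asinh magnitude definition is given below (Python/NumPy); the softening parameters are inferred here from the zero-flux magnitudes quoted above, which is an assumption made for illustration rather than the official SDSS softening values.
```python
import numpy as np

LN10 = np.log(10.0)

def asinh_mag(flux_ratio, m_zero):
    """Asinh magnitude (Lupton, Gunn & Szalay 1999a).

    flux_ratio : f/f0, flux in units of the AB zero-point flux (can be <= 0)
    m_zero     : magnitude assigned to zero flux, which fixes the softening b
    """
    b = 10.0**(-0.4 * m_zero)                  # softening parameter implied by m(f=0) = m_zero
    return -(2.5 / LN10) * (np.arcsinh(flux_ratio / (2.0 * b)) + np.log(b))

def pogson_mag(flux_ratio):
    """Conventional magnitude, undefined for flux <= 0."""
    return -2.5 * np.log10(flux_ratio)

m_zero_g = 24.91                               # g' zero-flux level quoted in the text
for f in (1.0e-8, 1.0e-10, 0.0, -1.0e-10):     # high S/N, low S/N, zero and negative flux
    print(f, asinh_mag(f, m_zero_g), pogson_mag(f) if f > 0 else "undefined")
```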
Redshift determinations for three of the quasars (SDSS 1310−00, SDSS 1447−00, and SDSS 1624−00) were quite straightforward; emission lines other than the absorption-affected Ly $`\alpha `$ line could be used for this measurement. The quasar SDSS 1405−00 is a strong broad absorption line (BAL) quasar, with absorption from Ly $`\alpha `$, N V, Si IV+O IV\], and C IV. The BAL features have widths of approximately 3000 km s⁻¹ and have a redshift of $`\sim `$ 3.53; we have assigned an approximate redshift of 3.55 to the quasar. The spectrum of SDSS 1347+00, with its prominent emission line at 5900 Å and continuum drop across the line, is very suggestive of a redshift $`\sim `$ 3.8 object, but clearly a higher signal-to-noise ratio spectrum of the source is required for a definitive answer.
Some basic properties of the five quasars are given in Table 2: object name, redshift, color excess along line-of-sight (from Schlegel, Finkbeiner, and Davis (1998)), the Galactic extinction corrected $`AB`$ magnitude at 1450 Å in the rest frame of the quasar, and the absolute $`B`$ magnitude (assuming $`H_0`$ = 50 km s⁻¹ Mpc⁻¹, $`q_0`$ = 0.5, and that the continuum between 1450 Å and 4400 Å in the quasar rest frame is a power law with an index of $`-0.5`$). In this cosmology 3C 273 has $`M_B=-27.0`$.
The remaining object, SDSS 1430+00, is clearly a very late-type dwarf; based on the classification scheme of Kirkpatrick et al. (1999), we classify the object as either late M or early L, with a best estimate of L0 (also see Fan et al. 2000b ).
The results presented in this paper demonstrate that the HET/LRS can acquire, track, and obtain spectra of $`\sim `$ 20th magnitude objects. The commissioning tests demonstrate that exposure times of the order of ten minutes produce data of sufficient signal-to-noise ratio to determine quasar redshifts and classify substellar objects at this brightness level. The LRS will begin normal operations in Fall 1999; the observations presented here are representative of the type of survey work that the LRS plans to undertake in the future.
The Sloan Digital Sky Survey (SDSS) is a joint project of the University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, the Johns Hopkins University, the Max-Planck-Institute for Astronomy, Princeton University, the United States Naval Observatory, and the University of Washington. Apache Point Observatory, site of the SDSS, is operated by the Astrophysical Research Consortium. Funding for the project has been provided by the Alfred P. Sloan Foundation, the SDSS member institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, and the Ministry of Education of Japan. The SDSS Web site is http://www.sdss.org/. The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximillians-Universität München, and Georg-August-Universität Göttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. The Marcario LRS was constructed by the University of Texas at Austin, Stanford University, Ludwig Maximillians-Universität München, the Instituto de Astronomia de la Universidad Nacional Autonomia de Mexico, Georg-August-Universitaet Goettingen, and Pennsylvania State University. The LRS is named for Mike Marcario of High Lonesome Optics who fabricated several optics for the instrument but died before its completion. This work was supported in part by National Science Foundation grants AST95-09919 and AST99-00703 (DPS), and AST96-18503 (MAS). MAS and XF acknowledge additional support from Research Corporation, the Princeton University Research Board, and an Advisory Council Scholarship.
# Interaction rate at 𝑧∼1
## 1 Introduction
The redshift dependence of the interaction and merger rate is an important test of the current models for the formation and evolution of galaxies. It is now well established that galaxy interactions play a major role in galaxy formation and evolution (e.g. Schweizer 1998, Combes 1999 for recent reviews). Toomre (1977) demonstrated that the universe’s higher density in the past ($`\propto (1+z)^3`$) suggests a higher past merger rate, increasing back in time as $`t^{-5/3}`$ (with time $`t`$) if the binding energies of binary galaxies had a flat distribution. If the galaxy merger rate is parametrized in the power-law form $`(1+z)^m`$, then the exponent has been found to be $`m=2.5`$ from Toomre’s (1977) approach (assuming $`\mathrm{\Omega }=1`$). Statistics of close galaxy pairs from faint-galaxy redshift surveys (e.g. Yee & Ellingson 1995, Le Fevre et al. 1999) and morphological studies of distant galaxies support a large value of the exponent $`m`$ for $`z\lesssim 1`$. For instance, Abraham (1998) concluded that current best estimates for the merger rate are consistent with $`m\sim 3`$. Preliminary studies of distant peculiar objects representing distinct results of interactions/mergers (collisional ring galaxies, polar-ring galaxies, mergers) also support $`m\sim 3`$ (Lavery et al. 1996, Reshetnikov 1997, Remijan et al. 1998, Le Fevre et al. 1999), although statistics are still insufficient. Many other surveys, including IRAS faint sources or quasars, have also revealed a steep power law (e.g. Warren et al. 1994, Springel & White 1998). However, some recent works have suggested a moderate ($`m\sim 2`$) (e.g. Neuschaefer et al. 1997, Wu & Keel 1998) or intermediate ($`m\sim 2`$–3) (Burkey et al. 1994, Im et al. 1999) density evolution of merging systems with $`z`$.
From the analytical formulation of merging histories (e.g. Carlberg 1990, 1991; Lacey & Cole 1993), it is possible to relate the merger rate of dark haloes to the parameters of the universe (average density $`\mathrm{\Omega }`$, cosmological constant $`\mathrm{\Lambda }`$). The merging rates for visible galaxies should follow, although the link is presently not well known (Carlberg 1990, Toth & Ostriker 1992). Theoretical models based on the Press-Schechter formalism (Carlberg 1990, 1991) predict a redshift evolution of the merger rate with $`m\propto \mathrm{\Omega }^{0.42}(1-\mathrm{\Lambda })^{0.11}`$ (the exponents must be somewhat changed if the average halo mass decreases with $`z`$ – Carlberg et al. 1994). This conclusion is confirmed by numerical simulations within the CDM scenario – $`m=4.2`$ ($`\mathrm{\Omega }=1`$) and $`m=2.5`$ ($`\mathrm{\Omega }=0.3`$) for $`z\lesssim 1`$ (Governato et al. 1997).
Tidal tails originate in close encounters of disk galaxies (e.g. Toomre & Toomre 1972). The purpose of this note is to show that the statistics of galaxies with extended tidal tails (tidal bridges have, on average, fainter surface brightnesses – Schombert et al. 1990) are a useful tool for studying the evolution of the interaction rate out to $`z\sim 1`$. The theoretical basis for our work is provided by the simulations of Hibbard & Vacca (1997), who showed that extended tidal features remain readily visible in the long exposures typical of the Hubble Deep Fields out to $`z\sim 1`$. We found that current statistics of such objects in the North and South Hubble Deep Fields (HDF-N and HDF-S, respectively) lead to $`m\sim 4`$. (Preliminary results based on the HDF-N only are presented in Reshetnikov 1999 – Paper I.)
## 2 Sample of galaxies
We used the deepest currently available deep fields (HDF-N – Williams et al. 1996 and HDF-S – Williams et al. 1999) to search for galaxies with extended tidal tails. From detailed examination of the fields in the F814W filter (hereinafter referred to as $`I`$), we selected more than 70 tailed objects. Careful analysis of their images in combination with the redshift data enabled us to distinguish 25 galaxies with $`z`$ = 0.5–1.5 (12 objects in the HDF-N and 13 in the HDF-S). Galaxies with tidal tails in the HDF-N are described in detail in Paper I. Here we present the data for the HDF-S objects (Fig.1). (Our statistics of galaxies with tidal structures are in general agreement with the van den Bergh et al. (1996) data on the morphology of galaxies in the HDF-N: van den Bergh et al. classified 20 galaxies with $`21<I<25`$ as objects with probable tidal distortions in the HDF-N.)
General characteristics of the galaxies are summarized in Table 1. The columns of the table are: galaxy identification, $`I`$ band apparent magnitude, photometric redshift (there are no published spectroscopic redshifts for the sample galaxies). All the data are taken from the web site of the HDF-S group at SUNY, Stony Brook (Chen et al. 1998). In the fourth column we present the absolute magnitude in the rest-frame $`B`$ band calculated according to Lilly et al. (1995) as:
$`M_B`$ = $`I`$ – 5 log($`D_L`$/10 pc) + 2.5 log(1+$`z`$) + ($`B-I_z`$) + 0.17,
where
$$D_L=\frac{c}{H_0q_0^2}\left[q_0z+(q_0-1)\left(\sqrt{2q_0z+1}-1\right)\right]$$
(1)
is the luminosity distance ($`\mathrm{\Lambda }=0`$), $`H_0`$ is the Hubble constant (75 km/s/Mpc), $`q_0`$ is the deceleration parameter ($`q_0`$=0.05), $`B-I_z`$ is the $`k`$-correction color (we used the correction for an Sbc galaxy), and the term 0.17 translates AB magnitudes into standard $`B`$.
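A direct transcription of these relations into code may be useful; the sketch below (Python/NumPy) evaluates eq. (1) and the $`M_B`$ expression for an assumed $`(B-I_z)`$ color term, which in the paper is taken from Lilly et al. (1995) for an Sbc spectral energy distribution but is treated here simply as an input number.
```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def lum_distance_mpc(z, h0=75.0, q0=0.05):
    """Mattig luminosity distance for a Lambda = 0 Friedmann model [Mpc] (eq. 1)."""
    return (C_KMS / (h0 * q0**2)) * (q0 * z + (q0 - 1.0) * (np.sqrt(2.0 * q0 * z + 1.0) - 1.0))

def abs_mag_B(I_ab, z, BIz_color, h0=75.0, q0=0.05):
    """Rest-frame M_B following the text: M_B = I - 5 log(D_L/10 pc) + 2.5 log(1+z) + (B-I_z) + 0.17."""
    dl_pc = lum_distance_mpc(z, h0, q0) * 1.0e6
    return I_ab - 5.0 * np.log10(dl_pc / 10.0) + 2.5 * np.log10(1.0 + z) + BIz_color + 0.17

# hypothetical example values, not taken from Table 1
print(abs_mag_B(I_ab=22.0, z=1.0, BIz_color=1.0))
```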
In Table 2 we compare mean characteristics of the tailed galaxies in two deep fields. As one can see, both samples are consistent within quoted errors.
## 3 Characteristics of tidal tails
To be sure that our selected galaxies possess true tidal tails, we performed photometric measurements in the $`I`$ passband using circular apertures centered on the brightest regions of the suspected tails. For the measurements, we retrieved the HDF-S images (version 1) from the ST ScI web site and processed them in the ESO-MIDAS environment. The results of these measurements are summarized in Table 1 (column 5). The observed surface brightness of the tails has been converted to rest-frame $`B`$ by applying the cosmological dimming term and a $`k`$-correction color term: $`\mu (B)=\mu (I)-2.5\mathrm{log}(1+z)^3+(B-I_z)+0.17`$ (Lilly et al. 1998). General photometric characteristics of the local tidal tails are close to those for late-type spiral galaxies (Sb-Sc) (Schombert et al. 1990, Reshetnikov 1998) and we used the color term for an Sbc galaxy (Lilly et al. 1995) in our calculations. The results are presented in the last column of Table 1 and in Fig.2.
The mean rest-frame surface brightness of the tidal structures in the joint (HDF-N plus HDF-S) sample is $`<\mu _B>`$(tail) = 23.4$`\pm `$1.1, in full agreement with our results for the local sample of interacting galaxies (obtained by analogous manner) – $`<\mu _B>`$(tail) = 23.8$`\pm `$0.8 (Reshetnikov 1998).
Fig.2 (top) presents the observed distribution of the $`<\mu (I)>`$ values of the suspected HDF tails (dashed line) in comparison with the distribution for local objects in the $`B`$ passband. The bottom part of the figure shows the distribution for the HDF tails converted to rest-frame $`B`$. It is evident that the tails of distant galaxies show a distribution of $`<\mu (B)>`$ values close to that for local interacting galaxies. Fig.2 illustrates clearly the influence of observational selection on the recognition of tidal structures – we are able to detect relatively faint tails among the galaxies with $`z`$ = 0.5–1.0, but among $`z`$ = 1.0–1.5 objects we can see only very bright tails. Therefore, our sample of galaxies with extended tidal tails is significantly incomplete for $`z\gtrsim 1`$. Thus, objects with $`z`$ = 0.5–1.0 will give a more reliable estimate of $`m`$ than the total sample.
## 4 Density evolution
The resemblance of morphological and photometric characteristics of suspected tidal tails of the HDFs galaxies with local objects allows us to use them to measure possible change with $`z`$ of volume density of galaxies with tails (and, therefore, the rate of close encounters leading to the formation of extended tails).
The co-moving volume element in solid angle $`d\mathrm{\Omega }`$ and redshift interval $`dz`$ is
$$d\mathrm{V}=\frac{c}{H_0}(1+z)^{-2}\frac{D_L^2}{E(z)}d\mathrm{\Omega }dz,$$
(2)
where $`D_L`$ is the photometric distance (eq. (1)), and $`E(z)=(1+z)\sqrt{2q_0z+1}`$ for $`\mathrm{\Lambda }=0`$ (e.g. Peebles 1993). We take the increase of the space density of galaxies with tidal tails in the standard power-law form:
$$n(z)=n_0(1+z)^m,$$
(3)
where $`n_0=n(z=0)`$ is the local volume density of such galaxies. By integrating equations (2) and (3) we can find the expected number of objects within solid angle $`d\mathrm{\Omega }`$ and in required range of $`z`$.
### 4.1 Local density of galaxies with tidal tails
We suppose that at the current epoch interactions and mergers accompanied by tail formation are almost entirely between bound pairs of galaxies (e.g. Toomre 1977). So we adopt that frequency of tidal tails among single objects (mergers) and in groups, is significantly lower than in pairs.
According to Karachentsev (1987), the relative frequency of galaxies with tails among the members of binary systems is 94/974=0.10$`\pm `$0.01. The fraction of paired galaxies in the local universe is not well determined. Various strategies give results between 5% and 15%. For instance, the local pairing fraction is 7%$`\pm `$1% according to Burkey et al. (1994), 6%-10% (Keel & van Soest 1992), 14%$`\pm `$2% (Lawrence et al. 1989). The most intensive studies lead to 12%$`\pm `$2% (Karachentsev 1987) and 10% (Xu & Sulentic 1991, Soares et al. 1995). Moreover, Xu & Sulentic (1991) found that the fraction of pairs is approximately constant (10%) over the luminosity range $`-22<M_B<-16`$ (see also Soares et al. 1995). Thus, we can adopt the value of 10%$`\pm `$5% as a reasonable estimate of the local fraction of binary galaxies. Therefore, the relative fraction of galaxies with tidal tails at $`z=0`$ is 0.1$`\times `$0.1=0.01$`\pm `$0.005.
To find the total density of galaxies in the nearby part of the universe ($`z\lesssim 0.05`$), we considered the galaxy luminosity function (LF) according to Marzke et al. (1998). The adopted Schechter function parameters of the LF are: $`M_B^{}`$=–20.05, $`\varphi ^{}=5.4\times 10^{-3}\mathrm{Mpc}^{-3}`$ and $`\alpha `$=–1.12 ($`H_0=`$75 km/s/Mpc). By integrating the LF from $`M_B=-15.4`$ to $`-21.1`$ (the range of absolute luminosities of galaxies with tails in the HDF-N and HDF-S), we found that the total volume density of galaxies is equal to 0.026 $`\mathrm{Mpc}^{-3}`$. Thus, $`n_0=0.01\times 0.026=(2.6\pm 1.3)\times 10^{-4}\mathrm{Mpc}^{-3}`$.
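This number is easy to reproduce; a sketch of the Schechter-function integration (Python/NumPy, using the Marzke et al. parameters and magnitude limits quoted above) is given below.
```python
import numpy as np

def schechter_density(m_faint, m_bright, m_star=-20.05, phi_star=5.4e-3, alpha=-1.12):
    """Number density [Mpc^-3] of galaxies with m_bright < M_B < m_faint
    for a Schechter luminosity function, integrated on a log-spaced grid in L/L*."""
    l_lo = 10.0**(-0.4 * (m_faint - m_star))      # L/L* at the faint limit
    l_hi = 10.0**(-0.4 * (m_bright - m_star))     # L/L* at the bright limit
    x = np.logspace(np.log10(l_lo), np.log10(l_hi), 4000)
    integrand = phi_star * x**alpha * np.exp(-x)
    return np.trapz(integrand, x)

print(schechter_density(-15.4, -21.1))   # ~0.026 Mpc^-3, as quoted in the text
```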
The total angular area within which we searched for tailed galaxies in the two HDFs is 10.4 arcmin² or $`8.8\times 10^{-7}`$ sr.
### 4.2 Exponent $`m`$ from tidal structures
Varying the exponent $`m`$, we can estimate the expected number of galaxies with tidal features in the HDFs. In Fig.3 we present the results of calculations for two redshift intervals: 0.5–1.5 (total sample) and 0.5–1.0 (the adopted cosmology is $`\mathrm{\Lambda }=0`$, $`q_0=0.05`$ or $`\mathrm{\Omega }=2q_0=0.1`$ and $`H_0`$=75 km/s/Mpc). As one can see, the total sample (25 objects) leads to $`m=2.6`$. But this value must be considered as a lower limit only, due to the strong underestimation of tidal tails at $`z\gtrsim 1`$ (sect.3). For the galaxies with $`z`$ = 0.5–1.0 ($`N`$=14) we obtain $`m=4.0`$. Assuming a Poisson error of $`N`$ ($`\sqrt{N}`$=3.7), we have $`m=4.0_{-0.5}^{+0.4}`$. Adding a 50% uncertainty in the local space density $`n_0`$, we obtain a final estimate of $`m`$ of $`4.0_{-0.9}^{+1.2}`$. (Let us note also that two potential sources of error – underestimation of the $`n_0`$ value and omission of tailed galaxies in the HDFs – bias the value of $`m`$ in opposite directions and partially compensate each other.)
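A sketch of this calculation is given below (Python/NumPy; it adopts $`\mathrm{\Lambda }=0`$, $`q_0=0.05`$, $`H_0`$=75 km/s/Mpc, the local density $`n_0`$ and solid angle from Sect. 4.1, and the volume element of eq. (2)); for $`m\simeq 4`$ it returns an expected count close to the 14 tailed galaxies observed at $`z`$ = 0.5–1.0.
```python
import numpy as np

C_KMS = 2.99792458e5
H0 = 75.0            # km/s/Mpc
Q0 = 0.05
N0 = 2.6e-4          # local density of tailed galaxies [Mpc^-3]
OMEGA_SR = 8.8e-7    # solid angle of the two HDFs [sr]

def d_lum(z):
    """Mattig luminosity distance [Mpc], Lambda = 0 (eq. 1)."""
    return (C_KMS / (H0 * Q0**2)) * (Q0 * z + (Q0 - 1.0) * (np.sqrt(2.0 * Q0 * z + 1.0) - 1.0))

def expected_number(m, z_lo=0.5, z_hi=1.0, nz=2000):
    """Expected number of tailed galaxies for n(z) = n0 (1+z)^m (eqs. 2 and 3)."""
    z = np.linspace(z_lo, z_hi, nz)
    ez = (1.0 + z) * np.sqrt(2.0 * Q0 * z + 1.0)
    dv_dz = (C_KMS / H0) * d_lum(z)**2 / ((1.0 + z)**2 * ez)   # volume per sr per dz
    return np.trapz(N0 * (1.0 + z)**m * dv_dz, z) * OMEGA_SR

for m in (2.6, 3.0, 4.0, 5.0):
    print(m, expected_number(m))
```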
The value of $`m`$ depends on the adopted cosmological model. For $`\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }=1`$ we have $`m=4.9`$ ($`z`$=0.5–1.0). In our calculations for the model with a cosmological constant and zero spatial curvature ($`\mathrm{\Omega }_m`$=0.3, $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.7, $`\mathrm{\Omega }=\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }`$=1) we used the analytical approximation of the luminosity distance $`D_L`$ according to Pen (1999). In the framework of that model we have obtained $`m=3.6_{-0.9}^{+1.2}`$.
To obtain a more realistic error estimate, we must take into account the possible luminosity evolution of galaxies with redshift. Unfortunately, the luminosity and surface brightness evolution of peculiar and interacting galaxies is poorly constrained at present (e.g. Roche et al. 1998). Moreover, Simard et al. (1999) claim that an apparent systematic increase in disk mean surface brightness to $`z\sim 1`$ for bright ($`M_B<-19`$) spiral galaxies is due to selection effects. Nevertheless, assuming that interacting galaxies undergo luminosity evolution $`\mathrm{\Delta }M_B=1^m`$ between $`z=0`$ and 1, we estimated that the value of $`m`$ must be decreased by $`\mathrm{\Delta }m\approx 0.5`$: $`m=3.5`$ for $`\mathrm{\Omega }=0.1,\mathrm{\Lambda }=0`$ and $`m=4.4`$ for $`\mathrm{\Omega }=1,\mathrm{\Lambda }=0`$.
## 5 Discussion and conclusions
On the basis of an analysis of the HDF-N and HDF-S images we selected 25 galaxies with probable tidal tails with $`z`$=0.5–1.5. The integral photometric characteristics of the suspected tails are close to those for local interacting galaxies. Considering the subsample of tailed galaxies with $`z`$=0.5-1.0 (14 objects), we estimated that the co-moving volume density of such galaxies changes approximately as $`(1+z)^4`$. (Hence the volume density of tailed galaxies at $`z=1`$ is $`n(z=1)=4\times 10^{-3}\mathrm{Mpc}^{-3}`$ for $`q_0=0.05`$.) Inclusion in the sample of the galaxies with tidal bridges does not noticeably change the value of the exponent (Paper I). Therefore, we estimated the change of the rate of close encounters leading to the formation of extended tails. If this rate reflects the merger rate, we have obtained evidence of a steeply increasing merger rate at $`z\lesssim 1`$. (Our result refers to field galaxies. The evolution in clusters might even be stronger than in the field. For instance, van Dokkum et al. 1999 found $`m=6\pm 2`$ for the merger fraction in rich clusters of galaxies.)
How does our estimate of $`m`$ compare with values obtained by other methods? The recent surveys of the evolution of galaxy pairs with $`z`$ are consistent with $`m\sim 3`$ (see references in Abraham 1998). The evolution of the interaction rate according to our data is characterized by a similar value of $`m`$ (within the quoted errors). Direct analysis of the morphology of distant galaxies at $`z\sim 1`$ suggests a significant increase of the fraction of irregular and peculiar systems with redshift (Fig.4). If interactions and mergers are responsible for the observed asymmetries of galaxies (e.g. Conselice & Bershady 1998), this increase can reflect the increase of the interaction rate with $`z`$. As one can see in Fig.4, the relative fraction of Irr/Pec galaxies changes in accordance with $`m=4`$. Naim et al.’s (1997) result (35%$`\pm `$15% of peculiar galaxies down to $`I=24.0`$) agrees with $`m=4`$ also. Many other observational surveys and numerical works indicate a large ($`m\gtrsim 3`$) exponent $`m`$ (Sect.1). Comparison with predictions of analytical and numerical works shows that current observational estimates of the merger rate favor a zero curvature ($`\mathrm{\Omega }=1`$) universe (e.g. Carlberg 1991, Governato et al. 1997).
Our results indicate that further detailed statistics of galaxies with tidal structures will be a powerful tool for quantifying the evolution of the interaction and merging rates.
###### Acknowledgements.
I would like to thank an anonymous referee for useful comments. I acknowledge support from the Russian Foundation for Basic Research (98-02-18178) and from the “Integration” programme ($`N`$ 578).
# RECONSTRUCTION ANALYSIS OF THE IRAS POINT SOURCE CATALOG REDSHIFT SURVEY
## 1 Introduction
Understanding the formation and evolution of large scale structure (hereafter LSS) in the universe is one of the foremost problems in cosmology. In the standard approach to studying LSS, one starts with a model for the primordial mass density fluctuations, uses analytical approximations or numerical simulations to predict the ensemble averaged statistical properties of galaxy clustering, and compares them with those of the observed galaxy distribution, assuming that we observe fair sample of the universe. However, we cannot expect a simulation started from random initial conditions to reproduce the specific structures observed in a galaxy redshift catalog, even if the statistical properties of the galaxy clustering are correct. Reconstruction analysis is a complementary approach to the study of LSS in which one works backward from the observed galaxy distribution to the initial mass density fluctuations in the same region of space, then evolves these model initial conditions forward in time to the present day using an N-body code. The reconstruction method incorporates assumptions about the properties of primordial fluctuations, the values of cosmological parameters, and the “bias” between the galaxy and mass distributions. These assumptions can be tested by comparing in detail the evolved reconstruction to the original galaxy redshift data. A by-product of the reconstruction analysis is a detailed model for the origin and evolution of familiar, well studied structures in the local universe, such as the Great Attractor, the Perseus-Pisces supercluster, and the Sculptor Void.
In this paper, we present the results of reconstruction analysis of the IRAS Point Source Catalog Redshift survey (hereafter PSCz, Saunders et al. 1995; Canavezes et al. 1998), using the “hybrid” reconstruction method described by Narayanan & Weinberg (1998, hereafter NW98). First, we create the galaxy density field smoothed with a Gaussian filter of radius $`R_s=4h^1\mathrm{Mpc}`$ (where $`hH_0/100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$), correct for the effects of bias and redshift-space distortions, and derive the smoothed initial mass density field by tracing the evolution of these density fluctuations backward in time. We then evolve these initial density fluctuations forward using an N-body code, assuming a value for $`\mathrm{\Omega }_m`$, and compare in detail the clustering properties of the reconstructed PSCz galaxy distribution to those of the input PSCz galaxy distribution, either assuming that galaxies trace mass or selecting galaxies from the N-body particle distribution using a biasing prescription. For our purposes, therefore, a model of structure formation consists of a value of $`\mathrm{\Omega }_m`$, a bias factor $`b`$ that gives the ratio of rms galaxy and mass fluctuations on a scale of $`8h^1\mathrm{Mpc}`$, and an explicit biasing scheme that specifies how galaxies are to be selected from the large scale mass distribution. All of our models assume that structure grew by gravitational instability from Gaussian primordial fluctuations — these are the implicit assumptions of the reconstruction method itself. We reconstruct the PSCz catalog using 15 different models and quantify the accuracy of each reconstruction using a variety of clustering statistics. Even if the model assumptions are correct, we do not expect to reproduce the observed structure exactly, because we begin with imperfect data and because the reconstruction method cannot invert gravitational evolution in the strongly non-linear regime. For each statistic, we therefore rank the accuracy of the model’s PSCz reconstruction with respect to the reconstructions of ten mock PSCz catalogs derived from the outputs of N-body simulations of the model under consideration. Finally, we use these ranks to evaluate the success of the PSCz reconstruction for each model, to constrain $`\mathrm{\Omega }_m`$, and to test models of bias between the mass and IRAS galaxy distributions in the real universe.
The hybrid reconstruction technique (NW98) that we use for our PSCz analysis combines the complementary desirable features of the Gaussianization mapping method (Weinberg 1992), which assumes Gaussian primordial fluctuations and a monotonic relation between the smoothed initial mass density field and the smoothed final galaxy density field, and the dynamical reconstruction methods of Nusser & Dekel (1992) and Gramann (1993a), which are based on the momentum and mass conservation equations, respectively, under the approximation that the comoving trajectories of mass particles are straight lines (the Zel’dovich approximation). In the hybrid method, we first recover the smoothed initial density field from the smoothed final mass density field using a modified form of the dynamical methods, then Gaussianize this recovered initial density field to robustly recover the initial fluctuations even in the non-linear regions (an approach also used by Kolatt et al. 1996). (To “Gaussianize” a field, one applies a local, monotonic mapping that enforces a Gaussian 1-point PDF.) For a reconstruction that incorporates biased galaxy formation, we precede the dynamical reconstruction step with a step that maps the smoothed galaxy density field monotonically to a smoothed mass density field with the theoretically expected (non-linear, non-Gaussian) PDF. The hybrid method is described and tested in detail in NW98, who show that it can reconstruct a galaxy redshift survey more accurately than either the Gaussianization method or the dynamical reconstruction methods alone. Comparison of the hybrid method to a variety of alternative reconstruction schemes, including the Path Interchange Zel’dovich Approximation (PIZA) method of Croft & Gaztañaga (1997), is given by Narayanan & Croft (1999).
Earlier attempts to reconstruct observational data include the reconstruction of the Perseus-Pisces redshift survey of Giovanelli & Haynes (1989) by Weinberg (1989) using the Gaussianization technique and the reconstruction of the IRAS $`1.2`$Jy redshift survey (Fisher et al. (1995)) by Kolatt et al. (1996) using the dynamical scheme of Nusser & Dekel (1992). This dynamical scheme was also used by Nusser, Dekel & Yahil (1995) to recover the PDF of the initial density field from the IRAS $`1.2`$Jy redshift survey. The primary requirements for a redshift survey to be suitable for reconstruction analysis are: (a) good sky coverage and depth, so that the gravitational influence of regions outside the survey boundaries is small, (b) dense sampling to reduce shot noise errors, and (c) a well understood selection function, to allow accurate construction of the observed galaxy density field. With respect to these criteria, PSCz is a substantial improvement on samples used in previous analyses, and it is the best all-sky redshift survey that exists today. The PSCz is a redshift survey of all galaxies in the IRAS Point Source Catalog whose flux at $`60\mu m`$ is greater than $`0.6`$Jy. It contains about $`15,500`$ galaxies distributed over $`84.1\%`$ of the sky, excluding only some regions of low Galactic latitude where the extinction in the $`V`$ band as estimated from the IRAS $`100\mu m`$ background is greater than 1.5 mag. (mainly the low Galactic latitude zone $`|b|<5^{}`$), the Magellanic clouds, some odd patches contaminated by Galactic cirrus, and two strips in ecliptic longitude not surveyed by the IRAS satellite.
The plan of this paper is as follows. In §2, we describe the hybrid reconstruction method used to reconstruct galaxy redshift surveys, outline the assumptions involved in the reconstruction analysis, and list all the steps involved in reconstructing the PSCz catalog in the order in which they are implemented. In §3, we describe the various models that we use to reconstruct the PSCz catalog, and our construction of the mock PSCz catalogs for each model from the outputs of N-body simulations. In §4, we illustrate the results of reconstruction analysis for 6 of our 15 models, using a variety of statistics. We quantify the accuracy of the PSCz reconstruction of a model using a “Figure-of-Merit” for each statistic, and rank the PSCz reconstruction with respect to all the mock catalogs for that model. We summarize the results of reconstructing the PSCz catalog for the full set of 15 models in §5, and describe the criteria for classifying a model as Accepted, or Rejected, based on its rankings. We review our results and discuss their implications in §6, drawing general conclusions about the value of $`\mathrm{\Omega }_m`$, the amplitude of mass density fluctuations, the bias between IRAS galaxies and mass, and the viability of gravitational instability with Gaussian initial conditions as an explanation for the structure observed in the PSCz redshift survey. A brief overview of our results can be obtained from Figures 3, 7, and 13, and Table 2.
## 2 Reconstruction Analysis
We reconstruct the galaxy distribution in the PSCz catalog using the “hybrid” reconstruction method of NW98. We refer the reader to NW98 for a detailed discussion of the method, including its general motivation and tests of its accuracy on N-body simulations. Here we provide a summary of the assumptions made in the reconstruction (§2.1), justification of our choice of smoothing length and sample radius for PSCz (§2.2), and a step-by-step description of the reconstruction procedure as applied to PSCz (§2.3).
### 2.1 Assumptions
Reconstruction analysis of a galaxy redshift catalog incorporates a number of assumptions, at various stages. These include assumptions about the cosmological parameters, about the nature of the primordial mass density fluctuations, about the process of structure formation, and about the physics of galaxy formation. The assumptions in our analysis are:
Structure formed by gravitational instability. The reconstruction procedure traces the evolution of density fluctuations backward in time under the assumption that the LSS formed from the gravitational instability of small amplitude fluctuations in the primordial mass density field. This assumption is also implicit when the power-restored initial density field is evolved forward in time using a gravitational N-body code.
The primordial density fluctuations form a Gaussian random field, as predicted by simple inflationary models for the origin of these fluctuations (Guth & Pi (1982); Hawking (1982); Starobinsky (1982); Bardeen, Steinhardt, & Turner 1983). This assumption is the basis of the Gaussianization step of the reconstruction procedure.
The values of $`\mathrm{\Omega }_m`$ and the bias factor $`b\equiv \sigma _{8g}/\sigma _{8m}`$, where $`\sigma _{8m}(\sigma _{8g})`$ is the rms fluctuation of the mass (IRAS galaxy) distribution in spheres of radius $`8h^{-1}\mathrm{Mpc}`$. We vary these assumptions from one reconstruction to another. We use the value of $`\mathrm{\Omega }_m`$ in correcting for redshift-space distortions and in forward evolution of the reconstructed initial conditions. We use the value of $`\sigma _{8m}=\sigma _{8g}/b`$ to normalize the forward evolution simulations. Note that throughout the paper we use $`b`$ to refer to the rms fluctuation bias on the $`8h^{-1}\mathrm{Mpc}`$ scale.
The shape of the primordial power spectrum. In contrast to the amplitude $`\sigma _{8m}`$, changes to the power spectrum shape make little difference to our results, because the shape is used only to compute corrections to or extrapolations of the initial power spectrum recovered from the observational data.
The evolved galaxy density field is a monotonic function of the evolved mass density field, once both are smoothed over scales of a few $`h^1\mathrm{Mpc}`$. This assumption, together with the value of $`\sigma _{8m}`$ and the assumption of Gaussian initial conditions, allows us to recover the smoothed mass density field from the smoothed galaxy density field in preparation for the time-reversed dynamical evolution.
An explicit biasing scheme, i.e., a prescription for relating the underlying mass distribution to the observable galaxy distribution. This scheme does not influence the recovery of initial conditions, but it is needed to select galaxies from the N-body simulation evolved from these initial conditions, and hence to compare the reconstruction to the input data. Most of our biasing schemes have a single free parameter that controls the strength of the bias. We set the value of this parameter to obtain the desired bias factor $`b`$ (assumption 3).
### 2.2 Choice of smoothing length and sample radius
Since the PSCz is a flux-limited survey, the number density of galaxies in the catalog decreases with distance from the observer. Consequently, the shot-noise in the PSCz galaxy distribution increases with distance. However, the Gaussianization procedure in the reconstruction analysis relies on the assumption that the rms amplitude of galaxy density fluctuations remains the same throughout the reconstruction volume and that the contribution to these fluctuations from shot-noise in the galaxy distribution is negligible. In order to ensure that the shot-noise remains small and does not increase with distance from the observer, we create a volume-limited PSCz sub-catalog in which the number density of galaxies remains constant throughout the reconstruction volume.
Much of the diagnostic power of the reconstruction analysis stems from the fact that non-linear gravitational evolution transfers power from large scales to small scales (Melott & Shandarin (1990); Beacom et al. (1991); Little, Weinberg, & Park 1991; Soda & Suto (1992); Bagla & Padmanabhan (1997)). This power transfer erases the information about the initial mass fluctuations on scales below the non-linear scale (Fourier wavenumbers $`kk_{\mathrm{nl}}`$). Consequently, a reconstruction method that recovers the initial fluctuations accurately up to the non-linear scale $`k=k_{\mathrm{nl}}`$ can reproduce many features of the evolved structures even on smaller scales $`(k>k_{\mathrm{nl}})`$, though not, of course, the finer details of these features. Since the rms fluctuation of the IRAS galaxy distribution in spheres of radius $`8h^1\mathrm{Mpc}`$ is about 0.7 (Saunders, Rowan-Robinson & Lawrence 1992; Fisher et al. (1994); Moore et al. (1994)), we need to recover the initial density field accurately at least up to this scale in order to take advantage of the power-transfer phenomenon. We will therefore reconstruct the PSCz catalog using a Gaussian smoothing length of $`R_s=4h^1\mathrm{Mpc}`$, which corresponds to a top-hat smoothing scale of about $`6.6h^1\mathrm{Mpc}`$.
We create the volume-limited sub-catalog by selecting all the galaxies in the PSCz located within a volume-limiting radius that are bright enough to be included in the survey even when they are located at this volume-limiting radius. We choose this volume-limiting radius, $`R_1`$, based on a compromise between two conflicting requirements. First, the reconstruction volume should be large enough that it contains many independent smoothing volumes. This criterion pushes us to choose a large value for $`R_1`$. Second, the shot-noise in the galaxy distribution of the volume-limited catalog should be small and remain constant with distance from the observer. This condition requires a uniformly high number density of galaxies, pushing us to adopt a smaller volume-limiting radius. Since we reconstruct the PSCz catalog using a Gaussian smoothing length of $`R_s=4h^{-1}\mathrm{Mpc}`$, we fix $`R_1`$ so that the mean inter-galaxy separation at $`R_1`$ is $`\overline{d}\equiv n_g^{-1/3}=\sqrt{2}R_s=5.6h^{-1}\mathrm{Mpc}`$. We adopt this criterion $`\overline{d}=\sqrt{2}R_s`$ based on the rule of thumb suggested by Weinberg, Gott, & Melott (1987), to obtain the largest possible reconstruction volume while keeping shot-noise in the galaxy distribution small enough to have little effect on the smoothed galaxy density field.
We compute the number density of galaxies as a function of the distance from the observer in the PSCz catalog using the maximum-likelihood method described by Springel & White (1998). We find that the number density of galaxies drops to $`0.005\mathrm{h}^3\mathrm{Mpc}^{-3}=(5.6h^{-1}\mathrm{Mpc})^{-3}`$ at a distance of $`R_1=50h^{-1}\mathrm{Mpc}`$ from the observer. We then select all the galaxies in the PSCz catalog that are bright enough to be included in the survey even if they are placed at a distance of $`50h^{-1}\mathrm{Mpc}`$ from the Local Group. The galaxies selected in this manner are then included in the PSCz sub-catalog, which is thus volume-limited to $`R_1=50h^{-1}\mathrm{Mpc}`$. The luminosities at $`60\mu m`$ of the galaxies in this sub-catalog satisfy the condition $`\mathrm{log}_{10}\left(\frac{L_{60}}{L_{\odot }}\right)>9.37`$ (for $`h=1`$).
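The luminosity cut follows directly from the 0.6 Jy flux limit placed at the volume-limiting distance. A sketch of the arithmetic is given below (Python/NumPy); it assumes the common IRAS convention $`L_{60}=4\pi d^2\nu f_\nu `$ evaluated at $`60\mu m`$, which is an interpretation on our part rather than a definition stated in this section.
```python
import numpy as np

JY = 1.0e-26                 # W m^-2 Hz^-1
MPC = 3.0857e22              # m
L_SUN = 3.83e26              # W
C = 2.99792458e8             # m/s

def log_l60(flux_jy, dist_h1_mpc, h=1.0):
    """log10(L60/L_sun) for a source of given 60 micron flux at distance dist [h^-1 Mpc],
    with L60 defined as nu*L_nu at 60 microns."""
    d = dist_h1_mpc / h * MPC
    nu = C / 60.0e-6                                  # frequency at 60 microns [Hz]
    l60 = 4.0 * np.pi * d**2 * nu * flux_jy * JY      # W
    return np.log10(l60 / L_SUN)

# flux limit of the PSCz (0.6 Jy) placed at the volume-limiting radius of 50 h^-1 Mpc
print(log_l60(0.6, 50.0))    # ~9.4, matching the log10(L60/L_sun) > 9.37 cut for h = 1
```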
### 2.3 Step-by-step description
The steps involved in reconstructing a $`50h^1\mathrm{Mpc}`$, volume-limited subset of the PSCz catalog for a given set of model assumptions are as follows:
Step 1: Create an all-sky galaxy distribution by “cloning” the galaxy distribution to fill in the regions excluded in the PSCz survey. The PSCz catalog does not include galaxies in regions of low Galactic latitude where there is substantial obscuration by dust. However, this region could be dynamically important, since the Perseus-Pisces supercluster and the Hydra-Centaurus supercluster are both located close to the Galactic plane and could even extend across it. Hence, we fill in the region with Galactic latitude $`|b_{\mathrm{cut}}|<8^{}`$, using the cloning technique introduced by Lynden-Bell, Lahav & Burstein (1989; see also Yahil et al. 1991). We divide this region into 36 angular bins of $`10^{}`$ in longitude and divide the redshift range in each angular bin into bins of $`1000\mathrm{km}\mathrm{s}^1`$. In each longitude-redshift bin, we assign $`N(l,z)`$ artificial galaxies, where $`N(l,z)`$ is equal to a random Poisson deviate whose expectation value is equal to the average density of the corresponding longitude-redshift bins in the adjacent strips $`|b_{\mathrm{cut}}|<|b|<2|b_{\mathrm{cut}}|`$, times the volume of the bin. If there is a real PSCz galaxy in any of these longitude-redshift bins, we include it in place of an artificial galaxy. We fill the masked regions at high Galactic latitudes with a random distribution of artificial galaxies having the observed mean density. The flux distribution of the artificial galaxies is identical to those of the real galaxies in the PSCz catalog. We tested (using mock PSCz catalogs) alternate methods of handling the mask region, including assigning galaxies at random locations within the mask region at the mean galaxy density, as well as ignoring all the galaxies in the mask region. We found that the cloning technique always leads to the most accurate reconstruction of the galaxy distribution. However, the mask region does not influence reconstruction analysis as much as it influences say, the analysis of the cosmological galaxy dipole (Rowan-Robinson et al. in preparation).
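A schematic version of this cloning step is sketched below (Python/NumPy). It keeps only the bare logic — Poisson-sampling each masked longitude–redshift bin from the density of the adjacent unmasked strips — and the input catalogue, bin sizes, volume ratio, and random seed are placeholders rather than the actual PSCz values.
```python
import numpy as np

rng = np.random.default_rng(42)

def clone_mask_region(lon, vel, volume_ratio=1.0, n_lon_bins=36, dv=1000.0, v_max=20000.0):
    """Fill a masked latitude strip with artificial galaxies.

    lon, vel     : longitudes [deg] and recession velocities [km/s] of the galaxies
                   in the adjacent unmasked strips used as the density template
    volume_ratio : volume of a mask bin divided by the volume of the template bin
    Returns (lon, vel) arrays for the artificial galaxies placed in the mask.
    """
    lon_edges = np.linspace(0.0, 360.0, n_lon_bins + 1)
    vel_edges = np.arange(0.0, v_max + dv, dv)
    counts, _, _ = np.histogram2d(lon, vel, bins=[lon_edges, vel_edges])
    new_lon, new_vel = [], []
    for i in range(n_lon_bins):
        for j in range(vel_edges.size - 1):
            # Poisson deviate whose mean is the adjacent-strip count scaled to the mask volume
            n_new = rng.poisson(counts[i, j] * volume_ratio)
            new_lon.append(rng.uniform(lon_edges[i], lon_edges[i + 1], n_new))
            new_vel.append(rng.uniform(vel_edges[j], vel_edges[j + 1], n_new))
    return np.concatenate(new_lon), np.concatenate(new_vel)

# toy template catalogue standing in for the strips |b_cut| < |b| < 2|b_cut|
template_lon = rng.uniform(0.0, 360.0, 3000)
template_vel = rng.uniform(0.0, 20000.0, 3000)
clone_lon, clone_vel = clone_mask_region(template_lon, template_vel)
print(clone_lon.size)
```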
Step 2: Select a volume-limited galaxy distribution from the flux-limited PSCz catalog so that the shot-noise in the volume-limited catalog is small and remains constant throughout the reconstruction volume. Based on the selection function of the PSCz survey, we choose the volume-limiting radius $`R_1=50h^1\mathrm{Mpc}`$, where the mean inter-galaxy separation is $`\overline{d}=\sqrt{2}R_s=5.6h^1\mathrm{Mpc}`$.
Step 3: Compute the smoothed galaxy density field in redshift space. We create the PSCz galaxy density field in redshift space by cloud-in-cell (CIC) binning (Hockney & Eastwood 1981) the volume-limited galaxy distribution onto a $`100^3`$ cubical grid that represents $`200h^1\mathrm{Mpc}`$ on a side. The Local Group observer is at the center of this cube, and the three sides of the cube are oriented along the axes of the Supergalactic coordinate system. Since the dynamical component of the hybrid reconstruction method traces the evolution of the gravitational potential backward in time, it is necessary to model the gravitational field in the regions beyond the boundaries in order to reconstruct the density field accurately near the edges of the volume-limited catalog. We therefore supplement the volume-limited density field with the density field in an annular region $`20h^1\mathrm{Mpc}`$ thick beyond the volume-limiting radius of $`50h^1\mathrm{Mpc}`$. We form the density field in this annular region by weighting each galaxy by the inverse of the selection function of the flux-limited PSCz survey at the location of the galaxy. The full galaxy density field is therefore constructed from a volume-limited catalog in the region $`0<R<R_1=50h^1\mathrm{Mpc}`$ and from a flux-limited catalog in the region $`R_1<R<R_2=70h^1\mathrm{Mpc}`$. We fill the regions beyond $`R_2`$ with uniform density equal to the mean density of the galaxy distribution in the volume-limited catalog, $`n=0.005\mathrm{h}^3\mathrm{Mpc}^3`$. We smooth the galaxy density field using a Gaussian filter of radius $`R_s=4h^1\mathrm{Mpc}`$. We account for boundary effects in computing the smoothed density field $`\rho _{sm}(𝐫)`$ by using the ratio smoothing method of Melott & Dominik (1993),
$$\rho _{sm}(𝐫)=\frac{\int M(𝐫^{\prime })\rho (𝐫^{\prime })W(𝐫-𝐫^{\prime })d^3𝐫^{\prime }}{\int M(𝐫^{\prime })W(𝐫-𝐫^{\prime })d^3𝐫^{\prime }},$$
(1)
where $`W(𝐫)`$ is the smoothing filter, and the mask array $`M(𝐫)`$ is set to 1 for pixels inside the survey region and to 0 for pixels outside the survey region. The rms amplitude of the galaxy density field smoothed with a Gaussian filter of radius $`R_s=4h^1\mathrm{Mpc}`$ is $`\sigma _{4G}=0.85`$.
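In practice, the ratio smoothing of equation (1) simply smooths the masked density field and the mask with the same filter and divides the two; a sketch (Python with scipy.ndimage, on a small toy grid rather than the actual $`100^3`$ PSCz grid) is given below.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ratio_smooth(rho, mask, r_s, cell_size):
    """Gaussian ratio smoothing (eq. 1): smooth rho*mask and mask, then divide.

    rho       : density field on a cubic grid
    mask      : 1 inside the survey volume, 0 outside
    r_s       : Gaussian smoothing radius, same units as cell_size
    cell_size : grid cell size
    """
    sigma = r_s / cell_size                  # smoothing length in grid cells
    num = gaussian_filter(rho * mask, sigma, mode="wrap")
    den = gaussian_filter(mask.astype(float), sigma, mode="wrap")
    out = np.zeros_like(num)
    good = den > 1.0e-3                      # avoid dividing by ~zero far outside the mask
    out[good] = num[good] / den[good]
    return out

# toy example: 64^3 grid of 2 h^-1 Mpc cells, spherical survey mask, R_s = 4 h^-1 Mpc
n = 64
rho = np.random.default_rng(1).lognormal(size=(n, n, n))
x = np.indices((n, n, n)) - n / 2
mask = (np.sqrt((x**2).sum(axis=0)) * 2.0 < 50.0).astype(float)
print(ratio_smooth(rho, mask, r_s=4.0, cell_size=2.0).shape)
```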
Step 4: Monotonically map the galaxy density field onto a theoretically determined PDF of the underlying mass distribution. In an unbiased model, the galaxy density field is identical to the mass density field, so we skip this step entirely. In a biased model, we derive the mass density field using the PDF mapping procedure described in NW98. We first assume a value for the bias factor $`b`$ and estimate the rms linear mass fluctuation using the equation
$$\sigma _{8m}=\frac{\sigma _{8g}}{b},$$
(2)
where $`\sigma _{8g}`$ and $`\sigma _{8m}`$ are the rms fluctuations in $`8h^1`$Mpc spheres in the non-linear galaxy density field and the linear mass density field, respectively. We use an N-body code to evolve forward in time an ensemble of initial mass density fields, all drawn from the same assumed power spectrum, and all normalized to this value of $`\sigma _{8m}`$. We then derive an ensemble-averaged PDF of the smoothed final mass density fields from the evolved mass distributions of these simulations. While reconstructing the PSCz catalog using a model that corresponds to this value of $`\sigma _{8m}`$, we derive a smoothed final mass density field by monotonically mapping the smoothed PSCz galaxy density field to this average PDF. This step implicitly derives and corrects for the only monotonic local biasing relation that is simultaneously consistent with the observed galaxy PDF, the assumed shape of the power spectrum and value of $`b`$, and the assumption of Gaussian initial conditions.
Step 5: Correct for the effects of redshift-space distortions. The peculiar velocities of galaxies distort the mapping of galaxy positions from real space to redshift space, making the line-of-sight a preferred direction in an otherwise isotropic universe. Since we need the real space mass density field to recover the initial mass density fluctuations, we need to correct for these redshift-space distortions. On small scales, the velocity dispersion of a cluster stretches it along the line-of-sight into a “Finger of God” feature that points directly toward the observer (e.g., de Lapparent, Geller & Huchra (1986)), thereby reducing the amplitude of small-scale clustering. To correct for this effect, we first identify the clusters in redshift space using a friends-of-friends algorithm that uses a transverse linking length of $`0.6h^1\mathrm{Mpc}`$ and a radial linking length of $`500\mathrm{km}\mathrm{s}^1`$ (Huchra & Geller (1982); Nolthenius & White (1987); Moore, Frenk, & White 1993; Gramann, Cen, & Gott 1994). For each cluster, we then shift the radial locations of its member galaxies so that the resulting compressed cluster has a radial velocity dispersion of $`100\mathrm{km}\mathrm{s}^1`$, roughly the value expected from Hubble flow across its radial extent. On large scales, the distortions arise from coherent inflows into overdense regions and outflows from underdense regions (Sargent & Turner (1977); Kaiser (1987)). To remove these distortions, we apply the following iterative procedure, which is a modified version of the method suggested by Yahil et al. (1991) and Gramann et al. (1994). This method is described in detail in NW98, and we give only a brief outline here. After deriving the mass density field in step (4), we predict the velocity field using the second order perturbation theory relation (Gramann 1993b ),
$$𝐯(𝐫)=f(\mathrm{\Omega }_m)H\left[𝐠(𝐫)+\frac{4}{7}\nabla C_g(𝐫)\right],$$
(3)
where $`𝐠(𝐫)`$ is the gravitational acceleration field computed from the equation $`\nabla \cdot 𝐠(𝐫)=-\delta _m(𝐫)`$ and $`C_g`$ is the solution of the Poisson-type equation
$$\nabla ^2C_g=\sum _{i=1}^{3}\sum _{j=i+1}^{3}\left[\frac{\partial ^2\varphi _g}{\partial x_i^2}\frac{\partial ^2\varphi _g}{\partial x_j^2}-\left(\frac{\partial ^2\varphi _g}{\partial x_i\partial x_j}\right)^2\right].$$
(4)
Equation (3) requires that we assume a value of $`\mathrm{\Omega }_m`$ to compute the factor $`f(\mathrm{\Omega }_m)\approx \mathrm{\Omega }_m^{0.6}`$ (Peebles (1980)). Finally, we correct the positions of galaxies so that their new positions are consistent with their Hubble flow and the peculiar velocities at their new locations. We repeat these three steps until the corrections to the galaxy positions become negligible, which usually occurs within 3 iterations.
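Schematically, each iteration requires two FFT-based Poisson solves on a periodic grid, one for $`\varphi _g`$ and one for $`C_g`$. A minimal sketch is given below; the function names, sign conventions, and unit conventions are simplifying assumptions rather than the actual implementation.

```python
import numpy as np

def second_order_velocity(delta_m, boxsize, f_omega, hubble=100.0):
    """Velocity field of equation (3) on a periodic grid (schematic).

    delta_m : 3-D mass overdensity field; boxsize in Mpc/h, so hubble=100
              gives velocities in km/s.  Signs and units are simplified.
    """
    n = delta_m.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kvec = (kx, ky, kz)
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                  # avoid dividing by zero for the mean mode

    def poisson(source):
        """Return the Fourier transform of phi, where laplacian(phi) = source."""
        phi_k = -np.fft.fftn(source) / k2
        phi_k[0, 0, 0] = 0.0
        return phi_k

    phi_k = poisson(delta_m)           # laplacian(phi_g) = delta_m
    g = [np.real(np.fft.ifftn(-1j * ki * phi_k)) for ki in kvec]  # g = -grad(phi_g)

    # Second derivatives of phi_g feed the source term of equation (4)
    d2 = {}
    for a, ka in enumerate(kvec):
        for b, kb in enumerate(kvec):
            if b >= a:
                d2[a, b] = np.real(np.fft.ifftn(-ka * kb * phi_k))
    source = sum(d2[i, i] * d2[j, j] - d2[i, j]**2
                 for i in range(3) for j in range(i + 1, 3))
    cg_k = poisson(source)
    grad_cg = [np.real(np.fft.ifftn(1j * ki * cg_k)) for ki in kvec]

    # Equation (3): v = f(Omega_m) H [ g + (4/7) grad(C_g) ]
    return [f_omega * hubble * (g[i] + 4.0 / 7.0 * grad_cg[i]) for i in range(3)]
```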
Step 6: Apply the dynamical reconstruction scheme to evolve the fluctuations backward in time. We compute the gravitational potential from this smoothed mass density field using the Poisson equation, then evolve this gravitational potential backward in time using our modified version of the Gramann (1993) method. We then use the Poisson equation to derive the initial mass density fluctuations, i.e., the fluctuations that grow according to the predictions of linear theory.
Step 7: Gaussianize this dynamically reconstructed initial mass density field. This step enforces a Gaussian PDF for the initial mass density fluctuations and yields robust reconstructions even in the non-linear regions.
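This Gaussianization is itself a rank-order mapping, now onto a Gaussian of the same variance as the recovered field. A minimal sketch, assuming the field is a numpy array:

```python
import numpy as np
from scipy.special import erfinv

def gaussianize(field):
    """Rank-order map a density field onto a Gaussian PDF of matching rms."""
    flat = field.ravel()
    order = np.argsort(flat)
    quantiles = (np.arange(flat.size) + 0.5) / flat.size
    gauss = np.sqrt(2.0) * erfinv(2.0 * quantiles - 1.0)   # Gaussian deviates
    gauss *= flat.std() / gauss.std()                      # match the input rms
    out = np.empty(flat.size, dtype=float)
    out[order] = gauss
    return out.reshape(field.shape)
```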
Step 8: Restore power to the recovered initial density field. Non-linear gravitational evolution tends to suppress the small-scale power in the reconstructed density field, beyond the suppression due to the Gaussian smoothing alone. We correct for this effect using the “power restoration” procedure described in Weinberg (1992). Using an ensemble of numerical simulations, we compute a set of correction factors $`C(k)`$ defined by
$$C(k)=\left[\frac{P_r(k)}{P_i(k)}\right]^{1/2},$$
(5)
where $`P_i(k)`$ is the power spectrum of a simulation’s smoothed initial conditions, and $`P_r(k)`$ is the power spectrum of the density field recovered by the hybrid reconstruction method. We multiply each Fourier mode of the reconstructed density field by $`C(k)`$ and also multiply by $`\mathrm{exp}(k^2R_s^2/2)`$ in order to remove the effect of the original Gaussian smoothing. Above some wavenumber $`k_{\mathrm{corr}}\approx \pi /R_{\mathrm{nl}}`$, where $`R_{\mathrm{nl}}`$ is the scale on which the rms fluctuations are unity, non-linear evolution erases the information about the phases in the initial density field (Little et al. (1991); Ryden & Gramann (1991)) to the point that the hybrid method cannot recover it. For $`k_{\mathrm{corr}}<k\le k_{\mathrm{Nyq}}`$, therefore, we add random phase Fourier modes with an assumed shape for the power spectrum, where $`k_{\mathrm{Nyq}}`$ is the Nyquist frequency of the grid on which we define the density fields. We normalize this power spectrum by fitting it to the power spectrum of the recovered initial density field in the range of wavenumbers $`k_1\le k\le k_2`$, where $`k_1`$ and $`k_2`$ are wavenumbers in the linear regime, with $`k_1<k_2\le k_{\mathrm{corr}}`$. In the range of wavenumbers $`k_2<k\le k_{\mathrm{corr}}`$, multiplication by the large factor $`\mathrm{exp}(k^2R_s^2/2)`$ can distort the shape of the power spectrum, although this range of wavenumbers is only in the mildly non-linear regime. In this range of Fourier modes, therefore, we preserve the phases of the recovered initial density field but fix the amplitude of the modes to be that determined by the fitting procedure. In all of our simulations, $`k_{\mathrm{Nyq}}=50k_f`$, and we choose $`k_{\mathrm{corr}}=15k_f`$, $`k_1=4k_f`$ and $`k_2=8k_f`$, where $`k_f=2\pi /L_{\mathrm{box}}=0.0314h\mathrm{Mpc}^{-1}`$ is the fundamental frequency of the simulation box. We assume that the shape of the power spectrum is governed by the parameter $`\mathrm{\Gamma }`$, which is equal to $`\mathrm{\Omega }_mh`$ in cold dark matter models with small baryon fraction and scale-invariant inflationary fluctuations (Efstathiou, Bond, & White, 1992). We do not add the small scale power using the technique of constrained realizations (Bertschinger (1987); Hoffman & Ribak (1991); van de Weygaert & Bertschinger (1996)) because we find it does not lead to a more accurate reconstruction even for a dense sampling of the constraints (see NW98 for a more detailed discussion).
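The sketch below illustrates the spirit of this step for a periodic grid: multiply by $`C(k)`$, undo the Gaussian smoothing, and overwrite the modes above $`k_{\mathrm{corr}}`$ with random-phase modes of the assumed power spectrum shape. The normalization fit over $`k_1\le k\le k_2`$ is omitted and the amplitude convention for the added modes is schematic, so this is an illustration rather than the actual procedure.

```python
import numpy as np

def restore_power(delta_rec, boxsize, R_s, C_of_k, P_target, k_corr, seed=42):
    """Power restoration (schematic).

    delta_rec : reconstructed, Gaussian-smoothed initial field on a 3-D grid.
    C_of_k    : callable returning the correction factor of equation (5)
                for an array of wavenumbers.
    P_target  : callable returning the assumed power spectrum shape; the
                overall normalization of the added modes is schematic here.
    """
    n = delta_rec.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)

    d_k = np.fft.fftn(delta_rec)
    low_k = k <= k_corr
    # Multiply by C(k) and undo the exp(-k^2 R_s^2 / 2) Gaussian smoothing
    d_k[low_k] *= C_of_k(k[low_k]) * np.exp(0.5 * k[low_k]**2 * R_s**2)

    # Replace everything above k_corr with random-phase Gaussian modes whose
    # power follows P_target; white noise in configuration space keeps the
    # Hermitian symmetry of a real field.
    rng = np.random.default_rng(seed)
    noise_k = np.fft.fftn(rng.standard_normal(delta_rec.shape))
    noise_k *= np.sqrt(P_target(np.where(k > 0.0, k, 1.0)))
    d_k[~low_k] = noise_k[~low_k]

    d_k[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(d_k))
```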
Step 9: Evolve the power-restored density field forward in time using an N-body code. We evolve the reconstructed initial mass distribution using a particle-mesh (PM) code, assuming the values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. This code is described and tested in Park (1990). We use $`100^3`$ particles and a $`200^3`$ force mesh in the PM simulations. We start the gravitational evolution from a redshift $`z=23`$ and follow it to $`z=0`$ in 46 equal incremental steps of the expansion scale factor $`a(t)`$. We fix the amplitude of the linear mass density fluctuations to be $`\sigma _{8m}=\sigma _{8g}/b`$, where $`b`$ is the bias factor. In the case of truly unbiased models (as opposed to biased models with $`b=1.0`$), we instead fix the amplitude of the linear mass fluctuations by requiring that the non-linear rms amplitude of fluctuations in redshift space smoothed with a Gaussian filter of radius $`4h^1\mathrm{Mpc}`$ ($`\sigma _{4G,g}`$) from the simulation match the observed value.
Step 10: Compare the evolved distribution with the original galaxy distribution, either assuming that galaxies trace mass or using a local biasing model to select galaxies from the mass distribution. In biased reconstructions, we choose the free parameter controlling the strength of the bias by requiring that the rms fluctuation $`\sigma _{4G}`$ of the reconstructed, redshift-space galaxy density field, smoothed with a Gaussian filter of radius $`4h^1\mathrm{Mpc}`$, match that of the original galaxy density field.
Figure 1 illustrates the intermediate steps in a hybrid reconstruction analysis of the PSCz catalog. Panel (a) shows the redshift-space positions of all galaxies in the volume-limited PSCz catalog, in a slice $`30h^1\mathrm{Mpc}`$ thick centered on the Supergalactic plane (SGP). Panel (b) shows a slice through the SGP of the galaxy density field smoothed with a $`4h^1\mathrm{Mpc}`$ Gaussian filter. The smoothed initial density field recovered by the hybrid reconstruction method is shown in panel (c). The mass distribution obtained by evolving the power-restored initial mass density field using an N-body code is shown in panel (d). The reconstructed, smoothed, redshift-space galaxy density field, and the reconstructed, redshift-space galaxy distribution obtained by selecting galaxies from the evolved mass distribution using a power-law biasing model (see §3.1 below) are shown in panels (e) and (f), respectively. The reconstruction illustrated in Figure 1 assumes $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$, and $`b=0.64`$.
## 3 PSCz Reconstruction: Models and Mock Catalogs
### 3.1 Model Assumptions
We use 15 different models to reconstruct the PSCz catalog and to create mock catalogs for calibrating reconstruction errors. Each model consists of a set of assumptions regarding the values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, the shape of the linear mass power spectrum $`P(k)`$ as characterized by the parameter $`\mathrm{\Gamma }`$ described in step (8) of §2.3, the bias factor $`b`$ defined as $`b=b_8=\sigma _{8g}/\sigma _{8m}`$, and the functional form of the biasing relation between IRAS galaxies and the underlying dark matter distribution. As discussed in §2.3, these assumptions influence the reconstruction analysis in different ways. The value of $`\mathrm{\Omega }_m`$ is required when we correct the input data for redshift-space distortions and when we evolve the power-restored initial conditions forward in time. The shape ($`\mathrm{\Gamma }`$) and amplitude ($`\sigma _{8m}=\sigma _{8g}/b`$) of the mass power spectrum are used to calculate the correction factors $`C(k)`$, and the shape is used to extrapolate the recovered initial power spectrum above the wavenumber $`k>k_{\mathrm{corr}}`$. The value of the bias factor $`b`$ is required when we map the biased galaxy density field to the numerically determined PDF of the underlying mass density field with rms fluctuation amplitude $`\sigma _{8m}=\sigma _{8g}/b`$. It is also required in the forward evolution step when we evolve the power-restored initial mass density field to match the rms fluctuation amplitude $`\sigma _{8m}`$. We need to assume an explicit biasing scheme to select galaxies from the evolved mass distribution. Most of our biasing schemes have one free parameter that we fix so that the rms fluctuation of the resulting galaxy distribution matches that of the input galaxy distribution, and a random sampling factor that we use to match the number density of galaxies in our volume-limited PSCz sample.
Our models span a wide range of cosmological and galaxy formation parameters, varying with respect to the following properties:
$`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$: Our assumptions for the geometry of the background universe include Einstein de-Sitter models $`(\mathrm{\Omega }_m=1.0,\mathrm{\Omega }_\mathrm{\Lambda }=0)`$, open models $`(\mathrm{\Omega }_m<1.0,\mathrm{\Omega }_\mathrm{\Lambda }=0)`$, and flat models with a non-zero cosmological constant $`(\mathrm{\Omega }_m<1.0,\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1.0)`$.
Normalization and shape of the power spectrum: We normalize the amplitude of the primordial mass density fluctuations (characterized by $`\sigma _{8m}`$) either to be consistent with the level of anisotropies in the cosmic microwave background measured by the COBE satellite (the COBE normalization, Smoot et al. (1992)) or to produce the observed abundance of clusters at the present epoch (the cluster normalization, White, Efstathiou & Frenk 1993; Eke, Cole & Frenk 1996; Viana & Liddle (1996)). For all cluster-normalized models, we choose the values of $`\mathrm{\Omega }_m`$ and $`\sigma _{8m}`$ so that $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}=0.55`$ (White et al. (1993)), for both open and flat universes. We refer the reader to Cole et al. (1997) and Cole et al. (1998, hereafter CHWF98) for more details of the COBE and cluster normalization procedures. We define the shape parameter of the transfer function in the linear mass power spectrum via the parameter $`\mathrm{\Gamma }`$ of Efstathiou et al. (1992); for CDM models with low baryon content, $`\mathrm{\Gamma }\approx \mathrm{\Omega }_mh`$. Our power spectra include scale-invariant $`(n=1)`$ models with $`\mathrm{\Gamma }`$ values consistent with the clustering properties measured from several galaxy catalogs, viz., $`\mathrm{\Gamma }=0.15`$–$`0.3`$ (Maddox et al. (1990); Efstathiou et al. (1992); Vogeley et al. (1992); Peacock & Dodds (1994); Gaztañaga, Croft, & Dalton 1995; Maddox, Efstathiou, & Sutherland 1996; Gaztañaga & Baugh (1998); Tadros, Efstathiou, & Dalton 1998), and some models with larger $`\mathrm{\Gamma }`$ values. We also consider power spectra that are normalized to both the COBE and cluster constraints, by introducing a tilt in the spectral index of the power spectrum. Finally, two of our models are not normalized to COBE or clusters, although the rms fluctuation of the reconstructed galaxy distributions matches that of the IRAS galaxies.
Bias factor: We consider models in which IRAS galaxies trace mass (unbiased, $`b=1.0`$), models in which IRAS galaxies are more strongly clustered than the mass (biased, $`b>1.0`$), and models in which galaxies are more weakly clustered than the mass (antibiased, $`b<1.0`$). We even consider one biasing model in which the galaxies do not trace mass but $`b=1.0`$, i.e., the rms amplitude of fluctuations in the galaxy and mass distributions are identical at the scale of $`8h^1\mathrm{Mpc}`$, but the galaxy density has a non-linear dependence on the mass density.
Biasing scheme: Our biasing relations cover a wide range of plausible functional forms, with the only constraint being that they remain monotonic. These include functions derived empirically from observations of different types of galaxies, functions predicted from semi-analytic models of galaxy formation, functions that fit the results of numerical studies of galaxy formation, and functions constructed ad-hoc. All of our biasing models are deterministic, and are “local” in the sense that the efficiency of galaxy formation is determined by the properties of the local environment, i.e., by the properties within approximately one correlation length of the location of the galaxy. We compute all the local properties of the mass distribution in a sphere of radius $`4h^1\mathrm{Mpc}`$ centered on the galaxy.
The specific biasing schemes that we use to select the IRAS galaxies from the evolved mass distributions of our reconstructions are as follows.
Power-law bias: In this simple biasing model, the IRAS galaxy density ($`\rho _g`$) is a steadily increasing power-law function of the local mass density, $`(\rho _g/\overline{\rho }_g)\propto (\rho _m/\overline{\rho }_m)^B`$. The probability for an N-body particle with local mass density $`\rho _m`$ to be selected as an IRAS galaxy is therefore
$$P=A(\rho _m/\overline{\rho }_m)^{B-1}.$$
(6)
We choose the values of $`A`$ and $`B`$ to reproduce the required number density and the rms fluctuation of the resulting galaxy distribution, respectively. This biasing relation is similar to the one suggested by Cen & Ostriker (1993) based on hydrodynamic simulations incorporating physical models for galaxy formation (Cen & Ostriker (1992)), but it differs in that there is no quadratic term that saturates the biasing relation at high mass densities.
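A minimal sketch of this selection is given below; in practice the exponent $`B`$ would be iterated until the selected galaxies have the required $`\sigma _{8g}`$, and the function only fixes $`A`$ from the target number density.

```python
import numpy as np

def select_powerlaw_bias(rho_local, n_target, B, volume, seed=1):
    """Select IRAS galaxies from N-body particles with the probability of
    equation (6), P = A (rho/rho_bar)^(B-1).  A is set so that the expected
    number of selected particles is n_target * volume."""
    rng = np.random.default_rng(seed)
    x = rho_local / rho_local.mean()
    weight = x**(B - 1.0)
    A = n_target * volume / weight.sum()
    prob = np.clip(A * weight, 0.0, 1.0)
    return rng.random(rho_local.size) < prob     # boolean mask of galaxies
```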
Threshold bias: In this biasing scheme, galaxy formation is entirely suppressed below some threshold value of mass density, and IRAS galaxies form with equal efficiency per unit mass in all regions above the threshold. This biasing scheme was adopted in some of the early numerical investigations of CDM models (e.g., Melott & Fry (1986)), and it has been used extensively in theoretical modeling of voids and superclusters (e.g., Einasto et al. (1994)). In the density-threshold bias model, the probability that a particle with local mass density $`\rho _m`$ is selected as an IRAS galaxy is
$$P=\{\begin{array}{cc}A\hfill & \text{if }\rho _m\ge B,\hfill \\ 0\hfill & \text{if }\rho _m<B\text{.}\hfill \end{array}$$
(7)
We choose the threshold density $`B`$ to match the required bias factor $`b`$, and the probability $`A`$ to reproduce the desired galaxy number density. We note that, since this model preferentially populates regions of higher mass density, it can only lead to a bias factor greater than unity, and hence cannot be used when an antibias ($`b<1.0`$) is required.
Morphology-density bias: It has been known for a long time that early-type galaxies are preferentially found in dense environments, while late-type galaxies dominate in less massive groups and in the field (Hubble (1936); Zwicky (1937); Abell (1958)). There have been numerous efforts to quantify this connection between morphology and environment (e.g., Dressler (1980); Postman & Geller (1984); Lahav & Saslaw (1992); Whitmore, Gilmore, & Jones (1993)), using a variety of clustering statistics including angular correlation functions, redshift-space correlation functions, and de-projected real-space correlation functions (Davis & Geller (1976); Giovanelli, Haynes & Chincarini (1986); Loveday et al. (1995); Hermit et al. (1996); Guzzo et al. (1997); Willmer et al. (1998)). We model the “bias” arising from this morphological segregation using the morphology-density relation proposed by Postman & Geller (1984). Since the IRAS-selected galaxy catalogs preferentially include dusty, late-type spirals (Soifer et al. (1984); Meiksin & Davis (1986); Lawrence et al. (1986); Babul & Postman (1990)), we select all the spiral galaxies as IRAS galaxies. The morphology-density relation of Postman & Geller (1984) assigns morphological types to galaxies based on the densities at the locations of all the galaxies. Since we evolve the power-restored density field forward in time using a low resolution PM code and with a finite number of mass particles, the final densities computed over spheres centered on the galaxies and large enough to contain significant numbers of neighbors will be different from the densities at the exact locations of the galaxies. Therefore, we recast the morphology-density relation of Postman & Geller (1984) in terms of the density computed in a sphere of radius $`2h^1\mathrm{Mpc}`$ centered on the galaxy. We assign the galaxy a spiral (Sp), S0, or elliptical (E) morphological type, with relative probabilities $`F_{\mathrm{Sp}}`$, $`F_{\mathrm{S0}}`$, and $`F_\mathrm{E}`$ that depend on the density computed within a sphere of radius $`2h^1\mathrm{Mpc}`$. For $`\rho <\rho _F=10\overline{\rho }`$, the morphological fractions are $`F_{\mathrm{Sp}}=0.7`$, $`F_{\mathrm{S0}}=0.2`$, and $`F_\mathrm{E}=0.1`$. For $`\rho _F<\rho <\rho _C=6\times 10^3\overline{\rho }`$, the fractions are
$`F_{\mathrm{Sp}}`$ $`=`$ $`0.7-0.2\alpha `$
$`F_\mathrm{E}`$ $`=`$ $`0.1+0.1\alpha `$ (8)
$`F_{\mathrm{S0}}`$ $`=`$ $`1-F_{\mathrm{Sp}}-F_\mathrm{E}`$
$`\alpha `$ $`=`$ $`\mathrm{log}_{10}(\rho /\rho _F)/\mathrm{log}_{10}(\rho _C/\rho _F).`$
For $`\rho >\rho _C`$, the morphological fractions saturate at $`F_{\mathrm{Sp}}=0.5`$, $`F_{\mathrm{S0}}=0.3`$, and $`F_\mathrm{E}=0.2`$. The ratio of rms fluctuations in $`8h^1\mathrm{Mpc}`$ spheres of the elliptical and spiral galaxy distributions selected in this manner is 1.3, consistent with the ratio of $`1.2`$–$`1.7`$ observed between optical and IRAS galaxy distributions (Lahav, Nemiroff, & Piran 1990; Strauss et al. (1992); Saunders et al. (1992); Peacock & Dodds (1994); Willmer, da Costa, & Pellegrini 1998; Baker et al. (1998)). This biasing scheme has no free parameters, so the resulting bias factor is known a priori.
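The morphological fractions of equation (8) translate directly into code; in this scheme a particle is retained as an IRAS galaxy if it draws a spiral type. A sketch (function and variable names are illustrative):

```python
import numpy as np

def morphological_fractions(rho, rho_bar, rho_F=10.0, rho_C=6.0e3):
    """Sp/S0/E fractions of equation (8); rho is the mass density in a
    2 Mpc/h sphere around the galaxy, rho_F and rho_C in units of rho_bar."""
    x = rho / rho_bar
    if x < rho_F:
        return {"Sp": 0.7, "S0": 0.2, "E": 0.1}
    if x > rho_C:
        return {"Sp": 0.5, "S0": 0.3, "E": 0.2}
    alpha = np.log10(x / rho_F) / np.log10(rho_C / rho_F)
    f_sp = 0.7 - 0.2 * alpha
    f_e = 0.1 + 0.1 * alpha
    return {"Sp": f_sp, "S0": 1.0 - f_sp - f_e, "E": f_e}

# A particle is kept as an "IRAS" galaxy if it draws a spiral type, e.g.
# rng.random() < morphological_fractions(rho, rho_bar)["Sp"]
```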
Square-root Exponential bias: We construct a biasing scheme in which the IRAS galaxy density field is related to the mass density field by
$$y=A\sqrt{x}\mathrm{exp}(\alpha x),$$
(9)
where $`x=\rho _m/\overline{\rho }_m`$ and $`y=\rho _g/\overline{\rho }_g`$ are the mass and the IRAS galaxy overdensities, respectively. We choose the values of $`\alpha `$ and $`A`$ to reproduce the required galaxy rms fluctuation and the mean number density, respectively. This biasing relation is a monotonically increasing function for all $`\alpha >0`$. We include this ad-hoc biasing scheme to test the ability of reconstruction analysis to distinguish between different biasing relations with the same bias factor. We use this biasing scheme in a model in which galaxies do not trace the mass, although $`b=1.0`$. We note that neither the Power-law bias nor the Threshold bias can lead to $`b=1.0`$, for any non-trivial values of the free parameters governing the strength of the bias.
Semi-analytic bias: Except for the connection between power-law bias and the simulations of Cen & Ostriker (1993), the biasing models described so far are not based on theoretical models of the galaxy formation process. Rather, they include a variety of reasonable functional forms that could plausibly represent the results of some more complete theory of biased galaxy formation. We now consider a biasing scheme that is motivated by a physical theory of galaxy formation, namely the semi-analytic galaxy formation model of Benson et al. (1999; see also Cole et al. 1994, 1999). We parameterize this biasing scheme as follows. We consider the luminosities and morphologies of all the galaxies selected by Benson et al. (1999) from the mass distribution of the $`\mathrm{\Lambda }`$CDM2 simulation of Jenkins et al. (1998, the VIRGO consortium). We select as IRAS galaxies all the galaxies whose ratio of bulge to total mass is less than $`0.4`$. The rms fluctuation of the IRAS galaxy distribution selected in this manner ($`\sigma _{8g}`$) is about $`10\%`$ smaller than that of the underlying mass distribution.
The solid points in Figure 2 show the mean relation between this “IRAS” galaxy density field and the underlying mass density field, after both fields are smoothed with a top-hat filter of radius $`R_{\mathrm{th}}=3h^1\mathrm{Mpc}`$. The thick solid line shows an empirical fit to this mean relation using a smoothly varying double power-law of the form
$$y=Ax^\alpha \left[C+x^{(\alpha -\beta )/\gamma }\right]^{-\gamma },$$
(10)
where $`\alpha =2.9`$, $`\beta =0.825`$, $`\gamma =0.4`$, $`C=0.08`$, $`A=1.1`$, and $`x=(1+\delta _m)`$, $`y=(1+\delta _g)`$ are the mass and the IRAS galaxy overdensities, respectively, smoothed with a top-hat filter of radius $`R_{\mathrm{th}}=3h^1\mathrm{Mpc}`$. During the reconstruction analysis, we use this semi-analytic biasing relation to select the IRAS galaxies from the evolved mass distribution. We find that the scatter around this mean relation is dominated by shot noise, and we therefore ignore it in our parameterization of the semi-analytic bias model. This bias model does not have any free parameters, so it results in a known bias factor. Note that this relation was derived for a $`\mathrm{\Lambda }`$CDM model with $`\sigma _{8m}=0.9`$, although we apply it here to an open model with slightly lower $`\sigma _{8m}`$ so that it matches the $`\sigma _{8g}`$ of IRAS galaxies.
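For reference, equation (10) with the quoted parameters can be evaluated as below; the function returns the mean $`\delta _g`$ for a given $`\delta _m`$ and ignores the scatter, as described above.

```python
def semianalytic_bias(delta_m, A=1.1, alpha=2.9, beta=0.825, gamma=0.4, C=0.08):
    """Mean IRAS galaxy overdensity for a given mass overdensity, equation (10);
    both fields are top-hat smoothed on 3 Mpc/h.  Accepts scalars or arrays."""
    x = 1.0 + delta_m
    y = A * x**alpha * (C + x**((alpha - beta) / gamma)) ** (-gamma)
    return y - 1.0   # delta_g
```

The relation has slope $`\alpha `$ in underdense regions and flattens toward slope $`\beta `$ at high densities, which is what produces the mild antibias quoted above.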
### 3.2 Models
The 15 different models that we analyze in this paper sample the range of interesting values of the parameters $`\mathrm{\Omega }_m`$ and $`\sigma _{8m}`$ (or, equivalently, the bias factor $`b`$). Thus, we analyze models with $`\mathrm{\Omega }_m=0.2,0.3,0.4,0.5,`$ and $`1.0`$, while the values of $`\sigma _{8m}`$ range from $`0.4`$ to $`1.44`$. The parameters of all these models are listed in Table 1. In our model nomenclature, the first two symbols denote the geometry of the universe and the value of the cosmological mass density parameter. Thus, E1 represents an Einstein de-Sitter model with $`\mathrm{\Omega }_m=1.0,\mathrm{\Omega }_\mathrm{\Lambda }=0`$; Ox represents an open model with $`\mathrm{\Omega }_m=0.\mathrm{x},\mathrm{\Omega }_\mathrm{\Lambda }=0`$; and Lx represents a flat model with $`\mathrm{\Omega }_m=0.\mathrm{x}`$ and a cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=1\mathrm{\Omega }_m`$. The capital letters immediately following specify the nature of the local biasing relation between the IRAS galaxy distribution and the underlying mass distribution. The last series of numbers after the letter $`b`$ corresponds to the bias factor of the model. For example, the model O4SQEb1.0 specifies an open model with $`\mathrm{\Omega }_m=0.4`$ in which the biasing relation is a square-root exponential function and the bias factor is 1.0.
We now briefly describe the features of all the 15 models. In §4, we will illustrate our analysis methods and results using 6 representative models, before giving summary results for the full suite of 15 models in §5. The 6 illustrative models are:
E1UNb1.0 — An Einstein de-Sitter universe, in which IRAS galaxies trace the mass distribution (unbiased), and hence $`b=1.0`$. The shape of the power spectrum is consistent with the observed clustering of galaxies, $`\mathrm{\Gamma }=0.25`$ (Peacock & Dodds (1994)). The mass normalization agrees with the measured value of $`\sigma _{8g}\approx 0.7`$ for this biasing model, but it is above the cluster normalization constraint $`\sigma _{8m}=0.55`$ for $`\mathrm{\Omega }_m=1`$.
E1PLb1.8 — An Einstein de-Sitter universe, in which there is a power-law biasing relation between the IRAS galaxy and mass distributions. We choose the value of $`B`$ so that $`b=1.8`$ and the value of $`A`$ so that $`n_g=0.005h^3`$Mpc<sup>-3</sup>. The mass fluctuation amplitude is below the level $`\sigma _{8m}=0.55`$ implied by cluster normalization or by COBE normalization for its adopted $`\mathrm{\Omega }_m`$ and shape of the power spectrum. It requires a large value of the bias factor ($`b=1.8`$) to match the rms fluctuation of the IRAS galaxies. This model has $`\mathrm{\Gamma }=0.25`$ and a tilted power spectrum with $`n=0.803`$; it is similar to the E2(tilted) model of Cole et al. (1998), except that the rms fluctuation amplitude is $`\sigma _{8m}=0.40`$ instead of 0.55.
O4MDb0.7 — An open universe with $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, in which the galaxy population as a whole traces the mass distribution. We select the IRAS galaxies using the morphology-density biasing relation. This model is cluster-normalized, and it requires the IRAS galaxies to be antibiased with respect to the mass, i.e., $`b<1.0`$.
O4SAb0.9 — An open universe with $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, in which the galaxies are selected from the mass distribution using the semi-analytic biasing model. Although this model is COBE-normalized by construction, it can simultaneously reproduce the observed mass function of clusters (Cole et al. (1997)).
O4SQEb1.0 — An open universe with $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, in which the IRAS galaxy density field is related to the mass density field by the square-root exponential biasing function. We choose the value of $`\alpha =0.041`$ so that $`b=1.0`$ and the value of $`A`$ so that $`n_g=0.005h^3`$Mpc<sup>-3</sup>. In this model, the IRAS galaxies do not trace the mass distribution even though the bias factor $`b=1.0`$. The parameters of this model are similar to those of O4SAb0.9, except that the biasing relation is very different.
L3PLb0.62 — A flat universe with $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, in which the bias between galaxies and mass is described by the power-law bias model. This model is COBE-normalized, and it requires the IRAS galaxies to be antibiased with respect to the mass distribution. It can also reproduce the observed abundance of clusters at the present epoch (Cole et al. (1997)).
We also reconstructed the PSCz catalog using another set of 9 models, which, together with the 6 models described above, extend our exploration of the $`\mathrm{\Omega }_m`$, $`\sigma _{8m}`$ parameter space. These 9 models are all either cluster-normalized, or COBE-normalized, or both. They are:
E1PLb1.3 — An Einstein de-Sitter universe, in which the probability for a mass particle with local mass density $`\rho _m`$ to become a galaxy is given by the power-law biasing relation. We choose the value of $`B`$ so that $`b=1.3`$ and the value of $`A`$ so that $`n_g=0.005h^3`$Mpc<sup>-3</sup>. This model is both cluster-normalized and COBE-normalized, and it has a tilted power spectrum with $`n=0.803`$ and $`\mathrm{\Gamma }=0.451`$.
E1PLMDb1.3 — An Einstein de-Sitter universe, in which the galaxy population as a whole is biased using a power-law function. We choose the value of $`B`$ so that $`\sigma _{8g}\approx 1.0`$. We then use the morphology-density relation to select all the spiral galaxies as IRAS galaxies, so that $`\sigma _{8g}\approx 0.7`$. The resulting bias factor of IRAS galaxies is $`b\approx 1.3`$. This model has the same mass power spectrum as E1PLb1.3.
E1THb1.3 — An Einstein de-Sitter universe, in which we select the galaxies from the mass distribution using the threshold biasing relation. We choose the threshold density $`B`$ so that the bias factor $`b=1.3`$, and the probability $`A`$ so that the mean galaxy density is $`n_g=0.005h^3\mathrm{Mpc}^3`$. This model has the same mass power spectrum as E1PLb1.3.
O2PLb0.5 — An open universe with $`\mathrm{\Omega }_m=0.2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, in which the IRAS galaxies are selected from the mass distribution using the power-law biasing relation. This model is cluster-normalized, and it has the largest amplitude of mass fluctuations ($`\sigma _{8m}=1.44`$) among all of our 15 models.
O3PLb1.4 — An open universe with $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, in which the IRAS galaxies are selected from the mass distribution using the power-law biasing relation. This is a COBE-normalized model.
O4UNb1.0 — An open universe with $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, in which the IRAS galaxies trace the mass distribution. Although this model is COBE-normalized by construction, it can simultaneously reproduce the observed mass function of galaxy clusters (Cole et al. (1997)).
L2PLb0.77 — A flat universe with $`\mathrm{\Omega }_m=0.2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$, in which there is a power-law biasing relation between the IRAS galaxies and the mass. This model is COBE-normalized, and it requires the IRAS galaxies to be antibiased with respect to the mass.
L4PLb0.64 — A flat universe with $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$, in which there is a power-law biasing relation between the IRAS galaxies and the mass. This model is COBE-normalized, and it requires the IRAS galaxies to be antibiased with respect to the mass. This model can simultaneously reproduce the observed mass function of clusters (Cole et al. (1997)).
L5PLb0.54 — A flat universe with $`\mathrm{\Omega }_m=0.5`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.5`$, in which there is a power-law biasing relation between the IRAS galaxies and the mass. This model is also COBE-normalized, and it requires a strong antibias between the IRAS galaxies and the mass.
### 3.3 Mock Catalogs
If the reconstruction method were perfect, and structure in the universe really did form from gravitational instability of Gaussian initial conditions, then we would expect to reproduce exactly the galaxy distribution in the PSCz catalog, if we assumed the correct value of $`\mathrm{\Omega }_m`$ and the correct biasing relation between IRAS galaxies and mass. However, the reconstruction method suffers from inaccuracies arising at various intermediate steps — inaccuracies in the bias mapping procedure, inaccuracies in correcting for the redshift-space distortions, inaccuracies in the dynamical recovery of the initial mass density fluctuations, and inaccuracies in the forward evolution step caused by poor modeling of the large scale tidal field and (on small scales) the finite numerical resolution. All these errors accumulate at various levels, with the result that we cannot expect a reconstruction to produce an exact match to the input data even if all of its assumptions are correct. It is therefore necessary to calibrate the magnitude of the errors intrinsic to the reconstruction method before we can derive any conclusions regarding the validity of the various assumptions entering the reconstruction procedure.
We assess these errors by reconstructing a set of mock PSCz catalogs for each of the 15 different models. For every model, we construct the mock PSCz catalogs from the outputs of numerical simulations that have the appropriate values of $`\mathrm{\Omega }_m`$ and bias. The geometry, the sky-coverage, the depth, and the selection function of the mock catalogs all mimic those of the original PSCz catalog.
We construct the mock catalogs for 14 of the 15 models using the outputs of the N-body simulations of cold dark matter models performed by CHWF98. The CHWF98 simulations use a modified version of the AP3M code of Couchman (1991) to follow the gravitational evolution of $`192^3`$ particles in a periodic cubical box of side $`345.6h^1\mathrm{Mpc}`$, using a gravitational softening length of $`ϵ=90h^1\mathrm{kpc}`$ (for a Plummer force law), fixed in comoving coordinates. Further details of the simulations are in CHWF98. For the model E1PLb1.8, we created the mass distribution by evolving an initial density field with parameters similar to the E2(tilted) model of CHWF98, except that $`\sigma _{8m}=0.4`$ instead of 0.55. We evolved $`192^3`$ particles on a $`384^3`$ force mesh using the PM code of Park (1990). Our goal here was to investigate an $`\mathrm{\Omega }_m=1.0`$ model with lower mass fluctuation amplitude than those considered by CHWF98, which is why we needed to run a new simulation. For the other 14 models, the $`90h^1\mathrm{kpc}`$ force resolution of the mock catalog simulation is much higher than the $`1h^1\mathrm{Mpc}`$ force resolution of the PM simulation used in the forward evolution step of the reconstruction procedure. Our calibration of systematic errors therefore includes the error caused by limited force resolution of the PM simulations.
For every model (except E1PLb1.8), the last column in Table 1 lists the CHWF98 simulation from which we derive the mock catalogs. If the model involves bias, we start by selecting the galaxies from the mass distribution using the appropriate biasing algorithm. We then select “observers” from the galaxy distributions so that they satisfy the following observed properties of the Local Group:
The velocity of the Local Group observer should be in the range $`550\mathrm{km}\mathrm{s}^1<V_{LG}<700\mathrm{km}\mathrm{s}^1`$, consistent with the amplitude of the dipole anisotropy in the cosmic microwave background (Smoot et al. (1991)).
The overdensity of galaxies in a spherical region of radius $`5h^1\mathrm{Mpc}`$ centered on the Local Group observer should be in the range $`1.0<1+\delta _g(5h^1\mathrm{Mpc})<2.0`$ (Brown & Peebles (1987); Hudson (1993); Schlegel et al. (1994)).
The radial velocity dispersion in a sphere of radius $`5h^1\mathrm{Mpc}`$ around the Local Group observer should be less than $`150\mathrm{km}\mathrm{s}^1`$, consistent with the observations of a cold velocity field near the Local Group (Sandage & Tammann (1975); Sandage (1986); Giraud (1986); Schlegel et al. (1994)). We note that for all but one of the galaxy distributions (corresponding to the E1UNb1.0 model), our Local Groups have local velocity dispersion smaller than $`100\mathrm{km}\mathrm{s}^1`$.
The Local Group particles for any pair of mock catalogs constructed from a simulation should be separated by at least $`50h^1\mathrm{Mpc}`$. This criterion ensures that the density fields in the mock PSCz catalogs centered on these observers are quite different from each other, at least within the volume-limiting radius $`R_1=50h^1\mathrm{Mpc}`$.
We assign each particle in the galaxy distribution a redshift based on its real space distance and its radial peculiar velocity with respect to the Local Group observer particle. We assign luminosities to these galaxies consistent with the luminosity function of the IRAS galaxies in the PSCz catalog. We “observe” this galaxy distribution using the selection function of the PSCz survey. We reject all the galaxies in the angular regions not covered by the PSCz catalog, so that the sky coverage in the mock catalogs is identical to that of the true PSCz catalog. We create 10 mock PSCz catalogs for each of the 15 models and reconstruct them in exactly the same manner as the PSCz catalog.
## 4 PSCz Reconstruction: Illustrative Results
We now describe the results of reconstructing the PSCz catalog and ten mock catalogs for the first 6 of the 15 models described in §3.2. Figure 3 shows a slice through the galaxy density fields of the true and the reconstructed PSCz catalogs. The density fields have been convolved with a Gaussian filter $`e^{r^2/2R_s^2}`$, with smoothing radius $`R_s=4h^1`$Mpc. The slices show the contours of the density field in the SGP. The galaxy density field traced by the galaxies in the PSCz catalog is shown in panel (a). Some of the prominent features include the Perseus-Pisces supercluster seen as the overdensity near the boundaries in the bottom right region, the Great Attractor region in the diagonally opposite direction near the top left corner, and the Local void in the bottom left region. We refer the reader to Branchini et al. (1999) for a detailed cosmographical description of the PSCz catalog. Panels (b) through (f) show the galaxy density fields reconstructed in the models E1UNb1.0, E1PLb1.8, O4MDb0.7, O4SAb0.9, and L3PLb0.62, respectively. All the models can, at least qualitatively, reproduce the general features of the observed PSCz galaxy distribution. This success offers support to the hypothesis that structure formed from the gravitational instability of Gaussian primordial mass density fluctuations. We will see below that, although the various reconstructions resemble the observed PSCz galaxy density field in this visual representation, there are quantifiable differences between the accuracy of the reconstructions corresponding to different models. Thus, some models (like, for example, the models O4MDb0.7, O4SAb0.9 and L3PLb0.62) can reconstruct the PSCz catalog as well as can be expected based on the mock catalog reconstructions, while others (including the models E1UNb1.0 and E1PLb1.8) fail in a systematic manner.
Figure 4 shows the redshift-space locations of galaxies in the volume-limited PSCz catalog and its reconstructions. We plot the SGX and SGY coordinates of all the galaxies that lie in a slice $`30h^1`$Mpc thick centered on the SGP. The different panels show the true PSCz galaxy distribution and the various reconstructions, in the same format as Figure 3.
One of the most obvious quantitative measurements of the success of a reconstruction is the correlation coefficient $`r`$ between the original and the reconstructed smoothed galaxy density fields,
$$r\equiv \frac{\langle \delta _r\delta _t\rangle }{\langle \delta _r^2\rangle ^{1/2}\langle \delta _t^2\rangle ^{1/2}},$$
(11)
where $`\delta _t`$ and $`\delta _r`$ are respectively the original and reconstructed smoothed galaxy density fields. Panels (a) through (f) of Figure 5 show the correlation coefficients for the models E1UNb1.0, E1PLb1.8, O4MDb0.7, O4SAb0.9, O4SQEb1.0, and L3PLb0.62, respectively. For every model, we assign ranks to the reconstructions of each of the 10 mock catalogs and to the reconstruction of the true PSCz catalog, in descending order of their values of $`r`$: the catalog whose reconstruction has the highest $`r`$ value is assigned a rank of 0, the catalog whose reconstruction has the lowest $`r`$ value is assigned a rank of 10, and so on in between. The solid line in each panel shows the values of $`r`$ for the 10 mock catalog reconstructions of the model, in rank order. The horizontal dashed line shows the value of $`r`$ for the PSCz reconstruction based on the model assumptions. We find that the absolute values of $`r`$ tend to decrease for models with larger values of $`\sigma _{8m}`$ (smaller values of $`b`$) because the greater degree of non-linear gravitational evolution makes the recovery of initial conditions less accurate. The absolute value of $`r`$ is therefore of little use for comparing the viability of different reconstruction models. We focus instead on the value of $`r`$ relative to the values expected given the model assumptions, and since the PSCz reconstructions for all six of these models have a rank of seven or less (and the models therefore reproduce PSCz better than they reproduce at least three of their own mock catalogs), we conclude that all of them are acceptable by this particular measure.
The correlation coefficient quantifies the success of the reconstruction in matching the observed galaxy density field at a particular scale, $`R_s=4h^1\mathrm{Mpc}`$. In order to probe a range of scales, Figure 6 shows the distribution of the Fourier difference statistic
$$D(k)\equiv \frac{\sum |\stackrel{~}{\delta }_r(𝐤)-\stackrel{~}{\delta }_t(𝐤)|^2}{\sum \left(|\stackrel{~}{\delta }_r(𝐤)|^2+|\stackrel{~}{\delta }_t(𝐤)|^2\right)},$$
(12)
where the subscripts $`t`$ and $`r`$ refer to the true and reconstructed density fields respectively, and $`\stackrel{~}{\delta }(𝐤)`$ represents the complex Fourier component of the density field. The summation is over all the waves with $`|𝐤|`$ in the interval $`(k-1,k]`$. This statistic measures the difference in both the moduli and the phases of the Fourier components of the true and reconstructed density fields, and it is independent of any smoothing of the density fields. It was first used by Little et al. (1991) to demonstrate the effects of power-transfer from large scales to small scales during non-linear gravitational evolution. When the complex amplitudes of the Fourier components of the true and reconstructed density fields are identical, $`D(k)=0`$, while for two fields with uncorrelated phases, the average value of $`D(k)=1`$. Figure 6 shows the value of $`D(k)`$ averaged over the range of wavenumbers $`k_{\mathrm{surv}}<k<k_8`$, where $`k_{\mathrm{surv}}=2\pi /(2\times R_1)=0.0628h\mathrm{Mpc}^{-1}`$ is the wavenumber corresponding to the size of the reconstruction volume, and $`k_8=2\pi /(2\times 8)=0.3927h\mathrm{Mpc}^{-1}`$ is the wavenumber corresponding to the length scale of $`8h^1\mathrm{Mpc}`$, approximately equal to the scale of non-linearity in the PSCz galaxy distribution. The different panels correspond to the six models in the same format as Figure 5. The solid line in each panel shows the distribution of $`D(k)`$ in the ten mock catalog reconstructions of that model. The dashed line shows the value of $`D(k)`$ in the PSCz reconstruction. We rank the mock catalog reconstructions and the PSCz reconstruction, in ascending order of their values of $`D(k)`$. We find that the PSCz reconstruction has a high rank (of 9) in the model E1UNb1.0, and has smaller ranks in all the other models. Hence, the model E1UNb1.0 fails (though only at the $`80\%`$ confidence level) to reproduce the PSCz density field as measured by this statistic, while the other five models yield acceptable reconstructions.
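A sketch of the shell-averaged $`D(k)`$ for two density fields on a periodic grid is given below; the binning into integer multiples of $`k_f`$ is a simplification of the actual shell definition.

```python
import numpy as np

def fourier_difference(delta_t, delta_r, boxsize, k_min, k_max):
    """Mean of D(k) (equation 12) over shells with k_min < k <= k_max."""
    n = delta_t.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)

    dt_k = np.fft.fftn(delta_t)
    dr_k = np.fft.fftn(delta_r)
    kf = 2.0 * np.pi / boxsize
    shell = np.rint(k / kf).astype(int)          # shell index in units of k_f

    d_values = []
    for s in range(int(np.floor(k_min / kf)) + 1, int(np.floor(k_max / kf)) + 1):
        m = shell == s
        if not m.any():
            continue
        num = np.sum(np.abs(dr_k[m] - dt_k[m])**2)
        den = np.sum(np.abs(dr_k[m])**2 + np.abs(dt_k[m])**2)
        d_values.append(num / den)
    return float(np.mean(d_values))
```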
Figure 7 shows the PDF of the true and reconstructed galaxy density fields for the six models. We compute the PDF of a density field after smoothing it with a Gaussian filter of radius $`R_s=4h^1\mathrm{Mpc}`$. In every panel, the crosses and the thin solid line show the PDFs of the true and reconstructed galaxy density fields for a typical mock catalog — the one with rank $`5`$ according to the Figure-of-Merit (FOM) for this statistic, defined as the maximum value of the absolute difference between the cumulative distributions $`C_t(\nu )`$ and $`C_r(\nu )`$ of the true and reconstructed galaxy density fields,
$$\mathrm{FOM}_{\mathrm{PDF}}=\mathrm{max}|C_t(\nu )-C_r(\nu )|.$$
(13)
This is the FOM that would be used in a Kolmogorov-Smirnov comparison of the PDFs, and we find that it gives similar results in terms of ranks to a FOM based on absolute differences of the differential PDFs. The open circles and the thick solid line show the true and reconstructed PDFs for the PSCz reconstruction, offset vertically by 0.2 for the sake of clarity.
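For instance, the cumulative distributions and their maximum difference can be evaluated as follows (the binning choice is illustrative):

```python
import numpy as np

def fom_pdf(delta_t, delta_r, nbins=200):
    """Maximum absolute difference of the cumulative PDFs, equation (13)."""
    lo = min(delta_t.min(), delta_r.min())
    hi = max(delta_t.max(), delta_r.max())
    bins = np.linspace(lo, hi, nbins + 1)
    c_t = np.cumsum(np.histogram(delta_t, bins=bins)[0]) / delta_t.size
    c_r = np.cumsum(np.histogram(delta_r, bins=bins)[0]) / delta_r.size
    return float(np.max(np.abs(c_t - c_r)))
```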
For every model, we rank the mock catalog reconstructions and the PSCz reconstruction, in increasing order of the FOM of the statistic. If the reconstruction of PSCz based on the model assumptions is worse than expected from the mock catalog tests, then the PSCz reconstruction will have a high rank. A low PSCz rank, conversely, implies a reconstruction that is successful given the expectations from the mock catalog tests. Visual comparison between the PDF recoveries for PSCz and for the rank-$`5`$ mock catalogs in Figure 7 suggests that the PSCz reconstructions for the models E1UNb1.0, E1PLb1.8, O4MDb0.7, and O4SQEb1.0 should have high ranks, while the PSCz reconstructions for the models O4SAb0.9 and L3PLb0.62 should have low ranks. This is indeed the case, as can be verified by the PSCz ranks listed in each panel.
We show the ranks for all other statistics in Figures 8 through 12, in the same format as in Figure 7 for the PDF statistic. If the PSCz catalog has a rank of 5 for any of the statistics, we show the results for the mock catalog ranked 6 according to that statistic. We will use the ranks for all the statistics as the basis for a more systematic evaluation of models in §5.
Figure 8 shows the distribution of galaxy counts in spheres of radius $`8h^1\mathrm{Mpc}`$, in the true and reconstructed galaxy distributions for the six models. We compute this distribution by placing $`50,000`$ spherical cells at random locations within the reconstruction volume and counting the number of galaxies within each cell. If a cell lies close to the boundary of the survey region, we include it in the distribution only if at least $`90\%`$ of its volume lies within the survey region. The crosses and the thin solid line show the distributions of counts in the true and reconstructed galaxy distributions of the mock catalog with rank 5. The open circles and the thick solid line show the same quantities for the PSCz catalog, and are vertically offset by 0.05 for clarity. We define the FOM for this statistic as
$$\mathrm{FOM}_{\mathrm{COUNTS}}=\sum _{N=1}^{\infty }|P_t(N)-P_r(N)|,$$
(14)
where $`P_t(N)`$ and $`P_r(N)`$ are the distributions of the counts in cells in the true and reconstructed galaxy distributions. From visual inspection of Figure 8, we would expect the models E1UNb1.0, E1PLb1.8, and O4SQEb1.0 to have high ranks and the models O4MDb0.7, O4SAb0.9, and L3PLb0.62 to have low ranks, as is indeed verified by the ranks of the PSCz reconstruction listed in different panels. We also computed this distribution using spherical cells of radius $`3h^1\mathrm{Mpc}`$, but do not show the corresponding figure. Although the distribution of galaxy counts is a measure similar to the PDF statistic shown in Figure 7, here we are using different smoothing filters (top-hat instead of Gaussian) and smoothing lengths ($`3h^1\mathrm{Mpc}`$ and $`8h^1\mathrm{Mpc}`$ instead of $`4h^1\mathrm{Mpc}`$).
Figure 9 shows the void probability function (VPF) in the true and reconstructed galaxy distributions for the six models. Like the PDF and the count distributions, this statistic is sensitive to higher-order correlations in the density field (White (1979); Balian & Schaeffer (1989); Sheth (1996)), and it can distinguish between biased and unbiased galaxy formation models (Little & Weinberg (1994)). The probability $`P_0(R)`$ that a randomly placed sphere of radius $`R`$ is devoid of galaxies is a subset of the more general count distribution statistic $`P_N(R)`$, but here we examine $`P_0`$ at a range of $`R`$ instead of $`P_N`$ at fixed $`R=8h^1\mathrm{Mpc}`$, as in Figure 8. When computing the VPF, we require that at least $`90\%`$ of the spherical cell’s volume lie within the survey region. We define the FOM for this statistic as
$$\mathrm{FOM}_{\mathrm{VPF}}=\sum _{R=0}^{\infty }|P_{0t}(R)-P_{0r}(R)|,$$
(15)
where $`P_{0t}(R)`$ and $`P_{0r}(R)`$ are the VPFs of the true and reconstructed galaxy distributions, and the sum extends over discrete bins in $`R`$. By visual comparison with the mock catalog reconstructions, we expect the models E1UNb1.0, E1PLb1.8, O4MDb0.7, and O4SQEb1.0 to have high ranks. This expectation is confirmed by the ranks of the PSCz reconstruction listed in each panel. We also computed the underdensity probability function (UPF) introduced by Weinberg & Cole (1992) and found that the different models have similar ranks for the UPF as for the VPF. The UPF requires that a sphere be more than $`80\%`$ below the mean density rather than completely empty.
Figure 10 shows the distribution of distances to nearest neighbors in the true and reconstructed catalogs. If computed in three dimensions using galaxy redshift distances, this statistic would show a spurious peak at neighbor separations corresponding to the velocity dispersions of typical galaxy groups. Therefore, we instead estimate the nearest-neighbor distribution from the redshift-space galaxy distributions using the method suggested by Weinberg & Cole (1992). For every galaxy at a redshift $`z`$, we consider all the galaxies that lie within a redshift range $`\mathrm{\Delta }v<1000\mathrm{kms}^1`$ to be its potential nearest neighbor. Of these candidate neighbors, we then choose the galaxy that lies closest to this galaxy in the transverse direction, and we compute the distribution of this transverse separation $`R_t`$ divided by the mean inter-galaxy separation $`\overline{d}`$ (i.e., $`x_n=R_t/\overline{d}`$). This approach biases the estimated neighbor distance, but the bias is the same for the PSCz data and the reconstructions. We define the FOM for this statistic as
$$\mathrm{FOM}_{\mathrm{NNBR}}=\sum _{x_n=0}^{1}|P_t(x_n)-P_r(x_n)|,$$
(16)
where $`P_t(x_n)`$ and $`P_r(x_n)`$ are the nearest-neighbor distributions of the true and reconstructed galaxy distributions. From the ranks of the PSCz reconstruction in different panels, we find that the models E1PLb1.8 and O4SQEb1.0 have high ranks, while the other models have low ranks.
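A brute-force sketch of this transverse nearest-neighbour procedure is given below; it is adequate only for a sample the size of the volume-limited catalog, and the 1000 km/s window is expressed as 10 Mpc/h for $`H_0=100h`$ km/s/Mpc.

```python
import numpy as np

def transverse_nn_distribution(pos, dbar, dv_max=10.0):
    """Transverse nearest-neighbour separations in units of dbar (sketch).

    pos    : (N, 3) redshift-space positions in Mpc/h with the observer at
             the origin; dv_max = 10 Mpc/h corresponds to 1000 km/s.
    Brute-force O(N^2); fine for the volume-limited sample only.
    """
    s = np.linalg.norm(pos, axis=1)                  # redshift-space distances
    x_n = []
    for i in range(len(pos)):
        mask = np.abs(s - s[i]) < dv_max             # candidate neighbours
        mask[i] = False
        if not mask.any():
            continue
        dvec = pos[mask] - pos[i]
        lvec = 0.5 * (pos[mask] + pos[i])            # mean line of sight
        lhat = lvec / np.linalg.norm(lvec, axis=1)[:, None]
        pi_par = np.abs(np.sum(dvec * lhat, axis=1)) # line-of-sight separation
        r_t = np.sqrt(np.maximum(np.sum(dvec**2, axis=1) - pi_par**2, 0.0))
        x_n.append(r_t.min() / dbar)
    return np.array(x_n)
```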
Figure 11 shows the redshift-space correlation functions $`\xi (s)`$ for the true and reconstructed catalogs. We compute the correlation functions using the estimator of Hamilton (1993),
$$\xi (s)=\frac{N_{DD}N_{RR}}{N_{DR}^2}-1,$$
(17)
where $`N_{DD},N_{DR}`$, and $`N_{RR}`$ are the number of galaxy-galaxy, galaxy-random, and random-random pairs with a redshift-space separation $`s`$. We use a random catalog that has the same geometry and selection function as the PSCz catalog and contains 50,000 points distributed randomly within the survey volume. We consider only those galaxy pairs that subtend an angle smaller than $`\alpha _{max}=60^{\circ }`$ at the observer so that the lines of sight to both the galaxies in the pair are approximately parallel. We fit the correlation function in the region $`1h^1\mathrm{Mpc}<s<15h^1\mathrm{Mpc}`$ with a power-law of the form $`\xi (s)=\left(\frac{s}{s_0}\right)^{-\gamma }`$, where $`s_0`$ is the redshift-space correlation length and $`\gamma `$ is the index of the power-law. We define the FOM as
$$\mathrm{FOM}_\xi =|\gamma _t-\gamma _r|,$$
(18)
where $`\gamma _t`$ and $`\gamma _r`$ are the slopes of the true and reconstructed, redshift-space correlation functions. We find that the model E1UNb1.0 has a high rank, while the other five models have low ranks. The failure of the model E1UNb1.0 is the one expected if we reconstruct a low $`\mathrm{\Omega }_m`$ universe using a high value of $`\mathrm{\Omega }_m`$: the high velocity dispersion in an $`\mathrm{\Omega }_m=1`$ model leads to excessive suppression of $`\xi (s)`$ on small scales, while the large amplitude of the coherent bulk motions (Kaiser (1987)) boosts it on large scales. Thus, the reconstructed $`\xi (s)`$ has a shallower slope compared to the true $`\xi (s)`$. We investigated a number of alternative FOM definitions, but we found (using the mock catalogs rather than the PSCz data set itself) that the difference in slopes was the most effective measure for picking out the characteristic signature of excessive redshift space distortions.
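For small samples, the Hamilton estimator of equation (17) and the slope comparison of equation (18) can be sketched with direct pair counts; the full analysis additionally imposes the $`\alpha _{max}`$ cut and uses the survey selection function, both of which are omitted here.

```python
import numpy as np

def xi_hamilton(gal, ran, bins):
    """Hamilton estimator of equation (17) from brute-force pair counts.

    gal, ran : (N, 3) redshift-space positions; keep N small, since the
               pairwise distance matrices scale as N^2 in memory.
    bins     : separation bin edges in Mpc/h (needs non-empty DR bins).
    """
    def pair_counts(a, b, same):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        if same:
            d = d[np.triu_indices(len(a), k=1)]      # each pair counted once
        return np.histogram(d.ravel(), bins=bins)[0].astype(float)

    ng, nr = len(gal), len(ran)
    dd = pair_counts(gal, gal, True) / (ng * (ng - 1) / 2.0)
    rr = pair_counts(ran, ran, True) / (nr * (nr - 1) / 2.0)
    dr = pair_counts(gal, ran, False) / (ng * float(nr))
    return dd * rr / dr**2 - 1.0

def slope_fom(xi_t, xi_r, s_mid, s_min=1.0, s_max=15.0):
    """|gamma_t - gamma_r| of equation (18) from power-law fits to xi(s)."""
    m = (s_mid > s_min) & (s_mid < s_max) & (xi_t > 0) & (xi_r > 0)
    gamma_t = -np.polyfit(np.log(s_mid[m]), np.log(xi_t[m]), 1)[0]
    gamma_r = -np.polyfit(np.log(s_mid[m]), np.log(xi_r[m]), 1)[0]
    return abs(gamma_t - gamma_r)
```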
The peculiar velocities of galaxies affect the redshift space clustering on both small and large scales, as discussed in §2.2. However, the real to redshift space mapping does not affect the galaxy clustering perpendicular to the line-of-sight. Figure 12 shows the projected correlation function $`w(r_p)`$ (Davis & Peebles (1983); Fisher et al. (1994)) of the true and reconstructed galaxy distributions, computed using an estimator similar to the one defined in equation (17). Here, the transverse separation $`r_p`$ is defined by the relation $`r_p^2=s^2-\pi ^2`$, where $`s`$ is the true separation in redshift-space and $`\pi `$ is the separation along the line-of-sight between the two galaxies in a pair. We fit a power-law to this function in the range $`1h^1\mathrm{Mpc}<r_p<15h^1\mathrm{Mpc}`$, and define a FOM similar to that defined by equation (18). We find that the model O4SQEb1.0 has a high rank, while the other five models have low ranks.
## 5 Evaluation of Models
We also reconstructed the PSCz catalog and the corresponding mock catalogs for the remaining set of nine models described briefly in §3.2. For all 15 models, we measured all the statistics described in the last section. We then ranked these models using the FOM corresponding to each statistic, in the manner described in §4. Table 2 lists the ranks of the PSCz reconstruction with respect to the mock catalogs for our full suite of 15 models.
Table 2 is the complete quantitative summary of the results of our reconstruction analysis of the PSCz catalog. A low rank for any statistic indicates that the model reproduces that statistical property of the PSCz catalog as well as, or better than, it reproduces that property for most of the mock catalogs corresponding to that model. On the other hand, a high rank (close to 10) for any statistic indicates that the model does not reproduce that property of the PSCz catalog as accurately as would be expected (based on the mock catalogs) if the model were a correct representation of the real universe. Computational practicality limits us to ten mock catalogs for each of our models, so even if the PSCz reconstruction has a rank of 10 for a particular statistic, we can only conclude that the model fails that statistical test at the $`90\%`$ confidence level. If we were to reconstruct 100 mock catalogs for that model, we would expect the PSCz reconstruction to be worse than at least $`90`$ of the mock catalogs (unless we happened to be unusually lucky in the ten that we did reconstruct), but we do not know whether it would be worse than 95, or 99, or all 100, since we have not been able to probe the tails of the reconstruction error distribution.
Two issues complicate the interpretation of Table 2. First is the fact that we have considered many different statistical tests and therefore given the PSCz reconstruction many “chances to fail”. As a result, a single rank of 10 does not necessarily imply a failure of the model; if the nine statistics were entirely independent of each other (which they are not), we would expect a typical mock catalog to have one rank of 10, and a significant fraction to have more than one rank of 10. In order not to be misled, we must compare the ranks of the PSCz reconstruction with the ranks of the mock catalog reconstructions even when we draw general inferences from Table 2, as we do below.
The second complication is that the statistical measures are not all independent of each other, since the clustering properties that they quantify are in some cases closely related. Fortunately, we can use the mock catalogs themselves to understand the correlations between the different statistics. Using all 150 mock catalogs, we computed the covariance matrix of the ranks of the nine statistics, and we also computed the distribution of mock catalog ranks for each statistic conditioned on the catalog having a rank of 10 for one of the other statistics. Both analyses led to the same conclusion: the nine statistics fall into five groups, and ranks within each group are correlated but ranks in one group are essentially uncorrelated with ranks in another group. The five groups are: (1) the correlation coefficient $`(r)`$ and the Fourier difference statistic $`D(k)`$, (2) the PDF of the smoothed galaxy density field, (3) the counts in spheres of radii $`3h^1\mathrm{Mpc}`$ and $`8h^1\mathrm{Mpc}`$ and the void probability function, (4) the nearest neighbor distribution, and (5) the two correlation functions $`\xi (s)`$ and $`w(r_p)`$. The statistics in the first and third groups are strongly correlated amongst themselves, while the statistics in the fifth group (the two correlation functions) are only moderately correlated.
As an overall quantitative measure of the success of a PSCz reconstruction relative to the expectation based on mock catalogs, we list the weighted mean rank of the PSCz reconstruction in column 11 of Table 2. We weight the rank of each statistic inversely by the number of statistics in its correlated group, so each of the five independent groups contributes equally to this mean rank. The mean weighted rank of mock catalogs computed in this manner is $`5.0`$, so a PSCz reconstruction with mean rank greater than $`5.0`$ is less accurate than a reconstruction of a typical mock catalog, and vice versa.
Two of the models listed in Table 2 fail the reconstruction test unambiguously. The PSCz reconstruction for model O4UNb1.0 has a rank of 10 in each of three independent groups of statistics, the PDF, the counts/VPF, and the nearest neighbor distribution. The worst of the ten mock catalog reconstructions of this model has two independent ranks of 10, while the next worst has one rank of 10 and two ranks of 9. The O4SQEb1.0 model fails even more clearly. The PSCz reconstruction of this model has a rank of 10 in four of the five independent groups of statistics, while the worst mock catalog reconstruction for this model has one rank of 10 and one rank of 9. For both models, the weighted mean rank of the PSCz reconstruction is higher than that of any of the model’s ten mock catalog reconstructions. Remarkably, the O4SAb0.9 model, which has nearly the same cosmological parameters as these two failed models but a different form of the biasing relation, produces one of the most successful PSCz reconstructions. We return to this point in §6.
The other model that fares especially poorly in Table 2 is E1UNb1.0, which has two independent ranks of 10 and a rank of 9 in third independent group. One of the mock catalog reconstructions of this model actually performs worse, with three independent ranks of 10 and a fourth independent rank of 9, and this mock catalog has a weighted mean rank of $`7.7`$, identical to the PSCz mean rank of $`7.7`$. In a purely statistical sense, therefore, we cannot rule out this model as clearly as we can rule out O4UNb1.0 and O4SQEb1.0. However, as already noted in our discussion of Figure 11a, the E1UNb1.0 reconstruction of PSCz fails in exactly the manner expected if we reconstruct a low $`\mathrm{\Omega }_m`$ universe with an $`\mathrm{\Omega }_m=1`$ model of similar $`\sigma _{8m}`$: the high peculiar velocities in the $`\mathrm{\Omega }_m=1`$ reconstruction suppress $`\xi (s)`$ on small scales and boost it on large scales, making the reconstructed slope of $`\xi (s)`$ too shallow. The PSCz reconstruction has a rank of 10 for $`\xi (s)`$ but only 6 for $`w(r_p)`$, so it is clear that this failure is caused by the reconstruction’s excessive peculiar velocities, not by a problem in the real space clustering. None of the mock catalog reconstructions of this model, including the one that has more high ranks than the PSCz reconstruction, fails in this characteristic manner.
Based on these considerations, we classify the models O4UNb1.0, O4SQEb1.0 and E1UNb1.0 as Rejected according to our analysis, and we indicate this classification by an R in column 12 of Table 2. These three PSCz reconstructions have the highest mean ranks among the 15 models, 7.8, 7.8 and 7.7, respectively. We classify the remaining 12 models as Accepted (indicated by an A in column 12), since in each case there is at least one of the model’s mock catalogs that has more independent ranks of 9 or 10 than the model’s PSCz reconstruction. However, within this class of Accepted models, there is a wide range in the relative accuracy of the PSCz reconstruction. The models E1PLb1.3, E1PLb1.8, and L2PLb0.77 are all Accepted on the basis of a single mock catalog that is reconstructed worse than the PSCz catalog, while the models E1PLMDb1.3 and O3PLb1.4 are Accepted on the basis of two mock catalogs that are reconstructed worse than the PSCz catalog. These five models have the highest mean ranks among the 12 Accepted models. In each of the remaining seven Accepted models, the PSCz is reconstructed better than at least three mock catalogs, and these models have correspondingly smaller average ranks for their PSCz reconstructions.
## 6 Discussion
We have reconstructed the IRAS galaxy distribution in our cosmological neighborhood, within a spherical region of radius $`50h^1\mathrm{Mpc}`$ centered on the Local Group. We have tested 15 different models, each consisting of a set of assumptions about the values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, the value of the bias factor $`b`$, and the nature of the biasing relation between IRAS galaxies and the underlying mass. For every model, we have quantified the accuracy of the PSCz reconstruction relative to the expectation based on mock PSCz catalogs, and we have used this result to classify the model as Accepted or Rejected. The Rejected models are unlikely to be the correct models for structure formation in the real universe, while for the Accepted models the PSCz reconstruction is more accurate than at least one of the model’s ten mock catalog reconstructions. We have computed mean weighted ranks (Table 2, column 11) as an overall quantitative measure of the accuracy of a model’s PSCz reconstruction relative to the expectation from mock catalogs. We now examine these results in detail to see what general conclusions we can derive regarding the allowed ranges of cosmological and galaxy formation parameters.
Figure 13 shows the locations of all $`15`$ models in the $`\mathrm{\Omega }_m\sigma _{8m}`$ plane. The 12 distinct points correspond to the $`15`$ different models because there are sets of models with identical values of $`\mathrm{\Omega }_m`$ and $`\sigma _{8m}`$ (and hence $`b`$) but with different biasing schemes. Thus, for example, the two models O4UNb1.0 and O4SQEb1.0 are indistinguishable in this plane, as are the three models E1PLb1.3, E1PLMDb1.3, and E1THb1.3. The circles show the 12 Accepted models, and the triangles show the three Rejected models. For the Accepted models, the radius of the circles is proportional to $`(10\mathrm{Rank})`$, where $`\mathrm{Rank}`$ is the mean rank for the PSCz reconstruction of a model. Hence, larger circles show models that are more successful in reconstructing the PSCz catalog. The shaded region shows the observed rms fluctuation of the IRAS galaxy distribution, $`\sigma _{8g}(\mathrm{IRAS})=0.69\pm 0.04`$ (Fisher et al. (1994)).
We plot four different constraints in the $`\mathrm{\Omega }_m\sigma _{8m}`$ plane that are obtained using independent techniques. The solid line in Figure 13 shows the constraint $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}=0.55`$, required to reproduce the observed masses and abundances of rich clusters of galaxies at the present epoch (White et al. (1993); Eke et al. (1996); Viana & Liddle (1996)). The dotted line shows the constraint $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}=0.85`$, which is implied by the power spectrum of mass density fluctuations estimated from the peculiar velocities of galaxies in the SFI catalog (Freudling et al. (1999)). The remaining two constraints arise from comparing the IRAS galaxy distribution with the peculiar velocities of galaxies. In linear perturbation theory, the mass continuity equation takes the form (Peebles (1980))
$$\nabla \cdot 𝐯=-fH_0\delta _m,$$
(19)
where $`f\simeq \mathrm{\Omega }_m^{0.6}`$. If we assume that $`\delta _g=b_\delta \delta _m`$ (the linear bias model), equation (19) becomes
$$\nabla \cdot 𝐯=-\beta H_0\delta _g,$$
(20)
where $`\beta =\mathrm{\Omega }_m^{0.6}/b_\delta `$. The short dashed line shows the constraint $`\beta _{\mathrm{IRAS}}=0.5`$ obtained by the VELMOD method, which derives a maximum likelihood estimate of $`\beta _{\mathrm{IRAS}}`$ by comparing the peculiar velocities of galaxies in the Mark III catalog with their radial velocities predicted from the IRAS 1.2 Jy redshift catalog, using equation (20) (Willick et al. 1997a ; Willick & Strauss (1998)). This value of $`\beta _{\mathrm{IRAS}}`$ is also obtained from an analysis of the anisotropy of the redshift-space power spectrum of IRAS galaxies (Cole, Fisher, & Weinberg 1995), and from a comparison of the spherical harmonics of the peculiar velocity field derived from the Mark III catalog with the spherical harmonics of the gravity field derived from the IRAS 1.2 Jy survey (Davis, Nusser, & Willick 1996; see Strauss & Willick 1995 and Hamilton 1998 for reviews of other estimates of $`\beta _{\mathrm{IRAS}}`$). We convert an estimate of $`\beta _{\mathrm{IRAS}}`$ into a constraint on $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}`$ using the relation
$$\sigma _{8m}\mathrm{\Omega }_m^{0.6}=\beta _{\mathrm{IRAS}}\sigma _{8g},$$
(21)
where we have assumed that $`b_\delta =b=\sigma _{8g}/\sigma _{8m}`$. The long dashed line shows the constraint $`\beta _{\mathrm{IRAS}}=0.9`$ obtained by the POTENT method, which measures $`\beta _{\mathrm{IRAS}}`$ as the slope of the regression between the observed galaxy density field from the IRAS 1.2 Jy redshift catalog and the mass density field derived from the peculiar velocities of galaxies in the Mark III catalog, using a modified version of equation (19) (Sigad et al. (1998)). Each of these four constraints has a quoted uncertainty of about $`10\%`$-$`20\%`$, implying that they cannot all be consistent with one another.
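For orientation, the four constraints can be drawn in the $`\mathrm{\Omega }_m`$-$`\sigma _{8m}`$ plane by inverting $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}=\mathrm{const}`$. The short sketch below is our own illustration, using the central values quoted above and equation (21) with $`\sigma _{8g}=0.69`$ for the two $`\beta _{\mathrm{IRAS}}`$ estimates.

```python
import numpy as np

sigma_8g = 0.69                 # rms IRAS galaxy fluctuation (Fisher et al. 1994)
omega_m = np.linspace(0.1, 1.0, 10)

def sigma_8m(amplitude, omega_m):
    """Invert sigma_8m * Omega_m^0.6 = amplitude."""
    return amplitude / omega_m**0.6

constraints = {
    "cluster abundance": 0.55,            # sigma_8m * Omega_m^0.6
    "SFI velocities":    0.85,
    "VELMOD (beta=0.5)": 0.5 * sigma_8g,  # eq. (21) with beta_IRAS = 0.5
    "POTENT (beta=0.9)": 0.9 * sigma_8g,  # eq. (21) with beta_IRAS = 0.9
}

for name, amp in constraints.items():
    print(name, np.round(sigma_8m(amp, omega_m), 2))
```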
Based on the ranks of the 15 models in Table 2, and their locations in Figure 13, we arrive at the following conclusions.
(1) Our successful reconstructions of the PSCz catalog, at least for some plausible assumptions about the value of $`\mathrm{\Omega }_m`$ and the bias between IRAS galaxies and mass, lend support to the hypothesis that LSS originated in the gravitational instability of small amplitude, Gaussian primordial mass density fluctuations. While this success does not, by itself, rule out non-Gaussian models for primordial fluctuations, it strengthens the viability of Gaussian models. Models whose initial conditions have substantially non-Gaussian PDFs generally predict quite different properties for LSS (Moscardini et al. (1991); Weinberg & Cole (1992)).
(2) Unbiased models in which IRAS galaxies trace mass are rejected, for both $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_m=1`$. From Table 2 and the discussion in §5, it is clear that the models E1UNb1.0 and O4UNb1.0 are both rejected by the reconstruction analysis of the PSCz catalog. Figure 11(a) shows that the model E1UNb1.0 fails in the manner expected if we reconstruct the redshift-space galaxy distribution in a low $`\mathrm{\Omega }_m`$ universe using, erroneously, a high value of $`\mathrm{\Omega }_m`$. The high velocity dispersion of clusters in an $`\mathrm{\Omega }_m=1`$ model suppresses the small scale correlations in redshift space, while the large scale bulk flows, whose amplitude is proportional to $`\mathrm{\Omega }_m^{0.6}`$, enhance the correlations on large scales. Therefore, the reconstructed $`\xi (s)`$ has a shallower slope compared to the true $`\xi (s)`$, although the rms fluctuations (in redshift space) of the two galaxy distributions are similar, by construction.
(3) Of the five models with $`\mathrm{\Omega }_m=1`$, E1THb1.3 is the only Accepted model that reconstructs the PSCz catalog as well as its own typical mock catalog. While the model E1UNb1.0 is clearly Rejected, the models E1PLb1.3 and E1PLb1.8 are Accepted because one mock catalog in each of these models is reconstructed worse than the PSCz catalog, and the model E1PLMDb1.3 is Accepted because there are two mock catalogs that are reconstructed worse than the PSCz catalog. Thus, although four of the five models with $`\mathrm{\Omega }_m=1`$ are Accepted, most of them are only moderately successful in reconstructing the PSCz catalog.
(4) In Figure 13, there are five models that lie on, or close to, the constraint $`\beta _{\mathrm{IRAS}}=0.5`$. Of these, the models O4UNb1.0 and O4SQEb1.0 are clearly rejected, the models E1PLb1.8 and L2PLb0.77 are Accepted because there is one mock catalog that is reconstructed worse than the PSCz catalog, while the model O3PLb1.4 is Accepted because there are two mock catalogs that are reconstructed worse than the PSCz catalog. All these models are at best only moderately successful in reconstructing the PSCz catalog. This leads us to conclude that the low-normalization constraint $`\beta _{\mathrm{IRAS}}=0.5`$ (corresponding to $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}\simeq 0.35`$), inferred from the simplest interpretation of the VELMOD (Willick & Strauss (1998)) and redshift-space distortion (Cole et al. (1995)) analyses of the $`1.2`$Jy redshift survey, is only marginally successful in reconstructing the PSCz catalog. Here we have assumed that $`b_\delta =b=\sigma _{8g}/\sigma _{8m}`$ to convert an estimate of $`\beta _{\mathrm{IRAS}}`$ into a constraint on $`\sigma _{8m}\mathrm{\Omega }_m^{0.6}`$. While this relation is valid in a deterministic, linear bias model, in the case of a more general biasing relation between galaxy and mass distributions, the relation between $`b`$ and $`b_\delta `$ also includes terms arising from the non-linearity and the stochasticity of the biasing relation (Dekel & Lahav (1999)). We are currently investigating the extent to which the estimates of $`\beta _{\mathrm{IRAS}}`$ using different techniques, including POTENT, VELMOD, and the anisotropy of the redshift-space power spectrum, are sensitive to the details of the biasing scheme (Berlind, Narayanan, & Weinberg, in preparation).
(5) There are seven models, namely E1THb1.3, O2PLb0.5, O4MDb0.7, O4SAb0.9, L3PLb0.62, L4PLb0.64, and L5PLb0.54, in which the PSCz catalog is reconstructed better than at least three mock catalogs corresponding to that model. These models are the most successful in reconstructing the properties of the galaxy distribution in the PSCz catalog. Except for the model E1THb1.3, all these models have $`\mathrm{\Omega }_m<1`$ and require that $`\sigma _{8m}>\sigma _{8g}`$ (hence $`b<1`$), i.e., that IRAS galaxies be antibiased with respect to the mass distribution on a scale of $`8h^{-1}\mathrm{Mpc}`$. However, we are unable to pin down the bias factor more precisely: the model O4SAb0.9, with a small antibias, and the model O2PLb0.5, with a large antibias, both reconstruct the PSCz catalog very well. Most of the successful models require that $`\beta _{\mathrm{IRAS}}\simeq 0.8`$ (except the model O4SAb0.9, which requires $`\beta _{\mathrm{IRAS}}\simeq 0.7`$).
(6) The model O4SAb0.9, in which IRAS galaxies are related to the mass distribution according to the predictions of the semi-analytic galaxy formation model, reconstructs the PSCz catalog very well. This accurate reconstruction of the PSCz catalog is a non-trivial success of the semi-analytic model, since the models O4UNb1.0 and O4SQEb1.0, with quite similar values of $`\mathrm{\Omega }_m`$ and $`\sigma _{8m}`$ but with different biasing relations, are both clearly rejected. This sensitivity of the reconstruction to the nature of the biasing relation demonstrates that the reconstruction analysis of a galaxy redshift survey can distinguish between different bias models, and not just between different values of the bias factor $`b`$.
Conclusions (2)-(6) are based on reconstructing the PSCz catalog under the general assumptions that the primordial mass density fluctuations form a Gaussian random field and that the bias between IRAS galaxies and mass can be characterized by a local, monotonic function. While the assumption of Gaussian initial fluctuations enables us to constrain the nature of the biasing relation, we could also use reconstruction analysis in a complementary mode, i.e., to test the level of non-Gaussianity of the initial fluctuations, given our current state of knowledge of the galaxy formation process. In this regard, the successful reconstruction for a model (O4SAb0.9) based on a physically motivated theory of galaxy formation and cosmological parameter values supported by independent constraints supports the standard hypothesis that primordial fluctuations were not far from Gaussian.
There are several natural directions for extending this analysis using observational data. In this paper, we have compared the properties of the reconstruction to the input PSCz galaxy distribution in redshift space alone. However, every model reconstruction predicts both the real-space galaxy distribution and the fully non-linear peculiar velocity field at every point within the reconstruction volume. We can then compare the velocity field predicted for any model with the observed peculiar velocities of galaxies in, say, the Mark III catalog (Willick et al. 1997b ) or the SFI catalog (Giovanelli et al. (1997)). Such a comparison will be more accurate than a comparison involving the velocity field predicted using linear theory. The amplitude and the non-linear components of the velocity field serve as good diagnostics of the allowed values of $`\mathrm{\Omega }_m`$ and $`\sigma _{8m}`$ (Narayanan & Weinberg (1998)). We can correct for inhomogeneous Malmquist bias when comparing the observed density and velocity fields by using the reconstructed line-of-sight density and velocity distributions to every galaxy. Alternatively, we can circumvent the effects of Malmquist bias by working directly in redshift space itself (Strauss & Willick (1995)).
In order to understand the galaxy formation process, it is necessary to study the relative bias between different types of galaxies as well as the absolute bias between the galaxy population as a whole and the underlying mass. For example, it is now well known that optical galaxies are more strongly clustered than IRAS galaxies (Lahav et al. (1990); Strauss et al. (1992); Saunders et al. (1992); Peacock & Dodds (1994); Willmer et al. (1998); Baker et al. (1998)). Reconstruction analysis of the galaxy distribution in the Optical Redshift Survey (Santiago et al. 1995, 1996), using a set of models similar to the ones discussed in this paper, will give us independent constraints on $`\mathrm{\Omega }_m`$ and $`\sigma _{8m}`$ and enable us to test whether optical galaxies trace the underlying mass. Since the Optical Redshift Survey and PSCz probe similar regions of space, the initial conditions derived from reconstruction of the two data should be consistent with each other, and it should be possible to reproduce one catalog beginning with the initial conditions derived from the other by changing only the biasing model used to select galaxies from the evolved mass distribution.
Reconstruction analysis is thus a powerful tool to constrain the ranges of allowed values of cosmological parameters and the details of the galaxy formation process. For example, we can discriminate between models with low and high values of $`\mathrm{\Omega }_m`$, and between models with different values of $`\sigma _{8m}`$ (hence, different values of $`b`$). However, if the cosmological parameters and the mass power spectrum can be determined precisely using other constraints, such as Type Ia supernovae, cosmic microwave background anisotropies, the Lyman-$`\alpha `$ forest, or weak lensing, then reconstruction analysis can focus on deriving the biasing relation between the different galaxy distributions and the underlying mass distribution. Knowledge of these relations will, in turn, provide strong tests of numerical and semi-analytic models for galaxy formation.
VKN and DHW were supported by NSF Grant AST-9616822. VKN also acknowledges support by the Presidential Fellowship from the graduate school of The Ohio State University. We thank Michael Hudson for helpful suggestions.
# Phonon spectral function for an interacting electron-phonon system
## Abstract
Using exact diagonalization techniques, we study a model of interacting electrons and phonons. The spectral width of the phonons is found to be reduced as the Coulomb interaction $`U`$ is increased. For a system with two modes per site, we find a transfer of coupling strength from the upper to the lower mode. This transfer is reduced as $`U`$ is increased. These results give a qualitative explanation of differences between Raman and photoemission estimates of the electron-phonon coupling constants for A<sub>3</sub>C<sub>60</sub> (A= K, Rb).
In a metallic system a phonon can decay into electron-hole pair excitations. This decay contributes to the width of the phonon. It was pointed out by Allen that this additional broadening can be used to estimate the electron-phonon coupling. The width can be measured in neutron scattering or, for the orientationally disordered fullerenes, in Raman scattering. Normally, the electron-phonon coupling is deduced by assuming noninteracting electrons. The method is, however, often applied to systems with strong correlation due to the Coulomb interaction, such as the alkali-doped fullerenes. In the alkali-doped fullerenes the electron-phonon interaction plays an important role, and accurate estimates of the coupling strength are essential. Almost all experimental estimates for these systems are based on Allen’s formula, and the accuracy of this formula is therefore crucial.
In strongly correlated systems the hopping is reduced and the excitation of electron-hole pairs may be more difficult. For instance if the correlation is so strong that the system has a metal-insulator transition, the decay into electron-hole pair excitations is completely suppressed. One aim of this paper is therefore to study how the estimate of the electron-phonon coupling is influenced if the electron-electron interaction is taken into account.
In metals a phonon can decay into a (virtual) electron-hole pair excitations which can then decay into a different phonon. In this way there is a coupling between different phonon modes of the same symmetry, leading to new modes which are linear combinations of the old ones. These new modes can have quite different coupling strengths than the old modes. A second aim of this paper is to study how the coupling strength is transferred between the modes due to the coupling via electron-hole pair excitations.
The electron-phonon coupling has been studied extensively for the alkali-doped fullerenes. In particular, there have been a study based on neutron scattering, and several studies based on Raman scattering. The high resolution studies of Winter and Kuzmany show a very strong coupling to a few of the low-lying modes, but almost no coupling to the high-lying modes. An alternative approach is based on photoemission from free negatively charged C$`{}_{}{}^{}{}_{60}{}^{}`$ molecules. By studying the weight of vibration satellites, it is possible to deduce the electron-phonon coupling. This results in rather different electron-phonon coupling constants. Although the main coupling was to the low-lying modes, there was also a substantial coupling to the two highest H<sub>g</sub> modes. The total coupling strength was also larger than deduced from Raman scattering.
In this paper we study a simple model with electron-phonon and electron-electron interactions. We consider a finite cluster with a nondegenerate electronic level and a nondegenerate phonon on each site. This model is solved by using exact diagonalization. We find that the Coulomb interaction reduces the phonon width, and that the use of Allen’s formula therefore leads to an underestimate of the electron-phonon coupling constants in Raman scattering experiments. Furthermore we find that due to the indirect interaction of different phonon modes via electron-hole pair excitations in metallic systems, there is a transfer of coupling strength to the low-lying modes which is not present for a free molecule. Since the Raman measurements of the electron-phonon coupling are performed for a solid, but the photoemission estimate is for a free molecule, the weight transfer is present in the Raman but not in the photoemission estimate. These observations are consistent with differences between the coupling constants deduced from Raman scattering and photoemission.
We consider a model with $`N_{\mathrm{mode}}`$ nondegenerate phonons per site and with electrons without orbital degeneracy. The Hamiltonian is
$`H`$ $`=\sum _{i\nu }\omega _\nu b_{i\nu }^{\dagger }b_{i\nu }+\sum _{i\sigma }[\epsilon _0+\sum _\nu g_\nu (b_{i\nu }+b_{i\nu }^{\dagger })]n_{i\sigma }`$ (2)
$`+U\sum _in_{i\uparrow }n_{i\downarrow }+\sum _{ij\sigma }t_{ij}c_{i\sigma }^{\dagger }c_{j\sigma },`$
where $`i`$ labels the $`N_{\mathrm{site}}`$ sites, $`c_{i\sigma }`$ and $`b_{i\nu }`$ annihilate an electron with spin $`\sigma `$ and a phonon with the label $`\nu `$, respectively, on site $`i`$ and $`n_{i\sigma }=c_{i\sigma }^{\dagger }c_{i\sigma }`$ is an occupation number operator. The energy of the phonon $`\nu `$ is $`\omega _\nu `$ and its coupling to the electrons is $`g_\nu `$. The corresponding dimensionless electron-phonon coupling is given by $`\lambda _\nu =2g_\nu ^2N(0)/\omega _{ph}`$, where $`N(0)`$ is the density of states per spin. The energy of the electronic level is $`ϵ_0`$. Two electrons on the same site have a Coulomb repulsion $`U`$. The hopping between the sites is described by matrix elements $`t_{ij}`$. A Hamiltonian like (2) with $`t_{ij}=t`$ for the nearest neighbor hopping has a high symmetry and a correspondingly large degeneracy. Since we use exact diagonalization to solve the model, we have to limit the number of sites to a small number (4-6). The resulting one-particle states are then very sparse in energy. Therefore we lower the symmetry by choosing each $`t_{ij}`$ randomly within some interval, which leads to a denser energy spectrum. The model is solved and the result is then averaged over different sets of $`\{t_{ij}\}`$. The strength of the hopping is measured by the one-particle width $`W`$ of the electronic band (for $`g=0`$ and $`U=0`$). In this model we for simplicity consider nondegenerate (A<sub>g</sub>) phonons and electrons. In, for instance, A<sub>3</sub>C<sub>60</sub> (A= K, Rb) the phonons are five-fold degenerate H<sub>g</sub> phonons and the electron states have a three-fold orbital degeneracy. The new physics which may be introduced by these degeneracies, e.g., the Jahn-Teller effect, is not considered here.
We consider a half-filled system, i.e., $`N_{\mathrm{site}}`$ electrons. With $`N_{\mathrm{site}}=6`$ there are then 400 different electronic configurations. To obtain a finite size Hilbert space, we limit the maximum number of phonons per mode to $`N_{\mathrm{phon}}`$. The number of phonon states is then $`(N_{\mathrm{phon}}+1)^{N_{\mathrm{site}}}`$ for the case of one mode per site. For instance, with $`N_{\mathrm{site}}=6`$ and $`N_{\mathrm{phon}}=3`$, the total Hilbert space has the dimension $`1.6384\times 10^6`$. Such a problem can be solved using exact diagonalization, i.e., the ground-state is expressed as a linear combination of all possible basis states in the Hilbert space. The lowest eigenfunction of the corresponding Hamiltonian matrix is then found using the Lanczos method. We further calculate the phonon Green’s function
$$D_{ij}^\nu (t)=-i\langle 0|T\{\varphi _{i\nu }(t)\varphi _{j\nu }(0)\}|0\rangle ,$$
(3)
where $`|0\rangle `$ is the ground state, $`\varphi _{i\nu }(t)=b_{i\nu }(t)+b_{i\nu }^{\dagger }(t)`$ is the phonon field operator in the interaction representation and $`T`$ is the time-ordering operator. Calculation of the Fourier transform gives $`D_{ij}^\nu (\omega )`$. We then define a spectral function as
$$A_{ii}^\nu (\omega )=\frac{1}{\pi }|\mathrm{Im}D_{ii}^\nu (\omega )|,$$
(4)
and study the average $`\rho _{ph}(\omega )=\sum _{\nu i}A_{ii}^\nu (\omega )/N_{\mathrm{site}}`$. Due to the finite size of the system, the spectrum is discrete. We therefore introduce a Lorentzian broadening with the FWHM (full width at half maximum) 0.01 eV.
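As an outline of this numerical procedure, the sketch below illustrates the Hilbert-space bookkeeping, a Lanczos-type ground-state step, and the Lorentzian broadening of a discrete spectrum. It is a schematic stand-in rather than the code used for the paper: the Hamiltonian is replaced by a small random sparse matrix, and the pole positions and weights are placeholders.

```python
import numpy as np
from math import comb
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# Basis size for N_site = 6, half filling (S_z = 0), at most 3 phonons per mode.
N_site, N_phon = 6, 3
dim = comb(N_site, 3) ** 2 * (N_phon + 1) ** N_site
print("Hilbert-space dimension:", dim)            # 400 * 4096 = 1638400

# Stand-in for the sparse many-body Hamiltonian; a much smaller random
# symmetric matrix keeps the example runnable.
m = 2000
A = sparse_random(m, m, density=1e-3, random_state=0)
H = 0.5 * (A + A.T)
e0, psi0 = eigsh(H, k=1, which="SA")              # Lanczos-type lowest eigenpair
print("toy ground-state energy:", e0[0])

# Lorentzian broadening of a discrete phonon spectrum (FWHM = 0.01 eV).
def lorentzian(w, w0, fwhm):
    g = 0.5 * fwhm
    return (g / np.pi) / ((w - w0) ** 2 + g ** 2)

poles = np.array([0.075, 0.082, 0.091])           # placeholder pole energies (eV)
weights = np.array([0.5, 0.3, 0.2])               # placeholder spectral weights
omega = np.linspace(0.05, 0.12, 1000)
A_w = sum(wt * lorentzian(omega, w0, 0.01) for w0, wt in zip(poles, weights))

# Spread of the broadened spectrum, estimated as an rms deviation.
norm = np.trapz(A_w, omega)
mean = np.trapz(omega * A_w, omega) / norm
width = np.sqrt(np.trapz((omega - mean) ** 2 * A_w, omega) / norm)
print("rms width of the broadened spectrum:", width)
```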
Fig. 1 shows the phonon spectral function $`A(\omega )`$ for different values of $`U`$. Due to the small size of the system, the width of the spectrum should not necessarily be expected to agree with Allen’s formula even for $`U=0`$. Nevertheless, the result of Allen’s formula $`\gamma _{\mathrm{Allen}}=0.19`$, is comparable to the width found for $`U=0`$. The figure illustrates how the spectral function becomes narrower with increasing $`U`$. This is further illustrated by the inset, which shows the width of the spectrum, calculated as the mean square deviation of the spectral function. The figure illustrates that one underestimates the electron-phonon coupling if Allen’s formula is used to extract the coupling for a system with a finite $`U`$. For systems like $`A_3`$C<sub>60</sub> (A= K, Rb), where the Coulomb interaction is believed to play an important role, the width of the phonons may then be substantially reduced. The use of Allen’s formula would then correspondingly underestimate the electron-phonon coupling.
We next discuss the case when there are two phonon modes per site, which have the unperturbed energies $`\omega _1`$ and $`\omega _2`$. First we calculate the lowest order phonon self-energy. This involves evaluating a “bubble” diagram. The self-energy can be written as
$$\mathrm{\Pi }_{\nu ,\nu ^{\prime }}(𝐪,\omega )=g_\nu g_{\nu ^{\prime }}f(\omega ),$$
(5)
where $`f(\omega )`$ depends on the precise band structure. We consider contributions to the self-energy which are both diagonal and non-diagonal in the index $`\nu `$. The non-diagonal contribution corresponds to a phonon $`\nu `$ decaying into an electron-hole pair followed by this electron-hole pair decaying into a phonon $`\nu ^{^{}}`$. The non-interacting phonon Green’s function is
$$D_{\nu ,\nu ^{\prime }}^0(\omega )=2\omega _\nu /(\omega ^2-\omega _\nu ^2)\delta _{\nu ,\nu ^{\prime }}.$$
(6)
The interacting phonon Green’s function is then given by
$`D^{-1}(\omega )`$ $`=[D^0(\omega )]^{-1}-\mathrm{\Pi }(\omega )=`$ (10)
$`\left[\begin{array}{cc}\frac{\omega ^2-\omega _1^2}{2\omega _1}-g_1^2f(\omega )& -g_1g_2f(\omega )\\ -g_1g_2f(\omega )& \frac{\omega ^2-\omega _2^2}{2\omega _2}-g_2^2f(\omega )\end{array}\right]`$
The modes of the coupled system are obtained by looking for zeros of the determinant of the matrix in Eq. (10). For the lowest mode, the corresponding eigenvector consists of a bonding linear combination of the two unperturbed modes. As a result the coupling to the electrons is increased for this mode, due to constructive interference between the couplings for the two unperturbed modes. In the same way the coupling is reduced for the higher mode. For instance, we can look for the width of the lowest mode in the limit when $`\omega _2\gg \omega _1`$ and when the electron-phonon coupling is weak. We then find that the width of the lowest mode is increased by a factor of
$$(1+c\lambda _2),$$
(11)
and the width of the highest mode is reduced by a factor
$$(1-c\lambda _2\left(\frac{\omega _1}{\omega _2}\right)^2),$$
(12)
where $`c`$ is somewhat larger than unity ($`c\approx 3`$) and depends on the shape of the band.
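A quick numerical check of this mode mixing is to locate the zeros of the determinant of the matrix in Eq. (10). The sketch below is purely illustrative: the couplings, bare mode energies, and the constant stand-in for $`f(\omega )`$ are assumptions, not the values used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

w1, w2 = 0.05, 0.19        # bare phonon energies (illustrative, eV)
g1, g2 = 0.01, 0.02        # electron-phonon couplings (illustrative)

def f(w):
    # Stand-in for the electron-hole bubble; the true f(omega) depends
    # on the band structure. A negative constant mimics its static limit.
    return -2.0

def det_Dinv(w):
    d1 = (w**2 - w1**2) / (2 * w1) - g1**2 * f(w)
    d2 = (w**2 - w2**2) / (2 * w2) - g2**2 * f(w)
    offd = -g1 * g2 * f(w)
    return d1 * d2 - offd**2

# Bracket and locate the two renormalized mode energies.
for lo, hi in [(0.01, 0.1), (0.1, 0.3)]:
    w_mode = brentq(det_Dinv, lo, hi)
    print("renormalized mode at", round(w_mode, 4), "eV")
```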
The result in Eq. (11) is based on the lowest order phonon self-energy and it neglects the Coulomb repulsion completely. We therefore study the same problem using exact diagonalization. Fig. 2 compares results for systems with one or two modes per site. The discrete spectra have been broadened by a Lorentzian with the FWHM=0.01. The main figure shows results for $`U=0`$, and it illustrates how the lower mode is broadened when the higher mode is switched on. For the parameters in Fig. 2 $`\lambda _1=0.043`$ and $`\lambda _2=0.085`$, and the additional broadening of the lowest mode is of the order of magnitude predicted by Eq. (11). The insert shows the width of the lower mode as a function of $`U`$. These results were obtained by fitting Lorentzians to the broadened spectra. The width for very large values of $`U`$ is due to the broadening of the discrete spectrum that we have introduced. As $`U`$ is increased, the width of the mode is reduced, as discussed above. The figure further illustrates that the transfer of coupling strength is reduced as $`U`$ is increased. This is expected, since the effects of hopping, and thereby the indirect coupling, is reduced as $`U`$ is increased.
It would be interesting to repeat these calculations for systems with degenerate phonons, e.g., to include the Jahn-Teller effect. This leads, however, to systems which are so large that they cannot easily be treated using exact diagonalization. Within a Hartree calculation we find a similar transfer of coupling strength to the lower modes also for Jahn-Teller phonons and electrons with orbital degeneracy. The transfer is, however, reduced by the nonspherical parts of the Coulomb interaction, i.e., by the difference between the interaction for equal orbitals and unequal orbitals. This effect may also play a role when we go beyond the Hartree approximation.
Finally, we observe that in theoretical approaches which do not explicitly include the transfer of coupling strength between the modes, it is appropriate to include this transfer by using the corresponding coupling constants. On the other hand, in a treatment where this transfer is explicitly included, the transfer should not be contained in the coupling constants used in the model.
To summarize, we have calculated the phonon spectral functions for systems with interacting electrons and phonons. We find that the Coulomb interaction between the electrons reduces the width of the phonons caused by the phonon decay into electron-hole pairs. As a result, estimates of the electron-phonon coupling based on the phonon width underestimate this coupling unless the Coulomb interaction is taken into account. This is consistent with the observations that weaker couplings have been deduced from Raman measurements than from photoemission (PES) experiments. Furthermore, we find that there is a transfer of coupling strength from the higher modes to the lower modes due to an indirect interaction via electron-hole pairs. This may, at least partly, explain the difference in the distribution of coupling strength between Raman and PES estimates, although it can probably not fully explain the weak coupling to the two highest phonons seen in Raman spectroscopy. In this work we have treated nondegenerate phonons. It would be interesting to extend the work to degenerate, Jahn-Teller phonons, since these are the important phonons in the alkali-doped Fullerenes.
This work has been supported by the Max-Planck-Forschungspreis.
# Deep Ly𝛼 imaging of radio galaxy 1138-262 at redshift 2.2
## 1. Introduction
Observations of clusters at high redshift ($`z>2`$) can directly constrain cosmological models, but searches based on colors or narrow band emission have not discovered more than a handful of presumed cluster galaxies (Le Fèvre et al. 1996; Cowie & Hu 1998). There are several indications that powerful radio galaxies at high redshift (HzRGs) are located at the centers of forming clusters. The powerful radio galaxy 1138$``$262 has extensively been studied and there is strong evidence that it is a forming brightest cluster galaxy in a (proto-)cluster (e.g. Pentericci et al. 1997). The arguments include (i) the very clumpy morphology of 1138$``$262 as observed by the HST (Pentericci et al. 1998), reminiscent of a merging system; (ii) the extremely distorted radio morphology and the detection of the largest radio rotation measures ($``$ 6200 rad m<sup>-2</sup>) in a sample of more than 70 HzRGs, indicating that 1138$``$262 is surrounded by a hot and dense magnetized medium (Carilli et al. 1997); (iii) the detection of extended X-ray emission around 1138$``$262 (Carilli et al. 1998), indicating the presence of hot cluster gas.
## 2. A cluster at redshift 2.2?
With the aim of detecting Ly$`\alpha `$ emitting cluster galaxies, the field of 1138$`-`$262 was observed on April 12 and 13 1999 with FORS1 on the VLT ANTU using a narrow band (65 Å) covering the redshifted Ly$`\alpha `$ (3814 Å), and the broad B band which encompasses the narrow band. The resulting Ly$`\alpha `$ image shows a huge ($`\sim `$160 kpc) halo of ionized hydrogen around the galaxy, which extends even further than the radio emission. (We adopt a Hubble constant of $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and a deceleration parameter of $`q_0=0.5`$.)
From a combined Ly$`\alpha `$ and B band image we have extracted $`\sim `$1600 sources with SExtractor (Bertin & Arnouts, 1996), after a careful consideration of the aperture size to be used for the photometry. Objects that are detected in the narrow band image at a level 3$`\sigma `$ higher than expected from the broad band image are selected as candidate Ly$`\alpha `$ emitters. Discarding 6 bright stars, we detect 34 such objects in the 3$`\times `$3 Mpc<sup>2</sup> field with a range of Ly$`\alpha `$ fluxes from 0.1-5$`\times 10^{-16}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>. These are obvious candidates for being companion galaxies in the cluster around 1138$`-`$262. Three of these candidates are shown in Fig. 1. The next step will be to measure the redshifts of the Ly$`\alpha `$ emitters and subsequently determine the spatial correlation function and the velocity dispersion, which together with the size of the cluster will give a direct estimate of the total mass.
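A schematic version of the narrow-band excess selection is given below. It is our own sketch: the flux values, the noise model, and the filter-width ratio are assumptions, not the actual SExtractor catalog or the survey calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src = 1600

# Placeholder photometry: broad-band fluxes, the narrow-band flux expected
# from the continuum alone (scaled by the filter-width ratio), and measured
# narrow-band fluxes with noise. All numbers are invented.
width_ratio = 65.0 / 1000.0        # 65 A narrow band vs ~1000 A B band (assumed)
f_bb = rng.lognormal(mean=0.0, sigma=1.0, size=n_src)
f_nb_expected = f_bb * width_ratio
sig_nb = np.full(n_src, 0.01)
f_nb = f_nb_expected + rng.normal(0.0, 0.01, size=n_src)
f_nb[:40] += 0.1                   # inject a few line emitters by hand

# Candidate Ly-alpha emitters: narrow-band flux exceeding the continuum
# expectation by more than 3 sigma, the criterion quoted in the text.
candidates = np.where(f_nb - f_nb_expected > 3.0 * sig_nb)[0]
print("number of candidate emitters:", candidates.size)
```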
## References
Bertin, E. & Arnouts S. 1996, A&AS, 117, 393
Carilli, C. L., Röttgering, H. J. A., van Ojik, R., Miley, G. K., & van Breugel, W. J. M. 1997, ApJS, 109, 1
Carilli, C .L., Harris, D. E., Pentericci, L., Röttgering, H. J. A., Miley, G. K., & Bremer, M. N. 1998, ApJ, 496, L57
Cowie, L. L. & Hu, E. M. 1998, AJ, 115, 1319
Le Fèvre, O., Deltorn, J. M., Crampton, D., Dickinson, M. 1996, ApJ, 471, L11
Pentericci, L., Röttgering, H. J. A., Miley, C. L., & McCarthy, P. 1997, A&A, 326, 580
Pentericci, L., Röttgering, H. J. A., Miley, G. K., Spinrad, H., McCarthy, P. J., van Breugel, W. J. M., & Macchetto, F. 1998, ApJ, 504, 139
KIMS-1999-10-17
gr-qc/9910081
A possible solution for the non-existence of time
Hitoshi Kitada
Department of Mathematical Sciences
University of Tokyo
Komaba, Meguro, Tokyo 153-8914, Japan
e-mail: kitada@kims.ms.u-tokyo.ac.jp
http://kims.ms.u-tokyo.ac.jp/
November 12, 1999
Abstract. A possible solution for the problem of non-existence of universal time is given by utilizing Gödel’s incompleteness theorem .
In a recent book , Barbour presented a thought that time is an illusion, by noting that Wheeler-DeWitt equation yields the non-existence of time, whereas time around us seems to be flowing. However, he does not appear to have a definite idea or formal way to actualize his thought. In the present note I present a concrete way to resolve the problem of the non-existence of time, which is partly a reminiscence of my works , , , .
1. Time seems not to exist
According to equation (5.13) in Hartle , the non-existence of time would be expressed by an equation:
$`H\mathrm{\Psi }=0.`$ (1)
Here $`\mathrm{\Psi }`$ is the “state” of the universe belonging to a suitable Hilbert space $``$, and $`H`$ denotes the total Hamiltonian of the universe defined in $``$. This equation implies that there is no global time of the universe, as the state $`\mathrm{\Psi }`$ of the universe is an eigenstate for the total Hamiltonian $`H`$, and therefore does not change. One might think that this implies the non-existence of local time because any part of the universe is described by a part of $`\mathrm{\Psi }`$. Then we have no time, in contradiction with our observations. This is a restatement of the problem of time, which is a general problem to identify a time coordinate while preserving the diffeomorphism invariance of General Relativity. In fact, equation (1) follows if one assumes the existence of a preferred foliating family of spacelike surfaces in spacetime (see section 5 of ).
We give a solution in the paper to this problem that on the level of the total universe, time does not exist, but on the local level of our neighborhood, time does exist.
2. Gödel’s theorem
Our starting point is the incompleteness theorem proved by Gödel . It states that any consistent formal theory that can describe number theory includes an infinite number of undecidable propositions. The physical world includes at least natural numbers, and it is described by a system of words, which can be translated into a formal physics theory. The theory of physics, if consistent, therefore includes an undecidable proposition, i.e. a proposition whose correctness cannot be known by human beings until one finds a phenomenon or observation that supports the proposition or denies the proposition. Such propositions exist infinitely according to Gödel’s theorem. Thus human beings, or any other finite entity, will never be able to reach a “final” theory that can express the totality of the phenomena in the universe.
Thus we have to assume that any human observer sees a part or subsystem $`L`$ of the universe and never gets the total Hamiltonian $`H`$ in (1) by his observation. Here the total Hamiltonian $`H`$ is an ideal Hamiltonian that might be gotten by “God.” In other words, a consequence from Gödel’s theorem is that the Hamiltonian that an observer assumes with his observable universe is a part $`H_L`$ of $`H`$. Stating explicitly, the consequence from Gödel’s theorem is the following proposition
$`H=H_L+I+H_E,\qquad H_E\ne 0,`$ (2)
where $`H_E`$ is an unknown Hamiltonian describing the system $`E`$ exterior to the realm of the observer, whose existence, i.e. $`H_E\ne 0`$, is assured by Gödel’s theorem. This unknown system $`E`$ includes all that is unknown to the observer. E.g., it might contain particles which exist near us but have not been discovered yet, or are unobservable for some reason at the time of observation. The term $`I`$ is an unknown interaction between the observed system $`L`$ and the unknown system $`E`$. Since the exterior system $`E`$ is assured to exist by Gödel’s theorem, the interaction $`I`$ does not vanish: In fact assume $`I`$ vanishes. Then the observed system $`L`$ and the exterior system $`E`$ do not interact, which is the same as that the exterior system $`E`$ does not exist for the observer. This contradicts that the observer is able to construct a proposition by Gödel’s procedure (see section 5 and ) that proves $`E`$ exists. For the same reason, $`I`$ is not a constant operator:
$`I\ne \text{constant operator}.`$ (3)
For suppose it is a constant operator. Then the systems $`L`$ and $`E`$ do not change no matter how far or how near they are located because the interaction between $`L`$ and $`E`$ is a constant operator. This is the same situation as that the interaction does not exist, thus reduces to the case $`I=0`$ above.
We now arrive at the following observation: For an observer, the observable universe is a part $`L`$ of the total universe and it looks as though it follows the Hamiltonian $`H_L`$, not following the total Hamiltonian $`H`$. And the state of the system $`L`$ is described by a part $`\mathrm{\Psi }(\cdot ,y)`$ of the state $`\mathrm{\Psi }`$ of the total universe, where $`y`$ is an unknown coordinate of system $`L`$ inside the total universe, and $`\cdot `$ is the variable controllable by the observer, which we will denote by $`x`$.
3. Local Time Exists
Assume now, as is usually expected, that there is no local time of $`L`$, i.e. that the state $`\mathrm{\Psi }(x,y)`$ is an eigenstate of the local Hamiltonian $`H_L`$ for some $`y=y_0`$ and a real number $`\mu `$:
$`H_L\mathrm{\Psi }(x,y_0)=\mu \mathrm{\Psi }(x,y_0).`$ (4)
Then from (1), (2) and (4) follows that
$`0=H\mathrm{\Psi }(x,y_0)=H_L\mathrm{\Psi }(x,y_0)+I(x,y_0)\mathrm{\Psi }(x,y_0)+H_E\mathrm{\Psi }(x,y_0)`$
$`=(\mu +I(x,y_0))\mathrm{\Psi }(x,y_0)+H_E\mathrm{\Psi }(x,y_0).`$ (5)
Here $`x`$ varies over the possible positions of the particles inside $`L`$. On the other hand, since $`H_E`$ is the Hamiltonian describing the system $`E`$ exterior to $`L`$, it does not affect the variable $`x`$ and acts only on the variable $`y`$. Thus $`H_E\mathrm{\Psi }(x,y_0)`$ varies as a bare function $`\mathrm{\Psi }(x,y_0)`$ insofar as the variable $`x`$ is concerned. Equation (5) is now written: For all $`x`$
$`H_E\mathrm{\Psi }(x,y_0)=-(\mu +I(x,y_0))\mathrm{\Psi }(x,y_0).`$ (6)
As we have seen in (3), the interaction $`I`$ is not a constant operator and varies when $`x`$ varies, whereas the action of $`H_E`$ on $`\mathrm{\Psi }`$ does not. (Note that Gödel’s theorem applies to any fixed $`y=y_0`$ in (3). Namely, for any position $`y_0`$ of the system $`L`$ in the universe, the observer must be able to know that the exterior system $`E`$ exists, because Gödel’s theorem is a universal statement valid throughout the universe. Hence $`I(x,y_0)`$ is not a constant operator with respect to $`x`$ for any fixed $`y_0`$.) Thus there is a nonempty set of points $`x_0`$ where $`H_E\mathrm{\Psi }(x_0,y_0)`$ and $`-(\mu +I(x_0,y_0))\mathrm{\Psi }(x_0,y_0)`$ are different, and (6) does not hold at such points $`x_0`$. If $`I`$ is assumed to be continuous in the variables $`x`$ and $`y`$, these points $`x_0`$ constitute a set of positive measure. This then implies that our assumption (4) is wrong. Thus a subsystem $`L`$ of the universe cannot be a bound state with respect to the observer’s Hamiltonian $`H_L`$. This means that the system $`L`$ is observed as a non-stationary system, and therefore a motion must be observed inside the system $`L`$. This proves that the “time” of the local system $`L`$ exists for the observer as a measure of motion, whereas the total universe is stationary and does not have “time.”
4. A refined argument
To show the argument in section 3 more explicitly, we consider a simple case of
$$H=\frac{1}{2}\sum _{k=1}^{N}h^{ab}(X_k)p_{ka}p_{kb}+V(X).$$
Here $`N`$ $`(1\le N\le \mathrm{})`$ is the number of particles in the universe, $`h^{ab}`$ is a three-metric, $`X_k\in R^3`$ is the position of the $`k`$-th particle, $`p_{ka}`$ is a functional derivative corresponding to momenta of the $`k`$-th particle, and $`V(X)`$ is a potential. The configuration $`X=(X_1,X_2,\mathrm{},X_N)`$ of total particles is decomposed as $`X=(x,y)`$ according to whether the $`k`$-th particle is inside $`L`$ or not, i.e. if the $`k`$-th particle is in $`L`$, $`X_k`$ is a component of $`x`$ and if not it is that of $`y`$. $`H`$ is decomposed as follows:
$$H=H_L+I+H_E.$$
Here $`H_L`$ is the Hamiltonian of a subsystem $`L`$ that acts only on $`x`$, $`H_E`$ is the Hamiltonian describing the exterior $`E`$ of $`L`$ that acts only on $`y`$, and $`I=I(x,y)`$ is the interaction between the systems $`L`$ and $`E`$. Note that $`H_L`$ and $`H_E`$ commute.
Theorem. Let $`P`$ denote the eigenprojection onto the space of all bound states of $`H`$. Let $`P_L`$ be the eigenprojection for $`H_L`$. Then we have
$$(1-P_L)P\ne 0,$$
(7)
unless the interaction $`I=I(x,y)`$ is a constant with respect to $`x`$ for any $`y`$.
Remark. In the context of the former part, the theorem implies the following:
$$(1-P_L)P\ne \{0\},$$
where $``$ is a Hilbert space consisting of all possible states $`\mathrm{\Psi }`$ of the total universe. This relation implies that there is a vector $`\mathrm{\Psi }\ne 0`$ in $``$ which satisfies $`H\mathrm{\Psi }=\lambda \mathrm{\Psi }`$ for a real number $`\lambda `$ while $`H_L\mathrm{\Phi }\ne \mu \mathrm{\Phi }`$ for any real number $`\mu `$, where $`\mathrm{\Phi }=\mathrm{\Psi }(\cdot ,y)`$ is a state vector of the subsystem $`L`$ with an appropriate choice of the position $`y`$ of the subsystem.
Proof of the theorem. Assume that (7) is incorrect. Then we have
$$P_LP=P.$$
Taking the adjoint operators on the both sides, we then have
$$PP_L=P.$$
Thus $`[P_L,P]=P_LP-PP_L=0`$. But in general this does not hold because
$$[H_L,H]=[H_L,H_L+I+H_E]=[H_L,I]\ne 0,$$
unless $`I(x,y)`$ is equal to a constant with respect to $`x`$. Q.E.D.
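The statement of the theorem can be illustrated on a small matrix model: let $`H_L`$ and $`H_E`$ act on two different two-level systems and let $`I`$ couple them, so that $`[H_L,I]\ne 0`$; the ground state of $`H`$ is then not an eigenstate of $`H_L`$. The following sketch is our own toy example, not part of the original argument.

```python
import numpy as np

# Pauli matrices and a 2x2 identity.
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
id2 = np.eye(2)

# H_L acts on the observable subsystem L, H_E on the exterior E,
# and I couples them and does not commute with H_L.
H_L = np.kron(sz, id2)
H_E = np.kron(id2, sz)
I_int = 0.5 * np.kron(sx, sx)

H = H_L + I_int + H_E
eigval, eigvec = np.linalg.eigh(H)
psi = eigvec[:, 0]              # bound (ground) state of the total Hamiltonian

# If psi were an eigenstate of H_L, H_L @ psi would be parallel to psi.
hl_psi = H_L @ psi
overlap = abs(psi @ hl_psi)
norm = np.linalg.norm(hl_psi)
print("commutator norm |[H_L, I]|:", np.linalg.norm(H_L @ I_int - I_int @ H_L))
print("psi is an H_L eigenstate:", np.isclose(overlap, norm))   # prints False
```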
5. Conclusion
Gödel’s proof of the incompleteness theorem relies on the following type of proposition $`P`$ insofar as concerned with the meaning:
$`P\text{}P\text{ cannot be proved.”}`$ (8)
Then if $`P`$ is provable it contradicts $`P`$ itself, and if $`P`$ is not provable, $`P`$ is correct and seems to be provable. Both cases lead to contradiction, which makes this kind of proposition undecidable in a given consistent formal theory.
This proposition reminds us of the following type of self-referential statement:
A person $`P`$ says “I am telling a lie.” (9)
The above statement and proposition $`P`$ in (8) are non-diagonal statements in the sense that both deny themselves. Namely the core of Gödel’s theorem is in proving the existence of non-diagonal “elements” (i.e. propositions) in any formal theory that includes number theory. Assigning the so-called Gödel number to each proposition in number theory, Gödel constructs such propositions in number theory by a diagonal argument, which shows that any consistent formal theory has a region exterior to the knowable world.
On the other hand, what we have deduced from Gödel’s theorem in section 2 is that the interaction term $`I`$ is not a constant operator. Moreover the argument there implies that $`I`$ is not diagonalizable in the following decomposition of the Hilbert space $``$:
$`=\int ^{\oplus }_L(\lambda )𝑑\lambda \otimes \int ^{\oplus }_E(\mu )𝑑\mu ,`$ (10)
where the first factor on the RHS is the decomposition of $``$ with respect to the spectral representation of $`H_L`$, and the second is the one with respect to that of $`H_E`$. In this decomposition, $`H_0=H_L+H_E`$ is decomposed as a diagonal operator:
$$H_0=H_L\otimes I_E+I_L\otimes H_E=\int ^{\oplus }\lambda 𝑑\lambda \otimes I_E+I_L\otimes \int ^{\oplus }\mu 𝑑\mu ,$$
where $`I_L`$ and $`I_E`$ denote identity operators in respective factors in (10). To see that $`I`$ is not diagonalizable in the decomposition (10), assume contrarily that $`I`$ is diagonalizable with respect to (10). Then by spectral theory of selfadjoint operators, $`I`$ is decomposed as $`I=f(H_L)I_E+I_Lg(H_E)`$ for some functions $`f(H_L)`$ and $`g(H_E)`$ of $`H_L`$ and $`H_E`$. Thus the total Hamiltonian $`H`$ is also diagonalizable and written as:
$$H=H_0+I=(H_L+f(H_L))\otimes I_E+I_L\otimes (H_E+g(H_E)).$$
Namely the total Hamiltonian $`H`$ is decomposed into a sum of mutually independent operators in the decomposition of the total system into the observable and unobservable systems $`L`$ and $`E`$. This means that there are no interactions between $`L`$ and $`E`$, contradicting Gödel’s theorem as in section 2. Therefore $`I`$ is not diagonalizable with respect to the direct integral decomposition (10) of the space $``$.
Now a consequence of Gödel’s theorem in the context of the decomposition of the total universe into observable and unobservable systems $`L`$ and $`E`$ is the following:
> In the spectral decomposition (10) of $``$ with respect to a decomposition of the total system into the observable and unobservable ones, $`I`$ is non-diagonalizable. In particular so is the total Hamiltonian $`H=H_L+I+H_E`$.
Namely Gödel’s theorem yields the existence of non-diagonal elements in the spectral representation of $`H`$ with respect to the decomposition of the universe into observable and unobservable systems. The existence of non-diagonal elements in this decomposition is the cause that the observable state $`\mathrm{\Psi }(,y)`$ is not a stationary state and local time arises, and that decomposition is inevitable by the existence of the region unknowable to human beings.
From the standpoint of the person $`P`$ in (9), his universe needs to proceed to the future for his statement to be decided true or false; the decision of which requires his system to have infinite “time.” This is due to the fact that his self-contradictory statement does not give him satisfaction in his own world and forces him to go out to the region exterior to his universe. Likewise, the interaction $`I`$ in the decomposition above forces the observer to anticipate the existence of a region exterior to his knowledge. In both cases the unbalance caused by the existence of an exterior region yields time. In other words, time is an indefinite desire to reach the balance that only the universe has.
Acknowledgements. I wish to express my appreciation to the members of Time Mailing List at http://www.kitada.com/ for giving me the opportunity to consider the present problem. Special thanks are addressed to Lancelot R. Fletcher, Stephen Paul King, Benjamin Nathaniel Goertzel, Bill Eshleman, Matti Pitkanen, whose stimulating discussions with me on the list have led me to consider the present problem. I especially thank Stephen and Bill for their comments on the earlier drafts to improve my English and descriptions.
## 1 Introduction
In this talk we explore the effects of the final-state interaction (FSI) in $`(e,e^{\prime }p)`$ reactions using polarized nuclei. All of the measurements to date involving medium and heavy nuclei have been performed with unpolarized targets and hence only the global effects of FSI averaged over all polarization directions have been addressed experimentally. Using instead polarized nuclei as targets, new possibilities to extract the full three-dimensional momentum distribution of nuclei will become available.
The few theoretical studies of $`(e,e^{}p)`$ reactions involving polarized, medium and heavy nuclei in DWIA report a dependence of FSI effects — or nuclear transparency — on the choice of polarization angles. In the present work we show that these variations of the transparency can be understood in terms of the orientation of the initial-state nucleon’s orbit and of the attenuation of the ejected nucleon’s flux.
We shall show that one is able to predict the orientations of the target polarization that are optimal for minimizing the FSI effects, providing the ideal situations for nuclear structure studies. This situation occurs when the nucleon is ejected directly away from the nuclear surface. On the other hand, when the nucleon is ejected from the nuclear surface but in the opposite direction — into the nucleus — it has to cross the entire nucleus to exit on the opposite side, and the FSI effects are then found to be maximal. This second situation is ideal for detailed studies of the absorptive part of the FSI. All of these situations can be selected simply by changing the direction of the nuclear polarization.
## 2 Coincidence cross section of polarized nuclei. Results
Here we present the results of a DWIA calculation of the $`{}_{}{}^{39}\stackrel{}{\mathrm{K}}(e,e^{}p)^{38}\mathrm{Ar}_{\mathrm{g}.\mathrm{s}.}`$ cross section in the extreme shell model. The present choice is prototypical and the results can be generalized for any polarized nucleus and can be addressed using more sophisticated nuclear models.
We describe the ground state of $`{}_{}{}^{39}\stackrel{}{\mathrm{K}}`$ as a hole in the $`d_{3/2}`$ shell of <sup>40</sup>Ca. The initial nuclear state is 100% polarized in the direction $`\mathrm{\Omega }^{}=(\theta ^{},\varphi ^{})`$.
$$|A(\mathrm{\Omega }^{})\rangle =R(\mathrm{\Omega }^{})|d_{3/2}^{-1},m=\frac{3}{2}\rangle ,$$
(1)
where $`R(\mathrm{\Omega }^{})`$ is a rotation operator. In this simple model the nuclear polarization is carried by the hole in the $`d_{3/2}`$ shell. The polarization angles $`(\theta ^{},\varphi ^{})`$ are the spherical coordinates of the polarization vector $`𝛀^{}`$ with respect to the $`𝐪`$-direction ($`z`$-axis) and with the $`x`$-axis in the scattering plane.
The final hadronic state is given by a proton in the continuum with energy $`ϵ^{}`$ and momentum $`𝐩^{}`$, plus a daughter $`A1`$ nucleus (<sup>38</sup>Ar) in the ground state. This is described in the shell model as two holes in the $`d_{3/2}`$ shell coupled to total spin $`J=0`$.
The hole wave function is obtained by solving the Schrödinger equation with a Woods-Saxon potential. The wave function of the ejected proton is obtained by solving the Schrödinger equation with an optical potential for positive energies.
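As a rough illustration of the bound-state part of this procedure, the sketch below solves the radial Schrödinger equation on a grid with a Woods-Saxon well. The well parameters are illustrative guesses, and the spin-orbit and optical-potential ingredients of the realistic calculation are omitted.

```python
import numpy as np

# Radial Schrodinger equation for u(r) = r R(r), solved by finite
# differences. Woods-Saxon parameters below are illustrative only.
hbarc, mass = 197.327, 939.0            # MeV fm, MeV
V0, R0, a0 = -50.0, 4.1, 0.65           # depth, radius, diffuseness
l = 2                                    # d wave

r = np.linspace(1e-3, 15.0, 1500)
h = r[1] - r[0]
V = V0 / (1.0 + np.exp((r - R0) / a0)) + hbarc**2 * l * (l + 1) / (2 * mass * r**2)

# Kinetic energy as a tridiagonal second-difference operator.
main = np.full(r.size, -2.0)
off = np.ones(r.size - 1)
T = -(hbarc**2 / (2 * mass * h**2)) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
H = T + np.diag(V)

E = np.linalg.eigvalsh(H)
print("lowest l=2 eigenvalues (MeV, negative = bound):", np.round(E[:3], 2))
```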
We compute the cross section as
$$\mathrm{\Sigma }\equiv \frac{d\sigma }{dE_e^{\prime }d\mathrm{\Omega }_e^{\prime }d\mathrm{\Omega }^{\prime }}=\sigma _M\left(v_LW^L+v_TW^T+v_{TL}W^{TL}+v_{TT}W^{TT}\right),$$
(2)
where $`\sigma _M`$ is the Mott cross section, $`v_K`$ are the electron kinematical factors given in , and $`W^K`$ are the nuclear response functions. See Refs. for more details of the model.
Next we show results of a calculation of the $`(e,e^{}p)`$ cross section for different nuclear polarizations $`(\theta ^{},\varphi ^{})`$. The kinematics correspond to the quasi-elastic peak and in-plane emission
$$q=500\mathrm{MeV}/c,\quad \omega =133.5\mathrm{MeV},\quad \varphi =0,\quad \theta _e=30^{\circ }$$
In fig. 1 we show the cross section for $`{}_{}{}^{39}\stackrel{}{\mathrm{K}}`$ polarized in the $`y`$ direction (left) and in the $`-y`$ direction (right). The solid lines are the full DWIA calculation, using the optical potential of Schandt et al. The dashed lines are the cross sections computed in PWIA, i.e., without FSI. The dotted lines correspond to the DWIA, but including in the FSI just the central imaginary part of the optical potential, while the dash-dotted lines include in addition the central real part of the potential.
Comparing the solid and dashed lines, we see that the effect of the FSI (solid lines relative to dashed lines) is quite dependent on the polarization of the nucleus. This fact suggest that the “transparency” of the nucleus to proton propagation can be maximized or minimized by selecting a particular polarization of the nucleus and that if one is able to understand physically the different behavior seen for the FSI effects in fig. 1, then it could be possible to make specific predictions about the reaction for future experiments.
## 3 A semi-classical picture of the reaction
In order to understand physically the above results we will consider a semi-classical model of the reaction by assuming it to take place in two or more steps as follows: first a proton with (missing) momentum $`𝐩`$ and energy $`ϵ`$ is knocked-out by the virtual photon and it acquires momentum $`𝐩^{}`$ and energy $`ϵ^{}=ϵ+\omega `$. Second, as this high-energy nucleon traverses the nucleus it undergoes elastic and inelastic scattering which, in our model, are produced by the real and imaginary parts of the optical potential.
The important point here is that the nucleus is polarized in a specific direction. Accordingly, the initial-state nucleon can be localized in an oriented (quantum) orbit. From the knowledge of this orbit and of the missing momentum one can predict the most probable location of the struck proton, and therefore one can specify the quantity of nuclear matter that the proton must cross before exiting from the nucleus with momentum $`𝐩^{}`$.
We illustrate the case of a particle in a $`d_{3/2}`$ wave, which is the relevant state for our calculation. First consider that the particle is polarized in the $`z`$-direction ($`𝛀^{}=𝐞_3`$). The corresponding wave function can be written as
$$|\frac{3}{2}\frac{3}{2}\rangle =\psi _1|\uparrow \rangle +\psi _2|\downarrow \rangle .$$
(3)
where the up and down components are given by
$`\psi _1`$ $`=`$ $`\sqrt{{\displaystyle \frac{3}{8\pi }}}\mathrm{sin}\theta \mathrm{cos}\theta \mathrm{e}^{i\varphi }R(r)`$ (4)
$`\psi _2`$ $`=`$ $`\sqrt{{\displaystyle \frac{3}{8\pi }}}\mathrm{sin}^2\theta \mathrm{e}^{2i\varphi }R(r).`$ (5)
Here the angles $`(\theta ,\varphi )`$ are the spherical coordinates of the particle’s position $`𝐫`$ and $`R(r)`$ is its radial wave function. The spatial distribution is then given by the single-particle probability density
$$\rho (𝐫)=|\psi _1|^2+|\psi _2|^2=\frac{3}{8\pi }\mathrm{sin}^2\theta |R(r)|^2.$$
(6)
Taking into account the form of the radial wave function for the $`d_{3/2}`$ wave, we can see that the particle is distributed around the center of the nucleus in a toroidal-like (quantum) orbit as shown schematically in fig. 2 (upper part). In a semi-classical picture of the bound state, we can imagine the particle performing a rotatory orbit within the torus in a counter-clockwise sense, as shown in the figure. The shape of the distribution for arbitrary polarization $`𝛀^{}`$ is just a rotation of the above distribution, as also shown in fig. 2 (bottom).
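To make the geometry of the oriented orbit concrete, the short Python sketch below evaluates the single-particle density of eq. (6) along the polar angle; the radial profile is a toy Gaussian-shaped function standing in for the actual Woods-Saxon $`d_{3/2}`$ eigenfunction, so only the angular behavior should be taken literally.

```python
import numpy as np

def radial_wf(r, r0=3.0, width=1.0):
    """Toy radial profile standing in for the Woods-Saxon d_{3/2} eigenfunction."""
    return r * np.exp(-0.5 * ((r - r0) / width)**2)

def density(r, theta):
    """Single-particle density of eq. (6): rho = (3/8pi) sin^2(theta) |R(r)|^2."""
    return 3.0 / (8.0 * np.pi) * np.sin(theta)**2 * radial_wf(r)**2

theta = np.linspace(0.0, np.pi, 181)
rho = density(3.0, theta)                       # density at the radial maximum
print("peak at theta = %.0f deg" % np.degrees(theta[np.argmax(rho)]))
# -> 90 deg: the nucleon is concentrated in a torus perpendicular to the spin axis
```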
The next step is to localize the particle within the orbit for a given value of the missing momentum $`𝐩`$. From elementary quantum mechanics we employ the Fourier transform $`\stackrel{~}{\psi }(𝐩)`$ of the wave function and the position operator in momentum space $`\widehat{𝐫}=i_p`$ to define the local position of the nucleon in the orbit for momentum $`𝐩`$ in the following way:
$$𝐫(𝐩)=\frac{\mathrm{Re}\stackrel{~}{\psi }^{}(𝐩)(i_p)\stackrel{~}{\psi }(𝐩)}{\stackrel{~}{\psi }^{}(𝐩)\stackrel{~}{\psi }(𝐩)}.$$
(7)
This is a well-defined vector which represents the most probable location of a particle with momentum $`𝐩`$ when it is described by a wave function $`\psi `$. Henceforth $`𝐫(𝐩)`$ represents the position of the particle in the orbit in the present semi-classical model.
For the case of interest here of the $`d_{3/2}`$ orbit polarized in the $`z`$-direction, we compute the position $`𝐫(𝐩)`$ by using the wave function given in eqs. (3)–(5) in momentum space:
$$\mathrm{Re}\stackrel{~}{\psi }^{}(𝐩)i_p\stackrel{~}{\psi }(𝐩)=\frac{3}{8\pi }\frac{|\stackrel{~}{R}(p)|^2}{p}\mathrm{sin}\theta (1+\mathrm{sin}^2\theta )\widehat{\mathit{\varphi }},$$
(8)
where now $`(\theta ,\varphi )`$ are the spherical coordinates of the missing momentum $`𝐩`$, $`\stackrel{~}{R}(p)`$ is the radial wave function in momentum space, and $`\widehat{\mathit{\varphi }}`$ is the unit vector in the azimuthal direction. As we see, upon dividing by the momentum distribution (given by eq. (6), but in momentum space)
$$\stackrel{~}{\psi }^{}(𝐩)\stackrel{~}{\psi }(𝐩)=\frac{3}{8\pi }\mathrm{sin}^2\theta |\stackrel{~}{R}(p)|^2,$$
(9)
the radial dependence in the numerator and denominator goes away, and we obtain an expectation value of position which is independent of the radial wave function
$$𝐫(𝐩)=\frac{1+\mathrm{sin}^2\theta }{p\mathrm{sin}\theta }\widehat{\mathit{\varphi }}.$$
(10)
This expression has been obtained for the polarization in the $`z`$-direction. For a general polarization vector $`𝛀^{}`$ we just perform a rotation of the vector $`𝐫(𝐩)`$. Introducing the angle $`\theta _p^{}`$ between $`𝐩`$ and $`𝛀^{}`$, we can write the nucleon position in a general way
$$𝐫(𝐩)=\frac{1+\mathrm{sin}^2\theta _p^{}}{p^2\mathrm{sin}^2\theta _p^{}}𝛀^{}\times 𝐩.$$
(11)
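As an illustration of how eq. (11) localizes the struck nucleon, the following sketch evaluates $`𝐫(𝐩)`$ for two opposite polarization directions, together with the straight-line path length from that point to the surface of a sphere of radius $`R`$ along the direction of the ejected proton. The numerical values (nuclear radius, missing-momentum vector, and ejectile direction) are rough illustrative choices, not the ones used in the actual calculation.

```python
import numpy as np

def orbit_position(p_vec, omega):
    """Most probable nucleon position of eq. (11) for a polarized d_{3/2} orbit.
    p_vec: missing momentum in fm^-1; omega: unit polarization vector."""
    p2 = np.dot(p_vec, p_vec)
    cross = np.cross(omega, p_vec)
    sin2 = np.dot(cross, cross) / p2                  # sin^2(theta'_p)
    return (1.0 + sin2) / (p2 * sin2) * cross

def chord_to_surface(r0, direction, R=4.5):
    """Path length s from r0 to the surface of a sphere of radius R along 'direction'."""
    b = np.dot(r0, direction)
    return -b + np.sqrt(max(R**2 - np.dot(r0, r0) + b**2, 0.0))

hbarc = 197.33                                        # MeV fm
p_miss = np.array([140.0, 0.0, -20.0]) / hbarc        # assumed in-plane missing momentum
p_out = np.array([0.0, 0.0, 1.0])                     # ejected proton roughly along q (z axis)

for omega in ([0.0, 1.0, 0.0], [0.0, -1.0, 0.0]):     # the two opposite polarizations
    r0 = orbit_position(p_miss, np.array(omega))
    print(omega, "s = %.1f fm" % chord_to_surface(r0, p_out))
```

The two opposite polarizations yield path lengths differing by several fm, in line with the qualitative picture described below.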
Using the above definitions we can give a physical interpretation of the results shown in fig. 1. The kinematics for the case of the <sup>39</sup>K nucleus polarized in the $`y`$ direction are illustrated in fig. 3(a). Therein, the momentum transfer points in the $`z`$-direction and we show the missing-momentum vector $`𝐩`$ corresponding to the maximum of the momentum distribution, $`p140`$ MeV/c. The momentum of the ejected proton $`𝐩^{}`$ is also shown in the picture. For $`𝛀^{}`$ pointing in the $`y`$ direction, the semi-classical orbit lies in the $`xz`$-plane and follows a counter-clockwise direction of rotation. For these conditions, the most probable position of the proton before the interaction is indicated with a black dot near the bottom of the orbit. As the particle is going up with momentum $`𝐩^{}`$ after the interaction with the virtual photon, it has to cross all of the nucleus and exit it by the opposite side; thus one expects that the FSI will be large in this situation, as shown in the left panel of fig. 1.
In fig. 3(b) we show the picture for the opposite polarization, in the $`-y`$-direction. In this case the nucleon distribution in the orbit is the same as in (a), but the rotation direction is the opposite. Hence it is now more probable for the nucleon to be located near the upper part of the orbit. As the nucleon is still going up with the same momentum $`𝐩^{}`$, the distance that it has to travel through the nucleus is much smaller than in case (a), and hence one expects small FSI effects, which is what is seen in the right panel of fig. 1.
We have arrived at a very intuitive physical picture of why the FSI effects differ for different orientations of the nuclear spin: the polarization direction fixes the orientation of the nucleon distribution (quantum orbit). For a given value of the missing momentum one can locate the particle in a definite position within the orbit, and therefore within the nucleus. Assuming that the particle leaves the nucleus with the known momentum $`𝐩^{}`$, one can determine the quantity of nuclear matter that it has to cross before exiting.
In order to check the above picture for any nuclear polarization, we have computed the cross section for a set of 26 different nuclear polarization angles spanning the $`(\theta ^{},\varphi ^{})`$ plane. Using equation (11) we have computed the distance $`s`$ of the nucleon trajectory within the nucleus, by choosing some appropriate value for the nuclear radius $`R`$. A model of exponential attenuation of the cross section due to nuclear absorption can be crafted in the following way:
$$\mathrm{\Sigma }_{DWIA}\simeq \mathrm{\Sigma }_{PWIA}\mathrm{e}^{-s/\lambda },$$
(12)
where $`\lambda `$ is a free parameter to be interpreted as the mean free path (MFP). Within this approximation, the nuclear transparency, defined as the ratio between the DWIA and PWIA results, can be written as
$$T\equiv \frac{\mathrm{\Sigma }_{DWIA}}{\mathrm{\Sigma }_{PWIA}}\simeq \mathrm{e}^{-s/\lambda }.$$
(13)
In fig. 4 we show the nuclear transparency as a function of the distance $`s`$ to the nuclear surface, computed for different polarizations, at the maximum of the cross section. For the FSI we have used just the central, imaginary part of the optical potential. In this figure we see that the dependence of $`\mathrm{log}T`$ on $`s`$ can in fact be approximated by a straight line. By performing a linear regression we obtain a MFP of $`\lambda =8.4`$ fm. This value is quite independent of the radius $`R`$ in the region between $`r_{1/2}`$ and $`r_{1/10}`$, where the nuclear density $`\rho (r)`$ takes the values $`\rho (0)/2`$ and $`\rho (0)/10`$, respectively.
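Extracting the MFP from eq. (13) amounts to a linear fit of $`\mathrm{ln}T`$ against the path length $`s`$. A minimal sketch follows, with invented $`(s,T)`$ pairs standing in for the values computed for the 26 polarizations.

```python
import numpy as np

# Illustrative (s, T) pairs; in the actual analysis each point comes from one
# nuclear polarization, with T = Sigma_DWIA / Sigma_PWIA at the cross-section maximum.
s = np.array([1.5, 2.8, 4.0, 5.5, 7.0])          # path lengths in fm
T = np.array([0.84, 0.72, 0.62, 0.52, 0.43])     # transparencies

slope, intercept = np.polyfit(s, np.log(T), 1)   # ln T ~ intercept + slope * s
mfp = -1.0 / slope                               # eq. (13): T ~ exp(-s / lambda)
print("mean free path lambda = %.1f fm" % mfp)
```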
## 4 Applications to two-particle emission reactions
Finally we give a possible application of the above model to two-hadron emission reactions. Consider $`(e,e^{}N\pi )`$ reactions from polarized nuclei in the $`\mathrm{\Delta }`$-region. By selecting the appropriate nuclear polarization, one could reduce or enhance the FSI of the final $`\mathrm{\Delta }`$ in the nuclear medium. In fact, using the above model, one has control over the point of the nucleus where the $`\mathrm{\Delta }`$ is created. Making a crude estimate of the distance that the $`\mathrm{\Delta }`$ travels before decaying,
$$x\simeq \frac{\hbar c}{\mathrm{\Gamma }_\mathrm{\Delta }}\simeq \frac{200}{120}\mathrm{fm}\simeq 1.7\mathrm{fm}$$
(14)
we see that it could be possible to produce the $`\mathrm{\Delta }`$ in the two situations shown in figure 5.
In case (a) the $`\mathrm{\Delta }`$ is created near the nuclear surface and propagates into the nucleus. It has large FSI and decays inside the nucleus into a pair $`N+\pi `$ which also interacts with the nucleus. In the second case (b) the $`\mathrm{\Delta }`$ propagates out of the nucleus. The FSI of the $`\mathrm{\Delta }`$ is expected to be smaller, and the $`\mathrm{\Delta }`$ decays outside of the nucleus. This situation is better suited for studying the $`\mathrm{\Delta }`$ electroproduction amplitude in nuclei without much distortion by FSI. Case (a), in turn, is ideal for studying the $`\mathrm{\Delta }`$ properties in the nuclear medium.
## 5 Summary and conclusions
We have studied the reaction <sup>39</sup>K$`(e,e^{}p)^{38}\mathrm{Ar}_{\mathrm{gs}}`$ for polarized <sup>39</sup>K in DWIA. We have studied the dependence of the FSI as a function of the nuclear polarization direction and introduced a physical picture of the results in order to understand the different effects seen in the cross section.
The argument to explain the FSI effects is based on the PWIA and it has been illustrated by introducing the semi-classical concept of a nucleon orbit within the nucleus. For given kinematics we can fix the expectation value of the position of the nucleon within the nucleus before the interaction. From this information we have computed the length of the path that the nucleon travels across the nucleus for each polarization.
Our results show that when the FSI effects are large the computed nucleon path through the nucleus is also large, whereas the opposite happens when the FSI effects are small. Thus, by selecting the appropriate nuclear polarization, one can reduce or enhance the FSI effects. Such control should prove to be very useful in analyzing the results from future experiments with polarized nuclei.
Finally, our model can also be applied to the $`(e,e^{}N\pi )`$ reaction in the $`\mathrm{\Delta }`$ peak. Since by flipping the nuclear polarization one can go from large to small FSI effects of the $`\mathrm{\Delta }`$, this opens the possibility of using this kind of reaction to distinguish the FSI effects from other issues of interest, such as the $`\mathrm{\Delta }`$ electroproduction amplitudes in the medium.
## Acknowledgments
This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative agreement #DE-FC01-94ER40818, in part by DGICYT (Spain) under Contract No. PB92-0927 and the Junta de Andalucía (Spain) and in part by NATO Collaborative Research Grant #940183.
# Galaxies of Redshift 𝑧>5: The View from Stony Brook
## 1. Introduction
We have over the past few years applied our photometric redshift technique to the Hubble Deep Field (HDF) and Hubble Deep Field South (HDF–S) WFPC2 and NICMOS fields (e.g. Lanzetta, Yahil, & Fernández-Soto 1996, 1998; Fernández-Soto, Lanzetta, & Yahil 1999; Yahata et al. 2000). Our objective is to establish properties of the extremely faint galaxy population by identifying galaxies that are too faint to be spectroscopically identified by even the largest ground-based telescopes. Our experience indicates that photometric redshift measurements are at least as robust and reliable as spectroscopic redshift measurements (and probably significantly more so). Specifically, comparison of photometric and reliable spectroscopic measurements in the HDF and HDF–S fields demonstrates that the photometric redshift measurements are accurate to within an RMS relative uncertainty of $`\mathrm{\Delta }z/(1+z)\lesssim 10\%`$ and that there are no known examples of photometric redshift measurements that are in error by more than a few times the RMS uncertainty. These results apply at all redshifts $`z<6`$ that have yet been examined. It thus appears that the photometric redshift technique provides a means of obtaining redshift identifications of large samples of the faintest galaxies to the largest redshifts.
Here we report on some aspects of our efforts. Highlights of the results include the following:
1. We have identified nearly 3000 faint galaxies, of which nearly 1000 galaxies are of redshift $`z>2`$ and more than 50 galaxies are of redshift $`z>5`$ (ranging up to and beyond $`z=10`$). Further, we have fully characterized the survey area versus depth relationships, in terms of both energy flux density and surface brightness, in order to measure statistical properties of the very high redshift galaxy population.
2. We find that cosmological surface brightness dimming effects play a dominant role in setting what is observed at redshifts $`z>2`$. Most importantly, we find that it is more or less meaningless to interpret the galaxy luminosity function (or its moments) at high redshifts without explicitly taking account of surface brightness effects.
3. We find that the comoving number density of high intrinsic surface brightness regions (or in other words of high star formation rate density regions) increases monotonically with increasing redshift.
4. We find that previous estimates neglect a significant or dominant fraction of the ultraviolet luminosity density of the universe due to surface brightness effects and that the rest-frame ultraviolet luminosity density (or equivalently the cosmic star formation rate density) has not yet been measured at redshifts $`z\gtrsim 2`$. The ultraviolet luminosity density of the universe plausibly increases monotonically with increasing redshift to redshifts beyond $`z=5`$.
The most recent versions of our photometry and redshift catalogs of faint galaxies in the HDF and HDF–S fields can be found on our web site at:
http://www.ess.sunysb.edu/astro/hdfs/.
Here and throughout we adopt a standard Friedmann cosmological model of dimensionless Hubble constant $`h=H_0/(100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1)`$ and deceleration parameter $`q_0=0.5`$.
## 2. Observations and Analysis
Our current observations and analysis differ from our previous observations and analysis in three important ways:
First, we have included all available public ground- and space-based imaging observations of the HDF, HDF–S WFPC2, and HDF–S NICMOS fields. Details of the current observations are summarized in Table 1.
| Table 1 | |
| --- | --- |
| Field | Filters |
| HDF | F300W, F450W, F606W, F814W, |
| | F110W, F160W, $`J`$, $`H`$, $`K`$ |
| HDF–S WFPC2 | F300W, F450W, F606W, F814W, |
| | $`U`$, $`B`$, $`V`$, $`R`$, $`I`$, $`J`$, $`H`$, $`K`$ |
| HDF–S NICMOS | F110W, F160W, F222M, STIS, |
| | $`U`$, $`B`$, $`V`$, $`R`$, $`I`$ |
Second, we have developed and applied a new quasi-optimal photometry technique based on fitting models of the spatial profiles of the objects (which are obtained using a non-negative least squares image reconstruction method) to the ground- and space-based images, according to the spatial profile fitting technique described previously by Fernández-Soto, Lanzetta, & Yahil (1999). For faint objects, the signal-to-noise ratios obtained by this technique are larger than the signal-to-noise ratios obtained by aperture photometry techniques by typically a factor of two.
Third, we have measured photometric redshifts using a sequence of six spectrophotometric templates, including the four templates of our previous analysis (of E/S0, Sbc, Scd, and Irr galaxies) and two new templates (of star-forming galaxies). Inclusion of the two new templates eliminates the tendency of our previous analysis to systematically underestimate the redshifts of galaxies of redshift $`2<z<3`$ (by a redshift offset of roughly 0.3), in agreement with results found previously by Benítez et al. (1999).
The accuracy and reliability of the photometric redshift technique is illustrated in Figure 1, which shows the comparison of 108 photometric and reliable spectroscopic redshifts in HDF and HDF–S. (Note that a non-negligible fraction of published spectroscopic redshift measurements of galaxies in HDF and HDF–S have been shown to be in error and so must be excluded from consideration.) With the sequence of six spectrophotometric templates, the photometric redshifts are accurate to within an RMS relative uncertainty of $`\mathrm{\Delta }z/(1+z)_{}^<\mathrm{\hspace{0.25em}10}\%`$ and there are no known examples of photometric redshift measurements that are in error by more than a few times the RMS uncertainty. These results apply at all redshifts $`z<6`$ that have yet been examined. Details of some of our current observations and analysis are described by Yahata et al. (2000).
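For readers unfamiliar with the technique, the essence of photometric redshift measurement is a chi-squared comparison of observed broad-band fluxes with redshifted spectrophotometric templates. The sketch below is a deliberately simplified illustration of that idea, assuming idealized top-hat filters and ignoring intergalactic absorption and photometric priors; the filter and template inputs are placeholders, not those of the actual analysis.

```python
import numpy as np

def synth_flux(wave, flux, z, filt_lo, filt_hi):
    """Mean template flux redshifted to z through an idealized top-hat filter."""
    wave_obs = wave * (1.0 + z)
    mask = (wave_obs > filt_lo) & (wave_obs < filt_hi)
    return flux[mask].mean() if mask.any() else 0.0

def photo_z(obs_flux, obs_err, filters, templates, z_grid):
    """Return (z, template name, chi2) minimizing chi^2 over the redshift grid.
    obs_flux, obs_err: arrays per filter; filters: list of (lo, hi) wavelength pairs;
    templates: dict name -> (wave, flux) arrays."""
    best = (None, None, np.inf)
    for z in z_grid:
        for name, (wave, flux) in templates.items():
            model = np.array([synth_flux(wave, flux, z, lo, hi) for lo, hi in filters])
            if not model.any():
                continue
            # free overall normalization: analytic least-squares scale factor
            scale = np.sum(model * obs_flux / obs_err**2) / np.sum(model**2 / obs_err**2)
            chi2 = np.sum(((obs_flux - scale * model) / obs_err)**2)
            if chi2 < best[2]:
                best = (z, name, chi2)
    return best
```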
## 3. Stony Brook Faint Galaxy Redshift Survey
Our analysis of the HDF and HDF–S WFPC2 and NICMOS fields constitutes a survey of galaxies to the faintest energy flux density and surface brightness limits currently accessible. Properties of the redshift survey are as follows:
First, we have determined nine- or 12-band photometric redshifts of faint galaxies in three deep fields.
Second, we have selected galaxies at both optical and infrared wavelengths, in two or more of the F814W, F160W, $`H`$, and $`K`$ bands (depending on field). (We have related selection in different bands by adopting the spectral energy distribution of a star-forming galaxy).
Third, we have fully characterized the survey area versus depth relations, as functions of both energy flux density and surface brightness.
Fourth, we have established properties of the extremely faint galaxy population by using a maximum-likelihood parameter estimation technique and a bootstrap resampling parameter uncertainty estimation technique. The derived parameter uncertainties explicitly account for the effects of photometric error, sampling error, and cosmic dispersion with respect to the spectrophotometric templates.
The Stony Brook faint galaxy redshift survey includes nearly 3000 faint galaxies, of which nearly 1000 galaxies are of redshift $`z>2`$ and more than 50 galaxies are of redshift $`z>5`$ (ranging up to and beyond $`z=10`$). The depth and scope of the survey is summarized in Figure 2, which shows redshift distributions of all galaxies identified in the HDF and HDF–S WFPC2 and NICMOS fields. The redshift distributions of galaxies identified in the HDF and HDF–S WFPC2 field are characterized by broad peaks at redshift $`z1`$ and long tails extending to redshifts $`z>5`$. Further, the distributions are statistically different from one another (with the HDF–S WFPC2 field exhibiting a statistically significant excess of galaxies of redshift $`z>2`$ compared with the HDF), and both exhibit statistically significant large-scale fluctuations. The redshift distribution of galaxies identified in the HDF–S NICMOS field is characterized by a broad peak at redshift $`z1`$ and a long tail extending to redshifts $`z>10`$.
## 4. Some High-Redshift Galaxies
Examples of some high-redshift galaxies are shown in Figure 3, which plots observed and modeled spectral energy distributions and redshift likelihood functions of galaxies identified in the HDF–S WFPC2 and NICMOS fields.
The top group of panels of Figure 3 shows four galaxies of redshift $`3<z<4`$ and near-infrared continuum magnitude $`AB(8140)\approx 25`$, and the middle group of panels of Figure 3 shows four galaxies of redshift $`5<z<6`$ and near-infrared continuum magnitude $`AB(8140)\approx 26`$. In each case, the spectral energy distribution shows unambiguous evidence of the $`\mathrm{Ly}\alpha `$-forest and Lyman-limit decrements, and the redshift likelihood function is very sharply peaked, indicating that, of all the spectrophotometric models considered, the appropriately redshifted spectrophotometric template provides the only plausible fit to the observations. We believe that the redshifts indicated in the top and middle groups of panels of Figure 3 are established with essentially complete certainty—and with substantially greater certainty than has been or could be achieved by means of spectroscopic observations of galaxies of the same redshifts and continuum magnitudes. The galaxies shown in the top and middle groups of panels of Figure 3 are unexceptional, and results shown for these galaxies are completely representative of results obtained for other similar galaxies.
Results of Figure 1 indicate that at redshifts $`3<z<4`$, the RMS measurement uncertainty of the photometric redshift technique is $`\mathrm{\Delta }z\approx 0.3`$ or $`\mathrm{\Delta }z/(1+z)\approx 10\%`$, which we believe results primarily from stochastic variations in the density of the $`\mathrm{Ly}\alpha `$ forest among different lines of sight. Results of Figure 1 indicate (albeit with limited statistical certainty) that at redshifts $`5<z<6`$ the RMS measurement uncertainty of the photometric redshift technique is $`\mathrm{\Delta }z\approx 0.15`$ or $`\mathrm{\Delta }z/(1+z)\approx 3\%`$, which we believe is superior to results at redshifts $`3<z<4`$ because almost complete absorption in the $`\mathrm{Ly}\alpha `$ forest allows for less stochastic variations in the density of the $`\mathrm{Ly}\alpha `$ forest among different lines of sight.
The bottom group of panels of Figure 3 shows four galaxies of best-fit photometric redshift measurement $`z>6`$ and near-infrared continuum magnitude $`AB(16,000)27`$, including two galaxies (galaxies B and C) that we identified previously as candidate extremely high redshift galaxies on the basis of ground-based near-infrared measurements (Lanzetta, Yahil, & Fernández-Soto 1998). At these redshifts and continuum magnitudes, the redshift determinations are not unambiguous, and the high-redshift solutions are typically accompanied by lower-redshift solutions, of early-type galaxies of redshift $`z3`$. Additional deep imaging observations of these galaxies are needed to establish their redshifts with certainty.
## 5. The Galaxy Luminosity Function at Redshifts $`z>2`$
We have modeled the rest-frame 1500 Å luminosity function of galaxies of redshift $`z>2`$ by adopting an evolving Schechter luminosity function
$$\mathrm{\Phi }(L,z)=\mathrm{\Phi }_{}/L_{}(z)[L/L_{}(z)]^\alpha \mathrm{exp}[-L/L_{}(z)]$$
(1)
with
$$L_{}(z)=L_{}(z=3)\left(\frac{1+z}{4}\right)^\beta .$$
(2)
The best-fit parameters for a simultaneous fit to the HDF and HDF–S WFPC2 and NICMOS fields (where we have related selection in different bands by adopting the spectral energy distribution of a star-forming galaxy) are $`\mathrm{\Phi }_{}=0.004\pm 0.001h^3`$ Mpc<sup>-3</sup>, $`L_{}=(2.7\pm 0.3)\times 10^{28}h^{-2}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>, $`\alpha =-1.49\pm 0.03`$, and $`\beta =1.2\pm 0.3`$. The best-fit model is compared with the observations in Figure 4, which shows the cumulative galaxy surface density versus redshift and magnitude for galaxies selected in the F814W and F160W bands.
From a practical point of view, Figure 4 presents our best measurements and models of the empirical galaxy surface density versus redshift and near-infrared magnitude. The most striking result of Figure 4 is that galaxies identified by our analysis at the highest redshifts $`z>7`$ (which are detected only at the faintest F160W magnitudes $`AB_{}^>\mathrm{\hspace{0.25em}28}`$) are predicted by a straightforward extrapolation of a plausible model of the high-redshift galaxy luminosity function. For our analysis to have uncovered no galaxies of redshift $`z>7`$ would have implied rapid evolution of the galaxy luminosity function at redshifts $`z>6`$.
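The parameterization of eqs. (1) and (2) is easy to evaluate directly. The sketch below computes the luminosity function and its first moment (the comoving rest-frame ultraviolet luminosity density) at a few redshifts, using the best-fit numbers quoted above with $`h=1`$ and an assumed lower integration limit of $`0.01L_{}`$; it is meant only as an illustration of the adopted functional form.

```python
import numpy as np

phi_star = 0.004            # h^3 Mpc^-3
L_star_z3 = 2.7e28          # h^-2 erg s^-1 Hz^-1, the quoted L_*(z=3)
alpha, beta = -1.49, 1.2

def L_star(z):
    return L_star_z3 * ((1.0 + z) / 4.0)**beta               # eq. (2)

def schechter(L, z):
    x = L / L_star(z)
    return phi_star / L_star(z) * x**alpha * np.exp(-x)      # eq. (1)

for z in (2.0, 4.0, 6.0):
    L = np.logspace(np.log10(0.01 * L_star(z)), np.log10(50.0 * L_star(z)), 2000)
    rho_uv = np.trapz(L * schechter(L, z), L)                 # luminosity density above 0.01 L_*
    print("z = %.0f  rho_UV ~ %.2e erg s^-1 Hz^-1 Mpc^-3" % (z, rho_uv))
```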
## 6. Effects of Cosmological Surface Brightness Dimming
Results of the previous section indicate that the galaxy luminosity function evolves only mildly at redshifts $`z>2`$, i.e. as $`(1+z)^\beta `$ with $`\beta \sim 1`$. But due to $`(1+z)^3`$ cosmological surface brightness dimming, the measured luminosities of extended objects decrease rapidly with increasing redshift, even if the actual luminosities of the objects remain constant. For this reason, we consider it more or less meaningless to interpret the galaxy luminosity function (or its moments) over a redshift interval spanning $`z=2`$ through $`z=10`$ without explicitly taking account of surface brightness effects.
To make explicit the effects of cosmological surface brightness dimming on observations of high-redshift galaxies, we have constructed the “star formation rate intensity distribution function” $`h(x)`$. Specifically, we consider all pixels contained within galaxies on an individual pixel-by-pixel basis. Given the redshift of a pixel (which is set by the photometric redshift of the host galaxy), an empirical $`k`$ correction (which is set by the model spectral energy distribution of the host galaxy) and a cosmological model determine the rest-frame 1500 Å luminosity of the pixel, and an angular plate scale and a cosmological model determine the proper area of the pixel. Adopting a Salpeter initial mass function to convert the rest-frame 1500 Å luminosity to the star formation rate and dividing the star formation rate by the proper area yields the “star formation rate intensity” $`x`$ of the pixel. Summing the proper areas of all pixels within given star formation rate intensity and redshift intervals, dividing by the star formation rate intensity interval, and dividing by the comoving volume then yields the “star formation rate intensity distribution function,” which we designate as $`h(x)`$. The star formation rate intensity distribution function $`h(x)`$ is exactly analogous to the QSO absorption line systems column density distribution function $`f(N)`$ (as a function of neutral hydrogen column density $`N`$). In terms of the star formation rate intensity distribution function, the unobscured cosmic star formation rate density $`\dot{\rho }_s`$ (or equivalently the rest-frame ultraviolet luminosity density) is given by
$$\dot{\rho }_s=\int _0^{\infty }xh(x)𝑑x.$$
(3)
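In practice the construction of $`h(x)`$ is a weighted histogram over pixels. The sketch below shows the bookkeeping for a single redshift bin, assuming that per-pixel star formation rates, proper areas, and the comoving volume of the shell have already been computed; all numbers in the toy usage are invented.

```python
import numpy as np

def sfr_intensity_distribution(sfr, area, comoving_volume, x_edges):
    """h(x): proper area per unit star formation rate intensity per unit comoving
    volume, built pixel-by-pixel as described above (single redshift bin assumed)."""
    x = sfr / area                                       # M_sun yr^-1 kpc^-2 per pixel
    area_sum, _ = np.histogram(x, bins=x_edges, weights=area)
    h = area_sum / np.diff(x_edges) / comoving_volume
    x_mid = 0.5 * (x_edges[1:] + x_edges[:-1])
    sfr_density = np.sum(x_mid * h * np.diff(x_edges))   # discrete version of eq. (3)
    return x_mid, h, sfr_density

# toy usage with invented numbers: 10^4 pixels of 0.05 kpc^2 in a 10^5 Mpc^3 shell
rng = np.random.default_rng(0)
area = np.full(10_000, 0.05)
sfr = area * rng.lognormal(mean=-3.0, sigma=1.0, size=10_000)
print(sfr_intensity_distribution(sfr, area, 1.0e5, np.logspace(-4.0, 1.0, 26))[2])
```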
Results are shown in Figure 5, which plots the star formation rate intensity distribution function $`h(x)`$ versus star formation rate intensity $`x`$ determined from galaxies identified in the HDF and HDF–S NICMOS field. Several results are apparent on the basis of Figure 5: First, the star formation rate intensity threshold of the survey is an extremely strong function of redshift, ranging from $`x_{\mathrm{min}}\approx 5\times 10^{-4}`$ $`M_{\odot }`$ yr<sup>-1</sup> kpc<sup>-2</sup> at $`z\sim 0.5`$ to $`x_{\mathrm{min}}\approx 1`$ $`M_{\odot }`$ yr<sup>-1</sup> kpc<sup>-2</sup> at $`z\sim 6`$. We conclude that cosmological surface brightness dimming effects play a dominant role in setting what is observed at redshifts $`z>2`$. Second, the comoving number density of high intrinsic surface brightness regions increases monotonically with increasing redshift. We conclude that the comoving number density of high intrinsic surface brightness regions (or equivalently of high star formation rate density regions) increases monotonically with increasing redshift. (See also Pascarelle, Lanzetta, & Fernández-Soto 1998). Third, at redshifts $`z\lesssim 1.5`$ \[at which $`h(x)`$ is measured over a wide range in $`x`$\], the distribution is characterized by a relatively shallow slope at $`\mathrm{log}x\lesssim -1.5`$ $`M_{\odot }`$ yr<sup>-1</sup> kpc<sup>-2</sup> and by a relatively steep slope at $`\mathrm{log}x\gtrsim -1.5`$ $`M_{\odot }`$ yr<sup>-1</sup> kpc<sup>-2</sup>. These slopes are such that the bulk of the cosmic star formation rate density occurs at $`\mathrm{log}x\approx -1.5`$ $`M_{\odot }`$ yr<sup>-1</sup> kpc<sup>-2</sup>, which is measured only at redshifts $`z\lesssim 2`$. We conclude that previous estimates neglect a significant or dominant fraction of the ultraviolet luminosity density of the universe due to surface brightness effects and that the rest-frame ultraviolet luminosity density (or equivalently the cosmic star formation rate density) has not yet been measured at redshifts $`z\gtrsim 2`$.
This last point is illustrated in Figure 6, which shows the ultraviolet luminosity density of the universe versus redshift measured to various intrinsic surface brightness thresholds. Specifically, Figure 6 shows the ultraviolet luminosity density of the universe versus redshift measured to intrinsic surface brightness thresholds that could be detected in the HDF at all redshifts to $`z=5.0`$, to $`z=3.4`$, to $`z=2.3`$, to $`z=1.6`$, and to $`z=1.1`$. (Higher intrinsic surface brightness thresholds can be seen to higher redshifts, whereas lower intrinsic surface brightness thresholds can be seen only to lower redshifts.) Results of Figure 6 indicate that to any fixed intrinsic surface brightness threshold, the ultraviolet luminosity density of the universe increases monotonically with increasing redshift. Apparently, the ultraviolet luminosity density of the universe plausibly increases monotonically with increasing redshift to redshifts beyond $`z=5`$.
### Acknowledgments.
We thank Hy Spinrad and Daniel Stern for providing spectroscopic redshift measurements in advance of publication and acknowledge Mark Dickinson and Roger Thompson for obtaining NICMOS observations of HDF. This research was supported by NASA grant NACW–4422 and NSF grant AST–9624216 and is based on observations with the NASA/ESA Hubble Space Telescope and on observations collected at the European Southern Observatory.
## References
Benìtez, N, Broadhurst, T., Bouwens, R., Silk, J., & Rosati, P., 1999, ApJ, 515, L65
Fernández-Soto, A., Lanzetta, & Yahil, A. 1999, ApJ, 513, 34
Lanzetta, K. M., Yahil, A., & Fernández-Soto, A. 1996, Nature, 381, 759
Lanzetta, K. M., Yahil, A., & Fernández-Soto, A. 1998, AJ, 116, 1066
Pascarelle, S., Lanzetta, K. M., & Fernández-Soto, A. 1998, ApJ, 508, L1
Yahata, N., Lanzetta, K. M., Chen, H.-W., Fernández-Soto, A., Pascarelle, S., Puetter, R., & Yahil, A. 2000, ApJ, submitted
# Current Status of the Microlensing Surveys
## 1. Introduction
Three groups: EROS, MACHO and OGLE, reported the detection of their first gravitational microlensing events in September of 1993 (Aubourg et al. 1993, Alcock et al. 1993, Udalski et al. 1993). By now several hundred microlensing events have been discovered, most of them towards the galactic bulge, some towards the Magellanic Clouds, and a few in other directions. The events are detected in real time, and are reported on the World Wide Web at the rate of approximately two per week. A mini-collaboration, DUO, reported the detection of 13 microlensing events (Alard & Guibert, 1997), one of them due to a double lens (Alard et al. 1995). The microlensing projects monitor tens of millions of stars, they have discovered over one hundred thousand variables, and over ten thousand pulsating stars. Information may be found at the Web sites:
OGLE, Poland: http://www.astrouw.edu.pl/~ftp/ogle
OGLE, USA: http://www.astro.princeton.edu/~ogle
MACHO: http://wwwmacho.mcmaster.ca/
EROS: http://www.lal.in2p3.fr/recherche/eros/
Several new groups joined the search for, as well as the follow-up of the microlensing events. Their Web sites are:
MEGA: http://www.astro.columbia.edu/~arlin/MEGA
MOA: http://www.phys.vuw.ac.nz/dept/projects/moa/
AGAPE: http://cdfinfo.in2p3.fr/Experiences/AGAPE/
PLANET: http://www.astro.rug.nl/~planet/
In addition, several other projects generate a huge volume of the photometric data. The following are the Web sites I was able to find:
DIRECT: http://cfa-www.harvard.edu/~kstanek/DIRECT/
ROTSE: http://rotsei.lanl.gov/
ROTSE: http://umaxp1.physics.lsa.umich.edu/~mckay/rsv1/rsv1\_home.htm
LOTIS: http://hubcap.clemson.edu/~ggwilli/LOTIS/
ASAS: http://www.astrouw.edu.pl/~gp/html/asas/asas.html
Ystar: http://csaweb.yonsei.ac.kr/~byun/Ystar/
STARE: http://www.hao.ucar.edu:80/public/research/stare/stare\_synop.html
I expect to update the list as new projects come along, and they can be found on my home page at: http://www.astro.princeton.edu/faculty/bp.html.
In this paper some results obtained with the data generated by the microlensing projects are described. As I am not actively working on pulsating stars the choice is subjective, and some important findings may be missing. I apologize for the omissions. Fortunately, there will be many presentations at this conference by the representatives of most groups. Also, many important results may be found at the IAP conference: “Variable Stars and the Astrophysical Returns of the Microlensing Searches” in Paris in 1996 (cf. Ferlet et al. 1997). The likely prospects for the future of large scale automated surveys is also described in this paper.
## 2. Some Results Related to Pulsating Stars
There have been many papers about pulsating stars by the EROS and MACHO teams (Alcock et al. 1995, 1996, 1997a,b, 1998, 1999a, Bauer et al. 1999, Beaulieu et al. 1997a,b, Sasselov et al. 1997). The single most spectacular result was the period –luminosity ($`PL`$) diagram for pulsating stars in the Large Magellanic Cloud by the MACHO collaboration. For the first time the two sequences of Cepheids: those pulsating in the fundamental mode and those pulsating in the first overtone were very clearly separated. In addition, several distinct $`PL`$ sequences for red giants were also seen for the first time in the MACHO diagram. I trust it will be presented later at this conference.
Another spectacular result was obtained by the DUO collaboration, at the time a single graduate student, Christophe Alard. Using several hundred Schmidt camera plates from ESO covering a $`5\mathrm{°}\times 5\mathrm{°}`$ square in the sky near the galactic bulge, Alard (1996) discovered $`15,000`$ periodic variables and selected $`1,500`$ RRab stars. Using two photometric bands he constructed a reddening-independent histogram of their distance moduli. Two peaks appeared in the distribution: one corresponding to the distance of $`8`$ kpc, the second to the distance of $`24`$ kpc, i.e. to the galactic bulge and to the Sagittarius dwarf galaxy, respectively. The RR Lyrae variables turned out to be excellent tracers of the extent of that recently discovered galaxy, extending its size well beyond the initial estimate (Ibata et al. 1995).
Somewhat less spectacular, but very useful, was the photometry of over 200 RR Lyrae variables in the Sculptor dwarf galaxy by the OGLE collaboration (Kaluzny et al. 1995). This was promptly used by Kovács & Jurcsik (1996) to establish a very good correlation between the shape of the RR Lyrae light curves and the absolute magnitudes.
The recent series of OGLE papers on Cepheids in the Magellanic Clouds (Udalski et al. 1999a,b,c,d) made a catalog of $`1,300`$ Cepheids in the LMC, and a total of $`340,000`$ photometric measurements in standard $`I`$, $`V`$, and $`B`$-bands, accessible over the Internet (Udalski et al. 1999d). Note, that just several months ago a paper was posted on astro-ph (Lanoix et al. 1999) presenting “an exhaustive compilation of all published data of extragalactic Cepheids”. The total number of measurements in the compilation was about 3,000, i.e. two orders of magnitude fewer than in the paper by Udalski et al. (1999d).
The $`PL`$ relation for the LMC Cepheids based on the OGLE public domain data is shown in Fig. 1. The magnitude $`W_I`$ is the interstellar reddening-independent combination of $`I`$-band and $`V`$-band photometry:
$$W_I\equiv I-1.55(V-I).$$
$`(1)`$
All these refer to the intensity-averaged mean magnitudes.
For each Cepheid a difference $`\mathrm{\Delta }W_{OC}`$ can be calculated between the actual value of $`W_I`$ and the line fitted to the fundamental mode pulsators, given by Udalski et al. (1999c, Table 1) as
$$W_{I,C}=-3.277\mathrm{log}P+15.815,\mathrm{\Delta }W_{OC}\equiv W_I-W_{I,C},$$
$`(2)`$
where $`P`$ is the Cepheid period, in days. A histogram of the $`\mathrm{\Delta }W_{OC}`$ values is shown in Fig. 2. A very clear separation of the fundamental mode and the first overtone mode pulsators is apparent in both Figures.
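The separation visible in Fig. 2 follows from a one-line computation per star. The sketch below applies eqs. (1) and (2) to a few made-up $`(I,V,P)`$ triples chosen only to mimic the behavior of LMC Cepheids; first-overtone pulsators stand out as large negative residuals.

```python
import numpy as np

def wesenheit_I(I, V):
    """Reddening-independent index of eq. (1)."""
    return I - 1.55 * (V - I)

def delta_W_OC(I, V, P):
    """Offset from the fundamental-mode line of eq. (2), Udalski et al. (1999c)."""
    return wesenheit_I(I, V) - (-3.277 * np.log10(P) + 15.815)

# invented (I, V, P[days]) triples, roughly mimicking LMC Cepheids
stars = np.array([(13.60, 14.30, 10.0),   # fundamental-mode pulsator
                  (14.00, 14.65, 4.0),    # first-overtone pulsator (brighter at fixed P)
                  (14.50, 15.13, 5.0)])
dW = delta_W_OC(stars[:, 0], stars[:, 1], stars[:, 2])
print(np.round(dW, 2))    # the overtone pulsator shows up at dW well below zero
```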
## 3. Prospects for the Near Future
In my view the single most important outcome of the microlensing searches is the practical demonstration that it is feasible to monitor tens of millions of stars per night, making up to $`10^{10}`$ photometric measurements per year, using very modest instrumentation: a 1-m class telescope, a CCD camera, and several PC-class computers. There can be no doubt that this technology will develop into much larger surveys.
The fates of the current projects vary from case to case. The largest of them, MACHO, will suffer the ultimate Y2K problem: it will be terminated on December 31, 1999. As far as I know, EROS will continue till at least 2002, while there is no time limit for OGLE. Currently OGLE uses a single 2K $`\times `$ 2K CCD camera in a drift scan mode (cf. Udalski et al. 1997, OGLE-II) to monitor $`30\times 10^6`$ stars per night. A new mosaic 8K $`\times `$ 8K CCD camera will become operational in the year 2000, and it will be used in a still-frame mode. The data rate will increase by a factor $`10`$, leading to the detection of $`500`$ microlensing events per year with the OGLE-III system.
While hardware upgrades are important, the same is true about software. The first attempt to use the image subtraction technique to search for gravitational microlensing was published by Tomaney & Crotts (1996). Preliminary application of image subtraction by the MACHO collaboration increased the number of detected microlensing events by a factor $`2`$ (Alcock et al. 1999b,c).
New, very powerful image subtraction software has been recently developed by Alard & Lupton (1997) using OGLE data. It has been applied to the old OGLE microlensing data, dramatically improving photometric accuracy (Alard 1999a). Olech et al. (1999) used it to detect RR Lyrae variables all the way to the center of a globular cluster M5 with ground-based CCD images. Woźniak et al. (1999) developed with it a real-time photometry system for Huchra’s lens (2237+0305). A full data pipeline based on Alard & Lupton (1997) and Alard (1999b) software is currently under development by Woźniak (1999). The goal is to analyze all OGLE-II galactic bulge data. It is likely that incorporation of this software in the forthcoming OGLE-III system will result in a real-time detection rate of $`1,000`$ microlensing events per year.
Image subtraction generates blending-independent determination of stellar flux variations. This naturally leads to a period – flux amplitude ($`PA`$) rather than to $`PL`$ relation for Cepheids. Let the flux of radiation from a star and its full flux amplitude be defined for the $`I`$-band as
$$F_I\equiv 10^{-0.4I},\mathrm{\Delta }F_I=F_{I,max}-F_{I,min},\mathrm{\Delta }m_{F,I}\equiv -2.5\mathrm{log}_{10}\left(\mathrm{\Delta }F_I\right).$$
$`(3)`$
Note, that $`\mathrm{\Delta }F_I`$ is the quantity measured using the image subtraction method, and $`\mathrm{\Delta }m_{F,I}`$ is the same quantity expressed in magnitudes, which are the units preferred by astronomers. Also note, that both quantities: $`\mathrm{\Delta }F_I`$ and $`\mathrm{\Delta }m_{F,I}`$ are independent of blending, and can be reliably measured in the most crowded environments, without any bias. Unfortunately, the period - flux amplitude relation has significantly more scatter than the period - luminosity relation, as it is apparent in Fig. 3.
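Evaluating eq. (3) from a difference-imaging light curve is straightforward; the following minimal sketch uses an invented light curve purely to illustrate the bookkeeping.

```python
import numpy as np

def amplitude_mag(delta_flux):
    """Delta m_{F,I} of eq. (3): the full flux amplitude expressed in magnitudes."""
    return -2.5 * np.log10(delta_flux)

# invented difference-imaging light curve: flux differences from the reference image,
# in the same linear units as F_I = 10^(-0.4 I)
dflux = 1.0e-6 * np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
delta_F = dflux.max() - dflux.min()      # Delta F_I, unaffected by blended neighbours
print("Delta m_F,I = %.2f mag" % amplitude_mag(delta_F))
```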
The blending of a Cepheid with images of nearby bright stars makes it appear brighter than it really is. Recently, Mochejska et al. (1999) and Stanek & Udalski (1999) pointed out that this may lead to a systematic underestimate of distance moduli of the most distant galaxies studied by the HST Key Project. We expect that a $`PA`$ relation will eliminate this systematic error. However, it is not known if the $`PA`$ relation is as universal as the $`PL`$ relation appears to be.
The microlensing searches do not have a monopoly for generating huge sets of photometric measurements. DIRECT is a project searching for pulsating and eclipsing variables in nearby galaxies M31 and M33 (Kaluzny et al. 1998, 1999, Stanek et al. 1999). A large number of Cepheids discovered by this project will provide the first opportunity to verify the $`PA`$ relation when combined with the OGLE results for the Magellanic Clouds.
Several large projects using very small instruments are searching for optical flashes which may accompany gamma-ray bursts (ROTSE: Akerlof et al. 1999; LOTIS: Williams et al. 1999). They cover all sky every night down to $`\sim 14`$ mag. The first spectacular result was obtained by the ROTSE collaboration on 1999 January 23: the detection of an optical flash brighter than 9 mag from the burst GRB 990123, which was at the redshift $`z=1.6`$ (Akerlof et al. 1999). The archive of ROTSE is a gold mine for studies of many kinds of variable objects, including pulsating stars. In the 2000 or so square degrees analyzed so far by McKay and his collaborators $`\sim 2000`$ variables were found, $`\sim 1800`$ of them new (McKay 1999).
Another project, ASAS, is in its early stages of development (Pojmański 1997, 1998), but with the intent to cover the whole sky with nightly observations of all objects brighter than $`14`$ mag initially, and to go deeper with time. This project has real-time photometry implemented from the start, and it has already demonstrated that even among stars as bright as $`1112`$ mag, over 70% of all variables have not been cataloged, and they are waiting to be discovered.
## 4. Prospects for the Distant Future
There are many ideas about future expansion of sky monitoring, and many projects at various stages of implementation, planning or dreaming. The most ambitious of these is the idea of the Dark Matter Telescope (DMT), as proposed by Angel et al. (1999) and by Armandroff et al. (1999). This dream is to build an 8.4-m telescope with a 3° field imaged at f/1.25, with the focal surface filled with $`1.4\times 10^9`$ pixels in a very large mosaic CCD camera. The primary science objective is the determination of the distribution of dark matter in the universe by measuring weak gravitational lensing of galaxies over a large part of the sky. However, such an instrument could perform other spectacular tasks, as it could reach $`V=24`$ mag in a 20-s exposure, and cover the whole sky to this depth in about 4 clear nights. While the hardware sounds very impressive, the software capable of handling such a data rate in a meaningful way may be even more challenging. Data access will not be easy either.
The DMT is an undertaking on such a huge scale that a research team even larger than the MACHO or EROS teams will be needed, first to obtain the necessary funds, next to implement the project, and finally to get science out of it. Does this imply that there will be no room for small teams in the astronomical future? I do not think so. With all its power a DMT-like instrument will be useless for the discovery of optical flashes like the one associated with the GRB 990123 (Akerlof et al. 1999). In order to detect such flashes independently of gamma-ray burst triggers a very different and far less expensive instrument is needed to continuously monitor the whole sky down to 14 mag or so. Note that ROTSE and LOTIS cover $`\sim 400`$ square degrees in a single exposure with four telephoto lenses, each equipped with a 2K $`\times `$ 2K CCD camera. To cover $`10,000`$ square degrees we need $`\sim `$ 25 small systems like ROTSE or LOTIS. The likely cost of so many instruments would be in the relatively modest range of $`1`$–$`2\times 10^6`$ dollars. This level of funding is within reach of a small team. For example, the total hardware cost of the OGLE was $`1.3\times 10^6`$ dollars.
The large membership of the MACHO and EROS teams created a misleading impression that this is the only way for the future. However, the current data rate of OGLE is within a factor 2 or so of the current data rate of MACHO, yet the OGLE team has only 7 members, most of them students. Next year the OGLE data rate will increase by a factor $`10`$, with one or two more students joining the project. ASAS has only one senior person and one part-time student, yet within a year its data rate is likely to equal the current data rate from either ROTSE or LOTIS, and all data will be processed in real time. The most spectacular example of a capability of a small team has been demonstrated by DUO, with nearly all data analysis and writing of science papers done by just one graduate student, Christophe Alard.
Obviously, it is not possible for a small team to make full use of all the data. Hence, OGLE (and ASAS in future) makes its data public domain as soon as feasible. The bottleneck is quality control. However, once the data pipeline is fully debugged, it might be possible for the software to handle quality control nearly in real time. It might be possible to make most data public domain right away. Would this pose a threat to the scientific recognition of a small team? I do not think so. Whoever uses the data will have to give full credit to the source of the data. Also, members of a small team could prepare new software needed to take full advantage of any new hardware upgrades ahead of time, and this way have a head start over the competition. It is conceivable that the scientific recognition of a team making good data public domain could be strongly enhanced by the scientific results obtained with their data by others.
Several bottlenecks have to be overcome. The most difficult and expensive technical bottleneck is software, as has been learned by large projects like the Sloan Digital Sky Survey (SDSS) and the 2MASS. It has not been demonstrated yet that a truly large data set can be made public domain at a reasonable cost. Perhaps even more difficult are psychological and sociological problems: will the authors of useful public domain data get enough recognition to be offered tenured positions at major universities? I think these are very important issues. Most of us enjoy working in small groups, and find it much less pleasant and less efficient to work almost anonymously in an industrial size mega-team. Also, managing large teams is very complicated and tedious, with huge overheads in time, funds, and loss of satisfaction. I think that whenever a project can be meaningfully broken down into a number of smaller parts it should be divided into such parts. It is much better to reference the work of separate but collaborating groups, rather than to have a paper with dozens of co-authors, with no clear division of responsibility and credit for various parts of the work. Time will tell if small or large teams will lead in scientific discoveries in the future.
### Acknowledgments.
I am very grateful to Dr. A. Udalski for providing the data on which Fig. 3 is based. It is a pleasure to acknowledge the support by NSF grants AST-9530478 and AST-9820314.
## References
Akerlof, C., Balsano, R. & Barthelmy, S. et al. (ROTSE) 1999, Nature, 398, 400
Alard, C. (DUO) 1996, A&A, 458, L17
Alard, C. (OGLE) 1999a, A&A, 343, 10
Alard, C. (OGLE) 1999b, astro-ph/9903111
Alard, C., Guibert, J. (DUO) 1997, A&A, 326, 1
Alard, C. & Lupton, R. H. (OGLE) 1997, ApJ, 503, 325
Alard, C., Mao, S. & Guibert, J. (DUO) 1995, A&A, 300, L17
Alcock, C., Akerlof, C. W., Allsman, R. A. et al. (MACHO) 1993, Nature, 365, 621
Alcock, C., Allsman, R. A., Axelrod, T. S. et al. (MACHO) 1995, AJ, 109, 1653
Alcock, C., Allsman, R. A., Axelrod, T. S. et al. (MACHO) 1996, AJ, 111, 1146
Alcock, C., Allsman, R. A., Alves, D. et al. (MACHO) 1997a, ApJ, 474, 217
Alcock, C., Allsman, R. A., Alves, D. et al. (MACHO) 1997b, ApJ, 482, 89
Alcock, C., Allsman, R. A., Alves, D. et al. (MACHO) 1998, AJ, 115, 1921
Alcock, C., Allsman, R. A., Alves, D. et al. (MACHO) 1999a, ApJ, 511, 185
Alcock C., Allsman, R. A., Alves, D. et al. (MACHO) 1999b, ApJ, 521, 602
Alcock C., Allsman, R. A., Alves, D. et al. (MACHO) 1999c, ApJS, 124, 171
Angel, R., Lesser, M., Sarlot, R., & Dunham, T. (DMT) (1999)
Armandroff, T., Bernstein, G., Dell’Antonio, I. et al. (DMT) (1999)
Aubourg, E., Bareyre, P., Brehin, S. et al. (EROS) 1993, Nature, 365, 623
Bauer, F., Afonso, C., Albert, J. N. et al. (EROS) 1999, A&A, 348, 175
Beaulieu, J. P., Krockenberger, M., Sasselov, D. D. et al. (EROS) 1997a, A&A, 318, L47
Beaulieu, J. P., Sasselov, D. D., Renault, C. et al. (EROS) 1997b, A&A, 321, L5
Ferlet, R., Maillard, J.-P. & Raban, B. (Editors) 1997, Variable Stars and the Astrophysical Returns of the Microlensing Searches, Cedex, France, Editions Frontieres
Ibata, R. A., Gilmore, G. & Irwin, M. J. 1995, MNRAS, 277, 781
Kaluzny, J., Kubiak, M., Szymański, M. et al. (OGLE) 1995, A&AS, 112, 407
Kaluzny, J., Stanek, K. Z., Krockenberger, M. et al. (DIRECT) 1998, AJ, 115, 1016
Kaluzny, J., Mochejska, B. J., Stanek, K. Z. et al. (DIRECT) 1999, AJ, 118, 346
Kovács, G. & Jurcsik, J. 1996, ApJ, 466, L17
Lanoix, P., Garnier, R., Paturel, G. et al. 1999, astro-ph/9904027
McKay, T. 1999, private communication
Mochejska, B. J., Macri, L. M., Sasselov, D. D., & Stanek, K. Z. (DIRECT) 1999, astro-ph/9908293
Olech, A., Woźniak, P. R., Alard, C. et al. 1999, astro-ph/9905065
Pojmański, G. (ASAS) 1997, Acta Astronomica, 47, 467
Pojmański, G. (ASAS) 1998, Acta Astronomica, 48, 35
Sasselov, D. D., Beaulieu, J. P., Renault, C. et al. (EROS) 1997, A&A, 324, 471
Stanek, K. Z. & Udalski, A. (OGLE) 1999, astro-ph/9909346
Stanek, K. Z., Kaluzny, J., Krockenberger, M. et al. (DIRECT) 1999, AJ, 117, 2810
Tomaney, A. B., & Crotts, A. P. S. 1996, AJ, 112, 2872
Udalski, A., Szymański, M., Kaluzny, J. et al. (OGLE) 1993, Acta Astronomica, 43, 289
Udalski, A., Szymański, M. & Kubiak, M. (OGLE) 1997, Acta Astronomica, 47, 319
Udalski, A., Soszyński, I., Szymański, M. et al. (OGLE) 1999a, Acta Astronomica, 49, 1
Udalski, A., Soszyński, I., Szymański, M. et al. (OGLE) 1999b, Acta Astronomica, 49, 45
Udalski, A., Szymański, M., Kubiak, M. et al. (OGLE) 1999c, Acta Astronomica, 49, 201
Udalski, A., Soszyński, I., Szymański, M. et al. (OGLE) 1999d, Acta Astronomica, 49, 223
Williams, G. G., Park, H. S., Ables, R. et al. (LOTIS) 1999, ApJ, 519, L25
Woźniak, P. (OGLE) 1999, in preparation
Woźniak, P., Alard, C., Udalski, A. et al. (OGLE) 1999, astro-ph/9904329
# Overview of Grain Models
## 1 Introduction
A comprehensive model for interstellar grains should provide detailed information on the composition, size distribution, optical properties, and physical structure and shape of interstellar grains, fully consistent with the rapidly expanding list of observational constraints, including extinction, scattering, polarization, and emission properties, spectroscopic absorption and emission features, and the abundances of refractory elements in the interstellar medium (ISM) and their observed depletion pattern. In addition, such a model should also have predictive capabilities, allowing one to correctly anticipate observational phenomena not yet seen. Despite substantial progress in the nearly seven decades since the recognition of interstellar dust as a general astrophysical phenomenon (Trumpler 1930), such a comprehensive dust model does not yet exist. Instead, we have several quasi-comprehensive models which satisfy at least some sub-set of observational constraints, and an even larger number of more limited, constraint-specific models, which have been designed to explain single observational constraints without even attempting to approach comprehensiveness. In addition, much theoretical work has been directed at understanding the sources and processes of grain formation and the mechanisms involving grain-grain and grain-gas interactions in the interstellar environment, which again must be consistent with the observed gas-to-dust ratio in interstellar clouds and the spectroscopic signatures of various grain components.
A study of the model-to-model differences provides a vivid illustration of the fundamental issues concerning interstellar grains, about which there remains substantial disagreement. These issues include uncertainties about the details of the size distribution, especially near the upper and lower ends of the distribution, questions about grain composition, especially the amount, nature, and structure of carbon-based grains, and disagreements about the physical grain structure.
During the past decade, several comprehensive reviews of the interstellar dust problem have appeared in print, foremost among these the monograph by Whittet (1992) and the reviews by Mathis (1993) and by Dorschner and Henning (1995). The interested reader is advised to go to these sources for additional background information. The present review will focus entirely upon dust in the diffuse interstellar medium of the Milky Way galaxy, and it will aim to summarize the current status of observational constraints with emphasis on more recent developments, followed by a discussion of the principal semi-comprehensive models and their abilities to match these constraints. I will then review important recent work and theoretical explorations related to dust models, which will highlight the impressive rate at which new observational advances help define the detailed properties and characteristics of interstellar grains.
## 2 Observational Constraints
### 2.1 Chemical Abundances and Depletions
The absence of interstellar absorption features attributable to cosmic ices along interstellar lines of sight traversing only the diffuse ISM indicates that the dust in this environment is composed only of refractory solids, consisting mainly of the relatively abundant heavy elements C, O, Fe, Si, Mg, S, Ca, Al, and Ni (Whittet 1992). The abundance of these elements relative to hydrogen in the current interstellar medium and the degree to which these elements are depleted from the gas phase determine the overall chemical composition of dust grains and their combined mass relative to the interstellar gas. Until quite recently, it was accepted practice to equate the interstellar relative abundance of heavy elements with that encountered in the Sun and the solar system. Recent evidence, based on abundance determinations in young stars, i.e. objects formed more recently from the local ISM, on abundance determinations in HII regions, on studies of the distribution of the heavy-element abundance in solar-type stars at the Sun’s galactocentric distance, and on abundance determinations of undepleted elements in the diffuse ISM itself, has led to suggestions that the actual abundances of heavy elements in the present Galactic ISM, the so-called reference abundances, are lower than the earlier assumed solar abundances, possibly by about 1/3 of the total (Snow & Witt 1996). Combined with new observations of the gas-phase abundances along diffuse ISM lines-of-sight (e.g. Fitzpatrick 1996; Sofia et al. 1997; Meyer et al. 1998) obtained with the GHRS on HST, tight new constraints on the chemical composition of interstellar grains, especially on carbonaceous grains (Snow & Witt 1995; Meyer 1997), have emerged. All current dust models have some difficulty in meeting these constraints, both with regard to the carbon abundance and with regard to matching the implied interstellar mass extinction coefficient for interstellar grains.
### 2.2 Interstellar Extinction
Extinction by dust in interstellar space adds two important constraints which help to specify the nature of grains. The wavelength dependence of extinction, now known over a wavelength interval extending from 0.1 $`\mu `$m to $`\sim `$1000 $`\mu `$m, provides important information about the size distribution of dust particles. The total amount of optical extinction for a sightline with known hydrogen column density is the prime determinant for the interstellar dust-to-gas ratio. Observed reddening throughout the near-IR and optical wavelength ranges restricts the bulk of the interstellar dust to the sub-$`\mu `$m size range. The continued non-linear rise of extinction throughout the far-UV points to the existence of grains small compared to UV wavelengths. The relative amounts of extinction in the visual and at far-UV wavelengths help to specify the mass ratio of very small grains absorbing mainly in the UV to the larger sub-$`\mu `$m grains contributing absorption and scattering throughout the near-IR/optical/UV spectral range. The large degree of spatial variation in this ratio (Fitzpatrick 1999) indicates that variations in the size distribution are similarly large within our Galaxy.
### 2.3 Scattering by Dust
Observations of scattering by dust can provide useful information about the nature of grains and their size distribution by yielding data on the albedo and the phase function asymmetry and their respective wavelength dependences. The high degree of linear polarization of scattered light can provide additional insight, provided the dust/source geometry is sufficiently well known. Finally, the scattering of x-rays by interstellar dust, both through the intensity level of scattered x-rays and the radial distribution of intensity in the resulting x-ray halos, constrains the composition and structure as well as the size distribution of interstellar grains, especially at the large-size end of this distribution. Examples of particularly useful scattering results are observations of the reflection nebula NGC 7023 in the 160 to 100 nm wavelength range (Witt et al. 1993), which show that the far-UV albedo declines shortward of 130 nm, suggesting that the particles responsible for the far-UV rise in extinction are mainly small absorbing grains. Similarly, studies of the nebula IC 435 (Calzetti et al. 1995) demonstrate rather definitively that the 2175 Å extinction “bump” is caused entirely by absorption and that the asymmetry of the scattering phase function increases monotonically from the visual into the far-UV range. The potential of using dust scattered x-ray haloes as a grain diagnostic has been reviewed recently by Predehl and Klose (1996), and examples of the analysis of the particularly well-observed x-ray halo of Nova Cygni 1992 are the papers by Mathis et al. (1995) and by Smith and Dwek (1998).
### 2.4 Interstellar Polarization by Aligned Grains
The wavelength dependence of partial linear and circular polarization (Martin 1989) caused by interstellar dust in the diffuse ISM reveals unique information on the shape and size distribution of alignable grains and the mechanism producing this alignment. The efficiency of polarization, i.e. its degree relative to the associated amount of visual extinction, provides a measure for the effectiveness of this alignment process. An important expansion of the polarization data base has occurred recently with the addition of measurements of ultraviolet linear interstellar polarization (Martin et al. 1999).
### 2.5 Discrete Interstellar Absorption Features
Discrete spectroscopic absorption features offer the potential of identifying specific material components of grains. Several such features are seen in the diffuse ISM. The strongest is the 2175 Å UV extinction feature, which most models attribute to absorption by carbonaceous, most likely graphitic, small grains (Draine 1989). Less controversial is the identification of the infrared interstellar absorption bands at 9.7 and 18 $`\mu `$m with the Si-O stretch and Si-O-Si bending modes in amorphous silicates and the attribution of the infrared interstellar absorption feature at 3.4 $`\mu `$m to the C-H stretch transition in aliphatic hydrocarbon solids. On the other hand, the identification of the numerous diffuse interstellar absorption bands (Sarre 1999, this volume) remains one of the most challenging unsolved problems of interstellar spectroscopy. Until these bands are positively identified, their great potential as diagnostics of physical conditions in the ISM will not be realized. Equally poorly understood is the very-broadband structure (VBS) in the visual interstellar extinction curve (Hayes et al. 1973). The VBS takes the form of a shallow depression in the extinction in the 520 - 600 nm region, seen in many Galactic lines of sight (e.g. van Breda & Whittet 1981). Efforts to connect the VBS to the occurrence of photoluminescence by grains (Duley & Whittet 1990) or to UV-characteristics of interstellar extinction (Jenniskens 1994) have not led to a conclusive identification of the VBS carrier.
### 2.6 Thermal Emissions from Dust
Interstellar grains are heated by absorption of UV/optical photons and they re-radiate this energy as thermal emission with a spectrum revealing the temperature distribution and emission characteristics of the radiating particles. The near-full extent of this emission, both spatially and in wavelength (Beichman 1987; Soifer, Houck, and Neugebauer 1987), was first seen with the Infrared Astronomical Satellite (IRAS), then further expanded with the Diffuse Infrared Background Experiment (DIRBE) (Bernard et al. 1994). In addition to the expected radiation from relatively cool (approx. 20K) wavelength-sized grains in equilibrium with the local radiation field, IRAS revealed the presence of non-equilibrium thermal emission from small and very small grains, i.e. nanoparticles, whose heat capacity is so small that the absorption of a single UV/optical photon results in large (500 - 1000K) temperature excursions. With about 35% of the entire diffuse IR radiation from the Galaxy originating with this process, the important role of nanoparticles in the size distribution of interstellar grains has now been fully recognized (Puget & Leger 1989).
### 2.7 Discrete Interstellar Emission Features
Emission features originating in interstellar grains are observed in the optical in the form of Extended Red Emission (ERE) and in the near-infrared spectral range in the form of Aromatic Hydrocarbon Bands, also referred to as the Unidentified Infrared Bands (UIB). The ERE was first seen in the peculiar reflection nebula called the Red Rectangle (Schmidt et al. 1980), where it appears as an exceptionally intense, broad emission feature extending from 540 nm to about 900 nm with a peak near 660 nm. Subsequently, the ERE feature has been observed in many other dusty environments, including reflection nebulae, planetary nebulae, HII-regions, external galaxies, and the diffuse ISM of the Milky Way (Gordon et al. 1998; Szomoru & Guhathakurta 1998). The ERE is an efficient photoluminescence process associated with interstellar grains, as shown by its close correlations with the dust column density and the intensity of the illuminating radiation field. In the diffuse ISM, at least 10% of the interstellar UV/optical photons are absorbed by the ERE carrier (Gordon et al. 1998). This calls for a carrier consisting of cosmically abundant elements, and current ideas center on silicon nanoparticles (Witt et al. 1998; Ledoux et al. 1998; Zubko et al. 1999), carbonaceous nanoparticles (Seahra & Duley 1999), and PAH molecules (d’Hendecourt et al. 1986). The UIBs, with principal emission features centered at wavelengths of 3.3, 6.2, 7.7, 8.6, and 11.3 $`\mu `$m, after having been observed mainly in dense dusty environments with abundant UV photons for many years, have now also been detected as prominent features in the diffuse ISM (Giard et al. 1989; Mattila et al. 1996; Onaka et al. 1996) through observations from the balloon experiment Arome and the infrared satellites ISO and IRTS.
### 2.8 Interstellar Grains in the Solar System
Evidence of interstellar grains is found in the solar system in two ways: either in the form of pre-solar grains extracted from primitive meteorites (Zinner 1997) and liberated from evaporating comets (Bradley et al. 1999) or in the form of present-day interstellar grains entering the solar system from interstellar space (Landgraf et al. 1999). Isotopic anomalies in presolar grains provide unique insights into the sources of certain sub-populations of grains, be they asymptotic giant branch stars or supernovae; they provide evidence that crystalline grains can form in astronomical environments and survive. A population of glassy silicate, interplanetary dust particles, known as GEMS, exhibit 9.7 $`\mu `$m silicate features closely matching those of silicates seen in astronomical sources (Bradley et al. 1999). The possibility that GEMS are examples of actual interstellar silicate grains must therefore be considered seriously. Dust detectors on space craft exploring the outer reaches of the solar system are detecting interstellar grains entering interplanetary space from the local interstellar medium (Landgraf et al. 1999). The analytical capabilities of these detectors are still limited, restricting the measurements to that of mass flux and velocity vector. Nevertheless, the data available have provided new information on the upper end of the dust size spectrum and the dust-to-gas ratio applicable at least to the interstellar grains in the local interstellar cloud (Frisch et al. 1999). The successful launch of STARDUST in 1999 provides hope that actual samples of cometary and interstellar dust will be captured and returned to Earth within only a few years.
## 3 Major Dust Models in the Current Literature
### 3.1 The MRN Model
Mathis, Rumpl, and Nordsieck (1977), with a focus on explaining the entire observed wavelength dependence of interstellar extinction and using as a constraint the relative abundances of refractory elements based on solar reference abundances, proposed a grain model consisting of separate spherical particles of graphite and silicates. This model is known as the MRN model. The proposed grain compositions take into account that there are two basic chemical environments for the condensation of solids from the gas phase, depending upon the C/O ratio being less than or greater than unity. In the first case, all C-atoms end up in CO, with the remaining O-atoms available to form silicates and metal oxides. In the second case, all O-atoms end up in CO, with the remaining C-atoms free to form various carbonaceous compounds, although not necessarily graphite. The model ignores the likely processing experienced by interstellar grains through exposure to shocks, UV radiation, cosmic rays, and cold, dense gas in molecular clouds. A defining characteristic of the MRN model is its assumed power-law size distribution $`n(a)\propto a^{-3.5}`$, with sharp cutoffs at both a maximum radius (250 nm) and a minimum radius (5 nm). This model lacked components which would account for the observation of widespread UIB and ERE emissions in the diffuse ISM; it does not account for the 3.4 $`\mu `$m interstellar absorption, and, lacking nanoparticles, it did not anticipate the important role of non-equilibrium thermal emission, which is an important part of the IR spectrum of the Milky Way and of other dusty galaxies. The MRN model received major upgrades through the work of Draine & Lee (1984) and Draine & Anderson (1985), who, respectively, provided the optical constants for “astronomical silicates” and “astronomical graphites” to provide a more satisfactory fit to the interstellar extinction curve, and who added a population of “very small grains”, filling the size range between 5 nm (the previous MRN lower limit) and 0.3 nm, the size of large molecules. The latter adjustment allowed the MRN model to account for the existence of the near-IR continuum emission, albeit not for the UIB structure of this emission. While long abandoned by its principal originators, the MRN model continues to be most frequently invoked when references to interstellar grains are being made. It is clearly not the comprehensive model called for in the introduction.
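As a back-of-the-envelope illustration of what such a steep power law implies (a sketch based only on the exponent and cutoffs quoted above, not on the original paper), the mass per size interval scales as $`a^3n(a)\propto a^{-0.5}`$, so its integral grows as $`\sqrt{a}`$ and the largest grains dominate the mass budget while the smallest dominate the number and surface area:

```python
import numpy as np

# MRN cutoff radii in nm, exponent -3.5 for the number density n(a)
a_min, a_max = 5.0, 250.0

def mass_fraction(a_lo, a_hi):
    """Fraction of the total grain mass carried by radii in [a_lo, a_hi]
    for n(a) ~ a**-3.5; the mass integrand a**3 * n(a) ~ a**-0.5
    integrates to something proportional to sqrt(a)."""
    return (np.sqrt(a_hi) - np.sqrt(a_lo)) / (np.sqrt(a_max) - np.sqrt(a_min))

print(mass_fraction(100.0, a_max))   # ~0.43: grains above 100 nm carry ~43% of the mass
print(mass_fraction(a_min, 25.0))    # ~0.20: grains below 25 nm carry only ~20%
```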
### 3.2 Core-Mantle Models
Core-mantle models are based on the idea that the bulk of the carbonaceous grain component resides as a mantle on the surfaces of silicate grains. Greenberg and his collaborators (Greenberg & Hong 1974; Hong & Greenberg 1980; Li & Greenberg 1997) envision this carbonaceous mantle in the form of an organic refractory residue, which is produced via photolysis of icy grain mantles formed on grain surfaces while interstellar grains spend time periodically inside dense molecular clouds. In its latest version (Li & Greenberg 1997), this model assumes a trimodal size distribution consisting of large core-mantle grains containing most of the mass, a second population of small carbonaceous grains of graphitic nature to produce the 2175 Å feature, and a population of PAH molecules to provide the rising far-UV extinction through absorption and the source for the UIB emission. This model provides a satisfactory match to the interstellar extinction curve, and, through the large core-mantle grains, to the interstellar polarization. It also gives a satisfactory fit to measured scattering properties such as dust albedo and phase function asymmetry, and most importantly, it fits the infrared dust emission spectrum. At the same time, it comes relatively close to fitting current interstellar chemical abundance constraints.
Weaknesses of this model are the poor characterization of the population of small carbonaceous grains and the still open question of how the core-mantle structure can be maintained through the shattering experience of encountering an interstellar shock. If processes like grain shattering and subsequent reassembly through agglomeration of grain fragments are important in the life cycles of grains, a more randomly composite grain structure as envisioned by Mathis & Whiffen (1989) appears more likely.
Another version of core-mantle grain models has been proposed by Duley, Jones, & Williams (1989) and Jones, Duley, & Williams (1990). They suggest that the carbonaceous mantle is in the form of hydrogenated amorphous carbon (HAC), which can experience modifications of its optical properties by changing the degree of hydrogenation in different interstellar environments (Duley & Williams 1990). This was intended to provide explanations both for the high degree of spatial variation observed in the far-UV extinction curve and for the widely seen ERE, for which HAC was considered a possible source material. It should be recognized that the type of evolutionary processes envisioned by both types of core-mantle grain models most likely does occur at some level.
### 3.3 The Post-IRAS Model
Building upon the lessons learned from IRAS, Desert, Boulanger, & Puget (1990) proposed a dust model with a focus on explaining the energetics and spectrum of near-IR, mid-IR, and far-IR dust emission and the implied very important role of very small grains and PAH molecules. They also suggest three independent grain populations with radii ranging from 110 nm to 15 nm (big grains), 15 nm to 1.2 nm (very small grains), and 1.2 nm to 0.4 nm (PAHs). They envision relatively large spatial abundance variations among these three populations to account for the large variations seen in the far-UV extinction as well as in the details of the IR dust emission spectrum. This model can claim that it correctly predicted the UIB emission confirmed later by the Arome, IRTS, and ISO experiments. A weakness of this model is the small maximum size of the “big grains”. They may prove to be insufficient to explain interstellar linear polarization observations, observations of near-IR scattering, observations of x-ray halos, and observations of present-day interstellar grains found entering the solar system.
### 3.4 Common Properties and Problems
While differing in details, all these models have certain properties in common as they also share certain problems. All agree that silicates are a major component of the large-grain population, playing the most important role in visual extinction and interstellar reddening, in scattering at all wavelengths, in interstellar linear polarization, and in the far-IR dust emission at wavelengths around 100 $`\mu `$m and beyond. The observed strength of the “silicate bands” at 9.7 and 18 $`\mu `$m requires the near total depletion of gas-phase Si of solar abundance and its incorporation in Si-O bonds in silicates. The models also agree that the size distribution of grains must extend in a near-continuous fashion from the sub-$`\mu `$m range to the nanoparticle range, forming a continuous transition to large molecules. The number density of grains must increase with decreasing size in the manner of a power law with a negative exponent of 3 or larger. All models have the problem of meeting the interstellar chemical abundance constraints as currently understood, especially the limits on the Si- and C-abundances. There is considerable disagreement on the dominant form and structure of carbonaceous dust.
## 4 Recent Work Related to Dust Models
Spurred on by the very impressive growth in the amount of observational information on dust-related phenomena, research directed at solving some of the outstanding problems of interstellar grain models is progressing at an ever-increasing pace. This is aided by the availability of increasingly sophisticated laboratory facilities and of increasingly powerful computing resources and numerical techniques. Space limitations prevent me from commenting on laboratory studies in any detail. These provide absolutely essential data on optical properties of relevant solids, on wavelengths, profiles, and absorption strengths of absorption bands, on the emission characteristics of proposed carriers of the UIBs and the ERE, and on the processing of likely grain analog materials by exposure to radiation and to gases. Other efforts related directly to recent grain modeling work will be reviewed briefly below.
### 4.1 New Tighter Abundance Constraints
Realizing that the relative abundances of refractory elements in the interstellar medium may be less than expected on the basis of their abundances in the Sun and the solar system (Snow & Witt 1995, 1996; Meyer 1997), Mathis (1996) began the careful examination of grain models under these new, tight abundance constraints. Fluffy grains, having a porous structure with up to 50% vacuum, were found to hold out promise for a sufficiently increased mass extinction coefficient so that the new constraints might be met, albeit only barely within the still large observational uncertainties. This possibility had already been suggested earlier by Jones (1988) in a different context. Dwek (1997) pointed out, however, that the porous-grain model predicts a far-IR emissivity in excess of that observed from the diffuse ISM with the Cosmic Background Explorer (COBE), resulting from the lower dust albedo of the fluffy grains compared to traditional models and direct observations. Such problems should be surmountable if the modeling approach includes the abundance constraints along with other observational constraints in defining the composition and size distribution of interstellar grains, as has been shown by Zubko et al. (1998) using the regularization approach (Zubko 1999) as a base for developing a modern dust model.
### 4.2 Grains with Composite-Porous-Fluffy-Irregular Structure
For many decades, Mie theory (see for reference: Bohren & Huffman 1983) has been the standard tool for computing the optical properties of model grains, given a set of indices of refraction from laboratory measurements for the assumed grain material. This, however, limited models to spherical, homogeneous, optically isotropic particles, spherically concentric core-mantle particles, or infinite cylinders insofar as sub-$`\mu `$m grains were included. Purcell & Pennypacker (1973), followed by Draine (1988), introduced the “discrete dipole approximation” (DDA) as a practical, albeit computing-intensive, technique for calculating the optical properties of irregularly shaped, composite, and/or optically anisotropic wavelength-sized grains. Coupled with a faster iterative method introduced by Lumme & Rahola (1994), the DDA appears to be the method of choice in the age of rapidly increasing computing power, although optical properties of composite particles which do not deviate too much from spherical shape may be computed quite reliably via classical Mie theory with optical constants derived from an effective medium theory. One of the latest published efforts of modeling extinction and infrared emission assuming fractal dust grains is the work of Fogel & Leung (1998). Among their results, their finding that interstellar extinction with fractal grains requires about one-third less mass in the form of grains is most interesting, as this is just the fraction of mass having been made unavailable through the tighter abundance constraints.
### 4.3 Grain Size Distributions
Every one of the three quasi-comprehensive dust models reviewed in Section 3 uses a different ad-hoc form for the size distribution of grains, with several free parameters to be determined by fitting the model predictions to the observations. In all three cases, the largest grains contain most of the dust mass available. The upper size limit is thus set by the dust-to-gas ratio derived from observations and the desire to avoid grey extinction which does not contribute to interstellar reddening. The lower size limit, initially, was ill-defined by extinction constraints, but it is now well-determined by the requirement of non-equilibrium thermal emission, resulting from the stochastic heating of very small grains by individual photons, and it lies below 1 nm. Two techniques have recently been advanced for the determination of size distributions that aim to find the optimum distribution derived from observational constraints alone. The maximum entropy method (MEM) was used by Kim et al. (1994) and Kim & Martin (1994, 1995) toward this end, and although they assumed bare silicate and graphite spherical grains consistent with the MRN model, they found significant departures from the simple power law assumed by MRN. In particular, they found the sharp upper size limit to be quite unrealistic and saw their MEM distribution extend toward larger grain sizes. A direct result of this was their prediction that near-IR dust albedos should be higher by about a factor two compared to MRN, which appears to be consistent with observations.
While the MEM approach still requires an initial guess for the size distribution, the regularization technique employed by Zubko and his collaborators (Zubko 1999a; Zubko et al. 1998, 1999) computes the size distribution from the observational and abundance constraints alone. This opens the possibility of determining effective size distributions of grains for individual lines of sight (Zubko et al. 1996, 1998) and for extragalactic systems such as the SMC (Zubko 1999b) where the extinction curve is known.
Given the impact of the grains near the upper limit of the size distribution on the dust-to-gas mass ratio, efforts to determine the shape of the distribution at this end by independent means are of particular importance. Ongoing in-situ collections of interstellar grains by spacecraft in the outer solar system are especially significant. Landgraf et al. (1999) have summarized the results to date, after showing that detected interstellar grains can be separated cleanly from a background of solar-system dust. Frisch et al. (1999) discuss the consequences of these findings on the implied dust-to-gas ratio in the local interstellar cloud. Values typically twice those generally accepted for the average diffuse ISM in the plane of the Milky Way galaxy are found, but given the local nature of the data, the implications are locally restricted. Results of more general validity can be expected from the analysis of x-ray haloes, whose radial distribution and absolute intensity for a given column density of grains is most strongly influenced by the largest grains in the line-of-sight size distribution (Smith & Dwek 1998). The lines-of-sight are typically of the order of kpc, and with the advent of powerful x-ray telescopes such as Chandra and XMM, suitable data are expected to be available in the near future.
### 4.4 The 2175 Å UV Absorption Feature
The current grain models explain the 2175 Å feature through absorption by graphite grains. This explanation is beset with numerous difficulties (Draine & Malhotra 1993; Mathis 1994; Rouleau et al. 1997). Finding a satisfactory alternative to graphite grains, ideally a type of grain which can explain other dust features in addition to the 2175 Å absorption feature, has therefore been at the center of numerous investigations in recent years.
The difficulty of forming graphite grains in an initial condensation process has been recognized for a long time (Czyzak et al. 1981). As a result, many studies have been directed at processes by which hydrogenated amorphous carbon grains, which are believed to be more easily formed in carbon star atmospheres, might be graphitized in interstellar space (Sorrell 1990; Blanco et al. 1991, 1993; Mennella et al. 1995, 1997, 1998). UV-irradiation appears to be the most plausible process, and indeed it does lead to an absorption profile which approaches that of the observed 2175 Å band (Mennella et al. 1998). Hydrogenated amorphous carbon, in a more hydrogenated form, could then also be drawn upon as the source material for the interstellar 3.4 $`\mu `$m absorption band (Furton & Witt 1999; Mennella et al. 1999).
Polycyclic aromatic hydrocarbons (PAH) are believed to be abundant in interstellar space and are considered a possible carrier of the UIB emission. PAHs exhibit particularly strong absorption in the UV. The old idea that a combination of numerous PAH absorption spectra might result in the observed 2175 Å feature (Donn 1968) has been revived recently. Beegle et al. (1997) found in laboratory experiments that molecular aggregates produced from the PAH naphthalene exhibit a 2175 Å feature. Duley and Seahra (1998) investigated the optical properties of carbon nanoparticles composed of stacks of PAHs, involving up to several hundred carbon atoms, and found that such structures could indeed explain the plasmon-resonance type extinction feature at 2175 Å. In their model, these nanoparticles simply represent the high-mass component of a general population of large PAH molecules in interstellar space. Experimental work by Schnaiter et al. (1998) with nano-sized hydrogenated carbon particles has shown promising results.
Other alternatives have been explored by Henrard et al. (1993) and Ugarte (1995) along lines of onion-like graphitic particles or multishell fullerenes. Such particles might be generated through annealing of tiny nano-diamonds, which are found with remarkable abundance in primitive meteorites and which are demonstrably pre-solar (Zinner 1997). Papoular et al. (1993), within the context of their coal model for carbonaceous interstellar grains, found that anthracite produces a close fit to the 2175 Å feature.
### 4.5 Extended Red Emission
With the detection of intense extended red emission (ERE) in the diffuse ISM of the Milky Way galaxy (Gordon et al. 1998; Szomoru & Guhathakurta 1998), the ERE has become an important observational aspect of interstellar grains that future models need to reproduce. The observational evidence shows that ERE is a photoluminescence process powered by far-UV photons but that the ERE carriers are easily modified by intense UV radiation fields and destroyed if the radiation field exceeds certain levels of intensity and hardness. This behavior points to large molecules or nanoparticles as the likely carrier. Witt et al. (1998) and Ledoux et al. (1998) have proposed silicon nanoparticles in the size range from 1 to 5 nm as the likely candidates, supported by an extensive base of laboratory data. Seahra & Duley (1999) suggested that the same PAH nanoparticles proposed by them as the 2175 Å band carriers are also the source of the ERE. The number of interstellar photons absorbed in the 2175 Å band is approximately what is required to excite the ERE (Gordon et al. 1998), but this does not necessarily imply a causal connection. The Seahra & Duley model predicts, in addition to the main ERE band, two satellite bands, one shortward of 500 nm and one at 1000 nm wavelength. No evidence for photoluminescence by grains shortward of 500 nm has been found, however (Rush & Witt 1975).
## 5 Open Questions
Despite some very impressive progress in our understanding of interstellar grains, spurred on strongly by exciting new observations and laboratory results, a number of challenging open questions remain. I will mention a few of my favorite ones.
* What are the dominant, most abundant forms of carbon solids in interstellar space?
* How can carbon solids be sufficiently multi-functional to meet abundance constraints and yet explain all interstellar phenomena attributed to carbonaceous grains?
* What is the nature of the nanoparticles which radiate in the 25 to 60 $`\mu `$m range in the diffuse ISM?
* What are the appropriate indices of refraction for grain materials, if they are composed of conglomerates of nanoparticles?
* Why does silicon get in and out of grains so readily?
* What produces the ERE?
* Why do starburst galaxies seem to lack the 2175 Å absorber?
###### Acknowledgements.
I am grateful to the Scientific Organizing Committee for inviting me to attend this symposium and to the local Korean hosts for providing a beautiful setting for a week of stimulating interactions. I gratefully acknowledge financial support received from the Organizing Committee, from The University of Toledo, and from the National Aeronautics and Space Administration.
# Modeling Interacting Galaxies Using a Parallel Genetic Algorithm
## 1. Introduction
A major problem in modeling of encounters of galaxies is the extended parameter space which is composed of the orbital and the structural parameters of the interacting galaxies. Traditional grid-based fitting strategies suffer from very large CPU-requirements. E.g. for a restricted 7-dimensional parameter space (an encounter of a disc with a point mass) and a resolution of only 5 values per dimension, one needs 78125 models, or about 26 years of integration time on a GRAPE3af special purpose computer for a “complete” grid. More systematic search strategies like gradient methods depend strongly on the initial conditions, which makes them prone to trapping in local optima. An efficient alternative approach is provided by evolutionary methods and especially genetic algorithms (Holland 1975, Goldberg 1989, Charbonneau 1995). In combination with fast (but not self-consistent) restricted N-body-codes (Toomre & Toomre 1972) they allow for an efficient search in parameter space which can be used both for an automatic search of interaction parameters (provided sufficiently accurate data are available) and for a uniqueness test of a preferred parameter combination (Wahde 1998, Theis 1999).
The basic idea of GAs is to apply an evolutionary mechanism including ’sexual’ reproduction operating on a population which represents a group of different parameter sets. All members are characterized by their fitness which quantifies the correspondence between simulations and observations. In order to determine the ’parents’, two individuals are selected according to their fitness. These parents represent two points in parameter space. The corresponding parameter set is treated like a ’chromosome’, i.e. it is subject to a cross-over and a mutation operation resulting in a new individual which is a member of the next generation. Such a breeding is repeated until the next generation has been formed. Finally, the whole process of sexual reproduction is repeated iteratively until the population is concentrated in one or several regions of sufficiently high fitness in parameter space.
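A minimal, schematic Python version of one such breeding cycle is given below (it is not the pikaia code used in this work; fitness_of stands for the comparison of a restricted N-body model with the observations, and the mutation amplitude is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def next_generation(population, fitness_of, p_mut=0.05):
    """One GA generation: fitness-proportional selection, one-point
    cross-over and random mutation of the parameter sets ('chromosomes').

    population : (n_ind, n_par) array, one parameter set per row
    fitness_of : callable mapping a parameter set to a non-negative fitness
    """
    fitness = np.array([fitness_of(ind) for ind in population])
    prob = fitness / fitness.sum()                  # selection probabilities
    n_ind, n_par = population.shape
    children = np.empty_like(population)
    for k in range(n_ind):
        pa, pb = population[rng.choice(n_ind, size=2, p=prob)]
        cut = rng.integers(1, n_par)                # cross-over point
        child = np.concatenate([pa[:cut], pb[cut:]])
        hit = rng.random(n_par) < p_mut             # mutated genes
        child[hit] += rng.normal(0.0, 0.1, hit.sum())
        children[k] = child
    return children
```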
Here we present a parallel implementation of our genetic algorithm.
## 2. Parallelization of the genetic algorithm
We parallelized our code on the level of the GA by applying a ’master-slave’ technique using message passing (MPI). On the master processor all the reproduction operations are performed. It sends the individual parameter sets to the slave processors which do the N-body simulations and determine the corresponding fitness values (Fig. 1 left). The speed-up, i.e. the ratio of CPU-times $`t_1`$ and $`t_N`$ used by a calculation with one or $`N`$ processors, respectively, reaches values between 30 and 60 for 100 processors (Fig. 1 right). The deviation from the optimal gain is mainly caused by the spread of CPU-times required for the individual simulations. It becomes more important with decreasing population size. By parallelization the whole GA-fit using 10000 models reduces to less than 3 CPU minutes on a CRAY T3E. Thus, an ’interactive’ analysis becomes possible.
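A bare-bones mpi4py sketch of this layout is shown below (illustrative only, not the original code; init_population, breed, run_model and fitness_of are placeholders for the GA bookkeeping and the restricted N-body evaluation). Tasks are handed out one at a time, so models with very different run times automatically balance across the slaves:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, n_proc = comm.Get_rank(), comm.Get_size()

if rank == 0:                                      # master: GA operations only
    population = init_population()                 # placeholder
    for generation in range(100):
        fitness = np.empty(len(population))
        queue = list(enumerate(population))
        pending = 0
        for slave in range(1, min(n_proc, len(queue) + 1)):
            comm.send(queue.pop(0), dest=slave)    # prime every slave once
            pending += 1
        while pending > 0:
            status = MPI.Status()
            i, f = comm.recv(source=MPI.ANY_SOURCE, status=status)
            fitness[i] = f
            if queue:                              # keep the slave busy
                comm.send(queue.pop(0), dest=status.Get_source())
            else:
                pending -= 1
        population = breed(population, fitness)    # placeholder GA step
    for slave in range(1, n_proc):
        comm.send(None, dest=slave)                # shut the slaves down
else:                                              # slaves: N-body runs
    while True:
        task = comm.recv(source=0)
        if task is None:
            break
        i, params = task
        comm.send((i, fitness_of(run_model(params))), dest=0)
```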
### Acknowledgments.
The authors are grateful to Paul Charbonneau and Barry Knapp for providing their (serial) genetic algorithm pikaia. C.T. thanks the organizers of the 15th IAP meeting for their support.
## References
Charbonneau P., 1995, ApJS, 101, 309
Goldberg D.E., 1989, Genetic Algorithms in Search, Optimization, & Machine Learning, Addison-Wesley, Reading
Holland J., 1975, Adaptation in natural and artificial systems, Univ. of Michigan Press, Ann Arbor
Theis Ch., 1999, Rev. Mod. Astron., 12, 309
Toomre A., Toomre J., 1972, ApJ, 178, 623
Wahde M., 1998, A&AS, 132, 417
# Time evolution of wave-packets in quasi-1D disordered media
## 1 Introduction
Band random matrices (BRM) represent an effective model for both 1D disordered systems with long-range hopping and quasi-1D wires. The bandwidth $`b`$ plays the role of the range of the interaction in the first case and of the square root of the number of independent conduction channels in the second. Up to now, studies have been mostly devoted to the analysis of the stationary solutions of the Schrödinger equation and to the corresponding spectral properties of BRM’s . Much less is known about the solutions of the time dependent Schrödinger equation, a topic on which only a few studies have been performed . The partial analogy of this latter problem with the ‘dynamical localization’ phenomenon in the kicked rotor suggests that an initial delta-like packet spreads diffusively and eventually saturates to a localized state. The width of this asymptotic packet for BRM’s is of the order of $`b^2`$ lattice sites, i.e. the same order as the localization length of all the eigenfunctions .
The theoretically predicted scaling laws for the mean square displacement $`\stackrel{~}{M}`$ were tested numerically and a comparison of the asymptotic form of the wave-packet with a theoretical formula derived for the 1D Anderson model was attempted . More recently some new theoretical results appeared which give a formula for the time asymptotic packet in the BRM model in the large $`b`$ limit . Therefore, it became important to check this formula numerically and to both investigate how the packet reaches its time asymptotic shape and measure the size of finite $`b`$ corrections. As far as the time evolution is concerned, some phenomenological expressions were suggested in ref. , based on a power-law convergence of the mean square displacement to its steady state value. However, the presence of large statistical fluctuations prevented the authors of ref. from assessing whether the time asymptotic scaling is ruled by power law corrections or by the logarithmic corrections to the $`t^{-1}`$ dependence suggested by rigorous results obtained for the 1D Anderson model . The fluctuations $`\mathrm{\Delta }_{\stackrel{~}{M}}`$ of the width of the asymptotic wave-packet with the realization of the disorder constitute an even more controversial issue, since not even the scaling behaviour is clearly understood. Some evidence of an anomalous behaviour was presented in two previous studies of the same problem and in the kicked rotor . In all cases the numerics was too poor to make a convincing statement about the value of the anomalous exponent.
The bottleneck of the previous simulations was the slowness of the integration scheme, a fourth-order Runge-Kutta with a small time step to obtain a good conservation of probability over a long time span. This low efficiency prevented us from reaching sufficiently large values of $`b`$ and from considering a large enough number of realizations of the BRM’s. We have instead implemented a second-order Cayley algorithm, which, being unitary, exactly conserves probability, although the one-step integration error is larger than the one of the Runge-Kutta scheme (a situation similar to that of symplectic algorithms in classical Hamiltonian dynamics). This has allowed us to more than double the maximum bandwidth (from $`b=12`$ to $`b=30`$) and to increase the statistics by a factor four (in the worst case).
As a result, we have been able to complete an accurate analysis of the time evolution of the mean square displacement, finding that there is no need to invoke effective formulas with a power-law time dependence even at relatively short times. We have found clean evidence of an anomalous scaling of the relative fluctuations of the packet width, which behave as $`\mathrm{\Delta }_{\stackrel{~}{M}}/\stackrel{~}{M}\propto b^{-\delta }`$ with $`\delta =0.75\pm 0.03`$. In order to confirm this anomaly, we have investigated the statistics of the packet width at a specific time in the localization regime. The probability distributions at various $`b`$ values, when appropriately rescaled, superpose, and the resulting universal ($`b`$ independent) curve is definitely different from a Gaussian, with an exponential tail at large $`\stackrel{~}{M}`$ values. Finally, we have compared our results with the theoretical formula for the asymptotic wave-packet, finding a convincing agreement. The finite $`b`$ corrections to the $`b\rightarrow \infty `$ Zhirov expression are of order $`(1/b)`$.
## 2 Model and numerical technique
We have considered the time-dependent Schrödinger equation
$$i\frac{\partial \psi _i}{\partial t}=\sum _{j=i-b}^{i+b}H_{ij}\psi _j$$
(1)
where $`\psi _i`$ is the probability amplitude at site $`i`$ and the tight-binding Hamiltonian $`H_{ij}`$ is a real symmetric band random matrix. The band structure of the Hamiltonian is determined by the condition
$$\begin{array}{cc}H_{ij}=0\hfill & \text{if }|i-j|>b,\hfill \end{array}$$
the parameter $`b`$ setting the band-width; the matrix elements inside the band are independent Gaussian random variables with
$$\begin{array}{ccc}\langle H_{ij}\rangle _d=0\hfill & \text{and}\hfill & \langle \left(H_{ij}\right)^2\rangle _d=1+\delta _{ij}\hfill \end{array}$$
where the symbol $`\langle \cdot \rangle _d`$ stands for the average over different realizations of the disorder. In the present work we have considered the evolution of an ‘electron’ initially localized at the centre (identified with the site $`i=0`$) of an infinite lattice. To this aim we have analysed the solution of equation (1) corresponding to the initial condition
$$\psi _i\left(t=0\right)=\delta _{i0}.$$
Since the wave-packet evolves in a supposedly infinite lattice, it is necessary to avoid any spurious boundary effect due to the inevitably finite size of the vectors used in the numerical computations. This goal has been achieved by resorting to a self-expanding lattice, i.e. a lattice whose size is progressively enlarged according to the development of the wave-function. At each integration step, our program checks the probability that the electron is in the leftmost and rightmost $`b`$ sites, adding $`10b`$ new sites whenever the amplitude $`|\psi _i|`$ is larger than $`\epsilon =10^{-3}`$ in at least one of the $`2b`$ outermost sites. We have separately verified that $`\epsilon `$ is small enough not to significantly affect the computation of the probability distribution. For instance, by lowering $`\epsilon `$ by an order of magnitude, the mean squared displacement (computed over the same disorder realizations) changes only by a few percent. Since this systematic error is not larger than the uncertainty due to statistical fluctuations, it is not convenient to reduce the cut-off as it would result in a slower code with a consequent reduction of the statistics.
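In pseudocode, the bookkeeping described above amounts to something like the following (a sketch using the threshold and padding quoted in the text; enlarge_band_matrix is a placeholder for drawing the Gaussian band elements that couple the newly added sites):

```python
import numpy as np

EPS = 1.0e-3            # amplitude threshold quoted in the text
PAD = 10                # new sites added per edge, in units of b

def expand_if_needed(psi, H, b):
    """Grow the lattice by PAD*b sites on a side whenever |psi| exceeds EPS
    somewhere in the outermost b sites of that side."""
    grow_left = np.any(np.abs(psi[:b]) > EPS)
    grow_right = np.any(np.abs(psi[-b:]) > EPS)
    if grow_left or grow_right:
        n_l = PAD * b if grow_left else 0
        n_r = PAD * b if grow_right else 0
        psi = np.concatenate([np.zeros(n_l, complex), psi, np.zeros(n_r, complex)])
        H = enlarge_band_matrix(H, n_l, n_r, b)    # placeholder
    return psi, H
```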
The Schrödinger equation (1) was integrated by approximating the evolution operator $`\mathrm{exp}\left(-iHt\right)`$ with the Cayley form
$$\mathrm{exp}\left(-iH\delta t\right)\simeq \frac{1-iH\delta t/2}{1+iH\delta t/2},$$
(2)
which implies that the values of the wave-function at two successive time-steps are related by
$$\left(1+\frac{1}{2}iH\delta t\right)\psi (t+\delta t)=\left(1-\frac{1}{2}iH\delta t\right)\psi (t).$$
(3)
Solving the band diagonal system of equations (3) allows one to determine $`\psi (t+\delta t)`$ once $`\psi (t)`$ is known. Cayley’s algorithm is a standard tool for the computation of the solutions of the Schrödinger equation (see for instance ref. ); to the best of our knowledge, this is the first application to the specific field of random Hamiltonians with long-range hopping. Cayley’s form (2) for the evolution operator has two relevant features: it is second-order accurate in time and unitary; in addition, the corresponding integration scheme (3) is stable. Stability is essential in order to study the long time evolution of the wave-packet; as for unitarity, it ensures the conservation of probability and, together with second-order accuracy in time, allows one to choose time steps $`\delta t`$ two or three orders of magnitude bigger than those used in Runge-Kutta integration schemes. Indeed, we could make use of a time step $`\delta t\simeq 10^{-1}`$, to be compared with the time steps $`\delta t\simeq 10^{-4}`$ to $`10^{-3}`$ used for the same problem in Refs. . To ascertain how large a $`\delta t`$ could be used, we have compared the solutions obtained through Cayley’s algorithm at various $`\delta t`$ with the exact solution of the Schrödinger equation (1), computed by diagonalizing the Hamiltonian (to avoid boundary effects due to the finite size of the diagonalized matrices, we have considered sufficiently short evolution times). In this way we came to the somewhat surprising conclusion that the validity range of the approximate equality (2) extended up to time steps as big as $`\delta t\simeq 1/\sqrt{b}`$ (the scaling of $`\delta t`$ with the band-width $`b`$ was necessary to compensate for the opposite scaling of the energy eigenvalues with $`\sqrt{b}`$). To check this conclusion, we have computed the mean squared displacement in the localized regime for several values of $`\delta t`$ in the range from $`10^{-2}/\sqrt{b}`$ to $`1/\sqrt{b}`$, finding differences of a few percent, not larger than the statistical fluctuations.
This can depend on the fact that the long time evolution of the wave-packet seems to be led by the eigenstates at the band centre. Indeed, in the energy representation, the exact evolution operator and the Cayley form can be written as
$`\mathrm{exp}\left(-iH\delta t\right)={\displaystyle \underset{n}{\sum }}|n\rangle e^{-iE_n\delta t}\langle n|`$
$`{\displaystyle \frac{1-iH\delta t/2}{1+iH\delta t/2}}={\displaystyle \underset{n}{\sum }}|n\rangle e^{-i\varphi _n(\delta t)}\langle n|,`$
with
$$\varphi _n(\delta t)=2\mathrm{arctan}\left(E_n\delta t/2\right),$$
where $`|n\rangle `$ is the eigenvector corresponding to the energy $`E_n`$. These equations show that, for increasing $`\delta t`$, the approximate equality (2) holds true only in the subspace spanned by the eigenvectors corresponding to the band centre, while at the band edge, where the eigenvalues $`|E_n|`$ tend to $`\sqrt{b}`$, the coefficients $`\mathrm{exp}\left(-iE_n\delta t\right)`$ and $`\mathrm{exp}\left(-i\varphi _n(\delta t)\right)`$ become quickly different. Therefore, the eigenstates at the band edges appear to play a minor role in the time evolution. This is probably due to the shorter localization length of such states compared with those in the centre: in fact, for the mean square displacement, all eigenstates are weighted with their localization length .
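In practice, each step of the scheme (3) reduces to a single banded linear solve, which is what allows time steps so much bigger than those of the Runge-Kutta integration. A minimal sketch with scipy is given below (this is an illustration of Eq. (3), not the code actually used; H is kept as a dense array here purely for brevity):

```python
import numpy as np
from scipy.linalg import solve_banded

def cayley_step(psi, H, b, dt):
    """One step of Eq. (3): (1 + i*H*dt/2) psi(t+dt) = (1 - i*H*dt/2) psi(t)."""
    n = len(psi)
    rhs = psi - 0.5j * dt * (H @ psi)
    A = np.eye(n, dtype=complex) + 0.5j * dt * H
    # pack the (2b+1)-diagonal matrix A into the banded storage used by
    # solve_banded: ab[b + i - j, j] = A[i, j]
    ab = np.zeros((2 * b + 1, n), dtype=complex)
    for k in range(-b, b + 1):
        diag = np.diagonal(A, offset=k)
        if k >= 0:
            ab[b - k, k:] = diag
        else:
            ab[b - k, :n + k] = diag
    return solve_banded((b, b), ab, rhs)
```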
## 3 Results
To investigate the time evolution of the wave-packet, we have computed the mean square displacement
$$\stackrel{~}{M}(b,t)=\langle u(t)\rangle _d\equiv \left\langle \sum _{j=-\infty }^{+\infty }j^2|\psi _j(t)|^2\right\rangle _d.$$
(4)
Previous studies of this problem strongly suggest that $`\stackrel{~}{M}`$ satisfies the scaling relation
$$\stackrel{~}{M}(b,t)=b^4M(\tau =t/b^{3/2}),$$
(5)
for large enough values of the bandwidth $`b`$. Nevertheless, in Ref. , where the most detailed numerical investigation has been carried out, it was not possible to obtain a clear verification of the scaling law (5) due to the poor statistics and the small values of $`b`$.
The faster integration algorithm described in the previous section has allowed us both to average over more realizations (400 in the worst case), i.e to reduce statistical fluctuations, and to reach larger values of $`b`$ (namely, $`b=30`$ instead of $`b=12`$ as in Ref. ). The results reported in Fig. 1 for several values of $`b`$ are clean enough to show a convincing convergence from above to a limit shape. In other words, there is no possibility to interpret the deviations as a signature of a different scaling behaviour for $`\stackrel{~}{M}`$.
In order to perform a more quantitative analysis, we have proceeded in the following way: $`M(\tau ,b)`$ (we have added the variable $`b`$ to underline the residual but asymptotically irrelevant dependence on the band-width) has been averaged over the time interval $`20<\tau <30`$ to obtain the more statistically reliable quantity $`M_t(b)`$.
$$M_t(b)=M_{\infty }(1+ab^{-\alpha }),$$
(6)
we have fitted the three parameters $`M_{\infty }`$, $`a`$ and $`\alpha `$, finding that the convergence rate $`\alpha `$ is very close to 1 (0.95), i.e. that the finite-band corrections are of the order $`1/b`$. The fitted value of $`M_{\infty }`$ is 0.61. The results for the finite-band correction
$$\delta M=M_{\infty }-M_t(b)$$
(7)
are plotted in Fig. 2.
The good quality of our numerical data suggests also the possibility to compare the temporal behaviour with the available theoretical formulas. In particular, it has been argued in Ref. that the existence of the so-called Mott states should imply a $`(\mathrm{ln}t)/t`$ convergence of $`M`$ to its asymptotic value. Therefore, we propose the following expression
$$M(\tau ,b)=M(\infty ,b)\left(1-\frac{1+A\mathrm{ln}(1+\tau /t_D)}{1+\tau /t_D}\right),$$
(8)
which is the simplest formula that we have found able to reproduce also the initially linear (i.e. diffusive) regime. For each value of $`b`$, the best fit is so close to the numerical data of Fig. 1 as to be almost indistinguishable from them (this is why we do not report the fits on the same figure). The meaningfulness of the above expression is further strengthened by the stability of the three free parameters $`M(\infty ,b)`$, $`A`$ and $`t_D`$, which allows the calculation of a $`b`$-independent diffusion constant (see below). From the values of $`M(\infty ,b)`$, we can extrapolate the asymptotic value $`M(\infty ,\infty )`$ in exactly the same way as we have done for $`M_t(b)`$, finding once more a $`(1/b)`$-convergence to a value around 0.70. This result is to be compared with the theoretical prediction $`M(\infty ,\infty )\simeq 0.668`$ . The deviation of about 0.03 can be attributed to the accuracy of the integration algorithm.
Another important parameter that can be extracted from formula (8) is the diffusion coefficient. Indeed, by expanding Eq. (8) for small $`\tau `$, we find
$$\frac{\partial M(\tau ,b)}{\partial \tau }=\frac{M(\infty ,b)}{t_D}(1-A)=D$$
(9)
The diffusion constant $`D`$ turns out to be close to 0.50 for all values of $`b`$ and, what is more important, close to the value that we obtain from a quadratic fit of the initial growth rate of the packet. This is a very encouraging result, since it confirms the correctness of formula (8) for both the diffusive and the localized regimes. Let us notice that the value $`D\simeq 0.50`$ is somewhat smaller than the one reported in ($`D\simeq 0.83`$). Taking into account statistical fluctuations and systematic deviations, we find that $`D=0.50\pm 0.05`$.
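The three-parameter fit of Eq. (8) and the extraction of D through Eq. (9) can be reproduced with a standard least-squares routine; a sketch is shown below (tau and M_data denote the disorder-averaged curve of Fig. 1 for one value of b, and the initial guess is arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def M_model(tau, M_inf, A, t_D):
    """Eq. (8): approach to the asymptotic width with a logarithmic correction."""
    x = 1.0 + tau / t_D
    return M_inf * (1.0 - (1.0 + A * np.log(x)) / x)

popt, pcov = curve_fit(M_model, tau, M_data, p0=(0.7, 1.0, 1.0))
M_inf, A, t_D = popt
D = M_inf * (1.0 - A) / t_D        # small-tau slope, Eq. (9)
```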
In past papers, a phenomenological expression involving a power-law convergence to the asymptotic value of the mean squared displacement has been proposed , arguing that it should provide an effective description of both the diffusive and localized regime
$$M(\tau ,b)=M(\infty ,b)\left(1-\frac{1}{\left(1+\tau /t_D\right)^\beta }\right).$$
(10)
The success of expression (8) shows that there is no need to introduce anomalous power laws to reproduce the numerical findings. However, for the sake of completeness, we have fitted our numerical data also with Eq. (10), finding an equally good agreement. Therefore, on the basis of the quality of the fit we cannot conclude which of the two expressions is better; nevertheless, it is worth recalling that the former one has the correct asymptotic behaviour and, moreover, the fitted parameters are more stable.
A much more controversial situation exists about the fluctuations of the packet width. Let us introduce the r.m.s. deviation
$$\mathrm{\Delta }_{\stackrel{~}{M}}(b,t)\equiv \sqrt{\langle u(t)^2\rangle _d-\langle u(t)\rangle _d^2}.$$
(11)
In fact, it has not yet been clarified how the above variable scales in the large-$`b`$ limit. In particular, the correct value of the scaling exponent $`\nu `$ in the relation
$$\mathrm{\Delta }_{\stackrel{~}{M}}(b,t)=b^\nu \mathrm{\Delta }_M(\tau =t/b^{3/2},b)$$
(12)
is still unknown; this is why the $`b`$ dependence in $`\mathrm{\Delta }_M`$ is explicitly maintained. A scaling like $`b^4`$ would imply that the packet-width is not a self-averaging quantity, since the relative size of the fluctuations would not go to zero for increasing $`b`$. Conversely, an exponent $`\nu =3`$ corresponds both to self-averaging and ‘normal’ behaviour. In fact, the number $`N_c`$ of independent channels (lattice sites) actively contributing to the localized region (i.e. the localization length) is of the order $`b^2`$. If we assume that all such contributions to the second moment $`\stackrel{~}{M}`$ are independent of one another, then we are led to conclude that the relative fluctuations should decrease as $`1/\sqrt{N_c}=1/b`$, thus yielding an absolute growth as $`b^3`$. Since previous studies have suggested a small anomaly, i.e. $`\nu `$ slightly larger than 3, we have chosen to report the behaviour of $`\mathrm{\Delta }_M(\tau ,b)`$ for $`\nu =3`$. The data shown in Fig. 3 reveals a drastically different behaviour from what observed in Fig. 1. First of all, the curves tend to grow for increasing $`b`$; moreover, there is no obvious indication of a convergence to some finite value. Altogether, these features imply that $`\nu `$ is strictly larger than 3, qualitatively in agreement with previous simulations.
In order to perform a more quantitative analysis, we have computed the average of $`\mathrm{\Delta }_{\stackrel{~}{M}}`$ and rescaled it to the average of $`\stackrel{~}{M}`$,
$$\mathrm{\Delta }_M^a(b)\equiv \frac{\langle \mathrm{\Delta }_{\stackrel{~}{M}}\rangle _t}{\langle \stackrel{~}{M}\rangle _t}.$$
(13)
($`\langle \cdot \rangle _t`$ is again to be interpreted as the average over the time interval $`20<\tau <30`$). The advantage of this renormalization, already adopted in Ref. , is that it reduces finite-band corrections. The results reported in Fig. 4 reveal a clean power law decay with an exponent $`\delta \simeq 0.75`$. This value is slightly larger than the one found in the previous studies, but follows from a much cleaner numerics. A more global check of the scaling behaviour can be made by plotting the rescaled fluctuations
$$\mathrm{\Delta }_M^g(\tau )\equiv b^\delta \frac{\mathrm{\Delta }_{\stackrel{~}{M}}}{\stackrel{~}{M}}.$$
(14)
for the various values of $`b`$. The optimal value of $`\delta `$ can thus be identified as the one yielding the best data collapse. The curves reported in the inset of Fig. 4 have been obtained for $`\delta =0.75`$. It is necessary to modify $`\delta `$ by at least $`\pm 0.03`$ units in order to see a significant worsening of the data collapse. Accordingly, the best estimate of the anomalous exponent is $`\delta =0.75\pm 0.03`$, so that the dependence of $`\mathrm{\Delta }_M`$ on $`b`$ in Eq. (12) is removed for $`\nu =4-\delta \simeq 3.25`$.
In order to find further support for this anomalous behaviour, we have investigated the probability distribution $`P(M)`$ for the second moment at the time $`\tau =30`$ (the longest time we reached for the larger $`b`$-values), i.e. when the wave-function has almost entered the steady-state regime. The construction of reliable histograms has forced us to consider smaller values of $`b`$. In fact, we have studied the cases $`b=8`$, $`b=12`$ and $`b=22`$, using $`10^4`$ realizations of the disorder in the first two cases and $`10^3`$ in the last one. The results are reported in Fig. 5, where, following a method suggested in Ref. , we have conveniently rescaled the probability distribution $`P(M)`$. In particular, defining by $`M_{av}`$ the average value of $`M`$ at $`\tau =30`$ and $`\sigma (M)`$ the standard deviation over the ensemble of disorder realizations, we have plotted $`P^{\prime }(M^{\prime })=\sigma (M)P(M)`$ vs. $`M^{\prime }=(M-M_{av})/\sigma `$. After this rescaling, the distributions $`P^{\prime }(M^{\prime })`$, corresponding to the three $`b`$ values, have zero average and unit standard deviation. It is remarkable to notice that all curves nicely overlap, indicating a striking scaling behaviour. A further important feature is the deviation from a Gaussian behaviour, especially for large values of $`M^{\prime }`$, where a clear exponential tail is visible. The dotted line just above the three curves (corresponding to the pure exponential $`\mathrm{exp}(-M^{\prime })`$) has been added to give an idea of the decay rate which is slightly larger than 1. The results of this analysis are important in two respects: i) the exponential tail ‘explains’ the difficulties encountered in getting rid of statistical fluctuations in the estimate of $`\mathrm{\Delta }_M`$; ii) the deviations from a Gaussian behaviour provide independent evidence of the anomalous scaling behaviour of the fluctuations. It is interesting to remark that a preliminary quantitative comparison has revealed a striking identity of the probability $`P^{\prime }(M^{\prime })`$ with the distributions found in Ref. for such diverse quantities as the magnetization in the 2D XY model and the power consumption in a confined turbulent flow. This close correspondence deserves further investigation.
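The rescaling applied in Fig. 5 is elementary; for one value of b it amounts to the following (a numpy sketch, with M_samples the values of M at tau = 30 over the disorder ensemble):

```python
import numpy as np

def rescaled_histogram(M_samples, n_bins=50):
    """Return M' = (M - <M>)/sigma and P'(M') = sigma * P(M), as used in Fig. 5."""
    M_av, sigma = M_samples.mean(), M_samples.std()
    P, edges = np.histogram(M_samples, bins=n_bins, density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    return (centres - M_av) / sigma, sigma * P
```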
Finally, we want to compare our results with the theoretical predictions for the asymptotic shape of the wave-packet. In ref. a reasonable agreement was found between the numerical data and the formula obtained by Gogolin for strictly one-dimensional systems . Since a theoretical expression has been derived in the meantime for quasi-1D systems , it is desirable to compare our data also with this expression. In Fig. 6 we present the disorder-averaged probability profiles $`\langle |\psi _j(t)|^2\rangle _d=\stackrel{~}{f}(j,t)`$ for large times, rescaled under the assumption
$$f_s(x)=b^2\stackrel{~}{f}(j,\infty );\qquad x=j/b^2,$$
(15)
and compare them with Zhirov’s theoretical formula, which is denoted by the white line. No appreciable deviation is noticeable except for the extreme part of the tails, where it is reasonable to expect numerical errors due to boundary effects.
The good overlap is partly due to the (unavoidable) choice of logarithmic scales in Fig. 6. However, if we zoom the region around the maximum (with the exception of the zero channel), one can see in Fig. 7 a slow tendency of the various curves to grow towards the theoretical expectation. This is consistent with the behaviour of $`M`$ reported in Fig. 1, which reveals a convergence from above for the mean squared displacement. It is interesting to notice that all such deviations are mainly due to the finiteness of $`b`$ while the lack of asymptoticity in $`t`$ appears to be much less relevant.
Finally, we consider separately the zero channel, i.e. the return probability to the origin $`f_s(0)`$. In Fig. 8 we plot this quantity versus $`\tau `$ for different values of $`b`$. In all cases, a quite fast convergence, as compared with the behaviour of the packet-width, to the asymptotic value is clearly seen. In practice, as soon as $`\tau `$ is about 1, the average value of $`f_s(0)`$ reaches the asymptotic value. It is instructive to compare our numerical findings with the asymptotic (in time and $`b`$) analytic expression $`f_s(0)=6`$. By fitting the dependence of the time average of $`f_s`$ (in the interval $`1<\tau <30`$) on $`b`$ as in Eq. (6), we find that the asymptotic value is about 5.7 and that the convergence rate is $`1/\sqrt{b}`$. The numerical value is in reasonable agreement with the theoretical one, considering that it is the result of an extrapolation of data already affected by errors of the order of a few percent. The non-trivial part of the result is the rate of convergence of this probability, which is definitely slower than the $`1/b`$ behaviour displayed by the second (and other low order) moments. The behaviour of the return probability, however, is in agreement with the theoretical predictions made in , where the finite $`b`$ deviations from the asymptotic steady-state probability distribution were estimated to be of order $`O(1/\sqrt{b})`$ in the $`|x|\lesssim 1/b`$ neighbourhood of the origin.
The $`b`$-dependence of the return probability is further illustrated in Fig. 9, where we plot the deviation $`\delta f=6-f_s(0)`$ from the asymptotic value $`\mathrm{lim}_{b\rightarrow \infty }f_s(0)=6`$ as a function of $`b`$ in bilogarithmic scale. The displayed numerical values were obtained by averaging the return probability both over disorder realizations and over the time interval $`20<\tau <30`$; the data were then fitted with two expressions, exhibiting deviations from the asymptotic value $`f_s(0)=6`$ of order $`O(1/b)`$ and $`O(1/b^\alpha )`$ respectively. In the second case, the exponent $`\alpha `$ was used as a fitting parameter and the best fit value was $`\alpha =0.53`$. As can be seen from Fig. 9, the power law with $`O(1/\sqrt{b})`$ corrections fits the data much better than the one with deviations of order $`O(1/b)`$.
## 4 Conclusions and perspectives
We have studied the time evolution of an initial $`\delta `$-like wave-packet in a 1D disordered lattice with long-range hopping. The main results of this paper are the following. We have confirmed with clean numerics the scaling law (5) for the mean square displacement $`\stackrel{~}{M}`$, first proposed and studied in Ref. . This scaling law is valid in the large $`b`$ limit; here we have found that finite $`b`$ corrections are of the order $`1/b`$. We have proposed formula (8) for fitting the time evolution of $`\stackrel{~}{M}`$ towards its steady state value; this formula contains the logarithmic corrections suggested by the existence of Mott states. We confirm the presence of an anomaly in the scaling law of the relative fluctuations $`\mathrm{\Delta }_{\stackrel{~}{M}}/\stackrel{~}{M}`$ of the mean square displacement, finding that they vanish for large $`b`$ as $`b^{-0.75}`$. We have linked this anomaly to the presence of non-Gaussian fluctuations of the mean square displacement. In fact, the probability distribution of $`\stackrel{~}{M}`$ displays an exponential tail for large values of $`\stackrel{~}{M}`$. The conveniently rescaled probability strikingly coincides with the distributions obtained in Ref. for such diverse quantities as the magnetization in the 2D XY model and the power consumption in a confined turbulent flow. The degree of universality of such a distribution deserves further investigation. Finally, we have compared the numerical results on the steady state probability profile with the theoretical formula proposed by Zhirov for large $`b`$, finding a good agreement. We have computed for the first time finite $`b`$ corrections, obtaining $`O(1/b)`$ deviations for the moments of the probability profile and $`O(1/\sqrt{b})`$ corrections for the return probability to the origin.
## 5 Acknowledgements
We thank B. Chirikov for having suggested the numerical verification of Zhirov’s formula. We thank O. Zhirov for having exchanged with us his ideas and for his comments on our numerical results. We finally thank F.M. Izrailev, H. Kantz and B. Mehlig for useful discussions.
## 1 Introduction
Mirror symmetry of Calabi-Yau manifolds is one of the remarkable predictions of type II string theory. The $`(2,2)`$ superconformal field theory associated with a string propagating on a Ricci-flat Kähler manifold has a $`U(1)_L\times U(1)_R`$ R-symmetry group, and the Hodge numbers of the manifold correspond to the charges of $`(R,R)`$ ground states under the R-symmetry. There is a symmetry in the conformal field theory which is a flip of the sign of the $`U(1)_L`$ current, $`J_L\to -J_L`$. If physically realized, this symmetry implies the existence of pairs of manifolds $`(ℳ,𝒲)`$, which have “mirror” Hodge diamonds, $`H^{p,q}(ℳ)=H^{d-p,q}(𝒲),`$ and give rise to exactly the same superconformal field theory.
While this observation predicts existence of mirror pairs of Calabi-Yau $`d`$-folds, it is not constructive: one would like to know how to find $`𝒲`$ if one is given a Calabi-Yau manifold $`ℳ`$. The mirror construction has been proposed on purely mathematical grounds by Batyrev for Calabi-Yau manifolds which can be realized as complete intersections in toric varieties.
In , Strominger, Yau and Zaslow argued that, for mirror symmetry to extend to a symmetry of non-perturbative string theory, $`(ℳ,𝒲)`$ must be $`T^d`$ fibrations, with fibers which are special lagrangian, and furthermore, that mirror symmetry is $`T^d`$-duality of the fibers. The argument of SYZ is local, and is very well understood only for smooth fibrations. To be able to fully exploit the idea, one must understand degenerations of special lagrangian tori, or more generally, limits in which the Calabi-Yau manifold itself becomes singular. Some progress on how this is supposed to work has been made in , , , and local mirror symmetry appears to be the simplest to understand in this context.
In an a priori unrelated development initiated by ‘mirror pairs’ of gauge theories were found. In the field theory context, a duality in the sense of IR equivalence of two gauge theories is usually referred to as a ‘mirror symmetry’ if
* the duality swaps Coulomb and Higgs branch of the theories, trading FI terms for mass parameters,
* the R-symmetry of the gauge theory has the product form $`G_L\times G_R`$ and duality swaps the two factors.
The example of studies $`𝒩=4`$ SUSY theories in 3 dimensions. Recently it was shown that this duality can even be generalized to an equivalence of two theories at all scales, a relation that will prove to be crucial for our applications.
The purpose of this note is to obtain geometrical mirror pairs by a ‘worldsheet’ construction. Since there seems to be little hope that one can do so directly in the non-linear sigma model (NL$`\sigma `$M), we study linear sigma models which flow in the IR to non-linear sigma models with Calabi-Yau’s as their target spaces. String theory offers a direct physical interpretation of these models as the world-volume theory of a D-string probe of the Calabi-Yau manifolds. We find gauge theory mirror duality for the linear sigma model which reduces to geometric mirror symmetry for Calabi-Yau target spaces.
Using brane constructions T-dual to the D1 brane probe on $`ℳ`$ one can easily construct gauge theories which flow to mirror manifolds. We will show that two different dualities of $`d=2`$, $`𝒩=(2,2)`$ supersymmetric gauge theories arise: One has a realization in string theory as ‘S-duality’ (where by S-duality in type IIA setups we mean a flip of the 2 and the 10 direction) of ‘interval’ brane setups. This duality is a consequence of mirror symmetry of $`d=3`$, $`𝒩=2`$ field theories where both the Coulomb branch and the Higgs branch are described by non-compact Calabi-Yau manifolds. This duality in $`d=3`$ maps a Calabi-Yau manifold to an identical one, so while it is a non-trivial field theory statement, this does not provide a linear $`\sigma `$ model construction of the mirror Calabi-Yau manifold. There exists another $`d=2`$, $`𝒩=(2,2)`$ field theory duality obtained as S-duality of the diamond brane constructions of . This duality maps a theory whose Coulomb branch is dual to the Calabi-Yau manifold $`ℳ`$ in the “boring” way described above, to a theory whose Higgs branch is the mirror manifold $`𝒲`$. The composition of these two dualities, therefore, flows to Calabi-Yau mirror symmetry. While we consider in this note a very particular family of non-compact Calabi-Yau manifolds, the generalization to arbitrary affine toric varieties is possible.
The organization of the note is as follows: In the next section we discuss different possible dualities in two dimensions as obtained via brane constructions for the case of the conifold. Section three generalizes this discussion to sigma models built from branes dual to more general non-compact CY manifolds and to the non-abelian case, describing $`N`$ D-brane probes on the singular CY. In the last section we present a detailed study of the moduli space and argue that the composition of two dualities is Calabi-Yau mirror symmetry.
## 2 Mirror duality in 2d gauge theories
### 2.1 From three to two dimensions
The original $`D=3`$, $`𝒩=4`$ duality of , upon compactification, also implies a duality relation in $`𝒩=(4,4)`$ theories in 2 dimensions, as noted in . The recent results of are needed to make this precise. The nature of this duality with 8 supercharges will teach us how we should understand the $`𝒩=(2,2)`$ examples. Since in 2d the concept of a moduli space is ill defined, equivalence of the IR physics does not require the moduli spaces and metrics to match point by point, but only that the NL$`\sigma `$Ms on the moduli space (or, in the non-compact CY examples we are considering, the two disjoint CFTs of the Coulomb and Higgs branches) are equivalent, as we will see in several examples.
Start with the 3d theory compactified on a circle. This is the setup analyzed in . It is governed by two length scales, $`g_{YM}^2`$, the 3d Yang-Mills coupling, and $`R_2`$, the compactification radius. To flow to the deep IR is equivalent to sending both length scales to zero. However physics still might depend on the dimensionless ratio
$$\gamma =g_{YM}^2R_2.$$
As shown in , while the Higgs branch metric is protected, the Coulomb branch indeed does depend on $`\gamma `$. For $`\gamma \gg 1`$ we first have to flow into the deep IR in 3d and then compactify, resulting in a 2d NL$`\sigma `$M on the 3d quantum-corrected Coulomb branch. The resulting target space is best described in terms of the dual photon in 3d, a scalar of radius $`\gamma `$. For ‘the mirror of the quiver’ (U(1) with $`N_f`$ electrons) it turns out to be an ALF space with radius $`\gamma `$. For small $`\gamma `$ we should first compactify, express the theory in terms of the Wilson line, a scalar of radius $`\frac{1}{\gamma }`$, and obtain as a result a tube metric with torsion, corresponding to the metric of an NS5 brane on a transverse circle of radius $`\frac{1}{\gamma }`$ . Indeed these two NL$`\sigma `$Ms are believed to be equivalent and exchanging the dual photon for the Wilson line amounts to the T-duality of NS5 branes and ALF space in terms of the IR NL$`\sigma `$M (this picture is obvious from the string theory perspective: studying a D2 D6 system on a circle, going to the IR first lifts us to an M2 on an ALF space which becomes a fundamental string on the ALF, while going to 2d first makes us T-dualize to D1 D5, leaving us with the $`\sigma `$ model of a string probing a 5-brane background).
In order to obtain linear $`\sigma `$-model description of this scenario, one has to use the all scale mirror symmetry of . They show that $`g_{YM}^2`$ maps to a Fermi type coupling in the mirror theory, or more precisely: one couples the gauge field via a BF coupling to a twisted gauge field, the gauge coupling of the twisted gauge field being baptised Fermi coupling. For the case of the quiver theory with Fermi coupling, one obtains the same ALF space, this time on the Higgs branch. In the same spirit we will present two different dualities for $`𝒩=(2,2)`$ theories.
### 2.2 Mirror symmetry from the interval
One way to ‘derive’ field theory duality is to embed the field theory into string theory and then field theory duality is a consequence of string duality. A construction of this sort was implemented in for the $`𝒩=4`$ theory in d=3 via brane configurations. One uses an interval construction with the 3 basic ingredients: NS5 along 012345, D5 along 012789 and D3 along 0126. The two R-symmetries are $`SU(2)_{345}`$ and $`SU(2)_{789}`$. D3 brane segments between NS5 branes give rise to vector multiplets, with the 3 scalars in the 3 of $`SU(2)_{345}`$. D3 brane segments between D5 branes are hypermultiplets with the four scalars transforming as 2 doublets of $`SU(2)_{789}`$.
Under S-duality the D5 branes turn into NS5 branes and vice versa while D3 branes stay invariant. One obtains the same kind of setup but with D5 and NS5 branes interchanged. S-duality of type IIB string theory is mirror symmetry in the gauge theory (to be precise, the S-dual theory will really contain a gauge theory of twisted hypers coupled to twisted vectors; if in addition one performs a rotation taking 345 into 789 space, the two R-symmetries are swapped and the theory is written in terms of vectors and hypers).
Now let us move on to the 2d theories. The brane realization of this duality is via an interval theory in IIA with NS and NS’ branes and D2 branes along 016 . The IIA analog of S-duality, the 2-10 flip, takes this into D4 and D4’ branes. The following parameters define the interval brane setup and the gauge theory:
* The separation of the NS and NS’ branes along 7 is the FI term. It receives a complex partner, the 10 separation, which maps to the 2d theta angle.
* The separation of the D4 branes along 2 and 3 gives twisted masses to the flavors.
Mirror symmetry maps the FI term to the twisted masses. A twisted mass sits in a background vector multiplet and has to be contrasted with the standard mass from the superpotential which sits in a background chiral multiplet. Like the real mass in $`𝒩=2`$ theories in $`d=3`$, it arises from terms like
$$\int d^4\theta \,Q^{\dagger }e^{V_B}Q.$$
where $`V_B`$ is a background vector multiplet.
An Example: As an example let us discuss the interval realization of the small resolution of the conifold. As shown in , by performing $`T_6`$ T-duality on a D-string probe of the conifold we get an interval realization of the conifold gauge theory in terms of an elliptic IIA setup with D2 branes stretched on a circle with one NS and one NS’ brane. In this IIA setup the separation of the NS branes in 67 is the small resolution, while turning on the diamond mode would be the deformation of the conifold.
The gauge group on the worldvolume of the D-string on the conifold is a $`U(1)\times U(1)`$ gauge group with 2 bifundamental flavors $`A_1`$, $`A_2`$, $`B_1`$ and $`B_2`$. We can factor out the decoupled center of mass motion, the diagonal $`U(1)`$, which does not have any charged matter and hence is free. We are left with an interacting $`U(1)`$ with 2 flavors. The scalar in the decoupled vector multiplet is the position of the D1 brane in the 23 space transverse to the conifold. While the Coulomb branch describes separation into fractional branes, the Higgs branch describes motion on the internal space and reproduces the conifold geometry. The complexified blowup mode for resolving the conifold is the FI term and the $`\theta `$ angle.
After 2-10 flip, the dual brane setup is again an elliptic model, this time with one D4 and one D4’ brane. The gauge theory is a single $`𝒩=(8,8)`$ $`U(1)`$ from the D2 brane with 2 additional $`𝒩=(2,2)`$ matter flavors from the D4 and D4’ brane. That is we have
* 3 ‘adjoints’, that is singlet fields $`X`$, $`Y`$ and $`Z`$ and
* matter fields $`Q`$, $`\stackrel{~}{Q}`$, $`T`$ and $`\stackrel{~}{T}`$ with charges +1,-1,+1,-1.
* They couple via a superpotential
$$W=QX\stackrel{~}{Q}+TX\stackrel{~}{T}.$$
* The singlet $`Z`$ is decoupled and corresponds to the center of mass motion.
Turning on the FI term and the $`\theta `$ angle in the original theory is a motion of the NS brane along the 7 and 10 direction respectively. It maps into a 23 motion for the D4 brane, giving a twisted mass to $`Q`$ and $`\stackrel{~}{Q}`$.
This analysis can also be performed by going to the T-dual picture of D1 branes probing D5 branes intersecting in codimension 2, that is over 4 common directions. Aspects of this setup and its T-dual cousins in various dimensions have already been studied by numerous authors, e.g. for the D3 D7 D7’ system in or for the D0 D4 D4’ in . The resulting gauge theory agrees with what we have found by applying the standard interval rules.
### 2.3 Twisted mirror symmetry from diamonds
A second T-dual configuration for D1 brane probes of singular CY manifolds is D3 branes ending on a curve of NS branes, called diamonds in . These setups are the $`T_{48}`$-duals of D1 brane probes of the $`𝒞_{kl}`$ spaces. Indeed it was this relation that allowed us to derive the diamond matter content to begin with . In order to use the diamond construction to see mirror symmetry, we use S-duality of string theory, as in the original work of . Let us first consider the parameters defining a diamond and how they map under S-duality:
* the complex parameter defining the NS brane diamond contains the FI term which is paired up with the 2d $`\theta `$ angle,
* the S-dual D5-brane diamond is defined by a complex parameter which is derived from a superpotential mass term.
FI term and theta angle are contained in a background twisted chiral multiplet. Under the duality this twisted chiral multiplet is mapped to a background chiral multiplet containing the mass term. Since ordinary mirror symmetry mapped FI terms to mass terms in twisted chiral multiplet, the map of operators under the two versions of duality will be different.
An Example: Let us start once more with the simplest example, the D1 string on the blowup of the conifold. That is, we consider a single diamond, one NS and one NS’ brane, on a torus. After S-duality this elliptic model with NS5 and NS5’ brane turns into an elliptic model with D5 and D5’ brane. Since we have only D-branes in this dual picture, the matter content can be analyzed by perturbative string techniques. To shortcut, we perform $`T_{48}`$ duality to the D1 D5 D5’ system as in the interval setup. For the special example of the conifold the two possible mirrors do not differ in the gauge and matter content, only in the parameter map. This will not be the case in the more general examples considered below.
As analyzed above, the corresponding dual gauge theory is a U(1) gauge group with 3 neutral fields $`X`$, $`Y`$ and $`Z`$ and two flavors $`Q`$, $`\stackrel{~}{Q}`$, $`T`$ and $`\stackrel{~}{T}`$ with charges $`+1`$, $`1`$, $`+1`$, $`1`$ respectively. The superpotential in the singular case is $`W=QX\stackrel{~}{Q}+TY\stackrel{~}{T}`$.
By S-duality, as in the NS NS’ setup, turning on the D5-brane diamonds corresponds to turning on vevs for the d=4 hypermultiplets from the D5 D5’ strings. Under $`𝒩=(2,2)`$ these hypermultiplets decompose into background chiral multiplets and hence appear as parameters in the superpotential. If we call those chiral multiplets $`h`$ and $`\stackrel{~}{h}`$ the corresponding superpotential contributions are $`Qh\stackrel{~}{T}+\stackrel{~}{Q}\stackrel{~}{h}T`$, so that all in all the full superpotential reads
$$W=QX\stackrel{~}{Q}+TY\stackrel{~}{T}+Qh\stackrel{~}{T}+\stackrel{~}{Q}\stackrel{~}{h}T.$$
## 3 More mirror pairs
### 3.1 Other singular CY spaces
According to the analysis of , D1 brane probes on the blowup of spaces of the form
$$G_{kl}:xy=u^kv^l$$
are $`T_6`$ dual to an interval setup with $`k`$ NS and $`l`$ NS’ branes. The gauge group is a $`U(1)^{k+l-1}`$ with bifundamental matter. It is straightforward to construct interval mirrors via the 2-10 flip in terms of a $`U(1)`$ with 2 singlets and $`k+l`$ flavors. The $`k+l-1`$ complexified FI terms map into the $`k+l-1`$ independent twisted mass terms (one twisted mass can be absorbed by redefining the origin of the Coulomb branch).
Similarly we can construct diamond mirrors for D1 brane probes of $`C_{kl}`$ spaces,
$$C_{kl}:xy=z^k,uv=z^l.$$
The gauge group for the D1 brane probe is $`U(1)^{2kl-1}`$. The mirror is once more a single $`U(1)`$ with 2 singlets and $`k+l`$ flavors. This time $`(k+1)(l+1)-3`$ complexified FI terms map to superpotential masses. Note that, while the D1-brane gauge theory has $`2kl-1`$ FI terms, only $`(k+1)(l+1)-3`$ lead to independent deformations of the moduli space. This is a consequence of the fact that the D1 brane gauge theory is not the minimal linear sigma model of $`C_{kl}`$, which is just a $`U(1)^{(k+1)(l+1)-3}`$ (the same phenomenon arises in the case of $`ℂ^3/\mathrm{\Gamma }`$ orbifolds ).
### 3.2 Generalization to non-abelian gauge groups
Our realization in terms of brane setups gives us for free the non-abelian version of the story, the mirror dual of $`N`$ D1 branes sitting on top of the conifold. Let us spell out the dual pairs once more in the simple example of the conifold. Generalization to arbitrary $`G_{kl}`$ and $`C_{kl}`$ spaces is straight forward. The gauge group on $`N`$ D1 branes on the blowup of the conifold is
$$SU(N)\times SU(N)\times U(1)$$
where we already omitted the decoupled center of mass VM. The matter content consists of 2 bifundamental flavors $`A_{1,2}`$, $`B_{1,2}`$. They couple via a superpotential
$$W=A_1B_1A_2B_2-A_1B_2A_2B_1.$$
The diamond mirror of this theory is a single $`U(N)`$ gauge group with 3 adjoints $`X`$, $`Y`$ and $`Z`$ (here by adjoint we really mean a $`U(N)`$ adjoint, that is an $`SU(N)`$ adjoint and a singlet; the singlet in $`Z`$ once more corresponds to the overall center of mass motion and decouples) and 2 fundamental flavors $`Q`$, $`\stackrel{~}{Q}`$, $`T`$, $`\stackrel{~}{T}`$ coupling via a superpotential:
$$W=X[Y,Z]+QX\stackrel{~}{Q}+TY\stackrel{~}{T}+hQ\stackrel{~}{T}+\stackrel{~}{h}\stackrel{~}{Q}T$$
where $`h`$ and $`\stackrel{~}{h}`$ are the same background parameters determining the diamond as in the abelian case.
## 4 Geometric mirror symmetry from linear sigma models
The basic conjecture is that applying both dualities succesively maps the L$`\sigma `$M for a given Calabi-Yau to the L$`\sigma `$M on the mirror. The parameter map we have presented above implies that the dual theory is formulated in terms of twisted multiplets, realizing the required flip in the R-charge.
In order to support our conjecture, let us do the calculation for the single D1 brane probe on a $`𝒞_{kl}`$ space. By construction the Higgs branch of the gauge theory we start with is the blownup $`𝒞_{kl}`$ space. The twisted mirror of this theory is a U(1) gauge theory coupled to $`k+l`$ flavors $`Q`$, $`\stackrel{~}{Q}`$ and $`T`$, $`\stackrel{~}{T}`$ and two singlet fields $`X`$ and $`Y`$. The superpotential takes the form
$$W=\sum _{i=1}^{k}Q_i(X-a_i)\stackrel{~}{Q}^i+\sum _{a=1}^{l}T_a(Y-b_a)\stackrel{~}{T}^a+\sum _{ia}Q_ih_a^i\stackrel{~}{T}^a+\stackrel{~}{Q}^i\stackrel{~}{h}_i^aT_a,$$
(1)
where $`h`$ and $`\stackrel{~}{h}`$ are background hypermultiplets parametrizing the diamonds and the $`a_i`$ and $`b_a`$ are the relative positions of the D5 and D5’ branes in the D1 D5 D5’ picture along 45 and 89 respectively; $`a_i=b_a=0`$.
According to the conjecture we now must find the ordinary mirror of this theory, whose Higgs branch, it is claimed, will be the mirror manifold. Ordinary mirror symmetry derives from 3d mirror symmetry. In three dimensions the Higgs branch of the mirror theory is the same as the quatum corrected Coulomb branch of the original one. For the purpose of computing the mirror of the $`𝒞_{kl}`$ space it suffices therefore to calculate the effective Coulomb branch of the 3d U(1) gauge theory with $`k+l`$ flavors and superpotential eq.(1).
First let us study the classical moduli space. The D-term equations require
$$\sum _{i=1}^{k}|Q_i|^2-|\stackrel{~}{Q}^i|^2+\sum _{a=1}^{l}|T_a|^2-|\stackrel{~}{T}^a|^2=0$$
the F-term requirements for the $`Q`$, $`T`$, $`\stackrel{~}{Q}`$ and $`\stackrel{~}{T}`$ fields are
$$N\left(\begin{array}{c}Q\\ T\end{array}\right)=0,(\stackrel{~}{Q},\stackrel{~}{T})N^T=0$$
where $`N`$ is the $`k+l`$ by $`k+l`$ matrix
$$N=\left(\begin{array}{cc}\text{diag}\{X-a_1,X-a_2,\dots ,X-a_k\}& h\\ \stackrel{~}{h}& \text{diag}\{Y-b_1,Y-b_2,\dots ,Y-b_l\}\end{array}\right)$$
In addition the scalar potential contains the standard piece
$$2\sigma ^2(\sum _{i=1}^{k}|Q_i|^2+|\stackrel{~}{Q}^i|^2+\sum _{a=1}^{l}|T_a|^2+|\stackrel{~}{T}^a|^2)$$
from the coupling of the scalar $`\sigma `$ in the vector multiplet to the matter fields and the F-terms for $`X`$ and $`Y`$. The classical Coulomb branch is three complex dimensional parametrized by $`X`$, $`Y`$ and $`\sigma +i\gamma `$, where $`\gamma `$ is the dual photon. Along this branch, $`Q`$, $`\stackrel{~}{Q}`$, $`T`$ and $`\stackrel{~}{T}`$ are zero. The Coulomb branch meets the Higgs branch along the curve (note that this is the defining equation of the curve the NS5-branes wrap, the diamond . It is also the defining equation of the complex structure of the local mirror manifold for the blownup $`𝒞_{kl}`$, the deformed $`𝒢_{kl}`$, whose defining equation is obtained by adding the ‘quadratic pieces’, $`UV-\text{det}(N)=0`$, which do not change the complex structure)
$$\text{det}(N)=0.$$
Now consider the quantum Coulomb branch. As shown in , the quantum Coulomb branch of a $`U(1)`$ theory with $`N_f=k+l`$ flavors has an effective description in terms of chiral fields $`V_+`$ and $`V_-`$ and a superpotential
$$W_{eff}=N_f(V_+V_-\text{ det}(M))^{1/N_f}.$$
$`M`$ is the $`k+l`$ by $`k+l`$ meson matrix
$$M=\left(\begin{array}{cc}Q_i\stackrel{~}{Q}^j& Q_i\stackrel{~}{T}^b\\ T_a\stackrel{~}{Q}_j& T_a\stackrel{~}{T}^b\end{array}\right).$$
Far out on the Coulomb branch $`V_\pm `$ are related to the classical variables via $`V_\pm \sim e^{\pm (\sigma +i\gamma )/g^2}`$. Adding the tree level superpotential eq. (1), written in the compact form
$$\text{Tr }(NM)$$
to this effective superpotential, the $`M`$ F-term equations describing our quantum Coulomb branch read
$$N_{\beta \gamma }-(V_+V_-)^{1/N_f}\frac{H_{\beta \gamma }}{\text{det}(M)^{1-1/N_f}}=0$$
(2)
where
$$H_{\beta \gamma }=\frac{\partial \text{det}(M)}{\partial M^{\beta \gamma }}$$
Taking the determinant in eq.(2) we find that the quantum Coulomb branch is described by a hypersurface
$$\text{det}(N)=V_+V_-.$$
This is precisely the mirror manifold of $`𝒞_{kl}`$ . Since the origin $`V_+=V_-=X=Y=0`$ is no longer part of this branch of moduli space, we arrive at a smooth solution even though we started from the effective superpotential of , which is singular at the origin.
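As a sanity check of the final formula, the simplest case $`k=l=1`$ can be worked out symbolically; the following sketch (using sympy, with our own variable names) only confirms that $`\text{det}(N)=V_+V_-`$ is the familiar deformed conifold equation:

```python
# Symbolic check for k = l = 1: det(N) = V+ V- reads X*Y - h*htilde = V+ V-,
# i.e. the deformed conifold xy - uv = epsilon with deformation h*htilde.
import sympy as sp

X, Y, h, ht, Vp, Vm = sp.symbols('X Y h htilde V_plus V_minus')

# N for one Q flavour (a_1 = 0) and one T flavour (b_1 = 0)
N = sp.Matrix([[X, h],
               [ht, Y]])

hypersurface = sp.Eq(N.det(), Vp * Vm)
print(hypersurface)          # Eq(X*Y - h*htilde, V_plus*V_minus)
```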
We here considered only mirror symmetry for $`𝒞_{kl}`$ spaces. Since any affine toric CY can be imbedded in $`𝒞_{kl}`$ for sufficiently large $`k`$ and $`l`$, mirror symmetry for all such spaces follows by deformation.
## Acknowledgements
We would like to thank Jacques Distler, Ami Hanany, Sav Sethi, Matt Strassler and Andy Strominger for useful discussions.
# Estimating the 𝐽 function without edge correction
Adrian Baddeley and Katja Schladitz were supported by a grant from the Australian Research Council. Martin Kerscher was supported by the “Sonderforschungsbereich SFB 375 für Astroteilchenphysik der Deutschen Forschungsgemeinschaft”. Bryan Scott was supported by an Australian Postgraduate Award and a CSIRO Postgraduate Studentship.
## 1 Introduction
A spatial point pattern is often studied by estimating point process characteristics such as the empty space function $`F`$, the nearest-neighbour distance distribution function $`G`$, and the $`K`$-function. Here we consider
$$J(r)=\frac{1-G(r)}{1-F(r)}$$
(1)
as advocated by van Lieshout and Baddeley (1996). The $`J`$ function is identically equal to 1 for a Poisson process, and values of $`J(r)`$ less than or greater than 1 are suggestive of clustering or regularity, respectively. (Note that it is of course possible to find non-Poisson point processes for which $`J(r)=1`$, as Bedford and van den Berg (1997) have shown).
In practice, observation of the point process is usually restricted to some bounded window $`W`$. As a consequence, estimation of the summary functions, which is based on the measurement of various distances of the point process, is hampered by the “edge effects” (bias and censoring) introduced by restricting observation of these distances to $`W`$. In order to counter edge effects it is necessary to apply some form of edge correction to the empirical estimates of the summary functions. For further details see Baddeley (1999); Stoyan et al. (1995); Cressie (1991); Ripley (1988).
Unbiasedness is highly desirable when a summary function estimate is to be compared directly to the corresponding theoretical value for a point process model. However, as Diggle argues in the discussion of Ripley (1977) and in Diggle (1983), unbiasedness is not essential when using a summary function estimator as the test statistic in a hypothesis test, since the bias will be accounted for in the null distribution of the test statistic.
This paper studies the uncorrected estimator of $`J`$ obtained by ignoring edge effects and computing $`\widehat{J}=(1-\widehat{G})/(1-\widehat{F})`$ from the uncorrected, empirical distributions $`\widehat{G}`$ and $`\widehat{F}`$ of distances observed in a compact window. It was prompted by the accidental discovery that this uncorrected estimator remarkably still yields values $`\widehat{J}(r)`$ approximately equal to 1 for the Poisson process. An intuitive explanation is that the relative bias due to edge effects is roughly equal for the estimates of $`1-G`$ and $`1-F`$, so that these biases approximately cancel in the ratio estimator of $`J`$. It follows that the uncorrected estimate of $`J`$ could be used for the direct visual assessment of deviations from the Poisson process, something not possible with the uncorrected estimates of $`F`$, $`G`$ or $`K`$. Our aim is to formalise this uncorrected estimator of $`J`$, and to investigate its use as a summary statistic and as a test statistic in point pattern analysis.
The paper is organised as follows. Section 2 outlines and generalises some fundamental point process ideas. In Section 3 we define the $`J_W`$ function estimated by the uncorrected procedure described above and derives some of its properties. In Section 4 we verify that the natural estimator of $`J_W`$ is the uncorrected estimate $`\widehat{J}_W`$. Finally, Section 5 presents the results of a computational experiment to compare the power of Monte Carlo Tests constructed from estimates of $`J`$ and $`J_W`$ as well as estimation results from the simulations of various point process models in square and rectangular windows in $`^2`$ and in a cubic window in $`^3`$.
## 2 Background
Let $`X`$ be a stationary point process in $`ℝ^d`$ with intensity $`\lambda `$. (For details of the theory of point processes see Daley and Vere-Jones, 1988; Cressie, 1991; Stoyan et al., 1995). The empty space function $`F(r)`$ is the probability of finding a point of the process within a radius $`r`$ of an arbitrary fixed point:
$$F(r)=ℙ(X\cap B(0,r)\ne \emptyset ),$$
(2)
where $`B(x,r)`$ denotes the ball of radius $`r`$ centred at $`x`$. The nearest neighbour distance distribution function $`G(r)`$ is the probability of finding another point of the process in the ball of radius $`r`$ centred at a “typical” point of the process:
$$G(r)=ℙ^{0!}(X\cap B(0,r)\ne \emptyset ),$$
(3)
where $`ℙ^{0!}`$ denotes the reduced Palm distribution at the origin $`0`$. Roughly speaking $`ℙ^{0!}`$ is the distribution of the rest of the process $`X\setminus \{0\}`$ given there is a point of the process at the origin (see Daley and Vere-Jones, 1988).
Let $`W`$ be a compact observation window in $`ℝ^d`$ with nonempty interior. The construction of estimators for $`F`$ is based on the stationarity of $`X`$ yielding
$$F(r)=\frac{1}{|W|}\int _Wℙ(X\cap B(x,r)\ne \emptyset )𝑑x,$$
(4)
where $`|W|`$ denotes the volume of $`W`$. For $`G`$ the Campbell-Mecke formula (Stoyan et al., 1995, (4.4.3)) gives
$$G(r)=\frac{1}{\lambda |W|}𝔼\sum _{x\in X\cap W}\mathrm{𝟙}\{X\cap B(x,r)\setminus \{x\}\ne \emptyset \}.$$
(5)
In both cases we need information about $`X\cap B(x,r)`$ for $`x\in W`$, whereas we only observe $`X\cap B(x,r)\cap W`$. Usually this edge effect problem is countered by restricting the integration in (4) and summation in (5) to those points $`x`$ for which $`B(x,r)\subseteq W`$ (the “border method”) or by weighting the contributions to the integral and sum so as to correct for the bias (see for example Baddeley, 1999).
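For concreteness, a minimal sketch (our own helper, for a pattern in the unit square) of the border-method estimate of $`G`$: only points at least $`r`$ away from the window boundary contribute, so that $`B(x,r)\subseteq W`$.

```python
# Border-method ("reduced sample") estimate of G in the unit square:
# restrict to points whose distance to the boundary is at least r.
import numpy as np
from scipy.spatial import cKDTree

def G_border(pts, r):
    tree = cKDTree(pts)
    d_nn, _ = tree.query(pts, k=2)                     # column 1: nearest other point
    d_bdry = np.minimum(pts, 1.0 - pts).min(axis=1)    # distance to window edge
    interior = d_bdry >= r
    if not interior.any():
        return np.nan                                  # no usable points at this r
    return np.mean(d_nn[interior, 1] <= r)
```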
The uncorrected, empirical distributions of distances observed in the window $`W`$ correspond to simply replacing $`X`$ by $`X\cap W`$ in (4) and (5). In order to investigate the effect of this, we extend $`F`$ and $`G`$ to functionals as follows.
###### Definition 1.
For every compact set $`K\subseteq ℝ^d`$ containing the origin define
$`𝔽(K)`$ $`:=`$ $`ℙ(X\cap K\ne \emptyset )`$
$`𝔾(K)`$ $`:=`$ $`ℙ^{0!}(X\cap K\ne \emptyset )`$
$`𝕁(K)`$ $`:=`$ $`{\displaystyle \frac{1-𝔾(K)}{1-𝔽(K)}}.`$
(Note that the empty space functional bears some relation to the contact distribution function (Stoyan et al., 1995, p. 105) in that $`H_B(r)=𝔽(rB)`$). From these we are able to define the window based $`J`$ function.
## 3 The $`J_W`$ function
###### Definition 2.
For every compact set $`W\subseteq ℝ^d`$ with nonempty interior let
$$J_W(r):=\frac{\int _W[1-𝔾(B(0,r)\cap W_x)]𝑑x}{\int _W[1-𝔽(B(0,r)\cap W_x)]𝑑x}$$
(6)
be the *window based $`J`$ function*, where $`W_x=\{y-x:y\in W\}`$ is the translate of $`W`$ by $`x\in ℝ^d`$.
If $`X`$ is a stationary Poisson process, then by Slivnyak’s Theorem (Stoyan et al., 1995, (4.4.7)) $`ℙ^{0!}\equiv ℙ`$. Thus $`𝔽\equiv 𝔾`$ and we arrive at the following proposition.
###### Proposition 1.
Let $`X`$ be a stationary Poisson process. Then
$$J_W(r)\equiv 1\text{ for all }W\text{ and }r\ge 0.$$
Explicit evaluation of $`J_W`$ for other point process models seems difficult. However, we can show that $`J_W`$ behaves similarly to the $`J`$ function for ordered and clustered processes, suggesting it can also be interpreted in the same way as the $`J`$ function. For some processes it is also possible to demonstrate that the $`J_W`$ function exhibits less deviation from the Poisson hypothesis than the equivalent $`J`$ function.
###### Proposition 2.
Suppose $`X`$ is a process which is “ordered” in the sense that its $`J`$ functional is non-decreasing, that is $`K_1\subseteq K_2`$ implies $`𝕁(K_1)\le 𝕁(K_2)`$. Then
$$1\le J_W(r)\le J(r)\text{ for all }r.$$
Similarly if a process is “clustered”, $`K_1\subseteq K_2`$ implies $`𝕁(K_1)\ge 𝕁(K_2)`$, then $`1\ge J_W(r)\ge J(r)`$ for all $`r`$.
Proof: Observe that (6) can be rewritten
$$J_W(r)=\int _W𝕁(B(0,r)\cap W_x)h_{W,r}(x)𝑑x$$
(7)
where
$$h_{W,r}(x)=\frac{\left(1-𝔽(B(0,r)\cap W_x)\right)}{\int _W\left(1-𝔽(B(0,r)\cap W_y)\right)dy}$$
satisfies $`h_{W,r}(x)\ge 0`$ for all $`x\in ℝ^d`$ and $`\int _Wh_{W,r}(x)𝑑x=1`$. Hence
$$\underset{W}{\mathrm{min}}𝕁(B(0,r)\cap W_x)\le J_W(r)\le \underset{W}{\mathrm{max}}𝕁(B(0,r)\cap W_x)$$
and since $`𝕁`$ is nondecreasing
$$1=𝕁(\emptyset )\le J_W(r)\le 𝕁(B(0,r))=J(r).\text{ }\square $$
The latter result can be strengthened to strict inequality for specific examples.
###### Proposition 3.
Let $`X`$ be a Neyman-Scott cluster process with mean number of points per cluster greater than $`1`$. Assuming the support of the distribution of the cluster points contains a neighbourhood of the origin, then
$$J(r)<J_W(r)<1\text{ for all }W\text{ and }r>0.$$
Examples of processes satisfying the conditions of Proposition 3 are Matérn’s cluster process and the modified Thomas process described, for example, in Stoyan et al. (1995).
Proof: The Palm distribution of a Neyman-Scott process is the convolution of the original distribution $`P`$ of the process and the Palm distribution $`c_0`$ of the representative cluster $`N`$ (Stoyan et al., 1995, (5.3.2)). Thus, for every compact set $`K`$, we have
$`1-𝔾(K)`$ $`=`$ $`{\displaystyle \int \int \mathrm{𝟙}\{(\phi \cup \psi )\cap K\setminus \{0\}=\emptyset \}c_0(d\psi )P(d\phi )}`$
$`=`$ $`{\displaystyle \int \int \mathrm{𝟙}\{\phi \cap K\setminus \{0\}=\emptyset \}\mathrm{𝟙}\{\psi \cap K\setminus \{0\}=\emptyset \}c_0(d\psi )P(d\phi )}`$
$`=`$ $`ℙ(X\cap K\setminus \{0\}=\emptyset )c_0(N\cap K\setminus \{0\}=\emptyset )`$
$`=`$ $`(1-𝔽(K))c_0(N\cap K\setminus \{0\}=\emptyset ).`$
Hence
$$𝕁(K)=c_0(N\cap K\setminus \{0\}=\emptyset ).$$
Now the assumption on the cluster distribution ensures $`c_0(N\cap K\setminus \{0\}=\emptyset )<1`$ for all $`K`$ containing a neighbourhood of the origin. The conclusion then follows since $`B(0,r)\cap W_x`$ contains a neighbourhood of $`0`$ whenever $`x`$ is an interior point of $`W`$. $`\square `$
###### Proposition 4.
Let $`X`$ be a hard-core process with hard-core radius $`R`$. Then
$$1<J_W(r)<J(r)\text{ for all }W\text{ and }0<r<R,$$
and $`J_W(r)`$ is non-decreasing in $`r`$ for all $`r<R`$.
Proof: Trivially we have $`𝔾(K)=0`$ for all $`K\subseteq B(0,R)`$, while for any point process the empty space functional $`𝔽`$ is nondecreasing. Therefore the $`J`$ functional becomes
$$𝕁(K)=\frac{1}{1-𝔽(K)}\text{ for all }K\subseteq B(0,R),$$
which is also non-decreasing and the result follows by Proposition 2. $`\square `$
## 4 Estimation of the $`J_W`$ function
Analogously to $`J`$ we want to estimate $`J_W`$ by the ratio of two estimators for the denominator and numerator in Definition 2. The stationarity of $`X`$ and Fubini’s Theorem yield
$$\int _W𝔽(B(0,r)\cap W_x)𝑑x=𝔼\int _W\mathrm{𝟙}\{X\cap B(x,r)\cap W\ne \emptyset \}𝑑x,$$
and so the denominator of (6) becomes
$$\int _W[1-𝔽(B(0,r)\cap W_x)]𝑑x=|W|\left[1-\frac{1}{|W|}𝔼|W\cap \left((X\cap W)\oplus B(0,r)\right)|\right],$$
(8)
where $`\oplus `$ denotes Minkowski addition.
Applying the Campbell-Mecke formula (Stoyan et al., 1995, (4.4.3)) we find
$`{\displaystyle \int _W}𝔾(B(0,r)\cap W_x)𝑑x`$ $`=`$ $`{\displaystyle \int _W}ℙ^{0!}(X\cap B(0,r)\cap W_x\ne \emptyset )𝑑x`$
$`=`$ $`{\displaystyle \frac{1}{\lambda }}𝔼{\displaystyle \sum _{x\in X\cap W}}\mathrm{𝟙}\{X\cap B(x,r)\setminus \{x\}\cap W\ne \emptyset \}.`$
Let $`d(x,A)`$ denote the Euclidean distance from a point $`x\in ℝ^d`$ to a set $`A\subseteq ℝ^d`$. The numerator of (6) can then be expressed as
$$\int _W[1-𝔾(B(0,r)\cap W_x)]𝑑x=|W|[1-\frac{1}{\lambda |W|}𝔼\sum _{x\in X\cap W}\mathrm{𝟙}\{d(x,X\cap W\setminus \{x\})\le r\}].$$
(9)
The two results (8) and (9) allow uncorrected estimation of $`J_W(r)`$ by
$$\widehat{J}_W(r):=\frac{1-\frac{1}{\mathrm{\#}(X\cap W)}\sum _{x\in X\cap W}\mathrm{𝟙}\{d(x,X\cap W\setminus \{x\})\le r\}}{1-\frac{1}{|W|}|W\cap \left((X\cap W)\oplus B(0,r)\right)|}$$
(10)
which is the uncorrected estimate of the $`J`$ function referred to in the introduction.
Thus the uncorrected estimate of the $`J`$ function, based on the uncorrected (EDF) estimates of $`F`$ and $`G`$, can be thought of as a ratio unbiased estimator of the $`J_W`$ function. As was shown in the previous section, the $`J_W`$ function can be interpreted in the same way as the $`J`$ function. Consequently, the uncorrected estimate of the $`J`$ function, unlike the uncorrected estimates of $`F`$, $`G`$ or $`K`$, can be used directly as an interpretive statistic in classifying deviations from the Poisson process.
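A minimal sketch of evaluating (10) for a pattern in the unit square (our own helper; the denominator is approximated by the fraction of a fine grid of window points lying within distance $`r`$ of the pattern):

```python
# Uncorrected estimator J_W-hat: empirical G from nearest-neighbour distances,
# empirical F from empty-space distances measured on a regular grid over W.
import numpy as np
from scipy.spatial import cKDTree

def J_W_hat(pts, r, n_grid=200):
    tree = cKDTree(pts)
    d_nn, _ = tree.query(pts, k=2)                   # d(x, X∩W \ {x}) for each point
    G_hat = np.mean(d_nn[:, 1] <= r)
    g = (np.arange(n_grid) + 0.5) / n_grid           # grid approximating W
    grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
    d_empty, _ = tree.query(grid, k=1)
    F_hat = np.mean(d_empty <= r)                    # ≈ |W ∩ ((X∩W) ⊕ B(0,r))| / |W|
    return (1.0 - G_hat) / (1.0 - F_hat)             # undefined once F_hat reaches 1
```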
## 5 Simulation study
This section reports the results of simulation studies comparing the uncorrected estimator $`\widehat{J}_W`$ with “corrected” estimators $`\widehat{J}`$. In §5.1 we show the results for a single simulated pattern; §5.2 reports the means and variances of the estimators of $`J`$ in a simulation study. These results show that $`\widehat{J}_W`$ typically has smaller variance than the corrected estimators. In §5.3 and §5.4 we consider the power of hypothesis tests based on the uncorrected estimator $`\widehat{J}_W`$. It is not clear, a priori, whether “corrected” estimators $`\widehat{J}`$ or uncorrected estimators $`\widehat{J}_W`$ will yield more powerful tests. There are two competing effects: the variance of $`\widehat{J}_W`$ is smaller than that of $`\widehat{J}`$, but $`J_W`$ is less sensitive than $`J`$ to departures from the Poisson process (say) according to Proposition 2.
For the comparisons which follow, the reduced sample (border method) estimator $`\widehat{J}_{rs}`$ was adopted. However, for the purpose of highlighting the variations which exist between corrected estimates of the $`J`$ function, the Kaplan-Meier estimator $`\widehat{J}_{km}`$ (described in Baddeley and Gill, 1997) is also considered in many cases.
### 5.1 Empirical example
This example highlights the use of $`\widehat{J}_W`$ as a qualitative summary statistic. Consider the point pattern given in Figure 1. This pattern is a realisation of a Matérn Cluster Process, intensity $`\lambda =100`$, cluster radius $`R=0.1`$ and mean number of offspring $`\mu =4`$, observed within an observation window consisting of two rectangular windows, 3.125 by 0.16 units, separated by 0.02 units. (The Matérn Cluster Process is a Neyman-Scott process in which offspring are uniformly distributed in disc of radius $`R`$ about (Poisson) parent points. The number of offspring per parent point is Poisson with mean $`\mu `$ (Stoyan et al., 1995, p. 159)).
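The Matérn cluster model is straightforward to simulate directly; a minimal sketch for a unit-square window (parameter names are ours; parents of intensity $`\kappa `$ give overall intensity $`\lambda =\kappa \mu `$):

```python
# Matern cluster process: Poisson parents, each with Poisson(mu) offspring
# placed uniformly in a disc of radius R; parents are generated on an
# enlarged window so clusters near the edge are not lost.
import numpy as np

def matern_cluster(kappa, mu, R, rng=np.random.default_rng(0)):
    lo, hi = -R, 1.0 + R
    n_parents = rng.poisson(kappa * (hi - lo) ** 2)
    offspring = []
    for parent in rng.uniform(lo, hi, size=(n_parents, 2)):
        n_off = rng.poisson(mu)
        rad = R * np.sqrt(rng.uniform(size=n_off))       # uniform in the disc
        ang = rng.uniform(0.0, 2.0 * np.pi, size=n_off)
        offspring.append(parent + np.c_[rad * np.cos(ang), rad * np.sin(ang)])
    pts = np.vstack(offspring) if offspring else np.empty((0, 2))
    inside = np.all((pts >= 0.0) & (pts <= 1.0), axis=1)
    return pts[inside]                                   # keep points in the unit square

pattern = matern_cluster(kappa=25, mu=4, R=0.1)          # intensity kappa*mu = 100
```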
Figure 1 also displays the corresponding estimates $`\widehat{J}_W`$ and $`\widehat{J}_{rs}`$ of the given point pattern, together with envelopes of 99 simulations of a binomial process of the same intensity, observed within the same window. The results in this case are marked; the uncorrected estimate suggests strong evidence of clustering in the point pattern, while the corrected estimate appears to suggest no evidence of clustering. Of course, the results are not surprising given the severity of the edge effect introduced by the “censoring” of the middle seventeenth of the window compared to the relatively small bias this introduces. However, it does illustrate the possible benefit of using $`\widehat{J}_W`$ in certain situations.
As the empirical use of $`J_W`$ is the same as the $`J`$ function, readers interested in further examples of the analysis of empirical data are referred to Kerscher (1998), Kerscher et al. (1999), and Kerscher et al. (1998).
### 5.2 Mean and variance
From the previous example it is clear that the uncorrected estimate of the J function may be superior in some situations. To examine whether this was true more generally, a number of simulations were conducted to compare the corrected and uncorrected methods across a range of processes. This began with the estimation of the mean and standard deviation of the three estimators ($`\widehat{J}_W`$, $`\widehat{J}_{km}`$, and $`\widehat{J}_{rs}`$) based on $`10,000`$ realisations of a Poisson process with intensity $`100`$, in a unit square window, with the results presented in Figure 2.
With increasing $`r`$, the distributions of the $`\widehat{J}_.`$ (that is, the estimators $`\widehat{J}_W`$, $`\widehat{J}_{km}`$, and $`\widehat{J}_{rs}`$) become skewed to the right; for large $`r`$ there is substantial mass above $`\widehat{J}_.(r)=2`$. As a result all three estimators are positively biased for large values of $`r`$. Empirically it was found that a square root transformation approximately symmetrised the distribution. As expected, the sample standard deviation of the estimates increases with $`r`$, as the denominator of each estimator decreases with $`r`$. However, $`\widehat{J}_W`$ is less biased and has lower variance than $`\widehat{J}_{km}`$ and $`\widehat{J}_{rs}`$.
These simulations were repeated for two processes with more substantial edge effects (namely, a Poisson process of intensity $`\lambda =25`$ in a unit window, and a Poisson process of intensity $`\lambda =10`$ in a $`10`$ by $`1`$ rectangular window). In both cases the results were qualitatively similar to those above.
In addition, further simulations were conducted for point patterns in $`^3`$. Estimates of the means and standard deviations of $`\widehat{J}_{km}`$ and $`\widehat{J}_{rs}`$ based on $`1000`$ realisations in a unit cube were compared for the Poisson process and two alternatives: Matérn hard-core (Stoyan et al., 1995, p. 163) and Matérn cluster processes for a range of parameter values. Some of the key results are presented in Figure 3. As expected, $`\widehat{J}_W`$ is reliable over a wider domain than $`\widehat{J}_{rs}`$. For hard-core processes, the standard deviation of $`\widehat{J}_{rs}`$ is considerably bigger than that of $`\widehat{J}_W`$ and the difference grows with the hard-core radius. For cluster processes the differences are far less apparent, however the overall tendency of $`\widehat{J}_W`$ to have lower variance is also confirmed for this class of processes. Note also that with both alternative processes the mean of $`\widehat{J}_W`$ is bounded by $`1`$ and $`\widehat{J}_{rs}`$, as expected.
It is interesting to note that, unlike the $`J`$ function estimators, the domain of the $`J_W`$ function estimator for a given point process realisation can be easily calculated. The $`J_W`$ estimator is defined for all $`r<r_{F_{max}}`$, where $`r_{F_{max}}`$ is the maximum nearest-point distance (the largest distance, over all points in the window, from a point to the nearest point of the process). Also $`J_W(r)=0`$ for any $`r_{G_{max}}\le r<r_{F_{max}}`$ if $`r_{G_{max}}<r_{F_{max}}`$, where $`r_{G_{max}}`$ is the maximum nearest-neighbour distance (the largest distance, over all points of the process within the window, from a point of the process to the nearest other point of the process). The value $`r_{F_{max}}`$ is however an upper bound on the domain of both the Reduced Sample and Kaplan-Meier estimators.
### 5.3 The test statistic
We now aim to compare the power of the $`J_W`$ function with the edge corrected estimators of the $`J`$ function in testing the Poisson hypothesis in the two dimensional case. We restricted ourselves to this estimation problem in view of the problems with estimating the range of interaction using the $`J`$ function reported in Kerscher et al. (1999).
The distribution of the following test statistic for each of the three estimators was estimated:
$$\tau =\int _0^{r_0}\frac{\widehat{J}_.(r)-1}{\widehat{\sigma }(r)}𝑑r,$$
(11)
where $`\widehat{\sigma }`$ denotes the sample standard deviation of $`\widehat{J}_.(r)`$ under the Poisson hypothesis. This form of test statistic was chosen, as opposed to a squared integrand, because of the skewed nature of the distributions of $`\widehat{J}_.`$. The distributions of the test statistics were estimated by a discrete sum and based on $`10,000`$ realisations of a Poisson process. The upper limit of integration $`r_0`$ was chosen to be the 0.9 quantile of the $`F`$ function (for intensity $`100`$, $`r_0\approx 0.856`$). Having estimated the distribution, the $`0.025`$ and $`0.975`$ quantiles were obtained for use in a two-sided $`5\%`$ significance test for deviation from a Poisson process. One-sided $`5\%`$ significance tests were also constructed to test for clustering or regularity by considering the $`0.05`$ and $`0.95`$ quantiles, respectively.
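A sketch of the corresponding Monte Carlo machinery (array names and shapes are ours): the statistic is evaluated as a discrete sum, with $`\widehat{\sigma }(r)`$ and the rejection thresholds taken from the simulated Poisson null ensemble.

```python
# tau of Eq. (11) as a discrete sum; J_null holds null-hypothesis estimates of
# J on the grid r (one row per simulated Poisson pattern), J_obs the estimate
# for the observed pattern.
import numpy as np

def tau_statistic(J_curve, r, sigma_hat):
    dr = np.diff(r, prepend=0.0)
    return np.sum((J_curve - 1.0) / sigma_hat * dr)

def two_sided_test(J_obs, J_null, r, level=0.05):
    sigma_hat = J_null.std(axis=0, ddof=1)
    tau_null = np.array([tau_statistic(row, r, sigma_hat) for row in J_null])
    lo, hi = np.quantile(tau_null, [level / 2.0, 1.0 - level / 2.0])
    tau_obs = tau_statistic(J_obs, r, sigma_hat)
    return tau_obs, not (lo <= tau_obs <= hi)    # second value: reject Poisson?
```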
### 5.4 Power of tests using the various estimators
In order to estimate the power of the hypothesis test described above, realisations from alternative point processes were generated and the proportion of the realisations rejected by the hypothesis test was recorded. The first class of point processes considered was the Matérn hard-core process, with hard-core radius $`R`$. For each of $`22`$ values of $`R`$, $`1000`$ realisations were generated. The proportion of rejections is presented in Figure 4. Note that as $`R\to 0`$ the model approaches the Poisson process, so we expect all power curves to approach $`0.05`$ $`(5\%)`$ as $`R\to 0`$. All three estimators have very similar power curves, with the $`J_W`$ estimator at least as powerful as the two $`J`$ function estimators for all values of $`R`$.
The other class of alternative point processes considered was Matérn’s cluster process. A grid of $`(R,\mu )`$ values was constructed and $`1000`$ realisations were obtained from the corresponding Matérn cluster process. The proportion of rejections for each $`(R,\mu )`$ value is presented in Figure 5. Once again, the curves are very similar, with all three tests performing almost identically. Similar results were obtained for the respective one-sided tests in both cases as well as for the lower intensity $`25`$.
The power tests for the $`10`$ by $`1`$ window support the argument that edge effects are stronger when the boundary is relatively longer. The resulting power function estimates against the Matérn Model II and Matérn cluster process models are also presented in Figures 4 and 5, respectively.
One important observation made while conducting these numerical simulations was that the choice of test statistic had far more impact on the power of the resulting hypothesis test than the choice of $`J`$ function estimator. For a comparison of various test statistics of the $`J`$ function see Thönnes and van Lieshout (1998).
# Stress in frictionless granular material: Adaptive Network Simulations
## I Introduction
The fragility of granular matter is a longstanding preoccupation of engineers and a recent preoccupation of physicists. By granular matter we mean a static assembly of hard, spheroidal grains whose contact forces may be compressive but not tensile. Thus granular matter is noncohesive. Mohr and Coulomb recognized a fundamental continuum consequence of the noncohesive state. There can be no co-ordinate system in which the shear stress exceeds some fixed multiple $`\mu `$ of the normal stress . This “Mohr-Coulomb” condition limits the stresses that a granular material can support, and thus amounts to a form of fragility. When this condition is violated, building foundations settle and embankments slip.
Modern civil engineering practice views the stress field in a granular medium as divided into elastic and plastic zones. The stresses in these plastic zones are at the Mohr-Coulomb limit, and thus these zones are at the margin of stability. The stress in the elastic zones is within the bounds of stability and thus the stress here is transmitted as in an elastic body.
Recently attention has turned to the microscopic origin of the macroscopic fragility of granular media. The microscopic pattern of contact forces and bead motions shows strong local heterogeneity and history dependence. The history of prior motion in a region clearly influences the way it transmits forces. The prior motion may affect the $`\mu `$ coefficient in the Mohr-Coulomb law, the elasticity tensor, or further constitutive properties. The question is, for a given history of relaxation to a static state, how are these forces transmitted and what range of forces can be supported. The transmission of forces can be expressed as a linear-response property of a granular pack. An infinitesimal force is added to the bead at position $`𝐱_0`$ and the corresponding incremental force on a contact at $`r`$ is determined. For sufficiently small perturbations of a finite pack, this linear response function G is well defined. It depends on the shapes and sizes of the beads, their frictional properties, and how the pack was constructed.
If the perturbing forces become too large, motion occurs. Beads shift their positions and form new contacts. This motion may be reversible, so that the beads return to their original positions when the perturbation is removed. This motion may also be irreversible, with the positions altered after removal of the perturbation. The thresholds for reversible and for irreversible motion are fundamental ways to characterize the nonlinear response of the pack. For any given pack there is a weakest perturbing force distribution that causes motion. This threshold force may go to zero as the size of the pack grows. Many simulations have sought to characterize the above features of a granular pack. These studies model the system in a realistic way that requires detailed specifications and many parameters. This detail makes it difficult to discern which observed features are inescapable consequences of the granular state, and which are properties of the particular realization. In this study we take the opposite approach, sacrificing realism for the sake of simplicity. We seek the simplest system that shows the instabilities of noncohesive material. Thus our system consists of frictionless, spherical beads, which have been deposited into a container one at a time and not moved thereafter. Such a system develops tensile contacts.
To avoid these contacts, we must define some motion that evolves the pack to a more stable state. Again we choose a procedure favoring simplicity rather than realism. We seek the stable state attainable with minimal disturbance from the initial state. Accordingly our procedure does not move the beads, but rather removes and adds contacts one at a time in order to attain a stable contact network. In this sense our simulation is an adaptive network.
This network is isostatic: the contact forces are determined from the applied forces solely through the force equilibrium of each bead, without reference to bead displacements or material deformation.
Our adaptive method demonstrates that frictionless granular materials can be mechanically robust. For a given load, the simulation converges to a state of no tensile contacts. A change in the applied load of order unity can be applied with only minor shifts in the contacts. Further, the three components of the stress in two dimensions obey a constitutive law of the “null stress” type: a weighted sum of the three components vanishes, the weights depending on the packing but not on the loading.
## II Response Function
Our system is a set of spherical beads, whose radii are chosen randomly within a moderate range. These beads are supported on one side, called the bottom, with a layer of fixed spheres. The width $`w`$ of the system is much larger than a bead. The beads are arranged densely in this space up to a height $`h`$. A fixed downward force $`F_0`$ is applied to each bead lying at the upper surface. We choose a configuration of beads that is mechanically stable: the normal forces acting at the bead contacts oppose the applied forces $`F_0`$ and prevent motion. Initially, we allow for tensile (negative) contact forces.
We label the $`N`$ beads by an index $`\alpha `$. Then we may denote the contact force from bead $`\alpha `$ to bead $`\beta `$ by the scalar $`f_{\alpha \beta }`$. Newton’s Third Law dictates that for all $`\alpha `$, $`\beta `$, $`f_{\beta \alpha }=f_{\alpha \beta }`$. The $`N_c`$ contact forces are constrained by the requirement that the total vector force on each bead vanish. In $`d`$ dimensions, there are evidently $`dN`$ such constraints. The bead positions $`𝐱_\alpha `$ are likewise constrained by the geometrical condition that the distance between two contacting beads $`\alpha `$ and $`\beta `$ must be the sum of their radii $`r_\alpha +r_\beta `$. There are $`N_c`$ such constraints for the $`dN`$ quantities $`𝐱_\alpha `$. If all these constraints are independent, the number $`N_c`$ of contacts must be exactly $`dN`$. Then the system is isostatic: the $`dN`$ force balance equations are just sufficient to determine the $`N_c`$ contact forces. The equations of force balance may be written
$$\sum _{\beta (\alpha )}f_{\alpha \beta }\widehat{n}_{\alpha \beta }=𝐅_\alpha $$
(1)
Here $`\beta (\alpha )`$ denotes the set of contacting neighbors of bead $`\alpha `$, $`𝐅_\alpha `$ is an external force applied to this bead, $`f_{\alpha \beta }`$ and the unit vector $`\widehat{n}_{\alpha \beta }`$ represent the magnitude and (fixed) direction of the contact force between the two beads $`\alpha `$ and $`\beta `$. Since all the above equations are linear, the response of the system to a given external forcing is determined by the response function $`𝐆`$:
$$f_{\alpha \beta }=𝐆(\alpha \beta |\gamma )𝐅_\gamma $$
(2)
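Before specialising to sequential packings, note that for a finite isostatic pack Eqs. (1) and (2) amount to a square linear system; a minimal sketch (our own data layout, with the sign conventions of Eq. (1) taken as written and all contacts assumed to join free beads):

```python
# Assemble the dN x N_c force-balance matrix and solve for the contact forces;
# its inverse encodes the response function G of Eq. (2).
import numpy as np

def contact_forces(contacts, normals, F_ext, d=2):
    """contacts: list of bead-index pairs (alpha, beta); normals[k]: unit vector
    n_{alpha beta}; F_ext: (n_beads, d) array of applied forces."""
    n_beads = F_ext.shape[0]
    A = np.zeros((d * n_beads, len(contacts)))
    for k, (a, b) in enumerate(contacts):
        A[d * a:d * a + d, k] += normals[k]      # contribution to bead alpha
        A[d * b:d * b + d, k] -= normals[k]      # reaction on bead beta (Newton III)
    f = np.linalg.solve(A, F_ext.reshape(-1))    # requires N_c = dN (isostatic)
    G = np.linalg.inv(A)                         # G[k, d*g : d*g+d] ~ G(alpha beta | gamma)
    return f, G
```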
The response function $`𝐆`$ determines not only the response to an external force but also the global displacement field associated with local geometrical perturbation of the network. In order to see this, let us assume that the packing is subjected to external forcing $`𝐅_\gamma `$. Then we relax exactly one of the $`N`$ geometric constraints, and change infinitesimally the distance between two contacting beads, $`r_{\alpha \beta }\equiv |𝐱_\beta -𝐱_\alpha |`$. As long as the connectivity of the network does not change, its motion is non-dissipative. This means that the work done to distort the packing locally, $`\delta r_{\alpha \beta }f_{\alpha \beta }`$, is the work against external forces, i. e.
$$\delta 𝐱_\gamma 𝐅_\gamma =\delta r_{\alpha \beta }f_{\alpha \beta }=\delta r_{\alpha \beta }𝐆(\alpha \beta |\gamma )𝐅_\gamma $$
(3)
The above equation should be valid for any set of external forces; hence,
$$\delta 𝐱_\gamma =𝐆(\alpha \beta |\gamma )\delta r_{\alpha \beta }$$
(4)
We conclude that $`𝐆`$ is the response function both for contact force and the displacement field. Note that the displacement discussed here is not due to deformation of the beads. It corresponds to a “soft mode” that preserves the distances between all contacting beads other than the perturbed contact.
In a general case, finding the response function for a given configuration is a non-local problem, which requires solving the set of linear equations (1). The task becomes much easier for the case of sequential packing. This is created by adding one bead at a time. The requirement of mechanical stability implies that any newly–added bead has exactly $`d`$ “supporting” contacts (in $`d`$-dimensional space). If all the contacts were permanent and this $`d`$-branch tree structure were not perturbed by the future manipulations, the response function might be found by a simple unidirectional projection procedure. Indeed, since there are exactly $`d`$ supporting contacts for any bead in a sequential packing, the total force $`\stackrel{~}{𝐅}_\alpha `$ including the external force $`𝐅_\alpha `$ and that applied from the supported beads can be uniquely decomposed onto the corresponding $`d`$ components, directed along the supporting unit vectors $`𝐧_{\alpha \gamma }`$. This gives the values of the supporting forces. The $`f`$’s may be compactly expressed in terms of a generalized scalar product $`\langle \cdots |\cdots \rangle _\alpha `$:
$$f_{\alpha \beta }=\langle \stackrel{~}{𝐅}_\alpha |\widehat{n}_{\alpha \beta }\rangle _\alpha $$
(5)
The scalar product $`\langle \cdots |\cdots \rangle _\alpha `$ is defined such that $`\langle \widehat{n}_{\alpha \beta }|\widehat{n}_{\alpha \beta ^{\prime }}\rangle _\alpha =\delta _{\beta \beta ^{\prime }}`$ for the supporting contacts $`\beta `$, $`\beta ^{\prime }`$ of bead $`\alpha `$. In general, it does not coincide with the conventional scalar product.
$$𝐆(\alpha \beta |\gamma )=\underset{(\gamma \alpha _1\mathrm{}\alpha \beta )}{}|\widehat{n}_{\gamma \alpha _1}_\gamma \widehat{n}_{\gamma \alpha _1}|\widehat{n}_{\alpha _1\alpha _2}_{\alpha _1}\mathrm{}\widehat{n}_{\alpha _k\alpha }|\widehat{n}_{\alpha \beta }_\alpha $$
(6)
Here the summation is done over all the trajectories $`(\gamma \alpha _1\cdots \alpha _k\alpha \beta )`$ such that any bead in the sequence is a supporting neighbor of the previous one.
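In two dimensions the projection step of Eq. (5) is a $`2\times 2`$ solve per bead; a minimal sketch of propagating the applied loads down a sequential packing (data layout and helper names are ours):

```python
# Walk the beads from last-deposited to first; decompose the accumulated load
# on each bead onto its two supporting directions (this is the generalized
# scalar product of Eq. (5)) and pass the resulting shares to the supports.
import numpy as np

def propagate_loads(order, supports, directions, F_ext):
    """order: bead ids, last-deposited first; supports[a] = (b1, b2);
    directions[a]: 2x2 array whose columns are unit vectors from a toward b1, b2;
    F_ext[a]: externally applied 2-vector load on bead a."""
    load = {a: np.asarray(v, dtype=float).copy() for a, v in F_ext.items()}
    forces = {}
    for a in order:
        U = directions[a]
        f = np.linalg.solve(U, load[a])          # load[a] = f[0]*U[:,0] + f[1]*U[:,1]
        b1, b2 = supports[a]
        forces[(a, b1)], forces[(a, b2)] = f
        load[b1] = load.get(b1, np.zeros(2)) + f[0] * U[:, 0]
        load[b2] = load.get(b2, np.zeros(2)) + f[1] * U[:, 1]
    return forces
```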
## III Adaptive network simulation.
For a large enough system, sequential packing is not compatible with the requirement of non-tensile contacts. Whenever this requirement is violated, a rearrangement occurs and the system finds a “better” configuration. One might expect that this would make the problem of force propagation a dynamic one. However, it is possible to limit oneself to a purely geometrical consideration, following the ideas of the previous section.
Suppose $`\alpha \beta `$ is a “bad bond”, whose contact force is negative (tensile). This means that the network would move in such a way that the two beads $`\alpha `$ and $`\beta `$ are pulled apart. In other words, the soft mode associated with the perturbation of the $`\alpha \beta `$ bond is activated, and for small enough displacements all the beads move in accordance with Eq. (4). The motion stops when a replacement contact is created, i.e. when a gap between any two neighboring beads closes.
In this work, we limit ourselves to this linear approximation. It should be understood that Eq. (4) is correct only for infinitesimal displacements, and in a general case one should account for the evolution of the response function in the course of the rearrangement. We avoid the problem of changing G by permitting only infinitesimal motion in the model. We imagine that the “bad bond” gets deactivated, and it is replaced with a rigid “strut” between two neighboring beads that were not in contact in the previous configuration. There is a natural choice for where the strut should be placed to cause minimal disturbance. Each pair of non-contacting neighbors $`\gamma \delta `$ has a gap $`r_{\gamma \delta }-r_\gamma -r_\delta `$. When the contact $`\alpha \beta `$ is removed, the distance $`r_{\alpha \beta }`$ is allowed to change; this change alters the gaps of other neighbors $`\gamma \delta `$ as specified by Eq. (4). Extrapolating this linear-response equation, the motion $`\delta r_{\alpha \beta }`$ required to close the gap $`\gamma \delta `$ is given by
$$\delta r_{\alpha \beta }=\frac{r_{\gamma \delta }r_\gamma r_\delta }{\widehat{n}_{\gamma \delta }[𝐆(\alpha \beta |\gamma )𝐆(\alpha \beta |\delta )]}$$
(7)
(For many choices of $`\gamma \delta `$ the required $`\delta r_{\alpha \beta }`$ is infinite since the $`\alpha \beta `$ contact has no effect on the $`\gamma \delta `$ gap.) Using this formula, we identify the gap $`\alpha ^{}\beta ^{}`$ which would require the smallest change of $`r_{\alpha \beta }`$ in order to close, and we link this pair by a strut.
After we have found the replacement bond, the modified response function can be found without solving the whole set of force balance equations (1)! We denote the response function for the initial packing as $`𝐆_0`$. The new response function G must be such that there is no longer a contact force $`f_{\alpha \beta }`$. In general there is such a force in the initial packing. However, we may alter this unwanted force by adding an external force to some other bead. We choose to add external forces to beads $`\alpha ^{}`$ and $`\beta ^{}`$ that mimic a contact force: the two forces are equal, opposite and directed along the unit vector $`\widehat{n}_{\alpha ^{}\beta ^{}}`$ joining them, with a strength denoted $`f_{\alpha ^{}\beta ^{}}`$. Our choice of the replacement pair guarantees that the effect of $`\alpha ^{}`$ or $`\beta ^{}`$ on the $`\alpha \beta `$ contact is non-vanishing. Then the force on this contact is given by
$`f_{\alpha \beta }={\displaystyle \underset{\gamma }{}}𝐅_\gamma 𝐆_0(\alpha \beta |\gamma )`$ (8)
$`+f_{\alpha ^{}\beta ^{}}[𝐆_0(\alpha \beta |\alpha ^{})𝐆_0(\alpha \beta |\beta ^{})]\widehat{n}_{\alpha ^{}\beta ^{}}.`$ (9)
We may make this $`f_{\alpha \beta }`$ vanish by a proper choice of the external force, $`f_{\alpha ^{}\beta ^{}}=_\gamma 𝐅_\gamma 𝐆(\alpha ^{}\beta ^{}|\gamma )`$, where $`𝐆(\alpha ^{}\beta ^{}|\gamma )`$ is the new response function, found by requiring that $`f_{\alpha \beta }`$ vanish in Eq. (8):
$$𝐆(\alpha ^{}\beta ^{}|\gamma )=\frac{𝐆_0(\alpha \beta |\gamma )}{\widehat{n}_{\alpha ^{}\beta ^{}}[𝐆_0(\alpha \beta |\alpha ^{})𝐆_0(\alpha \beta |\beta ^{})]}.$$
(10)
A contact force on an arbitrary contact is now determined by a combination of external forces $`𝐅_\gamma `$ and the above–determined $`f_{\alpha ^{}\beta ^{}}`$. This results in the following expression for $`𝐆(\lambda \mu |\gamma )`$ ($`\lambda \mu `$ other than $`\alpha ^{}\beta ^{}`$);
$`𝐆(\lambda \mu |\gamma )=𝐆_0(\lambda \mu |\gamma )`$ (11)
$`𝐆_0(\alpha \beta |\gamma ){\displaystyle \frac{\widehat{n}_{\alpha ^{}\beta ^{}}[𝐆_0(\lambda \mu |\alpha ^{})𝐆_0(\lambda \mu |\beta ^{})]}{\widehat{n}_{\alpha ^{}\beta ^{}}[𝐆_0(\alpha \beta |\alpha ^{})𝐆_0(\alpha \beta |\beta ^{})]}}.`$ (12)
This prescription gives the response of the pack to a wide class of contact replacements. The prescription does not require either the initial or the final state to be stable: it allows tensile contacts. Using the contact replacement procedure we may investigate the stability of a pack systematically. Our algorithm has two major stages: network preparation and its “mutation” via the contact replacement scheme. By repeating this adaptive procedure sufficiently many times, one may hope to get the stable configuration without tensile forces (for a given loading), just like the real system would do. There is a possibility that the present geometry–preserving algorithm could not stabilize the network. For instance if the tangential component of the surface force is strong enough, it is expected to initiate a macroscopic avalanche, as in a sandpile with slope exceeding the critical angle. This class of rearrangements is beyond the capabilities of our connectivity mutation scheme. This circumstance has even certain advantages: we can determine the critical slope from our simulations as the direction of the surface force at which the algorithm stops working. It should be emphasized that our algorithm can easily be modified to incorporate the change in the network geometry. The major reason why we use the above geometry-preserving (“strut”) approximation is its much higher computational effectiveness.
## IV Simulation details and results
### A Method
We begin by creating a two-dimensional sequential pack of variable-sized discs by adding them one by one. The studied system has the following parameters: polydispersity 10% (bead radii from 1 to 1.1), number of beads, $`N`$ from 250 to 500 (the major limitation is the computation time). Although there is no gravitational force acting on the beads in these simulations, the statistics of the packing can be varied by changing the “pseudo-gravity” direction, $`\widehat{g}`$. Namely, while adding a bead to the packing we require $`\widehat{g}`$ to be directed between the two supporting contacts. Simultaneously, we calculate the response function, by using the sum-over-trajectories formula, Eq. (6).
Then we apply certain load to beads on the surface. In the studied cases, the forces applied to the surface beads were all the same, with the only principal variable being the ratio of the two components $`f_x/f_y`$ ($`y`$ is the vertical direction, the surface in average is parallel to $`x`$ due to the periodic boundary conditions in the horizontal direction). As long as the response function is given, we may find all the contact forces for a given load. As we have found, tensile contacts appear within a few beads from the surface. We analyze the sign of the contact forces one by one, from top to bottom (in the order opposite to the one in which the beads were originally deposited). When a tensile contact is encountered, we follow the contact replacement procedure described in the previous section: find the new bond and modify the response function to account for the connectivity change. Now we repeat the procedure again starting from very top until there is no tensile contact left in the system Figure 1 shows a sequence of four typical steps in this procedure. Evidently the removed and the added contact may be far apart.
It sometimes happens that this prescription does not remove the tensile contacts: the removal of a tensile contact continues to generate others. In this case we may modify our procedure for selecting the next tensile contact to remove. For example we may select the strongest tensile contact instead of that tensile contact having the largest sequence number. Such alternative prescriptions seem to have little effect on the force network, as discussed in the next section.
### B Variability and reproducibility
While our bond replacement procedure mimics the way the real system should rearrange, our choice of the “bad bond” to be replaced is far more arbitrary. For instance, instead of checking the sign of the contact forces one by one from top to bottom, we could go the other way, or try to replace the contact bearing the largest negative force first. Neither of these prescriptions is very realistic; however, we find that the results are insensitive to the procedure. In order to probe this sensitivity, we compared the results from two different “annealing” procedures. The first was the procedure described in the previous subsection. The second procedure is as follows. We perform exactly the same one-by-one check, as in the previous case, but do not remove a negative bond unless the magnitude of the force exceeds certain tolerance threshold. When no more bonds remain in the system for a given threshold level, we reduce the tolerance and repeat the procedure. The threshold plays a role of temperature: keeping it finite allows us to deviate from the target (non-tensile) state of the system and explore its vicinity at the configuration space. The second annealing algorithm converges considerably faster than the zero-tolerance one. For instance, it took us from 1500 to 3000 iterations to complete the original algorithm with 500 beads, while the annealing procedure reduced the needed time to approximately 500-1000 steps. Interestingly, the variation of the convergence time is of order of the time itself.
We have found that the contact configuration resulting from the annealing procedure does differ from the one generated by the zero-tolerance algorithm (see Figure 2). However, we did not detect any statistically–significant variation of the ensemble–averaged properties of the final state obtained with the two methods. These properties included the average stress and the contact force probability distribution function (PDF), presented below.
### C Macroscopic constitutive equation
One of the crucial results of the simulation is that our geometry-preserving adaptive network algorithm does converge for a considerable range of force direction. It stops working when $`|f_x/f_y|`$ approaches $`0.6`$ (for packing prepared at vertical pseudogravity $`\widehat{g}`$. This suggests that the critical slope for the frictionless packing is about 30 degrees, consistent with simple theoretical arguments and some experiments Note that this slope may considerably exceed the angle of repose in dynamic experiments and simulations because of hysteresis associated with the lack of damping in the frictionless system. Presumably, the critical slope can be observed by quasi-static tilting of a zero-slope packing.
Another interesting observation is that the eventual connectivity of the packing is not too different from the original sequential packing. For example, the 500-bead system needs up to 3000 iterative steps (rearrangements) to find the stable state, and yet only 150 out of 1000 contacts ($`15\%`$) in the final configuration are different from the original network. This provides us with a solid background for using the sequential packing as the zero-order approximation of the real network. This was one of the major hypothesis used in our earlier work to derive the constitutive equation of frictionless granular packing.
One more hypothesis, used for derivation of the macroscopic equation for stress is the mean field decoupling Ansatz. The average stress in a region of a sequential packing can be written
$`\sigma ^{ij}(𝐱)=`$ (13)
$`{\displaystyle \underset{\alpha }{}}{\displaystyle \underset{\beta (\alpha )}{}}\delta (𝐱_\alpha 𝐱)\stackrel{~}{𝐅}_\alpha |𝐧_{\alpha \beta }_\alpha n_{\alpha \beta }^in_{\alpha \beta }^jr_{\alpha \beta },`$ (14)
The sum $`\beta `$ is over the beads that support the bead $`\alpha `$. Our mean-field hypothesis consists in assuming that the force-related part of this average is independent of the geometrical part. We define
$$𝐟(𝐱)\underset{\alpha }{}\delta (𝐱_\alpha 𝐱)\stackrel{~}{𝐅}_\alpha |,$$
(15)
and
$$\widehat{\tau }\underset{\beta (\alpha )}{}|𝐧_{\alpha \beta }_\alpha n_{\alpha \beta }^in_{\alpha \beta }^jr_{\alpha \beta }.$$
(16)
Then our mean-field assumption amounts to the statement
$$\sigma ^{ij}=𝐟\widehat{\tau }$$
(17)
The mean field hypothesis can be directly checked for unperturbed sequential packing, were both fields $`𝐟`$ and $`\widehat{\tau }`$ are well–defined. The results of such a check are represented on Figure 3. The exact values of various stress components are shown to be in an excellent agreement with their evaluation based on the mean-field Ansatz. We conclude that the mean field is a very good approximation at least for non-adaptive sequential packing.
As long as the rearrangements are switched on, there is no obvious way to define the concept of supporting neighbor, and therefore the “force from subsequent beads”, $`𝐟`$ is ill-defined as well. However, a more general meaning of constitutive Eq. (13), is that the stress is parameterizable with some vector $`𝐟`$ , and the third-rank material tensor $`\widehat{\tau }`$ establishes this parameterization. We now take $`\widehat{\tau }`$ corresponding to the original pre-rearrangement sequential packing and probe the Ansatz by checking whether the total stress (after the adaptive procedure) can be expressed as $`\widehat{\tau }𝐟`$. In other words, we compare the only unknown component of the stress $`\sigma _{xx}`$ (two other are given by the boundary conditions) with its theoretical value obtained from $`\widehat{\tau }`$. We performed this check for two different classes of packing, corresponding to different directions of “pseudogravity”, $`g_x/g_y=0`$ and $`0.2`$. The agreement between the two curves is surprisingly good as long as the direction of the applied force does not deviate too much from the preparation conditions (i. e. from the pseudogravity vector), see Figure 4.
### D Response function
As noted above, our system transmits forces in accord with a null-stress constitutive property. Given the null-stress law, one may infer the corresponding response function G. The force transmission is transmitted from a point source according to a wave-like equation. In a medium where all non-vertical directions are equivalent, the force should propagate downward along slanting characteristic lines, whose slope is dictated by the only parameter in the null-stress law. The responding region lies within the “light cone” bounded by these lines. In two dimensions, the response consists of two delta-functions traveling along the light cone. Disorder is expected to scatter the wave solutions of the pure system, thus resulting in a widening of the delta-peaks. This scattering could be sufficient to create qualitative new mesoscopic behavior from localization effects. Our simulated system showed strong influences from disorder, as illustrated in Figure 5. Because there can be no vertical-force response at the top of the system, we observe a global anisotropy, with stronger responses below the source than above it. The response is also strongly heterogeneous.
Our simulations allow us to perform ensemble averaging of the response stress field. Figure 6 shows the results of such averaging over 600 realizations of the network. As the perturbation propagates deep into the sample, the response function gets a two–peak shape, in a good agreement with the null-stress law. As expected, the peaks are broadened by the disorder, and one cannot resolve them immediately below the source. Another important observation, which also supports the null-stress approach, is that the average response is virtually zero above the source. Note that we have studied only linear response of the system, so that the perturbation did not change the contact network. This need not to be the case in the experiments involving strong local perturbations
### E Contact force distribution function
Our simulation allows us to address yet another interesting and widely-discussed problem: the statistics of contact force. Recent experiments indicate that this distribution can be well approximated as exponential, that is, it is considerably wider than a naively-expected Gaussian. This is related to the strong heterogeneities of the mesoscopic stress in granular matter: it appears to be localized to string-like structures known as force chains.
In the initial sequential packing there is no constraint on the sign of the contact force, and its amplitude appears to grow indefinitely with the packing depth. After the rearrangements, there are no negative forces in the system, and therefore their amplitude cannot grow forever(the total transmitted force is fixed). Figure 7 shows the spatial distribution of the contact forces in the systems of two different degrees of polydispersity after the adaptive stage is completed. One can clearly see that our simulations are at least in qualitative agreement with experiment: it is easy to identify the force chains in both cases.
We were also able to make a quantitative comparison between the simulations and experiments. Figure 8 shows the probability distribution function of the contact force taken from our simulations of almost monodisperse system. It apparently agrees with the exponential histogram observed experimentally. An insight into the origin of this exponential behavior is given by the “q–model” due to S. Coppersmith et al. The further discussion of this intriguing result will be published elsewhere
## V Conclusion
In the study of granular materials, clearcut confirmation of theories has been elusive. One predicted feature of great interest is the null-stress constitutive law postulated by Wittmer et al. We have verified that null-stress behavior occurs in a simplified granular system embodying disorder, perfect rigidity and cohesionless contacts. We have measured the free parameter in the null-stress law for several situations. We have confirmed the validity of our major assumptions used for microscopic foundation of this constitutive law. Our simulation also allowed us to compute directly the ensemble–averaged response function, thus providing an additional check for the adequacy of the null-stress approach.
The simulation method has further interesting features. It demonstrates that stable configurations of isostatic force networks can be found without changing the positions of the nodes. It also reveals order-unity variability in the microscopic force distribution resulting from the relaxation process. Finally, it shows strongly heterogeneous response to point forces—much stronger than that of the geometric contact network. This suggests strong multiple-scattering features in the force propagation.
The observed exponential probability distribution function for the contact force is in a good agreement with the experiments. Since there is also an indirect experimental support for the null-stress law, our choice of the system (hard frictionless spheres) appears to be an adequate simplification to capture the basic physics of granular rigidity. The further simplifications, such as the fixed–geometry adaptive algorithm provide an effective tool for the future studies of this problem. This may include a study of non-linear response of the system to large localized perturbation, effects of polydispersity, and history dependence of the response.
## Acknowledgement
The authors thank R. Ball, S. Coppersmith, D. Mueth, H. Jaeger, S. Nagel, and J.Socolar for valuable discussions. Likewise, we thank the participants in the Jamming and Rheology program of the Institute of Theoretical Physics in Autumn 1997. This work was supported in part by the National Science Foundation under Award numbers PHY-94 07194, DMR-9528957, DMR-9975533 and DMR 94 00379.
|
no-problem/9910/cond-mat9910332.html
|
ar5iv
|
text
|
# Emergence of Scaling in Random Networks
Systems as diverse as genetic networks or the world wide web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature is found to be a consequence of the two generic mechanisms that networks expand continuously by the addition of new vertices, and new vertices attach preferentially to already well connected sites. A model based on these two ingredients reproduces the observed stationary scale-free distributions, indicating that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
The inability of contemporary science to describe systems composed of non-identical elements that have diverse and nonlocal interactions currently limits advances in many disciplines, ranging from molecular biology to computer science (1). The difficulty in describing these systems lies partly in their topology: many of them form rather complex networks, whose vertices are the elements of the system and edges represent the interactions between them. For example, living systems form a huge genetic network, whose vertices are proteins and genes, the edges representing the chemical interactions between them (2). At a different organizational level, a large network is formed by the nervous system, whose vertices are the nerve cells, connected by axons (3). But equally complex networks occur in social science, where vertices are individuals or organizations, and the edges characterize the social interaction between them (4), or describe the world wide web (www), whose vertices are HTML documents connected by links pointing from one page to another (5, 6). Due to their large size and the complexity of the interactions, the topology of these networks is largely unknown.
Traditionally, networks of complex topology have been described using the random graph theory of Erdős and Rényi (ER) (7), but in the absence of data on large networks the predictions of the ER theory were rarely tested in the real world. However, driven by the computerization of data acquisition, such topological information is increasingly available, raising the possibility of understanding the dynamical and topological stability of large networks.
In this paper we report on the existence of a high degree of self-organization characterizing the large scale properties of complex networks. Exploring several large databases describing the topology of large networks that span as diverse fields as the www or the citation patterns in science we show that, independent of the system and the identity of its constituents, the probability $`P(k)`$ that a vertex in the network interacts with $`k`$ other vertices decays as a power-law, following $`P(k)k^\gamma `$. This result indicates that large networks self-organize into a scale-free state, a feature unexpected by all existing random network models. To understand the origin of this scale invariance, we show that existing network models fail to incorporate growth and preferential attachment, two key features of real networks. Using a model incorporating these two ingredients, we show that they are responsible for the power-law scaling observed in real networks. Finally, we argue that these ingredients play an easily identifiable and important role in the formation of many complex systems, implying that our results are relevant to a large class of networks observed in Nature.
While there are a large number of systems that form complex networks, detailed topological data is available only for a few. The collaboration graph of movie actors represents a well documented example of a social network. Each actor is represented by a vertex, two actors being connected if they were cast together in the same movie. The probability that an actor has $`k`$ links (characterizing his or her popularity) has a power-law tail for large $`k`$, following $`P(k)k^{\gamma _{actor}}`$, where $`\gamma _{actor}=2.3\pm 0.1`$ (Fig. 1A). A more complex network with over $`800`$ million vertices (8) is the www, where a vertex is a document and the edges are the links pointing from one document to another. The topology of this graph determines the web’s connectivity and, consequently, our effectiveness in locating information on the www (5). Information about $`P(k)`$ can be obtained using robots (6), indicating that the probability that $`k`$ documents point to a certain webpage follows a power-law, with $`\gamma _{www}=2.1\pm 0.1`$ ( Fig. 1B) (9). A network whose topology reflects the historical patterns of urban and industrial development is the electrical powergrid of western US, the vertices representing generators, transformers and substations, the edges corresponding to the high voltage transmission lines between them (10). Due to the relatively modest size of the network, containing only 4941 vertices, the scaling region is less prominent, but is nevertheless approximated with a power-law with an exponent $`\gamma _{power}4`$ (Fig. 1C). Finally, a rather large, complex network is formed by the citation patterns of the scientific publications, the vertices standing for papers published in refereed journals, the edges representing links to the articles cited in a paper. Recently Redner (11) has shown that the probability that a paper is cited $`k`$ times (representing the connectivity of a paper within the network) follows a power-law with exponent $`\gamma _{cite}=3`$.
The above examples (12) demonstrate that many large random networks share the common feature that the distribution of their local connectivity is free of scale, following a power-law for large $`k`$, with an exponent $`\gamma `$ between 2.1 and 4 which is unexpected within the framework of the existing network models. The random graph model of ER (7) assumes that we start with $`N`$ vertices, and connect each pair of vertices with probability $`p`$. In the model the probability that a vertex has $`k`$ edges follows a Poisson distribution $`P(k)=e^\lambda \lambda ^k/k!`$, where $`\lambda =N\left(\begin{array}{c}N1\\ k\end{array}\right)p^k(1p)^{N1k}`$. In the small world model recently introduced by Watts and Strogatz (WS) (10), $`N`$ vertices form a one-dimensional lattice, each vertex being connected to its two nearest and next-nearest neighbors. With probability $`p`$ each edge is reconnected to a vertex chosen at random. The long-range connections generated by this process decrease the distance between the vertices, leading to a small-world phenomenon (13), often referred to as six degrees of separation (14). For $`p=0`$ the probability distribution of the connectivities is $`P(k)=\delta (kz)`$, where $`z`$ is the coordination number in the lattice, while for finite $`p`$, $`P(k)`$ is still peaked around $`z`$, but it gets broader (15). A common feature of the ER and WS models is that the probability of finding a highly connected vertex (that is, a large $`k`$) decreases exponentially with $`k`$, thus vertices with large connectivity are practically absent. In contrast, the power-law tail characterizing $`P(k)`$ for the studied networks indicates that highly connected (large $`k`$) vertices have a large chance of occurring, dominating the connectivity.
There are two generic aspects of real networks that are not incorporated in these models. First, both models assume that we start with a fixed number ($`N`$) of vertices, that are then randomly connected (ER model), or reconnected (WS model), without modifying $`N`$. In contrast, most real world networks are open, they form by the continuous addition of new vertices to the system, thus the number of vertices, $`N`$, increases throughout the lifetime of the network. For example, the actor network grows by the addition of new actors to the system, the www grows exponentially in time by the addition of new web pages (8), the research literature constantly grows by the publication of new papers. Consequently, a common feature of these systems is that the network continuously expands by the addition of new vertices that are connected to the vertices already present in the system.
Second, the random network models assume that the probability that two vertices are connected is random and uniform. In contrast, most real networks exhibit preferential connectivity. For example, a new actor is cast most likely in a supporting role, with more established, well known actors. Consequently, the probability that a new actor is cast with an established one is much higher than casting with other less known actors. Similarly, a newly created webpage will more likely include links to well known, popular documents with already high connectivity, or a new manuscript is more likely to cite a well known and thus much cited paper than its less cited and consequently less known peer. These examples indicate that the probability with which a new vertex connects to the existing vertices is not uniform, but there is a higher probability to be linked to a vertex that already has a large number of connections.
We next show that a model based on these two ingredients naturally leads to the observed scale invariant distribution. To incorporate the growing character of the network, starting with a small number ($`m_0`$) of vertices, at every timestep we add a new vertex with $`m`$($`m_0`$) edges that link the new vertex to $`m`$ different vertices already present in the system. To incorporate preferential attachment, we assume that the probability $`\mathrm{\Pi }`$ that a new vertex will be connected to vertex $`i`$ depends on the connectivity $`k_i`$ of that vertex, such that $`\mathrm{\Pi }(k_i)=k_i/_jk_j`$. After $`t`$ timesteps the model leads to a random network with $`t+m_0`$ vertices and $`mt`$ edges. This network evolves into a scale-invariant state with the probability that a vertex has $`k`$ edges following a power-law with an exponent $`\gamma _{model}=2.9\pm 0.1`$ (Fig. 2A). As the power-law observed for real networks describes systems of rather different sizes at different stages of their development, it is expected that a correct model should provide a distribution whose main features are independent of time. Indeed, as Fig. 2A demonstrates, $`P(k)`$ is independent of time (and, subsequently, independent of the system size $`m_0+t`$), indicating that despite its continuous growth, the system organizes itself into a scale-free stationary state.
The development of the power-law scaling in the model indicates that growth and preferential attachment play an important role in network development. To verify that both ingredients are necessary, we investigated two variants of the model. Model A keeps the growing character of the network, but preferential attachment is eliminated by assuming that a new vertex is connected with equal probability to any vertex in the system (that is, $`\mathrm{\Pi }(k)=const=1/(m_0+t1)`$). Such a model (Fig. 2B) leads to $`P(k)\mathrm{exp}(\beta k)`$, indicating that the absence of preferential attachment eliminates the scale-free feature of the distribution. In model B we start with $`N`$ vertices and no edges. At each time step we randomly select a vertex and connect it with probability $`\mathrm{\Pi }(k_i)=k_i/_jk_j`$ to vertex $`i`$ in the system. While at early times the model exhibits power-law scaling, $`P(k)`$ is not stationary: since $`N`$ is constant, and the number of edges increases with time, after $`TN^2`$ timesteps the system reaches a state in which all vertices are connected. The failure of models A and B indicates that both ingredients, namely growth and preferential attachment, are needed for the development of the stationary power-law distribution observed in Fig. 1.
Due to the preferential attachment, a vertex that acquired more connections than another one will increase its connectivity at a higher rate, thus an initial difference in the connectivity between two vertices will increase further as the network grows. The rate at which a vertex acquires edges is $`k_i/t=k_i/2t`$, which gives $`k_i(t)=m(t/t_i)^{0.5}`$, where $`t_i`$ is the time at which vertex $`i`$ was added to the system (see Fig. 2C), a scaling property that could be directly tested once time-resolved data on network connectivity becomes available. Thus older (smaller $`t_i`$) vertices increase their connectivity at the expense of the younger (larger $`t_i`$) ones, leading with time to some vertices that are highly connected, a ”rich-gets-richer” phenomenon that can be easily detected in real networks. Furthermore, this property can be used to calculate $`\gamma `$ analytically. The probability that a vertex $`i`$ has a connectivity smaller than $`k`$, $`P(k_i(t)<k)`$, can be written as $`P(t_i>m^2t/k^2)`$. Assuming that we add the vertices at equal time intervals to the system, we obtain that $`P(t_i>m^2t/k^2)=1P(t_im^2t/k^2)=1m^2t/k^2(t+m_0)`$. The probability density $`P(k)`$ can be obtained from $`P(k)=P(k_i(t)<k)/k`$, which, at long times, leads to the stationary solution
$$P(k)=\frac{2m^2}{k^3},$$
giving $`\gamma =3`$, independent of $`m`$. While it reproduces the observed scale-free distribution, the proposed model cannot be expected to account for all aspects of the studied networks. For this we need to model these systems in more detail. For example, in the model we assumed linear preferential attachment, that is $`\mathrm{\Pi }(k)k`$. However, while in general $`\mathrm{\Pi }(k)`$ could have an arbitrary nonlinear form $`\mathrm{\Pi }(k)k^\alpha `$, simulations indicate that scaling is present only for $`\alpha =1`$. Furthermore, the exponents obtained for the different networks are scattered between $`2.1`$ and $`4`$. However, it is easy to modify our model to account for exponents different from $`\gamma =3`$. For example, if we assume that a fraction $`p`$ of the links is directed, we obtain $`\gamma (p)=3p`$, which is supported by numerical simulations (16). Finally, some networks evolve not only by adding new vertices, but by adding (and sometimes removing) connections between established vertices. While these and other system-specific features could modify the exponent $`\gamma `$, our model offers the first successful mechanism accounting for the scale-invariant nature of real networks.
Growth and preferential attachment are mechanisms common to a number of complex systems, including business networks (17, 18), social networks (describing individuals or organizations), transportation networks (19), etc. Consequently, we expect that the scale-invariant state, observed in all systems for which detailed data has been available to us, is a generic property of many complex networks, its applicability reaching far beyond the quoted examples. A better description of these systems would help in understanding other complex systems as well, for which so far less topological information is available, including such important examples as genetic or signaling networks in biological systems. We often do not think of biological systems as open or growing, since their features are genetically coded. However, possible scale-free features of genetic and signaling networks could reflect the evolutionary history dominated by growth and aggregation of different constituents, leading from simple molecules to complex organisms. With the fast advances in mapping out genetic networks, answers to these questions might not be too far. Similar mechanisms could explain the origin of the social and economic disparities governing competitive systems, since the scale-free inhomogeneities are the inevitable consequence of self-organization due to the local decisions made by the individual vertices, based on information that is biased towards the more visible (richer) vertices, irrespective of the nature and the origin of this visibility.
References and Notes
1. R. Gallagher and T. Appenzeller, Science 284, 79 (1999); R. F. Service, Science 284, 80 (1999).
2. G. Weng, U. S. Bhalla, R. Iyengar, Science 284, 92 (1999).
3. C. Koch and G. Laurent, Science 284, 96 (1999).
4. S. Wasserman and K. Faust, Social Network Analysis, (Cambridge University Press, Cambridge, 1994).
5. Members of the Clever project, Sci. Am 280, 54 (June 1999).
6. R. Albert, H. Jeong and A.-L. Barabási, Nature 401, 130 (1999), see also http://www.nd.edu/ networks.
7. P. Erdős, and A. Rényi, Publ. Math. Inst. Hung. Acad. Sci 5, 17 (1960); B. Bollobás, Random Graphs (Academic Press, London, 1985).
8. S. Lawrence and C. L. Giles, Science 280, 98 (1998); Nature 400, 107 (1999).
9. Note that in addition to the distribution of incoming links, the www displays a number of other scale-free features, characterizing the organization of the webpages within a domain \[B. A. Huberman and L. A. Adamic, Nature 401, 131 (1999)\], the distribution of searches \[B. A. Huberman, P. L. T. Pirolli, J. E. Pitkow and R. J. Lukose, Science 280, 95 (1998)\], or the number of links per webpage (6).
10. D. J. Watts and S. H. Strogatz, Nature 393, 440 (1998).
11. S. Redner, European Physical Journal B 4, 131 (1998).
12. We also studied the neural network of the worm Caenorhabditis elegans (3, 10) and the benchmark diagram of a computer chip
(http://vlsicad.cs.ucla.edu/$``$cheese/ispd98.html). We find that $`P(k)`$ for both is consistent with power-law tails, despite the fact that for C. elegans the relatively small size of the system ($`306`$ vertices) limits severely the data quality, while for the wiring diagram of the chips vertices with over $`200`$ edges have been eliminated from the database.
13. S. Milgram, Psychol. Today 2, 60 (1967); M. Kochen (ed.) The Small World (Ablex, Norwood, NJ, 1989).
14. J. Guare, Six Degrees of Separation: A play (Vintage Books, New York, 1990).
15. M. Barthélémy and L. A. N. Amaral, Phys. Rev. Lett. 82, 15 (1999).
16. Note that for most networks the connectivity $`m`$ of the newly added vertices is not constant. However, choosing $`m`$ randomly will not change the exponent $`\gamma `$ \[Y. Tu, private communication\].
17. W. B. Arthur, Science 284, 107 (1999).
18. Note that preferential attachment was also used to model correlations between stock prices \[L. A. N. Amaral and M. Barthélémy, private communication\].
19. J. R. Banavar, A. Maritan and A. Rinaldo, Nature 399, 130 (1999).
20. We thank D. J. Watts for providing the C. elegans and the power grid data, B. C. Tjaden for supplying the actor data, H. Jeong for collecting the data on the www and L. A. N. Amaral for helpful discussions. This work was partially supported by NSF Career Award DMR-9710998.
|
no-problem/9910/astro-ph9910491.html
|
ar5iv
|
text
|
# On possible ‘cosmic ray cocoons’ of relativistic jets
## 1 Introduction
Shock waves are widely considered as sources of cosmic ray particles in relativistic jets ejected from active galactic nuclei (AGNs). In the present paper we consider an alternative, till now hardly explored mechanism involving particle acceleration at the velocity shear layer, which must be formed at the interface between the jet and the ambient medium (cf. discussion, in a different context, of the turbulence role at a tangential discontinuity by Drobysh & Ostryakov 1998). One should note that there is a growing evidence of interaction between the jet and the ambient medium, and formation of boundary layers, both in observations (e.g. Attridge et al. 1999, Scarpa et al. 1999, Perlman et al. 1999) and in modelling (Aloy et al. 1999). In the next section we discuss this acceleration mechanism in some detail. We point out possible regimes of turbulent second-order Fermi acceleration at low particle energies, next dominated by the ‘viscous’ acceleration at larger energies, and by acceleration at the tangential flow discontinuity at highest energies. In section 3 we shortly consider highest energies allowed in this model by radiative and/or escape losses. Then, in section 4, we discuss time dependent spectra of cosmic rays accelerated at infinite planar flow discontinuity. In the presence of efficient radiative losses a usually formed flat power-law distribution is ended with a bump (in some conditions a nearly mono-energetic spike) followed by a cut-off. Dynamic consequences of cosmic ray pressure increase at the jet boundary are discussed in section 5. In particular a cosmic ray cocoon can be formed around the jet changing its propagation and leading to an intermittent jet activity. Final remarks are presented in section 6.
One should be aware of a partly speculative presentation character of this paper. Till now, besides casual remarks, the considered complicated physical phenomenon was hardly discussed in the literature. One can mention in this respect a discussion of radiation-viscous jet boundary layers by Arav & Begelman (1992) and a discussion of possible cosmic ray acceleration up to ultra-high energies by Ostrowski (1998a). With the present paper we would like to open the considered physical mechanism to more detailed modelling and quantitative considerations.
## 2 Particle acceleration at the jet boundary
For particles with sufficiently high energies the transition layer between the jet and the ambient medium can be approximated as a surface of discontinuous velocity change, a tangential discontinuity (‘td’). If particles’ gyroradia (or mean free paths normal to the jet boundary) are comparable to the actual thickness of this shear-layer interface it becomes an efficient cosmic ray acceleration site provided the considered velocity difference $`U`$ is relativistic and the sufficient amount of turbulence is present in the medium (Ostrowski 1990, 1998a). The problem was extensively discussed in early eighties by Berezhko with collaborators (see the review by Berezhko 1990) and in the diffusive limit by Earl et al. (1988) and Jokipii et al. (1989). However, till now no one considered the situation with highly relativistic flow characterized with the Lorentz factor $`\mathrm{\Gamma }(1U^2)^{1/2}1`$ and, thus, our present qualitative discussion is mostly based on the results derived for mildly relativistic flows.
Any high energy particle crossing the boundary from, say, region I (within the jet) to region II (off the jet), changes its energy, $`E`$, according to the respective Lorentz transformation. It can gain or loose energy. In the case of uniform magnetic field in region II, the successive transformation at the next boundary crossing, II $``$ I, changes the particle energy back to the original value. However, in the presence of perturbations acting at the particle orbit between the successive boundary crossings there is a positive mean energy change:
$$<\mathrm{\Delta }E>=\eta _\mathrm{E}(\mathrm{\Gamma }1)E.$$
$`(2.1)`$
The numerical factor $`\eta _\mathrm{E}`$ depends on particle anisotropy at the discontinuity. It increases with the growing magnetic field perturbations’ amplitude and slowly decreases with the growing flow velocity. The last factor will be particularly important for large $`\mathrm{\Gamma }`$ flows. For mildly relativistic flows, in the strong scattering limit particle simulations give values of $`\eta _\mathrm{E}`$ as substantial fractions of unity (Ostrowski 1990). For large $`\mathrm{\Gamma }`$ we will assume the following scaling
$$\eta _\mathrm{E}=\eta _0\frac{2}{\mathrm{\Gamma }},$$
$`(2.2)`$
where $`\eta _0`$ is defined by the magnetic field perturbations’ amplitude at $`\mathrm{\Gamma }=2`$. In general $`\eta _0`$ depends also on particle energy. During the acceleration process, particle scattering is accompanied with the jet’s momentum transfer into the medium surrounding it. On average, a single particle with the momentum $`p`$ transports across the jet’s boundary the following amount of momentum:
$$<\mathrm{\Delta }p>=<\mathrm{\Delta }p_\mathrm{z}>=\eta _p(\mathrm{\Gamma }1)Up,$$
$`(2.3)`$
where the $`z`$-axis of the reference frame is chosen along the flow velocity and the value of $`p`$ is given as the one before transmission. The numerical factor $`\eta _p`$ depends on scattering conditions near the discontinuity and in the highly perturbed conditions (in mildly relativistic shocks) it can reach values being a fraction of unity also. At large $`\mathrm{\Gamma }`$ we expect $`\eta _p\eta _\mathrm{E}`$. As a result, there acts a drag force per unit surface of the jet boundary and the opposite force at the medium along the jet, of the magnitude order of the accelerated particles’ energy density. Independent of the exact value of $`\eta _\mathrm{E}`$, the acceleration process can proceed very fast due to the fact that average particle is not able to diffuse – between the successive energizations – far from the accelerating interface. One should remember that in the case of shear layer or tangential discontinuity acceleration - contrary to the shock waves - there is no particle advection off the ‘accelerating layer’. Of course, particles are carried along the jet with the mean velocity of order $`U/2`$ and, for efficient acceleration, the distance travelled this way must be shorter than the jet breaking length.
The simulations (Ostrowski 1990) show that in favourable conditions the discussed acceleration process can be very rapid, with the time scale given in the observer frame ($``$ the region II rest frame) as<sup>1</sup><sup>1</sup>1The expression (2.4) and the following discussion is valid for an average accelerated particle. A small fraction of external particles reflected from the jet can reach a large energy gain, $`\mathrm{\Delta }E/E\mathrm{\Gamma }^2`$, but these particles do not play a principal role in the cosmic ray energy balance.
$$\tau _{\mathrm{t}d}=\alpha \frac{r_\mathrm{g}}{c},$$
$`(2.4)`$
where $`r_\mathrm{g}`$ is the characteristic value of particle gyroradius in the ambient medium. The introduced acceleration time is coupled to the acceleration length $`l_{\mathrm{t}d}\alpha r_\mathrm{g}`$ due to particle advection along the jet flow. For efficient scattering the numerical factor $`\alpha `$ can be as small as $`10`$ (Ostrowski 1990). A warning should be risen in this place. The applied diffusion model involves particles with infinite diffusive trajectories between the successive interactions with the discontinuity. Thus reaching stationary conditions in the acceleration process requires infinite times, leading to the infinite acceleration time. However, quite flat spectra, nearly coincident with the stationary spectrum, are generated in short time scales given by Eq. 2.4 and these distributions are considered in the present discussion. One may note that in analytic evaluation of $`\tau _{\mathrm{t}d}`$ for the ultra-relativistic jet, applying Eq-s (2.1) and (2.2), large $`\mathrm{\Gamma }`$ factors cancel each other. For the mean magnetic field $`B_\mathrm{g}`$ given in the Gauss units and the particle energy $`E_{\mathrm{E}eV}`$ given in EeV ($`1`$ EeV $`10^{18}`$ eV)<sup>2</sup><sup>2</sup>2Below we use also another energy units with respective indices $`GeV`$ and $`TeV`$. the time scale (2.4) reads as
$$\tau _{\mathrm{t}d}10^5\alpha E_{\mathrm{E}eV}B_\mathrm{G}^1[s].$$
$`(2.5)`$
Let us remind that in the case of a non-relativistic jet, $`Uc`$, the acceleration process is of the second-order in $`U/c`$ and a rather slow one.
For low energy cosmic ray particles the velocity transition zone at the boundary is expected to appear as a finite-width turbulent shear layer. We do not know of any attempt in the literature to describe the internal structure of such layer on the microscopic level. Therefore, we limit the discussion of the acceleration process within such a layer to quantitative considerations only. From rather weak radiation and the observed effective collimation of jets in the powerful FR II radio sources one can conclude, that the interaction of presumably relativistic jet with the ambient medium must be relatively weak. Thus the turbulent transition zone at the jet boundary must be limited to a relatively thin layer. Within such a layer two acceleration processes take place for low energy particles (by ‘low energy particles’ we mean the ones with the mean radial free path $`\lambda `$ much smaller than the transition layer thickness, $`D`$). The first one is connected with the velocity shear and is called ‘cosmic ray viscosity’ (Earl et al. 1988). The second is the ordinary Fermi process in the turbulent medium. The acceleration time scales can not be evaluated with accuracy for these processes, but – for particles residing within the considered layer – we can give an acceleration time scale estimate
$$\tau _{\mathrm{I}I}=\frac{r_\mathrm{g}}{c}\frac{c^2}{V^2+\left(U\frac{\lambda }{D}\right)^2},$$
$`(2.6)`$
where $`V`$ is the turbulence velocity ($``$ the Alfvén velocity for subsonic turbulence) and $`D`$ is the shear layer thickness. The first term in the denominator represents the second-order Fermi process, while the second term is for the viscous acceleration. One expects that the first term can dominate at low particle energies, while the second for larger energies, with $`\tau _{\mathrm{I}I}`$ approaching the value given in Eq. (2.4) for $`\lambda D`$. If the second-order Fermi acceleration dominates, $`\lambda <D(V/U)`$, the time scale (2.6) reads as
$$\tau _{\mathrm{I}I}10^7E_{\mathrm{T}eV}B_\mathrm{G}^1V_3^2[s],$$
$`(2.7)`$
where $`V_3`$ is the turbulence velocity in units of $`3000`$ km/s. Depending on the choice of parameters this scale can be comparable or longer than the expansion and internal evolution scales for relativistic jets. In order to efficiently create high energy particles for the further acceleration by the viscous process and the tangential discontinuity acceleration one have to assume that the turbulent layer includes high velocity turbulence, with $`V_3`$ reaching values substantially larger than $`1`$. Then the scale (2.7) may be much reduced, also because of oblique shocks formed in the turbulent layer and the accompanying first order Fermi acceleration processes. For the following discussion we will assume that such effective pre-acceleration takes place, but the validity of this assumption can be estimated only a posteriori from comparison of our conclusions with the observational data. Another possibility is that a population of high energy particles exist in the medium surrounding the jet due to some other unspecified acceleration processes in the central object vicinity.
The cosmic ray energy spectra generated with the above mechanisms at work are expected to be very flat (Section 4; see also Ostrowski 1998a). With such particle distribution the dynamic influence at the jet and its’ ambient medium can be due to effects of the highest energy cosmic rays, immediately preceding the spectrum cut-off. Because of the short acceleration time scale (2.4) expected for such particles the acceleration process can provide particles (protons) with energies reaching ultra high energies. Without radiative losses one can obtain particles with $`r_\mathrm{g}`$ the jet radius, $`R_j`$ , near the cut-off in the spectrum (Ostrowski 1998a). For standard jet parameters the considered particle energies may reach $`10^{19}`$ eV. Let us note that the existence of such cosmic rays was suggested by Mannheim (1993) to explain $`\gamma `$-ray fluxes from blazars.
## 3 Energy losses
To estimate the upper energy limit for accelerated particles, at first one should compare the time scale for energy losses due to radiation and inelastic collisions to the acceleration time scale. The discussion of possible loss processes is presented by Rachen & Biermann (1993). The derived loss time scale for protons can be written in the form
$$T_{\mathrm{l}oss}510^9B_\mathrm{G}^2(1+Xa)^1E_{\mathrm{E}eV}^1[s],$$
$`(3.1)`$
where $`B_\mathrm{G}`$ is the magnetic field in Gauss units, $`a`$ is the ratio of the energy density of the ambient photon field relative to that of the magnetic field and $`X`$ is a quantity for the relative strength of p$`\gamma `$ interactions compared to synchrotron radiation. For cosmic ray protons the acceleration dominates over the losses (Eq-s 2.5, 3.1) up to the maximum energy
$$E_{\mathrm{E}eV}210^2\alpha ^1\left[B_\mathrm{G}(1+Xa)\right]^{1/2}.$$
$`(3.2)`$
This equation can easily yield a large limiting $`E_{\mathrm{E}eV}1`$ with moderate jet parameters (e.g. $`B_\mathrm{G}1`$, $`Xa10^2`$, and $`\alpha =10`$). However, one should note that the particle gyroradius provides the minimum scale for the acceleration region’s spatial extent (Ostrowski 1998a). Thus, for the actual particle maximum energy $`E_{\mathrm{m}ax}`$ the jet radius should be larger than the respective particle gyroradius $`r_\mathrm{g}(E_{\mathrm{m}ax})`$. E.g., for $`R_j=10^{16}`$ cm and $`B_\mathrm{G}=1`$ the particle energy satisfying the condition $`R_j=r_\mathrm{g}`$ equals $`E_{\mathrm{m}ax}10`$ EeV, what is consistent with the above estimate based on Eq. 3.2 .
## 4 Energy spectra of accelerated particles
The acceleration process acting at the tangential discontinuity of the velocity field leads to the flat energy spectrum and the spatial distribution expected to increase their extension with particle energy. Below, for illustration, we propose two simple acceleration and diffusion models describing these features. For low and high energy particles we consider the time dependent acceleration process at, respectively, the plane shear layer or tangential discontinuity, surrounded with infinite regions for particle diffusion. In the discussion below all particles are ultra-relativistic with $`E=p`$.
### 4.1 A turbulent shear layer
At first we consider ‘low energy’ particles wandering in an extended turbulent shear layer, with the particle mean free path $`\lambda p`$. With the assumed conditions the mean time required for increasing particle energy on a small constant fraction is proportional to the energy itself, and the mean rate of particle energy gain is constant, $`<\dot{p}>_{\mathrm{g}ain}`$ = const. Let us take a simple expression for the synchrotron energy loss, $`<\dot{p}>_{\mathrm{l}oss}p^2`$, to represent any real process acting near the discontinuity. With $`<\dot{p}><\dot{p}>_{\mathrm{g}ain}<\dot{p}>_{\mathrm{l}oss}`$ the transport equation for the particle momentum distribution function $`nn(t,p,x)`$ has the following form
$$\frac{n}{t}+\frac{}{p}\left[<\dot{p}>n\right]+\frac{}{x}\left[\kappa _{}\frac{n}{x}\right]+\left(\frac{n}{t}\right)_{\mathrm{e}sc}=Q.$$
$`(4.1)`$
where $`x`$ measures the distance perpendicular to the shear layer and the escape term at highest energies is represented by $`\left(\frac{n}{t}\right)_{\mathrm{e}sc}`$.
For the jet boundary acceleration, the jet radius and the escape boundary distance provide energy scales to the process. Another scale for particle momentum, $`p_\mathrm{c}`$, is provided as the one for equal losses and gains, $`<\dot{p}>_{\mathrm{g}ain}=<\dot{p}>_{\mathrm{l}oss}`$. As a result, a divergence from the power-law and a cut-off have to occur at high energies in the spectrum. At small energies, where the diffusive regions are extended and losses non-significant, the considered solution should be close to the power-law.
We used a simple Monte Carlo simulations of the acceleration process to solve Eq. (4.1). In the equation we assume a continuous particle injection, uniform within the considered layer, $`Q=`$ const. The diffusion coefficient $`\kappa _{}`$ is taken to be proportional to particle momentum, but independent of the spatial position $`x`$. With neglected particle escape through the shear layer side boundaries and the considered uniform conditions $`\frac{n}{x}=0`$ and the spatial diffusion term in Eq. 4.1 vanishes. For the escape term $`\left(\frac{f}{t}\right)_{\mathrm{e}sc}`$ we simply assume a characteristic escape momentum $`p_{\mathrm{m}ax}`$. At figures 1 and 2 we use $`p_\mathrm{c}`$ as a unit for particle momentum, so it defines also a cut-off for $`p_\mathrm{c}<p_{\mathrm{m}ax}`$ . At Fig. 1, at small momenta the spectrum has a power-law form – in our model $`n(t,p)p^2`$ – with a cut-off momentum growing with time. However, at long time scales, when particles reach momenta close to $`p_\mathrm{c}`$, losses lower the value of $`<\dot{p}>`$ leading to spectrum flattening and pilling up particles at $`p`$ close to $`p_\mathrm{c}`$. Then, a low energy part of the spectrum does not change any more and only a narrow spike at $`pp_\mathrm{c}`$ grows with time. Let us also note that in the case of efficient particle escape, i.e. when $`p_{\mathrm{m}ax}<p_\mathrm{c}`$, the resulting spectrum would be similar to one of the short time spectra in Fig. 1, with a cut-off at $`p_{\mathrm{m}ax}`$ (cf. Ostrowski 1998a).
### 4.2 Tangential discontinuity acceleration
An illustration of the acceleration process at the tangential discontinuity have to take into account a spatially discrete nature of the acceleration process. Here, particles are assumed to wander subject to radiative losses outside the discontinuity, with the mean free path proportional to particle momentum $`p`$ and the loss rate proportional to $`p^2`$. At each crossing the discontinuity a particle is assumed to gain a constant fraction $`\mathrm{\Delta }`$ of momentum (cf. Eq-s 2.1, 2.2):
$$p^{}=(1+\mathrm{\Delta })p,$$
$`(4.2)`$
and, due to losses, during each free time $`\mathrm{\Delta }t`$ its momentum decreases from $`p_{\mathrm{i}n}`$ to $`p`$ according to the relation
$$\frac{1}{p}\frac{1}{p_{\mathrm{i}n}}=\mathrm{const}\mathrm{\Delta }t.$$
$`(4.3)`$
The time dependent energy spectra obtained within this model are presented in Fig. 2, where we choose units in a way to put constant in Eq. (4.3) equal to one and the particle mean free path equals at two considered models at $`p=p_\mathrm{c}`$. Comparison of the results in two models allows to evaluate the modification of the acceleration process by changing the momentum dependence of the particle diffusion coefficient. For slowly varying diffusion coefficient (represented here with a ‘$`\lambda =const`$’ model) high energy particles which diffuse far away off the discontinuity and loose there much of their energy still have a chance to diffuse back to be accelerated at the discontinuity. In the model with $`\kappa `$ quickly growing with particle energy (here the ‘$`\lambda =Cp`$’ model) such distant particles will decrease their mobility in a degree sufficient to break, or at least to limit their further acceleration. One should note that in both models the spectrum inclination at low energies is the same (here $`n(p)p^2`$).
## 5 Consequences of the jet’s ‘cosmic ray cocoon’
Cosmic ray distributions in Figs. 1 and 2 reveal a few spectral components: a flat power-law section at small energies, followed either by a smooth transition to the cut-off, or first by a hard component ('bump' or 'spike') preceding the final cut-off. The latter case occurs when the radiative losses are efficient enough to pile up particles below the loss dominated high energy range. Spectra without such a hard component will appear when particles escape from the acceleration region at low energies, or when the acceleration time scale is longer than the relevant dynamical scales (e.g. for the jet expansion or slowing down). One may note that in the models discussed here the power-law section of the spectrum has the form $`n(p)\propto p^{-2}`$.
The normalization of the spectrum at low momenta, essential for dynamical considerations, is defined by the injection efficiency. This parameter cannot be derived from available models or observations and is treated as a free parameter in the present considerations. The cosmic ray pressure at the jet boundary, $`P_{cr}\propto \int p\,n(p)\,dp`$, grows with growing injection efficiency and with the extension of the spectrum in energy. Additionally, the high energy bump can substantially contribute to $`P_{cr}`$. Let us review a few possibilities arising due to the cosmic ray population at the jet boundary, as illustrated in Fig. 3.
Dynamical effects caused by cosmic rays depend on the ratio of $`P_{cr}`$ to the ambient medium pressure, $`P_{ext}`$, at the boundary. If, at small particle energies, the acceleration time scale is longer than the jet expansion time, or particle escape is efficient at small energies, the energetic particle population that forms cannot reach a sufficiently high energy density to allow for dynamical effects in the medium near the interface. Then it acts only as a small viscous agent near the boundary, slightly decreasing the gas and magnetic field concentration (cf. Arav & Begelman 1992). In such cases we call the resulting cylindrically distributed cosmic ray population a 'weak cosmic ray cocoon'. If the accelerated particles are electrons, or can transfer energy to electrons, a uniform cosmic ray electron population may be formed along the jet, leading to the observed synchrotron component with slowly varying spectral index and break frequency. The density of such radiating electrons is expected to have a maximum in a cylindrical layer at the jet boundary.
If acceleration dominates losses at small (injection) energies, then the time-dependent high energy part of the spectrum can take a power-law form with a growing cut-off energy, like the short time distributions in Figs. 1 and 2. After losses become significant, a few possibilities appear. If this happens at low energies, when the acceleration process is limited to the turbulent shear layer, a power law with a growing sharp spike preceding the cut-off energy will appear. If, in such a case, the increasing cosmic ray pressure in the cocoon reaches values comparable to the medium pressure, a substantial modification of the jet boundary layer is expected. Below we discuss various possibilities arising in such cases of 'dynamic cosmic ray cocoons'. If the acceleration at the tangential discontinuity resembles our models in section 4.2, then, depending on the conditions near the jet, the cosmic ray pressure may stabilise at an intermediate value $`P_{\mathrm{cr}}<P_{\mathrm{ext}}`$, or grow to form the dynamic cosmic ray cocoon.
Let us consider a possible scenario for the dynamic interaction of high energy cosmic rays with the jet and the ambient medium. The particles are 'injected' and further accelerated at the jet boundary. A growing number of such particles results in a cosmic ray pressure gradient outside the jet that pushes the ambient medium apart. Additionally, an analogous gradient may form directed into the jet, helping to keep it collimated. The resulting rarefied medium, or a region near the jet boundary partly emptied of magnetized plasma, will decrease the acceleration efficiency. Thus the cosmic ray energy density may build up only to a value comparable to the ambient medium pressure, at which it is able to push the magnetized plasma away. Because the diffusive escape of charged particles from the cosmic ray cocoon is not expected to be efficient (contrary to the photons considered by Arav & Begelman 1992), in some cases the blown out volumes could be quite large, reaching values comparable to $`R_j`$ or even to the local vertical scale of the gas. The accumulated cosmic rays can be removed by advection – in the form of cosmic-ray-filled bubbles or cosmic ray dominated winds – outside the active nucleus into regions of more tenuous plasma, or simply outside the jet at larger distances from the central source.
A jet moving in a space filled with photons and high energy cosmic rays (cf. Fig. 3) is subject to a braking force due to the scattering of these species (e.g. Sikora et al. 1998, for the photon braking). For cosmic rays with $`\lambda \gtrsim R_j`$ both types of particles penetrate relatively freely inside the jet and the braking force is exerted more or less uniformly within its volume, in rough proportion to the electron (or pair) density for the photon braking and to the turbulent magnetic field energy density for the cosmic ray braking. If the cosmic ray cut-off energy is lower, with the equivalent $`\lambda <R_j`$, the cosmic ray braking force acts only within a jet boundary layer of width $`\lambda `$. From Eq. (2.3) we estimate the cosmic ray braking force per unit jet length to be
$$f_{\mathrm{b,cr}}=2\pi \eta _\mathrm{p}(\mathrm{\Gamma }-1)P_{\mathrm{cr}}R_j,$$
$`(5.1)`$
where we consider $`\lambda \gtrsim R_j`$ and we put $`U=1`$. From the above discussion, in stationary conditions one can put $`P_{\mathrm{cr}}\approx P_{\mathrm{ext}}`$, where $`P_{\mathrm{ext}}`$ is the external medium pressure. For a jet with a (relativistic) mass density $`\rho _j`$, with Eqs. (2.2, 2.3) and $`\eta _\mathrm{p}=\eta _\mathrm{E}`$, the jet braking length due to cosmic rays is
$$L_{\mathrm{b,cr}}=R_j\frac{\mathrm{\Gamma }}{4\eta _0}\frac{\rho _jc^2}{P_{\mathrm{ext}}}.$$
$`(5.2)`$
For example, assuming $`P_{\mathrm{ext}}=\rho _jc^2`$ and $`\eta _0=0.25`$, we obtain $`L_{\mathrm{b,cr}}=\mathrm{\Gamma }R_j`$.
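As a quick numerical check of Eq. (5.2) and of the example above (the values for $`R_j`$ and $`\mathrm{\Gamma }`$ below are placeholders):

```python
# Check of the braking length, Eq. (5.2): L = R_j * Gamma/(4*eta0) * rho_j c^2 / P_ext.
def braking_length(R_j, Gamma, eta0, rho_c2_over_P_ext):
    return R_j * Gamma / (4.0 * eta0) * rho_c2_over_P_ext

R_j, Gamma = 1.0e17, 10.0                  # cm and bulk Lorentz factor (assumed values)
L = braking_length(R_j, Gamma, eta0=0.25, rho_c2_over_P_ext=1.0)
print(L / R_j)                             # -> 10.0, i.e. L_b,cr = Gamma * R_j
```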
Because of the dynamic way in which the ambient medium is pushed out, followed by the escape of the cosmic rays, the back-reaction of this process on the particle acceleration is expected to make the full process unstable, with intermittent behaviour seen on longer time scales. The full configuration, with the 'heavy' ambient gas supported by the 'light' gas of ultra-relativistic particles in the cosmic ray cocoon, is expected to be subject to the Rayleigh-Taylor ('RT') instability. The related characteristic time scale can be roughly estimated as
$$t_{\mathrm{RT}}\approx \left(\frac{L}{2\pi g}\right)^{1/2},$$
$`(5.3)`$
where $`g`$ is the gravitational acceleration and $`L`$ the scale of the instability (e.g. for $`g=10^2`$ cm/s<sup>2</sup>, $`L=10^{17}`$ cm and the additional requirement of a sound velocity comparable to $`c`$, the time scale $`t_{\mathrm{RT}}`$ is estimated to be below $`1`$ yr).
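The quoted estimate can be reproduced directly from Eq. (5.3):

```python
import math

# Rayleigh-Taylor time scale of Eq. (5.3) for g = 1e2 cm s^-2 and L = 1e17 cm.
g, L = 1.0e2, 1.0e17                         # cgs values quoted in the text
t_RT = math.sqrt(L / (2.0 * math.pi * g))
print(t_RT, "s  ~", t_RT / 3.15e7, "yr")     # ~1.3e7 s, i.e. about 0.4 yr (below 1 yr)
```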
Another type of instability can be generated by the time dependence of the non-linear acceleration process. Continuous injection of (low energy) seed particles into the acceleration process can proceed until the cosmic ray pressure becomes equal to $`P_{\mathrm{ext}}`$. Then the ambient plasma is pushed away from the jet and its interaction with the jet boundary surface diminishes. This must lead to a decrease of the injection efficiency, if acceleration at the turbulent surface layer is responsible for the process. The cosmic ray energy density contained in the still accelerated highest energy particles then increases until these particles manage to escape from the jet vicinity, allowing the original conditions to be re-established. This lets the ambient medium 'fall back' onto the jet and start a new phase of intensive interaction between the jet and the ambient magnetized plasma, initiating efficient injection of low energy seed particles. The process should be accompanied by intense kinetic energy dissipation and a radiation flare at all frequencies. In the flare phase one can expect a substantial weakening of the cosmic ray jet braking mechanism, allowing for a larger jet velocity and the formation of internal shocks. Next, the full process could repeat, with a time scale comparable to the time required for removing the highest energy particles from the system. If the inequality $`t_{\mathrm{RT}}>\tau _{\mathrm{td}}(E_{\mathrm{max}})`$ holds, continuous (diffusive, or wind-like) particle escape will govern the process. Then one may expect smaller variations of the output radiation flux.
The presented discussion assumed cylindrical symmetry of the unstable flow, which may not hold. However, any large amplitude perturbation of the conditions near or within the jet cannot occur on a spatial scale much smaller than $`R_j`$, nor on an observer's time scale much shorter than $`R_j/c`$.
## 6 Conclusions and further speculations
Limited to the hydrodynamic approach, the present discussion is intended to provide an alternative view of the AGN central activity related to jet outflows. The acceleration of cosmic rays up to extremely high energies occurs in a natural way at the relativistic jet boundary if there are effective preliminary acceleration mechanisms providing seed particles with mean free paths comparable to the width of the boundary layer. Under the assumption that such processes work efficiently, we discussed several possible consequences for the conditions in the spatial volume containing the jet. Here, the main factor at play is a flat spectrum cosmic ray population carrying substantial energy density in the highest energy particles. These particles may dynamically influence the jet flow and the conditions in the surrounding medium, without direct radiative effects. We consider the possibility that the jet is temporarily separated from the ambient gas by a layer filled with cosmic rays and ambient photons. During such a phase the electromagnetic radiation produced in the jet can more easily escape from the AGN centre, to reach an observer situated close to the jet axis direction. Also, the plasma self-absorption frequency can be decreased for such an observer if the optical depth of the upper plasma layers does not dominate the output. As we expect a larger intensity of generated low-frequency radiation when the ambient medium directly pushes on the jet, the discussed picture should be characterized by a positive correlation of the radiation intensity with a shift of the self-absorption frequency to larger values (see Böttcher 1999 for a recent discussion of such shifts within the standard jet picture). Also, for flares in some BL Lac objects with week-to-month time scales, the beginning of the flare should be seen approximately at the same time at all non-absorbed frequencies. Only the later evolution of the introduced disturbance will lead to shifts of the flare maxima at different frequencies. One should remember that we do not include in this discussion other radiation sources (accretion disc, corona) in the vicinity of the active nucleus.
The instabilities related to the discussed process may lead to temporary variations of the jet flow velocity and of the degree of jet surface perturbation. Perturbations in the jet flow introduced by these instabilities may also lead to shock wave formation, with its observational consequences. As a result, the processes accelerating lower energy cosmic rays and cosmic ray electrons are expected to have a fluctuating nature, with time scales for large changes estimated in Eq. (5.3). The processes occurring inside the jet can be characterized by an observer's time scale a factor of $`\mathrm{\Gamma }`$ shorter. However, when the jet perturbation is introduced by the external process, the actual time scale will be intermediate between the internal, Lorentz contracted one and the external perturbation scale.
In the above discussion we avoided considering the acceleration of electrons (or pairs). In the above-mentioned model of Mannheim (1993), energetic electrons arise from the cascading of pairs produced by energetic proton interactions inside the jet. One can also consider different scenarios providing the cosmic ray electrons. For example, in the space close to the jet boundary a large power can be stored in a highly anisotropic population of cosmic ray protons. Such a distribution is known to be unstable, and it leads to the creation of long electromagnetic plasma waves. Damping of such waves by pairs may be a very efficient acceleration process providing cosmic ray electrons (cf. Hoshino et al. 1992, in a different context). Of course, short plasma waves at the jet boundary can also be generated by the velocity shear.
In the presented evaluations we often consider the situation in which particles start to play a dynamic role in the system when their energies reach scales yielding gyroradii $`r_\mathrm{g}\approx R_j`$. If particles become dynamically important at lower energies, with $`r_\mathrm{g}\ll R_j`$, all considered time scales should be scaled down accordingly. Then, the jet braking force due to cosmic rays acts only on the external layers of the jet, generating magnetic stresses along it.
## Acknowledgements
Discussions with Mitch Begelman and Marek Sikora were particularly useful during preparation of this paper. Critical remarks of Luke Drury helped to improve the final version of the paper. I also gratefully acknowledge support from the Komitet Badań Naukowych through the grant PB 179/P03/96/11, and, partly, within the project 2 P03D 002 17.
# Thermally-induced expansion in the 8 GeV/c $`\pi ^-`$ + <sup>197</sup>Au reaction
T. Lefort<sup>1</sup>, L. Beaulieu<sup>1</sup>, A. Botvina<sup>2</sup>, D. Durand<sup>3</sup>, K. Kwiatkowski<sup>1</sup> (present address: Physics Division, Los Alamos National Laboratory, Los Alamos, NM 87545), W.-c. Hsi<sup>1</sup> (present address: 7745 Lake Street, Morton Grove IL 60053), L. Pienkowski<sup>4</sup>, B. Back<sup>5</sup>, H. Breuer<sup>6</sup>, S. Gushue<sup>7</sup>, R.G. Korteling<sup>8</sup>, R. Laforest<sup>9</sup> (present address: Washington University Medical School, 510 Kingshighway, St. Louis MO 63110), E. Martin<sup>9</sup>, E. Ramakrishnan<sup>9</sup>, L.P. Remsberg<sup>7</sup>, D. Rowland<sup>9</sup>, A. Ruangma<sup>9</sup>, V.E. Viola<sup>1</sup>, E. Winchester<sup>9</sup>, S.J. Yennello<sup>9</sup>
<sup>1</sup>Department of Chemistry and IUCF, Indiana University Bloomington, IN 47405
<sup>2</sup> Institute for Nuclear Research, Russian Academy of Science, 117312 Moscow, Russia.
<sup>3</sup> LPC de Caen, 6 Boulevard Marechal Juin, 14050 CAEN, France.
<sup>4</sup>Heavy Ion Laboratory, Warsaw University, 02 097 Warsaw Poland.
<sup>5</sup>Physics Division, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439.
<sup>6</sup>Department of Physics, University of Maryland, College Park, MD 20742.
<sup>7</sup>Chemistry Division, Brookhaven National Laboratory, Upton, NY 11973.
<sup>8</sup>Department of Chemistry, Simon Fraser University, Burnaby, B.C., V5A 1S6 Canada.
<sup>9</sup>Department of Chemistry and Cyclotron Laboratory, Texas A&M University, College Station, TX 77843, USA.
## Abstract
Fragment kinetic energy spectra for reactions induced by 8.0 GeV/c $`\pi ^-`$ beams incident on a <sup>197</sup>Au target have been analyzed in order to deduce the possible existence and influence of thermal expansion. The average fragment kinetic energies are observed to increase systematically with fragment charge but are nearly independent of excitation energy. Comparison of the data with statistical multifragmentation models indicates the onset of extra collective thermal expansion near an excitation energy of E\*/A $`\approx `$ 5 MeV. However, this effect is weak relative to the radial expansion observed in heavy-ion-induced reactions, consistent with the interpretation that the latter expansion may be driven primarily by dynamical effects such as compression/decompression.
PACS: 25.70.Pq,21.65.+f,25.40-h,25.80.Hp
The origin of the multifragmentation process, and its link to a nuclear liquid-gas phase transition in finite systems, is one of the most interesting and debated questions in the field of many-body nuclear dynamics. Is the fragmentation process thermally driven, initiated by an early compressional stage, or simply induced by mechanical or shape instabilities? The observation of collective expansion energy at the end of the reaction may help to shed some light on the origin of the process.
The expansion of hot nuclear matter is usually attributed either to an internal thermal pressure or to the response to an initial compression produced at the beginning of the reaction. Two stages of the expansion can be schematically defined. The first drives the nucleus up to the freezeout configuration, in competition with the restraining nuclear force. A possible second stage corresponds to an extra residual expansion energy (or radial flow) that exceeds the minimum required to reach freezeout. The collective expansion energy is proportional to the masses of the emitted particles.
The onset of extra expansion energy has been observed in heavy-ion collisions near 5-7 A MeV of available center-of-mass energy for fusion-like events ($`\mathrm{A}_{\mathrm{tot}}>250`$). In a subsequent analysis, Bougault et al. showed that a pure thermal extra expansion energy, simulated with the Expanding-Evaporating Source model (EES), accounts for only a small part of the measured extra expansion energy. On this basis, and supported by BNV calculations, they attributed the extra expansion energy observed in their data to an early compressional stage in the collision. Thus, one can link the multifragmentation energy threshold (about 5 A MeV) to the onset of collective extra expansion energy initiated by a compressional phase. In this paper, we address the possible existence of thermally-induced extra expansion energy and its link to the thermal multifragmentation process.
The existence of a thermally-induced extra collective expansion is an open question. The EOS collaboration found a large amount of collective expansion, up to 50% of the total available energy, in their study of the $`{}_{}{}^{197}\mathrm{Au}+{}_{}{}^{12}\mathrm{C}`$ reaction at 1 A GeV. On the other hand, the collective expansion observed in the spectator study of the ALADIN group is moderate. In both cases, the excitation energy of the projectile should be mainly thermal, as for light-ion-induced collisions. Since the presence of collective expansion at high excitation energy may affect the isotope thermometer accuracy and hence the shape of the caloric curve, it is important to determine the extent of collective expansion in these reactions.
The advantage of using light-ion-induced collisions stems from the nature of the energy deposited in the target nucleus: the contribution of compression, angular momentum and deformation is weak, and the main part of the deposited energy is thermal. Previous studies have shown that light projectiles can deposit excitation energies up to E\*/A $`\approx `$ 9 MeV in a gold target nucleus, well above the energy threshold for multifragmentation. Thus, light-ion-induced collisions offer a powerful tool for studying the relationship between multifragmentation and collective thermal expansion.
In this letter we study fragment kinetic energy observables for the 8 GeV/c $`\pi ^-`$ + <sup>197</sup>Au reaction. The data are compared with different statistical models, SIMON and SMM (Statistical Multifragmentation Model), as well as with data from heavy-ion reactions. The analysis is based on experiment E900a, performed at the Brookhaven AGS accelerator with tagged beams of 8 GeV/c $`\pi ^-`$ incident on $`{}_{}{}^{197}\mathrm{Au}`$, using the Indiana Silicon Sphere (ISiS), a 4$`\pi `$ detector array with 162 gas-ion-chamber/silicon/CsI telescopes. Further experimental details can be found in earlier publications.
In light-ion-induced collisions the emission spectra can be described with two components: an early pre-equilibrium emission stage that is forward-focused along the beam axis (mainly composed of energetic light charged particles) and isotropic emission from a slowly moving equilibrium-like residual source. Thermal-like charged particles are defined by the spectral shapes, from which an upper cutoff of 30 MeV for Z=1 and 9Z+40 MeV for heavier fragments is assigned. The pre-equilibrium-like particles emitted above the cutoff energy are removed. Then the charge, mass and excitation energy of the equilibrated residual source are determined via event-by-event reconstruction. The amount of pre-equilibrium emission increases with the excitation energy, leading to a decrease of the equilibrated source average mass and charge, from $`<\mathrm{Z}>`$=76, $`<\mathrm{A}>`$=188 at E\*/A=1 MeV to $`<\mathrm{Z}>`$=56, $`<\mathrm{A}>`$=138 at E\*/A=9 MeV.
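The event-by-event selection of thermal-like particles described above can be sketched as follows; the cutoff values are those quoted in the text, while the example event and the simple charge bookkeeping are hypothetical placeholders rather than the full ISiS calorimetry.

```python
# Thermal-like particle selection: upper kinetic-energy cutoff of 30 MeV for
# Z = 1 and (9Z + 40) MeV for heavier fragments, as quoted above.  The event
# content below is a hypothetical example, not ISiS data.
def thermal_cutoff(Z):
    return 30.0 if Z == 1 else 9.0 * Z + 40.0          # MeV

def select_thermal(event):
    """event: list of (Z, kinetic energy in MeV) of detected charged particles."""
    return [(Z, E) for Z, E in event if E <= thermal_cutoff(Z)]

event = [(1, 12.0), (1, 85.0), (2, 30.0), (6, 70.0), (6, 180.0), (8, 95.0)]
thermal = select_thermal(event)
print("thermal-like particles       :", thermal)
print("charge tagged pre-equilibrium:",
      sum(Z for Z, _ in event) - sum(Z for Z, _ in thermal))
```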
A shape transition of the charge distribution, from a power-law behavior to an exponential-like pattern, has been observed in coincidence with an extra collective expansion energy. Since collective expansion may shorten the time for fragment formation, this transition has been interpreted as a sign of the presence of expansion energy. For the 8 GeV/c $`\pi ^-`$ + <sup>197</sup>Au reactions, a power law is able to reproduce the charge distribution of the equilibrated component; nonetheless, above E\*/A=7 MeV, an exponential fit provides a better result. The exponential pattern is also observed in a pure statistical scenario and is enhanced by secondary decay. Therefore, it is necessary to investigate other observables, such as fragment kinetic energies, in order to point out the existence of collective expansion.
In figure 1 the angle-integrated kinetic energy spectra for carbon nuclei (representative of all fragment spectra) are shown for three excitation energy bins. The energy of the Coulomb-like peak decreases with increasing excitation energy, in agreement with the measured decrease of the source size (see above) and consistent with the onset of expanded nuclei. On the other hand, the spectral slope increases with excitation energy. No evidence is seen for the strong deformation of the spectra induced by a collective expansion that was reported for heavy-ion-induced reactions.
The mean kinetic energy of fragments as a function of their mass (charge) is also an indicator of the presence of collective expansion. One expects no dependence (flat behavior) for a pure thermal process, a slight increase due to Coulomb effects, and a steeper slope when an expansion energy is present. For a constant source size (charge) and density, the fragment mean kinetic energy is also expected to increase as a function of the excitation energy. In figure 2 the mean kinetic energy of fragments, transformed into the source frame, is plotted as a function of fragment charge for several excitation energy bins. While the data are found to increase with the charge, little dependence on excitation energy is noticed. The constancy observed as a function of the excitation energy can be interpreted as a balance between the increase in thermal energy and the decrease in Coulomb repulsion of the emitting source (due to lower average source charge and possibly density) with excitation energy, consistent with the evolution of the spectral shapes in figure 1. Finally, it is worth mentioning that residues (if any), which have a lower average kinetic energy, are not identified in ISiS due to threshold effects.
In this letter, we focus on excitation energies above E\*/A=4 MeV, where one may expect to see evidence for collective expansion. In figure 3, the fragment mean kinetic energies are compared with predictions of SIMON-evaporation, SIMON-explosion and SMM simulations. The inputs of all the model simulations are identical, using the source charge, mass, velocity and excitation energy distributions reconstructed from the data. The simulations are then filtered to take into account the geometry of ISiS, the energy thresholds and the energy lost in the target. In addition, the simulated events have been sorted in the same way as the experimental data and the excitation energy recalculated event-by-event. The results are found to be equal to the initial input within 10%, which gives an estimate of the confidence level of the comparison. For SIMON-explosion the procedure was reduced to a single geometrical filter.
The procedure to extract the expansion energy is as follows. First we check to ensure that the model reproduces the IMF multiplicity and charge distribution. If so, the thermal and Coulomb energies of the fragments are calculated with the model and compared with the data. It may be necessary to add an extra collective energy (proportional to the mass of the emitted fragment) to reproduce the fragment mean kinetic energy. This extra collective energy corresponds to the expansion energy.
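The last step of this procedure – adjusting a collective term proportional to the fragment mass so that the model matches the measured mean kinetic energies – can be sketched as below; all numerical values are illustrative placeholders, not the ISiS data or the SMM output.

```python
import numpy as np

# Sketch of the expansion-energy extraction: the model mean kinetic energy of
# a fragment is supplemented by a collective term proportional to its mass,
#   <E_kin>(Z) = <E_model>(Z) + A(Z) * e_flow ,
# and e_flow is fitted (least squares) to the measured mean energies.
# All numbers below are placeholders chosen only for illustration.
Z       = np.array([4, 6, 8, 10, 12])           # fragment charge
A       = 2.0 * Z                               # crude mass-number estimate
E_model = np.array([28., 38., 47., 55., 62.])   # thermal + Coulomb from a model (MeV)
E_data  = np.array([32., 44., 55., 65., 74.])   # 'measured' mean kinetic energies (MeV)

e_flow = np.sum(A * (E_data - E_model)) / np.sum(A**2)   # MeV per nucleon
print(f"extracted collective (expansion) energy: {e_flow:.2f} MeV/nucleon")
```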
Above E\*/A=3 MeV, the evaporative model at normal density, SIMON-evaporation, underestimates the IMF (Z=3-16) multiplicity (by at least a factor of two at E\*/A=8 MeV), underestimates the mean kinetic energies, and produces too few heavy IMFs. This discrepancy confirms that a standard evaporative process is not able to reproduce the IMF multiplicities and the charge distributions at high excitation energy, although this is still debated. It is therefore not relevant to use this model to estimate an expansion energy.
Since the time-dependent evaporative model is unable to reproduce the data above E\*/A=3 MeV, we then examined SMM and SIMON-explosion models, which assume a simultaneous break-up process. For both models the density is first set at one-third of normal density at the freeze-out stage. In our experimental event selection, we assume that the fast emission that takes place before the freeze-out is mainly removed by our cutoff energy defined above. Therefore, the experimental excitation energy determined within the energy cutoff is used as the source of thermal excitation energy at freeze-out for both models.
In order to contrast a picture in which the fragments are emitted cold with one in which they are excited, we employ SIMON-explosion to investigate the former case and SMM for the latter. In the cold fragment scenario, instead of feeding SIMON-explosion with a charge distribution of hot fragments that undergo secondary decay to produce the experimental charge distributions, the experimental charge distribution (cold fragments) is used as input. In this context, the IMF multiplicity and the charge distribution are in agreement with the data by definition. Figure 3 shows that this model reproduces the data at E\*/A=5 MeV but underpredicts the fragment kinetic energies at higher excitation energies.
Finally, we used SMM in which it is assumed that thermal and collective expansion are unfolded and only the thermal energy is used to generate partitions. The Z=6-16 multiplicity and charge distributions are well reproduced above E\*/A=6 MeV. For both SMM and SIMON-explosion models, it is necessary to add about 0.5 MeV/A of collective energy for the E\*/A=6-8 MeV bin in order to match the experimental mean kinetic energies. For SMM calculations at $`\rho _0/3`$, good agreement with the kinetic energy spectra is obtained if an additional collective expansion energy is included, as shown by the lines in figure 1. Except for the high kinetic energy tail, the carbon kinetic energy spectra are well reproduced at about 5 A MeV with no expansion energy (solid line) and with an added 0.5 A MeV (dashed line) for the E\*/A=6-9 MeV bin.
As shown in figure 4 the amount of extra collective expansion energy is low and therefore highly dependent on the Coulomb energy in the simulations, i.e the source volume or density. In order to investigate the density dependence of the procedure used to extract the collective expansion, we have performed SMM calculations in which the density value is varied. The IMF multiplicity and the charge distribution predicted by SMM are in good agreement with such data for density values between $`\rho _0/3`$ and $`\rho _0/2`$. The two calculations, $`\rho _0/3`$ and $`\rho _0/2`$ correspond, respectively, to the upper and the lower limit of the shaded zone in the lower panel of figure 4. Even at a higher density, an additional collective energy is necessary to match the fragment mean kinetic energy. The onset of the extra collective energy occurs at about E\*/A=5 MeV and increases with the excitation energy. This behavior is consistent with an increasing thermal pressure inside the nucleus as a function of the excitation energy.
The upper panel of figure 4 shows the IMF emission probability as a function of the excitation energy. Below excitation energy of about E\*/A=4 MeV the light charged particle emission is the prominent decay channel. At higher excitation energy, emission with one or more IMFs takes over. The onset of multiple IMF emission occurs in the same excitation energy range as that for the onset of the thermal expansion energy. The similarity underlines the possible link between expansion energy and multiple fragment emission probability.
Finally, a comparison with heavy-ion collisions is also made in the lower panel of figure 4. This comparison has been limited to systems with a well defined fusion source, in order to avoid the problem of source separation present in the main part of the impact-parameter range. In addition, we only refer to studies performed in comparison with SMM. The collective energy is expressed as a function of the freeze-out excitation energy (SMM input) instead of the available energy per nucleon used in those studies. In contrast with the small amount of thermal expansion energy found in the ISiS collisions, in central heavy-ion collisions the rise is much larger, as shown by the symbols and the two lines in figure 4, which correspond to different assumptions for extracting the collective expansion. This behavior indicates that the collective expansion observed in central heavy-ion collisions cannot be explained with only a thermal component, and supports the concept of an early compressional stage in central heavy-ion collisions that is not present in light-ion induced collisions.
In conclusion, a study of the fragment kinetic energies has been performed for the 8 GeV/c $`\pi ^-`$ + <sup>197</sup>Au reactions. The sequential simulation at normal density, SIMON-evaporation, failed to reproduce the data above E\*/A=4 MeV of excitation energy. For the two simultaneous models, SMM and SIMON-explosion, the fragment mean kinetic energies are well reproduced if an extra collective expansion energy is added at high excitation energy. Within the context of the SMM calculation, the onset of this collective expansion energy takes place at about E\*/A=5 MeV of excitation energy. The expansion energy increases slightly with the excitation energy, consistent with a thermally-induced expansion scenario. This observation, consistent with a soft explosion, also suggests that the nucleus is a dilute system at the break-up stage. Multiple IMF production takes place in the same excitation energy range, underlining the possible relationship between enhanced IMF emission and expanded nuclei. Nonetheless, the thermal expansion energy is weak and much lower than that observed in central heavy-ion collisions within the same excitation energy range. Therefore, the main part of the expansion energy observed in central heavy-ion collisions at intermediate energies must be related to a dynamical stage (initial compression?) that does not exist in light-ion induced collisions.
Acknowledgements: This work was supported by the U.S. Department of Energy and National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, Grant No. P03B 048 15 of the Polish State Committee for Scientific Research, the Indiana University Office of Research and the University Graduate School, Simon Fraser University and the Robert A. Welch Foundation.
# A Companion Galaxy to the Post-Starburst Quasar UN J1025-0040
## 1 Introduction
Although the relationship between starbursts and nuclear activity remains controversial (see, e.g., Joseph 1999 and Sanders 1999), the two are often found together. Furthermore, there has long been strong circumstantial evidence that QSO activity is often triggered by interactions or mergers (Stockton 1999 and references therein), and it is well established that virtually all of the most luminous starbursts show strong interaction (Sanders & Mirabel 1996); at least some of these also show nuclear activity. Indeed, some objects classified as ultraluminous infrared galaxies (ULIGs) on the basis of their far-IR flux densities were originally known as QSOs (e.g., 3C 48, Mrk 231, Mrk 1014). Spectroscopy of hosts and companions of these and other objects that show both QSO and ULIG characteristics confirms that they almost universally have dominant post-starburst populations (e.g., Boroson & Oke 1984, Stockton, Canalizo & Close 1998, Canalizo & Stockton 2000a, 2000b).
One of the most spectacular examples of such an object is UN J1025-0040, recently identified by Brotherton et al. (1999; hereafter Paper I), originally targeted as a quasar candidate by the 2dF survey (http://msowww.anu.edu.au/~rsmith/QSO\_Survey/qso\_surv.html; Smith et al. 1996). UN J1025-0040 is a quasar where, at least in the optical, the flux contribution from a recent massive starburst is closely balanced with that from the AGN, leading to an unusual composite spectrum. Brotherton et al. estimate an age of 400 Myr for the post-starburst population. They emphasize the possibility that UN J1025-0040 may be a transitional object between ULIGs and the classical QSO population.
In the deep $`K_S`$ image shown as Fig. 2 in Paper I, there is a faint object 4.2 arcsec south-southwest of the quasar, having an essentially stellar profile. However, the host galaxy of the quasar is elongated in roughly the same direction, so, as suggested in Paper I, this object might be a companion galaxy rather than simply an intervening star. Here we present spectroscopic and imaging observations that confirm this suggestion.
## 2 Observations and Data Reduction
Spectroscopic observations of UN J1025$``$0040 and its companion were carried out on UT 1999 April 22 with the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck II telescope. The slit was 1″ wide and oriented at a PA of 23° to pass through both the quasar and the companion; the grating had 400 grooves mm<sup>-1</sup> and was blazed at 8500 Å, giving a dispersion of 1.86 Å pixel<sup>-1</sup>, and a projected slit width of $`8.5`$ Å. The observations were taken close to the meridian, so the slit position angle was always within 30° of the parallactic angle, and the zenith angle was never more than 21°. There was no order-separating filter in the beam, so we use only the portions of the spectrum uncontaminated by second order overlap. The total integration time was 2160 s.
The spectroscopic reduction followed standard procedures. After subtracting bias and dividing by a normalized halogen lamp flat-field frame, we rectified the individual frames and placed them on a wavelength scale by a transformation to match measurements of the spectrum of a Hg-Ne-Kr lamp. We then traced the spectra of the quasar and the companion using routines in the IRAF apextract package. We calibrated the spectra by observations of the spectrophotometric standard stars Feige 34 and Wolf 1346 (Massey et al. 1988), and averaged the one-dimensional traces from the three frames with the IRAF task scombine.
We obtained $`K_S`$ images of UN J1025$``$0040 using the Near-Infrared Camera (NIRC; Matthews and Soifer 1994) at the Keck I telescope on UT 1998 April 18. The details of the observations and data reduction are described in Paper I.
We obtained $`H`$-band images of UN J1025-0040 with the University of Hawaii (UH) 2.2 m telescope on UT 1999 April 3. We used the $`1024\times 1024`$ QUIRC (HgCdTe) infrared camera (Hodapp et al. 1996) at f/31, which gives an image scale of 0.0608 arcsec pixel<sup>-1</sup>. The sky conditions were photometric and the seeing was approximately 0.3 arcsec. We used four UKIRT faint standard stars (Casali & Hawarden 1992) for flux calibration. The total integration time was 3600 s.

We also obtained $`R`$ and $`I`$-band images with the UH 2.2 m telescope on UT 1999 April 6–7 and UT 1999 May 5. The April images were obtained through thin cirrus with approximately 0.9 arcsec seeing and were calibrated to the May images, for which conditions were photometric with approximately 0.7 arcsec seeing. All images were obtained with a Tektronix $`2048\times 2048`$ CCD with an image scale of 0.22 arcsec pixel<sup>-1</sup> and were calibrated by 5 standard stars in Selected Area 101 (Landolt 1992). The total exposures on the UN J1025-0040 field were 4500 s in $`R`$ and 9000 s in $`I`$.
## 3 Results
Figure 1 shows the spectra of UN J1025$``$0040 and the companion object, which clearly is a galaxy associated with the quasar. The redshift of the starburst in the host galaxy, as measured from stellar absorption lines, is $`z_{host}=0.6344\pm 0.0001`$. The redshift of the companion galaxy $`z_{comp}=0.6341\pm 0.0001`$ was measured from the \[O II\] $`\lambda `$3727 emission line.
We compared the spectrum of the companion to Bruzual & Charlot (1996) isochrone synthesis models. The lighter line in Fig. 1 is an 800 Myr old instantaneous burst, Scalo (1986) initial mass function, solar metallicity model, which gives a reasonable fit to the data. Note that, in spite of the rather noisy spectrum, the absorption features in the model are fit well by the observed spectrum. The agreement of the absorption features and overall shape of the continuum in the model with the observed spectrum, as well as the presence of the \[Ne III\] emission line at $`\lambda `$3869, provide further evidence that the galaxy is at redshift $`z=0.6341`$.
The images of UN J1025$``$0040 in $`R`$, $`I`$, and $`H`$ bands (Fig. 2) show the same basic morphology as the $`K_s`$ image shown in Fig. 2 of Paper I. In general, the images of the host galaxy show an elongation towards the companion, and possibly a hint of a bridge between the two objects. Our 2-D spectra (Fig. 3) show faint and clumpy \[O II\] $`\lambda `$3727 emission between the companion and the host.
Table 1 gives the photometry of UN J1025$``$0040 and its companion. Figure 4 shows the SED of the companion in the rest frame, with the 800 Myr model superposed on the photometry points. Even though this model was chosen solely on the basis of its fit to the spectroscopy at $`\lambda _o<4500`$ Å, it is in remarkable agreement with the photometry of the object. The fact that the two IR points are well fit by the model indicates that there is little or no dust.
We experimented with adding older stellar components, allowing both ages and relative contributions to vary, but we were not able to obtain any better fit to both the spectrum and the SED. Therefore, at this level, we find no evidence for a significant older stellar component in the companion galaxy.
## 4 Discussion
With our confirmation that the object south-southwest of UN J1025-0040 is a companion galaxy and the evidence that the object is physically related to the quasar, this system joins other examples for which it is plausible that both the quasar activity and the starburst may have been triggered by an interaction. From the $`I`$-band magnitude for the companion given in Table 1, we estimate an absolute magnitude $`M_B=-18.3`$ ($`H_0=75`$, $`q_0=0`$), so the companion is similar to the Large Magellanic Cloud in luminosity, but more compact (approximately 1.9 kpc).
We find very different ages for post-starburst populations in the host galaxy (400 Myr) and the companion galaxy (800 Myr). These ages, of course, are somewhat uncertain. Because of contamination from the quasar, and the fact that the SED of the quasar itself remains unknown, it is difficult to model the stellar population with high accuracy. The spectrum of the companion, on the other hand, is too noisy for detailed modeling. However, the contrast between the strong Balmer lines and the Balmer discontinuity in the host, and the 4000 Å break in the companion, show clearly that the post-starburst ages of these objects cannot be the same and, in fact, must differ by a few $`\times 10^8`$ years. High-spatial-resolution spectroscopy could separate the spectrum of the quasar nucleus from that of the starburst and allow a more precise determination of the age of the starburst.
It is possible that the starburst in the companion may have been triggered during a previous passage, while the corresponding starburst in the host galaxy, if present, may be masked by the more recent starburst. This suggestion is appealing because the orbital period of the pair should be of the order of a few $`\times 10^8`$ years, while it is difficult to imagine internal galactic processes having similar time scales which could cause massive starbursts; and it is equally difficult to imagine that these two starbursts are completely unrelated. An orbital origin for the starbursts also fits well with the episodic star formation at times of close passage seen in the merger models of Mihos & Hernquist (1996). At this stage, however, attributing the age difference to the orbital period of UN J1025$``$0040 and its companion can only be speculation. It could be, for example, that the recent starburst in the host galaxy is due to the merger of a second companion and is unrelated to the one we can see; or the stellar populations in the host galaxy and the companion might actually be the same age, if the initial mass function in the companion had a sharp cutoff at about 2 solar masses.
Nevertheless, UN J1025$``$0040 is a key object for our attempts to understand the starburst—AGN connection. It is the only object for which we know that there have been recent major starbursts in both a QSO host galaxy and its companion, and for which we can compare the starburst ages of each. It is the clearest example of a “transition” object between a starburst galaxy and a classical QSO. The most important task now is to determine what kind of transition is relevant to such objects: is it an evolutionary transition, in the sense that the objects progress from starburst to QSO (e.g., Sanders et al. 1988); or is it simply an example of the range of properties such objects can have due to the range of physical conditions under which they are produced? The answer to this question depends on the relative times at which the starburst and the QSO activity are initiated and their relative luminosities as they age (Stockton 1999). The largest uncertainty remains our lack of knowledge of the luminosity history and lifetimes of QSOs. It is in this area that close studies of objects like UN J1025$``$0040 may be helpful. If we can assume (or, better, demonstrate) that the QSO activity is triggered roughly simultaneously with the peak of the starburst, then we can conclude that the QSO lifetime (either continuous or episodic) can be as long as $`4\times 10^8`$ years. If the QSO luminosity were roughly constant over this period, then at some earlier time, the starburst would have swamped the QSO flux, quite aside from any effect of dust. For example, when the starburst was $`50`$ Myr old, it would have been $`2.5`$ times more luminous over most of the optical, rising rapidly to $`6.5`$ times more luminous between 4000 Å and 3000 Å in the rest frame. It would have dominated the QSO emission at all wavelengths from at least the near-IR to close to the Lyman limit, and the QSO would have been detectable in the optical spectrum only from small peaks at the positions of the strongest emission lines.
This scenario would tend to support the evolutionary view of transition objects like UN J1025$``$0040 and 3C 48 (Canalizo & Stockton 2000a), but it depends on assumptions about the initiation and timescale of QSO activity. These assumptions can only be checked by examining post-starburst ages and nuclear activity in a sample of objects, spanning a range of ages.
This research was partially supported by NSF under grant AST95-29078. Data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the financial support of the W. M. Keck Foundation. This research has been supported in part by NSF under grant AST95-29078, and in part performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
# SupraNova Events from Spun–up Neutron Stars: an Explosion in Search of an Observation
## 1 Introduction
One of the key requirements on gamma ray burst (GRB) models is that they make contact with the fireball model (Rees and Mészáros 1992), which has proven so successful in predicting and interpreting the observed properties of GRB afterglows. In particular, this entails that a large explosion is to take place in a region with small baryon contamination: for $`E=10^{53}`$ erg, the baryon contamination must be at most $`E/\gamma c^2\approx 10^{-4}`$ M, for $`\gamma =300`$, the bulk Lorentz factor of the explosion. Vietri and Stella (1998) presented a model which could accomplish this, involving a supramassive neutron star (SMNS), i.e., a neutron star with a larger baryon number than any normal neutron star because it derives part of its support against self–gravity from the centrifugal force; these supramassive stars cannot be slowed down to zero spin rate, because they are so massive that, as they lose angular momentum, they become unstable to black hole formation before reaching zero spin rate (Cook, Shapiro and Teukolsky 1994a,b, Salgado et al., 1994). The model consists of the implosion/explosion (I/E) of a supramassive neutron star which has lost through magnetic dipole radiation so much angular momentum that it must then collapse to a black hole; the rotational energy of the small amount of equatorial mass left behind, because it is already in near centrifugal equilibrium, provides the energy source that powers the burst.
In Paper I we proposed that SMNSs are formed in the SN explosion of a core with too much mass and angular momentum to end up as a normal neutron star. Though nothing yet stands against this possibility, and we are not reneging on it, we have now realized that a different channel exists: mass and angular momentum accretion from a companion in a Low Mass X–ray Binary (LMXB). The following discussion is relevant to the formation of MilliSecond Pulsars (MSPs), and will mimic arguments used in discussing the evolution of LMXBs into MSPs. We first discuss how the accretion of large amounts of mass and angular momentum may be realized in Nature and then apply this scenario to GRBs.
## 2 Mass and angular momentum accretion
The main obstacle to the accretion of large amounts of angular momentum onto a normal neutron star from a close companion via an accretion disk is the neutron star's magnetic field: the neutron star can rotate only so fast as to make the corotation and Alfvén radii coincide, lest a propeller phase set in, which would actually entail angular momentum loss (Ghosh and Lamb 1978, Illarionov and Sunyaev 1975). The coincidence of these two radii leads to an equilibrium period, $`P_{eq}=1.3(B/10^{12}\mathrm{G})^{6/7}(\dot{M}/\dot{M}_{Edd})^{-3/7}`$ s (Ghosh and Lamb 1992), which clearly shows that $`B`$ must decrease before significant spin–up can occur.
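The strength of this constraint is easy to see numerically; the sketch below simply evaluates the equilibrium-period scaling quoted above (with the $`-3/7`$ dependence on the accretion rate) for a few field strengths.

```python
# Equilibrium spin period P_eq = 1.3 (B/1e12 G)^(6/7) (Mdot/Mdot_Edd)^(-3/7) s,
# evaluated at the Eddington accretion rate for a few field strengths.
def p_eq_seconds(B_gauss, mdot_over_edd=1.0):
    return 1.3 * (B_gauss / 1e12) ** (6.0 / 7.0) * mdot_over_edd ** (-3.0 / 7.0)

for B in (1e12, 1e10, 1e8):
    print(f"B = {B:.0e} G  ->  P_eq = {1e3 * p_eq_seconds(B):8.3f} ms")
```

Only after the field has decayed to about $`10^8`$ G does the equilibrium period drop below a few milliseconds, which is the point made above.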
The strongest constraints on field decay in NSs come from LMXBs and MSPs. In only one LMXB, SAX J1808.4-3658, has a coherent 2.5 ms signal been detected in the persistent X-ray emission, providing direct evidence for the presence of a small magnetosphere; the inferred magnetic field is in the $`10^8`$–$`10^9`$ G range (Psaltis and Chakrabarty 1998). All other LMXBs have undetectably small coherent pulsations in their persistent emission, if any. Yet spin periods in the 2-4 ms range have been deduced for about ten LMXBs from the X-ray flux oscillations that are present during type I bursts emitted by these sources (cf. van der Klis 1998). Spinup through accretion can have occurred in these neutron stars only if their magnetic field is lower than $`10^9`$ G. Further evidence that the field might have decayed to dynamical insignificance derives from the modelling of the kHz Quasi Periodic Oscillations (QPOs), a common phenomenon observed in LMXBs. In the sonic point model (Miller, Lamb and Psaltis 1998), the Alfvén radius is located at a radius corresponding to a Keplerian frequency of $`350`$ Hz, corresponding to a magnetic field of $`8\times 10^8`$ G: this already bears witness to a thousand–fold reduction of the magnetic field below that of a typical newborn pulsar. A better model explains QPOs in terms of the fundamental frequencies of test particle motions in the general–relativistic potential well in the vicinity of the neutron star (Stella and Vietri 1999). The model is capable of explaining the observed relation between the peak QPO frequency and its lower frequency counterpart over three orders of magnitude in peak QPO frequency and across several distinct classes of sources, including candidate black holes and LMXBs (Stella, Vietri and Morsink 1999), implying that the magnetic field has already been reduced to $`2\times 10^8`$ G during the LMXB phase.
A different argument involves a handful of MSPs with observed magnetic fields of $`2\times 10^8`$ G or less, of which there are currently about a dozen, including the lowest fields ever measured, $`7\times 10^7`$ G in $`J2229+2643`$ and $`J2317+1439`$ (Camilo, Nice and Taylor 1996). For these small fields, the Alfvén radius for disk accretion (Ghosh and Lamb 1978, 1992) is smaller than $`12`$ km, which, according to Cook, Shapiro and Teukolsky (1994b), is larger than the radius of the innermost stable orbit for the softest equations of state for a neutron star with $`M=1.4`$ M (see Table 1). It should also be noticed that it has been argued cogently (Arons 1993) that the magnetic field in these objects is dominated by the dipole component, with a negligible contribution from higher multipoles. Altogether, this means that we have already observed the result of accretion from a companion pushing the magnetic field to dynamical irrelevance (or, at least, very close to it) sometime during the mass exchange process.
Detailed models are required to establish the exact history of a neutron star's mass, angular momentum and magnetic field, but unfortunately these computations are currently fraught with uncertainties: where is the $`B`$ field located, in the core or in the crust? And what is an appropriate model for the field suffocation? Population synthesis studies of this phenomenon (Possenti et al., 1999) have focused on two representative equations of state, and modeled the decay of the magnetic field in two limiting cases, imposing at the crust–core boundary either complete field expulsion by the superconducting core, or advection and freezing in a very highly conducting transition shell. The main result lies in the establishment of the existence of a tail in the rotation period distribution extending well below the shortest period observed so far ($`P=1.558`$ ms), with only a moderate dependence on the field suppression mechanism. For the softest equation of state the period distribution is still increasing at the shortest value before the onset of mass shedding, where Possenti et al. stopped their computations, while for the stiffest one the period distribution has a wide maximum around $`P=2`$–$`4`$ ms, and a tail extending below this value. The fraction of objects with $`P<1.558`$ ms is $`1\%`$ and $`10\%`$ for the stiff and soft equation of state, respectively, all ending up with very small magnetic fields, $`10^8`$ G.
Though the accreted mass is larger when account is taken of the need to suppress the magnetic field than when the magnetic field is neglected, the difference is not very large (Burderi et al., 1999). So, in order to appraise whether the neutron stars thus formed may be supramassive or not, we simply consider Table 1, from Cook, Shapiro and Teukolsky (1994b). It shows that the total amount of mass that needs to be accreted from a companion in order to reach the supramassive stage at the initial point of mass shedding depends strongly upon the equation of state. For equation of state C, the neutron star collapses to a black hole even before reaching mass shedding. The soft EoSs (A, D, E, KC) have become supramassive; the intermediate EoSs (C, M, UT, FPS) are within $`<0.1`$ M of doing so, and will cross the threshold if accretion continues after the mass shedding point is reached (see below). The total amounts of mass required to become supramassive ($`4/3`$ of the difference in mass at infinity, Phinney and Kulkarni 1994, corresponding to a further $`0.5`$ M) are so modest that it seems likely that even the models based upon EoSs AU and UU will reach this stage, provided a donor of sufficient mass is found: this too will be discussed below. EoSs L and N are hopeless: the total amount of mass to be accreted corresponds to $`2`$ M. If either of these EoSs were correct, there would be no way to form SMNSs via accretion from a companion in a LMXB.
Both Cook, Shapiro and Teukolsky (1994b) and Possenti et al.(1999) halted their computations when the mass shedding rotation rate is reached, but there is nothing magic about this moment. Instead, Popham and Narayan (1991) and Paczynski (1991) argued that accretion continues unimpeded, in Newtonian stars plus disks configurations, with stars remaining close to the breakup angular speed, while mass and total angular momentum increase. Other reasonable possibilities may contribute to prolong mass accretion: the reduction of angular momentum through gravitational wave losses (propitiated by the growth of a small stellar eccentricity) or the setup of a spiral shock wave reducing the angular momentum of incoming disk material, with the outward transport of angular momentum. In any case, several avenues are possible which would keep the neutron star marginally inside the mass shedding limit. For this reason, it seems nearly certain that the intermediate EoSs (C, M, UT, FPS) which are only $`0.1`$ M away from being supramassive, will reach this stage as mass accretion continues.
Since all soft and intermediate equations of state only require $`0.5`$ M to become supramassive, their companion star may be any star with mass of order $`1`$ M, exactly as discussed in the normal recycling model for MSPs. EoSs AU and UT, instead, require in total $`1`$–$`1.1`$ M to become supramassive. At first sight, this requirement might seem insurmountable: thermal stability in the mass exchange process through Roche lobe overflow (Webbink et al. 1983) requires that the companion of the NS have a mass below $`5/6`$ of the neutron star's, i.e., $`1.17`$ M for an initial NS mass of $`1.4`$ M. Since the smallest He–core that may be left behind is that of a star which took a full Hubble time to evolve off the main sequence, $`0.16`$ M, this leaves a maximum transferable mass of $`1.01`$ M, less than required for either AU or UT. However, this is incorrect: the famous requirement of $`5/6`$ths only occurs because Webbink et al. (1983) considered Paczynski's (1967) approximation for the Roche lobe radius: using instead Eggleton's (1983) formula, this requirement disappears. Webbink et al. (1983, Eq. 15) show that the mass transfer rate is $`\dot{M}_1\propto 1/(d\mathrm{ln}R_L/d\mathrm{ln}M_1)`$, where $`R_L`$ is the Roche lobe radius, and they argue that thermal stability in the process requires $`X\equiv d\mathrm{ln}R_L/d\mathrm{ln}M_1>0`$. Using Eggleton's formula $`R_L/a=0.49/(0.6+\mathrm{ln}(1+q^{1/3})/q^{2/3})`$, where $`a`$ is the distance between the two stars and $`q\equiv M_1/M_2`$ is the mass ratio, we find, under the hypothesis of conservative mass transfer,
$$X=\frac{(1+q)(2(1+q^{1/3})\mathrm{ln}(1+q^{1/3})q^{1/3})}{(1+q^{1/3})(1.8q^{2/3}+3\mathrm{ln}(1+q^{1/3}))}>0$$
(1)
for every $`q`$! Also, dynamical stability exists provided the donor mass is $`<2`$ M (Rappaport et al., 1995). So we may consider as a possible companion for neutron stars with EoSs AU and UT, sub/giants of mass $`2`$ M, where mass transfer is pushed forth by donor radius expansion, in complete analogy with the model of Webbink et al., 1983, except for donor mass. From Fig. 8b of Verbunt (1993), we see that a giant or subgiant of nearly solar metallicity, of, say, $`1.7`$ M manages to transfer at sub–Eddington rates $`1.4`$ M to the neutron star, provided mass transfer begins when the giant core is $`0.2`$ M; mass transfer will then leave behind a small ($`0.3`$ M), nearly inert He nucleus, with final period in the range of $`0.3`$ d. We thus see that these systems provide attractive progenitors for supramassive neutron stars, even in the case in which the applicable EoS is either AU or UT, provided of course mass accretion is close to conservative, an implicit assumption we made throughout, and that mass can be accreted in sufficient quantities.
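As an illustration of the claim that $`X>0`$ for every $`q`$, the short Python sketch below (ours, not part of the original analysis) evaluates the right-hand side of Eq. (1) on a wide grid of mass ratios and checks that it stays positive; as a consistency check it also verifies numerically that the printed expression agrees with $`(1+q)\,d\mathrm{ln}(R_L/a)/d\mathrm{ln}q`$ built directly from Eggleton’s formula.

```python
import numpy as np

def eggleton_roche_fraction(q):
    """Eggleton (1983) approximation to R_L/a for mass ratio q = M1/M2."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13**2 / (0.6 * q13**2 + np.log1p(q13))

def X_eq1(q):
    """Right-hand side of Eq. (1)."""
    q13 = q ** (1.0 / 3.0)
    num = (1.0 + q) * (2.0 * (1.0 + q13) * np.log1p(q13) - q13)
    den = (1.0 + q13) * (1.8 * q13**2 + 3.0 * np.log1p(q13))
    return num / den

q = np.logspace(-3, 3, 2001)                    # mass ratios from 1e-3 to 1e3
X = X_eq1(q)
print("minimum of X(q) on the grid: %.4f at q = %.3g" % (X.min(), q[np.argmin(X)]))
assert np.all(X > 0.0)                          # the claim "X > 0 for every q"

# consistency check: Eq. (1) equals (1+q) d ln(R_L/a)/d ln q for Eggleton's formula
dlnf_dlnq = np.gradient(np.log(eggleton_roche_fraction(q)), np.log(q))
print("max |X - (1+q) dln(R_L/a)/dlnq| = %.2e" % np.max(np.abs(X - (1.0 + q) * dlnf_dlnq)))
```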
Recent studies of binary pulsar masses (Thorsett and Chakrabarty 1999) seem to argue against significant mass accretion, but it should be noticed that, by investigating millisecond pulsars with periods exceeding $`2ms`$, the authors are investigating objects for which we know a priori that little mass need have been accreted, since their periods are long compared with SMNSs’. We may expect different results when pulsars are chosen otherwise: a recent redetermination of the mass of Cyg X-2 finds $`M=(1.8\pm 0.2)M_{}`$ (Orosz and Kuulkers 1999), departing from the narrow range of Thorsett and Chakrabarty.
The lowest magnetic field for the formation of supramassive neutron stars may be lower than the empirical value ($`2\times 10^8`$ G) mentioned above (Possenti et al. 1999), because when mass accretion from the companion begins to taper off, or alternatively if mass accretion is intermittent, the Alfvén radius (which scales as $`\dot{M}^{-2/7}`$) may expand beyond the corotation radius: the neutron star then goes through a new propeller phase which slows its rotation. The overall effect is not large, so that we shall consider in the following a maximum magnetic field $`q\times 10^8`$ G, with $`q`$ of order unity.
## 3 Further evolution
Supramassive neutron stars are unstable to collapse to a black hole when angular momentum losses reduce the angular momentum to about half of its initial value; furthermore, these stars are peculiar in that evolution at constant baryon number, but decreasing total angular momentum, makes them spin up, rather than down; all of this is especially evident in Figs. 7, 10, 13, and 16 of Salgado et al., 1994. Magnetic dipole losses cause a net torque which spins down the neutron star in a time (Vietri and Stella 1998) $`t_{sd}=5\times 10^9\,\mathrm{yr}\,(10^8\,\mathrm{G}/B)^2`$. This time–scale is not strongly dependent upon EoS, but depends strongly upon whether the model is only marginally supramassive, or close to the absolute maximum mass (rotating or not) for the given EoS, so that it may be considerably shorter under many circumstances. Thus, a time $`t_{sd}`$ after becoming supramassive, the neutron star will collapse to a black hole. This time is reasonably long when compared with typical mass accretion time–scales, which, as discussed above, are typically determined by sub/giant nuclear evolution timescales. Thus mass transfer will have long since ceased, and the immediate SMNS surroundings will be reasonably baryon free. The companion star, in the meantime, will have settled down as a low–luminosity, low–mass white dwarf, which is not expected to pollute the environment either. Furthermore, we can gauge the baryon–cleanliness of the SMNS surroundings at large if we assume that MSPs are born through the same chain of events, except less extreme, for then we know the Galactic distribution of MSPs. These are often located well outside the Galactic disk, within an ISM with typical densities well below $`n=1`$ cm<sup>-3</sup>, which makes the total baryon mass within, say, $`0.1`$ pc, less than $`10^{-5}`$ M, small enough to be consistent with the fireball model. We thus see that this version of the formation scenario also guarantees a baryon clean environment, exactly like the different scenario of Paper I.
The situation is clean even in the case in which the collapse occurs while mass transfer is still taking place. The total amount of baryons in the accretion disk is negligible: the disk crossing time is of order $`1`$ month, which, with mass transfer rates $`10^{-9}`$–$`10^{-8}M_{}\,\mathrm{yr}^{-1}`$, corresponds to much less than the maximum contamination value. The total amount of outlying mass from a wind is also rather small: for $`\dot{M}_w\sim 10^{-9}M_{}\,\mathrm{yr}^{-1}`$ and $`v_w\sim 30\,\mathrm{km}\,\mathrm{s}^{-1}`$, the total mass within, say, $`0.1`$ pc is $`3\times 10^{-6}M_{}`$, again negligible. The highly relativistic ejecta and $`\gamma `$ rays from the burst will hit the companion and form a shock well inside the star’s photosphere, so that local dissipation of the ejecta kinetic energy will lead to the companion’s inflating on the (long!) Kelvin–Helmholtz time–scale, and the non–thermal afterglow emission will not be contaminated by the re–radiated thermal component.
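For orientation, the wind figure quoted above follows from the two-line estimate $`M_{\mathrm{wind}}\approx \dot{M}_wr/v_w`$; the short sketch below (ours) simply evaluates it with the illustrative numbers used in the text.

```python
PC_KM = 3.086e13           # kilometres per parsec
YR_S  = 3.156e7            # seconds per year

Mdot_w = 1.0e-9            # assumed wind mass-loss rate  [Msun / yr]
v_w    = 30.0              # assumed wind speed           [km / s]
r_km   = 0.1 * PC_KM       # 0.1 pc expressed in km

crossing_time_yr = (r_km / v_w) / YR_S
M_wind = Mdot_w * crossing_time_yr
print("wind crossing time of 0.1 pc : %.0f yr" % crossing_time_yr)
print("wind mass within 0.1 pc      : %.1e Msun" % M_wind)   # ~3e-6 Msun, as quoted
```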
The mechanism for the energy release is the same as discussed in Paper I: once the neutron star is destabilized, the innermost regions will collapse promptly to a black hole, while the equatorial matter, which is close to centrifugal equilibrium, will just contract a little bit and begin orbiting the newly formed black hole. A necessary condition is that this equatorial material lies outside the innermost stable orbit. This can be checked from Table I of Cook, Shapiro and Teukolsky (1994b) who show that neutron stars which have reached the mass shedding regime have equatorial radii larger than the innermost stable orbit (see their column $`j`$), independent of EoS. In Paper I, we estimated the amount of matter left behind as $`0.1`$ M; this configuration is identical to the one hypothesized in most current models (Mészáros 1999), and the debris torus is massive enough to power any burst, especially in the presence of a moderate amount of beaming.
We now discuss the rate at which spun–up SMNSs collapse to black holes. Since the timescales involved are a fair fraction of the age of the Universe, and since star formation evolves strongly in the recent past (Madau et al., 1996), we have to consider cosmological evolution of the population. However, from Fig.1 of White and Ghosh (1998), it can be seen that the population of MSPs is roughly constant (within the accuracy of the present, order of magnitude estimates) over the redshift range $`0<z\lesssim 1`$ for most assumptions. There are currently an estimated $`5\times 10^4`$ MSPs in the disk of the Galaxy (Lorimer 1995, Phinney and Kulkarni 1994); assuming that there are as many systems in the bulge, that a fraction $`\beta `$ of these are SMNSs, and that the typical timescale for collapse to black hole is given by $`t_{sd}`$, the expected rate of collapses in the Milky Way is $`r=10^5\beta /t_{sd}=\beta /(5\times 10^4\mathrm{yr})`$, which is to be compared with the inferred rate of GRBs, 1 every $`3\times 10^7`$ yr in an $`L_{\ast}`$ galaxy like the Milky Way. Scaling to $`\beta =0.05`$, a value intermediate between the extremes of the simulations of Possenti et al., we find that the two rates agree for a beaming fraction $`\delta \mathrm{\Omega }/4\pi \approx 0.2(\beta /0.05)`$; this is consistent with the idea that these explosions do not require extreme beaming fractions, since the explosion need not wade its way through a massive stellar envelope, but immediately breaks free into a baryon clean environment.
This model makes an easily testable prediction, because the location of bursts inside their host galaxies is the same as that of LMXBs, which are distributed at distances from the Galactic plane $`\overline{z}\approx 1`$ kpc, most likely arising from kick velocities at the time of neutron star formation (van Paradijs and White 1995). A similar $`z`$ distribution is observed for MSPs, $`\overline{z}\approx 0.7`$ kpc, and moderate transverse speeds. Thus we would expect GRBs to cluster around galactic disks (contrary to the binary pulsar merger model, where at least some $`50\%`$ of all GRBs should be uncorrelated with the original birth galaxies), but not to correlate with star-forming regions (except for SMNSs which form directly during an SN event, as discussed in Paper I), contrary to all scenarios involving massive stars. Also, the redshift distribution of GRBs within this model should be flatter than the star formation distribution (again contrary to hypernovae), because the redshift distribution of the MSP population is rather flat (White and Ghosh 1998).
We acknowledge helpful discussions with G. Ghisellini, A. Possenti and L. Burderi.
## 1. Introduction
It remains an unsettled question in quantum field theory how to deal with bottomless systems, i.e. systems whose Euclidean actions are unbounded from below. Important and familiar examples include Einstein gravity. The fact that one cannot define the vacuum for such systems makes it impossible to quantize them in the usual ways. The standard path integral quantization procedure, for example, is not applicable, since the Feynman measure $`e^{-S}`$ with a bottomless action $`S`$ is not normalizable and the vacuum expectation values cannot be defined.
In the language of the stochastic quantization of Parisi and Wu , the difficulty with bottomless systems manifests itself as the absence of thermal equilibrium, where quantum theory is supposed to be realized for ordinary (normal) systems. In spite of this apparent drawback, however, it has been argued by several people that the potentiality inherent in yet to be studied stochastic dynamics may offer the possibility to properly deal with bottomless systems. Such attempts have already been started in Refs. and , though on the basis of quite different ideas. An analytical expression of the probability distribution obtained for 0-dim bottomless systems supports the idea expressed in Ref. , i.e. realization of the Feynman measure $`e^{-S}`$ in a finite space-time region at finite fictitious time by means of the kernel-degree of freedom. We must note that here the stochastic process is of the diffusion type and has no equilibrium for bottomless systems, since the desired distribution $`e^{-S}`$ is not normalizable and therefore no longer belongs to the spectrum of the Fokker–Planck operator $`H`$; every eigenstate of $`H`$ necessarily decays away in the large-time limit. Every quantity measured in this process is thus dependent on the initial conditions and is dependent on a fictitious time, in general. This is a severe problem if one tries to extract from this kind of treatment physically meaningful and sensible quantities, which should, of course, be independent of the details of the stochastic dynamics.
The purpose of this paper is thus first to examine the dependence of the probability distribution on the initial conditions and to clarify the behavior of the correlation functions in this framework. It is observed that at large fictitious times the correlation functions diffuse quite universally, irrespectively of the initial distribution. This suggests the possibility of extracting such a quantity that does not depend on the initial conditions at large times. We then propose a method that enables us to extract it from the diffusive stochastic process.
## 2. Stochastic quantization with bottomless systems
Let us first briefly review the stochastic quantization of Parisi and Wu . In stochastically quantizing the system with a Euclidean action $`S[\varphi ]`$, one sets up the Langevin equation
$$\frac{\partial }{\partial t}\varphi (x,t)=-\frac{\delta S[\varphi ]}{\delta \varphi (x,t)}+\eta (x,t),$$
(2.1)
which governs the stochastic dynamics of $`\varphi `$ with respect to the fictitious time $`t`$. Here $`\eta `$ is the Gaussian white noise characterized by the statistical properties
$$\langle \eta (x,t)\rangle =0,\qquad \langle \eta (x,t)\eta (x^{\prime },t^{\prime })\rangle =2\delta ^D(x-x^{\prime })\delta (t-t^{\prime }),\qquad \mathrm{etc}.$$
(2.2)
The quantum theory has been shown to be realized in the thermal equilibrium limit $`t\mathrm{}`$. One can equivalently work with the Fokker–Planck equation
$$\frac{\partial }{\partial t}P[\varphi ;t]=H[\varphi ]P[\varphi ;t],$$
(2.3)
with
$$H[\varphi ]=\int d^Dx\frac{\delta }{\delta \varphi (x)}\left(\frac{\delta }{\delta \varphi (x)}+\frac{\delta S[\varphi ]}{\delta \varphi (x)}\right),$$
(2.4)
where $`P[\varphi ;t]`$ is the probability distribution of $`\varphi `$ at time $`t`$. It is easy to show that the Fokker–Planck operator $`H`$ is negative semi-definite for any $`S`$ and that the eigenfunctional of $`H`$ belonging to the highest zero eigenvalue is $`e^{-S}`$, which ensures the relaxation of $`P`$ to the distribution $`e^{-S}`$, irrespective of the choice of the initial distribution, provided that the spectrum includes the discrete zero. Quantum field theory is thus given in the thermal equilibrium limit of a hypothetical stochastic process.
This is, however, not the case with bottomless systems, since $`e^{-S}`$ is not normalizable and hence does not belong to the spectrum of the Fokker–Planck operator $`H`$. Eigenvalues are negative definite, and the hypothetical stochastic process has no thermal equilibrium limit (i.e., it is a diffusion process).
## 3. Attempts to deal with bottomless systems
Although the naive application of stochastic quantization to bottomless systems does not work, some attempts to deal with them have been reported . Greensite and Halpern assumed that the meaningful distribution function in a diffusive stochastic process for bottomless systems is the highest normalizable eigenstate of the Fokker–Planck operator. That is, they gave up using the Feynman measure $`e^{-S}`$ to evaluate expectation values, since $`e^{-S}`$, which is not normalizable for the bottomless action $`S`$, cannot belong to the spectrum of the Fokker–Planck operator. Instead, they proposed to utilize its true ground state: the Feynman measure $`e^{-S}`$ was abandoned. Since the true ground state is not a stationary state and it finally decays away at large times, they had to devise a way to extract it from the diffusive stochastic process.
On the other hand, Tanaka et al. pursued the possibility of producing the distribution $`e^{-S}`$ even for a bottomless action $`S`$ by making use of the Langevin equation
$$\frac{\partial }{\partial t}\varphi (x,t)=-K[\varphi ]\frac{\delta S[\varphi ]}{\delta \varphi (x,t)}+\frac{\delta K[\varphi ]}{\delta \varphi (x,t)}+K^{1/2}[\varphi ]\eta (x,t)$$
(3.1)
with a positive kernel functional $`K`$ . Here and in what follows, stochastic differential equations are of the Ito-type . Note that the corresponding Fokker–Planck operator is given by
$$H[\varphi ]=\int d^Dx\frac{\delta }{\delta \varphi (x)}K[\varphi ]\left(\frac{\delta }{\delta \varphi (x)}+\frac{\delta S[\varphi ]}{\delta \varphi (x)}\right),$$
(3.2)
which has negative semi-definite eigenvalues for any positive kernel $`K`$, and that the thermal equilibrium distribution could again be given by $`e^{-S}`$ only if it is normalizable. They expected that an appropriate choice of the positive kernel $`K`$ may enable stochastic variables to be confined in a finite region and that the desired distribution $`e^{-S}`$ could be reproduced there. Their numerical simulation of the Langevin equation (3.1) with a specific choice of the kernel $`K`$ for simple 0-dim models actually seems to support their expectation.
In order to see the actual stochastic dynamics more clearly and in detail, let us restrict ourselves to 0-dim cases. The Langevin equation now reads
$$\dot{x}=-K(x)S^{\prime }(x)+K^{\prime }(x)+K^{1/2}(x)\eta ,$$
(3.3)
and the corresponding Fokker–Planck equation is given by
$$\dot{P}(x,t)=\frac{\partial }{\partial x}K(x)\left(\frac{\partial }{\partial x}+S^{\prime }(x)\right)P(x,t).$$
(3.4)
Here dots and primes denote differentiation with respect to the fictitious time $`t`$ and $`x`$, respectively. It is shown in Ref. that the Fokker–Planck equation (3.4) can be solved analytically with an appropriate choice of the kernel function $`K`$ for any 0-dim action $`S`$ which is unbounded from below for $`x\to \pm \infty `$. The choice of the kernel $`K`$
$$K(x)=e^{2S(x)}$$
(3.5)
yields the Green function for the Fokker–Planck equation (3.4)
$$P(x,t;x_0)=e^{-S(x)}\frac{1}{\sqrt{4\pi t}}\mathrm{exp}\left(-\frac{f^2(x)}{4t}\right),\qquad f(x)=\int _{x_0}^{x}dy\,e^{-S(y)}.$$
(3.6)
This satisfies the normalization condition $`\int _{-\infty }^{\infty }dx\,P(x,t;x_0)=1`$ and the initial condition $`P(x,0;x_0)=\delta (x-x_0)`$. The choice of the kernel (3.5) provides the Langevin equation (3.3)
$$\dot{x}=e^{2S(x)}S^{\prime }(x)+e^{S(x)}\eta $$
(3.7)
with the desired drift force $`e^{2S(x)}S^{\prime }(x)`$ which acts as a restoring force in bottomless regions (that is, for large $`|x|`$ with $`xS^{\prime }(x)<0`$).
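As a concrete illustration (ours, not contained in the original work), the following Python sketch integrates the Langevin equation (3.7) with an Ito (Euler–Maruyama) step for the quartic bottomless action used below, $`S(x)=m^2x^2/2-\lambda x^4/4`$, and compares the resulting histogram with the analytic Green function (3.6); the parameter values and the noise discretization $`\sqrt{2\mathrm{\Delta }t}`$ (appropriate for the normalization (2.2)) are chosen purely for illustration.

```python
import numpy as np

m2, lam = 2.0, 1.0                                  # S0 = m^4/(4 lambda) = 1, a = sqrt(m2/lam)
S  = lambda x: 0.5 * m2 * x**2 - 0.25 * lam * x**4
Sp = lambda x: m2 * x - lam * x**3

rng = np.random.default_rng(0)
a = np.sqrt(m2 / lam)
x0, t_end, dt, n_walkers = 1.5 * a, 1.0, 1.0e-4, 20000
x = np.full(n_walkers, x0)
for _ in range(int(round(t_end / dt))):
    drift = np.exp(2.0 * S(x)) * Sp(x)              # e^{2S} S', cf. Eq. (3.7)
    noise = np.exp(S(x)) * np.sqrt(2.0 * dt) * rng.standard_normal(n_walkers)
    x = x + drift * dt + noise                      # Ito (Euler-Maruyama) step

# analytic Green function (3.6): f(x) = int_{x0}^{x} e^{-S(y)} dy by trapezoidal quadrature
xg = np.linspace(-3.0, 3.0, 2001)
emS = np.exp(-S(xg))
f = np.concatenate(([0.0], np.cumsum(0.5 * (emS[1:] + emS[:-1]) * np.diff(xg))))
f -= np.interp(x0, xg, f)                           # shift so that f(x0) = 0
P = emS * np.exp(-f**2 / (4.0 * t_end)) / np.sqrt(4.0 * np.pi * t_end)

hist, edges = np.histogram(x, bins=60, range=(-3.0, 3.0), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print("max |histogram - analytic P| =", np.max(np.abs(hist - np.interp(centers, xg, P))))
```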
One can confirm from Eq. (3.6) that the desired distribution $`e^{-S}`$ is indeed realized, as Tanaka et al. expected. Consider the domain given by
$$D_t=\{x|f^2(x)<2\gamma t\}$$
(3.8)
at a given time $`t`$ with $`\gamma `$ a real constant of order unity. Note that $`2t`$ is the variance of the distribution $`P`$ as a function of $`f(x)`$, and therefore, outside this domain $`D_t`$, $`P`$ is considered to be vanishingly small owing to the Gaussian factor in Eq. (3.6). Within $`D_t`$, on the other hand, this factor is considered to be of order unity and we have the approximate distribution
$$P(x,t;x_0)\simeq \frac{1}{Z_t}e^{-S(x)}\qquad \mathrm{for}\;x\in D_t,$$
(3.9)
where $`Z_t\equiv \int _{D_t}dx\,e^{-S(x)}=\int _{D_t}df=\sqrt{8\gamma t}`$. The desired distribution $`e^{-S}`$ is seen to be realized in $`D_t`$ with the appropriate normalization factor $`Z_t`$. In other words, the stochastic variable $`x`$ is considered to be distributed according to $`P\simeq e^{-S}/Z_t`$, exclusively in $`D_t`$ at time $`t`$.
## 4. Temporal behavior of the probability distribution
In order to confirm the above argument and for illustration, we consider a typical 0-dim bottomless system given by $`S(x)=m^2x^2/2-\lambda x^4/4`$ ($`\lambda >0`$) and directly study the behavior of the probability distribution $`P`$ as a function of $`t`$. Figure 1 displays its temporal behavior, starting from the initial (ideally delta-shaped) distribution $`P(x,0;x_0)=\delta (x-x_0)`$, where $`x_0=1.5a`$ with $`a=\sqrt{m^2/\lambda }`$ being the position of a maximum of the action. Note that although the starting point of the stochastic variable $`x=x_0`$ is in the bottomless regions (i.e. $`|x|>a`$), the diffusion is well controlled (i.e., $`P`$ remains in a finite region even at very large $`t`$) owing to the restoring force resulting from the kernel (3.5). Furthermore, we can see in Fig. 2, where $`\stackrel{~}{S}(x,t;x_0)=-\mathrm{ln}[\sqrt{4\pi t}P(x,t;x_0)]`$ \[see Eq. (3.6)\] is plotted together with $`S`$, that the distribution $`P(x,t;x_0)`$ is well approximated as $`e^{-S}`$ within the domain where the stochastic variable is exclusively distributed. This is in accordance with the expectation discussed in the previous paragraph.
Even though the desired Feynman measure $`e^{-S}`$ seems to be realized in the course of the stochastic process, the process is of the diffusion type and has no equilibrium, as is explicitly seen in the analytical expression (3.6). The distribution $`P`$ decays as time increases, and expectation values over $`P`$ have no thermal equilibrium limits and may depend in general on the choice of the initial conditions. Figure 3 shows the temporal behavior of $`\langle x^2\rangle _t=\int _{-\infty }^{\infty }dx\,x^2P(x,t;x_0)`$ for the same system as above but with different initial data. This quantity increases indefinitely and no thermal equilibrium limit exists. Note that possible errors that arise in the course of numerical evaluation of the integrations are very small, typically of order $`10^{-6}`$.
It is, however, worth pointing out that the same figure (Fig. 3) also exhibits a universal behavior of the expectation values. After the initial transient time, $`\langle x^2\rangle _t`$ turns out to approach a unique curve, albeit monotonically increasing with $`t`$, irrespective of the choice of initial conditions. This may imply the possibility of extracting “physics” underlying bottomless systems through the diffusive stochastic process.
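The behavior just described is easy to reproduce by direct quadrature of Eq. (3.6); the sketch below (ours, with illustrative parameters) evaluates $`\langle x^2\rangle _t`$ for three initial positions and shows both the unbounded growth and the merging onto a common curve at large $`t`$.

```python
import numpy as np

m2, lam = 2.0, 1.0                                  # S0 = m^4/(4 lambda) = 1
S = lambda x: 0.5 * m2 * x**2 - 0.25 * lam * x**4

xg = np.linspace(-6.0, 6.0, 40001)
emS = np.exp(-S(xg))
F = np.concatenate(([0.0], np.cumsum(0.5 * (emS[1:] + emS[:-1]) * np.diff(xg))))

def x2_mean(t, x0):
    f = F - np.interp(x0, xg, F)                    # f(x) with f(x0) = 0
    P = emS * np.exp(-f**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    y = xg**2 * P
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xg)))   # trapezoidal <x^2>_t

a = np.sqrt(m2 / lam)
for t in (1.0, 10.0, 100.0, 1000.0):
    vals = ["%.3f" % x2_mean(t, c * a) for c in (0.0, 1.0, 1.5)]
    print("t = %7.1f   <x^2>_t for x0/a = 0, 1, 1.5 :" % t, vals)
```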
Figure 1: The distribution $`P(x,t;x_0)`$ for a bottomless system $`S(x)=m^2x^2/2-\lambda x^4/4`$ with $`S_0=m^4/4\lambda =1.0`$ at $`t=0.01`$, 1, 100, and $`10000`$ $`[a^2]`$. The action has maxima $`S_0`$ at $`x=\pm a=\pm \sqrt{m^2/\lambda }`$. The initial distribution is a delta-shaped function located at $`x_0=1.5a`$.
Figure 2: Temporal behavior of $`\stackrel{~}{S}(x,t;x_0)=-\mathrm{ln}[\sqrt{4\pi t}P(x,t;x_0)]`$ with the initial distribution $`\delta (x-1.5a)`$. The dashed curves denote the action $`S(x)=m^2x^2/2-\lambda x^4/4`$ itself with $`S_0=m^4/4\lambda =1.0`$.
Figure 3: Temporal behavior of the expectation value $`\langle x^2\rangle _t`$ for the system $`S(x)=m^2x^2/2-\lambda x^4/4`$ ($`S_0=m^4/4\lambda =1.0`$) with different initial positions $`x_0=0.0`$, 1.0, and 1.5 $`[a]`$.
## 5. Stationary quantities in the diffusive process
In order to address this apparently ambitious hope, let us examine the situation more carefully. Since we do not know which quantity is to be regarded physically meaningful for bottomless systems, it seems reasonable for us to assume that a “physical quantity” within the framework of stochastic quantization is one that has a long-time limit, irrespective of the initial conditions, for any systems including bottomless ones. This is in accordance with the original spirit of stochastic quantization and may be considered a minimal requirement. \[Of course, this is just one possibility, and our assumption is not meant to exclude other possibilities, such as that presented in Ref. .\] For diffusive stochastic processes, however, whether such a quantity exists or not is a highly nontrivial question, because we naturally anticipate that every quantity should have a trivial (either vanishing or diverging) large-time limit. The idea here is to discriminate different dynamics possibly coexisting in such diffusion processes. In other words, our task is to extract appropriate “physics” underneath the dominating diffusive process caused by the unboundedness of the classical action. The following analysis is a first step along this line of thought.
Observe that the domain $`D_t`$ defined by Eq. (3.8) expands monotonically as $`t`$ increases. This represents one of the main features of diffusion. We attempt to estimate this expansion quantitatively. Let $`x_+(t)`$ \[$`x_{-}(t)`$\] be the right (left) end of the domain $`D_t`$:
$$\int _{x_0}^{x_\pm (t)}dy\,e^{-S(y)}=\pm \sqrt{2\gamma t}.$$
(5.1)
\[See Eqs. (3.8) and (3.6).\] If we change the initial position $`x_0`$ by an infinitesimal amount $`\delta x_0`$, it yields the following changes in $`x_\pm (t)`$:
$$\delta x_\pm =e^{S(x_\pm )-S(x_0)}\delta x_0.$$
(5.2)
This shows that $`\delta x_\pm `$ become vanishingly small as time increases, since $`x_+`$ increases and $`x_{-}`$ decreases monotonically in time and therefore $`e^{S(x_\pm )}\to 0`$. The difference between the domains $`D_t`$ for different values of $`x_0`$ then disappears. This implies an approach to a unique distribution $`P\propto e^{-S}`$. This is the reason that we observe a unique curve for expectation values at large times in Fig. 3. It is important to note that the expansion of $`D_t`$ is well controlled in this stochastic process. In fact, we can easily see, from Eq. (5.1),
$$\dot{x}_\pm =\pm \sqrt{\frac{\gamma }{2t}}e^{S(x_\pm )}=\pm \frac{2\gamma }{Z_t}e^{S(x_\pm )},$$
(5.3)
which implies that the expansion rate diminishes exponentially at large $`t`$ (or large $`|x_\pm |`$).
This observation may enable us to extract a quantity that has a long-time limit. Let us examine the time development of the expectation value of an arbitrary function $`F(x)`$, $`\langle F(x)\rangle _t=\int _{-\infty }^{\infty }dx\,F(x)P(x,t;x_0)`$. Using the Fokker–Planck equation (3.4), we obtain its time derivative:
$`{\displaystyle \frac{d}{dt}}\langle F(x)\rangle _t={\displaystyle \int _{-\infty }^{\infty }}dx\,F(x){\displaystyle \frac{\partial }{\partial x}}e^{2S(x)}\left({\displaystyle \frac{\partial }{\partial x}}+S^{\prime }(x)\right)P(x,t;x_0)`$
$`={\displaystyle \int _{-\infty }^{\infty }}dx\,\left({\displaystyle \frac{\partial }{\partial x}}e^{S(x)}F^{\prime }(x)\right)e^{S(x)}P(x,t;x_0).`$ (5.4)
This can be rewritten, under the validity of the approximation (3.9), as
$$\frac{d}{dt}\langle F(x)\rangle _t\simeq \frac{1}{Z_t}\int _{D_t}dx\,\frac{\partial }{\partial x}e^{S(x)}F^{\prime }(x)=\frac{1}{Z_t}e^{S(x)}F^{\prime }(x)\Big |_{x_{-}(t)}^{x_+(t)}.$$
(5.5)
With the help of the equations for $`x_+(t)`$ and $`x_{-}(t)`$ in Eq. (5.3), it can be further reduced to
$$\frac{d}{dt}\langle F(x)\rangle _t\simeq \frac{1}{2\gamma }\left(\dot{x}_+F^{\prime }(x_+)+\dot{x}_{-}F^{\prime }(x_{-})\right)=\frac{1}{2\gamma }\left(\frac{d}{dt}F(x_+)+\frac{d}{dt}F(x_{-})\right).$$
(5.6)
One may thus expect that the quantity $`\langle \langle F(x)\rangle \rangle _t`$, defined by
$$\langle \langle F(x)\rangle \rangle _t=\frac{1}{2\gamma }[F(x_+(t))+F(x_{-}(t))]-\langle F(x)\rangle _t,$$
(5.7)
has a $`t`$-independent value for large $`t`$. Roughly speaking, $`\langle \langle F(x)\rangle \rangle _t`$ is a quantity reflecting the “quantum fluctuations” in the bottomless system, since the first two terms in the square parentheses represent “deterministic” contributions. \[Remember that the dynamics for $`x_\pm (t)`$ in Eq. (5.3) are deterministic.\]
Figure 4: Temporal behavior of $`\langle \langle x^2\rangle \rangle _t`$ for a bottomless system $`S(x)=m^2x^2/2-\lambda x^4/4`$ with $`S_0=m^4/4\lambda =1.0`$. Initial positions are chosen as $`x_0=0.0`$, 1.0, and 1.5 $`[a]`$. The parameter $`\gamma `$ is chosen to be $`0.995`$.
The above argument has been confirmed by the numerical calculation of $`\langle \langle x^2\rangle \rangle _t`$ for the system $`S(x)=m^2x^2/2-\lambda x^4/4`$ whose results are given in Fig. 4. (Here errors are typically of order $`10^{-4}`$.) These results show that after the initial transient time, the quantity $`\langle \langle x^2\rangle \rangle _t`$ approaches a certain value, which is constant over a wide range of large $`t`$, irrespective of the choice of the initial position $`x_0`$. Similar behavior is observed for other correlation functions.
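A minimal numerical sketch of this construction (ours, with illustrative parameters) is the following: $`x_\pm (t)`$ are obtained by inverting Eq. (5.1) with the cumulative integral of $`e^{-S}`$, and $`\langle \langle x^2\rangle \rangle _t`$ of Eq. (5.7) is formed from them together with $`\langle x^2\rangle _t`$ computed by quadrature of Eq. (3.6); the output should show $`\langle x^2\rangle _t`$ growing while $`\langle \langle x^2\rangle \rangle _t`$ levels off.

```python
import numpy as np

m2, lam, gamma = 2.0, 1.0, 0.995                       # gamma = 0.995 as in Fig. 4
S = lambda x: 0.5 * m2 * x**2 - 0.25 * lam * x**4
xg = np.linspace(-6.0, 6.0, 40001)
emS = np.exp(-S(xg))
F = np.concatenate(([0.0], np.cumsum(0.5 * (emS[1:] + emS[:-1]) * np.diff(xg))))

def double_bracket_x2(t, x0):
    f = F - np.interp(x0, xg, F)                       # f(x), monotonic, with f(x0) = 0
    P = emS * np.exp(-f**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    y = xg**2 * P
    x2 = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xg))  # <x^2>_t by quadrature
    xp = np.interp(+np.sqrt(2.0 * gamma * t), f, xg)   # invert Eq. (5.1) for x_+(t)
    xm = np.interp(-np.sqrt(2.0 * gamma * t), f, xg)   # ... and for x_-(t)
    return (xp**2 + xm**2) / (2.0 * gamma) - x2, x2

a = np.sqrt(m2 / lam)
for t in (10.0, 100.0, 1000.0, 10000.0):
    dd, x2 = double_bracket_x2(t, 1.5 * a)
    print("t = %8.1f   <x^2>_t = %7.3f   <<x^2>>_t = %7.3f" % (t, x2, dd))
```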
## 6. Discussion
The observation of a time-independent quantity in a diffusive (or diverging) stochastic process is surprising and worthy of note. We hope that it will lead to further insight into bottomless systems. The above quantity $`\langle \langle F(x)\rangle \rangle _{t\to \infty }`$ is a candidate for the time-independent quantity, representing a kind of quantum effect in such systems. It can be thought, as described above, to be a measure of the quantum fluctuations for bottomless systems.
We must remember, however, that the present analysis is largely dependent on our choice of the specific kernel function $`K(x)`$ (3.5). In particular, the asymptotic value of $`\langle \langle F(x)\rangle \rangle _t`$ possibly depends on the very choice of the kernel. Even though it seems plausible that for bottomless systems meaningful quantities are rather limited, and accordingly not all conceivable kernels can provide us with them, it is very important to clarify this point, e.g. by determining the kernel dependence of $`\langle \langle F(x)\rangle \rangle _{t\to \infty }`$. This is an important issue to be studied, though a different choice of the kernel no longer ensures the solvability of the Fokker–Planck equation and may entail an additional numerical study in order to obtain the probability distribution $`P`$. Other points worth further investigation include (a) the relevance (if any) of $`\langle \langle F(x)\rangle \rangle _t`$ to observable quantities, (b) formulation on the basis of the corresponding Langevin equation (3.7), (c) comparison with other proposals , and (d) extension to higher dimensional systems. Work is in progress on these points.
## Acknowledgements
The authors acknowledge useful and helpful discussions with Professor I. Ohba and Dr. Y. Yamanaka. They also thank Helmuth Hüffel for critical reading of the manuscript and discussions and Professor M. Namiki for enlightening suggestions. This work is supported by a Grant-in-Aid for JSPS Research Fellows.
# Scaling for vibrational modes of fractals tethered at the boundaries
## I Introduction
In this work we consider the vibrational spectrum associated with the scalar elasticity of fractals when their boundaries are progressively tethered to immobile, external anchors. By scalar elasticity, we mean the vibrational problem where the local displacement variable $`u_i(t)`$ at site $`i`$ is a scalar. We study the various signatures of the vibrational density of states and the localization properties of the normal modes under these conditions.
The spatial scale invariance of fractals and the absence of translational invariance due to the boundaries conspire to influence their vibrational spectra in fundamental ways. The now well-studied fracton spectrum, with the characteristic, fractional power law for the low-energy density of states, arises for the bulk fractals with free boundaries, while so-called fractino spectrum arises for objects which do not have fractal bulk but have fractal boundaries that are clamped. Here, we generalize these further to study bulk fractals whose fractal boundaries are progressively clamped or tethered to immobile anchors. The resulting vibrational spectrum is complex yet shows some simple scaling features common to many phenomena with long range order.
The problem of vibrations in a disordered system is in itself of physical interest. Granular systems , such as sand or snow piles or crops stored in silos, are among the systems of potential application. For example, sound propagation in a granular system had been modeled by Leibig using a scalar elastic network with a random distribution of spring constants and analyzed in terms of the normal modes. Boundary conditions such as tethering add much complexity to the problem which may be relevant to some physical and engineering applications. For example, random mixtures of soft and hard materials such as solid granules embedded in a polymer gel or colloidal particles and aggregates filling the pores or cracks in rocks may constitute such an application. Another potential interest might be due to the recently discovered anomalous behavior of the superfluid transition when aerogels are embedded in liquid <sup>4</sup>He or <sup>3</sup>He, where the aerogels may impose a boundary condition over a fractal boundary of the liquid helium.
Most previous works on the effect of clamped boundaries on the vibrational density of states of inhomogeneous systems dealt with deterministic inhomogeneities (such as in fractal drums ). In this case the bulk is nonfractal (Euclidean) and the boundary is an ordered fractal, and the effect is a correction term to the leading behavior, the latter remaining the same as for a homogeneous system as one approaches the asymptotic limit of large system sizes. We have a substantially different system where both the bulk and the tethered boundaries are statistical fractals, sometimes of different fractal dimensionality, where even the leading behavior is expected to be special to inhomogeneous systems.
The prototype of such a system is a critical percolation cluster . We have used site percolation clusters created near the critical percolation threshold on square and simple cubic lattices. We focus our attention on the low-energy regime where we expect the clamping effect of the fractal boundaries to be pronounced. Without tethering, these modes tend to have relatively large spatial extent, and thus are more sensitive to boundary tethering. In particular, we study the lowest energy mode ($`ϵ_1`$) and the peak ($`ϵ_p`$) that appears in the low-energy region of the vibrational density of states. The lowest energy mode is computationally much easier to obtain than the entire density of states and yet captures some of its essential features.
To study these systems, we exploit the mapping between scalar elasticity and diffusion as extended to the case of mapping between the vibration of tethered objects and diffusion with permanent traps. In this approach, we define a transition probability matrix $`𝐖`$ (an $`S\times S`$ matrix for a cluster of $`S`$ sites) where $`W_{ij}`$ is the hopping probability per step $`p_{ij}`$ from site $`j`$ to site $`i`$ in the corresponding diffusion problem. We set $`p_{ij}=1/z`$ for all pairs of nontethered sites $`i,j`$ where $`z`$ is the lattice coordination number. The diagonal elements $`W_{ii}=0`$ if site $`i`$ is tethered (representing the complete leakage of diffusion field or full tethering) while, if $`i`$ is not tethered, $`W_{ii}=1n_i/z`$ (representing conservation of diffusion field) where $`n_i`$ is the number of available neighbors for site $`i`$. Then the time evolution of the diffusion field $`P_i(t)`$ is given by
$$P_i(t+1)=\overline{\sum _j}\left[p_{ij}P_j(t)+(\delta _{ij}-p_{ji})P_i(t)\right]=\overline{\sum _j}W_{ij}P_j(t),$$
(1)
where the barred sum $`\overline{\sum _j}`$ includes the diagonal terms.
The eigenmodes of $`𝐖`$ are the normal modes of the tethered vibration problem and the eigenvalues $`\lambda `$ are related to the classical vibration frequency $`\omega `$ and energy $`ϵ\equiv \omega ^2`$ by
$$\lambda =1-\omega ^2\simeq \mathrm{exp}(-\omega ^2)$$
(2)
where the last approximation corresponds to the long time limit. We use this formulation in part to take advantage of the fact that the low-energy region of the spectrum corresponds to the region of the maximum eigenvalues of $`𝐖`$, which affords much easier numerical access. The numerical technique used is based on the work of Saad which implemented Arnoldi’s method. Preliminary results of this work were reported earlier ; that work introduced the mapping and presented preliminary numerical results only for the all-boundary-tethered case in two dimensions. In the current work, we have obtained improved data in both two and three dimensions as well as for hull tethering (see below).
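For readers who wish to reproduce the construction, the following Python sketch (ours, not the code used for the results reported here) builds $`𝐖`$ for the largest cluster of a site-percolation configuration, tethers a fraction $`f`$ of its boundary sites by treating them as perfect absorbers (removed from the matrix), and extracts the lowest-energy modes from the largest eigenvalues via $`ϵ\simeq -\mathrm{ln}\lambda `$, using scipy's Arnoldi/Lanczos routine; the boundary definition is simplified in that hull and internal boundary are not distinguished.

```python
import numpy as np
import scipy.sparse as sp
from scipy.ndimage import label
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(1)
L, p_c, f, z = 100, 0.5927, 0.5, 4           # lattice size, site occupation, tether fraction, coordination

occ = rng.random((L, L)) < p_c               # site-percolation configuration near threshold
labels, _ = label(occ)                       # nearest-neighbour cluster labelling
biggest = 1 + np.argmax(np.bincount(labels.ravel())[1:])
cluster = labels == biggest

# number of occupied nearest neighbours of every site (zero padding at the lattice edges)
padded = np.pad(cluster, 1)
nn_count = sum(np.roll(padded, s, axis=a)[1:-1, 1:-1].astype(int)
               for a in (0, 1) for s in (1, -1))

boundary = cluster & (nn_count < z)          # "all boundary": at least one missing neighbour
tether = boundary & (rng.random((L, L)) < f) # tether a random fraction f of boundary sites
keep = cluster & ~tether                     # mobile sites that stay in the matrix

idx = -np.ones((L, L), dtype=int)
idx[keep] = np.arange(int(keep.sum()))
ii, jj = np.nonzero(keep)

rows, cols, vals = [], [], []
for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    ni, nj = ii + di, jj + dj
    inside = (ni >= 0) & (ni < L) & (nj >= 0) & (nj < L)
    si, sj, ti, tj = ii[inside], jj[inside], ni[inside], nj[inside]
    ok = keep[ti, tj]
    rows.append(idx[ti[ok], tj[ok]])         # hop j -> i with probability 1/z
    cols.append(idx[si[ok], sj[ok]])
    vals.append(np.full(int(ok.sum()), 1.0 / z))
# diagonal: waiting probability 1 - n_i/z; hops onto tethered neighbours are lost (absorption)
rows.append(idx[ii, jj]); cols.append(idx[ii, jj])
vals.append(1.0 - nn_count[ii, jj] / z)

N = int(keep.sum())
W = sp.csr_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))), shape=(N, N))
lam = eigsh(W, k=5, which='LA', return_eigenvectors=False)
eps = -np.log(np.sort(lam)[::-1])            # eps ~ omega^2, cf. Eq. (2)
print("cluster size S =", int(cluster.sum()), "; tethered sites N =", int(tether.sum()))
print("five lowest mode energies:", np.round(eps, 5))
```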
## II Scaling of lowest energy mode for hull versus all boundary tethering
In a cluster with tethered boundaries the lowest energy mode $`ϵ_1`$ crucially depends on how many and which sites are tethered. Clearly, for an individual untethered mode, tethering of the sites where the mode has a large component greatly affects the mode. Statistically, an important classification of the boundary sites is between the internal and external boundaries (or hull) . For a fractal like the critical percolation cluster, they are both fractal objects with similar (but not identical) fractal dimensions; in two dimensions, the hull has a fractal dimension $`d_f^h=1.75`$ while the internal boundaries have a fractal dimension $`d_f^i=91/48`$ (same as the bulk fractal itself and also as the fractal dimension of all boundary $`d_f^b`$). However, the behavior of $`ϵ_1`$ (and the density of states) is unexpectedly different depending on if only (some of) the hull sites are tethered or (some of) both types of boundary sites (i.e., the hull and internal boundary) are tethered, as functions of the number $`N`$ of the tethered sites and the size $`S`$ of the cluster, the two main parameters in these situations.
In order to understand the interplay of the different length scales we write the scaling variable as the ratio of the relevant length scales for the two cases. The common length scale for both cases is the average distance $`l`$ between the tethered sites where
$$l\sim (S/N)^{1/d_f}.$$
(3)
We also extend the meaning of $`N`$ by $`N\equiv fN_T`$ where $`f`$ is the fraction of the $`N_T`$ boundary sites which are tethered: $`N_T=N_h`$ (number of hull sites) for hull tethering, $`N_T=N_a`$ (number of all boundary sites) for all boundary tethering. In particular, this means $`N`$ can be smaller than one if $`f`$ is sufficiently small, which indicates that many random samples in the ensemble will have no tethers.
The second relevant length scale $`L`$ for hull tethering for the critical percolation cluster is the average diameter of the cluster itself,
$$L_c\sim S^{1/d_f}$$
(4)
since the hull forms a connected set of external boundaries to whose interior the normal modes are confined. In particular, the spatial extent of the lowest energy mode is dictated by that of the interior sites at the scale of the cluster diameter $`L_c`$. See the illustration of these lengths in Fig.1.
On the other hand, in the case of all boundary tethering, the second relevant length $`L_b`$ is the average diameter of a blob in the cluster. As sites of both hull and internal boundary are tethered in all boundary tethering, a normal mode is confined to sites which are internal to both boundaries, i.e., in the blobs. The blobs in this context may be the areas of the cluster with essentially no internal holes (empty sites in the interior), which are connected together by links of less connectivity to form the overall cluster. See the illustration of these lengths in Fig.1.
Since one would expect these new blobs to go critical at the ordinary critical percolation threshold $`p_c`$, its diameter $`L_b`$ may be expected to follow a power law at $`p_c`$,
$$L_b\sim S^y$$
(5)
for an appropriate universal exponent $`y`$ in the scaling regime of large $`S`$. We note that both a precise identification of such blobs and the relation between $`L_b`$ and $`S`$ are still open questions .
Let us consider scaling for the hull tethering case first. In the limit of no tethering, the problem is reduced to that of so-called ants where the lowest energy mode is the stationary mode with $`ϵ=0`$ for any $`S`$. On the other hand, for asymptotically large $`S`$ and fixed $`L_c/l>0`$ (i.e., fixed $`N>0`$ and $`S\to \infty `$), the number of tethered sites $`N`$ becomes negligible compared to cluster size $`S`$. Thus, the lowest energy mode is arbitrarily close to stationarity and should behave essentially in the same way as the second lowest energy mode (lowest energy, nontrivial mode) of the nontethered case, i.e., as $`ϵ_1\sim S^{-d_w/d_f}`$ ($`d_w`$ is the walk dimension and $`d_f`$ is the fractal dimension of the cluster). Thus, the scaling form of $`ϵ_1`$ is expected to be
$$ϵ_1\sim S^{-d_w/d_f}F(L_c/l)=S^{-d_w/d_f}G(N).$$
(6)
Since $`ϵ_1\to 0`$ as $`N\to 0`$ with fixed $`S`$, we must have $`G(z)\to 0`$ as $`z\to 0`$. If we assume analyticity of $`G(z)`$ for small $`z`$, in the absence of obvious symmetry requirements which might remove the linear term, it seems reasonable to conclude $`G(z)\propto z`$ for small $`z`$.
It is possible to obtain the limit of $`z\to \infty `$ in such a way as to essentially remove the effect of tethering simultaneously. To this end, consider joining many mini-clusters with a given individual $`l_1`$ in such a way as to make the overall cluster a fractal with fractal dimension $`d_f`$. In this process the overall cluster length scale $`L_c`$ increases at a greater rate ($`L_c\sim S^{1/d_f}`$) than the overall $`l`$ ($`l\sim (S/N)^{1/d_f}`$) (since $`N`$ is also growing). Thus, as more mini-clusters are joined, the variable $`L_c/l`$ tends to $`\infty `$ while the tethered hull ($`\sim S^{d_f^h/d_f}`$) becomes a negligible part of the entire boundary which goes as $`S`$ (since $`d_f^h<d_f`$). In this case, with the diminished importance of the tethered hull, the behavior of $`ϵ_1`$ should converge toward the lowest nontrivial mode of the nontethered limit once again ($`ϵ_1\sim S^{-d_w/d_f}`$). Thus, in this limit of $`z=N\to \infty `$, the lowest eigenmode is confined entirely to the interior of the cluster, and the spatial extent and the energy of the mode are unaffected by further tethering of hull sites. This is equivalent to herding, which is a term we use to describe the saturation effect of the confinement of an eigenmode. Thus, we must have $`G(z)\to \mathrm{const}.`$ as $`N\to \infty `$.
Next we consider all boundary tethering. The no-tethering limit is of course the same as for the hull tethering case. As one tethers sites of all boundaries, the lowest-energy mode gets localized in blobs which are unaffected by tethering. Moreover, if the ratio $`L_b/l`$ is fixed at a nonzero value and $`S\to \infty `$, then each blob behaves like a cluster of its own with hull tethering. The expected scaling variable is then the number of tethered sites per blob $`NL_b^{d_f}/S`$ (which is equal to $`(L_b/l)^{d_f}`$ as expected). We thus propose a scaling form
$$ϵ_1\sim S^{-d_w/d_f}\overline{F}(L_b/l)=S^{-d_w/d_f}\overline{G}(N/S^x),$$
(7)
where $`x=1-yd_f`$.
For $`z\to 0`$, as for the hull tethering, we have $`\overline{G}(z)\to 0`$. However, the limit of $`z\to \infty `$ can be achieved by increasing the fraction of tethered boundary sites as the cluster size increases. The limit should then correspond to the so-called ideal chain with all the boundary sites tethered. Thus, in this limit, $`N\propto S`$ and $`ϵ_1\sim (\mathrm{ln}S)^{-2/d_0}`$ (where $`d_0`$ is the exponent describing the stretched exponential behavior of the density of states for ideal chains introduced in ). Numerically, $`d_0`$ is about 4 in two dimensions and about 6 in three dimensions for the critical percolation cluster. Thus, the scaling function $`\overline{G}(z)\sim z^{d_w/[d_f(1-x)]}(\mathrm{ln}z)^{-2/d_0}`$ as $`z\to \infty `$.
It is interesting to consider the connection of this behavior to the confinement of the lowest energy mode to the small, compact regions of the cluster in this case. These regions are not affected by tethering because of the absence of internal boundaries (holes). Their fractal dimension must be essentially equal to the Euclidean lattice dimension $`d`$. Thus, if we denote their average size by $`S_c`$, we may expect $`ϵ_1\sim S_c^{-2/d}`$ where 2 is the walk dimension on a compact cluster. In view of the relation $`ϵ_1\sim (\mathrm{ln}S)^{-2/d_0}`$ above, this would lead to $`S_c\sim (\mathrm{ln}S)^{d/d_0}\sim (\mathrm{ln}S)^{1/2}`$ in both two and three dimensions. This type of slow growth of $`S_c`$ is consistent with our direct observations.
Fig.2 and Fig.3 show the numerical results for the scaling behavior discussed above for the square lattice in two dimensions. Fig.2 is for the hull tethering case while Fig.3 is for all boundary tethering. Besides the scaling variables on the $`x`$ axis being different, the scaling functions exhibit dramatically different behavior. Though, in both cases, $`ϵ_1`$ (and thus the scaling functions) approach zero as $`z\to 0`$, the low $`z`$ behavior is linear for hull tethering while it seems to be faster than linear for all boundary tethering. Also while the hull tethering scaling function tends to a finite limit as $`z\to \infty `$ (see above discussion), that for all boundary tethering appears to grow unbounded as it should if $`x<1`$. Corresponding results for all boundary tethering for the simple cubic lattice are shown in Fig.4. Here reasonable data collapse is achieved for $`x=0.4`$.
We find, in particular, that hull tethering cannot change the behavior of the lowest energy mode, which remains the same as that of the untethered case ($`ϵ_1\sim S^{-d_w/d_f}`$). This is because a fully tethered hull confines the mode in the interior of the critical percolation cluster which, asymptotically for large clusters, scales with the same fractal dimension as the entire cluster itself. Thus, the lowest energy mode results in the vibration of the interior of the cluster independent of the external boundary. This inability to change the behavior of the lowest energy mode is analogous to the case of a Euclidean cluster as well as to the fractal drums (fractal external boundary and Euclidean interior).
In contrast, the tethering of all boundaries leads to confinement of the lowest energy vibrational mode in regions of the cluster which are essentially compact, altering its behavior from a power-law dependence on cluster size, $`ϵ_1\sim S^{-d_w/d_f}`$, to a slow logarithmic dependence, $`ϵ_1\sim (\mathrm{ln}S)^{-2/d_0}`$. Thus all boundary tethering conspires with the complex geometry of the critical percolation cluster (with pockets of compact regions, etc.) to dramatically alter the behavior of the lowest energy vibrational mode.
## III Scaling of the maximum in density of states
Let us now consider the vibrational density of states $`\rho (ϵ)`$ of the matrix $`𝐖`$. For nontethered limit, $`\rho (ϵ)`$ generally has a power law increase toward the lowest energy (stationary) mode . On the other hand, if all boundary sites are tethered, we recover the ideal chain result of a stretched exponential decrease toward the lowest energy mode $`ϵ_1`$ . Thus, when progressively more of all boundary sites are tethered, a crossover between the two limits occurs. For an intermediate fraction $`f`$ of tethered sites, we generally observe a maximum in the density of states as shown in Fig.5(a) for the square lattice. Starting from larger $`ϵ`$ and moving toward $`ϵ=0`$, $`\rho `$ increases initially because the nontethered modes for the range of $`ϵ`$ are too localized to be affected when tethers are added, while for much smaller $`ϵ`$ the tethers begin to drag down the density of states. (Remember that tethers correspond to traps and low $`ϵ`$ corresponds to the long time survival of a diffusing particle.) Thus a peak in $`\rho (ϵ)`$ occurs, say, at $`ϵ_p`$, which gradually increases from 0 for $`f=0`$ (ant limit) toward $`f=1`$ (ideal chain limit) as the tethering fraction $`f`$ increases.
The existence of the maximum in the density of states has implications on the possible resonant behavior in response to the external source of vibrational energy. See Leibig for a discussion of this point for the case of weakly disordered network of Hookian springs.
For the hull tethering case, the situation is similar for finite $`S`$. In the asymptotic large $`S`$ limit, however, hull tethering cannot affect the density of states in qualitative manner no matter how large the tethering fraction $`f`$ may be because the hull becomes increasingly a negligible fraction of all boundaries. This implies that, for large $`S`$, the region near $`ϵ=0`$ must always witness power-law increasing $`\rho (ϵ)`$. However, the finite $`S`$ effect often masks this asymptotic behavior so that for most sizes of clusters numerically generated and for most tether fraction $`f`$, we do observe a maximum in the density of states similar to the all boundary tethering case (see Fig.5(b)).
The crossover of $`ϵ_p`$ as the tether fraction $`f`$ is varied is then expected to obey the same type of scaling as for the lowest energy mode $`ϵ_1`$. For hull tethering,
$$ϵ_p\sim S^{-d_w/d_f}G_p(N),$$
(8)
while, for all boundary tethering,
$$ϵ_p\sim S^{-d_w/d_f}\overline{G}_p(N/S^x),$$
(9)
with the same value of $`x`$ as in Eq.(7).
These scaling laws have been tested on the same critical percolation clusters as for the lowest energy mode scaling. The numerical results, shown in Fig.6 for the hull case and Fig.7 for all boundary tethering, are in good agreement with the scaling laws above. The corresponding results for the simple cubic lattice in three dimensions are shown in Fig.8, again with reasonably good agreement when a choice of $`x=0.4`$ is used.
## IV Spatial extent of the lowest energy mode
The difference between hull and all boundary tethering is dramatic also in the spatial characters of the normal modes. We show in Fig.9 and Fig.10 the amplitude maps of typical lowest energy modes (corresponding to $`ϵ_1`$) for hull tethering and all boundary tethering, respectively. For hull tethering, starting from a low tethering fraction $`f`$ (or small $`N`$), increasing $`f`$ initially reduces the amplitude of vibration of the sites in the vicinity of the tethered hull sites. This causes a decrease in the wavelength and consequently an increase in $`ϵ_1`$ (also of the scaling function $`F(z)`$). However, after a certain number of hull sites are tethered, the amplitudes of the mode at all sites near the hull become attenuated and the region of large amplitudes becomes well confined to the interior of the cluster. Beyond this point, increasing $`N`$ does not further affect the spatial structure of the normal mode as the sites with appreciable amplitudes are already deeply in the interior of the cluster. This saturation behavior may be likened to herding of livestock into a safe, fenced haven, and thus we may call it herding of normal modes. Herding is reflected in the saturation behavior of $`F(z)`$ for $`z`$ tending to infinity. In the examples shown in Fig. 9, the participation ratios indicate that only about 1% of the sites contribute substantially to the mode for all shown values of the tethering fractions.
For all boundary tethering, increasing $`f`$ serves to progressively confine the (originally most extended, untethered) normal mode to sites which are interior to both the external and internal boundaries, i.e., in the blobs of the percolation cluster. Since typically most blobs are very small, this results in a much sharper decrease in the spatial extent of the normal mode, as seen in Fig.10. Even for relatively small $`f`$, the effect of confinement can be nearly complete in this case, and further tethering may force the mode to jump around to distant locations which may allow the largest spatial extent. This behavior may be likened to chasing wild animals in a hunting expedition, and thus we may call it hunting of normal modes. It is interesting to note that, while the largest modes without tethering are those of small $`ϵ`$, they are also the ones most affected by tethering (particularly all boundary tethering), and thus after some tethers have been put in place, these modes are not necessarily the spatially largest ones any longer. Indeed those modes shown in Fig.10 have main contributions only from less than 0.1% of the sites according to the values of the participation ratios.
The above observations on the spatial structure of normal modes under boundary tethering may have technological implications. For example, a soft glassy material may need to be clamped only at relatively few locations on its external boundary to fully confine its lowest energy normal mode to its interior via herding.
## V Conclusion
The present work reveals a number of interesting features of the elastic properties of a fractal with tethered boundaries. The vibrational modes and consequently the vibrational density of states for both hull and all boundary tethering are dictated by regions of clusters which are tether free. Despite the structural similarity between the hull and internal boundaries, the effect of tethering depends greatly on whether only the hull is tethered or all boundaries are tethered. Tethering of hull sites confines the low-energy vibrational modes in the interior of the cluster and since the interior of the cluster is itself a fractal with the fractal dimension same as that of the entire cluster, the leading behavior of the vibrational density of states for fully tethered hull must asymptotically be the same as that of the untethered cluster. On the other hand, for full tethering of all boundaries, the low-energy modes are confined in the compact regions of the cluster which, because of their small spatial extent, give rise to faster than exponential decrease in the density of states in the same limit.
Keeping in mind the fact that Euclidean clamped boundaries do not have an appreciable effect on the density of states, our results demonstrate the potential of tethering fractal boundaries to attenuate harmonic excitations. Also, the present problem is equivalent to that of diffusion in the presence of permanent traps, a fact that was taken advantage of for the numerical part of this work. Since the latter serves as a model for diffusion controlled reaction/absorption in the presence of immobile reactants/absorbents, there may also be applications in the areas of drug reaction/absorption, etc. Finally, because of the similar mathematical formulation of quantum mechanical localization and hopping transport problems , we expect that the same techniques will be useful in studying those problems and that some of the current results may have direct analogs in them.
# Local Magnetic Order vs. Superconductivity in a Layered Cuprate
## Abstract
We report on the phase diagram for charge-stripe order in La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub>, determined by neutron and x-ray scattering studies and resistivity measurements. From an analysis of the in-plane resistivity motivated by recent nuclear-quadrupole-resonance studies, we conclude that the transition temperature for local charge ordering decreases monotonically with $`x`$, and hence that local antiferromagnetic order is uniquely correlated with the anomalous depression of superconductivity at $`x\approx \frac{1}{8}`$. This result is consistent with theories in which superconductivity depends on the existence of charge-stripe correlations.
Superconductivity in the layered cuprates is induced by doping charge carriers into an antiferromagnetic insulator. The kinetic energy of the mobile carriers competes with the superexchange interaction between neighboring Cu spins . There is increasing evidence for the hole-doped cuprates that this competition drives a spatial segregation of holes which form antiphase domain walls between strips of antiferromagnetically correlated Cu spins . A major controversy surrounds the issue of whether the mesoscopic self-organization of charges and spins is a necessary precursor for high-temperature superconductivity , or whether it is simply an alternative instability that competes with superconductivity .
To gain further insight into this problem, we have performed a systematic study of the phase diagram of La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LNSCO), a system for which evidence of competition between superconductivity and stripe order has been reported previously . From neutron and x-ray scattering measurements we show that the charge and magnetic ordering temperatures reach their maxima at $`x\approx \frac{1}{8}`$. For $`x<\frac{1}{8}`$, the charge-ordering transition is limited by a structural phase boundary. The low-temperature structural phase involves a change in the tilt pattern of the CuO<sub>6</sub> octahedra, stabilized by the substituted Nd, which can pin vertical charge stripes .
At first glance, these results, together with the anomalous depression of the superconducting transition temperature, $`T_c`$, at $`x\approx \frac{1}{8}`$, appear to provide confirmation that charge-stripe order is in direct competition with superconductivity; however, the picture becomes more complicated when one takes into account recent nuclear-quadrupole-resonance (NQR) studies of LNSCO . In this work, a transition (involving the onset of an apparent loss of intensity) has been identified which coincides with the charge ordering determined by diffraction for $`x\gtrsim \frac{1}{8}`$; however, in contrast to the diffraction results, the NQR transition temperature, $`T_{\mathrm{NQR}}`$, continues to increase as $`x`$ decreases below $`\frac{1}{8}`$. Furthermore, the same transition is observed in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) for $`x\lesssim \frac{1}{8}`$. The implication is that local charge order, not easily detected by diffraction techniques, may occur even in the absence of the Nd-stabilized lattice modulation.
Does $`T_{\mathrm{NQR}}`$ really correspond to charge ordering? To test this possibility, we have analyzed the in-plane resistivity, $`\rho _{ab}(T)`$, which should be a sensitive measure of charge ordering. Through a scaling analysis, we have identified a temperature scale $`T_u`$ that corresponds to a low-temperature upturn with respect to an extrapolated linear variation with $`T`$. (The signature of the charge ordering is subtle, as befitting its unconventional nature.) We show that $`T_u`$ corresponds with $`T_{\mathrm{NQR}}`$ for both LNSCO and LSCO, thus providing support for the association of $`T_{\mathrm{NQR}}`$ with local charge ordering. Furthermore, we show that $`T_u`$ and $`T_{\mathrm{NQR}}`$ are linearly correlated with the size of the lattice distortion at low temperature. Together with the reasonable assumption that the magnitude of the charge-order parameter at low temperature is correlated with the ordering temperature, this result is strong evidence for a monotonic decrease of the charge-order parameter with increasing hole concentration (over the range studied here).
A monotonic variation of the stripe pinning strength means that there is no correlation with the anomalous depression of $`T_c`$ at $`x\approx \frac{1}{8}`$. We are left with the surprising conclusion that it is, instead, the static magnetic order alone which has a special association with the $`\frac{1}{8}`$ anomaly. In making this assertion, we do not argue that ordering the charge is good for superconductivity; to the contrary, $`T_c`$ is certainly reduced in all of our LNSCO samples compared to comparably-doped LSCO. Rather, our point is that, while pinning charge stripes is not good, it is magnetic order that is truly incompatible with superconductivity. The competition between static local antiferromagnetism and superconductivity is supported by recent theoretical work , and is compatible with the spin-gap proximity-effect mechanism for superconductivity .
For this study, a series of crystals of La<sub>2-x-y</sub>Nd<sub>y</sub>Sr<sub>x</sub>CuO<sub>4</sub>, with $`y=0.4`$ and $`x=0.08`$ to 0.25, was grown by the travelling-solvent floating-zone method . Figure 1(a) shows the electrical resistivity measured parallel to the CuO<sub>2</sub> planes by the six-probe method. As previously reported , there are upturns in $`\rho _{ab}`$ at low temperature for the $`x=0.12`$ and 0.15 samples, compositions at which charge order has been observed . In each there is also a small jump near 70 K, where a subtle structural transition takes place from the so-called low-temperature-orthorhombic (LTO) phase to the low-temperature-tetragonal (LTT) or an intervening low-temperature-less-orthorhombic (LTLO) phase . At $`x=0.12`$, charge ordering and the structural transition are essentially coincident ; however, charge ordering occurs significantly below the structural phase change at $`x=0.15`$ (see Fig. 2) .
The resistivity for $`x=0.10`$ looks somewhat different. Instead of an increase at the structural transition temperature, $`\rho _{ab}`$ decreases below the transition, and continues to decrease in a typically metallic fashion until superconductivity sets in. To test whether stripe order occurs in this sample, we performed a neutron scattering experiment at the NIST Center for Neutron Research (NCNR) . We found that the $`x=0.10`$ sample does indeed exhibit charge and spin order. The temperature dependence of the peak intensities for representative superlattice peaks is shown in Fig. 1(b). On warming, the charge order (which has also been confirmed by x-ray diffraction measurements at HASYLAB) seems to be limited by the structural transition at 65 K, while the magnetic order disappears at a lower temperature.
We have also used neutron scattering to determine the magnetic ordering temperatures ($`T_m`$) in samples with $`x=0.08`$ and 0.25. The results are summarized in Fig. 2. (Further details of the neutron studies will be presented elsewhere.) The new results for $`x=0.08`$ and 0.10 make it clear that the highest $`T_m`$ occurs at $`x\frac{1}{8}`$, where the superconducting transition ($`T_c`$) is most greatly depressed. Also plotted in the figure are the transition temperatures ($`T_{\mathrm{NQR}}`$) deduced from Cu NQR measurements by Singer, Hunt, and Imai . Those temperatures coincide with the charge-order transitions, $`T_{ch}`$, for $`x=0.12`$ and 0.15 determined by diffraction, but there appears to be a discrepancy for $`x<0.12`$.
The NQR and diffraction results for $`x<0.12`$ are not necessarily in conflict, since NQR is an inherently local probe, whereas the diffraction measurements require substantial spatial correlations of the charge order in order to obtain detectable superstructure peaks. But it is also interesting that NQR measurements suggest charge order in pure La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`x0.125`$, where diffraction studies have not yet detected any charge-related superlattice peaks. If some form of charge ordering is occurring within the LTO phase, one would expect to see an indication of it in the resistivity. As we will show below, it is, in fact, possible to identify a signature of charge order in resistivity measurements.
To analyze the resistivity, we consider first the behavior at higher temperatures. For cuprates doped to give the maximum $`T_c`$, it was noted early on that, over a surprisingly large temperature range,
$$\rho (T)=\alpha T+\beta ,$$
(1)
with $`\beta `$ very close to zero. We find that this formula describes fairly well the results in Fig. 1(a) for $`T200`$ K. Values for $`\alpha `$ were obtained by fitting Eq. (1), with $`\beta 0`$, to data in the range $`250\text{ K}<T<300`$ K; the same analysis was also applied to resistivity data for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> crystals with $`x=0.10`$, 0.12, 0.15, and 0.20 .
Next, we analyze the upturn in the resistivity at low temperature. The temperature at which the upturn becomes significant varies with $`x`$, and so does the rate of upturn; it was pointed out previously by Büchner and coworkers that the rate of upturn increases monotonically as one goes from $`x=0.10`$ to 0.12 to 0.15. We have found that all of the data can be scaled approximately onto a single curve if $`\rho _{ab}`$ is divided by $`\alpha T`$ and then plotted against a reduced temperature $`t=(T-T_0)/T_u`$, where $`T_u`$ is the characteristic upturn temperature and $`T_0`$ is the temperature towards which $`\rho _{ab}`$ appears to be diverging. The scaled resistivities are shown in Fig. 3; note that the same scaling is useful for samples both with and without Nd. The scaled curve is given approximately by
$$\rho _{ab}/\alpha T=\mathrm{tanh}(15t)/\mathrm{tanh}(t),$$
(2)
and we have determined error bars for the parameters $`T_0`$ and $`T_u`$ by performing least-squares fits to this function.
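To make the two-step procedure concrete, the following is a minimal numerical sketch (not the analysis code actually used here): $`\alpha `$ comes from a linear fit through the origin between 250 K and 300 K, and $`T_0`$ and $`T_u`$ are then obtained by least-squares fitting Eq. (2) to the scaled resistivity. The temperature/resistivity arrays and the exact fit windows are illustrative assumptions.

```python
# Minimal sketch (not the analysis code used here) of the two-step resistivity
# analysis described above; the fit windows and initial guesses are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def fit_alpha(T, rho):
    """Slope of rho = alpha*T (beta fixed to zero) over 250 K < T < 300 K."""
    sel = (T > 250.0) & (T < 300.0)
    return np.sum(rho[sel] * T[sel]) / np.sum(T[sel] ** 2)   # least squares through the origin

def scaled_upturn(T, T0, Tu):
    """Scaling form of Eq. (2): rho_ab/(alpha*T) = tanh(15 t)/tanh(t) with t = (T - T0)/Tu."""
    t = (T - T0) / Tu
    return np.tanh(15.0 * t) / np.tanh(t)

def fit_T0_Tu(T, rho, alpha, T0_guess=20.0, Tu_guess=60.0):
    """Least-squares estimates (and 1-sigma errors) of T0 and Tu from the upturn."""
    sel = (T > T0_guess) & (T < 200.0)           # stay below the purely linear regime
    y = rho[sel] / (alpha * T[sel])
    popt, pcov = curve_fit(scaled_upturn, T[sel], y, p0=[T0_guess, Tu_guess])
    return popt, np.sqrt(np.diag(pcov))
```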
The values of $`T_u`$ are compared with $`T_{\mathrm{NQR}}`$ in Fig. 4, where both are plotted vs. the maximum orthorhombic splitting $`(b-a)_{\mathrm{LTO}}`$ in the LTO phase. Büchner et al. have shown that $`(b-a)_{\mathrm{LTO}}`$ is a useful measure of the octahedral tilt angle, which changes orientation but not magnitude in the LTLO and LTT phases. For the $`y=0.4`$ samples, we used our own neutron measurements of $`(b-a)_{\mathrm{LTO}}`$, while we used results from the literature for LSCO.
From Fig. 4 we see that (1) the values of $`T_u`$ and $`T_{\mathrm{NQR}}`$ agree within the error bars, and (2) both values tend to scale with the octahedral tilt angle, independent of the tilt orientation (LTO vs. LTT). The first point reinforces the association of $`T_{\mathrm{NQR}}`$ with charge order, while the second indicates that the ordering temperature for local charge ordering is controlled by the tilt angle. (A correlation between tilt angle and $`T_c`$ reduction was noted previously by Dabrowski et al. .) Longer-range charge correlations (those detected by diffraction) appear to be sensitive to the tilt orientation.
The variation of $`T_0`$ with $`x`$ is shown in the inset of Fig. 4 for $`y=0.4`$. There is a considerable increase in $`T_0`$ from $`x=0.10`$ to 0.15. We suggest that this trend may be associated with a phase locking of charge-density-wave correlations along neighboring charge stripes, a possibility suggested by Kivelson, Fradkin, and Emery . Whether or not this interpretation is correct, there is clearly no correlation between the variations of $`T_0`$ (or $`T_u`$) and the depression of $`T_c`$ for $`y=0.4`$, which is greatest at $`x\frac{1}{8}`$.
The variations of $`T_u`$ and $`T_0`$ shown in Fig. 4 strongly indicate that ordering of the charge stripes is not responsible for the strong depression of $`T_c`$ at $`x\frac{1}{8}`$. We are then left with the conclusion that the culprit must be the magnetic order, which is maximized at the point where $`T_c`$ is minimized. That local antiferromagnetic order competes with superconductivity is certainly compatible with the spin-gap proximity-effect mechanism for superconductivity . In that theory, hole pairing is associated with the occurrence of a spin gap; given that antiferromagnetic order competes with singlet correlations and a spin gap, one would then expect $`T_c`$ to be depressed when magnetic order is present. (Of course, charge order is a prerequisite for magnetic order.) The trade off between local magnetic order and superconductivity is also emphasized in a recent numerical study .
One simple reason why $`T_m`$ might reach a maximum at $`x=\frac{1}{8}`$ is suggested by recent analyses of coupled spin ladders . If the charge stripes are rather narrow and centered on rows of Cu atoms, then the intervening magnetic strips would consist of 3-leg spin ladders. Theoretical analyses have shown that even weak couplings between a series of 3-leg ladders will lead to order at sufficiently low temperature, whereas weakly coupled 2- or 4-leg ladders have a quantum-disordered ground state . As $`x`$ deviates from $`\frac{1}{8}`$, one would have a combination of even-leg and 3-leg ladders, thus weakening the tendency to order. Although there is no direct experimental evidence concerning the registry of the stripes with the lattice, the picture of a CuO<sub>2</sub> plane broken into a series of 3-leg ladders by Cu-centered charge stripes at $`x=\frac{1}{8}`$ is appealing in the present case.
One might argue that only longer-range magnetic (or charge) order is relevant for suppressing superconductivity. We believe that a counter-example is given by the case of Zn-doping, where a local suppression of superconductivity is associated with static short-range antiferromagnetic correlations about the Zn sites .
In conclusion, we have presented evidence that it is local magnetic order rather than charge-stripe order which is responsible for the anomalous suppression of superconductivity in LNSCO at $`x\frac{1}{8}`$. While pinning charge stripes also causes some reduction of $`T_c`$, charge order appears to be compatible with superconductivity as long as the spin correlations remain purely dynamic.
This research was supported by the U.S.-Japan Cooperative Research Program on Neutron Scattering, a COE Grant from the Ministry of Education, Japan, and U.S. Department of Energy Contract No. DE-AC02-98CH10886. We acknowledge the support of the NIST, U.S. Department of Commerce, in providing the neutron facilities used in this work; SPINS is supported by the National Science Foundation under Agreement No. DMR-9423101. NI and JMT acknowledge the hospitality of the NCNR staff. We thank V. J. Emery and S. A. Kivelson for helpful comments.
# Monitoring the Short-Term Variability of Cyg X-1: Spectra and Timing
## Introduction
For stellar black hole candidates, several distinct states can be identified that differ in their general spectral and temporal properties. Based mainly on spectral arguments these states have been associated with different accretion rates and different geometries of the accretion flow (e.g., Esin et al., 1998; Nowak, 1995). With broad band instruments like the Rossi X-ray Timing Explorer (RXTE) it is possible to study the states with high time resolution over a time base of several years. The focus of this work lies on parameters and functions characterizing the short-term variability ($`<1000`$ s) of the canonical black hole candidate Cyg X-1 and their stability in the hard state.
In 1996 and 1997 observations of Cyg X-1 with the pointed RXTE instruments were not performed regularly and mainly concentrated on the $``$3 month long soft state in 1996 (see, e.g., Cui et al., 1998). We initiated a monitoring campaign of the hard state in 1998 (3 ksec exposures), which we expanded to 10 ksec exposures in 1999 in order to allow the calculation of Fourier frequency dependent time lags with sufficient accuracy (Fig. 1). Additionally, the RXTE observations are accompanied by simultaneous radio pointings. The aim of this campaign is to address fundamental questions such as the cause of the long term flux variability in the hard state, namely the 150 d periodic behavior seen in the RXTE All Sky Monitor and in the radio flux (Pooley, Fender & Brocksopp, 1999, Hjellming, priv. comm.). A precessing, interacting disk-jet system has been suggested as one possible explanation for this hard state cycle (Brocksopp et al., 1999).
We have performed spectral and/or temporal analyses on $``$ 30% of the available public data measured before 1999. In addition, we have analyzed those of our 2 weekly observations in 1999 that were scheduled before the gain change of the RXTE Proportional Counter Array (PCA) in 1999 March (for data after the gain change the calibration and background models are still uncertain). In this paper we present first results from these monitoring observations. Using the ftools 4.2, we extracted PCA spectra and high (2 ms) time resolution PCA lightcurves. We computed periodograms for several energy bands, as well as the time lags, and the coherence function between these energy bands (Nowak et al., 1999a). In addition, we use the linear state space model (LSSM) to model the light curves in the time domain. This method allows one to derive a characteristic time scale, $`\tau `$, that can explain the dynamics of the lightcurve. $`\tau `$ can be interpreted in terms of a shot noise relaxation time scale, but note that LSSMs only need a single time scale to provide a good fit of the lightcurve (see Pottschmidt et al., 1998, for an application of the LSSM to EXOSAT data from Cyg X-1).
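As an illustration of the timing quantities just mentioned, the sketch below computes Fourier-frequency-dependent time lags and the coherence function between two simultaneously binned energy bands from segment-averaged cross spectra. It is a bare-bones stand-in for the actual analysis chain: Poisson-noise and dead-time corrections, logarithmic frequency rebinning, and the LSSM fits are all omitted, and the segment length is an arbitrary choice.

```python
# Illustrative sketch (not the pipeline used here) of cross-spectral time lags and
# coherence between a soft- and a hard-band light curve; noise and dead-time
# corrections, which matter for real PCA data, are deliberately omitted.
import numpy as np

def lags_and_coherence(soft, hard, dt, nseg=256):
    """soft, hard: evenly sampled count-rate arrays; dt: bin size in seconds."""
    nfreq = nseg // 2
    csum = np.zeros(nfreq, dtype=complex)   # averaged cross spectrum
    psum = np.zeros(nfreq)                  # averaged soft-band power
    qsum = np.zeros(nfreq)                  # averaged hard-band power
    m = 0
    for i in range(0, len(soft) - nseg + 1, nseg):
        s = np.fft.rfft(soft[i:i + nseg] - soft[i:i + nseg].mean())[1:nfreq + 1]
        h = np.fft.rfft(hard[i:i + nseg] - hard[i:i + nseg].mean())[1:nfreq + 1]
        csum += np.conj(s) * h
        psum += np.abs(s) ** 2
        qsum += np.abs(h) ** 2
        m += 1
    freq = np.fft.rfftfreq(nseg, d=dt)[1:nfreq + 1]
    phase = np.angle(csum / m)              # sign convention: positive = hard band lags soft band
    lag = phase / (2.0 * np.pi * freq)      # time lag in seconds
    coherence = np.abs(csum / m) ** 2 / ((psum / m) * (qsum / m))
    return freq, lag, coherence
```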
## Variability of Spectral and Temporal Properties
Fourier frequency dependent time lags of up to $``$0.1 sec are known to exist between different energy bands in Cyg X-1. While the lags increase with energy, they cannot be explained by the diffusion time scale of photons in a Compton corona alone (Miyamoto & Kitamoto, 1989; Nowak et al., 1999a). Nevertheless, the characteristic time lag “shelves” allow us to roughly constrain coronal parameters (Nowak et al., 1999c). We find that over the course of weeks, the typical time lag in the hard state can vary by at least a factor of three (Fig. 1, left panel). The first three observations show a gradual decrease in the time lags, while the fourth observation has intermediate values. This systematic development is mirrored by the shot relaxation time scale $`\tau `$, which gets larger for observations with smaller time lag (Fig. 1, right panel).
At the same time, the X-ray spectrum also changes systematically (Fig. 2). Spectral fitting of black hole candidate spectra with the PCA is severely affected by the uncertainty of the PCA response matrix. Although the data exhibit a clear and varying hardening above $`10`$ keV, it is difficult to associate these changes with physically interpretable spectral parameters. For example, both, a broken power-law and a power law reflected from cold matter result in acceptable fits. This behavior is similar to GX 339$``$4 (Wilms et al., 1999). In order to characterize the spectral variability of Cyg X-1 independently of any spectral model, therefore, we directly compared the data in detector space. Fig. 2 displays the relative deviation of the four observations with respect to the observation of 1999 Jan 28. Cyg X-1 is clearly spectrally variable on a time basis of 14 d (note that part of the variation could be due to orbital modulation).
Comparison of Figs. 1 and 2 shows that a spectral hardening of the source correlates with a decrease of the time lags and with an increase of the relaxation time $`\tau `$. Recently, Gilfanov, Churazov & Revnivtsev (1999) also analyzed several of the public RXTE observations of Cyg X-1. They found a variability of the spectral hardness of the same order as presented here and an increase of the PSD break frequency with the reflection fraction. They also confirmed for Cyg X-1 a correlation between the intrinsic spectral slope and the reflection fraction (Zdziarski, Lubiski & Smith, 1999), as well as a relationship between two temporal parameters, namely the PSD “break frequency” and the PSD “hump frequency” (Wijnands & van der Klis, 1999; Psaltis, Belloni & van der Klis, 1999).
Due to the long time basis of the available RXTE data it is also possible to compare observations that are widely spaced in time, e.g., the 1999 monitoring observations with an observation made more than two years earlier, in 1996 Oct 23. The latter has previously been published in a series of papers (Dove et al., 1998; Nowak et al., 1999a, c). It was performed shortly after the soft state of 1996, and we cautioned, therefore, that the observation might still have been “contaminated” by soft state peculiarities. But, the comparison with the observation of 1999 Feb 25 shows almost identical PSDs and time lags. So, we see that the source really was in its hard state and that the hard state timing properties can be reproduced with great accuracy on the time scale of years.
## Discussion
We have presented first results from our systematic analysis of RXTE data of Cyg X-1 in the hard state. Apparently, during the canonical hard state this source can vary by up to a factor of $`2`$ in 2–50 keV flux and by up to a factor of three in the associated time lags within a few weeks. On the other hand, we were also able to identify data with almost identical spectral and temporal behavior spaced by more than two years.
As we noted in the previous section, there is possible evidence for a correlation of the changes in the spectral and temporal behavior of the source. Harder spectra appear to be associated with shorter time lags, similar to the hard state of GX 339$``$4 (Nowak et al., 1999b). A possible interpretation would be that the accretion disk penetrates to smaller disk radii at times of harder flux, thereby increasing the reflection fraction of the Comptonized radiation (see also Gilfanov, Churazov & Revnivtsev, 1999), i.e., hardening the spectrum, and shortening the time-delay of the harder photons (with the smaller system geometry corresponding to shorter lags). Alternatively, the harder spectra might be due to changes in the coronal parameters: our results might indicate that coronae with larger optical depth and/or temperature are physically smaller. This is also consistent with the development of the shot time scale in the sense that more scattering events lead to longer relaxation times.
### Acknowledgments
We thank all participants in the 1999 broad band campaign for their continued effort to obtain simultaneous radio through X-ray observations of Cygnus X-1. This work has been partially financed by DFG grant Sta 173/22 and a travel grant by the Deutsche Forschungsgemeinschaft to JW.
# 1 Introduction
## 1 Introduction
The standard parton model is based on the DGLAP evolution equations, which re-sum contributions from $`[\alpha _s\text{ln}(\mu ^2/\mathrm{\Lambda }^2)]`$. It represents a one-dimensional phase space approximation for the parton motion, also known as the collinear approximation, which gives the correct behavior of the structure functions at not too small values of $`x`$. When $`x`$ becomes smaller, contributions from $`[\alpha _s\text{ln}(\mu ^2/\mathrm{\Lambda }^2)\text{ln}(1/x)]`$ and $`[\alpha _s\text{ln}(1/x)]`$ also need to be considered. In the so-called $`k_t`$ factorization or semi-hard approach (SHA) , the transverse momenta of the partons in the evolution from large $`x`$ at the proton vertex towards small $`x`$ at the hard interaction vertex are taken into account. This evolution in $`x`$ has been formulated in terms of the BFKL evolution equation. The CCFM evolution equation includes coherence effects via angular ordering and it reproduces the BFKL (DGLAP) evolution equation in the small (large) $`x`$ limits, respectively.
The resummation of the terms $`[\alpha _s\text{ln}(\mu ^2/\mathrm{\Lambda }^2)]`$, $`[\alpha _s\text{ln}(\mu ^2/\mathrm{\Lambda }^2)\text{ln}(1/x)]`$ and $`[\alpha _s\text{ln}(1/x)]`$ in SHA results in the so-called unintegrated gluon distribution $`ℱ(x,k_t^2,Q_0^2)`$, which determines the probability to find a gluon carrying the longitudinal momentum fraction $`x`$ and transverse momentum $`k_t`$. The factorization scale $`Q_0^2`$ (such that $`\alpha _s(Q_0^2)<1`$) indicates the non-perturbative input distribution. They obey the BFKL or CCFM equation and reduce to the conventional parton densities $`F(x,\mu ^2)`$ once the $`k_t`$ dependence is integrated out:
$$∫_0^{\mu ^2}ℱ(x,k_t^2,Q_0^2)𝑑k_t^2=xF(x,\mu ^2,Q_0^2).$$
(1)
However, in CCFM the unintegrated parton distribution $`𝒜(x,k_t^2,Q_0^2,\overline{q}^2)`$ (instead of $`ℱ(x,k_t^2,Q_0^2)`$) depends also on the maximum angle for any emission corresponding to $`\overline{q}`$ (coming from angular ordering). In the small $`x`$ limit they reduce to $`ℱ`$ .
To calculate the cross section of a physical process, the unintegrated functions $`ℱ`$ or $`𝒜`$ have to be convoluted with off-mass shell matrix elements corresponding to the relevant partonic subprocesses. In the off-mass-shell matrix elements the virtual gluon polarization tensor is taken in the form of the SHA prescription :
$$L_{\mu \nu }^{(g)}=\overline{ϵ_2^\mu ϵ_2^\nu }=p^\mu p^\nu x^2/|k_t|^2=k_t^\mu k_t^\nu /|k_t|^2.$$
(2)
The specific properties of semi-hard theory may manifest in several ways. With respect to inclusive production properties, one obtains an additional contribution to the cross sections due to the integration over the $`k_t^2`$ region above $`\mu ^2`$ and the broadening of the $`p_t`$ spectra due to extra transverse momentum of the interacting gluons . It is important that the gluons are not on-mass shell but are characterized by virtual masses proportional to their transverse momentum. This also assumes a modification of the polarization density matrix. A striking consequence of this fact on the $`J/\psi `$ spin alignment has been demonstrated in .
In this paper we present predictions for the production of $`D^{*}`$ mesons in photo-production at HERA using the SHA approach. We use an unintegrated gluon density coming from a solution of the CCFM equation (see ). We also show predictions based on a parton level Monte Carlo integration using a BFKL-like parameterization of the unintegrated gluon density, and we compare both predictions.
## 2 Unintegrated gluon distribution and CCFM evolution
The parton evolution at small values of $`x`$ is believed to be best described by the CCFM evolution equation , which for $`x\to 0`$ is equivalent to the BFKL evolution equation and for large $`x`$ reproduces the standard DGLAP equations. The CCFM evolution equation takes coherence effects of the radiated gluons into account via angular ordering. It has been shown that a very good description of the inclusive structure function $`F_2(x,Q^2)`$ and the production of forward jets in DIS, which are believed to be a prominent signature of small $`x`$ parton dynamics, can be obtained from the CCFM evolution equation. The most important point there was the treatment of the non-Sudakov form factor, which suppresses radiation at small values of $`x`$. In Fig. 1 we show the gluon density obtained from this solution of the CCFM equation as a function of $`x`$ for different values of $`k_t^2`$ at $`\overline{q}^2=10`$ GeV<sup>2</sup>.
In Fig. 2 we show the gluon density as a function of $`k_t^2`$ for different values of $`x`$ at $`\overline{q}^2=10`$ GeV<sup>2</sup>.
For comparison, we also use the results of a BFKL-like parameterization of the unintegrated gluon distribution $`ℱ(x,k_t^2,\mu ^2)`$, according to the prescription in . The proposed method relies on a straightforward perturbative solution of the BFKL equation where the collinear gluon density $`xG(x,\mu ^2)`$ from the standard GRV set is used as the boundary condition in the integral form (1). Technically, the unintegrated gluon density is calculated as a convolution of the collinear gluon density with universal weight factors :
$$ℱ(x,k_t^2,\mu ^2)=∫_x^1𝒢(\eta ,k_t^2,\mu ^2)\frac{x}{\eta }G(\frac{x}{\eta },\mu ^2)𝑑\eta ,$$
(3)
$$𝒢(\eta ,k_t^2,\mu ^2)=\frac{\overline{\alpha }_s}{xk_t^2}J_0(2\sqrt{\overline{\alpha }_s\mathrm{ln}(1/\eta )\mathrm{ln}(\mu ^2/k_t^2)}),k_t^2<\mu ^2,$$
(4)
$$𝒢(\eta ,k_t^2,\mu ^2)=\frac{\overline{\alpha }_s}{xk_t^2}I_0(2\sqrt{\overline{\alpha }_s\mathrm{ln}(1/\eta )\mathrm{ln}(k_t^2/\mu ^2)}),k_t^2>\mu ^2,$$
(5)
where $`J_0`$ and $`I_0`$ stand for Bessel functions (of real and imaginary arguments, respectively), and $`\overline{\alpha }_s=3\alpha _s/\pi `$. The latter parameter is connected with the Pomeron trajectory intercept: $`\mathrm{\Delta }=\overline{\alpha }_s4\mathrm{ln}2`$ in the LO and $`\mathrm{\Delta }=\overline{\alpha }_s4\mathrm{ln}2N\overline{\alpha }_s^2`$ in the NLO approximations, respectively, where $`N`$ is a number . In the following we use $`\mathrm{\Delta }=0.35`$.
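For orientation, a direct numerical evaluation of Eqs. (3)–(5) could look like the sketch below. The collinear density $`xG(x,\mu ^2)`$ used here is a toy stand-in for the actual GRV set, and $`\overline{\alpha }_s`$ is fixed from the LO relation $`\mathrm{\Delta }=\overline{\alpha }_s4\mathrm{ln}2`$ with $`\mathrm{\Delta }=0.35`$, so only the qualitative behavior in $`k_t^2`$ should be read off from it.

```python
# Numerical sketch of Eqs. (3)-(5) as written above; the toy xG(x, mu^2) below is
# an assumption standing in for the GRV gluon density, so normalizations are
# illustrative only.
import numpy as np
from scipy.special import j0, i0
from scipy.integrate import quad

ALPHA_BAR = 0.35 / (4.0 * np.log(2.0))   # LO relation Delta = alpha_bar * 4 ln2, Delta = 0.35

def xG(x, mu2):
    """Toy collinear gluon density standing in for the GRV set (assumption)."""
    return 3.0 * (1.0 - x) ** 5 * x ** (-0.2)

def weight(eta, x, kt2, mu2):
    """Weight factor of Eqs. (4)-(5); abs() covers both the kt2<mu2 and kt2>mu2 branches."""
    arg = 2.0 * np.sqrt(ALPHA_BAR * np.log(1.0 / eta) * abs(np.log(mu2 / kt2)))
    bessel = j0(arg) if kt2 < mu2 else i0(arg)
    return ALPHA_BAR / (x * kt2) * bessel    # prefactor follows Eq. (4) as written above

def F_unintegrated(x, kt2, mu2):
    """Unintegrated gluon density of Eq. (3) by direct integration over eta."""
    integrand = lambda eta: weight(eta, x, kt2, mu2) * (x / eta) * xG(x / eta, mu2)
    val, _ = quad(integrand, x, 1.0, limit=200)
    return val
```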
The presence of the two different parameters, $`\mu ^2`$ and $`k_t^2`$, in eq. (3) for the unintegrated gluon distribution $`ℱ(x,k_t^2,\mu ^2)`$ refers to the fact that the evolution of parton densities is done in two steps. First the DGLAP scheme is applied to evolve the structure function from $`Q_0^2`$ to $`\mu ^2`$ within the collinear approximation. After that, eqs. (3)-(5) are used to develop the parton transverse momenta $`k_t^2`$. This is in contrast to the CCFM evolution, where the evolution of “longitudinal” and “transverse” components occurs simultaneously.
From Figs. 1 and 2 we see that the BFKL approach gives a much harder $`k_t`$ spectrum than the CCFM approach. However, it has been argued extensively in the literature that in BFKL a so-called “consistency constraint” should be applied, to simulate at least a part of the large next-to-leading corrections. For comparison we also show in Figs. 1 and 2 the unintegrated gluon distribution from Kwiecinski, Martin, and Stasto (A. Stasto kindly provided one of us (H.J.) with the program). This shows that the shape of the distributions from BFKL including the “consistency constraint” is similar to the one obtained from CCFM and that the gluon density is more strongly suppressed at large $`k_t`$ as compared to the approach in . Unfortunately we could not use the gluon distribution from , because it started only at $`k_t^2>1`$.
## 3. Predictions for $`D^{*}`$ meson production at HERA
We have used the hadron level Monte Carlo program Cascade described in to predict the cross section for $`D^{*}`$ photo-production at HERA energies. The unintegrated gluon distribution was obtained from the solution of the CCFM equation described in . The scale in $`\alpha _s`$ was set to $`k_t^2`$ in the parton evolution and we used $`\mathrm{\Lambda }_{QCD}^{(4)}=0.2`$ GeV. For the hard scattering the off-shell matrix elements for heavy quarks (including the heavy quark mass with $`m_c=1.5`$ GeV) as described in are used together with the one-loop expression for $`\alpha _s`$ with $`k_t^2`$ (of the gluon entering the hard scattering) as the scale. The complete initial state cascade is simulated via a backward evolution as described in . The hadronization was performed with the Lund string fragmentation as implemented in JETSET . The Peterson function with $`ϵ=0.06`$ was used for the charm quark fragmentation.
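For reference, the Peterson et al. fragmentation function quoted above (with $`ϵ=0.06`$) has the following form; this stand-alone sketch only illustrates its shape and a simple way of drawing $`z`$ values from it, and is not the JETSET implementation. The envelope used for the rejection sampling is an assumption of this sketch.

```python
# Stand-alone illustration of the Peterson fragmentation function with eps = 0.06
# as quoted above; JETSET handles this internally, so this is only a sketch.
import numpy as np

EPS_C = 0.06

def peterson(z, eps=EPS_C):
    """Unnormalized Peterson fragmentation function D(z)."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

def sample_z(n, eps=EPS_C, rng=None):
    """Draw n values of z by simple rejection sampling (envelope is an assumption)."""
    rng = rng or np.random.default_rng()
    zmax = 1.0 - np.sqrt(eps)            # rough location of the peak
    fmax = peterson(zmax, eps) * 1.1     # safety margin for the flat envelope
    out = []
    while len(out) < n:
        z = rng.uniform(0.01, 0.99)
        if rng.uniform(0.0, fmax) < peterson(z, eps):
            out.append(z)
    return np.array(out)
```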
In Fig. 3 we show the prediction of $`D^{*}`$ production as a function of the transverse momentum $`p_t^{D^{*}}`$ using the Cascade Monte Carlo described above and compare it with the measurement of the ZEUS collaboration . We observe a rather good description of the $`p_t`$ spectrum.
In Fig. 4 we show the $`D^{*}`$ cross section as a function of the pseudo-rapidity $`\eta ^{D^{*}}`$ for different regions in $`p_t`$. Also here we observe a good description of the experimental data points. The most important point here is the cross section at values of $`\eta ^{D^{*}}>0.5`$.
For comparison we used the results of a calculation with a parameterization for the unintegrated gluon distribution $`ℱ(x,k_t^2,\mu ^2)`$, according to the prescription of . Here the scale $`m_t^2`$ is used in $`\alpha _s`$. The results are also shown in Figs. 3 and 4. In general both predictions agree rather nicely. The differences observed are entirely due to the different behavior of the unintegrated gluon distribution as a function of $`x`$ and $`k_t^2`$. In addition, in the CCFM approach angular ordering and the maximum angle allowed for any emission play an important role. The results presented here are similar to the ones obtained from a full NLO calculation. In the SHA approach the gluon entering the hard interaction is off-mass shell, which is similar to the situation in an NLO calculation, where the propagators in the 3-parton final states are fully considered. This is in contrast to the LO DGLAP (collinear) case, where the gluon is always treated on-mass shell. However, if heavy flavor excitation is included in LO DGLAP (via resolved photon processes), then again a similar situation occurs. It was found that including heavy flavor excitation in LO Monte Carlo programs leads to a better description of the data. The semi-hard approach presented here shows that a good description of the photo-production of heavy flavor can also be achieved in a theoretically consistent way, without including artificially large intrinsic transverse momenta or heavy flavor excitation.
## 4 Conclusions
We have shown that, using an unintegrated gluon distribution obtained from a solution of the CCFM evolution equation which describes the structure function $`F_2(x,Q^2)`$ and the production of forward jets in DIS at HERA, we can also describe the cross sections of inclusive $`D^{*\pm }`$ meson production measured at HERA. Within the semi-hard approach the measured cross section as a function of $`p_t`$ and $`\eta ^{D^{*}}`$ can be nicely described. The results are similar to NLO calculations. The shape of the gluon $`k_t`$ distribution is driven by the BFKL or CCFM evolution equations. This shows that there is no room to include any artificially large intrinsic transverse momentum distribution of partons inside the proton.
It is also interesting to note that, within the semi-hard approach, heavy flavor excitation in the photon is consistently included, since a gluon radiated close to the quark box can have a transverse momentum larger than that of the quarks.
## 5 Acknowledgments
We are grateful to M. Ryskin and Y. Shabelski for many discussions about the semi-hard approach. We thank L. Gladilin for many discussions about the ZEUS $`D^{*}`$ data.
# Post nova white dwarf cooling in V1500 Cygni
## 1 Introduction
In many close binary systems the hotter star will heat one face of the cooler star. As the orbital motion of the binary brings this face into view the observed flux from the binary will increase, only to fall again as it rotates out of view. After a nova explosion, the hot white dwarf is an obvious candidate for heating its cool companion. Probably the best evidence is in V1500 Cyg (Nova Cyg 1975), where Schmidt et al. (1995) show that the secondary star dominates the photometric modulation. They fitted HST spectra with a red star whose unperturbed temperature was $`3000K`$ but whose face towards the white dwarf was $`8000K`$.
There are three other novae for which there is photometric evidence for heating. The orbital modulation in DN Gem (Nova Gem 1912) was found by Retter, Leibowitz & Naylor (1999) to be well described by a heating model and DeYoung & Schmidt (1994) suggested heating could explain the lightcurve of V1974 Cyg (Nova Cyg 1992). Finally, Somers, Mukai & Naylor (1996) found the infrared lightcurve of WY Sge (Nova Sge 1783) required a heated face to be modelled successfully. However, in this case the level of irradiation was so low that it could have been supplied by the accretion luminosity, as occurs in the dwarf nova IP Peg during outburst (Webb et al 1999).
This led us to ask if there were further evidence available which would help us identify the source of irradiation in old novae as the white dwarf. It is particularly important to do so, as the irradiated surface of the secondary star may be the most reliable diagnostic we have of the white dwarf luminosity, since its intrinsic radiation is produced in the far UV/soft X-ray regimes, where the effect of interstellar absorption will be very marked. If the irradiating object is the white dwarf, the irradiation should decrease on the white dwarf cooling timescale. Prialnik (1986) shows how the surface layers of a white dwarf are heated during a nova explosion and cool as a power law on a time scale of 200 yrs. (This is in contrast to the cooling of white dwarfs after their initial formation, which involves cooling of the entire star and occurs on a time scale of $`10^8`$ years.)
## 2 History of the photometric modulation in V1500 Cyg
V1500 Cyg was a naked eye nova, reaching a peak magnitude of V=2.2 in late August 1975. It is thought to be currently a slightly asynchronous AM Her system. The magnetic nature of the white dwarf was first proved by the detection of circularly polarized light (Stockman et al. 1988). The present day photometric period (orbital) is $`1.8\%`$ longer than the polarimetric period (white dwarf spin). Presumably the two were thrown out of equality by the nova explosion and observations suggest that they will re-synchronize in a time scale of 200 years (Schmidt & Stockman 1991).
Stockman et al. (1988) explain the photometric period evolution as follows:
* The expansion due to the nova explosion increases the moment of inertia of the white dwarf, causing its spin period to increase to 0.141d, breaking the synchronous rotation. At this point the photometric modulation is associated with the spin period of the white dwarf.
* Interaction between the secondary star and the envelope causes the envelope to be spun up to the binary period; strong coupling ensures that the core also achieves synchronism.
* The remnant envelope shrinks back onto the white dwarf surface, reducing its moment of inertia and thus decreasing the spin period to 0.137d.
Currently, therefore, V1500 Cyg displays two periods. There is a polarimetric signal at 0.137d which is the spin period of the white dwarf, and a photometric signal at 0.140d which is the orbital period of the binary. Aside from Schmidt et al’s spectrophotometry outlined in Section 1, further evidence that the secondary star now dominates the photometric modulation comes from the fact that the timing of flux maximum in our own data matches the orbital ephemeris (see Section 4). It is also clear however, from the presence of flickering and the slightly asymmetrical B band light-curves, that there is some contamination of the orbital modulation. This could either be due to the accretion stream, or perhaps because the spin period of the white dwarf may also have a photometric signal (Pavlenko & Pelt 1988).
## 3 Modelling white dwarf cooling after surface heating.
Prialnik (1986) modelled the evolution of a classical nova through a complete cycle; accretion, outburst, mass loss, decline and resumed accretion. The model is for a 1.25$`M_{}`$ C-O white dwarf. The resulting outburst is fast and similar to that observed in V1500 Cyg, matching the composition of the ejected envelope very well. The white dwarf is modelled allowing for heat transfer via radiation, conduction (Iben 1975, Kovetz & Shaviv 1973) and convection (Mihalas 1978). The result may be fitted well with a power law cooling curve of the form
$$L∝t^{-1.14},$$
(1)
where L is the luminosity of the white dwarf and $`t`$ is the time since outburst.
## 4 Observations
To extend the baseline of photometric amplitude decline we obtained one orbital cycle of B and V band photometry using the JKT on La Palma on the night of 1995 October 3. The TEK4 CCD was used with pixels binned 2 by 2 to achieve 0.6”x0.6” pixels in rapid readout mode. The seeing was around 2.0 arcsec. The exposures were typically of 120 seconds with filters being alternated between observations.
A bias frame was subtracted from each image, and it was then flatfielded using an image of the twilight sky. The counts for V1500 Cyg and various other stars were extracted from each frame using the optimal weighting procedure described in Naylor (1998). We divided the counts for V1500 Cyg by those for star C1 of Kaluzny & Semeniuk (1987), allowing us to put the lightcurves in Figures 1 and 2 onto a magnitude scale.
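The relative-photometry step just described amounts to the simple conversion sketched below; the comparison-star magnitude used in the sketch is a placeholder assumption, not the Kaluzny & Semeniuk value.

```python
# Sketch of the relative-photometry step described above: optimally extracted
# counts for V1500 Cyg are divided by those of comparison star C1 and placed on a
# magnitude scale.  B_C1 is a placeholder value, not the published magnitude.
import numpy as np

B_C1 = 14.0   # assumed B magnitude of the comparison star (placeholder)

def differential_mags(counts_target, counts_comp, mag_comp=B_C1):
    """Return target magnitudes from simultaneous target/comparison-star counts."""
    ratio = np.asarray(counts_target, dtype=float) / np.asarray(counts_comp, dtype=float)
    return mag_comp - 2.5 * np.log10(ratio)
```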
The times of maximum and minimum agree, within errors, with the ephemeris of Semeniuk, Olech & Nalezyty (1995). The data do not show a pure heating modulation since there is some evidence of a dip in the light curve at time 2 449 994.61. Such dips are not uncommon, see for example the light-curves of Kaluzny & Semeniuk (1987) whose data are from 10 years earlier than our observations. These irregularities add errors in the estimation of the amplitude of the modulation (see Section 5).
## 5 The Amplitude v Time relationship
By searching through various sources a record of the B band photometric behaviour observed in V1500 Cyg since outburst has been assembled. Table 3.1 gives a list of all references used. The amplitude of the B band photometric modulation versus the time since outburst is plotted in Figure 3 on logarithmic scales. We immediately note that these points lie roughly in a straight line, implying a power law decay in time, although it should be noted that the first year of data has been omitted. While other authors have attempted to find a trend in the amplitude versus time data, they relied on the amplitude in magnitude space. This is a measure of the amplitude relative to the brightness of other parts of the system. Clearly this is inappropriate for V1500 Cyg since the overall brightness in the B band changes due to a large variety of effects. The amplitude in flux space is a good probe of the irradiation since it does not need to be corrected for changes in relative brightness of other parts of the system. Hence, what we refer to as the “flux amplitude” is obtained by converting the magnitudes at orbital maximum and minimum into fluxes (assuming a star of $`B`$=0.0 gives $`7.2\times 10^{-9}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>), and differencing them. An unweighted straight-line fit to these data gives
$$A∝t^{-1.26\pm 0.21},$$
(2)
where $`A`$ is the flux amplitude, and the error bar corresponds to 1$`\sigma `$.
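The conversion from the tabulated magnitudes to flux amplitudes and the unweighted power-law fit can be sketched as follows; the input arrays are placeholders for the measurements collected from the literature.

```python
# Sketch of the amplitude-decline fit described above: B magnitudes at orbital
# maximum and minimum are converted to fluxes with the quoted zero point, and the
# flux amplitudes are fit by an unweighted straight line in log-log space.  The
# (t_years, B at maximum, B at minimum) arrays are placeholders for real data.
import numpy as np

F0_B = 7.2e-9   # erg cm^-2 s^-1 A^-1 for a B = 0.0 star (zero point quoted above)

def flux_amplitude(B_at_max, B_at_min):
    """Flux amplitude = F(orbital maximum light) - F(orbital minimum light)."""
    return F0_B * (10.0 ** (-0.4 * np.asarray(B_at_max)) - 10.0 ** (-0.4 * np.asarray(B_at_min)))

def powerlaw_decline(t_years, amplitude):
    """Unweighted straight-line fit in log-log space; returns (exponent, normalization)."""
    slope, intercept = np.polyfit(np.log10(t_years), np.log10(amplitude), 1)
    return slope, 10.0 ** intercept
```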
It is apparent that there is an extra source of noise that acts on a short time scale compared with the general trend of amplitude decline. (Hence our choice of an unweighted fit.) This is likely to be due to the aforementioned orbital dips shifting around the orbital light-curve. The position of maximum light from the accretion columns with respect to the secondary star’s photometric hump will vary with beat phase. This will cause the apparently random excess scatter in the photometric amplitudes.
## 6 The Irradiating Flux v Time relationship
The bolometric flux amplitude from the secondary star will be proportional to the irradiating flux. However, the flux measured in any particular bandpass will not follow such a proportionality, as the bolometric correction will change with the temperature of the heated face. To extract how the irradiation is changing from the changing flux amplitude one must determine the response of the heated face of the secondary star to decreasing the heating. One may conveniently approximate the response as
$$A∝F_{irr}^x∝t^{x\eta }.$$
(3)
Here $`\eta `$ is the power law decline predicted by Prialnik to be -1.14 while $`x`$ is the response of the secondary star to heating. In fact $`x`$ is not a constant but is a weak function of the temperature of the heated face of the secondary star. We therefore used the code described in Somers et al (1996) to determine how the amplitude of the modulation changes with the irradiating flux. This model irradiates a Roche-lobe filling star from a point source at the position of the white dwarf. The irradiation then raises the temperature of each surface element it impinges on, such that all the incident energy is re-radiated. We will also assume that the radiation from the secondary star can be approximated as a blackbody, an assumption we shall examine later. We used a mass ratio q=3, inclination i=$`60^{\circ }`$ and underlying secondary star temperature of 3000K. These values are typical of those quoted for V1500 Cyg and the results are, in any case, insensitive to the exact values.
We began by irradiating a 3000K secondary star such that the flux when the irradiated face was towards the observer was equivalent to a star of 8000K, matching the front face temperature observed by HST. This model should correspond to the last data point in Figure 3. The flux amplitude in Figure 3 declines by a factor of 10<sup>1.2</sup>, and so we increased the irradiation in our model until the flux amplitude had increased by this factor. Over this range of interest, we found that
$$A∝F_{irr}^{0.75}$$
(4)
represented the data to better than 25 percent for all values of $`F_{irr}`$. We also tried using the bolometric correction and colours of model atmospheres given in Bessell et al (1998) instead of blackbodies to represent the flux. We found this changed $`x`$ by less than 0.06 in the low irradiation case, which is that most affected by the difference between model atmospheres and black bodies.
With a value for $`x`$ we can now use the observations to derive a value of $`\eta `$ by equating (2) to (3). This yields:
$$\eta =-0.94\pm 0.09.$$
(5)
This result is just consistent (at the 2.2$`\sigma `$ level) with the value Prialnik (1986) finds from purely theoretical considerations, $`\eta =-1.14`$. Especially given the nature of the approximations made, this seems to support the conclusion that the photometric variation in V1500 Cyg, and by implication other old novae, is caused by irradiation from the white dwarf.
## 7 Conclusions
The above shows that for at least the first 20 years from outburst the white dwarf cooling models match the available observations. Given the Prialnik-type cooling law, with the observed value of $`\eta `$, and the temperature of the irradiated face at some known time after outburst (from Schmidt et al. 1995), we can calculate the typical time taken for irradiation of the surface to become negligible. The irradiation will drop off so that the incoming radiation is less than double the unheated surface luminosity of the secondary star about $`280\pm 140`$ years after the outburst of V1500 Cyg. Interestingly, we find that in WY Sge, now over 200 years since nova outburst, the irradiation from the white dwarf has declined to these levels (Somers et al 1996), although in that system the disc is a complicating factor. Thus both V1500 Cyg and WY Sge suggest that white dwarfs really do cool as the theory predicts.
## ACKNOWLEDGMENTS
The Jacobus Kapteyn Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. We thank Gregory Beekman and Coel Hellier who helped with the observations, and Alon Retter for commenting on the manuscript. TN was in receipt of a PPARC advanced fellowship when the majority of this work was carried out.
# Disappearing Pulses in Vela X-1
## Introduction
Vela X-1 (4U 0900–40) is an eclipsing high mass X-ray binary consisting of the 23 $`M_{}`$ B0.5Ib supergiant HD 77581 and a neutron star with an orbital period of 8.964 d and a spin period of about 283 s (van Kerkwijk et al., 1995, and references therein). The persistent X-ray flux from the neutron star is known to be very variable exhibiting strong flares and low states. Inoue et al. (1984) and Kreykenbohm et al. (1999) have observed low states of near quiescence where no pulsations were seen for a short amount of time. Before or after these low states normal pulsations were observed. During an observation of Vela X-1 for 12 consecutive orbits in January 1998 by the Rossi X-ray Timing Explorer (RXTE), we have by chance observed such a quiescent state for the first time from the beginning to the end, preceded and followed by the usual pulsations.
## Lightcurves and pulse profiles
As Fig. 1 demonstrates, the source flux suddenly decreased between orbits 2 and 3, reaching its minimum during orbit 4. At the same time *the source pulsations decreased strongly, while significant non-pulsed source flux remained*. This is shown in detail in Fig. 2. The pulsed fraction decreased from 30%–50%, depending on the energy band, to 7%–9%. Note that even at the lowest state, the overall source flux was $`>`$5 times the predicted background level in the energy range used.
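For reference, a pulse profile and a simple pulsed fraction can be obtained from a light curve along the following lines. Note that several definitions of pulsed fraction are in use and no background subtraction is done in this sketch, so the numbers quoted above need not follow exactly this recipe; the number of phase bins is an arbitrary choice.

```python
# Minimal sketch of folding a light curve on the ~283 s pulse period and computing
# a simple (max-min)/(max+min) pulsed fraction; background subtraction and the
# choice of pulsed-fraction definition are left out on purpose.
import numpy as np

def fold_profile(time, rate, period, nbins=32):
    """Fold (time, rate) on 'period' and return (bin centers in phase, mean rate per bin)."""
    phase = np.mod(time, period) / period
    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, bins) - 1
    profile = np.array([rate[idx == k].mean() for k in range(nbins)])
    return bins[:-1] + 0.5 / nbins, profile

def pulsed_fraction(profile):
    """Simple (max - min)/(max + min) pulsed fraction of a folded profile."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())
```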
Fig. 3 presents the pulse profiles obtained from the individual orbits 1 to 8. The profiles of the first two orbits correspond to the well-known, complex shape usually obtained when integrating over many pulse periods, with a clear transition from a five-peaked profile at low energies to a double-peak structure at high energies (Raubenheimer, 1990). In contrast, orbits 3 to 5 show much less pronounced profiles, with the profile of orbit 4 being essentially flat. The pulse profile of the source during “recovery” (orbits 6 & 7) is similar to those observed before the low state at higher energies but much less pronounced at energies $`<`$10 keV.
## Hardness ratios and Spectra
Fig. 4 shows the evolution of the spectral hardness, both at energies up to 10 keV (energy band 2 vs. band 1) and at energies beyond 10 keV (energy band 4 vs. band 3). The hardness ratios were calculated using $`(HS)/(H+S)`$ where $`H`$ and $`S`$ are the fluxes in the hard and soft band respectively. There are three apparent properties of this plot:
* *The two hardness ratios are very clearly anticorrelated.*
* *The disappearence of the pulses in orbit 3 goes hand in hand with an abrupt spectral change.*
* The reemergence of pulsations is accompanied by a “normalization” of the hardness ratios, *but during the high state the spectrum stays significantly harder than in the normal state at the beginning.*
The strong spectral changes at the onset of the low state are also apparent in the quotient spectra shown in Fig. 5. There are clear indications for strong absorption and at the same time increased flux in the iron line and a soft excess at the lowest energy range.
*In contrast there is little change in the global spectral shape as the source begins to pulsate again. Except for a slight soft excess, the spectrum during the low state is rather well described by simply scaling the spectrum of the following flaring state.*
Attempts to fit the spectra turned out to be quite difficult, even allowing for the known complexity of the Vela X-1 continuum. Detailed results have been presented on two posters at the X-ray’999 conference in Bologna, the following paragraphs summarize the results.
We used a partial covering model with two additive components, using the same continuum, one heavily absorbed and one scattered unabsorbed into the direction of the observer. The continuum used was the NPEX (Negative Positive EXponential Mihara 1995). This model had to be further modified by an additive iron line and two coupled cyclotron lines. For the high state a cyclotron line at $``$55 keV is required by the data, a second feature at $``$21 keV may be due to uncertainties in the PCA response matrix.
Modeling the spectra of the individual orbits in the first part of our observation, we found the N$`_\text{H}`$ value of the absorbed component varying between 40$`\times `$10$`^{\text{22}}`$ cm$`^{\text{-2}}`$ and 230$`\times `$10$`^{\text{22}}`$ cm$`^{\text{-2}}`$ with a clear maximum during orbit 4. The relative importance of the scattered component appears also to be maximal during the low state. There is no clear correlation of the other continuum parameters with source flux, but their values are quite different before and after the low state.
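To make the model components named above explicit, a schematic implementation could look as follows. The functional forms (NPEX continuum, partial covering, additive Gaussian iron line, multiplicative pseudo-Lorentzian cyclotron line) follow common usage rather than a specific fitting package, and every parameter value passed to these functions is an assumption, not one of our fit results; only one of the two coupled cyclotron lines is written out.

```python
# Illustrative sketch of the spectral-model ingredients named above; functional
# forms follow common usage and all parameter values are assumptions.
import numpy as np

def npex(E, A1, A2, a1, a2, kT):
    """NPEX continuum: negative plus positive power law with an exponential cutoff."""
    return (A1 * E**(-a1) + A2 * E**(+a2)) * np.exp(-E / kT)

def gauss_line(E, norm, E0, sigma):
    """Additive Gaussian emission line (e.g. an Fe line near 6.4 keV)."""
    return norm * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

def cyclotron(E, tau, Ec, width):
    """Multiplicative pseudo-Lorentzian cyclotron absorption line."""
    return np.exp(-tau * (width * E / Ec) ** 2 / ((E - Ec) ** 2 + width ** 2))

def partial_covering_model(E, cont_pars, nh, covering, sigma_E, line_pars, cyc_pars):
    """Partially covered continuum plus iron line, times one cyclotron line.
    sigma_E: photoabsorption cross section per hydrogen atom at each energy (supplied)."""
    cont = npex(E, *cont_pars)
    transmitted = covering * np.exp(-nh * sigma_E) + (1.0 - covering)
    return (cont * transmitted + gauss_line(E, *line_pars)) * cyclotron(E, *cyc_pars)
```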
## Discussion
The results of spectral fitting are somewhat in contrast with the finding above that the global spectral shape remains more or less constant after orbit 3. Within the framework of our spectral model this similarity is obtained by parallel changes in the column depth and in the spectral continuum. Further analysis will have to show if this is an artefact or reality.
A possible scenario to explain the disappearing pulses is that a very thick blob in the surrounding stellar wind – which is known to be clumpy (e.g., Nagase et al., 1986) – temporarily obscures the pulsar. Taking our fit results from above as basis (N$`_{\text{H,max}}`$$``$2$`\times `$10$`^{\text{24}}`$ cm$`^{\text{-2}}`$), the optical depth for Thomson scattering of such a blob would be $``$1.6, reducing the direct component to $``$20%. The scattered radiation would need to come from a relatively large region ($``$10$`^{\text{13}}`$ cm) to destroy coherence. The large fraction of scattered radiation would also explain the relatively increased Fe-line emission and soft excess.
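The arithmetic behind this estimate is simple; the factor of roughly 1.2 electrons per hydrogen atom assumed below is one way to connect N<sub>H</sub> to the Thomson depth and reproduces the quoted τ of about 1.6.

```python
# Quick arithmetic behind the blob estimate above; the electrons-per-hydrogen
# factor is an assumption of this sketch, chosen to connect N_H to the electron
# column for a roughly solar-composition plasma.
import numpy as np

SIGMA_T = 6.652e-25          # Thomson cross section (cm^2)
NH_MAX = 2.0e24              # cm^-2, from the spectral fits quoted above
ELECTRONS_PER_H = 1.2        # assumed electron-to-hydrogen ratio

tau = NH_MAX * SIGMA_T * ELECTRONS_PER_H
direct_fraction = np.exp(-tau)
print(f"tau ~ {tau:.2f}, direct component ~ {direct_fraction:.0%}")
```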
After the quiescent state, when pulsations begin again, the emission in this scenario would be a combination of heavily absorbed direct radiation – the accretion being fueled by some part of the blob – and scattered radiation from a wide region. This would also explain the reduced pulse fraction, due mainly to a higher “pedestal” during the high state as compared to the normal state at the beginning of the observation (see Fig. 3).
# Gas and stellar 2D kinematics in early-type galaxies
## 1. Introduction
It is surprising to see how widely accepted is the assumption that giant early-type galaxies tend to have triaxial shapes (fainter ones being predominantly axisymmetric), when strong and convincing cases of triaxiality are rare (e.g. Merritt 1999 and references therein). Fits to the kinematics of luminous early type galaxies using axisymmetric models are surprisingly good, although this may be linked, in most cases (but see Statler, Dejonghe, & Smecker-Hane 1999), to a critical lack of detailed kinematical information. Thus a more accurate statement would be: we still do not know much about the detailed intrinsic shape and dynamics of early-type galaxies.
In this context, two-dimensional kinematical maps are a prerequisite for the determination of the underlying gravitational potential. Stars contribute most of the visible mass of early-type galaxies and their motions are (almost exclusively) determined by gravitation. However, general axisymmetric and triaxial dynamical models are not easy to build, mainly because of the very large solution space to probe. Gas orbits are thought to be simpler to deal with (as they are generally assumed to be circular or elliptical), but non-gravitational motions can come into play, particularly in the central regions. The realisation, not that long ago (see e.g. Goudfrooij 1997 and references therein), that most early-type galaxies do contain a significant gaseous component, led us to start a program to obtain the 2D kinematics of the stellar AND gaseous components in the central regions of a small sample of early-type galaxies.
## 2. Observations
We have observed about a dozen early-type galaxies using the TIGER Integral Field Spectrograph (IFS) at the CFH Telescope. The TIGER spectrograph provided about 400 spatial elements, homogeneously covering the field of view with a spatial sampling of $`0.^{\prime \prime }39`$.
To obtain both the stellar and gas kinematics, we observed two spectral domains, namely a blue domain around 5200Å, including the Mg triplet as well as Ca and Fe stellar absorption lines, and a red one including the H$`\alpha `$, \[NII\] and \[SII\] emission lines. The spectral sampling was 1.5Å per pixel, with a final resolution of 1700 and 2200 in the blue and red, respectively.
The data have been reduced using dedicated software developed at the Lyon Observatory (Rousset, PhD Thesis, Lyon). The two major difficulties were: first to correct the blue spectra for the contamination by the \[NI\]$`\lambda `$5200 emission line (when present), and second to properly subtract the stellar contribution (mainly the H$`\alpha `$ absorption line) from the red spectra. This was achieved with an algorithm which includes a library of stellar and galaxy spectra (coll. Paul Goudfrooij). Illustrative examples of the resulting subtractions are given in Fig. 1. Maps of the distribution and kinematics of the gas and stellar components were then built for all the galaxies in the sample, and will soon be published in a forthcoming paper through a collaboration with P. Goudfrooij (StSci) and P. Ferruit (Uni. of Maryland & CRA Lyon).
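A very schematic version of the continuum-subtraction step could look as follows: a non-negative combination of stellar template spectra is fit over line-free pixels and subtracted to isolate the emission lines. Velocity shifting and broadening of the templates, which the actual algorithm must handle, are deliberately omitted, and the template library itself is assumed to be provided.

```python
# Very schematic version of the stellar-continuum subtraction described above;
# velocity shifting/broadening of the templates is omitted on purpose.
import numpy as np
from scipy.optimize import nnls

def subtract_continuum(spectrum, templates, line_free_mask):
    """spectrum: (npix,); templates: (ntemplates, npix); line_free_mask: bool (npix,)."""
    A = templates[:, line_free_mask].T          # design matrix restricted to line-free pixels
    weights, _ = nnls(A, spectrum[line_free_mask])
    model = weights @ templates                 # best-fit stellar continuum over all pixels
    return spectrum - model, model, weights
```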
## 3. Results
The ionised gas distributions in these galaxies exhibit a variety of morphologies, including nuclear spirals (e.g. NGC 2974, NGC 4278), point-like emission regions (NGC 3414, NGC 6482), corotating disks (NGC 5838, NGC 2749) and counter-rotating discs (NGC 128). In some objects, part of the gas is clearly coupled with the dust component (e.g. NGC 4374, see Bower et al. 1997), although this is not a systematic feature. In most cases, the gas and stars have different angular momentum axes. One striking example is NGC 1453, in which they are tilted by about 50 degrees with respect to each other (Fig. 2), suggesting a triaxial geometry (see also Pizzella et al. 1997).
The blended H$`\alpha `$/\[NII\] emission line system often exhibits broad wings which could be interpreted as resulting from the presence of a broad H$`\alpha `$ line (a BLR). However, in all cases, these broad wings are also observed in the forbidden \[SII\] emission lines, although with a lower contrast. This argues for an unresolved kinematical gradient in the centre of these galaxies. This is confirmed by the spatial mapping of these wings, whose presence is limited to an unresolved central peak. The fact that these wings are weaker in the \[SII\] lines could be naturally explained by their lower critical density, which diminishes the contribution of high-density regions. This obviously does not mean that BLRs are not present, but sets an upper limit on their contribution to the central emission line spectra.
All our spectra are compatible with the LINER type, although at different levels of activity. This is consistent with the compilation done by Ho, Filippenko, & Sargent (1997) for the 7 objects in common. It is interesting to note that two of the four most active galaxies in our sample do show the presence of a nuclear spiral, the third one (M 87) having a spiral-like gas disc, and the fourth being viewed nearly edge-on. Central spiral structures may therefore play a role in the nuclear activity (see also Regan & Mulchaey 1999).
## 4. Perspectives
While our sample is not complete in any sense, it is striking to observe the morphological and kinematical decoupling of the ionised gas with respect to the stellar component. This certainly hints at an external origin in most cases. We plan to continue this study by using the recently commissioned integral field spectrograph OASIS, mounted on the adaptive optics bonnette of the CFHT. Our understanding of the gas/stars coupling in early-type galaxies will also greatly benefit from the on-going survey at the WHT conducted by the SAURON Consortium (Lyon/Leiden/Durham).
## References
Bower, G. A., Heckman, T. M., Wilson, A. S., Richstone, D. O., 1997, ApJ, 483, 33
Emsellem, E., Arsenault, R., 1997, A&A, 318, 39
Goudfrooij, P., 1997, in ASP Conf. Ser. Vo. 116, eds M. Arnaboldi, G. S. Da Costa, & P. Saha, 338
Ho, L. C., Filippenko, A. V., Sargent, W. L. W., 1997, ApJS, 112, 315
Merritt, D., 1999, PASP, 111, 756
Pizzella, et al. 1997, A&A, 323, 349
Regan, M. W., Mulchaey, J. S., 1999, AJ, 117, 267
Statler, T. S., Dejonghe, H., Smecker-Hane, T., 1999, AJ, 117, 126
# 1 Introduction
## 1 Introduction
The “holographic principle”, or “AdS-CFT correspondence” has been considered as an amazing prediction on the basis of string theory, D-branes, and supergravity, setting off an enormous research activity. It states that the “degrees of freedom” of a quantum field theory in anti-deSitter (AdS) space-time of dimension $`d`$ can be completely identified with the degrees of freedom of a conformal quantum field theory in Minkowski space-time of one dimension less. It was first formulated for supergravity in 5 (+ 5 compactified) dimensions and super Yang-Mills theory in 4 dimensions by J. Maldacena , and was then conjectured by E. Witten to hold as a rather general principle .
The holographic principle arises by a confluence of several ideas arising from string theory and from gravity. It was discussed long ago by ’t Hooft that the degrees of freedom of quantum fields above a black hole horizon are counted by the Bekenstein entropy of the horizon. In string theory, the horizons of certain extremal black hole solutions to classical supergravity with AdS geometry are considered as D3-branes. These appear as the supporting space-time for a 1+3-dimensional superconformal Yang-Mills theory, whose state space should contain that of the supergravity theory.
The AdS-CFT correspondence, yet, is not a feature of string theory. Although string theory happened to play a prominent role in its discovery, the conjecture itself is a statement about ordinary QFT, and its validity should be discussed in the framework of ordinary QFT.
It is well known that the AdS-CFT correspondence admits no simple identification of the respective quantum fields on the two space-time manifolds involved. It has also by now become a familiar conception with many of us that the quantum fields proper are somewhat ambiguous entities in QFT while the invariant entities are the local algebras they generate . If we agree to take this idea seriously, a precise formulation of the conjectured correspondence is possible without any recourse to string theory or to perturbative supergravity, and a rigorous proof is astonishingly simple .
The formulation of a QFT should be such that it admits the unambiguous and relativistically invariant computation of all the physical quantities (such as masses and cross sections) which specify the theory. It has been demonstrated that this possibility is guaranteed if, in a representation of positive energy, only the localization of each observable is known (see ). The covariant assignment of localizations to the observables (or, what amounts to the same, the specification of all observables which are localized in a given region) provides the physical interpretation and therefore determines the theory. This formulation of a QFT does not rely on any idea of quantization of some classical physics.
The idea for the AdS-CFT correspondence is that the same observables can be given different assignments of localizations (in different space-times) in a covariant way, and are thus interpreted differently as a QFT in $`d`$ dimensions or a conformal QFT in $`d1`$ dimensions. We shall establish the conjecture by showing that any covariant assignment of a localization in one space-time to the observables gives rise to another such assignment in the other space-time, and vice versa. But we shall see that it is not always adequate to think of localization in terms of smearing of fields.
So what is the role of string theory? It might turn out, in the long run, that string theory is just an ingenious device to produce nonperturbative QFT’s, and might not at all exceed the realm of ordinary QFT. This need not be a reason for disappointment but may be celebrated as a scheme which circumvents or soothes the notorious difficulties attached to Lagrangian perturbation theory. If this is correct, then it is no surprise if “stringy” pictures lead to statements about QFT.
While locality has a central position in QFT, the intuition prevails that string theory is “less local” than ordinary QFT, and hence is a new type of theory. But quite to the contrary, string theory is even more local than is necessary in QFT: namely, if one accepts that (free) fields carrying different inner degrees of freedom commute irrespective of their localization, then, treating transversal string excitations as inner degrees of freedom, they may cause excited string fields to commute even at time-like separation. This crude idea has been elaborated with some rigor for free strings . I personally take it as another hint that, as the dust settles, string theory might bring us back to QFT, hopefully at an advanced level of insight.
## 2 Algebraic holography
We formulate the AdS-CFT conjecture as the assertion that the local QFT’s on AdS space-time $`\mathrm{AdS}_{1,d-1}`$ are in 1:1 correspondence with the conformally invariant local QFT’s on compactified Minkowski space-time $`\mathrm{CM}_{1,d-2}`$. Corresponding theories have the same state space.
AdS space-time is usually described as the hypersurface $`\{x:x_0^2-\vec{x}^2+x_d^2=R^2\}`$ in an ambient Minkowski space with two time directions (or rather, its quotient by identification of antipodal points $`x,-x`$). The embedded space has only one intrinsic time direction and a global time orientation. A serious reason for unease is that it has closed time-like curves. Yet, the Klein-Gordon field constructed in is, for quantized values of the mass, a perfectly sensible QFT on AdS space-time.
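For orientation, one standard global parametrization of this hypersurface (our own choice of coordinates, not used explicitly in what follows) is

$$x_0=R\,\cosh\rho\,\cos\tau,\qquad x_d=R\,\cosh\rho\,\sin\tau,\qquad \vec{x}=R\,\sinh\rho\;\hat{\Omega},$$

with induced metric

$$ds^2=R^2\left(\cosh^2\!\rho\,d\tau^2-d\rho^2-\sinh^2\!\rho\,d\Omega_{d-2}^2\right);$$

on the hyperboloid itself the global time $`\tau `$ is $`2\pi `$-periodic, which is precisely the origin of the closed time-like curves, and the conformal boundary sits at $`\rho \to \infty `$.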
QFT on AdS space-time has also been studied in . No a priori conflict with the general framework of QFT was found, except that causal influences (signalled by non-commutation of local observables) can propagate only along geodesics. This feature (well known also from chiral observables in 2D conformal QFT’s) must presumably be interpreted as the absence of proper interaction. Thus, interacting QFT’s will, like the Klein-Gordon field of generic mass , live on a covering space.
One may also regard AdS space-time $`\mathrm{AdS}_{1,d-1}`$ as a “cosmological deformation” of Minkowski space-time with the maximal number of isometric symmetries. The symmetry group (AdS group) is the Lorentz group SO($`2,d-1`$) of the ambient space, the boosts in the second time direction acting as deformed translations.
The very same group, SO($`2,d-1`$), is also the symmetry group of conformal QFT in $`d-1`$ dimensions, that is on compactified Minkowski space-time $`\mathrm{CM}_{1,d-2}`$. In fact, the manifold $`\mathrm{CM}_{1,d-2}`$ coincides with the conformal boundary of AdS space-time (defined by the conformal structure of $`\mathrm{AdS}_{1,d-1}`$ induced by the AdS metric), and its conformal structure coincides with the one inherited by restriction. The AdS group preserves the boundary and acts on it like the conformal group.
This observation is pivotal for the asserted 1:1 identification of QFT’s: the Hilbert space and the representation of the group SO($`2,d-1`$) are the same for both theories. What differs is the interpretation of the group: e.g., the Hamiltonian of the AdS group (the rotation between the two ambient time directions) corresponds to a periodic subgroup of the conformal group generated by time translations and time-like special conformal transformations. Space-like AdS translations correspond to dilatation subgroups of the conformal group.
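In one formula (standard conformal-group conventions with the AdS radius set to 1; normalizations vary between authors, so this is meant for orientation only), the AdS time translations correspond to the compact conformal Hamiltonian of the boundary theory,

$$H_{\rm AdS}\;\longleftrightarrow\;\tfrac{1}{2}\left(P^0+K^0\right),$$

whose spectrum is discrete with integer spacing, in accord with the periodicity of AdS time, while the boundary Hamiltonian $`P^0`$ alone has continuous spectrum (cf. the discussion of the energy spectra below).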
The set of observables, considered as operators on the common Hilbert space, is also the same for corresponding theories. What differs is the assignment of a localization to a given observable. The same operator is said to be localized in a suitable region $`X\subset \mathrm{AdS}_{1,d-1}`$ as an observable of the AdS theory, and it is said to be localized in another region $`Y\subset \mathrm{CM}_{1,d-2}`$ as an observable of the conformal theory.
For this reinterpretation to work, it must be compatible with covariance and locality, and this is the nontrivial issue. The group SO($`2,d-1`$) must act geometrically in both interpretations, and hence the sets of regions $`X`$ and $`Y`$ above must be related by a bijective correspondence which respects the action of the group. Furthermore, this bijection must map causal complements with respect to one geometry into causal complements with respect to the other geometry since the geometric notion of causal independence is coded algebraically by local commutativity.
These constraints already fix the bijection between regions $`X`$ in AdS space-time and regions $`Y`$ in conformal Minkowski space-time of one dimension less. Here it is:
The basic regions in $`\mathrm{AdS}_{1,d-1}`$ are wedge-like regions which arise as the connected components of the intersection of AdS space-time with the ambient space region $`\{x:x^1>|x^0|\}`$, and all SO($`2,d-1`$) transforms thereof. Every such intersection provides a pair of wedge regions which turn out to be causally complementary in the sense that they cannot be connected by a causal geodesic.
The basic regions in $`\mathrm{CM}_{1,d-2}`$ are nonempty intersections of a forward and a backward light-cone, called double-cones. Each wedge region in $`\mathrm{AdS}_{1,d-1}`$ intersects the boundary in a double-cone in $`\mathrm{CM}_{1,d-2}`$. This yields a bijection between wedge regions in $`\mathrm{AdS}_{1,d-1}`$ and double-cone regions in $`\mathrm{CM}_{1,d-2}`$ which respects the action of the group SO($`2,d-1`$) by construction, and causal complements by inspection.
So far, the discussion is entirely geometric. The algebraic part is very simple: we declare an operator which, as an observable of the boundary theory, is localized in some double-cone, to be localized, as an observable of the AdS theory, in the associated wedge region, or vice versa. A minute’s thought shows that this prescription yields a 1:1 correspondence between local QFT’s on AdS space-time $`\mathrm{AdS}_{1,d-1}`$ and local QFT’s on conformal Minkowski space-time $`\mathrm{CM}_{1,d-2}`$ . The associated theories have the same Hilbert space and the same representation of the group SO($`2,d-1`$), while their respective physical interpretations are different.
One issue remains to be discussed, the positivity of the energy spectrum. As we mentioned before, the two interpretations go along with different Hamiltonians. By a standard argument, it is well known that if one of these Hamiltonians has positive spectrum, then so does the other. Yet, the spectra are very different! The AdS Hamiltonian has discrete spectrum due to periodicity in time, while the boundary Hamiltonian has conformally invariant, hence continuous spectrum.
We shall discuss that the notion of sharp localization acquires a very different meaning in corresponding theories. While sharp boundary localization obviously corresponds to localization at space-like infinity of AdS space-time, sharp AdS localization turns out to have a very delicate meaning in terms of boundary localization.
## 3 Examples
The proof sketched above is a structural proof for a structural statement. It does not tell us which particular AdS theory is associated with which particular conformal QFT. This has to be discussed case by case, as will be exemplified below.
Our proof also doesn’t tell us which observables are localized in bounded regions of AdS space-time. The standard procedure is to say that an observable is localized in an arbitrary AdS region $`X`$ if it is localized in all wedge regions which contain $`X`$. The corresponding algebraic determination of localization in AdS double-cone regions yields the following general structural results .
In $`d\geq 1+2`$, if there are double-cone localized AdS observables, the boundary theory must violate the additivity property that the observables localized in small double-cones covering the space-like basis of a large double-cone generate the observables localized in the large double-cone. While this additivity should always hold for theories generated by gauge-invariant Wightman fields, its violation seems characteristic for non-abelian gauge theories, as Wilson loops cannot be expressed in terms of point-like gauge invariant quantities. Thus, AdS theories which are described in terms of proper local fields and which consequently possess observables localized in bounded regions, should correspond to gauge-type conformal boundary theories.
Conversely, a boundary theory which is additive in the above sense must correspond to a QFT on AdS space-time which does not possess local fields. To prevent confusion: the latter still has as many wedge-localized observables as there are observables in the boundary theory, which however cannot be “detached” from infinity. Impossibility of point-like localization does not mean that the theory is non-local, since observables localized in causally complementary wedges do commute as they should. Topological field theories (such as Chern-Simons with Wilson lines attached to the boundary as wedge-localized observables) come to one’s mind .
The situation is much more favorable in $`d=1+1`$. Then $`\mathrm{CM}_{1,d-2}=S^1`$ and the boundary theory is a chiral conformal QFT. The Penrose diagram of two-dimensional AdS space-time is the strip $`\mathbb{R}\times (0,\pi )`$ with points $`(t,x)\sim (t+\pi ,\pi -x)`$ identified. This is in fact a Möbius strip. Light rays are $`45^{\circ }`$ lines. A wedge region is a space-like triangular region enclosed by a future and a past directed light ray emanating from a point in the interior of the strip, and the associated boundary “double-cone” is the interval cut out of the boundary $`S^1`$ by this wedge.
A double-cone in the Möbius strip is the intersection of two wedges which cut out of the boundary $`S^1`$ two intervals $`I_1,I_2`$ which overlap at both ends: $`I_1\cap I_2=J_1\cup J_2`$.
An AdS observable in $`d=1+1`$ is thus localized in a double-cone if, as a chiral observable, it is at the same time localized in both of the large intervals $`I_1,I_2`$. This is of course true for the chiral observables localized in $`J_1`$ or $`J_2`$, but in general there will be more than these. The additional AdS double-cone observables remain broadly localized (if considered as boundary observables) even if the double-cone, and hence the intervals $`J_1,J_2`$ are small. We shall see examples of such observables below. This again does not mean any non-locality of AdS double-cone observables, since they do commute whenever two double-cones sit within causally complementary wedges. It only means that their description in terms of boundary fields may be non-local.
We present an example in $`d=1+1`$: a massless conserved vector current $`j^\mu `$ on AdS space-time represented by the Möbius strip. Its equations of motion are solved in the plane by $`j^0(t,x)=j_R(t-x)+j_L(t+x)`$ and $`j^1(t,x)=j_R(t-x)-j_L(t+x)`$. Restricting this solution to the strip $`\mathbb{R}\times (0,\pi )`$ and requiring it to respect the identification of points $`(t,x)\sim (t+\pi ,\pi -x)`$ amounts to putting $`j_L=j_R\equiv j`$ with period $`2\pi `$. Canonical quantization is achieved by representing $`[j(u),j(u^{\prime })]=i\delta ^{\prime }(u-u^{\prime })`$ on a Fock space.
This is indeed a U(1) current, the simplest conformal QFT on $`S^1`$. But its degrees of freedom are redistributed (through the combinations $`j^\mu `$) over the Möbius strip.
We now compute observables localized in an AdS double-cone (giving rise to intervals $`I_1\cap I_2=J_1\cup J_2`$ as before) in the interior of the strip. Typical boundary observables localized in an interval $`I`$ are of the form $`W(f)=\mathrm{exp}ij(f)`$ where $`f`$ is a periodic smearing function which is constant outside the interval $`I`$. Adding a constant $`c`$ to $`f`$ is immaterial since the charge operator $`\oint j(u)\,du`$ is a multiple $`q`$ of unity in every irreducible representation, so $`W(f+c)=e^{icq}W(f)`$. Now, consider a smearing function which has constant but different values on both gaps between $`J_1,J_2`$. Then $`W(f)`$ is localized as a boundary observable in both intervals $`I_1,I_2`$, but it is neither localized in $`J_1`$ nor in $`J_2`$, nor is it generated by such observables. As an AdS observable, $`W(f)`$ is localized in a double-cone, and operators of this form generate all observables in the double-cone. Suitably regularized limits of $`W(f)`$ even yield point-like local fields on the Möbius strip $`\varphi (t,x)=\mathrm{exp}i\alpha \int _{t-x}^{t+x}j(u)\,du`$.
As the size of a double-cone well within the strip shrinks, the correlations of the boundary observables within the intervals $`J_1,J_2`$ disappear, giving rise to two decoupled (chiral) current algebras, one for $`J_1`$ and one for $`J_2`$, among the AdS observables of the double-cone. The additional observables of the form $`W(f)`$ as discussed above behave in this limit as vertex operators $`E_\alpha (t+x)E_{-\alpha }(t-x)`$ carrying opposite left and right chiral charge. Thus, locally the QFT on AdS space-time looks like the bosonic sector of the Thirring (“U(1) WZW”) model.
Figure 1: A pictorial representation of mixing between quarkonium and glueball physical states through common meson-meson channels, included in the Dyson summation of the scalar bound state propagator.
Comment on the “Coupling Constant and
Quark Loop Expansion for Corrections to
the Valence Approximation” by Lee and Weingarten
M. Boglione <sup>1</sup>
and
M.R. Pennington <sup>2</sup>
<sup>1</sup> Vrije Universiteit Amsterdam, De Boelelaan 1081,
1081 HV Amsterdam, The Netherlands
<sup>2</sup> Centre for Particle Theory, University of Durham
Durham DH1 3LE, U.K.
Lee and Weingarten have recently criticized our calculation of quarkonium and glueball scalars as being “incomplete” and “incorrect”. Here we explain the relation of our calculations to full QCD.
PACS numbers: 14.40.Cs, 12.40.Yx, 12.39.Mk
Submitted to Physical Review 28th April 1999
Lattice techniques provide an invaluable tool for calculating the properties of hadrons . As a matter of practical necessity, these calculations involve approximations to full QCD. While the spectrum of glueballs has been computed with increasing precision , this is within quenched QCD. To make contact with experiment requires one to get closer to the full theory by allowing for the creation of $`q\overline{q}`$ pairs. Different attempts to do this for light scalars, both quarkonium and glueball, have been made by Boglione and Pennington (BP) and by Lee and Weingarten (LW) . In a very recent paper, LW have criticized the former attempt as being incomplete and incorrect. We believe their extensive discussion is in error in claiming key aspects of QCD have been omitted by BP. Let us explain.
The BP treatment, like that of Tornqvist and others , is based on a specific approximation to QCD, in which only hadronic (color singlet) bound states and their interactions occur. One begins with the QCD Lagrangian, for which the only parameters are quark masses and the strength of the quark-gluon interaction. $`\mathrm{\Lambda }_{QCD}`$, and other scheme-dependent parameters, enter on renormalization. One then formally integrates out the quark and gluon degrees of freedom and obtains a Lagrangian involving only hadronic fields with their interactions, in an infinite variety of ways, all of which are determined by the parameters of the underlying theory. We then focus on the ten lightest scalar states. The bare states are realized by switching off all their interactions. Consequently, their propagators are those of bare particles: they are stable. To take this limit, each coupling in the effective Lagrangian of hadronic interactions is multiplied by a parameter $`\lambda _i`$ and these $`\lambda _i`$ are taken to zero. This does not necessarily correspond to a simple limit of QCD. Nevertheless, we plausibly assume that the ten lightest non-interacting states, that result in this limit, are the nine members of an ideally mixed quarkonium multiplet and an (orthogonal) glueball. Notice that the names quarkonium and glueball are just a convenient way of referring to the quantum numbers of these states. Individual quark and gluon fields play no role. However, they are, of course, implicit in the formation of hadronic bound states.
The Tornqvist and BP treatment is then to switch on the “dominant” interactions of the light scalars by tuning the appropriate parameters $`\lambda _i`$ from $`0`$ to $`1`$ for the couplings of the bound states to two (or more) pseudoscalars (for the glueball, the four pion channel may be particularly important). It is by turning on the interactions that the bare states are “dressed”. Fig. 1 represents the Dyson summation of such contributions to the inverse propagator. This dressing does not correspond to the creation of a single $`q\overline{q}`$ pair. Multiple pairs and all the gluons (Fig. 1) needed to generate color singlets and respect the chiral limit are implicitly included. Indeed, it is well-known , that any picture of pions as simple $`q\overline{q}`$ systems loses contact with the Goldstone nature of the light pseudoscalars, so crucial for describing the world accessible to experiment. This important (chiral) limit is embodied in our calculation. The resulting hadronic interactions have a dramatic effect on the scalar sector. For instance, the $`a_0`$ and an $`f_0`$ emerge at 980 MeV with large $`K\overline{K}`$ components , even though their bare states are members of an ideal multiplet 400–500 MeV/c$`^2`$ heavier. LW criticize these results as not including the specific gluonic counterterm, Fig. 2, and not explaining why.
The explanation is clear : our analysis only includes color singlet states internally as unitarity requires. Colored configurations of whatever kind are implicitly included and not readily dissected. If such counterterms are relevant to the dressing by pseudoscalar (Goldstone) pairs, they have been included.
In spirit, our analysis is close to that of Ref. . Propagators are dressed by hadron clouds, as in Fig. 1. These determine the right hand cut structure of meson-meson scattering amplitudes. However, in the work of Refs. , this $`s`$–channel dynamics is assumed to control the whole scattering amplitude, with left hand cut effects (and crossed-channel exchanges) neglected, even though this violates crossing symmetry . In our treatment , particularly here where we consider mixing, only propagators are computed and no further assumptions are needed.
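Schematically, the dressed propagator obtained from the Dyson summation of Fig. 1 can be written, in a notation of our own (signs and normalizations differ between authors), as a matrix in the space of bare quarkonium and glueball states,

$$\left[P^{-1}(s)\right]_{ab}=\left(m_a^2-s\right)\delta_{ab}-\Pi_{ab}(s),\qquad \mathrm{Im}\,\Pi_{ab}(s)=\sum_c g_{ac}(s)\,g_{bc}(s)\,\rho_c(s)\,\theta(s-s_c),$$

where $`a,b`$ label the bare states, $`c`$ the open meson-meson (and multipion) channels with phase space $`\rho _c`$, and the real parts of $`\Pi _{ab}`$ follow from dispersion relations; the off-diagonal elements generated by common hadronic channels are what mixes the quarkonium and glueball configurations.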
Of course, our analysis does have approximations. For instance, the scale of hadronic form-factors for a gluish state is assumed to be similar to that of well-established $`q\overline{q}`$ hadrons. This may not be the case. Moreover, our treatment only incorporates interactions with two pseudoscalars, and to a lesser extent with multipion channels. It is these that determine both the sign and magnitude of the mass-shifts generated. For the quarkonium states, the dressing by the light two pseudoscalar channels always produces a downward shift in mass. The size of these shifts of between one and five hundred MeV (depending on flavor) is set phenomenologically by the $`K_0^{}(1430)`$. A much smaller shift of 10–25 MeV for the precursor glueball is set by the strength of the glueball to two pseudoscalar coupling calculated on the lattice by Sexton et al. . The suppression of the couplings of the resulting “dressed” hadron to two pseudoscalars happens irrespective of the exact mass of the bare glueball . The inclusion of more channels, like $`\rho \rho `$ and $`K^{}\overline{K^{}}`$, may well be important in dressing this state and alter the rather small mass-shifts we found for that sector both in magnitude and sign. Of course, only physically accessible hadronic intermediate states contribute to the imaginary part of the propagator, Fig. 1. Unopen channels contribute only to the real (or dispersive) part and result in renormalizations of the undressed parameters.
By including, in our calculation key aspects of the hadron world, in the way described here and in , we believe we must have approached closer to full QCD — despite the criticism of Lee and Weingarten.
Note added: Long after the submission of this paper to Physical Review, Lee, Vaccarino and Weingarten have repeated their comments identically in Refs. .
The authors acknowledge partial support from the EU-TMR Program EuroDA$`\mathrm{\Phi }`$NE, Contract No. CT98-0169.
# Transverse Depinning in Strongly Driven Vortex Lattices with Disorder
## Abstract
Using numerical simulations we investigate the transverse depinning of moving vortex lattices interacting with random disorder. We observe a finite transverse depinning barrier for vortex lattices that are driven with high longitudinal drives, when the vortex lattice is defect free and moving in correlated 1D channels. The transverse barrier is reduced as the longitudinal drive is decreased and defects appear in the vortex lattice, and the barrier disappears in the plastic flow regime. At the transverse depinning transition, the vortex lattice moves in a staircase pattern with a clear transverse narrow-band voltage noise signature.
The dynamics of driven vortex lattices interacting with disorder exhibit a wide variety of interesting nonequilibrium behavior and dynamic phase transitions. Experiments , simulations , and theory suggest that at low drives the vortex lattice is disordered and exhibits plastic or random flow while at higher drives the lattice can undergo a reordering transition and flow elastically. In this highly driven state it was suggested by Koshelev and Vinokur that the flux lattice forms a moving crystal. In subsequent theoretical work, Giamarchi and Le Doussal proposed that the reordered state is actually an ordered moving glass phase and that the vortices travel in highly correlated static channels. In other work , it has been proposed that these channels may be decoupled, producing a smectic structure. Simulations and experiments have found evidence for both smectic as well as more ordered moving vortex lattice structures.
A particularly intriguing prediction of the theory of Giamarchi and Le Doussal is that, in the highly driven phase, the moving lattice has a diverging potential barrier against a transverse driving force, resulting in the existence of a finite transverse critical current. This transverse critical current has been observed in simulations by Moon et al. and Ryu et al. in the highly driven phase. Large transverse barriers have also been seen in systems containing periodic pinning . Also, recent experiments involving STM images of moving vortices reveal that the vortex lattice moves along one of its principle axes rather than in the direction of the drive (as predicted by ), suggesting that the moving lattice is stable to a small transverse force component.
Although the existence of a transverse critical current has been confirmed in simulations, there has been no numerical study of the properties of the critical current, such as the dependence of the barrier size on the strength of the longitudinal drive or on the defectiveness of the vortex lattice. It would also be very interesting to understand the dynamics of the vortices at the transverse depinning transition, and relate this to experimental measures such as voltage noise spectra.
In this work we report a simulation study of the transverse depinning transition in driven vortex lattices interacting with random disorder. We find that at high longitudinal drives, when the vortex lattice is defect free and moves in correlated 1D channels along one of its principle axes, a finite transverse depinning barrier is present. For lower drives the transverse barrier is reduced but still present, even in the decoupled channel limit when adjacent channels slip past each other and some defects in the vortex lattice appear. For the lowest drives the flux lattice becomes highly defected as it enters the plastic flow phase, and the transverse barrier is lost. In the high driving limit at the transverse depinning transition, the vortex lattice spends most of its time moving along the longitudinal direction, but periodically jumps in the transverse direction by one lattice constant. The vortex lattice can thus be seen to move in a staircase like fashion, keeping its principle axis aligned in the original direction of longitudinal driving. As the transverse force is increased the frequency of the jumps in the transverse direction increases. This motion produces a clear washboard signal in the transverse velocity which can be detected for transverse drives up to ten times the transverse depinning threshold.
We consider a 2D slice of a system of superconducting vortices interacting with a random pinning background. The applied magnetic field $`𝐇=H\widehat{𝐳}`$ is perpendicular to our sample, and we use periodic boundary conditions in $`x`$ and $`y`$. The $`T=0`$ overdamped equation of motion for a vortex is:
$$𝐟_i=\eta 𝐯_i=𝐟_i^{vv}+𝐟_i^{vp}+𝐟_d+𝐟_i^T,$$
(1)
where $`𝐟_i`$ is the total force acting on vortex $`i`$, $`𝐯_i`$ is the velocity of vortex $`i`$, and $`\eta `$ is the damping coefficient, which is set to 1. The repulsive vortex-vortex interaction is given by $`𝐟_i^{vv}=\sum _{j=1}^{N_v}A_vf_0K_1(|𝐫_i-𝐫_j|/\lambda )\widehat{𝐫}_{ij}`$ where $`𝐫_i`$ is the position of vortex $`i`$, $`\lambda `$ is the penetration depth, $`f_0=\mathrm{\Phi }_0^2/8\pi \lambda ^3`$, the prefactor $`A_v`$ is set to 3 , and $`K_1(r/\lambda )`$ is a modified Bessel function which falls off exponentially for $`r>\lambda `$, allowing a cutoff in the interactions to be placed at $`r=6\lambda `$ for computational efficiency. We use a vortex density of $`n_v=0.75/\lambda ^2`$ giving the number of vortices $`N_v=864`$ for a sample of size $`36\lambda \times 36\lambda `$. The pinning is modeled as randomly placed attractive parabolic traps of radius $`r_p=0.3\lambda `$ with $`𝐟_i^{vp}=-(f_p/r_p)(|𝐫_i-𝐫_k^{(p)}|)\mathrm{\Theta }(r_p-|𝐫_i-𝐫_k^{(p)}|)\widehat{𝐫}_{ik}^{(p)}`$, where $`𝐫_k^{(p)}`$ is the location of pin $`k`$, $`\mathrm{\Theta }`$ is the Heaviside step function, $`\widehat{𝐫}_{ij}=(𝐫_i-𝐫_j)/|𝐫_i-𝐫_j|`$ and $`\widehat{𝐫}_{ik}^{(p)}=(𝐫_i-𝐫_k^{(p)})/|𝐫_i-𝐫_k^{(p)}|`$. The pin density is $`n_p=1.0/\lambda ^2`$ and the pinning force is $`f_p=1.5f_0`$. The Lorentz force from an applied current $`𝐉=J\widehat{𝐲}`$ is modeled as a uniform driving force $`𝐟_d`$ on the vortices in the $`x`$-direction. We initialize the vortex positions by performing simulated annealing with $`f_d/f_0=0.0`$. We then gradually increase $`f_d`$ to its final value by repeatedly increasing $`f_d`$ by $`0.004f_0`$ and remaining at each drive for $`10^4`$ time steps, where $`dt=0.02`$. If we increase the drive more rapidly than this, the reordered vortex lattice that forms at higher drives may fail to align its principal axis in the direction of the driving. Slow increases in $`f_d`$ always produce an aligned lattice. Once the final $`f_d`$ value is reached we equilibrate the system for an additional $`2\times 10^4`$ steps and then begin applying a force in the transverse direction $`f_d^y`$ which we increase by $`0.0001f_0`$ every $`10^4`$ time steps. We monitor the transverse velocities $`V_y=(1/N_v)\sum _{i=1}^{N_v}𝐯_i\cdot \widehat{𝐲}`$ to identify the transverse critical current.
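As a rough illustration of how such a $`T=0`$ overdamped update can be coded (a minimal, unoptimized sketch of our own; parameter names, the neighbor loop, and the treatment of boundary conditions are ours and not taken from the actual simulation code), a single Euler step might look like:

```python
import numpy as np
from scipy.special import k1  # modified Bessel function K_1

def euler_step(r, pins, fp=1.5, rp=0.3, Av=3.0, lam=1.0, L=36.0,
               fd=(3.0, 0.0), dt=0.02):
    """One T=0 overdamped Euler step with eta = 1: dr/dt = f^vv + f^vp + f_d.

    Forces are in units of f_0 and lengths in units of lambda, so Av=3 and
    fp=1.5 correspond to the parameter values quoted in the text.
    """
    N = len(r)
    f = np.tile(np.asarray(fd, float), (N, 1))          # uniform Lorentz drive
    for i in range(N):
        # vortex-vortex repulsion, K_1(r/lambda), cut off at 6*lambda
        d = r[i] - r
        d -= L * np.round(d / L)                        # minimum-image PBC
        dist = np.hypot(d[:, 0], d[:, 1])
        m = (dist > 0) & (dist < 6 * lam)
        f[i] += np.sum(Av * k1(dist[m, None] / lam) * d[m] / dist[m, None], axis=0)
        # attractive parabolic pins of radius rp: pull the vortex toward the pin
        dp = r[i] - pins
        dp -= L * np.round(dp / L)
        distp = np.hypot(dp[:, 0], dp[:, 1])
        mp = distp < rp
        f[i] += np.sum(-(fp / rp) * dp[mp], axis=0)
    return (r + dt * f) % L
```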
In Fig. 1 we show $`V_y`$ versus the transverse drive $`f_d^y`$ at longitudinal drives of $`f_d/f_0=1.0`$ and $`f_d/f_0=3.0`$ for a system with a longitudinal depinning threshold of $`f_c^x/f_0\approx 0.5`$. For $`f_d/f_0=3.0`$ the vortex lattice is free of defects and the vortices move in well defined 1D channels as seen from the vortex trajectories in the right inset, in agreement with previous simulations . In this case there is clear evidence for a transverse barrier with $`f_c^y/f_c^x\approx 0.01`$, approximately 100 times smaller than the longitudinal depinning threshold, in agreement with earlier simulations . For $`f_d/f_0=1.0`$, the vortex lattice is highly defected and the 1D channel structure is lost (as seen in the left inset of Fig. 1). In this case the transverse barrier is absent since the lattice has no particular alignment and can readily change the direction of its motion. In the absence of pinning, $`f_c^y=0`$ for all drives $`f_d`$, as indicated by the top curve in Fig. 1.
We ran a series of simulations in which the final longitudinal drive $`f_d`$ was varied in order to determine the dependence of the magnitude of the transverse barrier on the magnitude of the longitudinal drive, as well as on the density of defects in the vortex lattice. In Fig. 2 we plot the resulting transverse depinning thresholds $`f_c^y`$ and the fraction of six-fold coordinated vortices $`P_6`$ (calculated from the Voronoi or Wigner-Seitz cell construction) as a function of $`f_d`$. For longitudinal drives $`f_d/f_0>1.5`$, there are no defects in the vortex lattice (indicated by the fact that $`P_6\approx 1.0`$) and $`f_c^y`$ is roughly constant, $`f_c^y/f_c^x\approx 0.01`$. Below $`f_d/f_0\approx 1.5`$ defects begin to appear in the vortex lattice as adjacent moving channels decouple, and the overall vortex lattice develops a moving smectic structure. The transverse critical current $`f_c^y`$, which is still finite in this phase, becomes progressively reduced as more defects are generated. At the lowest drives, $`f_d/f_0\approx 1.0`$, the transverse critical force is lost when the 1D channels are completely destroyed and the vortex lattice enters the amorphous plastic flow phase shown in the left inset of Fig. 1. The dislocations in the lattice, which were aligned perpendicular to the vortex motion at $`f_d/f_0>1.0`$, become randomly aligned at the transition to plastic flow. The loss of $`f_c^y`$ thus coincides with the loss of alignment of the defects and the destruction of the 1D channels.
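For completeness, a minimal (non-periodic) sketch of how a coordination-number fraction such as $`P_6`$ can be extracted from a snapshot of vortex positions, using a Delaunay triangulation as a proxy for the Voronoi construction; this is our own illustration and ignores the periodic-boundary corrections a production analysis would need:

```python
import numpy as np
from scipy.spatial import Delaunay

def p6(points):
    """Fraction of six-fold coordinated particles (boundary and PBC effects ignored)."""
    tri = Delaunay(points)
    indptr, _ = tri.vertex_neighbor_vertices
    coordination = np.diff(indptr)          # number of Delaunay neighbors per vertex
    return np.mean(coordination == 6)

# e.g. p6(np.random.rand(864, 2) * 36.0) gives a value well below 1 for a disordered set
```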
We have also checked the effect of finite system size on the magnitude of the transverse critical force for $`f_d/f_0=1.5,2.0`$ and $`3.0`$ for different system sizes ($`L=24\lambda `$, $`36\lambda `$, $`48\lambda `$ and $`60\lambda `$). In the inset of Fig. 2, we show that $`f_c^y/f_c^x`$ is not affected by the system size. These results support the idea that it is the presence of defects in the lattice that reduce or destroy the transverse barrier, rather than any type of matching effect with the system size, and that as long as some form of channeling occurs the barrier will still be present.
In order to view the dynamics of the vortex lattice at the transverse depinning threshold we plot in Fig. 3 the vortex positions and trajectories for the same system shown in Fig. 1 with $`f_d/f_0=3.0`$ and $`f_d^y/f_c^x=0.011`$. For clarity we have highlighted a particular row of vortices. In Fig. 3(a) the principal axis of the vortex lattice is aligned with the direction of the drive and the ordered lattice is moving along this axis. In Fig. 3(b) the entire vortex lattice has translated by one lattice constant in the transverse direction. During the transition the lattice moves at an angle to the longitudinal drive, following a different axis of the lattice. Once the vortices have moved one lattice constant transverse to the drive, they begin moving along the same channels that formed before the transverse translation. After this, the vortices move along the longitudinal direction once again, as in Fig. 3(c), before jumping by another lattice constant in the transverse direction. At the transverse depinning transition, the vortex lattice thus moves in a staircase-like manner, always keeping its principal axis aligned in the direction of the original longitudinal drive and always translating along one of the axes of the lattice. As $`f_d^y`$ is increased the frequency of jumps in the transverse direction increases. If $`f_d^y`$ is increased to a high enough value, we have found evidence that the vortex lattice will reorient itself with the net driving force via the creation of a grain boundary. This will be discussed in more detail elsewhere. The transverse depinning transition is unlike the longitudinal depinning transition in that the latter occurs through plastic deformations of the lattice and the generation of a large number of defects. In contrast, the transverse depinning transition is elastic.
The fact that the vortices move periodically by a lattice constant in the transverse direction is a result of the fact that the longitudinal channels followed by the vortices are uniquely determined by the underlying disorder . The vortices jump from one of these stable channels to another, giving the same effect as a washboard potential. This periodic effect occurs only for a moving lattice in which 1D channels have formed; to a stationary lattice, the disorder would appear random.
A consequence of the staircase-like vortex motion just above the transverse depinning threshold is that the net transverse vortex velocity at a fixed $`f_d^y`$ should show a clear washboard frequency which should increase for increasing $`f_d^y`$. In Fig. 4(a) we plot $`V_y`$ for samples with $`f_d^y`$ held fixed at several different values just above the transverse depinning threshold for a system with $`f_d/f_0=3.0`$. For $`f_d^y/f_c^x=0.011`$ the $`V_y`$ shows periodic pulses which correspond to the correlated transverse jumps of the vortex lattice seen in Fig. 3. The flat portions of the voltage signal correspond to time periods when the lattice is moving only in the longitudinal direction, between hops. For increasing transverse drive the frequency of these pulses also increases. The additional structure in the $`V_y`$ voltage pulses at lower values of $`f_d^y`$ is characteristic of the underlying pinning, and varies for different disorder realizations. It occurs when the vortex lattice moves slightly unevenly, with a small wobble, but no defects or tearing occur in the lattice. The main feature of large periodic pulses is always observed, and the wobble dies away at larger transverse drives. In Fig. 4(b) we show that the Fourier transform of the velocity signal $`V_y`$ for a driving force of $`f_d^y/f_c^x=0.016`$ exhibits a resonance frequency at $`\nu =6.0\times 10^{-5}`$ inverse MD steps. In Fig. 4(c), the resonant frequency increases linearly with $`f_d^y`$. We find that this resonance persists for $`f_d^y`$ up to ten times larger than the transverse depinning threshold. It should be possible to detect this washboard frequency with Hall-noise measurements.
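Extracting the washboard frequency from a recorded $`V_y`$ trace is a simple discrete Fourier analysis; a minimal sketch (our own, with arbitrary array names) is:

```python
import numpy as np

def washboard_frequency(vy, dt=1.0):
    """Dominant frequency (in inverse MD steps) of a V_y time series."""
    vy = np.asarray(vy, float) - np.mean(vy)        # remove the DC level
    power = np.abs(np.fft.rfft(vy)) ** 2
    freqs = np.fft.rfftfreq(len(vy), d=dt)
    return freqs[1:][np.argmax(power[1:])]          # skip the zero-frequency bin
```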
In recent experiments employing an STM to directly image a slowly moving vortex lattice , evidence for staircase-like motion of the flux lattice has been observed. In these experiments the direction of the driving force could not be directly controlled, but was assumed to be at a slight angle with respect to the principle vortex lattice vector, so that a transverse component of the driving force was present. Further experiments in which the magnitude and direction of the drive can be directly controlled are needed; however, experimental imaging techniques such as STM or Lorentz microscopy seem highly promising.
In summary we have investigated the transverse depinning of moving vortex lattices interacting with random disorder. We find that for high longitudinal drives where the vortex lattice is defect free a finite transverse barrier forms. For lower drives where defects in the vortex lattice form and the vortex lattice has a smectic structure the transverse barrier is reduced but still finite. In the highly disordered plastic flow phase the transverse barrier is absent. The transverse depinning transition is elastic, unlike the plastic longitudinal depinning transition, and near this transition the vortex lattice moves in a staircase-like fashion. We observe a washboard frequency in the transverse voltage signal which can be detected for transverse drives up to ten times the depinning drive.
We thank T. Giamarchi, P. Kes, P. Le Doussal, F. Nori, R. Scalettar, and G. Zimányi for helpful discussions. We acknowledge support from CLC and CULAR, administered by the University of California.
# Recent N∗ Results From 𝐽/𝜓 Decays
## 1 Introduction
Nucleons are the most common form of hadronic matter on the earth and probably in the whole universe. Understanding their internal structure will give us insight into how the real world works. An important source of information about the nucleon internal structure is the nucleon excitation spectrum. Our present knowledge on this aspect came almost entirely from partial-wave analyses of $`\pi N`$ total, elastic, and charge-exchange scattering data of more than twenty years ago. Since the late 1970’s, very little has happened in experimental $`N^*`$ baryon spectroscopy. Considering its importance for the understanding of baryon structure and for distinguishing various pictures of the nonperturbative regime of QCD, a new generation of experiments on $`N^*`$ physics with electromagnetic probes has recently been started at new facilities, such as CEBAF at JLAB, ELSA at Bonn, GRAAL at Grenoble and so on.
One of us (Zou) suggested that $`N^*`$ baryons can also be studied in $`J/\psi `$ decays to baryon-antibaryon final states, which provide a new laboratory for the study of $`N^*`$ baryons, especially in the mass range of 1-2 GeV. For example, $`J/\psi \to p\overline{p}\eta `$ is an excellent channel to study the $`N^*(1535)`$ state, which has a very large decay branching ratio to $`N\eta `$ , while other baryon resonances below 2.0 GeV do not have a large branching ratio to this channel, a fact noted very early in the development of the quark shell model .
In this paper, based on 7.8 million $`J/\psi `$ events collected at BEPC, the events for $`J/\psi \to p\overline{p}\pi ^0`$ and $`p\overline{p}\eta `$ have been selected and reconstructed. We perform a partial wave analysis (PWA) on the $`J/\psi \to p\overline{p}\eta `$ data in the full mass region of $`p\eta `$ ($`\overline{p}\eta `$). This is the first PWA study of $`N^*`$ baryons in $`J/\psi `$ hadronic decays. Two S-wave $`N^*`$ baryons, namely $`N^*(1535)`$ and $`N^*(1650)`$, are found in their $`p\eta `$ decay modes. The new information on the $`J/\psi NN^*`$ couplings provides a new source for studying baryon structure.
## 2 Event Selection
The $`\eta `$ and $`\pi ^0`$ are detected in their $`\gamma \gamma `$ decay modes. Each candidate event is required to have two oppositely signed charged tracks with a good helix fit in the polar angle range $`-0.8<\mathrm{cos}\theta <0.8`$ in MDC and at least 2 reconstructed $`\gamma `$’s in BSC. A vertex is required within an interaction region $`\pm 15`$ cm longitudinally and 2 cm radially. A minimum energy cut of 60 MeV is imposed on the photons. Showers associated with charged tracks are also removed.
After the previous selection, we use TOF information to identify the $`p\overline{p}`$ pairs, and at least one track with unambiguous TOF information is required. An opening angle of the two charged tracks smaller than $`175^{\circ }`$ is required in order to remove back-to-back events; to remove radiative Bhabha events, we require $`(E_+/P_+-1)^2+(E_{-}/P_{-}-1)^2>0.4`$, where $`E_+`$, $`P_+`$ ($`E_{-}`$, $`P_{-}`$) are the energy deposited in BSC and the momentum of the positron (electron), respectively. Events are fitted kinematically to the 4C hypothesis $`J/\psi \to 2\gamma p\overline{p}`$. Figure 3 shows the invariant mass spectrum of the $`2\gamma `$, in which clear $`\pi ^0`$ and $`\eta `$ signals can be seen. Meanwhile, the events are also fitted to $`J/\psi \to \gamma p\overline{p}`$ and $`4\gamma p\overline{p}`$. We require
$$Prob(\chi _{(2\gamma p\overline{p})}^2,4C)>Prob(\chi _{(\gamma p\overline{p})}^2,4C),Prob(\chi _{(2\gamma p\overline{p})}^2,4C)>Prob(\chi _{(4\gamma p\overline{p})}^2,4C)$$
to reject the $`\gamma p\overline{p}`$ and $`p\overline{p}\pi ^0\pi ^0`$ backgrounds. In order to improve the mass resolution, 5C fits are performed on the selected events; the extra constraints are those of the $`\eta `$ and $`\pi ^0`$ masses for $`J/\psi \to p\overline{p}\eta `$ and $`p\overline{p}\pi ^0`$ decays, respectively. $`Prob(\chi _{p\overline{p}\eta }^2,5C)>1\%`$ ($`Prob(\chi _{p\overline{p}\pi ^0}^2,5C)>1\%`$) is required.
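Purely for orientation, the chain of cuts described above can be summarized as a boolean filter; the variable names below are our own shorthand and the actual BES selection code is of course not reproduced here:

```python
def keep_event(ev):
    """Schematic J/psi -> p pbar eta selection; ev is a dict of reconstructed quantities."""
    if ev["n_charged"] != 2 or ev["net_charge"] != 0:
        return False
    if any(abs(c) > 0.8 for c in ev["cos_theta_trk"]):           # MDC polar-angle range
        return False
    if ev["n_gamma"] < 2 or min(ev["e_gamma_gev"]) < 0.060:      # photon energy > 60 MeV
        return False
    if ev["open_angle_deg"] > 175.0:                             # reject back-to-back tracks
        return False
    if (ev["Ep"] / ev["Pp"] - 1) ** 2 + (ev["Em"] / ev["Pm"] - 1) ** 2 <= 0.4:
        return False                                             # radiative Bhabha rejection
    if not (ev["prob_2g_4C"] > ev["prob_1g_4C"] and
            ev["prob_2g_4C"] > ev["prob_4g_4C"]):
        return False                                             # 4C hypothesis comparison
    return ev["prob_5C"] > 0.01                                  # 5C fit probability > 1%
```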
Figure 3 and Figure 3 show the $`p\pi ^0`$ and $`p\eta `$ mass distributions from the decays $`J/\psi \to p\overline{p}\pi ^0`$, $`p\overline{p}\eta `$ respectively. Clear peaks are observed around 1480 MeV in the $`p\pi ^0`$ invariant mass spectrum. The $`p\eta `$ events peak strongly in the neighborhood of the $`\eta `$-production threshold, and we shall show that the data require a strong $`\frac{1}{2}^{-}`$ peak near the threshold. There is an additional obvious bump around 1600–1700 MeV; it favors $`J^P=\frac{1}{2}^{-}`$ in our $`S_{11}`$ ($`\eta p`$) partial wave analysis.
## 3. Amplitude Analysis of $`J/\psi \to p\overline{p}\eta `$
A PWA is performed for the $`J/\psi \to p\overline{p}\eta `$ channel with the amplitudes constructed from Lorentz-invariant combinations of the momenta and the photon polarization 4-vectors for $`J/\psi `$ initial states with helicity $`\pm 1`$. The relative magnitudes and phases of the amplitudes are determined by a maximum likelihood fit to the data. Based on the study of the $`p\overline{p}`$ and $`p\eta `$ invariant mass distributions in our data, the decay chain $`J/\psi \to p\overline{p}\eta `$ is analyzed taking into account two $`p\eta `$ ($`\overline{p}\eta `$) S-wave ($`S_{11}`$) and one $`p\eta `$ ($`\overline{p}\eta `$) P-wave ($`P_{11}`$) intermediate processes. The two S-wave amplitudes are used to fit the data in the low mass region near the $`\eta `$ production threshold, while the P-wave is used to fit the data in the high mass region. The background from multi-$`\pi ^0`$ events is $`8\%`$ after the 5C fit, and we have included a phase space background in the PWA fit to allow for this. The $`p\eta `$ mass projection fitted to the real data is shown in Figure 3. We now discuss the features of the data and the outcome of the fits.
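For orientation, the fit maximizes a likelihood of the usual unbinned form (our schematic notation; the exact BES amplitude parameterization, spin factors, and normalization are not reproduced here):

$$L=\prod_{i=1}^{N_{\rm evt}}\frac{\big|\sum_j c_j A_j(\xi_i)\big|^2\,\epsilon(\xi_i)}{\int\big|\sum_j c_j A_j(\xi)\big|^2\,\epsilon(\xi)\,d\xi},\qquad A_j\propto\frac{(\text{spin factors})}{M_j^2-s_{p\eta}-iM_j\Gamma_j},$$

where $`\xi _i`$ denotes the kinematic variables of event $`i`$, $`\epsilon `$ the detection efficiency, $`c_j`$ complex production couplings, and the Breit-Wigner denominator stands in for whatever resonance parameterization was actually used for the $`S_{11}`$ and $`P_{11}`$ waves.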
### 3.1 $`S_{11}(1535)`$
A peak at $`\sim `$1535 MeV near the $`\eta `$ threshold optimises at $`M=1540_{-17}^{+15}`$ MeV, as shown in Figure 4(a). The data favour $`J^P=\frac{1}{2}^{-}`$ over $`\frac{1}{2}^+`$. A fit with $`J^P=\frac{1}{2}^+`$ instead gives $`\mathrm{ln}L`$ worse by 16.0 than for the $`\frac{1}{2}^{-}`$ assignment (our definition of $`\mathrm{ln}L`$ is such that it increases by 0.5 for a one standard deviation change in one parameter). With our 4 fitted parameters, the statistical significance of the peak is above 6.0$`\sigma `$. For the width scan shown in Figure 4(b), our data require a width $`\mathrm{\Gamma }=178_{-22}^{+20}`$ MeV. Our results for the $`N^*(1535)`$ are consistent with the resonance parameters measured by Krusche et al. at the MAMI accelerator in Mainz in $`\eta `$ photoproduction.
### 3.2 $`S_{11}(1650)`$
At $`\sim `$1650 MeV, there is a further peak. We fit it with a $`J^P=\frac{1}{2}^{-}`$ resonance. Figure 4(c) shows the mass scan. Its mass optimises at $`M=1648_{-16}^{+18}`$ MeV with $`\mathrm{\Gamma }=150`$ MeV fixed to the PDG value. We have tried fits to this peak with resonances having quantum numbers $`\frac{1}{2}^+`$. We find that the log likelihood is better for $`\frac{1}{2}^{-}`$ than for $`\frac{1}{2}^+`$ by 9.0. With our 4 fitted parameters, the statistical significance of the peak is $`5.8\sigma `$. Our results for the $`N^*(1650)`$ are consistent with the parameters given by the PDG.
A small improvement to the fit is given by including a $`J^P=\frac{1}{2}^+`$ resonance, which optimises at $`M=1834_{-55}^{+46}`$ MeV with $`\mathrm{\Gamma }=200`$ MeV fixed. The statistical significance of the peak is only $`2.0\sigma `$. We have tried $`J^P=\frac{1}{2}^{-}`$ instead of $`\frac{1}{2}^+`$, but the fit is much worse.
## 4 Conclusion
In summary, we have studied the $`J/\psi \to p\overline{p}\eta `$ decay channel, and a PWA has been performed on the data. There is a definite requirement for a $`J^P=\frac{1}{2}^{-}`$ component at $`M=1540_{-17}^{+15}`$ MeV with $`\mathrm{\Gamma }=178_{-22}^{+20}`$ MeV near the threshold for $`\eta `$ production. In addition, there is an obvious $`J^P=\frac{1}{2}^{-}`$ resonance with $`M=1648_{-16}^{+18}`$ MeV and $`\mathrm{\Gamma }=150`$ MeV fixed to the PDG value. In the higher $`p\eta `$ mass region, there is evidence of a $`J^P=\frac{1}{2}^+`$ signal around 1800 MeV, but no firm conclusion can be drawn for this state due to the low statistics.
All of the above analysis is the first step in our program to probe $`N^*`$ baryons at BES. We will perform detailed studies of $`N^*`$ baryons in the following $`J/\psi `$ decay channels: $`J/\psi \to p\overline{p}\pi ^0`$, $`p\overline{p}\pi ^0\pi ^0`$, $`p\overline{p}\pi ^+\pi ^{-}`$, $`p\overline{p}\eta `$, $`p\overline{p}\omega `$, and so on.
## 5 Acknowledgements
One of the authors, H.B. Li, is grateful to Prof. Leonard Kisslinger for the helpful discussions. This work is supported in part by Chinese National Science Foundation under contract No. 19290401 and 19605007.
# The High-Resolution OTF Survey of the ¹²𝐶𝑂 in M 31
## 1. Introduction
The investigation of the large-scale conditions for star formation is an important problem of modern astronomy. The final stage of the process is reached when individual regions within diffuse molecular clouds start to coagulate into dense, gravitationally bound complexes that eventually collapse and form stars. The trigger to start such a contraction is however unknown, and in particular the influence of large-scale phenomena on this process is unclear. Possible candidates are cloud-cloud collisions (e.g. due to orbit crowding), MHD shocks (caused by density waves), ‘contagious’ star formation induced by supernova blast waves, or thermal and magnetic instabilities. The relative importance of these processes is however uncertain and the time scales are unknown.
The small-scale properties of star-forming clouds are studied in great detail in the Milky Way, but there are severe difficulties in obtaining good information about the large-scale structure. This is of course due to our position within the disk of the Galaxy, but also because of the problem of determining proper distances. We thus have to investigate an external galaxy and the obvious choice is the Andromeda Galaxy, M 31. It is rather close, at a well determined distance (784 kpc after Stanek & Garnavich 1998), its properties are similar to those of our Milky Way, and a wealth of observations at all wavelengths is available for comparisons. In particular, a complete survey of the $`\lambda `$ 21 cm line emission of the atomic hydrogen has been performed with the WSRT at $`24^{\prime \prime }\times 36^{\prime \prime }`$ angular resolution (Brinks & Shane 1984).
Several attempts have been made to observe the emission of the CO in that galaxy in order to investigate the properties of the molecular gas, but this turned out to be unexpectedly difficult. Early attempts (e.g. Combes et al. 1977) resulted in a rather high number of non-detections – only the dustiest regions showed a reasonable probability of detecting molecular gas (e.g. Lada et al. 1988, Ryden & Stark 1986). Such a sample is however biased, of course, and only a survey with uniform coverage can give the necessary information about the large-scale properties. The first complete survey of M 31 was however published only a few years ago (Dame et al. 1993) and had an angular resolution of $`9^{\prime }`$ (2 kpc along the major axis).
## 2. Observations
### 2.1. Preparatory Studies
In 1993, we made a new attempt to study the properties of an unbiased sample of molecular clouds and chose the target region on kinematical grounds only: we focused on a place where the major axis is crossed by a spiral arm whose location was determined by a kinematical analysis of the Hi data (Braun 1991). The observations were performed with the IRAM 30-m telescope in a standard “on-off” observing mode. At a distance of about $`38^{\prime }`$ south-west from the center, CO emission was indeed clearly detected around the major axis and subsequently an area of about $`3^{\prime }\times 4^{\prime }`$ was mapped. We found an extended cloud complex and several individual clumps within this area. The properties of the clouds varied substantially within these few arcminutes, e.g. we observed line widths from FWHM 20 km s$`^{-1}`$ down to only 4 km s$`^{-1}`$. The line temperatures found were of the order of 0.1 to 0.4 K (in $`T_A^*`$ units) – and not limited to only about 20 mK as found by Dame et al. (1993).
The results showed clearly that the CO emission of M 31 is highly clumped and the low intensities of the CfA survey are caused by the small filling factor of the $`9^{\prime }`$ beam. A high spatial resolution is thus mandatory to reveal the properties of the molecular gas. This is however a difficult task because of the size of the emission area: a surface of about 1 square degree has to be mapped with an angular resolution of less than half an arcminute. The sensitivity should reach 0.2 K or better at a velocity resolution of a few km s$`^{-1}`$.
In principle, there are at least two ways to substantially speed up the imaging compared to the classical on-off mode: one could use multiple-beam receivers and thus cover a larger field with every “on”-position, or use a continuous scanning which reduces the overheads for telescope movement and reference positions by a very large amount.
### 2.2. Observation technique
Based on the experience obtained with the little map and the various already published results we choose an “OTF” (On-The-Fly) observing technique for our survey. Here, the telescope is moved across the source at a constant speed, taking data at a high rate, so that there is a sufficient number of dumps within the diameter of the beam. In our case, we use a scanning speed of $`4^{\prime \prime }`$/second and take a dump every second, which ensures a good sampling of the $`22^{\prime \prime }`$ beam. The field is covered by adding subsequent scans parallel to the first at a small distance, in our case $`8^{\prime \prime }`$. The orientation of the scans is defined in a coordinate system fixed to the center of the source; on the sky, the scans therefore remain equidistant and the individual dumps equally spaced within the tracking errors of the telescope ($`1^{\prime \prime }`$ during a normal scan). That way, the coverage thus obtained is uniformly and densely sampled.
The reference data are obtained before and after each scan at positions free of emission; typically, we integrated 30 seconds there, so that the noise of this data is significantly lower than the noise of the dumps along the scan. A scan of a length of $`20^{\prime }`$ lasts five minutes given the parameters above, so a complete sequence (calibration – reference – scan – reference) takes about 6.5 minutes to complete. In order to obtain the sensitivity needed, two SIS receivers are used in parallel which look at the same sky position. Moreover, each field is scanned twice, in orthogonal orientations. That way, we are able to obtain a sensitivity of $`0.15`$ K or better in the final map ($`T_A^*`$ scale).
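These numbers are simple bookkeeping; the small sketch below (our own arithmetic, with the calibration overhead rounded to about 30 s, which is an assumption) reproduces the per-scan timing and the beam sampling quoted above:

```python
beam, scan_speed, dump_rate, row_step = 22.0, 4.0, 1.0, 8.0   # arcsec, arcsec/s, Hz, arcsec

dumps_per_beam = beam / (scan_speed / dump_rate)   # ~5.5 dumps across the 22" beam
rows_per_beam = beam / row_step                    # ~2.75 adjacent scan rows per beam
scan_time = (20 * 60.0) / scan_speed               # a 20' scan: 300 s = 5 min on source
cycle_time = scan_time + 2 * 30.0 + 30.0           # + two references and ~30 s calibration
print(dumps_per_beam, rows_per_beam, scan_time / 60.0, cycle_time / 60.0)   # -> ... ~6.5 min
```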
The orthogonal orientation of the second coverage not only results in a dense sampling of the source area, but allows for a special data reduction technique that suppresses so-called “scanning noise”. It is derived from the “basket-weaving” method presented by Emerson & Gräve (1988) for cm continuum observations and adapted to line observations by P. Hoernes (1998). A short overview of the procedure is given below in Sect. 2.3. As a result, the noise distribution is very smooth and in particular we avoid spurious elongated artifacts.
The main backends used are filterbanks with 1 MHz resolution. Three units are available at the 30-m telescope: two of them offer 256 channels and one 512 channels. The total bandwidth of the receivers and the IF system being 500 MHz each, the necessary velocity coverage for M 31 is thus easily obtained with a sufficient resolution (2.6 km s<sup>-1</sup>). The 512-channel filterbank and the autocorrelators also available at the telescope were used for the two 230 GHz receivers that run in parallel to the 115 GHz units. Each point may thus be observed with up to four receivers simultaneously – but for proper observations at the higher frequency the observational parameters have to be adapted to the smaller beam, of course.
### 2.3. Data reduction
The observation method results in separate files for the calibration, the reference and the source data. The first step in the data reduction process is thus to apply the appropriate calibration to each dump. Thereafter, the spectrum at the reference position is subtracted from the source points. Various averaging options for the case of multiple references are available, but a straight mean of the values obtained before and after the individual scan is usually sufficient. After this subtraction, we have a set of spectra comparable to the usual output of “on-off” or wobbler techniques, with one second of integration time per spectrum.
The next step in the data reduction is the subtraction of spectral baselines and the removal of erroneous data points. Such points are e.g. introduced by bad channels in the filterbanks. They are relatively easy to detect due to their “singular” nature and the occurrence at the same channel in independent spectra. The fitting of the baseline needs a bit more thought: since each scan contains several hundred spectra, we need a routine that is able to determine the necessary line windows by itself. Here, we profit largely from the WSRT Hi data that have a resolution comparable to our survey ($`24^{\prime \prime }\times 36^{\prime \prime }`$, Brinks & Shane 1984). We assume that the velocity structure of the atomic gas (Hi) is similar to that of the molecular gas (CO). Then, we can use the kinematical information in the Hi-data to determine the velocity interval possibly containing CO emission and fit the spectral baseline to the data outside of this region. In the case of M 31 this is a very safe procedure because the CO emission is weak and even a possible error in the line window does not change the fit of the baseline very much. In any case, the baseline fit is a linear procedure and may be used iteratively, if necessary.
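A minimal sketch of this windowing-and-fitting step (our own simplified version; the polynomial order and the Hi threshold are assumptions, not values quoted here) is:

```python
import numpy as np

def subtract_baseline(velocity, co_spectrum, hi_spectrum, hi_threshold, order=1):
    """Fit a baseline to the CO spectrum outside the velocity window where Hi shows emission."""
    emission_window = hi_spectrum > hi_threshold     # channels that may contain CO emission
    coeff = np.polyfit(velocity[~emission_window], co_spectrum[~emission_window], order)
    return co_spectrum - np.polyval(coeff, velocity)
```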
Now it is time to actually ‘reduce’ the data – remember that we are still working on a $`4^{\prime \prime }\times 8^{\prime \prime }`$ data grid for the $`22^{\prime \prime }`$ beam. We thus regrid the data onto a convenient regular grid, which at the same time allows us to correct for tracking errors and to introduce additional scans (e.g. to back up single lines with higher noise). It may be advisable, however, to maintain the distance between individual scans in the grid setup for the next step of the data reduction.
At this stage, we combine several maps (which are actually data cubes) using the “basket-weaving” method already mentioned above (Emerson & Gräve 1988). Inspection of the maps shows that quite often drifts of the zero level are remaining even after the subtraction of a spectral baseline. Adjacent scans thus may show a different value for neighboring points (at a distance of $`8^{\prime \prime }`$ compared to a beam of $`22^{\prime \prime }`$) because they have been observed through a different atmosphere, separated in time by e.g. six minutes (see Fig. 1 upper left). Now remember that we have scanned the area of interest at least twice, at orthogonal orientations. Thus, we have two data sets where the distribution of coherent dumps (along the scan) and incoherent dumps (in different scans) is orthogonal.
In the two-dimensional frame of a map, the scans constitute regularly spaced rows or columns. Any feature that occurs with such a fixed period (e.g. a varying zero level in each scan) will be projected into a very narrow interval by a 2-D Fourier transform (Fig. 1, upper right). Thus, it is easy to apply a filter function to suppress this noise contribution. The orientation of this interval depends of course on the original scanning direction, so the part taken out can be reconstructed from the other input maps. This also avoids unwanted filtering of linear source structures, because a priori only periodic arrangements are filtered – and only in one specific orientation per input frame.
After the filtering, the input maps are coadded – in principle, any number of coverages with arbitrary scanning orientations can be used as input. If the coverage is not the same for all input maps, the quality of the noise filtering will of course vary over the combined area. However, the procedure is rather ‘friendly’ and does not introduce edge-effects at the borders of the individual coverages. For spectral data, each spectral channel is treated as an individual map. The whole procedure works on the data pixels and it is up to the user to ensure a coherent data set beforehand.
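A schematic version of the basket-weaving combination of two orthogonally scanned coverages is sketched below. The quadratic Fourier-plane weighting is one simple way of suppressing each coverage near its own stripe axis; it illustrates the principle and is not necessarily the exact filter used in the actual reduction. In practice, every spectral channel of the cube is passed through such a filter separately.

```python
import numpy as np

def basket_weave(map_x, map_y):
    """Combine two coverages of the same field, one scanned along image rows
    (map_x) and one along columns (map_y).  Row-by-row (column-by-column)
    zero-level drifts concentrate near the k_x = 0 (k_y = 0) axis of the 2-D
    Fourier plane, so each coverage is down-weighted there and the clean
    modes of the orthogonal coverage are used instead."""
    ny, nx = map_x.shape
    fx = np.fft.fft2(map_x)
    fy = np.fft.fft2(map_y)

    kx = np.fft.fftfreq(nx)[np.newaxis, :]
    ky = np.fft.fftfreq(ny)[:, np.newaxis]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid 0/0 at the DC term

    w_x = kx**2 / k2                     # vanishes on map_x's stripe axis (k_x = 0)
    w_y = ky**2 / k2                     # vanishes on map_y's stripe axis (k_y = 0)
    w_x[0, 0] = w_y[0, 0] = 0.5          # keep the mean of the two maps

    return np.fft.ifft2(w_x * fx + w_y * fy).real
```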
Up to this point, the pixels are coherent only at the level of the gridding, which is typically of the order of half a beam width or less. Hence, we smooth the data as the last step of the data reduction. The smoothing Gaussian is chosen small enough not to degrade the original resolution (we obtain a final resolution of $`23^{\prime \prime }`$); this leaves the source structures untouched, but further suppresses the random noise in the pixels.
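Assuming that Gaussian beams add in quadrature, the quoted numbers imply a very small smoothing kernel, which is why the source structure is left essentially untouched:

```python
import math
# Gaussian beams add in quadrature: smoothing a 22" beam to a final 23"
# resolution requires only a small kernel.
beam, final = 22.0, 23.0                               # FWHM in arcsec
kernel = math.sqrt(final**2 - beam**2)
print(f"smoothing kernel FWHM = {kernel:.1f} arcsec")  # ~6.7"
```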
## 3. Results
### 3.1. The map
So far, about three quarters of M 31 have been mapped with full sampling for the $`22^{\prime \prime }`$ beam at the <sup>12</sup>CO(1-0) line transition (see Fig. 2). In addition, we chose several regions with particular properties and observed them in a mode adapted to fully sample the smaller beam at the (2-1) transition (shown in the fields surrounding the main frame in Fig. 2).
Over most of the area surveyed so far, coherent spiral arm pieces are clearly visible. The arm/interarm contrast is high, but there are small cloud complexes scattered over a large part of the surface. Note, however, that only in the southern half of the map is the lowest contour at least $`3\sigma `$, whereas in the northern part the data were still preliminary at the time of the conference. There, the contours have been chosen so as to show the effects of the scanning procedure, which result e.g. in the horizontal stripes at the upper end of fields 7 and 8. In any case, we check such weak features with dedicated on-off measurements and keep only those that can be confirmed – which is usually the case.
### 3.2. Comparisons with other data and first results
There is a wealth of observations available for M 31, covering the whole wavelength range. As described above, we used the Hi data cube of the WSRT survey (Brinks & Shane 1984) already for the determination of the line windows during the data reduction. Now, we can compare the two data sets to obtain a picture of the properties of the atomic vs. the molecular gas. The molecular gas turns out to show a significantly higher contrast, but the general structure is very similar to that of the atomic gas: the main emission is found in a “ring” and very little gas is found close to the center. The ratio of molecular/atomic gas decreases with radius, but individual molecular complexes have nevertheless been found out to a distance of about 18 kpc. The kinematical signature is rather similar for both kinds of gas, which justifies the use of the Hi data in the baseline fitting procedure.
Large-scale streaming motions are not prominent along the spiral arms – the data are rather dominated by local effects. They show up as double- or multiple-peaked spectra with total widths of up to 50 km s<sup>-1</sup> (cf. Fig. 3). A comparison with the distribution of ionized regions (Devereux et al. 1994) suggests a relation between such disturbed molecular clouds and the Hii regions (see Fig. 2 of Neininger et al. 1998 for the region presented in Fig. 3).
## 4. Summary
#### Technical items
The OTF method is a fast, flexible and versatile observing mode which yields high-quality data. Longer integration times per point can be achieved by co-adding a corresponding number of coverages. In principle, it can be adapted to all mapping projects with single-dish telescopes – the limitations are usually technical, such as receiver stability, maximum telescope speed, maximum dump rate or data storage capacity. Our setup allows us to map an area of 10 arcmin<sup>2</sup> at a typical noise of 150 mK per spectrum in 1 hour of telescope time. Using specially designed filtering techniques, spurious features can be removed and a homogeneous result can be achieved – even in wavelength bands that are seriously affected by atmospheric effects.
#### Astronomical results
The molecular gas in M 31 has a high arm/interarm contrast, but single cloud complexes are found between the arms. The spatial filling factor is rather low. The bulk of the molecular gas is situated between a radius of about 4 kpc and 13 kpc, but some cloud complexes have been observed out to distances of 18 kpc. The total gas mass (Hi \+ H<sub>2</sub>) corresponds well to the optical extinction. The signature of a possible density wave is very weak in the molecular gas, instead it seems to be dominated by local effects.
#### Acknowledgments.
Special thanks to the IRAM staff which has made the OTF observations possible – in particular to A. Sievers, W. Brunswig, W. Wild and the receiver engineers. Of special importance were some hydrogen data: we thank E. Brinks and R. Braun for the Hi cubes and N. Devereux for his H$`\alpha `$ image.
## References
Braun, R. 1991, ApJ, 372, 54
Brinks, E., & Shane, W.W. 1984, A&AS, 55, 179
Combes, F., Encrenaz, P.J., Lucas, R., & Weliachew, L. 1977, A&A, 61, L7
Dame, T., et al. 1993, ApJ, 418, 730
Devereux, N.A., Price, R., Wells, L.A., & Duric, N. 1994, AJ, 108, 1667
Emerson, D.T., & Gräve, R. 1988, A&A, 190, 353
Hoernes, P. 1998, PhD Thesis, University of Bonn
Lada, C.J., et al. 1988, ApJ, 328, 143
Neininger, N., Guélin, M., Ungerechts, H., Lucas, R., & Wielebinski, R. 1998, Nature, 395, 871
Ryden, B.S., & Stark, A.A. 1986, ApJ, 305, 823
Stanek, K.Z., & Garnavich, P.M. 1998, ApJ, 503, L131
# Source Counts from the 15 $`\mu `$m ISOCAM Deep Surveys
Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) with the participation of ISAS and NASA.
## 1 Introduction
Deep galaxy counts as a function of magnitude, or flux density, should, in principle, give a constraint on the geometry of the universe ($`\mathrm{\Omega }_{}`$, $`\mathrm{\Lambda }_{}`$). In fact, their departure from the Euclidean expectation (no expansion, no curvature) is dominated by the e-correction (intrinsic evolution of the galaxies) and by the k-correction (redshift dependence). The understanding of galaxy evolution therefore is a key problem for cosmology, and number counts appear to be a strong constraint on the models, which does not suffer from the exotic behavior of individual galaxies. Most of the energy released by local galaxies is radiated in the optical-UV range (Soifer & Neugebauer 1991). If this were to remain true over the whole history of the universe, then one could follow the comoving star formation rate of the universe as a function of redshift by measuring the optical-UV light radiated by galaxies (Lilly et al 1996, Madau et al 1996). This scenario changed considerably after the detection of a substantial diffuse cosmic IR background (CIRB) in the 0.1 – 1 mm range from the COBE-FIRAS data (Puget et al 1996, Guiderdoni et al 1997, Hauser et al 1998, Fixsen et al 1998, Lagache et al 1999) and at 140 – 240 $`\mu `$m from the COBE-DIRBE data (Hauser et al 1998, Lagache et al 1999). Surprisingly the mid-IR/sub-mm extragalactic background light is at least as large as that of the UV/optical/near-IR background (Dwek et al 1998, Lagache et al 1999), which implies a stronger contribution of obscured star formation at redshifts larger than those sampled by IRAS ($`z>0.2`$). To understand the exact origin of this diffuse emission and its implications for galaxy evolution, we need to identify the individual galaxies responsible for it and the best way to do that consists of observing directly in the IR/sub-mm range.
In the mid-IR, IRAS has explored the local universe ($`z<0.2`$). With a thousand times better sensitivity and sixty times better spatial resolution, ISOCAM (Cesarsky et al 1996), the camera on-board ISO (Kessler et al 1996), provides for the first time the opportunity to perform cosmologically meaningful surveys. Deep surveys have been carried out on small fields containing sources well known at other wavelengths: the HDF North (Serjeant et al 1997, Aussel et al 1999a,b, Désert et al 1999) and the CFRS 1452+52 field (Flores et al 1999). This has yielded a small but meaningful sample of sources (83 galaxies) with a positional accuracy better than 6”, sufficient for most multiwavelength studies. Most of these sources can easily be identified with bright optical counterparts ($`I_C<22.5`$) with a median redshift of about 0.7–0.8 imposed by the k-correction (Aussel et al 1999a,b, Flores et al 1999). Flores et al (1999) estimate, from their sample of 41 sources, that accounting for the IR light from star forming galaxies may lead to a global star formation rate which is 2 to 3 times larger than estimated from UV light only.
To obtain reliable source count diagrams, better statistics and a wider range of flux densities are required. For this reason, we have performed several cosmological surveys with ISOCAM, ranging from large area-shallow surveys to small area-ultra deep surveys. These surveys were obtained in the two main broad-band filters LW2 (5-8.5 $`\mu `$m) and LW3 (12-18 $`\mu `$m), centered at 6.75 $`\mu `$m and 15 $`\mu `$m respectively. This paper only presents the source counts at 15 $`\mu `$m, because the sample of galaxies detected in the 6.75 $`\mu `$m band is strongly contaminated by Galactic stars, whose secure identification requires ground-based follow-up observations. Including the surveys over the two Hubble deep fields, almost 1000 sources with flux densities ranging from 0.1 mJy to 10 mJy were detected, allowing us to establish detailed source count diagrams. Source lists for each individual survey, as well as maps and detailed description of data reduction, will be found in separate papers (see Table 1). Forthcoming papers will discuss the nature of these galaxies based on the ISOHDF-North survey (Aussel et al, in prep. & 1999b), the contribution of these galaxies to the cosmic IR background, the relation of these observations with ISOPHOT and SCUBA deep surveys (Elbaz et al, in prep.), as well as a tentative modelling of the number counts (Franceschini et al, in prep.).
## 2 Description of the Surveys
The five ISOCAM Guaranteed Time Extragalactic Surveys (Cesarsky & Elbaz 1996, Table 1) are complementary. They were carried out in both the northern (Lockman Hole) and southern (Marano Field) hemispheres, in order to be less biased by large-scale structures. These two fields were selected for their low zodiacal and cirrus emission and because they had been studied at other wavelengths, in particular in the X-ray band, which is an indicator of the AGN activity of the galaxies. Only one of the ‘Marano’ maps was scanned at the exact position of the original Marano Field (Marano, Zamorani, Zitelli 1988), while the ‘Marano FIRBACK’ (MFB) Deep and Ultra-Deep surveys were positioned at the site of lowest galactic cirrus emission, because they were combined with the FIRBACK ISOPHOT survey at 175 $`\mu `$m (Puget et al 1999, Dole et al 1999). Indeed the importance of the Galactic cirrus emission in hampering source detection is much larger at 175 $`\mu `$m than at 15 $`\mu `$m, but the quality of the two 15 $`\mu `$m ultra deep surveys in the Marano Field area is equivalent. In addition, very deep surveys were taken with ISOCAM over the areas of the HDF North (Serjeant et al 1997) and HDF South. In this paper, we include the HDF North results from Aussel et al (1999a), and show for the first time ISOCAM counts on the HDF South field.
## 3 Data Reduction and Simulations
The transient behavior of the cosmic ray induced glitches, which makes some of them mimic real sources, is the main limitation of ISOCAM surveys. We have developed two pipelines for the analysis of ISOCAM surveys in order to obtain two independent source lists per survey and improve the quality of the analysis. PRETI (Pattern REcognition Technique for ISOCAM data), developed by Starck et al (1999), is able to find and remove glitches using multi-resolution wavelet transforms.
It includes also Monte Carlo simulations to quantify the false detection rate, to calibrate the photometry and to estimate the completeness. The ‘Triple Beam-Switch’ (TBS) technique, developed by Désert et al (1999), treats micro-scanning or mosaic images as if they resulted from beam-switching observations. All the surveys have been independently analyzed using both techniques and the source lists were cross-checked to attribute quality coefficients to the sources. PRETI and TBS agree at the 20 $`\%`$ level in photometry (corresponding to the photometric accuracy of both techniques), and with an astrometric accuracy smaller than the pixel size (due to the redundancy). PRETI allowed us to attain fainter levels in deep surveys, whereas in the shallow surveys, where the redundancy is not very high, a very strict criterion of multiple detections had to be applied. Finally, we have made Monte Carlo simulations by taking into account the completeness and the photometric accuracy to correct for the Eddington bias and to compute error bars in the number count plots.
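The logic of such a completeness simulation can be illustrated with a toy model: artificial point sources of known flux are injected into pure noise and recovered with a simple threshold detector. All numbers below (map size, beam, threshold) are placeholders; the real simulations insert sources into actual ISOCAM data and run them through PRETI or TBS.

```python
import numpy as np

rng = np.random.default_rng(0)

def completeness(flux, n_trials=500, noise=1.0, npix=64, fwhm=3.0, thresh=5.0):
    """Fraction of artificial point sources of a given peak flux recovered
    above `thresh` sigma near the injected position (toy detector)."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[0:npix, 0:npix]
    recovered = 0
    for _ in range(n_trials):
        x0, y0 = rng.uniform(npix // 4, 3 * npix // 4, size=2)
        psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
        image = rng.normal(0.0, noise, (npix, npix)) + flux * psf
        window = image[int(y0) - 2:int(y0) + 3, int(x0) - 2:int(x0) + 3]
        if window.max() > thresh * noise:
            recovered += 1
    return recovered / n_trials

for f in (3.0, 5.0, 8.0, 12.0):      # fluxes in units of the pixel noise
    print(f, completeness(f))
```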
## 4 The ISOCAM 15 $`\mu \mathrm{m}`$ source counts
Figures 1 and 2 show respectively the integral and the differential number counts obtained in the five independent guaranteed time surveys conducted in the ISOCAM 15 $`\mu `$m band, as well as the HDF surveys. The contribution of stars to the 15 $`\mu `$m counts has been corrected for. It is negligible at fluxes below the mJy level, as confirmed by the spectro-photometric identifications in the ISOHDF-North (1 star out of 44 sources), South (3 stars out of 71 sources), and CFRS 1415+52 (1 star out of 41 sources ranging from $`0.3`$ mJy to $`0.8`$ mJy). In the Lockman Shallow Survey ($`S_{15\mu m}>1`$ mJy), about 12 $`\%`$ of the sources were classified as stars from their optical-mid IR colors (using the Rayleigh-Jeans law). We have also represented the counts from the ISOHDF-North (from Aussel et al 1999a), ISOHDF-South, and, at the lowest fluxes, the counts obtained from the A2390 cluster lens (down to 50 $`\mu `$Jy, including the correction for lensing magnification; Altieri et al 1999, see also Metcalfe et al 1999). We have only included the flux bins where the surveys are at least 80 % complete, according to the simulations.
The first striking result of these complementary source counts is the consistency of the eight 15 $`\mu `$m surveys over the full flux range. Some scatter is nevertheless apparent; given the small size of the fields surveyed, we attribute it to clustering effects.
The two main features of the observed counts are a significantly super-Euclidean slope ($`\alpha =3.0`$) from 3 to 0.4 mJy and a fast convergence at flux densities fainter than 0.4 mJy. In particular, the combination of five independent surveys in the flux range 90-400 $`\mu `$Jy shows a turnover of the normalized differential counts around 400 $`\mu `$Jy and a decrease by a factor $`\sim 3`$ at 100 $`\mu `$Jy. We believe that this decrease, or the flattening of the integral counts (see the change of slope in col(7) of Table 1) below $`\sim `$400 $`\mu `$Jy, is real. It cannot be due to incompleteness, since this has been estimated from the Monte-Carlo simulations (see Section 3). The differential counts can be fitted by two power laws by splitting the flux axis in two regions around 0.4 mJy. In units of mJy<sup>-1</sup> deg<sup>-2</sup>, we obtain, by taking into account the error bars ($`S`$ is in mJy):
$`{\displaystyle \frac{dN(S)}{dS}}=\{\begin{array}{cc}(2000\pm 600)\,S^{-(1.6\pm 0.2)}& \mathrm{for}\ 0.1\le S\le 0.4\\ (470\pm 30)\,S^{-(3.0\pm 0.1)}& \mathrm{for}\ 0.4\le S\le 4\end{array}`$ (4)
In the integral plot, the curves are plotted with 68 $`\%`$ confidence contours based on our simulation analysis. The total number density of sources detected by ISOCAM at 15 $`\mu `$m down to 100 $`\mu `$Jy (no lensing) is ($`2.4\pm 0.4`$) arcmin<sup>-2</sup>. It extends up to ($`5.6\pm 1.5`$) arcmin<sup>-2</sup>, down to 50 $`\mu `$Jy, when including the lensed field of A2390 (Altieri et al 1999).
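Integrating the central values of the broken power-law fit (4) over the 0.1–4 mJy range reproduces the quoted surface density, which provides a simple consistency check:

```python
from scipy.integrate import quad

def dn_ds(s):
    """Differential 15 micron counts [mJy^-1 deg^-2], central values of the
    broken power-law fit of eq. (4); s in mJy, valid for 0.1 <= s <= 4."""
    return 2000.0 * s**-1.6 if s < 0.4 else 470.0 * s**-3.0

n_per_deg2, _ = quad(dn_ds, 0.1, 4.0, points=[0.4])
print(n_per_deg2 / 3600.0)   # ~2.5 arcmin^-2, consistent with (2.4 +/- 0.4)
```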
## 5 Discussion & Conclusions
We have presented the 15 $`\mu `$m differential and integral counts drawn by several complementary ISOCAM deep surveys, with a significant statistical sampling (993 galaxies, 614 of which have a flux above the 80 % completeness limit) over two decades in flux from 50 $`\mu `$Jy up to 5 mJy.
The differential counts (Fig. 2), which are normalized to $`S^{-2.5}`$ (the expected differential counts in a non-expanding Euclidean universe with sources that shine with constant luminosity), present a turnover around $`S_{15\mu m}`$=0.4 mJy, above which the slope is very steep ($`\alpha =3.0\pm 0.1`$). No-evolution predictions were derived assuming a pure k-correction in a flat universe ($`q_0=0.5`$), including the effect of Unidentified Infrared Bands emission in the galaxy spectra. In Figs. 1 and 2, the lower curve is based on the Fang et al (1998) IRAS 12 $`\mu `$m local luminosity function (LLF), using the spectral template of a quiescent spiral galaxy (M51). The upper curve is based on the Rush, Malkan & Spinoglio (1993) IRAS-12 $`\mu `$m LLF, translated to 15 $`\mu `$m using as template the spectrum of M82, which is also typical of most starburst galaxies in this band. More active and extincted galaxies, like Arp220, would lead to even lower number counts at low fluxes, while flatter spectra like those of AGNs are less flat than M51. In the absence of a well established LLF at 15 $`\mu `$m, we consider these two models as upper and lower bounds to the actual no-evolution expectations; note that the corresponding slope is about 2. The actual number counts are well above these predictions; in the 0.3 mJy to 0.6 mJy range, the excess is around a factor 10: clearly, strong evolution is required to explain this result (note the analogy with the radio source counts, Windhorst et al 1993).
We believe, according to the results obtained on the HDF and CFRS fields (Aussel et al 1999a,b, Flores et al 1999), that the sources responsible for the ’bump’ in the 15 $`\mu `$m counts are not the faint blue galaxies which dominate optical counts at $`z0.7`$. Instead, they most probably are bright and massive galaxies whose emission occurs essentially in the IR and could account for a considerable part of the star formation in the universe at $`z<1.5`$.
In Fig. 1, we have overplotted the integral counts in the K (Gardner et al 1993) and B (Metcalfe et al 1995) bands, in terms of $`\nu S_\nu `$. For bright sources, with densities lower than 10 deg<sup>-2</sup>, these curves run parallel to an interpolation between the ISOCAM counts presented here and the IRAS counts; the bright K sources emit about ten times more energy in this band than a comparable number of bright ISOCAM sources at 15 $`\mu \mathrm{m}`$ . But the ISOCAM integral counts present a rapid change of slope around 1-2 mJy, and their numbers rise much faster than those of the K and B sources. The sources detected by ISOCAM are a sub-class of the K and B sources which harbor activity hidden by dust. Linking luminosity to distance, we predict a rapid change of the luminosity function with increasing redshift, which can only be confirmed by a complete ground-based spectro-photometric follow-up. We should be able to follow the evolution of the luminosity function from $`z0.2`$ to $`1.5`$ with the large number of galaxies detected in the Marano Field surveys. The combination of the intensity of the $`H_\alpha `$ emission line (redshifted in the J-band) with the IR luminosity could set strong constraints on the star formation rate. Finally, emission line diagnostics, combined with hard X-ray observations with XMM and Chandra, would allow us to understand whether the dominant source of energy is star formation or AGN activity.
###### Acknowledgements.
One of us (MH) wishes to acknowledge the hospitality of the Max Planck Inst. for Radioastronomy and the A. von Humboldt Foundation of Germany during work on this paper; his research with ISO is funded by a NASA grant.
# Reduced phase space quantization
## I Introduction
In the study of quantization of classical systems one must start with two essential things, namely, a phase space for the system and a dynamical principle. This principle may be a classical Hamiltonian derived from a Lagrangian or from a set of hyperbolic field equations. It may also be some quantum requirement such as annihilation of unphysical states by constraint operators. In an interesting paper Radhika Vathsan considered quantization of a 4-dimensional phase space with canonical coordinates $`(q^1,q^2,p_1,p_2)`$ with a priori chosen constraints
$`\varphi \equiv (q^1)^2+(q^2)^2+p_1^2+p_2^2-R^2=0`$
$`\chi \equiv p_2=0`$
using geometric and Dirac methods of quantization. In her analysis these constraints do not follow from any Lagrangian. $`\varphi `$ is arbitrarily assumed to be a first class constraint and $`\chi `$ is chosen as a gauge fixing condition. The reduced phase space turns out to be the 2-dimensional sphere $`S^2`$ of radius $`R/2`$.
In the present paper we reanalyse quantization of the above system using Dirac’s method. Dirac’s method starts with a singular Lagrangian which inherently contains the constraints. Whether the constraints are first or second class follows in a straightforward manner from the analysis without any arbitrariness. We choose here two Lagrangians, the first of which reproduces the same set of constraints as above, as a pair of second class constraints. The second example gives similar-looking constraints but with a minus sign for the $`(q^2)^2`$ term. We do a rigorous constraint analysis and then quantize canonically.
## II Constraint analysis and quantization
Consider the Lagrangian
$$L=\frac{(\dot{q}^1)^2}{4q^2}-q^2\left((q^1)^2+\frac{(q^2)^2}{3}-R^2\right)$$
$`(1)`$
excluding the line $`q^2=0`$ on the configuration space. We solve for canonically conjugate momenta to get
$`p_1`$ $`=`$ $`{\displaystyle \frac{\dot{q}^1}{2q^2}}`$
$`p_2`$ $`=`$ $`0`$
The second equation is a primary constraint
$$\varphi _1\equiv p_2=0$$
$`(2)`$
The Hamiltonian is given by
$$H=q^2p_1^2+p_2v_2+q^2\left((q^1)^2+\frac{(q^2)^2}{3}-R^2\right)$$
$`(3)`$
where $`v_2`$ is unknown Lagrange multiplier. By evolving $`\varphi _1`$ and setting it to zero
$$\{p_2,H\}=0$$
we get a secondary constraint
$$\varphi _2\equiv (q^1)^2+(q^2)^2+p_1^2+p_2^2-R^2=0$$
$`(4)`$
Further evolution of $`\varphi _2`$ determines $`v_2`$
$$v_2=0$$
$`(5)`$
There are no further constraints. We, therefore, obtain two constraints $`\varphi _1,\varphi _2`$ with a non-zero Poisson bracket between them, so they are second class constraints. To get the reduced phase space the extra degrees of freedom corresponding to these constraints must be completely removed. The Dirac bracket is defined by
$$\{f,g\}_D=\{f,g\}-\{f,\varphi _i\}\left(C^{-1}\right)_{ij}\{\varphi _j,g\}$$
$`(6)`$
for any two classical observables $`f(q,p),g(q,p)`$. The matrix $`C`$ is
$$C=\left(\begin{array}{cc}0& -2q^2\\ 2q^2& 0\end{array}\right)$$
where
$$C_{ij}=\{\varphi _i,\varphi _j\}$$
The basic Dirac brackets are
$`\{q^1,p_1\}_D={\displaystyle 1}`$
$`\{q^2,p_1\}_D=-{\displaystyle \frac{q^1}{q^2}}`$
$`\{q^1,q^2\}_D=-{\displaystyle \frac{p_1}{q^2}}`$
$`(7)`$
The rest are zero.
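The constraint matrix and the Dirac brackets quoted above, including their signs, can be verified symbolically; a short sympy sketch is given below (it simply implements the canonical Poisson bracket and the definition (6)).

```python
import sympy as sp

q1, q2, p1, p2, R = sp.symbols('q1 q2 p1 p2 R', real=True)
qs, ps = [q1, q2], [p1, p2]

def pb(f, g):
    """Canonical Poisson bracket {f, g}."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

phi1 = p2
phi2 = q1**2 + q2**2 + p1**2 + p2**2 - R**2
phis = [phi1, phi2]

C = sp.Matrix(2, 2, lambda i, j: pb(phis[i], phis[j]))   # [[0,-2q2],[2q2,0]]
Cinv = C.inv()

def dirac(f, g):
    corr = sum(pb(f, phis[i]) * Cinv[i, j] * pb(phis[j], g)
               for i in range(2) for j in range(2))
    return sp.simplify(pb(f, g) - corr)

print(dirac(q1, p1))                  # 1
print(dirac(q2, p1))                  # -q1/q2
print(dirac(q1, q2))                  # -p1/q2
print(dirac(p2, q1), dirac(p2, q2))   # 0 0 : p2 drops out, as it must
```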
Next we put both constraints equal to zero. $`\varphi _1=0`$ eliminates $`p_2`$ and from $`\varphi _2=0`$ we eliminate $`q^2`$.
$$q^2=\pm \sqrt{R^2-p_1^2-(q^1)^2}$$
$`(8)`$
There is a $`\pm `$ sign ambiguity in $`q^2`$, which corresponds to the fact that the configuration space we started with consisted of two disconnected parts given by $`q^2>0`$ and $`q^2<0`$. This amounts to a residual freedom in obtaining the phase space even after imposing all constraint conditions. For each choice of sign for $`q^2`$, eqn (8) gives
$$p_1^2+(q^1)^2=R^2-(q^2)^2$$
$`(9)`$
or,
$$p_1^2+(q^1)^2<R^2$$
$`(10)`$
The completely reduced phase space is, therefore, a disc of radius $`R`$ without its boundary. Choosing $`q^2`$ to be positive, the reduced Hamiltonian is
$$H=\frac{2}{3}\left(R^2-p_1^2-(q^1)^2\right)^{3/2}$$
$`(11)`$
The equations of motion are
$`\dot{q}^1=\{q^1,H\}={\displaystyle \frac{\partial H}{\partial p_1}}=2q^2p_1`$
$`\dot{p}_1=\{p_1,H\}=-{\displaystyle \frac{\partial H}{\partial q^1}}=-2q^2q^1`$
$`(12)`$
We proceed to canonically quantise the system. The Hilbert space $`\mathcal{H}`$ consists of configuration space wave-functions $`\psi (q)`$ (the index 1 has been dropped from $`q^1`$) which are square integrable in the interval $`-R<q<R`$. Observables are self-adjoint operators on $`\mathcal{H}`$. $`q`$ and $`p`$ go over to position and momentum operators
$$q^1\rightarrow \widehat{q}$$
$$\widehat{q}\psi (q)=q\psi (q)$$
$`(13)`$
$$p_1\rightarrow \widehat{p}=-i\hbar \frac{\partial }{\partial q}$$
$`(14)`$
and $`\widehat{q},\widehat{p}`$ satisfy the commutation relation
$$[\widehat{q},\widehat{p}]=i\hbar $$
$`(15)`$
The evolution of the system is generated by the Hamiltonian operator
$$\widehat{H}=\frac{2}{3}\left(R^2-\widehat{p}^2-\widehat{q}^2\right)^{3/2}$$
$`(16)`$
which must be self-adjoint for the evolution to be unitary.
Further, the operator $`\left(R^2-\widehat{p}^2-\widehat{q}^2\right)`$ must have positive eigenvalues for $`\widehat{H}`$ to be positive-definite. This requirement is due to the classical constraints showing up at the quantum level.
Consider next the Lagrangian
$$L=\frac{(\dot{q}^1)^2}{4q^2}-q^2\left((q^1)^2-\frac{(q^2)^2}{3}-R^2\right)$$
$`(17)`$
The constraints for this system are
$`\chi _1\equiv p_2=0`$
$`\chi _2\equiv (q^1)^2-(q^2)^2+p_1^2+p_2^2-R^2=0`$
$`(18)`$
Constraint analysis is straightforward. The reduced phase space is obtained from the inequality
$$p_1^2+(q^1)^2>R^2$$
$`(19)`$
$`q^2`$ has two branches. For each choice the reduced phase space is a 2-dimensional infinite plane with a hole of radius $`R`$ at the centre; here we have restricted $`q^2`$ to be positive. The reduced Hamiltonian is
$$H=\frac{2}{3}\left(p_1^2+(q^1)^2-R^2\right)^{3/2}$$
$`(20)`$
The system can then be quantized. The Hilbert space consists of square-integrable functions on the real line $`𝐑^1`$ excluding the interval $`[-R,R]`$. The evolution will be generated by the Hamiltonian
$$\widehat{H}=\frac{2}{3}\left(\widehat{p}^2+\widehat{q}^2-R^2\right)^{3/2}$$
As in the earlier case the self-adjointness and positive definiteness of the Hamiltonian will restrict the Hilbert space.
## III Conclusion
We have discussed two singular systems with non-trivial reduced phase spaces. For the first system the reduced phase space is not $`S^2`$ as it appears to be but an open disc of radius $`R`$. If we include the boundary we can map all points on it to the south pole of $`S^2`$ with radius $`R/2`$. The reduced space would then be $`S^2`$. However, this can be done only if the singularity at $`q^2=0`$ is a coordinate singularity and can be removed by an appropriate choice of coordinates. This is not the case for us since the Lagrangian we chose is essentially singular at $`q^2=0`$. For the second system we find that the reduced phase space is a 2-dimensional plane with a hole at the center. This Lagrangian is again singular at $`q^2=0`$, a singularity which cannot be removed by any coordinate transformation. Canonical quantization reveals restrictions on the Hilbert spaces of the two systems. These restrictions are manifestations of the classical constraints at the quantum level.
###### Acknowledgements.
We thank Radhika Vathsan for useful discussions and Tabish Qureshi for a critical reading of the manuscript. One of us (P. Chingangbam) acknowledges financial support from Council for Scientific and Industrial Research, India, under grant no.9/466(29)/96-EMR-I
HUB–EP–99/56 LPTENS-99/35
# Continuous Gauge and Supersymmetry Breaking for Open Strings on D-branes
Ralph Blumenhagen<sup>1</sup>, Costas Kounnas<sup>2,3</sup> and Dieter Lüst<sup>4</sup>
<sup>1,4</sup> Humboldt-Universität Berlin, Institut für Physik, Invalidenstrasse 110,
10115 Berlin, Germany
<sup>2</sup> Laboratoire de Physique Théorique, ENS, F-75231 Paris, France
<sup>3</sup> Theory Divison, CERN, CH-1211, Geneva 23, Switzerland
Abstract
We consider freely acting orbifold compactifications, which interpolate in two possible decompactification limits between the supersymmetric type II string and the non-supersymmetric type 0 string. In particular we discuss how D-branes are incorporated into these orbifold models. Investigating the open string spectrum on D3-branes, we will show that one can interpolate in this way, in a smooth manner, between $`𝒩=4`$ supersymmetric $`U(N)`$ or $`U(2N)`$ Yang-Mills theories and non-supersymmetric $`U(N)\times U(N)`$ gauge theories with adjoint massless scalar fields plus bifundamental massless fermions. Finally, by lifting the orbifold construction to M-theory, we conjecture some duality relations and show that in particular a new supersymmetric branch of gauge-like theories emerges for the non-supersymmetric model.
<sup>1</sup> e–mail: blumenha@physik.hu-berlin.de <sup>3</sup> e–mail: costas.kounnas@cern.ch <sup>4</sup> e–mail: luest@physik.hu-berlin.de
10/99
1. Introduction
As has become clear in recent years, the emergence of $`(p+1)`$-dimensional, supersymmetric gauge theories from open strings living on the world volumes of Dp-branes in closed string theories provides very important insights into the perturbative and non-perturbative gauge theory dynamics. Of particular interest is the limit in which the closed (bulk) string degrees of freedom decouple from the open (boundary) degrees of freedom; this limit is provided on the $`U(N)`$ gauge theory side by taking the number of color degrees of freedom to be very large, $`N\rightarrow \infty `$, whereas on the closed string side the same limit is achieved by considering the classical weak coupling limit, $`g_s\rightarrow 0`$. Quite surprisingly, there is in fact a very striking duality between the open string gauge theories on the boundary and the closed string gravitational theories in the bulk, stating that superconformal gauge theories in the large N limit are dual to type II superstring theories in anti-de Sitter background spaces \[1--3\]. The best understood example is the duality between $`𝒩=4`$, $`U(N)`$ Yang-Mills gauge theory and supergravity in an $`AdS_5\times S^5`$ background.
Of course the most interesting problem is to explore the CFT/AdS duality for models which allow for (spontaneous) breaking of space-time supersymmetry. One way to break supersymmetry in four dimensions is to consider models with non-zero temperature. A different and perhaps more direct way to construct non-supersymmetric string models is provided by the type 0 string construction \[5--7\] following an idea by Polyakov. Seen from a world sheet point of view the non-supersymmetric type 0A/B theories are simply obtained by a non-supersymmetric, but nevertheless modular invariant GSO projection \[9,10\], such that the spectrum of closed strings is purely bosonic. Alternatively the type 0 strings can be constructed as an orbifold of type II superstring theory by modding out the symmetry $`(-1)^{F_s}`$, where $`F_s`$ denotes the space-time fermion number. In this way all closed string fermions in the (R,NS) and (NS,R) sectors are projected out by the $`\text{ZZ}_2`$ action, whereas the twisted sector contains an (NS,NS) tachyon and in addition a second set of massless (R,R) fields. This has the important effect that the D-branes present in type 0 theories are essentially doubled compared to the supersymmetric type II parent theories. We will call the two different kinds of p-dimensional D-branes (electric) Dp and (magnetic) Dp’-branes. Implementing the $`(-1)^{F_s}`$ projection in the open string sector, it turns out that the open strings stretched between D-branes of the same kind, i.e. between Dp and Dp-branes or between Dp’ and Dp’-branes, lead to space-time bosons, in particular to massless gauge bosons and massless scalar fields. On the other hand, open strings between Dp and Dp’-branes lead to space-time fermions. Let us mention that recently also non-supersymmetric tachyon-free compactifications of both type II string theory \[12--16\] and type 0 string theory \[17--23\] were discussed.
Using this construction an interesting class of non-supersymmetric gauge models arises when considering an equal number of $`N`$ electric D3 and $`N`$ magnetic D3’-branes in type 0B string theory. This configuration leads to a non-supersymmetric $`U(N)_e\times U(N)_m`$ gauge theory with bosonic and fermionic massless matter arranged in such a way that the $`\beta `$-function vanishes to leading order in $`1/N`$. In fact, Klebanov and Tseytlin \[5--7\] gave reasonable evidence that this gauge theory is again dual to a tachyon-free gravity background of the form $`AdS_5\times S^5`$, just as in the supersymmetric type IIB theory. Putting the type 0B D3-branes on transversal singularities \[24--27\] or considering type 0A Hanany-Witten-like brane constructions \[28\], a full variety of non-supersymmetric gauge theories $`(U(N)_e\times U(N)_m)^K`$ can be constructed \[29--33\], where $`K`$ is a number which characterizes the singularity type or the number of NS 5-branes in the Hanany-Witten set up. All these models are Bose-Fermi degenerate and have the same one-loop $`\beta `$-functions as their corresponding $`U(N)^K`$ type II parent models. This correspondence becomes even closer by noting that the gauge theory given by the diagonal $`U(N)_{e+m}\subset U(N)_e\times U(N)_m`$ is identical to the gauge theory of the underlying type II theory. Therefore one might conjecture that there exists a smooth interpolation between the supersymmetric $`U(N)`$ gauge theories and the non-supersymmetric $`U(N)\times U(N)`$ models in such a way that supersymmetry is smoothly broken (or restored) along the line of deformation. Since the $`U(N)_e\times U(N)_m`$ gauge theories do not contain any massless scalar fields with the right quantum numbers to trigger the symmetry breaking to the diagonal group $`U(N)_{e+m}`$ via the usual field theoretical Higgs mechanism, one has to consider the couplings of the open string boundary modes to the closed string bulk modes in order to realize the desired interpolation.
One well-established way to break supersymmetry smoothly is provided by the Scherk-Schwarz mechanism \[34--39\]. This class of models is built as freely acting orbifolds with different boundary conditions for bosons and fermions, very similar to strings at finite temperature \[40--45\]. Sending the radius $`R`$ of the orbifold to infinity (to zero), space-time supersymmetry is recovered. Recently Scherk-Schwarz supersymmetry breaking was studied in type I compactifications \[46--49\] and also in M-theory. Moreover the Scherk-Schwarz mechanism was also used to show that there exist indeed two types of models which smoothly interpolate between the closed type IIA/B and type 0A/B theories. They can be either realized as a freely acting orbifold of type II on $`S^1/(-1)^{F_s}S`$, where $`S`$ denotes a half shift on the circle, or alternatively as a freely acting orbifold of type 0 on $`S^1/(-1)^{f_L}S`$, where $`f_L`$ is the left-moving world sheet fermion number. In the former case supersymmetry is recovered for infinite radius of the circle, whereas in the latter case the zero radius limit is fully supersymmetric. These interpolating models are expected to exhibit a Hagedorn phase transition at the special critical radius of the circle where a tachyonic mode arises \[42--45\]. The freely acting orbifold of type IIA can also be lifted to M-theory. In this case it was postulated that the strong coupling regime of type 0A is given by M-theory on $`S^1/(-1)^{F_s}S`$. Taking this conjecture seriously, one implication is that at strong coupling the type 0A string becomes supersymmetric in the bulk. Moreover, the closed string tachyon becomes massive at strong coupling and at the same time, fermions, which are of solitonic nature in the type 0A string, become light and provide the massless (R-NS) fermions of the closed type IIA superstring.
In this paper we will extend this construction by also including the D-branes and the corresponding open strings into the orbifolds which interpolate between type II and type 0 strings. In this way we will show that one can smoothly interpolate between broken and unbroken gauge theories and at the same time between supersymmetric and non-supersymmetric models in a way not known before from field theory. We will first consider type IIA/B compactified on $`S^1/(-1)^{F_s}S`$ where $`N`$ Dp-branes (p even (odd) for type A(B)) are either wrapped around the orbifold circle, or are placed transversal to it. In the former case one interpolates between a $`(p+1)`$-dimensional, $`𝒩=4`$ supersymmetric $`U(2N)`$ gauge theory at infinite radius and a $`p`$-dimensional, non-supersymmetric $`U(N)\times U(N)`$ gauge theory at zero radius. In the case of transversal branes, the infinite radius limit of the orbifold provides an $`𝒩=4`$ supersymmetric $`U(N)`$ gauge theory in $`p+1`$ dimensions, whereas in the zero radius limit a non-supersymmetric $`U(N)\times U(N)`$ gauge theory in $`p+2`$ dimensions appears. We extend this construction by considering two-dimensional type IIA/B orbifold compactifications on $`S^1\times S^1/(-1)^{F_s}S`$. It turns out that varying the two radii of the compact space one can interpolate between supersymmetric and non-supersymmetric gauge theories living in the same space-time dimension. A similar, in fact T-dual, picture arises from D-branes of type 0A/B on $`S^1/(-1)^{f_L}S`$ and on $`S^1\times S^1/(-1)^{f_L}S`$, respectively.
Finally we will discuss how this construction can be embedded into M-theory. Here again the M-theory moduli allow us to interpolate between weakly coupled non-supersymmetric gauge theories and strongly coupled supersymmetric gauge theories in a smooth way. In contrast to the string theory embeddings, here one of the radii is related to the gauge coupling constant, so that one is really interpolating between weak and strong gauge coupling. Taking the non-supersymmetric dualities seriously, we are led to strong-weak coupling dualities for the non-supersymmetric gauge theories. Moreover, we see that non-supersymmetric gauge theories contain new branches at strong coupling, which are supersymmetric, but not all stringy degrees of freedom are decoupled.
2. Type II compactifications on freely acting orbifolds
2.1. Supersymmetry restoration in the bulk
In this section we briefly review the smooth orbifold construction which produces type 0 from type II and vice versa. One starts with the type IIA (B) superstring compactified in the ninth direction on a circle $`S^1`$ of radius $`R_9^{II}`$. The theory contains 32 unbroken supercharges for every value of $`R_9^{II}`$. In the limit $`R_9^{II}\rightarrow \infty `$ the Kaluza-Klein modes become massless and one recovers the type IIA (B) superstring in ten dimensions. On the other hand, for $`R_9^{II}\rightarrow 0`$ the winding modes become massless and the theory is now identical to the type IIB (A) superstring in ten dimensions. This is what is usually called T-duality between the type IIA/B superstring \[52,53\] (for a recent discussion on IIA/IIB T-duality and M-theory see ).
At the next step we build the orbifold type IIA (B) on $`S^1/(-1)^{F_s}`$ of radius $`R_9^0`$. This $`\text{ZZ}_2`$ projection breaks all 32 supercharges and the resulting theory is nothing but the type 0 string compactified on $`S^1`$ with a purely bosonic spectrum in the closed string bulk. For $`R_9^0\rightarrow \infty `$ the type 0A (B) string in ten dimensions emerges, whereas for $`R_9^0\rightarrow 0`$ the ten-dimensional type 0B (A) string is present. Hence $`T`$-duality among type 0A/B just works like for the type IIA/B superstring pair.
Finally we combine the action of the $`\text{ZZ}_2`$ $`(-1)^{F_s}`$ orbifold projection with the half-shift $`S`$ along the circle, i.e. we are now considering type IIA (B) on the freely acting orbifold $`S^1/(-1)^{F_s}S`$ of radius $`R_9`$. For finite values of $`R_9`$ all 32 supercharges are broken in the sense that all 32 gravitinos are massive with mass of order $`1/R_9`$. In the limit $`R_9\rightarrow \infty `$ the gravitinos become massless, and one obtains the closed string spectrum of the ten-dimensional type IIA (B) superstring. On the other hand, for $`R_9\rightarrow 0`$ the gravitinos become infinitely heavy such that they completely decouple, and one regains the spectrum of the type 0A (B) at $`R_9^0=0`$. So effectively, using also the type 0 T-duality, the limit $`R_9\rightarrow 0`$ of type IIA (B) on this orbifold is given by the ten-dimensional type 0B (A) string theory. There is a tachyon for $`R_9<\sqrt{2}`$, which decouples in the limit $`R_9\rightarrow \infty `$. Therefore one expects a Hagedorn phase transition at this radius \[43--45\]. Note that at infinite radius the two moduli spaces of type II compactified on the circle $`S^1`$ and of type II compactified on the orbifold $`S^1/(-1)^{F_s}S`$ meet. In the same way, at $`R_9=0`$ the moduli space of type 0 on $`S^1`$ and of type II on $`S^1/(-1)^{F_s}S`$ have a common intersection.
2.2. D-branes and open strings in Type II on $`S^1/(-1)^{F_s}S`$
First let us briefly recall the D-brane spectrum of type 0 strings. As already mentioned, the $`(1)^{F_s}`$ projection removes all closed string fermions from the untwisted sector and leads to a tachyon and further massless RR fields in the twisted sector. Computing the spectrum one finds that all states in the RR sector are doubled implying that all Dp-branes (p odd in type 0B, p even in type 0A) are doubled, as well. In particular, in the case of D3-branes there now exist (electric) D3 and (magnetic) D$`3^{}`$-branes. Using the boundary state approach it was shown in that indeed the boundary state representing a Dp-brane in type II splits into two boundary states of type 0. The explicit form of the boundary states was used to derive rules for open strings stretched between the various types of Dp-branes. Open strings stretched between the same kind of Dp-branes carry only space-time bosonic modes, whereas open strings stretched between a Dp and a D$`p^{}`$-brane carry only space-time fermionic modes. The massless spectrum living on the D3-branes is given by a four-dimensional gauge theory with gauge group $`G=U(N)\times U(N)`$, three complex bosons in the adjoint and four Weyl-fermions in the $`(\mathrm{N},\overline{\mathrm{N}})+(\overline{\mathrm{N}},\mathrm{N})`$ representation of $`G`$. It follows that the one-loop $`\beta `$-function vanishes, and as was shown explicitly in the two-loop $`\beta `$-function vanishes in the large N limit. In the next to leading order one obtains a non-vanishing contribution to the two-loop $`\beta `$-function coefficient, $`b_2=16`$. Thus, the gauge theory is free in the infrared.
Next consider the type II string compactified on $`S^1/(-1)^{F_s}`$. First we discuss the case where the circle $`S^1`$ is transversal to the Dp and Dp’-branes such that the D-branes take certain positions on $`S^1`$. In addition to the open strings discussed before there are also winding open strings which are wrapped $`n`$-times around $`S^1`$ before their ends are stuck at the D-branes. Therefore we call this case the winding picture. The masses of these winding states are proportional to $`|nR_9^0|`$. On the one hand, for $`R_9^0\rightarrow \infty `$ the winding modes become heavy and decouple. On the other hand, for $`R_9^0\rightarrow 0`$ an infinite number of winding modes becomes massless, which means that effectively the world volume dimension of a Dp-brane grows by one unit.
Alternatively we can also decide to put the compact circle along one of the world volume directions of the Dp-branes. This is called the momentum picture, since there are open strings on the Dp(p’)-branes which carry discrete momenta of the order $`m/R_9^0`$. Hence these states become light in the limit $`R_9^0\rightarrow \infty `$, but decouple in the other decompactification limit $`R_9^0\rightarrow 0`$. Thus, in the $`R_9^0\rightarrow \infty `$ limit the dimension of the brane world-volume is effectively enlarged.
After these considerations we are now ready to discuss the D-branes and open strings within the compactification of the type II string on the orbifold $`S^1/(-1)^{F_s}S`$.
(i) The winding picture: transversal Dp-branes
For concreteness let us consider the type IIB string compactified on a circle of radius $`R_9`$. We place 2N transversal D3-branes in this background and divide by the $`\text{ZZ}_2`$ operation $`T=(-1)^{F_s}S`$. The shift acts on a momentum/winding ground state as $`S|m,n\rangle =(-1)^m|m,n\rangle `$. Let us first consider the range $`0<R_9<\infty `$ and then consider the two possible limits. Requiring that $`T`$ is indeed a symmetry of the background, we have to make sure that the D3-branes are arranged in such a way that $`T`$ transforms one brane into another. We will place N branes at a position $`x_9=A=0`$ on the circle and N branes at $`x_9=B=R_9/2`$. Therefore we deal with two kinds of winding strings in the open string sector. First there are the AA- and BB-sectors with open strings between two D3-branes either at $`x_9=0`$ or at $`x_9=R_9/2`$ and integer winding numbers $`w=nR_9`$. Their masses are given by
$$M^2\sim (nR_9)^2.$$
Second, in the AB- and BA-sectors there are open strings stretched between one D3-brane at $`x_9=0`$ and another D3-brane at $`x_9=R_9/2`$. These states have half-integer winding numbers $`w=(n+\frac{1}{2})R_9`$, and their masses are given by
$$M^2\sim ((n+\frac{1}{2})R_9)^2.$$
Hence the ground state in the AB(BA)-sector is massive for finite $`R_9`$.
The natural action of $`T`$ on the Chan-Paton factors of these branes is given by
$$\gamma _T=\left(\begin{array}{cc}0& I\\ I& 0\end{array}\right)_{2N,2N}$$
so that the branes at $`x_9=0`$ are mapped to the branes at $`x_9=R_9/2`$. Computing the annulus amplitude for open string stretched between two D3-branes is straightforward and gives
$$\begin{array}{cc}\hfill A=& \mathrm{Tr}\left[\frac{1+T}{2}P_{GSO}e^{-2\pi tL_0}\right]\hfill \\ \hfill =& \frac{N^2}{2}\frac{\vartheta \left[\genfrac{}{}{0pt}{}{0}{0}\right]^4-\vartheta \left[\genfrac{}{}{0pt}{}{0}{1/2}\right]^4-\vartheta \left[\genfrac{}{}{0pt}{}{1/2}{0}\right]^4}{\eta ^{12}}\sum _{n\in \text{ZZ}}\left(e^{-2\pi tR_9^2n^2}+e^{-2\pi tR_9^2(n+\frac{1}{2})^2}\right),\hfill \end{array}$$
with argument $`e^{-2\pi t}`$. The first term with integer windings is from open strings stretched between branes at the same location and the second term with half-integer windings is from open strings stretched between branes at opposite locations of the circle. It is straightforward to compute the massless spectrum. In the range $`R_9>0`$ we obtain the four-dimensional supersymmetric spectrum shown in Table 1.
sector | spin | gauge $`U(N)`$
AA, BB | vector | Adj
AA, BB | scalar | $`6\times \mathrm{𝐀𝐝𝐣}`$
AA, BB | Weyl | $`4\times \mathrm{𝐀𝐝𝐣}`$
Table 1: Winding picture: open string spectrum of $`S^1/T`$ for finite $`R_9`$
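The vanishing of the theta-function combination in the amplitude above – Bose-Fermi degeneracy at every mass level – is just Jacobi’s abstruse identity $`\vartheta _3^4-\vartheta _4^4-\vartheta _2^4=0`$, which can be checked numerically, e.g. with mpmath (the identification of the characteristics with $`\vartheta _{2,3,4}`$ in the comments is the standard one):

```python
from mpmath import jtheta, mp

mp.dps = 30
for t in (0.1, 0.5, 2.0):
    q = mp.exp(-2 * mp.pi * t)          # annulus modulus used in the text
    th2 = jtheta(2, 0, q)               # characteristics [1/2, 0]
    th3 = jtheta(3, 0, q)               # characteristics [0, 0]
    th4 = jtheta(4, 0, q)               # characteristics [0, 1/2]
    print(t, th3**4 - th4**4 - th2**4)  # ~0 to working precision
```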
Thus the massless spectrum in the AA and BB sectors is exactly that of $`U(N)`$ $`𝒩=4`$ super Yang-Mills theory in four space-time dimensions. In the limit $`R_9\rightarrow \infty `$ the mass of the other winding states becomes infinite and they decouple. Therefore we are dealing with four-dimensional $`U(N)`$ $`𝒩=4`$ supersymmetric Yang-Mills in this decompactification limit.
In the $`R_9\rightarrow 0`$ limit, however, an infinite number of winding states in the AA and BB sectors becomes massless. This has the effect that the gauge theory now lives in one dimension higher. Moreover the gauge group gets enlarged, as now open strings in the AB and BA sectors stretched between the D3-branes at locations A and B become massless. By a unitary transformation
$$U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}I& I\\ I& -I\end{array}\right)_{2N,2N}$$
one obtains $`\gamma _T=\mathrm{diag}[I,-I]`$ and now one is in the situation of the pure $`(-1)^{F_s}`$ orbifold, which leads to the massless open string spectrum listed in Table 2.
sector | spin | gauge $`U(N)\times U(N)`$
33, 3’3’ | vector | (Adj,1)+(1,Adj)
33, 3’3’ | scalar | $`5\times \{(\mathrm{𝐀𝐝𝐣},1)+(1,\mathrm{𝐀𝐝𝐣})\}`$
33’, 3’3 | Dirac | $`2\times \{(N,\overline{N})+(\overline{N},N)\}`$
Table 2: Winding picture: open string spectrum of $`S^1/T`$ for $`R_9=0`$
This is precisely the mass spectrum of type 0B with $`N`$ electric D3 plus $`N`$ magnetic D3’-branes but at zero radius $`R_9^{0B}=0`$. This means that the non-supersymmetric gauge theory actually lives in five space-time dimensions, i.e. this limit is nothing else than the type 0A string with a non-supersymmetric $`U(N)\times U(N)`$ gauge theory arising from $`N`$ electric D4 plus $`N`$ magnetic D4’-branes.
In summary, so far we have constructed an interpolating model between four-dimensional $`𝒩=4`$ super YM theory at $`R_9\mathrm{}`$ and a special five-dimensional non-supersymmetric gauge theory at $`R_90`$. The additional massless states appear in the winding sector of open strings; hence there is no field theoretic description for them. $`R_9`$ is a purely stringy parameter, which is not present in conventional gauge theory. The corresponding closed string modulus field in the bulk is coupled to the open strings in the boundary and provides at the same time the supersymmetry restoration and the gauge symmetry breaking on the D-branes.
Of course, this picture can be immediately generalized by considering 2N Dp-branes, p even (odd), in the type IIA(B) superstring compactified on $`S^1/(-1)^{F_s}S`$, where again half of the Dp-branes are positioned at $`x_9=0`$ and the other half sit at $`x_9=R_9/2`$. In this way one interpolates between supersymmetric type IIA(B) Dp-branes at $`R_9\rightarrow \infty `$ and non-supersymmetric type 0B(A) D(p+1)-, D(p+1)’-branes at $`R_9\rightarrow 0`$. In the open string sector one is smoothly interpolating between (p+1)-dimensional supersymmetric $`U(N)`$ Yang-Mills theory with 16 supercharges at $`R_9=\infty `$ and (p+2)-dimensional non-supersymmetric $`U(N)\times U(N)`$ Yang-Mills theory at $`R_9=0`$.
(ii) The momentum picture: wrapped Dp-branes
Now let us discuss the case where the orbifold $`S^1/(-1)^{F_s}S`$ lies in one of the world volume directions of the D-branes, i.e. the D-branes are wrapped around the compact orbifold. The open string states again split into two sectors, namely momentum states with even momenta $`p_9=2m/R_9`$ and masses
$$M^2\sim (2m)^2/R_9^2,$$
and in general different states with odd momenta $`p_9=(2m+1)/R_9`$ and corresponding masses
$$M^2\sim (2m+1)^2/R_9^2.$$
To be specific consider 2N D4-branes of type IIA wrapped in this way. For finite values of $`R_9`$ the resulting gauge theory lives in four uncompactified space-time dimensions, whereas in the $`R_9\rightarrow \infty `$ limit momentum modes are massless, and the gauge theory becomes five-dimensional. Since in the $`R_9\rightarrow 0`$ limit we would like to obtain the non-supersymmetric gauge theory with gauge group $`U(N)\times U(N)`$ we make the following choice for the action of $`T`$ on the Chan-Paton labels
$$\gamma _T=\left(\begin{array}{cc}I& 0\\ 0& -I\end{array}\right)_{2N,2N}.$$
In order to obtain the precise form of the spectrum we compute the annulus diagram for open strings stretched between the wrapped D4-branes.
$$\begin{array}{cc}\hfill A=& \mathrm{Tr}\left[\frac{1+T}{2}P_{GSO}e^{-2\pi tL_0}\right]=N^2\frac{\vartheta \left[\genfrac{}{}{0pt}{}{0}{0}\right]^4-\vartheta \left[\genfrac{}{}{0pt}{}{0}{1/2}\right]^4-\vartheta \left[\genfrac{}{}{0pt}{}{1/2}{0}\right]^4}{\eta ^{12}}\sum _{m\in \text{ZZ}}\left(e^{-2\pi t\frac{m^2}{R_9^2}}\right).\hfill \end{array}$$
The resulting massless spectrum for $`R_9<\infty `$ is listed in Table 3.
sector | spin | gauge $`U(N)\times U(N)`$
$`p_9`$ even | vector | $`(\mathrm{𝐀𝐝𝐣},1)+(1,\mathrm{𝐀𝐝𝐣})`$
$`p_9`$ even | scalar | $`6\times \{(\mathrm{𝐀𝐝𝐣},1)+(1,\mathrm{𝐀𝐝𝐣})\}`$
$`p_9`$ even | Weyl | $`4\times \{(N,\overline{N})+(\overline{N},N)\}`$
Table 3: Momentum picture: open string spectrum of $`S^1/T`$ for finite $`R_9`$
We see indeed that the massless spectrum is that of four-dimensional, non-supersymmetric Yang-Mills with gauge group $`U(N)\times U(N)`$. The massless spectrum agrees with the open string spectrum of D3- and D3’-branes in the type 0B string. Therefore in the limit $`R_9\rightarrow 0`$ the massive states with masses proportional to $`(2m+1)/R_9`$ decouple, and we get precise agreement with the type 0B string.
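How the choice $`\gamma _T=\mathrm{diag}[I,-I]`$ produces the representation content of Table 3 can be made explicit on the Chan-Paton matrices: massless bosons must satisfy $`\lambda =\gamma _T\lambda \gamma _T^{-1}`$ and therefore fill the adjoint of $`U(N)\times U(N)`$, while massless fermions pick up an extra minus sign and are pushed into the bifundamental. A small counting sketch (illustrative only, for N = 2):

```python
import numpy as np

N = 2
gamma_T = np.diag([1] * N + [-1] * N)           # gamma_T = diag(I, -I)

def surviving(sign):
    """Count independent 2N x 2N Chan-Paton matrices with
    lambda = sign * gamma_T . lambda . gamma_T^{-1}."""
    dim, kept = 2 * N, 0
    for a in range(dim):
        for b in range(dim):
            lam = np.zeros((dim, dim)); lam[a, b] = 1.0
            if np.allclose(lam, sign * gamma_T @ lam @ np.linalg.inv(gamma_T)):
                kept += 1
    return kept

print(surviving(+1))   # 2*N*N = 8 -> adjoint of U(N) x U(N)   (bosons)
print(surviving(-1))   # 2*N*N = 8 -> (N,Nbar) + (Nbar,N)       (fermions)
```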
The limit $`R_9\rightarrow \infty `$ provides, on the other hand, an infinite number of new massless momentum states which enhance the gauge symmetry to the group $`U(2N)`$ and also restore space-time supersymmetry. So in the large radius limit the spectrum is given by supersymmetric $`U(2N)`$ Yang-Mills theory in five space-time dimensions, where 16 supercharges are preserved in the open string theory.
In general, starting with $`2N`$ wrapped Dp-branes in type IIA(B) on $`S^1/(-1)^{F_s}S`$ one interpolates between the following two decompactification limits:
$`R_9\rightarrow 0`$: N D(p-1) and N D(p-1)’-branes of type 0B(A) with p-dimensional, non-supersymmetric $`U(N)\times U(N)`$ gauge theory.
$`R_9\rightarrow \infty `$: 2N Dp-branes of type IIA(B) with (p+1)-dimensional, supersymmetric $`U(2N)`$ gauge theory.
2.3. D-branes and open strings in type II on $`S^1\times S^1/(-1)^{F_s}S`$
So far we have interpolated between supersymmetric and non-supersymmetric gauge theories living in different numbers of space-time dimensions. Now we will extend the discussion by compactifying the type II string on a two-dimensional compact space given by $`S^1\times (S^1/(-1)^{F_s}S)`$ characterized by the two radii $`R_8`$ and $`R_9`$. Using the T-duality on the compact circle in $`x_8`$ we can now smoothly interpolate between D-branes of the same world volume dimensions, and hence also between non-supersymmetric and supersymmetric gauge theories of the same dimensionality.
(i) The winding picture: transversal Dp-branes
Let us consider 2N Dp-branes in type IIA(B) which are transversal to both the $`x_8`$ and the $`x_9`$ direction. The positions of the Dp-branes on the orbifold circle are as discussed in section (2.1). The spectrum of this theory follows without large efforts from the previous discussion. For finite values of the two radii $`R_8`$ and $`R_9`$ we are dealing with a (p+1)-dimensional gauge theory, living on the world volume of the Dp-branes. The massless states are those of maximally supersymmetric $`U(N)`$ Yang-Mills gauge theory. Of particular interest are the following four possible decompactification limits:
a) $`R_8\rightarrow \infty `$, $`R_9\rightarrow \infty `$: Here we are dealing with Dp-branes of type IIA(B), and the corresponding gauge theory is (p+1)-dimensional, supersymmetric Yang-Mills with $`U(N)`$ gauge group.
b) $`R_8\rightarrow 0`$, $`R_9\rightarrow \infty `$: This limit describes D(p+1)-branes of type IIB(A) with (p+2)-dimensional, supersymmetric $`U(N)`$ gauge theory.
c) $`R_8\rightarrow \infty `$, $`R_9\rightarrow 0`$: Now the decompactification limit corresponds to self-dual D(p+1)- and D(p+1)’-branes of type 0B(A) with non-supersymmetric $`U(N)\times U(N)`$ gauge theory in p+2 dimensions.
d) $`R_8\rightarrow 0`$, $`R_9\rightarrow 0`$: Finally we obtain self-dual D(p+2), D(p+2)’-branes of type 0A(B) with non-supersymmetric $`U(N)\times U(N)`$ gauge theory in p+3 dimensions.
Therefore, in the winding picture the moduli space of this two-dimensional compactification enables us to interpolate between N D3-branes of type IIB and N self-dual D3, D3’-branes of type 0B. Hence there exists a stringy interpolation mechanism between four-dimensional, non-supersymmetric $`U(N)\times U(N)`$ gauge theory with 6 adjoint scalars plus $`4\times \{(N,\overline{N})+(\overline{N},N)\}`$ Weyl fermions and four-dimensional $`𝒩=4`$ supersymmetric $`U(N)`$ Yang-Mills theory. The supersymmetric $`U(N)`$ gauge group is just the diagonal subgroup of the non-supersymmetric $`U(N)\times U(N)`$ gauge symmetry.
(ii) The momentum picture: wrapped Dp-branes
In the momentum picture we consider 2N Dp-branes which are wrapped in both the $`x_8`$ and the $`x_9`$ directions. For finite values of the radii there is a non-supersymmetric $`U(N)\times U(N)`$ gauge theory with massless fields living in p-1 uncompactified space-time dimensions. Again we would like to consider the four special decompactification limits:
a) $`R_8\to \infty `$, $`R_9\to \infty `$: Here we are dealing with Dp-branes of type IIA(B), and the corresponding gauge theory is (p+1)-dimensional, supersymmetric Yang-Mills with $`U(2N)`$ gauge group.
b) $`R_8\to 0`$, $`R_9\to \infty `$: This limit describes D(p-1)-branes of type IIB(A) with p-dimensional, supersymmetric $`U(2N)`$ gauge theory.
c) $`R_8\to \infty `$, $`R_9\to 0`$: Now the decompactification limit corresponds to self-dual D(p-1)- and D(p-1)’-branes of type 0B(A) with non-supersymmetric $`U(N)\times U(N)`$ gauge theory in p dimensions.
d) $`R_8\to 0`$, $`R_9\to 0`$: Finally we obtain self-dual D(p-2), D(p-2)’-branes of type 0A(B) with non-supersymmetric $`U(N)\times U(N)`$ gauge theory in p-1 dimensions.
We see that within the momentum picture one can again interpolate between 2N D3-branes of type IIB and N self-dual D3, D3’-branes of type 0B. This time it provides an interpolation mechanism between four-dimensional, non-supersymmetric $`U(N)\times U(N)`$ gauge theory with 6 adjoint scalars plus $`4\times \{(N,\overline{N})+(\overline{N},N)\}`$ Weyl fermions and four-dimensional $`𝒩=4`$ supersymmetric $`U(2N)`$ Yang-Mills theory. Here the non-supersymmetric $`U(N)\times U(N)`$ gauge group is a regular subgroup of the supersymmetric $`U(2N)`$ gauge symmetry.
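Group-theoretically this is the statement that, under the regular embedding $`U(N)\times U(N)\subset U(2N)`$, the adjoint representation of $`U(2N)`$ decomposes as
$$\mathrm{adj}\,U(2N)=(\mathrm{adj},1)\oplus (1,\mathrm{adj})\oplus (N,\overline{N})\oplus (\overline{N},N),$$
and the orbifold projection on the Chan–Paton factors keeps the gauge bosons and the 6 scalars of the parent $`𝒩=4`$ multiplet in $`(\mathrm{adj},1)\oplus (1,\mathrm{adj})`$, while it keeps the 4 Weyl fermions in $`(N,\overline{N})\oplus (\overline{N},N)`$, reproducing the matter content quoted above.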
3. Type 0 compactifications on freely acting orbifolds
Alternatively we can also start with the type 0A (B) string compactified on $`S^1`$. Modding out by $`(-1)^{f_L}`$ leads back to the type IIA (B) superstring theories. It follows that the compactification of type 0A (B) on the freely acting orbifold $`S^1/(-1)^{f_L}S`$ possesses the following decompactification limits. For $`R_9\to \infty `$ one recovers the ten-dimensional non-supersymmetric type 0A (B) theories, and for $`R_9\to 0`$ the ten-dimensional supersymmetric type IIB (A) theory arises. Observe that now a tachyon develops for large radii $`R_9>\sqrt{2}`$. As has been shown, the type 0A (B) orbifold is T-dual to the type IIB (IIA) orbifolds discussed in the previous section, where the relation between the radii is $`R_{II}=2/R_0`$. The straightforward T-dual of the type IIB (IIA) over $`S^1/(-1)^{F_S}S`$ orbifold is of course type IIA (IIB) over $`S^1/(-1)^{F_S}\stackrel{~}{S}`$, where $`\stackrel{~}{S}`$ is a shift in the momentum lattice acting as $`(-1)^n`$ on winding modes.
Let us now discuss the properties of the D-branes and the gauge theories in the open string sector within this orbifold compactification of the type 0 string. Since the situation is just T-dual to the orbifold compactification of the type II strings discussed before, we will only summarize the main results. In order to interpolate between type 0 and type II we will only allow for self-dual D-branes in the type 0 orbifolds, i.e. we are placing N electric Dp and N magnetic Dp’-branes either transversal to the orbifold or wrapped around it.
(i) The winding picture: transversal Dp-branes
Here we are placing N electric Dp and N magnetic Dp’-branes both at $`x_9=0`$ and at $`x_9=R_9/2`$. The action of $`(-1)^{f_L}S`$ on the CP-factors relates for instance a Dp-brane at $`x_9=0`$ to a Dp’-brane at $`x_9=R_9/2`$. Using the T-duality relation between the type II and type 0 radii, one obtains exactly the annulus partition function in (2.1). For generic radii the open string sector is given by (p+1)-dimensional, non-supersymmetric $`U(N)\times U(N)`$ gauge theory. For $`R_9\to 0`$ additional gauge bosons as well as their superpartners become massless, such that this limit is described by D(p+1)-branes with associated $`𝒩=4`$ supersymmetric $`U(2N)`$ gauge theory in p+2 space-time dimensions. On the other hand, for $`R_9\to \infty `$ we obtain the Dp, Dp’-branes of type 0, and the gauge theory is non-supersymmetric $`U(N)\times U(N)`$ Yang-Mills theory. Of course, we can extend this picture by considering a type 0 compactification on $`S^1\times S^1/(-1)^{f_L}S`$. In this way we get the T-dual picture of the interpolating models discussed in section 2.3.
(ii) The momentum picture: wrapped Dp-branes
Here the situation is similar to the winding picture in the type II orbifold compactification. We wrap N electric Dp and N magnetic Dp’-branes around the circle and, using the T-duality relation for the radii, we obtain the same result as in (2.1). For $`R_9\to 0`$ the theory becomes $`𝒩=4`$ supersymmetric, but now with $`U(N)`$ gauge group. In contrast, for $`R_9\to \infty `$ supersymmetry is completely broken but the gauge group is enhanced to $`U(N)\times U(N)`$.
4. M-theory embedding
4.1. The bulk theory
So far we have considered embeddings of supersymmetric and non-supersymmetric gauge theories into string theory and by varying some stringy parameters we have seen that one can interpolate between them. Unfortunately, this does not teach us anything new about the dynamics of the non-supersymmetric gauge theories. Therefore, we should try to lift the whole picture to M-theory where at least one of the radii is related to the string coupling constant and therefore to the gauge coupling constant.
Let us briefly review the suggestion that the non-supersymmetric type 0A string can be viewed as an M-theory compactification on the freely acting orbifold $`S^1/(-1)^{F_s}S`$. As a consequence of this conjecture it follows that at small radius $`r_{10}`$, i.e. at weak string coupling, one recovers the non-supersymmetric type 0A string, whereas at large radius $`r_{10}`$, i.e. at strong string coupling, the maximally supersymmetric M-theory in 11 dimensions emerges. Hence the type 0A string should contain fermionic solitons with masses $`m_f=1/r_{10}`$, and the tachyon of type 0A should become massive at strong coupling. Similarly, it has been argued that the ten-dimensional type 0B string can be obtained as the zero volume limit of M-theory on $`T^2/(-1)^{F_s}S`$.
So let us consider M-theory compactified on the freely acting orbifold $`S^1\times S^1/(-1)^{F_s}S`$. The corresponding two radii are called $`r_9`$ and $`r_{10}`$ (radii measured in units of the 11-dimensional Planck length $`L_{11}`$ are denoted by small letters, whereas radii measured in units of the string scale $`\sqrt{\alpha ^{\prime }}`$ are denoted, as before, by capital letters). If the $`\mathbb{Z}_2`$ orbifold is in the $`x_9`$ direction, then the previous perturbative discussion of the interpolation between supersymmetric and non-supersymmetric theories applies. However, if we exchange the roles of the two circles such that supersymmetry is broken by the M-theory coordinate $`x_{10}`$, one has to conclude that the type 0A string at weak coupling is given by M-theory on $`S^1/(-1)^{F_s}S`$ at zero radius $`r_{10}`$. Varying the M-theory radius $`r_{10}`$, i.e. the 0A coupling constant $`g_{10}^A`$, one interpolates between M-theory with 32 supercharges and the type 0A string with completely broken supersymmetry. To be precise, let us recall the well-known relations [55,56] between the string and M-theory parameters (the relations between the type 0 parameters and the M-theory parameters are as in the type II case, up to additional factors of 2 due to the orbifoldization). First, the relation between the string coupling constant $`g_{10}^A`$ and $`r_{10}`$ is given by
$$g_{10}^A=\left(\frac{r_{10}}{2}\right)^{3/2}.$$
Next we consider the compactification of M-theory to nine dimensions. The nine-dimensional radius of type 0A on $`S^1`$ is then related to the M-theory parameters as
$$R_9^A=r_9\sqrt{\frac{r_{10}}{2}},$$
whereas the nine-dimensional 0A string coupling constant reads
$$g_9^A=\frac{g_{10}^A}{\sqrt{R_9^A}}=\left(\frac{r_{10}}{2}\right)^{5/4}r_9^{-1/2}.$$
Using the T-duality between 0A and 0B in nine dimensions we can express the 0B parameters in the following way:
$$R_9^B=\frac{1}{R_9^A}=\frac{\sqrt{2}}{r_9\sqrt{r_{10}}},\qquad g_9^B=g_9^A=\left(\frac{r_{10}}{2}\right)^{5/4}r_9^{-1/2},\qquad g_{10}^B=g_9^B\sqrt{R_9^B}=\frac{r_{10}}{2r_9}.$$
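As a quick consistency check, the last relation indeed follows from the first two:
$$g_{10}^B=g_9^B\sqrt{R_9^B}=\left(\frac{r_{10}}{2}\right)^{5/4}r_9^{-1/2}\left(\frac{\sqrt{2}}{r_9\sqrt{r_{10}}}\right)^{1/2}=2^{-5/4+1/4}\,r_{10}^{5/4-1/4}\,r_9^{-1}=\frac{r_{10}}{2r_9}.$$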
Using these relations we are interested in the following three type IIB (0B) limits.
a) $`r_9\to 0`$, $`r_{10}\to 0`$ with $`R_9^B\to \infty `$: Here we obtain the type 0B string at finite or zero string coupling $`g_{10}^B=r_{10}/(2r_9)`$.
b) $`r_9\to 0`$, $`r_{10}\to \infty `$ with $`R_9^B\to \infty `$: This limit describes the strongly coupled type IIB string with $`g_{10}^B\to \infty `$.
c) $`r_9\to \infty `$, $`r_{10}\to 0`$ with $`R_9^B\to \infty `$: Now one is dealing with the weakly coupled type 0B string with $`g_{10}^B\to 0`$.
Let us now discuss how M-theory branes and the corresponding gauge theories can be incorporated into this picture.
4.2. M2-branes
The membrane solution (M2-brane) of 11-dimensional supergravity plays a very important role in the relation between string theory and M-theory. Upon circle compactification from 11 to 10 dimensions M2-branes with world volumes transversal to $`x_{10}`$ can be identified with the IIA D2-branes. On the other hand, being wrapped around the eleventh dimension the M2-brane provides the fundamental string of the IIA superstring. Let us now discuss the fate of the M2-branes under the M-theory compactification on $`S^1\times S^1/(1)^{F_s}S`$, where the orbifold lies in the $`x_{10}`$ direction. As a first set of membranes we introduce 2N M2-branes which are completely unwrapped, so that their worldvolumes are transversal to the compact two-dimensional orbifold. Therefore this choice corresponds to the winding picture discussed before. Of course we have to place N M2-branes at opposite positions on the circle. In order to obtain gauge theories the M2-branes are intersected by another set of M2-branes which are wrapped around the orbi-circle in $`x_{10}`$. Hence from the string point of view these wrapped membranes correspond to the open strings which are responsible for the gauge symmetry degrees of freedom.
In order to see what the various limits of M-theory parameters mean for the corresponding gauge theories we need the relations between string theory and M-theory parameters, now keeping the 11-dimensional Planck length $`L_{11}`$ explicit in all formulas, i.e. $`r_9`$ and $`r_{10}`$ are now dimensionful(!) quantities. Focussing on type B quantities we obtain:
$$R_9^B=\frac{2L_{11}^3}{r_9r_{10}},g_{10}^B=\frac{r_{10}}{2r_9}.$$
Note that in switching from $`R_9^B`$ measured in string units to the radius measured in M-theory units, the following relation between the fundamental string scale $`\alpha ^{\prime }`$ and $`L_{11}`$ is required:
$$T_F=\frac{1}{\alpha ^{\prime }}=\frac{r_{10}}{2L_{11}^3}.$$
The ten-dimensional gravitational coupling constant is given by
$$\kappa _{10}^2=\frac{L_{11}^9}{r_{10}}.$$
In addition we also need the expression for the tension of the solitonic D1-strings and D3-branes:
$$T_{D1}=\frac{T_F}{g_{10}^B}=\frac{r_9}{L_{11}^3},T_{D3}=\frac{T_F^2}{g_{10}^B}=\frac{r_9r_{10}}{2L_{11}^6}.$$
In the following we would like to consider the gauge theories in the limit $`R_9^B\to \infty `$, i.e. in the type B limit. Moreover, we would like to study the behavior of the gauge theories in the limit where the 11-dimensional Planck length is small, i.e. $`L_{11}=\rho \to 0`$.
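Writing $`r_9=\rho ^a`$ and $`r_{10}=\rho ^b`$, the relations collected above reduce to simple powers of $`\rho `$:
$$R_9^B=2\rho ^{3-a-b},\qquad T_F=\tfrac{1}{2}\rho ^{b-3},\qquad T_{D1}=\rho ^{a-3},\qquad T_{D3}=\tfrac{1}{2}\rho ^{a+b-6},\qquad g_{10}^B=\tfrac{1}{2}\rho ^{b-a},\qquad \kappa _{10}^2=\rho ^{9-b}.$$
Thus $`R_9^B\to \infty `$ requires $`a+b>3`$, the decoupling conditions $`T_F,T_{D1}\to \infty `$ require $`b<3`$ and $`a<3`$ respectively, and weak gauge coupling $`g_{10}^B\to 0`$ requires $`b>a`$; the three cases below simply select different corners of the $`(a,b)`$ plane.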
a) $`r_9=\rho ^a\to 0`$ ($`a>0`$), $`r_{10}=\rho ^b\to 0`$ ($`b>0`$):
In order that $`R_9^B\to \infty `$ we have to require that $`a+b>3`$. Then this limit describes the non-supersymmetric type 0B string. It contains N self-dual D3-branes. In order that the gauge theory modes completely decouple from the perturbative as well as all non-perturbative stringy modes we demand in addition that $`T_F\to \infty `$ and $`T_{D1}\to \infty `$. This provides a further restriction on how $`r_9`$ and $`r_{10}`$ go to zero, namely $`a<3`$ and also $`b<3`$. Choosing the parameters in this way we have a non-supersymmetric $`U(N)\times U(N)`$ gauge theory with 6 adjoint scalars and 4+4 bifundamental fermions in four dimensions. The gauge theory is weakly coupled, $`g_{YM}=\sqrt{g_{10}^B}\to 0`$, if $`b>a`$. Moreover, this non-supersymmetric gauge theory is expected to possess an S-duality symmetry $`g_{YM}\to 1/g_{YM}`$, which is realized by the exchange of $`r_9`$ and $`r_{10}`$. Thus the strongly coupled non-supersymmetric $`U(N)\times U(N)`$ gauge theory is obtained by choosing $`a>b`$. Of course, one has to be very careful with such duality statements in the non-supersymmetric case, as the lack of supersymmetry does not allow us to find more supporting evidence for such a conjecture.
b) $`r_9=\rho ^a\to 0`$ ($`a>0`$), $`r_{10}=\rho ^b`$ const. or $`\to \infty `$ ($`b\le 0`$):
Choosing the two radii in this way we run towards supersymmetry restoration; supersymmetry is completely restored for $`r_{10}\to \infty `$. Again keeping $`R_9^B`$ large we need $`a>3-b`$, i.e. $`a>3`$. Then in the supersymmetric limit the N M2-branes can be viewed as type IIB D3-branes with open strings ending on them. The open strings lead to $`𝒩=4`$ supersymmetric $`U(N)`$ gauge theory at very strong Yang-Mills gauge coupling $`g_{YM}=\sqrt{g_{10}^B}\to \infty `$. However, now the stringy modes do not decouple anymore. Although the fundamental string tension $`T_F`$ is still very large, the D1 string tension $`T_{D1}`$ goes to zero, since $`a>3`$. This means that we deal not only with strongly coupled $`𝒩=4`$ supersymmetric $`U(N)`$ Yang-Mills, but also all D-stringy modes are present and do not decouple. We call this theory $`𝒩=4`$ MSYM.
c) $`r_9=\rho ^a`$ const. or $`\to \infty `$ ($`a\le 0`$), $`r_{10}=\rho ^b\to 0`$ ($`b>0`$):
In this limit supersymmetry is completely broken. Requiring $`R_9^B\to \infty `$ implies $`b>3-a`$, i.e. $`b>3`$. Then the N M2-branes can be viewed as self-dual type 0B D3-branes with non-supersymmetric $`U(N)\times U(N)`$ gauge theory at very weak Yang-Mills gauge coupling $`g_{YM}=\sqrt{g_{10}^B}\to 0`$. Now the elementary stringy modes do not decouple since $`T_F\to 0`$ whereas $`T_{D1}\to \infty `$. This means that we deal not only with classical non-supersymmetric $`U(N)\times U(N)`$ Yang-Mills theory, but also all elementary stringy modes are present and do not decouple. Let us call this theory $`𝒩=0`$ MYM theory. Under exchange of the two compact radii this case is mapped to the limit described in b), implying a strong-weak duality between this non-supersymmetric gauge theory and the supersymmetric one of case b).
In all three type IIB (0B) limits the ten-dimensional gravitational coupling vanishes and the tension of the D3-branes becomes infinite, so that gravity and massive modes on the D3-branes decouple properly. If we take the M-theory discussion seriously, it tells us that the non-supersymmetric $`U(N)\times U(N)`$ gauge theory can be continued at very weak and very strong coupling to two new branches of gauge like theories, where not all the stringy modes decouple from the dynamics. On one of these two branches the model can be deformed continuously to a supersymmetric model. Moreover, analogously to the bulk theory one might expect a strong-weak duality for the non-supersymmetric $`U(N)\times U(N)`$ gauge theory for finite gauge coupling. In the large N limit this might really be true, as the theory is conformal and the gauge coupling is a free parameter.
4.3. M5-branes
The solitonic solutions dual to the membranes in 11-dimensional supergravity are given by the M-theory 5-branes (M5-branes). We will consider 2N M5-branes which are wrapped around both directions of the compact space $`S^1\times S^1/(-1)^{F_s}S`$. Hence from the string point of view we are in the momentum picture. Therefore in nine dimensions the M5-branes correspond to 2N D3-branes. These 2N M5-branes are intersected by M2-branes which correspond to the open string sector in string theory. In complete analogy to the unwrapped case, in the three type IIB (0B) limits one obtains the following spectra.
a) $`r_9=\rho ^a\to 0`$ ($`a>0`$), $`r_{10}=\rho ^b\to 0`$ ($`b>0`$): We obtain N self-dual D3 plus D3’-branes of finitely coupled type 0B. Now we have non-supersymmetric $`U(N)\times U(N)`$ gauge theory with the usual matter content.
b) $`r_9=\rho ^a\to 0`$ ($`a>0`$), $`r_{10}=\rho ^b`$ const. or $`\to \infty `$ ($`b\le 0`$): Here we are dealing with the strongly coupled type IIB string. After T-duality in the $`x_9`$ direction the 2N M5-branes can be viewed as 2N D3-branes with open strings ending on them. The corresponding gauge theory in the momentum picture is given by four-dimensional, $`𝒩=4`$ supersymmetric $`U(2N)`$ gauge theory at very strong Yang-Mills gauge coupling. However, not all D1-stringy modes decouple, and one again obtains $`𝒩=4`$ MSYM with gauge group $`U(2N)`$.
c) $`r_9=\rho ^a`$ const. or $`\to \infty `$ ($`a\le 0`$), $`r_{10}=\rho ^b\to 0`$ ($`b>0`$):
Now one is dealing with N D3 plus N D3’-branes of weakly coupled type 0B string theory. The corresponding gauge theory is weakly coupled $`𝒩=0`$ MYM with gauge group $`U(N)\times U(N)`$.
In the M5-brane scenario we get two new branches at very small and very large coupling, on which the supersymmetric $`U(2N)`$ and the non-supersymmetric $`U(N)\times U(N)`$ gauge theory do still couple to some stringy modes.
5. Conclusions
In this paper we have shown that D-branes in freely acting orbifolds of type II and type 0 string constructions allow for a continuous interpolation between $`𝒩=4`$ supersymmetric gauge theories with $`G=U(N)`$ or $`G=U(2N)`$ gauge symmetry and a non-supersymmetric gauge theory with gauge group $`G=U(N)\times U(N)`$, 6 adjoint scalars in each gauge group factor plus 4 Weyl fermions in the representations $`(N,\overline{N})+(\overline{N},N)`$. The interpolation mechanism is of stringy nature and can be realized by varying the radii of the compact orbifold space. Therefore the coupling of the open string degrees of freedom to the modes of the closed string compactification is crucial for the understanding of this mechanism.
We have also discussed how this scenario can be lifted to M-theory. As a result of this discussion a non-supersymmetric gauge theory at weak coupling can be continuously connected to a supersymmetric, gauge-theory-like branch at strong coupling. Another conclusion from this investigation was that the non-supersymmetric gauge theory itself might have a strong-weak coupling duality. Let us emphasize that duality conjectures for non-supersymmetric models are on less solid ground than in the supersymmetric case. Therefore, the M-theory results should be considered with some care.
It is clear that the discussion in this paper can be straightforwardly generalized by putting branes on transversal singularities or by discussing Hanany-Witten type brane constructions. Consider for example the case of D3-branes probing a transversal, non-compact $`\mathbb{Z}_K`$ orbifold. Via T-duality this is equivalent to $`K`$ NS 5-branes intersected by D4-branes [57,58]. For the type II case, supersymmetry is broken to $`𝒩=2`$ and the gauge group is now given by $`G=U(N)^K`$. In the corresponding type 0 string the same construction leads to a non-supersymmetric gauge theory with gauge group $`[U(N)\times U(N)]^K`$ plus certain matter fields. Again the large N $`\beta `$-function is the same as in the $`𝒩=2`$, type II parent model. As before, the freely acting orbifold construction implies that one can interpolate between these $`𝒩=2`$ and $`𝒩=0`$ gauge theories. For orbifolds or conifolds which break the amount of supersymmetry on the brane down to $`𝒩=1`$, an interpolating model can be built in an analogous way.
Acknowledgements
We thank C. Angelantonj, M. Green, A. Karch, I. Klebanov and A. Sagnotti for useful discussions. The work is partially supported by the European Commission TMR program under the contract ERBFMRXCT960090, in which the Humboldt-University at Berlin and the Ecole Normale Superieur in Paris are associated.
References
relax J. M. Maldacena, The Large N Limit of Superconformal Field Theories and Supergravity, Adv.Theor.Math.Phys. 2 (1998) 231, hep-th/9711200. relax S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Gauge Theory Correlators from Noncritical String Theory, Phys. Lett. B428 (1998) 105, hep-th/9802109. relax E. Witten, Anti-de Sitter Space and Holography, Adv. Theor. Math. Phys. 2 (1998) 253, hep-th/9802150. relax E. Witten, Anti-de Sitter Space, Thermal Phase Transition and Confinement in Gauge Theories, Adv, Theor, Math. Phys. 505 (1998) 505, hep-th/9803131. relax I.R. Klebanov and A.A. Tseytlin, D-Branes and Dual Gauge theories in Type 0 Strings, Nucl.Phys. B546 (1999) 155, hep-th/9811035. relax I.R. Klebanov and A.A. Tseytlin, A Non-supersymmetric Large N CFT from Type 0 String Theory, JHEP 9903 (1999) 015, hep-th/9901101. relax I.R. Klebanov and A.A. Tseytlin, Asymptotic Freedom and Infrared Behavior in the Type 0 String Approach to Gauge Theory, Nucl. Phys. B547 (1999) 143, hep-th/9812089 relax A.M. Polyakov, The Wall of the Cave, Int.J.Mod.Phys. A14 (1999) 645, hep-th/9809057. relax L. Dixon and J. Harvey, String Theories in Ten Dimensions Without Space-Time Supersymmetry, Nucl. Phys. B274 (1986) 93. relax N. Seiberg and E. Witten, Spin Structures in String Theory, Nucl. Phys. B276 (1986) 272. relax N. Nekrasov and S.L. Shatashvili, On Non-Supersymmetric CFT in Four Dimensions, hep-th/9902110. relax S. Kachru, J. Kumar, E. Silverstein, Vacuum Energy Cancellation in a Non-supersymmetric String, Phys. Rev. D59 (1999) 106004, hep-th/9807076; S. Kachru and E. Silverstein, Self-Dual Nonsupersymmetric Type II String Compactifications, JHEP 9811 (1998) 001, hep-th/9808056; S. Kachru and E. Silverstein, On Vanishing Two Loop Cosmological Constant in Nonsupersymmetric Strings, JHEP 9901 (1999) 004, hep-th/9810129. relax J. A. Harvey, String Duality and Non-supersymmetric Strings, Phys.Rev. D59 (1999) 026002, hep-th/9807213. relax G. Shiu and S.-H.H Tye, Bose-Fermi Degeneracy and Duality in Non-Supersymmetric Strings, Nucl.Phys. B542 (1999) 45, hep-th/9808095. relax R. Blumenhagen, L. Görlich, Orientifolds of Non-Supersymmetric Asymmetric Orbifolds, Nucl.Phys. B551 (1999) 601, hep-th/9812158. relax C. Angelantonj, I. Antoniadis, K. Förger, Non-Supersymmetric Type I Strings with Zero Vacuum Energy, hep-th/9904092. relax A. Sagnotti, M. Bianchi, On the Systematics of Open String Theories, Phys. Lett. B247 (1990) 517 relax A. Sagnotti, Some Properties of Open String Theories, hep-th/95090808 ; A. Sagnotti, Surprises in Open String Perturbation Theory, hep-th/9702093. relax O. Bergman and M.R. Gaberdiel, A Non-Supersymmetric Open String Theory and S-Duality, Nucl.Phys. B499 (1997) 183, hep-th/9701137. relax C. Angelantonj, Non-Tachyonic Open Descendants of the 0B String Theory, Phys.Lett. B444 (1998) 309, hep-th/9810214. relax R. Blumenhagen, A. Font and D. Lüst, Tachyon-free Orientifolds of Type 0B Strings in Various Dimensions, hep-th/9904069. relax R. Blumenhagen and A. Kumar, A Note on Orientifolds and Dualities of Type 0B String Theory, hep-th/9906234. relax K. Förger, On Non-tachyonic $`Z_N\times Z_M`$ Orientifolds of Type 0B String Theory, hep-th/9909010. relax M.R. Douglas and G. Moore, D-branes, Quivers and ALE Instantons, hep-th/9603167. relax S. Kachru and E. Silverstein, 4d Conformal Field Theories and Strings on Orbifolds, Phys. Rev. Lett. 80 (1998) 4855, hep-th/9802183. relax A. Lawrence, N. Nekrasov and C. Vafa, On Conformal Field Theories in Four Dimensions, Nucl. Phys. 
B533 (1998) 199, hep-th/9803076. relax I.R. Klebanov and E. Witten, Superconformal field theory on three-branes at a Calabi-Yau singularity, Nucl. Phys. B536 (1998) 199, hep-th/9807080. relax A. Hanany and E. Witten, TypeIIB Superstrings, BPS monopoles and Three Dimensional Gauge Dynamics, Nucl. Phys. B492 (1997) 152, hep-th/9611230. relax M. Alishahiha, A. Brandhuber and Y. Oz, Branes at Singularities in Type 0 String Theory, JHEP 9905 (1999) 024, hep-th/9903186. relax M. Billo, B. Craps and F. Roose, On D-branes in Type 0 String Theory, hep-th/9902196. relax A. Armoni and B. Kol, Non-Supersymmetric Large N Gauge Theories from Type 0 Brane Configurations, hep-th/9906081. relax R. Blumenhagen, A. Font and D. Lüst, Non-supersymmetric Gauge Theories from D-branes in Type 0 String Theory, hep-th/9906101. relax I.R. Klebanov, N.A. Nekrasov and S.L. Shatashvili, An Orbifold of Type 0B Strings and Non-supersymmetric Gauge Theories, hep-th/9909109. relax J. Scherk and J.H. Schwarz, Spontaneous Breaking of Supersymmetry through Dimensional Reduction, Phys. Lett. 82B (1979) 60. relax C. Kounnas and M. Porrati, Spontaneous Supersymmetry Breaking in String Theory Nucl. Phys. B310 (198) 355. relax S. Ferrara, C. Kounnas, M. Porrati and F. Zwirner, Superstrings with Spontaneously Broken Supersymmetry and their Effective Theories, Nucl. Phys. B318 (1989) 76. relax C. Kounnas and B. Rostand, Coordinate Dependent Compactification and Discrete Symmetries, Nucl. Phys. B341 (1990) 641. relax C. Kounnas, BPS States in Superstrings with Spontaneously Broken SUSY, Nucl. Phys. Proc.Suppl.B58 (1997) 57. relax E. Kiritsis and C. Kounnas, Perturbative and Nonperturbative Supersymmetry Breaking: N=4 $``$ N=2 $``$ N=1 Nucl. Phys. B503 (1997) 117. relax B. Sathiapalan, Vortices on the String World Sheet and Constraints on Toroidal Compactification, Phys. Rev. D35 (1987) 3277. relax I. Kogan, Vortices on the World Sheet and String’s Critical Dynamics, JETP Lett. 45 (1987) 709. relax J.J. Attick and E. Witten, The Hagedorn Transition and the Number of Degrees of Freedom of String Theory, Nucl. Phys. B310 (1988) 291. relax I. Antoniadis and C. Kounnas, Supersymmetric Phase Transition at High Temperature, Phys. Lett. B261 (1991) 369. relax I. Antoniadis, J.P. Derendinger and C. Kounnas, Nonperturbative Temperature Instabilities in N=4 Strings Nucl. Phys. B551 (1999) 41. relax I. Antoniadis, J.P. Derendinger and C. Kounnas, Nonperturbative Supersymmetry Breaking and Finite Temperature Instabilities in N=4 Superstrings, hep-th/9908137. relax J.D. Blum and K.R. Dienes, Duality without Supersymmetry: The Case of the SO(16)$`\times `$SO(16) String, Phys.Lett. B414 (1997) 260, hep-th/9707148; Strong/Weak Coupling Duality Relations for Non-Supersymmetric String Theories, Nucl.Phys. B516 (1998) 83, hep-th/9707160. relax I. Antoniadis, E. Dudas and A. Sagnotti, Brane Supersymmetry Breaking, hep-th/9908023. relax I. Antoniadis, G. D’Appollonio, E. Dudas and A. Sagnotti, Partial Breaking of Supersymmetry, Open Strings and M-theory, Nucl. Phys. B553 (1999) 133, hep-th/9812118. relax I. Antoniadis, E. Dudas and A. Sagnotti, Supersymmetry Breaking, Open Strings and M-theory, Nucl. Phys. B544 (1999) 469, hep-th/9807011. relax I. Antoniadis and M. Quiros, Supersymmetry Breaking in M-theory and Gaugino Condensation, Nucl. Phys. B505 (1997) 109, hep-th/9705037; On the M-theory Description of Gaugino Condensation, Phys. Lett. B416 (1998) 327, hep-th/9707208. relax O. Bergman and M.R. Gaberdiel, Dualities of Type 0 Strings, hep-th/9906055. 
relax M. Dine, P. Huet and N. Seiberg, Large and Small Radius in String Theory, Nucl. Phys. B322 (1989) 301. relax J. Dai, R.G. Leigh and J. Polchinski, New Connections between String Theories, Mod. Phys. Lett. A4 (1989) 2073. relax M. Abou-Zeid, B. de Wit, D. Lüst and H. Nicolai, Space-time Supersymmetry, IIA/B Duality and M-Theory, hep-th/9908169. relax E. Witten, String Theory Dynamics in Various Dimensions, Nucl. Phys. B443 (1995) 85, hep-th/9503124. relax J.H. Schwarz, The Power of M Theory, Phys. Lett. B367 (1996) 97, hep-th/9510086; J.H. Schwarz, Lectures on Superstring and M Theory Dualities, Nucl. Phys. Proc. Suppl. B55 (1997) 1, hep-th/9607201. relax A. Karch, D. Lüst and D. Smith, Equivalence of Geometric Engineering and Hanany-Witten via Fractional Branes, Nucl. Phys. B533 (1998) 348, hep-th/9803232. relax B. Andreas, G. Curio and D. Lüst, The Neveu-Schwarz Five-brane and its Dual Geometries, JHEP 9810 (1998) 022, hep-th/9807008.
# The X-ray spectra of symbiotic stars
## 1. Introduction
Symbiotic stars are binary stars in which usually a white dwarf accretes from the wind of a red giant (e.g. Luthardt 1992). Their X-ray spectra are apparently dominated by distinct soft and hard X-ray components (I do not discuss the third, “supersoft” case – usually interpreted as steady nuclear burning of accreted material). As an example I take the ASCA GIS2 spectrum of the bright symbiotic star CH Cyg, plotted in Figure 1. The emission is seen to peak at 1 keV and at 5 keV. In Figure 1 I have also overlaid a simple 10 keV bremsstrahlung spectrum that has been folded through the response of the telescope. The dip in the observed spectrum (at 2 keV) corresponds to a maximum in the effective area of ASCA, and thus must be a true minimum in the X-ray emission of CH Cyg.
The ASCA observation of CH Cyg was analysed by Ezuka, Ishida & Makino (1998). To achieve an acceptable fit they required three emission components (kT = 0.2, 0.7, 7.3 keV), each with a different absorption column, plus an additional partial-covering absorber. Usually the hard emission is attributed to the accreting compact object and the soft emission to colliding winds of the two stars. In this paper I demonstrate that the spectrum can be understood with a far simpler model, and that there is no need for a separate soft component.
## 2. Ionised absorption
The key to this new interpretation is to allow the absorbing medium to be partially ionised. This is reasonable because the wind of the red giant — the obvious candidate absorber — is strongly illuminated by ionising radiation from the accreting white dwarf. Fitting the ASCA spectrum with a single-temperature emission model (mekal ) absorbed by a photoionised medium (absori ) I readily achieve a fairly good fit (reduced $`\chi ^2`$=2.4 with 172 d.o.f.; top panel of Figure 2). The residuals are dominated by narrow features at 0.9 keV and 6.4 keV. Adding narrow lines to the model I find an acceptable fit (reduced $`\chi ^2`$=1.4 with 168 d.o.f.; bottom panel of Figure 2). Best fitting parameters are kT=11 keV, $`\mathrm{N}_\mathrm{H}=4\times 10^{23}\mathrm{cm}^2`$, $`\xi `$=840 (ionisation parameter).
## 3. Line emission
The 6.4 keV emission line is most likely due to K<sub>α</sub> fluorescence of weakly-ionised iron. Since the absorbing medium is strongly ionised, this fluorescence must arise elsewhere, probably through reflection from the surface of the compact object. The 0.9 keV line is more difficult to identify because its energy is less well constrained and there are a large number of emission lines in this portion of the X-ray spectrum. However, its proximity to the strong OVIII absorption edge (see Figure 3) suggests it is most likely the recombination continuum emission of OVIII. The emission spectrum of the absorbing medium is neglected in the ionised absorption model (absori ).
## 4. No need for colliding winds
The consequence of my fit to the ASCA spectrum of CH Cyg is that a separate low-temperature emission component is no longer needed. Thus there is no need to invoke X-ray emission from colliding winds in this system, and indeed, no longer any need for a substantial wind from the white dwarf.
The model spectrum (Figure 3) shows how the partially ionised absorber cuts deeply at intermediate energies but allows soft photons to leak through. Most evidence taken to support X-ray emission from colliding winds has come from soft X-ray observations, e.g. ROSAT (Mürset, Wolff & Jordan 1997). Clearly the ROSAT spectrum (0.1-2.5 keV) of an absorbed system will reveal only the soft X-ray leak, and this could be mistaken for a low temperature emission spectrum. I believe that all the ROSAT spectra of symbiotic stars previously interpreted as emission from colliding winds may be reinterpreted as absorbed hard X-ray spectra.
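To illustrate the shape of this reinterpretation, the following toy calculation is my own schematic, not the actual mekal/absori fit; all numbers are round values chosen for display, and the step in opacity at the OVIII edge is a crude stand-in for the reduced soft opacity of a highly ionised absorber. It shows how a hot bremsstrahlung continuum viewed through a large column develops a soft leak below ~1 keV, a deep trough at intermediate energies, and a recovered hard continuum above a few keV.

```python
import numpy as np

# Toy absorbed-bremsstrahlung model (illustrative only).
E = np.logspace(np.log10(0.2), np.log10(10.0), 500)   # photon energy [keV]

kT = 10.0                                              # plasma temperature [keV]
brems = E**-0.4 * np.exp(-E / kT)                      # schematic thermal bremsstrahlung shape

sigma_neutral = 2e-22 * E**-2.6                        # rough neutral photoabsorption cross-section [cm^2 per H]
N_H = 4e23                                             # absorbing column [cm^-2]
residual = 0.003                                       # assumed residual opacity fraction below the OVIII edge
opacity_factor = np.where(E < 0.87, residual, 1.0)     # 0.87 keV ~ OVIII edge energy

transmitted = brems * np.exp(-N_H * sigma_neutral * opacity_factor)
# 'transmitted' peaks near ~0.8 keV (the soft leak), dips around 1-3 keV,
# and recovers the hard bremsstrahlung continuum above ~4-5 keV.
```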
## REFERENCES
Luthardt, R. 1992, RvMA 5, 38
Ezuka, H., Ishida, M. & Makino, F. 1998, ApJ 499, 388
Mürset, U., Wolff, B. & Jordan, S. 1997, A&A 319, 201
# Stable integration of isolated cell membrane patches in a nanomachined aperture: a step towards a novel device for membrane physiology
## Abstract
We investigate the microscopic contact of a cell/semiconductor hybrid. The semiconductor is nanostructured with the aim of single channel recording of ion channels in cell membranes. This approach will overcome many limitations of the classical patch-clamp technique. The integration of silicon-based devices ’on-chip’ promises novel types of experiments on single ion channels.
PACS number: 87.16Pg, 87.16.Uv, 61.82.Fk
Nanostructuring allows to build devices with dimensions similar to those of basic biological units, e.g. ion channels in cell membranes. Ion channels are proteins that are integral parts of cell membranes and act as pores. They regulate the flow of ions in- and out of the cell . Exhibiting different kinds of gating mechanisms they act as basic excitable units in biological systems. The function of these elementary units is therefore of fundamental importance for information processing in neural systems.
For about two decades physiologists have been able to resolve ionic currents through single ion channels by using the patch-clamp technique. This method relies on forming a $`\mu `$m-sized contact with the cell membrane by means of an electrolyte-filled glass pipette. The open tip of the pipette is pressed against the membrane, defining an isolated patch. Due to the strong glass-membrane adhesion, a G$`\mathrm{\Omega }`$-seal is obtained which allows current measurements with a resolution of a few 100 fA. A basic limitation of this approach is the limited recording bandwidth B$`<100`$ kHz. This limitation arises mainly because of stray capacitances and the high access resistance of the long-tapered pipette. In contrast, the geometry of the semiconductor-based probes used in our approach should overcome these limitations. We define a nanoscale aperture located in a suspended Si<sub>3</sub>N<sub>4</sub> membrane on a micromachined silicon substrate. This enables us to minimize the distance between the ion channel under investigation and the recording electrode. In addition, with semiconductor structuring techniques, the passive glass pipette can be replaced by a versatile probe, which can easily integrate active semiconductor elements, e.g. amplifiers or electromechanical devices. Finally, due to the open geometry of the probe, imaging techniques such as fluorescence microscopy, atomic force microscopy (AFM) and scanning electron microscopy (SEM) can be applied.
In Fig. 1(a) an SEM-micrograph of our device with an integrated cell membrane is depicted. A Si<sub>3</sub>N<sub>4</sub>-layer is suspended on a micron scale by etching a V-groove in the (100)-silicon substrate beneath. In order to build these suspended membranes we deposit a 120 nm thick Si<sub>3</sub>N<sub>4</sub>-layer on both sides of a (100) low n-doped silicon substrate using Low Pressure Chemical Vapor Deposition (LPCVD). Applying standard optical lithography and Reactive Ion Etching (RIE) we define an etch mask on the backside of the samples. Subsequent anisotropic wet etching in a KOH-solution results in a V-shaped groove, where the upper Si<sub>3</sub>N<sub>4</sub>-layer serves as an etch stop. Adjusting the size of the etch mask we build a suspended Si<sub>3</sub>N<sub>4</sub>-layer with dimensions of a few ten microns side length. Both optical lithography as well as low-energy electron-beam lithography is used to define an orifice in the suspended membrane. The lithographic pattern is transferred into the membrane by an RIE process. The lower inset in Fig. 1(a) shows apertures in such a suspended membrane with sizes ranging from 500 nm down to 50 nm. Due to this nanostructuring process the geometry of the aperture can be freely chosen.
The integration of a cell membrane is achieved by positioning a cell on top of the probe. The device is installed into a classical patch-clamp setup including a remotely controlled positioning system and a microscope. The inset in Fig. 1(a) shows the schematical arrangement of the semiconductor-cell hybrid. In order to carry out electrical measurements, the ensemble is connected via electrodes in standard Ringer’s electrolyte solution (270 mOsm) forming the extra-cellular medium. Cultured embryonic cells from rat striatum or C6-glioma cells are acutely dissociated applying standard trypsin treatment and trituration. A glass suction pipette is used to move an isolated cell onto the aperture as shown in Fig. 1(b). By applying negative pressure from below, the cell’s membrane is partially sucked into the opening. This procedure is in close analogy to the standard patch-clamp technique. In order to obtain a cell-free patch, the glass pipette is used to remove the cell body, leaving an excised membrane patch in the aperture. In Fig. 2, this excised patch is shown from both sides of the device. The micrographs were taken with a low-voltage scanning electron microscope (SEM) at resolution of about 1 nm. Fig. 2(a) shows a top view of the aperture. Cellular material (presumably cytoskeletal elements) is seen to fill the entire lumen. Fig. 2(b) shows a close-up view of the cell membrane that has been dragged into the opening. Imaging cellular structures is only possible after fixatation with glutaraldehyde solution, dehydration in graded alcohol and drying in a critical point drier. This procedure is responsible for the somewhat distorted surface structure of the cell membrane. However, the image clearly shows, that there is an extremely close association of the membrane with the silicon nitride material without any visible gaps. This finding justifies the expectation that Si<sub>3</sub>N<sub>4</sub> can, in our design, substitute for glass in creating G$`\mathrm{\Omega }`$-seals. It is also in line with the glass-like properties of this material. These adhesion properties are of great importance when interfacing neurons and silicon . Single channel recording is only made possible by the so-called G$`\mathrm{\Omega }`$-seal where the membrane sticks tightly to the glass of the pipette . Demonstrating a G$`\mathrm{\Omega }`$-seal with the Si<sub>3</sub>N<sub>4</sub> membrane is therefore the next major step towards patch clamp recording with the device presented here. Another advantage of our approach becomes obvious: due to its geometry our device lends itself to visualization techniques such as SEM, atomic force microscopy (AFM) or scanning nearfield optical microscopy (SNOM).
Furthermore, we applied confocal fluorescence microscopy for imaging the hybrid in the ionic solution, i.e. in a situation where the membrane proteins and their functions are intact. In order to visualize the membrane, we incubated isolated cells with a solution containing the fluorescent marker bis-oxanol prior to integrating the membrane into the probe. The fluorophore is excited by blue light (488 nm) emitted from an Ar-ion laser. In Fig. 3(a) a scanning micrograph of the membrane-semiconductor hybrid taken with a confocal fluorescence microscope is shown. On the suspended Si<sub>3</sub>N<sub>4</sub>-layer fluorescent cell debris is found in the environment of the aperture. A structure of more regular, round shape can be discerned near the center of the image and represents fluorescent cellular membrane incorporated in the aperture. As shown in Fig. 3(b), using a z-scan series, this structure can be definitely distinguished from the surrounding debris: the graph shows a plot of the fluorescence intensity as a function of the distance of the confocal plane from the probe surface. Thus, successive optical sections parallel to the probe surface ranging from about $`-4\mu `$m to $`+4\mu `$m with zero set at the Si<sub>3</sub>N<sub>4</sub>-membrane level are taken. The three curves correspond to the normalized fluorescence light intensity emitted from the clean Si<sub>3</sub>N<sub>4</sub>-film, the debris on top of the film and the membrane in the aperture, respectively. Obviously, the fluorescence of the fractured cell material is emitted starting from a $`z`$-position higher than that of the Si<sub>3</sub>N<sub>4</sub>-film. In contrast, the fluorescence of the incorporated membrane displays a $`z`$-range on the same level or even lower than the reference Si<sub>3</sub>N<sub>4</sub>-film. The increase of fluorescence intensity of the debris in the negative z-range is related to the backscattering of excitation light from the Si<sub>3</sub>N<sub>4</sub>-layer acting as a birefringent mirror. Since the integrated cell-membrane is freely suspended, this effect is not seen in the aperture.
In conclusion, we have shown a first realization of a cell membrane patch integrated into a nanostructured semiconductor device verified by fluorescence and SEM-micrographs. Attaching native cell membranes to nanostructured probes is the first step towards patch clamp recording with semiconductor or silicon-on-insulator (SOI) devices. In addition, the geometry of our hybrid enables various methods of microscopy such as confocal fluorescence or atomic force microscopy to be applied in situ. The application of the device presented for patch clamp recording will be discussed elsewhere . Furthermore, it has to be noted, that the principles of processing such a nano patch clamp (NPC) chip can easily be transferred to Silicon-on-Quartz or other material classes. Combining the patch-clamp technique with semiconductor devices allows the integration of active amplifying devices, e.g. field effect transistors. By using lithographic methods, these active elements can be positioned in the immediate vicinity of the ion channel. The noise level of the measurement can thus be lowered dramatically due to an ’on chip’ amplification, leading to a highly improved resolution.
We would like to thank F. Rucker, E. Rumpel, L. Pescini, H. Lorenz, and H. Gaub for valuable discussions and support. Thanks are due to A. Kriele, G. Zitzelsberger, A. Grünewald and L. Kargl for expert technical support. The semiconductor wafers were kindly supplied by I. Eisele of the Universität der Bundeswehr (Munich). The nitride films were grown by H. Geiger of the Universität der Bundeswehr (Munich). This work was supported in part by the Deutsche Forschungsgemeinschaft (SFB 531).
Fig. 1: (a) SEM micrograph of V-groove in (100)-silicon with a suspended Si<sub>3</sub>N<sub>4</sub>-layer on top. In the suspended Si<sub>3</sub>N<sub>4</sub>-layer a small aperture is nanostructured by optical or electron-beam-lithography and RIE. In the aperture cell material is incorporated. The upper inset depicts the schematical arrangement of the semiconductor-cell hybrid. The lower inset shows a series of holes in a suspended membrane with dimensions down to 50 nm.
(b) Photograph of the probe with cell positioned on top of the aperture. The glass pipette on the right is used to manipulate the cell.
Fig. 2: (a) Top-view of the aperture with incorporated cell material. The arrows indicate the circumference of the opening.
(b) Close-up of the cell membrane taken from the backside protruding from the opening. The membrane is sealed tightly to the Si<sub>3</sub>N<sub>4</sub>-layer with no remaining cleft in between.
Fig. 3: (a) Fluorescence scanning micrograph taken with a confocal microscope. The cell-membrane is labeled with fluorescent marker (thick arrow). Some cell debris is also visible (thin arrows).
(b) $`Z`$-series taken from the fluorescence of the cell-membrane, the Si<sub>3</sub>N<sub>4</sub>-layer and the debris on top of it (for details see text).
# An Imaging and Spectroscopic Survey of Galaxies within Prominent Nearby Voids II. Morphologies, Star Formation, and Faint Companions
## 1 Introduction
In the last decade, wide-angle redshift surveys have revealed large-scale structure in the local universe comprised of coherent sheets of galaxies with embedded galaxy clusters, bounding vast ($`10^{56}`$ Mpc<sup>3</sup>) and well-defined “voids” where galaxies are largely absent. The influence of these structures’ large-scale density environment upon galaxy properties has been a continuing source of debate and is of interest for constraining proposed models of galaxy formation and evolution. The morphology-density relation (e.g., Dressler (1980); Postman & Geller (1984)), which quantifies the increasing fraction of ellipticals and lenticulars with local density, is one of the most obvious indications of environmental dependence for densities greater than the mean. In the lowest density regions, the voids, the observational evidence of trends in morphological mix, luminosity distribution, star formation rate, etc., is still rudimentary because of the intrinsic scarcity of void galaxies and the difficulties in defining an unbiased sample for study. Here we use a broadband imaging and spectroscopic survey of a large optically-selected sample to compare the properties of galaxies in voids with their counterparts in denser regions.
We refer the reader to the first paper of this study (Grogin & Geller 1999, hereafter Paper I ) for a more detailed review of the previous theoretical and observational research into void galaxies, which we summarize here. Proposed theories of galaxy formation and evolution have variously predicted that the voids contain “failed galaxies” identified as diffuse dwarfs and Malin 1-type giants (Dekel & Silk (1986); Hoffman, Silk, & Wyse (1992)), or galaxies with the same morphological mix as higher-density regions outside clusters (Balland, Silk, & Schaeffer (1998)), or no luminous galaxies at all for the want of tidal interactions to trigger star formation (Lacey et al. (1993)). Some of these theories have already met serious challenge from observations which have shown that dwarf galaxies trace the distribution of the more luminous galaxies and do not fill the voids (Kuhn, Hopp, & Elsässer (1997); Popescu, Hopp, & Elsässer (1997); Binggeli (1989)), and that the low surface-brightness giants are relatively rare and not found in the voids (Szomoru et al. (1996); Bothun et al. (1993); Weinberg et al. (1991); Henning & Kerr (1989)).
Most previous studies of void galaxies have focused on emission line-selected and $`IRAS`$-selected objects in the Boötes void at $`z\approx 0.05`$ (Kirshner et al. (1981, 1987)); all have been limited to a few dozen objects or fewer. The galaxies observed in the Boötes void: 1) are brighter on average than emission-line galaxies (ELGs) at similar redshift and contain a large fraction ($`\sim 40`$%) with unusual or disturbed morphology (Cruzen, Weistrop, & Hoopes (1997)); 2) have star formation rates ranging from 3–55 $`M_{\odot }`$ yr<sup>-1</sup>, up to almost three times the rate found in normal field disk systems (Weistrop et al. (1995)), in apparent contrast to the Lacey et al. (1993) model prediction; and 3) are mostly late-type gas-rich systems with optical and Hi properties and local environments similar to field galaxies of the same morphological type (Szomoru, van Gorkom, & Gregg (1996); Szomoru et al. (1996)).
Szomoru et al. (1996) conclude that the Boötes void galaxies formed as normal field galaxies in local density enhancements within the void, and that the surrounding global underdensity is irrelevant to the formation and evolution of these galaxies. Because the Boötes void galaxies are not optically-selected, though, their properties may not be representative of the overall void galaxy population. On the other hand, similar conclusions were drawn by Thorstensen et al. (1995) in a study of 27 Zwicky Catalog (Zwicky et al. 1961– (1968), hereafter CGCG) galaxies within a closer void mapped out by the Center for Astrophysics Redshift Survey (Geller & Huchra (1989); hereafter CfA2). The fraction of absorption-line galaxies in their optically-selected sample was typical of regions outside cluster cores, and the local morphology-density relation appeared to hold even within the global underdensity.
Our goal is to clarify the properties of void galaxies by collecting high-quality optical data for a large sample with well-defined selection criteria. We thus obtained multi-color CCD images and high signal-to-noise spectra for $`\sim 150`$ optically-selected galaxies within prominent nearby voids. We work from the CfA2 Redshift Survey, which has the wide sky coverage and dense sampling necessary to delineate voids at redshifts $`cz\lesssim 10000`$ km s<sup>-1</sup>. These conditions are not met for the Boötes void, making the definition of Boötes void galaxies in previous studies harder to interpret.
Using a straightforward density estimation technique, we identified three large ($`30`$–$`50h^{-1}`$ Mpc) voids within the magnitude-limited survey and included all galaxies within these regions at densities less than the mean ($`n<\overline{n}`$). In addition to the void galaxies from CfA2, we have also included fainter galaxies in the same regions from the deeper Century Survey (Geller et al. (1997); hereafter CS) and 15R Survey (Geller et al. (2000)). We thereby gain extra sensitivity toward the faint end of the void galaxy luminosity distribution, up to 3 magnitudes fainter than $`M_{*}`$.
Covering essentially the entire volume of three distinct voids, our sample should place improved constraints upon the luminosity, color, and morphological distributions, and star formation history of void galaxies. Moreover, this optically-selected sample should be more broadly representative than previous void galaxy studies restricted to emission-line, IRAS-selected, and Hi-selected objects. We also conduct a follow-up redshift survey to $`m_R=16.13`$ in our imaging survey fields and identify fainter void galaxy companions, akin to the Szomoru, van Gorkom, & Gregg (1996) Hi survey for neighbors of the Boötes void galaxies. We thereby probe the small-scale ($`\sim 150h^{-1}`$ kpc) environments around galaxies in regions of large-scale ($`5h^{-1}`$ Mpc) underdensity. Here and throughout we assume a Hubble Constant $`H_0\equiv 100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>.
In Paper I we introduced the sample and its selection procedure, described the broadband imaging survey, and examined the variation of the galaxy luminosity distribution and color distribution with increasing large-scale underdensity. The luminosity distributions in modestly overdense, void periphery regions ($`1<n/\overline{n}\le 2`$) and in modestly underdense regions ($`0.5<n/\overline{n}\le 1`$) are both consistent with typical redshift survey luminosity functions in $`B`$ and $`R`$. However, galaxies in the lowest-density regions ($`n/\overline{n}\le 0.5`$) have a significantly steeper LF ($`\alpha \approx -1.4`$). Similarly, the $`B-R`$ color distribution does not vary with density down to $`0.5\overline{n}`$, but at lower densities the galaxies are significantly bluer.
Here we address the morphology and current star formation (as indicated by EW(H$`\alpha `$)) of optically-selected galaxies in underdense regions. In addition, we describe a deeper redshift survey of the imaging survey fields designed to reveal nearby companions to the more luminous void galaxies. Section 2 reviews the void galaxy sample selection briefly (cf. Paper I) and discusses the selection of redshift survey targets. We describe the spectroscopic observations and data reduction in §3. We then analyse the morphological distribution (§4) and H$`\alpha `$ equivalent width distribution (§5) of the sample as a function of the smoothed large-scale ($`5h^{-1}`$ Mpc) galaxy number density. Section 6 describes results from the redshift survey for close companions. We conclude in §7.
## 2 Sample Selection
Paper I contains a detailed description of the sample selection for the imaging and spectroscopic survey, summarized here in §2.1. In §2.2 we describe the selection procedure for a deeper redshift survey of the image survey fields to identify nearby companions of the sample galaxies.
### 2.1 Imaging and Spectroscopic Survey Sample
We use a $`5h^{-1}`$ Mpc-smoothed density estimator (Grogin & Geller (1998)) to identify three prominent voids in the CfA2 redshift survey. We attempt to include all CfA2 galaxies below the mean density contour ($`n<\overline{n}`$) around the voids, as well as fainter galaxies in these regions from the 15R and Century Surveys. The apparent magnitude limit of the CS enables us to include void galaxies with absolute magnitude $`R\approx -18`$, some three magnitudes fainter than $`M_{*}`$. By restricting our study to galaxies within three of the largest underdense regions in CfA2 ($`30h^{-1}`$ Mpc diameter), we minimize the sample contamination by interlopers with large peculiar velocity. Table 2 lists the galaxy sample, including arcsecond B1950 coordinates, Galactocentric radial velocities, and the $`(n/\overline{n})`$ corresponding to those locations.
We define the galaxies in Table 2 with $`(n/\overline{n})\le 1`$ as the “full void sample”, hereafter FVS. We further examine the properties of two FVS subsamples: the lowest-density void subsample (hereafter LDVS) of 46 galaxies with $`(n/\overline{n})\le 0.5`$, and the complementary higher-density void subsample (hereafter HDVS) of 104 galaxies with $`0.5<(n/\overline{n})\le 1`$. Our survey also includes some of the galaxies around the periphery of the voids where $`(n/\overline{n})>1`$. Typically the region surrounding the voids at $`1<n/\overline{n}\le 2`$ is narrow, intermediate between the voids and the higher-density walls and clusters (cf. Paper I, Figs. 1–3). Although our sampling of galaxies in regions with $`1<n/\overline{n}\le 2`$ is far from complete, we designate these galaxies as a “void periphery sample” (hereafter VPS) to serve as a higher-density reference for the FVS and its subsamples. Because the VPS galaxies are chosen only by their proximity to the voids under study, we should not have introduced any density-independent selection bias between the FVS and the VPS.
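As an illustration of how these density-based subsamples are defined, the following sketch is our own schematic, not the actual Grogin & Geller (1998) estimator: it computes a Gaussian-smoothed number density with a $`5h^{-1}`$ Mpc kernel and applies the $`n/\overline{n}`$ thresholds quoted above. The published estimator additionally corrects for the magnitude-limited selection function and uses its own smoothing window, both of which are simplified here.

```python
import numpy as np

def smoothed_density_contrast(galaxies, targets, smoothing=5.0):
    """Schematic n/n_bar estimator: Gaussian-smoothed galaxy number density
    at each target position, normalized by its mean over the targets.
    Both inputs are (N, 3) arrays of comoving positions in h^-1 Mpc.
    (The published estimator also corrects for the survey selection function.)"""
    d2 = ((targets[:, None, :] - galaxies[None, :, :]) ** 2).sum(axis=-1)
    n = np.exp(-0.5 * d2 / smoothing**2).sum(axis=1)
    return n / n.mean()

def subsample(n_ratio):
    """Assign a galaxy to a subsample from its smoothed density contrast n/n_bar."""
    if n_ratio <= 0.5:
        return "LDVS"      # lowest-density void subsample (part of the FVS)
    if n_ratio <= 1.0:
        return "HDVS"      # higher-density void subsample (part of the FVS)
    if n_ratio <= 2.0:
        return "VPS"       # void periphery sample
    return "outside"       # walls and clusters, not part of this study
```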
### 2.2 Void Galaxy Field Redshift Survey Sample
Most of the volume spanned by the voids of interest has only been surveyed to the CfA2 magnitude limit, $`m_B=15.5`$. At the 5000–10000 km s<sup>-1</sup> distance of these voids, this limiting magnitude corresponds to an absolute magnitude cutoff of $`B_{*}`$ or brighter. To gain information on the presence of fainter companions to the void galaxies in our study, we use the SExtractor program (Bertin & Arnouts (1996)) to make a list of fainter galaxies on the $`R`$-band imaging survey fields. We define the SExtractor magnitude $`r_{\mathrm{SE}}`$ as the output MAG\_BEST with ANALYSIS\_THRESH set to 25 mag arcsec<sup>-2</sup> (cf. Bertin & Arnouts (1996)). We limit the redshift survey to $`r_{\mathrm{SE}}=16.1`$, the rough limit for efficient redshift measurement using the FAST spectrograph on the FLWO 1.5 m. This magnitude limit is also commensurate with the Century Survey limit ($`m_R=16.13`$) as well as with the deepest 15R Survey fields in our study (cf. Paper I).
As a check on the reliability of SExtractor magnitudes, we compare against the isophotal photometry from our imaging survey of the Table 2 galaxies (Paper I ). Those $`R`$-band magnitudes are determined at the $`\mu _B=26`$ mag arcsec<sup>-2</sup> isophote; we denote them $`r_{B26}`$. Figure 1 shows SExtractor magnitudes $`r_{\mathrm{SE}}`$ versus $`r_{B26}`$ for 291 of the 296 galaxies in Table 2; the remaining five do not have SExtractor magnitudes because of saturated $`R`$-band image pixels (00132+1930, NGC 7311) or confusion with nearby bright stars (00341+2117, 01193+1531, 23410+1123). We indicate the linear least-squares fit between the two magnitude estimates (dotted line), with 11 outliers at $`>2\sigma `$ clipped from the fitting. Because Table 2 includes fainter 15R and Century Survey galaxies, we have good calibration down to the $`r_{\mathrm{SE}}=16.1`$ limit of the companion redshift survey.
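A minimal sketch of such a clipped linear calibration is given below; the exact iteration and convergence criteria used for the published fit are not specified beyond what is quoted above, so this is only one reasonable implementation.

```python
import numpy as np

def clipped_linear_fit(r_se, r_b26, nsigma=2.0, max_iter=10):
    """Least-squares line r_B26 = a*r_SE + b with iterative nsigma clipping,
    in the spirit of the calibration described above.
    r_se, r_b26: numpy arrays of matched magnitudes."""
    keep = np.ones(len(r_se), dtype=bool)
    for _ in range(max_iter):
        a, b = np.polyfit(r_se[keep], r_b26[keep], 1)
        resid = r_b26 - (a * r_se + b)
        new_keep = np.abs(resid) < nsigma * resid[keep].std()
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    scatter = resid[keep].std()
    crossover = b / (1.0 - a)     # magnitude at which r_SE = r_B26
    return a, b, scatter, crossover, keep
```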
Figure 1 shows that the agreement between $`r_{\mathrm{SE}}`$ and $`r_{B26}`$ is excellent over $`\sim 3.5`$ mags. The scatter about the fit is only 0.05 mag, comparable to the uncertainty in the $`r_{B26}`$ magnitudes (Paper I). The slope of the fit, $`dr_{B26}/dr_{\mathrm{SE}}=1.043\pm 0.004`$, indicates that the scale error is negligible. The crossover magnitude, for which $`r_{\mathrm{SE}}=r_{B26}`$, is $`15.49\pm 0.14`$ mag. The linear fit is sufficiently well-constrained that our $`r_{\mathrm{SE}}=16.1`$ survey limit corresponds to a limiting $`r_{B26}=16.13\pm 0.01`$. This value may be directly compared with the similarly-calibrated 15R and Century $`r_{B26}`$ limits given in Paper I. For the 5000–10000 km s<sup>-1</sup> redshift range of the three voids, this redshift survey therefore includes galaxies brighter than $`R_{B26}\approx -17.4`$ to $`-18.9`$. Here and throughout the paper we leave off an implicit ($`+5\log h`$) when quoting absolute magnitudes.
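These limits follow directly from the distance modulus at the void distances; as a check of the arithmetic,
$$M_R=m_R-25-5\log _{10}\!\left(\frac{cz}{100\ \mathrm{km\,s^{-1}}}\right)+5\log h,$$
so that $`m_R=16.13`$ gives $`M_R\approx -17.4+5\log h`$ at $`cz=5000`$ km s<sup>-1</sup> and $`M_R\approx -18.9+5\log h`$ at $`cz=10000`$ km s<sup>-1</sup>.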
The $`11^{\prime }`$ imaging survey fields are roughly centered on the target galaxies — the absolute mean deviation of the pointing offset is $`30^{\prime \prime }`$. Given this mean offset and the sample’s distribution of angular diameter distance, we estimate that the mean sky coverage of the redshift survey around the galaxies in Table 2 drops to 90% at a projected radius of $`115h^{-1}`$ kpc.
Table 4 lists the arcsecond B1950 coordinates and SExtractor magnitudes $`r_{\mathrm{SE}}`$ of the companion redshift survey targets (sorted by right ascension) as well as the angular separation of each from its respective “primary” (cf. Tab. 2). Some galaxies in Table 4 have multiple entries because they neighbor more than one primary. In some cases the neighbor is itself a primary from Table 2; we note this in the comment field. There are 211 unique galaxies in Table 4, which form 250 pairings with primaries from Table 2. Of these 250 pairs, 180 have projected separations $`\le 115h^{-1}`$ kpc.
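The 90% coverage figure follows from simple field geometry. A rough sketch (assuming the $`11^{\prime }`$ fields and $`\sim 30^{\prime \prime }`$ pointing scatter quoted above, plus pure Hubble-flow, small-angle distances — all illustrative) converts the $`115h^{-1}`$ kpc radius into an angle at the void distances:

```python
import numpy as np

ARCMIN_PER_RAD = 60.0 * 180.0 / np.pi

def angle_subtended(d_p_kpc, cz, hubble=100.0):
    """Angle (arcmin) subtended by a projected separation d_p (h^-1 kpc) at velocity cz,
    using a pure Hubble-flow, small-angle (low-redshift) approximation."""
    d_mpc = cz / hubble
    return d_p_kpc / (1000.0 * d_mpc) * ARCMIN_PER_RAD

for cz in (5000.0, 7500.0, 10000.0):
    print(f"cz = {cz:6.0f} km/s : 115 h^-1 kpc subtends {angle_subtended(115.0, cz):.1f} arcmin")
# ~7.9', ~5.3' and ~4.0' respectively, to be compared with the ~5.5' half-width of an
# 11' field; averaging over the sample's redshifts and pointing offsets is what pulls
# the mean areal coverage inside 115 h^-1 kpc down toward the quoted ~90%.
```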
## 3 Observations and Data Reduction
Paper I describes the imaging survey and reductions in detail. The resulting CCD images from the F. L. Whipple Observatory 1.2 m reflector have typical exposure times of 300s in $`R`$ and $`2\times 300`$s in $`B`$. Here we describe 1) a high-S/N spectroscopic survey of the CGCG and 15R galaxies in the primary sample (cf. §2.1); and 2) a deeper redshift survey of galaxies in the $`11^{\prime }`$ $`R`$-band fields of the imaging survey (cf. §2.2).
### 3.1 High-S/N Spectroscopic Survey
We carried out the spectroscopic survey of CGCG galaxies in our sample with the FAST longslit CCD spectrograph (Fabricant et al. (1998)) on the FLWO 1.5 m Tillinghast reflector over the period 1995–1998. We used a $`3^{\prime \prime }`$ slit and a 300 line mm<sup>-1</sup> grating, providing spectral coverage from 3600–7600Å at 6Å resolution. For the typical exposure times of 10–20 minutes, we obtained a signal-to-noise ratio (S/N) in the H$`\alpha `$ continuum of $`\sim 30`$ per 1.5Å pixel.
For the 15R galaxies in our sample, we used the 15R redshift survey spectra. These spectra were taken over the period 1994–1996 with FAST in an identical observing setup to our CGCG spectra. The exposure times for these spectra are typically 6–12 minutes, giving an H$`\alpha `$ continuum S/N of $`\sim 15`$.
The high-S/N spectroscopic survey of CGCG and 15R galaxies is essentially complete: 100% for the LDVS; 99% for the HDVS; and 98% for the VPS. Including the unobserved Century Survey galaxies in the accounting, the overall spectroscopic completeness is 98% for the LDVS, 95% for the HDVS, and 95% for the VPS.
All spectra were reduced and wavelength-calibrated using standard IRAF tasks as part of the CfA spectroscopic data pipeline (cf. Kurtz & Mink (1998)). We flux-calibrate the resulting 1-D spectra with spectrophotometric standards (Massey et al. (1988); Massey & Gronwall (1990)) taken on the same nights. Because these spectra were observed as part of the FAST batch queue, the observing conditions were not always photometric. We therefore treat the flux calibrations as relative rather than absolute, and only quote equivalent widths rather than line fluxes.
We next de-redshift each spectrum using the error-weighted mean of cross-correlation radial velocities found with FAST-customized emission- and absorption-line templates (cf. Kurtz & Mink (1998); Grogin & Geller (1998)). Our redshifts from the high-S/N CGCG spectra supersede the previous CfA redshift survey values and are reflected in the recent Updated Zwicky Catalog (hereafter UZC: Falco et al. (1999)). We do not correct the spectra for reddening (intrinsic or Galactic), but note that the majority of sample galaxies are at high Galactic latitudes where Galactic reddening is minimal. Figure 2 shows a representative subset of our reduced imaging and spectroscopic data: $`B`$-band images and corresponding spectra for a range of early- to late-type galaxies.
We make a first pass through the de-redshifted spectra with SPLOT, fitting blended Gaussian line profiles to the H$`\alpha `$ and Nii lines, and also fitting H$`\gamma `$ and H$`\delta `$ for later Balmer-absorption correction. We note that the FAST spectral resolution of 6Å allows clean deblending of H$`\alpha `$ $`\lambda `$6563Å from the adjacent \[Nii\] $`\lambda \lambda `$6548,6584Å lines. Using the first-pass line centers and widths, we make a second pass through the spectra to determine the equivalent widths and associated errors via direct line- and continuum-integration rather than profile-fitting.
We apply an approximate correction to EW(H$`\alpha `$) for Balmer absorption by using the greater of EW(H$`\gamma `$) and EW(H$`\delta `$) if detected in absorption at $`1\sigma `$. We note that only one of the 277 galaxies in our spectroscopic sample has Balmer absorption exceeding 5Å (CGCG 0109.0+0104), and this object has strong emission lines. There appear to be no “E+A” galaxies in our sample, which is not surprising given the small fraction ($`0.2`$%) of such objects in the local universe (Zabludoff et al. (1996)). Table 2 includes the resulting H$`\alpha `$ equivalent widths and their errors.
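For readers who want to reproduce these measurements, the sketch below shows one way to implement the direct line-integration described above and the Balmer-absorption correction just applied. The continuum windows, the sign convention (emission positive), and the $`1\sigma `$ detection test are illustrative assumptions, not the exact CfA pipeline.

```python
import numpy as np

def equivalent_width(wave, flux, line_win, cont_wins):
    """Direct-integration equivalent width over a line window (Angstrom).

    Positive values denote emission with this sign convention.  A straight line is
    fit to the flux in the continuum side-bands, and the quantity
    (F_lambda - F_cont)/F_cont is integrated across the line window.
    """
    cont_mask = np.zeros(wave.shape, dtype=bool)
    for lo, hi in cont_wins:
        cont_mask |= (wave >= lo) & (wave <= hi)
    slope, intercept = np.polyfit(wave[cont_mask], flux[cont_mask], 1)
    cont = slope * wave + intercept

    in_line = (wave >= line_win[0]) & (wave <= line_win[1])
    return np.trapz((flux[in_line] - cont[in_line]) / cont[in_line], wave[in_line])

def balmer_corrected_ew_halpha(ew_ha, ew_hg, ew_hd, sig_hg, sig_hd):
    """Add back to EW(H-alpha) the larger of the H-gamma / H-delta absorption EWs,
    provided the line is detected in absorption (negative EW here) at >= 1 sigma."""
    absorbed = [-ew for ew, sig in ((ew_hg, sig_hg), (ew_hd, sig_hd)) if ew < 0 and -ew >= sig]
    return ew_ha + (max(absorbed) if absorbed else 0.0)
```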
### 3.2 Redshift Survey of Void Galaxy Fields
We observed the redshift survey targets of Table 4 with the FAST longslit CCD spectrograph (Fabricant et al. (1998)) on the FLWO 1.5 m Tillinghast reflector over the period June 1996–November 1997 as part of the FAST batch queue. The exposure times ranged from 5–20 minutes, with a median of 12 minutes. The observing setup, as well as the spectrum reduction, wavelength-calibration, and redshift extraction, were identical to the high-S/N spectroscopic survey (§3.1).
Of the 211 galaxies in Table 4, we include new measurements for 83. Another 46 are members of the primary sample, and thus have known redshifts. For the remainder, we obtained 35 redshifts from the 15R and Century Surveys, the UZC (Falco et al. (1999)), ZCAT (Huchra et al. (1995)), and NED<sup>1</sup><sup>1</sup>1The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.. The median uncertainty for the redshifts presented in Table 4 is 21 km s<sup>-1</sup>.
We lack a redshift for 47 galaxies, a completeness of 78%. These galaxies are bunched near the magnitude limit — 36 of the 47 have $`r_{\mathrm{SE}}>15.5`$. The survey completeness by field is somewhat greater because many of the imaging survey fields have no follow-up targets: 89% of the LDVS fields are fully surveyed, 85% of the HDVS, and 88% of the VPS.
Of the 165 galaxies in Table 4 which have a projected radius $`D_p115h^1`$ kpc from a Table 2 galaxy, we have redshifts for 129 — again 78% complete. The completeness by field is slightly larger when restricted to $`D_p<115h^1`$ kpc because more of the fields have no targets: 91% of the LDVS fields are fully surveyed for $`D_p115h^1`$ kpc, 88% of the HDVS, and 90% of the VPS.
## 4 Morphology-Density Relation
One of us (N.A.G.) classified the morphologies of the entire sample by eye from the $`B`$ CCD images. The median seeing during the observations was $`2\stackrel{}{\mathrm{.}}0`$ and varied between $`1\stackrel{}{\mathrm{.}}4`$ and $`3\stackrel{}{\mathrm{.}}3`$. The target galaxies, all with $`5000cz10000`$ km s<sup>-1</sup>, are typically $`90\mathrm{}`$ in diameter and are roughly centered within the $`11\mathrm{}`$ CCD fields ($`0\stackrel{}{\mathrm{.}}65`$ per $`2\times 2`$-binned pixel). We assign each galaxy in Table 2 a “$`T`$-type” from the revised morphological system, with the caveat that we list both irregular and peculiar galaxies as $`T=10`$. From repeatability of classification for galaxies imaged on multiple nights, as well as from independent verification of the classifications by several “experts” (J. Huchra, M. Kurtz, R. Olowin, and G. Wegner), we estimate that the classifications are accurate to $`\sigma _T\pm 1`$ for the CGCG galaxies and $`\sigma _T\pm 2`$ for the fainter (and typically smaller in angular size) 15R and Century galaxies.
We plot (Fig. 3) the histograms of revised morphological type $`T`$ for the VPS (top), the HDVS (middle), and the LDVS (bottom). The VPS and the HDVS are very similar in their morphological mix, with an early-type fraction of $`30`$% (Tab. 5). A chi-square test of these two histograms yields a $`78\%`$ probability of the null hypothesis that the VPS and HDVS have a consistent underlying morphological distribution.
In contrast, Figure 3 shows that the morphological mix changes significantly at the lowest densities (LDVS). There is a notable increase in the fraction of Irr/Pec galaxies and a corresponding decrease in the early-type fraction (Tab. 5). A chi-square test between the VPS and LDVS morphology histograms gives only a 7% probability that these two samples reflect the same underlying morphological mix. These two samples are well-separated in surrounding density — the uncertainty in the $`5h^1`$ Mpc density estimator is $`0.1`$ at the distance of the three voids in this study (Grogin & Geller (1998)). Clearly a larger sample is desirable to better establish the morphological similarity between the VPS and HDVS, and their morphological contrast with the LDVS.
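The chi-square comparisons quoted here and above can be reproduced with a standard two-sample test on the binned type counts; a minimal sketch (the counts below are placeholders, not the Table 5 values) is:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Galaxies per morphological-type bin for two subsamples (placeholder counts;
# the real numbers come from the Figure 3 / Table 5 histograms).
vps_counts  = np.array([14, 19, 32, 24, 16])
hdvs_counts = np.array([10, 16, 30, 28, 20])

# Null hypothesis: both subsamples are drawn from the same underlying type mix.
chi2, p_null, dof, _ = chi2_contingency(np.vstack([vps_counts, hdvs_counts]))
print(f"chi2 = {chi2:.1f} (dof = {dof}), P(null) = {p_null:.2f}")
```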
The incidence of qualitatively disturbed or interacting systems appears somewhat larger for the LDVS: $`35\pm 10\%`$, compared with $`20\pm 5\%`$ for the galaxies at larger $`n`$. We show (Fig. 4) a mosaic of 9 $`B`$-band images of LDVS galaxies which are probable interactions. Notable among these interacting void galaxies is the spectacular object IC 4553 (Arp 220), the prototype (and nearest) ultraluminous IR galaxy. An increase in disturbed galaxies at the lowest global densities seems counterintuitive. We show (§6) that the effect may result from a low small-scale velocity dispersion in these regions.
## 5 EW(H$`\alpha `$)-Density Relation
Figure 5 shows the cumulative distribution function (CDF) of H$`\alpha `$ equivalent width for the three different density regimes: the VPS (dashed), the HDVS (dotted), and the LDVS (solid). The similarity between the VPS and HDVS is evident, with a Kolmogorov-Smirnov (K-S) probability of 32% that the galaxies in these two density regimes have a consistent underlying distribution of EW(H$`\alpha `$). Given the similar fraction of early-type galaxies in these two samples (Fig. 3), it is not surprising that we see a similar fraction of absorption-line systems ($`\sim 35\%`$). If this absorption-line fraction is representative of the overall survey at similar densities, then void galaxy studies drawn from emission-line surveys miss roughly one-third of the luminous galaxies in regions of modest global underdensity. Figure 5 shows that there are galaxies even at $`n\le 0.5\overline{n}`$ with old stellar populations and no appreciable current star formation.
The shift toward late-type morphology in the LDVS (Fig. 3) is mirrored by a shift toward larger H$`\alpha `$ equivalent widths (Fig. 5). Absorption-line systems are less than half as abundant at $`n\le 0.5\overline{n}`$ ($`\sim 15`$% of the total); strong ELGs with EW(H$`\alpha `$) $`>40`$Å are more than three times as abundant. The K-S probability of the LDVS and VPS representing the same underlying distribution of EW(H$`\alpha `$) is only 0.4%. The probability rises to 3% between the LDVS and HDVS.
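The K-S probabilities are computed from the unbinned EW(H$`\alpha `$) values; a minimal sketch with scipy, using random placeholder samples in place of the measured LDVS and VPS distributions, is:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Placeholder EW(H-alpha) samples in Angstrom; substitute the measured LDVS / VPS values.
ew_ldvs = rng.exponential(scale=45.0, size=46)
ew_vps  = rng.exponential(scale=20.0, size=100)

d_stat, p_value = ks_2samp(ew_ldvs, ew_vps)
print(f"K-S D = {d_stat:.2f},  P(same parent distribution) = {p_value:.4f}")
```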
Figure 6 shows EW(H$`\alpha `$) as a function of the galaxies’ $`B-R`$ colors — the shift toward bluer galaxies in the LDVS is clear (cf. Paper I ). The red galaxies are predominantly absorption-line systems, with the notable exception of several galaxies in the LDVS with $`B-R\gtrsim 1.2`$ and EW(H$`\alpha `$) $`\gtrsim 20`$Å. Figure 7 displays these galaxies’ spectra and $`B`$-band images. Only two have bright nearby companions (CGCG 0017.5+0612E, CGCG 1614.5+4231), but the others have possible faint companions. All appear to be disk systems, and the red colors probably result from internal reddening by dust; Balmer decrements are in the range 7–9 for these galaxies compared with the typical value of $`\sim 2.8`$ for case-B recombination.
## 6 Void Galaxy Companions
Here we discuss various results stemming from our deeper redshift survey of the imaging survey fields to $`m_R=16.13`$ (cf. §3.2). We determine the incidence of close companions as a function of density environment, examine the relationship between the presence of companions and the distribution of EW(H$`\alpha `$) versus density, and measure the velocity separation of the close companions as a function of $`(n/\overline{n})`$.
### 6.1 Incidence of Close Companions and Effect on EW(H$`\alpha `$)
Figure 8 shows the projected separations (in $`h^{-1}`$ kpc) and absolute velocity separations for all entries in Table 4 with measured redshift. A galaxy in Table 4 counts as a companion if the velocity separation from the primary is $`<1000`$ km s<sup>-1</sup> (dashed horizontal line). This velocity cutoff is generous, but the gap in Figure 8 at $`|\mathrm{\Delta }cz|\sim 500`$–2000 km s<sup>-1</sup> leads us to expect few interlopers. Because the sky coverage of the neighbor redshift survey becomes increasingly sparse at projected separations $`D_p\gtrsim 115h^{-1}`$ kpc (dotted vertical line), we repeat the analyses in this section with and without the added companion criterion $`D_p\le 115h^{-1}`$ kpc.
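Operationally the companion definition is just these two cuts; a minimal sketch (the small-angle projected-separation formula and the function name are illustrative, not taken from the paper) is:

```python
import numpy as np

H0 = 100.0  # km/s/Mpc, so projected separations carry h^-1

def is_companion(cz_primary, cz_neighbor, sep_arcsec, dv_max=1000.0, dp_max_kpc=115.0):
    """True if the neighbor satisfies |delta cz| < dv_max and D_p <= dp_max_kpc.

    D_p uses the small-angle approximation at the primary's Hubble-flow distance:
    D_p = (cz / H0) * theta, with theta in radians.
    """
    delta_v = abs(cz_neighbor - cz_primary)
    theta = np.radians(sep_arcsec / 3600.0)
    d_p_kpc = (cz_primary / H0) * theta * 1000.0
    return (delta_v < dv_max) and (d_p_kpc <= dp_max_kpc)

# A neighbor 90 arcsec from a cz = 7500 km/s primary, offset by 50 km/s in redshift:
print(is_companion(7500.0, 7450.0, 90.0))   # True (D_p ~ 33 h^-1 kpc)
```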
Table 6 lists the fraction of galaxies in the various density groupings which are classified as “unpaired” (zero companions as defined above) and “paired” (at least one such companion), or cannot yet be classified because of one or more missing redshifts in Table 4. The fraction of paired galaxies decreases consistently across all density subsamples by $`25`$% under the restriction $`D_p115h^1`$ kpc . Table 6 shows that the fraction of paired galaxies is largely insensitive to the global density environment. Szomoru et al. (1996), who detected 29 companions around 12 Boötes void galaxies in Hi, also noted this tendency of void galaxies to be no less isolated on these small scales than galaxies at higher density.
We investigate the relationship between close companions and recent star formation in void galaxies by comparing EW(H$`\alpha `$) CDFs for paired versus unpaired galaxies. Figure 9 shows a dual-CDF plot of the paired and unpaired galaxies’ EW(H$`\alpha `$) (with the companion restriction $`D_p115h^1`$ kpc): the unpaired galaxies’ CDF increases from the bottom; the paired galaxies’ CDF decreases from the top. As in Figure 5, we distinguish between the VPS (dashed), HDVS (dotted), and LDVS (solid). The overall fraction of galaxies exceeding a given EW(H$`\alpha `$) is now represented by the interval between the upper and lower curves. For example, the excess of high-EW(H$`\alpha `$) galaxies in the LDVS is reflected here in the slower convergence of upper and lower solid curves until large EW(H$`\alpha `$). The upper and lower curves converge to the fraction of galaxies with EW(H$`\alpha `$) measurements but without detected companions. As the overall unpaired fraction is similar for each density subsample (cf. Tab. 6), it is reassuring that the three sets of curves converge at similar levels.
Inspection of the lower curves of Figure 9 reveals that the unpaired galaxies have much the same distribution of EW(H$`\alpha `$), regardless of the global density environment. The K-S probabilities (Tab. 7) confirm this impression. In contrast, the galaxies with companions have a very different EW(H$`\alpha `$) distribution depending on their density environment. Absorption-line systems account for $`20`$% of the paired galaxies in both the VPS and the HDVS, while there are essentially no LDVS absorption-line systems with companions. A K-S comparison of the EW(H$`\alpha `$) distribution of the paired-galaxy LDVS with either the HDVS or VPS reveals a highly significant ($`4\sigma `$) mismatch, whereas the EW(H$`\alpha `$) distributions for the paired-galaxy HDVS and VPS are consistent (Tab. 7).
### 6.2 Pair Velocity Separation vs. Density
Figure 10 shows the radial velocity separation $`\mathrm{\Delta }cz`$ versus projected separation for entries in Table 4 with $`|\mathrm{\Delta }cz|<1000`$ km s<sup>-1</sup>. Clearly the velocity separations of the LDVS pairs (solid triangles) are much smaller than either the HDVS pairs (open squares) or the VPS pairs (open stars). For the pairs which fall within the $`90`$% coverage radius of $`115h^1`$ kpc (dotted line), the dispersion in velocity separation $`\sigma _{\mathrm{\Delta }cz}`$ is $`88\pm 22`$ km s<sup>-1</sup> for the LDVS, compared with $`203\pm 26`$ km s<sup>-1</sup> for the HDVS and $`266\pm 31`$ km s<sup>-1</sup> for the VPS (Table 6). These values for $`\sigma _{\mathrm{\Delta }cz}`$ do not vary by more than the errors if we include the points in Figure 10 with projected separations $`>115h^1`$ kpc (“All $`D_p`$” in Tab. 6).
An F-test (Tab. 8) between the respective $`\mathrm{\Delta }cz`$ distributions for companions with $`D_p115h^1`$ kpc gives a low probability $`P_\mathrm{F}`$ that the LDVS could have the same underlying $`\mathrm{\Delta }cz`$ variance as the HDVS ($`P_\mathrm{F}=3.2\%`$) or the VPS ($`P_\mathrm{F}=0.59\%`$). The difference between HDVS and VPS dispersions is not significant at the $`2\sigma `$ level ($`P_\mathrm{F}=13\%`$). The F-test probabilities using all $`D_p`$ from Table 4 are almost identical (Tab. 8). We discuss the implications of the velocity dispersion variation in §7.
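The dispersion comparison can be reproduced with a simple variance ratio. In the sketch below the error estimate $`\sigma /\sqrt{2(N-1)}`$ and the plain two-tailed variance-ratio F-test are standard choices assumed for illustration rather than taken from the paper:

```python
import numpy as np
from scipy.stats import f as f_dist

def dispersion(dcz):
    """Pairwise velocity-separation dispersion and an approximate error, sigma / sqrt(2(N-1))."""
    dcz = np.asarray(dcz, dtype=float)
    sigma = dcz.std(ddof=1)
    return sigma, sigma / np.sqrt(2.0 * (dcz.size - 1))

def f_test_prob(dcz_a, dcz_b):
    """Two-tailed probability that the two sets of separations share one underlying variance."""
    var_a, var_b = np.var(dcz_a, ddof=1), np.var(dcz_b, ddof=1)
    if var_a < var_b:   # put the larger variance in the numerator
        dcz_a, dcz_b = dcz_b, dcz_a
        var_a, var_b = var_b, var_a
    f_stat = var_a / var_b
    p_one_tail = f_dist.sf(f_stat, len(dcz_a) - 1, len(dcz_b) - 1)
    return min(1.0, 2.0 * p_one_tail)
```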
## 7 Summary and Conclusions
Our $`B`$- and $`R`$-band CCD imaging survey and high-S/N longslit spectroscopic survey of $`\sim 300`$ galaxies in and around three prominent nearby voids have enabled us to examine the morphologies and star-formation history (in terms of EW(H$`\alpha `$)) as a function of the global density environment for $`n\lesssim 2\overline{n}`$. These studies complement our earlier examination of the luminosity and $`B-R`$ color distributions of the same galaxies (Paper I ). We have also described an additional redshift survey of projected “companions” to $`m_R=16.13`$ which probes the very local environments ($`\lesssim 150h^{-1}`$ kpc) around these galaxies within globally ($`5h^{-1}`$ Mpc) low-density regions.
Our analysis of the CCD $`B`$ morphologies and H$`\alpha `$ linewidths reveals:
1. The distribution of galaxy morphologies varies little with large-scale ($`5h^{-1}`$ Mpc) density environment over the range $`0.5<(n/\overline{n})\le 2`$, with a consistent fraction of early types ($`\sim 35`$% with $`T<0`$). The distribution of H$`\alpha `$ equivalent widths, indicative of star-formation history, is similarly invariant with large-scale density over this range.
2. At large-scale densities below half the mean, both the morphology and EW(H$`\alpha `$) distributions deviate at the 2–3$`\sigma `$ level from the higher-density subsamples. There is a reduction in the early-type fraction (down to $`\sim 15`$%) and a corresponding increase in the fraction of irregular/peculiar morphologies. More of these galaxies show active star formation; even several of the redder galaxies ($`B-R>1.2`$) have EW(H$`\alpha `$)$`>20`$Å. Many of the void galaxies, particularly at the lowest densities, show evidence of recent or current mergers/interaction.
The results here and in Paper I can be combined into a consistent picture of the trends in galaxy properties with large-scale underdensity. In Paper I we showed that the luminosity function in the voids at $`n\gtrsim 0.5\overline{n}`$ is consistent with typical redshift survey LFs; at densities below $`0.5\overline{n}`$ the LF faint-end slope steepens to $`\alpha \simeq -1.4`$. Recent studies point to a type-dependent galaxy LF with steeper faint-end slope for the late morphologies in CfA2+SSRS2 (Marzke et al. (1998)) and for the ELGs in the Las Campanas Redshift Survey (Bromley et al. (1998)). We might therefore expect our morphological and spectroscopic distributions to vary little over the range $`0.5<(n/\overline{n})\le 2`$ and to shift toward late types and ELGs at the lowest densities. This trend is exactly what we observe (cf. §4).
Furthermore, we found in Paper I that the $`B-R`$ color distribution of our sample at densities $`\gtrsim 0.5\overline{n}`$ is consistent with the overall survey and shifts significantly toward the blue for $`n\lesssim 0.5\overline{n}`$. This trend is consistent with the observed shift at $`(n\lesssim 0.5\overline{n})`$ toward late-type and high-EW(H$`\alpha `$) galaxies, which are typically bluer than early-type and absorption-line systems.
To ascertain whether these changes in galaxy properties below $`(n/\overline{n})\simeq 0.5`$ are caused by variations in the local redshift-space environments of these galaxies (within $`\lesssim 150h^{-1}`$ kpc), we have carried out a deeper redshift survey of the imaging survey fields, to $`m_R=16.13`$. The relative fractions of unpaired versus paired galaxies (paired galaxies are closer than $`115h^{-1}`$ kpc projected and 1000 km s<sup>-1</sup> in redshift) do not significantly vary with the degree of global underdensity. Furthermore, the distribution of EW(H$`\alpha `$) for the unpaired galaxies varies little between our density subsamples. However, the galaxies at the lowest densities ($`n/\overline{n}\lesssim 0.5`$) which have companions are invariably ELGs. At higher global densities, roughly $`20\%`$ of the galaxies with companions are absorption-line systems. This difference in paired galaxy EW(H$`\alpha `$) with density is significant at the $`4\sigma `$ level.
Our companion redshift survey further reveals that the pair velocity separation decreases significantly ($`\sim 3\sigma `$) at the lowest densities, in support of theoretical predictions (e.g., Narayanan, Berlind, & Weinberg (1998)) that within the voids the velocity dispersion among galaxies should decline. Because associated galaxies at smaller velocity separations should have more effective interactions, the excess strong ELGs and disturbed morphologies at global underdensities $`(n\lesssim 0.5\overline{n})`$ may be ascribed to local influences.
These results argue for a hierarchical galaxy formation scenario where the luminous galaxies in higher-density regions formed earlier than at much lower density (e.g., Kauffmann (1996)). The older galaxies at higher density would typically have less gas and dust at the present epoch, and thus show less active star formation even in the presence of nearby companions. In the voids, where the luminous galaxies presumably formed more recently, there should be more gas and dust present for active star formation triggered by nearby companions. As the EW(H$`\alpha `$) distributions are almost identical for the unpaired galaxies at different global density, we conclude that the local environment, i.e. the presence or absence of nearby ($`\lesssim 150h^{-1}`$ kpc) companions, has more influence upon the current rate of star formation in these regions. In a future paper we hope to clarify the relationship between global underdensity and galaxy age (and metallicity) by studying the absorption-line indices of our high-S/N spectra (cf. Trager et al. (1998) and references therein). We may thereby use low-density regions to test the prediction by Balland, Silk, & Schaeffer (1998) that non-cluster ellipticals must have all formed at high redshift ($`z\gtrsim 2.5`$).
Although the sample of void galaxies described here is much larger than previous studies of at most a few dozen objects (e.g. Cruzen, Weistrop, & Hoopes (1997); Szomoru, van Gorkom, & Gregg (1996); Thorstensen et al. (1995); Weistrop et al. (1995)), the distinctions in galaxy properties we observe at the lowest densities are based upon $`<50`$ objects. We should like to increase the LDVS sample size to improve our statistics. One avenue is to include all other low-density portions of CfA2 and the comparably deep SSRS2 survey (Da Costa et al. (1998)). Unfortunately the centers of voids are very empty, at least to $`m_B=15.5`$, and we would only expect to increase the low density sample thereby to $`\sim 100`$ galaxies. Because the global density estimator requires a well-sampled, contiguous volume with dimensions $`\gtrsim 5h^{-1}`$ Mpc, growing the LDVS sample significantly is contingent upon deeper, wide-angle redshift surveys like the Sloan Digital Sky Survey (Bahcall (1995)) and the 2dF Galaxy Redshift Survey (Folkes et al. (1999)).
###### Acknowledgements.
We acknowledge the FAST queue observers, especially P. Berlind and J. Peters, for their help in obtaining our spectra; E. Barton, A. Mahdavi, and S. Tokarz for assistance with the spectroscopic reduction; and D. Fabricant for the FAST spectrograph. We give thanks to J. Huchra, M. Kurtz, R. Olowin, and G. Wegner for sharing their expertise in morphological classification, and to J. Huchra and A. Mahdavi for each kindly providing a redshift in advance of publication. This research was supported in part by the Smithsonian Institution.
# The HR Diagram of Globular Clusters: Theorist’s view(s)
## 1 Introduction
For close to forty years now, Globular Cluster (GC) HR diagrams have been used to derive the age of the oldest stars of the Galaxy, but today they provide a very complete test of the predictions of stellar evolution of low mass stars. This is easily recognized by looking at the composite HR diagram of the GC NGC 6397 shown in figure 1, and illustrated in figure 2 by recent stellar models. The diagram shows the core H–burning phases (main sequence –MS– and turnoff –TO– regions), the H–shell burning of the evolving mass ($`M\simeq 0.8M_{\odot }`$) including the red giant branch –RGB–, the Helium core burning phase (horizontal branch –HB–) and the double shell burning phase (asymptotic giant branch –AGB–). The Planetary Nebula phase is too short (only two planetaries are known in the galactic GC system) but the white dwarf –WD– cooling is well represented. The MS ends at the top with the evolving stars, and at the bottom with the lowest masses which are able to ignite hydrogen. The characteristic shape of the MS well below the turnoff displays details of the physics of the atmospheres and interiors of low and very low mass stars (see, e.g. D’Antona 1995): it changes slope twice, at a first, more luminous “kink” (FK) at $`M\simeq 0.5M_{\odot }`$, and at a second kink (SK), very close to the end of hydrogen burning structures. So the HR diagrams provide morphological information and reference luminosity indicators (TO, HB, WDs) which give constraints on the evolution of the cluster stars. In addition, the number ratios (luminosity functions (LFs), “clumps”, gaps, ratio of HB to RGB number stars) add valuable quantitative information to the morphology.
In particular, the LF of the turnoff and giant branch depends mostly on the age of the system, while the mass function affects its unevolved part. The LF below the TO presents a characteristic broad maximum, due to the functional form of the mass – luminosity relation, whose peak becomes dimmer when the metal content increases (for a full description see, e.g., Silvestri et al. 1998).
In my opinion, the two fundamental questions which we would like to address in a meeting on the comparison between GCs and halo field stars are the following:
1. is our knowledge sufficient to constrain the GC ages at a level at which they can be interesting as indicators of the age of the Universe?
2. are GC stars identical to their halo counterparts of similar metallicity?
The answers to these questions are simple and a bit uncomfortable:
* Details of the theory determine the precise absolute ages of GCs: so far it has been difficult to constrain the ages to better than the range 10 to 16Gyr (but see later);
* The distance scale of GCs is necessary to know their ages. Its calibration necessarily relies on the hypothesis that GC and halo stars are strict relatives, unless we put all our faith in theoretical models only.
In the following I describe the ways in which we can try to determine the distance scale of GCs and the hidden dangers in playing the “isochrone fitting computer game”, mainly those related to the use of the giant branch location. I will finally show how the new distance indicators emerging in recent years (WDs, FK and SK of the low MS) are consistent with the traditional distance scales based on the fit of ground based photometry and on the HB models.
## 2 The ages paradigm
The distance scale of GCs is the main key to their age determination. There are many “traditional” methods to derive this scale: they can be reduced to the following list:
* Approach based on distance indicators:
+ Fitting of MS to local sample of subdwarfs;
+ Fitting of the GCs RR Lyrae to RR Lyrae in the Magellanic Clouds, distance of LMC calibrated through the Cepheids;
+ Fitting of HB (or RR Lyrae) to local halo HB or RR Lyrae.
* Purely theoretical approach:
+ Fitting of HB (or RR Lyrae) luminosity to HB theoretical models;
+ Fitting of observed MS to theoretical MS (this implies a match of the models colors);
+ Fitting of the morphology of the HR diagram ($`\delta (BV)`$ type methods)
The recent, mainly HST based, observations which allow us to reach ever dimmer luminosities have added new ways to determine the distance, or at least to check it, which will be examined in sections 5, 6 and 7, namely:
* fitting of the location of the low MS, with attention to the location of the FK;
* fitting of the MS region following the SK with the local M subdwarfs sample;
* fitting of the WD sequence to models or disk counterparts.
I will not discuss the approach based on the classical distance indicators, which has known a renewed interest in these years, thanks to the impact of the results from the Hipparcos satellite. The fitting to the local subdwarfs is discussed in Reid 1997, Gratton et al. 1997, Pont et al. 1998 and Chaboyer et al. 1998. The RR Lyrae calibration after Hipparcos data was first rediscussed by Feast and Catchpole (1997). Notice that, while most Hipparcos results imply a more or less stringent confirmation of the so called “long” distance scale for GCs, the local RR Lyrae and HB stars give a much shorter scale (Fernley et al. 1998) consistent with the previous statistical parallax determination by Layden et al. 1996 –but see the recent approach by Groenewegen and Salaris 1999.
I will mostly concentrate on the theoretical approach. Actually, none of the theoretical methods has ever been used independently of the others, but a sort of “consistency” between the different aspects of the problems has generally been looked for, including the observational distance indicators. For instance, fitting the morphology (that is, the relative position of RGB and TO) as an absolute method for the age determination has never been taken seriously (see later) but the consistency of the whole HR diagram locations (as shown in figure 1b) has been more or less unconsciously taken as self-evidence of an evolutionary scheme –and thus of a given range of ages– and especially in some comparisons with observations we have seen mention of “spectacular fit”, or “location and shape matched superbly by isochrones” while the truth hidden below is very different.
In the following I will base part of my discussion on a personal interpretation of the events which in recent years led to a revision of the average age of GCs. A look at the literature in fact shows that the ages of GCs quoted before or up to 1996-1997 (pre-Hipparcos) are in the range 13-18Gyr, while the most quoted ages of the years 1998-1999 are 10-14Gyr (post-Hipparcos)<sup>1</sup><sup>1</sup>1Two notable exceptions are the work by Salaris et al. (1997) and the three papers by our group (Mazzitelli et al. 1995, D’Antona et al. 1997, Caloi et al. 1997) which published post–Hipparcos ages in the pre–Hipparcos years. Vandenberg et al. 1996 in fact give an age of $`15.8\pm 2`$Gyr to M92, noting that “ages below 12 or above 20Gyr appear highly unlikely”, and Chaboyer et al. (1996) give an average age of $`14.6\pm 1.7`$Gyr to the galactic GCs, putting a 95% lower bound at 12.1Gyr. The turning point seems to have been the results from the Hipparcos satellite, which on the one hand made the metal poor subdwarf sequence more luminous (by no more than 0.1mag, the effect being even lower according to some researchers) and on the other hand contributed to raise the zero point of Cepheids’ luminosity, leading thus to confirm a larger luminosity of the RR Lyrae in the LMC. However, the Hipparcos results alone do not justify the global shift of the average age of GCs, which amounts to $`\sim 4`$Gyr (from 16 to 12Gyr). In my opinion, Hipparcos has simply given more weight to the “long” distance scale of GCs, which already had some emphasis in the observational literature (Sandage 1993, Walker 1992).
What has been happening is schematized in a very naif way in figure 3: the HB (or RR Lyrae) luminosity has been increased in recent models due to the sum of two small effects (a slight increase in the core mass at the helium flash -by about 0.01$`M_{}`$\- and a slight increase due to the improvement in the equation of state (EoS)). This has been the most important update in the models, and has shifted the ages to at most a couple of Gyr smaller with respect to previous HB theoretical luminosity<sup>2</sup><sup>2</sup>2Some confusion was however present in the literature up to 1997 as to the absolute visual magnitudes corresponding to the $`\mathrm{log}L/L_{}`$ of the models (see e.g. the display in the lower panel of figure 1 from Caloi et al. 1997), so that the modification induced in HB models might appear more drastic in some authors’ comparisons..
At the same time, the TO luminosity corresponding to a given age has been slightly decreasing. The sum of these subtle effects is a good $`0.27`$mag of difference in the absolute TO location at a given age based on old or new models, for a given $`\mathrm{\Delta }V`$ from the HB to the TO, and thus the net effect is a reduction of 4Gyr in the age. The interpretation which figure 3 gives to the age decrease is not unique: other motivations for a more or less substantial decrease in the age are found in the recent literature. Pont et al. (1998), who do not revise the HB luminosity, note however a difference in the scale of the $`V`$ bolometric corrections ($`BC_V`$) between the model atmospheres employed by Bergbusch and Vandenberg 1992, and the most recent scale both by Kurucz and by Bell, amounting to $`\sim `$0.1mag, and thus leading to a decrease in the age by $`\sim 1.5`$Gyr.
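To see how a shift of this size maps into a few Gyr, one can fold it through the slope of the turnoff magnitude–age relation. The value $`\mathrm{\Delta }M_V(TO)/\mathrm{\Delta }\mathrm{log}t\simeq 2.7`$ mag per dex used in the sketch below is a typical literature number, adopted here purely for illustration and not taken from this paper:

```python
def age_shift(delta_m_to, age_gyr=16.0, slope=2.7):
    """Age change implied by making the turnoff brighter by delta_m_to magnitudes,
    assuming M_V(TO) ~ slope * log10(age) + const (slope in mag per dex of age)."""
    younger_age = age_gyr * 10.0 ** (-delta_m_to / slope)
    return age_gyr - younger_age

# A TO 0.27 mag brighter (e.g. from a 0.27 mag larger distance modulus at fixed DeltaV):
print(f"age reduction ~ {age_shift(0.27):.1f} Gyr starting from 16 Gyr")   # ~3.3 Gyr
```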
It is evident that the HB luminosity of the present models, which in the end represents the most direct classical distance indicator, is in itself still uncertain, unless we believe that we can trust our models at the level of 0.01$`M_{\odot }`$ for the determination of the helium core mass at flash, and that we know perfectly all the other pieces of input physics. At least, discussion is still open on the helium core flash masses. Notice also that, e.g., the most recent HB models (Caloi et al. 1997 versus Cassisi et al. 1998) do not agree on the $`L_{HB}`$ at intermediate metallicity (for \[M/H\] from $`\sim `$ –1.5 to $`-1`$), and it is unclear why. The problem of GC ages is linked to minute details of the input physics, and we cannot exclude an uncertainty of $`0.25`$mag in the theoretical determination of the HB luminosity. So the real uncertainty on the age determination from the HB is still $`\pm `$several Gyr.
However, why was the paradigm of the 15-16Gyr age so difficult to abandon? In my opinion a part of the answer is the following: in the course of many years, the whole theoretical construction of the GC HR diagram had been adjusted to be consistent with about that age, so that it proved very difficult to make drastic changes to that view. This interpretation becomes clearer by examining the relative location of TO and RGB in stellar models.
The TO color location is discussed in section 3. The input physics may affect it at a level of $`0.05`$mag. On the other hand, the RGB colors heavily depend on the treatment of convection. By changing the ratio of mixing length to pressure scale height ($`\alpha =l/H_p`$) in the MLT formulation, the color location of the RGB may vary by tenths of magnitude (e.g. Vandenberg 1983). Although everybody knew that the $`\alpha `$ choice was ‘ad hoc’, the “old” distance scale had this interesting outcome: by chance it had the additional bonus of giving a very good fit of the MS and RGB locations, if the models employed the same $`\alpha `$ parameter which fitted the solar radius at the solar age (solar calibration of the mixing length). In addition, the solar calibration was also in good agreement with the location of the best known subdwarf Groombridge 1830. It was perhaps necessary to add some very small adjustment of colors, but the reproduction of the GC morphologies was indeed very good. The best example of this procedure is given by Bergbusch and Vandenberg 1992: they show how they calibrate their color-$`T_{\mathrm{eff}}`$ relations (based on quite good model atmospheres) to provide a “consistent” picture for all metallicities. They also explicitly state that the transformations they adopt are OK for their own models, and that different adjustments might be required by other models. In spite of this care, the procedure adopted in Bergbusch and Vandenberg 1992 (but also by others) implicitly hides both the choice of the distance scale (and thus the resulting 15Gyr or so) and the choice of the convection model, as I will now clarify. The fortuitous agreement of the RG location, in population II models computed with a solar calibrated $`\alpha `$, with the observed GC giant branches led researchers to postpone the problem of a better understanding of superadiabatic convection<sup>3</sup><sup>3</sup>3This “canonical” assumption was abandoned only in the Mazzitelli et al. 1995 paper, treating convection according to the Canuto and Mazzitelli (1991) model, and in Chieffi et al. (1995), who propose a calibration of $`\alpha `$ as a function of the cluster metallicity. This latter paper puts clearly into evidence that no predictions can be made on the RGB location on the basis of MLT models.. After 1997 the -even small- change of distance scale implied by the new HB models and by the Hipparcos subdwarfs re–calibration no longer allowed the problem to be forgotten: the same theoretical RGB, for a smaller age, provides a larger $`\delta (B-V)`$ and the theoretical RGs were too red. Thus the “new” ages required a change either in the convection modelling, or a different tuning of the color-$`T_{\mathrm{eff}}`$ relations, or both. The situation is schematically shown in figure 4, in which I use an extreme difference in the distance modulus (0.25mag) to clarify the problem. Suppose that a color magnitude diagram was well fit by an isochrone (open squares in figure 4). An update of the distance modulus to 0.25mag longer (and the adjustment of the color by 0.06mag, in the range of TO color uncertainties) now provides an age $`\sim 6`$Gyr younger, but it no longer allows a fit of the RGB. A good fit requires a bluer RGB.
The modellers have tried to solve the problem of the discrepancy between the $`\delta (BV)`$ and the new distance scale in the following ways:
1. they have increased $`\alpha `$ to recover the fit. On theoretical grounds, there is no scientific basis for the assumption that the $`\alpha `$ in different stars should be the same as in the solar model, so why not? This solution is adopted e.g. by Brocato et al. (1998, who discuss at length the effect described here for the case of the GC M68) and by Cassisi et al. 1998;
2. some researchers have considered again models with solar $`\alpha `$, but have chosen the color – $`T_{\mathrm{eff}}`$ relation in an appropriate way to reproduce the giant colors (it is generally possible to find good justifications for this choice also). This is the approach by Salaris and Weiss 1997, 1998: they adopt Buser and Kurucz (1978) colors for the giants, which are bluer by several hundredths of magnitude than the more recent ATLAS9 updated colors (see e.g. the comparison in figure 1 of Cassisi et al. 1999). In this way, the $`\delta (B-V)`$ between the TO and the RGB turns out smaller and can fit GC shapes with the new distance scale.
3. there are a few attempts to try different convection models, which will generally not allow a “perfect fit”.
If one adopts the solutions 1) or 2), it is important to remember that the shape of the HR diagram loses any predictive power, as it has been fit already assuming a distance scale: just as the “spectacular fits” of a few years ago produced a 15Gyr answer, present day fits will produce a 10-12Gyr answer, but the quality of the fit has nothing to do with the truth of the answer. A better way of posing the problem, when an observer adopts a given set of tracks to infer the age of a new stellar system, would be to say that the system shows the same age of -or that its age differs from- the GC on which the track set has been more or less explicitly calibrated. It is not clear to me that even relative ages of GCs of different metallicities can be inferred from the $`\delta (BV)`$ or $`\delta (VI)`$ method, when we use a given MLT prescription which already is tested on the HR diagrams to fit a given distance scale.
The 3rd solution is less misleading, and it could in the end allow progress in the field, but it requires lots of work and may be frustrating, as it produces results not always in “perfect agreement” with observations<sup>4</sup><sup>4</sup>4One of the reasons why we could get the hint of a decrease in the GC ages two years before Hipparcos (Mazzitelli et al. 1995) was that we used a convection model by which it was not possible to get a fine tuning of the RG location.. A few such attempts to overcome the MLT are today available:
1. the FST (or Full Spectrum Turbulence) models, based on the Canuto and Mazzitelli (1991) formulation, whose fluxes are in good agreement with experimental data, and are computed using modern closures of the Navier Stokes equations, and in which the scale length is assumed to be the distance from the convective boundary. Models have been computed by Mazzitelli et al. 1995, D’Antona and Mazzitelli 1997, Silvestri et al. 1998. This formulation of convection gives a different flavour to the HR diagram shape and it is less tunable than the MLT, an advantage in terms of predictive power, but a real failure if we want to obtain perfect fits. However, the Silvestri et al. (1998) models, which differ from the previous of our group mainly for the updated choice of color-$`T_{\mathrm{eff}}`$ transformations (Castelli 1998<sup>5</sup><sup>5</sup>5see Kurucz website http://cfaku5.harvard.edu versus Kurucz 1993),also provide a reasonable fit of the RGB as shown in figures 2 and 6 –but notice also the discrepancy in the case of M30 in figure 5.
2. Freytag and Salaris (1999) have calibrated the MLT $`\alpha `$ by RHD models based on grids of 2D hydrodynamic simulations by Ludwig et al. 1999. Although numerical simulations are able to take into account only a relatively small number of eddies for a realistic description of turbulence, the Freytag and Salaris approach is an interesting novelty for this field.
3. an incongruence of FST models, and of models not adopting a plain MLT description, is that they still use grey boundary conditions, and the colors are obtained through transformations based on MLT model atmospheres. Models including, e.g., FST model atmospheres should be built up to get self-consistent colors (Kupka, Schmidt and D’Antona 1999);
## 3 The TO and upper MS location
The location of the TO is affected by many uncertainties in the input physics, although not at the level of the RGB. If we wish to use the theoretical MS colors to determine an age, we must shift the cluster HR diagram vertically until it is superimposed on the MS of the observed metallicity, and then determine the age from the TO luminosity. The MS is very steep in the TO region: a simple shift in color of the theoretical MS by +0.02mag implies an age smaller by 2Gyr, not to mention the possible uncertainty in the reddening. The main inputs affecting the MS and TO location are the following:
1. the convection description affects both the TO color and its shape (see the comparison between the MLT based description and the FST models in Mazzitelli et al. 1995 and D’Antona et al. 1997);
2. The helium gravitational and thermal settling (diffusion) affects both the TO color and the age. A number of models are available, starting from Proffitt and Michaud 1991, up to D’Antona et al. 1997, Straniero et al. 1997 and Cassisi et al. 1998;
3. the color - $`T_{\mathrm{eff}}`$ relation is affected by the convection treatment in the atmosphere (cf. Kurucz 1993 versus Castelli 1998 models).
Everything included, the absolute determination of the TO and upper MS colors is uncertain by $`0.05`$mag, so that it is better not to rely on colors for age determination.
I add a few words about the possible effect of helium diffusion. It is today well settled that it is necessary to include the treatment of microscopic helium diffusion to account for some details of the seismic Sun (Bahcall and Pinsonneault 1995, Basu et al. 1996), but the evaluation of the diffusion coefficients is difficult, and its application to models not always well clear in the researchers description. The diffusion affects both the TO color and the age: age reductions from 5-10% to 20% are found, and the TO color may be affected up to 0.1mag in some models. An important warning first issued by Deliyannis and Demarque (1991) must be kept in mind: diffusion affects lithium nearly in the same way as helium, thus: “the properties of the Spite plateau in population II severely restrict the amount of diffusion induced curvature that can be tolerated in a lithium isochrone”. In other words, the effect of “too much” diffusion would appear in a smaller lithium abundance for the hotter population II stars, a fact which is not verified in the halo subdwarfs, which show a remarkably flat lithium abundance versus $`T_{\mathrm{eff}}`$ (the Spite and Spite 1982 plateau)<sup>6</sup><sup>6</sup>6Here again we attribute to GCs stars the same properties of the nearby subdwarfs. Actually, the lithium behaviour at the TO of GCs might be a bit different than in the field stars. Boesgaard et al. 1998 show that the M92 TO stars have a larger scatter in lithium than field stars. The Spite’s plateau must still be confirmed by extensive GC stars observations, which are becoming possible with the new generation telescopes.. Chaboyer et al. 1992 show that an age reduction up to 3Gyr (15%) is in principle possible for GCs when diffusion is included in the computation, but they find that the lithium isochrones imply a maximum age reduction by 1Gyr ($`7`$%).
In conclusion, the theoretical TO – upper MS color location is also affected by uncertainties such that the absolute age, again, cannot be known to better than $`\pm `$ several billion years.
## 4 Location of the lower MS
The “double kink” shape of the low MS of GCs is due to the influence of the interior physics on the structure of low mass stars. The appearance of the FK is attributed mainly to the lowering of the adiabatic gradient when the $`H_2`$ dissociation begins to be present in the envelope (below $`\sim 5000`$K). The SK is associated with the reaching of degeneracy in the interior (D’Antona and Mazzitelli 1996). The shape of the low MS can be a powerful tool, first to constrain the models, and then to constrain the GC parameters.
As first shown by Baraffe et al. 1995, when the formation of molecules begins to be important in the stellar atmosphere (at $`T_{\mathrm{eff}}\stackrel{<}{\sim }5000`$K) the grey atmospheric integration fails to give a good description for the boundary conditions: in summary, it underestimates the opacities and does not account for the opacity distribution with wavelength. The net effect is that the grey integration provides much larger pressure and density at the bottom of the atmosphere. In the interior, the temperature gradient is the adiabatic gradient, so that finally the same central conditions give a larger $`T_{\mathrm{eff}}`$. Figure 5 shows in fact that, for the same chemistry, non–grey models are redder by $`\sim 0.06`$mag with respect to grey models. This also implies that the metallicity, and probably also the element to element ratios, are important to determine the location of the FK. This is certainly a powerful tool, but also makes the region below the FK very dependent on the model inputs.
The EoS adopted for these low mass models is also an important ingredient. It determines the slope of the region between the two kinks and, together with the atmospheric integration, it influences the mass luminosity relation, which is the most important input for the interpretation of the luminosity functions of the MS in terms of mass function. There are still substantial uncertainties close to the bottom of the main sequence (see Montalban et al. 1999).
Figure 5 summarizes the uncertainties in the low MS location and part of the uncertainties in the upper MS and TO locations. We see that there is a region between $`M_v=6`$ and 7 at which the uncertainties in color transformations, convection, diffusion, and boundary conditions seem to play almost no role. This region, then, is the best one to use as a distance indicator for the MS.
## 5 Consistency of HB and low MS distance indicators
The new HST data, which have greatly extended our knowledge of the low luminosity part of the HR diagram, pose an interesting problem: is there consistency between the traditional “optical” distance indicators for GCs and the location and shape of the low MS?
We can check this idea with the following procedure: first we fit the optical data to the RR Lyrae (or HB) to derive a distance, and check the reddening by controlling the MS location at $`M_v\simeq 6`$–7; then we adopt the same reddening and distance modulus for the MS HST data. This allows us to see if the MS and first kink location are consistently reproduced.
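In practice the check only involves shifting the model sequences into the apparent plane with a single distance modulus and reddening for both data sets. A schematic of the bookkeeping follows; the extinction-law values $`R_V\simeq 3.1`$ and $`E(V-I)\simeq 1.3E(B-V)`$, and the numbers in the example call, are standard or arbitrary illustrative choices, not values taken from this paper:

```python
def to_apparent_plane(abs_v, color_vi0, dist_mod, ebv, r_v=3.1, e_vi_per_ebv=1.3):
    """Shift a model point (M_V, intrinsic V-I) into the observed plane for a given
    true distance modulus and reddening E(B-V); R_V and E(V-I)/E(B-V) are the usual
    approximate extinction-law values."""
    a_v = r_v * ebv
    return abs_v + dist_mod + a_v, color_vi0 + e_vi_per_ebv * ebv

# The same (dist_mod, ebv) pair fixed by the HB fit to the ground-based photometry is
# then applied, unchanged, to the low-MS models compared with the HST data.
print(to_apparent_plane(abs_v=6.5, color_vi0=1.0, dist_mod=14.6, ebv=0.04))
```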
We show in the figures 6 and 7 the check on the low metallicity (\[M/H\]=-2) clusters M92 and M30, finding excellent consistency. The comparisons are equally good in the HST color bands (F555 and F814) and in the transformed Johnson–Cousins bands $`V`$ and $`I`$ (in fact these HST bands and the standard magnitudes are only marginally different). Thus, on the one side the location of the FK provides a check of the distance, on the other side we gain confidence in the good quality of the low mass models including non grey boundary conditions.
Here again we must admit that this result is not unique: if we assume for M30 a short distance modulus, say $`(m-M)_0=14.5`$, the general agreement of the sequences would still be reasonable (and the age would increase to $`\sim 16`$Gyr). Of course our HB models would not fit the HB luminosities of the cluster, but we have agreed that even a small change in the input physics may lead to less luminous HBs<sup>7</sup><sup>7</sup>7Some further indications on the choice of the “best” distance can come from the comparison of the observed and theoretical luminosity functions: D’Antona (1998) and Silvestri et al. (1998) show that the non monotonic mass functions derived, e.g. by Piotto et al. 1997, and by others, were mostly due to the use of too short a distance modulus for the examined clusters..
## 6 The distance through the fit to the lowest MS
This approach has been applied by Reid and Gizis (1998) to the only cluster for which the lowest main sequence, the post-SK region, is known, namely NGC 6397 (figure 1). This part of the HR diagram is populated by stars which are close to degeneracy in the interior, so they are close to the minimum mass which can ignite hydrogen. The mass–luminosity relation here is very steep, that is, the stars have practically all the same mass, and the HR diagram location follows a constant radius line. In addition, the degeneracy radius is not so much dependent on the details of the structure, and this locus is very similar for all GC metallicities. Thus, although the models are not well understood in detail here (Montalban et al. 1999), if we have a reasonable sample of M subdwarfs with known distances and accurate colors, we can fit the GC sequence to the nearby sequence and get the distance modulus. The example of the fit by Reid and Gizis (1998) is very interesting, and provides for NGC 6397 a not unreasonable modulus $`(m-M)_0=12.13\pm 0.15`$mag, again consistent with the long distance moduli and short ages. We need a better definition of the lowest MS through a larger sample of M subdwarfs, and more GCs explored down to the post-SK region, to make this distance indicator more useful.
## 7 White dwarfs
There are by now three well defined WD sequences for GCs: NGC6752 (Renzini et al. 1996), NGC6397 (King et al. 1998, see figure 1) and M4 (Richer et al. 1997). For the latter two clusters, the location of the WD sequence is consistent with the cooling track for $`M\simeq 0.5M_{\odot }`$ by Wood (1995) models transformed into the observed magnitudes by means of Bergeron et al. (1995) model atmospheres (see Richer et al. 1997). The errors on the reddening of the clusters and on the model observational colors are still such that we cannot quantify this general statement further. The data of NGC6752 have been compared by Bragaglia et al. with a “standard” sequence of field WDs spectroscopically determined to be of mass $`\sim 0.5M_{\odot }`$ (from the list of Bragaglia et al. 1995). They obtain a “short” distance modulus, and an age of $`\sim 15`$Gyr (even assuming a large $`\alpha `$–enhancement). This result is then marginally discrepant with the others we have quoted so far.
On the other hand, from Wood (1995) models we see that $`\mathrm{\Delta }M_v/\mathrm{\Delta }M_{wd}\simeq 2.5`$. In other words, a small uncertainty (by 0.1$`M_{\odot }`$) in the mass determination of the field WD sample produces a noticeable difference (by 0.25mag) in the WD sequence location, which results in an age difference close to 4Gyr. Do we know the spectroscopic masses within 0.1$`M_{\odot }`$? We know that the spectroscopic masses of helium atmosphere WDs are highly uncertain, and some uncertainty surely weighs also on the DA type WDs. It is also possible that the field WDs differ from the GC WDs in other ways: the environment in which they are born is very different and may imply, e.g., different accretion rates, much larger in the disk than in the GC, which is devoid of gas and dust. Although the sedimentation of metals is very efficient in these high gravity stars, it is well possible that some residual effect from accretion affects the stellar radius.
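The sensitivity quoted above propagates directly into the inferred age; a two-line check using only the numbers in this section, plus the illustrative TO–age slope from §2, is:

```python
slope_wd = 2.5      # Delta M_V per solar mass along the WD sequence (Wood 1995 models, as quoted)
delta_mass = 0.1    # plausible systematic error in the field-WD spectroscopic masses [M_sun]
delta_mu = slope_wd * delta_mass
print(f"implied shift of the WD-based distance modulus: {delta_mu:.2f} mag")   # 0.25 mag
# With the ~2.7 mag/dex TO-age slope used above, 0.25 mag maps into ~0.1 dex in age,
# i.e. a few Gyr -- the same order as the ~4 Gyr quoted in the text.
```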
## 8 Summary
I conclude with the following short summary:
– Morphological fits including the RGB are meaningless in terms of age determination;
– The traditional best theoretical distance scale still mostly relies on HB models, and on the MS colors at $`M_v\simeq 6`$–7.
– the small versus large ages only require a difference of $`0.25`$mag of distance modulus;
– The uncertainties in the HB, TO, MS and WD sequence locations are all of the same order of magnitude, namely $`0.25`$mag;
– then we can make no firm choice between the 10-12 and 14-16Gyr ages;
– however, most recent theoretical and observational results, including the new constraints on distance from the very low luminosity stars, all point towards the smaller range of ages.
## Acknowledgements
I warmly thank the organizers of the Colloquium, and especially Arlette Noels, first for inviting me, and second for holding a perfect scientific and logistic meeting.
I also thank my coworkers and friends Vittoria Caloi and Italo Mazzitelli, together with Josefina Montalban, Paolo Ventura, and Fabio Silvestri for always useful exchange of ideas and for the work which has made this review possible. I acknowledge all the useful data and lively information from F. Allard, A. Cool, I. King, G. Piotto, H. Richer and M. Salaris.
## References
> Andreuzzi et al. 1999, submitted to A&A
>
> Bahcall, J.N., & Pinsonneault, M. 1995, Rev. Mod. Phys. 67, 781
>
> Baraffe I., Chabrier G., Allard F., Hauschildt P., 1995 ApJ 446, L35
>
> Baraffe I., Chabrier G., Allard F., Hauschildt P., 1997 A&A 327, 1054 (BCAH97)
>
> Bergeron, P. Wesemael, F. & Beauchamp, A. 1995, PASP 107, 1047
>
> Basu, S., Christensen Dalsgaard, J., Schon, J., Thompson, M.J. 1997, ApJ 460, 164
>
> Bergbusch, P.A., & VandenBerg, D.A. 1992, ApJS 81, 163
>
> Boesgaard, A.M. et al. 1998, ApJ 493, 206
>
> Bragaglia, A., Renzini, A., Bergeron, P. 1995, ApJ 443, 735
>
> Buonanno, R., Corsi, C. & Fusi Pecci, F. 1985 A&A 145, 97
>
> Buser, R., Kurucz, R. 1978, A&A 70, 555
>
> Caloi, V., D’Antona, F., & Mazzitelli, I. 1997, A&A 320, 823
>
> Canuto, V.M., & Mazzitelli, I. 1991, ApJ 370, 295
>
> Cassisi, S., Castellani, V., Degl’Innocenti, S., Weiss, A. 1998, A&AS 129, 267
>
> Cassisi, S., Castellani, V., Degl’Innocenti, S., Salaris, M., Weiss, A. 1999, A&AS 134, 103
>
> Castelli, F. 1998, in “Views on distance indicators”, Mem.S.A.It. 69, 165
>
> Chaboyer, B. 1995, ApJ 444, L9
>
> Chaboyer, B., Deliyannis, C.P., Demarque, P., Pinsonneault, M.H., Sarajedini, A. 1992, ApJ 388, 372
>
> Chaboyer, B. & Kim, Y.–C. 1995, ApJ 454, 767
>
> Chaboyer, B., Demarque, P., Kernan, P.J., Krauss, L.M. 1996, Science 271, 957
>
> Chaboyer, B., Demarque,P., Kernan, P.J., Krauss, L.M. 1998, ApJ 494, 96
>
> Chieffi, A., Straniero, O., Salaris, M. 1995, ApJL 445, L39
>
> Cool, A. M. 1997, “Binary Stars Below the Turnoff in Globular Cluster Color–Magnitude Diagrams” in Advances in Stellar Evolution, eds. R. T. Rood and A. Renzini (Cambridge: Cambridge U. Press), p. 191
>
> D’Antona F., 1995, in The Bottom of the Main Sequence – and Beyond, ESO Astrophysics Symposia (Springer), Ed. C. Tinney, p. 13
>
> D’Antona F. 1998, in “The Stellar Initial Mass Function”, ed. G.Gilmore & D. Howell ASP Conference Series, (San Francisco: ASP), vol. 142, p. 157
>
> D’Antona F., Caloi V., Mazzitelli I., 1997, ApJ 477, 519
>
> D’Antona F., Mazzitelli I., 1996, ApJ 456, 329
>
> Deliyannis, C. & Demarque, P. 1991, ApJ 379, 216
>
> Feast, M.W. & Catchpole, R.M. 1997, MNRAS 286, L1
>
> Fernley, J., Barnes, T.G., Skillen, I., Hawley, S.L., Hanley, C.J., Evans, D.W., Solano, E., Garrido, R. 1998, A&A 330, 515
>
> Freytag, B. & Salaris, M. 1999, ApJL 513, L49
>
> Gratton R.G., Fusi Pecci F., Carretta E., Clementini G., Corsi C.E., Lattanzi M., 1997, ApJ 491, 749
>
> Groenewegen, M.A.T. & Salaris, M. 1999, A&A 348, L33
>
> Kaluzny, J. 1997, A&AS 122,1
>
> King I.R., Anderson J., Cool A.M., Piotto G., 1998, ApJ 492, L37
>
> Kupka, F., Schmidt, M. & D’Antona, F. 1999, in preparation
>
> Kurucz, R.L. 1991, in: Stellar Atmospheres: Beyond the Classical Models, L. Crivellari, I. Hubeny, D. G. Hummer eds., NATO ASI Series (Dordrecht: Kluwer), p. 441
>
> Kurucz, R.L. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid (Kurucz CD-ROM No 13)
>
> Layden, A.C., Hanson, R.B., Hawley, S.L., Klemola, A.R., Hanley, C.J. 1996, AJ 112, 2110
>
> Ludwig, H.-G., Freytag, B. & Steffen, M. 1999, A&A 346, 111
>
> Mazzitelli, I., D’Antona, F. & Caloi, V. 1995, A&A 302, 382 (MDC)
>
> Montalban, J., D’Antona, F., Mazzitelli, I. 1999, submitted to A&A
>
> Piotto et al. 1990, ApJ 350, 662
>
> Piotto G., Cool A. & King I.R., 1997, AJ 113, 1345
>
> Pont, F., Mayor, M., Turon, C. & Vandenberg, D. A. 1998, A&A 329, 87
>
> Proffitt, C.R. & Michaud, G. 1991, ApJ 371, 584
>
> Reid I.N., 1997, AJ 114, 161
>
> Reid I.N., & Gizis, J.E. 1998, AJ 116, 2929
>
> Richer, H. et al. 1997, ApJ 484, 741
>
> Rogers F.J., Swenson F.J., Iglesias C.A., 1996, ApJ 456, 902
>
> Salaris, M., Chieffi, A., and Straniero, O. 1993, ApJ 414, 580
>
> Salaris, M., Degl’Innocenti, S., and Weiss, A. 1997, ApJ 479, 665
>
> Salaris, M., Weiss, A. 1997, A&A 327, 107
>
> Salaris, M., Weiss, A. 1998, A&A 335, 943
>
> Sandage, A., 1993 AJ 106, 703
>
> Silvestri F., Ventura, P., D’Antona F., Mazzitelli I., 1998, ApJ 509, 192
>
> Spite, F. & Spite, M. 1982, A&A 115, 357
>
> Stetson, P.B. & Harris, W.E. 1988, AJ 96, 909
>
> Straniero, O., Chieffi, A., Limongi, M. 1997, ApJ 490, 425
>
> VandenBerg, D.A. 1983, ApJS 51, 29
>
> VandenBerg, D.A. 1992, ApJ 391, 685.
>
> VandenBerg, D.A., & Bell, R.A. 1985, ApJS 58, 561
>
> VandenBerg, D.A., Bolte M., & Stetson, P.B. 1990, AJ 100, 445
>
> VandenBerg, D.A., Stetson, P.B. & Bolte M. 1996, ARAA 34, 461
>
> Walker, A.R. 1992, ApJ 300, L81
>
> Wood, M.A. 1995, in “White Dwarfs”, ed. D. Koester & K.Werner (Berlin:Springer), p.41
# On the migration of a system of protoplanets
## 1 Introduction
It is generally assumed that planetary systems form in a differentially rotating gaseous disc. In the late stages of their formation the protoplanets are still embedded in the protostellar disc and their orbital evolution is coupled to that of the disc. Gravitational interaction between the planets and the gaseous disc has basically two effects:
a) The torques by the planets acting on the disc tend to push away material from the orbital radius of the planet, and for sufficiently massive planets a gap is formed in the disc (Papaloizou & Lin 1984; Lin & Papaloizou 1993). The dynamical process of gap formation has been studied through time dependent hydrodynamical simulations for planets on circular orbits by Bryden et al. (1999) and Kley (1999, henceforth paper I). The results indicate that even after the formation of a gap, the planet may still accrete material from the disc and reach about 10 Jupiter masses ($`M_{Jup}`$). For very low disc viscosity and larger planetary masses the mass accumulation finally terminates (Bryden et al. 1999).
b) Additionally, the gravitational force exerted by the disc alters the orbital parameters (semi-major axis, eccentricity) of the planet. These forces typically induce some inward migration of the planet (Goldreich & Tremaine 1980) which is coupled to the viscous evolution of the disc (Lin & Papaloizou 1986). Hence, the present location of the observed planets (solar and extrasolar) may not be identical with the position at which they formed.
In particular, the migration scenario applies to some of the extra-solar planets (for a summary of their properties see Marcy, Cochran & Mayor 1999), the 51 Peg-type planets. They all have masses of the order $`M_{Jup}`$, and orbit their central stars very closely, having orbital periods of only a few days. As massive planets, according to standard theory, have formed at a few AU distance from their stars, these planets must have migrated to their present position. The inward migration was eventually halted by tidal interaction with the star or through interaction with the stellar magnetosphere (Lin, Bodenheimer & Richardson 1996). The only extrasolar planetary system known so far ($`\upsilon `$ And) consists of one planet at 0.059 AU on a nearly circular orbit and two planets at 0.83 and 2.5 AU having larger eccentricities (0.18 and 0.41) (Butler et al. 1999).
In the case of the solar system, the question arises of what prevented any further inward migration of Jupiter. As the net tidal torque on the planet is a delicate balance between the torque of the material located outside of the planet and that of the material inside (e.g. Ward 1997), any perturbation in the density distribution may change this balance. In this letter we consider the effect that an additional planet in the disc has on the migration rate.
We present the results of numerical calculations of a thin, non-self gravitating, viscous disc with two embedded protoplanets. Initially the planets with one $`M_{Jup}`$ each are on circular orbits at $`a=1a_{Jup}`$ and 2 $`a_{Jup}`$, respectively. In contrast to the existing time-dependent models (Bryden et al. 1999; paper I) we take into account the back-reaction of the gravitational force of the disc on the orbital elements of the planets and star. The models are run for about 3000 orbital periods of the inner planet corresponding to 32,000 years. In Section 2 a description of the model is given, the results are presented in Section 3 and our conclusions are given in Section 4.
## 2 The Model
We consider a non-self-gravitating, thin accretion disc model for the protoplanetary disc located in the $`z=0`$ plane and rotating around the $`z`$-axis. Its evolution is described by the two-dimensional ($`r`$–$`\phi `$) Navier-Stokes equations, which are given in detail in Kley (1999, paper I). The motion of the disc takes place in the gravitational field of the central star with mass $`M_s`$ and the two embedded protoplanets with masses $`m_1`$ and $`m_2`$. In contrast to paper I we use here a non-rotating frame, as both planets have to be moved through the grid. The gravitational potential is then given by
$$\mathrm{\Phi }=-\frac{GM_s}{|𝐫-𝐫_s|}-\frac{Gm_1}{\left[(𝐫-𝐫_1)^2+s_1^2\right]^{1/2}}-\frac{Gm_2}{\left[(𝐫-𝐫_2)^2+s_2^2\right]^{1/2}}$$
(1)
where $`G`$ is the gravitational constant and $`𝐫_s`$, $`𝐫_1`$, and $`𝐫_2`$ are the radius vectors to the star and the two planets, respectively. The quantities $`s_1`$ and $`s_2`$ are smoothing lengths which are 1/5 of the corresponding sizes of the Roche lobes. This smoothing of the potential allows the motion of the planets through the computational grid.
The motion of the star and the planets is determined firstly by their mutual gravitational interaction and secondly by the gravitational forces exerted on them by the disc. The acceleration of the star $`𝐚_s`$ is given for example by
$$𝐚_s=-Gm_1\frac{𝐫_s-𝐫_1}{|𝐫_s-𝐫_1|^3}-Gm_2\frac{𝐫_s-𝐫_2}{|𝐫_s-𝐫_2|^3}-G\int_{Disc}\mathrm{\Sigma }\,\frac{𝐫_s-𝐫}{|𝐫_s-𝐫|^3}\,dA$$
(2)
where the integration is over the whole disc surface, and $`\mathrm{\Sigma }`$ denotes the surface density of the disc. The expressions for the two planets follow similarly. We work here in an accelerated coordinate frame where the origin is located in the centre of the (moving) star. Thus, in addition to the gravitational potential (1), the disc and planets feel the additional acceleration $`-𝐚_s`$.
The mass accreted from the disc by the planets (see below) has some net angular momentum which in principle also changes the orbital parameters of the planets. However, this contribution is typically about an order of magnitude smaller than the tidal torque (Lin et al. 1999) and is neglected here.
As the details of the origin and magnitude of the viscosity in discs are still uncertain, we assume a Reynolds-stress formulation (paper I) with a constant kinematic viscosity. The temperature distribution of the disc is fixed throughout the computation and is given by the assumption of a constant ratio of the vertical thickness $`H`$ to the radius $`r`$. Hence, the fixed temperature profile is given by $`T(r)\propto r^{-1}`$. We assume $`H/r=0.05`$, which is equivalent to a fixed Mach number of 20.
For numerical convenience we introduce dimensionless units, in which the unit of length, $`R_0`$, is given by the initial distance of the first planet to the star, $`R_0=r_1(t=0)=1a_{Jup}`$. The unit of time is obtained from the (initial) orbital angular frequency $`\mathrm{\Omega }_1`$ of the first planet, i.e. the orbital period of planet 1 is given by
$$P_1=2\pi t_0.$$
(3)
The evolutionary time of the results of the calculations as given below will usually be stated in units of $`P_1`$. The unit of velocity is then given by $`v_0=R_0/t_0`$. The unit of the kinematic viscosity coefficient is given by $`\nu _0=R_0v_0`$. Here we take a typical dimensionless value of $`10^{-5}`$, corresponding to an effective $`\alpha `$ of $`4\times 10^{-3}`$.
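As a cross-check of these numbers, the short sketch below converts the dimensionless viscosity into physical units and recovers the quoted effective $`\alpha `$ for $`H/r=0.05`$. It assumes a 1 solar-mass central star and $`a_{Jup}=5.2`$ AU, which are not stated explicitly above.

```python
import numpy as np

# Assumed physical scales (not given explicitly in the text):
# a 1 M_sun central star and R_0 = a_Jup = 5.2 AU.
G    = 6.674e-8                 # cgs
Msun = 1.989e33                 # g
AU   = 1.496e13                 # cm
R0   = 5.2 * AU                 # unit of length
t0   = 1.0 / np.sqrt(G * Msun / R0**3)   # 1/Omega_1, unit of time
P1   = 2 * np.pi * t0                    # orbital period of planet 1
v0   = R0 / t0                           # unit of velocity
nu0  = R0 * v0                           # unit of kinematic viscosity

nu_code = 1e-5                           # dimensionless viscosity used in the paper
nu_cgs  = nu_code * nu0
print("P1  = %.2f yr" % (P1 / 3.15e7))
print("nu  = %.2e cm^2/s" % nu_cgs)

# Effective alpha at r = R0 for an alpha-viscosity nu = alpha * c_s * H
h     = 0.05                             # H/r, fixed in the model
cs    = h * v0                           # c_s = H * Omega at r = R0
H     = h * R0
alpha = nu_cgs / (cs * H)
print("alpha = %.1e" % alpha)            # ~4e-3, as quoted in the text
```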
### 2.1 The numerical method in brief
The normalized equations of motion are solved using an Eulerian finite difference scheme, where the computational domain $`[r_{min},r_{max}]\times [\phi _{min},\phi _{max}]`$ is subdivided into $`N_r\times N_\phi `$ grid cells. For the typical runs we use $`N_r=128,N_\phi =128`$, where the azimuthal spacing is equidistant, and the radial points have a closer spacing near the inner radius. The numerical method is based on a spatially second order accurate upwind scheme (monotonic transport), which uses a formally first order time-stepping procedure. The methodology of the finite difference method for disc calculations is outlined in Kley (1989) and paper I.
The N-body module of the programme uses a fourth order Runge-Kutta method for the integration of the equations of motion. It has been tested for long term integrations using the onset of instability in the 3-body problem consisting of two closely spaced planets orbiting a star, as described by Gladman (1993). For the initial parameters used here, the error in the total energy after $`1.2\times 10^5`$ orbits (integration over $`10^6`$ yrs) is less than $`2\times 10^{-9}`$.
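A minimal sketch of such a fixed-step fourth order Runge-Kutta integrator for a star with two planets is given below. It is only an illustration of the scheme, not the code actually used: the star is held fixed at the origin (the paper instead works in a frame centred on the moving star with the corresponding indirect terms), units with $`G=M_s=1`$ are assumed, and $`M_{Jup}/M_s\approx 9.55\times 10^{-4}`$ is an assumed value.

```python
import numpy as np

GM = 1.0                            # G * M_star in code units
m  = np.array([9.55e-4, 9.55e-4])   # two 1 M_Jup planets (assumed mass ratio)

def accel(pos):
    """Accelerations of the two planets: fixed star at the origin plus mutual gravity."""
    a = np.zeros_like(pos)
    for i in range(2):
        a[i] -= GM * pos[i] / np.linalg.norm(pos[i])**3      # star
        j = 1 - i
        d = pos[i] - pos[j]
        a[i] -= m[j] * d / np.linalg.norm(d)**3               # other planet
    return a

def deriv(y):
    pos, vel = y[:2], y[2:]
    return np.concatenate([vel, accel(pos)])

def rk4_step(y, dt):
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# planets at r = 1 and r = 2, in opposition, on circular Keplerian orbits
pos = np.array([[-1.0, 0.0], [2.0, 0.0]])
vel = np.array([[0.0, -1.0], [0.0, np.sqrt(GM / 2.0)]])
y = np.array([*pos, *vel], dtype=float)

dt = 2 * np.pi * 6.8e-4             # comparable to the hydro time step quoted below
for _ in range(int(10 * 2 * np.pi / dt)):    # ~10 orbits of the inner planet
    y = rk4_step(y, dt)
print(y[:2])                        # final planet positions
```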
### 2.2 Boundary and initial conditions
To cover the range of influence of the planet on the disc fully, we typically choose for the radial extent (in dimensionless units, where planet 1 is located initially at $`r=1`$) $`r_{min}=0.25,r_{max}=4.0`$. The azimuthal range covers a complete ring $`\phi _{min}=0.0,\phi _{max}=2\pi `$ using periodic boundary conditions. To test the accuracy of the migration, a comparison calculation with $`r_{max}=8.0`$ and higher resolution $`N_r=256,N_\phi =256`$ was also performed.
The outer radial boundary is closed to any mass flow, $`v(r_{\mathrm{max}})=0`$, while at the inner boundary mass outflow is allowed, emulating accretion onto the central star. At the inner and outer boundary the angular velocity is set to the value of the unperturbed Keplerian disc. Initially, the matter in the domain is distributed axisymmetrically with a radial surface density profile $`\mathrm{\Sigma }\propto r^{-1/2}`$.
Two planets, each with an initial mass of $`1M_{Jup}`$, are located at $`r_1=1.0,\phi _1=\pi `$ and $`r_2=2.0,\phi _2=0`$. Thus, they are not only spaced in radius but are positioned (in $`\phi `$) in opposition to each other to minimize the initial disturbance. The radial velocity $`v`$ is set to zero, and the angular velocity is set to the Keplerian value of the unperturbed disc.
Around the planets we then introduce an initial density reduction whose approximate extension is obtained from their masses and the magnitude of the viscosity. This initial lowering of the density is assumed to be axisymmetric; the radial profile $`\mathrm{\Sigma }(r)`$ of the initial distribution is displayed in Fig. 1 (solid line). The total mass in the disc depends on the physical extent of the computational domain. Here we assume a total disc mass within $`r_{min}=0.25`$ and $`r_{max}=4.00`$ of 0.01 $`\mathrm{M}_{\odot }`$. The starting model is then evolved in time and the accretion rates onto the planets are monitored, where a given fraction of the mass inside the Roche lobe of each planet is assumed to accrete onto the planet at each time step, is taken out of the computation and is added to the mass of the planet. The Courant condition yields a time step of $`6.8\times 10^{-4}P_1`$.
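For orientation, the normalisation of the initial surface density can be estimated from the profile and the total disc mass; the sketch below ignores the initial density reduction around the planets, so it is only an approximate upper bound on $`\mathrm{\Sigma }_0`$, not the exact initial condition of Fig. 1.

```python
import numpy as np
from scipy.integrate import quad

r_min, r_max = 0.25, 4.0
M_disc = 0.01                      # total disc mass in units of the (solar-mass) star

# Sigma(r) = Sigma_0 * r**-0.5; integrate 2*pi*r*Sigma(r) dr over the domain
integral, _ = quad(lambda r: 2 * np.pi * r * r**-0.5, r_min, r_max)
sigma_0 = M_disc / integral
print("Sigma_0 ~ %.2e (code units of M_star / R_0^2)" % sigma_0)   # ~3e-4
```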
## 3 Results
Starting from the initial configuration (Fig. 1) the planets exert torques on the adjacent disc material which tend to push mass away from the location of the planets. At the same time the planets continuously accrete mass from their surroundings, and the mass contained initially between the two planets is quickly accreted by the planets and added to their masses. Finally one large gap remains in the region between $`r=1`$ and $`r=2`$ (Fig. 1).
Similarly to the one-planet calculations described in Bryden et al. (1999) and in paper I, each of the two planets creates a spiral wave pattern (trailing shocks) in the density of the disc. In the case of one disturber on a circular orbit the pattern is stationary in the frame co-rotating with the planet. The presence of a second planet makes the spirals non-stationary, as is seen in the snapshots after 50, 100, 250 and 500 orbits of the inner planet that are displayed in Fig. 2. Near the outer boundary at $`r=4`$ reflections of the spiral waves are visible. Using the higher resolution model with $`r_{max}=8.0`$ (Section 2.2) we tested whether the numerical resolution or the wave reflection at $`r_{max}`$ has any influence on the calculation of the net torques acting on the planets and the accretion rates onto them. Due to limited computational resources the higher resolution model was run only for a few hundred orbital periods; the largest difference ($`2.5\%`$) occurred in the mass $`m_2`$ of the outer planet. The difference in radial position (migration) is less than $`1\%`$. We may conclude that our resolution was chosen sufficiently fine and that the reflections at the outer boundary $`r_{max}=4`$ do not change our conclusions significantly.
In previous calculations (paper I) the equilibrium mass accretion rate from the outer part of the disc onto a one Jupiter mass planet, for the same viscosity ($`\nu =10^{-5}`$) and distance from the star, was found to be $`4.35\times 10^{-5}M_{Jup}/yr`$ for a fully developed gap. Here the accretion rate onto the planets is much higher in the beginning ($`5\times 10^{-4}M_{Jup}/yr`$) as the initial gap was not as cleared. Thus, during this gap clearing process, the masses of the individual planets grow rapidly at the onset of the calculations (Fig. 3). At $`t\simeq 250`$ the mass within the gap has been exhausted (see also Fig. 1) and the accretion rates $`\dot{M}`$ onto the planets drop dramatically. At later times, after several hundred orbits ($`P_1`$), they settle to nearly constant values of about $`10^{-5}M_{Jup}/yr`$ for the outer planet, and $`2.2\times 10^{-6}M_{Jup}/yr`$ for the inner planet (from Fig. 3). Since the mass inside of planet 1 has left the computational domain and the initial mass between the two planets has been consumed by the two planets, this mass accretion rate onto planet 1 at later times represents the mass flow of material coming from radii larger than $`r_2`$ (beyond the outer planet). It is the material which has been flowing across the gap of the outer planet. Previously, this mass flow across a gap has been calculated to be about one seventh of the mass accretion rate onto a planet (paper I), and the present results are entirely consistent with that estimate.
The gravitational torques exerted by the disc lead to an additional acceleration of the planets resulting in an expression similar to the acceleration of the star (Eq. 2). For one individual planet this force typically results in an inward migration of the planet on timescales of the order of the viscous time of the disc (Lin & Papaloizou 1986). Here this inward migration is seen clearly for the outer planet in Fig. 4 where the time evolution of the semi-major axis of the two planets is plotted.
The inner planet, on the other hand, initially (for $`t<200`$) moves slightly inwards, but then its semi-major axis increases and, showing no clear sign of migration anymore, settles to a constant mean value of $`1.02`$. However, the decrease of the semi-major axis of the outer planet reduces the orbital distance between the two planets. From three-body simulations (a star with two planets) and analytical considerations (Gladman 1993; Chambers, Wetherill & Boss 1996) it is known that when the orbital separation of two planets falls below the critical value $`\mathrm{\Delta }_{cr}=2\sqrt{3}R_H`$, where
$$R_H=\left(\frac{m_1+m_2}{3M_{\odot }}\right)^{1/3}\frac{a_1+a_2}{2}$$
(4)
is the mutual Hill radius of the planets, the orbits of the planets are no longer stable. In the calculations this effect is seen in the strong increase of the eccentricity of the inner planet. At $`t=2500`$ its eccentricity has grown to about $`e_1=0.1`$, while the eccentricity of the outer planet remains approximately constant at a level of $`e_2=0.03`$.
We should remark here that in the pure 3-body problem without any disc and with the same initial conditions ($`r_1=1.0,m_1=1.0;r_2=2.0,m_2=1.0`$) for the three objects, the semi-major axes of the planets stay constant, as this system is definitely Hill stable (Gladman 1993). However, if one takes as initial conditions for the pure 3-body system the parameters for the planets as obtained from the disc evolution at $`t=2500`$ ($`r_1=1.0,m_1=2.3;r_2=1.5,m_2=3.2`$), then the evolution becomes chaotic on timescales of hundreds of orbits and the eccentricity grows up to $`e=0.6`$ for both planets within $`4000`$ orbits.
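A quick numerical check of this stability criterion, assuming a 1 solar-mass central star and $`M_{Jup}/\mathrm{M}_{\odot }\approx 9.55\times 10^{-4}`$ (a standard value, not quoted in the text), reproduces the distinction between the two configurations discussed above:

```python
import numpy as np

MJUP = 9.55e-4          # Jupiter mass in units of a 1 M_sun central star (assumed)

def separation_vs_critical(a1, m1, a2, m2):
    """Mutual Hill radius R_H (Eq. 4) and critical separation 2*sqrt(3)*R_H."""
    r_hill = ((m1 + m2) / 3.0) ** (1.0 / 3.0) * (a1 + a2) / 2.0
    return a2 - a1, 2.0 * np.sqrt(3.0) * r_hill

# initial configuration: 1 M_Jup planets at a = 1 and 2
print(separation_vs_critical(1.0, 1.0 * MJUP, 2.0, 1.0 * MJUP))
# -> separation 1.0 vs Delta_cr ~0.45 : Hill stable

# configuration reached at t = 2500: (a, m) = (1.0, 2.3 M_Jup) and (1.5, 3.2 M_Jup)
print(separation_vs_critical(1.0, 2.3 * MJUP, 1.5, 3.2 * MJUP))
# -> separation 0.5 vs Delta_cr ~0.52 : below the critical value, hence unstable
```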
## 4 Conclusions
We have presented calculations of the long term evolution of two embedded planets in a protoplanetary disc that cover several thousand orbital periods. The planets were initially located at one and two $`a_{Jup}`$ from the central star with initial masses of $`1M_{Jup}`$ each. The gravitational interaction with the gaseous disc, having a total mass of $`0.01\mathrm{M}_{\odot }`$, leads to an inward migration of the outer planet, while the semi-major axis of the inner planet remains approximately constant and even increases slightly. At the same time, the ongoing accretion onto the planets increases their masses continuously until at the end of the simulation (at $`t=2500`$) the outer planet has reached a mass of about $`3.2M_{Jup}`$ and the inner planet of about $`2.3M_{Jup}`$.
This increase in mass and the decreasing distance between the planets eventually render the orbits unstable, resulting in a strong increase of the eccentricities on timescales of a few hundred orbits.
From the computations we may draw three major conclusions:
1) The inward migration of planets immersed in an accretion disc may be halted by the presence of additional protoplanets located for example at larger radii. They disturb the density distribution significantly which in turn reduces the net gravitational torque acting on the inner planet. Thus, the migration of the inner planet is halted, and its semi-major axis remains nearly constant.
2) When disc depletion occurs sufficiently rapidly to prevent a large inward migration of the outer planet(s), a planetary system with massive planets at distances of several AU remains. This scenario may explain why, for example, in the solar system the outer planets (in particular Jupiter) have not migrated any closer towards the Sun.
3) If the initial mass contained in the disc is sufficiently large then the inward migration of the outer planet(s) will continue until some of them reach very close spatial separations. This will lead to unstable orbits resulting in a strong increase of the eccentricities. Orbits may cross and planets may then be driven either to highly eccentric orbits or may leave the system altogether (see e.g. Weidenschilling & Marzari 1996). This may then explain the high eccentricities of some of the observed extrasolar planets, in particular the planetary system of $`\upsilon `$ Andromedae.
As planetary systems containing more than two planets display different stability properties (Chambers et al. 1996) it will be interesting to study the evolution of multiple embryos in the protoplanetary nebula.
## Acknowledgments
I would like to thank Dr F. Meyer for lively discussion on this topic. Computational resources of the Max-Planck Institute for Astronomy in Heidelberg were available for this project and are gratefully acknowledged. This work was supported by the Max-Planck-Gesellschaft, Grant No. 02160-361-TG74.
# Low Energy Behavior of the 7Be (𝑝,𝛾)⁸B reaction
## Abstract
Three related features of the astrophysical S-factor, $`S_{17}`$, at low energies are an upturn as the energy of the proton goes to zero, a pole at the proton separation energy and a long-ranged radial integral peaking at approximately 40 fm. These features, particularly the last, mean $`S_{17}`$ at threshold is largely determined by the asymptotic normalization of the proton bound-state wave function. In this paper we identify the pole contribution to the S-factor without any explicit consideration of the asymptotic behavior of the bound-state wave-function. This allows us to calculate $`S_{17}`$ in terms of purely short-range integrals involving the two-body potential. Much of the discussion is relevant to other radiative capture processes with weakly bound final states.
The decay of <sup>8</sup>B in the sun produces high-energy neutrinos that are observed by earth based detectors. In fact, water detectors are mainly sensitive to these neutrinos. The <sup>8</sup>B is produced in the <sup>7</sup>Be$`(p,\gamma )^8`$B reaction. In the sun this reaction is peaked for protons with an energy of about 18 keV. The reader is referred to Adelberger et al. for a review. Unfortunately the low cross-sections make it highly unlikely that earth based experiments will allow for measurements below 100 keV, and even measurements at that low an energy are very difficult. Thus the experimental results have to be extrapolated from above 100 keV to 18 keV in order to be useful. This extrapolation would not seem like much of a problem, except that $`S_{17}`$ has an upturn near threshold and that upturn does not play an important role in the region accessible to experiment. Thus the upturn has to be determined by purely theoretical considerations — a worrisome situation. In this paper we will present a very simple derivation of the pole contribution that causes the upturn. This new derivation does not explicitly rely on the asymptotic behavior of the bound-state wave-function. It does, however, show a remarkable and intimate connection between the short distance part of the wave function and the asymptotic normalization. Furthermore, in addition to increasing our confidence in our understanding of the low energy behavior of the $`S`$-factor, it may provide a useful computational technique.
The long range of the radial integrals needed for determining the <sup>7</sup>Be$`(p,\gamma )^8`$B S-factor at low energies makes the calculations more difficult than might be naively expected. For example, shell model calculations are rarely valid in the extreme tail region where the integrals are peaked. The long range of the integrals can also make the calculation numerically difficult or time consuming, even if the tail of the wave function is well determined.
On the other hand, the long range of the integrals can be turned into an advantage. Since most of the contribution at the low energies required in the solar calculations comes from the asymptotic region outside the range of the nuclear force, the low energy S-factor, $`S_{17}`$, is determined mainly by the asymptotic normalization of the bound-state wave-function and properties of Coulomb wave-functions. Thus $`S_{17}(0)`$ is given once the asymptotic normalization is known, without the need for additional calculations. From ref. one infers that $`S_{17}(0)=38A_n`$ where $`A_n`$ is the asymptotic normalization. This is accurate to about 1%.
As has been recently pointed out, ref. , the asymptotic normalization also determines the pole of the t-matrix for the elastic scattering of protons on <sup>7</sup>Be. Using this as a starting point we will derive expressions for the asymptotic normalization and the $`S`$-factor that depend only on the wave-function in the nuclear interior. We start by writing the t-matrix as:
$$\langle k|T|k\rangle =\langle k|V|k\rangle +\langle k|VGV|k\rangle $$
(1)
where $`|k\rangle `$ is a plane wave, $`T`$ is the t-matrix, $`V`$ is the potential and $`G`$ is the full (not free) Green’s function. Inserting the eigenstates, $`|\psi _i\rangle `$, of the full Hamiltonian as a complete set of states, we have:
$$\langle k|T|k\rangle =\langle k|V|k\rangle +\sum _{i=1}^{\mathrm{\infty }}\frac{\left|\langle k|V|\psi _i\rangle \right|^2}{E_k-E_i}$$
(2)
where the $`E_i`$ are the eigen-energies of the full Hamiltonian and $`E_k`$ is the energy corresponding to the plane wave $`|k\rangle `$. In the continuum the sum becomes an integral. The t-matrix, $`\langle k|T|k\rangle `$, has a pole at each of the bound-state energies. The residues of these poles can be identified from eq. 2. For the $`i`$-th state it is $`\left|\langle k|V|\psi _i\rangle \right|^2`$. Since the potential limits the radial integral to relatively small values of $`r`$, this result shows that the residue is determined purely by short-ranged parts of the wave function, seemingly in contradiction with the claim of ref. that the residue is determined by the asymptotic normalization of the bound-state wave function. This apparent contradiction can be overcome only if there is a relation between the short-range and long-range parts of the wave-function. This is indeed the case, as can be seen using the equation:
$$\langle k|V|\psi _i\rangle =\langle k|H-H_o|\psi _i\rangle =(E_i-E_k)\langle k|\psi _i\rangle $$
(3)
where $`H`$ is the full Hamiltonian and $`H_o`$ is the free Hamiltonian. In deriving the last expression on the right we have used the fact that $`|k\rangle `$ is an eigenstate of $`H_o`$. If we want $`V`$ to be just the nuclear potential, excluding the Coulomb potential, then the Coulomb potential must be included in $`H_o`$ and $`|k\rangle `$ becomes the Coulomb distorted wave. This is useful in restricting the range of the integral to the range of the nuclear potential. A technique similar to that leading to eq. 3 was used in ref. in the context of R-matrix theory.
Rewriting the last equation we have:
$$\langle k|\psi _i\rangle =\frac{\langle k|V|\psi _i\rangle }{(E_i-E_k)}$$
(4)
The right hand side of this equation has an explicit pole at $`E_i=E_k`$. On the left hand side the pole is not explicit but is due to the radial integral diverging at $`k=\pm i\kappa =\pm i\sqrt{2m|E_i|}`$. Near the pole the bound-state wave-function can be replaced by its asymptotic form for large $`r`$, $`A_n\mathrm{exp}[-\kappa r]/r`$, since the integral is dominated by large $`r`$. For simplicity we have ignored the Coulomb potential. With the asymptotic approximation for $`|\psi _i\rangle `$, $`\langle k|\psi _i\rangle `$ becomes:
$$\langle k|\psi _i\rangle \simeq \frac{1}{(2\pi )^{3/2}}\int \mathrm{exp}[-i\mathbf{k}\cdot \mathbf{r}]\,A_n\mathrm{exp}[-\kappa r]/r\,d^3r$$ (5)
$$\simeq \frac{2A_n}{(2\pi )^{1/2}(\kappa ^2+k^2)}=\frac{A_n}{m(2\pi )^{1/2}(E_i+E_k)},$$ (6)
where $`E_i`$ is the energy of the bound state and is negative. The integral in the first line can be carried out analytically and the leading contribution to the integral for $`k`$ near $`\pm i\kappa `$ is given in the second line. Thus, in this limit $`\langle k|\psi _i\rangle \propto \frac{A_n}{E_i-E_k}`$ and we have $`A_n=(2\pi )^{1/2}m\langle i\kappa |V|\psi _i\rangle `$. Near $`k=\pm i\kappa `$, $`E_k`$ is negative and, as noted above, for a bound state $`E_i`$ is also negative. Combining eq. 2 and eq. 3 we get:
$$\langle k|T|k\rangle =\langle k|V|k\rangle +\sum _{i=1}^{\mathrm{\infty }}(E_k-E_i)\left|\langle k|\psi _i\rangle \right|^2$$ (7)
$$\simeq \frac{1}{2\pi m^2}\left|\frac{A_n}{E_k-E_i}\right|^2$$ (8)
where the second line is valid near the pole. The residue of the pole in the t-matrix is now seen to be proportional to $`|A_n|^2`$ or the square of the absolute value of the asymptotic normalization. This is in agreement with ref. .
The key to the above result is Eq. 4, which is just the momentum-space bound-state Lippmann-Schwinger equation. It has the amazing property of relating short- and long-range properties of the wave-function: an integral over the interior of the nucleus to the asymptotic normalization. In fact, if we take $`|k\rangle `$ to be a Coulomb distorted wave function rather than a plane wave, then the $`V`$ in eq. 4 is just the nuclear potential and the range of the integral is the range of the nuclear potential. A similar technique was used in ref. for proton and neutron emission. Eq. 4 can easily be extended to the case of a two-body potential, which is needed for realistic calculations. Again the integrals will be short-ranged but will contain two-body matrix elements.
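The analytic step used in eqs. 5–6, namely that the Fourier transform of the asymptotic tail $`e^{-\kappa r}/r`$ is $`4\pi /(k^2+\kappa ^2)`$ and therefore carries the pole at $`k^2=-\kappa ^2`$, can be verified numerically. The following sketch (arbitrary units, $`A_n=1`$) is only an illustration of that step:

```python
import numpy as np
from scipy.integrate import quad

kappa = 1.0                     # inverse decay length of the bound-state tail

def fourier_tail(k):
    """3D Fourier transform of exp(-kappa*r)/r, reduced to a radial integral."""
    integrand = lambda r: 4.0 * np.pi * np.sin(k * r) / k * np.exp(-kappa * r)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for k in (0.3, 1.0, 3.0):
    print(k, fourier_tail(k), 4.0 * np.pi / (k**2 + kappa**2))
# the numerical integral reproduces 4*pi/(k^2 + kappa^2), which diverges
# as k^2 -> -kappa^2, i.e. at the bound-state energy
```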
As has been stressed by Mukhamedzhanov and collaborators, the asymptotic normalization is sufficient to determine $`S_{17}`$ near threshold. By explicitly identifying the pole, eq. 4 leads to an expression for the asymptotic normalization in terms of an integral over the interior of the nucleus. This may be useful in theoretically calculating the asymptotic normalization and hence $`S_{17}`$. For example, the shell model can give a good approximation to the wave function in the interior while it typically does not do a good job in describing the tail of the wave function. By construction the scattering wave in eq. 4 is an eigenstate of $`H_o`$ and thus independent of the nuclear potential. Thus we can evaluate $`\langle k|V|\psi _i\rangle `$ with $`|\psi _i\rangle `$ obtained from the shell model and use eq. 4 to determine the asymptotic normalization.
Eq. 4 is of use in determining $`S_{17}`$ only in the low energy region where the integrals are long ranged and the S-factor is determined mainly by the asymptotic normalization. It is, however, possible to derive an expression that is short-ranged and valid for all energies. This is done by identifying the singularity in the full expression for $`S_{17}`$. At low energies the reaction is predominantly E1 and we take the dipole approximation. Starting with the matrix element of the dipole operator and considering first the case of a single particle in a one-body potential, the matrix element can be written:
$$M=\langle \psi _i(k)|r|\psi _f\rangle $$
(9)
where $`\psi _i(k)`$ is the fully-distorted wave in the initial channel (contrast with eq, 4) and $`\psi _f`$ is the wave-function of the final-state bound proton. This matrix element has a second order pole as the energy of the scattering state goes to the unphysical energy corresponding to the bound state. To explicitly identify the singularity we proceed as follows:
$$M=\frac{\langle \psi _i(k)|Hr-rH|\psi _f\rangle }{E_k-E_f}=\frac{\langle \psi _i(k)|\frac{\mathbf{\nabla }}{m}|\psi _f\rangle }{E_k-E_f}$$
(10)
where $`E_k`$ and $`E_f`$ are the energies of the initial and final states respectively and we have assumed a local potential. Since $`V`$ is local it commutes with $`r`$. Thus $`Hr-rH=[H,r]`$ reduces to the commutator of $`r`$ with the kinetic energy. This gives the result shown above. Repeating this procedure of introducing the commutator with $`H`$ gives (again assuming a local potential):
$$M=\frac{\langle \psi _i(k)|H\frac{\mathbf{\nabla }}{m}-\frac{\mathbf{\nabla }}{m}H|\psi _f\rangle }{\left(E_k-E_f\right)^2}=\frac{\langle \psi _i(k)|\frac{\mathbf{\nabla }V}{m}|\psi _f\rangle }{\left(E_k-E_f\right)^2}$$
(11)
The second order pole is now explicitly shown. This equation differs from eq. 4 in having a higher power divergence and in using the fully interacting wave-function for both the initial and final states.
The integral in the matrix element on the right hand side of eq. 11 is restricted to the range of the potential. For a proton the potential has a $`1/r`$ tail from the Coulomb potential, so the integrand will fall off by a factor of $`r^3`$ faster than that in eq. 9. The integral in eq. 9 typically peaks at 40 fm with non-negligible contributions coming from beyond 100 fm. Repeating the S-factor calculations of ref. with eq. 11 indicates that the integrals can now be cut off at about 50–60 fm at threshold. While this is a definite improvement, the integrals still extend far into the tail region. As we will see shortly, when a two-body potential is used this changes dramatically.
The physical case, of course, does not involve a one-body potential but rather a two- or more-body interaction. For a many-body system the dipole interaction is $`\sum _{i=1}^Ae_ir_i`$ and for a local two-body interaction the potential is $`\frac{1}{2}\sum _{j,k}V(r_j-r_k)`$. Repeating the above procedure of introducing the second order commutator we have:
$$M=\frac{\langle \psi _i(k)|\sum _{j,k}\frac{e_j\mathbf{\nabla }_jV(r_j-r_k)}{m_j}|\psi _f\rangle }{\left(E_k-E_f\right)^2}$$
(12)
For a two-body interaction Newton’s third law (conservation of momentum) gives us $`\mathbf{\nabla }_jV(r_j-r_k)=-\mathbf{\nabla }_kV(r_k-r_j)`$. Thus if $`e_1/m_1=e_2/m_2`$, the $`j=1`$, $`k=2`$ term cancels the $`j=2`$, $`k=1`$ term (a short symbolic check of this cancellation is given below). Applying this result to all like pairs, the matrix element reduces to:
$$M=\frac{e_p}{m_p}\frac{\langle \psi _i(k)|\sum _{j,k}\mathbf{\nabla }_jV(r_j-r_k)|\psi _f\rangle }{\left(E_k-E_f\right)^2}$$
(13)
where $`j`$ is restricted to protons and $`k`$ to neutrons, $`e_p`$ is the charge of the proton and $`m_p`$ is its mass. Thus the matrix element depends only on the neutron-proton potential and not on the proton-proton potential.
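The pairwise cancellation invoked above can be checked symbolically. The sketch below uses a one-dimensional toy potential depending only on the relative coordinate; the specific Gaussian form is an arbitrary choice made only for illustration.

```python
import sympy as sp

x1, x2, q = sp.symbols('x1 x2 q')     # q = e/m, the common charge-to-mass ratio
V = sp.exp(-(x1 - x2)**2)             # any local potential V(x1 - x2)

# (j=1,k=2) term plus (j=2,k=1) term of the sum e_j/m_j * dV/dx_j
pair_sum = q * sp.diff(V, x1) + q * sp.diff(V, x2)
print(sp.simplify(pair_sum))          # -> 0: like-particle pairs drop out

# for a proton-neutron pair the charge-to-mass ratios differ, so the pair survives
qp, qn = sp.symbols('q_p q_n')
pn_sum = qp * sp.diff(V, x1) + qn * sp.diff(V, x2)
print(sp.simplify(pn_sum))            # -> proportional to (q_n - q_p), nonzero
```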
In one-body models (see Christy and Duck) a factor of $`\frac{e_p}{m_p}-\frac{e_7}{m_7}`$, where $`e_7`$ and $`m_7`$ are the charge and mass of the target nucleus, is included to take into account that the photon can couple either to the proton or to the nucleus. Ignoring the binding energy, this factor is just $`\frac{e_p}{m_p}\frac{N}{A}`$. The factor of $`e_p/m_p`$ is explicitly present in eq. 13. The factor of $`N/A`$ can be recovered if the sum over the neutrons can be replaced by $`N/A`$ times a sum over all nucleons. This would be possible if the neutron-proton and proton-proton potentials were the same and if the neutron and proton density distributions in the target nucleus were also the same. For the valence particle the sum, $`\sum _kV(r_j-r_k)`$, would in this case reduce to the mean-field potential for this particle, and we recover the one-body approximation.
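For <sup>7</sup>Be ($`Z=4`$, $`N=3`$, $`A=7`$), and approximating the nuclear mass as $`m_7\approx 7m_p`$ (an approximation made here only for illustration), the two forms of the factor quoted above agree:

$$\frac{e_p}{m_p}-\frac{e_7}{m_7}\approx \frac{e_p}{m_p}-\frac{4e_p}{7m_p}=\frac{e_p}{m_p}\left(1-\frac{4}{7}\right)=\frac{e_p}{m_p}\,\frac{3}{7}=\frac{e_p}{m_p}\,\frac{N}{A}.$$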
However, because of the Coulomb potential, the proton-proton and proton-neutron potentials are very different. In contrast to eq. 11, the potential in eq. 13 is short-ranged and does not have a Coulomb tail; the integrals will be restricted to the range of the nuclear potential. This difference suggests that one-body models cannot be used to accurately calculate the asymptotic normalization. They should, however, still give the correct energy dependence for low energies since they have the correct singularity structure and asymptotic $`r`$ dependence.
The cancellation of Coulomb potential contributions in eq. 13 is unique to dipole transitions. If $`e_i`$ were proportional to the particle mass, $`m_i`$, then the dipole operator would be just the center-of-mass coordinate, which cannot change the internal structure of the nucleus, and the total cross-section would be zero.
Eq. 13 may lead to significant gains in computing $`E_1`$ transitions since the range of the integral has decreased to less than 10 fm from over 100 fm. Part of the gain is lost since the new expressions involve two-body rather than one-body matrix elements. There is another less obvious drawback. For the <sup>7</sup>Be$`(p,\gamma )^8`$B reaction the scattering wave function has a node in the nuclear interior. This leads to large negative and positive contributions to the integrals in eqs. 11 and 13. There is a large cancellation between the negative and positive contributions. This will make the numerical calculation of these matrix elements less stable.
We now illustrate the points made above by using a single particle model. We take the potential to have a Woods-Saxon shape and include the Coulomb potential. In the direct calculation, i.e. using eq. 9, of the s-wave contribution to the S-factor, the upper bound on the integral must be 300 fm for 2% accuracy. The accuracy is determined by first using a large upper limit on the integral and then reducing it until a 2% error is observed. The calculation of the matrix element was then done using eq. 11. The results, as expected, agree within numerical errors with those from eq. 9. Using eq. 11 the upper limit can be reduced to 50 fm while still maintaining 2% accuracy. This is still quite a large radius, due to the long range nature of the Coulomb potential and the previously noted cancellation in the integral.
In calculations with a two-body potential the Coulomb contribution cancels, so we have used eq. 11 to carry out a calculation with only the nuclear potential included in $`V`$. In this case it is only necessary to integrate to 7 fm in order to have 2% accuracy. This is the range we would expect from the size of the nucleus. Dropping the Coulomb contribution from $`V`$ in eq. 11 also causes the integral to increase by more than a factor of 2. This sensitivity to the presence of the Coulomb potential in eq. 11 indicates that a single particle model cannot be used to determine the absolute value of the S-factor; or, at a minimum, the underlying two-body nature of the interaction must be explicitly taken into account in calculations that rely on single-particle wave-functions.
The proton distribution in <sup>7</sup>Be probably has a larger radius than the neutron distribution. This is due to the Coulomb potential and the presence of an extra proton. Taking this into account by reducing the potential radius in eq. 11 (remember that for a two-body potential only the neutron-proton potential comes in), we find extreme sensitivity of the S-factor matrix element to the exact radius used. Reducing the potential radius by just 0.12 fm causes the integral to vanish. In varying the radius of the potential we have kept the same wave-function, since the distortion is due to both the protons and neutrons. We note that in ref. the result was also strongly sensitive to the potential used. The sensitivity to theoretical input may not be just a shortcoming of the technique but may have a deeper origin, indicating that the asymptotic normalization may, in the end, have to be determined experimentally.
Eq. 13 was derived on the assumption that the two-body potential had a simple local form. Non-localities or charge exchange potentials will change this. Consider a general many-body Hamiltonian, $`H=-\sum _i\frac{\mathbf{\nabla }_i^2}{2m_i}+\frac{1}{2}\sum _{i,j}V_{i,j}`$. In eq. 10, we take the commutator of the Hamiltonian with the dipole operator. This will, in general, give two terms. One comes from the kinetic energy and is just $`\sum _i\frac{e_i\mathbf{\nabla }_i}{m_i}`$. The second contribution is from the commutator of the potential with the dipole operator. Since the second term is already short ranged, we take the second commutator only with the first term. This gives:
$$M=\frac{\langle \psi _i(k)|\sum _j\frac{e_j\mathbf{\nabla }_j}{m_j}|\psi _f\rangle }{E_k-E_f}+\frac{\langle \psi _i(k)|[V,\sum _je_jr_j]|\psi _f\rangle }{E_k-E_f}$$ (14)
$$=\frac{\langle \psi _i(k)|[V,\sum _j\frac{e_j\mathbf{\nabla }_j}{m_j}]|\psi _f\rangle }{\left(E_k-E_f\right)^2}+\frac{\langle \psi _i(k)|[V,\sum _je_jr_j]|\psi _f\rangle }{E_k-E_f}$$ (15)
This equation will be correct for any two-body potential. Since the Coulomb potential is local it will not contribute to the second commutator and, as we saw above, cancels in the first commutator. All the remaining contributions in either term will be of the same range as the nuclear potential. Thus we see that even in the more general case the relevant expression can be reduced to matrix elements of short ranged potentials.
Strictly speaking, neither eq. 4 nor eq. 15 proves that the matrix elements on the left hand sides have singularities, since the matrix elements on the right hand side may go to zero. For capture from the d-wave in the <sup>7</sup>Be$`(p,\gamma )^8`$B reaction the Coulomb wave function has a zero near the pole location, so the pole is effectively canceled and the d-wave contribution to the S-factor is almost linear in the low energy region. In other derivations of the existence of the pole such accidental zeros must also be checked for. With the exception of such special cases, the procedures outlined in this paper provide the cleanest and simplest method to show the existence of the pole in the S-factor and to isolate its residue.
To summarize, we have shown that the asymptotic normalization needed to calculate $`S_{17}`$ can be obtained from an integral over the nuclear interior. This relies on the intimate connection between the wave-function in the nuclear interior and the asymptotic normalization that follows directly from the bound-state Lippmann-Schwinger equation. The matrix element needed in the calculation of $`S_{17}`$ at any energy can also be reduced to a form that involves only integrals over the nuclear interior. These expressions make the singularity structure of the S-factor explicit and may simplify calculating $`S_{17}`$.
ACKNOWLEDGEMENT: The author thanks S. Gurvitz, S. Karataglidis, S. Scherer and H.W. Fearing for useful discussions. J. Escher is thanked for carefully reading the manuscript and for useful discussions. The Natural Sciences and Engineering Research Council is thanked for financial support.
## 1 Introduction
Scintillating crystal detectors have been widely used as electromagnetic calorimeters in high energy physics , as well as in medical and security imaging and in the oil-extraction industry. They have also been adopted for non-accelerator experiments, notably NaI(Tl) detectors are already used in Dark Matter searches , producing some of the most sensitive results.
Several characteristic properties make crystal scintillators an attractive detector option for low-energy (keV to MeV range) low-background experiments. Subsequent sections of this article bring out the potential advantages of this approach and some of the physics topics well-suited to be investigated with this detector technology. The characteristic performance of the CsI(Tl) crystal is used as illustration. A generic design to exploit these merits in a realistic experiment is discussed in Section 4.
## 2 Motivations and Merits
### 2.1 Nuclear Physics
The physics in the keV to MeV range can depend critically on the choice of isotopes as the interaction targets. The nuclear structure determines the interaction cross-sections and the detection threshold, as well as experimental signatures like spatial or temporal correlations.
The deployment of a target with large mass for low energy experiments usually requires that the target be an active detector. There are only a few detector technologies which accommodate a wide range of possible nuclei. The choice is even more limited when the potential and possibilities to scale up to a massive (tons or more) detector have to be taken into account. The two most prominent candidate techniques are loaded liquid scintillator and crystal scintillator.
### 2.2 Existing Experience and Potential Spin-Offs
The large target-mass requirement is always a challenge for low count rate experiments. From the big electromagnetic calorimeters in high energy physics, there is much experience in producing and operating 50-ton-range crystal calorimeters. The technology has proven to be stable and robust in the harsh accelerator environment. Indeed, the present broad applications and affordable price range for crystals like CsI(Tl) and BGO are driven mostly by the demand and development from high energy physics experiments. Therefore, it is possible that construction of a big scintillating crystal detector for low-energy experiments will also lead to the maturity of a new technology with potential spin-offs in other areas.
### 2.3 Intrinsic Properties
Some of the properties of crystal scintillators make them favorable candidates, relative to the various other proposed detector schemes, for low-energy low-background experiments. While different crystals do have different characteristic performance parameters, the merits of this detector approach are discussed below using CsI(Tl) as an illustration.
The characteristic properties of CsI(Tl) , together with a few other common crystals as well as liquid and plastic scintillators, are summarized and compared in Table 1. The selection of CsI(Tl) is due to the fact that it is a commonly used and relatively inexpensive crystal scintillator produced in large quantities and with many examples of successful operation as 50-ton-range electro-magnetic calorimeters, as in all the B-factory detectors currently under operation. Unlike NaI(Tl), it is only slightly hygroscopic and can operate stably for a long time without the need of a hermetic seal (based on experiences from high energy physics experiments). This minimizes the use of passive materials at the target volume which, as explained below, is crucial to allow the merits of this detector technique for low-energy low-background experiments to be fully exploited.
#### 2.3.1 Solid and Compact Detector
Crystal scintillators usually have high density and are made up of high-Z isotopes. Therefore, a massive (tens of tons) detector can still be very compact (a scale of several m<sup>3</sup>), such that external shielding configurations can be made more efficient and cost effective. The compact dimensions also favor applications where artificial neutrino sources are used, thereby allowing efficient exposure of the target materials to the source.
A solid detector can also prevent radioactive radon gas from diffusing into the inner fiducial volume from the external surfaces. This is a major concern for targets based on gaseous or liquid detectors. Special procedures are still necessary to minimize radon contamination on the crystal surfaces, as noted in Section 2.3.3 and Section 4.
#### 2.3.2 Efficient Active Veto
The attenuation effects of CsI(Tl) for γ-rays of different energies, together with those of water and liquid scintillator (a generic CH<sub>2</sub> compound with density 0.9 g cm<sup>-3</sup>), are depicted in Figure 1. In the region between 500 keV and around 3 MeV, Compton scattering (which varies as the atomic number Z) is the main process, and therefore the attenuation effects of CsI(Tl) are only enhanced by the density ratio relative to H<sub>2</sub>O and CH<sub>2</sub>. Above several MeV, pair production (varying as Z<sup>2</sup>) takes over, making the high-Z CsI(Tl) more efficient. This is the reason for choosing this crystal for electromagnetic calorimeters. In the low energy region below 500 keV, the photo-electric effect (varying as Z<sup>5</sup>) dominates overwhelmingly. For instance, the attenuation lengths for a 100 keV γ-ray are 0.12 cm and 6.7 cm, respectively, for CsI(Tl) and CH<sub>2</sub>. That is, 1 cm of CsI(Tl) is equivalent to 8 attenuation lengths, and 10 cm of CsI(Tl) has the same attenuating power as 5.6 m of liquid scintillator at this low energy. Most crystal scintillators, having high-Z isotopes, share this merit.
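The equivalence quoted above follows directly from the two attenuation lengths; a trivial check using only the numbers given in the text:

```python
lam_csi = 0.12   # attenuation length of CsI(Tl) at 100 keV, in cm (from the text)
lam_ls  = 6.7    # attenuation length of liquid scintillator (CH2) at 100 keV, in cm

print(1.0 / lam_csi)                    # ~8.3: attenuation lengths in 1 cm of CsI(Tl)
print((10.0 / lam_csi) * lam_ls / 100)  # ~5.6: metres of CH2 matching 10 cm of CsI(Tl)
```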
#### 2.3.3 Focus only on Internal Background
Given the large attenuating effects on low energy photons, crystal detectors can provide a unique advantage for background suppression in low energy experiments - that the external γ-background is highly suppressed, such that practically all γ-background originates internally, IF (1) a three-dimensional fiducial volume can be defined, and (2) a housing-free design with minimal passive materials can be realized.
For non-hygroscopic crystals like CsI(Tl) or BGO where a hermetic seal system is not needed for their operation, “internal” would include only two materials: the crystal itself and the surface wrapping or coating materials. Teflon wrapping sheets are most commonly used, while there is an interesting new development with sol-gel coating which can be as thin as a few microns . Teflon is known to have very high radio-purity (typically better than the ppb level for the <sup>238</sup>U and <sup>232</sup>Th series) .
The suppression of radon contamination in the inner fiducial volume requires special but standard procedures. The crystal surfaces as well as the teflon wrapping sheets should be cleaned before wrapping, preferably in a nitrogen-filled glove-box. The detector modules should be covered and protected by an additional surface (like aluminium foil), which will be removed only at the time of installation. The whole detector should be installed and operated in an air-tight box filled with clean nitrogen.
As a result, the experimental challenges and hurdles become focused on two distinct aspects:
– the control and understanding of the internal purity of the crystal target itself; and
– the realization of a detector design giving good position resolutions and with as low a threshold as possible.
Accordingly, the difficulties for external gamma-background control can be alleviated at the expense of additional detector requirements.
The internal background can be due to contaminations of naturally occurring isotopes (<sup>238</sup>U and <sup>232</sup>Th series, <sup>40</sup>K), long-lived fission products and cosmic ray-induced unstable nuclei. The background due to external $`\gamma `$’s, like those from the readout device, electronics components, construction materials, or radon contamination on the outer surfaces, can thus be attenuated and vetoed by the outer active volume. Background can also originate externally from cosmic-ray induced neutrons which have little attenuation with high-Z nuclei. Their effects, however, can be minimized by a cosmic veto and by operating the experiment underground.
Hygroscopic crystals like NaI(Tl) are housed in containers acting as hermetic seals. The containers, usually made of oxygen-free copper for low background applications, can be made to have high radio-purity. However, they are still inactive materials with high cross-sections for photons. Consequently, it is possible for high energy photons (which have less attenuation in the crystal) to penetrate into the fiducial volume, undergo Compton scattering at the passive container, and deposit only a small energy in the crystal detector itself. Therefore, the adoption of non-hygroscopic crystals (that is, a housing-free set-up) is essential to exploit the full power of this merit - namely the big suppression of external photons at low energies ($`<`$500 keV) entering the fiducial volume, or of those at high energies ($`>`$ MeV) entering the fiducial volume but depositing only 100 keV of visible signal.
The background count rate will be stable and not affected by external parameters if the dominant contributions are from internal contaminations. Therefore, this detector technique can provide an additional desirable feature in applications requiring delicate comparison and subtraction of data taken at different periods (such as reactor ON/OFF, annual modulation, or Day/Night effects). The light yield of scintillating crystals is usually temperature-dependent, and therefore a good calibration scheme and temperature control of the detector region are crucial to realize these subtraction procedures.
It should be stressed that an experimental design which provides the definition of a three-dimensional fiducial volume is essential to allow this large suppression of the external gamma background. The extent to which this can be achieved in a realistic detector set-up will depend on the specific crystal properties (particularly the light yield), and the energy range of interest. This will be discussed further in Sections 4 and 5.
#### 2.3.4 Good Energy Resolution and Modularity
The light yield of typical crystal scintillators is comparable to that of liquid and plastic scintillators. However, the modular sizes are smaller while the refractive indices are higher, leading to more efficient light transmission and collection. The high gamma attenuation also allows full γ-energy deposition. Consequently, crystal scintillators typically have better energy resolution and a lower detection threshold, both of which are necessary for low-energy measurements. The high γ-ray capture efficiency, together with the good resolution to measure them as energy peaks, can provide important diagnostic tools for understanding the physical processes and background of the system. For instance, by measuring the γ-peaks due to <sup>40</sup>K, <sup>60</sup>Co and <sup>137</sup>Cs, their associated β-background can be accurately accounted for and subtracted off.
The good modularity also enhances background suppression, since the interesting signals for most applications are single-site events. Most background from internal radioactivity comes as β+γ's in coincidence (like the decays of <sup>214</sup>Bi and <sup>208</sup>Tl from the <sup>238</sup>U and <sup>232</sup>Th series, respectively) and hence will produce multiple hits with high probability. Similarly, neutron capture events on the target isotopes manifest as (n,γ) interactions, giving rise to a γ-burst of multiple hits with known total energy. The neutron capture rate can therefore be measured, so that the background due to subsequent decays of the unstable daughter nuclei can be subtracted off.
#### 2.3.5 Possibility of Pulse Shape Discrimination
Crystals like CsI(Tl) and NaI(Tl) have superb pulse shape discrimination (PSD) properties to differentiate γ/e events from those due to heavily ionizing particles like α's, which have a faster fall time. Figure 2 depicts typical PSD between α/γ in CsI(Tl) with the “Partial Charge Vs Total Charge” method, demonstrating excellent separation. The PSD capabilities provide a powerful handle to tag and study those background channels involving alpha emission, such as those from the <sup>238</sup>U and <sup>232</sup>Th decay chains.
#### 2.3.6 High Sensitivity to U/Th Cascades
Unlike in liquid scintillators, α's are only slightly quenched in their light output in crystals like CsI(Tl) and NaI(Tl). The exact quenching ratio depends on the Tl concentration and on measurement parameters like the shaping time: for full integration of the signals, the quenching is about 50%. Therefore, some of the α's emitted in the uranium and thorium series give electron-equivalent signals above 3 MeV. This is beyond the end-point of natural radioactivity (2.61 MeV) and hence the peak signatures are easy to detect above the flat background. In comparison, the electron-equivalent light yield for several-MeV α's in liquid scintillators is typically less than 10% of their kinetic energy, making the signals well below the natural end-point and therefore more difficult to detect.
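To illustrate, the sketch below applies the ~50% electron-equivalent quenching quoted above to a few prominent chain members; the α energies are approximate values from standard decay data and are not given in the text.

```python
quenching = 0.5   # electron-equivalent fraction for alphas in CsI(Tl), from the text

# approximate alpha kinetic energies (MeV) for a few prominent U/Th chain members
alphas = {"Rn-222": 5.49, "Po-218": 6.00, "Po-214": 7.69, "Po-212": 8.78}

for nuclide, e_alpha in alphas.items():
    e_vis = quenching * e_alpha
    flag = "above" if e_vis > 2.61 else "below"
    print(f"{nuclide}: {e_vis:.1f} MeV electron-equivalent ({flag} the 2.61 MeV end-point)")
```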
A crystal contaminated by uranium or thorium would therefore give rise to multiple peaks above 3 MeV, as reported in Ref. in the case of CsI(Tl). Shown in Figure 3 is the background spectrum from a 5-kg CsI(Tl) crystal placed in 5 cm of lead shielding with cosmic veto in a typical sea-level laboratory. The absence of multiple peaks above 3 MeV suggests a <sup>238</sup>U and <sup>232</sup>Th concentration of less than the $`10^{-12}`$ g/g level, assuming the decay chains are in equilibrium. All the peaks and structures in the spectrum can be explained by ambient radioactivity or by (n,γ) interactions in the crystal and shielding materials. This simple yet effective measurement for crystal scintillators can be compared to the complicated schemes requiring elaborate underground facilities for liquid scintillator. A typical level achievable by the photon-counting method with a low-background germanium detector is only $`10^{-9}`$ g/g.
The sensitivities can be pushed further by doing the measurement underground (the flat background above 3 MeV is due to cosmic-ray induced neutrons which undergo (n,$`\gamma `$) when captured by the crystal or the shielding materials), and by exploiting the PSD characteristics of the crystal. In addition, by careful studies of the timing and energy correlations among the $`\alpha `$’s, one can obtain precise information on the radioactive contaminants in the cases where the <sup>238</sup>U and <sup>232</sup>Th decay series are not in equilibrium, so that the associated $`\beta `$/$`\gamma `$ background can be accounted for accurately. For instance, some Dark Matter experiments with NaI(Tl) reported trace contaminations (in the range of $`10^{18}10^{19}`$ g/g) of <sup>210</sup>Pb in the detector, based on peaks from $`\gamma `$’s of 46.5 keV and from $`\alpha `$’s of 5.4 MeV. Accordingly, $`\beta `$-decays from <sup>210</sup>Bi can be subtracted off from the signal.
## 3 Potential Applications
Several areas of low energy particle physics where the crystal scintillator technique may be applicable are surveyed in this section.
### 3.1 Neutrino-Electron Scattering at Low Energy
Scatterings of $`(\nu _\mathrm{e}\mathrm{e})`$ and $`(\overline{\nu _\mathrm{e}}\mathrm{e})`$ give information on the electro-weak parameters ($`\mathrm{g}_\mathrm{V},\mathrm{g}_\mathrm{A},\mathrm{and}\mathrm{sin}^2\theta _\mathrm{W}`$), and are sensitive to small neutrino magnetic moments ($`\mu _\nu `$). They are two of the most realistic systems where the interference effects between Z and W exchanges can be studied .
The goal of future experiments will be to push the detection threshold as low as possible to enhance the sensitivity of the magnetic moment search. Using reactor neutrinos as source, an experiment based on a gaseous time projection chamber with CF<sub>4</sub> is now operational. Another experiment using CsI(Tl) is being built , with the goal of achieving a threshold of 100 keV. A project with a NaI(Tl) detector at an underground site using an artificial neutrino source has also been discussed .
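To illustrate why lowering the threshold enhances the magnetic-moment sensitivity, the sketch below evaluates the standard tree-level weak and magnetic-moment contributions to the $`\overline{\nu _e}`$-e differential cross section (textbook expressions, no radiative corrections); the neutrino energy and the value of $`\mu _\nu `$ are purely illustrative.

```python
import numpy as np

GF = 1.16637e-5          # Fermi constant [GeV^-2]
ME = 0.510999e-3         # electron mass [GeV]
SW2 = 0.2312             # sin^2(theta_W), illustrative value
ALPHA = 1.0 / 137.036
HBARC2 = 0.3894e-27      # conversion GeV^-2 -> cm^2

def dsigma_weak(T, Enu, antineutrino=True):
    """Tree-level weak differential cross section dsigma/dT [cm^2/GeV];
    T is the electron recoil kinetic energy, Enu the neutrino energy (GeV)."""
    gL, gR = 0.5 + SW2, SW2            # nu_e e: CC + NC couplings
    if antineutrino:                   # for anti-nu_e, exchange gL <-> gR
        gL, gR = gR, gL
    val = (2.0 * GF**2 * ME / np.pi) * (
        gL**2 + gR**2 * (1.0 - T / Enu)**2 - gL * gR * ME * T / Enu**2)
    return val * HBARC2

def dsigma_mag(T, Enu, mu_nu):
    """Magnetic-moment term, mu_nu in Bohr magnetons; note the 1/T rise."""
    return (np.pi * ALPHA**2 / ME**2) * mu_nu**2 * (1.0 / T - 1.0 / Enu) * HBARC2

Enu = 3.0e-3                           # a 3 MeV reactor antineutrino
for T in (1e-4, 3e-4, 1e-3):           # recoil energies 0.1, 0.3, 1 MeV
    print(T, dsigma_weak(T, Enu), dsigma_mag(T, Enu, mu_nu=1e-10))
```

The magnetic-moment contribution grows as 1/T towards low recoil energies, while the weak term stays roughly flat, which is why a 100 keV threshold is so valuable.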
### 3.2 Neutral Current Excitation on Nuclei
Neutral current excitation (NCEX) on nuclei by neutrinos has been observed only in the case of <sup>12</sup>C with 30 MeV neutrinos. Excitations at lower energies using reactor neutrinos have been studied theoretically but not observed.
Crystal scintillators, having good $`\gamma `$ resolution and capture efficiency, are suitable to study these processes, where the experimental signatures are peaks in the energy spectra at characteristic energies. Realistic experiments can be based on using the crystal isotopes as active targets, like <sup>133</sup>Cs and <sup>127</sup>I in CsI(Tl) or <sup>6</sup>Li, <sup>7</sup>Li and <sup>127</sup>I in LiI(Eu). The <sup>7</sup>Li case, with a $`\gamma `$-energy of 480 keV, has particularly large cross-sections. Alternatively, a compact passive boron-rich target like B<sub>4</sub>C can be inserted into an array of CsI(Tl) detector modules . There is theoretical work suggesting that the NCEX cross-sections on <sup>10</sup>B and <sup>11</sup>B are sensitive to the axial isoscalar component of NC interactions and the strange quark content of the nucleon.
### 3.3 Dark Matter searches
Direct searches for Weakly Interacting Massive Particles (WIMPs) are based on looking for the low-energy (few keV) nuclear recoil signatures when they interact with nuclei. Crystal scintillators may offer an appropriate detector technique for these studies owing to their PSD capabilities, as well as being a mature technology where a large target mass is possible. The cross-sections depend on the specific isotopes , based on their nuclear matrix elements and spin states.
NaI(Tl) crystal detectors are already used in WIMP searches. Target masses up to the 100 kg scale have been deployed , producing some of the most sensitive results. Other projects on CaF<sub>2</sub>(Eu) and CsI(Tl) are also being pursued. In addition, searches have been performed for WIMP-nuclei inelastic scattering giving rise to NCEX.
For crystal detectors where a three-dimensional fiducial volume with minimal passive materials can be defined, there is no background due to external $`\gamma `$’s at this low energy. Internal $`\beta `$ background is suppressed by the spectral distribution at this very low energy range. For instance, less than 3$`\times 10^4`$ of the $`\beta `$-decays in <sup>40</sup>K (end-point 1.3 MeV) give rise to events below 10 keV. However, achieving a three-dimensional fiducial volume definition will be more difficult at these low energies, as elaborated in Section 5.
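A rough numerical cross-check of this kind of suppression can be made with a simple allowed-shape $`\beta `$ spectrum; the sketch below ignores the Fermi function, the (forbidden) shape factor of <sup>40</sup>K and the detector response, so it should only reproduce the order of magnitude, not the quoted 3$`\times 10^4`$ itself.

```python
import numpy as np

ME = 511.0            # electron mass [keV]
Q = 1311.0            # 40K beta end-point [keV]

def allowed_shape(T):
    """Unnormalized allowed beta spectrum dN/dT ~ p * E_tot * (Q - T)^2.
    The Fermi function and the forbidden shape factor are ignored, so this
    is only an order-of-magnitude estimate."""
    p = np.sqrt(T * (T + 2.0 * ME))
    return p * (T + ME) * (Q - T) ** 2

T = np.linspace(1e-3, Q, 200001)
spec = allowed_shape(T)
frac = np.trapz(spec[T <= 10.0], T[T <= 10.0]) / np.trapz(spec, T)
print(f"fraction of decays depositing < 10 keV ~ {frac:.1e}")
```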
### 3.4 Low Energy Solar Neutrinos
The goal of future solar neutrino experiments will be to measure the low energy (pp and <sup>7</sup>Be) solar neutrino spectrum. Charged-current (CC) and neutral-current (NC) interactions on nuclei are attractive detection channels besides neutrino-electron scattering. The CC mode can provide a direct measurement of the $`\nu _e`$-spectrum from the Sun without the convolutions necessary for the $`\nu `$-e channels, while the NC mode can provide a solar model independent cross-check. Crystal scintillators are possible means to realize detectors based on the CC and NC interactions.
Previously, crystals with indium have been investigated for a $`\nu _\odot `$-detector with <sup>115</sup>In as target, which can provide a distinct temporally and spatially correlated triple coincidence signature. Recently, the crystals LiI(Eu) and GSO ($`\mathrm{Gd}_2\mathrm{SiO}_5(\mathrm{Ce})`$) are being considered. The attractive features are that LiI(Eu) has large $`\nu _e`$N-CC cross-sections for both <sup>7</sup>Li and <sup>127</sup>I, and $`\nu _e`$N-NC (E<sub>γ</sub>=480 keV) for <sup>7</sup>Li, while <sup>160</sup>Gd in GSO can provide another time-delay signature for background suppression and for tagging the flavor-specific $`\nu _e`$N-CC reactions. The primary experimental challenge is the requirement of an extremely low background level due to the small signal rate.
### 3.5 Double Beta Decay
The energy range of interest for the search of neutrino-less double beta decay is mostly above 1 MeV, and hence some of the merits of crystal scintillators discussed in Section 2 relative to the other techniques are no longer applicable. We mention for completeness that there are efforts on <sup>116</sup>Cd with CdWO<sub>4</sub> and on <sup>160</sup>Gd with GSO crystals.
## 4 Generic Detector Design
To fully exploit the advantageous features discussed in Section 2, the design of a scintillating crystal detector for low-energy low-background experiments should enable the definition of a fiducial volume with a surrounding active 4$`\pi `$-veto.
Displayed in Figure 4 is a generic conceptual design where such a detector can be realized. The detector design is based on an experiment being constructed which, in its first phase, will study low energy neutrino-electron scattering from reactor neutrinos using CsI(Tl) as target. The listed dimensions are for this particular experiment. The dimensions for other applications will naturally depend on the optimization based on the specific detector performance and requirements.
As shown in Figure 4, one CsI(Tl) crystal unit has a hexagonal cross-section with 2 cm sides and a length of 20 cm, giving a mass of 0.94 kg. Two such units are glued optically at one end to form a module whose light output is read out by photo-detectors at both ends. Photo-multipliers (PMTs) will be used for the experiment, though solid-state photo-detectors like photo-diodes or avalanche photo-diodes are also possibilities for other applications. The modular design enables the detector to be constructed in stages. Figure 4 shows a design with a 17$`\times `$15 matrix giving a total mass of 480 kg.
The cleaning and wrapping procedures to minimize radon contamination to the crystal surfaces noted in Section 2.3.3 will be adopted. The detector will operate inside an air-tight box filled with dry nitrogen. The box itself will in turn be inside a nitrogen air-bag. The compact dimensions of the inner target detector allow a more elaborate and cost-effective shielding design. External to the air-bag are the typical shielding configurations: from outside inwards plastic scintillators for cosmic-ray veto, 15 cm of lead, 5 cm of steel, 25 cm of boron-loaded polyethylene, and 5 cm of copper. Potassium-free PMT glass window as well as other high radio-purity materials will be used near the target region.
The energy deposited can be derived from the sum of the two PMT outputs $`(\mathrm{Q}_{\mathrm{tot}}=\mathrm{Q}_1+\mathrm{Q}_2)`$ after their gains are normalized, while the longitudinal position can be obtained from their difference in the form of $`\mathrm{R}=(\mathrm{Q}_1\mathrm{Q}_2)/(\mathrm{Q}_1+\mathrm{Q}_2)`$. The variation of $`\mathrm{Q}_1`$, $`\mathrm{Q}_2`$ and $`\mathrm{Q}_{\mathrm{tot}}`$ along the crystal length is displayed in Figure 5. The error bars denote the width of the photo-peaks of a <sup>137</sup>Cs source. The discontinuity in the middle is due to the optical mismatch between the interface glue (n=1.5) and the crystal (n=1.8). It can be seen that $`\mathrm{Q}_{\mathrm{tot}}`$ is independent of the position, and the resolution at 660 keV is about 10%. The detection threshold (where signals are measured at both PMTs) is $`<`$20 keV.
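The reconstruction described here amounts to two simple combinations of the PMT charges; a minimal sketch is given below, in which the gain factors and the R-to-position slope are placeholder calibration constants that would be obtained from source scans along the crystal.

```python
def reconstruct(q1, q2, gain1=1.0, gain2=1.0, slope_cm=25.0):
    """Combine the two PMT charges of one crystal module.

    Returns (Qtot, R, z): Qtot = Q1 + Q2 measures the deposited energy,
    R = (Q1 - Q2)/(Q1 + Q2) the longitudinal position.  The gains and the
    R-to-cm slope are placeholder calibration constants.
    """
    Q1, Q2 = q1 * gain1, q2 * gain2
    qtot = Q1 + Q2
    r = (Q1 - Q2) / qtot
    z = r * slope_cm        # linear approximation of the measured R(z) profile
    return qtot, r, z
```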
The variation of R along the crystal length is depicted in Figure 6. The ratio of the RMS error in R to the slope gives the longitudinal position resolution. Its variation as a function of energy, obtained from measurements with $`\gamma `$-sources of different energies, is displayed in Figure 7, showing a resolution of $`<`$2 cm and 4 cm at 660 keV and 100 keV, respectively. Only an upper limit of 2 cm on the resolution can be concluded above 350 keV, due to (a) the finite collimator size of the calibration sources, and (b) the event-sites of $`\gamma `$-interactions (mostly multiple Compton scattering) being less localized at higher energies. It can be seen that a three-dimensional fiducial volume can be defined above 50 keV, where the definition can be optimized for different energy ranges. For instance, a 10 cm active veto length will give a suppression factor of $`5\times 10^3`$ to external photons of 100 keV. The fiducial volume consists only of the crystal itself and the teflon wrapping sheets, typically in a mass ratio of 1000:1.
Individual modules will be calibrated, both in light yield and in light transmission profile, before installation. On site, the stability of the crystals and PMTs can be monitored by radioactive sources illuminating the two end surfaces, as well as by cosmic-ray events. Stability of the electronics can be monitored with a precision pulse generator. LEDs placed at the end surface near the PMTs can be used to monitor the stability of the PMTs’ response as well as the light transmission through the crystal.
The various potential experiments based on scintillating crystal detectors can essentially adopt a similar design. Much flexibility is available for optimization. Different modules can be made of different crystals. Different crystals can be glued together to form “phoswich” detectors, in which case the event location among the various crystals can be deduced from the different pulse shapes. A passive target, as well as a different detector technology, can be inserted to replace a crystal module.
## 5 Background Discussions
Background understanding is crucial in all low-background experiments. It is beyond the scope of this article to present a full discussion of the background and sensitivities for all the possible candidate crystals in the various potential applications listed in Section 3. In this Section, we consider the key ingredients in the background issues relating to the CsI(Tl) experiment , followed by discussions of possible extensions to the lower-energy or smaller signal-rate experiments.
The experiment will operate at a shallow depth (about 30 meter-water-equivalent) near a reactor core, with the goal of achieving a 100 keV physics threshold corresponding to a $`\overline{\nu _e}`$-electron signal rate of O(1) per kg of CsI(Tl) per day \[$``$ “pkd”\] (for simplicity, we denote “events per kg of CsI(Tl) per day” by pkd in this section). As noted from the discussions in Section 2.3.3 and the detector performance parameters achieved in Section 4, while care and the standard procedures should be adopted for suppressing the ambient radioactivity background as well as that from the equipment and surrounding materials, the dominant background channel is expected to be the internal background from the CsI(Tl) itself. Based on prototype measurements as well as detector and shielding simulations, the various contributions are summarized below.
1. Internal Intrinsic Radioactivity:
Figure 3 and the discussions in Section 2.3.6 demonstrate a <sup>238</sup>U and <sup>232</sup>Th concentration of less than the $`10^{12}`$ g/g level \[$``$1 pkd\], assuming the decay chains are in equilibrium. In addition, a direct counting method with a high-purity germanium detector shows <sup>40</sup>K and <sup>137</sup>Cs contaminations of less than the $`10^{10}`$ g/g \[$``$1700 pkd\] and $`4\times 10^{18}`$ g/g \[$``$1200 pkd\] levels, respectively. A mass spectrometry measurement limits <sup>87</sup>Rb to less than $`8\times 10^9`$ g/g \[$``$210 pkd\].
2. Neutron Capture
The important channel comes from (n,$`\gamma `$) on <sup>127</sup>I producing <sup>128</sup>I ($`\tau _{\frac{1}{2}}=25\mathrm{min};\mathrm{Q}=2.14\mathrm{MeV}`$). Ambient neutrons or those produced in the lead shielding have little probability of being captured by the CsI crystal target, being attenuated efficiently by the boron-loaded polyethylene. Neutron captures by the target are mostly due to cosmic-ray induced neutrons originating from the target itself, such that the <sup>128</sup>I production rate is about 1.8 pkd.
The other neutron-activated isotope, <sup>134</sup>Cs ($`\tau _{\frac{1}{2}}=2.05\mathrm{yr};\mathrm{Q}=2.06\mathrm{MeV}`$), decays with 70% branching ratio by beta-decay (end point 658 keV), plus the emission of two $`\gamma `$’s (605 keV and 796 keV), and therefore will not give rise to a single hit at the low-energy region. The probability of producing single-hit at the 1.5-2 MeV region is suppressed by a factor of $`<`$0.05.
3. Muon Capture
Cosmic-muons can be stopped by the target nuclei and subsequently captured . The process will give rise to <sup>133</sup>Xe and <sup>127</sup>Te ($`<`$0.05 probability), both of which can lead to low-energy single-site background events. The expected rate is less than 1.5 pkd. The other daughter isotopes are stable.
4. Muon-Induced Nuclear Dissociation
Cosmic-muons can disintegrate the target nuclei via ($`\gamma `$,n) interactions or by spallation , at estimated rates of $``$10 pkd and $``$1 pkd, respectively. Among the various decay configurations of the final-state nuclei of the ($`\gamma `$,n) processes, <sup>132</sup>Cs and <sup>126</sup>I, only about 20% (or $``$2 pkd) of the cases will give rise to low-energy single-hit background.
Therefore, the present studies place limits on the background due to internal radio-purity at below the 1000 pkd level. The effects due to cosmic-induced long-lived isotopes at this shallow depth, but within elaborate shieldings, are typically in the range of a few pkd. The residual background can be identified, measured and subtracted off by various means like alpha peaks, gamma peaks, and neutron-capture bursts. Such background subtraction strategies have been used successfully in accelerator neutrino experiments. For instance, the CHARM-II experiment measured about 2000 neutrino-electron scattering events from a sample of candidate events a factor of 20 larger in size , achieving a few % uncertainty in the signal rate. A suppression factor of 100 is therefore a realistic goal.
In addition, one can use the conventional Reactor ON/OFF subtraction to further enhance the sensitivities. Based on considerations above, a residual Background-to-Signal ratio of less than 10 before Reactor ON$``$OFF is attainable. In comparison, the best published limit on neutrino magnetic moment search with reactor neutrinos is based on a Si(Li) target with a mass of 37.5 kg at a threshold of 600 keV and a Reactor OFF/(ON-OFF) ratio of 120. Therefore, the CsI experiment should be able to achieve a better sensitivity in the studies of neutrino-electron scatterings.
The other applications typically allow operation at underground sites, so that the cosmic-induced background would be reduced compared to the discussion above. The new challenges and complications are due to the lower energies or smaller signal rates, discussed as follows:
1. Dark Matter Searches:
The present experimental background level for the nuclear recoil energy range (10 keV and less) is around O(1) pkd per keV. This is comparable to the internal radio-purity limits already achieved in the CsI(Tl) prototype. However, the low energy (and therefore low light output) makes the definition of a three-dimensional fiducial volume less efficient. The geometry and performance parameters of Figure 4 are optimized for higher energies. Nevertheless, a simple variant of the concept can be adopted by using an active light guide based on crystals with distinguishably different time-profiles from the target crystals. A possibility is the combination of pure CsI (decay time 10 ns) and CsI(Tl) (decay time 1000 ns). The location of the events can be obtained by pulse shape analysis. The rejection of PMT noise can also be done by PSD but will become delicate in the low-energy (few-photoelectron) regime. The background subtraction procedures will require detailed knowledge of the effects from X-rays and Auger electrons at these low energies. The geometry of the crystal modules and the electronics design should be optimized to lower the detection and PSD thresholds as far as possible. External shieldings should be optimized to minimize the effects of high energy neutrons which can penetrate easily through the active veto to give the nuclear recoil background. Experience from the operational NaI(Tl) detectors can provide valuable input.
2. Solar Neutrino Experiments:
The energy range of interest ($`>`$100 keV) allows good detector performance for crystal scintillators. However, a much smaller $`\nu _\odot `$N-CC signal rate, in the range of O(1) per 10 tons per day, is expected. A target mass of tens of tons will be required, such that scale-up schemes should be studied. A major R&D program, similar to the efforts with liquid scintillators to enhance - and measure - the radio-purity level to the 10<sup>-16</sup> g/g range for U/Th, is necessary, whatever the target isotopes and detector techniques. Still, the efforts can be focussed on a single material, namely the crystal target itself. The radio-purity requirements can be relaxed for target isotopes which can lead to distinct spatial and temporal signatures, like <sup>115</sup>In as well as <sup>176</sup>Yb, <sup>160</sup>Gd and <sup>82</sup>Se .
## 6 Outlook
Large water Cerenkov and liquid scintillator detectors have been successfully used in neutrino and astro-particle physics experiments. New detector technology must be explored to open new windows of opportunities. Crystal scintillators may be well-suited to be adopted for low background experiments at the keV-MeV range. Pioneering efforts have already been made with NaI(Tl) crystals for Dark Matter searches, while another experiment with CsI(Tl) is being constructed to study low energy neutrino interactions. The present O(100 kg) target mass range can be scaled up to tens of tons, based on the successful experience of calorimeters in high energy physics experiments.
A generic detector design is considered in this article, demonstrating that defining a three-dimensional fiducial volume with minimal passive materials is possible. The large $`\gamma `$-attenuation at low energy can lead to a large suppression of background due to ambient radioactivity by the active veto layers. Consequently, the principal experimental challenges become ones focussed on the understanding, control and suppression of the radioactive contaminations in the crystals, as well as on the optimization of the detector design to realize an efficient, totally-active, three-dimensional fiducial volume definition. The high $`\gamma `$-detection efficiency, good energy and spatial resolutions, low detection threshold, PSD capabilities and clean alpha signatures provide powerful diagnostic tools towards these ends.
There is still much room for research and development towards the realization of big experiments. Potential spin-offs in other areas are possible in the course of these efforts.
This work was supported by contracts NSC 87-2112-M-001-034 and NSC 88-2112-M-001-007 from the National Science Council, Taiwan.
# Galaxy Interactions and Starbursts at High Redshift
## 1. Introduction
An interest in galaxy interactions at high redshift can be motivated from several directions. It has been shown that many of the observed properties of galaxies at $`z3`$–4 (e.g., their number density, luminosity function, colours, sizes, velocity dispersions, etc.) are well-reproduced by a model in which these galaxies are predominantly starbursts triggered by galaxy interactions (Somerville, Primack, & Faber 1999; SPF). In addition, the collision rate of dark matter halos in high-resolution cosmological N-body simulations shows a marked increase at earlier epochs, and the clustering properties of these colliding halos at $`z3`$ are similar to those of the observed Lyman-break galaxies (Kolatt et al. 1999).
Previous numerical investigations of starbursts in interacting galaxies (e.g. Mihos & Hernquist 1994a; 1996) have assumed initial properties typical of local galaxies. Many of these properties (e.g., gas content, surface density, disk-to-halo ratio) may be quite different at high redshift. Using the same code as Mihos & Hernquist, we are carrying out an ongoing program of simulations to more fully explore the parameter space with an emphasis on high-redshift galaxies. A more complete version of these preliminary results will be presented in Rosenfeld et al. (in prep.).
## 2. Simulations
We use the TREESPH code, including star formation using a Schmidt law ($`\dot{\rho _{}}=C\rho _{gas}^N`$) as described in Mihos & Hernquist (1994b). We set the proportionality constant in this equation by requiring the isolated galaxies to lie on the relation given by Kennicutt (1998). Masses and mass ratios of typical colliding dark matter halos at $`z=3`$ were determined using the simulations discussed in Kolatt et al. (1999). For all calculations requiring the assumption of a cosmology, we assume the same cosmological model as those simulations, namely $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, and $`H_0=70`$ km/s/Mpc. The mass and exponential scale radius of the stars and gas in the galactic disk inhabiting a halo of a given mass at the desired redshift are estimated using the semi-analytic models of SPF, and are consistent with known properties of Lyman-break galaxies at $`z3`$. The disks are assumed to be stable before the start of the interaction. The results presented here are for an interaction between equal mass, bulgeless galaxies, with halo mass $`7.1\times 10^{11}M_{}`$, stellar mass $`7.2\times 10^9M_{}`$, stellar scale radius 1.7 kpc, gas fraction $`0.5`$ and gas scale radius 3.4 kpc. All other properties (relative inclination, orbit, etc.) are the same as in the fiducial case of Mihos & Hernquist (1996; MH96).
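As a rough indication of what this normalization implies, the sketch below evaluates the global Kennicutt (1998) relation at the central surface density of an exponential gas disc with the parameters quoted above; the coefficient and exponent are the published global values, while taking the gas mass comparable to the stellar mass (gas fraction 0.5) is our assumption, so the numbers are indicative only.

```python
import numpy as np

def kennicutt_sfr(sigma_gas):
    """Global Schmidt law of Kennicutt (1998):
    Sigma_SFR ~ 2.5e-4 * (Sigma_gas / [Msun pc^-2])**1.4  [Msun yr^-1 kpc^-2];
    the coefficient carries a quoted uncertainty of roughly 30%."""
    return 2.5e-4 * sigma_gas ** 1.4

# Central surface density of an exponential gas disc with scale radius 3.4 kpc;
# the gas mass is assumed comparable to the stellar mass (gas fraction 0.5).
m_gas, r_d = 7.2e9, 3.4e3                    # Msun, pc
sigma0 = m_gas / (2.0 * np.pi * r_d ** 2)    # Msun / pc^2
print(sigma0, kennicutt_sfr(sigma0))
```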
## 3. Results
The star formation rate over the course of the interaction is shown in figure 1 (left panel). The behaviour is qualitatively similar to the results of MH96. We convolve this star formation history with stellar population models (GISSEL98, Bruzual & Charlot in prep.) to obtain the apparent magnitude of the galaxy at $`z3`$ (figure 1, right panel). We have used the solar metallicity models with a Salpeter IMF. We show the magnitudes in the WF/PC F606W ($`V_{606}`$) and NICMOS F160W ($`H_{160}`$) filters (AB system), which probe the rest-frame $`1500\AA `$ and $`4000\AA `$ part of the spectrum at $`z=3`$. The top set of curves in the right panel neglects dust extinction; the bottom set shows the result of including a correction of a factor of $`5`$ at 1500 Å, as suggested by recent observational estimates of typical extinction corrections in bright LBGs (Meurer et al. 1999; Steidel et al. 1999), and assuming a Calzetti attenuation curve (Calzetti et al. 1996). With this level of dust extinction, the bursting galaxy would be visible at present spectroscopic limits for about 200 Myr, whereas in the absence of the burst the galaxy would have been well below the detection limit. Note that in the absence of dust reddening, the $`V_{606}H_{160}`$ color of the galaxy during the burst is quite blue (-0.5 – 0), but after the dust correction, typical values (0.5 to 1.0) are reasonably consistent with observations.
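As a simple bookkeeping check of these magnitudes, the following sketch computes the luminosity distance for the adopted cosmology and the dimming corresponding to a factor of 5 extinction; it ignores K-corrections and the detailed shape of the Calzetti curve, so it is only an order-of-magnitude sanity check.

```python
import numpy as np
from scipy.integrate import quad

H0, OM, OL = 70.0, 0.3, 0.7          # km/s/Mpc, Omega_0, Omega_Lambda
C_KMS = 2.998e5

def lum_distance_mpc(z):
    """Luminosity distance in a flat (OM, OL) cosmology [Mpc]."""
    efun = lambda zp: 1.0 / np.sqrt(OM * (1.0 + zp) ** 3 + OL)
    dc, _ = quad(efun, 0.0, z)
    return (1.0 + z) * (C_KMS / H0) * dc

z = 3.0
dl = lum_distance_mpc(z)
mu = 5.0 * np.log10(dl * 1e6 / 10.0)     # distance modulus
dimming = 2.5 * np.log10(5.0)            # factor-of-5 extinction in magnitudes
print(f"D_L = {dl:.0f} Mpc, distance modulus = {mu:.2f}, dust dimming = {dimming:.2f} mag")
```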
Figure 2 shows how the interacting galaxy would appear if observed at $`z3`$ by HST (in the absence of noise or sky background) at various times during the merger. Left panels show the WF/PC $`V_{606}`$ filter with a pixel size of 0.04 arcsec; the right panels show the NICMOS $`H_{160}`$ filter with 0.08 arcsec pixels. From top to bottom, the galaxies are shown at the beginning of the simulation, during the first interaction of the galaxies, after the galaxies have separated again, and finally after the final merger. These images can be matched up with the star formation and total magnitude curves shown in figure 1 by multiplying the simulation time units shown on the figure by a factor of 8.3 to convert to Myr.
If in fact the observed galaxy population becomes increasingly dominated by merging starburst systems at higher redshifts, one would expect to observe larger fractions of galaxies with disturbed morphologies and significant substructure. We plan to use synthesized images such as figure 2 to develop statistics to quantify these effects, and eventually to test whether the morphological properties of observed high-redshift galaxies are consistent with the collisional starburst scenario.
## References
Bruzual, A., & Charlot, S., in preparation
Calzetti, D., Kinney, A.L., & Storchi-Bergmann, T. 1996, ApJ, 458, 132
Kennicutt, R.C. 1998, ApJ, 498, 541
Kolatt, T.S. et al. 1999, ApJ, 523, 109
Meurer, G.R., Heckman, T.M., & Calzetti, D. 1999, ApJ, 521, 64
Mihos, J.C., & Hernquist, L. 1994a, ApJ, 425, 13
Mihos, J.C., & Hernquist, L. 1994b, ApJ, 437, 611
Mihos, J.C., & Hernquist, L. 1996, ApJ, 464, 641 (MH96)
Rosenfeld, G., Somerville, R.S., Kolatt, T.S., Mihos, J.C., Dekel, A. & Primack, J.R. in preparation
Somerville, R.S., Primack, J.R., & Faber, S.M. 1999, MNRAS, accepted (SPF)
Steidel, C.C., Adelberger, K.L., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, ApJ, 519, 1
## 1 Introduction
At present four different Monte Carlo programs are available for the computation of jet quantities in deep-inelastic scattering (DIS) to next-to-leading order (NLO) in the strong coupling constant $`\alpha _s`$: MEPJET , DISENT , DISASTER++ and JETVIP . Since all of these claim to be exact calculations, they should produce identical results (within numerical precision). In this contribution we compare the leading-order (LO) and the NLO predictions of these programs to test whether they are compatible. The comparisons are performed in typical phase space regions where HERA analyses are currently made.
## 2 Program Overview
The four programs allow one to calculate next-to-leading order parton cross sections with arbitrary cuts. They differ in the techniques used and in several details. A short overview of the four programs is given in table 1. For a detailed discussion of the single topics we refer to the program manuals.
The programs can be classified by the method that is applied to cancel the collinear and infrared singularities. Two general methods are available, the phase space slicing method and the subtraction method. The phase space slicing method employs a technical cutoff ($`s_{\mathrm{min}}`$ or $`y_{\mathrm{cut}}`$). Correct results are only obtained for sufficiently small values of this parameter. The cutoff independence has to be checked for every investigated observable/scenario. In practice this test is performed by comparing multiple runs with different (small) cutoff parameters. The subtraction method does not apply such a cutoff.
All programs are able to calculate single jet and dijet observables in next-to-leading order, i.e. $`𝒪(\alpha _s^1)`$ or $`𝒪(\alpha _s^2)`$ for processes with one or two partons in the Born process. Processes with a higher number of particles in the Born graph are available in leading order only.
In order to apply arbitrary cuts on the final state, the full event record of all incoming and outgoing particles is needed. The full event record is available for all programs, with the exception of the azimuthal angle $`\varphi `$ of the scattered electron with respect to the outgoing partons in the JETVIP program. In JETVIP the $`\varphi `$ dependence of the matrix elements is integrated analytically. Since this angle is not available, the full vector of the Lorentz boost from the Breit frame to the HERA laboratory frame can only be calculated under the assumption of a flat distribution in $`\varphi `$. At larger $`Q^2`$ this can lead to an error of at most 5-7% when angular jet cuts in the HERA laboratory frame are applied . Therefore no such cuts are used in our test scenarios.
In perturbative QCD calculations two scales are introduced: the renormalization ($`\mu _r`$) and the factorization scale ($`\mu _f`$). All programs allow one to identify the renormalization scale with arbitrary variables, e.g. proportional to kinematic variables ($`Q`$) or to final state quantities ($`E_{T,\mathrm{jet}}`$). The same is true for the factorization scale, except for DISENT. In DISENT the factorization scale is restricted to variables that are independent of the hadronic final state, i.e. proportional to kinematic variables ($`Q`$) or to constant values. To keep the checks simple, we stick to the choice of $`\mu =Q`$ for both scales.
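For orientation, the standard two-loop approximate solution of the renormalization group equation can be written down directly; the sketch below is not the PDFLIB code itself, and the value of $`\mathrm{\Lambda }`$ is a placeholder that should be taken consistently with the PDF set (here CTEQ4M).

```python
import math

def alpha_s_2loop(mu, lam=0.202, nf=5):
    """Two-loop running coupling in the MSbar scheme:
    alpha_s = 4*pi/(b0*L) * [1 - (b1/b0^2) * ln(L)/L],  L = ln(mu^2/Lambda^2),
    with b0 = 11 - 2*nf/3 and b1 = 102 - 38*nf/3.
    'lam' (Lambda for nf flavours, in GeV) is a placeholder value."""
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    L = math.log(mu ** 2 / lam ** 2)
    return 4.0 * math.pi / (b0 * L) * (1.0 - b1 * math.log(L) / (b0 ** 2 * L))

# e.g. the choice mu_r = Q for Q^2 = 35 GeV^2 (inside the central scenario below):
print(alpha_s_2loop(math.sqrt(35.0)))
```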
At very low and at very high $`Q^2`$, effects changing the cross section become more and more important. At high $`Q^2`$ the exchange of $`Z`$ and $`W`$ bosons can not be neglected while at low $`Q^2`$ the contributions from resolved photons to jet cross sections become sizable. In other regions of phase space effects from quark masses can also become relevant. Since these different effects can only be calculated by single programs (see table) they have not been considered in the present comparison.
## 3 Comparison of the Results
### 3.1 Technical Settings
For all NLO calculations as well as for the LO calculations we are using renormalization and factorization scales $`\mu _r=\mu _f=Q`$ and the 2-loop formula for the running of $`\alpha _s`$ (taken from PDFLIB ). Throughout we are using CTEQ4M parton density functions (taken from PDFLIB). All cross sections are calculated for a running electromagnetic coupling constant<sup>1</sup><sup>1</sup>1The official DISENT program does not take into account the running of the electromagnetic coupling constant. We have modified the official DISENT program to include this. and are performed in the $`\overline{\mathrm{MS}}`$ scheme with five active flavors.
### 3.2 Scenarios for the Comparisons
At NLO the jet cross sections depend on the jet definition and the recombination scheme. For all comparisons we are using the inclusive $`k_{}`$ algorithm in the Breit frame. It has been shown that this jet definition is infrared safe to all orders and less affected by hadronization corrections than other jet definitions . Particles are recombined in the $`E_T`$-scheme in which the jet $`E_T`$ is obtained from the scalar sum of the particle $`E_T`$, the pseudorapidity and the azimuth angle are calculated as $`E_T`$-weighted averages from the particle quantities. In all cases we calculate inclusive jet cross sections (i.e. cross sections for the production of events with at least two jets that pass the jet cuts). The jets are indexed in descending order in their transverse energies in the Breit frame ($`E_{T1}E_{T2}`$).
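The $`E_T`$-scheme recombination described above is simple enough to write out explicitly; the following sketch merges the particles already assigned to one jet (the assignment by the inclusive $`k_{}`$ algorithm itself is not shown), and the naive $`\varphi `$ average assumes the particles do not straddle the azimuthal wrap-around.

```python
def et_scheme_combine(particles):
    """Combine (ET, eta, phi) tuples into one jet in the ET-scheme:
    the jet ET is the scalar ET sum, eta and phi are ET-weighted averages.
    (The simple phi average assumes no wrap-around at +-pi.)"""
    et_jet = sum(et for et, _, _ in particles)
    eta_jet = sum(et * eta for et, eta, _ in particles) / et_jet
    phi_jet = sum(et * phi for et, _, phi in particles) / et_jet
    return et_jet, eta_jet, phi_jet

# e.g. two partons merged into a single jet (Breit-frame quantities):
print(et_scheme_combine([(6.0, 1.2, 0.4), (3.0, 1.5, 0.6)]))
```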
The $`ep`$ center of mass energy squared is set to $`s=4\times 27.5\times 820\text{ GeV}^2=90200\text{ GeV}^2`$ (corresponding to the HERA running conditions in 1994-97). What will later be called the “central scenario” is defined as follows:
$$30<Q^2<40\text{GeV}^2,0.2<y<0.6,E_{T2\mathrm{m}\mathrm{i}\mathrm{n}}=5\text{GeV}.$$
(1)
So far this scenario includes infrared sensitive parts of phase space, where $`E_{T1}E_{T2}E_{T\mathrm{min}}`$. These phase space regions can be avoided by additional harder cuts either on the $`E_{T1}`$ of the hardest jet, on the average $`\overline{E}_T`$ of both jets, or on the invariant dijet mass $`M_{jj}`$. These different choices are varied in scenario 1(a-c). The central choice will be an additional cut of $`E_{T1}>8\text{GeV}`$. Different values of this cut are tested in scenario 2(a-d).
Starting from the central scenario we also vary the ranges of the kinematical variables $`Q^2`$ (scenario 3(a-d)) and $`y`$ (scenario 4(a-c)). Further comparisons are dedicated to phase space regions which are irrelevant for experimental analyses, but helpful to test the programs. In scenario 5(a-c) we compare the programs for softer transverse jet energy cuts. Scenario 6(a-c) is defined by the requirement of a difference in the transverse jet energies. The only contributions to these cross sections come from 3-parton final states in $`𝒪(\alpha _s^2)`$ such that we are left with a leading order prediction.
The various scenarios differ from the central scenario (1) as follows:
| Scenario 1 | |
| --- | --- |
| different ways to avoid | |
| infrared sensitive regions | |
| No. | additional jet cut |
| 1 a) | $`E_{T1\mathrm{m}\mathrm{i}\mathrm{n}}>8\text{GeV}`$ |
| 1 b) | $`M_{jj}>25\text{GeV}`$ |
| 1 c) | $`(E_{T1}+E_{T2})>17\text{GeV}`$ |
| Scenario 2 | |
| --- | --- |
| different $`E_{T1\mathrm{m}\mathrm{i}\mathrm{n}}`$ cuts | |
| No. | $`E_{T1\mathrm{m}\mathrm{i}\mathrm{n}}/\text{GeV}`$ |
| 2 a) | 8 |
| 2 b) | 15 |
| 2 c) | 25 |
| 2 d) | 40 |
| Scenario 3 | | |
| --- | --- | --- |
| different $`Q^2`$ ranges | | |
| add. cut $`E_{T1\mathrm{m}\mathrm{i}\mathrm{n}}=8\text{GeV}`$ | | |
| No. | $`Q_{\mathrm{min}}^2/\text{GeV}^2`$ | $`Q_{\mathrm{max}}^2/\text{GeV}^2`$ |
| 3 a) | 3 | 4 |
| 3 b) | 30 | 40 |
| 3 c) | 300 | 400 |
| 3 d) | 3000 | 4000 |
| Scenario 4 | | |
| --- | --- | --- |
| extreme $`y`$ regions | | |
| add. cut $`E_{T1\mathrm{m}\mathrm{i}\mathrm{n}}=8\text{GeV}`$ | | |
| No. | $`y_{\mathrm{min}}`$ | $`y_{\mathrm{max}}`$ |
| 4 a) | 0.01 | 0.05 |
| 4 b) | 0.2 | 0.6 |
| 4 c) | 0.9 | 0.95 |
| Scenario 5 | | |
| --- | --- | --- |
| probing softer regions | | |
| No. | $`E_{T2\mathrm{m}\mathrm{i}\mathrm{n}}/\text{GeV}`$ | $`E_{T1\mathrm{m}\mathrm{i}\mathrm{n}}/\text{GeV}`$ |
| 5 a) | 1 | 2 |
| 5 b) | 2 | 3 |
| 5 c) | 3 | 4 |
| Scenario 6 | |
| --- | --- |
| add. cut on the difference | |
| of the jet $`E_T`$ | |
| No. | $`(E_{T1}E_{T2})>`$ |
| 6 a) | 1 GeV |
| 6 b) | 2 GeV |
| 6 c) | 3 GeV |
### 3.3 Numerical Comparisons
An overview of the results of all calculations for the 17 different scenarios is given in the tables in the appendix. The leading order results are shown in the last row for each scenario. These values have been calculated to a numerical precision of typically 0.2%. In all cases we see a perfect agreement between the different programs.
The next-to-leading order calculations for the corresponding scenarios have been performed to a numerical precision of typically 0.3% (for DISASTER++ the precision is often worse, since the calculations by DISASTER++ require significantly more CPU time than the other programs). In most cases we have tested the stability of the JETVIP results with respect to the cutoff parameter $`y_{\mathrm{cut}}`$.
#### DISENT and DISASTER++
The programs DISENT and DISASTER++, which are both based on the subtraction method, are in very good overall agreement. In 12 cases their NLO results agree within the quoted errors of typically 0.3%. Only in 5 comparisons (1b, 3c, 3d, 4a, 5a) are deviations seen, in the range of 0.6% to 2.2% with a significance of 1.5 to 3.6 standard deviations, but without any systematic trend. Considering that the errors quoted by the programs may sometimes be underestimated (for the DISENT program the same cross section has been calculated repeatedly; the statistically independent results were roughly Gaussian distributed in the central region, with a width compatible with the error quoted by DISENT, but significantly larger tails were seen, and the same is likely to be true for the other programs, for which no similar checks have been made), this can still be labeled “good agreement”.
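The per cent deviations and significances quoted above follow directly from the tabulated values; a small helper of the kind used for such comparisons could look as follows (it assumes independent, Gaussian errors, which, as noted, may be optimistic).

```python
import math

def compare(a, da, b, db):
    """Relative deviation (in %) of a with respect to b and its significance
    (in standard deviations), assuming independent Gaussian errors."""
    rel = 100.0 * (a - b) / b
    nsig = abs(a - b) / math.sqrt(da ** 2 + db ** 2)
    return rel, nsig

# e.g. scenario 1b, DISASTER++ versus DISENT (values from the appendix):
print(compare(82.83, 0.44, 81.02, 0.49))
```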
In cases where the precision of the NLO calculation is important, the user should therefore not trust the quoted errors but aim for a higher precision.
#### MEPJET
MEPJET NLO predictions were found to agree well with DISASTER++ and DISENT for physical PDFs in Ref. (Table 2 and discussion on p. 14). MEPJET’s NLO results for the extreme phase space regions investigated here are typically 5–8% lower than the NLO results obtained with DISENT and DISASTER++. In three cases deviations of about 10% occur. No obvious correlation between the size of the deviation and the value of the $`K`$-factor exists. The $`K`$-factor varies from 1.3 to 7.0, except for case 2d. Here DISASTER++ and DISENT yield a $`K`$-factor of 1.17 and 1.16, respectively, while MEPJET’s $`K`$-factor is close to 1. MEPJET deviates from DISASTER++ by 2$`\sigma `$ and from DISENT by 3$`\sigma `$ in this case.
All MEPJET calculations ran with the default cutoff value of $`s_{\mathrm{min}}=0.1\text{GeV}^2`$. To check for possible cutoff dependences additional runs with smaller $`s_{\mathrm{min}}`$ values were done for selected cases (see data table). No $`s_{\mathrm{min}}`$ dependence was found. Effects potentially introduced by approximations used for the crossing functions were also investigated and found to be not significant. All LO results agree within the statistical errors. In addition perfect agreement between MEPJET, DISASTER++ and DISENT is seen in scenario 6, which tests the real $`O(\alpha _s^2)`$ corrections. What causes the observed discrepancies in full NLO in the extreme phase space regions probed here is currently unknown.
#### JETVIP
As proposed in we have started to perform the NLO calculations for the JETVIP program with a cutoff value $`y_{\mathrm{cut}}=10^3`$. Although some of these results are in agreement with the DISENT/DISASTER++ values (scenarios 2b, 3d, 4a, 5a-c), in the other 11 cases discrepancies of up to 20% are seen. Therefore we have made extensive studies of the $`y_{\mathrm{cut}}`$ dependence of the JETVIP results in the range $`10^6y_{\mathrm{cut}}10^2`$. Only in scenario 6, where only real corrections of $`𝒪(\alpha _s^2)`$ are tested, do the results become stable for $`y_{\mathrm{cut}}10^4`$. For all NLO results we observe a significant cutoff dependence. Since independence of the cutoff is the most important test of a successful implementation of the phase space slicing method, the strong $`y_{\mathrm{cut}}`$ dependence of the JETVIP results is worrisome.
Especially at very small values of $`10^6y_{\mathrm{cut}}10^5`$ no convergence of the results is seen. In scenario 1a we have repeated the calculation at $`y_{\mathrm{cut}}=10^5`$ with fourfold statistics. While the quoted errors are 2.6% and 0.4%, respectively, both results deviate by 15%. This is a clear indication that at these small $`y_{\mathrm{cut}}`$ values the quoted errors are not reliable.
At intermediate values $`10^4y_{\mathrm{cut}}10^3`$ large $`y_{\mathrm{cut}}`$ dependencies (above 4%) are observed in four scenarios (2c, 2d, 3a, 6a) only. In the other 13 scenarios the dependence is below 3%. In 11 of these cases the JETVIP results at $`y_{\mathrm{cut}}=10^4`$ agree within this level of precision with the DISENT/DISASTER++ results. The other two results 1b, 1c deviate by 10% and 4.5% from the DISENT/DISASTER++ results.
## 4 Summary
We have compared four different programs for NLO calculations of jet cross sections in $`ep`$ collisions: DISENT, DISASTER++, JETVIP and MEPJET. Dijet cross sections in different ranges of $`Q^2`$, $`y`$, $`E_{T,\mathrm{jet}}`$ have been calculated in leading order (LO) and in next-to-leading order (NLO). All calculations are performed to a numerical precision of typically 0.2% (LO) and 0.3% (NLO).
While the leading order predictions of all programs agree within the numerical precision of 0.2%, our comparisons show that in NLO only the calculations of DISENT and DISASTER++ can be said to be in good agreement.
MEPJET shows systematic deviations, being typically 5–8% lower than DISENT and DISASTER++. Only the $`𝒪(\alpha _s^2)`$ tree level cross sections are in perfect agreement.
The JETVIP program shows a significant dependence on the phase space slicing parameter $`y_{\mathrm{cut}}`$ which has to be understood. Only at intermediate values of $`y_{\mathrm{cut}}10^4`$ is the dependence reduced. In the cases where the $`y_{\mathrm{cut}}`$ dependence within $`10^4y_{\mathrm{cut}}10^3`$ is smaller than 3%, the JETVIP results are often comparable with the DISENT and DISASTER++ results.
##### Acknowledgments
We would like to thank Stefano Catani, Dirk Graudenz, Erwin Mirkes, Björn Pötter, Mike Seymour, Dieter Zeppenfeld for providing the NLO programs and for many helpful discussions.
## Appendix A Numerical Results
Here we list all available numerical results. The last line for each scenario contains the leading-order results.
| scenario | DISASTER++ | DISENT | JETVIP | MEPJET |
| --- | --- | --- | --- | --- |
| 1 a) | 119.82 $`\pm `$ 0.411 | 119.54 $`\pm `$ 0.33 | 113.42 $`\pm `$ 0.10 ($`y_{\mathrm{cut}}=10^2`$) | 113.45 $`\pm `$ 0.21 ($`s_{\mathrm{min}}=0.1`$) |
| | | | 121.41 $`\pm `$ 0.19 ($`y_{\mathrm{cut}}=10^3`$) | 113.3 $`\pm `$ 3.5 ($`s_{\mathrm{min}}=0.0001`$) |
| | | | 121.69 $`\pm `$ 0.77 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 99.6 $`\pm `$ 2.6 ($`y_{\mathrm{cut}}=10^5`$) | |
| | | | 114.98 $`\pm `$ 0.44 ($`y_{\mathrm{cut}}=10^5`$) | |
| | | | 75.7 $`\pm `$ 2.7 ($`y_{\mathrm{cut}}=10^6`$) | |
| LO: | 41.662 $`\pm `$ 0.083 | 41.769 $`\pm `$ 0.061 | 41.745 $`\pm `$ 0.033 | 41.722 $`\pm `$ 0.032 |
| 1 b) | 82.83 $`\pm `$ 0.44 | 81.02 $`\pm `$ 0.49 | 93.58 $`\pm `$ 0.22 ($`y_{\mathrm{cut}}=10^3`$) | 78.55 $`\pm `$ 0.16 |
| | | | 91.11 $`\pm `$ 0.49 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 83.55 $`\pm `$ 1.1 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 30.57 $`\pm `$ 0.07 | 30.59 $`\pm `$ 0.05 | 30.54 $`\pm `$ 0.05 | 30.56 $`\pm `$ 0.01 |
| 1 c) | 72.26 $`\pm `$ 0.30 | 72.05 $`\pm `$ 0.28 | 77.15 $`\pm `$ 0.16 ($`y_{\mathrm{cut}}=10^3`$) | 67.57 $`\pm `$ 0.21 |
| | | | 75.46 $`\pm `$ 0.37 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 66.18 $`\pm `$ 1.36 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 35.155 $`\pm `$ 0.072 | 35.172 $`\pm `$ 0.052 | 35.184 $`\pm `$ 0.050 | 35.141 $`\pm `$ 0.024 |
| scenario | DISASTER++ | DISENT | JETVIP | MEPJET |
| --- | --- | --- | --- | --- |
| 2 a) | as 1 a) | | | |
| 2 b) | 16.585 $`\pm `$ 0.092 | 16.526 $`\pm `$ 0.051 | 16.668 $`\pm `$ 0.031 ($`y_{\mathrm{cut}}=10^3`$) | 15.743 $`\pm `$ 0.078 |
| | | | 16.302 $`\pm `$ 0.071 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 13.668 $`\pm `$ 0.160 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 6.185 $`\pm `$ 0.020 | 6.222 $`\pm `$ 0.011 | 6.214 $`\pm `$ 0.005 | 6.221 $`\pm `$ 0.003 |
| 2 c) | 2.0809 $`\pm `$ 0.0273 | 2.0519 $`\pm `$ 0.0080 | 1.9563 $`\pm `$ 0.0049 ($`y_{\mathrm{cut}}=10^3`$) | 1.9084 $`\pm `$ 0.0083 |
| | | | 1.7987 $`\pm `$ 0.0119 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 1.0962 $`\pm `$ 0.0260 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 1.0230 $`\pm `$ 0.0046 | 1.0221 $`\pm `$ 0.0022 | 1.0255 $`\pm `$ 0.0009 | 1.0250 $`\pm `$ 0.0005 |
| 2 d) | 0.1398 $`\pm `$ 0.0052 | 0.1403 $`\pm `$ 0.0011 | 0.1124 $`\pm `$ 0.0007 ($`y_{\mathrm{cut}}=10^3`$) | 0.1229 $`\pm `$ 0.0047 |
| | | | 0.0772 $`\pm `$ 0.0014 ($`y_{\mathrm{cut}}=10^4`$) | |
| LO: | 0.1197 $`\pm `$ 0.0014 | 0.12125 $`\pm `$ 0.00036 | 0.12073 $`\pm `$ 0.00016 | 0.12087 $`\pm `$ 0.000064 |
| scenario | DISASTER++ | DISENT | JETVIP | MEPJET |
| --- | --- | --- | --- | --- |
| 3 a) | 341.2 $`\pm `$ 1.7 | 339.1 $`\pm `$ 1.2 | 315.9 $`\pm `$ 0.4 ($`y_{\mathrm{cut}}=10^3`$) | 331.49 $`\pm `$ 0.42 ($`s_{\mathrm{min}}=0.1`$) |
| | | | 340.0 $`\pm `$ 0.7 ($`y_{\mathrm{cut}}=10^4`$) | 334.96 $`\pm `$ 1.31 ($`s_{\mathrm{min}}=0.01`$) |
| | | | 296.6 $`\pm `$ 2.0 ($`y_{\mathrm{cut}}=10^5`$) | 336 $`\pm `$ 14 ($`s_{\mathrm{min}}=0.001`$) |
| LO: | 48.418 $`\pm `$ 0.100 | 48.423 $`\pm `$ 0.081 | 48.363 $`\pm `$ 0.040 | 48.397 $`\pm `$ 0.040 |
| 3 b) | as 1 a) | | | |
| 3 c) | 26.848 $`\pm `$ 0.061 | 26.680 $`\pm `$ 0.051 | 26.259 $`\pm `$ 0.139 ($`y_{\mathrm{cut}}=10^3`$) | 24.684 $`\pm `$ 0.050 |
| | | | 26.79 $`\pm `$ 0.094 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 23.894 $`\pm `$ 0.407 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 16.938 $`\pm `$ 0.022 | 16.936 $`\pm `$ 0.016 | 16.928 $`\pm `$ 0.008 | 16.918 $`\pm `$ 0.011 |
| 3 d) | 1.9975 $`\pm `$ 0.0033 | 1.9852 $`\pm `$ 0.0029 | 1.9657 $`\pm `$ 0.0061 ($`y_{\mathrm{cut}}=10^3`$) | 1.8917 $`\pm `$ 0.0038 |
| | | | 1.9946 $`\pm `$ 0.0066 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 1.7194 $`\pm `$ 0.0179 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 1.4982 $`\pm `$ 0.0017 | 1.4967 $`\pm `$ 0.0013 | 1.4956 $`\pm `$ 0.0013 | 1.4966 $`\pm `$ 0.0010 |
| scenario | DISASTER++ | DISENT | JETVIP | MEPJET |
| --- | --- | --- | --- | --- |
| 4 a) | 19.218 $`\pm `$ 0.143 | 18.959 $`\pm `$ 0.068 | 18.818 $`\pm `$ 0.051 ($`y_{\mathrm{cut}}=10^3`$) | 17.190 $`\pm `$ 0.037 |
| | | | 18.470 $`\pm `$ 0.041 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 18.896 $`\pm `$ 0.167 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 11.611 $`\pm `$ 0.038 | 11.573 $`\pm `$ 0.022 | 11.590 $`\pm `$ 0.010 | 11.587 $`\pm `$ 0.006 |
| 4 b) | as 1 a) | | | |
| 4 c) | 6.424 $`\pm `$ 0.027 | 6.448 $`\pm `$ 0.018 | 6.299 $`\pm `$ 0.059 ($`y_{\mathrm{cut}}=10^3`$) | 6.086 $`\pm `$ 0.028 |
| | | | 6.356 $`\pm `$ 0.040 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 6.243 $`\pm `$ 0.155 ($`y_{\mathrm{cut}}=10^5`$) | |
| LO: | 2.1612 $`\pm `$ 0.0058 | 2.1615 $`\pm `$ 0.0031 | 2.173 $`\pm `$ 0.013 | 2.1598 $`\pm `$ 0.0015 |
| scenario | DISASTER++ | DISENT | JETVIP | MEPJET |
| --- | --- | --- | --- | --- |
| 5 a) | 1676.2 $`\pm `$ 4.0 | 1655.6 $`\pm `$ 4.2 | 1654.3 $`\pm `$ 6.0 ($`y_{\mathrm{cut}}=10^3`$) | 1489.8 $`\pm `$ 3.2 |
| | | | 1678.6 $`\pm `$ 20.4 ($`y_{\mathrm{cut}}=10^4`$) | |
| LO: | 845.40 $`\pm `$ 1.04 | 844.71 $`\pm `$ 0.70 | 844.84 $`\pm `$ 0.83 | 844.67 $`\pm `$ 0.45 |
| 5 b) | 973.8 $`\pm `$ 2.6 | 970.1 $`\pm `$ 2.4 | 970.3 $`\pm `$ 3.0 ($`y_{\mathrm{cut}}=10^3`$) | 885.9 $`\pm `$ 2.0 |
| | | | 989.4 $`\pm `$ 8.3 ($`y_{\mathrm{cut}}=10^4`$) | |
| LO: | 436.43 $`\pm `$ 0.62 | 436.25 $`\pm `$ 0.43 | 436.85 $`\pm `$ 0.68 | 436.27 $`\pm `$ 0.23 |
| 5 c) | 564.5 $`\pm `$ 1.6 | 561.9 $`\pm `$ 1.5 | 565.6 $`\pm `$ 1.5 ($`y_{\mathrm{cut}}=10^3`$) | 518.0 $`\pm `$ 0.8 |
| | | | 573.6 $`\pm `$ 4.8 ($`y_{\mathrm{cut}}=10^4`$) | |
| LO: | 242.20 $`\pm `$ 0.37 | 242.60 $`\pm `$ 0.28 | 243.25 $`\pm `$ 0.36 | 242.47 $`\pm `$ 0.23 |
| scenario | DISASTER++ | DISENT | JETVIP | MEPJET |
| --- | --- | --- | --- | --- |
| 6 a) | 126.24 $`\pm `$ 0.44 | 126.92 $`\pm `$ 0.47 | 118.13 $`\pm `$ 0.05 ($`y_{\mathrm{cut}}=10^3`$) | 126.08 $`\pm `$ 0.20 |
| | | | 122.94 $`\pm `$ 0.05 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 123.01 $`\pm `$ 0.05 ($`y_{\mathrm{cut}}=10^5`$) | |
| | | | 123.23 $`\pm `$ 0.22 ($`y_{\mathrm{cut}}=10^6`$) | |
| 6 b) | 56.30 $`\pm `$ 0.26 | 56.02 $`\pm `$ 0.25 | 54.78 $`\pm `$ 0.02 ($`y_{\mathrm{cut}}=10^3`$) | 55.90 $`\pm `$ 0.10 |
| | | | 55.30 $`\pm `$ 0.03 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 55.30 $`\pm `$ 0.03 ($`y_{\mathrm{cut}}=10^5`$) | |
| | | | 55.30 $`\pm `$ 0.03 ($`y_{\mathrm{cut}}=10^6`$) | |
| 6 c) | 27.15 $`\pm `$ 0.16 | 27.13 $`\pm `$ 0.07 | 26.94 $`\pm `$ 0.01 ($`y_{\mathrm{cut}}=10^3`$) | 27.14 $`\pm `$ 0.05 |
| | | | 27.00 $`\pm `$ 0.02 ($`y_{\mathrm{cut}}=10^4`$) | |
| | | | 27.00 $`\pm `$ 0.02 ($`y_{\mathrm{cut}}=10^5`$) | |
| | | | 27.01 $`\pm `$ 0.02 ($`y_{\mathrm{cut}}=10^6`$) | |
# The rapid X-ray variability of V4641 Sagittarii (SAX J1819.3-2525 = XTE J1819-254)
## 1. Introduction
In Feb. 1999, a new X-ray transient was independently discovered by BeppoSAX (SAX J1819.3–2525; in ’t Zand et al. 1999) and RXTE (XTE J1819–254; Markwardt, Swank, & Marshall 1999a). Its position is consistent with that of the variable star V4641 Sgr (note that V4641 Sgr had been misidentified as GM Sgr \[see IAU Circular 7277\]). Its 2–10 keV flux varied between $`<`$1 and 80 mCrab (in ’t Zand et al. 1999; Markwardt et al. 1999a). On 15 Sept. 1999, the source rapidly increased in the optical (Stubbings 1999) and with the RXTE All-Sky Monitor (ASM) its 2–12 keV intensity was observed to increase very rapidly (within 7 hours) from 1.6 to 12.2 Crab (Smith, Levine, & Morgan 1999a,b). Subsequent ASM measurements showed that within two hours of this flare, the flux had declined down to a level only marginally detectable with the ASM ($`<`$50 mCrab; Smith et al. 1999c).
Optical and infrared spectra taken during this bright X-ray event show emission lines (Ayani & Peiris 1999; Liller 1999; Djorgovski et al. 1999; Charles, Shahbaz, & Geballe 1999), reminiscent of accretion of matter onto a compact object, demonstrating that V4641 Sgr is indeed the optical counterpart. On 16 Sept. 1999, VLA observations were taken and a radio source was discovered (Hjellming, Rupen, & Mioduszewski 1999a) at a position consistent with V4641 Sgr. Follow up VLA and ATCA observations showed that its flux was rapidly declining on time scales of hours to days (Gaensler et al. 1999; Hjellming, Rupen, & Mioduszewski 1999b; Hjellming et al. 1999c). The VLA observations also showed that it was resolved, demonstrating the presence of ejecta (Hjellming et al. 1999b,c).
On 15 Sept. 1999, a short (3000 s) pointing was taken with the RXTE proportional counter array (PCA). A rapidly variable source was observed with a flaring and a quiescent episode (Markwardt, Swank, & Morgan 1999b). No pulsations or quasi-periodic oscillations (QPOs) were detected but red noise below 30 Hz was present. Here, we discuss in more detail the rapid X-ray variability as observed with the PCA during this observation.
## 2. Observation and Results
Several public TOO PCA observations were scheduled between 15 and 18 Sept. 1999 for a total of $``$33 ksec on-source time. However, only the first 1500 s of the first observation (on 15 Sept. 1999 21:18–22:43 UTC; see also Markwardt et al. 1999b) showed very high fluxes and strong variability. We limited our analysis to these first 1500 s in order to study the X-ray variability of the source. During the second 1500 s of this observation and during the later observations, V4641 Sgr could still be detected with the PCA, but at a very low flux level ($`<`$100 counts s<sup>-1</sup> for 5 detectors on; 2–60 keV) and no significant variability could be detected (upper limits of 5%–40% rms \[2–22.1 keV; 0.01–100 Hz; depending on total on-source time and source count rate during the individual observations\] on band limited noise similar to that detected during the first $``$900 s of the first observation \[see below\]).
During the PCA observations, data were accumulated in several different modes which were simultaneously active. Here, we used data obtained with the ’Binned’ modes B\_250US\_1M\_0\_249\_H (one photon energy channel \[effective range of 2–60 keV\] and 244 $`\mu `$s time resolution) and B\_4M\_8B\_0\_49\_H (eight channels covering 2–22.1 keV and 3.9 ms time resolution). These data were used to calculate 128 s FFTs to create the power spectra and the cross spectra, the 3.9 ms data were used to create light curves, a hardness-intensity diagram, and a hardness curve. The power spectra were fitted with a function consisting of a constant (the dead-time modified Poisson level) and a broken power law (the band-limited noise below $``$100 Hz). A Lorentzian was used to determine upper limits (95% confidence level) on the rms amplitude of QPOs above 100 Hz, assuming a QPO width of 50 Hz. To correct for the small dead-time effects on the lags, we subtracted the average 50–125 Hz cross-vector from the cross spectra (van der Klis et al. 1987).
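A minimal sketch of the fitted noise model and of the fractional rms it implies is given below; it assumes the usual fractional-rms-squared normalization of the power spectrum, and the amplitude used in the example is a placeholder rather than the fitted value.

```python
import numpy as np

def noise_model(nu, poisson, amp, nu_break, idx_low, idx_high):
    """Dead-time-modified Poisson level plus a broken power law."""
    bpl = np.where(nu < nu_break,
                   amp * nu ** (-idx_low),
                   amp * nu_break ** (idx_high - idx_low) * nu ** (-idx_high))
    return poisson + bpl

def fractional_rms(amp, nu_break, idx_low, idx_high, nu_min=0.01, nu_max=100.0):
    """Fractional rms of the broken power-law component over [nu_min, nu_max],
    assuming rms-squared normalization of the power spectrum."""
    nu = np.logspace(np.log10(nu_min), np.log10(nu_max), 20000)
    power = noise_model(nu, 0.0, amp, nu_break, idx_low, idx_high)
    return np.sqrt(np.trapz(power, nu))

# shape parameters close to the measured ones (break 5.1 Hz, slopes 1.03/2.16);
# the amplitude is a placeholder to be obtained from the fit to the data
print(fractional_rms(amp=0.05, nu_break=5.1, idx_low=1.03, idx_high=2.16))
```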
The 2–22.1 keV light curve of the first 1500 s of the first PCA observation is shown in Figure 3a, showing the flaring behavior of V4641 Sgr between 300 and 900 s from the start. After 1000 s, the source enters a quiescent state, with one last flare between 1010 and 1060 s (Fig.3c). After that (and also in the second 1500 s of this observations, and in the other observations) it remained in this state at very low count rates (see also Markwardt et al. 1999b). During the flaring episode, it varied rapidly in count rate. On time scales of 5–10 minutes it varied in luminosity by a factor of up to 500. Within one second, it sometimes increased and then decreased by more than a factor of 4 (Figs. 3b and d).
The strong variability is also evident from the power spectra obtained from the first 896 s (see Fig. 3a). Strong band-limited noise (47.2%$`\pm `$0.8% rms amplitude; 0.01–100 Hz; 2–22.1 keV) can clearly be seen. Its shape fits a broken power law with the break at 5.1$`\pm `$0.2 Hz and an index below and above the break of 1.03$`\pm `$0.02 and 2.16$`\pm `$0.03, respectively. The rms amplitude decreases from $``$54% to $``$35% as a function of energy, increasing from $``$4 keV to $``$20 keV (Fig. 3a). By subtracting the 3.9 ms data (2–22.1 keV) from the 244 $`\mu `$s data (2–60 keV), we could make a power spectrum for the data above 22.1 keV. The rms amplitude of the noise was even less, 30.6%$`\pm `$0.5%, above 22.1 keV (effective range of 22.1–60 keV). To show the decrease of amplitude below 20 keV more clearly, we excluded this point from the figure. The noise also has significant hard phase lags between photons with energies of 2–9.7 keV and those with energies of 9.7–22.1 keV (Fig. 3b). The lags increased from being consistent with zero below 0.1 Hz to $``$0.1 rad at $``$0.03 Hz. Above this frequency, the lags decreased again and above about 10 Hz the lags were consistent with zero again. We studied the energy dependence of the lags for four frequencies intervals: 0.02–0.1 Hz (Fig. 3b), 0.1–1.0 Hz (Fig. 3b), 1.0–10.0 Hz (Fig. 3b), and 10.0–50.0 Hz (not shown, the lags were always consistent with zero). Although the lags in the range 0.02–0.1 Hz were barely significant (Fig. 3b) a clear trend with energy was present.
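A sketch of how such phase lags can be derived from two simultaneous energy-band light curves is given below; it averages the cross spectra of 128 s segments and subtracts the mean 50–125 Hz cross-vector as the dead-time correction described above. The sign convention (positive phase meaning that the hard photons lag the soft ones) and the segment length are our assumptions.

```python
import numpy as np

def phase_lags(soft, hard, dt=0.00390625, seg_len=32768):
    """Average cross spectrum of two simultaneous light curves (counts per bin)
    over segments of seg_len bins (128 s for 3.9 ms bins) and return the
    frequencies and phase lags.  With conj(soft)*hard, a positive phase is
    taken to mean that the hard photons lag the soft ones (a hard lag)."""
    n_seg = len(soft) // seg_len
    freq = np.fft.rfftfreq(seg_len, d=dt)
    cross = np.zeros(len(freq), dtype=complex)
    for i in range(n_seg):
        s = np.fft.rfft(soft[i * seg_len:(i + 1) * seg_len])
        h = np.fft.rfft(hard[i * seg_len:(i + 1) * seg_len])
        cross += np.conj(s) * h
    cross /= n_seg
    high = (freq > 50.0) & (freq < 125.0)
    cross -= cross[high].mean()      # approximate dead-time correction
    return freq, np.angle(cross)
```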
In the power spectrum obtained from the 244 $`\mu `$s data, no QPOs above 100 Hz were detected (upper limits of 2% rms). However, these data cover the whole RXTE energy band. Therefore, such QPOs could have been present but undetectable if their strength depended strongly on energy, similar to the kHz QPOs observed in many neutron star and some black hole systems (van der Klis 1998, 1999; Remillard et al. 1999; Homan, Wijnands, & van der Klis 1999). From the 3.9 ms data, we made a hardness-intensity diagram (Fig. 3a) and a hardness curve (b) with a time resolution of 2 s. For comparison, we also plotted the 2–22.1 keV light curve (Fig. 3c). As hardness we used the count rate ratio between 9.7–22.1 keV and 2–9.7 keV and as intensity the 2–22.1 keV count rate, but plotted on a logarithmic scale to show the variations more clearly. From Figure 3a, it is clear that during the flaring episode the hardness decreased when the count rate increased. When the transition to the quiescent state occurred the source became softer. During the last flare (in the quiescent state; Fig. 3c), the overall hardness was less than for the flares during the flaring episode.
## 3. Discussion
We have presented the X-ray variability of V4641 Sgr during its 1999 Sept. 15 bright event. It shows strong variations by a factor of 4 on time scales of seconds to variations by a factor of $``$500 on time scales of minutes. These strong variations are also evident in the power spectrum by the presence of strong (30%–55% rms amplitude, depending on energy) band-limited noise. Assuming the most likely distance of 0.5–1.0 kpc (Hjellming et al. 1999c; obtained by performing an HI absorption experiment against the radio counterpart) and using the maximum flux during our PCA observation (Markwardt et al. 1999b), the maximum intrinsic luminosity (2–60 keV) was 0.3–1.0 $`\times 10^{37}`$ erg s<sup>-1</sup> (note that it was at considerably higher luminosities during the ASM measurements). This luminosity, the strong variability on short time scales, and the presence of optical and infrared emission lines (see § 1) suggest that the X-rays are produced by accretion onto a compact object.
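The quoted luminosity range simply reflects the distance uncertainty; the conversion is the usual isotropic one, and the flux value in the example below is a placeholder standing in for the measured maximum 2–60 keV flux.

```python
import math

KPC_CM = 3.086e21

def luminosity(flux_cgs, d_kpc):
    """Isotropic luminosity [erg/s] from an observed flux [erg/cm^2/s]."""
    return 4.0 * math.pi * (d_kpc * KPC_CM) ** 2 * flux_cgs

# placeholder flux; the factor-of-4 spread comes from the 0.5-1.0 kpc distance range
for d_kpc in (0.5, 1.0):
    print(d_kpc, luminosity(8.0e-8, d_kpc))
```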
The exact nature of the compact object is difficult to determine. The likely intrinsic luminosity and the hardness of the spectrum (Markwardt et al. 1999b; see Smith et al. 1999c for a more detailed analysis) strongly indicate that accretion onto a white dwarf, a nova explosion, and thermonuclear burning on a white-dwarf surface cannot account for the X-ray emission and its properties. Compared to the intrinsically brightest low magnetic field neutron star and black hole systems, the luminosity of V4641 Sgr during our PCA observations was relatively low (the brightest systems have luminosities of $`>`$ $`10^{38}`$–$`10^{39}`$ erg s<sup>-1</sup>), which suggests that the accretion rate was also relatively low. At such low accretion rates, the rapid variability (van der Klis 1995; Wijnands & van der Klis 1999), including the phase lags (Ford et al. 1999), and the spectrum (e.g., Barret & Vedrenne 1994) of the low magnetic field neutron star and the black hole systems are very similar (van der Klis 1994), making it difficult to determine the exact nature of the compact object in V4641 Sgr. Although the very strong variability suggests a black hole primary in this system (usually such strong variability is only observed for black-hole systems), it cannot be excluded that some neutron star systems can also exhibit such strong variability. The decrease of the strength of the variability in V4641 Sgr with energy is similar to what has been observed in several black-hole systems in their low state (e.g., Nowak et al. 1999a; Nowak, Wilms, & Dove 1999b), but the neutron star systems have not been studied in enough detail in this respect to allow detailed comparisons between the different types of systems.
It was suggested that the nature of the compact objects in X-ray transients could be determined by studying and comparing the X-ray emission properties of those systems in their quiescent state (see, e.g., Rutledge et al. 1999). The luminosity of V4641 Sgr in its quiescent episode during the first PCA observation was 0.5–2.2 $`\times 10^{34}`$ erg s<sup>-1</sup>, but monitoring observations with the PCA of the Galactic-center region showed that during the last 7 months the luminosity had occasionally dropped to 0.6–2.4 $`\times 10^{33}`$ erg s<sup>-1</sup> (see Markwardt et al. 1999b for the fluxes used). However, these luminosities do not give more insight into the nature of the compact object. Both the observed quiescent luminosities and the derived upper limits for neutron star and black hole X-ray transients are consistent with the values detected for V4641 Sgr in its quiescent episode. More detailed studies in quiescence are needed to determine the lowest observed luminosity in quiescence. If V4641 Sgr contains a black hole, then it is expected that the luminosity should at times drop significantly below 10<sup>32</sup> erg s<sup>-1</sup>, as observed in other black-hole transients in quiescence.
Thus, the properties of V4641 Sgr as observed with the PCA are very similar to those observed for an accreting compact object which accretes matter from its companion star at a low rate. Although a neutron star primary cannot be excluded, the strong variability and the low intrinsic luminosity make an interpretation of V4641 Sgr as a black-hole candidate in the low state most favorable. The most promising way to determine the exact nature is to constrain the mass of the primary dynamically (during the quiescent state, from the optical line spectrum of the companion star). If the primary is truly a black hole, then several features have been observed for V4641 Sgr which are not commonly observed in black hole systems in their low state. For example, its very strong variability has only been observed in a few other systems (e.g., GRS 1915+105: Greiner, Morgan, & Remillard 1996; Belloni et al. 1997; Taam, Chen, & Swank 1997; GS 2023+338: Terada et al. 1994; Oosterbroek et al. 1997). But a major difference is that the intrinsic luminosities of those sources were much higher ($`>10^{39}`$ erg s<sup>-1</sup>; see, e.g., Belloni et al. 1997; Terada et al. 1994) than for V4641 Sgr. Another unusual property of V4641 Sgr is the very short time span of its bright X-ray event ($`<10`$ hours). In this respect, V4641 Sgr is similar to the recently discovered transient CI Cam (Smith et al. 1998). However, several differences are also present. The outburst of CI Cam was on slightly longer time scales (days) and much smoother (Belloni et al. 1999) than the event observed for V4641 Sgr, which exhibited very strong variability. So, although V4641 Sgr resembles several sources in some of its behavior, it differs from each of them significantly in other respects.
Reanalysis of the ASM archive revealed several short-lived events of V4641 Sgr similar to the Sept. 15 event (Smith et al. 1999c), which had previously gone unnoticed. The Sept. 15 event was noticed directly because (a) more attention was being paid to V4641 Sgr because of its sudden optical brightening (Stubbings 1999) and (b) this event was brighter (2–30 times) than the others. The short lifetimes of the events and the strong flux fluctuations indicate that the accretion is very unstable and highly irregular, and that not much accretion takes place. The fact that several events went unnoticed suggests that many sources with similar events also fail to get noticed, especially when they are at a greater distance than V4641 Sgr and therefore have lower fluxes. The exact number of such sources in our galaxy is difficult to estimate because of the uncertainties in the distances and the recurrence time scales of their outbursts. However, if V4641 Sgr harbors a black hole, it is clear that a sizeable number of the black holes in our galaxy could be present in V4641 Sgr-like systems.
# Enumerations of plane meanders
## 1 Introduction
Meanders form a set of combinatorial problems concerned with the enumeration of self-avoiding loops crossing a line through a given number of points. Meanders are considered distinct up to any smooth deformation leaving the line fixed. This problem seems to date back at least to the work of Poincaré on differential geometry. More recently it has been related to enumerations of ovals in planar algebraic curves and the classification of 3-manifolds. During the last decade or so it has received considerable attention in other areas of science. In computer science meanders are related to the sorting of Jordan sequences. In physics meanders are relevant to the study of compact foldings of polymers, properties of the Temperley-Lieb algebra, and defects in liquid crystals and $`2+1`$ dimensional gravity.
The difficulty in the enumeration of most interesting combinatorial problems is that, computationally, they are of exponential complexity. Initial efforts at computer enumeration of meanders have been based on direct counting. Lando and Zvonkin studied closed meanders, open meanders and systems of closed meanders, while Di Francesco et al. studied semi-meanders. In this paper we describe a new and improved algorithm, based on transfer matrix methods, for the enumeration of closed plane meanders. While the algorithm still has exponential complexity, the growth in computer time is much slower than that experienced with direct counting, and consequently the calculation can be carried much further. The algorithm is easily modified to enumerate systems of closed meanders, semi-meanders or open meanders.
## 2 Definitions of meanders
A closed meander of order $`n`$ is a closed self-avoiding curve crossing an infinite line $`2n`$ times. Fig. 1 shows some meanders. The meandric number $`M_n`$ is simply the number of such meanders distinct up to smooth transformations. We define the generating function
$$M(x)=\sum _{n=1}^{\infty }M_nx^n.$$
(1)
We can extend the definition to systems of closed meanders, by allowing configurations with disconnected closed meanders. The meandric numbers $`M_n^{(k)}`$ are the number of meanders with $`2n`$ crossings and $`k`$ components. An open meander of order $`n`$ is a self-avoiding curve running from west to east while crossing an infinite line $`n`$ times. The number of such curves is $`m_n`$. It should be noted that $`M_n=m_{2n1}`$. Finally, we could consider a semi-infinite line and allow the curve to wind around the end-point of the line. A semi-meander of order $`n`$ is a closed self-avoiding loop crossing the semi-infinite line $`2n`$ times. The number of semi-meanders of order $`n`$ is denoted by $`\overline{M}_n`$. In this case a further interesting generalisation is to study the number of semi-meanders $`\overline{M}_n(w)`$, which wind around the end-point of the line $`w`$ times.
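As a concrete illustration of these definitions (and a useful cross-check of any enumeration code), a closed meander of order $`n`$ can be viewed as a pair of non-crossing arc systems, one above and one below the line, whose union forms a single closed loop. The brute-force sketch below counts $`M_n`$ in this way for small $`n`$; it is hopelessly slow compared with the transfer matrix algorithm of the next section and is given for illustration only.

```python
def noncrossing_matchings(points):
    """All non-crossing perfect matchings of an ordered tuple of points."""
    if not points:
        return [[]]
    out = []
    for i in range(1, len(points), 2):      # the partner of points[0] must enclose an even block
        inside, outside = points[1:i], points[i + 1:]
        for m1 in noncrossing_matchings(inside):
            for m2 in noncrossing_matchings(outside):
                out.append([(points[0], points[i])] + m1 + m2)
    return out

def pair_map(matching):
    """Dictionary sending each crossing point to its partner along an arc."""
    m = {}
    for a, b in matching:
        m[a], m[b] = b, a
    return m

def meandric_number(n):
    """M_n: pairs of upper/lower non-crossing arc systems on 2n points forming one loop."""
    arcs = noncrossing_matchings(tuple(range(2 * n)))
    count = 0
    for upper in arcs:
        up = pair_map(upper)
        for lower in arcs:
            lo = pair_map(lower)
            # trace the curve from point 0, alternating upper and lower arcs
            visited, p, use_upper = 1, up[0], False
            while p != 0:
                p = up[p] if use_upper else lo[p]
                use_upper = not use_upper
                visited += 1
            if visited == 2 * n:
                count += 1
    return count

print([meandric_number(n) for n in range(1, 6)])    # [1, 2, 8, 42, 262]
```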
## 3 Enumeration of meanders
The method used to enumerate meanders is similar to the transfer matrix technique devised by Enting in his pioneering work on the enumeration of self-avoiding polygons. The first terms in the series for the meander generating function can be calculated using transfer matrix techniques. This involves drawing a boundary perpendicular to the infinite line. Meanders are enumerated by successive moves of the boundary, so that one crossing at a time is added to the meanders as illustrated in Fig. 2. At each position of the boundary we have a configuration of loop-ends closed to the left, and for each configuration we count all the possible meanders that could give rise to that particular configuration of loop-ends. Since the curve making up a meander is self-avoiding, each configuration can be represented by an ordered set of edge states $`\{x_i\}`$, where $`x_i=0`$ (1) indicates the lower (upper) part of a loop closed to the left of the boundary. In addition we need to know where the infinite line is situated within the loop-ends. This can be done simply by specifying how many loop-ends lie beneath the infinite line. Configurations are read from the bottom to the top. As an example we note that the configuration along the boundary of the meander in Fig. 2 at position 4 is $`\{2;001011\}`$.
We start with the configuration {1;01} with a count of 1, that is one loop crossing the infinite line. Next we move the boundary one step ahead and add a new crossing. So we either put in an additional loop or we take an existing loop-end immediately above or below the infinite line and drag it across the line. The first possibility is illustrated in Fig. 2 in going to position 2 where we get the configuration {2;0011}. Additional loops are also inserted while going to positions 4 and 7. As we cross the infinite line with an existing loop-end we may be allowed to connect it to the loop-end on the other side. In going to position 6 we connect a ‘1’ below the line to a ‘0’ above the line and no further processing is required. In going to position 8 or 9 a ‘0’ below the line is connected to a ‘0’ above the line. This requires further processing because in connecting two lower loop-ends an upper loop-end elsewhere in the old configuration becomes a lower loop-end in the new configuration. In going to position 8 we see that the configuration {2;000111} before the step forward becomes the configuration {1;0011} after the step. That is the upper end of the third loop before the step becomes the lower end of the second loop after the step. We refer the reader to for the detailed rules for relabeling of configurations. Finally, note that connecting a ‘0’ below the line to a ‘1’ above the line results in a closed loop. So this is only allowed if there are no other loops cut by the boundary and the result is a valid closed meander. As we move along and generate a new ‘target’ configuration its count is calculated by adding up the count for the various ‘source’ configurations which could generate that target. For example the ‘target’ {2;0011} could be generated from the ‘sources’ {1;01}, {1;0011}, {3;0011} and {3;001011}, by, respectively, putting in an additional loop, moving a loop-end below the line, moving a loop-end above the line and connecting two loop-ends across the line.
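To make the bookkeeping concrete, the sketch below encodes a boundary configuration as a pair (k, s), i.e. the number of loop-ends below the line and the bottom-to-top string of '0'/'1' labels, and implements only the two simplest boundary moves (inserting a new loop and dragging a loop-end across the line). The connection moves, which may require the relabelling rules referred to above, are deliberately omitted, so this is not a complete enumeration code.

```python
def insert_loop(state):
    """Add a new loop straddling the line: '0' just below it, '1' just above it."""
    k, s = state
    return (k + 1, s[:k] + "01" + s[k:])

def move_end_below(state):
    """Drag the loop-end immediately above the line to just below it."""
    k, s = state
    return (k + 1, s) if k < len(s) else None

def move_end_above(state):
    """Drag the loop-end immediately below the line to just above it."""
    k, s = state
    return (k - 1, s) if k > 0 else None

# These moves reproduce three of the four 'sources' of the target {2;0011} listed above
# (the fourth source requires the omitted connection move):
print(insert_loop((1, "01")))        # (2, '0011')
print(move_end_below((1, "0011")))   # (2, '0011')
print(move_end_above((3, "0011")))   # (2, '0011')
```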
The number of configurations which need to be generated in a calculation of $`M_n`$ is restricted by the fact that at each step we change the number of loop-ends above and/or below the infinite line by at most one. So if we have already taken $`k`$ steps then there can be at most $`2n-k`$ loop-ends above or below the line. Any configurations violating this criterion can be discarded. Furthermore we can reduce the number of distinct configurations by a factor of two by using the symmetry with respect to reflection in the infinite line.
As we noted above, connecting a ‘0’ below the line to a ‘1’ above the line results in a closed loop. Failure to observe the restriction on this closure would result in graphs with disconnected components, either one closed meander over another or one closed meander within another. Obviously these are just the types of graphs required in order to enumerate systems of closed meanders. So by noting that each such closure adds one more component it is straightforward to generalise the algorithm to enumerate systems of closed meanders. Open meanders are a little more complicated. Suffice it to say at this stage that the main part of the necessary generalisation consists in adding an extra piece of information. We have to add a free end and specify its position within the configuration of loop-ends. In order to enumerate semi-meanders we simply change the initial configuration, and start in a position just before the first crossing of the semi-infinite line with $`w`$ loops nested within one another. By running the algorithm for each $`w`$ from 0 to $`n`$ we count semi-meanders with up to $`n`$ crossings.
Using the new algorithm we have calculated $`M_n`$ up to $`n=24`$, as compared to the previous best of $`n=17`$ obtained by V. R. Pratt. To fully appreciate the advance it should be noted that the computational complexity grows exponentially, that is, the time required to obtain the $`n`$th term grows asymptotically as $`\lambda ^n`$. For direct enumerations the time is simply proportional to $`M_n`$ and thus $`\lambda =\lim _{n\to \infty }M_{n+1}/M_n\approx 12.26`$. The transfer matrix method employed in this paper is far more efficient and the numerical evidence suggests that the computational complexity is such that $`\lambda \approx 2.5`$. Another way of gauging the improved efficiency is to note that the semi-meander calculations reported previously were “done on the parallel Cray-T3D (128 processors) of the CEA-Grenoble, with approximately 7500 hours $`\times `$ processors,” or in total about 100 years of CPU time. The equivalent calculations can be done with the transfer matrix algorithm in about 15 minutes on a single processor DEC-Alpha workstation! The price we have to pay is that, unlike for direct enumeration, memory use grows exponentially with growth factor $`\lambda `$.
## 4 Results and analysis
The enumerations undertaken thus far are too numerous to detail here. We thus only give the results for $`M_n`$, which are listed in Table 1. The series for the meander generating function is characterised by coefficients which grow exponentially, with sub-dominant term given by a critical exponent. The generating function behaviour is $`M(x)=\sum _nM_nx^n\sim A(x)(x_c-x)^\xi `$, and hence the coefficients of the generating function behave as $`M_n=[x^n]M(x)\sim \sigma ^n/n^{\xi +1}\sum _ic_i/n^{f(i)}`$, where $`\sigma =1/x_c`$ is the connective constant. We analyzed the series by the numerical method of differential approximants, and obtained the very accurate estimates $`x_c=0.08154695(10)`$ and $`\xi =2.4206(4)`$, and we thus find that the connective constant $`\sigma =12.262874(15)`$. Having obtained these accurate estimates we used them to fit the asymptotic form of the coefficients to the formula above. The results were fully consistent with $`f(i)=i`$. There were no signs of half-integer or other powers, showing that there do not appear to be any non-analytic correction terms to the generating function.
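The differential-approximant analysis itself is beyond a few lines of code, but the idea behind extracting $`\sigma `$ and $`\xi `$ from the coefficients can be illustrated with the much cruder ratio method: since $`M_n\sim \sigma ^n/n^{\xi +1}`$, the ratios $`r_n=M_{n+1}/M_n`$ approach $`\sigma (1-(\xi +1)/n)`$, so a linear fit of $`r_n`$ against $`1/n`$ gives rough estimates. With only the handful of terms used below (the first few meandric numbers, which can be cross-checked with the brute-force sketch given after the definitions above) the extrapolation is necessarily poor; the 24-term series and differential approximants used here are far more precise.

```python
import numpy as np

# First few meandric numbers M_1..M_7; the full Table 1 extends to n = 24.
M = np.array([1, 2, 8, 42, 262, 1828, 13820], dtype=float)

n = np.arange(1, len(M))                 # n labels the ratio r_n = M_{n+1}/M_n
ratios = M[1:] / M[:-1]
slope, intercept = np.polyfit(1.0 / n, ratios, 1)
sigma_est = intercept                    # value extrapolated to 1/n -> 0
xi_est = -slope / sigma_est - 1.0
print(sigma_est, xi_est)                 # crude, to be compared with 12.26... and 2.42...
```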
## 5 Conclusion
We have presented an improved algorithm for the enumeration of closed meanders. The computational complexity of the algorithm is estimated to be $`2.5^n`$, much better than direct counting algorithms which have complexity $`12.26^n`$. Implementing this algorithm enabled us to obtain closed meanders up to order 24. From our extended series we obtained precise estimates for the connective constant and critical exponent. An alternative analysis provides very strong evidence for the absence of any non-analytic correction terms to the proposed asymptotic form for the generating function.
## Acknowledgments
This work was supported by a grant from the Australian Research Council.
# ON THE MAXIMUM MASS OF DIFFERENTIALLY ROTATING NEUTRON STARS
## 1. Introduction
One of the most important characteristics of a neutron star is its maximum allowed mass. The maximum mass is crucial for distinguishing between neutron stars and black holes in compact binaries and in determining the outcome of many astrophysical processes, including supernova collapse and the merger of binary neutron stars.
Observations of binary pulsars suggest that the individual stars in such systems have masses very close to 1.4 $`M_{\odot }`$ (Thorsett et al. 1993). If mass-loss during the final binary coalescence can be neglected, the remnant of such a merger will then have a rest mass exceeding 3 $`M_{\odot }`$. If this mass is larger than the maximum allowed mass for neutron stars, then the merger will lead to prompt collapse to a black hole on a dynamical timescale ($`\sim `$ms). If, however, neutron stars can support such a high mass, at least temporarily, then the merger may result in a high mass, quasi-equilibrium neutron star, which only later may collapse to a black hole. The two different outcomes may have important consequences for gravitational wave signals and, possibly, gamma ray burst models.
The maximum mass of a cold, nonrotating, spherical neutron star is uniquely determined by the Tolman-Oppenheimer-Volkoff equations and depends only on the cold equation of state. For most recent equations of state this maximum mass is in the range of 1.8 – 2.3 $`M_{\odot }`$ (Akmal, Pandharipande and Ravenhall 1998), significantly smaller than the mass expected for the remnant of a binary neutron star merger.
Thermal pressure and uniform rotation can provide additional support, and may stabilize slightly more massive stars. In this paper, we point out that differential rotation can significantly increase the maximum allowed mass of neutron stars and may temporarily stabilize the remnant of binary neutron star mergers. Recent fully relativistic simulations of binary neutron star mergers (Shibata & Uryu 1999) show that merger remnants are indeed differentially rotating (as suggested by several Newtonian simulations) and that they may support masses much larger than the maximum allowed mass of spherical stars.
In the case of a head-on collision from infinite separation, thermal pressure alone may support the merged remnant of the progenitors (Shapiro 1998). Thermal pressure is likely to have a much smaller effect for coalescence from the innermost stable circular orbit, since shock heating on impact is less pronounced, and the heat will be dissipated by neutrino emission in $`\sim `$10 s.
Rotation can further increase the maximum allowed mass. The maximum mass of a uniformly rotating star is determined by the spin rate at which the fluid at the equator moves on a geodesic and any further speedup would lead to mass shedding. This maximum mass can be determined numerically and is found to be at most $`20`$% larger than the nonrotating value (e.g. Cook, Shapiro, & Teukolsky, 1992, hereafter CST; 1994, and references therein). It is therefore unlikely that uniform rotation could support the remnant of a binary neutron star merger. Rotating equilibrium configurations with rest masses exceeding the maximum rest mass of nonrotating stars constructed with the same equation of state are referred to as “supramassive” stars (CST).
The merger of a binary neutron star system, however, will not result in a uniformly rotating object, especially since the neutron stars are likely to be close to being irrotational before merger (Bildsten & Cutler 1992; Kochanek 1992). The remnant is likely to be differentially rotating (see Rasio & Shapiro 1999 for discussion and references; Shibata & Uryu 1999). The star’s core may then rotate faster than the envelope, and it is easy to imagine that such a star could support a significantly larger mass than its uniformly rotating counterpart (see also Ostriker, Bodenheimer & Lynden-Bell, 1966, where this effect was demonstrated for white dwarfs). We refer to equilibrium configurations with rest masses exceeding the maximum rest mass of a uniformly rotating star as “hypermassive” stars.
In contrast to the maximum mass of the nonrotating and uniformly rotating stars, the maximum mass of differentially rotating stars cannot be uniquely defined, since the value will depend on the chosen differential rotation law. In principle, one might even construct an extensive Keplerian disk around the equator of the star, possibly increasing the mass of the star by large amounts. Instead of constructing such extreme configurations, we seek to determine whether a reasonable degree of differential rotation can have a significant effect on the maximum mass of neutron stars.
Here we adopt a polytropic equation of state and a simple rotation law to explore the effects of differential rotation on the maximum mass. We construct relativistic equilibrium models and find that even for modest degrees of differential rotation the maximum mass increases significantly, easily surpassing the likely remnant mass of a binary neutron star merger. We then evolve high-mass models dynamically in full general relativity, and find that there do exist models which are dynamically stable against both radial collapse and bar formation. These are plausible candidates for binary neutron star remnants.
## 2. Equilibrium
We adopt a polytropic equation of state $`P=K\rho _0^{1+1/n}`$, where $`P`$ is the pressure and $`\rho _0`$ the rest-mass density. We take the polytropic constant $`K`$ to be unity without loss of generality, and choose the polytropic index $`n=1`$. (Since $`K^{n/2}`$ has units of length, all solutions scale according to $`\overline{M}=K^{n/2}M`$, $`\overline{J}=K^nJ`$, $`\overline{\mathrm{\Omega }}=K^{-n/2}\mathrm{\Omega }`$, etc., where the barred quantities are physical quantities, and the unbarred quantities are our dimensionless quantities corresponding to $`K=1`$; compare CST.)
Relativistic equilibrium models of rotating stars have been constructed by several authors, including Butterworth & Ipser (1975), Friedman, Ipser & Parker (1986), Komatsu, Eriguchi, & Hachisu (1989), CST, Bonazzola et al. (1993), and Stergioulas & Friedman (1995). A comparison between several different methods can be found in Nozawa et al. (1998). We use the numerical code developed by CST, which is based on the formalism of Komatsu, Eriguchi, & Hachisu (1989).
Constructing differentially rotating neutron star models requires choosing a rotation law $`F(\mathrm{\Omega })=u^tu_\varphi `$, where $`u^t`$ and $`u_\varphi `$ are components of the four velocity $`u^\alpha `$, and $`\mathrm{\Omega }`$ is the angular velocity. For simplicity we follow CST and consider the rotation law $`F(\mathrm{\Omega })=A^2(\mathrm{\Omega }_c-\mathrm{\Omega })`$, where $`\mathrm{\Omega }_c`$ denotes the central angular velocity and where the parameter $`A`$ has units of length. Expressing $`u^t`$ and $`u_\varphi `$ in terms of $`\mathrm{\Omega }`$ and metric potentials yields eq. (42) in CST, or, in the Newtonian limit, $`\mathrm{\Omega }=\mathrm{\Omega }_c/(1+\widehat{A}^{-2}\widehat{r}^2\mathrm{sin}^2\theta )`$. Here we have rescaled $`A`$ and $`r`$ in terms of the equatorial radius $`R_e`$: $`\widehat{A}=A/R_e`$ and $`\widehat{r}=r/R_e`$. The parameter $`\widehat{A}`$ is a measure of the degree of differential rotation and determines the length scale over which $`\mathrm{\Omega }`$ changes. Since uniform rotation is recovered in the limit $`\widehat{A}\to \infty `$, it is convenient to parametrize sequences by $`\widehat{A}^{-1}`$.
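For orientation, a few lines suffice to evaluate the Newtonian limit of this rotation law; note that $`\mathrm{\Omega }`$ falls to half its central value at the equator for $`\widehat{A}=1`$, whereas the fully relativistic models discussed below show a somewhat larger contrast (roughly a factor of three in rotation period between centre and equator).

```python
import numpy as np

def omega_newtonian(r_hat, theta, omega_c, a_hat):
    """Newtonian limit of the rotation law: Omega = Omega_c / (1 + (r_hat*sin(theta)/A_hat)^2)."""
    return omega_c / (1.0 + (r_hat * np.sin(theta) / a_hat) ** 2)

# Equatorial value for the degree of differential rotation highlighted in the text (A_hat^-1 = 1):
print(omega_newtonian(r_hat=1.0, theta=np.pi / 2, omega_c=1.0, a_hat=1.0))   # 0.5
```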
We construct axisymmetric differentially rotating models using a modified version of the scheme adopted in CST. Instead of fixing the central density in the iteration scheme for each model, we fix the maximum density. This change allows us to construct higher mass models in some cases, since the central density does not always coincide with the maximum density, and hence may not specify a model uniquely. Given a value of $`\widehat{A}`$, we construct a sequence of models for each value of the maximum density by starting with a static, spherically symmetric star and then decreasing the ratio of the polar to equatorial radius, $`R_{pe}=R_p/R_e`$, in small increments. This sequence ends when we reach mass shedding (for large values of $`\widehat{A}`$), or when the code fails to converge (indicating the termination of equilibrium solutions), or when $`R_{pe}=0`$ (beyond which the star would become a toroid).
In Fig. 1 we show the maximum-mass values in each sequence as a function of the maximum value of the mass-energy density $`ϵ`$ for different values of $`\widehat{A}`$. Even for modest differential rotation, we can construct models with masses much higher than the maximum mass for static and uniformly rotating stars. Some of these models exceed the Kerr limit $`J/M^2>1`$, where $`J`$ is the angular momentum and $`M`$ the total mass energy of the star.
## 3. Stability
Nonrotating spherical stars are dynamically stable (unstable) against radial modes if $`\partial M/\partial ϵ_c>0`$ ($`\partial M/\partial ϵ_c<0`$), where $`ϵ_c`$ is the central energy density. The same criterion can be applied to sequences of uniformly rotating stars of constant $`J`$ to determine secular stability (Friedman, Ipser, & Sorkin 1988). Exact criteria do not exist for the dynamical stability of rotating stars; however, numerical simulations of uniformly rotating models suggest that the onset of dynamical instability is very close to the onset of secular instability (Shibata, Baumgarte, & Shapiro 1999a).
As an indication of the stability of our models against nonaxisymmetric bar-mode formation, we have indicated values of the ratio of their kinetic energy $`T`$ to potential energy $`W`$, $`\beta \equiv T/|W|`$, in Fig. 1 (see CST for relativistic definitions of these quantities). Newtonian stars develop bars on a dynamical timescale when $`\beta \gtrsim \beta _{\mathrm{dyn}}=0.27`$, while they develop bars on a secular timescale for $`\beta \gtrsim \beta _{\mathrm{sec}}=0.14`$ via gravitational radiation or viscosity (Chandrasekhar 1969, 1970; Houser, Centrella, & Smith 1994). For relativistic stars, $`\beta _{\mathrm{sec}}`$ for gravitational wave-driven bars is somewhat smaller than for Newtonian stars (Stergioulas & Friedman 1998), while $`\beta _{\mathrm{sec}}`$ for viscosity-driven bars is slightly larger (Bonazzola, Frieben, & Gourgoulhan 1996; Shapiro & Zane 1998).
To investigate the dynamical stability of our equilibrium models, we insert them as initial data in a dynamical simulation and evolve them in time. We employ a fully relativistic code that solves Einstein’s equations coupled to hydrodynamics in three spatial dimensions plus time (Shibata 1999).
As a candidate for a dynamically stable star, we evolve the model with $`\widehat{A}^{-1}=1.0`$, $`ϵ_{\mathrm{max}}=0.073`$ and $`R_{pe}=0.3`$ (marked with a dot in Fig. 1). This model has a rest mass about 60% higher than the maximum nonrotating rest mass, $`\beta \approx 0.23`$, $`R_e/M\approx 5`$ and $`J/M^2\approx 1`$, and is plotted in Fig. 2. The orbital period at the equator is about three times the orbital period at the center. We show contours at $`t=0`$ and after 3.15 orbital periods at the center. Clearly, this model is dynamically stable against both quasi-radial collapse to a black hole and bar formation, even when small perturbations are included initially. This demonstrates that differentially rotating stars can stably support significantly higher masses than uniformly rotating stars for longer than a dynamical timescale. A more systematic study of the dynamical stability of differentially rotating neutron stars will be presented in a forthcoming paper (Shibata, Baumgarte, & Shapiro 1999b).
Dynamically stable differentially rotating neutron stars are subject to various secular instabilities. The timescale for gravitational-wave driven bar-mode formation can be estimated from
$$\tau _{\mathrm{bar}}\sim \left(\frac{M}{3M_{\odot }}\right)^{-3}\left(\frac{R}{15\text{km}}\right)^4\left(\frac{\beta -\beta _{\mathrm{sec}}}{0.1}\right)^{-5}\text{s}$$
(1)
(Friedman & Schutz 1975, 1978), where the average radius $`R`$ and mass $`M`$ are scaled to values appropriate for a binary merger remnant. For $`\beta \approx 0.2`$, this yields timescales of $`\sim `$10 s. The final fate of bar-unstable stars is not known, except for incompressible Newtonian spheroids, where in the presence of gravitational radiation and viscosity they evolve to Jacobian or Dedekind ellipsoids (Chandrasekhar 1969; Miller 1974; Shapiro & Teukolsky 1983; Lai & Shapiro, 1995). Gravitational waves may also drive an $`r`$-mode instability for arbitrarily small rotation rates (see, e.g., Lindblom, Owen & Morsink 1998). For the hot remnants of binary neutron star mergers, however, these modes may be suppressed by bulk viscosity.
Magnetic braking and viscosity will eventually bring the star into uniform rotation. When a hypermassive star is driven to uniform rotation by viscosity or magnetic fields, it will undergo catastrophic collapse and/or mass loss. The lifetime of a hypermassive star is therefore set by these dissipative processes. If $`J/M^2>1`$, angular momentum must be dissipated either by radiation or mass loss before the star can form a Kerr black hole (cf. Baumgarte & Shapiro 1998), which may produce a massive, hot and thick disk around the newly formed black hole.
A frozen-in magnetic field will be wound up by differential rotation, which may create very strong toroidal fields. This process will generate Alfvén waves, which can redistribute and even carry off angular momentum. The timescale $`\tau _B`$ for this magnetic braking mechanism is related to the Alfvén speed $`v_A=B/(4\pi \rho )^{1/2}`$ according to
$$\tau _B\sim \frac{R}{v_A}\sim 10^2\left(\frac{B}{10^{12}\text{G}}\right)^{-1}\left(\frac{R}{15\text{km}}\right)^{-1/2}\left(\frac{M}{3M_{\odot }}\right)^{1/2}\text{s}.$$
(2)
Here $`B`$ is the initial poloidal field along the gradient of $`\mathrm{\Omega }`$. Strong poloidal magnetic fields can increase the maximum allowed mass of neutron stars (Bocquet et al. 1995) and contribute to the dissipation of angular momentum by dipole radiation, but are subject to a variety of MHD instabilities (e.g. Spruit 1999a, 1999b).
Since the fluid flow in differentially rotating equilibrium stars is divergence-free, the viscous timescale $`\tau _V`$ is determined by shear viscosity
$$\tau _V\sim \frac{\rho R^2}{4\eta }\sim 10^9\left(\frac{R}{15\text{km}}\right)^{23/4}\left(\frac{T}{10^9\text{K}}\right)^2\left(\frac{M}{3M_{\odot }}\right)^{-5/4}\text{s},$$
where $`\eta =347\rho ^{9/4}T^{-2}`$ (cgs) (Cutler & Lindblom 1987). Molecular viscosity alone is likely to be less effective in bringing the star into uniform rotation than magnetic braking. Nascent neutron stars may also be subject to convective instabilities (e.g. Pons et al. 1999), but the role of convection in rotating magnetic stars is not well understood (cf. Tassoul 1978).
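A rough numerical sketch of the scaling relations above, evaluated at the fiducial values used in eqs. (1) and (2); the mean-density estimate $`\rho \approx 3M/4\pi R^3`$ used for the Alfvén speed and the viscosity is an assumption of the sketch.

```python
import numpy as np

MSUN = 1.989e33   # g

def tau_bar(M, R, beta, beta_sec=0.14):
    """Eq. (1): bar-mode growth timescale in seconds (M in g, R in cm)."""
    return (M / (3 * MSUN)) ** -3 * (R / 1.5e6) ** 4 * ((beta - beta_sec) / 0.1) ** -5

def tau_magnetic(B, M, R):
    """Eq. (2): Alfven (magnetic braking) timescale tau_B ~ R / v_A, in seconds."""
    rho = 3.0 * M / (4.0 * np.pi * R ** 3)        # mean density
    return R * np.sqrt(4.0 * np.pi * rho) / B

def tau_viscous(M, R, T):
    """Shear-viscosity timescale tau_V ~ rho R^2 / (4 eta), eta = 347 rho^(9/4) T^(-2) (cgs)."""
    rho = 3.0 * M / (4.0 * np.pi * R ** 3)
    eta = 347.0 * rho ** 2.25 / T ** 2
    return rho * R ** 2 / (4.0 * eta)

M, R = 3 * MSUN, 1.5e6
print(tau_bar(M, R, beta=0.2))      # ~ 10 s
print(tau_magnetic(1e12, M, R))     # ~ 1e2 s
print(tau_viscous(M, R, 1e9))       # ~ 1e9 s
```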
For weak magnetic fields and high values of $`\beta `$, the neutron star merger remnant is likely to develop a bar. The accompanying quasi-periodic gravitational wave signal may be observable by the new generation of gravitational wave laser interferometers under construction (Lai & Shapiro 1995; Shibata, Baumgarte, & Shapiro 1999b).
For strong magnetic fields and small values of $`\beta `$, magnetic braking is likely to dominate the evolution of differentially rotating neutron stars, and may alter the velocity profile within minutes. On this timescale, differential rotation will no longer be able to support hypermassive stars formed in a binary merger. In the resulting delayed collapse, a brief secondary burst of gravitational waves will be emitted. The frequency of this secondary burst may be quite high (the frequency of the fundamental quasi-normal mode of a Schwarzschild black hole is $`\omega \approx 0.37M^{-1}`$, which yields $`f\approx 4`$ kHz for $`M=3M_{\odot }`$; the frequency of the axisymmetric mode is slightly higher for a Kerr black hole, Leaver 1985), but since the angular momentum parameter $`J/M^2`$ may be close to unity, the amplitude could be large enough to be observable by an advanced generation of gravitational wave detectors (Stark & Piran, 1985). If the orbital parameters, including the masses and radii of the stars, can be determined during the inspiral and early merger phase, and if the time of the initial coalescence can be inferred from the initial burst signal, then the measurement of this delay in the final collapse may provide an estimate for the strength of the wound-up magnetic field in the interior of the merged neutron star.
## 4. Discussion
We find that the maximum mass of a differentially rotating star can be significantly higher than that of nonrotating or uniformly rotating stars, even for modest degrees of differential rotation. As an immediate consequence, it is possible that binary neutron star coalescence does not lead to a prompt black hole formation, but that, instead, a differentially rotating, hypermassive quasiequilibrium neutron star is formed. This has important consequences for the gravitational wave signal from such an event and possibly for the prospects of explaining gamma ray bursts by binary neutron star mergers.
Pulsars are likely to be uniformly rotating, since magnetic braking and viscosity will bring any initially differentially rotating stars into uniform rotation. The well established maximum masses of uniformly rotating neutron stars are therefore relevant for old neutron stars, including millisecond pulsars, while our much higher maximum masses may be relevant for nascent neutron stars in a transient phase in a supernova, following fallback, or in a merged binary.
This work was supported by NSF Grants AST 96-18524 and PHY 99-02833 and NASA Grants NAG 5-7152 and NAG 5-8418 at the University of Illinois at Urbana-Champaign. Numerical computations were performed on the FACOM VX/4R machine in the Data Processing Center of NAOJ, and at the National Center for Supercomputing Applications at the University of Illinois. M. Shibata gratefully acknowledges the kind hospitality at the Department of Physics of the University of Illinois, and support through a JSPS Fellowship for Research Abroad.
## I Introduction
For almost half a century, the number of silicon MOSFET’s on a single commercial chip has approximately doubled every eighteen months. There is little doubt that this exponential growth will continue throughout the next decade, allowing for a minimum linear scale of 50 nm at around 2010. At shorter lengths, significant technological problems arise. However, silicon MOSFET’s with channel length approaching 30 nm have been studied widely in the past few years, both theoretically and experimentally (for a review see Refs. ).
A further reduction of MOSFET’s linear size in bulk production will inevitably require radical advances in lithography and doping technologies, and possibly also require a change in the basic transistor geometry. However, we are not aware of any fundamental limitation on the performance of sub-30 nm scale MOSFET’s. In fact, it was very recently shown that a double-gate silicon-on-oxide (SOI) transistor with a gate length of 18 nm can exhibit an acceptable $`I_{on}/I_{off}`$ current ratio of about eight orders of magnitude. In the present work we study theoretically transistors with even shorter gates, of length between 3 and 10 nm. We show that such devices (with channel length as short as 8 nm) can exhibit performance sufficient for both logic and memory applications.
Three fundamental differences exist between ordinary, 100-nm-scale MOSFET’s, and short, 10-nm-scale devices. First, as the short channel transistors are of length comparable to or smaller than the scattering mean free path of electrons in the channel, electron transport in them is essentially ballistic. This is in contrast to drift-diffusion transport in long transistors. Second, at 10-nm length scales, quantum mechanical tunneling can be significant, and as will be shown below, may actually dominate at some parameter range. Lastly, as the short transistors are of length comparable to the electrostatic screening length of electrons, one dimensional approximations are not sufficient to describe the electrostatic potential in the system, and a full, two-dimensional solution of the Poisson equation is required. Moreover, in contrast to long devices, we will show that finite penetration of electric field into the source, drain, and gate electrodes can crucially affect the source-drain current in the short-channel MOSFET’s, and cannot be neglected.
Previous works attempting to model short-channel MOSFET’s usually took into account the ballistic nature of electron transport, but relied on one-dimensional approximations, and neglected electron tunneling and finite fields in the electrodes. Natori was the first to employ this approach. He calculated the I-V characteristics of both single-gate and dual-gate MOSFET’s, and showed that, within this simplistic approach, the current exhibits a strong saturation due to the exhaustion of all source electrons by large source-drain voltage. Within the same approximations, Lundstrom arrived at a phenomenological equation which allows for a treatment of finite backscattering of electrons. This approach was extended very recently, and the on-current, and in particular the effects of higher subbands on the current, were studied. However, short-channel effects such as drain-induced barrier lowering (DIBL) cannot be described within this one-dimensional approximation, and therefore the authors of Ref. limited themselves to devices of the order of 100 nm. A previous study by Pikus and Likharev of short-channel ballistic MOSFET’s also relied on a one-dimensional approximation of the electrostatics in the system, and neglected quantum tunneling and backscattering. Thus, the results of that work (mainly, that a 5-nm gate-length transistor may perform sufficiently well) could be questioned. Finally, Wong et. al. studied ballistic MOSFET’s with a somewhat longer minimal channel length of around 15 nm. They concentrated on two-dimensional short channel effects, but studied only the closed state (so only the Laplace equation had to be solved). At these length scales neglect of tunneling is justified.
The aim of the present work was to model the transport in ultrashort silicon MOSFET’s, with channel length of around 10 nanometers. Our modeling uses as few simplifying assumptions as possible, the main one being the neglect of backscattering, an assumption which is justified by existing mobility data, and by our choice of transistor geometry – see the next section. We take full account of the ballistic nature of transport in the transistor and of quantum mechanical tunneling. We solve the full, two-dimensional, Poisson equation not only in the channel, but also in all electrodes, thus allowing for charging effects inside the source, drain, and gates. We study both the on and off states.
## II Model
The geometry of the transistor under consideration in this paper is shown schematically in Fig. 1(a). As appears to be the consensus on the optimal general design for ultrasmall MOSFET’s, we study here a double-gate SOI structure. A thin silicon channel of length $`L`$, width $`W`$, and thickness $`t\ll W`$ connects two bulk source and drain polysilicon electrodes. The electrodes are heavily doped to a donor density $`N_D`$, while the silicon layer is undoped. Both the electrodes and the channel are separated from two polysilicon gate electrodes, doped to a density $`N_G`$, by silicon oxide layers of identical thickness $`t_{ox}`$. The source-drain voltage $`V=E_D-E_S`$ and gate voltage $`V_g=E_G-E_S`$ are determined from the Fermi levels $`E_S`$, $`E_D`$, and $`E_G`$ in the source, drain, and gate electrodes, respectively. A typical energy profile $`\mathrm{\Phi }(x)`$ in the center of the silicon layer is shown in Fig. 1(b). $`\mathrm{\Phi }_0`$ denotes the peak of the potential energy in the channel.
Motion in the thin silicon channel is affected by lateral quantum confinement. In this work we consider only the case where the channel is so thin that $`E_S`$, $`E_D`$, and temperature $`T`$ are well below the second subband energy, so only the first subband participates in transport. This assumption leads to an important aspect of our model: due to the mismatch in phase space between the strictly 2D channel and the bulk, 3D source and drain electrodes, an electron impinging from the channel on one of the electrodes would be absorbed by that electrode, i.e., would have only a negligible probability of backscattering (from source/drain impurities or phonons) into the channel. If the scattering mean free path in the electrodes is larger than $`t`$ then, due to the mismatch also in real space, the already negligible backscattering probability becomes even smaller.
Thus, in our geometry, the usual transport model of ballistic transistors holds to a very good approximation. Within this model electron distribution inside the source and drain electrodes is the equilibrium distribution at the lattice temperature $`T`$. Source electrons impinging on the channel are absorbed by it, and then travel ballistically inside the channel. However, only electrons with energy higher than $`\mathrm{\Phi }_0`$ are certain to arrive at (and be absorbed by) the drain electrode. Electrons with lower energy are transmitted through the potential barrier with the quantum mechanical tunneling probability $`\mathrm{\Theta }<1`$, and reflected from it with probability $`1\mathrm{\Theta }`$. The reflected electrons travel back ballistically towards the source electrode, where they are absorbed \[see Fig. 1(b)\]. A similar (but opposite) process describes the drain electrons.
All electrons in the channel and in the electrodes, as well as the ionized donors in the electrodes, contribute to the local electrostatic potential. The self consistent solution of this Poisson-transport problem is at the heart of our calculations. Once the local potential and electron density are known for any given set of $`V`$ and $`V_g`$, the source-drain current $`I`$ can be calculated as the difference between the source-to-drain current and the drain-to-source current. The gate current due to oxide leakage can also be calculated within the regular tunneling approach.
## III Theory
The formulation of the theory describing the above model is similar to previous works by Natori, Lundstrom and co-workers, and especially by Pikus and Likharev. The two main differences in the theory between this and the previous works are that there is no attempt here to reduce the full Poisson equation to a one-dimensional approximation, and the source-drain tunneling current is now taken into account. Also, in the present work the quantization energy $`E_1`$ needs to be taken explicitly into account because it is present only in the channel.
The basic equation to be solved in the channel, the oxide layers, and the source, drain, and gate electrodes is the Poisson equation for $`\mathrm{\Phi }(x,z)`$, where $`x`$, $`z`$ are the directions along and perpendicular to the channel length, respectively (see Fig. 1):
$$\nabla ^2\mathrm{\Phi }(x,z)=\frac{4\pi }{ϵ(x,z)}\rho (x,z).$$
(1)
Here $`ϵ(x,z)`$ is the local dielectric constant and $`\rho (x,z)`$ the local three dimensional charge density. The boundary conditions for Eq. (1) at $`x,z\to \pm \infty `$ are given in terms of the applied voltages $`V`$, $`V_g`$, which determine the differences between the Fermi levels $`E_S`$, $`E_D`$, and $`E_G`$ (the boundary conditions are taken at distances much larger than the screening length in the electrodes, where charge neutrality allows for a unique determination of the local potential from the known values of the local Fermi energies and donor densities).
The charge density in the structure is given by
$`\rho (x,z)=`$ $`\{\begin{array}{cc}-\frac{2q}{t}\mathrm{cos}^2\left(\frac{\pi }{t}z\right)n_2\left[\mathrm{\Phi }(x,z)\right]\hfill & \left[(x,z)\in 𝒞\right]\hfill \\ qN_{D,G}-\frac{3\left(2\overline{m}\right)^{3/2}q}{\pi ^2\hbar ^3}\int _{\mathrm{\Phi }(x,z)}^{\infty }\frac{\left[E-\mathrm{\Phi }(x,z)\right]^{1/2}}{\mathrm{exp}\left(\frac{E-E_F}{T}\right)+1}dE\hfill & \left[(x,z)\in ℰ\right]\hfill \\ 0\hfill & \mathrm{otherwise},\hfill \end{array}`$ (5)
where $`𝒞`$ and $`ℰ`$ denote the channel and electrode (source, drain, and gate) regions, respectively, $`E_F`$ is the Fermi level in the corresponding electrode, $`\overline{m}`$ is the three-dimensional density-of-states effective mass, and $`q>0`$ is the magnitude of the electron charge. The two-dimensional electron density $`n_2`$ is the sum of all partial densities of electrons at energy $`E_x=\mathrm{\Phi }(x,z)+m_1[v_x(x,z)]^2/2`$, with $`𝐯`$ the electron velocity and $`m_1`$ the mass of the electron in the direction of the current \[which, due to quantization (and assuming (100) orientation), is the light electron mass\]. In Eq. (5) $`n_2`$ is multiplied by the cosine quantum factor in order to obtain the 3D density in the channel. In the electrodes, Eq. (5) uses the equilibrium Fermi-Dirac density of electrons, while in the oxide $`\rho `$ is taken to be zero.
$$n_2^\pm [E_x,\mathrm{\Phi }(x)]=\frac{J^\pm (E_x)}{qv[E_x,\mathrm{\Phi }(x)]},$$
(6)
with $`J^\pm (E_x)`$ the partial current density in each direction. $`J^\pm (E_x)`$ is uniform throughout the length of the structure and can be calculated by using the equilibrium distribution in the source and drain electrodes. In Eq. (5) $`n_2`$ is multiplied by the cosine quantum factor in order to obtain the 3D density in the channel. In the electrodes, Eq. (5) uses the equilibrium Fermi-Dirac density of electrons, while in the oxide $`\rho `$ is taken to be zero.
Once equations (1) and (5) are solved simultaneously, the source-drain current can be calculated as the sum over all partial currents, each multiplied by the energy-dependent quantum transmission probability through the potential in the channel:
$$J=\frac{q}{\pi ^2\hbar }\int _{-\infty }^{\infty }dk_y\int _0^{\infty }dE_x\mathrm{\Theta }(E_x)\left[f_0(E)-f_0(E+V)\right],$$
(7)
where $`f_0`$ is the Fermi function,
$$E=E_1+E_x+\frac{\hbar ^2k_y^2}{2m_1},$$
(8)
and the transmission probability is given by the WKB result
$`\mathrm{\Theta }(E_x)=`$ $`\{\begin{array}{cc}\mathrm{exp}\left\{-2\int _{x_1}^{x_2}\frac{\sqrt{2m_1\left[\mathrm{\Phi }(x)+E_1-E_x\right]}}{\hbar }dx\right\}\hfill & E_x<\mathrm{\Phi }_0+E_1,\hfill \\ 1\hfill & \mathrm{otherwise},\hfill \end{array}`$ (11)
with $`x_{1,2}`$ the classical turning points.
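A direct numerical transcription of eqs. (7)–(11) is straightforward; the sketch below is a simplified, non-self-consistent version: a fixed barrier profile $`\mathrm{\Phi }(x)`$ is supplied instead of being obtained from the Poisson solution, the effective-mass value and the example barrier are assumptions, and the source/drain occupations are written as Fermi functions with two chemical potentials, which is equivalent to the $`f_0(E)-f_0(E+V)`$ form used above.

```python
import numpy as np

HBAR = 1.055e-34            # J s
Q = 1.602e-19               # C
M1 = 0.19 * 9.109e-31       # kg; light in-plane electron mass in Si (assumed value)

def wkb_transmission(E_x, x, phi, E1):
    """Eq. (11): WKB transmission through the barrier phi(x); energies in joules."""
    kappa = np.sqrt(np.maximum(2.0 * M1 * (phi + E1 - E_x), 0.0)) / HBAR
    action = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))  # only the forbidden region contributes
    return 1.0 if E_x >= phi.max() + E1 else float(np.exp(-2.0 * action))

def fermi(E, mu, kT):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

def current_per_width(x, phi, E1, mu_source, mu_drain, kT, n_e=400, n_k=400):
    """Eqs. (7)-(8): ballistic current per unit channel width (A/m), rectangle-rule quadrature."""
    E_x = np.linspace(0.0, phi.max() + E1 + 20 * kT, n_e)
    theta = np.array([wkb_transmission(e, x, phi, E1) for e in E_x])
    k_y = np.linspace(-2e9, 2e9, n_k)                   # 1/m; wide enough for thermal occupation
    E_tot = E1 + E_x[:, None] + (HBAR * k_y[None, :]) ** 2 / (2.0 * M1)
    occ = fermi(E_tot, mu_source, kT) - fermi(E_tot, mu_drain, kT)
    dE, dk = E_x[1] - E_x[0], k_y[1] - k_y[0]
    return Q / (np.pi ** 2 * HBAR) * np.sum(theta[:, None] * occ) * dE * dk

# Example: a made-up 0.2 eV Gaussian barrier in an 8 nm channel (not a self-consistent profile)
x = np.linspace(0.0, 8e-9, 400)
phi = 0.2 * Q * np.exp(-((x - 4e-9) / 2e-9) ** 2)
print(current_per_width(x, phi, E1=0.096 * Q, mu_source=0.15 * Q,
                        mu_drain=0.10 * Q, kT=0.026 * Q))
```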
We solve Eqs. (1) and (5) using a Poisson solver designed specifically for this problem. As we will see, for short devices screening in the electrodes may have a strong effect on the performance of the device. Our solver therefore treats the electrodes (source, drain, and gates) on an equal footing with the channel. This requires that we use a very small mesh size, of the order of 0.1 nm. Nonetheless, our program (which uses the conjugate gradient method as its basic algorithm) works rather efficiently even with such a dense mesh. For 10-nm-scale devices, it takes a low-end workstation CPU run time of only about 3 seconds for a single iteration and less than 1 minute for the full calculation at a single parameter set.
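The core operation of such a solver is repeated solution of the discretized Poisson equation; a toy two-dimensional version, with a uniform dielectric constant, zero Dirichlet boundaries, and no self-consistency loop with the charge model of Eq. (5), using the conjugate-gradient method might look as follows.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

def poisson_2d(rho_over_eps, h):
    """Solve the finite-difference form of Eq. (1), Laplacian(Phi) = 4*pi*rho/eps,
    on a uniform grid with spacing h and Phi = 0 on the boundary."""
    nx, nz = rho_over_eps.shape

    def lap_1d(n):
        return diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1]) / h ** 2

    A = kron(identity(nz), lap_1d(nx)) + kron(lap_1d(nz), identity(nx))
    b = 4.0 * np.pi * rho_over_eps.reshape(-1, order="F")
    phi, info = cg(-A, -b)              # -A is symmetric positive definite
    return phi.reshape(nx, nz, order="F")

# Example: a single point charge in the middle of a small grid (arbitrary units)
rho = np.zeros((60, 40))
rho[30, 20] = 1.0
phi = poisson_2d(rho, h=0.1)
```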
Examples of the results of these calculations are shown in Fig. 2, where the potential energy $`\mathrm{\Phi }`$ is plotted as a function of $`x`$ and $`z`$ for typical cases of off and on states at a finite source-drain voltage. \[Only $`z<0`$ is shown, as $`\mathrm{\Phi }(x,z)=\mathrm{\Phi }(x,-z)`$ due to the symmetry of our geometry.\] The two-dimensional short channel effects (here $`L=10`$ nm) clearly manifest themselves in this figure: first, it is seen that in the off state \[Fig. 2(a)\] $`\mathrm{\Phi }_0`$ assumes a value significantly smaller than $`qV_g`$. Second, in the on state \[Fig. 2(b)\] $`\mathrm{\Phi }_0`$ vanishes, and there is no longer a potential barrier in the channel. The latter effect accounts for the saturation of the current described in previous works. Fig. 2(c) shows the charge distribution at $`z=0`$ for the same gate voltages as in panels (a,b).
## IV Results
In this paper we present results for silicon $`n`$-MOSFET’s with what we consider to be the optimal donor density and channel thickness (see the next section for a discussion) of $`N_D=3\times 10^{20}`$ cm<sup>-3</sup> and $`t=2`$ nm. For these parameters, $`E_1=96`$ meV, while $`E_F=150`$ meV. We study two different oxide thicknesses: the ’thin-oxide’ device ($`t_{ox}=1.5`$ nm) will be shown to be suitable for logic applications (for which small gate leakage is tolerable), while the ’thick-oxide’ transistor ($`t_{ox}=2.5`$ nm) will be suitable for memory applications, in which it is necessary to have many orders of magnitude control of the subthreshold current, while voltage gain is of minor importance. We present all results for both ’short’ ($`L=8`$ nm) and ’long’ ($`L=12`$ nm) devices.
Figure 3 shows $`I`$–$`V`$ characteristics of the ’thin-oxide’ device for ten different values of gate voltage. The curves show a well-expressed current saturation even at $`L=8`$ nm, while at $`L=12`$ nm the saturation is almost flat. In these ultrasmall devices the saturation shows up only when the electron potential energy maximum in the channel is suppressed by positive gate voltage, and is due to the exhaustion of source electrons. However, an effect which was not taken into account in the earlier works is the finite screening in the source and drain electrodes. As the screening length in the electrodes becomes comparable to the channel length, the voltage no longer falls only on the channel. This leads to a significant decrease of the potential energy at the source-channel interface with increasing source-drain voltage. Thus, the quantization energy $`E_1`$ (relative to the source Fermi energy $`E_S`$) is lowered with increasing $`V`$, an effect which accounts for the increase of current with $`V`$ (Fig. 3) even in the ’totally saturated’ regime.
Figure 4 shows the same characteristics as Fig. 3, but for the ’thick-oxide’ transistor. The characteristics now are much more linear, and saturation practically vanishes in the 8-nm length device.
Sub-threshold curves of the ’thick-oxide’ transistors are presented in Fig. 5 for ten different source-drain voltages. For the 12-nm device the curves have a nearly perfect log slope (indicated by the dashed line) and a very small DIBL effect. However, the slope rapidly decreases and the DIBL increases as the length decreases below 10 nm. This loss is especially rapid at small currents (large negative gate voltages) due to electron tunneling under the narrower “bump” in the electric potential profile. Figure 6 shows the tunneling and thermal currents separately for the same parameters as in Fig. 5 (the sum of these two components gives the current presented by the upper curves of Fig. 5). It is clear from the figure that the tunneling current dominates the subthreshold current at small $`L`$ ($`\lesssim 8`$ nm) and large negative $`V_g`$. In fact, the tunneling clearly affects not only the magnitude of the current in the off state, but also the qualitative shape of the subthreshold curve, which is no longer exponential – see Fig. 5(b).
Also shown in Fig. 5 are the oxide-leakage current and the current due to intrinsic carriers (both are evaluated within a simple model and should be taken only as an order of magnitude estimate). Due to the large effective gap implied by the quantum confinement of electrons and holes, the latter is very small compared to the former, so gate-oxide leakage becomes the main limiting mechanism on the subthreshold performance. Another deteriorating effect, Zener tunneling of holes from the drain electrode into the channel, appears only at negative gate voltages much larger than the ones we consider here (where the oxide leakage current is already prohibitive).
Figure 7 shows subthreshold curves for the ’thin-oxide’ device. The slopes of the curves in this case are almost ideal even for the short-channel device \[Fig. 7(b)\]. However, the large oxide leakage means that the $`I_{on}/I_{off}`$ ratio is reduced to $`10^6`$.
In addition to the $`I_{on}/I_{off}`$ ratio, there exist at least three other figures of merit which characterize the subthreshold curves. First is the subthreshold slope roll-off $`𝒮-𝒮_{id}`$, with $`𝒮_{id}=60`$ mV/decade the ideal room-temperature slope. This roll-off is plotted in Fig. 8(a) as a function of $`L`$, for three values of $`t_{ox}`$.
Second is the threshold voltage roll-off $`V_T-V_T^{\infty }`$, where $`V_T`$ is defined here as the gate voltage at which $`J=2\times 10^{-4}`$ A/cm and $`V_T^{\infty }`$ is the threshold voltage at $`L\to \infty `$. In our geometry, $`V_T^{\infty }=313`$ mV. The $`V_T`$ roll-off is shown in Fig. 8(b) as a function of $`L`$ for the same values of $`t_{ox}`$ as in Fig. 8(a).
Lastly, the finite value of the voltage gain, defined as $`G_V=-\left(dV/dV_g\right)_{I=\mathrm{const}}`$ (in contrast to the “ideal” value of infinity), can be used to characterize both DIBL at the subthreshold state and imperfect saturation at the open state. Voltage gain as a function of gate voltage for both the ’thin-oxide’ and ’thick-oxide’ devices, and for various $`L`$’s, is shown in Fig. 9. In order to evaluate the results here, one should remember that the usual CMOS design tools imply $`G_V\gg 1`$, while devices with $`G_V<1`$ cannot sustain logic circuits.
In order to show the effect of channel thickness on the results, we reproduce in Fig. 10 the plots of Fig. 9 at $`L=8`$ nm and $`L=12`$ nm, and include also the cases of $`t=2.5`$ nm and $`t=3`$ nm. At $`t=2.5`$ nm our model is still strictly valid. For the sake of understanding the thickness-dependence of the performance of the device, we also present results for $`t=3`$ nm, at which $`E_2=170`$ meV is higher than $`E_S`$ by only 20 meV. At this and larger values of $`t`$, the second quantized level becomes too close to the source Fermi energy, so transport through the second subband (possible at finite bias via tunneling of source electrons into the second subband inside the channel) may not be negligible.
## V Discussion and Conclusions
The main conclusion which can be drawn from our results is that ballistic, dual-gate transistors with channels as short as 8 nm still seem suitable for digital applications, with proper choice of the gate oxide thickness. In fact, devices with a relatively thick oxide allow a very high $`I_{on}/I_{off}`$ ratio, above 8 orders of magnitude \[Fig. 5(b)\], making them suitable for memory applications including both DRAM and NOVORAM. In contrast, transistors with thinner gate oxides (say, 1.5 nm) have a gate oxide leakage too high for memory applications \[Fig. 7(b)\], but their transconductance of about 4000 mS/mm \[Fig. 3(b)\] and voltage gain of around 5 over a wide range of gate voltages \[Fig. 9(b)\] are sufficient for logic circuits. The performance of the devices, both for logic and memory applications, improves dramatically when going from $`L=8`$ nm to $`L=12`$ nm \[Figs. 3(a),5(a),9(b)\]. The results presented here are compatible with the results of Ref. (when comparing gate lengths), Ref. (in which MOSFET’s with larger $`t`$ have been studied), and Ref. (in which the open-state current in longer devices has been calculated within a simplified 1D model).
At least in the geometry which we study here, a channel length of 8 nm seems to be very close to the lower limit of still-feasible MOSFET’s. Most importantly, the maximal voltage gain drops to 2 already at 6 nm even for the ’thin-oxide’ device \[Fig. 9(b)\], a fact which renders it useless for logic applications. As for memory applications, they are basically limited by the minimal thickness of the gate insulator. As long as the insulator used is SiO<sub>2</sub>, $`t_{ox}`$ cannot be significantly thinner than 2.5 nm, which is the thinnest layer which still gives an eight-orders-of-magnitude control over the current \[Fig. 5(b)\]. This implies a strict limit of 5 nm on the channel length. However, a working device of $`L=6`$ nm is hard to imagine, because even if a gate length of 1 nm becomes plausible, the extrapolation of the upper curve of Fig. 8(a) implies an extremely large gate voltage swing of around 5 V.
Decreasing the channel thickness $`t`$ would have a desirable effect on the electrostatics of both thick- and thin-oxide devices. However, it seems that the overall effect of reducing $`t`$ below 2 nm would be detrimental. First, in layers of such small thickness, the mobility of electrons is expected to decrease sharply with decreasing $`t`$. In fact, recent simulations predicted the electron mobility $`\mu `$ in SOI MOSFET’s to be around 350 cm<sup>2</sup>/Vs at an effective electric field of $`6\times 10^5`$ V/cm. This electric field corresponds to a confinement potential with a first quantized level of width $`t^{\prime }=2`$ nm. $`\mu `$ = 350 cm<sup>2</sup>/Vs implies a scattering length $`l\approx 20`$ nm, which is consistent with our ballistic model at $`L<15`$ nm (it is also consistent with the measurements of Ref. which find mobilities of the order of 200 cm<sup>2</sup>/Vs in MOSFET’s with $`t^{\prime }\approx 1.5`$ nm). However, at $`t\lesssim t^{\prime }\approx 2`$ nm, mobility is expected to become much smaller than these values, implying a scattering length smaller than $`L`$, which is inconsistent with our model, and which would degrade the device through strong backscattering.
It is worth emphasizing that for short devices ($`L8`$ nm), tunneling current is large, and in fact may dominate over the thermal current \[Fig. 6(b)\]. In this sense, one can classify the short-channel devices studied here as “tunneling transistors”. The tunneling effect indeed changes the overall shape of the current characteristics \[e.g., the subthreshold curve is no longer exponential, see Fig. 5(b)\], but even in the strong-tunneling regime the transistor is still responsive to gate voltage, enough to allow practical current-control.
One important drawback of the devices studied here is the small (or even negative) threshold voltage $`V_T`$ (see Figs. 5,7). The main cause of this effect (in addition to the regular short-channel effects which reduce $`V_T`$ due to two-dimensional charge redistribution in the gate) is the undoped channel. This leads to an accumulation of electrons in the channel starting at small negative gate voltage (in contrast to regular n-channel MOSFET’s with p-type substrate in which electron accumulation in the channel is possible only after the substantial depletion of holes by positive gate voltage.)
Two different approaches may be utilized to solve this problem (which is of importance mainly to logic applications). One approach is to allow a finite number of acceptor dopants in the channel. This would have an effect similar to the p-substrate in regular MOSFET’s, since the gate voltage would first have to deplete the excess holes before allowing for accumulation of electrons. A crude estimate of this effect can be obtained by using the planar capacitance model by which the change in $`V_T`$ is given by
$$\mathrm{\Delta }V_T=\frac{4\pi qN_at_{ox}}{ϵ_{ox}},$$
(12)
with $`N_a`$ the sheet density of acceptors, $`ϵ_{ox}`$ the dielectric constant of SiO<sub>2</sub>, and $`4\pi t_{ox}/ϵ_{ox}`$ the inverse gate capacitance per unit area (this approximation neglects any short-channel effects). In order to achieve $`\mathrm{\Delta }V_T=0.4`$ V (which would give, according to Fig. 7, $`V_T\approx 0.1`$ V, which is sufficiently large because of the small source-drain voltages in use), $`N_a`$ should be approximately $`6\times 10^{12}`$ cm<sup>-2</sup>, which implies a mean distance between dopants $`l\approx 4`$ nm. Such channel doping is unacceptable since the relation $`l\sim L`$ implies strong fluctuations in device performance due to dopant fluctuations. Thus, it seems that a more realistic approach to manipulating $`V_T`$ would be to use a specific metal with the necessary workfunction as a gate material.
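As a rough consistency check, Eq. (12) can be evaluated numerically in SI form, $`\mathrm{\Delta }V_T=qN_a/C_{ox}`$ with $`C_{ox}=ϵ_0ϵ_{ox}/t_{ox}`$. The short sketch below assumes the ’thin-oxide’ value $`t_{ox}=1.5`$ nm; it is an illustrative estimate only.

```python
# Rough check of the Delta V_T estimate; t_ox = 1.5 nm is an assumed value.
q = 1.602e-19           # electron charge [C]
eps0 = 8.854e-12        # vacuum permittivity [F/m]
eps_ox = 3.9            # relative permittivity of SiO2
t_ox = 1.5e-9           # gate oxide thickness [m] (assumed 'thin-oxide' device)
N_a = 6e12 * 1e4        # acceptor sheet density [m^-2] (= 6e12 cm^-2)

C_ox = eps0 * eps_ox / t_ox        # gate capacitance per unit area [F/m^2]
print(q * N_a / C_ox)              # ~0.42 V, consistent with the 0.4 V target
```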
The practical implementation of the remarkable MOSFET scaling opportunities presented here requires several technological problems to be solved. First of all, the fabrication of dual-gate transistors requires rather advanced techniques - see, e.g. Ref. . Second, the gate voltage threshold $`V_T`$ of nanoscale transistors is rather sensitive to nanometer fluctuations of the channel length - see Fig. 8(b). Notice, however, that the relative sensitivity of $`V_T`$ \[which may be adequately characterized by the log-log plot slope $`(L/V_T)\times dV_T/dL`$\] decreases at small $`L`$. This fact gives hope that with appropriate transistor geometry (for example, vertical structures where $`L`$ is defined by layer thickness rather than by patterning - see, e.g., Ref. ) the channel length fluctuations will eventually be made small enough for appreciable VLSI circuit yields.
## VI Acknowledgments
Helpful discussions with D.J. Frank, C. Hu, S. Laux, M. Fischetti, J. Palmer, Y. Taur, S. Tiwari, H.-S.P. Wong, R. Zhibin, and especially M. Lundstrom and P. M. Solomon are gratefully acknowledged. This work was supported in part by the AME program of DARPA via ONR.
# Robust test for detecting non-stationarity in data from gravitational wave detectors
## I Introduction
Each of the large interferometric gravitational wave detectors that are now under construction (LIGO , VIRGO , GEO , TAMA ) will produce a flood of data when they come online in a few years. Apart from the “main” data channel carrying the measurement of strain in the arm lengths, there will be a few hundred auxiliary channels at each site associated with system and environmental monitors, such as seismometers and magnetometers. Their role would be to monitor the state of the detector and its environment so that any unusual event in the main channel or an unexpected behavior of the detector can be diagnosed properly. (The sum total of raw data from the LIGO detectors will be produced at the rate of $`10`$ megabytes every second.)
Under ideal conditions, each data channel would carry stationary noise. For the main channel, this would reflect a steady state of the interferometer and for the auxiliary channels, a steady state of the environment. However, experience with prototypes as well as with the several resonant mass detectors that have been operating for quite some time shows that this situation does not hold in reality. There will always be episodes of non-stationarity though their rates and durations will depend on the choice of the detector site and other factors.
Detecting non-stationarity is important both in the main channel, because some non-stationarity could be of astrophysical origin, and also in the auxiliary channels where it can be an important diagnostic of the instrument or its environment. It is also important when estimating a statistical model of the detector noise where it is essential that the data segment used be stationary. \[The deleterious effects of non-stationarity on power spectral density (PSD) estimation were noted in .\]
Several methods for detecting non-stationarity that are relevant in this context have already been considered in the gravitational wave data analysis literature . However, these methods share an unsatisfactory feature which is that the computation of the detection threshold corresponding to a specified false alarm rate requires an a priori knowledge of a statistical model of the stationary ambient noise. An error in the model leads to an error in our knowledge of the false alarm rate. In the real world such prior models are usually not available and it is necessary to estimate noise models from the data itself. Even if a model exists, it will almost always have some free parameters (the variance being a trivial example) whose values would have to be estimated from the data fairly regularly, especially in the case of a complicated instrument such as a laser interferometer or its environment monitors.
Thus, when confronted with an uncharacterized dataset, an experimenter who is only limited to methods such as the above can face considerable uncertainty in fixing a threshold for the test before analyzing the data. For a sufficiently small dataset, the analyst can start with ad hoc thresholds and work in some iterative sense towards a statistically satisfactory conclusion. The problem becomes more serious when the data set to be analyzed is so large that it becomes necessary to substantially automate the analysis, as would most certainly be needed in the case of the large interferometers. An additional set of problems will arise when analyzing auxiliary channels since ambient terrestrial noise may be intrinsically more difficult to characterize and have a variable nature.
We introduce here, in the context of gravitational wave data analysis, a test for detecting non-stationarity for which the issue of fixing the correct threshold is trivial by design. The false alarm rate for such a robust test depends weakly on the statistics of the ambient noise and is specified almost completely by the detection threshold alone. In the present paper we concentrate on short duration non-stationarity or bursts since they are likely to be the most common types of non-stationarity in gravitational wave detectors. We find that the robustness of the test improves for smaller false alarm rates, which is precisely the regime of interest. If required, the test can be optimized in terms of the duration of the bursts that need to be detected.
We compare the efficiency of this test in detecting narrowband bursts with that of an ideal test which requires both a noise model and prior knowledge of the frequency band (center frequency and bandwidth) in which the bursts occur. We find that supplementing our test with an approximate prior knowledge of the burst duration allows it to detect, at the same false alarm rate and detection probability, bursts with a peak amplitude that is a factor of $`3`$ larger than that of the bursts which the ideal test can detect.
Apart from being robust, it also has the following properties that make it useful as an online monitor of stationarity. The computational cost associated with this test is quite small. Areas of non-stationarity are clearly distinguished, in the time-frequency plane, from areas of stationarity. Apart from making the output simple to understand visually, this will allow an automated routine to catalogue burst information such as the time of occurrence and frequency band.
The detection of non-stationarity has been actively studied in statistics for quite some time and numerous tests suitable for a wide variety of non-stationary effects exist in the literature. The central idea behind our test is the detection of statistically significant changes in the PSD. As a means of detecting non-stationarity, this idea is quite natural and has been proposed in several earlier works (see, for instance, ), though what constitutes a change and how it is measured can be defined in many different ways, leading to tests that differ statistically as well as computationally. The specific implementation presented in this paper leads to a statistically robust test. The issue of robust tests for non-stationarity, though important as we have argued, has not been considered in gravitational wave detection so far. The same concerns as well as a more rigorous treatment exist in the statistical literature . Our present work was, however, done independently and this test is a new contribution.
The paper is organized as follows. In Section II we formally state the problem addressed in this paper. Section III describes the Student t-test which lies at the core of our test. This is followed by a discussion of the basic ideas that lead to the test and why the test can be expected to be robust. In Section IV, the test is characterized statistically in term of its false alarm rate and detection power. The main results of this paper are also presented in this section. The computational cost associated with this test is discussed in Section IV D. This is followed by our conclusions and pointers to future work in Section V.
## II Formal statement of the problem
A random process $`x(t)`$ is said to be strictly stationary if the joint probability density $`P(x(t_i),x(t_i+\delta _1),x(t_i+\delta _2),\mathrm{},x(t_i+\delta _n))`$ of any finite number, $`n`$, of samples is independent of $`t_i`$. Often, one uses a less restrictive definition called wide sense stationarity which demands only that the mean $`\mathrm{E}\left[x(t_i)\right]`$ and the autocovariance $`\mathrm{E}[(x(t_i)\mathrm{E}[x(t_i)])(x(t_i+\tau )\mathrm{E}[x(t_i+\tau )])]`$ be independent of $`t_i`$. A random process not satisfying any of the above definitions is called non-stationary.
We assume that the ambient noise in the data channel of interest is wide sense stationary over sufficiently long time scales and a burst is an episode of non-stationarity with a much smaller duration. That is, the occurrence of a burst lasting from $`t=t_0`$ to $`t=t_1`$ in a segment $`x(t)`$ of data ($`0tT`$) means that
$$x(t)=\{\begin{array}{cc}\text{wide sense stationary}\hfill & \hfill 0tt_0\\ \text{non-stationary}\hfill & \hfill t_0tt_1\\ \text{wide sense stationary}\hfill & \hfill t_1tT\end{array},$$
(1)
where $`t_1-t_0\ll T`$. In practice, only a time series $`𝐱`$ consisting of regularly spaced samples of $`x(t)`$ is available instead of $`x(t)`$ itself. Thus, given the time series $`𝐱`$, we want to decide between the following two hypotheses about $`𝐱`$:
1. Null Hypothesis $`H_0`$ : $`𝐱`$ is obtained from a wide sense stationary random process.
2. Alternative Hypothesis $`H_1`$ : $`𝐱`$ is obtained from a non-stationary random process.
The frequentist approach to this decision problem, which is followed here, begins by constructing a function $`𝒯(𝐱)`$, called a test statistic, of the data $`𝐱`$. If the data $`𝐱`$ is such that $`𝒯(𝐱)\geq \eta `$, for some threshold $`\eta `$, the null hypothesis is rejected in favor of the alternative hypothesis for that $`𝐱`$.
Since $`𝐱`$ is obtained from a random process, there exists a finite probability that $`𝒯(𝐱)`$ crosses the threshold even when the data is stationary. Such an event is called a false alarm and the rate of such events over a sequence of data $`𝐱`$ is called the false alarm rate. The threshold $`\eta `$ is determined by specifying the false alarm rate that the analyst is willing to tolerate.
To compute the threshold, we need to know the distribution function of $`𝒯(𝐱)`$ when $`H_0`$ is true. This distribution can, in principle, be obtained if the joint distribution of $`𝐱`$ (i.e., a noise model) is known. However, as mentioned in the introduction, such prior knowledge is usually incomplete, if it exists at all, in the real world. The only solution then is to estimate the joint distribution from the data itself. Therefore, one must first find a stationary segment of the data, by detecting and then rejecting non-stationary parts, but that brings us back to our primary objective itself!
To get around this paradox, we must construct $`𝒯(𝐱)`$ such that its distribution is as independent as possible of the distribution of the data under the null hypothesis. If the distribution of the test statistic is strictly independent of the distribution of $`𝐱`$, the test is called non-parametric. If the test statistic distribution depends on the distribution of $`𝐱`$ but only weakly, the test is said to be a robust test. Tests which do not have either of these properties are called parametric. Formally, therefore, the aim of this work is to find a non-parametric, or at least a robust test, for non-stationarity.
## III Description of the test
### A Student’s $`t`$-test
Before we describe our test for non-stationarity, it is best to discuss Student’s t-test in some detail since this standard statistical test plays an important role in what follows.
Student’s t-test is designed to address the following problem. Given a set of $`N`$ samples, $`\{x_1,\mathrm{},x_N\}`$, drawn from a Gaussian distribution of unknown mean and variance, how do we check that the mean $`\mu `$ of the distribution is non-zero? In Student’s t-test, a test statistic $`t`$ is constructed,
$$t=\frac{\widehat{\mu }\sqrt{N}}{\sqrt{\widehat{s}^2}},$$
(2)
where
$`\widehat{\mu }`$ $`:=`$ $`{\displaystyle \frac{1}{N}}{\displaystyle \underset{j=1}{\overset{N}{\sum }}}x_j,`$ (3)
$`\widehat{s}^2`$ $`:=`$ $`{\displaystyle \frac{1}{N-1}}{\displaystyle \underset{j=1}{\overset{N}{\sum }}}\left(x_j-\widehat{\mu }\right)^2.`$ (4)
The distribution of $`t`$ is known , both when $`\mu =0`$ and $`\mu \ne 0`$. To check whether $`\mu \ne 0`$, a two-sided threshold is set on $`t`$ corresponding to a specified false alarm probability. If $`t`$ crosses the threshold on either side, the null hypothesis $`\mu =0`$ is rejected in favor of the alternative hypothesis $`\mu \ne 0`$.
Of interest to us here are two main properties of the t-test. First, if two sets of independent samples $`X=\{x_1,\mathrm{},x_N\}`$ and $`Y=\{y_1,\mathrm{},y_N\}`$ are drawn from Gaussian distributions with the same but unknown variances, the t-test can be employed to check whether the means of the two distributions are equal or not. This can be done simply by constructing a third set of samples $`Z=\{y_1-x_1,\mathrm{},y_N-x_N\}`$, which would again be Gaussian distributed, and then testing, as shown above, whether the mean of the distribution from which $`Z`$ is drawn is non-zero or not.
The second important property of the t-test is its robustness: as long as the underlying distributions from which the two samples are drawn are identical, but not necessarily Gaussian, the distribution of the t statistic does not deviate much from the Gaussian case, the lowest-order corrections to the mean and variance of the distribution being $`𝒪(N^{-5/2})`$ and $`𝒪(N^{-2})`$, respectively.
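A minimal sketch of the statistic in Eqs. (2)-(4), applied to the difference set $`Z`$ just described, is given below; it is an illustration only, not the implementation used later in the paper.

```python
import numpy as np

def paired_t_statistic(x, y):
    """t statistic of Eqs. (2)-(4) applied to the difference set Z = Y - X.

    For Gaussian data with equal means, t follows Student's t-distribution
    with N - 1 degrees of freedom, so a two-sided threshold can be set for
    a chosen false alarm probability.
    """
    z = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    n = z.size
    mu_hat = z.mean()                # Eq. (3): sample mean
    s2_hat = z.var(ddof=1)           # Eq. (4): unbiased sample variance
    return mu_hat * np.sqrt(n) / np.sqrt(s2_hat)   # Eq. (2)

# Example: two sets drawn from identical distributions -> |t| is typically small
rng = np.random.default_rng(0)
print(paired_t_statistic(rng.normal(size=16), rng.normal(size=16)))
```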
### B An outline of the test
We present here an outline of our test. The details of the actual algorithm are presented in the appendix.
From Sec. II, it is clear that a direct signature of non-stationarity is a change in the autocovariance function. This implies that the PSD of the random process should also change since it is the Fourier transform of the autocovariance function . Therefore, the basic idea behind our test is the detection of a change in the PSD of a time series.
The test involves the following steps (see Fig. 1 also).
1. The time series to be analyzed is divided into adjacent but disjoint segments of equal duration $`l_l`$.
2. Take two such disjoint data segments $`S_k`$ and $`S_{k+ϵ}`$ separated by a time interval $`(ϵ-1)l_l`$, $`ϵ=1,\mathrm{\hspace{0.17em}2},\mathrm{}`$. We would like to compare the PSDs of these two segments and test if there is a significant difference.
3. Subdivide each of the two segments into $`N`$ subsegments of equal duration. Thus, segment $`S_i`$, $`i\in \{k,k+ϵ\}`$, gives us $`N`$ subsegments, each of duration $`l_s=l_l/N`$, which we denote by $`s_j^{(i)}`$, $`j=1,\mathrm{\hspace{0.17em}2},\mathrm{},N`$. This is an intermediate step in the estimation of the PSD of each segment $`S_i`$.
4. Compute the periodogram of each $`s_j^{(i)}`$. A periodogram is the squared modulus of the Discrete Fourier Transform (DFT) of a time series \[Eq. (6)\].
5. For every frequency bin, therefore, we obtain a set $`X`$ of $`N`$ numbers from $`S_k`$ and similarly another set $`Y`$ from $`S_{k+ϵ}`$. In a conventional estimation of the PSD of a segment, say $`S_k`$, we would simply average the corresponding set $`X`$. However, since we want to compare two PSDs, we do the following instead.
6. Perform Student’s t-test for equality of mean on these two independent sets $`X`$ and $`Y`$. If the t statistic crosses a preset threshold $`\eta `$, then a significant change in the mean is indicated, otherwise not.
7. Repeat step 6 for all frequency bins in exactly the same manner.
Steps 2 to 7 should then be repeated with another pair of disjoint segments $`S_{k+1}`$ and $`S_{k+ϵ+1}`$ and so on.
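A minimal sketch of steps 3-7 for one pair of segments is given below. It is an illustration only: the windowing and mean subtraction described in the appendix are omitted, and the threshold value shown is purely illustrative.

```python
import numpy as np

def compare_segments(seg_a, seg_b, n_sub):
    """Per-frequency t statistics comparing the PSDs of two disjoint segments.

    Each segment is split into n_sub subsegments, a periodogram is computed
    for every subsegment, and the two resulting sets are compared bin by bin
    with the two-sample t statistic defined in the appendix.
    """
    def periodograms(segment):
        subs = np.array(np.split(np.asarray(segment, dtype=float), n_sub))
        return np.abs(np.fft.rfft(subs, axis=1)) ** 2   # one periodogram per row

    px, py = periodograms(seg_a), periodograms(seg_b)    # sets X and Y per bin
    mx, my = px.mean(axis=0), py.mean(axis=0)
    vx, vy = px.var(axis=0, ddof=1), py.var(axis=0, ddof=1)
    return np.sqrt(n_sub) * (my - mx) / np.sqrt(vx + vy)

# Flag frequency bins where |t| exceeds a preset threshold eta
rng = np.random.default_rng(1)
t_vals = compare_segments(rng.normal(size=512), rng.normal(size=512), n_sub=8)
eta = 4.0    # illustrative value; in practice set via a Monte Carlo calibration
print(np.flatnonzero(np.abs(t_vals) > eta))
```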
Thus, the output of the test at this stage is a two dimensional image with time along one axis and frequency along the other. In this image, every frequency bin for which the threshold $`\eta `$ is crossed can be thought of as being colored black while the remaining are colored white. Hence, white areas in this image would indicate stationarity while the contrary would be indicated by the black areas. A sample image is shown in Fig. 2(a). It is the result of applying the test to a simulated time series constructed by adding a broad band burst to stationary white Gaussian noise (see Sec. IV A for definitions).
Not all black areas would, however, correspond to non-stationarity. Most of them would be random threshold crossings caused by the stationary noise itself. We search, therefore, for clusters of black pixels in the image which pass a veto that can be motivated as follows. Suppose the burst is fully contained in one segment, say, $`S_k`$. Then one would expect the t-test threshold to be crossed once when comparing $`S_k`$ with $`S_{k-ϵ}`$ and again when $`S_k`$ is compared with $`S_{k+ϵ}`$. This leads to a characteristic “double bang” structure for the cluster of black pixels. We throw away all other groups of black pixels that do not show such a feature. (This scheme is defined rigorously in the appendix.) Fig. 2(b) shows the result obtained by applying this veto to the image in Fig. 2(a). One of the clusters is at the location of the added burst while the other is a false event.
### C Why is this test robust?
This test can be expected to be robust for two reasons. First, the periodogram at any frequency is asymptotically exponentially distributed. This can be heuristically explained as follows. The DFT of a time series is a linear transform. If the number of time samples in a random time series is sufficiently large, it then follows from the central limit theorem that the DFT of that time series will have, at each frequency, imaginary and real parts which are distributed as Gaussians. Since the basis functions used in a DFT are orthogonal, the real and imaginary parts also tend towards being statistically independent. This implies that, for a sufficiently large number of time samples, the periodogram, which is simply the squared modulus of the DFT, is exponentially distributed at each frequency.
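This asymptotic behavior is easy to check numerically; the short sketch below is an illustration of the argument, not part of the analysis that follows.

```python
import numpy as np

# Even for strongly non-Gaussian (here, exponential) white noise, an interior
# periodogram bin is nearly exponentially distributed: for a unit-mean
# exponential variable u, P(u > 1) = exp(-1) ~ 0.368.
rng = np.random.default_rng(2)
trials, n = 20000, 256
x = rng.exponential(size=(trials, n))
x -= x.mean(axis=1, keepdims=True)             # per-realization mean removal
p = np.abs(np.fft.rfft(x, axis=1)) ** 2
u = p[:, 40] / p[:, 40].mean()                 # one interior frequency bin, unit mean
print(np.mean(u > 1.0))                        # close to 0.368
```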
The second reason which should make the test robust is the fact, mentioned earlier, that the t-test is robust against non-Gaussianity when the two samples being compared have identical distributions. Under the null hypothesis of stationarity, we do indeed have identically distributed sets in our case.
Since the asymptotic distribution of a periodogram is independent of the statistical distribution of the time samples, much of the information about the time domain statistical distribution is lost in the frequency domain. Thus, the t-test “sees” nearly exponentially distributed samples whereas the time domain samples may have a Gaussian or non-Gaussian distribution. Added to this, the robustness of the t-test also removes information about the time domain statistical distribution. Further, the t-test checks for a change in the mean value and is insensitive to the absolute value of the mean. This is strictly true in the Gaussian case but, because of the robustness of the t-test, it should also hold to a large extent for the exponential case.
These basic considerations suggest strongly that the test as a whole should be robust. However, the test also involves some other steps beyond just a simple t-test. First, the same segment is involved twice in a t-test (c.f. Sec. III B). Thus, for any $`k`$, samples $`k`$ and $`k+ϵ`$ in the sequence of t values at a given frequency will be correlated to a large extent. Second, we impose a non-trivial veto.
The above features of the test, though well motivated and conceptually simple, make a straightforward analytical study of the test difficult. Therefore, to establish the robust nature of the test and quantify its performance, we must follow a more empirical approach based on Monte Carlo simulations. This is the subject of the next section. An analytical treatment of the test is currently under development.
## IV Statistical characterization of the test
Our main aim in this section is to demonstrate the robust nature of the test and to study the efficacy of this test in detecting non-stationarity. Since we need to use Monte Carlo simulations for understanding these statistical aspects of the test, we discuss only a few selected cases in this paper.
For a test to qualify as robust the threshold should be almost completely specified by the false alarm rate without requiring any assumptions about the statistics of the data. The false alarm rate, in the context of this test, is the rate at which clusters of black pixels occur when the input to the test is a stationary data stream. To obtain the false alarm rate, several realizations of stationary noise are generated and the test is performed on each. For a given threshold, the number of clusters detected over all the realizations provides an estimate of the false alarm rate at that threshold.
The efficacy of a test in detecting a deviation from the null hypothesis is measured by the detection probability of the deviation. In this paper, we measure the detection probability of different types of bursts that appear additively in stationary ambient noise. Realizations of signals from a fixed class (such as narrowband or broadband bursts of noise) are generated, to each of which we add a realization of stationary noise. The test is applied to the total data and we check whether a cluster of black pixels appears in a specified area of the time-frequency plane. This fixed area, which we call the detection region, is specified in advance of the simulation. The ratio of the number of realizations having a cluster in the specified area to the total number of realizations gives an estimate of the detection probability for bursts of that class.
The function that maps the test threshold into false alarm rate depends on the test parameters, $`l_l`$, $`l_s`$ and $`ϵ`$ (c.f. Sec. III B). Therefore, for each choice of these test parameters, the test must be calibrated separately using a Monte Carlo simulation. However, thanks to the robust nature of the test, the simulation needs to be performed only once and for a simple noise process such as Gaussian white noise which need not have any relation to the actual random process at hand. The role of the test parameters is discussed in more detail in Sec. IV C.
### A False alarm probability
We perform a Monte Carlo simulation for each of the representative cases below and show that the false alarm rate, as a function of threshold, is the same for all of them.
Each realization of the input data is a 10 sec long time series and each simulation uses $`5000`$ such realizations. We can look upon all the separate realizations of the input as forming parts of a single data stream ($`5000\times 10`$ sec long) and, if we assume that false alarms occur as a Poisson process, the false alarm rate (in number of events per hour) is given by the total number of false alarms over all realizations divided by $`5\times 10^4/3600`$.
The various cases considered here are as follows.
(i) White Gaussian noise ($`\sigma =1`$)— The time series consists of independent and identically distributed Gaussian random variables. The standard deviation $`\sigma `$ of the Gaussian random variables is unity and their mean is zero.
(ii) White Gaussian noise ($`\sigma =10`$) — Same as above but with $`\sigma =10`$.
(iii) White non-Gaussian noise — All details in this simulation are the same as above except that the distribution of each sample is now chosen to be an exponential with $`\sigma =1`$.
(iv) Colored noise — We generated Gaussian, zero mean noise with a PSD as shown in Fig. 3. The overall normalization is arbitrary but the noise is scaled in the time domain to make its variance unity. This PSD was derived from the expected initial LIGO PSD, as provided in , by truncating the latter below 5 Hz and above 800 Hz followed by the application of a band pass filter with unity gain between 50 Hz and 500 Hz.
The range covered by the above types of statistical models is much more extensive than would be required in practice. By applying the test to such extreme situations, we can bound the variations in the false alarm rate versus threshold curve that would occur in a more realistic situation. In considering this range of models for the stationary background noise, we have gone from a two-sided distribution to a completely one-sided distribution. The output from most channels would be two-sided and, hence, closer to a Gaussian than the exponential distribution considered here.
The results are shown in Fig. 4, 5 and 6. For the small false alarm rates ($`<5`$/hour) that will be required in practice, the test is clearly shown to be very insensitive to the statistical nature of the data. The largest variation is between the Gaussian and exponential case while there is hardly any variation, even at large false alarm rates, among the Gaussian cases. The variation between the Gaussian and exponential case is less than $`50\%`$ in the worst case. As explained above, this should be treated as an upper bound on the error one might expect in practice.
Figs. 4, 5 and 6 correspond to different sets of test parameter values. The threshold for a given false alarm rate does depend, as one may expect, on the parameters of the test $`l_s`$, $`l_l`$ and $`ϵ`$. Because of the robust nature, however, given a particular set of parameter values only a single Monte Carlo simulation has to be performed with, say, white Gaussian noise, in order to obtain the corresponding false alarm rate versus threshold curve.
The parameter values for Fig. 4 were chosen to be the same as those that will be used in the following section. We also consider in that section the case of a band pass filtered and down sampled time series. Fig. 6 uses parameter values appropriate to the latter while the choice for Fig. 5 is explained in more detail in Sec. IV C.
### B Detection probability
A burst has an effectively finite duration and is itself an instance of a stochastic process. We consider the following combinations of background noise, bursts and test parameters $`l_l`$, $`l_s`$ and $`ϵ`$. The sampling frequency of the data is assumed to be $`1000`$ Hz.
The background noise is a zero mean stationary Gaussian process with a PSD that matches the expected initial LIGO PSD (c.f. Fig. 3). The burst is a narrow band burst constructed by band pass filtering a white Gaussian noise sequence followed by multiplication of the filtered output with a time domain window. Let the width of the pass band be $`W`$ and its central frequency be $`f_c`$. The time domain window function is chosen to be a Gaussian in shape ($`\mathrm{exp}\left(-t^2/2\mathrm{\Sigma }^2\right)`$) where $`\mathrm{\Sigma }`$ is chosen such that when $`t=0.5`$ sec, the window amplitude drops to 10% of its maximum value (which is unity at $`t=0`$). The burst has, therefore, an effective duration of $`1`$ sec. After windowing, the peak amplitude of the burst is normalized to a specified value. The test parameters are $`l_l=0.5`$ sec, $`l_s=0.064`$ sec and $`ϵ=3`$. ($`l_s=0.064`$ sec corresponds to 64 points, a power of 2, in order to optimize the Fast Fourier Transforms needed for computing the periodogram for each subsegment.)
We consider two types of narrow band bursts. Type (1) has $`f_c=200`$ Hz, while type (2) has $`f_c=100`$ Hz. $`W=20`$ Hz for both types of bursts. The detection region, which is the area in the time-frequency plane that must contain a cluster of black pixels for a valid detection, is chosen in both cases to be $`1.0`$ sec and $`80`$ Hz wide in time and frequency, respectively. It is centered at the location of the window maximum in time and at $`f_c`$ in frequency.
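A minimal sketch of how such a test burst can be generated is given below. The fourth-order Butterworth band-pass filter is an assumed design choice; the filter actually used in the simulations is not specified above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def narrowband_burst(f_c, width, peak, fs=1000.0, duration=2.0, seed=0):
    """Narrowband test burst: band-pass filtered white Gaussian noise times a
    Gaussian window that falls to 10% of its maximum at |t| = 0.5 s, rescaled
    to the requested peak amplitude.  The Butterworth filter is an assumption.
    """
    t = np.arange(-duration / 2.0, duration / 2.0, 1.0 / fs)
    rng = np.random.default_rng(seed)
    band = [(f_c - width / 2.0) / (fs / 2.0), (f_c + width / 2.0) / (fs / 2.0)]
    b, a = butter(4, band, btype="bandpass")
    narrow = filtfilt(b, a, rng.normal(size=t.size))
    sigma = 0.5 / np.sqrt(2.0 * np.log(10.0))      # 10% amplitude at t = 0.5 s
    burst = narrow * np.exp(-t**2 / (2.0 * sigma**2))
    return burst * peak / np.abs(burst).max()

# Type (1) burst: f_c = 200 Hz, W = 20 Hz, peak amplitude in units of the noise r.m.s.
burst = narrowband_burst(f_c=200.0, width=20.0, peak=1.6)
```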
For each type of burst, we empirically determine the peak amplitude required in order for the burst to have a detection probability of $`0.8`$. This is done at several different values of the detection threshold corresponding to false alarm rates of 1 false event in 1/2, 1, 2, or 3 hours. The results are tabulated in Table I.
As shown later in Sec. IV C, the above choice for the test parameters, especially the value of $`l_l`$, optimizes the test for detecting bursts which effectively last for $`1`$ sec. We have, therefore, presented the best performance the test can deliver for detecting bursts with this duration. Note that the same set of parameters optimize the test for detecting bursts that occur in very different frequency bands. Thus, the duration of a burst is effectively the only characteristic that needs to be considered when optimizing the test. This point is discussed further in Sec. IV C.
In Figs. 7 and 8, we show samples of both data and burst (with the peak amplitudes given in Table I) for each of the two cases described above. Fig. 7 corresponds to type (1) bursts and illustrates the fact that the bursts being detected are not prominent enough to be picked up by “eye” . The burst in Fig. 8, which is of type (2), is more prominent. This is because these bursts lie closer in frequency to the “seismic wall” part of the noise curve (see Fig. 3) where the variance of the PSD is higher.
To better understand the detection efficiency of our test, it is natural to ask for a comparison with a test that, intuitively, represents the best we can do. Let us suppose that we know a priori that all bursts are of type (2) above and that the ambient noise is a Gaussian, stationary random process. Note that such prior information is substantially more than that used to optimize our test, which was only a knowledge of the burst duration. Nonetheless, if such information were available to us (and no more), the following would be the ideal scheme against which to compare our test.
In the ideal scheme (similar to ), we first band pass filter the data $`𝐱`$. Since we know the bursts are of type (2), let the filter pass band be $`W=20`$ Hz wide, centered at the frequency $`f_c=100`$ Hz. The output of the filter is demodulated and the resulting quadrature components, say $`𝐗=\{X_k\}`$ and $`𝐘=\{Y_k\}`$, $`k=1,\mathrm{\hspace{0.17em}2},\mathrm{}`$, are resampled down to a sampling frequency of $`2W`$. The downsampled quadratures are then squared and summed to give a time series $`𝐙=\{Z_k=X_k^2+Y_k^2\}`$. If any sample of $`𝐙`$ crosses a threshold $`\eta `$, we declare that a burst was present near the location of that sample.
The samples of $`𝐙`$ should be nearly independent and distributed identically. Since the original time series is a Gaussian random process, this distribution is an exponential. (Note that the assumption of Gaussianity is essential since the central limit theorem does not apply here.) The number of samples per hour would be $`2W\times 3600=144000`$. For a false alarm rate of one per hour, therefore, the threshold $`\eta `$ should be $`2.14`$. Here, we have used the fact that for the PSD shown in Fig. 3, the standard deviation of $`Z_k`$ turns out to be $`0.18`$.
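The quoted threshold follows directly from the exponential distribution, for which the mean equals the standard deviation; the short check below is given for convenience.

```python
import numpy as np

# Ideal-scheme threshold: Z_k is approximately exponential with mean
# (= standard deviation) 0.18, and there are 2 * W * 3600 = 144000 samples
# per hour, so one false alarm per hour requires P(Z > eta) = 1 / 144000.
W, sigma_Z = 20.0, 0.18
samples_per_hour = 2.0 * W * 3600.0
eta = sigma_Z * np.log(samples_per_hour)
print(eta)                                  # ~2.14, as quoted above
```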
Monte Carlo simulations then show that, for obtaining a detection probability of 0.8 with the ideal scheme, the peak amplitude of bursts of type (2) must be $`1.5\sigma `$, where $`\sigma `$ is the standard deviation of the original time series $`𝐱`$. From Table I we see that, for the same false alarm rate and detection probability, our test requires a peak amplitude of $`4.7\sigma `$, a factor of $`3`$ higher than that for the ideal test.
### C The role of the test parameters
The test has three adjustable parameters (c.f. Sec. III B) $`l_l`$, $`ϵ`$ and $`l_s`$. The false alarm rate of the test depends on the choice of these parameters as does the power of the test. Here, we empirically explore the effect of these parameters on the performance of the test.
#### 1 Resolution in time and frequency
The parameter $`l_l`$ determines the time resolution of the test. A burst can only be located in time with an accuracy of $`l_l`$. The duration of a subsegment $`l_s`$ determines the frequency resolution of the test. The bin size in the frequency domain is simply given by $`1/l_s`$.
#### 2 False alarm rate
(a) The effect of $`l_l`$. A decrease in $`l_l`$ reduces the number of samples used in the t-test and, hence, should lead to an increase in the false alarm rate. Fig. 9 shows the effect of $`l_l`$ on the false alarm rate of the test ($`l_s`$ and $`ϵ`$ held fixed). It is seen that, for large $`l_l`$, the trend is indeed as expected above but it reverses below a certain value of $`l_l`$. This is probably an effect of the correlation in the sequence of t values (c.f. Sec. III C), though a full understanding requires an analytical treatment. Nonetheless, simulations establish that this behavior does not significantly affect the robustness of the test. In fact, the parameters chosen for the simulations in Sec. IV A for the demonstration of robustness, correspond to values of $`l_l`$ on both sides of the change point in Fig. 9. Fig. 4 corresponds to a value of $`l_l`$ that lies on the left and Fig. 5 to a value on the right of the change point and both show that the test is robust. We have verified this behavior for several other cases also.
(b) The effect of $`l_s`$. Similarly, an increase in $`l_s`$ for a fixed $`l_l`$ is expected to increase the false alarm rate; as in the case of $`l_l`$, this trend is present but reverses above a certain value of $`l_s`$ (see Fig. 10). However, simulations verify that this does not affect the robustness of the test.
(c) The effect of $`ϵ`$. The false alarm rate should be independent of $`ϵ`$ since for stationary noise it does not matter which two segments are compared in the t-test. This agrees with actual simulation results as shown in Fig. 11(a).
#### 3 Detection probability
(a) The effect of $`l_l`$. When $`l_l`$ is significantly larger than the burst duration, only a few of the subsegments in the segment containing the burst will have a distribution which is different from the stationary case. The periodograms of such subsegments will appear as outliers in an otherwise normal sample and the t-test, which is unsuitable for such cases, will not be able to detect them. Therefore, as the burst duration falls below $`l_l`$, the detection probability should decrease. The effect of $`l_l`$ on detection probability should be independent of the frequency band in which the burst is localized since $`l_l`$ only governs the number of subsegments over which the burst is spread. Both of the above effects are observed, as shown in Fig. 12. Thus, to optimize the test, the only prior knowledge required is the duration of the bursts which are to be detected.
(b) The effect of $`l_s`$. Decreasing $`l_s`$ will increase $`N`$, the number of samples used in the t-test, which would increase the detection probability. However, $`l_s`$ should not be reduced indiscriminately (see Sec. IV C 4).
(c) The lag $`ϵ`$. As long as the burst durations are smaller than $`ϵ`$, a change in $`ϵ`$ should not affect the power of the test. This is indeed observed in our simulations, an example of which is shown in Fig. 11(b). In Fig. 12, we kept $`ϵ`$ large enough that the effect of $`ϵ`$ on burst detectability did not get entangled with that of $`l_l`$.
#### 4 Miscellaneous
Reducing $`l_s`$ to the point that each subsegment has only one sample is simply equivalent to monitoring changes in the variance of the input time series. This is because a one point DFT is simply the sample itself and the periodogram is, therefore, just the square of the sample. Thus, a test for change in variance is a special case of the present test.
However, under some circumstances, an indiscriminate reduction in $`l_s`$ can have adverse effects. For instance, suppose the ambient noise PSD is such that the power in some frequency region is much greater than the power elsewhere (see Fig. 3 for an example) and all the bursts occur in the low power region. Since reduction in $`l_s`$ decreases frequency resolution, the low power region will be completely masked by the high power one for sufficiently small $`l_s`$. This will then make the detection of the bursts more difficult. A related issue is that of narrowband noise contamination which is discussed in more detail in Sec. V.
A very long lag would allow the detection of long time scale non-stationarity such as an abrupt change in the variance from one fixed value to another. However, for such abrupt long lasting changes, there exist better methods of detection .
### D Computational Cost
In estimating the computational cost of this test, it is helpful to divide the total number of floating point operations (additions, subtractions, multiplications) required into two parts : (a) Deterministic and (b) Stochastic.
(a) Deterministic. This is the part involving the generation of the raw image (c.f. Sec. III B). The number of operations required is completely determined by the parameters $`l_l`$, $`l_s`$ and the sampling frequency of the data $`f_s`$. A breakup of the steps involved in this part and the respective number of operations involved is as follows.
For each column of the image: (1) Two sets of Fast Fourier Transforms (FFTs) have to be computed, each set having $`N=l_l/l_s`$ FFTs with each FFT involving $`n=l_sf_s`$ time samples. Therefore, the number of operations involved is $`2\times N\times 3n\mathrm{log}_2n`$. (2) The modulus squared of only the positive frequency FFT amplitudes are computed for each subsegment leading to $`2N\times 3\times (n/2)`$ operations (the factor 3 comes from squaring and adding the real and imaginary parts). (3) For each frequency, the sample mean ($`2N+1`$ operations) and variances ($`3N+1`$ operations) are computed followed by 4 operations to construct the t-statistic. Thus, the total number of operations involved is $`(5N+8)n/2`$. (4) Finally, for each frequency, the t-statistic is compared to a threshold, involving $`n/2`$ operations in all. Adding up all the steps and dividing the total number of operations by $`l_l`$ gives the computing speed required in order to generate the image online: $`\left(6\mathrm{log}_2n+9/(2N)+11/2\right)f_s`$. As an example, for $`f_s=5000`$ Hz, $`l_l=0.5`$ sec and $`l_s=0.064`$ sec, the required computing speed is $`0.28`$ MFlops. Thus, generating the raw image is computationally trivial by the standards of modern day computing.
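For convenience, the quoted figure can be reproduced directly from this expression:

```python
import numpy as np

# Online-rate estimate for f_s = 5000 Hz, l_l = 0.5 s and l_s = 0.064 s.
f_s, l_l, l_s = 5000.0, 0.5, 0.064
n = l_s * f_s                 # samples per subsegment
N = l_l / l_s                 # subsegments per segment
rate = (6.0 * np.log2(n) + 9.0 / (2.0 * N) + 11.0 / 2.0) * f_s
print(rate / 1e6)             # ~0.28 MFlops, as quoted above
```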
(b) Stochastic. This is the part involving the application of the veto to the raw image (c.f. Sec. III B). Since the number of black pixels in the image after thresholding as well as the size of the black-pixel patches are random variables, the number of operations involved in this part is also a random variable. One expects, however, that for low false alarm rates, the computational cost of this part will be much smaller than that of the deterministic part since clusters would only occur sparsely in the image.
The simplest way to estimate the computational load because of the stochastic part is via Monte Carlo simulations in which the number of operations involved in the stochastic part are explicitly counted within the code itself. In Table II, we present the number of floating point operations incurred in the stochastic part, as a fraction of the total number of operations incurred in the deterministic part, over a wide range of false alarm rates for stationary input noise. (To generate Table II, the test parameters used were $`l_l=0.5`$ sec, $`l_s=0.064`$ sec and $`ϵ=3`$. The sampling frequency of the input data was 1000 Hz, each realization being 20 sec long. The operations were counted over 200 trials.)
From Table II, we see that even when the false alarm rate is as high as 50 events/hour, the time spent in the stochastic part is negligible compared to that involved in generating the raw image itself. The computational cost of generating the image itself (the deterministic part) is quite low as shown above. Hence, overall, the test can be implemented without significant computational costs.
## V Discussion
A test for the detection of non-stationarity is presented which has the important property of being robust. This allows the test to be used on data without the need to first characterize the data statistically.
The main results of this work are (i) the demonstration, using Monte Carlo simulations, of the insensitivity of the false alarm rate at a given threshold to the statistical nature of the data being analyzed, and (ii) application of the test to the detection of different types of bursts which showed that the test can detect fairly weak bursts. For instance, as shown in Table I, the test could detect 80% of narrowband bursts, each located within a band of 20 Hz centered at 200 Hz, that were added to Gaussian noise with a PSD such as that of LIGO-I when the peak amplitude of the bursts was only $`1.6\times \text{r.m.s. of background noise}`$ and the false alarm rate for the test was 1 event/hour.
We did not catalog the false alarm rate or detection probability for a large number of cases since real applications will almost always fall outside any such catalog. Instead, for the false alarm rate, we chose a rather extreme range for the types of stationary noise so that a bound on the robustness could be obtained, while for the detection probability our main aim was to demonstrate that, given its robustness, the test performs quite well in realistic situations. When applying the test to a particular data set, the appropriate false alarm versus threshold curve can be obtained easily using a single Monte Carlo simulation. Almost always, the experimenter has some prior idea of the range of burst durations he/she is interested in and therefore can choose the set of test parameters appropriately. This would be necessary for any test of non-stationarity, and not particularly the present one, since non-stationarity can take many forms. A more general approach would require understanding the test analytically. This work is in progress.
Though we mentioned the problem of narrow band noise (c.f. Sec. IV C) it was not addressed in detail. This is because this is an issue that is fundamental to all tests for transient non-stationarity and not specific to the present test alone. Narrow band noise, such as power supply interference at 60 Hz and its harmonics or the thermal noise associated with the violin modes of suspension wires, appear non-stationary on timescales much shorter than their correlation length. Thus, if a narrow band noise component has significant power, the frequency band (max\[frequency resolution, line width\]) containing it will appear non-stationary to any test that searches for short duration transients. On the other hand, steady narrowband signals in the data can suppress the detection of non-stationarity that happens to lie close to them in frequency. This is because detection of short bursts implies an increase in time resolution and, correspondingly, a decrease in frequency resolution. Thus if the narrowband signals are strong, they can make the frequency bins containing them appear stationary.
This problem can be addressed in several ways. A preliminary look at the PSD can tell us about the frequency bands where narrowband interference is severe and the output of the test in those bands can be discarded from further analysis. Another way could be to decrease the time resolution sufficiently though at the cost of losing short bursts. A more direct and effective approach would be to pass the data through time domain filters that notch the offending frequencies. Such filters could also be made adaptive so that the frequencies can be tracked in time . Further work is in progress on this issue.
## Acknowledgement
I am very grateful to Albert Lazzarini for extremely useful comments, criticisms and suggestions which led to a significant improvement in the work. I thank Albert Lazzarini and Daniel Sigg for help in obtaining information about auxiliary channels in LIGO. I thank Eric Chassande-Mottin for discussions that led to some interesting ideas for the future. Discussions with Gabriela Gonzalez and L. S. Finn were helpful. I thank L. S. Finn for suggestions regarding the text. This work was supported by National Science Foundation awards PHY 98-00111 and PHY 99-96213 to The Pennsylvania State University.
## Algorithm of the test
### 1 Notation
We present, first, some of the notation that will be used in the following. The time series to be analyzed will be denoted by $`𝐱`$, where $`𝐱`$ is a sequence of real numbers. We will need to divide $`𝐱`$ into disjoint segments, without gaps, with all segments having the same duration $`l_l`$. A segment of length $`l_l`$ will be denoted by $`𝐲^{(j)}`$, where $`j`$ stands for segment number $`j`$.
Each segment $`𝐲^{(j)}`$ will need to be further subdivided into disjoint segments, again without gaps, with all subsegments having the same duration $`l_s`$. The $`k^{\mathrm{th}}`$ such sub-segment in the segment $`𝐲^{(j)}`$ will be denoted by $`𝐳^{(j,k)}`$.
The periodogram of a time series is defined to be the squared modulus of its DFT. That is, if $`\widehat{𝐮}`$ is the DFT of some time series $`𝐮`$ (consisting of $`m`$ samples), then the $`q^{\mathrm{th}}`$ frequency component $`\widehat{u}_q`$ of $`\widehat{𝐮}`$ is given by,
$$\widehat{u}_q:=\underset{p=1}{\overset{m}{\sum }}u_p\mathrm{exp}\left(2\pi i(q-1)(p-1)/m\right).$$
(5)
where $`q=\{1,\mathrm{},m\}`$. The periodogram $`\{S_q\}`$ ($`q=\{1,\mathrm{},m\}`$) is defined by
$$S_q:=|\widehat{u}_q|^2.$$
(6)
To reduce the aliasing of high frequency power onto lower frequencies, it is common to compute the periodogram after modifying $`𝐮`$ by multiplying it with a window function $`𝐰`$: $`u_p`$ $`\to `$ $`u_pw_p`$. The definition of the periodogram is modified in this case to
$$S_q:=\frac{1}{\|𝐰\|}|\stackrel{~}{u}_q|^2$$
(7)
where $`\|𝐰\|`$ stands for the Euclidean norm of the window function and $`\stackrel{~}{u}_q`$ is the $`q^{\mathrm{th}}`$ frequency component of the DFT of the windowed sequence. Before windowing, we also subtract the sample mean of the sequence from each sample in the sequence. In the following, all periodograms are obtained as defined in Eq. (7) after subtraction of the sample mean followed by windowing. The window function is chosen to be the symmetric Hanning window of the same length as the input time series $`𝐮`$. We denote the periodogram of $`𝐳^{(j,k)}`$ by $`𝐒^{(j,k)}`$, with its $`q^{\mathrm{th}}`$ frequency component denoted by $`S_q^{(j,k)}`$.
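A minimal sketch of this estimator is given below; it is an illustration of Eq. (7), not the original implementation.

```python
import numpy as np

def windowed_periodogram(u):
    """Periodogram of Eq. (7): subtract the sample mean, apply a symmetric
    Hanning window w, and divide the squared DFT modulus by the Euclidean
    norm of w.  The overall normalization is immaterial for the test itself,
    since a common factor applied to both segments cancels in the t statistic.
    """
    u = np.asarray(u, dtype=float)
    w = np.hanning(u.size)                  # symmetric Hanning window
    uw = (u - u.mean()) * w                 # mean subtraction, then windowing
    return np.abs(np.fft.fft(uw)) ** 2 / np.linalg.norm(w)
```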
### 2 Algorithm: The first stage
We will now state the algorithm of the test. First, the values of the free parameters of the test $`l_l`$, $`l_s`$ and $`ϵ`$ are set. Then the following loop is executed.
1. Starting with $`j=1`$, take segments $`𝐲^{(j)}`$ and $`𝐲^{(j+ϵ)}`$ from the detector output $`𝐱`$. The loop index is $`j`$.
2. Subdivide each of the above segments into an equal number of subsegments $`𝐳^{(j,k)}`$ and $`𝐳^{(j+ϵ,k)}`$, $`k=1,\mathrm{},\mathrm{floor}(l_l/l_s)`$, where the floor function returns the integer part of its argument. Let $`N=\mathrm{floor}(l_l/l_s)`$.
3. Compute the sets $`\{𝐒^{(j,k)}\}`$ and $`\{𝐒^{(j+ϵ,k)}\}`$ with $`k`$ as the running index. Thus, each of the two sets contains $`N`$ periodograms.
4. For each frequency component, compute the sample means and variances of the two sets. Let the sample means at the $`q^{\mathrm{th}}`$ frequency component be denoted by $`\mu _q^{(j)}`$ and $`\mu _q^{(j+ϵ)}`$ for $`\{𝐒^{(j,k)}\}`$ and $`\{𝐒^{(j+ϵ,k)}\}`$ respectively. Then,
$`\mu _q^{(j)}`$ $`=`$ $`N^{-1}{\displaystyle \underset{k=1}{\overset{N}{\sum }}}S_q^{(j,k)},`$ (8)
$`\mu _q^{(j+ϵ)}`$ $`=`$ $`N^{-1}{\displaystyle \underset{k=1}{\overset{N}{\sum }}}S_q^{(j+ϵ,k)}.`$ (9)
Similarly, let the standard deviations be denoted by $`\sigma _q^{(j)}`$ and $`\sigma _q^{(j+ϵ)}`$,
$`\left(\sigma _q^{(j)}\right)^2`$ $`=`$ $`(N-1)^{-1}{\displaystyle \underset{k=1}{\overset{N}{\sum }}}\left(S_q^{(j,k)}-\mu _q^{(j)}\right)^2,`$ (10)
$`\left(\sigma _q^{(j+ϵ)}\right)^2`$ $`=`$ $`(N-1)^{-1}{\displaystyle \underset{k=1}{\overset{N}{\sum }}}\left(S_q^{(j+ϵ,k)}-\mu _q^{(j+ϵ)}\right)^2,`$ (11)
where we have used the unbiased estimator of variance (the biased estimator has $`N`$ in the denominator instead of $`N-1`$).
5. Compute $`t_q^{(j)}`$, the value of the t-statistic for the $`q^{\mathrm{th}}`$ frequency component,
$$t_q^{(j)}:=\sqrt{N}\frac{\mu _q^{(j+ϵ)}-\mu _q^{(j)}}{\left[\left(\sigma _q^{(j)}\right)^2+\left(\sigma _q^{(j+ϵ)}\right)^2\right]^{1/2}}.$$
(12)
Let $`𝐓`$ be a matrix with $`𝐓_{qj}=|t_q^{(j)}|`$, $`q`$ and $`j`$ being the row and column indices respectively. For every pass through the loop described above, a column of $`𝐓`$ is produced.
Let the threshold for the t-test be $`\eta `$. Set all elements of $`𝐓`$ that are below $`\eta `$ to zero and set all elements above $`\eta `$ to a fixed value $`t_0`$. We denote the resulting matrix by the same symbol $`𝐓`$. This should not cause any confusion since we will mostly require the thresholded form of $`𝐓`$ in the following.
The matrix $`𝐓`$ can also be visualized (see Fig. 2) as a two dimensional image composed of a rectangular array of pixels (picture elements) with the same dimension as $`𝐓`$. We can imagine that the pixels for which the corresponding matrix elements crossed $`\eta `$ are colored black and those that did not cross are colored white. We call the black pixels b-pixels and the white ones w-pixels.
### 3 Algorithm: The second stage
A burst will appear in the image $`𝐓`$ as a cluster of b-pixels. In order to define a cluster we first delineate the set of pixels which form the nearest neighbors to any given pixel. The nearest neighbor of a pixel with $`q`$ as the row and $`j`$ as the column index is a pixel with row index $`q^{\prime }`$ and column index $`j^{\prime }`$ such that (i) $`q^{\prime }\in \{q,q\pm 1\}`$ and $`j^{\prime }\in \{j,j\pm 1\}`$ or (ii) $`q^{\prime }=q`$ and $`j^{\prime }\in \{j+ϵ,j-ϵ\}`$. We call the nearest neighbors of type (i) contacting and those of type (ii) non-contacting. Fig. 13 shows the set of nearest neighbors of a pixel. We can now define a cluster of b-pixels as a set of b-pixels such that (i) each member of this set has at least one other member as its nearest neighbor, and (ii) at least one member of the cluster has another member as a non-contacting nearest neighbor.
The next step in the algorithm is the identification of a cluster of b-pixels in the image $`𝐓`$. In our code, we proceed as follows. Make a list of all b-pixels in the image $`𝐓`$ (the ordering of the list is immaterial). Let this list be called $`𝐋`$. We define two more symbols:
(i) $`𝐋_{\mathrm{sub}}`$ is a proper subset of $`𝐋`$ such that for any two elements $`a\in 𝐋_{\mathrm{sub}}`$ and $`b\in 𝐋_{\mathrm{sub}}`$, there exist elements $`\{c,d,\mathrm{},h\}\subset 𝐋_{\mathrm{sub}}`$ such that $`c`$ is a contacting nearest neighbor of $`a`$, $`d`$ is a contacting nearest neighbor of $`c`$ and so on till $`h`$ which is also a contacting nearest neighbor of $`b`$. That is, starting from any one element we can reach any other by “stepping” through a chain of members. Essentially, $`𝐋_{\mathrm{sub}}`$ is, roughly speaking, an unbroken patch of b-pixels.
(ii) $`𝐋_{\mathrm{sub}}^{\prime }`$ is the complement of $`𝐋_{\mathrm{sub}}`$ in $`𝐋`$.
In the algorithm below, it is understood that when an element is added or removed from $`𝐋_{\mathrm{sub}}`$, the new set is always renamed as $`𝐋_{\mathrm{sub}}`$. Similarly, the complement of the new $`𝐋_{\mathrm{sub}}`$ is always denoted by $`𝐋_{\mathrm{sub}}^{\prime }`$.
The steps in the algorithm are as follows. (Parenthesized statements are comments.)
1. For each member of $`𝐋_{\mathrm{sub}}`$, search for contacting nearest neighbors in $`𝐋_{\mathrm{sub}}^{\prime }`$.
2. If found, add them to $`𝐋_{\mathrm{sub}}`$. If not, go to step 4.
(To obtain $`𝐋_{\mathrm{sub}}`$ starting from the null set: take the first element, which we call the seed element, of $`𝐋`$ as $`𝐋_{\mathrm{sub}}`$ and go to step 1.)
3. Update $`𝐋_{\mathrm{sub}}^{\prime }`$. Go to step 1.
4. Check if any element of $`𝐋_{\mathrm{sub}}`$ has a non-contacting nearest neighbor in $`𝐋_{\mathrm{sub}}^{\prime }`$.
(This and the following steps check whether $`𝐋_{\mathrm{sub}}`$ qualifies as a cluster according to our definition.)
5. If none are found, go to step 7. Otherwise, take the first non-contacting nearest neighbor as a new seed element and construct a subset $`\stackrel{~}{𝐋}_{\mathrm{sub}}`$ following step 1 to step 3 (temporarily rename $`𝐋`$ by $`𝐋_{\mathrm{sub}}^{\prime }`$, $`𝐋_{\mathrm{sub}}`$ by $`\stackrel{~}{𝐋}_{\mathrm{sub}}`$ and $`𝐋_{\mathrm{sub}}^{\prime }`$ by $`\stackrel{~}{𝐋}_{\mathrm{sub}}^{\prime }`$ in those steps). Add $`\stackrel{~}{𝐋}_{\mathrm{sub}}`$ to $`𝐋_{\mathrm{sub}}`$ and set a flag indicating that $`𝐋_{\mathrm{sub}}`$ is a cluster.
6. Repeat step 4.
7. Rename $`𝐋_{\mathrm{sub}}^{\prime }`$ as $`𝐋`$. If the flag was set, save $`𝐋_{\mathrm{sub}}`$. Go to step 1 again (until no more than one b-pixel is left in $`𝐋`$).
The above algorithm is easy to implement in software such as MATLAB or MATHEMATICA which have inbuilt routines for set operations. We use MATLAB for our implementation. The actual code can, of course, be optimized significantly. For instance, in step 1 the search can be confined to only the most recent set of elements added to $`𝐋_{\mathrm{sub}}`$.
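A simplified sketch of the same second-stage logic is given below. It replaces the set bookkeeping above with connected-component labelling and a union-find merge, and is an illustration only, not the original MATLAB code.

```python
import numpy as np
from scipy.ndimage import label

def find_clusters(T, eta, eps):
    """Clusters of b-pixels in the |t|-value image T, as defined above.

    b-pixels are grouped into 8-connected patches (contacting neighbors);
    patches joined by a non-contacting neighbor (same frequency row, eps
    columns apart) are merged, and only merged groups containing at least
    one such non-contacting pair are returned.
    """
    b = np.abs(T) >= eta
    patches, n = label(b, structure=np.ones((3, 3), dtype=int))

    parent = list(range(n + 1))                  # union-find over patch labels
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    qualified = set()                            # roots having a non-contacting pair
    rows, cols = np.nonzero(b)
    for q, j in zip(rows, cols):
        jj = j + eps
        if jj < b.shape[1] and b[q, jj]:
            a, c = find(patches[q, j]), find(patches[q, jj])
            parent[c] = a
            qualified.add(a)
    qualified = {find(r) for r in qualified}     # re-resolve roots after all merges

    clusters = {}
    for q, j in zip(rows, cols):
        clusters.setdefault(find(patches[q, j]), []).append((q, j))
    return [pix for root, pix in clusters.items() if root in qualified]
```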
|
no-problem/9910/cond-mat9910019.html
|
ar5iv
|
text
|
# Temperature dependence of spin polarizations at higher Landau Levels
\[
## Abstract
We report our results on temperature dependence of spin polarizations at $`\nu =1`$ in the lowest as well as in the next higher Landau level that compare well with recent experimental results. At $`\nu =3`$, except having a much smaller magnitude the behavior of spin polarization is not much influenced by higher Landau levels. In sharp contrast, for filling factor $`\nu =\frac{8}{3}`$ we predict that unlike the case of $`\nu =\frac{2}{3}`$ the system remains fully spin polarized even at vanishingly small Zeeman energies.
\]
It has long been established that the spin degree of freedom plays a very important role in the quantum Hall effects, which are unique demonstrations of electron correlations in nature. At the Landau level filling factor $`\nu =1`$ ($`\nu =n_e/n_s`$, where $`n_e`$ is the electron number and $`n_s=AeB/\hbar c=A/2\pi \ell _0^2`$ is the Landau level degeneracy, $`A`$ is the area of the system and $`\ell _0`$ is the magnetic length) the ground state is fully polarized with total spin $`S=n_e/2`$. A fully spin polarized state is also expected for $`\nu =\frac{1}{3}`$, while a spin unpolarized state is predicted for the filling factor $`\nu =2/m`$, where $`m`$ is an odd integer. Recently, a new dimension to those studies was introduced by Barrett et al. (see also ) in their work on spin excitations around $`\nu =1`$ and also the temperature dependence of spin polarizations at $`\nu =1`$. Since then several experimental groups have explored the spin polarization at various other filling factors. In these experiments, direct information about electron spin polarization at various filling factors can be obtained via nuclear magnetic resonance (NMR) spectroscopy. Information about the spin polarization of the two-dimensional electron gas in an externally applied high magnetic field is derived here from measurements of the Knight shift of the <sup>71</sup>Ga NMR signal due to conduction electrons in GaAs quantum wells in the quantum Hall effect regime. For a fully polarized ground state, as is the case for $`\nu =1`$ and $`\nu =\frac{1}{3}`$, experimental results indicate that the spin polarization saturates to its maximum value at very low temperatures and drops rapidly as the temperature is raised (Fig. 1, and also reported earlier in Ref. ). At large $`T`$, the spin polarization is expected to decay as $`T^{-1}`$ and was found experimentally to behave that way.
Recently, Song et al. reported NMR spectroscopy in a somewhat similar set up as that of Barrett et al. in order to explore $`\nu =1`$ and $`\nu =3`$. Interestingly, temperature dependence of spin polarization at $`\nu =3`$ revealed a different behavior as compared to that at $`\nu =1`$. More specifically, the results of Song et al. indicated that even at the lowest temperature studied, electron spin polarization at $`\nu =3`$ does not show any indication of saturation and with increasing temperature it sharply drops down to zero (Fig. 2). In this paper, we investigate spin polarization versus temperature at $`\nu =1`$ in the lowest Landau level as well as in the next Landau level. We also compare our results with experimental results of Ref. . At low temperatures, the behavior of spin polarization at $`\nu =3`$ is similar to that at $`\nu =1`$ but of much smaller magnitude. These results agree reasonably well with available experimental data at $`\nu =3`$. However, discrepancies between our theoretical results and the experimental data remain at higher temperatures. We also present theoretical results for $`\nu =\frac{2}{3}`$ in the next higher Landau level. At $`\nu =\frac{2}{3}`$, convincing evidence exist about the spin polarization in the lowest Landau level . But there are no experimental data available as yet for spin polarizations in the next higher Landau level, i.e., at $`\nu =\frac{8}{3}`$. We find (somewhat unexpectedly) that for $`\nu =\frac{8}{3}`$, even at a vanishingly small Zeeman energy, electrons in the higher Landau level remain fully spin polarized.
We have calculated temperature dependence of spin polarization for different filling factors from ,
$$S_z(T)=\frac{1}{Z}\sum _j\mathrm{e}^{-\epsilon _j/kT}\langle j|S_z|j\rangle $$
where $`Z=\sum _j\mathrm{e}^{-\epsilon _j/kT}`$ is the canonical partition function and the summation is over all states including all possible polarizations. Here $`\epsilon _j`$ is the energy of the state $`|j\rangle `$ with Zeeman coupling included. They are evaluated for finite-size systems in a periodic rectangular geometry. Our earlier theoretical results indicated that at small values of the Zeeman energy, the temperature dependence of the spin polarization is non-monotonic for filling factors $`\nu =2/m`$, $`m>1`$ being an odd integer. In particular, for $`\nu =\frac{2}{3}`$ and $`\nu =\frac{2}{5}`$, we found that the spin polarization initially increases with temperature, reaching a peak at $`T\sim 0.01`$ K, after which it falls as $`1/T`$ with increasing temperature. The appearance of the peak was associated with spin transitions at these filling factors and was found to be in good agreement with the experimental observation. For $`\nu =1`$ and $`\nu =1/3`$, our results are also in excellent agreement with the earlier available experimental results.
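A minimal numerical sketch of this thermal average is given below; the energies and diagonal matrix elements are placeholder arrays standing in for the exact-diagonalization spectrum, which is not reproduced here.

```python
import numpy as np

def thermal_sz(energies, sz_diag, kT):
    """Canonical average S_z(T) = (1/Z) sum_j exp(-eps_j/kT) <j|S_z|j>."""
    e = np.asarray(energies, dtype=float)
    e = e - e.min()                    # shift for numerical stability
    w = np.exp(-e / kT)                # Boltzmann weights
    return np.sum(w * np.asarray(sz_diag)) / np.sum(w)

# Hypothetical three-level example: a polarized ground state and two
# spin-reversed excitations (energies in units of e^2/(eps*l_0)).
eps = [0.00, 0.02, 0.05]
sz  = [4.0, 3.0, 2.0]                  # <j|S_z|j> for an 8-electron system
for kT in (0.005, 0.02, 0.1):
    print(kT, thermal_sz(eps, sz, kT))
```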
In our present work, energies are evaluated via exact diagonalization of a few electron system in a periodic rectangular geometry . Since even at the lowest experimental magnetic field the Landau level separation $`\mathrm{}\omega _c`$ is still an order of magnitude greater than typical energies due to the Coulomb interaction, electrons in the lowest Landau level can be treated as inert. In the calculations that follow we can therefore consider the lowest Landau level to be an uniform background causing merely a constant shift to interaction energies. The higher Landau levels then enter the system Hamiltonian via a modified interaction potential . More specifically, for a finite number of active electrons $`N_e`$ in a rectangular cell and choosing the Landau-gauge vector potential, the Hamiltonian in the $`n=0,1`$ Landau levels is (ignoring the kinetic energy and single-particle terms in the potential energy which are constants ),
$`\mathcal{H}={\displaystyle \sum _{j_1,j_2,j_3,j_4}}𝒜_{nj_1,nj_2,nj_3,nj_4}a_{nj_1}^{\dagger }a_{nj_2}^{\dagger }a_{nj_3}a_{nj_4},`$ (1)
$`𝒜_{nj_1,nj_2,nj_3,nj_4}=\delta _{j_1+j_2,j_3+j_4}^{\prime }\,\mathcal{V}_n(j_1-j_4,j_2-j_3),`$ (2)
$`\mathcal{V}_n(j_a,j_b)={\displaystyle \frac{1}{2ab}}\sum _𝐪^{\prime }\sum _{k_1}\sum _{k_2}\delta _{q_x,2\pi k_1/a}\delta _{q_y,2\pi k_2/b}\delta _{j_a,k_2}^{\prime }`$ (3)
$`\times {\displaystyle \frac{2\pi e^2}{ϵq}}\left[{\displaystyle \frac{8+9(q/b^{\prime })+3(q/b^{\prime })^2}{8(1+q/b^{\prime })^3}}\right]`$ (4)
$`\times \mathcal{F}_n(q)\mathrm{exp}\left(-{\displaystyle \frac{1}{2}}q^2\ell _0^2-2\pi \mathrm{i}k_1j_b/n_s\right),`$ (5)
$`\mathcal{F}_n(q)=\{\begin{array}{cc}1\hfill & \text{for }n=0\text{,}\hfill \\ (1-\frac{1}{2}q^2\ell _0^2)^2\hfill & \text{for }n=1\text{,}\hfill \end{array}`$ (8)
$`n_e=\{\begin{array}{cc}N_e\hfill & \text{for }n=0\text{,}\hfill \\ \frac{\nu }{\nu -2}N_e\hfill & \text{for }n=1\text{.}\hfill \end{array}`$ (11)
Here $`a`$ and $`b`$ are the two sides of the rectangular cell that contains the electrons. The Fang-Howard variational parameter $`b^{}`$ is associated with the finite-thickness correction , $`ϵ`$ is the background dielectric constant, and the results are presented in terms of the dimensionless thickness parameter $`\beta =(b^{}\mathrm{}_0)^1`$. The Kronecker $`\delta `$ with prime means that the equation is defined $`\mathrm{mod}n_s`$, and the summation over $`q`$ excludes $`q_x=q_y=0`$. This numerical method has been widely used in the quantum Hall effect literature and is known to be very accurate in determining the ground state and low-lying excitations in the system.
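For orientation, the following sketch evaluates the $`q`$-dependent factors entering Eqs. (3)–(8): the 2D Coulomb potential, the Fang–Howard finite-thickness correction, the Landau-level form factor (denoted $`\mathcal{F}_n(q)`$ above — that label, like $`\mathcal{V}_n`$ and $`\mathcal{H}`$, is our notation), and the Gaussian factor. Energies in units of $`e^2/ϵ\ell _0`$ and lengths in units of $`\ell _0`$ are assumed.

```python
import numpy as np

def form_factor(q, n):
    """Landau-level form factor: 1 for n=0, (1 - q^2/2)^2 for n=1 (l_0 = 1)."""
    return np.ones_like(q) if n == 0 else (1.0 - 0.5 * q**2) ** 2

def v_eff(q, n, beta):
    """q-dependent factors of the interaction matrix element, Eqs. (3)-(8)."""
    bprime = 1.0 / beta                          # b' = 1/(beta * l_0)
    x = q / bprime
    thickness = (8 + 9 * x + 3 * x**2) / (8 * (1 + x) ** 3)
    return (2 * np.pi / q) * thickness * form_factor(q, n) * np.exp(-0.5 * q**2)

q = np.linspace(0.1, 4.0, 5)
print(v_eff(q, n=0, beta=2.0))   # lowest Landau level
print(v_eff(q, n=1, beta=2.0))   # n = 1: note the node of F_1 at q*l_0 = sqrt(2)
```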
Our results for $`S_z(T)/\mathrm{max}S_z(T)`$ vs $`T`$ for an eight-electron system in a periodic rectangular geometry at $`\nu =1`$ are presented in Fig. 1 where we also present the experimental data of Ref. for comparison. Here the temperature is expressed in units of $`e^2/ϵ\mathrm{}_0`$ and the conversion factor to $`K`$ is $`e^2/ϵ\mathrm{}_0[K]=51.67(B[\mathrm{tesla}])^{\frac{1}{2}}`$ appropriate for system studied in experiments. In our calculations, we fix the parameters as in the experimental systems: the Landé $`g`$-factor is 0.44 and the magnetic field is $`B=9.4`$ tesla. The curves that are close to the experimental data (and presented here) are for $`\beta =24`$. As we discussed above, at low temperatures there is a rapid drop in spin polarization and for high temperatures spin polarizations decay as $`1/T`$. Our results are in good agreement with those experimental features. They were also in good qualitative agreement with the earlier experimental results at this filling factor . While not entirely new, these results are presented with the intention of comparing them with the temperature dependence of spin polarization at $`\nu =3`$. The results in the latter case are shown in Fig. 2 (again for an eight-electron system in a periodic rectangular geometry). In drawing this figure, we have taken the following facts from the experimental results of Ref. into consideration: (a) that the maximum $`S_z`$ is in fact, 1/3 and not 1 as in $`\nu =1`$, (b) the experimental scale at $`\nu =3`$ of Ref. is the same as that at $`\nu =1`$, and (c) spin polarization at $`\nu =3`$ is drawn in Fig. 2 in the same scale as for $`\nu =1`$. All the parameters except the magnetic field are kept the same as in the case of $`\nu =1`$. Just as in the experimental situation, we fix the magnetic field for $`\nu =3`$ at a much lower magnetic field of $`B=4.4`$ tesla. The filled Landau levels, however, are still found to be inert at this low field and does not influence our chosen Hamiltonian. As seen in Fig. 1, numerical values of spin polarization are much smaller here than those for $`\nu =1`$. Our theoretical results for $`\beta =24`$ agree reasonably well with the experimental results of Ref. except in the high temperature regime where the experimental data drop down to zero. Theoretical results, in contrast, have the usual $`1/T`$ tail. We should point out however, that due to discreteness of the energy spectrum for finite number of electrons the terms with $`S_z`$ and $`S_z`$ in the polarization cancel each other at high temperatures like $`1/T`$ and we will always end up with $`1/T`$ decay of $`S_z(T)`$ vs $`T`$ . Therefore, we cannot predict with certainty how a macroscopic system would behave at high $`T`$. However, given the fluctuations in data points for $`\nu =1`$ and $`\nu =3`$ and the fact that the last few data points for $`\nu =3`$ are extremely small, it is not clear if one expects saturation of points with $`1/T`$ behavior or the spin polarization actually vanishes. Clearly, experimental data at high temperatures do not show any sign of saturation and in order to settle the question of actual vanishing of $`S_z(T)`$ it would be helpful to have more data in the high temperature regime. Saturation is also not visible in the low-temperature region of the experimental data. In order to clarify many of these outstanding issues, it is rather important to have more experimental probe of temperature dependence at this filling factor.
Influence of higher Landau levels is found to be quite significant for filling factor $`\nu =\frac{2}{3}`$. As we have demonstrated earlier , at low Zeeman energies the system at this filling factor is spin unpolarized and with increasing Zeeman energies, the system undergoes a phase transition to a fully spin polarized state. Similar result is also expected for $`\nu =\frac{2}{5}`$. These theoretical predictions are now well established through a variety of experiments . Our results for $`S_z(T)`$ vs $`T`$ at $`\nu =\frac{8}{3}`$ are shown in Fig. 3, where we present results for a six-electron system and a magnetic field value of 4.4 tesla. In Fig. 3, we present our results for $`\beta =2,4`$, but the spin polarization is rather insensitive to the finite-thickness correction. We also consider two different values of Landé $`g`$-factor: 0.44 (solid curves) and 0.05 (dashed curves). Interestingly, the results indicate that the total spin $`S`$ of the active electrons, unlike in the lowest Landau level, is at its maximum value $`S=N_e/2`$ even without Zeeman coupling. Hence even an infinitesimal Zeeman coupling will orient the spins in the active system resulting the polarization to be $`1/4`$. That is at odds with the conventional composite fermion model which predicts fractions of the form $`2+2/m`$, $`m`$ odd, to be unpolarized . This somewhat surprising behavior can be thought to be due to more repulsive effective interactions forcing the electrons, according to Hund’s rule, to occupy the maximum spin state more effectively as compared with electrons on the lowest Landau level. In order to demonstrate this behavior we have considered the case of a very small Zeeman energy (dashed curves), but the results still indicate full spin polarization of the active system. At this low Zeeman energy, spin polarization drops rather rapidly from its maximum value as the temperature is increased. In this context we should mention that the idea of an extremely small Zeeman energy is not that far fetched: in recent experiments, a significant reduction in Zeeman energy has been achieved by application of a large hydrostatic pressure on the heterostructure . It is even possible to have situations close to zero Zeeman energy . With the help of all the different techniques available in the literature to study spin polarization, it should be possible to explore $`S_z(T)`$ for $`\nu =\frac{8}{3}`$.
In closing, we have investigated spin polarization as a function of temperature for $`\nu =1`$ and $`\nu =\frac{2}{3}`$ in the higher Landau level. Our results indicate that for $`\nu =3`$ our theoretical results are not much influenced by the higher Landau level (except being much lower in magnitude). Available experimental results are incomplete at low and high temperature regions where no saturation of data points have been observed. Our results at $`\nu =\frac{8}{3}`$ reveal that the system is always fully spin polarized even at very small Zeeman energies. That is in contrast to the behavior at $`\nu =\frac{2}{3}`$ which at low Zeeman energies has a spin unpolarized state that is well supported by various experimental investigations. More experimental data points at $`\nu =3`$ in the low and high-temperature regime would be very helpful. Experimental probe of $`\nu =\frac{8}{3}`$ with NMR and optical spectroscopy should be able to explore the spin states predicted in the present work.
We would like to thank Dr. Y.-Q. Song for sending us their experimental data and Igor Kukushkin for helpful discussions.
|
no-problem/9910/cond-mat9910233.html
|
ar5iv
|
text
|
# Hydrodynamic Approach to Vortex Lifetime in Trapped Bose Condensates
## I Introduction
The prospect of creating quantized vortices in trapped Bose-Einstein condensed gases (BEC’s) has been an intensely discussed and studied subject in the last few years . Despite the considerable interest in this area of BEC studies, many of the most fundamental questions have yet to be answered, such as those concerning the stability and lifetime of such a state . In this article, we study the properties of a vortex in a trapped BEC from a hydrodynamic point of view. We confine the discussion to two-dimensional systems at zero temperature, and furthermore employ the limit of a condensate which is large in comparison with the size of the vortex.
We will thus only be concerned with a system whose properties can be described by a nonlinear Schrödinger equation. The system that we have in mind is the dilute Bose gas, which is governed by the Gross-Pitaevskii equation at temperatures sufficiently low that the system may be described as a superfluid. It should, however, be noted that this type of equation is applicable to a wider class of systems than just zero-temperature dilute gases .
The stability of a vortex in a BEC is limited by several factors. At finite temperatures, the vortex may be destroyed due to collisions with thermal excitations. This will not be the subject of this study. Second, the vortex may decay spontaneously even at zero temperature through the excitation of modes (or, equivalently, the emission of phonons) in the cloud . Third, deviations from spherical symmetry in the trapping potential will also limit the lifetime of the vortex. The two latter processes will be the subject of this paper.
The paper is organized as follows. Sections II-IV are concerned with the motion of an off-center vortex and its decay through phonon emission. In section II we introduce the model and the trial assumptions for the density and velocity distributions, and analyze the motion of an off-center vortex. In Sec. III, the parallel case of a precessing vortex in an infinite, homogeneous system is analyzed, and the power loss due to the radiation of sound is calculated. This can be thought of as a “semiclassical” approximation to the trapped case considered here. In Sec. IV, the validity of this approximation is discussed and a lower bound for the lifetime of a vortex is arrived at. In Sec. V, we find the characteristic time for destruction of a vortex due to deviations from cylindrical symmetry in the trapping potential, and finally, in Section VI, the results are summarized and discussed.
## II Circular motion
In the case of a harmonic trapping potential, the equation for the condensate wave function $`\psi (\stackrel{}{r})`$, whose squared modulus gives the superfluid density distribution $`\rho (\stackrel{}{r})`$, reads
$`\left[-{\displaystyle \frac{\hbar ^2}{2m}}\nabla ^2+{\displaystyle \frac{1}{2}}m\omega _t^2r^2+U_0|\psi (\vec{r})|^2\right]\psi (\vec{r})=\mu \psi (\vec{r}),`$ (1)
where $`\mu `$ is the chemical potential and $`U_0=4\pi \mathrm{}^2a/m`$ is the effective interaction potential, with $`a`$ being the $`s`$-wave scattering length. Assuming the kinetic energy to be negligible, we obtain the so-called Thomas-Fermi approximation for the wave function for a non-rotating cloud,
$$\psi _{\mathrm{TF}}(\vec{r})=\sqrt{\rho _0}\left(1-\frac{r^2}{R^2}\right)^{1/2},$$
(2)
with an associated density distribution $`\rho _{\mathrm{TF}}=|\psi _{\mathrm{TF}}|^2`$. Here, the central density $`\rho _0=\mu /U_0`$ and the Thomas-Fermi radius $`R=\left(2\mu /m\omega _t^2\right)^{1/2}`$. For a two-dimensional system, the wave function $`\psi (\stackrel{}{r})`$ is normalized according to
$$\int d^2\vec{r}\left|\psi (\vec{r})\right|^2=\nu ,$$
(3)
where $`\nu `$ is the number of particles per unit length. As a measure of the influence of the inter-particle interactions on the system’s properties, we define the dimensionless parameter
$$\gamma \equiv \nu a.$$
The Thomas-Fermi approximation is a valid one when $`\gamma `$ is large, and in that limit we have for a two-dimensional system
$`\mu =2\mathrm{}\omega _t\sqrt{\gamma },`$ (4)
$`R=a_{\mathrm{osc}}2\gamma ^{1/4},`$ (5)
where $`a_{\mathrm{osc}}=\left(\mathrm{}/m\omega _t\right)^{1/2}`$ is the oscillator length.
We now turn to the problem of a cloud containing a singly quantized vortex at the position $`\stackrel{}{r}_0`$. In the limit of large $`\gamma `$, the density distribution of the cloud will not be appreciably affected by the presence of a vortex, except in a region whose size is comparable to the healing length $`\xi `$, defined as
$$\xi =\sqrt{\frac{\mathrm{}^2}{2m\rho U_0}}=\frac{1}{(8\pi \rho a)^{1/2}}.$$
(6)
The healing length gives the length scale over which the wave function for a vortex in a homogeneous Bose gas increases from zero to its bulk value . For an untrapped system, the density $`\rho `$ is the value of the density far from the vortex core. In the case of a trapped system, $`\rho `$ must be taken to be the local Thomas-Fermi density at the point $`\stackrel{}{r}_0`$ in the absence of a vortex, $`\rho (\stackrel{}{r}_0)`$, thus defining a local healing length
$$\xi (r_0)=\frac{1}{(8\pi \rho (r_0)a)^{1/2}}=\frac{\xi _0}{\sqrt{1-\frac{r_0^2}{R^2}}},$$
(7)
where $`\xi _0=\xi (0)`$ is the value of the healing length in the center. The velocity distribution in a Bose-condensed system is given by $`\mathrm{}/m`$ times the gradient of the phase of the wave function. For a positively oriented vortex in an infinite, uniform system, it is known to be
$$\vec{v}_{\mathrm{uni}}(\vec{r})=\frac{\hbar }{m}\nabla \varphi ,$$
where $`\varphi `$ is the polar angle relative to the position of the vortex, which gives
$$\vec{v}_{\mathrm{uni}}(\vec{r})=\frac{\hbar }{m}\frac{\widehat{z}\times (\vec{r}-\vec{r}_0)}{|\vec{r}-\vec{r}_0|^2},$$
$`\widehat{z}`$ being the unit vector in the $`z`$ direction.
The velocity field is altered due to the boundary of the system and due to the spatially varying density. The existence of a boundary requires that the normal velocity vanishes there, and for homogeneous systems the recipe is to introduce a negatively oriented image vortex at the point $`\vec{r}_1\equiv \vec{r}_0R^2/r_0^2`$, giving the velocity field *The existence of a sharp boundary is an artefact of the Thomas-Fermi approximation. Although this approximation never holds at the boundary, we do have that for sufficiently large clouds, $`\rho (R)`$ is small, while $`\nabla \rho (R)`$ is nonnegligible, which justifies the neglect of the first term in the equation for stationary flow, $`\rho \nabla \cdot \vec{v}+\vec{v}\cdot \nabla \rho =0`$. Hence the radial velocity has to (approximately) vanish at $`r=R`$.
$$\vec{v}_0(\vec{r})=\frac{\hbar }{m}\frac{\widehat{z}\times (\vec{r}-\vec{r}_0)}{|\vec{r}-\vec{r}_0|^2}-\frac{\hbar }{m}\frac{\widehat{z}\times (\vec{r}-\vec{r}_1)}{|\vec{r}-\vec{r}_1|^2}.$$
(8)
If the system has a density gradient, as in the present case, the condition for stationary flow, $`\nabla \cdot (\rho \vec{v})=0`$, is not automatically fulfilled. Writing the velocity field as
$$\vec{v}=\vec{v}_0+\vec{v}_1,$$
with $`\vec{v}_0`$ taken from Eq. (8), we get an equation for the correction $`\vec{v}_1`$:
$$\rho \nabla \cdot \vec{v}_1+\vec{v}_0\cdot \nabla \rho +\vec{v}_1\cdot \nabla \rho =0,$$
(9)
since the divergence of $`\vec{v}_0`$ vanishes. An approximate solution to this equation, valid close to the center of the system, may be found by treating $`\nabla \rho `$ as small, whereupon $`\vec{v}_1`$ can also be expected to be a small correction, and the third term on the left-hand side of Eq. (9) can be discarded. We then have
$$\nabla \cdot \vec{v}_1(\vec{r})=f(\vec{r}),$$
where
$$f(\vec{r})=-\frac{\vec{v}_0(\vec{r})\cdot \nabla \rho (\vec{r})}{\rho (\vec{r})},$$
with the boundary condition that the normal velocity vanish for $`r=R`$. Since we consider a Bose condensate, the velocity has to be a potential flow, $`\vec{v}_1(\vec{r})=\nabla \varphi _1(\vec{r})`$, and we have the equation for $`\varphi _1`$,
$`\nabla ^2\varphi _1(\vec{r})=f(\vec{r})\text{ for }r\le R,`$
$`{\displaystyle \frac{\partial \varphi _1(\vec{r})}{\partial r}}=0\text{ for }r=R`$
whose solution is written in terms of the Green’s function for the Neumann problem on a disk of radius $`R`$
$$G_N(\vec{r}^{\prime },\vec{r})=\frac{1}{2\pi }\mathrm{ln}\frac{R}{|\vec{r}^{\prime }-\vec{r}|}+\frac{1}{2\pi }\mathrm{ln}\frac{R}{\left|\vec{r}^{\prime }-\frac{R^2}{r^2}\vec{r}\right|}+\frac{1}{2\pi }\mathrm{ln}\frac{R}{r}$$
as
$$\varphi _1(\vec{r})=\int d^2\vec{r}^{\prime }f(\vec{r}^{\prime })G_N(\vec{r}^{\prime },\vec{r}).$$
(10)
One can easily obtain higher-order velocity terms in $`\nabla \rho `$, if one writes
$$\vec{v}=\vec{v}_0+\vec{v}_1+\vec{v}_2+\cdots $$
One immediately finds that
$$\varphi _{n+1}(\vec{r})=-\int d^2\vec{r}^{\prime }\frac{\vec{v}_n(\vec{r}^{\prime })\cdot \nabla \rho (\vec{r}^{\prime })}{\rho (\vec{r}^{\prime })}G_N(\vec{r}^{\prime },\vec{r}),$$
where $`\vec{v}_n=\nabla \varphi _n`$, $`n=1,2,\dots `$ . When the density varies over a scale larger than the healing length, higher-order corrections are small. We will be content in this paper to retain only the zeroth-order term.
The energy per unit length of a two-dimensional system described by a nonlinear Schrödinger equation is
$`E[\psi ,\vec{r}_0]={\displaystyle \int d^2\vec{r}\left(\frac{\hbar ^2}{2m}\left|\nabla \psi \right|^2+\frac{1}{2}m\omega _t^2r^2\left|\psi \right|^2+\frac{U_0}{2}\left|\psi \right|^4\right)},`$ (11)
where we have made explicit the dependence of $`E`$ on the vortex coordinate $`\stackrel{}{r}_0`$. The change in energy of the system due to the presence of a vortex only shows up in the kinetic-energy term, as long as its effect on the density profile is neglected. We shall denote this additional energy by $`U_{\mathrm{eff}}`$, since one may regard it as an effective potential for the vortex, depending on the vortex position $`\stackrel{}{r}_0`$. In terms of the velocity field $`\stackrel{}{v}(\stackrel{}{r})`$ it is written
$$U_{\mathrm{eff}}(\vec{r}_0)=\frac{m}{2}\int d^2\vec{r}\,\rho (\vec{r})v(\vec{r})^2.$$
(12)
Using the lowest-order approximation $`\stackrel{}{v}_0`$ for the velocity, and employing the Thomas-Fermi wave function (2), the result is (cf. )
$`U_{\mathrm{eff}}(\vec{r}_0)`$ $`=`$ $`{\displaystyle \frac{\pi \hbar ^2\rho _0}{2m}}\left[\left(1-{\displaystyle \frac{r_0^2}{R^2}}\right)\mathrm{ln}\left({\displaystyle \frac{R^2}{\xi _0^2}}\right)+\left({\displaystyle \frac{R^2}{r_0^2}}+1-2{\displaystyle \frac{r_0^2}{R^2}}\right)\mathrm{ln}\left(1-{\displaystyle \frac{r_0^2}{R^2}}\right)\right].`$ (13)
To obtain this result, one needs to exclude the vortex core, of size $`\xi (r_0)`$ around the vortex position, from the radial integral for the integrals to converge. The first term dominates for small $`r_0`$ and is independent of the details of the model.
The above analysis shows that as long as the system is not subjected to an external rotational constraint, the vortex will experience an effective potential which decreases with the distance from the trap center, except in a small region close to the boundary, where $`U_{\mathrm{eff}}`$ has a local minimum due to the unphysical behaviour of the Thomas-Fermi wave function at $`r=R`$. This minimum is located a distance $`\delta `$ from the edge, given by
$$\delta =\frac{1}{2e}\left(\frac{\xi _0}{R}\right)^{2/3}.$$
(14)
It is interesting to note that the healing length does not set the length scale here. Apart from constants of order unity, this “boundary thickness” is the same as the cut-off length $`\delta `$ of Ref. , which comes about when computing the kinetic energy of a Thomas-Fermi cloud.
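A small numerical check of Eqs. (13) and (14) is sketched below: it evaluates the bracketed effective potential on a grid and locates its shallow minimum near the boundary, interpreting Eq. (14) as giving $`\delta `$ in units of $`R`$ (our reading); the value of $`R/\xi _0`$ is an assumed example.

```python
import numpy as np

def u_eff(x, R_over_xi0):
    """Bracket of Eq. (13) as a function of x = r0/R (constant prefactor omitted)."""
    t = 1.0 - x**2
    return t * np.log(R_over_xi0**2) + (1.0/x**2 + 1.0 - 2.0*x**2) * np.log(t)

R_over_xi0 = 4.0 * np.sqrt(100.0)          # e.g. gamma = 100, so R/xi0 = 40
x = np.linspace(0.01, 0.99999, 200_001)
i_min = np.argmin(u_eff(x, R_over_xi0))
delta_numeric = 1.0 - x[i_min]                                  # (R - r0_min)/R
delta_eq14 = (1.0/(2.0*np.e)) * (1.0/R_over_xi0)**(2.0/3.0)     # Eq. (14)
print(delta_numeric, delta_eq14)           # the two agree to within a few per cent
```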
We now turn to the motion of the vortex in this simple model. We assume that the vortex coordinate $`\stackrel{}{r}_0`$ may have a time dependence, but that all other parameters of the system remain stationary. Solving the time-dependent counterpart to the equation (1) is equivalent to minimizing the action obtained from the Lagrangian
$$L[\psi ,\stackrel{}{r}_0,\dot{\stackrel{}{r}_0}]=T[\psi ,\stackrel{}{r}_0,\dot{\stackrel{}{r}_0}]E[\psi ,\stackrel{}{r}_0],$$
(15)
where the kinetic term
$$T[\psi ,\vec{r}_0,\dot{\vec{r}_0}]=\int d^2\vec{r}\,\frac{i\hbar }{2}\left[\psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}\right].$$
(16)
A straightforward calculation, remembering that the gradient of the phase of $`\psi `$ is the velocity field, yields
$$T=\hbar \frac{\widehat{z}\cdot (\dot{\vec{r}_0}\times \vec{r}_0)}{r_0^2}\left(\stackrel{~}{\nu }(\vec{r}_0)-\nu \right),$$
(17)
where $`\stackrel{~}{\nu }(\stackrel{}{r}_0)=2\pi _0^{r_0}𝑑rr\rho (r)`$ is the number of particles per unit length inside the circle of radius $`r_0`$. The Euler-Lagrange equation for $`\stackrel{}{r}_0`$ and $`\dot{\stackrel{}{r}_0}`$ will finally yield, for the radial ($`r_0`$) and azimuthal ($`\varphi _0`$) components respectively,
$`\dot{r}_0=0;`$
$`\dot{\varphi }_0={\displaystyle \frac{F(r_0)}{\hbar \,\partial \stackrel{~}{\nu }/\partial r_0}}`$
where we have defined $`F=-\partial U_{\mathrm{eff}}/\partial r_0`$. We see that in this model, an off-center vortex executes an orbiting motion around the center with an angular frequency $`\omega =\dot{\varphi }_0`$. Since $`\partial \stackrel{~}{\nu }/\partial r_0=2\pi r_0\rho (r_0)`$, we finally obtain
$$\omega =\frac{F(r_0)}{2\pi \mathrm{}r_0\rho (r_0)}.$$
In the case of a Thomas-Fermi profile, $`F`$ is obtained by differentiating Eq. (13):
$$F(r_0)=\frac{\pi \mathrm{}^2\rho _0r_0}{mR^2}g(r_0/R),$$
(18)
where
$$g(x)=2\mathrm{ln}\left(\frac{R}{\xi }\right)+\left(\frac{1}{x^4}+2\right)\mathrm{ln}\left(1-x^2\right)+\frac{1}{x^2}+2,$$
(19)
which yields the final result for the frequency of precession of a vortex in a Thomas-Fermi cloud,
$$\omega =\frac{\hbar }{2mR^2\left(1-\frac{r_0^2}{R^2}\right)}g(r_0/R).$$
(20)
The same expression has been obtained previously by different approaches . Nevertheless, the above treatment allows a smooth generalization to incorporate the phonon effect.
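The precession frequency of Eq. (20) is straightforward to evaluate; the sketch below does so in trap units, using the Thomas–Fermi relation $`R/\xi =4\sqrt{\gamma }`$ and the form $`\omega /\omega _t=g(r_0/R)/[8\sqrt{\gamma }(1-r_0^2/R^2)]`$ quoted in Sec. IV. The example value of $`\gamma `$ is an assumption.

```python
import numpy as np

def g(x, R_over_xi):
    """g(x) of Eq. (19)."""
    return (2.0*np.log(R_over_xi) + (1.0/x**4 + 2.0)*np.log(1.0 - x**2)
            + 1.0/x**2 + 2.0)

def precession_frequency(x, gamma):
    """omega/omega_t for a vortex at r0 = x*R, from Eqs. (19)-(20)."""
    R_over_xi = 4.0 * np.sqrt(gamma)
    return g(x, R_over_xi) / (8.0 * np.sqrt(gamma) * (1.0 - x**2))

for x in (0.1, 0.3, 0.6, 0.9):
    print(x, precession_frequency(x, gamma=100.0))
```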
## III Phonon radiation by a vortex in an infinite system
We now turn to the problem of a vortex in an infinite system, exercising (e. g. under the influence of an external force) circular motion.
It has previously been shown how any homogeneous superfluid described by a nonlinear Schrödinger-type energy functional is equivalent to (2+1)-dimensional electrodynamics, with vortices playing the role of charges and sound corresponding to electromagnetic radiation. For a fluid, which in the absence of vortices has the density $`\rho _0`$ (note that the use of this symbol is not the same as in the preceding section), with the local fluid velocity $`\stackrel{}{v}(\stackrel{}{r},t)`$ and density $`\rho (\stackrel{}{r},t)`$, and (possibly) containing a vortex at the position $`\stackrel{}{r}_0(t)`$, moving at a velocity $`\dot{\stackrel{}{r}_0}(t)=\stackrel{}{v}_v(t)`$, we define the “vortex charge” $`q_v=\mathrm{}\sqrt{2\pi \rho _0/m}`$, “vortex density” $`\rho _v(\stackrel{}{r},t)=\delta ^{(2)}(\stackrel{}{r}\stackrel{}{r}_0(t))`$, and the corresponding “vortex current” $`\stackrel{}{ȷ}_v(\stackrel{}{r},t)=q_v\rho _v(\stackrel{}{r},t)\stackrel{}{v}_v(\stackrel{}{r},t)`$. The speed of sound is $`c=\sqrt{U_0\rho _0/m}`$. We then have the analogous Maxwell equations:
$`\stackrel{}{b}`$ $`=`$ $`0,`$
$`\stackrel{}{e}`$ $`=`$ $`2\pi q_v\rho _v,`$
$`\times \stackrel{}{e}+{\displaystyle \frac{1}{c}}{\displaystyle \frac{\stackrel{}{b}}{t}}`$ $`=`$ $`0,`$
$`\times \stackrel{}{b}{\displaystyle \frac{1}{c}}{\displaystyle \frac{\stackrel{}{e}}{t}}`$ $`=`$ $`{\displaystyle \frac{2\pi }{c}}q_v\stackrel{}{ȷ}_v,`$
where we have defined
$`\stackrel{}{e}(\stackrel{}{r},t)`$ $`=`$ $`\sqrt{{\displaystyle \frac{2\pi m}{\rho _0}}}\rho (\stackrel{}{r},t)\widehat{z}\times \stackrel{}{v}(\stackrel{}{r},t),`$
$`\stackrel{}{b}(\stackrel{}{r},t)`$ $`=`$ $`\sqrt{{\displaystyle \frac{2\pi m}{\rho _0}}}c\widehat{z}\rho (\stackrel{}{r},t).`$
The “no magnetic monopole” law is clear from the definition of $`\stackrel{}{b}`$; the Coulomb law states how the presence of vortices create a rotational current, the Faraday law is equivalent to the continuity equation for the fluid, and the counterpart to Ampère’s law derives from the Josephson-Anderson relation implied by the Euler equation. The energy of the system is
$$E=\frac{1}{4\pi }d^2\stackrel{}{r}\left(\stackrel{}{e}^2(\stackrel{}{r},t)+\stackrel{}{b}^2(\stackrel{}{r},t)\right),$$
and correspondingly the Poynting vector
$$\stackrel{}{\sigma }(\stackrel{}{r},t)=\frac{c}{2\pi }\stackrel{}{e}(\stackrel{}{r},t)\times \stackrel{}{b}(\stackrel{}{r},t).$$
(21)
Electromagnetic potentials, $`\stackrel{}{a}(\stackrel{}{r},t)`$ and $`\phi (\stackrel{}{r},t)`$, are defined in the usual way, and within a Lorentz gauge we recover the usual wave equations
$`\left(^2{\displaystyle \frac{1}{c}}{\displaystyle \frac{^2}{t^2}}\right)\phi (\stackrel{}{r},t)`$ $`=`$ $`2\pi \rho _v(\stackrel{}{r},t),`$
$`\left(^2{\displaystyle \frac{1}{c}}{\displaystyle \frac{^2}{t^2}}\right)\stackrel{}{a}(\stackrel{}{r},t)`$ $`=`$ $`{\displaystyle \frac{2\pi }{c}}\stackrel{}{ȷ}_v(\stackrel{}{r},t),`$
which have the exact solution in (2+1) dimensions
$`\phi (\stackrel{}{r},t)`$ $`=`$ $`{\displaystyle 𝑑t^{}d^2\stackrel{}{r}^{}\frac{\theta \left(tt^{}\frac{|\stackrel{}{r}\stackrel{}{r}^{}|}{c}\right)}{\sqrt{(tt^{})^2\frac{|\stackrel{}{r}\stackrel{}{r}^{}|^2}{c^2}}}\rho _v(\stackrel{}{r}^{},t^{})},`$ (22)
$`\stackrel{}{a}(\stackrel{}{r},t)`$ $`=`$ $`{\displaystyle \frac{1}{c}}{\displaystyle 𝑑t^{}d^2\stackrel{}{r}^{}\frac{\theta \left(tt^{}\frac{|\stackrel{}{r}\stackrel{}{r}^{}|}{c}\right)}{\sqrt{(tt^{})^2\frac{|\stackrel{}{r}\stackrel{}{r}^{}|^2}{c^2}}}\stackrel{}{ȷ}_v(\stackrel{}{r}^{},t^{})}.`$ (23)
The step function $`\theta `$ in the numerator is a feature peculiar to two dimensions; the corresponding three-dimensional expressions contain a delta function.
We wish to use the above formulation of vortex dynamics in order to find the energy dissipated from a circularly moving vortex due to the radiation of sound waves. Our aim is therefore to find the value of the Poynting vector $`\stackrel{}{\sigma }`$ at large distances from the vortex.
We first specialize to the case of one point particle at the position $`\stackrel{}{r}_0`$. Eq. (23) becomes
$`\phi (\stackrel{}{r},t)`$ $`=`$ $`q_v{\displaystyle 𝑑t^{}\frac{\theta \left(tt^{}\frac{X}{c}\right)}{\sqrt{(tt^{})^2\frac{X^2}{c^2}}}},`$ (24)
$`\stackrel{}{a}(\stackrel{}{r},t)`$ $`=`$ $`{\displaystyle \frac{q_v}{c}}{\displaystyle 𝑑t^{}\frac{\theta \left(tt^{}\frac{X}{c}\right)}{\sqrt{(tt^{})^2\frac{X^2}{c^2}}}\stackrel{}{v}_v(t^{})},`$ (25)
where $`X=|\stackrel{}{r}\stackrel{}{r}_0(t^{})|`$. We now seek the solutions to these integrals in the limit of large $`\stackrel{}{r}`$. To zeroth order in $`r_0/r`$, $`X=r`$, which is independent of $`t^{}`$. For a charge exercising circular motion with frequency $`\omega `$ at a radius $`r_0`$, the velocity is
$$\stackrel{}{v}_v(t^{})=v(\widehat{x}\mathrm{sin}\omega t^{}+\widehat{y}\mathrm{cos}\omega t^{}).$$
The integrals for the vector potential’s $`x`$ and $`y`$ components can now be done exactly, yielding Bessel functions:
$`a_x(\stackrel{}{r},t)={\displaystyle \frac{q_vv}{c}}{\displaystyle _{\mathrm{}}^{tr/c}}{\displaystyle \frac{dt^{}\mathrm{sin}\omega t^{}}{\sqrt{(tt^{})^2\frac{r^2}{c^2}}}}={\displaystyle \frac{qv\pi }{2c}}\left[N_0\left({\displaystyle \frac{\omega r}{c}}\right)\mathrm{sin}\omega t+J_0\left({\displaystyle \frac{\omega r}{c}}\right)\mathrm{cos}\omega t\right],`$
$`a_y(\stackrel{}{r},t)={\displaystyle \frac{q_vv}{c}}{\displaystyle _{\mathrm{}}^{tr/c}}{\displaystyle \frac{dt^{}\mathrm{cos}\omega t^{}}{\sqrt{(tt^{})^2\frac{r^2}{c^2}}}}={\displaystyle \frac{qv\pi }{2c}}\left[N_0\left({\displaystyle \frac{\omega r}{c}}\right)\mathrm{cos}\omega t+J_0\left({\displaystyle \frac{\omega r}{c}}\right)\mathrm{sin}\omega t\right],`$
and finally
$$\stackrel{}{a}(\stackrel{}{r},t)=\frac{\pi q_v}{2c}\left[\stackrel{}{v}_v(t)N_0\left(\frac{\omega r}{c}\right)+\widehat{z}\times \stackrel{}{v}_v(t)J_0\left(\frac{\omega r}{c}\right)\right].$$
We immediately obtain the “magnetic field”:
$$\stackrel{}{b}(\stackrel{}{r})=\frac{\pi q_v\omega }{2c^2}\left[\stackrel{}{v}_v(t)N_1\left(\frac{\omega r}{c}\right)+\widehat{z}\times \stackrel{}{v}_v(t)J_1\left(\frac{\omega r}{c}\right)\right]\times \widehat{r}.$$
(26)
Finally, we utilize the asymptotic formulas for the Bessel functions at large arguments, which yields
$$\stackrel{}{b}(\stackrel{}{r})=q_v\sqrt{\frac{\pi \omega }{2c^3r}}\left[\stackrel{}{v}_v(t)\times \widehat{r}\mathrm{sin}\left(\frac{\omega r}{c}\frac{3\pi }{4}\right)+(\widehat{z}\times \stackrel{}{v}_v(t))\times \widehat{r}\mathrm{cos}\left(\frac{\omega r}{c}\frac{3\pi }{4}\right)\right].$$
(27)
It is not necessary to calculate the electric field $`\stackrel{}{e}(\stackrel{}{r})`$ in order to obtain the Poynting vector, since at large $`r`$, the field is locally that of a plane wave, in which case we have
$$\stackrel{}{\sigma }=\frac{c}{2\pi }\stackrel{}{b}^2\widehat{r}.$$
(28)
On integrating around the circle with radius $`r`$ we get the power radiated by a vortex exercising circular motion in an infinite system:
$$P=_0^{2\pi }r𝑑\theta \widehat{r}\stackrel{}{\sigma }=\frac{\pi q_v^2\omega v^2}{4c^2}=\frac{\pi q_v^2\omega ^3r_0^2}{4c^2}.$$
(29)
## IV Lifetime of a vortex
In Sec. II we found that an off-center vortex performs a circular motion. In the preceding section we saw how, in an infinite and homogeneous system, such motion excites sound waves, which carry away energy from the vortex. In a trapped cloud, the effect of such radiation would be that the vortex move outward towards regions of lower potential energy $`U_{\mathrm{eff}}(\stackrel{}{r}_0)`$, until it finally escapes from the cloud .
The application of the results of Sec. III on the trapped case may be thought of as a semiclassical approximation. One condition for this approximation to hold is that the precession frequency of the vortex match the attainable excitation frequencies of the cloud, as seen in Eq. (27), where the moving vortex excites sound waves which have the same frequency as the precession.
This requirement, however, is not met in the present case. In a harmonically trapped cloud containing a vortex, all but one of the mode frequencies are greater than or equal to the trap frequency $`\omega _t`$ ; the single low-lying mode is identical to an off-center displacement of the vortex .
Comparing the precession frequency with the trap frequency, we find, using Eq. (5),
$$\frac{\omega }{\omega _t}=\frac{g(r_0/R)}{8\gamma ^{1/2}\left(1-\frac{r_0^2}{R^2}\right)}$$
which is always less than one, except very close to the boundary.
We conclude that the semiclassical approximation is never valid for a vortex in a harmonically trapped cloudThis is, in fact, the case for a larger class of trapping potentials, including all power-law potentials and the square well.. This does not, however, necessarily imply that the vortex is stable; only that its lifetime is longer than that implied by the semiclassical approximation.
The results of the two preceding sections can therefore be utilized to calculate a lower bound for the vortex lifetime. The power dissipated from the vortex by phonon emission, Eq. (29), is to be set equal to the rate of motion downhill the potential gradient:
$$P=-\frac{dE}{dt}=F\frac{dr_0}{dt}.$$
We note that $`P`$ and $`F`$ are functions of $`r_0`$. Rearranging terms, we obtain the time $`\tau _p`$ for the vortex to move from $`r_0=\xi `$ to $`r_0=R-\delta `$:
$$\tau _p=\int _\xi ^{R-\delta }dr_0\frac{F(r_0)}{P(r_0)}.$$
(30)
This quantity is a lower bound for the lifetime of a vortex originally residing at a distance $`\xi `$ from the trap center. The upper cutoff, $`R\delta `$, is needed in order to avoid unphysical boundary effects, as discussed in connection with Eq. (14). The subscript $`p`$ is introduced to indicate that this time scale is associated with the radiation of phonons.
Inserting Eqs. (20), (18) and (29) into (30) we get
$$\tau _p=\frac{16m^2\rho _0U_0R^4}{\pi \mathrm{}^3}I,$$
where the dimensionless integral $`I`$ equals
$$I=\int _{\xi /R}^{1-\delta /R}dx\frac{(1-x^2)^3}{x(g(x))^2},$$
with $`g(x)`$ given by Eq. (19). An exact result for the integral I is easily obtained numerically; the result is shown in Fig. 1, and will be discussed shortly. An estimate can be obtained by noting that the function $`g(x)`$ for strong coupling is approximately equal to $`2\mathrm{ln}(R/\xi )+\frac{1}{2}`$ over a large range of values of $`x`$, and that the lowest-order term in $`x`$ dominates the numerator, whereupon one gets $`I\mathrm{ln}(R/\xi )/(2\mathrm{ln}(R/\xi )+\frac{1}{2})^2`$ and
$$\tau _p\frac{16m^2\rho _0U_0R^4\mathrm{ln}(R/\xi )}{\pi \mathrm{}^3(2\mathrm{ln}(R/\xi )+\frac{1}{2})^2}.$$
(31)
Finally, we insert the Thomas-Fermi results (5), to cast the above result in terms of the parameter $`\gamma `$:
$$\tau _p\frac{1}{\omega _t}\frac{128}{\pi }\frac{\gamma ^{3/2}\mathrm{ln}(4\sqrt{\gamma })}{(\mathrm{ln}(4\sqrt{\gamma })+\frac{1}{4})^2}.$$
(32)
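In trap units the bound of Eq. (32) depends only on $`\gamma `$; a one-line evaluation is sketched below. The sample values of $`\gamma `$ and the trap frequency are assumptions used for illustration.

```python
import numpy as np

def tau_p_omega_t(gamma):
    """Phonon-radiation lower bound, Eq. (32), in units of 1/omega_t."""
    L = np.log(4.0 * np.sqrt(gamma))        # ln(R/xi_0) = ln(4*sqrt(gamma))
    return (128.0/np.pi) * gamma**1.5 * L / (L + 0.25)**2

omega_t = 100.0                              # s^-1, a typical trap frequency
for gamma in (1.0, 10.0, 100.0):
    print(gamma, tau_p_omega_t(gamma) / omega_t, "s")
```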
## V Broken rotational symmetry: change of angular momentum
The other factor which at zero temperature may limit the lifetime of a vortex is deviations from rotational symmetry of the trapping potential. Even very small irregularities in the magnetic or electric fields used to trap the condensed gases in experiments may affect the possibility to keep a vortex in the system.
We therefore consider a BEC in a somewhat deformed trap. The total torque due to inhomogeneities in the density and the trap is
$$\stackrel{}{M}=d^2\stackrel{}{r}\rho (\stackrel{}{r})\stackrel{}{r}\times V(\stackrel{}{r}),$$
(33)
where $`\rho (\stackrel{}{r})`$ and $`V(\stackrel{}{r})`$ are the actual (not exactly cylindrically symmetric) density and external potential, respectively. A natural unit for measuring the torque would be the potential energy associated with the trapping potential, which has the same units:
$$U_{\mathrm{o}sc}=d^2\stackrel{}{r}\rho (\stackrel{}{r})V(\stackrel{}{r}),$$
whose value for a harmonic-oscillator trap and a Thomas-Fermi density profile equals
$$U_{\mathrm{o}sc}=\frac{\pi }{12}\rho _0m\omega _t^2R^4.$$
Writing
$$M=\stackrel{}{M}\widehat{z}=ϵU_{\mathrm{o}sc},$$
we can use $`ϵ`$ as a (approximately independent of coupling strength) measure of the relative distortion of the trap and the density profile. Assuming $`ϵ`$ to be approximately constant over time, the time scale for the destruction of an initially centrally placed vortex is
$$\tau _M=\frac{L_0}{M},$$
where $`L_0`$ is the value of the angular momentum for a system with a central vortex; this is equal to $`\mathrm{}`$ times the number of particles $`\nu `$ per unit length; and so the time for the vortex to move out is
$$\tau _M=\frac{6\mathrm{}}{ϵm\omega _t^2R^2}.$$
## VI conclusions
We have arrived at two estimates for vortex lifetime: one connected with phonon radiation, and one due to broken rotational symmetry. The former, $`\tau _p`$, is an increasing function of coupling strength $`\gamma `$, whereas the latter, $`\tau _M`$, decreases with increasing $`\gamma `$. This leaves us with a “window” at moderate values of coupling strength, where both time scales are reasonably large.
The lower-bound time scales $`\tau _p`$ and $`\tau _M`$ are plotted as functions of coupling strength in Fig. 1. Both are to be considered as lower bounds, since the assumptions applied in deriving them are very pessimistic. The parameter $`ϵ`$ is taken to be $`10^{-3}`$, which is an upper bound for the trap deformations attainable experimentally. Trap frequencies are often around 100 s⁻¹ in experiments; thus $`\tau _p`$ will be longer than one second (which is a typical order of magnitude for condensate lifetimes) as long as $`\gamma `$ is greater than unity, and $`\tau _M`$ is longer than one second for all $`\gamma \lesssim 5000`$. This leaves us with a large parameter range, easily attainable experimentally, for which a vortex at zero temperature can be considered long-lived; considering the conservative assumptions made here, the actual region of vortex stability is probably much larger than this analysis indicates.
## VII Acknowledgement
This work was in part supported by the Swedish NFR. We would like to thank C. J. Pethick for interesting discussions.
|
no-problem/9910/cond-mat9910146.html
|
ar5iv
|
text
|
# Thin superconducting disk with field-dependent critical current: Magnetization and ac susceptibilities
## I Introduction
The critical state model (CSM) is widely accepted as a powerful tool in the analysis of magnetic properties of type-II superconductors. In the parallel geometry, i.e., for long samples like slabs and cylinders placed in a parallel magnetic field, an extensive amount of theoretical work has already been carried out. Exact results for flux density profiles, magnetization, ac susceptibility etc., have been obtained for a number of different field-dependent critical current densities. During the last years even more attention has been paid to the CSM analysis in the perpendicular geometry, i.e., for thin samples in perpendicular magnetic fields. Assuming a constant critical current (the Bean model), explicit analytical results have been obtained for a long thin strip and a thin circular disk. From experiments, however, it is well known that also in such samples the critical current density $`j_c`$ usually depends strongly on the local flux density $`B`$. Due to the lack of a proper theory, this dependence often hinders a precise interpretation of the measured quantities.
In the perpendicular geometry, the ac susceptibility beyond the Bean model has been calculated only by carrying out flux creep simulations assuming a power-law current-voltage relation with a large exponent. However, quite recently an exact analytical approach was developed for the CSM analysis of a long thin strip and thin circular disk. In both cases a set of coupled integral equations was derived for the flux and current distributions. In the present paper we solve these equations numerically for the thin disk case, and calculate magnetization hysteresis loops as well as the complex ac susceptibility. Results for several commonly used $`j_c(B)`$ dependences are presented.
The paper is organized as follows. In Sec. II we give a short description of the exact solution for the disk problem. In Sec. III, magnetization hysteresis loops are calculated and the relation between the width of the loop and $`j_c`$ is discussed. The results for the complex ac susceptibility are presented in Sec. IV and analysed with emphasis on the asymptotic behavior at small and large field amplitudes. Finally, Sec. V gives the conclusions.
## II Exact solution
Consider a thin superconducting disk of radius $`R`$ and thickness $`d`$, where $`d\ll R`$. We assume either that $`d\gg \lambda `$, where $`\lambda `$ is the London penetration depth, or, if $`d<\lambda `$, that $`\lambda ^2/d\ll R`$. In the latter case the quantity $`\lambda ^2/d`$ plays the role of a two-dimensional penetration depth. We put the origin of the coordinates at the disk center and direct the $`z`$-axis perpendicularly to the disk plane. The external magnetic field $`𝐁_a`$ is applied along the $`z`$-axis, and the $`z`$-component of the field in the plane $`z=0`$ is denoted as $`B`$. The current flows in the azimuthal direction, with a sheet current denoted as $`J(r)=\int _{-d/2}^{d/2}j(r,z)dz`$, where $`j`$ is the current density.
### A Increasing field
We begin with a situation where the external field $`B_a`$ is applied to a zero-field-cooled disk. The disk then consists of an inner flux-free region, $`r\le a`$, and of an outer region, $`a<r\le R`$, penetrated by magnetic flux.
In the CSM with a general $`J_c(B)`$ the current and flux density distributions in a disk are given by the following coupled equations
$$J(r)=\{\begin{array}{cc}-\frac{2r}{\pi }\int _a^Rdr^{\prime }\sqrt{\frac{a^2-r^2}{r^{\prime 2}-a^2}}\frac{J_c[B(r^{\prime })]}{r^{\prime 2}-r^2},\hfill & \hfill r<a\\ & \\ -J_c[B(r)],\hfill & \hfill a<r<R\end{array}$$
(1)
$$B(r)=B_a+\frac{\mu _0}{2\pi }\int _0^RF(r,r^{\prime })J(r^{\prime })dr^{\prime }.$$
(2)
$$B_a=\frac{\mu _0}{2}\int _a^R\frac{dr}{\sqrt{r^2-a^2}}J_c[B(r)].$$
(3)
Here $`F(r,r^{\prime })=K(k)/(r+r^{\prime })-E(k)/(r-r^{\prime })`$, where $`k(r,r^{\prime })=2\sqrt{rr^{\prime }}/(r+r^{\prime })`$, while $`K`$ and $`E`$ are complete elliptic integrals. In the case of constant $`J_c`$, these equations reduce to the exact Bean-model formulas derived in Refs. and .
Note that the calculation can be significantly simplified at large external field where $`a0`$, and the critical state $`J(r)=J_c[B(r)]`$ is established throughout the disk. The distribution $`B(r)`$ is then determined by the single equation
$$B(r)=B_a-\frac{\mu _0}{2\pi }\int _0^RF(r,r^{\prime })J_c[B(r^{\prime })]dr^{\prime },$$
(4)
following from Eq. (2).
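As an illustration of how Eq. (4) can be handled numerically, the sketch below iterates it to self-consistency on a radial grid for the Kim model, using a crude midpoint rule that simply skips the (singular) diagonal cell of the kernel and damps the update for stability. This is only a schematic stand-in for the efficient iteration procedure referred to below; the grid size, damping factor and field values are assumptions.

```python
import numpy as np
from scipy.special import ellipk, ellipe

def kernel(r, rp):
    """F(r, r') = K(k)/(r+r') - E(k)/(r-r'), with k = 2*sqrt(r*r')/(r+r')."""
    m = 4.0 * r * rp / (r + rp) ** 2          # scipy uses the parameter m = k^2
    return ellipk(m) / (r + rp) - ellipe(m) / (r - rp)

def jc_kim(B, Jc0=1.0, B0=1.0):
    return Jc0 / (1.0 + np.abs(B) / B0)

def field_profile(Ba, R=1.0, N=400, mu0=1.0, n_iter=400, damp=0.3):
    """Fixed-point iteration of Eq. (4) for the fully penetrated state."""
    r = (np.arange(N) + 0.5) * R / N           # midpoint grid
    dr = R / N
    F = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:                          # skip the singular diagonal cell
                F[i, j] = kernel(r[i], r[j])
    B = np.full(N, Ba)
    for _ in range(n_iter):
        B_new = Ba - (mu0 / (2.0 * np.pi)) * (F @ jc_kim(B)) * dr
        B = (1.0 - damp) * B + damp * B_new
    return r, B

r, B = field_profile(Ba=3.0)
print(B[::80])          # field suppressed near the center, enhanced near the edge
```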
### B Subsequent field descent
If $`B_a`$ is reduced, after being first raised to some maximum value $`B_{am}`$, the flux density will decrease in the outer part, $`arR`$, and remain trapped in the inner part, see Fig. 1. We denote the flux front position, the current density and the field distribution at the maximum field as $`a_m`$, $`J_m(r)`$ and $`B_m(r)`$, respectively. Evidently, $`J_m(r)`$, $`B_m(r)`$, and $`a_m`$ satisfy Eqs. (1)-(3).
Let the field and current distributions during field descent be written as
$$B(r)=B_m(r)+\stackrel{~}{B}(r),J(r)=J_m(r)+\stackrel{~}{J}(r).$$
(5)
The relation between $`\stackrel{~}{B}(r)`$ and $`\stackrel{~}{J}(r)`$ then reads
$$\stackrel{~}{J}(r)=\{\begin{array}{cc}\frac{2r}{\pi }\int _a^Rdr^{\prime }\sqrt{\frac{a^2-r^2}{r^{\prime 2}-a^2}}\frac{\stackrel{~}{J}_c(r^{\prime })}{r^{\prime 2}-r^2},\hfill & \hfill r<a\\ & \\ \stackrel{~}{J}_c(r),\hfill & \hfill a<r<R\end{array}$$
(6)
$$\stackrel{~}{B}(r)=B_a-B_{am}+\frac{\mu _0}{2\pi }\int _0^RF(r,r^{\prime })\stackrel{~}{J}(r^{\prime })dr^{\prime }.$$
(7)
$$B_a-B_{am}=-\frac{\mu _0}{2}\int _a^R\frac{\stackrel{~}{J}_c(r)}{\sqrt{r^2-a^2}}dr,$$
(8)
where we defined
$$\stackrel{~}{J}_c(r)=J_c[B_m(r)+\stackrel{~}{B}(r)]+J_c[B_m(r)].$$
(9)
Again, setting $`J_c=`$const, these equations reproduce the Bean-model results.
If the field is decreased below $`-B_{am}`$ the memory of the state at $`B_{am}`$ is completely erased, and the solution becomes equivalent to the virgin penetration case. If the difference $`B_{am}-B_a`$ is large enough then one can again use Eq. (4), only with the opposite sign in front of the integral.
Given the $`J_c(B)`$-dependence, a complete description of any magnetic state is now found by solving the equations numerically. An efficient iteration procedure is described in Ref. .
## III Magnetization
The magnetization of a disk is defined as the magnetic moment, $`\pi \int _0^Rr^2J(r)dr`$, per unit volume. Due to symmetry the magnetization is directed along the $`z`$-axis. In a fully penetrated state described by the Bean model with critical current $`J_{c0}`$, the magnetization equals $`M_0=J_{c0}R/3d`$. It is convenient to use $`M_0`$ for normalization, i.e.
$$\frac{M}{M_0}=\frac{3}{R^3}\int _0^R\frac{J(r)}{J_{c0}}r^2dr.$$
(10)
The magnetization can be calculated using the current profiles obtained by the procedure described in the previous section. Shown in Fig. 2 are magnetization hysteresis loops calculated for the $`J_c(B)`$-dependences:
$`J_c`$ $`=`$ $`J_{c0}/(1+|B|/B_0)\text{(Kim model),}`$ (11)
$`J_c`$ $`=`$ $`J_{c0}\mathrm{exp}(-|B|/B_0)\text{(exponential model).}`$ (12)
A striking manifestation of the $`B`$-dependence is a peak occuring at small $`B_a`$. The calculations show that for any choice of the parameter $`B_0`$, the peak is always located at negative $`B_a`$ on the descending branch of the major loop. Such a peak position at negative $`B_a`$ is a typical feature also in the parallel geometry. However, it contrasts the case of a thin strip in perpendicular field, where it was shown analytically that for any $`J_c(B)`$-dependence the peak is located exactly at $`B_a=0`$.
In the Bean model, there is a simple relation between the critical current and the width $`\mathrm{\Delta }M`$ of the major magnetization loop,
$$J_c=\frac{3d}{2R}\mathrm{\Delta }M.$$
(13)
The same expression is often used to determine $`J_c`$ from experimental $`\mathrm{\Delta }M`$ data even when the width of the observed loop is not constant. As discussed in Refs. the applicability range of such a procedure is limited. In the parallel geometry a simple proportionality only applies for $`B_a`$ larger than the full penetration field. For the thin disk case the field range where $`J_c\mathrm{\Delta }M`$ can be estimated from our calculations. Figure 3 shows $`J_c(B)`$ inferred from the magnetization loop using Eq. (13), together with the actual $`J_c(B)`$. One can see that at fields larger than the characteristic field
$$B_c\equiv \mu _0J_{c0}/2,$$
(14)
there is essentially no distinction between the two curves. We find that this holds independently of $`B_0`$ and also for other $`J_c(B)`$ models. Therefore, also for the present geometry the $`B`$-dependence of $`J_c`$ can be inferred directly from $`\mathrm{\Delta }M(B_a)`$, except in the low-field region. Here the correct $`J_c(B)`$ can be obtained only by a global fit of the magnetization curve.
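In practice Eq. (13) is applied pointwise to a measured loop; a minimal sketch is given below, where the two branch arrays are hypothetical stand-ins for data and the validity flag simply restates the $`B_a\gtrsim B_c`$ criterion discussed above.

```python
import numpy as np

def jc_from_loop(Ba, M_desc, M_asc, d, R, mu0=4e-7*np.pi):
    """Estimate Jc(B) from the loop width via Eq. (13): Jc = (3d/2R)*DeltaM."""
    dM = np.asarray(M_desc) - np.asarray(M_asc)   # loop width at each Ba
    jc = 1.5 * d / R * dM                          # sheet current, A/m
    Bc = 0.5 * mu0 * jc.max()                      # Eq. (14), largest jc as a proxy for Jc0
    reliable = np.abs(Ba) > Bc                     # Eq. (13) trustworthy only here
    return jc, reliable

# Hypothetical loop sampled at a few fields (SI units)
Ba     = np.array([0.001, 0.01, 0.05, 0.1])        # T
M_desc = np.array([ 3e6,  2e6,  1.5e6,  1.2e6])    # A/m, descending branch
M_asc  = np.array([-3e6, -2e6, -1.5e6, -1.2e6])    # A/m, ascending branch
jc, ok = jc_from_loop(Ba, M_desc, M_asc, d=0.3e-6, R=1e-3)
print(jc, ok)
```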
The Bean-model virgin magnetization for a thin disk can be expanded in $`B_a`$ as
$$\mu _0M\approx -\chi _0B_a\left(1-\frac{1}{2}\left(\frac{B_a}{B_c}\right)^2\right),$$
(15)
where $`\chi _0=8R/3\pi d`$ is the Meissner state susceptibility. Our numerical calculations show that the same expansion also holds for $`B`$-dependent $`J_c`$, only with an effective value $`B_c^{\text{eff}}`$ satisfying
$$B_c^{\text{eff}}/B_c\approx 1-\alpha \sqrt{B_c/B_0}.$$
(16)
We find that if $`B_0/B_c0.5`$ the parameter $`\alpha =0.50`$ for the exponential model, and $`\alpha =0.43`$ for the Kim model. In the parallel geometry the low-field expansion has an additional $`B_a^2`$ term which is not affected by the $`J_c(B)`$ dependence. Thus, the deviation from the Meissner response at small $`B_a`$ is there insensitive to $`J_c(B)`$. This result contrasts the case of perpendicular geometry where due to demagnetization effects, a $`B`$-dependence of $`J_c`$ affects the flux behavior even in the limit of low fields, see discussion in Ref. .
## IV Complex ac susceptibility
### A Basic expressions
The hysteretic dependence of the magnetization $`M`$ as the applied field $`B_a`$ is cycled leads to ac losses. The energy dissipation per cycle of $`B_a`$ is
$$W=\oint _{\mathrm{cycle}}B_a(t)\frac{dM(t)}{dt}dt$$
(17)
per unit volume. According to the critical state model, $`M(t)`$ follows $`B_a(t)`$ adiabatically, i.e., $`M(t)=M[B_a(t)]`$. Thus, the losses are given by the area of the magnetization hysteresis loop, $`M𝑑B_a`$.
It is conventional to express the ac response through the imaginary and real parts of the so-called nonlinear magnetic susceptibility. If the applied field is oscillated harmonically with amplitude $`B_{am}`$, i.e., $`B_a(t)=B_{am}\mathrm{cos}\omega t`$, the magnetization is also oscillating with the same period. The complex susceptibility is then defined by the coefficients of the Fourier series of the in general anharmonic $`M(t)`$, where the real and imaginary parts are given by
$`\chi _n^{\prime }`$ $`=`$ $`{\displaystyle \frac{\mu _0\omega }{\pi B_{am}}}{\displaystyle \int _0^{2\pi /\omega }}M(t)\mathrm{cos}(n\omega t)dt,`$
$`\chi _n^{\prime \prime }`$ $`=`$ $`{\displaystyle \frac{\mu _0\omega }{\pi B_{am}}}{\displaystyle \int _0^{2\pi /\omega }}M(t)\mathrm{sin}(n\omega t)dt,`$
respectively.
The dissipated energy, $`W`$, is determined by the response $`\chi _n^{\prime \prime }`$ at the fundamental frequency, namely
$$\chi ^{\prime \prime }\equiv \chi _1^{\prime \prime }=\frac{\mu _0W}{\pi B_{am}^2}=\frac{2\mu _0}{\pi B_{am}^2}\int _{-B_{am}}^{B_{am}}M(B_a)dB_a.$$
(18)
Below we shall also analyze the real part of the susceptibility at the fundamental frequency, $`\chi ^{\prime }\equiv \chi _1^{\prime }`$, which can be expressed as
$$\chi ^{\prime }=\frac{2\mu _0}{\pi B_{am}^2}\int _{-B_{am}}^{B_{am}}\frac{M(B_a)B_adB_a}{\sqrt{B_{am}^2-B_a^2}}.$$
(19)
The $`\chi ^{\prime \prime }(B_{am})`$ and $`\chi ^{\prime }(B_{am})`$ are calculated from these expressions using $`M(B_a)`$ obtained by the previously described procedure with $`B_{am}`$ covering a wide range of amplitudes. For convenience, we normalize the susceptibilities to the Meissner state value $`\chi _0=8R/3\pi d`$.
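Given a computed (or measured) descending branch $`M(B_a)`$, Eqs. (18) and (19) reduce to one-dimensional quadratures; the sketch below implements them, removing the inverse-square-root endpoint singularity of Eq. (19) with the substitution $`B_a=B_{am}\mathrm{cos}\theta `$. The illustrative branch used at the end is a hypothetical placeholder, not the critical-state solution.

```python
import numpy as np

def ac_susceptibility(M_desc, Bam, mu0=4e-7*np.pi, n=20_001):
    """chi' and chi'' at the fundamental frequency from the descending branch
    M_desc(B_a), via Eqs. (18)-(19)."""
    # chi'': straightforward integral over the descending branch
    Ba = np.linspace(-Bam, Bam, n)
    chi2 = 2.0 * mu0 / (np.pi * Bam**2) * np.trapz(M_desc(Ba), Ba)
    # chi': substitute B_a = Bam*cos(theta), which absorbs 1/sqrt(Bam^2 - Ba^2)
    th = np.linspace(0.0, np.pi, n)
    chi1 = 2.0 * mu0 / (np.pi * Bam) * np.trapz(M_desc(Bam*np.cos(th))*np.cos(th), th)
    return chi1, chi2

# Placeholder descending branch (hypothetical, for demonstration only)
M_down = lambda Ba: 1e5 * np.tanh((0.02 - Ba) / 0.01) - 8e4
print(ac_susceptibility(M_down, Bam=0.05))
```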
As seen from Fig. 4, the response $`\chi ^{\prime \prime }`$ shows a maximum as a function of the field amplitude. Such a maximum is in fact a common feature in all geometries. For the Bean model for a long cylinder the peak is known to occur when $`B_{am}`$ is equal to the full penetration field. In the perpendicular geometry the interpretation of the peak position is not so simple. Even in the Bean model for a thin disk only numerical results are available: the peak value equals $`\chi _{\mathrm{max}}^{\prime \prime }=0.24`$ and occurs at an amplitude of $`B_{am}=1.94B_c`$, corresponding to the penetration $`1a_m/R=72\%`$. We find that the $`B`$-dependence of $`J_c`$ leads to a slight increase both in $`a_m`$ and in the peak magnitude. For example, the numerical results for the Kim model with $`B_c=B_0`$ give $`\chi _{\mathrm{max}}^{\prime \prime }=0.29`$ and $`1a_m/R=70\%`$. The difference between various $`J_c(B)`$ models becomes more distinct if one analyses the asymptotic behavior at small and large field amplitudes, as shown below.
### B Low-field behavior
At small field amplitudes the Bean model gives the exact expressions
$`\chi ^{\prime }/\chi _0`$ $`=`$ $`-1+15(B_{am}/B_c)^2/32,`$ (20)
$`\chi ^{\prime \prime }/\chi _0`$ $`=`$ $`(B_{am}/B_c)^2/\pi .`$ (21)
Shown in Fig. 5 are our numerical results for $`\chi ^{\prime \prime }`$ for the exponential model. From the log-log plot it is clear that the quadratic dependence on $`B_{am}`$ of Eq. (21) is retained, only with a modified coefficient. Moreover, we find that $`\chi ^{}`$ can also be described by the Bean model expression, Eq. (20), with the same effective $`B_c`$. The effective $`B_c`$ fits the expression
$$B_c^{\text{eff}}/B_c=1-\alpha B_c/B_0,$$
(22)
when $`B_0/B_c\gtrsim 1`$, with $`\alpha =0.42`$ for the exponential model, and $`\alpha =0.36`$ for the Kim model. Interestingly, the same effective description was found for the flux penetration depth $`a`$, whereas it deviates from the description of the virgin magnetization, Eq. (16).
### C High-field behavior
The high-field behavior of the dissipated energy $`W`$ is shown in Fig. 6 for a variety of $`J_c(B)`$ dependences. We choose to plot $`W`$ rather than $`\chi ^{\prime \prime }`$ because the difference between the asymptotic behaviors of the various models becomes more evident. One sees from the figure that for large $`B_{am}`$ the Bean model yields $`W\propto B_{am}`$. The exponential model shows saturation, whereas a closer inspection shows that the Kim model leads to a logarithmic increase. These behaviors can be understood from the fact that for large amplitudes the disk is fully penetrated and $`B(r)\approx B_a`$. Therefore, $`M(B_a)\propto J_c(B_a)`$, and one obtains $`W\propto \int ^{B_{am}}J_c(B)𝑑B`$.
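This scaling can be checked with a few lines of numerical quadrature; the sketch below assumes that only the integral $`\int J_c(B)𝑑B`$ matters and drops all prefactors, so only the trend with $`B_{am}`$ is meaningful.

```python
# Sketch of the large-amplitude scaling W ~ int_0^{B_am} J_c(B) dB for the
# three J_c(B) models discussed in the text; prefactors are dropped.
import numpy as np

Jc0, B0 = 1.0, 1.0
models = {
    "Bean":        lambda B: Jc0 * np.ones_like(B),
    "Kim":         lambda B: Jc0 / (1.0 + np.abs(B) / B0),
    "exponential": lambda B: Jc0 * np.exp(-np.abs(B) / B0),
}

for Bam in (10.0, 100.0, 1000.0):
    B = np.linspace(0.0, Bam, 200001)
    W = {name: np.trapz(Jc(B), B) for name, Jc in models.items()}
    # Bean grows linearly in B_am, Kim logarithmically, exponential saturates.
    print(Bam, {k: round(v, 3) for k, v in W.items()})
```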
The high-field behavior of the real part of the susceptibility, $`\chi ^{}`$, for different $`J_c(B)`$ is shown in Fig. 7. For the Bean model we find asymptotically that $`\chi ^{}/\chi _0=-1.33(B_{am}/B_c)^{-3/2}`$ (dotted line), which is in agreement with Eq. (32) in Ref. . For the $`B`$-dependent $`J_c`$’s we also find power-law behavior, although with different exponents. For both the Kim and the exponential model the asymptotic behavior is described by $`\chi ^{}\propto B_{am}^{-3}`$. However, intermediate values of the exponent are also possible, e.g., for $`J_c=J_{c0}/[1+(|B|/3B_c)^{1/2}]`$ the numerical results suggest that $`\chi ^{}\propto B_{am}^{-9/4}`$.
In order to understand this power-law behavior let us rewrite Eq. (19) as
$$\chi ^{}=\frac{2}{\pi B_{am}^2}\int _0^{B_{am}}\frac{\mu _0M_{\mathrm{rev}}(B_a)B_adB_a}{\sqrt{B_{am}^2-B_a^2}},$$
(23)
where $`M_{\mathrm{rev}}=M_{}+M_{}`$ is the reversible magnetization. The integrand has different estimates in the regions I, II, and III indicated in Fig. 8. Therefore we divide the interval of integration correspondingly, $`\chi ^{}=\chi _I^{}+\chi _{II}^{}+\chi _{III}^{}`$. In region I, $`M_{\mathrm{rev}}`$ does not depend on $`B_{am}`$; thus, $`\chi _I^{}\propto B_{am}^{-3}`$ at large $`B_{am}`$. In region II ($`B_a\gg B_c`$) we use that
$$M_{\mathrm{rev}}(B_a)\propto \int 𝑑rr^2\left[J_c(B_a+B_i(r))-J_c(B_a-B_i(r))\right],$$
where $`B_i`$ is the field created by the current. Expanding this expression one has $`M_{\mathrm{rev}}\propto J_c^{}(B_a)\int 𝑑rr^2B_i(r)`$. Then, using the further simplification that $`\int 𝑑rr^2B_i(r)\propto J_c(B_a)`$, one obtains
$$\chi _{II}^{}\propto \frac{1}{B_{am}^2}\int _{II}\frac{J_c(B_a)J_c^{}(B_a)B_a}{\sqrt{B_{am}^2-B_a^2}}𝑑B_a.$$
Taking $`J_c(B)\propto (B_0/B)^s`$ at large $`B`$, we arrive at the estimates $`\chi _{II}^{}\propto \left(B_c/B_{am}\right)^2(B_0/B_{am})^{2s}`$ for small $`s`$, and $`\chi _{II}^{}\propto \left(B_c/B_{am}\right)^3(B_0/B_c)^{2s}`$ for large $`s`$. Finally, consider region III, where $`B_{am}-B_a`$ is of the order of $`\mu _0J_c(B_a)`$. Since the initial slope of the return branch does not depend on $`B_{am}`$, we have that $`M_{\mathrm{rev}}(B_a)\propto B_{am}-B_a`$. It then follows that $`\chi _{III}^{}\propto \left(B_c/B_{am}\right)^{3/2}\left(B_0/B_{am}\right)^{3s/2}`$. As the asymptotic behavior at large $`B_{am}`$ is determined by the slowest decaying term, we arrive at the following result
$$\chi ^{}\propto \{\begin{array}{cc}B_{am}^{-3(1+s)/2},\hfill & \hfill s<1\\ & \\ B_{am}^{-3},\hfill & \hfill s\geq 1\end{array}$$
(24)
These power laws fully agree with our numerical calculations shown in Fig. 7. Expression (24) gives the exact values of the exponent found for the Bean model ($`s=0`$), the Kim ($`s=1`$) and exponential ($`s=\mathrm{}`$) models, and even for the $`J_c(B)`$ with $`s=1/2`$. Note, however, that this asymptotic behavior is sometimes established only at rather low values of $`|\chi ^{}|`$, see curve 4 in Fig. 7. Therefore one should be very careful in interpreting the corresponding experimental log-log plots.
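For quick reference, the exponent predicted by Eq. (24) can be tabulated for the models considered here; the following sketch merely encodes the piecewise rule and is not a substitute for the full numerics.

```python
# Sketch: asymptotic exponent p in chi' ~ B_am^p predicted by Eq. (24)
# for J_c(B) falling off as B^{-s} at large B.
def chi_prime_exponent(s):
    return -1.5 * (1.0 + s) if s < 1.0 else -3.0

for label, s in [("Bean", 0.0), ("s = 1/2 model", 0.5),
                 ("Kim", 1.0), ("exponential", float("inf"))]:
    print(f"{label:14s} exponent = {chi_prime_exponent(s)}")
# Gives -3/2, -9/4, -3, -3, matching the values quoted in the text.
```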
It should be especially emphasized that the analysis presented for the high-field asymptotic behavior is not restricted to a thin disk. In fact, we expect the result (24) to be valid in any geometry. This result is also in agreement with numerical calculations for long samples described by the Kim and the exponential model.
### D Plots of $`\chi ^{\prime \prime }`$ versus $`\chi ^{}`$
In contrast to graphs of $`\chi `$ as a function of the field amplitude or temperature, a plot of $`\chi ^{\prime \prime }`$ versus $`\chi ^{}`$ contains only dimensionless quantities, and is therefore very useful for analyzing experimental data. In practice, such a parametric plot $`\chi ^{\prime \prime }(\chi ^{})`$ can be obtained by scans either over the magnetic field amplitude or over the temperature. Figure 9 presents the $`\chi ^{\prime \prime }(\chi ^{})`$ plot of the data shown in Fig. 4. We observe that a $`B`$-dependence of $`J_c`$ gives a significant distortion of the graph. Compared to the Bean model one finds that: (i) the maximum is shifted to higher values of $`\chi ^{\prime \prime }`$; (ii) it occurs at smaller values of $`\chi ^{}`$; (iii) in the limit of large $`B_{am}`$ (or high temperatures) $`\chi ^{\prime \prime }`$ falls to zero more abruptly.
Meanwhile, at small $`B_{am}`$, as $`\chi ^{}/\chi _0\rightarrow -1`$, the slope of the $`\chi ^{\prime \prime }(\chi ^{})`$ curve remains the same as in the Bean model, namely,
$$\frac{\chi ^{\prime \prime }}{\chi _0}=\frac{32}{15\pi }\left(1+\frac{\chi ^{}}{\chi _0}\right)\quad \mathrm{at}\quad \chi ^{}/\chi _0\rightarrow -1.$$
(25)
This result holds for any $`J_c(B)`$. It also follows from the previous analysis showing that at low fields both $`\chi ^{}`$ and $`\chi ^{\prime \prime }`$ are modified by $`J_c(B)`$ in the same way. The universal slope given by Eq. (25) allows one to examine if experimental data are described by the critical state model without a priori knowledge of the actual $`J_c(B)`$ dependence for the sample.
The presented $`\chi ^{\prime \prime }(\chi ^{})`$ plots for a disk in a perpendicular field should be compared to similar plots for long samples in a parallel field, studied systematically in Ref. . As expected, the Bean-model curve for a thin disk, shown by the dashed line in our Fig. 9, appears quite different from the Bean-model curves for long samples shown in Fig. 7(a,b) of Ref. . Meanwhile, further analysis of these figures shows that taking into account a $`B`$-dependent $`J_c`$ always leads to very similar distortions of the $`\chi ^{\prime \prime }(\chi ^{})`$ plots. Namely, in all geometries the $`\chi ^{\prime \prime }`$ peak increases in magnitude and shifts towards $`\chi ^{}=0`$. Note that such behavior is found when the characteristic field $`B_0`$ of the $`J_c(B)`$ dependence is larger than or of the order of $`B_c`$. For $`B_0\ll B_c`$ this behavior may change qualitatively. In particular, in the parallel geometry the peak position, $`\chi _{\mathrm{max}}^{}`$, becomes a nonmonotonic function of $`B_0`$. However, the case of $`B_0\ll B_c`$ is not very realistic for a thin disk, since $`B_c`$ is proportional to the sample thickness while $`B_0`$ is usually taken as geometry-independent.
It is interesting to compare our $`\chi ^{\prime \prime }(\chi ^{})`$ plots to the ones obtained by calculations based on a non-linear current-voltage curve, $`j\propto E^{1/n}`$, $`n<\mathrm{}`$. Shown in Fig. 10 together with the CSM results is a $`\chi ^{\prime \prime }(\chi ^{})`$-curve (dotted line) drawn in accordance with typical graphs presented in Refs. . Compared to the Bean model curve, the maximum of $`\chi ^{\prime \prime }`$ increases in magnitude and shifts towards $`\chi ^{}/\chi _0=-1`$. Moreover, the slope at $`\chi ^{}/\chi _0\rightarrow -1`$ becomes steeper. The last two features are in strong contrast to the effect of having a $`B`$-dependent $`J_c`$ in the CSM. Consequently, an analysis of the $`\chi ^{\prime \prime }(\chi ^{})`$ plot allows one to discriminate between a strict CSM behavior and one where flux creep is an ingredient.
Finally, we compare in Fig. 11 our theoretical results to available experimental data on the susceptibility of YBaCuO films. The data shown were obtained by reading off selected points from the graphs found in the literature. It is evident that the poor fit by the Bean model (dashed curve) is greatly improved by the curve (full line) calculated for a $`B`$-dependent $`J_c`$. While the agreement is improved throughout the $`\chi ^{\prime \prime }(\chi ^{})`$ plot, it is especially evident at small $`|\chi ^{}|`$ (large field amplitudes), where the $`J_c(B)`$-dependence plays a major role. There is still a discrepancy in the low-field region, where the experimental points do not follow the universal CSM slope given by Eq. (25). The deviation can be caused by flux creep leading to a steeper slope. This suggestion can be checked experimentally by analyzing $`\chi ^{\prime \prime }(\chi ^{})`$ plots obtained at different temperatures.
## V Conclusion
Magnetization and ac susceptibility of a thin superconducting disk placed in a perpendicular magnetic field were analyzed in the framework of the critical state model where $`J_c`$ depends on the local flux density. We solved numerically the set of coupled integral equations for the flux and current distributions, and from that calculated magnetization hysteresis loops as well as the susceptibility, $`\chi =\chi ^{}+\mathrm{i}\chi ^{\prime \prime }`$. The results, which were obtained for several commonly used $`J_c`$ decreasing with $`|B|`$, allowed us to determine the range of fields where the vertical width of the major magnetization loop, $`\mathrm{\Delta }M(B_a)`$, is directly related to $`J_c(B_a)`$.
We have shown that at small fields the virgin magnetization and complex susceptibility have the same dependence on $`B_a`$ as for the Bean model, although with different coefficients. For large ac amplitudes, $`B_{am}`$, the behavior of the ac susceptibility changes from $`\chi ^{}\propto B_{am}^{-3/2}`$ and $`\chi ^{\prime \prime }\propto B_{am}^{-1}`$ for the Bean model, to $`\chi ^{}\propto B_{am}^{-3}`$ and $`\chi ^{\prime \prime }\propto B_{am}^{-2}`$ for $`J_c`$ decreasing with $`|B|`$ as $`|B|^{-1}`$ or faster. We could show numerically, and also presented an argument, that when asymptotically $`J_c\propto |B|^{-s},s<1`$, one has $`\chi ^{}\propto B_{am}^{-3(1+s)/2}`$. The results for the high-field behavior of the susceptibility are expected to be valid for superconductors of any geometry.
A most convenient test of critical-state models is provided by an analysis of the $`\chi ^{\prime \prime }(\chi ^{})`$ plot. We conclude that the asymptotic behavior at $`\chi ^{}/\chi _0\rightarrow -1`$ is universal for the CSM with any $`J_c(B)`$. On the other hand, flux creep can affect this behavior. The peak in $`\chi ^{\prime \prime }`$ at $`\chi ^{}/\chi _0=-0.38`$ predicted by the Bean model was found to be shifted toward $`\chi ^{}=0`$ due to the $`B`$-dependence of $`J_c`$, and toward $`\chi ^{}/\chi _0=-1`$ because of flux creep.
###### Acknowledgements.
The financial support from the Research Council of Norway (NFR), and from NATO via NFR is gratefully acknowledged.
# Discrete stochastic modeling of calcium channel dynamics
## Abstract
We propose a simple discrete stochastic model for calcium dynamics in living cells. Specifically, the calcium concentration distribution is assumed to give rise to a set of probabilities for the opening/closing of channels which release calcium, thereby changing those probabilities. We study this model in one dimension, analytically in the mean-field limit of a large number of channels per site $`N`$, and numerically for small $`N`$. As the number of channels per site is increased, the transition from a non-propagating region of activity to a propagating one changes in nature from one described by directed percolation to that of deterministic depinning in a spatially discrete system. Also, for a small number of channels a propagating calcium wave can leave behind a novel fluctuation-driven state, in a parameter range where the limiting deterministic model exhibits only single pulse propagation.
It has become clear that the intracellular nonlinear dynamics of calcium plays a crucial role in many biological processes . The nonlinearity of this problem is due to the fact that there exist calcium stores inside the cell which can be released via the opening of channels which themselves have calcium-dependent kinetics. Typically, these processes are modeled using a set of coupled equations for the calcium concentration (the diffusion equation with sources and sinks) and for the relevant channels; the latter is often described by a rate equation for the fraction of open channels per unit of area. More elaborate models take into account the discrete nature of these channels, their spatial clustering, and fluctuations in the process of their opening and closing .
In this paper, we will propose and analyze a set of models which operate just with the channel dynamics alone. The justification for this is that the calcium field equilibrates quickly, with a diffusion time of perhaps 0.1s, as compared to the channel transition times, perhaps on the order of 1s for activation of a subunit to several seconds for its deactivation. One can then imagine solving for the quasi-stationary calcium concentration and thereafter using it to determine the conditional probabilities of channel opening or closing. In a subsequent paper, we will show how this can be done in detail starting from a specific fully-coupled model (the DeYoung-Keizer-model ); here, we will make reasonable assumptions for these probabilities and study the resulting stochastic model in a one dimensional geometry.
For specificity, we will focus on systems that have IP<sub>3</sub> (inositol 1,4,5- trisphosphate) channels. Each of these channels consists of a number of subunits. Here we assume that $`h`$ subunits have to be activated for the channel to be open; experiments indicate that $`h=3`$ . A subunit is activated when IP<sub>3</sub> ion is bound to its corresponding domain and Ca<sup>2+</sup> is bound to its activating domain and not bound to its inhibiting site. The characteristic time of binding and unbinding of IP<sub>3</sub> is typically so fast (more than 20 times faster than other binding steps ), that we can assume local balance of active/passive channels maintained at all times. Furthermore, we assume that the channels are spatially organized into clusters , with a fixed number of channels $`N`$ per cluster and a fixed inter-cluster distance.
Our model is as follows. We introduce two stochastic variables for each channel cluster: $`n_i`$, the number of activated subunits, and $`m_i`$, the number of inhibited subunits. At every time step, the number of activated subunits $`n_i`$ at site $`i`$ is changed due to three stochastic processes: activation of additional subunits by binding available Ca<sup>2+</sup> to their activation domains, de-activation by unbinding Ca<sup>2+</sup> from active subunits, and inhibition by binding available Ca<sup>2+</sup> to their inhibition domains. We take these transition rates to depend on the number of open channels at site $`i`$, $`c_i`$, and on the number of open channels at the nearest neighboring sites $`i\pm 1`$, $`c_{i\pm 1}`$. Similarly, there will be binding and unbinding to the inhibitory domain, changing $`m_i`$. We denote by $`p_{0(1)}^\pm `$ the probability to activate/inhibit a subunit per number of open channels at the same site (0) or the neighboring site (1). To compute the actual probabilities, we need to multiply these by the number of open channels. Here, we use the simple expedient of taking this to equal $`n_i^h/(hN_s^{h-1})`$, where the total number of subunits $`N_s=hN`$; this is easily shown to be the expected number of open channels for large enough $`N`$. This approach allows us to avoid keeping explicit account of each of the independent subunits. Also, we let $`p_d^\pm `$ be the deactivation and deinhibition probabilities, which are $`c`$-independent.
Let us define the total probabilities $`p^\pm =p_0^\pm +2p_1^\pm `$ and the “diffusion constant” $`\alpha =p_1^\pm /(p_0^\pm +2p_1^\pm )`$. We also denote $`C_i(t)=(1-2\alpha )c_i(t)+\alpha c_{i-1}(t)+\alpha c_{i+1}(t)`$, which mimics the amount of calcium at site $`i`$ due to open channels at sites $`i,i\pm 1`$. Our model explicitly consists of the following coupled stochastic processes. $`n_i`$ is updated
$$n_i(t+\mathrm{\Delta }t)=n_i(t)+\mathrm{\Delta }_n^{+}-\mathrm{\Delta }_n^{-}-\delta _n^{+}$$
(1)
where $`\mathrm{\Delta }_n^+`$ is a random integer drawn from the binomial distribution $`B(\mathrm{\Delta }_n^+,N_s-n_i(t)-m_i(t),p^+C_i(t))`$, $`\mathrm{\Delta }_n^{-}`$ is drawn from $`B(\mathrm{\Delta }_n^{-},n_i(t),p^{-}C_i(t))`$, and $`\delta _n^+`$ is drawn from $`B(\delta _n^+,n_i(t),p_d^+)`$. The equation for $`m_i`$ reads
$$m_i(t+\mathrm{\Delta }t)=m_i(t)+\mathrm{\Delta }_m^{+}-\delta _m^{+}$$
(2)
where $`\mathrm{\Delta }_m^+`$ is drawn from $`B(\mathrm{\Delta }_m^+,N_s-m_i(t),p^{-}C_i(t))`$, and $`\delta _m^+`$ is drawn from $`B(\delta _m^+,m_i(t),p_d^{-})`$. We do not allow for transitions from the inhibited state to the activated state. In all these formulas, $`B(x,y,p)\equiv \left(\genfrac{}{}{0pt}{}{y}{x}\right)p^x(1-p)^{y-x}`$. Note that the probability that IP<sub>3</sub> is bound is included by rescaling the number of subunits.
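To make the update rule explicit, a minimal sketch of one synchronous step of Eqs. (1) and (2) is given below. The parameter values, the periodic boundary conditions, and the clipping safeguards are illustrative assumptions only, not the settings used for the results reported here.

```python
# Minimal sketch of one synchronous update of Eqs. (1)-(2); all parameter
# values and the periodic boundaries (np.roll) are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
L, h, N = 100, 3, 10                 # sites, subunits per channel, channels per site
Ns = h * N                           # subunits per site
alpha = 0.1                          # neighbour coupling fraction
p_act, p_inh, pd_act, pd_inh = 0.02, 0.002, 0.1, 0.01   # p^+, p^-, p_d^+, p_d^-

n = np.zeros(L, dtype=int)           # activated subunits per site
m = np.zeros(L, dtype=int)           # inhibited subunits per site
n[L // 2] = Ns                       # single active seed

def step(n, m):
    c = n.astype(float) ** h / (h * Ns ** (h - 1))        # expected open channels
    C = (1 - 2 * alpha) * c + alpha * np.roll(c, 1) + alpha * np.roll(c, -1)
    P = lambda p: np.clip(p * C, 0.0, 1.0)                # per-subunit probabilities
    dn_act = rng.binomial(np.clip(Ns - n - m, 0, None), P(p_act))   # Delta_n^+
    dn_inh = rng.binomial(n, P(p_inh))                              # Delta_n^-
    dn_de  = rng.binomial(n, pd_act)                                # delta_n^+
    dm_inh = rng.binomial(Ns - m, P(p_inh))                         # Delta_m^+
    dm_de  = rng.binomial(m, pd_inh)                                # delta_m^+
    n_new = np.clip(n + dn_act - dn_inh - dn_de, 0, Ns)
    m_new = np.clip(m + dm_inh - dm_de, 0, Ns)
    return n_new, m_new

for _ in range(200):
    n, m = step(n, m)
print("activated subunits near the seed:", n[L // 2 - 5: L // 2 + 5])
```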
As a first step, we consider a simplified version of the channel dynamics with the inhibition process excluded (all $`p^{}`$=0), i.e. a subunit is activated whenever Ca<sup>2+</sup> is attached to its activating site. Thus we take $`m_i=0`$, and arrive at the one-variable model for the number of activated subunits $`n_i`$. Let us first focus on fairly small $`N_s`$. Examples of the stochastic dynamics for several values of parameters are shown in Figure 1. At small $`\alpha `$, an initial seed almost always ultimately dies giving rise to so-called abortive calcium waves. At larger values of $`\alpha `$ the region of activated channels typically expands at a finite rate. This transition mirrors what has been seen in many experimental systems .
As is well known for statistical models such as the contact process, the critical value of $`\alpha `$ can be accurately determined by computing the distribution of survival times $`\mathrm{\Pi }(t)`$ for the activation process started from a single active site. For $`\alpha <\alpha _c`$, the distribution falls exponentially at large $`t`$ as the wave of activation eventually dies out. On the contrary, at $`\alpha >\alpha _c`$, $`\mathrm{\Pi }(t)`$ asymptotically reaches a constant value $`\mathrm{\Pi }_{\mathrm{}}`$, since a non-zero fraction of runs produce ever-expanding active regions. At $`\alpha =\alpha _c`$, the distribution function exhibits a power-law asymptotic behavior with the slope determined by the universality class of the underlying stochastic process. Our data (not shown) indicate that $`\alpha _c`$ is inversely proportional to the number of subunits per site $`N_s`$. We have checked that our data are in the directed percolation (DP) class. For example, in Fig. 2 we show $`\mathrm{\Pi }(t)`$ of a cluster of open channels at the critical value of $`\alpha _c`$ for $`h=3`$, $`N_s=10`$ and $`\gamma =0.1`$. The power-law dependence is consistent with the DP prediction of $`\mathrm{\Pi }(t)\propto t^{-0.159}`$. This is perhaps not too surprising. According to the Janssen-Grassberger DP conjecture, any spatio-temporal stochastic process with short-range interactions, a fluctuating active phase and a unique non-fluctuating (absorbing) state, a single order parameter, and no additional symmetries should belong to the DP class. This result does open up the exciting possibility that intracellular calcium dynamics could be an experimental realization of the DP process.
Figure 1(c) shows the opposite limit where the dynamics becomes almost deterministic. If we take $`N_s\rightarrow \mathrm{}`$ and fix $`pN_s/h\equiv P`$, we can use a mean-field description in terms of the fraction of activated subunits $`\rho _i=n_i/N_s`$,
$$\dot{\rho _i}=((1-2\alpha )\rho _i^h+\alpha \rho _{i-1}^h+\alpha \rho _{i+1}^h)(1-\rho _i)-\gamma \rho _i.$$
(3)
and where we rescaled time $`t^{}=Pt/\mathrm{\Delta }t`$ and introduced $`\gamma =p_d/P`$. For all $`h\geq 2`$, if $`\gamma <\gamma _{cr}`$, Eq. (3) possesses two stable uniform solutions, $`\rho =0`$ and $`\rho =\rho _0`$, and one unstable solution $`\rho _u`$, where $`\rho _{0,u}`$ are the real roots of the algebraic equation $`\rho ^{h-1}(1-\rho )=\gamma `$. The front is a solution connecting these two stable fixed points; it is easy to show that this front has a unique propagation velocity.
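The fixed-point structure can be made concrete with a simple root search on $`\rho ^{h-1}(1-\rho )=\gamma `$; the sketch below does this for $`h=3`$ (the values of $`\gamma `$ are illustrative only).

```python
# Sketch: non-trivial uniform fixed points of Eq. (3) from
# rho^(h-1) * (1 - rho) = gamma (rho = 0 is always a solution).
import numpy as np

def fixed_points(gamma, h=3, n_grid=200001):
    rho = np.linspace(1e-6, 1.0, n_grid)
    f = rho ** (h - 1) * (1.0 - rho) - gamma
    idx = np.where(f[:-1] * f[1:] < 0.0)[0]          # bracketed sign changes
    # linear interpolation inside each bracketing interval
    return [rho[i] - f[i] * (rho[i + 1] - rho[i]) / (f[i + 1] - f[i]) for i in idx]

for gamma in (0.05, 0.10, 0.147):    # gamma_cr = 4/27 ~ 0.148 for h = 3
    print(gamma, [round(r, 4) for r in fixed_points(gamma)])
# Below gamma_cr this returns [rho_u, rho_0]; above it no non-trivial roots remain.
```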
For small $`\alpha `$, the discreteness of our spatial lattice causes the front to become pinned, as the probability of activating subunits at the neighboring site $`O(\alpha \rho _0^h)`$ becomes smaller than the threshold value for excitation probability $`O(\rho _u)`$. The stationary front solution is described by the recurrence relation,
$$(1-2\alpha )\rho _i^h+\alpha \rho _{i-1}^h+\alpha \rho _{i+1}^h=\frac{\gamma \rho _i}{1-\rho _i}$$
(4)
The bifurcation line which separates pinned and moving fronts can be found in the limit of small $`\alpha `$ by using the ideas of Ref. . Indeed, in this limit, the values of $`\rho _i`$ quickly (as $`\alpha ^i`$) approach 0 and $`\rho _0`$ away from the front at $`i\rightarrow \pm \mathrm{}`$, respectively. We can thus replace $`\rho _i`$ by $`\rho _0`$ and $`0`$ everywhere to the left and to the right of the front position except for $`\rho _\pm `$ at the two sites nearest to the front, $`i-1`$ and $`i+1`$. Solving the resulting set of two algebraic equations up to $`\alpha ^2`$, one can obtain the values of $`\rho _\pm `$. At any $`\gamma `$, there is a critical value $`\alpha _m`$ at which the real solution $`\rho _\pm `$ vanishes. The family of these values $`\alpha _m`$ forms the bifurcation line for front pinning in the $`(\gamma ,\alpha )`$ plane. At large $`\alpha `$, the discreteness of the mean field model (3) becomes insignificant, and (3) can be replaced by its continuum limit
$$\partial _t\rho =(\rho ^h+\alpha \partial _x^2\rho ^h)(1-\rho )-\gamma \rho .$$
(5)
which of course has no front pinning. Instead, $`\alpha `$ can be scaled out and there is a specific value of $`\gamma `$ at which the system goes from forward- to backward-propagating fronts. Figure 3 shows the phase diagram of the mean field equation (3) for $`h=3`$. All the data (except possibly the non-generic case $`\gamma =0`$) are consistent with the expected $`(\alpha -\alpha _m)^{1/2}`$ scaling.
How does one get from DP behavior to deterministic pinning/depinning? To investigate this issue, we have performed simulations of the front speed as a function of $`\alpha `$ at various finite values of $`N_s`$, with the results given in Fig. 4. At large $`N_s`$, the velocity approaches the mean field prediction as long as $`\alpha >\alpha _m`$. Close to the critical value $`\alpha _m`$, the velocity deviates from the mean-field dependence $`V\propto (\alpha -\alpha _m)^{1/2}`$ because of thermally activated “creep”; fluctuations allow the front to overcome potential barriers associated with the finite site separation, and lead to exponentially slow front propagation (see, e.g., ). The directed percolation regime is not observed at large $`N_s`$ since the DP critical value $`\alpha _c`$ is less than $`\alpha _m`$. At smaller $`N_s`$, the relative magnitude of the fluctuations grows, and the DP threshold value $`\alpha _c`$ exceeds $`\alpha _m`$. Now the front pinning is determined by fluctuations rather than discreteness, and the critical state exhibits the properties of directed percolation.
Now we return to the full two-variable stochastic model which describes both activation and inhibition. Since the probability of Ca<sup>2+</sup> binding to the inhibition domain is typically much smaller than those for the activation domain, the inhibitor dynamics is slow. In the mean-field limit $`N_s\mathrm{}`$, this model is similar to the FitzHugh-Nagumo model often used to describe waves propagating in excitable systems. One therefore expects that for a certain range of binding/unbinding probabilities, the model gives rise to pulse propagation; that is, once the wave passes, the system goes into a state dominated by inhibition from which it slowly recovers as the inhibitory domains slowly unbind. This is indeed what we find for large enough $`N_s`$, as shown in Fig. 5(a). Behind the pair of outgoing pulses, the channels stay refractory for a certain time $`O(1/p^{})`$ and then return to the quiescent state.
However, we find that having only a modest number of channels $`N`$ leads to fluctuations which strongly affect the spatio-temporal behavior of the model. In fact, a new dynamical state is formed behind the outgoing fronts, a state which remains active at all subsequent times (see Fig.5,b). This state is catalyzed by backfiring, i.e. the creation of oppositely propagating waves behind a moving front. In the deterministic limit of our model, this cannot occur as the system is completely refractory once the front has passed. At finite $`N`$ however, propagation of the front does not lead to the activation and subsequent inhibition of all the channels. Instead, a finite number of these remain inactivated, providing a supply of active elements that can still support wave propagation. There exist more complicated deterministic models, such as one proposed for $`CO`$ oxidation on single crystal surfaces, which also appear to have pulse-induced backfiring. There, however, this effect is due to the loss of pulse stability which occurs due to the rather complex non-linear dynamics of the inhibitory field. Here, it is the fluctuations which allow for this phenomenon.
We have checked that this backfiring-induced state occurs as well in more realistic and more complex models which solve for the calcium concentration together with the channel dynamics. Again, the mechanism appears to be the lack of complete inhibition in the wake of the propagating pulse. Hence, our result that one should find this behavior in intracellular calcium dynamics is not an artifact of any of the simplifying assumptions used here. Also, this state persists when the model is studied in higher dimensions. A study of the exact nature of the transition to backfiring and a comparison of the deterministic versus stochastic pathways to its existence will be undertaken in future work.
In summary, we proposed and studied a simple discrete model of calcium channel dynamics based on the assumption that calcium diffusion time is much smaller than the characteristic times of Ca<sup>2+</sup> binding/unbinding. This model demonstrates familiar properties of deterministic reaction-diffusion systems in the limit $`N\mathrm{}`$ when fluctuations are small. For small $`N`$, we observed a transition to a directed percolation regime, in agreement with the general DP conjecture. For the full model including inhibition, we found at small $`N`$ a novel persistent fluctuation driven state which emerges behind a front of outgoing activation; this occurs in a parameter regime where the corresponding deterministic system exhibits only single outgoing pulses.
The authors thank H. Hinrichsen, M.Or-Guil and I. Mitkov for helpful discussions. LST thanks Max Planck Institut für Physik komplexer Systeme, Dresden, Germany for hospitality. LST was supported in part by the Engineering Research Program of the Office of Basic Energy Sciences at the US Department of Energy under grants No. DE-FG03-95ER14516 and DE-FG03-96ER14592. HL was supported in part by US NSF under grant DMR98-5735. M.F. was supported in part by DFG grant Fa350/2-1.
# First Characterization of the Ultra-Shielded Chamber in the Low-noise Underground Laboratory (LSBB) of Rustrel Pays d’Apt
## Abstract
In compliance with international agreements on nuclear weapons limitation, the French ground-based nuclear arsenal has been decommissioned in its totality. One of its former underground missile control centers, located in Rustrel, 60 km east of Avignon (Provence) has been converted into the “Laboratoire Souterrain à Bas Bruit de Rustrel-Pays d’Apt” (LSBB). The deepest experimental hall (500 m of calcite rock overburden) includes a 100 m<sup>2</sup> area of sturdy flooring suspended by and resting on shock absorbers, entirely enclosed in a 28 m-long, 8 m-diameter, 1 cm-thick steel Faraday cage. This results in an unparalleled combination of shielding against cosmic rays, acoustic, seismic and electromagnetic noise, which can be exploited for rare event searches using ultra low-temperature and superconducting detectors. The first characterization measurements in this unique civilian site are reported. http://home.cern.ch/collar/RUSTREL/rustrel.html
1. Description of the infrastructure
Rustrel, a small village in the Pays d’Apt, is one hour by car east of Avignon, in the heart of Provence (southeast France). The high-speed train connection from Paris to Avignon takes three hours and twenty minutes. Marseille-Marignane is the nearest international airport, one hour and a half from Rustrel. The village is just south of the Plateau d’Albion, the former location of the ground-based component of the French air force’s “force de frappe” (strike force). The military selection of this location was presumably due to the proximity of the eastern French border, the low density of population, the good rock quality, and the existence of mountains offering natural protection. Missile silos were spread over a large area, with two underground launching centers on both sides of the Mount Ventoux.
One of these centers has been spared from destruction and has now been converted into a laboratory (LSBB); the usable spare parts and technical subsystems from the second center have been preserved for future maintenance of the LSBB. The LSBB consists of 3.2 km of reinforced concrete galleries below the “Grande Montagne” (1,010 m at the summit), joining various halls. A heliport is available in front of the entrance area, which houses office space and living quarters. A telecommunications area at the summit is directly connected via optical fiber to the deepest part of the galleries, the launching control room.
Two major experimental halls are available; the shallowest (350 m<sup>2</sup>, 7 m ceiling height, 50 m rock overburden) is located 400 meters from the entrance and is shielded in the same manner as the launching control room (one kilometer further down the corridors, protected by 500 m of calcite rock). This control room is the second hall of interest and includes 100 m<sup>2</sup> of sturdy flooring suspended by shock absorbers. The room is entirely surrounded by a horizontal steel capsule 28 m-long, 8 m in diameter and 1 cm-thick; entrance doors are clamped by electrical contacts to ensure the sealing of this peculiar Faraday cage. Several auxiliary galleries at the same depth can be used for experiments not needing exceptional EM shielding.
Due to its previous purpose, the whole setup was designed to offer maximum safety against intrusion and nuclear attack: External and internal steel doors can bear 20 bars of overpressure and emergency generators respond in a tenth of a second to energy supply perturbations. Ten optical fibers are available for telecommunications with the exterior. The whole area is fully air-conditioned, with ventilation and running water reaching the deepest hall. Road traffic is very scarce within two kilometers from the laboratory.
2. Radiation shielding & natural radioactivity
2.1 Neutrons
The capsule and surrounding rooms are located at a depth of ∼1,500 meters of water equivalent (m.w.e.). In this sense, LSBB ranks about average when compared to other underground laboratories. Nevertheless, this depth is more than enough to ensure screening of secondary cosmic neutrons (fig. 1). Indeed, neutrons produced by natural radioactivity in the surrounding rocks are dominant below a few hundred m.w.e., independent of the nature of the shielding materials used in the experiments (deeply-reaching muons can produce neutrons in this shielding). An increased depth brings no further reduction in neutron flux .
2.2 Muons
At 1,500 m.w.e. the muon flux becomes a second order concern for most of the activities envisioned at LSBB, and can be further suppressed by anticoincidence with an active veto (plastic scintillator) without creating substantial dead time. In the case of ultra-low temperature experiments, long-lived heating by passing muons (a problem at ground level) is a rare occurrence at a flux of $`5\times 10^{-3}\mu /m^2/s`$ (fig. 2).
2.3 Rock radioactivity
Rock samples were extracted from exposed walls in secondary galleries, measured by the CERN radioprotection service and compared with a reference calcite rock from a low-depth gallery nearby Paris. Only two isotopes were detected above background (table 1).
| Isotope | Boissise la Bertrand | LSBB |
| --- | --- | --- |
| | (Seine et Marne) | (Rustrel) |
| <sup>137</sup>Cs | 0.204 Bq | 0.437 Bq |
| <sup>226</sup>Ra | 2.030 Bq | 0.645 Bq |
Table 1: Identifiable rock radioactivity in Boissise (near Paris), and in the Laboratoire Souterrain à Bas Bruit (LSBB).
Comparable levels were found, within the factor of 2 uncertainty in the activities. While this is nowhere close to an exhaustive measurement, it allows us to discard the possibility that Rustrel rock might be unusually “hot”.
2.4 Airborne radon
The radon concentration in the atmosphere of the capsule was measured during three weeks in January 1998 using a RAD7 NITON radonmeter (a self-contained continuous-monitoring solid state alpha detector ). An average value of 28 Bq/m<sup>3</sup> (0.77 pCi/l) was obtained . As a reference, this can be compared with the maximum acceptable 45 Bq/m<sup>3</sup> in US households, or with the lowest values achieved in the Gran Sasso underground laboratory, 20-50 Bq/m<sup>3</sup> . Similar results were obtained in April 1999. This low rate, comparable to outdoor measurements in the area, is a factor of 15 lower than during military operation. The improvement was obtained by opening the vertical escape chimney of the site, allowing for natural ventilation, and by turning off a cooling unit within the capsule. The escape chimney was normally obstructed for security reasons; as a result, in spite of the strong ventilation in the gallery, the deepest hall was almost a dead end for air circulation.
3. Seismicity and acoustic noise
The area of Rustrel is well-studied from a geological point of view. It is at the center of a 30 km circle free of active faults in spite of the relative proximity to the Alps. This structural configuration is the reason for the absence of local seismicity during the last eleven hundred years . The acoustic environmental noise has been roughly characterized in a 20 m<sup>2</sup> shielded room adjacent to the capsule during the installation of the SIMPLE experiment , which relies precisely on acoustic detection of the weak signal arising from the vaporization of superheated freon droplets suspended in a gel matrix. At the level of sensitivity of our room monitoring microphones, no significant activity was detected after the ventilation ducts were muffled.
4. Electromagnetic shielding
4.1 A unique underground shielding
The peculiar shielding of the main experimental halls was designed with the intention of protecting electronic equipment from the huge electromagnetic pulse created by a nearby nuclear explosion. This is the reason why, instead of a conventional Faraday cage made of thin copper, the choice was made for a thick (1 cm) steel shielding. As a result, these large cages attenuate not only high-frequency electromagnetic waves but also low-frequency and even DC (e.g., the magnetic field of the Earth). Due to their large dimensions it is possible to locally create large magnetic fields as long as the walls have not reached their magnetic saturation. In other words, within these cages one can have very low magnetic fluctuations even at non-zero magnetic field values.
4.2 DC domain
The chosen steel was not optimized for magnetic shielding; nevertheless, the residual magnetic field inside the capsule is lower than 6 $`\mu `$T (compare with 46 $`\mu `$T for the Earth’s magnetic field at the LSBB latitude). The measurements were done with triaxial fluxgate magnetometers with a bandwidth from 0 to 5 Hertz, a noise level of 0.5 nT (peak to peak) and an absolute precision of 200 nT (0.2 $`\mu `$T) . Over a period longer than 12 hours a remarkable long-term stability and low noise level (less than 20 nT) were observed. This performance is not extreme, yet very impressive when observed over such a large experimental area (one would have to wait more than 12 hours to observe, in a square loop of $`0.3\times 0.3mm^2`$, a magnetic flux variation larger than one flux quantum!). Such long-term magnetic stability allows the use of SQUID detectors with large pick-up coils. A Hall-effect gaussmeter applied directly on the steel walls just above the welding lines revealed only a weak local magnetization (local magnetic field smaller than 100 $`\mu `$T). At the expense of a few precautions (displacement of the ventilating units, compensated AC wiring) the EM quality of the site can be improved even further.
4.3 Dynamic fluctuations
Using a triaxial fluxgate connected to a spectrum analyzer, the performance of the shielding was measured from 1 to 1000 Hz. No detectable signal above the noise level of the measuring chain was obtained, indicating that the magnetic fluctuations are lower than 2.5 pT/Hz<sup>1/2</sup>. Finally, in early August 1999 a high-Tc SQUID was operated for a short period inside the capsule; from this measurement it was concluded that in the same frequency range the noise level is below 600 fT/Hz<sup>1/2</sup>.
5. Conclusion
This first set of characterization measurements indicates that a singular combination of shielding features makes LSBB a site of choice for low-noise experiments in the fields of ultra-low temperature physics, superconductivity, biology, metrology and astroparticle physics.
Acknowledgements: We are indebted to M. Auguste, G. Boyer, A. Cavaillou and L. Ibtiouene for their help in performing these measurements.
# Identification of a Likely Radio Counterpart of the Rapid Burster
## 1 Introduction
Discovered in 1976 Lewin et al. (1976), the Rapid Burster (MXB 1730$``$335, hereafter “RB”) is located in the highly reddened globular cluster Liller 1 Liller (1977), which has a distance modulus of 14.68$`\pm `$0.23, corresponding to 8.6$`\pm `$1.1 kpc, determined by main-sequence fitting Frogel et al. (1995). Liller 1 has a small optical core radius (about 6$`\stackrel{}{\mathrm{.}}`$5).
Radio observations of transient X-ray binaries have found several to be radio transients as well (see Hjellming & Han (1995) for a review), both black hole candidates (A0620$``$00, GS 2000+25, GS 2023+33, GS 1124$``$68) and neutron stars (Aql X-1, Cir X-1, Cen X-4). The outbursts of X-ray transients are due to a sudden turn-on of accretion onto the compact object in a binary lasting from $``$days to months Tanaka & Lewin (1995); Chen et al. (1997). The radio spectral and temporal behavior in some of these objects is described by a synchrotron bubble model Van der Laan (1966) which indicates that there are plasma outflows associated with the X-ray outburst. In particular, the black hole candidate and superluminal jet source GRS 1915+105 has been seen to exhibit correlated behavior at X-ray, infrared and radio wavelengths Fender et al. (1997); Mirabel et al. (1998); Eikenberry et al. (1998a, b); Fender & Pooley (1998) that is readily understood in terms of the synchrotron emission of expanding plasmoids that have been ejected from the inner regions of the system Fender et al. (1997); Mirabel et al. (1998); Fender & Pooley (1998).
Despite its frequent X-ray outbursts (average interval, $`220`$ days; Guerriero et al. (1999)), several previous studies have not detected radio emission from the RB Johnson et al. (1978); Lawrence et al. (1983); Grindlay & Seaquist (1986); Johnston et al. (1991); Fruchter & Goss (1995). Improvements in radio sensitivity since some of those studies were performed made the detection of a RB radio counterpart at previously unattainable flux levels practical. Furthermore, the advent of the RXTE All-Sky Monitor made it possible to correlate radio observations with a well sampled X-ray light curve. We therefore undertook a search for a radio counterpart of the Rapid Burster in X-ray outburst. Since the type II X-ray bursts of the Rapid Burster are thought to be caused by the same phenomenon (spasmodic accretion) as the X-ray outbursts van Paradijs (1996), one might expect that they are accompanied by simultaneous radio emission. The observations reported here marginally exclude (at the 2.9$`\sigma `$ level) simultaneous radio burst emission from the likely radio counterpart.
### 1.1 X-ray behavior
The RB is a transient X-ray source which has been observed during the past few years to go into outburst approximately every 220 days for a period of $``$30 days Guerriero et al. (1999). It is the only low-mass X-ray binary (LMXB) which produces two different types of X-ray bursts Hoffman et al. (1978). Type I bursts, which are observed from $``$40 other LMXBs, are due to thermonuclear flashes on the surface of an accreting neutron star. Type II bursts, which have been observed from only one other LMXB (GRO J1744$``$28; Kouveliotou et al. (1996); Lewin et al. (1996)), are sudden releases of gravitational potential energy resulting from accretion instabilities. For a detailed review of type I and type II bursts, see Lewin, van Paradijs and Taam Lewin et al. (1993, 1995a).
The RB does not fit easily into the “Z/Atoll” paradigm of LMXBs Rutledge et al. (1995), in which the correlated X-ray fast-timing and spectral behavior of LMXBs divides the population into two classes Hasinger & van der Klis (1989); van der Klis (1995). However, the RB is known to exhibit periods of behavior characteristic of an Atoll source. During an outburst in 1983, the RB exhibited strong persistent emission (PE) and type I bursts with no type II bursts – behavior typical of the Atoll sources Barr et al. (1987). This behavior has also been seen in outbursts observed more recently with RXTE. For the first $``$17 days of these outbursts, strong PE is accompanied only by type I X-ray bursts. Type II bursting behavior begins after this period and continues while the PE decreases (details of these observations are given in Guerriero et al. (1999)).
In spite of 20 years of theoretical modeling, no satisfactory model exists for the disk instability which drives the type II bursts (for a review, see Lewin et al. (1995a)). Models which require weak magnetic fields ($`\lesssim 10^9`$ G) were effectively excluded with the discovery of GRO J1744$`-`$28, which contains a pulsar with a magnetic field of $`10^{11}`$ G Finger et al. (1996); Cui (1997), and also exhibits type II bursts Kouveliotou et al. (1996); Lewin et al. (1996).
### 1.2 Previous Radio Observations
There have been several radio studies of fields of view containing Liller 1, some with the goal of identifying a radio counterpart of the Rapid Burster. Some have focussed on finding radio variability in hopes of catching bursts in the radio correlated with type II X-ray bursts while others have sought persistent radio sources. We summarize the most stringent limits on the flux density of a persistent radio source in Table 1; additional, less stringent limits are summarized by Lawrence et al. Lawrence et al. (1983).
Fruchter & Goss (Fruchter & Goss, 1995, hereafter FG) discovered a radio source in three bands (0.33, 1.5, and 4.5 GHz), with a spectral slope of $`-2`$. They observe the flux density of this source at 1.5 GHz to be above the upper limit derived from an observation at another epoch Johnston et al. (1991). FG interpret this discrepancy as the result of beam dilution in the Johnston et al. observations (0$`\stackrel{}{\mathrm{.}}`$5 beam obtained with the VLA in A-array) over-resolving a large population of radio pulsars in Liller 1. Thus, this radio source was interpreted as the integrated emission of a population of radio pulsars in Liller 1. To date, no radio pulsations have been detected from the direction of Liller 1 (R. Manchester, private communication).
Prior to the present work, there have been two studies which produced limits on radio emission simultaneous with type II X-ray bursts of the RB. Johnson et al. Johnson et al. (1978) observed simultaneously in the radio and X-ray bands during a period when a total of 64 type II bursts were observed in the X-ray, and placed upper limits on the simultaneous flux density of radio bursts (at 2.7 and 8.1 GHz) of $``$20 mJy. Rao & Venugopal Rao & Venugopal (1980) observed at 0.33 GHz during two X-ray bursts and placed an upper limit of 0.2 Jy on the radio flux density during the bursts.
There are claims of radio burst detections from the RB without simultaneous X-ray observations (Calla et al. (1979, 1980a, 1980b); and Calla, private communication in Johnson et al. (1978)). Calla et al. report observing approximately 14 radio bursts on nine different days with peak flux densities of 400–600 Jy at 4.1 GHz and durations of 10–500 s. This phenomenon has not, however, been confirmed at other observatories and the reported flux densities are substantially above the limits placed by Johnson et al. Johnson et al. (1978) during their simultaneous X-ray/radio observations. If the reported radio bursts are real, they do not appear to be correlated with type II X-ray bursts Lawrence et al. (1983). Using the 16.7 ksec of observations at 8.4 GHz obtained in the course of this work, we find a 3$`\sigma `$ upper limit of 250 mJy on the flux density of our likely radio counterpart during any single 3.3 sec integration. Thus we see no evidence for radio flares of the type reported by Calla et al. during either the X-ray active or quiescent periods.
### 1.3 Objectives
The goals of the present work are, first, to search for a radio counterpart of the Rapid Burster, detectable either in X-ray outburst or in quiescence; and second, to determine if the radio emission can be tied to the active accretion during an outburst.
In Section 2 we describe the radio and X-ray observations, and present some general results. In Section 3, we present the results of our analyses of the radio observations in the context of the “standard model” of radio emission from X-ray transients: the synchrotron bubble model. We evaluate the likelihood of the counterpart identification in Section 4, discuss the results of these observations in Section 5, and list our conclusions in Section 6.
## 2 Observations
X-ray observations were made with two instruments on the Rossi X-ray Timing Explorer (RXTE; Bradt et al. (1993)): the All-Sky Monitor (ASM; Levine et al. (1996)) and the Proportional Counter Array (PCA; Zhang et al. (1993); Jahoda et al. (1996)). The ASM consists of three 1.5–12 keV X-ray proportional counters with coded-aperture masks. It obtains observations of approximately 80% of the sky every 90 minutes and is sensitive to persistent X-ray sources down to $``$5 mCrab ($`3\sigma `$ detection) in a typical day’s cumulative exposure Remillard & Levine (1997). The standard data products of the ASM include the daily-average count rate of a known catalogue of X-ray sources including the RB<sup>1</sup><sup>1</sup>1see http://space.mit.edu/$``$derekfox/xte/ASM.html. A summary of our radio observations and the contemporaneous ASM X-ray flux measurements is shown in Table 2.
The PCA is a collimated array of gas-filled proportional counters, sensitive in the 2–60 keV energy range, with a FOV of $`1^{}`$ and a geometric area of $`6500\mathrm{cm}^2`$. In total, five simultaneous PCA and radio observations were performed; they are discussed in detail below. When the PCA is pointed directly at the RB, there is a nearby ($`0.5^{}`$) persistent X-ray source (4U 1728$``$34) which contributes to the measured X-ray flux. During part of these observations, the PCA is pointed offset from the RB at a reduced collimator efficiency, excluding 4U 1728$``$34 entirely from the field of view. The uncertainty in the aspect correction of the PCA dominates our uncertainty in the RB count rates, which throughout this paper are given as the count rate for the RB only (aspect corrected, background subtracted, with the count rate from 4U 1728$``$34 – assumed constant over the 1-hr observation – subtracted).
The radio observations were performed at three different observatories. The Very Large Array (VLA)<sup>2</sup><sup>2</sup>2The VLA is part of the National Radio Astronomy Observatory, which is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation. observed every outburst discussed in this work and made the first detection of the radio counterpart reported here. VLA observations were obtained in two closely spaced bands, each of 50 MHz nominal bandwidth. Each band was observed in both right-circular and left-circular polarization making the total observed bandwidth 100 MHz for each polarization. The complex antenna gains were set using observations of a nearby compact radio source (1744$``$312) and the flux density scale was determined using observations of 3C 286.
The Rapid Burster was observed on two occasions with the Australia Telescope Compact Array (ATCA)<sup>3</sup><sup>3</sup>3The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by the CSIRO.. The complex antenna gains were set using observations of the nearby phase calibrator, PKS B1657$``$261. The observations were made simultaneously in two orthogonal linear polarizations at 4800 and 8640 MHz with a bandwidth of 128 MHz at each frequency. For both observations the array was in the extended 6A configuration with baselines in the range 337–5939 m.
The Sub-millimeter Common User Bolometer Array (SCUBA; Cunningham et al. (1994)) on the James Clerk Maxwell Telescope (JCMT)<sup>4</sup><sup>4</sup>4The James Clerk Maxwell Telescope is operated by The Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada. observed on 30 January 1998 UT using the 850 $`\mu `$m system in photometry mode.
To date, there have been five outbursts of the RB observed with RXTE/ASM. The progression of four of these outbursts, including the evolution of the RB’s bursting behavior, is described elsewhere Guerriero et al. (1999). During three of these outbursts, we performed radio observations with the VLA, ATCA, or SCUBA, the results of which are listed in Table 2. We describe the observations in detail in the following sections.
### 2.1 November 1996 Outburst
Radio observations with the VLA in A-configuration were made at 8.4 GHz while the RB was in X-ray quiescence on 1996 October 14. These produced low upper limits in both the radio (45$`\pm `$30 $`\mu `$Jy) and X-ray (0.15$`\pm `$0.31 ASM c/s) bands (errors are $`1\sigma `$; 73 ASM c/s $`=`$ 1 Crab).
On 1996 October 29, the RB was detected by the ASM to have begun an X-ray outburst, reaching peak intensity at 1996 October 30 14:30 UT ($`\pm `$1 hr). VLA observations, with simultaneous RXTE/PCA observations, took place 7.2 days after the time of X-ray peak flux (1996 November 6.8 UT), at which time a radio source which was not present in the October 14 observation was detected with a flux density of 370$`\pm `$45 $`\mu `$Jy (8.4 GHz). The new source was consistent with a point source, but the low signal-to-noise ratio only constrains its size to be less than 0$`\stackrel{}{\mathrm{.}}`$5 in both dimensions. The source is located at RA 17h33m24s.61; Dec $`-`$33d23m19s.8 (J2000), $`\pm `$0$`\stackrel{}{\mathrm{.}}`$1 (see Figure 1; the positions of radio source detections described below are consistent with this position).
During the 3.6 ksec PCA observation, one type I X-ray burst was observed along with strong persistent emission (PE; 220$`\pm `$10 mCrab; throughout this work, 1 Crab=13,000 PCA c/s) but there were no type II X-ray bursts. For the RB, we find an approximate conversion between PCA count rate and X-ray flux to be 3$`\times 10^{12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> per PCA cps (2-20 keV).
A second simultaneous VLA/PCA observation took place 12.3 days after the X-ray maximum of the outburst (on 96 November 11.88 UT) at 4.89 and 8.44 GHz; the radio point source had flux densities of 190$`\pm `$45 and 310$`\pm `$35 $`\mu `$Jy(respectively). During this observation, one type I X-ray burst was observed with persistent emission of 128$`\pm `$8 mCrab and no type II X-ray bursts.
### 2.2 June-July 1997 Outburst
The RB began the following X-ray outburst on 1997 June 25, reaching a peak flux at 1997 June 26 10:30 UT ($`\pm `$2 hr). Simultaneous VLA/PCA observations took place 2.9 days after X-ray maximum while the VLA was in C-array. The radio object was present at both 4.89 GHz and 8.44 GHz, with flux densities of 210$`\pm `$70 and 310$`\pm `$45 $`\mu `$Jy respectively. During this observation, the PE observed by the PCA was 280$`\pm `$15 mCrab and there were two type I X-ray bursts but no type II X-ray bursts.
On 1997 July 24.08 UT, a second simultaneous VLA/PCA observation took place, 27.6 days after X-ray maximum. The VLA was in CS-array and the radio point source was not detected at 41$`\pm `$30 $`\mu `$Jy (8.44 GHz). The PE was very weak ($`<`$8 mCrab); over the whole observation, the average intensity (bursts+PE) was 210$`\pm `$100 PCA c/s (16 mCrab). A total of seven type II X-ray bursts were observed with the PCA with an average intensity of 2350$`\pm `$100 PCA c/s (180 mCrab) during the bursts, and a mean duration of 12 seconds. Of these, six had simultaneous coverage at VLA at 8.44 GHz. The radio data corresponding to these six bursts were extracted to determine if there is detectable radio emission during the brightest bursts (3240$`\pm `$100 PCA c/s). We find an upper limit on the radio emission during these type II X-ray bursts of $`<`$690 $`\mu `$Jy($`3\sigma `$).
### 2.3 January-February 1998 Outburst
The next RB outburst began on 1998 January 27, peaking in X-ray intensity at 1998 January 29 12:30 UT ($`\pm `$4 hours). During PCA observations 1.2 days after X-ray maximum (1998 January 30 19:19–21:09 UT), strong PE (4000$`\pm `$100 c/s) was observed along with two type I X-ray bursts but no type II X-ray bursts. Observations at the JCMT were carried out 1.3 days after the X-ray maximum using the SCUBA system at 350 GHz and failed to detect the proposed radio counterpart with a 3$`\sigma `$ upper limit of 3 mJy.
Beginning 1.6 days after the X-ray maximum, the ATCA observed for a 3-hour period (1:37-4:46 UT, on 1998 January 31) at 4.8 and 8.64 GHz. These observations produced flux density upper limits of $`<`$390 and $`<`$360 $`\mu `$Jy respectively (3$`\sigma `$). Between 1:33-4:28 UT on 1998 February 8, 9.6 days after the X-ray maximum, observations at the ATCA produced $`3\sigma `$ upper limits of $`<`$480 $`\mu `$Jy (4.80 GHz) and $`<`$930 $`\mu `$Jy (8.6 GHz).
On 1998 February 19, 20.1 days after the X-ray maximum, we performed a 7.8 ksec PCA observation during which a total of 91 type II bursts were observed. Simultaneous VLA observations occurred during the second half of this period and 39 type II bursts were observed simultaneously by the PCA and the VLA at 8.44 GHz. These bursts had average peak count rates of $``$6000 c/s (due only to type II burst emission, excluding background and PE of 250 PCA c/s) and durations of $``$10 seconds. Integrated over the full radio observation, the 3$`\sigma `$ upper limit on radio emission from a point source at the position of the proposed radio counterpart was $`<`$90 $`\mu `$Jy (8.4 GHz). This observation is the only radio observation in the present work during which a large number of type II X-ray bursts were observed. Therefore, we can use it to explore the relationship between the type II X-ray bursts and the radio emission.
In order to compare the X-ray flux to the radio flux density, we re-bin the PCA data to correspond to the 3.33 s integration periods of the VLA data. To optimize the signal-to-noise ratio for the detection of radio bursts under the assumption that the radio flux density is proportional to the X-ray flux, we use only time bins with average count rates $`>`$1700 c/s. During these time bins, the radio emission is constrained to be $`<`$360 $`\mu `$Jy ($`3\sigma `$) at 8.4 GHz, while the average X-ray count rate was 3550$`\pm `$100 c/s.
The noise level in the radio data can be decreased simply by integrating over more time bins. However, if the radio flux density is strictly proportional to the X-ray flux, adding bins with lower X-ray count rates lowers the expected signal faster than the noise, and the signal-to-noise ratio decreases. Using the radio data taken when the type II burst X-ray flux was $`>`$300 c/s yields a $`3\sigma `$ upper limit of 255 $`\mu `$Jy on the radio emission during bursts, while the average X-ray type II burst intensity was 2340$`\pm `$120 c/s.
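For reference, the bin selection described above is equivalent to an inverse-variance stack of the radio samples taken during high count-rate bins. A minimal sketch of such a stack (in Python, with hypothetical array names; this is not the actual VLA reduction code):

```python
import numpy as np

def stacked_radio_limit(pca_rate, radio_flux, radio_err, threshold):
    """Inverse-variance stack of the radio samples taken during VLA time bins
    whose simultaneous PCA count rate exceeds `threshold`; returns the weighted
    mean flux density, its 1-sigma error, and a 3-sigma upper limit."""
    sel = pca_rate > threshold
    w = 1.0 / radio_err[sel] ** 2            # inverse-variance weights
    mean = np.sum(w * radio_flux[sel]) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return mean, sigma, max(mean, 0.0) + 3.0 * sigma
```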
### 2.4 Relationship between X-ray and Radio Intensity
In Figure 2, we show the 8.4 GHz VLA flux densities measured during three consecutive outbursts of the RB compared with the RXTE ASM X-ray intensity measurements.
The radio outbursts and quiescent periods correspond well with the X-ray state of the RB, indicating a possible relationship between the radio and X-ray sources.
The best-fit linear relation between the 8.4 GHz flux density measured at the VLA and the ASM X-ray flux (forcing the line through the origin) gives $`S_{8\mathrm{GHz}}=27\pm 1.7\mu \mathrm{Jy}/(1\mathrm{ASM}\mathrm{c}/\mathrm{s})`$. However, this is a very poor fit, with a reduced $`\chi _\nu ^2=5.6`$ (for 6 degrees of freedom). For the PCA data, the relation is $`S_{8\mathrm{GHz}}=125\pm 8\mu \mathrm{Jy}/(1000\mathrm{PCA}\mathrm{c}/\mathrm{s})`$, again a very poor fit ($`\chi _\nu ^2=4.0`$ for 4 degrees of freedom). These data and the linear fits are shown in Figure 3.
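The fit quoted here has a single free parameter (a line forced through the origin), so it reduces to a weighted least-squares slope. A schematic implementation, with hypothetical data arrays, might be:

```python
import numpy as np

def slope_through_origin(x, y, yerr):
    """Weighted least-squares slope k of y = k * x forced through the origin,
    its formal 1-sigma error, and the reduced chi-square of the fit.
    x: X-ray intensities, y: radio flux densities, yerr: radio errors
    (hypothetical input arrays)."""
    w = 1.0 / yerr ** 2
    k = np.sum(w * x * y) / np.sum(w * x ** 2)
    k_err = 1.0 / np.sqrt(np.sum(w * x ** 2))
    chi2_nu = np.sum(w * (y - k * x) ** 2) / (len(x) - 1)
    return k, k_err, chi2_nu
```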
It also seems plausible (observationally, if not theoretically) that the radio source is “on” at a constant flux density (of 325$`\pm `$25 $`\mu `$Jy at 8.4 GHz) whenever the X-ray source is above a threshold of $`\sim `$3 ASM c/s ($`\sim `$500 PCA c/s). In panel (a) of Figure 3, we also show the 3$`\sigma `$ upper limits on the excess radio emission correlated with the average type II burst intensity. The two most constraining upper limits (which were drawn from the same data; see Section 2.3) are still marginally consistent with the radio detections made during periods of X-ray persistent emission of comparable X-ray intensity. The most constraining limit on simultaneous radio emission during the type II bursts is discrepant at the 2.9$`\sigma `$ level.
## 3 Analysis of Radio Observations Using the Synchrotron Bubble Model
One possible description of our radio flux density measurements involves a synchrotron bubble model (Van der Laan 1966; Hjellming & Han 1995). In this model, the physical source of the radiation is a dense, expanding bubble of plasma. The relativistic electrons in the plasma (assumed to have a more or less isotropic velocity distribution and a power-law energy distribution) emit synchrotron radiation as their trajectories are deflected by interactions with ambient or entangled magnetic fields. The resulting radio spectrum is strongly peaked, with a power-law form at frequencies (much) below or above the peak frequency. This peak frequency decreases as the bubble expands in a well-determined fashion: $`\nu _m\propto \rho ^{-(4\gamma +6)/(\gamma +4)}`$, where $`\nu _m`$ is the peak frequency, $`\rho `$ is the normalized radius of the bubble, and $`\gamma `$ is the power-law index of the electron energy distribution, $`N(E)dE\propto E^{-\gamma }dE`$ (Van der Laan 1966).
A synchrotron bubble’s time-dependent radio emission is completely determined if we specify the time, $`t_0(\nu _0)`$, and the flux density at the spectral peak, $`S_0(\nu _0)`$, when the spectral peak reaches some nominal frequency $`\nu _0`$ (which we will take to be the VLA X-band, 8.44 GHz). Furthermore, one must define the functional form of the bubble’s radial expansion in time. Here we adopt the convention of previous authors (Van der Laan 1966; Hjellming & Han 1995) and assume a power-law form for this expansion: $`\rho \propto t^{1/\alpha }`$.
The initial hypothesis that we wish to investigate is whether our radio flux measurements may be explained by the evolution of a single synchrotron bubble initiated at the start of each X-ray outburst. We perform this analysis for two primary reasons: first, models of this nature, with time scales of the order of those observed in the radio emission, have proven successful in modeling the radio outbursts of other X-ray binaries Han (1993); Hjellming & Han (1995); second, even if the model is not successful, we expect it to give insight into the physical processes responsible for the radio emission we observe.
We determine the X-ray outburst start times by fitting the RXTE ASM light curve to a functional form consisting of an onset time, a linear rise in intensity, a peak intensity time, and an exponential decay – similar to the form used by Guerriero et al. (1999), except that we do not include the secondary flares found by those authors in two of the outbursts. The results of our fits are consistent with those of Guerriero et al., and the parameter uncertainties are probably dominated by the systematics associated with this simplified outburst intensity model. Based on these fits, we determine the time after outburst onset for each of the radio observations.
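A minimal template of the functional form just described (onset time, linear rise, peak, exponential decay) is sketched below; the parameter names are illustrative only, and the actual fitting followed Guerriero et al. (1999) apart from the secondary flares.

```python
import numpy as np

def outburst_model(t, t_onset, t_peak, peak_flux, tau_decay):
    """Outburst template: zero before onset, a linear rise from t_onset to the
    peak at t_peak, then an exponential decay with e-folding time tau_decay."""
    t = np.asarray(t, dtype=float)
    flux = np.zeros_like(t)
    rising = (t >= t_onset) & (t < t_peak)
    flux[rising] = peak_flux * (t[rising] - t_onset) / (t_peak - t_onset)
    decaying = t >= t_peak
    flux[decaying] = peak_flux * np.exp(-(t[decaying] - t_peak) / tau_decay)
    return flux
```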
We have performed fits to our radio observations using the synchrotron bubble model of Van der Laan (1966) Eqs. 11 & 12; note however that his Eq. 11 is incorrectly typeset and should read
$$S(\nu ,\rho )=S_{m0}(\nu /\nu _{m0})^{5/2}\rho ^3\frac{\left[1-\mathrm{exp}\left(-\tau _m(\frac{\nu }{\nu _{m0}})^{-(\gamma +4)/2}\rho ^{-(2\gamma +3)}\right)\right]}{[1-\mathrm{exp}(-\tau _m)]}$$
(1)
where the variables are as defined in that work. For purposes of this analysis, we assume that the observed RB outbursts generate synchrotron bubbles with identical physical properties and time evolution. The data are not sufficient to adequately constrain all four independent parameters, so we have fixed the expansion index $`\alpha `$ at three canonical values: $`\alpha =1`$, corresponding to free expansion with constant velocity; $`\alpha =2.5`$, corresponding to energy-conserving expansion into an ambient medium (as in the Sedov phase of a supernova remnant); and $`\alpha =4`$, resulting from a momentum-conserving (but not adiabatic) expansion into an ambient medium (Hjellming & Han 1995). Application of the geometric corrections to the van der Laan model suggested by Hjellming & Johnston (1988) altered the fit parameters slightly but did not significantly improve the fits or change the character of the solutions (the light curves changed by less than 10%). Therefore, we quote results from the simpler model. The resulting synchrotron bubble parameters are given in Table 3
and the models are compared to the observations in Figure 4.
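For reference, Eq. (1) combined with the power-law expansion $`\rho \propto t^{1/\alpha }`$ yields the model light curves that were fit to the data. A minimal sketch (with purely illustrative parameter defaults, not the fitted values of Table 3) is:

```python
import numpy as np

def vdl_flux(nu, rho, S_m0, nu_m0, tau_m, gamma_e):
    """Eq. (1): flux density of the expanding bubble at frequency nu and
    normalized radius rho; S_m0 and nu_m0 describe the spectral peak at rho = 1,
    and tau_m is the optical depth at that peak (a function of gamma_e)."""
    x = nu / nu_m0
    tau = tau_m * x ** (-(gamma_e + 4.0) / 2.0) * rho ** (-(2.0 * gamma_e + 3.0))
    return S_m0 * x ** 2.5 * rho ** 3 * (1.0 - np.exp(-tau)) / (1.0 - np.exp(-tau_m))

def vdl_lightcurve(t, t0, alpha, S0, nu, nu0=8.44, tau_m=2.0, gamma_e=2.6):
    """Light curve at frequency nu (GHz) for rho = (t / t0)**(1 / alpha), i.e. a
    bubble whose spectral peak crosses nu0 at t = t0; defaults are illustrative."""
    rho = (np.asarray(t, dtype=float) / t0) ** (1.0 / alpha)
    return vdl_flux(nu, rho, S0, nu0, tau_m, gamma_e)
```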
As indicated in Table 3, the models provide reasonable, but statistically unacceptable, fits to the data ($`\chi _\nu ^2=`$ 1.7–4.4, for nine degrees of freedom). Given our many simplifying assumptions, particularly the assumption that all outbursts are identical, this is perhaps not surprising. We note that the free expansion ($`\alpha =1`$) model fit is significantly worse than the other two and violates the $`3\sigma `$ upper limit imposed by the 1998 January 30 SCUBA observation (Figure 4a); for these reasons we prefer the models which assume a deceleration of the bubble’s expansion by an ambient medium.
Even these models, however, are in the end physically unacceptable, as may be determined by looking at the underlying physical parameters. We can express $`S_0`$ in terms of the ambient magnetic field density, $`H_0`$, and the angular extent of the source, $`\theta _0`$, at time $`t_{0,8\mathrm{GHz}}`$, as follows:
$$S_0=0.85h(\gamma )\left(\frac{H_0}{1\mathrm{mG}}\right)^{-1/2}\left(\frac{\theta _0}{1\mathrm{mas}}\right)^2\mathrm{Jy},$$
(2)
where $`\gamma `$ is the power-law index of the electron energy distribution ($`2.6_{-0.4}^{+4.4}`$ for the adiabatic model and $`9.0_{-2.4}^{+5.0}`$ for the momentum-conserving model), and $`h(\gamma )`$ is a known function that varies from 4.1 to 1.04 as $`\gamma `$ increases from 1 to 10:
$$h(\gamma )=\frac{\pi \mathrm{\Gamma }(\frac{3\gamma +19}{12})\mathrm{\Gamma }(\frac{3\gamma 1}{12})\mathrm{\Gamma }(\frac{3}{4})}{\sqrt{6}\mathrm{\Gamma }(\frac{3\gamma +2}{12})\mathrm{\Gamma }(\frac{3\gamma +22}{12})\mathrm{\Gamma }(\frac{5}{4})},$$
(3)
where $`\mathrm{\Gamma }`$ is the Euler gamma function.
Under the adiabatic assumption, the 8.44 GHz flux density of the source when the spectral peak reaches that frequency is $`S_{0,8\mathrm{GHz}}=340\pm 35`$ $`\mu `$Jy, whereas under the momentum-conserving assumption it is 440$`\pm `$50 $`\mu `$Jy. Thus the indications are that $`\theta _0\sim 10^{-3}\mathrm{mas}`$ at $`t_{0,8\mathrm{GHz}}`$, under both models, for fields of the order of a milligauss (with a weak $`H_0^{1/4}`$ dependence on the actual field strength). At the distance of the RB (8.6 kpc) that corresponds to a linear size of $`d_0\sim 2`$ light-seconds, and expansion velocities at time $`t_{0,8\mathrm{GHz}}`$ of order 1 km sec<sup>-1</sup>, much too slow for the expanding hot plasma that the model requires. We are forced to conclude that more complicated models are required to accurately describe the observations (see Section 5).
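The quoted values of $`h(\gamma )`$ and the conversion between linear and angular size at 8.6 kpc can be checked directly; the following sketch (assuming scipy for the Gamma function) reproduces them:

```python
import numpy as np
from scipy.special import gamma as Gamma

def h(g):
    """Eq. (3); evaluates to ~4.1 at g = 1 and ~1.04 at g = 10, as quoted."""
    num = np.pi * Gamma((3 * g + 19) / 12.0) * Gamma((3 * g - 1) / 12.0) * Gamma(0.75)
    den = np.sqrt(6.0) * Gamma((3 * g + 2) / 12.0) * Gamma((3 * g + 22) / 12.0) * Gamma(1.25)
    return num / den

def theta_mas(size_light_seconds, distance_kpc=8.6):
    """Angular size (mas) subtended by a given linear size at the RB distance."""
    size_cm = size_light_seconds * 2.998e10
    distance_cm = distance_kpc * 3.086e21
    return (size_cm / distance_cm) / 4.848e-9   # radians -> milliarcseconds

print(h(1.0), h(10.0))   # ~4.1 and ~1.04
print(theta_mas(2.0))    # ~5e-4 mas, i.e. of order 1e-3 mas
```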
## 4 Evaluation of Counterpart Likelihood
### 4.1 Positional Coincidence
First, we consider the probability that an unrelated radio source lies close to the RB on the sky. Micro-jansky source counts at 8.44 GHz by Windhorst et al. (1993) give a source density of $`1.14\times 10^{-2}\mathrm{arcmin}^{-2}`$ for flux densities above 300 $`\mu `$Jy. Therefore, the probability of an unrelated source brighter than 300 $`\mu `$Jy falling within 8$`\stackrel{}{\mathrm{.}}`$6 of the X-ray position of the RB is $`7\times 10^{-4}`$. It should be noted that the $`\mu `$Jy sources are typically associated with faint blue galaxies (Fomalont et al. 1991) and that the survey fields are not representative of the rich environment of a globular cluster near the Galactic Center. Because the $`\mu `$Jy source population in an environment comparable to Liller 1 is unknown, we do not use the probability computed above in our evaluation of the counterpart likelihood, which is based exclusively on the correlation between the radio and X-ray lightcurves.
The best X-ray position for the RB Grindlay et al. (1984) places the radio source 8$`\stackrel{}{\mathrm{.}}`$6 away from the RB with $`1\sigma `$ X-ray and radio uncertainties of 1$`\stackrel{}{\mathrm{.}}`$6 and 0$`\stackrel{}{\mathrm{.}}`$1, respectively. Taken at face value, this is significant at the 5.4$`\sigma `$ level, not including the systematic uncertainty in the radio to optical (i.e. Einstein star tracker) reference frame shift. This makes the radio source identification with the RB seem less likely. However, further investigation reveals that the quoted Gaussian errors are an inadequate description of the true error distribution.
The Einstein position for the Rapid Burster was determined from a single pointing (four Einstein HRI pointings at Liller 1 were made, but only one found the RB in outburst). The 1$`\stackrel{}{\mathrm{.}}`$6 $`1\sigma `$ uncertainty in this position is estimated from the dispersion of the errors in the single-pointing Einstein positions of X-ray sources with known optical counterparts (Grindlay et al. 1984). An important cross-check of the requisite star-tracker calibration (which dealt with a number of complex systematic effects; Grindlay 1981) was provided by the Einstein globular cluster X-ray source program, which performed multiple pointings at each of eight globular clusters (GlCls) with known bright X-ray sources in order to determine the positions of these sources to better than 1$`\stackrel{}{\mathrm{.}}`$6 accuracy (Grindlay 1981; Grindlay et al. 1984). These eight clusters were NGC 104 (47 Tuc), NGC 1851, Terzan 2, Liller 1, NGC 6441, NGC 6624, NGC 6712, and NGC 7078 (M15).
If the star-tracker calibration was successful in accounting for all significant sources of systematic error, then we would expect the resulting sample standard deviations, $`s`$, in the positions of the GlCl X-ray sources as derived from the multiple pointings at each, to cluster strongly around $`s=1\stackrel{}{\mathrm{.}}6`$. Examining the quoted uncertainties of Grindlay et al. (1984), however, we find that the actual $`s`$ values deviate from 1$`\stackrel{}{\mathrm{.}}`$6 by up to a factor of two — $`s=0\stackrel{}{\mathrm{.}}8`$ for the X-ray source in NGC 6712 (5 pointings), while $`s=3\stackrel{}{\mathrm{.}}2`$ for the X-ray source in Terzan 2 (4 pointings). A $`\chi ^2`$ test shows that in NGC 6712, $`s`$ is too small at the 98% confidence level and that for Terzan 2, $`s`$ is too large with 99.95% confidence. Together, these deviations are unlikely to be statistical and they represent two out of the seven clusters examined (recall that Liller 1 had only one pointing with a detection, so its $`s`$ value is undetermined).
We conclude that there are probably remaining unaccounted-for systematic effects in the Einstein aspect solutions. These effects average into the quoted 1$`\stackrel{}{\mathrm{.}}`$6 error over many pointings, but caused substantial non-Gaussian excursions within the context of the GlCl X-ray source program. Since the multiple pointings at each cluster were typically executed over a time span of weeks to months, the unaccounted-for systematics are not likely to be temporal in nature, but rather to relate to position on the sky. They may relate, for example, to the density of suitable stars within the star-tracker fields. In this connection it is worth noting that Terzan 2 and Liller 1 are the two sample GlCls nearest the Galactic Center, and are only 3° apart on the sky.
### 4.2 Significance of the X-ray – Radio Correlation
The probability that the observed radio/X-ray behavior is produced by an unrelated background radio source depends on the model for the variability behavior of that source. We consider first a source model in which the background radio source varies randomly with a duty cycle, $`p`$, and has a short auto-correlation time-scale allowing us to consider each of our measurements to be statistically independent.
We take the radio source to be “on” or “off” when our 1$`\sigma `$ radio sensitivity is $`<`$70 $`\mu `$Jy and the source is or is not detected, respectively, at the $`3\sigma `$ level. Moreover, we define the Rapid Burster to be “on” or “off” (for these purposes) when its one-day average RXTE ASM count rate is greater or less than 4 cts/sec (see Figure 3 and Section 2.4). Based on these assumptions, the probability of an unrelated variable background radio source mimicking the X-ray on/off state of the RB is the product of the normalized probability distribution for obtaining the number of radio “on” and “off” observations for a given $`p`$, and the probability distribution of observing the radio and X-ray sources to be “on” and “off” simultaneously for a given $`p`$, integrated over all values of $`p`$:
$$P=\frac{\int _0^1p^N(1-p)^Mp^{N_s}(1-p)^{M_s}dp}{\int _0^1p^N(1-p)^Mdp},$$
(4)
where $`N`$ is the total number of times the radio source is observed “on”, $`M`$ is the total number of times the radio source is observed “off”, $`N_s`$ is the number of times the radio source and X-ray source are observed on simultaneously, and $`M_s`$ is the number of times the radio source and X-ray source are observed off simultaneously. We have assumed a uniform prior distribution for $`p`$ itself. For our observations (cf. Table 2), $`N=3`$, $`M=3`$, $`N_s=3`$, $`M_s=3`$, which yields a probability of 1.2% for an unrelated radio source mimicking the observed on-off behavior of the RB. This indicates that it is unlikely that the observed radio/X-ray correlation is produced by a randomly varying background radio source.
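Because the integrands in Eq. (4) are Beta-function kernels, the probability has the closed form $`P=B(N+N_s+1,M+M_s+1)/B(N+1,M+1)`$; a short numerical check of the quoted values:

```python
from math import lgamma, exp

def chance_probability(N, M, Ns, Ms):
    """Eq. (4) with a uniform prior on the duty cycle p, evaluated via the
    closed form P = B(N + Ns + 1, M + Ms + 1) / B(N + 1, M + 1)."""
    def log_beta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)
    return exp(log_beta(N + Ns + 1, M + Ms + 1) - log_beta(N + 1, M + 1))

print(chance_probability(3, 3, 3, 3))   # ~0.012, the 1.2% quoted above
print(chance_probability(2, 3, 2, 3))   # ~0.026, the 2.6% case discussed next
```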
More complex models for the radio source might postulate that it occasionally turns “on” and remains so for $`T_{\mathrm{on}}`$ days (auto-correlation time scale). We can estimate the probability of a chance correlation of such a source with the Rapid Burster, without performing involved Monte Carlo simulations, by making use of the framework developed above. If we speculate that $`T_{\mathrm{on}}`$ is such that the two detections during Nov 1996 (separated by 5 days) are perfectly correlated and all other observations are uncorrelated, then we have 5 coincident X-ray/radio observations ($`N=N_s=2`$, $`M=M_s=3`$), and the probability of a chance correlation is 2.6%. Note that if the auto-correlation time scale is shorter than 5 days then all six observations are statistically independent as above. Note also that the auto-correlation time scale cannot be much longer than 5 days, because as it approaches $`\sim `$20 days it approaches the observed radio on/off time (that is, the initial non-detection followed by a detection during Nov 1996, and the detection followed by a non-detection in June/July 1997).
One might reasonably be concerned about the possibility of an unrelated radio source that turns “on” suddenly and remains so for an extended period of time. The radio data do not exclude the possibility that such a source was present and remained “on” during the unsampled period between the Nov 1996 and June/July 1997 outbursts of the RB. In this case, we have only three independent measurements of coincidence with the RB. However, the radio source is observed to make the transition from “off” to “on” (and the reverse) in $`\lesssim `$25 d. The radio data show one “turn-on” and one “turn-off”, both of which are well sampled and coincident with the X-ray behavior of the RB. If an unrelated radio source turns “on” every $`\sim `$300 d, then the probability of the “turn-on” coincidence observed in Nov 1996 is $`\sim `$8%. The slow decay of the RB X-ray outburst makes a “turn-off” coincidence somewhat more probable. If we cannot distinguish “turn-offs” that are separated by $`\pm 10`$ d, then the probability of the observed coincidence in the June/July 1997 outburst is $`\sim `$15%. If the two events are independent, the probability of this type of source mimicking the observed RB X-ray turn-on/off behavior is $`\sim `$1%.
The population of $`\mu `$Jy radio transients and their outburst properties are not well studied. The probability that an unrelated, variable radio source with the time-dependent characteristics above would also lie this close to the RB is somewhat less than unity, although we have conservatively neglected this factor in the above calculations. It seems reasonable to adopt the most conservative of the above approaches as the upper limit; we therefore find that the upper limit on the probability of a flaring background radio source mimicking the X-ray behavior of the RB is 3%.
### 4.3 Possible Relationship Between the FG Radio Object and the Present Object
The relationship between the variable radio source we observe and the steep spectrum source ($`\alpha =-2`$) observed by FG is not clear. To within the astrometric uncertainty (1$`\stackrel{}{\mathrm{.}}`$5, 3$`\sigma `$), the two sources are at the same position on the sky. If FG’s interpretation of their source is correct and it represents the integrated emission of a population of radio pulsars, then it seems likely that their radio position is the center of the GC. (The $`\sim `$2$`^{\prime \prime }`$ separation between the FG radio source and the optical center of Liller 1 is consistent with the 1$`^{\prime \prime }`$ uncertainty in the absolute optical astrometry; see Figure 1). For a 6$`\stackrel{}{\mathrm{.}}`$5 core-radius GC (Kleinmann et al. 1976; Picard & Johnston 1995), the 1$`\stackrel{}{\mathrm{.}}`$5 radio error circle represents 12% of the optical light of the GC. Thus, it is not unlikely that (if the RB is associated with the GC) the radio counterpart to the Rapid Burster would also lie close to the center of the GC.
The three flux-density measurements of the FG source were made over three different (sometimes overlapping) epochs. Combining the three measurements into a single spectrum (which was found to be steep) assumes that the source is not variable. The conflicting 1.5 GHz measurements of FG and Johnston et al. may indicate variability of a factor of two or more (2$`\sigma `$) between April 1990 and May 1993. The X-ray state of the Rapid Burster during these observations is unknown, and it may be that it was X-ray active during FG’s 1.5 GHz observation, providing the additional radio flux above that expected for an underlying population of radio pulsars in Liller 1. The relationship between the FG radio source and the RB could be illuminated by radio measurements of the same epoch at 0.33, 1.5 and 4.5 GHz, taken both while the RB is in X-ray outburst and in quiescence.
### 4.4 Conclusion – A Likely Radio Counterpart
The probability that a serendipitously located variable radio source would mimic the Rapid Burster X-ray state as has been observed is small (1–3%), but not dismissably so. The number and distribution of faint, variable radio sources toward the Galactic Center is not well known. There has been at least one other recent instance where a radio source, variable on a time-scale of $`\sim `$days, was discovered $`3^{\prime }`$ from a bright X-ray source (although the X-ray flux was not correlated; Frail et al. 1996a, b). In addition, globular clusters are known to harbor both millisecond radio pulsars and accreting X-ray sources, so the appearance of an unrelated variable radio source in Liller 1 must be considered more likely than for a random field. Other than the correlated radio and X-ray states and the marginal consistency of the radio spectrum with synchrotron-bubble models, we have not observed any radio behavior which would tie this object uniquely to the Rapid Burster.
The apparent positional discrepancy between the radio source and the RB is likely explained by the non-Gaussian distribution of the X-ray position error as discussed above and acknowledged by Grindlay et al. (1984) and Grindlay (1998). Proper motion of the RB will probably contribute only negligibly to the discrepancy, given the association with Liller 1, even though an interval of 20 years separates the Einstein and radio observations (a proper motion of $`\sim `$0$`\stackrel{}{\mathrm{.}}`$5 per year would be required). Improvement upon the X-ray localization will be obtained during an approved AXAF observation, which should determine the X-ray position to $`\lesssim `$0$`\stackrel{}{\mathrm{.}}`$5, thus confirming or excluding this radio counterpart.
At present, we identify this radio source as a likely radio counterpart of the Rapid Burster. Apart from the AXAF observation, the case could be strengthened by observing bursts from this location (either in radio or IR), by continued observations in X-ray and radio of correlated on/off behavior, by observation of larger swings in the X-ray and radio fluxes which would permit a definite X-ray/radio correlation to emerge, or by discovering other radio behavior which is correlated with X-ray behavior of the Rapid Burster (such as short time-scale variability). For example, a program of 12 short VLA observations reaching a noise level of 45 $`\mu `$Jy/beam taken at two-week intervals and correlated with contemporaneous RXTE ASM observations would reduce the probability of an unrelated source mimicking the RB to 0.03%, even if the RB remains quiescent in X-rays and there are no radio detections.
## 5 Discussion of the Radio Observations
The radio observations of the proposed radio counterpart on 1996 November 11.88 determine that the radio spectral slope is flat to inverted, with $`\alpha `$=0.9$`\pm `$0.3 ($`S_\nu \propto \nu ^\alpha `$).
The JCMT/SCUBA observation at 350 GHz is, to date, the earliest radio observation relative to the beginning of an X-ray outburst. A few hours prior to this observation, a PCA observation measured the RB PE to be 4000$`\pm `$100 c/s. Using the measured radio spectral slope ($`\alpha `$=0.9$`\pm `$0.3) and the PCA/radio conversion at 8.4 GHz, the extrapolated radio spectrum gives a 350 GHz flux density of 4.5–41 mJy (with uncertainty dominated by the spectral slope). This is well above the 3 mJy $`3\sigma `$ upper limit obtained at the JCMT indicating that the radio emission is not a simple power-law spectrum, proportional in intensity to the instantaneous X-ray intensity. This non-detection is also the crucial observation in ruling out one class of synchrotron bubble models (see Section 3).
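The extrapolation can be reproduced with a few lines; the conversion factor below is the (poor) linear PCA/radio fit of Section 2.4 and is used purely for illustration:

```python
def extrapolated_350ghz_mjy(pca_rate, alpha, uJy_per_1000_pca=125.0):
    """Extrapolate the 8.4 GHz flux density implied by the PCA/radio scaling of
    Section 2.4 to 350 GHz with S_nu ~ nu**alpha; returns mJy."""
    s_84_uJy = uJy_per_1000_pca * pca_rate / 1000.0
    return s_84_uJy * (350.0 / 8.44) ** alpha / 1000.0

# PE of ~4000 PCA c/s and alpha = 0.9 +/- 0.3:
print(extrapolated_350ghz_mjy(4000.0, 0.6),
      extrapolated_350ghz_mjy(4000.0, 1.2))   # ~4.7 and ~44 mJy, close to the quoted 4.5-41 mJy range
```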
Synchrotron bubble behavior, associated with the outburst of an X-ray transient, has been observed on many occasions; Hjellming & Han (1995) show radio data and fits for A0620-00, Cen X-4, GS 2000+25, Aql X-1, GS 2023+338 (V404 Cyg), and GRS 1124-683 (see also Han 1993). In that respect the detection of radio emission from the Rapid Burster during its outbursts is not particularly surprising.
The physical parameters derived from our model fits, however, indicate expansion velocities of $`\sim `$1 km sec<sup>-1</sup> about 10 d after the outburst start, which is well below the physical lower limit set by the sound speed of the hot plasma, $`c_\mathrm{s}\approx 0.1\sqrt{T_\mathrm{K}}`$ km sec<sup>-1</sup> (where $`T_\mathrm{K}`$ is the temperature of the plasma in Kelvin, $`\sim `$10<sup>7</sup> in SS 433 – Hjellming & Johnston 1988).
Our assumption of a generic bubble event accompanying each outburst is therefore likely to be flawed, and we must consider more complex models. For example, the radio emission over the course of an outburst may result from the summed emission of a succession of synchrotron bubbles, each of which expands at high speed and therefore brightens and fades (at a given frequency) much more quickly than the outburst emission as a whole. If the flux levels that we see are produced by $`T_\mathrm{K}\sim 10^7`$ plasma, then the expected rise times for the synchrotron bubbles are $`\sim `$1 hour. Alternatively, each of our radio detections may simply have caught a single fast bubble in the midst of its expansion; in that case, the timing of our detections would indicate how the rate of these bubble ejections changes over the course of an outburst. Relevant to both of these possibilities, it is worth noting that the radio flares from GRS 1915+105 have rise and decay times of $`\sim `$1 hour (Mirabel et al. 1998; Fender & Pooley 1998).
We are therefore motivated to consider whether the type II X-ray bursts of the Rapid Burster might produce individual synchrotron bubbles, with associated infrared and radio emission. The synchrotron bubbles of GRS 1915+105 have been shown to be related to active accretion, as determined by simultaneous X-ray observations (Mirabel et al. 1998; Eikenberry et al. 1998a). Scaling down the brightness of the infrared emission seen in that source by the factor of ten difference in X-ray flux between it and the RB suggests that there may be mJy infrared flares ($`K\sim 14`$) during RB type II X-ray bursts, if synchrotron bubbles are indeed being formed. We are currently pursuing short time-resolution IR observations of the RB during its next outburst to test this hypothesis; naturally, observation of any bursting counterpart to the RB will confirm or reject the VLA counterpart proposed here.
The radio detections we report here could not themselves have been produced by the type II X-ray bursts; the radio detections were made prior to day 12 of each outburst, while the type II X-ray bursts were not observed until after day 14 in every case.
The observations of 1997 July 24 were carried out when the VLA was in its new CS (C-short) configuration. The combination of the array configuration with low-elevation observing yielded many short projected baselines, with lengths all the way down to the minimum antenna separation (25 m). Heavily tapered maps of these data reveal an extended source with a peak flux density of approximately 3 mJy (at 8.44 GHz) that is at least as large as the primary beam. The axis of symmetry of the extended source lies, at its closest point, approximately $`1^{\prime }`$ north-east of the RB, which corresponds to 2.5 pc at the 8.6 kpc distance of the RB. The source is probably unrelated to the RB, but its presence is an important consideration for future observations that include short interferometric baselines.
## 6 Conclusions
We have detected a likely radio counterpart for the Rapid Burster, with radio emission correlated with the X-ray outbursts. The likelihood of an unrelated variable radio source duplicating the observed correlation between the X-ray flux and radio flux density is low (1–3%), but not dismissably so. There is an apparent discrepancy between the X-ray position of the RB and the radio counterpart, but it is likely due to the non-Gaussian distribution of the X-ray position errors. Confirmation of the counterpart from additional observations – an already approved AXAF observation, further radio observations while the RXTE ASM is still operational, or possibly infrared observations while the RB is in outburst – is required. The time and spectral evolution of the radio source, while not physically interpretable as a full-outburst synchrotron bubble (as seen in some other transient X-ray binaries), may be due to $`\sim `$hour-long radio flares such as have been seen from the superluminal-jet source GRS 1915+105.
Our lower limit on the time delay between the X-ray emission and radio emission from the type II bursts ($`\gtrsim `$1 sec) is consistent with the delay expected from a synchrotron bubble. Our observation of a persistent radio source $`\sim `$5 days after the start of active accretion (i.e. an outburst) sets an upper limit on the radio vs. X-ray time delay. The correlation of radio flux density with persistent X-ray intensity in this system indicates that the radio flux density is related to active accretion onto the surface of the neutron star – as accretion is responsible for the X-ray outburst. This suggests that excess radio emission may be produced during the type II X-ray bursts, which are themselves accretion driven. However, our observations produce no evidence of simultaneous radio/X-ray bursts, marginally constraining them (2.9$`\sigma `$) to be below the level that we observe during periods of comparable X-ray flux in persistent emission. This may imply that radio bursts at 8.4 GHz do not occur simultaneously with X-ray type II bursts.
## 7 Acknowledgements
We are grateful to Evan Smith, Jean Swank, and the XTE Science Operations Facility staff for their efficient processing of the several XTE TOO observations covered in this work. We are also indebted to VLA observers and staff Phillip Hicks, Robert Hjellming, Rick Perley, Michael Rupen, and Ken Sowinski, who graciously gave us time to perform observations of the RB, to Barry Clark who worked on short-notice to help find the time to make observations during critical periods, to Tasso Tzioumis who performed the observations at ATCA, and to Ian Robson who made possible the observations at the JCMT. We are grateful to M. van der Klis and the anonymous referee for their detailed and helpful comments on the manuscript. This work has been supported under NASA Grant NAG5-7481. JvP acknowledges the support of NASA under grant NAG5-7414. RPF was supported during the period of this research initially by ASTRON grant 781-76-017 and subsequently by the EC Marie Curie Fellowship ERBFMBICT 972436. CBM thanks the University of Groningen for its support in the form of a Kapteyn Institute Postdoctoral Fellowship. RER thanks his host J. Trümper of Max-Planck-Institut für Extraterrestrische Physik, where this work began, and his host Lars Bildsten of UC Berkeley, where this work was completed.
## 1 Introduction
Paczyński and Stanek (1998, hereafter PS) noticed that the I-band absolute brightness, $`M_I`$, of red clump (hereafter RC) giants, the intermediate age (2–10 Gyr) helium core burning stars, has intrinsically small dispersion and can be used as a ”standard candle” for distance determination. The obvious advantage of this method is large number of these stars in stellar populations allowing determination of the mean brightness with statistically unprecedented accuracy. Moreover, RC giants are also very numerous in the solar neighborhood. Therefore their brightness could be precisely calibrated with hundreds of stars for which the Hipparcos satellite measured parallaxes with accuracy better than 10% (Perryman et al. 1997). It should be noted that RC stars are the only standard candle which can be calibrated with direct, trigonometric parallax measurements. Similar quality parallaxes do not exist for Cepheids or RR Lyr stars (cf. Fig. 2 of Horner et al. 1999).
As in the case of any other stellar standard candle, the brightness of RC stars might, however, be affected by population effects: a different chemical composition or age of the stellar system being studied, as compared to the local Hipparcos giants. PS argued that both the dependence of $`M_I`$ on age and that on metallicity are negligible. On the other hand, Girardi et al. (1998) claimed much stronger dependences based on theoretical modeling. According to their results, $`M_I`$ of RC stars could differ by as much as 0.5 mag in different environments. However, one should be aware that the results of modeling are quite sensitive to the input physics, and results from different types of evolutionary codes are not consistent with each other (Dominguez et al. 1999, Castellani et al. 1999). While models seem to reproduce the qualitative properties of RC stars reasonably well, they cannot provide the accuracy of hundredths of a magnitude required for precise distance determination.
The necessity of a good, preferably empirical, calibration of population effects on the RC brightness became very urgent, in particular when the distance modulus to the LMC was determined (Udalski et al. 1998, Stanek, Zaritsky and Harris 1998), supporting the ”short” distance scale to that galaxy ($`0.4`$ mag smaller than the ”long” value of $`(m-M)_{\mathrm{LMC}}=18.50`$ mag). The distance to the LMC is one of the most important distances of modern astrophysics because the extragalactic distance scale is tied to it. Udalski (1998a) presented an empirical calibration of $`M_I`$ of RC stars on metallicity, suggesting only a weak dependence ($`0.09\pm 0.03`$ mag/dex). The dependence of $`M_I`$ of RC stars on age was studied by Udalski (1998b). Observations of RC stars in several star clusters of different age located in low extinction areas of the Magellanic Clouds showed that their $`M_I`$ in these clusters is independent of age (for stars 2–10 Gyr old) within observational uncertainties of a few hundredths of a magnitude. Recently, Sarajedini (1999) presented an analysis of a few Galactic open clusters suggesting a fainter RC in older ($`>`$5 Gyr) clusters.
In this paper we present empirical arguments which additionally support the usefulness of the ”RC stars” method – the relation of $`M_I`$ vs. metallicity based on the most precise data available for solving the problem: high resolution and high S/N spectra of nearby red giants, for which accurate parallaxes and photometry were measured by Hipparcos. The large range of metallicity of the nearby RC giants, partially overlapping with the metallicity range of the field RC giants in the LMC, enables us to compare the brightness of these stars and determine the RC distance modulus to the LMC largely free from population uncertainties.
## 2 Observational Data
The sample of red giant stars from the solar neighborhood comes from the Hipparcos catalog and consists of objects with high accuracy trigonometric parallaxes ($`\sigma _\pi /\pi <10\%`$). About 75% of the stars from this sample are the same objects which were used by PS, i.e., stars with I-band photometry. To enlarge that data set we have also included stars whose I-band magnitude was obtained from the $`B-V`$ color via the very well defined correlation between the $`B-V`$ and $`V-I`$ colors (cf. Fig. 8 of Paczyński et al. 1999), i.e., stars marked as type ’H’ in the Hipparcos catalog. While the accuracy of the I magnitude of these stars is somewhat worse, it is still acceptable taking into account the usually larger uncertainty coming from the parallax error.
For further analysis only the stars with \[Fe/H\] abundance determinations were selected. We used the results of the spectroscopic survey of McWilliam (1990), containing \[Fe/H\] determinations for 671 G and K giants based on high resolution ($`R\sim 40000`$) and high S/N ($`\sim 100`$) spectra. This is the most comprehensive and homogeneous data set available. The typical accuracy of the \[Fe/H\] determination is about 0.1 dex. 284 objects from our photometric sample were cross-identified with objects in the spectroscopic survey list of McWilliam (1990). The left panel of Fig. 1 presents the color-magnitude diagram (CMD) of this sample. Comparison with Fig. 2 of PS indicates that the full photometric sample and our 284 object sample used for further analysis are distributed identically in the CMD, so we did not introduce any significant bias when limiting the Hipparcos stars to objects with spectroscopic data. The mean distance from the Sun of our final sample of 284 stars is $`<d>=66`$ pc. It is worth noting that due to the accurate Hipparcos parallaxes the possible Lutz-Kelker bias of the absolute magnitude of our sample is negligible, as shown by Girardi et al. (1998).
The photometric data of the LMC fields were obtained during the OGLE-II microlensing survey (Udalski, Kubiak and Szymański 1997). Observations were collected with the 1.3-m Warsaw telescope at the Las Campanas Observatory, Chile, which is operated by the Carnegie Institution of Washington, on eight photometric nights between October 30, 1999 and November 12, 1999. A single-chip CCD camera with a $`2048\times 2048`$ pixel SITe thin chip was used, giving a scale of 0.417 arcsec/pixel. $`35`$ frames in the V and I-bands were collected for each field with an exposure time of 300 sec in both bands. Photometry was derived using the standard OGLE pipeline. On each night several standard stars from the Landolt (1992) fields were observed for transformation of the instrumental magnitudes to the standard system. The error of the photometric zero points should not exceed 0.02 mag.
## 3 Discussion
### 3.1 Hipparcos Red Clump Stars
The CMD of the analyzed sample of red giants presented in the left panel of Fig. 1 indicates that the majority of objects are RC stars, which form a very compact structure located in the range $`-0.5<M_I<0.0`$ and $`0.9<V-I<1.1`$. They are, however, contaminated by red giant branch stars and by a blue vertical structure, going from $`M_I\simeq -1.4`$ mag down to $`M_I\simeq -0.1`$ mag, consisting of younger (more massive) red giants.
$`M_I`$ of the stars is plotted against metallicity \[Fe/H\] in the right panel of Fig. 1. The distribution of stars is non-uniform, with most objects within the metallicity range $`-0.3<[\mathrm{Fe}/\mathrm{H}]<0.0`$ dex. To investigate the relation of $`M_I`$ of RC stars on metallicity we divided our sample into three subsamples: high metallicity, $`[\mathrm{Fe}/\mathrm{H}]>-0.05`$ dex, medium metallicity, $`-0.25<[\mathrm{Fe}/\mathrm{H}]<-0.05`$ dex, and low metallicity stars, $`[\mathrm{Fe}/\mathrm{H}]<-0.25`$ dex. Such a division ensures a more or less uniform distribution of objects within each bin, with the mean, median and mode values equal to ($`0.02,0.00,0.00`$), ($`-0.15,-0.14,-0.14`$) and ($`-0.39,-0.36,-0.33`$) dex for the high, medium and low metallicity bins, respectively. We determined $`M_I`$ in all bins in a similar manner as described in Udalski et al. (1998). Fig. 2 presents the histograms of the distribution of $`M_I`$ for each subsample with a fitted Gaussian function representing the RC stars superimposed on a parabola representing the background stars.
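A sketch of this fitting procedure (a Gaussian clump on top of a smooth, here parabolic, background; hypothetical input array and starting values) could look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def clump_plus_background(m, a, b, c, n_rc, m_rc, sigma_rc):
    """Model used for the Fig. 2 histograms: a parabolic red-giant background
    plus a Gaussian red clump centered at m_rc with dispersion sigma_rc."""
    return (a + b * (m - m_rc) + c * (m - m_rc) ** 2
            + n_rc * np.exp(-0.5 * ((m - m_rc) / sigma_rc) ** 2))

def fit_clump(m_i, bin_width=0.04):
    """Histogram the absolute magnitudes M_I (hypothetical input array) and fit
    the model; returns the clump magnitude, its dispersion, and the formal error."""
    edges = np.arange(m_i.min(), m_i.max() + bin_width, bin_width)
    counts, edges = np.histogram(m_i, bins=edges)
    centers = 0.5 * (edges[1:] + edges[:-1])
    p0 = [counts.mean(), 0.0, 0.0, counts.max(), np.median(m_i), 0.15]
    popt, pcov = curve_fit(clump_plus_background, centers, counts, p0=p0)
    return popt[4], popt[5], np.sqrt(pcov[4, 4])
```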
It is evident from Fig. 2 that the majority of stars in the low and medium metallicity bins are typical intermediate age RC objects. The small dispersion of their $`M_I`$ ($`\sigma _{\mathrm{RC}}=0.12`$ mag) indicates that these stars can indeed be a good standard candle. On the other hand, the RC in the high metallicity bin is poorly defined ($`\sigma _{\mathrm{RC}}=0.26`$ mag). Closer examination of Fig. 1 indicates that most of the stars from this bin (open circles) are red giant branch stars and objects located on the vertical sequence of younger giants in the RC evolution phase, with very few stars belonging to the intermediate age RC.
$`M_I`$ of the RC stars in the low, medium and high metallicity bins is equal to $`-0.273\pm 0.015`$, $`-0.244\pm 0.012`$ and $`-0.190\pm 0.041`$ mag, respectively (statistical error). The horizontal bars in the right panel of Fig. 1 show the range of each bin and its $`M_I`$. The vertical bars are shown at the median metallicity of each bin and represent the statistical uncertainty of $`M_I`$. It is clear from Figs. 1 and 2 that the absolute brightness of RC stars increases with lower metallicity. If we consider all three bins then the slope of the $`M_I`$ vs. \[Fe/H\] relation is about 0.2 mag/dex. One should, however, remember that the high metallicity bin is poorly defined due to the small number of metal rich RC stars of intermediate age in the solar neighborhood. Moreover, the metallicity of the most metal rich stars from McWilliam’s sample is very likely underestimated (McWilliam 1997), making this bin additionally uncertain. A larger mean metallicity of this bin would lead to a smaller slope of the $`M_I`$ vs. \[Fe/H\] relation. Indeed, if we limit ourselves only to the low and medium metallicity bins, where the intermediate age RC stars dominate, the linear relation becomes:
$$M_I=(0.13\pm 0.07)([\mathrm{Fe}/\mathrm{H}]+0.25)-(0.26\pm 0.02)$$
$`(1)`$
Eq. (1) indicates that the dependence of $`M_I`$ on metallicity is rather weak. The relation is in good agreement with the previous empirical determination by Udalski (1998a) based on comparison of RC stars with RR Lyr stars but this time it is based on precise measurements of numerous sample of individual stars, thus it is more reliable. It should be also stressed that the result is weakly sensitive to systematic errors, as those are very unlikely in the Hipparcos absolute photometry of so bright and nearby stars, and due to weak dependence on metallicity even large systematic metallicity error (which is also unlikely, Taylor 1999) would only lead to a magnitude shift of the order of a few hundredths of magnitude.
### 3.2 LMC Red Clump Stars
Table 1 summarizes the basic properties of the field RC stars in nine fields around star clusters distributed in different parts of the LMC halo. The interstellar extinction in these directions is small, thus minimizing the uncertainties of the extinction-free photometry. All lines-of-sight are far enough from the LMC center that the reddening could be reliably determined from the COBE/DIRBE maps of Schlegel, Finkbeiner and Davis (1998). The I-band interstellar extinction was calculated using the standard extinction curve ($`A_I=1.96E(B-V)`$). We assumed an uncertainty of the reddening value equal to $`\pm 0.02`$ mag.
$`I`$ of the RC stars was derived in a similar manner as in Udalski et al. (1998). About 160–1400 field red giants were used for its determination in our fields. The statistical uncertainty of $`I`$ is usually below 0.01 mag, so the main contribution to the error budget of the extinction-free magnitude, $`I_0`$, comes from the interstellar reddening uncertainty. The dispersion of the brightness of the field RC stars, $`\sigma _{\mathrm{RC}}`$, is typically below 0.15 mag, similar to that of the local RC stars.
Unfortunately the metallicity of the LMC field RC stars is not known as precisely as that of the Hipparcos local giants. The metallicity of field giants in our nine lines-of-sight was determined by Bica et al. (1998, hereafter BGDCPS) using Washington photometry. It ranges from $`-0.7`$ dex to $`-0.35`$ dex (Table 1), with a typical internal error of 0.1 dex and a total error of $`0.2`$ dex. Due to the large errors it is not clear whether the observed dispersion of metallicity in different parts of the LMC is real or results from the uncertainty of the method. It is also not clear whether the absolute numbers are correct; e.g., the metallicity determined using the same method for several LMC clusters by BGDCPS is on average $`0.2`$ dex lower than the spectroscopic metallicities of the LMC clusters determined by Olszewski et al. (1991). This point should be cleared up in the near future with direct spectroscopic observations of the LMC RC stars with new 8-m class telescopes.
The lower panel of Fig. 3 presents $`I_0`$ of RC stars in our fields in the LMC as a function of the BGDCPS \[Fe/H\]. The large asterisk indicates the mean metallicity and brightness of the entire sample. The horizontal bar corresponds to the typical range of metallicities of field giants, $`0.5`$ dex, as determined by BGDCPS. It is worth noting that if one assumes that the dispersion of metallicity in the LMC fields is real and the BGDCPS determinations are correct, at least differentially, then the observed trend of variation of $`I_0`$ with \[Fe/H\] is similar to that of the local sample of RC stars. A formal straight-line fit gives a slope equal to $`0.21\pm 0.12`$ mag/dex. Although one should treat this figure with caution, it is encouraging that it is in good agreement with that resulting from the analysis of the local RC stars.
### 3.3 Distance Modulus to the LMC
The range of metallicities among nearby RC stars is wide and covers $`-0.6<[\mathrm{Fe}/\mathrm{H}]<+0.2`$ dex. This is a very fortunate situation, because this range partially overlaps, at the low \[Fe/H\] end, with the metallicity of the RC stars in the LMC. In the upper panel of Fig. 3, $`M_I`$ of the Hipparcos RC stars is plotted as a function of \[Fe/H\]. The solid line marks the relation of $`M_I`$ vs. \[Fe/H\] given by Eq. (1). Two dotted vertical lines indicate the range where the metallicities of the LMC and the local Hipparcos RC stars overlap. We remind the reader that we have adopted the BGDCPS metallicities of the LMC RC stars, and if they are too low, which is likely, the overlap of the metallicities of both populations can be much larger.
The mean metallicity of the LMC RC stars lies outside the overlap region. Therefore, to derive $`(m-M)_{\mathrm{LMC}}`$ we have to slightly extrapolate the $`M_I`$ vs. \[Fe/H\] relation given by Eq. (1). $`M_I`$ of the Hipparcos RC giants extrapolated to the metallicity of $`-0.55`$ dex is equal to $`M_I=-0.30\pm 0.04`$ mag. With $`I_0=17.94\pm 0.05`$ mag of RC stars in our nine lines-of-sight in the LMC, this immediately leads to $`(m-M)_{\mathrm{LMC}}=18.24\pm 0.08\mathrm{mag}`$.
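The arithmetic behind this value is simply Eq. (1) evaluated at the adopted LMC metallicity:

```python
def m_i_clump(feh):
    """Eq. (1): M_I of the local red clump as a function of [Fe/H]."""
    return 0.13 * (feh + 0.25) - 0.26

feh_lmc = -0.55    # mean BGDCPS metallicity of the nine LMC fields
i0_lmc = 17.94     # mean dereddened clump magnitude of those fields
print(m_i_clump(feh_lmc))            # ~ -0.30
print(i0_lmc - m_i_clump(feh_lmc))   # ~ 18.24, the distance modulus
```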
Can this result be severely affected by the extrapolation? That could only be possible if the $`M_I`$ vs. \[Fe/H\] relation of RC stars for metallicities in the range $`-0.9<[\mathrm{Fe}/\mathrm{H}]<-0.5`$, i.e., not covered by the Hipparcos stars, behaved extraordinarily. In particular it would have to be extremely steep in this range to narrow the gap between our result and the ”long” distance modulus of $`(m-M)_{\mathrm{LMC}}=18.50`$. However, this is not the case: we already mentioned that if the dispersion of metallicity in the LMC fields as measured by BGDCPS is real, then the slopes of the $`M_I`$ vs. \[Fe/H\] relations in the LMC and in the local sample are similar. Theoretical modeling of the RC also provides similar arguments. For instance, the models of Girardi (1999a, Fig. 4) indicate that for RC stars of age 2–8 Gyr the mean slope of the $`M_I`$ vs. \[Fe/H\] relation in the metallicity range $`-1.0<[\mathrm{Fe}/\mathrm{H}]<-0.4`$ is about 0.15 mag/dex, in excellent agreement with our empirical data (the theoretical relation is plotted with a dashed line in the upper panel of Fig. 3 as a continuation of our empirical relation). Thus a large uncertainty of the distance modulus due to the extrapolation of Eq. (1) is very unlikely.
Possible differences in the ages of the two populations can be another potential source of uncertainty of the derived $`(m-M)_{\mathrm{LMC}}`$. An empirical study of this effect based on an analysis of clusters in the Magellanic Clouds showed that it is practically negligible for RC stars of age within 2–10 Gyr (Udalski 1998b). On the other hand, an analysis of a few Galactic open clusters by Sarajedini (1999) suggests that RC stars older than $`5`$ Gyr become fainter. Without going into a detailed discussion of these results we only note here that the analysis of Galactic clusters requires very precise distance determinations, which is difficult, and that the two old clusters claimed to have a fainter RC (Be39, NGC188) have in fact very sparse populations of RC stars, consisting of only a few stars. However, despite differences which deserve further study, both the Udalski (1998b) and Sarajedini (1999) data show that for the age range of 2–5 Gyr $`M_I`$ of RC giants is constant within $`\pm 0.05`$ mag. The age of the LMC RC stars is within this range (BGDCPS) and the vast majority of the local RC stars are younger than 4 Gyr (Girardi 1999b). To be on the safe side we included in the final error budget an uncertainty of $`\pm 0.05`$ mag for possible differences of age between the two populations. We may conclude that the derived distance modulus to the LMC is largely free from population uncertainties. The small interstellar extinction also assures that the result is sound. Had there been any additional LMC extinction in these fields, on top of that given by Schlegel et al. (1998), $`(m-M)_{\mathrm{LMC}}`$ would be reduced to an even smaller value.
Acknowledgements. We would like to thank the anonymous referee whose critical remarks allowed us to significantly improve the manuscript. We are very grateful to Mr. K. Żebruń for collecting observations of the LMC fields. We thank Drs. B. Paczyński, K.Z. Stanek, M. Kubiak and M. Szymański for many discussions and important suggestions. The paper was partly supported by the grants: Polish KBN 2P03D00814 and NSF AST-9820314.
## REFERENCES
* Bica, E., Geisler, D., Dottori, H., Clariá, J.J., Piatti, A.E., and Santos Jr, J.F.C. 1998, Astron. J., 116, 723 (BGDCPS).
* Castellani, V., Degl’Innocenti, S., Girardi, L., Marconi, M., Prada Moroni, P.G., and Weiss, A. 1999, Astron. Astrophys., in press, astro-ph/9911432.
* Dominguez, I., Chieffi, A., Limongi, M., and Straniero, O. 1999, Astrophys. J., 524, 226.
* Girardi, L., Groenewegen, M.A.T, Weiss, A., and Salaris, M. 1998, MNRAS, 301, 149.
* Girardi, L. 1999a, MNRAS, 308, 818.
* Girardi, L. 1999b, astro-ph/9912309.
* Horner, D. et al. 1999, astro-ph/9907213.
* Landolt, A.U. 1992, Astron. J., 104, 372.
* McWilliam, A. 1990, Astrophys. J. Suppl. Ser., 74, 1075.
* McWilliam, A. 1997, ARA&A, 35, 503.
* Olszewski, E.W., Schommer, R.A., Suntzeff, N.B., and Harris, H.C. 1991, Astron. J., 101, 515.
* Paczyński B., and Stanek, K.Z. 1998, Astrophys. J. Letters, 494, L219 (PS).
* Paczyński B., Udalski, A., Szymański, M., Kubiak, M., Pietrzyński, G., Soszyński, I., Woźniak, P., and Żebruń, K. 1999, Acta Astron., 49, 319.
* Perryman, M.A.C. et al. 1997, Astron. Astrophys., 323, L49.
* Sarajedini, A. 1999, Astron. J., 118, 2321.
* Schlegel, D.J., Finkbeiner, D.P., and Davis, M. 1998, Astrophys. J., 500, 525.
* Stanek, K.Z, Zaritsky, D., and Harris, J. 1998, Astrophys. J. Letters, 500, L141.
* Taylor, B.J. 1999, Astron. Astrophys. Suppl. Ser., 135, 75.
* Udalski, A., Kubiak, M., and Szymański, M. 1997, Acta Astron., 47, 319.
* Udalski, A. 1998a, Acta Astron., 48, 113.
* Udalski, A. 1998b, Acta Astron., 48, 383.
* Udalski, A., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1998, Acta Astron., 48, 1.
## 1 Introduction
The interest in vortices in disks has recently regained momentum due to the potential role they could play in planet formation. It is believed that the initial process of planet formation takes place via a progressive aggregation and sticking of dust grains in the primordial protoplanetary disk. The grain aggregates begin to settle toward the mid-disk plane and grow to centimeter-size grains. This process is efficient in producing centimeter-sized grains, but it is cut off sharply thereafter due to small scale turbulent diffusion in the nebula. These particles form a massive dusty layer in the disk mid-plane. A gravitational instability of the dusty, dense mid-plane layer can be triggered by seeds of the order of meters to form 10-100 km planetesimals (the radial velocity dispersion induced by drag will further delay the onset of the instability until the mean size is in the range 10-100 m). Therefore, there is a gap (of at least two orders of magnitude) between the maximal particle size reached by coagulation ($`\sim `$centimeters) and the minimal size required for planetesimal formation ($`\sim `$meters). In order to remedy this problem, it has been suggested (e.g. Barge & Sommeria 1995; Adams & Watkins 1995; Tanga et al. 1996) that the dust concentration in the cores of anticyclonic vortices may be rapidly (within a few orbits) enhanced by a significant factor. As a result, the size of the particles needed to trigger the gravitational instability can be reduced by a comparable factor (for a review of the problem see e.g. Tanga et al. 1996). Assuming that the drift is negligible in the vortex, a gap in size of a factor of a hundred may perhaps be bridged in this way. The cores of anticyclonic vortices may therefore be the preferred regions for rapid planetesimal (and planet core) formation.
Simplified numerical simulations of vortices in disks (e.g. Bracco et al. 1998, solving the vorticity equation; Nauta 1999, using a shallow water equation) have shown that anticyclonic vortices may be rather stable and can survive in the flow for many orbits. More detailed calculations (Godon & Livio 1999; assuming a two-dimensional, compressible, viscous, polytropic disk) have shown explicitly that the exponential decay time of the vortices is inversely proportional to the alpha viscosity parameter ($`\alpha _{SS}`$; Shakura & Sunyaev 1973). Godon & Livio found that the decay time can be of the order of $`10-100`$ orbits in protoplanetary disks (where it is believed that $`\alpha _{SS}\sim 10^{-4}-10^{-3}`$, and the ratio of the disk thickness to the radial distance is $`H/r\sim 0.05-0.20`$; e.g. Bell et al. 1995). Vortices can therefore live sufficiently long to allow (in principle at least) for dust to concentrate in their cores. The results of recent, highly simplified simulations (Bracco et al. 1999, still solving the vorticity equation) in fact point in that direction. However, the value of $`H/r`$ is not defined in Bracco et al. (1998, 1999). The elliptical vortices obtained by these authors are not very elongated, implying an effective aspect ratio of $`H/r=c_s/v_K\sim 1`$ ($`c_s`$ is the speed of sound and $`v_K`$ is the Keplerian velocity in the disk).
Most importantly, however, from the point of view of vortices as potential planet formation sites, it is presently not clear how vortices form in the disk initially. An analysis carried out by Lovelace et al. (1999) suggests that a non-linear, non-axisymmetric, Rossby wave instability could lead to the formation of vortices. Vortices can also appear in a broader context, e.g. in galactic disks (Fridman & Khoruzhii 1999). However, so far, no process has been shown unambiguously to create vortices in disks around young stellar objects. The only robust numerical result obtained to date is that coherent anticyclonic vortices can survive in a strongly sheared (two-dimensional) Keplerian flow (for about 50 orbits when $`\alpha _{SS}=10^{-4}`$, Godon & Livio 1999).
In the present work we first investigate the formation of vortices in a compressible Keplerian disk. We then perform simulations intended to study the dynamics of dust in the presence of vortices. Finally, we examine the possible role of vortices in angular momentum transport. The technical details of the modeling are given in §2. The results are presented in §3, and a discussion follows.
## 2 Accretion Disks Modeling
We solve the time-dependent, vertically averaged equations of the disk (e.g. Pringle 1981) using a pseudospectral method (Godon 1997). The equations are solved in the plane of the disk using cylindrical coordinates $`(r,\varphi )`$. We use an alpha prescription for the viscosity law (Shakura & Sunyaev 1973), and assume a polytropic relation for the pressure, $`P=K\rho ^{1+1/n}`$, where $`K`$ is the polytropic constant and $`n`$ is the polytropic index. Details of the modeling, including the full equations and the numerical method, can be found in Godon (1997) and in Godon & Livio (1999). Models with different values of the physical and numerical parameters have been run (see also Godon & Livio 1999). However, in all the models presented here, we chose $`H/r=0.15`$, $`\alpha _{SS}=10^{-4}`$, $`n=2.5`$, with an initial density profile $`\rho \propto r^{-15/8}`$, and an initial Keplerian angular velocity (standard disk model, Shakura & Sunyaev 1973). A numerical resolution of $`128\times 128`$ collocation points has been used.
### 2.1 The Viscous Polytropic Disk
Since we are modeling a viscous polytropic disk, it is important to stress that vorticity is dissipated and cannot be generated, for the following reason. The equation for the vorticity $`\vec{\omega }=\vec{\nabla }\times \vec{v}`$ can be obtained by taking the curl of the Navier-Stokes equations (e.g. Tassoul 1978):
$$\frac{D}{Dt}\frac{\vec{\omega }}{\rho }=\frac{\vec{\omega }}{\rho }\cdot \vec{\nabla }\vec{v}-\frac{1}{\rho }\vec{\nabla }\frac{1}{\rho }\times \vec{\nabla }P+\vec{\nabla }\times \frac{1}{\rho }\vec{\nabla }\cdot \vec{\tau },$$
(1)
where
$$\frac{D}{Dt}\frac{\vec{\omega }}{\rho }=\frac{\partial }{\partial t}\frac{\vec{\omega }}{\rho }+\vec{v}\cdot \vec{\nabla }\frac{\vec{\omega }}{\rho },(1^{\prime })$$
and $`\vec{\tau }`$ is the viscous stress tensor. The second term on the RHS of equation (1) is a source term for the vorticity. This term is non-zero in a baroclinic flow \[when $`P=P(\rho ,T)`$\] and it vanishes in a barotropic flow \[$`P=P(\rho )`$\]. The last term on the RHS is the curl of the viscous forces and it is responsible for the viscous dissipation of the vorticity. Consequently, in an inviscid barotropic flow, the flux of vorticity across a material surface is a conserved quantity (this is known as Kelvin’s circulation theorem). In the present work, the flow is polytropic, $`P=K\rho ^\gamma `$, and viscous; therefore vorticity is only dissipated.
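The barotropic/baroclinic distinction can be made concrete with a small symbolic check (a sketch in Python/SymPy; the two-dimensional fields and the $`P=\rho T`$ example are purely illustrative and are not part of the disk model used here), showing that the source term of equation (1) vanishes for a polytropic pressure law:

```python
import sympy as sp

x, y = sp.symbols('x y')
K, gam = sp.symbols('K gamma', positive=True)
rho = sp.Function('rho')(x, y)      # arbitrary two-dimensional density field
T = sp.Function('T')(x, y)          # arbitrary temperature field

def source_z(P):
    """z-component of grad(1/rho) x grad(P): the baroclinic source term of
    eq. (1), up to the -1/rho prefactor."""
    f = 1 / rho
    return sp.diff(f, x) * sp.diff(P, y) - sp.diff(f, y) * sp.diff(P, x)

print(sp.simplify(source_z(K * rho**gam)))   # barotropic P(rho): prints 0
print(sp.simplify(source_z(rho * T)))        # baroclinic P(rho,T): generally non-zero
```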
### 2.2 Dust Modelling
In this work we also address the question of the concentration of dust particles in the cores of anticyclonic vortices. The equations for the dust ”particles” were simplified, by taking into account only the drag force exerted by the gas on the dust particles (see e.g. discussion in Barge & Sommeria 1995; Tanga et al. 1996). We also assumed the radius $`s`$ of the dust particles to be smaller than the mean free path $`\lambda `$ of the gas molecules (this is known as the Epstein regime, see e.g. Cuzzi, Dobrovolskis and Champney, 1993). Accordingly, the equations of motion of a dust particle located in the plane ($`r,\varphi `$) are given in the inertial frame of reference by
$$\frac{d^2r}{dt^2}=r\left(\frac{d\varphi }{dt}\right)^2\frac{GM}{r^2}\gamma \left(\frac{dr}{dt}v_r\right),$$
(2)
$$r\frac{d^2\varphi }{dt^2}=2\frac{dr}{dt}\frac{d\varphi }{dt}\gamma \left(r\frac{d\varphi }{dt}v_\varphi \right),$$
(3)
where $`G`$ is the gravitational constant, $`M`$ is the mass of the central star, $`v_r`$ and $`v_\varphi `$ are the radial and azimuthal components of the velocity of the flow, respectively, and $`\gamma `$ is the drag parameter. Sometimes it is convenient to write $`\gamma =\tau ^{-1}`$, where $`\tau `$ is the characteristic time for the dust to be dragged by the flow (in the frame co-moving with the flow it is the ’stopping’ time). In the absence of drag ($`\gamma \rightarrow 0`$), the equations represent the motion of particles in a Keplerian potential.
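To illustrate how equations (2)–(3) behave, the following sketch (Python/SciPy) integrates a single particle in a prescribed, slightly sub-Keplerian gas flow; the drag value, the 0.5% pressure-support offset and the code units are illustrative assumptions, not the parameters of the actual runs:

```python
import numpy as np
from scipy.integrate import solve_ivp

GM = 1.0            # gravitational parameter (code units)
gamma_drag = 1.0    # drag parameter gamma = 1/tau, here tau ~ 1/Omega (illustrative)

def gas_velocity(r):
    """Prescribed axisymmetric gas flow: no radial motion, 0.5% sub-Keplerian rotation."""
    v_K = np.sqrt(GM / r)
    return 0.0, 0.995 * v_K          # (v_r, v_phi)

def rhs(t, y):
    """Equations (2)-(3) rewritten as first-order ODEs in (r, phi, rdot, phidot)."""
    r, phi, rdot, phidot = y
    v_r, v_phi = gas_velocity(r)
    rddot = r * phidot**2 - GM / r**2 - gamma_drag * (rdot - v_r)
    phiddot = (-2.0 * rdot * phidot - gamma_drag * (r * phidot - v_phi)) / r
    return [rdot, phidot, rddot, phiddot]

y0 = [1.0, 0.0, 0.0, np.sqrt(GM)]    # start on a circular Keplerian orbit at r = 1
sol = solve_ivp(rhs, (0.0, 20 * 2 * np.pi), y0, rtol=1e-8, atol=1e-10)
print("radius after 20 orbits:", sol.y[0, -1])   # < 1: the drag induces an inward drift
```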
## 3 Results
### 3.1 The Formation of vortices
In this subsection we simulate different mechanisms which are potentially capable of creating vortices in a thin, two-dimensional Keplerian disk (representing a protoplanetary disk). We distinguish mainly between two types of processes: (i) vorticity is generated only initially (similar to simulations of decaying turbulence), (ii) vorticity is generated continuously (similar to simulations of driven turbulence). In the first case we assume that the flow is initially turbulent, while in the second we propose two potential mechanisms for the generation of vorticity: accretion of clumps of gas (or ”comets”) onto the disk, and convection.
#### 3.1.1 Initially Turbulent State
If the initial collapse of the protostellar cloud is turbulent, then one might suspect that the initial disk that forms could still have some turbulence in it. It is common in the modeling of two-dimensional turbulent flows (e.g. planetary atmospheres) to assume for the initial conditions a random perturbation of the vorticity field (Bracco et al. 1998, 1999; Nauta 1999).
We made a similar assumption for a standard disk model (Shakura & Sunyaev 1973) with $`H/r=0.15`$ and $`\alpha _{SS}=10^{-4}`$ (Figure 1). As the model evolved, the anticyclonic vorticity perturbations formed coherent vortices which merged together to form larger vortices, while the cyclonic vorticity perturbations were stretched and dissipated (Figure 2). The energy spectrum (Figure 3) was found to be fairly flat for small wave numbers, in agreement with previous simulations of two-dimensional compressible turbulence (e.g. Farge & Sadourny 1989; Godon 1998). The amplitude of the vortices decreased with time, as expected for a simulation of decaying turbulence. The ”turbulence” in this case is not fed by the background Keplerian flow, but rather only by the initial perturbation.
Our results are broadly consistent with those of Bracco et al. (1998). Namely, coherent anticyclonic vortices do form (from an initially perturbed vorticity field) and are stable for many rotation periods (they have exponential decay times of the order of $`50`$ orbits). However, a close examination of the results reveals further details. In particular, we find that all the anticyclonic vortices are accompanied by cyclonic vorticity stripes, which form a partial shielding of the vortices (see Figure 4). This finding can be explained in terms of the Burger number of the flow as follows (see e.g. Polvani et al. 1994). Let us define a Rossby deformation radius as
$$L_R=\frac{c_s}{2\mathrm{\Omega }}=\frac{H}{2}.$$
(4)
The flow on scales larger than $`L_R`$ is affected by the Coriolis force (the Coriolis force becomes of the order of the pressure gradient). Now, the Burger number is defined by
$$B=\left(\frac{L_R}{L}\right)^2,$$
(5)
where $`L`$ is the typical length scale of the flow. In a two-dimensional Keplerian disk
$$B=\left(\frac{H}{2r}\right)^2=\frac{\mathcal{M}^{-2}}{4},$$
(6)
where $`\mathcal{M}`$ is the azimuthal Mach number of the flow. Therefore, for thin disks the Burger number becomes very small. However, it has been observed (see e.g. Polvani et al. 1994) that for small Burger numbers, prograde (anticyclonic in disks) vortices are surrounded by rings of adverse (cyclonic) vorticity. We found indeed that the anticyclonic vortices are shielded by a weak cyclonic vorticity edge (rather than by a cyclonic vorticity ring). The cyclonic vorticity is located (radially) at the outer edge of the anticyclonic vortex. Although cyclonic vorticity perturbations decay in the flow within a few orbits, the cyclonic edges do not.
#### 3.1.2 Accretion of Clumps of Gas
In order to explore the idea that the impacts of clumps of gas onto a protoplanetary disk can generate vortices, we first simulated a related problem, in which previous research suggested that vorticity is created. Namely, the idea that the deposition of energy due to the impact of a comet onto a planetary atmosphere creates a vortex at the impact site. Harrington et al. (1994), for example, simulated the dynamic response of Jupiter’s atmosphere to the impact of the fragments of comet Shoemaker-Levy 9. In all of their simulations, they obtained both a set of globally-propagating inertia-gravity waves (the speed of propagation of the waves was $`400\mathrm{m}\mathrm{s}^{-1}`$) and a longer-lived vortex at the impact site.
We first wanted to test whether we can reproduce these results, using our numerical tools. We therefore carried out a similar simulation using a two-dimensional Fourier pseudospectral code (the code was written and developed in this research specifically for this purpose). We modeled the atmosphere assuming that it is a two-dimensional polytropic compressible flow. In order to simulate the inertia-gravity waves we chose the polytropic constant $`K`$ in such a way that the speed of the waves matched $`400\mathrm{m}\mathrm{s}^{-1}`$, and the polytropic index $`n`$ was set to 1 (this is mathematically equivalent to solving the shallow water equations). The models are insensitive to the precise values of $`K`$ and $`n`$, and qualitatively similar results are obtained when simulating sonic waves rather than the inertia-gravity waves (the polytropic constant is then chosen in such a way that the propagation speed of the waves matches the sound speed of $`700\mathrm{m}\mathrm{s}^{-1}`$). At the impact site we deposited $`10^{28}`$ erg, corresponding to a fragment 1 km in diameter (of density $`1\mathrm{g}\mathrm{cm}^{-3}`$, and mass $`5\times 10^{14}`$ g).
We obtained the same results as in Harrington et al. (1994): a set of globally-propagating waves and a longer-lived vortex at the impact site (Fig. 5). The vortex was present during the entire simulation, which was carried out for about 30 Jovian days. We obtained similar results for models rotating more slowly (representing the Earth; in the case of the Earth the sonic crossing time is of the order of the rotation period, while for Jupiter it is much longer). The mechanism by which vorticity is created at the impact site is the following. Due to the Coriolis force, motion towards the poles is deflected to the East, while motion towards the equator is deflected to the West. Therefore, the outward motion due to the high pressure gradient at the impact site generates an anticyclonic vorticity motion.
Next we examined the possibility that a local deposition of energy in a Keplerian disk can generate vorticity as well. In this case it is mainly the shear, rather than the Coriolis force, that would be responsible for inducing the anticyclonic motion. We assumed that energy can be deposited in the same manner, due to the collision of an infalling clump of gas (a remnant of the initial collapsing cloud; see e.g. Cassen & Moosman 1981) with the forming disk. For definiteness we assumed that the orbit of the clump was similar to that of the Shoemaker-Levy 9 comet, and that it impacted the disk at a distance of 5 AU from the central star. In this case the kinetic energy of the clump at impact scales linearly with the mass of the clump. We carried out simulations with a maximum energy input of $`10^{37}`$ ergs, corresponding to a maximum clump mass of $`5\times 10^{23}`$ g.
In the models that we ran, we found that initially, an anticyclonic vorticity region formed at the impact site (Fig. 6a). However, within a short time, the vortex was strongly sheared by the flow and no coherent structure was observed. Specifically, the anticyclonic vorticity stripe completely dissipated within a few orbits (Fig. 6b). The reason for the rapid destruction was the fact that the vorticity amplitude was too small for the vortex size (or equivalently the size was too large for the amplitude). Any vorticity perturbation with a characteristic velocity disturbance $`u`$ needs to be much smaller in size than the characteristic length $`L_s`$ related to the shear by $`L_s=\sqrt{u/\mathrm{\Omega }^{\prime }}`$, otherwise it is quickly destroyed. We have simulated impacts with clouds of size up to $`H`$, with a density of $`10^{-4}`$ of the local density in the disk. Since this corresponds to masses that are as high as $`10^9`$ times the mass of a typical Shoemaker-Levy 9 fragment, we have to conclude that this mechanism is unlikely to produce vortices in protoplanetary disks.
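The size–amplitude criterion invoked here can be illustrated with a few lines of Python; a Keplerian shear rate $`|d\mathrm{\Omega }/dr|=\frac{3}{2}\mathrm{\Omega }/r`$ and $`c_s=H\mathrm{\Omega }`$ are assumed, and the numbers are purely order-of-magnitude:

```python
import numpy as np

def shear_length(u, Omega, r):
    """L_s = sqrt(u / |dOmega/dr|) with |dOmega/dr| = 1.5 Omega / r (Keplerian)."""
    return np.sqrt(u * r / (1.5 * Omega))

r, Omega = 1.0, 1.0                 # code units
H = 0.15 * r                        # H/r = 0.15
c_s = H * Omega                     # thin-disk relation

for eps in (0.01, 0.1, 0.3):        # velocity disturbance u = eps * c_s
    Ls = shear_length(eps * c_s, Omega, r)
    print(f"u = {eps:.2f} c_s  ->  L_s/H = {Ls / H:.2f}")
# A perturbation of size ~H is at best marginal even for u ~ 0.3 c_s (it must
# be *much* smaller than L_s), while a perturbation of size ~0.1 H with
# u ~ 0.1 c_s easily satisfies the criterion (cf. Sec. 3.1.3).
```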
#### 3.1.3 Convective Cells
A second possibility (in principle) is that vortices are formed by convection. The idea is that convective bubbles ’rotated’ by the shear could generate vorticity in the flow. Since our simulations are two-dimensional (and therefore cannot follow vertical convection), we investigated ways in which the effects of convection could be mimicked. If we assume convective cells to be of size $`l=aH`$ (where the ”mixing length” parameter $`a`$ is smaller than one), the difference in velocity across the cell, due to the shear, is given by $`\mathrm{\Delta }v=l\times dv_K/dr\sim ac_s`$. We found that vorticity perturbations with velocity differences corresponding to a value of $`a\sim 0.1`$ (when $`H/r\sim 0.1`$) were sufficient to create a vortex in the flow. However, to concentrate dust in the core of an anticyclonic vortex one needs to have $`a\gtrsim 0.2`$ (see §3.2; this could also be achieved by mergers of vortices). We therefore simulate the formation of vortices due to convection by generating in the flow vorticity perturbations of size $`l\sim 0.1H`$ and velocity $`v\sim 0.1c_s`$. We introduce the perturbations in the flow at the rate of about one per orbit (since $`v/l\sim \mathrm{\Omega }_K`$). In this case the simulations are similar to those of driven turbulence.
We ran this model for many rotation periods, allowing the disk to evolve. The simulation showed that with time, the vortices that formed interacted and merged together. Eventually, after about 50 orbital periods, one large vortex dominated the flow (Figure 7).
### 3.2 Dust Dynamics
The time scale parameter $`\tau =1/\gamma `$ in eqs. (2) and (3) is approximately the time it takes a (spherical) dust particle to come to rest relative to the flow, and it is given by (e.g. Barge & Sommeria 1995):
$$\tau =\frac{\rho _ds}{\rho _{gas}c_s}=\frac{2\rho _ds}{\mathrm{\Sigma }\mathrm{\Omega }},$$
(7)
where $`\rho _d`$ is the density of the particle, $`s`$ is its radius, $`\rho _{gas}`$ is the gas density, $`c_s`$ is the speed of sound, $`\mathrm{\Sigma }`$ is the gas surface density and $`\mathrm{\Omega }`$ is the (Keplerian) angular velocity.
For very light particles, the stopping-time $`\tau `$ is short compared to $`\mathrm{\Omega }^{-1}`$ and the particles come to rest rapidly (relative to the flow). In this case the dust just moves with the flow. For heavy particles, one has $`\tau >>1/\mathrm{\Omega }`$, and the dust particles follow an almost Keplerian motion without being affected by the drag forces. For intermediate mass particles (for which $`\tau \sim 1/\mathrm{\Omega }`$) it is expected that the vortex captures particles in its vicinity (e.g. Barge & Sommeria 1995). Following Cuzzi et al. (1993), we assume that the mean free path $`\lambda `$ in the gas is given by $`\lambda \simeq (r/1AU)^{11/4}`$ cm. The equations for the dust particles (eqs. 2-3) are valid in the Epstein regime, i.e. when the radius $`s`$ of the particles is smaller than the mean free path: $`s<\lambda `$. Assuming that $`\rho _d=3\mathrm{g}\mathrm{cm}^{-3}`$ and taking $`\mathrm{\Sigma }=\mathrm{\Sigma }_0r^{-1.5}`$ where $`\mathrm{\Sigma }_0=1700g/cm^2`$ and $`r`$ is given in AUs (Barge and Sommeria 1995), the stopping parameter $`\tau `$ takes the value $`1/\mathrm{\Omega }(r)`$ at $`r_0=60`$ AU for $`s=1`$ cm and at $`r_0\simeq 5`$ AU for $`s=50`$ cm. Therefore, our simulations are valid in the range $`r>r_0/60`$ for $`s=1`$ cm and $`r>0.8\times r_0`$ for $`s=50`$ cm (where we have defined $`r_0`$ to be the radius at which $`\tau =1/\mathrm{\Omega }`$ for particles of a given size). Since most of the simulations were carried out in the regime $`\tau \sim 1/\mathrm{\Omega }`$, the results are valid only for particles of radius $`s\lesssim 50`$ cm, otherwise the dust particles are not in the Epstein regime at the radius where $`\tau \sim 1/\mathrm{\Omega }`$. The results are also valid for chondrules for which $`s\simeq 0.1`$ cm, where the Epstein regime is realized for $`r>0.4`$ AU, and $`r_0\simeq 200`$ AU. All the models scale with radius $`r_0`$ and orbital period $`P=2\pi /\mathrm{\Omega }(r_0)`$.
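These estimates are easy to reproduce numerically. The sketch below (Python/SciPy) assumes a solar-mass central star, the surface-density normalization quoted above and the Epstein stopping time of eq. (7); the radii it returns are of the same order as those quoted in the text, with the exact values depending on these assumptions:

```python
import numpy as np
from scipy.optimize import brentq

G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13     # cgs units
rho_d, Sigma0 = 3.0, 1700.0                     # grain density [g cm^-3], Sigma at 1 AU [g cm^-2]

def Omega(r_au):                                # Keplerian angular velocity [s^-1]
    return np.sqrt(G * M_sun / (r_au * AU)**3)

def tau_stop(r_au, s_cm):                       # Epstein stopping time, eq. (7)
    Sigma = Sigma0 * r_au**-1.5
    return 2.0 * rho_d * s_cm / (Sigma * Omega(r_au))

def r0(s_cm):                                   # radius where tau * Omega = 1
    return brentq(lambda r: tau_stop(r, s_cm) * Omega(r) - 1.0, 0.01, 1.0e4)

for s in (0.1, 1.0, 50.0):                      # chondrules, cm-size and half-metre particles
    r_epstein = s**(4.0 / 11.0)                 # Epstein regime: s < (r/1 AU)^(11/4) cm
    print(f"s = {s:5.1f} cm:  r_0 ~ {r0(s):6.1f} AU,  Epstein valid for r > {r_epstein:.2f} AU")
```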
#### 3.2.1 Concentration of dust particles in the core of anticyclonic vortices
We carried out simulations in which a single vortex was initially introduced in the disk, with $`H/r=0.15`$ and with $`H/r=0.5`$. The second case, $`H/r=0.5`$, was intended to mimic a disk similar to that of Bracco et al. (1999), where the vortices were not very elongated. In both cases the initial velocity in the vortex was taken to be a significant fraction of the sound speed (more than 30 percent). The results obtained for both disks are similar. The initial random distribution of dust particles in the disk is shown in Figure 8. With time, dust particles concentrate inside the vortex (Figures 9 and 10). The number of dust particles in the core of the vortex increases linearly with time (Figure 11). The particle density inside the core was doubled (in comparison to the ambient particle density) within about 3 orbits. The radial drift of the particles is also clearly visible in the outer disk (Figure 10).
#### 3.2.2 Radial drift of the dust particles in the disk
Because of the compressibility, the gas flow is partially supported by pressure forces, and its angular velocity is slightly sub-Keplerian. For dust particles of a given size and density, one has $`\tau \propto r^3`$ (Eq. 7). At the radius $`r_0`$, at which $`\tau \sim 1/\mathrm{\Omega }_K`$, the drag exerted by the gas flow on the particles becomes non-negligible and the particles are slowed down to sub-Keplerian speed. The centrifugal force is consequently decreased and the particles drift radially inwards. At larger radii, however, one has $`r>>r_0`$ and $`\tau >>1/\mathrm{\Omega }_K`$. The drag there is negligible and the particles rotate at a Keplerian velocity, without being affected by the gas flow. On the other hand, at small radii ($`r<<r_0`$), the drag is so strong ($`\tau <<1/\mathrm{\Omega }_K`$) that the particles move completely with the flow at a sub-Keplerian velocity without even drifting inwards (i.e. like tracers). One might therefore expect a gap to form around the location where $`\tau \sim 1/\mathrm{\Omega }_K`$, with the matter having the tendency to accumulate at a radius $`r<r_0`$.
We found that the radial drift velocity reaches a maximum value of about $`v_{drift}\sim 10^{-2}\times v_K`$ (when $`\tau \sim 1/\mathrm{\Omega }_K`$). Accordingly, the timescale for the particles to drift inwards is very short ($`t_{drift}\sim r/v_{drift}`$), of the order of $`20`$ Keplerian orbits. We found that the surface density of the dust particles in the inner disk (i.e. $`r<r_0`$) increased by a factor of 2-3 within about ten rotation periods (Figure 12). However, the depletion of dust in the outer disk is slower ($`\sim 100`$ orbits), since a much broader region of the disk is involved.
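For orientation, the drift time quoted above follows directly from the peak drift speed (a back-of-the-envelope estimate):

$$t_{drift}\sim \frac{r}{v_{drift}}\sim \frac{r}{10^{-2}v_K}=\frac{10^2}{\mathrm{\Omega }_K}=\frac{10^2}{2\pi }P_{orb}\simeq 16P_{orb},$$

i.e. of the order of the $`20`$ Keplerian orbits found in the simulations.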
### 3.3 Transport of Angular Momentum
It is interesting to consider whether interactions and mergers of vortices can result in significant angular momentum transport in the disk.
The effective ”viscosity” parameter, $`\alpha _{eff}`$, is given in a steady state by (e.g. Pringle 1981)
$$\alpha _{eff}=\frac{2}{3}\frac{r}{H}\frac{v_r}{c_s},$$
(8)
where $`v_r`$ is the radial velocity. In Figure 13 we show $`\alpha _{eff}`$ as a function of time for the merger of two vortices. It is clear that for such a mechanism to provide efficient angular momentum transport, mergers of vortices would be required every few rotations. Therefore, one needs a process that is able to generate vortices continuously (and a low MHD viscosity to avoid their destruction; Godon & Livio 1999). The only process capable (at least in principle) of such vorticity generation is convection; however, it would require a full three-dimensional calculation to assess the viability of this mechanism. At the moment it appears unlikely that vortices would play an important role in angular momentum transport. In particular, even if vortices are formed continuously, it appears difficult for mergers to sustain a relatively coherent outward flux of angular momentum.
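As a rough illustration of eq. (8) (Python; the numbers are illustrative and not measured from the runs), even a very small mean inflow velocity translates into an appreciable $`\alpha _{eff}`$ in a thin disk:

```python
def alpha_eff(v_r_over_cs, r_over_H=1.0 / 0.15):
    """Effective viscosity parameter of eq. (8): alpha_eff = (2/3)(r/H)(v_r/c_s)."""
    return (2.0 / 3.0) * r_over_H * v_r_over_cs

# With H/r = 0.15, a mean inflow of only ~1e-4 c_s already gives
# alpha_eff ~ 4e-4, i.e. a few times the imposed alpha_SS = 1e-4, which is
# why short-lived events such as vortex mergers register clearly in Figure 13.
print(alpha_eff(1.0e-4))
```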
## 4 Discussion
In this work, we have carried out for the first time a two-dimensional compressible simulation of a Keplerian disk, with the purpose of studying the formation and role of vortices in protoplanetary disks. We found that in order to generate a vortex in the disk, the initial vorticity perturbation has to be anticyclonic and relatively strong: its velocity and size have to be a considerable fraction (at least $`0.1`$) of the sound speed and disk thickness respectively. We also found that each anticyclonic vortex is shielded by a weak cyclonic vorticity stripe (”vortex shielding”), while the cyclonic perturbations are elongated and sheared by the flow.
We showed that if the disk that forms after the collapse of the protostellar cloud is initially turbulent and contains a randomly perturbed vorticity field, then coherent anticyclonic vortices can form, and they merge together into larger vortices. The vortices so formed decay slowly, on a timescale that is inversely proportional to the viscosity parameter, and of the order of 50-100 orbits for $`\alpha _{SS}\sim 10^{-4}`$.
It is important to note that the decay time of the vortices for disks with parameters ranging from $`(H/r,\alpha )=(0.5,10^{-5})`$ to $`(H/r,\alpha )=(0.05,10^{-3})`$ (Godon & Livio 1999) was the same as the one obtained here, because the viscosity is given by $`\nu =\alpha _{SS}c_sH=\alpha _{SS}H^2\mathrm{\Omega }_K`$. We can, therefore, infer the same decay time for a disk with $`\alpha _{SS}=10^{-2}`$ and $`H/r=0.015`$. This is important since the maximal particle size reached by coagulation (cm) was calculated using $`\alpha _{SS}=10^{-2}`$ (more precisely, the size is given by $`150\alpha _{SS}`$ cm; Dubrulle, Morfill & Sterzik 1995). Such a value of the aspect ratio $`H/r=0.015`$, however, would have required a higher resolution and was therefore less convenient from a numerical point of view. Furthermore, values of $`\alpha _{SS}`$ in the range $`10^{-4}`$–$`10^{-3}`$ are frequently used to model the FU Ori outburst cycles in young stellar objects (e.g. Bell et al. 1995).
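The scaling behind this statement can be made explicit with a short check (Python; the parameter pairs are those quoted above): in units of $`r^2\mathrm{\Omega }_K`$ the viscosity is $`\nu =\alpha _{SS}(H/r)^2`$, and this combination is essentially the same for all the quoted disks.

```python
# nu / (r^2 Omega_K) = alpha_SS * (H/r)^2 : nearly identical for all the pairs,
# hence the same vortex decay time (in orbits).
for H_over_r, alpha in [(0.5, 1e-5), (0.15, 1e-4), (0.05, 1e-3), (0.015, 1e-2)]:
    print(f"H/r = {H_over_r:5.3f}, alpha_SS = {alpha:7.0e} -> nu/(r^2 Omega_K) = {alpha * H_over_r**2:.2e}")
```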
We also attempted to generate vortices by simulating the impacts of accreting clumps of gas onto the disk. However, the anticyclonic vorticity perturbations that form as a result of such impacts are not strong enough to generate a coherent structure, and they dissipate within several orbits.
We have in addition carried out simulations of driven turbulence, to mimic the effects of convection, where vorticity is continuously generated in the flow. These simplified calculations showed that after about 50 orbits a large vortex forms and is sustained in the flow, due to the merging of smaller vortices. Such a vortex is a preferred place for the dust to concentrate and trigger the formation of a large planetesimal or a core of a protoplanet.
In addition to vortex formation, we examined the process of dust concentration by carrying out simulations of a two-phase flow designed to model the dust-gas interaction in protoplanetary disks. We found that the dust concentrates quickly in the cores of vortices when the drag parameter is of the order of the orbital frequency. As a consequence of the radial drift, particles are continuously renewed near the vortex orbit. The dust density in the vortex increases by a factor of 10 within about 20 orbits.
In the Introduction, we noted that there is a gap of two orders of magnitude between the maximal particle size (centimeters) reached by coagulation and the minimal size (meters) required for planetesimal formation (via a gravitational instability). The minimal size required for planetesimal formation could be reduced (to centimeters), if the density in the vortex is increased by a factor of 100. According to our simulations this would happen in about 200 orbits, or about 2000 years at 5 AU - a short time compared to the timescale of the formation of centimeter-size objects ($`10^4`$ years; Beckwith, Henning, & Nakagawa 1999). We found that vortices survive in the flow for about 50 orbits (unless the alpha viscosity parameter is smaller, or vortices are constantly generated in the flow), thus reducing the gap for this process to work from a factor of 100 to a factor of 4.
We also found that the particles drift rapidly inwards, due to the compressibility of the flow. This result was not obtained in previous two-dimensional incompressible models of the disk. The drift of the dust induces a significant increase in the surface density of the dust particles. In about 10 orbits the density can be increased in the inner disk by a factor of about 3, and in the outer disk it is concomitantly decreased by about the same factor within a hundred orbits. Specifically, for dust particles of radius 10cm and density 3g/cc, we find that within about 300 years (about 10 orbits) the density increases (by a factor of 3) in the region $`r<9`$AU, and it decreases at larger radii ($`r>9`$AU) in about 3000 years.
We have also considered the effects of interactions and mergers of vortices on angular momentum transport in the disk. We found that even if vortices are formed continuously, it appears difficult for mergers to transport angular momentum outwards effectively.
## Acknowledgments
This work has been supported by NASA Grant NAG5-6857 and by the Director’s Discretionary Research Fund at STScI. We would like to thank James Cho for useful discussions on the vorticity equation.
## References
Adams, F. C., & Watkins, R. 1995, ApJ, 451, 314
Barge, P., & Sommeria, J. 1995, A & A, 295, L1
Beckwith, S. V. W., Henning, T., Nakagawa, Y., 1999, in Protostars and Planets IV, in press, astro-ph/9902241
Bell, K. R., Lin, D. N. C., Hartmann, L. W., & Kenyon, S. J., 1995, ApJ, 444, 376
Bracco, A., Chavanis, P.H., Provenzale, A., & Spiegel, E.A. 1999, preprint, astro-ph/9810336.
Bracco, A., Provenzale, A., Spiegel, E., Yecko, P. 1998, in A. Abramowicz, G. Björnsson, J.E. Pringle (ed.), Theory of Black Hole Accretion Disks, Cambridge Univ. Press, 254
Cassen, P., & Moosman, A. 1981, ICARUS, 48, 353
Cuzzi, J.N., Dobrovolskis, A.R., Champney, J.M. 1993, ICARUS, 106, 102
Dubrulle, B., Morfill, G., and Sterzik, M. 1995, ICARUS, 114, 237
Farge, M., & Sadourny, R. 1989, J. Fluid Mech., 206, 433
Fridman, A.M., & Khoruzhii, O.V. 1999, in Astrophysical Discs, ASP Conference Series, Vol. 160, J.A. Sellwood and J. Goodman, eds., 341
Godon, P., & Livio, M. 1999, ApJ, 523, 350
Godon, P. 1997, ApJ, 480, 329
Harrington, J., LeBeau, R.P.Jr, Backes, K.A., & Dowling, T.E, 1994, Nature, 368, 525.
Lovelace, R.V.E., Li, H., Colgate, S.A., & Nelson, A.F. 1999, ApJ, 513, 805
Nauta, M.D., 1999, A & A, in press
Polvani, L.M., McWilliams, J.C., Spall, M.A., & Ford, R., 1994, Chaos, 4, 177
Pringle, J.E., 1981, ARA& A, 19, 137
Shakura, N.I., & Sunyaev, R. A. 1973, A & A, 24, 337.
Tanga, P., Babiano, A., Dubrulle, B., & Provenzale, A., 1996, ICARUS, 121, 158
Tassoul, J. L., 1978, Theory of Rotating Stars, Princeton Series in Astrophysics, Princeton, New Jersey
## Figure Captions
Figure 1: Color scale of the initial random perturbation of the vorticity field. Green, yellow and brown represent increasing anticyclonic vorticity, and cyclonic vorticity is in dark blue (light blue represents null vorticity). The Keplerian background has been subtracted for clarity.
Figure 2: A color scale of the vorticity is shown at t=8 orbits for the model shown in Figure 1. Coherent anticyclonic vortices have formed and merged into larger vortices. Cyclonic vorticity appears only as elongated (dark blue) stripes.
Figure 3: The spectrum of the total kinetic energy (averaged in time and in the radial direction) is shown as a function of the azimuthal wavenumber for the model shown in Figures 1 and 2. The higher modes are smoothed out by the viscosity and there the slope is $`-2.8`$. In the lower modes the slope is fairly flat ($`-0.6`$), as expected for compressible two-dimensional flows.
Figure 4: Vortex shielding in a disk. A color scale of the vorticity in the disk is shown. The Keplerian background has been subtracted for clarity. The anticyclonic vortex is accompanied by a cyclonic vorticity stripe (dark blue band) stretching azimuthally. Another weak anticyclonic vortex is seen (in green).
Figure 5: A simulation of the Jovian atmosphere about 48h after impact. The color scale represents the vorticity. Waves propagate outwards (at the inertia-gravity waves speed) and a coherent vortex forms at the impact site.
Figure 6a: Color scale of the vorticity in a disk after energy has been deposited locally, at $`t\sim 0.1`$ orbit. An anticyclonic vortex forms at the (rotating) impact site, and it stretches with time. A wave propagates outwards and is strongly deformed by the shear. The Keplerian background vorticity has not been subtracted for comparison.
Figure 6b: The vorticity of the disk after a little more than one orbit. The vortex has been stretched out to the point that it is now barely visible in the lower left.
Figure 7: A grayscale of the vorticity in the disk after about 50 orbital periods. The simulation here corresponds to driven turbulence, namely, an anticyclonic vorticity perturbation is introduced in the flow every orbit. Such a perturbation is first stretched and then ’collapses’ onto itself to form a coherent vortex. The vortices so formed interact together and merge into larger vortices. Eventually, as shown here, one large vortex dominates the flow (in the upper part of the figure). The Keplerian background has been subtracted for clarity.
Figure 8: The initial random distribution of dust particles in the disk. The number of particles is $`N=15,000`$ and the surface density of the particles increases like 1/r.
Figure 9: A strong anticyclonic vortex in the disc, shown at t=12 orbits.
Figure 10: The distribution of dust particles in the disk after 12 orbits. The density of the dust particles in the vortex has increased by a factor of five in comparison to its initial value. The drift of the particles inwards is also apparent. More than half of the particles have crossed the inner boundary of the computational domain.
Figure 11: The number of particles located inside the vortex as a function of time. The number of particles $`N(t)`$ is in units of $`N_0=N(t=0)`$, and the unit of time is the orbital period of the vortex in the disk. The dotted line represents a thick disk ($`H/r=0.5`$) and the full line is for a thinner disk with $`H/r=0.15`$.
Figure 12: The dust particles in the disk at $`t\simeq 10`$ orbits. The inner radius of the disk is at $`r_{in}=1`$ and the outer radius is at $`r_{out}=41`$. The radius at which $`\tau \simeq 1/\mathrm{\Omega }`$ is located at $`r=r_0=10\times r_{in}`$. Due to the drift, the particle density in the inner disk has increased by a factor of about 3.
Figure 13: The effective alpha viscosity parameter $`\alpha _{eff}`$ (due to the merger process of vortices) in a disk (in which $`\alpha _{SS}=10^{-4}`$). During the first orbits the vortices emit waves and $`\alpha _{eff}`$ reaches its maximum. Eventually the vortices merge together at around $`t\simeq 12`$ orbits.
# Comparing the SBF Survey Velocity Field with the Gravity Field from Redshift Surveys
## 1. The SBF Survey
The Surface Brightness Fluctuation (SBF) method of estimating early-type galaxy distances has been around for over a decade (Tonry & Schneider 1988). Blakeslee, Ajhar, & Tonry (1999) have recently reviewed the SBF method, its applications, and calibration. Tonry, Ajhar, & Luppino (1990) made the first application in the Virgo cluster and calibrated it using theoretical isochrones for the color dependence combined with the Cepheid distance to M31 for the zero point. Tonry (1991) applied the method to Fornax galaxies and made the first fully empirical calibration, differing substantially from the earlier theoretical one. The situation is now much improved, with the best theoretical models (Worthey 1994) agreeing well with the latest empirical calibration.
The $`I`$-band SBF Survey of Galaxy distances began in earnest in the early 1990’s using the 2.4 m telescope at MDM Observatory on Kitt Peak in the Northern hemisphere and the 2.5 m at Las Campanas Observatory in the South. The Survey includes distances to over 330 galaxies reaching out to $`cz\simeq 4500`$ km s<sup>-1</sup>. Tonry et al. (1997; hereafter SBF-I) describe the SBF Survey sample in detail. The breakdown by morphology is 55% ellipticals, 40% lenticulars, and 5% spirals. The median distance error is about 0.2 mag, several times larger than our estimate of the intrinsic scatter in the method. This is due to the compromises involved in conducting a large survey with an observationally demanding method on small telescopes; thus, most individual galaxy distances could be improved with reobservation in more favorable conditions.
Tonry et al. (1999, hereafter SBF-II) investigate the large-scale flow field within and around the Local Supercluster using extensive parametric modeling. This modeling is summarized by Tonry et al. and Dressler et al. in the present volume. Also in this volume are SBF-related works by Pahre et al. calibrating the $`K`$-band fundamental plane with SBF Survey distances and by Liu et al. reporting $`K`$-band SBF measurements in the Fornax and Coma clusters. Here we present a preliminary comparison of the SBF survey peculiar velocities with expectations from the density field as probed by redshift surveys. More details on many aspects of this work are given by Blakeslee et al. (2000).
## 2. Comparing to the Density Field
We use the method of Nusser & Davis (1994; see the description by Davis in this volume) to perform a spherical harmonic solution of the gravity field from the observed galaxy distribution of both the IRAS 1.2 Jy flux-limited redshift survey (Strauss et al. 1992; Fisher et al. 1995) and the Optical Redshift Survey (ORS) (Santiago et al. 1995). We assume linear biasing so that the fluctuations in the galaxy number density field are proportional to the mass density fluctuations, i.e., $`\delta _g=b\delta _m`$, where $`b`$ is the bias factor, and use linear gravitational instability theory (e.g., Peebles 1993) so that the predicted peculiar velocities are determined by $`\beta \equiv \mathrm{\Omega }^{0.6}/b`$. We then compute the distance-redshift relation in the direction of each sample galaxy as a function of $`\beta `$.
The comparisons are done with the same subset of SBF survey galaxies as used in SBF-II: galaxies with good quality data, $`(V-I)_0>0.9`$ so that the color calibration applies, and not extreme in their peculiar velocities (e.g., no Cen-45); we also omit Local Group members. The sample is then 280 galaxies. We compare the predicted $`cz`$–$`d`$ relations to the observations using a simple $`\chi ^2`$ minimization approach, as adopted by Riess et al. (1997) in the comparison to the Type Ia supernova (SNIa) distances. Figure 1 gives an illustration. Unlike SNIa distances, however, the SBF distances have no secure external tie to the far-field Hubble flow, so we allow the overall scale of the distances (in km/s) to be a free parameter. The best-fit scale then yields a value for $`H_0`$ when combined with the SBF tie to the Cepheids, which is uncertain at the $`\pm 0.1`$ mag level (SBF-II). Thus, the 15 or more free parameters of SBF-II are here replaced by just two parameters: $`H_0`$ and $`\beta `$.
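A schematic of this two-parameter grid search is sketched below (Python; the function `cz_model`, which returns the predicted redshift of object $`i`$ at trial distance $`r`$ for a given $`\beta `$, stands in for the harmonic reconstruction of the velocity field and is assumed to be supplied; the error model is deliberately simplified):

```python
import numpy as np

def chi2(H0, beta, d_mpc, frac_err, cz_obs, cz_model, sigma_v=200.0):
    """chi^2 between observed (group-averaged) redshifts and the predicted
    cz-d relation, with the SBF distance scale set by the trial H0."""
    r = H0 * d_mpc                                   # trial distances in km/s
    cz_pred = np.array([cz_model(i, ri, beta) for i, ri in enumerate(r)])
    sigma = np.hypot(sigma_v, frac_err * r)          # distance errors mapped to km/s
    return np.sum(((cz_obs - cz_pred) / sigma) ** 2)

# Grid as described in the text: beta in steps of 0.1, H0 in steps of 1 km/s/Mpc;
# the resulting chi2 surface gives the joint (H0, beta) confidence contours.
betas = np.arange(0.0, 1.01, 0.1)
H0s = np.arange(60.0, 91.0, 1.0)
```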
Our sample consists mainly of early-type galaxies in groups, with the dominant groups being the Virgo and Fornax clusters. We use a constant small-scale velocity error $`\sigma _v=200`$ km s<sup>-1</sup> and deal with group/cluster virial dispersions by using group-averaged redshifts for the galaxies. The group definitions are from SBF-I; more than a third of the galaxies are not grouped. This approach is unlike that of SBF-II, which used only individual galaxy velocities but a variable $`\sigma _v`$. Blakeslee et al. (2000) explore several approaches in doing the comparison to the gravity field, including one with no grouping but a variable $`\sigma _v`$, and find similar results to what we report below. In addition, they find negligibly different results when $`\sigma _v=150`$ km s<sup>-1</sup> is used instead of 200 km s<sup>-1</sup>.
## 3. Results
Figure 2 displays the joint probability contours on $`H_0`$ and $`\beta `$ derived from the $`\chi ^2`$ analysis of the IRAS/SBF and ORS/SBF comparisons. The calculations are done in $`\beta `$ steps of 0.1 and $`H_0`$ steps of 1 km s<sup>-1</sup> Mpc<sup>-1</sup>. Both redshift surveys call for $`H_0\simeq 74`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. For $`\beta `$, the IRAS comparison gives $`\beta _I=0.4`$–0.5, while the ORS prefers $`\beta _O\simeq 0.3`$. Adopting the best $`\beta `$ models for each comparison, Figure 3 shows the reduced $`\chi ^2`$ for 279 degrees of freedom plotted against $`H_0`$ (we averaged the $`\beta _I=0.4`$ and $`\beta _I=0.5`$ predictions for IRAS). As there can only be one $`H_0`$, and the IRAS and ORS surveys are not wholly independent, it is perhaps not surprising that they give consistent results. Interestingly, the preferred $`H_0`$ splits the difference between the “SBF $`H_0`$” values proffered by SBF-II and Ferrarese et al. (1999).
The reduced $`\chi ^2`$ as a function of $`\beta `$ for a fixed $`H_0`$ is shown in Figure 4. We find $`\beta _I=0.44\pm 0.08`$ for IRAS and $`\beta _O=0.30\pm 0.06`$ for the ORS with the adopted $`H_0`$. We see that $`\sigma _v=200`$ km s<sup>-1</sup> gives $`\chi _\nu ^2=1`$ for the ORS comparison, and 0.9 for the IRAS comparison; the latter would have $`\chi _\nu ^2=1`$ for $`\sigma _v=180`$ km s<sup>-1</sup>, very similar to the value of $`\sigma _v`$ found in the SBF-II parametric modeling. The derived $`\beta `$’s are not sensitive to the adopted $`\sigma _v`$, but they are to $`H_0`$ because of the covariance seen in Figure 2. Had we adopted an $`H_0`$ 5% larger, the best-fit $`\beta `$’s would increase by $`\sim `$30% with still reasonable values of $`\chi ^2`$.
We also note that the $`\beta `$’s are insensitive to our treatment of the clusters. We experimented by throwing out all galaxies conceivably near the triple-valued zones of Virgo, Fornax, and Ursa Major, 40% of the sample. Remarkably, the results differ only negligibly, but $`\chi _\nu ^2`$ increases by about 0.15, and the values of $`\sigma _v`$ giving $`\chi _\nu ^2`$ of unity increase by 10%. The IRAS residual plot for the full comparison shown in Figure 5 demonstrates why. The Virgo and Fornax clusters near $`cz`$ of 1000 and 1400 km s<sup>-1</sup>, respectively, lie close to the zero-difference line with fairly small scatter because they have had their virial dispersions effectively removed, unlike most of the rest of the galaxies. However, for the reason illustrated in Figure 1, the clusters do not drive the fit.
Figure 6 shows how the peculiar velocity predictions and observations look on the sky. The predictions should resemble a noiseless, smoothed version of the observations. Clearly there is general agreement on the most prominent feature, the dipole motion seen in the Local Group frame. The observed large negative velocities near $`(l,b)\simeq (283^{\circ },+74^{\circ })`$ indicate rapidly infalling Virgo background galaxies, which, as noted in Figure 1, the fit cannot reproduce. This and other issues must be dealt with in future analyses.
## 4. Summary and Future Prospects
We have presented preliminary results from an initial comparison of the SBF Survey peculiar velocities with predictions based on the density field of the IRAS and ORS redshift surveys. The comparison simultaneously yields $`\beta `$ for the density field and a tie to the Hubble flow for SBF, i.e., $`H_0`$. The resulting $`H_0\simeq 74`$ is between the two other recent estimates with SBF. For the IRAS comparison we find $`\beta _I\simeq 0.44`$, consistent with other recent results populating the 0.4–0.6 range from “velocity-velocity” comparisons using Tully-Fisher or SNIa distances (e.g., Schlegel 1995; Davis et al. 1996; Willick et al. 1997; Riess et al. 1997; da Costa et al. 1998; Willick & Strauss 1998). Our value of $`\beta _O\simeq 0.30`$ is the same as that of Riess et al., and consistent with the expectation $`\beta _O/\beta _I\simeq 0.7`$ from Baker et al. (1998). The numbers change only slightly with different treatments of the galaxy clusters and higher resolution computations (Blakeslee et al. 2000). Our results thus reinforce the “factor-of-two discrepancy” with the high $`\beta `$’s obtained in “density-density” comparisons (e.g., Sigad et al. 1998). One explanation is a scale-dependent biasing (e.g., Dekel, this volume).
We plan in the near future to pursue comparisons using methods that can deal with multivalued redshift zones directly, such as VELMOD (Willick et al. 1997) and take advantage of SBF’s potential for probing small, nonlinear scales. Additionally, we continue to work towards an independent far-field tie to the Hubble flow by measuring SBF distances to SNIa galaxies, through calibration of the $`K`$-band fundamental plane Hubble diagram (Pahre et al., this volume), and by pushing out to $`cz\simeq 10,000`$ km s<sup>-1</sup> directly using SBF measurements from space. This will remove the substantial systematic uncertainty in $`\beta `$ due to the covariance with $`H_0`$. Finally, we also plan to use SBF data for a “density-density” measurement of $`\beta `$ and to explore the nature of the biasing.
### Acknowledgments.
JPB thanks the Sherman Fairchild Foundation for support. The SBF Survey was supported by NSF grant AST9401519.
## References
Baker, J. E., Davis, M., Strauss, M. A., Lahav, O., Santiago, B. X. 1998, ApJ, 508, 6
Blakeslee, J. P., Ajhar, E. A., & Tonry, J. L. 1999, in Post-Hipparcos Cosmic Candles, eds. A. Heck & F. Caputo (Boston: Kluwer Academic), 181
Blakeslee, J. P., Davis, M., Tonry, J. L., Dressler, A., & Ajhar, E. A. 2000, ApJ, in press
da Costa, L. N., Nusser, A., Freudling, W., Giovanelli, R., Haynes, M. P., Salzer, J. J., & Wegner, G. 1998, MNRAS, 299, 425
Davis, M., Nusser, A. & Willick, J. A. 1996, ApJ, 473, 22
Ferrarese, L., et al. ($`H_0`$ Key Project) 1999, ApJ, in press
Fisher, K. B., Huchra, J. P., Strauss, M. A., Davis, M., Yahil, A., & Schlegel, D. 1995, ApJS, 100, 69
Nusser, A. & Davis, M. 1994, ApJ, 421, L1
Peebles, P.J.E. 1993, Principles of Physical Cosmology (Princeton Univ. Press)
Riess, A. G., Davis, M., Baker, J., & Kirshner, R. P. 1997, ApJ, 488, L1
Santiago, B. X., Strauss, M. A., Lahav, O., Davis, M., Dressler, A., & Huchra, J. P. 1995, ApJ, 446, 457
Schlegel, D. J. 1995, Ph.D. Thesis, Univ. of California, Berkeley
Sigad, Y., Eldar, A., Dekel, A., Strauss, M. A., & Yahil, A. 1998, ApJ, 495, 516
Strauss, M. A., Huchra, J. P., Davis, M., Yahil, A., Fisher, K. B., & Tonry, J. 1992, ApJS, 83, 29
Tonry, J. L. 1991, ApJ, 373, L1.
Tonry, J. L., Ajhar, E. A., & Luppino, G. A. 1990, AJ, 100, 1416
Tonry, J. L., Blakeslee, J. P., Ajhar, E. A., & Dressler, A. 1997, ApJ, 475, 399 (SBF-I)
Tonry, J. L., Blakeslee, J. P., Ajhar, E. A., & Dressler, A. 1999, ApJ, in press (SBF-II)
Tonry, J. L. & Schneider, D. P. 1988, AJ, 96, 807
Willick, J. A., Strauss, M. A., Dekel, A., & Kolatt, T. 1997, ApJ, 486, 629
Willick, J. A. & Strauss, M. A. 1998, ApJ, 507, 64
# Correlations in optical phonon spectra of complex solids
## Abstract
Spectral correlations in the optical phonon spectrum of a solid with a complex unit cell are analysed using the Wigner-Dyson statistical approach. Despite the fact that all force constants are real, we find that the statistics are predominantly of the GUE type, depending on the location within the Brillouin zone of a crystal and on the unit cell symmetry. Analytic and numerical results for the crossover from GOE to GUE statistics are presented.
Wigner-Dyson statistical analysis has become a widespread approach to characterise quantum spectra of complex dynamical systems, such as nuclei, atoms and molecules, disordered quantum electron systems and electromagnetic resonances in chaotic cavities . Both experimental and theoretical studies of such systems point to the fact that the quantum energy levels universally obey the same mutual correlations and spectral rigidity as the eigenvalues of random Gaussian-distributed matrices. In the present paper, we apply such a statistical approach to characterise vibrational spectra of multi-component solids with a complex unit cell structure, in order to assess the correlation properties of their optical phonon spectra taken at various positions in the Brillouin zone (BZ).
In random matrix theory (RMT), different ensembles of matrices, which are defined by their fundamental symmetry, result in distinct level correlation properties. The two most common in condensed matter physics are the Gaussian ensemble of real symmetric matrices (GOE) which adequately describes the spectral correlations in quantum chaotic electron systems with time reversal symmetry, and the Gaussian ensemble of Hermitian matrices (GUE) which describes chaotic electron billiards with time-reversal symmetry broken by a magnetic field. One may expect a set of $`3N`$ coupled classical oscillators to be an example of a system with time-reversal symmetry, thus obeying the GOE spectral statistics. This is indeed the case with the acoustic spectroscopy of irregularly shaped solid resonators , or with the spectrum of a regularly shaped system consisting of coupled oscillators whose masses are random . Below we demonstrate the counter-intuitive result that, on the whole, the spectral correlations of the optical phonon modes associated with the same (albeit arbitrary) point of the BZ of a complex solid obey the GUE statistics. This complies with an earlier observation on the electronic structure of highly excited bands in solids. In particular, we report a study of the distribution function $`P(s)`$ of the nearest-level-spacing in vibrational spectra of a complex solid based on numerical simulations of crystalline structures, both with and without mirror reflection symmetry in the unit cell. We also analyse the dependence of statistical properties on the phonon wave number $`𝐐`$ within the BZ, and obtain a detailed description of the crossover between the limiting regimes of GOE-type correlations specific to the phonon frequencies exactly at the center of the BZ where $`𝐐=0`$, and of GUE-type for sufficiently large values of $`Q`$.
To simulate a complex solid, we have adopted the following model. The unit cell of a crystal was taken in the form of a parallelepiped consisting of $`N=8\times 10\times 12`$ atoms with equal pair interactions but randomly chosen masses arranged on an fcc lattice. The unit cell size $`L`$ determines the periodicity of the Bravais lattice of the entire solid, $`𝐋=n_1𝐋_1+n_2𝐋_2+n_3𝐋_3`$. The spectrum of optical phonons in it can be found by determining all $`3N`$ normal modes corresponding to the linearised set of equations for coupled harmonic oscillators, $`m_𝐣\ddot{u}_𝐣^\alpha =\sum _{\mathrm{𝐢𝐋}}K_{\mathrm{𝐢𝐣}}^{\alpha \beta }(𝐋)u_{𝐢+𝐋}^\beta `$, where $`𝐢,𝐣`$ are atomic positions within the unit cell and $`\alpha ,\beta `$ denote cartesian components. Fourier transforming we obtain for each given point in the BZ (wave number $`𝐐`$),
$$K_{\mathrm{𝐢𝐣}}^{\alpha \beta }(𝐐)=\underset{𝐋}{\sum }k(l_{𝐋+𝐢,𝐣}^\alpha l_{𝐋+𝐢,𝐣}^\beta e^{ı\mathrm{𝐐𝐋}}-4\delta _{𝐋+𝐢,𝐣}\delta ^{\alpha \beta }),$$
(1)
where $`k`$ is the interatomic force constant, $`l_{𝐋+𝐢,𝐣}^\alpha =(𝐣𝐢𝐋)^\alpha /\left|𝐣𝐢𝐋\right|`$, and $`(𝐣𝐢𝐋)`$ belongs to the first coordination sphere of an fcc lattice. Note that the latter implies non-zero matrix elements only for nearest neighbors and applies restrictions to the sum over $`𝐋`$ in Eq. (1). The sets of phonon frequencies $`\{\omega _k(𝐐)\}`$ (which result in a spaghetti of $`3N`$ dispersion curves, each corresponding to a particular phonon branch) can be obtained by solving numerically the eigenvalue equation
$$\mathrm{det}(D-\omega ^2I)=0,\qquad D(𝐐)=M^{-1/2}K(𝐐)M^{-1/2}.$$
(2)
This equation also defines the dynamical matrix, $`D(𝐐)`$, in which the complexity in the composition of a solid is introduced by the random diagonal matrix $`M_{\mathrm{𝐢𝐣}}^{\alpha \beta }=m_𝐢\delta ^{\alpha \beta }\delta _{\mathrm{𝐢𝐣}}`$. For the masses $`m_𝐣`$ we have used a box distribution over the interval of masses $`[m-\delta m,m+\delta m]`$ with $`\delta m/m=0.3`$. This corresponds to an r.m.s. of mass disorder $`\langle \delta m^2\rangle ^{1/2}/m\simeq 0.17`$. The mean value $`m`$ defines the cut-off frequency $`\omega _c=\sqrt{8k/m}`$ of the vibrational spectrum in the disorder-free limit and sets a characteristic scale to measure the eigenfrequencies, whereas lengths are measured in units of the lattice constant. As explained below, such a model also allows us to exploit an analogy between the numerical results obtained by us and the properties of spectra of systems exhibiting quantum diffusion. To avoid complications brought into the problem by the localisation effects related to the vibrations of light atoms in a heavy matrix, we restrict our statistical analysis to the frequency range $`\omega /\omega _c<0.95`$.
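The structure of Eqs. (1)–(2) and the role of the Bloch phase can be illustrated with a deliberately simplified one-dimensional analogue (Python/NumPy; a chain of random masses joined by identical springs stands in for the fcc cell, so the parameters are not those of the actual model). At $`q=0`$ and $`qL=\pi `$ the matrix below is real, while a generic $`q`$ makes it complex Hermitian:

```python
import numpy as np

def dynamical_matrix_1d(masses, k, qL):
    """1D analogue of eqs. (1)-(2): N masses joined by identical springs k,
    with the Bloch phase exp(i qL) attached to the bond that wraps around the cell."""
    N = len(masses)
    K = np.zeros((N, N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N
        phase = np.exp(1j * qL) if j == 0 else 1.0   # only the wrapped bond carries the phase
        K[i, i] -= k
        K[j, j] -= k
        K[i, j] += k * phase
        K[j, i] += k * np.conj(phase)
    m = np.asarray(masses, dtype=float)
    return K / np.sqrt(np.outer(m, m))               # D = M^{-1/2} K M^{-1/2}

rng = np.random.default_rng(1)
masses = rng.uniform(0.7, 1.3, 200)                  # box distribution, delta m / m = 0.3
for qL in (0.0, 0.3, np.pi):
    D = dynamical_matrix_1d(masses, k=1.0, qL=qL)
    omega = np.sqrt(np.clip(-np.linalg.eigvalsh(D), 0.0, None))  # m u'' = K u  =>  eigvals(D) = -omega^2
    print(f"qL = {qL:.2f}: max |Im D| = {np.abs(D.imag).max():.2f}, omega_max = {omega.max():.2f}")
```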
At the middle of the BZ ($`𝐐=0`$) and at other symmetry points, such as $`𝐐=(0,\pi /L_2,\pi /L_3)`$, the dynamical matrix $`D`$ is real and symmetric. For an arbitrary $`𝐐\ne \mathrm{𝟎}`$, the matrix elements of $`D(𝐐)`$ related to the sites on the edges of the unit cell (we use the nearest-neighbor interaction) acquire complex phase factors which make the whole dynamical matrix complex and Hermitian, $`D(𝐐)=D_S(𝐐)+ıD_A(𝐐)`$, where $`D_S(𝐐)`$, $`D_A(𝐐)`$ are real symmetric and antisymmetric matrices, respectively. Without imposing any spatial symmetries onto the unit cell, these two scenarios span the two limiting cases for the D-matrix symmetry; the difference between these two regimes manifests itself in the form of the normalized distribution function $`P(s)`$ of the nearest-level-spacing, $`s=(\omega _{k+1}-\omega _k)/\mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is the mean level spacing.
We have constructed $`P(s)`$ employing a two-step averaging procedure. The first step is to use an ensemble of 50 random realizations of the distribution of masses in the sample. A further averaging of $`P(s)`$ is applied over a broad frequency range, namely, $`0.45<\omega /\omega _c<0.95`$, for each of the calculated spectra, by using the observation that they become statistically indistinguishable after having been rescaled by $`\mathrm{\Delta }`$. Note that we do not analyze the lowest part of the spectrum because of poor statistics. The nearest-level-spacing distribution function for the optical phonon spectrum at $`𝐐=0`$ is shown in Fig. 1(a). It coincides with the random matrix theory prediction for real symmetric matrices given by the Wigner-Dyson distribution function for the GOE. In contrast, optical phonons with $`𝐐\ne 0`$ exhibit nearest-level-spacing statistics which are best fitted by the Wigner-Dyson distribution function for the GUE. A typical $`P(s)`$-histogram is plotted in Fig. 1(b), and is compared with the GUE analytical result. A similar observation has been made earlier by Mucciolo et al in relation to the electronic band structures in crystals. For completeness, the inset in Fig. 1(c) shows the $`P(s)`$-histogram for a corner of the BZ, which is of GOE-type. Fig. 1(c) also illustrates the form of $`P(s)`$ in the intermediate regime between two distinct statistical classes. It is compared with the result of a fit based on the RMT prediction for an interpolating ensemble between the GOE and GUE which contains a single fitting parameter.
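The spacing statistics themselves can be assembled along the following lines (Python/NumPy sketch; the rescaling by the mean spacing of a frequency window is a simplified stand-in for the averaging procedure described above, and the Wigner–Dyson surmises are quoted for comparison with Fig. 1):

```python
import numpy as np

def nearest_spacings(freqs, lo, hi):
    """Nearest-neighbour spacings s = (w_{k+1} - w_k)/Delta inside [lo, hi],
    rescaled by the mean spacing Delta of that window."""
    w = np.sort(np.asarray(freqs))
    w = w[(w >= lo) & (w <= hi)]
    d = np.diff(w)
    return d / d.mean()

def P_of_s(spectra, lo, hi, bins=30):
    """Histogram of spacings pooled over an ensemble of disorder realizations."""
    s = np.concatenate([nearest_spacings(f, lo, hi) for f in spectra])
    hist, edges = np.histogram(s, bins=bins, range=(0.0, 4.0), density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist

# Wigner-Dyson surmises for the two symmetry classes:
P_GOE = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)
P_GUE = lambda s: (32 / np.pi**2) * s**2 * np.exp(-4 * s**2 / np.pi)
```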
The inter-ensemble crossover takes place at relatively small values of $`𝐐`$ (the parameter responsible for the imaginary part of the dynamical matrix), as it happens with a similar crossover in the energy spectra of chaotic electronic systems in the presence of a weak magnetic field . Note also that for small $`Q`$’s, the function $`P(s)`$ has the following asymptotic behavior: it resembles the GUE distribution function at $`s\to 0`$, whereas it follows the GOE analytical result for large s. This is because the splitting of levels with $`s<1`$ is more sensitive to a small antisymmetric addition to a symmetric dynamical matrix than the splitting of rare pairs with $`s\gg 1`$. The above fact also implies that the crossover $`P(s)`$ has a form which cannot be simply reduced to a trivial mixing between two typical GOE and GUE distributions . The crossover studies were based upon the analysis of spectra of 600 random realizations of atomic masses in the unit cell for various values of $`𝐐`$ along $`𝐋_1`$. The result of numerical analysis of the GOE-GUE crossover in different intervals of vibrational spectra is shown at the bottom of Fig. 2 in the form of a gray scale shading of the parametric plane of $`q=𝐐\cdot 𝐋_1=QL_1`$ and frequency $`\omega `$, where a darker colour stresses higher similarity of $`P(s)`$ to that of the orthogonal symmetry class, and white indicates the dominance of the GUE spectral statistics.
Below we present semiclassical arguments which provide the crossover value of the rescaled $`Q`$, $`q=QL`$, as a function of the frequency of the mode considered. Our approach consists of viewing the dynamics as that of a wave-packet of lattice vibrations (in our case optical modes), spreading over a unit cell treated here as a region of disordered medium. This treatment is analogous to the semiclassical treatment of parametric spectral correlations developed in the studies of quantum disordered electron systems subjected to a weak magnetic flux . In the present analysis, it is assumed that the participation of each atom in a given optical phonon mode can be described semiclassically for time intervals shorter than $`t_H\sim 1/\mathrm{\Delta }`$, after which the discreteness of the spectrum for each value of $`𝐐`$ starts to dominate. The spread of vibrations over the unit cell and the role of the individual atoms in this dynamics is considered as a diffusion of waves through an fcc lattice with mass disorder and is determined by the interference pattern of a variety of equally probable diffusion paths . These paths are independent of the exact value of $`𝐐`$, whereas the phases of diffusive waves involved in such an analysis are large and random, so that one obtains correlated spectra .
However, the cyclic (albeit non-periodic) boundary conditions impose a torus geometry in the unit cell. If $`W`$ is the total number of windings around the torus then the type of correlations depends on the phase factor $`e^{i\delta \phi }`$ determined by the phase difference between a path with a positive W and its time-reversed counterpart with -W. Therefore, $`\delta \phi `$ gives us a measure of time-reversal symmetry breaking and is controlled by the ’external’ parameter $`q\ll 1`$. A relevant crossover parameter can then be found by estimating the r.m.s. value of the symmetry breaking phase $`\delta \phi `$, acquired by waves whose propagation is followed along a diffusive path with the maximal length allowed by the limit set by the time $`t_H\sim 1/\mathrm{\Delta }`$, where $`\mathrm{\Delta }\sim 1/L^3\nu `$ and $`\nu (\omega )`$ is the acoustic phonon density of states.
The parameter $`q`$ affects the phase of the partial amplitudes only for paths that cross the unit cell, i.e., paths whose ’winding’ number is non-zero. Hence, the total symmetry breaking phase is $`\delta \phi \sim qW`$, where $`W=\sum w_i`$ is a winding number made of approximately $`t_H/t_D`$ random contributions $`w_i=\pm 1`$. The time $`t_D\sim L^2/D`$ with $`D`$ the diffusion coefficient, is typical of a diffusive spread over the unit cell of a vibration with frequency $`\omega `$. For our model, $`D=\frac{1}{3}\langle s^2\rangle \tau `$ where $`\langle s^2\rangle (\omega )`$ is the angular and polarization average of the squared phonon group velocity in the clean limit and $`\tau ^{-1}`$ is the scattering rate, caused by the variation of atomic masses. The latter can be estimated as $`\tau ^{-1}(\omega )\sim \omega ^2\nu \left(\delta m^2/m^2\right)`$. It follows from our considerations that an estimation of the r.m.s. value of $`\delta \phi `$ yields $`\langle \delta \phi ^2\rangle ^{1/2}\sim q\sqrt{t_H/t_D}`$. The crossover can be assigned to $`\langle \delta \phi ^2\rangle ^{1/2}\sim 1`$. Therefore, the form of the crossover line is determined by
$$q_c\sim \sqrt{t_D/t_H}\sim \sqrt{\mathrm{\Delta }L^2/\langle s^2\rangle \tau }\propto \omega /\sqrt{\langle s^2\rangle (\omega )},$$
(3)
which is shown in the top of Fig. 2 and is in agreement with the numerically obtained gray scale plot.
The correlation properties of spectra of a solid are sensitive to the geometrical symmetries of its unit cell structure, such as the existence of a mirror plane in it. Below, we extend the numerical analysis to a solid with $`n\le 3`$ mirror symmetry planes in the unit cell, each characterised by unit vector $`\widehat{\eta }_i`$. Numerical simulations similar to the ones described above (with statistics collected from 100 random realizations of masses and averaging extended over the spectral window $`[0.45,0.95]`$) show that the effect of geometrical symmetries on the spectral statistics depends on the point in the BZ. This is because the phonon momentum allows for breaking both the orthogonal and point-group symmetries. Typical forms of the function $`P(s)`$ for each symmetry case are shown in Fig. 3, for various wave vectors $`𝐐`$. The solid lines in these figures illustrate the RMT result for overlapping sequences of GOE or GUE spectra with equal fractional densities . When $`𝐐=0`$, the spectra split into sequences of levels corresponding to different parities, so that $`P(s)`$ coincides with what one would expect for $`2^n`$ overlapping GOE’s (e.g., $`P(s\to 0)=1-2^{-n}`$).
For finite $`Q`$, spectral statistics is determined by the orientation of $`𝐐`$ with respect to the mirror symmetry planes, falling into one of the GOE or GUE classes. A summary of all distinct statistical limits is given in Table 1. Note that for $`𝐐`$ with all non-zero components perpendicular to a symmetry plane, correlations are of GOE-type. This effect is due to invariance of $`D(𝐐)`$ under the combination of reflection and complex conjugation operations, which results in a real representation of the dynamical matrix . Thus, for a unit cell with 3 mirror symmetry planes only the orthogonal ensemble statistics is realised for an arbitrary $`𝐐`$. The crossover parameter $`q_c`$, which determines which class from Table 1 should be expected, can be estimated using Eq. (3).
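The limiting statistics quoted above are easy to reproduce with generic random matrices, independently of the phonon model. The sketch below superposes $`m`$ independently unfolded GOE spectra and histograms the nearest-neighbour spacings of the merged sequence; the unfolding is deliberately crude (bulk levels rescaled to unit mean spacing), but it suffices to exhibit $`P(s\to 0)=1-1/m`$, i.e. $`1-2^{-n}`$ for $`m=2^n`$ overlapping sequences.

```python
import numpy as np

def unfolded_goe_levels(dim=400, seed=None):
    """Bulk eigenvalues of one GOE matrix, crudely unfolded to unit mean spacing."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((dim, dim))
    ev = np.sort(np.linalg.eigvalsh((a + a.T) / 2.0))
    bulk = ev[dim // 4: 3 * dim // 4]                 # avoid the spectrum edges
    sp = np.diff(bulk)
    return np.cumsum(sp / sp.mean())

def superposed_spacings(m, n_matrices=20):
    """Nearest-neighbour spacings of m overlapping, independent GOE sequences."""
    spacings = []
    for i in range(n_matrices):
        seqs = [unfolded_goe_levels(seed=100 * i + j) for j in range(m)]
        merged = np.sort(np.concatenate(seqs))
        sp = np.diff(merged)
        spacings.append(sp / sp.mean())
    return np.concatenate(spacings)

for m in (1, 2, 4):                                   # m = 2^n overlapping GOE's
    s = superposed_spacings(m)
    hist, _ = np.histogram(s, bins=40, range=(0.0, 4.0), density=True)
    print(m, hist[0])                                 # P(s -> 0) should approach 1 - 1/m
```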
The authors thank I. Lerner and J. Pendry for discussions. This work was supported by EPSRC, and by European Union RTN and TMR programmes. Y.G. acknowledges support by the Centre of Excellence of the Israeli Academy of Sciences and Humanities.
|
no-problem/9910/hep-ex9910059.html
|
ar5iv
|
text
|
## 1 Introduction
Recent LEP results on multiparticle production in hadronic jets enable further important tests of QCD. One of the most interesting and intriguing outcomes of the pQCD calculations is the prediction of so-called colour coherence effects .
A number of analyses presented here demonstrate that there is an agreement of the data with the QCD MLLA predictions and provide further support for the concept of local parton hadron duality (LPHD) .
There are also new results in the physics of large distances, where the processes not calculable within the pQCD provide us with abundant information needed for tuning hadronization model parameters.
## 2 Testing analytical predictions of Perturbative QCD
Inclusive charged hadron distributions measured by DELPHI at 189 GeV were presented as a function of the variables: rapidity, $`\xi _p=ln(1/x_p)`$, momentum and transverse momentum. The $`\xi _p`$ distribution demonstrates the so-called “hump-backed” behaviour predicted for partons in the framework of the Modified Leading Logarithmic Approximation (MLLA) . The simultaneous fit to the $`\xi _p`$ distribution with a Fong-Webber distorted Gaussian at different energies including the present measurement at 189 GeV shows very good agreement between the data and the prediction ($`\chi ^2/dof=99.6/97`$), giving support to the LPHD hypothesis.
MLLA also provides a definite prediction for the energy evolution of the maximum of the $`\xi `$ distribution, $`\xi ^{*}`$. As hadronization and resonance decays are expected to act similarly at different centre-of-mass energies, the energy evolution of $`\xi ^{*}`$ is expected to be less sensitive to nonperturbative effects. The $`\xi ^{*}`$ values entering the analysis were determined by fitting a distorted Gaussian with the parameters given by the Fong-Webber parametrisation. For the 189 GeV data one obtains $`\xi ^{*}=4.157\pm 0.030`$. A fit of the MLLA prediction again demonstrates good agreement while ruling out a phase space expectation and the DLA prediction.
The energy evolution of the momentum distribution is well described by the fragmentation model. An interesting observed feature is the approximate $`E_{CM}`$ independence of hadron production at very small momenta $`p<1`$ GeV. This has been explained in , to be due to the coherent emission of long wavelength gluons by the total colour current which is independent of the internal jet structure and is conserved under parton splittings. Therefore, low-energy gluon emission is expected to be almost independent of the number of hard gluons radiated and hence of the centre-of-mass energy. Provided the LPHD hypothesis is correct, the number of produced hadrons at small momenta is approximately constant.
A sample of 2.2 million hadronic $`Z`$ decays, selected from the data recorded by the Delphi detector at LEP during 1994-1995 was used for a precise measurement of inclusive distributions of $`\pi ^+`$, $`K^+`$ and $`p`$ and their anti-particles in gluon and quark jets . As observed for inclusive charged particles, the production spectra of the individual identified particles were found to be softer in gluon jets compared to the quark jets, with a higher multiplicity in gluon jets. A significant enhancement of protons in gluon jets is observed. The ratio of the average multiplicity in $`g`$ jets with respect to $`q`$ jets was found for all identified particles to be consistent with the ratio measured for all charged particles. The normalised ratio for protons in Y events was measured to be:
$$R_p=1.205\pm 0.041,$$
which differs significantly from unity.
The maxima, $`\xi ^{*}`$, of the $`\xi `$-distributions for kaons in gluon and quark jets are observed to be different.
A particularly nice illustration of the phenomenon of QCD coherence and a test of LPHD was obtained by DELPHI using symmetric 3-Jet Events . It is known that soft radiation is sensitive to the total colour flow in the underlying hard partonic structure. Let’s consider, for example, two extreme two-jet topologies of a $`q\overline{q}g`$ event. If the gluon is collinear to one of the quarks, the colour flow in the event will be identical to the $`q\overline{q}`$ case, whereas if the gluon exactly recoils with respect to the two quarks, the colour flow will correspond to that of a $`gg`$ event. In the latter case the soft radiation at large angle is expected to be increased by the colour factor ratio $`C_A/C_F`$ as compared to the $`q\overline{q}`$ case. The evolution between those extreme cases has been calculated as a function of the opening angles between the jets . Thus, the charged hadron multiplicity in a cone perpendicular to the event plane of symmetric three-jet events was determined as a function of an inter-jet angle for the data collected at the Z resonance. A clear dependence of the multiplicity on the opening angle was observed and appears to be in agreement with QCD predictions ,.
An interesting example of the intra-jet QCD coherence is the restriction of forward gluon emission for heavy quarks. A calculation in the framework of MLLA predicts the following angular distribution of gluon emission :
$$\frac{dn}{d\theta ^2}\propto \theta ^2/(\theta ^2+\theta _{min}^2)^2,\theta _{min}\simeq \frac{m_Q}{E}$$
where $`m_Q`$ and $`E`$ are the quark mass and energy, respectively. Provided LPHD holds, the effect should be seen, for example, in a comparison of the primary-particle angular distribution for $`b\overline{b}`$ and $`q\overline{q}`$ (where q denotes a u, d or s quark) Z decays. Delphi has presented preliminary results that show a difference in this behaviour. Hadrons containing the original quark or originating from the decay of such a particle are carefully excluded from consideration.
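To get a feeling for the size of this suppression cone, one can evaluate the formula above for b quarks at the Z; the quark mass and energy used below are round illustrative numbers, not the values used in the DELPHI analysis.

```python
import numpy as np

m_Q, E = 4.8, 45.6                 # GeV: illustrative b-quark mass and E ~ m_Z/2
theta_min = m_Q / E                # ~0.1 rad: gluon emission is suppressed below this angle
theta = np.linspace(1e-3, 1.0, 1000)
dn = theta**2 / (theta**2 + theta_min**2) ** 2    # MLLA shape quoted above (unnormalised)
print(theta_min, theta[np.argmax(dn)])            # the distribution peaks near theta_min
```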
The phenomenon of colour coherence is not only a subject for tests, but may also be used as a tool for reconstruction of the event colour structure on an event-by-event basis. The idea presented in is based on the fact that soft particles do not originate from a particular parton but rather their production depends on the whole colour topology of the event. Therefore, it is possible to define a way to estimate the colour connection strength between the partons by analysing the behaviour of the soft particles. The method can then be used for parton identification. The proposed algorithm is described below: first, fast particles are used to reconstruct cluster directions, which are then used to define a weight $`w_{ij}`$ quantifying how strongly a particle $`i`$ may be connected to a cluster $`j`$:
$$w_{ij}=\frac{C_i}{k_{ij}^2}$$
where $`k_{ij}^2=2E_i^2(1cos\mathrm{\Theta }_{ij})`$, with normalisation $`\mathrm{\Sigma }_jw_{ij}=1`$. Then each particle with $`w_{ij}<0.95`$ is assigned to the cluster pair $`kl`$ for which the sum $`w_{ikl}=w_{ik}+w_{il}`$ is maximal. Then parton pair connectedness is defined as
$$W_{kl}=C_{kl}\mathrm{\Sigma }_ig(E_i)(w_{ik}+w_{il})$$
where $`i`$ runs through all the particles assigned to the clusters $`kl`$.
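A schematic implementation of the assignment and connectedness steps just described is given below. The constants $`C_i`$ and $`C_{kl}`$, the weighting function $`g(E)`$ and the cluster directions are left as placeholders, since their exact definitions belong to the original reference; the sketch only reproduces the logic of the algorithm.

```python
import numpy as np

def colour_connectedness(soft_particles, cluster_dirs, C_i=1.0, C_kl=1.0,
                         g=lambda E: E, w_cut=0.95):
    """soft_particles: list of (E_i, unit direction); cluster_dirs: list of unit vectors
    reconstructed from the fast particles. Returns W[k, l] for every cluster pair."""
    n = len(cluster_dirs)
    W = np.zeros((n, n))
    for E_i, u_i in soft_particles:
        # w_ij ~ C_i / k_ij^2 with k_ij^2 = 2 E_i^2 (1 - cos theta_ij), normalised to sum 1
        k2 = np.array([2.0 * E_i**2 * (1.0 - np.dot(u_i, u_j)) for u_j in cluster_dirs])
        w = C_i / np.maximum(k2, 1e-12)
        w /= w.sum()
        if w.max() >= w_cut:
            continue                       # particle belongs unambiguously to one cluster
        k, l = sorted(np.argsort(w)[-2:])  # pair (k, l) maximising w_ik + w_il
        W[k, l] += C_kl * g(E_i) * (w[k] + w[l])
    return W
```

In the Mercedes-event test described next, the pair with the smallest $`W_{kl}`$ would then be interpreted as the colour-unconnected $`b\overline{b}`$ pair.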
The method was tested by the Delphi collaboration by using double b-tagged 3-jet Mercedes events collected at the Z pole. In this kind of event the gluon jet is known to be the one which is not b-tagged. On the other hand, the gluon should have two colour connections and thus the colour connectedness $`W_{kl}`$ of the $`b\overline{b}`$ pair must have the smallest value in any given event. Matching the two gives the purity of the method, which is found to be above $`60\%`$ and could be improved by requiring the smallest colour connection coefficient to be below a predetermined threshold value.
The method with various modifications could be used for identifying colour connections in numerous applications (pairing, background rejection).
One of the approaches to study a parton shower cascade is to employ the multiplicity moments technique. The oscillations in the ratio of the cumulant factorial to the factorial charged particle multiplicity moments in Z Decays is known to show a quasi-oscillatory behaviour when plotted versus the order of the moment, as was observed by the SLD collaboration some time ago . This peculiarity is also predicted by the NNLLA of perturbative QCD within the LPHD framework .
However, using the jet multiplicity distributions obtained from the Cambridge jet algorithm, in order to vary the dependence on the LPHD hypothesis, the L3 collaboration found that the oscillations appear only for non-perturbative energy scales, namely $`\sim 100`$ MeV. From this conclusion it follows that the observed oscillations are unrelated to the behaviour predicted by the NNLLA perturbative QCD calculations.
Another challenging way to study the cascade is to measure multiplicity fluctuations in rings around the jet axis and in off-axis cones. The DELPHI collaboration performed these measurements and compared them with analytical perturbative QCD calculations for the corresponding multiparton system, using the concept of LPHD. Some qualitative features were confirmed by the data but substantial quantitative deviations are observed.
## 3 Fragmentation physics
Our knowledge about the hadronization process has been significantly enriched by recent measurements at LEP.
Thus, results on the production of the $`\mathrm{\Lambda }(1520)`$ are presented, as obtained from hadronic Z decays recorded by DELPHI . The $`\mathrm{\Lambda }(1520)`$ scaled momentum ($`x_p`$) spectrum is determined. The relative importance of $`\mathrm{\Lambda }(1520)`$ production increases with $`x_p`$ similarly to that of orbitally excited mesons. It is shown that the $`\mathrm{\Lambda }(1520)`$ primarily originates from fragmentation and not from heavy particle ($`b,c`$) decays. The large $`\mathrm{\Lambda }(1520)`$ production rate $`N_{\mathrm{\Lambda }(1520)}/N_Z=0.030\pm 0.004\pm 0.005`$ suggests that many stable baryons descend from orbitally excited baryonic states.
The OPAL collaboration measured the helicity density matrix elements $`\rho _{00}`$ of $`\rho (770)^\pm `$ and $`\omega (782)`$ mesons produced in $`\mathrm{Z}^0`$ decays . Over the measured meson energy range, the values are compatible with 1/3, corresponding to a statistical mix of helicity $`-1`$, 0 and 1 states. For the highest accessible scaled energy range 0.3 $`<`$ $`x_E`$ $`<`$ 0.6, the measured $`\rho _{00}`$ values of the $`\rho ^\pm `$ and the $`\omega `$ are 0.373 $`\pm `$ 0.052 and 0.142 $`\pm `$ 0.114, respectively.
The ALEPH collaboration performed an extensive study of the production rates and the inclusive cross sections of the isovector meson $`\pi ^0`$, the isoscalar mesons $`\eta `$ and $`\eta ^{\prime }(958)`$, the strange meson $`\mathrm{K}_\mathrm{S}^0`$ and the $`\mathrm{\Lambda }`$ baryon. This was done as a function of scaled energy (momentum) in hadronic events, two-jet events and each jet of three-jet events from hadronic $`\mathrm{Z}`$ decays, and the results were compared to Monte Carlo models . The JETSET modelling of the gluon fragmentation into isoscalar mesons is found to be in agreement with the experimental results for the measured region. HERWIG fails to describe the $`\mathrm{K}_\mathrm{S}^0`$ spectra in gluon-enriched jets and the $`\mathrm{\Lambda }`$ spectra in quark jets.
An interesting idea which helped to understand the production rates of light-flavour hadrons was proposed by P.Chliapnikov : the difference between the production rates of hadrons composed of the same quarks and belonging to the different SU(3) multiplets but the same SU(6) multiplet is essentially determined by the hyperfine mass splitting. This trend shows up when the direct production rates are plotted versus the sum of the constituent quark masses $`\mathrm{\Sigma }_i(m_q)_i`$. In this case the vector-to-pseudoscalar and decuplet-to-octet suppressions are found to be the same. In the proposed scenario the strangeness suppression factor, $`\lambda =0.295\pm 0.006`$, is the same for mesons and baryons and related to the difference in the constituent quark masses, $`\lambda =e^{(m_s\widehat{m})/T}`$, where $`\widehat{m}=m_u=m_d`$, and the temperature $`T=142.4\pm 1.8`$ MeV/$`c^2`$.
|
no-problem/9910/quant-ph9910043.html
|
ar5iv
|
text
|
# High-Fidelity Teleportation of Independent Qubits
## 1 Introduction
Two of the most fundamental protocols of quantum communication are quantum teleportation and entanglement swapping , the teleportation of an entangled state. With the qubit being the elementary representative of information in the quantum domain, teleportation and entanglement swapping of qubits are essential contributions to any quantum communication toolbox. Thus far there have been two experiments performed on the teleportation of independent qubits. Another experiment demonstrated the quantum teleportation protocol not for an independent qubit but for a qubit that has to be prepared on a specific particle (entangled with another particle). And finally, a fourth experiment demonstrated the quantum teleportation for continuous variables. In the present paper we suggest ways to characterize the quality of a given teleportation scheme and we discuss specifically the experiments on teleportation of independent qubits from that perspective. We show explicitly that it is important to distinguish teleportation fidelity from teleportation efficiency. That way some criticism which has been raised in the literature turns out to be unjustified . In section 2 we will first briefly review the two experiments concerning teleportation of independent qubits before giving our criteria for experimental quantum teleportation in section 3. In sections 4 and 5 the two experiments will be analyzed in view of the given criteria. Conclusions are drawn in section 6.
## 2 Experimental Quantum Teleportation of Independent Qubits
In the quantum teleportation experiment presented in Ref. an incoming UV pump-pulse has two opportunities to create pairs of photons (Fig.1). The idea is that on the path from left to right the pulse creates an entangled pair. This is the ancillary entangled pair of the original proposal . One of the ancillaries is passed on to Alice and the other one to Bob. The latter one will obtain the teleported qubit encoded in its polarization. On the return path the pulse again creates a pair of photons where in the original experimental teleportation scheme the fact that the two are entangled was not utilized. In fact, one of these two photons was passed through an adjustable polarizer, thus defining the state (the initial qubit) to be teleported. This procedure breaks the entanglement for that pair. The second photon of that pair is sent to a trigger detector whose purpose was to reject all detector events where this second pair was not created. In the experiment the entangled photons, photons 2 and 3 in Fig.1, were produced in the anti-symmetric state
$$|\mathrm{\Psi }^{-}\rangle _{23}=\frac{1}{\sqrt{2}}\left(|\text{H}\rangle _2|\text{V}\rangle _3-|\text{V}\rangle _2|\text{H}\rangle _3\right),$$
(1)
where $`|\text{H}\rangle `$ and $`|\text{V}\rangle `$ represent the horizontally- and vertically-polarized photon state.
The idea of the experiment then is that Alice subjects the photon to be teleported and her ancillary photon to a (partial) Bell-state measurement using a beam-splitter. Observation of a coincidence at the Bell-state analyzer detectors f1 and f2 then informs Alice that her two photons were projected into the anti-symmetric state $`|\mathrm{\Psi }^{-}\rangle _{12}`$.
This then implies that Bob’s photon is projected by Alice’s Bell-state measurement onto the original state. This can be seen by assuming that it is the intention of the experiment to be able to teleport the general qubit,
$$|\mathrm{\Psi }\rangle _1=\alpha |\text{H}\rangle _1+\beta |\text{V}\rangle _1,$$
(2)
with $`\alpha `$ and $`\beta `$ complex amplitudes satisfying $`|\alpha |^2+|\beta |^2=1`$. Then the initial state of qubit plus ancillaries is given by the product state
$$|\mathrm{\Psi }\rangle _{123}=|\mathrm{\Psi }\rangle _1|\mathrm{\Psi }^{-}\rangle _{23}.$$
(3)
Projection of photon 1 and 2 onto the anti-symmetric state yields
$$\langle \mathrm{\Psi }^{-}|_{12}|\mathrm{\Psi }\rangle _{123}=\frac{1}{2}\left(\alpha |\text{H}\rangle _3+\beta |\text{V}\rangle _3\right).$$
(4)
This indicates that the polarization state determined by the complex amplitude $`\alpha `$ and $`\beta `$ has been transferred from photon 1 to photon 3. The amplitude factor of 1/2 indicates that only in one out of four cases the result of the Bell-state measurement is the anti-symmetric one .
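The algebra of Eqs. (2)-(4) can be checked directly with a few array operations. The sketch below is only a numerical illustration: the amplitudes are arbitrary, and the overall sign convention of the projection may differ from the one above.

```python
import numpy as np

psim = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2.0)   # |Psi^->_{jk} as a 2x2 array

alpha, beta = 0.6, 0.8j                   # any normalised qubit (illustrative values)
psi1 = np.array([alpha, beta])

# |Psi>_123 = |Psi>_1 (x) |Psi^->_23 ; indices (a, b, c) label photons 1, 2, 3
psi123 = np.einsum('a,bc->abc', psi1, psim)

# Alice's Bell-state measurement: project photons 1 and 2 onto <Psi^-|_12
out3 = np.einsum('ab,abc->c', psim.conj(), psi123)

print(out3)                               # equals (alpha, beta)/2 up to an overall sign
print(np.linalg.norm(out3) ** 2)          # 1/4: the antisymmetric outcome in 1 of 4 cases
```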
The experimental scheme then simply proceeded in defining various different polarization states using polarizers and wave plates and verifying by polarization measurement that Bob’s photon actually had the state adjusted by the polarizers and wave-plates it never saw, given that the coincidence between the detectors f1-f2 did indicate a ($`|\mathrm{\Psi }^{-}\rangle _{12}`$) Bell-state measurement. In order to demonstrate the generality of the scheme it is not enough to just demonstrate the teleportation of the base states $`|H\rangle `$ and $`|V\rangle `$, which readily succeeded in experiment, but also to demonstrate superpositions of these states. In the experiment it was decided to demonstrate teleportation both for two real-coefficient superpositions (linear polarization), and for one superposition with imaginary coefficients representing circular polarization.
The second experiment demonstrated the teleportation of an entangled state by verifying the protocol of entanglement swapping . Experimentally, the essential difference was that in that experiment (Fig.2) the entanglement of the pair created by the pulse upon its return passage was also fully utilized. Therefore, in that experiment there was no polarizer in the path of that photon of the second pair which was sent to Alice’s Bell-state analyzer thus not breaking the initial entanglement. This means that the state when two separate pairs were created in the way described reads
$$|\mathrm{\Psi }\rangle _{1234}=|\mathrm{\Psi }^{-}\rangle _{14}|\mathrm{\Psi }^{-}\rangle _{23},$$
(5)
which is a product state of two entangled pairs. Observation of a coincidence at the detectors f1-f2 again indicates that photon 1 and 2 have been projected into the anti-symmetric Bell-state, which now indicates that the final state is $`|\mathrm{\Psi }^{-}\rangle _{34}`$. This shows that now the outer two photons 3 and 4 have become entangled. This can be seen as teleportation either of the state of photon 2 over to photon 4 or the state of photon 1 over to photon 3. Those viewpoints are completely equivalent. The remarkable feature of that experiment is that the actually teleported state is a photon state which is not well defined. This is because, as is well known, the state of a particle which is maximally entangled with another one has to be described by a maximally mixed density matrix. Indeed, in that experiment neither of the two photons subject to the Bell-state measurement enjoyed a quantum state on its own. They were both maximally mixed. Therefore, what is teleported in such a situation is not the quantum state of the photon but just the way it relates to the other photon it has been entangled to initially.
In order to demonstrate that teleportation succeeds in that case, it is necessary to show that photons 3 and 4 are now entangled with each other. This can be done by showing that the polarizations of the two photons are always orthogonal irrespective of the detection basis chosen .
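The same bookkeeping verifies the entanglement-swapping step of Eq. (5): projecting photons 1 and 2 onto the antisymmetric state leaves photons 3 and 4 in $`|\mathrm{\Psi }^{-}\rangle _{34}`$. Again a minimal sketch, with the photons ordered 1, 2, 3, 4.

```python
import numpy as np

psim = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2.0)       # |Psi^->

# |Psi>_1234 = |Psi^->_14 (x) |Psi^->_23 ; index order (a, b, c, d) = photons (1, 2, 3, 4)
psi1234 = np.einsum('ad,bc->abcd', psim, psim)

# Bell-state measurement on photons 1 and 2
out34 = np.einsum('ab,abcd->cd', psim.conj(), psi1234)

print(out34 / np.linalg.norm(out34))      # proportional to |Psi^->_34: photons 3 and 4 entangled
print(np.linalg.norm(out34) ** 2)         # again 1/4 for this Bell-state outcome
```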
## 3 Criteria for Experimental Quantum Teleportation
We will now identify criteria and notions by which the quality of a certain teleportation procedure can be evaluated. This may also serve to a certain extent as a means for comparison of different teleportation procedures. However, it will turn out that it seems impossible to define just a single parameter which would serve to characterize all procedures.
Any quantum teleportation procedure can be characterized by how well it can answer the following questions:
* How well can it teleport any arbitrary quantum state it is intended to teleport? This is the fidelity of teleportation.
* How often does it succeed to teleport, when it is given an input state within the set of states it is designed to teleport? This is the efficiency of teleportation.
* If given a state the scheme is not intended to teleport, how well does it reject such a state? This is the cross-talk rejection efficiency.
But foremost, one has to define the set of states the teleportation procedure should be able to handle. It is of little use to talk about a specific procedure but use the wrong states to characterize its performance. The aim of the experiments presented in Refs. (Innsbruck experiments) has been to teleport with high fidelity a qubit, i.e. a two-dimensional quantum state, given by the polarization state of a single photon. Experiments performed at Caltech addressed the transfer of an infinite dimensional quantum state represented by the continuous quadrature amplitude components of an electro-magnetic field . We want to emphasize, that if one talks about one or the other type of experiments one should use the appropriate states for describing it.
In the following two sections we evaluate the above criteria in detail for the two teleportation schemes realized in Innsbruck, particularly in view of the criticism initially voiced by Braunstein and Kimble . It is explicitly not our intention to criticize the Caltech experiment though it will be obvious from our analysis that the claim voiced by Kimble a number of times that the Caltech experiment is the first bona fide verification of quantum teleportation is unjustified.
## 4 Teleportation of Single Qubits
Let us now analyze the first Innsbruck teleportation experiment of independent qubits . Since it is the intention of the experiment to be able to teleport the general qubit (Eq. 2) encoded in the polarization state of a single photon, it is required (a) that the scheme is able to teleport any superposition of this form with high fidelity and (b) that the scheme does not teleport anything which is not of this form.
What happens, if the system does not output a single photon carrying the desired qubit? This situation can be treated on the same footing as some absorption process along a communication channel. As it is well known from other applications of single-photon quantum communication, like quantum cryptography or quantum dense coding, this will influence the efficiency of the communication system but does not influence the coherence properties of the remaining photons. This comes from the fact that the possibility of absorption of the single photon, the ”carrier” of the qubit, does not alter the qubit itself. After renormalisation of a two-dimensional state, the original state, the qubit, is obtained again without any influence on the teleportation fidelity. The situation changes drastically if one considers the Caltech teleportation experiment of the quadrature amplitude components of an electro-magnetic field. In that case, an absorption of light-quanta changes the amplitudes of the various Fock-states and therefore unavoidably changes the quantum state that is transmitted. Consequently, absorption necessarily decreases the fidelity of the teleportation procedure for continuous variables but not for single qubits.
As explained in section 2 an incoming UV pump pulse has two opportunities to create pairs of photons (Fig. 1). This can happen either on the path from left to right or on the return path. The cases where only one pair is produced can be rejected since only the situations are accepted in which the trigger detector p fires together with both Bell-state analyzer detectors f1 and f2. Also, any cases where more than two pairs are created can safely be ignored because in the experiment the total probability of creating one pair per pulse in the modes actually detected is of the order of $`10^{-4}`$, which gives a detection rate of three pairs from a single pump pulse of much less than one per day at the experimental parameters.
What then does a three-fold coincidence p-f1-f2 tell us? There are two possibilities. One is that we actually had a case of teleportation of the initial qubit encoded in photon 1. In the experiment this was demonstrated for the 5 polarizer settings H, V, +45, -45 and R (circular). These settings represent non-orthogonal qubits and altogether cover very different directions on the Poincare sphere, which provides a proof that the scheme works for an arbitrary superposition. H and V proved the working of the scheme for the natural basis states defined by the properties of the experimental setup. The +45 and -45 linear polarization states proved the proper operation for coherent superpositions with real probability amplitudes and the R state for imaginary amplitudes. This is sufficient to demonstrate that the scheme will work for any superposition, in contrast to the suggestion by Vaidman that more settings are required for a full proof. The fact that the transfer of a quantum state worked for non-orthogonal states is a direct indication that entanglement is at the heart of the experiments.
The second case when a p-f1-f2 coincidence can occur is when both photon pairs are created by the pulse on its return trip. Thus, in that case no teleported photon arrives at Bob’s station and teleportation did not happen. Yet Alice recorded a coincidence count at her Bell state detector. It has been argued by Braunstein and Kimble that this possibility reduces the fidelity of our teleportation scheme. Yet, as we will show now, it actually is an advantage of our scheme that teleportation did not occur in that case. Indeed, the state behind the polarizer in that case contains two identically prepared photons. Therefore, since according to our protocol, we only wish to teleport qubits encoded in single-photon states, it is an advantage of our scheme that teleportation does not occur. Thus our scheme has a high intrinsic cross-talk rejection efficiency for these cases.
It might be argued that a spurious coincidence trigger at Alice’s Bell state analyzer reduces the usefulness of such a teleportation scheme. Yet, all that happens is that Bob in such a case does not receive a teleported photon even as the message he receives from Alice might indicate that. There is no problem with that since he was not supposed to have obtained a teleported photon in that case anyway, as the state given to Alice does not fall within the class of states, namely single-photon qubits, the scheme is intended to work for. That Alice falsely thinks that teleportation worked in that case does not do any harm.
Another problem already discussed in the original publication is the fact that only one of the four Bell states was identified. This simply means that the procedure works in 25% of the situations. Only when the state into which the two photons at the Bell-state analyzer were projected happened to be the anti-symmetric one was it identified by a coincidence behind the beam splitter. In the other 75% of the cases teleportation was not performed. Which of the Bell-states is actually observed is independent of the qubit given to Alice! All this means is simply that the efficiency of the scheme was significantly reduced without any influence on the fidelity of the qubit quantum teleportation.
Losses occur anyway in any realistic scheme and it is always necessary in any protocol to provide for these cases by means of some communication between the various participants. In that spirit we emphatically stress that a reduction of the efficiency of the procedure, of the fraction of cases where it actually finished, does not reduce at all the fidelity which describes how well the actually teleported qubit agrees with the original one.
Clearly, even if a teleportation procedure is inefficient in the sense of rarely teleporting the given qubit, the fidelity of those qubits which are teleported can be very high. This is very different from a scheme which finishes the teleportation procedure very frequently but with low fidelity. We will see below that the experiment on entanglement swapping provides a clear case where the distinction between efficiency and fidelity is obvious.
As evidenced by the final verification of the teleported qubits, that is by a polarization measurement, the measured qubit teleportation fidelity was rather high in the experiment . As can be seen from Fig. 3, it typically was of the order of 0.80. This very clearly surpasses the limit of 2/3 indicated by the dotted line which at best could have been obtained by Alice performing a polarization measurement on the given photon, informing Bob about the measurement result via classical communication, and by Bob accordingly preparing a photon at his output.
In conclusion, neither the fact that sometimes through false coincidences Alice might think that teleportation occurred nor the fact that only one Bell state could be identified is relevant for the teleportation fidelity.
## 5 Quantum Teleportation of Entanglement
Our statement that the rather low efficiencies of the first Innsbruck teleportation experiments by no means influence the fidelity is even more obvious in the second experiment where, in a realisation of entanglement swapping, it was possible to teleport a qubit which is still entangled to another one. Figure 2 indicates the entanglement swapping procedure and Fig. 4 is a schematic drawing of the experimental setup. The main difference to the first experiment simply was that photon 1, whose polarization properties had to be teleported, was not prepared in a well-defined state prior to teleportation, but rather in a measurement on its twin, photon 4, at a time after the Bell state analyzer had registered a coincidence. This, undoubtedly, realises teleportation in a clear quantum situation, since entanglement between two particles that did not share a common origin nor interacted with one another in the past is the very result of the teleportation procedure.
As in the first experiment, here too one has to deal with the case that Alice might have false coincidence counts at her Bell state analyzer together with a count at detector $`\mathrm{D}_4`$ for photon 4 (see Fig. 4). This again simply indicates that two pairs have been emitted to the left with no photon going to Bob. As above, since it is intended to teleport only single-photon qubits, it is an advantage that teleportation did not occur in this case.
In the entanglement teleportation experiment a linear polarizer in front of the detector of photon 4 is set at various angles ($`\mathrm{\Theta }`$). As a consequence, whenever teleportation succeeds, the photon received by Bob should be orthogonal to the detected polarization state of photon 4 (since both pairs 1 and 4, and 2 and 3 are prepared in the anti-symmetric state $`\mathrm{\Psi }^{-}`$, and since this state is also monitored by Alice, photons 3 and 4 will be entangled in exactly this state, too (see section 1 and Ref. )).
This can be verified by performing a polarization measurement on photon 3 carrying the teleported polarization properties of photon 1. In the experiment it was decided to register the coincidences between the two polarization measurements on photons 3 and 4 as a function of the relative angle between the two polarizations (the polarization of photon 3 is measured in the +45/-45 basis using a $`\lambda /2`$ rotation plate and a polarizing beamsplitter, while the polarization of photon 4 is measured after passing a variable polarizer at angle $`\mathrm{\Theta }`$). This is equivalent to a measurement of two-qubit correlations in a Bell inequality experiment ().
Again, since Alice identified one Bell state only, the coincidence rate is reduced. Yet, clearly, the observed coincidence counts show correlations well above the classical maximum of 50% visibility and will violate a Bell-type inequality as soon as the coincidence fringe visibility surpasses the critical threshold value of 71%. This visibility was actually surpassed in an individual run of the experiment where alignment and stability parameters appear to have been very favorable. This, and the regular visibility of (65 $`\pm `$ 2)% (Fig. 5., corresponding to a fidelity of 0.82 $`\pm `$ 0.01) indicate that it will be possible to actually demonstrate a violation of Bell’s inequality in the near future.
In the case of entanglement teleportation it is really obvious that it is wrong to use a Fock-state description and to include the vacuum state for those cases where teleportation did not occur in the definition of the teleportation fidelity, as has been suggested by Braunstein and Kimble. To underline their claim, they suggested that, instead of following the teleportation protocol as described above, Bob could simply use randomly polarized photons to obtain the same (or even better) teleportation fidelity. Yet, clearly, if Bob were to follow that procedure, it would never be possible to observe non-classical correlations and to achieve a violation of a Bell-type inequality. Indeed, the observed coincidence count rates ($`\mathrm{D}_3^{-}\mathrm{D}_4`$ and $`\mathrm{D}_3^{+}\mathrm{D}_4`$) would not even show the sinusoidal variation as a function of $`\mathrm{\Theta }`$ exhibited in Fig. 5.
## 6 Concluding Remarks
In this contribution we demonstrated explicitly the high fidelity, of the order of 0.8, achieved in the teleportation experiments first performed in Innsbruck. The measurement of the fidelity of the teleportation is based on a four-fold coincidence detection technique. The detection of Bob’s photon (photon 3 in Fig. 1) plays the double role of projecting out onto the single-photon input state and of measuring the overlap of the single-photon input state with the teleported single-photon state. The role of projecting onto a single-photon input state can be omitted if other means of preparing a single photon input state had been used. This is however a technical, though difficult, issue that has nothing to do with the actual quantum teleportation procedure and therefore the teleportation fidelity will be exactly the same in such situations.
Even if, more or less for technical reasons, the efficiency of the experiments discussed above was very low, the data shown cannot be obtained with any classical communication procedure. Moreover, they clearly demonstrate the capability of this teleportation procedure being implemented as a quantum channel for other quantum communication schemes, e.g., for quantum cryptography, and that the bona fide receiver can be quite sure about the fidelity of the teleported qubit.
The fact that the discussion about the Innsbruck experiments has not abated yet gives us the impression that our initial reply () to the criticism voiced by Braunstein and Kimble () might have been too succinct and condensed. We hope that our present paper will help clarify the essential points such that the debate can be set to rest.
## acknowledgment
This work was supported by the Austrian Science Foundation FWF Project Nos S6502 and F1506, the Austrian Academy of Sciences and TMR program of the European Union (Network contract No. ERBFMRXCT96-0087).
|
no-problem/9910/astro-ph9910411.html
|
ar5iv
|
text
|
# The nearby M-dwarf system Gliese~866 revisited Based on observations collected at the German-Spanish Astronomical Center on Calar Alto, Spain, and at the European Southern Observatory, La Silla, Chile
## 1 Introduction
It is now a well established fact that there exists a large number of substellar objects (see e. g. the recent review article by Oppenheimer et al. Oppenheimer00 (2000)). This increases the need for a better understanding of stellar properties at the lower end of the main sequence. Nearby low-mass stellar systems are particularly important for this purpose, because their orbital motion allows dynamical mass determinations.
Here we will consider the nearby triple system Gliese~866 (Other designations: LHS 68, WDS 22385-1519). Using speckle interferometry, Leinert et al. (Leinert86 (1986)) and also McCarthy et al. (McCarthy87 (1987)) discovered a companion (henceforth Gliese~866 B) located about 0$`\stackrel{\prime \prime }{.}`$4 away from the main component Gliese~866 A. Follow-up observations showed that the whole orbit of this binary system could be covered by speckle-interferometric observations within a few years. Based on 16 data points, Leinert et al. (Leinert90 (1990), hereafter L90) presented a first determination of orbital parameters and masses. They derived the combined mass of the system to be $`0.38\pm 0.03M_{\odot }`$. This was inconsistent with values of $`M_\mathrm{A}\approx 0.14M_{\odot }`$ and $`M_\mathrm{B}\approx 0.11M_{\odot }`$ obtained from empirical mass-luminosity relations and stellar interior models (see L90 and references therein).
In the meantime an additional spectroscopic companion (hereafter called Gliese~866 a) to Gliese~866 A has been detected (Delfosse et al. Delfosse99 (1999)). This is a plausible reason for the mentioned mass excess in Gliese~866, but – given the fact that only $`0.38M_{\odot }`$ now had to be distributed among three stars – raises the question of a substellar component in the Gliese~866 system.
To improve on the mass determination of the components of Gliese~866 we have taken 20 more speckle-interferometric observations and one additional HST determination of relative position. These show the orbit of the wide pair: B with respect to Aa. We present an overview of the observations and the data reduction process in Sect. 2. The results are given in Sect. 3, discussed in Sect. 4 and are summarized in Sect. 5.
## 2 Observations and data reduction
A list of the new observations and their results is given in Table 1. The observations numbered 1, 3, 5 and 7 used one-dimensional speckle-interferometry. Details of observational techniques and data reduction for this method are described in Leinert & Haas (Leinert89 (1989)).
All other speckle observations were done using two-dimensional infrared array cameras. Sequences of typically 1000 images with exposure times of $`0.1\mathrm{sec}`$ were taken for Gliese~866 and a nearby reference star. After background subtraction, flatfielding and badpixel correction these data cubes are Fourier-transformed.
We determine the modulus of the complex visibility (i. e. the Fourier transform of the object brightness distribution) from power spectrum analysis. The phase is recursively reconstructed using two different methods: the Knox-Thompson algorithm (Knox & Thompson Knox74 (1974)) and bispectrum analysis (Lohmann et al. Lohmann83 (1983)). For a binary, modulus and phase show characteristic strip patterns. As an example we show them in Fig. 1 for the observation taken on 27 September 1996 on Calar Alto. By fitting a binary model to the complex visibility we derive the binary parameters: position angle, projected separation and flux ratio.
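For a binary, the power spectrum is a cosine fringe pattern whose period encodes the separation and whose contrast encodes the flux ratio, which is what the fit exploits. The sketch below only writes down this model; the calibration by the reference star and the phase reconstruction are omitted, and names such as `power_obs` and `pixel_scale` are placeholders rather than data from this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def binary_power(freqs, sx, sy, r):
    """Power spectrum of a binary with flux ratio r and separation (sx, sy) in pixels.
    freqs: (N, 2) array of spatial frequencies in cycles/pixel."""
    phase = 2.0 * np.pi * (freqs[:, 0] * sx + freqs[:, 1] * sy)
    return (1.0 + r**2 + 2.0 * r * np.cos(phase)) / (1.0 + r) ** 2

# power_obs: object power spectrum divided by that of the reference star (placeholder name)
# popt, _ = curve_fit(binary_power, freqs, power_obs, p0=(3.0, 3.0, 0.6))
# sx, sy, flux_ratio = popt
# separation = np.hypot(sx, sy) * pixel_scale        # projected separation
# position_angle = np.degrees(np.arctan2(sx, sy))    # |V|^2 alone leaves a 180 deg ambiguity,
#                                                    # resolved by the reconstructed phase
```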
To obtain a highly precise relative astrometry which is crucial for orbit determination one has to provide a good calibration of pixel scale and detector orientation. For the speckle observations since 9 July 1995 this calibration has been done using astrometric fits to images of the Trapezium cluster, where precise astrometry has been given by McCaughrean & Stauffer (McCaughrean94 (1994)). During the previous observing runs binary stars with well known orbits were observed for calibrating pixel scale and detector orientation. By doing subsequent observations of these systems and calibrating them with the Trapezium cluster we have put all speckle observations since July 1993 in a consistent system of pixel scale and detector orientation.
## 3 Results
After combining the relative astrometry from L90 and our new observations, there are now 37 independent data points for the orbital motion of the visual pair in Gliese~866. They are plotted in Fig. 2, together with the result of an orbital fit that used the method of Thiele and van den Bos, including iterative differential corrections (Heintz Heintz78 (1978)).
The orbital elements resulting from a fit to the full data set are given in Table 2. Since we don’t know which node is ascending and which one is descending, we choose – as is usually done – $`\mathrm{\Omega }`$ to be between $`0\mathrm{°}`$ and $`180\mathrm{°}`$. $`\omega `$ is the angle between the adopted $`\mathrm{\Omega }`$ and the periastron (positive in the direction of motion), and $`i>90\mathrm{°}`$ means clockwise motion.
Gliese~866 has a trigonometric parallax $`\pi =289.5\pm 4.4\mathrm{mas}`$ (van Altena et al. vanAltena95 (1995)). In the following calculations we use the external error of the parallax: $`6.8\mathrm{mas}`$ instead of the (internal) value given by van Altena et al. This yields a distance of $`3.45\pm 0.08\mathrm{pc}`$, a semi-major axis of $`1.19\mathrm{AU}`$ and finally – using Kepler’s third law – a system mass $`M_{\mathrm{Sys}}=0.336\pm 0.026M_{\odot }`$. This result remains unchanged within the uncertainties if the calculation is done with natural subsets of the data (see Table 3). In particular, there is no significant difference if only the 2D data points with good astrometric calibration (see Sect. 2) are used. Because it covers the longest time span, we take the result for the full data set as best values for orbit and system mass.
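The chain from parallax to system mass, and the dominance of the parallax term in the error budget, can be traced with the numbers quoted above; the period printed below is simply Kepler’s third law inverted from the quoted semi-major axis and mass, not an independently fitted value from Table 2.

```python
import numpy as np

plx, plx_err = 289.5e-3, 6.8e-3        # parallax and its external error [arcsec]
a_au         = 1.19                    # semi-major axis [AU]
M_sys, M_err = 0.336, 0.026            # dynamical system mass [M_sun]

d_pc = 1.0 / plx                       # 3.45 pc
P_yr = np.sqrt(a_au**3 / M_sys)        # ~2.2 yr: the orbit is indeed covered within a few years

# M ~ (alpha/plx)^3 / P^2, so the parallax alone contributes 3*sigma_plx/plx to sigma_M/M
frac_from_plx = 3.0 * plx_err / plx    # ~7.0 %
print(d_pc, P_yr, frac_from_plx, M_err / M_sys)   # compare with the quoted ~7.5 %
```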
## 4 Discussion
Most of the uncertainty in system mass is from the parallax error. Further improvements in determining this parameter will improve the accuracy of system mass considerably beyond the present $`\pm 7.5\%`$.
We also want to get a first estimate of the components’ masses in order to judge the probability for a substellar object in the system. This cannot be done empirically from our data, because there are no published radial velocities for Gliese~866. Instead we use the mass-luminosity relation given by Henry & McCarthy (Henry93 (1993)) as
$$\mathrm{log}\frac{M}{M_{\odot }}\pm 0.067=-0.1668M_\mathrm{K}+0.5395$$
(1)
for absolute K magnitudes $`9.81M_\mathrm{K}<7.70`$.
This approach is only valid if there is no large shift between the spectral energy distributions of the components. To check this we consider the observations taken at other wavelengths. At $`917\mathrm{nm}`$ the flux ratio is comparable to that in the K-band and also to the flux ratios in other NIR filters given by L90 (Table 3 therein). This supports the conclusion of L90 that both (visual) components nearly have the same spectral type and effective temperature. At $`845\mathrm{nm}`$ L90 have measured a higher flux ratio of $`I_\mathrm{B}/I_{\mathrm{Aa}}=0.83\pm 0.1`$, but this wavelength lies at the edge of a strong TiO absorption feature (L90, Fig. 8 therein), so an extrapolation of flux ratios into the visible is not straightforward. Furthermore Henry et al. (Henry99 (1999)) have observed Gliese~866 with the F583W filter of the HST. The resultant flux ratio in V is $`I_\mathrm{B}/I_{\mathrm{Aa}}=0.69\pm 0.06`$ and thus again close to the values in the near infrared. The fact that the flux ratio $`I_\mathrm{B}/I_{\mathrm{Aa}}`$ is nearly constant over a large range of wavelengths indicates that all three components of the Gliese~866 system have similar effective temperatures and thus similar masses. This idea is further supported by the combined spectrum of Gliese~866 (L90, Fig. 8 therein) that shows the deep molecular absorptions of an M5.5 star.
The apparent magnitude of the system is $`K=(5.56\pm 0.02)\mathrm{mag}`$ (Leggett Leggett92 (1992) and references therein). Combined with the distance given above the absolute system magnitude is $`(7.87\pm 0.04)\mathrm{mag}`$. The K-band flux ratios given in Table 1 result in a mean value of $`I_\mathrm{B}/I_{\mathrm{Aa}}=0.57\pm 0.01`$. We take the components of the spectroscopic pair Gliese~866 Aa to be equally bright. Because we have only given qualitative arguments for this assumption, we use an error reflecting this uncertainty: $`I_\mathrm{a}/I_\mathrm{A}=1.0\pm 0.5`$. This yields the components’ absolute K magnitudes:
$`M_\mathrm{K}(A)`$ $`=`$ $`M_\mathrm{K}(a)=(9.11\pm 0.32)\mathrm{mag}`$
$`M_\mathrm{K}(B)`$ $`=`$ $`(8.98\pm 0.05)\mathrm{mag}`$ (2)
The resulting masses from Eq. 1 then are:
$`M_\mathrm{A}`$ $`=`$ $`M_\mathrm{a}=(0.105\pm 0.021)M_{\odot }`$
$`M_\mathrm{B}`$ $`=`$ $`(0.110\pm 0.018)M_{\odot }.`$ (3)
The given uncertainties originate from the error of the K magnitudes (Eq. 2) and the error of the mass-luminosity relation itself (Eq. 1). The sum of these masses is $`M_{\mathrm{Sys}}=0.320\pm 0.035M_{\odot }`$ and is thus within the uncertainties consistent with the dynamical system mass $`M_{\mathrm{Sys},\mathrm{dyn}}=0.336\pm 0.026M_{\odot }`$ derived above.
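The bookkeeping from the system magnitude to the component masses is short enough to reproduce directly; the equal brightness of A and a is the stated assumption, and Eq. (1) is used exactly as quoted.

```python
import numpy as np

def mass_from_MK(M_K):
    """Eq. (1): K-band mass-luminosity relation of Henry & McCarthy (1993)."""
    return 10.0 ** (-0.1668 * M_K + 0.5395)

M_K_sys     = 7.87          # absolute K magnitude of the whole system
f_B_over_Aa = 0.57          # measured K-band flux ratio B/Aa
f_a_over_A  = 1.0           # assumption: the spectroscopic pair is equally bright

M_K_Aa = M_K_sys + 2.5 * np.log10(1.0 + f_B_over_Aa)
M_K_A  = M_K_Aa + 2.5 * np.log10(1.0 + f_a_over_A)                      # ~9.11
M_K_B  = M_K_sys + 2.5 * np.log10((1.0 + f_B_over_Aa) / f_B_over_Aa)    # ~8.98

m_A, m_B = mass_from_MK(M_K_A), mass_from_MK(M_K_B)
print(M_K_A, M_K_B, m_A, m_B, 2 * m_A + m_B)   # ~0.105, ~0.110 and ~0.32 M_sun in total
```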
## 5 Summary
Based on a new determination of the visual orbit we have derived an improved value of the system mass in Gliese~866. With simplifying assumptions we have given estimates for the components’ masses using the mass-luminosity relation by Henry & McCarthy (Henry93 (1993)). The sum of the components’ masses estimated in this way is consistent with the dynamically obtained system mass. We conclude that there is no substellar object in the triple system Gliese~866 despite the fact that the total system mass is only $`0.34M_{\odot }`$.
###### Acknowledgements.
We thank Rainer Köhler very much for providing his software package ”speckle” for the reduction of 2D speckle-interferometric data. Mark McCaughrean has contributed the procedure to calibrate pixel scale and detector orientation with the Trapezium cluster which largely improved the relative astrometry. William Hartkopf has provided orbits and ephemerides for several visual binaries that we also used for determination of pixel scale and detector orientation. We are grateful to Patrice Bouchet for carrying out the observation at 16 August 1995 and to Jean-Luc Beuzit for the observation at 25 August 1997.
|
no-problem/9910/cond-mat9910213.html
|
ar5iv
|
text
|
## 1 Stock prices and interest rates
In this paper we show that in the time interval between crash and recovery there is a clear relationship between price variations and the dispersion of interest rates for bonds of different grades (see below), i.e. what is usually called the interest rate spread. Before explaining this relationship in more detail let us emphasize that it was observed empirically from the mid-nineteenth century to the latest major crash in 1987. This is in strong contrast with so many “regularities” which are dependent upon specific business circumstances. Such is for instance the case of the interest rate itself. Because of the close connection between stock and bond markets one would expect a strong link between stock prices and interest rates. This is not the case however; there seems to be no permanent relationship between these variables; see in this respect the conclusions of and \[18, p.241\]. It is true that sometimes a slight decrease in interest rates, by changing the “mood” of the market, suffices to send prices upward. Thus, in the fall of 1998 three successive quarter point decreases of the federal-fund rate (that is to say a global -0.75%) stopped the fall of the prices and brought about a rally. In other circumstances, however, even a huge drop in interest rates is unable to stop the fall of stock prices; an example is provided by the period from January 1930 to May 1931 when the interest rate fell from 6% to 2% without any effect on the level of stock prices; similarly in the aftermath of the 1990 crash of the Japanese stock market interest rates went down to almost zero percent without bringing about any recovery. One should not be surprised by the changing relationship between stock price levels and interest rates. Something similar can be observed in meteorology: sometimes a small fall in temperature is sufficient to produce rain, while in other circumstances a huge fall in temperature will not give any rain. In this case we know that the phenomenon has something to do with the hygrometric degree of the air; in the case of the stock market we do not really know which one of the many other variables plays the crucial role. In the light of such changing patterns the fact that the relationship between stock prices and the spread variable appears to be so robust and so stable in the course of time is worthy of attention.
## 2 Interest rate spread and uncertainty
It is a common saying that “markets dislike and fear uncertainty”. In a strong bull market there is little uncertainty; for everybody the word of the day is “full steam ahead”. The situation is completely different after a crash. There is uncertainty about the duration of the bear market; some would think that it will be short while others expect a long crisis. In 1990 when the bubble burst on the Tokyo stock market only few people would probably have expected the crisis to last for almost ten years. There is also uncertainty about which sectors will be the first to emerge from the turbulence: banks or investment funds, property funds or technology industry, etc. As we know the interest rate represents the price a company pays to buy money for the future. The more uncertain the future, the riskier the investment, the higher the interest rate. We will indeed see that during recessions interest rates often (but not always) show an upward trend. In addition, and this is probably even more important, the increased uncertainty produces greater disparity in the rate of different loans. This uncertainty has different sources (i)those who expect a short crisis will be tempted to lend at lower rates than those who fear a protracted recession (ii) the fact that there is no longer any “leading force” in the economy obscures expectations; therefore it becomes more difficult to make a reliable risk assessment for low-quality borrowers (representing the so-called low-grade bonds). In short, the interest rate spread gives us a means to probe the mood, expectations and forecasts of managers, a means which is probably more reliable than the standard confidence indexes obtained from surveys (in this respect see the last section). Although in many econophysical models of the stock market interest rates do not play a role per se, the fact that uncertainty is greater in the downward phase of the speculative cycle than in the upward phase could be built into the models by adjusting the randomness of the stochastic variables used in Monte Carlo simulations. In contrast, interest rates usually play a determinant role in econometric models. A particularly attractive model of that kind is the Levy-Levy-Solomon model; it describes the stock and bond markets as communicating vessels and how traders switch from one to the other. The book by Oliveira et al. \[12, chapter 4\] details the assumptions of the model, and, through simulations, explains how it works and to which results it leads.
## 3 The data
Monthly stock price data going back into the 19th century can be found fairly easily; possible sources are . Measuring interest rate spreads is a more difficult matter. To begin with it is not obvious which estimates should be used. The primary source about bond rates is ; furthermore a procedure for constructing the spread measure was proposed in . As a matter of fact Mishkin’s stimulating paper provided the main incentive for the writing of the present paper. Mishkin proposed to represent the spread by the difference between the one-fourth of the bonds of the lowest grade (i.e. high rates) and the one-fourth of the bonds of the best grade (i.e. low rates). It turns out that even for the mid-nineteenth century Macaulay’s data provided at least three bonds in each of these classes which is fairly sufficient to give acceptable accuracy; for the more recent period 1888-1935 there are as many as 10 bonds in each “quartile”. For post-World War II crashes, Macaulay’s series can be prolonged by the data in . More detailed comments about how these two measures compare can be found in
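In practice the spread measure reduces to a difference of quartile means, roughly as sketched below; this captures only the spirit of the construction attributed to Mishkin, not a reimplementation of his procedure, and the yields shown are made-up illustrative numbers rather than Macaulay’s data.

```python
import numpy as np

def interest_rate_spread(yields):
    """Mean yield of the quartile of lowest-grade (highest-yield) bonds minus the
    mean yield of the quartile of best-grade (lowest-yield) bonds, for one month."""
    y = np.sort(np.asarray(yields, dtype=float))
    q = max(1, len(y) // 4)
    return y[-q:].mean() - y[:q].mean()

# illustrative yields in percent per year (not Macaulay's data)
print(interest_rate_spread([4.8, 5.0, 5.1, 5.3, 5.6, 6.0, 6.4, 7.2, 7.9, 8.5, 9.3, 10.1]))
```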
## 4 Results
### 4.1 Connection between share prices and interest rate spread between crash and recovery
Fig.1a and 1b show the evolution of stock prices (thick solid line), interest rate spread (thick dashed line), and interest rate (thin dashed line) for 8 major crashes. The left-hand vertical scale is the same for all graphs except 1929: this allows a visual comparison of the crashes’ severity. The right-hand vertical scales, although not identical (which was not possible due to different orders of magnitude), are nevertheless comparable in the sense that the overall range $`y_{\text{max}}/y_{\text{min}}`$ is the same (except again for 1929); this allows a visual comparison of the increase of the spread. The horizontal scales represent the number of months after the crash; these scales are the same for all graphs (with the exception of 1929); this allows a comparison of the time elapsed between crash and recovery. It can be seen that the decline in stock prices is mirrored in a similar increase in interest rate spread. As a matter of fact the chronological coincidence between the troughs of the stock prices and the peaks of the spread variable is astonishing. Even for the 1929-1932 episode for which there is a 30-month span between crash and recovery the peak for the spread variable coincides almost to the month with the end of the price fall. The connection between both variables is confirmed by the correlation coefficients (left-hand correlations in Fig.1): they are all negative and comprised between $`-0.64`$ and $`-0.94`$; note that the smallest correlation ($`-0.64`$) corresponds to a relatively small crash with a fall in stock prices of less than 20%. For 19th century episodes the interest rate changes are more or less in the same direction as those of the spread variable; however the correlations with stock prices (right-hand correlations in Fig.1) are substantially lower. For 20th century episodes the picture changes completely: the interest rate no longer moves in the same direction as the spread variable; consequently these correlations become completely random in contrast to the correlations between stock prices and spread variable which remain close to $`-1`$. In the interpretative framework that we developed above we come up with the following picture. After a crash, uncertainty, doubts and apprehension begin to spread throughout the market; usually (leaving 1929 apart for the moment) the fall lasts about 10 months; during that time, uncertainty continues to increase. Then, suddenly, within one month, the trend shifts in the opposite direction: prices begin to increase and uncertainty to subside. One may wonder how the spread variable behaved in the bull phases. First of all one should note that not all the crashes that we examined were preceded by a wild bull market; so we concentrate here on three typical bull markets that occurred in 1904-1907, 1921-1929 and 1985-1987. During these periods the spread variable remained almost unchanged. Similarly, during the period 1950-1967, which was marked by a considerable increase in stock prices (without however being followed by a major crash), the spread variable remained at a fairly constant level of 1.5%. In contrast, during the period 1968-1979, which was marked by a downward trend in stock prices, the spread variable was substantially larger, in the range 2.5%-3.8%. A simple look at the charts in Fig.1 confirms what we already know, namely that the crisis of 1929-1932 was quite exceptional. This is of course obvious in economic terms (unemployment, drop in industrial production, etc.); it is also true from a purely financial perspective.
Stock prices plummeted from a level 100 to less than 20, and the spread variable increased from 2.5% to almost 8%, a three-fold increase. For other episodes (see table 1) the corresponding ratios are all below 1.85. As an illustration of the intensity of the financial crisis one can mention the fact that November and December 1929 saw the failure of $`608`$ banks; the crisis continued in subsequent months to the extent that in March 1933 one third of all American banks had disappeared ().
Table 1 Stock price changes versus increase in interest rate spread
$$\begin{array}{ccc}\text{Year}& \text{Stock price}& \text{Interest rate spread}\\ \text{of crash}& \text{fall}& \text{increase}\\ & A_{\text{price}}& A_{\text{spread}}\\ & & \\ 1857& 1.63& 1.46\\ 1873& 1.24& 1.32\\ 1890& 1.23& 1.09\\ 1893& 1.34& 1.38\\ 1906& 1.46& 1.82\\ 1929& 6.12& 3.05\\ 1937& 1.89& 1.82\\ 1987& 1.40& 1.25\end{array}$$
where:
$$A_{\text{price}}=\text{peak price / minimum price},A_{\text{spread}}=\text{maximum spread / initial spread}$$
If we leave 1929 apart the fall/increase ratios of the two variables are almost of the same magnitude; a linear fit gives:
$$A_{\text{price}}=\alpha A_{\text{spread}}+\beta $$
with: $`\alpha =0.63\pm 0.50,\beta =0.54\pm 0.13`$; the correlation is equal to $`r=0.74`$ (the confidence interval for $`r`$ at probability 0.95 is 0.0 to 0.96). If we include 1929 in the sample the coefficients of the linear fit change completely and become: $`\alpha =2.53\pm 0.67,\beta =-2.11\pm 0.39`$, with a correlation equal to $`0.96`$ (confidence interval at probability 0.95: 0.74 to 0.99). Needless to say, the last fit has to be viewed with caution since it depends so strongly on the figures of the 1929 crash.
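The two fits can be reproduced directly from the entries of Table 1. The short sketch below (Python, with numpy/scipy assumed available) performs the least-squares fit with and without the 1929 point; the confidence interval on the correlation via Fisher's z-transform is only our illustration of how such an interval can be obtained, not necessarily the procedure used for the numbers quoted above.

```python
import numpy as np
from scipy import stats

# Entries of Table 1: year -> (A_price = peak/minimum price, A_spread = max/initial spread)
data = {
    1857: (1.63, 1.46), 1873: (1.24, 1.32), 1890: (1.23, 1.09),
    1893: (1.34, 1.38), 1906: (1.46, 1.82), 1929: (6.12, 3.05),
    1937: (1.89, 1.82), 1987: (1.40, 1.25),
}

def fit(years):
    """Least-squares fit A_price = alpha * A_spread + beta over the given crashes."""
    a_price = np.array([data[y][0] for y in years])
    a_spread = np.array([data[y][1] for y in years])
    res = stats.linregress(a_spread, a_price)
    # 95% confidence interval on r via Fisher's z-transform (illustrative only)
    z = np.arctanh(res.rvalue)
    dz = 1.96 / np.sqrt(len(years) - 3)
    return res.slope, res.intercept, res.rvalue, (np.tanh(z - dz), np.tanh(z + dz))

print(fit([y for y in data if y != 1929]))  # expected: alpha ~ 0.63, beta ~ 0.54, r ~ 0.74
print(fit(list(data)))                      # expected: alpha ~ 2.53, beta ~ -2.11, r ~ 0.96
```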
### 4.2 Connection between interest spread and market’s uncertainty
In section 3 we interpreted the spread variable as characterizing the uncertainty and lack of confidence existing in the market at a given moment. This interpretation was based on plausible arguments, but one would be on firmer ground if it could be supported by some statistical evidence. In this paragraph we provide at least partial evidence in that respect by comparing the changes of the spread variable to the consumers’ lack of confidence as measured by standard surveys. This is shown in Fig. 2, which represents the spread variable along with the lack of confidence index in the United States in the period before and after the 1987 crash. Changes in the two variables are fairly parallel, although the spread variable appears to be much more sensitive and displays larger fluctuations. In the two months before the crash of 19 October 1987 both the uncertainty (measured by the spread variable) and the lack of confidence (estimated through consumer surveys) increased by about 20%; after the crash both variables increased rapidly; but the after-effects of the crash were short-lived and uncertainty decreased after the beginning of 1988. If consumer confidence data could be found for the period prior to World War II it would of course be interesting to perform a similar comparison for other crashes.
## 5 Perspectives for an extension to other speculative markets
Relationships which have a validity extending over one century are not frequent either in economics or in finance. Yet, if the above observation remains isolated it will be hardly more than a technical feature of interest to stock market professionals. It is tempting to posit that an increase in uncertainty can play a similar role in other speculative markets. Stock markets are certainly special in so far as they are pure speculative markets; in contrast to property or commodities, stocks do not have any other use for their buyer than to earn dividends. Nevertheless the stock market seems to be in close connection with the property market; historically stock market crashes have often been preceded by a collapse of property prices; see in this respect \[6, p.65\] and \[14, p.76\]. One problem with the property market is its long relaxation time. For that reason we consider here another case, namely the market for gold, silver and diamonds. As is well known, starting in 1977 huge speculative bubbles developed in these items, which collapsed simultaneously in January 1980. Let us concentrate on the diamond market, since the gold market has already been closely investigated, particularly by A. Johansen and D. Sornette. In Fig.3 we represent the price of diamonds along with the consumer lack of confidence index that we already used above. Two observations can be made: (i) There is a huge increase in the lack of confidence index between 1978 and the spring of 1980, that is to say during the period when the bubble developed. This shows that it would be vain to explore the diamond market (or the silver/gold markets) in order to find specific causes for the collapse; it was most certainly triggered by exogenous, psycho-sociological factors. (ii) In the phase between collapse and recovery (March 1980-March 1986), in contrast to what we observed with stock prices, there is no connection whatsoever between diamond price changes and the fluctuations of the lack of confidence index. Perhaps the story would be different if one could use a confidence index specially pertaining to the diamond market.
References
(1) BERNANKE (B.S.) 1983: Non-monetary effects of the financial crisis in the propagation of the Great Depression. American Economic Review 73,257.
(2) BOUCHAUD (J.-P.), POTTERS (M.) 1997: Théorie des risques financiers. Alea-Saclay, Eyrolles. Paris.
(3) CALDARELLI (G.), MARSILI (M.), ZHANG (Y.-C.) 1997: A prototype model of stock exchange. Europhysics Letters 40,479.
(4) FARREL (M.L.) 1972: The Dow Jones averages 1885-1970. Dow Jones. Princeton.
(5) FEIGENBAUM (J.A.), FREUND (P.G.O.) 1996: Discrete scaling in stock markets before crashes. International Journal of Modern Physics B 10,3737.
(6) HARRISON (F.) 1983: The power in the land. An inquiry into unemployment, the profits crisis and land speculation. Shepheard-Walwyn. London.
(7) LUX (T.), MARCHESI (M.) 1999: Scaling and criticality in a stochastic multi-agent model of a financial market. Nature 397, 498.
(8) MACAULAY (F.R.) 1938: The movements of interest rates, bond yields and stock prices in the United States since 1856. National Bureau of Economic Research. New York.
(9) MANTEGNA (R.N.), STANLEY (H.E.) 1997: Stock market dynamics and turbulence: parallel analysis of fluctuation phenomena. Physica A 239,255.
(10) MANTEGNA (R.N.), STANLEY (H.E.) 1999: Scaling approach in finance. Cambridge University Press. Cambridge (in press).
(11) MISHKIN (F.S.) 1991: Asymmetric information and financial crises: a historical perspective. in: Hubbard (R.G.): Financial markets and financial crises. National Bureau of Economic Research. University of Chicago Press. Chicago.
(12) OLIVEIRA (S.M. de), OLIVEIRA (P.M.C. de), STAUFFER (D.) 1999: Evolution, money, wars and computers. Teubner. Stuttgart. See especially chapter 4 about stock market models.
(13) OWENS (R.N.), HARDY (C.O.) 1929: Interest rates and stock speculation. A study of the influence of the money market on the stock market. George Allen and Unwin. London.
(14) ROEHNER (B.M.) 1999: Spatial analysis of real estate price bubbles: Paris, 1984-1993. Regional Science and Urban Economics 29,73.
(15) SORNETTE (D.), JOHANSEN (A.) 1997: Large financial crashes. Physica A 245,411.
(16) STAUFFER (D.), OLIVEIRA (P.M.C.), BERNARDES (A.T.) 1999: Monte Carlo simulation of volatility correlation in microscopic market model. International Journal for Theoretical and Applied Finance 2,83.
(17) WILSON (J.), SYLLA (R.), JONES (C.P.) 1990: Financial market volatility. Panics under the national banking system before 1914. Volatility in the long-run 1830-1988. in White (E.N.) Crashes and panics: the lessons from history. Dow Jones-Irwin. Homewood.
(18) WYCKOFF (P.) 1972: Wall Street and the stock markets. A chronology (1644-1971). Chillon Book Company. Philadelphia.
Figure captions
Fig.1a Stock prices versus interest rate spread: 19th century crashes. Thick solid line: stock price index on the NYSE normalized to 100 at its peak value (left-hand vertical scale); thick dashed line: interest rate spread (right-hand vertical scale). The thin dashed line represents the interest rate for high grade commercial paper; it serves as a control variable in order to determine whether it is the spread or the interest rate which is the pivotal variable. For the purpose of facilitating comparison the left-hand vertical scale is the same for all graphs: this allows a visual comparison of the crashes’ severity. The right-hand vertical scales although not identical are nevertheless comparable in the sense that their overall ranges $`y_{\text{max}}/y_{\text{min}}`$ are the same. The horizontal scales represent the number of months after the crash; these scales are the same for all graphs. The numbers under the title are the correlations price/spread and price/interest rate respectively. Sources: see text.
Fig.1b Stock prices versus interest rate spread: 20th century crashes. The caption is the same as for Fig.1a; note however that for the 1929 chart the scales for the stock prices (right-hand vertical scale), for the spread (left-hand vertical scale) and for time (horizontal scale) are not the same as for the other charts. This clearly shows the exceptional magnitude of the crash of 1929. Sources: see text.
Fig.2 Comparison between the spread variable and the consumer lack of confidence index before the crash of October 1987. Changes in the spread variable (solid line) and in the lack of confidence index (broken line) are fairly parallel but the first variable is much more sensitive. The lack of confidence index is the inverse of the standard confidence index obtained from surveys. Sources: Mishkin (1991), Gems and Gemology 24,140 (Fall 1998).
Fig.3 Comparison of the price of diamonds before the collapse of January 1980 with the evolution of the lack of confidence index. In the months before the market collapse the lack of confidence increased rapidly. However after the crash the lack of confidence index does not show the same pattern that we observed in Fig.1. The outcome would perhaps be different if we could use a confidence index focused on the diamond market. Sources: Gems and Gemology 24 (Fall 1998).
|
no-problem/9910/astro-ph9910559.html
|
ar5iv
|
text
|
# Redshifts and age of stellar systems of distant radio galaxies from multicolour photometry data
## 1 Introduction
The labour intensity of obtaining statistically significant high-quality data on distant and faint galaxies and radio galaxies forces one to look for simple indirect procedures in the determination of redshifts and other characteristics of these objects. With regard to radio galaxies, even photometric estimates turned out to be helpful and have so far been used (McCarthy, 1993; Benn et al., 1989).
In the late 1980s and early 1990s it was shown that the colour characteristics of galaxies can also yield estimates of the redshifts and ages of the stellar systems of the host galaxies. Numerous evolutionary models appeared with which observational data were compared, yielding results that differ strongly from one another (Arimoto and Yoshii, 1987; Chambers and Charlot, 1990; Lilly, 1987, 1990).
Over the last few years two models have been extensively used: PEGASE (Projet d’Etude des Galaxies par Synthèse Evolutive; Fioc and Rocca-Volmerange, 1997) and Poggianti (1997), in which an attempt has been made to eliminate the shortcomings of the previous versions.
In the “Big Trio” experiment (Parijskij et al., 1996) we also attempted to apply these techniques to distant objects of the RC catalogue with ultrasteep spectra (USS). Colour data for nearly the whole basic sample of USS FR II (Fanaroff and Riley, 1974) RC objects have been obtained with the 6 m telescope of SAO RAS. In the present paper we investigate the applicability of new models to the population of all distant ($`z>1`$) radio galaxies with known redshifts. The results of this investigation will be used for the RC objects of the “Big Trio” project.
## 2 Data
To test the potential of the method for determining the redshifts and ages of the stellar population in the host galaxies from photometry data, we have selected about 40 distant radio galaxies with known redshifts, for which stellar magnitudes in more than 3 bands are available in the literature (Parijskij et al., 1997). The data on these objects are presented in Table 1, whose columns list the commonly used names of the sources, IAU names, spectroscopic redshifts ($`z_{sp}`$), apparent stellar magnitudes in the filters from U to K, the radio morphology of the objects (P — point source, D — double, T — triple, Ext — extended), and notes. The bracketed values and the values representing lower limits were disregarded in the calculations. Magnitudes in the R column marked with the symbol “r” (r-filter) were decreased by 0.35 in the further calculations and used as R magnitudes; magnitudes in the I column marked with the symbol “i” were decreased by 0.75 and treated as I magnitudes.
The lines describing the objects 3C 65 (B022036+394717), 3C 68.2 (B023124+312110), 3C 184 (B073359+703001) contain the data in which the authors have already taken into account the absorption.
The asterisks in the notes mark the classical FR II-type objects.
It should be noted that the photometry data presented in Table 1 are rather inhomogeneous, obtained using different tools with different apertures and by different observers.
The procedure of estimating the redshifts and ages of the stellar population for each source consisted in:
1. Obtaining the age of the stellar population of the host galaxies from photometry data, PEGASE and Poggianti models with a fixed known redshift.
2. Searching for an optimal model of an object and simultaneous searching for the redshift and age of the stellar population.
3. Comparing the derived values.
## 3 Description of models of energy distribution in the spectra of the host galaxies
The new model PEGASE (Fioc and Rocca-Volmerange, 1997) for the Hubble sequence galaxies, both with star formation and evolved, was used as the basic SED (Spectral Energy Distribution) model. The distinguishing feature of this model is the extension to the near IR (NIR) of the atlas of synthetic spectra of Rocca-Volmerange and Guiderdoni (1988), with a revised stellar library which includes parameters of cool stars. The NIR is connected coherently with the visible and ultraviolet ranges, so the model is continuous and spans a range from 220 Å to 5 microns. The precise algorithm of the model, to quote the authors, allows one to reveal rapid evolutionary phases such as red supergiants or the AGB in the NIR.
We used from this model a wide collection of SED curves from the range of ages between $`7\times 10^6`$ and $`19\times 10^9`$ years for massive elliptical galaxies.
A second model, taken from Poggianti (1997), is based on computations that include the emission of the stellar component after Barbaro and Olivi (1991). It synthesizes the SED for galaxies in the spectral range 1000–10000 Å and includes the computed phases of stellar evolution for the AGB and post-AGB along with the main sequence and the helium burning phase. The model allows for the chemical evolution in the galaxy and therefore for the contribution of stellar populations of different metallicities to the integral spectrum. Using the stellar model atmospheres of Kurucz (1992), Poggianti has managed to compute the spectrum up to 25000 Å. Kurucz’s models for stars with $`\mathrm{T}_{\mathrm{eff}}>5500\mathrm{K}`$ have been used in the IR range, while for lower effective temperatures the library of observed stellar spectra (Lancon and Rocca-Volmerange, 1992) has been employed.
From the second model we have used the SED curves computed for elliptical galaxies, for the ages (2.2, 3.4, 4.3, 5.9, 7.4, 8.7, 10.6, 13.2, 15)$`\times 10^9`$ years.
## 4 Procedure
### 4.1 Allowance for the absorption
In order to take account of the absorption, we have applied the maps (as FITS-files) from the paper “Maps of Dust IR Emission for Use in Estimation of Reddening and CMBR Foregrounds” (Schlegel et al., 1998). The conversion of stellar magnitudes to flux densities has been performed by the formula (e.g. von Hoerner, 1974):
$$S(Jy)=10^{C-0.4m}.$$
The values of the constant $`C`$ for the different bands are given in Table 2, which also lists the following characteristics: filter name, wavelength, and the coefficient A/E(B–V) for converting the dust emission distribution into the absorption in a given band, assuming the extinction curve with $`R_V=3.1`$.
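For concreteness, the conversion and the absorption correction can be written as in the small sketch below. The zero points C and the A/E(B−V) coefficients given here are placeholders only; the actual values are those of Table 2 (which is not reproduced in this text) and of Schlegel et al. (1998).

```python
# Minimal sketch of the magnitude-to-flux conversion S(Jy) = 10**(C - 0.4*m),
# with the Galactic absorption removed first.
# The numerical constants below are hypothetical placeholders, not the Table 2 values.

ZERO_POINT_C = {"B": 3.61, "V": 3.56, "R": 3.49, "I": 3.38, "K": 2.81}   # placeholders
A_OVER_EBV   = {"B": 4.1,  "V": 3.1,  "R": 2.6,  "I": 1.9,  "K": 0.37}   # placeholders

def observed_to_flux(mag, band, ebv):
    """Correct an observed magnitude for absorption and convert to flux density in Jy."""
    m_corrected = mag - A_OVER_EBV[band] * ebv        # remove Galactic absorption
    return 10.0 ** (ZERO_POINT_C[band] - 0.4 * m_corrected)

# Example: R = 22.3 mag towards a line of sight with E(B-V) = 0.05
print(observed_to_flux(22.3, "R", 0.05))
```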
The coordinates of sources for the epoch 1950.0, galactic coordinates, and also the absorptions adopted in the further computations are tabulated in Table 3.
### 4.2 Fitting
The estimation of ages and redshifts was performed by selecting the optimum location on the SED curves of the measured photometric points obtained from observations of the radio galaxies in different filters. We used the already computed table SED curves for different ages. The algorithm for selecting the optimum location of the points on a curve consisted briefly (for details see Verkhodanov, 1996) in the following: by shifting the points along and across the SED curve, a location was sought at which the sum of the squares of the discrepancies was a minimum. By moving over wavelength and flux density along the SED curve we estimated the displacements of the points from the location of the given filter, and the best fitted positions were then used to compute the redshift. From the whole collection of curves, we selected those on which the sum of the squares of the discrepancies turned out to be minimal for the given observations of radio galaxies.
Thus we estimated both the age of the galaxy and the redshift within the framework of the given models (see also Verkhodanov et al., 1998a,b). When assessing the robustness of the fitting, the presence of points at infrared wavelengths (up to the K band) is essential, since the fitting then includes the jump before the infrared region of the SED and we can thus locate our data stably (with a well-defined maximum of the likelihood curve). When removing the available points (to check the robustness) and leaving only 3 points (one of which is in the K band), we obtain in the fitting the same result on the curve of discrepancies as for 4 or 5 points. If the infrared range is not used, the result proves to be more uncertain.
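A schematic version of this grid search over age and redshift (not the actual code of Verkhodanov 1996) might look as follows; the template SEDs, filter wavelengths and "observed" fluxes below are entirely artificial placeholders, and the free overall normalization is fitted analytically at each trial, which is one simple way to realize the shifting of points along and across the SED curve described above.

```python
import numpy as np

def chi2_for_template(lam_filters, flux_obs, lam_tpl, flux_tpl, z):
    """Chi-square of observed fluxes against one template SED redshifted to z.

    The template is shifted in wavelength by (1+z), interpolated at the filter
    wavelengths, and the overall normalization is fitted analytically."""
    tpl_at_filters = np.interp(lam_filters, lam_tpl * (1.0 + z), flux_tpl)
    norm = np.sum(flux_obs * tpl_at_filters) / np.sum(tpl_at_filters ** 2)
    return np.sum((flux_obs - norm * tpl_at_filters) ** 2)

def fit_age_and_redshift(lam_filters, flux_obs, templates, z_grid):
    """Grid search over template age and redshift; returns (min chi2, age, z)."""
    best = (np.inf, None, None)
    for age, (lam_tpl, flux_tpl) in templates.items():
        for z in z_grid:
            chi2 = chi2_for_template(lam_filters, flux_obs, lam_tpl, flux_tpl, z)
            if chi2 < best[0]:
                best = (chi2, age, z)
    return best

# --- purely illustrative inputs (not the actual PEGASE/Poggianti curves) ---
lam = np.linspace(1000.0, 25000.0, 500)                       # template wavelengths, Angstrom
templates = {1.5e9: (lam, np.exp(-lam / 8000.0)),             # fake "young" SED
             10.0e9: (lam, np.exp(-lam / 15000.0))}           # fake "old" SED
filters = np.array([4400.0, 5500.0, 7000.0, 9000.0, 22000.0]) # B, V, R, I, K effective wavelengths
obs = np.interp(filters, lam * 2.5, np.exp(-lam / 15000.0))   # fake galaxy at z = 1.5
print(fit_age_and_redshift(filters, obs, templates, np.linspace(0.0, 4.0, 81)))
```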
The computation results with the fixed redshift value are given in Table 4, which lists 1) the name of the object, 2) the spectroscopic redshift, $`z_{sp}`$, 3) the age estimated from Poggianti’s models with $`z_{sp}`$, 4) the r.m.s. deviation, $`\sigma _d`$, of the photometric points (Jy) from the optimum age SED curve in Poggianti’s model, 5) the age determined from the PEGASE library models with $`z_{sp}`$, 6) the r.m.s. deviation, $`\sigma _d`$, of the photometric points (Jy) from the optimum age SED curve in the PEGASE model. Note that in the cases where we fail to find a consistent solution, the parameters being determined are omitted in Tables 4 and 5.
Figures 9–54 (at the end of the paper) represent the optimum (with a minimum of the squares of the deviations) SED curves with the given spectroscopic $`z_{sp}`$ for the sources under investigation and the curves of the dependence of r. m. s. deviations on age for the given object for both models. The pictures are drawn in pairs “SED–$`\sigma _d`$(age)” for the models of Poggianti and PEGASE, respectively. The figures from the SED models of Poggianti and PEGASE are denoted by (a) and (b), respectively.
The result of computations of the redshifts and the age of the stellar population of the host galaxy from the models of PEGASE and Poggianti are tabulated in Table 5 which presents 1) the object name, 2) the spectroscopic redshift, $`z_{sp}`$, 3) the age estimated from Poggianti’s models in the case of uncertain redshift, 4) the redshift estimate in these models, 5) the r. m. s. deviation, $`\sigma _d`$, of photometric points from the optimum age SED curve in Poggianti’s model for the given case, 6) the age determined from the PEGASE library models in the case of non-fixed redshift, 7) the r. m. s. deviation, $`\sigma _d`$, of photometric points from the optimum age SED curve in the PEGASE model for the given case.
In Figures 55–100 (at the end of the paper) are presented the optimum (with a minimum of the squares of the deviations) SED curves with a variable redshift and normalized likelihood function (LHF) distributions in the “redshift–age” plane for the given object for the two models. The pictures are drawn in pairs “SED–LH function (z, age)” for Poggianti’s and PEGASE models, respectively. When there are several selected curves within one model, all the versions are presented. When it is impossible to choose a model, only the LHF distribution is given. The LHF contours are plotted in the figures by levels 0.6, 0.7, 0.9 and 0.97. The figures from the SED models of Poggianti and PEGASE are labeled by symbols (a) and (b), respectively.
Note that the sought-for parameters for 11 sources are determined ambiguously.
## 5 Discussion
The principal points of our concern are:
* whether one can use the multicolour photometry technique to measure the redshift (first of all) and age of the stellar population of the host galaxy for distant radio galaxies;
* which of the new models give the best agreement of the redshift found by spectroscopy with the derived values;
* to what extent one can rely on the obtained ages of radio galaxies.
Should all the data of Table 5 be used with no selection (leaving only one of the versions for each object), the formal error of a single redshift measurement equals 70–80 % (Fig. 1 a, b), which is almost an order of magnitude worse than for nearby objects (see e. g. Benn et al., 1989). For Poggianti’s models the residuals are less clustered in their distribution (compare Fig. 1 a and 1 b).
The situation improves considerably if Table 5 is restricted to the population of classical FR II-type objects (marked by asterisks in Tables 1, 5). For the PEGASE models the error decreases to 23 %. It is essential that this error does not appear to rise with $`z_{sp}`$ (Fig. 2). Part of the error is without a doubt associated with the quality and dissimilarity of the observational data, part with the real difference between the SEDs of the host galaxies and the adopted models.
In a number of properties the PEGASE models turn out to be closer to the real SEDs of radio galaxies than Poggianti’s models. This is why we propose to employ the former for distant FR II-type galaxies until models of higher quality appear.
The errors in the age estimates of the stellar population from the data of Tables 4 and 5 can so far be determined only by comparing the results of different models, which may not represent the true error. Histograms of such “model” errors in the age determination of the stellar population of the host galaxies are displayed in Fig. 3a, b. The ages derived from the PEGASE models with a fixed (spectroscopic) redshift generally differ only a little (10 %) from those of the version with simultaneous selection of both age and redshift (Fig. 4).
Fig. 5 shows the differences in age from the models of Poggianti and PEGASE, depending on the spectroscopic redshift. The average age of the radio galaxies turned out to be about 2 billion years (see e. g. Fig. 6) and depends only slightly on $`z_{sp}`$. The dispersion of the age values decreases with growing $`z_{sp}`$, though the statistical significance of this inference is not high. Besides, there is a systematic difference in the ages estimated from these two models. Poggianti’s model yields ages larger by about 1.5–2 billion years, except for the largest ages. Note that the larger the age, the lower its estimation accuracy. For the oldest systems, the differences in the ages estimated from the two models may amount to 100 % and above.
The part played by the red filters, especially K, grows with increasing $`z`$, but it turns out that the “continuity” in the location of the filters across the determined spectrum region is essential. To illustrate this we have compared the accuracy of determination of colour redshifts in two cases: using all the data available, including the K filter, and using four neighbouring filters that cover a specified region of the spectrum continuously. We have managed to select 6 such cases (Tables 6 and 7), and all of them are used in Fig. 7. It follows from the figure that the difference in colour redshifts is as small as 11 %. This allows us to hope that colour redshifts can be estimated with sufficient accuracy using the standard equipment available at SAO RAS.
In selecting the most likely version of the colour redshift one can use photometry data in a separate filter, since the difference between these versions sometimes exceeds the errors in the photometric estimates (see e. g. objects 1108+38, 1017+37).
Using the age of stellar systems of the host galaxies, one can roughly evaluate the time of the latest mass star formation T<sub>sf</sub> and the redshift $`z_{sf}`$, corresponding to that moment. These estimates are model dependent and we restrict ourselves to the standard CDM model of the Universe.
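If the standard CDM model is taken to be the Einstein–de Sitter case (Ω = 1, Λ = 0), the age–redshift relation is analytic, t(z) = (2/3) H₀⁻¹ (1+z)⁻³/², so the epoch of the last mass star formation follows immediately from the observed redshift and the fitted stellar age. The sketch below illustrates this inversion; the Hubble constant and the example numbers are assumptions made only for this illustration.

```python
import numpy as np

H0_KM_S_MPC = 75.0                         # assumed Hubble constant (km/s/Mpc)
MPC_KM = 3.0857e19                         # km per Mpc
GYR_S = 3.156e16                           # seconds per Gyr
H0_PER_GYR = H0_KM_S_MPC / MPC_KM * GYR_S  # H0 in 1/Gyr

def age_at_z(z):
    """Age of an Einstein-de Sitter universe at redshift z, in Gyr."""
    return (2.0 / 3.0) / H0_PER_GYR * (1.0 + z) ** -1.5

def z_of_age(t_gyr):
    """Invert the EdS age-redshift relation."""
    return ((2.0 / 3.0) / (H0_PER_GYR * t_gyr)) ** (2.0 / 3.0) - 1.0

def z_star_formation(z_obs, stellar_age_gyr):
    """Redshift at which a stellar population of the given age, observed at z_obs,
    was formed; a non-positive formation time signals a conflict with the assumed
    cosmology (the '53W091 problem') and is returned as infinity."""
    t_form = age_at_z(z_obs) - stellar_age_gyr
    return z_of_age(t_form) if t_form > 0 else np.inf

# illustrative numbers only: a galaxy at z = 1.5 with a 1.8 Gyr old population
print(z_star_formation(1.5, 1.8))
```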
The distribution of T<sub>sf</sub> for the FR subsample is displayed in Fig. 8. At the average redshift of this sample, the mean age of the stellar systems of the host galaxies equals 1.8 billion years, which corresponds to $`z_{sf}=5.5\pm 3.7`$. A considerable part of the galaxies have $`z_{sf}`$ larger than 8, which is important for the reconstruction of the history of the Universe. The presence of a certain number of “negative” ages may be due to the error in the age estimates of old objects. The well-studied “negative”-age object 53W091, having $`z=1.55`$ and an age of 3.5–4 billion years, has been found to conflict with the CDM model. The conflict can readily be resolved by the introduction of the $`\mathrm{\Lambda }`$ term (Dunlop et al., 1996; Krauss, 1997). In any event it is vital that the mean epoch of mass star formation for the population of galaxies with $`z>1`$ occurs much earlier than, on average, for field galaxies (Cowie et al., 1995).
## 6 Conclusions
1. It is shown that one can measure redshifts with an accuracy of 25–30 % up to the limiting values for 40 radio galaxies with $`1<z<4`$, having measured stellar magnitudes in more than 3 filters. These measures are valid first of all for the PEGASE models of SED evolution with time. Therefore it is hoped we will succeed in obtaining sufficiently reliable redshifts from the 6 m telescope multicolour photometry data, using the PEGASE models for the sample of the “Big Trio” project RC objects, though we have no measurements in the K filter. Thus we have obtained good agreement between spectral and colour redshifts for one of the distant RC objects (Dodonov et al., 1999).
2. Ages and moments of the latest vigorous star formation have been estimated for the radio galaxies with $`z>1`$ discussed above. The stellar population of most objects of this sample is not too old (the median PEGASE model age is 1.5 billion years). The ages of the stellar population from the models of Poggianti are greater by 2–2.5 billion years. There is not a single object having an age over 7–12 billion years. No perceptible relationship between the age of the stellar population and redshift is observed.
3. The errors can be divided into rough ones, which are introduced by the quasiperiodic SED structure, and random errors, which are due to the quality of the observational data. The former may reach 100 %, the latter 5–10 %. Simple photometric redshift evaluations allow false estimates to be discarded in a number of cases.
4. A better insight into the evolutionary tracks of synthetic spectra in the first generation galaxies must result in a considerable improvement of the accuracy of the colour estimates. These may then be not much different from direct spectroscopic values, at least for the faintest objects.
* The authors are grateful to V. V. Vlasyuk for reading the manuscript and helpful remarks. The work was supported by the RFBR through grants No. 99-07-90334, and partially by the Federal Programme “Astronomy” (grants 1.2.2.1 and 1.2.2.4) and Federal Programme “Integration” (grants No. 206 and No. 578). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
|
no-problem/9910/cond-mat9910184.html
|
ar5iv
|
text
|
# Structure of Flux Line Lattices with Weak Disorder at Large Length Scales
## Abstract
Dislocation-free decoration images containing up to 80,000 vortices have been obtained on high quality Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub> superconducting single crystals. The observed flux line lattices are in the random manifold regime with a roughening exponent of 0.44 for length scales up to 80-100 lattice constants. At larger length scales, the data exhibit nonequilibrium features that persist for different cooling rates and field histories.
Recent studies of high temperature superconductors have shown richness in the phase diagram due to the presence of weak quenched disorder . Larkin first showed that arbitrarily weak disorder destroys the long range translational order of flux lines (FLs) in a lattice . It was recently pointed out that the Larkin model, which is based on a small displacement expansion of the disorder potential, cannot be applied to length scales larger than the correlated volume of the impurity potential termed the Larkin regime . Beyond the Larkin regime, the behavior of FLs in the absence of dislocations has been considered using elastic models . First, FLs start to behave collectively as an elastic manifold in a random potential with many metastable states (the random manifold regime) . In this random manifold regime, the translational order decreases as a stretched exponential, whereas there is a more rapid exponential decay in the Larkin regime. At even larger length scales, when the displacement correlation of FLs become comparable to the lattice spacing, the random manifold regime transits to a quasiordered regime where the translational order decays as a power law .
Experimentally, neutron diffraction and local Hall probe measurements have shown the existence of an order-disorder phase transition with increased field, although the microscopic details of these phases are not clear. Theoretical progress describing FLs in the presence of weak disorder has been made within elastic theory, which proposes the absence of dislocations at equilibrium . To date, however, there has been no experimental work addressing the structure of dislocation-free FL lattices at large length scales. Previous magnetic decoration studies showed that the dislocation density decreases and the translational order increases with increasing magnetic field. However, only relatively short-range translational order could be probed in the previous work due to the finite image size and relatively low applied fields.
In this paper, we report the first large length scale structural studies of FLs with measurements extending up to $`\sim 300`$ lattice constants and fields up to 120 G. Real-space images show dislocation free regions containing up to the order of $`10^5`$ FLs. A very low density of dislocations was also observed, although detailed analysis suggests that the dislocations are not equilibrium features. The translational correlation function and displacement correlator have been calculated from dislocation free data to examine quantitatively the decay of order. These results show a stretched exponential decay of the translational order indicating that FLs are in the random manifold regime. The experimentally determined roughening exponent in the random manifold regime agrees well with theoretical predictions.
High quality single crystals of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub> (BSCCO) were grown as described elsewhere . Typically, crystals of $`1mm\times 1mm\times 20\mu `$m size were mounted on a copper cold-finger and decorated with thermally evaporated iron clusters at 4 K. The samples were cooled down to 4 K using different thermal cycles to test nonequilibrium effects and to achieve as close an equilibrium configuration of FLs as possible within the experimental time scale. The FL structure was imaged after decoration using a scanning electron microscope equipped with a 4096 x 4096 pixel, 8-bit gray-scale image acquisition system. Nonlinearity in the system was eliminated using grating standards. This high-resolution system enabled us to acquire images containing nearly $`10^5`$ FLs, while maintaining a similar resolution ($`14`$ pixels between vortices) to previous studies of $`10^3`$ FLs. In addition, an iterative Voronoi construction was used to reduce the positioning inaccuracy to 3 % of a lattice constant.
Samples were decorated in fields of 70, 80 and 120 G parallel to the c axis of BSCCO single crystals. In contrast to the previous decoration experiments at lower fields, we find that dislocations are rare at these fields. The density of dislocations was $`1.7\times 10^{-5}`$, $`1.4\times 10^{-5}`$ and $`3.1\times 10^{-5}`$ for 70, 80 and 120 G, respectively, where the total number of vortices is $`240,000`$ for each field. It is thus trivial to find many large $`100\times 100`$ $`\mu `$m$`^2`$ dislocation-free regions in the decorated samples. The size of the largest dislocation-free image, which was obtained in a field of 70 G, is $`152\times 152`$ $`\mu `$m$`^2`$ with 78,363 vortices. Although a small number of dislocations are detected in our FL images, this does not imply that they are energetically favorable at equilibrium. On the contrary, we believe that the large dislocation-free areas observed in the images provide a lower bound for the length scale of equilibrium dislocation loops. We discuss this point below after presenting a quantitative analysis of the translational order.
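Dislocations in a triangular vortex lattice are commonly identified as tightly bound pairs of 5- and 7-fold coordinated sites in the Delaunay triangulation of the vortex positions. The sketch below is our own illustration of this standard counting procedure using scipy (it is not the analysis code used for the images discussed here), and it assumes the vortex positions are available as an (N, 2) array.

```python
import numpy as np
from scipy.spatial import Delaunay

def coordination_numbers(points):
    """Number of Delaunay neighbours of each vortex; interior sites of a perfect
    triangular lattice have coordination 6, while a dislocation shows up as a
    bound 5-7 pair."""
    tri = Delaunay(points)
    neighbours = [set() for _ in range(len(points))]
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    neighbours[i].add(j)
    return np.array([len(n) for n in neighbours])

def defect_fraction(points):
    """Fraction of vortices that are not 6-fold coordinated (edge sites included,
    so a real analysis would first discard the image boundary)."""
    return np.mean(coordination_numbers(points) != 6)

# toy example: a perfect triangular lattice patch
a0 = 1.0
pts = np.array([(i * a0 + 0.5 * a0 * (j % 2), j * a0 * np.sqrt(3) / 2)
                for i in range(30) for j in range(30)])
print(defect_fraction(pts))   # nonzero only because of the boundary
```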
To study quantitatively the FL lattice order, we proceed as follows. First, a perfect lattice is constructed and registered to the FL positions obtained from an experimental image. The initial lattice vectors used to construct the perfect lattice were obtained from the Fourier transform of the vortex positions. When an image contains a dislocation, the continuum approximation is used to construct the perfect lattice with the dislocation . Next we minimized the root mean square displacement between the underlying perfect lattice and the real FL lattice by varying the position and orientation of the two lattice vectors of the perfect lattice. The displacement vector $`𝐮(𝐫)`$ associated with each of the vortices positioned at $`𝐫`$ relative to the perfect lattice was then computed. Fig 1 displays a color-representation of the displacement field for a typical dislocation-free image and an image containing three dislocations. In Fig 1(a), the average displacement is 0.22 $`a_0`$, where $`a_0`$ is the lattice constant. Qualitatively, the map consists of several intermixed domain-like structures, within which the displacement fields are correlated. These uniformly dispersed domain-like structures of the displacement field produce sharp Bragg peaks in Fourier space (see Fig 3(b) later). We also believe that $`𝐮(𝐫)`$ provides a quick indication of nonequilibrium effects. For example, Fig 1(b) exhibits large domains of correlated displacements that are sheared relative to each other; that is, the blue-green-blue coded domains. We believe that this larger scale distortion is a manifestation of a nonequilibrium structure that may arise from quenched dynamics of FLs during our field-cooling process (see below).
To compare our data directly with theoretical predictions, we have calculated the displacement correlator, $`B(r)`$, and the translational correlation function, $`C_𝐆(r)`$. $`B(r)`$ and $`C_𝐆(r)`$ are defined as $`\langle [𝐮(𝐫)-𝐮(\mathrm{𝟎})]^2\rangle /2`$ and $`\langle e^{i𝐆\cdot [𝐮(𝐫)-𝐮(\mathrm{𝟎})]}\rangle `$, respectively, where $`\langle \mathrm{}\rangle `$ is the average over thermal fluctuations and quenched disorder, and $`𝐆`$ is one of the reciprocal lattice vectors. Theoretically, we expect $`B(r)`$ to show three distinct behaviors as $`r`$ increases: $`B(r)\propto r`$ in the Larkin regime, where $`B(r)`$ is less than the square of $`\xi `$, the in-plane coherence length. As $`r`$ increases further, FLs are in the random manifold regime where $`\xi ^2<B(r)<a_0^2`$. In this regime $`B(r)\propto r^{2\nu }`$ with the roughening exponent $`2\nu `$ ($`<1`$). Finally, at the largest length scales (the quasiordered regime) where $`a_0^2<B(r)`$, $`B(r)\propto \mathrm{ln}r`$. Since the in-plane $`\xi `$ of BSCCO is only $`\sim `$ 20 Å, the Larkin regime is irrelevant in our experiment (i.e., $`a_0\gg \xi `$). Fig 2(a) shows the behavior of $`B(r)`$ calculated from the data in Fig 1(a). For $`r<80a_0`$, $`B(r)`$ can be fit well with a power law, $`B(r)\propto r^{2\nu }`$, with $`2\nu =0.44`$. Thus our experiment is probing the random manifold regime at least up to this scale. Indeed, $`B(r)`$ grows only up to 0.05 $`a_0^2`$ at $`r=80a_0`$, well below the expected crossover to the quasiordered regime, i.e. $`B(r)\approx a_0^2`$. A naive extrapolation to $`B(r)=a_0^2`$ suggests the crossover at $`r\sim 10,000a_0`$ ($`\sim `$ 4 mm), which is far beyond our experimental limit. Samples with such a large clean area, and direct imaging of $`\sim 10^8`$ vortices, would be required to observe the logarithmic roughening of FLs. The roughening exponent $`2\nu `$ is found to be independent of the field (70 - 120 G) and consistent with the estimate $`2\nu =2/5`$ obtained by Feigel’man et al. using a scaling argument. As shown in Fig 2(b), $`C_𝐆(r)`$ and $`e^{-G^2B(r)/2}`$ overlap with each other for $`r<L^{*}`$, where the measured $`L^{*}`$ is $`80a_0`$. These results support the Gaussian approximation, $`C_𝐆(r)\approx e^{-G^2B(r)/2}`$, which has been simply assumed for the equilibrium FL lattice within this length scale. For $`r>L^{*}`$, however, $`B(r)`$ deviates strongly from the expected behavior; that is, $`B(r)`$ saturates and even decreases as $`r`$ increases. In addition, the Gaussian approximation breaks down for $`r>L^{*}`$ as evidenced by the difference between $`C_𝐆(r)`$ and $`e^{-G^2B(r)/2}`$. We believe that this behavior can be attributed to nonequilibrium FL structures at the larger length scales of our experiment.
To examine this point further, we decompose $`B(r)`$ into its longitudinal \[$`B^L(r)`$\] and transverse \[$`B^T(r)`$\] parts: $`B(r)=(B^L(r)+B^T(r))/2`$, where
$$B^L(r)=\left(\left(𝐮(𝐫)𝐮(0)\right)\frac{𝐫}{r}\right)^2.$$
(1)
It is worth noting that in the random manifold regime, the ratio of $`B^T(r)`$ and $`B^L(r)`$ is predicted to be $`2\nu +1`$, and thus provides an independent estimate of the roughening exponent. The average value of this ratio measured from our data (inset to Fig 3(a)) is 1.40, which is consistent with the value of $`2\nu `$ obtained from $`B(r)`$. As shown in Fig 3(a), both $`B^L(r)`$ and $`B^T(r)`$ are described well by the power-law behavior up to $`r\sim L^{*}`$. Beyond this range, however, the transverse displacement $`B^T(r)`$ first deviates from the power law, causing the deviations in $`B(r)`$. Thus, we infer that shear motion of the FL lattice should be responsible for the abnormal behavior of $`B(r)`$. Since the shear modulus of the FL lattice is much smaller in magnitude than the compressional modulus, $`B^T(r)`$ is always larger than $`B^L(r)`$, and the shear motion dominates the relaxation of the FL lattice during the field cooling process. As temperature decreases, the long wavelength component of the shear motion is frozen out. We believe that the domain-like structures seen in Fig 1 are a snapshot of these frozen long wavelength shear motions. Note that the characteristic length scale of these domain-like structures in Fig 1 is again $`L^{*}`$, which explains the deviations in $`B(r)`$ for $`r>L^{*}`$. Therefore, $`L^{*}`$ is the equilibrium length scale within which FLs can relax to local equilibrium during our experimental time scale.
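The correlators used above can be estimated directly from the measured positions and displacements by binning all vortex pairs in separation. The following sketch is our own illustration of such pair-binned estimators for B(r), its longitudinal and transverse parts (Eq. 1) and C_G(r) for one reciprocal lattice vector G; for the ~10^5 vortices of the actual images one would subsample the pairs rather than form them all.

```python
import numpy as np

def correlators(pos, u, G, r_max, n_bins=50):
    """Pair estimators of B(r), B_L(r), B_T(r) and C_G(r).

    pos : (N, 2) vortex positions, u : (N, 2) displacements from the perfect
    lattice, G : one reciprocal lattice vector.  All pairs with separation
    below r_max are binned."""
    dr = pos[:, None, :] - pos[None, :, :]          # pair separation vectors
    du = u[:, None, :] - u[None, :, :]              # pair displacement differences
    r = np.linalg.norm(dr, axis=-1)
    iu = np.triu_indices(len(pos), k=1)             # each pair counted once
    r, dr, du = r[iu], dr[iu], du[iu]

    keep = (r > 0) & (r < r_max)
    r, dr, du = r[keep], dr[keep], du[keep]
    rhat = dr / r[:, None]

    du2 = np.sum(du ** 2, axis=-1)                  # |u(r) - u(0)|^2
    du_l = np.sum(du * rhat, axis=-1)               # longitudinal component
    phase = np.cos(du @ G)                          # Re exp(i G.[u(r) - u(0)])

    bins = np.digitize(r, np.linspace(0.0, r_max, n_bins + 1)) - 1
    out = {"r": [], "B": [], "B_L": [], "B_T": [], "C_G": []}
    for b in range(n_bins):
        sel = bins == b
        if not np.any(sel):
            continue
        out["r"].append(r[sel].mean())
        out["B"].append(0.5 * du2[sel].mean())
        out["B_L"].append((du_l[sel] ** 2).mean())
        out["B_T"].append((du2[sel] - du_l[sel] ** 2).mean())
        out["C_G"].append(phase[sel].mean())
    return {k: np.array(v) for k, v in out.items()}
```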
This issue can also be addressed through Fourier space analysis. Fig 3(b) displays a blow-up of one Bragg peak. Several small satellite peaks appear around the relatively sharp main peak; these satellite peaks indicate a large-scale modulation of the FL lattice. If the FLs were in equilibrium, only one main peak should be expected. The corresponding real space distance between the main and satellite peaks is, again, $`L^{*}`$. Hence these satellite peaks provide further evidence of the frozen-in dynamics beyond the equilibrium length scale $`L^{*}`$. In addition, we have prepared FL lattices in different ways to address the nonequilibrium structures. For example, we cooled the samples in the absence of a field to 65 K, applied a field of 70 G, and then cooled slowly (0.1 K/min) to 4 K. Significantly, we find a similar density of dislocations and FL structure compared to the rapidly (10 K/s) field-cooled samples. Since 65 K is far below the melting temperature, this observation suggests that the nonequilibrium structures originate from the frozen-in dynamics far below the melting temperature. Although we can probe FLs up to a length scale of $`\sim 300a_0`$, there is a much smaller length scale $`L^{*}`$ that prohibits direct application of the theory derived for equilibrium FLs. Further studies should address this important issue.
Finally, we consider the origin of the dislocations observed in our experiments, since the nonequilibrium vs. equilibrium nature of dislocations is critical to the existence of the Bragg glass phase. We believe that our data, which exhibit only small numbers of dislocations, in fact favor the nonequilibrium nature of dislocations in the FL lattice we probed, for the following reasons. First, it is found that most dislocations are pinned in between domain boundaries (see Fig 1(b) for example). If there were a dislocation within the domain-like structures where FLs are locally in equilibrium, the dislocation should be an equilibrium feature. Second, $`L^{*}L_d=n_d^{-1/2}250a_0`$, where $`L_d`$ and $`n_d`$ are the average distance between dislocations and the density of dislocations, respectively. If dislocations were energetically favorable in an equilibrium FL lattice, large dislocation loops should proliferate beyond the equilibrium length scale $`L^{*}`$. In addition, if some dislocations drift within domains, and are pinned at domain boundaries, we should have $`L_d\sim L^{*}`$. Therefore, our experiment ($`L^{*}\ll L_d`$) suggests that dislocations are not equilibrium features in the FL lattice. Together, our data provide a lower bound for the length scale of equilibrium dislocation loops in the FL lattice.
In summary, we have obtained large scale dislocation-free images of the FL lattice in high quality BSCCO superconductors. Quantitative analyses of the translational order indicate that the system is in equilibrium for length scales up to $`\sim 80a_0`$, and that FLs are in the random manifold regime with a roughening exponent $`2\nu =0.44`$. We suggest that the very small density of dislocations observed in our data is an out-of-equilibrium feature due to the short time scales involved in our field-cooled experiments.
We thank D. R. Nelson, D. S. Fisher, P. Le Doussal, and T. Giamarchi for helpful discussion. CML acknowledges support of this work by the NSF Division of Materials Research.
|
no-problem/9910/astro-ph9910288.html
|
ar5iv
|
text
|
# On the axis ratio of the stellar velocity ellipsoid in disks of spiral galaxies
## 1 Introduction
It has been known for many decades that the distribution of the stellar velocities in the solar neighbourhood is far from isotropic. A longstanding problem in stellar dynamics (or in recent times more appropriately called galactic dynamics) has been the question of the shape and orientation of the velocity ellipsoid (i.e. the three-dimensional distribution of velocities) of stars in disks of spiral galaxies. The local ellipsoid has generated debate and research for at least a century. There still is no consensus on the question of the orientation of the longest axis of the velocity ellipsoid (the “tilt” away from parallel to the plane) at small and moderate distances from the symmetry plane of the Galaxy (but see Cuddeford & Amendt 1991a,b), which is of vital importance for attempts to estimate the local surface density of the Galactic plane from vertical dynamics of stars. The radial versus tangential dispersion ratio is reasonably well understood as a result of the local shear, which through the Oort constants governs the shape of the epicyclic stellar orbits (but see Cuddeford & Binney 1994 and Kuijken & Tremaine 1994 for higher order effects as a result of deviations from circular symmetry).
There are two general classes of models for the origin of the velocity dispersions of stars in galactic disks. The first, going back to Spitzer & Schwarzschild (1951), is scattering by irregularities in the gravitational field, later identified with the effects of Giant Molecular Clouds (GMCs). The second class of models can be traced back to the work of Barbanis & Woltjer (1967), who suggested transient spiral waves as the scattering agent; this model has been extended by Carlberg & Sellwood (1985). Recently, the possiblity of infall of satellite galaxies has been recognized as a third option (e.g. Velázquez & White, 1999).
In the solar neighbourhood the ratio of the radial and vertical velocity dispersion of the stars $`\sigma _\mathrm{z}/\sigma _\mathrm{R}`$ is usually taken as roughly 0.5 to 0.6 (Wielen 1977; see also Gomez et al. 1990), although values on the order of 0.7 are also found in the literature (Woolley et al. 1977; Meusinger et al. 1991). The value of this ratio can be used to test predictions for the secular evolution in disks and perhaps distinguish between the general classes of models. Lacey (1984) and Villumsen (1985) have concluded that the Spitzer-Schwarzschild mechanism is not in agreement with observations: the predicted time dependence of the velocity dispersion of a group of stars as a function of age disagrees with the observed age – velocity dispersion relation (see also Wielen 1977), while it would not be possible for the axis ratio of the velocity ellipsoid $`\sigma _\mathrm{z}/\sigma _\mathrm{R}`$ to be less than about 0.7 (but see Ida et al. 1993)
Jenkins & Binney (1990) argued that it is likely that the dynamical evolution in the directions in the plane and that perpendicular to it could have proceeded with both mechanisms contributing, but in different manners. Scattering by GMCs would then be responsible for the vertical velocity dispersion, while scattering from spiral irregularities would produce the velocity dispersions in the plane. The latter would be the prime source of the secular evolution with the scattering by molecular clouds being a mechanism by which some of the energy in random motions in the plane is converted into vertical random motions, hence determining the thickness of galactic disks. The effects of a possible slow, but significant accretion of gas onto the disks over their lifetime has been studied by Jenkins (1992), who pointed out strong effects on the time dependence of the vertical velocity dispersions, in particular giving rise to enhanced velocities for the old stars.
The only other galaxy in which a direct measurement of the velocity ellipsoid has been reported, is NGC488 (Gerssen et al. 1997). NGC488 is a moderately inclined galaxy, which enables these authors to solve for the dispersions from a comparison of measurements along the major and minor axes. NGC488 is a giant Sb galaxy with a photometric scale length of about 6 kpc (in the B-band) and an amplitude of the rotation curve of about 330 km s<sup>-1</sup>. The axis ratio $`\sigma _\mathrm{z}/\sigma _\mathrm{R}`$ is 0.70 $`\pm `$ 0.19; this value, which is probably larger than that for the Galaxy, suggests that the spiral irregularities mechanism should be relatively less important, in agreement with the optical morphology.
The light distribution in galactic disks has in the radial direction an exponential behaviour (Freeman 1970), characterised by a scale length $`h`$. In the vertical direction –at least away from the central layer of young stars and dust that is obvious in edge-on galaxies– it turns out that the light distribution can also be characterised by an exponential scale height $`h_\mathrm{z}`$, which is independent of galactocentric distance (van der Kruit & Searle 1981, but see de Grijs & Peletier 1997). It then is usually assumed that the three-dimensional light distribution traces the distribution of mass; this seems justifiable since the light measured is that of the old disk population, which contains most of the stellar disk mass and dominates the light away from the plane. On general grounds, these two typical length scales are expected to be independent, the radial one resulting from the distribution of angular momentum in the protogalaxy (e.g. van der Kruit 1987; Dalcanton et al. 1997) or that resulting from the merging of pre-galactic units in the galaxy’s early stages, while the length scale in the $`z`$-direction would result from the subsequent, and much slower, disk heating and the consequent thickening of the disk. It is not a priori clear, therefore, that the two scale lengths should correlate. Yet, they do bear a relation to the ratio of the velocity dispersions of the stars in the old disk population. The vertical one follows directly from hydrostatic equilibrium. In the radial direction it is somewhat indirect; a relation between the radial scale length and the corresponding velocity dispersion comes about through conditions of local stability (e.g. Bottema 1993).
In a recent study, de Grijs (1997, 1998; see also de Grijs & van der Kruit 1996) has determined the two scale parameters in a statistically complete sample of edge-on galaxies and found the ratio of the two ($`h`$/$`h_\mathrm{z}`$) to increase with later morphological type. In this paper we will examine this dataset in detail in order to investigate whether it can be used to derive information on the axis ratios of the velocity ellipsoid and help make progress in resolving the general issues described above.
## 2 Background
In an extensive study of stellar kinematics of spiral galaxies, Bottema (1993) presented measurements of the stellar velocity dispersions in the disks of twelve spiral galaxies. This first reasonably sized sample represented a fair range of morphological types and luminosities, although it was not a complete sample in a statistical sense. In each galaxy he determined a fiducial value for the velocity dispersion, namely at one photometric (B-band) scale length. He then found that this fiducial velocity dispersion correlated well with the absolute disk luminosity as well as with the maximum rotation velocity of the galaxy. His sample contained both highly inclined galaxies (where the velocities in the plane are in the line of sight) and close to face-on systems (where one measures the vertical velocity dispersion); when he forced the relations for the two classes of galaxies to coincide he found that a similar ratio between radial and vertical velocity dispersion as applicable for the solar neighbourhood was needed.
Bottema’s empirical relation for velocity dispersion versus rotation velocity is
$$\sigma _{\mathrm{R},\mathrm{h}}=0.29V_{\mathrm{rot}},$$
(1)
whereas for velocity dispersion versus disk luminosity it reads (in the form of absolute magnitude)
$$\sigma _{\mathrm{R},\mathrm{h}}(\mathrm{km}\mathrm{s}^{-1})=-17\times M_B-279$$
(2)
These relations can –for any galaxy for which the photometry is available or for which the rotation curve is known– be used to estimate the radial stellar velocity dispersion of the old disk stars at one photometric scale length from the center. Doing this for the de Grijs sample of edge-on galaxies and estimating the vertical velocity dispersion from the vertical scale height, one can in principle determine the axis ratio of the velocity ellipsoid for this entire sample. It would appear that this is a rather uncertain procedure, since one will have to assume a mass-to-light ratio ($`M/L`$) in order to calculate the vertical velocity dispersion from the photometric parameters. We will show, however, that $`M/L`$ does not enter explicitely in the formula for the ratio of velocity dispersions.
We will list our assumptions:
* The surface density of the disk has an exponential form as a function of galactocentric distance:
$$\mathrm{\Sigma }(R)=\mathrm{\Sigma }(0)\mathrm{e}^{-R/h}.$$
(3)
* The vertical distribution of density can be approximated by that of the isothermal sheet (van der Kruit & Searle 1981), but we will use instead the subsequently suggested modification (van der Kruit 1988)
$$\rho (R,z)=\rho (R,0)\mathrm{sech}(z/h_\mathrm{z}).$$
(4)
A detailed investigation of the sample (de Grijs et al. 1997) shows indeed that the vertical light profiles are much closer to exponential than to the isothermal solution, although the mass density distribution most likely is less peaked than that of the light, since young populations with low velocity dispersions add significantly to the luminosity but little to the mass. Then the vertical velocity dispersion $`\sigma _\mathrm{z}`$ can be calculated from
$$\sigma _\mathrm{z}^2=1.7051\pi G\mathrm{\Sigma }(R)h_\mathrm{z}.$$
(5)
The usual parameter $`z_0`$ used in the notation for the isothermal disk (and in de Grijs 1998) is $`z_0=2h_\mathrm{z}`$. It is important to note that this formula assumes that the old stellar disk is self-gravitating. Although this can be made acceptable for galaxies like our own at positions of a few radial scale lengths from the center (see van der Kruit & Searle 1981), it is improbable in late-type galaxies, which have significant amounts of gas in the disks, and we will need to allow for this.
* The mass-to-light ratio $`M/L`$ is constant as a function of radius. Support for this comes from the observation by van der Kruit & Freeman (1986) and Bottema (1993) that the vertical velocity dispersion in face-on spiral galaxies falls off with a scale length about twice that of the surface brightness (but note that Gerssen et al. 1997 could not confirm this for NGC488), combined with the observed constant thickness of disks with galactocentric radius.
* We are not making any assumptions on the functional form of the dependence of the radial velocity dispersion or the axis ratio of the velocity ellipsoid. The observed radial stellar velocity dispersions in Bottema’s sample are consistent with a drop-off $`\mathrm{exp}(-R/2h)`$, in which case this axis ratio would be constant with galactocentric distance. However, over the range considered the data can be fitted also with a radial dependence for the radial velocity dispersion in which the parameter $`Q`$ for local stability against axisymmetric modes (Toomre 1964) is constant with radius ($`R\mathrm{exp}(-R/h)`$; see van der Kruit & Freeman 1986). The definition of Toomre’s (1964) parameter $`Q`$ for local stability against axisymmetric modes is
$$Q=\frac{\sigma _\mathrm{R}\kappa }{3.36G\mathrm{\Sigma }}.$$
(6)
Disks are stabilised at small scales through the Jeans criterion by random motions (up to the radius of the Jeans mass) and for larger scales by differential rotation. Toomre’s condition states that the minimum scale for stability by differential rotation should be no larger than the Jeans radius.
* We assume that spiral galaxies have flat rotation curves with an amplitude $`V_{\mathrm{rot}}`$ over all but their very central extent. This assumption implies that we may write the epicyclic frequency $`\kappa `$ as
$$\kappa =2\sqrt{B(B-A)}=\sqrt{2}\frac{V_{\mathrm{rot}}}{R},$$
(7)
where $`A`$ and $`B`$ are the Oort constants.
First we will look into the background of the Bottema relations (1) and (2) (see also van der Kruit 1990; Bottema 1993, 1997).
Evaluating Toomre’s $`Q`$ at $`R=1h`$ and using the expression for the epicyclic frequency above, we find
$$\sigma _{\mathrm{R},\mathrm{h}}=\frac{3.36G}{\sqrt{2}}Q\frac{\mathrm{\Sigma }(h)h}{V_{\mathrm{rot}}}.$$
(8)
Using $`\mathrm{\Sigma }(0)=(M/L)\mu _0`$ and the total disk luminosity from $`L_\mathrm{d}=2\pi \mu _0h^2`$ we get
$$\sigma _{\mathrm{R},\mathrm{h}}=\frac{1.68G}{\mathrm{e}\sqrt{\pi }}Q\left(\frac{M}{L}\right)\frac{\mu _0^{1/2}L_\mathrm{d}^{1/2}}{V_{\mathrm{rot}}}.$$
(9)
Neither in the sample of galaxies that Bottema used to define his relations, nor in our sample of edge-on systems do we have galaxies with unusually low surface brightness. It seems therefore justified to assume that for the galaxies considered we have a reasonably constant central surface brightness (Freeman 1970; van der Kruit 1987)
$$\mu _0\simeq 21.6\mu _B=142\mathrm{L}_{\odot }\mathrm{pc}^{-2},$$
(10)
where $`\mu _B`$ stands for B-magnitudes arcsec$`^{-2}`$. So, if $`\mu _0`$, $`Q`$ and $`(M/L)`$ are constant between galaxies, we see that the fiducial velocity dispersion depends only on the disk luminosity and the rotation velocity.
Bottema’s relation (1) can then be reconciled with Eq. (9), if we have
$$L_\mathrm{d}\propto V_{\mathrm{rot}}^4.$$
(11)
This is approximately the Tully-Fisher relation (Tully & Fisher 1977); not precisely, since we use the disk luminosity and not that of the galaxy as a whole (however, for late-type galaxies this would be a minor difference)<sup>1</sup><sup>1</sup>1Eqs. (1) and (2) together would imply an exponential Tully-Fisher relation rather than a power law. The problem is that Bottema chose to fit a linear relation to his data, which is completely justified in view of his error bars. But eqs. (11) and (9) would imply that he should have fitted a curve in which the velocity dispersion is proportional to $`L_\mathrm{d}^{1/4}`$ (see van der Kruit 1990, p. 199). Performing such a fit to Bottema’s data gives a curve that is only marginally different from a straight line over his range of absolute magnitudes (only a few km s<sup>-1</sup>). So, Eq. (2) should only be seen as an empirical fit of the data to a straight line, although these data are equally consistent with the dependence following from Eqs. (9) and (11)..
So, we see that Bottema’s relation (1) follows directly from Toomre’s stability criterion in exponential disks with flat rotation curves as long as Eq. (11) holds. The proportionality constant in Eq. (11) can be fixed using the parameters for the Milky Way Galaxy and for NGC 891 as given in van der Kruit (1990). These two galaxies have $`L_\mathrm{d}\approx 1.9\times 10^{10}\mathrm{L}_{\odot }`$ and $`V_{\mathrm{rot}}\approx 220\mathrm{km}\mathrm{s}^{-1}`$<sup>2</sup><sup>2</sup>2The distance for NGC 891 –as are all distances in this paper– is based on a Hubble constant of 75 km s<sup>-1</sup> Mpc<sup>-1</sup>.. This gives
$$L_\mathrm{d}(\mathrm{L}_{\odot })=8.11V_{\mathrm{rot}}^4(\mathrm{km}\mathrm{s}^{-1}).$$
(12)
and using also Eq. (10) we get
$$\sigma _{\mathrm{R},\mathrm{h}}=5.08\times 10^{-2}Q\left(\frac{M}{L}\right)V_{\mathrm{rot}}.$$
(13)
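The numerical coefficient of Eq. (13) can be verified by combining Eqs. (9), (10) and (12); a minimal check, assuming $`G`$ expressed in pc (km s<sup>-1</sup>)<sup>2</sup> M$`_{\odot }^{-1}`$ so that the units match $`\mu _0`$ in L<sub>⊙</sub> pc<sup>-2</sup> and $`L_\mathrm{d}`$ in L<sub>⊙</sub>:

```python
import math

G = 4.301e-3       # pc (km/s)^2 / M_sun
mu_0 = 142.0       # central surface brightness, L_sun / pc^2 (Eq. 10)
c_tf = 8.11        # L_d = 8.11 V_rot^4 (Eq. 12)

coeff = 1.68 * G / (math.e * math.sqrt(math.pi)) * math.sqrt(mu_0 * c_tf)
print(round(coeff, 4))   # ~0.0509, the 5.08e-2 of Eq. (13)
```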
From this we find with Bottema’s relation (1), that $`Q(M/L)_B\approx 5.7`$. In a somewhat different, but comparable manner, Bottema (1993) has also concluded that this product is of order 5.
We now turn to the vertical velocity dispersion. Evaluating the equation for hydrostatic equilibrium (5) at galactocentric distance $`R=1h`$ we find
$$\sigma _{\mathrm{z},\mathrm{h}}=\left\{5.36G\mathrm{\Sigma }(h)h_\mathrm{z}\right\}^{1/2},$$
(14)
and can thus calculate the vertical velocity dispersion from
$$\sigma _{\mathrm{z},\mathrm{h}}=\left\{\frac{5.36}{\mathrm{e}}G\left(\frac{M}{L}\right)\mu _0h_\mathrm{z}\right\}^{1/2}.$$
(15)
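Eq. (15) can be evaluated in the same way; the scale height used below is an assumed illustrative number, and the mass-to-light ratio anticipates the value $`(M/L)_B`$ = 2.8 adopted in Sect. 3.

```python
import math

G = 4.301e-3       # pc (km/s)^2 / M_sun
mu_0 = 142.0       # L_sun / pc^2
ml_B = 2.8         # (M/L)_B adopted in Sect. 3
h_z = 400.0        # pc, assumed illustrative scale height

sigma_z_h = math.sqrt(5.36 / math.e * G * ml_B * mu_0 * h_z)
print(round(sigma_z_h, 1))   # ~37 km/s for these inputs
```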
Finally we examine the ratio of the two velocity dispersions. If we eliminate $`\mathrm{\Sigma }(h)`$ between Eqs. (8) and (14) we obtain
$$\sigma _{\mathrm{R},\mathrm{h}}=0.444Q\frac{\sigma _{\mathrm{z},\mathrm{h}}^2}{V_{\mathrm{rot}}}\frac{h}{h_\mathrm{z}},$$
(16)
and with Eq. (1)
$$\left(\frac{\sigma _\mathrm{z}}{\sigma _\mathrm{R}}\right)_\mathrm{h}^2=\frac{7.77}{Q}\frac{h_\mathrm{z}}{h}.$$
(17)
Note that due to the elimination of the surface density also the mass-to-light ratio has dropped out of this equation and the result is independent of any assumption on $`M/L`$. Eq. (17) translates the ratio of the two length scales to that of the corresponding velocity dispersions and the underlying physics can be summarized as follows. In the vertical direction the length scale and the velocity dispersion relate through dynamical equilibrium. In the radial direction the velocity dispersion is related to the epicyclic frequency through the local stability condition, which is proportional to the rotation velocity. The “Tully-Fisher relation” then relates this to the integrated magnitude and hence to the size and length scale of the disk.
One should be careful in the use of Eq. (17), since in practice photometric scale lengths are wavelength dependent and its derivation –and therefore the numerical constant– is valid only at one exponential surface density scale length. The purpose of presenting it here is only to show that, if the two velocity dispersions are derived in a consistent manner from Eqs. (8) and (14) –or alternatively Eqs. (9) and (15)–, the assumption used for $`M/L`$ drops out in the resulting ratio.
## 3 Application to the de Grijs sample
The sample of edge-on disk galaxies of de Grijs (1998) contains 46 systems for which the structural parameters of the disks have been determined (including a bulge/disk separation in the analysis). From this sample we take those for which rotation velocities have been derived in a uniform manner (Mathewson et al. 1992; data collected in de Grijs 1998, Table 4) as well as those for which the Galactic foreground extinction in the B-band is less than 0.25 magnitudes. For this remaining sample of 36 galaxies we perform the following calculations:
$``$ From the total magnitudes in de Grijs (1998, Table 6) we obtain the integrated B- and I-magnitudes of the disk.
$``$ Using the radial scale length as measured in the I-band we calculate the central (face-on) surface brightness of the disk from its I-band integrated luminosity. So, we do not use Eq. (10) for a constant central surface brightness.
$``$ Then we use one or both of the two Bottema relations (1) and (2) to estimate the radial velocity dispersion at one photometric scale length (by definition in the B-band). Where we can do this with both relations, the ratio between the two estimates is 1.11 $`\pm `$ 0.19. This is not trivial, as the rotation velocities and disk luminosities are determined completely independently (and by different workers) and only the one using the absolute magnitude needs an assumption for the distance scale.
$``$ Then using Eq. (15), we estimate the vertical velocity dispersion at one photometric scale length in the I-band. For this we need a value for the mass-to-light ratio and we will discuss this first.
We found that, through Eq. (13), Bottema’s relation (1) provides a value for $`Q(M/L)`$ of about 5.7. So we make a choice for $`Q`$ rather than for $`M/L`$. It has become customary to assume values of $`Q`$ of order 2, mainly based on the numerical simulations of Sellwood & Carlberg (1984), who find their disks to settle with $`Q\approx 1.7`$ at all radii. In principle we can use the observed properties of the Galaxy to fix $`Q`$ from Eq. (17). We have $`(\sigma _\mathrm{z}/\sigma _\mathrm{R})^2\approx 0.5`$ (in the solar neighbourhood, but assumed for the sake of the argument to hold also at $`R=1h`$) and $`h_\mathrm{z}/h\approx 0.1`$ (see Sackett 1997 for a recent review), so that indeed $`Q\approx 1.7`$. We will make the general assumption that $`Q`$ = 2, in agreement with the considerations above; then $`(M/L)_B`$ = 2.8.
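A short numerical restatement of this paragraph, using Eq. (17) with the solar-neighbourhood values just quoted and the product $`Q(M/L)_B\approx 5.7`$ found in Sect. 2:

```python
axis_ratio_sq = 0.5    # (sigma_z / sigma_R)^2 near the Sun
hz_over_h = 0.1        # scale height over scale length

Q_gal = 7.77 * hz_over_h / axis_ratio_sq     # Eq. (17) solved for Q
print(round(Q_gal, 2))                       # ~1.6, close to the 1.7 quoted above

Q_adopted = 2.0
print(round(5.7 / Q_adopted, 2))             # (M/L)_B ~ 2.8, as adopted
```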
The rotation velocity version of Bottema’s empirical relations (Eq. (1)) can provide further support for the choice of $`Q`$, along the lines of the discussion in van der Kruit & Freeman (1986). In the first place we recall the condition for the prevention of swing amplification in disks (Toomre 1981), as reformulated by Sellwood (1983)
$$X=\frac{R\kappa ^2}{2\pi mG\mathrm{\Sigma }}>\mathrm{\hspace{0.33em}3},$$
(18)
where $`m`$ is the number of spiral arms. For a flat rotation curve this can be rewritten as
$$\frac{QV_{\mathrm{rot}}}{\sigma _\mathrm{R}}>\mathrm{\hspace{0.33em}3.97}m.$$
(19)
With Eq. (1) this becomes $`Q>\mathrm{\hspace{0.33em}1.15}m`$. Considering that the coefficient in Eq. (1) has an uncertainty of order 15%, this tells us that we have to assume $`Q`$ at least of order 2 to prevent strong barlike (m=2) disturbances in the disk.
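The step from Eq. (18) to Eq. (19), and the resulting limit $`Q>\mathrm{\hspace{0.33em}1.15}m`$, can be reproduced as follows; the coefficient 0.29 for $`\sigma _{\mathrm{R},\mathrm{h}}/V_{\mathrm{rot}}`$ is not restated in this section, but is the value of Eq. (1) implied by Eqs. (16) and (17) together.

```python
import math

# Eq. (18) with kappa = sqrt(2) V_rot / R and Sigma = sigma_R kappa / (3.36 G Q)
# gives Q V_rot / sigma_R > (6 pi / (3.36 sqrt(2))) m, the coefficient of Eq. (19):
coeff = 6.0 * math.pi / (3.36 * math.sqrt(2.0))
print(round(coeff, 2))                  # ~3.97

bottema = 0.29                          # sigma_R,h / V_rot from Eq. (1)
m = 2                                   # bar-like mode
print(round(coeff * bottema * m, 1))    # Q must exceed ~2.3 to suppress m = 2
```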
A similar argument can be made using the global stability criterion of Efstathiou et al. (1982). This criterion states that for a galaxy with a flat rotation curve and an exponential disk, global stability requires a dark halo and
$$Y=V_{\mathrm{rot}}\left(\frac{h}{GM_{\mathrm{disk}}}\right)^{1/2}\stackrel{>}{}1.1.$$
(20)
Here $`M_{\mathrm{disk}}`$ is the total mass of the disk. This can be rewritten as
$$Y=0.615\left[\frac{QRV_{\mathrm{rot}}}{h\sigma _\mathrm{R}}\right]^{1/2}\mathrm{exp}\left(-\frac{R}{2h}\right)\stackrel{>}{}1.1,$$
(21)
and, when evaluated at $`R=1h`$, yields with Eq. (1) $`0.69\sqrt{Q}\stackrel{>}{}1.1`$, and therefore also implies that $`Q`$ should be at least about 2. Efstathiou et al. have also come to this conclusion for our Galaxy, with the use of local parameters for the solar neighbourhood.
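The factor 0.69 and the corresponding bound on $`Q`$ follow from evaluating Eq. (21) at $`R=1h`$; as above, the Bottema coefficient of 0.29 is taken as given.

```python
import math

bottema = 0.29                        # sigma_R,h / V_rot, as in Eq. (1)
factor = 0.615 * math.exp(-0.5) / math.sqrt(bottema)
print(round(factor, 2))               # ~0.69, so Y = 0.69 sqrt(Q) at R = 1h
print(round((1.1 / factor) ** 2, 1))  # Q >~ 2.5, i.e. of order 2 or larger
```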
Having adopted a value for $`Q`$ and through this a value for $`(M/L)_B`$, we will have to convert it to $`(M/L)_I`$. For this we need a $`(B-I)`$ colour for the disks. From the fits of de Grijs (1998) we find that the total disk magnitudes show a rather large variation in colour; for the sample used here $`(B-I)`$ has a mean value of 1.9, but the r.m.s. scatter is 0.8 magnitudes. In his discussion, de Grijs (1998) suspects a systematic effect of the internal dust in the disks (particularly on the B-magnitudes, which is another reason for us to use the $`V_{\mathrm{rot}}`$-version of the Bottema relation (Eq. (1)) in our derivation in the previous section). Instead we turn to the discussion of de Jong (1996b), who compares his surface photometry of less inclined spirals to star formation models. From his Table 3, we infer that for single burst models with solar metallicity and ages of 12 Gyr $`(M/L)_B=2(M/L)_I`$. So we will use an $`(M/L)_I`$ of 1.4.
There is a further refinement required. In order to take into account the fact that in late-type galaxies the gas contributes significantly to the gravitational force, we have to correct for a galaxy’s gas content as a function of Hubble type. In the following, we will discuss the observational data regarding the Hi and the H<sub>2</sub> separately.
For 25 of de Grijs’ sample galaxies Hi observations are available, so that we can estimate the gas-to-total disk mass. For this we apply a correction of a factor 4/3 to the Hi in order to take account of helium and use de Grijs’ (1998) I-band photometry and our adopted $`M/L_B`$ ratio of 2.8 (see below) to estimate the total disk mass. As a function of Hubble type we then find
| Type | gas-to-total disk mass | n |
| --- | --- | --- |
| Sb | 0.31 $`\pm `$ 0.17 | 3 |
| Sbc | 0.36 $`\pm `$ 0.19 | 5 |
| Sc | 0.53 $`\pm `$ 0.09 | 5 |
| Scd | 0.49 $`\pm `$ 0.16 | 9 |
| Sd | 0.52 $`\pm `$ 0.10 | 3 |
We find no dependence on rotation velocity:
| $`V_{\mathrm{rot}}`$ (km s<sup>-1</sup>) | gas-to-total disk mass | n |
| --- | --- | --- |
| 80 – 130 | 0.51 $`\pm `$ 0.09 | 10 |
| 130 – 180 | 0.45 $`\pm `$ 0.17 | 8 |
| 180 – 230 | 0.51 $`\pm `$ 0.09 | 7 |
So, the Hi mass is about half the stellar mass in the disks of Sb’s and comparable to the stellar mass in Sc’s and Sd’s, with no dependence on rotation velocity. However, this is not quite what we need; we should use surface densities rather than disk masses. Now, the Hi is usually more extended than the stars and has a shallower radial profile, so the ratios in the tables above are definite upper limits. In order to take into account the effect that in late-type systems the gas contributes significantly to the gravitational force, we have “added” an amount of gas similar to that in stars for types Scd and Sd, and half of that for Sc’s.
The distribution of H<sub>2</sub> in spiral galaxies is a more complex matter; it is often centrally peaked, although some Sb galaxies exhibit central holes (for a recent review, see Kenney 1997). The molecular fraction of the gas appears to be lower in low-mass and late-type galaxies, assuming that the conversion factor from CO to molecular hydrogen is universal<sup>3</sup><sup>3</sup>3There is even doubt concerning the constancy of this conversion factor within our Galaxy (Sodroski et al. 1995).. Since our sample galaxies are generally low-mass, later-type systems, we believe that the corrections for molecular gas are small, and therefore contribute little to the correction for the presence of gas.
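As a sketch of the gas bookkeeping described in this subsection (the Hi mass and disk luminosity below are invented example numbers; the factor 4/3 is the helium correction mentioned above):

```python
def gas_to_total_disk_mass(m_hi, l_disk, m_over_l=2.8):
    """Gas-to-total disk mass ratio; 4/3 is the helium correction on the Hi."""
    m_gas = 4.0 / 3.0 * m_hi
    m_star = m_over_l * l_disk
    return m_gas / (m_gas + m_star)

# invented example: M_HI = 5e9 M_sun and L_disk = 5e9 L_sun
print(round(gas_to_total_disk_mass(5e9, 5e9), 2))   # ~0.3, Sb-like in the table above
```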
We added (a) the galaxies from van der Kruit & Searle (1982) to the sample, (b) our Galaxy using the Lewis & Freeman (1989) velocity dispersion and the structural parameters in van der Kruit (1990), and (c) the observational results for NGC488 from Gerssen et al. (1997). We leave the few early type (S0 and Sa) galaxies out of the discussion, because the component separation in the surface brightness distributions is troublesome and some of our assumptions (in particular the self-gravitating nature of the disks) are probably seriously wrong.
In order to be able to trace the origin of our results, we first show in Fig. 1 the radial and vertical scale lengths of the sample as a function of the rotation velocity. Both increase with $`V_{\mathrm{rot}}`$, which would be expected intuitively. The main result is presented in Figs. 2 and 3. From Fig. 2 we see that the vertical velocity dispersions, that have been derived from hydrostatic equilibrium, increase with the rotation speed (the radial velocity dispersions do the same automatically as a result of the use of the Bottema relations). For the slowest rotation speeds the predicted vertical velocity dispersion is on the order of 10-20 km s<sup>-1</sup>, which is close to that observed in the neutral hydrogen in face-on galaxies (van der Kruit & Shostak 1984).
The distribution of the axis ratio of the velocity ellipsoid with morphological type is as follows:
| Type | $`\sigma _{\mathrm{z},\mathrm{h}}/\sigma _{\mathrm{R},\mathrm{h}}`$ | n |
| --- | --- | --- |
| Sb | 0.71 $`\pm `$ 0.14 | 11 |
| Sbc | 0.69 $`\pm `$ 0.16 | 7 |
| Sc | 0.49 $`\pm `$ 0.17 | 6 |
| Scd | 0.70 $`\pm `$ 0.20 | 11 |
| Sd | 0.63 $`\pm `$ 0.22 | 5 |
Not much of a trend is seen here. It is in order to comment here briefly on the effects of our corrections for the gas to obtain vertical velocity dispersions. From our discussion above we conclude that there would be no systematic effect introduced as a function of rotation velocity. Furthermore, taking away our correction altogether reduces the values for the average axis ratio in the table just given to about 0.55 for Scd and Sd galaxies. Even in this unrealistic case of not allowing for the presence of the gas, we believe the trend to be hardly significant in view of the uncertainties. Since some correction for the gas mass as a function of morphological type must be made, we cannot claim that we find any evidence for a change in the velocity anisotropy with Hubble type.
Fig. 3 shows the axis ratio of the velocity ellipsoid of all the galaxies versus their rotation velocity. In view of the fact that the two dispersions that go into this ratio are determined from different observational data ($`\sigma _{\mathrm{R},\mathrm{h}}`$ from integral properties such as total luminosity and amplitude of the rotation curve; $`\sigma _{\mathrm{z},\mathrm{h}}`$ from photometric scale parameters and surface brightness) and that we have made rather simplifying assumptions, the scatter is remarkably small. No systematic trends are visible (and would probably not be significant!) in the data. The points closest to unity in the dispersion ratio generally have low rotation velocities and inferred velocity dispersions. One of these ($`V_{\mathrm{rot}}`$ = 95 km s<sup>-1</sup>, $`\sigma _{\mathrm{z},\mathrm{h}}/\sigma _{\mathrm{R},\mathrm{h}}`$ = 0.75) is NGC5023. Bottema et al. (1986) have shown that the stars and the gas in this galaxy are effectively coexistent; the radial and vertical distributions are very similar. This would imply that the velocity dispersions of the gas and the stars are the same. The vertical velocity dispersion found here is about 20 km s<sup>-1</sup>, which is significantly higher than that observed in larger spirals. It would be of interest to measure the Hi velocity dispersion in this galaxy. Since the HI would be expected to have an isotropic velocity distribution from collisions between clouds, the vertical dispersion should be equal to that in the line of sight in edge-on galaxies.
## 4 Discussion
In this section we will critically discuss the uncertainties in our approach.
$``$ The linearity of the magnitude version of the Bottema relation. We have discussed above that the power-law nature of the Tully-Fisher relation (Eq. (11)) would imply a nonlinear form of the magnitude version of Bottema’s relation (Eq. (2)). We have used it as an empirical relation to help (together with Eq. (1)) to estimate the radial velocity dispersions from the observed photometry. One may argue that it is internally consistent to use instead of Eq. (2) a fit of the form
$$\sigma _{\mathrm{R},\mathrm{h}}\propto L_\mathrm{d}^{1/4}.$$
(22)
This has only a noticeable effect on galaxies with faint absolute disk magnitudes. We have repeated our analysis using such a fit and find no change in our results. To be more definite we repeat the table of the average axis ratio as a function of morphological type that we then obtain.
| Type | $`\sigma _{\mathrm{z},\mathrm{h}}/\sigma _{\mathrm{R},\mathrm{h}}`$ | n |
| --- | --- | --- |
| Sb | 0.67 $`\pm `$ 0.17 | 11 |
| Sbc | 0.64 $`\pm `$ 0.13 | 7 |
| Sc | 0.47 $`\pm `$ 0.17 | 6 |
| Scd | 0.60 $`\pm `$ 0.11 | 11 |
| Sd | 0.60 $`\pm `$ 0.20 | 5 |
$``$ Choice of $`Q`$. We have adopted a value for $`Q`$ of 2.0, and this enters directly in our results in the calculation of the vertical velocity dispersion through the value of $`M/L`$ that follows from this choice. Had we adopted a value of 1.0 for $`Q`$, $`M/L`$ would have been a factor 2 higher and the value for $`\sigma _{\mathrm{z},\mathrm{h}}`$ a factor $`\sqrt{2}`$ –see Eq. (15)–. Bottema (1993, his Fig. 11) has shown that his observations of the stellar kinematics do not show any evidence for systematic variations in $`Q`$ among galaxies.
We have used Eq. (19) to argue that $`Q`$ is likely of order 2 in order to prevent strong barlike (m=2) disturbances. It does not necessarily follow from this that galaxies with more spiral arms should have higher values of $`Q`$ or that $`Q`$ should be significantly lower in galaxies with very strong two-armed structure. Spiral structure may arise in a variety of ways; we only argue that disks do not have grossly distorted m=2 shapes and that therefore swing amplification is apparently not operating.
We have assumed that the stellar disk is self-gravitating and ignored the influence of the gas in the evaluation of $`Q`$. At one scale length this is probably justified (it is only a small effect in the solar neighbourhood), even for late-type disks.
$``$ Non-exponential nature of the disks. Often the disk in actual galaxies can be fitted to an exponential only over a limited radial extent. In that case our description is unlikely to hold. However, in our sample the fits can be made reasonably well at one scale length from the center and we believe this not to be a problem.
$``$ Vertical structure of the disks. Our results depend on the adoption of a particular form of the vertical mass distribution (namely the sech($`z`$)-form) of Eq. (4). This enters our results through the value of the numerical constant 1.7051 in Eq. (5), and in Eq. (15) it enters into the value for $`\sigma _{\mathrm{z},\mathrm{h}}`$ as its square root. Had we assumed the isothermal distribution, then the constant would have been 2.0, while it would have been 1.5 for the exponential distribution. This would have given us values for $`\sigma _{\mathrm{z},\mathrm{h}}`$ which are only 6 to 8% higher or lower. As we have shown in de Grijs et al. (1997), for our sample galaxies the vertical luminosity distributions in these disk-dominated galaxies are slightly rounder than or consistent with the exponential model. However, the vertical mass distribution is probably less sharply peaked, and thus expected to be more closely approximated by the sech(z) model.
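The 6 to 8% figure quoted here is simply the square root of the ratio of the constants for the three vertical profiles:

```python
import math

c_sech = 1.7051   # sech(z) form adopted here (constant in Eq. 5)
c_iso = 2.0       # isothermal (sech^2) distribution
c_exp = 1.5       # exponential distribution

print(round(100.0 * (math.sqrt(c_iso / c_sech) - 1.0), 1))   # ~ +8% in sigma_z
print(round(100.0 * (math.sqrt(c_exp / c_sech) - 1.0), 1))   # ~ -6% in sigma_z
```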
$``$ Non-constancy of central (face-on) surface brightness. We have assumed in section 2 that for all galaxies the central surface brightness is constant. This is certainly unjustified for so-called “low surface brightness galaxies”; however, our galaxies have brighter surface brightnesses than galaxies that are usually considered to be of this class. But even for galaxies as in our sample it remains true that the (face-on) central surface brightness is in general somewhat lower for smaller systems (van der Kruit 1987; de Jong 1996a). From de Jong’s bivariate distribution functions, it can be seen that late-type, low absolute magnitude spirals may have a central surface brightness (in B) that is up to 1.0 magnitude fainter. Note, however, that for our derived velocity dispersions we have used the actually observed surface brightness for each galaxy.
$``$ Effects of colour variations on the mass-to-light ratio. There is a fairly large variation in the colours of the disks in the sample. De Grijs (1998) has argued that this is the result of internal dust extinction. However, we have used the I-band data; de Grijs’ Fig. 11 shows that the variation is much less in $`(I-K)`$ than in colours involving the B-band.
The colours of the disks do not correlate with morphological type (de Grijs 1998); although such a correlation has been seen in more face-on galaxies (de Jong 1996b), we believe that the fitting procedure has ignored most of the young population and dust absorption near the plane, that contributions to the $`M/L`$ scatter as a result of young populations have thereby mostly been avoided, and that the I-band scale lengths determined away from the galactic planes are fairly representative of the stellar mass distributions (de Grijs 1998).
The colour of the fitted disks in our sample has $`(I-K)`$ in the range $`\sim `$ 2 to 4. The latter is red, even for an old population, and may well be caused by excessive internal extinction, but we see no strong evidence for a substantial systematic correction in our velocity dispersions from this.
$``$ Effects of metallicity on the mass-to-light ratio. De Jong (1996b) has drawn attention to the non-negligible effects of metallicity on the mass-to-light ratio. From his compilation of models, in particular his W94 (Worthey 1994) models with ages of 12 Gyr, we estimate that the effect in the I-band amounts to 10 to 20% over the range of relevant metallicities. The effect on the derived velocity dispersions is the square root of this.
$``$ Effects of radial colour variations. Since we use an empirical relation to derive the radial velocity dispersion at one B-scale length from the center, we have to consider the effect of using the I-band. We have used the latter as the proper scale length to use for the mass density distribution. The correct one to use here would be the one measured in the K-band, which is 1.15 $`\pm `$ 0.19 times smaller for this sample. The scale length in the B-band is 1.64 $`\pm `$ 0.41 times longer than in the K-band (values quoted here from de Grijs 1998). We deduce from this that we may have underestimated the scale length to use by a factor 1.43 and systematically overestimated the vertical velocity dispersion by about 20%.
The effect of radial metallicity variations on the scale length is probably very small; these gradients in the older stellar populations are in any case expected to be significantly less than in the interstellar medium. This is so, because in models for galactic chemical evolution the mean metallicity of the stars approximates the (effective) yield, while that in the gas grows to much larger values in most models (van der Kruit 1990, p. 322).
$``$ Non-flatness of rotation curves. This may be an effect for small, late-type galaxies that have slowly rising rotation curves. It enters however in our analysis only in the derivation of the Bottema relations and this holds empirically to rather small rotation velocities ($`\sim `$ 100 km s<sup>-1</sup>).
$``$ The slope of the Tully-Fisher relation. Although we use a relation between the luminosity of the disk alone and the rotation velocity, it remains true that our “slope” of $`V_{\mathrm{rot}}^4`$ is steeper than the usually derived slopes of Tully-Fisher relations, which would indicate $`V_{\mathrm{rot}}^3`$ (e.g. Giovanelli et al.’s 1997 “template relation” yields an exponent of 3.07 $`\pm `$ 0.05). If this slope were to be put into Eq. (9), Eq. (1) would have $`\sigma _{\mathrm{R},\mathrm{h}}\propto \sqrt{V_{\mathrm{rot}}}`$, which is very significantly in disagreement with Bottema’s observations. The same holds for the other Bottema relation, Eq. (2).
The analysis of the present sample (see de Grijs & Peletier 1999) has resulted in slopes in the Tully-Fisher relation of 3.20 $`\pm `$ 0.07 in the I-band and 3.24 $`\pm `$ 0.21 in the K-band. On the other hand, Verheijen (1997), in his extensive study of about 40 galaxies in the Ursa Major cluster, finds a slope of 4.1 $`\pm `$ 0.2 in the K-band.
$``$ Effects of the gas on the value of $`Q`$. In our calculations we have already made crude allowance for the effects of the gas on the gravitational field. But in our derivations we have not taken into account the effect of the Hi on the effective velocity dispersion to be used in the evaluation of the $`Q`$-parameter. The effect of the Hi is to decrease the effective velocity dispersion and therefore $`Q`$. This means that the assumed value should in reality be decreased on average, but beyond this numerical effect, it does not affect our results.
The effects just discussed can produce errors in the estimated velocity dispersions of the order of 10 to 20% each. The final result of the dispersion ratios in Fig. 3 may therefore be wrong by a few tenths, which is comparable to the scatter in that figure. However, we have no cause to suspect that we have introduced serious systematic effects that would be strong functions of the rotation velocity or the morphological type, and the lack of correlation of the axis ratio with these properties is unlikely to be an artifact of our analysis.
We conclude that it is in principle possible to infer information on the axis ratio of the velocity ellipsoid from a sample of edge-on galaxies for which both the radial scale length as well as the vertical scale height have been measured. The result, however, shows much scatter, most of which is a result of the necessary assumptions. There is one significant improvement that can be made and that is the direct observation of the stellar velocity dispersion in these disks. That this is feasible in practice for edge-on systems has been shown by Bottema et al. (1987, 1991). The observed velocity profiles can be corrected for the line-of-sight effects, giving the tangential velocity dispersion, which through the observed shape of the rotation curve can be turned into the radial velocity dispersion. Although a time-consuming programme, we believe that it is worth doing for two reasons: (1) It will set both versions of the Bottema relation on a firmer footing. (2) The uncertainties in the analysis above can likely be significantly diminished by direct measurement of the radial velocity dispersion rather than having to infer it from the rotation velocity or the disk absolute magnitude.
## Acknowledgements
PCvdK thanks Jeremy Mould for hospitality at the Mount Stromlo and Siding Spring Observatories, where most of this work was done, and Ken Freeman for discussions, the Space Telescope Science Institute and Ron Allen for hospitality, when the final version of this paper was prepared, and the Faculty of Mathematics and Natural Sciences of the University of Groningen for financial support that made these visits possible. RdG was supported by NASA grants NAG 5-3428 and NAG 5-6403.
# A priori mixing of mesons and the |𝚫𝐈|=𝟏/𝟐 rule in 𝐊→𝜋𝜋
## Abstract
We consider the hypothesis of a priori mixings in the mass eigenstates of mesons to obtain the $`|\mathrm{\Delta }I|=1/2`$ rule in $`K\to \pi \pi `$. The Hamiltonian responsible for the transition is the strong interacting one. The experimental data are described using the isospin symmetry relations between the strong coupling constants.
The explanation of the enhancement phenomenon observed in non-leptonic and weak radiative decays of hadrons (NLDH and WRDH respectively) has represented a challenge for a long time already . In previous work we have studied the possibility that this phenomenon be due, not to some elaborate subtlety of strong interactions dressing weak vertices, but to the existence of far away new physics leading to small admixtures of the known hadrons with the new ones. If these admixtures are flavor and parity violating they would contribute to observed NLDH and WRDH . Because the weak interaction rates are so very much suppressed with respect to the strong and electromagnetic ones, it could well happen that, even if such admixtures are very tiny, they might still give sizeable enough contributions (through the strong and electromagnetic interaction Hamiltonians) as to exceed $`W`$-mediated NLDH and WRDH and explain the enhancement phenomenon observed in them. We have referred to this scheme as “a priori mixings” and in Refs. we have shown that indeed they can be used to describe very satisfactorily the experimental data of NLDH and WRDH.
Another notorious example of the enhancement phenomenon occurs in $`K\to \pi \pi `$ decays. If the a priori mixings approach is to be successful it is imperative that it also describes these decays. This is what we shall study in this paper.
The experimental data on these decays can be presented in terms of the decay amplitudes $`A_{+0}`$, $`A_{+-}`$, and $`A_{00}`$, corresponding respectively to the modes $`K^+\to \pi ^+\pi ^0`$, $`K_1\to \pi ^+\pi ^{-}`$, and $`K_1\to \pi ^0\pi ^0`$, namely,
$`|A_{+0}|=(1.831\pm 0.006)\times 10^{-8}\mathrm{GeV},`$
$$|A_{+-}|=(3.911\pm 0.007)\times 10^{-7}\mathrm{GeV},$$
(1)
$`|A_{00}|=(3.714\pm 0.015)\times 10^{-7}\mathrm{GeV}.`$
One readily appreciates $`|A_{+-}|\approx |A_{00}|\gg |A_{+0}|`$. In terms of isospin amplitudes of two weak interaction Hamiltonians with $`I=1/2`$ and $`I=3/2`$, respectively, the experimental data mean that the $`I=1/2`$ piece is strongly enhanced with respect to the $`I=3/2`$ one. Actually, if the latter is neglected one should have $`|A_{+-}|=|A_{00}|`$ and $`|A_{+0}|=0`$. This has been referred to as the $`|\mathrm{\Delta }I|=1/2`$ rule.
We shall now apply the a priori mixing scheme to the three decays. The physical (mass eigenstates) mesons with parity and flavor violating admixtures are given by
$`K_{ph}^+=K_{0p}^+-\sigma \pi _{0p}^+-\delta ^{\prime }\pi _{0s}^++\mathrm{\cdots }`$
$`K_{ph}^0=K_{0p}^0+\frac{1}{\sqrt{2}}\sigma \pi _{0p}^0+\sqrt{\frac{3}{2}}\sigma \eta _{0p}+\sqrt{\frac{2}{3}}\delta \eta _{0s}+\frac{1}{\sqrt{3}}\delta \chi _{0s}+\frac{1}{\sqrt{2}}\delta ^{\prime }\pi _{0s}^0+\frac{1}{\sqrt{6}}\delta ^{\prime }\eta _{0s}-\frac{1}{\sqrt{3}}\delta ^{\prime }\chi _{0s}+\mathrm{\cdots }`$
$`\pi _{ph}^+=\pi _{0p}^++\sigma K_{0p}^+-\delta K_{0s}^++\mathrm{\cdots }`$
$$\pi _{ph}^0=\pi _{0p}^0-\frac{1}{\sqrt{2}}\sigma (K_{0p}^0+\overline{K}_{0p}^0)+\frac{1}{\sqrt{2}}\delta (K_{0s}^0-\overline{K}_{0s}^0)+\mathrm{\cdots }$$
(2)
$`\pi _{ph}^{-}=\pi _{0p}^{-}+\sigma K_{0p}^{-}+\delta K_{0s}^{-}+\mathrm{\cdots }`$
$`\overline{K}_{ph}^0=\overline{K}_{0p}^0+\frac{1}{\sqrt{2}}\sigma \pi _{0p}^0+\sqrt{\frac{3}{2}}\sigma \eta _{0p}-\sqrt{\frac{2}{3}}\delta \eta _{0s}-\frac{1}{\sqrt{3}}\delta \chi _{0s}-\frac{1}{\sqrt{2}}\delta ^{\prime }\pi _{0s}^0-\frac{1}{\sqrt{6}}\delta ^{\prime }\eta _{0s}+\frac{1}{\sqrt{3}}\delta ^{\prime }\chi _{0s}+\mathrm{\cdots }`$
$`K_{ph}^{-}=K_{0p}^{-}-\sigma \pi _{0p}^{-}+\delta ^{\prime }\pi _{0s}^{-}+\mathrm{\cdots }`$
$`\eta _{ph}=\eta _{0p}-\sqrt{\frac{3}{2}}\sigma (K_{0p}^0+\overline{K}_{0p}^0)+\frac{1}{\sqrt{6}}(\delta +2\delta ^{\prime })(K_{0s}^0-\overline{K}_{0s}^0)+\mathrm{\cdots }`$
$`\chi _{ph}=\chi _{0p}-\frac{1}{\sqrt{3}}(\delta -\delta ^{\prime })(K_{0s}^0-\overline{K}_{0s}^0)+\mathrm{\cdots }.`$
In all these expressions the dots stand for other mixings that will not be used here, and the subindices naught, $`s`$, and $`p`$ refer to strong-flavor, positive, and negative parity eigenstates. Our phase conventions are those of Ref..
Here, we shall not consider $`CP`$ violation and therefore the above states should not be necessarily $`CP`$-eigenstates, however notice that the physical mesons satisfy $`CP\,K_{ph}^+=K_{ph}^{-}`$, etc. We can form the $`CP`$-eigenstates $`K_1`$ and $`K_2`$ by
$$K_{1_{ph}}=\frac{1}{\sqrt{2}}(K_{ph}^0-\overline{K}_{ph}^0)\quad \text{and}\quad K_{2_{ph}}=\frac{1}{\sqrt{2}}(K_{ph}^0+\overline{K}_{ph}^0),$$
(3)
the $`K_{1_{ph}}`$ ($`K_{2_{ph}}`$) is an even (odd) state with respect to $`CP`$.
Substituting the expressions given in Eqs. (2), we obtain (we dropped the naught subindex in the strong-flavor eigenstates to simplify the notation),
$`K_{1_{ph}}=K_{1_p}+\frac{1}{\sqrt{3}}(2\delta +\delta ^{\prime })\eta _s+\delta ^{\prime }\pi _s^0+\sqrt{\frac{2}{3}}(\delta -\delta ^{\prime })\chi _s,`$
$$K_{2_{ph}}=K_{2_p}+\sigma \pi _p^0+\sqrt{3}\sigma \eta _p$$
(4)
where the usual definitions $`K_{1_p}=(K_p^0-\overline{K}_p^0)/\sqrt{2}`$ and $`K_{2_p}=(K_p^0+\overline{K}_p^0)/\sqrt{2}`$ were used.
From Eqs. (4) we obtain some important conclusions. Since the Hamiltonian, $`H_{st}`$, responsible for the decay $`K\to \pi \pi `$ is by assumption isoscalar and also a flavour and parity conserving one, we notice that the physical state $`K_{1_{ph}}`$ can only decay into two pions and not into three pions. In the latter case, the final state made out of three pions has total angular momentum equal to zero. The parity is odd, since each of the pions has negative parity, and then the Hamiltonian cannot make the transition. In the case of having two pions in the final state, the transition is possible and proportional to the constants $`\delta `$ and $`\delta ^{\prime }`$. Similarly for the state $`K_{2_{ph}}`$: it has to go to three pions and the amplitude is proportional to $`\sigma `$. The above qualitative behavior is observed experimentally neglecting $`CP`$-violation effects.
We now write explicitly the amplitudes for the decays $`K\to \pi \pi `$. The three amplitudes we want are then $`A_{+0}=\langle \pi _{ph}^+\pi _{ph}^0|H_{st}|K_{ph}^+\rangle `$, $`A_{+-}=\langle \pi _{ph}^+\pi _{ph}^{-}|H_{st}|K_{1_{ph}}\rangle `$, and $`A_{00}=\langle \pi _{ph}^0\pi _{ph}^0|H_{st}|K_{1_{ph}}\rangle `$. After the substitution of the physical mass eigenstates given in Eqs. (2) we obtain,
$`A_{+0}=-\delta ^{\prime }\langle \pi _p^+\pi _p^0|H_{st}|\pi _s^+\rangle +\frac{1}{\sqrt{2}}\delta \langle \pi _p^+K_s^0|H_{st}|K_p^+\rangle -\delta \langle K_s^+\pi _p^0|H_{st}|K_p^+\rangle `$
$`A_{+-}`$ $`=`$ $`\frac{1}{\sqrt{3}}(2\delta +\delta ^{\prime })\langle \pi _p^+\pi _p^{-}|H_{st}|\eta _s\rangle +\delta ^{\prime }\langle \pi _p^+\pi _p^{-}|H_{st}|\pi _s^0\rangle +\sqrt{\frac{2}{3}}(\delta -\delta ^{\prime })\langle \pi _p^+\pi _p^{-}|H_{st}|\chi _s\rangle `$ (6)
$`-\frac{1}{\sqrt{2}}\delta \langle K_s^+\pi _p^{-}|H_{st}|K_p^0\rangle -\frac{1}{\sqrt{2}}\delta \langle \pi _p^+K_s^{-}|H_{st}|\overline{K}_p^0\rangle `$
$`A_{00}`$ $`=`$ $`\frac{1}{\sqrt{3}}(2\delta +\delta ^{\prime })\langle \pi _p^0\pi _p^0|H_{st}|\eta _s\rangle +\delta ^{\prime }\langle \pi _p^0\pi _p^0|H_{st}|\pi _s^0\rangle +\sqrt{\frac{2}{3}}(\delta -\delta ^{\prime })\langle \pi _p^0\pi _p^0|H_{st}|\chi _s\rangle `$
$`+\frac{1}{2}\delta \langle \pi _p^0K_s^0|H_{st}|K_p^0\rangle +\frac{1}{2}\delta \langle \pi _p^0\overline{K}_s^0|H_{st}|\overline{K}_p^0\rangle +\frac{1}{2}\delta \langle K_s^0\pi _p^0|H_{st}|K_p^0\rangle +\frac{1}{2}\delta \langle \overline{K}_s^0\pi _p^0|H_{st}|\overline{K}_p^0\rangle `$
In the right hand side of these equations, the amplitudes are flavor and parity conserving. The two pions in the physical final state of these amplitudes are either in the $`I=0`$ or in the $`I=2`$ isospin configuration, since the $`I=1`$ state is forbidden by the generalized Bose principle . Also, there is no contribution from the $`I=2`$ component since the interaction Hamiltonian is an isosinglet (so, amplitudes with a $`\pi `$ ($`I=1`$) in the initial state vanish). Therefore, we can write such amplitudes in terms of a single strong coupling constant and take into account the final state interaction by introducing a multiplicative phase factor. We will denote each amplitude in the form, $`\langle M_i^1M_j^2|H_{st}|M_k^3\rangle =G_{M^3,M^1M^2}^{k,ij}e^{i\alpha }`$.
The amplitudes for the three decays considered become,
$`A_{+0}=\delta (\frac{1}{\sqrt{2}}G_{K^+,\pi ^+K^0}^{p,ps}-G_{K^+,K^+\pi ^0}^{p,sp})e^{i\alpha _0}`$
$$A_{+-}=[\frac{1}{\sqrt{3}}(2\delta +\delta ^{\prime })G_{\eta ,\pi ^+\pi ^{-}}^{s,pp}+\sqrt{\frac{2}{3}}(\delta -\delta ^{\prime })G_{\chi ,\pi ^+\pi ^{-}}^{s,pp}]e^{i\alpha _1}$$
(7)
$`A_{00}=[\frac{1}{\sqrt{3}}(2\delta +\delta ^{\prime })G_{\eta ,\pi ^0\pi ^0}^{s,pp}+\sqrt{\frac{2}{3}}(\delta -\delta ^{\prime })G_{\chi ,\pi ^0\pi ^0}^{s,pp}]e^{i\alpha _1}`$
Above we have used the assumption that the strong coupling constants have the property $`(G_{M^3,M^1M^2}^{i,jk})^{CPT}=G_{\overline{M}^3,\overline{M}^1\overline{M}^2}^{i,jk}`$. Also, we have used the property $`\langle j_1j_2m_1m_2|j_1j_2JM\rangle =(-1)^{J-j_1-j_2}\langle j_2j_1m_2m_1|j_2j_1JM\rangle `$ of the $`SU(2)`$ Clebsch-Gordan coefficients to simplify the expressions for the amplitudes. The phase introduced for the final state interaction depends only on the total isospin of the final particles; it is for this reason that $`A_{+-}`$ and $`A_{00}`$ have the same phase factor.
It is easy to see from Eqs. (7) that in the $`SU(2)`$ symmetry limit we obtain the so-called $`|\mathrm{\Delta }I|=1/2`$ rule predictions: $`A_{+0}=0`$ and $`A_{+-}=A_{00}`$. For instance, from the $`SU(2)`$ Clebsch-Gordan Tables we get $`G_{K^+,\pi ^+K^0}^{p,ps}=\sqrt{2/3}G_{K,\pi K}`$ and $`G_{K^+,K^+\pi ^0}^{p,sp}=(1/\sqrt{3})G_{K,\pi K}`$.
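These SU(2)-limit statements can be checked numerically with the standard isospin Clebsch-Gordan values quoted above; a minimal sketch (the phase conventions are those of the text):

```python
import math

# K+ -> (pi K) couplings in units of G_{K,pi K}
g_pi_plus_K0 = math.sqrt(2.0 / 3.0)    # <pi+ K0 | K+>
g_K_plus_pi0 = 1.0 / math.sqrt(3.0)    # <K+ pi0 | K+>, reordering phase included

A_plus_zero = g_pi_plus_K0 / math.sqrt(2.0) - g_K_plus_pi0
print(abs(A_plus_zero) < 1e-12)        # True: A_{+0} vanishes in the SU(2) limit

# <pi+ pi- | I=0> and <pi0 pi0 | I=0> have equal magnitude (1/sqrt(3)),
# so |A_{+-}| = |A_{00}| in the same limit.
```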
Since these $`|\mathrm{\Delta }I|=1/2`$ rule equalities are not rigorously exact, it is important to remark that in the a priori mixing scheme their deviations are necessarily proportional to $`SU(2)`$ breaking contributions, in contrast to $`W`$-mediated decays where $`SU(3)`$ breaking is relevant (the mass of the $`s`$-quark). That is, in the present scheme the following ratios are proportional to the order of the $`SU(2)`$ symmetry breaking, $`ϵ`$,
$$\frac{|A_{+0}|}{|A_{+-}|}\approx \frac{|A_{+0}|}{|A_{00}|}\approx \frac{|A_{+-}-A_{00}|}{|A_{+-}|}\approx \frac{|A_{+-}-A_{00}|}{|A_{00}|}\sim ϵ$$
(8)
From the experimental data of Eqs. (1), these ratios are respectively
$$4.68\%\approx 4.93\%\approx 5.04\%\approx 5.30\%\sim ϵ$$
(9)
These numbers are indeed quite small and can be accepted as compatible with the order of $`SU(2)`$ breaking. Actually, $`ϵ`$ will become smaller if small contributions of the $`W`$-mediated decays are introduced, i.e. assuming that the $`|\mathrm{\Delta }I|=1/2`$ piece of this hamiltonian is not enhanced and is of the size of the $`|\mathrm{\Delta }I|=3/2`$ piece. In this case then one can conclude that the $`SU(2)`$ symmetry limit is even better than the estimate of Eq. (9).
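The percentages of Eq. (9) follow from simple arithmetic on the measured amplitudes of Eq. (1); a minimal check:

```python
a_p0, a_pm, a_00 = 1.831e-8, 3.911e-7, 3.714e-7   # GeV, from Eq. (1)

ratios = [a_p0 / a_pm, a_p0 / a_00,
          (a_pm - a_00) / a_pm, (a_pm - a_00) / a_00]
print([f"{100.0 * r:.2f}%" for r in ratios])      # ['4.68%', '4.93%', '5.04%', '5.30%']
```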
Since the a priori mixing angles have been determined in Ref. to be of order $`10^{-7}`$, we then see that the strong coupling constants (which will remain unmeasured for a long time) are of the order of one, as should be the case. Finally, let us mention that in the a priori mixing scheme the “$`|\mathrm{\Delta }I|=1/2`$” rule is really a “$`\mathrm{\Delta }I=0`$” rule, because the interaction Hamiltonian is the strong flavour conserving one.
We would like to thank CONACyT (México) for partial support.
# Measurement of inclusive prompt photon photoproduction at HERA
## 1 Introduction
One of the primary aims of photoproduction measurements in $`ep`$ collisions at HERA is the elucidation of the hadronic behaviour of the photon. The measurement of jets at high transverse energy has provided much information in this area . In the study of inclusive jets, next-to-leading order (NLO) QCD calculations are able to describe the experimental data over a wide range of kinematic conditions, although the agreement is dependent on the jet algorithm . However, significant discrepancies between data and NLO theories are found in dijet measurements . A further means to study photoproduction is provided by final states with an isolated high-transverse-energy photon. These have the particular merit that the photon may emerge directly from the hard QCD subprocess (“prompt” photons), and also can be investigated without the hadronisation corrections needed in the case of quarks or gluons. In a previous measurement by ZEUS at HERA , it was shown that prompt photons, accompanied by balancing jets, are produced at the expected level in photoproduction and with the expected event characteristics. This work is extended in the present paper through the use of a much larger event sample taken in 1996-97, corresponding to an integrated $`ep`$ luminosity of 38.4 pb<sup>-1</sup>. This allows a measurement of inclusive prompt photon distributions as a function of pseudorapidity $`\eta ^\gamma `$ and transverse energy $`E_T^\gamma `$ of the photon, and a comparison with LO and NLO QCD predictions.
## 2 Apparatus and trigger
During 1996-97, HERA collided positrons with energy $`E_e=27.5`$ GeV with protons of energy $`E_p=820`$ GeV. The luminosity was measured by means of the bremsstrahlung process $`ep\to e\gamma p`$.
A description of the ZEUS apparatus and luminosity monitor is given elsewhere . Of particular importance in the present work are the uranium calorimeter (CAL) and the central tracking detector (CTD).
The CAL has an angular coverage of 99.7% of $`4\pi `$ and is divided into three parts (FCAL, BCAL, RCAL), covering the forward (proton direction), central and rear angular ranges, respectively. Each part consists of towers longitudinally subdivided into electromagnetic (EMC) and hadronic (HAC) cells. The electromagnetic section of the BCAL (BEMC) consists of cells of $`20`$ cm length azimuthally and mean width 5.45 cm in the $`Z`$ direction<sup>1</sup><sup>1</sup>1 The ZEUS coordinate system is right-handed with positive-$`Z`$ in the proton beam direction and an upward-pointing $`Y`$ axis. The nominal interaction point is at $`X=Y=Z=0.`$ , at a mean radius of $`1.3`$ m from the beam line. These cells have a projective geometry as viewed from the interaction point. The profile of the electromagnetic signals observed in clusters of cells in the BEMC provides a partial discrimination between those originating from photons or positrons, and those originating from neutral meson decays.
The CTD is a cylindrical drift chamber situated inside a superconducting solenoid which produces a 1.43 T field. Using the tracking information from the CTD, the vertex of an event can be reconstructed with a resolution of 0.4 cm in $`Z`$ and 0.1 cm in $`X,Y`$. In this analysis, the CTD tracks are used to reconstruct the event vertex, and also in the selection criteria for high-$`E_T`$ photons.
The ZEUS detector uses a three-level trigger system, of which the first- and second-level triggers used in this analysis have been described previously . The third-level trigger made use of a standard ZEUS electron finding algorithm to select events with an electromagnetic cluster of transverse energy $`E_T>`$ 4 GeV in the BCAL, with no further tracking requirements at this stage. These events represent the basic sample of prompt photon event candidates.
## 3 Event selection
The offline analysis was based on previously developed methods . An algorithm for finding electromagnetic clusters was applied to the data, and events were retained for final analysis if a photon candidate with $`E_T>5`$ GeV was found in the BCAL. A photon candidate was rejected if a CTD track, as measured at the vertex, pointed to it within 0.3 radians; this removed almost all high-$`E_T`$ positrons and electrons, including the majority of those that underwent hard radiation. The BCAL requirement restricts the photon candidates to the approximate pseudorapidity<sup>2</sup><sup>2</sup>2 All kinematic quantities are given in the laboratory frame. Pseudorapidity $`\eta `$ is defined as $`-\mathrm{ln}\mathrm{tan}(\theta /2)`$, where $`\theta `$ is the polar angle relative to the $`Z`$ direction, measured from the $`Z`$ position of the event vertex. range $`-0.75<\eta ^\gamma <1.0`$.
Events with an identified deep inelastic scattered (DIS) positron in addition to the BCAL photon candidate were removed, thus restricting the acceptance to incident photons of virtuality $`Q^2\stackrel{<}{}1`$ GeV<sup>2</sup>. The quantity $`y^{meas}=\sum (E-p_Z)/2E_e`$ was calculated, where the sum is over all calorimeter cells, $`E`$ is the energy deposited in the cell, and $`p_Z=E\mathrm{cos}\theta `$. When the outgoing positron is not detected in the CAL, $`y^{meas}`$ is a measure of $`y=E_{\gamma ,in}/E_e`$, where $`E_{\gamma ,in}`$ is the energy of the incident photon. If the outgoing positron is detected in the CAL, $`y^{meas}\approx 1`$. A requirement of $`0.15<y^{meas}<0.7`$ was imposed; the lower cut removed some residual proton-gas backgrounds while the upper cut removed remaining DIS events, including any with a photon candidate that was actually a misidentified DIS positron. Wide-angle Compton scattering events ($`ep\to e\gamma p`$) were also excluded by this cut. This range of accepted $`y^{meas}`$ values corresponds approximately to the true $`y`$ range $`0.2<y<0.9`$.
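A minimal sketch of the $`y^{meas}`$ estimator just described (the cell list and its format are assumptions made for illustration only):

```python
import math

def y_meas(cells, e_beam=27.5):
    """y^meas = sum(E - p_Z) / (2 E_e); cells is a list of (E, theta) with p_Z = E cos(theta)."""
    return sum(e * (1.0 - math.cos(theta)) for e, theta in cells) / (2.0 * e_beam)

# toy event: one rear-going electromagnetic deposit plus some forward hadronic energy
print(round(y_meas([(8.0, 2.4), (30.0, 0.3)]), 2))   # ~0.28, inside the 0.15-0.7 window
```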
An isolation cone was imposed around the photon candidate: within a cone of unit radius in $`(\eta ,\varphi )`$, the total $`E_T`$ from other particles was required not to exceed $`0.1E_T^\gamma `$. This was calculated by summing the $`E_T`$ in each calorimeter cell within the isolation cone. Further contributions were included from charged tracks which originated within the isolation cone but curved out of it; the small number of tracks which curved into the isolation cone were ignored. The isolation condition much reduces the dijet background by removing a large majority of the events where the photon candidate is closely associated with a jet and is therefore either hadronic (e.g. a $`\pi ^0`$) or else a photon radiated within a jet. In particular, the isolation condition removes most dijet events in which a photon is radiated from a final-state quark. Approximately 6000 events with $`E_T^\gamma >5`$ GeV remained after the above cuts.
Studies based on the single-particle Monte Carlo samples showed that the photon energy measured in the CAL was on average less than the true value, owing to dead material in front of the CAL. To compensate for this, an energy correction, typically 0.2 GeV, was added.
## 4 Monte Carlo simulations
In describing the hard interaction of photons of low virtuality with protons, two major classes of diagram are important. In one of these the photon couples in a pointlike way to a $`q\overline{q}`$ pair, while in the other the photon interacts via an intermediate hadronic state, which provides quarks and gluons which then take part in the hard QCD subprocesses. At leading order (LO) in QCD, the pointlike and hadronic diagrams are distinct and are commonly referred to as direct and resolved processes, respectively.
In the present analysis, three types of Monte Carlo samples were employed to simulate: (1) the LO QCD prompt photon processes, (2) dijet processes in which an outgoing quark radiated a hard photon (radiative events), and (3) single particles ($`\gamma `$, $`\pi ^0`$, $`\eta `$) at high $`E_T`$. All generated events were passed through a full GEANT-based simulation of the ZEUS detector.
The PYTHIA 5.7 and HERWIG 5.9 Monte Carlo generators were both used to simulate the direct and resolved prompt photon processes. These generators include LO QCD subprocesses and higher-order processes modelled by initial- and final-state parton showers. The parton density function (pdf) sets used were MRSA for the proton, and GRV(LO) for the photon. The minimum $`p_T`$ of the hard scatter was set to 2.5 GeV. No multi-parton interactions were implemented in the resolved samples. The radiative event samples were likewise produced using direct and resolved photoproduction generators within PYTHIA and HERWIG.
In modelling the overall photoproduction process, the event samples produced for the separate direct, resolved and radiative processes were combined in proportion to their total cross sections as calculated by the generators. A major difference between PYTHIA and HERWIG is the smaller radiative contribution in the HERWIG model.
Three Monte Carlo single-particle data sets were generated, comprising large samples of $`\gamma `$, $`\pi ^0`$ and $`\eta `$. The single particles were generated uniformly over the acceptance of the BCAL and with a flat $`E_T`$ distribution between 3 and 20 GeV; $`E_T`$-dependent exponential weighting functions were subsequently applied to reproduce the observed distributions. These samples were used in separating the signal from the background using shower shapes.
## 5 Evaluation of the photon signal
Signals in the BEMC that do not arise from charged particles are predominantly due to photons, $`\pi ^0`$ mesons and $`\eta `$ mesons. A large fraction of these mesons decay into multiphoton final states, the $`\pi ^0`$ through its $`2\gamma `$ channel and the $`\eta `$ through its $`2\gamma `$ and $`3\pi ^0`$ channels. For $`\pi ^0,`$ $`\eta `$ produced with $`E_T`$ greater than a few GeV, the photons from the decays are separated in the BEMC by distances comparable to the BEMC cell width in $`Z`$. Therefore the discrimination between photons and neutral mesons was performed on the basis of cluster-shape characteristics, thus avoiding any need to rely on theoretical modelling of the background.
A typical high-$`E_T`$ photon candidate consists of signals from a cluster of 4–5 BEMC cells. Two shape-dependent quantities were used to distinguish $`\gamma `$, $`\pi ^0`$ and $`\eta `$ signals . These were (i) the mean width $`<\delta Z>`$ of the cell cluster in $`Z`$, which is the direction of finer segmentation of the BEMC, and (ii) the fraction $`f_{max}`$ of the cluster energy found in the most energetic cell in the cluster. The quantity $`<\delta Z>`$ is defined as $`\left(\sum E_{cell}|Z_{cell}-\overline{Z}|\right)/\sum E_{cell},`$ summing over the cells in the cluster, where $`\overline{Z}`$ is the energy-weighted mean $`Z`$ value of the cells. The $`<\delta Z>`$ distribution for the event sample is shown in Figure 1(a), in which peaks due to the photon and $`\pi ^0`$ contributions are clearly visible.<sup>3</sup><sup>3</sup>3The displacement of the photon peak from the Monte Carlo prediction does not affect the present analysis; the poor fit in the region $`<\delta Z>`$ = 0.6–1.0 is taken into account in the systematic errors. The Monte Carlo samples of single $`\gamma `$, $`\pi ^0`$ and $`\eta `$ were used to establish a cut on $`<\delta Z>`$ at 0.65 BEMC cell widths, such as to remove most of the $`\eta `$ mesons but few of the photons and $`\pi ^0`$s. Candidates with lower $`<\delta Z>`$ were retained, thus providing a sample that consisted of photons, $`\pi ^0`$ mesons and a small admixture of $`\eta `$ mesons.
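The two shape quantities can be written down directly; the sketch below assumes a simple (E, Z) cell list for illustration and quotes $`<\delta Z>`$ in BEMC cell widths of 5.45 cm, as in the text:

```python
CELL_WIDTH = 5.45   # cm, mean BEMC cell width in Z

def cluster_shapes(cells):
    """cells: list of (E_cell, Z_cell) for one electromagnetic cluster."""
    e_tot = sum(e for e, _ in cells)
    z_bar = sum(e * z for e, z in cells) / e_tot                   # energy-weighted mean Z
    mean_dz = sum(e * abs(z - z_bar) for e, z in cells) / e_tot    # <delta Z> in cm
    f_max = max(e for e, _ in cells) / e_tot
    return mean_dz / CELL_WIDTH, f_max

dz, fmax = cluster_shapes([(0.5, 100.0), (6.0, 105.45), (0.8, 110.9)])  # toy cluster
print(round(dz, 2), round(fmax, 2))   # ~0.21 cell widths, ~0.82: compact, photon-like
```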
The extraction of the photon signal from the mixture of photons and a neutral meson background was done by means of the $`f_{max}`$ distributions. Figure 1(b) shows the shape of the $`f_{max}`$ distribution for the final event sample, after the $`<\delta Z>`$ cut, fitted to the $`\eta `$ component determined from the $`<\delta Z>`$ distribution plus freely-varying $`\gamma `$ and $`\pi ^0`$ contributions. Above an $`f_{max}`$ value of 0.75, the distribution is dominated by the photons; below this value it consists mainly of meson background. Since the shape of the $`f_{max}`$ distribution is similar for the $`\eta `$ and $`\pi ^0`$ contributions, the background subtraction is insensitive to uncertainties in the fitted $`\pi ^0`$ to $`\eta `$ ratio.
The numbers of candidates with $`f_{max}>0.75`$ and $`f_{max}<0.75`$ were calculated for the sample of events occurring in each bin of a measured quantity. From these numbers, and the ratios of the corresponding numbers for the $`f_{max}`$ distributions of the single particle samples, the number of photon events in the given bin was evaluated .
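One standard way to realise the subtraction described above is to solve a two-bin system per measurement bin, using the fractions of the single-particle Monte Carlo samples above and below the $`f_{max}`$ = 0.75 boundary; the sketch below (with invented fractions and counts) illustrates the idea rather than the exact implementation used.

```python
def photon_yield(n_high, n_low, frac_gamma_high, frac_bkg_high):
    """Solve n_high = f_g N_g + f_b N_b and n_high + n_low = N_g + N_b for N_g."""
    n_tot = n_high + n_low
    return (n_high - frac_bkg_high * n_tot) / (frac_gamma_high - frac_bkg_high)

# invented example: 85% of single-photon MC and 30% of neutral-meson MC at f_max > 0.75
print(round(photon_yield(400, 300, frac_gamma_high=0.85, frac_bkg_high=0.30)))   # ~345 photons
```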
## 6 Cross section calculation and systematic uncertainties
Cross sections are given for the photoproduction process $`ep\to \gamma (\text{prompt})+X,`$ taking place in the incident $`\gamma p`$ centre-of-mass energy $`(W)`$ range 134–285 GeV, i.e. $`0.2<y<0.9`$. The virtuality of the incident photon is restricted to the range $`Q^2\stackrel{<}{}1`$ GeV<sup>2</sup>, with a median value of approximately $`10^{-3}`$ GeV<sup>2</sup>. The cross sections represent numbers of events within a given bin, divided by the bin width and integrated luminosity. They are given at the hadron level, with an isolation cone defined around the prompt photon as at the detector level. To obtain the hadron-level cross sections, bin-by-bin correction factors were applied to the corresponding detector-level distributions; these factors were calculated using PYTHIA.
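The cross section in each bin then follows from the extracted photon yield, the bin width, the integrated luminosity and the PYTHIA-based bin-by-bin correction factor; all numbers in the sketch below are invented placeholders.

```python
def dsigma_dx(n_gamma, bin_width, lumi, correction):
    """Differential cross section per bin: correction is the hadron/detector-level MC ratio."""
    return correction * n_gamma / (bin_width * lumi)

# invented example: 345 photons in a 2.5 GeV wide E_T bin, 38.4 pb^-1, correction factor 1.2
print(round(dsigma_dx(345, 2.5, 38.4, 1.2), 1))   # ~4.3 pb / GeV
```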
The following sources of systematic error were taken into account:
* Calorimeter simulation: the uncertainty of the simulation of the calorimeter response gives rise to an uncertainty on the cross sections of $`\pm 7\%`$;
* Modelling of the shower shape: uncertainties on the agreement of the simulated $`f_{max}`$ distributions with the data correspond to a systematic error averaging $`\pm 8\%`$ on the final cross sections;
* Kinematic cuts: the cuts defining the accepted kinematic range at the detector level were varied by amounts corresponding to the resolution on the variables. Changes of up to 5% in the cross section were observed;
* $`\eta /(\eta +\pi ^0)`$ ratio: the fitted value was typically 25%; variations of this ratio in the range 15–35% led to cross section variations of around $`\pm 2\%`$;
$``$ Vertex cuts: narrowing the vertex cuts to ($`-25,+15`$) cm from their standard values of ($`-50,+40`$) cm gave changes in the cross sections of typically $`\pm 4\%`$.
In addition, studies were made of the effects of using HERWIG instead of PYTHIA for the correction factors, of varying the $`E_T`$ distribution applied to the single-particle samples, and of varying the composition of the Monte Carlo simulation in terms of direct, resolved and radiative processes. These gave changes in the cross sections at the 1% level. The 1.6% uncertainty on the integrated luminosity was neglected. The individual contributions were combined in quadrature to give the total systematic error.
## 7 Theoretical calculations
In presenting cross sections, comparison is made with two types of theoretical calculation, in which the pdf sets taken for both the photon and proton can be varied, although there is little sensitivity to the choice of proton pdf. These are:
(i) PYTHIA and HERWIG calculations evaluated at the final-state hadron level, as outlined in Sect. 4. Each of these programs comprises a set of LO matrix elements augmented by parton showers in the initial and final states together with hadronisation;
(ii) NLO parton-level calculations of Gordon (LG) and of Krawczyk and Zembrzuski (K&Z) . Pointlike and hadronic diagrams at the Born level are included, together with virtual (loop) corrections and terms taking into account three-body final states. The radiative terms are evaluated by means of fragmentation functions obtained from experiment. In both calculations, the isolation criterion was applied at the parton level.
The LG and K&Z calculations differ in several respects . The K&Z calculation includes a box-diagram contribution for the process $`\gamma g\to \gamma g`$ , but excludes higher-order corrections to the resolved terms which are present in LG. A value of $`\mathrm{\Lambda }_{\overline{MS}}=200`$ MeV (5 flavours) is used in LG while in K&Z a value of 320 MeV (4 flavours) is used, so as to reproduce a fixed value of $`\alpha _S=0.118`$ at the $`Z^0`$ mass. The standard versions of both calculations use a QCD scale of $`p_T^2`$. Both calculations use higher-order (HO) versions of the GRV and GS photon pdf sets.
## 8 Results
Figure 2 and Table 1 give the inclusive cross-section $`d\sigma /dE_T^\gamma `$ for the production of isolated prompt photons in the range $`-0.7<\eta ^\gamma <0.9`$ for $`0.2<y<0.9`$. All the theoretical models describe the shape of the data well; however the predictions of PYTHIA and especially HERWIG are too low in magnitude. The LG and K&Z calculations give better agreement with the data.
Figure 3 and Table 2 give the inclusive cross-section $`d\sigma /d\eta ^\gamma `$ for isolated prompt photons in the range $`5<E_T^\gamma <10`$ GeV for $`0.2<y<0.9`$. Using the GRV pdf’s in the photon, PYTHIA gives a good description of the data for forward pseudorapidities. The HERWIG distribution, while similar in shape to that of PYTHIA, is lower throughout; this is attributable chiefly to the lower value of the radiative contribution in HERWIG (see Sect. 4). The LG and K&Z calculations using GRV are similar to each other and to PYTHIA. All the calculations lie below the data in the lower $`\eta ^\gamma `$ range.
The effects were investigated of varying some of the parameters of the K&Z calculation relative to their standard values (NLO, 4 flavours, $`\mathrm{\Lambda }_{\overline{MS}}=320`$ MeV, GRV photon pdf). Reducing the number of flavours used in the calculation to three (with $`\mathrm{\Lambda }_{\overline{MS}}=365`$ MeV) reduced the cross sections by 35–40% across the $`\eta ^\gamma `$ range, confirming the need to take charm into account. A LO calculation (with $`\mathrm{\Lambda }_{\overline{MS}}=120`$ MeV and a NLO radiative contribution) was approximately 25% lower than the standard NLO calculation. Variations of the QCD scale between $`0.25E_T^2`$ and $`4E_T^2`$ gave cross-section variations of approximately $`\pm 3\%`$.
Figure 3(b) illustrates the effects of varying the photon parton densities, comparing the results using GRV with those using GS. The ACFGP parton set gives results (not shown) similar to GRV. All NLO calculations describe the data well for $`\eta ^\gamma >0.1`$, as does PYTHIA, but are low at more negative $`\eta ^\gamma `$ values, where the curves using the GS parton densities give poorer agreement than those using GRV.
As a check on the above results, the same cross sections were evaluated with the additional requirement that each event should contain a jet (see ) with $`E_T\ge 5`$ GeV in the pseudorapidity range ($`-1.5`$, 1.8). Both the measured and theoretical distributions were found to be of a similar shape to those in Fig. 3.
The discrepancy between data and theory at negative $`\eta ^\gamma `$ is found to be relatively strongest at low values of $`y`$. Figure 4 shows the inclusive cross section $`d\sigma /d\eta ^\gamma `$ as in Fig. 3, evaluated for the three $`y`$ ranges 0.2–0.32, 0.32–0.5 and 0.5–0.9 by selecting the $`y^{meas}`$ ranges 0.15–0.25, 0.25–0.4 and 0.4–0.7 at the detector level. The numerical values are listed in Table 2. In the lowest $`y`$ range, both theory and data show a peaking at negative $`\eta ^\gamma `$, but it is stronger in the data. The Monte Carlo calculations indicate that the peak occurs at more negative $`\eta ^\gamma `$ values as $`y`$ increases, eventually leaving the measurement acceptance. In the highest $`y`$ range (Fig. 4(c)), agreement is found between theory and data. The movement of the peak can be qualitatively understood by noting that for fixed values of $`E_T`$ and $`x_\gamma `$, where $`x_\gamma `$ is the fraction of the incident photon energy that contributes to the resolved QCD subprocesses, measurements at increasing $`y`$ correspond on average to decreasing values of pseudorapidity. By varying the theoretical parameters, the discrepancy was found to correspond in the K&Z calculation to insufficient high $`x_\gamma `$ partons in the resolved photon.
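This qualitative behaviour can be illustrated with the standard photoproduction estimator $`x_\gamma =\mathrm{\Sigma }E_Te^{-\eta }/(2yE_e)`$; the sketch below assumes a symmetric photon plus jet topology and an electron beam energy of 27.5 GeV, both illustrative assumptions rather than quantities taken from this analysis.

```python
import math

E_e = 27.5      # electron beam energy in GeV (assumed HERA value)
E_T = 6.0       # photon (and balancing jet) transverse energy in GeV
x_gamma = 0.9   # assumed fraction of the photon energy in the hard subprocess

# For a symmetric photon + jet system, x_gamma ~ 2 E_T exp(-eta) / (2 y E_e),
# so the photon pseudorapidity is eta = -ln(x_gamma * y * E_e / E_T).
for y in (0.25, 0.40, 0.70):
    eta = -math.log(x_gamma * y * E_e / E_T)
    print(f"y = {y:.2f}  ->  eta_gamma ~ {eta:+.2f}")
```

Increasing $`y`$ at fixed $`E_T`$ and $`x_\gamma `$ indeed pushes the photon towards more negative pseudorapidity, eventually outside the measured range.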
## 9 Summary and conclusions
The photoproduction of isolated prompt photons within the kinematic range $`0.2<y<0.9`$, equivalent to incident $`\gamma p`$ centre-of-mass energies $`W`$ of 134–285 GeV, has been measured in the ZEUS detector at HERA, using an integrated luminosity of 38.4 pb⁻¹. Inclusive cross sections for $`ep\rightarrow \gamma +X`$ have been presented as a function of $`E_T^\gamma `$ for photons in the pseudorapidity range $`-0.7<\eta ^\gamma <0.9`$, and as a function of $`\eta ^\gamma `$ for photons with $`5<E_T^\gamma <10`$ GeV. The latter results have been given also for three subdivisions of the $`y`$ range. All kinematic quantities are quoted in the laboratory frame.
Comparisons have been made with predictions from leading-logarithm parton-shower Monte Carlos (PYTHIA and HERWIG), and from next-to-leading-order parton-level calculations. The models are able to describe the data well for forward (proton direction) photon pseudorapidities, but are low in the rear direction. None of the available variations of the model parameters was found to be capable of removing the discrepancy with the data. The disagreement is strongest in the $`W`$ interval 134–170 GeV, but not seen within the measurement acceptance for $`W>212`$ GeV. This result, together with the disagreements with NLO predictions seen also in recent dijet results at HERA, would appear to indicate a need to review the present theoretical modelling of the parton structure of the photon.
## Acknowledgements
It is a pleasure to thank the DESY directorate and staff for their unfailing support and encouragement. The outstanding efforts of the HERA machine group in providing high luminosity for the 1996–97 running are much appreciated. We are also extremely grateful to L. E. Gordon, M. Krawczyk and A. Zembrzuski for helpful conversations, and for making available to us calculations of the NLO prompt photon cross section and a computer program (K&Z).
# Modeling the April 1997 flare of Mkn 501
## I The co-acceleration scenario
With its giant outburst in 1997, emitting photons up to $`24`$ TeV and $`0.5`$ MeV in the $`\gamma `$-ray and X-ray bands, Mkn 501 has proved to be the most extreme TeV-blazar observed so far (e.g. Catanese et al 1997, Pian et al 1997, Aharonian et al 1999).
In this paper, we consider the April 1997 flare of Mkn 501 in the light of a modified version of the Synchrotron Proton Blazar model (SPB) (Mannheim 1993), and present a preliminary model fit.
In the model, shock accelerated protons ($`p`$) interact in the synchrotron photon field generated by the electrons ($`e^{}`$) co-accelerated at the same shock. This scenario may put constraints on the maximum achievable particle energies.
The usual process considered for accelerating charged particles in the plasma jet is diffusive shock acceleration (see e.g. Drury 1983, Biermann & Strittmatter 1987). If the particle spectra are cut off due to synchrotron losses, the ratio of the maximum particle energies $`\gamma _{p,max}/\gamma _{e,max}`$ can be derived by equating $`t_{acc,p}/t_{acc,e}=t_{syn,p}/t_{syn,e}`$, with $`t_{syn,p}`$ and $`t_{syn,e}`$ being the synchrotron loss time scales for $`p`$ and $`e^{}`$, respectively. We find that for shocks of compression ratio 4 (see Mücke & Protheroe 1999 for a detailed derivation)
$$\frac{\gamma _{p,\mathrm{max}}}{\gamma _{e,\mathrm{max}}}\le \frac{m_p}{m_e}\left(\frac{m_p}{m_e}\right)^{\frac{2(\delta -1)}{3-\delta }}\sqrt{\frac{F(\theta ,\eta _{e,\mathrm{max}})}{F(\theta ,\eta _{p,\mathrm{max}})}}=\frac{m_p}{m_e}\sqrt{\frac{\eta _{e,\mathrm{max}}F(\theta ,\eta _{e,\mathrm{max}})}{\eta _{p,\mathrm{max}}F(\theta ,\eta _{p,\mathrm{max}})}}$$
(1)
where the “=”-sign corresponds to synchrotron loss, and the “$`<`$”-sign to adiabatic loss determining the maximum energies. $`\delta `$ is the power law index of the magnetic turbulence spectrum ($`\delta =5/3`$: Kolmogorov turbulence, $`\delta =3/2`$: Kraichnan turbulence, and $`\delta =1`$ corresponds to Bohm diffusion). $`\eta _{e,\mathrm{max}}`$ is the mean free path at maximum energy in units of the particle’s gyroradius and $`F(\theta ,\eta _{e,\mathrm{max}})`$ takes account of the shock angle $`\theta `$ (Jokipii 1987). The ratio $`F(\theta ,\eta _{e,\mathrm{max}})\eta _{e,\mathrm{max}}/F(\theta ,\eta _{p,\mathrm{max}})\eta _{p,\mathrm{max}}`$ can be constrained by the variability time scale $`t_{\mathrm{var}}`$, requiring $`t_{\mathrm{var}}D\ge t_{acc,p,max}`$ ($`D`$ = Doppler factor, $`t_{acc,p,max}`$ = acceleration time scale at maximum particle energy) for a given parameter combination. As an example, we adopt $`D=10`$, $`B=20`$ G and $`t_{\mathrm{var}}=2`$ days. Eq. 1 then restricts for these parameters the ratio of the allowed maximum particle energies to the range below the solid lines shown in Fig. 1. Points exactly on this line correspond to synchrotron-loss limited particle spectra which are accelerated with exactly the variability time scale.
In hadronic models $`\pi `$ photoproduction is essential for $`\gamma `$-ray production. The threshold of this process is given by $`ϵ_{\mathrm{max}}\gamma _{p,\mathrm{max}}=0.0745`$ GeV where $`ϵ_{\mathrm{max}}`$ is the maximum photon energy of the synchrotron target field. Inserting $`ϵ_{\mathrm{max}}=3/8\gamma _{e,\mathrm{max}}^2B/(4.414\times 10^{13}\mathrm{G})\mathrm{m}_\mathrm{e}\mathrm{c}^2`$ into the threshold condition, we find
$$\gamma _{p,\mathrm{max}}\ge 1.72\times 10^{16}\left(\frac{B}{\mathrm{Gauss}}\right)^{-1}\gamma _{e,\mathrm{max}}^{-2}$$
which is shown in Fig. 1 as dashed line for various magnetic field strengths. Together with Eq. 1, the allowed range of maximum particle energies is then restricted to the shaded area in Fig. 1.
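As a numerical illustration of this threshold condition, the sketch below evaluates the bound on $`\gamma _{p,\mathrm{max}}`$ for a few electron Lorentz factors; the $`\gamma _{e,\mathrm{max}}`$ values are illustrative choices, not fitted quantities.

```python
def gamma_p_threshold(B_gauss, gamma_e_max):
    """Minimum gamma_p,max able to exceed the p-gamma pion-production
    threshold on the electron-synchrotron target photons (see text)."""
    return 1.72e16 / (B_gauss * gamma_e_max ** 2)

B = 20.0  # magnetic field in Gauss, as adopted above
for gamma_e in (1e4, 1e5, 1e6):   # illustrative electron Lorentz factors
    print(f"gamma_e,max = {gamma_e:.0e}  ->  "
          f"gamma_p,max >= {gamma_p_threshold(B, gamma_e):.2e}")
```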
## II The Mkn 501 flare in the Synchrotron Proton Blazar (SPB) Model
We assume the parameters used in Fig. 1, and that the co-accelerated $`e^{}`$ produce the observed synchrotron spectrum, unlike in previous SPB models, and this is the target radiation field for the $`p\gamma `$-interactions. This synchrotron spectrum, and its hardening with rising flux, has recently been convincingly reproduced by a shock model with escape and synchrotron losses (Kirk et al 1998). We use the Monte-Carlo technique for particle production/cascade development, which allows us to use exact cross sections.
For simplicity we represent the observed synchrotron spectrum (target photon field for the $`p\gamma `$-collisions) as a broken power law in the jet frame with photon power law index 1.4 below the break energy of 0.2 keV, and index 1.8 up to 50 keV.
The variability time scale restricts the radius $`R`$ of the emission region. For our model we use $`t_{\mathrm{var},\mathrm{x}}\approx 2`$ days (Catanese et al 1997), and find $`R\approx 2.6\times 10^{16}`$ cm for $`D=10`$, $`B=19.6`$ G. With these parameters the $`\gamma \gamma `$-pair production optical depth reaches unity for $`25`$ TeV photons.
Our model considers photomeson production (simulated using SOPHIA, Mücke et al 1999), Bethe-Heitler pair production (simulated using the code of Protheroe & Johnson 1996), $`p`$ synchrotron radiation and adiabatic losses due to jet expansion. The mean energy loss and acceleration time scales are presented in Fig. 2.
Synchrotron losses, which turn out to be at least as important as losses due to $`\pi `$ photoproduction for the assumed 2 day variability, limit the injected $`p`$ spectrum $`\gamma _p^{-2}`$ to $`2\le \gamma _p\le 4.4\times 10^{10}`$. This leads to a $`p`$ energy density $`u_p\approx 0.2\mathrm{TeV}/\mathrm{cm}^3`$, which is bracketed by the photon energy density $`u_{\mathrm{target}}\approx 0.01\mathrm{TeV}/\mathrm{cm}^3`$, and a magnetic field energy density $`u_B\approx 9.5\mathrm{TeV}/\mathrm{cm}^3`$. With $`u_B\gg u_{\mathrm{target}}`$ significant Inverse Compton radiation from the co-accelerated $`e^{-}`$ is not expected.
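The quoted magnetic field energy density follows from $`u_B=B^2/8\pi `$; a one-line check of the unit conversion:

```python
import math

B = 19.6                      # magnetic field in Gauss
erg_per_TeV = 1.602           # 1 TeV = 1.602 erg

u_B = B ** 2 / (8.0 * math.pi) / erg_per_TeV   # erg/cm^3 -> TeV/cm^3
print(f"u_B ~ {u_B:.1f} TeV/cm^3")             # ~9.5 TeV/cm^3, as quoted above
```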
Rachen & Meszaros (1998) noted the importance of synchrotron losses of $`\mu ^\pm `$- (and $`\pi ^\pm `$-) prior to their decay in AGN jets and GRBs. For the present model, the critical Lorentz factors $`\gamma _\mu \approx 3\times 10^9`$ and $`\gamma _\pi \approx 4\times 10^{10}`$, above which synchrotron losses dominate over decay, lie well below the maximum particle energy for $`\mu ^\pm `$, while $`\pi ^\pm `$-synchrotron losses can be neglected due to the shorter decay time.
The matrix method (e.g. Protheroe & Johnson 1996) is used to follow the pair-synchrotron cascade in the ambient synchrotron radiation field and magnetic field, developing as a result of photon-photon pair production. The cascade can be initiated by photons from $`\pi ^0`$-decay (“$`\pi ^0`$-cascade”), electrons from the $`\pi ^\pm \mu ^\pm e^\pm `$-decay (“$`\pi ^\pm `$-cascade”), $`e^\pm `$ from the proton-photon Bethe-Heitler pair production (“Bethe-Heitler-cascade”) and $`p`$ and $`\mu `$-synchrotron photons (“$`p`$-synchrotron cascade” and “$`\mu ^\pm `$-synchrotron cascade”). In this model, the cascades develop linearly.
Fig. 3 shows an example of cascade spectra initiated by photons of different origin, and for the parameter combination given above. $`\pi ^0`$- and $`\pi ^\pm `$-cascades obviously produce featureless spectra whereas $`p`$- and $`\mu ^\pm `$-synchrotron cascades cause the typical double hump shaped SED as observed in $`\gamma `$-ray blazars (see also Rachen, these proceedings). The contribution from Bethe-Heitler cascades turns out to be negligible. Direct $`p`$- and $`\mu ^\pm `$-synchrotron radiation is responsible for the high energy peak, whereas the low energy hump may be either synchrotron radiation from the directly accelerated $`e^{-}`$ or synchrotron radiation from the pairs produced by the high energy hump.
Adding the four components of the cascade spectrum in Fig. 3 and normalizing to an ambient, accelerated $`p`$ density of $`n_{tot,p}=7\mathrm{cm}^3`$, we obtain the SED shown in Fig. 4 where it is compared with the multifrequency observations of the 16 April 1997 flare of Mkn 501.
# Composition Patterning in Systems Driven by Competing Dynamics
## Abstract
We study an alloy system where short-ranged, thermally-driven diffusion competes with externally imposed, finite-ranged, athermal atomic exchanges, as is the case in alloys under irradiation. Using a Cahn-Hilliard-type approach, we show that when the range of these exchanges exceeds a critical value, labyrinthine concentration patterns at a mesoscopic scale can be stabilized. Furthermore, these steady-state patterns appear only for a window of the frequency of forced exchanges. Our results suggest that ion beams may provide a novel route to stabilize and tune the size of nanoscale structural features in materials.
The spontaneous formation of steady-state patterns have been extensively observed in many equilibrium and nonequilibrium systems . While for equilibrium systems (e.g. ferrofluids, block copolymer melts, etc.) patterning is a result of the competition between repulsive and attractive interactions of different length scales, in nonequilibrium systems (e.g. reaction-diffusion systems, etc.), steady-state patterning is often the result of the competition between several dynamical mechanisms. A conceptual connection between the two classes of systems can sometimes be realized with the construction of Lyapunov functionals and effective Hamiltonians, by which steady-state pattern formation in dynamical systems is interpreted as resulting from the competition between different types of effective interactions.
The kinetic Ising-type model with competing dynamics, and its continuum mean field counterpart, are instruments by which we hope to understand a whole class of nonequilibrium driven systems , ranging from fast ionic conductors to alloys under irradiation. The main ingredient of this model is the competition between two dynamics: on one hand, a thermally-driven mechanism trying to bring the system to thermodynamical equilibrium; on the other hand, externally imposed particle exchanges of a nature essentially athermal. The usual attempt has been to express the steady state of the system in terms of effective Hamiltonians and effective thermodynamic potentials. P. Garrido, J. Marro, and collaborators , were able to derive effective Hamiltonians for several types of 1D Ising models with competing dynamics. Z. Rácz and collaborators , studied the relation between the range of the externally imposed exchanges and the range of the effective interactions. In the context of alloys under irradiation, using a kinetic Ising-type model, Vaks and Kamyshenko derived a formal expression for the steady state probability distribution in terms of effective interactions, while from a continuum perspective, Martin studied the corresponding dynamical phase diagram by an effective free energy. The possibility of patterning as a result of the competing dynamics has not been considered in these works. However, in the limiting case of arbitrary length external exchanges, patterning has been recently observed in mean field and Monte Carlo simulations . In this situation, the coarsening of segregated phases (magnetic domains in the Ising case) saturates, leading to a steady-state labyrinthine patterning at a mesoscopic length scale. This microstructure is rationalized in terms of a competition between the attractive nearest neighbors interactions, and a repulsive electrostatic-like effective interaction. These patterns do not appear if the external exchanges are short range, e.g. when they occur between nearest neighbors . The behavior difference between these two limiting regimes raises the question of whether there exists a critical value for the range of external exchanges for patterning to occur. The main objective of this Letter is to address this question, which besides having its own theoretical interest, is relevant to alloys under irradiation. Indeed, forced relocation of atoms in displacement cascades may extend beyond nearest-neighbor distances, especially in the case of dense cascades or for open crystal structures .
To render the problem more concrete, let us consider a binary alloy with a positive heat of mixing, under irradiation. Each time an external particle collides with the solid, a local atomic rearrangement is produced. These rearrangements have a ballistic component that mixes the atoms regardless of their chemical identity, trying to bring the system to a random solid solution. Due to its local nature, the ballistic mixing will relocate atoms in a region of characteristic radius $`R`$. The case $`R\rightarrow \mathrm{\infty }`$, or arbitrary-length ballistic exchanges, has already been studied: The macroscopic governing equation is identical to that describing a binary alloy undergoing a chemical reaction $`A\rightleftharpoons B`$ , and the one describing a block copolymer (BCP) melt . From the studies of these systems, the physics of this case is well understood. In terms of the frequency of forced exchanges $`\mathrm{\Gamma }`$, it has been shown that while high values bring the system to a random solid solution, there is a critical value below which the homogeneous concentration profile becomes unstable towards phase separation. As in spinodal decomposition, enriched regions form and coarsen. However, the characteristic length of the domains $`l`$, instead of growing indefinitely towards a macroscopic phase separation, saturates at a mesoscopic scale, $`l_{\mathrm{\infty }}`$. For $`\mathrm{\Gamma }`$ values close to the critical value, the steady-state concentration profile has a sinusoidal-wave appearance, with diffuse interfaces, what is referred to as the weak-segregation regime. For smaller $`\mathrm{\Gamma }`$ values, the concentration profile presents sharper interfaces, with a square-wave-like appearance, what is referred to as the strong-segregation regime. In each regime, the characteristic length has been shown to follow a power law with the exchange frequency $`l_{\mathrm{\infty }}\propto \mathrm{\Gamma }^{-\varphi }`$, with an exponent $`\varphi `$ of 1/4 for the weak and 1/3 for the strong-segregation regime .
In the case of ballistic exchanges of a finite range $`R`$, there should still be a critical value $`\mathrm{\Gamma }(R)`$ below which the system phase separates. The question is whether phase coexistence takes place at a macroscopic or mesoscopic scale. In principle, we can predict that if certain $`\mathrm{\Gamma }`$ equilibrates a wavelength $`l`$ for arbitrary length exchanges, finite range exchanges must also generate patterns when $`R\gg l`$. It is, however, difficult to determine a priori the exact conditions under which coarsening will saturate as a function of $`R`$ and $`\mathrm{\Gamma }`$. In this Letter we show that given a certain $`R`$, there is an interval $`[\mathrm{\Gamma }_1(R),\mathrm{\Gamma }_2(R)]`$ for the stabilization of patterns. Above $`\mathrm{\Gamma }_2`$ the homogeneous concentration profile is stable, and below $`\mathrm{\Gamma }_1`$ coarsening continues with time, with the system separating into macroscopic phases. The extent of this interval for patterning decreases as $`R`$ decreases, reaching a zero value at a critical, nonzero value $`R_c`$, when $`\mathrm{\Gamma }_1(R_c)=\mathrm{\Gamma }_2(R_c)=\mathrm{\Gamma }_c`$. For ballistic mixing with a radius smaller than $`R_c`$, patterning is not possible. We also show that, given a mixing distance $`R`$, there is an upper bound for the wavelengths attainable as $`\mathrm{\Gamma }\rightarrow \mathrm{\Gamma }_1^+(R)`$.
We study the problem using a Cahn-Hilliard-type description of one-dimensional fronts, simulating the walls of the labyrinthine patterns, and we construct a variational formulation to investigate the solution. The equation describing the temporal evolution is composed of two terms, one for thermal diffusion and another one for ballistic mixing :
$$\frac{\partial \psi }{\partial t}=\frac{\partial \psi ^{\text{th}}}{\partial t}+\frac{\partial \psi ^{\text{bal}}}{\partial t}.$$
(1)
Here we have chosen to represent the concentration field by a globally conserved order parameter $`\psi (𝐱)`$ so that the homogeneous concentration profile (solid solution) corresponds to $`\psi =0`$. In the previous equation, the first term is simply given by $`M\nabla ^2(\frac{\delta F}{\delta \psi })`$, where $`F`$ is the global free energy, and we have assumed a constant mobility $`M`$. As for the second term, we actually need to perform a derivation. For that purpose, let us consider first ballistic mixing occurring one dimensionally between planes along a crystallographic direction.
$$\frac{\partial \psi _i^{\text{bal}}}{\partial t}=-\mathrm{\Gamma }\sum _jw_j(\psi _i-\psi _{i+j})=-\mathrm{\Gamma }(\psi _i-\langle \psi \rangle ),$$
(2)
where $`w_j`$ is a normalized weight function describing the distribution of ballistic exchange distances, and the brackets denote the corresponding (discrete) weighted spatial average. The extension to the continuum is immediate, and we write the governing equation as:
$$\frac{\partial \psi }{\partial t}=M\nabla ^2\left(\frac{\delta F}{\delta \psi }\right)-\mathrm{\Gamma }(\psi -\langle \psi \rangle _R).$$
(3)
$`w_R(𝐱)`$ is now a continuous function peaked around the origin with a width proportional to $`R`$, and the average denoted by the brackets is defined as:
$$\langle \psi \rangle _R=\int w_R(𝐱-𝐱^{\prime })\psi (𝐱^{\prime })d𝐱^{\prime }.$$
(4)
In the limit $`R\rightarrow 0`$, the ballistic term reduces to a Laplacian term, expressing diffusion. In the limit $`R\rightarrow \mathrm{\infty }`$, we recover the governing equation for the case of arbitrary-length ballistic exchanges.
In analogy to what was done in the case of arbitrary-length exchanges, we seek to find a Lyapunov functional for this problem, which we shall refer to as the free energy functional of the system . This idea actually traces back to the work of Leibler , and Ohta and Kawasaki , on block copolymer melts. This functional is given by $`E=F+\gamma G`$, and it is built so as to determine the kinetics: $`\frac{\partial \psi }{\partial t}=M\nabla ^2(\frac{\delta E}{\delta \psi })`$. $`G`$ is a new term describing effective interactions related to the ballistic term, and to simplify the notation, we use $`\gamma =\mathrm{\Gamma }/M`$ for the rest of this paper.
For $`F`$, we use a Ginzburg-Landau free energy
$$F=\int (-A\psi ^2+B\psi ^4+C|\nabla \psi |^2)d𝐱,$$
(5)
while $`G`$ is expressed by a self-interaction of the form
$$G=\frac{1}{2}\int \int \psi (𝐱)g(𝐱-𝐱^{\prime })\psi (𝐱^{\prime })d𝐱d𝐱^{\prime },$$
(6)
with $`g`$ a kernel satisfying
$$\nabla ^2g(𝐱-𝐱^{\prime })=-\left(\delta (𝐱-𝐱^{\prime })-w_R(𝐱-𝐱^{\prime })\right).$$
(7)
To proceed further, at this point we need to make a choice of the weight function $`w_R`$. A Yukawa-type potential has been proposed by N. Goldenfeld . This form fits the observed distribution of distances of ballistic exchanges for crystals under irradiation , while allowing us to handle part of the minimization problem analytically. In one dimension, $`w_R(u)=\frac{1}{2R}\mathrm{exp}(-|u|/R)`$.
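A minimal one-dimensional sketch of Eq. (3), with the free energy of Eq. (5) and the exponential weight function above, can be integrated spectrally; the semi-implicit scheme, grid, and parameter values below are illustrative choices and are not those used for the figures in this Letter.

```python
import numpy as np

# Illustrative parameters; Gamma is chosen inside the patterning window.
A, B, C, M = 1.0, 1.0, 1.0, 1.0
R, Gamma = 2.0, 0.4
L, N = 256.0, 512
dt, steps = 0.005, 40000

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
w_hat = 1.0 / (1.0 + (R * k) ** 2)                    # Fourier transform of w_R
lin = 2*A*M*k**2 - 2*C*M*k**4 - Gamma*(1.0 - w_hat)   # linear part of Eq. (3)

rng = np.random.default_rng(0)
psi = 0.01 * rng.standard_normal(N)                   # noise around psi = 0

for _ in range(steps):
    nl_hat = -4.0 * B * M * k**2 * np.fft.fft(psi**3)      # nonlinear term
    psi_hat = (np.fft.fft(psi) + dt * nl_hat) / (1.0 - dt * lin)
    psi = np.real(np.fft.ifft(psi_hat))                    # semi-implicit step

spectrum = np.abs(np.fft.fft(psi)) ** 2
k_star = abs(k[1 + np.argmax(spectrum[1:N // 2])])
print(f"selected wavenumber ~ {k_star:.3f}, wavelength ~ {2*np.pi/k_star:.1f}")
```

Run to a late time, the mixture settles into a periodic profile whose wavelength saturates at a mesoscopic value instead of coarsening indefinitely.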
Having stated the problem, we start by performing a stability analysis of Eq. 3. For small perturbations of the form $`e^{\omega t+ikx}`$ around a constant profile $`\psi =0`$, it is straightforward to find the dispersion relation:
$$\frac{\omega (k)}{M}=2Ak^2-2Ck^4-\frac{\gamma R^2k^2}{1+R^2k^2}.$$
(8)
A series of plots of this dispersion relationship for different values of $`\gamma `$ is shown in Fig. 1. As in the case of arbitrary length ballistic exchanges, there is a critical value $`\gamma _2`$ below which the homogeneous concentration profile becomes unstable. Below this value, there is a window of $`k`$ values $`(k_1,k_2)`$ for which the homogeneous solution is locally unstable, suggesting a wavelength selection. For smaller values of $`\gamma `$, the dispersion relation resembles the one of spinodal decomposition, suggesting macroscopic phase separation.
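The unstable band can be located directly from Eq. (8); in the short sketch below $`A=C=M=1`$ and the values of $`R`$ and $`\gamma `$ are illustrative.

```python
import numpy as np

A, C, M = 1.0, 1.0, 1.0

def omega(k, gamma, R):
    """Growth rate of Eq. (8) for a perturbation exp(omega t + i k x)."""
    return M * (2*A*k**2 - 2*C*k**4 - gamma*R**2*k**2 / (1.0 + R**2*k**2))

R = 2.0
gamma_2 = (A + C / R**2) ** 2 / (2 * C)   # threshold below which a band opens
k = np.linspace(1e-4, 1.5, 4000)

for gamma in (0.5 * gamma_2, 0.9 * gamma_2, 1.1 * gamma_2):
    unstable = k[omega(k, gamma, R) > 0]
    if unstable.size:
        print(f"gamma = {gamma:.3f}: unstable for "
              f"{unstable[0]:.3f} < k < {unstable[-1]:.3f}")
    else:
        print(f"gamma = {gamma:.3f}: homogeneous profile linearly stable")
```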
To confirm these predictions, we use a variational approach, based in minimizing the free energy functional $`E`$. Let us consider first the weak-segregation regime, where the choice of $`w_R`$ allows us to solve the problem analytically, and then consider the strong-segregation regime, where we need to appeal to a numerical treatment.
In the weak-segregation regime, we can perform the minimization of $`E`$ by considering a sine family of parametric functions, $`\psi (x)=\alpha \mathrm{sin}(kx)`$. We obtain the energy per unit length:
$$E(\alpha ,k)=-\alpha ^2\frac{A}{2}+\alpha ^4\frac{3B}{8}+\alpha ^2k^2\frac{C}{2}+\alpha ^2\frac{\gamma R^2}{4(1+k^2R^2)}.$$
(9)
Minimization of this energy is performed analytically, and concentration patterning, indicated by solutions with nonzero values of $`k`$ and $`\alpha `$, is found for an interval in $`\gamma `$. In reality, the value of $`\gamma _1`$ predicted by these parametric functions is an underestimation. Before that value is reached, the energy per unit length for the macroscopically separated system becomes lower than the energy per unit length for the sine profile. The crossover point determines the actual value of $`\gamma _1`$ in the weak segregation regime approximation, where patterning occurs in the interval given by:
$$\gamma _1=\left(\frac{2\sqrt{3C}+\sqrt{6[(3-\sqrt{6})AR^2-(\sqrt{6}-2)C]}}{3R^2}\right)^2,$$
(10)
$$\gamma _2=\frac{(A+C/R^2)^2}{2C}.$$
(11)
This interval shrinks to zero at a critical value of $`R_c=\sqrt{C/A}`$, corresponding to $`\gamma _c=2A^2/C`$. As a consequence of $`\gamma _1`$ being determined by a crossover of energies, a transition towards macroscopic phase separation occurs at a finite value of $`k`$. As a result, there is a bound in the wavelength of the patterns for a given $`R`$.
In the strong-segregation regime, we need to improve over the approximation of sine waves for a proper evaluation of $`\gamma _1`$ away from the critical point. To this purpose, we propose to minimize the free energy functional using the tanh-sine family of parametric functions $`\psi (x)=\alpha \mathrm{tanh}(m/k\mathrm{sin}(kx))`$. The parameter $`m`$ serves to change the wave profile continuously from a sinusoidal type, to a tanh-like type with sharp interfaces, matching the concentration profile of an equilibrium interface.
It is easy to minimize first with respect to the parameter $`\alpha `$. We obtain the expression, for $`\alpha >0`$,
$$E=-\frac{(A\epsilon _1-C\epsilon _3-\gamma \epsilon _4)^2}{4B\epsilon _2},$$
(12)
where the $`\epsilon _i`$ quantities denote the energy per unit length associated with each one of the terms in the free energy functional. Here we can see that the values of $`\gamma `$ and $`k`$ that minimize the free energy are independent of the parameter $`B`$, which only relates to the amplitude $`\alpha `$.
The energy per unit length (the $`\epsilon `$ terms) cannot be obtained analytically for these functions, so we proceed with a numerical strategy. The free energy per unit length is computed by numerical integration, and the actual minimization is performed by means of the subroutine MNFB from NETLIB , based on a secant Hessian approximation. Figure 2 shows $`k`$ versus $`\gamma `$ plots for a series of values of $`R`$. The physical parameters $`A`$ and $`C`$ are set to unity. From this plot we obtain the dependency of $`\gamma _1`$ and the corresponding wave vector $`k_1`$ (related to the maximum attainable wavelength) as a function of $`R`$. Figure 3 is a double-log plot of these quantities for large values of $`R`$, showing a power law dependency. A fit of the data with the power laws $`\gamma _1=pR^\theta `$ and $`k_1=qR^\sigma `$ yields the quantities $`p=2.26\pm 0.03`$, $`\theta =-3.039\pm 0.006`$ and $`q=0.54\pm 0.02`$, $`\sigma =-1.03\pm 0.01`$. Still, an almost perfect fit is obtained by the simple laws: $`\gamma _1=2/R^3`$ and $`k_1=1/(2R)`$, as shown in the figure. The latter relationship has the physical interpretation that $`R`$ is the parameter determining the maximum wavelength. Furthermore, the combination of these two relationships yields the power law for the $`R\rightarrow \mathrm{\infty }`$ case: $`k\propto \gamma ^{1/3}`$. For values of $`A`$ and $`C`$ not equal to one, the corresponding dependencies can be derived by dimensional analysis, giving: $`\gamma _1=2\sqrt{AC}/R^3`$ and $`k_1=1/(2R)`$. Figure 4 summarizes the steady-state regimes in the $`\gamma `$-$`R`$ space. Patterning may occur in the present model when the range of the forced exchanges exceeds a critical value, with a maximum wavelength proportional to that range. In the case of alloys under irradiation, typical $`R`$ values range from 2 to 10 $`\AA `$, suggesting that patterns up to 100 $`\AA `$ could be stabilized. Experimental work to test these predictions is in progress.
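The numerical strategy just described can be sketched with generic tools; here the energy per unit length of the tanh-sine profile is integrated over one period and minimized directly over $`(\alpha ,k,m)`$ with a general-purpose optimizer. The use of scipy in place of MNFB, and the specific parameter values, are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

A, B, C = 1.0, 1.0, 1.0
R, gamma = 4.0, 0.1        # illustrative point inside the patterning window
N = 1024                   # grid points per period

def energy_per_length(params):
    """E/length for psi(x) = alpha * tanh((m/k) sin(kx)) over one period."""
    alpha, k, m = params
    k, m = max(abs(k), 1e-3), abs(m)
    lam = 2.0 * np.pi / k
    x = np.arange(N) * lam / N
    psi = alpha * np.tanh((m / k) * np.sin(k * x))
    q = 2.0 * np.pi * np.fft.fftfreq(N, d=lam / N)
    psi_hat = np.fft.fft(psi)
    dpsi = np.real(np.fft.ifft(1j * q * psi_hat))          # spectral derivative
    local = np.mean(-A*psi**2 + B*psi**4 + C*dpsi**2)      # Eq. (5) per length
    g_hat = R**2 / (1.0 + (R * q)**2)                      # kernel of Eq. (7)
    nonlocal_term = 0.5 * gamma * np.sum(np.abs(psi_hat / N)**2 * g_hat)
    return local + nonlocal_term

res = minimize(energy_per_length, x0=[0.7, 0.3, 0.5], method="Nelder-Mead")
alpha, k, m = res.x
print(f"alpha ~ {alpha:.3f}, k ~ {abs(k):.3f}, m ~ {abs(m):.3f}, "
      f"E/L ~ {res.fun:.4f}")
```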
We are very grateful to Nigel Goldenfeld for his clever suggestions and enlighting discussions. We also want to thank the MRL Center for Computation at the University of Illinois for their assistance. This work was partly supported by the US Department of Energy grant DOFG02-96ER45439 through the University of Illinois Materials Research Laboratory.
# Rank Two Bipartite Bound Entangled States Do Not Exist
## Abstract
We explore the relation between the rank of a bipartite density matrix and the existence of bound entanglement. We show a relation between the rank, marginal ranks, and distillability of a mixed state and use this to prove that any rank $`n`$ bound entangled state must have support on no more than an $`n\times n`$ Hilbert space. A direct consequence of this result is that there are no bipartite bound entangled states of rank two. We also show that a separability condition in terms of a quantum entropy inequality is associated with the above results. We explore the idea of how many pure states are needed in a mixture to cancel the distillable entanglement of a Schmidt rank $`n`$ pure state and provide a lower bound of $`n-1`$. We also prove that a mixture of a non-zero amount of any pure entangled state with a pure product state is distillable.
Perhaps the central topic in quantum information theory has been the study of entanglement, the nonclassical correlations between separated parts of a quantum system. In the early days of quantum theory Einstein, Podolsky and Rosen discuss the paradoxical “spooky action at a distance” of entangled particle pairs to express their disbelief that quantum theory could provide a complete picture of reality . Later, Bell used entanglement to prove that quantum mechanics is inconsistent with local reality . An understanding of entanglement seems to be at the heart of theories of quantum computation and quantum cryptography , as it has been at the heart of quantum mechanics itself.
In the case of bipartite pure states entanglement is rather well understood, in the sense that there is a good measure of entanglement, namely the entropy of the reduced density matrix of one party. For a pure state $`|\psi \rangle `$ in a Hilbert space belonging to two parties (traditionally named Alice and Bob) $`\mathcal{H}_A\otimes \mathcal{H}_B`$ we define
$$E(|\psi \rangle \langle \psi |)=S(\mathrm{Tr}_A|\psi \rangle \langle \psi |)=S(\mathrm{Tr}_B|\psi \rangle \langle \psi |)$$
(1)
where $`S`$ is the von Neumann entropy function of a density matrix given by $`S(\rho )=-\mathrm{Tr}(\rho \mathrm{log}\rho )`$. Different entangled bipartite pure states of the same entanglement $`E`$ can be asymptotically converted amongst one another while conserving entanglement . In the case of three or more parties sharing entanglement, the situation is more complicated (cf. ), as it is for mixed states even for only two parties .
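For concreteness, Eq. (1) can be evaluated numerically by tracing out one party; the sketch below uses an illustrative two-qubit state.

```python
import numpy as np

def entanglement_entropy(psi, dims):
    """E of Eq. (1): von Neumann entropy of Tr_B |psi><psi| (in ebits)."""
    dA, dB = dims
    m = psi.reshape(dA, dB)            # coefficients psi_{ij}
    rho_A = m @ m.conj().T             # reduced density matrix of Alice
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Illustrative state |psi> = cos(t)|00> + sin(t)|11>
t = np.pi / 8
psi = np.array([np.cos(t), 0.0, 0.0, np.sin(t)])
print(f"E(|psi>) = {entanglement_entropy(psi, (2, 2)):.4f} ebits")
```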
There are several measures of entanglement for bipartite mixed states. The entanglement of formation $`E_f`$ is defined as
$$E_f(\rho )=\mathrm{min}\sum _{i=1}^kp_iE(|\psi _i\rangle )$$
(2)
where the minimization is over all ensembles of pure states $`\psi _i`$ and non-negative real numbers $`p_i`$ summing to 1 such that
$$\sum _{i=1}^kp_i|\psi _i\rangle \langle \psi _i|=\rho .$$
(3)
The distillable entanglement $`D(\rho )`$ is defined as the asymptotic amount of pure entanglement that can be gotten out of the mixed state $`\rho `$ using only local quantum operations and classical communication (these were defined originally in ). For pure states these two measures are equal and equal to the entropy of the reduced density matrix of each party as in Eq. (1). For mixed states, we have the property $`D\le E_f`$, reflecting the intuitive idea that one can’t distill more entanglement out of a state than was used in preparing it. Indeed, it has been shown that there exist states for which $`D=0`$ but $`E_f>0`$. Such states are known as the bound entangled states.
The entanglement of formation and the related relative entropy measure of entanglement share the problem that it is unknown whether or not they are additive. Is the entanglement of formation of two copies of a state $`\rho ^{\otimes 2}=\rho \otimes \rho `$ always exactly twice that of one copy? More generally, is the entanglement of formation of $`n`$ copies of a state always exactly $`n`$ times the entanglement of formation of just one copy? This has turned out to be a rather difficult problem to solve, but a conjecture about the bound entangled states of may give us a clue. Bound entangled states have finite entanglement of formation, but might it be that in the asymptotic limit as $`n`$ approaches infinity we have $`E_f(\rho _{\mathrm{bound}}^{\otimes n})/n\rightarrow 0`$, which would explain why no entanglement can be distilled from them? (Footnote: Since the initial submission of this paper, Vidal and Cirac have shown that bound entangled states do exist that have a nonzero asymptotic entanglement of formation (which is equal to the entanglement cost ). Still, the motivation remains to look for low rank bound entangled states as a means of finding states with subadditive entanglement of formation or even asymptotically zero but single-copy finite entanglement of formation.)
One way of exploring this question is to numerically calculate the entanglement of formation for a bound entangled state $`\rho `$, and then for $`\rho ^{\otimes 2}`$, $`\rho ^{\otimes 3}`$, etc., and look for subadditivity. We have done this for some small examples, but unfortunately the difficulty of the calculation scales exponentially with the number of copies of $`\rho `$, and our only results so far have been negative. On the other hand, the rate of the exponential growth is strongly dependent on the rank of the density matrix.
When trying to find the minimum ensemble (as in Eq. (2)) for a density matrix of rank $`R`$ it is known that at most $`k=R^2`$ pure states $`\psi _i`$ are required and each pure state is of dimension $`R`$. Thus, for a density matrix of the form $`\rho ^{\otimes n}`$ with $`\rho `$ of rank $`r`$, $`\rho ^{\otimes n}`$ will have rank $`R=r^n`$ and the minimization will in general require $`r^{2n}`$ vectors of dimension $`r^n`$. For this reason it is desirable to find bound entangled states of low rank. The bound entangled states of smallest known rank have rank four .
Recently results have been obtained suggesting the sensitivity of bound entanglement to rank. In it was shown by means of numerical analysis that randomly generated bound entangled states in $`2\times 4`$ typically have a participation ratio (a quantity related to the rank) $`\stackrel{~}{R}\equiv 1/\mathrm{Tr}(\varrho ^2)`$ between 5 and 6. On the other hand for $`2\times n`$ it was proved that all states which remain positive semi-definite under partial transposition (PPT states) of rank $`n`$ and full rank of the reduced density matrix of the second party are separable. The partial transpose is defined by
$$\mathrm{PT}(\rho )_{ij,kl}=\rho _{il,kj}$$
(4)
where the $`i`$ and $`k`$ indices are associated with Hilbert space $`\mathcal{H}_A`$ and the $`j`$ and $`l`$ indices with $`\mathcal{H}_B`$.
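A small numerical sketch of Eq. (4) and the corresponding PPT test (the dimensions and the test state are illustrative):

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """PT(rho)_{ij,kl} = rho_{il,kj}: transposition on the second party."""
    r = rho.reshape(dA, dB, dA, dB)        # indices (i, j, k, l)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho, dA, dB, tol=1e-10):
    """True if the partial transpose has no negative eigenvalue."""
    return np.linalg.eigvalsh(partial_transpose(rho, dA, dB)).min() > -tol

# Illustrative 2x2 example: a Bell state mixed with white noise.
bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
for p in (0.2, 0.5):
    rho = p * np.outer(bell, bell) + (1.0 - p) * np.eye(4) / 4.0
    print(f"p = {p}: PPT = {is_ppt(rho, 2, 2)}")
```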
In this paper we prove that if a bipartite density matrix’s rank is less than the rank of the marginal density matrix of either party, then the density matrix has distillable entanglement (is not bound). We use this to prove the negative result that there do not exist bound entangled states of rank two, and to put some restrictions on what a rank three bound entangled state must be like. Note that this implies that there are no unextendible product bases (UPBs) in $`m\times n`$ with $`mn-2`$ members because the complementary state to such a UPB would be rank 2 and bound entangled.
We also show that a mixture of a pure product state with a non-vanishing amount of any pure entangled state is distillable and hence not bound entangled. Finally we conjecture that any irreducible bound entangled state in $`n\times n`$ has rank greater than $`n`$ and show that this would imply that no rank three bound entangled state exists.
We will first prove a powerful theorem relating the distillability, rank and marginal ranks of a mixed state. Then we will use this result to prove that there exists no bound entangled state of rank two.
Consider a bipartite density matrix $`\rho `$ whose two parts belong to Alice and to Bob. We denote its marginal or local density matrices by $`\rho ^\mathrm{A}=\mathrm{Tr}_B(\rho )`$ and $`\rho ^\mathrm{B}=\mathrm{Tr}_A(\rho )`$ respectively obtained by tracing out Bob and Alice. Its marginal rank on Alice’s side, $`\mathrm{rank}(\rho ^\mathrm{A})`$, is the rank of $`\rho ^\mathrm{A}`$ and similarly for Bob. For pure states the marginal ranks are equal and are also called the Schmidt rank of the state . We say a state $`\rho `$ in $`m\times n`$ is irreducible if and only if $`\mathrm{rank}(\rho ^\mathrm{A})=m`$ and $`\mathrm{rank}(\rho ^\mathrm{B})=n`$. Intuitively this means that the density matrix fully utilizes each of the local Hilbert spaces of Alice and Bob.
In our proof we use the reduction criterion of separability and distillability : If a state $`\rho `$ is separable then $`\mathbb{1}\otimes \rho ^\mathrm{B}-\rho \ge 0`$ and $`\rho ^\mathrm{A}\otimes \mathbb{1}-\rho \ge 0`$. (A Hermitian matrix $`H`$ is positive semi-definite, or $`H\ge 0`$ for short, if and only if $`\langle \psi |H|\psi \rangle \ge 0`$ for all $`|\psi \rangle `$, or equivalently $`H`$ has no negative eigenvalues.) If this criterion is violated then $`\rho `$ is distillable. This provides a necessary condition for separability and a sufficient condition for distillability.
###### Theorem 1
If $`\mathrm{rank}(\rho )<\mathrm{max}[\mathrm{rank}(\rho ^\mathrm{A}),\mathrm{rank}(\rho ^\mathrm{B})]`$ then $`\rho `$ is distillable.
Proof: Without loss of generality let $`R=\mathrm{rank}(\rho ^\mathrm{A})>\mathrm{rank}(\rho )=r`$. By local filtering Alice takes $`\rho `$ to $`\rho _\mathrm{f}`$ such that,
$$\rho _\mathrm{f}^\mathrm{A}=\frac{1}{R}\mathbb{1}_R.$$
(5)
This can always be done with non-zero probability by applying the local filter $`W=\sum _{i=1}^R\frac{1}{\mu _iR}|\mu _i\rangle \langle \mu _i|`$, where $`\mu _i`$ are the non-zero eigenvalues and $`|\mu _i\rangle `$ are the corresponding eigenvectors of $`\rho ^\mathrm{A}`$. Thus the eigenvalues of $`\rho _\mathrm{f}^\mathrm{A}`$ are $`1/R`$. Since $`\rho _\mathrm{f}`$ has unit trace and is of rank $`r`$, its largest eigenvalue $`\lambda _{\mathrm{max}}`$ cannot be less than $`1/r`$ which in turn is larger than $`1/R`$. Choosing $`|\psi \rangle `$ to be the eigenvector of $`\rho _\mathrm{f}`$ corresponding to the eigenvalue $`\lambda _{\mathrm{max}}`$ we have
$$\langle \psi |\rho _\mathrm{f}^\mathrm{A}\otimes \mathbb{1}-\rho _\mathrm{f}|\psi \rangle =\langle \psi |\rho _\mathrm{f}^\mathrm{A}\otimes \mathbb{1}|\psi \rangle -\lambda _{\mathrm{max}}\le \frac{1}{R}-\frac{1}{r}<0.$$
(6)
Thus the reduction criterion for separability is violated and hence $`\rho _\mathrm{f}`$ is distillable. Since $`\rho _\mathrm{f}`$ can be obtained from $`\rho `$ by a local quantum operation (filtering), this implies $`\rho `$ is distillable. $`\mathrm{}`$
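The proof can be checked numerically for a random low-rank state. In the sketch below the filter is implemented as the Kraus operator proportional to $`(\rho ^\mathrm{A})^{-1/2}`$, one concrete realization of the filtering step; the dimensions and random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB, r = 3, 3, 2          # rank(rho) = 2 < 3 = rank(rho^A) (generically)

# Random rank-r state on C^dA (x) C^dB
vecs = rng.normal(size=(r, dA * dB)) + 1j * rng.normal(size=(r, dA * dB))
rho = sum(np.outer(v, v.conj()) for v in vecs)
rho /= np.trace(rho).real

def marginal_A(rho):
    """Partial trace over Bob."""
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Alice's filter: Kraus operator proportional to (rho^A)^(-1/2), which makes
# the filtered marginal proportional to the identity.
mu, U = np.linalg.eigh(marginal_A(rho))
W = U @ np.diag(1.0 / np.sqrt(np.maximum(mu * dA, 1e-15))) @ U.conj().T
F = np.kron(W, np.eye(dB))
rho_f = F @ rho @ F.conj().T
rho_f /= np.trace(rho_f).real

# Reduction criterion: rho_f^A (x) 1 - rho_f acquires a negative eigenvalue.
test = np.kron(marginal_A(rho_f), np.eye(dB)) - rho_f
print("min eigenvalue of rho_f^A (x) 1 - rho_f:",
      np.linalg.eigvalsh(test).min())
```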
Remark: It is also easy to show by the same reasoning that if $`\mathrm{rank}(\rho )=R`$ then $`\rho `$ is distillable except in the case where $`\rho _\mathrm{f}`$ is proportional to the identity. This is the case where $`\lambda _{\mathrm{max}}`$ is smallest: $`\lambda _{\mathrm{max}}=1/R=1/r`$.
Turning around the inequality (Theorem 1) we have that if a mixed state $`\rho `$ is separable or bound entangled (i.e. not distillable) then
$$\mathrm{rank}(\rho )\ge \mathrm{max}[\mathrm{rank}(\rho ^\mathrm{A}),\mathrm{rank}(\rho ^\mathrm{B})].$$
(7)
Using the monotonicity of logarithms, one can rephrase the theorem as an entropy inequality: For separable or bound entangled states the entropy $`S_0(\rho )\equiv \mathrm{log}(\mathrm{rank}(\rho ))`$ must satisfy the inequality
$$S_0(\rho )\ge S_0(\rho ^\mathrm{A})\quad \mathrm{and}\quad S_0(\rho )\ge S_0(\rho ^\mathrm{B}).$$
(8)
The entropy $`S_0`$ (called the Hartley entropy) is a special case of quantum Rényi entropies
$$S_\alpha \equiv (1-\alpha )^{-1}\mathrm{log}(\mathrm{Tr}\rho ^\alpha ).$$
(9)
The counterparts of the separability condition (8) have already been proved for the case of $`\alpha =2`$ and the cases where $`\alpha `$ limits to $`1`$ and $`\mathrm{\infty }`$ (see ). It is interesting to note that the ratio of violation of (8) in the limit $`\alpha \rightarrow 1`$ (the von Neumann entropy) gives in the case of $`2\times 2`$ Werner states the yield of the hashing method of distillation of entanglement .
Let us now look at some implications of this result for separable and for bound entangled states.
###### Corollary 1
A rank $`n`$ separable or bound entangled state in a Hilbert space $``$ has support in at most an $`n\times n`$ subspace of $``$. Further, there is no rank two bound entangled state.
Proof : The first statement follows directly from Eq. (7). The second statement is a consequence of the first and the fact that in $`2\times 2`$ every entangled state is distillable . $`\mathrm{}`$
An open question is whether there exists a rank three bound entangled state. We can put some constraints on the form of such a state: If a rank three bound entangled state exists, Corollary 1 shows it must have support on no more than a $`3\times 3`$ subspace. It must also be that such a state is irreducible because no bound entanglement can exist in $`2\times 2`$ or $`2\times 3`$ . Then, as in the remark above, $`\rho _\mathrm{f}`$ must be proportional to the identity on a three-dimensional subspace in $`3\times 3`$.
Given the fact that a rank $`n`$ bound entangled state must live in a $`n\times n`$ subspace, one may ask whether there are irreducible bound entangled states of rank $`n`$ in $`n\times n`$. Of course there is no rank two bound entangled state in $`2\times 2`$, and all the known examples of bound entanglement are not of this type. The closest one can get to this among known examples is the UPB state of rank four in $`3\times 3`$ . Thus we conjecture:
###### Conjecture 1
There are no irreducible bound entangled states of rank $`n`$ in $`n\times n`$, i.e. any rank $`n`$ bound-entangled state can be expressed in a bipartite space of dimension $`n\times (n1)`$ or $`(n1)\times n`$.
If this conjecture holds then there are no rank three bound entangled states at all.
Theorem 1 also has consequences for the “cancellation of distillable entanglement.” Suppose one has a Schmidt rank $`n`$ pure state $`|\mathrm{\Psi }\rangle =\sum _{i=1}^n\sqrt{\mu _i}|\psi _i^A\rangle |\psi _i^B\rangle `$. How many other arbitrary pure states $`|\varphi _j\rangle `$ are needed in a mixture $`\rho =p_0|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|+\sum _{j=1}^kp_j|\varphi _j\rangle \langle \varphi _j|`$ before $`\rho `$ stops being distillable? We have the following corollary:
###### Corollary 2
If $`\rho =p_0|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|+\sum _{j=1}^{n-2}p_j|\varphi _j\rangle \langle \varphi _j|`$ with $`p_0>0`$ and $`|\mathrm{\Psi }\rangle `$ of Schmidt rank $`n`$ then $`\rho `$ is distillable.
Proof: The proof follows from Theorem 1 and the fact that $`\mathrm{rank}(\rho )\le n-1`$ and $`\mathrm{rank}(\rho ^\mathrm{A})\ge n`$ since $`|\mathrm{\Psi }\rangle `$ is Schmidt rank $`n`$. Thus $`\mathrm{rank}(\rho )\le n-1<n\le \mathrm{rank}(\rho ^\mathrm{A})`$ and the theorem applies. $`\mathrm{}`$
For $`n=2`$ the above corollary gives the empty result that a Schmidt rank two state mixed with zero other states is distillable. We will show a slight extension of this namely:
###### Theorem 2
A mixture of a non-zero amount of any entangled pure state with any pure product state is always distillable.
Proof: Consider the entangled and product pure states in the Hilbert space of $`\mathcal{H}_A\otimes \mathcal{H}_B`$. Since the state $`\rho `$ made by mixing them has rank two, it is always distillable unless $`\rho `$ has support on at most a $`2\times 2`$ subspace, by Corollary 1. Thus, without loss of generality we may choose the product state to be $`|0\rangle |0\rangle `$ and the entangled state to be $`|\psi \rangle =a|0\rangle |0\rangle +b|0\rangle |1\rangle +c|1\rangle |0\rangle +d|1\rangle |1\rangle `$.
In $`\mathcal{H}_2\otimes \mathcal{H}_2`$ a density matrix $`\rho `$ is entangled and distillable if and only if the partial transpose of $`\rho `$ is not positive semi-definite.
We consider the mixture
$$\rho =p|00\rangle \langle 00|+|\psi \rangle \langle \psi |.$$
(10)
This mixture would be normalized by a denominator of $`1+p`$ but this will not affect positivity and so we omit it.
The partial transpose $`\mathrm{PT}(\rho )`$ is
$$\rho ^{\prime }=\mathrm{PT}(\rho )=\left(\begin{array}{cccc}p+|a|^2& a^{*}b& ac^{*}& bc^{*}\\ ab^{*}& |b|^2& ad^{*}& bd^{*}\\ a^{*}c& a^{*}d& |c|^2& c^{*}d\\ b^{*}c& b^{*}d& cd^{*}& |d|^2\end{array}\right)$$
(11)
If a matrix has a negative determinant then the matrix is not positive semi-definite. Expanding the determinant of $`\rho ^{\prime }`$ along the top row we can write
$$\mathrm{det}(\rho ^{\prime })=p\mathrm{det}(C_{11})+\mathrm{det}(\rho ^{\prime }-p|00\rangle \langle 00|)=p\mathrm{det}(C_{11})+\mathrm{det}(\mathrm{PT}(|\psi \rangle \langle \psi |))$$
(12)
where $`C_{11}`$ is the $`3\times 3`$ matrix formed by leaving out the first row and first column of $`\rho ^{\prime }`$. The second term is always negative since $`|\psi \rangle `$ is entangled (this is easily seen by doing the partial transpose in the Schmidt basis and noting the fact that the eigenvalues, and hence the determinant, of the partial transpose do not depend on the basis in which the partial transposition is done).
$$-p|d|^2(|a|^2|d|^2-ab^{*}c^{*}d-a^{*}bcd^{*}+|b|^2|c|^2)=-p|d|^2|(ad-bc)|^2$$
(13)
we see that it is always less than or equal to zero. Thus the determinant of $`\rho ^{\prime }`$ is negative, implying $`\rho ^{\prime }`$ is not positive semi-definite and $`\rho `$ is distillable. $`\mathrm{}`$
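A quick numerical check of this argument (the amplitudes below are arbitrary illustrative values):

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose on the second qubit: PT(rho)_{ij,kl} = rho_{il,kj}."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# |psi> = a|00> + b|01> + c|10> + d|11>, entangled whenever ad - bc != 0
a, b, c, d = 0.6, 0.1, 0.2, 0.4        # illustrative amplitudes
psi = np.array([a, b, c, d], dtype=complex)
psi /= np.linalg.norm(psi)

for p in (0.01, 1.0, 100.0):
    rho = np.outer(psi, psi.conj())
    rho[0, 0] += p                      # add p|00><00| (unnormalized, as above)
    det = np.linalg.det(partial_transpose(rho)).real
    print(f"p = {p:>6}: det(PT(rho)) = {det:.3e}")   # negative for all p > 0
```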
Remark: In contrast to this theorem, a mixture of two entangled pure states can be separable, for example, the equal mixture of $`\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle )`$ and $`\frac{1}{\sqrt{2}}(|00\rangle -|11\rangle )`$.
We have explored the relation between the rank of a density matrix and the existence of bound entanglement. In particular we find that a density matrix with rank smaller than either of the marginal ranks is distillable. This gave us the result that any rank $`n`$ bound entangled state must belong to an $`n\times n`$ subspace. A direct consequence of this result is that there are no bound entangled states of rank two.
We also pointed out that there is a separability criterion in terms of a quantum entropy inequality which is naturally associated with our results. Further, we have explored the idea of how many pure states are needed to cancel the distillable entanglement of a Schmidt rank $`n`$ pure state and we provided a lower bound of $`n-1`$. We also showed the related result that a mixture of a non-zero amount of any entangled pure state with a pure product state is distillable. Thus mixing with a single pure product state cannot prevent the distillability of an entangled pure state.
It should be noted that our results on bound entanglement hold even for bound entangled states whose partial transpositions are not positive semi-definite, should such states exist (for evidence that they may, see ). This can be seen because our proofs are based directly on distillability.
It is an open question whether bound entangled states of rank three exist. Another related question is whether states with rank equal to the marginal ranks can be bound entangled or whether the rank needs to be strictly greater.
The work of J.A. Smolin, B.M. Terhal and A.V. Thapliyal has been supported in part by the Army Research Office under contract numbers DAAG55-98-C-0041 and DAAG55-98-1-0366. P. Horodecki is supported by the Polish Committee for Scientific Research, contract No. 2 P03B 103 16 and would like to thank M. Horodecki for helpful discussion. The authors would like to thank the organizers of the AQIP99 conference for providing an environment for us to collaborate and for financial support.
hep-th/9910082
# Comments on Black Holes in String Theory
Gary T. Horowitz
To appear in the proceedings of the Strings ’99 conference, Potsdam, Germany, July 1999.
Physics Department, University of California, Santa Barbara, CA 93106, USA
Abstract
A very brief review is given of some of the developments leading to our current understanding of black holes in string theory. This is followed by a discussion of two possible misconceptions in this subject – one involving the stability of small black holes and the other involving scale radius duality. Finally, I describe some recent results concerning quasinormal modes of black holes in anti de Sitter spacetime, and their implications for strongly coupled conformal field theories (in various dimensions).
October, 1999
1. Review
This talk is divided into three parts. The first is a brief review of some of the key developments leading to our current understanding of black holes in string theory. This part will be very elementary, and not assume much knowledge of string theory. Next, I will try to clear up two misconceptions that I had until recently, and that I have seen in the literature. Finally, I will describe some recent work about black holes in anti de Sitter spacetime, and their implications for the approach to thermal equilibrium in strongly coupled conformal field theories.
Since supergravity is the low energy limit of string theory, the study of black holes begins by finding solutions to this theory with horizons. Actually, there are several supergravity theories that arise in string theory, starting with the ten and eleven dimensional theories. Since we are in higher dimensions, there are extended black holes, or black $`p`$-branes. The simplest solutions are products of $`R^p`$ with the $`D-p`$ dimensional Schwarzschild solution (where $`D=10`$ or $`11`$), but more interesting solutions carry charge associated with a $`p+2`$ form. The rank is $`p+2`$ since the solution has $`p`$ spatial dimensions along the brane. Adding one for time and one for the radius in the transverse space, one finds that a sphere $`S`$ which surrounds the brane must have dimension $`D-(p+2)`$. The charge is then $`Q\sim \int _S{}^{*}F_{p+2}`$. This charge can be nonzero even though there are no fundamental sources in supergravity, since all you need is nontrivial spacetime topology. This is exactly analogous to the existence of charged black holes in Einstein-Maxwell theory without charged matter. The first charged black $`p`$-branes that were found almost ten years ago assumed maximal symmetry, so all fields were a function of only one radial variable. These solutions depended on two parameters which were the mass $`M`$ (really mass per unit volume) and charge. Solutions existed only when $`M`$ and $`Q`$ satisfied a certain inequality.
Unlike supergravity, string theory does have sources for many of these charges called D-branes . The charge to mass ratio of these D-branes is exactly the same as the extremal limit of the black $`p`$-branes, so the latter can be interpreted as the gravitational field of the D-brane. At weak coupling, this gravitational field goes to zero and the low energy excitations of $`N`$ parallel D-branes are described by an $`SU(N)`$ gauge theory. The strongly coupled description of the same excited system should be a nonextreme black $`p`$-brane. By comparing the weak and strong coupling descriptions, one had the possibility of understanding black hole entropy by counting quantum states for the first time.
As an example, consider $`N`$ three-branes. To keep all quantities finite, it is convenient to compactify the directions along the brane into a three torus. Then, in the extreme limit, the area of the horizon goes to zero, which agrees with the fact that at zero excitation energy, the only state in the gauge theory is the ground state. To compare the entropies, we want to add energy to the system. Equivalently, we can consider nonzero temperature $`T`$. The effective coupling is $`gN`$ where $`g`$ is the string coupling constant. When $`gN\ll 1`$ the system is a weakly coupled $`3+1`$ dimensional gauge theory at temperature $`T`$. When $`gN\gg 1`$ one has a near extremal black three-brane at the same temperature. One can compare the entropies and find
$$S_{bh}=\frac{3}{4}S_{gauge}$$
where $`S_{bh}`$ is the Bekenstein Hawking entropy of the black three-brane. So the gauge theory has roughly the right number of degrees of freedom to explain the entropy of near extremal black three-branes. The fact that they are not exactly the same was not a surprise. At the time this was first computed, it appeared that one had two different descriptions of the same system which were valid for different ranges of the parameter $`gN`$. They appeared to have no overlapping region of validity.
However, there were other situations where the entropies agreed exactly. These were obtained by looking at solutions with more than one charge. For example, suppose four dimensions of space are compactified on a small $`T^4`$. We can take $`Q_5`$ five-branes and wrap them around the compact dimensions to produce an effective string in six dimensions. One can then add $`Q_1`$ one-branes to this string. When $`g^2Q_1Q_5\ll 1`$, the low energy excitations are described by a $`1+1`$ dimensional conformal field theory . When $`g^2Q_1Q_5\gg 1`$ the system is described by a black string in six dimensions. If one now adds a small amount of energy and compares the entropies, one finds complete agreement (for large charges)
$$S_{bh}=S_{cft}$$
Why is this working? For the special case where the momentum along the effective string is equal to the added energy, i.e., one excites only right moving modes, there is unbroken supersymmetry. The momentum along the string is like another charge, and the black string remains extremal. In this case, one can show that the number of supersymmetric states should not depend on the coupling. But the entropy turns out to agree even when supersymmetry is broken, e.g., when you excite equal amounts of left and right moving modes \[7,,8\]. Even more importantly, the spectrum of Hawking radiation also agrees \[9,,10\].
The situation was clarified by Maldacena who took a low energy limit which decoupled the excitations of the branes from the excitations of the strings off the branes. At strong coupling, this same limit corresponded to considering strings moving very close to the horizon of the black $`p`$-brane. In the cases of interest, the excitation of the branes is described by a conformal field theory (CFT), and the near horizon geometry of the extremal black $`p`$-brane is a product of anti de Sitter (AdS) space and a sphere. For example, in the case of the three brane, this geometry is $`AdS_5\times S^5`$ where the radii of curvature are equal. For the one-brane five-brane system, the near horizon geometry is $`AdS_3\times S^3\times T^4`$. Since the conformal field theory is well defined even at strong coupling, we obtain two different descriptions which are now valid for the same range of parameters. This lead Maldacena to his famous AdS/CFT correspondence. If one adds energy to the system, the spacetime is not exactly AdS, but still approaches it asymptotically. So the correspondence says that string theory in spacetimes which asymptotically approach AdS times a sphere is completely described by a conformal field theory. There is growing evidence in support of this remarkable conjecture .
How does this explain the entropy results? For the case of the three-brane, it is easy to see from the field theory side why the entropy might change between weak and strong coupling. As you increase the coupling constant you add potential energy to each state and increase its energy, so the number of states for given total energy goes down. Similarly, from the gravity side one can understand the change in entropy as follows. The near horizon geometry of the near extremal solution is a black hole in AdS. As you lower the string coupling, the spacetime curvature increases in string units. This results in stringy corrections to the geometry, and hence corrections to the black hole entropy. In light of these effects, one would expect the weak coupling and strong coupling results to be related in a complicated way. The fact that they are related by a simple factor of $`3/4`$ is rather mysterious and still not understood.
In contrast, for the one-brane five-brane system, one has a $`1+1`$ dimensional CFT whose entropy depends only on the central charge. This can be computed exactly, and is independent of the coupling constant. On the gravity side, the near horizon geometry turns out to be the product of a three dimensional BTZ black hole and $`S^3\times T^4`$. This is locally a space of constant curvature and probably does not receive string corrections as $`g\to 0`$ . There are other systems where the entropy can be computed exactly without supersymmetry, including near extremal four dimensional black holes. But as far as I know, in all such cases the corresponding field theory is a $`1+1`$ dimensional CFT and the near horizon geometry is a space of constant curvature. (When there is unbroken supersymmetry, the entropy can be reproduced for a wider class of black holes, including higher order corrections to the Bekenstein Hawking entropy .)
In light of the AdS/CFT correspondence, we can begin to translate questions about black hole physics into questions about field theory. For example, the formation of a large black hole in AdS is not an exotic process in the CFT. It corresponds to the field theory evolution of a very special high energy state into a typical (approximately thermal) state. More importantly, the formation and evaporation of a small black hole in AdS should be described by the usual unitary evolution in the field theory.
2. Misconceptions
We now come to our first possible misconception, which involves small black holes in AdS. Let $`r_+`$ denote the horizon radius, and $`R`$ denote the radius of AdS. The temperature of a black hole in AdS decreases with mass for $`r_+\ll R`$, but increases with mass for $`r_+\gg R`$. So large black holes have positive specific heat and are stable. Small black holes have negative specific heat and one often concludes that they are unstable and will evaporate. However, let us compare the entropy of a small black hole with the entropy of the same amount of energy in a thermal gas. We will consider the case of black holes in $`AdS_5\times S^5`$. Since the magnitude of the curvature on $`S^5`$ is the same as $`AdS_5`$, a gas of radiation will be effectively ten dimensional. Since the curvature of AdS acts like a confining box of side $`R`$, the gas has $`S\sim T^9R^9`$ and $`E\sim T^{10}R^9`$, so
$$S_{gas}\sim (RE)^{9/10}$$
A small black hole in $`AdS_5`$ which is uniform over the $`S^5`$ is unstable to localizing on the $`S^5`$ due to the Gregory-Laflamme instability . So we should use ten dimensional black holes which have $`S_{bh}\sim r_+^8/\ell _p^8`$ (where $`\ell _p`$ is the ten dimensional Planck scale) and $`E\sim r_+^7/\ell _p^8`$, which implies $`S_{bh}\sim Er_+`$. Now let $`R^8\sim N^2\ell _p^8`$, so $`N`$ is a measure of how large the $`S^5`$ (or $`AdS_5`$) is in Planck units. (In the AdS/CFT correspondence, this is the same $`N`$ that appears in the group $`SU(N)`$, but since we are asking a pure supergravity question, we don’t need to introduce any string theory or gauge theory quantities.) So the entropies will be equal when
$$S_{bh}\sim \frac{N^2r_+^8}{R^8}\sim (RE)^{9/10}\sim \left(\frac{N^2r_+^7}{R^7}\right)^{9/10}.$$
This implies
$$\frac{r_+}{R}\sim \frac{1}{N^{2/17}}$$
which can be made arbitrarily small for large $`N`$. In other words, if $`r_+/R>N^{-2/17}`$, the black hole has more entropy than a gas in AdS. So its evaporation would violate the second law of thermodynamics. What happens?
If you fix the temperature, a small black hole with high temperature will simply absorb energy from the heat bath until it turns into a large black hole with the same temperature which is stable. But this is rather unphysical since it’s hard to connect a heat bath to AdS, and these are not the right boundary conditions when a black hole evaporates. One should instead fix the total energy, and consider a system consisting of both a black hole and radiation. It is clear that if you start with all the energy in the black hole and radiate a small amount $`ϵ`$, $`\delta S_{bh}\sim -ϵ`$ and $`\delta S_g\sim ϵ^{9/10}`$. So $`\delta S_g+\delta S_{bh}>0`$ and you increase the total entropy by starting to radiate. This is a consequence of the negative specific heat. But to see the final outcome, we must maximize the total entropy for given energy. Let us divide the total energy into a part which is the gas, and a part which is the black hole: $`E=E_g+E_{bh}`$. As a crude approximation, we will assume the entropy of the gas is the same as it would be in the absence of the black hole. This may be justified since we are considering small black holes. The total entropy is then
$$S\sim (E_gR)^{9/10}+E_{bh}(E_{bh}\ell _p^8)^{1/7}$$
Using $`\ell _p^8\sim R^8/N^2`$, the second term becomes $`(E_{bh}^8R^8/N^2)^{1/7}`$. Extremizing $`S`$ keeping the total energy fixed yields
$$E_g^7E_{bh}^{10}\sim \frac{N^{20}}{R^{17}}$$
Since the left hand side has a maximum when $`E_g\sim E_{bh}`$, we clearly need $`ER>N^{20/17}`$ in order to have a stable equilibrium. (One can easily check that this is equivalent to our earlier condition $`r_+/R>N^{-2/17}`$.) When this condition is satisfied, there are two extrema of the entropy, a local maximum when $`E_{bh}>E_g`$ and a local minimum when $`E_{bh}<E_g`$. When $`ER\gg N^{20/17}`$, the ratio $`E_g/E_{bh}`$ is very small in the maximum entropy configuration. So the net result is that if you fix the total energy, most black holes will evaporate slightly and quickly come into equilibrium with their Hawking radiation<sup>1</sup> This is a more refined version of the discussion in . This is very similar to earlier studies of a black hole in a box.
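The exponent bookkeeping in this argument is easy to verify symbolically. The short sympy sketch below is purely illustrative (it is not from any published code, and all order-one prefactors are dropped, as in the estimates above): it reproduces the crossover radius $`r_+/R\sim N^{-2/17}`$ and the extremization condition $`E_g^7E_{bh}^{10}\sim N^{20}/R^{17}`$.

```python
# Symbolic check of the small-black-hole scalings discussed above.
# All order-one prefactors are dropped; names and values are illustrative.
import sympy as sp

x, N, R, Eg, Ebh = sp.symbols('x N R E_g E_bh', positive=True)

# (i) Entropy crossover: N^2 x^8 = (N^2 x^7)^(9/10), with x = r_+/R.
print(sp.solve(sp.Eq(N**2 * x**8, (N**2 * x**7)**sp.Rational(9, 10)), x))
# expected: [N**(-2/17)]

# (ii) S ~ (E_g R)^(9/10) + (E_bh^8 R^8 / N^2)^(1/7) at fixed E_g + E_bh.
S = (Eg * R)**sp.Rational(9, 10) + (Ebh**8 * R**8 / N**2)**sp.Rational(1, 7)
# At the extremum dS/dE_g = dS/dE_bh, so the ratio below equals one and
# E_g^7 E_bh^10 is pinned to a pure number times N^20 / R^17.
ratio = (sp.diff(S, Eg) / sp.diff(S, Ebh))**70
print(sp.simplify(Eg**7 * Ebh**10 * ratio * R**17 / N**20))
# expected: a pure number, independent of E_g, E_bh, N, and R
```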
A simple check on our result is the following. In order for the black hole to be in stable equilibrium with the radiation, it has to be large enough that it does not evaporate before the radiation has a chance to see the background curvature. In other words, the lifetime of the black hole must be larger than $`R`$. It is easy to check that in the stable regime, all black holes satisfy this condition. Ignoring the background curvature, the lifetime of a small ten dimensional black hole can be computed from
$$\frac{dE}{dt}\sim T^{10}r_+^8\sim \frac{1}{r_+^2}$$
Since $`E\sim r_+^7/\ell _p^8`$, the lifetime is $`t_0\sim r_+^9/\ell _p^8\sim N^2r_+^9/R^8`$. This will be of order $`R`$ when $`r_+/R\sim N^{-2/9}`$. Since $`N^{-2/9}\ll N^{-2/17}`$ for large $`N`$, this is much smaller than our lower bound for a black hole to be in stable equilibrium. In the context of string theory, another thing we should check is whether the size of the black hole remains bigger than the string scale $`\ell _s`$. Since the smallest stable black hole has $`r_+/R\sim N^{-2/17}`$, and $`R^4\sim gN\ell _s^4`$, we see that $`r_+`$ will be larger than $`\ell _s`$ provided $`gN>N^{8/17}`$.
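The same kind of symbolic check works for the two bounds just quoted (again a sketch with all prefactors dropped; the symbols are illustrative):

```python
# Exponent checks for the lifetime and string-scale bounds above.
import sympy as sp

x, N, g = sp.symbols('x N g', positive=True)

# t0 ~ N^2 r_+^9 / R^8 is of order R when N^2 (r_+/R)^9 = 1.
print(sp.solve(sp.Eq(N**2 * x**9, 1), x))          # expected: [N**(-2/9)]

# With r_+/R ~ N^(-2/17) and R^4 ~ g N l_s^4, the hole reaches the string
# scale when (g N)^(1/4) = N^(2/17); solve for the threshold coupling.
print(sp.solve(sp.Eq((g * N)**sp.Rational(1, 4), N**sp.Rational(2, 17)), g))
# expected: [N**(-9/17)], i.e. g N > N^(8/17) keeps the hole larger than l_s.
```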
A similar calculation in eleven dimensions shows that black holes have more entropy than a gas of radiation provided $`r_+/R>(\ell _p/R)^{9/19}`$ where $`\ell _p`$ is now the eleven dimensional Planck scale.
A second possible misconception concerns the relation between AdS radius and scale size in the CFT. In Poincare coordinates, the AdS metric is simply
$$ds^2=\frac{r^2}{R^2}(-dt^2+dx^idx_i)+\frac{R^2}{r^2}dr^2$$
Since this metric is invariant under $`r\to \lambda r,(t,x^i)\to \lambda ^{-1}(t,x^i)`$, it is often assumed that radial position in AdS is reflected in the scale size of the corresponding excitation in the field theory. This has been checked and is certainly true in many cases, usually involving static configurations. A particularly simple example of this seemed to be a null particle moving radially in AdS. It produces a gravitational shock wave which is reflected in the field theory by a $`<T_{\mu \nu }>`$ which is concentrated on the null cone . So the expanding excitation in the CFT is correlated with decreasing radial position. However, we should ask what happens if the particle changes its orbit inside AdS. The answer turns out to be that $`<T_{\mu \nu }>`$ continues to grow at the speed of light even if the particle stops at some radius $`r`$!
This is essentially a consequence of causality: $`<T_{\mu \nu }>`$ is determined by the asymptotic form of the spacetime metric, and this metric is causally related to the null particle. Letting $`z=R^2/r`$, (2.1) becomes
$$ds^2=\frac{R^2}{z^2}(-dt^2+dx_idx^i+dz^2)$$
A particle initially falling in from $`t=x_i=z=0`$ can only influence fields inside its future light cone $`t^2\ge x_ix^i+z^2`$. A point on the boundary $`z=0,t=t^0,x_i=x_i^0`$ can only be affected by events inside its past light cone $`(t^0-t)^2\ge (x_i-x_i^0)^2+z^2`$. Looking at the intersection of these two sets, it’s clear that the maximum $`z`$ value for the particle that can affect the asymptotic field at $`t^0,x_i^0`$ is
$$z_{max}=\frac{1}{2}[(t^0)^2-(x_i^0)^2]^{1/2}$$
which occurs at $`t=t^0/2`$ and $`x_i=x_i^0/2`$. Therefore, as $`(x_i^0)^2\to (t^0)^2`$, i.e., one approaches the light cone on the boundary, $`z_{max}\to 0`$. This means that even if the particle, or better yet, a rocket ship, stops at a constant value of $`z`$ inside AdS, the field will continue to grow along the light cone on the boundary. Of course changing the trajectory inside will produce additional gravitational waves which will result in a change in the expectation value of the stress tensor inside the light cone. But the main lesson is that, in dynamical processes, the size of the disturbance on the boundary is NOT always a measure of the radial position of the particle in the interior. (For further examples of this phenomenon, see .)
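The bound on $`z_{max}`$ is just flat-space light-cone geometry, and it can be checked numerically; the brute-force grid search below is only an illustration of the two inequalities above, with made-up boundary points.

```python
# Numerical check of z_max = (1/2)[(t0)^2 - (x0)^2]^(1/2): maximize z over
# bulk points lying inside both the future light cone of the origin and the
# past light cone of the boundary point (t0, x0).  Illustrative only.
import numpy as np

def z_max_numeric(t0, x0, n=801):
    t = np.linspace(0.0, t0, n)[:, None]
    x = np.linspace(-t0, t0, n)[None, :]
    z2 = np.minimum(t**2 - x**2, (t0 - t)**2 - (x - x0)**2)
    return np.sqrt(np.clip(z2, 0.0, None).max())

for t0, x0 in [(2.0, 0.0), (2.0, 1.0), (2.0, 1.9)]:
    print(t0, x0, z_max_numeric(t0, x0), 0.5 * np.sqrt(t0**2 - x0**2))
```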
3. Quasinormal modes
A spherical black hole in $`AdS_d`$ is described by
$$ds^2=-f(r)dt^2+f(r)^{-1}dr^2+r^2d\mathrm{\Omega }_{d-2}^2$$
where
$$f(r)\equiv 1+\frac{r^2}{R^2}-\left(\frac{r_0}{r}\right)^{d-3}.$$
The black hole horizon $`r=r_+`$ is at the largest zero of $`f`$. A particle falling into this black hole will produce gravitational waves. A rough estimate for when this radiation reaches infinity is just the time it takes for the particle to fall from infinity to the vicinity of the black hole. For a large black hole $`r_+\gg R`$, this is of order $`1/T`$ where $`T\sim r_+/R^2`$ is the black hole temperature. At late times, this radiation is independent of the details of what fell in. It is described by characteristic oscillations of the black hole geometry known as quasinormal modes . These oscillations are damped and the corresponding quasinormal frequencies are complex. The mode with the smallest imaginary part dominates at late time and gives the timescale for generic perturbations to decay. My student V. Hubeny and I have recently computed these quasinormal frequencies. (For a more complete discussion, see .)
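For readers who want to reproduce the background quantities, the horizon radius follows from the largest zero of $`f`$ and the temperature from the standard surface-gravity relation $`T=f^{\prime }(r_+)/4\pi `$ (a textbook fact, not specific to this work). The short sketch below uses illustrative parameter values and is not the quasinormal-mode calculation itself.

```python
# Horizon radius and Hawking temperature for AdS-Schwarzschild,
# f(r) = 1 + r^2/R^2 - (r0/r)^(d-3).  Parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

def f(r, d, R, r0):
    return 1.0 + r**2 / R**2 - (r0 / r)**(d - 3)

def horizon_and_temperature(d, R, r0):
    # f -> -infinity as r -> 0 and is positive at large r, so bracket the root.
    r_plus = brentq(f, 1e-6 * r0, 10.0 * (r0 + R), args=(d, R, r0))
    fprime = 2.0 * r_plus / R**2 + (d - 3) * r0**(d - 3) / r_plus**(d - 2)
    return r_plus, fprime / (4.0 * np.pi)        # T = f'(r_+) / (4 pi)

for d in (4, 5, 7):
    rp, T = horizon_and_temperature(d, R=1.0, r0=5.0)
    print(d, rp, T, (d - 1) * rp / (4.0 * np.pi))  # last entry: large-r_+ estimate of T
```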
The damping time of these oscillations has important implications for the dual CFT. Suppose we start with a large static black hole with temperature $`T`$. This is described in the field theory by the thermal state<sup>2</sup> For a black hole formed from collapse of a pure state, the CFT state will still be pure, but resemble the thermal state for macroscopic observations. with temperature $`T`$. Perturbing the black hole corresponds to perturbing this thermal state, and the timescale for the decay of the perturbation is the timescale for the return to thermal equilibrium. This dynamical timescale is extremely difficult to compute directly, but can be done relatively easily using the AdS/CFT correspondence. For simplicity, we considered perturbations described by a real scalar field like the dilaton.
Since a black hole in AdS has two dimensionful parameters $`R`$ and $`r_+`$, it is not obvious how the quasinormal frequencies $`\omega `$ will scale as we change the size of the black hole. But for large black holes $`r_+\gg R`$, it turns out that there is an extra symmetry which ensures that $`\omega `$ will be proportional to the black hole temperature.
Fig. 1: For large black holes, $`\omega _I`$ is proportional to the temperature. The top line is $`d=4`$, the middle line is $`d=5`$ and the bottom line is $`d=7`$.
Fig. 2: For large black holes, $`\omega _R`$ is also proportional to the temperature. The top line is now $`d=7`$, the middle line is $`d=5`$ and the bottom line is $`d=4`$.
Let us decompose the quasinormal frequencies into real and imaginary parts as $`\omega =\omega _R-i\omega _I`$. (The sign is chosen so that exponentially decaying modes correspond to $`\omega _I>0`$.) The linear dependence on temperature is clearly shown in fig. 1 and fig. 2, where $`\omega _I`$ and $`\omega _R`$ respectively are plotted as a function of the temperature for the four, five, and seven dimensional cases. We have set the AdS radius equal to one, so all quantities are measured in units of the AdS radius. The dots, representing the lowest quasinormal mode for each black hole, lie on straight lines through the origin. In fig. 1, the top line corresponds to the $`d=4`$ case, the middle line is the $`d=5`$ case, and the bottom line is the $`d=7`$ case. Explicitly, the lines are given by
$$\omega _I=11.16T\mathrm{for}d=4$$
$$\omega _I=8.63T\mathrm{for}d=5$$
$$\omega _I=5.47T\mathrm{for}d=7$$
Fig. 3: $`\omega _I`$ for smaller black holes in four dimensions. The solid line is $`\omega _I=2.66r_+`$, and the dashed line is $`\omega _I=11.16T`$.
For smaller black holes, the quasinormal frequencies do not scale with the temperature. This is clearly shown in fig. 3 which plots $`\omega _I`$ as a function of $`r_+`$ for $`d=4`$ black holes with $`r_+\lesssim 1`$. To a remarkable accuracy, the points continue to lie along a straight line $`\omega _I=2.66r_+`$. The dashed curve represents the continuation of the curve $`\omega _I=11.16T`$ shown in Fig. 1 to smaller values of $`r_+`$. (For large $`r_+`$ these two curves are identical.) It is not yet clear what the significance of this linear relation is for the dual CFT. As we have seen, these black holes are stable if one fixes the total energy, and thus correspond to a class of stable states in the field theory. This linear relation is describing the timescale for the decay of perturbations of these states.
The fact that the quasinormal frequencies do not follow the temperature is very different from small black holes in asymptotically flat spacetimes. In that case, there is only one scale $`r_+`$ in the problem and the frequencies must go like $`T\sim 1/r_+`$. It is different in AdS simply because the boundary conditions at infinity have been changed. It should not be surprising that even for small black holes, the late time behavior of fields is different in AdS than in an asymptotically flat spacetime.
There is a striking similarity between the slope of the line in Fig. 3 and a number that has been computed in a completely different problem. If you study the gravitational collapse of spherically symmetric scalar fields (in four dimensional asymptotically flat spacetimes), one finds that weak waves scatter and go off to infinity while strong waves collapse to form black holes. On the boundary between these two possibilities, there is initial data which collapses to form a ‘zero mass black hole’, which is really a naked singularity . All such initial data approach the same solution, called the critical solution, near the singularity. This critical solution is known to have one unstable mode which grows like $`e^{2.67t}`$ . This number is very similar to the slope $`2.66`$ that we found above. Despite the fact that both numbers characterize exponential behavior of spherically symmetric scalar fields in four dimensions, further investigation has failed to find any confirming indications of a connection between black holes in AdS and black hole critical phenomena. It appears at the moment to be just a numerical coincidence.
Fig. 4: $`\omega _I`$ for smaller black holes in five dimensions. The solid line is $`\omega _I=2.75r_+`$, and the dashed line is $`\omega _I=8.63T`$.
One reason for this is that the linear relation does not extend to very small black holes. In fact, since the quasinormal frequencies can be computed to an accuracy much better than the size of the dots in Fig. 3, one can check that the points actually lie slightly off the line. This is shown more clearly in the five dimensional results in Fig. 4. Once again, the dashed curve is the continuation of the curve $`\omega _I=8.63T`$ shown in Fig. 1, and the solid curve is the line $`\omega _I=2.75r_+`$ that it approaches asymptotically.
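A simple consistency check on the numbers quoted in this section: assuming only the standard large-black-hole relation $`T\simeq (d-1)r_+/4\pi R^2`$ (with $`R=1`$), the temperature slopes translate into the $`r_+`$ slopes appearing in Figs. 3 and 4.

```python
# Convert the large-black-hole fits omega_I = a * T into slopes in r_+,
# using T ~ (d-1) r_+ / (4 pi) for r_+ >> R = 1.  The fit values are those
# quoted in the text.
import numpy as np

fits_in_T = {4: 11.16, 5: 8.63, 7: 5.47}
for d, a in fits_in_T.items():
    print(d, round(a * (d - 1) / (4.0 * np.pi), 2))
# d=4 gives 2.66 and d=5 gives 2.75, matching the solid lines in Figs. 3 and 4.
```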
4. Conclusion
If I was granted three wishes in the subject of black holes in string theory, they would be:
a) Explain the $`3/4`$ factor relating the weak and strong coupling calculations of the entropy of the near extremal three-brane.
b) Find an exact calculation of the entropy of a Schwarzschild black hole.
c) Understand how (whether?) the usual information loss arguments break down in the evaporation of a small black hole.
We have already discussed (a). The current status of (b) is that there are general arguments which relate uncharged black holes to excited string states, and show that the entropy should be proportional to the horizon area \[24,25\]. But they are not yet able to compute the numerical factor. Finally, we mentioned that in terms of the AdS/CFT correspondence, the evaporation of a small black hole in AdS should be a unitary process in the CFT. But we do not yet understand how the usual semiclassical arguments for information loss break down. This might point toward a possible limitation of the AdS/CFT correspondence, but is more likely just a result of our current lack of understanding of how the CFT describes the spacetime inside the horizon.
Acknowledgements
I would like to thank the organizers of the Strings ’99 conference for a very stimulating meeting. This work was supported in part by NSF grant PHY95-07065.
References
\[1\] G. Horowitz and A. Strominger, Nucl. Phys. B360 (1991) 197.
\[2\] For a review of some of these solutions, see M. Duff, R. Khuri and J. Lu, Phys. Rep. 259 (1995) 213, hep-th/9412184.
\[3\] J. Polchinski, Phys. Rev. Lett. 75 (1995) 4724, hep-th/9510017; “TASI Lectures on D-Branes”, hep-th/9611050.
\[4\] S. Gubser, I. Klebanov, and A. Peet, Phys. Rev. D54 (1996) 3915, hep-th/9602135; A. Strominger, unpublished.
\[5\] See, e.g., R. Dijkgraaf, talk at Strings ’99.
\[6\] A. Strominger and C. Vafa, Phys. Lett. B379 (1996) 99, hep-th/9601029.
\[7\] C. Callan and J. Maldacena, Nucl. Phys. B472 (1996) 591, hep-th/9602043.
\[8\] G. Horowitz and A. Strominger, Phys. Rev. Lett. 77 (1996) 2368, hep-th/9602051.
\[9\] S. Das and S. Mathur, Nucl. Phys. B478 (1996) 561, hep-th/9606185.
\[10\] J. Maldacena and A. Strominger, Phys. Rev. D55 (1997) 861, hep-th/9609026.
\[11\] J. Maldacena, Adv. Theor. Math. Phys. 2 (1998) 231, hep-th/9711200.
\[12\] For a comprehensive review, see O. Aharony, S.S. Gubser, J. Maldacena, H. Ooguri, and Y. Oz, “Large N Field Theories, String Theory and Gravity”, hep-th/9905111.
\[13\] M. Banados, C. Teitelboim, and J. Zanelli, Phys. Rev. Lett. 69 (1992) 1849.
\[14\] S. Gubser, I. Klebanov, and A. Tseytlin, Nucl. Phys. B534 (1998) 202, hep-th/9805156.
\[15\] G. Cardoso, B. de Wit, and T. Mohaupt, “Deviations from the Area Law for Supersymmetric Black Holes”, hep-th/9904005.
\[16\] R. Gregory and R. Laflamme, Phys. Rev. Lett. 70 (1993) 2837, hep-th/9301052.
\[17\] T. Banks, M. Douglas, G. Horowitz, and E. Martinec, “AdS Dynamics from Conformal Field Theory”, hep-th/9808016.
\[18\] G. Horowitz and N. Itzhaki, JHEP 9902 (1999) 010, hep-th/9901012.
\[19\] V. Balasubramanian and S. Ross, “Holographic Particle Detection”, hep-th/9906226.
\[20\] For a recent review, see K. Kokkotas and B. Schmidt, “Quasi-normal modes of stars and black holes”, gr-qc/9909058.
\[21\] G. Horowitz and V. Hubeny, “Quasinormal Modes of AdS Black Holes and the Approach to Thermal Equilibrium”, hep-th/9909056.
\[22\] M. Choptuik, Phys. Rev. Lett. 70 (1993) 9.
\[23\] For a review, see C. Gundlach, Adv. Theor. Math. Phys. 2 (1998) 1, gr-qc/9712084.
\[24\] L. Susskind, “Some Speculations about Black Hole Entropy in String Theory”, hep-th/9309145; G. Horowitz and J. Polchinski, Phys. Rev. D55 (1997) 6189, hep-th/9612146.
\[25\] G. Horowitz and J. Polchinski, Phys. Rev. D57 (1998) 2557, hep-th/9707170; T. Damour and G. Veneziano, “Self-gravitating fundamental strings and black-holes”, hep-th/9907030.
# Effect of the Electromagnetic Environment on Arrays of Small Normal Metal Tunnel Junctions: Numerical and Experimental Investigation
## Abstract
We present results of a set of experiments to investigate the effect of a dissipative external electromagnetic environment on tunneling in linear arrays of junctions in the weak tunneling regime. The influence of this external resistance decreases as the number of junctions in the chain increases and ultimately becomes negligible. Further, there is a value of the external impedance, typically $`0.5`$ k$`\mathrm{\Omega }`$, at which the half-width of the zero-voltage dip in the conductance curve shows a maximum. Some new analytical formulae based on the phase-correlation theory, along with numerical results, will be presented.
In recent years much attention has been paid to the role of the electromagnetic environment on charging effects in small tunnel junctions, both theoretically and experimentally . Yet, arrays of such tunnel junctions with well-defined external impedances have not been extensively discussed. This is partly because the theoretical formulation of such arrays is more elaborate: there are, e.g., cotunneling effects, inhomogeneities, and background charges, which along with the effects of the environment make such an investigation rather difficult in general terms. This lack of theoretical predictions, in turn, has slowed the experimental search for new features in arrays. In this letter we will attempt to fill part of this gap by presenting a set of experimental observations and a comparison of them to the results obtained from the already existing phase-correlation (PC) theory for single tunnel junctions, which we have now extended to analyze junction arrays numerically. This analysis is important in setting limits on the systematic error of the reading of the Coulomb blockade primary thermometer .
According to the PC theory, the tunneling rate through the kth junction of a completely symmetric array with $`C_k\equiv C`$, $`R_{T,k}\equiv R_T`$ and $`C_{0,k}=0`$ (see Fig. 1), in the weak-tunneling regime, $`R_{T,k}\gg R_K\equiv h/e^2`$, can be written as a convolution integral of the form
$$\mathrm{\Gamma }_k^\pm (\{n\})\equiv \mathrm{\Gamma }(\delta F_k^\pm ,\{n\})=\frac{1}{e^2R_T}\int _{-\infty }^{+\infty }dE\,\frac{E}{1-e^{-\beta E}}\,P_k(\delta F_k^\pm -E).$$
(1)
Here $`\delta F_k^\pm `$ is the change in free energy of the array when an electron tunnels to the right ($`+`$) or to the left ($`-`$), $`\{n\}\equiv \{n_1,n_2,\dots ,n_{N-1}\}`$ designates the charge configuration on the islands, and $`P_k(E)\equiv (2\pi \hbar )^{-1}\int _{-\infty }^{+\infty }dt\,\text{e}^{J_k(t)+iEt/\hbar }`$ is the probability density for the electron to exchange energy $`E`$ with the environment.
$$J_k(t)=2\int _{-\infty }^{\infty }\frac{d\omega }{\omega }\frac{\text{Re}[Z_t^k(\omega )]}{R_K}\frac{\text{e}^{-i\omega t}-1}{1-\text{e}^{-\beta \hbar \omega }},$$
(2)
where $`Z_t^k(\omega )`$ is the total impedance of the circuit as seen by this junction, and $`\beta \equiv (k_BT)^{-1}`$.
By applying Fourier transform techniques to Eq. (1) one obtains
$$\mathrm{\Gamma }_k^\pm (\{n\})=\frac{1}{e^2R_T}\left\{\frac{1}{\beta }+\frac{\delta F_k^\pm -i\hbar J_k^{\prime }(0)}{2}-\frac{\pi }{2\beta ^2\hbar }\int _{-\infty }^{+\infty }dt\,\frac{e^{[J_k(t)+i\delta F_k^\pm t/\hbar ]}-1}{\mathrm{sinh}^2(\pi t/\beta \hbar )}\right\}.$$
(3)
This equation is central in the following numerical calculations. With the $`\mathrm{\Gamma }_k`$’s and the algorithm in , where $`R_\mathrm{\Sigma }\equiv \sum _{k=1}^NR_{T,k}`$, we find the current $`I`$ through the array in equilibrium as
$$I=\underset{\{n\}_{visited}}{\sum }\left\{e\frac{\sum _{k=1}^N[\mathrm{\Gamma }_k^+(\{n\})-\mathrm{\Gamma }_k^{-}(\{n\})]\frac{R_{T,k}}{R_\mathrm{\Sigma }}}{\sum _{k=1}^N[\mathrm{\Gamma }_k^+(\{n\})+\mathrm{\Gamma }_k^{-}(\{n\})]}\right\}\frac{1}{\underset{\{n\}_{visited}}{\sum }\left(\sum _{k=1}^N[\mathrm{\Gamma }_k^+(\{n\})+\mathrm{\Gamma }_k^{-}(\{n\})]\right)^{-1}}.$$
(4)
In the expression above, starting initially from an arbitrary configuration, the states visited, $`\{n\}_{visited}`$, over which the outer sums run, are obtained by dividing the interval $`[0,1]`$ into segments proportional to the $`\mathrm{\Gamma }_k^\pm `$’s in each current state, and drawing a random number $`r`$ in the interval. For sequential tunneling, the segment to which $`r`$ corresponds will specify the junction through which the tunneling event happens and the tunneling direction. This way the distribution of $`\{n\}_{visited}`$ will be statistically collected, and it will allow one to calculate the sums (now weighted by the distribution) according to Eq. (4) similarly to what has been done in .
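A minimal sketch of this event-selection step is given below. It is only an illustration of the algorithm just described: the rates are placeholders, whereas in the actual calculation the $`\mathrm{\Gamma }_k^\pm `$ follow from Eq. (3) for the current charge configuration.

```python
# One Monte Carlo step: partition [0, 1] in proportion to the 2N rates of
# the current charge state and draw which tunneling event happens next.
# The rates below are placeholders, not values from the paper.
import random

def choose_event(rates):
    """rates: list of (junction index, direction, Gamma); returns one event."""
    total = sum(gamma for _, _, gamma in rates)
    r = random.random() * total           # same as drawing r in [0, 1] on a scaled axis
    running = 0.0
    for junction, direction, gamma in rates:
        running += gamma
        if r <= running:
            return junction, direction
    return rates[-1][0], rates[-1][1]     # guard against round-off at the upper edge

# Dummy example for a two-junction array (rates in arbitrary units):
rates = [(1, +1, 2.0), (1, -1, 0.5), (2, +1, 1.8), (2, -1, 0.6)]
print(choose_event(rates))
```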
In the case of a symmetric two-junction array we will also use a simpler algorithm, described in , to obtain the probability of finding $`n`$ excess electrons on the island, $`\sigma (\{n\})`$, and finally the equilibrium current $`I`$ through the array
$$I=I_k=e\sum _{n=-\infty }^{\infty }\sigma (\{n\})[\mathrm{\Gamma }_k^+(\{n\})-\mathrm{\Gamma }_k^{-}(\{n\})].$$
(5)
Assuming a completely symmetric array and a purely resistive environment $`R_e`$, the common real part of the total impedance, $`Z_t(\omega )\equiv Z_t^k(\omega )`$, in Eq. (2) reduces to the simple form $`\text{Re}[Z_t(\omega )]=R_e/[(\omega /\omega _c)^2+N^2]`$, with $`\omega _c\equiv 1/R_eC`$. It is already suggested by this equation that the effect of the environment decreases with increasing $`N`$ and becomes vanishingly small for long arrays. Furthermore, using the partial sum expansion of $`\mathrm{coth}(x)`$ (or by applying Cauchy’s integral theorem), one can evaluate $`J(t)\equiv J_k(t)`$ in Eq. (2) as
$$J(t)=\frac{\pi }{N^2}\frac{R_e}{R_K}\left\{(1-e^{-N\omega _ct})\left[\mathrm{cot}\left(\frac{\beta \hbar N\omega _c}{2}\right)-i\right]-\frac{2t}{\beta \hbar }+4\sum _{n=1}^{\infty }\frac{(N\omega _c)^2(1-e^{-\omega _nt})}{2\pi n[\omega _n^2-(N\omega _c)^2]}\right\},$$
(6)
where $`\omega _n\equiv 2\pi n/\beta \hbar `$ are the Matsubara frequencies. The above equation is a straightforward extension of the result obtained for a single tunnel junction in a resistive environment . In our numerical calculations we have used the generally valid formula in Eq. (2), but the equivalence of the results has been checked both by direct comparison of the numerical values derived from Eqs. (2) and (6), and by comparing the final conductance curves; the results from the two methods are indistinguishable.
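As an illustration of the direct numerical route mentioned above, $`J(t)`$ can be evaluated from the standard real-frequency form of the phase-correlation function (equivalent to Eq. (2)) with the array impedance $`\text{Re}[Z_t(\omega )]=R_e/[(\omega /\omega _c)^2+N^2]`$. The script below is a sketch only; the parameter values are representative of the measured samples, not fitted, and the frequency cutoff is purely numerical.

```python
# J(t) for a symmetric N-junction array in a resistive environment, from
# J(t) = 2 Int_0^inf dw/w (Re Z_t/R_K){coth(beta hbar w/2)[cos(wt)-1] - i sin(wt)}.
# Illustrative parameters (roughly sample 9A); w_max is a numerical cutoff.
import numpy as np
from scipy.integrate import quad
from scipy.constants import hbar, k, h, e

N, C, R_e, T = 2, 2.25e-15, 1.12e3, 4.2      # junctions, F, Ohm, K
R_K = h / e**2
w_c = 1.0 / (R_e * C)
beta = 1.0 / (k * T)

def re_z_over_rk(w):
    return (R_e / ((w / w_c)**2 + N**2)) / R_K

def J(t, w_max=100.0 * w_c):
    re = quad(lambda w: 2.0 * re_z_over_rk(w) / w *
              (np.cos(w * t) - 1.0) / np.tanh(0.5 * beta * hbar * w),
              0.0, w_max, limit=400)[0]
    im = quad(lambda w: -2.0 * re_z_over_rk(w) / w * np.sin(w * t),
              0.0, w_max, limit=400)[0]
    return re + 1j * im

for t in (1e-13, 5e-13, 1e-12, 2e-12):       # times of order the RC and thermal times
    print(t, J(t))
```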
Figure 1 shows a schematic view of an array with its bias circuitry. To check for the consistency of results for arrays with different numbers of junctions, each sample included one pair of arrays with about 3 $`\mu `$m space in between. Each pair had a different number of junctions, typically $`N=1`$, 2, $`N=2`$, 8, and $`N=2`$, 20 (only samples with $`N=2`$, 8 and $`N=2`$, 20 are shown in Table I). The array consisted of $`\mathrm{Al}/\mathrm{AlO}_\mathrm{x}/\mathrm{Al}`$ tunnel junctions ($`0.01`$–$`0.05`$ $`\mu `$m<sup>2</sup>) with four chromium resistors, $`Z_{e,j}(\omega )=R_j`$, at a distance of 2 $`\mu `$m, two at each end of the array. Cr resistances of $`1`$–$`20`$ k$`\mathrm{\Omega }`$ can be easily obtained by adjusting the width ($`100`$ nm), length ($`2`$ $`\mu `$m), and thickness ($`3`$–$`8`$ nm) of the chromium films. The equivalent environment resistance (additional to the natural free-space-like impedance of $`100`$ $`\mathrm{\Omega }`$) will then be $`R_e=R_1R_2/(R_1+R_2)+R_3R_4/(R_3+R_4)`$. Samples were made by e-beam lithography and three-angle evaporation techniques. All measurements were carried out at 4.2 K. More details of the measurement techniques are presented in .
Figure 2 shows measured data for a typical sample with $`N=2`$, $`R_T=44.9`$ k$`\mathrm{\Omega }`$, $`R_e=1.12`$ k$`\mathrm{\Omega }`$ and $`C=2.25`$ fF (sample 9A in Table I (a)). $`C`$ was determined by searching for the best fit of the measured depth of the zero-bias anomaly to the value given by Eqs. (1)–(3); those equations are in agreement with the zero-bias results of . Each point in the simulated $`IV`$-curve (not shown in the figure) was obtained as a result of $`1000`$ (pseudo)random tunneling events. The conductance curve (CC) was obtained by numerical differentiation of the $`IV`$-curve. It is worth mentioning that even with a much smaller number of draws, e.g. $`10`$, the result agrees to better than $`1.5\%`$ accuracy (with respect to the height and width of the CC dip) with that obtained from “long simulations”. The time needed for each step, with a reasonable partition of the voltage interval and using a regular desktop computer, was about $`1`$ minute. The result with $`N=2`$ obtained from Eq. ($`5`$), based on the algorithm presented in , is identical to that of the more comprehensive method described above.
In Fig. 3 we have drawn the normalized half-width of the zero-bias minimum of the conductance curve, $`V_{1/2}`$, as a function of the environment resistance for different pairs of samples. By normalization we mean scaling the half-widths by those derived for arrays without an external impedance, i.e., scaling by $`V_{1/2,0}\approx 5.439Nk_BT/e`$ . Usually, we have done measurements for each sample in six different combinations of current and voltage probes (four-probe measurement), and the variations between the different combinations are shown by the error bars. The unequal height of the error bars for different samples is due to this. For $`N=2`$, $`V_{1/2}`$ shows a sharp maximum at $`R_e\approx 1`$ k$`\mathrm{\Omega }`$. For longer arrays, $`N=8`$ and $`N=20`$, the dependence is much weaker, and $`V_{1/2}`$ stays close to $`V_{1/2,0}`$, which directly demonstrates the advantage of using long arrays in Coulomb blockade thermometry. In each case (unambiguously only for $`N=2`$, though), the normalized $`V_{1/2}`$ approaches unity for $`R_e\to 0`$ and for $`R_e\to \infty `$. This is in agreement with predictions of the PC theory. We will discuss this in further detail below.
In the same figure the results obtained by numerical simulation, and for $`N=2`$ by the direct calculation described above, are depicted. In spite of the overall agreement between experiment and theory, there is a noticeable discrepancy between the measured and the predicted value of $`V_{1/2,max}`$, the widest half-width, of the two-junction array. In this case ($`N=2`$) we could calculate the half-width with the two methods described above, without a noticeable difference between them. For $`N=20`$ the absolute difference between experiment and theory is smaller; the predicted peak itself is very small, less than $`1\%`$ (see inset of Fig. 3). Comparison of the experimental data to those derived from the theory for $`N=2`$ shows that the former produce up to $`15\%`$ wider conductance curves (Fig. 3). Cotunneling and other higher order tunneling effects may play some role in our samples, because the junction resistances are not large compared to $`R_K`$. Although in the simpler case of a two-junction array without an electromagnetic environment higher order tunneling tends to broaden the half-width of the CC dip , a comparison between our data and a theoretical study of cotunneling effects in an array with a dissipative environment is beyond the scope of this work and is not done here. There is also a difference in the shape of the “shoulders” of the measured conductance curves for $`N=2`$ in particular (see Fig. 2) as compared to the theory. Such a distortion of shape can be caused by a nonuniform size distribution of the junctions in the array, which we have studied in the case of no external impedance . Experimentally the intercomparison of the depths in different samples is difficult because the size of the junctions varies from sample to sample, and this gives the main contribution to the depth variation.
Next, let us consider the conductance curve in more detail. Using the time-domain formulation of single electron tunneling presented in we repeated the calculation of the high-temperature conductance $`G(V)`$ of the symmetric $`N`$-junction array without stray capacitance in the limit of large $`R_e`$ ($`Nu_N\ll R_e/R_K`$; $`u_N\equiv [(N-1)/N][e^2/Ck_BT]`$). The result reads: $`\frac{G(V)}{G_T}=1-u_Ng(v)`$ with $`G_T\equiv 1/NR_T`$, $`v\equiv eV\beta /N`$ and $`g(x)\equiv [x\mathrm{sinh}x-4\mathrm{sinh}^2(x/2)]/8\mathrm{sinh}^4(x/2)`$. Comparison of the above expression with the corresponding one derived for the perfectly conducting environment ($`R_e=0`$) in , $`\frac{G(V)}{G_T}=1-\frac{N-1}{N}u_Ng(v)`$, indicates that $`(\frac{\mathrm{\Delta }G}{G_T})_{R_e\to \infty }=\frac{N}{N-1}(\frac{\mathrm{\Delta }G}{G_T})_{R_e\to 0}`$, where $`\frac{\mathrm{\Delta }G}{G_T}\equiv 1-\frac{G(0)}{G_T}`$ stands for the depth of the conductance dip. Here, again, the half-width has a universal value given by $`V_{1/2}=V_{1/2,0}`$. The above equation for the asymptotic values of $`\mathrm{\Delta }G/G_T`$ together with the last expression for $`V_{1/2}`$ confirms that the effect of a dissipative environment on the conductance becomes increasingly suppressed at large $`N`$ and is most noticeable for a two-junction array.
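The two universal numbers used in this comparison follow directly from the $`g(x)`$ defined above and are easy to verify numerically: the dip depth involves $`g(0)=1/6`$, and the full width at half minimum of the dip corresponds to $`V_{1/2}\approx 5.439Nk_BT/e`$. A short check (illustrative code, not the simulation used for Fig. 3):

```python
# Depth and half-width of the conductance dip from the g(x) defined above.
import numpy as np
from scipy.optimize import brentq

def g(x):
    return (x * np.sinh(x) - 4.0 * np.sinh(0.5 * x)**2) / (8.0 * np.sinh(0.5 * x)**4)

depth = g(1e-4)                    # numerically ~ 1/6, the zero-bias value
v_half = brentq(lambda v: g(v) - 0.5 * depth, 1.0, 6.0)
print(depth, 2.0 * v_half)         # ~0.1667 and ~5.439, i.e. V_1/2 ~ 5.439 N k_B T / e
```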
Finally, let us have a closer look at the results above in view of Coulomb blockade thermometry (CBT), which is a primary (and secondary) thermometer based on single electron charging effects in arrays of tunnel junctions . The main parameter is the half-width of the CC dip. We conclude that the inaccuracy in temperature measurements arising from environmental effects can be made small by increasing the number of junctions, $`N`$. Our numerical simulations along with the experimental results depicted above show that for $`N=20`$ the temperature determined by CBT is very close to the thermodynamic temperature. In practice, in a sample with no intentional $`R_e`$ the effective value of the external impedance is of the order of the free space impedance, $`R_{e,eff}<Z_0\approx 377`$ $`\mathrm{\Omega }`$, and therefore the agreement is better than $`\pm 0.5\%`$ (see the experimental data point marked by an open circle in Fig. 3).
In summary, we have studied the effect of a resistive electromagnetic environment on transport in arrays of normal metal tunnel junctions within the high-temperature and weak-tunneling regime. Special attention has been paid to the half-width of the conductance curve. Overall agreement between numerical results based on the extension of the PC theory and experimental data has been observed. We cannot explain the quantitative discrepancy between theory and experiment for the strong enhancement of the width at intermediate values of $`R_e`$ in the $`N=2`$ case; higher order tunneling effects have not, however, been included in the present theory. As a practical conclusion, we verify that the effect of the environment is most pronounced in a two-junction array and can be made sufficiently small for thermometric applications by increasing the number of junctions in the array.
We thank A. Korotkov and J. König for discussions. We thank the National Graduate School in Materials Physics for support.
TABLE CAPTION Table I. (a) Parameters of the measured samples with $`N=2`$. Samples with the same capital letter, e.g., 2A, 8A, 9A and 10A, belong to the same chip. (b) and (c) Parameters of the measured samples with $`N=8`$ and $`N=20`$, respectively. Samples 4C and 1C, 7C and 2C, and 5B and 1B constitute pairs of samples, with a space of only about 3 $`\mu `$m between the respective arrays. Capacitances are obtained by fitting the theoretical conductance curves to the experimental ones. The only fitting parameter is the capacitance of the sample.
FIGURE CAPTIONS
Fig. 1. An array of $`N`$ tunnel junctions in an electromagnetic environment together with its bias circuitry. For a symmetric array in a purely dissipative environment, $`Z_{e,j}(\omega )=R_j`$, and with negligible stray capacitances, $`C_{0,k}=0`$, the real part of the total impedance as seen by the $`k`$th junction, $`\text{R}e[Z_t^k(\omega )]`$, reduces to the simple form presented in the text (Eq. (6)).
Fig. 2. The measured conductance curve for a two-junction array with $`R_e=1.12`$ k$`\mathrm{\Omega }`$ (sample 9A). The curve under the data points is the theoretical conductance curve with $`C=2.25`$ fF (see text).
Fig. 3. Measured half-width of the conductance curve, normalized by $`5.439Nk_BT/e`$ (see text), for samples with $`N=2`$ (solid diamonds), $`N=8`$ (open triangles) and $`N=20`$ (open squares) as a function of the extra (i.e., that in addition to the unintentional space impedance of $`100`$ $`\mathrm{\Omega }`$) external resistance. For comparison, a sample ($`N=20`$) without any intentional on-chip impedance is shown in the plot by an open circle. The lowermost solid curve is obtained by simulation for a completely symmetric $`20`$-junction array with $`C\equiv C_k=5.0`$ fF. The uppermost solid curve is the result of simulation for a two-junction array with $`C=2.2`$ fF, whereas the dotted and dashed lines correspond to $`C=5.0`$ fF and $`C=1.4`$ fF, respectively. The middle solid curve indicates the result of simulation for an eight-junction array with $`C=5.0`$ fF. The capacitances were obtained from the depth of the dip of the conductance curves. The inset shows the normalized half-width together with the depth of the conductance curve ($`\mathrm{\Delta }G/G_T`$), obtained from the theory, for a 20-junction array.
# Searches for HI in the Outer Parts of Four Dwarf Spheroidal Galaxies
## 1 Introduction
Our Galaxy’s dwarf spheroidal companions have long been thought to be old and dead galaxies. They show no evidence for current star formation. Furthermore, searches for HI emission from the dwarf spheroidal galaxies found no evidence of neutral gas (Knapp et al. 1978; Mould et al. 1990; Koribalski, Johnston, & Otrupcek 1994); one exception seems to be Sculptor (Carignan et al. 1998). Optical and UV absorption experiments for Leo I (Bowen et al. 1995, 1997) detected no neutral or low ionization level gas near that galaxy. Thus, it is commonly assumed that the dwarf spheroidal galaxies have no interstellar medium at all.
The traditional picture of dwarf spheroidals has been that they formed their stars in one major star formation event early in the universe, more than 10 Gyr ago. They were then evacuated of gas by some process and have been evolving passively ever since. The evacuating process could have been stellar winds and supernovae from the first generation of stars, gravitational interactions with our Galaxy, stripping by the halo of our Galaxy, or perhaps other processes.
Recent information on the star formation histories of the spheroidals contradicts this simple picture. Most of the dwarf spheroidals experienced periods of star formation activity at various times from 10 Gyr to 1 Gyr ago (see reviews by Mateo 1998 and Grebel 1999). Fornax and Leo I may even have had star formation as recently as 100 Myr ago (Stetson, Hesser, & Smecker-Hane 1998; Gallart et al. 1999). Since star formation implies the presence of gas, the majority of the spheroidals did contain neutral gas just a few Gyr ago. The history of the spheroidals seems to be more complicated than previously thought. Moreover, there are at least two galaxies which look a great deal like the other spheroidals but which do contain neutral gas: Sculptor (Carignan et al. 1998) and LGS 3 (Young & Lo 1997). Thus, in the light of the detailed photometric evidence for recent star formation, it is important to establish whether or not the spheroidals have a neutral interstellar medium (ISM). Clearly, the presence or absence of gas is an important clue to the history of these galaxies.
The problem is that existing searches for HI in the dwarf spheroidals do not clearly establish whether they have HI or not. Young (1999) summarizes the limitations of previous observations; the major fault is that they covered only a small fraction of the central parts of the galaxies. Thus the HI mass limits, which appear to be very low indeed, do not pertain to the entire galaxy; they pertain only to the small portion of the galaxy which has been observed. In the case of the Draco and Ursa Minor spheroidals (Knapp et al. 1978) a beam with a half-power radius of 5′ was centered on galaxies whose core radii (semi-major axes) are 9′ and 16′. Therefore, less than one third of the area inside the core radii of these galaxies has been observed. Similarly, previous observations of the Sextans dwarf spheroidal (Carignan et al. 1998) centered a beam with a half-power radius of 7.5′ on a galaxy with a core radius of 17′. This problem also exists for the Sagittarius dwarf spheroidal, Fornax, and Carina (Koribalski et al. 1994; Knapp et al. 1978; Mould et al. 1990). Substantial amounts of HI could be present in the unobserved parts of the galaxies, or even nearby and outside the optical galaxy. There are many examples of the latter case, including the dwarf irregular M81 dwarf A, where HI is found in a ring outside the optical galaxy (Sargent, Sancisi, & Lo 1983), and Sculptor, where HI is also found outside the optical galaxy (Carignan et al. 1998). In short, it cannot be assumed that HI would have to be in the centers of the dwarf spheroidals, and therefore the existing observations do not constrain their HI contents.
We address these problems by making additional searches for HI emission in the outer parts of our Galaxy’s dwarf spheroidal companions. Young (1999) describes interferometric (Very Large Array) observations of Fornax, Leo II, and Draco. The present paper complements that work by using the NRAO 140-foot telescope to search for HI emission in Sextans, Leo I, Ursa Minor, and Draco. These new observations are important because they go out much farther in radius than previous observations. We go out to radii beyond 2.5 times the core radius in all four galaxies. In all galaxies except Sextans, the observations go out to radii beyond or close to the tidal radius. Subsequent sections describe the observations, the detection limits, whether gas could have been missed by the present observations, and the significance of the new data.
## 2 Observations
The observations were made with the NRAO 140-foot telescope during the nights of March 30 to April 11, 1998. The 21cm HFET receiver was employed, and system temperatures were around 19–20 K. The correlator was divided into two banks of 512 channels, each receiving an independent circular polarization. Bandwidths were 2.5 MHz (530 km s<sup>-1</sup>) and 1.25 MHz (264 km s<sup>-1</sup>), which give velocity resolutions of 1.0 and 0.5 km s<sup>-1</sup>. Figures 1 through 4 show the observed locations superposed on the stellar distributions of the galaxies (the isopleth maps of Irwin & Hatzidimitriou 1995). The pointings are spaced by the FWHM of the 140-ft beam, which is 21′.
The observations were made in frequency-switching mode so that any possible HI structures on very large scales would not be removed by position switching. The center positions of all galaxies were observed with 2.5 MHz bandwidth and 1.0 km s<sup>-1</sup> resolution. Non-central positions for Sextans and Leo I were observed with a 1.25 MHz bandwidth and 0.5 km s<sup>-1</sup> resolution (and overlapped frequency switching in the case of Leo I), and were later binned to 1.0 km s<sup>-1</sup> resolution. In all cases, the frequency offset was more than adequate to ensure the inclusion of all possible gas bound to the dwarf spheroidals, which have stellar velocity dispersions on the order of 10 km s<sup>-1</sup>. In order to improve the baselines, observations were made at night and the focus was continually modulated by $`\pm `$1/8 wavelength.
The flux calibration scale for this dataset is based on the continuum sources 3C295 and 3C274, which were observed three times per night. They are assumed to have flux densities of 22.22$`\pm `$0.11 Jy and 203.0$`\pm `$6.0 Jy, respectively, at 1408 MHz (Ott et al. 1994). The antenna temperature scale of the telescope was 3.46$`\pm `$0.05 Jy/K when assuming a noise tube temperature of 1.60 K. The effects of atmospheric opacity are ignored in these data; all observations were made at elevations above 27°, so that the maximum opacity correction would be 2.5% (van Zee et al. 1997). Spectral index corrections and the 1% gain effect of focus modulation are also ignored. The pointing was also checked three times per night with observations of 3C274 and 3C295; all pointing offsets were less than 45″ (3.6% of the beam width), so have negligible effect on the data.
After inspection, the individual scans at each position were stacked. Baseline subtraction was done with second order baselines over regions free from Galactic and/or high velocity cloud (HVC) emission. The exception to this statement is Leo I, which was observed with overlapped frequency switching, and third order baselines were used. The resulting spectra are shown in Figures 5 through 8. Table 1 gives the center coordinates and optical velocity adopted for each galaxy in this paper. Table 1 also gives the angular major axis core and tidal radii and the distance of each galaxy, taken from Irwin & Hatzidimitriou (1995).
## 3 Results
Table 2 gives every location observed along with the noise level per 1.0 km s<sup>-1</sup> channel, integrated intensity, and column density and mass limits. At each position the integrated intensity is summed in a range 60 km s<sup>-1</sup> wide, centered on the adopted optical velocity from Table 1. The 60 km s<sup>-1</sup> range is chosen for the following reason: in the absence of other information, we assume that any HI emission would be centered on the optical velocity with a velocity dispersion not very much greater than that of the stars. The spheroidals have stellar velocity dispersions of 13 km s<sup>-1</sup> and less, and have little rotational support (Irwin & Hatzidimitriou (1995), and references therein). The 60 km s<sup>-1</sup> range should be wide enough to include any possible emission from the spheroidals, including effects of the width of the HI line and uncertainty in the optical velocity.
The uncertainty in the integrated intensity is a formal estimate which counts two contributions; the dominant one is from the statistical uncertainty of summing noisy channels, and a smaller contribution is from the uncertainty in determining the baseline level. This estimate is described by many authors— for example, Sage (1990). The estimate assumes completely uncorrelated channels (e.g. no baseline wiggles), so it may be an underestimate of the true uncertainty. The HI column density upper limit is three times the uncertainty in the integrated intensity; the advantage of using this estimator is that it provides a conservative limit which is independent of the velocity resolution. The mass upper limit is derived from the column density upper limit and the distance in Table 1.
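To see how the numbers in Table 2 follow from an integrated-intensity limit, the standard optically thin conversions are sketched below. The exact beam-area convention used for the mass limits is our assumption (a Gaussian beam filled with gas at the limiting column density), so the output is indicative only.

```python
# Standard optically thin HI conversions: column density from the integral
# of T_B dv, and a mass for gas filling a Gaussian beam at that column
# density.  The beam-filling and beam-area conventions are assumptions.
import numpy as np

M_H = 1.6726e-24       # g
M_SUN = 1.989e33       # g
PC = 3.086e18          # cm

def column_density(int_tb_dv):
    """N_HI [cm^-2] from the integrated intensity [K km/s]."""
    return 1.823e18 * int_tb_dv

def hi_mass(n_hi, distance_kpc, beam_fwhm_arcmin=21.0):
    """HI mass [M_sun] for gas at column density n_hi [cm^-2] filling the beam."""
    omega_beam = 1.133 * np.radians(beam_fwhm_arcmin / 60.0)**2   # sr
    area_cm2 = omega_beam * (distance_kpc * 1.0e3 * PC)**2
    return n_hi * M_H * area_cm2 / M_SUN

# Illustrative 3-sigma limit of 0.1 K km/s for a galaxy at 80 kpc:
n = column_density(0.1)
print(n, hi_mass(n, 80.0))   # ~2e17 cm^-2 and a few hundred solar masses
```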
All of the integrated intensities in Table 2 have significance levels less than $`2\sigma `$. The column density limits in Table 2 are only a few $`\times `$ 10<sup>17</sup> $`\mathrm{cm}^{-2}`$, and mass limits are 300–3000 $`\mathrm{M}_{\odot }`$. Only one spectrum shows a feature which looks as if it might be a real emission line— the southwest position on Ursa Minor. This feature peaks at $`-260`$ km s<sup>-1</sup>, only 13 km s<sup>-1</sup> away from the optical velocity of the galaxy. But it is not strong enough to be considered a real detection. The highest possible significance level that this feature can achieve comes from summing five consecutive positive-valued channels, which gives an integrated intensity of 0.054 $`\pm `$ 0.016 K km s<sup>-1</sup> (3.4$`\sigma `$). It is unlikely that this feature is anything more than noise. Given the large number of independent pointings in this project, the probability of a noise spike at this level is high; moreover, the feature does not appear in adjacent, more sensitive spectra. In any case, if this feature were real, it would correspond to a column density of 1.3$`\times 10^{17}`$ $`\mathrm{cm}^{-2}`$ and a mass of 180 $`M_{\odot }`$, and both of these values are well under the limits in Table 2. In short, no HI emission (or absorption) is detected at any position, other than gas that can be attributed to our Galaxy or HVCs.
Direct comparison of the HI mass upper limits in Table 2 to the previously published limits is not straightforward. Such comparisons are complicated by differences in the beam sizes, in the assumed line width (which is not always specified), and in the method used for estimating a detection limit. For Leo I, the mass limit of Knapp et al. (1978) is 5800 $`\mathrm{M}_{\odot }`$ in a 10′ beam, assuming a 15 km s<sup>-1</sup> line width, after recalculating for the distance assumed in this paper. The current mass limit for the central position of Leo I is 2900 $`\mathrm{M}_{\odot }`$ in a 21′ beam, assuming a 60 km s<sup>-1</sup> line width. In this case, the present observations give a substantial increase in sensitivity over the previous observations. But it is clear that in all four cases, the major advantage of the present observations is in covering a much larger area than was previously studied.
Earlier HI observations of these four galaxies consisted of one pointing with a half-power radius of 5′ to 10′. The present data go out to radii of 42′ or 21′ (Leo I). In all cases the observations cover a region out to a radius of at least 2.5 core radii. For Leo I, Draco, and Ursa Minor, the observations go out to 0.8 to 1.7 times the major axis tidal radius. Sextans, which has a tidal radius of nearly three degrees, is the only case in which the observations do not go out close to the tidal radius; nevertheless, the spatial coverage is still substantially improved.
Table 1 gives, in column 8, the velocity ranges which were observed but which were covered by Galactic or HVC emission. Column 9 of Table 1 gives the velocity ranges that have been searched for HI emission in each galaxy; for Leo I and Sextans the larger velocity range applies to the center position and the smaller range to the non-center positions. We presume that a weak emission line from gas associated with the dwarf spheroidals would not be detectable on top of the strong Galactic and HVC emission. Furthermore, spatial variations in the intensity of the Galactic or HVC emission can be a factor of two and greater on 21′ scales, so that position switching or mapping still does not make it possible to detect a weak dwarf spheroidal line on top of strong Galactic or HVC emission. Draco and Ursa Minor are projected onto HVC complex C, which has center velocities around $`-180`$ km s<sup>-1</sup> in this part of the sky (Hulsbosch & Wakker 1988; Wakker & van Woerden 1991). This HVC complex is strongly detected in the present observations and is detectable out to velocities of about $`-230`$ km s<sup>-1</sup>. Leo I and Sextans are not projected in front of known HVC gas, but Galactic emission covers velocities out to about +150 km s<sup>-1</sup>. Fortunately, none of the Galactic or HVC emission extends to the optical velocities of the dwarf spheroidals. Thus the Galactic and HVC emission probably does not hide gas which is associated with the dwarf spheroidals, unless the optical velocities are wrong by 20 to 140 km s<sup>-1</sup>.
## 4 Discussion
No HI emission was detected from the dwarf spheroidals, with column density limits of 2–6$`\times 10^{17}`$ $`\mathrm{cm}^{-2}`$ and with mass limits of 300 to 3000 $`\mathrm{M}_{\odot }`$. To understand the properties of atomic gas which could escape detection, consider the observed quantity: the brightness temperature (after continuum subtraction), $`\mathrm{\Delta }\mathrm{T}_\mathrm{B}`$. The brightness temperature can be related to the gas’s physical parameters of spin temperature $`\mathrm{T}_{\mathrm{spin}}`$, optical depth $`\tau `$, and beam filling factor $`\mathrm{\Phi }`$ in the usual way
$$\mathrm{\Delta }\mathrm{T}_\mathrm{B}=\mathrm{\Phi }(\mathrm{T}_{\mathrm{spin}}-\mathrm{T}_{\mathrm{bg}})(1-\mathrm{e}^{-\tau }).$$
In the present case the background brightness temperature $`\mathrm{T}_{\mathrm{bg}}`$ refers to the microwave background at 2.7 K. Thus, there are three ways that atomic gas could escape detection by the present observations: it could have very low optical depth, hence low column density; small beam filling factor; or spin temperature very close to 2.7 K. These three possibilities are discussed below, and we argue that none of these possibilities is likely to have hidden a substantial mass of atomic gas.
The column density detection limit is derived from integrating the observed brightness temperature over a velocity range, and therefore the column density limits in Table 2 refer to the product of the true gas column density and the unknown beam filling factor $`\mathrm{\Phi }`$. HI emission could be present at the observed positions with column densities greater than 10<sup>17</sup> $`\mathrm{cm}^{-2}`$, provided that the gas is patchy. For example, a circular source of diameter 2′ would fill only 1% of the beam, and such a source could escape detection in our data even if the column density were as high as 10<sup>19</sup> $`\mathrm{cm}^{-2}`$. But because the gas mass is integrated over area, the gas mass limits in Table 2 are independent of the beam filling factor. Also note that this kind of bright, patchy emission would be easily detectable in interferometric observations. The VLA observations of Draco, Leo II, and Fornax (Young 1999) provide direct evidence that bright, patchy HI emission is not present in those galaxies. These complementary data, along with the fact that the mass limit is independent of beam filling factor, suggest that there isn’t (much) HI which has escaped the present single-dish observations by virtue of small beam filling factors.
HI with very low column density (low optical depth) probably doesn’t exist in the dwarf spheroidals; it is expected to be ionized. Sensitive observations of a spiral galaxy (van Gorkom 1993) show that the HI disk of the spiral cuts off sharply when the HI column density reaches about 10<sup>19</sup> $`\mathrm{cm}^{-2}`$. A similar effect is seen in high velocity clouds, where Colgan et al. (1990) observe a tendency for the HI in the clouds to cut off sharply at column densities below 5$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$. Corbelli & Salpeter (1993a, 1993b) and others have argued that these HI cutoffs are caused by ionization by the galactic and/or extragalactic UV radiation field. In fact, ionized gas outside of the truncated HI disk has been detected in NGC 253 (Bland-Hawthorn, Freeman, & Quinn 1997; Bland-Hawthorn 1998). In this picture, the dwarf spheroidals are probably similar to high velocity clouds, and hydrogen at column densities below about 5$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ would be photoionized by our own Local Group galaxies or quasars. Therefore, it is unlikely that the present observations could have missed substantial amounts of either low column density HI with large beam filling factors or high column density HI with small beam filling factors.
The third way in which atomic gas could evade detection is by having a spin temperature close to 2.7 K. The true column density could then be greater than the apparent column density (which is based on the assumption that spin temperatures are not close to 2.7 K). Current data cannot rule out this possibility, but it is not usually considered to be likely. Corbelli & Salpeter (1993a) studied the excitation of HI in an environment of low pressure and low heating rate— conditions which were intended to be representative of the far outer disks of spirals, and which are probably applicable to the dwarf spheroidals as well. The result is that the extragalactic UV background, via Lyman $`\alpha `$ pumping, is expected to keep the spin temperature of HI well above 2.7 K even when collisional pumping is minimal. In the most extreme case of an unrealistically low UV flux, Corbelli & Salpeter find that the spin temperatures can drop low enough so that the true column density is four times greater than the apparent column density. In more realistic cases, the difference is much less than a factor of four. Again, this mechanism does not seem to be a viable method for hiding large amounts of atomic gas.
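For optically thin gas, the size of this effect is set by the suppression factor $`(\mathrm{T}_{\mathrm{spin}}-\mathrm{T}_{\mathrm{bg}})/\mathrm{T}_{\mathrm{spin}}`$ relating the apparent column density to the true one, which follows from the relation above in the small-$`\tau `$ limit. The factor of four quoted from Corbelli & Salpeter thus corresponds to a spin temperature near 3.6 K, as the short illustration below shows.

```python
# Ratio of apparent to true HI column density for optically thin gas seen
# against the 2.7 K background, as a function of spin temperature.
T_BG = 2.7
for t_spin in (3.0, 3.6, 5.0, 10.0, 100.0):
    suppression = (t_spin - T_BG) / t_spin
    print(t_spin, round(suppression, 3), round(1.0 / suppression, 1))
# T_spin ~ 3.6 K gives a factor of ~4 between true and apparent column
# density, the most extreme case discussed above.
```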
Of course, it is always possible that atomic gas could be outside the regions observed. In Sextans, HI emission could be located between a radius of 42′ = 2.5 core radii and the tidal radius. For the other three galaxies, a significant amount of atomic gas could remain undetected only if it was outside the tidal radius. A search for this kind of “extra-tidal” gas would require sensitive surveys, which are not presently planned for the northern hemisphere. In fact, Lin & Murray (1999) have proposed an interesting scenario in which gas might be associated with the spheroidals, outside the tidal radius, and strung out along the spheroidals’ orbits. At some later apogalactic point, the gas could be recollected into the dwarf galaxy proper. But in this picture, the co-orbiting gas is very low column density and is ionized, not neutral. As possible evidence against this picture, note that the absorption experiments of Bowen et al. (1995, 1997) give column density limits of 10<sup>15</sup>–10<sup>16</sup> $`\mathrm{cm}^{-2}`$ for neutral or low ionization state gas outside the tidal radius of Leo I. Nevertheless, except perhaps for Sextans, it is unlikely that substantial amounts of neutral gas would be associated with the dwarf spheroidals and outside the regions observed.
Finally, it must be conceded that substantial amounts of neutral gas could be hidden in the dwarf spheroidals in molecular form. Molecular hydrogen itself would be difficult to detect except in the UV absorption lines, and low metallicities might make the CO tracer molecule virtually undetectable as well. However, one might propose to detect molecular clouds in the spheroidals via their HI emission; after all, molecular clouds in our own Galaxy are associated with HI envelopes of column density 10<sup>20</sup> $`\mathrm{cm}^{-2}`$ and higher (Blitz & Williams 1999). But molecular clouds in the dwarf spheroidals would probably not have such extensive HI envelopes. Theoretical models (e.g. Draine & Bertoldi 1996) explain that the depth of the atomic envelope around a molecular cloud depends on the ratio of the local UV field strength to the gas density. The Galactic molecular clouds have HI envelopes of 10<sup>20</sup> $`\mathrm{cm}^{-2}`$ and higher because they are bathed in relatively strong UV fields. But if the molecular gas density could be maintained at 100 $`\mathrm{cm}^{-3}`$ or higher (a value which is typical for giant molecular clouds in our Galaxy’s disk), and if the UV field (at 1000 Å) is one tenth as strong as the standard “solar neighborhood” or Habing field, molecular clouds in the spheroidals could have atomic envelopes of column density 10<sup>18</sup> $`\mathrm{cm}^{-2}`$ and less. The more exotic, extremely dense (10<sup>9</sup> $`\mathrm{cm}^{-3}`$), fractal molecular clouds postulated by Pfenniger, Combes, & Martinet (1994) would have even smaller atomic envelopes. Thus, ignoring questions of the formation of H<sub>2</sub>, it is easy to see that a substantial mass of molecular gas could be present in the dwarf spheroidals, undetected by existing observations.
## 5 Implications
We have argued that it is unlikely that the dwarf spheroidals contain substantial amounts of atomic gas, unless that gas is quite far from the centers of the galaxies. Some interstellar medium could be hidden in the galaxies in ionized or molecular form. This situation is an important facet of what Mateo (1998) calls “the interstellar medium ‘crisis’ ” in dwarf spheroidal galaxies. Some of the spheroidals show evidence for very recent star formation, and they might be expected to have interstellar gas. Even in the spheroidals without recent star formation, evolved stars should be pumping gas back into the system in detectable amounts.
Of the four galaxies studied in this paper, Ursa Minor and Draco most closely conform to the traditional view of dwarf spheroidals as “Population II” systems. Color-magnitude diagrams for Ursa Minor and Draco indicate that the bulk of the stars are old, $`>10`$ Gyr, with very small, if any, intermediate age or young stellar populations. Recent work on the color-magnitude diagrams of these galaxies has been published by Martínez-Delgado & Aparicio (1999) and Grillmair et al. (1998). The Sextans dwarf spheroidal also appears to be predominantly old, though up to 25% of the stars may have intermediate ages of a few Gyr (Mateo et al. 1995). The most striking of the four is Leo I, in which the majority of the stars are intermediate-age stars (1–10 Gyr). Few of the stars are older than 10 Gyr, and some of the stars may be as young as a few hundred Myr (Gallart et al. 1999; Caputo et al. 1999). Star formation implies that Leo I did contain a substantial amount of cold, neutral interstellar gas just a few Gyr ago, and maybe just a few hundred Myr ago. The same is true for Fornax, in which HI has also not been detected (Stetson et al. 1998; Young 1999). The very recent star formation in these galaxies and the apparent lack of any substantial neutral medium are puzzles which we cannot resolve at this time.
Furthermore, in all four spheroidals observed here, mass loss from evolved stars might be expected to exceed the gas detection limits in relatively short times. According to Faber & Gallagher (1976), the mass loss rate in an old stellar population should be about 1.5 $`\mathrm{M}_{\mathrm{}}`$ $`\mathrm{yr}^{-1}`$ per 10<sup>11</sup> $`\mathrm{L}_{\mathrm{}}`$. The V-band luminosities of the present sample of spheroidals are 3–5$`\times 10^5`$ $`\mathrm{L}_{\mathrm{}}`$ for Ursa Minor, Draco, and Sextans, and 5$`\times 10^6`$ $`\mathrm{L}_{\mathrm{}}`$ for Leo I (Mateo 1998). If all the gas lost from evolved stars is in the form of atomic gas, and all of it settles to the central pointing, this gas would exceed the present detection limits in only 4$`\times 10^7`$ to 10<sup>8</sup> yr. If the gas is extended over the entire surveyed area, it would exceed the present detection limits in 2$`\times 10^8`$ to 2$`\times 10^9`$ yr. Again, what has happened to this gas is a puzzle. If the gas from evolved stars has high enough density, perhaps it is hidden in molecular form; if it has low enough density, perhaps it is ionized; perhaps this gas has been completely removed from the galaxies.
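To make the scale of this argument concrete, the short sketch below reproduces the arithmetic using the Faber & Gallagher scaling and the luminosities quoted above. The HI mass limits in the example are assumed placeholder values chosen only for illustration; they are not the actual limits of the present survey, which depend on the distance of each galaxy and the area covered.

```python
# Illustrative sketch of the mass-return timescale argument.
# The mass-loss scaling (1.5 Msun/yr per 1e11 Lsun) and the luminosities are
# from the text; the HI mass limits below are assumed, illustrative values.

def accumulation_time(luminosity_lsun, hi_mass_limit_msun):
    """Years needed for stellar mass loss to build up the given HI mass."""
    mass_loss_rate = 1.5 * luminosity_lsun / 1e11   # Msun per yr
    return hi_mass_limit_msun / mass_loss_rate

for name, lum in [("Ursa Minor / Draco / Sextans", 4e5), ("Leo I", 5e6)]:
    for limit in (3e2, 3e3):                        # assumed HI mass limits (Msun)
        print(f"{name}: {limit:.0e} Msun accumulates in "
              f"{accumulation_time(lum, limit):.1e} yr")
```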
Have the spheroidals always contained gas, but in forms which have not yet been observed? Do the spheroidals occasionally acquire neutral gas (perhaps from the Magellanic Stream or high velocity clouds), spend part of their lives looking like Sculptor and/or LGS 3, and later lose their gas? One thing is clear: despite the variety of interesting hypotheses about the evolution of the dwarf spheroidals, we do not yet understand their histories, and this problem remains one of the outstanding problems in the evolution of small galaxies and loose groups.
## 6 Summary
We have searched for HI emission in and around our Galaxy’s dwarf spheroidal companions Sextans, Leo I, Ursa Minor, and Draco. New information on the star formation histories of dwarf spheroidals shows that many of them formed stars a few Gyr ago, and some as recently as 100 Myr ago. Furthermore, previously published observations of the dwarf spheroidals covered only a small fraction of the galaxies’ areas, so they could not exclude the possibility of substantial amounts of gas elsewhere in the galaxies. The present observations cover much larger areas than previously studied; however, no HI was detected in these galaxies, down to column density limits of 2–6$`\times 10^{17}`$ $`\mathrm{cm}^{-2}`$. From these observations we conclude that there is no significant HI within the tidal radius of Leo I, Draco, or Ursa Minor, or within 2.5 core radii for Sextans.
Many thanks to J. Gallagher for suggesting the idea of this project and for providing invaluable advice. Thanks to M. Irwin for providing the isopleth maps of the dwarf spheroidals, and to B. Wakker for information on high velocity clouds.
# Numerical simulations of relativistic wind accretion on to black holes using Godunov-type methods
## 1 Introduction
The term “wind” or hydrodynamic accretion refers to the capture of matter by a moving object under the effect of the underlying gravitational field. The canonical astrophysical scenario in which matter is accreted in such a non-spherical way was suggested originally by Bondi and Hoyle , who studied, using Newtonian gravity, the accretion on to a gravitating point mass moving with constant velocity through a non-relativistic gas of uniform density. Such a process describes mass transfer and accretion in compact X-ray binaries, in particular in the case in which the donor (giant) star lies inside its Roche lobe and loses mass via a stellar wind. This wind impacts on the orbiting compact star forming a bow-shaped shock front around it.
The problem was first numerically investigated in the early 70’s. Since then, contributions of a large number of authors using highly developed Godunov-type methods extended the simplified analytic models (see, e.g., and references therein). These Newtonian investigations helped develop a thorough understanding of the hydrodynamic accretion scenario, in its fully three-dimensional character, revealing the formation of accretion disks and the appearance of non-trivial phenomena such as shock waves or flip-flop instabilities.
We have recently considered hydrodynamic accretion on to a moving black hole using relativistic gravity and the “test fluid” approximation . We present here a brief summary of the methodology and results of such simulations. We integrate the general relativistic hydrodynamic equations in the fixed background of the Kerr spacetime (including its non-rotating Schwarzschild limit) and neglect the self-gravity of the fluid as well as non-adiabatic processes such as viscosity or radiative transfer. In the black hole case the matter flows ultimately across the event horizon and becomes causally disconnected from distant observers . Near that region the problem is intrinsically relativistic and the gravitational accelerations significantly deviate from the Newtonian values.
## 2 Equations
The general relativistic hydrodynamic equations can be cast as a first-order flux-conservative system describing the conservation of mass, momentum and energy. Formulations of this sort are given, e.g. in . In this work we follow the approach laid out in for a perfect fluid stress-energy tensor $`T^{\mu \nu }`$. The system of equations then reads:
$$\frac{1}{\sqrt{-g}}\left(\frac{\partial \sqrt{\gamma }𝐮}{\partial x^0}+\frac{\partial \sqrt{-g}𝐟^i}{\partial x^i}\right)=𝐬$$
(1)
($`x^0=t`$; $`x^i`$ spatial coordinates, $`i=1,2,3`$) where $`𝐮\equiv 𝐮(𝐰)`$ are the evolved quantities, $`𝐮=(D,S_j,\tau )`$ and $`𝐟^i`$ are the fluxes
$$𝐟^i=(D\left(v^i-\frac{\beta ^i}{\alpha }\right),S_j\left(v^i-\frac{\beta ^i}{\alpha }\right)+p\delta _j^i,\tau \left(v^i-\frac{\beta ^i}{\alpha }\right)+pv^i),$$
(2)
$`v^i`$ being the 3-velocity and $`p`$ the pressure. The corresponding sources $`𝐬`$ are given by
$`𝐬=(0,T^{\mu \nu }\left({\displaystyle \frac{\partial g_{\nu j}}{\partial x^\mu }}-\mathrm{\Gamma }_{\nu \mu }^\delta g_{\delta j}\right),\alpha \left(T^{\mu 0}{\displaystyle \frac{\partial \mathrm{ln}\alpha }{\partial x^\mu }}-T^{\mu \nu }\mathrm{\Gamma }_{\nu \mu }^0\right)).`$ (3)
We note the presence of geometric terms in the fluxes and sources which appear as the local conservation laws of the density current and stress-energy are expressed in terms of partial derivatives. These terms are the lapse function $`\alpha `$, the shift vector $`\beta ^i`$ and the connection coefficients $`\mathrm{\Gamma }_{\nu \mu }^\delta `$ of the 3+1 spacetime metric
$$ds^2\equiv g_{\mu \nu }dx^\mu dx^\nu =-(\alpha ^2-\beta _i\beta ^i)dt^2+2\beta _idx^idt+\gamma _{ij}dx^idx^j$$
(4)
Additionally $`g\equiv \mathrm{det}(g_{\mu \nu })`$ is such that $`\sqrt{-g}=\alpha \sqrt{\gamma }`$ and $`\gamma \equiv \mathrm{det}(\gamma _{ij})`$.
The vector $`𝐰`$, representing the primitive variables, is given by $`𝐰=(\rho ,v_i,\epsilon )`$ where $`\rho `$ is the density and $`\epsilon `$ the specific internal energy. The evolved quantities are defined in terms of the primitive variables as $`D=\rho W`$, $`S_j=\rho hW^2v_j`$ and $`\tau =\rho hW^2-p-D`$, $`W`$ being the Lorentz factor $`W=(1-v^2)^{-1/2}`$, with $`v^2=\gamma _{ij}v^iv^j`$, and $`h`$ the specific enthalpy, $`h=1+\epsilon +p/\rho `$. A perfect fluid equation of state $`p=(\mathrm{\Gamma }-1)\rho \epsilon `$, $`\mathrm{\Gamma }`$ being the constant adiabatic index, closes the system.
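The algebraic map from primitive to conserved variables described above is straightforward; the sketch below is a minimal illustration of it for a single cell, assuming the ideal-gas equation of state and an arbitrary spatial metric. It is not taken from the code used for the simulations.

```python
import numpy as np

def prim_to_cons(rho, v, eps, gamma_ij, Gamma=5.0 / 3.0):
    """Map primitive variables (rho, v^i, eps) to conserved (D, S_j, tau).

    v holds the contravariant 3-velocity components v^i, gamma_ij is the
    spatial 3-metric and Gamma the adiabatic index of the ideal-gas EOS.
    """
    p = (Gamma - 1.0) * rho * eps              # p = (Gamma - 1) rho eps
    v_low = gamma_ij @ v                       # v_j = gamma_ij v^i
    v2 = float(v_low @ v)                      # v^2 = gamma_ij v^i v^j
    W = 1.0 / np.sqrt(1.0 - v2)                # Lorentz factor
    h = 1.0 + eps + p / rho                    # specific enthalpy
    D = rho * W
    S = rho * h * W**2 * v_low
    tau = rho * h * W**2 - p - D
    return D, S, tau

# Example: mildly relativistic flow on a flat spatial slice.
D, S, tau = prim_to_cons(1.0, np.array([0.3, 0.0, 0.0]), 0.1, np.eye(3))
```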
In our computations we specialize the above expressions to the Kerr line element which describes the exterior geometry of a rotating black hole. We use the Kerr-Schild form of the Kerr metric, which is free of coordinate singularities at the black hole horizon. Computations using the more standard Boyer-Lindquist (singular) form of the metric are presented in . Pertinent technical details concerning the specific form of these metrics are given in .
## 3 Numerical scheme
Our hydrodynamical code performs the numerical integration of system (1) using a Godunov-type method. The time update from $`t^n`$ to $`t^{n+1}`$ proceeds according to the following algorithm in conservation form:
$`𝐮_{i,j}^{n+1}=𝐮_{i,j}^n-{\displaystyle \frac{\mathrm{\Delta }t}{\mathrm{\Delta }x^k}}(\widehat{𝐟}_{i+1/2,j}-\widehat{𝐟}_{i-1/2,j})+\mathrm{\Delta }t𝐬_{i,j},`$ (5)
improved with the use of (second-order) conservative Runge-Kutta sub-steps to gain accuracy in time . The numerical fluxes are computed by means of Marquina’s flux-formula . After the update of the conserved quantities the primitive variables are computed via a root-finding procedure.
The flux-formula makes use of the complete characteristic information of system (1), eigenvalues (characteristic speeds) and right and left eigenvectors. Generic expressions are collected in .
The state variables, $`𝐮`$, must be computed (reconstructed) at the left and right sides of a given interface, out of the cell-centered quantities, prior to computing the numerical fluxes. In relativistic hydrodynamics one has the freedom to reconstruct either $`𝐰`$ (primitive variables) or $`𝐮`$ (evolved variables). For efficiency and accuracy considerations we reconstruct the first set, from which the remaining variables are obtained algebraically. The code uses slope-limiter methods to construct second-order TVD schemes by means of monotonic piecewise linear reconstructions of the cell-centered quantities. We use the standard minmod slope which provides the desired second-order accuracy for smooth solutions, while still satisfying the TVD property.
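As a minimal one-dimensional sketch of this reconstruction step (a simplified stand-in for the multi-dimensional reconstruction of the primitive variables performed by the code), the minmod-limited piecewise linear interface states can be written as follows.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, otherwise the slope of smaller magnitude."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(w):
    """Left/right states at the interior interfaces of a 1-D array of
    cell-centered values w, using piecewise linear minmod reconstruction."""
    slope = minmod(w[1:-1] - w[:-2], w[2:] - w[1:-1])   # limited slope in each cell
    w_left = (w[1:-1] + 0.5 * slope)[:-1]               # right edge of cell i
    w_right = (w[1:-1] - 0.5 * slope)[1:]               # left edge of cell i+1
    return w_left, w_right

# Example: a smooth ramp followed by a jump; the limiter keeps the jump monotone.
w = np.concatenate([np.linspace(1.0, 2.0, 10), np.full(10, 5.0)])
wL, wR = reconstruct(w)
```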
## 4 Results
The classical solution for an asymptotically uniform wind of pressureless gas past a compact source (modeled analytically by a point mass) was obtained by . In this solution the material is focussed at the rear part of the object as a result of the gravitational pull. For a pressureless gas, the density at this symmetry line could reach an infinite value and matter would flow on to the hole along this accretion line. However, when pressure is included in the model, a cylindrical shock forms around this line and the accretion proceeds along an accretion column of high density and pressure shocked material. The predicted final accretion pattern consists of a stationary conical shock with the material inside the accretion radius being captured by the central object. A schematic representation of this solution is depicted in Fig. 1.
A numerical evolution of relativistic wind accretion past a rapidly-rotating Kerr black hole ($`a=0.999M`$, $`a`$ specific angular momentum, $`M`$ black hole mass) is depicted in Fig. 2 (left panel). This simulation shows the steady-state pattern in the equatorial plane of the black hole. The tail shock appears stable to tangential oscillations, in contrast to Newtonian simulations with tiny accretors (see, e.g., and references therein; see for a related discussion). The accretion rates of mass and linear and angular momentum also show a stationary behavior (see, e.g., ). As opposed to the non-rotating black hole, in the rotating case the shock becomes wrapped around the central accretor, the effect being more pronounced as the black hole angular momentum $`a`$ increases. The inner boundary of the domain is located at $`r=1.0M`$ (inside the event horizon which, for this model, is at $`1.04M`$) which is only possible with the adopted regular coordinate system. The flow morphology shows smooth behavior when crossing the horizon, all matter fields being regular there.
The enhancement of the pressure in the post-shock zone is responsible for the “drag” force experienced by the accretor. The rotating black hole redistributes the high pressure area, with non-trivial effects on the nature of the drag force. The pressure enhancement is predominantly on the counter-rotating side. We observe a pressure difference of almost two orders of magnitude, along the axis normal to the asymptotic flow direction. The implication of this asymmetry is that a rotating hole moving across the interstellar medium (or accreting from a wind), will experience, on top of the drag force, a “lift” force, normal to its direction of motion (to the wind direction). Although different in origin this feature bears a superficial resemblance with the Magnus effect of classical fluid dynamics.
The right panel of Fig. 2 shows how the accretion pattern would look were the computation performed using the more common (though singular) Boyer-Lindquist coordinates. The transformation induces a noticeable wrapping of the shock around the central hole. The shock would wrap infinitely many times before reaching the horizon. As a result, the computation in these coordinates would be much more challenging than in Kerr-Schild coordinates, particularly near the horizon. Since the last stable orbit approaches closely the horizon in the case of maximal rotation, the interesting scenario of co-rotating extreme Kerr accretion would be severely affected by the strong gradients which develop in the strong-field region. This will most certainly affect the accuracy and, potentially, also the stability of numerical codes.
### Acknowledgments:
J.A.F. acknowledges financial support from a TMR fellowship of the European Union (contract nr. ERBFMBICT971902).
# On the Baseline Flux Determination of Microlensing Events Detectable with the Difference Image Analysis Method
## 1 Introduction
Detection of a large number of events is one of the big challenges in microlensing searches. The classical solution to this challenge is to observe fields with the greatest density of stars, such as the Galactic bulge (Alcock et al. 1997a; Udalski et al. 1997; Alard & Guibert 1997) and the Magellanic Clouds (Alcock et al. 1997b, 1997c; Ansari et al. 1996). While the use of such crowded fields increases the event rate, it also limits the precision of the photometry due to blending (Di Stefano & Esin 1995; Woźniak & Paczyński 1997; Han 1997; Alard 1997). In addition, with the use of the classical method based on PSF photometry one can monitor only stars with resolved images, and thus the number of source stars is limited by crowding.
These problems of the classical method of microlensing experiments can be resolved with the newly developed technique of difference image analysis (DIA, Alard 1998, 1999; Alard & Lupton 1998; Alcock et al. 1999a, 1999b; Melchior et al. 1998, 1999). Since the DIA method detects and measures the variation of source star flux by subtracting an observed image from a convolved and normalized reference image, one can measure light variations even for unresolved stars. By using the DIA method, one can not only improve the photometric precision by removing the effect of blending but also increase the number of detected events by including unresolved stars into monitoring sources. In addition, the DIA method allows one to overcome the restriction of conducting lensing experiments toward only resolved star fields and thus can extend our ability to probe extra-galactic MACHOs (Gould 1995, 1996; Han 1996; Han & Gould 1996; Crotts & Tomaney 1996; Tomaney & Crotts 1996; Ansari et al. 1997, 1999).
However, the principal problem with the DIA method in microlensing experiments is that, by its very nature, it has difficulties in measuring the unamplified flux (baseline flux, $`F_0`$) of a source star. This is because the observed light curve<sup>1</sup><sup>1</sup>1Throughout this paper, we use the term ‘light curve’ to designate the changes in the flux of a source star, while the term ‘amplification curve’ is used to represent the changes in the amplification of the source star flux. of a microlensing event obtained by the DIA method, $`F_{\mathrm{DIA}}`$, results from the combination of the true amplification $`A_0`$ and the baseline flux, i.e.
$$F_{\mathrm{DIA}}=F-F_{\mathrm{ref}}=F_0(A_0-1),$$
$`(1.1)`$
where $`F`$ and $`F_{\mathrm{ref}}`$ represent the source star fluxes measured from the image obtained during the progress of the event and the reference image, respectively. One significant consequence of this problem is that it produces a degeneracy in determining the lensing parameters of the event (see § 3), similar to the degeneracy problem for a blended event whose light curve results from the combination of $`A_0`$ and the blended flux. Therefore, it is often believed that the DIA method is not as powerful as the classical method based on PSF photometry in determining the Einstein time scale $`t_\mathrm{E}`$ of an event.
In this paper, we demonstrate that the degeneracy problem in microlensing events detectable from searches using the DIA method will not be as serious as is often feared. This is because a substantial fraction of events will be high amplification events, for which the deviations of the amplification curves constructed with the wrong baseline fluxes from their corresponding best-fit standard amplification curves will be considerable even for a small amount of the fractional baseline flux deviation $`\mathrm{\Delta }F_0/F_0`$. With a model luminosity function of source stars and under realistic observational conditions, we find that $`\sim 30\%`$ of detectable Galactic bulge events are expected to have high amplifications and their baseline fluxes can be determined with uncertainties $`\mathrm{\Delta }F_0/F_0\lesssim 0.5`$.
## 2 Mis-normalized Amplification Curves
The standard form of the amplification curve of a gravitational microlensing event is related to the lensing parameters by
$$A_0(u)=\frac{u^2+2}{u(u^2+4)^{1/2}};u=\left[\left(\frac{t-t_{\mathrm{max}}}{t_{\mathrm{E},0}}\right)^2+\beta _0^2\right]^{1/2},$$
$`(2.1)`$
where $`u`$ is the lens-source separation normalized in units of the angular Einstein ring radius $`\theta _\mathrm{E}`$, and the lensing parameters $`\beta _0`$, $`t_{\mathrm{max}}`$, and $`t_{\mathrm{E},0}`$ represent the impact parameter for the lens-source encounter, the time of maximum amplification, and the Einstein ring radius crossing time (Einstein time scale), respectively. Once these lensing parameters are determined from the amplification curve, one can obtain information about the lens because the Einstein time scale is related to the physical parameters of the lens by
$$t_{\mathrm{E},0}=\frac{r_\mathrm{E}}{v};r_\mathrm{E}=\left(\frac{4GM}{c^2}\frac{D_{ol}D_{ls}}{D_{os}}\right)^{1/2},$$
$`(2.2)`$
where $`r_\mathrm{E}=D_{ol}\theta _\mathrm{E}`$ is the Einstein ring radius, $`v`$ is lens-source transverse speed, $`M`$ is the mass of the lens, and $`D_{ol}`$, $`D_{ls}`$, and $`D_{os}`$ are the separations between the observer, lens, and source star.
However, if the baseline source star flux of an event is misestimated by an amount $`\mathrm{\Delta }F_0`$, the resulting amplification curve $`A`$ (hereafter ‘mis-normalized’ amplification curve) deviates from the true amplification curve $`A_0`$ by
$$A=\frac{F_0A_0+\mathrm{\Delta }F_0}{F_0+\mathrm{\Delta }F_0}=\frac{A_0+f}{1+f},$$
$`(2.3)`$
where $`f=\mathrm{\Delta }F_0/F_0`$ is the fractional deviation in the determined baseline flux.<sup>2</sup><sup>2</sup>2We note that if $`f`$ represents the blended light fraction, i.e. $`f=B/F_0`$, equation (2.3) describes the observed amplification curve of a microlensing event affected by blended light of an amount $`B`$. Therefore, the amplification curve of a blended event can be regarded as the mis-normalized amplification curve constructed with the baseline flux deviation $`\mathrm{\Delta }F_0=B`$. The only difference is that, since the blended light must be positive (i.e. $`f>0`$) while the baseline flux deviation can be either negative or positive, blended amplification curves are always underestimated. If the baseline flux is overestimated (i.e. $`f>0`$), the determined amplification is lower than $`A_0`$, and vice versa. Note that while there is no upper limit for $`f`$, it should be greater than $`-1`$ (i.e. $`f>-1`$).
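As a simple illustration of equations (2.1) and (2.3), the sketch below evaluates the true and mis-normalized amplification curves on a grid of observation times; the parameter values are arbitrary and used only for demonstration.

```python
import numpy as np

def amplification(t, t_max, t_E, beta0):
    """Standard single-lens amplification A_0(t) of eq. (2.1)."""
    u = np.sqrt(((t - t_max) / t_E)**2 + beta0**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def misnormalized(A0, f):
    """Amplification inferred when the baseline flux is off by f = dF0/F0 (eq. 2.3)."""
    return (A0 + f) / (1.0 + f)

t = np.linspace(-1.0, 1.0, 61)        # times in units of t_E,0, with t_max = 0
A0 = amplification(t, 0.0, 1.0, 0.1)  # a fairly high amplification event
A = misnormalized(A0, 0.5)            # baseline flux overestimated by 50%
```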
The shape of a microlensing event amplification curve is characterized by its height (peak amplification) and the width (event duration), which are parameterized by the impact parameter and the Einstein time scale, respectively. Since both the height and width of the amplification curve are changed due to the wrong estimation of the baseline flux, the lensing parameters determined from the mis-normalized amplification curve will differ from the true values. First, the change in the peak amplification makes the determined impact parameter change into
$$\beta =\left[2\left(1-A_p^{-2}\right)^{-1/2}-2\right]^{1/2};A_p=\frac{A_{p,0}+f}{1+f},$$
$`(2.4)`$
where $`A_{p,0}=(\beta _0^2+2)/\beta _0(\beta _0^2+4)^{1/2}`$ and $`A_p`$ are the peak amplifications of the true and the mis-normalized amplification curves. In addition, due to the change in the event duration, the determined Einstein time scale differs from the value $`t_{\mathrm{E},0}`$ by
$$t_\mathrm{E}=t_{\mathrm{E},0}\left(\frac{\beta _{th}^2-\beta _0^2}{\beta _{th,0}^2-\beta ^2}\right)^{1/2},$$
$`(2.5)`$
where $`\beta _{th}`$ represents the maximum allowed impact parameter (threshold impact parameter) for a source star to be detected by having a peak amplification higher than a certain threshold minimum value $`A_{th}`$. With the right choice of the baseline flux, the required minimum peak amplification and the corresponding maximum impact parameter are $`A_{th,0}=3/\sqrt{5}`$ and $`\beta _{th,0}=1`$. However, since the detectability will be determined from the mis-normalized amplification curve, the actually applied threshold amplification and the corresponding impact parameter will differ from $`A_{th,0}`$ and $`\beta _{th,0}`$ by
$$A_{th}=A_{th,0}(1+f)-f,$$
$`(2.6)`$
and
$$\beta _{th}=\left[2\left(1-A_{th}^{-2}\right)^{-1/2}-2\right]^{1/2}$$
$`(2.7)`$
(Han 1999).
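The mapping from a given baseline flux deviation $`f`$ to the parameters of the best-fit standard curve is a direct transcription of equations (2.4) – (2.7); a minimal sketch is given below.

```python
import numpy as np

def beta_from_A(A):
    """Impact parameter corresponding to a given amplification (inverse of eq. 2.1)."""
    return np.sqrt(2.0 / np.sqrt(1.0 - A**-2) - 2.0)

def best_fit_parameters(beta0, tE0, f, A_th0=3.0 / np.sqrt(5.0), beta_th0=1.0):
    """Lensing parameters (beta, t_E) of the standard curve that best fits the
    mis-normalized amplification curve, following eqs. (2.4)-(2.7)."""
    A_p0 = (beta0**2 + 2.0) / (beta0 * np.sqrt(beta0**2 + 4.0))
    A_p = (A_p0 + f) / (1.0 + f)               # eq. (2.4): mis-normalized peak
    beta = beta_from_A(A_p)
    A_th = A_th0 * (1.0 + f) - f               # eq. (2.6): effective threshold
    beta_th = beta_from_A(A_th)                # eq. (2.7)
    tE = tE0 * np.sqrt((beta_th**2 - beta0**2) / (beta_th0**2 - beta**2))  # eq. (2.5)
    return beta, tE

beta_fit, tE_fit = best_fit_parameters(beta0=0.5, tE0=1.0, f=0.2)
```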
In the upper panels of Figure 1, we present four example mis-normalized amplification curves $`A`$ (solid curves) which are expected when the baseline flux of the source star for a microlensing event with $`\beta _0=0.5`$ is determined with the fractional deviations of $`f=\pm 0.2`$ and $`\pm 0.5`$. By using equations (2.4) – (2.7), we compute the lensing parameters of the standard amplification curves which best fit the individual mis-normalized amplification curves, and the resulting amplification curves $`A_{\mathrm{fit}}`$ are presented by dotted lines. In the lower panels, to better show the difference between each pair of curves $`A`$ and $`A_{\mathrm{fit}}`$, we also present the fractional deviations of the amplification curves $`A`$ from their corresponding best-fit standard amplification curves, i.e. $`\mathrm{\Delta }A/A_{\mathrm{fit}};\mathrm{\Delta }A=A_{\mathrm{fit}}-A`$. From the figure, one finds the following trends. First, for the same amount of $`\left|f\right|=\left|\mathrm{\Delta }F_0\right|/F_0`$, the fractional deviation $`\mathrm{\Delta }A/A_{\mathrm{fit}}`$ is larger when the baseline flux is underestimated (i.e. $`f<0`$) compared to the deviation when the baseline flux is overestimated (i.e. $`f>0`$). Second, although the difference between the two amplification curves $`A`$ and $`A_{\mathrm{fit}}`$ becomes bigger as the deviation $`\mathrm{\Delta }F_0/F_0`$ increases, the mis-normalized amplification curves, in general, are well fit by standard amplification curves with different lensing parameters.
## 3 Baseline Flux Determination for High Amplification Events
In the previous section, we showed that the amplification curve of a general microlensing event obtained with a wrong estimate of the baseline flux is well fit by a standard amplification curve with different lensing parameters, making it difficult to determine $`F_0`$ from the shape of the observed light curve. In this section, however, we show that for high amplification events the deviations of the mis-normalized amplification curves from their best-fit standard curves are considerable even for a small fractional deviation $`\mathrm{\Delta }F_0/F_0`$, and thus one can determine the baseline fluxes with small uncertainties.
To demonstrate this, in the upper panels of Figure 2, we present the mis-normalized amplification curves constructed with the same fractional baseline flux deviations of $`f=\pm 0.2`$ and $`\pm 0.5`$ as the cases in Figure 1, and the corresponding best-fit standard amplification curves for a higher amplification event with an impact parameter of $`\beta _0=0.1`$. In the lower panels, we also present the fractional differences $`\mathrm{\Delta }A/A_{\mathrm{fit}}`$. From the comparison of the fractional differences $`\mathrm{\Delta }A/A_{\mathrm{fit}}`$ in Figures 1 and 2, one finds that the deviations of the mis-normalized amplification curves from their corresponding standard amplification curves are significantly larger for the higher amplification event.
To quantify how much better one can determine the baseline flux with increasing event amplification, we determine the uncertainty ranges of $`F_0`$ for microlensing events with various impact parameters $`\beta _0`$ under realistic observational conditions. To do this, for a given event with $`\beta _0`$ we first produce a series of mis-normalized amplification curves which are constructed with varying values of $`f`$. In the next step, we obtain the best-fit standard amplification curves corresponding to the individual mis-normalized amplification curves by using the relations in equations (2.4) – (2.7). We then statistically compare each pair of the amplification curves $`A`$ and $`A_{\mathrm{fit}}`$ by computing $`\chi ^2`$, which is determined by
$$\chi ^2=\sum _{i=1}^{N_{\mathrm{dat}}}\left[\frac{A(t_i)-A_{\mathrm{fit}}(t_i)}{pA_{\mathrm{fit}}(t_i)}\right]^2.$$
$`(3.1)`$
For the computation of $`\chi ^2`$, we assume that the events are observed $`N_{\mathrm{dat}}=60`$ times during $`-1\,t_\mathrm{E}\le t\le 1\,t_\mathrm{E}`$. The photometric uncertainty $`p`$ depends on the observational strategy, instrument, and source star brightness. Therefore, we determine the photometric uncertainty by computing the signal-to-noise ratio ($`1/p=S/N`$) under the assumption that events are observed with a mean exposure time of $`t_{\mathrm{exp}}=150\mathrm{s}`$ by using a 1-m telescope equipped with a detector that can detect 12 photons/s for an $`I=12`$ star. The signal-to-noise computation is described in detail in § 4. Once the values of $`\chi ^2`$ as a function of $`f`$ are computed, the uncertainty of $`F_0`$ is determined at the $`1\sigma `$ (i.e. $`\chi ^2=1`$) level. We then repeat the same procedures for events with various values of $`\beta _0`$ (and thus the peak amplifications).
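A compact sketch of this procedure, reusing the `amplification`, `misnormalized` and `best_fit_parameters` helpers defined above, is given below; the constant photometric uncertainty $`p=0.05`$ is an illustrative assumption, not the value obtained from the instrument model described in the text.

```python
import numpy as np

def chi2_of_f(beta0, f, p, N_dat=60):
    """Chi^2 of the best-fit standard curve against the mis-normalized curve (eq. 3.1)."""
    t = np.linspace(-1.0, 1.0, N_dat)                       # times in units of t_E,0
    A = misnormalized(amplification(t, 0.0, 1.0, beta0), f)
    beta_fit, tE_fit = best_fit_parameters(beta0, 1.0, f)
    A_fit = amplification(t, 0.0, tE_fit, beta_fit)
    return np.sum(((A - A_fit) / (p * A_fit))**2)

# Scan f = dF0/F0 on both sides of zero and keep the values allowed at 1 sigma.
f_grid = np.concatenate([-np.logspace(-3, -0.1, 200)[::-1], np.logspace(-3, 0.5, 200)])
chi2 = np.array([chi2_of_f(0.1, f, p=0.05) for f in f_grid])
allowed = f_grid[chi2 <= 1.0]     # 1-sigma uncertainty range of the baseline flux
```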
In the upper panel of Figure 3, we present the resulting values of $`\chi ^2`$ as a function of $`\mathrm{log}(1+f)`$ for example events with the source star brightness $`I=18`$ and various impact parameters of $`\beta _0=0.05`$, 0.1, 0.2, and 0.22. In the lower panel, we also present the uncertainty range of $`\mathrm{\Delta }F_0/F_0`$ (shaded region). From the figure, one finds the following trends. First, the uncertainty significantly decreases as the impact parameter decreases, implying that the baseline fluxes for high amplification events can be determined with small uncertainties. Second, if the impact parameter becomes bigger than a certain critical value ($`\beta _{\mathrm{crit}}`$), the value of $`\chi ^2`$ becomes less than 1, implying that $`F_0`$ cannot be determined from the shape of the obtained light curve. For our example events with $`I=18`$, this corresponds to $`\beta _{\mathrm{crit}}=0.22`$. Note that the uncertainty range $`\mathrm{\Delta }F_0/F_0`$ in the lower panel is determined only for impact parameters yielding $`\chi ^2\ge 1`$. Third, the upper limit of the uncertainty range is always bigger than the lower limit.
Knowing that $`F_0`$ can be determined only for high amplification events, we define the critical impact parameter $`\beta _{\mathrm{crit}}`$ as the maximum allowed impact parameter below which the baseline flux of the event can be determined with an uncertainty less than $`50\%`$ (i.e. $`\chi ^2\ge 1`$ and $`\mathrm{\Delta }F_0/F_0\le 0.5`$). Then $`\beta _{\mathrm{crit}}`$($`F_0`$) represents the average probability that the baseline flux of an event with a source star brightness $`F_0`$ can be determined with an uncertainty less than 50%. We compute the critical impact parameters for events expected to be detected toward the Galactic bulge, and they are presented in the upper panel of Figure 4 as a function of the source star brightness in $`I`$ band. From the figure, one finds that as the source star becomes fainter, the value of $`\beta _{\mathrm{crit}}`$ decreases. This is because for a faint source event, the photometric uncertainty $`p`$ is large. Therefore, to be distinguished from standard amplification curves with a statistical confidence level higher than the required level (i.e. $`\chi ^2\ge 1`$), the event should be highly amplified.
## 4 Fraction of High Amplification events
In the previous section, we showed that the baseline fluxes of high amplification events can be determined with precision. In this section, we determine the fraction of high amplification events, for which one can determine $`F_0`$ with small uncertainties, among the total microlensing events detectable by using the DIA method.
Under the assumption that image subtraction is perfectly conducted<sup>3</sup><sup>3</sup>3 A very ingenious image subtraction method developed by Alard & Lupton (1998) demonstrates that it is possible to measure the variable flux to a precision very close to the photon noise., the signal measured from the subtracted image by using the DIA technique is proportional to the variation of the source star flux, i.e. $`S\propto F_0(A_0-1)t_{\mathrm{exp}}`$. On the other hand, the noise of the source star flux measurements comes from both the lensed source star and blended stars, i.e. $`N\propto (F_0A_0+B)^{1/2}`$, where $`B`$ is the background flux from blended stars in the effective seeing disk (i.e. the undistinguishable separation between images) with a size (i.e. diameter) $`\mathrm{\Delta }\theta _{\mathrm{see}}`$. Then the signal-to-noise ratio of an event whose light variation is detected by using the DIA method is given by
$$S/N=F_0(A_0-1)\left(\frac{t_{\mathrm{exp}}}{F_0A_0+B}\right)^{1/2},$$
$`(4.1)`$
where $`B`$ represents the mean background flux. For a high amplification event ($`A_0\gg 1`$) with a bright source star ($`F_0A_0\gg B`$), the signal-to-noise ratio becomes photon limited, i.e. $`S/N\simeq (F_0A_0t_{\mathrm{exp}})^{1/2}`$. By contrast, for a low amplification event with a faint source star ($`F_0A_0\ll B`$), the noise from the background flux becomes important. Let us define $`\beta _{\mathrm{max}}(F_0)`$ as the maximum impact parameter within which a lensing event can be detected by having signal-to-noise ratios higher than a certain threshold value $`(S/N)_{\mathrm{th}}`$ during a range of time longer than a required one $`\mathrm{\Delta }t`$. Then, $`\beta _{\mathrm{max}}(F_0)`$ represents the average detection probability of an event with a source star brightness $`F_0`$ from the microlensing search by using the DIA method, and it is computed by
$$\beta _{\mathrm{max}}=\{\begin{array}{cc}\left[u_{\mathrm{max}}^2-(\mathrm{\Delta }t/t_E)^2\right]^{1/2}\hfill & \text{when }u_{\mathrm{max}}\ge \mathrm{\Delta }t/t_\mathrm{E}\hfill \\ 0\hfill & \text{when }u_{\mathrm{max}}<\mathrm{\Delta }t/t_\mathrm{E}\hfill \end{array}$$
$`(4.2)`$
where $`u_{\mathrm{max}}=[2(1-A_{\mathrm{min}}^{-2})^{-1/2}-2]^{1/2}`$ represents the threshold lens-source separation below which the signal-to-noise ratio of an event becomes greater than $`(S/N)_{th}`$ and $`A_{\mathrm{min}}`$ is the amplification of the event when $`u`$=$`u_{\mathrm{max}}`$. The value of $`A_{\mathrm{min}}`$ is obtained by numerically solving equation (4.1) with respect to the amplification for a given threshold signal-to-noise ratio of $`(S/N)_{\mathrm{th}}`$.
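A minimal numerical sketch of this step is shown below; the source and background fluxes are expressed in detected photons per second, and the particular values used in the example are illustrative assumptions rather than the fluxes of the model luminosity function used in the text.

```python
import numpy as np
from scipy.optimize import brentq

def beta_max(F0, B, t_exp=150.0, sn_th=15.0, dt_over_tE=0.2):
    """Maximum impact parameter for detection, following eqs. (4.1)-(4.2).

    F0 and B are source and background fluxes in photons per second.
    """
    sn = lambda A: F0 * (A - 1.0) * np.sqrt(t_exp / (F0 * A + B)) - sn_th
    if sn(1.0e6) < 0.0:                      # threshold never reached
        return 0.0
    A_min = brentq(sn, 1.0 + 1.0e-9, 1.0e6)  # amplification where S/N = (S/N)_th
    u_max = np.sqrt(2.0 / np.sqrt(1.0 - A_min**-2) - 2.0)
    return np.sqrt(u_max**2 - dt_over_tE**2) if u_max >= dt_over_tE else 0.0

# Example: a faint source against a brighter blended background (assumed values).
print(beta_max(F0=5.0, B=50.0))
```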
In the upper panel of Figure 4, we present the maximum impact parameter $`\beta _{\mathrm{max}}`$ as a function of the source brightness for stars in the Galactic bulge. For the computation of $`\beta _{\mathrm{max}}(F_0)`$, we assume the same observational conditions described in § 3. The adopted threshold signal-to-noise ratio of the MACHO experiment is $`(S/N)_{\mathrm{th}}=10`$ (Alcock 1999a, 1999b). In our computation, however, a higher value of $`(S/N)_{\mathrm{th}}=15`$ is adopted to account for the additional noise from the sky brightness and the residual flux due to imperfect image subtraction. The average background flux is obtained by
$$B=\int _0^{F_{\mathrm{CL}}}F_0\mathrm{\Phi }_0(F_0)𝑑F_0,$$
$`(4.3)`$
where $`F_{\mathrm{CL}}`$ and $`\mathrm{\Phi }_0(F_0)`$ are the crowding limit of the Galactic bulge field and the luminosity function of stars in the field normalized to the area of $`\pi (\mathrm{\Delta }\theta _{\mathrm{see}}/2)^2`$. We adopt the luminosity function of Holtzman et al. (1998) constructed from the observations of bulge stars by using the Hubble Space Telescope, and the adopted crowding limit is $`I=18.2`$ mag. We assume that an event is detectable if signal-to-noise ratios are higher than $`(S/N)_{\mathrm{th}}`$ during $`\mathrm{\Delta }t=0.2t_\mathrm{E}`$ of the source star flux variation measurements. From the figure, one finds that the detection probabilities (i.e. $`\beta _{\mathrm{max}}`$) of events with source stars fainter than the crowding limit, and thus unresolvable, are not negligible up to $`\sim 3`$ mag below $`F_{\mathrm{CL}}`$, implying that a substantial fraction of events detectable by using the DIA method will be faint source star events (Jeong, Park, & Han 1999).
With the determined values of $`\beta _{\mathrm{max}}`$ as a function of source star brightness, we then construct the effective source star luminosity function by
$$\mathrm{\Phi }_{\mathrm{eff}}(F_0)=\beta _{\mathrm{max}}(F_0)\mathrm{\Phi }_0(F_0).$$
$`(4.4)`$
We also construct the luminosity function of source stars for high amplification events with measurable baseline fluxes by
$$\mathrm{\Phi }_{\mathrm{high}}=\eta (F_0)\mathrm{\Phi }_0(F_0);\eta =\{\begin{array}{cc}\beta _{\mathrm{crit}}/\beta _{\mathrm{max}}\hfill & \text{when }\beta _{\mathrm{crit}}\le \beta _{\mathrm{max}}\hfill \\ 1.0\hfill & \text{when }\beta _{\mathrm{crit}}>\beta _{\mathrm{max}}\hfill \end{array}$$
$`(4.5)`$
Once the two luminosity functions $`\mathrm{\Phi }_{\mathrm{eff}}`$ and $`\mathrm{\Phi }_{\mathrm{high}}`$ are constructed, the fraction of high amplification events out of the total number of detactable events is computed by
$$\frac{\mathrm{\Gamma }_{\mathrm{high}}}{\mathrm{\Gamma }_{\mathrm{tot}}}=\frac{\int _0^{\mathrm{\infty }}\mathrm{\Phi }_{\mathrm{high}}(F_0)𝑑F_0}{\int _0^{\mathrm{\infty }}\mathrm{\Phi }_{\mathrm{eff}}(F_0)𝑑F_0}.$$
$`(4.6)`$
We find that $`\sim 33\%`$ of the events detectable from microlensing searches by using the DIA method will be high amplification events for which the baseline fluxes can be determined with uncertainties $`\mathrm{\Delta }F_0/F_0\lesssim 0.5`$.
## 5 Conclusion
We have investigated how the lensing parameters change due to the wrong determination of the baseline flux of a microlensing event. We have also investigated the feasibility of the baseline flux determination from the shape of the observed light curve. The results of these investigations are as follows:
1. The obtained amplification curve of a general microlensing event based on a wrong baseline flux is well fit by a standard amplification curve with different lensing parameters, implying that precise determination of $`F_0`$ from the shape of the observed light curve will be difficult.
2. However, for a high amplification event, the mis-normalized amplification curve deviates from the standard form by a considerable amount even for a small fractional deviation of baseline flux, allowing one to determine $`F_0`$ with a small uncertainty.
3. With a model luminosity function of Galactic bulge stars and under realistic observational conditions of the microlensing searches with the DIA method, we find that a substantial fraction ($`\sim 33\%`$) of microlensing events detectable by using the DIA method will be high amplification events, for which the baseline fluxes of source stars can be determined with uncertainties $`\mathrm{\Delta }F_0/F_0\lesssim 50\%`$.
This work was supported by the grant (1999-2-113-001-5) of the Korea Science & Engineering Foundation.
Figure 1: Upper panels: four example mis-normalized amplification curves (solid curves) which are expected when the baseline flux of the source star for a microlensing event with $`\beta _0=0.5`$ is determined with the fractional deviations of $`f=\pm 0.2`$ and $`\pm 0.5`$. Also presented are the standard amplification curves ($`A_{\mathrm{fit}}`$, dotted curves) which best fit the individual mis-normalized amplification curves. Lower panels: the fractional deviations of the mis-normalized amplification curves from their corresponding best-fit standard amplification curves, i.e. $`\mathrm{\Delta }A/A_{\mathrm{fit}};\mathrm{\Delta }A=A_{\mathrm{fit}}-A`$.
Figure 2: Upper panel: the mis-normalized amplification curves with the same fractional baseline flux deviations of $`f=\pm 0.2`$ and $`\pm 0.5`$ as the cases in Figure 1, and the corresponding best-fit standard amplification curves for a higher amplification event with $`\beta _0=0.1`$. Lower panel: the fractional differences between the amplification curves $`A`$ and $`A_{\mathrm{fit}}`$.
Figure 3: Upper panel: the values of $`\chi ^2`$ as a function of $`\mathrm{log}(1+\mathrm{\Delta }F_0/F_0)`$ for example events with a source star brightness of $`I=18`$ and various impact parameters $`\beta _0`$. The value of $`\chi ^2`$ is computed by comparing the mis-normalized amplification curve with a fractional baseline flux deviation $`f=\mathrm{\Delta }F_0/F_0`$ and the corresponding best-fit standard amplification curve. Lower panel: the uncertainty range of the baseline flux (shaded region) determined from the shape of the light curve of a lensing event detected by using the DIA technique. The uncertainties are determined at $`1\sigma `$ (i.e. $`\chi ^2=1`$) level.
Figure 4: Upper panel: the critical and the maximum impact parameters ($`\beta _{\mathrm{crit}}`$ and $`\beta _{\mathrm{max}}`$) as functions of the source brightness for stars in the Galactic bulge field. The value of $`\beta _{\mathrm{max}}`$ is equivalent to the average detection probability of an event with a source star brightness $`I`$ from the microlensing search by using the DIA method. On the other hand, $`\beta _{\mathrm{crit}}`$ represents the average probability that the baseline flux of an event with a source star brightness $`I`$ can be determined with an uncertainty less than 50%. Lower panel: the effective source star luminosity functions of the total ($`\mathrm{\Phi }_{\mathrm{eff}}`$) and high amplification events ($`\mathrm{\Phi }_{\mathrm{high}}`$) detectable from the microlensing searches by using the DIA technique toward the Galactic bulge field. The shaded region represents the fraction of high amplification events.
# Black hole mergers in the universe
## 1 Introduction
The search for gravitational waves will begin in earnest in January 2002, when LIGO-I (Abramovici et al. 1992) becomes fully operational (K. Thorne, private communication). The appearance of this new and wholly unexplored observational window challenges physicists and astronomers to predict detection rates and source characteristics. Mergers of neutron-star binaries are widely regarded as the most promising sources of gravitational radiation, and estimates of neutron star merger rates (per unit volume) range from $`1.9\times 10^{-7}h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}`$ (Narayan et al. 1991; Phinney 1991; Portegies Zwart & Spreeuw 1996), where $`h=H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, to roughly ten times this value (Tutukov & Yungelson 1993; Lipunov et al., 1997). However, even with the most optimistic assumptions, we can expect a LIGO-I detection rate of only a few neutron star events per millennium.
Inspiral and merger of black-hole binaries are considerably more energetic events than neutron star mergers, due to the higher masses of the black holes (Tutukov & Yungelson 1993; Lipunov et al. 1997). Black-hole binaries can result from the evolution of two stars which are born in a close binary, experience several phases of mass transfer, and subsequently survive two supernovae (Tutukov & Yungelson 1993). Calculations of event rates from such field binaries depend sensitively on many unknown parameters and much poorly understood physics, but the models generally predict a black hole merger rate $`\stackrel{<}{}2\times 10^{-9}h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}`$ (Tutukov & Yungelson 1993; Portegies Zwart & Yungelson 1998; Bethe & Brown 1999), substantially lower than the rate for neutron stars. An alternative possibility, which we explore here, is that black holes become members of close binaries not through internal binary evolution, but rather via dynamical interactions with other stars in a dense stellar system.
## 2 Black hole binaries in star clusters
Black holes are the products of stars with initial masses exceeding $`\sim `$20–25 $`\mathrm{M}_{}`$ (Maeder 1992; Portegies Zwart et al. 1997). A Scalo (1986) mass distribution with a lower mass limit of 0.1 $`\mathrm{M}_{}`$ and an upper limit of 100 $`\mathrm{M}_{}`$ has 0.071% of its stars more massive than 20 $`\mathrm{M}_{}`$, and 0.045% more massive than 25 $`\mathrm{M}_{}`$. A star cluster containing $`N`$ stars thus produces $`6\times 10^{-4}N`$ black holes. Known Galactic black holes have masses $`m_{\mathrm{bh}}`$ between 6 $`\mathrm{M}_{}`$ and 18 $`\mathrm{M}_{}`$ (Cowley 1992). For definiteness, we adopt $`m_{\mathrm{bh}}=10`$ $`\mathrm{M}_{}`$.
### 2.1 Binary formation and dynamical evolution
A black hole is formed in a supernova explosion. If the progenitor is a single star (i.e. not a member of a binary), the black hole experiences little or no recoil and remains a member of the parent cluster (White & van Paradijs 1996). If the progenitor is a member of a binary, mass loss during the supernova may eject the binary from the cluster potential via the Blaauw mechanism (Blaauw 1962), where conservation of momentum causes recoil in a binary which loses mass impulsively from one component. We estimate that no more than $`10`$% of black holes are ejected from the cluster immediately following their formation.
After $`\sim 40`$ Myr the last supernova has occurred, the mean mass of the cluster stars is $`m\simeq 0.56`$ $`\mathrm{M}_{}`$ (Scalo 1986), and black holes are by far the most massive objects in the system. Mass segregation causes the black holes to sink to the cluster core in a fraction $`m/m_{\mathrm{bh}}`$ of the half-mass relaxation time. For a typical globular cluster, the relaxation time is $`\sim 1`$ Gyr; for a young populous cluster, such as R 136 (NGC 2070) in the 30 Doradus region of the Large Magellanic Cloud (Massey & Hunter 1998), it is $`\sim 10`$ Myr.
By the time of the last supernova, stellar mass loss has also significantly diminished and the cluster core starts to contract, enhancing the formation of binaries by three-body interactions. Single black holes form binaries preferentially with other black holes (Kulkarni et al. 1992), while black holes born in binaries with a lower-mass stellar companion rapidly exchange the companion for another black hole. The result in all cases is a growing black-hole binary population in the cluster core. Once formed, the black-hole binaries become more tightly bound through superelastic encounters with other cluster members (Heggie 1975; Kulkarni et al. 1992; Sigurdsson & Hernquist 1993). On average, following each close binary–single black hole encounter, the binding energy of the binary increases by about 20% (Hut et al. 1992); roughly one third of this energy goes into binary recoil, assuming equal mass stars. The minimum binding energy of an escaping black-hole binary may then be estimated as
$$E_{b,\mathrm{min}}\simeq 36W_0\frac{m_{\mathrm{bh}}}{m}kT,$$
(1)
where $`\frac{3}{2}kT`$ is the mean stellar kinetic energy and $`W_0=m|\varphi _0|/kT`$ is the dimensionless central potential of the cluster (King 1966). By the time the black holes are ejected, $`m\simeq 0.4\mathrm{M}_{}`$. Taking $`W_0\simeq 5`$–10 as a representative range, we find $`E_{b,\mathrm{min}}\simeq 5000`$–10000 $`kT`$.
We have tested and refined the above estimates by performing a series of N-body simulations within the “Starlab” software environment (Portegies Zwart et al. 1999: see http://www.sns.ias.edu/starlab), using the special-purpose computer GRAPE-4 to speed up the calculations (Makino et al. 1997). For most (seven) of our calculations we used 2048 equal-mass stars with 1% of them ten times more massive than the average; two calculations were performed with 4096 stars. One of the 4096-particle runs contained 0.5% black holes; the smaller black-hole fraction did not result in significantly different behavior. We also tested alternative initial configurations, starting some models with the black holes in primordial binaries with other black holes, or in primordial binaries with lower-mass stars.
The results of our simulations may be summarized as follows. Of a total of 204 black holes, 62 ($`30\%`$) were ejected from the model clusters in the form of black-hole binaries. A total of 124 ($`61\%`$) black holes were ejected single, and one escaping black hole had a low-mass star as a companion. The remaining 17 ($`8\%`$) black holes were retained by their parent clusters. The binding energies $`E_b`$ of the ejected black-hole binaries ranged from about $`1000kT`$ to $`10000kT`$ in a distribution more or less flat in $`\mathrm{log}E_b`$, consistent with the assumptions made by Hut et al. (1992). The eccentricities $`e`$ followed a roughly thermal distribution \[$`p(e)=2e`$\], with high eccentricities slightly overrepresented. About half of the black holes were ejected while the parent cluster still retained more than 90% ($`\stackrel{<}{}2`$ initial relaxation times) of its birth mass, and $`\stackrel{>}{}90`$% of the black holes were ejected before the cluster had lost 30% (between 4 and 10 relaxation times) of its initial mass. These findings are in good agreement with previous estimates that black-hole binaries are ejected within a few Gyr, well before core collapse occurs (Kulkarni et al. 1993; Sigurdsson & Hernquist 1993).
We performed additional calculations incorporating a realistic (Scalo) mass function, the effects of stellar evolution, and the gravitational influence of the Galaxy. Our model clusters generally dissolved rather quickly (within a few hundred Myr) in the Galactic tidal field. Clusters which dissolved within $`40`$ Myr (before the last supernova) had no time to eject their black holes. However, those that survived beyond this time were generally able to eject at least one close black-hole binary before dissolution.
Based on these considerations, we conservatively estimate the number of ejected black-hole binaries to be about $`10^{-4}N`$ per star cluster, more or less independent of the cluster lifetime.
### 2.2 Characteristics of the binary population
The energy of an ejected binary and its orbital separation are coupled to the dynamical characteristics of the star cluster. For a cluster in virial equilibrium, we have $`kT=2E_{\mathrm{kin}}/3N=-E_{\mathrm{pot}}/3N=GM^2/6Nr_{\mathrm{vir}}`$, where $`M`$ is the total cluster mass and $`r_{\mathrm{vir}}`$ is the virial radius. A black-hole binary with semi-major axis $`a`$ has $`E_b=Gm_{\mathrm{bh}}^2/2a`$, so
$$\frac{E_b}{kT}=3N\left(\frac{m_{\mathrm{bh}}}{M}\right)^2\frac{r_{\mathrm{vir}}}{a},$$
(2)
establishing the connection between $`a`$ and the bulk parameters of the cluster.
In computing the properties of the black-hole binaries resulting from cluster evolution in §3, it is convenient to distinguish three broad categories of dense stellar systems: (1) young populous clusters, (2) globular clusters, and (3) galactic nuclei. Table 1 lists characteristic parameters for each. The masses and virial radii of globular clusters are assumed to be distributed as independent Gaussians with means and dispersions as presented in the table; this assumption is supported by correlation studies (Djorgovski & Meylan 1994). Table 1 also presents estimates of the parameters of globular clusters at birth (bottom row), based on a recent parameter-space survey of cluster initial conditions (Takahashi & Portegies Zwart 2000); globular clusters which have survived for a Hubble time have lost $`\stackrel{>}{}60`$% of their initial mass and have expanded by about a factor of three. We draw no distinction between core-collapsed globular clusters (about 20% of the current population) and non-collapsed globulars—the present dynamical state of a cluster has little bearing on how black-hole binaries were formed and ejected during the first few Gyr of the cluster’s life.
## 3 Production of gravitational radiation
An approximate formula for the merger time of two stars due to the emission of gravitational waves is given by Peters & Mathews (1963):
$$t_{\mathrm{mrg}}\simeq 150\mathrm{Myr}\left(\frac{\mathrm{M}_{}}{m_{\mathrm{bh}}}\right)^3\left(\frac{a}{\mathrm{R}_{}}\right)^4(1-e^2)^{7/2}.$$
(3)
The sixth column of Table 1 lists the fraction of black-hole binaries which merge within a Hubble time due to gravitational radiation, assuming that the binary binding energies are distributed flat in $`\mathrm{log}E_b`$ between $`1000kT`$ and $`10000kT`$, that the eccentricities are thermal, independent of $`E_b`$, and that the universe is 15 Gyr old (Jha et al. 1999). The final column of the table lists the contribution to the total black-hole merger rate from each cluster category.
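The merging fraction quoted in Table 1 follows from combining equations (2) and (3) with the assumed distributions of binding energy and eccentricity. The Monte Carlo sketch below reproduces that estimate for a single set of cluster parameters; the cluster mass, virial radius and particle number used in the example are assumed round numbers, not the entries of Table 1.

```python
import numpy as np

def merging_fraction(M, r_vir, N, m_bh=10.0, t_hubble=1.5e10, n_sample=100_000, seed=1):
    """Fraction of ejected black-hole binaries merging within a Hubble time.

    M is the cluster mass (Msun), r_vir the virial radius (pc), N the number
    of stars.  Binding energies are drawn flat in log between 10^3 and 10^4 kT,
    eccentricities from a thermal distribution p(e) = 2e.
    """
    rng = np.random.default_rng(seed)
    Eb_kT = 10.0 ** rng.uniform(3.0, 4.0, n_sample)       # E_b in units of kT
    e = np.sqrt(rng.uniform(0.0, 1.0, n_sample))          # thermal eccentricities
    a_pc = 3.0 * N * (m_bh / M) ** 2 * r_vir / Eb_kT      # eq. (2) solved for a
    a_rsun = a_pc * 4.4e7                                 # 1 pc ~ 4.4e7 Rsun
    t_mrg = 1.5e8 * (1.0 / m_bh) ** 3 * a_rsun ** 4 * (1.0 - e ** 2) ** 3.5  # eq. (3), yr
    return np.mean(t_mrg < t_hubble)

# Assumed, illustrative globular-cluster parameters.
print(merging_fraction(M=5.0e5, r_vir=3.0, N=1.0e6))
```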
### 3.1 Merger rate in the local universe
Given the black-hole merger rate corresponding to each category of star cluster, we now estimate the total merger rate $`\mathcal{R}`$ per unit volume. Table 2 lists, for various types of galaxies, the space densities and $`S_N`$, the specific number of globular clusters per $`M_v=-15`$ magnitude (van den Bergh 1995):
$$S_N=N_{GC}10^{0.4(M_v+15)}$$
(4)
(where $`N_{GC}`$ is the total number of globular clusters in the galaxy under consideration). The values given for $`S_N`$ in Table 2 are corrected for internal absorption; the absorbed component is estimated from observations in the far infrared. The estimated number density of globular clusters in the universe is
$$\varphi _{GC}=8.4h^3\mathrm{Mpc}^{-3}.$$
(5)
A conservative estimate of the merger rate of black-hole binaries formed in globular clusters is obtained by assuming that globular clusters in other galaxies have characteristics similar to those found in our own. The result is
$$\mathcal{R}_{GC}=5.4\times 10^{-8}h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}.$$
(6)
Irregular galaxies, starburst galaxies, early type spirals and blue elliptical galaxies all contribute to the formation of young populous clusters. In the absence of firm measurements of the numbers of young populous clusters in other galaxies, we simply use the same values of $`S_N`$ as for globular clusters. The space density of such clusters is then $`\varphi _{YPC}=3.5h^3\mathrm{Mpc}^{-3},`$ and the black hole merger rate is
$$\mathcal{R}_{YPC}=2.1\times 10^{-8}h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}.$$
(7)
We find that galactic nuclei contribute negligibly to the total black hole merger rate.
Based on the assumptions outlined above, our estimated total merger rate per unit volume of black-hole binaries is
$$\mathcal{R}=7.5\times 10^{-8}h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}.$$
(8)
However, this may be a considerable underestimate of the true rate. First, as already mentioned, our assumed number ($`10^{-4}N`$) of ejected black-hole binaries is quite conservative. Second, the observed population of globular clusters naturally represents only those clusters that have survived until the present day. The study by Takahashi & Portegies Zwart (2000) indicates that $`\sim `$ 50% of globular clusters dissolve in the tidal field of the parent galaxy within a few billion years of formation. We have therefore underestimated the total number of globular clusters, and hence the black-hole merger rate, by about a factor of two. Third, a very substantial underestimate stems from the assumption that the masses and radii of present-day globular clusters are representative of the initial population. When estimated initial parameters (Table 1, bottom row) are used, the total merger rate increases by a further factor of six. Taking all these effects into account, we obtain a net black-hole merger rate of
$$\mathcal{R}\simeq 3\times 10^{-7}h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}.$$
(9)
We note that this figure is significantly larger than the current best estimates of the neutron-star merger rate.
### 3.2 LIGO observations
The maximum distance within which LIGO-I can detect an inspiral event is estimated to be
$$R_{\mathrm{eff}}=18\mathrm{Mpc}\left(\frac{M_{\mathrm{chirp}}}{\mathrm{M}_{}}\right)^{5/6}$$
(10)
(K. Thorne, private communication). Here, the “chirp” mass for a binary with component masses $`m_1`$ and $`m_2`$ is $`M_{\mathrm{chirp}}=(m_1m_2)^{3/5}/(m_1+m_2)^{1/5}`$. For neutron star inspiral, $`m_1=m_2=1.4\mathrm{M}_{}`$, so $`M_{\mathrm{chirp}}=1.22\mathrm{M}_{}`$, $`R_{\mathrm{eff}}=21`$ Mpc, and we obtain the detection rate mentioned in the introduction. For black-hole binaries with $`m_1=m_2=m_{\mathrm{bh}}=10\mathrm{M}_{}`$, we find $`M_{\mathrm{chirp}}=8.71\mathrm{M}_{}`$, $`R_{\mathrm{eff}}=109`$ Mpc, and a LIGO-I detection rate of about 1.7 $`h^3`$ per year. For $`h\simeq 0.65`$ (Jha 1999), this results in about one detection event every two years. LIGO-II should become operational by 2007, and is expected to have $`R_{\mathrm{eff}}`$ about ten times greater than LIGO-I, resulting in a detection rate 1000 times higher, or about one event per day.
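The detection-rate scaling in this paragraph can be checked with a few lines of arithmetic; the sketch below simply combines the chirp mass, the effective range of equation (10) and a comoving merger rate expressed in units of $`h^3\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}`$.

```python
import math

def chirp_mass(m1, m2):
    """Chirp mass in solar masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def detections_per_year(m_bh, rate_coefficient, h=0.65):
    """Detections per year for an effective range R_eff = 18 Mpc (M_chirp/Msun)^(5/6).

    rate_coefficient is the comoving merger rate in units of h^3 yr^-1 Mpc^-3.
    """
    r_eff = 18.0 * chirp_mass(m_bh, m_bh) ** (5.0 / 6.0)  # Mpc, eq. (10)
    volume = 4.0 / 3.0 * math.pi * r_eff ** 3             # Mpc^3
    return volume * rate_coefficient * h ** 3

# 10 Msun black-hole binaries at the rate of eq. (9): roughly one event every two years.
print(detections_per_year(10.0, 3.0e-7))
```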
## 4 Discussion
Black-hole binaries ejected from galactic nuclei, the most massive globular clusters (masses $`\stackrel{>}{}10^6\mathrm{M}_{\odot }`$), and globular clusters which experience core collapse soon after formation tend to be very tightly bound, have high eccentricities, and merge within a few million years of ejection. These mergers therefore trace the formation of dense stellar systems with a delay of a few Gyr (the typical time required to form and eject binaries), making these systems unlikely candidates for LIGO detections, as the majority merged long ago. This effect may reduce the current merger rate by an order of magnitude, although more sensitive future gravitational-wave detectors may see some of these early-universe events. We estimate that the most massive globular clusters contribute about 90% of the total black-hole merger rate. However, while their black-hole binaries merge promptly upon ejection, the longer relaxation times of these clusters mean that binaries tend to be ejected much later than in lower-mass systems. Consequently, we have retained these binaries in our final merger-rate estimate (Eq. 9), but we note that this represents a significant source of uncertainty.
By the time the black-hole binary is ejected it has experienced $`40`$–50 hard encounters with other black holes, as well as a similar number of encounters with other stars or binaries. During each of these latter encounters, there is a small probability that a low-mass star may collide with one of the black holes. Such collisions tend to soften the black-hole binary somewhat (see Portegies Zwart et al. 1999), but they are unlikely to delay ejection significantly. A collision between a main-sequence star and a black hole may, however, lead to a brief but intense X-ray phase.
Finally, we have assumed that the mass of a stellar black hole is 10 $`\mathrm{M}_{\odot }`$. Increasing this mass to 18 $`\mathrm{M}_{\odot }`$ decreases the expected merger rate by about 50%, since higher-mass black holes tend to have wider orbits. However, the larger chirp mass increases the signal-to-noise ratio: the distance to which such a merger can be observed increases by about 60%, and the overall detection rate on Earth increases by about a factor of three. For 6 $`\mathrm{M}_{\odot }`$ black holes, the detection rate decreases by a similar factor. For black-hole binaries with component masses $`\stackrel{>}{}12`$ $`\mathrm{M}_{\odot }`$, the first generation of detectors will be more sensitive to the merger itself than to the inspiral phase that precedes it (Flanagan & Hughes 1998). Since the strongest signal is expected from black-hole binaries with high-mass components, it is critically important to improve our understanding of the merger waveform. Even for lower-mass black holes (with $`m_{bh}\stackrel{>}{}10\mathrm{M}_{\odot }`$), the inspiral signal comes from an epoch when the holes are so close together that the post-Newtonian expansions used to calculate the wave forms are unreliable (Brady et al. 1998).
Acknowledgments We thank Piet Hut, Jun Makino and Kip Thorne for insightful comments on the manuscript. This work was supported by NASA through Hubble Fellowship grant HF-01112.01-98A awarded (to SPZ) by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555, and by ATP grant NAG5-6964 (to SLWM). SPZ is grateful to Drexel University, Tokyo University and the University of Amsterdam (under Spinoza grant 0-08 to Edward P.J. van den Heuvel) for their hospitality. Calculations are performed on the GRAPE-4 computers at Tokyo University and Drexel University, and on the SGI/Cray Origin2000 supercomputer at Boston University.
# Quantum probability from a geometrical interpretation of a wave function
## 1 Introduction
There has been no theory for the Many-Worlds Interpretation that can provide the probabilistic predictions of quantum theory. To obtain the probabilistic predictions of quantum theory, I make several assumptions and construct a theory for the Many-Worlds Interpretation, which I call Many-Events Theory.
## 2 Assumptions
### 2.1 An assumption of ”no boundary”
There are physical quantities such as position $`x`$, time $`t`$, and phase $`\theta `$. I call these physical quantities "position quantities". I suppose that all position quantities have no boundaries and are compact.
### 2.2 An assumption of ”a minimum unit”
I suppose that all position quantities have a common minimum unit $`u`$.
### 2.3 An assumption of ”a wave space”
I suppose a new position quantity $`w`$ and call the space of this quantity a wave space. This quantity has something to do with a wave function.
### 2.4 An assumption of ”a phase space”
I suppose that the phase space $`\theta `$ has a structure like a Möbius strip. This is analogous to the spin of an electron. Thus I define a wave function
$$\psi =R_wexp(\frac{i}{2}\frac{\theta }{R_\theta }),$$
(1)
where $`R_w`$ is the radius of curvature of the wave space and $`R_\theta `$ is the radius of curvature of the phase space. $`R_w`$, $`R_\theta `$, and $`\theta `$ are functions of $`\theta `$, $`x`$, and $`t`$.
### 2.5 An assumption of ”an elementary state”
I suppose that there is a minimum unit of a state, which I call an elementary state. For example, in quantum theory a single particle’s elementary state is
$$|\psi >=|w>|\theta >|x>|t>.$$
(2)
We can specify an elementary state by a combination of position quantities. I suppose that each elementary state is unique, so we can interpret it as a point in a space. A state corresponds to a set of points.
### 2.6 An assumption of ”an elementary event”
I call the transition from one elementary state to another elementary state an elementary event. I suppose that each elementary event is unique, so we can interpret an elementary event as a line in a space. An event corresponds to a set of lines. All possible elementary events exist. This is analogous to path integrals.
## 3 The construction of a torus
To simplify the problem I consider a single particle and use $`w`$, $`\theta `$, $`x`$, and $`t`$. I construct a torus,
$$F=S_w^1\times _\tau S_\theta ^1\times S_x^1\times S_t^1.$$
(3)
And I define a point $`f`$ on the torus,
$$f(w,\theta ,x,t).$$
(4)
$`R_w`$ is approximately linear, like a wave function. If there is a wave on the torus, the effect of one loop in the phase space and the effect of two loops cancel out, because the phase space has a structure like a Möbius strip. Thus $`R_w`$ has properties like those of a wave function. The shape of the torus is determined by the Hamiltonian $`H`$ of the system. To describe many particles we need a new position quantity; this is analogous to second quantization.
A point on the torus is an elementary state $`f(w,\theta ,x,t)`$. We can count elementary states since a position quantity has a minimum unit. The number of elementary states $`N(\theta ,x,t)`$ is
$$N(\theta ,x,t)=\sum _wf(w,\theta ,x,t)=\frac{2\pi R_w}{u}=\frac{2\pi |\psi |}{u}$$
(5)
where $`u`$ is a minimum unit and $`f(w,\theta ,x,t)=1`$.
## 4 Probability
The probability in quantum theory is the probability of an event. We consider only the number of different elementary events, since we measure only one elementary event out of many. I suppose that $`N(\theta ,x,t)=m`$ and $`N^{\prime }(\theta ^{\prime },x,t+u)=m^{\prime }\simeq m`$. Then, since time has a minimum unit, we can obtain the number of different combinations of elementary states, $`m^2`$. These combinations are elementary events. If there are only $`m`$ different elementary states, there are $`m^2`$ different elementary events.
If there is a state $`|\mathrm{\Psi }>=m|x>+n|x^{\prime }>`$, the probability of finding the particle at the position $`x`$, $`P(x)`$, is
$$P(x)=\frac{m^2}{m^2+n^2}=\frac{|<x|\mathrm{\Psi }>|^2}{<\mathrm{\Psi }|\mathrm{\Psi }>}.$$
(6)
Thus we can obtain the probability from this theory.
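As a toy numerical illustration of this counting argument (ours, not part of the original derivation): if the two branches contain $`m`$ and $`n`$ elementary states, counting the $`m^2`$ and $`n^2`$ elementary events reproduces the Born-rule weight of Eq. (6).

```python
from itertools import product

def born_weight(m, n):
    """Probability of the x-branch obtained by counting elementary events, Eq. (6)."""
    events_x = len(list(product(range(m), range(m))))    # m^2 elementary events
    events_xp = len(list(product(range(n), range(n))))   # n^2 elementary events
    return events_x / (events_x + events_xp)

# Example: |Psi> = 3|x> + 4|x'> gives P(x) = 9/25 = 0.36
print(born_weight(3, 4))
```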
Each elementary event exists. Thus each observer measuring each elementary event exists, too. This is the Many-Worlds Interpretation.
## 5 Conclusion
This theory reproduces the probabilities of quantum theory.
# DEFLAGRATION TO DETONATION
A.M. Khokhlov
Laboratory for Computational Physics and Fluid Dynamics,
Naval Research Laboratory, Washington, DC
Thermonuclear explosions of Type Ia supernovae (SNIa) involve turbulent deflagrations, detonations, and possibly a deflagration-to-detonation transition. A phenomenological delayed detonation model of SNIa successfully explains many observational properties of SNIa, including monochromatic light curves, spectra, and the brightness – decline and color – decline relations. Observed variations among SNIa are explained as a result of varying nickel mass synthesised in an explosion of a Chandrasekhar mass C/O white dwarf. Based on theoretical models of SNIa, the value of the Hubble constant $`H_o\simeq 67`$ km/s/Mpc was determined without the use of secondary distance indicators. The cause for the nickel mass variations in SNIa is still debated. It may be a variation of the initial C/O ratio in a supernova progenitor, rotation, or other effects.
1. Introduction
Type Ia supernovae (SNIa) are important astrophysical objects which are increasingly used as distance indicators in cosmology. SNIa appear to be a rather well-behaved group of objects. There are deviations in maximum brightness of about 2 magnitudes among SNIa, but they correlate with variations in the shape of SNIa light curves: less bright supernovae tend to decline faster. This is often expressed as a correlation between $`m`$ and $`dm_{15}`$, where $`dm_{15}`$ is the decrease in magnitude 15 days since maximum. Another correlation exists between SNIa color at maximum and postmaximum decline, $`(B-V)`$ – $`dm_{15}`$: less bright supernovae tend to be redder. These two correlations can be used to account for variations in brightness of SNIa and for interstellar absorption. Using these relations has led to improved determinations of $`H_o`$ and to new findings concerning $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$.
Are there exact and unique maximum brightness – postmaximum decline and color – postmaximum decline relations among SNIa? Are these relations the same for nearby and cosmological supernovae? Before these questions so important for $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ can be addressed from theoretical grounds, we would like the theory of SNIa to answer more general questions:
(1) Why do SNIa differ from each other?
(2) Why do some of SNIa characteristics correlate?
2. Pre-supernovae
It is believed that SNIa are thermonuclear explosions of carbon-oxygen (CO) white dwarfs (WD). However, evolutionary paths leading to SNIa are still a bit of a mystery . Three major scenarios have been considered based on the evolution of binary stellar systems: (1) a CO-WD accreting mass through Roche-lobe overflow from an evolved companion star . The explosion is triggered by compressional heating near the WD center when the WD approaches the Chandrasekhar mass. (2) Merging of two low-mass WDs caused by the loss of angular momentum due to gravitational radiation . Resulting merged configuration consists of a massive WD component surrounded by the rotationally supported envelope made of less massive, disrupted WD . If ignition takes place at low densities, near the base of the rotating envelope, it will probably lead to slow burning and subsequent core collapse . Otherwise, gradual redistribution of angular momentum may lead to a growth of a massive CO core which then ignites near the center when its mass approaches the Chandrasekhar limit. The exploding configuration will resemble an isolated $`1.4`$M CO-WD, but with rotation and surrounded by an extended CO envelope. (3) a CO-WD accreting mass through Roche-lobe overflow as in (1), but the explosion is triggered by the detonation of an accumulated layer of helium before the total mass of the configuration reaches the Chandrasekhar mass . Only the first two models appear to be viable. The third, the sub-Chandrasekhar WD model, has been ruled out on the basis of predicted light curves and spectra.
3. Phenomenological models of SNIa explosion
Many ingredients of the SNIa explosion physics such as equation of state and nuclear reaction rates are known well. However, flame propagation in a supernova is difficult to model from first principles due to an enormous disparity of spatial and temporal scales involved. One is forced to make assumptions about regimes of burning (detonation or deflagration), and about the speed with which the flame propagates in case of turbulent deflagration. Once the assumptions are made, an outcome can be calculated by solving the equations of fluid dynamics coupled with the (prescribed) nuclear energy release terms, and with terms describing self-gravity of the star.
Three major models of the explosion of a Chandrasekhar mass CO-WD have been considered: (1) detonation model , (2) deflagration model , and (3) delayed-detonation (DD) model \[12-15\]. The most detailed computations of SNIa explosion to date involve a hydrodynamic calculation of the thermonuclear explosion that includes a nucleosynthesis computation, a time-dependent radiation-transport computation that gives the light curve, including mechanisms for $`\gamma `$-ray and positron deposition, the effects of expansion opacity, and scattering, and NLTE spectra computations \[16,17 and references therein\].
It was found that purely detonation models do not fit observations because they do not produce intermediate mass (Si-group) elements which are so prominent in the spectra of SNIa around maximum light. Deflagration models produce intermediate mass elements, but typically in a too narrow velocity range, and also have difficulty explaining the variety of SNIa. Delayed detonation models are successful in reproducing the main features of SNIa, including multi-wavelength light curves, the spectral behavior, and the brightness – decline and color – decline correlations .
Delayed detonation models assume that burning starts as a subsonic deflagration and then turns into a supersonic detonation. The deflagration speed, $`S_{\mathrm{def}}`$, and the moment of deflagration-to-detonation transition (DDT) are free parameters. The moment of DDT is conveniently parametrized by introducing the transition density, $`\rho _{\mathrm{tr}}`$, at which DDT happens. The initial central density and initial composition (C/O ratio) of the exploding WD must also be specified. To reproduce observations, the deflagration speed should be a rather small fraction of the speed of sound $`a_s`$, say, $`S_{\mathrm{def}}<0.1a_s`$. Physical arguments why $`S_{\mathrm{def}}`$ is small are discussed in the next section. The models are very sensitive to variations of $`\rho _{\mathrm{tr}}`$, but depend to a much lesser extent on the exact assumed value of the deflagration speed, the initial central density of the exploding star, and the initial chemical composition.
A delayed detonation explosion is schematically illustrated in Figure 1. Because the speed of deflagration is less than the speed of sound, pressure waves generated by burning propagate ahead of the deflagration front and cause the star to expand. As a result, deflagration propagates through matter whose density continuously decreases with time. After deflagration turns into a detonation, the detonation wave incinerates the rest of the WD left unburned during the deflagration phase. Detonation produces Fe-group elements if it occurs at densities greater than $`\rho \simeq 10^7`$ g/cc. At lower densities it produces intermediate mass elements. At even lower densities around $`10^6`$g/cc only carbon has time to burn. The outermost layers of a supernova will consist of products of explosive carbon burning such as O, Ne, Mg, etc. To reproduce observations, $`\rho _{\mathrm{tr}}`$ must be selected in the range $`\rho _{\mathrm{tr}}\simeq (1-3)\times 10^7`$ g/cc. Virtually no intermediate mass elements will be produced for larger values of $`\rho _{\mathrm{tr}}`$. For lower $`\rho _{\mathrm{tr}}`$, the WD expands so much that a detonation cannot be sustained. With $`\rho _{\mathrm{tr}}`$ in the right range, the inner parts of the exploded star consist of Fe-peak elements and contain radioactive <sup>56</sup>Ni. Outer parts contain intermediate group elements and products of explosive carbon burning (Figure 1).
The amount of <sup>56</sup>Ni produced during the explosion is very sensitive to $`\rho _{\mathrm{tr}}`$. Varying $`\rho _{\mathrm{tr}}`$ in the range $`(1-3)\times 10^7`$ g/cc gives nickel mass in the range $`0.1-0.7M_{\odot }`$, respectively. The reason for such a sensitivity is the combination of an exponential temperature dependence of reaction rates and the dependence of the specific heat of a degenerate matter on density. Small differences in density at which burning takes place translate into small differences in burning temperature. These, however, translate into large differences in reaction rates, and into qualitative differences in the resulting chemical composition. The kinetic energy of the explosion, on the other hand, is very insensitive to $`\rho _{\mathrm{tr}}`$. It depends on the total amount of burned material (Fe-group and Si-group together). This is because the difference in binding energies of Fe-group and Si-group nuclei is relatively small compared to the difference between binding energies of both Fe- and Si-group elements and the initial CO mixture. Thus, the delayed detonation model predicts SNIa with significantly varying nickel mass but with almost the same kinetic energy and expansion velocities.
The above property of the delayed detonation model is the key to the explanation of the brightness – decline and color – decline relations among Type Ia supernovae. All delayed detonation supernovae expand with approximately the same velocity. Explosions with more nickel give rise to brighter supernovae. Also, because more nickel decays, envelopes of these supernovae are heated better and stay hot and opaque. The result is a slow post-maximum decline and a blue color. Explosions with less nickel give rise to dim supernovae. Envelopes of these supernovae are cool and transparent because they contain less nickel. The result is a fast post-maximum decline and a red color.<sup>1</sup> (<sup>1</sup> In deflagration models, the amount of nickel and the kinetic energy of the explosion are tightly related. Supernovae with more nickel expand and cool faster, while supernovae with less nickel expand slowly. Deflagration models therefore predict that light curves of brighter supernovae should decline faster, which is contrary to observations.)
As a representative example, Figure 2 shows results of numerical modeling of the bright SNIa 1994D. The light curves of SN1994D are fit with the light curves of the best-fit delayed detonation model M36, one of a series of models with the initial central density $`\rho _c=2.7\times 10^9`$ g/cc and $`S_{\mathrm{def}}=0.04a_s`$. For the M36 model, the transition density was $`\rho _{\mathrm{tr}}=2.4\times 10^7`$ g/cc. As can be seen, both optical and IR light curves are fit by M36 rather well, including the secondary maximum in R and I typical of normally bright SNIa. Models with other values of $`\rho _{\mathrm{tr}}`$ led to much worse fits to observations.
Delayed detonation models have been used to predict a purely theoretical (without using Cepheid distances) value of the Hubble constant. The idea is to fit a supernova with the model which best reproduces its light curves and spectra. The model then gives the absolute brightness of a supernova. The method takes into account both brightness variations among SNIa and possible interstellar absorption. The result is shown in Figure 3. Values of $`H_o`$ determined from individual supernovae show a large spread for close SNIa but converge to $`H_o\simeq 67\pm 9`$ km/s/Mpc with increasing $`z`$. This value is in agreement with $`H_o`$ found using Cepheid variables.
4. Three-dimensional SNIa
Three-dimensional effects in the propagation of turbulent flames and deflagration-to-detonation transition (DDT) must play a key role in SNIa in determining the actual speed of the flame propagation, energetics, and nucleosynthesis, and also are likely to translate initial differences in presupernova structure into the observed differences among SNIa.
Deflagration. – Laminar flame in a WD is driven by heat conduction due to degenerate electrons and propagates very subsonically with the speed $`S_{\mathrm{lam}}<0.01a_s`$ . Such a slow flame cannot account for the explosion properties of SNIa. However, in the presence of gravity, the flame speed will be enhanced by the Rayleigh-Taylor instability . Whether the Rayleigh-Taylor instability can itself sufficiently increase the flame speed to cause the explosion or whether deflagration just serves to pre-expand the star which is incinerated later by a supersonic detonation, has been a subject of numerous studies and 3D simulations\[24,25,27,32 and references therein\].
Simple scaling arguments show that turbulent flame subjected to a uniform gravity acceleration in a vertical column must propagate with a speed
$$S_{\mathrm{def}}\simeq \alpha \left(gL\frac{\rho _0-\rho _1}{\rho _0+\rho _1}\right)^{1/2},$$
$`(1)`$
where $`g`$ is the local acceleration, $`L`$ is the width of the column, $`\rho _0`$ and $`\rho _1`$ are the densities ahead of and behind the flame, respectively, and $`\alpha <1`$ is a constant which depends on the column’s geometry and boundary conditions. Formula (1) is valid, of course, only when $`S_{\mathrm{def}}`$ is much larger than $`S_{\mathrm{lam}}`$. It tells us that when the characteristic RT speed $`(gL)^{1/2}`$ is greater than $`S_{\mathrm{lam}}`$, the flame speed is determined by the turbulence on the largest scale, $`L`$, independent of details of flame propagation on smaller scales. The reason for this behavior is self-similarity of the flame. The turbulent flame speed is the product of the area $`A`$ of the flame surface and the laminar flame speed, $`S_{\mathrm{def}}=AS_{\mathrm{lam}}`$. Turbulence tends to increase $`A`$ whereas intersections of different portions of the flame front tend to decrease $`A`$. The latter effect is proportional to $`S_{\mathrm{lam}}`$. In equilibrium, the two effects balance each other, and $`A\propto 1/S_{\mathrm{lam}}`$. The product of $`A`$ and $`S_{\mathrm{lam}}`$ remains constant.<sup>2</sup> (<sup>2</sup> There is a close analogy here with the self-similarity of an ordinary Kolmogorov cascade. In the Kolmogorov cascade, changes in the fluid viscosity lead to changes in the viscous microscale, but do not influence the amount of energy dissipated into heat. The rate of dissipation depends only on the intensity of turbulent motions on the largest scale. $`S_{\mathrm{lam}}`$ plays the role of viscosity in a turbulent flame.) Three-dimensional numerical simulations with varying $`g`$, $`L`$ and varying laminar flame speed confirm equation (1), including its independence of $`S_{\mathrm{lam}}`$, and indicate $`\alpha \simeq 0.5`$. The results are consistent with high-gravity combustion experiments that used a centrifuge to study premixed turbulent flames at various $`g`$.
Equation (1) tells us several important things. Near the WD center $`g\simeq 0`$. Thus, in the beginning the deflagration speed $`S_{\mathrm{def}}\simeq S_{\mathrm{lam}}`$ should be small. The speed will then tend to increase as the flame moves away from the center and the gravity increases. When the intensity of turbulence increases, the flame speed will become independent of the physics of burning on small scales. The latter conclusion is very important since it gives us hope that SNIa explosions can be modeled in three dimensions without resolving all small spatial and temporal scales. However, equation (1) is missing an important piece of physics. It is valid only in a uniform gravitational field and only when there is no global expansion of matter.
In a supernova explosion, burning causes a global expansion of the star. Equation (1) may be valid only on scales where the expansion velocity is less than the characteristic RT speed. On larger scales, expansion will tend to freeze the turbulence out. The net result will be a substantial decrease of the turbulent flame speed. A crude estimate of the scale $`L_f`$ at which the turbulence becomes frozen and of the effective deflagration speed limited by expansion can be obtained as follows. First, carry out a one-dimensional simulation assuming no turbulence freeze-out, that is, with the flame speed given by equation (1) with $`L`$ equal to the flame radius $`R_f`$. This gives the expansion rate. Then estimate $`L_f`$ as the scale at which the expansion velocity becomes comparable with the characteristic RT velocity. Finally, estimate the effective deflagration speed from equation (1) using $`L=L_f`$. The estimates are $`L_f\simeq (\mathrm{a}\mathrm{few})\times 10^7`$ cm, and
$$S_{\mathrm{def}}\simeq 1.5\times 10^7\mathrm{cm}/\mathrm{s}\left(\frac{g}{10^9\mathrm{cm}/\mathrm{s}^2}\right)^{1/2}\left(\frac{L_f}{10^8\mathrm{cm}}\right)^{1/2}.$$
$`(2)`$
Equation (2) shows that in conditions typical of the exploding white dwarf, the turbulent burning speed is a few per cent of the sound speed $`a_s\simeq 5\times 10^8`$ cm/s. This is not enough to cause a powerful explosion. An additional effect that further limits the rate of deflagration is a deviation from the steady-state turbulent burning regime. A certain time is required for a turbulent flame to reach a steady state. This time is larger for larger scales. Scales of the order of $`R_f`$ might never reach a steady state during the explosion.
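The scalings in Eqs. (1) and (2) are simple enough to evaluate directly. The sketch below is our own illustration (the parameter values are representative assumptions, not results from the cited simulations); it reproduces the estimate that the expansion-limited deflagration speed is only a few per cent of $`a_s\simeq 5\times 10^8`$ cm/s.

```python
import math

def s_def_rt(g, L, rho0, rho1, alpha=0.5):
    """Steady-state turbulent flame speed of Eq. (1), cgs units."""
    atwood = (rho0 - rho1) / (rho0 + rho1)
    return alpha * math.sqrt(g * L * atwood)

def s_def_frozen(g, L_f):
    """Expansion-limited deflagration speed of Eq. (2), cm/s."""
    return 1.5e7 * math.sqrt(g / 1e9) * math.sqrt(L_f / 1e8)

a_s = 5e8                            # sound speed, cm/s
v = s_def_frozen(g=1e9, L_f=1e8)     # representative values quoted in the text
print(v, v / a_s)                    # ~1.5e7 cm/s, i.e. ~3% of the sound speed
# s_def_rt(g, L, rho0, rho1) can be evaluated the same way once the
# pre- and post-flame densities are specified.
```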
Figure 4 shows some results of a three-dimensional numerical simulation of the entire Chandrasekhar mass CO-WD exploding as a supernova. In this simulation, equation (1) has been used for the turbulent flame velocity on scales not resolved numerically. For regions of the “average” flame front not oriented “upwards” against gravity this formula most probably overestimates the local turbulent flame speed. Despite this, only about 5% of the mass has been burned by the time the star has expanded and quenched the flame, and the white dwarf has not even become unbound. These results show that spherical expansion is indeed important and that burning on large scales does not reach a steady state. Big blobs of burned gas rise and penetrate low density outer layers, whereas unburned matter flows down and reaches the stellar center. The model experienced an almost complete overturn. This obviously has important implications for nucleosynthesis and may cause an element stratification incompatible with observations if composition inhomogeneities are not smeared out during the subsequent detonation stage of burning. The results of 3D modeling indicate that the deflagration alone is not sufficient to cause an explosion. To make a powerful explosion, the deflagration must somehow make a transition to a detonation (delayed detonation model).
Deflagration-to-Detonation Transition. – In terrestrial conditions detonation may arise from a non-uniform explosion of a region of fuel with a gradient of reaction (induction) time via the Zeldovich gradient mechanism. The region may be created by mixing of fresh fuel and hot products of burning, as in jet initiation, or it may be created by multiple shocks, etc. The same gradient mechanism can operate in supernovae \[29-31\]. There exists a minimum, critical size of the region capable of generating a detonation, $`L_i`$. This parameter is determined by the equation of state and nuclear reaction rates and is mainly a function of the density of the material. $`L_i`$ is much, much less than the size of a WD for all but very low densities $`\rho <10^7`$ g/cc.
Why, then, does DDT not happen in supernovae at high densities? Why does it have to wait until the WD expands significantly? The explanation may be this. The critical size $`L_i`$, however small, is still several orders of magnitude larger than the thickness of a laminar flame. To mix fresh fuel with products of burning, the surface of the flame must be disrupted. But this is difficult to achieve unless the turbulent velocity on the scale of the flame front is larger than the laminar flame speed. Only at very low densities, where reactions slow down, the laminar flame becomes very wide, and its speed becomes very small, does the turbulence have a chance to create the right conditions for DDT.
In addition to mixing fuel and products inside an active deflagration front, another mechanism for creating the right conditions for DDT may be as follows. As mentioned above, turbulence in an SNIa will be limited by the expansion. The conditions for DDT during the expansion of the star may not be fulfilled at all. But when the deflagration speed is small, deflagration quenches due to expansion before the WD becomes unbound. This happens, in particular, in the simulation shown in Figure 4. The star will then experience a pulsation and collapse back. During the expansion and contraction phases of the pulsation, the high-entropy ashes of the dead deflagration front will mix with the fresh low-entropy fuel again to form a mixture with reaction-time gradients. Mixing will be facilitated during the contraction phase by the increase of turbulent motions due to the conservation of angular momentum (like a figure skater increases his rotation by pulling in his arms). The estimated size of the mixing region formed during the pulsation is $`10^6-10^7`$ cm, much larger than $`L_i`$. It was also shown that as soon as only a few per cent of hot ashes are mixed with the cold fuel, the mixture cannot be compressed to densities higher than $`(\mathrm{a}\mathrm{few})\times 10^7`$ g/cc. Further compression will lead to a burnout on time scales much shorter than the pulsation time scale. As soon as this mixture returns to high enough densities $`\simeq 10^7`$ g/cc and re-ignites, the detonation will be triggered.
It should be noted at this point that the three-dimensional theory of flame propagation and DDT in supernovae is far from finished, and remains a subject of active research. In particular, it was speculated recently that DDT may be caused by a sudden acceleration of a quasi-spherical deflagration front, due to the Landau-Darrieus or some other yet unknown internal instabilities of the flame; that a suddenly accelerated deflagration might keep propagating with the speed of sound without turning into a detonation, etc. Whether any of these can actually happen should be either tested in appropriately scaled terrestrial experiments or demonstrated in three-dimensional simulations. Further work is required, and it will undoubtedly improve our understanding of SNIa explosions.
It may also be possible to distinguish between different multi-dimensional explosion mechanisms on the basis of observations. One of the amazing properties of SNIa is their apparently small deviation from spherical symmetry. We do not expect all three-dimensional models to have this property. For example, pure deflagration models are expected to be clumpy (Figure 4) and asymmetric, with large blobs of Si- and Fe-group elements embedded in the unburned CO envelope. Delayed detonation models, on the other hand, should be more symmetric. A supersonic detonation mode of burning that follows deflagration will tend to homogenize the ejecta. Rotation of the progenitor may impose a global, low-order asymmetry on the ejecta. Viable models can be constrained by computing the polarization of the emerging radiation and comparing the predictions with the existing and planned observations.
5. Discussion
We described a phenomenological delayed detonation model of SNIa based on the explosion of a Chandrasekhar mass carbon-oxygen white dwarf. The model assumes that the explosion starts as a subsonic deflagration and then turns into a supersonic detonation mode of burning. The model is successful in reproducing the main features of SNIa, including multi-wavelength light curves, the spectral behavior, and the brightness – decline and color – decline correlations. It was argued that the apparently low deviations of SNIa from spherical symmetry (low polarization of SNIa) may be attributed, in delayed detonation models, to the homogenizing effect of the detonation phase of an explosion.
The model interprets existing brightness – decline and color – decline relations among SNIa as a result of varying nickel mass synthesised during the explosion. Major free parameters of the model are the deflagration speed $`S_{\mathrm{def}}`$, the transition density $`\rho _{\mathrm{tr}}`$ at which deflagration turns into a detonation, and also initial density and composition (C/O ratio) of the exploding WD. The variation of nickel mass in the model is caused by the variation of $`\rho _{\mathrm{tr}}`$. Strong sensitivity of the nickel mass to $`\rho _{\mathrm{tr}}`$ is probably the basis of why, to first approximation, SNIa appear to be a one-parameter family. Nonetheless, variations of the other parameters also lead to some relatively small variations of the predicted properties of SNIa, which indicate that the assumption of a one-parameter family may not be strictly valid.
To fit observations, the delayed detonation model requires low values of $`S_{\mathrm{def}}<0.1a_s`$ and low values of $`\rho _{\mathrm{tr}}\simeq (1-3)\times 10^7`$ g/cc. In Section 4 it was argued that slow deflagration is the result of an expansion of a star caused by the deflagration itself. The expansion tends to freeze the turbulence and, thus, limits the deflagration speed. The actual rate of deflagration in a supernova is determined by the competition of the Rayleigh-Taylor instability which is the turbulence driving force, the turbulent cascade from large to small scales, and the turbulence freeze-out. Two possible mechanisms that lead to a low $`\rho _{\mathrm{tr}}`$ were discussed – one related to the disruption of an active deflagration front by the existing turbulence, and the other related to quenching of deflagration, mixing of the low-entropy fuel with high-entropy burning products, and its subsequent compression.
It may seem unusual that the two apparently different mechanisms predict almost the same low values for $`\rho _{\mathrm{tr}}`$, the same low values that are required to fit observations in phenomenological delayed detonation models. Note, however, that the predictions of low transition density by both DDT mechanisms and the very reason why low $`\rho _{\mathrm{tr}}`$ is needed to fit SNIa observations stem from the same two fundamental facts: (1) the specific heat of matter in supernovae depends on density; (2) nuclear reactions depend on temperature exponentially. The resulting dependence of burning timescales on density is very steep. Numbers are such that at densities above $`10^7`$g/cc nuclear burning timescales are much shorter than the sound crossing time ($`\simeq `$ explosion timescale) of a WD. At densities below $`10^7`$g/cc the timescales become much longer than the sound crossing time. That is why a laminar flame front can be disrupted by turbulence only below approximately $`10^7`$g/cc – the flame width is proportional to a burning timescale and at higher densities it is much, much shorter than any other relevant spatial scale of a WD. That is why a mixture of cold fuel and hot products cannot be compressed to densities much higher than $`10^7`$g/cc – at higher densities it will react faster than it is being compressed. And that is also why intermediate mass elements can be synthesised in an SNIa only at densities around $`10^7`$g/cc – at higher densities reactions will have enough time to reach nuclear statistical equilibrium and, thus, to burn CO into Ni-group elements.
What may cause the variations of $`\rho _{\mathrm{tr}}`$ among SNIa? There are several possibilities. One is differences in the initial C/O ratio. If less carbon is present near the WD center, less energy will be released by burning, and this will affect both the buoyancy of burning products and the rate of expansion of the WD. This, in turn, will affect the speed of deflagration, and will lead to different conditions for DDT. Variations of the initial C/O ratio among SNIa have recently been studied in the framework of one-dimensional phenomenological delayed detonation models in \[35-37\]. The effect of varying the C/O ratio may result in small but noticeable variations in the rise time to maximum and in some other aspects of the light curve behavior. This is a potential source of systematic evolutionary effects, and has obvious implications for using SNIa in cosmology. However, in one-dimensional models one has to assume how changes in C/O influence the deflagration speed and $`\rho _{\mathrm{tr}}`$, and the predictions then depend on these assumptions. Three-dimensional modeling is required in order to predict the actual influence of the C/O ratio on the outcome of the explosion.
Another possibility is the influence of rotation if SNIa are the result of a merger of low-mass CO-WD. Rotation will undoubtedly influence the turbulent deflagration phase which, in turn, will affect DDT. Merger configurations may also differ in their mass, so that slightly super-Chandrasekhar mass WD explosions are probable. Could they be responsible for unusually bright SN1991T-like events? An extended CO envelope around a merger WD may manifest itself in SNIa light curves and spectra. Further work is needed to answer these questions.
This contribution is based in part on work done in collaboration with Peter Höflich, Ewald Müller, Elaine Oran, Craig Wheeler, and others. I thank them and also David Arnett, David Branch, Caren Brown, Robert Harkness, Eli Livne, Ken Nomoto, Geraint Thomas, and Lifan Wang for many discussions. This research was supported in part by the NASA Grant NAG52888 and by the Office of Naval Research.
References
1. Phillips, M. M. 1993, ApJ, 413, L105
2. Riess A.G., Press W.H., Kirshner R.P. 1996, ApJ, 473, 588
3. Schmidt, B. et al., 1998, ApJ, 507, 46; Perlmutter, S. et al. 1999, ApJ, 517, 565.
4. Livio, M. 2000, in The Largest Explosions Since the Big Bang: Supernovae and Gamma-Ray Bursts, eds. M. Livio, K. Sahu, & N. Panagia, in press.
5. Nomoto, K. & Sugimoto, D., 1977, PASJ, 29, 765; Nomoto, K., 1982, ApJ 253, 798.
6. Webbink R.F. 1984, ApJ, 277, 355; Iben I.Jr., Tutukov A.V. 1984, ApJS, 54, 335.
7. Benz, W., Bowers, R.L., Cameron, A.G.W, Press, W.H., 1990, ApJ, 348, 647.
8. Mochkovich R., Livio, M. 1990, A&A, 236,378.
9. Nomoto K. 1980, ApJ ,248, 798; Woosley S.E., Weaver T.A., Taam R.E. 1980, in: Type I Supernovae, eds. C.Wheeler, Austin, U.Texas, p. 96.
10. Arnett W.D. 1969, Ap. Space Sci., 5, 280; Hansen C.J., Wheeler J.C. 1969, Ap. Space Sci., 3, 464.
11. Nomoto K., Sugimoto S., & Neo S. 1976, ApSS, 39, L37.
12. Khokhlov A.M. 1991, AA, 245, 114.
13. Khokhlov A.M. 1991, AA, 245, L25.
14. Yamaoka H., Nomoto K., Shigeyama T., Thielemann F.-K. 1992, ApJ, 393, 55.
15. Woosley, S. E., Weaver, T. A. 1994, in Les Houches, Session LIV, Supernovae, ed. S. A. Bludman, R. Mochkovich, & . Zinn-Justin (Amsterdam: North-Holland), 63
16. Höflich, P., Khokhlov, A.M. 1996, ApJ, 457, 500
17. Nugent, P. et al. 1997, ApJ, 485, 812
18. Höflich, P.; Khokhlov, A.; Wheeler, J. C. 1995, ApJ, 444, 831; Höflich, P.; Khokhlov, A., Wheeler, C.J., Phillips, M.M., Sunzeff, N.B., Hamuy, M. 1996, ApJ, 472, L81; Wheeler et al. 1998, ApJ, 496, 908, and references therein.
19. Höflich, P. 1995, ApJ, 459, 307.
20. Müller, E., Höflich, P.A. 1994, A&A, 281, 51.
21. Sandage, A. 2000, in The Largest Explosions Since the Big Bang: Supernovae and Gamma-Ray Bursts, eds. M. Livio, K. Sahu, & N. Panagia, in press.; Riess, A., Nugent, P., Filippenko, A.V., Kirshner, R., Perlmutter, S., 1998, ApJ, 504, 935; Ferrarese, L. et al. 1999, ApJ, in press.
22. Timmes F.X., Woosley S.E. 1992, ApJ 396, 649.
23. Nomoto K., Sugimoto S., & Neo S. 1976, ApSS, 39, L37
24. Khokhlov, A.M., 1995, ApJ, 449, 695.
25. Livne, E., Arnett, W.D. 1993, ApJ 415, L107; Arnett, W.D., Livne, E. 1994, 427, 314.;
26. Khokhlov, A.M. Oran, E.S., Wheeler, J.C. 1996, Combustion & Flame, 105, 28
27. Khokhlov, A.M., Oran, E.S., Wheeler, J.C., 1995, in Type Ia Supernovae, proc. of the NATO conference, Barcelona, Spain, 1995.
28. Zeldovich, Ya. B., Librovich, V. B., Makhviladze, G. M., & Sivashinsky, G. L. 1970, Acta Astron., 15, 313; Lee, J. H. S., Knystautas, R., & Yoshikawa, N. 1978, Acta Astron., 5, 971.
29. Blinnikov, S.I., Khokhlov, A.M. 1986, Soviet Astron. Lett., 12, 131.
30. Khokhlov, A.M., Oran, E.S., Wheeler, J.C. 1997, ApJ, 478, 678.
31. Niemeyer, J.C., Woosley, S.E. 1997, ApJ, 475, 740.
32. Niemeyer, J.C., 1999, ApJ, 523, L57.
33. Arnett, D., & Livne, E. 1994, ApJ, 427, 330.
34. Wang, L., Wheeler, J.C., Höflich, P. 1997, ApJ, 476, 27.
35. Dominguez, I., Höflich, P.A., 1999, ApJ, in press (astro-ph/9908204); Höflich, P.A., Dominguez, I., 2000, in The Largest Explosions Since the Big Bang: Supernovae and Gamma-Ray Bursts, eds. M. Livio, K. Sahu, & N. Panagia, in press.
36. Umeda, H., Nomoto, K., Kobayashi, C., Hachisu, I., Kato, M., 1999, ApJL, in press
37. Höflich, P.A., Nomoto, K., Umeda, H., Wheeler, J.C., 1999, ApJ, in press.
Figure captions
1. Schematics of the delayed detonation explosion. Ignition takes place in a dense, Chandrasekhar mass carbon-oxygen white dwarf. Flame propagates from the center as a subsonic turbulent deflagration. Deflagration-to-detonation transition (DDT) takes place in a significantly expanded star after only a small fraction of mass has been burned. Detonation incinerates the rest of the white dwarf. The resulting configuration consists of the inner core of Fe-group elements including <sup>56</sup>Ni surrounded by a massive envelope of Si-group elements.
2. Comparison of observed (SN1994D) and theoretical (M36) B, V, R, I light curves .
3. Direct determination of the Hubble constant ($`H_o=67\pm 9`$km/s/Mpc) using delayed detonation models . Values of $`H_o`$ are plotted for individual SNIa based on distances determined by fitting their light curves and spectra with theoretical models.
4. Three-dimensional simulation of an explosion of a Chandrasekhar mass CO-WD . The figure shows density distribution during the deflagration phase of the explosion.
# Buckling instability in type-II superconductors with strong pinning
## Abstract
We predict a novel buckling instability in the critical state of thin type-II superconductors with strong pinning. This elastic instability appears in high perpendicular magnetic fields and may cause an almost periodic series of flux jumps visible in the magnetization curve. As an illustration we apply the obtained criteria to a long rectangular strip.
In high magnetic fields a noticeable deformation of superconductors occurs in the critical state because of the magnetic force density $`𝐟=𝐣\times 𝐁`$, where $`𝐣`$ is the current density and $`𝐁`$ is the magnetic field. This results in an anomalous irreversible magnetostriction (“suprastriction”) and shape distortion of type-II superconductors with strong pinning. As in magnetic fluid dynamics, the stress tensor of a superconductor in a magnetic field includes an additional term, the Maxwell stress tensor of the magnetic field with components of order $`B^2/\mu _0`$. Since this is quadratic in $`B`$, the Maxwell stress tensor in the critical state can be important for elasticity in strong magnetic fields. However, even in a field of $`10\mathrm{T}`$ the value of $`B^2/\mu _0`$ is small compared to the Young modulus $`E`$ of the material. We estimate the ratio $`B^2/\mu _0E\simeq 10^{-3}`$ for $`B\simeq 10`$ T and $`E\simeq 100`$ GPa, which is a typical value for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> high-temperature superconductors.
The effect of the magnetic field on the elastic behavior may be much stronger if one considers bending of thin samples, since the effective elastic modulus for bending $`\stackrel{~}{E}`$ is much less than the Young modulus. In particular, for a long rectangular strip of extension $`l\times w\times d`$ ($`l\gg w\gg d`$) one has $`\stackrel{~}{E}\simeq E(d/l)^2\ll E`$. If for instance $`d/l\simeq 10^{-2}`$ and $`E\simeq 100`$ GPa, then $`B^2/\mu _0`$ is of the order of the effective bending modulus $`\stackrel{~}{E}`$ at $`B\simeq 3.5\mathrm{T}`$.
An important consequence of a small value of the effective elastic modulus for bending $`\stackrel{~}{E}`$ is the classical Euler buckling instability. This elastic instability occurs for rods and thin strips when the longitudinal compression force $`F`$ at the edges of the sample exceeds a critical value $`F_b\propto \stackrel{~}{E}`$. In particular, one has $`F_b=\pi ^2Ewd^3/48l^2`$ for a long rectangular strip with one edge clamped and the other edge free as shown in Fig. 1. The buckling instability manifests itself at $`F\geq F_b`$ by a sudden bending with amplitude $`s\propto \sqrt{F-F_b}`$.
The magnetization of type-II superconductors with strong pinning and the associated magnetic forces are successfully described by the Bean critical state model using a critical current density $`j_c`$ which decreases with increasing temperature and magnetic field. In the transverse geometry of a thin strip in a perpendicular field one has $`j=j_c`$ in the region where the magnetic flux has penetrated, and screening sheet currents $`J`$ with $`0<J<j_cd`$ in the flux-free region. This nonuniform flux distribution is not in equilibrium, and under certain conditions a thermomagnetic flux-jump instability may occur, producing a sudden intense heat release. This heat pulse decreases the critical current density and drives the system towards the equilibrium state with a uniform flux. A sudden buckling of a superconductor in the critical state may also lead to a heat pulse and thus to a sudden flux penetration into the sample, which shows up as a flux jump instability in the magnetization curve.
In this letter we predict a novel Euler buckling instability caused by the longitudinal magnetic compression force acting in the critical state of a thin superconducting strip in a strong transverse magnetic field. We discuss several scenarios how the buckling instability develops, including the cases when a sudden buckling shows as a flux jump instability in the magnetization curve. A series of buckling induced flux jumps almost periodic in an increasing applied magnetic field is predicted.
We consider first the elastic stability of a long rectangular strip $`l\times w\times d`$ ($`l\gg w\gg d`$) in an increasing transverse magnetic field $`𝐁_a\parallel \widehat{𝐳}`$, assuming that the strip is glued to the substrate at the left edge ($`y=0`$) as shown in Fig. 1. A longitudinal compression force $`F`$ acts near the right edge of the strip ($`y=l`$) in the area where the electromagnetic force density $`𝐟=𝐣\times 𝐁`$ has a $`y`$-component due to the U-turning current, thus
$$F=F_y=B_ad\int j_x𝑑x𝑑y,$$
(1)
where the integral is over the U-turn area. As shown in Fig. 2, in the fully penetrated critical state this area is a triangle where $`j_y=j_c`$, but in general the integral is over the right half of the strip ($`y>l/2`$). If $`w\ll l`$ the deformation of the strip can be obtained assuming that $`F`$ is applied to the very end of the strip at $`y=l`$. For such narrow strips one can show that exactly $`F=B_aM`$, where $`M`$ is the total magnetic moment of the strip divided by its length $`l`$.
Depending on the magnetic prehistory of the sample the dependence of the longitudinal compression force $`F`$ on $`B_a`$ is described by the following three formulas.
(a) For a zero-field cooled straight strip (Fig. 1a) with $`B_a`$ increasing from zero one has
$$F=j_cB_ada^2\mathrm{tanh}\frac{B_a}{B_c},$$
(2)
where we introduce $`a=w/2`$ and $`B_c=\mu _0j_cd/\pi `$. The longitudinal force $`F(B_a)`$, Eq. (2), has the limits $`F\simeq \pi B_a^2a^2/\mu _0`$ ($`B_a\ll B_c`$, Meissner state) and $`F\simeq j_cB_ada^2`$ ($`B_a\gg B_c`$, fully penetrated critical state).
(b) For $`B_a`$ increasing from a field-cooled value $`B_0`$, one has the force
$$F=j_cB_ada^2\mathrm{tanh}\frac{B_a-B_0}{B_c}.$$
(3)
(c) For $`B_a`$ decreasing from a fully penetrated critical state with $`B_a=B_0`$, the force $`F=B_aM`$ decreases as
$$F=j_cB_ada^2\left[\,1-2\mathrm{tanh}\frac{B_0-B_a}{2B_c}\right],$$
(4)
going through $`F=0`$ at $`B_a\simeq B_0-1.1B_c`$. For a narrow strip the field of full penetration is
$$B_p=B_c\left(1+\mathrm{ln}\frac{w}{d}\right).$$
(5)
In the case of a curved strip (Fig. 1b), in the formulae (2-4) for $`F`$ the factor $`B_a=F/M`$ means the $`z`$ component of $`B_a`$, while in the argument of the $`\mathrm{tanh}`$ the $`B_a`$ should be replaced by the component $`B_{\perp }`$ of $`B_a`$ perpendicular to the strip near its right end (where the U-turning currents flow). In general, the magnetic moment $`M`$ and the force $`F=B_aM`$ depend on the prehistory of $`B_{\perp }(t)`$ and may relax with time $`t`$.
If the buckling instability for a zero-field cooled strip occurs at a field $`B_b>B_p`$, the force is
$$F=j_cda^2B_b.$$
(6)
The critical force $`F_b`$ for the buckling instability of a strip with one edge clamped and the other edge free is
$$F_b=\frac{\pi ^2ad^3E}{24l^2}.$$
(7)
Equating the forces $`F`$ and $`F_b`$ we find that the magnetic field $`B_b`$ at which the first buckling instability occurs is
$$B_b=\frac{F_b}{j_cda^2}=\frac{\pi ^2}{24}\frac{E}{j_ca}\left(\frac{d}{l}\right)^2.$$
(8)
We estimate the fields $`B_c\simeq 0.04`$ T, $`B_p\simeq 0.15`$ T, and $`B_b\simeq 4`$ T using the data for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> superconductors, $`E\simeq 10^2`$ GPa, and assuming that $`j_c\simeq 10^9`$ A/m<sup>2</sup>, $`w\simeq 10^{-3}`$ m, $`d\simeq 10^{-4}`$ m, and $`d/l\simeq 10^{-2}`$. This estimate verifies our initial suggestion that $`B_p\ll B_b`$.
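These estimates follow directly from Eqs. (5), (7), and (8). The sketch below reproduces them for illustrative parameter values of our own choosing (half-width $`a=1`$ mm, $`d=0.1`$ mm, $`d/l=10^{-2}`$, $`j_c=10^9`$ A/m<sup>2</sup>, $`E=100`$ GPa), which give $`B_c\approx 0.04`$ T, $`B_p\approx 0.16`$ T, and $`B_b\approx 4`$ T, consistent with the numbers quoted above.

```python
import math

mu0 = 4e-7 * math.pi

def characteristic_fields(j_c, d, a, l, E):
    """B_c, B_p (Eq. 5), F_b (Eq. 7) and B_b (Eq. 8) for a thin strip of half-width a."""
    B_c = mu0 * j_c * d / math.pi
    B_p = B_c * (1.0 + math.log(2.0 * a / d))          # width w = 2a
    F_b = math.pi ** 2 * a * d ** 3 * E / (24.0 * l ** 2)
    B_b = F_b / (j_c * d * a ** 2)
    return B_c, B_p, F_b, B_b

# assumed illustrative parameters (SI units)
print(characteristic_fields(j_c=1e9, d=1e-4, a=1e-3, l=1e-2, E=1e11))
# -> (~0.04 T, ~0.16 T, ~0.41 N, ~4.1 T)
```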
The height $`s`$ of the right end of the buckled strip (see Fig. 1b) can be found analytically if the maximum angle $`\theta _m`$ between the tangent to the buckled strip and the substrate is small . Assuming that the force $`F`$ slightly exceeds the critical value $`F_b`$ we obtain a sinusoidal bending with the amplitude
$$\frac{s}{l}\simeq \frac{4\sqrt{2}}{\pi }\sqrt{\frac{F}{F_b}-1}\approx 1.8\sqrt{\frac{F}{F_b}-1}$$
(9)
and
$$\theta _m\simeq 2\sqrt{2}\sqrt{\frac{F}{F_b}-1}=\frac{\pi }{2}\frac{s}{l}.$$
(10)
Now assume that the external magnetic field is increased with constant ramp rate $`\dot{B}_a`$ and the threshold of the buckling instability is reached when $`F=F_b`$. One can consider several scenarios how the buckling evolves, depending on the value of $`\dot{B}_a`$ and on the ratio of the time constants for bending of the strip, $`\tau _b`$, for magnetic flux diffusion, $`\tau _m`$, and for heat diffusion, $`\tau _h`$, see Ref. for details.
The first scenario applies to a very low ramp rate $`\dot{B}_a\ll B_a/\tau _m`$, where the current and magnetic field distributions inside the strip, and thus the magnetic forces, follow the increasing field $`B_a`$ without delay. In this case the strip starts to bend as soon as the magnetic compression force $`F`$ reaches the critical value $`F_b`$. The force $`F=B_aM`$ via the magnetic moment $`M`$ depends on the perpendicular field component near the tilted tip of the long strip, $`B_{\perp }=B_a\mathrm{cos}\theta _m`$. This means that in $`\theta _m(F)`$, Eq. (10), $`F`$ depends on $`\theta _m`$ and one has to find the value of $`\theta _m`$ self-consistently. To do this we need the appropriate dependence $`M(B_{\perp })`$. We shall see that the resulting $`B_{\perp }`$ decreases with increasing $`B_a`$ (or time); thus we have to use Eq. (4) with $`B_0=B_b`$ (the field where buckling starts) and with $`B_a`$ replaced by $`B_{\perp }`$ in the $`\mathrm{tanh}`$. Expanding the hyperbolic tangent we thus find for $`B_a\geq B_b`$ and $`B_a-B_b\ll B_c`$,
$$FF_b\frac{B_a}{B_b}\left(1\frac{B_bB_{}}{B_c}\right).$$
(11)
Inserting this force into Eq. (10), $`\theta _m^2=8(F/F_b-1)`$, and solving for $`\theta _m`$ using $`B_{\perp }\approx B_a(1-\theta _m^2/2)`$ and $`B_a\gg B_c`$ we obtain
$$\theta _m\simeq \sqrt{2}\sqrt{\frac{B_a}{B_b}-1}.$$
(12)
This self-consistent tilt angle $`\theta _m`$ is two times smaller than the tilt angle of Eq. (10) for constant compression force $`F`$. The physical origin of this negative feedback is the reduction of the total U-turning current, and thus of the force $`F`$, caused by the decrease of $`B_{\perp }`$ when the end of the strip tilts; compare the current distributions in Fig. 2.
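The self-consistency described here can also be checked numerically. The sketch below is our own illustration (with the same assumed parameter values as in the sketch above): it solves $`\theta _m^2=8(F/F_b-1)`$ with the full $`\mathrm{tanh}`$ form of the force and confirms that the resulting tilt angle follows Eq. (12).

```python
import math

B_c, B_b = 0.04, 4.1          # tesla; assumed illustrative values (see above)

def force_ratio(theta, B_a):
    """F/F_b from Eq. (4) with B_0 = B_b and B_a -> B_perp = B_a*cos(theta) inside tanh."""
    B_perp = B_a * math.cos(theta)
    return (B_a / B_b) * (1.0 - 2.0 * math.tanh((B_b - B_perp) / (2.0 * B_c)))

def tilt_angle(B_a, lo=0.0, hi=1.0, steps=60):
    """Self-consistent theta_m solving theta^2 = 8 (F/F_b - 1) by bisection."""
    g = lambda th: th ** 2 - 8.0 * (force_ratio(th, B_a) - 1.0)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

B_a = 4.2
print(tilt_angle(B_a))                        # ~0.22 rad, self-consistent value
print(math.sqrt(2.0 * (B_a / B_b - 1.0)))     # Eq. (12) prediction, also ~0.22
```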
A different scenario appears when the buckling occurs with a delay at a force $`F_d`$ slightly above $`F_b`$ (“overheating”). Several reasons for such a delay are conceivable, e.g., sticking of the strip to the substrate by adhesion, or a misalignment of the perpendicular applied magnetic field $`𝐁_a`$ such that the force $`𝐅`$ in Fig. 1 points slightly downward to the substrate. A small misalignment is probably inevitable for a typical experiment.
When after zero-field cooling $`F=F_d=j_cB_dda^2`$ is reached at $`B_a=B_d`$, the buckling amplitude jumps almost instantly to a finite value $`s\propto \sqrt{F_d/F_b-1}`$. To obtain this amplitude self-consistently one may combine Eq. (10) for $`\theta _m(F)`$ with Eq. (4) for $`F(\theta _m)`$, like in the first scenario, noting that $`M`$ and thus the force $`F=B_aM`$ depend on $`B_{\perp }=B_a\mathrm{cos}\theta _m`$. The sudden jump of $`\theta _m`$ at $`B_a=B_d`$ means that $`B_{\perp }`$ is reduced from $`B_d`$ to $`B_d(1-\theta _m^2/2)`$ (if $`\theta _m^2\ll 1`$) and thus Eq. (4) is required, yielding
$$F=F_d\left[\,1-2\mathrm{tanh}\frac{\theta _m^2B_d}{4B_c}\right]$$
(13)
with $`F_d=j_cB_dda^2`$. Inserting this into Eq. (10) and solving for $`\theta _m^24B_c/B_d`$ one obtains for $`B_dB_c`$:
$$\theta _m^2=\frac{2B_c}{B_d}\left(\frac{F_d}{F_b}1\right).$$
(14)
Equation (14) differs from Eq. (12) because of the different history of the perpendicular field $`B_{\perp }(t)`$ and thus of the magnetic moment: In the first scenario $`B_{\perp }`$ started to decrease from the lower threshold $`B_b`$, and the decrease occurs since the rising $`B_a`$ is overcompensated by the growing $`\theta _m`$. In the present scenario, $`B_{\perp }`$ has reached the higher threshold $`B_d>B_b`$ before it drops down, and this drop is solely due to the growing tilt angle $`\theta _m`$ while $`B_a=B_d`$ is constant in this approximation. As a consequence, the self-consistent tilt angle $`\theta _m`$ is reduced much more in this case, by a factor $`\sqrt{B_c/4B_d}\ll 1`$.
This strong feedback mechanism requires that the change of the current density occurs instantaneously, much faster than the mechanical buckling, $`\tau _m\ll \tau _b`$. In reality the redistribution of the currents will lag behind the buckling. In the extreme limit $`\tau _m\gg \tau _b`$, the tilt angle would first jump to its original large value $`\theta _d^2=8(F_d/F_b-1)`$, Eq. (10), and then relax to the small value of Eq. (14), or to zero, or to some other value. The theoretical problem is intricate since a quantitative treatment requires the self-consistent time dependent solution of the equations for $`B_{\perp }(t)`$ with a relaxing, history dependent magnetic moment $`M\{B_{\perp }(t)\}`$, using $`B_{\perp }=B_a(t)(1-\theta _m^2/2)`$ and $`\theta _m^2=8(F/F_b-1)`$ with $`F=B_aM`$. This yields the implicit equation for $`B_{\perp }(t)`$,
$$B_{\perp }(t)=B_a(t)[\,5-4B_a(t)M\{B_{\perp }(t)\}/F_b],$$
(15)
from which the tilt angle $`\theta _m^2(t)=2(1-B_{\perp }/B_a)`$ is obtained. To solve this one requires a realistic model for the relaxing history dependent magnetization.
From our numerical work we expect the magnetic relaxation to be very fast and non-exponential when $`\partial B_{\perp }/\partial t`$ changes sign, as is the case during buckling. During very fast switching of $`B_{\perp }(t)`$, the electric field is so large that, irrespective of pinning, the vortices exhibit usual flux-flow behavior, with flux-flow resistivity $`\rho _f\approx (B/B_{c2})\rho _n`$, where $`B_{c2}`$ is the upper critical field and $`\rho _n`$ is the resistivity in the normal state. In this case the magnetic relaxation time of an Ohmic strip applies, $`\tau _m\approx \tau _0=0.249ad\mu _0/\rho _f`$. This time has to be compared with the buckling time $`\tau _b`$, which we estimate from the lowest resonance frequency $`\omega _1`$ of the strip (a cantilevered reed), $`\tau _b\approx \omega _1^{-1}`$, $`\omega _1^2\approx 1.03Ed^2/(\rho l^4)`$, where $`\rho `$ is the mass density. Inserting here numbers for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> at $`B_a=4`$ T, we estimate $`\tau _m\ll \tau _b`$, i.e., the magnetic relaxation initially is instantaneous. With proceeding relaxation, the electric field and the effective resistivity decrease, and thus the magnetic relaxation time increases. We thus expect that the real behavior of the strip is somewhere between the two considered limits $`\tau _m\ll \tau _b`$ and $`\tau _m\gg \tau _b`$.
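For completeness, these two time constants can be estimated in a few lines; the material parameters below (normal-state resistivity, upper critical field, and mass density of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub>) are typical literature values that we assume purely for illustration, and they reproduce the conclusion $`\tau _m\ll \tau _b`$.

```python
import math

mu0 = 4e-7 * math.pi

# assumed illustrative parameters (SI units)
a, d, l, E = 1e-3, 1e-4, 1e-2, 1e11   # strip geometry and Young modulus (as above)
rho_n, B_c2 = 1e-6, 100.0             # normal-state resistivity [Ohm m], upper critical field [T]
rho_mass = 6.3e3                      # mass density of YBCO [kg/m^3]
B = 4.0                               # applied field [T]

rho_f = (B / B_c2) * rho_n                                  # flux-flow resistivity
tau_m = 0.249 * a * d * mu0 / rho_f                         # Ohmic-strip magnetic relaxation time
omega_1 = math.sqrt(1.03 * E * d ** 2 / (rho_mass * l ** 4))
tau_b = 1.0 / omega_1                                       # buckling (lowest cantilever mode) time

print(tau_m, tau_b, tau_m / tau_b)                          # ~8e-7 s << ~2.5e-4 s
```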
Therefore, if buckling starts delayed at a force $`F_d`$ and disappears at a smaller force $`F_b<F_d`$, the tilt angle at the tip of the strip may oscillate between a maximum value $`\theta _{\mathrm{max}}\approx \theta _d`$ and zero. Such oscillation may occur since at $`\theta _d`$ the reduction of $`B_{\perp }`$ is so large that the currents tend to change sign and thus the force $`F`$ rapidly decreases. The tilt angle then may drop to zero, undershooting the small equilibrium value, Eq. (14). With continuously increasing applied field $`B_a(t)`$, the tilt angle $`\theta _m`$ thus makes a sudden jump from zero to $`\theta _{\mathrm{max}}`$, then drops rapidly back to zero, where it remains until the next excursion occurs when $`F`$ again reaches $`F_d`$. These buckling instabilities should occur at nearly equidistant field values with period of the order of the penetration field $`B_p`$, Eq. (5), and they will show up in the magnetization curve as a periodic set of flux jumps.
So far we assumed that the temperature $`T`$ of the strip stays constant, $`T=T_0`$. However, buckling of a strip in the critical state causes some heat release which increases the temperature and decreases the force $`F(T)\propto j_c(T)`$. A complete solution of the buckling instability in type-II superconductors with high critical current density should therefore include a self-consistent treatment of the magnetic field and temperature variations.
For a rough estimate of the decrease of the force $`F(T)`$ we assume here that $`j_c(T)\propto (T_c-T)`$, the critical temperature is $`T_c\gg T_0`$, and the heating of the strip is adiabatic. In this case we find that a sudden tilt to an angle $`\theta _m`$ leads to
$$\frac{F(T)-F(T_0)}{F(T_0)}\approx -\frac{j_cwB_d}{C(T_0)T_c}\frac{\theta _m^2}{2}.$$
(16)
Combining Eqs. (10), (13), and (16) we find that self-heating affects the buckling instability threshold if $`C(T_0)T_c\lesssim j_cwB_c`$, which results in $`T_0\lesssim 3`$ K for a heat capacity $`C(T)\approx 7\times T^3`$ J/(K m³) .
The temperature dependence of $`F(T)`$ may cause oscillations of the strip. Indeed, a sudden buckling leads to a heat pulse increasing the temperature $`T`$ and decreasing the force $`F(T)`$. If because of the temperature increase the force $`F(T)`$ falls below the buckling threshold $`F_b`$ then the strip straightens and the next instability occurs after the strip has cooled down.
In summary, we have shown that a strong magnetic field applied perpendicular to a cantilevered superconductor strip will lead to Euler buckling of this strip. We give the threshold field at which this elastic instability occurs. During buckling, the effective applied field at the tip of the strip decreases due to tilting. As a consequence, the buckling force is reduced. This feedback mechanism may lead to mechanical oscillations of the strip and its magnetization, which depend on the magnetic and thermal relaxation times of the specific experiment. At sufficiently low temperatures this sudden buckling may trigger a periodic series of flux-jump instabilities which should show in the magnetization curve.
R.G.M. acknowledges numerous stimulating discussions with Dr. A. Gerber and support from the Max-Planck-Institut für Metallforschung.
|
no-problem/9910/gr-qc9910022.html
|
ar5iv
|
text
|
# Collision of 1.4 𝑀_⊙ Neutron Stars: Dynamical or Quasi-Equilibrium?
## Conclusion.
In the two recent papers and , Shapiro provided useful insight to processes involving neutron stars. However, the arguments in these papers are not applicable to the head-on collision of 1.4 $`M_{}`$ neutron stars in and this paper. The crux of the problem is that such a collision is too dynamical to be studied using quasi-equilibrium analyses.
## Acknowledgments.
This research is supported by NASA NCS5-153, NSF NRAC MCA93S025, and NSF grants PHY96-00507, 96-00049, and 99-79985.
|
no-problem/9910/astro-ph9910183.html
|
ar5iv
|
text
|
# The kpc-scale radio source population
## 1 Introduction
Augusto et al., (1998) conducted the first systematic search for flat-spectrum radio sources with dominant structure on 90–300 mas angular scales (0.2–2 kpc linear scales at $`z>0.2`$): gravitational lenses and compact-/medium-sized symmetric objects (CSO/MSOs), in particular. The selected sample of 55 such radio sources is described in Section 2. In Section 3 we discuss results, from this sample, pertaining to CSO/MSOs. These are symmetric double or triple sources, with sizes smaller than 15 kpc (e.g., Readhead et al., 1996a ; Readhead et al., 1996b ). CSOs ($`<1`$ kpc; aged $`10^3`$–$`10^4`$ years) and MSOs (1–15 kpc; aged $`10^5`$–$`10^6`$ years) present compact lobes ($`<20`$ mas) having, overall, $`\alpha <0.75`$ ($`S_\nu \propto \nu ^{\alpha }`$), and are probably the precursors of the large radio galaxies which they resemble. VLBI surveys have unveiled a significant population of eighteen 0.01–0.1 kpc CSOs, which constitute $`6\%`$ of a complete flux-limited sample of 293 flat-spectrum radio sources (CJF; Taylor et al., 1996a ; Taylor et al., 1996b ). In the last section, we mention the potential of our sample in terms of understanding the hypothesized kpc-sized narrow-line region (NLR) in active galactic nuclei (AGN; e.g., Robson, (1996)).
## 2 The Sample
Starting from the total of $`4800`$ sources in the Jodrell-VLA Astrometric Survey (e.g., Patnaik et al., (1992)) and the first part of the Cosmic Lens All Sky Survey (e.g., Browne et al., (1998)), Augusto et al., (1998) have first established a parent sample containing 1665 strong ($`S_{8.4\mathrm{GHz}}>100`$ mJy), flat-spectrum ($`\alpha _{1.4}^{4.85}<0.5`$) radio sources. From this sample, 55 sources were selected in accordance with an extra resolution criterion as described in Augusto et al., (1998). The completeness of this latter sample depends on both the separation and the flux density ratio of the components of each radio source in the parent sample. Unresolved single-component sources would be rejected.
With regard to the spectral properties of the 55-source sample (Augusto et al.,, 1998), 45 sources have power-law radio spectra down to the lowest measured frequency (which is 365 MHz for 31 of the objects and 151 MHz for 14 of them), 3 sources present complex spectra, and 7 have spectra peaked at $`300`$ MHz. It is relevant that only two of the fourteen 0.2–1 kpc CSO candidates in Table 1 can be classified as GHz-Peaked Spectrum Sources (GPSs), peaking at $`0.5`$–10 GHz. From the same table, only three CSO candidates have a peak at $`300`$ MHz. Hence, the statement “every CSO is a GPS source” (Bicknell et al.,, 1997) seems incorrect.
There are two main populations of radio sources uncovered on kpc scales by Augusto et al., (1998). These consist of 22 CSO/MSOs and 30 core-plus-one-sided-jet (CJ) sources.
It is unfortunate that, for the vast majority of the 55 sources, information on any optical counterparts comes only from the Palomar Observatory Sky Survey (POSS) plates. Using POSS identifications, we have compared (Augusto et al.,, 1998) the abundance of blue stellar objects, red stellar objects, galaxies, and empty fields for the 55-source and the parent 1665-source samples. We have found that the fraction of blue stellar objects in the 55-source sample is half of the fraction of such objects in the 1665-source sample. Furthermore, the fraction of galaxies is three times larger in the 55-source sample. For the other two identification types examined, the results are comparable in both samples (one-third are empty fields and one-eighth are red stellar objects).
Thus, it seems that selecting kpc structure in the radio leads to selecting structure in the optical. There seems to exist a global bias against radio/optical unresolved sources. As regards redshift information, we have $`<z>\simeq 0.7`$ for the 19 out of the 55 sources that have spectroscopic data. The faintest sources (namely, the 18 sources that correspond to POSS empty fields) still need redshift determinations, suggesting that the average redshift of the sample will increase. Unfortunately for the discussion in this paper, very few of the CSO/MSOs have measured redshifts (Table 1).
## 3 Compact/Medium Symmetric Objects
The sample of 22 CSO/MSO candidates found by Augusto et al., (1998) — Table 1 — contains 9 certain CSOs and two certain MSOs. Most likely, six of the remaining sources are MSOs, leaving five sources that could be either. The fact that for sources at $`z>0.2`$ we have selected the ones dominated by structures on 0.2–2 kpc scales suggests a bias against the population of MSOs within our sample. Since most flat spectral-index sources will consist of a pair of compact lobes (plus, possibly, a core), they will be included in our sample only if their sizes are $`0\text{.}^{\prime \prime }3`$. Much larger sources (like B0824+355 in Table 1), consisting of very weak jets and low surface brightness extended lobes, are probably the exception. We believe that most MSOs in our sample will have sizes of $``$1–2 kpc, much like the confirmed MSO B0205+722 (Table 1). Note that even if we allow for sources with $`z<0.2`$, this will only favour the increase in number of small MSOs, since for the same angular dimensions seen, a lower redshift will translate into a smaller linear size.
Augusto et al., (1998) have shown that the 55-source sample includes every CSO from the 1665-source parent sample having a 160–300 mas separation (0.4–1 kpc for sources at $`z>0.2`$) between compact components with a flux-density ratio of 7:1 or smaller. CSOs containing compact lobes with similar flux-density ratios are included in the sample down to a separation of 90 mas (0.2–0.6 kpc at $`z>0.2`$). The key issue now is to review evidence for why virtually all CSOs present in the 1665-source parent sample are at most the 14 found by Augusto et al., (1998) among their 55-source sample. Typically, CSOs have weak cores (weaker than any of the lobes) and, hence, it is the lobes that are the ‘components’ that will go through the selection criterion of Augusto et al., (1998). It is very rare to find a CSO with lobes presenting flux density ratios greater than 7. In fact, there are not any of these cases among the 0.2–1 kpc CSOs in Table 1 or the eighteen 0.01–0.1 kpc CJF CSOs discussed here. Therefore, we believe that Augusto et al., (1998) have selected virtually all of the CSOs present in the 1665-source parent sample; these are shown in Table 1. In any case, for the discussion of this paper, we performed simulations (see below), which estimate the effects of the ‘resolution’ criterion on the CJF sample, before making any comparison between our CSO-fraction and that of the CJF. The simulations give results that are consistent with the ‘typical’ morphology of CSOs just presented.
Conservatively, taking a maximal number of 0.2–1 kpc CSOs as 14 (Table 1, including the sources classified as ‘question marks’) out of a parent sample containing 1665 sources, only $`0.8\%`$ of flux-limited samples seem to be such CSOs. It seems, then, that these are six times less common than 0.01–0.1 kpc CSOs (which constitute $`6\%`$ of CJF). Both the 0.2–1 kpc and the 0.01–0.1 kpc CSOs are dominated by components $`<20`$ mas in size. Is the number difference due to luminosity evolution alone? Strong luminosity evolution takes place during the time that the 0.01-kpc scale CSOs grow to be 100-kpc scale radio sources (e.g., Readhead et al., 1996a ; Readhead et al., 1996b ). Kaiser & Alexander, (1997) have proposed a model in which the luminosity of double sources decreases proportionally to the square root of their size. If this relation applies continuously as the source evolves from the 0.01-kpc to the 100-kpc scale, then in the evolution from a 0.01–0.1 kpc to a 0.2–1 kpc CSO, size increases by a factor of $`10`$ and hence the luminosity decreases by $`3`$. Given that all CJF sources have $`S_{5\mathrm{GHz}}>350`$ mJy and, like the 55-source sample, have $`\alpha _{1.4}^{4.85}<0.5`$, our flux-density criterion $`S_{8.4\mathrm{GHz}}>100`$ mJy allows a sampling $`3`$ times fainter, cancelling out the predicted luminosity evolution. Before rushing to other evolutionary explanations, we note that our selection process included a resolution criterion not present in CJF. Hence, we need to find out how many of the 18 CJF CSOs would remain in the CJF if it had an equivalent resolution criterion. The simplest way to do this is to use models fitted to the 18 CJF CSOs, expand the separation of the components by a factor of 10, and check whether they meet the criteria for inclusion in our sample. This will only give indicative results, of course. The models and maps are found in the literature from the VLBI surveys, except for three models that we crudely produced from the available maps.
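A back-of-the-envelope check of this cancellation, using only the scalings quoted above (the numbers are order-of-magnitude):

```python
from math import sqrt

size_growth = 10                      # 0.01-0.1 kpc -> 0.2-1 kpc, roughly a factor of 10
luminosity_drop = sqrt(size_growth)   # Kaiser & Alexander: L falls as the square root of size
flux_depth_gain = 350 / 100           # CJF cut (350 mJy) vs. our cut (100 mJy)
print(luminosity_drop, flux_depth_gain)   # ~3.2 vs ~3.5: the two factors roughly cancel
```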
Using the program FAKE in the Caltech VLBI package (Pearson,, 1991), we have performed a test for the reliability of selection (details in Augusto et al., (1998)). Eleven out of the ‘order-of-magnitude-expanded’ 18 CJF 0.01–0.1 kpc CSOs would be in our sample. To contemplate the possibility that some of our 0.2–1 kpc CSOs might have been selected by a lucky combination of observational conditions, we also ran FAKE on the 14 such CSOs in Table 1. All of them are reliably in our sample.
The revised frequencies of CSOs are then $`0.8\%`$ (14/1665) in our sample and $`4\%`$ (11/293) in CJF. Since five of the fourteen 0.2–1 kpc CSO candidates in Table 1 could be $`>`$1 kpc MSOs, a conservative factor of $`5`$ still remains between the abundance of CSOs in both samples. To explain this difference, we suggest evolution of the lobes in CSOs as they grow — self-similar growth of radio galaxies: the lobes start off as compact hot spots when 0.01–0.1 kpc apart and expand until they grow $`100`$ kpc apart, as in normal radio galaxies. The number of 0.2–1 kpc young radio galaxies seems to be less than the number with sizes 0.01–0.1 kpc due to the resolution criterion used to select the 55-source sample in Augusto et al., (1998): only double (or triple) sources with compact ($`<20`$ mas) components are in the sample. The extended lobes of Compact Steep Spectrum ($`\alpha >0.5`$) radio sources, the dominant radio sources on 0.2–1 kpc scales, cannot be selected by the resolution criterion of Augusto et al., (1998).
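The quoted fractions follow directly from the sample sizes (a trivial arithmetic check):

```python
ours = 14 / 1665    # CSO candidates in the 1665-source parent sample
cjf = 11 / 293      # CJF CSOs surviving the simulated resolution criterion
print(f"{ours:.2%}, {cjf:.2%}, ratio ~ {cjf / ours:.1f}")   # ~0.84%, ~3.75%, factor ~4.5
```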
## 4 Future
Once redshifts are determined for the remaining 36 of the 55 sources, we will not only classify CSO/MSOs correctly, according to their sizes, but also determine the linear (projected) sizes of the CJs. Most of these CJs might also show evidence for strong shocks in the NLR. Half of the CJs in the 55-source sample contain sharply bent jets that bend by more than $`90^{\circ }`$, in some cases more than once. This hints at strong interactions with the interstellar medium of the host galaxies. Altogether, the CSOs, MSOs, and CJs in our sample will give us clues about the composition and density of the NLR in galaxies because of their interactions with the NLRs of their hosts. Due to our good statistics, this might be a useful step forward towards understanding the standard model of AGN as a whole, and the NLR in particular.
|
no-problem/9910/hep-th9910193.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
The simplest compactifications of string theories are those on tori. These preserve maximal supersymmetry, and furthermore they enjoy a large discrete symmetry, called U-duality. This U-duality group can be viewed as being generated by two distinct sets of symmetries (see e.g. and references therein).
The first is T-duality, a perturbative symmetry of string theory, i.e. it holds order by order in the string loop expansion. T-duality states that strings on circles of radius $`R`$ are in fact equivalent to strings on circles with inverse radii. In the identification, the roles of string momentum and winding states are interchanged. The T-duality group of string theory on a $`d`$-dimensional torus is $`SO(d,d,𝐙)`$.
The second contribution to the U-duality group follows from the observation that type IIA strings on a $`d`$-torus may alternatively be regarded as M-theory on a torus of one dimension higher. This point of view makes obvious a geometric symmetry group $`SL(d+1,𝐙)`$ of the torus. From the string theory perspective, this is a non-perturbative symmetry: for instance, it interchanges non-perturbative D$`0`$-branes with stringy momentum modes.
When these two groups are combined, they generate the U-duality groups, whose continuous versions have been known to exist for a long time as the hidden symmetries of supergravity theories. For various torus dimensions $`d`$, they are listed in table 1.
In the following we will concentrate on the case $`d=3`$, where the U-duality group is $`SL(5,𝐙)`$.
The states in the theory transform in multiplets of the U-duality group. In particular this is the case for the $`1/2`$ BPS states. In type IIB strings on a three-torus these states are D$`3`$-branes, three different D$`1`$-branes (one for each cycle of the torus), three fundamental string winding modes and three momenta around the torus. These ten objects transform as an antisymmetric tensor under $`SL(5)`$. The $`1/4`$ BPS states, in which we will be mainly interested, are realised as combinations of such $`1/2`$ BPS objects.
The symmetry under U-duality transformations also implies that the degeneracies of states related by such a transformation should coincide. In the case of $`1/4`$ BPS states this degeneracy can be calculated in perturbative string theory by considering a state with only momentum and winding quantum numbers. The degeneracy of a state with momentum vector $`p_i`$ and winding numbers $`w_i`$ is given by $`D(pw)`$, with $`D(n)`$ defined by the chiral string partition function:
$$\sum _nD(n)q^n=256\prod _{n=1}^{\infty }\left(\frac{1+q^n}{1-q^n}\right)^8.$$
(1)
Since any other $`1/4`$ BPS state can be mapped to such a state, all $`1/4`$ BPS degeneracies should be given by a similar expression.
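For concreteness, the coefficients $`D(n)`$ of the series in Eq. (1) can be generated numerically. The sketch below (Python) simply multiplies out the truncated product quoted above; it introduces no assumptions beyond that formula.

```python
from math import comb

def degeneracies(nmax):
    """Coefficients D(0..nmax) of 256 * prod_{n>=1} ((1+q^n)/(1-q^n))^8."""
    coeffs = [0] * (nmax + 1)
    coeffs[0] = 256
    for n in range(1, nmax + 1):
        # series of ((1+q^n)/(1-q^n))^8 truncated at q^nmax
        factor = [0] * (nmax + 1)
        for j in range(0, min(8, nmax // n) + 1):            # (1+q^n)^8
            for k in range(0, (nmax - n * j) // n + 1):      # (1-q^n)^(-8)
                factor[n * (j + k)] += comb(8, j) * comb(k + 7, 7)
        coeffs = [sum(coeffs[i] * factor[m - i] for i in range(m + 1))
                  for m in range(nmax + 1)]
    return coeffs

print(degeneracies(3))   # [256, 4096, 36864, 245760]
```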
The purpose of this work (reported in ) is to investigate the implications of the U-duality symmetry of the string theory on the Born-Infeld gauge theory living on a three-brane wrapping a torus. As we will see, the BPS states of the strings have an interpretation in terms of fluxes in the gauge theory on the three-brane. We will study the BPS sector of this Born-Infeld theory and, via BPS quantisation (following ) determine the associated degeneracies of the BPS states in this gauge theory. These will turn out to be in accord with the string results.
## 2 The Born-Infeld gauge theory on the three-brane
We will focus the discussion on the seven-dimensional case, corresponding to string theory compactified on a three-torus, with U-duality group $`SL(5,𝐙)`$. On the type IIB string theory side, BPS states are built from ten distinct objects: three-branes wrapping the three-torus, fundamental and D-strings winding around three different one-cycles, and momentum modes in the three internal directions. The associated quantum numbers transform in the ten-dimensional representation of $`SL(5)`$.
In the gauge theory on a three-brane wrapping the torus, all these ten quantum numbers have an interpretation as fluxes. The relations are given in table 2.
The number of D$`3`$-branes is of course related to the rank $`N`$ of the $`U(N)`$ gauge theory. The magnetic fluxes, the zero modes of the magnetic field $`B_i=\frac{1}{2}ϵ_{ijk}F_{jk}`$, correspond to D-strings, whereas the S-dual electric fluxes take the role of fundamental strings. Finally the momenta (i.e. the components of the integrated Poynting vector $`E\times B`$) simply translate to the string theory momenta around the torus.
To bring out the U-duality properties, it is convenient to organise these gauge theory quantities in an antisymmetric five by five tensor $`M_{ij}`$, as follows:
$$M=\left(\begin{array}{ccccc}0& P_3& -P_2& E_1& B_1\\ -P_3& 0& P_1& E_2& B_2\\ P_2& -P_1& 0& E_3& B_3\\ -E_1& -E_2& -E_3& 0& N\\ -B_1& -B_2& -B_3& -N& 0\end{array}\right).$$
(2)
The $`SL(5)`$ acts on this matrix by conjugation. One can easily recognise the two subgroup $`SL(3)`$ and $`SL(2)`$. The former is the geometric symmetry on the torus and acts as such on the three vectors. It sits in the top lefthand block of the $`SL(5)`$ matrices. The two by two lower righthand block realises the $`SL(2)`$ which is the electromagnetic duality. It mixes electric and magnetic components. Both subgroups leave the rank $`N`$ untouched. We will discuss those transformations affecting the rank in the following.
The particular gauge theory on the D$`3`$-brane that we want to consider is the Born-Infeld gauge theory. The abelian version of its action is given by
$$S_{BI}=\frac{1}{g_s}\sqrt{det(G_{\mu \nu }+F_{\mu \nu })}.$$
(3)
The inverse string coupling in front of the action is typical of D-branes. We omit possible non-trivial $`B`$-field background contributions. The generalisation to higher rank is thought to be given by a symmetrised trace over the gauge group; this issue is not fully resolved yet however, for a discussion see e.g. . We will compute the Hamiltonian and BPS masses for the abelian case and assume the generalised result for arbitrary $`N`$.
In order to calculate the Hamiltonian we need to introduce the electric field
$$E^i=\frac{\delta }{\delta A_i}.$$
It now turns out that the square of the Hamiltonian density $`H`$ can be expressed in a simple way in terms of the matrix $`M_{ij}`$ defined in equation (2), as
$`H^2`$ $`=`$ $`{\displaystyle \frac{1}{g_s^2}}(N^2+B^2)+E^iG_{ij}E^j+P_i(G^{-1})^{ij}P_j`$ (4)
$`=`$ $`{\displaystyle \frac{1}{2}}\text{Tr }M^2.`$
In the first line, we see the contribution from the D$`3`$ and D$`1`$-branes, with the characteristic coupling constant dependence, and then the terms corresponding to winding and momentum. The second line demonstrates that the Hamiltonian takes a very simple form in terms of the $`SL(5)`$ tensor $`M`$. (We can absorb the coupling, as well as any non-trivial background fields, in a five-dimensional metric).
This form suggests an $`SL(5)`$ covariant description of the theory. However, the matrix $`M`$ does not have arbitrary components, there exist relations between them. Remarkably we can write these relations again in an $`SL(5)`$ covariant form, as the constraint
$$K^i=\frac{1}{8}ϵ^{ijklm}M_{jk}M_{lm}=0.$$
(5)
In components the five-vector $`K^i`$ is given by
$$K=(NP_i-(E\times B)_i,P\cdot B,P\cdot E).$$
The first three components are precisely the definition of the Poynting vector, while the last two components are automatically zero whenever the first three are.
We are therefore led to consider an arbitrary matrix $`M`$, provided it satisfies the five-vector constraint $`K=1/2(MM)=0`$. At this point a major problem is of course that, to make the connection to gauge theory, while $`E`$, $`B`$ and $`P`$ may be position dependent, the rank $`N`$ should of course be a constant. We will turn to this in a moment.
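The algebra above is easy to check numerically. The sketch below (Python/NumPy; the sample flux values are arbitrary illustrations) builds the antisymmetric matrix of Eq. (2), evaluates the constraint of Eq. (5) with an explicit Levi-Civita tensor, and compares $`\frac{1}{2}\mathrm{Tr}(MM^T)`$ — the flat-metric version of Eq. (4) with $`g_s`$ and $`G`$ set to unity — with $`N^2+E^2+B^2+P^2`$.

```python
import numpy as np
from itertools import permutations

def build_M(N, E, B, P):
    """Antisymmetric 5x5 flux matrix of Eq. (2)."""
    M = np.zeros((5, 5))
    M[0, 1], M[0, 2], M[1, 2] = P[2], -P[1], P[0]
    M[:3, 3] = E
    M[:3, 4] = B
    M[3, 4] = N
    return M - M.T

def K_vector(M):
    """K^i = (1/8) eps^{ijklm} M_jk M_lm, Eq. (5)."""
    eps = np.zeros((5,) * 5)
    for p in permutations(range(5)):
        eps[p] = np.linalg.det(np.eye(5)[list(p)])      # sign of the permutation
    return np.einsum('ijklm,jk,lm->i', eps, M, M) / 8.0

N = 2.0
E = np.array([0.3, -0.1, 0.5])
B = np.array([0.2, 0.4, -0.3])
P = np.cross(E, B) / N                  # momentum chosen as the Poynting vector

M = build_M(N, E, B, P)
print(K_vector(M))                                             # ~ 0: the constraint is satisfied
print(0.5 * np.trace(M @ M.T), N**2 + E @ E + B @ B + P @ P)   # both equal N^2+E^2+B^2+P^2
```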
Finally, we are interested in the BPS states of the theory. Again the BPS masses can be written in a nice form using the matrix $`M_{ij}`$. The BPS mass is a function of the ten charges, which are given by the zero modes of $`M`$. We write these as $`m_{ij}=M_{ij}`$. In terms of this matrix of fluxes, the BPS mass formula takes the form
$$M_{BPS}^2=\frac{1}{2}\text{Tr }m^2+2|k|.$$
(6)
Here $`k`$ is the zero mode equivalent of $`K`$, i.e. $`k=1/2(mm)`$. Note that while the space dependent $`K`$ is automatically zero, $`k`$ is not. In fact $`k=0`$ only for $`1/2`$ BPS states, while $`1/4`$ BPS states have non-zero vector $`k`$. The BPS equations can be expressed in a covariant fashion in terms of $`M`$ and its zero-modes $`m`$ as well.
We will go on to quantise the space of BPS states, in order to try to determine the quantum degeneracies of the BPS states. We will first review the method of BPS quantisation introduced in for the $`U(N)`$ Yang-Mills theory; then we will apply this to the Born-Infeld case.
## 3 BPS quantisation of Yang-Mills theory
In Hacquebord and Verlinde discussed the question of $`SL(5)`$ invariance of the BPS spectrum in the context of Yang-Mills theory on a torus. In the Yang-Mills theory we have the fields $`A_\mu `$, the vector potential, and six (adjoint) scalar fields $`X^I`$. The BPS equations depend on the fluxes of the configuration. In the simple case where only the momentum in the one-direction, $`p_1`$, and the rank $`n`$ are non-zero, we may gauge-fix $`A_0`$ and $`A_1`$ to zero and then obtain the BPS equations
$$(\partial _0-\partial _1)A_{2,3}=0,\qquad (\partial _0-\partial _1)X^I=0,\qquad [A_i,A_j]=[A_i,X^I]=[X^I,X^J]=0.$$
(7)
These equations were recognised in as the left-moving sector of a matrix string theory. Due to the vanishing of the commutators, one can take all $`n`$ by $`n`$ matrices to be diagonal. At first sight this seems to imply that we have simply $`n`$ distinct left-moving theories on a string. However, in the periodicity conditions in the coordinate $`x_1`$ one may include a permutation of the eigenvalues,
$$A_i(x_1+2\pi )=SA_i(x_1)S^{-1},$$
and similarly for the $`X^I`$, so that effectively one describes the “long strings” introduced in as the twisted sectors of a conformal field theory on a symmetric product, with a total length $`n`$.
Hacquebord and Verlinde concentrated on the case where one has just one string of length $`n`$. In this case, quantisation of the theory restricted to the BPS configurations yields left-moving oscillators with a fractional moding, in multiples of $`1/n`$. Then, to obtain a total momentum $`p_1`$ one should consider states with oscillator number $`np_1`$, so that the degeneracy of such states is indeed given by $`D(np_1)`$, the result from string theory.
For more general quantum numbers the degeneracies were argued to be the same in . The total BPS degeneracies for the Yang-Mills $`U(N)`$ gauge theory therefore respect the U-duality group $`SL(5,𝐙)`$, at least in the single long string sector.
The essential point in this result is that in the subsector of the theory respecting the BPS conditions, the configurations reduce to strings. The length of these strings equals the rank of the gauge theory. We will now try to apply the same arguments to the Born-Infeld gauge theory. In the abelian case where the theory is well understood, we will again see the reduction to a string theory. For the non-abelian version, we will assume that similarly the equations reduce to those of a matrix string theory, so that effectively we can also here use the abelian BPS equations. We know that in the limit of large $`N`$, where the theory is adequately described by the Yang-Mills theory, this should be the case, but for general $`N`$ this remains an assumption essential for our result.
## 4 BPS quantisation of the Born-Infeld theory
For the supersymmetric Born-Infeld case we will now reexamine the situation. As explained above we will use the BPS equations found from the abelian Born-Infeld theory, and assume these to be valid for the non-abelian case, with the only alteration that the length of the domain on which the fields live is multiplied by the rank $`n`$.
Just as in the Yang-Mills case let us start from the easy case, where only the quantum numbers associated to the rank, $`n`$, and the momentum in the one-direction, $`p_1`$, are non-zero. In this case the BPS equations are
$$E_2=B_3,E_3=B_2,E_1=B_1=P_2=P_3=0.$$
(8)
If we insert the expressions for $`E_i`$ and $`B_i`$ in terms of the gauge field $`A_i`$ we again find the same equation as in the case of the Yang-Mills theory,
$$(\partial _0-\partial _1)A_i=0,$$
and we are suppressing the six extra scalars $`X^I`$. From the fact that the equations are precisely the same we can of course conclude that here we have the same degeneracy as the one found in the previous situation. However, in order to be able to generalise to arbitrary fluxes, it is convenient to go through the calculation in a little more detail.
In order to quantise the theory in a lightcone gauge, we identify the electric field field with left-moving string coordinates,
$$E_iX_i,$$
enjoying the appropriate commutation relations
$$[X(\sigma ),X(\sigma ^{})]=i\delta (\sigma -\sigma ^{}).$$
From this relation to a string theory one can compute the degeneracy. However, let us step back and try to make the analogy to the string theory before the fixing to lightcone gauge. In order to do this we propose to identify the lightcone coordinates $`X^\pm `$ with the rank, $`N`$, and the momentum, $`P_1`$. For the moment we therefore assume $`N`$ to be a real, fluctuating field, whose zero mode is the rank $`n`$. This implies imposing a commutation relation between the two quantities,
$$[N,P_1]=i\delta .$$
(9)
In this notation, if we write down the constraint $`K^i=0`$ (equation (5)), its only non-trivial component $`K^1`$ takes the form
$$K^1=X^+X^{-}-\frac{1}{2}X^iX^i=0$$
(10)
which we recognise as precisely the Virasoro constraint! Using the gauge symmetry generated by this constraint we may now fix $`N=X^+`$ to be a constant, $`n`$. This then determines $`P^1=X^{-}`$ in terms of the other fields as
$$nP_1=(E\times B)_1.$$
In conclusion, by introducing an underlying pre-theory, in which the rank is allowed to be a fluctuating field, we manage to make contact to a string theory before the fixing of lightcone gauge. In this theory the constraint $`K=0`$ is interpreted as the Virasoro constraint. The original gauge theory is then identified with the lightcone gauge-fixed version of this theory, where the rank is identified with the lightcone momentum.
So far we have only introduced some additional structure, which by fixing a gauge we again removed. The usefulness of this additional structure becomes clear when we consider the generalisation to arbitrary fluxes. To solve the problem for general fluxes, let us insert the expression in terms of the $`X`$’s from the previous discussion in the matrix $`M_{ij}`$:
$$M=\left(\begin{array}{ccccc}0& 0& 0& 0& 0\\ 0& 0& X^{-}& X^2& X^3\\ 0& -X^{-}& 0& X^3& X^2\\ 0& -X^2& -X^3& 0& X^+\\ 0& -X^3& -X^2& -X^+& 0\end{array}\right).$$
(11)
Now, if $`M_{ij}`$ satisfies the Born-Infeld BPS equations, then, since these equations are covariant, an $`SL(5)`$ transformed $`M_{ij}^{}`$ is a solution as well. If we therefore conjugate the matrix $`M`$ above with an appropriate $`SL(5,𝐙)`$ matrix, we obtain a new solution with arbitrary new zero-modes of the fields (fluxes). The entries of this new $`M^{}`$ are all linear combinations of the $`X`$, so the new electric and magnetic fields, as well as momenta and rank, are all functions of all the $`X`$’s. In particular, since the new rank $`N^{}`$ is not anymore simply $`X^+`$, it is no longer a constant. However, from the previous discussion we see that we may remedy this simply by making a gauge transformation generated by the constraint $`K^i`$, to fix the gauge so that again $`N^{}`$ is a constant. Effectively, by performing a U-duality transformation we have gone out of the lightcone gauge, and one has to apply a compensating gauge transformation to reach a new configuration with constant rank. Since this is only a gauge transformation, it does not of course affect the degeneracy, so that this is indeed automatically U-invariant.
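The covariance argument can also be illustrated numerically, continuing the sketch of Sec. 2 (with build_M and K_vector as defined there). Here the action of a group element on the antisymmetric flux tensor is taken as $`MSMS^T`$, which is one natural reading of the conjugation used in the text; the particular integer shear below is an arbitrary illustrative choice, not taken from the paper.

```python
import numpy as np

S = np.eye(5)
S[3, 0] = 1.0                            # an integer unimodular element, det S = 1
N, E, B = 2.0, np.array([0.3, -0.1, 0.5]), np.array([0.2, 0.4, -0.3])
M = build_M(N, E, B, np.cross(E, B) / N) # constrained configuration, K = 0
M2 = S @ M @ S.T                         # transformed configuration
print(np.allclose(K_vector(M2), 0))      # True: the constraint survives the transformation
print(M[3, 4], M2[3, 4])                 # the rank entry changes, so a compensating gauge fixing is needed
```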
## 5 Conclusions
We have seen that Born-Infeld theory in $`3+1`$ dimensions can be naturally written in terms of U-covariant objects: the antisymmetric matrix $`M_{ij}`$, together with the five-vector of constraints $`K^i=0`$. The spectrum of BPS masses, as well as the BPS equations take a covariant form in terms of these quantities.
To study the degeneracies of the BPS states, we generalised the BPS quantisation applied to Yang-Mills theory in . In the BPS sector, we saw that the theory reduced to a string theory, giving rise to the stringy degeneracies $`D(n)`$. Furthermore, we proposed a theory underlying the actual gauge theory in the BPS sector, in which the rank is treated on an equal footing as the other fields. In this formulation the constraint $`K`$ was identified with the generator of conformal transformations. Fixing this theory to lightcone gauge yields the actual gauge theory with constant rank $`N`$.
|
no-problem/9910/quant-ph9910021.html
|
ar5iv
|
text
|
# Probabilistic Teleportation and Entanglement Matching
## Abstract
Teleportation may be taken as sending and extracting quantum information through quantum channels. In this report, it is shown that to get the maximal probability of exact teleportation through partially entangled quantum channels, the sender (Alice) need only to operate a measurement which satisfy an “entanglement matching” to this channel. An optimal strategy is also provided for the receiver (Bob) to extract the quantum information by adopting general evolutions.
PACS Number(s) 03.67, 03.65.Bz
Quantum teleportation, the process that transmits an unknown qubit from a sender (Alice) to a receiver (Bob) via a quantum channel with the help of some classical information, was first considered by Bennett, Brassard, Crepeau, Jozsa, Peres, and Wootters (BBCJPW) $`\left[1\right]`$. In their scheme, the quantum channel is a maximally entangled pair (any of the Bell states), and the original state can be deterministically transmitted to Bob.
The process of teleportation may be regarded as sending and extracting quantum information via the quantum channel. We will apply this picture to investigate a partially entangled quantum channel. Because a mixed state cannot be purified to a Bell state with any nonzero probability $`\left[24\right]`$, a quantum channel consisting of a mixed state can never provide teleportation with fidelity 1. Therefore only pure entangled pairs should be considered if we require exact teleportation (even with some probability). By virtue of the Schmidt decomposition $`\left[5\right]`$, a partially entangled pair may be expressed as
$$|\mathrm{\Phi }_{2,3}=a|00_{2,3}+b|11_{2,3}\text{ }(\left|a\right|^2+\left|b\right|^2=1,\text{ }\left|a\right|>\left|b\right|).$$
(1)
(hereafter, we assume particle 2 is at Alice’s site and particle 3 at Bob’s site). The absolute value of the Schmidt coefficient $`\left|b\right|`$ is invariant under local operations, and it determines the entanglement entropy $`E`$ of the state $`\left[6\right]`$. Such a state can be concentrated to a Bell state $`[6,7]`$ with probability $`2\left|b\right|^2`$, and the concentrated pair may be used as a new quantum channel to carry out a teleportation.
In this report, Alice performs a Von-Neumann measurement on her side while Bob applies a corresponding general evolution to reestablish the initial state with a certain probability. We assign a measure of entanglement degree to Alice’s measurement and show that the optimal probability of exact teleportation is determined by the smaller of the entanglement degrees of Alice’s measurement and of the quantum channel. Thus the matching of these entanglement degrees should be considered, and the entanglement degree of the measurement acquires the meaning of Alice’s ability to send quantum information.
First, we consider the case where Alice performs a Bell measurement, and give Bob’s proper general evolution to reestablish the initial state with an optimal probability. Considering the previously shared pair shown in Eq. $`\left(1\right)`$ and the unknown state (which is to be sent) of particle 1, $`\varphi _1=\alpha |0_1+\beta |1_1`$, the total state can be written as $`|\mathrm{\Psi }_{1,2,3}=|\varphi _1|\mathrm{\Phi }_{2,3}=\alpha a|000_{1,2,3}+\alpha b|011_{1,2,3}+\beta a|100_{1,2,3}+\beta b|111_{1,2,3}`$. If Alice performs a Bell measurement, Bob obtains the corresponding unnormalized states as follows:
$$\begin{array}{c}\langle \mathrm{\Phi }_{1,2}^+|\mathrm{\Psi }_{1,2,3}\rangle =\frac{\sqrt{2}}{2}\left(\alpha a|0_3+\beta b|1_3\right),\\ \langle \mathrm{\Phi }_{1,2}^{-}|\mathrm{\Psi }_{1,2,3}\rangle =\frac{\sqrt{2}}{2}\left(\alpha a|0_3-\beta b|1_3\right),\\ \langle \mathrm{\Psi }_{1,2}^+|\mathrm{\Psi }_{1,2,3}\rangle =\frac{\sqrt{2}}{2}\left(\beta a|0_3+\alpha b|1_3\right),\\ \langle \mathrm{\Psi }_{1,2}^{-}|\mathrm{\Psi }_{1,2,3}\rangle =\frac{\sqrt{2}}{2}\left(\beta a|0_3-\alpha b|1_3\right).\end{array}$$
(2)
where $`\left\{|\mathrm{\Phi }_{1,2}^\pm =\frac{\sqrt{2}}{2}\left(|00_{1,2}\pm |11_{1,2}\right),\text{ }|\mathrm{\Psi }_{1,2}^\pm =\frac{\sqrt{2}}{2}\left(|01_{1,2}\pm |10_{1,2}\right)\right\}`$ are Bell states of particle 1 and particle 2. Alice informs Bob of her measurement result, for example $`|\mathrm{\Phi }_{1,2}^+`$ (with the corresponding collapsed state of particle 3 being $`\langle \mathrm{\Phi }_{1,2}^+|\mathrm{\Psi }_{1,2,3}\rangle =\frac{\sqrt{2}}{2}\left(\alpha a|0_3+\beta b|1_3\right)`$, which is unnormalized), and Bob applies a corresponding general evolution. To carry out a general evolution, an auxiliary qubit with the original state $`|0_{aux}`$ is introduced. Under the basis $`\{|0_3|0_{aux},|1_3|0_{aux},|0_3|1_{aux},|1_3|1_{aux}\}`$, a collective unitary transformation
$$U_{sim}=\left(\begin{array}{cccc}\frac{b}{a}& 0& \sqrt{1-\frac{b^2}{a^2}}& 0\\ 0& 1& 0& 0\\ 0& 0& 0& 1\\ \sqrt{1-\frac{b^2}{a^2}}& 0& -\frac{b}{a}& 0\end{array}\right),$$
(3)
transforms the unnormalized product state $`\frac{\sqrt{2}}{2}\left(\alpha a|0_3|0_{aux}+\beta b|1_3|0_{aux}\right)`$ to the result:
$$|\mathrm{\Phi }_{3,aux}=\frac{\sqrt{2}}{2}\left[b\left(\alpha |0_3+\beta |1_3\right)|0_{aux}+a\sqrt{1-\frac{b^2}{a^2}}\alpha |1_3|1_{aux}\right],$$
(4)
which is also unnormalized. Then a measurement of the auxiliary particle follows. If the measurement result is $`|0_{aux}`$, the teleportation is successfully accomplished, while if the result is $`|1_{aux}`$, the teleportation fails, with the state of qubit 3 transformed to a blank state $`|1_3`$ and no information about the initial qubit 1 left (thus an optimal probability of teleportation is attained). The contribution of this unnormalized state to the probability of successful teleportation may be expressed by the probability amplitude of $`\alpha |0_3+\beta |1_3`$ in Eq. $`\left(4\right)`$ as $`\left|(\frac{\sqrt{2}}{2}b)\right|^2=\frac{1}{2}\left|b\right|^2`$.
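The steps above are straightforward to verify numerically. In the sketch below (Python/NumPy), the channel coefficients $`a,b`$ and the input amplitudes $`\alpha ,\beta `$ are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.3)     # channel Schmidt coefficients, |a| > |b| (assumed values)
alpha, beta = 0.6, 0.8                # amplitudes of the unknown state (assumed values)

s = np.sqrt(1 - b**2 / a**2)
# U_sim of Eq. (3) in the basis {|0,0>, |1,0>, |0,1>, |1,1>} = |particle 3>|aux>
U = np.array([[b / a, 0, s, 0],
              [0,     1, 0, 0],
              [0,     0, 0, 1],
              [s,     0, -b / a, 0]])
assert np.allclose(U @ U.T, np.eye(4))           # U_sim is unitary (real orthogonal here)

# unnormalized collapsed state for outcome Phi^+: (alpha*a|0> + beta*b|1>)/sqrt(2), aux in |0>
v = np.array([alpha * a, beta * b, 0, 0]) / np.sqrt(2)
w = U @ v
print(w[:2] * np.sqrt(2) / b)                    # -> [alpha, beta]: the state is restored on |0>_aux
print("success contribution:", w[0]**2 + w[1]**2, "= |b|^2/2 =", b**2 / 2)
```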
Other states in Eq. $`\left(2\right)`$ could be discussed in the same way, and their contributions to the probability of successful teleportation may be calculated directly by using a general method: if the unnormalized state in Eq. $`\left(2\right)`$ is written as $`\alpha x|0_3+\beta y|1_3`$ or $`\alpha x|1_3+\beta y|0_3`$, after Bob’s optimal operation, it gives a contribution to the whole successful probability as
$$p=\left(\mathrm{min}\{\left|x\right|,\left|y\right|\}\right)^2.$$
(5)
Adding up all the contributions, the optimal probability of successful teleportation is obtained as $`P=\frac{1}{2}\left|b\right|^2\times 4=2\left|b\right|^2`$.
Then consider more general cases: Alice operates a measurement with such eigenstates:
$$\begin{array}{c}|\mathrm{\Phi }_{1,2}^1=a^{^{}}|00_{1,2}+b^{^{}}|11_{1,2},\\ |\mathrm{\Phi }_{1,2}^2=b^{^{}}|00_{1,2}-a^{^{}}|11_{1,2},\\ |\mathrm{\Phi }_{1,2}^3=a^{^{}}|10_{1,2}+b^{^{}}|01_{1,2},\\ |\mathrm{\Phi }_{1,2}^4=b^{^{}}|10_{1,2}-a^{^{}}|01_{1,2}.\end{array}(\left|a^{^{}}\right|^2+\left|b^{^{}}\right|^2=1,\text{ }\left|a^{^{}}\right|\ge \left|b^{^{}}\right|).$$
(6)
By virtue of the Schmidt decomposition, this basis represents all possible Von-Neumann measurements on two particles as $`(a^{^{}},b^{^{}})`$ varies. The four states above are orthogonal and have the same entanglement entropy, so the measurement’s entanglement degree $`E`$ can be defined as that of any of the four states. The collapsed states of particle 3 corresponding to the four measurement results can be written as:
$$\begin{array}{c}\langle \mathrm{\Phi }_{1,2}^1|\mathrm{\Psi }_{1,2,3}\rangle =\alpha aa^{^{}}|0_3+\beta bb^{^{}}|1_3,\\ \langle \mathrm{\Phi }_{1,2}^2|\mathrm{\Psi }_{1,2,3}\rangle =\alpha ab^{^{}}|0_3-\beta ba^{^{}}|1_3,\\ \langle \mathrm{\Phi }_{1,2}^3|\mathrm{\Psi }_{1,2,3}\rangle =\beta aa^{^{}}|0_3+\alpha bb^{^{}}|1_3,\\ \langle \mathrm{\Phi }_{1,2}^4|\mathrm{\Psi }_{1,2,3}\rangle =\beta ab^{^{}}|0_3-\alpha ba^{^{}}|1_3,\end{array}$$
(7)
which is unnormalized. The general evolution to particle 3 is similar to what is shown in Eq. $`\left(3\right)`$. From the result of Eq. $`\left(6\right)`$, the probability of successful teleportation could be considered directly in the following two cases:
1. $`\left|a\right|\ge \left|a^{^{}}\right|\ge \left|b^{^{}}\right|\ge \left|b\right|`$
In this case, because $`\left|\left(ab^{^{}}\right)\right|^2=\left|a\right|^2\left(1-\left|a^{^{}}\right|^2\right)`$ and $`\left|\left(ba^{^{}}\right)\right|^2=\left|a^{^{}}\right|^2\left(1-\left|a\right|^2\right)`$, the inequality $`\left|ab^{^{}}\right|\ge \left|ba^{^{}}\right|`$ is established, and $`\left|aa^{^{}}\right|\ge \left|bb^{^{}}\right|`$ is obvious, so the whole probability of successful teleportation may be written as
$`P=\left|\left(bb^{^{}}\right)\right|^2+\left|\left(ba^{^{}}\right)\right|^2+\left|\left(bb^{^{}}\right)\right|^2+\left|\left(ba^{^{}}\right)\right|^2=2\left|b\right|^2,`$
which is just the same as the case Alice operates a Bell measurement.
2. $`\left|a^{^{}}\right|\ge \left|a\right|\ge \left|b\right|\ge \left|b^{^{}}\right|`$
In this case, $`\left|ba^{^{}}\right|\ge \left|ab^{^{}}\right|`$, and the probability of successful teleportation is
$$P=\left|\left(bb^{^{}}\right)\right|^2+\left|\left(ab^{^{}}\right)\right|^2+\left|\left(bb^{^{}}\right)\right|^2+\left|\left(ab^{^{}}\right)\right|^2=2\left|b^{^{}}\right|^2.$$
(8)
From the analysis above, the probability of successful teleportation is determined by the smaller of $`\left|b\right|`$ and $`\left|b^{^{}}\right|`$, and may thus be regarded as being set by the smaller of the entanglement degrees of Alice’s measurement and of the quantum channel.
Just as what is mentioned above, teleportation may be regarded as the quantum channel’s preparation and quantum information’s sending and extraction. The result above may be explained clearly by using this picture. The entanglement degree of Alice’s measurement could be considered as Alice’s sending ability and the entanglement degree of the quantum channel could be taken as the width of it. Then the amount of transmitted quantum information is determined by the lower one of these two bounds: the width of the quantum channel $`2\left|b\right|^2`$ and the sending ability of Alice $`2\left|b^{^{}}\right|^2`$. If they are just the same, an “entanglement matching” is satisfied. If Bob always reestablish the to-be-sent state in an optimal probability (which means he always extract all the quantum information he received), an exact teleportation will be performed with the probability equal to the amount of the quantum information transmitted, just as what is shown in Eqs. $`(8,9)`$.
Though Bell measurement is an essential task in quantum teleportation, it is very difficult to implement fully: it has been shown that Bell states cannot be distinguished completely by using linear devices $`[8,9]`$, and this difficulty can be seen in some teleportation experiments $`\left[10\right]`$. Von-Neumann measurements with less entangled eigenstates may be more practical. From the result above, if a partially entangled state $`|\mathrm{\Phi }_{2,3}=a|00_{2,3}+b|11_{2,3}`$ is adopted as the quantum channel, the same optimal probability of successful teleportation can be attained as long as Alice’s measurement satisfies the “entanglement matching”, so a Bell measurement or a POVM is not necessary. The matching here is essential for attaining the optimal probability, and it can also be regarded as the matching between the quantum channel’s width and Alice’s sending ability. Without such a matching, quantum information will be wasted either at Alice’s site or in the quantum channel.
The result of entanglement matching can be generalized to the teleportation of multi-particle system. Considering a $`k`$-particle system $`P`$ at Alice’s site with the state $`|\mathrm{\Psi }_P=`$ $`\alpha _0|00\mathrm{}00_{P_1,\mathrm{},P_k}+\alpha _1|00\mathrm{}01_{P_1,\mathrm{},P_k}+\mathrm{}\mathrm{}+\alpha _{2^k1}|11\mathrm{}11_{P_1,\mathrm{},P_k}`$. Without loss of generality, the quantum channel between Alice and Bob is $`k`$ independent entangled pairs with the state $`\underset{i=1}{\overset{k}{}}\left(a_i|00_{A_i,B_i}+b_i|11_{A_i,B_i}\right)`$ (any other pure quantum channel could be transformed to this by local operations). Alice draws $`k`$ collective measurements, each of which is Von-Neumann measurement with the following eigenstates:
$$\begin{array}{c}|\mathrm{\Phi }^{i,1}=a_i^{^{}}|00_{P_i,A_i}+b_i^{^{}}|11_{P_i,A_i},\\ |\mathrm{\Phi }^{i,2}=b_i^{^{}}|00_{P_i,A_i}-a_i^{^{}}|11_{P_i,A_i},\\ |\mathrm{\Phi }^{i,3}=a_i^{^{}}|10_{P_i,A_i}+b_i^{^{}}|01_{P_i,A_i},\\ |\mathrm{\Phi }^{i,4}=b_i^{^{}}|10_{P_i,A_i}-a_i^{^{}}|01_{P_i,A_i}.\end{array}(\left|a_i^{^{}}\right|^2+\left|b_i^{^{}}\right|^2=1,\text{ }\left|a_i^{^{}}\right|\ge \left|b_i^{^{}}\right|),$$
(9)
where $`i=1,2,\mathrm{},k`$. Then Bob reestablishes the original state as $`|\mathrm{\Psi }^B`$ with a certain probability by adopting a proper general evolution. Using similar methods as the case of mono-qubit teleportation, we may show that there also exists an entanglement matching in multi-qubit teleportation: If $`c_i`$ is defined as $`\mathrm{min}\{\left|b_i^{^{}}\right|,\left|b_i\right|\}`$, the optimal probability of successful teleportation could be expressed as $`2^k\underset{i=1}{\overset{k}{}}c_i^2`$.
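The multi-qubit matching rule can be written as a one-line function (a minimal sketch; the function and argument names are ours):

```python
def multi_qubit_success_prob(b_channel, b_meas):
    """Optimal probability 2^k * prod_i c_i^2 with c_i = min(|b_i|, |b_i'|), as quoted above."""
    p = 1.0
    for b, bp in zip(b_channel, b_meas):
        p *= 2 * min(abs(b), abs(bp))**2
    return p

# k = 1 reduces to 2*min(|b|,|b'|)^2 of the single-qubit case; a k = 2 example with assumed values:
print(multi_qubit_success_prob([0.5, 0.4], [0.6, 0.3]))   # 2*0.25 * 2*0.09 = 0.09
```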
This work was supported by the National Natural Science Foundation of China.
|
no-problem/9910/astro-ph9910214.html
|
ar5iv
|
text
|
# Quintessence arising from exponential potentials
## I Introduction
Measurements of the redshift-luminosity distance relation using high redshift type Ia supernovae combined with cosmic microwave background (CMB) and galaxy clusters data appear to suggest that the present Universe is flat and undergoing a period of $`\mathrm{\Lambda }`$ driven inflation, with the energy density split into two main contributions, $`\mathrm{\Omega }_{\mathrm{matter}}\simeq 1/3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 2/3`$ . Such a startling finding has naturally led theorists to propose explanations for such a phenomenon. One such possibility that has attracted a great deal of attention is the suggestion that a minimally coupled homogeneous scalar field $`Q`$ (the “quintessence” field), slowly rolling down its potential, could provide the dominant contribution to the energy density today thanks to the special form of the potential . Non-minimally coupled models have also been investigated . The advantage of considering a more general component that evolves in time so as to dominate the energy density today, as opposed to simply inserting the familiar cosmological constant is that the latter would require a term $`\rho _\mathrm{\Lambda }\simeq 10^{-47}\mathrm{GeV}^4`$ to be present at all epochs, a rather small value when compared to typical particle physics scales. On the other hand, quintessence models possess attractor solutions which allow for a wide range of initial conditions, all of which can correspond to the same energy density today simply by tuning one overall multiplicative parameter in the potential.
There is a long history to the study of scalar field cosmology especially related to time varying cosmological constants. Some of the most influential early work is to be found in Refs. . One particular case which at first sight appears promising is the one involving exponential potentials of the form $`V\mathrm{exp}(\lambda \kappa Q)`$, where $`\kappa ^2=8\pi G`$ . These have two possible late-time attractors in the presence of a barotropic fluid: a scaling regime where the scalar field mimics the dynamics of the background fluid present, with a constant ratio between both energy densities, or a solution dominated by the scalar field. The former regime cannot explain the observed values for the cosmological parameters discussed above; basically it does not allow for an accelerating expansion in the presence of a matter background fluid. However, the latter regime does not provide a feasible scenario either, as there is a tight constraint on the allowed magnitude of $`\mathrm{\Omega }_Q`$ at nucleosynthesis. It turns out that it must satisfy $`\mathrm{\Omega }_Q(1\mathrm{M}\mathrm{e}\mathrm{V})<0.13`$. On the other hand, we must allow time for formation of structure before the Universe starts accelerating. For this scenario to be possible we would have to fine tune the initial value of $`\rho _Q`$, but this is precisely the kind of thing we want to avoid.
A number of authors have proposed potentials which will lead to $`\mathrm{\Lambda }`$ dominance today. The initial suggestion was an inverse power law potential (“tracker type”) $`V\propto Q^{-\alpha }`$ , which can be found in models of supersymmetric QCD . Here the ratio of energy densities is no longer a constant but $`\rho _Q`$ scales slower than $`\rho _B`$ (the background energy density) and will eventually dominate. This epoch can be set conveniently to be today by tuning the value of only one parameter in the potential. However, although appealing, these models suffer in that their predicted equation of state $`w_Q=p_Q/\rho _Q`$ is marginally compatible with the favored values emerging from observations using SNIa and CMB measurements, considering a flat universe . For example, at the 2$`\sigma `$ confidence level in the $`\mathrm{\Omega }_M`$–$`w_Q`$ plane, the data prefer $`w_Q<-0.6`$ with a favored cosmological constant $`w_Q=-1`$ (see e.g. ), whereas the values permitted by these tracker potentials (without fine-tuning) have $`w_Q>-0.7`$ . For an interpretation of the data which allows for $`w_Q<-1`$ see Ref..
Since this initial proposal, a number of authors have made suggestions as to the form the quintessence potential could take . In particular, Brax and Martin constructed a simple positive scalar potential motivated from supergravity models, $`V\propto \mathrm{exp}(Q^2)/Q^\alpha `$, and showed that even with the condition $`\alpha \ge 11`$, the equation of state could be pushed to $`w_Q\simeq -0.82`$, for $`\mathrm{\Omega }_Q=0.7`$. A different approach was followed by the authors of . They investigated a class of scalar field potentials where the quintessence field scales through an exponential regime until it gets trapped in a minimum with a non-zero vacuum energy, leading to a period of de Sitter inflation with $`w_Q=-1`$.
In this Brief Report we investigate a simple class of potentials which lead to striking results. Despite previous claims, exponential potentials by themselves are a promising fundamental tool to build quintessence potentials. In particular, we show that potentials consisting of sums of exponential terms can easily deliver acceptable models of quintessence in close agreement with observations for natural values of parameters.
## II Model
We first recall some of the results presented in . Consider the dynamics of a scalar field $`Q`$, with an exponential potential $`V\mathrm{exp}(\lambda \kappa Q)`$. The field is evolving in a spatially flat Friedmann-Robertson-Walker (FRW) universe with a background fluid which has an equation of state $`p_B=w_B\rho _B`$. There exists just two possible late time attractor solutions with quite different properties, depending on the values of $`\lambda `$ and $`w_B`$:
(1) $`\lambda ^2>3(w_B+1)`$. The late time attractor is one where the scalar field mimics the evolution of the barotropic fluid with $`w_Q=w_B`$, and the relation $`\mathrm{\Omega }_Q=3(w_B+1)/\lambda ^2`$ holds.
(2) $`\lambda ^2<3(w_B+1)`$. The late time attractor is the scalar field dominated solution ($`\mathrm{\Omega }_Q=1`$) with $`w_Q=-1+\lambda ^2/3`$ (both cases are transcribed in the short sketch below).
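These two late-time regimes can be summarized in a small helper (a sketch that simply transcribes the two cases above; the function name is ours):

```python
def exponential_attractor(lam, w_B):
    """Late-time attractor for a single exponential potential of slope lam,
    in the presence of a barotropic fluid with equation of state w_B."""
    if lam**2 > 3 * (w_B + 1):
        # scaling regime: the field mimics the background fluid
        return {"w_Q": w_B, "Omega_Q": 3 * (w_B + 1) / lam**2}
    # scalar-field dominated regime
    return {"w_Q": -1 + lam**2 / 3, "Omega_Q": 1.0}

print(exponential_attractor(20.0, 1/3))   # steep slope in radiation: Omega_Q = 4/400 = 0.01
print(exponential_attractor(0.5, 0.0))    # shallow slope: w_Q = -11/12, Omega_Q = 1
```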
Given that single exponential terms can lead to one of the above scaling solutions, then it should follow that a combination of the above regimes should allow for a scenario where the universe can evolve through a radiation-matter regime (attractor 1) and at some recent epoch evolve into the scalar field dominated regime (attractor 2). We will show that this does in fact occur for a wide range of initial conditions. To provide a concrete example consider the following potential for a scalar field $`Q`$:
$$V(Q)=M^4(e^{\alpha \kappa Q}+e^{\beta \kappa Q}),$$
(1)
where we assume $`\alpha `$ to be positive (the case $`\alpha <0`$ can always be obtained taking $`Q\to -Q`$). We also require $`\alpha >5.5`$, a constraint coming from the nucleosynthesis bounds on $`\mathrm{\Omega }_Q`$ mentioned earlier .
First, we assume that $`\beta `$ is also positive. In order to have an idea of what the value of $`\beta `$ should be, note that if today we were in the regime dominated by the scalar field (i.e. attractor 2), then in order to satisfy observational constraints for the quintessence equation of state (i.e. $`w_Q<-0.8`$), we must have $`\beta <0.8`$. We are not obviously in the dominant regime today but in the transition between the two regimes so this is just a central value to be considered. In Fig. 1 we show that acceptable solutions to Einstein’s equations in the presence of radiation, matter and the quintessence field can be accommodated for a large range of parameters ($`\alpha ,\beta `$).
The value of $`M`$ in Eq. (1) is chosen so that today $`\rho _Q\simeq \rho _c\simeq 10^{-47}\mathrm{GeV}^4`$. This then implies $`M\simeq 10^{-31}M_{\mathrm{Pl}}\simeq 10^{-3}\mathrm{eV}`$. However, note that if we generalize the potential in Eq. (1) to
$$V(Q)=M_{\mathrm{Pl}}^4(e^{\alpha \kappa (Q-A)}+e^{\beta \kappa (Q-B)}),$$
(2)
then all the parameters become of the order of the Planck scale. Since the scaling regime of exponential potentials does not depend upon its mass scale \[i.e. $`M`$ in Eq. (1)\], $`A`$ is actually a free parameter that can, for simplicity, be set to $`M_{\mathrm{Pl}}`$ or even to zero. On the other hand, just like before for $`M`$, $`B`$ needs to be such that today we obtain the right value of $`\rho _Q`$. In other words, we require $`M^4M_{\mathrm{Pl}}^4e^{\beta B}\rho _Q`$. This turns out to be $`B=𝒪(100)M_{\mathrm{Pl}}`$, depending on the precise values of $`\alpha `$, $`\beta `$ and $`A`$.
There is another important advantage to the potentials of the form in Eq. (1) or Eq. (2); namely, we obtain acceptable solutions for a wider range of initial energy densities of the quintessence field than we would with say the inverse power law potentials. For example, in Fig. 2 we show that it is perfectly acceptable to start with the energy density of the quintessence field above that of radiation, and still enter into a subdominant scaling regime at later times; however, this is an impossible feature in the context of inverse power law type potentials .
Another manifestation of this wider class of solutions can be seen by considering the case where the field evolution began at the end of an initial period of inflation. In that case, as discussed in Ref., we could expect that the energy density of the system is equally divided among all the thousands of degrees of freedom in the cosmological fluid. This equipartition of energy would imply that just after inflation $`\mathrm{\Omega }_i\sim 10^{-3}`$. If this were the case, for inverse power law potentials, the power could not be smaller than $`5`$ if the field was to reach the attractor by matter domination. Otherwise, $`Q`$ would freeze at some value and simply act as a cosmological constant until the present (a perfectly acceptable scenario of course, but not as interesting). Such a bound on the power implies $`w_Q>-0.44`$ for $`\mathrm{\Omega }_Q=0.7`$. With an exponential term, this constraint is considerably weakened. Using the fact that the field is frozen at a value $`Q_f\simeq Q_i-\sqrt{6\mathrm{\Omega }_i}/\kappa `$, where $`Q_i`$ is the initial value of the field , we can see that the equivalent problem only arises when
$$\alpha \sqrt{6\mathrm{\Omega }_i}-2\mathrm{ln}\alpha \gtrsim \mathrm{ln}\left(\frac{\rho _{Q_i}}{2\rho _{eq}}\right),$$
(3)
where $`\rho _{Q_i}`$ is the initial energy density of the scalar field and $`\rho _{eq}`$ is the background energy density at radiation-matter equality. For instance, for our plots with $`a_i=10^{-14}`$, $`a_{eq}=10^{-4}`$, this results in a bound $`\alpha \lesssim 10^3`$.
A new feature arises when we consider potentials of the form given in Eq. (1) with the nucleosynthesis bound $`\alpha >5.5`$ but taking this time $`\beta <0`$. In this case the potential has a minimum at $`\kappa Q_{\mathrm{min}}=\mathrm{ln}(-\beta /\alpha )/(\alpha -\beta )`$ with a corresponding value $`V_{\mathrm{min}}=M^4\frac{\beta -\alpha }{\beta }(-\frac{\beta }{\alpha })^{\alpha /(\alpha -\beta )}`$.
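A quick numerical cross-check of this minimum, in units where $`\kappa =M=1`$ and with illustrative slopes $`\alpha =6`$, $`\beta =-0.5`$ (our choice, satisfying $`\alpha >5.5`$ and $`\beta <0`$):

```python
import numpy as np

alpha, beta = 6.0, -0.5                      # illustrative slopes, beta < 0 (assumed values)
V  = lambda Q: np.exp(alpha * Q) + np.exp(beta * Q)
dV = lambda Q: alpha * np.exp(alpha * Q) + beta * np.exp(beta * Q)

Q_min = np.log(-beta / alpha) / (alpha - beta)           # closed-form location of the minimum
V_min = (beta - alpha) / beta * (-beta / alpha)**(alpha / (alpha - beta))
print(dV(Q_min))                 # ~ 0: Q_min is indeed a stationary point
print(V(Q_min), V_min)           # the two expressions for the minimum agree
```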
Far from the minimum, the scalar field scales as described above (attractor 1). However, when the field reaches the minimum, the effective cosmological constant $`V_{\mathrm{min}}`$ will quickly take over the evolution as the oscillations are damped, driving the equation of state towards $`w_Q=-1`$. This scenario is illustrated in Fig. 3, where the evolution of the equation of state is shown and compared to the previous case with $`\beta >0`$. In many ways this is the key result of the paper, as in this figure it is clearly seen that the field scales the radiation ($`w=1/3`$) and matter ($`w=0`$) evolutions before settling in an accelerating ($`w<0`$) expansion. Once again, as a result of the scaling behavior of attractor 1, it is clear that there exists a wide range of initial conditions that provide realistic results. The feature resembles the recent suggestions of Albrecht and Skordis . The same mechanism can be used to stabilize the dilaton in string theories where the minimum of the potential is fine-tuned to be zero rather than the non-zero value it has in these models .
In , a quantity $`\mathrm{\Gamma }\equiv V^{\prime \prime }V/(V^{\prime })^2`$ is proposed as an indicator of how well a given model converges to a tracker solution. If it remains nearly constant, then the solutions can converge to a tracker solution. It is easy to see from Eq. (1) that apart from the transient regime where the solution evolves from attractor 1 to attractor 2, $`\mathrm{\Gamma }=1`$ to a high degree of accuracy.
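This behavior of $`\mathrm{\Gamma }`$ is easy to see numerically for the potential of Eq. (1) (again with $`\kappa =M=1`$ and illustrative slopes of our choosing):

```python
import numpy as np

alpha, beta = 6.0, 0.5                       # illustrative slopes (assumed values)
Q = np.linspace(-10, 10, 9)
V  = np.exp(alpha * Q) + np.exp(beta * Q)
V1 = alpha * np.exp(alpha * Q) + beta * np.exp(beta * Q)
V2 = alpha**2 * np.exp(alpha * Q) + beta**2 * np.exp(beta * Q)
print(V2 * V / V1**2)   # ~1 wherever one exponential dominates; deviates only near the cross-over
```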
It is important to note that for this mechanism to work, we are not limited to potentials containing only two exponential terms and one field. Indeed, all we require of the dynamics is to enter one period like regime 1, which can either be followed by one regime like 2, or by the field settling in a minimum with a non-zero vacuum energy. We can consider as an example the case of a potential depending on two fields of the form
$$V(Q_1,Q_2)=M^4(e^{\alpha _1\kappa Q_1+\alpha _2\kappa Q_2}+e^{\beta _1\kappa Q_1+\beta _2\kappa Q_2}),$$
(4)
where all the coefficients are positive. This leads to results similar to those of Eq. (1) for a single field $`Q`$, with effective early and late slopes given by $`\alpha _{\mathrm{eff}}^2=\alpha _1^2+\alpha _2^2`$ and $`\beta _{\mathrm{eff}}^2=\beta _1^2+\beta _2^2`$, respectively. Such a result is not surprising and is caused by the assisted behavior that can occur for multiple fields. Note that for this type of multiple-field example the effective slopes in the resulting effective potential are larger than the individual slopes, a useful feature since we require $`\alpha _{\mathrm{eff}}`$ to be large.
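A small numerical illustration of why the slopes combine in quadrature for a term of this type is given below: for a single term $`M^4e^{\alpha _1\kappa Q_1+\alpha _2\kappa Q_2}`$ the ratio of the gradient magnitude to $`\kappa V`$, which is what plays the role of the slope in the scaling solutions (my gloss, not a statement from the paper), equals $`\sqrt{\alpha _1^2+\alpha _2^2}`$ at every point.

```python
# Illustration of the quadrature rule for the slopes in Eq. (4): for a single term
# M^4*exp(a1*Q1 + a2*Q2) the quantity |grad V|/(kappa*V), which plays the role of the
# slope in the scaling solutions (my gloss), equals sqrt(a1^2 + a2^2) at every point.
import numpy as np

a1, a2 = 3.0, 5.0                                 # illustrative slopes, kappa = 1
rng = np.random.default_rng(0)
for Q1, Q2 in rng.normal(size=(4, 2)):
    V = np.exp(a1 * Q1 + a2 * Q2)
    grad = np.array([a1 * V, a2 * V])
    print(f"{np.linalg.norm(grad) / V:.6f}")      # always the same number
print(f"sqrt(a1^2 + a2^2) = {np.hypot(a1, a2):.6f}")
```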
## III Discussion
So far, we have presented a series of potentials that can lead to the type of quintessence behavior capable of explaining the current data arising from high-redshift Type Ia supernovae, CMB and cluster measurements. The beautiful property of exponential potentials is that they lead to scaling solutions which can either mimic the background fluid or dominate the background dynamics, depending on the slope of the potential. We have used this to develop a picture whereby, simply by combining potentials of different slopes, it is easy to obtain solutions which first enter a period of scaling through the radiation and matter domination eras and then smoothly evolve to dominate the energy density today. We have been able to demonstrate that the quintessence behavior occurs for a wide range of initial conditions of the field, whether $`\rho _Q`$ is initially higher or lower than $`\rho _{\mathrm{matter}}+\rho _{\mathrm{radiation}}`$. We have also shown that the favored observational values for the equation of state, $`w_Q(\mathrm{today})<-0.8`$, can easily be reached for natural values of the parameters in the potential. This is a considerable improvement with respect to most quintessence models, which usually give either $`w_Q\gtrsim -0.8`$ or $`w_Q=-1`$.
We have to ask, how sensible are such potentials? Can they be found in nature and, if so, can we make use of them here? The answer to the first question seems to be yes: they do arise in realistic particle physics models, but the current models do not have the correct slopes. Unfortunately, the tight constraint emerging from nucleosynthesis, namely $`\alpha >5.5`$, is difficult to satisfy in the models considered to date, which generally have $`\alpha `$ of order unity. It remains a challenge to see if such potentials with the required slopes can arise out of particle physics. One possibility is that the desirable slopes will be obtained from the assisted behavior when several fields are present, as mentioned above.
It is encouraging that the quintessence behavior required to match current observations occurs for such simple potentials.
###### Acknowledgements.
We would like to thank Orfeu Bertolami, Robert Caldwell, Thomas Dent, Jackie Grant, Andrew Liddle, Jim Lidsey and David Wands for useful discussions. E.J.C. and T.B. are supported by PPARC. N.J.N. is supported by FCT (Portugal) under contract PRAXIS XXI BD/15736/98. E.J.C. is grateful to the staff of the Isaac Newton Institute for their kind hospitality during the period when part of this work was being completed.
DAMTP-1999-134
MC/TH-99-13
Charm production at HERA
A Donnachie
Department of Physics, Manchester University
P V Landshoff
DAMTP, Cambridge University
email addresses: ad@a3.ph.man.ac.uk, pvl@damtp.cam.ac.uk
Abstract The ZEUS data on the charm structure function $`F_2^c`$ at small $`x`$ fit well to a single power of $`x`$, corresponding to the exchange of a hard pomeron that is flavour-blind. When combined with the contribution from the exchange of a soft pomeron, the hard pomeron gives a good description of elastic $`J/\psi `$ photoproduction.
We have argued$`^{\text{[1]}}`$ that Regge theory should be applicable to the structure function $`F_2(x,Q^2)`$ for small $`x`$ and all values of $`Q^2`$, however large, and have shown$`^{\text{[2]}}`$ that indeed, in its very simplest form, it agrees extremely well with the available data. In order to fit the data, we introduced a second pomeron, the hard pomeron, with an intercept a little greater than 1.4; this is to be contrasted with the soft pomeron that is well-known from soft hadronic physics, whose intercept is close to 1.08.
Our main message in this paper is that the concept of the hard pomeron, with an intercept that is independent of $`Q^2`$ and is a little greater than 1.4, is supported by the recent ZEUS data$`^{\text{[3]}}`$ for the charm structure function $`F_2^c`$. These data require only a hard pomeron: the coupling of the soft pomeron to charm is apparently very small. Hence the data for $`F_2^c`$ are described by a single power of $`x`$. This is shown in figure 1, where the straight lines are
$$F_2^c(x,Q^2)=f_c(Q^2)x^{-ϵ_0}$$
$`(1)`$
with $`ϵ_0=0.44`$.
Figure 1: ZEUS data for $`Q^4F_2^c`$, fitted to a single fixed power of $`x`$
In our original fit$`^{\text{[1]}}`$ to the data for the complete structure function $`F_2(x,Q^2)`$, we assumed a particular functional form for the coefficient function $`f_0(Q^2)`$ that multiplied $`x^{-ϵ_0}`$. It had 4 parameters, and at large $`Q^2`$ it increased logarithmically with $`Q^2`$. We have since found that a form with only 2 parameters works at least as well:
$$f_0(Q^2)=A_0\left(\frac{Q^2}{Q^2+Q_0^2}\right)^{1+ϵ_0}\left(1+\frac{Q^2}{Q_0^2}\right)^{{\scriptscriptstyle \frac{1}{2}}ϵ_0}$$
$`(2)`$
With this form, $`f_0(Q^2)x^{-ϵ_0}`$ behaves as a $`Q^2`$-independent constant times $`\nu ^{ϵ_0}`$ for large $`Q^2`$. There is no general theory that explains this behaviour, though it has been predicted$`^{\text{[4]}}`$ from the BFKL equation. As we have explained previously$`^{\text{[1]}}`$, while the large-$`Q^2`$ behaviour of $`f_0(Q^2)`$ should surely be calculable from perturbative QCD, leading-order or next-to-leading-order approximations are inadequate and at present we do not know how to perform the necessary all-order resummations.
The fit to $`F_2^c`$ shown in figure 1 takes
$$f_c(Q^2)=A_c\left(\frac{Q^2}{Q^2+Q_c^2}\right)^{1+ϵ_0}\left(1+\frac{Q^2}{Q_c^2}\right)^{{\scriptscriptstyle \frac{1}{2}}ϵ_0}$$
$`(3)`$
In making our fit, we wrote the hard-pomeron contribution to the complete structure function $`F_2(x,Q^2)`$ as
$$\left(f_0(Q^2)+f_c(Q^2)\right)x^{-ϵ_0}$$
with $`f_0(Q^2)`$ and $`f_c(Q^2)`$ parametrised as in (2) and (3). Initially we imposed the constraint that at large $`Q^2`$ the hard pomeron coupling becomes flavour-blind, so that
$$A_cQ_c^{-ϵ_0}=\frac{4}{7}A_0Q_0^{-ϵ_0}$$
$`(4)`$
The factor $`\frac{4}{7}`$ is calculated from squares of quark charges: $`\frac{4}{9}/(\frac{4}{9}+\frac{1}{9}+\frac{1}{9}+\frac{1}{9})`$. However, we found that, although it is not excluded that $`Q_c^2`$ is somewhat greater than $`Q_0^2`$, the best fit has $`Q_c^2`$ close to $`Q_0^2`$. That is, the data indicate that the coupling of the hard pomeron may be flavour-blind even for small $`Q^2`$. This came as a surprise to us. Presumably it would imply that the same is true for the proton’s bottom distribution.
With the constraint that $`Q_c^2=Q_0^2`$, our fit to the ZEUS charm structure function data, together with nearly 600 data points for $`F_2`$, corresponding to $`x<0.07`$ and $`0\le Q^2\le 2000\text{ GeV}^2`$, yielded a $`\chi ^2`$ of less than 1 per data point and
$$ϵ_0=0.44\qquad A_0=0.025\qquad Q_0^2=8.1\text{ GeV}^2$$
$`(5)`$
More accurate data for $`F_2`$ are expected soon from HERA, and so the parameter values will change, as may the tentative conclusion that $`Q_c^2=Q_0^2`$.
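For readers who want to reproduce numbers from this parametrisation, the sketch below evaluates the hard-pomeron pieces (1)-(3) with the central values (5) and the choice $`Q_c^2=Q_0^2`$, for which the constraint (4) reduces to $`A_c=\frac{4}{7}A_0`$. It is not the fitting code used for the results quoted above, and the soft-pomeron and reggeon terms of the full $`F_2`$ fit are not included.

```python
# Evaluation of the hard-pomeron pieces (1)-(3) with the central values (5) and
# Q_c^2 = Q_0^2, for which the flavour-blind constraint (4) gives A_c = (4/7) A_0.
# This is not the authors' fit code; the soft-pomeron and reggeon terms of the
# full F_2 parametrisation are not included.
eps0 = 0.44
A0, Q0sq = 0.025, 8.1            # Q0sq in GeV^2
Ac, Qcsq = (4.0 / 7.0) * A0, Q0sq

def coeff(Q2, A, Qi2):
    """f_i(Q^2) of Eqs. (2)/(3)."""
    return A * (Q2 / (Q2 + Qi2)) ** (1.0 + eps0) * (1.0 + Q2 / Qi2) ** (0.5 * eps0)

def F2c(x, Q2):                  # charm structure function, Eq. (1)
    return coeff(Q2, Ac, Qcsq) * x ** (-eps0)

def F2_hard(x, Q2):              # hard-pomeron part of the full F_2
    return (coeff(Q2, A0, Q0sq) + coeff(Q2, Ac, Qcsq)) * x ** (-eps0)

for Q2 in (10.0, 60.0, 500.0):
    print(f"Q^2 = {Q2:5.0f} GeV^2   F2c(x=1e-3) = {F2c(1e-3, Q2):.3f}   "
          f"hard part of F2 = {F2_hard(1e-3, Q2):.3f}")
```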
We have already shown$`^{\text{[2]}}`$ that the two-pomeron picture gives a good fit to the total cross-section for elastic $`J/\psi `$ photoproduction, $`\gamma pJ/\psi p`$. There are now preliminary data $`^{\text{[5]}}`$ on the differential cross section. As before$`^{\text{[2]}}`$, we take the amplitude to be
$$T(s,t)=i\sum _{i=0,1}\beta _i(t)s^{e_i(t)}e^{-\frac{1}{2}i\pi e_i(t)}$$
$`(6)`$
We normalise it so that $`d\sigma /dt=|T|^2`$. The differential-cross-section data now allow us to make a more informed choice of the pomeron coupling functions $`\beta _i(t)`$. Whereas in elastic $`pp`$ scattering the data are in excellent agreement with the hypothesis$`^{\text{[6]}}`$ that the soft-pomeron coupling function is proportional to the square $`[F_1(t)]^2`$ of the Dirac electric form factor, the data for $`\gamma pJ/\psi p`$ rather need just $`F_1(t)`$. That is, the proton coupling to the pomeron (either soft or hard) is proportional to $`F_1(t)`$, but the pomeron-$`\gamma `$-$`J/\psi `$ coupling apparently is flat in $`t`$. So we use
$$\beta _i(t)=\beta _{0i}F_1(t)\qquad i=0,1$$
$$F_1(t)=\frac{4m^2-2.79t}{4m^2-t}\frac{1}{(1-t/0.71)^2}$$
$`(7)`$
For the functions $`e_i(t)`$, which are related to the two pomeron trajectories by $`\alpha _i(t)=1+e_i(t)`$, we take
$$e_0(t)=0.44+\alpha _0^{\prime }t\qquad e_1(t)=0.08+0.25t$$
$`(8)`$
Figure 2: Fit to the total cross-section for elastic $`J/\psi `$ photoproduction; the data are fixed-target and H1$`^{\text{[5]}}`$. The three contributions add up to the solid curve.
Figure 3: Fits to the differential cross-section for elastic $`J/\psi `$ photoproduction for three $`t`$-values and hard pomeron slope $`\alpha _0^{\prime }=0`$ (solid lines), $`\alpha _0^{\prime }=0.1`$ (dotted lines) and $`\alpha _0^{\prime }=0.2`$ (dashed lines)
The soft-pomeron trajectory is familiar$`^{\text{[6]}}`$, but the slope of the hard-pomeron trajectory is not known. The fit shown in figure 2 for the total cross-section is for
$$\alpha _0^{\prime }=0.1\qquad \beta _{01}^2=24.6\qquad \beta _{00}=0.038\beta _{01}$$
$`(10)`$
We may obtain almost equally good fits to the total cross section if we make different choices of $`\alpha _0^{\prime }`$, provided we adjust $`\beta _{00}`$ and $`\beta _{01}`$:
$$\alpha _0^{\prime }=0.0\qquad \beta _{01}^2=26.4\qquad \beta _{00}=0.028\beta _{01}$$
$$\alpha _0^{\prime }=0.2\qquad \beta _{01}^2=23.7\qquad \beta _{00}=0.046\beta _{01}$$
$`(11)`$
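The amplitude (6), with the couplings (7), the trajectories (8) and the parameter set (10), is straightforward to evaluate; a sketch follows. The absolute normalisation of $`|T|^2`$ is left in whatever units the $`\beta `$'s carry above (they are not restated here), and taking $`s=W^2`$ in GeV^2 with an implicit scale of 1 GeV^2 is my assumption, so only the shape in $`t`$ and the relative growth with $`W`$ should be read off.

```python
# Evaluation of the two-pomeron amplitude (6) with couplings (7), trajectories (8) and
# parameter set (10).  The units of |T|^2 are whatever units the beta's carry in the
# paper (not restated here), and s = W^2 in GeV^2 with an implicit 1 GeV^2 scale is my
# assumption, so only shapes in t and the relative growth with W should be read off.
import numpy as np

m = 0.938                        # proton mass, GeV
eps0, eps1 = 0.44, 0.08          # hard and soft pomeron intercepts minus 1
a0p, a1p = 0.1, 0.25             # trajectory slopes, GeV^-2
beta01 = np.sqrt(24.6)           # soft-pomeron coupling, parameter set (10)
beta00 = 0.038 * beta01          # hard-pomeron coupling

def F1(t):
    return (4 * m**2 - 2.79 * t) / (4 * m**2 - t) / (1 - t / 0.71) ** 2

def T(W, t):
    s, amp = W**2, 0j
    for b, e in ((beta00, eps0 + a0p * t), (beta01, eps1 + a1p * t)):
        amp += 1j * b * F1(t) * s**e * np.exp(-0.5j * np.pi * e)
    return amp

for W in (20.0, 90.0, 200.0):    # photon-proton c.m. energies in GeV
    dsdt = [abs(T(W, t)) ** 2 for t in (-0.1, -0.5, -1.0)]
    print(f"W = {W:5.0f} GeV   dsigma/dt at t = -0.1, -0.5, -1.0 :",
          [f"{v:.3g}" for v in dsdt])
# the soft term dominates at fixed-target energies, while the hard term grows faster
# with W and becomes increasingly important at HERA energies
```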
Note, though, that $`\alpha _0^{\prime }=0`$ is strictly excluded by $`t`$-channel unitarity$`^{\text{[7]}}`$. We show in figure 3 the differential cross-section for these three choices of $`\alpha _0^{\prime }`$. It is evident that a choice somewhere near to 0.1 is a good one, though this cannot be a firm conclusion because the data are not good enough to confirm that (7) is necessarily the correct choice for $`\beta _i(t)`$. However, it is interesting that $`\alpha _0^{\prime }=0.1`$ happens to be the value that is obtained by supposing that the hard pomeron trajectory is a glueball trajectory, so that there is a $`2^{++}`$ glueball of mass $`M`$ given by $`\alpha _0(M^2)=2`$. This corresponds to $`M=2370`$ MeV, close to the mass of a $`2^{++}`$ glueball candidate reported by the WA102 collaboration$`^{\text{[8]}}`$. (Similarly, there is a $`2^{++}`$ glueball candidate at 1930 MeV, the correct mass for it to lie on the soft pomeron trajectory$`^{\text{[9]}}`$.) The values of 0.0 and 0.2 for $`\alpha _0^{\prime }`$ are at the extremes which the differential cross sections will accept, and limits of 0.05 and 0.15 are more reasonable, with of course the above caveat on our choice of $`\beta _i(t)`$.
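The arithmetic behind these mass values is the short calculation below, which solves $`\alpha _i(M^2)=1+e_i(M^2)=2`$ for each trajectory of Eq. (8), taking the hard-pomeron slope to be 0.1 GeV^-2.

```python
# Arithmetic behind the glueball-mass remark: solve alpha_i(M^2) = 1 + e_i(M^2) = 2
# for each trajectory in Eq. (8), with the hard-pomeron slope taken to be 0.1 GeV^-2.
import numpy as np

M_hard = np.sqrt((2 - 1.44) / 0.10)     # 1 + 0.44 + 0.10*M^2 = 2
M_soft = np.sqrt((2 - 1.08) / 0.25)     # 1 + 0.08 + 0.25*M^2 = 2
print(f"hard trajectory: {1000 * M_hard:.0f} MeV,  soft trajectory: {1000 * M_soft:.0f} MeV")
# about 2366 MeV and 1918 MeV, close to the 2370 MeV and 1930 MeV candidates cited above
```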
It is not excluded that there is also a hard-pomeron component present in elastic $`\rho `$ photoproduction, though there the ratio $`\beta _{00}/\beta _{01}`$ is very much smaller. It is possible that the value of $`\beta _{00}`$ is the same in each case, up to a factor that reflects the different charges on the active quarks. In either case, $`\rho `$ or $`J/\psi `$, if the data are parametrised by an effective power rise with energy $`W^\delta `$, the increase$`^{\text{[10]}}`$ of $`\delta `$ with $`Q^2`$ may be explained by the ratio $`\beta _{00}/\beta _{01}`$ increasing with $`Q^2`$.
We end with a comment that the surprisingly complete decoupling of the soft pomeron in the charm structure function presumably results from the limited overlap between the small $`c\overline{c}`$ pair and the extended soft pomeron. Justification for this view is the observation $`^{\text{[2]}}`$ that the soft pomeron contribution to the proton structure function $`F_2`$ decreases with increasing $`Q^2`$ for $`Q^2\gtrsim 5\text{ GeV}^2`$. This can be quantified in the dipole-scattering approach of the Heidelberg model $`^{\text{[11]}}`$, in which an explicit cut-off for the coupling of the soft pomeron to small dipoles simulates this phenomenological result. It might then be thought that exactly the same phenomenon would be observed in $`J/\psi `$ photoproduction. However, the fixed-target data collectively imply that there is some contribution at lower energies from the soft pomeron. This is confirmed by specific fits $`^{\text{[2,11]}}`$ in the two-pomeron approach. A resolution of this apparent inconsistency can be obtained by postulating that there is an OZI-violating contribution to $`J/\psi `$ photoproduction. Quite apart from the fact that the hadronic decays of the $`J/\psi `$ proceed by this mechanism, there is clear evidence for an OZI-violating contribution to inclusive $`J/\psi `$ production in hadronic interactions. At low energy the $`J/\psi `$ production cross section from an antiproton beam$`^{\text{[12]}}`$ is several times greater than that from a proton beam. This shows that, in $`J/\psi `$ production in hadronic interactions, there is a contribution from the valence quarks of the nucleon. The strength of the coupling of the $`J/\psi `$ to a light quark-antiquark pair may be extracted from the production data$`^{\text{[13]}}`$$`^{\text{[14]}}`$$`^{\text{[15]}}`$, and is compatible with the hadronic decay rate of the $`J/\psi `$. The data on $`\mathrm{\Upsilon }`$ production in hadronic interactions, in an equivalent region of $`x_F`$, imply that an OZI-violating mechanism is operable there also $`^{\text{[16]}}`$. It is not possible to quantify a priori the OZI-violating contribution to $`J/\psi `$ photoproduction as it must arise from complicated $`u\overline{u}`$, $`d\overline{d}`$, $`s\overline{s}`$ systems.
In conclusion, the fixed power of $`x`$ found in the ZEUS data for the charm structure function is most naturally explained by applying Regge theory at all $`Q^2`$. This requires the introduction of a hard pomeron, which we have found also gives an excellent description of the total proton structure function $`F_2`$ and of elastic $`J/\psi `$ photoproduction.
This research is supported in part by the EU Programme "Training and Mobility of Researchers", Networks "Hadronic Physics with High Energy Electromagnetic Probes" (contract FMRX-CT96-0008) and "Quantum Chromodynamics and the Deep Structure of Elementary Particles" (contract FMRX-CT98-0194), and by PPARC.
References
[1] J R Cudell, A Donnachie and P V Landshoff, Physics Letters B448 (1999) 281
[2] A Donnachie and P V Landshoff, Physics Letters B437 (1998) 408
[3] ZEUS collaboration: A Breitweg et al, hep-ex/9908012
[4] B Ermolaev, private communication
[5] H1 Collaboration, submitted to the International Europhysics Conference on High Energy Physics HEP99, Tampere, Finland, July 1999
[6] A Donnachie and P V Landshoff, Nuclear Physics B231 (1983) 189
[7] P D B Collins, Regge theory and high energy physics, Cambridge University Press (1977)
[8] WA102 collaboration: A Barberis et al, Physics Letters B432 (1998) 436
[9] WA91 collaboration: S Abatzis et al, Physics Letters B324 (1994) 509
[10] ZEUS Collaboration: J Breitweg et al, Eur Phys J C6 (1999) 603; H1 Collaboration: C Adloff et al, hep-ex/9902019
[11] M Rueter, Eur Phys J C7 (1999) 233
[12] M J Corden et al, Physics Letters 68B (1977) 96
[13] M B Green, M Jacob and P V Landshoff, Nuovo Cimento 29A (1975) 123
[14] J F Gunion, Phys Rev D12 (1975) 1345
[15] A Donnachie and P V Landshoff, Nucl Phys B112 (1976) 233
[16] A Donnachie and P V Landshoff, Zeits Phys C4 (1980) 231