# Giant vortex state in perforated aluminum microsquares
## I Introduction
Recent experiments on mesoscopic superconducting aluminum structures with sizes smaller than the temperature dependent coherence length $`\xi (T)`$ and penetration depth $`\lambda (T)`$ have shown the influence of the sample topology on the superconducting critical parameters, like the normal/superconducting phase boundary $`T_c(H)`$ (i.e. the critical temperature $`T_c`$ in the presence of an applied magnetic field $`H`$). Many different topologies have been studied experimentally and theoretically.
First of all, there are structures made of quasi-one-dimensional strips, which can be further classified into single loops (where $`T_c(H)`$ shows the well-known periodic Little-Parks oscillations), multiloop structures (double loop, yin-yang, lasso, 2$`\times `$2-cell, etc.) and large infinite networks. The theories used to calculate the $`T_c(H)`$ for these structures are based on the linearized Ginzburg-Landau theory, using either the de Gennes-Alexander formalism or the London limit. The fluxoid quantization constraint (i.e. the requirement that the order parameter is uniquely defined after integration of the phase gradient along a closed superconducting contour) in these 'multiply connected' structures gives rise to the oscillatory shape of the phase boundaries $`T_c(H)`$, usually superimposed on a parabolic background.
Secondly, surface superconductivity effects in (circular or square shaped) single dots and antidots (i.e. one antidot in a plain film) have been studied intensively. In these structures, $`T_c(H)`$ consists of oscillations which are pseudoperiodic. The appearance of the giant vortex state, where superconductivity is nucleated only near the sample boundary, is due to the quantization of the phase winding number $`L`$ of the superconducting order parameter $`\mathrm{\Psi }=\left|\mathrm{\Psi }\right|e^{iL\phi }`$ (this is equivalent to fluxoid quantization). For cylindrically symmetric structures, one refers to $`L`$ as the angular (or orbital) momentum quantum number. For the states $`L>1`$, the dot (or antidot) area is threaded by multiples $`L`$ of the superconducting flux quantum $`\mathrm{\Phi }_0=h/2e`$. This "surface superconductivity" gives rise to a quasi-linear critical field $`H_{c3}`$ versus temperature $`T`$, which we will further compare with the experimental $`T_c(H)`$.
The experimental studies of the antidot structure are usually carried out on samples with regular lattices of antidots. It was shown by Bezryadin et al. that, at sufficiently high magnetic fields, the antidots behave independently. In these systems, the antidots create efficient pinning centers for the flux line lattice.
We will present the measured phase boundaries $`T_c(H)`$ of three different topologies, which are shown in Fig. 1. The three structures studied are a filled microsquare, and two squares with 2 and 4 square antidots respectively. Similar structures were studied in Refs. , where the 4-antidot structure was proposed as a basic cell for a memory based on flux logic. In those papers, different stable vortex configurations were detected at low magnetic fields.
The goal of the present report is to study the influence of the antidots inside a microsquare on the crossover from the "network" behavior at low fields to a giant vortex state at higher fields, and whether the two configurations (vortices pinned by the antidots and the giant vortex state) can eventually coexist. We will mainly focus on the high magnetic field regime. In a structure with a single antidot (i.e. a loop, as in Ref. ), the presence of a giant vortex state can also be anticipated at sufficiently high magnetic fields. In such a system, however, the phase winding number $`L`$ will be identical for every contour encircling the antidot. The development of the giant vortex state is accompanied by a transition from a parabolic background in $`T_c(H)`$ to a quasi-linear $`T_c(H)`$ behavior. The crossover field is strongly dependent on the size and the aspect ratio of the loop and will be the subject of a future paper. For the loop studied in Ref. , the magnetic field was clearly not sufficiently high to reach this transition regime.
The advantage of a structure with more than one antidot is the property that each antidot can in principle contain a different number $`L`$ of flux quanta $`\mathrm{\Phi }_0`$. Simultaneously, a quantum number $`L`$ is attributed to the outside square. The observed cusps in $`T_c(H)`$ can then be related to the switching of either the quantum state of an antidot, or of the whole square.
For the 4-antidot structure, a 'collective' or network behavior can be expected at low magnetic fields, while a 'single object' regime can be reached at higher fields, where at $`T_c(H)`$ a surface superconducting sheath develops near the sample boundary. The comparison of the $`T_c(H)`$ data obtained on the perforated Al microstructures with that of a reference microsquare without antidots confirms the presence of a giant vortex state in the three structures in the high magnetic field regime.
## II Experiment
Three different microstructures, shown in Fig. 1, have been studied: a square dot with side $`a`$=2.04 $`\mu m`$, taken as a reference sample (a); a square of side $`a`$=2.04 $`\mu m`$ with four 0.46$`\times `$0.46 $`\mu m^2`$ square antidots (b); and a square of side $`a`$=2.14 $`\mu m`$ with two 0.52$`\times `$0.52 $`\mu m^2`$ antidots placed along a diagonal (c). For the 4-antidot sample (b) the width of the superconducting outer stripes is 0.33 $`\mu m`$ and the inner stripes are 0.46 $`\mu m`$ wide. For the 2-antidot sample (c) the outer stripes are 0.35 $`\mu m`$ wide, and the non-perforated areas are 1.27 $`\mu m`$ wide. The dimensions are summarized in Table I. Electrical contacts have been attached to the samples using an ultrasonic wire bonding technique on the $`150\times 150`$ $`\mu m^2`$ large contact pads.
The three samples have been prepared in a single run by thermal evaporation of $`99.999\%`$ pure Al on a SiO<sub>2</sub> substrate. The patterns were defined using e-beam lithography on a bilayer of PMMA resist prior to the deposition of a $`24`$ nm thick aluminum film. After the evaporation, the liftoff was performed using dichloromethane. The structures were characterized by X-ray, SEM and AFM (Fig. 1).
Four-point resistance measurements were performed in a <sup>4</sup>He cryostat, using a PAR 124A lock-in amplifier. A measuring current of 100 nA r.m.s. with a frequency of 27 Hz was used, which depresses $`T_c`$ by only a few millikelvins over the whole magnetic field range.
The $`T_c(H)`$ measurements are done in a continuous run, keeping the sample resistance typically at $`50\%`$ of the normal state value and sweeping the magnetic field slowly while recording the temperature. The magnetic field was applied perpendicular to the structures, and a temperature stability better than 0.5 mK was achieved.
## III Results
In Fig. 2 we present the experimental phase boundary $`T_c(H)`$ of the three structures. The measured $`T_c(H)`$ values were independent of the direction of the magnetic field scans and were reproduced in several measurement rounds. In this paper we will always plot $`T_c(H)`$ in the usual way, i.e. with the $`T_c`$-axis pointing from the highest to the lowest temperature. Peaks in the $`T_c(H)`$ plots are then in reality local minima of the critical temperature $`T_c`$.
For the reference full square, we observe pseudoperiodic oscillations in $`T_c(H)`$ superimposed on an almost linear background, where the period of the oscillations slightly decreases with increasing field, in agreement with previous studies. These observations are characteristic for the presence of the giant vortex state. For the perforated microstructures, two different magnetic field regimes can be distinguished. At high magnetic fields, the oscillations in $`T_c(H)`$ are pseudoperiodic, just as for the $`T_c(H)`$ of the full square. In the low field part of the phase diagram (i.e., below $`2.5`$ mT), distinct features appear: for the 2-antidot sample we observe the same number of peaks as for the full square, but with a considerable shift of the positions of the first peaks. Compared to the full square $`T_c(H)`$, a new series of peaks, positioned symmetrically with respect to $`\mu _0H\approx 1.4`$ mT, is found for the 4-antidot sample, as can be expected for a 2$`\times 2`$ cell network.
In what follows, we will investigate in detail the shape of $`T_c(H)`$ in the two flux regimes for the three structures. We will discuss our results in terms of the existing models, within the Ginzburg-Landau (GL) theory, developed for mesoscopic structures with a cylindrical symmetry (disks, loops) which have been successfully applied earlier to interpret the results obtained in mesoscopic square structures .
## IV Discussion
### A The $`T_c(H)`$ phase boundary of the full microsquare
The $`T_c(H)`$ curve measured for the full square structure is very similar to the result obtained from a calculation for a mesoscopic disk in the presence of a magnetic field (indicated as '$`H_{c3}`$' in Fig. 2a). In that model the linearized first GL equation is solved with the boundary condition for an ideal superconductor/insulator interface:
$$\left(-i\hbar \vec{\nabla }-2e\vec{A}\right)\mathrm{\Psi }\big|_{\perp ,b}=0,$$
(1)
which is the condition that no supercurrent can flow perpendicular to the interface. In the linear approach, the vector potential $`\vec{A}`$ is related to the applied magnetic field $`\vec{H}`$ through $`\mu _0\vec{H}=\mathrm{rot}\,\vec{A}`$. In order to obtain the solutions which fulfill the boundary condition (Eq. (1)), one has to solve the equation:
$$\left(L-\mathrm{\Phi }/\mathrm{\Phi }_0\right)M(-n,L+1,\mathrm{\Phi }/\mathrm{\Phi }_0)-\frac{2n\,\mathrm{\Phi }/\mathrm{\Phi }_0}{L+1}M(-n+1,L+2,\mathrm{\Phi }/\mathrm{\Phi }_0)=0$$
(2)
The function $`M`$ is the so-called Kummer function of the first kind, and $`n`$ is a real number, depending on the phase winding number $`L`$, which has to be obtained numerically by solving Eq. (2). The flux is defined as $`\mathrm{\Phi }=\mu _0H\pi R^2`$, $`R`$ being the radius of the disk. The $`T_c(\mathrm{\Phi })`$ is obtained via the relation:
$$1-\frac{T_c(\mathrm{\Phi })}{T_c(0)}=4\left(n+\frac{1}{2}\right)\frac{\xi ^2(0)}{R^2}\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}$$
(3)
The upper critical field $`H_{c2}`$ for a bulk superconductor is obtained when substituting $`n=0`$ in Eq. (3), which gives a linear relation between $`H_{c2}`$ and $`T`$. However, for a finite size superconductor, a third critical field $`H_{c3}`$ can be found, because the ground state is obtained from solutions of Eq. (2) with $`n<0`$. Superconductivity is concentrated near the sample edge (for $`L>0`$), while the "normal" core contains one or several flux quanta $`L\mathrm{\Phi }_0`$. This quasi-linear critical field $`H_{c3}(T)`$ is the analog of the surface critical field for a semi-infinite superconducting slab in contact with vacuum (or insulator), where superconductivity persists in a surface sheath up to magnetic fields $`H_{c3}(T)\simeq 1.69H_{c2}(T)`$ above the upper critical field.
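As an illustration of how this eigenvalue problem is handled in practice, the following minimal sketch finds the lowest root $`n`$ of Eq. (2) and the corresponding reduced $`T_c`$ shift of Eq. (3); the function names and the root bracket are ours, and scipy's `hyp1f1` stands in for the Kummer function $`M`$:

```python
import numpy as np
from scipy.special import hyp1f1          # Kummer function M of the first kind
from scipy.optimize import brentq

def boundary_eq(n, L, phi):
    """Left-hand side of Eq. (2) at fixed L and phi = Phi/Phi_0."""
    return ((L - phi) * hyp1f1(-n, L + 1, phi)
            - 2 * n * phi / (L + 1) * hyp1f1(-n + 1, L + 2, phi))

def solve_n(L, phi, lo=-0.5, hi=5.0, grid=2001):
    """Lowest root n of Eq. (2); n < 0 signals the surface (Hc3) solution."""
    ns = np.linspace(lo, hi, grid)
    fs = np.array([boundary_eq(n, L, phi) for n in ns])
    i = np.where(fs[:-1] * fs[1:] < 0)[0][0]      # first sign change
    return brentq(boundary_eq, ns[i], ns[i + 1], args=(L, phi))

def tc_shift(L, phi, xi0_sq_over_R_sq):
    """Reduced shift 1 - Tc(Phi)/Tc(0) from Eq. (3)."""
    return 4 * (solve_n(L, phi) + 0.5) * xi0_sq_over_R_sq * phi

# the measurable boundary is the envelope over angular momenta L
phi = 5.0
L_gs = min(range(15), key=lambda L: tc_shift(L, phi, 1.0))
```

Scanning $`\mathrm{\Phi }/\mathrm{\Phi }_0`$ and taking this lower envelope over $`L`$ yields the cusped $`H_{c3}`$ line of Fig. 2a.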
The series of peaks in the $`T_c(H)`$ curve correspond to transitions between states with different angular momenta $`L\to L+1`$ of the superconducting order parameter as successive flux quanta, $`\mathrm{\Phi }=L\mathrm{\Phi }_0`$, enter the superconductor. A comparison with the experimental result for a square structure was made in Ref. . In a very recent paper by Jadallah et al. the $`T_c(H)`$ phase boundary is studied theoretically and is compared to the experimental $`T_c(H)`$ curve for the full square, described in the present paper.
Following Ref., between $`\mathrm{\Phi }`$=0 and the first peak located at $`\mathrm{\Phi }=1.92\mathrm{\Phi }_0`$, the superconducting order parameter $`\mathrm{\Psi }`$ has angular momentum $`L=0`$, and the reduced critical temperature is quadratic in $`\mathrm{\Phi }`$:
$$1-\frac{T_c(\mathrm{\Phi })}{T_c(0)}=\frac{\xi ^2(0)}{2R^2}\left(\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}\right)^2$$
(4)
The quasi-linear background at high flux, $`\mathrm{\Phi }/\mathrm{\Phi }_0\gg 1`$, follows the asymptotic expression:
$$1-\frac{T_c(\mathrm{\Phi })}{T_c(0)}=\frac{2}{\eta }\frac{\xi ^2(0)}{R^2}\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}$$
(5)
The parameter $`\eta `$ represents the ratio of the ground state energy ($`H_{c3}`$) to the lowest bulk Landau level ($`H_{c2}`$) and therefore coincides with the ratio $`H_{c3}/H_{c2}`$ at a fixed temperature. For $`\mathrm{\Phi }/\mathrm{\Phi }_0\to \infty `$, in other words for $`R\to \infty `$, the value $`\eta \to 1.69`$. Note that substituting $`\eta =1`$ in Eq. (5) gives the equation for $`H_{c2}(T)`$ (or $`T_{c2}(H)`$), indicated by the straight line labeled '$`H_{c2}`$' in Fig. 2a.
When matching the position of the experimental and theoretical peaks, we obtain a value for the field corresponding to one flux quantum, $`\mu _0H_0=0.53`$ mT. This scaling leads to a very good agreement in the position of all the peaks (see the inset of Fig. 3) and strongly supports the validity of applying this model to our experiments. From $`H_0`$ we obtain an effective area of 3.9 $`\mu m^2`$, close to the actual size of the structure, 4.2 $`\mu m^2`$. The introduction of this 'effective area' is obviously not needed if the $`T_c(H)`$ is compared with a calculation performed for a square. Then, from Eq. (4), we find the coherence length, $`\xi (0)`$=92 nm (dashed line in Fig. 2a), and using the values for pure Al and the Ginzburg-Landau expressions for dirty superconductors, we can estimate the mean free path, $`\ell `$=7 nm, and the penetration depth, $`\lambda (0)`$=140 nm. The results for the three structures are summarized in Table I. The determined value $`\xi (0)`$=92 nm might be a bit too low, since at low magnetic fields the electrical leads attached to the square can give rise to nonlocal effects. Consistently, fitting the low field part of the experimental $`T_c(H)`$ to Eq. (6) of Ref. gives an increased $`\xi (0)`$=95 nm.
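As a quick consistency check on the numbers quoted above, the matching field directly fixes the effective area:

$$S_{eff}=\frac{\mathrm{\Phi }_0}{\mu _0H_0}=\frac{2.07\times 10^{-15}\,\mathrm{Wb}}{0.53\times 10^{-3}\,\mathrm{T}}\approx 3.9\,\mu m^2,$$

indeed close to the actual sample area of 4.2 $`\mu m^2`$.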
In contrast to the experimental result presented in Ref., which was obtained for a substantially larger, but circular dot, the field period can be matched to the theoretical predictions in the whole field interval. The distance $`\mathrm{\Delta }\mathrm{\Phi }`$ between the peaks in $`T_c(\mathrm{\Phi })`$ follows the asymptotic limit $`\mathrm{\Delta }\mathrm{\Phi }=\mathrm{\Phi }_0\left(1+\left(2\eta \mathrm{\Phi }/\mathrm{\Phi }_0\right)^{-1/2}\right)`$. In our experiment $`\mathrm{\Delta }\mathrm{\Phi }`$ reaches a nearly constant value for $`\mathrm{\Phi }/\mathrm{\Phi }_0\gtrsim 6`$, with $`\mu _0\mathrm{\Delta }H\approx 0.60-0.65`$ mT. When a sufficiently high magnetic field is applied to the sample, a superconducting edge state is formed, where superconductivity only nucleates within a surface layer of thickness $`w_H=\sqrt{\mathrm{\Phi }_0/2\eta \pi \mu _0H}`$. The remaining area acts like a normal core of radius $`R_{\text{eff}}\approx R-w_H`$, and carries $`L`$ flux quanta in its interior. Due to the expanding normal core, the sample can be seen topologically as a loop of variable radius. For this reason, the $`T_c(H)`$ of the dot shows Little-Parks-like oscillations, which are, however, nonperiodic. The magnetic period $`\mathrm{\Delta }H`$ decreases, since the 'effective' radius grows with increasing field. In contrast to the case of the loop, which has a parabolic background on $`T_c(H)`$, the background is quasi-linear, because of the additional energy cost (i.e. extra reduction of $`T_c`$) for depressing superconductivity in the sample core. As the applied magnetic field grows, this 'giant vortex' core expands until it almost fills the entire sample area. According to the expression for $`w_H`$, at $`\mu _0H`$=5 mT, for example, $`w_H\approx 0.2`$ $`\mu m`$ and the effective area of the normal core for the full square structure is $`\approx `$ 3.3 $`\mu m^2`$. This value is in agreement with the observed magnetic period $`\mu _0\mathrm{\Delta }H`$.
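The sheath width quoted above follows directly from the expression for $`w_H`$; a minimal numerical check (constant values and the function name are ours):

```python
import numpy as np

PHI0 = 2.068e-15   # flux quantum h/2e in Wb
ETA = 1.69         # asymptotic Hc3/Hc2 enhancement factor

def sheath_width(mu0_H):
    """Surface sheath thickness w_H = sqrt(Phi0 / (2 eta pi mu0 H)), SI units."""
    return np.sqrt(PHI0 / (2 * ETA * np.pi * mu0_H))

print(sheath_width(5e-3))   # ~2.0e-7 m, i.e. w_H ~ 0.2 um at mu0 H = 5 mT
```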
The amplitude of the experimental oscillations is higher than expected from the theory (as was also observed in Ref.) (see Fig. 2a). We carried out a few $`T_c(H)`$ measurements where we fixed the electronic feedback circuit at a different resistance value ($`10-90\%`$ of the normal state resistance $`R_n`$). When a higher fixed resistance value was chosen ($`90\%`$ of $`R_n`$), the amplitude of the $`T_c(H)`$ oscillations was decreased. We should mention here that the measured resistance versus temperature curves, as the magnetic field increases, reveal a 'bump' above the normal state resistance, together with a significant broadening of the temperature interval in which the transition takes place. In Ref. the bump was shown to appear in the 'superconducting state'. Since this disturbance is still small at low fields, we believe that the determination of $`\xi (0)`$ from the low field formula (Eq. (4)) is rather accurate and therefore we use $`\xi (0)`$=92 nm further on in this paper. The resistance peaks are caused by the electrical leads attached to the square, which have a higher transition temperature. For this reason, normal/superconducting boundaries are created at $`T\approx T_c`$, which, as in the experiment of Park et al., can give rise to a resistive transition showing a peak.
### B The $`T_c(H)`$ phase boundaries of the perforated microsquares
Now, we will analyze the phase boundary observed for the 4-antidot structure (Fig. 2b). It shows similarities with the full square: there is a quasi-linear background with pseudoperiodic oscillations at high fields and the $`T_c(H)`$ is parabolic for low magnetic fields (dashed line in Fig. 2b). At the same time new features (extra peaks) are clearly seen below $`\approx `$2.5 mT.
Let us first discuss the $`T_c(H)`$ curve in this low field regime. We will compare it with the $`T_c(H)`$ calculated for a $`2\times 2`$-cell network consisting of one-dimensional strips. Strictly speaking, the theory for networks is valid only when the width of the strips forming the structure is much smaller than $`\xi (T)`$. For the dimensions of the sample studied here, variations of $`\mathrm{\Psi }`$ along the strip width can be expected if $`T`$ is slightly below $`T_c(0)`$. The de Gennes-Alexander (dGA) model, based on the linearized Ginzburg-Landau equations, has been used successfully to explain the phase boundaries obtained in mesoscopic single- and multiloop structures with narrow superconducting strips. The depression of $`T_c(H)`$ can be expressed as the sum of a topology dependent oscillatory component and a parabolic term (dashed line in Fig. 2b) which is due to the finite width $`w`$ of the strips:
$$1-\frac{T_c(H)}{T_c(0)}=\frac{\pi ^2}{3}\left(\frac{w\xi (0)\mu _0H}{\mathrm{\Phi }_0}\right)^2$$
(6)
In Fig. 4 we compare the low magnetic field part of the experimental $`T_c(H)`$, where a parabolic background (Eq. (6)) with an averaged (over inner and outer strips) width of 0.4 $`\mu m`$ has been subtracted, with the $`T_c(H)`$ obtained from the dGA model. In Ref. (Eqs. 20-22) and Ref. (Eq. 3.12) the functions forming the $`T_c(H)`$ for a $`2\times 2`$-cell network can be found. The field corresponding to one flux quantum per elementary cell is 2.8 mT, leading to an effective total area for the $`2\times 2`$-cell network of 2.96 $`\mu m^2`$ (0.74 $`\mu m^2`$ per cell, with a side length of each cell $`a`$=0.86 $`\mu m`$). The theoretical $`T_c(H)`$ curves reproduce the observed flux (or fluxoid) states, and have been calculated with $`\xi (0)`$=92 nm (dashed line in Fig. 4), obtained for the full square, and with $`\xi (0)`$=140 nm (solid line in Fig. 4), estimated from the parabolic background (Eq. (6)).
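The background subtraction used for Fig. 4 amounts to evaluating Eq. (6) with the averaged strip width; a minimal sketch with the parameter values quoted in the text (the function name is ours):

```python
import numpy as np

PHI0 = 2.068e-15   # flux quantum h/2e in Wb

def strip_background(mu0_H, w=0.4e-6, xi0=140e-9):
    """Parabolic Tc depression of Eq. (6), 1 - Tc(H)/Tc(0), for strips of width w."""
    return (np.pi ** 2 / 3) * (w * xi0 * mu0_H / PHI0) ** 2

print(strip_background(2.8e-3))   # ~2% depression at one flux quantum per cell
```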
It is clear that, for increasing magnetic field, the $`T_c(H)`$ of the 4-antidot structure can no longer be calculated from the dGA model, since $`\xi (T)`$ becomes comparable to the width of the strips, giving rise to spatial variation of $`\mathrm{\Psi }`$ perpendicular to the strips.
Let us point out the differences between the $`T_c(H)`$ of the present 4-antidot structure and the previously measured '$`2\times 2`$-antidot cluster' made of Pb/Cu. In the present Al structure, the network features are not periodically repeated as the magnetic field is increased. Instead, above 2.8 mT, the positions of the successive peaks coincide with the peaks observed in the $`T_c(H)`$ of the reference full square. Moreover, the background clearly starts deviating from parabolic to quasi-linear. Contrary to this, in the Pb/Cu antidot cluster (see Fig. 1 in Ref. ) the peaks related to the network behavior are visible over two periods, i.e. up to $`\mu _0H\approx `$ 5 mT. For higher fields, no giant vortex state can be deduced for the Pb/Cu sample, since the background reduction of $`T_c`$ stays parabolic and $`T_c(H)`$ shows pronounced peaks instead of cusps.
The main parameter which determines the $`T_c(H)`$ is the coherence length $`\xi (T)`$. Since the coherence length $`\xi (0)`$ of Al is approximately three times larger than for Pb/Cu, the relative $`T_c`$ reduction $`\delta T_c=1-T_c(H)/T_c(0)`$ is almost a factor 10 higher in Al than in an identical Pb/Cu sample (see Eqs. (3)-(6)). Since, for a particular sample geometry, $`\delta T_c/\xi ^2(0)=1/\xi ^2(T)`$ should only depend on the magnetic field (at $`T=T_c(H)`$), the penetration depth $`\lambda (T)`$ might play an important role. Using $`\lambda ^2(T)=\lambda ^2(0)/\delta T_c`$, with $`\lambda (0)`$=140 nm for Al, and $`\lambda (0)`$=76 nm for Pb/Cu, we obtain $`\lambda (T=T_c^{Al})\approx 0.6\lambda (T=T_c^{Pb/Cu})`$. In other words, the assumption $`\mu _0\vec{H}=\mathrm{rot}\,\vec{A}`$ (corresponding to $`\lambda \gg w`$) is fulfilled up to higher magnetic fields in the case of Pb/Cu. This is consistent with the fact that the peaks due to switching of the state of single antidots are seen up to higher fields in Pb/Cu, but it of course does not explain the very different behavior of the two materials. Other possibilities might be related to the proximity effect in Pb/Cu, as well as to a different saturation number $`n_s\approx R/(2\xi (T))`$ in the two materials, although the simple formula for $`n_s`$ was obtained only for a single antidot surrounded by a large superconducting area and might not be valid here.
For the (slightly larger) 2-antidot Al structure (Fig. 2c) the interpretation of the low field regime of $`T_c(H)`$ is more difficult. We will return briefly to this point later in the paper. At high fields, however, the positions of the peaks in $`T_c(H)`$ correspond to the same pseudoperiodic oscillations as for the full square and 4-antidot structure.
In Fig. 3 we have replotted the phase diagram in units of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$. It is important to note that we defined the flux as $`\mathrm{\Phi }=\mu _0HS_{eff}`$, with $`S_{eff}`$ the effective area of the whole microsquare. It is close to the exact outer sample area $`S`$, and was introduced in order to fit the peak positions to the calculated $`T_c(\mathrm{\Phi })`$ for a circular dot. To avoid confusion, we want to mention the different definition of flux in Refs. (antidot cluster), where flux is referred to the area available for one single antidot. In the case of a loop , it is natural to define the flux as the area enclosed by a contour through the middle of the strips multiplied by the magnetic field, in order to ensure a perfectly periodic $`T_c(\mathrm{\Phi })`$, with a period $`\mathrm{\Phi }_0`$.
Since the 2-antidot structure is a bit larger than the two other structures, a different $`H_0`$ is used to scale the magnetic field. In the inset the positions of the peaks in the experimental $`T_c(\mathrm{\Phi })`$ are compared with the theoretical prediction for a mesoscopic superconducting disk. The $`n`$-th peak corresponds to the transition between the states $`L=n-1`$ and $`L=n`$ (for the 4-antidot sample the peak numbers have been reassigned due to the extra peaks in the network regime). At high magnetic fields, there is quite good agreement between the peak positions of the three structures and a good correspondence with the theoretical values found for the disk, which is drawn in the inset of Fig. 3 as a solid line.
How can we understand this striking coincidence of the peak positions at high fields for the three structures? For this, we have to look at how the superconducting order parameter nucleates along a curved superconductor/insulator boundary. Figure 5a shows the calculated $`T_c(\mathrm{\Phi })`$ curves for a single circular dot and for an antidot (see also Ref.) in an infinite film, both of radius $`R`$. The latter has been calculated in a similar way as the $`T_c(\mathrm{\Phi })`$ of the dot. For a single antidot, the boundary condition (Eq. (1)) translates into:
$$\left(L-\mathrm{\Phi }/\mathrm{\Phi }_0\right)U(-n,L+1,\mathrm{\Phi }/\mathrm{\Phi }_0)+2n\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}U(-n+1,L+2,\mathrm{\Phi }/\mathrm{\Phi }_0)=0,$$
(7)
where the function $`U`$ is the Kummer function of the second kind, diverging at the origin, i.e. at the center of the antidot. The numerical values of $`n`$ then have to be inserted into Eq. (3) to obtain the $`T_c(\mathrm{\Phi })`$.
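The analogous numerical treatment for the antidot only requires replacing the Kummer function; a sketch mirroring the disk calculation above, with scipy's `hyperu` standing in for $`U`$:

```python
from scipy.special import hyperu   # Kummer function U of the second kind

def antidot_boundary_eq(n, L, phi):
    """Left-hand side of Eq. (7): boundary condition at the antidot edge."""
    return ((L - phi) * hyperu(-n, L + 1, phi)
            + 2 * n * phi * hyperu(-n + 1, L + 2, phi))
```

The roots $`n`$ found this way enter Eq. (3) exactly as for the disk.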
In Figure 5b the respective enhancement factors $`\eta `$ (corresponding to $`H_{c3}/H_{c2}`$) are shown. For both the dot and the antidot the value $`\eta `$=1.69 is approached as the curvature radius $`R`$ goes to infinity (or, for a fixed $`R`$, as $`H\to \infty `$). Since the dot has a larger $`\eta `$ than the antidot, corresponding to a higher $`H_{c3}(T)`$, the superconducting order parameter is expected to grow initially at the outer sample boundary, as the temperature drops below $`T_c`$. At slightly lower temperatures surface superconductivity should nucleate around the antidots as well. By then, however, the order parameter has already reached a finite value over the whole width of the strips. In the complete temperature (or flux) interval of our measurements $`\eta <1.5`$ for the antidots and $`\eta >1.8`$ for the dot (when scaling the radii to the actual sample dimensions). Probably because of this substantially different $`H_{c3}`$ for a dot and an antidot, the resistively measured phase transitions only show peaks related to the switching of the angular momentum $`L`$ associated with a closed contour along the outer sample boundary. At the $`T_c(H)`$ boundary, in the high magnetic field regime, there is no such closed superconducting path around each single antidot, and therefore the fluxoid quantization condition does not need to be fulfilled for a closed contour encircling each single antidot.
Although a more detailed analysis has to be carried out, since the sample boundaries in our experiments have sharp corners, the interpretation given above is expected to remain valid. A detailed analysis of a square loop geometry performed by Fomin et al. has shown that the superconducting order parameter preferentially nucleates near the sharp corners of the structure. In another paper, the same authors discuss the enhancement of the surface critical field $`H_{c3}`$ above the bulk upper critical field $`H_{c2}`$ in a semiplane which is bent over a certain angle $`\alpha `$ (superconducting wedge). The magnetic field is parallel to the wedge edge. An enhanced $`\eta `$ value is found for angles $`\alpha <\pi `$, which can be as high as $`\eta =3.79`$ for $`\alpha =0.44\pi `$. For angles $`\alpha >\pi `$, the surface critical field is not enhanced above $`\eta =1.69`$. Note that the value $`\eta =3.67`$, obtained for $`\alpha =\pi /2`$, differs from the calculation in Ref. , where for a square domain in the limit $`\mathrm{\Phi }/\mathrm{\Phi }_0\to \infty `$ the factor $`\eta \approx 1.8`$ only. The discrepancy between these two results still has to be clarified.
For the 2-antidot structure, further quantitative calculations are needed to describe the low field behavior. Since there are no extra peaks present in $`T_c(\mathrm{\Phi })`$, compared to the full square, we believe that, here as well, a surface superconducting sheath develops along the outer sample boundary. At the lowest fields, the sheath width $`w_H`$ is still larger than the width $`w`$ of the strips, and therefore the positions of the first peaks (mainly the second) in $`T_c(\mathrm{\Phi })`$ differ from those of the full square.
In Fig. 6 the superconducting order parameter profiles are shown for a disk, calculated at different points $`\mathrm{\Phi }/\mathrm{\Phi }_0=1,3,\dots ,17`$ on the $`T_c(\mathrm{\Phi })`$ curve. For $`\mathrm{\Phi }/\mathrm{\Phi }_0=1`$ the ground state corresponds to $`L=0`$, and the order parameter is only weakly modulated since there are no flux lines threading the sample. As we move to higher $`\mathrm{\Phi }/\mathrm{\Phi }_0`$, superconductivity becomes more and more concentrated near the sample boundary. The presence of the antidots in the perforated samples produces different profiles for the superconducting order parameter, since $`\mathrm{\Psi }`$ has to fulfill the boundary condition (Eq. (1)) also at the antidot boundaries. A two-dimensional GL calculation would be required to obtain the proper order parameter distribution here.
The inset of Fig. 6 shows the width $`w_H`$ of the edge state as a function of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$. For our samples $`w_H`$ becomes equal to the width of the strips at $`\mathrm{\Phi }/\mathrm{\Phi }_0\approx 3-4`$. For fluxes above this value the presence of the antidots will not influence the position of the peaks in $`T_c(\mathrm{\Phi })`$. Indeed, at high fields, when $`w_H`$ is smaller than the width of the outer strips in our structures, the order parameter is restricted to the outer border of the sample, and it is therefore impossible to have supercurrents around a single antidot.
The background depression of $`T_c`$ is different for the three structures studied (Fig. 3). The larger the perforated area (in other words, the smaller the area exposed to the perpendicular magnetic field), the less $`T_c(\mathrm{\Phi })`$ is pushed to lower temperatures. Another clear example of a similar behavior is given in Ref. , where the $`T_c(H)`$ of the (square) dot is shown to be lower than the $`T_c(H)`$ of the loop, when exposed to a perpendicular magnetic field. This general rule applies, for instance, also to simple strips, for which $`T_c(H)`$ is suppressed more when the width $`w`$ increases, as described by Eq. (6). In the dot of Ref. , the giant vortex state was shown to develop. For the loop studied in that paper, however, the magnetic fields used were too low to induce the crossover to a giant vortex state, in contrast with the observations of the present paper in the high magnetic field regime.
The appearance of the giant vortex state in the high field regime is the most plausible explanation at the moment. We cannot exclude, however, that another scenario is also possible. Namely, a nearly flat non-zero distribution of $`\left|\mathrm{\Psi }\right|`$ in the sample interior could coexist with an enhanced $`\left|\mathrm{\Psi }\right|`$ at the external sample boundary, although we believe that such a situation would give rise to peaks in $`T_c(\mathrm{\Phi })`$ each time an antidot changes its quantum state. In any case, the final description of the specific shape of the superconducting order parameter at $`T_c(\mathrm{\Phi })`$ requires a numerical two-dimensional calculation of $`\mathrm{\Psi }`$ for the perforated topologies, where the boundary conditions are fulfilled both at the outer and at the antidot superconductor/insulator interfaces.
In summary, we have presented the experimental superconducting/normal phase boundaries $`T_c(H)`$ of a full mesoscopic square and two perforated mesoscopic aluminum squares. The flux interval was divided into two regimes by comparing the results with the behavior of the full square microstructure: for low magnetic fields the 4-antidot structure behaves like a network consisting of quasi-one-dimensional strips, giving rise to extra peaks in $`T_c(H)`$ in comparison with the full square. In the 2-antidot structure the peak positions are only shifted compared to the full square. As soon as each antidot contains one flux quantum, the giant vortex state develops, resulting in pseudoperiodic oscillations in $`T_c(H)`$ and a quasi-linear background on $`T_c(H)`$ at high magnetic fields. In this regime, the peak positions coincide for all three structures studied when the phase boundaries are plotted in flux quanta units (where the flux is referred to the total sample area). For high magnetic fields, the presence of the antidots apparently does not change the phase winding number $`L`$ for a closed contour around the outer perimeter of the whole square. Since the enhancement factor $`H_{c3}/H_{c2}`$ is highest at the outer sample boundary, superconductivity nucleates initially near the outside sample edges, resulting in a giant vortex state.
Note added in proof: The discrepancy between the result obtained by the authors of Ref. and of Ref. has been clarified in an erratum .
## Acknowledgments
The authors are thankful to the FWO-Vlaanderen, the Flemish Concerted Action (GOA) and the Belgian Inter-University Attraction Poles (IUAP) for the financial support. T. Puig wishes to thank the Training and Mobility of Researchers Program of the European Union. J. G. Rodrigo is a Research Fellow of the K.U.Leuven Onderzoeksraad. Discussions with Y. Bruynseraede, V.M. Fomin, J. Devreese, J. Rubinstein, C. Strunk and E. Rosseel are gratefully acknowledged.
e-mail: Vital.Bruyndoncx@fys.kuleuven.ac.be
Present address: Institut de Ciencia de Materials de Barcelona - CSIC, Campus de la UAB, 08193 Bellaterra, Spain.
# Breakdown of a Magnetization Plateau due to Anisotropy in Heisenberg Mixed-Spin Chains
## I Introduction
Ground-state magnetization curves of quantum spin chains have been attracting much current interest due to their quantized plateaux as functions of a magnetic field. Several years ago Hida revealed that a spin-$`\frac{1}{2}`$ ferromagnetic-ferromagnetic-antiferromagnetic trimerized chain exhibits a plateau in its magnetization curve at a third of the full magnetization. Although it was already familiar that, in the presence of a field, integer-spin Heisenberg antiferromagnetic chains remain massive from zero field up to a critical field, the magnetization plateau at a fractional value of the full magnetization was still met with surprise. Since then various low-dimensional quantum spin systems in a field have been investigated, including polymerized spin chains, spin chains with anisotropy or four-spin exchange coupling, and decorated spin ladders. Experimental observations of quantized magnetization plateaux have also been reported. In such circumstances, generalizing the Lieb-Schultz-Mattis theorem, Oshikawa, Yamanaka, and Affleck (OYA) found a criterion for the fractional quantization. They pointed out that quantized plateaux in magnetization curves may appear under the condition
$$S_{\mathrm{unit}}-m=\text{integer},$$
(1)
where $`S_{\mathrm{unit}}`$ is the sum of spins over all sites in the unit period and $`m`$ is the magnetization $`M`$ divided by the number of unit cells.
Mixed-spin chains are the systems that stimulate us most in this context. There exists a large amount of chemical knowledge on quantum ferrimagnets. In an attempt to realize a quasi-one-dimensional ferrimagnetic system, Gleizes and Verdaguer synthesized a few bimetallic compounds such as AMn(S<sub>2</sub>C<sub>2</sub>O<sub>2</sub>)<sub>2</sub>(H<sub>2</sub>O)<sub>3</sub>·4.5H<sub>2</sub>O (A = Cu, Ni, Pd, Pt). Then numerous chemical explorations followed and various examples of a ferrimagnetic one-dimensional compound were systematically obtained. The vigorous experimental research motivated theoretical investigations into Heisenberg ferrimagnets. Drillon et al. carried out pioneering numerical diagonalizations of spin-$`(S,\frac{1}{2})`$ Heisenberg Hamiltonians for $`S=1`$ to $`\frac{5}{2}`$ and revealed typical thermodynamic properties of ferrimagnetic mixed-spin chains. In recent years, quantum ferrimagnets have met with further theoretical understanding owing to various tools such as field and spin-wave theories, the matrix-product formalism, and quantum Monte Carlo and density-matrix renormalization-group techniques. In particular, their mixed nature, showing both ferromagnetic and antiferromagnetic aspects, has lately attracted considerable attention.
However, little is known about quantum ferrimagnetic behavior in a magnetic field, especially about magnetization curves. Although anisotropy is an interesting and important factor from an experimental point of view, there exist few arguments on anisotropic models in a field. Now, considering the OYA argument and the accumulated chemical knowledge on ferrimagnetic compounds, the magnetization process of realistic mixed-spin-chain models arouses our interest all the more and indeed deserves urgent communication. In an attempt to serve as a guide for further experimental study, we here consider an alignment of alternating spins $`S`$ and $`s`$ in a field, as described by the Hamiltonian
$$\mathcal{H}=\sum _{j=1}^{N}\left[(𝑺_j\cdot 𝒔_j)_\alpha +\delta (𝒔_j\cdot 𝑺_{j+1})_\alpha -H(S_j^z+s_j^z)\right],$$
(2)
where $`(𝑺\cdot 𝒔)_\alpha =S^xs^x+S^ys^y+\alpha S^zs^z`$. We note that even the bond alternation $`\delta `$ is now experimentally adjustable. According to the OYA criterion (1), as $`H`$ increases from zero to the saturation field
$$H_{\mathrm{sat}}=\frac{1}{2}(1+\delta )\left[\alpha (S+s)+\sqrt{\alpha ^2(S-s)^2+4Ss}\right],$$
(3)
the model (2) may exhibit quantized plateaux at $`m=\frac{1}{2}`$ $`(1)`$, $`\frac{3}{2}`$ $`(2)`$, …, $`S+s-1`$. Though a multi-plateau problem is a fascinating subject, we restrict our argument to the simplest case of $`(S,s)=(1,\frac{1}{2})`$ in the following. This is, on the one hand, because we at first aim at understanding the typical and essential behavior of quantum ferrimagnets in a field, and, on the other hand, because the low-energy structure of the model (2) remains qualitatively the same as long as $`S\ne s`$. Then, a plateau is expected at $`m=\frac{1}{2}`$. At the Heisenberg point, the ground state of the Hamiltonian (2) without field is a multiplet of spin $`N/2`$. The ferromagnetic excitations, reducing the ground-state magnetization, exhibit a gapless dispersion relation, whereas the antiferromagnetic ones, enhancing the ground-state magnetization, are gapped from the ground state. Therefore, at the isotropic point, $`m`$ jumps up to $`\frac{1}{2}`$ just as a field is applied and forms a plateau for $`H_{\mathrm{c1}}\le H\le H_{\mathrm{c2}}`$, where $`H_{\mathrm{c1}}`$ and $`H_{\mathrm{c2}}`$ are the lower and upper critical fields, equal to $`0`$ and the antiferromagnetic gap, respectively.
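As a quick check of the saturation field (3), the isotropic uniform chain with $`(S,s)=(1,\frac{1}{2})`$ gives

$$H_{\mathrm{sat}}=\frac{1}{2}(1+1)\left[\frac{3}{2}+\sqrt{\left(\frac{1}{2}\right)^2+4\cdot 1\cdot \frac{1}{2}}\right]=\frac{3}{2}+\frac{3}{2}=3$$

in units of the exchange coupling.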
In the presence of exchange anisotropy, the above argument should be modified, as the $`(N+1)`$-fold degenerate ground-state multiplet splits, as illustrated in Fig. 1. In the Ising region, the ground state is a doublet of $`M=\pm N/2`$ and therefore $`H_{\mathrm{c1}}`$ remains $`0`$. As $`\alpha `$ increases, $`H_{\mathrm{c2}}`$ comes to be given as $`(1+\delta )\alpha `$ and the magnetization curve ends up with a trivial step. Thus we take little interest in this region. In the $`XY`$ region, on the other hand, the ground state is a singlet of $`M=0`$. Now $`H_{\mathrm{c1}}`$ moves away from $`0`$ and the plateau shrinks as $`\alpha `$ decreases (see Fig. 2 below). Here arises a stimulative problem: how stable is the plateau against the anisotropy, and what takes over from the plateau phase? In this article, we demonstrate that the plateau survives the $`XY`$ anisotropy in the entire antiferromagnetic region and vanishes in the ferromagnetic region. The transition is of Kosterlitz-Thouless (KT) type and a gapless spin-fluid phase appears instead.
## II Scaling Analysis
We numerically diagonalize finite clusters up to $`N=12`$ and analyze the data employing a scaling technique. Suppose a field is applied to a cluster of $`N`$ unit cells; then a magnetization, let us say $`M`$, is induced in the ground state. In this sense we represent a field as a function of $`N`$ and $`M`$: $`H(N,M)`$. Even though $`M`$, as well as $`N`$, is given, $`H(N,M)`$ is not in general unique. The upper and lower bounds of $`H(N,M)`$ are, respectively, given by
$`H_+(N,M)=E(N,M+1)-E(N,M),`$ (4)
$`H_-(N,M)=E(N,M)-E(N,M-1),`$ (5)
where $`E(N,M)`$ is the lowest energy in the subspace labeled $`M`$ of the Hamiltonian (2) without the Zeeman term. If the system is massive at the sector labeled $`M`$, $`H_\pm (N,M)`$ should approach different values $`H_\pm (m)`$, respectively, as $`N\to \infty `$, which can be estimated by the Shanks extrapolation. In the critical system, on the other hand, $`H_\pm (N,M)`$ should converge to the same value as
$$H_\pm (N,M)\simeq H(m)\pm \frac{\pi v_\mathrm{s}\eta }{N}\qquad (N\to \infty ),$$
(6)
where $`v_\mathrm{s}`$ is the sound velocity and $`\eta `$ is the critical index defined as $`\langle \sigma _0^+\sigma _r^-\rangle \sim (-1)^rr^{-\eta }`$ with a relevant spin operator $`\sigma `$, which may here be a certain linear combination of $`𝑺`$ and $`𝒔`$.
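The Shanks extrapolation mentioned above is a simple nonlinear sequence transformation; a minimal sketch of one step (ours):

```python
def shanks(a):
    """One Shanks transformation step, accelerating the convergence of a[0], a[1], ..."""
    return [(a[i + 1] * a[i - 1] - a[i] ** 2)
            / (a[i + 1] + a[i - 1] - 2 * a[i])
            for i in range(1, len(a) - 1)]

# e.g. applied to H_plus(N, M) computed at N = 8, 10, 12 to estimate the
# N -> infinity value H_plus(m)
```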
In Fig. 2 we show the thus-obtained thermodynamic-limit magnetization-versus-field curves, where we smoothly interpolate the raw data $`H(m)`$ in order to guide the eye. We might expect that the bond alternation simply makes the plateau grow because the magnetization curve becomes stepwise as $`\delta \to 0`$. However, this naive idea is not true in general. In the vicinity of the Ising limit $`\alpha \to \infty `$, the plateau length behaves as $`(1+\delta )\alpha `$ and thus the bond alternation makes the plateau shrink. Around the Heisenberg point $`\alpha =1`$, this picture seems to be still valid in part but the precise scenario is not so simple. At the Heisenberg point, for example, the antiferromagnetic excitation gap, that is, the gap between the ground state and the lowest level in the subspace with $`M=N/2+1`$, is not a monotonic function of $`\delta `$ (Table I). On the other hand, near the $`XY`$ point $`\alpha =0`$, the plateau seems to grow monotonically with the bond alternation.
Once $`\delta `$ is given, the plateau length is monotonically reduced with the decrease of $`\alpha `$. The system is gapless at every sector of the Hilbert space in the ferromagnetically ordered region $`\alpha \le -1`$ and is thus supposed to encounter a phase transition going through the $`XY`$ region $`-1<\alpha <1`$. It is surprising that the plateau still exists at the $`XY`$ point. We will show later that such a stable plateau is peculiar to quantum spins, while, for classical spins, only a slight anisotropy of $`XY`$ type breaks the plateau.
The plateau length $`\mathrm{\Delta }_N=H_+(N,M)-H_-(N,M)`$ is a relevant order parameter to detect the phase boundary. The scaling relation (6) suggests that $`\mathrm{\Delta }_N`$ should be proportional to $`1/N`$ in the critical system. We plot in Fig. 3(a) the scaled quantity $`N\mathrm{\Delta }_N`$ as a function of $`\alpha `$. $`N\mathrm{\Delta }_N`$ looks independent of $`N`$ beyond a certain value of $`\alpha `$, showing an aspect of the KT transition. The central charge $`c`$ of the critical phase can be extracted from the scaling relation of the ground-state energy:
$$\frac{E(N,M)}{N}\simeq \epsilon (m)-\frac{\pi cv_\mathrm{s}}{N^2}\qquad (N\to \infty ).$$
(7)
Due to the small correlation length of the present system, we can directly and precisely estimate $`v_\mathrm{s}`$ from the dispersion curves. In Fig. 3(b) we plot $`c`$ versus $`\alpha `$ and find that $`c`$ approaches unity as the system goes toward the critical region. Assuming the asymptotic formula $`\mathrm{\Delta }_N\simeq 2\pi v_\mathrm{s}\eta /N`$, we can further evaluate the critical exponent $`\eta `$, which is also shown in Fig. 3(b). Figure 3 fully convinces us of the KT universality of this phase transition. The phase boundary is obtained by tracing the points of $`\eta =\frac{1}{4}`$ and is shown in Fig. 4 by a solid line. On the other hand, we have another numerical tool, the phenomenological renormalization-group (PRG) technique, to determine the phase boundary. At each $`\delta `$, the PRG equation
$$(N+2)\mathrm{\Delta }_{N+2}(\alpha ,\delta )=N\mathrm{\Delta }_N(\alpha ,\delta ),$$
(8)
gives size-dependent fixed points $`\alpha _\mathrm{c}(N,N+2)`$. $`\alpha _\mathrm{c}(N,N+2)`$ is well fitted by a linear function of $`1/(N+1)`$ in the vicinity of $`\delta =1`$, whereas, as $`\delta \to 0`$, the linearity becomes worse and thus the uncertainty in the $`N\to \infty `$ extrapolation increases. Just for reference, the thus-obtained phase boundary is also shown in Fig. 4 by a dotted line, which is somewhat discrepant from the highly accurate estimate based on $`\eta `$. The PRG equation applied to gapful-to-gapful phase transitions yields an accurate solution, to be sure, but, for transitions to a gapless phase, including those of KT type, the PRG analysis is likely to miss the correct solution due to essential corrections to the scaling law (6), overestimating the gapful-phase region. The present PRG solution may still be recognized as the lower boundary of $`\alpha _\mathrm{c}`$.
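A sketch of how the fixed points of Eq. (8) can be located; `gap(N, alpha, delta)` stands for the exact-diagonalization plateau width $`\mathrm{\Delta }_N`$ and is not reproduced here:

```python
from scipy.optimize import brentq

def prg_fixed_point(gap, N, delta, a_lo, a_hi):
    """Root alpha_c(N, N+2) of (N+2) Delta_{N+2}(alpha) - N Delta_N(alpha) = 0."""
    f = lambda alpha: ((N + 2) * gap(N + 2, alpha, delta)
                       - N * gap(N, alpha, delta))
    return brentq(f, a_lo, a_hi)
```

The resulting $`\alpha _\mathrm{c}(N,N+2)`$ are then extrapolated in $`1/(N+1)`$ as described above.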
## III Sublattice Magnetizations
In an attempt to elucidate how much effect quantum fluctuations have on the stability of the plateau, we investigate the classical version of the Hamiltonian (2) as well, where $`𝑺_j`$ and $`𝒔_j`$ are classical vectors of magnitude $`1`$ and $`\frac{1}{2}`$, respectively. We show in Fig. 5 the classical magnetization curves. We note that the classical model also exhibits a plateau at $`m=\frac{1}{2}`$. The magnetization curves in the Ising region are not so far from the quantum behavior, though we have not shown them explicitly. However, the classical plateau can hardly withstand the anisotropy of $`XY`$ type. In this context, it is interesting to observe the sublattice magnetizations separately. We show in Fig. 6 the configuration of each classical spin as a function of a field. The classical plateau is nothing but a Néel-ordered state. In other words, without the fully ordered staggered magnetization, classical spins could not form a magnetization plateau. On the other hand, Fig. 7 shows that quantum spins can form a magnetization plateau with any combination of sublattice magnetizations. It is the case with the quantum model as well that the sublattice magnetizations themselves freeze while going through the plateau. However, as long as the $`XY`$ exchange interaction exists, they are in general reduced from the full values $`1`$ and $`\frac{1}{2}`$, respectively. It is quantum fluctuations that stabilize the plateau with unsaturated sublattice magnetizations.
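The classical curves of Fig. 5 follow from minimizing the energy per unit cell over the two sublattice angles; a minimal coplanar two-sublattice sketch (the parametrization and the starting points are ours):

```python
import numpy as np
from scipy.optimize import minimize

S, s = 1.0, 0.5   # classical vector lengths

def energy(angles, alpha, delta, H):
    """Classical energy per unit cell of model (2); angles measured from the field axis."""
    th_S, th_s = angles
    bond = S * s * (np.sin(th_S) * np.sin(th_s)
                    + alpha * np.cos(th_S) * np.cos(th_s))
    return (1 + delta) * bond - H * (S * np.cos(th_S) + s * np.cos(th_s))

def magnetization(alpha, delta, H):
    starts = [(0.1, np.pi - 0.1), (0.1, 0.1), (np.pi - 0.1, 0.1)]
    best = min((minimize(energy, x0, args=(alpha, delta, H)) for x0 in starts),
               key=lambda r: r.fun)
    th_S, th_s = best.x
    return S * np.cos(th_S) + s * np.cos(th_s)   # m = 1/2 on the Neel plateau
```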
One more interesting observation on the quantum spin configuration is that the collapse of the staggered order in the $`z`$ direction neither coincides with the $`XY`$ point nor results in the disappearance of the plateau. The $`z`$-direction spin correlations between the two sublattices turn ferromagnetic before the model reaches the $`XY`$ point. Here let us be reminded of the mixed nature of quantum ferrimagnets. Because of the coexistent elementary excitations of different types, the specific heat exhibits a Schottky-like peak in spite of the initial ferromagnetic behavior at low temperatures, whereas the susceptibility-temperature product shows both increasing and decreasing behaviors as functions of temperature. The present phenomenon, a massive state in the ferromagnetic background, might also be recognized as a combination of ferromagnetic and antiferromagnetic features.
## IV Summary and Discussion
We have investigated the critical behavior of anisotropic Heisenberg mixed-spin chains in a field. The model shows an anisotropy-induced transition of KT type between the plateau and spin-fluid phases, whose phase boundary lies in the ferromagnetic-coupling region. Though we have restricted our argument to the case of $`(S,s)=(1,\frac{1}{2})`$, qualitatively the same scenario may be expected in higher-spin cases, where multi-plateau phases are possible with the assistance of bond alternation .
While our scaling analysis is highly accurate, it is a subtle question whether or not the plateau still exists at the $`XY`$ point. Therefore, any other argument would be helpful in understanding further the numerical findings obtained. Let us consider a spin-$`\frac{1}{2}`$ ferromagnetic-antiferromagnetic-antiferromagnetic trimerized chain
$$\mathcal{H}=\sum _{j=1}^{N}\left[-\gamma (𝝈_j^a\cdot 𝝈_j^b)_\alpha +(𝝈_j^b\cdot 𝝈_j^c)_\alpha +(𝝈_j^c\cdot 𝝈_{j+1}^a)_\alpha \right],$$
(9)
which can be regarded as the Heisenberg ferrimagnet of our interest in the $`\gamma \to \infty `$ limit. Such a replica-model approach is quite useful in studying low-dimensional quantum magnetism. Introducing the Jordan-Wigner spinless fermions via
$$\lambda _j^{\dagger }=\sigma _j^{\lambda +}\mathrm{exp}\left[\mathrm{i}\pi \sum _{l=1}^{j-1}\sigma _l^{\lambda +}\sigma _l^{\lambda -}\right]\qquad (\lambda =a,b,c),$$
(10)
we replace the Hamiltonian (9) by
$$\mathcal{H}=\sum _{j=1}^{N}\left[(a_j,b_j)_{-\gamma ,\alpha }+(b_j,c_j)_{1,\alpha }+(c_j,a_{j+1})_{1,\alpha }\right],$$
(11)
where $`4(a,b)_{\gamma ,\alpha }=2\gamma (a^{\dagger }b+b^{\dagger }a)+\alpha (2a^{\dagger }a-1)(2b^{\dagger }b-1)`$.
Now we focus our interest on the $`XY`$ point $`\alpha =0`$. After the Fourier transformation, we obtain the equation to determine the single-particle excitation spectrum as
$$\epsilon _k^3-(\gamma ^2+2)\epsilon _k-2\gamma \mathrm{cos}k=0.$$
(12)
The resultant dispersion relation is qualitatively different depending on whether or not $`\gamma =1`$, as illustrated in Fig. 8. At $`\gamma =1`$, which is not large enough to let ferromagnetically coupled neighboring spins construct spin $`1`$'s, there is no gap in the excitation spectrum. However, as $`\gamma `$ increases, gaps open up at the sectors of $`\frac{1}{3}`$ and $`\frac{2}{3}`$ band filling and this scenario remains qualitatively unchanged in the whole region $`\gamma >1`$. Noting the relation between the magnetization and the band filling,
$$M=N_{\mathrm{occ}}-\frac{3N}{2},$$
(13)
where $`N_{\mathrm{occ}}`$ is the number of occupied states, we are allowed to expect magnetization plateaux at $`m=\pm \frac{1}{2}`$. The inclusion of the bond alternation $`\delta `$ results in the enhancement of the gap, which is consistent with Fig. 2. Qualitatively the same scenario is available for a ferromagnetic-ferromagnetic-antiferromagnetic trimerized chain, as was pointed out by two pioneering authors. The present analysis is not strictly comparable to the original argument unless $`\alpha =1`$. However, the nonvanishing gap in the $`\gamma \to \infty `$ limit may be qualitative evidence for the existence of the plateau at the $`XY`$ point in the original model (2). We further show in Fig. 9 the sublattice magnetizations in the ground state of the replica model with $`\alpha =0`$ as functions of a field at a few values of $`\gamma >1`$. We are convinced all the more that the Néel order has already disappeared and both the spins $`1`$ and $`\frac{1}{2}`$ have same-sign $`z`$ components at the $`XY`$ point.
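Eq. (12) is readily solved numerically; a sketch showing the gap at $`\frac{1}{3}`$ filling opening for $`\gamma >1`$ (function names are ours):

```python
import numpy as np

def bands(gamma, nk=512):
    """Three bands from eps^3 - (gamma^2 + 2) eps - 2 gamma cos k = 0."""
    ks = np.linspace(-np.pi, np.pi, nk)
    eps = np.array([np.sort(np.roots(
        [1.0, 0.0, -(gamma ** 2 + 2), -2 * gamma * np.cos(k)]).real) for k in ks])
    return ks, eps          # eps[:, 0..2]: lower, middle, upper band

ks, eps = bands(gamma=3.0)
gap = eps[:, 1].min() - eps[:, 0].max()   # gap at 1/3 filling, i.e. at m = -1/2
print(gap)   # positive for gamma > 1; the lower bands touch at gamma = 1
```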
In recent years, a massive-to-spin-fluid phase transition of KT type has been given a great deal of attention in the context of Haldane's conjecture. In such cases the critical point never goes beyond the $`XY`$ point. The magnetization plateau in our argument should be distinguished from the gap immediately above the ground state, to be sure, but, compared with Haldane's scenario, the present observation looks novel and is a fascinating subject for further study. There may be a new mass-generation mechanism peculiar to quantum mixed-spin chains, other than the valence-bond picture. Quite recently Okamoto and Kitazawa have reported that the magnetization plateau of the spin-$`\frac{1}{2}`$ trimerized chain, which is closely related to the present model, also disappears in the $`XY`$ ferromagnetic region. We hope that our investigation, combined with such an argument from a different viewpoint, will contribute toward revealing the possibly novel scenario for the breakdown of quantized plateaux.
###### Acknowledgements.
It is a pleasure to thank H.-J. Mikeska and U. Schollwöck for helpful discussions. This work was supported by the Japanese Ministry of Education, Science, and Culture through Grant-in-Aid No. 09740286 and by the Okayama Foundation for Science and Technology. The numerical computation was done in part using the facility of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
# Parametric Resonance in the Drift Motion of an Ionic Bubble in near critical Ar Gas
## Abstract
The drift mobility of the O<sub>2</sub><sup>−</sup> ion in dense Argon gas near the liquid–vapor critical point has been measured as a function of the density. Near $`T_c`$ the zero–field density–normalized ion mobility $`\mu _0N`$ shows a deep minimum at a density smaller than $`N_c.`$ This phenomenon was previously interpreted as the result of the formation of a correlated cluster of Argon atoms around the ion because of the strong electrostriction exerted by the ion on the highly polarizable and compressible gas. We now suggest that a possible alternative explanation is related to the onset of a parametric resonance of a bubble surrounding the ion. At resonance a large amount of energy is dissipated by sound waves in addition to viscous dissipation processes, resulting in the large mobility drop observed.
The transport properties of negative ions in dense rare gases and liquids have recently been the subject of renewed interest because they can give information on how the microscopic structure of the fluid around the ion is modified by the ion–atom interaction. Moreover, the possibility of greatly changing, with relative ease, the gas density $`N`$ close to the critical point of the liquid–vapor transition gives experimenters the unique opportunity to investigate the transition from the kinetic to the hydrodynamic transport regime.
The drift mobility of O<sub>2</sub><sup>−</sup> in dense Ne gas or in liquid Xe can be satisfactorily explained in terms of hydrodynamics if ions are assumed to be surrounded by a complex structure. The competition between short–range repulsive exchange forces between the weakly bound outer electron of the ion and the electrons of the atoms and the long–range polarization attraction of the ion with the atoms leads to the formation of a microcavity around the ion, which is in turn surrounded by a strong density enhancement.
A self–consistent field model has been developed to compute the structure shape. The size of the complex is taken as the effective hydrodynamic radius $`R`$ of the ion and its drift mobility is calculated by means of the Stokes formula $`\mu _0=e/6\pi \eta R,`$ where $`\eta `$ is the viscosity. The agreement with O<sub>2</sub><sup>−</sup> mobility data in liquid Xe is quite good, but the data in Ne gas near $`T_c`$ are almost quantitatively reproduced only for $`N>N_c.`$ In particular, the model does not reproduce even qualitatively the little dimple in the density–normalized mobility $`\mu _0N`$ for $`N\approx 0.7N_c.`$ In this Letter we show that this general feature of the mobility can be explained by assuming that under certain thermodynamic conditions a further dissipation mechanism becomes active in addition to viscous processes, namely sound wave emission by oscillations of the structure surrounding the ion.
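For later reference, the hydrodynamic baseline against which such deviations are measured is just the Stokes formula quoted above; a one-line sketch in SI units (the function name is ours):

```python
import numpy as np

def stokes_mobility(eta, R):
    """Zero-field Stokes mobility mu_0 = e / (6 pi eta R)."""
    e = 1.602e-19   # elementary charge, C
    return e / (6 * np.pi * eta * R)
```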
Since the influence of electrostriction on the structure formation depends on the gas polarizability, we have carried out O<sub>2</sub><sup>−</sup> mobility measurements in Ar gas because its polarizability is much larger than that of Ne. We used the pulsed photoinjection technique as for Ne and He. Details of the experiment are reported in the literature. A small bunch of electrons is injected into the gas by irradiating a photocathode with a short UV light pulse. Electrons are captured by O<sub>2</sub> impurities (in a concentration of $`10-100`$ p.p.m.), forming stable O<sub>2</sub><sup>−</sup> ions. They drift under the action of an external electric field $`E`$ towards the anode, inducing a detectable current. The analysis of its time dependence allows the determination of the drift time and, hence, of the drift velocity $`v_D`$. Finally, the mobility is calculated as $`\mu =v_D/E.`$
In Fig. 1 we show the measured zero–field density–normalized mobility $`\mu _0N`$ vs. the reduced density $`N/N_c`$ for $`T=151.5\mathrm{K}`$ ($`T/T_c\approx 1.005`$) ($`N_c=8.08\mathrm{atoms}\mathrm{nm}^{-3},`$ $`T_c=151.7\mathrm{K}`$). $`\mu _0N`$ shows a very deep minimum at $`N/N_c\approx 0.76.`$ The Stokes formula with constant radius reproduces the data, at most, for $`N/N_c\gtrsim 1.3`$ . The data are heuristically explained by assuming that a cluster of Ar atoms forms around the ion as a consequence of electrostriction . This large structure interacts hydrodynamically with the gas, and the effective hydrodynamic radius depends on the local properties of the fluid. Good agreement with the data for $`N/N_c\gtrsim 0.5`$ is obtained by introducing into the effective radius an adjustable contribution proportional to the size of the strongly correlated cluster. The hydrodynamic radius thus acquires a complicated, ad hoc density dependence whose meaning is not easy to grasp. The reason might be that this model focuses only on the structure determined by the long–range part of the ion–atom interaction potential and does not take into account the microcavity closely surrounding the ion. It therefore neglects the possibility that the cavity may oscillate. In that case the energy of the moving complex (ion plus structure or, briefly, the ionic bubble) can be dissipated by sound wave radiation in addition to viscous processes.
The present problem bears many affinities with the dynamics of electron bubble formation in liquid He , the adiabatic process in which the repulsive electron–atom interaction displaces a large number of atoms away from the electron localized within the fluid dilation, and with the electron solvation dynamics in water, where cavity contraction is induced by long–range attractive polarization interactions . These problems have been treated in terms of a hydrodynamic model of cavity expansion or collapse . The dynamics of the cavity boundary is determined by the flow of the surrounding solvent, which is assumed to be spherically symmetric with velocity in the radial direction. In the absence of energy dissipation, and assuming incompressibility of the solvent, the cavity boundary velocity $`U`$ is described by the Rayleigh–Plesset (RP) equation
$$\frac{\mathrm{d}U}{\mathrm{d}t}=-\frac{3U^2}{2R}+\frac{1}{\rho R}\left[p(R)-p_e\right]$$
(1)
where $`R`$ is the time dependent cavity radius, $`\rho `$ is the gas mass density, $`p_e`$ is the external pressure, and $`p(R)`$ is the pressure on the cavity boundary, expressed in terms of the free energy change $`F`$ in the process: $`p(R)=-(\partial F/\partial R)/4\pi R^2.`$ The equilibrium radius is obtained by minimizing $`F,`$ but in the absence of dissipation the system does not equilibrate: the cavity oscillates around the equilibrium value of the radius, and energy dissipation causes the damping of the oscillations. Within the given approximations, the first passage time, i.e., the time required for the cavity radius to expand (contract) from its initial value to its maximum (minimum), is taken as a lower bound for the relaxation timescale of the process. Viscosity and the emission of sound waves drive the system towards equilibrium .
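A toy numerical integration of the RP equation illustrates the oscillation about the equilibrium radius; the boundary pressure law $`p(R)=p_0(R_0/R)^{3\gamma }-2\sigma /R`$ and all parameter values are assumptions chosen only so that a stable oscillating cavity results, not a model of the actual free energy.

```python
# Toy integration of the RP equation (Eq. 1) for an oscillating cavity.
# The boundary pressure p(R) = p0*(R0/R)**(3*gamma) - 2*sigma/R (an
# adiabatic gas term plus a Laplace term) and all numbers are assumed.
from scipy.integrate import solve_ivp

rho   = 400.0        # near-critical Ar mass density (kg/m^3), assumed
sigma = 5e-5         # effective surface tension (J/m^2), assumed
R0    = 1e-9         # equilibrium radius (m), assumed
gamma = 5.0 / 3.0
p0    = 2.0 * sigma / R0      # equilibrium internal pressure, p_e = 0

def rp_rhs(t, y):
    R, U = y
    pR = p0 * (R0 / R)**(3.0 * gamma) - 2.0 * sigma / R
    return [U, -1.5 * U**2 / R + pR / (rho * R)]

sol = solve_ivp(rp_rhs, (0.0, 1e-9), [1.2 * R0, 0.0], max_step=1e-12)
print(min(sol.y[0]), max(sol.y[0]))   # R oscillates around R0
```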
Clearly, the drift motion of the ion is also affected if energy is additionally dissipated by the emission of sound waves. We therefore address the motion of the ionic bubble in dense Ar gas, considering the oscillations of the bubble boundary. We assume a spherical cavity with a well–defined boundary of radius $`R(t)`$ and equilibrium value $`R_0.`$ The main contribution to the restoring force is due to the surface tension. Although a gas has no surface tension, the enhanced density region around the cavity acts as a well–defined interface. Oscillations are initiated by collisions with energetic Ar atoms. Owing to the small size of the bubble and to the asymmetry induced by the average motion driven by the external electric field, these collisions are not very frequent and occur in a spatially inhomogeneous way. The equation of motion of the bubble boundary is then given by the RP equation and, apart from the impulsive nature of the driving force, is the same equation used for the bubble dynamics in sonoluminescence experiments .
For small distortions of the spherical shape a solution for the bubble radius is sought in the form $`R_0+a_nY_n,`$ where $`Y_n`$ is a spherical harmonic of degree $`n`$ and the $`a_n`$ are the distortion amplitude coefficients. As bubble oscillations are initiated by collisions with energetic Ar atoms, the phenomenon can be basically described by a kicked rotator model, with the following equation of motion for the simple normalized periodic case: $`a_n^{\prime \prime }+\mathrm{\Gamma }a_n^{\prime }-Kf(a_n)\sum _{m=0}^{\infty }\delta (t-m\tau )=0`$, where $`m`$ is an integer, $`\mathrm{\Gamma }`$ is the damping constant, and $`\tau `$ is the period between two kicks. Imposing a stochastic behaviour on the potential function, we can model collision events occurring with a Gaussian distribution, as, e.g., in the quantum kicked rotator and electron localization problem . The properties of this equation are well known results of nonlinear dynamical system theory, which assure, under certain assumptions, the existence of a universal route to chaos. In the limit of small forcing the dynamics of the distortion amplitude can be cast in the form of the Mathieu–Hill (MH) like equation
$$b_n^{\prime \prime }+2\xi _mb_n^{}+\omega _m^2\left(1+ϵ_m\mathrm{cos}2\stackrel{~}{t}\right)b_n=0$$
(2)
with $`b_n=R_0^{3/2}a_n(t).`$ $`\omega _m^2=(\omega _0/\omega )^2`$ is the square of the ratio between the natural frequency of the bubble, $`\omega _0,`$ and the excitation frequency $`\omega .`$ The natural frequency is given by $`\omega _0^2=\beta _n\sigma /\rho R_0^3,`$ where $`\beta _n=(n-1)(n+1)(n+2),`$ $`\sigma `$ is the surface tension, and $`\rho `$ is the mass density of the gas . Primes denote differentiation with respect to the dimensionless time $`\stackrel{~}{t}=\omega t.`$ Assuming that the bubble is empty, the surface tension can be calculated by using the parachoric formula $`\sigma =(PN)^4,`$ where $`N`$ is the gas number density and $`P`$ is a constant. For Argon $`P\approx 1.39\times 10^{-29}\mathrm{J}^{1/4}\mathrm{m}^{5/2}.`$ The term $`\xi _m=2n(n+2)\eta /\rho \omega R_0^2`$ is a damping coefficient related to the viscosity $`\eta ,`$ whose values are found in the literature. Only the first Fourier component of the forcing term, of amplitude $`ϵ_m,`$ has been retained.
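The sketch below evaluates $`\omega _0`$ and $`\xi _m`$ for the $`n=2`$ mode from the parachoric surface tension; the equilibrium radius and the gas viscosity are assumed values.

```python
# Sketch: natural frequency omega_0 and viscous damping xi_m of the
# n = 2 surface mode, using the parachoric surface tension sigma = (P*N)^4.
# R0 and eta are assumed, illustrative values.
import math

P    = 1.39e-29                 # parachor constant for Ar (J^1/4 m^5/2)
m_Ar = 39.948 * 1.66054e-27     # Ar atomic mass (kg)
n    = 2                        # mode index
R0   = 1e-9                     # assumed equilibrium bubble radius (m)
eta  = 2e-5                     # assumed gas viscosity (Pa s)

def mode_frequency(N):
    """omega_0 (rad/s) for number density N (m^-3)."""
    sigma = (P * N)**4
    rho = m_Ar * N
    beta = (n - 1) * (n + 1) * (n + 2)
    return math.sqrt(beta * sigma / (rho * R0**3))

N = 6.2e27                      # near the mobility minimum (m^-3)
w0 = mode_frequency(N)
xi = 2 * n * (n + 2) * eta / (m_Ar * N * w0 * R0**2)  # damping at omega = omega_0
print(w0, xi)
```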
For a dissipation–free system the Mathieu–Hill equation is known to give rise to parametric instability, when deviations from the spherical shape accumulate over many oscillation cycles. Floquet’s theorem states that the solutions of Eq. 2 take the form
$$b_n\left(\stackrel{~}{t}\right)=e^{\mu _f\stackrel{~}{t}}P_n\left(\stackrel{~}{t}\right)$$
(3)
where $`\mu _f`$ is the Floquet index and $`P_n`$ is periodic. $`\omega _m`$ and $`ϵ_m`$ span a plane geometrically divided into stability and instability regions. We focus on the $`n=2`$ mode. In the instability regions $`\mu _f^2>0`$ and the envelope of the solutions to the MH equation grows exponentially. In the physical system at hand this amplitude cannot grow indefinitely because dissipation stabilizes the surface dynamics. In the stability regions, where $`\mathrm{𝚁𝚎}\mu _f=0`$ and $`\mathrm{𝙸𝚖}\mu _f=2\omega _m,`$ the solutions are periodic with angular frequency $`\omega _0,`$ with a small modulation at angular frequency $`2\omega _0.`$ In this case the system experiences parametric resonance. In the limit of small forcing $`(ϵ_m\to 0),`$ in the absence of dissipation $`(\xi _m\to 0),`$ and close to the resonance $`(\omega _m\to 1),`$ the non–zero solution for the Floquet index is $`\mu _f=2i\omega _m=2i\omega _0/\omega `$ . We now show that $`\mathrm{𝙸𝚖}\mu _f`$ is related to the measured mobility $`\mu _0.`$ From the expression for the bubble eigenfrequency, using the parachoric formula, we can write
$$\omega _m=\frac{g}{\omega }N^{3/2}$$
(4)
where $`g=(N_A\beta _nP^4/MR_0^3)^{1/2}`$ is a constant, $`N_A`$ is Avogadro’s number, and $`M`$ is the atomic weight of Argon. $`\omega `$ is the excitation frequency. Under the hypothesis of collision–induced oscillations, $`\omega =\omega _{coll}/l=2\pi \nu _{coll}/l,`$ where $`\nu _{coll}`$ is the collision frequency and $`l`$ defines the order of a suitable subharmonic. In the Knudsen regime the mobility of a heavy ion of radius $`R_0`$ scattered off light particles of mass $`m`$ is
$$\mu _0N=\frac{3e}{8R_0^2\sqrt{2\pi mk_\mathrm{B}T}}$$
(5)
and is related to the collision frequency by
$$\nu _{coll}=\frac{3eN}{m\left(\mu _0N\right)}$$
(6)
By inserting this result into Eq.4 we get
$$\mathrm{𝙸𝚖}\mu _f\propto \sqrt{N}\left(\mu _0N\right)$$
(7)
where $`\mu _0N`$ is the experimentally measured zero–field density–normalized mobility. Therefore, $`\sqrt{N}\mu _0N`$ can be used to make a stability analysis of the bubble boundary motion.
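To make the chain from Eqs. (4)–(6) concrete, the following sketch evaluates $`\omega _m`$ (and hence $`\mathrm{𝙸𝚖}\mu _f=2\omega _m`$) from an assumed value of the measured mobility; the values of $`R_0`$, $`l`$, and $`\mu _0N`$ below are illustrative placeholders, not the published data.

```python
# Sketch of Eqs. (4)-(6): from a measured mu_0*N, compute the collision
# frequency, the excitation frequency omega = 2*pi*nu_coll/l, and the
# ratio omega_m = omega_0/omega, proportional to sqrt(N)*(mu_0*N).
import math

e     = 1.602e-19                 # elementary charge (C)
m_Ar  = 39.948 * 1.66054e-27      # Ar atomic mass (kg)
P     = 1.39e-29                  # parachor constant (J^1/4 m^5/2)
R0    = 1e-9                      # assumed bubble radius (m)
beta2 = 1 * 3 * 4                 # (n-1)(n+1)(n+2) for n = 2

g = math.sqrt(beta2 * P**4 / (m_Ar * R0**3))   # omega_0 = g * N**1.5

def omega_m(N, mu0N, l=2**11):
    nu_coll = 3.0 * e * N / (m_Ar * mu0N)      # Eq. (6)
    omega = 2.0 * math.pi * nu_coll / l        # excitation frequency
    return g * N**1.5 / omega                  # Eq. (4)

# Example: N = 6.2e27 m^-3 with an assumed mu0N ~ 2e21 (V m s)^-1
print(omega_m(6.2e27, 2e21))                   # of order unity, i.e. near resonance
```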
In the real system the ideal conditions of small forcing and absence of dissipation are not met, and a parametric resonance is characterized by a minimum of the effective Floquet index, approximated by $`\mathrm{𝚁𝚎}\mu _f,`$ which we show to be related to $`\sqrt{N}\mu _0N.`$ The minimum observed at $`N/N_c\approx 0.76`$ $`(N\approx 6.2\mathrm{atoms}\mathrm{nm}^{-3})`$ is due to parametric resonance of the ionic bubble. We follow a heuristic line of reasoning. Once excited, the bubble starts oscillating with characteristic frequency $`\mathrm{𝙸𝚖}\{\mu _f\}`$ and growing amplitude $`A(t)=\mathrm{exp}(\mathrm{𝚁𝚎}\{\mu _f\}t).`$ The growth of the amplitude is limited because collision processes can absorb the oscillation energy. Therefore, if $`𝒯`$ is the time interval during which the bubble oscillates, $`\mathrm{𝚁𝚎}\{\mu _f\}𝒯\sim 𝒪(1).`$ On the other hand, according to calculations in the cavitation model , the oscillations last for a few cycles, hence $`\mathrm{𝙸𝚖}\{\mu _f\}𝒯\sim 𝒪(1).`$ Then $`\mathrm{𝚁𝚎}\{\mu _f\}\approx \mathrm{𝙸𝚖}\{\mu _f\}`$ and, owing to Eq. 7, $`\sqrt{N}(\mu _0N)`$ is also related to the real part of the Floquet index. This Ansatz must then be verified by a direct numerical stability analysis of Eq. 2.
The minimum of $`\mu _0N`$ occurs at the density $`N/N_c\approx 0.76`$ where the resonance condition $`\omega =\omega _0`$ is met. In Fig. 2 we plot vs. $`N`$ both $`\omega _0\propto N^{3/2}`$ and, approximately, $`\omega =\omega _{coll}/l\propto N.`$
The two curves intersect at the observed density if $`l\approx 2^{11}.`$ The excitation frequency must therefore be $`2^{11}`$ times lower than the average collision frequency given by Eq. 6. The factor $`2^{11}`$ can be explained by assuming that only collisions with Ar atoms in the tail of the Maxwell–Boltzmann energy distribution function, namely those with average kinetic energy in excess of $`3k_\mathrm{B}T,`$ are energetic enough to initiate bubble oscillations. These energetic collisions are less frequent than average by the right factor.
We have numerically integrated the MH equation (Eq. 2) assuming an equilibrium radius $`R_0\approx 10\mathrm{\AA },`$ and have carried out the usual analysis of the stability bands. Usually the Floquet index is plotted vs. $`\omega _m^2.`$ However, to compare the experimental data with the results of the numerical analysis we have converted $`\omega _m`$ to $`N`$ by means of Eqns. 4 and 6. In Fig. 3 we plot $`(\mu _0N)\sqrt{N}`$ and $`\mathrm{𝚁𝚎}\mu _f`$ as a function of the gas density $`N.`$ The Floquet index shows a number of small bands, but also a very deep minimum at the same $`N`$ as the experimental $`\mu _0N.`$ This minimum is therefore associated with a strong parametric resonance of the ionic bubble. The larger width of the experimental feature can be explained by the fact that the experimental result is an average over a distribution of ionic bubble radii. At the condition of parametric resonance the amplitude of oscillation can reach substantial values, and the kinetic energy gained from the external electric field can be efficiently dissipated by the emission of sound waves.
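As a concrete illustration of this procedure, the sketch below integrates Eq. 2 over one period and extracts the Floquet index from the eigenvalues of the monodromy matrix; the damping, forcing amplitude, and the grid of $`\omega _m`$ values are illustrative choices rather than the parameters used for Fig. 3.

```python
# Sketch of a Floquet stability analysis of Eq. (2): integrate
# b'' + 2*xi*b' + w_m^2*(1 + eps*cos(2t))*b = 0 over one period (pi in
# dimensionless time), build the monodromy matrix from two independent
# initial conditions, and read off Re(mu_f) from its eigenvalues.
import numpy as np
from scipy.integrate import solve_ivp

def floquet_index(w_m, eps=0.1, xi=0.01):
    def rhs(t, y):
        return [y[1], -2*xi*y[1] - w_m**2*(1 + eps*np.cos(2*t))*y[0]]
    T = np.pi                      # period of cos(2t)
    M = np.zeros((2, 2))
    for k, y0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        M[:, k] = sol.y[:, -1]
    lam = np.linalg.eigvals(M)
    return np.log(np.max(np.abs(lam))) / T   # Re(mu_f) per unit time

for w in (0.5, 1.0, 1.5, 2.0):     # principal resonance near w_m = 1
    print(w, floquet_index(w))
```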
Close to the critical point the sound velocity $`c`$ is at a minimum ($`\approx 190\mathrm{m}/\mathrm{s}`$) and sound dissipation is favoured. The sound intensity radiated by an oscillating bubble in the long wavelength limit is $`I=2\pi \rho c\omega ^2R_0^4.`$ At the density of the minimum of $`\mu _0N`$ the energy radiated per cycle is in the tens of meV range, and a relevant part of the energy gained by the ion from the electric field is dissipated by this process in addition to the usual viscous processes. At higher $`N`$ the stiffness of the bubble surface becomes too large and the oscillations cannot be initiated as easily as in the resonance region. Hence, at high $`N`$ the contribution of sound to the energy dissipation is limited and $`\mu _0`$ is determined by the viscosity. This is why $`\mu _0N`$ in the high–$`N`$ region can be well described by the Stokes formula. On the contrary, at much lower $`N`$ bubbles cannot form and $`\mu _0N`$ is determined by dissipation processes other than sound emission.
In the region where stable bubbles exist a further dissipation mechanism, though probably very small, should be considered. Within the bubble the ion undergoes a chaotic motion, bouncing back and forth off the inner bubble wall. Molecular Dynamics studies have shown a vibration of the ion within the cavity. The ion should therefore behave as an emitting antenna of characteristic frequency $`\omega _e\approx 2\pi v_{th}/2R_0,`$ where $`v_{th}=(3k_\mathrm{B}T/m_i)^{1/2}`$ is the ion thermal velocity and $`m_i`$ its mass. At the experimental temperature $`\omega _e\approx 10^{12}\mathrm{rad}\mathrm{s}^{-1}.`$ Moreover, this radiation should be modulated by the slower oscillation of the bubble boundary. We finally note that in this problem there is a moving interface between media of different polarizability, crossed by the strong electric field of the ion. An experiment could therefore be designed to detect the expected quantum radiation emitted as a dynamic Casimir effect, which some authors consider to be the physical cause of sonoluminescence.
# Relativistic Calculations of Induced Polarization in ¹²𝐶(𝑒,𝑒'𝑝⃗) Reactions

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
## Acknowledgements
JIJ would like to thank James Kelly for kindly providing his EEI calculations from reference .
## Figure Captions
FIG. 1. Polarization of the knocked-out proton in the $`{}^{12}C(e,e^{\prime }\stackrel{\to }{p})^{11}B`$ reaction. The energy of the incident electron is 579 MeV, with constant q-$`\omega `$ kinematics. The Hartree bound state wave functions are from while the proton optical potentials are from reference . (a) Knockout of a $`1p_{3/2}`$ proton. (b) Knockout of a $`1s_{1/2}`$ proton. Solid curves — Hartree binding potential and E-dep optical potential for $`{}^{12}C`$. Dashed curves — Woods-Saxon binding potential and E-dep optical potential for $`{}^{12}C`$. Dotted curves — Hartree binding potential and E+A-dep optical potential, fit 1. Dot-dashed curves — EEI calculations from . The data are from reference . Closed circles denote missing energy in the range $`28<E_m<39`$ MeV, and open circles denote missing energy in the range $`39<E_m<50`$ MeV.
# The Bright Side of Dark Matter

This essay received an “honorable mention” in the 1999 Gravity Research Foundation Essay Competition.
## ACKNOWLEDGEMENTS
I wish to thank Robert Myers for his useful comments and McGill University for their financial support.
# Advection-Dominated Accretion Flows
## 1 Introduction
It has recently been recognized that rotating accretion flows with low radiative efficiency are applicable to a wide range of astrophysical systems, including Galactic X-ray transients and active galactic nuclei. In the recently discussed models for such sources, the radiative efficiency is low because the gravitational binding energy dissipated during the infall of accreted matter is not efficiently radiated away, due to the low gas densities, but is stored within the flow and advected radially inward. Viscous torques transport angular momentum while generating heat through dissipation. The radiative efficiency is basically determined by the fraction of the viscously dissipated energy which goes into radiation (Narayan & Yi 1995b). In the optically thin limit, which occurs when the density of the accretion flow falls below a certain limit, the radiative efficiency becomes small due to the long time scales of the electron-ion energy exchange process and of the relevant cooling processes. The first discussions of such flows are found in Ichimaru (1977), Rees et al. (1982), and references therein.
The accretion flows around compact objects can be classified into several types (e.g. Narayan et al. 1998b, Frank et al. 1992). (i) Geometrically thin disks radiate away a large fraction of the viscously dissipated energy and, due to the high density, the optical depth for outgoing radiation is large. The disk temperature is relatively low and hence the internal pressure support is small, which results in the small geometrical thickness (e.g. Frank et al. 1992 for a review). In these flows the cooling rate $`Q^{-}`$ balances the heating rate $`Q^+`$, $`Q^+=Q^{-}`$, while the electron temperature $`T_e`$ equals the ion temperature $`T_i`$. (ii) Two-temperature, Shapiro-Lightman-Eardley (1976) type accretion flows also maintain the energy balance $`Q^+=Q^{-}`$, while $`T_i\approx T_{vir}\gg T_e`$, where $`T_{vir}`$ is the usual virial temperature. These flows are thermally unstable (Piran 1978, cf. Rees et al. 1982). (iii) Slim disks, or optically thick advection-dominated accretion flows (ADAFs), occur when accretion rates typically exceed the Eddington rate and the optical depth is large. In this limit, photons diffuse out on a time scale longer than the radial inflow time scale (Abramowicz et al. 1988 and references therein). These flows have $`Q^+>Q^{-}`$ and $`T_i=T_e`$. (iv) Optically thin ADAFs (Ichimaru 1977, Rees et al. 1982, Narayan & Yi 1994, 1995b, Abramowicz et al. 1995) are relevant for substantially sub-Eddington accretion rates. The optical depth in these flows is typically less than unity and the radiative cooling rate is much smaller than the heating rate, i.e. $`Q^+>Q^{-}`$, while the ion temperature $`T_i\approx T_{vir}`$ is much higher than the electron temperature $`T_e`$. These flows are most interesting in systems which have relatively low luminosities and high emission temperatures (Narayan & Yi 1995b).
In this review, we focus on the optically thin ADAFs and discuss their applications to various astrophysical systems.
## 2 Basics of ADAFs: A Simple Version
### 2.1 Basic Equations
Following Narayan & Yi (1994, 1995ab), we adopt the following notation for a steady, axisymmetric, rotating accretion flow: $`R=`$ cylindrical radius from the central star, $`\rho =`$ gas density, $`H=`$ thickness or vertical scale height of the flow, $`v=`$ radial velocity, $`\mathrm{\Omega }=`$ angular velocity, $`\mathrm{\Omega }_K=(GM/R^3)^{1/2}=`$ Keplerian angular velocity, $`c_s=`$ isothermal sound speed, $`\nu =`$ kinematic viscosity coefficient, $`T=`$ temperature of the gas, and $`s=`$ specific entropy of the gas. The basic conservation equations for mass, radial momentum, angular momentum, and energy then become, respectively,
$$\rho RHv=constant,$$
(1)
$$v\frac{dv}{dR}-(\mathrm{\Omega }^2-\mathrm{\Omega }_K^2)R=-\frac{1}{\rho }\frac{d}{dR}(\rho c_s^2),$$
(2)
$$\rho RHv\frac{d(\mathrm{\Omega }R^2)}{dR}=\frac{d}{dR}\left(\nu \rho R^3H\frac{d\mathrm{\Omega }}{dR}\right),$$
(3)
$$\rho vT\frac{ds}{dR}=q^+-q^{-}\equiv fq^+$$
(4)
where $`q^+=\nu \rho R^2(d\mathrm{\Omega }/dR)^2=`$ viscous dissipation rate per unit volume, $`q^{-}=`$ radiative cooling rate per unit volume, and $`\rho vT(ds/dR)=q^{adv}=`$ radial advection rate per unit volume. The energy equation is therefore simply $`q^{adv}=q^+-q^{-}=fq^+`$, which defines the advection fraction $`f`$. When advective cooling per unit volume dominates, $`q^{-}\ll q^+`$ and $`f\to 1`$. Even when $`f`$ differs substantially from unity, most dynamical calculations assume a constant $`f`$ rather than solving the full energy equation (or assume a simple cooling law such as bremsstrahlung). The viscosity coefficient is specified by the $`\alpha `$ prescription (e.g. Frank et al. 1992), $`\nu =\alpha c_sH=\alpha c_s^2/\mathrm{\Omega }_K`$, where $`\alpha `$ is a constant often assumed to be in the range 0.01–1.
The accretion flows are classified into three types according to the relative importance of the terms in the energy equation (Narayan et al. 1998). (i) $`q^+\approx q^{-}\gg q^{adv}`$: energy balance is maintained between viscous heating and radiative cooling, which corresponds to high-efficiency flows such as geometrically thin disks. (ii) $`q^+\approx q^{adv}\gg q^{-}`$: radiative loss is negligible and the luminosity is very low. (iii) $`q^+\ll q^{-}\approx -q^{adv}`$: viscous heating is negligible and the thermal energy of the flow is converted to radiation, as in cooling flows. The infall of matter is driven by pressure loss as the gas cools.
### 2.2 Self-Similar Solution
The dynamical equations derived admit a self-similar solution for $`f=`$constant as shown in Narayan & Yi (1994) and Spruit et al. (1987);
$$v=-\left[\frac{(5+2ϵ^{})}{3\alpha ^2}g(\alpha ,ϵ^{})\right]\alpha v_K,$$
(5)
$$\mathrm{\Omega }=\left[\frac{2ϵ^{}(5+2ϵ^{})}{9\alpha ^2}g(\alpha ,ϵ^{})\right]^{1/2}\frac{v_K}{R},$$
(6)
$$c_s^2=\left[\frac{2(5+2ϵ^{})}{9\alpha ^2}g(\alpha ,ϵ^{})\right]v_K^2,$$
(7)
where $`v_K=R\mathrm{\Omega }_K=(GM/R)^{1/2}`$, $`ϵ^{}=ϵ/f`$, $`ϵ=(5/3-\gamma )/(\gamma -1)`$, and $`g(\alpha ,ϵ^{})=\left[1+\frac{18\alpha ^2}{(5+2ϵ^{})^2}\right]^{1/2}-1`$. The specific heat ratio $`\gamma =4/3`$–$`5/3`$.
For $`\alpha =0.01`$–$`0.3`$, $`\alpha ^2\ll 1`$ and $`f\approx 1`$ give
$$v\approx -\left(\frac{9\gamma -9}{9\gamma -5}\right)\alpha v_K,$$
(8)
$$\mathrm{\Omega }\approx \left[\frac{2(15-9\gamma )}{3(9\gamma -5)}\right]^{1/2}\mathrm{\Omega }_K<\mathrm{\Omega }_K,$$
(9)
$$c_s^2\approx \frac{6\gamma -6}{9\gamma -5}v_K^2$$
(10)
The self-similar solution reveals the basic properties of ADAFs. (i) The radial accretion time scale is much shorter than that of the thin disk. (ii) Sub-Keplerian rotation occurs due to the large internal pressure support. (iii) The vertical scale height $`H\approx c_s/\mathrm{\Omega }_K\approx R`$. Moreover, the positive Bernoulli parameter indicates that ADAFs are prone to outflows, although there have not been any self-consistent inflow/outflow solutions (Narayan & Yi 1995a). The vertically integrated equations and the self-similar solutions do not introduce any serious errors in the flow dynamics, since the height integration is a good approximation (Narayan & Yi 1995a).
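The following sketch simply evaluates Eqs. (5)–(7) for given $`\alpha `$, $`\gamma `$, and $`f`$; it is pure algebra from the text, with no fitted inputs, and reproduces the limiting forms of Eqs. (8)–(10) for small $`\alpha `$.

```python
# Sketch evaluating the self-similar solution, Eqs. (5)-(7): return
# v/v_K, Omega/Omega_K and c_s^2/v_K^2 for given alpha, gamma, f.
import math

def self_similar(alpha, gamma, f=1.0):
    eps  = (5.0/3.0 - gamma) / (gamma - 1.0)
    epsp = eps / f
    g = math.sqrt(1.0 + 18.0*alpha**2/(5.0 + 2.0*epsp)**2) - 1.0
    v_ratio   = -(5.0 + 2.0*epsp) / (3.0*alpha**2) * g * alpha
    Om_ratio  = math.sqrt(2.0*epsp*(5.0 + 2.0*epsp)/(9.0*alpha**2) * g)
    cs2_ratio = 2.0*(5.0 + 2.0*epsp)/(9.0*alpha**2) * g
    return v_ratio, Om_ratio, cs2_ratio

print(self_similar(alpha=0.1, gamma=1.5))   # sub-Keplerian rotation, H ~ R
```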
### 2.3 Cooling and Heating Mechanisms
In ADAFs, due to low radiative cooling efficiency of electrons and rather ineffective ion-electron coupling, which is taken to be the Coulomb coupling, ions are nearly virialized (Narayan & Yi 1995b)
$$T_i\approx 2\times 10^{12}\beta r^{-1}K.$$
(11)
Electrons’ energy balance is maintained as (Mahadevan & Quataert 1997)
$$\rho T_ev\frac{ds}{dR}=\rho v\frac{dϵ}{dR}-kT_ev\frac{dn}{dR}=q^{ie}+\delta q^+-q^{-}$$
(12)
where $`kT_ev(dn/dR)=q^{compress}`$ is the compressive heating and $`\delta q^+`$ is the direct viscous heating of electrons, with $`\delta \sim m_e/m_p\approx 10^{-3}`$ (Nakamura et al. 1996). $`q^{ie}>q^{compress}`$ for $`\dot{m}>0.1\alpha ^2`$, and $`\rho v(dϵ/dR)\approx q^{ie}\approx q^{-}`$ describes the electron energy balance. $`q^{ie}<q^{compress}`$ occurs when $`\dot{m}<10^{-4}\alpha ^2`$, and the energy balance for electrons becomes $`\rho v(dϵ/dR)\approx q^{compress}`$, which is appropriate only for very low luminosity systems. $`\delta q^+<q^{compress}`$ is realized only for $`\delta <10^{-2}`$, which implies that direct viscous heating is uninteresting in most practical cases (e.g. Mahadevan & Quataert 1997).
For convenience, we introduce some physical scalings: mass $`m\equiv M/M_{\odot }`$, radius $`r=R/R_s`$ ($`R_s=2GM/c^2=2.95\times 10^5m\mathrm{cm}`$), accretion rate $`\dot{m}=\dot{M}/\dot{M}_{Edd}`$ ($`\dot{M}_{Edd}=L_{Edd}/0.1c^2=1.39\times 10^{18}m\mathrm{g}/\mathrm{s}`$, where $`L_{Edd}`$ is the Eddington luminosity). The equipartition magnetic field is given by $`B^2/8\pi =(1-\beta )\rho c_s^2`$, with $`\beta =0.5`$ and $`f=1`$, where $`\beta `$ is the ratio of gas to total pressure. The self-similar solution gives the physical quantities in terms of the dimensionless variables defined here.
$$v\approx -1\times 10^{10}\alpha r^{-1/2}cm/s,$$
(13)
$$\mathrm{\Omega }\approx 3\times 10^4m^{-1}r^{-3/2}s^{-1},$$
(14)
$$c_s^2\approx 1\times 10^{20}r^{-1}cm^2/s^2,$$
(15)
$$n_e\approx 6\times 10^{19}\alpha ^{-1}m^{-1}\dot{m}r^{-3/2}cm^{-3},$$
(16)
$$B\approx 8\times 10^8\alpha ^{-1/2}m^{-1/2}r^{-5/4}\dot{m}^{1/2}G,$$
(17)
$$\tau _{es}\approx 24\alpha ^{-1}r^{-1/2}\dot{m},$$
(18)
$$q^+\approx 5\times 10^{21}m^{-2}r^{-4}\dot{m}erg/s/cm^3,$$
(19)
where $`n_e`$ is the electron number density and $`\tau _{es}`$ is the electron scattering depth.
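A minimal sketch that evaluates the scalings (13)–(19) for given $`(\alpha ,m,\dot{m},r)`$; the example inputs are arbitrary illustrative values.

```python
# Sketch of the scaling relations (13)-(19): plug in alpha, m, mdot, r
# and print the ADAF quantities in the cgs units used in the text.
def adaf_scalings(alpha, m, mdot, r):
    return {
        "v (cm/s)":      -1e10 * alpha * r**-0.5,
        "Omega (1/s)":    3e4 / (m * r**1.5),
        "cs2 (cm2/s2)":   1e20 / r,
        "n_e (cm^-3)":    6e19 * mdot / (alpha * m * r**1.5),
        "B (G)":          8e8 * (mdot / (alpha * m))**0.5 * r**-1.25,
        "tau_es":         24 * mdot / (alpha * r**0.5),
        "q+ (erg/s/cm3)": 5e21 * mdot / (m**2 * r**4),
    }

# Example: a stellar-mass hole, m = 10, mdot = 0.01, at r = 100
for key, val in adaf_scalings(0.3, 10, 0.01, 100).items():
    print(key, val)
```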
Various (electron) cooling processes give rise to distinct spectral components (Narayan & Yi 1995b). The integrated cooling rate $`Q𝑑Vq`$ where $`q`$ is the cooling rate per unit volume and $`𝑑V`$ denotes integration over the entire accretion flow. The total electron cooling rate is
$$Q_e^{}=Q_{sync}+Q_{Compt}+Q_{brem}$$
(20)
where $`Q_{sync}`$ is the synchrotron cooling rate which gives rise to spectral emission components in radio, IR, or optical/UV depending on the mass $`m`$ and accretion rate $`\dot{m}`$. $`Q_{Compt}`$ is the Compton cooling rate which is mainly responsible for optical/UV/soft X-ray emission. $`Q_{brem}`$ is the bremsstrahlung cooling contributing to X-ray and soft gamma-ray emission. If ADAFs in the inner regions around accreting compact objects are surrounded by the optically thick disks, optical/UV emission from cool disks is expected. Energetic protons in ADAFs may result in high energy gamma-rays (Mahadevan et al. 1997). Similar radiation processes in zero angular momentum spherical accretion have been extensively discussed (e.g. Melia 1992 and references).
### 2.4 Critical Quantities
ADAFs exist when accretion rates fall below a certain critical rate $`\dot{M}_{crit}=\dot{m}_{crit}\dot{M}_{Edd}`$. Such a critical rate arises because there exists a maximum accretion rate above which heating can be balanced by radiative cooling without any need for advective cooling (Rees et al. 1982, Narayan & Yi 1995b, Abramowicz et al. 1995, Narayan et al. 1998). In the single temperature case, i.e. $`T_e=T_i\propto r^{-1}`$, assuming optically thin bremsstrahlung cooling (good for $`r>10^3`$), we get $`q^+\propto m^{-2}r^{-4}\dot{m}\propto \dot{m}`$ and $`q^{-}=q_{brem}^{-}\propto \rho ^2T_e^{1/2}\propto \alpha ^{-2}m^{-2}r^{-7/2}\dot{m}^2\propto \dot{m}^2`$, so from $`q^+\approx q^{-}`$ we get the critical accretion rate $`\dot{m}_{crit}\propto \alpha ^2r^{-1/2}`$. In the single temperature case with $`T_e=T_i\approx T_{vir}`$, assuming synchrotron and Compton cooling, the critical accretion rate becomes $`\dot{m}_{crit}\approx 10^{-4}\alpha ^2`$. These well motivated critical rates are too low to be of practical interest (Esin et al. 1997). Esin et al. (1997) found that the critical rate deduced from the observed spectral transition in soft X-ray transients is much higher than the above rates.
In two-temperature ADAFs, the bottleneck in the energy transfer from ions to electrons defines another critical rate, valid for $`r<10^3`$ (e.g. Narayan et al. 1998). That is, using $`q^+\propto \dot{m}`$ and $`q^{-}\approx q^{ie}\propto \dot{m}^2`$ and equating the two rates, $`q^+=q^{ie}`$, gives a critical accretion rate $`\dot{m}_{crit}\approx 1\times 10^3(1-f)ϵ^{}\alpha ^2\beta ^{1/2}r^{-1/2}`$. Alternatively, $`t_{ie}=t_{acc}\equiv R/v`$ gives $`\dot{m}_{crit}\approx 0.3\alpha ^2`$. In sum, $`\dot{m}_{crit}\sim \alpha ^2`$ for $`r<10^3`$ and $`\dot{m}_{crit}\sim \alpha ^2r^{-1/2}`$ for $`r>10^3`$, which depicts the actual radial dependence of the critical accretion rate.
It is interesting to point out that there also exists a critical $`\alpha `$ (Chen et al. 1995): for $`\alpha <\alpha _{crit}(r)`$ the critical rate $`\dot{m}_{crit}`$ exists, while for $`\alpha >\alpha _{crit}(r)`$ it does not.
### 2.5 Some Recent Works on Heating Ions
Bisnovatyi-Kogan & Lovelace (1997) recently claimed that large electric fields parallel to magnetic fields can accelerate electrons and hence bypassing the bottleneck in energy transfer from ions to electrons, which could rule out ADAFs as a possible accretion flow type. However, such a possibility is realized only when a substantial resistivity on microscopic scale exists. This requires a small magnetic Reynolds number. Their proposal to use the macroscopic turbulent resistivity is not applicable on microscopic scales as pointed out by Blackman (1998) and Quataert (1998). Blackman (1998) argued that the Fermi acceleration by large scale magnetic fluctuations associated with MHD turbulence may lead to preferential ion heating and hence two-temperature plasma. This heating is not applicable to non-compressive Alfvenic turbulence which is most likely to be more important than the compressive mode. For weak magnetic fields substantially weaker than equipartition fields, Alfvenic component of MHD turbulence is dissipated on scales of proton Larmor radii (Gruzinov 1998, Quataert 1998). This mechanism favors ion heating and two-temperature plasma. For strong fields (i.e. near equipartition), the Alfvenic turbulence cascades to scales much smaller than proton Larmor radii and can directly heat electrons, which could cast doubt on ADAFs with equipartition strength magnetic fields. That is, there is a possibility that equipartition plasma doesn’t allow two-temperature plasma. This issue may ultimately be resolved by observed spectra.
### 2.6 ADAF Luminosity
In ADAFs, the observed radiative luminosity is $`L_{ADAF}=\int L(E)\,dE\approx \int q^{ie}\,dV`$ or, in terms of the ADAF radiative efficiency $`\eta _{ADAF}`$, $`L_{ADAF}=\eta _{ADAF}\dot{M}c^2`$, where $`\eta _{ADAF}\approx \eta _{eff}\times 0.2\dot{m}/\alpha ^2\propto \dot{m}\propto \dot{M}/M`$ (Narayan & Yi 1995b). In contrast, the thin disk luminosity is $`L_{thindisk}\approx \eta _{eff}\dot{M}c^2`$ with $`\eta _{eff}\approx 0.1`$ (e.g. Frank et al. 1992).
## 3 Some Physical Issues
### 3.1 Global Solutions
The self-similar solution considered so far applies to regions far from the inner and outer boundaries, where the physical scales of the system force the dynamical equations to deviate from self-similarity. A comprehensive summary of the Newtonian case is found in Kato et al. (1998). The task of finding the global solution is to find an eigenvalue $`j`$ (the specific angular momentum accreted by the central object) with proper boundary conditions. The proper boundary conditions are (i) an outer thin disk matching the inner ADAF, (ii) a sonic point for the transonic ADAF, and (iii) vanishing torque at the inner boundary. Solving the radial momentum equation, angular momentum equation, and energy equation, along with the implicit continuity equation, for $`v,\mathrm{\Omega },c_s,\rho `$ and the eigenvalue $`j`$, global solutions are found. The self-similar solution is a good approximation for a wide range of radii between the inner and outer boundaries. The pseudo-Newtonian potential case has been solved by Matsumoto et al. (1985), Narayan et al. (1997), and Chen et al. (1997). The major findings are as follows. (i) For $`\alpha <0.01`$, inefficient angular momentum transport results in slow radial accretion and a wide radial zone of super-Keplerian rotation. In the super-Keplerian rotation region, a thick torus-like structure with a funnel around the rotation axis forms, and the pressure profile shows a maximum, reminiscent of the ion torus model (e.g. Rees et al. 1982). (ii) For $`\alpha >0.01`$, efficient angular momentum transport and rapid accretion, $`v\sim \alpha v_K`$, occur. There is no pressure maximum and the accretion flows are quasi-spherical.
In the relativistic case with a spinning black hole (Abramowicz et al. 1996, Peitz & Appl 1997, Gammie & Popham 1998, Popham & Gammie 1998), the Newtonian and pseudo-Newtonian results are largely confirmed for $`r>10`$. For $`r<10`$, however, significant differences and spin effects are seen. Detailed calculations of emission spectra have been carried out by Jaroszynski & Kurpiewski (1997).
It has been an issue whether shocks form in transonic accretion flows. In steady calculations shocks are not seen, and in the time-dependent calculation of Igumenshchev et al. (1996) no shocks have been seen either. The time-dependent calculation of Manmoto et al. (1996) shows shock-like steepening in density waves, which could be responsible for ADAF variabilities.
### 3.2 Stability
Geometrically thin disks can be thermally and viscously unstable under certain circumstances (e.g. Frank et al. 1992). ADAFs are primarily stable against thermal and viscous perturbations: (i) ADAFs are stable against long wavelength perturbations (Narayan & Yi 1995b, Abramowicz et al. 1995). (ii) ADAFs are marginally stable against short wavelength perturbations in the single temperature case (Kato et al. 1996, 1997, Wu 1997). (iii) ADAFs are stable against short wavelength perturbations, both thermally and viscously, in the two temperature case (Wu & Li 1996). The stability of ADAFs has led to the argument that the thermally unstable Shapiro-Lightman-Eardley disk (Piran 1978) is a spatially transitional accretion flow linking the thermally unstable outer thin disk to the stable inner ADAF. In time-dependent calculations, small wavelength perturbations in single temperature ADAFs have been observed to grow while moving inward. The growth rate is, however, not rapid enough to affect the steady global structure (Manmoto et al. 1996).
## 4 X-ray Transients
### 4.1 Black Hole Systems
Detailed spectral fitting has been tried for black hole systems such as A0620-00, V404 Cyg (Narayan et al. 1996), Nova Mus 1991, Cyg X-1, GRO J0422+32, GRO J1719-24 (Esin et al. 1997), and 1E1740.7-2942 (Vilhu et al. 1997). The main physical parameters used in the spectral fitting are $`M`$, $`\dot{m}`$, $`\alpha `$, $`\beta `$, and $`r_{tr}`$, where the last one is the radius of accretion flow transition from outer thin disk to inner ADAF. The spectral fitting assumes that (i) the outer thin disk is joined to the inner ADAF and (ii) the outer thin disk is unstable against disk instability. The instability causes the heating/cooling waves to propagate inward, resulting in delays between different emission components (Lasota et al. 1996, Hameury et al. 1997). The transition radius $`r_{tr}`$ is crucial in determining spectra (Lasota et al. 1996, Narayan et al. 1996,1998, Esin et al. 1998).
### 4.2 Thin Disk - ADAF Transition
ADAFs exist when $`q^{}<q^+`$ which in principle determines $`r_{tr}`$. In the single temperature, bremsstrahlung dominated case,
$$q_{vis}^+\propto m^{-2}r^{-4}\dot{m}\propto r^{-4}$$
(21)
$$q_{brem}^{-}\propto \rho ^2T^{1/2}\propto \alpha ^{-2}m^{-2}r^{-7/2}\dot{m}^2\propto r^{-7/2}$$
(22)
and $`q^+=q^{-}`$ gives (Honma 1996)
$$r_{tr}\approx 3\times 10^2(\alpha ^4/\dot{m}^2).$$
(23)
In more realistic cases, details of transition are unknown. For instance, Honma (1996) considers the radial turbulent heat diffusion near the interface between the thin disk and the ADAF. Spectral fitting gives a different result on $`r_{tr}`$ (Esin et al. 1997). The issue of the accretion flow transition still remains unresolved.
### 4.3 Neutron Star Systems
Neutron star transients have not been well studied using the ADAF models mainly due to the lack of sufficiently well-developed physical understanding of accretion flows near the neutron star surface. The radiation feedback from the soft radiation emitted by the stellar surface makes the spectral calculations considerably more complicated (Narayan & Yi 1995b, Yi et al. 1996). Moreover, it is unclear whether the spectral transition similar to that of black hole systems exists in neutron star systems.
### 4.4 Accretion-Powered X-ray Pulsars
A spectral transition in neutron star systems could be very difficult to detect if thermalization near the neutron star surface occurs rapidly (e.g. Yi et al. 1996). Accretion-powered X-ray pulsars could, however, provide an observable signature of the temporal accretion flow transition. In accretion-powered pulsars such as 4U 1626-67, GX 1+4, and OAO 1657-415, abrupt torque reversals have been observed which are extremely difficult to explain within the conventional models involving disk-magnetospheric interaction (Chakrabarty et al. 1997). The difficulties arise mainly from (i) the short reversal time scales, (ii) the nearly identical spin-up and spin-down rates, (iii) the small X-ray luminosity changes, (iv) the significant spectral transition reported in 4U 1626-67 (Vaughan & Kitamoto 1997), and (v) the torque-flux correlation observed in GX 1+4 (Chakrabarty et al. 1997).
The observed phenomenon is well accounted for by an accretion disk transition triggered by a gradual, small amplitude modulation of the mass accretion rate. The sudden reversals occur at a rate $`\sim 10^{16}`$–$`10^{17}\mathrm{g}/\mathrm{s}`$ when the accretion flow makes a transition from (to) a primarily Keplerian flow to (from) a substantially sub-Keplerian, radially advective flow (Yi et al. 1997, Yi & Wheeler 1998). The proposed transition model naturally shows that (i) the transition time scale is likely to be shorter than days, (ii) the required accretion rate change is at the level of a few tens of percent, and (iii) the abrupt reversal is a signature of a pulsar system near spin equilibrium, with small mass accretion rate modulations near the critical accretion rate on a time scale of years. Other possible explanations for the spectral transition and the torque-flux correlation have been suggested (Nelson et al. 1997, van Kerkwijk et al. 1998), with varying difficulties in explaining the above observational facts. The accretion flow transition is similar to those in black hole transients and cataclysmic variables, which strongly suggests a common physical origin (Yi et al. 1997). The transition time scale is $`t_{thermal}\sim (\alpha \mathrm{\Omega }_K)^{-1}`$ or $`t_{vis}\sim R/\alpha c_s\sim (R/H)t_{thermal}`$, where the latter time scale becomes $`\sim 10^3\mathrm{s}`$ for $`\alpha \sim 0.3`$, $`R\sim 10^9\mathrm{cm}`$, $`\dot{M}\sim 10^{16}\mathrm{g}/\mathrm{s}`$. This time scale is short enough to induce a transition on a time scale of a day or less.
If the accretion flow’s thermal pressure is a substantial fraction $`\xi `$ of the dynamical pressure, i.e. $`c_s^2\approx \xi R^2\mathrm{\Omega }_K^2`$, the thickness of the flow is $`H\approx \xi ^{1/2}R`$, the radial accretion speed is $`v_R\approx \alpha \xi R\mathrm{\Omega }_K`$, and the angular rotational frequency is $`\mathrm{\Omega }\approx (1-5\xi /2-\alpha ^2\xi ^2/2)^{1/2}\mathrm{\Omega }_K=A\mathrm{\Omega }_K`$, where $`\xi \to 0`$ is the Keplerian limit, with $`A<1`$ for ADAFs and $`A=1`$ for thin Keplerian disks. When $`A<1`$, $`R_c^{\prime }=A^{2/3}R_c`$, where $`R_c`$ is the Keplerian corotation radius and $`R_c^{\prime }`$ is the sub-Keplerian corotation radius. The accretion flow is truncated at a radius $`R_o^{\prime }`$, and using $`N_0^{\prime }=A\dot{M}(GM_{\ast }R_o^{\prime })^{1/2}`$ the torque on the star becomes
$$\frac{N^{\prime }}{N_0^{\prime }}=\frac{7}{6}\frac{1-(8/7)(R_o^{\prime }/R_c^{\prime })^{3/2}}{1-(R_o^{\prime }/R_c^{\prime })^{3/2}}$$
(24)
which pushes the pulsar’s spin toward an equilibrium spin $`P_{eq}^{\prime }=P_{eq}/A>P_{eq}`$, where primes denote quantities after the transition from thin disk to ADAF. In this picture, torque reversal is expected if
$$B_{\ast }\approx 5\times 10^{11}L_{x,36}^{1/2}P_{\ast ,10}^{1/2}G$$
(25)
where $`L_{x,36}=L_x/10^{36}erg/s`$ is the X-ray luminosity and $`P_{,10}=P_{}/10s`$ is the pulsar spin period. Observed quasi-periodic oscillation (QPO) periods tightly constrain the proposed model. Yi & Grindlay (1998) discuss some possible implications on spin-up of LMXBs to MSPs when the accretion flows are ADAFs in these systems.
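The torque law of Eq. (24) is easy to explore numerically; the sketch below evaluates $`N^{\prime }/N_0^{\prime }`$ as a function of the fastness ratio $`R_o^{\prime }/R_c^{\prime }`$ and locates the sign change at $`(R_o^{\prime }/R_c^{\prime })^{3/2}=7/8`$.

```python
# Sketch of the torque law, Eq. (24): N'/N0' versus the fastness ratio
# x = R_o'/R_c'; torque reversal (N' = 0) occurs where the numerator
# vanishes, i.e. at x = (7/8)**(2/3) ~ 0.915.
def torque_ratio(x):
    """x = R_o'/R_c' (must satisfy x != 1)."""
    y = x**1.5
    return (7.0/6.0) * (1.0 - (8.0/7.0)*y) / (1.0 - y)

for x in (0.5, 0.8, 0.91, 0.95):
    print(x, torque_ratio(x))
```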
### 4.5 Energetic Protons: Lithium Production
Energetic ions present in ADAFs are capable of nuclear spallation. Lithium production in ADAFs in binary systems and in ion tori in galactic nuclei has been discussed by Ramadurai & Rees (1985), Jin (1990), and Yi & Narayan (1997). Using the self-similar solution, the relevant physical quantities are as follows: number density of protons $`n_H\approx 6\times 10^{20}m^{-1}\dot{m}r^{-3/2}cm^{-3}`$, number density of $`\alpha `$ particles $`n_\alpha \approx 5\times 10^{19}m^{-1}\dot{m}r^{-3/2}cm^{-3}`$, energy per nucleon $`E\approx 300r^{-1}MeV`$, and radial accretion speed $`v_R\approx 2\times 10^9r^{-1/2}cm/s`$. The production of $`{}_{}{}^{7}Li`$ dominates, and the production cross-section is $`\sigma _+(E)\approx 100(E/100MeV)^{-2}mbarn`$ for $`E\gtrsim 8.5MeV`$. Continuous production of $`{}_{}{}^{7}Li`$ within the accretion flow leads to an increase in the Lithium abundance according to
$$\frac{\mathrm{\Delta }n_{Li}}{n_H}=\frac{1}{2}\sigma _+(E)v_r\frac{n_\alpha ^2}{n_H}\mathrm{\Delta }t_{flow}$$
(26)
where $`\mathrm{\Delta }t_{flow}=\mathrm{\Delta }R/v_R`$. The enrichment results in the terminal abundance $`n_{Li}/n_H=\int d(n_{Li}/n_H)\approx 0.1\dot{m}`$, or the total Lithium mass production rate $`\dot{M}_{Li}\approx 2\times 10^{-8}m\dot{m}^2M_{\odot }/yr`$.
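As an order-of-magnitude check of Eq. (26), the sketch below integrates $`d(n_{Li}/n_H)`$ over radius using the scalings quoted above; the radial grid and the example $`(m,\dot{m})`$ are illustrative, and the simple cross-section cutoff at 8.5 MeV is an assumption.

```python
# Crude order-of-magnitude sketch of Eq. (26): accumulate the 7Li
# abundance by summing d(n_Li/n_H) inward over r with the self-similar
# scalings quoted in the text.
import numpy as np

def li_abundance(m, mdot, r_in=1.0, r_out=100.0, nr=2000):
    r = np.logspace(np.log10(r_out), np.log10(r_in), nr)
    Rs  = 2.95e5 * m                     # Schwarzschild radius (cm)
    n_H = 6e20 * mdot / (m * r**1.5)     # cm^-3
    n_a = 5e19 * mdot / (m * r**1.5)     # cm^-3
    E   = 300.0 / r                      # MeV per nucleon
    sig = np.where(E > 8.5, 1e-25 * (E/100.0)**-2, 0.0)  # cm^2
    v_r = 2e9 / np.sqrt(r)               # cm/s
    dR  = np.abs(np.gradient(r)) * Rs    # cm
    dt  = dR / v_r
    return np.sum(0.5 * sig * v_r * n_a**2 / n_H * dt)

print(li_abundance(m=10, mdot=0.01))     # compare with the quoted ~0.1*mdot
```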
It is expected that $`{}_{}{}^{7}Li`$ enrichment occurs around NSs and BHs containing ADAFs, which has been seen in recent precision spectroscopic measurements of $`{}_{}{}^{7}Li`$ in V404 Cyg, A0620-00, GS 2000+25, Nova Mus 1991, Cen X-4 (Martin et al. 1994 and references therein). In contrast, WD systems show no such effect, which indicates that only at $`r1`$ relativistic energies are reached while $`r10^3`$ is the inner most radius in the WD systems, which gives $`E1MeV`$.
### 4.6 Thermalization of Particles
Although we have adopted the thermal temperatures for ions and electrons, it is not proven that particle energy distributions are adequately approximated by thermal distributions. Recent investigations (e.g. Mahadevan & Quataert 1997, Quataert 1998, Blackman 1998) have suggested some limited information on this unresolved issue.
In most ADAF models, protons and ions are energized by viscous heating, which is mostly unspecified. Alfvenic turbulence (which does not result in strongly non-thermal distributions) and Fermi acceleration (leading to power-law tails) have been considered. Coulomb collisions and synchrotron absorption do not lead to rapid thermalization of protons. So far it appears that the acceleration mechanism itself determines the proton energy distributions. Thermalization of electrons can occur more easily (Ghisellini & Svensson 1991). Coulomb collisions can thermalize electrons for $`\dot{m}>10^{-2}\alpha ^2`$. Synchrotron self-absorption leads to thermalization when $`\dot{m}>10^{-5}\alpha ^2r`$. The electron energy distributions directly affect the radio emission spectra.
Recently Mahadevan (1998) pointed out that the low frequency ($`\nu <10^9Hz`$) radio spectrum of Sgr $`A^{\ast }`$ could be contributed by electrons and positrons produced by charged pions from proton-proton collisions, and that neutral pion production could account for the tentative detection of gamma-rays in the direction of Sgr $`A^{\ast }`$. In both cases, the particle distributions need to be strongly nonthermal and the results depend very sensitively on the high energy tails.
## 5 Galactic Nuclei
ADAFs may exist in galactic nuclei, including the Galactic center source Sgr $`A^{\ast }`$. For our discussions, we define $`m_7=m/10^7`$, $`\dot{m}_3=\dot{m}/10^{-3}`$, and $`R_s=2GM/c^2=3\times 10^{12}m_7\mathrm{cm}`$.
When the accretion flow is a thin disk, the luminosity is $`L=\eta \dot{M}c^2`$ with efficiency $`\eta =\eta _{eff}\approx 0.1`$ (e.g. Frank et al. 1992). Although the total luminosity is high, the emission temperature of the disk, $`T_{disk}\approx 6\times 10^6m_7^{-1/5}\dot{m}_3^{3/10}r^{-3/4}K`$, is too low to account for X-ray emission. The disk luminosity $`L_{disk}\approx 1\times 10^{42}m_7\dot{m}_3erg/s`$ is expected to emerge in the optical/UV/soft X-ray bands. X-ray and radio emission are usually accounted for by an optically thin corona and radio jets. For the latter, the estimated radio power is $`L_{jet}\approx 1\times 10^{42}\overline{a}^2\eta _{jet}m_7\dot{m}_3erg/s`$, where $`\overline{a}`$ is the black hole spin parameter and $`\eta _{jet}`$ is the jet radiative efficiency.
In ADAFs, the relevant physical quantities scaled for galactic nuclei are: equipartition magnetic field $`B\approx 1\times 10^4m_7^{-1/2}\dot{m}_3^{1/2}r^{-5/4}G`$, electron scattering depth $`\tau _{es}\approx 5\times 10^{-2}\dot{m}_3`$, ion temperature $`T_i\approx 2\times 10^{12}r^{-1}K`$, and electron temperature $`T_e\approx 5\times 10^9K`$. ADAFs are expected to exist when the mass accretion rate falls below $`\dot{m}_{crit}=\dot{M}_{crit}/\dot{M}_{Edd}\approx 0.3\alpha ^2\sim 10^{-3}`$–$`10^{-2}`$. Radio emission is easily explained by synchrotron emission with the characteristic synchrotron emission frequency (Yi & Boughn 1998ab)
$$\nu _{sync}\approx 1\times 10^{12}m_7^{-1/2}\dot{m}_3^{1/2}r^{-5/4}T_{e9}^2Hz$$
(27)
where $`T_{e9}=T_e/10^9K\approx 5`$. The highest synchrotron radio emission frequency is $`\nu _{max}=\nu _{sync}(r\approx 1)\approx 3\times 10^{13}m_7^{-1/2}\dot{m}_3^{1/2}Hz`$, which comes from the innermost region of the ADAF near the black hole horizon. The radio luminosity is
$$L_R\equiv \nu L_\nu ^{sync}\approx 2\times 10^{32}x_{M3}^{8/5}T_{e9}^{21/5}m_7^{6/5}\dot{m}_3^{4/5}\nu _{10}^{7/5}erg/s$$
(28)
where $`x_{M3}=x_M/10^3`$ is a dimensionless synchrotron self-absorption parameter and $`\nu _{10}=\nu /10^{10}Hz`$. ADAF radio emission could be tested by the radio source size - frequency relation. ADAFs predict such a relation
$$\theta (\nu )\approx 2m_7^{3/5}\dot{m}_3^{2/5}\nu _{10}^{-4/5}(D/10Mpc)^{-1}\mu as$$
(29)
where the angular size $`\theta (\nu )`$ becomes of order $`mas`$ for distance scales $`D\sim 10kpc`$.
Optical/UV/X-rays in ADAFs arise from inverse Compton scattering of radio synchrotron photons, and hard X-rays come from bremsstrahlung and multiple Compton scattering. For $`\dot{m}<10^{-3}`$, X-ray emission is dominated by bremsstrahlung, $`L_x\approx L_x^{brem}\propto m\dot{m}^2`$, and the radio luminosity is $`L_R\propto m^{8/5}\dot{m}^{6/5}`$, which suggests $`L_R\propto mL_x^{3/5}`$. For $`\dot{m}>10^{-3}`$, Compton scattering gives the X-ray luminosity $`L_x^{Compt}\propto \dot{m}^{7/5+N}`$ with $`N\approx 2`$. In this case, we expect $`L_R\propto mL_x^{6/5(N+1)}`$. Yi & Boughn (1998ab) have derived and tested the radio/X-ray luminosity relation for ADAFs using $`L_x=L_x(2`$–$`10keV)`$;
$$L_R\approx 10^{36}m_7(\nu /15GHz)^{7/5}(L_x/10^{40}erg/s)^xerg/s$$
(30)
where $`x\approx 1/5`$ for $`\dot{m}<10^{-3}`$ and $`x\approx 1/10`$ for $`\dot{m}>10^{-3}`$, or $`L_{R,adv}/L_{x,adv}\propto mL_{x,adv}^{-1}`$.
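A minimal sketch of Eq. (30); the choice of $`x`$ for the two accretion-rate regimes follows the text, and the example inputs are illustrative.

```python
# Sketch of the ADAF radio/X-ray relation, Eq. (30): predicted radio
# luminosity for a given black-hole mass and 2-10 keV luminosity.
def L_radio(m7, Lx, nu_GHz=15.0, mdot_low=True):
    x = 0.2 if mdot_low else 0.1          # x ~ 1/5 or ~ 1/10
    return 1e36 * m7 * (nu_GHz / 15.0)**1.4 * (Lx / 1e40)**x

print(L_radio(m7=1.0, Lx=1e41))           # erg/s
```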
ADAFs are likely to drive jets/outflows (Narayan & Yi 1995a). If jets are powered by black hole’s spin energy (e.g. Frank et al. 1992),
$$L_{R,jet}/L_{R,adv}\approx 4\times 10^5\overline{a}^2\eta _{jet}m_7^{-1/5}\dot{m}_3^{1/5}$$
(31)
and $`L_{R,jet}/L_{x,adv}\propto \overline{a}^2mL_{x,adv}^{-1}`$ are expected. That is, $`L_{R,jet}\gtrsim L_{R,adv}`$ for $`\overline{a}\gtrsim 2\times 10^{-3}\eta _{jet}^{-1/2}`$. It is often argued that radio-loud nuclei have $`\overline{a}<1`$. If a galactic nucleus contains a thin disk and a jet, $`L_{R,jet}/L_{x,disk}\approx \overline{a}^2ϵ_{jet}/\eta _{eff}\sim O(1)`$ is expected.
Characteristic ADAF emission spectra are determined primarily by $`\dot{m}`$ and weakly affected by the black hole mass $`M`$. Any combinations among $`L_x`$, $`L_R`$ and $`M`$ give useful information on the nature of emission from galactic nuclei.
### 5.1 Galactic Center Source Sgr A
Galactic center radio source Sgr $`A^{\ast }`$ is a prime candidate for an ADAF (Rees 1982, Narayan et al. 1995, 1998, Manmoto et al. 1997). Sgr $`A^{\ast }`$, which is at the dynamical center of the Galaxy, appears to contain a massive black hole with mass $`(2.5\pm 0.4)\times 10^6M_{\odot }`$. Wind accretion from the nearby IRS 16 wind is expected to provide a mass accretion rate $`\dot{M}>`$ a few $`\times 10^{-6}M_{\odot }/yr`$. With the conventional $`10\%`$ efficiency, such a high accretion rate would correspond to a luminosity of $`0.1\dot{M}c^2>10^{40}erg/s`$, some 3 to 4 orders of magnitude larger than the observed radio to gamma-ray luminosity of $`<10^{37}erg/s`$. Moreover, a standard thin disk would give peak emission in the near infrared, but the 2.2 micron upper limit of Menten et al. (1997) rules out this possibility. These facts strongly suggest that an ADAF is present in Sgr $`A^{\ast }`$.
Spectral fitting based on the observed $`M`$ and the estimated $`\dot{m}`$ adequately accounts for the observed emission from radio to hard X-ray. Bower & Backer (1998) measured an intrinsic source size $`<0.48mas`$ at 7 mm, which at 8.5 kpc gives a linear size of 4.1 AU and a lower limit on the brightness temperature of $`4.9\times 10^9K`$. Both the size and temperature measurements are consistent with the ADAF models. The inverted radio spectrum with a sharp cutoff, along with the absence of jet-like elongations near Sgr $`A^{\ast }`$, is also consistent with the ADAF predictions. X-ray constraints are rather uncertain. The ROSAT 0.8-2.5 keV luminosity is $`1.6\times 10^{34}erg/s`$ (Predehl & Trumper 1994) and the ASCA 2-10 keV luminosity is $`4.8\times 10^{35}erg/s`$ (Koyama et al. 1996). Both constraints are easily satisfied by an ADAF.
### 5.2 NGC 4258
NGC 4258 almost certainly has a central black hole with mass $`M=(3.5\pm 0.1)\times 10^7M_{\odot }`$, concentrated within 0.13 pc (for a distance of 6.4 Mpc) of the dynamical center. This source has an observed 2-10 keV X-ray luminosity of $`L_x=4\times 10^{40}erg/s`$ (Makishima et al. 1994). The optical luminosity $`L_{opt}<10^{42}erg/s`$ (Wilkes et al. 1994) provides an additional constraint. 22 GHz continuum emission (after subtraction of the jet component) has not been detected, with a 3$`\sigma `$ upper limit of 220 $`\mu `$Jy (Herrnstein et al. 1998), or a luminosity upper limit $`L_R(22GHz)<2.4\times 10^{35}erg/s`$. The non-detection of the core at 22 GHz could imply that $`\dot{m}\lesssim 10^{-2}`$ and $`r_{tr}\sim 30`$ if an ADAF exists in NGC 4258 (Gammie et al. 1998, cf. Lasota et al. 1996). Such a constraint is, however, highly suspect due to the possibility of strong variability in the radio emission from ADAFs (Blackman 1998, Ptak et al. 1998, Herrnstein et al. 1998).
### 5.3 M60, M87, NGC1068, and M31
Di Matteo & Fabian (1997b) have attempted to fit the M60 emission spectrum with an ADAF under the assumption that the accretion rate is close to the Bondi accretion rate, with $`M\sim 10^9M_{\odot }`$. Due to uncertainties and a lack of flux measurements, a definitive conclusion as to whether an ADAF exists needs more data in X-ray or other wavebands. Reynolds et al. (1996) claimed that an ADAF model spectrum for a black hole mass $`M=3\times 10^9M_{\odot }`$ and accretion rate $`\dot{m}\sim 10^{-3}`$ accounts for the observed fluxes of M87. However, such a conclusion is highly suspect because the extended radio emission component has not been properly removed; ADAFs themselves do not produce extended radio emission. NGC 1068 is an obscured Seyfert with a very high obscuration-corrected X-ray luminosity. Its X-rays are most likely dominated by scattering, and the X-ray luminosity may be too bright for ADAFs based on the estimated black hole mass of a few $`\times 10^7M_{\odot }`$ (Yi & Boughn 1998ab). Yi & Boughn (1998b) also considered M31, which has a central black hole of mass $`M=3\times 10^7M_{\odot }`$. The observed radio luminosity is too low for the observed X-ray luminosity if an ADAF is assumed around the black hole. It is highly likely that the X-ray luminosity is dominated by binary sources in the nucleus, while the radio emission may be due to a very weak ADAF.
### 5.4 X-ray Bright Galactic Nuclei
Yi & Boughn (1998ab) and Franceschini et al. (1998) applied the ADAF model to a small sample of X-ray bright galactic nuclei which have black hole mass estimates. Since ADAFs are most relevant for low luminosity, hard X-ray sources, faint, hard X-ray galactic nuclei are likely hosts of ADAFs (Fabian & Rees 1995, Di Matteo & Fabian 1997a, Yi & Boughn 1998ab). Hard spectrum, faint X-ray sources could contribute significantly to the diffuse X-ray background: 50% of the 2-10 keV XRB could be accounted for by ADAF sources with a comoving density of $`3\times 10^{-3}Mpc^{-3}`$ for $`L_x\sim 10^{41}erg/s`$, which is comparable to the local density of $`L_{\ast }`$ galaxies (Di Matteo & Fabian 1997a, Yi & Boughn 1998a). However, unless $`M>10^9M_{\odot }`$, $`L_x>10^{40}erg/s`$ would already be too bright for ADAFs to account for the observed X-ray background. This is because at high luminosities the X-ray emission is dominated by Compton scattering, which results in X-ray spectra very different from the background spectrum; the latter resembles bremsstrahlung-dominated X-ray spectra. That is, in relatively bright ADAFs, high $`\dot{m}`$’s correspond to the Compton-dominated cooling regime. Although a significant clumping of the ADAF gas would enhance bremsstrahlung over Compton emission (Di Matteo & Fabian 1997), such a possibility is difficult to realize.
So far, we have argued that ADAF models in galactic nuclei are testable due to the distinguishing characteristics of ADAFs. ADAFs with low radiative efficiency and high temperature are likely around massive black holes accreting at rates $`<10^{-2}\dot{M}_{Edd}`$. Hard X-ray emission and inverted spectrum radio emission from a compact core are expected. There exists a characteristic radio/X-ray luminosity relation, as shown by Yi & Boughn (1998ab). For known black hole masses, the existence of hot ADAFs can be tested by radio/X-ray observations, and black hole masses can be estimated from radio/X-ray luminosities. ADAF sources could, however, contain jets/outflows which contribute to the radio emission. Depending on the level of radio activity and the existence of extended radio emission features, galactic sources can be classified (Yi & Boughn 1998ab). Such a classification can be quantified in a manner similar to that adopted for Galactic X-ray sources. In fact, there exist interesting spectral similarities between Galactic binary X-ray sources and galactic nuclei.
We define X-ray bright galactic nuclei (XBGN) as galactic nuclei with X-ray luminosities in the range $`10^{40}<L_x<10^{42}erg/s`$, sub-luminous compared with the more powerful active galactic nuclei (AGN), which generally have $`L_x>10^{43}erg/s`$. Most XBGN are expected to overlap with emission line galaxies with $`L_x\sim 10^{39}`$–$`10^{42}erg/s`$. However, some low luminosity Seyferts with $`L_x>10^{42}erg/s`$ cannot be ruled out. Based on our discussion of ADAFs, for $`L_x\sim 10^{41}erg/s`$,
$$L_R\approx 4\times 10^{36}(M_{BH}/3\times 10^7M_{\odot })erg/s$$
(32)
at 20 GHz, with the characteristic inverted radio spectrum $`I_\nu \propto \nu ^{2/5}`$. These sources, at distances $`\lesssim 10(M/3\times 10^7M_{\odot })^{1/2}Mpc`$, should be detected as $`\sim 1mJy`$ point-like radio sources (Yi & Boughn 1998a). If the X-rays and radio are indeed from ADAFs, the black hole masses can be estimated (Yi & Boughn 1998b).
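The detectability estimate can be checked by converting Eq. (32) into a flux density, $`S_\nu =L_R/(4\pi D^2\nu )`$; the sketch below does this for the fiducial numbers quoted in the text.

```python
# Sketch: convert the predicted L_R of Eq. (32) into a flux density to
# check the ~1 mJy detectability claim at ~10 Mpc.
import math

def flux_mJy(L_R, D_Mpc, nu_Hz=20e9):
    D = D_Mpc * 3.086e24                       # cm
    S = L_R / (4.0 * math.pi * D**2 * nu_Hz)   # erg/s/cm^2/Hz
    return S / 1e-26                           # 1 mJy = 1e-26 erg/s/cm^2/Hz

print(flux_mJy(4e36, 10.0))                    # ~1 mJy at 10 Mpc
```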
Yi and Boughn (1998ab) proposed a source classification based on the known black hole masses and the ADAF radio/X-ray luminosity relation. Adopting Sgr $`A^{\ast }`$, NGC 4258, NGC 1068, NGC 1316, NGC 4261, and NGC 4594 as fiducial sources, a statistically incomplete sample of XBGN is classified into radio-loud XBGN and radio-quiet XBGN. The former show an observed radio luminosity $`L_{R,obs}\approx L_{R,jet}\gg L_{R,adv}`$, where $`L_{R,jet}`$ and $`L_{R,adv}`$ are the expected radio jet luminosity and ADAF radio luminosity, respectively. These sources are expected to show extended radio emission, are unlikely to have strongly inverted radio spectra, and may have compact ADAF radio emission from compact cores separate from the extended emission components. The latter show $`L_{R,obs}\approx L_{R,adv}`$, with the dominant emission components being compact cores with inverted spectra.
Kellermann et al. (1998) and Falcke (1998) show that jet-like radio emission features are common among AGN and emission line galaxies. Surprisingly, even in radio-quiet sources, elongated radio emission features are sometimes seen on small scales, which could imply that some type of jet/outflow activities are very common in galactic nuclei regardless of their large scale radio activities. In order to resolve this issue, high resolution radio measurements for nearby ($`<10Mpc`$) sources are crucial. Hard X-ray emission and inverted spectrum, compact radio emission are very likely to be found closely correlated.
### 5.5 QSO Evolution
Yi (1996) has suggested that the transition of accretion flows from thin disks to ADAFs at $`\dot{m}_{crit}\approx 0.3\alpha ^2`$ could account for the observed sudden decline in the number of bright QSOs from redshift $`z\approx 2`$ downward (see also Fabian & Rees 1995). Once ADAFs set in, the luminosity evolves according to
$$L=L_{ADAF}\approx 30\dot{m}^xL_{Edd}$$
(33)
where $`x\approx 2`$ and $`\dot{m}=\dot{M}/\dot{M}_{Edd}\propto \dot{M}/M`$. The last expression implies that even when the mass accretion rate $`\dot{M}`$ is kept constant, $`\dot{m}`$ decreases merely due to the growth of the black hole mass.
For instance, in a flat universe with no cosmological constant, for an initial black hole mass $`M=M_i`$ at $`z=z_i`$ and $`\dot{M}=\mathrm{constant}`$, when $`\dot{m}<\dot{m}_{crit}`$ and $`\dot{M}/H_o\gg M_i`$
$$L(z)\propto (1+z)^{3(x-1)/2}\left[(1+z)^{3/2}-(1+z_i)^{3/2}\right]^{1-x}$$
(34)
or
$$L(z)\propto (1+z)^{K(z)}$$
(35)
with
$$K(z)=\frac{3(x-1)}{2}\left[1+\frac{1}{[(1+z_i)/(1+z)]^{3/2}-1}\right]$$
(36)
which shows that the luminosity declines with a power law similar to that seen in observations (Yi 1996 and references therein). For an initial $`\dot{m}=\dot{m}_i\sim 1`$, the epoch at which ADAFs set in (i.e. at which $`\dot{m}=\dot{m}_{crit}`$ first occurs) is
$$1+z_c=\left[\left(\frac{t_{Edd}}{t_o}\right)\left(\frac{1}{\dot{m}_{crit}}-\frac{1}{\delta }\right)+(1+z_i)^{-3/2}\right]^{-2/3}$$
(37)
where $`\delta =\dot{M}t_{Edd}/M_i`$ and $`t_{Edd}=(\dot{M}_{Edd}/M)^{-1}=4.5\times 10^7(\eta _{eff}/0.1)yr`$. The observed sudden decline of QSOs at $`z\sim 2`$ is naturally accounted for.
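The decline can be made quantitative with a few lines of Python. The sketch below evaluates Eqs. (34)-(37) as reconstructed above; the fiducial Hubble time $`t_o\sim 10^{10}yr`$ and the example parameter values ($`z_i=5`$, $`\dot{m}_{crit}=0.03`$, $`\delta =10`$) are our illustrative choices, not values taken from Yi (1996).

```python
import numpy as np

def luminosity_decline(z, z_i, x=2.0):
    """Eq. (34): relative ADAF luminosity vs. redshift for constant Mdot in
    a flat matter-dominated universe (arbitrary normalization)."""
    u, u_i = 1.0 + z, 1.0 + z_i
    return u ** (1.5 * (x - 1.0)) / (u_i ** 1.5 - u ** 1.5) ** (x - 1.0)

def local_powerlaw_index(z, z_i, x=2.0):
    """Eq. (36): K(z) such that L is locally proportional to (1+z)^K."""
    a = ((1.0 + z_i) / (1.0 + z)) ** 1.5
    return 1.5 * (x - 1.0) * (1.0 + 1.0 / (a - 1.0))

def z_adaf_onset(z_i, mdot_crit, delta, t_edd_yr=4.5e7, t0_yr=1.0e10):
    """Eq. (37): redshift z_c at which mdot first drops to mdot_crit."""
    term = (t_edd_yr / t0_yr) * (1.0 / mdot_crit - 1.0 / delta)
    return (term + (1.0 + z_i) ** -1.5) ** (-2.0 / 3.0) - 1.0

# Example: a QSO starting at z_i = 5 with delta = 10 reaches mdot_crit = 0.03
# at z_c ~ 1.6, after which L declines roughly as (1+z)^K(z).
print(z_adaf_onset(5.0, 0.03, 10.0), local_powerlaw_index(1.0, 5.0))
```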
### 5.6 Outflows
ADAFs are prone to outflows or jets (Narayan & Yi 1994,1995a) although a self-consistent inflow/outflow solution has not been found yet (cf. Xu & Chen 1997). It remains to be seen if a self-consistent inflow/outflow solution can account for compact and extended jet-like emission components in XBGN.
## 6 Some Unresolved Issues
ADAFs may exist in sources spanning many decades in mass of the compact accreting object. Some of the old outstanding issues, most of which concern low luminosities and hard X-ray emission, are plausibly resolved by various versions of ADAF models. A number of unsolved problems remain, however, within the ADAF framework.
(i) The physics of the accretion flow transition, temporal and spatial, remains unclear. The spatial transition (i.e. from outer thin disk to inner ADAF) is to some extent better understood than the temporal transition. However, there has not been an adequate explanation for $`r_{tr}`$. It remains unsolved why the disk flow makes a transition to an ADAF with little, if any, change of accretion rate.

(ii) Even luminous systems show energetic X-ray emission which is absent in the thin disk models. It is often assumed that these sources have X-ray emitting coronae, for which little physics is known. It is crucial to link the ADAF models to corona models with a proper physical understanding.

(iii) ADAF emission could be highly variable. The issue of steady vs. non-steady ADAFs is directly related to the observed variabilities in ADAF candidate sources, which show occasional non-detections.

(iv) A number of plasma astrophysical issues remain to be solved. Is the two-temperature flow physically allowed? Are particles rapidly thermalized? What is the correct strength of the magnetic fields responsible for synchrotron emission?

(v) Observationally, faint X-ray sources, which could be seen by high resolution, high sensitivity experiments such as AXAF, should be studied in great detail. Ultimately, ADAF-related issues, and the very question of the future of ADAFs, are likely to be answered by observations.
# Outburst in the Polarized Structure of the Compact Jet of 3C 454.3
## 1 Introduction
The quasar 3C 454.3 at redshift $`z`$=0.859 is one of the brightest extragalactic radio sources. It is an optically violent variable with a relatively high total linear polarization. It was the subject of the first polarimetric VLBI observation (Cotton et al. Co84 (1984)) at a wavelength of 13 cm. Pauliny-Toth et al. (PT87 (1987)) presented the results of an extensive monitoring program of this source, covering about 5 yr of observations at 2.8 cm. These observations revealed the existence of superluminal components, with apparent proper motions between 0.21 and 0.35 mas yr<sup>-1</sup>, or equivalently between 4.4 and 7.3 $`h^{-1}c`$ ($`H_0`$= 100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0`$=0.5), and a pair of quasi-stationary components, one situated at about 0.6 mas from the core, and a second one at about 1 mas, the latter of which was only detected from 1983.8 to 1984.9. A higher resolution polarimetric 6 cm VLBI map was presented by Cawthorne & Gabuzda (CG96 (1996)). This showed a curving jet with a magnetic field aligned with the jet axis, except for the inner component K7 (which can be identified with the stationary component at 0.6 mas observed by Pauliny-Toth et al. PT87 (1987)), whose magnetic field lay almost perpendicular to the direction of the jet flow. Kemball et al. (Ke96 (1996)) presented the first polarimetric 7 mm Very Long Baseline Array (VLBA)<sup>1</sup><sup>1</sup>1The VLBA is an instrument of the National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. image of 3C 454.3. This revealed a three-component structure in polarized intensity, consisting of the core, the stationary component previously found by Pauliny-Toth et al. (PT87 (1987)) at 0.6 mas, and a new component located between both of these. This component was determined to have a polarization position angle almost perpendicular to that of the stationary component, which has a magnetic field perpendicular to the jet axis, as previously observed by Cawthorne & Gabuzda (CG96 (1996)). Marscher (Al98 (1998)) presented a sequence of eleven 43 GHz VLBA images covering two years of observations starting in late 1994. A proper motion of 0.28 $`\pm `$ 0.02 mas yr<sup>-1</sup> was observed for a component in the inner milliarcsecond structure. These observations revealed no stationary component at the eastern end of the jet, but rather a weak, roughly stationary component 0.2 mas downstream of it. Further 5 and 8.4 GHz VLBI observations of 3C 454.3 were presented by Pauliny-Toth (PT98 (1998)), showing mean proper motions of 0.68 $`\pm `$ 0.02 mas yr<sup>-1</sup> along a curved path at a distance of 2–4 mas from the “core.”
## 2 Observations
We present three-epoch polarimetric observations of 3C 454.3 obtained with the VLBA. The first two observations were performed on 1996 November 11 and December 22 at 1.3 cm and 7 mm, in which 3C 454.3 served as a calibrator for 3C 120 (Gómez et al. JL98 (1998)). The data were recorded in 1-bit sampling VLBA format with 32 MHz bandwidth per circular polarization. The reduction of the data was performed within the NRAO Astronomical Image Processing System (AIPS) software. The instrumental polarization was determined using the feed solution algorithm developed by Leppänen et al. (Le95 (1995)) on the source 0420-014. Comparison with other polarimetric observations carried out at similar epochs and frequencies revealed good agreement in the determination of the D-terms (M. Lister, private communication), which were used to calibrate the electric-vector position angle (EVPA) with an error that we estimate to be within 10°. We also refer the readers to Gómez et al. (JL98 (1998)) for further details about the reduction and calibration of the data.
Figure 1 shows the VLBA images obtained for 0420-014. Only slight structural changes are observed between both frequencies and epochs. The total and polarized intensity images are dominated by a strong core component, with an EVPA in the east-west direction, which smoothly rotates toward the southwest, in the direction of the jet structure. This represents a rotation of the EVPA by about 45° with respect to that measured by Kemball et al. (Ke96 (1996)) two years previously, accompanied by a decrease in the peak percentage linear polarization to less than 2.2%.
The third 43 GHz polarimetric VLBA observation took place on 1997 July 30–31 as part of a $`\gamma `$-ray blazar monitoring program. The data analysis was similar to the previous two epochs, except that a component of 3C 279, with its EVPA exactly parallel to the component’s position angle measured with respect to the core, was used to calibrate the EVPA. We estimate an error in the EVPA absolute orientation to be less than 5° for this epoch.
## 3 Results
Figures 2 and 3 show images of 3C 454.3 obtained with the VLBA at 22 and 43 GHz at the three epochs. Tables 1 and 2 summarize the physical parameters obtained for 3C 454.3. Tabulated data correspond to total flux density ($`S`$), polarized flux density ($`p`$), degree of polarization ($`m`$), EVPA ($`\chi `$), separation ($`r`$) and structural position angle ($`\theta `$) relative to the easternmost bright component [which we refer to as the “core” despite the fact that this component is not always bright (Marscher Al98 (1998)) and may not be completely stationary (see below)], and angular size (FWHM). Components in the total intensity images were analyzed by model fitting the uv data with circular Gaussian components within the software Difmap (Shepherd, Pearson, & Taylor Se94 (1994)). Components seen in the images of polarized intensity are not always coincident with maxima in total intensity (see also Kemball et al. Ke96 (1996); Gómez et al. JL98 (1998)); therefore, we can only obtain estimates of $`m`$ by the approximation that both maxima are coincident.
### 3.1 Source structure
Total intensity 1.3 cm images show a double component structure, very similar to that observed by Kemball et al. (Ke96 (1996)) at 7 mm, consisting of the core and component St. This appears at a very similar separation from the core to that of component 2 in Pauliny-Toth et al. (PT87 (1987)), component K7 of Cawthorne & Gabuzda (CG96 (1996)), and component 2 of Kemball et al. (Ke96 (1996)). Hence, we identify it as the same stationary component, first detected in 1983.8 by Pauliny-Toth et al. (PT87 (1987)). We notice, however, that St seems to have been observed at different position angles from the core, from values close to -100° measured by Pauliny-Toth et al. (PT87 (1987)) and Cawthorne & Gabuzda (CG96 (1996)), to about -70° measured by Kemball et al. (Ke96 (1996)), in closer agreement with the values we obtain. This is indicative of a swing toward the north in the inner jet between the 1980s and 1990s. Furthermore, our images show a systematic difference between the position angles and distances from the core of St for both observing wavelengths. This may be due to changes in the internal structure of St or the core, most probably the latter, since such changes are expected during the ejection of new components from the core. Indeed, our 7 mm images reveal the existence of component A, blended with the core in the corresponding 1.3 cm images.
Figure 3 also shows a component between the core and St, at a distance from the core in good agreement with that expected from extrapolating the motion of A between the previous two epochs (see Fig. 2). Therefore, we identify it as the same component, obtaining a proper motion relative to the core for the three combined epochs of $`\mu `$=0.14 $`\pm `$0.02 mas yr<sup>-1</sup>, which corresponds to an apparent speed of 2.9$`\pm `$0.4 $`h^{-1}c`$. Relative to component St, the proper motion is $`\mu `$=0.18$`\pm `$0.02 mas yr<sup>-1</sup>, or 3.9$`\pm `$0.4 $`h^{-1}c`$. In either case, the motion is significantly slower than that observed by Marscher (Al98 (1998)), who measured $`\mu `$=0.28 $`\pm `$0.02 mas yr<sup>-1</sup> for a component found at 43 GHz to be moving between the core and St during the period December 1994-October 1995. Unless the orientation with respect to the observer of the inner jet in 3C 454.3 changed significantly between 1995 and 1996-1997, this would imply the ejection of components with intrinsically very different velocities in order to account for the differences in the observed proper motions. Even faster velocities were detected at larger scales by Pauliny-Toth (PT98 (1998)), whose observations at 5 and 8.4 GHz between 1984 and 1991.9 indicated a component moving along a curved path with a mean proper motion of $`\mu `$=0.68 $`\pm `$0.02 mas yr<sup>-1</sup>.
Assuming that A has maintained a constant speed since its ejection from the core, we estimate its birth at approximately 1995.7, coincident with a small outburst detected by the University of Michigan monitoring at 14.5 GHz<sup>2</sup><sup>2</sup>2The University of Michigan Radio Astronomy Observatory is supported by the National Science Foundation and by funds from the University of Michigan. and Metsähovi Radio Research Station monitoring data at 22 and 37 GHz (Teräsranta et al. Hi98 (1998)). Kemball et al. (Ke96 (1996)) observed a component — only detected in polarization — at a very similar separation from the core to that of A in July 1997, most probably associated with a previous ejection which they estimate took place at about 1994.4.
While component A did not show significant changes in flux during the three epochs, the core experienced a significant decrease in flux by about 1.5 Jy between the 1996.98 and 1997.58 epochs. Component St shows a very similar flux between both of the 1.3 cm epochs; however, at 7 mm its flux progressively decreased, with a total variation of more than 2 Jy between the 1996.86 and 1997.58 epochs. Similar large variations of flux were also found in St by Pauliny-Toth et al. (PT87 (1987)) at 2.8 cm.
Beyond component St, Fig. 3 shows a complex jet structure, with emission extending to the north and south. Model fitting reveals a faint component in the north direction, which we have labeled n; it appears in the polarized intensity images as well. A lower resolution 15 GHz image presented by Kellermann et al. (Ke98 (1998)) shows some indications of this structure in the form of a very extended core emission. 3C 454.3 is observed to extend initially to the west, presenting a relatively strong bend toward the northwest direction at about 4-5 mas from the core (Pauliny-Toth et al. PT87 (1987), Pauliny-Toth PT98 (1998); Cawthorne & Gabuzda CG96 (1996); Kellermann et al. Ke98 (1998)).
Figure 3 shows a faint extension of emission east of the core position. Unless it is due to calibration errors, its presence indicates emission upstream of the “core.” A similar structure was first detected by Marscher (Al98 (1998)) in a series of 43 GHz VLBA images covering about two years, starting December 1994. Other indications of emission upstream of the core have also been found in high resolution 43 GHz VLBA images of 3C 120 (Gómez et al. JL98 (1998)). A possible interpretation is that it corresponds to the actual region where the jet is being generated; such weak emission could be due to a lower flow Lorentz factor, as in the accelerating jet model of Marscher (Al80 (1980)), or associated with the birth of new moving components upstream of the core (Marscher Al98 (1998)). In this case the core would represent a recollimation shock — strong in our observations and weak in 1995 when observed by Marscher (Al98 (1998)), as expected theoretically (Daly & Marscher DM88 (1988); Gómez et al. JL95 (1995)).
### 3.2 Polarization
Polarized intensity images corresponding to the epochs at the end of 1996, shown in Fig. 2, reveal a sudden change in the polarized structure of the core at both observing frequencies in a 41-day interval. At 1.3 cm the core changes from being almost undetected in polarization to showing a polarized flux of 78 mJy, with a degree of polarization of $`m`$=2.2%. This change in polarized flux is accompanied by a rotation of $`\chi `$ by almost 60°. At 7 mm we observe a very similar situation, in which the core and component A remain undetected in polarization at the 1996.86 epoch, but at 1996.98 appear with a degree of polarization for the core similar to that observed at 1.3 cm, and a dramatic increase in polarization for A, with $`m`$=7.4%. Both the core and A show a similar $`\chi `$ to that observed at 1.3 cm for the core. The third 7 mm epoch reveals a rotation in $`\chi `$ of about 105° for component A, accompanied by a small decrease in $`m`$, while the core retained a similar $`\chi `$ but with its percentage polarization reduced to values similar to those of 1996.86 at 1.3 cm.
Component St is detected at all observing epochs and frequencies. Its degree of polarization maintains values between 1.3 and 2.2%, similar to that observed at 5 GHz by Cawthorne & Gabuzda (CG96 (1996)), except for the July 1997 epoch, in which it increases to 8.6%, close to that obtained by Kemball et al. (Ke96 (1996)). Component St seems to have a frequency dependent EVPA. Our 7 mm images show $`\chi `$ close to the east-west direction, similar to that observed by Kemball et al. (Ke96 (1996)). However, at 1.3 cm the EVPA of St presents a systematic offset of about 30°–40° with respect to the values measured at 7 mm. Cawthorne & Gabuzda (CG96 (1996)) obtained a value of 29° in observations at 6 cm, which suggests a rotation of $`\chi `$ in St toward the north-south direction with increasing wavelength. Broten, Macleod, & Vallée (BMV88 (1988)) obtained a rotation measure of -57 rad m<sup>-2</sup> for 3C 454.3, which may account for the rotation of $`\chi `$ between our 1.3 cm values and those presented by Cawthorne & Gabuzda (CG96 (1996)). However, this small rotation measure would not affect our 1.3 cm and 7 mm observations, and consequently no Faraday rotation corrections have been made for the measured EVPA.
## 4 Theoretical interpretation and conclusions
### 4.1 Stationary component St
The existence of stationary and moving components in 3C 454.3 is also found in several other sources, e.g. in 4C 39.25 (Alberdi et al. An93 (1993)). In this case, the stationary component is explained as being produced by a bend toward the observer, with the increased flux due to enhanced Doppler boosting. Similarly, we could explain the stationarity of St as due to a bend toward the line of sight. Indeed, the jet of 3C 454.3 is observed to bend toward the northwest direction at about 4-5 mas from the core (e.g., Pauliny-Toth et al. PT87 (1987), Pauliny-Toth PT98 (1998)). In this case, a change in the apparent motion of component A is expected as it moves along the hypothetical bent trajectory, as observed and simulated in the case of 4C 39.25. Computing the proper motion of A between each pair of consecutive epochs, we find some evidence of deceleration as it moves closer to St, as well as a progressive, although small, change in its position angle toward the northwest. With the information provided by the proper motion of A derived from the three epochs combined, we can estimate a maximum viewing angle of about 38° and a minimum Lorentz factor of 3.07. In order to obtain a deceleration of A as produced by a bend toward the observer, the viewing angle must be significantly smaller than the maximum allowed, whose final value depends on the actual Lorentz factor of A. In this case component A should also experience an increase in its flux, due to an enhancement of its Doppler boosting. However, only minor changes in the flux of A are observed across the three epochs. Of course, it is also possible that the apparent small deceleration of A could result from a true change in its bulk Lorentz factor. However, the large error affecting the proper motion determination between the first two epochs prevents us from drawing any solid conclusion regarding a possible deceleration of A.
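The kinematic limits quoted above follow from the standard superluminal-motion relations and are straightforward to reproduce. The Python sketch below is our illustration of those textbook formulas (it is not the authors' analysis code); plugging in the apparent speed of component A recovers the numbers in the text.

```python
import numpy as np

def kinematic_limits(beta_app):
    """Standard superluminal-motion limits for apparent speed beta_app (in
    units of c): minimum bulk Lorentz factor and maximum viewing angle."""
    gamma_min = np.sqrt(1.0 + beta_app ** 2)
    theta_max_deg = 2.0 * np.degrees(np.arctan(1.0 / beta_app))
    return gamma_min, theta_max_deg

# Apparent speed of component A quoted above (h = 1):
g, t = kinematic_limits(2.9)
print(f"Gamma_min = {g:.2f}, theta_max = {t:.0f} deg")  # -> 3.07 and 38 deg
```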
The existence of a bend toward the observer in the region of St could also explain the large changes in flux experienced by this component, with no significant changes in its position and structure. A moving component should pass through St, giving, during the interaction, the impression of a single component with increasing flux, subsequently fading progressively as it passes St and turns around the bend (e.g., Gómez et al. JL94 (1994)). Unless the jet in St bends in a plane containing the observer, which is a priori unlikely, future components that pass through St should move in a different direction in the plane of the sky after the event.
Another possibility to explain the stationarity of St is that it is produced by a recollimation shock in the jet flow. Numerical simulations of the relativistic hydrodynamics and emission of jets have shown that pressure mismatches between the jet and the external medium may result in the generation of internal oblique shocks (Gómez et al. JL95 (1995), JL97 (1997)). These shocks appear in the emission as stationary components due to the increased specific internal energy and rest-mass density. When a moving component passes through one of these recollimation shocks, both components would blend to appear as a single feature. This is accompanied by a “dragging” of the merged components downstream, because of the increase in the Mach number, as well as an enhancement of the emission. After the collision, the two components would split up, with the previously stationary component associated with the recollimation shock progressively fading and recovering its initial position and flux. This would give the appearance of motion upstream, as long as the initial physical conditions in the jet are recovered (Gómez et al. JL97 (1997)). Within this scenario, a moving component would not experience significant changes in its proper motion and flux as it approached the stationary component, similar to what is observed for A. This would give the impression of a quiescent merge of the two components. A similar situation, accompanied by a brief dragging of the stationary component, was observed for the merging of components K1 and K2 by Gabuzda et al. (De94 (1994)) in the BL Lac object 0735+178. In the case of a strong standing shock, as perhaps produced by a sudden change in the external medium pressure, more violent interactions with moving shocks may be expected.
We propose a consistent scenario for the inner region in 3C 454.3 in which both the core and component St represent strong recollimation shocks. When new components are generated, they should significantly increase the emission of the core, briefly dragging its position downstream. This interpretation is in very good agreement with the observations by Marscher (Al98 (1998)). These show a roughly stationary component, labeled S2 (corresponding to the component marked as the core in this paper) at about 0.2 mas downstream of the eastern end of the jet. New superluminal components seem to appear upstream of S2, which could explain the emission upstream of the core in Fig. 3. These observations also showed a slight motion downstream of S2, before it recovered its initial position as a moving component passed it, as predicted by the theory (Gómez et al. JL97 (1997)). A similar interaction would be expected when the moving component reaches the position of the next recollimation shock, corresponding to St. The large changes in the flux of St could then be explained by these interactions. However, in order to test this model, accurate measurements of the absolute positions of components are needed, which would be possible through a careful high resolution phase-reference monitoring program. Such measurements should provide the necessary information to confirm the constancy of the core–St separation (within the expected motions due to the passage of moving components) and the emergence and evolution of new components upstream of the core.
Cawthorne & Cobb (CC90 (1990)) showed that, depending on the jet flow Lorentz factor and the orientation with respect to the observer, conical shocks may show a polarization position angle parallel or perpendicular to the jet flow. In the limit of strong shocks, this would require small viewing angles, as measured in the rest frame of the shock, to explain the aligned EVPA with respect to the projected direction of the jet observed for the core and St.
Tables 1 and 2 reveal an optically thin spectrum for component St at both 1996 epochs, which eliminates opacity effects as being responsible for the systematic offset observed in the EVPA at different wavelengths. The observed EVPA for component St at 1.3 cm differs only by about 20° from that measured by Cawthorne & Gabuzda (CG96 (1996)) at 6 cm. Some of this discrepancy may be explained by Faraday rotation, which at 6 cm would produce a rotation of 12° (Broten, Macleod, & Vallée BMV88 (1988)). Hence, the difference in the EVPA these authors observed between St and the outer components is probably due to an intrinsic difference in nature, rather than to opacity effects. A possibility is that the outer components Cawthorne & Gabuzda (CG96 (1996)) observed represent weak plane-perpendicular moving shocks in a predominantly longitudinal magnetic field configuration. These shocks will slightly enhance the perpendicular component of the magnetic field (parallel to the shock front), but the increase will not be enough to overcome the initial longitudinal field. Hence the final net orientation of the field would remain aligned in the direction of the jet flow. As a consequence of the partial cancellation of the magnetic field produced by the shocks, a small degree of polarization is expected, which contrasts with the large values measured by Cawthorne & Gabuzda (CG96 (1996)) for the outer components. This cannot be applied to St, and we need to consider a different interpretation in terms of a conical shock or a bend, as outlined previously. If St corresponds to a bend, no changes in the EVPA are expected as a consequence of the change in curvature along the bent portion of the jet, and we need to assume an underlying perpendicular magnetic field for St to explain the observed EVPA, as opposed to that measured downstream by Cawthorne & Gabuzda (CG96 (1996)). Another possibility, still under the hypothesis of a bend at the position of St, is that there is another moving plane-perpendicular shock component passing through it, such that St represents the blended component, whose parallel EVPA is due to the enhancement by the shock of the perpendicular component of the magnetic field. However, it seems unlikely that such a situation would occur each time the source has been observed. In the case that St corresponds to an oblique shock, the observed EVPA can still be explained without needing to assume a change in the underlying magnetic field of St with respect to the outer components, in which case the magnetic field would be aligned with the jet axis throughout the entire jet. Depending on the jet flow Lorentz factor and viewing angle, Cawthorne & Cobb (CC90 (1990)) showed that a conical shock may exhibit an EVPA aligned with the jet axis. Those results were obtained considering an initially randomly oriented magnetic field, and need to be confirmed in the case of an initially aligned field.
### 4.2 Polarized outburst in superluminal component A
The polarized structural outburst observed in the 1996 epochs may be explained by assuming that, at the 1996.86 epoch, the EVPA of the core and component A were mutually perpendicular, producing a net cancellation of the polarized intensity. In this case, component A is required to have changed its EVPA by almost a full rotation of 90°, making it approximately aligned with that of the core (assumed to maintain an approximately constant EVPA), which led to the sudden appearance of both components in the polarized intensity images at 1996.98. Taking into account that the apparent core at 1.3 cm in fact corresponded to a blending of the core and component A, the resulting spectrum is rather flat, and we could assume that the rotation of 90° in the EVPA of A may be due to a change from being optically thick at 1996.86 to optically thin at 1996.98. We also note that the lack of polarization in the core region at the first epoch may be due to large opacity values. However, the fact that A remained undetected at 22 GHz prevents us from obtaining a reliable determination of its spectrum, and hence its opacity. It is also possible that the burst in polarization may result from drastic changes in the magnetic field configuration.
This represents a remarkably rapid change in the polarized structure of component A. From the timescale of variability, we can derive an upper limit to the size of A as $`R_{\mathrm{max}}=ct_{\mathrm{var}}[\delta /(1+z)]`$, where $`\delta `$ is the Doppler factor. Using the minimum Lorentz factor of 3.07 derived from the proper motion, and assuming a viewing angle of 1/$`\mathrm{\Gamma }`$, which maximizes the apparent velocity, we obtain a maximum size of $`\sim `$10 $`\mu `$as. The model-fit component sizes tabulated in Table 2 show that only for epoch 1997.58 can the size of component A be measured, with an estimated FWHM of 190 $`\mu `$as, well above the derived maximum. We note, however, that we cannot rule out that the observed variability may arise from the core. In this case the estimated sizes tabulated in Table 2 are very similar and also above the estimated maximum, except for epoch 1996.86, in which model fitting yields a delta-function component for the core. We therefore conclude that the Doppler factor must be higher than the minimum implied by these observations, and instead must be similar to those suggested by the faster proper motions observed by Marscher (Al98 (1998)) and Pauliny-Toth (PT98 (1998)).
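The size limit can be reproduced with a few lines of Python. In the sketch below, the Einstein-de Sitter angular-diameter distance (consistent with the $`H_0`$=100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0`$=0.5 conventions of Sect. 1, with $`h`$=1) and the choice of Doppler factor $`\delta `$ close to $`\mathrm{\Gamma }`$ for a viewing angle of 1/$`\mathrm{\Gamma }`$ are our assumptions for this illustration.

```python
import numpy as np

C = 2.998e10     # cm/s
MPC = 3.086e24   # cm

def d_a_eds(z, h=1.0):
    """Angular-diameter distance for an Einstein-de Sitter universe
    (q0 = 0.5, H0 = 100*h km/s/Mpc), our assumed cosmology."""
    d_h = 2.998e5 / (100.0 * h)                  # c/H0 in Mpc
    return 2.0 * d_h * (1.0 - (1.0 + z) ** -0.5) / (1.0 + z)

def r_max_cm(t_var_days, delta, z):
    """R_max = c * t_var * delta / (1+z), as in the text."""
    return C * t_var_days * 86400.0 * delta / (1.0 + z)

z = 0.859
r = r_max_cm(41.0, 3.07, z)                      # 41-day interval, delta ~ Gamma_min
theta_uas = r / (d_a_eds(z) * MPC) * 2.0626e11   # rad -> micro-arcsec
# Gives ~14 micro-arcsec, the same order as the ~10 micro-arcsec limit
# quoted above; the exact value depends on the assumed delta and distance.
print(f"R_max ~ {r:.2e} cm ~ {theta_uas:.0f} micro-arcsec")
```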
If component A changed its opacity from optically thick to thin between 1996.86 and 1996.98, it seems less plausible to assume that A became thick again in 1997.58, as would be required if we were to explain the further rotation of 90° in its EVPA in this way. To account for this extra rotation, we need to assume a change in the magnetic field of the underlying jet or in component A. If component A is associated with a moving plane-perpendicular shock, we expect an EVPA aligned with the direction of the jet flow when the component is optically thin. This could explain the value of $`\chi `$ observed in 1996.98. Since the underlying magnetic field remains aligned with the jet axis through the jet of 3C 454.3, as seems to be deduced from the outer components observed by Cawthorne & Gabuzda (CG96 (1996)), the rotation of 90° in the EVPA of A could be explained by assuming that the strength of the shock associated with it decreases as the component moves downstream. In 1996.98 the enhancement of the perpendicular component of the magnetic field produced by the shock associated with A would overcome the initial longitudinal field direction. However, once the shock has moved to the position observed in 1997.58, we find that the enhancement of the perpendicular field by a weaker shock would not be enough to change the initially aligned net field of the underlying jet, resulting in a net magnetic field parallel, and an EVPA perpendicular, to the jet axis. However, it then remains unclear why component A maintained a similar flux (even experiencing a small increase), as opposed to what would be expected in the case of a decrease in the shock strength. Within this scenario, component St would be required to be associated with a conical shock in order to obtain a net magnetic field perpendicular to the jet axis (assuming no magnetic field changes between the positions of A and St).
Further polarimetric high-resolution VLBA observations are required to test these hypotheses. The study would be improved significantly by performing phase-referencing to an external source, allowing a detailed determination of the proper motions of components. Such measurements would be of great importance for testing our hypothesis that recollimation shocks explain the nature of St and possibly the core. In this case, numerical simulations predict a temporary drag of these components, followed by a brief upstream motion to recover their initial positions (Gómez et al. JL97 (1997)). Polarimetric observations would provide the necessary information to discern between a possible rotation of the underlying magnetic field configuration and a change in the strength of shocks in the inner structure of 3C 454.3.
###### Acknowledgements.
This research was supported in part by Spain’s Dirección General de Investigación Científica y Técnica (DGICYT), grants PB94-1275 and PB97-1164, by NATO travel grant SA.5-2-03 (CRG/961228), by U.S. National Science Foundation grant AST-9802941, and by NASA through CGRO Guest Investigator Program grants NAG5-2508, NAG5-3829, and NAG5-7323 and RXTE Guest Investigator Program grants NAG5-3291, NAG5-4245, and NAG5-7338.
## I Introduction
The analysis of multilead electroencephalographic (EEG) time series is an important but difficult goal. Such an analysis is important because the EEG provides a relatively low-cost, noninvasive way to monitor brain behavior that can yield valuable insights about brain function, pathology, and treatment. The goal is difficult because of the great complexity of brain dynamics which remains poorly understood.
For EEG data analysis, the complexity of brain dynamics manifests itself in at least three different ways. First, EEG time series are nonstationary which makes difficult the application and interpretation of many methods of signal analysis including those based on statistics and nonlinear dynamics . It is not understood yet how to quantify the magnitude and time-dependence of nonstationarities, how to compare the nonstationarity of one EEG recording with another, or how to identify regions of approximately stationary behavior. Second, multielectrode EEG data are spatiotemporal in character since the time series are recorded simultaneously from electrodes at different locations on the scalp. Here the relation between time series at different electrodes is not well understood, e.g., to what extent is there redundancy of the time series from different electrodes or how does one interpret the correlations that may exist. A third difficulty is the statistical variability of EEG data, which can be substantial even for EEG recorded from the same patient under presumably similar conditions. This variability complicates the extraction of features that can be used to compare one EEG with another or to classify EEG for clinical purposes.
In this paper, we apply and compare several statistical and visualization methods to quantify the stationarity and redundancy of 21-electrode EEG time series measured during generalized tonic-clonic (GTC) seizures associated with electroconvulsive therapy (ECT) treatments. Such seizures are characterized by polyspike and slow-wave activity and are often observed to be nonstationary. Although nonstationarity and redundancy of EEG data have been studied previously, this earlier work has not addressed GTC seizures. ECT provides a unique opportunity for the study of GTC seizures because the seizures are induced under controlled conditions. The patient is anesthetized and given a neuromuscular relaxant (succinylcholine) which eliminates movement artifacts that would otherwise obscure the EEG data. The seizure is induced using a reproducible procedure in which a current of known amplitude and wave form is administered between two electrodes attached to particular locations on the scalp . We partly address the issue of statistical variability of the EEG time series by recording and comparing ECT-induced GTC seizures from ten patients.
Our analysis of EEG data associated with GTC seizures has two goals. One is to further basic science by understanding more clearly when EEG signals may be considered stationary so that particular signal analysis methods might be applied more successfully. Our results for GTC ECT-related seizures show that several commonly used criteria for stationarity such as a power spectral density (psd) and probability distribution functions (pdfs) applied to successive windows of data often do identify the same stationary parts of the data but there are qualifications as we discuss in Section V A below. In analyzing EEG redundancy, our results further suggest that the majority of electrodes are often stationary or nonstationary together. There is a high cross-correlation between electrodes (see Eq. (4) below), which has been observed in previous EEG research but not quantified in the context of GTC seizures .
A second goal of this paper is to understand more specifically the electrophysiology of GTC ECT seizures. The concept of “seizure generalization” is used frequently to describe spatial aspects of ECT seizures and has been considered to be a central factor in determining the therapeutic effectiveness and side-effects associated with ECT treatments. For example, it has been suggested that marginal seizures are less generalized spatially. Several authors have suggested that the differences in efficacy and side-effects of unilateral (UL) and bilateral (BL) ECT (the two most commonly employed electrode placements) are due to differences in generalization. Although ECT seizures have been considered to be generalized tonic-clonic seizures, we observe a variation in generalization through spatial inhomogeneities of the EEG which suggests that generalization is a graded phenomenon (not all or nothing).
Our data analysis leads to new conclusions. We quantify earlier claims that the mean frequency of an ECT seizure decreases steadily during the seizure. (This has recently also been reported with GTC seizure EEG data using a different set of techniques based on time-varying autoregressive modeling.) We also identify statistical differences in ECT seizures generated by unilateral and bilateral electrode stimulation. Our statistical analysis based on time delays also provides the first evidence for wave propagation during ECT seizures. We identify evidence that seizure activity is expressed regionally in the brain and that some regions are delayed in manifesting the statistical changes characteristic of seizure activity in other leads. This variation, particularly in the temporal and pre-frontal regions, may have implications for understanding the variable cognitive side-effects and anti-depressant efficacy associated with the induced seizures.
The rest of the paper consists of the following sections. In Section II, a survey of prior work is given. In Sections III and IV, details are discussed concerning how the ECT EEG data were clinically recorded and analyzed. In Section V, we discuss our results on stationarity and redundancy. Finally, in Section VI, we summarize our conclusions and discuss some questions for further study.
## II Previous Work
In this Section, we review prior work with an emphasis on stationarity and redundancy. In the following Section, we discuss details of the methods that we use to measure and to analyze the ten-patient ECT EEG data set recorded at Duke University’s Quantitative EEG Laboratory.
### Stationarity
Researchers have studied EEG stationarity with several methods and some of this earlier work motivated our own analysis. One method involves partitioning time series into equal-size non-overlapping segments (typically a few seconds long), calculating power spectra for each segment, and then studying how the power spectra evolve from one segment to the next . In another EEG study of patients in the eyes-closed waking state, researchers have compared the means of time series over successive segments and found that segments shorter than 12 seconds could often be considered stationary by this criterion . Stationarity has also been studied in EEG data during sleep by testing whether there were trends over time in the signal variance . Still another approach has been to compute and to compare probability distributions (pdfs) of the EEG signal over time .
### Redundancy
Many of the studies of stationarity mentioned above concern EEG data from only one electrode and direct observation of EEG time traces indicates that not all EEG channels are stationary or nonstationary together. Although correlations and redundancy have been studied for focal epilepsies , researchers have not yet studied these for generalized seizures . Instead, several authors who have studied ECT seizures have noted spatial inhomogeneities with a tendency for the EEG amplitude to be greatest in the central part of the scalp . An amplitude asymmetry in which electrodes on the right side have larger amplitude than those on the left has been described for right unilateral ECT (in which the stimulating electrodes are placed at the vertex and right temple) but this has not been observed for bilateral ECT (in which the electrodes are placed bi-temporally) . Researchers have also observed delays in the onset of seizure activity of about a second when comparing time series from two different leads .
Some preliminary studies have analyzed the inter-hemispheric redundancy of ictal EEG data but only in two-channel data, for electrodes Fp1 and Fp2 referenced to the ipsilateral mastoid. In the 2-5 Hz frequency band, greater coherence (defined below in Eq. (4)) in the first 6 seconds after the stimulus and a lower coherence in the 6 seconds immediately after the end of the seizure have been associated with a greater likelihood of ECT therapeutic benefit .
Several previous studies have also employed techniques for quantifying the spatial redundancy of EEG data. Such techniques include linear correlation , nonlinear correlation , the average amount of mutual information , coherence , and estimates of time-delay based on these measures . Motivated by these reports, we have studied interlead linear correlation, average amount of mutual information, and interlead time delays based on these two measures.
## III Clinical Methodology
In this section, we discuss the experimental details of obtaining 21-electrode ECT EEG data. As a first step, ten subjects were studied who had been clinically referred for ECT. These subjects consisted of 6 women and 4 men with ages ranging from 45-73 years, representing a typical clinical ECT population. Prior to each treatment, the barbiturate methohexital and the muscle relaxant succinylcholine were administered at a dosage of 1 mg/kg according to standard ECT practice . All subjects were free of any antidepressant, anticonvulsant, antipsychotic, or mood stabilizing medications for at least 5 days prior to the ECT course. A single seizure was recorded for each subject.
Experiments have shown that the dynamics and clinical benefits of an ECT seizure depend on the placement of the ECT electrodes on the scalp (e.g., UL or BL) and on the stimulus intensity as measured in units of the threshold intensity needed to induce a seizure . BL ECT may be more effective, and higher-above-threshold stimuli appear to be more efficacious, particularly for UL ECT. Of the ten subjects studied in this paper, eight received pulse right unilateral ECT and two received bilateral ECT. We note that the stimulus electrode placement was not controlled in this study but was determined clinically and happened to include more UL subjects. Nonetheless, this study allows a preliminary comparison of dynamical EEG differences in these two forms of treatment. Electrical stimulus dosing was administered via a standard clinical technique in which the seizure threshold at treatment 1 was determined and then a stimulus at subsequent treatments was delivered that was a multiple (in terms of charge) of the determined threshold .
During the ECT treatment, twenty-one channels of EEG data (nineteen EEG leads plus two eye leads) were recorded using the International 10/20 System with locations indicated as in Fig. 2. Because of the placement of ECT stimulus electrodes, position F5 was used (midway between positions F7 and F3) as was F6 (midway between F8 and F4) but for convenience we refer to F5 and F6 throughout as leads F7 and F8. All leads were referenced to linked ears and recorded using Ag/AgCl electrodes. The data were amplified and filtered using a Nihon-Khoden 4221 device (Nihon-Khoden Corp.) with a low-frequency cutoff of 1.6 Hz and a high-frequency cutoff of 70 Hz. The data were digitized at 256 Hz with 12-bit accuracy in the form of integers between the values -2048 and 2048. Although the use of succinylcholine greatly diminishes artifacts in the EEG that would otherwise be present, some artifacts may occasionally still occur. As a result, the second author, A.D.K., carefully screened all EEG data for artifacts. Brief segments of EEG data from 2 subjects were excluded from analysis on this basis.
## IV Methods for Quantifying Stationarity and Redundancy
In this section, we discuss the methods that we use to analyze the ECT EEG data for stationarity and redundancy. We first discuss stationarity, for which various quantities are calculated over successive equal-sized non-overlapping time windows and over each electrode of the multivariate recording. We next discuss measures for quantifying redundancy of the multivariate EEG data and how this redundancy may evolve in time. We conclude this section by discussing how one can study whether stationarity and redundancy are affected by time-delays between a given pair of electrodes.
We note that our emphasis in using these methods is somewhat different from that of a statistician or of a nonlinear dynamicist. Since GTC seizures terminate spontaneously 1-2 minutes after induction, the corresponding EEG time series are obviously nonstationary. There are then three interesting questions. One is simply how to characterize the nonstationary dynamics so as to provide insights into the properties of a GTC seizure, and this is the main emphasis of our work. A second question is whether there are significant windows of approximately stationary behavior which one could then treat by statistical and nonlinear techniques that assume stationarity. A third question, which we do not address here, is whether some hidden component of the EEG signals is statistically stationary, e.g., whether a particular filtering of the data would yield stationary behavior. Given the complexities of brain physiology and of the time series themselves (there is no known underlying mathematical model based on first principles of neuronal physiology), our analysis should be regarded as exploratory rather than intended to falsify specific hypotheses about the statistical structure of ECT EEG data.
### A Measures of Stationarity
A statistical process is defined to be stationary if its statistical properties are time-translation invariant, i.e., shifting the origin of time (making the substitution $`tt+t_0`$ where $`t_0`$ is some constant) has no effect on the statistics of the process. (A weaker definition of stationarity is a process for which only the mean and variance of the process are established to be time-translation invariant.) A statistical process is nonstationary if any of its statistical properties depend on time .
Although not often stated explicitly, it should be appreciated that the definitions of stationarity and nonstationarity involve mathematical idealizations and so are impossible to establish rigorously with a finite amount of empirical data. Technically, an infinite amount of data is required to define a statistical property such as a probability distribution or joint probability distribution. Similarly, one needs infinitely many realizations to study statistical properties over some interval of time and then to examine whether those properties change with the choice of time interval.
Besides the challenge of testing the hypothesis of nonstationarity with a finite amount of data, there is a related conceptual difficulty that there is no natural measure of “degree of nonstationarity” or “magnitude of nonstationarity”. This is a simple consequence of the fact that nonstationarity is defined as the negation of stationarity, so that an arbitrarily weak time dependence of any statistical property is enough to make a time series nonstationary. One then has to look carefully at specific data and hope that certain features will suggest themselves as significant for causing nonstationary statistics.
Our approach for testing nonstationary structure was to divide all time series into successive non-overlapping equal-sized time windows (also called epochs), calculate some statistical properties of the data in each window, then study these statistical properties as a function of time (from one window to the next). Each window was chosen to be 1- or 2-seconds in length, with the length chosen qualitatively after examining visualizations of several statistical quantities as a function of time (see Fig. 3(a) as an example). These time intervals were sufficiently long to contain enough points (256 and 512 points respectively for 1- and 2-second windows) for reasonable estimates of statistical quantities yet were empirically short enough that time variations of the statistics could be examined over the typical 0.5-2 minute duration of an ECT seizure. Our results were weakly dependent on the window width over a range of 1-4 secs. In future studies, it would be useful to explore some recently proposed stationarity tests that avoid the use of windows .
Once a window length was determined, we used three statistical quantities to monitor possible nonstationary behavior: a window variance $`\sigma ^2`$, the mean frequency $`\langle \omega \rangle `$ of the power spectrum $`P(\omega )`$ calculated over the time series in the window, and a $`\chi ^2`$ statistic that measured the deviation of the probability distribution function (pdf) $`\rho (x)`$ in a given window (where $`x`$ denotes the amplitude) from a cumulative pdf based on the time series of previous approximately stationary regions. In our analysis, windows were the same size and synchronized across all EEG electrodes and so each of the 19 time series of a particular EEG recording produced three new shorter time series representing the window-dependence of the above three statistical quantities. To visualize statistical trends across all 19 electrodes, these shorter time series were next plotted as a matrix of color pixels $`M_{ij}`$ by assigning a color palette to the range of the shorter time series. Each row of the matrix indicates the time dependence (from left to right) of a statistic associated with a particular electrode (see for example Fig. 3), while each column represents the values of a statistic for all electrodes in a given window.
The three statistical quantities were calculated as follows. The variance $`\sigma ^2`$ over a given window was calculated via the usual statistical formula
$$\sigma ^2=\frac{1}{N-1}\sum _{i=1}^{N}\left(x_i-\langle x\rangle \right)^2,$$
(1)
where $`N`$ is the number of data points in a given window, $`x_i`$ are the values of the EEG time series, and $`\langle x\rangle =(1/N)\sum _{i=1}^Nx_i`$ denotes the average value of the time series over the window.
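As a minimal sketch of this window bookkeeping (and of the pixel-matrix $`M_{ij}`$ described above), the NumPy fragment below computes Eq. (1) for every lead and window; the $`(n_{leads},n_{samples})`$ array layout and the 256 Hz sampling rate are our assumptions about the data format, not code from the original analysis.

```python
import numpy as np

FS = 256        # sampling rate (Hz)
WIN = 1 * FS    # 1-second non-overlapping windows

def window_variance_matrix(eeg):
    """eeg: array of shape (n_leads, n_samples). Returns M[i, j], the
    Eq. (1) variance of lead i over window j, synchronized across leads."""
    n_leads, n_samples = eeg.shape
    n_win = n_samples // WIN
    x = eeg[:, :n_win * WIN].reshape(n_leads, n_win, WIN)
    return x.var(axis=2, ddof=1)  # ddof=1 gives the 1/(N-1) normalization

# Each row of the returned matrix can be mapped to a color palette
# (e.g. matplotlib's imshow) to reproduce the visualization of Fig. 3.
demo = window_variance_matrix(np.random.randn(19, 60 * FS))
```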
The mean frequency $`\langle \omega \rangle `$ over a given window was obtained from a frequency-weighted average of the power spectrum $`P(\omega )`$ over that window:
$$\langle \omega \rangle =\frac{\sum _{j=0}^{N/2}\omega _jP(\omega _j)}{\sum _{j=0}^{N/2}P(\omega _j)}.$$
(2)
(The sums go from 0 to $`N/2`$ because the power spectrum has only $`1+N/2`$ separate magnitudes of Fourier coefficients for a time series of length $`N`$.) The power spectrum $`P(\omega )`$ over a given window was estimated with a Fast Fourier Transform. Power spectra for overlapping intervals of length 2 seconds (each overlapping by 1 second) were averaged to reduce the variance of the spectrum. Each time series was also multiplied by a Parzen window before being Fourier analyzed to reduce artifacts arising from the nonperiodicity of the time series over the window. The windowing of the data allowed a frequency resolution in $`P(\omega )`$ of $`\mathrm{\Delta }\omega =0.5\mathrm{Hz}`$. We note that the variance $`\sigma ^2`$ calculated for a window is proportional to the integral $`\int P(\omega )d\omega `$ of the power spectrum over the window and so is not entirely independent of the power spectrum.
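This recipe maps directly onto SciPy's Welch estimator: 2-second Parzen-windowed segments overlapping by 1 second are averaged, then frequency-weighted as in Eq. (2). The fragment below is our reading of the procedure, not the original analysis code.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # Hz

def mean_frequency(x):
    """Eq. (2): power-weighted mean frequency of one window of data, using
    averaged overlapping 2-s Parzen-windowed spectra (0.5 Hz resolution)."""
    f, p = welch(x, fs=FS, window="parzen", nperseg=2 * FS, noverlap=FS)
    return np.sum(f * p) / np.sum(p)
```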
The pdf $`\rho (x)`$ of a time series $`x_i`$ in one-second-long windows was calculated by binning the data (256 points of $`x_i`$) into 40 bins that spanned the range $`[x_{\mathrm{min}},x_{\mathrm{max}}]`$ of the minimum $`x_{\mathrm{min}}`$ and maximum $`x_{\mathrm{max}}`$ of the time series. Several different numbers of bins varying from 20 to 200 were studied before establishing that 40 was adequate in capturing most features of the pdf without too much statistical noise.
Since it is difficult to plot and to understand the time dependence of functions like pdfs for multivariate data and for many different electrodes, the nonstationarity of the pdfs was analyzed by plotting instead whether each pdf passed a $`\chi ^2`$ test at the 95% level, which measured the difference between the pdf in the current window and a cumulative pdf over previous contiguous stationary windows. A cumulative pdf has the advantage of increasing the statistical accuracy when comparing a new pdf with a previous standard. (Recently Witt et al. also suggested using a $`\chi ^2`$ test for pdfs to quantify nonstationarity in a time series, but they did not use a cumulative pdf as we do here.)
The $`\chi ^2`$ value for a particular window was calculated as follows. If $`M`$ denotes the number of bins (here $`M=40`$), and $`n_i`$ and $`N_i`$ denote respectively the number of points in the $`i`$th bin of the current and cumulative pdfs, then we calculated the number
$$\chi ^2=\sum _{i=1}^{M}\frac{(n_i-N_i)^2}{n_i+N_i}.$$
(3)
Nonstationarity was then visualized by assigning values of 0 and 1 to windows that respectively failed and passed the $`\chi ^2`$ test. A visualization of such $`\chi ^2`$ values as a function of window index $`i`$ is given in Fig. 4 and discussed further below in Section V A.
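One plausible implementation of this cumulative-pdf test is sketched below. The reset rule for the cumulative histogram and the rescaling of the cumulative counts $`N_i`$ to the current window's total are our assumptions; the text does not spell out these details.

```python
import numpy as np
from scipy.stats import chi2

M_BINS = 40

def chi2_stationarity_flags(x, win=256, alpha=0.05):
    """Per-window 0/1 flags: 1 if the window's pdf is consistent with the
    cumulative pdf of the preceding stationary run (Eq. 3, 95% level);
    a failing window restarts the cumulative histogram (our assumption)."""
    edges = np.linspace(x.min(), x.max(), M_BINS + 1)
    crit = chi2.ppf(1.0 - alpha, df=M_BINS - 1)
    flags, cum = [], None
    for j in range(len(x) // win):
        n, _ = np.histogram(x[j * win:(j + 1) * win], bins=edges)
        if cum is None:                    # first window starts the run
            cum, flag = n.astype(float), 1
        else:
            N = cum * n.sum() / cum.sum()  # rescaled cumulative counts
            denom = np.where(n + N > 0, n + N, 1.0)
            stat = np.sum((n - N) ** 2 / denom)
            flag = int(stat < crit)
            cum = cum + n if flag else n.astype(float)
        flags.append(flag)
    return np.array(flags)
```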
### B Redundancy and Time Delays
Redundancy of the 19 electrode time series, and of statistics calculated from them, was quantified using linear correlation coefficients and mutual information, with large values of these quantities corresponding to substantial redundancy. The linear correlation coefficient $`r_{xy}`$ between two time series $`x_i`$ and $`y_i`$ was estimated from the sample correlation coefficient defined as follows:
$$r_{xy}=\frac{\sum _{i=1}^{N}\left(x_i-\langle x\rangle \right)\left(y_i-\langle y\rangle \right)}{\sqrt{\sum _{i=1}^N\left(x_i-\langle x\rangle \right)^2}\sqrt{\sum _{i=1}^N\left(y_i-\langle y\rangle \right)^2}}.$$
(4)
In this paper we studied the 18 correlation coefficients $`r_{x,CZ}`$ of all electrodes with the centrally located electrode CZ. The choice of CZ was motivated by earlier work which showed that ECT EEG signals are often largest in amplitude in an apparently highly correlated region centered around lead CZ. The use of this lead in analyses of redundancy of other leads thus helped to test when leads were highly related to the dominant activity. The 18 coefficients $`r_{xy}`$ giving the time-evolution of redundancy with electrode CZ were computed over a segment of the EEG where at least 15 electrodes were simultaneously stationary.
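Computing Eq. (4) between lead CZ and every other lead over successive windows reduces to one correlation matrix per window; the sketch below assumes the same $`(n_{leads},n_{samples})`$ layout as before and a known row index for CZ.

```python
import numpy as np

def correlations_with_cz(eeg, cz_index, win=512):
    """r[i, j]: Eq. (4) between lead i and lead CZ over 2-s window j
    (512 samples at 256 Hz)."""
    n_leads, n_samples = eeg.shape
    n_win = n_samples // win
    r = np.empty((n_leads, n_win))
    for j in range(n_win):
        seg = eeg[:, j * win:(j + 1) * win]
        r[:, j] = np.corrcoef(seg)[:, cz_index]  # one column of the matrix
    return r
```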
Since it is known from studies in nonlinear dynamics that linear coefficients such as Eq. (4) sometimes miss nonlinear correlations, we supplemented our analysis of cross-correlation by studying the mutual information of pairs of time series. If the quantities $`\rho _x(x)`$ and $`\rho _y(y)`$ denote the pdfs for two time series $`x_i`$ and $`y_i`$ on a given window and if $`\rho _{xy}(x,y)`$ denotes the joint pdf (estimated numerically by sorting pairs of points $`(x_i,y_i)`$ simultaneously into 40 bins spanning the $`x`$-range and 40 bins spanning the $`y`$ range), then the mutual information $`I(x,y)`$ is defined to be:
$$I(x,y)=\sum _{i=1}^{N}\sum _{j=1}^{N}\rho _{xy}(x_i,y_j)\mathrm{log}\left(\frac{\rho _{xy}(x_i,y_j)}{\rho _x(x_i)\rho _y(y_j)}\right).$$
(5)
For statistically independent time series, the joint distribution $`\rho _{xy}(x,y)=\rho _x(x)\rho _y(y)`$ factors into a product of the separate pdfs and $`I(x,y)`$ becomes zero. Empirically we found that 2-second windows contained enough data to generate good approximate joint probability distributions; if larger segments of data were used, the time evolution of redundancy could not be followed. Mutual information coefficients between electrode CZ and the other 18 leads were calculated every 2 seconds to obtain their time dependence over the seizure.
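With 40 x 40 binning, Eq. (5) reduces to a few NumPy operations. The sketch below (natural-log units, i.e. nats) is our illustration; skipping zero-count cells is one common convention, assumed here.

```python
import numpy as np

def mutual_information(x, y, bins=40):
    """Eq. (5) estimated from a bins x bins joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal pdf of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal pdf of y
    nz = pxy > 0                         # empty cells contribute nothing
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```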
All interlead time-averaged linear correlation and mutual information coefficients were calculated over “global” stationary regions determined by the stationarity tests indicated above. These global stationary regions were identified visually as the largest continuous part of the time series that was stationary for the majority of electrodes. The average interlead coefficients between all pairs of electrodes were represented in a $`19\times 19`$ square matrix. These matrices allowed us to determine the average redundancy between one lead and all other leads in the seizure using both linear correlations and mutual information (see Figs. 8 and 9).
The correlation function Eq. (4) and the mutual information Eq. (5) were also used to measure time-delays between two time series $`x_i`$ and $`y_i`$ associated with two different electrodes. To compute the time-delay, one time series $`y_i`$ was fixed and the waveform of the second series $`x_i`$ was then shifted in time from -40 ms to +40 ms (corresponding to integer shifts $`x_{i+k}`$ with $`k=-10`$ to $`k=+10`$). We then determined the shift that gave the largest redundancy according to Eq. (4) or Eq. (5). To reduce the large number of inter-lead comparisons, time-delays were again calculated only for leads paired with the central electrode CZ. We found 2-second or larger stationary segments of uniform time-delay in one half of the seizures. In Fig. 10, we display the time-average of the time-delays over a global stationary region over the surface of the head.
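The delay search itself is a brute-force scan over +/-10 samples (about +/-40 ms at 256 Hz). The sketch below is our reading of that procedure; it defaults to the Eq. (4) measure (taking $`|r_{xy}|`$ as the redundancy, our choice) and accepts the mutual_information helper above as an alternative metric.

```python
import numpy as np

def best_delay(x, y, max_shift=10, metric=None):
    """Shift x relative to y by -max_shift..+max_shift samples and return
    the lag that maximizes the chosen redundancy measure."""
    if metric is None:
        metric = lambda a, b: abs(np.corrcoef(a, b)[0, 1])  # Eq. (4)
    scores = {}
    for k in range(-max_shift, max_shift + 1):
        if k >= 0:
            xs, ys = x[k:], y[:len(y) - k]
        else:
            xs, ys = x[:k], y[-k:]
        scores[k] = metric(xs, ys)
    return max(scores, key=scores.get)
```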
## V Results and Discussion
### A Stationarity
#### 1 Signal Variance Over Time
By using plots similar to Fig. 3 to examine all 10 seizures, we found a substantial variation between seizures in the pattern and degree of stationarity, as measured by the signal variance. Figs. 3a and 3b are representative of the diversity of variance evolution that was present across these seizures. Fig. 3a illustrates that the variance remains relatively low for the first 18 seconds of the seizure in all leads and then increases (as indicated by the change from blue and green to yellow and red) for the fourteen seconds thereafter, but only in the fronto-central region of the head (leads FP1, F3, FZ, CZ, FP2, and F4). This behavior is typical of the increase in amplitude that has previously been described in the transition from the early tonic phase of the seizure to the later, larger-amplitude mid-ictal poly-spike and wave EEG pattern characteristic of the clonic phase of generalized tonic-clonic seizures, which is largest in amplitude in the fronto-central region. Fig. 3a also manifests two periods of apparent stationarity of signal amplitude, in that for times 2-16 (a 14-second segment) and times 20-32 (a later 12-second segment) there is minimal change in the color of the figure in any channel.
In contrast, Fig. 3b shows a seizure whose variance is three times smaller and that has briefer segments of amplitude stationarity. There are a number of approximately 5-second segments that appear to maintain consistent variance across the head, but no longer stationary segments. The amplitude is once again greatest fronto-centrally, but only intermittently.
To better illustrate the range of variance stationarity present, we note that for 8 of 10 seizures, stationary segments of 8 seconds or longer could be identified where there was little change in the signal variance in any lead. The other two seizures had highly variable variance as shown in Fig. 3b. The largest stationary segment present in a single-lead in any of the seizures was 80 seconds. The largest segment that was stationary across all of the leads was 30 seconds in length.
There was some consistency across the seizures in the spatial and temporal patterns of variance. The fronto-central leads tended to be larger in amplitude than the temporal and occipital leads in nine of the ten seizures, including both unilateral and bilateral seizures. For most leads the variance increased from the start of the seizure to the mid-ictal portion; however, this increase was delayed in onset in the temporal and occipital leads. For 6 of the 10 seizures, the time at which the variance increased was delayed in at least one lead, for both unilaterally and bilaterally induced seizures. In some of the seizures a few leads never manifested an increase in variance. This can be seen in Fig. 3a where T4, T6, T3, T5, P3, O1, O2, and F8 do not appear to increase in amplitude and in Fig. 3b where there does not appear to be an increase for leads T3 and T5.
We conclude that the variance indicates a range of amplitude stationarity for generalized tonic-clonic seizures. For most seizures (eight of ten) a region of stationarity of at least 8 seconds can be expected for all leads; however, there are some seizures where only much briefer periods of amplitude stationarity can be found. These analyses also indicate that the signal amplitude tends to be greatest, and the onset of increased variance earliest, in the fronto-central leads as compared with the temporal, occipital, and occasionally frontopolar leads.
#### 2 $`\chi ^2`$ Stationarity Test
Fig. 4(a) and Fig. 4(b) depict the results of the $`\chi ^2`$ stationarity test for two seizures, which are again representative of the range of observed stationarity phenomena. These figures utilize the same format as the variance evolution figures except that black and white pixels are now used to indicate whether the pdf of a new EEG epoch was or was not distinct from the accumulated pdf for previous epochs at the 95% confidence level.
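The test details are given earlier in the document; purely as an illustration of the idea, a chi-square comparison of a new epoch's amplitude histogram against the accumulated one might be coded as below (the binning, validity cut, and function names are our assumptions).

```python
import numpy as np
from scipy.stats import chi2

def epoch_is_nonstationary(epoch, accumulated, bins=40, alpha=0.05):
    """Flag an epoch whose amplitude pdf departs from the accumulated pdf.

    Returns True when the chi-square statistic exceeds the critical
    value at the 1 - alpha (here 95%) confidence level.
    """
    edges = np.histogram_bin_edges(accumulated, bins=bins)
    obs, _ = np.histogram(epoch, bins=edges)
    ref, _ = np.histogram(accumulated, bins=edges)
    expected = ref * (obs.sum() / ref.sum())  # scale reference to epoch size
    ok = expected > 5                         # usual chi-square validity cut
    stat = np.sum((obs[ok] - expected[ok]) ** 2 / expected[ok])
    dof = int(ok.sum()) - 1
    return stat > chi2.ppf(1 - alpha, dof)
```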
The $`\chi ^2`$ test results for the seizure depicted in Fig. 3(b) appear in Fig. 4(a). While the data in Fig. 3(b) are relatively nonstationary by the variance analysis, they are not obviously nonstationary using the $`\chi ^2`$ measure, so that there are differences between these measures of stationarity. For the seizure of Fig. 3(a), leads O2, T3, T5, T4, T6 and O1 have the longest stationary single-lead segments, lasting about 16-40 seconds, which is consistent with these leads not demonstrating an increase in variance over the seizure. Regions of global stationarity as identified by the $`\chi ^2`$ test were found in all seizures, with the shortest global segments observed in two unilateral seizures (e.g., see Fig. 4(b) where no stationary segments were observed longer than a few seconds for all leads). The range in length of the stationary segments over all leads was 4 to 30 seconds. All of the seizures had single-lead stationarity segments of at least 10 seconds, with the longest single-lead stationary segment lasting 70 seconds. Fifteen-second or larger segments of multi-lead stationarity were found in half of the seizures.
#### 3 Average Power Spectral Frequency
We found that the pdf nonstationarity tests and regions of uniform frequency generally agreed with one another. Fig. 5a displays the average frequency over time for the same seizure depicted in Fig. 3a and indicates a 10-second region of nearly constant multi-lead frequency from 10-20 seconds. In contrast, Fig. 5b shows a seizure in which single leads have long periods of stationary average frequency, but this is not seen across all of the leads. Leads FP1 and O2 have greater variation in frequency across the seizure and have, in general, higher average frequency content. The average frequency for both of these seizures decreases over the seizure. However, in Fig. 5b, this decrease does not seem to occur for leads FP1 and O2. In fact, for seven of the ten seizures, at least one lead was delayed in decreasing frequency or did not manifest a decrease in frequency over the seizure. This was most commonly seen in the temporal, occipital, and frontopolar leads. Across all of the seizures, fifteen-second or longer regions of uniform frequency in all leads were found in six of the ten seizures. Five of ten seizures had many brief global average frequency nonstationarities and these corresponded to nonstationarities detected by the pdf nonstationarity tests, e.g., time 21 seconds in Fig. 5a.
#### 4 Summary Of Stationarity Analysis
A substantial variability was found in the length of stationary segments across the seizures studied. For most of the seizures, there were substantial windows of approximately stationary behavior observed in all leads simultaneously, using several different criteria. Significant differences between the measures were found, suggesting that stationarity according to one criterion does not ensure stationarity by another. Some of the seizures studied did not have multi-lead stationarity segments longer than a few seconds, which suggests that there are likely to be problems when applying signal analysis techniques that assume signal stationarity for generalized tonic-clonic seizure data. On the other hand, since these analyses indicate that stationary segments exist for the vast majority of these seizures, such analytic techniques may be validly applied after first verifying that they are being applied to a segment that is stationary according to a range of stationarity tests. For both unilateral and bilateral seizures, these analyses also indicate that there may be leads that are delayed in demonstrating statistical changes such as an increase in signal variance or a decrease in average frequency as compared with other leads. This was seen most commonly in temporal, occipital, and prefrontal regions.
### B Redundancy
#### 1 Time Evolution Of Mutual Information Transmission Coefficient
The time evolution of mutual information for each lead was calculated with respect to the lead CZ. (As discussed above, CZ was picked because of prior reports suggesting that ECT-induced seizures tend to be maximal in amplitude in this region. However, we verified that the results were the same when mutual information was studied with respect to a number of different leads.) The time evolution of mutual information over the course of two seizures that illustrate the range of phenomena seen in the 10 seizures is depicted in Fig. 6a and Fig. 6b. As in previous figures, the amount of mutual information shared between each lead and CZ is represented by color pixels whose magnitude is indicated in the color bar below each figure.
The ten seizures varied substantially in the spatiotemporal pattern of redundancy as measured with mutual information. Fig. 6a illustrates a seizure with high redundancy of CZ with most other leads, whereas Fig. 6b depicts a seizure with little inter-lead redundancy with CZ, even though this seizure has a large stationary region. Fig. 6a also demonstrates that over the course of the seizure there is a transition from a period of relatively low-interlead redundancy to increased redundancy where the greatest inter-lead redundancy, like the greatest variance, is in the fronto-central region. The increase in redundancy and amplitude, although not manifest in all seizures, coincides with the transition from the tonic to clonic phases of the seizures . In addition, as with variance and average frequency, some leads are late to increase in redundancy or do not do so at all. This was true for 5 of the 10 seizures and most often occurred in the temporal and occipital leads.
The mutual information also provided information about the stationarity of redundancy for the EEG data. There was a period of uniform redundancy for 6 of the 10 seizures, ranging from 10 to 20 seconds in length. These regions were located in the mid-ictal portion of the seizure, coinciding with the clonic phase. The regions of constant redundancy across the leads coincided with regions where CZ was stationary and also where the frequency content of all the leads was nearly uniform.
#### 2 Correlation Coefficient
The amount of inter-lead redundancy was also studied by calculating the time-evolution of the correlation coefficient of each lead with CZ. Two seizures illustrating the range of patterns of inter-lead correlation among the ten seizures appear in Fig. 7a and Fig. 7b. Similar phenomena are seen with the correlation coefficient as with mutual information. All leads have low redundancy initially, followed by an increase in the mid-ictal period, with the greatest redundancy among the fronto-central leads, with correlation coefficients ranging from 0.7-0.95. Some leads are late to increase or never increase in redundancy. In both of the figures, this occurs most prominently for the frontopolar, temporal, and occipital leads. Late onset of an increase in redundancy occurred in 8 of the 10 seizures and was most consistent and most pronounced in the occipital and temporal leads. In terms of the stationarity of redundancy, there were regions of uniform correlation for 5 of 10 seizures, ranging from 10 to 20 seconds in length. As discussed below, one factor that must be considered in accounting for leads with decreased redundancy is the possibility of a phase lag between the signals in the leads studied.
#### 3 Average Mutual Information For Stationary Midictal Segments
Average mutual information calculations were performed on regions that were found to be stationary as determined by the $`\chi ^2`$ nonstationarity test. Stationary regions were identified that were 6 to 20 seconds in length within the mid-ictal portions of each seizure. The average mutual information for two seizures, that are representative of the phenomena we observed among the ten seizures, is depicted in Fig. 8a and Fig. 8b. In these figures, the darkness of the square at the intersection of two lead labels indicates the mutual information shared by those leads. The correspondence between the degree of shading and the degree of redundancy is indicated by the scale at the left of the figure. Note that the same information is portrayed in the upper left and lower right halves of these figures.
Once again we found that the greatest interlead redundancy tended to occur in the frontocentral regions. The regions of highest redundancy tended to differ for UL- (see Fig. 8a) and BL-induced ECT seizures (see Fig. 8b). There tended to be increased redundancy in the right (stimulated) hemisphere for UL ECT (note the clustering of lighter boxes in the lower left and upper right corners of Fig. 8a), whereas for BL ECT the redundancy was not localized to either hemisphere. The regions of lowest redundancy were the occipital and left temporal leads in all unilateral seizures. In bilateral seizures, all temporal and occipital leads had lowered redundancy with no hemisphere dependence.
#### 4 Inter-Lead Correlation For Stationary Midictal Segments
The same global stationary regions utilized for the average mutual information analysis were also used for the average correlation coefficient calculations. Two representative seizures are depicted in Fig. 9a and Fig. 9b. Relatively weaker correlations of the frontopolar and occipital leads with other leads on the scalp (see the bands of darkly shaded squares associated with these leads in Fig. 9a and Fig. 9b) were found in seven of ten seizures, with the frontopolar regions having the poorest correlations. In some cases, these leads have poor correlations due to the presence of large time-delays with other leads on the head (see below). Lowered redundancy for the frontopolar leads in Fig. 9b was not indicated by the mutual information measure for the same seizure (Fig. 8b) since mutual information was less sensitive to time delays. A reduced redundancy was thus found particularly for the frontopolar and occipital leads in the midictal period. In some instances, this was due to time delays but, in several other cases, the data in the leads were not as well related.
### C Interlead Time Delays
A number of leads had sustained poor redundancy with CZ because of time delays, which artificially lowered the apparent redundancy in our correlation analysis. This is seen in 6 of 10 seizures in leads FP1, FP2, O1, and O2 (see Fig. 7a). In contrast, Fig. 7b illustrates an instance where leads O1, O2, FP1, and FP2 have poor correlations with CZ for reasons other than time delays. Because of the dynamical and physiological importance of consistent time delays across the head in the midictal period, we sought to understand this phenomenon more thoroughly, as we now discuss.
A time delay cannot be meaningfully calculated for leads with low redundancy, since if two leads are not related their time delay has no meaning. As a result, we required that segments meet the $`\chi ^2`$-stationarity criterion, be of constant average frequency by visual inspection, and have interlead redundancy of at least 0.5 as measured by mutual information. We first studied time delay over the course of the seizures. Segments of uniform nonzero time-delay over many leads were identified in 7 of the 10 seizures studied, and they lasted between 4 and 20 seconds. The most consistent pattern in time delays was a large delay from the front to the back of the head. This pattern was found in 4 of the 10 seizures, and the magnitude of the delay ranged from -10 to 15 ms. The occipital leads had a negative time delay with CZ whereas the prefrontal leads had a positive time delay.
The average amplitude of the time delay with respect to CZ across the head for one representative mid-ictal segment is mapped in Fig. 10. This map depicts the time delay with respect to CZ for the 19 scalp leads such that the color at each point indicates the degree of time delay based on the scale at the bottom of the figure. Data are mapped at the 19 white marks corresponding to lead locations, with linear interpolation in between leads. This figure illustrates what appears to be a consistent frontal-to-occipital time delay, indicative of wave-like propagation from the occipital to prefrontal regions during the mid-ictal portion of some seizures. In some instances we observed a tendency for counter-clockwise rotations around CZ. While this wave phenomenon is mapped here only for a brief period, it could be observed for over 20 seconds.
## VI Conclusions
In this paper, we have studied the stationarity and redundancy of multielectrode EEG data during GTC seizures. Stationarity of EEG data is a main concern in gaining insights into the underlying dynamics. Reliable statistical analysis and many nonlinear measures of complexity assume statistically stationary data. We find that the use of variance non-stationarity tests, pdf non-stationarity tests, and average-frequency evolution may provide a useful measure of the stationarity in these signals. Global regions of stationary data are of length 8 to 20 seconds in the majority of seizures, and single-lead stationary regions are typically of length 20 to 40 seconds. Highly non-stationary seizures tend to have higher variance and show no global regions of stationarity. In prior work, we have found that seizures which had a more predictable EEG pattern over time (according to the largest Lyapunov exponent) were more therapeutically effective. Further work would be useful to establish whether the more stationary seizures are also more beneficial in the treatment of depression.
Redundancy on the surface of the head varies among patients and is further complicated by nonstationarities and by leads that are late to enter, or never enter, into the seizure activity. We find that the mutual information statistic is much more robust to time-delays between different leads than linear correlation measures. The frontal region of most seizures, along the excitatory current path, tends to have sustained higher redundancies than other portions of the head. More interesting is the high redundancy seen in some seizures among spatially distant portions of the head. In the majority of seizures, the frontal and occipital regions are poorly redundant. However, in three seizures we find high redundancy between different pairings of anterior and posterior leads. High redundancy pairings, O2-FP1 and O2-F8, seen in some seizures are not understood. In all seizures, the redundancy tends to increase in the later mid-ictal portions of these seizures. This also coincides with seizures exhibiting more rhythmicity and spatial wave behavior.
The wave-like behavior in the EEG which we have observed in GTC seizures has not been previously reported. This behavior could be caused collectively by the coupling of dynamically different parts of the brain, leading to a wave that propagates cyclically through the brain tissue. Alternatively, one region of the brain may be a source that drives the observed seizure activity in other parts of the brain. Unfortunately, physiologic evidence is presently lacking that can determine the mechanism of this wave-like electrical activity pattern. Identification and characterization of such cortical waves may be possible by using a complex Karhunen-Loève decomposition , which has been used successfully by meteorologists to identify wave motion in the atmosphere. Such analysis may help to determine how frequently wave motion occurs in GTC seizures and to explain the mechanism of electrical propagation of activity in these seizures.
The occurrence of surface waves on the cortex of the head and the fact that many seizures have leads that are late or that never enter into the seizure evolution call into question the utility of the term “generalized” in describing GTC seizures. There is evidence that sometimes the seizure spreads from the frontal region to the occipital and temporal regions of the head. Also, some leads can be as late as 10 seconds to enter into the seizure or do not participate in the seizure. Late-to-generalize seizures have been previously reported , but leads not involved in the seizure have not previously been described and the clinical relevance of these uninvolved leads remains to be established.
These findings speak against the view that GTC seizures are an instantaneous all-out response of the brain. Instead they are consistent with prior work indicating that GTC seizures are graded rather than maximal responses . Such work involved the demonstration that the cerebellum and the lower brainstem had varying degrees of electrophysiological involvement in GTC seizures in the cat and that the visual evoked response was variably disrupted in GTC seizures in humans . The findings of the present study demonstrate that such variability is manifest in the EEG signals recorded during GTC seizures as well. In addition, there is evidence that there are complex spatiotemporal dynamics involved in the development and propagation of these seizures. Such results speak against the sudden onset of massive discharge in reticular structures that was once proposed and more for a graded, diffusive model for the origin and spread of GTC seizure activity.
In summary, our analysis of the stationarity and redundancy of multichannel EEG data recorded during GTC seizures has identified numerous new features and these have implications for further research. First, there is a substantial variability in stationarity. Techniques that assume stationarity should be applied only after verifying that a given time segment is acceptably stationary. The variability in stationarity suggests a need for further studies that could determine the relationship of the degree of nonstationarity of GTC seizures to their antidepressant efficacy. We also found a variation in redundancy between the leads, with the greatest redundancy in fronto-central regions with decreased pre-frontal, temporal and occipital redundancy. Further work to determine the physiology underlying this differential spatial redundancy will be important for understanding GTC seizures as will attempts to determine whether diminished redundancy in particular regions is associated with diminished therapeutic efficacy or side-effects of ECT. These same regions are also apparently delayed at times in entering the seizures suggesting a more complex spatio-temporal evolution than previously reported, which is another feature that will be important for understanding the physiology and antidepressant efficacy of these seizures. Complex spatiotemporal dynamics is also suggested by evidence of wave-like behavior. All of these observations point to graded, rather than all or none physiologic phenomena underlying GTC seizures and will be important bases for new models of GTC seizures and in better understanding and improving ECT treatment.
## Acknowledgements
This work was supported by grants NSF-CDA-91-23483 and NSF-DMS-93-07893 of the National Science Foundation, by grant DOE-DE-FG05-94ER25214 of the Department of Energy, by grants K20MH01151 and R29MH57532 of the National Institute of Mental Health, and by a Computational Science Graduate Fellowship Program of the Department of Energy.
## 1 Introduction
Our numerical code is a combination of a Particle-Mesh (PM) Poisson solver and an Eulerian PPM hydrodynamical code. A detailed description of the N-body/hydro code and the equations governing the star-gas interactions can be found in Ref. (YK$`^3`$). Here we briefly summarize their main features. In our numerical model, matter is treated as a multi-phase fluid with different physical processes acting between the phases. The gas is treated as two separate phases, depending on its temperature: Hot Gas ($`T_h>10^4`$K) and Cold Gas Clouds ($`T_h<10^4`$K) from which stars are formed. In each resolution element, the amount of cold gas, $`m_{\mathrm{cold}}`$, capable of producing stars is regulated by the mass of the hot gas, $`m_{\mathrm{hot}}`$ (that can cool on the time scale $`t_{\mathrm{cool}}`$), by the rate of forming new stable stars, and by supernovae (SN), which heat and evaporate cold gas. The supernova formation rate is assumed to be proportional to the mass of cold gas: $`\dot{m}_{\mathrm{SN}}=\beta m_{\mathrm{cold}}/t_{\star }`$, where $`t_{\star }=10^8`$ yr is the time scale for star formation, and $`\beta `$ is the fraction of mass that explodes as supernovae ($`\beta =0.12`$ for the Salpeter IMF). Each $`1M_{\odot }`$ of supernovae dumps $`4.5\times 10^{49}`$ ergs of heat into the interstellar medium and evaporates a mass $`AM_{\odot }`$ of cold gas. Small values of $`A`$ imply large reheating of hot gas and small evaporation, which makes the gas expand due to the large pressure gradients. The feedback parameter $`A`$ is taken to be large ($`A=200`$), resulting in low efficiency of converting cold gas into stars. Chemical enrichment due to supernovae is also taken into account in the following way: We assume solar composition for the gas in regions where star formation has taken place. In regions where no stars are present, we assume that the gas has primordial composition. The gas then cools with cooling rates $`\mathrm{\Lambda }(T_h)`$ corresponding to either primordial or solar plasmas. In order to mimic the effects of photoionization by quasars and AGNs, the gas with overdensity less than 2 was kept at a constant temperature of $`10^{3.8}`$K.
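To make the coupling of these source terms explicit, the sketch below integrates a zero-dimensional caricature of the model; the linear cooling term and the explicit Euler step are our simplifications, whereas in the actual code (YK$`^3`$) these exchanges act inside every resolution element of the PM/PPM solver.

```python
def evolve_multiphase(m_hot, m_cold, m_star, t_cool, dt,
                      t_star=1e8, beta=0.12, A=200.0):
    """One Euler step of a toy hot/cold/stellar mass exchange.

    Masses in solar masses, times in years.  Total mass is conserved:
    SN ejecta and evaporated clouds are returned to the hot phase.
    """
    cooling = m_hot / t_cool        # hot gas condensing into cold clouds
    sf = m_cold / t_star            # star formation from cold clouds
    sn = beta * sf                  # SN mass rate, beta * m_cold / t_star
    evap = A * sn                   # cold gas evaporated by the SN
    m_hot += dt * (evap + sn - cooling)
    m_cold += dt * (cooling - sf - evap)
    m_star += dt * (sf - sn)        # mass locked in long-lived stars
    return m_hot, m_cold, m_star
```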
## 2 Description of simulations
The purpose of the simulations was to obtain a sufficiently large catalog of “numerical galaxies”, permitting reliable, statistically significant comparisons with observational quantities. To this end, a set of 11 simulations was performed for each of the CDM, $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_\mathrm{\Lambda }=0.65`$), and BSI models. COBE normalization was taken; baryon fractions were compatible with nucleosynthesis constraints ($`\mathrm{\Omega }_B=0.051`$ for BSI and CDM, $`\mathrm{\Omega }_B=0.026`$ for $`\mathrm{\Lambda }`$CDM). The box size was chosen to be 5.0 Mpc, but with different Hubble constants, given by $`h=0.7`$ for the $`\mathrm{\Lambda }`$CDM model and $`h=0.5`$ for the CDM and BSI simulations. The simulations were performed at the CEPBA (Centro Europeo de Paralelismo de Barcelona) with $`128^3`$ particles and cells (i.e., 39 kpc cell width). Effects of resolution have been checked by re-running 2 of the simulations with $`256^3`$ cells and particles. This did not result in significant changes in the global parameters (mass, luminosity) of galaxies. To test the effects of supernova feedback on the final observational properties of galactic halos, we have rerun 6 of the 11 simulations for each cosmological model with different feedback parameters: $`A=50`$ (strong gas reheating) and $`A=\beta =0`$ (no reheating or mass transfer). A detailed analysis of these simulations can be found elsewhere.
More recently, we have completed a $`\mathrm{\Lambda }`$CDM simulation with $`300^3`$ particles in a 12 Mpc box (i.e. the same resolution as in a 5 Mpc box). This simulation was done to check for possible effects due to the lack of long wavelengths in the initial power spectrum.
## 3 Results
From the data base of galaxy-type halos extracted from the abovementioned simulations we have studied the Tully-Fisher (TF) relation in different bands (B, R, I) as well as the luminosity function in B and K .
The luminosity functions in the $`B`$ and $`K`$ bands are quite sensitive to supernova feedback. We find that the slope of the faint end ($`-18<M_B<-15`$) of the $`B`$-band luminosity function is $`\alpha \simeq -1.5`$ to $`-1.9`$. This slope is steeper than the Stromlo-APM estimate, but in rough agreement with the recent ESO Slice Project. The galaxy catalogs derived from our hydrodynamic simulations lead to an acceptably small scatter in the theoretical TF relation, amounting to $`\mathrm{\Delta }M=0.2`$–$`0.4`$ in the $`I`$ band, and increasing by 0.1 magnitude from the $`I`$-band to the $`B`$-band. Our results give strong evidence that the tightness of the TF relation cannot be attributed to supernova feedback alone. However, although eliminating supernova feedback affects the scatter only moderately ($`\mathrm{\Delta }M=0.3`$–$`0.6`$), it does influence the slope of the TF relation quite sensitively. With supernova feedback, $`L\propto V_c^n`$ with $`n=3`$–$`3.5`$ (the exponent depending on the degree of feedback). Without it, $`L\propto V_c^2`$, as predicted by the virial theorem with constant $`M/L`$ and radius independent of luminosity.
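The exponent and scatter quoted above can be extracted from a catalog with a straightforward fit such as the sketch below; this is only an illustration of the bookkeeping (a least-squares fit in log space), not the fitting procedure used in the published analysis.

```python
import numpy as np

def tf_slope_and_scatter(log_L, log_Vc):
    """Slope n of log L = n log Vc + const, and rms scatter in magnitudes."""
    n, const = np.polyfit(log_Vc, log_L, 1)
    resid = log_L - (n * log_Vc + const)
    return n, 2.5 * resid.std()   # 1 dex in luminosity = 2.5 mag
```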
In Figure 1 we show the redshift evolution of the comoving star-formation density, $`\dot{\rho }_{\star }`$, of our simulations, together with a compilation of the most recent observational estimates derived from the UV luminosity density, corrected for dust extinction following Madau’s prescription.
In the less evolved BSI simulations, the effects of SN feedback on $`\dot{\rho }_{\star }`$ are striking at all redshifts, while in the CDM and $`\mathrm{\Lambda }`$CDM simulations, feedback effects become significant only at $`z<1`$: Simulations with SN feedback show a much sharper decline of $`\dot{\rho }_{\star }(z)`$, which is a reflection of the decline of the SFR inside the bright galactic halos at low $`z`$ as a consequence of the higher temperature of the hot gas.
In the higher-normalized CDM and $`\mathrm{\Lambda }`$CDM simulations, $`\dot{\rho }_{\star }(z)`$ is almost flat or slightly declining at $`1\lesssim z\lesssim 5`$. This is in good agreement with the recent observational data when correction for dust extinction is considered. In the $`\mathrm{\Lambda }`$CDM–A=200 panel, we also show the $`\dot{\rho }_{\star }(z)`$ computed in the 12 Mpc simulation box. As can be seen, it is consistent with the results from the 5 Mpc simulations. Moreover, they are also in fairly good agreement with estimates of $`\dot{\rho }_{\star }(z)`$ computed from large-scale ($`100h^{-1}`$ Mpc) $`\mathrm{\Lambda }`$CDM hydrodynamical simulations, with an analytical prescription for the SFR inside galactic halos.
## 4 Conclusions and future work
Despite the success of our model in reproducing some of the properties of real galaxies, it is nevertheless a simplified model of the complex star-gas interactions. UV photoionization and chemical enrichment are treated in a phenomenological way. These effects could be very important, and could change the evolution of the gas component. The advantage of using Eulerian PPM hydrodynamics is that it is simpler to advect metals as a new “phase” of the gas density, which would change the local cooling rates of the gas. Stars will then form with different metallicities and their luminosities can be computed by the new generation of population synthesis models. UV radiation from the stars is another important effect that has not yet been fully explored. Our purpose is to model self-consistently the photoionization of the gas from the UV flux coming from the stars generated in the simulations. This will constitute another feedback mechanism in our star-gas model.
The main goal one wants to achieve in a cosmological simulation is to resolve the internal structure of individual galaxies formed in volumes that are large enough to allow a reliable realization of the initial power spectrum. New numerical algorithms based on Adaptive Mesh Refinement (AMR) techniques are starting to be considered one of the most promising ways to pursue this goal. But higher resolution does not necessarily mean better results, unless the most important physical processes acting at the scales of interest have been included in the simulation. Our work follows this premise. On one hand it is necessary to explore many approximations to properly model the complex physics of star formation and star-gas interactions . On the other hand, new AMR methods for gravity and hydrodynamics have already been developed. The next logical step is to put together the different pieces and build a new generation of numerical simulations to study galaxy formation from cosmological initial conditions. In this regard, the data base obtained from simulations performed with our present numerical code (YK<sup>3</sup>) will be very useful as a testbed for the new models we are currently developing.
Figure 1: A2199 surface brightness profile in the 0.1 - 0.2 keV band as observed by the EUVE/DS and SAX/LECS detectors; the solid and dashed lines give the respective expected DS and LECS brightness had the soft emission been due only to the hot ICM gas radiation, thereby revealing CSE at the outer portions of the cluster.
Celestial Origin of the Extended EUV Emission
from the Abell 2199 and Abell 1795 Clusters of Galaxies
Richard Lieu$`^1`$ , Jonathan P. D. Mittaz$`^2`$, Massimiliano Bonamente$`^1`$,
Florence Durret$`^3`$, Sergio Dos Santos $`^3`$ and Jelle S. Kaastra$`^4`$
<sup>1</sup>Department of Physics, University of Alabama, Huntsville, AL 35899, U.S.A.
<sup>2</sup>Mullard Space Science Laboratory, UCL, Holmbury St. Mary,
Dorking, Surrey, RH5 6NT, U.K.
<sup>3</sup>Institut d’Astrophysique de Paris, CNRS, 98bis Bd Arago, F-75014 Paris, France
<sup>4</sup>SRON Laboratory for Space Research, Sorbonnelaan 2,
NL-3584 CA Utrecht, The Netherlands
## Abstract
Several authors (S. Bowyer, T. Berghöffer, and E. Korpela, hereinafter abbreviated as BBK) recently announced that the luminous extended EUV radiation from the clusters Abell 1795 and Abell 2199, which represents the large scale presence of a new and very soft emission component, is an illusion in the EUVE Deep Survey (DS) detector image. Specifically BBK found that the radial profile of photon background surface brightness, for concentric annuli centered at a detector position which has been used to observe cluster targets, shows an intrinsic ‘hump’ of excess at small radii which resembles the detection of extended cluster EUV. We accordingly profiled background data, but found no evidence of significant central excess. However, to avoid argument concerning possible variability in the background pattern, we performed a clincher test which demonstrates that a cluster’s EUV profile is invariant with respect to photon background. The test involves re-observation of A2199 and A1795 when the photon background was respectively three and two times higher than before, and using a different part of the detector. The radial profiles of both clusters, which have entirely different shapes, were accurately reproduced. In this way the BBK scenario is quantitatively excluded, with the inevitable conclusion that the detected signals are genuinely celestial.
1. Introduction
Since the original EUVE discovery (Lieu et al 1996a) of the CSE (cluster soft excess) effect, substantial development in the field has taken place, including the ROSAT and BeppoSAX confirmatory detection of soft X-ray (0.1 - 0.4 keV) excesses (Lieu et al 1996a,b; Bowyer, Lampton and Lieu 1996; Fabian 1996; Mittaz, Lieu and Lockman 1998; Bowyer, Lieu and Mittaz 1998; Kaastra 1998, Kaastra et al 1999) from Virgo, Coma, A1795, A2199, and A4038, and the proposition of theoretical ideas which relate the CSE to other new radiation components (in particular the hard excess (HEX) emission recently found from some clusters; details and references may be found in Ensslin and Biermann 1998, Sarazin and Lieu 1998, Rephaeli et al 1999, Kaastra 1998 and Fusco-Femiano et al 1998).
Despite the exciting progress, some researchers continue to question the reality of CSE. In a recent workshop on clusters of galaxies (Ringberg 1999; “Thermal and relativistic plasmas in clusters of galaxies”, Schloss Ringberg, Germany, April 19-23, 1999) BBK suggested that the detection of spatially extended EUV emission from the clusters A1795 and A2199 (Mittaz, Lieu, and Lockman 1998, Lieu, Bonamente, and Mittaz 1999), which implies a radially increasing importance of the CSE, results from an error in the photon background subtraction procedure.
The original authors of the CSE detection determined the background level from a large outer annulus (typically between 15 and 30 arcmin cluster angular radii) of the EUVE Deep Survey (DS) Lex/B (69 - 190 eV) image, a region where the radial profile of the cluster surface brightness has reached asymptotic flatness (i.e. the brightness no longer falls with radius). BBK, on the other hand, asserted that the photon background radial profile is enhanced from the asymptotic level within a circle of radius $`\sim `$ 15 arcmin centered at a position of the detector where observations of clusters were performed (the enhancement is in the form of a ‘hump’ which peaks at $`\sim `$ 5 arcmin radius); outside this circle the profile does not flatten, but gradually decreases with radius. Thus the original usage of an asymptotic background would have led to an undersubtraction effect in the central 15 arcmin region, and hence a false detection of cluster EUV.
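Schematically, the original procedure amounts to the following; the event-list representation and bin width are our assumptions, but the 15 - 30 arcmin background annulus follows the text.

```python
import numpy as np

def subtracted_profile(x, y, x0, y0, r_max=30.0, dr=1.0):
    """Radial surface brightness profile minus the asymptotic background.

    x, y   : event positions in arcmin; (x0, y0) is the cluster center.
    Counts in concentric annuli are divided by annulus area, and the
    background is the mean brightness in the flat 15-30 arcmin annulus.
    """
    r = np.hypot(x - x0, y - y0)
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(r, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    brightness = counts / area
    centers = 0.5 * (edges[1:] + edges[:-1])
    background = brightness[centers > 15.0].mean()
    return centers, brightness - background
```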
BBK then applied the aforementioned photon background profile as a ‘template’ (i.e. the shape remains unchanged and the ‘normalization’ is adjusted to obtain agreement with the measured photon background of a cluster field at large radii) in a ‘revised’ background subtraction procedure which led to the removal of all previously detected cluster signals from A1795 and A2199 at radii beyond $`\sim `$ 5 arcmin. Moreover, within 5 arcmin the CSE was turned into an intrinsic cluster absorption effect, with the detected brightness being smaller than the amount of EUV expected from the hot intracluster medium (ICM).
2. No CSE for A1795 and A2199 ?
The results of BBK raise several perplexing questions.
(a) The CSE effect was confirmed by the LECS instrument aboard BeppoSAX (Kaastra et al 1999, see also Figure 1). The radially rising trend of the CSE, as detected by LECS, is commensurate with that found by EUVE/DS (Figure 1) when instrument effective areas are taken into account. Moreover the LECS soft excess was noted to exist only below 0.2 keV, again in agreement with the conclusion obtained from simultaneous modeling of the EUVE/DS and ROSAT/PSPC data of this cluster.
(b) A multi-scale wavelet analysis of the EUVE/DS data of A1795 (Durret et al 1999) shows clear signatures of cluster emission out to a radius of at least 8 arcmin, with the isophot levels rising radially to reach a factor of $`\sim `$ 4 above those expected from the hot ICM (see Figure 2). The same technique as applied to a background field of comparable exposure (see next section for details of this field) revealed no statistically significant features at the $`\sim `$ 3 $`\sigma `$ level around detector areas where clusters were normally observed by the DS. To facilitate direct appreciation by the reader of these points, FITS images of A1795 (90 ksec exposure), A2199 (48 ksec) and background (85 ksec) can be downloaded via anonymous ftp to ftp://cspar.uah.edu/input/max (the README.TXT file contains further information). These images are raw data (for precise meaning see section 3) which underwent only one stage of additional processing: they were smoothed with a constant gaussian filter (size commensurate with the DS point spread function) to enhance larger features. Each image has equal spatial scale, and the central portion corresponds to the same part of the DS detector. It will immediately be obvious to the viewer that the cluster fields exhibit a luminous extended glow (with A2199 particularly sprawling) which is not reproduced in the background field.
(c) The additional background subtracted by BBK did not lead to the removal of CSE from the Virgo and Coma cluster data. A natural puzzle is why these two clusters exhibit CSE but not the others (in fact the rest suffer from the opposite effect: they are strongly intrinsically absorbed). We note in this regard that the presence of CSE in Virgo and Coma excludes the possibility of a simple correlation or anti-correlation between the CSE on one hand, and cooling flow or merging/subclustering on the other.
(d) The strong intrinsic absorption of A1795 and A2199 within cluster radii of $`\sim `$ 5 arcmin should have measurable effects in the ROSAT/PSPC 0.25 keV band and in the LECS data at energies $`\lesssim `$ 0.2 keV, yet it is not immediately obvious why such effects are totally absent from the PSPC and LECS data of A1795 and A2199.
At the request of workshop participants, including some of the scientific organizers, we are circulating this memo to address the BBK criticism of the CSE. In particular we shall demonstrate that the EUVE/DS photon background does not exhibit a template radial distribution involving excess counts at small radii. In fact the integrity of the original method of asymptotic background subtraction is assured by pairs of observations of the same clusters, under conditions of very different photon background levels, yielding the same radial brightness dependence which varies only from cluster to cluster.
3. Background radial profiles from blank fields
Public domain EUVE DS data are accessible from HEASARC, and standard data products (hereinafter referred to as raw data) are obtained by running routine pipeline software packages which accept satellite telemetry as input. The raw data (with a background that includes photons, particles, and detector intrinsic noise) can further be processed in any number of ways, some of which could lead to artifacts. The CSE results published so far are, however, based on the analysis of raw data. We show in Figure 3 the radial profiles of the DS background, obtained with the center at $`\sim `$ 3 and $`\sim `$ 10 arcmin off-axis along the detector x-axis, i.e. the direction parallel to the long side of the relevant (Lex/B) filter (the former is the position where cluster centroids were located during the cycle 4 and 5 observations, while the latter is the position for the cycle 6 re-observations of, e.g., A2199). The data were gathered by merging three blank-field pointings (blank fields being defined as fields which do not contain bright sources; the three specific datasets correspond to the targets 2EUVE J1100+34.4, 2EUVE J0908+32.6, and M15) which took place in July 1994 and December 1996, with a total exposure of $`\sim `$ 85 ksec, comparable to the longest cluster observation by EUVE. It can be seen from Figure 3 that the profiles reveal no significant central enhancement - there are no signatures of extended emission resembling those of A1795 (Figure 1 of Mittaz et al 1998) or A2199 (Figure 1 of Lieu, Bonamente and Mittaz 1999). Specifically, for the left plot the average 0 - 15 arcmin background is 1.75 $`\pm `$ 0.50 % higher than the 15 - 30 arcmin value, and for the right plot the same comparison yields 0.9 $`\pm `$ 0.5 %.
A technique often applied to reduce the raw background, by removing its particle component from the cluster signal, is pulse height (PH) thresholding. It has the positive effect of lowering the Poisson noise in the cluster signal. In brief, one first constructs a PH histogram of all detected counts, which consists of a normally distributed peak of photon events superposed upon an underlying (pseudo power-law) continuum of particle events. By suitably setting the PH thresholds to select only events within the photon peak, it is possible to reject the majority of the particle background without compromising the cluster signal. The precise fractional reduction of the particle component varies from one dataset to the next. As an example, for the first observation of A2199 at least $`\sim `$ 62 % of the background was due to particles, and was removed by PH thresholding. The resulting product, from which BBK derived the photon background, is no longer a raw dataset. However, if proper processing is performed the integrity of the PH thresholded data may still be maintained, as demonstrated by Figure 4, where there is close resemblance between the radial profiles of the raw and PH thresholded data (with no extra features in the latter) for the clusters A1795 and A2199. We also note in passing that, like the raw profile and unlike the BBK template, the thresholded photon background is flat - it does not exhibit BBK’s downward slope from 10 arcmin outwards (Fig. 4, left). Thus such a pattern, even if it were to exist, is not the norm of behavior, and certainly does not have to be present in every observation.
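In code, the thresholding step is simply a cut on the per-event pulse height; suitable thresholds bracketing the photon peak must be chosen per dataset by inspecting the histogram, so the values in the usage comment below are placeholders.

```python
import numpy as np

def ph_select(ph, lo, hi):
    """Boolean mask keeping events whose pulse height lies in [lo, hi],
    i.e. inside the roughly Gaussian photon peak of the PH histogram."""
    return (ph >= lo) & (ph <= hi)

# Example (placeholder thresholds): keep photon-like events only.
# clean_events = events[ph_select(events_ph, lo=40, hi=120)]
```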
4. Still, does this mean Lieu et al were right ?
At the very least, therefore, it is clear that the DS background profile is not a template which carries features reminiscent of a cluster detection, as advocated by BBK. On the other hand, BBK did prompt an intriguing question, viz. given that the EUVE/DS background (like that of most satellite detectors) is a complex function of many parameters, how can one guarantee that the original asymptotic subtraction procedure is correct, short of actually measuring the background underlying each cluster at the time of every observation (an impossible task) ?
Fortunately there is a clear answer to even this question. The test is essentially an extension of the approach depicted in Figure 4.
5. Clincher test of background subtraction - the reproducibility of cluster EUV signals
The longevity of EUVE made it possible to repeat observations of the same clusters - indeed some of these took place within cycle 6. The DS background of the re-observations is in general considerably higher (typically by a factor of $`\sim `$ 2) than that of the original data, mainly due to an increase in the photon background. Further, to explore the effects of detector uniformity the clusters were usually re-observed in a different detector position, some 10 arcmin off-axis as described earlier. Thus, e.g., in the case of A2199 a comparison using PH thresholded data reveals that during re-observation the photon background was $`\sim `$ three times higher than before.
New data for three bright clusters: A2199, A1795, and Coma are available, and in every case the signals peter out to a flat background with a radial profile which is consistent with that of the original observation. As an example, we show in Figure 5 the PH thresholded and background subtracted profiles of A2199.
The significance of this agreement, however, lies in the fact that the large difference in the photon background level between each pair of observations rules out any scenario which attributes cluster signals to a centrally enhanced template profile of the photon background. To prove our statement, Figure 6 shows the radial profiles in the form of percentage above this background. (Data for Coma and A1795 only reinforce the same points made by Figures 5 & 6; they are therefore not shown here, except to add that for A1795 BBK also noted the reproducibility of the radial profile, and that the re-observation photon background is twice as high as before, again quantitatively excluding any part of the cluster detection as a background effect.)
It is clear that within the context of the BBK scenario the photon background must assume two templates, suitably correlated with each other so as to produce the same absolute brightness profile (Figure 5) ! In particular, if the 5 - 15 arcmin signals were to be a background variation effect (as advocated by BBK) this variation will have to form templates (Figure 6) which are statistically distinct, and which conspired to yield Figure 5. (The conspiracy could exist in a ‘relatively simple’ form if the higher re-observational background is due primarily to an increase in the level of a ‘flat profile’ component, such as particles, superposed upon a photon component which is constant, and which carries an intrinsic enhancement at the inner radii where cluster EUV were reported. However, this scenario is not viable because Figures 5 and 6 refer to thresholded background, which consists principally of photons (see earlier discussion). Certainly particles cannot be responsible for the 3-fold increase of the background in the new A2199 data.)
6. Another cosmic conspiracy
Finally, suppose we do accept the last two statements of the previous section as truth, do we need even more cosmic coincidences to explain any ‘loose ends’ that may still exist ? If yes, then perhaps one can safely declare that the premise of CSE as a background variation effect has been given sufficient ‘benefit of the doubt’, and exclude it as a sensible, viable approach ?
The answer is indeed yes, for if there exists highly contrived ‘multiple templates’ which correspond to a single absolute brightness profile irrespective of the background level (the ‘beyond BBK’ scenario described in section 4), one will be forced to conclude that such a profile must apply to every cluster observed by EUVE, i.e. all clusters must appear in the EUV like A2199. That this is not the case is immediately revealed by a comparison of the brightness profiles of A2199 and A1795. As Figure 7 clearly shows, they are not consistent with each other.
The evidence presented is sufficient to secure a firm conclusion. The radial profiles of blank sky and pairs of cluster fields provided stringent tests of the integrity of the background subtraction procedure used by the original authors who announced the CSE discovery. The verdict is favorable: while it is always possible to interpret the data in a less straightforward way, any attempt to attribute genuine cluster signals to illusions created by substantial deviations from flatness of the underlying background profile must invoke several very artificial arguments. In fact, the only premise upon which one can sensibly explain all the data is that for the clusters in question the underlying backgrounds were reasonably flat, and the correct cluster EUV profiles are the published ones.
References
Bowyer, S., Lampton, M., Lieu, R. 1996, Science, 274, 1338–1340.
Bowyer, S., Lieu, R., Mittaz, J.P.D. 1998, The Hot Universe: Proc. 188th
IAU Symp., Dordrecht-Kluwer, 52.
Durret, F., Dos Santos, S. and Lieu, R. 1999, ApJ, in preparation.
Ensslin, T.A., Biermann, P.L. 1998, Astron. Astrophys., 330, 90–98.
Fabian, A.C. 1996, Science, 271, 1244–1245.
Fusco-Femiano, R., Dal Fiume, D., Feretti, L., Giovannini, G., Matt, G.,
Molendi, S. 1998, Proc. of the 32nd COSPAR Scientific Assembly,
Nagoya, Japan (astro-ph/9808012).
Kaastra, J.S. 1998, Proc. of the 32nd COSPAR Scientific Assembly,
Nagoya, Japan.
Kaastra, J.S., Lieu, R., Mittaz, J.P.D., Bleeker, J.A.M., Mewe, R.,
Colafrancesco, S. 1999, Ap.J.L., in press (astro-ph/9905209).
Lieu, R., Mittaz, J.P.D., Bowyer, S., Lockman, F.J., Hwang, C.-Y., Schmitt, J.H.M.M.
1996a, Astrophys. J., 458, L5–7.
Lieu, R., Mittaz, J.P.D., Bowyer, S., Breen, J.O., Lockman, F.J.,
Murphy, E.M. & Hwang, C.-Y. 1996b, Science, 274, 1335–1338.
Lieu, R., Bonamente, M. and Mittaz, J.P.D. 1999, ApJL in press.
Mittaz, J.P.D., Lieu, R., Lockman, F.J. 1998, Astrophys. J., 498, L17–20.
Rephaeli, Y., Gruber, D. and Blanco, P. 1999, ApJL, 511, L21–24.
Sarazin, C.L., Lieu, R. 1998, Astrophys. J., 494, L177–180.
# Effect of the large scale environment on the stellar content of early-type galaxies (based on observations collected at the Observatoire de Haute-Provence)
## 1 Introduction
The comparison of the stellar populations at different distances (e.g., Bender et al. Be+93 (1993), Ziegler & Bender ZB97 (1997), Stanford et al. Stan98 (1998)) shows that early-type galaxies (E and S0) are essentially old and undergo only a passive evolution. Even in sparse environments, where hierarchical models of galaxy formation predict a younger age, the mean ages are not dramatically younger (Jørgensen Jor97 (1997), Bernardi et al. Ber98 (1998)). However, a fraction of elliptical galaxies also contains a younger component. This is apparent from statistical analyses, e.g., Forbes et al. (For98 (1998)), and strong evidence comes from the observation of merger remnants (e.g., NGC 7252, Schweizer Sch82 (1982)) which will turn into “normal” ellipticals after a couple of gigayears.
These intermediate-age sub-populations provide the framework for understanding the connection between fine structures and spectral peculiarities (Schweizer et al. S+90 (1990)) and probably also for interpreting the peculiar velocities found in clusters (Gregg Gr92 (1992)) and residuals to the Fundamental Plane (Prugniel & Simien PS96 (1996)).
It is generally accepted that the rate of star formation in early-type galaxies is enhanced in low-density environments (Schweizer & Seitzer SS92 (1992), de Carvalho & Djorgovski dCD92 (1992), Guzmán et al. G+92 (1992), Rose et al. Ros94 (1994), Jørgensen & Jønch-Sørensen JJ98 (1998)). It results in a population–density relation: the metallic features in the spectra (e.g. $`\mathrm{Mg}_2`$) are weaker and the Balmer lines stronger in low-density regions.
However, the interpretation of this environmental dependence is not straightforward. For example, Mehlert et al. (M+98 (1998)) interpret the dependence of $`\mathrm{Mg}_2`$ on the distance to the center of the Coma cluster, previously found by Guzmán et al. (G+92 (1992)), as a bias in the morphological classification. The “young” early-type galaxies found in the outskirts of Coma would actually be lenticular galaxies, and the population segregation would simply reflect the morphological segregation.
Because the stellar population varies with the Hubble type, at least part of the population–density relation is likely a by-product of the morphology-density relation (Dressler Dr80 (1980), Dr84 (1984), Whitmore et al. WGJ93 (1993); the fraction of early-type galaxies is higher in dense regions). Conversely, the population segregation may bias the morphological classification (for spiral galaxies) and hence contribute to the morphological segregation (Koopmann & Kenney KK98 (1998)).
The question left open is to determine the fraction of the population segregation due to the morphological segregation. It is connected with the question of the origin of the morphological segregation (Martel et al. Mar98 (1998)). This segregation may be due to initial conditions, morphological evolution (due to gas stripping and mergers), or a combination of both. The first hypothesis is difficult to defend in the light of the existence of clear cases of merger remnants. If the environment is responsible for a significant morphological evolution, it will also be at the origin of a population segregation (at a given morphological type), because the morphological transformations are accompanied by star formation.
In elliptical galaxies, the “young” stellar sub-populations are likely due to merging with a gas-rich companion that occurred at most a couple of gigayears ago. In lenticular galaxies, the recent populations may be the result of the residual star formation in the gaseous disk. The first class of object will not be subject to the morphological segregation, while the second will be (in this case the morphology is related to the importance of the disk). Subtracting the contribution of the morphological segregation from the observed population segregation would in principle allow one to determine the present rate of environmentally triggered star formation.
The aim of this Letter is to study the population segregation in the sample of nearby early-type galaxies presented in Prugniel & Simien (PS96 (1996)), and to address the question of its origin. We will first parameterize the density of the environment using the HYPERCAT database (http://www-obs.univ-lyon1.fr/hypercat). Then we will use two diagnostics to detect the presence of a young stellar sub-population. The first one is the analysis of the residuals, $`R_a`$, to the well-known relation between the magnesium line strength index $`\mathrm{Mg}_2`$ and the central velocity dispersion $`\sigma _0`$ (Terlevich et al. Ter81 (1981), Bender et al. Be+93 (1993)). The second approach similarly analyses the residuals to the Fundamental Plane (FP), $`R_f`$, following the line of Prugniel & Simien (PS94 (1994), PS96 (1996), and PS97 (1997), collectively referred to as PS). The analysis of the rotational support of the galaxies where a young sub-population is detected will allow us to determine if the age segregation is linked to the morphology-density relation.
## 2 Analysis
Our sample of galaxies, described in PS, consists of nearby early-type galaxies in different environments. Photometric and kinematic observations were obtained using the 1.20m and 1.93m telescopes of Observatoire de Haute Provence and the CARELEC long-slit spectrograph. This material is already presented in detail in Prugniel & Héraudeau (PH98 (1998)), Prugniel & Simien (PS94 (1994)), Simien & Prugniel (1997a, 1997b, and 1997c), and Golev et al. (Go+99 (1999)). All the data are also available through HYPERCAT. Both $`\mathrm{Mg}_2`$ (Lick system) and $`\sigma _0`$ are aperture-corrected and standardized to an homogeneous system (see Golev & Prugniel GP98 (1998)).
The sample includes bona-fide elliptical and lenticular galaxies as well as merger remnants. The classification was made on the basis of literature assessments and of the presence of peculiarities: galaxies with morphological disturbances, dust-lanes, post-mergers … were rejected from the bona-fide elliptical subsample (Prugniel & Simien PS96 (1996)). This classification is clearly subjective, and some of the bona-fide ellipticals may actually hide a disk and should be re-classified as lenticular. This is a general drawback of the morphological classification of galaxies (see Kormendy & Bender KoBe96 (1996), Andreon AND98 (1998)). In this paper, the bona-fide elliptical subsample will be used as a reference for determining the FP and $`\mathrm{Mg}_2`$–$`\sigma _0`$ relations.
### 2.1 Parameterizing the density
To study environmental effects, we must define a parameter measuring the density of the environment. Ideally, this density should be the number of galaxies per cubic megaparsec, measured locally in a given volume around each galaxy. For our purpose, the smoothing volume will be the group or cluster the galaxy belongs to. The underlying idea is to characterize the mean environment of a galaxy over its $`10^{10}`$-year life, in order to connect the density of this environment with the stellar population built over the same period.
We start from the sample of galaxies with measured radial velocity smaller than 9000 km sec<sup>-1</sup> extracted from HYPERCAT (the velocity data compilation primarily comes from the LEDA database, http://www-obs.univ-lyon1.fr/leda). It contains 22689 galaxies, but is not complete in apparent magnitude or diameter. Extracting a magnitude-limited whole-sky sample would produce a list of galaxies too restricted to allow proper determinations of the density of the environment. We prefer to apply completeness corrections to the present sample. Since the number of galaxies per 1000 km sec<sup>-1</sup> shell is almost constant in the sample, assuming a uniform distribution of the galaxies over the whole volume suggests the completeness correction $`\rho _\mathrm{c}=\rho _\mathrm{u}(V_{\mathrm{dist}}/1000)^2`$, where $`\rho _\mathrm{u}`$ and $`\rho _\mathrm{c}`$ are respectively the un-corrected and corrected densities, in relative units, and $`V_{\mathrm{dist}}`$ is the distance of the group or cluster in km sec<sup>-1</sup>. We do not apply any correction for galaxies nearer than $`V_{\mathrm{dist}}=1000`$ km sec<sup>-1</sup>. This correction supposes that the list of galaxies with known redshift uniformly samples the real distribution of galaxies. This is a crude hypothesis, since most redshift surveys concentrated on limited regions (in particular clusters), and this could lead to an overestimation of the density in the regions of deep redshift surveys. To check the magnitude of this bias, we compared the redshift sample with the UGC and ESO samples (which are diameter limited). We found that the density projected on the sky of the redshift catalogue, normalized to the density in the UGC and ESO samples, fluctuates by a factor 2 rms (the density was smoothed in 5-degree-diameter disks). We did not attempt to use this result to modify the completeness correction, because that would have required applying a further correction to the UGC/ESO samples, with assumptions on the luminosity function of galaxies carrying uncertainties not smaller than a factor of 2 as well.
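This correction is simple enough to be written down explicitly; the following is a minimal sketch (the function and variable names are ours, not part of the original analysis):

```python
def corrected_density(rho_u: float, v_dist: float) -> float:
    """Completeness correction rho_c = rho_u * (v_dist/1000)^2, relative units.

    rho_u  : un-corrected density of the environment
    v_dist : distance of the group or cluster, in km/s
    """
    if v_dist <= 1000.0:
        return rho_u  # no correction applied below 1000 km/s
    return rho_u * (v_dist / 1000.0) ** 2

# the same raw count weighs much more at large distance
print(corrected_density(1.0, 500.0), corrected_density(1.0, 9000.0))  # 1.0  81.0
```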
The algorithm used to group the galaxies is described in Golev & Prugniel (GP98 (1998)). It associates a group to each galaxy in the sample, returning the mean (flow) velocity, the radius and the number of galaxies grouped. This is used to compute the density, and also the aperture corrections which are applied to the $`\mathrm{Mg}_2`$ and $`\sigma _0`$ data. Rescaled to the mean density of Virgo, this algorithm gives $`\rho =\rho _\mathrm{c}/\rho _\mathrm{c}(\mathrm{Virgo})=0.6`$ for Fornax, 4 for Coma and about 0.3 for the Leo or NGC 5846 groups. These values restore the hierarchy of concentrations between these different groups and clusters, thus validating our measurements of $`\rho `$. The density provides a smooth parameterization of the field-group-cluster classification. Our sample covers mostly the range of low densities, from field galaxies to poor clusters.
### 2.2 Residuals to the $`\mathrm{Mg}_2`$–$`\sigma _0`$ relation
The existence of a tight correlation between $`\mathrm{Mg}_2`$ and $`\sigma _0`$ was discovered by Terlevich et al. (Ter81 (1981)) and further discussed in Burstein et al. (Bur88 (1988)) and Bender et al. (Be+93 (1993)). The slope of this relation is clearly due to metallicity, not age, as assessed by the constancy of the slope of the color-magnitude relations out to z$``$0.9 (Stanford et al. Stan98 (1998), Kodama et al. Kod98 (1998)). In contrast, the spread around this relation is likely due to the contribution of young sub-populations, which results in a skewness of the residuals (Burstein et al. Bur88 (1988), Prugniel & Simien PS96 (1996)).
We have used the subsample of bona-fide ellipticals to fit the relation between $`\mathrm{Mg}_2`$ and $`\sigma _0`$, properly taking into account the errors on both coordinates. The residuals, $`R_a`$, fitted on 308 objects, are defined as:
$$R_a=\mathrm{Mg}_2-(0.225\pm 0.052)\times \mathrm{log}(\sigma _0)+(0.235\pm 0.083)$$
(1)
The estimated error on the calculated $`\mathrm{Mg}_2`$ is $`0.026`$. This fit is in agreement with Guzmán & Lucey (rguz193 (1993)) and with Davies et al. (rldav187 (1987)), who both find a slope of $`0.20\pm 0.05`$.
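As a purely numerical illustration of Eq. (1) (using only the central values of the fit; the sample galaxies below are invented):

```python
import math

def r_a(mg2: float, sigma0: float) -> float:
    """Residual to the Mg2 - sigma0 relation of Eq. (1); sigma0 in km/s."""
    return mg2 - 0.225 * math.log10(sigma0) + 0.235

# a galaxy on the relation, and one with an Mg2 deficit of about 2 sigma
print(round(r_a(0.29, 210.0), 3))   # close to 0
print(round(r_a(0.24, 210.0), 3))   # about -0.05, twice the 0.026 error
```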
Separating this sample in two parts according to the density, we find $`<R_a>=0.003`$ (154 galaxies) and $`<R_a>=-0.004`$ (154 galaxies) respectively for the high- and low-density subsamples. The difference, $`0.007\pm 0.005`$, agrees with similar estimates by Jørgensen (Jor97 (1997)) and Bernardi et al. (Ber98 (1998)).
In the present analysis, we are interested in the nature and environment of early-type galaxies hosting a young sub-population. For this purpose, Fig. 1 presents the average $`\mathrm{log}(\rho )`$ as a function of $`R_a`$. The galaxies are grouped in bins of similar $`R_a`$ (i.e. presumably similar stellar population). The trend for galaxies with the most negative residuals to be located in lower density environments is clear.
We have over-plotted in Fig. 1 the subset of elliptical galaxies; it is interesting to note that the same tendency is observed, but with a slight offset toward higher density at a given $`R_a`$. This offset probably reflects the morphology-density relation: for a given $`\mathrm{Mg}_2`$, elliptical galaxies reside in denser environments than lenticular galaxies. In addition, assuming that our sample of elliptical galaxies is morphologically homogeneous, i.e. not contaminated by S0 galaxies, the fact that they are also segregated indicates that the population-density relation may be more than a by-product of the morphology-density relation.
Negative $`R_a`$ values may be due either to a low $`\mathrm{Mg}_2`$ or, symmetrically, to a high $`\sigma _0`$. While the first hypothesis is favored in the usual interpretation, the second may result from the dynamical evolution of galaxies harassed by gravitational encounters. In order to check this possibility, we will analyze the residuals to the FP: a low $`\mathrm{Mg}_2`$ or a high $`\sigma _0`$ will result in residuals of opposite signs.
### 2.3 Residuals to the Fundamental Plane
We performed a fit of the FP using the subsample of 291 bona-fide elliptical galaxies (17 galaxies were excluded from the 308-galaxy sample for lack of reliable photometry):
$$R_f=2\mathrm{log}\sigma _0+0.2(1+2\beta )M_B+0.2\mu _e+\eta .$$
(2)
$`R_f`$ is the residual to the FP; $`\sigma _0`$ is the central velocity dispersion (in km sec<sup>-1</sup>); $`M_B`$ and $`\mu _e`$ are respectively the asymptotic magnitude and the mean surface brightness within the effective aperture in the B band; $`\beta `$ and $`\eta `$ are the free parameters. Their best fit values are $`\beta =0.20\pm 0.02`$ and $`\eta =-3.2\pm 0.2`$, similar to the values previously reported in PS.
Note that we fitted the classical FP, i.e., we did not include the additional terms accounting for the stellar population, the rotational support and the non-homology of the spatial structure (see PS). Since we precisely want to use the residuals to the fundamental plane as a parameterization of the stellar population, we did not include the first of these terms in the FP equation. Including the latter two would have dramatically reduced the size of the sample and is not useful for the present goal.
In Fig. 2 we present the average of $`\mathrm{log}(\rho )`$ as a function of $`R_f`$. Each point on the figure represents 40 galaxies. There is a (somewhat marginal) tendency for the galaxies with the most negative residuals to be found in lower density environments.
Splitting the sample in two parts, we find for $`R_f<-0.1`$, $`\mathrm{log}(\rho )`$$`=-0.35`$ (154 galaxies), and for $`R_f>-0.1`$, $`\mathrm{log}(\rho )`$$`=-0.19`$ (264 galaxies). The uncertainty on the mean $`\mathrm{log}(\rho )`$ being about 0.06, the difference is significant at the 3 $`\sigma `$ level.
Repeating this on arbitrarily extracted subsamples of half the total size produces the same result; the trend is therefore robust. Since there is no reason why the measurement errors should be connected with the environment, we conclude that the relation is physically significant.
Combining the results from the $`R_a`$ and $`R_f`$ analyses, we confirm the existence of a population segregation, and extend it to the low-density environments.
## 3 The origin of the population segregation
Our result fits within the current paradigm of a reduced rate of star formation in high density environments. The morphology-density relation, the HI deficiency of cluster spirals or the relative isolation of shell galaxies are classically interpreted as the result of past gravitational interactions, which both stimulated star formation earlier in the life of the galaxies and stripped the dense gas, prohibiting present-epoch star formation.
We will now try to extract the fraction of the population segregation due to recent merging or accretion of a gas-rich companion, which cannot be accounted for by the morphological segregation.
### 3.1 Evidence for recent merging events and star formation triggered by gravitational encounters
Apart from the violent cases of star forming galaxies often associated with strong interactions or mergers, “weak” interactions have also been related to the presence of a young stellar component. In particular, Schweizer et al. (S+90 (1990)) and Schweizer & Seitzer (SS92 (1992)) found clear correlations between the anomalies in the colors, $`\mathrm{Mg}_2`$, $`H\beta `$ and CN, and the fine structure index $`\mathrm{\Sigma }`$ (indicating the amount of shells, ripples, boxiness, etc.). Gregg (Gr92 (1992)) showed that the $`\mathrm{\Sigma }`$-index is also correlated with the residuals to the FP (peculiar velocities in his terminology). To summarize, the presence of a “young” sub-population is clearly associated with merging events or gravitational encounters. Hence, this population effect is expected to depend on the environment.
Indeed, Guzmán et al. (G+92 (1992)) found a difference between the galaxies located in the outer and inner regions of the Coma cluster. The former have negative residuals (i.e. they are younger or less metallic). Bower et al. (Bo+92 (1992)) found a difference between the Coma and Virgo clusters, but this could also be an observational effect due to the difference in the projected slit sizes. Recently Jørgensen & Jønch-Sørensen (JJ98 (1998)) showed that the colors and the absorption-line indices of E and S0 galaxies belonging to the poor cluster S 639 indicate that the stellar populations in these galaxies are probably younger (or less metallic) than those in rich clusters. This is in agreement with the results by de Carvalho and Djorgovski (dCD92 (1992)), Jørgensen (Jor97 (1997)) and Bernardi et al. (Ber98 (1998)).
It is important to note that these previous works searched for mean-age differences as a function of the environment. For instance, Bernardi et al. (Ber98 (1998)) found a zero-point difference on the $`\mathrm{Mg}_2`$–$`\sigma _0`$ relation of $`\delta (\mathrm{Mg}_2)=0.007`$ between cluster and field ellipticals, compatible with our comparison between the high- and low-density subsamples. Assuming a single burst stellar population model (Worthey Wor94 (1994)), this corresponds to an age difference of about 1 Gyr.
In the present work, we found a much stronger difference when grouping the galaxies according to the $`R_a`$ or $`R_f`$ residuals. The galaxies containing a younger stellar population are clearly found in the lowest-density environments but, in any environment, the majority of the early-type galaxies have only an old population. The rare galaxies containing young stars form the tail of negative residuals noted in Burstein et al. (Bur88 (1988)) and Prugniel & Simien (PS96 (1996)). The same skewness is also apparent in the color-magnitude diagrams. It is also reminiscent of the population of blue galaxies in clusters (Butcher-Oemler effect; Aragón-Salamanca et al. Ara91 (1991)). If we compare the median of $`R_a`$, less sensitive to the skewness, instead of the average, the difference between the high- and low-density subsamples vanishes.
We also tried to connect the residuals with the small-scale density of the environment, by weighting the galaxies according to their separation on the sky when computing the density, with the aim of defining a “strength of the tidal field”. We experimented with different weightings, but failed to find evidence for an effect related to the “tidal” field. Only the large scale environment correlates with the stellar content.
The population segregation may still be contaminated by the morphological segregation. For example, Kuntschner & Davies (KD98 (1998)) and Mehlert et al. (M+98 (1998)) find the ellipticals to be coeval in Coma and Fornax, and detect the presence of residual star formation in lenticulars only.
### 3.2 Subtracting the morphological segregation
If the population segregation results from the morphological segregation, $`R_a`$ should also correlate with any other parameter dependent on the morphological type. Such a parameter could be the signature of a disk, as detected from image analysis (Scorza and Bender Sco95 (1995)) or from the rotational support ($`V_{\mathrm{max}}/\sigma _0`$, where $`V_{\mathrm{max}}`$ is the observed maximum rotational velocity). Both of these parameters are affected by projection effects and are available only for a restricted subsample. We will use $`V_{\mathrm{max}}/\sigma _0`$, taken from HYPERCAT.
In our sample, $`V_{\mathrm{max}}/\sigma _0`$ is available for 272 galaxies: 157 ellipticals and 115 lenticulars, with mean values respectively $`<V_{\mathrm{max}}/\sigma _0>=0.27\pm 0.02`$ and $`0.43\pm 0.04`$ (the corresponding mean $`\mathrm{log}(\rho )`$ are $`-0.46\pm 0.04`$ and $`-0.50\pm 0.04`$). The difference is clear and shows that $`V_{\mathrm{max}}/\sigma _0`$ is fairly well correlated with the morphology, but the morphology-density relation is here very marginal.
Because of the smaller statistics, the $`\mathrm{log}(\rho )`$–$`R_a`$ correlation is much noisier than in Fig. 1, but the effect is still marginally observed in this subsample: the mean $`\mathrm{log}(\rho )`$ is $`-0.50\pm 0.03`$ for $`R_a<0`$ (162 galaxies) and $`-0.44\pm 0.04`$ for $`R_a\ge 0`$ (114 galaxies). The corresponding mean $`V_{\mathrm{max}}/\sigma _0`$ are $`0.328\pm 0.023`$ and $`0.288\pm 0.025`$; this difference, $`0.040\pm 0.035`$, is only marginally significant.
The $`V_{\mathrm{max}}/\sigma _0`$–$`\mathrm{log}(\rho )`$ correlation represents the morphology-density relation; it is fitted as $`\mathrm{log}(\rho )\approx -0.25V_{\mathrm{max}}/\sigma _0+\mathrm{const}`$. It predicts a difference in $`\mathrm{log}(\rho )`$ between the two subsamples, $`R_a<0`$ and $`R_a\ge 0`$, of $`0.010\pm 0.015`$. This is about $`1/4`$ of the observed effect. However, taking into account the large uncertainties, it is not possible to reject the possibility that the totality of the stellar population segregation is due to the morphology-density relation.
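The arithmetic of this estimate can be checked in a few lines (a sketch using only the numbers quoted above; the slope is that of the fitted relation):

```python
slope = -0.25                        # fitted log(rho) vs Vmax/sigma0 slope
d_vsig = 0.328 - 0.288               # difference in mean Vmax/sigma0
predicted = slope * d_vsig           # morphological contribution to d log(rho)
observed = -0.50 - (-0.44)           # observed difference in mean log(rho)
# the predicted shift is only part of the observed one
print(round(predicted, 3), round(observed, 3))   # -0.01  -0.06
```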
Repeating the analysis on $`R_f`$ reproduces the dynamical effect found by Prugniel & Simien (PS94 (1994)) and does not help to understand the origin of the population segregation.
## 4 Conclusion
Analyzing the residuals to the $`\mathrm{Mg}_2`$–$`\sigma _0`$ and Fundamental Plane relations as a function of the large scale density of the environment, we find a segregation of the stellar population. The early-type galaxies with a line strength index $`\mathrm{Mg}_2`$ weaker than expected from their value of $`\sigma _0`$ are preferentially found in low-density regions. These galaxies are likely to contain an excess of young stars, presumably formed after a gravitational encounter or merging event that occurred within the last few gigayears. This extends previous results to very sparse environments.
However, we cannot rule out that this effect is a by-product of the morphological segregation. Indeed, even our subsample of bona-fide ellipticals may still be contaminated by some lenticular galaxies seen almost face-on or with a weak disk, and the observed segregation could be due to residual star formation in these galaxies, which are expected to be found mostly in low-density regions. We tried to use the observed rotational support, $`V_{\mathrm{max}}/\sigma _0`$, to parameterize the morphology, as it traces the presence of a disk. The morphology-density relation seems to account for part of the segregation observed in the stellar population, but the data are still too noisy to exclude that it accounts for all of it. New data on a larger sample are needed.
###### Acknowledgements.
We are grateful to the telescope operators at Observatoire de Haute-Provence for their help during the observations. VG thanks the CRAL-Observatoire Astronomique de Lyon for an invited-astronomer position. We thank Guy Worthey for remarks that have improved the paper.
# Two-Dimensional Electron-Hole Systems in a Strong Magnetic Field
## I Introduction
In two-dimensional electron-hole systems in the presence of a strong magnetic field, neutral excitons $`X^0`$ and spin-polarized charged excitonic ions $`X_k^{}`$ ($`X_k^{}`$ consists of $`k`$ neutral excitons bound to an electron) can occur. The complexes $`X_k^{}`$ should be distinguished from spin-unpolarized ones (e.g. spin-singlet biexciton or charged exciton) that are found at lower magnetic fields but unbind at very high fields as predicted by hidden symmetry arguments. The excitonic ions $`X_k^{}`$ are long lived Fermions whose energy spectra display Landau level structure. In this work we investigate, by exact numerical diagonalization within the lowest Landau level, small systems containing $`N_e`$ electrons and $`N_h`$ holes ($`N_e>N_h`$) confined to the surface of a Haldane sphere . For $`N_h=1`$ these systems serve as simple guides to understanding photoluminescence. For larger values of $`N_h`$ it is possible to form a multi-component plasma containing electrons and $`X_k^{}`$ complexes. We propose a model for determining the incompressible quantum fluid states of such plasmas, and confirm the validity of the model by numerical calculations. In addition, we introduce a new generalized composite Fermion (CF) picture for the multi-component plasma and use it to predict the low lying bands of angular momentum multiplets for any value of the magnetic field.
The single particle states of an electron confined to a spherical surface of radius $`R`$ containing at its center a magnetic monopole of strength $`2S\varphi _0`$, where $`\varphi _0=hc/e`$ is the flux quantum and $`2S`$ is an integer, are denoted by $`|S,l,m\rangle `$ and are called monopole harmonics . They are eigenstates of $`\widehat{l}^2`$, the square of the angular momentum operator, with an eigenvalue $`\hbar ^2l(l+1)`$, and of $`\widehat{l}_z`$, the $`z`$ component of the angular momentum, with an eigenvalue $`\hbar m`$. The energy eigenvalue is given by $`(\hbar \omega _c/2S)[l(l+1)-S^2]`$, where $`\hbar \omega _c`$ is the cyclotron energy. The $`(2l+1)`$-fold degenerate Landau levels (or angular momentum shells) are labelled by $`n=l-S=0`$, 1, 2, …
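The shell structure quoted here is easy to tabulate; the following is a minimal sketch (with $`\hbar \omega _c`$ set to 1; function and variable names are ours):

```python
def shells(two_s: int, n_max: int = 3, hwc: float = 1.0):
    """Landau levels on the Haldane sphere: shell n has l = S + n,
    degeneracy 2l + 1, and energy (hwc / 2S) * [l(l+1) - S^2]."""
    s = two_s / 2.0
    for n in range(n_max):
        l = s + n
        e = (hwc / two_s) * (l * (l + 1.0) - s * s)
        print(f"n={n}  l={l:4.1f}  degeneracy={int(2 * l + 1):2d}  E={e:.3f}")

shells(17)   # lowest shell has E = hwc/2, as it should
```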
## II Four Electron–Two Hole System
In Fig. 1 we display the energy spectrum obtained by numerical diagonalization of the Coulomb interaction of a system of four electrons and two holes at $`2S=17`$.
The states marked by open and solid circles are multiplicative (containing one or more decoupled $`X^0`$’s) and non-multiplicative states, respectively. For $`L<12`$ there are four rather well defined low lying bands. Two of them begin at $`L=0`$. The lower of these consists of two $`X^{}`$ ions interacting through a pseudopotential $`V_{X^{}X^{}}(L)`$. The upper band consists of states containing two decoupled $`X^0`$’s plus two electrons interacting through $`V_{e^{}e^{}}(L)`$. The band that begins at $`L=1`$ consists of one $`X^0`$ plus an $`X^{}`$ and an electron interacting through $`V_{e^{}X^{}}(L)`$, while the band which starts at $`L=2`$ consists of an $`X_2^{}`$ interacting with a free electron.
Knowing that the angular momentum of an electron is $`l_{e^{-}}=S`$, we can see that $`l_{X_k^{-}}=S-k`$, and that decoupled excitons do not carry angular momentum ($`l_{X^0}=0`$). For a pair of identical Fermions of angular momentum $`l`$ the allowed values of the pair angular momentum are $`L=2l-j`$, where $`j`$ is an odd integer. For a pair of distinguishable particles with angular momenta $`l_A`$ and $`l_B`$, the total angular momentum satisfies $`|l_A-l_B|\le L\le l_A+l_B`$. The states containing two free electrons and two decoupled neutral excitons fit exactly the pseudopotential for a pair of electrons at $`2S=17`$; the maximum pair angular momentum is $`L^{\mathrm{MAX}}=16`$ as expected. The states containing two $`X^{-}`$’s terminate at $`L=12`$. Since the $`X^{-}`$’s are Fermions, one would have expected a state at $`L^{\mathrm{MAX}}=2l_{X^{-}}-1=14`$. This state is missing in Fig. 1. By studying two-$`X^{-}`$ states for low values of $`S`$, we surmise that the state with $`L=L^{\mathrm{MAX}}`$ does not occur because of the finite size of the $`X^{-}`$. A large pair angular momentum corresponds to a small average separation, and two $`X^{-}`$’s in the state with $`L^{\mathrm{MAX}}`$ would be too close to one another for the bound $`X^{-}`$’s to remain stable. We can think of this as a “hard core” repulsion for $`L=L^{\mathrm{MAX}}`$. Effectively, the corresponding pseudopotential parameter $`V_{X^{-}X^{-}}(L^{\mathrm{MAX}})`$ is infinite. In a similar way, $`V_{e^{-}X^{-}}(L)`$ is effectively infinite for $`L=L^{\mathrm{MAX}}=16`$, and $`V_{e^{-}X_2^{-}}(L)`$ is infinite for $`L=L^{\mathrm{MAX}}=15`$.
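This counting of pair angular momenta can be reproduced with a few lines of code; in this sketch the hard-core removal of the topmost $`L`$ values is put in by hand through a parameter of ours (`skip_top`):

```python
def pair_l(l_a, l_b, identical=False, skip_top=0):
    """Allowed pair angular momenta L; skip_top removes the largest
    values, excluded by the hard core of composite particles."""
    if identical:
        # identical fermions: L = 2l - j with j odd
        vals = [2 * l_a - j for j in range(1, int(2 * l_a) + 1, 2)]
    else:
        lo = abs(l_a - l_b)
        vals = [lo + i for i in range(int(l_a + l_b - lo) + 1)]
    return sorted(vals, reverse=True)[skip_top:]

# two X- ions at 2S = 17 (l = S - 1 = 15/2): L = 12, 10, ... once the
# L = 14 state is removed by the hard core
print(pair_l(7.5, 7.5, identical=True, skip_top=1))
```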
Once the maximum allowed angular momenta for all four pairings $`AB`$ are established, all four bands in Fig. 1 can be roughly approximated by the pseudopotentials of a pair of electrons (point charges) with angular momenta $`l_A`$ and $`l_B`$, shifted by the binding energies of the appropriate composite particles. For example, the $`X^{-}`$–$`X^{-}`$ band is approximated by the $`e^{-}`$–$`e^{-}`$ pseudopotential for $`l=l_{X^{-}}=S-1`$ plus twice the $`X^{-}`$ energy. The agreement is demonstrated in Fig. 1, where the squares, diamonds, and two kinds of triangles approximate the four bands in the four-electron–two-hole spectrum. The fit of the diamonds to the actual $`X^{-}`$–$`X^{-}`$ spectrum is quite good for $`L<12`$. The fit of the $`e^{-}`$–$`X^{-}`$ squares to the open circle multiplicative states is reasonably good for $`L<14`$, and the $`e^{-}`$–$`X_2^{-}`$ triangles fit their solid circle non-multiplicative states rather well for $`L<13`$. At sufficiently large separation (low $`L`$), the repulsion between ions is weaker than their binding and the bands for distinct charge configurations do not overlap.
There are two important differences between the pseudopotentials $`V_{AB}(L)`$ involving composite particles and those involving point particles. The main difference is the hard core discussed above. If we define the relative angular momentum $`\mathcal{R}=l_A+l_B-L`$ for a pair of particles with angular momenta $`l_A`$ and $`l_B`$, then the minimum allowed relative angular momentum (which avoids the hard core) is found to be given by
$$\mathcal{R}_{AB}^{\mathrm{min}}=2\mathrm{min}(k_A,k_B)+1,$$
(1)
where $`A=X_{k_A}^{}`$ and $`B=X_{k_B}^{}`$. The other difference involves polarization of the composite particle. A dipole moment is induced on the composite particle by the electric field of the charged particles with which it is interacting. By associating an “ionic polarizability” with the excitonic ion $`X_k^{}`$, the polarization contribution to the pseudopotential can easily be estimated. When a number of charges interact with a given composite particle, the polarization effect is reduced from that caused by a single charge, because the total electric field at the position of the excitonic ion is the vector sum of contributions from all the other charges, and there is usually some cancellation. We will ignore this effect in the present work and simply use the pseudopotentials $`V_{AB}(L)`$ obtained from Fig. 1 to describe the effective interaction.
## III Eight Electron–Two Hole System
As an illustration, we first present the results of exact numerical diagonalization performed on the ten particle system ($`8e^{}`$ and $`2h^+`$). We expect low lying bands of states containing the following combinations of complexes: (i) $`4e^{}+2X^{}`$, (ii) $`5e^{}+X_2^{}`$, (iii) $`5e^{}+X^{}+X^0`$, and (iv) $`6e^{}+2X^0`$. The total binding energies of these configurations are: $`\epsilon _\mathrm{i}=2\epsilon _0+2\epsilon _1`$, $`\epsilon _{\mathrm{ii}}=2\epsilon _0+\epsilon _1+\epsilon _2`$, $`\epsilon _{\mathrm{iii}}=2\epsilon _0+\epsilon _1`$, and $`\epsilon _{\mathrm{iv}}=2\epsilon _0`$. Here $`\epsilon _0`$ is the binding energy of an $`X^0`$, $`\epsilon _1`$ is the binding energy of an $`X^0`$ to an electron to form an $`X^{}`$, and $`\epsilon _k`$ is the binding energy of an $`X^0`$ to an $`X_{k1}^{}`$ to form an $`X_k^{}`$. Some estimates of these binding energies (in magnetic units $`e^2/\lambda `$ where $`\lambda `$ is the magnetic length) as a function of $`2S`$ are given in Tab. I.
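The bookkeeping of these total binding energies is straightforward; in the sketch below the $`\epsilon _k`$ values are placeholders chosen only to respect the ordering quoted next (the actual estimates are those of Tab. I, which we do not reproduce here):

```python
eps = {0: 1.0, 1: 0.06, 2: 0.03}   # placeholder binding energies, e^2/lambda

groupings = {
    "(i)   4e- + 2X-":      2 * eps[0] + 2 * eps[1],
    "(ii)  5e- + X2-":      2 * eps[0] + eps[1] + eps[2],
    "(iii) 5e- + X- + X0":  2 * eps[0] + eps[1],
    "(iv)  6e- + 2X0":      2 * eps[0],
}
for name, e_b in sorted(groupings.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} total binding = {e_b:.3f}")
```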
Clearly, $`\epsilon _0>\epsilon _1>\epsilon _2>\epsilon _3`$. The total energy depends not only upon the total binding energy, but also upon the interactions between all the charged complexes in the system. All groupings (i)–(iv) contain an equal number $`N=N_e-N_h`$ of singly charged complexes. However, both the angular momenta of the complexes involved and the relevant hard cores are different. Which grouping contains the state with the lowest total energy (repulsion minus binding), i.e. the absolute (possibly incompressible) ground state of the electron-hole system, depends on $`2S`$. It follows from the mapping between electron-hole and spin-unpolarized electron systems that the multiplicative state $`Ne^{-}+N_hX^0`$, in which all holes are bound into decoupled $`X^0`$’s, is the absolute ground state (only) at the values of $`2S`$ corresponding to the filling factor $`\nu =1-1/m=2/3`$, 4/5, … of the $`N`$ (excess) electrons. At other values of $`2S`$, a non-multiplicative state containing an $`X^{-}`$ is likely to have lower energy.
In Fig. 2, we show the low energy spectra of the $`8e+2h`$ system at $`2S=9`$ (a), $`2S=13`$ (c), and $`2S=14`$ (e).
Filled circles mark the non-multiplicative states, and the open circles and squares mark the multiplicative states with one and two decoupled excitons, respectively. In frames (b), (d) and (f) we plot the low energy spectra of different charge complexes interacting through appropriate pseudopotentials (see Fig. 1), corresponding to four possible groupings (i)–(iv). As marked with lines, by comparing left and right frames, we can identify low lying states of type (i)–(iv) in the electron-hole spectra.
The fitting of the energies in the left and right frames at $`2S=13`$ and 14 is noticeably worse than at $`2S=9`$. It is also much worse than the almost perfect fit obtained for the three charge system ($`6e+3h`$ vs. $`3X^{}`$, $`e^{}+X^{}+X_2^{}`$, etc.). This is almost certainly due to the improper treatment of the polarization effect in the six charged particle system when the pseudopotential obtained from the two charged particle system (Fig. 1) is used. A better fit is obtained by ignoring the polarization effect, and only including the hard core effect in the pseudopotentials of a pair of point charges with angular momenta $`l_A`$ and $`l_B`$.
## IV Larger Systems
It is unlikely that a system containing a large number of different species (e.g. $`e^{}`$, $`X^{}`$, $`X_2^{}`$, etc.) will form the absolute ground state of the electron-hole system. However, different charge configurations can form low lying excited bands. An interesting example is the $`12e+6h`$ system at $`2S=17`$. The $`6X^{}`$ grouping (v) has the maximum total binding energy $`\epsilon _\mathrm{v}=6\epsilon _0+6\epsilon _1`$. Other expected low lying bands correspond to the following groupings: (vi) $`e^{}+5X^{}+X^0`$ with $`\epsilon _{\mathrm{vi}}=6\epsilon _0+5\epsilon _1`$ and (vii) $`e^{}+4X^{}+X_2^{}`$ with $`\epsilon _{\mathrm{vii}}=6\epsilon _0+5\epsilon _1+\epsilon _2`$.
Although we are unable to perform an exact diagonalization of the $`12e+6h`$ system in terms of individual electrons and holes, we can use the appropriate pseudopotentials and binding energies of groupings (v)–(vii) to obtain the low lying states in the spectrum. The results are presented in Fig. 3.
There is only one $`6X^{}`$ state (the $`L=0`$ Laughlin $`\nu _X^{}=1/3`$ state) and two bands of states in each of groupings (vi) and (vii). A gap of 0.0626 $`e^2/\lambda `$ separates the $`L=0`$ ground state from the lowest excited state.
In Fig. 4 we present the spectra of the $`6X^{}`$ charge configurations for $`2S=21`$, 23, 25, and 27.
The dashed lines are obtained by adding to the ground state energy the binding energy difference appropriate for the next lowest charge configuration; no states other than the plotted six $`X^{}`$ states are expected below these lines. The $`L=0`$ ground states observed at different $`2S`$ correspond to $`\nu _X^{}=2/7`$, 2/9, 6/29, and 1/5. The $`\nu _X^{}=1/5`$ state is a Laughlin state and $`\nu _X^{}=2/7`$ and 2/9 are states in Jain sequence. The $`\nu _X^{}=6/29`$ state is a CF hierarchy state corresponding to two quasiparticles (QP’s) of the $`\nu _X^{}=1/5`$ state forming a $`\nu _{\mathrm{QP}}=1/5`$ state at the next level of the CF hierarchy. Without knowing the nature of the QP-QP interaction vs. pair angular momentum $`L`$, there is no guarantee that the CF hierarchy picture (which assumes the validity of the mean field approximation) is valid. Fig. 4c seems to indicate that it is, since the $`L=0`$ state has a lower energy than the other two states at $`L=0`$ and 4, predicted for two QP’s each with $`l_{\mathrm{Q}\mathrm{P}}=5/2`$. Our study of the pseudopotential of QP’s in the Laughlin $`\nu =1/5`$ state at $`\nu _{\mathrm{QP}}=1/5`$ very strongly suggests that it behaves like the Coulomb pseudopotential, so that the MFCF picture should work.
## V Generalized Laughlin Wavefunction
It is known that if the pseudopotential $`V(\mathcal{R})`$ decreases quickly with increasing $`\mathcal{R}`$, the low lying multiplets avoid (strongly repulsive) pair states with one or more of the smallest values of $`\mathcal{R}`$. For the (one-component) electron gas on a plane, avoiding pair states with $`\mathcal{R}<m`$ is achieved with the factor $`\prod _{i<j}(x_i-x_j)^m`$ in the Laughlin $`\nu =1/m`$ wavefunction. For a system containing a number of distinguishable types of Fermions interacting through Coulomb-like pseudopotentials, the appropriate generalization of the Laughlin wavefunction will contain a factor $`\prod (x_i^{(a)}-x_j^{(b)})^{m_{ab}}`$, where $`x_i^{(a)}`$ is the complex coordinate for the position of the $`i`$th particle of type $`a`$, and the product is taken over all pairs. For each type of particle, one power of $`(x_i^{(a)}-x_j^{(a)})`$ results from the antisymmetrization required for indistinguishable Fermions, and the other factors describe Jastrow type correlations between the interacting particles. Such a wavefunction guarantees that $`\mathcal{R}_{ab}\ge m_{ab}`$ for all pairings of the various types of particles, thereby avoiding large pair repulsion. Fermi statistics of particles of each type requires that all $`m_{aa}`$ be odd, and the hard cores defined by Eq. (1) require that $`m_{ab}\ge \mathcal{R}_{ab}^{\mathrm{min}}`$ for all pairs.
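The structure of such a generalized wavefunction can be exhibited symbolically; here is a toy two-component example with two $`a`$-particles and one $`b`$-particle, taking $`m_{aa}=3`$ and $`m_{ab}=2`$ (values chosen only for illustration):

```python
import sympy as sp

a1, a2, b1 = sp.symbols("a1 a2 b1")
# antisymmetric in a1 <-> a2 (odd m_aa = 3), Jastrow-correlated with b1 (m_ab = 2)
psi = (a1 - a2) ** 3 * (a1 - b1) ** 2 * (a2 - b1) ** 2
print(psi.subs(a1, a2))                # 0: Fermi statistics within species a
print(sp.expand(psi).subs(a1, b1))     # 0: vanishes when an a meets the b
```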
## VI Generalized Composite Fermion Picture
In order to understand the numerical results obtained in the spherical geometry (Figs. 2 and 3), it is useful to introduce a generalized CF picture by attaching to each particle fictitious flux tubes carrying an integral number of flux quanta $`\varphi _0`$. In the multi-component system, each $`a`$-particle carries flux $`(m_{aa}-1)\varphi _0`$ that couples only to charges on all other $`a`$-particles and fluxes $`m_{ab}\varphi _0`$ that couple only to charges on all $`b`$-particles, where $`a`$ and $`b`$ are any of the types of Fermions. The effective monopole strength seen by a CF of type $`a`$ (CF-$`a`$) is
$$2S_a^{*}=2S-\sum _b(m_{ab}-\delta _{ab})(N_b-\delta _{ab})$$
(2)
For different multi-component systems we expect generalized Laughlin incompressible states (for two components denoted as $`[m_{aa},m_{bb},m_{ab}]`$) when all the hard core pseudopotentials are avoided and the CF’s of each kind completely fill an integral number of their CF shells (e.g. $`N_a=2l_a^{*}+1`$ for the lowest shell). In other cases, the low lying multiplets are expected to contain different kinds of quasiparticles (QP-$`a`$, QP-$`b`$, …) or quasiholes (QH-$`a`$, QH-$`b`$, …) in neighboring filled shells.
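Eq. (2) lends itself to a compact implementation; the following minimal sketch (data layout and names are ours) reproduces the effective fluxes used in the example below:

```python
def effective_flux(two_s, m, n):
    """2S*_a = 2S - sum_b (m_ab - delta_ab)(N_b - delta_ab), Eq. (2)."""
    out = {}
    for a in n:
        s = two_s
        for b in n:
            d = 1 if a == b else 0
            s -= (m[a, b] - d) * (n[b] - d)
        out[a] = s
    return out

# grouping (i) of the 8e + 2h system at 2S = 9: 4 electrons and 2 X-,
# with m_ee = m_XX = 3 and m_eX = 1, the exponents used below
m = {("e", "e"): 3, ("X", "X"): 3, ("e", "X"): 1, ("X", "e"): 1}
n = {"e": 4, "X": 2}
print(effective_flux(9, m, n))   # {'e': 1, 'X': 3}, i.e. l* = 1/2 for both
```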
Our multi-component CF picture can be applied to the system of excitonic ions, where the CF angular momenta are given by $`l_{X_k^{-}}^{*}=|S_{X_k^{-}}^{*}|-k`$. As an example, let us first analyze the low lying $`8e+2h`$ states in Fig. 2. At $`2S=9`$, for $`m_{e^{-}e^{-}}=m_{X^{-}X^{-}}=3`$ and $`m_{e^{-}X^{-}}=1`$ we predict the following low lying multiplets in each grouping: (i) $`2S_{e^{-}}^{*}=1`$ and $`2S_{X^{-}}^{*}=3`$ give $`l_{e^{-}}^{*}=l_{X^{-}}^{*}=1/2`$. Two CF-$`X^{-}`$’s fill their lowest shell ($`L_{X^{-}}=0`$) and we have two QP-$`e^{-}`$’s in their first excited shell, each with angular momentum $`l_{e^{-}}^{*}+1=3/2`$ ($`L_{e^{-}}=0`$ and 2). Addition of $`L_{e^{-}}`$ and $`L_{X^{-}}`$ gives total angular momenta $`L=0`$ and 2. We interpret these states as those of two QP-$`e^{-}`$’s in the incompressible state. Similarly, for the other groupings we obtain: (ii) $`L=2`$; (iii) $`L=1`$, 2, and 3; and (iv) $`L=0`$ (the $`\nu =2/3`$ state of six electrons).
At $`2S=13`$ and 14 we set $`m_{e^{-}e^{-}}=m_{X^{-}X^{-}}=3`$ and $`m_{e^{-}X^{-}}=2`$ and obtain the following predictions. First, at $`2S=13`$: (i) The ground state is the incompressible state at $`L=0`$; the first excited band should therefore contain states with one QP-QH pair of either kind. For the $`e^{-}`$ excitations, the QH-$`e^{-}`$ and QP-$`e^{-}`$ angular momenta are $`l_{e^{-}}^{*}=3/2`$ and $`l_{e^{-}}^{*}+1=5/2`$, respectively, and the allowed pair states have $`L_{e^{-}}=1`$, 2, 3, and 4. However, the $`L=1`$ state has to be discarded, as it is known to have high energy in the one-component (four electron) spectrum. For the $`X^{-}`$ excitations, we have $`l_{X^{-}}^{*}=1/2`$ and the pair states can have $`L_{X^{-}}=1`$ or 2. The first excited band is therefore expected to contain multiplets at $`L=1`$, $`2^2`$, 3, and 4. The low lying multiplets of the other groupings are expected at: (ii) $`2S_{X_2^{-}}^{*}=3`$ gives no bound $`X_2^{-}`$ state; setting $`m_{e^{-}X_2^{-}}=1`$ we obtain $`L=2`$; (iii) $`L=2`$ and 3; and (iv) $`L=0`$, 2, and 4. Finally, at $`2S=14`$ we obtain: (i) $`L=1`$, 2, and 3; (ii) the incompressible $`[3*2]`$ state at $`L=0`$ ($`m_{X_2^{-}X_2^{-}}`$ is irrelevant for a single $`X_2^{-}`$) and the first excited band at $`L=1`$, 2, 3, 4, and 5; (iii) $`L=1`$; and (iv) $`L=3`$.
For the $`12e+6h`$ spectrum in Fig. 3 the following CF predictions are obtained: (v) For $`m_{X^{-}X^{-}}=3`$ we obtain the Laughlin $`\nu =1/3`$ state with $`L=0`$. Because of the hard core of $`V_{X^{-}X^{-}}`$, this is the only state of this grouping. (vi) We set $`m_{X^{-}X^{-}}=3`$ and $`m_{e^{-}X^{-}}=1`$, 2, and 3. For $`m_{e^{-}X^{-}}=1`$ we obtain $`L=1`$, 2, $`3^2`$, $`4^2`$, $`5^3`$, $`6^3`$, $`7^3`$, $`8^2`$, $`9^2`$, 10, and 11. For $`m_{e^{-}X^{-}}=2`$ we obtain $`L=1`$, 2, 3, 4, 5, and 6. For $`m_{e^{-}X^{-}}=3`$ we obtain $`L=1`$. (vii) We set $`m_{X^{-}X^{-}}=3`$, $`m_{e^{-}X_2^{-}}=1`$, $`m_{X^{-}X_2^{-}}=3`$, and $`m_{e^{-}X^{-}}=1`$, 2, or 3. For $`m_{e^{-}X^{-}}=1`$ we obtain $`L=2`$, 3, $`4^2`$, $`5^2`$, $`6^3`$, $`7^2`$, $`8^2`$, 9, and 10. For $`m_{e^{-}X^{-}}=2`$ we obtain $`L=2`$, 3, 4, 5, and 6. For $`m_{e^{-}X^{-}}=3`$ we obtain $`L=2`$. In groupings (vi) and (vii), the sets of multiplets obtained for higher values of $`m_{e^{-}X^{-}}`$ are subsets of the sets obtained for lower values, and we would expect them to form lower energy bands since they avoid additional small values of $`\mathcal{R}_{e^{-}X^{-}}`$. However, note that the (vi) and (vii) states predicted for $`m_{e^{-}X^{-}}=3`$ (at $`L=1`$ and 2, respectively) do not form separate bands in Fig. 3. This is because the $`V_{e^{-}X^{-}}`$ pseudopotential increases more slowly than linearly as a function of $`L(L+1)`$ in the vicinity of $`\mathcal{R}_{e^{-}X^{-}}=3`$. In such a case the CF picture fails.
The agreement of our CF predictions with the data in Figs. 2 and 3 (marked with lines) is really quite remarkable and strongly indicates that our multi-component CF picture is correct. We were indeed able to confirm predicted Jastrow type correlations in the low lying states by calculating their coefficients of fractional parentage. We have also verified the CF predictions for other systems that we were able to treat numerically. If exponents $`m_{ab}`$ are chosen correctly, the CF picture works well in all cases.
## VII Special Case: Many Electron–One Hole Systems
In an investigation of photoluminescence, the eigenstates of a system containing up to $`N_e=7`$ electrons and a single hole have been studied as a function of $`d`$, the separation between the surfaces on which electrons and the hole are confined. For $`d`$ larger than a few magnetic lengths $`\lambda `$, the low energy spectra can be understood quite simply in terms of the lowest band of multiplets of $`N_e`$ electrons weakly coupled to the hole. There is clear evidence for bound states of the hole to one or more Laughlin quasielectrons. For $`d<\lambda `$ there has been no convincing interpretation of the low lying states, although Apalkov et al. suggested an explanation in terms of “dressed” $`X^0`$ excitons.
At $`d=0`$ there are two types of states which contain excitons, viz. multiplicative states containing $`N_e-1`$ electrons and one $`X^0`$, and non-multiplicative states containing $`N_e-2`$ electrons and one $`X^{-}`$. The multiplicative states are particularly simple; their energies are simply the energies of $`N_e-1`$ interacting electrons less the binding energy $`\epsilon _0`$ of an $`X^0`$. The non-multiplicative states are an example of a two-component plasma and can be understood in our generalized CF picture.
For $`N_e=7`$, the $`6e^{}+X^0`$ and $`5e^{}+X^{}`$ states can be found in the $`8e+2h`$ spectra shown in Fig. 2, where they correspond to the $`6e^{}+2X^0`$ and $`5e^{}+X^{}+X^0`$ multiplicative states marked with open symbols. We have shown that the predictions of our model work very well for this system. In particular, it is clear from Fig. 2ab that while the $`7e+1h`$ ground state at $`2S=9`$ is the (multiplicative) incompressible $`\nu =2/3`$ state of six electrons, the low lying states at $`L=1`$, 2, and 3 all contain an $`X^{-}`$, and thus their nature is very different.
Similarly, at $`2S=15`$, the pseudopotential calculation for the $`5e^{-}+X^{-}`$ grouping (as in Fig. 2bdf), as well as the CF prediction for $`m_{e^{-}e^{-}}=3`$ and $`m_{e^{-}X^{-}}=2`$, undoubtedly rule out the interpretation of the low energy band at $`L=1`$, 2, 3, and 4 (see figures in Refs. ) in terms of a “dressed” exciton, in favor of the $`5e^{-}+1X^{-}`$ configuration. In the CF picture of those states, one electron binds to the $`X^0`$, forming an $`X^{-}`$ and leaving behind a quasihole (QH-$`e^{-}`$) in the Laughlin $`\nu =1/3`$ state. The $`X^{-}`$ (with $`l_{X^{-}}^{*}=3/2`$) and the QH-$`e^{-}`$ (with $`l_{e^{-}}^{*}=5/2`$) have opposite charges and attract one another, which results in their excitonic dispersion. We have checked that the present interpretation remains valid at inter-layer separations $`d`$ up to the order of $`\lambda `$, where the $`X^{-}`$’s unbind (a detailed analysis of spatially separated systems will be presented elsewhere).
## VIII Photoluminescence
A single $`X^{-}`$ cannot emit a photon by $`e`$–$`h`$ recombination and leave behind a free electron. In the simplest terms, this is because the luminescence operator conserves the total angular momentum, and an $`X^{-}`$ has $`l_{X^{-}}=S-1`$, while the electron has $`l_{e^{-}}=S`$. For separated electron and hole planes, the hidden symmetry theorem does not hold, and it is possible to have weak luminescence from an $`X^{-}`$ interacting with other charged particles. However, the luminescence intensity is much weaker than that of the fundamental luminescence line due to a neutral $`X^0`$. The existence of free $`X_k^{-}`$ complexes appears to act as a trap that inhibits a strong luminescence intensity from the $`X^0`$’s. Observation of a strong $`X^{-}`$ luminescence signal seems likely to be associated with excitons bound to an impurity and/or mixing of higher Landau levels, which might break the selection rule that forbids luminescence from a free $`X^{-}`$.
## IX Speculation
The generalized CF picture will be of value if it can make predictions for systems which at the moment are too large to evaluate numerically. An example that we have not been able to study numerically is that of $`N_e=14`$ and $`N_h=5`$. The configuration with the largest binding energy is (viii) $`4e^{-}+5X^{-}`$, but the configuration (ix) $`5e^{-}+3X^{-}+X_2^{-}`$ is only slightly smaller in binding energy. Which of these configurations has the lowest energy at a given value of $`2S`$ will depend on both the binding energy and the interparticle interactions. For $`2S=36`$, we can choose $`m_{e^{-}e^{-}}=m_{X^{-}X^{-}}=5`$ and $`m_{e^{-}X^{-}}=m_{e^{-}X_2^{-}}=m_{X^{-}X_2^{-}}=4`$. This choice satisfies all the requirements imposed by the Pauli principle and by the hard cores of the different pseudopotentials. From Eq. (2), we find $`2S_{e^{-}}^{*}=2S_{X^{-}}^{*}=2S_{X_2^{-}}^{*}=4`$ so that $`2l_{e^{-}}^{*}=4`$, $`2l_{X^{-}}^{*}=2`$, and $`2l_{X_2^{-}}^{*}=0`$. This leads to an $`L=0`$ state of configuration (ix). If it is lower in energy than the lowest state of configuration (viii), it is very probably a Laughlin incompressible state. For configuration (viii), we find that there is a quasihole in the electron shell of angular momentum $`l_{e^{-}}^{*}=2`$ and a pair of quasiparticles in the $`X^{-}`$ shell of angular momentum $`l_{X^{-}}^{*}=2`$. This gives $`L_{e^{-}}=2`$, $`L_{X^{-}}=1`$, 3, and thus $`L=1`$, $`2^2`$, $`3^2`$, and 4. It seems likely that the quasiparticle energy in configuration (viii) more than compensates its slightly higher binding energy, and that configuration (ix) is an incompressible quantum fluid state. It is unlikely that one will be able to diagonalize the nineteen particle electron-hole system at $`2S=36`$, but the nine particle systems ($`4e^{-}+5X^{-}`$ and $`5e^{-}+3X^{-}+X_2^{-}`$) might be possible.
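This counting can at least be checked numerically with the effective-flux sketch of Sec. VI (repeated here in compact form; the species keys are ours):

```python
def effective_flux(two_s, m, n):   # Eq. (2), as in the sketch of Sec. VI
    return {a: two_s - sum((m[a, b] - (a == b)) * (n[b] - (a == b)) for b in n)
            for a in n}

# configuration (ix): 5 electrons + 3 X- + 1 X2- at 2S = 36,
# with m_aa = 5 and m_ab = 4 as chosen above
m = {(a, b): 5 if a == b else 4
     for a in ("e", "X", "X2") for b in ("e", "X", "X2")}
n = {"e": 5, "X": 3, "X2": 1}
print(effective_flux(36, m, n))   # {'e': 4, 'X': 4, 'X2': 4}
# with l* = |S*| - k this gives 2l* = 4, 2, 0: each species exactly fills
# its lowest CF shell, consistent with the L = 0 incompressible candidate
```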
## X Summary
Charged excitons and excitonic complexes play an important role in determining the low energy spectra of electron-hole systems in a strong magnetic field. We have introduced general Laughlin type correlations into the wavefunctions, and proposed a generalized CF picture to elucidate the angular momentum multiplets forming the lowest energy bands for different charge configurations occurring in the electron-hole system. We have found Laughlin incompressible fluid states of multi-component plasmas at particular values of the magnetic field, and the lowest bands of multiplets for various charge configurations at any value of the magnetic field. It is noteworthy that the fictitious Chern–Simons fluxes and charges of different types or colors are needed in the generalized CF model. This strongly suggests that the effective magnetic field seen by the CF’s does not physically exist and that the CF picture should be regarded as a mathematical convenience rather than physical reality.
We thank P. Hawrylak and M. Potemski for helpful discussions. AW and JJQ acknowledge partial support from the Materials Research Program of Basic Energy Sciences, US Department of Energy. KSY acknowledges support from the Korea Research Foundation (Project No. 1998-001-D00305).
# Trapped 6Li : A high Tc superfluid ?
## Abstract
We consider the effect of the indirect interaction due to the exchange of density fluctuations on the critical temperature of superfluid <sup>6</sup>Li . We obtain the strong coupling equation giving this critical temperature. This equation is solved approximately by retaining the same set of diagrams as in the paramagnon model. We show that, near the instability threshold, the attractive interaction due to density fluctuations gives rise to a strong increase in the critical temperature, providing a clear signature of the existence of fluctuation induced interactions.
The recent, quite impressive progress in obtaining ultracold atomic gases has opened the way to the discovery of a large number of new superfluids. Already most alkali Bose gases have been shown to undergo Bose-Einstein condensation. Even if superfluidity has not yet been demonstrated explicitly in experiments, these gases are firmly believed to be superfluids because they have already been seen to display phase coherence. Moreover the search for the transition of trapped Fermi gases toward a BCS type superfluid is now actively considered , as the possibility of experimental observation is quite realistic.
A high critical temperature will be obtained for a strongly attractive effective interaction between fermions. Since at low temperature the scattering is essentially s-wave, a large negative scattering length is most favorable. Therefore spin polarized <sup>6</sup>Li appears as a strong candidate, since its scattering length is found experimentally to be very large, $`a`$ = - 1140 $`\AA `$ . In this case pairing would occur between two different hyperfine states , which play the same role as the spin states in standard BCS theory. This is the situation we will consider, and we will assume the most favorable case where the number of atoms is the same in the two hyperfine states. Experiments are presently performed in magnetic atomic traps leading to a harmonic potential, and calculations should be done for this geometry ; one can also consider making use of an optical trap. Here we will restrict ourselves to the simpler situation of fermions confined in a box with total density $`n=k_F^3/3\pi ^2`$.
The high density regime is of particular interest since it corresponds clearly to a higher critical temperature, and the superfluid will be more accessible experimentally in this regime. However there is also a deep theoretical interest in investigating this domain. Indeed the high density regime is bounded by an instability which occurs for a coupling constant $`\lambda =2k_F|a|/\pi =1`$ . Beyond this limit the compressibility becomes negative because of the strong effective attraction between atoms in different hyperfine states. In the vicinity of this instability, $`\lambda \lesssim 1`$, the compressibility becomes large and density fluctuations occur easily. This leads to the possibility of an attractive interaction between fermions through the exchange of density fluctuations, in a way completely analogous to the phonon exchange mechanism of standard superconductivity. Qualitatively we expect this mechanism to add to the direct attractive interaction and to lead to an increase in the critical temperature.
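The density at which this instability is reached is easily estimated from the definition of $`\lambda `$; here is a rough sketch in SI units (the function name is ours, and the scattering length is the experimental value quoted above):

```python
import math

A = 1140e-10   # |a| in meters

def coupling(n_cm3):
    """lambda = 2 kF |a| / pi, with kF = (3 pi^2 n)^(1/3)."""
    k_f = (3.0 * math.pi ** 2 * n_cm3 * 1.0e6) ** (1.0 / 3.0)  # m^-1
    return 2.0 * k_f * A / math.pi

for n in (1e12, 1e13, 1e14):
    print(f"n = {n:.0e} cm^-3   lambda = {coupling(n):.2f}")
# lambda reaches 1, i.e. the instability, near n of order 1e14 cm^-3
```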
Independently of this increase, this situation is extremely interesting because it is quite analogous to what is believed to happen in many other condensed matter systems. High $`T_c`$ superconductors and the newly discovered low temperature superconductor $`Sr_2RuO_4`$ are well-known examples. However the best case is probably liquid <sup>3</sup>He which is not so far from having a ferromagnetic instability. It has been proposed that strong spin fluctuations, the so-called paramagnons, exist in this liquid and play an important role in the physics. In particular the attractive interaction, leading to the Cooper pairs formed in the superfluid, has been attributed to paramagnon exchange. A strong qualitative support for this picture is the existence of the A phase of superfluid <sup>3</sup>He at high pressure, which has been explained by feedback effects of its specific structure on the paramagnon propagator. However it is not clear at all that the paramagnon model can provide a quantitative description of the properties of the liquid, and in particular can account for the observed values of the critical temperature .
In this respect the situation in <sup>3</sup>He is difficult, since one does not have a precise knowledge of the instantaneous part of the pairing interaction. Moreover the parameter $`\overline{I}`$ involved in the paramagnon description varies in a quite restricted range in the vicinity of the instability limit $`\overline{I}`$ = 1 when the pressure of the liquid is varied over the full range available in the phase diagram. By contrast the situation potentially offered by the superfluid <sup>6</sup>Li gas is much more favorable: the instantaneous interaction can be directly linked to the scattering length, which is known fairly precisely. Moreover the possibility of varying the density to a large extent allows one to change the coupling constant at will. This offers a stringent consistency check of the theory, since experiment will be able to verify the general qualitative behaviour of the model. In this way we can hope to have a definite answer to the question: is the paramagnon model a proper description or not?
The problem of the critical temperature for a BCS superfluid in a dilute Fermi gas has been investigated by Gorkov and Melik-Barkhudarov , following the work of Galitskii. It has recently been considered in the more general context of dilute atomic gases by Stoof et al. , who found a typical value of 40 nK for a density $`n=10^{12}\mathrm{cm}^{-3}`$. Our purpose is to extend these works to the high density regime. Naturally our result reduces to the proper one in the dilute limit.
The critical temperature is obtained by writing that the vertex part in the normal state diverges, which is expected to occur first for zero total momentum and energy of the pair. The corresponding vertex part $`\mathrm{\Gamma }_{p,p^{}}`$ is related to the irreducible vertex $`\overline{\mathrm{\Gamma }}_{p,p^{}}`$ in the particle-particle channel by :
$`\mathrm{\Gamma }_{p,p^{}}=\overline{\mathrm{\Gamma }}_{p,p^{}}-T\sum _k\overline{\mathrm{\Gamma }}_{p,k}D_k\mathrm{\Gamma }_{k,p^{}}`$ (1)
where $`\sum _k`$ stands for $`(2\pi )^{-3}\sum _n\int 𝑑𝐤`$. We have set $`D_k=G(k)G(-k)`$ where $`G(k)`$ is the full Green’s function and $`k=(𝐤,\omega _n)`$ is a four-momentum. The summation runs over the wavevector $`𝐤`$ and the Matsubara frequency $`\omega _n`$ = $`(2n+1)\pi T`$.
We split the irreducible vertex into the bare interaction $`U_{𝐩,𝐩^{}}`$ and all the contributions $`\mathrm{\Gamma }_{p,p^{}}^{}`$ which are of higher order in the interaction: $`\overline{\mathrm{\Gamma }}_{p,p^{}}`$ = $`U_{𝐩,𝐩^{}}`$ + $`\mathrm{\Gamma }_{p,p^{}}^{}`$. Then, following Galitskii , we eliminate the interaction U in favor of the vertex in the dilute limit, corresponding physically to two atoms scattering in vacuum. In this limit, $`\overline{\mathrm{\Gamma }}`$ reduces to U, and $`G(k)`$ becomes the free particle Green’s function $`(i\omega _n-ϵ_k)^{-1}`$ with $`ϵ_k=k^2/2m`$. In contrast to Ref. , we have for convenience taken the chemical potential equal to zero in this limit. Let us call $`\mathrm{\Gamma }_{𝐩,𝐩^{}}^T`$ the vertex in this limit. From Eq.(1) it satisfies:
$`\mathrm{\Gamma }_{𝐩,𝐩^{}}^T=U_{𝐩,𝐩^{}}-\frac{T}{(2\pi )^3}\int 𝑑𝐤U_{𝐩,𝐤}\sum _nD_k^0\mathrm{\Gamma }_{𝐤,𝐩^{}}^T`$ (2)
where $`D_k^0=(\omega _n^2+ϵ_k^2)^{-1}`$. Since we will deal with fairly small temperatures, we can take the $`T\to 0`$ limit where $`\mathrm{\Gamma }_{𝐩,𝐩^{}}^T`$ reduces to the vertex $`\mathrm{\Gamma }_{𝐩,𝐩^{}}^0`$ for two scattering atoms evaluated at zero energy. This vertex can be explicitly expressed in terms of the scattering amplitude $`f(𝐩,𝐩^{})`$ corresponding to the scattering potential U. Since the atomic potential has a very short range compared to all the other lengths involved in the problem, the typical wavevector for a change in $`U(𝐩,𝐩^{})`$ is very large compared to the wavevectors we have to deal with, and similarly for $`\mathrm{\Gamma }_{𝐩,𝐩^{}}^0`$. Hence we can take in our problem $`\mathrm{\Gamma }_{𝐩,𝐩^{}}^0`$ equal to its $`𝐩=𝐩^{}=\mathrm{𝟎}`$ limit, which is given by $`\mathrm{\Gamma }^0=4\pi a/m`$ in terms of the scattering length. This can be checked explicitly in the case of a separable potential or for a pseudopotential .
Since $`\mathrm{\Gamma }_{p,p^{}}`$ diverges at $`T_c`$, the first term in the r.h.s. of Eq.(1) is negligible. In the resulting integral equation, $`p^{}`$ appears as a free parameter which can be omitted. Writing $`\mathrm{\Gamma }_p`$ instead of $`\mathrm{\Gamma }_{p,p^{}}`$, we obtain that $`T=T_c`$ when:
$`\mathrm{\Gamma }_p=-T\sum _k[U_{𝐩,𝐤}+\mathrm{\Gamma }_{p,k}^{}]D_k\mathrm{\Gamma }_k`$ (3)
is satisfied. Note that the effective interaction $`\mathrm{\Gamma }_{p,k}^{}`$ will be frequency dependent. Therefore we also have to retain the frequency dependence of $`\mathrm{\Gamma }_k`$. In other words the order parameter in the superfluid phase will have a frequency dependence, which corresponds to a strong coupling situation. We eliminate $`U`$ from Eq.(3) by premultiplying it by $`\delta _{p^{},p}-T\mathrm{\Gamma }_{𝐩^{},𝐩}^0D_p^0`$ and summing over $`p`$. Making use of Eq.(2) this leads to:
$`\mathrm{\Gamma }_p=-\mathrm{\Gamma }^0T\sum _k[D_k-D_k^0]\mathrm{\Gamma }_k-T\sum _k[\mathrm{\Gamma }_{p,k}^{}-\mathrm{\Gamma }^0T\sum _{k^{}}D_{k^{}}^0\mathrm{\Gamma }_{k^{},k}^{}]D_k\mathrm{\Gamma }_k`$ (4)
Since we expect the dependence of $`\mathrm{\Gamma }_p`$ on the wavevector p to be fairly slow, over a typical scale $`k_F`$, we will neglect it in the following. For coherence, $`\mathrm{\Gamma }_{p,k}^{}`$ in the r.h.s. of Eq.(4) will be evaluated for $`|𝐩|=k_F`$. Similarly we can evaluate the self-energy $`\mathrm{\Sigma }(k)`$ at the Fermi surface. In terms of the renormalization function $`Z_n`$ defined by $`\mathrm{\Sigma }(k_F,i\omega _n)-\mathrm{\Sigma }(k_F,0)=i\omega _n(1-Z_n)`$, this leads to $`D_k=(\omega _n^2Z_n^2+\xi _k^2)^{-1}`$ with $`\xi _k=ϵ_k-E_F`$. This procedure is the standard one for classical superconductors, fully justified by the existence of a small energy scale, the phonon energy. Here, on the contrary, we have the single energy scale $`E_F`$, and the quality of this set of approximations is less obvious. Nevertheless, since one does not expect any drastic effect to arise from this k dependence, and since this procedure is consistent with the paramagnon approach, we will work within this approximate scheme.
Once $`\mathrm{\Gamma }_p`$ has no p dependence, the integrations over momentum can be performed in Eq.(4). In the first term, we write $`D_k-D_k^0=[D_k-D_k^1]+[D_k^1-D_k^0]`$ where $`D_k^1=(\omega _n^2+\xi _k^2)^{-1}`$. The contribution from $`D_k^1-D_k^0`$ can be integrated exactly. For $`D_k-D_k^1`$ we take into account, consistently with our above approximation, that $`\omega _n`$ is small. In this way we obtain :
$`\frac{1}{(2\pi )^3}\int 𝑑𝐤[D_k-D_k^0]=\frac{\pi N_f}{|\omega _n|}[C_n^W-C_n^M]`$ (6)
where $`N_f=mk_F/2\pi ^2`$ is the density of states at the Fermi surface. We have set $`C_n^M=1-Z_n^{-1}`$ and $`C_n^W=[\sqrt{1+(1+w_n^2)^{1/2}}-\sqrt{|w_n|}]/\sqrt{2}`$ with $`w_n=\omega _n/E_F`$. One can check that the term $`C_n^W`$ gives in Eq.(4) a contribution $`-\mathrm{\Gamma }^0N_f\mathrm{ln}(8e^{C-2}E_F/\pi T)`$ (C is the Euler constant). If only this term is retained one obtains $`T_c/E_F=(8e^{C-2}/\pi )\mathrm{exp}(-1/\lambda )`$, as used in . The $`C_n^M`$ term will give a decrease of the critical temperature due to mass renormalization and lifetime effects.
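The steepness of this weak-coupling result as $`\lambda `$ grows toward the instability can be made explicit with a short numerical sketch (names are ours):

```python
import math

C = 0.57721566490153                             # Euler constant
prefactor = 8.0 * math.exp(C - 2.0) / math.pi    # about 0.61

def tc_over_ef(lam):
    """Weak-coupling estimate Tc/EF = (8 e^{C-2}/pi) exp(-1/lambda)."""
    return prefactor * math.exp(-1.0 / lam)

for lam in (0.2, 0.5, 0.9):
    print(f"lambda = {lam:.1f}   Tc/EF = {tc_over_ef(lam):.4f}")
```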
We turn now to the irreducible vertex $`\mathrm{\Gamma }_{k,k^{\prime }}^{\prime }`$. As we have indicated, we are mostly interested in the contribution of density fluctuations to this vertex, and here we will handle it by retaining the same set of diagrams as in paramagnon theory. Actually, since the attractive interaction acts only between atoms with different hyperfine states, we are exactly in the same situation as for paramagnons, where the interaction takes place only between different spins. The only qualitative difference is that the interaction is repulsive in paramagnon theory, leading to a positive dimensionless coupling constant $`\overline{I}=N_fI`$, while the attractive interaction between <sup>6</sup>Li atoms corresponds to a negative $`\overline{I}`$. Another important difference is that it would be inaccurate to retain only the bare interaction for all the elementary vertices in the paramagnon diagrams. Indeed we know that we have a large scattering length. This is obtained quantitatively by summing up the ladder diagrams for two scattering atoms, as is clear from Eq.(2). Obviously we have to perform a similar summation in the paramagnon diagrams, otherwise we would miss the dominant contribution. More precisely, we would need to know the irreducible vertex in the particle-hole channel. We will assume that the dominant contribution to this vertex is given by the sum of the ladder diagrams. Also we will not attempt to take into account its energy dependence, and consider that this vertex is the same as for two atoms in vacuum. With these hypotheses we are led to take the interaction $`I`$ of paramagnon theory equal to $`\mathrm{\Gamma }^0`$. This gives us $`\overline{I}=N_f\mathrm{\Gamma }^0=-\lambda `$.
Now the sum of the paramagnon diagrams (including ladder and bubble diagrams) gives $`\mathrm{\Gamma }_{k,k^{\prime }}^{\prime }=N_fV_{\mathrm{eff}}(k-k^{\prime })`$ with $`N_fV_{\mathrm{eff}}(k)=\overline{I}^2\overline{\chi }_0/(1-\overline{I}\overline{\chi }_0)+\overline{I}^3\overline{\chi }_0^2/(1-\overline{I}^2\overline{\chi }_0^2)`$, where $`\overline{\chi }_0(k)`$ is the dimensionless elementary bubble. When the self-energy is evaluated with the corresponding set of diagrams, one finds that it is given by the same expression as for an effective interaction $`N_fV_Z(k)=\overline{I}^3\overline{\chi }_0^2/(1-\overline{I}\overline{\chi }_0)+\overline{I}^2\overline{\chi }_0/(1-\overline{I}^2\overline{\chi }_0^2)`$. In agreement with our above approximation, we evaluate the self-energy for wavevector $`k_F`$ and for small energies. This leads to:
$`(2n+1)(Z_n-1)=\overline{V}_Z(0)+2{\displaystyle \underset{p=1}{\overset{n}{\sum }}}\overline{V}_Z(2\pi pT)`$ (7)
with $`\overline{V}_Z(\omega _p)=(1/2k_F^2)\int _0^{2k_F}q𝑑qN_fV_Z(q,\omega _p)`$. This $`\overline{V}_Z(\omega _p)`$ corresponds to the average of the interaction for scattering of an atom at the Fermi surface from $`𝐤`$ to $`𝐤^{\prime }`$ with wavevector transfer $`q=2k_F\mathrm{sin}(\theta /2)`$, where $`\theta `$ is the angle between $`𝐤`$ and $`𝐤^{\prime }`$. This angular average has to be performed numerically in order to obtain $`\overline{V}_Z`$. As expected, $`Z_n>1`$. The maximum is at zero energy, with a fairly long tail at high energy. Naturally $`Z_n`$ increases when $`|\overline{I}|\rightarrow 1`$, and it diverges at zero energy in this limit.
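To illustrate how this angular average is carried out, the sketch below evaluates $`\overline{V}_Z`$ in the static limit, using the textbook zero-frequency Lindhard bubble in place of the full $`\overline{\chi }_0(q,\omega _p)`$; this simplification is made here only for brevity, whereas the calculation described in the text keeps the full frequency dependence.

```python
import numpy as np
from scipy.integrate import quad

def chi0_static(q):
    """Dimensionless static Lindhard bubble; q in units of k_F."""
    x = q / 2.0
    if abs(x - 1.0) < 1e-10:
        return 0.5
    return 0.5 + (1.0 - x**2) / (4.0 * x) * np.log(abs((1.0 + x) / (1.0 - x)))

def Nf_VZ(q, Ibar):
    """Paramagnon self-energy interaction N_f V_Z, static limit."""
    c = chi0_static(q)
    return Ibar**3 * c**2 / (1.0 - Ibar * c) + Ibar**2 * c / (1.0 - Ibar**2 * c**2)

def VZ_bar(Ibar):
    """Angular average (1/2 k_F^2) * integral_0^{2 k_F} q dq N_f V_Z(q), k_F = 1."""
    val, _ = quad(lambda q: q * Nf_VZ(q, Ibar), 0.0, 2.0)
    return 0.5 * val

print(VZ_bar(-0.8))  # example: Ibar = -0.8, safely away from the instability
```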
In the term $`\mathrm{\Gamma }_{p,k}^{\prime }D_k\mathrm{\Gamma }_k`$ in Eq.(4), the only strong dependence on k, for low energies, comes from $`D_k`$, which forces $`k\simeq k_F`$. So we integrate over $`\xi _k`$ as above. Setting $`\mathrm{\Gamma }_n=\mathrm{\Gamma }(\omega _n)`$, this leads us to:
$`T{\displaystyle \underset{k}{\sum }}\mathrm{\Gamma }_{p,k}^{\prime }D_k\mathrm{\Gamma }_k={\displaystyle \underset{m}{\sum }}{\displaystyle \frac{\pi T\mathrm{\Gamma }_m}{|\omega _m|Z_m}}\overline{V}_{\mathrm{eff}}(\omega _n-\omega _m)`$ (8)
where as above $`\overline{V}_{\mathrm{eff}}(\omega _p)=(1/2k_F^2)\int _0^{2k_F}q𝑑qN_fV_{\mathrm{eff}}(q,\omega _p)`$. Finally the contribution from the last term in Eq.(4) with the double summation is somewhat more complicated to evaluate. Nevertheless, after a double integration, performed numerically without problems, which provides a function $`V_c(\omega _n)`$, it can be written as:
$`T^2{\displaystyle \underset{k,k^{\prime }}{\sum }}D_{k^{\prime }}^0\mathrm{\Gamma }_{k^{\prime },k}^{\prime }D_k\mathrm{\Gamma }_k=N_f{\displaystyle \underset{m}{\sum }}{\displaystyle \frac{\pi T\mathrm{\Gamma }_m}{|\omega _m|Z_m}}V_c(\omega _m)`$ (9)
Finally Eq.(4) can be cast as:
$`\mathrm{\Gamma }_n={\displaystyle \underset{m}{\sum }}{\displaystyle \frac{\pi T}{|\omega _m|}}\lambda C_m\mathrm{\Gamma }_m-{\displaystyle \underset{m}{\sum }}{\displaystyle \frac{\pi T}{|\omega _m|Z_m}}\overline{V}_{\mathrm{eff}}(\omega _n-\omega _m)\mathrm{\Gamma }_m`$ (10)
where we have set $`C_n=C_n^W-C_n^M-C_n^V`$ with $`C_n^V=V_c(\omega _n)/Z_n`$. Hence $`T_c`$ is the highest temperature for which the matrix corresponding to the r.h.s. of Eq.(10) has an eigenvalue equal to 1. Eq.(10) is very similar to the one obtained from the Eliashberg equations for a strongly coupled superconductor. The most noticeable difference is the term $`C_n^V`$, produced by the interaction through fluctuation exchange when one expresses the bare potential in terms of the scattering length. We note also that, in the dilute limit, we recover the result of Gorkov and Melik-Barkhudarov.
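Schematically, the numerical task is an eigenvalue problem on the Matsubara frequencies. The sketch below keeps only the $`\lambda C_m^W`$ piece of the kernel (dropping $`\overline{V}_{\mathrm{eff}}`$, $`C_m^M`$, $`C_m^V`$ and setting $`Z_m=1`$), so it reduces to the weak-coupling problem; it is meant only to show how $`T_c`$ is located by requiring the pairing eigenvalue to reach 1, not to reproduce the full calculation.

```python
import numpy as np
from scipy.optimize import brentq

def CW(w):
    """C_n^W of the text, with w = |omega_n| / E_F."""
    return (np.sqrt(1.0 + np.sqrt(1.0 + w**2)) - np.sqrt(w)) / np.sqrt(2.0)

def pairing_eigenvalue(T, lam, nmax=200000):
    # with this frequency-independent truncation the kernel is rank one,
    # so its largest eigenvalue is simply the Matsubara sum below
    wm = (2.0 * np.arange(nmax) + 1.0) * np.pi * T  # positive frequencies, E_F = 1
    return 2.0 * np.sum(np.pi * T / wm * lam * CW(wm))

lam = 0.4
Tc = brentq(lambda T: pairing_eigenvalue(T, lam) - 1.0, 1e-5, 0.5)
print(Tc)  # to be compared with (8 e^{C-2}/pi) exp(-1/lambda)
```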
Let us now consider the physical effect of $`V_{\mathrm{eff}}`$. It is more convenient to decompose it as $`N_fV_{\mathrm{eff}}=(3/2)\overline{I}^2\overline{\chi }_0/(1-\overline{I}\overline{\chi }_0)-(1/2)\overline{I}^2\overline{\chi }_0/(1+\overline{I}\overline{\chi }_0)`$. The first term is due to spin fluctuations and the second one to density fluctuations. As pointed out by Berk and Schrieffer, the first term always gives a repulsion, which increases strongly in the vicinity of the ferromagnetic instability $`\overline{I}\rightarrow 1`$. However, in the range $`\overline{I}<0`$ which is of interest to us, the density-fluctuation attraction can take over the spin-fluctuation repulsion. This is what happens in the limit $`\overline{I}\rightarrow -1`$ where the gas becomes unstable. Then, in complete analogy with the paramagnon case, the effective interaction at the Fermi surface $`\overline{V}_{\mathrm{eff}}(\omega _n)`$ diverges for zero frequency. For nonzero frequency it remains finite. In particular at high energy, where $`\overline{\chi }_0`$ gets small, spin fluctuations dominate, $`N_fV_{\mathrm{eff}}\simeq \overline{I}^2\overline{\chi }_0`$, and the overall interaction is repulsive. So for $`\overline{I}`$ near -1, the effective interaction is attractive at low frequency and has a repulsive high energy tail. But the attractive part exists only for $`\overline{I}\lesssim -0.6`$. For smaller $`|\overline{I}|`$ the fluctuations are repulsive at all energies.
The divergence of $`\overline{V}_{\mathrm{eff}}`$ for $`\overline{I}\rightarrow -1`$ may raise the hope that in this limit $`T_c`$ is going to be very large. However it is known, in the case of strongly coupled superconductors as well as for <sup>3</sup>He in paramagnon theory, that this increase is countered by the corresponding increase of $`Z_n`$, physically due to mass renormalization and lifetime effects. The net result of these opposite effects has to be found numerically, so we turn to our numerical results from Eq.(10) for $`T_c`$. They are given in Fig.1.
Surprisingly, instead of a regular rise of $`T_c/E_F`$ as a function of $`\lambda =-\overline{I}`$, we find two regimes. Up to $`\lambda \simeq 0.4`$ we have a regular and strong increase (note that for $`\lambda =0.4`$ our result is markedly below the extrapolation of the result of Ref., which would give $`T_c/E_F`$ = 0.023). We then have a saturation, and even a slight decrease, up to $`\lambda \simeq 0.6`$. We attribute this effect to the increase of $`Z_n`$, which overcompensates the increase with $`\lambda `$ of the direct attractive interaction. Then, starting around $`\lambda \simeq 0.6`$, we obtain another strong rise of $`T_c/E_F`$. This second regime is clearly due to the increasing contribution of the attractive interaction due to density fluctuations. $`T_c/E_F`$ grows up to a maximum $`\simeq 0.025`$ found for $`\lambda \simeq 0.98`$ (corresponding to $`T_c`$ = 190 nK). We stress that, if observed, this critical temperature would be the highest (relative to $`E_F`$) among BCS superfluids, since for standard superconductors as well as superfluid <sup>3</sup>He this ratio is of order 10<sup>-3</sup>, whereas for high $`T_c`$ superconductors (if they are BCS) it reaches at best 10<sup>-2</sup>. Then, when $`\lambda `$ increases further, $`T_c`$ decreases gently in the vicinity of $`\lambda =1`$. This effect is clearly due to the increase of $`Z_n`$. However the maximum of $`T_c`$ is obtained for $`\lambda `$ so close to 1 that this decrease is probably unobservable experimentally.
The most interesting feature of these results is that the existence of an indirect attractive interaction due to density fluctuations exchange gives a qualitative signature in $`T_c(\lambda )`$. Although we may wonder about the quantitative validity of our results, due to the approximations we have made, it is reasonable to believe that the qualitative rise in $`T_c(\lambda )`$ will survive. Its observation would be a strong indication of the importance of fluctuation exchange in the effective interaction in <sup>6</sup>Li. And indirectly it would also bring some support to the existence of similar mechanisms in other BCS superfluids. Finally in this regime we expect, in the superfluid phase, deviations from standard weak coupling BCS theory and feedback effects. In conclusion we have shown that, for <sup>6</sup>Li near the instability threshold, indirect attractive interaction due to density fluctuations exchange can lead to rather high critical temperature, with a clear signature for its dependence as a function of the coupling constant. The observation of this effect would be very interesting, as a clear example of collective mode induced superfluidity.
We are very grateful to Y. Castin, C. Cohen-Tannoudji, J. Dalibard, W. Krauth, M.O. Mewes and C. Salomon for very stimulating discussions.
* Laboratoire associé au Centre National de la Recherche Scientifique et aux Universités Paris 6 et Paris 7.
# Virtual annihilation contribution to orthopositronium decay rate
## Abstract
Order $`\alpha ^2`$ contribution to the orthopositronium decay rate due to one-photon virtual annihilation is found to be $`\delta _{\mathrm{ann}}\mathrm{\Gamma }^{(2)}=\left(\frac{\alpha }{\pi }\right)^2\left(\pi ^2\mathrm{ln}\alpha -0.8622(9)\right)\mathrm{\Gamma }_{\mathrm{LO}}`$.
PACS numbers: 36.10.Dr, 06.20.Jr, 12.20.Ds, 31.30.Jv
preprint: TTP-99-24, hep-ph/9905553
Positronium, the bound state of an electron and positron, is an excellent laboratory to test our understanding of Quantum Electrodynamics of bound states. Although in the majority of cases the agreement between theory and experiment is very good, the case of orthopositronium (o-Ps) decay into three photons is outstanding, since the theoretical predictions differ by about 6 standard deviations from the most accurate experimental result (see, however, an alternative result in Ref.). Provided that the experiment is correct, the theory can only be rescued if the second order correction to the o-Ps lifetime turns out to be $`\sim 250(\alpha /\pi )^2\mathrm{\Gamma }_{\mathrm{LO}}`$.
It is however difficult to imagine how such a large number could appear in the perturbative calculations, even if the bound state is involved. This point of view is supported by a recent complete calculation of the $`𝒪(\alpha ^2)`$ correction to the parapositronium (p-Ps) decay rate into two photons. It has shown that the “natural scale” of the gauge-invariant contributions is \[several units\]$`\times (\alpha /\pi )^2\mathrm{\Gamma }_{\mathrm{LO}}`$. For this reason, it was conjectured in Ref. that the $`𝒪(\alpha ^2)`$ correction to the orthopositronium decay rate o-Ps$`\rightarrow 3\gamma `$ most likely is of the same order of magnitude.
At first sight, the result of Ref.,
$$\delta _{\mathrm{ann}}\mathrm{\Gamma }^{(2)}=\left(\frac{\alpha }{\pi }\right)^2\left(\pi ^2\mathrm{ln}\alpha +9.0074(9)\right)\mathrm{\Gamma }_{\mathrm{LO}},$$
(1)
for the gauge-invariant contribution to the o-Ps$`\rightarrow 3\gamma `$ decay rate induced by the single-photon virtual annihilation, does not provide much support for this conjecture. In fact, the value of the non-logarithmic constant in Eq.(1) is larger by approximately one order of magnitude than the values of the coefficients in the gauge-invariant contributions to the p-Ps$`\rightarrow 2\gamma `$ decay rate.
In this note we would like to point out that the result for the second order correction to the virtual annihilation contribution given in Eq.(1) is incomplete, in that closely related contributions should be included as well. It turns out that if the missing pieces are added to Eq.(1), then the complete result for $`\delta _{\mathrm{ann}}\mathrm{\Gamma }^{(2)}`$ decreases and is in accord with the expectations advocated in Ref..
We recall that in bound state calculations there are two different types of contributions. The hard corrections arise as contributions of virtual photons with momenta $`k\sim m`$. These contributions renormalize local operators in the non-relativistic Hamiltonian. For this reason they can be computed without any reference to the bound state.
On the contrary, the soft scale contributions come from a typical momentum scale $`k\sim m\alpha `$ in virtual loops, and for this reason are sensitive to the bound state dynamics. For $`\delta _{\mathrm{ann}}\mathrm{\Gamma }^{(2)}`$, it is easy to see that the soft scale contribution reads:
$$\delta _{\mathrm{ann}}^{(\mathrm{soft})}\mathrm{\Gamma }^{(2)}=\frac{4\pi \alpha }{m^2}G(0,0),$$
(2)
where
$$G(r,r^{\prime })={\underset{n}{\sum }}^{\prime }\frac{|n(r)\rangle \langle n(r^{\prime })|}{E-E_n}$$
is the reduced Green function of the Schrödinger equation in the Coulomb field.
Let us write the expansion of the Green function in a series over the Coulomb potential:
$$G(0,0)=G_0(0,0)+G_1(0,0)+G_{\mathrm{multi}}(0,0).$$
(3)
The first two terms in this expansion are divergent and require regularization. If we use dimensional regularization, then the $`G_0(0,0)`$ piece delivers a finite contribution, since it has only a power divergence. The second term $`G_1(0,0)`$ is logarithmically divergent. It can be easily seen that just this term was accounted for in the calculation of Ref., and it is precisely the term that delivers the $`\mathrm{ln}\alpha `$ in Eq.(1). However, the contributions of $`G_0`$ and $`G_{\mathrm{multi}}`$ were not calculated there.
Both additional terms can be easily extracted from Ref.. We then obtain
$$\delta _0\mathrm{\Gamma }=\frac{4\pi \alpha }{m^2}G_0(0,0)=\frac{1}{2}\alpha ^2\mathrm{\Gamma }_{\mathrm{LO}}$$
(4)
and
$$\delta _{\mathrm{multi}}\mathrm{\Gamma }=\frac{4\pi \alpha }{m^2}\underset{n=2}{\overset{\mathrm{\infty }}{\sum }}G_n(0,0)=-\frac{3}{2}\alpha ^2\mathrm{\Gamma }_{\mathrm{LO}},$$
(5)
consistent with the results of Ref..
If we now add Eqs.(4), (5) and (1), we obtain the complete $`𝒪(\alpha ^2)`$ correction to the o-Ps$`\rightarrow 3\gamma `$ decay rate due to single-photon virtual annihilation:
$$\delta _{\mathrm{ann}}\mathrm{\Gamma }^{(2)}=\left(\frac{\alpha }{\pi }\right)^2\left\{\pi ^2\mathrm{ln}\alpha -0.8622(9)\right\}\mathrm{\Gamma }_{\mathrm{LO}}.$$
(6)
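As a quick arithmetic check (with $`\pi ^2=9.8696\mathrm{}`$), the non-logarithmic constants of Eqs.(1), (4) and (5) combine as
$$9.0074+\left(\frac{1}{2}-\frac{3}{2}\right)\pi ^2=9.0074-9.8696=-0.8622,$$
so only the constant of Eq.(1) is shifted, while its logarithmic term is untouched.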
One sees that the non-logarithmic contribution is in fact of order one times $`(\alpha /\pi )^2\mathrm{\Gamma }_{\mathrm{LO}}`$, in accord with the conjecture in Ref.. Its value is similar in magnitude to the known results for other gauge-invariant $`𝒪(\alpha ^2)`$ corrections to the decay rate of orthopositronium .
The only known exception from this “rule” is provided by the square of the $`𝒪(\alpha )`$ corrections to the o-Ps decay amplitude. This (gauge-invariant) contribution has an anomalously large coefficient: $`28.860(2)(\alpha /\pi )^2\mathrm{\Gamma }_{\mathrm{LO}}`$ . In this respect, we would like to note that there may be an enhancement factor due to a larger (by about a factor of 3) number of diagrams contributing to o-Ps decay as compared to p-Ps decay. This enhancement is seen already in the magnitude of the $`𝒪(\alpha )`$ corrections and translates naturally to the large value of the $`𝒪(\alpha ^2)`$ contribution originating from the square of the $`𝒪(\alpha )`$ corrections to the o-Ps decay amplitude. However, unless this enhancement is dramatic, it is hard to believe that this fact alone can explain the discrepancy between theoretical and experimental results on o-Ps decay rate.
This research was supported in part by the National Science Foundation under grant number PHY-9722074, by BMBF under grant number BMBF-057KA92P, by Graduiertenkolleg “Teilchenphysik” at the University of Karlsruhe, by the Russian Foundation for Basic Research under grant number 99-02-17135 and by the Russian Ministry of Higher Education.
## 1 Galaxy Samples
Our sample of galaxies is taken from the Stromlo-APM redshift survey, which covers 4300 sq-deg of the south galactic cap and consists of 1797 galaxies brighter than $`b_J=17.15`$ mag. The galaxies all have redshifts $`z<0.145`$, and the mean is $`z=0.051`$. A detailed description of the spectroscopic observations and the redshift catalog has been published previously. The measurement of EW (H$`\alpha `$) and other spectral properties, as well as a more detailed analysis of the luminosity function and clustering of galaxies selected by EW (H$`\alpha `$) and by EW (\[O ii\]), is also described elsewhere. Of the 1797 galaxies originally published in the redshift survey, 1521 are suitable for analysis here. The rest are brighter than 15th magnitude, and so have unreliable APM photometry, or have a problem with the spectrum, so that EW (H$`\alpha `$) could not be accurately measured.
We select galaxy subsamples using the measured equivalent width of the H$`\alpha `$ emission line, the most reliable tracer of massive star formation. The H$`\alpha `$ line is detected with EW $`\geq 2`$ Å in 61% of galaxies. Of these emission-line galaxies, half have EW (H$`\alpha `$) $`>15`$Å. Thus we form three subsamples of comparable size by dividing the sample at EW (H$`\alpha `$) of 2Å and 15Å. The galaxy samples selected by H$`\alpha `$ equivalent width are defined in Table 1, which also gives the numbers of galaxies of each morphological type in each spectroscopically selected subsample. The sample labeled “Unk” consists of galaxies to which no morphological classification was assigned. We see that early-type galaxies dominate when H$`\alpha `$ emission is not detected and are underrepresented when emission is detected. Conversely, the number of irregular galaxies increases significantly in the spectroscopic samples which show the strongest star formation.
## 2 The Galaxy Luminosity Function
Our estimates of the luminosity function for the EW (H$`\alpha `$) selected samples, assuming a Hubble constant of $`H_0=100`$ km/s/Mpc, are shown in Figure 1. The inset to this Figure shows the likelihood contours for the best-fit Schechter parameters $`\alpha `$ and $`M^{*}`$. The Schechter parameters and their $`1\sigma `$ errors (from the bounding box of the $`1\sigma `$ error contours) are also listed in Table 2. Note that the estimates of $`\alpha `$ and $`M^{*}`$ are strongly correlated, and so the errors quoted for $`\alpha `$ and $`M^{*}`$ in the Table are conservatively large. We see a trend of faintening $`M^{*}`$ and steepening $`\alpha `$ as EW (H$`\alpha `$) increases.
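For reference, the Schechter form used in these fits, written per unit magnitude, can be evaluated as in the following minimal sketch (Python); the parameter values here are purely illustrative and are not the fitted values of Table 2.

```python
import numpy as np

def schechter(M, phi_star, M_star, alpha):
    """Schechter luminosity function per unit magnitude:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = 10^{0.4 (M*-M)}."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# illustrative values only; a more negative alpha raises the faint end
M = np.linspace(-22.0, -14.0, 81)
phi_quiescent = schechter(M, phi_star=1.0e-2, M_star=-19.7, alpha=-1.0)
phi_starforming = schechter(M, phi_star=1.0e-2, M_star=-19.2, alpha=-1.4)
```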
The normalisation $`\varphi ^{*}`$ of the fitted Schechter functions was estimated using a minimum variance estimate of the space density $`\overline{n}`$ of galaxies in each sample. We corrected our estimates of $`\overline{n}`$, $`\varphi ^{*}`$ and the luminosity density $`\rho _L`$ to allow for those galaxies excluded from each subsample. The uncertainty in the mean density due to “cosmic variance” is $`\sim 6\%`$ for each sample. However, the errors in these quantities are dominated by the uncertainty in the shape of the LF, particularly by the estimated value of the characteristic magnitude $`M^{*}`$.
Using H$`\alpha `$ equivalent width as an indicator of star formation activity, we find that galaxies currently undergoing significant bursts of star formation dominate the faint-end of the luminosity function, whereas more quiescent galaxies dominate at the bright end. This is in agreement with the results from the LCRS and ESP surveys for samples selected by EW (\[O ii\]).
## 3 Galaxy Clustering
We have calculated the projected cross-correlation function $`\mathrm{\Xi }(\sigma )`$ of each galaxy subsample with all galaxies in the APM survey to a magnitude limit of $`b_J=17.15`$. We then invert this projected correlation function to obtain the real-space cross-correlation function $`\xi (r)`$ of each subsample with the full galaxy sample. This method of estimating $`\xi (r)`$ has been described previously. Our estimates of $`\xi (r)`$ are plotted in Figure 2, and our best-fit power laws are tabulated in Table 2. We see that strong emission-line galaxies are more weakly clustered than their quiescent counterparts by a factor of about two.
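The inversion step can be sketched as follows, assuming $`\mathrm{\Xi }(\sigma )`$ is the usual line-of-sight integral of $`\xi `$, so that the standard Abel-type relation $`\xi (r)=-(1/\pi )\int _r^{\mathrm{\infty }}(d\mathrm{\Xi }/d\sigma )(\sigma ^2-r^2)^{-1/2}𝑑\sigma `$ applies; the substitution $`\sigma =r\mathrm{cosh}u`$ removes the integrable singularity. This is a generic sketch, not the exact numerical scheme used in the analysis.

```python
import numpy as np
from scipy.integrate import quad

def xi_from_Xi(r, dXi_dsigma, umax=12.0):
    """Invert the projected correlation function:
    xi(r) = -(1/pi) * int_r^inf Xi'(s) ds / sqrt(s^2 - r^2),
    evaluated with s = r*cosh(u) to absorb the square-root singularity."""
    val, _ = quad(lambda u: dXi_dsigma(r * np.cosh(u)), 0.0, umax)
    return -val / np.pi

# toy check: a projected power law Xi ~ s^(1-gamma) inverts to xi ~ r^(-gamma)
gamma = 1.8
dXi = lambda s: (1.0 - gamma) * s ** (-gamma)  # derivative of s^(1-gamma)
print(xi_from_Xi(1.0, dXi), xi_from_Xi(2.0, dXi) * 2.0 ** gamma)  # equal values
```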
The clustering measured for non-ELGs is very close to that measured for early-type (E + S0) galaxies, and the clustering of late-type (Sp + Irr) galaxies lies between that of the moderate and high EW galaxies. Given the strong correlation between morphological type and the presence of emission lines (Table 1), this result is not unexpected. The power-law slopes are consistent ($`\gamma _r=1.8\pm 0.1`$) between the H-low and H-high samples. For the H-mid sample we find shallower slopes ($`\gamma _r=1.6\pm 0.1`$). This is only a marginally significant (1–2 $`\sigma `$) effect, but may indicate a deficit of moderately star-forming galaxies principally in the cores of high density regions, whereas strongly star-forming galaxies appear to more generally avoid overdense regions.
## 4 Conclusions
We have presented the first analysis of the luminosity function and spatial clustering for representative and well-defined local samples of galaxies selected by EW (H$`\alpha `$), the most direct tracer of star formation. We observe that $`M^{*}`$ faintens systematically with increasing EW (H$`\alpha `$), and that the faint-end slope increases. Star-forming galaxies are thus likely to be significantly fainter than their quiescent counterparts. The faint-end ($`M>M^{*}`$) of the luminosity function is dominated by ELGs, and thus the majority of local dwarf galaxies are currently undergoing star formation. Star-forming galaxies are more weakly clustered than quiescent galaxies. This weaker clustering is observable on scales from 0.1–10 $`h^{-1}\mathrm{Mpc}`$. We thus confirm that star-forming galaxies are preferentially found today in low-density environments.
A possible explanation for these observations is that luminous galaxies in high-density regions have already formed all their stars by today, while less luminous galaxies in low-density regions are still undergoing star formation. It is not clear what might be triggering the star formation in these galaxies today. While interactions certainly enhance the rate of star formation in some disk galaxies, interactions with luminous companions can only account for a small fraction of the total star formation in disk galaxies today. Telles & Maddox have investigated the environments of H ii galaxies by cross-correlating a sample of H ii galaxies with APM galaxies as faint as $`b_J=20.5`$. They find no excess of companions with H i mass $`>10^8M_{\mathrm{\odot }}`$ near H ii galaxies, thus arguing that star formation in most H ii galaxies is unlikely to be induced by even a low-mass companion.
Our results are entirely consistent with the hierarchical picture of galaxy formation. In this picture, today’s luminous spheroidal galaxies formed from past mergers of galactic sub-units in high density regions, and produced all of their stars in a merger induced burst, or series of bursts, over a relatively short timescale. The majority of present-day dwarf, star-forming galaxies in lower density regions may correspond to unmerged systems formed at lower peaks in the primordial density field and whose star formation is still taking place. Of course, the full picture of galaxy formation is likely to be significantly more complicated than this simple sketch, and numerous physical effects such as depletion of star-forming material and other feedback mechanisms are likely to play an important role.
# On the Nuclear Rotation Curve of M31
## 1 Introduction
Currently viable explanations for the double-peaked structure of the nucleus of M31 revealed by the Hubble Space Telescope (HST; Lauer et al. 1993, 1998) center around two basic scenarios: first, that the off-center brightness peak, P1, represents a transient structure, possibly a star cluster on the verge of tidal disruption (Emsellem & Combes 1997); second, that P1 is an equilibrium configuration, resulting from a statistical accumulation of stars near the apoapsides of their orbits in an eccentric ring or disk (Tremaine 1995, hereafter T95). Groundbased spectroscopy at arcsecond (Bacon et al. 1994) and better (Kormendy & Bender 1999, hereafter KB99) resolution shows asymmetries in the rotation and dispersion profiles across the two brightness peaks. These asymmetries are in the sense expected in the eccentric disk picture, prompting KB99 to argue in strong support of the T95 model. However, the sinking-cluster scenario has enough adjustable parameters that, with sufficient perseverance, an adequate fit to the data could probably be found.
The highest resolution kinematic data to date come from the f/48 long-slit spectrograph of the HST Faint Object Camera (FOC; Statler et al. 1999, hereafter SKCJ). The FOC rotation curve is completely consistent with the groundbased data, when the former is convolved to the resolution of the latter. In addition, the FOC data show kinematic features at smaller scales. The most significant is a disturbance to the rotation curve in P1, superficially resembling a barely resolved local rotation in the same sense as the overall rotation of the nucleus (fig. 4c below). The FOC data are limited by low signal-to-noise ($`S/N`$) ratio, and confirmation by upcoming Space Telescope Imaging Spectrograph (STIS) observations is certainly desirable. However, the “P1 wiggle” is found in the region of highest $`S/N`$ in the FOC data, and the peak-to-peak amplitude of $`120\mathrm{km}\mathrm{s}^{-1}`$ is robust and insensitive to details of the reduction process.
It is difficult to argue that the P1 wiggle could be a natural consequence of the sinking cluster scenario. If P1 is a bound object, its luminosity ($`\sim 10^6L_{\mathrm{\odot }}`$) and characteristic radius ($`\sim 1\mathrm{pc}`$) would suggest a severely tidally truncated, collapsed-core globular cluster. But a rotation velocity $`\sim 60\mathrm{km}\mathrm{s}^{-1}`$ would correspond to $`V/\sigma >1`$, at least two times higher than observed in Galactic globulars (e.g., Gebhardt et al. 1997) or inferred from their flattenings (Davoust & Prugniel 1990). Spin-up by an earlier tidal interaction with the central black hole (BH) is conceivable, but it is hard to see how such an interaction could have avoided disrupting the cluster.
On the other hand, a local distortion to the rotation curve of this magnitude may be a natural consequence of the eccentric disk picture. The shape of the distortion suggests that the self-gravity of the disk may be at work. It is easy to see how such a feature could arise in the simple case of a massive circular ring centered on the BH. The ring pulls outward on objects in its interior; thus the ring potential would lower the circular velocity for orbits just inside the ring, and raise it for orbits just outside, distorting the rotation curve in the sense observed. Of course, this distortion would be seen symmetrically on both sides of the center. In this Letter, I show that the self-gravity of an eccentric disk will naturally produce the same kind of feature in the rotation curve—though for a different reason—only on one side, as is seen in the FOC data.
The plan of this Letter is as follows: In § 2 I construct a simple model for a non-self-gravitating eccentric disk, whose parameters are consistent with the T95 model for M31. The purpose of the initial model is only to provide a plausible density distribution, from which I calculate a plausible perturbation to the otherwise Keplerian potential. In § 3, I examine the closed periodic orbits in the perturbed potential, in a frame rotating at an assumed precession speed $`\mathrm{\Omega }_p`$; these orbits will be the parents of more general quasi-periodic orbits that will be populated in the self-gravitating disk. The character of the perturbation forces a steep negative eccentricity gradient in the sequence of orbits moving outward through the densest part of the disk. This gradient reverses the arrangement of periapsis and apoapsis, such that stars making up the inner part of the density concentration are at apoapsis, while stars making up the outer part are at periapsis. Elementary considerations of celestial mechanics show why this must be the case, independent of the details of the mass distribution. While self-consistent models are outside the scope of this Letter, I estimate in § 4 the effect on the rotation curve by integrating the closed-orbit velocity field over an aperture approximating the FOC slit and show that the expected signature is qualitatively close to the observed rotation curve. Finally, I discuss in § 5 the implications for more realistic models and the prospects for using details of the rotation curve to constrain the masses of the disk and BH.
## 2 Mass Distribution in an Eccentric Disk
To calculate a plausible density distribution, I assume a cold, infinitesimally thin disk of stars on aligned elliptical Kepler orbits. I place the BH of mass $`M`$ at the origin and the common line of apsides along the $`x`$ axis in a cartesian system, with periapsides at positive values of $`x`$. The eccentricity $`e`$ as a function of semimajor axis $`a`$ in the initial model is specified by a fixed function $`e(a)`$, which I assume changes sufficiently slowly that the orbits are not mutually intersecting. The gravity of the bulge is ignored.
For a mass $`dM`$ distributed in a phase-independent way around a single orbit, the mass per unit length $`\mathrm{\ell }`$ would be given by $`dM/vP`$, where $`v`$ is the instantaneous speed and $`P`$ the period. For a continuum of orbits labeled by their semimajor axes $`a`$ and populated so that the mass per unit interval of $`a`$ is $`\mu (a)`$, the mass in the area $`dad\mathrm{\ell }`$ at $`(a,\mathrm{\ell })`$ is $`dM=\mu (a)dad\mathrm{\ell }/vP`$. Replacing $`a`$ by the length $`s`$ measured perpendicular to the orbit introduces a factor $`da/ds=|\mathrm{\nabla }a|`$. Writing this factor explicitly in terms of the eccentricity law $`e(a)`$, and using the standard formulae for Kepler orbits, I obtain for the surface density of the disk, after some algebra,
$$\mathrm{\Sigma }(a,x)=\frac{\mu (a)}{2\pi a}\frac{(1-e^2)^{1/2}}{1-e^2-(2ae+x)e^{\prime }}.$$
(1)
In equation (1), $`x`$ is the cartesian coordinate and $`e^{\prime }\equiv de/da`$. Note that having a density maximum at apoapsis ($`x=-a[1+e]`$) requires that the eccentricity decrease outwards.
A particularly simple choice for $`e(a)`$ that is qualitatively consistent with the behavior in the T95 model is
$$e(a)=\frac{\mathrm{\Delta }}{a}.$$
(2)
Equation (2) implies that the orbits are all confocal, sharing the focus at the origin occupied by the BH and the empty focus at $`x=-2\mathrm{\Delta }`$. The surface density of a confocal disk takes on a simple form in the elliptic coordinate system $`(u,w)`$ defined by
$$x=\mathrm{\Delta }(\mathrm{cosh}u\mathrm{sin}w-1),y=\mathrm{\Delta }\mathrm{sinh}u\mathrm{cos}w.$$
(3)
In these coordinates each orbit follows an ellipse $`u=\mathrm{constant}`$, and the surface density is given by
$$\mathrm{\Sigma }(u,w)=\frac{\mu (\mathrm{\Delta }\mathrm{cosh}u)}{2\pi \mathrm{\Delta }\mathrm{cosh}u}\frac{\mathrm{tanh}u}{\mathrm{cosh}u+\mathrm{sin}w}.$$
(4)
The potential can be computed by standard methods, most efficiently by an expansion in suitable basis functions. For this work I simply evaluate the integral
$$\mathrm{\Phi }=-G\mathrm{\Delta }\int _0^{2\pi }𝑑w^{\prime }\int _0^{\mathrm{\infty }}𝑑u^{\prime }\frac{\mathrm{\Sigma }(u^{\prime },w^{\prime })\left(\mathrm{sinh}^2u^{\prime }+\mathrm{cos}^2w^{\prime }\right)}{\left\{\left[\mathrm{cosh}(u-u^{\prime })-\mathrm{cos}(w-w^{\prime })\right]\left[\mathrm{cosh}(u+u^{\prime })+\mathrm{cos}(w+w^{\prime })\right]\right\}^{1/2}}.$$
(5)
numerically and tabulate it on a grid in $`(u,w)`$.
A choice for $`\mu (a)`$ that produces a disk with a central density minimum and a manageable outer cutoff is
$$\mu (a)=\mu _0(a-\mathrm{\Delta })\mathrm{exp}\left[-\frac{(a-a_0)^2}{2\sigma ^2}\right],$$
(6)
where the leading factor prevents density singularities at the foci. The numerical results below are computed in units where $`G=M=\mathrm{\Delta }=1`$. Taking $`a_0=2`$ places the peak density on orbits with eccentricities close to $`e=0.5`$; for comparison, the innermost ringlet in the T95 model, which contributes most of the density in P1, has $`e=0.44`$. I consider two models with $`\mu (a)`$ as given in equation (6) and $`a_0=2`$: a “wide” disk with $`\sigma =0.5`$, and a “narrow” disk with $`\sigma =0.2`$. The center of mass is at $`x=-1.5`$ for both disks. The surface densities are shown in figure 1, along with some representative orbits. I emphasize again that the purpose of these models is only to provide a plausible perturbing potential.
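A minimal numerical sketch of this density model (Python): evaluate equation (4) with the $`\mu (a)`$ of equation (6) on a Cartesian grid, inverting equation (3) through the two focal distances (their sum is $`2a`$, and $`\mathrm{sin}w`$ follows from the $`x`$ relation). Everything is in the stated units $`G=M=\mathrm{\Delta }=1`$; the grid extents are arbitrary choices.

```python
import numpy as np

Delta, a0, sig, mu0 = 1.0, 2.0, 0.5, 1.0  # the "wide" disk parameters

def mu(a):
    """Mass per unit semimajor axis, equation (6)."""
    return mu0 * (a - Delta) * np.exp(-(a - a0) ** 2 / (2.0 * sig ** 2))

# Cartesian grid; the foci sit at (0, 0) and (-2*Delta, 0)
x, y = np.meshgrid(np.linspace(-4.5, 2.0, 300), np.linspace(-3.0, 3.0, 300))
r1 = np.hypot(x, y)
r2 = np.hypot(x + 2.0 * Delta, y)
coshu = np.clip((r1 + r2) / (2.0 * Delta), 1.0 + 1e-12, None)  # = a / Delta
u = np.arccosh(coshu)
sinw = (x + Delta) / (Delta * coshu)  # from x = Delta*(cosh(u)*sin(w) - 1)

# surface density, equation (4)
Sigma = (mu(Delta * coshu) / (2.0 * np.pi * Delta * coshu)
         * np.tanh(u) / (coshu + sinw))
```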
## 3 Periodic Orbits
Both the disk models discussed above and the T95 model are built entirely from aligned, periodic orbits. A real self-gravitating disk with finite velocity dispersion will be made predominantly from quasiperiodic orbits whose parents are nearly elliptical, periodic orbits elongated in the same sense as the disk. As long as the dispersion is not too large, the kinematics of the quasiperiodic orbits will approximately follow that of their periodic parents. Therefore a first approximation to the disk kinematics can be found by examining the sequence of periodic orbits elongated in the $`x`$ direction in the perturbed potential. Consider the effect of the perturbation on purely Keplerian orbits. If it were fixed in an inertial frame, the perturbing potential \[equation (5)\] would initially drive a precession of each Kepler orbit at a frequency $`\mathrm{\Omega }(a,e)`$, depending on semimajor axis and eccentricity. Alternatively, if it were fixed in a frame rotating at frequency $`\mathrm{\Omega }(a,e)`$, then the perturbed orbit with that $`a`$ and $`e`$ would be closed in the rotating frame. I assume that the disk is stationary in a frame rotating at a fixed precession speed $`\mathrm{\Omega }_p`$; then the set of orbits satisfying $`\mathrm{\Omega }(a,e)=\mathrm{\Omega }_p`$ are the closed orbits in the disk.
I calculate sequences of periodic orbits for various assumed values of $`\mathrm{\Omega }_p`$ by direct integration of the equations of motion in a frame rotating about the system barycenter. Orbits are launched perpendicularly from the $`x`$ axis and the initial velocities adjusted iteratively so that the next $`x`$ axis crossing occurs with $`v_x=0`$. This procedure is not intended to find all of the periodic orbits, only the nearly elliptical ones elongated in the $`x`$ direction.
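The sketch below (Python) is a schematic version of such a shooting method. For brevity it rotates about the BH rather than the barycenter, and the disk force is a trivial placeholder, so as written the search simply recovers the near-circular orbit; substituting the gradient of the tabulated potential of § 2 for `grad_Phi_disk` would give the perturbed sequence. The `brentq` bracket is an assumed, not derived, range.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Omega_p = 0.006  # assumed pattern speed, code units G = M = Delta = 1

def grad_Phi_disk(x, y):
    # placeholder: replace with the tabulated disk-potential gradient
    return 0.0, 0.0

def rhs(t, s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    dpx, dpy = grad_Phi_disk(x, y)
    # Kepler plus disk forces, with centrifugal and Coriolis terms
    ax = -x / r3 - dpx + Omega_p**2 * x + 2.0 * Omega_p * vy
    ay = -y / r3 - dpy + Omega_p**2 * y - 2.0 * Omega_p * vx
    return [vx, vy, ax, ay]

def vx_at_next_crossing(x0, vy0):
    """Launch perpendicular to the x axis; return v_x at the next crossing."""
    cross = lambda t, s: s[1]
    cross.terminal, cross.direction = True, -1.0  # y decreasing through zero
    sol = solve_ivp(rhs, [0.0, 1.0e4], [x0, 0.0, 0.0, vy0],
                    events=cross, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][2]

x0 = 1.0  # launch point on the positive x axis
vy0 = brentq(lambda v: vx_at_next_crossing(x0, v), 0.7, 1.3)
```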
A typical sequence of orbits in the wide ($`\sigma =0.5`$) disk is shown in Figure 2. This sequence is computed for a disk mass $`m=0.025`$ and precession speed $`\mathrm{\Omega }_p=0.006`$. All of the numerical results below are computed using the same disk mass; to leading order the results depend only on the ratio $`m/\mathrm{\Omega }_p`$ and thus can be scaled to other masses. Notice that: (1) the innermost and outermost orbits are nearly circular; (2) a maximum eccentricity is reached by orbits whose apoapsides are slightly inside the peak density in the disk; and, most important, (3) many orbits of rapidly declining eccentricity pass through nearly the same apoapsis, in this case just outside the density peak.
Since the perturbed orbits are not precisely elliptical, eccentricity is defined here in terms of the points $`x_+`$ and $`x_{-}`$ where the orbit intersects the positive and negative $`x`$ axis, respectively. I let $`e\equiv (x_{-}+x_+)/(x_{-}-x_+)`$, so that $`e>0`$ ($`e<0`$) indicates periapsis at positive (negative) $`x`$. The run of $`e`$ against $`x_{-}`$ is shown in Figure 3a. Notice that $`e`$ goes negative for orbits outside the density peak. It is easy to see why $`e`$ has to change sign. Consider an orbit just inside the density peak. The perturbation can be approximated by an outward impulse applied to the orbit at apoapsis, which induces a precession in the prograde direction (Brouwer & Clemence 1961). Conversely, an orbit just outside the density peak receives an inward impulse, which must be applied at periapsis to cause precession in the same direction. The required gradient, $`e^{\prime }<0`$, is just what equation (1) says is needed to produce a mass concentration along the negative $`x`$ axis. Note that equation (1) applies for $`e`$ of either sign; thus in the region where $`e<0`$, the outward increase in $`|e|`$ still contributes to the mass concentration at negative $`x`$. This argument also shows that the disk precession must be prograde, since retrograde precession would require $`e^{\prime }>0`$, putting the mass concentration on the wrong side to produce the needed impulse.<sup>1</sup>Goldreich & Tremaine (1979) apply a similar argument to the $`ϵ`$ ring of Uranus. In that situation, however, the ring self-gravity acts to lower the precession rate driven by the Uranian quadrupole; hence the eccentricity increases outwards and the densest part of the ring is at periapsis. At smaller and larger $`a`$, $`|e|`$ returns to near zero. For these orbits, the perturbation approximates a constant force in the negative $`x`$ and $`r`$ directions, respectively. In either case, the precession rate contains a leading factor $`(1-e^2)^{1/2}/e`$ (Brouwer & Clemence 1961). Orbits more distant from the density peak must therefore be less eccentric to maintain the same precession rate.
Relatively simple celestial mechanics thus shows that $`e`$ must change sign (in the convention used here, from positive to negative) in any near-Keplerian disk with an eccentric density peak, in order for the disk to precess uniformly. The change in the sign of $`e`$ means that stars contributing to the inner part of the density concentration are lingering near apoapsis, having risen from smaller radii, while stars making up the outer part are swinging through periapsis, having fallen from larger radii. This is the key to the observable kinematic signature.
## 4 Effect on the Rotation Curve
Figure 3b shows the velocities of orbits crossing the negative $`x`$ axis as a function of the crossing point $`x_{-}`$. The velocity falls below the local circular speed for $`x_{-}>-3.4`$ because of the positive eccentricity, then abruptly rises as $`e`$ drops through zero. The function $`v(x_{-})`$ is actually double-valued because the orbits are mutually crossing near apoapsis in the region where $`e`$ is changing rapidly (Fig. 2). Orbit crossing occurs for both of the disk density models considered here, though it is reduced somewhat by higher precession speeds. The rather rapidly precessing case shown in Figures 2 and 3 was chosen more for clarity than for realism. In reality, the precession speed will be set by self-consistency. A reasonable estimate for $`\mathrm{\Omega }_p`$ can be obtained by requiring that the orbit through the density maximum of the original confocal model precess with the disk, retaining its original eccentricity. This gives slightly slower speeds of $`\mathrm{\Omega }_p=0.004`$ for the wide disk and $`\mathrm{\Omega }_p=0.006`$ for the narrow disk, for a mass of $`0.025`$.
The double-valued nature of the rotation curve in Figure 3b will, of course, be washed out by projection effects. To approximate the observable signature, I project the disk velocity fields for a line of sight parallel to the $`y`$ axis, adopting for the disk density that of the original confocal model. To compare with the FOC rotation curve, I let the distance from the origin to the density maximum in the model correspond to the P1-P2 separation of $`0\stackrel{}{\mathrm{.}}49`$. In the T95 model, the disk has an inclination of $`77\mathrm{°}`$, at which the FOC slit width of $`0\stackrel{}{\mathrm{.}}063`$ projects to a width of $`0\stackrel{}{\mathrm{.}}28`$ in the disk plane. This corresponds to a band of width $`\mathrm{\Delta }y=1.8`$ about the $`x`$ axis, over which I integrate the line of sight velocity. Since I completely ignore any contribution from the bulge, either to the light or to the rotation, the results should not be taken too literally, especially near the center.
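In code, the slit integration amounts to a density-weighted average of the line-of-sight velocity across the band; a minimal sketch, with `Sigma` and `vy` standing for the model surface-density and velocity fields to be supplied:

```python
import numpy as np

def v_los(x, Sigma, vy, width=1.8, ny=201):
    """Density-weighted mean line-of-sight velocity in a band |y| < width/2,
    mimicking the slit integration (bulge light and rotation are ignored,
    as in the text); the grid is uniform, so simple sums suffice."""
    y = np.linspace(-0.5 * width, 0.5 * width, ny)
    w = Sigma(x, y)
    return np.sum(w * vy(x, y)) / np.sum(w)
```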
Figure 4a shows the projected rotation curve for the wide disk with $`m=0.025`$ and $`\mathrm{\Omega }_p=0.004`$. The dip and outer bump produced by the steep part of the ellipticity profile are preserved in projection. The near-Keplerian profile at small $`|x_{-}|`$ is smoothed into an inner bump by the central hole in the density distribution and by the finite slit width. The combination of these effects produces a wiggle similar to that in the M31 rotation curve (fig. 4c); note that there is no comparable feature on the anti-P1 side of the center in either model or data. For the wide disk, the model wiggle is quite a bit too broad, a discrepancy remedied somewhat by narrowing the mass distribution. Figure 4b shows the rotation curve for the narrow disk with the same mass and $`\mathrm{\Omega }_p=0.006`$. Though the fit is still not exact, the wiggle is noticeably narrowed; in particular, the outer bump now closely coincides with the density peak, in agreement with the FOC data. The results in Figure 4 are not qualitatively changed by $`\pm 50\%`$ variations in the assumed value of $`\mathrm{\Omega }_p`$.
## 5 Discussion
I have shown that in an eccentric near-Keplerian disk with an off-center density concentration, the disk’s self-gravity will affect the periodic orbits in such a way as to produce an observable signature in the projected rotation curve. This signature is a “wiggle” extending through the region of peak density, with a local velocity maximum at or just outside the peak, and a dip in the rotation curve just inside. Details interior to this region depend on the density structure, viewing geometry, and resolution. For a mass distribution resembling that of the T95 model for the M31 double nucleus, the computed wiggle closely resembles the observed FOC rotation curve through P1. This agreement gives strong support to the basic correctness of the eccentric disk picture.
A better understanding of the dynamics of M31’s nuclear region should lead to more accurate mass determinations for the central BH. An enticing notion is that the P1 wiggle could be used to constrain the disk-to-BH mass ratio, and thereby the mass of the BH by way of an assumed mass-to-light ratio for the disk. Unfortunately, this is easier said than done. Disk models with the same value of $`m/\mathrm{\Omega }_p`$ produce the same eccentricity law and rotation curve, to leading order. Constraining the mass would thus require an independent estimate of the precession speed. The T95 model has a disk mass of $`0.16`$, which would imply $`\mathrm{\Omega }_p\simeq 0.03`$, by the arguments of § 4. Approximating the potential as Keplerian, this would put an inner Lindblad resonance (ILR) outside the dense part of the disk, in the vicinity of $`r\sim 1^{\prime \prime }`$. But whether one should expect to detect an ILR in a predominantly hot stellar system is, to say the least, problematical.
More realistic disk models will be needed both to obtain an accurate mass for the central BH and to understand the stellar dynamics of the nucleus. The simple models presented here neglect some important complications. The $`e(a)`$ profile in Figure 3a for the periodic orbits would imply a much more sharply peaked density structure than assumed in the confocal models in § 2. But equation (1) for the surface density is no longer valid when the orbits intersect, which would appear to be a necessary consequence of self-gravity. In reality, the radial width of the disk will be determined by the velocity dispersion. A self-consistent disk will mainly be composed of quasi-periodic orbits librating about the closed orbits considered here.<sup>2</sup>Examples of these loop-like orbits and their possible role in nuclear disks have recently been discussed by Sridhar & Touma (1999). Sufficiently high dispersion could wash out the kinematic signature of the periodic orbits; self-consistent models will be required to determine at what dispersion this occurs. If the P1 wiggle is a sign of disk self-gravity, the challenge may really be to explain why M31’s disk is dynamically cold enough for the effect to be visible.
###### Acknowledgements.
This work was supported by NSF CAREER grant AST-9703036. The author is grateful to Ivan King, Joe Shields, Scott Tremaine, and Steve Vine for helpful comments.
# Contact Interactions: Results from ZEUS and a Global Analysis
## 1 INTRODUCTION
The HERA $`ep`$ collider has extended the kinematic range available for the study of deep–inelastic scattering (DIS) by two orders of magnitude to values of $`Q^2`$ up to about $`50000\mathrm{GeV}^2`$. Measurements in this domain allow new searches for physics processes beyond the Standard Model (SM) at characteristic mass scales in the $`\mathrm{TeV}`$ range. The recent analyses were stimulated in part by an excess of events over the SM expectation for $`Q^2>20000\mathrm{GeV}^2`$ reported in 1997 by the ZEUS and H1 collaborations, for which electron-quark contact interactions (CI) have been suggested as possible explanations.
## 2 CONTACT INTERACTIONS
Four-fermion contact interactions are an effective theory, which allows us to describe, in the most general way, possible low energy effects coming from “new physics” at much higher energy scales. This includes the possible existence of second-generation heavy weak bosons, leptoquarks as well as electron and quark compositeness . As strong limits beyond the HERA sensitivity have already been placed on the scalar and tensor terms , only the vector $`eeqq`$ contact interactions are considered in this study. They can be represented as additional term in the Standard Model Lagrangian :
$`L_{CI}`$ $`=`$ $`ϵ{\displaystyle \frac{g^2}{\mathrm{\Lambda }^2}}{\displaystyle \underset{i,j=L,R}{\sum }}\eta _{ij}^{eq}(\overline{e}_i\gamma ^\mu e_i)(\overline{q}_j\gamma _\mu q_j)`$ (1)
where the sum runs over electron and quark helicities, $`ϵ`$ is the overall sign of the CI Lagrangian, $`g`$ is the coupling, and $`\mathrm{\Lambda }`$ is the effective mass scale. The helicity and flavour structure of the contact interactions is described by the set of parameters $`\eta _{ij}^{eq}`$. Since $`g`$ and $`\mathrm{\Lambda }`$ always enter in the combination $`g^2/\mathrm{\Lambda }^2`$, we fix the coupling by adopting the convention $`g^2=4\pi `$. In the ZEUS analysis 30 specific CI scenarios are considered. The assumed relations between the different couplings are listed in Table 1. Each line in the table represents two scenarios, one for $`ϵ=+1`$ and one for $`ϵ=-1`$. For the models VV, AA, VA and X1 to X6 all quark flavours are assumed to have the same contact interaction couplings, and each of the $`\eta _{ij}^{eq}`$ is either zero or $`\pm 1`$. For the U1 to U6 models only couplings of up-type quarks ($`u`$ and $`c`$) are considered.
The global analysis combining data from different experiments (see sections 4 and 5) also takes into account three less constrained models, in which different couplings can vary independently. The General Model assumes that contact interactions couple only electrons to $`u`$ and $`d`$ quarks (8 independent couplings). All other couplings (for $`s,c,b,t,\mu ,\tau `$) are assumed to vanish. The model with Family Universality assumes lepton universality ($`e`$=$`\mu `$) and quark family universality ($`u`$=$`c`$ and $`d`$=$`s`$=$`b`$). There are also 8 independent couplings. In a model assuming $`\mathrm{𝐒𝐔}(\mathrm{𝟐})_𝐋\times 𝐔(\mathrm{𝟏})_𝐘`$ gauge invariance, the number of free model parameters is reduced from 8 to 7 ($`\eta _{RL}^{eu}`$=$`\eta _{RL}^{ed}`$). In this model the $`eeqq`$ contact interaction couplings can be also related to $`\nu \nu qq`$ and $`e\nu qq^{}`$ couplings .
## 3 ZEUS ANALYSIS
This analysis is based on $`47.7\mathrm{pb}^{-1}`$ of NC $`e^+p`$ DIS data collected by the ZEUS experiment in 1994-97. The Monte Carlo simulation, event selection, kinematic reconstruction, and assessment of systematic effects are those of the NC DIS analysis described in Ref.. The event sample used in the CI analysis is limited to $`0.04<x<0.95`$, $`0.04<y<0.95`$ and $`Q^2>500\mathrm{GeV}^2`$.
A cross-section increase at the highest $`Q^2`$, corresponding to the direct “new physics” contribution, is expected for most CI scenarios, as shown in Figure 1. At intermediate $`Q^2`$ a moderate increase or decrease due to CI-SM interference terms is possible. As the helicity structure of the new interactions can be different from that of the Standard Model, the differential cross-section $`d\sigma /dx`$ (at fixed $`Q^2`$) is also modified. The sensitivity to many CI scenarios is significantly improved by considering the two-dimensional event distribution.
The ZEUS CI analysis compares the distributions of the measured kinematic variables with the corresponding distributions from a MC simulation of $`e^+p\rightarrow e^+X`$ events reweighted to simulate the CI scenarios. An unbinned log–likelihood technique is used to calculate $`L(ϵ/\mathrm{\Lambda }^2)`$ from the individual kinematic event coordinates $`(x_i,y_i)`$:
$$L(ϵ/\mathrm{\Lambda }^2)=-\underset{i\in \mathrm{data}}{\sum }\mathrm{log}p(x_i,y_i;ϵ/\mathrm{\Lambda }^2),$$
(2)
where the sum runs over all events in the selected data sample and $`p(x_i,y_i;ϵ/\mathrm{\Lambda }^2)`$ is the probability that an event $`i`$ observed at $`(x_i,y_i)`$ results from the model described by coupling $`ϵ/\mathrm{\Lambda }^2`$. $`L`$ tests the shape of the $`(x,y)`$–distribution but is independent of its absolute normalisation.
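In pseudocode form the estimator is simply the following; the probability density `pdf` (the normalised $`(x,y)`$ distribution of the reweighted MC for a given coupling) is left as a placeholder for the full detector-level prediction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def L(coupling, x, y, pdf):
    """Unbinned log-likelihood of Eq. (2): minus the summed log-probabilities."""
    return -np.sum(np.log(pdf(x, y, coupling)))

# best estimate: the position of the minimum of L over the coupling eps/Lambda^2
# res = minimize_scalar(lambda c: L(c, x_data, y_data, pdf),
#                       bracket=(-1.0, 0.0, 1.0))  # illustrative bracket
```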
The best estimates, $`\mathrm{\Lambda }^\pm `$, for the different CI scenarios are given by the positions of the respective minima of $`L(ϵ/\mathrm{\Lambda }^2)`$ for $`ϵ`$=$`\pm 1`$. All results are consistent with the Standard Model; the probability that the observed values of $`\mathrm{\Lambda }^\pm `$ result from the Standard Model does not fall below 16%. The $`95\%`$ C.L. lower limits $`\mathrm{\Lambda }_{min}`$ on the effective mass scale $`\mathrm{\Lambda }`$ are defined as the mass scales for which MC experiments have a 95% chance to result in $`\mathrm{\Lambda }`$ values smaller than that observed in data. The lower limits on $`\mathrm{\Lambda }`$ ($`\mathrm{\Lambda }_{min}^\pm `$ for $`ϵ`$=$`\pm 1`$) are summarized in Table 1. The $`\mathrm{\Lambda }`$ limits range from 1.7 to $`5\mathrm{TeV}`$.
## 4 GLOBAL ANALYSIS
The global analysis of $`eeqq`$ contact interactions combines relevant data from different experiments: ZEUS and H1 high-$`Q^2`$ NC DIS results; Tevatron data on high-mass Drell-Yan lepton pair production; LEP2 results on the hadronic cross-section $`\sigma (e^+e^{-}\rightarrow q\overline{q}(\gamma ))`$, the heavy quark production ratios $`R_b`$ and $`R_c`$, and the forward-backward asymmetries $`A_{FB}^b`$, $`A_{FB}^c`$, $`A_{FB}^{uds}`$; data from low-energy $`eN`$ and $`\mu N`$ scattering and from atomic parity violation (APV) measurements.
For models assuming $`SU(2)_L\times U(1)_Y`$ universality, additional constraints come from HERA $`e^+p`$ CC DIS results, data on $`\nu N`$ scattering, unitarity of the CKM matrix and electron-muon universality.
The combined data are consistent with the Standard Model predictions. The mass scale limits $`\mathrm{\Lambda }_{min}^{}`$ and $`\mathrm{\Lambda }_{min}^+`$ obtained from fitting one-parameter models to all available data are summarized in Table 1. For models not assuming $`SU(2)_L\times U(1)_Y`$ universality (only $`e`$/$`\mu `$ NC data used) the mass limits range from 5.1 to $`11.7\mathrm{TeV}`$. With $`SU(2)_L\times U(1)_Y`$ universality (using also $`\nu N`$ and CC data) the limits extend up to about $`18\mathrm{TeV}`$.
Limits for single couplings derived in multi-parameter models (of Section 2) are weaker than in the case of one-parameter models, as no relation between separate couplings is assumed. The mass limits obtained for the general model range from 2.1 to $`5.1\mathrm{TeV}`$. All limits improve significantly and reach 3.5 to $`7.8\mathrm{TeV}`$ for the SU(2) model with family universality.
Taking into account possible correlations between couplings, any contact interaction with a mass scale below $`2.1\mathrm{TeV}`$ ($`3.1\mathrm{TeV}`$ when SU(2) universality is assumed) is excluded at 95% CL.
## 5 PREDICTIONS
The likelihood function for possible cross-section deviations from the Standard Model predictions is calculated as a weighted average over all contributing CI coupling combinations. The results for HERA, in terms of the 95% C.L. limit bands on the ratio of the predicted and the Standard Model cross-sections as a function of $`Q^2`$, are shown in Figure 2, for the general model and the SU(2) model with family universality.
The allowed increase in the integrated $`e^+p`$ NC DIS cross-section for $`Q^2>`$ 15,000 GeV<sup>2</sup> is about 40% for the general model and about 30% for the SU(2) model. In order to reach the level of statistical precision needed to confirm a possible discrepancy of this size, the HERA experiments would have to collect $`e^+p`$ integrated luminosities of the order of 100-200$`\mathrm{pb}^{-1}`$ (depending on the model). This will be possible after the HERA upgrade planned for the year 2000.
Constraints on possible deviations from the Standard Model predictions are much stronger in the case of $`e^{-}p`$ NC DIS. For the general model deviations larger than about 20% are excluded for $`Q^2>`$15,000 GeV<sup>2</sup>, whereas for the SU(2) model with family universality the limit goes down to about 7%. In such a case it would be very hard to detect contact interactions in future HERA $`e^{-}p`$ running. However, for scattering with 60% longitudinal $`e_R^{-}`$ polarisation, the maximum allowed deviations increase to 28% and 19%, respectively, and significant effects could be observed already for integrated luminosities of the order of 120$`\mathrm{pb}^{-1}`$.
For the hadronic cross-section at LEP, for $`\sqrt{s}\simeq 200\mathrm{GeV}`$, the possible deviations from the Standard Model are only about 8%. However, significant deviations are still possible for the heavy quark production ratios $`R_c`$ and $`R_b`$, and for the forward-backward asymmetries $`A_{FB}^c`$ and $`A_{FB}^b`$. Significant cross-section deviations will be possible at the Next Linear Collider (NLC), for $`\sqrt{s}>300\mathrm{GeV}`$. The largest cross-section deviations from the Standard Model predictions are still allowed at the Tevatron. For Drell-Yan lepton pair production, deviations of the cross-section at $`M_{ll}`$=500 GeV up to a factor of 5 are still not excluded.
Figure 3 presents the relations between possible cross-section deviations at HERA, the NLC and the Tevatron. There are no clear correlations between the different experiments. All experiments should continue to analyse their data in terms of possible new electron-quark interactions, as the constraints resulting from different processes are, to a large extent, complementary.
## 1 Introduction
One of the most-studied variants of the two-dimensional Ising model is the case of random bonds. While realizations of Ising models that include randomness come much closer to approximating reality, they are very much harder to study at any level. Even in two dimensions, exact results for random cases (especially for quenched randomness, which is the realistic situation in many experiments) are few and far between. In fact, the two-dimensional Ising case is especially difficult because of the marginality of the Harris criterion for this model. This criterion states that quenched randomness is a relevant (irrelevant) perturbation when the critical exponent $`\alpha `$ of the specific heat of the pure system is positive (negative); therefore, when $`\alpha =0`$ as in the two-dimensional Ising model, the situation is marginal.
Numerous theoretical investigations \[2-7\] as well as numerical Monte Carlo simulations \[8-15\] and transfer-matrix studies have addressed the question of whether the critical exponents for the two-dimensional Ising model with quenched, random bond disorder differ from those of the pure model. While a “majority” consensus had probably been achieved in favour of no change apart from logarithmic corrections \[3-6\], no unambiguous numerical study that confirmed the quantitative predictions of either of the theoretical approaches had been made prior to our recent study of the susceptibility with high-temperature series expansions. In a brief note we announced the confirmation of the theoretical majority-consensus value of the exponent of the logarithmic correction. This quantitative determination of the correction exponent, in excellent agreement with the predicted value and using a completely different numerical approach that in no way depends on random numbers, provided additional and incontrovertible support for the previous consensus. In the present paper we present the coefficients of the susceptibility series that we analysed, together with some remarks on their derivation, the details of our analysis, and some additional results for the specific-heat series. We note that a recent simulation of the site-diluted model also confirms the log-log prediction of \[3-6\].
In the next section, we define the model and the quantities that are studied, and in Sec. 3 the theoretical predictions are briefly recalled. The series generation is described in Sec. 4, and in Sec. 5 we describe the analysis techniques used. Sec. 6 then presents the results, where details of our analyses for the susceptibility give compelling evidence for a singularity of the form predicted by Shalaev, Shankar, and Ludwig \[3-6\]. In Sec. 7 we close with a summary of our conclusions and a few final comments.
## 2 Model
The Hamiltonian of the random-bond Ising model is given by
$$\mathcal{H}=-\sum_{\langle ij\rangle }J_{ij}\sigma _i\sigma _j,$$
(1)
where the spins $`\sigma _i=\pm 1`$ are located at the sites of a square lattice, the symbol $`\langle ij\rangle `$ denotes nearest-neighbor interactions, and the coupling constants $`J_{ij}`$ are quenched, random variables. As in most previous studies we consider a bimodal distribution,
$$P(J_{ij})=x\delta (J_{ij}-J_1)+(1-x)\delta (J_{ij}-J_2),$$
(2)
of two ferromagnetic couplings $`J_1`$, $`J_2>0`$. We furthermore specialize to a symmetric distribution with $`x=1/2`$, since in this case the exact critical temperature $`T_c`$ can be computed for any positive value of $`J_1`$ and $`J_2`$ from the (transcendental) self-duality relation ($`k_B=`$ Boltzmann constant)
$$\left(\mathrm{exp}(2J_1/k_BT_c)-1\right)\left(\mathrm{exp}(2J_2/k_BT_c)-1\right)=2.$$
(3)
In both computer simulations and high-temperature series expansion studies, this exact information simplifies the analysis of the critical behavior considerably.
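For example, eq. (3) can be solved for $`T_c`$ by a one-dimensional root search; a minimal sketch in Python, in units with $`k_B=1`$:

```python
import numpy as np
from scipy.optimize import brentq

def tc_from_duality(J1, J2):
    """T_c of the symmetric (x = 1/2) random-bond model from the
    self-duality relation (3): (exp(2 J1/T) - 1)(exp(2 J2/T) - 1) = 2."""
    f = lambda T: np.expm1(2.0 * J1 / T) * np.expm1(2.0 * J2 / T) - 2.0
    return brentq(f, 0.1, 100.0)    # f > 0 at small T, f -> -2 at large T

print(tc_from_duality(1.0, 1.0))    # pure case: 2/ln(1+sqrt(2)) = 2.2691...
print(tc_from_duality(1.0, 4.0))    # T_c for coupling ratio R = J2/J1 = 4
```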
The free energy per site is given by,
$$-\beta f=\lim_{V\to \infty }\frac{1}{V}\left[\mathrm{ln}\left(\prod_{i=1}^{V}\sum_{\sigma _i=\pm 1}\right)\mathrm{exp}(-\beta \mathcal{H})\right]_{\mathrm{av}},$$
(4)
where $`\beta =1/k_BT`$ is the inverse temperature in natural units and the bracket $`[\ldots ]_{\mathrm{av}}`$ denotes the average over the quenched, random disorder, $`[\ldots ]_{\mathrm{av}}=\left(\prod_{\langle ij\rangle }\int dJ_{ij}\right)(\ldots )P(J_{ij})`$. The internal energy and specific heat per site follow by differentiation with respect to $`\beta `$,
$$e=\partial (\beta f)/\partial \beta ,\qquad C/k_B=-\beta ^2\partial ^2(\beta f)/\partial \beta ^2.$$
(5)
In this paper we shall mainly focus on the magnetic susceptibility per site $`\chi `$ which in the high-temperature phase and zero external field is defined as the $`V\to \infty `$ limit of
$$\chi _V=\left[\left\langle \left(\sum_{i=1}^{V}\sigma _i\right)^2\right\rangle _T/V\right]_{\mathrm{av}},$$
(6)
where $`\langle \ldots \rangle _T`$ denotes the usual thermal average with respect to $`\mathrm{exp}(-\beta \mathcal{H})`$.
## 3 Theoretical predictions
Let us briefly recall two contradictory theoretical predictions for the critical behavior of the model (1), (2). The first is based on renormalization-group techniques developed by Dotsenko and Dotsenko (DD). For the specific heat, DD find a double-logarithmic behavior close to the transition point,
$$C(t)\sim \mathrm{ln}(\mathrm{ln}(1/|t|)),$$
(7)
where $`t=(T-T_c)/T_c`$ denotes the reduced temperature. For the susceptibility they obtain in the high-temperature phase ($`t>0`$)
$$\chi \sim t^{-2}\mathrm{exp}\left[a\left(\mathrm{ln}\mathrm{ln}\left(\frac{1}{t}\right)\right)^2\right].$$
(8)
The second approach, by Shalaev, Shankar, and Ludwig (SSL) \[3-6\], makes use of bosonisation techniques and the method of conformal invariance. While the prediction (7) for the specific heat can be reproduced (which, however, is not undisputed), SSL derive quite a different behavior for the susceptibility,
$$\chi \sim t^{-7/4}|\mathrm{ln}t|^{7/8}.$$
(9)
This is the same leading singularity as in the pure case ($`J_1=J_2`$), but modified by a multiplicative logarithmic correction.
High-precision Monte Carlo simulations and transfer-matrix studies \[8-16\] favor the latter form, but due to well-known inherent limitations of these methods it has been impossible to confirm the value of the exponent of the multiplicative logarithmic correction in (9) quantitatively. Similar problems have been reported in simulation studies of other models exhibiting multiplicative logarithmic corrections such as, e.g., the two-dimensional 4-state Potts and XY models. We found it therefore worthwhile to investigate this problem yet again with an independent method. In the following we report high-temperature series expansions for the free energy and susceptibility and enquire whether series analyses can yield a more stringent test of the theoretical predictions.
## 4 Series generation
For the generation of the high-temperature series expansions of the free energy (4), and hence the internal energy and specific heat, as well as the infinite-volume limit of the susceptibility (6), we made use of a program package developed at Mainz originally for the $`q`$-state Potts spin-glass problem \[23-27\]. In this application the spin-spin interaction is generalized from $`\sigma _i\sigma _j`$ to $`\delta _{\sigma _i,\sigma _j}`$ with $`\sigma _i`$ being an integer between 1 and $`q`$, and the coupling constants $`J_{ij}`$ can take the values $`\pm J`$ with equal probability. Since the coupling constants here can also take negative values, frustration effects play an important role and the physical properties of spin glasses are completely different from those of the random-bond system. Technically, however, precisely the same enumeration scheme for the high-temperature graphs can be employed in both cases. The only difference is in the last step, where the quenched averages over the $`J_{ij}`$ are performed. The details of the employed star-graph expansion technique and our specific implementation are described elsewhere \[23-25, 27, 29\]. Here we only note that slight modifications of this program package enabled us to generate the high-temperature series expansion for $`\beta f`$ and $`\chi `$ up to the 11th order in $`k=2\beta J_1`$ for
* hypercubic lattices of arbitrary dimension $`d`$,
* arbitrary number of Potts states $`q`$,
* arbitrary probability $`x`$ in the bimodal distribution, and
* arbitrary ratios $`R=J_2/J_1`$, characterizing the strength of the disorder.
In this paper we shall concentrate on the two-dimensional ($`d=2`$) random-bond Ising model ($`q=2`$) for a symmetric ($`x=1/2`$) bimodal distribution of two positive coupling strengths $`J_1`$ and $`J_2`$. The series coefficients of the free energy and susceptibility expansions for various coupling-strength ratios $`R=J_2/J_1`$ are given in Tables 1 and 2. Of course, in principle it would also be straightforward to adapt the present program package to more general probability distributions $`P(J_{ij})`$.
## 5 Series analysis techniques
In the literature many different series analysis techniques have been discussed which, depending on the type of critical singularity at hand, all have their own merits and drawbacks. In the course of this work we have tested quite a few of them. Here, however, we will confine ourselves to only those which turned out to be the most useful for our specific problem at hand.
To simplify the notation we denote a thermodynamic function generically by $`F(z)`$ and assume that its Taylor expansion around the origin is known up to the $`N`$-th order,
$$F(z)=\sum_{n=0}^{N}a_nz^n+\ldots .$$
(10)
If the singularity of $`F(z)`$ at the critical point $`z_c`$ is of the simple form ($`z\to z_c`$)
$$F(z)\simeq A(1-z/z_c)^{-\lambda },$$
(11)
with $`A`$ being a constant, then the ratios of consecutive coefficients approach for large $`n`$ the limiting behavior
$$r_n\equiv \frac{a_n}{a_{n-1}}\simeq \left[1+\frac{\lambda -1}{n}\right]\frac{1}{z_c}.$$
(12)
From the offset ($`1/z_c`$) and slope ($`(\lambda -1)/z_c`$) of this sequence as a function of $`1/n`$, both the critical point $`z_c`$ and the critical exponent $`\lambda `$ can be determined. This is the basis of the so-called ratio method. If the critical point $`z_c`$ is known from other sources (in our case exactly from self-duality), then one may consider biased extrapolants for the critical exponent,
$$\lambda _n=nr_nz_c-n+1,$$
(13)
which simply follow by rearranging eq. (12). In the following this method will be denoted as “Biased Ratio I”.
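In code, the biased extrapolants (13) are a one-liner once the series coefficients are available; a minimal sketch, in which the coefficients shown are the well-known pure-case ($`J_1=J_2`$) susceptibility coefficients in $`v=\mathrm{tanh}\beta J`$, used only as a placeholder for the random-bond coefficients of Table 2:

```python
import numpy as np

# Placeholder: leading coefficients of the pure square-lattice Ising
# susceptibility series in v = tanh(beta*J).
a = np.array([1, 4, 12, 36, 100, 276, 740, 1972, 5172], dtype=float)
zc = np.sqrt(2) - 1               # exact v_c from self-duality

r = a[1:] / a[:-1]                # ratios r_n = a_n / a_{n-1}, eq. (12)
n = np.arange(1, len(a))
lam = n * r * zc - n + 1          # biased extrapolants, eq. (13)
print(lam)                        # drift towards gamma = 7/4
```

For this pure series the extrapolants drift towards $`\gamma =7/4`$, with the odd-even oscillation typical of ratio analyses on loose-packed lattices.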
If the singularity of $`F(z)`$ contains a multiplicative logarithmic correction (as, e.g., in the SSL prediction for $`\chi `$),
$$F(z)\simeq A(1-z/z_c)^{-\lambda }|\mathrm{ln}(1-z/z_c)|^p,$$
(14)
then one forms the ratios $`r_n`$ as before, but considers in addition the auxiliary function
$$z^{-p^{\prime }}(1-z)^{-\lambda }\left(\mathrm{ln}[1/(1-z)]\right)^{p^{\prime }}=\sum_{n=0}^{N}b_nz^n+\ldots ,$$
(15)
and computes the ratios $`r_n^{\prime }=b_n/b_{n-1}`$. (The prefactor $`z^{-p^{\prime }}`$ removes the non-integer leading power of the logarithm, so that the left-hand side of (15) possesses an ordinary Taylor expansion.) Let us first assume that the critical exponent $`\lambda `$ of the leading term is known. Then it can be shown that the sequence $`R_n=r_n/r_n^{\prime }`$ approaches $`1/z_c`$ with zero slope in the limit $`n\to \infty `$, if and only if $`p^{\prime }=p`$. This determines $`p`$, if $`z_c`$ is also known. If $`\lambda `$ is not known, then one may vary both exponents until the above relation is satisfied. In the following we refer to this special ratio method as “Ln-Ratio”.
Another method suitable for a singularity of the form (14) is based on Padé approximants . Here one generates the series expansion for the auxiliary function
$$G(z)=-(z_c-z)\mathrm{ln}(z_c-z)\left(\frac{F^{\prime }(z)}{F(z)}-\frac{\lambda }{z_c-z}\right),$$
(16)
which can easily be shown to satisfy
$$\lim_{z\to z_c}G(z)=p.$$
(17)
If $`z_c`$ is known, the value of $`G(z)`$ at $`z=z_c`$ can be obtained by forming Padé approximants,
$$G(z)\approx [L/M]\equiv \frac{P_L(z)}{Q_M(z)}=\frac{p_0+p_1z+p_2z^2+\mathrm{\cdots }+p_Lz^L}{1+q_1z+q_2z^2+\mathrm{\cdots }+q_Mz^M},$$
(18)
with $`L+M\le N-1`$. Note that one order of the initial series is lost due to the differentiation in (16).
With a small modification this method can also be applied to a purely logarithmic singularity of the form
$$F(z)A|\mathrm{ln}(1z/z_c)|^p.$$
(19)
In this case one defines the auxiliary function
$$G(z)=-(z_c-z)\mathrm{ln}(z_c-z)\frac{F^{\prime }(z)}{F(z)},$$
(20)
which again satisfies
$$\lim_{z\to z_c}G(z)=p.$$
(21)
The two analysis methods based on Padé approximants will be called “Ln-Padé”.
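In practice, both “Ln-Padé” variants reduce to forming the approximants (18) from a known coefficient list and evaluating them at $`z_c`$. A minimal sketch, assuming the Taylor coefficients `g` of the auxiliary function $`G(z)`$ have already been assembled by standard power-series arithmetic from eqs. (16) or (20):

```python
from scipy.interpolate import pade

def p_from_ln_pade(g, zc, L, M):
    """Estimate the exponent p = lim_{z -> z_c} G(z), eqs. (17)/(21),
    from the Taylor coefficients g[0], g[1], ... of G(z) via the
    [L/M] Pade approximant of eq. (18)."""
    P, Q = pade(g[:L + M + 1], M)   # poly1d numerator and denominator
    return P(zc) / Q(zc)
```

Varying $`L`$ and $`M`$ over the available orders then provides the spread used for the error estimates quoted below.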
## 6 Results
### Susceptibility:
In a first step we investigated whether our series expansions for the susceptibility are consistent with a pure power-law behavior according to the DD prediction (8) (ignoring the exponentially small multiplicative correction term). Thus assuming the behavior $`\chi \sim t^{-\gamma }`$ and using the method “Biased Ratio I”, we obtained the critical exponents $`\gamma `$ shown in Fig. 2 as a function of $`J_2/J_1`$. Here and in the following, the error bars are estimated by varying the length of the series and/or the type of Padé approximants used. Starting with $`\gamma =1.738\pm 0.014`$ for the pure case ($`J_2/J_1=1`$), consistent with the exact value of $`\gamma =7/4`$, we observe a steady increase to $`\gamma =2.37\pm 0.11`$ for the strongest investigated disorder ($`J_2/J_1=10`$). We will argue below that the apparent crossover from weak to strong disorder is due to the finite length of our series expansion, which naturally has a much more dramatic influence for weak disorder. At any rate, for strong disorder the DD prediction of $`\gamma =2`$ is clearly outside the error margins of the series analysis estimates.
So far no multiplicative logarithmic corrections were taken into account. If the SSL prediction (9) were correct we would, therefore, expect to observe “effective” critical exponents which, according to
$$\chi \sim t^{-7/4}|\mathrm{ln}t|^{7/8}=t^{-(7/4)[1+\frac{1}{2}\frac{\mathrm{ln}(|\mathrm{ln}t|)}{\mathrm{ln}(1/t)}]}$$
(22)
should indeed be larger than $`7/4`$. The results in Fig. 2 could thus be well consistent with a critical exponent of $`\gamma =7/4`$ in the presence of a multiplicative logarithmic correction.
This possibility suggested a more careful analysis based on the qualitative form of the SSL prediction (9). Our series are too short to employ a general ansatz with both exponents as free parameters. We rather fixed the exponent $`\gamma =7/4`$ of the leading term to the (predicted) pure Ising model value and enquired if our series expansions are compatible with the ansatz
$$\chi \sim t^{-7/4}|\mathrm{ln}t|^p,$$
(23)
and $`p=7/8`$. Employing the two special methods for this type of singularity described in Sec. 5, we obtained well-converging results. The resulting estimates for the exponent $`p`$ are shown in Fig. 2. We see that the two methods yield consistent results which start in the pure case ($`J_2/J_1=1`$) around $`p=0`$, as they should. With increasing disorder the estimates again exhibit an apparent crossover, until around $`J_2/J_1=5`$–$`8`$ they settle at a plateau value in very good agreement with the theoretical prediction of $`p=7/8`$. This is the main result of our series analysis. We claim that, compared with previous methods, this is thus far the clearest quantitative confirmation of the SSL prediction (9).
As before we attribute the apparent crossover for intermediate strength of the disorder to the shortness of our series expansions, i.e., we interpret the crossover as an unavoidable artifact of high-temperature series expansion analyses and not as an indication that the exponent $`p`$ really is a function of the disorder strength. We thus take the view that already a small amount of disorder drives the system into a new universality class different from the pure case which, however, only becomes visible in the very vicinity of the transition point $`T_c`$ (or $`t=0`$). This in turn translates into the need for extremely long series expansions in order for the new behavior to be detectable.
To justify this claim we have investigated a model function simulating the “true” susceptibility ($`g_0>0`$), where $`g_0`$ is a constant that depends on the strength of the disorder,
$$\chi _{\mathrm{model}}=\dot{t}^{-7/4}\left[1+\frac{4g_0}{\pi }\mathrm{ln}(1/\dot{t})\right]^{7/8},$$
(24)
with $`\dot{t}=(T-T_c)/T`$, which for any $`g_0\ne 0`$ reproduces the SSL form (9) in the limit $`T\to T_c`$ ($`\dot{t}=t-t^2+t^3-\ldots \to 0`$). Notice the discontinuity in the asymptotic behavior at $`g_0=0`$. For any $`g_0\ne 0`$ the asymptotic region is reached when $`\mathrm{ln}(1/t)`$ is much larger than $`\pi /4g_0`$, i.e., for $`t\ll \mathrm{exp}(-\pi /4g_0)`$. Since $`g_0=0`$ corresponds to the pure case it is intuitively clear that the parameter $`g_0`$ is an increasing function of the degree of disorder. For weak disorder this implies that $`g_0`$ is very small and therefore, due to the exponential dependence, that the asymptotic region in $`t`$ is extremely narrow.
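The exponential narrowness of the critical window can be made explicit by evaluating the local effective exponent $`\gamma _{\mathrm{eff}}(\dot{t})=-d\mathrm{ln}\chi _{\mathrm{model}}/d\mathrm{ln}\dot{t}`$ of the model function (24); a minimal numerical sketch:

```python
import numpy as np

def chi_model(tdot, g0):
    """Model susceptibility of eq. (24)."""
    return tdot**(-7.0/4.0) * (1 + 4*g0/np.pi * np.log(1/tdot))**(7.0/8.0)

for g0 in (0.013, 0.07):          # weak / moderate disorder (values used below)
    tdot = np.logspace(-1, -25, 25)
    geff = -np.gradient(np.log(chi_model(tdot, g0)), np.log(tdot))
    print(g0, np.round(geff[[0, 8, 16, 24]], 4))
# gamma_eff exceeds 7/4 = 1.75 and relaxes back towards it only once
# ln(1/t) >> pi/(4 g0), i.e. deep inside the asymptotic window.
```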
Strictly speaking the model function (24) should resemble the “true” susceptibility only for weak disorder, but it is commonly believed that it is a reasonable qualitative approximation also for strong disorder. To relate the parameter $`g_0`$ at least heuristically to the ratio $`J_2/J_1`$ we used the weak-disorder result $`g_0=c_2a^2/(1+ab)^2`$, where $`c_2=1-x`$ (with $`x`$ as defined in eq. (2)) is assumed to be small, $`c_2\ll 1`$, i.e., the analytic calculation assumes that there are only few $`J_2`$-bonds in a background of $`J_1`$-bonds. The parameters $`a`$ and $`b`$ are given by $`a=(v_c^{\prime }-v_c^{(0)})/v_c^{(0)}`$ and $`b=v_c^{(0)}/2\sqrt{2}`$, where $`v_c^{(0)}=\mathrm{tanh}(\beta _c^{(0)}J_1)=\sqrt{2}-1`$ and $`v_c^{\prime }=\mathrm{tanh}(\beta _c^{(0)}J_2)`$, with $`\beta _c^{(0)}`$ denoting the inverse critical temperature of the pure system with all $`J_{ij}=J_1`$. Of course, applying this formula to the present case with $`c_2=1/2=x`$ is a bold step which even creates an ambiguity, since the exact symmetry $`J_1\leftrightarrow J_2`$ for $`x=1/2`$ is violated. For weak disorder ($`J_2/J_1`$ close to 1), however, the inconsistency turns out to be very mild. For $`J_2/J_1=1.2`$ we obtain $`g_0=0.013700\ldots `$, and for $`J_2/J_1=1/1.2`$ we find a slightly smaller value of $`g_0^{\prime }=0.011958\ldots `$. This shows that for weak disorder ($`J_2/J_1=1.2`$, $`g_0\approx 0.013`$) the asymptotic region is bounded by $`t\ll \mathrm{exp}(-\pi /4g_0)\approx \mathrm{exp}(-1/0.017)\approx 10^{-26}`$, and thus explains why it is so difficult to observe the asymptotic critical behavior in the weak-disorder limit. For $`J_2/J_1=1.5`$ and $`1/1.5`$ the corresponding numbers are $`g_0=0.070700\ldots `$ and $`g_0^{\prime }=0.052889\ldots `$, leading to a bound of the order of $`t\ll 10^{-5}`$.
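These numbers follow directly from the formulas just quoted; a minimal sketch:

```python
import numpy as np

def g0(R, x=0.5):
    """Heuristic weak-disorder estimate g0 = c2 a^2/(1 + a b)^2 for
    coupling ratio R = J2/J1, with a, b as defined in the text."""
    vc0 = np.sqrt(2) - 1                    # tanh(beta_c^(0) J1)
    vcp = np.tanh(np.arctanh(vc0) * R)      # tanh(beta_c^(0) J2)
    a = (vcp - vc0) / vc0
    b = vc0 / (2 * np.sqrt(2))
    return (1 - x) * a**2 / (1 + a * b)**2

for R in (1.2, 1/1.2, 1.5, 1/1.5):
    print("R = %.3f:  g0 = %.6f,  window ~ exp(-pi/4g0) = %.1e"
          % (R, g0(R), np.exp(-np.pi / (4 * g0(R)))))
# reproduces g0 = 0.013700 (R = 1.2), 0.011958 (R = 1/1.2), etc.
```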
Using a symbolic computer program it is straightforward to generate the high-temperature series expansion of the model function (24) to any desired order. Applying the same analysis techniques as used for the “true” susceptibility series we obtained the results shown in Fig. 4. If we truncate the model series at low order we observe qualitatively the same crossover effect as for the “true” series. Here we are sure, however, that this must be a pure artifact of the truncation of the model series at a finite order. We also see that the approach of the asymptotic limit of $`p=7/8`$ as a function of the degree of disorder is faster if we consider a longer series (21 terms). It is, however, somewhat discouraging (even though understandable in view of the exponential dependence of the critical regime on $`g_0`$) that at a fixed $`g_0`$ the convergence of the series with increasing order is quite slow. For example, at $`4g_0/\pi =1`$ we obtained $`p=0.7056`$ with the Padé approximant $`[4/4]`$, 0.7178 ($`[5/5]`$), 0.7474 ($`[10/10]`$), 0.7682 ($`[20/20]`$), 0.7777 ($`[30/30]`$), 0.7834 ($`[40/40]`$), 0.7875 ($`[50/50]`$), and 0.7905 ($`[60/60]`$). The convergence behavior for this example and other small values of the parameter $`g_0`$ can be visually inspected in Fig. 4.
### Specific heat:
Series analyses for the specific heat are usually more difficult than for the susceptibility. This is especially pronounced for the Ising model on loose-packed lattices where all odd powers of $`\beta `$ vanish because of symmetry. Consequently our specific-heat series consists only of four non-trivial terms (see Table 1). We nevertheless tried an analysis with the ansatz
$$C\sim |\mathrm{ln}t|^q,$$
(25)
using the method “Ln-Padé”. The exponent $`q`$ is an effective exponent whose value may or may not be constant.
The resulting dependence of the exponent $`q`$ on the ratio $`J_2/J_1`$ is shown in Fig. 5. While the quantitative agreement with the exactly known pure case is certainly not convincing, we do see at least a qualitative trend to smaller values of $`q`$ with increasing strength of the disorder (increasing ratios $`J_2/J_1`$), i.e., the singularity of the specific heat apparently becomes weaker for stronger disorder. This may be taken as an indication that the true singularity is of the log-log type (7), as predicted by both DD and SSL. A recent numerical study for $`J_2/J_1=4`$ using transfer-matrix methods also observed a behavior in between log and log-log type. These findings are in contradiction to the claim for a slightly different disordered system (quenched, random site-dilution) that the specific heat stays finite at $`T_c`$, as theoretically suggested in Ref. (see also Ref. ).
Again we have tried to justify our interpretation by considering a model function,
$$C_{\mathrm{model}}=\frac{1}{g_0}\mathrm{ln}\left[1+\frac{4g_0}{\pi }\mathrm{ln}(1/\dot{t})\right].$$
(26)
By applying precisely the same type of analysis to the series expansion of the model specific-heat we obtained the results displayed in Fig. 6, which show qualitatively the same trend of decreasing $`q`$ as a function of $`J_2/J_1`$ as the data in Fig. 5.
## 7 Discussion
The main results of our high-temperature series analysis are shown in Fig. 2, which provides, at least for strong disorder (large $`J_2/J_1`$), compelling evidence that the singularity of the susceptibility is properly described by $`\chi \sim t^{-7/4}|\mathrm{ln}t|^p`$, with $`p=7/8=0.875`$, as theoretically predicted by SSL \[3-6\]. The analysis of the model susceptibility (24) in Fig. 4 clearly shows that the apparent variation of $`p`$ with the strength of disorder is an artifact caused by the truncation of the series expansions at a finite order. We, therefore, emphasize that the apparent crossover from weak to strong disorder does not imply that the universality class of the random-bond Ising model changes continuously with the strength of disorder.
Let us finally make a few comments on previous Monte Carlo simulations of this model on large but finite square lattices. With the finite-size scaling analysis of Refs. \[8-12, 16\] it is conceptually impossible to detect the multiplicative logarithmic correction of the SSL prediction (9). The reason is that the SSL theory also predicts a logarithmic correction for the scaling behavior of the correlation length, $`\xi \sim t^{-1}|\mathrm{ln}t|^{1/2}`$. In the finite-size scaling behavior the two logarithms thus cancel and one ends up with a pure power law, $`\chi \sim L^{\gamma /\nu }=L^{7/4}`$, where $`L`$ is the linear lattice size. Thus only the SSL prediction for $`\gamma /\nu `$ can be tested in finite-size scaling analyses. Wang et al. obtained for $`J_2/J_1=4`$ and 10 an estimate of $`\gamma /\nu =1.7507\pm 0.0014`$, and also the results of Reis et al. at $`J_2/J_1=2`$, 4, and 10 are consistent with $`\gamma /\nu =1.75`$. Among the two alternatives, the theories of DD and SSL, these estimates thus provide evidence in favor of SSL. Notice, however, that a numerical estimate of $`\gamma /\nu \approx 1.75`$ would also be expected for the pure two-dimensional Ising model. For the specific heat the situation is conceptually clearer. Here the theoretically expected scaling behavior (7), as predicted by both DD and SSL, translates into a double-logarithmic finite-size scaling behavior, $`C=C_0+C_1\mathrm{ln}(1+b\mathrm{ln}L)`$, which is different from that of the pure case where $`C=C_0+\mathrm{ln}L`$. In the numerical work of Wang et al., employing lattice sizes up to $`L=600`$, this difference in the asymptotic behavior is clearly observed for $`J_2/J_1=10`$, while for $`J_2/J_1=4`$ the behavior is in between log and log-log type, similar to the findings reported in a recent transfer-matrix study for the same coupling-constant ratio. For the specific heat these latest finite-size scaling analyses are thus about as conclusive as our series analyses in Fig. 5.
Another set of numerical data that can discriminate between the predictions of DD and SSL comes from direct simulations of the temperature dependence of the magnetization $`m`$ and of the susceptibility $`\chi `$ for $`J_2/J_1=4`$. Assuming in the analysis a pure power law with an effective exponent (i.e., ignoring the logarithmic correction), one observes an overshooting of the effective exponent to values larger than the prediction by SSL. As discussed above (recall eq. (22)), this may be taken as an indication of a multiplicative logarithmic correction term. For example, Talapov and Shchur obtained for $`J_2/J_1=4`$ from least-squares fits to $`\chi \sim t^{-\gamma _{\mathrm{eff}}}`$ an effective exponent of $`\gamma _{\mathrm{eff}}\approx 7/4+0.135=1.885`$. This value is quite close to our series estimate of $`\gamma _{\mathrm{eff}}=2.019\pm 0.024`$ for $`J_2/J_1=4`$, if the pure power-law ansatz is used (cf. Fig. 2). Wang et al. furthermore confirmed that their data is compatible with the SSL ansatz, $`\chi (t)=\chi _0t^{-7/4}(1+at)[1+b\mathrm{ln}(1/t)]^{7/8}`$, supplemented by a correction-to-scaling term $`(1+at)`$ (and similarly for $`m`$; for a recent confirmation, see Ref. ). In these fits both exponents are kept fixed at their predicted values, and $`\chi _0`$, $`a`$, and $`b`$ are free parameters. In contrast to our series analysis, however, no quantitative estimates of the exponent of the logarithmic correction have been reported in Ref. . While the simulation results certainly indicate that among the two conflicting theories of DD and SSL, the SSL prediction is more likely to be correct, it is still fair to conclude that this set of simulations, too, has not yet unambiguously identified the multiplicative logarithmic correction term.
Monte Carlo simulations of systems with quenched, random disorder require an enormous amount of computing time because many realisations have to be simulated for the quenched average. For this reason it is hardly possible to scan a whole parameter range. Using high-temperature series expansions, on the other hand, one can obtain closed expressions in several parameters (such as the dimension $`d`$, $`x`$, $`J_2/J_1`$, …) up to a certain order in the inverse temperature $`\beta =1/k_BT`$. Here the infinite-volume limit is always implied and the quenched, random disorder can be treated exactly. By analysing the resulting series, the critical behavior of the random-bond system can hence in principle be monitored as a continuous function of several parameters. This is a big advantage over Monte Carlo simulations which usually can only yield a rather small parameter range in one set of simulations. The caveat of the series-expansion approach is that the available series expansions for the random-bond Ising model are still relatively short (at any rate much shorter than for pure systems). This introduces systematic errors of the resulting estimates for critical exponents which are difficult to control. The obvious way out is trying to extend the series expansions as far as possible. This, however, would be extremely cumbersome since the number of algebraic manipulations necessary to calculate the series coefficients blows up dramatically with the order of the series (usually at least exponentially) and, therefore, has to be left for future work.
## Acknowledgements
We wish to thank Kurt Binder for many helpful discussions and his constant interest in this project. JA and WJ are grateful to Walter Selke for a critical discussion of previous numerical simulation results, and thank Dietrich Stauffer for comments on the manuscript. WJ acknowledges support from the Deutsche Forschungsgemeinschaft through a Heisenberg fellowship. Partial support of JA and WJ from the German-Israel-Foundation (GIF) under contract No. I-0438-145.07/95 is also gratefully acknowledged.
# Fundamental Limit on “Interaction Free” Measurements
## Abstract
In “interaction free” measurements, one typically wants to detect the presence of an object without touching it with even a single photon. One often imagines a bomb whose trigger is an extremely sensitive measuring device whose presence we would like to detect without triggering it. We point out that all such measuring devices have a maximum sensitivity set by the uncertainty principle, and thus can only determine whether a measurement is “interaction free” to within a finite minimum resolution. We further discuss exactly what can be achieved with the proposed “interaction free” measurement schemes.
In a highly influential recent paper by Elitzur and Vaidman, it was pointed out that the presence of an object (often called a “bomb”) can often be discerned without it absorbing even a single photon. This “interaction free measurement” scheme and later improvements on it have received a lot of attention, both in the popular press and in serious scientific journals. In this paper we wish to re-examine such measurement schemes and consider how they may be limited by the Heisenberg uncertainty principle.
We would like to be very precise about what we mean by an “interaction free” measurement, and we attempt to define this in terms of a specific bomb detection experiment. We imagine that the bomb we wish to detect has a trigger that is so sensitive that it will explode if it interacts in any way with any particles that are sent to probe it – I.e., if it scatters or absorbs any of these particles. This bomb trigger should be sensitive to an arbitrarily small momentum transfer from the probe particle to the bomb, as well as being sensitive to angular momentum transfer, energy transfer, and transfer of any other quantum number we could consider. We now imagine that some gnome challenges us to determine if he/she has placed this sensitive bomb within some predetermined region (denoted by the dotted box in Fig. 1). If we succeed in detecting the presence of this bomb without blowing it up, we will have performed an “interaction free” measurement. We note, however, that the measurement can only be declared to be “interaction free” if the bomb is truly an ideal detector. If the bomb trigger is unreliable, then we will never know if we have interacted with the bomb or not (this will become important below).
Performing an interaction free measurement as defined above may seem impossible at first — and indeed, within classical physics such a thing would clearly be forbidden. However, by exploiting wave-particle duality, a number of groups have suggested that such measurements are in fact possible. Below, we will discuss the simplest of these proposed measurement schemes, and our results will apply more generally. In this paper we will point out that these schemes in fact do not satisfy the definition of “interaction free” given above. We then continue on to ask ourselves what precisely is achieved by these schemes. In particular, we will show that schemes can indeed claim to be “energy exchange free” (as first discussed in Ref. ) or free from transfer of certain other quantum numbers, but are not free from transfer of all quantum numbers. Specifically, we will show that such experiments are not free of momentum transfer (although they can be made to have “minimal” momentum transfer).
We begin by discussing the simplest so-called “interaction-free” measurement scheme. As described above, we will think in terms of a bomb detection experiment. The scheme for detecting the bomb, originally proposed by Elitzur and Vaidman, is to construct a Mach-Zehnder interferometer as shown in Fig. 1. We arrange the length of the arms of the interferometer to be such that the interference is constructive when a photon exits towards detector B (for bright) and destructive when it exits toward detector D (for dark). Thus, so long as the beam lines are not blocked by any objects, all of the light that enters the interferometer exits towards detector B.
Now we consider what happens when the gnome places the bomb in the predetermined region (I.e., in the dotted box in Fig. 1) such that the bomb blocks the beam-line and prevents interference of the two paths of light. For the moment, let us assume that the bomb is in some sense a perfect absorber — an assumption that we will see below has some difficulties. With this assumption, when the bomb is blocking the beam line, 50% of the light sent into the interferometer will be absorbed by the object, 25% will exit towards detector B, and 25% will exit towards detector D. (We have also assumed here that our beam splitters have a reflectivity of 50%.) We then send a single photon into the interferometer. 50% of the time this photon will be absorbed by the bomb and it will explode. However, 25% of the time, we will detect the photon at detector D, which is normally dark, and we will know that the bomb is blocking the beam-line without it having absorbed the photon. (Also 25% of the time the photon comes out at detector B, which is inconclusive.) Thus, in this simple way, we are able to perform what appears to be an “interaction free” measurement at least some fraction of the time. Experiments of this type have indeed been performed (in one case with single photons), albeit with imperfect detectors and with a “bomb trigger” with finite sensitivity.
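The bookkeeping of outcomes is easy to check with a toy simulation; a minimal sketch in which each trial simply samples the 50/25/25 distribution derived above:

```python
import random

def trial():
    """One photon sent into the interferometer with the bomb in place."""
    if random.random() < 0.5:
        return "boom"                               # photon absorbed: explosion
    return "D" if random.random() < 0.5 else "B"    # 25% dark / 25% bright port

results = [trial() for _ in range(100_000)]
for outcome in ("boom", "D", "B"):
    print(outcome, results.count(outcome) / len(results))
# A conclusive detection without explosion ("D") occurs in 25% of all trials,
# i.e. in half of the trials in which the bomb survives the first photon.
```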
What we would like to point out in this paper is that there is a fundamental limit on the possible sensitivity of the bomb, and hence the measurement can only be considered “interaction free” to within this limited sensitivity.
In order to understand the source of this limitation, we consider the preparation of the experiment. In order for the gnome to set up the experiment and place the bomb in the pre-arranged region (the dotted box in Fig. 1), he/she must know the position of the bomb to within some uncertainty $`\mathrm{\Delta }x`$. Since there is now a finite uncertainty of position, the bomb must have a momentum uncertainty of $`\mathrm{\Delta }p=\hbar /\mathrm{\Delta }x`$. If the bomb were sensitive to momentum changes this small, then it would be triggered by quantum fluctuations (and would therefore be a useless device). Another way to say this is that the gnome would be unable to put the sensitive bomb in place without triggering it.
To make this important point more explicit, we imagine how the trigger of such a bomb might work. Before we do our experiment, the gnome places the bomb in the prearranged region (I.e., in the dotted box) in some wave-packet such that $`\mathrm{\Delta }x`$ is known sufficiently well for the gnome to know that the bomb is indeed in this region. After we shoot our photon though the apparatus, the trigger apparatus measures the momentum of the bomb. If the momentum is sufficiently large, then the gnome knows that we must have transferred momentum to the bomb (and the gnome would then make the bomb explode). However, the initial momentum state of the bomb must have an uncertainty of $`\hbar /\mathrm{\Delta }x`$, so the gnome certainly cannot reliably detect if we transfer any momentum less than this amount to the bomb. It is interesting to note that this fundamental limit arises from understanding the measuring device (the bomb trigger) as a quantum mechanical device itself.
Because of this limit on the sensitivity of the bomb, it is clear that no measurement can ever be “interaction free” by the definition given above (I.e., the bomb detection experiment with an infinitely sensitive bomb trigger as defined in the second paragraph of this paper), since any bomb can always recoil a very small amount and this interaction could not be detected. One might object that the reason no experiment fits our above definition is simply because our definition is overly restrictive. This may indeed be the case. (Although we also note that the experiment described above seems a reasonably natural choice in the absence of any prior attempts at a definition). Although our choice of definition is a matter of nomenclature which should not overly concern us, it remains a physically meaningful question to ask “what can be achieved by these so-called interaction free measurement schemes?”
It is clear that in order to actually conduct an experiment similar to that proposed above, we must concede that the bomb will have a sensitivity limit for momentum transfers (although it may remain arbitrarily sensitive to transfers of other quantum numbers). Let us then consider an experiment analogous to that described above, but conducted with a finitely sensitive bomb such that only momentum transfers larger than the order of $`\hbar /\mathrm{\Delta }x`$ will cause it to explode. This modified bomb is now sufficiently insensitive so as not to be triggered by the quantum fluctuations of momentum which are necessarily present due to the uncertainty principle. With such a modified bomb of finite sensitivity, we should not declare that detection of this bomb is truly “interaction free” since we will never know if the bomb has interacted very weakly with a probe particle. Nonetheless, it is certainly true that the above described interferometric measurement scheme (as well as more sophisticated versions of interferometric schemes) can indeed detect the presence of this modified bomb without blowing it up. We might say that this is now a “minimum interaction” measurement (by which we mean, we can detect a maximally sensitive bomb without triggering it).
It is now interesting to ask if there are other, perhaps simpler, methods of detecting this modified — slightly less sensitive – bomb without blowing it up. (I.e., of performing a similar “minimum interaction” measurement). One would only need to arrange to touch the bomb extremely softly to detect its presence, and so long as the transferred momentum remains less than $`\hbar /\mathrm{\Delta }x`$, the bomb will not blow up.
One might guess that we could simply probe such a bomb with very long wavelength photons (or other probe particles), thus using a momentum transfer below the bomb’s sensitivity limit. One must be careful, however, because the bomb may still be sensitive to other quantum numbers of the probe particles – such as energy or angular momentum – and the bomb might still explode if it absorbs the long wavelength photon even though the momentum transfer is below the sensitivity limit. In other words, we have pointed out above that the bomb cannot be arbitrarily sensitive to momentum transfers (and we have agreed to make our bomb only finitely sensitive to momentum), but the bomb may still remain arbitrarily sensitive to other properties of the probe particle. Thus in order to perform a “minimum interaction” measurement, we must arrange that no other quantum numbers of the probe particle are changed in the course of the interaction.
One particularly simple approach to making such “minimum interaction” measurements is to perform a simple small angle scattering experiment. We imagine sending a plane wave of short wavelength light at the bomb (Here, the beam must be a wide enough wave packet to be able to either hit the bomb or diffract around the bomb). For a bomb, assumed to be a perfect absorber, the absorption cross section is on the order of the cross sectional area of the object. However, there is also an elastic scattering cross section for small angle diffraction around the edge of the object (I.e., shadow scattering) that is also on the order of the cross sectional area of the object (with factors that depend on the precise geometry and boundary conditions). The angle of the diffraction is typically on the order of $`1/(k_{in}a)`$ where $`k_{in}`$ is the wavevector ($`k=2\pi /\lambda `$) of the incident light and $`a`$ is the length scale of the object. The momentum transfer to the object when a single incident photon of momentum $`p_{in}=\hbar k_{in}`$ makes one of these small angle scattering events is then given by (roughly) $`p_{in}/(k_{in}a)=\hbar /a`$. In our experiment the length of the sample $`a`$ is also the necessary uncertainty in the position $`\mathrm{\Delta }x`$ (since we must know the position to within a distance $`a`$ to make sure the object blocks the beam-line). Thus, the momentum transfer $`p=\hbar /\mathrm{\Delta }x`$ in a small angle elastic scattering event is a “minimal interaction”. We see that by performing a simple scattering experiment with short wavelength single photons we can perform such a “minimal interaction” measurement which is in many ways equivalent to the interference scheme discussed above. Here, we send single short wavelength photons (in a plane wave state) at the bomb, and measure the outgoing momentum of the photon. In some fraction of trials, the photon comes out with the same momentum as it went in, which tells us nothing (analogous to measuring a photon in detector B above). In some fraction of trials the photon is either absorbed, or is elastically scattered by a large angle, in which case the bomb blows up. However, in some fraction of the trials, we measure that the photon undergoes small angle scattering, and we have discerned the presence of the bomb without triggering it.
As a final note, we consider a slight variant of this scattering experiment. Here, we imagine holding the bomb in a very weak harmonic potential well to localize its position. The bomb, being itself a quantum mechanical object, is placed in the ground state wavefunction of the harmonic potential. Again, because its position is known to within some accuracy $`\mathrm{\Delta }x`$, it has a momentum uncertainty $`\mathrm{\Delta }p=\hbar /\mathrm{\Delta }x`$. We note that the (harmonic oscillator) energy levels of the bomb in the well are discrete and are spaced by an energy of order $`\mathrm{\Delta }E=(\mathrm{\Delta }p)^2/(2M)`$ where $`M`$ is the mass of the bomb. If we try to transfer some small momentum less than $`\mathrm{\Delta }p`$ to the bomb (either by using long wavelength photons or small angle scattering), we would not be able to give the bomb enough energy to reach the next eigenstate of the harmonic well. Therefore, the bomb must remain in the ground state wavefunction and the momentum would be transferred directly to the well itself. Indeed, measuring the excitation state of the bomb in the well is a maximal sensitivity measurement since it can measure momentum transfers of order $`\mathrm{\Delta }p=\hbar /\mathrm{\Delta }x`$ and one could never have a bomb trigger more sensitive than this.
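To attach numbers to these scales, one can evaluate the minimum resolvable momentum and the corresponding harmonic-level spacing for an illustrative bomb; a minimal sketch, in which the mass and localization length are arbitrary choices, not values from any experiment:

```python
hbar = 1.054571817e-34     # J s
M_bomb = 1e-3              # 1 gram, an arbitrary illustrative mass (kg)
dx = 1e-6                  # 1 micron localization, also arbitrary (m)

dp = hbar / dx             # minimum momentum transfer any trigger can resolve
dE = dp**2 / (2 * M_bomb)  # level spacing of the weak harmonic well
print("dp ~ %.1e kg m/s,  dE ~ %.1e J" % (dp, dE))
# ~1e-28 kg m/s and ~6e-54 J: absurdly small on any practical scale, yet a
# strict quantum bound on how "interaction free" the scheme can ever be.
```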
In summary, we have pointed out that all measuring devices have a maximum sensitivity fixed by the uncertainty principle. One can then always perform an “interaction free measurement” (in the sense of determining the presence of the bomb without triggering it) by simply probing very softly with very low momentum transfer (either small angle scattering or long wavelength photons). We believe that a large range of so called “interaction free” schemes may have similar limitations once the quantum mechanical nature of the measuring devices are properly understood.
The authors acknowledge helpful conversations with M. Andrews, L. Balents, D. Morin, and O. Narayan.
# The Dyadosphere of Black Holes and Gamma-Ray Bursts
## 1 Introduction
It is by now clear that Pulsars have given the first evidence for the identification in Nature of Neutron Stars. Following the classical work of Armin Deutsch on a rotating magnetized star, and evaluating the moment of inertia from the theory of Neutron Stars, it has been possible to conclude from the observed pulsar angular velocity and its first derivative that the energy source of Pulsars is simply the rotational energy of Neutron Stars (Manchester & Taylor 1977). In spite of this basic result, with the exception of a relativistic generalization of the Deutsch solution, little progress has been made in identifying the detailed mechanism of rotational energy transfer into the electromagnetic energy of Pulsars.
The discovery of Binary X-ray sources has led to the identification of the first Black Hole in our galaxy. The measurement of the mass of the collapsed object, made possible by the binary nature of the system, has been the discriminant between Neutron Stars and Black Holes. The matter accreting from the normal star into the deep gravitational potential well of the companion star has given evidence that the energy source of these binary X-ray sources is simply matter accretion onto the deep relativistic gravitational field of a gravitationally collapsed star (Giacconi & Ruffini 1978). In principle, more detailed analysis of the X-ray spectra and their time variability can give detailed information on the effective potential around Black Holes (see e.g. Ruffini & Sigismondi 1998 and references therein); this work is still in progress today.
I propose, and give reasons to support, that in Gamma-Ray Bursts we are witnessing, for the first time and in real time, the moment of gravitational collapse to a Black Hole. Even more important, the tremendous energetics of these sources, especially after the discoveries of their afterglows and their cosmological distances (Kulkarni et al. 1998), clearly point to the necessity, and give for the first time the opportunity, of using the extractable energy of Black Holes as the energy source of these objects.
That Black Holes can only be characterized by their mass-energy $`E`$, charge $`Q`$ and angular momentum $`L`$ was advanced in a classical article (Ruffini & Wheeler 1971); see the figure in Ruffini, R. & Wheeler, J. A., Physics Today 1971, “Introducing the Black Hole”. The proof of this statement required twenty-five years of meticulous mathematical work. One of the crucial points in the physics of Black Holes was to realize that energies comparable to their total mass-energy could be extracted from them. The computation of the first specific example of such an energy extraction process, by a gedanken experiment, was given in (Ruffini & Wheeler 1970) and (Floyd & Penrose 1971) for the rotational energy extraction from a Kerr Black Hole, see Figure (2).
The reason for showing this figure is not only to recall the first such explicit computation, but to emphasize how contrived and difficult such a mechanism can be: it can only work for very special parameters and should in general be associated with a reduction of the rest mass of the particle involved in the process. To slow down the rotation of a Black Hole and to increase its horizon by the accretion of counter-rotating particles is almost trivial, but to extract the rotational energy from a Black Hole, namely to slow down the Black Hole and keep its surface area constant, is extremely difficult, as clearly pointed out also by the example in Figure (2). The above gedanken experiments, extended as well to electromagnetic interactions, became of paramount importance not for their direct astrophysical significance but because they gave the tool for testing the physics of Black Holes and identifying their general mass-energy formula (Christodoulou & Ruffini 1971):
$$E^2=M^2c^4=\left(M_{\mathrm{ir}}c^2+\frac{Q^2}{2\rho _+}\right)^2+\frac{L^2c^2}{\rho _+^2},$$
(1)
$$S=4\pi \rho _+^2=4\pi \left(r_+^2+\frac{L^2}{c^2M^2}\right)=16\pi \left(\frac{G^2}{c^4}\right)M_{\mathrm{ir}}^2,$$
(2)
with the constraint
$$\frac{1}{\rho _+^4}\left(\frac{G^2}{c^8}\right)\left(Q^4+\frac{L^2c^2}{4}\right)\le 1,$$
(3)
where $`M_{\mathrm{ir}}`$ is the irreducible mass, $`r_+`$ is the horizon radius, $`\rho _+`$ is the quasi-spheroidal cylindrical coordinate of the horizon evaluated at the equatorial plane, $`S`$ is the horizon surface area, and extreme Black Holes satisfy the equality in eq. (3). The crucial point is that a transformation at constant surface area of the Black Hole, or reversible in the sense of Christodoulou & Ruffini (1971), could release an energy up to 29% of the mass-energy of an extremely rotating Black Hole and up to 50% of the mass-energy of an extremely magnetized and charged Black Hole.
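The quoted 29% and 50% can be checked directly from eqs. (1)-(2); a minimal numerical sketch in units $`G=c=1`$:

```python
from math import sqrt

M = 1.0
# Extreme Kerr (Q = 0, a = L/Mc = M, r_+ = M):
r_plus, a = M, M
M_ir = sqrt((r_plus**2 + a**2) / 4.0)   # from S = 16*pi*M_ir^2, eq. (2)
print("extreme Kerr:", 1 - M_ir / M)    # 1 - 1/sqrt(2) ~ 0.29

# Extreme Reissner-Nordstrom (L = 0, Q = M, rho_+ = r_+ = M):
M_ir = r_plus / 2.0                     # eq. (1): M = M_ir + Q^2/(2 rho_+)
print("extreme RN:  ", 1 - M_ir / M)    # 0.50
```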
Various models have been proposed in order to tap the rotational energy of Black Holes by processes of relativistic magnetohydrodynamics (Ruffini & Wilson 1975, Thorne, Price & MacDonald 1986 and references therein; see however Punsly & Coroniti 1989, Punsly 1998), though their efficiency appears to be difficult to assess at this moment. It is likely, however, that these rotational energy extraction processes are relevant over the very long time scales characteristic of the accretion processes.
In the present case of Gamma-Ray Bursts, a prompt mechanism, on time scales shorter than a second, depositing the entire energy in the fireball at the moment of the triggering process of the burst, appears to be at work. For this reason we consider here a more detailed study of the vacuum polarization processes à la Heisenberg-Euler-Schwinger (Heisenberg W. & Euler H. 1931, Schwinger J. 1951) around a Kerr-Newman Black Hole, first introduced by Damour and Ruffini (Damour T. and Ruffini R., 1975). The fundamental points of this process can be simply summarized:
* They occur in an extended region around the Black Hole, the Dyadosphere, extending from the horizon radius $`r_+`$ to the Dyadosphere radius $`r_{ds}`$ (see Preparata, Ruffini & Xue 1998a and 1998b). Only Black Holes with a mass larger than the upper limit of a neutron star and up to a maximum mass of $`6\times 10^5M_{\odot }`$ can have a Dyadosphere; see (Preparata, Ruffini & Xue 1998a and 1998b) for details.
* The efficiency of transforming the mass-energy of the Black Hole into particle-antiparticle pairs outside the horizon can approach 100% for Black Holes in the above mass range; see (Preparata, Ruffini & Xue 1998a and 1998b) for details.
* The pairs created are mainly electron-positron pairs, and their number is much larger than the quantity $`Q/e`$ one would have naively expected on the grounds of qualitative considerations. It is actually given by $`N_{\mathrm{pairs}}=\frac{Q}{e}(1+\frac{r_{ds}}{\hbar /mc})`$, where $`m`$ is the electron mass. The energy of the pairs, and consequently the emission of the associated electromagnetic radiation, peaks in the X- and gamma-ray region, as a function of the Black Hole mass.
I shall first recall some of the results on the Dyadosphere, and then consider the constituent equations leading to the expansion of the Dyadosphere.
## 2 The Dyadosphere and the Energy spectrum
We consider the collapse to the most general Black Hole endowed with an electromagnetic field (EMBH). Following Preparata, Ruffini & Xue (1998a and 1998b), for simplicity we consider the case of a non-rotating Reissner-Nordstrom EMBH to illustrate the basic gravitational-electrodynamical process.
It is appropriate to note that even in the case of an extreme EMBH the charge-to-mass ratio is $`10^{-18}`$ times smaller than the typical charge-to-mass ratio found in nuclear matter, owing to the different strengths and ranges of the nuclear and gravitational interactions. This implies that for an EMBH to be extreme, it is enough to have one quantum of charge present for every $`10^{18}`$ nucleons in the collapsing matter.
We can evaluate the radius $`r_{\mathrm{ds}}`$ at which the electric field strength reaches the critical value $`\mathcal{E}_\mathrm{c}=\frac{m^2c^3}{\hbar e}`$ introduced by Heisenberg and Euler, where $`m`$ and $`e`$ are the mass and charge of the electron. This defines the outer radius of the Dyadosphere, which extends down to the horizon and within which the electric field strength exceeds the critical value. Using the Planck charge $`q_\mathrm{p}=(\hbar c)^{\frac{1}{2}}`$ and the Planck mass $`m_\mathrm{p}=(\frac{\hbar c}{G})^{\frac{1}{2}}`$, we can express this outer radius in the form
$$r_{\mathrm{ds}}=\left(\frac{\hbar }{mc}\right)^{\frac{1}{2}}\left(\frac{GM}{c^2}\right)^{\frac{1}{2}}\left(\frac{m_\mathrm{p}}{m}\right)^{\frac{1}{2}}\left(\frac{e}{q_\mathrm{p}}\right)^{\frac{1}{2}}\left(\frac{Q}{\sqrt{G}M}\right)^{\frac{1}{2}}=1.12\times 10^8\sqrt{\mu \xi }\;\mathrm{cm},$$
(4)
where $`\mu =\frac{M}{M_{\odot }}>3.2`$, $`\xi =\frac{Q}{Q_{\mathrm{max}}}\le 1`$.
It is important to note that the Dyadosphere radius is maximized for the extreme case $`\xi =1`$ and that the region exists for EMBH’s with mass larger than the upper limit for neutron stars, namely $`3.2M_{\odot }`$, all the way up to a maximum mass of $`6\times 10^5M_{\odot }`$. Correspondingly smaller values of the maximum mass are obtained for values of $`\xi =0.1,0.01`$, as indicated in this figure. For EMBH’s with mass larger than the maximum value stated above, the electromagnetic field (whose strength decreases inversely with the mass) never becomes critical.
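A quick numerical reading of eq. (4); a minimal sketch, in which the horizon is approximated by the scale $`GM/c^2`$, about $`1.48\times 10^5\mu `$ cm:

```python
from math import sqrt

def r_ds_cm(mu, xi):
    """Outer Dyadosphere radius, eq. (4), in cm."""
    return 1.12e8 * sqrt(mu * xi)

def r_horizon_cm(mu):
    """GM/c^2, about 1.48e5 cm per solar mass (extreme-EMBH horizon scale)."""
    return 1.48e5 * mu

for mu, xi in [(10, 1.0), (10, 0.1), (6e5, 1.0)]:
    print(mu, xi, "%.2e  %.2e" % (r_ds_cm(mu, xi), r_horizon_cm(mu)))
# For mu = 10, xi = 1 the Dyadosphere reaches ~3.5e8 cm, far outside the
# ~1.5e6 cm horizon; near mu ~ 6e5 the two radii merge, which is why the
# field never becomes critical for larger masses.
```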
We turn now to the crucial issue of the number and energy densities of pairs created in the Dyadosphere. In the limit $`r_{ds}\gg \frac{GM}{c^2}`$, we have
$$N_{e^+e^{-}}\simeq \frac{Q-Q_c}{e}\left[1+\frac{(r_{ds}-r_+)}{\frac{\hbar }{mc}}\right].$$
(5)
Their total energy is then
$$E_{e^+e^{-}}^{\mathrm{tot}}=\frac{1}{2}\frac{Q^2}{r_+}\left(1-\frac{r_+}{r_{\mathrm{ds}}}\right)\left(1-\left(\frac{r_+}{r_{\mathrm{ds}}}\right)^2\right).$$
(6)
Due to the very large pair density and to the sizes of the cross-sections for the process $`e^+e^{-}\to \gamma +\gamma `$, the system is expected to thermalize to a plasma configuration for which
$$N_{e^+}=N_{e^{-}}=N_\gamma =N_{\mathrm{pair}}$$
(7)
and reach an average temperature
$$kT_{\circ }=\frac{E_{e^+e^{-}}^{\mathrm{tot}}}{3N_{\mathrm{pair}}\cdot 2.7},$$
(8)
where $`k`$ is Boltzmann’s constant. The average energy per pair, $`\frac{E_{e^+e^{-}}^{\mathrm{tot}}}{N_{\mathrm{pair}}}`$, is shown as a function of the EMBH mass for selected values of the charge parameter $`\xi `$ in Fig. 3.
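For orientation, eqs. (4)-(8) can be evaluated numerically; a rough sketch in cgs units, in which $`\mu =10`$ and $`\xi =1`$ are illustrative choices, $`Q_c`$ is neglected against $`Q`$, and only orders of magnitude should be read off:

```python
from math import sqrt

G, c = 6.674e-8, 2.998e10                 # cgs
hbar, me, e = 1.055e-27, 9.109e-28, 4.803e-10
Msun = 1.989e33

mu, xi = 10.0, 1.0                        # illustrative EMBH parameters
M = mu * Msun
Q = xi * sqrt(G) * M                      # Q_max = sqrt(G) M
r_p = G * M / c**2                        # horizon scale (extreme EMBH)
r_ds = 1.12e8 * sqrt(mu * xi)             # eq. (4), cm
lam_e = hbar / (me * c)                   # electron Compton wavelength

N_pairs = (Q / e) * (1 + (r_ds - r_p) / lam_e)                    # eq. (5)
E_tot = 0.5 * Q**2 / r_p * (1 - r_p/r_ds) * (1 - (r_p/r_ds)**2)   # eq. (6)
kT = E_tot / (3 * N_pairs * 2.7)                                  # eq. (8)
print("N ~ %.1e pairs, E ~ %.1e erg, kT ~ %.1f MeV"
      % (N_pairs, E_tot, kT / 1.602e-6))
# ~1e59 pairs, ~1e55 erg, and kT of order a few MeV for mu = 10, xi = 1.
```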
Finally we can estimate the total energy extracted by the pair creation process in EMBH’s of different masses for selected values of the charge parameter $`\xi `$ and compare and contrast these values with the maximum extractable energy given by the mass formula for Black Holes (see eqs. (1) and (3)). This comparison is summarized in Figure (4). The efficiency of energy extraction by pair creation sharply decreases as the maximum value of the EMBH mass for which vacuum polarization occurs is reached. In the opposite limit the energy of pair creation processes (solid lines in Figure (4)) asymptotically reaches the entire electromagnetic energy extractable from EMBH’s given by eq.(1), leading in the limit to fully reversible transformations in the sense of Christodoulou & Ruffini (1971), $`\delta M_{ir}=0`$, and $`100\%`$ efficiency.
The discussion of the relativistic expansion of the Dyadosphere is presented in separate papers (see e.g. Ruffini, Salmonson, Wilson & Xue 1999).
## 3 The relativistic expansion of the Dyadosphere and the acceleration of Cosmic Rays
In order to insert these theoretical results in a framework suitable for making an astrophysical model for gamma-ray bursts, we have to analyze the problem of the evolution of the plasma created in the Dyadosphere, taking into due account the equation of motion of the system and the boundary conditions both at the Black Hole horizon and at infinity. This problem is clearly not tractable analytically and has been approached by us numerically, both by the use of supercomputers at Livermore and by the use of a very simplified approach in Rome. Some results are contained in Ruffini, Salmonson, Wilson & Xue (1999).
Before concluding, I would like to return to the suggestion, advanced by Damour and Ruffini, that a discharged EMBH can still be extremely interesting from an energetic point of view and responsible for the acceleration of ultrahigh energy cosmic rays. I would like to formalize this point with a few equations: it is clear that no matter what the initial conditions leading to the formation of the EMBH are, the final outcome after the tremendous expulsion of the PEM pulse will be precisely a Kerr-Newman solution with a critical value of the charge. If the background metric has a Killing vector, the scalar product of the Killing vector and the generalised momentum
$$P_\alpha =mU_\alpha +eA_\alpha ,$$
(9)
is a constant along the trajectory of any charged gravitating particle following the relativistic equation of motion in the background metric and electromagnetic field (Jantzen and Ruffini 1999). Consequently, an electron (positron) starting at rest in the Dyadosphere will reach infinity with an energy $`E_{kinetic}\approx 2mc^2(\frac{GM}{c^2})/(\frac{\hbar }{mc})\approx 10^{22}`$ eV for $`M=10M_{\odot }`$.
# RELATIVISTIC FADDEEV APPROACH TO THE NJL MODEL AT FINITE DENSITY
## 1 Introduction
In contrast to high-temperature zero-density QCD, rather little is known about high-density zero-temperature QCD. Due to technical difficulties (the fermionic determinant becoming complex at finite chemical potential), lattice calculations are not able to provide unambiguous results. However, models of QCD seem to indicate a rich phase structure in high-density quark matter. In particular, much attention has recently been devoted to so-called colour superconductivity: at high density, an arbitrarily weak attraction between quarks makes the quark Fermi sea unstable with respect to diquark formation and induces Cooper pairing of the quarks (diquark condensation). However, the groups who have studied colour superconductivity focused only on instabilities of the Fermi sea with respect to diquarks and have not considered possible 3-quark clustering. To address this question would in principle necessitate a generalisation of the BCS treatment. As a first step we can look for instabilities of the quark Fermi sea with respect to 3-quark clustering by studying the evolution of the nucleon binding energy with density. A bound nucleon at finite density would be a signal of instability. In this study, we will use the Nambu–Jona-Lasinio (NJL) model and solve the relativistic Faddeev equation for the nucleon as a function of density.
## 2 The model
The NJL model provides a simple implementation of dynamically broken chiral symmetry. It has been successful in the description of mesonic properties at low energy, and several groups have used it to study baryons at zero density (for a review of the NJL model, see for example ). Several versions of the NJL lagrangian are available. Whichever version we choose, a Fierz transformation allows us to order the terms according to their symmetries in the $`q\overline{\psi}`$ channel; here we are only interested in the scalar and pseudoscalar terms:
$$\mathcal{L}_\pi =\frac{1}{2}g_\pi [(\overline{\psi }\psi )^2-(\overline{\psi }\gamma _5\tau \psi )^2].$$
(1)
To study the baryons, another Fierz transformation has to be performed in the $`qq`$ channel. In this work we shall keep only the scalar diquark channel:
$$\mathcal{L}_s=g_s(\overline{\psi }(\gamma _5C)\tau _2\beta ^A\overline{\psi }^T)(\psi ^T(C^{-1}\gamma _5)\tau _2\beta ^A\psi ),$$
(2)
where $`\beta ^A=\sqrt{3/2}\lambda ^A`$ for $`A=2,5,7`$ projects on the colour $`\overline{3}`$ channel and $`C=i\gamma _2\gamma _0`$ is the charge conjugation matrix.
The ratio $`g_s/g_\pi `$ depends on the version of the NJL lagrangian used. In the following, we will not choose a particular version of the model but rather leave the ratio $`g_s/g_\pi `$ as a free parameter. We regularize the model with a 3-momentum cut-off $`\mathrm{\Lambda }`$. We have two parameters, $`g_\pi `$ and $`\mathrm{\Lambda }`$, which are fitted to the values of the pion decay constant $`f_\pi =93`$ MeV and the constituent quark mass $`M=400`$ MeV. This gives us $`g_\pi =7.01`$ GeV⁻² and $`\mathrm{\Lambda }=0.593`$ GeV.
The effect of density is introduced by requiring the quark 3-momentum to be larger than the Fermi momentum $`k_F`$. Solving the gap equation as a function of $`k_F`$ gives the usual dependence of the constituent quark mass on density. Chiral restoration occurs at $`k_F/\mathrm{\Lambda }=0.58`$, which corresponds to about 2.1 times the nuclear matter density.
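As a concrete illustration, the sketch below solves this gap equation in the chiral limit with Pauli blocking below $`k_F`$. The overall normalization in front of the integral is our reconstruction of the conventions implied by eq. (1) (with $`N_c=3`$, $`N_f=2`$); it is consistent with the quoted fit, reproducing $`M\approx 400`$ MeV at zero density and chiral restoration near $`k_F/\mathrm{\Lambda }\approx 0.58`$.

```python
# Minimal sketch of the NJL gap equation in the chiral limit with a sharp
# 3-momentum cutoff and Pauli blocking below the Fermi momentum k_F:
#   1 = (2 g_pi N_c N_f / pi^2) * int_{k_F}^{Lambda} dk k^2 / sqrt(k^2 + M^2)
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

g_pi, Lam, Nc, Nf = 7.01, 0.593, 3, 2   # GeV^-2, GeV

def gap(M, kF):
    integral, _ = quad(lambda k: k**2 / np.sqrt(k**2 + M**2), kF, Lam)
    return 2 * g_pi * Nc * Nf / np.pi**2 * integral - 1.0

for kF_over_Lam in (0.0, 0.3, 0.6):
    kF = kF_over_Lam * Lam
    if gap(1e-6, kF) > 0:                      # a nontrivial solution exists
        M = brentq(gap, 1e-6, 1.0, args=(kF,))
        print(f"k_F/Lambda = {kF_over_Lam:.1f}:  M = {M*1e3:.0f} MeV")
    else:
        print(f"k_F/Lambda = {kF_over_Lam:.1f}:  chirally restored (M = 0)")
```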
## 3 Diquark at finite density
As a first step to the resolution of the Faddeev equation, the scalar diquark mass has to be calculated as a function of density. The Bethe-Salpeter equation for the 2-body $`T`$-matrix is solved in the scalar $`qq`$ channel using the ladder approximation (the explicit solution is given in ) and the pole of the $`T`$-matrix gives the mass of the bound diquark. Note that the denominator of the scalar $`qq`$ $`T`$-matrix is formally identical to that of the pionic $`T`$-matrix, except for the replacement of $`g_s`$ by $`g_\pi `$. That means that the pion and the scalar diquark are degenerate (and of zero mass) for a ratio $`g_s/g_\pi =1`$. Fig. 1 gives the binding energy of the scalar diquark, $`B_{diq}=2\sqrt{k_F^2+M^2}-E_{diq}`$, as a function of the dimensionless variable $`k_F/\mathrm{\Lambda }`$ for a value of the scalar coupling $`g_s/g_\pi =0.83`$.
One can see that the scalar diquark is bound for all values of the density, the binding energy increasing with density to reach more than half a GeV. The sharp peak, which occurs at chiral symmetry restoration, is a consequence of the choice of a large scalar coupling, close to the value $`g_s/g_\pi =1`$, for which the scalar diquark and the pion are degenerate.
## 4 The relativistic Faddeev equation
Because of the separability of the NJL interaction, the 3-body relativistic Faddeev equation in the ladder approximation can be reduced to an effective 2-body Bethe-Salpeter equation describing the interaction between a quark and a diquark. Details concerning the derivation of this equation can be found in the previous studies performed at zero density. Explicitly, it is written:
$$\mathrm{\Psi }(P,q)=\frac{i}{4\pi ^4}\int d^4q^{\prime}\,R(2P/3+q^{\prime})\,V(q,q^{\prime};P)\,\mathrm{\Psi }(P,q^{\prime}),$$
(3)
where $`R(2P/3+q^{\prime})`$ is the two-body $`T`$-matrix for the scalar diquark and $`V(q,q^{\prime};P)`$ involves the product of the propagators of the spectator and exchanged quarks:
$$V(q,q^{\prime};P)=\frac{(\gamma \cdot p_2^{\prime}+m)(\gamma \cdot p_1^{\prime}+m)}{(p_1^{\prime \,2}-m^2)(p_2^{\prime \,2}-m^2)}.$$
(4)
Here $`p_i`$ ($`i=1,2,3`$) are the momenta of the valence quarks, $`P=p_1+p_2+p_3`$ is the total momentum of the nucleon, and $`q,q^{\prime}`$ are the Jacobi variables defined by $`p_3\equiv P/3-q`$ and $`p_1^{\prime}\equiv P/3-q^{\prime}`$. We now look for the nucleon solution of (3):
$$\mathrm{\Psi }=\left(\begin{array}{c}\mathrm{\Phi }_1(q_0,q)\\ \vec{\sigma }\cdot \vec{q}\,\mathrm{\Phi }_2(q_0,q)\end{array}\right).$$
(5)
With this form for $`\mathrm{\Psi }`$, Eq. (3) becomes a set of two coupled integral equations. Following Huang and Tjon, we then perform a Wick rotation on the $`q_0`$ and $`q_0^{\prime}`$ variables. This leads to two coupled complex integral equations which are solved iteratively. The initial guess for each of the wave functions $`\mathrm{\Phi }_1(q_0,q)`$ and $`\mathrm{\Phi }_2(q_0,q)`$ consists of a Gaussian for the real part and a derivative of a Gaussian for the imaginary part. The number of iterations needed to reach convergence of the solutions is about four.
The above discussion remains valid at finite density. Again, we incorporate the effects of density by restricting the 3-momentum of each valence quark to values larger than the Fermi momentum $`k_F`$, i.e.:
$$k_F\le |\vec{p}_i|\le \mathrm{\Lambda }\qquad (i=1,2,3).$$
(6)
This condition translates into Eq. (3) as a complicated cut-off on the 3-momentum integration variable. Apart from this restriction, the method of solving the Faddeev equation is the same as at zero density.
## 5 Results and conclusions
At zero density we found that the nucleon is bound only if the scalar coupling is strong enough, i.e. $`g_s/g_\pi \gtrsim 0.8`$, which means that the nucleon is not bound in either the “standard” NJL model ($`g_s/g_\pi =2/13`$) or the colour-current interaction version ($`g_s/g_\pi =1/2`$).
At finite density, we solve the Faddeev equation for the energy $`E_{nuc}`$ of the nucleon as a function of the Fermi momentum. Results are shown in Fig. 2 for $`g_s/g_\pi =0.83`$. The binding energy of the nucleon, $`B_{nuc}=E_{diq}+\sqrt{k_F^2+M^2}-E_{nuc}`$, is depicted, again as a function of the dimensionless variable $`k_F/\mathrm{\Lambda }`$. The binding energy of the nucleon is relative to the quark-diquark threshold: as there is no confinement in the NJL model, nothing can prevent the existence of a free diquark. In contrast to the diquark (shown in Fig. 1 for the same value of $`g_s/g_\pi `$), which is bound over the whole range of densities, the binding energy of the nucleon decreases quickly with density and the binding disappears well before nuclear matter density (which corresponds to $`k_F/\mathrm{\Lambda }\approx 0.45`$).
These results imply that, in the region characteristic of colour superconductivity (beyond chiral symmetry restoration), the quark Fermi sea is unstable only with respect to the formation of diquarks and not of 3-quark clusters. However, we have to emphasize that we included only the scalar part of the $`qq`$ interaction; at zero density, several authors have shown that the axial-vector $`qq`$ interaction gives an important contribution to the nucleon binding energy (of the order of 100 MeV), while the axial-vector diquark is not bound for reasonable values of the axial-vector coupling. Incorporating the axial-vector $`qq`$ interaction is necessary to obtain a more realistic picture of the evolution of the nucleon binding energy with density and could significantly modify the present results.
# Acknowledgements
I am very much indebted to Prof. Shapiro and Prof. R. Ramos for helpful discussions on the subject of this paper. Financial support from CNPq and UERJ (FAPERJ) is gratefully acknowledged.
# Type Ia Supernovae, Evolution, and the Cosmological Constant
## 1 Introduction
The realization that the rates of decline of the brightnesses of Type Ia supernovae (SNe Ia) are correlated with their peak luminosities (Phillips 1992) has led to renewed efforts to use them as cosmological distance markers (Hamuy et al. 1995, 1996a, 1996b; Riess, Press and Kirshner 1995, 1996). Ongoing searches for high redshift ($`z\sim 0.5`$–$`1`$) SNe Ia have employed phenomenological models for these correlations to constrain the variation of luminosity distance $`D_L(z)`$ with redshift; the results have been interpreted to imply the existence of a nonzero cosmological constant (Perlmutter et al. 1998 hereafter P98, Riess et al. 1998 hereafter R98). Moreover, the results appear to rule out the simplest version of a flat cosmology, in which the density parameter for “ordinary” matter (including as yet unidentified nonbaryonic material) $`\mathrm{\Omega }_M=1`$ and the density parameter for the cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$.
Although the logical possibility that $`\mathrm{\Omega }_\mathrm{\Lambda }\sim 1`$ today has long been recognized (Einstein 1917), it is anathema to many theorists, since the associated vacuum energy density must be $`\rho _{vac}\sim 10^{-122}M_\mathrm{P}^4`$, where $`M_\mathrm{P}`$ is the Planck mass. (Theoretical and conceptual problems with a nonzero cosmological constant so small compared with its “natural” scale have been reviewed by Weinberg 1989 and Carroll, Press & Turner 1992.) On the other hand, there is some evidence from large scale structure simulations that $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ in a flat Universe fits the observations well (Cen 1998). Conceivably, what we interpret as a cosmological “constant” might be an evolving field (e.g., Caldwell, Dave & Steinhardt 1998; Garnavich et al. 1998; Perlmutter, Turner & White 1999). In any case, a convincing demonstration that the expansion rate of the Universe is increasing would have a revolutionary impact on our understanding of fundamental physics.
In view of the importance of the potential discovery of a nonzero cosmological constant, we have undertaken an independent study of the published data in an effort to understand their implications better. Our motivations are both phenomenological and astrophysical (and may end up being related ultimately). On the phenomenological side, we note that three different analysis methods are used to compute distances: the multicolor light curve shape (MLCS) method (Riess, Press & Kirshner 1995, 1996), the M15 or template fitting (TF) method (Phillips 1992; Hamuy et al. 1995, 1996a, 1996b), and the stretch factor (SF) method (Perlmutter et al. 1997). None of these methods is a perfect description of reality. As we will show, they are not always in agreement, and there seems to be no physical or phenomenological reason to prefer one to the others.
On the astrophysical side, we note that there are processes, such as evolution of the supernova sample, that can mimic the effects of cosmology at high redshifts and which are extremely difficult to constrain convincingly with the current data. Therefore, it is useful to ask at what level the current data are able to distinguish the effects of cosmology from these other processes. We find that allowing for the possibility of a redshift-dependent shift in SNe Ia peak magnitudes (such that the most distant observed SNe Ia are dimmed by $`\sim 0.2`$ to 0.3 mag) renders $`\mathrm{\Omega }_M=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ acceptable, and that this is true for a variety of phenomenological models for the evolution. We also present results of simulations that show that if SNe Ia luminosities evolve with redshift, but evolution is neglected in analyzing the data, then, given enough data, the analyses will settle on precisely determined, but incorrect, values for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and that the incorrectness of the model will not be detectable with a standard $`\chi ^2`$ goodness-of-fit test. However, we find that the Hubble constant, $`H_0`$, is virtually unaffected by evolution.
We believe that it is unjustifiable to try to determine cosmological parameters $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ from data on “standardized” candles, such as the peak luminosities of SNe Ia, without allowing for the possibility of source evolution. Our attitude is that an uncertain amount of evolution must be presumed to occur, as a default; and the sensitivity of the results to the uncertainty must be studied. Hopefully, one can demonstrate from the data that source evolution is absent or negligible. (The observers have employed supplementary measurements, such as source spectra, to argue that there is no compelling evidence for evolution in the SNe Ia samples. If the physical connection between the additional data and the estimators of peak luminosity can be understood quantitatively, then the supplementary measurements can be incorporated usefully into analyses of models with evolution. Without such physical understanding, these additional data cannot be marshalled to argue against evolution.) If that turns out to be untrue, as recent examination of the risetimes of the light curves of the SNe Ia sample has preliminarily indicated (Riess, Filippenko, Li, & Schmidt 1999), one might hope instead to constrain the parameters in an evolutionary model along with the cosmological parameters. Optimistically, one would anticipate that this might be accomplished once enough data are acquired. We argue, from simulations employing simple, phenomenological models, that such optimism may be unrealistic. What is needed is a better physical understanding of the SN Ia process and its evolution with redshift, before cosmological parameters can be determined reliably from SNe Ia catalogues. Such an understanding is currently being sought by theorists (see, e.g., von Hippel, Bothun, & Schommer 1997; Höflich, Wheeler, & Thielemann 1998; Domínguez et al. 1999).
In using observations of SNe Ia to determine cosmological parameters, the raw data are combined by any of the three methods mentioned above to derive single parameter summaries – the distance moduli to the sources. When a single catalogue of data is subjected to different types of analysis, each of which derives one quantity per source, the results of the individual analyses need not agree with one another entirely, and there is information contained in the degree to which the answers derived by the different methods differ. (A simple example might involve computing mean values of data using similar, but not identical, weighting functions. Each weighted mean would then be a slightly different superposition of all of the moments of the data computed with one fiducial weighting function.) In the example under consideration, the different analysis methods might probe slightly different physical aspects of the SN Ia mechanism, and their relationships to reality and to one another may be different at high and low redshift. Indeed, one clue that evolutionary effects are important would be a systematic drift with increasing redshift between the distance moduli implied by the MLCS and TF methods for the SNe Ia observed and analyzed by R98, where identical SNe Ia data are subjected to two different analysis methods.
In order to set the scale of interest for our investigations of potential systematic effects in these data we plot, in Figure 1, the joint credible regions for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ for the largest available data set (P98), analyzed as published (Figure 1$`a`$) and after introducing a systematic offset to all the high redshift distance moduli ($`z>0.15`$) of $`0.1`$ magnitudes (Figure 1$`b`$). We see that a correlated systematic shift of this size would have a major impact on the interpretation of the data. A nonzero cosmological constant would still be favored, but the statistical significance of the result would be much reduced.
In Section 2 we review the published data that we employ in our study, as well as the salient features of the light curve fitting methods. In Section 3 we compare the different fitting methods on a supernova by supernova basis where possible. In Section 4 we explore whether the data have sufficient shape information to distinguish the effects of cosmology from non-cosmological effects such as evolution. We summarize our conclusions in Section 5.
## 2 Measurements of $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ Using Type Ia Supernovae
The traditional measure of distance to a SN is its distance modulus, $`\mu \equiv m_{\mathrm{bol}}-M_{\mathrm{bol}}`$, the difference between its bolometric apparent magnitude, $`m_{\mathrm{bol}}`$, and its bolometric absolute magnitude, $`M_{\mathrm{bol}}`$. In the Friedman-Robertson-Walker (FRW) cosmology, when the (relative) peculiar velocity of the source is negligible, the distance modulus is determined by the source’s redshift, $`z`$, according to
$`\mu `$ $`=`$ $`5\mathrm{log}\left[{\displaystyle \frac{D_L(z;\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },H_0)}{1\text{Mpc}}}\right]+25`$ (1)
$`\equiv `$ $`f(z;\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },H_0).`$
Here the luminosity distance $`D_L(z;\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },H_0)=cH_0^{-1}d_L(z;\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$, where $`c`$ is the velocity of light, $`H_0`$ is Hubble’s constant at the present epoch, and the dimensionless luminosity distance from redshift $`z`$ is
$$d_L(z;\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=(1+z)|\mathrm{\Omega }_k|^{-1/2}\,sinn\left\{|\mathrm{\Omega }_k|^{1/2}\int _0^zdz^{\prime}\left[(1+z^{\prime})^2(1+\mathrm{\Omega }_Mz^{\prime})-z^{\prime}(2+z^{\prime})\mathrm{\Omega }_\mathrm{\Lambda }\right]^{-1/2}\right\},$$
(2)
with $`\mathrm{\Omega }_k=1-\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$, and $`sinn(x)=\mathrm{sinh}(x)`$ for $`\mathrm{\Omega }_k>0`$ and $`\mathrm{sin}(x)`$ for $`\mathrm{\Omega }_k<0`$ (e.g., Carroll, Press & Turner 1992); in the flat case $`\mathrm{\Omega }_k=0`$, $`d_L`$ reduces to $`(1+z)`$ times the integral. In principle, one could infer the cosmological parameters $`H_0`$, $`\mathrm{\Omega }_M`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ from the distribution of measured distance moduli of sources at a variety of redshifts.
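For reference, eqs. (1)–(2) are simple to evaluate numerically; the following minimal Python sketch (ours, not the observing teams’ code) computes $`d_L`$ and the distance modulus $`\mu `$.

```python
# Minimal implementation of eqs. (1)-(2): dimensionless luminosity distance
# d_L(z; Omega_M, Omega_Lambda) and the distance modulus mu.
import numpy as np
from scipy.integrate import quad

def d_L(z, Om, OL):
    Ok = 1.0 - Om - OL
    integral, _ = quad(
        lambda zp: ((1 + zp)**2 * (1 + Om * zp) - zp * (2 + zp) * OL) ** -0.5,
        0.0, z)
    if abs(Ok) < 1e-8:                      # flat case
        return (1 + z) * integral
    s = np.sqrt(abs(Ok))
    sinn = np.sinh if Ok > 0 else np.sin    # open / closed
    return (1 + z) * sinn(s * integral) / s

def mu(z, Om, OL, h=0.65):
    D_L_Mpc = 2997.9 / h * d_L(z, Om, OL)   # c/H0 = 2997.9/h Mpc
    return 5 * np.log10(D_L_Mpc) + 25

# e.g. a z = 0.5 SN Ia appears ~0.4 mag fainter in a flat Lambda-dominated
# model than in an Einstein-de Sitter model (h cancels in the difference):
print(mu(0.5, 0.3, 0.7) - mu(0.5, 1.0, 0.0))
```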
Several factors complicate implementation of such an analysis. In reality, bolometric data are not available, and one must infer $`\mu `$ using magnitudes $`m_X`$ and $`M_X`$ in some bandpass, $`X`$. The bandpass maps to a different region of the spectrum as a function of redshift $`z`$, so $`\mu `$ cannot be calculated simply by taking the difference between band-limited magnitudes; a $`K`$-correction term must be added whose value depends not only on the source’s redshift, but also on its spectrum. In addition, extinction along the line of sight increases the apparent magnitude by some amount $`A_X`$ not due to the cosmological effects modelled in equation (1). Further, the absolute magnitude of the source—bolometric or band-limited—is not directly measured, but must instead be inferred from other source properties. Finally, the inevitable presence of statistical uncertainties and peculiar velocities further complicates straightforward use of equation (1).
Of these complications, the need to infer the absolute magnitude indirectly is the most troublesome. Ideally, one seeks a population of “standard candles” such that all members of the population have the same $`M`$ (for convenience we henceforth drop the subscript $`X`$). If such a population could be identified, the parameters $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ could be inferred even if the actual value of $`M`$ for the population were unknown (the remaining parameter, $`H_0`$, would remain undetermined). Historically, all attempts to identify such a population have failed. Particularly worrisome is the possibility that some classes of objects that appear to be approximately standard candles locally (at low redshift, where they can be studied in detail) have evolved significantly, so that their younger counterparts at high redshifts have different absolute magnitudes, thwarting their use as cosmological distance indicators.
SNe Ia were briefly considered promising candidates for standard candles, but observers quickly discovered that SNe Ia are not all identically bright (Branch 1987; Barbon, Rosino, & Iijima 1989; Phillips et al. 1987, 1992; Filippenko et al. 1992a, 1992b; Leibundgut et al. 1993). The intrinsic dispersion in the peak absolute magnitudes of SNe Ia, determined from studies of nearby events, is approximately 0.3 - 0.5 mag (Schmidt et al. 1998). However, there is an apparent empirical correlation between the rate of decline of the light curve of a given SN Ia and its luminosity at maximum brightness that was first quantified by Phillips (1992). Various techniques have been developed to take advantage of this correlation to determine the absolute magnitudes of individual supernovae using their light curves (Phillips 1992; Hamuy et al. 1995, 1996a, 1996b; Riess, Press & Kirshner 1995, 1996; Perlmutter et al. 1997); the relationships used in these analyses have come to be known generically as “Phillips relations.” When applied to nearby SNe Ia, these methods reduce the dispersion of the distance moduli about the low-$`z`$ FRW distance modulus vs. redshift relation to $`\sim 0.15`$ mag (Hamuy et al. 1996a, Riess, Press & Kirshner 1996).
The goal of the high redshift supernovae searches is to observe a large sample of supernovae at relatively large $`z`$, and understand their properties well enough to infer reliable distance moduli for them, allowing accurate determination of cosmological parameters. Two experimental groups have recently announced and published results from their independent programs to discover and study high redshift supernovae for this purpose (Perlmutter 1997 and P98; R98). The resulting two data sets share many low redshift SNe discovered by previous surveys, but include different high redshift SNe, and differ in their analysis methods. We take advantage of the similarities and differences among the data and methods used to assess the consistency or inconsistency of the assumptions underlying the analyses.
P98 have published data on 60 SNe Ia. Of these, 18 were discovered and measured in the Calán-Tololo survey (all at low redshift; Hamuy et al. 1996c), and this group discovered 42 new SNe Ia at redshifts between 0.17 and 0.83. The $`\mu `$ values are inferred using the SF light curve fitting method and are typically uncertain to $`\pm 0.2`$ magnitudes (“$`1\sigma `$”). The SF method (Perlmutter et al. 1997; P98) is based on fitting a time-stretched version of a single standard template to the observed light curves. The stretch factor, $`s`$, is then used to estimate the absolute magnitude of the SNe Ia via a linear relationship that is determined jointly with the cosmological parameters. The quoted $`\mu `$ values include a correction for extinction in the Galaxy based on the detailed model of Burstein & Heiles (1982).
R98 have published results based on 50 SNe Ia. Of these, 37, including 27 at low redshift ($`z<0.15`$) and ten at high redshift ($`z>0.15`$) have well-sampled light curves in addition to spectroscopic information; the quoted “$`1\sigma `$” uncertainties for $`\mu `$ for these SNe Ia are typically smaller than $`\pm 0.2`$ mag at high $`z`$ for determinations by either MLCS or TF light curve fitting method. The data for 17 of the SNe Ia at low redshift come from the Calán-Tololo survey (Hamuy et al. 1996c). We focus our attention on these 37 best-observed SNe Ia, which dominate the analysis in R98. These authors extensively tabulate their reduced data and provide detailed information about their fitting techniques, thus facilitating independent analysis of their conclusions.
R98 employ two different methods to estimate the distance modulus based on information from the light curves. The TF method (Hamuy et al. 1996a) fits a set of light curve templates with different values of $`\mathrm{\Delta }m_{15}`$, the total decline in brightness from peak to 15 days afterward, to observations of a particular SN Ia. By interpolating between the values of $`\chi ^2`$ for the fits to the various templates, a minimum $`\chi ^2`$ value of $`\mathrm{\Delta }m_{15}`$ for the SN Ia is estimated. The peak absolute magnitude is deduced from the independently calibrated linear relationship between $`M`$ and $`\mathrm{\Delta }m_{15}`$. The MLCS method consists of fitting an observed light curve to a superposition of a standard light curve and weighted additional templates that parametrize the differences among SNe Ia (e.g., Riess, Press & Kirshner 1996); the outcome of the fits for a particular SN Ia consists of the weights associated with the deviations from the standard, which in turn determine the difference between its peak absolute magnitude and the standard’s. The fits are done for more than one color, and reddening and extinction are inferred from color dependences (Riess, Press & Kirshner 1996, R98). Originally the MLCS method used a rather small training set to determine the requisite templates (Riess, Press & Kirshner 1996), but R98 now train on a considerably larger set of nearby SNe Ia to find them. Both the MLCS and TF methods are calibrated on nearby SNe Ia in the Hubble flow $`(z<0.15)`$ and then applied to the SNe Ia discovered at high redshift. The quoted $`\mu `$ values include a correction for local extinction derived from the Burstein and Heiles model, and in addition the MLCS method uses color dependence to estimate corrections for the extinction and reddening due to absorbing material in the host galaxy.
Schematically, we can consider a lightcurve fitting method to estimate the distance modulus for supernova number $`i`$ according to the following model:
$$\mu _i=m_i-(M_0+\mathrm{\Delta }_i).$$
(3)
Here $`m_i`$ is the peak apparent magnitude for the SN, $`M_0`$ is a fiducial absolute magnitude (a single constant for a particular method), and $`\mathrm{\Delta }_i`$ is a shift so that the peak absolute magnitude for the SN is given by $`(M_0+\mathrm{\Delta }_i)`$. For the purposes of this paper, we have ignored $`K`$-corrections and extinction in equation (3) (one can consider them to have been accounted for in the $`m_i`$ estimates and their uncertainties). These corrections could potentially be important sources of systematic error; the observing teams have gone to some lengths to constrain the sizes of such errors. Here we concentrate on the possibility that systematic errors are introduced by the lightcurve fitting algorithms entering the analyses via the shifts $`\mathrm{\Delta }_i`$. We will seek information about such errors by comparing the shifts across methods, rather than through analysis of the internal consistency of a particular method.
The various fitting methods differ in how $`m_i`$ is interpolated from the observed (incompletely sampled) lightcurve, in the choice for $`M_0`$, and in how the shifts $`\mathrm{\Delta }_i`$ are determined from the (multicolor) lightcurve shapes. The MLCS method provides $`\mathrm{\Delta }_i`$ directly from fitting to a family of templates parameterized by $`\mathrm{\Delta }_i`$. For the TF method, $`\mathrm{\Delta }_i=\beta _{\mathrm{TF}}(\mathrm{\Delta }m_{15}-1.1)`$, with $`\beta _{\mathrm{TF}}`$ a constant determined by fits. For both the MLCS and TF methods, $`M_0`$ is inferred through the use of SNe Ia that have Cepheid distances, and the various parameters specifying the shift as a function of the lightcurve shape are set by analyses of low redshift SNe. For the SF method, $`\mathrm{\Delta }_i=\alpha (s-1)`$, where $`s`$ is the above-mentioned stretch factor and $`\alpha `$ is a constant estimated jointly with cosmological parameters in fits to the entire survey. $`M_0`$ is simply set at an arbitrary value; accordingly, no attempt is made to infer $`H_0`$ using SNe analyzed with the SF method. In principle, each of the quantities on the right hand side of equation (3) has uncertainty associated with it, and the resulting errors in the estimates for these quantities are correlated. But only the combination given by equation (3) appears in cosmological fits, so the lightcurve fitting results can be summarized by the best-fit distance modulus $`\widehat{\mu }_i`$ and the total $`\mu _i`$ uncertainty $`\sigma _i`$ for each SN. These quantities, and the shifts $`\mathrm{\Delta }_i`$, are the focus of our study.
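To make equation (3) and the method-specific shifts concrete, here is a purely illustrative sketch: the values of $`M_0`$, $`\beta _{\mathrm{TF}}`$, and $`\alpha `$ are placeholders rather than the published calibrations, and we adopt the convention that brighter-than-fiducial SNe have $`\mathrm{\Delta }_i<0`$.

```python
# Illustrative mapping from lightcurve-fit outputs to distance moduli,
# following eq. (3): mu_i = m_i - (M0 + Delta_i).  All numerical values
# below are placeholders, not the published calibrations.
M0 = -19.3                        # fiducial peak absolute magnitude (assumed)

def delta_mlcs(Delta):            # MLCS: Delta is fitted directly to templates
    return Delta

def delta_tf(dm15, beta_tf=0.8):  # TF: linear in the decline rate dm15
    return beta_tf * (dm15 - 1.1)

def delta_sf(s, alpha=-0.6):      # SF: linear in the stretch factor s
    return alpha * (s - 1.0)      # our sign convention: bright => Delta < 0

def distance_modulus(m_peak, shift):
    return m_peak - (M0 + shift)

# A slow decliner (dm15 = 0.9) is intrinsically brighter than the fiducial
# SN, so its inferred distance modulus is larger:
print(distance_modulus(22.5, delta_tf(0.9)))   # ~41.96 vs 41.8 uncorrected
```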
Figure 2 shows histograms of the shifts deduced from the MLCS (R98), TF (R98) and SF (P98) methods for the observed SNe Ia. Since the choice of $`M_0`$ can vary from method to method, we do not expect the histograms to be aligned. However, differences in histogram shape would indicate that the various methods are correcting SNe in different and possibly inconsistent ways. While the three methods claim to reduce the dispersion in the magnitude-redshift relationship at low $`z`$, it is clear from the figure that they produce rather different distributions of shifts. Although the SF method has been applied to a different set of SNe Ia than the other methods, this alone cannot explain the obvious differences between the shapes of the histograms (we note that 14 SNe are common to all three methods). Most striking is that the distribution is extremely narrow for the SF method, indicating that, by this measure, the P98 SNe Ia sample consists almost entirely of standard candles, or that for this sample of SNe Ia, the adopted brightness-decline rate relation is not valid. This suggests to us that these methods may be sensitive to different aspects of the SN Ia phenomenon. A consequence of this is that if the properties of SNe Ia change with redshift, the relationships between the $`\mu `$ estimates produced by the three methods could be $`z`$-dependent. A search for such a dependence could thus provide information about redshift dependence of SNe Ia properties. In the following section we use exploratory methods to search for evidence of this and other kinds of dependences.
## 3 Sleuthing
Our approach in this section is driven by our belief that it is not sufficient to settle for the consistency of the final cosmological inferences of the MLCS, TF and SF analyses. We should expect consistency between them (statistically) on a supernova-by-supernova basis where such a comparison is possible.
Since R98 use two different methods to compute distance moduli for their sample of 37 SNe Ia, we can compare the results and search for systematic differences between them. Both the TF and MLCS techniques are calibrated using the same set of nearby SNe Ia in the Hubble flow, and there is only one set of observational data for each SN Ia; consequently, the uncertainties for the two methods are highly correlated. Another comparison set is the group of 14 supernovae from the Calán-Tololo survey that are included in both R98 and P98. Since all fitting methods make use of the same published light curves for this sample of 14 events, the inferred quantities for this sample will also be highly correlated.
### 3.1 Pointwise Consistency
In Figure 3, we compare the distance moduli measured with the different techniques on common samples of SNe Ia. In Figure 3a, we show $`\mu _{\mathrm{MLCS}}`$ vs $`\mu _{\mathrm{TF}}`$ for the 10 high redshift SNe Ia analyzed in R98, and in Figure 3b, we show $`\mu _{\mathrm{MLCS}}`$ vs $`\mu _{\mathrm{SF}}`$ for the 14 common Calán-Tololo SNe Ia analyzed by both SF and MLCS methods. The error bars for $`\mu `$ are derived from the uncertainties in the individual distance moduli for each supernova (R98, P98), except that we have removed the contribution associated with the intrinsic dispersion of the SNe Ia sample, estimated to be $`\sigma _{\mathrm{int}}=0.10`$ at low $`z`$ and $`\sigma _{\mathrm{int}}=0.15`$ at high $`z`$ (the error due to intrinsic dispersion in the SNe Ia sample is estimated to be somewhat larger in P98, $`\sigma _{int}=0.17`$; for the purposes of comparison we remove the smaller estimated value of the correlated error), and we have removed the contribution associated with the peculiar velocity of the SNe Ia ($`\sigma _v=300`$ km s⁻¹ in P98, $`\sigma _v=200`$ km s⁻¹ in R98). Both the errors in the distance modulus from intrinsic dispersion in the sample and from the peculiar velocity of the SNe are completely correlated among the different methods. We have not removed the correlations due to $`K`$-corrections, photometry and extinction (e.g., Schmidt et al. 1998), because there is insufficient published information for us to do so properly; consequently, we have overestimated the uncorrelated portion of the distance modulus error somewhat.
From Figure 3 it is clear that the estimates for the distance moduli from the different methods are strongly correlated, as they should be. However, there is more dispersion in these plots than we would expect based on the quoted errors. A fit of a straight line of slope 1 gives a $`\chi ^2/\nu `$ (with $`\nu `$ the number of degrees of freedom) of 22.8/9 for Figure 3a and 21.2/13 for Figure 3b, indicating that there are errors associated with the analysis methods that have not been accounted for.
We can pursue this type of comparison further with the R98 data, where all of the SNe Ia have been fully analyzed with two independent methods. In Figure 4 we compare the MLCS and TF estimates of various quantities that are used in inferring the distance moduli of the SNe Ia events. For the 37 SNe Ia analyzed in R98, Figure 4$`a`$ shows the host galaxy extinction, $`A`$, 4$`b`$ shows the correction to the absolute magnitude, $`\mathrm{\Delta }`$, and 4$`c`$ illustrates the peak apparent magnitude, $`m`$, calculated with the MLCS and TF analysis methods. (The individual errors for the extinction and $`\mathrm{\Delta }`$ are not published but can be crudely estimated to be of order 0.1 magnitudes.) Again, there is more dispersion evident in these plots than might be expected from the quoted or estimated errors, except for the correlation plot of $`m_{\mathrm{MLCS}}`$ versus $`m_{\mathrm{TF}}`$. The peak apparent magnitudes inferred via the two methods, which are the quantities most directly related to the raw data, are in excellent agreement.
### 3.2 Redshift and Luminosity Dependence
In Figure 5, we plot the difference $`\mathrm{\Delta }\mu \equiv \mu _{\mathrm{MLCS}}-\mu _{\mathrm{TF}}`$ between the distance moduli determined from MLCS and TF respectively, as a function of $`z`$. The error bars for $`\mathrm{\Delta }\mu `$ are derived from the uncertainties in the individual distance moduli except that, as described above, we have removed the contribution associated with the intrinsic dispersion of the SNe Ia sample. Formally, we use $`\sigma _{\mathrm{\Delta }\mu }^2=\sigma _{\mathrm{MLCS}}^2+\sigma _{\mathrm{TF}}^2-2\sigma _{\mathrm{corr}}^2`$ to calculate the error bars shown in Figure 5. Although the data are somewhat scattered at both high and low $`z`$, Figure 5 shows that the MLCS and TF methods agree rather well at low $`z`$, apart from significant dispersion ($`\sigma \approx 0.2`$ mag), but there are hints of disagreement at large $`z`$, where the dispersion, at least, appears larger, and the mean may also be shifted.
While it is possible that the appearance of Figure 5 at large $`z`$ merely reflects small number statistics, Figure 6 suggests that the incompatibility between TF and MLCS could be systematic. In Figure 6, we plot $`\mathrm{\Delta }\mu `$ versus $`M_B^{\mathrm{AV}}`$, an estimated absolute magnitude, defined by
$$M_B^{\mathrm{AV}}=(M_B^{\mathrm{MLCS}}+M_B^{\mathrm{TF}})/2$$
(4)
where
$`M_B^{\mathrm{MLCS}}`$ $`=`$ $`m_B^{\mathrm{MLCS}}-\mu _{\mathrm{MLCS}}-A_B^{\mathrm{MLCS}}`$
$`M_B^{\mathrm{TF}}`$ $`=`$ $`m_B^{\mathrm{TF}}-\mu _{\mathrm{TF}}-A_B^{\mathrm{TF}}`$ (5)
and $`A_B^{\mathrm{MLCS}(\mathrm{TF})}`$ is an estimate of the extinction due to the host galaxy in the MLCS (TF) correction scheme. For $`z>0.15`$, R98 provided all of the information necessary to calculate $`M_B^{\mathrm{MLCS}}`$, $`M_B^{\mathrm{TF}}`$ and hence $`M_B^{\mathrm{AV}}`$, but at low $`z`$, $`M_B`$ was only given for a subset of the SNe Ia in Hamuy et al. (1996a). (The zero point reference for $`M_B`$ may be somewhat different for the high redshift and low redshift data, which may account for the fact that in Figure 6, the low redshift supernovae seem to be, on average, less luminous by about 0.5 magnitudes. Our conclusions are robust against a shift in the zero point of the magnitude scale for the low redshift supernovae.)
According to Figure 6, the difference between $`\mu _{\mathrm{MLCS}}`$ and $`\mu _{\mathrm{TF}}`$ appears to be correlated with the estimated intrinsic brightness, $`M_B^{\mathrm{AV}}`$, at high $`z`$, but not at low $`z`$. (Recall that the error bars on $`\mathrm{\Delta }\mu `$ are overestimates, as explained above.) A similar correlation is evident if $`\mathrm{\Delta }\mu `$ is plotted against $`\mathrm{\Delta }_{\mathrm{MLCS}}`$ ($`\mathrm{\Delta }_{\mathrm{TF}}`$), the difference in maximum absolute magnitudes for the observed SNe Ia and the fiducial SNe Ia according to the MLCS (TF) method. (R98 tabulates $`\mathrm{\Delta }_{\mathrm{MLCS}}`$ and $`\mathrm{\Delta }_{\mathrm{TF}}`$ for all SNe Ia in their sample.) Figure 6 suggests that, at high $`z`$, one of the analysis schemes, MLCS or TF, either under-corrects or over-corrects for the luminosity variations in the SNe Ia sample. Since no such systematic trend is evident at low $`z`$, we cannot know which method, if either, yields the more accurate value of distance modulus. It is relatively uncontroversial to say that the two methods are not identical, either at low or high $`z`$, and hence must probe SNe Ia physics in slightly different, and as yet ill-understood, ways (Höflich and Khokhlov 1996; Höflich, Wheeler and Thielemann 1998). Thus, the indications of $`z`$-dependence implied by Figures 5 and 6, while still based on relatively few events, are not especially surprising.
A worrisome feature of Figures 5 and 6 is that imperfect corrections for luminosity variations can alter the conclusions about $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ that we draw from these data. To illustrate this point, we computed $`1\sigma `$ confidence contours in ($`\mathrm{\Omega }_\mathrm{\Lambda },\mathrm{\Omega }_M`$) space with separate fits to intrinsically bright and intrinsically dim SNe Ia; the results are shown in Figure 7. (In preparing Figure 7, we have included a contribution to the uncertainty arising from dispersion in galaxy redshift using the technique described in R98.) The separation into ‘bright’ and ‘dim’ was somewhat arbitrary, and we have verified that making different choices does not affect the overall conclusion. (For the plots shown, we have chosen $`M_B^{AV}<-19.45`$ as intrinsically bright and $`M_B^{AV}>-19.45`$ as intrinsically dim for MLCS and TF data. For the SF data, we separated intrinsically bright from intrinsically dim using $`\alpha (s-1)>0`$ and $`\alpha (s-1)<0`$, respectively. Note that $`\alpha (s-1)`$ can be calculated from the information in Tables 1 and 2 of P98.) The $`1\sigma `$ confidence level contours for the combined data (all $`M_B^{AV}`$) are also shown as dashed curves. Figure 7 indicates a systematic difference between the cosmology favored by intrinsically bright versus intrinsically dim SNe Ia when the MLCS method is used; the effect is much less pronounced for TF and seems to be of the opposite sign for the SF method. The trend may be understood if the MLCS (SF) method tends to overestimate (underestimate) the luminosities of intrinsically bright SNe Ia at high redshift. Such a trend for the MLCS data is also consistent with Figure 6.
The set of plots in Figures 2 through 7 indicates that the analysis methods disagree in their inferences of $`\mu `$, $`A`$, and $`\mathrm{\Delta }`$ at a level that is not covered by the quoted errors. We can only speculate on the sources of the discrepancies. However, until these methods are understood more systematically, it will be difficult to avoid assigning additional systematic errors to the measured distance moduli, with sizes that reflect the systematic differences between the methods, and this will weaken the statistical significance of the results substantially.
### 3.3 Validity of Phillips Relations
So far we have been investigating the light curve fitting methods as possible sources of systematic error. Potentially, there are other effects that can mimic cosmology and that are extremely difficult to constrain with the present data. The most pernicious, discussed at some length by the observers themselves, is evolution of the SN Ia population. It is extremely difficult to put reliable quantitative limits on evolution, and it cannot be excluded conclusively using the currently available spectral and color information. Furthermore, there is already some evidence in the current data that the high redshift sample does not have the same properties as the low redshift sample.
The strongest evidence that the lightcurve corrections improve our knowledge of the SN absolute magnitude would be a demonstration that they reduce the dispersion of the SN distance moduli about the best-fit cosmology. (For example, Riess, Press & Kirshner 1996 showed that MLCS reduces the dispersion about Hubble’s Law for low $`z`$ SNe Ia.) To test this, we have compared the dispersion between the data and the predictions of the best-fit cosmology with and without the corrections, $`\mathrm{\Delta }_i`$, inferred from the light curve fitting. (We redetermine the best-fit cosmology when we remove the corrections.) We adopt the quantity
$$D^2=\frac{1}{N}\sum _i[\widehat{\mu }_i-f(z_i;H_0,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })]^2$$
(6)
as a measure of the dispersion, where $`\widehat{\mu }_i`$ is the estimated distance modulus for SN $`i`$, $`z_i`$ is its redshift, the function $`f(z)`$ is defined by equation (1), and $`N`$ is the number of SNe Ia in the sample. We compute $`D`$ separately for high and low $`z`$; the results are given in Table 1.
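Computing $`D`$ is elementary; a minimal sketch (the array names are ours) is:

```python
# Dispersion statistic of eq. (6).  mu_hat, z, and Delta are arrays for one
# SN sample; f_model is the best-fit f(z; H0, Omega_M, Omega_Lambda).
import numpy as np

def dispersion(mu_hat, z, f_model):
    resid = mu_hat - f_model(z)
    return np.sqrt(np.mean(resid**2))

# Removing the corrections means replacing mu_hat_i = m_i - (M0 + Delta_i)
# by m_i - M0, i.e. adding Delta_i back -- and refitting the cosmology:
# D_corr   = dispersion(mu_hat, z, f_best)
# D_uncorr = dispersion(mu_hat + Delta, z, f_best_refit)
```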
For both MLCS and TF, which are calibrated on low $`z`$ SNe Ia, we see that the dispersion of the low redshift data is reduced substantially by incorporating the corrections derived from the relation between light curve shape and luminosity at maximum brightness. At high redshift, no such improvement is seen. The dispersion of the high z data about the best fit cosmology is virtually unchanged by the incorporation of either the MLCS or TF corrections.
For SF, there is little evidence from Table 1 that the corrections reduce the dispersion in the data at all. Recall that in the SF parameterization, the relation between light curve width and luminosity corrections is parameterized by $`\mathrm{\Delta }_{SF}=\alpha (s-1)`$, where $`\alpha `$ is inferred from a global fit to the data at all redshifts. As was shown in Figure 2, the corrections $`\mathrm{\Delta }_{SF}`$ are quite small, so it is not surprising that they do little to reduce the dispersion in the data. As stated before, the SF method finds little, if any, correlation between light curve width and absolute luminosity when averaged over all redshifts. What is startling is that the low redshift sample used in P98 is almost identical to the “peak subsample” of Hamuy et al. (1996a). As detailed in that reference, that low redshift sample does show a significant correlation between light curve width and peak luminosity. If a strong correlation is present in the low redshift sample and only a very weak correlation is evident in the full sample, one is led to suspect that the correlation is not present in the high redshift sample; the large number of high-redshift SNe leverages the joint fit.
## 4 Accounting For Possible Evolution
Both R98 and P98 assume implicitly that the same light curve fitting methods may be applied at all redshifts sampled. This assumption is only valid if the light curve shape is correlated with peak luminosity in the same way at both high and low redshift. Given the evidence we have presented that this may not be true, which indicates, at least circumstantially, that the SNe Ia population evolves, we feel it is necessary to explore whether the data published so far are actually able to distinguish the effects of evolution from those of cosmology.
Such effects fall under the rubric of “systematic errors”—because they are not “random,” their effects on one’s final inferences are difficult to account for in the conventional frequentist approach to statistical inference. However, both teams have adopted the Bayesian approach for their final analyses (though not for all intermediate stages of their analyses). As noted by Jeffreys (1961), the Bayesian approach is particularly apt for studying the effects of systematic error because of its broader notion of uncertainty. A Bayesian probability density describes how probability is distributed among the possible values of a parameter, rather than how values of the parameter are distributed among some hypothetical population. This permits statistical calculations with quantities that are not “random” in the frequentist sense. In particular, as Jeffreys noted, systematic errors are treatable simply by introducing parameterized models for the errors and marginalizing (integrating over) the extra parameters to obtain one’s final inferences.
This procedure, when followed blindly, has the potential to weaken one’s conclusions unjustifiably. For example, one could simply introduce a systematic dependence that is identical to the physical dependence one is studying, but with a duplicated set of parameters. This duplication would prevent useful constraints from being placed on the parameters, since any measured effect could be “blamed” on the duplicated systematic dependence. Thus Jeffreys emphasized the need to compare models with and without systematic error terms using the ratio of the model probabilities, the odds favoring one model over another. The odds can be written as the product of the prior odds (expressing information from other data, or possibly a subjective comparison of the models) and a Bayes factor determined entirely by the data, the models, and the sizes of the model parameter spaces. If we know or strongly believe a systematic effect to be present without consideration of the new data before us, then obviously the systematic error model should be used; the prior odds would lead us to this conclusion even if the Bayes factor is indecisive. If we have no strong prior evidence for a systematic error, one takes the prior odds to be unity and relies on the data alone for determining if the effect is present, taking the Bayes factor to be the odds. An appealing aspect of Bayesian model comparison is that the Bayes factor implements an automatic “Ockham’s razor” that penalizes models for the sizes of their parameter spaces. Thus model complexity is accounted for by the Bayes factor. Except in unusual cases, needlessly increasing a model’s complexity by simply duplicating terms prevents the Bayes factor from favoring the more complicated model. We provide a brief review of Bayes factors in Appendix A; standard references reviewing their use are Kass and Raftery (1995) and Wasserman (1997).
### 4.1 Systematic Error in $`H_0`$
To illustrate this approach, we show how it can be used to quantitatively account for systematic error introduced by the uncertain Cepheid distances used to infer $`M_0`$ in the MLCS and TF methods. (The SF analysis used a “Hubble-constant-free” parameterization and thus could avoid explicit treatment of $`M_0`$ and the Hubble constant.) We write the true value of $`M_0`$ as $`(\widehat{M}_0+\delta )`$, where $`\widehat{M}_0`$ is the estimate used for calculating $`\widehat{\mu }_i`$, and the new term, $`\delta `$, represents the constant (but unknown) error introduced by using Cepheid data to calculate $`\widehat{M}_0`$. We describe the likelihood function for analyzing the SNe Ia data in some detail in Appendix B. The final (approximate) likelihood is equivalent to what one would find from modelling the tabulated $`\widehat{\mu }_i`$ estimates according to,
$`\widehat{\mu }_i`$ $`=`$ $`f(z_i)+\delta +n_i`$ (7)
$`=`$ $`g(z_i)-\eta +\delta +n_i`$ (8)
where $`f(z_i)`$ is the cosmological distance modulus relation defined in equation (1), and $`n_i`$ is a random error term whose probability distribution is a Gaussian with zero mean and standard deviation $`\sigma _i`$. In the second line, we have separated out the $`H_0`$ dependence of $`f(z_i)`$ into
$$\eta \equiv 5\mathrm{log}\left(\frac{h}{c_2}\right)-25,$$
(9)
where $`H_0=h\times 100\text{km}\text{s}^{-1}\text{Mpc}^{-1}`$, and $`c_2`$ is the speed of light in units of $`10^2\text{km}\text{s}^{-1}`$; $`g(z_i)`$ contains the remaining $`\mathrm{\Omega }_M`$- and $`\mathrm{\Omega }_\mathrm{\Lambda }`$-dependent part of $`f(z_i)`$.
It is clear from equation (8) that $`\eta `$ (and thus $`H_0`$) is degenerate with $`\delta `$; we cannot hope to learn about one without independent knowledge of the other. But $`\delta `$ is constrained by our knowledge of the uncertainty of the Cepheid distance scale. In particular, R98 summarize the uncertainties as introducing an error with a standard deviation of $`d=0.21`$ magnitudes (corresponding to $`10`$% uncertainty in $`H_0`$). We account for this by introducing a prior distribution for $`\delta `$ that is a zero-mean Gaussian with standard deviation $`d`$.
Our model now has four parameters, $`\delta `$, $`h`$, $`\mathrm{\Omega }_M`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. The likelihood function for the data is the product of $`N`$ Gaussian distributions specified by equation (8) and is proportional to the exponential of a familiar $`\chi ^2`$ statistic. The full joint posterior distribution is the product of this and priors for the parameters, including the informative prior for $`\delta `$. (The priors for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ we take to be flat over the region shown in our plots, excluding the “No Big Bang” region; the prior for $`h`$ we take to be flat in the logarithm.) We can summarize our inferences for the cosmological parameters by integrating over $`\delta `$; this can be done analytically and is described in Appendix B. If we want to focus on the conclusions for $`h`$, we numerically integrate over $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. The result, for the MLCS data, is the marginal distribution for $`h`$ shown as the rightmost solid curve in Figure 8. The best-fit value is $`h=0.645`$, and a 68.3% credible region has a half-width $`\sigma _h=0.063`$. This is approximately equal to the “total uncertainty” on $`H_0`$ estimated by R98 using standard “rules of thumb” for accounting for systematic error; we have shown how this estimate could be justified by a formal calculation. For the TF data, the marginal posterior is plotted as the leftmost solid curve in Figure 8, and $`h=0.627\pm 0.062`$.
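Operationally, marginalizing over $`\delta `$ amounts to a rank-one update of the noise covariance: the data remain Gaussian with covariance $`\mathrm{\Sigma }_{ij}=\sigma _i^2\delta _{ij}+d^2`$. A minimal numerical sketch of this (our paraphrase of the analytic Appendix B calculation, not the authors’ code) is:

```python
# Marginalizing a constant offset delta ~ N(0, d^2) out of the Gaussian
# likelihood is equivalent to adding a fully correlated term d^2 to every
# entry of the noise covariance (a rank-one update).
import numpy as np

def marginal_loglike(mu_hat, sigma, f_z, d=0.21):
    """log p(data | Omega's, h) with the Cepheid zero-point error delta
    integrated out analytically; f_z holds the model values f(z_i)."""
    r = mu_hat - f_z                      # residuals vs. the model
    C = np.diag(sigma**2) + d**2          # Sigma_ij = sigma_i^2 delta_ij + d^2
    _, logdet = np.linalg.slogdet(C)
    chi2 = r @ np.linalg.solve(C, r)
    return -0.5 * (chi2 + logdet + len(r) * np.log(2 * np.pi))
```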
Of greater current interest are the implications for the density parameters. The marginal distribution for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is found by integrating out $`\delta `$ and $`h`$. This can be done analytically (see Appendix B). Contours of the resulting distributions, found using both the MLCS and TF data, appear in Figure 9. They are identical to contours found using a model without $`\delta `$, and essentially reproduce the results reported in R98 (minor differences result from our omission of the “snapshot” SNe).
In this case, we know that $`M_0`$ has been estimated using the Cepheid data, and that this estimate has systematic error. Formally, the prior odds favoring the model with $`\delta `$ over one with $`\delta =0`$ is thus infinite. The Bayes factor comparing these models is exactly equal to one (this is because the SNe Ia data can tell us nothing about $`\delta `$; see Appendix A for discussion of this property of Bayes factors), so the posterior odds is equal to the prior odds. Since we know these errors to be present, we take this $`\delta `$ model to be our “default” model when calculating subsequent Bayes factors in this section.
We conclude our discussion of this model by summarizing the evidence in the data for a nonzero cosmological constant, presuming the $`\delta `$ model to be true. In R98 and P98, the marginal posterior probability that $`\mathrm{\Omega }_\mathrm{\Lambda }>0`$ was presented as such a summary; this probability was found to equal 99.6% (2.9$`\sigma `$), 99.99% (3.9$`\sigma `$), and 99.8% (3.1$`\sigma `$) in the MLCS, TF, and SF analyses, respectively, apparently indicating strong evidence that $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is nonzero. But this quantity is not a correct measure of the strength of the evidence that $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$. This probability would equal unity if negative values of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ were considered unreasonable a priori, yet presumably even in this case one would not consider the data to demand a nonzero cosmological constant with absolute certainty. The correct quantity to calculate is the odds in favor of a model with $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ versus a model with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. Considering such models to be equally probable a priori, this is given by the Bayes factor comparing these models. (These Bayes factor calculations can also be viewed as providing the posterior probability that $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ by putting a prior probability of 0.5 on the $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ line; in the calculations reported in R98 and P98, this line has zero prior probability, since only finite intervals in $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ have nonzero prior probability in their analyses.) We find $`B=5.4`$ using the MLCS data and $`B=6.8`$ using the SF data, each indicating positive but not strong evidence for a nonzero cosmological constant (presuming there is no evolution). The TF data give $`B=86`$, indicating strong evidence for a nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (again, presuming no evolution). Without clear criteria identifying one method as superior to the others, the data are equivocal about a nonzero cosmological constant, even without accounting for the effects of possible evolution.
Similarly, R98 report the number of standard deviations that the $`\mathrm{\Omega }_M=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ point is away from the best-fit $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ as a measure of the evidence against the hypothesis that matter provides the closure density; they state this hypothesis is ruled out at the $`7\sigma `$ and $`9\sigma `$ levels using the MLCS and TF methods, respectively. Again, a proper assessment of the hypothesis that $`\mathrm{\Omega }_M=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ requires that one give this hypothesis a finite, nonzero prior probability. For the MLCS method, the Bayes factor favoring a model with any $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ over one with $`\mathrm{\Omega }_M=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ is $`B=2.3\times 10^4`$, so that with equal prior odds the probability for the latter model is $`p=1/(1+B)\approx 5\times 10^{-5}`$ ($`4.5\sigma `$). For the TF method, we find $`B=2.1\times 10^7`$, so that $`p\approx 5\times 10^{-8}`$ ($`5.8\sigma `$). These are small probabilities and indicate very strong evidence against the simpler model, but they are much larger than the probabilities associated with $`7\sigma `$ and $`9\sigma `$ significances ($`2\times 10^{-11}`$ and $`3\times 10^{-18}`$, respectively, for two degrees of freedom). The incorrect summary statistics used in the previous analyses have exaggerated the evidence for a nonzero cosmological constant, irrespective of whether or not one considers the effects of evolution.
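For readers who wish to reproduce numbers of this kind, a Bayes factor between a free-parameter model and a fixed point reduces to a ratio of prior-averaged likelihoods; a minimal sketch on a uniform parameter grid (our construction, not the authors’ code) is:

```python
# Bayes factor comparing "any (Omega_M, Omega_Lambda)" with a flat prior
# over the plotted region against the fixed point (Omega_M, Omega_Lambda)
# = (1, 0): the prior-averaged likelihood divided by the point likelihood.
import numpy as np

def bayes_factor(loglike_grid, loglike_point):
    """loglike_grid: log-likelihood on a uniform (Omega_M, Omega_Lambda)
    grid, already marginalized over h and delta; loglike_point: the same
    quantity evaluated at (1, 0)."""
    m = loglike_grid.max()                       # subtract max to avoid overflow
    log_evidence = m + np.log(np.exp(loglike_grid - m).mean())
    return np.exp(log_evidence - loglike_point)
```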
### 4.2 Models With Evolution
Without a detailed physical idea of the cause of evolution, we cannot explore truly realistic models. Instead we consider two illustrative examples. We first consider a model (Model I) that generalizes the $`\delta `$ model just described by adding an additional offset, $`ϵ`$, for the high redshift SNe; we apply this model only to data from R98. As a model of physical evolution, this is certainly too simple, but it is illustrative since, for the R98 sample, the observed high redshift SNe Ia are all in a fairly narrow band in redshift near $`z\sim 0.5`$. Essentially this model merely permits differences in luminosities between $`z\sim 0.1`$ and $`z\sim 0.5`$ as a consequence of evolution. On a purely phenomenological level, the model might be considered more realistic because the low and high redshift SNe are not treated equally in the R98 analysis: the MLCS and TF relations are calibrated using only low redshift SNe. Thus this model can be understood as allowing for a systematic offset when extrapolating the methods beyond the training set. For this model, we use the same prior for $`\delta `$ as in our default model (zero-mean Gaussian with standard deviation $`d=0.21`$ mag). The prior for $`ϵ`$ we also take to be a zero-mean Gaussian but with a different width $`e`$. The prior width, $`e`$, can be viewed as a description of the scale of errors we might expect from extrapolating low redshift properties to high redshift.
Physically, we might expect evolution to lead to continuous variation of SN Ia properties with redshift. Also, the P98 analysis uses low and high redshift SNe Ia together in calibrating the luminosity/decline rate relation, so there is no clear separation of their data into low and high $`z`$ subsamples. Thus, Model I is inappropriate for phenomenological modeling of systematic effects from lightcurve fitting of P98 data. Consequently, we consider a second model (Model II) which assumes that the intrinsic luminosities of SNe Ia scale like a power of $`1+z`$ as a result of evolution. This second model corresponds to replacing equation (8) with
$$\widehat{\mu }_i=g(z_i)-\eta +\delta +\beta \mathrm{ln}(1+z_i)+n_i,$$
(10)
where $`\delta `$ again represents Cepheid uncertainty in $`M_0`$ (relevant only when we apply this model to MLCS or TF data), and $`\beta `$ parameterizes the evolution. We use a Gaussian prior for $`\beta `$ with zero mean and standard deviation $`b`$.
For both models, we explore the dependence of the results on the prior width ($`e`$ and $`b`$) to see how external constraints on evolution (presently unknown) could affect the analysis. We examine values that allow evolutionary changes of up to a few tenths of a magnitude for sources with $`z\sim 1`$. These characteristic magnitude shifts are comparable to the intrinsic dispersion seen in low redshift SNe Ia (Schmidt et al. 1998), which may be taken as a rough indication of the range of variation of peak magnitude with physical conditions in the explosions. Some current theoretical studies of possible sources of $`z`$-dependent variations in SNe Ia luminosities also find magnitude changes of this size to be reasonable (see, e.g., Höflich, Wheeler, and Thielemann 1998; Domínguez et al. 1999).
The new parameters in both models appear linearly in the model equations and can thus be marginalized analytically. Appendix B describes the calculations. Reality could be and probably is far more complicated than either model, but the sparsity of the present data does not justify consideration of more sophisticated models.
#### 4.2.1 Model I
Figure 10$`a`$ shows contours of the marginal density for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ using Model I to analyze the MLCS data and taking the prior width for $`ϵ`$ to be $`e=0.1`$ mag. Figure 10$`b`$ shows similar results using the TF data. It is clear from these figures that the presence of a redshift-dependent shift of order $`e`$ greatly weakens our ability to constrain $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ from the SNe Ia data. The Bayes factor for this model over the default $`\delta `$ model is $`\sim 1.1`$ for both MLCS and TF; the data alone are indecisive about whether such a redshift-dependent error is present. (Footnote 10: The $`\chi ^2`$ values for the default fits are already acceptable, so one might worry that the more complicated models are “overfitting.” But the maximum likelihoods for models I and II are only slightly greater than those found with the default model. Bayes factors account for overfitting, and overfitting is not playing a role here. The operation of Bayes factors is discussed further in Appendix A.) Presuming it is present, the Bayes factor favoring nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ over $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ is reduced significantly from what is found using the default model; it is $`\sim 1.1`$ for MLCS and $`\sim 3.8`$ for TF.
These results are sensitive to our knowledge of the evolution. Figures 10$`c`$ and 10$`d`$ show the MLCS and TF results again, but this time with $`e=0.2`$ mag; the credible regions have grown even larger in size. Now the Bayes factor for Model I over the default $`\delta `$ model is $`\sim 1.3`$ for MLCS and $`\sim 1.2`$ for TF; the data remain indecisive about the presence of an evolutionary offset. Presuming it is present, the Bayes factor favoring nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ over $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ is $`\sim 0.7`$ for MLCS (i.e., slightly favoring $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) and $`\sim 1.2`$ for TF.
As one might expect from these results, the most likely value of $`ϵ`$ tends to be positive, making the more distant sources dimmer than the nearer ones due to evolution rather than cosmology. For example, in Figures 10$`c`$ and 10$`d`$, when $`\mathrm{\Omega }_M=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ (a point within the 95.4% credible regions), $`ϵ=0.31\pm 0.06`$ for MLCS and $`ϵ=0.17\pm 0.06`$ for TF.
The constraints placed on $`H_0`$ by the SNe Ia data arise mostly from the low redshift objects, so one would not expect allowance for evolution to drastically affect the $`H_0`$ inferences. The analysis bears this out. The dashed curves in Figure 8 show the marginal distributions for $`h`$ based on the MLCS and TF data using Model I with $`e=0.2`$; they differ little from the distributions found using the reference model with no evolution. Similar results are found using Model II.
#### 4.2.2 Model II
Figure 11 shows results from analysis of the SF data using Model II. Figure 11$`a`$ shows contours of the marginal density for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ for the $`\beta =0`$ case as a reference; these contours essentially duplicate the results of Fit C in P98. Figure 11$`b`$ shows similar contours, but allowing for a nonzero $`\beta `$; the prior standard deviation for $`\beta `$ was $`b=0.25`$. Figure 11$`c`$ repeats the analysis with $`b=0.5`$. Again we find that the possibility of evolution significantly weakens the constraints on the density parameters, but if the amount of evolution can be bounded, useful limits might result. The Bayes factor for evolution vs. no evolution is $`\sim 1.0`$ for $`b=0.25`$ and $`\sim 1.1`$ for $`b=0.5`$, so the data alone are indecisive about the presence or absence of this type of evolution. We find similar results when using this model to analyze the MLCS and TF data.
Figure 12 shows how these findings depend on the prior uncertainty for $`\beta `$. The solid curve shows how the Bayes factor favoring a nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ over $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ depends on $`b`$; only for $`b\lesssim 0.1`$ does the Bayes factor remain near the value of 6.8 found assuming there is no evolution. The dashed curve shows the Bayes factor for Model II versus the default model with no evolution; for no value of $`b`$ in the range of the plot can the data clearly distinguish evolution from cosmology. This emphasizes the need to use information independent of the $`\mu `$-$`z`$ relation to constrain evolution.
#### 4.2.3 Flat Cosmologies
So far we have assessed the evidence for nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ by comparing models with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ to models with arbitrary $`\mathrm{\Omega }_\mathrm{\Lambda }`$, as was done in the R98 and P98 analyses. However, many cosmologists would consider flat models, with $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_M`$, to be of special relevance (e.g., because of inflationary arguments). We have thus analyzed models constrained in this way, using our default model and models I and II.
Figure 13 shows the marginal posterior distributions for $`\mathrm{\Omega }_M`$ (and, equivalently, for $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_M`$) presuming a flat cosmology. The three panels show analyses of the distance moduli reported using the MLCS and TF data with Model I (top and middle, respectively), and using the SF data with Model II (bottom). The solid curves show results based on the default model; the short-dashed curves show results with a small amount of evolution allowed ($`e=0.1`$ or $`b=0.25`$), and the long-dashed curves show results with a larger amount of evolution allowed ($`e=0.2`$ or $`b=0.5`$). As in the previous cases, the Bayes factors comparing models with evolution to the default model are all nearly equal to one. Also, as was found before, accounting for the possibility of evolution significantly weakens the evidence for nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$. However, if one restricts attention to flat models, the evidence for nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is stronger than it is if one allows nonflat cosmologies. For the default model presuming no evolution, the Bayes factors favoring nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ over a flat model with $`\mathrm{\Omega }_M=1`$ are $`2.1\times 10^4`$ (MLCS), $`2.5\times 10^6`$ (TF) and $`5.0\times 10^3`$ (SF), much larger values than were found in the comparison using nonflat $`\mathrm{\Omega }_\mathrm{\Lambda }`$ models discussed above. But these values fall dramatically if one allows for evolution. For models with a small amount of evolution allowed, they decrease to 20, 48, and 14, respectively, indicating positive but not compelling evidence for nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$. For models with a larger amount of evolution allowed, they decrease further to 2.4, 2.5, and 2.3, indicating no significant evidence for nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$.
### 4.3 Simulations
Figure 14 elucidates why introducing the possibility of evolution so greatly weakens our ability to constrain $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. The thick solid curve shows $`g(z)`$ for the best-fit cosmology from fits to the SF data presuming no evolution ($`\mathrm{\Omega }_M=0.75`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=1.34`$). The dotted curve shows the same function for the flat $`\mathrm{\Omega }_M=1`$ ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) cosmology (which lies within the 68.3% credible region in Figure 11$`c`$), and the dashed curve shows the evolutionary component $`\beta \mathrm{ln}(1+z)`$ for the best-fit value of $`\beta `$ given this cosmology ($`\beta =0.83`$). The thin solid curve shows the sum of the dotted and dashed curves. Over the range of redshift covered by the data ($`z\lesssim 1`$) and even beyond, $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },\beta )=(0.75,1.34,0)`$ and $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },\beta )=(1,0,0.83)`$ are indistinguishable if one allows evolution of this type, unless one can determine $`\mu `$ to significantly better than the $`\sim 20`$% accuracy currently obtained at $`z\sim 1`$. However, we note that the models differ substantially at larger redshift (by about 0.8 mag), which offers some hope of discerning evolution. We caution, though, that the best fit values of $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ are likely to be different for data extending to $`z\sim 2`$, either with or without evolution, so the comparison is not truly apt, and moreover $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=(0.75,1.34)`$ might be considered intrinsically implausible by many cosmologists. To amplify this point, we also compare $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },\beta )=(1,0,0.83)`$ with another cosmology, $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=(0.3,0.7)`$, within the 68.3% credible region of the no-evolution analysis. As is shown by the bold, long-dashed curve in Figure 14, $`\mu `$ accuracies better than 10% out to $`z\sim 2`$ would be needed to distinguish these cosmologies from one another. We have systematically explored a wide range of cosmologies and found similar results: simple power law evolution can make widely disparate cosmologies appear remarkably similar. Put another way, the differences between cosmologies with various $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ are well mimicked by power-law evolution to redshifts beyond those currently accessible in supernova surveys. We emphasize that we did not choose the form of the evolution to produce this degeneracy; this is a standard phenomenological model for evolution. We have found similar behavior with another simple model for evolution consisting of a power law in lookback time.
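The comparison behind Figure 14 can be reproduced with a few lines of numerical integration. The sketch below is ours, not the authors’ code; the parameter values $`(0.75,1.34)`$, $`(1,0)`$, and $`\beta =0.83`$ are those quoted above, and the additive constant in the distance modulus is dropped because it is absorbed by the offset parameters in the fits.

```python
# A sketch (ours) of the comparison in Figure 14: the distance-modulus shape
# g(z) for a trial cosmology (additive constant dropped) plus the Model II
# evolution term beta*ln(1+z).
import numpy as np
from scipy.integrate import quad

def g_of_z(z, om, ol):
    """5*log10[(1+z)*S(z)] for density parameters (om, ol)."""
    ok = 1.0 - om - ol                                  # curvature parameter
    E = lambda zp: np.sqrt(om*(1+zp)**3 + ok*(1+zp)**2 + ol)
    chi = quad(lambda zp: 1.0/E(zp), 0.0, z)[0]
    if abs(ok) < 1e-8:
        S = chi                                         # spatially flat
    elif ok > 0:
        S = np.sinh(np.sqrt(ok)*chi)/np.sqrt(ok)        # open
    else:
        S = np.sin(np.sqrt(-ok)*chi)/np.sqrt(-ok)       # closed
    return 5.0*np.log10((1.0 + z)*S)

for z in [0.1, 0.5, 1.0, 2.0]:
    best = g_of_z(z, 0.75, 1.34)                        # no-evolution best fit
    evol = g_of_z(z, 1.0, 0.0) + 0.83*np.log(1.0 + z)   # flat model + evolution
    print(f"z = {z:3.1f}: difference = {evol - best:+.2f} mag")
# The raw difference stays within ~0.12 mag for z <~ 1 but grows to
# ~0.7-0.8 mag by z ~ 2, consistent with the discussion above.
```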
To assess how well evolution can mimic cosmology, we analyzed simulated data consisting of $`\mu `$ values with added Gaussian noise at redshifts that themselves had added Gaussian noise. The simulations were designed to roughly mimic possible future data like that reported in P98 (this simplified the analysis since $`H_0`$ need not be accounted for explicitly as it would have to be for data like those reported in R98). The redshifts of the first 16 SNe in our simulated data sets were chosen to be similar to those of the 16 low-redshift SNe in P98 ($`z\lesssim 0.1`$); the redshifts for the remaining simulated data were chosen randomly from a uniform distribution over some specified interval. We added redshift errors with a standard deviation of 0.002, and $`\mu `$ errors with standard deviations equal to those reported by P98 for the 16 low-$`z`$ SNe, and equal to 0.25 magnitudes for the high-$`z`$ SNe (a typical value for $`\mu `$ values reported in P98).
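A minimal sketch of this simulation recipe follows (ours, reusing `g_of_z` from the previous sketch); the low-redshift redshifts and error bars below are placeholders standing in for the P98 values, which are not tabulated here.

```python
# A sketch (ours) of the simulation recipe described above. The low-z inputs
# are placeholders; the high-z mu error of 0.25 mag is the typical P98 value
# quoted in the text.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_high, z_lo, z_hi, om, ol, beta):
    z_low = np.linspace(0.02, 0.10, 16)     # stand-in for the 16 P98 low-z SNe
    s_low = np.full(16, 0.15)               # stand-in low-z mu uncertainties
    z_high = rng.uniform(z_lo, z_hi, n_high)
    s_high = np.full(n_high, 0.25)
    z = np.concatenate([z_low, z_high])
    s = np.concatenate([s_low, s_high])
    mu = np.array([g_of_z(zi, om, ol) for zi in z]) + beta*np.log(1.0 + z)
    z_hat = z + rng.normal(0.0, 0.002, z.size)   # noisy redshifts
    mu_hat = mu + rng.normal(0.0, s)             # noisy distance moduli
    return z_hat, mu_hat, s

# e.g. the large evolving sample of Figure 15b: flat Omega_M = 1, beta = 0.5
z_hat, mu_hat, s = simulate(184, 0.3, 1.5, om=1.0, ol=0.0, beta=0.5)
```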
Figure 15 shows typical results. Here we simulated data from a flat, $`\mathrm{\Omega }_M=1`$ ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) cosmology with evolution described by Model II with $`\beta =0.5`$. Figure 15$`a`$ shows the results of an analysis assuming no evolution, with 38 simulated SNe redshifts in the interval $`[0.3,1]`$ (54 total simulated points). This corresponds to a sample size equal to that used in the P98 analysis and extending over a similar range of redshift. The cross indicates the best-fit parameter values, the dot indicates the true values, and the contours bound credible regions of various sizes. One would reject the true model as being improbable if evolution is ignored. Figure 15$`b`$ shows a similar plot, with the number of high-$`z`$ SNe increased so the total sample size is now 200, with the high-$`z`$ points now spread over $`[0.3,1.5]`$. The contours have shrunk considerably, converging around a point well away from the truth. In both figures, the best-fit point has an excellent $`\chi ^2/\nu `$ ($`53.6/52`$ for the small data set, $`201/198`$ for the large one). Evolution mimics cosmology so well that standard “goodness of fit” reasoning can lead one to conclude, mistakenly, that pure cosmology (with no evolution) is an adequate description of data of this quality even when substantial evolution is present.
Figure 15$`c`$ shows the results of an analysis of the larger data set using a model that includes evolution; the marginal posterior (with $`\beta `$ marginalized) is shown. The credible regions now contain the true model, but they are large even for a data set four times the size of the currently published surveys, and extending to significantly higher redshift. The Bayes factor is of order unity, showing that data of this quality are not capable of distinguishing between models with and without evolution. This is further testimony to the approximate degeneracy between cosmology and evolution, at least at $`z\lesssim 1.5`$.
The extent to which evolution corrupts the results depends both on the true cosmology and on the amount of evolution allowed. Independent constraints on the amount of evolution could thus play an important role in allowing useful constraints to be placed on the cosmology. They would enter the analysis via the prior for $`\beta `$. Comparison of Figures 11$`a`$ through 11$`c`$ shows how constraints on the amount of evolution can affect one’s final inferences.
## 5 Conclusions
Systematic uncertainty may enter the analysis of any data set as a result of real physical effects that are not accounted for explicitly. As an example, the use of observations of distant galaxies to measure the cosmological deceleration parameter had to confront the systematic errors introduced by the fact that not only are galaxies not standard candles, but their luminosities also evolve with time (e.g., Tinsley 1968; Weinberg 1972; Ostriker & Tremaine 1975; Tinsley 1977; Sandage 1988; Yoshii & Takahara 1988; Bruzual 1990; Peebles 1993). A principal goal of this paper has been to present a study of the systematic error due to evolution in attempts to determine $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ from observations of SNe Ia.
One focus of this paper has been to see if there are indications that the SNe Ia population has evolved from $`z\sim 0.5`$–$`1`$ to $`z\ll 1`$. We have presented two arguments that this might be so. The first is that the peak luminosities estimated for individual SNe Ia by two different methods, MLCS and TF, are not entirely consistent with one another at high redshifts, $`z\gtrsim 0.5`$. We have asserted that the two methods very likely sample slightly different aspects of the SN Ia mechanism, and should not be expected to agree completely. If evolution were entirely absent, though, the differences between them should not depend on redshift, contrary to the admittedly sketchy evidence of the data. A second hint that SNe Ia evolve with redshift is that while the three luminosity estimators, SF as well as MLCS and TF, reduce the dispersion of distance moduli about best fit models at low redshift, they do not at high redshift.
These studies were intended to give us impetus to pursue the more fundamental point of this paper, namely that evolution must be considered possible, even if there are no “smoking guns” that seem to require it. Ideally, one should attempt to constrain the parameters of an evolutionary model at the same time as determining the parameters of the cosmological model. As we stated at the outset, changes in peak SN Ia magnitude of order 0.1 magnitudes out to $`z\sim 1`$ would alter the ranges of acceptable cosmological models substantially. The dispersion of SNe Ia peak magnitudes at low $`z`$ is approximately 0.3-0.5 mag (Schmidt et al. 1998), which might indicate a plausible range of variation for diverse physical conditions. Using theoretical models, Höflich, Wheeler & Thielemann (1998) argue that a similar range of variation of peak luminosities could arise as a consequence of changes in composition which might be due to evolution.
To get an idea of how allowing for the possibility of evolution would affect one’s ability to constrain cosmological parameters, we considered two different models. In one, we assumed that there is a constant magnitude shift between low and high redshift. We also considered a model in which the peak magnitudes of SNe Ia evolve continuously, with $`\delta m(z)=\beta \mathrm{ln}(1+z)`$. In applying these models, prior assumptions about the amplitude of possible magnitude changes of SNe Ia between low and high redshifts are needed to evaluate the systematic error that might be introduced. At present, little is known, so our calculations allow a range of possibilities. To do this, we assume Gaussian prior probability distributions for the (unknown) parameters of the evolutionary models. These priors express a preference for no evolution, but have adjustable standard deviations that encapsulate prior notions about how large possible evolutionary effects might be. The results presented in P98 and R98 correspond to setting these standard deviations to zero; i.e., no evolution at all. We adopt a more conservative viewpoint, and present results for different choices of the ranges of magnitude evolution that are allowed a priori. Significantly, when we permit peak magnitude changes out to $`z\sim 1`$ comparable to (and even somewhat smaller than) the range observed for low redshift SNe Ia (Schmidt et al. 1998), the implied systematic uncertainty in $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ becomes so large that the data cannot constrain these cosmological parameters usefully. However, our ability to determine $`H_0`$ is virtually unaffected by evolution.
In order to assess the extent to which the data favor models allowing evolution over ones without evolution, we computed Bayes factors. The Bayes factor between classes of models with and without luminosity evolution is equivalent to the odds ratio between them if there is no a priori reason to prefer one over the other. In all cases, we found that the Bayes factors are of order unity, which means that the data themselves do not favor either model. If we accounted for a prior prejudice that evolution does occur, the odds would disfavor models in which the SNe Ia population has the same properties at all redshifts.
The two models we have considered illustrate how well evolution can mimic cosmology. The less realistic model merely allowed a shift in the magnitudes of high $`z`$ SNe Ia relative to low $`z`$ SNe Ia by a fixed (but uncertain) amount, $`\delta m`$. Since the SNe Ia in the R98 sample were predominantly at $`z\sim 0.3`$–$`0.5`$, the cosmological magnitude shift (relative to Hubble’s law or any other fiducial cosmology) varies little over the entire redshift range they span. Clearly, for the high $`z`$ SNe Ia in this sample, one only knows that there is a total magnitude shift between $`z\sim 0.1`$ and $`z\sim 0.5`$, not how much is due to cosmology and how much to evolution. The characteristic magnitude shifts due to evolution needed for a cosmological model with $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=(1,0)`$ are $`\sim 0.2`$–$`0.3`$ mag. Ultimately, a model in which there is simply a constant magnitude difference between low and high $`z`$ SNe Ia should fail to model data spanning a large range of redshifts.
More daunting is the success of models allowing a continuous magnitude shift, $`\delta m(z)=\beta \mathrm{ln}(1+z)`$. While it is unsurprising that such models would be approximately degenerate with cosmology at low $`z`$, where the combined magnitude shift, relative to Hubble flow, is $`[1.086(1-\mathrm{\Omega }_M/2+\mathrm{\Omega }_\mathrm{\Lambda })+\beta ]z`$ (e.g., Weinberg 1972), it is remarkable that a continuous magnitude shift with this simple form cannot be discerned out to at least $`z\sim 1`$. Our simulations show that even if there is truly no evolution, so that reality corresponds to certain values of $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ with $`\beta =0`$, models with $`\beta \ne 0`$ and $`(\mathrm{\Omega }_M^{\mathrm{eff}},\mathrm{\Omega }_\mathrm{\Lambda }^{\mathrm{eff}})\ne (\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ yield apparent magnitudes that are indistinguishable from the truth, with distance modulus differences $`\lesssim 0.1`$ mag. Differences between select cosmological models may be larger at higher $`z`$, but often remain $`\lesssim 0.1`$ mag out to $`z\sim 2`$.
We also use simulations to explore the converse situation where we neglect evolution in the analysis of samples of evolving SNe Ia. As an example, we simulated a set of 200 SNe Ia distance moduli, including 184 high-$`z`$ SNe with redshifts uniformly distributed over $`0.3\le z\le 1.5`$, in a cosmological model with $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },\beta )=(1.0,0.0,0.5)`$. We then analyzed the data with evolution neglected entirely. The result was that given enough SNe Ia, the analysis would pick out a small range of “allowed” values of $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$, but centered around incorrect values. The true cosmology was well outside the $`3\sigma `$ credible region for these simulations, yet the (incorrect) best-fit model would be judged excellent by a standard $`\chi ^2`$ goodness-of-fit test.
What is needed to separate evolution from cosmology is both detections of greater numbers of SNe Ia at high redshift with detailed measurements of light curves and spectra, and, equally important, a better physical understanding of the SN Ia process. In particular, one would like to be able to link the Phillips relations, lightcurve risetimes and spectra, uniquely, to internal conditions in the explosions themselves, in order to understand how they might evolve with redshift (see, e.g., von Hippel, Bothun, & Schommer 1997; Höflich, Wheeler, and Thielemann 1998; Domínguez et al. 1999). This would allow construction of realistic, not phenomenological, models for evolution, and one might hope to be able to constrain the parameters of these models along with cosmological ones. The analogue in galactic astronomy is the use of population synthesis models to study the cosmological evolution of the luminosity function, which might permit, given enough data, simultaneous fits for cosmological parameters (Yoshii & Takahara 1988, Bruzual 1990). Such detailed physical modelling might lead to a detailed, quantitative connection between the peak luminosities of SNe Ia and their spectra, which would allow spectral information to be used quantitatively in fitting for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. R98 and P98 have argued, using the spectral data, that there is no compelling evidence for evolution, but that does not translate into a convincing argument against evolution unless the salient features of the source spectra can be connected unambiguously to peak luminosity. In fact, since this paper was submitted, Riess, Filippenko, Li & Schmidt (1999) have claimed that the rise times of low and high redshift SNe Ia are different even though earlier studies found no comparably strong evidence for spectral evolution.
In the end, what all cosmologists want to know is the probability that the cosmological constant is nonzero. The Bayes factor provides straightforward mathematical machinery for doing this calculation, whether or not evolution is included in the analysis. When the possibility of evolution is not included in the analysis, and no prior assumptions are made about the spatial geometry of the Universe, the Bayes factor for $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ compared to $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ is $`B=5.4`$ using the MLCS method, $`B=6.8`$ using the SF method, and $`B=86`$ using the TF method, which, if one is not prejudiced either way, favors nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$ only equivocally. (There may be reasons to be prejudiced one way or the other; see for example Turner 1999 for a theoretical cosmologist’s point of view.) When the possibility of evolution is accounted for in the analysis, the values of the analogous Bayes factors depend on one’s prior assumptions, but rather conservatively $`B\sim 1`$. Thus, if we do not discriminate among open, closed and flat cosmological models, the data alone do not choose between $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ once the possibility of evolution is taken into account. However, if the Universe is presumed to be flat spatially, then the case for $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ is stronger. If evolution is presumed not to occur, we find Bayes factors $`B=2.1\times 10^4`$ (MLCS), $`2.5\times 10^6`$ (TF) and $`5.0\times 10^3`$ (SF), decisive odds in favor of nonzero $`\mathrm{\Omega }_\mathrm{\Lambda }`$. Weak evolution ($`e=0.1`$ in Model I or $`b=0.25`$ in Model II) lowers these values to $`B=20`$ (MLCS), 48 (TF) and 14 (SF), which still favors $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ positively but not nearly as persuasively. If evolution is allowed to be somewhat more pronounced, but still at a plausible level ($`e=0.2`$ or $`b=0.5`$), the Bayes factors fall to $`B=2.4`$ (MLCS), 2.5 (TF) and 2.3 (SF), which is scant evidence for a non-vanishing cosmological constant. Once again, the ability of the data to distinguish $`\mathrm{\Omega }_\mathrm{\Lambda }\ne 0`$ from $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ depends sensitively on prior assumptions about evolution of SNe Ia, and underscores the importance of placing independent constraints on the possible range of variation of their peak luminosities with redshift.
We have benefited from conversations and correspondence with Adam Riess, Peter Garnavich and Bill Press. We thank Saul Perlmutter for providing a draft of P98 prior to publication. We thank Margaret Geller, Saul Teukolsky, Éanna Flanagan, David Chernoff, Ed Salpeter, and Sidney Drell for comments on earlier drafts of this paper. PSD gratefully acknowledges support from the John Simon Guggenheim Memorial Foundation. This work was supported by NASA grants NAG5-3097, NAG5-2762, NAG5-3427, and NSF grant PHY-9310764.
## Appendix A Bayes Factors
In Bayesian inference, to form a judgement about an hypothesis $`H_i`$, we calculate its probability, $`p(H_i|D,I)`$, conditional on the data ($`D`$) and any other relevant information at hand ($`I`$). The desired probability $`p(H_i|D,I)`$ is not usually assignable directly; instead we must calculate it from other simpler probabilities using the rules of probability theory. Prominent among these is Bayes’s theorem, expressing this posterior (i.e., after consideration of the data) probability in terms of a prior probability for $`H_i`$ and a likelihood for $`H_i`$,
$$p(H_i|D,I)=p(H_i|I)\frac{\mathcal{L}(H_i)}{p(D|I)},$$
(A1)
where the likelihood for $`H_i`$, $`\mathcal{L}(H_i)`$, is a shorthand notation for the sampling probability for $`D`$ presuming $`H_i`$ to be true, $`p(D|H_i,I)`$. The likelihood notation and terminology emphasize that it is the dependence of the sampling probability on $`H_i`$ (rather than $`D`$) that is of interest for calculating posterior probabilities. The term in the denominator is the prior predictive probability for the data and plays the role of a normalization constant. It can be calculated according to
$$p(D|I)=\sum _ip(H_i|I)\mathcal{L}(H_i).$$
(A2)
We see from this equation that the prior predictive probability is the average likelihood for the hypotheses, with the prior being the averaging weight. It is also sometimes called the marginal probability for the data.
For estimating the values of the parameters $`\theta `$ of some model, the background information is the assumption that the parameterized model under consideration is true; we denote this by $`M`$ (this may include any other information we have about the parameters apart from that provided by $`D`$; for example, previously obtained data). The posterior probability for any hypothesis about continuous parameters can be calculated from the posterior probability density function (PDF), which we may calculate with a continuous version of Bayes’s theorem:
$$p(\theta |D,M)=p(\theta |M)\frac{\mathcal{L}(\theta )}{p(D|M)}.$$
(A3)
Both the posterior and the prior are PDFs in this equation; we continue to use the $`p()`$ notation, letting the nature of the argument dictate whether a probability or PDF is meant. In this case, the normalization constant is given by an integral:
$$p(D|M)=\int d\theta \,p(\theta |M)\mathcal{L}(\theta ).$$
(A4)
The normalization constant is now the average likelihood for the model parameters.
When comparing rival models, $`M_i`$, each with parameters $`\theta _i`$, we return to the discrete version of Bayes’s theorem in equation (A1), using $`H_i=M_i`$ for the hypotheses, and taking the background information to be $`I=M_1+M_2+\cdots `$ (denoting the proposition, “Model $`M_1`$ is true or model $`M_2`$ is true or $`\cdots `$”). The likelihood for model $`M_i`$ is $`p(D|M_i,I)`$; but since the joint proposition $`(M_i,I)`$ is equivalent to the proposition $`M_i`$ by itself, we have $`\mathcal{L}(M_i)=p(D|M_i)`$. Thus the likelihood for a model in a model comparison calculation is equal to the normalization constant we would use when doing parameter estimation for that model, given by an equation like equation (A4). In other words, the likelihood for a model (as a whole) is the average likelihood for its parameters.
It is convenient and common to report model probabilities via odds, ratios of probabilities of models. The (posterior) odds for $`M_i`$ over $`M_j`$ is
$$O_{ij}\equiv \frac{p(M_i|D,I)}{p(M_j|D,I)}=\frac{p(M_i|I)}{p(M_j|I)}\times \frac{p(D|M_i)}{p(D|M_j)}\equiv \frac{p(M_i|I)}{p(M_j|I)}\times B_{ij},$$
(A5)
where the first factor is the prior odds, and the ratio of model likelihoods, $`B_{ij}`$, is called the Bayes factor. When the prior information does not indicate a preference for one model over another, the prior odds is unity and the odds is equal to the Bayes factor. Kass and Raftery (1995) provide a comprehensive review of Bayes factors, and Wasserman (1997) provides a survey of their use and methods for calculating them. When the prior odds does not strongly favor one model over another, the Bayes factor can be interpreted just as one would interpret an odds in betting; Table 2 summarizes the recommended interpretation of Kass and Raftery.
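The interpretive scale itself is easy to encode. The helper below is our sketch; the thresholds are the ones commonly quoted from Kass and Raftery (1995), not a table reproduced from this paper.

```python
# A sketch encoding the Kass & Raftery (1995) interpretive scale for a Bayes
# factor B favoring one model over another (thresholds as commonly quoted).
def interpret_bayes_factor(B):
    if B < 1.0:
        return interpret_bayes_factor(1.0 / B) + " (favoring the other model)"
    if B < 3.0:
        return "not worth more than a bare mention"
    if B < 20.0:
        return "positive"
    if B < 150.0:
        return "strong"
    return "very strong"

for B in [5.4, 6.8, 86.0]:   # the no-evolution values quoted in section 4.1
    print(B, "->", interpret_bayes_factor(B))
# 5.4 and 6.8 -> "positive"; 86 -> "strong", matching the text's reading
```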
The Bayes factor is a ratio of prior predictive probabilities; it compares how rival models predicted the observed data. Simple models with no or few parameters have their predictive probability concentrated in a small part of the sample space. The additional parameters of complicated models allow them to assign more probability to other regions of the sample space, but since the predictive probability must be normalized, this broader explanatory power comes at the expense of reducing the probability for data lying in the regions accessible to simpler models. As a result, model comparison using Bayes factors tends to favor simpler models unless the data are truly difficult to account for with such models. Bayes factors thus implement a kind of automatic and objective “Ockham’s razor” (Jaynes 1979; Jefferys and Berger 1992).
This notion of simplicity is somewhat subtle, but in some simple situations it accords well with our intuition that models with more parameters are more complicated and should only be preferred if they account for the data significantly better than a simpler alternative. Because Bayes factors are ratios of average likelihoods, rather than the maximum likelihoods that are used for model comparison in frequentist statistics, they penalize models for the sizes of their parameter spaces. A simple, approximate calculation of the average parameter likelihood given by equation (A4) elucidates how this comes about.
First, we assume that the data are informative in the sense of producing a likelihood function that is strongly localized compared to the prior. Suppose that the scale of variation of the prior is $`\mathrm{\Delta }\theta `$, and the scale of variation of the likelihood is $`\delta \theta \ll \mathrm{\Delta }\theta `$. If the likelihood is maximized at $`\theta =\widehat{\theta }`$, then we find
$$p(D|M)\approx p(\widehat{\theta }|M)\int d\theta \,\mathcal{L}(\theta ).$$
(A6)
Since the prior is normalized with respect to $`\theta `$, $`p(\widehat{\theta }|M)`$ will be roughly equal to $`1/\mathrm{\Delta }\theta `$. The integral will be roughly equal to the product of the peak and width of the likelihood, $`\mathcal{L}(\widehat{\theta })\delta \theta `$. Thus,
$$p(D|M)\approx \mathcal{L}(\widehat{\theta })\frac{\delta \theta }{\mathrm{\Delta }\theta }.$$
(A7)
We find that the likelihood for a model is approximately given by the maximum likelihood for its parameters, multiplied by a factor that is always $`\le 1`$ and that measures how the size of the probable part of the parameter space shrinks when we account for the data. This latter factor is colloquially known as the Ockham factor. To see why, consider the case of nested models: $`M_1`$ and $`M_2`$ share parameters $`\theta `$, but $`M_2`$ has additional parameters $`\varphi `$. In such cases, it is not uncommon for the prior and posterior ranges of $`\theta `$ to be comparable for both models (this is not the case in the present work, however). Then the Bayes factor in favor of the more complicated model is approximately given by
$$B_{21}\approx \frac{\mathcal{L}(\widehat{\theta },\widehat{\varphi })}{\mathcal{L}(\widehat{\theta })}\frac{\delta \varphi }{\mathrm{\Delta }\varphi }.$$
(A8)
Thus the data will favor $`M_2`$ only if the maximum likelihood ratio is high enough to offset $`\frac{\delta \varphi }{\mathrm{\Delta }\varphi }`$, which will be $`<1`$ if the data contain any information about $`\varphi `$ (and cannot be $`>1`$ in any case). This is in contrast to the frequentist approach, where only the ratio of maximum likelihoods is used. This ratio cannot disfavor $`M_2`$; one thus requires the likelihood ratio to exceed some critical value before preferring $`M_2`$, on the grounds that one should prefer the simpler model a priori. Unfortunately, the critical value is set in a purely subjective and ad hoc manner, and comparisons using likelihood ratios can be inconsistent (in the formal statistical sense of giving the incorrect answer when the amount of data becomes infinite). The Bayesian approach can (and often does) prefer the simpler model even when both models are given equal prior probabilities, and the critical likelihood ratio needed to just prefer $`M_2`$ is determined by the likelihood functions and the size of the parameter space searched. The odds is known to be a consistent statistic for choosing between models.
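A toy numerical check of the approximation in equation (A7) makes the Ockham factor concrete; the widths in this sketch are arbitrary illustrative choices, not values from the paper.

```python
# A toy check (ours) of equation (A7): for a Gaussian likelihood of width
# delta under a flat prior of width Delta, the prior-averaged likelihood is
# L(theta_hat) times an Ockham factor, with the effective likelihood width
# being delta*sqrt(2*pi) for a Gaussian peak of height 1.
import numpy as np

Delta, delta, theta_hat = 100.0, 1.0, 30.0
theta = np.linspace(0.0, Delta, 200001)
L = np.exp(-0.5*((theta - theta_hat)/delta)**2)    # L(theta_hat) = 1
avg_L = np.sum(L)*(theta[1] - theta[0])/Delta      # average over the flat prior
print(avg_L, delta*np.sqrt(2.0*np.pi)/Delta)       # both ~ 0.0251
```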
The approximations leading to the simple result of equation (A8) are not valid for the present work, so a simple “Ockham’s razor” interpretation of our results is not possible. Although the default model is nested in the models that have $`z`$-dependent systematic errors, it is clear from the figures that the addition of the systematic error parameters (corresponding to $`\varphi `$ in the above analysis) greatly affects inferences of the cosmological parameters (corresponding to $`\theta `$). Thus the $`\delta \theta `$ factors (here associated with the cosmological parameters) do not approximately cancel in the Bayes factor. Moreover, inferences for the $`\theta `$ and $`\varphi `$ parameters are highly correlated in the SNe Ia problem, so it is not possible to identify $`\delta \theta `$ and $`\delta \varphi `$ factors that separately quantify the uncertainties in the nested and additional parameters. We do know that the maximum likelihoods (e.g., minimum $`\chi ^2`$ values) are comparable for models with and without $`z`$-dependent systematic errors. The more complicated models are not improving the best fit substantially; rather, the additional parameter allows one to make the fit nearly as good as the best fit throughout a large region of the parameter space (because of the near-degeneracy of evolution and cosmology). It is this increase of the acceptable volume of parameter space that accounts for the Bayes factors slightly favoring the more complicated models here.
As is clear from equation (A7), the prior ranges for parameters play an important role in Bayesian model comparison. This is in contrast to their role in parameter estimation, where in Bayes’s theorem the prior range factor appears in both the numerator (through the prior) and the denominator (through the average likelihood) and thus cancels, typically having a negligible effect on inferences (though the range itself cancels, some effect can remain due to truncation of the tails of the likelihood). In particular, parameter estimation is typically well-behaved even when one uses improper (non-normalizable) priors, such as flat priors with infinite ranges. But model comparison fails when the priors for any parameters not common to all models are improper, because the Ockham factors associated with those parameters vanish. This may at first appear to be troubling (or at best a nuisance), but a similar dependence on the prior range of parameters is acknowledged to be necessary even in frequentist treatments of many problems. For example, consider detection of a periodic signal in a noisy time series using a power spectrum estimator. This is a model comparison problem (comparing a model without a signal to one with a periodic signal), and in fact the spectral power is simply related to the likelihood for a periodic (sinusoidal) signal. In frequentist analyses, one cannot simply use the number of standard deviations the spectral peak is above the null expectation to assess the significance of a signal; one must also take into account the number of statistically independent frequencies examined, which depends on the frequency range searched and on the number and locations of frequencies examined within that range. Similar considerations arise in searches for features in energy spectra, or searches for sources in images—one must take into account the number and locations of points searched in order to properly assess the significance of a detection. The results of the corresponding Bayesian calculations similarly depend on the ranges of parameters searched (but not on the number and locations of the parameter values used). Bayes’s theorem indicates that the sizes of parameter spaces (i.e., search ranges) must be taken into account whenever we compare models; such considerations should not be unique to the few applications where they have been recognized to be important in conventional analyses.
## Appendix B Statistical Methodology
As in the analyses of R98 and P98, we adopt the Bayesian approach for inferring the cosmological parameters $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, extending their analyses to include parameterized systematic and evolutionary components. The additional parameters are dealt with by marginalizing (as the R98 analysis did with $`H_0`$ and the P98 analysis did with the SF fitting parameters). Many of the needed marginalizations can be done analytically; this Appendix describes these calculations. Some remaining marginalizations (including calculation of Bayes factors) were done numerically with various methods including straightforward quadrature, adaptive quadrature, and Laplace’s method; application of these methods to Bayesian integrals is surveyed in Loredo (1999).
### B.1 Basic Framework
Let $`D_i`$ denote the data associated with SN number $`i`$, and $`D`$ denote all the data associated with the $`N`$ SNe in a particular survey. Let $`𝒞`$ denote the cosmological parameters, $`𝒞=(H_0,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$, and $`𝒮`$ denote possible extra parameters associated with modelling evolution or other sources of systematic errors. Our task is to find the posterior distribution for these parameters given the data and some model, $`M`$. Actually, we are ultimately interested in the marginal distribution for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, found by marginalizing: $`p(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda }|D,M)=\int dH_0\int d𝒮\,p(𝒞,𝒮|D,M)`$. Bayes’s theorem gives the joint posterior distribution for $`𝒞`$ and $`𝒮`$,
$$p(𝒞,𝒮|D,M)\propto p(𝒞|M)p(𝒮|M)\mathcal{L}(𝒞,𝒮).$$
(B1)
The first factor is the prior for $`𝒞`$, which we will take to be flat over the ranges shown in our plots (or flat in the logarithm for $`H_0`$; see below). The second factor is the prior for $`𝒮`$ which we assume is independent of $`𝒞`$; we discuss it further in the context of specific models, below. The last factor is the likelihood for $`𝒞`$ and $`𝒮`$, which we have abbreviated as $`\mathcal{L}(𝒞,𝒮)\equiv p(D|𝒞,𝒮,M)`$. Rigorous calculation of this likelihood is very complicated, requiring introduction and estimation of many additional parameters, including parameters from the lightcurve model and parameters for characteristics of the individual SNe (such as their apparent and absolute magnitudes, redshifts, $`K`$-corrections, etc.). With several simplifying assumptions, the final result is relatively simple; it can be written as the product of independent Gaussians for the redshifts and distance moduli of the SNe integrated over the redshift uncertainty, so that
$$\mathcal{L}(𝒞,𝒮)\propto \prod _i\int dz_i\,\mathrm{exp}\left[-\frac{[F(z_i)-\widehat{\mu }_i]^2}{2s_i^2}\right]\mathrm{exp}\left[-\frac{(z_i-\widehat{z}_i)^2}{2w_i^2}\right].$$
(B2)
Here $`\widehat{\mu }_i`$ is the best-fit distance modulus for SN number $`i`$, $`s_i`$ is its uncertainty, $`\widehat{z}_i`$ is the best-fit cosmological redshift, and $`w_i`$ is its uncertainty (mostly due to the source’s peculiar velocity). The function $`F(z_i)`$ gives the true distance modulus for a SN Ia at redshift $`z_i`$; in the absence of systematic or evolutionary terms, it is given by $`f(z_i)`$ in equation (1). For the results reported in P98, two complications appear in the likelihood. First, the factors are not independent; the use of common photometric calibration data for groups of SNe Ia that are studied together introduces correlations. P98 have reported a correlation matrix accounting for these, but the correlations are very small and we have neglected them here. In addition, one of the parameters defining the lightcurve model—the $`\alpha `$ parameter described in § 2, above—appears explicitly in the P98 likelihood so that it can be estimated jointly with the cosmological parameters. This parameter would appear in the $`\widehat{\mu }_i`$ estimates in equation (B2). The data tabulated in P98 use the best-fit $`\alpha `$, however, so we could not account for the uncertainty of $`\alpha `$ in our analysis. The close similarity between our contours in Figure 11$`a`$ and those presented in P98 argues that rigorous accounting for the uncertainty in $`\alpha `$ plays only a minor role in the final results.
As was done in R98 and P98, we approximate the $`z_i`$ integrals in equation (B2) by linearizing the $`z_i`$ dependence of $`F(z_i)`$ about $`\widehat{z}_i`$ and performing the resulting convolution of Gaussians analytically. The result is
$$\mathcal{L}(𝒞,𝒮)\propto \prod _i\mathrm{exp}\left[-\frac{[\mu _i-\widehat{\mu }_i]^2}{2\sigma _i^2}\right],$$
(B3)
where $`\mu _i=F(\widehat{z}_i)`$ and
$$\sigma _i^2=s_i^2+[F^{\prime }(\widehat{z}_i)]^2w_i^2.$$
(B4)
The total variance $`\sigma _i^2`$ depends on $`𝒞`$ through $`F^{\prime }(z)`$. But this dependence is weak in general, and $`F^{\prime }(z)`$ is actually independent of $`𝒞`$ at low redshift in the pure cosmology model, with
$$F^{\prime }(z)=\frac{5\mathrm{log}e}{z}.$$
(B5)
We follow the practice of R98 and simply use this formula for all redshifts. We use the same formula for models with systematic error terms that introduce an additional (weak) dependence on redshift and $`𝒮`$; the redshift uncertainties are negligibly small at high redshifts where such dependences might become important, so the dependence of $`\sigma _i^2`$ on redshift is negligible. It is possible to do the $`z_i`$ integrals in equation (B2) accurately using Gauss-Hermite quadrature. We have done some calculations this way and verified that the final inferences are negligibly affected by the redshift integral approximations.
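For reference, the Gauss-Hermite evaluation mentioned here takes only a few lines. In the sketch below (ours), `F` stands for whatever model distance-modulus function is in play.

```python
# A sketch (ours) of the Gauss-Hermite evaluation of the z_i integral in
# equation (B2), used to verify the linearized approximation.
import numpy as np

def z_integral(F, mu_hat, s, z_hat, w, order=20):
    """Integrate exp[-(F(z)-mu_hat)^2/(2s^2)] exp[-(z-z_hat)^2/(2w^2)] dz."""
    x, wk = np.polynomial.hermite.hermgauss(order)   # nodes/weights for e^{-x^2}
    z = z_hat + np.sqrt(2.0)*w*x                     # absorb the redshift Gaussian
    Fz = np.array([F(zi) for zi in z])
    return np.sqrt(2.0)*w*np.sum(wk*np.exp(-0.5*((Fz - mu_hat)/s)**2))
```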
Equation (B3) is the starting point for the analyses reported in the body of this work. It is of a simple form: $`-2`$ times the log-likelihood is of the form of a $`\chi ^2`$ statistic. This is the same likelihood we would have written down had we simply presumed at the outset that the reported $`\widehat{\mu }_i`$ values were equal to some underlying true values given by $`F(\widehat{z}_i)`$ plus some added noise $`n_i`$;
$$\widehat{\mu }_i=F(\widehat{z}_i)+n_i,$$
(B6)
where the probability distribution for the value of $`n_i`$ is a zero-mean Gaussian with standard deviation $`\sigma _i`$.
### B.2 FRW Cosmology
Presuming a FRW cosmology and no systematic errors, equation (B6) can be written,
$$\widehat{\mu }_i=f_i+n_i=g_i-\eta +n_i,$$
(B7)
where $`f_i=f(\widehat{z}_i)`$ is the magnitude-redshift relation, which we can separate into a part $`g_i=g(\widehat{z}_i)`$ that depends implicitly only on $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ and a term $`\eta `$ (defined in equation (9)) that contains the $`H_0`$ dependence. Define the quadratic form $`Q`$ according to
$$Q=\sum _i\frac{(\widehat{\mu }_i-g_i+\eta )^2}{\sigma _i^2}.$$
(B8)
This is the $`\chi ^2`$ statistic used in R98; the joint likelihood for $`h`$, $`\mathrm{\Omega }_M`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is simply proportional to $`e^{Q/2}`$. We can analytically marginalize over $`h`$ (or equivalently, over $`\eta `$) to find the marginal likelihood for the density parameters. To do so, we must assign a prior for $`h`$. We use the standard noninformative “reference” prior for a positive scale parameter, a prior flat in the logarithm and thus scale-invariant (Jeffreys 1961; Jaynes 1968; Yang and Berger 1997). This corresponds to a prior that is flat in $`\eta `$. We bound this prior over some range $`\mathrm{\Delta }\eta `$ (with limits corresponding to $`h=0.1`$ and $`h=1`$, so $`\mathrm{\Delta }\eta =\mathrm{ln}[10]`$). The prior range has negligible effect on all our results (so long as it contains the peak of the likelihood) because the $`H_0`$ parameter is common to all models, so the prior range cancels out of all probability ratios. Thus we could let it become infinite, but it is a good practice in Bayesian calculations to always adopt proper (i.e., normalizable) priors, especially if Bayes factors (ratios of normalization constants) are of interest.
Using the log-flat prior, the marginal likelihood for the density parameters is
$$\mathcal{L}(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{1}{\mathrm{\Delta }\eta }\int d\eta \,e^{-Q/2}.$$
(B9)
To do the integral, complete the square in $`Q`$ as a function of $`\eta `$, writing
$$Q=\frac{(\eta -\widehat{\eta })^2}{s^2}+q(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda }),$$
(B10)
where
$$\frac{1}{s^2}=\sum _i\frac{1}{\sigma _i^2},$$
(B11)
$$\widehat{\eta }(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=s^2\sum _i\frac{g_i-\widehat{\mu }_i}{\sigma _i^2},$$
(B12)
and the $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$-dependence is isolated in
$$q(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=-\frac{\widehat{\eta }^2}{s^2}+\sum _i\frac{(\widehat{\mu }_i-g_i)^2}{\sigma _i^2}=\sum _i\frac{(\widehat{\mu }_i-g_i+\widehat{\eta })^2}{\sigma _i^2}.$$
(B13)
The integral in equation (B9) is thus simply an integral over a Gaussian in $`\eta `$ located at $`\widehat{\eta }`$ with standard deviation $`s`$; $`\widehat{\eta }`$ is the best-fit (most probable) value of $`\eta `$ given $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and $`s`$ is its conditional uncertainty. As long as $`\widehat{\eta }`$ is inside the prior range and $`s\ll \mathrm{\Delta }\eta `$, the value of this integral is well approximated by $`s\sqrt{2\pi }`$, so that
$$\mathcal{L}(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{s\sqrt{2\pi }}{\mathrm{\Delta }\eta }e^{-q/2}.$$
(B14)
This is the marginal likelihood one would use to infer the density parameters in the absence of any systematic error terms. Note from equation (B13) that the quadratic form is just what one would obtain by calculating the “profile likelihood” for the density parameters (the likelihood maximized over the nuisance parameters, a frequentist method sometimes used to approximately treat nuisance parameters). Since the uncertainty $`s`$ is independent of $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, it follows from equation (B14) that the marginal likelihood is proportional to the profile likelihood in this problem.
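In code, equations (B11)-(B14) amount to the sketch below (ours); it returns the marginal likelihood up to the $`\sqrt{2\pi }/\mathrm{\Delta }\eta `$ factor common to all $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$. In practice one would work with the log of this quantity to avoid underflow.

```python
# A direct transcription (ours) of equations (B11)-(B14): the likelihood for
# the density parameters with eta (i.e. H0) marginalized analytically.
import numpy as np

def marginal_like(mu_hat, sigma, g):
    """mu_hat, sigma: data arrays; g: g(z_i) for the trial cosmology."""
    s2 = 1.0/np.sum(1.0/sigma**2)                     # equation (B11)
    eta_hat = s2*np.sum((g - mu_hat)/sigma**2)        # equation (B12)
    q = np.sum((mu_hat - g + eta_hat)**2/sigma**2)    # equation (B13)
    return np.sqrt(s2)*np.exp(-0.5*q)                 # equation (B14), up to constants
```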
It is also possible to do the calculation analytically using a flat prior for $`h`$, spanning a prior range $`\mathrm{\Delta }h`$. The corresponding prior for $`\eta `$ is exponential;
$$p(\eta |M)=\frac{10^5c_2}{2a\mathrm{\Delta }h}e^{\eta /2a},$$
(B15)
where $`a=2.5\mathrm{log}e`$, a constant known as Pogson’s ratio (Pogson 1856). The product of the likelihood and the prior can still be written as $`e^{-Q/2}`$ with $`Q`$ quadratic in $`\eta `$; but there is an additional linear term in $`Q`$ from the prior. Completing the square duplicates equation (B10), but with $`\widehat{\eta }`$ replaced with
$$\widehat{\eta }=s^2\left[\frac{1}{4a}+\sum _i\frac{g_i-\widehat{\mu }_i}{\sigma _i^2}\right].$$
(B16)
The marginal likelihood for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ also has a different factor out front; it is given by
$$\mathcal{L}(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{10^5c_2s\sqrt{2\pi }}{2a\mathrm{\Delta }h}e^{-q/2}.$$
(B17)
We present this result for reference only; we use the scale-invariant prior in the body of this work and in the remainder of this Appendix. We have compared calculations with flat and log-flat priors for some models; the resulting marginal likelihoods are negligibly different. Note that equation (B17) is not proportional to the profile likelihood; the proportionality is a special property of the scale-invariant prior.
### B.3 Systematic Error in $`H_0`$
Among the lightcurve model parameters, regardless of the method, is the fiducial absolute magnitude for SNe Ia, $`M_0`$. To obtain definite values for the distance moduli, $`M_0`$ must be estimated or at least arbitrarily specified. Let $`\widehat{M}_0`$ denote the value used to calculate the tabulated $`\widehat{\mu }_i`$ estimates. We can write the true value as
$$M_0=\widehat{M}_0+\delta ,$$
(B18)
where $`\delta `$ is an uncertain error in our estimate. Since the $`\widehat{\mu }_i`$ estimates are calculated using $`\widehat{M}_0`$, they will have an additive error equal to $`\delta `$ that is systematic (the same for every SN Ia). To account for this, equation (B7) must be replaced by
$$\widehat{\mu }_i=g_i-\eta +\delta +n_i.$$
(B19)
Note here the degeneracy between $`\eta `$ and $`\delta `$; since they play identical roles (up to a sign) in the model for the distance moduli, they cannot be individually constrained using only magnitude/redshift data; additional information setting a distance scale to at least one SN is required. Only the quantity $`\gamma =\delta -\eta `$ can be inferred from the basic data.
P98 arbitrarily specify $`\widehat{M}_0`$, so there is no useful information about $`\delta `$ that can break the degeneracy between $`\delta `$ and $`\eta `$. Recognizing this, they simply forgo any attempt to infer the Hubble constant. Their analysis amounts to replacing $`\eta `$ and $`\delta `$ with $`\gamma `$ and marginalizing over $`\gamma `$ with a flat prior; the resulting marginal likelihood for the density parameters is of the same form as equation (B14), though with an arbitrarily large prior range for $`\gamma `$ (which can be ignored since it is common to all models being compared). This is the likelihood we used for the analyses of LBL data (and simulated data) described in § 4 when we assume no evolutionary effects are present.
R98 use Cepheid distances for three SNe Ia to estimate $`M_0`$ for use with the MLCS and TF methods. We can consider this extra data to provide a prior distribution for $`\delta `$; this prior breaks the degeneracy between $`\eta `$ and $`\delta `$ in the analysis. R98 report a 10% uncertainty in the Cepheid distance scale for SNe Ia, corresponding to 0.21 magnitude uncertainty in distance moduli. We accordingly adopt a Gaussian prior for $`\delta `$ with zero mean and standard deviation $`d=0.21`$, so that
$$p(\delta )=\frac{1}{d\sqrt{2\pi }}e^{-\delta ^2/2d^2}.$$
(B20)
We can calculate the likelihood for the cosmological parameters by multiplying the joint likelihood for them and $`\delta `$ by this prior, and integrating over $`\delta `$, as follows.
The quadratic form in the exponential that results from multiplying this prior by the likelihood following from equation (B19) is
$$Q=\frac{\delta ^2}{d^2}+\sum _i\frac{(\widehat{\mu }_i-g_i+\eta -\delta )^2}{\sigma _i^2}=\frac{(\delta -\widehat{\delta })^2}{s^2}+q(\eta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda }),$$
(B21)
where
$$\frac{1}{s^2}=\frac{1}{d^2}+\sum _i\frac{1}{\sigma _i^2},$$
(B22)
$$\widehat{\delta }(\eta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=s^2\sum _i\frac{\widehat{\mu }_i-g_i+\eta }{\sigma _i^2},$$
(B23)
and the $`(\eta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$-dependence is isolated in
$$q(\eta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=-\frac{\widehat{\delta }^2}{s^2}+\sum _i\frac{(\widehat{\mu }_i-g_i+\eta )^2}{\sigma _i^2}=\frac{\widehat{\delta }^2}{d^2}+\sum _i\frac{(\widehat{\mu }_i-g_i+\eta -\widehat{\delta })^2}{\sigma _i^2}.$$
(B24)
As with $`\eta `$ in the previous subsection, the integral over $`\delta `$ is a simple Gaussian integral, equal to $`s\sqrt{2\pi }`$. Thus the marginal likelihood for the cosmology parameters is
$$\mathcal{L}(\eta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{s}{d}e^{-q/2}.$$
(B25)
This is the likelihood used for the analyses of the MLCS and TF data using the default model, as reported in § 4.1.
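The default-model likelihood of equation (B25) is equally compact; the sketch below (ours) uses the Cepheid prior width $`d=0.21`$ mag quoted above.

```python
# A sketch (ours) of equations (B22)-(B25): the likelihood for eta and the
# density parameters after marginalizing the zero-point offset delta over its
# Cepheid-calibration prior.
import numpy as np

def default_model_like(mu_hat, sigma, g, eta, d=0.21):
    s2 = 1.0/(1.0/d**2 + np.sum(1.0/sigma**2))        # equation (B22)
    r = mu_hat - g + eta                              # residuals at delta = 0
    delta_hat = s2*np.sum(r/sigma**2)                 # equation (B23)
    q = delta_hat**2/d**2 + np.sum((r - delta_hat)**2/sigma**2)  # equation (B24)
    return (np.sqrt(s2)/d)*np.exp(-0.5*q)             # equation (B25)
```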
### B.4 Systematic Error From Evolution
The simplest model we considered with a redshift-dependent systematic or evolutionary component is Model I, which adds a shift of size $`ϵ`$ to the distance moduli of the high redshift SNe Ia. For this model,
$$\widehat{\mu }_i=\{\begin{array}{cc}f_i+\delta +n_i\hfill & \text{if }z_i<z_c\hfill \\ f_i+\delta +ϵ+n_i\hfill & \text{if }z_iz_c\hfill \end{array},$$
(B26)
with $`z_c=0.15`$. We seek the marginal likelihood for the cosmological parameters, requiring us to introduce priors for $`\delta `$ and $`ϵ`$ and marginalize over them.
The prior for $`\delta `$ is given by equation (B20), and the prior for $`ϵ`$ is similarly a zero-mean Gaussian, but with a different standard deviation, $`e`$;
$$p(ϵ)=\frac{1}{e\sqrt{2\pi }}\mathrm{exp}\left[\frac{ϵ^2}{2e^2}\right].$$
(B27)
The quadratic form associated with the product of these priors and the likelihood function is
$$Q=\frac{\delta ^2}{d^2}+\frac{ϵ^2}{e^2}+\underset{z_i<z_c}{}\frac{(\widehat{\mu }_if_i\delta )^2}{\sigma _i^2}+\underset{z_iz_c}{}\frac{(\widehat{\mu }_if_i\delta ϵ)^2}{\sigma _i^2}.$$
(B28)
To marginalize over $`ϵ`$, we complete the square in $`ϵ`$ by introducing the $`ϵ`$ uncertainty $`t`$, given by
$$\frac{1}{t^2}=\frac{1}{e^2}+\underset{z_iz_c}{}\frac{1}{\sigma _i^2},$$
(B29)
and the conditional best-fit value of $`ϵ`$,
$$\widehat{ϵ}(\delta ,𝒞)=t^2\underset{z_iz_c}{}\frac{\widehat{\mu }_if_i\delta }{\sigma _i^2}.$$
(B30)
After completing the square and integrating the resulting Gaussian dependence on $`ϵ`$, we find that
$$p(\delta )(\delta ,𝒞)=\frac{t}{ed\sqrt{2\pi }}e^{q/2},$$
(B31)
where
$$q=\frac{\widehat{ϵ}^2}{t^2}+\frac{\delta ^2}{d^2}+\underset{i}{}\frac{(\widehat{\mu }_if_i\delta )^2}{\sigma _i^2}.$$
(B32)
Note that the sum is over all SNe, and that $`\delta `$ appears in $`\widehat{ϵ}`$. Completing the square in $`\delta `$ lets us identify the $`\delta `$ uncertainty, $`s`$, given by
$$\frac{1}{s^2}=\frac{1}{d^2}+\underset{i}{}\frac{1}{\sigma _i^2}\frac{t^2}{v^2},$$
(B33)
and the conditional estimate for $`\delta `$,
$$\widehat{\delta }(𝒞)=s^2\left[\frac{t^2}{v^2}F+\underset{i}{}\frac{\widehat{\mu }_if_i}{\sigma _i^2}\right],$$
(B34)
where in these equations we have defined $`v`$ and $`F`$ according to
$$\frac{1}{v}=\underset{z_iz_c}{}\frac{1}{\sigma _i^2},$$
(B35)
and
$$F=\underset{z_iz_c}{}\frac{\widehat{\mu }_if_i}{\sigma _i^2}.$$
(B36)
Using these, we can rewrite $`q`$ as
$$q=\frac{(\delta \widehat{\delta })^2}{s^2}+q^{}(𝒞),$$
(B37)
where the dependence on the cosmological parameters is in
$$q^{}(𝒞)=\frac{\widehat{\delta }^2}{s^2}t^2F^2+\underset{i}{}\frac{(\widehat{\mu }_if_i)^2}{\sigma _i^2}.$$
(B38)
After integrating over the Gaussian dependence on $`\delta `$, the marginal likelihood is
$$(𝒞)=\frac{ts}{ed}e^{q^{}/2}.$$
(B39)
This is the likelihood used for analyses of the MLCS and TF data based on Model I.
For Model II, used to model the SF data, the estimated distance moduli are given by
$$\widehat{\mu }_i=g_i+\gamma +\beta h_i+n_i,$$
(B40)
where as before $`\gamma =\delta \eta `$, and $`h_i=\mathrm{ln}(1+\widehat{z}_i)`$. We will marginalize over $`\gamma `$ and $`\beta `$, using a flat prior for $`\gamma `$ and a zero-mean Gaussian prior for $`\beta `$ with standard deviation $`b`$.
As already noted, the $`\gamma `$ marginalization is similar to the $`\eta `$ marginalization already treated above. The result is
$$p(\beta )(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{s}{\mathrm{\Delta }\gamma b}e^{q/2},$$
(B41)
where $`s`$ is given by equation (B11),
$$\widehat{\gamma }(\beta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=s^2\underset{i}{}\frac{\widehat{\mu }_ig_i\beta h_i}{\sigma _i^2},$$
(B42)
and the $`(\beta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$-dependence is isolated in
$`q(\beta ,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ $`=`$ $`{\displaystyle \frac{\widehat{\gamma }^2}{s^2}}+{\displaystyle \underset{i}{}}{\displaystyle \frac{(\widehat{\mu }_ig_i\beta h_i)^2}{\sigma _i^2}}`$ (B43)
$`=`$ $`{\displaystyle \underset{i}{}}{\displaystyle \frac{(\widehat{\mu }_ig_i\beta h_i\widehat{\gamma })^2}{\sigma _i^2}}.`$
We assume that the prior range for $`\gamma `$, $`\mathrm{\Delta }\gamma `$, contains the peak of the Gaussian. Since this range is common to all models for this data and thus cancels in all calculations, we do not need to specify it more precisely, and we simply drop it from subsequent calculations.
Note that $`\widehat{\gamma }`$ depends on $`\beta `$; we can isolate this dependence by writing
$$\widehat{\gamma }=s^2H\beta s^2G,$$
(B44)
where
$$H=\underset{i}{}\frac{\widehat{\mu }_ig_i}{\sigma _i^2},$$
(B45)
and
$$G=\underset{i}{}\frac{h_i}{\sigma _i^2}.$$
(B46)
This helps us to do the remaining marginalization over $`\beta `$. We now complete the square in $`\beta `$, identifying the $`\beta `$ uncertainty, $`\tau `$, given by
$$\frac{1}{\tau ^2}=\frac{1}{b^2}s^2G^2+\underset{i}{}\frac{h_i^2}{\sigma _i^2},$$
(B47)
and the conditional best-fit $`\beta `$,
$$\widehat{\beta }=\tau ^2\left[s^2GH+\underset{i}{}\frac{h_i(\widehat{\mu }_ig_i)}{\sigma _i^2}\right].$$
(B48)
Integrating over the Gaussian dependence on $`\beta `$ gives a factor of $`\tau \sqrt{2\pi }`$, and the final likelihood for the density parameters is
$$(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })=\frac{s\tau \sqrt{2\pi }}{b}e^{q^{}/2},$$
(B49)
where
$$q^{}=\frac{\widehat{\beta }^2}{b^2}+\underset{i}{}\frac{(\widehat{\mu }_ig_i\widehat{\beta }h_is^2H)^2}{\sigma _i^2}.$$
(B50)
This is the likelihood used for the calculations with Model II in § 4.
|
no-problem/9905/physics9905031.html
|
ar5iv
|
text
|
# Emission Optics of the Steigerwald Type Electron Gun
## 1 Introduction
An electron gun with a telefocus Wehnelt grid was first proposed by K.H. Steigerwald in 1949 ( see Figure 1). The focusing properties of this design are based on the shape of the equipotential lines inside the Wehnelt cylinder closely imitating a diverging and a converging lens combination. The name ‘telefocusing’ follows the optical analog. A detailed experimental verification of the predicted property was carried out by Braucks. By changing the position of the inner Wehnelt cone ($`a`$) or the size of the outer Wehnelt opening $`\left(d_w\right)`$, the strength of the two electron optical lenses will vary with the fields. As shown in Figure 2, Braucks demonstrated the telefocusing property of the Steigerwald type gun. The focusing distances $`\left(L_q\right)`$ identified with the smallest beam diameter from the filament tip were plotted as a function of parameters $`d_w`$ and $`a`$ for his design. Changing the ratio $`d_w/a`$, the focal length could be varied at will. An intense electron beam that is naturally collimated without limiting apertures makes this gun very attractive for electron diffraction experiment.
Several authors have applied the Steigerwald type gun in gas phase electron diffraction units using counting techniques , , and photographic plates(films). To get high count rates and keep the angular precision at the same time, it is advantageous that the beam is intense and narrow at the intersection of the beam and the gas jet, and the focal point of the beam is set at the entrance of the Faraday cage in the detecting plane. In the telefocusing systems, the distance between two electrodes can be varied by the axial movement of the inner Wehnelt cone or the anode, thus allowing the focal distance to be adjusted and beam quality to be fine tuned. To investigate the effect influenced by the geometry of the lensing system, the beam profile at the proximity of the ‘focal point’ was measured and is discussed here for different combinations of $`a`$, $`D`$, and $`d`$. It has already been pointed out by Schmoranzer et al., and later investigated extensively by Schiewe et al. that the sum $`a+D`$, i.e. the distance from the anode to the filament tip, strongly affects the focusing mechanism. We built a telefocus gun to probe the optics of an electrostatic analyzer. The measured magnification factors of the analyzer deviated significantly from the value one would expect based on computational simulation which was verified in several other ways. Two possibilities could account for this: the long established telefocusing theory could be wrong, or there could be an anomalous broadening in the analyzer that is responsible for the differences. Therefore diagnostic measurements have been setup to re-examine the electron beam generated from our telefocus gun, especially to find the focal points and the beam profiles in their proximity.
## 2 Experiments
### 2.1 Setup
As depicted in Figure 3, the telefocus electron gun was mounted on a movable stage which was able to move along the viewing line of a Faraday cage, whose entrance aperture was $`50`$ micron. Two sets of deflector plates were placed between the gun and Faraday cage. The first set of deflector plates (aligner) moved along with the e-gun so that the emitting direction could be fine tuned. The second set of deflector plates (scanner) were attached to the detector, and a computer controlled power supply (Bertan 205B) was used to scan the beam over the aperture to investigate the size of the beam. The intensity profile of the beam was recorded with a Keithley 616 digital electrometer connected to a P100 computer through GPIB ports. The scans were taken at a 15mHz repetition rate. The emission current from the electron gun was set to be in the range $`1\mu A`$. The beam profiles, which were always gaussian in shape, were measured at 5-7 different distances, ranging from $`L_e=`$90mm to 180 mm, in order to determine the divergence angle. The size of the beam could be determined in two different ways. The full width at half maximum (FWHM) was obtained by fitting the profile with a standard gaussian. The FWHM was compared with directly recorded half-maximum points. The two results agreed with each other within $`5\%`$. To determine the opening angle of the electron beam, the FWHMs from different positions were combined to track the envelope. However, in some extreme cases the beam profile became very broad that sweeping the beam could charge up the insulator behind the deflector plates. Furthermore the deflecting field became inhomogeneous, introducing additional error. To avoid this situation, we estimate the gaussian beam size by taking the inverse of the square root of the peak intensity and then calibrating the results with the cases where reliable scans were available. Since the peak intensity could be measured very accurately, the above method was an excellent method for extrapolating gaussian beam for larger beam diameters. Two different outer Wehnelt cylinders were used to investigate the effects due to field penetration. (A) Wehnelt had an opening of $`d_w=3.175cm`$, and $`d_w=1.12cm`$ was the value for (B) Wehnelt.
### 2.2 Results and Discussions
In all cases no focal points have been found. For all the adjustible range of the gun the FWHM of the beam investigated grew linearly with the distance between the gun and the detector ($`L_e`$). 3-dimensional plots of the beam-widths measured at different $`d_w/a`$ over the range from $`L_e=0`$ to $`250mm`$ were constructed for two Wehnelt cylinders. (Figure 4, 5) ‘Point of divergence’ can be defined as the point source on the symmetric axis where the beam intersects with the zero beam-width plane. The locations of these ‘virtual sources’ points determine whether a real crossover appears along the beam path. Figure 6 shows that only in section marked (c) the real crossover appears in front of the filament; whereas in section (b), representing smaller beam-widths, the points of divergence are behind the filament. In section (c) the divergent lens contributes most strongly. Those crossovers should not be called ‘focuses’ as was recognized first by Braucks and later on confirmed by Schmoranzer et al.. The beam-widths in (c) are actually larger than those in (b) along the observed distances. It is not clear whether a crossover will appear at larger distance as Braucks measured(Figure 2). We suspect that Schmoranzer et al. , observed cuts at a fixed $`L_e`$ value and the minimum beam-width was mistaken as ‘focal point’ of the beam. Figure 7 shows such cuts measured at $`L_e=93.2mm`$. The origin of the coordinate system for $`L_e`$ and the position of the inner Wehnelt cone is set at the front edge of the electron gun (Figure 3). By changing the $`d_w/a`$ ratio in the gun, the size of the emitting surface and the divergence angle both change. The narrowest beam-width observed is the minimization due to the combination of both factors. In the proximity of the gun, the size factor dominates, but in the far field, the dispersion angle is more important. As supported by both Figures 4 , 5 we found that while moving the detector to larger $`L_e`$ value, the minimum beam-width starts from a low $`d_w/a`$ value then gradually moves toward a particular $`d_w/a`$ value with the lowest dispersion angle. This actually agrees with one of the features in Braucks’ original plot (Figure 2) where the $`d_w/a`$ values, corresponding to increasing ‘focal lengths’ ($`L_q`$), converge at one point. To support our argument, we inserted into Braucks’ plot our measurements, using $`L_q`$ as the distance from the filament to the detector, the minimum beam-width was achieved by tuning $`d_w/a`$. Figure 2 shows that the $`d_w/a`$ values actually agree with Braucks’ measurements. Another feature we reproduced well is that the ‘density’ of the function $`f(L_q,d_w/a)`$ grows at lower $`d_w`$. However, the density of $`f`$ in our measurement is much greater than Braucks’. One possible way of explanation may come from $`D`$, the distance from the anode to the Wehnelt cylinder. As pointed out by Schmoranzer and Schiewe, the value of $`(D+a)`$ is a good parameter in the ‘focusing’ mechanism. The $`d_w/D`$ values in our setup were larger than those Braucks used. That means our converging field is stronger inside the Wehnelt cylinder while the diverging field is weaker. Since the spreading of $`L_q`$ curves in Braucks’ plot was the result of telefocusing mechanism, it may be reasonable to link the increased density to our stronger convergent fields.
The telefocusing lenses do exist in the Wehnelt cylinder, but a complete ray tracing which simulates the beam emission process has never been realized as in other gun types . The general dispersion angle involved in this case is in $`mrad`$ range, which is difficult to simulate; the uncertainty in the initial conditions poses a fundamental problem. A reverse field created by the self-biasing and the kinetic energy of the thermal electrons defines a zero energy surface. This surface, not the filament, serves as the emitting object in the imaging system of a telefocusing gun. The shape of this surface depends mostly on the local symmetric mechanical design and the electron-electron interaction, but not on the shape of the filament. This explains why the electron beams are circular while the hair-pin filament is elongated in one direction. The emission current would increase exponentially as $`a`$ decreases however the zero energy surface grows due to a greater reverse field created by the self-biasing resistor, and vice versa.
In Figure 6, the point of divergence appears in front of the filament only for small values of $`d_w/a`$ when $`a`$ is rather large (marked as section c). For very high $`d_w/a`$ (section a), the emitting surface is fully exposed to the anode and no crossover will appear. In the intermediate $`d_w/a`$ range (section b) a minimum beam-width might be obtained, but no crossover has to occur. This avoids the ‘Boersch’ effect coming from the Coulombic electron-electron repulsion at the crossover, and ensures the fact that the beam divergence angle can be kept very small at high emission current.
As shown in Figure 8, it is interesting to note that in many cases the relative momentum spread ($`\delta p/p`$) in phase space, which defines the dispersion angle $`\alpha `$ for each ray bundle emitted from emitting surface, is smaller than the thermal broadening, $`\alpha _T=\sqrt{kT/eV}4.18\times 10^3`$ rad. Particularly for Wehnelt cylinder (A), the average transverse velocity of the thermal electrons is reduced by a factor of 10 at the minimum beam-width condition. Generally in a simplified treatment one assumes that the transverse velocity distribution changes only little in the emission process, however this is only true for electrons leaving the charge cloud perpendicularly. In the transverse direction, the equipotential lines of the local reverse field act as filters and reduce the emittance angles. It also indicates that the angular spread of the beam envelope is mostly from the inherent thermal broadening than from the lensing process in the gun. Therefore by using filament with lower operating temperature, e.g. barium oxide, hexaboride cathodes, beams with smaller angular spread can be obtained. It is noteworthy that in section (b) marked in Figure 6, the electron would acquire highest brightness since both beam-width and dispersion angle were minimized.
The answer to why the beam envelope created by a telefocuing electron gun does not converge actually can be illustrated also by its optical analog. In a telescope, the remote object is imaged upon the eye through a telefocusing mechanism. The envelope of all the incoming light rays, traced back, is diverging as it will ultimately reach the star we are observing. Similarly for an electron gun, it should be that the image of the emitting surface be reconstructed remotely while the beam envelope as a whole keeps diverging. As will be discussed in more details in the following article, the very same electron gun with (B) Wehnelt is used to probe the imaging optics of a spherical analyzer. Two beam-profiles are measured very accurately at the entrance and exit of the analyzer. Based on the measurements of the beam envelope obtained in this work, and the ratio of the two beam profiles, the location of the images could be deconvoluted. The position of the image of the emitting surface at different $`d_w/a`$ is shown in Figure 9 where indeed the source to image distance ($`L_q`$) increases as the strength of the telefocusing increases($`d_w/a`$ decreases).
We propose a new model to describe the performance of a telefocus electron gun which is illustrated in Figure 10. In case (a) the inner Wehnelt cylinder is most retracted from the anode, the total emission is very low, and the reverse biasing field is rather weak. The emitting surface thus is very close to the filament and small; also a real crossover is observed to form in the gun. These two features facilitate a small beam size at the proximity of the gun, as shown in Figure 4. This geometry also creates the strongest diverging fields which account for the largest imaging length measured, as shown in Figure 9. Moving the inner cylinder further out, the real crossover withdraws toward the emitting surface. In case (b) the electron gun is on the verge of having a real crossover inside the Wehnelt cylinder. While the total emission current is increasing, the emitting surface begins to move out of the filament and grows in size. As the inner Wehnelt cylinder continues to be moved forward, the strength of the converging and diverging fields adjust accordingly. In case (c) the electron gun has the least spreading beam and correspondingly the furthermost virtual point of divergence. Total emission current grows further, and the larger emitting surface contributes to a wider beam-width in the proximity of the gun compared to case (a) but produces the narrowest beam in the far field due to its very small angular spread. No real crossover is present anywhere in the emission process. This relieves the energy broadening from coulombic repulsion force. As the inner Wehnelt cylinder is further moved forward, in case (d) , the emitting surface is greatly exposed to the anode voltage; the very weak diverging fields account for the shortest imaging length. While the emission current is the greatest, the beam-width and the dispersion angle are also greatest in this geometry. Note that in case (c) the minimum beam-width condition is actually the interplay of the diverging and converging lenses inside the Wehnelt cylinder. Surprisingly, by reducing the self-biasing resistor and hence increasing the total emission current, the lensing fields do not change much. Also the growth of the emitting surface is insignificant as verified by a small increase in the beam-width. This weak response is caused by the very great potential gradient in the proximity of the emitting surface which downgrades the reverse biasing fields in this situation. The lensing fields obviously operate in larger scale and depend most critically on geometrical factors. Note that the spreading of the beam is highly exaggerated in Figure 10 to help illustrate the emission profiles. In reality, the overall beam envelopes and the singular emitting profiles coming from point sources on the emitting surface very much overlap with each other.
## 3 Conclusion
The very small dispersion angle found in (A) Wehnelt cylinder demonstrated the superb ability of telefocus gun to create a very narrow, well collimated and intense electron beam with very simple design. Our new model explains the changes in the emission optics of the Steigerwald type gun from generating focused beams to creating parallel beams. Throughout the investigation, the emission current of the electron gun was set to $`0.1\mu A`$ range to avoid complications due to Coulomb repulsion force at the emitting surface. It was found that the positions of points of convergence stayed within 10% while the current could be increased up to a factor of 100. Also our $`d_w/a`$ values in Figure 2 agreed well to Braucks’ results although the general structures of the electron guns were different. These two facts point to the possibility of studying the emission optics of the telefocusing electron gun systematically based on geometric parameters (e.g. $`d_w/a`$, $`d_w/D`$) in spite of the fact that the emission surface can not be constructed reliably.
## 4 Acknowedgement
One of the authors wishes to express his gratitude to Dominik Hammer for helpful discussions, and to Chris Zernial for making the graphs of the emission optics. This work was supported by Texas Advanced Research Project and Robert A. Welch Foundation.
|
no-problem/9905/hep-ph9905440.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
It is well known that the masses and mixings of quarks show a hierarchical pattern. The mass matrices, the objects of the theoretical description, can be expressed in terms of powers of a small parameter with coefficients of order one . Since among the quarks the top quark plays an outstanding role, a natural choice for the small parameter is the quantity $`\sigma =(m_c/m_t)^{1/2}`$ . With this choice the mass ratios $`m_u:m_c:m_t`$ taken at a common scale are simply $`\sigma ^4:\sigma ^2:1`$. Recent indications for solar and atmospheric neutrino oscillations, and thus for non-zero neutrino masses, raise the question of the structure of the mass matrices for leptons. Are they related to the mass matrices of quarks?
In this article I argue for an intimate connection between quark and lepton masses and mixings. The general suggestions from grand unified theories are used: the see-saw mechanism and the expected near equality of the Dirac-neutrino mass matrix with the up-quark mass matrix. For the mass matrix of the heavy neutrinos a new hypothesis is needed. I assume that each heavy neutrino carries a quantum number of a new $`U(1)`$ symmetry which governs the leading powers of the small parameter $`\sigma `$ occurring in this mass matrix. With the single condition that these “charges” are not zero one gets strong restrictions for the form of this mass matrix. The consequences for the light neutrinos are drastic: i) the mass splitting of the two lightest neutrinos is tiny and favors the vacuum oscillation solution for solar neutrinos, and ii) the mixing matrix is close to the one for bimaximal mixing.
## 2 Quarks
Today the masses and mixings of quarks are reasonably well known with only the exception of the CP-violating phase parameter. After the choice of a convenient basis the mass matrices for up and down quarks can be written down. For instance, one can use as in a real and symmetric matrix $`m_U`$ for the up quarks and a hermitian matrix $`m_D`$ for the down quarks such that the elements $`(m_U)_{11}`$, $`(m_D)_{13},(m_D)_{31}`$ and $`(m_D)_{23},(m_D)_{32}`$ are strictly zero. This basis has the advantage that the only complex matrix element is $`(m_D)_{12}=(m_D)_{21}^{}`$. More important, by expressing the matrix elements in powers of the small quantity $`\sigma =(m_c/m_t)^{1/2}0.058\pm 0.004`$ each independent matrix element has a different power of $`\sigma `$ for the up as well as the down quark matrices with factors of order 1. Our present knowledge on masses and mixings is compatible with the results obtained from the mass matrices <sup>2</sup><sup>2</sup>2Compared to ref. , the mass of the strange quark is taken to be $`m_s\sigma /3m_b`$ instead of $`\sigma /2m_b`$ and also $`m_d/m_b6\sigma ^3`$. The smaller values seem to be more appropriate. (taken at the mass scale of the vector boson $`Z`$):
$$m_U=\left(\begin{array}{ccc}0& \sigma ^3/\sqrt{2}& \sigma ^2\\ \sigma ^3/\sqrt{2}& \sigma ^2/2& \sigma /\sqrt{2}\\ \sigma ^2& \sigma /\sqrt{2}& 1\end{array}\right)m_t,m_D=\left(\begin{array}{ccc}0& i\sqrt{2}\sigma ^2& 0\\ i\sqrt{2}\sigma ^2& \sigma /3& 0\\ 0& 0& 1\end{array}\right)m_b$$
(1)
and $`\sigma =0.057`$. For the purpose of this paper in which we want to use the up quark mass matrix also for neutrinos, it is convenient, however, to transform to a basis in which $`m_U`$ is diagonal. To leading order in $`\sigma `$ one gets
$$m_U=\left(\begin{array}{ccc}\sigma ^4& 0& 0\\ 0& \sigma ^2& 0\\ 0& 0& 1\end{array}\right)m_D=\left(\begin{array}{ccc}O(\sigma ^3)& i\sqrt{2}\sigma ^2& i\sigma ^2\\ i\sqrt{2}\sigma ^2& \sigma /3& i\sigma /\sqrt{2}\\ i\sigma ^2& i\sigma /\sqrt{2}& 1\end{array}\right)m_b.$$
(2)
Clearly, the simple factors in front of the powers of $`\sigma `$ in (1) are guessses and have to be changed, or higher order terms in $`\sigma `$ have to be included, when more precise information on masses and mixings become available. Also, for definitness, “maximal CP-violation” has been assumed. It is defined to maximize the area of the unitarity triangle with regard to changes of the phases of the off-diagonal elements keeping their magnitudes fixed. Maximum CP-violation defined this way allowed us to bring the off-diagonal elements of $`m_D`$ into the form of an antisymmetric hermitian matrix . Within the accuracy of only a few degrees one obtains a right-handed unitarity triangle with angles $`\alpha 70^o,\beta 20^o,\gamma 90^o.`$ Independent of the phases the mass matrices (1) demonstrate that masses and mixings are governed by the same small parameter in a simple fashion. With $`\sigma =0.057`$ the numerical values for the quark mass ratios at the common scale $`m_Z`$ and the absolute value of the Cabibbo-Kobayashi-Maskawa matrix CKM as obtained from (1) are
$$\frac{m_u}{m_t}=1.110^5,\frac{m_c}{m_t}=3.210^3,\frac{m_d}{m_b}=1.110^3,\frac{m_s}{m_b}=2.010^2,$$
(3)
$$Abs[CKM]=\left(\begin{array}{ccc}0.97\hfill & 0.22\hfill & 0.003\hfill \\ 0.22\hfill & 0.97\hfill & 0.040\hfill \\ 0.010\hfill & 0.039\hfill & 1\hfill \end{array}\right).$$
(4)
## 3 Neutrinos and Charged Leptons
The recent indications for neutrino oscillations imply finite neutrino masses and lepton number violation. For a thorough discussion on possible scenarios and for the relevant literature I refer to ref. . Some approaches based on grand unified theories can be found in . Here, I will take suggestions from grand unified theories without specifications of the group and Higgs representations: The standard model is extended by adding three two-component neutrino fields $`\widehat{\nu }_e,\widehat{\nu }_\mu ,\widehat{\nu }_\tau `$ which are singlets with respect to the standard model gauge group. Since the masses of these fields are not protected, the total $`6\times 6`$ neutrino mass matrix has a block structure consisting of a $`3\times 3`$ matrix $`M`$ with very large entries and a Dirac-type mass matrix $`m_\nu ^{Dirac}`$ which connects the light with the heavy fields. At the scale of the heavy neutrinos one expects a close connection between $`m_{Dirac}`$ and the charged lepton mass matrix $`m_E`$ with the up-quark mass matrix and the down-quark mass matrix, respectively . For a non-singular matrix $`M`$ the light neutrinos become Majorana particles according to the see-saw mechanism. Their mass matrix $`m_\nu `$ is given by
$$m_\nu =m_\nu ^{Dirac}M^1(m_\nu ^{Dirac})^T.$$
(5)
In the following I use the relation
$$m_\nu ^{Dirac}=m_U$$
(6)
and postpone a remark on possible deviations to the end of section 5.
Because the top quark mass is so large compared to all other quark masses, it is convenient to take a basis in which $`m_U`$ is diagonal as already done in (2). A particular interesting connection between quarks and neutrinos will exist if besides $`m_U`$ and $`m_\nu ^{Dirac}`$ also the mass matrix $`M`$ has a simple structure in this basis and the parameter $`\sigma `$ plays there a similar role. I will explore this possibility.
Let us therefore express the entries of $`M`$ in terms of powers of $`\sigma ^2`$. To give significance to such a form it should be possible to assign $`U(1)`$ generation charges to the heavy neutrino fields. Generation charges can be decisive for determining the structure of mass matrices, see e.g. . To restrict these charges I will require that the three $`U(1)`$ quantum numbers differ from each other, are not zero and that not all elements of $`M`$ vanish in the limit $`\sigma 0`$. As a consequence, two of the three fields must carry opposite charges, and $`M`$ provides for $`\sigma 0`$ a mass term of the Dirac type for a heavy neutrino, i.e. a neutrino described by two different two-component fields. The mass matrix $`M`$ which satisfies the requirement and has the entries surviving for $`\sigma 0`$ at the most symmetric place, i.e. $`\widehat{\nu }_e`$ has the opposite charge of $`\widehat{\nu }_\tau `$, has the structure
$$M=\left(\begin{array}{ccc}\sigma ^6& \sigma ^2& 1\\ \sigma ^2& \sigma ^2& \sigma ^4\\ 1& \sigma ^4& \sigma ^6\end{array}\right)M_0.$$
(7)
The $`U(1)`$ charges of $`\widehat{\nu }_e,\widehat{\nu }_\mu ,\widehat{\nu }_\tau `$ are $`3/2,1/2,3/2`$, respectively; they determine the powers of $`\sigma ^2`$. If we dismiss matrices which have determinant zero when neglecting higher orders than $`\sigma ^2`$, the form (7) is unique apart from a reflection on the cross diagonal corresponding to the charges $`3/2,1/2,3/2`$. As in the case of the matrix $`m_U`$, the unknown factors in (7) should be of order 1 . In particular, if there is a close correlation with $`m_U`$, the factor of $`\sigma ^2`$ in the first row and first column ($`p`$ in eq. (8)) should be equal or very close to one. Because of the smallness of $`\sigma ^4,\sigma ^6`$ $`M`$ can be used in the simpler form
$$M=\left(\begin{array}{ccc}0& p\sigma ^2& 1\\ p\sigma ^2& r\sigma ^2& 0\\ 1& 0& 0\end{array}\right)M_0.$$
(8)
One can check that the approximation (8) is also applicable when calculating $`m_\nu `$ according to (5), (6) even though the inverse of the matrix $`M`$ enters there. Moreover, a simple consideration of the original $`6\times 6`$ neutrino mass matrix (with zero entries in the light-light sector) shows that the coefficients $`p`$ and $`r`$ can be taken to be real.
For the mass matrix of the light neutrinos, eqs. (5-8) give
$$m_\nu =\left(\begin{array}{ccc}0& 0& r\sigma ^2\\ 0& 1& p\\ r\sigma ^2& p& p^2\end{array}\right)\frac{\sigma ^2}{r}\frac{m_t(M_0)^2}{M_0}.$$
(9)
The neutrino mass spectrum obtained from this mass matrix is interesting in view of the recent neutrino data. Taking $`r=p=1`$ and adjusting $`M_0`$ such that the largest eigenvalues $`(m_3)`$ becomes $`m_30.055`$ eV, one gets $`M_010^{12}\mathrm{GeV},m_3^2m_1^2310^3(\mathrm{eV})^2`$ and $`m_2^2m_1^210^{11}(\mathrm{eV})^2`$ . Furthermore, the neutrino mixing matrix obtained from (9) with $`p=1`$ shows the bimaximal mixing discussed in . Thus, the neutrino mass matrix (9) favors large mixing angles for the atmospheric and for the solar neutrinos, and the vacuum oscillation solution for the latter. But the neutrino mass matrix obtained here seems not compatible with the indications for $`\widehat{\nu }_\mu \widehat{\nu }_e`$ oscillation reported by the LSND collaboration .
Before calculating the neutrino properties in more detail, we have to discuss the contributions from the charged lepton mass matrix and from renormalization group effects. The charged lepton mass matrix cannot be expected to be diagonal in the basis used. But it should resemble the down-quark mass matrix shown in (1). Fortunately, because of the small mixing angles, its precise form is not of importance at present. I just take the suggestion for this matrix from ref. , transform it to our basis and use, as an example, CP-violating phases in analogy to $`m_D`$.
$$m_E=\left(\begin{array}{ccc}0& i\sqrt{\frac{3}{2}}\sigma ^2& i\sigma ^2\\ i\sqrt{\frac{3}{2}}\sigma ^2& \sigma & i\frac{\sigma }{\sqrt{2}}\\ i\sigma ^2& i\frac{\sigma }{\sqrt{2}}& 1\end{array}\right)m_\tau .$$
(10)
By diagonalizing $`m_E`$
$$m_E=U_Em_E^{diagonal}U_E^{}$$
(11)
the neutrino mass matrix (in the basis in which the charged lepton matrix is diagonal) reads
$$\stackrel{~}{m}_\nu =U_E^Tm_\nu U_E.$$
(12)
Because $`U_E`$ is not a real matrix, CP-violation effects are predicted. For CP-conserving processes it will turn out that the influence of $`U_E`$ on $`m_\nu `$ is not essential, however. Before giving numerical examples, the effects of the scale changes between the high scale $`M_0`$ and the weak scale has to be studied.
## 4 Renormalization group effects
The existence of generation quantum numbers insures the stability of the mass matrices against strong loop corrections. Since the charges of the heavy neutrinos are now fixed one can give corresponding charges to the up-quarks. Because of (6) the singlet anti-up-quark fields may carry the same charges as the heavy neutrinos. The structure of $`m_U`$ then suggests that the left handed u-quark, charm quark and top quark fields have the charges 7/2 , 1/2 , -3/2 , respectively.
The close connection between quark and lepton mass matrices assumed here must have its origin at the high scale $`M_0`$ which, as we have seen, is of order $`10^{11}10^{12}GeV`$. If not before, at least at this scale new physics will set in. It could modify the scale dependence of the gauge-coupling constant $`g_1`$ such as to unify with $`g_2`$ and $`g_3`$ at their meeting point at $`10^{16}GeV`$. In any case our task is to fix $`\stackrel{~}{m}_\nu `$ at the scale $`M_0`$ and to study the behaviour of $`\stackrel{~}{m}_\nu `$ between $`m_Z`$ and $`M_0`$.
When applying the renormalization group equations to the charged leptons, it is of advantage to transform – at all scales relevant here – the right-handed charged leptons such that the corresponding mass matrix contains the left-handed mixing matrix only
$$m_E=U_Em_E^{diagonal}$$
(13)
where $`m_E^{diagonal}`$ is a diagonal and positive definite real matrix. By inserting this matrix into the renormalization group equation, one observes that the scale changes concern the mass eigenvalues only. $`U_E`$ remains invariant: Since below $`M_0`$ the masses of the heavy neutrinos do not appear in the renormalization group equation the product $`U_E^{}\frac{}{t}U_E`$ is a real diagonal matrix. This property suffices to insure that the unitary matrix $`U_E`$ is independent of the scale function $`t=\mathrm{ln}\mu /\mu _0`$. Consequently, $`U_E`$ computed from (10), (11) can also be used at the scale $`M_0`$.
At the scale $`M_0`$ the mass matrix $`M`$ for the heavy neutrinos is obtained by replacing in (8) $`\sigma ^2`$ by $`\sigma ^2(M_0)=m_c(M_0)/m_t(M_0)`$. The mass matrix $`m_\nu `$ for the light neutrinos becomes <sup>3</sup><sup>3</sup>3Because of the uncertainties of the quark masses it is not clear whether the relation $`m_u/m_t=(m_c/m_t)^2`$ which is not strictly scale-invariant but used in (2) and (6) holds better at $`m_Z`$ or at $`M_0`$. If it holds at $`M_0`$, eq. (14) and eq. (9) (with $`\sigma =\sigma (M_0)`$ and $`m_t=m_t(M_0)`$) are identical.
$$m_\nu (M_0)=\left(\begin{array}{ccc}0& 0& r\frac{m_u(M_0)}{m_c(M_0)}\\ 0& 1& p\\ r\frac{m_u(M_0)}{m_c(M_0)}& p& p^2\end{array}\right)\frac{m_c(M_0)m_t(M_0)}{rM_0}.$$
(14)
It remains to solve the renormalization group equation for $`\stackrel{~}{m}_\nu (\mu )`$ with the boundary condition
$$\stackrel{~}{m}_\nu (M_0)=U_E^Tm_\nu (M_0)U_E.$$
(15)
According to ref. one has
$`(4\pi )^2{\displaystyle \frac{d}{dt}}\stackrel{~}{m}_\nu `$ $`=`$ $`(3g_2^2+2\lambda )\stackrel{~}{m}_\nu +{\displaystyle \frac{4}{v^2}}Tr(3m_Um_U^{}+3m_Dm_D^{}+m_Em_E^{})\stackrel{~}{m}_\nu `$ (16)
$``$ $`{\displaystyle \frac{1}{v^2}}(\stackrel{~}{m}_\nu m_E^{}m_E+m_E^Tm_E^{}\stackrel{~}{m}_\nu ).`$
$`\lambda =\lambda (t)`$ denotes the Higgs coupling constant related to the Higgs mass according to $`m_H=\lambda v^2`$ with $`v=246GeV`$. We take $`m_H(m_Z)=140GeV`$ for the numerical estimates. Eq(16) simplifies since according to (12) and (15) $`m_E`$ has to be taken in diagonal form. Solving it gives the neutrino mass matrix $`\stackrel{~}{m}_\nu `$ at the scale of the standard model. The neutrino mixing matrix $`U=U(m_Z)`$ can then be obtained by diagonalizing the hermitian matrix $`\stackrel{~}{m}_\nu \stackrel{~}{m}_\nu ^{}`$ :
$$\stackrel{~}{m}_\nu (m_Z)\stackrel{~}{m}_\nu ^{}(m_Z)=UDD^{}U^{}.$$
(17)
The diagonal matrix $`D`$
$$D=U^{}\stackrel{~}{m}_\nu (m_Z)U^{}$$
(18)
gives us the (complex) neutrino mass eigenvalues. By introducing the diagonal phase matrix $`\mathrm{\Phi }`$ which consists of the phase factors of $`D`$ with angles divided by 2, $`U`$ can be redefined: $`U\widehat{U}=U\mathrm{\Phi }`$ such that (18) gives now positive definite neutrino mass eigenvalues. $`\widehat{U}`$ expresses the light neutrino states $`\nu _e,\nu _\mu ,\nu _\tau `$ by the neutrino mass eigenstates $`\nu _1,\nu _2,\nu _3`$ according to
$$\left(\begin{array}{c}\nu _e\hfill \\ \nu _\mu \hfill \\ \nu _\tau \hfill \end{array}\right)=\widehat{U}\left(\begin{array}{c}\nu _1\hfill \\ \nu _2\hfill \\ \nu _3\hfill \end{array}\right).$$
(19)
It turns out that the mixing matrix is not strikingly different from the mixing matrix obtained by diagonalizing (9) , but it contains CP violating phases.
## 5 Results and discussions
As shown in section 2 the mass matrices of charged fermions have a simple structure. We know much less about the neutrino mass matrix but it is tempting to assume that there exists an intimate relation between the up quark mass matrix and the mass matrix of the heavy neutrinos (the singlets with respect to the standard model gauge group). Because the singlet neutrino fields couple among each other, already the mere existence of a generation quantum number which governs the powers of $`\sigma `$ severly restricts the structure of this matrix. Apart from the scale $`M_0`$ we are left with essentially only two parameters ($`r`$ and $`p`$). Applying then the see-saw mechanisme we arrived at an interesting mass matrix for the light neutrinos (Eq(9)). The neutrino mass spectrum obtained from it consists of two nearly degenerate states which are lighter by a factor of order $`\sigma ^2`$ than the third neutrino. Diagonalization of the neutrino mass matrix gives large mixing angles. Taking the heaviest mass of the light neutrino to be about $`510^2eV`$ the mass scale of the singlet neutrinos is of order $`10^{12}GeV`$. Scaling the mass matrix down from this value to the weak interaction scale and including also the mixings of the charged leptons, leads to corrections but does not change the general picture. The charged lepton matrix, together with the neutrino matrix, causes CP violating effects, however. For an illustration the form (10) of the charged lepton mass matrix is used in the following numerical examples.
Let us start by putting the parameter $`p`$ equal to one. This is an appealing choice because of the corresponding factor one in the up quark mass matrix. With this value the neutrino mixing matrix $`U`$ as obtained from (17) is of bimaximal type: Almost independent of the parameter $`r`$ the magnitudes of the elements of the mixing matrix are
$$Abs[U]=\left(\begin{array}{ccc}0.70& 0.71& 0.05\\ 0.50& 0.50& 0.71\\ 0.50& 0.50& 0.71\end{array}\right).$$
(20)
To obtain contact with the atmospheric neutrino data the product $`rM_0`$ can be adjusted to give the heaviest of the light neutrinos a mass of $`5.510^2eV`$. One finds $`rM_0710^{11}GeV`$ . The masses of the two lighter neutrinos are then $`r610^5eV`$. The mass splitting between these neutrino depends on the parameter $`r`$ in a more involved way. One has e.g. $`m_2^2m_1^2`$ equal to $`0.810^{11},6.510^{11},2.210^{10}`$ and $`5.210^{10}(\mathrm{eV})^2`$ for $`r=1`$ , $`r=2`$, $`r=3`$ and $`r=4`$ , respectivly. These mass differences are in the region of the ones needed for the vacuum oscillation solution for solar neutrinos .
To describe the neutrino surviving and transition probabilities it is convenient to introduce the abbreviations
$$S_{ik}=\mathrm{sin}^2(1.27(m_i^2m_k^2)\frac{L}{E}),T_{ik}=\mathrm{sin}(2.54(m_i^2m_k^2)\frac{L}{E}).$$
(21)
The mass differences $`m_i^2m_k^2`$ are taken in units of $`(eV)^2`$, the neutrino energy $`E`$ in MeV and $`L`$, the distance between generation and detection point in meter. The probabilities obtained from (19) for $`p=1`$ and $`r=2`$ are
$`P(\nu _e\nu _e)`$ $`=`$ $`1S_{21}0.004S_{31}0.004S_{32}`$
$`P(\nu _\mu \nu _\mu )`$ $`=`$ $`10.25S_{21}0.51S_{31}0.49S_{32}`$
$`P(\nu _\tau \nu _\tau )`$ $`=`$ $`10.25S_{21}0.50S_{31}0.50S_{32}`$
$`P(\nu _e\nu _\mu )`$ $`=`$ $`0.50S_{21}+0.007S_{31}0.003S_{32}`$
$`+`$ $`0.02T_{21}0.02T_{31}+0.02T_{32}`$
$`P(\nu _e\nu _\tau )`$ $`=`$ $`0.50S_{21}0.003S_{31}+0.007S_{32}`$
$``$ $`0.02T_{21}+0.02T_{31}0.02T_{32}`$
$`P(\nu _\mu \nu _\tau )`$ $`=`$ $`0.25S_{21}+0.50S_{31}+0.49S_{32}`$ (22)
$`+`$ $`0.02T_{21}0.02T_{31}+0.02T_{32}.`$
Only the small numbers appearing in (5) depend notably on the value of the parameter $`r`$.
For the solar neutrinos one can set $`S_{31}=S_{32}=1/2,T_{31}=T_{32}=0`$. For the atmospheric neutrinos, on the other hand, one can put $`S_{21}=T_{21}=0,S_{31}=S_{32},T_{31}=T_{32}`$. From (5) maximal mixing for the solar as well as the atmospheric neutrinos is obvious. It is also seen, that CP violating effects described by the factors multipying $`T_{ik}`$ are small in this scenario.
The bimaximal mixing obtained so far gets spoiled if the parameter $`p`$ is sizeable different from one: the mixing angle relevant for atmospheric neutrinos is sensitive to the value of $`p`$. Still, deviations from $`p=1`$ by up to 25 % are tolerable. $`p=0.75`$ e.g. gives for $`P(\nu _\mu \nu _\mu )`$ and $`S_{21}=0,S_{31}=S_{32}`$
$$P(\nu _\mu \nu _\mu )=10.93S_{31}.$$
(23)
The Dirac neutrino matrix may differ from the up quark mass matrix. However, if both matrices commute at the scale of $`M_0`$, as one might expect, the corresponding changes can be absorbed into the parameters $`r`$ and $`M_0`$. An effective parameter $`r10`$ would lead to a mass difference $`m_2^2m_1^210^8(eV)^2`$ and thus to an energy independent suppression of solar neutrinos, in some conflict with the results of the Homestake collaboration.
Acknowledgement
It is a pleasure to thank my collegues Dieter Gromes and Christof Wetterich for very useful comments.
|
no-problem/9905/quant-ph9905011.html
|
ar5iv
|
text
|
# Quantum equivalent of the Bertrand’s theorem
## Abstract
A procedure for constructing bound state potentials is given. We show that, under the natural conditions imposed on a radial eigenvalue problem, the only special cases of the general central potential, which are exactly solvable and have infinite number of energy eigenvalues, are the Coulomb and harmonic oscillator potentials.
The celebrated Bertrand’s theorem in classical mechanics states that the only central forces that result in closed orbits for all bound particles are the inverse square law and Hooke’s law . The extension of the Bertrand’s theorem to spherical geometry led again to two potentials for which closed orbits exist . In the limit of vanishing curvature, these potentials reduced to the oscillator and Coulomb potentials. As is well known, these two potentials are also special in quantum mechanics. These are the only ones, which can be solved exactly in arbitrary dimensions and have infinite number of bound states. From the symmetry point of view, the degeneracy structure of these potentials admit a Lie group larger than the $`SO(d)`$ group for a non-relativistic particle moving in $`d`$-dimensional Euclidean space . For the Coulomb problem, the group is $`SO(d+1)`$ for bound states and $`SO(d,1)`$ for scattering states. For harmonic oscillator, the corresponding group is $`SU(d)`$. In case of spherical geometry, the eigenvalue problem can be solved exactly only for the two, above mentioned potentials . These are the only central potentials for which the corresponding Schrödinger equations can be factorized to yield both the energy and angular momentum raising and lowering operators . These two potentials are also special from the semi-classical point of view . Recently, it has been shown that, starting from a suitably gauged action, these two potentials can be derived in lower dimensions by appropriate gauge fixing . However, unlike the classical case where a general central potential, under the twin constraints of bound and closed orbits, led to the Coulomb and oscillator potentials; in quantum mechanics, the natural conditions imposed on a radial eigenvalue problem, in a given dimension, have not yielded these two potentials as unique choices.
In this paper, we construct a family of potentials, dependent on a parameter $`\alpha `$, keeping in mind the two important features of the quantum mechanical bound state problems, i.e., the discreteness of their eigenspectra and the polynomial nature of the corresponding normalizable wavefunctions. Interestingly, under these general conditions, there are only two central potentials which are exactly solvable and have infinite number of bound states. These two unique potentials, appearing for the two values of $`\alpha `$, one and two, are the Coulomb and the harmonic oscillator potentials, respectively. This quantum mechanical situation is an exact analogue of the Bertrand’s theorem in classical mechanics.
We begin with a Fock space spanned by the monomials, $`\rho ^n`$; where, $`n=0,1,2,\mathrm{}`$. The most general diagonal operator which has a well defined action on this Fock space can be written as $`_{k=\mathrm{}}^{\mathrm{}}C_kD^k`$; here, $`D\rho \frac{d}{d\rho }`$ and $`C_k`$’s are constants. The two special cases corresponding to, (i) all the $`C_k`$’s being zero except $`C_1=1`$ and $`C_0=ϵ`$ (modulo an overall scaling factor) and (ii) all the $`C_k`$’s being zero, except $`C_2=1`$, $`C_1=\beta `$, and $`C_0=\delta `$, can be unambiguously mapped to the Schrödinger equation by a series of similarity transformations. The first case leads to a class of potentials, dependent on a parameter $`\alpha `$. It is explicitly shown that, the only two exactly solvable radial potentials, having infinite number of energy eigenvalues, are the harmonic and Coulomb potentials. Our method naturally gives their respective eigenfunctions. Further possibilities of solvable potentials are analyzed by performing a point canonical transformation. It is found that, no other exactly solvable central potential results and Morse potential arises as a conditionally exactly solvable case of this general potential. We then construct the second class of potentials and point out that, unlike the first, this case can not be solved as a radial eigenvalue problem, for infinite number of levels, for all the allowed values of $`\alpha `$ and for the various possible choices of the other parameters appearing in it.
The choice (i), mentioned above, yields,
$$\left(\rho \frac{d}{d\rho }+ϵ\right)\eta (\rho )=0;$$
(1)
where, $`\eta (\rho )=\rho ^ϵ`$.
Performing a similarity transformation (ST): $`\varphi (\rho )=\mathrm{exp}\{\widehat{A}/\alpha \}\eta (\rho )`$, where, $`\widehat{A}a\rho ^{2\alpha }\frac{d^2}{d\rho ^2}+b\rho ^{1\alpha }\frac{d}{d\rho }+c\rho ^\alpha `$, $`\alpha `$, $`a,b`$ and $`c`$ being arbitrary parameters, one gets
$$(\rho \frac{d}{d\rho }+a\rho ^{2\alpha }\frac{d^2}{d\rho ^2}+b\rho ^{1\alpha }\frac{d}{d\rho }+c\rho ^\alpha +ϵ)\varphi =0.$$
(2)
The above choice of the ST is motivated from our desire to map (1) to the Schrödinger equation and also to have a closed form expression for the second order differential equation.
Choosing $`\varphi =\rho ^{\alpha 2}\chi `$, it is straightforward to check that,
$`{\displaystyle \frac{d^2\chi }{d\rho ^2}}`$ $`+{\displaystyle \frac{1}{a}}\left(\rho ^{\alpha 1}+(b4a+2a\alpha )\rho ^1\right){\displaystyle \frac{d\chi }{d\rho }}`$ (4)
$`+{\displaystyle \frac{1}{a}}\left((ϵ+\alpha 2)\rho ^{\alpha 2}+(a\alpha ^2+(b5a)\alpha +6a2b+c)\rho ^2\right)\chi =0.`$
For the purpose of clarity, we will cast (4) as a radial eigenvalue equation in three dimensions; it can be easily seen that our method generalizes to $`d`$-dimensions. Taking $`\psi (\rho )=S(\rho )\chi (\rho )`$, one gets
$$\left(\frac{d^2}{dr^2}+\frac{2}{r}\frac{d}{dr}\frac{l(l+1)}{r^2}+\frac{2m}{\mathrm{}^2}(EV(r))\right)\psi =0,$$
(5)
where, the radial coordinate $`r=\lambda \rho `$, $`\lambda `$ being an appropriate length scale. Here, $`S(\rho )=\rho ^{A_0}\mathrm{exp}\{\rho ^\alpha /(2a\alpha )\}`$, $`A_0(b6a+2a\alpha )/(2a)`$ and $`l`$ is the conventional angular momentum quantum number.
The explicit form of the potential is
$$V(r)=E+g_1r^{2(\alpha 1)}+g_2r^{\alpha 2}+g_3r^2,$$
(6)
where, $`g_1=\stackrel{~}{g}_1\lambda ^{2(1\alpha )}`$, $`g_2=\stackrel{~}{g}_2\lambda ^{2\alpha }`$, $`g_3=\stackrel{~}{g}_3\lambda ^2`$, $`\stackrel{~}{g}_1=\mathrm{}^2/(8ma^2\lambda ^2)`$, $`\stackrel{~}{g}_2=\mathrm{}^2(2A_0\alpha +52ϵ)/(4ma\lambda ^2)`$ and $`\stackrel{~}{g}_3=\mathrm{}^2\{A_0(A_0+1)l(l+1)[a(2\alpha )(3\alpha )+b\alpha 2b+c]/a\}/(2m\lambda ^2)`$.
The corresponding unnormalized eigenfunctions, obtained through inverse similarity transformations, are
$$\psi (\rho )=\rho ^{(\pm \sqrt{\mathrm{\Delta }}1)/2}\mathrm{exp}\left\{\rho ^\alpha /(2a\alpha )\right\}L_n^{\pm \sqrt{\mathrm{\Delta }}/\alpha }\left(\rho ^\alpha /(a\alpha )\right),$$
(7)
where, $`\mathrm{\Delta }(1b/a)^24c/a`$. Sign of $`\sqrt{\mathrm{\Delta }}`$ should be chosen such that the resulting wavefunctions are normalizable. The Laguerre polynomial, $`L_n^{\pm \sqrt{\mathrm{\Delta }}/\alpha }`$, is obtained by demanding that the $`(n+1)th`$ term in $`\mathrm{exp}\{\widehat{A}/\alpha \}\rho ^{ϵ_n}`$ is equal to zero; i.e., $`\widehat{A}^{n+1}\rho ^{ϵ_n}=0`$. This gives
$$ϵ_n^\pm =\alpha n(1/2)(1b/a)\pm (1/2)\sqrt{\mathrm{\Delta }}.$$
(8)
Before proceeding to study the exact solvability of the special cases of the bound state problem obtained in (6), it is worth repeating the corresponding scenario in classical mechanics. In classical case, one demands the bounded motion of a test particle, subjected to a general central force of the form $`F(r)=\kappa r^{\beta ^23}`$, to trace a closed orbit; this leads to two possible values for $`\beta `$, one and two. These values give rise to the well known inverse-square and Hooke’s laws, respectively. Interestingly, our general construction leads to a central potential, governed by a real parameter $`\alpha `$. Much akin to the classical case, only for $`\alpha =1`$ and $`2`$, as will be shown below, this potential can be solved for the complete set of energy eigenvalues. This is a quantum analogue of the classical Bertrand’s theorem.
The general central potential in (6) has free parameters $`g_1`$ or $`g_2`$ for $`\alpha =1`$ or $`2`$: one can then impose $`E+g_i=0`$, for $`i=1,2`$, in order that the potential is independent of constant terms and the corresponding Schrödinger equation is exactly solvable for all the energy eigenvalues.
Case I: $`\alpha =1`$.
To cast the eigenvalue problem in the standard radial form, i.e., to make $`g_3=0`$, we choose $`a=1/(2\sigma )`$, $`b=2/\sigma `$ and $`c=[2l(l+1)]/(2\sigma )`$ with $`\sigma ^2=2m\lambda ^2E/\mathrm{}^2`$. We now demand that $`E+g_1=0`$. (6) then reduces to the Coulomb potential,
$$V(r)=\frac{g_2}{r}.$$
Choosing $`g_2=e^2/(4\pi ϵ_0)`$, $`ϵ_n^{}`$ in (8) gives the energy eigenvalues as
$$E=g_1=\frac{me^4}{32\pi ^2ϵ_0^2\mathrm{}^2}\frac{1}{(n+l+1)^2}.$$
Case II: $`\alpha =2`$.
Akin to the previous case, choosing $`a=1/(2\sigma )`$, $`b=1/\sigma `$ and $`c=l(l+1)/(2\sigma )`$ with $`\sigma =m\omega \lambda ^2/\mathrm{}`$, one can check that $`g_3=0`$. Further demanding $`E+g_2=0`$, the potential reduces to the harmonic oscillator potential:
$$V(r)=g_1r^2.$$
For, $`g_1=m\omega ^2/2`$, the eigenspectra is
$$E=g_2=\mathrm{}\omega (2n+l+3/2).$$
It is worth pointing out that only for these two values of $`\alpha `$, one can make the potential independent of constants and hence obtain the full spectrum. We would like to add that, the above two cases can also be solved for $`g_30`$. For values of $`\alpha `$ different from one and two, the potential can be solved for $`E=0`$, but for different values of the angular momentum quantum number l. This situation is similar to the classical scenario, where, some values of $`\beta `$ other than $`1`$ and $`2`$ may give rise to bounded motion, which is not closed.
At this moment, one is naturally curious to see whether there exists the possibility of exactly solvable models emerging, under a point canonical transformation (PCT) $`\rho f(\rho )`$ when $`\alpha 1,2`$. Performing the PCT, (2) can be brought to the form (5), with a potential given by
$`{\displaystyle \frac{2m\lambda ^2}{\mathrm{}^2}}V(\lambda \rho )`$ $`=`$ $`{\displaystyle \frac{2m\lambda ^2}{\mathrm{}^2}}E+{\displaystyle \frac{1}{4a^2}}(f^{}f^{\alpha 1})^2+{\displaystyle \frac{1}{2a^2}}[b+a(\alpha 12ϵ)](f^{})^2f^{\alpha 2}`$ (11)
$`+\left[{\displaystyle \frac{b}{2a}}\left({\displaystyle \frac{b}{2a}}1\right){\displaystyle \frac{c}{a}}\right]\left({\displaystyle \frac{f^{}}{f}}\right)^2{\displaystyle \frac{l(l+1)}{\rho ^2}}`$
$`+{\displaystyle \frac{3}{4}}\left({\displaystyle \frac{f^{\prime \prime }}{f}}\right)^2{\displaystyle \frac{1}{2}}{\displaystyle \frac{f^{\prime \prime \prime }}{f^{}}}2{\displaystyle \frac{f^{\prime \prime }}{ff^{}}}+(2\alpha ){\displaystyle \frac{f^{}f^{\prime \prime }}{f^2}}+2\left({\displaystyle \frac{f^{\prime \prime }}{f^{}}}\right)^2,`$
where, $`f^{}df/d\rho `$.
Different choices of $`f`$ will give rise to different potentials; however, all of them may not be exactly solvable. For example the potential in (11) can be made independent of constants, even for continuous values of $`\alpha `$, if one chooses $`f(\rho )=e^\rho `$:
$`{\displaystyle \frac{2m\lambda ^2}{\mathrm{}^2}}V(\lambda \rho )={\displaystyle \frac{1}{4a^2}}e^{2\alpha \rho }+{\displaystyle \frac{1}{2a^2}}[b+a(\alpha 12ϵ)]e^{\alpha \rho }2e^\rho {\displaystyle \frac{l(l+1)}{\rho ^2}}.`$ (12)
Since, the above potential contains the term ‘$`l(l+1)\rho ^2`$’, one can only solve the eigenvalue equation for a fixed ‘$`l`$’. This is the case of the conditionally exactly solvable potential encountered in Ref. . Physically this implies that, the potential is exactly solvable only in one dimension, i.e, without the centrifugal term. The corresponding energy eigenvalues are independent of $`l`$; they can be obtained from
$$\frac{2m\lambda ^2}{\mathrm{}^2}E+\frac{b}{2a}\left(\frac{b}{2a}1\right)\frac{c}{a}\alpha +17/4=0,$$
and the unnormalized eigenfunctions are
$$\psi =\frac{1}{\rho }f^{(\pm \sqrt{\mathrm{\Delta }}+1)/2}\mathrm{exp}\left\{f^\alpha /(2a\alpha )\right\}L_n^{\pm \sqrt{\mathrm{\Delta }}/\alpha }\left(f^\alpha /(a\alpha )\right).$$
One can easily see that, the Morse potential emerges as a special case for $`\alpha =1`$.
Now, we proceed to find the second class of potentials starting from
$$\left(D^2+\beta D+\delta \right)\overline{\eta }(\rho )=0.$$
(13)
Writing $`\overline{\chi }(\rho )=\mathrm{exp}\{\gamma \widehat{O}\}\overline{\eta }(\rho )`$, where $`\widehat{O}a\rho ^\alpha \frac{d}{d\rho }+b\rho ^{\alpha 1}`$, one gets
$$\left(F_1(\rho )\frac{d^2}{d\rho ^2}+F_2(\rho )\frac{d}{d\rho }+F_3(\rho )+\delta \right)\overline{\chi }=0,$$
(14)
where, $`F_1(\rho )=A_1\rho ^2(\rho ^{\alpha 1}+A_2)^2`$, $`F_2(\rho )=B_1\rho (\rho ^{\alpha 1}+B_2)^2+B_3`$ and $`F_3(\rho )=C_1(\rho ^{\alpha 1}+C_2)^2+C_3`$. The coefficients are given by $`A_1=A_2^2`$, $`A_2^1=(\alpha 1)\gamma a`$, $`B_1=(\alpha a+2b)/(aA_2^2)`$, $`B_2=[2b+a(3+\beta \gamma )]/(2aA_2B_1)`$, $`B_3=1+\beta B_1B_2^2`$, $`C_1=b(a\alpha a+b)/(a^2A_2^2)`$, $`C_2=b(\beta \gamma +1)/(2aA_2C_1)`$ and $`C_3=C_1C_2^2/b`$. Here, if one chooses $`\widehat{A}`$ instead of $`\widehat{O}`$, then the resulting equation will contain still more higher derivative terms and can not be straightforwardly mapped to a Schrödinger equation.
Now, writing $`\overline{\chi }(\rho )=\left(F_1(\rho )S(\rho )\right)^{-1}\overline{\psi }(\rho )`$, where $`S(\rho )`$ can be obtained from
$$\frac{1}{S}\frac{dS}{d\rho }=\frac{F_2(\rho )-2F_1^{\prime }(\rho )}{2F_1(\rho )}-\frac{1}{\rho },$$
it can be checked that $`\overline{\psi }`$ obeys the Schrödinger equation with a potential,
$`\stackrel{~}{V}(\lambda \rho )=\stackrel{~}{E}+\frac{[B_1\rho (\rho ^{\alpha -1}+B_2)^2+B_3][D_1\rho (\rho ^{\alpha -1}+D_2)^2+D_3]}{4A_1^2\rho ^4(\rho ^{\alpha -1}+A_2)^4}+\frac{D_1(\rho ^{\alpha -1}+D_2)[(2\alpha -1)\rho ^{\alpha -1}+D_2]}{A_1\rho ^2(\rho ^{\alpha -1}+A_2)^2}+\frac{8}{\rho ^2}\left(\frac{\alpha \rho ^{\alpha -1}+A_2}{\rho ^{\alpha -1}+A_2}\right)^2\left(\frac{1}{\rho }+\frac{\alpha -1}{\rho ^{\alpha -1}+A_2}+\frac{\alpha (\alpha -1)}{\alpha \rho ^{\alpha -1}+A_2}\right)-\frac{C_1(\rho ^{\alpha -1}+A_2)^2+C_3+\delta }{A_1\rho ^2(\rho ^{\alpha -1}+A_2)^2}-\frac{l(l+1)}{\rho ^2},`$ (18)
Here, $`\stackrel{~}{V}(\rho )\equiv \frac{2m\lambda ^2}{\hbar ^2}V(\lambda \rho )`$, $`\stackrel{~}{E}\equiv \frac{2m\lambda ^2}{\hbar ^2}E`$, $`D_1=(2b-3\alpha a)/(aA_2^2)`$, $`D_2=[2b+a(\beta \gamma -4\alpha -1)]/(2A_2D_1)`$ and $`D_3=1+\beta -4/D_1-D_1D_2^2`$.
Unlike the first class, these potentials cannot be made independent of constants as a general radial problem for any special values of $`\alpha `$. Hence, the possibility of obtaining an infinite number of levels in arbitrary dimensions does not arise here. However, they can be solved for the $`E=0`$ state for different values of $`l`$.
At this point, we would like to add that there have been many works in the literature where a point canonical transformation is used, after writing $`\psi (x)=f(x)F[g(x)]`$, for mapping the Schrödinger equation to the confluent hypergeometric or hypergeometric equations . Hence, from the known solutions of these equations, one is able to obtain solvable potentials in the Schrödinger eigenvalue problem, after imposing certain restrictions on $`f`$ and $`g`$. In our case, we have started with the diagonal operators $`D\equiv \rho \frac{d}{d\rho }`$ and $`D^2+\alpha D`$ acting on the space of monomials $`\rho ^n`$. By inverse transformation, we not only obtained the Schrödinger eigenvalue equation, but also the connection of the respective eigenfunctions with the monomials. In this approach, the special nature of the Coulomb and oscillator potentials, i.e., exact solvability with an infinite number of levels in arbitrary dimensions, came out naturally without taking recourse to a PCT.
We would also like to point out that a number of Calogero-Sutherland type interacting many-body Hamiltonians, both in one and higher dimensions, can be mapped to the Euler operator $`x_i\frac{\partial }{\partial x_i}+c`$ through similarity transformations. In fact, part of our motivation behind this work is a theorem established in Ref. :
All $`D`$ dimensional $`N`$ particle Hamiltonians, which can be brought through a suitable transformation to the generalized form $`\stackrel{~}{H}=\sum _{l=1}^D\sum _{i=1}^Nx_i^{(l)}\frac{\partial }{\partial x_i^{(l)}}+ϵ+\widehat{A}`$, can also be mapped to $`\sum _{l=1}^D\sum _{i=1}^Nx_i^{(l)}\frac{\partial }{\partial x_i^{(l)}}+ϵ`$ by $`\mathrm{exp}\{d^{-1}\widehat{A}\}`$; here, the operator $`\widehat{A}`$ is any homogeneous function of $`\frac{\partial }{\partial x_i^{(l)}}`$ and $`x_i^{(l)}`$ with degree $`d`$ and $`ϵ`$ is a constant. For the normalizability of the wave functions, one needs to check that the action of $`\mathrm{exp}\{d^{-1}\widehat{A}\}`$ on an appropriate linear combination of the eigenstates of $`\sum _{l=1}^D\sum _{i=1}^Nx_i^{(l)}\frac{\partial }{\partial x_i^{(l)}}`$ yields a polynomial solution. It will be of great interest to apply the method employed here to many-body systems and examine carefully how far the results obtained for the single particle case generalize to $`N`$-particle systems. It is worth mentioning here that, apart from the Calogero-Sutherland model, which has recently been mapped to harmonic oscillators, the other many-particle exactly solvable system is of Coulombic type.
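The mechanism behind this theorem is a single commutator identity: for an operator $`\widehat{A}`$ of homogeneity degree $`d`$, the Baker-Campbell-Hausdorff series truncates after one term. A minimal sketch in our notation, with $`E`$ denoting the Euler operator above:

```latex
% Degree-d homogeneity of \hat{A} with respect to the Euler operator E means
[E,\hat{A}] = d\,\hat{A}
\quad\Longrightarrow\quad
e^{d^{-1}\hat{A}}\,(E+\epsilon+\hat{A})\,e^{-d^{-1}\hat{A}}
  = E+\epsilon+\hat{A}+d^{-1}[\hat{A},\,E] = E+\epsilon,
% since [\hat{A},E] = -d\hat{A} and all higher nested commutators vanish.
```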
The authors would like to acknowledge useful discussions with Profs. V. Srinivasan and S. Chaturvedi. N.G. thanks U.G.C. (India) for financial support through the S.R.F. scheme.
# High- and low energy nonthermal X-ray emission from the cluster of galaxies A 2199
## 1 Introduction
X-ray emission from clusters of galaxies was detected already in the early days of X-ray astronomy. Initially the origin of the emission was not clear: thermal bremsstrahlung and non-thermal inverse-Compton emission were two common explanations. The detection of iron line emission changed the common interpretation in favour of a thermal origin, although searches for non-thermal X-ray emission have continued.
Rather surprisingly, the Extreme UltraViolet Explorer (EUVE) found evidence for a soft X-ray excess in two of the brightest and most nearby clusters: Coma (Lieu et al. 1996a ) and Virgo (Lieu et al. 1996b ). Later it was also found in A 1795 (Mittaz et al. (1998)), and Bowyer et al. (1998) mention its detection in A 2199 and A 4038. This soft excess is unrelated to the well-known cooling flow, since it is also observed in Coma (which does not contain a cooling flow) and its relative strength as compared to the thermal component increases outwards for three of the five clusters. Detailed modelling favours a non-thermal origin for the soft excess, for instance inverse-Compton (IC) emission by cosmic ray electrons on the cosmic microwave background radiation (see e.g. Sarazin and Lieu (1998)).
At the high-energy part of the spectrum, the high sensitivity of the PDS instrument aboard BeppoSAX allowed the detection of a hard excess in A 2199 (Kaastra et al. (1998)) and Coma (Fusco-Femiano et al. (1999)). In the case of Coma, the data are consistent with the IC interpretation and with the radio spectrum (Fusco-Femiano et al. (1999); Lieu et al. (1999)).
In this paper we analyze the X-ray data from A 2199, a cluster where both a soft and hard excess was reported. A 2199 is a bright cooling-flow cluster at redshift 0.0303, with a very low galactic column density of $`8.1\times 10^{23}`$ m<sup>-2</sup>, as measured accurately by one of us (Lockman). This allows the spectrum to be observed down to energies of $``$0.1 keV. We use data obtained by all narrow-field instruments of BeppoSAX, the DS instrument aboard of EUVE and the ROSAT PSPC detector.
## 2 Data reduction
For the purpose of our data analysis, the cluster was divided into 7 concentric annuli, centered around the bright cD galaxy, with outer radii of 3, 6, 9, 12, 15, 18 and 24′. The cooling flow is almost completely contained within the central annulus (0.16 Mpc). We use $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$ throughout this paper.
The instruments of BeppoSAX are described by Boella et al. (1997) and references therein. Briefly, the LECS (1 instrument) and MECS (3 nearly identical instruments) are imaging GSPCs with an energy resolution that varies between 0.05–0.7 keV for energies between 0.1–10 keV (LECS) and 0.2–0.6 keV for energies between 1–10 keV (MECS). The spatial resolution is of the order of an arcminute but degrades towards low energies. The HPGSPC and PDS (12–100 keV) are non-imaging, collimated detectors suited for observing medium (above 4 keV) and high-energy (above 12 keV) X-rays, respectively.
The BeppoSAX observations were obtained between 21–23 April 1997 with an effective exposure time of 100 ks for the MECS instrument. LECS spectra were extracted for the 6 annuli within 18′, and MECS spectra for all 7 annuli. All subsequent data analysis and background subtraction was done using the January 1999 calibration release of BeppoSAX.
The background for the LECS and MECS spectra was subtracted using the standard background files taken at empty fields at high galactic latitude. The background consists of a cosmic X-ray and a particle contribution. The total MECS background during the week of observation was constant, 0.187$`\pm `$0.004 c/s, and within its statistical uncertainty of 2 % consistent with the long-term (3 months) background (0.191$`\pm `$0.001 c/s). This background is at the same level of the average background of the blank fields of Deep SAX exposures (0.187 c/s). The MECS spectra were extracted for the individual units separately and combined after background subtraction.
The January 1999 SAXDAS release has been used for the effective area, point-spread function and vignetting of the MECS data. Strongback obscuration effects in the MECS have been taken into account by assuming 550 $`\mu `$m of Be with a geometry as described by Boella et al. (1997), in addition to the 50 $`\mu `$m that is present for the entire field of view. Note that near the strongback around 10′ off-axis there may be some systematic uncertainty of at most 20 % in the transmission of the strongback, due to a lack of calibration data. The vignetting correction for the LECS was derived from the SAXDAS/LEMAT raytrace code, assuming azimuthal symmetry around the appropriate center. The correction for the support grid was also derived from that package. The effects of vignetting and strongback obscuration for the MECS and LECS data have been taken into account in the response matrices created for this observation. Collimator vignetting for the HPGSPC and PDS has also been taken into account. The spectral data for each instrument and region have been rebinned to one third of the FWHM and in regions with poorer statistics even more. Contamination by the nearby AGN EXO 1627.3+4014 (at 32′ from the core of A 2199) in the PDS spectrum is unlikely given its steep X-ray spectrum (photon index 3, cf. the ROSAT Bright Source Catalog).
The EUVE data were obtained in 1997 (June 16-18), with an effective exposure time of 49 ks. Data from the central 3′ annulus were omitted from the analysis since this region contains the dead spot of the DS detector. We used data for the 4 annuli between 3–15′. Outside that radius the signal is too weak.
The ROSAT PSPC data were obtained during the PV phase of ROSAT on 18–19 July 1990 with an exposure time of 8.6 ks. The PSPC data were rebinned into 12 data channels with a width of about 1/3 of the instrumental FWHM and only data from channel 47–198 (about 0.39–1.98 keV) were used. The reason for omitting the lowest PSPC data channels is that we were not able to obtain spectral fits that are consistent with the LECS and EUVE data if this energy range is included. The PSPC calibration for very soft sources is known to be problematic in this energy range, and moreover the PSPC spectral resolution is twice as poor as the LECS spectral resolution in the same energy band. A systematic error of 1 % was added to the count rate for all data channels. ROSAT spectra were extracted for the 6 annuli within a radius of 18′, and also for an additional annulus between 25–28′. In between there is too much uncertainty due to the strongback structure of the PSPC detector.
The point-spread function (psf) of the LECS and MECS instruments is a sensitive function of energy. For example, for the LECS the 50% encircled energy radius varies from 4′ at 0.2 keV to 1.2′ at 8 keV. The core radius and the size of the cooling flow are about 2′ (Jones and Forman (1984); Siddiqui et al. (1998)), and the shape of the psf is similar to the shape of the cluster radial profile: in fact, the MECS psf is parameterized as the sum of a Gaussian and a King profile. Therefore the spectra of the different annuli need to be fitted simultaneously, taking into account the energy-dependent overlap of the response for the different regions. We have taken this overlap into account in the response matrix used for the spectral fitting. Briefly, we took the vignetting and strongback obscuration into account, and lacking more information we assumed the point-spread function to be constant over the field of view. This last assumption is strictly true only for the central 10′, but since our extraction radii are relatively large as compared to the spatial resolution of the instruments no large errors are made. The on-axis psf of the LECS and MECS were derived from the latest (January 1999) calibration release (see Kaastra et al. (1999) for further details).
Since the spatial resolution of the ROSAT/PSPC and EUVE/DS detectors is better than that of the BeppoSAX instruments, we ignored the response overlap between annuli in those cases. Our final data set consists of 26 spectra from 8 different annuli obtained by 6 different instruments. Spectral analysis was done with version 2.0 of the SPEX package (Kaastra et al. (1996)).
## 3 Spectral analysis
The initial spectral model consists of a thermal plasma in collisional ionisation equilibrium (the Mewe-Kaastra-Liedahl model) for each annulus. We initially assumed the cluster to be isothermal. For the inner regions the possible effects of resonance scattering have also been taken into account for the iron K$`\alpha `$ complex. In addition, a cooling-flow model with partial absorption has been included for the central annulus. The galactic column density was fixed at $`8.1\times 10^{23}`$ m<sup>-2</sup>. Again, more details will be presented by Kaastra et al. (1999).
Our best-fit model has $`\chi ^2=785`$ for 614 degrees of freedom (dof), for a temperature of 4.71$`\pm `$0.13 keV. This fit is formally not acceptable. An inspection of the fit residuals shows that there is excess flux at large radii for both low and high energies. This is illustrated in Fig. 1, which shows that there is a soft excess as compared to the thermal model in the DS data and the LECS data below $`\sim `$0.2 keV, starting from a radius of about 6′ and increasing in relative strength up to 2–3 times the thermal count rate near 15′. Other evidence for spectral softening at large radii is obtained from the ratio of the Einstein IPC to ROSAT PSPC fluxes as derived from archival data. This ratio is constant from 2–12′, but starts decreasing beyond that radius.
Also, there is a hard excess shown by the MECS data above 7–8 keV (Fig. 2), starting at slightly larger radii than the soft excess and also with increasing relative strength up to 2 times the thermal count rate. Between 9–24′, the total 8–10 keV count rate is 5.4$`\pm `$0.6 counts/ks, while the best-fit thermal model predicts only 3.4 counts/ks. For comparison, the subtracted background is 38.2 counts/ks, of which 33.4 counts/ks can be attributed to the particle background. The hard excess is consistent with the PDS data: the observed 17–100 keV PDS count rate is 0.090$`\pm `$0.024 c/s, while the thermal model predicts only 0.038 c/s. The uncertainties on the PDS data are not sufficient to prove by themselves the existence of the hard X-ray tail in view of the systematic uncertainties in the PDS background subtraction. Based upon the source distribution in the HEAO-1 A4 all sky survey, the probability to find an extragalactic hard X-ray source of this strength in the PDS-FOV around A2199 is only $`\sim `$10 %. From a map of the observed hardness ratio as obtained by the MECS instrument we can conclude that the hard excess is not due to a few discrete point sources in the field of view, but is distributed over all azimuthal angles.
The hard tail is the dominant component outside radii of about 12′ at energies above 8–10 keV. Could the hard tail have been observed before with collimated instruments like EXOSAT or GINGA? The answer is: probably not. For example, in the 8–10 keV band it constitutes only a few percent of the total flux. It is only through the combination of high sensitivity in the hard X-ray band (the PDS instrument) and the spatially resolved spectroscopy of the MECS that this component is detected. Without spatially resolved spectroscopy a part of the tail would be accommodated by a slight increase in cluster temperature. This is confirmed by the GINGA/SSS data of White et al. (1994), who measure a temperature of 4.74$`\pm `$0.09 keV, consistent with our present fit, and significantly larger than the ASCA temperature of 4.17$`\pm `$0.11 keV in the 3–11′ range (Mushotzky et al. (1996)).
In order to have a better understanding of the hard- and soft excess emission, we extended our spectral model with a power law component in the outer radial zones. The photon index of this power law was kept the same for all regions. The best-fit value for this photon index is 1.81$`\pm `$0.25. The power law component reproduces the hard excess of the spectrum as well as a part of the soft excess. We did not include the 25–28′ annulus in our spectral fit, since only ROSAT PSPC data were available for this region. However, the spectral shape of these outermost PSPC data was consistent with our spectral model in the 18–24′ annulus, and we used the PSPC data to estimate the luminosity in this region. The total 0.1–100 keV luminosity of the power law component integrated over the cluster is $`(1.30\pm 0.32)\times 10^{37}`$ W, to be compared to $`(6.2\pm 0.5)\times 10^{37}`$ W for the hot isothermal component and $`(1.11\pm 0.15)\times 10^{37}`$ W for the cooling flow.
We also need an additional soft component in the 6–12′ annuli in order to represent the DS and LECS flux in the 0.1–0.3 keV band. Due to the poor spectral resolution at low energies the shape of this additional soft component is not well constrained; we obtained good results using a narrow line feature at 0.19$`\pm `$0.01 keV, but other spectral shapes can also represent the data reasonably. The additional soft component has an absorption-corrected luminosity of $`(1.2\pm 0.3)\times 10^{36}`$ W in the 0.1–0.3 keV band, with approximately equal luminosity in the 6–9′ and 9–12′ annuli. This luminosity is 25 % larger than the luminosity of the thermal component in the same annuli. In this 6–12′ region and 0.1–0.3 keV spectral band the subtracted LECS background is smaller than 15 % of the cluster signal, while the large scale variations of the background in this region of the sky are less than 10 % of this background.
The $`\chi ^2`$ value for our fit is 728 for 604 degrees of freedom, an improvement of 57 at the cost of only 10 additional parameters. Using an F-test we find that the excess emission is significant at the $`1.8\times 10^{-6}`$ confidence level. The soft component in the 6–12′ annuli alone is significant at the $`9.2\times 10^{-5}`$ confidence level, and the power law component alone is significant at the 0.0024 confidence level.
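These significance levels can be cross-checked with a standard F-test for the 10 added parameters; the sketch below (assuming the comparison is simply between the 614-dof and 604-dof fits quoted above) reproduces the order of magnitude:

```python
from scipy import stats

# chi^2 improves from 785 (614 dof) to 728 (604 dof) with 10 extra parameters.
delta_chi2, extra, chi2, dof = 785 - 728, 10, 728, 604
F = (delta_chi2 / extra) / (chi2 / dof)
p = stats.f.sf(F, extra, dof)          # survival function: P(F' > F)
print(f"F = {F:.2f}, p = {p:.1e}")     # F = 4.73, p of order 1e-6
```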
The best-fit temperature for the isothermal component of 4.58$`\pm `$0.11 keV is lower than the value we obtained before in our fit without the power law component.
## 4 Interpretation
Our analysis has shown that there is both a soft and hard excess in the X-ray spectrum of A 2199. Most of this excess emission can be explained by a power law component with photon index $``$1.8 that dominates the cluster luminosity outside a radius of 12′ (640 kpc). If the photon index is higher in the 6–12′ region, this could explain a part of the additional soft excess found in that region. Otherwise this additional component is distinct from the power law. Its spectral shape is not well constrained, however its luminosity in the soft 0.1–0.3 keV band is larger than the thermal luminosity in the same energy band.
It is not likely that the hard excess has a thermal origin. In that case the temperature in the outer regions must be larger than 10 keV (Kaastra et al. (1998)); this would imply that the average energy of the iron K$`\alpha `$ line complex should increase by more than 100 eV from the center of the cluster towards the edge; however the observed centroid of the K$`\alpha `$ blend decreases by 60$`\pm `$70 eV from the center towards the 9–15′ region (consistent with temperatures between 1–5 keV), hence a large temperature increase can be excluded. Moreover, in many nearby clusters a temperature decrease by a factor of $`\sim 2`$ from the center out to 6 core radii is observed (Markevitch et al. (1998)). If such a gradient were also present in A 2199, the hard excess would be even stronger.
The hard excess alone can be explained by nonthermal bremsstrahlung from a suprathermal tail in the electron distribution (Kaastra et al. (1998)); however, it is not obvious how to explain the large luminosity associated with it. Also, although the soft excess alone can be explained by thermal radiation from very cool gas, it requires an unrealistically large amount of rapidly cooling gas (Mittaz et al. (1998)).
Given the single power law fit to both the soft and hard excess, it is more natural to explain the observed excess by the IC emission process proposed by Sarazin and Lieu (1998). Such a model was successfully applied to the soft and hard excess in Coma (Lieu et al. (1999)), although it may have problems with Virgo (Reynolds et al. (1999)).
Currently the best-fit single power-law index of 1.81 agrees with that of Coma (Lieu et al. (1999)); both imply a number index of the relativistic electrons of $`\sim `$2.62, consistent with the cosmic ray index. A consequence of the large pressure ratio of cosmic ray protons to electrons is that when the IC luminosity is comparable to that of the thermal X-rays, the cosmic rays are in approximate equipartition with the virialized gas (Lieu et al. (1999)). The breakdown of luminosity values given above suggests that A 2199, like Coma, is in such a limit. The radially rising relative importance of the IC emission, a phenomenon which has not yet been established for Coma but was found to be present in another cluster (A1795; Mittaz et al. (1998)), also follows naturally from the IC model (Sarazin and Lieu (1998)) as a density scaling effect. A major unresolved puzzle, however, concerns the electrons responsible for the hard excess, as they have an energy of $`\sim `$4 GeV and a resulting IC lifetime of $`\sim 3\times 10^8`$ years. These electrons have to be replenished by a continuous acceleration process.
A copious source of relativistic electrons in this energy range is available, without the need for an unrealistically high cosmic ray pressure, via the decay of pions produced by proton-proton collisions between intracluster cosmic rays and gas. While in the case of a restricted injection epoch the secondaries are not generated rapidly enough to compete against synchrotron and inverse-Compton losses, this difficulty no longer exists if the cosmic rays have been continuously accelerated by, e.g., intracluster shocks associated with an on-going merger process or long-duration activity by the central radio galaxy. Moreover the power spectral index also falls within the range of observed values. An outline of the model may be found in Blasi and Colafrancesco (1999), while detailed development of it is currently in progress.
Another important consequence of the present data is that in the outer parts of the cluster the contribution from the thermal component is smaller than previously thought. We have fitted a $`\beta `$-model to the emission measures of the thermal component, and find a best fit for $`\beta =0.78\pm 0.15`$, a core radius of 3.7′$`\pm `$0.9′ and a central hydrogen density of 5.9$`\times 10^3`$ m<sup>-3</sup> (excluding the cooling flow contribution). This should be compared to e.g. the fit of the ROSAT PSPC data by Siddiqui et al. (1998), who find $`\beta =0.62\pm 0.05`$ with a core radius of 2.3′$`\pm `$0.8′. The larger value for $`\beta `$ is caused by the lower thermal contribution at large radii; the larger error bars are due to the intrinsic uncertainty in the precise correction for the non-thermal flux. There are important consequences for the mass distribution within the cluster: our parameters yield within 1.8 Mpc (34′) a total gas mass of 7.9$`\times 10^{13}`$ M<sub>⊙</sub> and a total gravitational mass of 6.8$`\times 10^{14}`$ M<sub>⊙</sub>, 20 % lower and 35 % higher, respectively, than the masses obtained from the PSPC data of Siddiqui et al. (1998). Therefore the gas fraction is 40 % lower than inferred from previous data.
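The quoted gas mass follows from integrating this $`\beta `$-model density profile. A rough numerical reproduction is sketched below; the angular-to-physical scale (34′ = 1.8 Mpc, as given above), the helium correction of 1.4 m<sub>H</sub> per hydrogen atom, and the outer integration radius are our assumptions, and together they set the roughly 15% level of agreement:

```python
import math
from scipy.integrate import quad

MPC, M_H, M_SUN = 3.086e22, 1.67e-27, 1.99e30    # m, kg, kg

beta, n0 = 0.78, 5.9e3                 # best-fit slope; central n_H [m^-3]
r_c = 3.7 * (1.8 / 34.0) * MPC         # 3.7' core radius, using 34' = 1.8 Mpc
r_max = 1.8 * MPC

def rho_gas(r):                        # beta-model; 1.4 m_H per H atom (helium)
    return 1.4 * M_H * n0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

M_gas, _ = quad(lambda r: 4.0 * math.pi * r * r * rho_gas(r), 0.0, r_max)
print(f"M_gas ~ {M_gas / M_SUN:.1e} M_sun")      # ~9e13, vs the quoted 7.9e13
```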
## 5 Conclusions
The detection of a soft and hard X-ray excess in A 2199 has important consequences for the study of this and other clusters. We demonstrated how the excess may be explained as an IC effect, and how further investigations of this phenomenon are vital to the understanding of cluster evolution: the history of particle acceleration and the interplay between thermal and non-thermal components are now revealed to be much richer than previously thought.
SRON is supported financially by NWO, the Netherlands foundation for Scientific Research. We thank the referee for several constructive comments and suggestions that helped us in clarifying the presentation of our results. We thank S. Molendi for providing us with necessary calibration information.
# New Limit for the Family-Number Non-conserving Decay 𝜇⁺→𝑒⁺𝛾
## Abstract
An experiment has been performed to search for the muon- and electron-number non-conserving decay $`\mu ^+\to e^+\gamma `$. The upper limit for the branching ratio is found to be $`\mathrm{\Gamma }(\mu ^+\to e^+\gamma )/\mathrm{\Gamma }(\mu ^+\to e^+\nu \overline{\nu })<1.2\times 10^{-11}`$ with 90% confidence.
It is generally believed that the standard model of electroweak interactions is a low-energy approximation to a more fundamental theory. Yet there is no clear experimental evidence either to guide its extension to additional physical processes or to predict the model parameters. One of these model assumptions is lepton family-number conservation, which has been empirically verified to high precision but is not a consequence of a known gauge theory. Indeed many theoretical extensions to the standard model allow lepton-family-number violation within a range that can be tested by experiment .
The predictions of the rate for a given family-number non-conserving process vary among these extensions, and the most sensitive process depends on the model. Many possibilities have been explored, and the present experimental limits for a wide variety of processes have been tabulated in Ref. . Of these, the rare muon decays have some of the lowest branching-ratio (BR) limits because muons can be copiously produced and have relatively long lifetimes. The rare process, $`\mu ^+\to e^+\gamma `$, is the classic example of a reaction that would be allowed except for muon and electron number conservation; the previous limit on the branching ratio is BR($`\mu ^+\to e^+\gamma `$) $`<4.9\times 10^{-11}`$ . This decay is particularly sensitive to the standard model extension that involves supersymmetric particles .
We report here a new limit for the BR of the decay $`\mu ^+\to e^+\gamma `$ from the analysis of data taken by the MEGA experiment at the Los Alamos Meson Physics Facility, LAMPF. The dominant source of background in high-rate $`\mu ^+\to e^+\gamma `$ experiments is random coincidences between high-energy positrons from the primary decay process, $`\mu ^+\to e^+\nu \overline{\nu }`$, and high-energy photons from internal bremsstrahlung (IB), $`\mu ^+\to e^+\gamma \nu \overline{\nu }`$. MEGA isolates the $`\mu ^+\to e^+\gamma `$ process from the background by identifying the signature of the process: a 52.8-MeV photon and a 52.8-MeV positron that are aligned back to back, in time coincidence, and arise from a common origin. Therefore, quality position, timing, and energy information are crucial. In comparison to the detector used to set the previous limit , the MEGA detector sacrifices larger acceptance and efficiency for better resolution, background rejection, and rate capability. It has been described in several papers and will be discussed only briefly below.
Muons for the experiment are provided by a surface muon beam at the stopped muon channel at LAMPF. The muons, which are nearly 100% polarized, are brought to rest in a 76 $`\mu `$m Mylar foil, centered in the 1.5-T magnetic field of a superconducting solenoid. The angle between the muon beam and the normal to the target plane is $`82.8^{\circ }`$ so that the stopping power in the beam direction is increased, while the thickness of material presented to the decay positrons is minimized. A sloped target plane also extends the stopping distribution along the beam, enhancing the sensitivity of the apparatus to the measurement of the decay position, which is the intersection of the outgoing photon and positron trajectories with the target foil.
The positron and photon detectors are placed in the 1.8-m diameter and 2-m axial length bore of the solenoid. Decay positrons from stopped muons are analyzed by a set of high-rate, cylindrical multiwire-proportional chambers (MWPC) surrounding the target. They consist of seven MWPCs arranged symmetrically outside of a larger MWPC, coaxial with the central axis of the beam. These MWPCs have a thickness of $`3\times 10^{-4}`$ radiation lengths, minimizing energy loss while maintaining high acceptance and efficiency under the stopping rates of the experiment . The azimuthal location of a passing charged particle is determined by anode wire readout. The position of an event in the axial direction is obtained from the signal induced on stereo strips scribed on the inner and outer cathode foils of the MWPCs. The positrons come to rest at either end of the spectrometer in thick, high-$`Z`$ material after passing through a barrel of 87 scintillators used for timing. Outside these MWPCs, photons are detected in one of three coaxial, cylindrical pair spectrometers . Each pair spectrometer consists of a scintillation barrel, two 250-$`\mu `$m Pb conversion foils sandwiching an MWPC, and three layers of drift chambers, with the innermost having a delay-line readout to determine the axial position of a hit.
The hardware trigger, consisting of two stages of specially-constructed, high-speed logic circuits, is fed signals from each of the three photon spectrometers . Using pattern recognition programmed on the basis of Monte Carlo (MC) simulations, the trigger requires an electron-positron pair that can be potentially reconstructed as arising from a photon of at least 37 MeV. Since the instantaneous muon stopping rate in this experiment is 250 MHz, with a macroscopic duty cycle of 6-7%, the positron chambers and scintillators have too many hits at any given time to be part of the trigger. Signals are digitized in FASTBUS with 6% dead time at the instantaneous trigger rate of 18 kHz. Between each macropulse (120 Hz) of the accelerator, the data are read into one of eight networked workstations, where an on-line algorithm reduces the data rate for storage on magnetic tape to roughly 60 Hz.
Each event is characterized by 5 kinematic parameters: photon energy ($`E_\gamma `$), positron energy ($`E_e`$), relative time between the positron and photon ($`t_{e\gamma }`$) at the muon decay point, opening angle ($`\theta _{e\gamma }`$), and photon traceback angle ($`\mathrm{\Delta }\theta _z`$). These properties, in conjunction with the detector response, determine the likelihood that a signal is detected. The determination of the detector acceptance and response functions relies on a MC simulation to extrapolate from experimental input to the kinematic region of the $`\mu ^+\to e^+\gamma `$ signal. To verify the MC calculation, a number of auxiliary measurements are performed. The two most important are the $`\pi _{stopped}^{-}p\to \pi ^0n\to \gamma \gamma n`$ process and the prompt $`e\gamma `$ coincidence signal from the IB decay.
Pion capture at rest on hydrogen produces photons with energies between 54.9 and 83.0 MeV. Under the condition that the two photons have a minimum opening angle of $`173.5^{\circ }`$, these photons are restricted to have energies close to 54.9 and 83.0 MeV and a spread much smaller than the detector response. Figure 1 shows the experimental line shape for the 54.9 MeV photon for conversions in the outer Pb foils of the three pair spectrometers, scaled to 52.8 MeV. The curve is the response function generated from the MC that is used in the analysis of the $`\mu ^+\to e^+\gamma `$ data. We attribute differences in the low-energy tail to charge exchange of in-flight pions from carbon in the $`CH_2`$ target and discrepancies in the high-energy tail to contributions from other opening angles due to reconstruction problems for the high-energy photon. The measured and simulated line shapes agree better for conversions in inner Pb foils, which have worse resolution. The energy resolutions are 3.3% and 5.7% (FWHM) at 52.8 MeV for conversions in the outer and the inner Pb layers, respectively. The $`\pi ^{-}`$ decays also provide the time response between the two photons, which is reasonably characterized by a Gaussian with a $`\sigma `$ = 0.57 ns for each photon.
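The 54.9-83.0 MeV range quoted above follows from two-body kinematics of charge exchange at rest followed by $`\pi ^0\to \gamma \gamma `$; a short check using PDG masses:

```python
import math

m_pim, m_p, m_pi0, m_n = 139.570, 938.272, 134.977, 939.565   # MeV

# pi- p -> pi0 n at rest: two-body kinematics fixes the pi0 energy.
s_sqrt = m_pim + m_p
E_pi0 = (s_sqrt**2 + m_pi0**2 - m_n**2) / (2.0 * s_sqrt)
p_pi0 = math.sqrt(E_pi0**2 - m_pi0**2)       # ~28 MeV/c
gamma, beta = E_pi0 / m_pi0, p_pi0 / E_pi0

# pi0 -> gamma gamma: photons are Doppler-shifted from m_pi0/2.
E_min = 0.5 * m_pi0 * gamma * (1.0 - beta)
E_max = 0.5 * m_pi0 * gamma * (1.0 + beta)
print(f"E_gamma = {E_min:.1f} .. {E_max:.1f} MeV")   # 54.9 .. 82.9 MeV
```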
Observation of the IB process demonstrates that the apparatus can detect coincident $`e\gamma `$ events. At nominal beam intensity, this process is completely engulfed by random coincidences. Figure 2 shows the spectrum for $`t_{e\gamma }`$, with the beam intensity reduced by a factor of 60, the magnetic field lowered by 25%, and the $`\mu ^+\to e^+\gamma `$ online filter suppressed. The peak shown is for all energies of the detected decay products. The area of the peak is very sensitive to the exact acceptances of the detector at its thresholds and can be calculated by MC simulation to within a factor of two. If the data and the simulation are restricted to $`E_\gamma >`$ 46 MeV, $`E_e>`$40 MeV, and $`\theta _{e\gamma }>120^{\circ }`$, the branching ratio is reproduced within uncertainties. The shape of the peak can be characterized by a Gaussian with a $`\sigma `$ = 0.77 ns. The dominant contributor is the photon timing, as measured in the stopping-pion experiment, which must be scaled down from about 70 to 40 MeV for the comparison. At 52.8 MeV, the MC simulation indicates the photon-positron resolution is $`\sigma `$ = 0.68 ns.
In the IB and $`\mu ^+\to e^+\gamma `$ processes, the origin of the photon is defined to be the intersection of the positron with the target. The photon traceback angle, $`\mathrm{\Delta }\theta _z`$, specifies the difference between the polar angles of the photon as determined from the line connecting the decay point to the photon conversion point and from the reconstructed $`e^+e^{-}`$ pair. The resolution of $`\mathrm{\Delta }\theta _z`$ is dominated by multiple scattering of the pair in the Pb converters. The observed response for inner and outer conversion layers for the IB process is in excellent agreement with the MC simulation. The traceback resolutions appropriate for the $`\mu ^+\to e^+\gamma `$ analysis are $`\sigma `$ = 0.067 and 0.116 rad for conversions in the outer and the inner Pb layers, respectively.
The resolution of $`E_e`$ is determined by the slope of the high-energy cut-off edge in the spectrum of the decay, $`\mu ^+e^+\nu \overline{\nu }`$. It depends on the “topology” of the track, which is determined by the number of loops these particles make in the magnetic field between the target and scintillator and the number of chambers they traverse. The $`E_e`$ spectrum is shown in Fig. 3 for one of three topology groups. The MC line shape is characterized near the centroid by a Gaussian and in the tails by different powers of the deviation from the central energy. To extract the response function from the data, this line shape is convoluted with the spectrum from normal muon decay, modified by detector acceptance and unphysical “ghost” tracks. Ghost tracks are a high-rate phenomenon and are reconstructions made from the fragments of several physical tracks. They are the source of events well above the kinematic limit for the positron energy. The solid curve in Fig. 3 is the fit, and the dashed curve is the corresponding line shape. The central Gaussians of the three topology groups have $`\sigma `$ = 0.21, 0.23, and 0.36 MeV.
There is no way to measure the response function for $`\theta _{e\gamma }`$. The MC simulation is relied upon to produce this distribution and gives the FWHM for cos($`\theta _{e\gamma }`$) as $`1.21\times 10^{-4}`$ at $`180^{\circ }`$. Given helical tracks, knowing the location of the target is critical to obtaining the correct absolute value of $`\theta _{e\gamma }`$, and the mechanical survey provides the most accurate measurement for the analysis.
The data for this experiment have been taken in three calendar years, 1993-95. The full data set is based on $`1.2\times 10^{14}`$ muon stops collected over $`8\times 10^6`$ s of live time and results in $`4.5\times 10^8`$ events on magnetic tape. These events are passed through a set of computer programs that reconstruct as many as the pattern recognition algorithms can interpret. The programs include physical effects such as mean energy loss in matter and non-uniformities in the magnetic field. Events are required to satisfy separate $`\chi _\nu ^2`$ cuts on the positron and photon fits and loose cuts on the signal kinematics ($`E_e>`$50 MeV, $`E_\gamma >`$46 MeV, $`|t_{e\gamma }|<`$4 ns, cos($`\theta _{e\gamma }`$)$`<`$-0.9962, and $`|\mathrm{\Delta }\theta _z|<`$0.5 rad). Events in which the positron momentum vector at the decay point appears to lie within $`5^{\circ }`$ of the plane of the target are discarded. After roughly one year of computing on a farm of UNIX workstations, the data set has been reduced to 3971 events that are fully reconstructed and of continuing interest. This sample is large enough to allow a study of the background. To remove incorrectly reconstructed events, the images of the photon showers in the pair spectrometers are manually scanned. The efficiency for keeping real photons is monitored by mixing about 500 52.8-MeV MC events into the sample in a non-identifiable way and finding that 91% of the MC events pass, whereas only 73% of the data events are selected.
The acceptance of the apparatus is obtained by simulating $`1.2\times 10^7`$ unpolarized $`\mu ^+\to e^+\gamma `$ decays and finding that $`5.2\times 10^4`$ events survive processing by the same codes used for the data analysis. Thus the probability that a $`\mu ^+\to e^+\gamma `$ decay would be detected is $`4.3\times 10^{-3}`$. This value is reduced by 20% to account for inadequacies in the MC simulation that overestimate the acceptance. The shortcomings primarily involve inter-channel cross talk and are estimated by comparing the images of many data and MC events. The acceptance is further reduced by 9% for the inefficiency of manual scanning. The total number of muon stops is determined by calibrating the rates in the positron scintillators to a known muon flux. After correcting for dead time, the single event sensitivity for the experiment is $`2.3\times 10^{-12}=1/N_\mu `$, where $`N_\mu `$ is the number of useful stopped muons.
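The acceptance arithmetic can be reproduced directly from the numbers quoted above; the remaining ~15% gap to the quoted single event sensitivity is covered by corrections (dead time among them) that are not fully itemized here, so this is a consistency sketch only:

```python
# MC acceptance from the simulated decays, then the corrections quoted above.
acc = 5.2e4 / 1.2e7          # -> 4.3e-3, as quoted
acc *= 0.80 * 0.91           # -20% for MC inadequacies, -9% for scanning

n_stops = 1.2e14             # total muon stops, 1993-95
print(f"acceptance = {acc:.2e}")
print(f"SES ~ {1.0 / (n_stops * acc):.1e}")   # ~2.7e-12 vs the quoted 2.3e-12
```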
The number of $`\mu ^+\to e^+\gamma `$ events in the sample is evaluated using the likelihood method described in the analysis of previous experiments . The formula for the normalized likelihood is
$$\mathcal{L}(N_{e\gamma })=\prod _{i=1}^{N}\left(\frac{N_{e\gamma }}{N}\left(\frac{P}{R}-1\right)+\frac{N_{IB}}{N}\left(\frac{Q}{R}-1\right)+1\right),$$
where $`N=3971`$, $`N_{e\gamma }`$ is the number of signal events, $`N_{IB}`$ is the number of IB events, and $`P`$, $`Q`$, and $`R`$ are the probability density functions (PDF) for signal, IB, and randoms of each of the five parameters describing the event. The PDFs $`P`$ and $`R`$ are the products of statistically independent PDFs for the five parameters, each normalized to unit probability over the full range of the variable for the sample. The signal distributions are taken from MC distributions as described. The background PDFs are extracted from the spectral shapes of a much larger sample of events, where the constraints on the other statistically independent parameters remain very loose. Here $`Q`$ is taken from MC simulation of the IB and has correlations amongst the variables. The events fall into the following categories: positron topology, photon conversion plane, target intersection angle, and data taking period. As a result, PDFs are extracted for each class of events and applied according to the classification of individual events.
The likelihood function evaluates the statistical separation between signal, IB, and background. To observe the impact of quality constraints in the pattern recognition, they have been relaxed to produce a sample three-times larger. One event emerges with a large value of $`P/R`$ that is significantly separated from the distribution. However, this event has a large positron $`\chi _\nu ^2`$, indicative of a ghost track. The adopted constraints produce a sample with considerably less background. The peak of the likelihood function is at $`N_{e\gamma }`$=0 and $`N_{IB}`$=30$`\pm `$8$`\pm `$15. The systematic error assigned to $`N_{IB}`$ is due to the uncertainty in the shape of the background time spectrum when the events are filtered by the online program. The expected number of IB events is 36$`\pm `$3$`\pm `$10, where the systematic error is due to finite resolution effects across the cut boundaries. The 90% confidence limit is the value for $`N_{e\gamma }`$ where 90% of the area of the likelihood curve lies below $`N_{e\gamma }`$ and $`N_{IB}`$ is maximal. This value is $`N_{e\gamma }<5.1`$. Therefore, the limit on the branching ratio is
$$\frac{\mathrm{\Gamma }(\mu \to e\gamma )}{\mathrm{\Gamma }(\mu \to e\nu \overline{\nu })}\le \frac{5.1}{N_\mu }=1.2\times 10^{-11}(90\%\mathrm{CL}).$$
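The mechanics of this limit extraction can be illustrated with a toy version of the likelihood above. The per-event ratios $`P/R`$ and $`Q/R`$ below are randomly generated stand-ins (in the real analysis they come from the measured PDFs of the five kinematic variables), so only the procedure, not the numerical result, carries over:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3971
# Stand-in per-event PDF ratios; the real P/R and Q/R come from measured PDFs.
p_over_r = rng.lognormal(-0.1, 0.5, N)
q_over_r = rng.lognormal(-0.1, 0.5, N)

def loglike(n_sig, n_ib):
    terms = n_sig / N * (p_over_r - 1) + n_ib / N * (q_over_r - 1) + 1
    return np.log(terms).sum() if np.all(terms > 0) else -np.inf

# Fix N_IB at its most favorable value, then integrate L(N_sig) to 90% of area.
sig = np.linspace(0, 40, 201)
L = np.array([max(np.exp(loglike(s, b)) for b in np.linspace(0, 100, 51))
              for s in sig])
cdf = np.cumsum(L) / L.sum()
print("90% CL upper limit on N_sig:", sig[np.searchsorted(cdf, 0.9)])
# With the real PDFs this procedure yields N < 5.1, and 5.1 x 2.3e-12 ~ 1.2e-11.
```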
In comparison to the previous experimental limit , this result represents a factor of 4.1 improvement. The previous experiment would have had 58 background events at the same sensitivity instead of the 2 found here. This improvement further constrains attempts to build extensions to the standard model .
We are grateful for the support received by LAMPF staff members and in particular, P. Barnes, G. Garvey, L. Rosen, and D. H. White. We wish to gratefully acknowledge the contributions to the construction and operation of this experiment from former collaborators, the engineering and technical staffs, and undergraduate students at the participating institutions. The experiment is supported in part by the US Department of Energy and the National Science Foundation.
# On Radio and X-ray Emission Mechanisms in Nearby, X-ray Bright Galactic Nuclei
## 1 Introduction
It is widely believed that non-stellar emission in galactic nuclei indicates the existence of accreting, massive black holes (e.g. Frank et al. 1992). It is, however, unclear how to understand the various emission spectra from diverse types of AGN powered by accreting black holes (e.g. Osterbrock 1989). Within the luminous AGN population, radio luminosities differ greatly and hence the classification into the radio-loud and radio-quiet sub-populations. The luminous optical/UV/X-ray emission is attributed to accretion flows around massive black holes while the strong radio emission clearly arises from powerful radio jets.
Recently, it has been suggested that the X-ray and radio emission from less luminous X-ray bright galactic nuclei (XBGN) such as LINERs and low luminosity Seyferts could be due to optically thin advection-dominated accretion flows (ADAFs) (Yi & Boughn 1998, Di Matteo & Fabian 1997, Fabian & Rees 1995 and references therein). These sources have X-ray luminosities in the range $`10^{39}<L_x<10^{43}`$ erg/s. In ADAFs, low-level radio emission arises from an X-ray emitting, optically thin plasma via synchrotron emission (e.g. Narayan & Yi 1995b). These sources are characterized by inverted radio spectra and compact emission regions together with hard X-ray spectra (Yi & Boughn 1998). Since X-ray and radio emission occurs in the same plasma, the X-ray and radio luminosities are correlated and can be used to estimate the mass of the central black hole (Yi & Boughn 1998).
Radio jets are observed to be widespread in early type radio galaxies (e.g. Rees et al. 1982). On the other hand, ADAFs have been shown to have positive net energy near the rotational axis (Narayan & Yi 1995a) and, therefore, are particularly susceptible to outflows. If X-ray emission from XBGN is due to ADAFs, it is plausible that radio jets would also be present and could dominate the total radio luminosities of these sources. In fact, Falcke et al. (1998) recently argued that a large fraction of relatively radio-dim active galaxies, QSOs, and LINERs possess parsec scale radio jets. If so, it might be the case that the radio luminosities of nearly all XBGN are dominated by jet emission. Nevertheless, high angular resolution observations could still reveal the presence of an ADAF radio core with its characteristic inverted spectrum. Many of the sources (XBGN and Seyferts) listed below are jet dominated sources and yet have 15 GHz core luminosities similar to those predicted by the ADAF model.
Franceschini et al. (1998) reported an intriguing correlation between the total radio luminosities and the dynamically determined black hole masses of a small sample of XBGN consisting primarily of early type galaxies. They interpreted the correlation as due to ADAF radio emission around massive black holes accreting from the hot gas readily available in early type galactic nuclei. Although their explanation (assuming the accretion rate is determined by the Bondi rate and the gas density is directly related to the black hole mass) is largely implausible (see §3), the correlation does suggest an interesting trend in radio properties. Many of their sources are likely to contain radio jets. By combining them with those used in Yi and Boughn (1998), we attempt to distinguish radio-jet from pure ADAF emission in XBGN. We examine possible radio/X-ray luminosity relations in pure ADAF flows and in radio-jet sources with known black hole masses. Particular attention is paid to the origin of radio activity and we suggest that among XBGN it is useful to designate two populations, radio-bright and radio-dim, which are analogous to more powerful AGN populations.
We designate as “radio-dim” those sources with 5 GHz luminosities $`\nu L_\nu <10^{38}`$ erg/s. Even if jets are present in these sources they are likely to be relatively weak, parsec scale jets and may still appear as core dominated sources when observed with moderate angular resolution. The jet emission in such sources has a flat or inverted spectrum (e.g. Falcke, Wilson, & Ho 1998). XBGN with luminosities $`L_R>10^{38}`$ erg/s are designated as “radio-bright” (to distinguish them from the conventional “radio-loud” sources which are much more powerful). Such sources have more substantial jets and will, therefore, invariably appear extended as well as exhibit steeper spectra.
## 2 Radio/X-ray Emission from Accreting Massive Black Holes
Emission from Optically Thin ADAFs: In high temperature, optically thin ADAFs, the hard X-ray emission results from bremsstrahlung and Comptonization (Narayan & Yi 1995b, Rees et al. 1982). The Compton-upscattered soft photons are generated by synchrotron emission which is subject to self-absorption. Assuming an equipartition strength magnetic field, all the relevant emission components from radio to X-ray are explicitly calculable in terms of black hole mass, $`M_{bh}`$, mass accretion rate, $`\dot{M}`$, and the viscosity parameter, $`\alpha `$, for which we adopt the value $`0.3`$ (Frank et al. 1992).
In ADAFs, radio emission arises directly from synchrotron emission with magnetic field $`B\approx 1.1\times 10^4m_7^{-1/2}\dot{m}_3^{1/2}r^{-5/4}`$ Gauss where $`m=M_{bh}/M_{\odot }`$, $`m_7=m/10^7`$, $`\dot{m}=\dot{M}/\dot{M}_{Edd}`$, $`\dot{m}_3=\dot{m}/10^{-3}`$, and $`r=R/R_S`$ with $`\dot{M}_{Edd}=1.39\times 10^{25}m_7`$ g/s, and $`R_S=2.95\times 10^{12}m_7`$ cm. The optically thin synchrotron emission is self-absorbed up to a frequency $`\nu _{syn}(r)\approx 9\times 10^{11}m_7^{-1/2}\dot{m}_3^{1/2}T_{e9}^2r^{-5/4}`$ Hz where $`T_{e9}=T_e/10^9`$ K is the electron temperature. At radio frequencies, the luminosity is given by (e.g. Yi & Boughn 1998 and references therein)
$$L_{R,adv}(\nu )=\nu L_\nu ^{syn}\approx 2\times 10^{32}x_{M3}^{8/5}m_7^{6/5}\dot{m}_3^{4/5}T_{e9}^{21/5}\nu _{10}^{7/5}\mathrm{erg/s}$$
(1)
where $`\nu _{10}=\nu /10^{10}`$ Hz and $`x_{M3}=x_M/10^3`$ is the dimensionless synchrotron self-absorption coefficient (Narayan & Yi 1995b). Since $`x_{M3}\propto (m\dot{m})^{1/4}`$ and $`T_{e9}<10`$ is only weakly dependent on $`m`$ & $`\dot{m}`$ (e.g. Mahadevan 1996), we obtain the approximate relation (for $`\nu =15`$ GHz)
$$L_{R,adv}\approx 3\times 10^{36}m_7^{8/5}\dot{m}_3^{6/5}\mathrm{erg/s}.$$
(2)
The 2-10 keV X-ray emission from ADAFs is due to bremsstrahlung and Comptonization of synchrotron photons. At low mass accretion rates, $`\dot{m}<10^{-3}`$, the X-ray luminosity has a significant bremsstrahlung contribution whereas at relatively high mass accretion rates $`10^{-3}<\dot{m}<10^{-1.6}`$, Comptonization dominates in the 2-10 keV band. ADAFs can only exist for mass accretion rates below a critical value, $`\dot{m}<\dot{m}_{crit}\approx 10^{-1.6}`$ (Rees et al. 1982; Narayan & Yi 1995). Yi & Boughn (1998) have shown that the 2-10 keV X-ray luminosity is related to the 15 GHz radio luminosity by a simple relation
$$L_{R,adv}\approx 1\times 10^{36}m_7(L_{x,adv}/10^{40}\mathrm{erg}\mathrm{s}^{-1})^y$$
(3)
where $`y\approx 1/5`$ for systems with $`\dot{m}<10^{-3}`$ and $`y\approx 1/10`$ for systems with $`\dot{m}>10^{-3}`$. For our discussions, we adopt $`y=1/7`$ which is a reasonably good approximation for $`10^{-4}<\dot{m}<10^{-1.6}`$. The bolometric luminosity, which is dominated by the X-ray luminosity for $`\dot{m}>10^{-3}`$, is roughly given by $`L_{adv}\approx 30\dot{m}^2L_{Edd}`$ where $`L_{Edd}=0.1\dot{M}_{Edd}c^2`$ (e.g. Yi 1996).
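Eqs. (2) and (3) together turn a measured ($`L_R`$, $`L_x`$) pair into a black hole mass estimate, since $`\dot{m}`$ can be eliminated. A small sketch of both directions (cgs luminosities; $`y=1/7`$ as adopted above; the function names are ours):

```python
def adaf_radio(m7, mdot3):
    """15 GHz ADAF core luminosity of eq. (2) [erg/s]."""
    return 3e36 * m7**1.6 * mdot3**1.2

def adaf_radio_from_xray(m7, L_x, y=1.0 / 7.0):
    """Radio luminosity implied by the 2-10 keV luminosity, eq. (3) [erg/s]."""
    return 1e36 * m7 * (L_x / 1e40) ** y

def bh_mass(L_R, L_x, y=1.0 / 7.0):
    """Invert eq. (3): black hole mass in units of 1e7 M_sun."""
    return L_R / (1e36 * (L_x / 1e40) ** y)

# Example: an ADAF core with L_R = 3e36 erg/s and L_x = 1e41 erg/s
print(f"m_7 ~ {bh_mass(3e36, 1e41):.1f}")   # ~2, i.e. M_bh ~ 2e7 M_sun
```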
Emission from Optically Thick Disks: In ADAFs, optical/UV emission is characteristically weak, which distinguishes ADAFs from the high radiative efficiency accretion disks commonly assumed for luminous AGN. For the high accretion rates required by luminous AGN, i.e. $`\dot{m}>10^{-1.6}`$, ADAFs do not exist (Narayan & Yi 1995b, Rees et al. 1982). It is widely assumed that at such high rates, accretion takes the form of a geometrically thin, optically thick accretion flow with a hot, X-ray emitting corona (Frank et al. 1992). Then
$$L_{x,disk}\approx \eta _{eff}\dot{M}c^2\approx 1.3\times 10^{42}(\eta _{eff}/0.1)m_7\dot{m}_3\mathrm{erg/s}$$
(4)
where $`\eta _{eff}`$ is the radiative efficiency of the accretion flow. The efficiency must be high, $`\eta _{eff}\sim 0.1`$, to account for the observed X-ray luminosities.
Radio Jet Power: Most AGN have radio luminosities that far exceed those predicted by the ADAF model (see, for example, Fig. 1). That there exists a wide range of radio luminosities for a relatively narrow X-ray luminosity range (of X-ray selected sources) is likely the result of radio jets of various strengths; however, it is still unclear just how radio-emitting jets are powered.
Given the fact that ADAFs are prone to outflows/jets (Narayan & Yi 1995a, Rees et al. 1982), it is likely that many ADAF sources have radio-jets. Neither the ADAF nor the thin disk models can self-consistently account for this radio emission. However, if the radio-jet is powered by a rotating black hole accreting from a magnetized plasma, as is generally believed, the radio power can be described by the Blandford-Znajek process (e.g. Frank et al. 1992), according to which $`L_{R,jet}\propto \overline{a}^2M_{bh}^2B^2`$ or
$$L_{R,jet}\approx 1.2\times 10^{42}\overline{a}^2ϵ_{jet}m_7\dot{m}_3\mathrm{erg/s}$$
(5)
where $`\overline{a}\le 1`$ is the black hole spin parameter and $`ϵ_{jet}\le 1`$ is the efficiency of the radio emission.
## 3 Radio/X-ray Luminosity Relation and Black Hole Masses
Fig. 1 is a plot of the ratio of the 5 GHz total (as opposed to “core”) radio luminosity to 2-10 keV X-ray luminosity vs. X-ray luminosity for a collection of LINERs, moderate to low luminosity Seyferts, X-ray bright elliptical galaxies, and the weak nuclear sources Sgr A and M31. The sources were compiled from Yi & Boughn (1998) and Franceschini et al. (1998). X-ray fluxes were converted to 2-10 keV fluxes in those cases where the data were in a different band and are uncertain by a factor of a few at most. The 5 GHz radio fluxes are from the Green Bank survey (Becker et al. 1991; Gregory & Condon 1991). The solid lines are predicted for ADAFs of different black hole masses and are discussed in more detail below and in Yi & Boughn (1998).
Dynamical black hole mass estimates are available for NGC1068, 1316, 4258, 4261, 4374, 4486, 4594, M31, and Sgr A. The mass of NGC1068 is from Greenhill et al. (1996); of NGC4258 from Herrnstein et al. (1998); of Sgr A from Eckart & Genzel (1997); of M31, M87, and NGC4594 from Richstone et al. (1998); of NGC 4261 from Richstone et al. (1998) and Ferrarese, Ford, & Jaffe (1996); and of NGC1316 and NGC4374 from Franceschini, Vercellone, & Fabian (1998). Uncertainties in these masses are not easily quantified; however, from the spread of different mass estimates it seems likely that they are accurate to within a factor of 2. Fig. 2 depicts the correlation of 5 GHz radio luminosity and black hole mass that was noted by Franceschini et al. (1998). They also noted that the correlation of X-ray luminosity and black hole mass is very weak (see Fig. 3).
Franceschini et al. (1998) argue that ADAF radio emission is responsible for the correlation between radio luminosity and black hole mass. They assume that the density of the accreted matter at large distances from the black hole is proportional to the black hole mass, i.e. $`\rho _{\mathrm{\infty }}\propto M_{bh}`$, and that $`M_{bh}\propto M_{gal}\propto c_{s,\mathrm{\infty }}^4`$ where $`c_{s,\mathrm{\infty }}`$ is the sound speed of the accreting gas. The latter relation is adopted from the Faber-Jackson relation. Assuming Bondi accretion, then $`\dot{M}\propto M_{bh}^2\rho _{\mathrm{\infty }}/c_s^3\propto M_{bh}^{9/4}`$. If one ignores the dependence of $`x_{M3}`$ on $`\dot{M}`$, eq. 1 implies that $`L_{R,adv}\propto M_{bh}^{11/5}`$. This is the power-law mass-luminosity relation derived by Franceschini et al. (1998) and corresponds to the dashed line in Fig. 2. The power law appears to agree with the trend in the data although its statistical significance is obviously limited due to the sample size and uncertainties in the radio fluxes (If one includes the dependence of $`x_{M3}`$ on $`\dot{M}`$ then the power law slope is changed somewhat but the following conclusions are the same).
However, such an explanation inevitably predicts a strong correlation between $`L_x`$ and $`M_{bh}`$. For ADAFs $`L_{x,adv}\propto m\dot{m}^x`$ where $`x=2`$ if X-rays come from bremsstrahlung and $`x>2`$ if X-rays are from multiple Compton scattering (Yi & Boughn 1998; Yi 1996). Therefore, $`L_{x,adv}\propto M_{bh}^{(5x+4)/4}`$ and $`L_{R,adv}/L_{x,adv}\propto L_{x,adv}^{(24-25x)/(20+25x)}`$. For a wide range of $`x`$, $`2\le x\le 10`$, we expect $`L_{R,adv}/L_{x,adv}\propto L_{x,adv}^{-0.4}`$ to $`L_{x,adv}^{-0.8}`$. Therefore, all sources shown in Franceschini et al. (1998) should fall in a single band with slope of $`-0.6`$ in the $`L_R/L_x`$ vs. $`L_x`$ plane. This is not evident in Fig. 1. The predicted X-ray mass-luminosity relation is quite steep, $`L_{x,adv}\propto M_{bh}^\beta `$, with $`3.5\le \beta \le 13.5`$ for $`2\le x\le 10`$, and is not consistent with observations as indicated in Fig. 3 where the dashed line is for $`x=4`$, i.e. $`\beta =6`$. Apparently, the most massive black hole sources are too X-ray dim to be compatible with the Franceschini et al. (1998) model. If the measured radio luminosities are indeed from ADAFs, the observed black hole masses predict much higher $`L_x`$ than observed. Therefore, the correlation found by Franceschini et al. (1998) cannot be attributed to ADAFs powered by Bondi accretion. An alternative explanation is that the observed radio luminosities are due to much more energetic sources, e.g. jets, whose luminosities are not directly related to the radio luminosity of the ADAF core.
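The exponents above are pure bookkeeping ($`c_s\propto M_{bh}^{1/4}`$ from the Faber-Jackson argument and the Bondi rate $`\dot{M}\propto M_{bh}^2\rho _{\mathrm{\infty }}/c_s^3`$); a short check with exact fractions:

```python
from fractions import Fraction as F

p_mdot = 2 + 1 - 3 * F(1, 4)       # Bondi: M^2 * rho(~M) / c_s^3(~M^(3/4)) -> 9/4
p_mdimless = p_mdot - 1            # mdot = Mdot/Mdot_Edd ~ M^(5/4)

for x in (2, 4, 10):               # L_x ~ m * mdot^x
    beta = 1 + x * p_mdimless      # -> (5x+4)/4: 7/2, 6, 27/2
    slope = (F(11, 5) - beta) / beta   # L_R ~ M^(11/5), so L_R/L_x ~ L_x^slope
    print(f"x={x}: beta={beta}, slope={float(slope):+.2f}")
# x=2: slope=-0.37 ... x=10: slope=-0.84, bracketing the -0.6 band quoted above
```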
If the excess radio emission is due to jet activity, then eq. 5 and $`L_{x,adv}\propto m\dot{m}^x`$ imply
$$L_{R,jet}\propto \overline{a}^2L_{x,adv}^{1/x}M_{bh}^{(x-1)/x}$$
(6)
and for a wide range of $`x\gtrsim 4`$ we expect the dominant scaling $`L_{R,jet}\propto \overline{a}^2M_{bh}`$. Then the observed $`L_R`$ vs. $`M_{bh}`$ plot in Fig. 2 could simply be a result of the combination of $`L_R\propto M_{bh}`$ and a distribution of $`\overline{a}`$’s (dotted lines). If the accretion rate is controlled by the Bondi rate, i.e. $`\dot{M}\propto M_{bh}^{9/4}`$, then $`L_x\propto M_{bh}^{(5x+4)/4}`$ and $`L_{R,jet}\propto \overline{a}^2M_{bh}^{9/4}`$ which is similar to the $`L_{R,adv}\propto M_{bh}^{11/5}`$ relation of Franceschini et al. (1998).
The radio and X-ray emission from an ADAF are given by $`L_{x,adv}\propto m\dot{m}^x`$, $`L_{R,adv}\propto M_{bh}^{8/5}\dot{m}^{6/5}`$ (eq. 2), and, therefore, $`L_{R,adv}\propto m^{(8x-6)/5x}L_{x,adv}^{6/5x}`$ (Yi & Boughn 1998). These trends are shown as solid lines in Figs. 1 and 3. In Fig. 3, the X-ray emission for most of the sources is well accounted for if $`\dot{m}`$ varies from $`10^{-2}`$ to $`10^{-3}`$. Since ADAF radio emission is highly localized ($`\lesssim 1`$ pc), it is not surprising that the total radio fluxes plotted in Fig. 1 exceed those predicted by ADAFs. Figs. 4 and 5 are plots of the 15 GHz “core” luminosities of these same sources. In a few cases, radio fluxes were converted from 5 GHz to 15 GHz using the $`\nu ^{7/5}`$ power-law of eq. 1. While this conversion is inappropriate for steep spectrum sources, it allows a direct comparison of these luminosities with those predicted for an ADAF. In any case, such values are in error by at most a factor of a few (if a source has a steep spectrum, i.e. $`\nu L_\nu \propto \nu ^{0.2}`$, the error is less than a factor of 4). The solid lines in Figs. 4 and 5 are the same as those in Figs. 1 and 3, and they are, indeed, compatible with most of the sources; however, only in three cases is the angular resolution good enough to approximately resolve the ADAF core. The sources are discussed individually in §4.
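The size of the conversion error quoted in parentheses is easy to verify; a sketch, with the steep-spectrum case taken to be $`\nu L_\nu \propto \nu ^{0.2}`$ as in the text:

```python
factor_adaf  = 3 ** 1.4    # 5 -> 15 GHz scaling with the nu^(7/5) ADAF law
factor_steep = 3 ** 0.2    # the same frequency ratio for nu*L_nu ∝ nu^0.2
print(factor_adaf / factor_steep)   # ≈ 3.7, i.e. less than a factor of 4
```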
The ratio of jet to ADAF radio emission depends only weakly on black hole mass and accretion rate. From eqs. 2 and 5,
$$L_{R,jet}/L_{R,adv}\sim 4\times 10^5\overline{a}^2\epsilon _{jet}m_7^{-1/5}\dot{m}_{-3}^{-1/5}$$
(7)
i.e. $`L_{R,jet}`$ can far exceed $`L_{R,adv}`$ for sources with typical $`M_{bh}`$ and $`\dot{M}`$ unless $`\overline{a}\lesssim 2\times 10^{-3}\epsilon _{jet}^{-1/2}`$. This appears to be the case for many of the sources in Fig. 1.
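The quoted threshold follows from setting the ratio in eq. 7 to unity at $`m_7=\dot{m}_{-3}=1`$; a quick check:

```python
eps_jet = 1.0
abar_crit = (4e5 * eps_jet) ** -0.5    # solve 4e5 * abar^2 * eps_jet = 1
print(abar_crit)                       # ≈ 1.6e-3, i.e. abar ≲ 2e-3 * eps_jet^(-1/2)
```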
If $`L_x`$ is from a thin disk and $`L_R`$ is from a jet (since a thin disk itself is unable to emit in the radio) we expect (from eqs. 4 and 5)
$$L_{R,jet}/L_{x,disk}\sim \overline{a}^2\epsilon _{jet}/\eta _{eff}\lesssim \epsilon _{jet}/\eta _{eff}.$$
(8)
For radio-bright galaxies with $`\overline{a}\sim 1`$, this ratio is independent of X-ray luminosity or black hole mass. For radio-dim sources (e.g. $`\overline{a}\ll 1`$), we expect $`L_x\gg L_R`$.
## 4 Discussion of Individual Sources
M87, NGC4258, and Sgr A
These three sources have estimated black hole masses and, in addition, have been the subject of high angular resolution radio observations (VLBI, VLBA, and VLA respectively). While M87 and NGC4258 have relatively strong, extended radio emission from jets, the high spatial resolution ($`\sim `$1 pc) of the radio observations affords a nearly resolved view of the hypothetical core ADAF. The masses implied by their locations on Fig. 4 are within a factor of two of the dynamical estimates for these two sources.
All three of these sources have been previously identified as ADAF candidates (Reynolds et al. 1997, Lasota et al. 1996, and Narayan et al. 1995). However, Herrnstein et al. (1998) have argued that the NGC4258 core source used in this paper is actually one of two unresolved radio jets located $`\sim 0.01`$ pc from the warped plane of the accretion disk. If future observations strengthen this conclusion then the ADAF mechanism for NGC4258 will be called into question (cf. Blackman 1998).
Sgr A is distinct from the other sources discussed in this paper in that it has a much smaller X-ray luminosity, due presumably to a low accretion rate. Nevertheless, Narayan, Yi, & Mahadevan (1995) have modeled it as an ADAF, so we include it here as an example of a low mass, low accretion rate ADAF. We have plotted in Figs. 1, 3, and 4 the ROSAT X-ray flux corrected for extinction (Predehl & Trümper 1994). The uncertainty in the extinction corrected X-ray flux has little consequence since, for a given core radio flux, the X-ray flux is nearly independent of black hole mass. The location of the Sgr A point would simply be translated along a line that is nearly parallel to the theoretical curves in Figs. 1 and 4.
NGC1068, 1316, 4261, 4374, 4594, M31
These are the remaining sources for which there are dynamical estimates of the masses of the central black holes, although none has been observed in the radio with very high angular resolution. NGC4594 (M104) is a radio-dim, flat spectrum source with no sign of jet activity. VLA observations (Hummel et al. 1984) indicate a core dominated source with a size $`<1`$ pc. If this radio source is interpreted as an ADAF then the black hole mass implied by Fig. 4 is the same as the dynamical estimate. NGC1316, 4261, and 4374, on the other hand, are radio-bright sources (see Fig. 1). NGC1316 is a bright, lobe dominated radio galaxy. Because of relatively poor angular resolution, its core flux is not well defined; however, the radio luminosity within 3 to 5 arcsec is within a factor of two of that predicted for an ADAF flow with the measured black hole mass. NGC4261 and 4374 are also extended sources but again with core luminosities comparable (within a factor of $`\sim 2`$) to that predicted by the ADAF model. Considering the uncertainties of the X-ray and radio observations and of the dynamical mass estimates, the level of agreement is impressive.
It is clear from Fig. 4 that NGC1068 and M31 do not fit the ADAF model; however, both are unusual sources. The direct X-rays from the nucleus of NGC1068 appear to be highly absorbed, with the observed X-ray flux consisting entirely of scattered photons. The inferred intrinsic X-ray luminosity is likely to be $`L_x>5\times 10^{43}erg/s`$ (e.g. Koyama et al. 1989). The Eddington luminosity for the estimated black hole mass, $`M_{bh}\simeq 2\times 10^7M_{\odot }`$, is $`L_{Edd}\simeq 2\times 10^{45}erg/s`$; therefore, $`L_x>3\times 10^{-2}L_{Edd}`$. This implies a bolometric luminosity in excess of the maximum allowed for ADAF flows (Narayan, Mahadevan & Quataert 1998), so it is unlikely that the X-ray emission is from an ADAF. NGC1068 has a resolved central core with an observed 15 GHz luminosity of $`7\times 10^{37}erg/s`$ (Sadler et al. 1995) together with clear jet structures. If the X-ray emission occurs with high efficiency, $`\sim 10`$%, the observed $`L_x`$ implies $`\dot{m}\sim 2\times 10^{-2}`$. In this case, $`L_{R,jet}\sim 5\times 10^{43}\overline{a}^2\epsilon _{jet}`$, which is far more than the observed $`L_R`$ even with $`\epsilon _{jet}\ll 1`$ for $`\overline{a}\sim 1`$. Therefore, NGC1068 is likely to be a typical radio-quiet, luminous Seyfert (Falcke et al. 1998).
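The Eddington arithmetic above can be reproduced directly; a sketch, in which the coefficient $`L_{Edd}\approx 1.26\times 10^{38}(M_{bh}/M_{\odot })`$ erg/s is the standard electron-scattering value (an assumption of this sketch, not quoted in the text):

```python
M_bh  = 2e7                 # M_sun, dynamical mass estimate quoted above
L_Edd = 1.26e38 * M_bh      # ≈ 2.5e45 erg/s, matching the quoted ~2e45 erg/s
L_x   = 5e43                # erg/s, lower bound on the intrinsic X-ray luminosity
print(L_Edd, L_x / L_Edd)   # Eddington ratio ≈ 2.5e-2, the ~3e-2 level in the text
```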
M31 has an extremely low core radio luminosity, $`5\times 10^{32}erg/s`$ (Gregory & Condon 1991). If this is attributed to an ADAF with $`M_{bh}=3\times 10^7M_{\odot }`$, then the implied accretion rate is very small, $`\dot{m}\sim 10^{-6}`$, and the expected ADAF X-ray luminosity is $`L_x\sim 4\times 10^{34}erg/s`$, much less than that observed. In this case it is possible that the observed $`L_x`$ is dominated by a few bright X-ray binaries similar to those recently reported in M32 (Loewenstein et al. 1998). Indeed, Sgr A is surrounded by bright, resolved X-ray sources (e.g. Genzel et al. 1994), which, if put at the distance of M31, would appear as a brighter, unresolved core. Finally, it is not at all clear whether the two temperature ADAF model is valid at such low accretion rates.
NGC3031, 3079, 3627, 3628, 4736, and 5194
These sources are all nearby LINERs with similar 2-10 keV X-ray luminosities, $`6\times 10^{39}<L_x<3\times 10^{40}erg/s`$. There are no dynamical mass estimates for the black holes in any of them. It is interesting to note, however, that with the exception of NGC3079 the core radio luminosities are consistent with those predicted by ADAFs for black hole masses of $`10^7`$ to $`10^8M_{\odot }`$ (see Fig. 4). The larger luminosity of NGC3079 would require $`M_{bh}=1.4\times 10^9M_{\odot }`$. In addition, all of the sources have either flat or inverted radio spectra.
None of these is a particularly strong radio source; however, only one of them, NGC3031, is dominated by core emission. Coincidentally, this is the only source for which there are high angular resolution (VLBI) radio observations (Bietenholz et al. 1996; Reuter & Lesch 1996; Turner & Ho 1994). The spectrum is inverted ($`\nu ^{1/3}`$) up to 100 GHz where it begins to turn over. This is consistent with an ADAF. At 22 GHz the core is barely resolved, $`\sim 0.1`$ mas, which implies a linear size of $`\sim 0.002`$ pc. This is also consistent with an ADAF. If the radio and X-ray luminosities are, indeed, due to an ADAF then the implied black hole mass is $`\sim 1\times 10^8M_{\odot }`$. It will be interesting to see if future dynamical analyses confirm this value.
NGC3227, 4151, 5548, and 4388
These four sources are all classified as Seyferts and none is a particularly strong radio source (see Fig. 1). Their radio and X-ray luminosities are consistent with ADAFs onto (1–5)$`\times 10^8M_{\odot }`$ black holes and with accretion rates below the critical value (Fig. 4). Because the ADAF cores are unresolved, it is certainly possible that the radio core luminosities and hence masses are both overestimates. However, it seems unlikely that such corrections would result in moving these sources from the ADAF region.
These sources are on average $`\sim 600`$ times more X-ray luminous than the six LINERs discussed above but have only about twice the core radio luminosity (see Fig. 4). There are no dynamical black hole mass estimates for any of these sources, so it is not possible to make quantitative comparisons of their emissions with those predicted by the ADAF model. However, to the extent that the average black hole masses of these sources are comparable, we note that the narrow range of core radio luminosity is qualitatively consistent with the weak dependence of $`L_R`$ on $`L_x`$ (see eq. 3). The wide range in X-ray luminosities is then presumably due to differences in the accretion rates for these sources.
It should be emphasized that there is considerable uncertainty in the above comparisons of dynamical mass estimates with ADAF predictions. In addition to the observational and modeling uncertainties, one must contend with the intrinsic variability of ADAF sources (Blackman 1998, Ptak et al. 1998). Multiple epoch radio and X-ray observations are necessary to quantify the extent of the variability and to estimate the mean fluxes. For these reasons, the good agreement between the mass estimates and ADAF predictions in this paper may be partly fortuitous.
## 5 Classification of X-ray Bright Galactic Nuclei
The previous discussion suggests a useful classification scheme based on the radio and X-ray luminosities of XBGN and moderate luminosity Seyferts. From Figs. 1 and 6 it is clear that the 5 GHz ADAF (total) radio luminosity $`L_R<10^{38}erg/s`$ unless $`M_{bh}>10^9M_{\odot }`$ and the accretion rate is near the maximum allowed by ADAFs. Therefore, we designate a source as “radio-bright” if its total (core plus jet) 5 GHz radio luminosity satisfies $`L_R>10^{38}erg/s`$. Note that this doesn’t preclude XBGN with luminosities below this value from being jet dominated; in fact, of the sources in Fig. 1 only NGC3031 and 4594 are ADAF core dominated. However, XBGN designated as radio-bright will certainly be jet dominated radio sources. Fig. 6 illustrates this classification for the same sources as in Fig. 1. The upper dashed curve corresponds to the ADAF luminosity for a $`10^9M_{\odot }`$ black hole (which is motivated by the fact that black holes with $`M_{bh}>10^9M_{\odot }`$ are rare) and the lower dashed curve corresponds to maximally accreting ADAFs, i.e. $`\dot{m}=10^{-1.6}`$. Sources falling within the region at the bottom left of the diagram (bounded by the dashed curves) are designated “radio-dim” XBGN. As noted, these are presumably ADAFs with little to moderate jet activity. Sources to the left in this region have low accretion rates while sources at the bottom have low mass central black holes. Above this region are the radio-bright XBGN discussed above. Sources at the right of the diagram are too X-ray bright to be the result of an ADAF but rather require the high efficiency accretion mechanism of powerful AGN. These classifications are not unambiguous. For example, an AGN with a low mass central black hole and low X-ray luminosity but with moderate jet activity could appear in the upper left part of the “radio-dim XBGN” region even though the radio luminosity far exceeds that predicted by the ADAF mechanism. High angular resolution radio observations, an estimate of the black hole mass, and/or the determination of the radio spectral index would likely distinguish between these two cases.
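The scheme can be summarized as a simple decision rule; the sketch below keeps only the $`10^{38}`$ erg/s radio threshold from the text (the curved ADAF boundaries of Fig. 6 are not reproduced), and the X-ray cut separating powerful AGN, `L_x_agn_cut`, is a hypothetical placeholder since no explicit value is quoted.

```python
def classify_xbgn(L_R_total, L_x, L_x_agn_cut=1e42):
    """Classify a nucleus from its total 5 GHz and 2-10 keV luminosities (erg/s)."""
    if L_x > L_x_agn_cut:
        return "powerful AGN (too X-ray bright for an ADAF)"
    if L_R_total > 1e38:
        return "radio-bright XBGN (certainly jet dominated)"
    return "radio-dim XBGN (ADAF with little to moderate jet activity)"

print(classify_xbgn(1e37, 1e40))   # -> radio-dim XBGN
```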
## 6 Conclusions
For a given black hole mass, the ADAF model predicts a unique radio/X-ray luminosity relation. ADAF radio emission is essentially characterized by an inverted spectrum and a very compact emission region, $`\lesssim 1`$ pc. So far, the only sources for which there are both high angular resolution radio data and dynamical mass estimates are M87, NGC4258 and Sgr A. The observations for all three are consistent with the predictions of the ADAF emission model. There are four other sources (NGC1316, 4261, 4374, and 4594) for which moderate angular resolution radio observations and dynamical black hole mass estimates are available. The X-ray and core radio luminosities of these XBGN are also consistent with the ADAF model. Considering observational and modeling uncertainties, the agreement is quite good.
In the future, high angular resolution radio observations combined with X-ray observations might enable the central black hole masses of XBGN to be estimated. For example, the nuclear source in NGC3031 (M81) is both small ($`\sim 0.002`$ pc) and has an inverted spectrum (Reuter & Lesch 1996). Based on its X-ray and core radio flux, the ADAF model implies a black hole mass of $`M_{bh}\sim 1\times 10^8M_{\odot }`$ (see Fig. 4). Although source variability has not yet been taken into account, we suggest that future dynamical estimates of the central black hole of NGC3031 won’t be far from this value.
Multiple epoch, high angular resolution, high frequency radio observations will be crucial to test the ADAF mechanism and, subsequently, to estimate central black hole masses. Even with such observations, it will still be difficult to characterize sources if they are strongly obscured as expected in Seyfert 2’s. Relatively low X-ray luminosity sources will be detected by AXAF and will greatly improve the test of the ADAF paradigm among LINERs and other relatively low luminosity XBGN with $`L_x<10^{40}erg/s`$.
The compact, inverted spectrum characteristics of ADAFs can also be used to distinguish them from small scale jets, which are abundant in XBGN. A moderate excess of radio emission from such a source probably indicates the presence of low level (parsec scale) jet activity. We propose an XBGN classification scheme analogous to the radio-loud/radio-quiet classification of powerful AGN. Whereas it is likely that the radio luminosities of all powerful AGN (radio-quiet and radio-loud) are due to jets, it is possible for some radio-dim XBGN to be dominated by an ADAF core. On the other hand, radio-bright XBGN are undoubtedly jet dominated. ADAFs are strong, hard X-ray sources and, to the extent that they are present in XBGN, it is likely that the 2–10 keV core luminosities of XBGN will be dominated by ADAF emission.
We would like to thank R. van der Marel and Douglas Richstone for useful information on black hole masses, Pawan Kumar and Tal Alexander for help with NGC1068, and Ramesh Narayan for early discussions on spectral states of accreting massive black holes. This work was supported in part by the SUAM Foundation (IY) and NASA grant# NAG 5-3015 (SB).
Figure Captions
Figure 1: The ratio of 5 GHz radio luminosity $`L_R`$ to 2–10 keV X-ray luminosity $`L_x`$ vs. $`L_x`$ for XBGN. The radio luminosities include both core and extended contributions. Scaling from other frequencies is discussed in the text. The sources are adopted from Yi & Boughn (1998) and Franceschini et al. (1998). The solid lines are the ADAF predictions for black hole masses of $`10^6`$ to $`10^9M_{\odot }`$ (marked by log values at the bottom of each curve). For each line, the dimensionless mass accretion rate $`\dot{m}`$ varies from $`10^{-4}`$ (upper dotted line) to $`10^{-1.6}`$ (lower dotted line), the maximum allowed ADAF accretion rate (as in Yi & Boughn 1998).
Figure 2: 5 GHz $`L_R`$ vs. $`M_{bh}`$ for the nine sources with black hole mass estimates. The radio fluxes are as in Fig. 1. The dashed line is the $`L_R\propto M_{bh}^{11/5}`$ relation suggested in Franceschini et al. (1998). The two dotted lines depict the $`L_R\propto \overline{a}^2M_{bh}`$ relation expected for radio-jet sources with a spread of a factor of 10 in $`\overline{a}^2`$ (see text). The references for the dynamical mass estimates are listed in the text.
Figure 3: 2-10 keV $`L_x`$ vs. $`M_{bh}`$ for the same sources as in Fig. 2. The dashed line is the steep correlation $`L_x\propto M_{bh}^6`$ expected if the Franceschini et al. (1998) model were valid. The three solid lines correspond to $`L_x\propto M_{bh}`$ correlations for $`\dot{m}=10^{-2},10^{-3},10^{-4}`$ from top to bottom (see text). The references for the dynamical mass estimates are listed in the text.
Figure 4: The ratio of 15 GHz core radio luminosity $`L_R`$ to 2–10 keV X-ray luminosity $`L_x`$ vs. $`L_x`$ for the sources in Fig. 1. The solid and dotted lines are the same ADAF predictions as in Fig. 1. Note: $`\dot{m}=10^{-1.6}`$ is the maximum allowed ADAF accretion rate.
Figure 5: 15 GHz core radio luminosity vs. $`M_{bh}`$ for the nine sources in Fig. 2. The three solid lines correspond to the ADAF-based relation $`L_R\propto \dot{m}^{6/5}M_{bh}^{8/5}`$ for the three $`\dot{m}`$ values shown in Figure 3. The references for the dynamical mass estimates are listed in the text.
Figure 6: Schematic classification of XBGN sources according to 5 GHz total radio and 2-10 keV X-ray luminosities. The sources are the same as in Fig. 1. The upper dashed line corresponds to the ADAF $`L_R`$ for $`M_{bh}=10^9M_{\odot }`$ and the lower dashed line corresponds to the maximum ADAF accretion rate, $`\dot{m}=10^{-1.6}`$. The region on the right is occupied by powerful AGN in which the ADAF mechanism cannot be operating.
# Electric Field Induced Converse Piezoeffect in Screw Dislocated $`YBa_2Cu_3O_{7-\delta }`$ Thin Films
Recent experimental results on electric-field induced critical current enhancement in screw dislocated high-$`T_c`$ thin films<sup>1,2</sup> have been treated as evidence for the field-induced modulation of the mobile charge carrier density<sup>3-5</sup>. Since high-$`T_c`$ superconductors (HTS) are supposed to have a rather low concentration of carriers, one could expect the field effects to be quite pronounced in these materials. However, even in extremely thin films (a few unit cells thick), a rather large breakdown voltage (of 20–30 V) is required to produce an essential modulation of the charge carrier density, and thus to substantially enhance the critical currents<sup>1-5</sup>. At the same time, the use of the sputtering technique as well as laser ablation to prepare thin enough HTS films was found<sup>6-8</sup> to bring about a tremendous density of extended defects (such as screw and edge dislocations) in these thin films. It is worthwhile to mention that dislocation-induced pinning force enhancement in zero electric field has been observed earlier for practically the same thin films<sup>7</sup>. On the other hand, it is well known<sup>9,10</sup> that in ionic crystals dislocations can accumulate (or trap, due to the electric field around their cores) electrical charge along their length. Then, application of an external electric field, in a geometry similar to that used in the so-called ”superconducting field-effect transistor” (SuFET) devices<sup>1-5</sup>, can sweep up these additional carriers, refreshing the dislocation cores, that is, releasing the dislocations from the cloud of point charges (which, in the field-free configuration, screens the electric field of the dislocation line to maintain electrical neutrality). Given the ionic (perovskite-like) nature of the HTS crystals<sup>11</sup>, the present paper discusses the possibility of an electric field induced converse piezoeffect and its influence on the critical current enhancement in screw dislocated YBCO thin films.
As is well known<sup>8-10</sup>, the charges on dislocations play a rather important role in charge transfer in ionic crystals. If a dislocation is oriented so that it has an excess of ions of one sign along its core, or if some ions of predominantly one sign are added to or removed from the end of the half-plane, the dislocation will be charged. In thermal equilibrium it would be surrounded by a cloud of point defects of the opposite sign to maintain electrical neutrality. A screw dislocation can transport charge normal to the Burgers vector if it can carry vacancies with it. The piezoelectric effect, that is, a transverse polarization induced by dislocation motion in ionic crystals, certainly cannot be responsible for the large field-induced effects in YBCO thin films because of the low rate of dislocation motion at the temperatures used in these experiments (another possibility to observe the direct piezoeffect in dislocated crystals is to apply an external stress field<sup>9</sup>). On the contrary, the converse piezoeffect, i.e., a change of the dislocation-induced strain field in an applied electric field, can produce a rather considerable change of the critical current density in screw dislocated HTS thin films. Indeed, according to McElfresh et al.<sup>8</sup> (who treated the correlation of surface topography and flux pinning in YBCO thin films), the strain field of a dislocation can provide a mechanism by which the superconducting order parameter is reduced, making it a possible site for core pinning. The elastic pinning mechanism could also be responsible for the large pinning forces associated with dislocation defects. There are two types of elastic pinning mechanisms possible, a first-order (parelastic) interaction and a second-order (dielastic) interaction. In the case of a screw plane defect, the parelastic pinning can be shown to be negligible<sup>8</sup>. However, the dielastic interaction comes about because the self-energy of a defect depends on the elastic constants of the material in which it forms. Since the crystalline material in a vortex core is stiffer, there is a higher energy bound up in the defect. For a tetragonal system with a screw dislocation, the energy density due to the defect strain is $`(1/2)C_{44}ϵ_{44}^2`$, where $`C_{44}`$ is the shear modulus and $`ϵ_{44}`$ is the shear strain. For a screw plane, $`ϵ_{44}=b/2\pi r`$, where $`r`$ is the distance from the center of the defect and $`b`$ is the Burgers vector of the defect. The interaction energy density between the vortex and the screw plane is just $`(1/2)\delta C_{44}ϵ_{44}^2`$, where $`\delta C_{44}`$ is the difference in shear modulus between superconducting and normal regions. For a vortex a distance $`r`$ away from the defect, the interaction energy and the pinning force (per unit length) read, respectively
$$\mathcal{E}_d(r)=\frac{1}{2}\delta C_{44}ϵ_{44}^2(\pi \xi ^2)$$
(1)
and
$$\mathcal{F}_p\equiv -\frac{d\mathcal{E}_d}{dr}=\delta C_{44}\left(\frac{b^2\xi ^2}{4\pi r^3}\right)$$
(2)
Taking the recently reported values for YBCO of $`C_{44}=8\times 10^{10}N/m^2`$ and $`\delta C_{44}=10^{-5}C_{44}`$, setting $`r=\xi `$ and using a value of $`b=12\times 10^{-10}m`$, the authors of Ref. 8 obtained a pinning force per unit length of $`\mathcal{F}_p=10^{-4}N/m`$, which corresponds to a critical current density of $`J_c=5\times 10^{10}A/m^2`$ using the single vortex limit, $`\mathcal{F}_p=J_c\mathrm{\Phi }_0`$. These estimates reasonably agree with the $`J_c`$ found recently in dislocated YBCO thin films<sup>1,7</sup>. (These numbers are retraced numerically in the sketch following Eq. (4).) Turning to the field-induced SuFET-type experiments<sup>1-5</sup>, we argue that an applied electric field can essentially modify the shear strain field of the dislocation
$$ϵ_{44}(\vec{E})=ϵ_{44}(0)+\delta ϵ_{44}(\vec{E}),$$
(3)
where the change of the dislocation-induced strain field in the applied electric field $`\vec{E}`$ is defined as follows<sup>9</sup>
$$\delta ϵ_{44}(\vec{E})=\vec{d}_{44}\cdot ϵ_0ϵ_r\vec{E}=d_{44}ϵ_0ϵ_rE\mathrm{cos}\theta $$
(4)
Here, $`d_{44}`$ is the absolute value of the converse piezoeffect coefficient, $`\theta `$ stands for the angle between the screw dislocation line and the direction of the applied electric field, $`ϵ_r`$ is the static permittivity which takes account of the long-range polarization of the dislocated crystal, and $`ϵ_0=8.85\times 10^{-12}F/m`$.
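The Ref. 8 estimate quoted just before Eq. (3) can be retraced from Eq. (2); a sketch, in which the coherence length $`\xi \simeq 1.5`$ nm is an assumed typical YBCO value (the text does not quote one):

```python
import math

C44  = 8e10           # N/m^2, shear modulus of YBCO (quoted above)
dC44 = 1e-5 * C44     # N/m^2, superconducting/normal difference in C44
b    = 12e-10         # m, Burgers vector
xi   = 1.5e-9         # m, assumed in-plane coherence length
Phi0 = 2.07e-15       # Wb, superconducting flux quantum

F_p = dC44 * b**2 / (4 * math.pi * xi)   # Eq. (2) evaluated at r = xi
J_c = F_p / Phi0                         # single vortex limit, F_p = J_c * Phi0
print(F_p, J_c)   # ≈ 6e-5 N/m (of order 1e-4 N/m) and ≈ 3e10 A/m^2 (of order 5e10)
```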
In view of Eqs.(1)-(4), the field-induced converse piezoeffect results in the following changes of the interaction energy $`\mathcal{E}_d`$ and the pinning force density $`\mathcal{F}_p`$, respectively
$$\mathcal{E}_d(\vec{E})=\frac{1}{2}\left[1+2\frac{\delta ϵ_{44}(\vec{E})}{ϵ_{44}(0)}\right]\delta C_{44}ϵ_{44}^2(0)(\pi \xi ^2)$$
(5)
and
$$\mathcal{F}_p(\vec{E})=\mathcal{F}_p(0)\left(1+\frac{E}{E_0}\mathrm{cos}\theta \right)$$
(6)
where
$$E_0=\frac{1}{ϵ_0ϵ_rd_{44}}$$
(7)
and $`\mathcal{F}_p(0)`$ denotes the pinning force density at zero electric field (see Eq.(2)). According to Whitworth<sup>9</sup>, the converse piezoeffect coefficient $`d_{44}`$ in ionic dislocated crystals can be presented in the form
$$\frac{1}{d_{44}}=q\rho b$$
(8)
Here $`\rho `$ is the dislocation density, and $`q=e^{*}/L`$ is the effective electron charge $`e^{*}`$ per unit length of dislocation $`L`$ (in fact, $`L`$ coincides with the thickness of the YBCO films<sup>1</sup>).
To relate the above-mentioned mechanism to the field-induced changes of the critical current densities in dislocated YBCO thin films, we propose the following scenario. Depending on the gate polarity, a strong applied electric field will result either in trapping of additional point charges by the dislocation cores (when $`\mathrm{cos}\theta =-1`$), reducing the pinning force density, or in sweeping these point charges (which surround the dislocation line to neutralize the net charge) away from the dislocation core (for the opposite polarity, when $`\mathrm{cos}\theta =+1`$), thus increasing the core vortex pinning by the fresh dislocation line. To make our consideration more definite, we assume that the total critical current density $`J_c`$ consists of two main contributions: the core pinning term $`J_{cm}`$, which has the usual form $`J_{cm}=\mathrm{\Phi }_0/16\pi \mu _0\xi \lambda ^2`$ (see, e.g.<sup>7,8</sup>), and the field-induced dielastic pinning by dislocations, $`J_{cd}(\vec{E})=\mathcal{F}_p(\vec{E})/\mathrm{\Phi }_0`$, where the pinning force density $`\mathcal{F}_p(\vec{E})`$ is governed by Eq.(6). Finally, for the total critical current density in dislocated thin films we get
$$J_c(\vec{E})=J_c(0)+J_{cd}(0)\frac{E}{E_0}\mathrm{cos}\theta ,$$
(9)
where
$$J_c(0)=J_{cm}-J_{cd}(0).$$
(10)
According to the well-known<sup>7,8</sup> estimates of $`J_{cm}`$ and $`J_{cd}(0)`$ for thin YBCO films, we can put $`J_{cm}-J_{cd}(0)\simeq J_{cd}(0)`$, that is $`J_c(0)\simeq J_{cd}(0)`$. Using the experimental values for the field-free (at zero gate voltage $`V_G`$) and field-induced (at gate voltages $`V_G=\pm 10V`$, corresponding to applied electric fields of $`\pm 2.5\times 10^5V/cm`$) critical current densities obtained in SuFET-type measurements with YBCO thin films<sup>1</sup>, namely $`J_c(0)=10^8A/m^2`$, $`J_c(+10V)=6\times 10^7A/m^2`$, and $`J_c(-10V)=1.2\times 10^8A/m^2`$, the above Eq.(9) predicts for the threshold field $`E_0`$ a value of $`6\times 10^5V/cm`$, which corresponds to a breakdown voltage $`V_0=25V`$. Furthermore, using the fact that the thickness $`L`$ of the YBCO thin films in the experiments carried out by Mannhart et al.<sup>1</sup> was ca. 70 Å, for the linear charge per unit length of dislocation we get the value $`q=e^{*}/L=2\times 10^{-11}C/m`$, that is $`qb\simeq 0.3e^{*}`$. This is quite comparable with the typical values known for ordinary ionic dislocated crystals<sup>9,10</sup>. Moreover, in view of Eqs.(7) and (8), we can estimate the value of the static permittivity, $`ϵ_r`$, in screw dislocated YBCO thin films. Taking $`\rho \simeq 10^{10}cm^{-2}`$ for the maximum dislocation density observed in these films<sup>1-3</sup>, we find $`ϵ_r\simeq 0.01`$, which reasonably agrees with the typical permittivities of ionic dislocated crystals<sup>9,10</sup>. Remarkably, since the threshold field $`E_0`$ (see Eq.(7)) contains no specifically superconducting parameters, we can expect that this mechanism will persist above $`T_c`$ and will be practically insensitive to applied magnetic fields, in agreement with the observations<sup>1-3</sup>.
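These estimates can be retraced step by step; a sketch in SI units, using the $`+10V`$ measurement for the current change $`\mathrm{\Delta }J_c`$:

```python
e, eps0 = 1.6e-19, 8.85e-12    # elementary charge (C); vacuum permittivity (F/m)
L   = 70e-10                   # m, film thickness (70 Angstrom)
b   = 12e-10                   # m, Burgers vector
rho = 1e10 * 1e4               # m^-2, dislocation density (10^10 cm^-2)

# Threshold field from Eq. (9) and the measured critical currents:
E0 = 1e8 * 2.5e7 / (1e8 - 6e7)       # J_cd(0) * E / dJ_c, in V/m
print(E0)                            # ≈ 6.25e7 V/m = 6.25e5 V/cm (~6e5 V/cm)

q = e / L                            # ≈ 2.3e-11 C/m, charge per unit length
eps_r = q * rho * b / (eps0 * E0)    # Eqs. (7)-(8): E0 = 1/(eps0*eps_r*d44), 1/d44 = q*rho*b
print(q, eps_r)                      # ≈ 2e-11 C/m and ≈ 5e-3, of order 0.01
```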
In summary, a possible scenario for the electric-field induced critical current enhancement recently observed in screw dislocated YBCO thin films<sup>1</sup> has been proposed, based on the converse piezoelectric effect originating from the heavily defected medium of the superconducting films in an applied electric field. The magnitude of the effect was found to depend concurrently on the film thickness and the number of dislocations inside the superconductor.
# Discovery of Radio Outbursts in the Active Nucleus of M81
## 1 Introduction
At a distance of only 3.6 Mpc (Freedman et al. 1994), NGC 3031 (M81) hosts perhaps the best studied low-luminosity active galactic nucleus (AGN). Its striking resemblance to “classical” Seyfert 1 nuclei was first noticed by Peimbert & Torres-Peimbert (1981) and Shuder & Osterbrock (1981), and a number of subsequent studies (Filippenko & Sargent 1988; Keel 1989; Ho, Filippenko, & Sargent 1996) have elaborated on its AGN-like characteristics. The nucleus of M81 holds additional significance because of its relevance to a poorly understood class of emission-line objects known as low-ionization nuclear emission-line regions (LINERs: Heckman 1980), whose physical origin is still controversial (Ho 1999 and references therein). The optical classification of the nucleus of M81 borders between LINERs and Seyferts (Ho, Filippenko, & Sargent 1997), but it is clear that the ionization state of its line-emitting regions differs significantly from that of typical Seyfert nuclei (Ho et al. 1996).
The variability properties of low-luminosity AGNs, LINERs in particular, are very poorly known at any wavelength. The faintness of these nuclei renders most observations extremely challenging, and routine monitoring of them has rarely been attempted. M81 remains one of the few LINERs with sufficient data to allow its variability characteristics to be assessed. The occurrence of Supernova (SN) 1993J in late March of 1993 prompted us almost immediately to monitor the supernova interferometrically at radio wavelengths. Since at most wavelengths the field of view of the telescopes contained both the supernova and the center of the galaxy, the observations yielded a useful record of the radio flux density of the nucleus during the period that the supernova was monitored. This paper will examine the radio variability properties of the nucleus of M81 based on these data. We report on the discovery of several radio outbursts, which plausibly could be associated with the optical flare seen by Bower et al. (1996).
## 2 Radio Observations
The radio data analyzed in this paper are based on observations of SN 1993J made with the Very Large Array (VLA; a facility of the National Radio Astronomy Observatory, which is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation) as reported by Van Dyk et al. (1994), on similar monitoring data acquired using the Ryle Telescope by Pooley & Green (1993), and on unpublished updates to these two data sets since then.
### 2.1 VLA Data
The VLA data were acquired in snapshot mode at 20 cm (1.4 GHz), 6 cm (4.9 GHz), 3.6 cm (8.4 GHz), 2 cm (14.9 GHz), and 1.3 cm (22.5 GHz), using a 50 MHz bandwidth for each of the two IFs, between 1993 March 31 and 1997 January 23. The different bands were observed nearly simultaneously. As scheduling constraints did not permit us to specify in advance the array configuration for the observations, the data were taken using a mixture of configurations and hence a range of angular resolutions (synthesized beam 0.25<sup>′′</sup>–44<sup>′′</sup>). Because of the coarseness of the beam in the most compact configurations and at the longest wavelength, extended, off-nuclear emission can potentially contaminate the signal from the central point source. Inspection of the detailed maps of Kaufman et al. (1996), however, indicates any such contamination of the nucleus, even at 20 cm, is at most a few percent. Since the phase center of the VLA observations was near the position of SN 1993J, and the nucleus of M81 is 2.64<sup>′</sup> from this center, primary beam attenuation had a critical effect on the observations, such that only the 20, 6, and 3.6 cm observations produced reliable data for the nucleus. At 2 cm the nucleus was too close to the edge of the primary beam to produce reliable measurements for all configurations. At 1.3 cm the nucleus was outside the primary beam, and therefore maps were not made of the nucleus at this wavelength.
VLA phase and flux density calibration and data reduction followed standard procedures within the Astronomical Image Processing System (AIPS), such as those described by Weiler et al. (1986), using 3C 286 as the primary flux density calibrator and 1044$`+`$719 as the main secondary flux density and phase calibrator. For observations at 20 cm in the more compact D configuration, 0945$`+`$664 was the secondary calibrator. The flux density scale is believed to be consistent with that of Baars et al. (1977).
Maps were made of the 20, 6, and 3.6 cm observations in the usual manner within AIPS, offsetting the map center from the observation phase-tracking center to the position of the nucleus. Since the supernova near maximum was nearly as bright as the nucleus ($`\sim `$100 mJy), the sidelobes of SN 1993J can severely contaminate the measurements of the nucleus. The size of the map for each frequency and configuration was chosen so that both sources were included in the field, and the map was then deconvolved using the CLEAN algorithm (as implemented in the task IMAGR). We determined the depth of the cleaning by empirically examining the convergence of the recovered flux density. Both the peak and the integrated flux densities of the nucleus on the deconvolved maps were measured by putting a tight box around the source and summing the pixel values using the task IMEAN. Because the nucleus is displaced so far from the phase center, the image of the nucleus is affected at all frequencies by bandwidth smearing, which diminishes the peak flux density relative to the integrated flux density, a conserved quantity. Therefore, we report here only the integrated flux densities. We verified that the more sophisticated procedure of fitting the source with elliptical Gaussians (using the task IMFIT) gives essentially the same results (usually within 3%). The final flux densities were obtained by scaling the observed values with the correction factor for the attenuation of the primary beam. For the 25 m dishes of the VLA, the correction factors at 20, 6, and 3.6 cm are 1.008, 1.242, and 2.011, respectively, for a phase-center displacement of 2.64<sup>′</sup>.
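The quoted correction factors can be approximately reproduced with a Gaussian primary-beam model; the sketch below assumes the rule of thumb FWHM $`\simeq `$ 45<sup>′</sup>$`/\nu _{\mathrm{GHz}}`$ for a 25 m VLA antenna (the true beam is a measured polynomial, so small deviations from the tabulated values are expected):

```python
import math

def pb_correction(offset_arcmin, freq_GHz, fwhm_const=45.0):
    """Gaussian primary-beam correction factor for a 25 m VLA antenna."""
    fwhm = fwhm_const / freq_GHz                               # arcmin
    atten = math.exp(-4 * math.log(2) * (offset_arcmin / fwhm) ** 2)
    return 1.0 / atten

for freq in (1.4, 4.9, 8.4):                                   # 20, 6, 3.6 cm
    print(freq, round(pb_correction(2.64, freq), 3))
# ≈ 1.019, 1.257, 1.961 — close to the quoted 1.008, 1.242, and 2.011
```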
Because we are interested in establishing the variability behavior of the nucleus using a data set taken under nonoptimal conditions, it is vital to make a careful assessment of the various sources of uncertainty that can affect the measurements. The following sources of error are considered:
(1) The quality of the maps varies widely, but in general the rms noise in the vicinity of the nucleus ranges from 0.1 to 2 mJy beam<sup>-1</sup>, with median values of 0.7, 0.3, and 0.4 mJy at 20, 6, and 3.6 cm, respectively. The corresponding fractional error on the total flux densities is 0.8%, 0.4%, and 0.6% at 20, 6, and 3.6 cm, respectively.
(2) As mentioned above, the process of measuring flux densities from the maps itself can introduce an uncertainty as large as $`\sim `$3%, depending on the method adopted (IMFIT or IMEAN).
(3) The absolute flux density scale of the primary calibrator is assumed accurate to better than 5% (see, e.g., Weiler et al. 1986).
(4) The primary beam of the VLA antennas is accurate only to a few percent at the half power point (Napier & Rots 1982); the exact uncertainty, in fact, has not been determined rigorously (R. Perley and M. Rupen 1999, private communications). For concreteness, we will assume that this source of uncertainty contributes an error of 5% to the primary-beam correction at all frequencies.
(5) Without reference pointing, the VLA can achieve rms pointing errors, under good weather conditions, of 15<sup>′′</sup>–18<sup>′′</sup> during the night and $`\gtrsim `$ 20<sup>′′</sup> during the day (Morris 1991). At the half power point of the beam, a pointing error of 20<sup>′′</sup> results in an amplitude error of 18.4% at 3.6 cm after accounting for primary beam corrections. Assuming that this error is independent among the 27 antennas, we can divide it by $`\sqrt{27}`$ to get a 3.5% error in the flux measurement of a source at the beam half power point due to pointing. A similar calculation yields an error of 1.3% at 6 cm and 0.09% at 20 cm.
(6) A potentially more serious source of uncertainty comes from systematic pointing errors induced by wind and solar heating; these errors are not expected to be random among the antennas. According to Morris (1991), a wind speed of 8 m s<sup>-1</sup> (a typical value for windy conditions) introduces an additional pointing error of $`\sim `$20<sup>′′</sup>. The formal error due to differential solar heating is difficult to establish because this effect has not been formally quantified for the VLA, but it has been estimated to be a factor of a few smaller than the contribution from moderate winds. We take 20<sup>′′</sup> as a nominal pointing error for the two contributions. After accounting for primary beam correction, this translates into a flux measurement error of 18% at 3.6 cm, 7% at 6 cm, and 0.5% at 20 cm.
(7) The problem of “beam squint” — the variation of the primary beam caused by slight differences between the pointing centers of the right and left polarizations — is negligible ($`<`$0.5%) because we make use only of the total intensity, the sum of both polarizations. We neglect this contribution to the total error.
A reasonable estimate of the final error budget can be derived by summing in quadrature sources (1) through (5). This corresponds to an uncertainty of approximately 8% for all three frequencies. The true uncertainty of any individual measurement at 3.6 cm, on the other hand, can be substantially larger than this if systematic pointing errors induced by wind or solar heating are significant. In the most extreme situation, the total error could be as large as 20%.
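The quadrature sums can be verified directly; a sketch using the percentages from items (1)–(6) above:

```python
import math

def quad_sum(*errs_percent):
    return math.sqrt(sum(e * e for e in errs_percent))

# Sources (1)-(5): map noise, measurement method, flux scale, primary beam,
# and random pointing, at 3.6, 6, and 20 cm respectively (in percent):
print(quad_sum(0.6, 3, 5, 5, 3.5))     # ≈ 8.5% at 3.6 cm
print(quad_sum(0.4, 3, 5, 5, 1.3))     # ≈ 7.8% at 6 cm
print(quad_sum(0.8, 3, 5, 5, 0.09))    # ≈ 7.7% at 20 cm

# Worst case at 3.6 cm, adding the 18% systematic pointing term of item (6):
print(quad_sum(0.6, 3, 5, 5, 3.5, 18)) # ≈ 19.9%, i.e. the ~20% quoted
```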
### 2.2 Ryle Telescope Data
The Ryle Telescope (the upgraded Cambridge 5-km Telescope; Jones 1991) is an E-W synthesis telescope which, at the time of the observations, operated at 15.2 GHz with a bandwidth of 280 MHz. The angular separation between the supernova and the nucleus of M81 places one near the half-power point when the pointing center is at the other. The disk of the galaxy also gives rise to emission on similar angular scales. In order to make a clean distinction between the two responses, and to reject the emission from the disc, we cannot make use of the short baselines ($`<`$100 m) in the array: the resolution is inadequate. We therefore use the two groups of longer baselines, near 1.2 and 2.4 km, with interferometer fringe spacings of 2.6<sup>′′</sup> and 1.3<sup>′′</sup>. So long as we avoid the hour angles at which the fringe rates were similar, integration removes the response to any source which is not at the phase center. For the first month after the supernova explosion no observations centered on the nucleus were made; we have not attempted to analyze the data during this interval to derive a flux density for the nucleus. From 1993 May 5 until 1994 June 20, separate pointings were made on the supernova and on the nucleus, together with one on the nearby quasar B0954+658 as a phase calibrator. Amplitude calibration is based on observations of 3C 48 or 3C 286, normally on a daily basis. The data presented here were made with linearly-polarized feeds and are measurements of Stokes I+Q. We estimate that the typical rms uncertainty in the flux densities is $`\sim `$5%. It is apparent from the observations of the phase calibrator, the supernova and the nucleus that some fluctuations in the amplitude scale remain; these are a consequence of poor weather conditions during which the system noise and the gain of the telescope are subject to variations which have not been completely removed by the monitoring systems. As discussed by Pooley & Green (1993), the quasar B0954+658 is strongly variable (a factor of 2 at 15 GHz over the interval covered here), and we fitted a smoothly-varying curve to its apparent flux density in order to reduce the effects of weather and system fluctuations on the results. In retrospect, using a calibrator whose flux density was varying less dramatically would have been better.
## 3 The Light Curves
The light curves of the nucleus (Fig. 1) exhibit a complex pattern of flux density variations during the monitoring period. The nucleus emits a baseline level of roughly 80–100 mJy at all four wavelengths, consistent with its historical average (Crane, Giuffrida, & Carlson 1976; de Bruyn et al. 1976; Bietenholz et al. 1996), on which is superposed at least one, and possibly three to four, discrete events during which the flux increased substantially.
The “outbursts” are most clearly visible in the VLA data at 3.6 cm. Although our earliest epoch did not catch the onset of the initial rise of the first event, a local maximum is apparent on day 13$`\pm `$5 (hereafter outburst “A”; Fig. 1a); here and below, the light curves are referenced to days since 1993 March 28.0 UT, the adopted date of the explosion of SN 1993J. This outburst reached a peak flux density of 190 mJy, a nearly two-fold increase in brightness over the quiescent state. There is some indication that outburst “A” is present at 6 cm as well; however, it appears to be slightly delayed with respect to 3.6 cm, and the amplitude of the intensity increase compared to the quiescent level is lower, on the order of 40%. We find no convincing evidence for a corresponding variation at 20 cm, and the Ryle observations at 2 cm had not yet begun at this time. The outburst centered near day 200$`\pm `$5 (hereafter “B”) in the 3.6 cm light curve (Fig. 1b) has a maximum amplitude of $`\sim `$170 mJy; it can be clearly identified with the 200 mJy peak on day 190$`\pm `$5 at 2 cm, and plausibly with a small maximum near day 205$`\pm `$5 at 6 cm and with the broad peak between days 220–270 at 20 cm. In both of these cases, the amplitude of the variations appears to decrease with increasing wavelength, and the onset of the flares at longer wavelengths seems to be somewhat delayed with respect to the shorter wavelengths. Although the sparse data coverage precludes a detailed temporal analysis, the flares appear to rise and decay on a timescale of 1–2 months. M81 has been monitored only sparsely at the later phases of the program; nonetheless, several additional maxima are evident in the data sets, especially at the two shortest wavelengths, and these might correspond to outbursts similar to “A” and “B.”
In addition to the relatively long-timescale, large-amplitude outbursts, several episodes of rapid variability were imprinted in the densely sampled portion of the 3.6 cm light curve during the first two months (Fig. 1a). The source brightness flickers by 30%–60% on a timescale of a day or less, which implies that the emission originates from regions with dimensions $`\lesssim `$ 0.001 pc. Note that the amplitude of the short-term variability is significantly larger than the measurement uncertainty, even after allowing for pointing errors induced by strong winds (see § 2.1). Crane et al. (1976) previously discussed rapid variability of a similar, perhaps somewhat less extreme, nature in M81 at this wavelength. They found the nucleus to vary by about 40% in the course of a week. Corresponding variations at longer wavelengths are less apparent; at 6 cm, rapid fluctuations occur at most at a level of $`\sim `$10%, and none are significant at 20 cm.
The wavelength dependent variations in flux density naturally lead to strong spectral variations during the outbursts. Figure 2 displays the time variation of the spectral index, $`\alpha `$, defined such that $`S_\nu \propto \nu ^\alpha `$, where $`S_\nu `$ is the flux density at frequency $`\nu `$. As has been well established (de Bruyn et al. 1976; Bartel et al. 1982; Reuter & Lesch 1996), the M81 radio core during quiescence has a flat to slightly inverted spectrum; we measure $`\alpha \approx 0`$ to $`+0.3`$ from 2 to 20 cm, consistent with previous determinations.
We are convinced that the variations in the flux density of the nucleus with time are real and intrinsic to the source. The dotted lines in Figure 1b trace the observed light curves of SN 1993J as derived from the same images from which we measured the nucleus. The supernova light curves exhibit no unusual behavior and are well represented by the model of the emission discussed in Van Dyk et al. (1994). The flux and phase calibrations are therefore reliable, and we cannot attribute the changes in the source to unsuspected variability in the calibrators. The effect of sidelobe contamination by the supernova should not be significant, as SN 1993J is no stronger than the nucleus, and, moreover, the sidelobes are greatly reduced in the cleaned maps. In the case of the VLA data, we have properly corrected for primary-beam attenuation, we have taken into account the effects of bandwidth smearing, and we have carefully considered all known sources of errors affecting the flux density measurements. The only source of uncertainty we did not include formally in our error budget is that potentially arising from systematic pointing errors induced by wind and solar heating (see § 2.1). However, we believe that the main outbursts cannot be attributed to systematic errors for the following reasons. First, the observed variations are much larger than even the most pessimistic error estimate. Second, the main outbursts are clearly well resolved in time and so do not arise from any single, errant data point. And finally, one of the outbursts is seen both in the VLA and in the Ryle data sets, which implies that it cannot be attributed to artifacts of primary-beam correction of the VLA data. We further note that the variations cannot be a configuration-related effect, since the flux densities, particularly at 3.6 cm, do not achieve the same level of variation for observations made in the same array configuration but separated in time.
## 4 The Radio Variability and Its Implications
Previous radio work by Crane et al. (1976) and de Bruyn et al. (1976) showed that the nucleus of M81 undergoes gradual flux variations over several years, accompanied by erratic changes on much shorter timescales. Moreover, these authors recorded the onset of a flare in 1974 October; the flux density at 8085 MHz increased by $`\sim `$40% over one week. The limited time coverage did not permit the entire event to be monitored, however, although simultaneous observations at 2695 MHz suggested that the flux enhancement might have occurred first at the higher frequency. Similarly, Kaufman et al. (1996) suggested that a modest flare at 6 cm possibly occurred during 1981 August. The observations presented in this paper establish conclusively that the nucleus of M81 is strongly variable at centimeter wavelengths. Our time coverage enabled us to identify rapid variability on timescales as short as one day or less at 3.6 cm, and more spectacularly, two distinctive outbursts which could be traced in more than one wavelength, as well as several others seen in at least one of the shorter wavelengths monitored.
Several properties of the best-defined outburst (“B”), namely the steep rise and decline of the light curve, and the frequency dependence of the burst onset and the burst maximum, suggest that the standard adiabatic-expansion model for variable radio sources (Pauliny-Toth & Kellermann 1966; van der Laan 1966) may be applicable. This model idealizes the radio flux variability as arising from an instantaneous injection of a cloud of relativistic electrons which is uniform, spherical, and expanding at a constant velocity. The flux density increases with source radius (or time) while the source remains optically thick, and it decreases during the optically thin phase of the expansion. Both the maximum flux density and the frequency at which it occurs decrease with time. The observed characteristics of outburst “B”, however, do not agree in detail with the predictions of this simple model, as is also the case for other variable radio sources (see discussion in Kellermann & Owen 1988). In particular, the observed profile of the burst at 2 and 3.6 cm is much shallower than predicted; it follows roughly a linear rise and a linear decline, whereas an adiabatically expanding source should brighten as $`t^3`$ before the maximum and, for electrons having a power-law energy distribution with a slope of –2.5, decline thereafter as $`t^{-5}`$. Moreover, the 3.6 and 6 cm peaks occur much earlier than expected relative to the 2 cm peak, and the relative strengths of the peaks do not follow the predicted scaling relations. These inconsistencies no doubt reflect the oversimplification of the standard model. Realistic modeling of the M81 source, which is beyond the scope of this work, most likely will need to incorporate more complex forms of the particle injection rate (see, e.g., Peterson & Dent 1973), as well as departures from spherical symmetry, since the radio source is known to have an elongated geometry, plausibly interpreted as a core-jet structure (Bartel et al. 1982; Bartel, Bietenholz, & Rupen 1995; Bietenholz et al. 1996; Ebbers et al. 1998). The nuclear jet model of Falcke (1996), for instance, can serve as a useful starting point.
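For reference, the idealized single-frequency light curve implied by these two power laws can be written down explicitly; a sketch, with the rise and decay exponents taken from the text and an arbitrary normalization:

```python
import numpy as np

def van_der_laan(t, t_peak=1.0):
    """Idealized light curve of an adiabatically expanding source:
    S ∝ t^3 while optically thick and S ∝ t^-5 after the peak
    (electron energy slope -2.5), normalized so that S(t_peak) = 1."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= t_peak, (t / t_peak) ** 3, (t / t_peak) ** -5)

# Much steeper than the roughly linear rise and decline seen for outburst "B":
print(van_der_laan([0.5, 1.0, 2.0]))   # [0.125 1. 0.03125]
```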
It is instructive to consider whether the radio variability in M81 is associated with any other visible signs of transient activity, at either radio or other wavelengths, during the monitoring period. By analogy with other accretion-powered systems, such as Galactic superluminal sources (see, e.g., Harmon et al. 1997), one might expect the radio outbursts in the M81 nucleus to be accompanied by detectable changes in its radio structure and to be preceded by X-ray flares. The nucleus has been intensively imaged at milli-arcsec resolution in concert with VLBI studies of SN 1993J. Bietenholz, Bartel, & Rupen (1998) and Ebbers et al. (1998) do, in fact, report structural changes in the jet component of the nucleus at 3.6 cm on timescales of weeks, although the limited VLBI data do not permit a direct comparison with our VLA light curves. Several measurements of the nuclear flux in the hard X-ray band (2–10 keV) were taken by ASCA between 1993 May and 1995 April (Ishisaki et al. 1996), but again, because of the limited temporal coverage, these data cannot be used to draw any meaningful comparisons with the radio light curve.
We mention, however, a possible connection between the radio outbursts and an optical flare that was caught during the same monitoring period. After 15 years of relative constancy (Ho et al. 1996), the broad H$`\alpha `$ emission line of the nucleus of M81 brightened by $`\sim `$40% and developed a pronounced double-peaked line profile (Bower et al. 1996) reminiscent of those seen in a minority of AGNs (Eracleous & Halpern 1994 and references therein). Neither the exact date of its onset nor its time evolution is known, except that it occurred between 1993 April 14 (when it was last observed by Ho et al.) and 1995 March 22 (the date of Bower et al.’s observations), within the period of the radio monitoring. The physical origin of double-peaked broad emission lines in AGNs is not yet fully understood (see Halpern et al. 1996 for a discussion), and the detection of possibly related variable radio emission does not offer a clear discriminant between the main competing models. Nevertheless, the association of the sudden appearance of the double-peaked line with another transient event, namely the radio outbursts, hints that the two events could have a common origin. Both phenomena, at least in this case, may originate, for example, from a sudden increase in the accretion rate.
Finally, we note that radio outbursts may be a generic property of low-luminosity AGNs, especially those classified as LINERs. Although no other nearby LINER has had its variability properties scrutinized to the same degree as M81, radio flares have been noticed in at least three other famous LINER nuclei. NGC 1052 is known to have experienced two outbursts at millimeter wavelengths (Heeschen & Puschell 1983) and another at longer wavelengths (Slee et al. 1994). A single outburst at centimeter wavelengths has been reported for the nucleus of M87 (Morabito, Preston, & Jauncey 1988). And Wrobel & Heeschen (1991) remark that NGC 4278 exhibited pronounced variability at 6 cm over the course of 1–2 years. In this regard, it is appropriate to mention that even the extremely low-power radio source in the center of the Milky Way, Sgr A, showcases outbursts at high radio frequencies (Zhao et al. 1992; Wright & Backer 1993). The radio variability characteristics of these weak nuclei closely mimic those of far more powerful radio cores traditionally studied in quasars and radio galaxies, and they furnish additional evidence that the AGN phenomenon spans an enormous range in luminosity.
## 5 Summary
We analyzed the radio light curves of the low-luminosity active nucleus in the nearby spiral galaxy M81 taken at 3.6, 6, and 20 cm over a four-year period between 1993 and 1997, as well as a 2 cm light curve covering a more limited span between 1993 and 1994. Two types of variability are seen: rapid ($`\lesssim `$ 1 day), small-amplitude (10%–60%) flux density changes are evident at 3.6 and 6 cm, and at least one, and possibly three or four, longer timescale (months) outbursts of greater amplitude (30%–100%). The best observed of the outbursts can be traced in three bands. The maximum flux density decreases systematically with decreasing frequency, and the time at which the maximum occurs is shifted toward later times at lower frequencies. These characteristics qualitatively agree with the predictions of the adiabatic-expansion model for variable radio sources, although certain discrepancies between the observations and the model predictions suggest that the model needs to be refined. The radio outbursts may be related to an optical flare during which the broad H$`\alpha `$ emission line developed a double-peaked structure. Although the exact relationship between the two events is unclear, both phenomena may stem from a sudden increase in the accretion rate.
During the course of this work, L. C. H. was supported by a postdoctoral fellowship from the Harvard-Smithsonian Center for Astrophysics, by NASA grant NAG 5-3556, and by NASA grants GO-06837.01-95A and AR-07527.02-96A from the Space Telescope Science Institute (operated by AURA, Inc., under NASA contract NAS5-26555). K. W. W. wishes to acknowledge the Office of Naval Research for the 6.1 funding which supports his work. We thank Norbert Bartel, Michael Eracleous, Heino Falcke, and the referee for helpful comments, and Rick Perley and Michael Rupen for advice concerning pointing errors of the VLA.
## Appendix A The Data
For the sake of completeness, we list in Tables 1–4 the flux densities of the nucleus of M81. The uncertainties in the flux densities were calculated as described in § 2.1. These are the data plotted in Figure 1.
References
Baars, J. W. M., Genzel, R., Pauliny-Toth, I. I. K., & Witzel, A., 1977, A&A, 61, 99
Bartel, N., et al. 1982, ApJ, 262, 556
Bartel, N., Bietenholz, M. F., & Rupen, M. P. 1995, in Proc. Natl. Acad. Sci., 92, 11374
Bietenholz, M. F., et al. 1996, ApJ, 457, 604
Bietenholz, M. F., Bartel, N., & Rupen, M. P. 1998, in IAU Colloq. 164, Radio Emission from Galactic and Extragalactic Compact Sources, ed. A. Zensus, G. Taylor, & J. Wrobel (San Francisco: ASP), 201
Bower, G. A., Wilson, A. S., Heckman, T. M., & Richstone, D. O. 1996, AJ, 111, 1901
Crane, P. C., Giuffrida, B., & Carlson, J. B. 1976, ApJ, 203, L113
de Bruyn, A. G., Crane, P. C., Price, R. M., & Carlson, J. 1976, A&A, 46, 243
Ebbers, A., Bartel, N., Bietenholz, M. F., Rupen, M. P., & Beasley, A. J. 1998, in IAU Colloq. 164, Radio Emission from Galactic and Extragalactic Compact Sources, ed. A. Zensus, G. Taylor, & J. Wrobel (San Francisco: ASP), 203
Eracleous, M., & Halpern, J. P. 1994, ApJS, 90, 1
Falcke, H. 1996, ApJ, 464, L67
Filippenko, A. V., & Sargent, W. L. W. 1988, ApJ, 324, 134
Freedman, W. L., et al. 1994, ApJ, 427, 628
Halpern, J. P., Eracleous, M., Filippenko, A. V., & Chen, K. 1996, ApJ, 464, 704
Harmon, B. A., Deal, K. J., Paciesas, W. S., Zhang, S. N., Robinson, C. R., Gerard, E., Rodríguez, L. F., & Mirabel, I. F. 1997, ApJ, 477, L85
Heckman, T. M. 1980, A&A, 87, 152
Heeschen, D. S., & Puschell, J. J. 1983, ApJ, 267, L11
Ho, L. C. 1999, in The AGN-Galaxy Connection, ed. H. R. Schmitt, L. C. Ho, & A. L. Kinney (Advances in Space Research), in press (astro-ph/9807273)
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1996, ApJ, 462, 183
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997, ApJS, 112, 315
Ishisaki, Y., et al. 1996, PASJ, 48, 237
Jones, M. E. 1991, in IAU Colloq. 131, Radio Interferometry: Theory, Techniques, and Applications, ed. T. J. Cornwell & R. A. Perley (San Francisco: ASP), 295
Kaufman, M., Bash, F. N., Crane, P. C., & Jacoby, G. H. 1996, AJ, 112, 1021
Keel, W. C. 1989, AJ, 98, 195
Kellermann, K. I., & Owen, F. N. 1988, in Galactic and Extragalactic Radio Astronomy, ed. G. L. Verschuur & K. I. Kellermann (New York: Springer-Verlag), 563
Morabito, D. D., Preston, R. A., & Jauncey, D. L. 1988, AJ, 95, 1037
Morris, D. 1991, VLA Test Memorandum No. 182 (National Radio Astronomy Observatory)
Napier, P. J., & Rots, A. H. 1982, VLA Test Memorandum No. 134 (National Radio Astronomy Observatory)
Pauliny-Toth, I. I. K., & Kellermann, K. I. 1966, ApJ, 146, 634
Peimbert, M., & Torres-Peimbert, S. 1981, ApJ, 245, 845
Peterson, F. W., & Dent, W. A. 1973, ApJ, 186, 421
Pooley, G. G., & Green, D. A. 1993, MNRAS, 264, L17
Reuter, H.-P., & Lesch, H. 1996, A&A, 310, L5
Shuder, J. M., & Osterbrock, D. E. 1981, ApJ, 250, 55
Slee, O. B., Sadler, E. M., Reynolds, J. E., & Ekers, R. D. 1994, MNRAS, 269, 928
van der Laan, H. 1966, Nature, 211, 1131
Van Dyk, S. D., Weiler, K. W., Sramek, R. A., Rupen, M. P., & Panagia, N. 1994, ApJ, 432, L115
Weiler, K. W., Sramek, R. A., Panagia, N., van der Hulst, J. M., & Salvati, M. 1986, ApJ, 301, 790
Wright, M. C. H., & Backer, D. C. 1993, ApJ, 417, 560
Wrobel, J. M., & Heeschen, D. S. 1991, AJ, 101, 148
Zhao, J.-H., Goss, W. M., Lo, K. Y., & Ekers, R. D. 1992, in Relationships Between Active Galactic Nuclei and Starburst Galaxies, ed. A. V. Filippenko (San Francisco: ASP), 295
# Neutrino Oscillation Experiments at Nuclear Reactors
## 1 Introduction
Neutrino oscillations, if discovered, would shed light on some of the most essential issues of modern particle physics ranging from a better understanding of lepton masses to the exploration of new physics beyond the Standard Model. In addition finite neutrino masses would have important consequences in astrophysics and cosmology.
Experiments performed using both particle accelerators and nuclear reactors have been carried out extensively in the past 20 years, finding no firm evidence for neutrino oscillations. However, in recent years, evidence has been collected on a number of effects that could point to oscillations: the solar neutrino puzzle, the anomaly observed in atmospheric neutrinos, and the LSND effect. This paper will concentrate on the first two cases, which are well suited to be studied with reactor experiments. Both effects, if interpreted as signals for neutrino oscillations, would suggest very small neutrino mass differences and, possibly, large mixing parameters. We write the probability of oscillation from a flavor $`\ell `$ to another one $`\ell ^{\prime }`$ as
$$P_{\ell \rightarrow \ell ^{\prime }}=\sin ^22\theta \,\sin ^2\frac{1.27\,\mathrm{\Delta }m^2L}{E_\nu }$$
(1)
where $`L`$ is expressed in meters, $`\mathrm{\Delta }m^2`$ in eV<sup>2</sup> and $`E_\nu `$ in MeV. It is clear that in order to probe sufficiently small $`\mathrm{\Delta }m^2`$, long baselines have to be combined with low energy neutrinos. Unfortunately, we are able to collimate neutrino beams only by using the Lorentz boost of the parent particles from whose decay the neutrinos are produced. For this reason low energy neutrinos are generally produced over large solid angles, while high energy ones may come in relatively narrow beams. Hence, to access, for instance, the atmospheric neutrino $`\mathrm{\Delta }m^2`$ region, we have the choice of either using the beam from an accelerator, which is rather narrow (better than $`1`$ mrad) but has an energy of several GeV, or detecting few-MeV neutrinos emitted isotropically by a nuclear reactor. In the first case the baseline has to be much larger, but since the beam is pointing, both options turn out to be quite feasible and their different features make them quite complementary. As reactors produce exclusively electron anti-neutrinos, only $`\overline{\nu }_\mathrm{e}\rightarrow \overline{\nu }_\mathrm{X}`$ oscillations can be observed. In addition, since the neutrino energy is below the threshold for producing muons (or $`\tau `$s), reactor experiments have to be of the “disappearance” type, that is, oscillations would be detected as a deficit of electron neutrinos. This feature, together with the higher energies produced in accelerator neutrino events and their time-bunched structure, makes accelerator-based experiments more immune to backgrounds and, in general, more sensitive to small mixing parameters. On the other hand, the very low energy of reactor neutrinos offers the best chance to push to the limit our exploration of the small $`\mathrm{\Delta }m^2`$ regime.
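As a concrete illustration of Eq. (1), the short Python sketch below evaluates the corresponding two-flavor survival probability $`1-P_{\ell \rightarrow \ell ^{\prime }}`$; the baselines and oscillation parameters used are round numbers chosen here purely for illustration, not values quoted from the experiments.

```python
import math

def survival_probability(sin2_2theta, dm2, L, E):
    """Two-flavor survival probability 1 - P from Eq. (1);
    L in meters, dm2 in eV^2, E in MeV."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2 * L / E) ** 2

# Illustrative parameters: maximal mixing, dm^2 = 3e-3 eV^2.
for L in (1000.0, 800.0):        # roughly 1 km and 0.8 km baselines
    for E in (3.0, 5.0, 8.0):    # typical reactor anti-neutrino energies (MeV)
        p = survival_probability(1.0, 3.0e-3, L, E)
        print(f"L = {L:6.0f} m, E = {E:3.1f} MeV -> P(survival) = {p:.3f}")
```

The scan over energies shows directly why a measured spectrum distortion, and not just a rate deficit, carries the oscillation signature.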
Two reactor-based experiments have been performed to study the parameter region consistent with atmospheric neutrinos, extending our reach in $`\mathrm{\Delta }m^2`$ by over an order of magnitude. While the Chooz experiment has been completed (although part of the data is still being analyzed), Palo Verde has been taking data since the fall of 1998 and will continue at least until the end of 1999. KamLAND will be the first laboratory-style experiment able to attack one of the regions of solar neutrino oscillations.
## 2 Reactors as Neutrino Sources
Nuclear reactors produce $`\overline{\nu }_\mathrm{e}`$ isotropically in the $`\beta `$ decay of the neutron-rich fission fragments. For all practical purposes the neutrino flux and spectrum depend only on the composition of the core in terms of the four isotopes <sup>235</sup>U, <sup>238</sup>U, <sup>239</sup>Pu and <sup>241</sup>Pu being fissioned in the reactor. Neutrinos are then produced by long chains of daughter isotopes, and hundreds of different $`\beta `$-decays have to be included to account for the observed yields. The modeling of such processes is quite a formidable task, but there is nowadays very good agreement between theoretical calculations and experimental data. Two methods can be used to experimentally cross check theoretical models. In one case, the electron spectra for fission-produced chains can be experimentally measured for each of the four parent isotopes. From these data, available only for <sup>235</sup>U, <sup>239</sup>Pu and <sup>241</sup>Pu, anti-neutrino spectra can be derived without loss of accuracy, obtaining a total uncertainty on the flux of about 3%. Alternatively, anti-neutrino flux and spectra have been directly measured in several high-statistics experiments with detectors of known efficiency. These data are usually a by-product of previous reactor oscillation experiments where the anti-neutrinos have been measured at different distances. Since these observations have been found to be consistent with a $`1/r^2`$ law (no oscillations at short baselines), they can now be used as a determination of the absolute anti-neutrino spectra. A total error of about 1.4% has been achieved in these measurements.
All measurements and calculations agree with each other within errors, so that, given the history of power and fuel composition for a reactor, its anti-neutrino energy spectrum can be computed with an error of about 3%. We note here that for this kind of experiment a “near measurement” is superfluous as, in essence, all the information needed can be readily derived from the previous generation of experiments, using their result that no oscillations take place at those shorter baselines. The real challenges consist in measuring precisely the detector efficiency and in subtracting backgrounds properly.
Since the neutrino spectrum is only measured above some energy threshold ($`M_\mathrm{n}-M_\mathrm{p}+m_\mathrm{e}`$ = 1.8 MeV), only fast (energetic) decays contribute to the useful flux, and the “neutrino luminosity” tracks very well in time the power output of the reactor. Generally, a few hours after a reactor turns off, the neutrino flux above threshold has become negligible. Similarly, the equilibrium for neutrinos above threshold is established already several hours after the reactor is turned on.
While early oscillation experiments used military or research reactors, modern experiments have long baselines and so need the largest available fluxes (powers), which are usually available at large commercial power generating stations. Typical modern reactors have thermal power in excess of 3 GW ($`>1`$ GW electrical power), corresponding to $`7.7\times 10^{20}\nu /\mathrm{s}`$. Usually more than one such reactor is located at a power plant, so that the neutrino flux detected is the sum of the contributions from each core.
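The quoted rate can be checked at the order-of-magnitude level. The sketch below assumes round values of about 200 MeV released and about 6 $`\overline{\nu }_\mathrm{e}`$ emitted per fission; both numbers are assumptions made here for illustration and are not taken from the text.

```python
# Order-of-magnitude estimate of the anti-neutrino yield of a 3 GW reactor,
# assuming ~200 MeV per fission and ~6 anti-neutrinos per fission.
thermal_power = 3.0e9                        # W
energy_per_fission = 200.0e6 * 1.602e-19     # J
fission_rate = thermal_power / energy_per_fission
print(f"{6.0 * fission_rate:.1e} nu/s")      # ~6e20/s, the same order as 7.7e20
```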
Although periods of time with the source off would be very useful to study the backgrounds, in the case of multiple reactors plant optimization requires the refueling of one reactor at a time, so that in practice backgrounds are often studied at partial power (instead of zero power). Typically each reactor is off (refueling) for about one month every one or two years. A notable exception is Chooz, where the experiment was running before the power plant was operational. This experiment was then able to record the slow turn-on of the reactors during commissioning, as shown in Figure 1. This is used to cross check other estimates of the backgrounds to the measurement.
KamLAND will detect neutrinos from a very large number of reactors in several power plants distributed in the central region of Japan. In this case an important check of backgrounds will result from the study of a 6-month period modulation in the neutrino flux due to concentrated reactor refuelings and maintenance in the fall and spring periods, when electricity demand is lowest. Such a modulation, with a strength of about 30% of the full flux, is illustrated in Figure 2.
## 3 Oscillation Searches in the Atmospheric Neutrino Region
At the time of writing two experiments are exploring the region of phase-space with $`10^{-3}<\mathrm{\Delta }m^2<10^{-2}`$ eV<sup>2</sup>: Chooz in France (a 2-reactor site) and Palo Verde in the United States (a 3-reactor site). In order to be sensitive to oscillations in the atmospheric neutrino region, the Chooz and Palo Verde detectors are located, respectively, $`\sim `$1 km and $`\sim `$0.8 km from the reactors. In both cases the detection is based on the inverse-$`\beta `$ reaction
$$\overline{\nu _e}+p\rightarrow e^++n.$$
(2)
in Gadolinium-loaded liquid scintillator. The detectors can measure the positron energy so that the anti-neutrino spectrum can be easily reconstructed from simple kinematics as
$$E_{\overline{\nu }}=E_{\mathrm{e}^+}+(M_\mathrm{n}-M_\mathrm{p}+m_\mathrm{e})+\mathcal{O}(E_{\overline{\nu }}/M_\mathrm{n}).$$
(3)
Given a fixed baseline, different energies have different oscillation probabilities and, for a large range of $`\mathrm{\Delta }m^2`$ values, the signature of oscillations is an unmistakable distortion of the energy spectrum. The slight difference in baselines for the two experiments could result in rather different oscillation signals, providing a nice cross-check against non-oscillation effects. Parameter sets closer to the sensitivity boundaries will ultimately give neutrino spectra similar to the case of no oscillations, so that to reach the best sensitivity both experiments will have to rely on the absolute neutrino flux measurement. Since at these large distances from the reactors the flux of neutrinos is rather low, special precautions have to be taken in order to suppress backgrounds from cosmic radiation and natural radioactivity. Although both detectors are located underground, background rejection is achieved somewhat differently in the two cases. On one hand Chooz has been built in a rather deep (300 m.w.e.) already existing underground site, while, on the other, Palo Verde was installed in a shallow laboratory (32 m.w.e.) excavated on purpose. Hence this last experiment is segmented and uses tighter signatures to identify anti-neutrino events.
The central detector of Palo Verde, a matrix of 66 acrylic cells, each 9 m long, is surrounded by a 1 m thick water buffer that shields $`\gamma `$ radiation and neutrons. A large veto counter encloses the entire detector, rejecting cosmic-ray muons. In this detector the signal consists of a fast triple coincidence followed by the neutron capture. The triple is produced by the ionization due to the positron and, in two different cells, its two annihilation photons. Timing information at the two ends of each cell allows the events to be reconstructed longitudinally and the light attenuation in the cells to be corrected for, providing a good quality positron energy measurement.
The Chooz detector, on the other hand, being in a lower background environment, is a single spherical acrylic vessel filled with liquid scintillator. It triggers on the double coincidence between the positron and the neutron parts of the inverse $`\beta `$ reaction. Also in this case the central detector is surrounded by a veto and some shielding layers.
Gadolinium doping of the scintillator reduces the neutron capture time and hence the background. A concentration of 0.1% Gd by weight reduces the capture time from 170 $`\mu `$s to 28 $`\mu `$s. Since a neutron capture on Gd is accompanied by an 8 MeV photon cascade, another advantage of the doped scintillator is that it allows for a very high threshold for the neutron part of the event. This threshold, well above the Th and U lines, results in further reduction of the background.
Although both detectors are built using low activity materials, this requirement is more severe for Chooz in order to have a $`\gamma `$-ray rate consistent with the lower cosmic-ray induced background.
Two categories of backgrounds are considered: one is given by random hits in the detector (2 for Chooz, 4 for Palo Verde) produced by independent $`\gamma `$-rays and/or neutrons, while the other is given by single or double fast neutrons produced outside the veto by cosmic-ray muons, mainly in spallation processes. Neutrons can deposit some energy, simulating the fast part of the event, and then thermalize and capture on Gd (like the neutrons from the anti-neutrino capture process). Unlike the case of independent hits, in this second background the event has the same time structure as real events, so that its rejection is a priori more difficult. The expected rate of neutrino events for the case of no oscillations is about 30 day<sup>-1</sup> for both detectors. Both groups use rather advanced trigger and data acquisition systems to select and log neutrino events.
At the present time both experiments see fluxes that are completely compatible with the expectations for no oscillations. From these observations the reactor experiments can exclude that oscillations involving electron neutrinos are causing the atmospheric neutrino anomaly. This result is quantitatively illustrated in Figure 3.
## 4 Physics with KamLAND
The KamLAND experiment will use the Kamiokande infrastructure, under 1000 m rock overburden, to perform an ultra-long baseline experiment with enough sensitivity to test the large mixing angle MSW solution of the solar neutrino puzzle. KamLAND will consist of 1000 tons of liquid scintillator surrounded by a 2.5 m thick mineral oil shielding layer. Both liquids are contained in an 18 m diameter stainless-steel sphere that also supports, on the inside surface, about 2000 17-inch photomultipliers giving a 30% photocathode coverage. Such photomultipliers are modified from the 20-inch SuperKamiokande tubes and provide 3 ns FWHM transit-time spread, allowing 1 MeV energy depositions to be localized with 10 cm accuracy. The detector light yield will be better than 100 p.e./MeV. A veto detector will be provided by flooding the volume outside the sphere with water and reading the Čerenkov light with old Kamiokande photomultipliers. A schematic view of the detector is shown in Figure 4.
In Table 1 we list the power, distance and neutrino rates for the five nuclear plants giving the largest $`\overline{\nu }_\mathrm{e}`$ flux contributions, together with the total from all Japanese reactors ($`\sim `$2 events day<sup>-1</sup>).
Extensive detector simulations predict that the main backgrounds will result from random coincidences of hits from natural radioactivity (0.05 events/day) and neutrons produced by muon spallation in rock (0.05 events/day). Hence the signal/noise ratio for reactor anti-neutrinos is expected to be about 20/1. While the predicted exclusion contour for the case of 3 years of data taking and no evidence of oscillations is shown in Figure 3, Figure 5 shows the precision to which the two oscillation parameters would be measured in 3 years if oscillations do indeed occur according to the large mixing angle MSW solution of the solar neutrino puzzle. In addition to the neutrino oscillation physics described above, KamLAND will also perform a number of new measurements in the fields of terrestrial neutrinos, supernovae physics and solar neutrinos.
Construction of KamLAND started in 1998, and data taking is scheduled to begin during 2001.
In conclusion, the study of reactor neutrinos appears to be a very interesting field indeed, offering the opportunity for exciting measurements and discoveries. The next 5 to 10 years should be rich in results!
## 5 Acknowledgments
I would like to express my gratitude to my collaborators in the Palo Verde and KamLAND experiments for many interesting discussions on neutrino physics. I am also indebted to C. Bemporad, who provided me with the material about the Chooz experiment.
## 1 Introduction.
The theory of stellar evolution and one of its strongest tools – population synthesis – are now rapidly developing branches of astrophysics. Very often only the evolution of single stars is modelled, but it is well known that about 50% of all stars are members of binary systems, and many different astrophysical objects are products of the evolution of binary stars. We argue that it is often necessary to take the evolution of close binaries into account when using population synthesis, in order to avoid serious errors.
Initially this work was stimulated by the article of Contini et al. (1995), where the authors suggested an unusual form of the initial mass function (IMF) to explain the observed properties of the galaxy Mrk 712. They suggested a “flat” IMF with the exponent $`\alpha =1`$ instead of Salpeter's value $`\alpha =2.35`$. Contini et al. (1995) did not take binary systems into account, so nothing could be said about the influence of such an IMF on populations of close binary stars. Later Shaerer (1996) showed that the observations could be explained without the IMF with $`\alpha =1`$. Here we try to determine the influence of variations of the IMF on the evolution of compact binaries and apply our results to seven regions of starformation (Shaerer et al., 1998, hereafter SCK98).
Previously (Lipunov et al., 1996a) we used the “Scenario Machine” for calculations of populations of X-ray sources after a burst of starformation at the Galactic center. Here, as before in Popov et al. (1997, 1998), we model a general situation: we make calculations for a typical starformation burst. We show results for twelve types of binary sources with significant X-ray luminosity, for three values of the upper mass limit and three values of $`\alpha `$.
## 2 Model.
The Monte-Carlo method for statistical simulations of binary evolution was originally proposed by Kornilov & Lipunov (1983a,b) for massive binaries and developed later by Lipunov & Postnov (1987) for low-mass binaries. Dewey & Cordes (1987) applied an analogous method to the analysis of radio pulsar statistics, and de Kool (1992) investigated the formation of Galactic cataclysmic variables with the Monte-Carlo method (see the review in van den Heuvel 1994).
Monte-Carlo simulations of binary star evolution allow one to investigate the evolution of a large ensemble of binaries and to estimate the number of binaries at different evolutionary stages. The inevitable simplifications in the analytical description of binary evolution that we allow in our extensive numerical calculations make those numbers approximate to a factor of 2–3. However, the inaccuracy of direct calculations giving the numbers of different binary types in the Galaxy (see e.g. Iben & Tutukov 1984, van den Heuvel 1994) seems to be comparable to what follows from the simplifications in the binary evolution treatment.
In our analysis of binary evolution, we use the “Scenario Machine”, a computer code that incorporates current scenarios of binary evolution and takes into account the influence of the magnetic field of compact objects on their observational appearance. A detailed description of the computational techniques and input assumptions is summarized elsewhere (Lipunov et al. 1996b; see also: http://xray.sai.msu.su/~mystery/articles/review/), and here we briefly list only the principal parameters and initial distributions.
We trace the evolution of binary systems during the first 20 Myrs after their formation in a starformation burst. Obviously, only stars that are massive enough (with masses $`8`$–$`10\mathrm{M}_{\odot }`$) can evolve off the main sequence during a time as short as this to yield compact remnants: neutron stars (NSs) and black holes (BHs). Therefore we consider only massive binaries, i.e. those having the mass of the primary (more massive) component in the range of $`10`$–$`120`$ $`\mathrm{M}_{\odot }`$.
The distribution in orbital separations is taken as deduced from observations:
$$f(\mathrm{log}a)=\mathrm{const},\qquad \mathrm{max}\{10\mathrm{R}_{\odot },\text{Roche lobe size}(M_1)\}<a<10^7\mathrm{R}_{\odot }.$$
(1)
We assume that a NS with a mass of $`1.4\mathrm{M}_{\odot }`$ is formed as a result of the collapse of a star whose core mass prior to collapse was $`M_{*}\approx (2.5\text{–}35)\mathrm{M}_{\odot }`$. This corresponds to an initial mass range $`(10\text{–}60)\mathrm{M}_{\odot }`$, taking into account that a massive star can lose more than $`(10\text{–}20)\%`$ of its initial mass during the evolution with a strong stellar wind.
The most massive stars are assumed to collapse into a BH once their mass before the collapse is $`M>M_{cr}=35\mathrm{M}_{\odot }`$ (which would correspond to an initial mass of the ZAMS star as high as $`60\mathrm{M}_{\odot }`$, since a substantial mass loss due to a strong stellar wind occurs for the most massive stars). The BH mass is calculated as $`M_{bh}=k_{bh}M_{cr}`$, where the parameter $`k_{bh}`$ is taken to be 0.7.
The mass limit for a NS (the Oppenheimer-Volkoff limit) is taken to be $`M_{OV}=2.5\mathrm{M}_{\odot }`$, which corresponds to a hard equation of state of the NS matter.
We made calculations for several values of the coefficient $`\alpha `$:
$$\frac{dN}{dM}\propto M^{-\alpha }$$
(2)
We calculated $`10^7`$ systems in every run of the program. Then the results were normalized to the total mass of binary stars in the starformation burst. We also used different values of the upper mass limit.
We took into account that the collapse of a massive star into a NS can be asymmetric, so that an additional kick velocity, $`v_{kick}`$, presumably randomly oriented in space, should be imparted to the newborn compact object. We used the velocity distribution in the form obtained by Lyne & Lorimer (1994) with a characteristic value of 200 km/s (half the value of Lyne & Lorimer (1994); see Lipunov et al. (1996c)).
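A minimal sketch of the initial sampling step of such a Monte-Carlo run is given below; it is written in Python purely for illustration and is not the original code. Primary masses are drawn from Eq. (2) by inverting the cumulative distribution, and orbital separations from the log-flat distribution of Eq. (1) (the Roche lobe lower cut is omitted here).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_primary_masses(n, alpha, m_min=10.0, m_max=120.0):
    """Draw masses (M_sun) from dN/dM ~ M^(-alpha), Eq. (2), by inverse-CDF sampling."""
    u = rng.random(n)
    if alpha == 1.0:                     # dN/dM ~ 1/M needs the logarithmic form
        return m_min * (m_max / m_min) ** u
    a = 1.0 - alpha
    return (m_min ** a + u * (m_max ** a - m_min ** a)) ** (1.0 / a)

def sample_separations(n, a_min=10.0, a_max=1.0e7):
    """Separations (R_sun) distributed flat in log a, Eq. (1)."""
    return 10.0 ** rng.uniform(np.log10(a_min), np.log10(a_max), n)

masses = sample_primary_masses(1_000_000, alpha=2.35)
seps = sample_separations(1_000_000)
print(f"fraction of primaries above 60 M_sun: {(masses > 60.0).mean():.4f}")
```

Comparing this fraction for $`\alpha =1`$, 1.35 and 2.35 already shows why BH-forming systems are so much more sensitive to the IMF slope.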
## 3 Results.
In the figures we show the results of our calculations. In all graphs the X-axis shows the time after the starformation burst in Myrs, and the Y-axis the number of sources of the selected type that exist at the particular moment (not the birth rate of the sources!).
On figures 1-3 we show our calculations for X-ray sources of 12 different types for different parameters of the IMF.
* Figure 1 — $`\alpha =1`$,
* Figure 2 — $`\alpha =1.35`$,
* Figure 3 — $`\alpha =2.35`$.
For upper mass limits:
* $`120M_{}`$ – solid lines,
* $`60M_{}`$ – dashed lines,
* $`40M_{}`$ – dotted lines.
The calculated numbers were normalized to $`1\times 10^6\mathrm{M}_{\odot }`$ in binary stars. We show in figures 1-3 and in tables 1-9 only systems with a luminosity of the compact object greater than $`10^{33}\mathrm{erg}/\mathrm{s}`$ (it should be mainly X-ray luminosity).
Curves were not smoothed, so all fluctuations of a statistical nature are present. We calculated $`10^7`$ binary systems in every run, and then the results were normalized.
We used the “flat” mass ratio function, i.e. binary systems with any mass ratio appear with the same probability. The results can be renormalized to any other form of the mass ratio function.
## 4 Application of our calculations
We apply our results to seven regions of recent starformation. Ages, total masses and some other characteristics were taken from SCK98 (we used total masses determined for Salpeter's IMF even for the IMFs with different parameters, which is a simplification). Since the ages of several regions are uncertain, we made calculations for two values of the age, as marked in SCK98.
Results are presented in tables 1-9 (regions NGC3125A and NGC3125B have similar ages and total masses). We assumed that binaries contain 50% of the total mass of the starburst. Numbers were rounded off to the nearest integer (i.e. n sources means that the calculated number was between n-0.5 and n+0.5).
## 5 Discussion and conclusions
Different types of close binaries show different sensitivity to variations of the IMF. When we replace $`\alpha =2.35`$ by $`\alpha =1`$ the numbers of all sources increase. Systems with BHs are more sensitive to such variations.
When one varies the upper mass limit, the situation is different. In some cases (especially for $`\alpha =2.35`$) systems with NSs show little difference for different values of the upper mass limit, while systems with BHs become significantly less (or more) abundant for different upper masses. Luckily, X-ray transients, which are the most numerous systems in our calculations, show significant sensitivity to variations of the upper mass limit. But of course, due to their transient nature it is difficult to use them to detect small variations in the IMF. If it is possible to distinguish systems with a BH, it is much better to use them to test the IMF.
The results of our calculations can be easily used to estimate the number of X-ray sources for different parameters of the IMF if the total mass of stars and the age of a starburst are known (in Popov et al. (1997, 1998) analytical approximations for source numbers were given). We also estimate the numbers of different sources for several regions of recent starformation (tables 1-9).
In this poster we also tried to show that, as expected, populations of close binaries are very sensitive to variations of the IMF. One must be careful when trying to fit the observed data for single stars with variations of the IMF. And, vice versa, using detailed observations of X-ray sources, one can try to estimate the parameters of the IMF and test the results obtained from the single star population.
## 6 Acknowledgements
We want to thank Dr. K.A. Postnov for discussions and G.V. Lipunova and Dr. I.E. Panchenko for technical assistance.
This work was supported by the grants: NTP “Astronomy” 1.4.2.3., NTP “Astronomy” 1.4.4.1 and “Universities of Russia” N5559.
We are also thankful to the organizers of the conference for support and hospitality.
TWELVE TYPES OF X-RAY SOURCES
BH+N2 — A BH with a He-core Star (Giant)
NA+N1 — An Accreting NS with a Main Sequence Star (Be-transient)
BH+WR — A BH with a Wolf–Rayet Star
BH+N1 — A BH with a Main Sequence Star
BH+N3G — A BH with a Roche-lobe filling star, when the binary loses angular momentum by gravitational radiation
NA+N3 — An Accreting NS with a Roche-lobe filling star (fast mass transfer from the more massive star)
NA+WR — An Accreting NS with a Wolf–Rayet Star
BH+N3E — A BH with a Roche-lobe filling star (nuclear evolution time scale)
NA+N3G — An Accreting NS with a Roche-lobe filling star, when the binary loses angular momentum due to gravitational radiation
NA+N3M — An Accreting NS with a Roche-lobe filling star, when the binary loses angular momentum due to magnetic wind
NA+N2 — An Accreting NS with a He-core Star (Giant)
NA+N3E — An Accreting NS with a Roche-lobe filling star (nuclear evolution time scale)
# Constraints on the phase $`\gamma `$ and new physics from $`B\rightarrow K\pi `$ Decays
## Abstract
Recent results from CLEO on $`B\rightarrow K\pi `$ indicate that the phase $`\gamma `$ may be substantially different from that obtained from other fits to the KM matrix elements in the Standard Model. We show that $`\gamma `$ extracted using $`B\rightarrow K\pi ,\pi \pi `$ is sensitive to new physics occurring at loop level. It provides a powerful method to probe new physics in electroweak penguin interactions. Using effects due to anomalous gauge couplings as an example, we show that within the allowed ranges for these couplings, information about $`\gamma `$ obtained from $`B\rightarrow K\pi ,\pi \pi `$ can be very different from the Standard Model prediction.
The CLEO collaboration has recently measured the four $`B\rightarrow K\pi `$ branching ratios, with $`Br(B^\pm \rightarrow \pi ^\pm K^0)=(1.82_{-0.40}^{+0.46}\pm 0.16)\times 10^{-5}`$, $`Br(B^\pm \rightarrow K^\pm \pi ^0)=(1.21_{-0.28-0.14}^{+0.30+0.21})\times 10^{-5}`$, $`Br(B\rightarrow K^\pm \pi ^{\mp })=(1.88_{-0.26}^{+0.28}\pm 0.13)\times 10^{-5}`$ and $`Br(B\rightarrow K^0\pi ^0)=(1.48_{-0.51-0.33}^{+0.59+0.24})\times 10^{-5}`$. It is surprising that these branching ratios turn out to be close to each other, because the naive expectation of strong penguin dominance would give $`R=Br(K^\pm \pi ^0)/Br(K^0\pi ^\pm )\approx 1/2`$, and model calculations for $`Br(B^0\rightarrow K^0\pi ^0)`$ would obtain a much smaller value. The closeness of the branching ratios with charged mesons in the final states may be an indication of large interference effects between tree, strong penguin and electroweak penguin interactions. It has been shown that using information from these decays and $`B^\pm \rightarrow \pi ^\pm \pi ^0`$ decays, the phase angle $`\gamma `$ of the KM matrix can be constrained and determined in the Standard Model (SM). Using the present central values for these branching ratios, we find that the constraint obtained on $`\gamma `$ may be in conflict with $`\gamma =(59.5_{-7.5}^{+8.5})^{\circ }`$ obtained from other constraints.
If there is new physics beyond the SM, the situation becomes complicated. It is not possible to isolate different new physics sources in the most general case. However, one can extract important information for the class of models where significant new physics effects only show up at loop level in B decays. In this paper we study how new physics of the type described above can affect the results, using anomalous three gauge boson couplings as an example for illustration.
New physics due to anomalous three gauge boson couplings is a perfect example of models where new physics effects only appear at loop level in B decays. Effects due to anomalous couplings do not appear at tree level for B decays to the lowest order, and they do not affect CP violation and mixing in the $`K^0`$–$`\overline{K}^0`$ and $`B`$–$`\overline{B}`$ systems at one loop level. Therefore they do not affect the fitting to the KM parameters obtained in Ref. . However, they affect the constraint on and determination of $`\gamma `$ using experimental results from $`B\rightarrow K\pi ,\pi \pi `$, and affect the $`B`$ decay branching ratios.
The effective Hamiltonian $`H_{eff}=(G_F/\sqrt{2})[V_{ub}^{*}V_{uq}(c_1O_1+c_2O_2)-V_{tb}^{*}V_{tq}\sum _{i=3}^{10}c_iO_i]`$ responsible for B decays has been studied by many authors. We will use the values of $`c_i`$ obtained for the SM in Ref. with
$`c_1=-0.313,c_2=1.150,c_3=0.017,c_4=-0.037,c_5=0.010,c_6=-0.046,`$ (1)
$`c_7=-0.001\alpha _{em},c_8=0.049\alpha _{em},c_9=-1.321\alpha _{em},c_{10}=0.267\alpha _{em}.`$ (2)
The Wilson coefficients are modified when anomalous couplings are included: they generate non-zero contributions to $`c_{3\text{–}10}`$. Their effects on $`B\rightarrow K\pi `$ mainly come from $`c_{7\text{–}10}`$. To the leading order in QCD corrections, the new contributions $`c_{7\text{–}10}^{AC}`$ due to the various anomalous couplings are given by,
$`c_7^{AC}/\alpha _{em}`$ $`=`$ $`-0.287\mathrm{\Delta }\kappa ^\gamma -0.045\lambda ^\gamma +1.397\mathrm{\Delta }g_1^Z-0.145g_5^Z,`$ (3)
$`c_8^{AC}/\alpha _{em}`$ $`=`$ $`-0.082\mathrm{\Delta }\kappa ^\gamma -0.013\lambda ^\gamma +0.391\mathrm{\Delta }g_1^Z-0.041g_5^Z,`$ (4)
$`c_9^{AC}/\alpha _{em}`$ $`=`$ $`-0.337\mathrm{\Delta }\kappa ^\gamma -0.053\lambda ^\gamma -5.651\mathrm{\Delta }g_1^Z+0.586g_5^Z,`$ (5)
$`c_{10}^{AC}/\alpha _{em}`$ $`=`$ $`0.069\mathrm{\Delta }\kappa ^\gamma +0.011\lambda ^\gamma +1.143\mathrm{\Delta }g_1^Z-0.119g_5^Z.`$ (6)
In the above we have used a cut off $`\mathrm{\Lambda }=1`$ TeV for terms proportional to $`\mathrm{\Delta }\kappa ^\gamma `$ and $`\mathrm{\Delta }g_1^Z`$. Contributions from other anomalous couplings are suppressed by additional factors of order $`(m_b^2,m_B^2)/m_W^2`$ and can be safely neglected. There are constraints on the anomalous gauge couplings. LEP experiments obtain $`-0.217<\mathrm{\Delta }\kappa ^\gamma <0.223`$, $`-0.158<\lambda ^\gamma <0.074`$, and $`-0.113<\mathrm{\Delta }g_1^Z<0.126`$ at the 95% confidence level. Assuming that $`g_5^Z`$ is of the same order as $`\mathrm{\Delta }g_1^Z`$, it is clear that the largest possible contribution may come from $`\mathrm{\Delta }g_1^Z`$. In our later discussions we consider $`\mathrm{\Delta }g_1^Z`$ effects only.
To see possible deviations from the SM predictions for $`B\rightarrow K\pi `$ data, we carried out a calculation using factorization following Ref. , with $`V_{us}=0.2196`$, $`|V_{cb}|\simeq |V_{ts}|=0.0395`$, and $`|V_{ub}|=0.08|V_{cb}|`$. The branching ratios as functions of $`\gamma `$ are shown in Fig. 1. In this figure we used $`m_s=100`$ MeV, which is in the middle of the range from lattice calculations, and the number of colors $`N_c=3`$. Since the penguin contribution dominates the branching ratio for $`B^+\rightarrow K^0\pi ^+`$, which is insensitive to $`\gamma `$, we normalize the branching ratios to $`Br(B^+\rightarrow K^0\pi ^+)`$ to reduce possible uncertainties in the overall normalization of the form factors involved.
From Fig. 1, we see that the central values of the branching ratios for the modes with at least one charged meson in the final state require the angle $`\gamma `$ to be within $`75^{\circ }`$ to $`80^{\circ }`$ rather than at the best fit value $`\gamma _{best}=59.5^{\circ }`$ in Ref. . Larger $`\gamma `$ is also indicated by other rare B decay data. When effects due to $`\mathrm{\Delta }g_1^Z`$ are included, the situation can be relaxed. The effects of $`\mathrm{\Delta }g_1^Z`$ on $`B^+\rightarrow K^0\pi ^+`$ and $`B^0\rightarrow K^+\pi ^{-}`$ are very small, but are significant for $`B^+\rightarrow K^+\pi ^0`$ and $`B\rightarrow K^0\pi ^0`$. With positive $`\mathrm{\Delta }g_1^Z`$ in its allowed range, it is possible for the relative ratios of $`Br(B^+\rightarrow K^+\pi ^0)`$ to the other charged modes to be in agreement with data for $`\gamma =\gamma _{best}`$. We note that $`\mathrm{\Delta }g_1^Z`$ does not affect the ratio $`Br(B^0\rightarrow K^+\pi ^{-})/Br(B^+\rightarrow K^0\pi ^+)`$. Its experimental value prefers $`\gamma `$ to be close to $`75^{\circ }`$. Of course, we also note that this situation can be improved by treating $`N_c`$ as a free parameter to take into account certain non-factorizable effects. We find that with $`N_c\approx 1.35`$, the central experimental values for the branching ratios of B decays into charged mesons in the final states can be reproduced for $`\gamma =\gamma _{best}`$. It is not possible to bring $`Br(B^0\rightarrow K^0\pi ^0)`$ up to the experimental central value even with allowed $`\mathrm{\Delta }g_1^Z`$ and reasonable values for $`N_c`$ and $`m_s`$.
If the present experimental central values persist and the factorization approximation with $`N_c=3`$ is valid, new physics may be needed. Needless to say, we have to wait for more accurate data to draw firmer conclusions. Also, due to our inability to reliably calculate the hadronic matrix elements, one should be careful in drawing conclusions from factorization calculations. However, we would like to point out that data on rare B to $`K\pi `$ decays may provide a window to look for new physics beyond the SM. Of course, to have a better understanding of the situation one needs to find methods which are able to extract $`\gamma `$ in a model independent way and in the presence of new physics. In the following we will analyze some rare B to $`K\pi `$ decay data in a more model independent way.
A model independent constraint on $`\gamma `$ can be obtained using branching ratios for $`B^\pm \rightarrow K\pi `$ and $`B^\pm \rightarrow \pi \pi `$ from symmetry considerations. This method only needs information from charged B decays to $`K\pi `$ and $`\pi \pi `$, and is therefore not affected by the uncertainties associated with neutral B decays to $`K\pi `$ modes. Using the SU(3) relation and a factorization estimate for the SU(3) breaking effect among $`B^\pm \rightarrow K\pi `$ and $`B^\pm \rightarrow \pi \pi `$, one obtains
$`A(\pi ^+K^0)+\sqrt{2}A(\pi ^0K^+)=ϵA(\pi ^+K^0)e^{i\mathrm{\Delta }\varphi }{\displaystyle \frac{e^{i\gamma }-\delta _{EW}}{1-\delta _{EW}^{\prime }}},`$ (7)
$`\delta _{EW}=-{\displaystyle \frac{3}{2}}{\displaystyle \frac{|V_{cb}||V_{cs}|}{|V_{ub}||V_{us}|}}{\displaystyle \frac{c_9+c_{10}}{c_1+c_2}},\delta _{EW}^{\prime }=-{\displaystyle \frac{3}{2}}{\displaystyle \frac{|V_{tb}||V_{td}|}{|V_{ub}||V_{ud}|}}e^{i\alpha }{\displaystyle \frac{c_9+c_{10}}{c_1+c_2}},`$ (8)
$`ϵ=\sqrt{2}{\displaystyle \frac{|V_{us}|}{|V_{ud}|}}{\displaystyle \frac{f_K}{f_\pi }}{\displaystyle \frac{|A(\pi ^+\pi ^0)|}{|A(\pi ^+K^0)|}},`$ (9)
where $`\mathrm{\Delta }\varphi `$ is the difference of the final state elastic re-scattering phases $`\varphi _{3/2,1/2}`$ for the $`I=3/2,1/2`$ amplitudes. For $`f_K/f_\pi =1.22`$ and $`Br(B^\pm \rightarrow \pi ^\pm \pi ^0)=(0.54_{-0.20}^{+0.21}\pm 0.15)\times 10^{-5}`$, we obtain $`ϵ=0.21\pm 0.06`$.
The parameter $`\delta _{EW}^{\prime }`$ is of order $`c_{9,10}/c_{1,2}`$, which is much smaller than one, and will be neglected in our later discussions. $`\delta _{EW}`$ is a true measure of electroweak penguin interactions in hadronic B decays and provides an easier probe of these interactions compared with other methods. In the above, contributions from $`c_{7,8}`$ have been neglected, which is safe in the SM because they are small. With anomalous couplings this is still a good approximation. In general the contributions from $`c_{7,8}`$ may be substantial. In that case the expression becomes more complicated, but one can always absorb the contribution into an effective $`\delta _{EW}`$. In the SM, for $`r_v=|V_{ub}|/|V_{cb}|=0.08`$ and $`|V_{us}|=0.2196`$, $`\delta _{EW}=0.81`$. A smaller $`r_v`$ implies a larger $`\delta _{EW}`$. Had we used $`r_v=0.1`$, $`\delta _{EW}`$ would be 0.65 as in Ref. . With anomalous couplings, we find
$`\delta _{EW}=0.81(1+0.26\mathrm{\Delta }\kappa ^\gamma +0.04\lambda ^\gamma +4.33\mathrm{\Delta }g_1^Z-0.45g_5^Z).`$ (10)
The value of $`\delta _{EW}`$ can be different from the SM prediction. It is most sensitive to $`\mathrm{\Delta }g_1^Z`$. Within the allowed range $`-0.113<\mathrm{\Delta }g_1^Z<0.126`$, $`\delta _{EW}`$ can vary in the range $`0.40`$–$`1.25`$.
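The end points of this range follow directly from Eq. (10); the minimal check below keeps only the $`\mathrm{\Delta }g_1^Z`$ term, as in the text, and is included purely as an illustration.

```python
# Evaluate Eq. (10) with only the Delta g_1^Z term switched on.
for dg1 in (-0.113, 0.0, 0.126):     # LEP 95% C.L. end points and the SM value
    delta_ew = 0.81 * (1.0 + 4.33 * dg1)
    print(f"Delta g_1^Z = {dg1:+.3f} -> delta_EW = {delta_ew:.2f}")
# prints about 0.41, 0.81 and 1.25, i.e. the quoted 0.40-1.25 range up to rounding
```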
Neglecting the small tree contribution to $`B^+\rightarrow \pi ^+K^0`$, one obtains
$`\mathrm{cos}\gamma =\delta _{EW}-{\displaystyle \frac{(r_+^2+r_{-}^2)/2-1-ϵ^2(1-\delta _{EW}^2)}{2ϵ(\mathrm{cos}\mathrm{\Delta }\varphi +ϵ\delta _{EW})}},`$ (11)
$`r_+^2-r_{-}^2=4ϵ\mathrm{sin}\mathrm{\Delta }\varphi \mathrm{sin}\gamma ,`$ (12)
where $`r_\pm ^2=4Br(\pi ^0K^\pm )/[Br(\pi ^+K^0)+Br(\pi ^{-}\overline{K}^0)]=1.33\pm 0.45`$.
If the SU(3) breaking effect is indeed represented by the last equation in (9), and the tree contribution to $`B^\pm \rightarrow \pi ^\pm K`$ is small, the information about $`\gamma `$ and $`\delta _{EW}`$ obtained is free from uncertainties associated with hadronic matrix elements. Possible SU(3) breaking effects have been estimated and shown to be small. The smallness of the tree contribution to $`B^\pm \rightarrow K^0\pi ^\pm `$ is true in the factorization approximation and can be checked experimentally. The above equations can be tested in the future. We will assume the validity of Eq. (11) and study how the information on $`\gamma `$ obtained from $`B\rightarrow K\pi ,\pi \pi `$ decays depends on $`\delta _{EW}`$.
The relation between $`\gamma `$ and $`\delta _{EW}`$ is complicated. However, it is interesting to note that even in the most general case a bound on $`\mathrm{cos}\gamma `$ can be obtained. For $`\mathrm{\Delta }=(r_+^2+r_{-}^2)/2-1-ϵ^2(1-\delta _{EW}^2)>0`$, we have
$`\mathrm{cos}\gamma \le \delta _{EW}-{\displaystyle \frac{\mathrm{\Delta }}{2ϵ(1+ϵ\delta _{EW})}},\qquad \text{or}\qquad \mathrm{cos}\gamma \ge \delta _{EW}+{\displaystyle \frac{\mathrm{\Delta }}{2ϵ(1+ϵ\delta _{EW})}}.`$ (13)
The sign of $`\mathrm{\Delta }`$ depends on $`r_\pm ^2`$, $`ϵ`$ and $`\delta _{EW}`$. As long as $`r_\pm ^2>1.07`$, $`\mathrm{\Delta }`$ is larger than zero at the 90% C.L. in the allowed range for $`ϵ`$ and any value of $`\delta _{EW}`$. For smaller $`r_\pm ^2`$, $`\mathrm{\Delta }`$ can change sign depending on $`\delta _{EW}`$. For $`\mathrm{\Delta }<0`$, the bounds are given by replacing $`\le `$, $`\ge `$ by $`\ge `$, $`\le `$ in the above equations, respectively. We remark that if $`r_\pm ^2<1`$, one can also use the method in Ref. to constrain $`\gamma `$. The above bounds become exact solutions for $`\mathrm{cos}\mathrm{\Delta }\varphi =1`$ and $`\mathrm{cos}\mathrm{\Delta }\varphi =-1`$, respectively. For $`ϵ\ll 1`$, one obtains the bound $`|\mathrm{cos}\gamma -\delta _{EW}|\ge (r_+^2+r_{-}^2-2)/(4ϵ)`$ of Ref. .
We will use the central value for $`ϵ`$ and vary $`r_\pm ^2`$ in our numerical analysis to illustrate how information on $`\gamma `$ and its dependence on new physics through $`\delta _{EW}`$ can be obtained. The bounds on $`\mathrm{cos}\gamma `$ are shown in Fig. 2 by the solid curves for three representative cases: a) Central values for $`ϵ`$ and $`r_\pm ^2`$; b) Central values for $`ϵ`$ and $`1\sigma `$ upper bound $`r_\pm ^2=1.78`$; and c) Central value for $`ϵ`$ and $`1\sigma `$ lower bound $`r_\pm ^2=0.88`$. For cases a) and b) $`\mathrm{\Delta }>0`$, and for case c) $`\mathrm{\Delta }<0`$.
The bounds with $`|\mathrm{cos}\gamma |\le 1`$ for a), b) and c) are indicated by the curves (a1, a2), (b) and (c1, c2), respectively. For cases a) and c) there are two allowed regions, the regions below (a1, c1) and the regions above (a2, c2). For case b) the allowed range is below (b). When $`r_\pm ^2`$ decreases from the $`1\sigma `$ upper bound to the $`1\sigma `$ lower bound, one of the boundaries goes up from (b) to (a1) and then moves to (c2), while the other boundary, which for case b) lies outside the physical range, moves to (a2) and then goes down to (c1). In case a), for $`\delta _{EW}=0.81(0.65)`$ we find $`\mathrm{cos}\gamma <0.18(0.015)`$, which is well below the value corresponding to $`\mathrm{cos}\gamma _{best}\approx 0.5`$. With $`\mathrm{\Delta }g_1^Z=0.126`$, $`\mathrm{cos}\gamma `$ can be close to 0.5. For larger $`r_\pm ^2`$ the situation is worse. This can be seen from the curves for case b), where $`\mathrm{cos}\gamma <0`$ for $`\delta _{EW}`$ up to 1.5. For smaller $`r_\pm ^2`$ the situation is better, as can be seen from case c). In this case there are larger allowed ranges. $`\gamma \approx \gamma _{best}`$ can be accommodated even by the SM.
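These numbers can be reproduced with a few lines of code. The sketch below evaluates the first bound of Eq. (13) (the $`\mathrm{cos}\mathrm{\Delta }\varphi =1`$ branch), assuming $`r_+^2=r_{-}^2`$ at the common central value; the small differences with respect to the quoted 0.18 and 0.015 come from rounding of the inputs.

```python
def cos_gamma_upper_bound(r2, eps, delta_ew):
    """Upper bound on cos(gamma) from Eq. (13) for Delta > 0."""
    big_delta = r2 - 1.0 - eps ** 2 * (1.0 - delta_ew ** 2)  # (r+^2 + r-^2)/2 - 1 - ...
    return delta_ew - big_delta / (2.0 * eps * (1.0 + eps * delta_ew))

eps, r2 = 0.21, 1.33                # central values of case a)
for dew in (0.81, 0.65, 1.25):      # SM with r_v = 0.08 and 0.1; maximal Delta g_1^Z
    print(f"delta_EW = {dew:.2f} -> cos(gamma) < {cos_gamma_upper_bound(r2, eps, dew):.3f}")
```

With $`\delta _{EW}=1.25`$ the bound relaxes to about 0.58, which shows quantitatively how a non-zero $`\mathrm{\Delta }g_1^Z`$ can accommodate $`\mathrm{cos}\gamma _{best}\approx 0.5`$.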
When the decay amplitudes for $`B^\pm \rightarrow K\pi `$, $`B^\pm \rightarrow \pi ^\pm \pi ^0`$ and the rate asymmetries for these decays are determined to a good accuracy, $`\gamma `$ can be determined using Eq. (9) and its conjugated form. The original method using similar equations without the correction $`\delta _{EW}`$ from the electroweak penguin is problematic because the correction is large. Many variations involving other decay modes have been proposed to remove electroweak penguin effects. Recently it was realized that the difficulties associated with the electroweak penguin interaction can be calculated in terms of the quantity $`\delta _{EW}`$.
This method again crucially depends on the value of $`\delta _{EW}`$. The solution for $`\mathrm{cos}\gamma `$ corresponds to a root of a fourth order polynomial in $`\mathrm{cos}\gamma `$. Using Eqs. (11) and (12), we have
$`(1-\mathrm{cos}^2\gamma )[1-({\displaystyle \frac{\mathrm{\Delta }}{2ϵ(\delta _{EW}-\mathrm{cos}\gamma )}}-ϵ\delta _{EW})^2]-{\displaystyle \frac{(r_+^2-r_{-}^2)^2}{16ϵ^2}}=0.`$ (14)
The solutions depend on the values of $`r_\pm ^2`$ and $`ϵ`$, which are not determined with sufficient accuracy at present. To have some idea about the details, we analyze the solutions of $`\mathrm{cos}\gamma `$ as a function of $`\delta _{EW}`$ for the three cases discussed earlier with a given value of the asymmetry $`A_{asy}=(r_+^2-r_{-}^2)/(r_+^2+r_{-}^2)`$. In Fig. 2 we show the solutions with $`A_{asy}=15\%`$ for illustration. The actual value to be used for a practical analysis has to be determined by experiments. The solutions for the three cases a), b) and c) are indicated by the dashed, dot-dashed and dotted curves in Fig. 2. In general there are four solutions, but not all of them are physical ones satisfying $`|\mathrm{cos}\gamma |<1`$.
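The roots of Eq. (14) are easy to locate numerically; the sketch below scans the left hand side for sign changes at $`\delta _{EW}=0.81`$ and $`A_{asy}=15\%`$, again purely as an illustration of the procedure.

```python
import numpy as np

def lhs_eq14(c, delta_ew, eps=0.21, r2=1.33, a_asy=0.15):
    """Left hand side of Eq. (14); c = cos(gamma), r2 = (r+^2 + r-^2)/2."""
    big_delta = r2 - 1.0 - eps**2 * (1.0 - delta_ew**2)
    rdiff = a_asy * 2.0 * r2                     # r+^2 - r-^2 from A_asy
    term = big_delta / (2.0 * eps * (delta_ew - c)) - eps * delta_ew
    return (1.0 - c * c) * (1.0 - term**2) - rdiff**2 / (16.0 * eps**2)

c = np.linspace(-0.999, 0.999, 4001)
v = lhs_eq14(c, delta_ew=0.81)
roots = c[:-1][np.sign(v[:-1]) != np.sign(v[1:])]
print(roots)    # two physical roots for these inputs, as for case a) in the text
```

Repeating the scan over a grid of $`\delta _{EW}`$ values traces out the solution curves of Fig. 2.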
For case a), two solutions are allowed depending on $`\delta _{EW}`$. To have $`\mathrm{cos}\gamma >0`$, $`\delta _{EW}`$ has to be larger than 0.7, whereas $`\mathrm{cos}\gamma \approx \mathrm{cos}\gamma _{best}`$ would require $`\delta _{EW}`$ to be larger than 1.2, which cannot be reached in the SM but is possible for non-zero $`\mathrm{\Delta }g_1^Z`$ in its allowed range. With smaller $`r_\pm ^2`$, $`\mathrm{cos}\gamma >0`$ can be a solution with smaller $`\delta _{EW}`$, and one can even have $`\mathrm{cos}\gamma =\mathrm{cos}\gamma _{best}`$. This can be seen from the dotted curves in Fig. 2 for case c). For larger $`r_\pm ^2`$, a larger $`\delta _{EW}`$ is required in order to have solutions. For case b), $`\delta _{EW}`$ must be larger than 1.4 in order to have solutions. These regions can not be reached by the SM, nor by the model with $`\mathrm{\Delta }g_1^Z`$ in the allowed range.
We also analyzed how the solutions change with the asymmetry $`A_{asy}`$. For small $`A_{asy}`$, the solutions are close to the bounds. As $`A_{asy}`$ increases, the solutions move away from the bounds. The solutions below the bounds (a1), (b) and (c1) shift towards the right, and the bounds (a2) and (c2) move towards the left. In all cases the solutions with $`|\mathrm{cos}\gamma |\le 1`$ become more sensitive to $`\delta _{EW}`$ and $`|\mathrm{cos}\gamma |`$ becomes smaller as $`A_{asy}`$ increases. In each case discussed, the solutions in models with non-zero $`\mathrm{\Delta }g_1^Z`$, except the ones close to $`|\mathrm{cos}\gamma |=1`$, can be very different from those in the SM. It is clear that important information about $`\gamma `$ and about the new physics contribution to $`\delta _{EW}`$ can be obtained from $`B^\pm \rightarrow K\pi ,\pi \pi `$ decays.
We conclude that the branching ratios of $`B\rightarrow K\pi `$ decays are sensitive to new physics at loop level. The bound on $`\gamma `$, extracted using the central branching ratios for $`B^\pm \rightarrow K\pi `$ and information from $`B^\pm \rightarrow \pi ^\pm \pi ^0`$, is different from that obtained from other experimental data. New physics, such as anomalous gauge couplings, can improve the situation. A similar analysis can be applied to any other model where new physics contributes to electroweak penguin interactions. The decay modes $`B^\pm \rightarrow K\pi ,\pi \pi `$ will be measured at various B factories with improved error bars. The Standard Model and models beyond will be tested in the near future.
Acknowledgments:
This work was partially supported by National Science Council of R.O.C. under grant number NSC 88-2112-M-002-041.
# COMPOSITE FERMIONS AND THE FRACTIONAL QUANTUM HALL EFFECT
## I Introduction
The quantum Hall effect (QHE), i.e. the quantization of the Hall conductance of a two-dimensional electron gas (2DEG) in high magnetic fields at certain filling factors $`\nu `$ ($`\nu ^{-1}`$ is the number of single particle states in a LL per electron), signals the appearance of (incompressible) nondegenerate ground states (GS's) separated from the continuum of excited states by a finite gap. At integer $`\nu =1`$, 2 … (IQHE), the excitation gap is the single particle cyclotron gap, while at fractional $`\nu =1/3`$, 1/5, 2/5 … (FQHE), electrons partially fill a degenerate (lowest) LL and the formation of incompressible GS's is a many body phenomenon revealing the unique properties of the Coulomb interaction of electrons in the lowest LL .
In the mean field (MF) composite Fermion (CF) picture, in a 2DEG of density $`n`$ at a strong magnetic field $`B`$, each electron binds an even number $`2p`$ of magnetic flux quanta $`\varphi _0=hc/e`$ (in the form of an infinitely thin flux tube) forming a CF. Because of the Pauli exclusion principle, the magnetic field confined into a flux tube within one CF has no effect on the motion of other CF's, and the average effective magnetic field $`B^{*}`$ seen by CF's is reduced, $`B^{*}=B-2p\varphi _0n`$. Because $`B^{*}\nu ^{*}=B\nu =n\varphi _0`$, the relation between the electron and CF filling factors is
$$(\nu ^{*})^{-1}=\nu ^{-1}-2p.$$
(1)
Since the low band of energy levels of the original (interacting) 2DEG has similar structure to that of the noninteracting CF's in a uniform effective field $`B^{*}`$, it was proposed that the Coulomb charge-charge and Chern–Simons (CS) charge-flux interactions beyond the MF largely cancel one another, and the original strongly interacting system of electrons is converted into one of weakly interacting CF's. Consequently, the FQHE of electrons was interpreted as the IQHE of CF's.
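Eq. (1) is easily made concrete with exact rational arithmetic; the few lines below (an illustration, not code from the literature) map the principal FQHE fractions onto integer CF fillings.

```python
from fractions import Fraction

def cf_filling(nu, p=1):
    """CF filling factor nu* from the electron filling nu, Eq. (1):
    1/nu* = 1/nu - 2p."""
    return 1 / (1 / Fraction(nu) - 2 * p)

for nu in (Fraction(1, 3), Fraction(2, 5), Fraction(3, 7)):
    print(f"nu = {nu} -> nu* = {cf_filling(nu, p=1)}")         # 1, 2, 3
print(f"nu = 1/5 -> nu* = {cf_filling(Fraction(1, 5), p=2)}")  # 1, with four flux quanta
```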
Although the MFCF picture correctly predicts the structure of the low energy spectra of FQH systems, the energy scale it uses (the CF cyclotron energy $`\hbar \omega _c^{*}`$) is totally irrelevant. Moreover, since the characteristic energies of the CS ($`\hbar \omega _c^{*}\propto B`$) and Coulomb ($`e^2/\lambda \propto \sqrt{B}`$, where $`\lambda `$ is the magnetic length) interactions between fluctuations beyond MF scale differently with the magnetic field, the reason for its success cannot be found in the originally suggested cancellation between those interactions. Since the MFCF picture is commonly used to interpret various numerical and experimental results, it is very important to understand why and under what conditions it is correct.
In this paper, we use the pseudopotential formalism to study the FQH systems. It is shown that the form of the pseudopotential $`V(L^{\prime })`$ \[pair energy vs. pair angular momentum\], rather than of the interaction potential $`V(r)`$, is responsible for the incompressibility of FQH states. The idea of fractional parentage is used to characterize many body states by the ability of electrons to avoid pair states with the largest repulsion. The condition on the form of $`V(L^{\prime })`$ necessary for the occurrence of FQH states is given, which defines the class of short-range (SR) pseudopotentials to which the MFCF picture can be applied. As an example, we explain the success or failure of MFCF predictions for the systems of electrons in the lowest and excited LL's, Laughlin quasiparticles (QP's) in the hierarchy picture of FQH states , and charged excitons in a 2D electron-hole plasma.
## II Numerical Studies on the Haldane Sphere
Because of the LL degeneracy, the electron-electron interaction in the FQH states cannot be treated perturbatively, and exact (numerical) diagonalization techniques have been commonly used. In order to model an infinite 2DEG by a finite (small) system that can be handled numerically, it is very convenient to confine the $`N`$ electrons to the surface of a (Haldane) sphere of radius $`R`$, with the normal magnetic field $`B`$ produced by a magnetic monopole of integer strength $`2S`$ (total flux of $`4\pi BR^2=2S\varphi _0`$) in the center. The obvious advantages of such a geometry are the absence of an edge and the preservation of the full 2D symmetry of a 2DEG (good quantum numbers are the total angular momentum $`L`$ and its projection $`M`$). The numerical experiments in this geometry have shown that even relatively small systems that can be solved exactly on a small computer behave in many ways like an infinite 2DEG, and a number of parameters of a 2DEG (e.g. characteristic excitation energies) can be obtained from such small scale calculations.
The single particle states on a Haldane sphere (monopole harmonics) are labeled by angular momentum $`l`$ and its projection $`m`$. The energies, $`\epsilon _l=\hbar \omega _c[l(l+1)-S^2]/2S`$, fall into degenerate shells, and the $`n`$th shell ($`n=l-|S|=0`$, 1, …) corresponds to the $`n`$th LL. For the FQH states at filling factor $`\nu <1`$, only the lowest, spin polarized LL need be considered.
The object of the numerical studies is to diagonalize the electron-electron interaction hamiltonian $`H`$ in the space of degenerate antisymmetric $`N`$ electron states of a given (lowest) LL. Although the matrix $`H`$ is easily block diagonalized into blocks with specified $`M`$, the exact diagonalization becomes difficult (matrix dimension over $`10^6`$) for $`N>10`$ and $`2S>27`$ ($`\nu =1/3`$). Typical results for ten electrons at filling factors near $`\nu =1/3`$ are presented in Fig. 1.
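The quoted difficulty is easy to make concrete by counting basis states: the code below (an illustrative sketch, not the diagonalization program itself) counts the $`N`$ electron Slater determinants of a lowest LL shell of $`2S+1`$ orbitals that fall into a single block of fixed total projection $`M`$.

```python
from functools import lru_cache

def block_dim(two_l, n_el, two_M=0):
    """Dimension of the fixed-2M block for n_el electrons in a shell of
    2l+1 orbitals with 2m = -2l, -2l+2, ..., 2l; counted recursively."""
    two_ms = tuple(-two_l + 2 * k for k in range(two_l + 1))

    @lru_cache(maxsize=None)
    def count(i, k, s):
        if k == 0:                       # all electrons placed
            return 1 if s == two_M else 0
        if i == len(two_ms):             # orbitals exhausted
            return 0
        # skip orbital i, or occupy it and accumulate its 2m
        return count(i + 1, k, s) + count(i + 1, k - 1, s + two_ms[i])

    return count(0, n_el, 0)

print(block_dim(27, 10))   # largest (M = 0) block for N = 10, 2S = 27
print(block_dim(33, 12))   # N = 12 at nu = 1/3 (2S = 33): grows into the millions
```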
Energy $`E`$, plotted as a function of $`L`$ in magnetic units, includes the shift $`(Ne)^2/2R`$ due to the charge compensating background. There is always one or more $`L`$ multiplets (marked with open circles) forming a low energy band separated from the continuum by a gap. If the lowest band consists of a single $`L=0`$ GS (Fig. 1d), it is expected to be incompressible in the thermodynamic limit (for $`N\rightarrow \infty `$ at the same $`\nu `$), and an infinite 2DEG at this filling factor is expected to exhibit the FQHE.
The MFCF interpretation of the spectra in Fig. 1 is the following. The effective magnetic monopole strength seen by CF’s is
$$2S^{*}=2S-2p(N-1),$$
(2)
and the angular momenta of the lowest CF shells (CF LL's) are $`l_n^{*}=|S^{*}|+n`$ . At $`2S=27`$, $`l_0^{*}=9/2`$ and ten CF's completely fill the lowest CF shell ($`L=0`$ and $`\nu ^{*}=1`$). The excitations of the $`\nu ^{*}=1`$ CF GS involve an excitation of at least one CF to a higher CF LL, and thus (if the CF-CF interaction is weak on the scale of $`\hbar \omega _c^{*}`$) the $`\nu ^{*}=1`$ GS is incompressible and so is the Laughlin $`\nu =1/3`$ GS of the underlying electrons. The lowest lying excited states contain a pair of QP's: a quasihole (QH) with $`l_{\mathrm{QH}}=l_0^{*}=9/2`$ in the lowest CF LL and a quasielectron (QE) with $`l_{\mathrm{QE}}=l_1^{*}=11/2`$ in the first excited one. The allowed angular momenta of such a pair are $`L=1`$, 2, …, 10. The $`L=1`$ state usually has high energy and the states with $`L\ge 2`$ form a well defined band with a magnetoroton minimum at a finite value of $`L`$. The lowest CF states at $`2S=26`$ and 28 contain a single QE and a single QH, respectively (in the $`\nu ^{*}=1`$ CF state, i.e. the $`\nu =1/3`$ electron state), both with $`l_{\mathrm{QP}}=5`$, and the excited states will contain additional QE-QH pairs. At $`2S=25`$ and $`29`$ the lowest bands correspond to a pair of QP's, and the values of energy within those bands define the QP-QP interaction pseudopotential $`V_{\mathrm{QP}}`$. At $`2S=25`$ there are two QE's, each with $`l_{\mathrm{QE}}=9/2`$, and the allowed angular momenta (of two identical Fermions) are $`L=0`$, 2, 4, 6, and 8, while at $`2S=29`$ there are two QH's, each with $`l_{\mathrm{QH}}=11/2`$, and $`L=0`$, 2, 4, 6, 8, and 10. Finally, at $`2S=24`$, the lowest band contains three QE's, each with $`l_{\mathrm{QE}}=4`$, in the Laughlin $`\nu =1/3`$ state (in the Fermi liquid picture, interacting with one another through $`V_{\mathrm{QE}}`$) and $`L=1`$, $`3^2`$, 4, 5, 6, 7, and 9.
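The counting quoted above follows from elementary angular momentum addition; the sketch below (illustrative code, with angular momenta stored as doubled integers so that half-integer values stay exact) reproduces the QP multiplets listed in this paragraph.

```python
def pair_L_identical(two_l):
    """Total L of two identical fermions with l = two_l/2: only the
    states with 2l - L odd are antisymmetric."""
    return [L for L in range(two_l + 1) if (two_l - L) % 2 == 1]

def pair_L_distinct(two_l1, two_l2):
    """Total L of two distinguishable particles: |l1 - l2| <= L <= l1 + l2."""
    return list(range(abs(two_l1 - two_l2) // 2, (two_l1 + two_l2) // 2 + 1))

N, p = 10, 1
for two_S in (24, 25, 26, 27, 28, 29):
    print(f"2S = {two_S}: 2S* = {two_S - 2 * p * (N - 1)}")   # Eq. (2)

print("two QEs, l = 9/2 :", pair_L_identical(9))     # 0, 2, 4, 6, 8
print("two QHs, l = 11/2:", pair_L_identical(11))    # 0, 2, 4, 6, 8, 10
print("QE-QH pair       :", pair_L_distinct(11, 9))  # 1, 2, ..., 10
```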
## III Pseudopotential Approach
The two body interaction hamiltonian $`H`$ can be expressed as
$$\widehat{H}=\sum _{i<j}\sum _{L^{\prime }}V(L^{\prime })\widehat{𝒫}_{ij}(L^{\prime }),$$
(3)
where $`V(L^{\prime })`$ is the interaction pseudopotential and $`\widehat{𝒫}_{ij}(L^{\prime })`$ projects onto the subspace with angular momentum of pair $`ij`$ equal to $`L^{\prime }`$. For electrons confined to a LL, $`L^{\prime }`$ measures the average squared distance $`d^2`$,
$$\frac{\widehat{d}^2}{R^2}=2+\frac{S^2}{l(l+1)}\left(2-\frac{\widehat{L}^{\prime 2}}{l(l+1)}\right),$$
(4)
and larger $`L^{\prime }`$ corresponds to smaller separation. Due to the confinement of electrons to one (lowest) LL, the interaction potential $`V(r)`$ enters the hamiltonian $`H`$ only through a small number of pseudopotential parameters $`V(2l-\mathcal{R})`$, where $`\mathcal{R}`$, the relative pair angular momentum, is an odd integer.
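Explicitly (standard angular momentum counting for identical fermions, added here for clarity), antisymmetry restricts a pair of electrons with $`l_1=l_2=l`$ to
$$L^{\prime }=2l-1,2l-3,2l-5,\mathrm{},$$
so the relative pair angular momentum $`\mathcal{R}=2l-L^{\prime }`$ runs over the odd integers 1, 3, 5, …, and only these few parameters $`V(\mathcal{R})`$ enter.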
In Fig. 2 we compare Coulomb pseudopotentials $`V(L^{\prime })`$ calculated for a pair of electrons on the Haldane sphere each with $`l=5`$, 15/2, 10, and 25/2, in the lowest and first excited LL.
For the reason that will become clear later, $`V(L^{\prime })`$ is plotted as a function of $`L^{\prime }(L^{\prime }+1)`$. All pseudopotentials in Fig. 2 increase with increasing $`L^{\prime }`$. If $`V(L^{\prime })`$ increased very quickly with increasing $`L^{\prime }`$ (we define ideal SR repulsion as: $`dV_{\mathrm{SR}}/dL^{\prime }>0`$ and $`d^2V_{\mathrm{SR}}/dL^{\prime 2}>0`$), the low lying many body states would be the ones maximally avoiding pair states with largest $`L^{\prime }`$. At filling factor $`\nu =1/m`$ ($`m`$ is odd) the many body Hilbert space contains exactly one multiplet in which all pairs completely avoid states with $`L^{\prime }>2l-m`$. This multiplet is the $`L=0`$ incompressible Laughlin state and it is an exact GS of $`V_{\mathrm{SR}}`$.
The ability of electrons in a given many body state to avoid strongly repulsive pair states can be conveniently described using the idea of fractional parentage. An antisymmetric state $`|l^N,L\alpha \rangle `$ of $`N`$ electrons each with angular momentum $`l`$ that are combined to give total angular momentum $`L`$ can be written as
$$|l^N,L\alpha \rangle =\sum _{L^{\prime }}\sum _{L^{\prime \prime }\alpha ^{\prime \prime }}G_{L\alpha ,L^{\prime \prime }\alpha ^{\prime \prime }}(L^{\prime })|l^2,L^{\prime };l^{N-2},L^{\prime \prime }\alpha ^{\prime \prime };L\rangle .$$
(5)
Here, $`|l^2,L^{\prime };l^{N-2},L^{\prime \prime }\alpha ^{\prime \prime };L\rangle `$ denote product states in which $`l_1=l_2=l`$ are added to obtain $`L^{\prime }`$, $`l_3=l_4=\mathrm{}=l_N=l`$ are added to obtain $`L^{\prime \prime }`$ (different $`L^{\prime \prime }`$ multiplets are distinguished by a label $`\alpha ^{\prime \prime }`$), and finally $`L^{\prime }`$ is added to $`L^{\prime \prime }`$ to obtain $`L`$. The state $`|l^N,L\alpha \rangle `$ is totally antisymmetric, and states $`|l^2,L^{\prime };l^{N-2},L^{\prime \prime }\alpha ^{\prime \prime };L\rangle `$ are antisymmetric under interchange of particles 1 and 2, and under interchange of any pair of particles 3, 4, … $`N`$. The factor $`G_{L\alpha ,L^{\prime \prime }\alpha ^{\prime \prime }}(L^{\prime })`$ is called the coefficient of fractional grandparentage (CFGP). The two particle interaction matrix element expressed through CFGP’s is
$$\langle l^N,L\alpha \left|V\right|l^N,L\beta \rangle =\frac{N(N-1)}{2}\sum _{L^{\prime }}\sum _{L^{\prime \prime }\alpha ^{\prime \prime }}G_{L\alpha ,L^{\prime \prime }\alpha ^{\prime \prime }}(L^{\prime })G_{L\beta ,L^{\prime \prime }\alpha ^{\prime \prime }}(L^{\prime })V(L^{\prime }),$$
(6)
and the expectation value of the energy is
$$E_\alpha (L)=\frac{N(N-1)}{2}\sum _{L^{\prime }}𝒢_{L\alpha }(L^{\prime })V(L^{\prime }),$$
(7)
where the coefficient
$$𝒢_{L\alpha }(L^{\prime })=\sum _{L^{\prime \prime }\alpha ^{\prime \prime }}\left|G_{L\alpha ,L^{\prime \prime }\alpha ^{\prime \prime }}(L^{\prime })\right|^2$$
(8)
gives the probability that pair $`ij`$ is in the state with $`L^{\prime }`$.
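Because the expansion (5) is normalized, the pair amplitude profile obeys the sum rule (a standard property of fractional grandparentage, stated here as a consistency check)
$$\sum _{L^{\prime }}𝒢_{L\alpha }(L^{\prime })=1,$$
so Eq. (7) expresses $`E_\alpha (L)`$ as the number of pairs, $`N(N-1)/2`$, times a weighted average of the pseudopotential parameters.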
## IV Energy Spectra of Short Range Pseudopotentials
The very good description of actual GS’s of a 2DEG at fillings $`\nu =1/m`$ by the Laughlin wavefunction (overlaps typically larger than 0.99) and the success of the MFCF picture at $`\nu <1`$ both rely on the fact that the pseudopotential of Coulomb repulsion in the lowest LL falls into the same class of SR pseudopotentials as $`V_{\mathrm{SR}}`$. Due to a huge difference between all parameters $`V_{\mathrm{SR}}(L^{\prime })`$, the corresponding many body hamiltonian has the following hidden symmetry: the Hilbert space $`\mathcal{H}`$ contains eigensubspaces $`\mathcal{H}_p`$ of states with $`𝒢(L^{\prime })=0`$ for $`L^{\prime }>2(l-p)`$, i.e. with $`L^{\prime }<2(l-p)`$. Hence, $`\mathcal{H}`$ splits into subspaces $`\stackrel{~}{\mathcal{H}}_p=\mathcal{H}_p\ominus \mathcal{H}_{p+1}`$, containing states that do not have grandparentage from $`L^{\prime }>2(l-p)`$, but have some grandparentage from $`L^{\prime }=2(l-p)-1`$,
$$\mathcal{H}=\stackrel{~}{\mathcal{H}}_0\oplus \stackrel{~}{\mathcal{H}}_1\oplus \stackrel{~}{\mathcal{H}}_2\oplus \mathrm{}$$
(9)
The subspace $`\stackrel{~}{\mathcal{H}}_p`$ is not empty (some states with $`L^{\prime }<2(l-p)`$ can be constructed) at filling factors $`\nu \le (2p+1)^{-1}`$. Since the energy of states from each subspace $`\stackrel{~}{\mathcal{H}}_p`$ is measured on a different scale of $`V(2(l-p)-1)`$, the energy spectrum splits into bands corresponding to those subspaces. The energy gap between the $`p`$th and $`(p+1)`$st bands is of the order of $`V(2(l-p)-1)-V(2(l-p-1)-1)`$, and hence the largest gap is that between the 0th band and the 1st band, the next largest is that between the 1st band and 2nd band, etc.
Fig. 3 demonstrates, for the example of four electrons, to what extent this hidden symmetry holds for the Coulomb pseudopotential in the lowest LL.
The subspaces $`\mathcal{H}_p`$ are identified by calculating CFGP’s of all states. They are not exact eigenspaces of the Coulomb interaction, but the mixing between different $`\mathcal{H}_p`$ is weak and the coefficients $`𝒢(L^{\prime })`$ for $`L^{\prime }>2(l-p)`$ (which vanish exactly in exact subspaces $`\mathcal{H}_p`$) are indeed much smaller in states marked with a given $`p`$ than in all other states. For example, for $`2l=11`$, $`𝒢(10)<0.003`$ for states marked with full circles, and $`𝒢(10)>0.1`$ for all other states (squares).
Note that the set of angular momentum multiplets which form subspace $`\stackrel{~}{\mathcal{H}}_p`$ of $`N`$ electrons each with angular momentum $`l`$ is always the same as the set of multiplets in subspace $`\stackrel{~}{\mathcal{H}}_{p+1}`$ of $`N`$ electrons each with angular momentum $`l+(N-1)`$. When $`l`$ is increased by $`N-1`$, an additional band appears at high energy, but the structure of the low energy part of the spectrum is completely unchanged. For example, all three allowed multiplets for $`l=5/2`$ ($`L=0`$, 2, and 4) form the lowest energy band for $`l=11/2`$, 17/2, and 23/2, where they span the $`\stackrel{~}{\mathcal{H}}_1`$, $`\stackrel{~}{\mathcal{H}}_2`$ and $`\stackrel{~}{\mathcal{H}}_3`$ subspaces, respectively. Similarly, the first excited band for $`l=11/2`$ is repeated for $`l=17/2`$ and 23/2, where it corresponds to the $`\stackrel{~}{\mathcal{H}}_1`$ and $`\stackrel{~}{\mathcal{H}}_2`$ subspaces, respectively.
Let us stress that the fact that identical sets of multiplets occur in subspace $`\stackrel{~}{\mathcal{H}}_p`$ for a given $`l`$ and in subspace $`\stackrel{~}{\mathcal{H}}_{p+1}`$ for $`l`$ replaced by $`l+(N-1)`$ does not depend on the form of interaction, and follows solely from the rules of addition of angular momenta of identical Fermions. However, if the interaction pseudopotential has SR, then: (i) $`\stackrel{~}{\mathcal{H}}_p`$ are interaction eigensubspaces; (ii) energy bands corresponding to $`\stackrel{~}{\mathcal{H}}_p`$ with higher $`p`$ lie below those of lower $`p`$; (iii) spacing between neighboring bands is governed by a difference between appropriate pseudopotential coefficients; and (iv) wavefunctions and structure of energy levels within each band are insensitive to the details of interaction. Replacing $`V_{\mathrm{SR}}`$ by a pseudopotential that increases more slowly with increasing $`L^{\prime }`$ leads to: (v) coupling between subspaces $`\stackrel{~}{\mathcal{H}}_p`$; (vi) mixing, overlap, or even order reversal of bands; (vii) deviation of wavefunctions and the structure of energy levels within bands from those of the hard core repulsion (and thus their dependence on details of the interaction pseudopotential). The numerical calculations for the Coulomb pseudopotential in the lowest LL show (to a large extent) all SR properties (i)–(iv), and virtually no effects (v)–(vii), characteristic of ‘non-SR’ pseudopotentials.
The reoccurrence of $`L`$ multiplets forming the low energy band when $`l`$ is replaced by $`l\pm (N-1)`$ has the following crucial implication. In the lowest LL, the lowest energy ($`p`$th) band of the $`N`$ electron spectrum at the monopole strength $`2S`$ contains $`L`$ multiplets which are all the allowed $`N`$ electron multiplets at $`2S-2p(N-1)`$. But $`2S-2p(N-1)`$ is just $`2S^{\ast }`$, the effective monopole strength of CF’s! The MFCF transformation which binds $`2p`$ fluxes (vortices) to each electron selects the same $`L`$ multiplets from the entire spectrum as does the introduction of a hard core, which forbids a pair of electrons to be in a state with $`L^{\prime }>2(l-p)`$.
## V Definition of Short Range Pseudopotential
A useful operator identity relates total ($`L`$) and pair ($`\widehat{L}_{ij}`$) angular momenta
$$\sum _{i<j}\widehat{L}_{ij}^2=\widehat{L}^2+N(N-2)\widehat{l}^2.$$
(10)
It implies that an interaction given by a pseudopotential $`V(L^{\prime })`$ that is linear in $`\widehat{L}^{\prime 2}`$ (e.g. the harmonic repulsion within each LL; see Eq. (4)) is degenerate within each $`L`$ subspace and its energy is a linear function of $`L(L+1)`$. The many body GS has the lowest available $`L`$ and is usually degenerate, while the state with maximum $`L`$ has the largest energy. Note that this result is opposite to the Hund rule valid for spherical harmonics, due to the opposite behavior of $`V(L^{\prime })`$ for the FQH ($`n=0`$ and $`l=S`$) and atomic ($`S=0`$ and $`l=n`$) systems.
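The identity (10) itself follows in one line (a derivation added here for completeness): writing $`\widehat{L}_{ij}=\widehat{l}_i+\widehat{l}_j`$ and using $`\widehat{L}^2=\sum _i\widehat{l}_i^2+2\sum _{i<j}\widehat{l}_i\cdot \widehat{l}_j`$,
$$\sum _{i<j}\widehat{L}_{ij}^2=(N-1)\sum _i\widehat{l}_i^2+2\sum _{i<j}\widehat{l}_i\cdot \widehat{l}_j=\widehat{L}^2+(N-2)\sum _i\widehat{l}_i^2=\widehat{L}^2+N(N-2)\widehat{l}^2,$$
since $`\widehat{l}_i^2=\widehat{l}^2`$ for each of the $`N`$ identical particles.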
Deviations of $`V(L^{\prime })`$ from a linear function of $`L^{\prime }(L^{\prime }+1)`$ lead to the level repulsion within each $`L`$ subspace, and the GS is no longer necessarily the state with minimum $`L`$. Rather, it is the state at a low $`L`$ whose multiplicity $`N_L`$ (number of different $`L`$ multiplets) is large. It is interesting to observe that the $`L`$ subspaces with relatively high $`N_L`$ coincide with the MFCF prediction. In particular, for a given $`N`$, they reoccur at the same $`L`$’s when $`l`$ is replaced by $`l\pm (N-1)`$, and the set of allowed $`L`$’s at a given $`l`$ is always a subset of the set at $`l+(N-1)`$.
As we said earlier, if $`V(L^{\prime })`$ has SR, the lowest energy states within each $`L`$ subspace are those maximally avoiding large $`L^{\prime }`$, and the lowest band (separated from higher states by a gap) contains states in which a number of largest values of $`L^{\prime }`$ is avoided altogether. This property is valid for all pseudopotentials which increase more quickly than linearly as a function of $`L^{\prime }(L^{\prime }+1)`$. For $`V_\beta (L^{\prime })=[L^{\prime }(L^{\prime }+1)]^\beta `$, exponent $`\beta >1`$ defines the class of SR pseudopotentials, to which the MFCF picture can be applied. Within this class, the structure of the low lying energy spectrum and the corresponding wavefunctions very weakly depend on $`\beta `$ and converge to those of $`V_{\mathrm{SR}}`$ for $`\beta \to \infty `$.
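This superlinearity criterion is easy to test numerically; the following is a minimal sketch (our own illustration, with $`V_\beta `$ used as a placeholder for tabulated pseudopotential parameters such as those of Fig. 2):

```python
import numpy as np

def is_short_range(L, V):
    """Check that V grows superlinearly in x = L'(L'+1):
    the slopes of successive chords of V(x) must increase."""
    x = L * (L + 1.0)
    slopes = np.diff(V) / np.diff(x)
    return bool(np.all(np.diff(slopes) > 0))

# Placeholder pseudopotential V_beta(L') = [L'(L'+1)]^beta
Lp = np.arange(1.0, 12.0, 2.0)              # a few pair angular momenta L'
for beta in (0.5, 1.0, 2.0):
    V = (Lp * (Lp + 1.0)) ** beta
    print(beta, is_short_range(Lp, V))      # True only for beta > 1
```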
The extension of the SR definition to $`V(L^{\prime })`$ that are not strictly in the form of $`V_\beta (L^{\prime })`$ is straightforward. If $`V(L^{\prime })>V(2l-m)`$ for $`L^{\prime }>2l-m`$ and $`V(L^{\prime })<V(2l-m)`$ for $`L^{\prime }<2l-m`$ and $`V(L^{\prime })`$ increases more quickly than linearly as a function of $`L^{\prime }(L^{\prime }+1)`$ in the vicinity of $`L^{\prime }=2l-m`$, then the pseudopotential $`V(L^{\prime })`$ behaves like a SR one at filling factors near $`\nu =1/m`$.
## VI Application to Various Pseudopotentials
It follows from Fig. 2a that the Coulomb pseudopotential in the lowest LL satisfies the SR condition in the entire range of $`L^{\prime }`$; this is what validates the MFCF picture for filling factors $`\nu \le 1`$. It also explains the formation of incompressible states of charged magneto-excitons ($`X^{-}`$) formed in the electron-hole plasma. However, in a higher, $`n`$th LL this is only true for $`L^{\prime }<2(l-n)-1`$ (see Fig. 2b for $`n=1`$) and the MFCF picture is valid only for $`\nu _n`$ (filling factor in the $`n`$th LL) around and below $`(2n+3)^{-1}`$. Indeed, the MFCF features in the ten electron energy spectra around $`\nu =1/3`$ (in Fig. 1) are absent for the same fillings of the $`n=1`$ LL .
One consequence of this is that the MFCF picture or a Laughlin like wavefunction cannot be used to describe the reported incompressible state at $`\nu =2+1/3=7/3`$ ($`\nu _1=1/3`$). The correlations in the $`\nu =7/3`$ GS are different from those at $`\nu =1/3`$; the origin of the (apparent) incompressibility cannot be attributed to the formation of a Laughlin like $`\nu _1=1/3`$ state (in which pair states with smallest average separation $`d^2`$ are avoided) on top of the $`\nu =2`$ state, and the connection between the excitation gap and the pseudopotential parameters is different. This is clearly visible in the dependence of the excitation gap $`\mathrm{\Delta }`$ on the electron number $`N`$, plotted in Fig. 4 for $`\nu =1/3`$ and 1/5 fillings of the lowest and first excited LL.
The gaps for $`\nu =1/5`$ behave very similarly as a function of $`N`$ in both LL’s, while it is not even possible to make a conclusive statement about degeneracy or incompressibility of the $`\nu =7/3`$ state based on our data for up to eleven electrons.
The SR criterion can be applied to the QP pseudopotentials to understand why QP’s do not form incompressible states at all Laughlin filling factors $`\nu _{\mathrm{QP}}=1/m`$ in the hierarchy picture of FQH states. Lines in Fig. 1b and f mark $`V_{\mathrm{QE}}`$ and $`V_{\mathrm{QH}}`$ for the Laughlin $`\nu =1/3`$ state of ten electrons. Clearly, the incompressible states with a large gap will be formed by QH’s at $`\nu _{\mathrm{QH}}=1/3`$ and by QE’s at $`\nu _{\mathrm{QE}}=1`$, explaining strong FQHE of the underlying electron system at Jain $`\nu =2/7`$ and 2/5 fractions, respectively. On the other hand, there is no FQHE at $`\nu _{\mathrm{QH}}=1/5`$ ($`\nu =4/13`$) or $`\nu _{\mathrm{QE}}=1/3`$ ($`\nu =4/11`$), and the gap above possibly incompressible $`\nu _{\mathrm{QH}}=1/7`$ ($`\nu =6/19`$) and $`\nu _{\mathrm{QE}}=1/5`$ ($`\nu =6/17`$) states should be very small, which agrees very well with exact few electron calculations. We believe that taking into account the behavior of involved QP pseudopotentials on all levels of hierarchy should explain all observed odd denominator FQH fillings and allow prediction of their relative stability (without using trial wavefunctions involving multiple LL’s and projections onto the lowest LL needed in the Jain picture).
## VII Conclusion
Using the pseudopotential formalism, we have described the FQH states in terms of the ability of electrons to avoid strongly repulsive pair states. We have defined the class of SR pseudopotentials leading to the formation of incompressible FQH states. We argue that the MFCF picture is justified for the SR interactions and fails for others. The pseudopotentials of the Coulomb interaction in excited LL’s and of Laughlin QP’s in the $`\nu =1/3`$ state are shown to belong to the SR class only at certain filling factors.
## VIII Acknowledgment
This work has been supported in part by the Materials Research Program of Basic Energy Sciences, US Department of Energy. A.W. thanks Witold Bardyszewski (Warsaw University) for help with improving the numerical codes.
# Fermi Temperature Magnetic Effects
## Abstract
Recent results that an assembly of Fermions below the Fermi temperature would exhibit anomalous semionic behaviour are examined in the context of associated magnetic fields.
Email: birlasc@hd1.vsnl.net.in; birlard@ap.nic.in
Recently it was shown that below the Fermi temperature Fermions exhibit an anomalous Bosonization: they obey statistics in between Fermi-Dirac and Bose-Einstein.
In general, given an assembly of $`N`$ Fermions, if $`N_+`$ is the average number of particles with spin up, the magnetisation is given by
$$m=\mu (2N_+-N),$$
(1)
where $`\mu `$ is the electron magnetic moment. In the usual theory, $`N_+\approx \frac{N}{2}`$ so that $`m`$ given in (1) is small. However semionic statistics implies
$$N_+=\beta N,\frac{1}{2}<\beta <1,$$
(2)
As $`N`$ is generally very large (in fact the number of particles is $`10^{23}`$ per cc or more), the use of inequality (2) in (1) can give appreciable values for $`m`$.
In other words, given such an assembly of Fermions, the introduction of, for example, a uniform magnetic field $`B`$ would lead to an energy $`mB`$, where initially the Fermion assembly had negligible magnetism.
Moreover, the semionic behaviour could result in magnetic reversals if, for example, the external field $`B`$ changed its direction.
The relevance of the above considerations is varied. It must be mentioned that in different conditions, the Fermi temperature of the assembly itself would have a very wide spectrum starting from small values. For example, in Neutron stars the Fermi temperature is $`\sim 10^7K`$, while in the solid core of the earth it is $`\sim 10^4K`$. In these cases $`N\sim 10^{58}`$ and $`\sim 10^{48}`$ respectively. Indeed in both these cases the prevalent magnetic field follows from (1) .
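To make the orders of magnitude explicit, Eq. (1) with Eq. (2) gives $`m=\mu (2\beta -1)N`$; a back-of-the-envelope sketch follows (the value of $`\beta `$ below is an assumed illustrative input, not a result derived here):

```python
# Net magnetisation m = mu * (2*N_plus - N) with N_plus = beta*N, Eqs. (1)-(2)
MU = 9.27e-24                      # electron magnetic moment, J/T (Bohr magneton)

def magnetisation(n_particles, beta):
    """Net moment when a fraction beta (1/2 < beta < 1) has spin up."""
    return MU * (2.0 * beta - 1.0) * n_particles

# N ~ 1e48 (solid core of the earth) and N ~ 1e58 (neutron star), as quoted above
for n in (1e48, 1e58):
    print(n, magnetisation(n, beta=0.6))   # beta = 0.6 is an arbitrary choice
```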
Similarly the magnetism of the planet Jupiter can also be explained by (1).
In the case of the earth there are magnetic reversals which are usually attributed to the liquid core activity of the earth, though there is no convincing explanation. Interestingly, the Mars Global Surveyor spacecraft detected such magnetic reversals on Mars also, in the last week of April 1999. Mars has no liquid core or tectonic activity, so that, in conventional theory, this would point to a much earlier epoch of such possible activity. However this is not required in the scenario presented above.
# Diffractive Interactions: Experimental Summary
## 1 Introduction
During the two days of parallel sessions at DIS99, there were $`19`$ experimental talks about diffractive interactions on topics ranging from new measurements of $`\mathrm{F}_2^{\mathrm{D}(3)}`$ at HERA to the observation of double–gap events at the Tevatron. This paper summarizes the experimental results on diffraction which were presented. The theoretical talks concerning diffractive interactions are summarized in the following contribution.
## 2 Inclusive Diffraction in DIS
M. Inuzuka presented the new ZEUS measurement of diffractive cross sections at very low $`Q^2`$ . Using 1996 data obtained with their beam pipe calorimeter, ZEUS has measured $`\mathrm{d}\sigma /\mathrm{d}M_X`$ as a function of $`W`$ in the range $`0.220<Q^2<0.700`$ GeV<sup>2</sup> (Figure 1). Regge theory predicts that $`\mathrm{d}\sigma /\mathrm{d}M_X\propto W^{4\overline{\alpha }_{\mathrm{IP}}-4}`$ and a fit to the data yields an effective ($`|t|`$ averaged) pomeron intercept equal to: $`\overline{\alpha }_{\mathrm{IP}}=1.113\pm 0.026`$ (stat) $`{}_{-0.062}^{+0.043}`$ (syst). This value is approximately $`1\sigma `$ larger than the soft pomeron intercept $`\alpha _{\mathrm{IP}}(0)\approx 1.09`$ determined by Donnachie, Landshoff and Cudell . Assuming $`\alpha _{\mathrm{IP}}^{\prime }=0.25`$ GeV<sup>-2</sup> and $`|t|=1/b`$ with $`b=7.5`$ GeV<sup>-2</sup>, $`\alpha _{\mathrm{IP}}(0)=\overline{\alpha }_{\mathrm{IP}}+0.033`$.
C. Royon presented the H1 preliminary measurement of $`\mathrm{F}_2^{\mathrm{D}(3)}`$ in the kinematic range $`0.4<Q^2<5`$ GeV<sup>2</sup> and $`0.001<\beta <0.65`$ . The low $`Q^2`$ data were obtained during 1995 when the interaction vertex was shifted by $`70`$ cm. In Figure 2, the H1 measurement of $`x_{\mathrm{IP}}\mathrm{F}_2^{\mathrm{D}(3)}`$ is compared to a phenomenological fit with diffractive ($`\mathrm{IP}`$) and sub–leading ($`\mathrm{IR}`$) exchange trajectories. The fitted pomeron intercept is consistent with the previous H1 measurement: $`\alpha _{\mathrm{IP}}(0)=1.203\pm 0.020`$ (stat) $`\pm 0.013`$ (syst) $`{}_{-0.035}^{+0.030}`$ (model) . This value is significantly larger than the soft pomeron intercept. The H1 collaboration also presented their measurement of $`\mathrm{F}_2^{\mathrm{D}(3)}`$ in the high $`Q^2`$ range $`200<Q^2<800`$ GeV<sup>2</sup>. A QCD fit to the intermediate $`Q^2`$ data, with parton distributions for the pomeron and sub–leading reggeons which evolve according to the DGLAP equations, gives a reasonable description of the high $`Q^2`$ data.
## 3 Hadronic Final State in Diffractive Interactions
The H1 collaboration has performed a NLO DGLAP analysis of their $`\mathrm{F}_2^{\mathrm{D}(3)}`$ measurement with $`4.5<Q^2<75`$ GeV<sup>2</sup> and extracted diffractive parton distributions . These partons distributions when incorporated into Monte Carlo programs give a good description of the hadronic final state in hard diffractive processes. As an example of this, F.P. Schilling presented the H1 measurements of diffractive dijet production in $`epeXY`$ interactions with $`M_Y<1.6`$ GeV and $`|t|<1`$ GeV<sup>2</sup> . The system $`X`$ contains two jets each with $`p_T>5`$ GeV. The POMPYT and RAPGAP Monte Carlo programs, with either a ‘flat’ or a ‘peaked’ gluon distribution, give good descriptions of the photoproduction and DIS data which were presented. Predictions, calculated using diffractive parton distributions which consist solely of quarks at the starting scale, underestimate the measured cross sections by factors varying between $`3`$ and $`6`$.
Diffractive $`D^{\ast \pm }`$ cross section measurements were presented by S. Hengstmann and J. Cole for the H1 and ZEUS collaborations respectively . Both experiments use the $`D^{\ast +}\to (D^0\to K^{-}\pi ^+)\pi ^+`$ + (c.c.) decay mode whereas the ZEUS collaboration also uses the $`D^{\ast +}\to (D^0\to K^{-}\pi ^+\pi ^{-}\pi ^+)\pi ^+`$ + (c.c.) decay mode. The cross section measurements for the two decay modes from ZEUS are in excellent agreement when interpolated to the same kinematic region and are in good agreement with Monte Carlo calculations. The BHM Monte Carlo prediction shown in Figure 3 is based on soft colour interactions, whereas in the RIDI Monte Carlo diffractive charm production is proportional to the square of the gluon density in the proton. The ACTW Monte Carlo is a resolved pomeron model with diffractive parton distributions which evolve according to the DGLAP equations. The good agreement between data and model calculations shown in Figure 3 is in contradiction with the results presented by S. Hengstmann. Figure 4 shows that the H1 diffractive $`D^{\ast \pm }`$ cross sections are approximately a factor of $`3`$ smaller than the prediction from a resolved pomeron model (RAPGAP ) and a factor of $`2`$ smaller than the prediction from a soft colour interactions model (AROMA ).
M. Khakzad presented the new ZEUS measurement of dijet cross sections associated with a leading neutron . By requiring a leading neutron with $`E_n>400`$ GeV and $`\theta _n<0.8`$ mrad, $`\pi ^+`$–exchange events are tagged. The fraction of the pion’s momentum participating in the production of the two jets can be estimated using the final state variable $`x_\pi =\sum _{jets}E_T^{jet}e^{\eta ^{jet}}/2E_\pi `$. The ZEUS measurement of $`\mathrm{d}\sigma /\mathrm{dlog}(x_\pi )`$, presented in Figure 5, shows that in the kinematic region of the measurement the $`x_\pi `$ distribution has only a mild sensitivity to the pion’s structure function. The Monte Carlo calculations have a larger sensitivity to differences in the pion flux.
M. Kapichine presented the H1 measurements of semi–inclusive cross sections in the kinematic region $`2\le Q^2\le 50`$ GeV<sup>2</sup>, $`6\times 10^{-5}\le x\le 6\times 10^{-3}`$ and baryon $`p_T\le 200`$ MeV . The semi–inclusive cross sections are parameterized in terms of a leading baryon structure function, either $`\mathrm{F}_2^{\mathrm{LP}(3)}`$ or $`\mathrm{F}_2^{\mathrm{LN}(3)}`$, for protons or neutrons respectively. The leading baryon structure functions with $`z>0.7`$ are reasonably well described by a Regge model of baryon production which considers the colour neutral exchanges of pions, pomerons and secondary reggeons. The semi–inclusive cross sections for leading neutrons can be described entirely by $`\pi ^+`$ exchange whereas leading protons require $`\pi ^0`$ and $`f_2`$ exchange contributions. The leading neutron data were used to estimate for the first time the structure function of the pion at small Bjorken–$`x`$.
Results on leading baryon production from ZEUS were presented by I. Gialas . The $`x_L`$ spectrum of leading protons with $`p_T^2<0.5`$ GeV<sup>2</sup> is well described by a Regge model of leading baryon production whereas standard Monte Carlo programs, such as ARIADNE and LEPTO , fail to describe the data. The ratio of events with a leading baryon and all DIS events is independent of $`x`$ and $`Q^2`$, which supports the hypothesis of limiting fragmentation .
P. Markun presented results on the hadronic final state in diffractive DIS from ZEUS . Figure 6 shows the average thrust and sphericity of diffractive events as a function of $`M_X`$ for events tagged with a leading proton with $`x_L>0.95`$. The analysis was performed in the $`\gamma ^{}\mathrm{IP}`$ centre of mass system. The ZEUS measurements are compatible with the $`\sqrt{s}`$ dependence observed in $`e^+e^{}`$ events and are in fair agreement with the predictions from the RAPGAP Monte Carlo implemented with ARIADNE colour dipole fragmentation. The average thrust and sphericity measurements are independent of $`x_{\mathrm{IP}}`$, $`Q^2`$ and $`W`$. Energy flow measurements in the $`\gamma ^{}\mathrm{IP}`$ cms frame show that a two jet structure becomes increasingly pronounced as $`M_X`$ increases.
B. Cox presented the H1 measurement of double diffractive dissociation at large $`|t|`$ in photoproduction . The inclusive double dissociative process $`\gamma p\to XY`$ provides access to larger rapidity gaps than does the traditional gap–between–jets approach. This is advantageous since the BFKL cross section is expected to rise exponentially as a function of the rapidity separation . In Figure 7 the differential cross section $`\mathrm{d}\sigma /\mathrm{d}x_{\mathrm{IP}}(\gamma p\to XY)`$ is compared with the prediction from the HERWIG Monte Carlo for all non colour–singlet exchange processes. A significant excess above the expectation from the standard photoproduction model is observed. The dashed line shows the HERWIG prediction with the LLA BFKL contribution added. Good agreement is observed in both normalization and shape.
## 4 Diffractive Vector Meson Production
At DIS99 many new results on vector meson production were shown. B. Clerbaux presented H1 results on elastic electroproduction of $`\rho `$ mesons in the kinematic region $`1<Q^2<60`$ GeV<sup>2</sup> and $`30<W<140`$ GeV . Results on the shape of the $`(\pi \pi )`$ mass distribution were presented which indicate significant skewing at low $`Q^2`$, which gets smaller with increasing $`Q^2`$. No significant $`W`$ or $`|t|`$ dependence of the skewing is observed. Measurements of the 15 elements of the $`\rho `$ spin density matrix were also presented as a function of $`Q^2`$, $`W`$ and $`|t|`$ (see Figure 8 for example). Except for a small but significant deviation from zero of the $`r_{00}^5`$ matrix element, s–channel helicity conservation (SCHC) is found to be a good approximation. The dominant helicity flip amplitude $`T_{01}`$ is $`(8\pm 3)\%`$ of the non–flip amplitudes. The $`W`$ dependence of the measured $`\gamma ^{\ast }p\to \rho p`$ cross sections, for six fixed values of $`Q^2`$ (Figure 9), suggests that the effective trajectory governing $`\rho `$ electroproduction is larger than the soft pomeron intercept determined by Donnachie, Landshoff and Cudell .
Preliminary results from H1 on proton dissociative $`\rho `$ meson production were presented by A. Droutskoi . Proton dissociative events are tagged by requiring activity in either the forward part $`(\eta >2.7)`$ of the LAr calorimeter, the forward muon detector or the proton remnant detector. An indication is observed for an increase in the ratio of proton dissociative to elastic $`\rho `$ meson cross sections in the region $`1.5<Q^2<3`$ GeV<sup>2</sup>, in contrast with the approximately flat behavior of the ratio in the region $`Q^2>3`$ GeV<sup>2</sup>. Results were also presented on the angular distributions characterizing the $`\rho `$ meson production and decay.
New results on exclusive $`\omega `$ meson production from ZEUS were presented by A. Savin . In the kinematic range $`40<W<120`$ GeV and $`3<Q^2<30`$ GeV<sup>2</sup>, $`\sigma (\gamma ^{\ast }p\to \omega p)\propto W^{0.7}`$ and $`\sigma (\gamma ^{\ast }p\to \omega p)\propto 1/(Q^2+M_\omega ^2)^2`$. These dependencies are consistent with those found for the $`\rho `$. The ratio of $`\rho :\omega :\varphi `$ production, which is shown in Figure 10, is in good agreement at large $`Q^2`$ with the SU(3) prediction $`9:1:2`$. Exclusive cross sections, for the production of $`\rho `$, $`\varphi `$ and $`\omega `$ vector mesons, are found to be proportional to $`W^\delta `$ where $`\delta `$ increases with $`Q^2`$. Results were also presented which show that SCHC is violated for exclusive $`\rho `$ meson production in the low $`Q^2`$ range $`0.25<Q^2<0.85`$ GeV<sup>2</sup>.
J. Crittenden presented new results from the ZEUS collaboration on $`\rho ^0`$ photoproduction at high momentum transfer $`|t|`$ . Measurements of $`r_{00}^{04}`$ (Figure 11), in the range $`1<|t|<9`$ GeV<sup>2</sup>, were presented which show that the $`\rho ^0`$ does not become dominantly longitudinally polarized at large values of $`|t|`$ . Evidence for a non–zero value of the double–flip amplitude $`r_{11}^{04}`$ at large $`|t|`$ was also presented.
P. Merkel presented new H1 results on the elastic production of $`J/\psi `$ and $`\psi (2S)`$ mesons in the kinematic region $`2<Q^2<80`$ GeV<sup>2</sup> and $`25<W<160`$ GeV . The dependence of the cross section $`\sigma (\gamma ^{\ast }p\to J/\psi p)`$ on $`W`$ is proportional to $`W^\delta `$, with $`\delta \approx 1`$ which has also been observed in photoproduction. The $`Q^2`$ dependence is proportional to $`1/(Q^2+m_{J/\psi }^2)^n`$ with $`n=2.38\pm 0.11`$. The first evidence from HERA for the quasi–elastic production of $`\psi (2S)`$ in DIS was also reported. The ratio of cross sections for $`\psi (2S)`$ and $`J/\psi `$ production increases as a function of $`Q^2`$.
Results on exclusive $`\rho ^0`$ electroproduction from the HERMES collaboration were presented by A. Borissov . Using $`{}_{}{}^{1}\mathrm{H}`$, $`{}_{}{}^{2}\mathrm{H}`$, $`{}_{}{}^{3}\mathrm{He}`$ and $`{}_{}{}^{14}\mathrm{N}`$ targets, the ratio of cross sections per nucleon $`\sigma _A/(A\sigma _H)`$, known as the nuclear transparency, was found to decrease with increasing coherence length of quark–antiquark fluctuations of the virtual photon. An unperturbed virtual state with mass $`M_{q\overline{q}}`$ can travel a coherence length distance $`l_c=2\nu /(Q^2+M_{q\overline{q}}^2)`$ in the laboratory frame during its lifetime. The data presented showed clear evidence for the interaction of the quark–antiquark fluctuations with the nuclear medium.
## 5 Diffraction at the Tevatron and LEP Colliders
Results on hard diffraction from CDF were presented by K. Borras . Using a sample of diffractive dijet events, with a recoil beam particle tagged with Roman Pot detectors, CDF has extracted the momentum fraction of the interacting parton in the pomeron using the formula $`\beta =(E_T^{jet1}e^{-\eta ^{jet1}}+E_T^{jet2}e^{-\eta ^{jet2}})/2\xi P_{beam}`$ where $`\xi `$ is the momentum fraction of the beam particle taken by the pomeron. After subtracting background contributions, such as non–diffractive dijet production with an accidental hit in the Roman Pot detectors and the contribution due to meson exchange, the data were compared to Monte Carlo simulations assuming various pomeron parton distributions and pomeron flux parametrizations. Figure 12 shows for three jet energy thresholds the ratio of the CDF diffractive dijet data and Monte Carlo predictions as a function of $`\beta `$. The Monte Carlo predictions were calculated with the H1 ‘peaked’ gluon distribution for the pomeron. The ratios are flat for $`\beta >0.2`$ and approximately equal to 0.15. This result implies that the $`\beta `$ distributions agree well with the shape of the H1 pomeron structure function but that there is a discrepancy between the data and the normalization of the standard pomeron flux. For $`\beta >0.2`$, the shape and rate of the $`\beta `$ distributions agree well with Monte Carlo predictions calculated with Goulianos’s renormalized flux and a flat gluon distribution, although an enhancement still exists in the $`\beta <0.2`$ region.
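For concreteness, the estimator above is a two-line computation (a sketch with made-up jet values, not CDF data; at $`\sqrt{s}=1800`$ GeV the beam momentum is $`P_{beam}=900`$ GeV):

```python
import math

def beta_dijet(jets, xi, p_beam=900.0):
    """beta = sum_j E_T^j exp(-eta_j) / (2 xi P_beam), energies in GeV.
    jets: iterable of (E_T, eta) pairs for the two leading jets."""
    return sum(et * math.exp(-eta) for et, eta in jets) / (2.0 * xi * p_beam)

# illustrative jet kinematics only
print(beta_dijet([(12.0, -1.5), (10.0, -2.0)], xi=0.05))
```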
K. Mauritz presented results on hard diffraction from D0 . The fractions of single diffractive dijet events at $`\sqrt{s}=1800`$ GeV and 630 GeV were compared to Monte Carlo predictions. Predictions calculated with hard or flat gluon distributions give rates which are higher than observed in the data. For example, the fraction of 1800 FWD Jet events observed in the data is $`(0.64\pm 0.05)`$% compared to the hard and soft gluon predictions equal to $`(2.1\pm 0.3)`$% and $`(1.6\pm 0.3)`$% respectively. A comparison was also presented between the $`E_T`$ spectra of the leading two jets in double–gap events and the $`E_T`$ spectra observed in single diffractive and inclusive interactions. In spite of the decreasing effective centre of mass energies $`(\sqrt{s}_{DG}<\sqrt{s}_{SD}<\sqrt{s}_{INC})`$, Figure 13 shows that the $`E_T`$ distributions are similar in shape which suggests a hard structure for the pomeron. A study of event characteristics shows that diffractive events are quieter, and contain thinner jets, than do non–diffractive events. The same conclusions have been reached by CDF.
H. Vogt presented L3 results on hadron production in $`\gamma \gamma `$ collisions at LEP . Cross section measurements with quasi–real photons (anti–tagged events where the scattered electrons are not detected) and with virtual photons (where both scattered electrons are detected in small angle calorimeters) were presented. Figure 14 shows the measurement of $`\sigma (\gamma \gamma \to \mathrm{hadrons})`$ as a function of $`W_{\gamma \gamma }`$ compared to a Donnachie and Landshoff type fit $`(\sigma _{\mathrm{tot}}=As^ϵ+Bs^{-\eta })`$ for the total cross section. The fit result for the pomeron dependence, $`ϵ=0.158\pm 0.006`$ (stat) $`\pm 0.028`$ (syst), is a factor of two higher than the soft pomeron intercept. Fits to the measured $`\sigma (\gamma ^{\ast }\gamma ^{\ast }\to \mathrm{hadrons})`$ cross sections at $`\sqrt{s}=91`$, 183 and 189 GeV yield $`ϵ`$ values equal to $`0.28\pm 0.05`$, $`0.40\pm 0.07`$ and $`0.29\pm 0.03`$ respectively. These values are not in agreement with a leading order BFKL model with $`ϵ\approx 0.53`$ .
# Nonequilibrium phase transition by directed Potts particles
## Abstract
We introduce an interface model with $`q`$-fold symmetry to study the nonequilibrium phase transition (NPT) from an active to an inactive state at the bottom layer. In the model, $`q`$ different species of particles are deposited or are evaporated according to a dynamic rule, which includes the interaction between neighboring particles within the same layer. The NPT is classified according to the number of species $`q`$. For $`q=1`$ and $`2`$, the NPT is characterized by directed percolation, and the directed Ising class, respectively. For $`q\ge 3`$, the NPT occurs at finite critical probability $`p_c`$, and appears to be independent of $`q`$; the $`q=\infty `$ case is related to the Edwards-Wilkinson interface dynamics.
PACS numbers: 05.70.Fh, 05.70.Jk, 05.70.Ln
Recently, the problems of phase transitions in nonequilibrium systems have attracted considerable interest in the physical literature. For example, the nonequilibrium phase transition (NPT) from an active state to an inactive state has become one of the central issues in the field of nonequilibrium dynamics. It was shown that the number of equivalent inactive (or absorbing) states characterizes universality classes for the NPT . For example, the NPT occurring in the monomer-dimer model for the catalytic oxidation of CO , the contact process , $`etc`$, has one absorbing state, and belongs to the directed percolation (DP) universality class . Most dynamic problems exhibiting the NPT with an absorbing state belong to this universality class. Meanwhile, there are a few exceptions: when there exist two absorbing states in the dynamic process, the critical behavior near the threshold of the NPT is distinctive from that of the DP class, and belongs to the directed Ising (DI) class , equivalent to the class of parity-conserving branching-annihilating random walks . The stochastic models in the DI class include, for example, the probabilistic cellular automata model, the interacting monomer-dimer model, the modified Domany-Kinzel model. On the other hand, the scenario that the NPT can be classified according to the number of absorbing states is not complete since the NPT with more than two absorbing states is not known yet. In other words, the NPT in the directed $`q`$-state Potts (DPotts) class for $`q\ge 3`$ has not been discovered yet. One reason may be that for $`q\ge 3`$, active sites (kinks) generated at the boundaries between domains of different states appear more often than in the DI case. Thus, the absorbing state could not be reached for finite control parameter. Accordingly, it would be interesting to search for the NPT belonging to the DPotts class for $`q\ge 3`$.
Recently, the NPT has been considered in association with the roughening transition (RT) from a smooth to a rough phase. The NPT behavior appears at a particular reference height of the interface. For example, for the monomer deposition-evaporation (m-DE) model introduced by Alon $`et`$ $`al`$ , where evaporation of particles is not allowed on terraces, the reference height is the spontaneously selected bottom layer. The site where the interface touches the reference height, called a vacant site, corresponds to the active site of the NPT. In the active phase of the NPT, the interface fluctuates close to the reference height, being in a bound state, in which the interface is smooth. On the other hand, in the inactive phase, the interface is detached from the reference height, being in an unbound state, and the interface is rough. Accordingly, RT accompanied by the binding-unbinding transition of the interface is related to the NPT at the reference height, which is characterized according to the number of symmetric states in the dynamic rule. For the interface models with two-fold symmetry in their dynamic rule, which are the extensions of the m-DE model by assigning a couple of species to particles for one model , called the two-species model, and by enlarging the size of particles for the other model , called the dimer DE (d-DE) model, the dynamics at the reference height near the threshold of RT behaves similarly to the DI dynamics.
Both models contain the suppression effect against generating active sites, and the critical threshold of RT is considerably lower compared with the monolayer version. For the two-species model, the critical threshold is $`p_c\approx 0.4480`$ for the interface version, but $`p_c\approx 0.7485`$ for the monolayer version. In this Letter, we introduce an interface model with arbitrary $`q`$-fold symmetry, called the $`q`$-species model, and examine the NPT at the reference height for the cases $`q=3,4,5`$ and $`\infty `$. The result is compared with the previous one for $`q=1`$ and 2. In addition, we study RT for each $`q`$. Interestingly, it is found that the NPT at the reference height $`occurs`$ at finite deposition probability for all cases, and their characteristics for $`q\ge 3`$ are different from the cases $`q=1`$ and 2, but appear to be independent of $`q`$ as long as $`q\ge 3`$. In particular, the interface model for the case $`q=\infty `$ is reduced to the restricted solid-on-solid (RSOS) model with deposition and evaporation processes. In this case, since RT occurs when the probabilities of deposition and evaporation are equal, the interface dynamics belongs to the Edwards-Wilkinson (EW) interface dynamics . Accordingly, it is found that the NPT of the $`q`$-species model for $`q\ge 3`$ in one dimension is related to the EW class.
In the $`q`$-species model, $`q`$ different species of particles are deposited or evaporated on a one-dimensional substrate with periodic boundary conditions. A site is first selected at random, at which either deposition or evaporation of a particle is attempted with probability $`p`$ and $`1-p`$, respectively. In the attempt of deposition, one species is selected among the $`q`$ species with equal weight $`1/q`$. Deposition or evaporation is realized under the two conditions described below. First, the RSOS condition is imposed such that the height difference between nearest neighboring columns does not exceed one. Thus the attempt is realized as long as the RSOS condition is satisfied even after the deposition or evaporation event. Secondly, the interaction between neighboring particles within the same layer is considered: when a particle of a certain species (e.g. A-species) is deposited on a hollow between particles, the deposition is not allowed when both of the neighboring particles on each side are of a common species (B-species), different from the dropping one (A-species). Meanwhile, a particle of a certain species (e.g. A-species) is not allowed to evaporate when it is sandwiched between particles of the same species (A-species) within the same layer. However, a particle can deposit or evaporate when the two neighboring particles on each side are of different species from one another, or one (or both) of the neighboring sites is (are) vacant. As $`q\to \infty `$, the probability of forming three particles in a row with the same species is zero. Thus, the secondary restriction is meaningless. Hence, the model is reduced to a random deposition-evaporation model under the RSOS condition. On the other hand, the initial substrate is flat, where its height is referred to as $`h=0`$. The dynamics proceeds only for $`h\ge 0`$, so that evaporation of particles at $`h=0`$ is not allowed.
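The rule above translates directly into a simple Monte Carlo loop; the following is a minimal sketch (our own illustrative implementation with arbitrary parameter values, not the code used to produce the data below):

```python
import random

def step(h, s, p, q):
    """One attempted update of the q-species model.
    h[i]: column height; s[i]: list of species (0..q-1), bottom to top."""
    L = len(h)
    i = random.randrange(L)
    left, right = (i - 1) % L, (i + 1) % L
    if random.random() < p:                      # deposition attempt
        new = h[i] + 1
        # RSOS after deposition requires both neighbours at least h[i]
        if h[left] < h[i] or h[right] < h[i]:
            return
        a = random.randrange(q)                  # species of the new particle
        # neighbours in the new layer exist only where the column reaches it
        nb = [s[j][new - 1] for j in (left, right) if h[j] >= new]
        # forbidden: both neighbours present, of a common species != a
        if len(nb) == 2 and nb[0] == nb[1] and nb[0] != a:
            return
        h[i] = new
        s[i].append(a)
    else:                                        # evaporation attempt
        if h[i] == 0:                            # no evaporation below h = 0
            return
        # RSOS after evaporation requires both neighbours at most h[i]
        if h[left] > h[i] or h[right] > h[i]:
            return
        a = s[i][-1]
        nb = [s[j][h[i] - 1] for j in (left, right) if h[j] >= h[i]]
        # forbidden: sandwiched between two particles of the same species
        if len(nb) == 2 and nb[0] == a and nb[1] == a:
            return
        h[i] -= 1
        s[i].pop()

def vacant_density(h):
    """Density of bottom-layer vacant sites, i.e. rho(p, t)."""
    return sum(1 for x in h if x == 0) / len(h)

L, q, p = 100, 3, 0.5                            # illustrative values only
h = [0] * L
s = [[] for _ in range(L)]
for t in range(200000):
    step(h, s, p, q)
print(vacant_density(h))
```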
Monte Carlo simulations are performed by varying the deposition probability $`p`$ and system size $`L`$ for the cases $`q=3,4,5`$ and $`\infty `$. We first discuss the dynamics occurring at the reference height, $`h=0`$. We consider the density of the vacant sites $`\rho (p,t)`$ at the reference height with varying time $`t`$ at a certain deposition probability $`p`$. When $`p`$ is small, particles form small-sized islands which disappear after their short lifetime and the growth velocity of the interface is zero. The density $`\rho (p,t)`$ saturates at a finite value as $`t\to \infty `$, as shown in Fig.1. As $`p`$ increases, deposition increases and islands grow, until, above a critical value $`p_c`$, islands merge and fill new layers completely, giving the interface a finite growth velocity. Accordingly, RT occurs at $`p_c`$, and the NPT at the reference height occurs as well. The critical probability $`p_c`$, estimated for each $`q`$, is listed in Table 1. In particular, for $`q=\infty `$, the critical probability is determined as $`p_c=0.5`$, when the probabilities of deposition and evaporation are equal. For $`p<p_c`$, the saturated value $`\stackrel{~}{\rho }(p)`$ behaves as
$$\stackrel{~}{\rho }(p)\sim (p_c-p)^\beta ,$$
(1)
with the order parameter exponent $`\beta `$. We estimated the exponent $`\beta `$ from the data obtained for system size $`L=500`$, which is listed in Table 1. The numerical values for $`q\ge 3`$ do not seem to be close to each other, which makes it hard to conclude that the cases of $`q\ge 3`$ belong to the same universality class. However, the exponent $`\beta `$ is hard to measure precisely, because it is extremely sensitive to the estimated value of $`p_c`$. Accordingly, the numerical value of $`\beta `$ even for $`q=2`$ is rather broadly ranged, as can be noticed in Ref. . On the other hand, for $`p>p_c`$, the density decreases to zero exponentially in the long time limit and at the critical threshold $`p_c`$, it decays algebraically as
$$\rho (p_c,t)\sim t^{-\beta /\nu _{\parallel }}.$$
(2)
The power, $`\beta /\nu _{\parallel }`$, is measured in Fig.1 and is tabulated in Table 1. The values are reasonably close to one another for $`q\ge 3`$, but considerably larger than the values for $`q=1`$ and 2. This result suggests that the number of species is unimportant for $`q\ge q_c=3`$. On the other hand, we need to discuss the relation between the vacant sites and the kinks, which are the domain boundaries between different species, because the kinks are indeed the active sites in the directed Potts dynamics. We measure the kink density $`\rho _K(p_c,t)`$ by counting the sites with height equal to one and whose neighbors are different species, which behaves similarly to $`\rho (p_c,t)`$, as shown in Fig.2. Thus we confirm that the vacant site density is equivalent to the active site density in the $`q`$-species model. Finally, we simulate the monolayer version of the $`q`$-species model, and find that the system does not exhibit the NPT and is always in the active state for finite $`p`$. A similar behavior was found in the three-species monomer-monomer reaction model introduced by Bassler and Browne , in which the system is always in a reactive phase when the absorption rates for each species are identical.
The density $`\rho (p,t)`$ can be thought of as the return probability $`P_0(p,t)`$ of the interface height $`h(x,t)`$ to its initial height $`h(x,t)=0`$ after passing time $`t`$, averaged over the substrate position $`x`$. The subscript $`0`$ means that the time is measured from $`t=0`$. In general, $`P_0(p,t)`$ is different from the first return probability $`F_0(p,t)`$ of the interface height to its initial height, which decays as $`F_0(p,t)\sim t^{-\theta }`$ where $`\theta `$ is called the persistence exponent . For the EW interface under the given boundary condition at $`h=0`$, we obtain that $`F_0(p_c,t)\sim t^{-0.75}`$, similar to $`P_0(p_c,t)`$. Hence, the power $`\beta /\nu _{\parallel }`$ for $`q=\infty `$ is related to the persistence exponent for the EW interface.
Next, let us consider the size dependence of $`\rho (p_c,t)`$. To do so, we study the averaged density $`\rho ^{(s)}(p_c,t)`$ over the active runs which contain at least one vacant site at the reference height. Since $`\rho (p_c,t)`$ decays as Eq.(2) at $`p_c`$, the vacant sites disappear completely as $`t\to \infty `$. There exists a characteristic time $`\tau `$ such that every run is active up to the time $`\tau `$, while for $`t>\tau `$, it is active with probability less than one and proportional to $`\rho (p_c,t)`$, leading to the saturation of $`\rho ^{(s)}(p_c,t)`$, as shown in Fig.3. This fact implies that for $`t>\tau `$, when vacant sites exist, their number should be of order of unity. Once they are occupied, the sample falls into the inactive state, because other sites are covered by particles on upper layers. Hence, the density follows $`\rho (p_c,\tau )\sim 1/L`$. The characteristic time $`\tau `$ depends on system size $`L`$ as $`\tau \sim L^z`$ with the dynamic exponent $`z=\nu _{\parallel }/\nu _{\perp }`$. According to the scaling theory, the saturated value, $`\stackrel{~}{\rho }^{(s)}(p_c)`$ is written as
$$\stackrel{~}{\rho }^{(s)}(p_c)\sim L^{-\beta /\nu _{\perp }}.$$
(3)
The power $`\beta /\nu _{\perp }`$ is found to be $`1`$, as predicted. Therefore, the dynamic exponent $`z`$ is obtained to be between 1.32 and 1.41, which deviates from the value $`z_{EW}=2`$ for the interface dynamics. The origin of this discrepancy may be that the dynamics at the reference level is affected by the boundary at $`h=0`$.
Let us consider the dynamics of the interface above the reference height. We examine the interface fluctuation width, $`W^2(L,t)=\frac{1}{L}\sum _xh^2(x,t)-\left(\frac{1}{L}\sum _xh(x,t)\right)^2`$ at $`p_c`$. Contrary to the cases $`q=1`$ and 2, $`W^2`$ exhibits the power-law behavior shown in Fig.4,
$$W^2(L,t)\sim \{\begin{array}{cc}t^{2\zeta },\hfill & \text{for }t\ll L^{\chi /\zeta }\text{,}\hfill \\ L^{2\chi },\hfill & \text{for }t\gg L^{\chi /\zeta },\hfill \end{array}$$
(4)
where $`\zeta `$ and $`\chi `$ are the growth and the roughness exponents, respectively. The exponents $`\zeta `$ and $`\chi `$ for $`q\ge 3`$ are obtained numerically as $`\zeta \approx 0.22,0.23,0.23`$ and $`0.24`$, and $`\chi \approx 0.46,0.48,0.46`$ and $`0.48`$ for $`q=3,4,5`$ and $`\infty `$, respectively. These values are close to the EW values, $`\zeta =1/4`$ and $`\chi =1/2`$. We also examine the height-height correlation function, $`C^2(r)=<(h(r)-h(0))^2>\sim r^{2\chi ^{\prime }}`$ in the long time limit. The exponent $`\chi ^{\prime }`$ is consistent with $`\chi `$. For $`p>p_c`$, the scaling of the interface width belongs to the Kardar-Parisi-Zhang universality class .
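Both observables are one-line estimators given a height profile (a sketch using numpy; the input below is a placeholder, not simulation data):

```python
import numpy as np

def width_sq(h):
    """W^2 = <h^2> - <h>^2 over the substrate."""
    return np.asarray(h, dtype=float).var()

def height_corr_sq(h, r):
    """C^2(r) = <(h(x+r) - h(x))^2>, averaged over x (periodic substrate)."""
    h = np.asarray(h, dtype=float)
    return np.mean((np.roll(h, -r) - h) ** 2)

h = np.random.randint(0, 3, size=256)   # placeholder profile only
print(width_sq(h), height_corr_sq(h, 10))
```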
We also generalize the d-DE model to the trimer case, called the t-DE model, where a trimer is deposited with probability $`p`$ or evaporated with probability $`1-p`$ according to a rule similar to the d-DE model. Note that the t-DE model is different from the model introduced by Stinchcombe et al. in two aspects: the former (latter) model is an interface (monolayer) model, and evaporation on terraces is prohibited (allowed). As shown in Fig.5, there exists a critical deposition probability $`p_c`$ such that for $`p<p_c`$, the density of the vacant sites at the bottom layer is saturated, whereas for $`p\ge p_c`$, it decays algebraically. Note that the behavior of the exponential-type decay for $`p>p_c`$ does not appear in the t-DE model. The exponent $`\beta /\nu _{\parallel }`$ is obtained to be $`\approx 0.38`$, which is almost half of the value for the 3-species model, which is also the case for $`q=2`$. On the other hand, the dynamic exponent $`z\approx 2.47`$ is obtained by measuring the characteristic time for the finite-size cutoff. This value is inconsistent with the one for the $`q`$-species model. Accordingly, further study of the dynamic exponent $`z`$ is required. Detailed numerical results for the t-DE model and its generalization to the arbitrary $`q`$-mer case will be published elsewhere .
In summary, we have introduced the $`q`$-species model, which is an interface model exhibiting RT accompanied by a binding-unbinding transition with respect to the reference height. The NPT occurring at the reference height has been examined numerically for $`q=3,4,5`$ and $`\infty `$. We found that the NPT occurs at finite $`p_c`$ for all the cases of $`q`$, which is remarkable, because the NPT for $`q\ge 3`$ has never been found before. For $`q\ge q_c=3`$, the number of species is unimportant, and the NPT, independent of $`q`$, is related to the EW interface dynamics via the return probability of the interface to its initial height. We measured the critical exponents for the NPT and RT. We also considered the t-DE model with three-state symmetry, and compared the result with the three-species model.
The authors wish to thank H. Park and J.D. Noh for helpful discussions. This work was supported in part by the Korea Research Foundation (98-001-D00280 & 98-015-D00090).
# On the interaction energy of 2D electron FQHE systems within the Chern-Simons approach

Piotr Sitko

Institute of Physics, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland.

## Abstract
The interaction energy of the two-dimensional electron system in the region of the fractional quantum Hall effect is considered within the Chern-Simons composite fermion approach. In the limit when the Coulomb interaction is very small compared to the cyclotron energy, the RPA results are obtained for the fillings $`\nu =1/3`$, $`1/5`$, $`2/3`$, $`2/5`$, $`3/7`$ and compared with the exact diagonalization results for small systems (extrapolated to infinite systems). They show very poor agreement, suggesting the need to look for alternative approaches.
PACS: 73.40.H, 71.10.P
keywords: fractional quantum Hall effect, composite fermions
1. Introduction
One of the aims of theoretical studies of fractional-quantum-Hall-effect (FQHE) systems is to determine the value of the interaction energy of the system at various fractional fillings of the lowest Landau level. The exact numerical results can be found for few particle systems and they agree very well with the predictions of the trial wave function approach of Laughlin and Jain (Laughlin and Jain wave functions are proposed for the case when the Coulomb interaction is very small compared to the cyclotron energy and higher Landau levels can be omitted; this is also the case in the related numerical work). The many-body theory for such systems is formulated within the Chern-Simons gauge theory which introduces the gauge field mapping fermions into fermions (so-called composite fermions, obtained by attaching an even number of flux quanta to each electron). There is no small parameter in the Chern-Simons composite fermion theory (in contrast to the case of anyons in the fermion limit ). Nevertheless, the Chern-Simons theory gives good predictions of the transport properties of the system . However, in contrast to the trial wave function approach, the attempts to get the ground state energy of the system within the Chern-Simons theory are not very successful . In calculations of energy gaps a modified approach has been used in which a relation between finite size calculations and the Chern-Simons results is assumed . In this paper we calculate the interaction energy of the system in the Chern-Simons approach within the RPA for several values of the filling fraction ($`\frac{1}{3},\frac{1}{5},\frac{2}{3},\frac{2}{5},\frac{3}{7}`$) and compare them with the exact diagonalization results for few electron systems (extrapolated to infinite systems).
The composite fermion (CF) transformation consists in attaching an even number ($`2p`$) of flux quanta to each electron. Such point fluxes do not change the statistics of composite particles; however, they allow one to treat the 2D system of electrons in a strong magnetic field in a close analogy to the treatment in a weak magnetic field. It is motivated by the mean field approach when the sum of point fluxes is replaced by an average flux. The corresponding average field is $`B^{ChS}=-2p\frac{hc}{e}\rho `$ (fluxes opposite to the external flux, $`p`$ is an integer, $`\rho `$ – density) and the effective field acting on electrons is reduced to $`B^{eff}=B^{ex}+B^{ChS}`$. In a close analogy to the quantum Hall effect one predicts a similar effect when $`n`$ Landau levels are completely filled in the effective field, i.e. $`B^{eff}=\frac{1}{n}\frac{hc}{e}\rho `$. Hence, $`B^{ex}=\frac{2pn+1}{n}\frac{hc}{e}\rho `$, which means that the ”real” lowest Landau level is filled in the fraction $`\nu =\frac{n}{2pn+1}`$ ($`\nu =\frac{n}{2pn-1}`$ when the effective field is opposite to the external one). It is interesting to notice that the Laughlin fractions (of the form $`1/m`$, $`m`$ – odd) can be represented in two different ways. We can add that the Chern-Simons theory results should be independent of the actual Chern-Simons parameter $`2p`$ (if even) . Hence, we can use the value of $`p`$ which is most suitable in a given problem. In practice, we use the value of $`p`$ which gives fully filled Landau levels in the effective field (the treatment of such systems is well known). Nevertheless, whatever value of $`p`$ is taken, the results should be the same.
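As a worked instance of this counting (simple arithmetic added for clarity): for $`p=1`$ and $`n=2`$ one has $`B^{eff}=\frac{1}{2}\frac{hc}{e}\rho `$, $`B^{ex}=\frac{5}{2}\frac{hc}{e}\rho `$ and hence $`\nu =2/5`$, while the Laughlin fraction $`\nu =1/3`$ is obtained either as $`\nu =n/(2pn+1)`$ with $`p=1`$, $`n=1`$, or as $`\nu =n/(2pn-1)`$ with $`p=2`$, $`n=1`$ (effective field opposite to the external one).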
The Hamiltonian of the two-dimensional system of electrons in an external magnetic field
$$H=\int d^2𝐫\mathrm{\Psi }^+(𝐫)\frac{1}{2m}(𝐩+\frac{e}{c}𝐀^{ex}(𝐫))^2\mathrm{\Psi }(𝐫)$$
$$+\frac{1}{2}\int d^2𝐫\int d^2𝐫^{\prime }\mathrm{\Psi }^+(𝐫)\mathrm{\Psi }^+(𝐫^{\prime })\frac{e^2}{ϵ|𝐫-𝐫^{\prime }|}\mathrm{\Psi }(𝐫)\mathrm{\Psi }(𝐫^{\prime })$$
(1)
($`ϵ`$ – dielectric constant) can be rewritten in the following way:
$$H=\int d^2𝐫\mathrm{\Psi }^+(𝐫)\frac{1}{2m}(𝐩+\frac{e}{c}𝐀^{ex}(𝐫)+\frac{e}{c}𝐀^{ChS}(𝐫))^2\mathrm{\Psi }(𝐫)+$$
$$+\frac{1}{2}\int d^2𝐫\int d^2𝐫^{\prime }\mathrm{\Psi }^+(𝐫)\mathrm{\Psi }^+(𝐫^{\prime })\frac{e^2}{ϵ|𝐫-𝐫^{\prime }|}\mathrm{\Psi }(𝐫)\mathrm{\Psi }(𝐫^{\prime })$$
(2)
where
$$A_\alpha ^{ChS}(𝐫)=2p\frac{\hbar c}{e}\int d^2𝐫^{\prime }ϵ_{\alpha \beta }\frac{(𝐫-𝐫^{\prime })_\beta }{|𝐫-𝐫^{\prime }|^2}\rho (𝐫^{\prime }),$$
(3)
$`\rho (𝐫)=\mathrm{\Psi }^+(𝐫)\mathrm{\Psi }(𝐫)`$. The Hamiltonian $`H`$ can be separated into two parts: $`H=H_0+H_{int}`$ where
$$H_0=\int d^2𝐫\mathrm{\Psi }^+(𝐫)\frac{1}{2m}(𝐩+\frac{e}{c}𝐀^{ef}(𝐫))^2\mathrm{\Psi }(𝐫)$$
(4)
is treated as the unperturbed term ($`B^{eff}=\nabla \times 𝐀^{ef}=\nabla \times (𝐀^{ex}+\overline{𝐀}^{ChS})=B^{ex}+B^{ChS}`$, where $`B^{ChS}`$ is found by averaging the point fluxes – putting the average density $`\rho `$ in (3)). $`H_{int}`$ is the interaction Hamiltonian :
$$H_{int}=\frac{1}{2}\int d^2𝐫\int d^2𝐫^{\prime }\mathrm{\Psi }^+(𝐫)\mathrm{\Psi }^+(𝐫^{\prime })\frac{e^2}{ϵ|𝐫-𝐫^{\prime }|}\mathrm{\Psi }(𝐫)\mathrm{\Psi }(𝐫^{\prime })+H_1+H_2,$$
(5)
where
$$H_1=2p\frac{\hbar }{m}\int d^2𝐫\int d^2𝐫^{\prime }\mathrm{\Psi }^+(𝐫)(p_\alpha +\frac{e}{c}A_\alpha ^{ef})\mathrm{\Psi }(𝐫)ϵ_{\alpha \beta }\frac{(𝐫-𝐫^{\prime })_\beta }{|𝐫-𝐫^{\prime }|^2}(\rho (𝐫^{\prime })-\rho ),$$
(6)
$$H_2=(2p)^2\frac{\hbar ^2}{2m}\int d^2\mathbf{r}\,d^2\mathbf{r}^{\prime }\,d^2\mathbf{r}^{\prime \prime }\,\rho (\mathbf{r})\,\frac{(\mathbf{r}-\mathbf{r}^{\prime })}{|\mathbf{r}-\mathbf{r}^{\prime }|^2}\cdot \frac{(\mathbf{r}-\mathbf{r}^{\prime \prime })}{|\mathbf{r}-\mathbf{r}^{\prime \prime }|^2}\left(\rho (\mathbf{r}^{\prime })-\rho \right)\left(\rho (\mathbf{r}^{\prime \prime })-\rho \right).$$
(7)
In this paper we consider the case when $`B^{eff}=\frac{1}{n}\frac{hc}{e}\rho `$, i.e. in the unperturbed state one has $`n`$ completely filled Landau levels (the effective filling $`\nu ^{}=n`$). The first step in calculating the ground state interaction energy is the Hartree-Fock approximation. Considering the Coulomb interaction one finds the Hartree-Fock (H-F) contribution to be (we assume the presence of a positive background):
$$E^{HF}=-\frac{N}{2n}\frac{e^2}{\epsilon a_0^{eff}}\int _0^{\infty }\left[\sum _{k=0}^{n-1}L_k^0({\textstyle \frac{1}{2}}r^2)\right]^2\mathrm{exp}(-{\textstyle \frac{1}{2}}r^2)\,dr$$
(8)
where $`a_0^{eff}=\sqrt{\frac{\hbar c}{eB^{eff}}}`$ is the effective magnetic length ($`a_0^{eff}=\sqrt{\frac{B^{ex}}{B^{eff}}}a_0^{ex}=\sqrt{2pn\pm 1}\,a_0^{ex}`$, $`a_0^{ex}=\sqrt{\frac{\hbar c}{eB^{ex}}}`$), $`L_l^m`$ are Laguerre polynomials and $`N`$ is the number of particles. The H-F results are presented in Table I for several filling fractions and compared with ”exact” results (exact diagonalization results extrapolated to infinite systems). The difference between the ”exact” and Hartree-Fock results (the correlation energy) increases with decreasing fraction; for the $`1/3`$ state it is of order of $`10`$% (of the exact value). It seems that a higher order approximation, e.g. the RPA, will give better agreement. In the following we consider the correlation energy within the RPA, assuming that the separation between Landau levels is much larger than the Coulomb interaction between particles (as is the case in the exact diagonalization methods we refer to).
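The Hartree-Fock column of Table I can be reproduced directly from formula (8). The following is a minimal numerical sketch (the helper name `e_hf` and the use of scipy are our own choices); the conversion $`a_0^{eff}=\sqrt{2pn\pm 1}\,a_0^{ex}`$ expresses the result in units of $`\frac{e^2}{ϵa_0^{ex}}`$:

```
# Check of Eq. (8) against the H-F column of Table I.
import numpy as np
from scipy.special import eval_laguerre
from scipy.integrate import quad

def e_hf(n, p, sign=+1):
    """E_HF per particle in units of e^2/(eps a_0^ex)."""
    integrand = lambda r: sum(eval_laguerre(k, 0.5 * r**2)
                              for k in range(n))**2 * np.exp(-0.5 * r**2)
    val, _ = quad(integrand, 0, np.inf)
    # Eq. (8) is in units of e^2/(eps a_0^eff); divide by sqrt(2pn +/- 1)
    return -val / (2 * n) / np.sqrt(2 * p * n + sign)

print(e_hf(1, 1))      # nu = 1/3: about -0.362
print(e_hf(2, 1))      # nu = 2/5: about -0.385
print(e_hf(3, 1))      # nu = 3/7: about -0.396
print(e_hf(1, 2))      # nu = 1/5: about -0.280
print(e_hf(2, 1, -1))  # nu = 2/3: about -0.497
```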
2. Correlation energy
The correlation energy can be defined as follows:
$$E_c=\int _0^1\frac{d\lambda }{\lambda }\left(\langle \lambda H_{int}\rangle _\lambda -\langle \lambda H_{int}\rangle _0\right).$$
(9)
The expression for the correlation energy in the RPA (three-body contributions are omitted) has the form
$$E_c^{RPA}=\frac{1}{2}\hbar L^2\int \frac{d\mathbf{q}}{(2\pi )^2}\int _0^{\infty }\frac{d\omega }{\pi }\int _0^1\frac{d\lambda }{\lambda }\,\mathrm{Im}\,\mathrm{tr}\,(\lambda V(q))\left[D_\lambda ^{RPA}(\mathbf{q},\omega )-D_0(\mathbf{q},\omega )\right]$$
(10)
where $`D_\lambda ^{RPA}`$ is the correlation function of effective field currents ($`L^2`$ is the area of the system):
$$D_{\mu \nu }^{RPA}(\mathbf{r}t,\mathbf{r}^{\prime }t^{\prime })=-\frac{i}{\hbar }\langle T[j^\mu (\mathbf{r}t),j^\nu (\mathbf{r}^{\prime }t^{\prime })]\rangle $$
(11)
given within the random-phase approximation (with the coupling constant $`\lambda `$):
$$D_\lambda ^{RPA}(\mathbf{q},\omega )=\left[I-\lambda D_0(\mathbf{q},\omega )V(\mathbf{q})\right]^{-1}D_0(\mathbf{q},\omega ).$$
(12)
The current densities are defined as:
$$\mathbf{j}(\mathbf{r})=\frac{1}{2m}\sum _j\left\{\mathbf{P}_j+\frac{e}{c}\mathbf{A}_j^{ef},\,\delta (\mathbf{r}-\mathbf{r}_j)\right\}$$
(13)
where the braces denote an anticommutator and $`\mathbf{j}`$ is the vector part of $`j^\mu `$ with $`\mu =0,x,y`$. We define $`j^0`$ as the density fluctuation: $`j^0=\sum _j\delta (\mathbf{r}-\mathbf{r}_j)-\rho `$. The interaction matrix $`V`$ is obtained from the Hamiltonian $`H_{int}`$ (dropping three-body terms). We choose $`𝐪=q\widehat{𝐱}`$ and the Coulomb gauge, which reduces the problem to a $`2\times 2`$ matrix $`D_{\mu \nu }^{RPA}`$ ($`\mu =0,y`$). Taking $`\omega _c^{eff}=\frac{eB^{eff}}{cm}`$ and $`a_0^{eff}`$ as frequency and length units, respectively, one finds ($`\hbar =1`$):
$$V(\mathbf{q})=\left(\begin{array}{cc}v(q)& 0\\ 0& 0\end{array}\right)+\frac{4p\pi }{q^2}\left(\begin{array}{cc}2pn& -iq\\ iq& 0\end{array}\right)$$
(14)
where $`v(q)`$ is the Fourier transform of the Coulomb potential ($`v(q)=\frac{2\pi e^2}{ϵq}`$ in standard units).
Let us compare the two energy scales, one related to the separation between Landau levels ($`\hbar \omega _c^{ex}`$) and one to the strength of the Coulomb interaction ($`\frac{e^2}{ϵa_0^{ex}}`$). We introduce the dimensionless parameter:
$$r_s=\frac{e^2}{\epsilon a_0^{eff}}\frac{1}{\hbar \omega _c^{eff}}$$
(15)
which shows the strength of the Coulomb interaction with respect to the separation between effective Landau levels ($`r_s^{ex}=\frac{e^2}{ϵa_0^{ex}}\frac{1}{\hbar \omega _c^{ex}}=\frac{1}{\sqrt{2pn\pm 1}}r_s`$). If one considers the system of electrons, the limit $`\frac{e^2}{ϵa_0^{ex}}\frac{1}{\hbar \omega _c^{ex}}\to 0`$ corresponds to the case when particle-hole excitations (an electron excited into a higher Landau level) are negligible (hence, in exact diagonalization studies higher Landau levels can be omitted). When applying the Chern-Simons picture, however, the gauge interactions are always of order of $`\hbar \omega _c^{ex}`$, and particle-hole excitations (a CF excited into an empty Landau level and a CF hole in a filled level) have to be considered. We have
$$V(\mathbf{q})=\frac{4p\pi }{q^2}\left(\begin{array}{cc}2pn\left(1+\frac{q}{(2pn)^2}nr_s\right)& -iq\\ iq& 0\end{array}\right)$$
(16)
The correlation function $`D_0`$ is:
$$D_0(\mathbf{q},\omega )=\frac{n}{2\pi }\left(\begin{array}{cc}q^2\mathrm{\Sigma }_0& -iq\mathrm{\Sigma }_1\\ iq\mathrm{\Sigma }_1& \mathrm{\Sigma }_2\end{array}\right)$$
(17)
where
$$\mathrm{\Sigma }_j=\frac{e^{-x}}{n}\sum _{m=n}^{\infty }\sum _{l=0}^{n-1}\frac{m-l}{\omega ^2-(m-l-i\eta )^2}\,\frac{l!}{m!}\,x^{m-l-1}\left[L_l^{m-l}(x)\right]^{2-j}$$
$$\times \left[(m-l-x)L_l^{m-l}(x)+2x\frac{dL_l^{m-l}(x)}{dx}\right]^j$$
(18)
and $`x=\frac{q^2}{2}`$. Then one obtains:
$$D^{RPA}(\mathbf{q},\omega )=\frac{n}{2\pi \,det}\left(\begin{array}{cc}q^2\mathrm{\Sigma }_0& -iq\mathrm{\Sigma }_s\\ iq\mathrm{\Sigma }_s& \mathrm{\Sigma }_p\end{array}\right)$$
(19)
where $`det=det(I-D_0V)=(1-2pn\mathrm{\Sigma }_1)^2-(2pn)^2\mathrm{\Sigma }_0(1+\mathrm{\Sigma }_2)-nr_sq\mathrm{\Sigma }_0`$,
$`\mathrm{\Sigma }_s=\mathrm{\Sigma }_1-2pn\mathrm{\Sigma }_1^2+2pn\mathrm{\Sigma }_0\mathrm{\Sigma }_2`$, $`\mathrm{\Sigma }_p=(2pn)^2\mathrm{\Sigma }_1^2+\mathrm{\Sigma }_2-(2pn)^2\mathrm{\Sigma }_0\mathrm{\Sigma }_2+qnr_s(\mathrm{\Sigma }_1^2-\mathrm{\Sigma }_0\mathrm{\Sigma }_2)`$.
Collective modes are determined by the poles of the correlation function $`D^{RPA}`$. In Figures 1-2 we plot the results for $`\nu =1`$ (the direct result and the $`p=1`$ CF approach result); similar results for $`\nu =1/3`$ are presented in Figures 3-4, and in Figure 5 the $`3/7`$ case is presented.
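As an illustration, the mode frequencies for $`n=1`$ can be traced numerically by scanning the determinant written above for sign changes between its poles at integer $`\omega `$. This is only a sketch based on the expressions as reconstructed here; the truncation of the $`m`$-sum and the scanning grid are arbitrary choices:

```
# Collective modes for n=1 (L_0^m = 1, so Sigma_j has a closed per-term form).
import numpy as np
from scipy.optimize import brentq
from scipy.special import factorial

def sigma(j, x, w, m_max=80):
    m = np.arange(1, m_max + 1)
    return np.exp(-x) * np.sum(m / (w * w - m * m)
                               * x**(m - 1) / factorial(m) * (m - x)**j)

def det(w, q, p=1, rs=1e-4):
    x = 0.5 * q * q
    s0, s1, s2 = (sigma(j, x, w) for j in range(3))
    c = 2 * p                       # 2pn with n=1
    return (1 - c * s1)**2 - c**2 * s0 * (1 + s2) - rs * q * s0

def mode(m, q):
    # look for a zero of det between the poles at omega = m and m+1
    ws = np.linspace(m + 1e-3, m + 1 - 1e-3, 400)
    vals = np.array([det(w, q) for w in ws])
    jumps = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(jumps) == 0:
        return None
    i = jumps[0]
    return brentq(lambda w: det(w, q), ws[i], ws[i + 1])

for q in (0.5, 1.0, 2.0, 4.0):
    print(q, [mode(m, q) for m in (1, 2, 3)])
```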
The RPA correlation energy will be found using the dispersion relation of the collective modes. In units of $`\frac{e^2}{ϵa_0^{eff}}`$ the correlation energy can be expressed as follows ($`\frac{e^2}{ϵa_0^{eff}}=r_s\hbar \omega _c^{eff}`$):
$$E_c^{RPA}=\frac{N}{2nr_s}\int _0^{\infty }q\,dq\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\left(\mathrm{ln}\,det+\mathrm{tr}(V(q)D_0)\right)$$
(20)
which equals
$$E_c^{RPA}=\frac{N}{2nr_s}\int _0^{\infty }q\,dq\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\left(\mathrm{ln}\,det+2pn(2pn\mathrm{\Sigma }_0+2\mathrm{\Sigma }_1)+nr_sq\mathrm{\Sigma }_0\right).$$
(21)
We have:
$$\int _0^{\infty }dx\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{\Sigma }_0(x,\omega )=-\frac{1}{2}\sum _{m=1}^{\infty }\frac{1}{m}+\frac{1}{2}(S_n-1)$$
(22)
($`S_n=\sum _{j=1}^n\frac{1}{j}`$). This term is divergent, but combined with the other terms in the integral (21) it has to give a finite value. Additionally
$$\int _0^{\infty }dx\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{\Sigma }_1(x,\omega )=0.$$
(23)
The last integral in (21)
$$\frac{N}{2}\int _0^{\infty }q^2\,dq\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{\Sigma }_0$$
(24)
will be calculated separately for different $`\nu ^{}=n`$.
3. Results
To calculate the correlation energy (21) one needs to know the zeros of the determinant $`det`$ (collective modes). For $`n=1`$ (the effective filling $`\nu ^{}=1`$, Laughlin fractions) we have an infinite set of modes with short-wavelength behaviour $`\omega _m(q\to \infty )=m`$ – Figs. 1-4. It can be shown that:
$$\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{ln}\,det=\sum _{m=1}^{\infty }(\omega _m-m)=\sum _{m=1}^{\infty }\mathrm{\Delta }\omega _m$$
(25)
The integral (24) equals:
$$\int _0^{\infty }q^2dq\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{\Sigma }_0=\int _0^{\infty }\sqrt{2x}\sum _{m=1}^{\infty }\frac{e^{-x}x^{m-1}}{m!}\,dx=\sqrt{2}\int _0^{\infty }\frac{1-e^{-x}}{\sqrt{x}}\,dx.$$
(26)
Again, this integral is divergent, but combined with (25) and (22) it will give a finite result. We write:
$$\int _0^{\infty }q^2dq\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{\Sigma }_0=\frac{1}{2}\sqrt{2\pi }\sum _{m=1}^{\infty }\frac{1}{2^mm!}\prod _{i=1}^{m}(2i-1)$$
(27)
and then (in units of $`\frac{e^2}{ϵa_0^{eff}}`$)
$$\frac{E_c^{RPA}}{N}=\frac{1}{2r_s}\sum _{m=1}^{\infty }\left(\int \mathrm{\Delta }\omega _m(x)\,dx-(2p)^2\frac{1}{2m}-\frac{1}{2}r_s\sqrt{2\pi }\prod _{i=1}^{m}\frac{2i-1}{2i}\right).$$
(28)
In the case when $`n\geq 2`$ every root (of the collective modes) higher than the first is split into two (for $`m>1`$, $`\omega _m^{-}(q\to \infty )=m=\omega _m^{+}(q\to \infty )`$ – Figure 5). One has
$$\int _0^{\infty }\frac{d\omega }{\pi }\,\mathrm{Im}\,\mathrm{ln}\,det=\mathrm{\Delta }\omega _1+\sum _{m=2}^{\infty }(\mathrm{\Delta }\omega _m^{-}+\mathrm{\Delta }\omega _m^{+})$$
(29)
and the correlation energy for $`n=2`$ is given by (in units of $`\frac{e^2}{ϵa_0^{eff}}`$)
$$\frac{E_c^{RPA}}{N}=\frac{1}{4r_s}\left(\int \mathrm{\Delta }\omega _1(x)\,dx-8p^2\right)+\frac{1}{4r_s}\sum _{m=2}^{\infty }\left[\int \left(\mathrm{\Delta }\omega _m^{-}(x)+\mathrm{\Delta }\omega _m^{+}(x)\right)dx-(4p)^2\frac{1}{2m}\right]+\frac{p^2}{r_s}$$
$$-\frac{1}{8}\sqrt{2\pi }\sum _{m=1}^{\infty }\left[(1-\delta _{m1})m+\frac{(2m+3)(2m+1)}{4(m+1)}\right]\prod _{i=1}^{m}\frac{2i-1}{2i}.$$
(30)
We have also found the expression for $`E_c^{RPA}`$ for $`n=3`$ (applied to $`\nu =3/7`$).
The value of the interaction energy related to the Coulomb interaction (in the limit $`r_s\to 0`$) is:
$$\frac{\mathrm{\Delta }E_c^{RPA}}{N}=\underset{r_s\to 0}{lim}\left(\frac{E_c^{RPA}(r_s)}{N}-\frac{E_c^{RPA}(NC)}{N}\right),$$
(31)
$`NC`$ stands for the case with no Coulomb interaction. For $`n=1`$ one has
$$\frac{\mathrm{\Delta }E_c^{RPA}}{N}=\underset{r_s\to 0}{lim}\frac{1}{2r_s}\sum _{m=1}^{\infty }\left(\int \left(\omega _m^{r_s}(x)-\omega _m^{NC}(x)\right)dx-\frac{1}{2}r_s\sqrt{2\pi }\prod _{i=1}^{m}\frac{2i-1}{2i}\right).$$
(32)
The main problem in calculating (32) is the calculation of the integrals and the convergence of the summation over $`m`$. The integrals in (32) (and similar ones for $`n=2`$ and $`n=3`$) have been calculated numerically using $`k`$-point Gauss-Laguerre integration. It was verified that the summation over $`m`$ converges well, and the sums have been truncated at $`2k`$ terms. In order to find the limit $`r_s\to 0`$ we considered small values of $`r_s`$. It appears that for $`r_s`$ of order of $`10^{-4}-10^{-5}`$ the RPA energies become practically independent of $`r_s`$, and for that range the results are given in Tables II-IV.
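For reference, a $`k`$-point Gauss-Laguerre rule of the kind used here is available in standard libraries; a small sketch (the test integrand is an arbitrary example with known value):

```
# k-point Gauss-Laguerre: exact for int_0^inf e^{-x} g(x) dx when g is a
# polynomial of degree <= 2k-1; generic f is handled by weighting with e^{x}.
import numpy as np

k = 20
x, w = np.polynomial.laguerre.laggauss(k)

f = lambda t: np.exp(-t) * t**2      # int_0^inf t^2 e^{-t} dt = 2
approx = np.sum(w * np.exp(x) * f(x))
print(approx)                        # ~2.0
```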
In Table II we present the RPA results in respective units of $`\frac{e^2}{ϵa_0^{eff}}`$. It can be seen that the results for the same values of $`p`$ and $`|\nu ^{}|`$ are very close to one another. They are, however, not the same for the same filling $`\nu `$ (as they should be). We observe a rather strong dependence on $`p`$ (at a given effective filling $`\nu ^{}`$).
In Table III and Table IV the RPA results are compared with the exact diagonalization results. The best agreement with the ”exact” values is found for the series $`3/7`$, $`2/5`$, $`1/3`$ (fractions going down from $`1/2`$). The $`1/3`$ state gives the best result; the difference between the RPA and the exact values is of order of 20% (of the exact result).
An interesting example is the system of electrons at the real filling $`\nu =1`$. In the limit $`r_s^{ex}\to 0`$ the RPA correlation energy is zero. A qualitative difference is found within the RPA for the CF approach. Performing the CF transformation one finds the system at the effective filling $`\nu ^{}=-1`$ (the effective field is opposite to the external one). We plot the collective modes for the two descriptions in Figures 1-2. Calculating the RPA result (as described above) we find a finite CF result for the correlation energy ($`r_s\to 0`$).
A similar situation can be found at the filling $`\nu =1/3`$. Then the effective filling is $`\nu ^{}=1`$ ($`p=1`$) or $`\nu ^{}=-1`$ ($`p=2`$), and the Hartree-Fock contributions are the same in the two descriptions. The RPA collective mode spectra look very similar – Figures 3-4 (the agreement is exact at $`q\to 0`$) – but the values of the interaction energy differ a lot (Table III and Table IV).
In summary, the agreement between the RPA interaction energies and the exact results is far from the expected one, and the analysis needs an extension by including three-body contributions (the three-body density-density correlation function), which seems to be very complicated. An alternative approach may be obtained within the new formalism developed by Shankar and Murthy (which has a direct relation to the Laughlin trial wave function). In Ref. they showed how to calculate the energy gaps, and their results agree reasonably with numerical results.
4. Conclusions
The values of the Coulomb interaction energies for the 2D electron system in the region of the FQHE are calculated within the Chern-Simons theory in the RPA for several fractional fillings. The results are obtained in the limit $`(\frac{e^2}{ϵa_0^{ex}})(\frac{1}{\hbar \omega _c^{ex}})\to 0`$ (i.e., when the Coulomb interaction is very small compared to the separation between Landau levels) and compared with the exact diagonalization results (the results for few-particle systems extrapolated to infinite systems). The best agreement is found for fractions going down from $`1/2`$ ($`3/7`$, $`2/5`$, $`1/3`$); for the best $`1/3`$ result the difference between the RPA and the exact results is of order of $`20`$% (of the exact value). A qualitative difference is obtained for the $`\nu =1`$, $`p=1`$ CF description, where a finite RPA correlation energy is found. Also the result for $`\nu =1/3`$, $`p=2`$ is very different from the $`\nu =1/3`$ result obtained for $`p=1`$. Our analysis needs an extension to a higher order approximation including three-body contributions. An alternative approach may be obtained within the new formalism developed by Shankar and Murthy; their results for energy gaps are in reasonable agreement with numerical results.
Figure 1:
Collective modes for $`\nu =1`$, $`r_s=1`$.
Figure 2:
Collective modes for the filling $`\nu =1`$ given within the $`p=1`$ CF description ($`\nu ^{}=-1`$), $`r_s=1`$.
Figure 3:
Collective modes for $`\nu =1/3`$, $`r_s=1`$.
Figure 4:
Collective modes for the filling $`\nu =1/3`$ given within the $`p=2`$ CF description ($`\nu ^{}=-1`$), $`r_s=1`$.
Figure 5:
Collective modes for $`\nu =3/7`$, $`r_s=1`$.
| $`\nu `$ | H-F | exact |
| --- | --- | --- |
| $`1`$ | $`-0.627`$ | $`-0.627`$ |
| $`2/3`$ | $`-0.497`$ | $`-0.519`$ |
| $`5/11`$ | $`-0.406`$ | $`-0.451^{*}`$ |
| $`4/9`$ | $`-0.402`$ | $`-0.447^{*}`$ |
| $`3/7`$ | $`-0.396`$ | $`-0.443`$ |
| $`2/5`$ | $`-0.385`$ | $`-0.433`$ |
| $`1/3`$ | $`-0.362`$ | $`-0.412`$ |
| $`1/5`$ | $`-0.280`$ | $`-0.328`$ |
Table I: The Hartree-Fock and exact interaction energies (per particle) in respective units of $`\frac{e^2}{ϵa_0^{ex}}`$. The ”exact” results are taken from Refs. , where the results (in spherical systems) for few particles ($`N\leq 12`$) were extrapolated to infinite systems ($`N\to \infty `$). The two numbers with stars are obtained within the Jain CF approach . The ”exact” $`2/3`$ result is found via particle-hole symmetry .
| $`\nu `$ | $`\nu ^{}`$ | $`p`$ | $`\frac{\mathrm{\Delta }E_c^{RPA}}{N}`$ | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | | $`r_s=10^{-4}`$ | | | $`r_s=10^{-5}`$ | | |
| | | | $`k=10`$ | $`k=15`$ | $`k=20`$ | $`k=10`$ | $`k=15`$ | $`k=20`$ |
| $`3/7`$ | $`3`$ | $`1`$ | $`-0.531`$ | $`-0.525`$ | $`-0.520`$ | $`-0.531`$ | $`-0.525`$ | $`-0.520`$ |
| $`2/5`$ | $`2`$ | $`1`$ | $`-0.360`$ | $`-0.357`$ | $`-0.355`$ | $`-0.360`$ | $`-0.357`$ | $`-0.355`$ |
| $`1/3`$ | $`1`$ | $`1`$ | $`-0.247`$ | $`-0.247`$ | $`-0.246`$ | $`-0.247`$ | $`-0.247`$ | $`-0.246`$ |
| $`1/5`$ | $`1`$ | $`2`$ | $`-0.548`$ | $`-0.544`$ | $`-0.542`$ | $`-0.548`$ | $`-0.544`$ | $`-0.542`$ |
| $`1`$ | $`-1`$ | $`1`$ | $`-0.230`$ | $`-0.230`$ | $`-0.230`$ | $`-0.230`$ | $`-0.230`$ | $`-0.230`$ |
| $`1/3`$ | $`-1`$ | $`2`$ | $`-0.585`$ | $`-0.581`$ | $`-0.580`$ | $`-0.585`$ | $`-0.581`$ | $`-0.580`$ |
| $`2/3`$ | $`-2`$ | $`1`$ | $`-0.358`$ | $`-0.354`$ | $`-0.352`$ | $`-0.358`$ | $`-0.354`$ | $`-0.352`$ |
Table II. The RPA correlation energies in units of $`\frac{e^2}{ϵa_0^{eff}}`$. For the $`k=10`$ case of $`\nu =3/7`$ the summation goes over $`15`$ modes. For negative effective fillings we used instead $`\nu ^{}>0`$, $`p<0`$ (i.e. $`\nu <0`$).
| $`\nu `$ | $`\nu ^{}`$ | $`p`$ | $`\frac{E^{HF}}{N}`$ | $`\frac{\mathrm{\Delta }E_c^{RPA}}{N}`$ | $`\frac{E^{HF}}{N}+\frac{\mathrm{\Delta }E_c^{RPA}}{N}`$ | exact diagonalization |
| --- | --- | --- | --- | --- | --- | --- |
| $`1`$ | | $`0`$ | $`-0.627`$ | $`0`$ | $`-0.627`$ | $`-0.627`$ |
| $`3/7`$ | $`3`$ | $`1`$ | $`-0.396`$ | $`-0.197`$ | $`-0.593`$ | $`-0.443`$ |
| $`2/5`$ | $`2`$ | $`1`$ | $`-0.385`$ | $`-0.159`$ | $`-0.544`$ | $`-0.433`$ |
| $`1/3`$ | $`1`$ | $`1`$ | $`-0.362`$ | $`-0.142`$ | $`-0.504`$ | $`-0.412`$ |
| $`1/5`$ | $`1`$ | $`2`$ | $`-0.280`$ | $`-0.242`$ | $`-0.522`$ | $`-0.328`$ |
Table III. The interaction energies in respective units of $`\frac{e^2}{ϵa_0^{ex}}`$ (note the change of units with respect to Table II). The RPA values in the fifth column are taken from the $`k=20`$ results of Table II.
| $`\nu `$ | $`\nu ^{}`$ | $`p`$ | $`\frac{E^{HF}}{N}`$ | $`\frac{\mathrm{\Delta }E_c^{RPA}}{N}`$ | $`\frac{E^{HF}}{N}+\frac{\mathrm{\Delta }E_c^{RPA}}{N}`$ | exact diagonalization |
| --- | --- | --- | --- | --- | --- | --- |
| $`1`$ | $`-1`$ | $`1`$ | $`-0.627`$ | $`-0.230`$ | $`-0.857`$ | $`-0.627`$ |
| $`1/3`$ | $`-1`$ | $`2`$ | $`-0.362`$ | $`-0.335`$ | $`-0.697`$ | $`-0.412`$ |
| $`2/3`$ | $`-2`$ | $`1`$ | $`-0.497`$ | $`-0.203`$ | $`-0.700`$ | $`-0.519`$ |
Table IV. The interaction energies in units of $`\frac{e^2}{ϵa_0^{ex}}`$ obtained for negative effective fillings.
| no-problem/9905/physics9905035.html | ar5iv | text |
# The recursive adaptive quadrature in MS Fortran-77
## 1 Introduction
As was shown in , the application of recursion makes it possible to create compact, explicit and effective integration programs. In the mentioned papers the C++ version of such a routine is presented. However, it so happens historically that a large number of science and engineering Fortran-77 codes have been accumulated by now in the form of applied libraries and packages. That is one of the reasons why Fortran-77 is still quite popular in applied programming. From this standpoint it seems very useful to employ such an effective recursive integration algorithm in Fortran-77. There exist at least two possibilities to realize it. The first one is described in , where an interface for calling the mentioned C++ recursive integration function from MS Fortran-77 is presented. The second possibility consists in constructing the recursive subroutine by means of Fortran-77 only. This is the particular subject of this paper, where the possibility and benefits of the recursion strategy in MS Fortran-77 are discussed.
## 2 Recursion in MS Fortran-77
The direct transformation of the mentioned C++ code is not possible, mainly due to the formal prohibition of recursion in Fortran-77. However, Microsoft extensions of Fortran-77 (e.g. MS Fortran V.5.0, Fortran Power Station) allow one to make indirect recursive calls. It means that a subprogram can call itself through an intermediate subprogram. Anybody who doubts this can immediately try:
          call rec(1.0)
          end

          subroutine rec(hh)
          integer i/0/
          i = i + 1
          h = 0.5*hh
          write(*,*) i, h
          if (i.lt.3) call mediator(h)
          write(*,*) i, h
          end

          subroutine mediator(h)
          call rec(h)
          end
and get the following results:
     1   5.000000E-01
     2   2.500000E-01
     3   1.250000E-01
     3   1.250000E-01
     3   1.250000E-01
     3   1.250000E-01
But this is not true recursion, because no mechanism is supplied for restoring the values of the internal variables of the subroutine after it returns from recursion. The last requirement can be fulfilled by forced storing of the internal variables in the program stack. The AUTOMATIC declaration of variables provides such a possibility in MS Fortran-77. Taking this into account, the above example can be rewritten:
          call rec(1.0)
          end

          subroutine rec(hh)
          integer i/0/
          automatic h, i
          i = i + 1
          h = 0.5*hh
          write(*,*) i, h
          if (i.lt.3) call mediator(h)
          write(*,*) i, h
          end

          subroutine mediator(h)
          call rec(h)
          end
that yields:
     1   5.000000E-01
     2   2.500000E-01
     3   1.250000E-01
     3   1.250000E-01
     3   2.500000E-01
     3   5.000000E-01
Here the values of h are restored after each return from recursion, because h is saved in the stack before the recursive call. Note that although the variable i is declared AUTOMATIC, its value is nonetheless not saved.
## 3 Recursive adaptive quadrature algorithm
The described possibilities allow one to employ an effective recursion strategy for creating an adaptive quadrature subroutine in MS Fortran-77.
The presented algorithm consists of two independent parts: the adaptive subroutine and the quadrature formula. The adaptive subroutine uses a recursive algorithm to implement the standard bisection method (see fig.1). For reaching the desired relative accuracy $`\epsilon `$ of the integration, the integral estimation $`I_{whole}`$ over the \[$`a_i`$,$`b_i`$\] subinterval on the i-th step of bisection is compared with the sum of the $`I_{left}`$ and $`I_{right}`$ integral values evaluated over the left and right halves of the considered subinterval. The comparison rule was chosen in the form:
$$|I_{left}+I_{right}-I_{whole}|\leq \epsilon |I|,$$
(1)
where $`I`$ denotes the integral sum over whole integration interval \[a,b\]. The value of $`I`$ is accumulated and adjusted on each step of bisection.
Should (1) not be fulfilled, the adaptive procedure is called recursively for both (left and right) subintervals. Evaluation of the integral sums on each step of bisection is performed by means of the quadrature formula. There are no restrictions on the type of quadratures used during the integration. This makes the code very flexible and applicable to a wide range of integration problems.
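For illustration, the same recursive logic can be sketched in a few lines of Python (this is only a sketch, not the Fortran routine of the next section; the 10-point Gauss-Legendre rule, the helper names and the recursion-depth cap are arbitrary choices):

```
import numpy as np

X, W = np.polynomial.legendre.leggauss(10)

def quad_rule(f, a, b):
    return 0.5 * (b - a) * np.sum(W * f(0.5 * (b + a + (b - a) * X)))

def quadrec(f, a, b, estimate, state, eps=1e-14, depth=0, max_depth=50):
    c = 0.5 * (a + b)
    left, right = quad_rule(f, a, c), quad_rule(f, c, b)
    state['result'] += left + right - estimate      # running correction
    if abs(left + right - estimate) > eps * abs(state['result']):
        if depth < max_depth:
            quadrec(f, a, c, left, state, eps, depth + 1, max_depth)
            quadrec(f, c, b, right, state, eps, depth + 1, max_depth)
        else:
            state['raw'] += 1                       # unprocessed interval

def integrate(f, a, b, eps=1e-14):
    state = {'result': quad_rule(f, a, b), 'raw': 0}
    quadrec(f, a, b, state['result'], state, eps)
    return state['result'], state['raw']

print(integrate(np.exp, 0.0, 1.0))  # (e - 1, 0)
```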
The form (1) of the chosen comparison rule does not aim at effectiveness but rather at simplicity and generality. Indeed, it seems to be very general and does not depend on the integrand or on the quadrature type. At the same time the use of (1) can in some cases result in an overestimation of the calculated integral, which consequently leads to more integrand function calls. One certainly can get some gains using, for instance, definite quadratures with different numbers of points and/or equidistant points, or a Gauss-Kronrod quadrature, etc. The comparison rule in the latter cases becomes more effective but complex, intricate and sometimes less general. Whatever the case, the choice of the comparison rule as well as the problems connected with it lie outside the subject of this publication.
Let us note some advantages which the application of the recursive call ideology to numerical integration can reveal:
* A very simple and evident algorithm, resulting in an extremely short adaptive code that is easy to modify and enhance.
* Because of the indicated shortness, the adaptive procedure’s own running time has to be diminutive. This can result in better performance compared to the known programs, especially in the cases when the integrand function calculations are not time consuming.
* There is no need to store the integrand function values and pay attention to their correct usage. Besides, the control of subinterval bounds is no longer needed. The indicated features permit an utmost reduction of the effort one has to spend while creating the adaptive code.
* Nothing but the program’s stack size sets a restriction on the number of subintervals when the recursive procedure is used (see the next section). At the same time, for the existing programs the crush level of the primary interval is strictly limited by the dimensions of the static arrays.
## 4 Program realization
The Fortran-77 version of the adaptive subroutine practically coincides with the corresponding C++ QUADREC (Quadrature used Adaptively and Recursively) function:
```
      SUBROUTINE Quadrec(Fun,Left,Right,Estimation)
      real*8 Fun, Left, Right, Estimation
      real*8 Eps, Result, Section, SumLeft, SumRight, QuadRule
      integer*4 RecMax, RecCur, RawInt
      common /IP/ Eps, Result, RecMax, RawInt, RecCur
      automatic SumLeft, SumRight, Section
      external Fun
      RecCur = RecCur+1
      if (RecCur.le.RecMax) then
        Section = 0.5d0*(Left+Right)
        SumLeft = QuadRule(Fun,Left,Section)
        SumRight = QuadRule(Fun,Section,Right)
        Result = Result+SumLeft+SumRight-Estimation
        if (dabs(SumLeft+SumRight-Estimation).gt.Eps*dabs(Result)) then
          call Mediator(Fun,Left,Section,SumLeft)
          call Mediator(Fun,Section,Right,SumRight)
        end if
      else
        RawInt = RawInt+1
      end if
      RecCur = RecCur-1
      return
      end
```
Note that the subroutine contains only eleven executable statements. The integrand function name, the left and right bounds of the integration interval, and the initial estimation of the integral value are the formal arguments of the subroutine. The IP common block contains the following variables: the desired relative accuracy (Eps), the result of the integration (Result), the maximum and current levels of recursion (RecMax, RecCur), and the number of raw (not processed during integration) subintervals (RawInt).
The Section variable is used for storing the value of the midpoint of the current subinterval. The integral sums over its left and right halves are estimated by the external function QuadRule and stored in the SumLeft and SumRight variables. These three variables are declared AUTOMATIC, which protects their values from changing and allows using them after returning from recursion.
Execution of the subroutine begins with increasing the recursion level counter. If its value does not exceed RecMax, the integral sums over the left and right halves of the current subinterval are evaluated, the integration result is updated and the accuracy of the integration is checked. If the achieved accuracy is not sufficient, the subprogram calls itself (with the help of the Mediator subroutine) over the left and right halves of the subinterval. If the accuracy condition is satisfied, the recursion counter decreases and the subprogram returns to the previous level of recursion. The number of raw subintervals is increased when the desired accuracy is not reached and RecCur is equal to RecMax.
The Mediator subroutine has only one executable statement:
```
      subroutine Mediator(Fun,Left,Right,Estimation)
      real*8 Estimation, Fun, Left, Right
      external Fun
      call Quadrec(Fun,Left,Right,Estimation)
      return
      end
```
The main part of the integration program can look like:
```
      common /IP/ Eps, Result, RecMax, RawInt, RecCur
      common /XW/ X, W, N
      integer RecMax, RawInt, RecCur
      real*8 X(100), W(100), Left/0.0d0/, Right/1.0d0/
      real*8 Result, Eps, QuadRule
      real*8 Integrand
      external Integrand
      Eps = 1.0d-14
      N = 10
      RecCur = 0
      RecMax = 10
      RawInt = 0
      call gauleg(-1.0d0,1.0d0,X,W,N)
      Result = QuadRule(Integrand,Left,Right)
      call Quadrec(Integrand,Left,Right,Result)
      write(*,*) ' Result = ',Result,' RawInt = ',RawInt
      end
```
The common block XW contains the Gaussian abscissas and weights, which are calculated with the help of the gauleg subroutine for a given number of points N. The text of the subroutine, reproduced from , is presented in Appendix A.
The text of the QuadRule function is presented below:
```
      real*8 function QuadRule(Integrand,Left,Right)
      common /XW/ X, W, N
      real*8 X(100),W(100),IntSum,Abscissa,Left,Right,Integrand
      IntSum = 0.0d0
      do 1 i = 1, N
        Abscissa = 0.5d0*(Right+Left+(Right-Left)*X(i))
    1 IntSum = IntSum + W(i)*Integrand(Abscissa)
      QuadRule = 0.5d0*IntSum*(Right-Left)
      return
      end
```
It is important to note that the number of recursive calls is limited by the size of the program stack. This fact obviously sets a limit on the reachable number of bisections of the primary integration interval and consequently restricts the integration accuracy. Note that the stack size of the program can be enlarged by using the /Fxxxx option of the MS Fortran-77 compiler.
## 5 Numerical tests
The program testing was performed on four different integrals. In each case the exact value can be found analytically. That made it possible to control the desired and reached accuracy of the integration. Besides, the same integrals were computed with the help of the well-known adaptive program QUANC8, reproduced from . This allowed comparing the number of integrand function calls and the number of raw intervals for both programs.
The presented comparison merely aims to show that the use of recursion allows one to construct a very short and simple adaptive quadrature code that is not inferior to such a sophisticated program as QUANC8. Meanwhile, a direct comparison of these programs seems incorrect for a number of reasons.
The Newton-Cotes equidistant quadrature formula used in QUANC8 allows reuse of the integrand function values calculated in the previous steps of bisection. That is the reason why QUANC8 has higher performance in integrand function calls compared to adaptive programs that use quadratures with non-equidistant points. Since QUADREC is not tied to any definite quadrature formula, it can be classified as a program of the latter type.
At the same time QUANC8 gives bad results for functions with an unlimitedly growing derivative and does not work at all for functions that go to infinity at either of the integration interval endpoints. QUADREC has none of the indicated restrictions. Furthermore, the opportunity of choosing the quadrature type makes it a very flexible tool for integration. Here QUADREC gives a chance to choose the quadrature which is the most appropriate to the task (see Section 6).
For the integrals in sections 5.1 and 5.2 the optimal numbers of quadrature points were found and used for the integration. The 24-point quadrature was applied for the integration in sections 5.3 and 5.4.
### 5.1 Sharp peaks at a smooth background
Let us start with the calculation of the integral cited in :
$$\int _0^1\left(\frac{1}{(x-a_1)^2+a_2}+\frac{1}{(x-b_1)^2+b_2}-c_0\right)dx$$
(2)
The integrand is the sum of two Lorentz-type peaks and a constant background. At the beginning, the values of the $`a_1`$, $`a_2`$, $`b_1`$, $`b_2`$ and $`c_0`$ parameters were chosen to be the same as in the cited work. Then the test was conducted at decreasing values of $`a_2`$ and $`b_2`$, which determine the width of the peaks, while both programs satisfied the desired accuracy and did not signal about raw subintervals. The results of the test when $`a_2=b_2=10^{-8}`$ are presented in Table 1. Note that only the optimal values are given for the QUADREC program. The corresponding optimal numbers of Gaussian quadrature points are indicated.
As follows from the given data, the numbers of integrand function calls are comparable for both programs over a wide range of desired accuracy. Meanwhile, it is interesting to point out that the use of optimal quadratures can give definite profits in the reached accuracy and the number of integrand function calls even when the simple comparison rule (1) is used and no reuse of the integrand values is applied.
At further decreasing of the $`a_2`$ and $`b_2`$ parameters QUANC8 reported raw intervals and did not satisfy the desired accuracy. At the same time QUADREC gave correct results for the integral down to parameter values $`a_2=b_2=10^{-19}`$. This is mainly due to the differences between the static and dynamic memory allocation ideologies used in QUANC8 and QUADREC, respectively.
### 5.2 Integration at a given absolute accuracy
The next test concerns the integration of the function:
$$f(x)=\frac{(1-\alpha x)\mathrm{exp}(-\alpha x)}{x^2\mathrm{exp}(-2\alpha x)+b^2}$$
(3)
over the whole positive real axis. It is easy to show that its exact value is equal to zero. For reducing the integration interval to \[0,1\] the evident substitution x=t/(1-t) was used. As far as absolute accuracy of the integration was required, the fourteenth line in the listed above QUADREC function text was changed to:
`if (dabs(SumLeft+SumRight-Estimation).gt.Eps) then`
The results of the test are presented in Table 2.
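The vanishing of integral (3) can also be cross-checked with a general-purpose integrator: with $`u=x\mathrm{exp}(-\alpha x)`$ the integrand is $`du/(u^2+b^2)`$ and $`u(0)=u(\mathrm{})=0`$. A small sketch (the values of $`\alpha `$ and $`b`$ below are illustrative, not those used in the test):

```
import numpy as np
from scipy.integrate import quad

alpha, b = 2.0, 0.1

def f(x):
    return (1 - alpha * x) * np.exp(-alpha * x) \
           / (x**2 * np.exp(-2 * alpha * x) + b**2)

# substitution x = t/(1-t) maps [0, inf) onto [0, 1)
val, err = quad(lambda t: f(t / (1 - t)) / (1 - t)**2, 0.0, 1.0, limit=200)
print(val, err)   # val ~ 0 within the reported error estimate
```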
### 5.3 Improper integral
The next integrand function:
$$f(x)=x^{\frac{1}{n}-1}$$
(4)
becomes nonintegrable on the interval when $`n`$ goes to infinity. Besides, function (4) goes to infinity at the lower integration limit. That is the reason why QUANC8 cannot be directly applied to the problem. To still have the opportunity of comparison, the integration of the indicated function was performed from $`10^{-10}`$ to 1. The results of the test for numbers $`n`$ up to and including 20 and a desired relative accuracy of $`10^{-14}`$ are listed in Table 3. The second and fifth columns give the number of intervals that were not processed during the integration by the two routines. The numbers of integrand function calls and the values of reached relative accuracy are also presented in the table.
Such low performance of QUANC8, as follows from the given data, can be explained by the type of comparison rule it exploits. It is really not applicable to functions like (4) that have an unlimitedly growing derivative at some point(s) of the integration interval.
The results of the integration over the whole interval by QUADREC are shown in Table 4. The last column contains the maximum recursion level needed to achieve the desired accuracy. As it turned out to be rather big, the program stack size was correspondingly increased.
### 5.4 Evidence of program adaptation
Finally, integration of:
$$f(x)=sin(Mx)$$
(5)
over the \[0,2$`\pi `$\] interval is appropriate for demonstrating the ability of an integration routine to adapt. The exact integral value is evidently equal to zero for any integer M.
During the test a number of large prime integers were assigned to M and an absolute integration accuracy of $`10^{-10}`$ was required. The output of the test is presented in Table 5. The accommodation of the routine evidently follows from the data. Namely, for M going from $`10^5`$ to $`1.2\cdot 10^6`$ the maximum recursion level (integrand function call number) changes from 15 ($`1.6\cdot 10^6`$) to 18 ($`1.2\cdot 10^7`$).
For all chosen values of M the desired accuracy was fulfilled. Furthermore, the program succeeded down to an accuracy of $`10^{-12}`$. Note that the standard stack size was used during the test, and a higher performance of QUADREC could certainly be reached had the stack size been enlarged.
## 6 The optimal quadrature
Here we want to note the possibility of minimizing the number of integrand calls by choosing the quadrature with the optimal number of abscissas. Fig.2 demonstrates such a possibility.
In particular, circles present the number of integrand calls versus the number of Gaussian abscissas used for the integration of (2) over \[0,1\] at a desired relative accuracy of $`10^{-10}`$ and $`a_1=0.3`$, $`a_2=10^{-9}`$, $`b_1=0.9`$, $`b_2=4.0\cdot 10^{-9}`$, $`c_0=6`$. From the presented data one can see that the optimal number of Gaussian abscissas ranges approximately from 7 to 17. The use of more Gaussian abscissas leads to linear growth of the number of function calls, because it does not result in an essential reduction of the integration interval fragmentation. On the other hand, the use of a smaller number of Gaussian abscissas results in significant growth of the number of function calls due to the extremely high fragmentation required. The dependence of the maximum recursion level upon the number of abscissas, presented by triangles, confirms this consideration.
Note that despite the significant differences of the testing integrals (2), (3), the optimal number of Gaussian abscissas turned out to be within the above limits.
## 7 Conclusion
Thus, indirect recursion combined with the AUTOMATIC variable declaration allows one to employ a true recursion mechanism in MS Fortran-77. In particular, the recursion strategy was applied to create an effective adaptive quadrature code. Despite its simplicity and extremely small program body, it showed good results on rather complex testing integrals. The created subroutine is very flexible and applicable to a wide range of integration problems. In particular, it was applied for constructing an effective Hilbert transformation program. The latter was used to restore the frequency dependence of the refraction coefficient in the analysis of optical properties of complex organic compounds. The subroutine can be easily incorporated into existing Fortran programs. Note that the coding trick described in this paper is very convenient for constructing multidimensional adaptive quadrature programs.
## 8 Acknowledgments
We express our thanks to Dr. V.K.Basenko for stimulating and useful discussions of the problem.
Appendix
## A Gauss-Legendre weights and abscissas
```
      subroutine gauleg(x1,x2,x,w,n)
      integer n
      real*8 x1,x2,x(n),w(n)
      real*8 eps
      parameter (eps=3.d-14)
      integer i,j,m
      real*8 p1,p2,p3,pp,xl,xm,z,z1
      m=(n+1)/2
      xm=0.5d0*(x2+x1)
      xl=0.5d0*(x2-x1)
      do 12 i=1,m
        z=cos(3.141592654d0*(i-.25d0)/(n+.5d0))
1       continue
          p1=1.d0
          p2=0.d0
          do 11 j=1,n
            p3=p2
            p2=p1
            p1=((2.d0*j-1.d0)*z*p2-(j-1.d0)*p3)/j
11        continue
          pp=n*(z*p1-p2)/(z*z-1.d0)
          z1=z
          z=z1-p1/pp
        if(abs(z-z1).gt.eps)goto 1
        x(i)=xm-xl*z
        x(n+1-i)=xm+xl*z
        w(i)=2.d0*xl/((1.d0-z*z)*pp*pp)
        w(n+1-i)=w(i)
12    continue
      return
      end
```
| no-problem/9905/hep-ph9905261.html | ar5iv | text |
# OBSERVED PROPERTIES OF 𝜎-PARTICLE
## 1 Introduction
<sup>1</sup> This talk was presented by M. Y. Ishida.
Recently we found rather strong evidence for the existence of the light $`\sigma `$-particle by analyzing the experimental data obtained through both the scattering and the production processes of the $`I=0`$ $`S`$-wave $`\pi \pi `$ system. In the preceding talk of this conference (referred to as I in the following) it was explained that our applied methods of analysis are generally consistent with the unitarity of the $`S`$-matrix. More specifically, the Interfering Amplitude (IA) method applied in the reanalysis of the $`\pi \pi `$ scattering phase shift satisfies elastic unitarity, and the VMW method applied in the analyses of the $`\pi \pi `$ production processes is consistent with the final state interaction theorem. In this talk I first collect the values of the observed properties of the $`\sigma `$ meson, its mass and width.
Then I show that this observed property of the $`\sigma `$ is consistent with that expected in the linear $`\sigma `$ model. Furthermore, I shall show, by investigating the background phase shift $`\delta _{BG}`$ in the framework of the linear $`\sigma `$ model, that the experimental behaviors of $`\delta _{BG}`$ in both the $`I=0`$ and $`I=2`$ systems are quantitatively describable theoretically.
## 2 Observed property of $`\sigma `$ meson
Reanalysis of $`\pi \pi `$ scattering phase shift. In the IA method<sup>2</sup> (<sup>2</sup> a detailed explanation of the IA method is given in I) the total phase shift $`\delta _0^0`$ is represented by the sum of the component phase shifts $`\delta _\sigma `$, $`\delta _{f_0(980)}`$ and $`\delta _{BG}`$. In the actual analysis $`\delta _{BG}`$ was taken phenomenologically to be of hard core type:
$`\delta _0^0`$ $`=`$ $`\delta _\sigma +\delta _{f_0(980)}+\delta _{BG};\delta _{BG}=-p_1r_c.`$ (1)
We analyzed the data of the “standard phase shift” $`\delta _0^0`$ between the $`\pi \pi `$- and K$`\overline{\mathrm{K}}`$-thresholds and also the data on upper and lower bounds reported so far (see ref. for details). The results of the analyses are given in Fig. 2 of I, and we concluded that the $`\sigma `$ meson exists with the property
$`m_\sigma `$ $`=`$ $`585\pm 20(535-675)\mathrm{MeV},\mathrm{\Gamma }_\sigma =385\pm 70\mathrm{MeV}.`$ (2)
As explained in I, the fit with $`r_c=0`$ corresponds to the conventional analyses without the repulsive $`\delta _{BG}`$. In the present analysis with $`\delta _{BG}`$ the greatly improved $`\chi ^2`$ value 23.6 is obtained for the standard $`\delta _0^0`$, compared with that of the conventional analysis, 163.4. A similar $`\chi ^2`$ improvement is also obtained for the upper and lower phase shifts.<sup>3</sup> (<sup>3</sup> For the upper phase shift the $`\chi ^2`$ in the present (conventional) analysis is $`\chi ^2/N_{d.o.f.}=32.3/(26-4)`$ ($`135.1/(26-3)`$). For the lower phase shift $`\chi ^2/N_{d.o.f.}=42.1/(17-4)`$ ($`111.7/(17-3)`$).) This fact strongly suggests the existence of the light $`\sigma `$ meson phenomenologically.<sup>4</sup> (<sup>4</sup> Concerning this $`\chi ^2`$ improvement Klempt gave a seemingly strange criticism in his summary talk of Hadron ’97: “However, the $`\chi ^2`$ gain comes from a better description of a small anomaly in the mass region around the $`\rho (770)`$ mass. … A small feedthrough from P-wave to S-wave can very well mimic this effect.” Actually the $`\chi ^2`$ contribution from this “anomalous” region 650 MeV through 810 MeV in the present (conventional) fit is 5.3 (62.6), and the $`\chi ^2`$ contribution from the outside region is 23.6-5.3=18.3 (163.4-62.6=100.8). Thus, the $`\chi ^2`$ improvement comes from a better description of the global phase motion below 1 GeV, showing that the criticism is not correct. Furthermore, we tried to fit the data without the relevant data points. The obtained values of the parameters are almost equal to the ones with the data of the full region, while a similar improvement of $`\chi ^2`$ is obtained.)
Analyses of $`\pi \pi `$ production processes. We also analyzed the data of $`\pi \pi `$ production processes, the pp central collision experiment by GAMS and the $`J/\psi \to \omega \pi \pi `$ decay reported by the DM2 collaboration, and showed possible evidence for the existence of the $`\sigma `$ particle. In the analyses we applied the VMW method, where the production amplitude is represented by a sum of the $`\sigma `$, $`f_0`$ and $`f_2`$ Breit-Wigner amplitudes with relative phase factors. For detailed analyses, see ref. . The obtained mass and width of the $`\sigma `$ are
$`m_\sigma `$ $`=`$ $`580\pm 30\mathrm{MeV},\mathrm{\Gamma }_\sigma =785\pm 40\mathrm{MeV}`$ for $`pp`$ central collision
$`m_\sigma `$ $`=`$ $`480\pm 5\mathrm{MeV},\mathrm{\Gamma }_\sigma =325\pm 10\mathrm{MeV}`$ for $`J/\psi \to \omega \pi \pi `$ decay. (3)
## 3 Property of $`\sigma `$-meson and chiral symmetry
Now the property of the $`\sigma `$ meson obtained above is checked from the viewpoint of chiral symmetry. In the $`SU(2)`$ linear $`\sigma `$ model (L$`\sigma `$M) the coupling constant $`g_{\sigma \pi \pi }`$ of the $`\sigma \pi \pi `$ interaction is related to $`\lambda `$ of the $`\varphi ^4`$ interaction and to $`m_\sigma `$ as
$`g_{\sigma \pi \pi }`$ $`=`$ $`f_\pi \lambda =(m_\sigma ^2-m_\pi ^2)/(2f_\pi ).`$ (4)
Thus, $`\mathrm{\Gamma }_\sigma `$ is related to $`m_\sigma `$ through the following equation:
$`\mathrm{\Gamma }_{\sigma \pi \pi }^{\mathrm{theor}}`$ $`=`$ $`{\displaystyle \frac{3g_{\sigma \pi \pi }^2}{4\pi m_\sigma ^2}}p_1\simeq {\displaystyle \frac{3m_\sigma ^3}{32\pi f_\pi ^2}}.`$ (5)
Substituting the experimental $`m_\sigma =535-675`$ MeV given in Eq. (2) and $`f_\pi =93`$ MeV into Eq. (5), we can predict $`\mathrm{\Gamma }_\sigma ^{\mathrm{theor}}=400-900\mathrm{MeV}`$, which is consistent with the $`\mathrm{\Gamma }_\sigma ^{\mathrm{exp}}`$ given in Eq. (2). Thus the observed $`\sigma `$ meson may be identified with the $`\sigma `$ meson described in the L$`\sigma `$M.
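This estimate is easy to reproduce numerically from Eq. (5) with the exact $`g_{\sigma \pi \pi }`$ of Eq. (4) and $`p_1=\sqrt{m_\sigma ^2/4-m_\pi ^2}`$ (a minimal sketch; the input masses are rounded values):

```
import numpy as np

f_pi, m_pi = 0.093, 0.140    # GeV

def gamma_sigma(m_s):
    g = (m_s**2 - m_pi**2) / (2 * f_pi)      # Eq. (4)
    p1 = np.sqrt(m_s**2 / 4 - m_pi**2)       # pion momentum in the sigma rest frame
    return 3 * g**2 * p1 / (4 * np.pi * m_s**2)

for m_s in (0.535, 0.585, 0.675):
    print(m_s, gamma_sigma(m_s))  # about 0.39, 0.54, 0.88 GeV, i.e. 400-900 MeV
```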
## 4 Repulsive background phase shift
Experimental phase shift and repulsive core in the $`I=2`$ system. In our phase shift analyses of the $`I=0`$ $`\pi \pi `$ system the $`\delta _{\mathrm{BG}}`$ of hard core type introduced phenomenologically played an essential role. In the $`I=2`$ $`\pi \pi `$ system there is no known / expected resonance, and accordingly it is expected that a phase shift of repulsive core type will appear directly. As shown in Fig. 1, the experimental data from the threshold to $`m_{\pi \pi }\simeq 1400`$ MeV of the $`I=2`$ $`\pi \pi `$-scattering $`S`$-wave phase shift $`\delta _0^2`$ are indeed negative, and are fitted well by the hard-core formula $`\delta _0^2=-r_c^{(2)}|𝐩_1|`$ with a core radius of $`r_c^{(2)}=0.87\mathrm{GeV}^{-1}`$ (0.17 fm).
Origin of the $`\delta _{BG}`$. The origin of this $`\delta _{\mathrm{BG}}`$ seems to have a close connection to the $`\lambda \varphi ^4`$-interaction in the L$`\sigma `$M: it represents a contact zero-range interaction, is strongly repulsive in both the $`I=0`$ and $`I=2`$ systems, and has plausible properties as the origin of a repulsive core.
The $`\pi \pi `$-scattering $`A(s,t,u)`$-amplitude in the $`SU(2)`$ L$`\sigma `$M is given by $`A(s,t,u)=(2g_{\sigma \pi \pi })^2/(m_\sigma ^2-s)-2\lambda .`$ Because of the relation (4), the dominant part of the amplitude due to virtual $`\sigma `$ production (1st term) is cancelled by that due to the repulsive $`\lambda \varphi ^4`$ interaction (2nd term) at the $`O(p^0)`$ level, and $`A(s,t,u)`$ is rewritten in the following form:
$`A(s,t,u)`$ $`=`$ $`{\displaystyle \frac{1}{f_\pi ^2}}{\displaystyle \frac{(m_\sigma ^2-m_\pi ^2)^2}{m_\sigma ^2-s}}-{\displaystyle \frac{m_\sigma ^2-m_\pi ^2}{f_\pi ^2}}={\displaystyle \frac{s-m_\pi ^2}{f_\pi ^2}}+{\displaystyle \frac{1}{f_\pi ^2}}{\displaystyle \frac{(m_\pi ^2-s)^2}{m_\sigma ^2-s}},`$ (6)
where on the last side the $`O(p^2)`$ Tomozawa-Weinberg amplitude and the $`O(p^4)`$ (and higher order) correction term are left. As a result the derivative coupling property of the $`\pi `$-meson as a Nambu-Goldstone boson is preserved. In this sense the $`\lambda \varphi ^4`$-interaction can be called a “compensating” interaction for the $`\sigma `$-effect.
Thus the strong cancellation between the positive $`\delta _\sigma `$ and the negative $`\delta _{\mathrm{BG}}`$ in our analysis leading to the $`\sigma `$, as shown in Fig. 2 of I, is reducible to the relation Eq.(6) in L$`\sigma `$M.
In the following we shall make a theoretical estimate of $`\delta _{BG}`$ in the framework of the L$`\sigma `$M. The scattering $`𝒯`$ matrix consists of a resonance part $`𝒯_R`$ and of a background part $`𝒯_{BG}`$. The $`𝒯_{BG}`$ corresponds to the contact $`\varphi ^4`$ interaction and to the exchange of the relevant resonances. The main term of $`𝒯_{BG}`$ comes from the $`\lambda \varphi ^4`$ interaction. This $`𝒯_{BG}`$ has a weak $`s`$-dependence in comparison with that of $`𝒯_R`$. The explicit forms of $`𝒯_{BG}`$ for the $`I=0`$ and $`I=2`$ $`S`$-wave channels are given by
$`𝒯_{BG;S}^I`$ $`=`$ $`-6\lambda a+2\left({\displaystyle \frac{(2g_{\sigma \pi \pi })^2}{4p_1^2}}\mathrm{ln}\left({\displaystyle \frac{4p_1^2}{m_\sigma ^2}}+1\right)-2\lambda \right)`$ (7)
$`+`$ $`b\,2g_\rho ^2\left(-1+{\displaystyle \frac{2s-4m_\pi ^2+m_\rho ^2}{4p_1^2}}\mathrm{ln}\left({\displaystyle \frac{4p_1^2}{m_\rho ^2}}+1\right)-{\displaystyle \frac{s+2p_1^2}{m_\rho ^2}}{\displaystyle \frac{\mathrm{\Lambda }^2+4m_\pi ^2}{\mathrm{\Lambda }^2+s}}\right),`$
where $`(a,b)=(1,2)`$ for $`I=0`$ and $`(a,b)=(0,1)`$ for $`I=2`$. Here we introduced the $`\rho `$ meson contribution, which is supposed to be described by the Schwinger-Weinberg Lagrangian<sup>5</sup> $`\mathcal{L}_\rho =g_\rho \rho _\mu (\partial _\mu \varphi \times \varphi )-g_\rho ^2/2m_\rho ^2(\varphi \times \partial _\mu \varphi )^2`$ (<sup>5</sup> the derivative $`\varphi ^4`$ interaction appearing in the Schwinger-Weinberg Lagrangian makes Eq. (8) divergent; thus we introduce a form factor with cut-off $`\mathrm{\Lambda }\simeq 1`$ GeV). In order to obtain $`\delta _{BG}`$ theoretically, we unitarize $`𝒯`$ by using the N/D method, $`𝒯_{BG}(s)^I=e^{i\delta _{BG}}\mathrm{sin}\delta _{BG}/\rho _1=N_{BG}^I/D_{BG}^I`$. We take the Born term Eq. (7) as the $`N`$-function. In obtaining the $`D`$-function one subtraction is necessary,
$$N_{BG}^I=𝒯_{BG;S}^I,\qquad D_{BG}^I=1+b_I+\frac{s}{\pi }\int _{4m_\pi ^2}^{\infty }\frac{ds^{\prime }}{s^{\prime }(s^{\prime }-s-i\epsilon )}\rho _1(s^{\prime })N_{BG}^I(s^{\prime }).$$
We adopt the subtraction condition<sup>6</sup> $`ReD_{BG}^I(m_\sigma ^2)=1`$ (<sup>6</sup> by using this condition the resulting $`\delta _{BG}`$ takes the same value as the one obtained by simple $`𝒦`$ matrix unitarization at the resonance energy $`\sqrt{s}=m_\sigma `$). The $`m_\sigma `$ is fixed at the value of the best fit, 0.585 GeV, and the values of $`m_\rho `$ and $`g_\rho `$ are determined from the experimental properties of the $`\rho `$ meson. The obtained $`\delta _{BG}^{I=0,2}`$ is shown in Fig. 2.
The $`s`$-dependence of the theoretical $`\delta _{BG}`$ by L$`\sigma `$M including $`\rho `$ meson contribution is almost consistent with the phenomenological $`\delta _{BG}`$ of hard core type.
Concerning our analysis of $`\delta ^{I=0}`$, Pennington made the criticism that the form of $`\delta _{BG}`$ is completely arbitrary. However, as shown in Fig. 2, our phenomenological $`\delta _{BG}`$, Eq. (1), is almost consistent with the theoretical prediction by the L$`\sigma `$M. Thus, the criticism is not valid.
## 5 Concluding remark
We have briefly summarized the properties of the light $`\sigma `$ meson “observed” in a series of our recent works. The obtained values of the mass and width of the $`\sigma `$ satisfy the relation predicted by the L$`\sigma `$M. This fact suggests that the linear representation of chiral symmetry is realized in nature.
In our phase shift analysis a strong cancellation occurs between $`\delta _\sigma `$, due to the $`\sigma `$ resonance, and $`\delta _{BG}`$, which is guaranteed by chiral symmetry. One reason for overlooking the $`\sigma `$ in the conventional phase shift analyses is the overlooking of this cancellation mechanism.
The behavior of the phenomenological $`\delta _{BG}`$ is shown to be quantitatively describable in the framework of the L$`\sigma `$M including the $`\rho `$ meson contribution.
Finally I give a comment: by the analysis of the $`I=1/2`$ $`S`$-wave $`K\pi `$ scattering phase shift with a similar method, the existence of the $`\kappa (900)`$ particle with a broad ($`\simeq 500`$ MeV) width is suggested. The scalars below 1 GeV, $`\sigma (600)`$, $`\kappa (900)`$, $`a_0(980)`$ and $`f_0(980)`$, possibly form a single scalar nonet. The octet members of this nonet satisfy the Gell-Mann-Okubo mass formula. Moreover, this $`\sigma `$ nonet is shown to satisfy the mass and width relations of the $`SU(3)`$ L$`\sigma `$M, forming with the pseudoscalar $`\pi `$ nonet a linear representation of chiral symmetry.
| no-problem/9905/astro-ph9905178.html | ar5iv | text |
# On “box” models of shock acceleration and electron synchrotron spectra
## 1 Introduction
The EGRET detection aboard the Compton Gamma Ray Observatory of at least two supernova remnants (Esposito 1996) and more than fifty active galactic nuclei (Thompson et al. 1995) has given strong evidence of particle acceleration in these objects. This evidence is strengthened even more by the detection of SN 1006 (Tanimori 1998) and the BL Lac objects Mkn 421 (Punch et al. 1992) and Mkn 501 (Quinn et al. 1996) by ground based Cherenkov detectors at TeV energies.
A particularly attractive mechanism for producing the required radiating high energy particles is the diffusive shock acceleration scheme, which has already been put forward to predict TeV radiation from supernova remnants (Drury, Aharonian & Völk 1994, Mastichiadis 1996) or explain the observed flaring behaviour in X-rays and TeV $`\gamma `$-rays from active galactic nuclei (Kirk, Rieger & Mastichiadis 1998). This scheme was originally proposed as the mechanism responsible for producing the nuclear cosmic ray component in shock waves associated with supernova remnants (Krymsky 1977, Axford 1981). Based on this picture many authors (Bogdan & Völk 1983, Moraal & Axford 1983, Lagage & Cesarsky 1983, Schlickeiser 1984, Völk & Biermann 1988, Ball & Kirk 1992, Protheroe & Stanev 1998) have used, under various guises, a simplified but physically intuitive treatment of shock acceleration, sometimes referred to as a “box” model.
In this paper we examine the underlying assumptions of the “box” model (§2) and we present an alternative, more physical version of it (§3). We then include synchrotron and inverse Compton losses as a means of spectral modification and we determine the conditions under which “pile-ups” can occur in shock accelerated spectra (§4). The “box” model can also be extended to include the nonlinear effect of the particle pressure on the background flow (§5).
## 2 The “box” model of diffusive shock acceleration
The main features of the “box” model, as presented in the literature (see references above) and exemplified by Protheroe and Stanev (1998), can be summarised as follows. The particles being accelerated (and thus “inside the box”) have differential energy spectrum $`N(E)`$ and are gaining energy at rate $`r_{\mathrm{acc}}E`$ but simultaneously escape from the acceleration box at rate $`r_{\mathrm{esc}}`$. Conservation of particles then requires
$$\frac{\partial N}{\partial t}+\frac{\partial }{\partial E}\left(r_{\mathrm{acc}}EN\right)=Q-r_{\mathrm{esc}}N$$
(1)
where $`Q(E)`$ is a source term combining advection of particles into the box and direct injection inside the box.
In essence this approach tries to reduce the entire acceleration physics to a “black box” characterised simply by just two rates, $`r_{\mathrm{esc}}`$ and $`r_{\mathrm{acc}}`$. These rates have of course to be taken from more detailed theories of shock acceleration (eg Drury 1991). A minor reformulation of the above equation into characteristic form,
$$\frac{\partial N}{\partial t}+r_{\mathrm{acc}}E\frac{\partial N}{\partial E}=Q-N\left(r_{\mathrm{esc}}+r_{\mathrm{acc}}+E\frac{\partial r_{\mathrm{acc}}}{\partial E}\right)$$
(2)
is useful in revealing the character of the description. This is equivalent to the ordinary differential equation,
$$\frac{dN}{dt}=Q-N\left(r_{\mathrm{esc}}+r_{\mathrm{acc}}+E\frac{\partial r_{\mathrm{acc}}}{\partial E}\right)$$
(3)
on the family of characteristic curves described by
$$\frac{dE}{dt}=r_{\mathrm{acc}}E$$
(4)
giving the formal solution,
$`N(E,t)={\displaystyle \int _0^t}Q(t^{\prime },E^{\prime })`$ (5)
$`\times \mathrm{exp}\left[-{\displaystyle \int _{t^{\prime }}^t}\left(r_{\mathrm{esc}}+r_{\mathrm{acc}}+E{\displaystyle \frac{\partial r_{\mathrm{acc}}}{\partial E}}\right)dt^{\prime \prime }\right]dt^{\prime }.`$
The number of particles at energy $`E`$ and time $`t`$ in the “box” is given simply by an exponentially weighted integral over the injection rate at earlier times and lower energies. Of particular interest is the steady solution at energies above those where injection is occurring, which is easily seen to be a power-law with exponent
$$\frac{\partial \mathrm{ln}N}{\partial \mathrm{ln}E}=-\left(1+\frac{r_{\mathrm{esc}}}{r_{\mathrm{acc}}}+\frac{\partial \mathrm{ln}r_{\mathrm{acc}}}{\partial \mathrm{ln}E}\right).$$
(6)
At first sight (to one familiar with shock acceleration theory) it appears odd that the exponent depends not just on the ratio of $`r_{\mathrm{esc}}`$ to $`r_{\mathrm{acc}}`$ but also on the energy dependence of $`r_{\mathrm{acc}}`$. However, as remarked by Protheroe and Stanev, the physically important quantity is not the spectrum of particles inside the fictitious acceleration “box” but the escaping flux of accelerated particles $`r_{\mathrm{esc}}N`$ and this is a power-law of exponent
$$\frac{\partial \mathrm{ln}(r_{\mathrm{esc}}N)}{\partial \mathrm{ln}E}=-\left(1+\frac{r_{\mathrm{esc}}}{r_{\mathrm{acc}}}+\frac{\partial \mathrm{ln}r_{\mathrm{acc}}}{\partial \mathrm{ln}E}-\frac{\partial \mathrm{ln}r_{\mathrm{esc}}}{\partial \mathrm{ln}E}\right).$$
(7)
Thus provided the ratio of $`r_{\mathrm{acc}}`$ to $`r_{\mathrm{esc}}`$ is fixed, the power-law exponent of the spectrum of accelerated particles escaping from the accelerator is determined only by this ratio whatever the energy dependence of the two rates.
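This steady power law is easy to verify by integrating equation (1) numerically; the sketch below (an upwind relaxation with constant rates and monoenergetic injection; all grid, rate and step choices are illustrative) recovers the expected exponent:

```
import numpy as np

r_acc, r_esc = 1.0, 1.0          # equal rates -> slope -(1 + r_esc/r_acc) = -2
E = np.logspace(0.0, 3.0, 400)   # energy grid, injection at E = 1
dlnE = np.log(E[1] / E[0])
dE = np.diff(E)
N = np.zeros_like(E)
Q = np.zeros_like(E)
Q[0] = 1.0

dt = 0.5 * dlnE / r_acc          # CFL-limited explicit step
for _ in range(20000):           # relax to the steady state
    F = r_acc * E * N            # flux upwards in energy
    div = np.empty_like(E)
    div[1:] = (F[1:] - F[:-1]) / dE
    div[0] = F[0] / (E[0] * dlnE)    # no flux enters the first cell
    N = N + dt * (Q - r_esc * N - div)

slope = np.polyfit(np.log(E[50:300]), np.log(N[50:300]), 1)[0]
print(slope)   # close to -2
```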
## 3 Physical interpretation of the box model
We prefer a very similar, but more physical, picture of shock acceleration which has the advantage of being more closely linked to the conventional theory. For this reason we also choose to work in terms of particle momentum $`p`$ and the distribution function $`f(p)`$ rather than $`E`$ and $`N(E)`$.
The fundamental assumption of diffusive shock acceleration theory is that the charged particles being accelerated are scattered by magnetic structures advected by the bulk plasma flow and that, at least to a first approximation, in a frame moving with these structures the scattering changes the direction of a particle’s motion, but not the magnitude of its velocity, energy or momentum. If we measure $`p`$, the magnitude of the particle’s momentum, in this frame, it is not changed by the scattering and the angular distribution is driven to being very close to isotropic. However if a particle crosses a shock front, where the bulk plasma velocity changes abruptly, then the reference frame used to measure $`p`$ changes and thus $`p`$ itself changes slightly. If we have an almost isotropic distribution $`f(p)`$ at the shock front where the frame velocity changes from $`𝐔_\mathrm{𝟏}`$ to $`𝐔_\mathrm{𝟐}`$, then it is easy to calculate that there is a flux of particles upwards in momentum associated with the shock crossings of
$`\mathrm{\Phi }(p,t)=\int p\,\frac{\mathbf{v}\cdot (\mathbf{U}_1-\mathbf{U}_2)}{v^2}\,p^2f(p,t)\,\mathbf{v}\cdot \mathbf{n}\,d\mathrm{\Omega }=\frac{4\pi p^3}{3}f(p,t)\,\mathbf{n}\cdot (\mathbf{U}_1-\mathbf{U}_2)`$ (8)
where $`𝐧`$ is the unit shock normal and the integration is over all directions of the velocity vector $`𝐯`$. Notice that this flux is localised in space at the shock front and is strictly positive for a compressive shock structure.
This spatially localised flux in momentum space is the essential mechanism of shock acceleration and in our description replaces the acceleration rate $`r_{\mathrm{acc}}`$. The other key element of course is the loss of particles from the shock by advection downstream. We note that the particles interacting with the shock are those located within about one diffusion length of the shock. Particles penetrate upstream a distance of order $`L_1=\mathbf{n}\cdot \mathbf{K}_1\cdot \mathbf{n}/\mathbf{n}\cdot \mathbf{U}_1`$ where $`\mathbf{K}`$ is the diffusion tensor, and the probability of a downstream particle returning to the shock decreases exponentially with a scale length of $`L_2=\mathbf{n}\cdot \mathbf{K}_2\cdot \mathbf{n}/\mathbf{n}\cdot \mathbf{U}_2`$. Thus in our picture we have an energy dependent acceleration region extending a distance $`L_1`$ upstream and $`L_2`$ downstream. The total size of the box is then $`L(p)\equiv L_1(p)+L_2(p)`$. Particles are swept out of this region by the downstream flow at a bulk velocity $`\mathbf{n}\cdot \mathbf{U}_2`$.
Conservation of particles then leads to the following approximate description of the acceleration,
$$\frac{\partial }{\partial t}\left[4\pi p^2fL\right]+\frac{\partial \mathrm{\Phi }}{\partial p}=Q-\mathbf{n}\cdot \mathbf{U}_2\,4\pi p^2f,$$
(9)
that is, the time rate of change of the number of particles involved in the acceleration at momentum $`p`$, plus the divergence in momentum of the acceleration flux, equals the source minus the flux carried out of the back of the region by the downstream flow. The main approximation here is the assumption that the same $`f(p,t)`$ can be used in all three terms where it occurs. In fact in the acceleration flux it is the local distribution at the shock front, in the total number it is a volume averaged value, and in the loss term it is the downstream distribution which matters. Diffusion theory shows that in the steady state all three are equal, but this need not be the case in more elaborate transport models (Kirk, Duffy & Gallant kirk96 (1996)).
Substituting for $`\mathrm{\Phi }`$ and simplifying we get the equation
$$L\frac{\partial f}{\partial t}+\mathbf{n}\cdot \mathbf{U}_1\,f(p)+\frac{1}{3}\,\mathbf{n}\cdot \left(\mathbf{U}_1-\mathbf{U}_2\right)\,p\frac{\partial f}{\partial p}=\frac{Q}{4\pi p^2}$$
(10)
which is our version of the “Box” equation. Note that this, as is readily seen, gives the well-known standard results for the steady-state spectrum and the acceleration time-scale. In fact our description is mathematically equivalent to that of Protheroe and Stanev, as is easily seen by noting that
$$r_{\mathrm{acc}}=\frac{\mathbf{n}\cdot (\mathbf{U}_1-\mathbf{U}_2)}{3L},\qquad r_{\mathrm{esc}}=\frac{\mathbf{n}\cdot \mathbf{U}_2}{L},\qquad N=4\pi p^2fL.$$
(11)
However our version has more physical content, in particular the two rates are derived and not inserted by hand. It is also important to note that in our picture the size of the “box” depends on the particle energy.
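As a quick consistency check (ours, not part of the original argument), substituting the identifications (11) into the steady, source-free limit of equation (2), which can be written $`\partial (r_{\mathrm{acc}}EN)/\partial E=-r_{\mathrm{esc}}N`$, and using $`E\propto p`$ for relativistic particles, gives

$$\frac{\partial }{\partial p}\left[\frac{\mathbf{n}\cdot (\mathbf{U}_1-\mathbf{U}_2)}{3L}\,p\,4\pi p^2fL\right]=-\frac{\mathbf{n}\cdot \mathbf{U}_2}{L}\,4\pi p^2fL\quad \Longrightarrow \quad \mathbf{n}\cdot \mathbf{U}_1\,f+\frac{1}{3}\,\mathbf{n}\cdot (\mathbf{U}_1-\mathbf{U}_2)\,p\frac{\partial f}{\partial p}=0,$$

i.e. exactly the steady, source-free form of equation (10), after expanding the derivative of $`p^3f`$ and collecting the $`\mathbf{n}\cdot \mathbf{U}_2\,f`$ terms.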
## 4 Inclusion of additional loss processes
In itself the “box” model would be of little interest beyond providing a simple “derivation” of the acceleration time scale. Its main interest is as a potential tool for investigating the effect of additional loss processes on shock acceleration spectra. One of the first such studies was that of Webb, Drury and Biermann (webb84 (1984)) where the important question of the effect of synchrotron losses was investigated (see also Bregman et al. bregman81 (1981)). An interesting question is whether or not a “pile-up” occurs in the accelerated particle spectrum at the energy where the synchrotron losses balance the acceleration. Webb, Drury and Biermann (webb84 (1984)) found that pile-ups only occurred if the spectrum in the absence of synchrotron losses (or equivalently at low energies where the synchrotron losses are insignificant) was harder than $`f\propto p^{-4}`$. However Protheroe and Stanev obtain pile-ups for spectra as soft as $`f\propto p^{-4.2}`$.
It is relatively straightforward to include losses of the synchrotron or inverse Compton type (Thomson regime) in the model. These generate a downward flux in momentum space, but one which is distributed throughout the acceleration region. Combined with the fact that the size of the “box” or region normally increases with energy, this also gives an additional loss process, because particles can now fall through the back of the “box” as well as being advected out of it (see Fig. 1). Note that particles which fall through the front of the box are advected back into the acceleration region, so that this loss process does not operate upstream.
If the loss rate is $`\dot{p}=-\alpha p^2`$, the basic equation becomes
$`\frac{\partial }{\partial t}\left[4\pi p^2fL\right]+\frac{\partial }{\partial p}\left[\mathrm{\Phi }-4\pi \alpha p^4f(p)L\right]=Q-U_2\,4\pi p^2f(p)-4\pi \alpha p^4f(p)\frac{dL_2}{dp}`$ (12)
This equation is easily generalised to the case of different loss rates upstream and downstream. Simplifying equation (12) gives
$`L\frac{\partial f}{\partial t}+p\frac{\partial f}{\partial p}\left[\frac{U_1-U_2}{3}-\alpha pL\right]+f\left[U_1-4\alpha pL-\alpha p^2\frac{dL_1}{dp}\right]=\frac{Q}{4\pi p^2}.`$ (13)
Note that for convenience we have dropped the explicit vector (and tensor) notation; all non-scalar quantities are to be interpreted as normal components, that is $`U_2`$ means $`𝐧𝐔_\mathrm{𝟐}`$ etc. Note also that our model differs from that of Protheroe and Stanev in that they do not allow for the extra loss process resulting from the energy dependence of the “box” size.
In the steady state and away from the source region this gives immediately the remarkably simple result for the logarithmic slope of the spectrum,
$$\frac{\partial \mathrm{ln}f}{\partial \mathrm{ln}p}=-3\,\frac{U_1-4\alpha pL-\alpha p^2{\displaystyle \frac{dL_1}{dp}}}{U_1-U_2-3\alpha pL}.$$
(14)
Note that at small values of $`p`$ we recover the standard result, that the power-law exponent is $`-3U_1/(U_1-U_2)`$.
Under normal circumstances both $`L_1`$ and $`L_2`$ are monotonically increasing functions of $`p`$. Thus both the numerator and denominator of the above expression, regarded as functions of $`p`$, have single zeroes at which they change sign. The denominator goes to zero at the critical momentum
$$p^{*}=\frac{U_1-U_2}{3\alpha L}$$
(15)
where the losses exactly balance the acceleration. If the numerator at this point is negative, the slope goes to $`-\infty `$ and there is no pile-up. However the slope goes to $`+\infty `$ and a pile-up occurs if
$$U_1-4U_2+3\alpha p^2\frac{dL_1}{dp}>0\quad \text{at}\quad p=p^{*}.$$
(16)
In the early analytic work of Webb et al. the diffusion coefficient was taken to be constant, so that $`dL_1/dp=0`$ and this condition reduces to $`U_1>4U_2`$, in agreement with their results. However if, as in the work of Protheroe and Stanev, the diffusion coefficient is an increasing function of energy or momentum, the condition becomes less restrictive. For a power-law dependence of the form $`K\propto p^\delta `$ the condition for a pile-up to occur reduces to
$$U_1-4U_2+\delta \left(U_1-U_2\right)\frac{L_1}{L_1+L_2}>0$$
(17)
(The equivalent criterion for the model used by Protheroe and Stanev is slightly different, namely
$$U_1-4U_2+\delta \left(U_1-U_2\right)>0$$
(18)
because of their neglect of the additional loss process.)
For the case where $`L_1/L_2=U_2/U_1`$ and with $`\delta =1`$ this condition predicts that shocks with compression ratios greater than about $`r=3.45`$ will produce pile-ups while weaker shocks will not. In Figures 1 and 2 we plot the particle spectra up to $`p^{*}`$ for a range of values of $`\delta `$ and with $`r=4`$ and $`r=3`$ respectively.
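The critical compression ratio quoted above follows directly from equation (17); a minimal numerical sketch (our own check, adopting the $`L_1/L_2=U_2/U_1`$ assumption of the text) is:

```python
from scipy.optimize import brentq

# Pile-up criterion, Eq.(17), divided through by U_2, with U_1 = r*U_2 and
# the assumption L_1/L_2 = U_2/U_1, so that L_1/(L_1+L_2) = 1/(1+r).
def criterion(r, delta):
    return r - 4.0 + delta * (r - 1.0) / (1.0 + r)

for delta in (0.5, 1.0, 2.0):
    r_crit = brentq(criterion, 1.0, 10.0, args=(delta,))
    print(f"delta={delta}: pile-ups require r > {r_crit:.3f}")
# For delta = 1 this gives r_crit = 1 + sqrt(6) = 3.449..., the r ~ 3.45
# quoted above; a stronger momentum dependence (larger delta) lowers the
# critical compression ratio, easing pile-up formation.
```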
Thus there is no contradiction between the (exact) results of Webb et al. and those of Protheroe and Stanev; the apparent differences can be attributed to the energy dependence of the diffusion coefficient. Indeed, looking at the results presented by Protheroe and Stanev, it is clear that the pile-ups they obtain are less pronounced for those cases with a weaker energy dependence.
## 5 Nonlinear effects
At the phenomenological and simplified level of the “box” models it is possible to allow for nonlinear effects by replacing the upstream velocity with an effective momentum-dependent velocity $`U_1(p)`$, reflecting the existence of an extended upstream shock precursor region sampled on different length scales by particles of different energies. Higher energy particles, with larger diffusion length scales, sample more of the shock transition and have larger effective values of $`U_1(p)`$; thus $`U_1(p)`$ must be a monotonically increasing function of $`p`$. Repeating the above analysis with a momentum-dependent $`U_1`$ the logarithmic slope of the spectrum is in this case
$$\frac{\partial \mathrm{ln}f}{\partial \mathrm{ln}p}=-3\,\frac{U_1-4\alpha pL+{\displaystyle \frac{p}{3}}{\displaystyle \frac{dU_1}{dp}}-\alpha p^2{\displaystyle \frac{dL_1}{dp}}}{U_1-U_2-3\alpha pL}$$
(19)
with a pile-up criterion of,
$$U_1(p)-4U_2-p\frac{dU_1}{dp}+3\alpha _1p^2\frac{dL_1}{dp}>0\quad \text{at}\quad p=p^{*}$$
(20)
We see that whether or not the nonlinear effects assist the formation of pile-ups depends critically on how fast they make the effective upstream velocity vary as a function of $`p`$. By making $`U_1(p^{*})`$ larger they make it easier for pile-ups to occur. On the other hand, if the variation is more rapid than $`U_1\propto p`$, the derivative term dominates and inhibits the formation of pile-ups.
In most cases the shock modification will be produced by the reaction of accelerated ions, and the electrons can be treated as test-particles with a prescribed $`U_1(p)`$. However in a pair plasma, or if one applies the “box” model to the ions themselves, the effective upstream velocity has to be related to the pressure of the accelerated particles in a self-consistent way. We require in the “box” model a condition which describes the reaction of the accelerated particles on the flow. Throughout the upstream precursor and in the steady case both the mass flux, $`A\equiv \rho U`$, and the momentum flux, $`AU+P_C`$, are conserved. Here $`P_C`$ is the pressure contained in energetic particles and the gas pressure is assumed to be negligible upstream. At a distance $`L_1(p)`$ upstream only particles with momenta greater than $`p`$ remain in the acceleration region. This suggests that in the “box” model the reaction of the particles on the flow is described by the momentum flux conservation law
$$AU_1(p)+\int_p^{p_{\mathrm{max}}}4\pi p^2f\,\frac{pv}{3}\,dp=\mathrm{constant}$$
(21)
where $`p_{\mathrm{max}}`$ is the highest momentum particle in the system and $`v`$ is the particle velocity corresponding to momentum $`p`$. Differentiating with respect to $`p`$ gives
$$A\frac{dU_1(p)}{dp}=4\pi p^2f(p)\frac{pv}{3}.$$
(22)
With no losses and for $`U_1(p)\gg U_2`$ we can now recover Malkov’s spectral universality result for strongly modified shocks (Malkov, malkov98 (1998)). In the limit of $`U_2=0`$ and $`\alpha =0`$ the conservation equation reduces to the requirement that the upward flux in momentum space be constant (equation (9)),
$$\mathrm{\Phi }=\frac{4\pi p^3}{3}f(p)U_1(p)=\mathrm{\Phi }_0.$$
(23)
When combined with equation (22) this gives
$$U_1\frac{dU_1}{dp}=\frac{\mathrm{\Phi }_0}{A}v=\frac{\mathrm{\Phi }_0}{A}\frac{dT}{dp}$$
(24)
where we have used the elementary result from relativistic kinematics that the particle velocity $`v`$ is the derivative of the kinetic energy $`T`$ with respect to momentum. Integrating for relativistic particles, $`T=pc`$, we get the fundamental self-similar asymptotic solution found by Malkov,
$$U_1=\sqrt{\frac{2c\mathrm{\Phi }_0}{A}}\,p^{1/2},\qquad f=\frac{3}{4\pi }\sqrt{\frac{\mathrm{\Phi }_0A}{2c}}\,p^{-3.5}.$$
(25)
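This asymptotic form is easy to verify by direct substitution; the following symbolic sketch (our own check; the variable names are arbitrary) confirms that equation (25) satisfies both the constant-flux condition (23) and the momentum balance (24) with $`v=c`$:

```python
import sympy as sp

p, A, c, Phi0 = sp.symbols('p A c Phi0', positive=True)
U1 = sp.sqrt(2*c*Phi0/A) * p**sp.Rational(1, 2)                      # Eq.(25)
f = sp.Rational(3, 4)/sp.pi * sp.sqrt(Phi0*A/(2*c)) * p**sp.Rational(-7, 2)

flux = 4*sp.pi*p**3/3 * f * U1            # Eq.(23): should reduce to Phi0
balance = A*U1*sp.diff(U1, p) - Phi0*c    # Eq.(24) with dT/dp = v = c
print(sp.simplify(flux - Phi0), sp.simplify(balance))   # both print 0
```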
If the electrons are test-particles in a shock strongly modified by proton acceleration, and if the Malkov scaling $`U_1\propto p^{1/2}`$ holds even approximately, then equation (20) predicts that a strong synchrotron pile-up appears inevitable.
It is perhaps worth remarking on some peculiarities of Malkov’s solution. Formally it has $`U_2=0`$: all the kinetic energy dissipated in the “shock” is used in generating the upwards flux in momentum space $`\mathrm{\Phi }`$ and there is no downstream advection. It is not clear that a stationary solution exists in this case. The problem is that as $`U_2\to 0`$, so $`L_2\to \infty `$ if a diffusion model is used for the downstream propagation. The solution appears to require some form of impenetrable reflecting barrier a finite distance downstream if it is to be realised in finite time. Also, although the accelerated particle spectrum at the shock is a universal power law, none of these particles escape from the shock region. From a distance the shock appears as an almost monoenergetic source at whatever maximum energy the particles reach before escaping from the system.
The case of a synchrotron limited shock in a pure pair plasma is also interesting. Here the upper cut-off is determined not by a free escape boundary condition but by the synchrotron losses. If most of the energy dissipated in the shock is radiated this way, the shock will be very compressive and the downstream velocity $`U_2`$ negligible compared to $`U_1`$. The same caveats about time scales apply as to Malkov’s solution, but again we can, at least as a gedanken experiment, consider a cold pair plasma hitting an impenetrable and immovable boundary. In this case, if there is a steady solution, the upward flux due to the acceleration must exactly balance the synchrotron losses at all energies. In general it appears impossible to satisfy both this condition and the momentum balance condition for $`p<p^{*}`$ unless the diffusion coefficient has an artificially strong momentum dependence. However a solution exists corresponding, in the box model, to a Dirac distribution at the critical momentum $`p^{*}`$. This steady population of high energy electrons has enough pressure to decelerate the incoming plasma to zero velocity and radiates away all the absorbed energy as synchrotron radiation. This extreme form of pile-up may be of interest as a means of very efficiently converting the bulk kinetic energy of a cold pair plasma into soft gamma-rays.
## 6 Conclusion
A major defect of all “box” models is the basic assumption that all particles gain and lose energy at exactly the same rate. It is clear physically that there are very large fluctuations in the amount of time particles spend in the upstream and downstream regions between shock crossings, and thus correspondingly large fluctuations in the amount of energy lost. The effect of these variations will be to smear out the artificially sharp pile-ups predicted by the simple “box” models. However our results are based simply on the scaling with energy of the various gain and loss processes together with the size of the acceleration region. Thus they should be relatively robust, and we expect that even if there is no sharp spike, the spectrum will show local enhancements over what it would have been in the absence of the synchrotron or IC losses in those cases where our criterion is satisfied.
## 7 Acknowledgments
This work was supported by the TMR programme of the EU under contract FMRX-CT98-0168. Part of the work was carried out while LD was Dozor Visiting Fellow at the Ben-Gurion University of the Negev; the warm hospitality of Prof M Mond and the stimulating atmosphere of the BGU is gratefully acknowledged.
# Effects of macroscopic polarization in III-V nitride multi-quantum-wells
## I Introduction
Spontaneous polarization has long been known to take place in ferroelectrics. On the other hand, its existence in semiconductors with sufficiently low crystal symmetry (wurtzite, at the very least) has been generally regarded as of purely theoretical interest. Recently, a series of first principles calculations has reopened this issue for the technologically relevant III-V nitride semiconductors, whose natural crystal structure is, in fact, wurtzite. Firstly, it was shown that the nitrides have a very large spontaneous polarization, as well as large piezoelectric coupling constants. Secondly, it was directly demonstrated how polarization actually manifests itself as electrostatic fields in nitride multilayers, due to the polarization charges resulting from polarization discontinuities at heterointerfaces. This charge-polarization relation, counterchecked in actual ab initio calculations, has been exploited to calculate dielectric constants.
While piezoelectricity-related properties are largely standard, spontaneous polarization is to some extent new in semiconductor physics, to the point that, so far, the practical importance of spontaneous polarization in III-V nitrides nanostructures (multi quantum wells, or MQWs, are the focus of this paper) has been largely overlooked. It is tantalizingly clear to us, however, that these concepts may lead to a direct and unambiguous measurement of the spontaneous polarization in semiconductors, to the recognition of its importance in nitride-based nanostructures, and, hopefully, to its exploitation in device applications. Our work has already spawned interpretations and purposely planned experiments in this direction; in the hope of accelerating the process, in this paper we show how to account for the effects of spontaneous polarization in MQWs, and discuss some prototypical cases and their possible experimental realization. To support our arguments, we present simulations of typical AlGaN/GaN MQWs where spontaneous and piezoelectric polarizations are about equal.
Among the consequences of macroscopic polarization which we will demonstrate in this paper let us mention the following: (a) the field caused by the fixed polarization charge, superimposed on the compositional confinement potential of the MQW, dramatically red-shifts the transition energies and strongly suppresses interband transitions as the well thickness increases; (b) the effects of thermal carrier screening are negligible in typical MQWs, although not in massive samples; (c) a quasi-flat-band MQW profile can be approximately recovered (i.e. polarization fields can be screened) only in the presence of very high free-carrier densities, appreciably larger than those typical of semiconductor laser structures; (d) even in the latter case, transition probabilities remain considerably smaller than the ideal flat-band values, and this reduces quantum efficiency; (e) once an appropriate screening density (i.e. the pumping power or injection current) has been chosen to ensure that the recombination rate is sufficient, a residual polarization field is typically still present: this provides a means to intentionally red-shift transition energies by changing well thicknesses, without changing composition; (f) the very existence of distinct and separately controllable spontaneous and piezoelectric polarization components makes it possible to choose a composition such that they cancel each other out, leading to flat-band conditions. Analogously, for a proper choice of superlattice composition, piezoelectric polarization can be made to vanish and hence a measure of spontaneous polarization can be accessed, through e.g. the changes in optical spectra. It is clear that a fuller understanding of these points will ultimately lead both to improvements in design and operation of real nitride devices, and to the direct measurement of polarization, and a better knowledge thereof, in nitride semiconductors.
Before moving on, let us mention other recent contributions in this area. Buongiorno Nardelli, Rapcewicz, and Bernholc, using non–self-consistent effective-mass based perturbation theory in the small field limit, have predicted red shifts and transition probability suppression in InGaN quantum wells; Della Sala et al., using self-consistent tight binding calculations, have applied some of the ideas reported in this paper to InGaN/GaN quantum well lasers, explaining several puzzling experimental features, among which the high thresholds observed for GaN-based lasers, and several other aspects related to self-consistent screening effects; Monte Carlo simulations by Oberhuber, Vogl, and Zandler, employing the polarization calculated in Ref., have revealed a polarization-enhanced carrier density in the conduction channel of AlGaN/GaN HEMTs. Takeuchi et al. have interpreted their own measurement of a quantum-confined Stark effect in InGaN/GaN superlattices as piezoelectricity-induced; this is, as it turns out, essentially correct in their specific case involving InGaN, but not in general. Analogous results have been reported in Refs. and , the latter including fairly detailed simulations, but only accounting for piezoelectricity. Finally, a detailed theoretical exposition of effective-mass theory adapted to deal with piezoelectric fields is given in Ref. , including useful notation and basic formulas, and some applications. Experimental work will be mentioned later; here let us just quote the very recent evidence that polarization-related effects on optical properties in selected AlGaN/GaN systems cannot be properly interpreted if spontaneous polarization is neglected; circumstantial evidence was obtained by Leroux et al., while a more carefully planned investigation, reaching firmer conclusions, has been carried out by Cingolani et al.
## II Piezoelectric fields
Piezoelectricity is a well-known concept in semiconductor physics. Binary compounds of strategic technological importance such as the III-V arsenides and phosphides can be forced to exhibit piezoelectric polarization fields by imposing upon them a strain field.
Among others, applications of piezoelectric effects in semiconductor nanotechnology exist in the area of multi quantum-well (MQW) devices. A thin semiconductor layer (active layer) is embedded in a semiconductor matrix (cladding layers) having a different lattice constant. If pseudomorphic growth occurs, the active layer will be strained and therefore subjected to a piezoelectric polarization field, which can be computed as
$`\mathbf{P}^{(\mathrm{pz})}=\overleftrightarrow{e}\,\overleftrightarrow{\epsilon }`$ (1)
if the strain field $`\overleftrightarrow{\epsilon }`$ and the piezoelectric constants tensor $`\overleftrightarrow{e}`$ are known.
In a finite system, the existence of a polarization field implies the presence of electric fields. For the piezoelectric case, the magnitude of the latter depends on strain, piezoelectric constants, and (crucially) on device geometry. The structure of a typical III-V nitride-based superlattice or MQW is –C–A–C–A–C–A–C– (A=active, C=cladding), where both the cladding layer and the active layer are in general strained to comply with the substrate in-plane lattice parameter. In such a structure, the electric fields in the A and C layers are
$`\mathbf{E}_\mathrm{A}^{(\mathrm{pz})}=4\pi \ell _\mathrm{C}\,(\mathbf{P}_\mathrm{C}^{(\mathrm{pz})}-\mathbf{P}_\mathrm{A}^{(\mathrm{pz})})/(\ell _\mathrm{C}\epsilon _\mathrm{A}+\ell _\mathrm{A}\epsilon _\mathrm{C})`$ (2)
$`\mathbf{E}_\mathrm{C}^{(\mathrm{pz})}=4\pi \ell _\mathrm{A}\,(\mathbf{P}_\mathrm{A}^{(\mathrm{pz})}-\mathbf{P}_\mathrm{C}^{(\mathrm{pz})})/(\ell _\mathrm{C}\epsilon _\mathrm{A}+\ell _\mathrm{A}\epsilon _\mathrm{C})`$ (3)
where $`\epsilon _{\mathrm{A},\mathrm{C}}`$ are the dielectric constants and $`\ell _{\mathrm{A},\mathrm{C}}`$ the thicknesses of layers A and C. Thus, in general, an electric field will be present whenever $`\mathbf{P}_\mathrm{A}\ne \mathbf{P}_\mathrm{C}`$. The above expressions are easily obtained by the conditions that the electric displacement be conserved along the growth axis, and by the boundary conditions that the potential energy on the far right and left of the MQW structure are the same.
There are essentially three special cases of MQW structures worth mentioning:
* active (cladding) layer lattice matched to the substrate: $`𝐏_\mathrm{A}=0`$ ($`𝐏_\mathrm{C}=0`$);
* $`\ell _\mathrm{A}=\ell _\mathrm{C}`$, whence $`\mathbf{E}_\mathrm{A}=-\mathbf{E}_\mathrm{C}`$;
* $`\ell _\mathrm{A}\ll \ell _\mathrm{C}`$, implying $`\mathbf{E}_\mathrm{C}\simeq 0`$, and hence
$`\mathbf{E}_\mathrm{A}^{(\mathrm{pz})}=-4\pi \mathbf{P}_\mathrm{A}^{(\mathrm{pz})}/\epsilon _\mathrm{A}.`$ (4)
In the last case we implicitly assumed the cladding layer to be unstrained – that is, its lattice constant to be relaxed to its equilibrium value because its thickness exceeds the critical value for pseudomorphic growth over the substrate. $`𝐏^{(\mathrm{pz})}`$ may take any direction in general, but in normal practice its direction is parallel to the growth axis.
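These limiting cases are easy to check numerically. The following minimal sketch of Eqs. (2)–(3) is our own illustration, with arbitrary input values (Gaussian units; polarizations, dielectric constants and thicknesses are placeholders):

```python
import numpy as np

# Fields in the A (active) and C (cladding) layers from Eqs. (2)-(3).
def mqw_fields(P_A, P_C, eps_A, eps_C, l_A, l_C):
    denom = l_C * eps_A + l_A * eps_C
    E_A = 4.0 * np.pi * l_C * (P_C - P_A) / denom
    E_C = 4.0 * np.pi * l_A * (P_A - P_C) / denom
    return E_A, E_C

# equal thicknesses -> opposite fields (second special case above):
E_A, E_C = mqw_fields(P_A=1.0, P_C=0.0, eps_A=10.0, eps_C=10.0, l_A=1.0, l_C=1.0)
assert np.isclose(E_A, -E_C)

# l_A << l_C with unstrained cladding -> E_C ~ 0 and E_A -> -4*pi*P_A/eps_A, Eq. (4):
E_A, E_C = mqw_fields(1.0, 0.0, 10.0, 10.0, l_A=1.0, l_C=1e6)
assert np.isclose(E_A, -4.0 * np.pi / 10.0, rtol=1e-3) and abs(E_C) < 1e-5
```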
To obtain piezoelectric polarization effects in zincblende semiconductor systems, lattice-mismatched epitaxial layers are purposely grown along a polar axis, e.g. (111); the in-plane strain propagates elastically onto the growth direction, thereby generating $`𝐏^{(\mathrm{pz})}`$. In wurtzite nitrides, the preferred growth direction is the polar (0001) \[or (000$`\overline{1}`$)\] axis, so that any non-accommodated in-plane mismatch automatically generates a piezoelectric polarization along the growth axis (the sign depends on whether the epitaxial strain is compressive or tensile). We will always assume this situation in the following.
Usually, alloys are employed in the fabrication of MQWs. In that case, one may estimate the piezoelectric polarization in the spirit of Vegard’s law as, for a general strain imposed upon e.g. an Al<sub>x</sub>Ga<sub>1-x</sub>N alloy,
$`\mathbf{P}^{(\mathrm{pz})}=\left[x\,\overleftrightarrow{e}_{\mathrm{AlN}}+(1-x)\,\overleftrightarrow{e}_{\mathrm{GaN}}\right]\overleftrightarrow{\epsilon }(x),`$ (5)
This expression contains terms linear as well as quadratic in $`x`$, and similar relations hold for quaternary alloys. This piezoelectric term is only present in pseudomorphic strained growth, and will typically tend to zero beyond the critical thickness at which strain relaxation sets in. Uncomfortable though it may be, the Vegard hypothesis is at this point in time the only way to account for piezoelectric (and spontaneous, see below) fields in alloys. As will be shown below, indeed, the qualitative picture does not depend so much on the detailed values of the polarization fields as on their order of magnitude.
## III Spontaneous fields in MQWs
New possibilities are opened by the use of III-V nitrides (AlN, GaN, InN), that naturally crystallize in the wurtzite structure. These materials are characterized by polarization properties that differ dramatically from those of the standard III-V compounds considered so far. From simple symmetry arguments, it can be shown that wurtzite semiconductors are characterized by a non-zero polarization in their equilibrium (unstrained) geometry, named spontaneous polarization (or, occasionally, pyroelectric, with reference to its change with temperature). While the spontaneous polarization of ferroelectrics can be measured via a hysteresis cycle, in a wurtzite this cannot be done, since no hysteresis can take place in that structure. Indeed, spontaneous polarization has never been measured directly in bulk wurtzites so far. III-V nitride MQWs offer the opportunity to reveal its existence and to actually measure it. In turn, spontaneous polarization can provide new degrees of freedom, in the form of permanent strain-independent built-in electrostatic fields, to tailor transport and optical characteristics of nitride nanostructures. Its presence can e.g. be exploited to cancel out the piezoelectric fields produced in typical strained nitride structures, as discussed below.
Thanks to recent advances in the modern theory of polarization (a unified approach based on the Berry’s phase concept), it has become possible to compute easily and accurately from first principles the values of the spontaneous polarization, besides piezoelectric and dielectric constants, in III-V nitrides. The results of the calculations show that III-V nitrides have important polarization-related properties that set them apart from standard zincblende III-V semiconductors:
* huge piezoelectric constants (much larger than, and opposite in sign to, those of all other III-V’s);
* existence of a spontaneous polarization of the same order of magnitude as in ferroelectrics.
The latter is, we think, a most relevant property. Spontaneous polarization implies that even in heterostructure systems where active and cladding layers are both lattice-matched to the substrate (so that no strain occurs, hence no piezoelectricity), an electric field will nevertheless exist due to spontaneous polarization. In addition, unlike piezoelectric polarization, spontaneous polarization has a fixed direction in the crystal: in wurtzites it is the (0001) axis, which is (as mentioned previously) the growth direction of choice for nitrides epitaxy. Therefore the field resulting from spontaneous polarization will point along the growth direction, and this (a) maximizes spontaneous polarization effects in these systems, and (b) renders the problem effectively one-dimensional. In the simplest case of a fully unstrained (substrate lattice-matched) MQW, the electric fields inside the layers are given, in analogy to Eq. 3, by
$`\mathbf{E}_\mathrm{A}^{(\mathrm{sp})}=4\pi \ell _\mathrm{C}\,(\mathbf{P}_\mathrm{C}^{(\mathrm{sp})}-\mathbf{P}_\mathrm{A}^{(\mathrm{sp})})/(\ell _\mathrm{C}\epsilon _\mathrm{A}+\ell _\mathrm{A}\epsilon _\mathrm{C})`$ (6)
$`\mathbf{E}_\mathrm{C}^{(\mathrm{sp})}=4\pi \ell _\mathrm{A}\,(\mathbf{P}_\mathrm{A}^{(\mathrm{sp})}-\mathbf{P}_\mathrm{C}^{(\mathrm{sp})})/(\ell _\mathrm{C}\epsilon _\mathrm{A}+\ell _\mathrm{A}\epsilon _\mathrm{C})`$ (7)
where the superscript $`(\mathrm{sp})`$ stands for spontaneous; typical spontaneous polarization values indicate that these fields are very large (up to several MV/cm).
In actual applications (for instance, to produce unstrained MQWs) alloys will have to be employed. The values of the spontaneous polarization are accurately known only for binary compounds. In the absence of better estimates, we assume as before that the spontaneous polarization in alloys can be estimated using a Vegard-like rule as (for, e.g., Al<sub>x</sub>In<sub>y</sub>Ga<sub>1-x-y</sub>N)
$$\mathbf{P}^{(\mathrm{sp})}(x,y)=x\,\mathbf{P}_{\mathrm{AlN}}^{(\mathrm{sp})}+y\,\mathbf{P}_{\mathrm{InN}}^{(\mathrm{sp})}+(1-x-y)\,\mathbf{P}_{\mathrm{GaN}}^{(\mathrm{sp})}.$$
In Figure 1 we report the resulting spontaneous polarization vs. lattice constant for the III-V nitrides, with data from Ref..
Figure 1 shows that for a given (substrate) lattice constant, a wide interval of spontaneous polarizations (hence of spontaneous fields, according to Eq. 7) is accessible by varying the alloy composition. In particular, consider a GaN/Al<sub>x</sub>In<sub>y</sub>Ga<sub>1-x-y</sub>N MQW, where the composition is chosen so that the alloy be lattice matched to GaN, which we assume to be also the substrate (or buffer) material (dashed-dotted line in Figure 1). Then, piezoelectric polarization vanishes, but spontaneous polarization remains, and takes on values up to $`\sim `$0.05 C/m<sup>2</sup>. For a GaN quantum well with thick AlGaN cladding layers, this means a theoretical electrostatic field of up to about 5 MV/cm.
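A minimal numerical sketch of this composition freedom (our own illustration; the binary spontaneous polarizations are the published ab initio values of Bernardini, Fiorentini and Vanderbilt, while the linear Vegard-like interpolation and the nominal lattice constants are the assumptions discussed above):

```python
# Binary spontaneous polarizations (C/m^2) and basal lattice constants (A);
# nominal values assumed for illustration.
P_SP = {'AlN': -0.081, 'GaN': -0.029, 'InN': -0.032}
A0 = {'AlN': 3.112, 'GaN': 3.189, 'InN': 3.544}

def p_sp(x, y):
    """Vegard-like spontaneous polarization of Al_x In_y Ga_{1-x-y} N."""
    return x * P_SP['AlN'] + y * P_SP['InN'] + (1 - x - y) * P_SP['GaN']

def y_lattice_matched(x):
    """In fraction y(x) keeping a(x, y) = a(GaN), so the piezoelectric term vanishes."""
    return x * (A0['GaN'] - A0['AlN']) / (A0['InN'] - A0['GaN'])

for x in (0.2, 0.4, 0.6, 0.8):
    y = y_lattice_matched(x)
    dP = p_sp(x, y) - P_SP['GaN']
    print(f"x={x:.1f}, y={y:.3f}: Delta P^(sp) vs GaN = {dP:+.3f} C/m^2")
# The polarization mismatch grows to roughly -0.04 C/m^2 at the Al-rich end
# of the lattice-matched line: a purely spontaneous field source, as quoted above.
```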
## IV Fields in the general case
In general, of course, MQWs will be strained. Then, for an arbitrary strain state, the electric fields in the A (or C) layers of the MQW are the sum of the piezoelectric and spontaneous contributions:
$$𝐄_{\mathrm{A},\mathrm{C}}=𝐄_{\mathrm{A},\mathrm{C}}^{(\mathrm{sp})}+𝐄_{\mathrm{A},\mathrm{C}}^{(\mathrm{pz})},$$
where $`𝐄^{(\mathrm{pz})}`$ is the old-fashioned piezoelectric field from Eq. 3, and $`𝐄^{(\mathrm{sp})}`$ is given by Eq. 7. It is important to stress that $`𝐄^{(\mathrm{sp})}`$ depends only on material composition and not on the strain state. Also, it is a key point to notice that although both polarization contributions lie along the same direction \[the (0001) axis\], $`𝐏^{(\mathrm{pz})}`$ may have (due to its strain dependence) the same or the opposite sign with respect to the fixed $`𝐏^{(\mathrm{sp})}`$ depending on the epitaxial relations.
It is difficult to give a simple picture of the electric field pattern in a general MQW system because of the many degrees of freedom involved. Here we consider an Al<sub>x</sub>Ga<sub>y</sub>In<sub>1-x-y</sub>N/GaN MQW pseudomorphically grown over a GaN substrate, having active and cladding layers of the same thickness. In such a case
$$\mathbf{E}_\mathrm{A}^{(\mathrm{sp})}+\mathbf{E}_\mathrm{A}^{(\mathrm{pz})}\equiv \mathbf{E}_\mathrm{A}=-\mathbf{E}_\mathrm{C}\equiv -(\mathbf{E}_\mathrm{C}^{(\mathrm{sp})}+\mathbf{E}_\mathrm{C}^{(\mathrm{pz})}).$$
Note again, at this point, that the fields (see Eqs. 3 and 7) are not related to just the polarization of the material composing the specific layer, but to a combination of polarization differences, dielectric screening, and geometrical factors. We now consider the field values in the active layer: the total field $`𝐄_\mathrm{A}`$ is shown in Figure 2 vs. Al and In molar fraction; the same is done for the piezoelectric component in Figure 3. In both cases the appropriate Vegard-like rules have been used.
Comparison of these Figures clearly bears out the importance of spontaneous polarization in determining the electric field. Several aspects are worth pointing out. First, large electric fields ($`\sim `$ 0.5–1 MV/cm) can be obtained already for modest Al and In concentrations. Second, it is easy to access compositions such that Al<sub>x</sub>In<sub>y</sub>Ga<sub>1-x-y</sub>N is lattice matched to GaN: thereby, no piezoelectric fields exist, but large, purely spontaneous fields still do; specifically, this situation is realized for compositions lying on the zero-piezoelectric-field line in Figure 3. On this locus of compositions, spontaneous polarization is the only source of field and it can therefore be measured via the changes it induces in the MQW spectra. Third, it is possible to choose the material composition in such a way that the active layers of a MQW are free of electric fields. To achieve this situation the MQW must be strained so that the piezoelectric and spontaneous polarizations cancel each other out; clearly, this is realized for compositions lying on the zero-field line in Figure 2. Of course the possibility of having a null field where desired is of capital importance in those devices where electric fields in the active layer cannot be tolerated (other field screening mechanisms are discussed below).
A noticeable feature of Figure 3 is that the piezoelectric component increases much faster with In content than with Al content, despite the larger piezoelectric constants of the latter. The reason is, of course, that strain builds up much more rapidly with In concentration. Along with the small difference in spontaneous polarization between InN and GaN, this is the reason why it is possible to interpret with reasonable accuracy polarization effects in InGaN/GaN structures on the basis of purely piezoelectric effects, as done in Refs.. On the other hand, it can be seen that the spontaneous component increases much more rapidly with Al content than with In content, due to the widely different polarizations of AlN and GaN. For AlGaN, piezoelectricity-based interpretations are bound to fail.
## V Effects of polarization fields
We now come to the implications of polarization fields for devices based on III-V nitrides. In this Section we present a set of accurate self-consistent tight-binding calculations for an isolated AlGaN/GaN QW representing a system in which the spontaneous-polarization contribution to the total built-in electrostatic field is as large as the piezoelectric term. To simulate realistically these nanostructures, self-consistency is needed to describe field screening by free carriers; the latter cannot physically cancel out the polarization charge, which is fixed and invariable, but may screen it out in part. In our calculations we therefore solve self-consistently the Poisson equation and the Schrödinger equation for a state-of-the-art empirical tight binding Hamiltonian for realistic nanostructures. In the following, two cases are considered: (a) non-equilibrium carrier distribution (Subsec. A and B) related to photoexcitation or injection, where electron and hole quasi-Fermi levels are calculated for a given areal charge density ($`n_{2D}`$) in the quantum well (the sheet density, related to the injection current or optical pumping power); (b) thermal equilibrium distribution (Subsec. C and D) where the Fermi level is calculated as a function of doping density by imposing charge neutrality conditions. We solve Poisson’s equation,
$$\frac{d}{dz}D=\frac{d}{dz}\left(-\epsilon \frac{d}{dz}V+P_T\right)=e\left(p-n\right),$$
(8)
where the (position-dependent) quantities $`D`$, $`\epsilon `$, and $`V`$ are respectively the displacement field, dielectric constant, and potential. $`P_T`$ is the (position-dependent) total transverse polarization. The effects of composition, polarization, and free carrier screening are thus included in full. \[Consistently with the aim of describing a single QW, we choose boundary conditions of zero field at the ends of the simulation region. This corresponds to the $`\ell _\mathrm{C}\to \infty `$ limit in Eqs. 3 and 7.\]
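As a toy illustration of how the fixed polarization charge produces a linear well profile, the following minimal electrostatics sketch (our own, without free carriers or the tight-binding step; the grid, units and the sheet charge $`\sigma `$ are arbitrary assumptions) solves the same Poisson problem with zero-field boundary conditions:

```python
import numpy as np

# 1D Poisson sketch: a well between thick barriers, with fixed polarization
# sheet charges of opposite sign at the two interfaces (Gaussian-like units,
# dielectric constant folded into sigma).
nz, dz = 400, 1.0
i_left, i_right = 150, 250              # interface positions (well of 100*dz)
sigma = 1.0                             # |Delta P| across each interface
rho = np.zeros(nz)
rho[i_left] += sigma / dz               # + polarization charge sheet
rho[i_right] -= sigma / dz              # - polarization charge sheet
E = 4.0 * np.pi * np.cumsum(rho) * dz   # dE/dz = 4*pi*rho, with E = 0 on the left
V = -np.cumsum(E) * dz                  # electrostatic potential
# E vanishes in the barriers and is uniform (= 4*pi*sigma) inside the well,
# so V is linear across it: the tilted-well profile found in the simulations below.
```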
The potential thus obtained is inserted in the Schrödinger equation, which is solved diagonalizing an empirical tight-binding $`sp^3d^5s^{*}`$ Hamiltonian. The procedure is iterated to self-consistency. Further applications and details on the technique can be found elsewhere.
Here we concentrate in particular on the polarization-induced quantum-confined Stark effect (QCSE) in zero external field, its control and quenching, and its evolution with layer thickness. We first deal with the low free-carrier density regime: there the QCSE manifests itself as a strong red shift of the interband transition energy, with a concurrent suppression of the transition probability, both of these features getting stronger as the well thickness increases. This is the regime that applies to low-power operation or photoluminescence experiments.
Next we discuss how the QCSE can be modified, and eventually (almost) quenched, by providing the QW with a sufficiently high free-carrier density. In this regime, as the free carrier density increases, the transition energy is progressively blue-shifted back towards its flat-band value, and the transition probability suppression is largely removed. The needed free-carrier density depends on the polarization field, and not surprisingly it is found to be very substantial. Typical values of the sheet density are in the 10<sup>13</sup> cm<sup>-2</sup> range, as opposed to typical values of 10<sup>12</sup> cm<sup>-2</sup> needed to obtain lasing in GaAs-like materials.
### A QCSE at low power
The prototypical system we consider is an isolated GaN quantum well cladded between Al<sub>x</sub>Ga<sub>1-x</sub>N barriers. In Figure 4, we display the total field E<sub>A</sub> in the (isolated) active well, and its piezoelectric component as a function of the Al molar fraction $`x`$. The spontaneous component is the difference of the two, and therefore approximately equal to the piezoelectric one.
The value we pick for our simulations is $`x`$=0.2, a reasonable compromise between the conflicting needs for not-too-large fields, sufficient confinement, and technologically achievable composition. In this case the valence offset is $`\mathrm{\Delta }E_v=0.064`$ eV. The total field in the QW is –2.26 MV/cm, and the spontaneous and piezoelectric components are –1.14 MV/cm and –1.12 MV/cm respectively. The minus sign indicates that the field points in the (000$`\overline{1}`$) direction. The bare polarization charge at the interface is proportional to the change in polarization across the interfaces, and it amounts to $`\sim `$1.28$`\times `$10<sup>13</sup> cm<sup>-2</sup>. The field value mentioned above results from this charge as screened by the dielectric response of the QW (the field change at the interface is thus related to a smaller, or screened, effective interface charge).
We performed a series of calculations for different well widths, where the electron and hole confined states have been populated (i.e. pairs have been created) with a density of $`\sim `$ 10<sup>11</sup> cm<sup>-2</sup> to simulate a low-power optical excitation. We find that this density has only a very marginal effect: indeed, the potential is perfectly linear, i.e. the electrostatic field remains uniform, over the whole QW. The square-to-triangular change in the potential shape causes a small blue shift of both the electron and hole confined states (referred to the flat well bottom), but the linear potential given by the field causes a much larger relative red shift for any reasonable thickness. Also, since the thermal carrier density fluctuations are negligible at microscopic thicknesses and room temperature (see below, and Ref.), one expects the QW band edge profile to remain linear as a function of thickness, at least for the low excitation powers typical of photoluminescence spectroscopy.
In Figure 5 we show the TB result for the lowest interband transition energy and the corresponding transition probability (i.e. the squared overlap of the highest hole level and the lowest electron level envelope wavefunctions) as a function of QW thickness. Both the Stark red shift and the strong suppression of the transition probability are evident. This was to be expected from the potential shape and the reduced overlap of hole and electron states, displayed in the inset of Figure 5.
It is worth noting that the localization of the hole envelope function in the well region is rather weak, because the large effective field blue-shifts the hole bound state energy close to the valence barrier edge. This will generally be the case for low-$`x`$ AlGaN wells, due to the small valence confinement energy. Indeed, even the conduction confinement is small on the scale of the field-induced potential drop, and the electron bound state also tends to have the character of a resonance for small $`x`$ (i.e. small confinement).
We conclude that in the absence of excitation and at normal operation temperatures, or at low optical excitation power, macroscopic polarization fields cause QW’s to be highly inefficient in emitting light, and the emission energy to be considerably different from the gap of the material plus the confinement energies.
Comparison with experiment is tricky since most attempts to measure these effects are polluted by inappropriate (at least for the purpose of revealing polarization effects) choices of the experimental geometry. For instance, measurements have been done in a series of quantum wells of different thicknesses ranging between 10 and 50 Å. In any case, the general experimental features are in full agreement with the notion that the transitions are red-shifted essentially linearly with increasing well thickness, and that screening at low free carrier densities is irrelevant in this class of systems. This is not quite true any more for thick layers, as will be discussed in Sec. V C.
### B QCSE quenching at high excitation power
If carriers are generated optically, one can envisage that a sufficiently high excitation power could possibly produce the carrier density needed to screen the polarization field. We now calculate the properties of the QW as a function of the free-carrier areal density, to check if the red shift and the transition probability suppression can be removed in a physically accessible range of such density.
We repeat the self-consistent procedure increasing progressively the free charge density in the QW from 10<sup>12</sup> up to 2 $`\times `$ 10<sup>13</sup> cm<sup>-2</sup>. We see in Figure 6 that, albeit at the cost of a large increase of the QW free-carrier density, the field does get progressively screened.
As can be seen in Figure 7, at fixed thickness the red shift decreases as the carrier density increases, and it tends to become thickness-independent at the highest densities. The transition probability is also increased by several orders of magnitude. However, the field is not screened abruptly but dies off gradually, with an effective screening length of about 20 Å for the largest density used here (of course, this is a token of the larger spatial extension of the screening charge as compared to the polarization charge): therefore, holes and electrons remain spatially separated to a large extent even at high carrier densities, and the transition probability never quite goes back to unity. This is likely to be one of the reasons for the relatively low quantum efficiency observed in typical nitride MQW devices. For the same reasons, the transition energy never quite goes back to the flat-band value (gap plus confinement energy). Note in passing that because of strain, in these calculations $`E_g^{\mathrm{GaN}}=3.71`$ eV, almost 10 % larger than the equilibrium value.
The screening density of order 2 $`\times 10^{13}`$ cm<sup>-2</sup> needed to partially screen out the field corresponds to an estimated optical pumping power of about 10 to 20 kW/cm<sup>2</sup> per well. This figure agrees nicely with the unusually high pumping powers needed to obtain the laser effect in nitride structures. The explanation is simply that much of the free charge being generated actually goes into screening the polarization field. On the other hand, our results prove that the optically activated lasing conditions can indeed be realized in practice, although with high pumping powers, so that there seems to be no need to invoke quantum dot formation or other exotic effects to explain lasing in nitride structures. At the same time, the same phenomenon explains the high current threshold observed for electrically driven GaN based lasers.
QCSE quenching phenomena similar to those just described have been observed by Takeuchi et al. in InGaN/GaN MQWs, with estimated fields in the 1 MV/cm range. The red shift and optical inefficiency were in fact removed, although only in part and in a transient fashion, by sufficiently high excitation powers. The order of magnitude of the values reported in Ref. is $`\sim `$200 kW/cm<sup>2</sup> for 5 to 10 MQW periods, i.e. 20 to 40 kW/cm<sup>2</sup> per well, in qualitative agreement with our estimate above.
One important remark at this point is that, depending on the excitation power, the MQW will absorb radiation at many different transition energies, ranging from that of the built-in-field–biased well (low power limit) to the quasi–flat-band well (high power limit) – that is, the MQW acts as a multistable switch. It is indeed fortunate that the typical fields in these structures are such that one can physically access the various possible regimes.
Another noticeable effect is that at a properly chosen value of the sheet density (i.e. of the excitation power) one can obtain at the same time a reasonable transition probability and a red-shifted energy by just increasing the well thickness. This is very useful since the transition wavelength can be shifted to a different color by changing not the alloy composition, but only the well thickness. For instance (see Figure 7), changing the well thickness from 20 to 30 Å at a sheet density of 4$`\times `$10<sup>12</sup> cm<sup>-2</sup>, one obtains an energy red shift of 0.1 eV at the cost of a loss of a factor 10 in recombination rate, which may still be acceptable depending on the application. Red-shifting the transition energy in this fashion may avoid the need to add e.g. some In in the QW composition. Of course, blue-shifting by thickness reduction will increase the transition probability.
### C Self-screening of fields in massive samples
Free charge produced by high excitation screens polarization fields fairly efficiently over the quite short distances typical in nanostructures, basically because the spatial extension of the screening charge is comparable to the size of the system. What about extended samples, especially if not subjected to illumination, i.e. having only intrinsic free carriers? It is indeed the case that no macroscopic fields exist in “infinitely large” samples even in the absence of high densities of (say) photogenerated carriers. The simple reason is that the intrinsic carrier fluctuations in an undoped semiconductor rise exponentially as a function of deviations of the chemical potential from the mid-gap value. In polarized nitrides, such deviations occur due to the built-in fields. As the sample thickness increases, the potential drop grows linearly. When the drop is smaller than the gap, the field is uniform: $`|𝐄|=4\pi 𝐏/\epsilon _0`$. When the drop approaches the gap value, i.e. for thicknesses approaching $`d_c=E_{\mathrm{gap}}/|𝐄|`$, the Fermi level nears the band edges: consequently, large amounts of holes and electrons are generated on the opposite sides of the sample. These intrinsic carriers screen partially the polarization charges, preventing the gap from closing. The total potential drop is thus pinned at the gap value for all thicknesses $`d>d_c`$ – that is, the effective gap decreases down to zero, but not below. For $`d>d_c`$, the field will decrease as
$$|𝐄|=E_{\mathrm{gap}}/d.$$
For this picture to hold, the spatial extension of the screening charge at the sample surface must be comparable with that of the polarization charge (a few Å at most) and much smaller than the sample size. This will cause the field inside the sample to remain uniform, since the net effect of screening will be to change the effective polarization charge. Indeed, this assumption turns out to be verified in practice on direct inspection, as we discuss below.
Clearly, the above mechanism will strongly influence QW’s of thicknesses equal to, or larger than, the critical value $`d_c`$. For the system we are considering here, with a built-in field of –2.26 MV/cm, the critical value is $`d_c\simeq `$ 165 Å. To confirm our picture, we simulated QW’s with the same composition and geometry considered in Subsec. B, and thicknesses below and above $`d_c`$, to mimic the crossover from a “microscopic” to a “macroscopic” sample. In this case, we need to describe very extended bulk regions on the left and right of the QW, in order to account for the large screening length. Thus, we have made use of a classical Thomas-Fermi model, where the charge densities are calculated with Fermi-Dirac statistics of a classical system rather than by solving the Schrödinger equation in the TB basis. This allows us to consider devices with a spatial extension of several hundreds of microns. Effective masses in this calculation were fitted to accurately reproduce the self-consistent TB results.
The resulting self-consistent potential is shown in Figure 8 for well thicknesses of 100, 200, 300, and 400 Å. A first point to note is that the field remains uniform for all well thicknesses. The field value equals the polarization field for the smallest thickness (smaller than $`d_c`$); for the thicker wells, the field (while remaining uniform) indeed decreases progressively as $`1/d`$.
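The clamping of the potential drop can be summarized in a two-line estimate (a sketch of our own; the numbers are those quoted in the text for the $`x`$=0.2 structure, with the strained-GaN gap of 3.71 eV):

```python
# Uniform field vs. well thickness in the self-screening regime:
# the field keeps its unscreened value below d_c and is clamped to
# E_gap/d above it, so the total drop never exceeds the gap.
E_pol = 2.26e6              # V/cm, unscreened polarization field (x = 0.2)
E_gap = 3.71                # eV, strained-GaN gap used in the simulations
d_c = E_gap / E_pol * 1e8   # in Angstrom: ~164, the d_c ~ 165 A of the text

def field_MV_cm(d_angstrom):
    if d_angstrom < d_c:
        return E_pol / 1e6
    return E_gap / (d_angstrom * 1e-8) / 1e6   # E_gap/d, converted to MV/cm

for d in (100, 200, 300, 400):
    print(f"d = {d} A: E = {field_MV_cm(d):.2f} MV/cm")
# -> 2.26, 1.86, 1.24, 0.93 MV/cm, reproducing the 1/d trend of Figure 8.
```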
Photoluminescence experiments are not expected to be able to reveal this effect (which should cause a saturation of the red shift as a function of thickness) in very thick QW’s, since the effective recombination rate rapidly becomes vanishingly small. Experiments aiming to reveal this effect should be designed considering our result, that a very thick layer is effectively subjected to a uniform electrostatic field $`E_{\mathrm{gap}}`$/$`d`$. In an unstrained GaN QW, for $`d>d_c`$ (the latter being typically of order 100-200 Å or so depending on the polarization) the field is $`\sim `$3.4 V/$`d`$, i.e. $`\sim `$70 kV/cm for $`d=`$0.5 $`\mu `$m. This is presumably sufficient to cause observable bulk-like effects such as shifts in response functions or field effects on impurities.
A similar “self-screening” behavior has been revealed indirectly in devices comprising sufficiently thick layers. In Ref. a 300 Å thick Al<sub>0.15</sub>Ga<sub>0.85</sub>N layer was grown on a very thick GaN substrate, and topped with a Schottky contact. The predicted field in the AlGaN layer is 1.4 MV/cm, which would cause a potential drop of 4.2 eV across the layer. The maximum reasonable potential drop dictated by Schottky barriers, conduction offset, and Fermi level is about 1 eV, so it must be the case that the polarization charge gets largely screened by electrons from the GaN layer, forming a high-density two-dimensional electron gas (2DEG) at the heterointerface; this, by the way, causes an enhanced conductivity in the active channel. CV depth profiling indeed reveals a 2DEG at the interface. An equivalent, more formal description is that the field would force the metal-determined Fermi level to some 3 eV above the conduction band of GaN, thus attracting towards the interface an enormous carrier density, which screens out (part of) the field. Note in passing that in Ref. only piezoelectric polarization was considered, which leads to an underestimation of the 2DEG density, since the piezoelectric contribution is actually about one third of the total interface charge. Similar considerations apply to other similar experiments. The recent device simulation of Ref. has corrected this point, including in part the spontaneous-polarization interface charges.
### D Suppressing QCSE by doping
We have seen in the previous Sections that polarization fields can be screened to a reasonable extent by generation of free charge of both kinds in the QW upon e.g. optical excitation. Qualitative problems with this screening mechanism are that (a) it is transient, since it disappears when photoexcitation or current injection are removed, and that (b) in purely electronic (i.e. non-optoelectronic) devices, it is unlikely that the high densities needed can be reached in normal operating conditions. Besides, the current is not constant in time, so that the well shape also changes in time.
It is natural to presume that the same effects can be achieved in a permanent fashion using extrinsic carriers from dopants. The idea is to provide the well with carriers which would screen the polarization charge, except that now the electrons are released into the QW from the doped barriers, and not injected or photogenerated. Of course, this effect is not transient as the others discussed previously. The problem is, how high must the doping density be to achieve the same level of screening as in a high optical excitation regime. We simulated a 50 Å thick Al<sub>0.2</sub>Ga<sub>0.8</sub>N/GaN single QW, where the barriers have been doped $`n`$-type in the range from 10<sup>17</sup> to 10<sup>20</sup> cm<sup>-3</sup> and the donor ionization energy has been set to 10 meV. In this simulation, we used again the self-consistent TB approach. The resulting conduction band profile is displayed in Figure 9 for the various doping densities.
The polarization field raises the conduction band on the left side over the Fermi energy, and in order for the barrier conduction band to reach the Fermi level on the far left, electrons are transferred from the left-side barrier into the QW, leaving behind a large depletion layer. As a consequence of the electron flow into the QW, the polarization field starts to get significantly screened at doping densities above $`\sim `$ 10<sup>19</sup> cm<sup>-3</sup>. The existence of a depletion layer causes a large band bending in the left-side barrier, while the bending is absent in the left half of the well. This is quite different from the case of the photoexcited well, where the bending on the left side of the well was due to hole accumulation near the interface (see Figure 6). This explains why the field remains nearly uniform in the left half of the well for all of the simulations performed. On the right side of the well, only a small bending due to the electron accumulation is present. Indeed electron localization is quite weak in these systems, since the confinement potential is small as compared to the field-induced drop, and electrons tend to spill over to the right-side barrier. This is likely to be the case in all nitride systems in this composition range.
From these results, we conclude that doping can indeed be used to screen polarization fields. While it is not obvious that the needed doping level can always be reached in practice, it is likely that a combination of doping and current injection or photoexcitation will generally succeed in quenching polarization fields in the range of MV/cm, thus allowing for recovery of quasi flat band conditions. Fields in InGaN/GaN systems will generally be smaller than those in AlGaN/GaN systems for typical compositions in use today, and will therefore be more easily amenable to treatment by the above technique. This procedure has in fact been adopted in experiment by Nakamura’s group, which reported that a doping level of 10<sup>19</sup> cm<sup>-3</sup> is sufficient to quench the QCSE to a large extent. Indeed, in their In<sub>0.15</sub>Ga<sub>0.85</sub>N/GaN MQWs the unscreened field is $`\sim `$1.2 MV/cm, i.e. approximately a half of the one we considered here. This field will be more easily screened by remote doping, in qualitative agreement with our findings.
## VI Summary and acknowledgements
In conclusion, we have discussed how macroscopic (and in particular, spontaneous) polarization plays an important role in nitride-based MQWs by producing large built-in electric fields. Contrary to zincblende semiconductors, in III-V nitride–based devices the spontaneous polarization is an unavoidable source of large electric fields even in lattice-matched (unstrained) systems. The existence of these fields may also be used as an additional degree of freedom in device design: for instance, for an appropriate choice of alloy composition, spontaneous and piezoelectric fields may be made to cancel out, thus freeing the structures from built-in fields. We have also discussed the different regimes of free carrier screening, effected by doping or optical excitation, showing that fields can be screened only in the presence of high free carrier densities, which leads to unusually high lasing thresholds for undoped QWs. Of course, our results on the effects on the electronic structure apply qualitatively to any kind of polarization field, and thus in particular also to piezo-generated ones.
VF and FB acknowledge special support from the PAISS program of INFM. VF’s stay at the Walter Schottky Institut was supported by the Alexander von Humboldt-Stiftung. FDS, ADC and PL acknowledge support from Network Ultrafast and 40% MURST.
# Broad Absorption Line Quasars and the Radio–Loud/Radio–Quiet Dichotomy
## 1 Introduction
In the framework of the standard accreting black hole paradigm, unified models have been remarkably successful in explaining the apparently disparate sub–classes of AGN as simply different facets of what are otherwise more or less fundamentally identical systems (Antonucci (1993); Urry & Padovani (1995)). Despite this success, however, a single, orientation–based scheme cannot explain the bimodality in the radio luminosity distribution (Kellermann et al. (1989); Miller, Rawlings & Saunders (1993)), so that a true, ‘grand unification’ scheme for all AGN still remains elusive. One clue to understanding the RL/RQ dichotomy and unifying AGN is the possibility that RQQs are capable of producing radio jets, albeit much weaker, smaller–scaled versions of the powerful, highly–collimated jets that are characteristic of RLQs. Some observational evidence supporting this possibility has emerged: high–resolution imaging of RQQs has revealed that the radio emission of at least some of these objects originates within a compact, nonthermal source directly associated with a central engine which appears qualitatively similar to those in RLQs (Blundell & Beasley 1998a; Kukula et al. (1998)); a correlation between radio and \[OIII\]$`\lambda 5007`$ luminosities, indicative of the presence of jets, has been measured not only in Seyferts (Whittle (1985)), but also in some quasars (Miller et al. 1993); and finally, a significant number of radio–intermediate quasars (RIQs) have been found, possessing radio emission which is unusually high for RQQs (but still below that of RLQs) and which has been attributed to weak beaming (Miller et al. 1993, Falcke, Sherwood & Patnaik (1996)), although this still remains unclear. But perhaps the most compelling evidence is the recent discovery of apparent superluminal motion in an RQQ (Blundell & Beasley 1998b). This is the first direct evidence for a fundamental similarity in the origin of radio emission in RLQs and RQQs.
Another observational clue to understanding the RL/RQ dichotomy may be provided by BALQs. These sources, which comprise 10–15% of optically–selected quasars, are characterized by (rest–frame) UV spectra with extremely broad (up to 30,000 km s<sup>-1</sup>) and often deep absorption lines that are blueshifted with respect to their corresponding emission lines (chiefly resonance lines due to highly–ionized species such as CIV, SiIV and NV). According to the standard model (Weymann et al. (1991) – and see Weymann (1997) for a recent review), these absorption troughs are formed in a narrow, quasi–equatorial outflow of numerous, dense ($`10^6`$–$`10^9`$ cm<sup>-3</sup>) cloudlets accelerated by line radiation pressure across a region $`\sim 1`$ pc in extent.
Although BALs have only been detected in RQQs (e.g. Weymann et al. (1991); Stocke et al. (1992)), there are two intriguing findings which have yet to be explained: the statistically significant overabundance of BALQs amongst RIQs (Stocke et al. (1992); Francis, Hooper & Impey (1993)); and the recent discovery of ‘weak’ BALs in a handful of RLQs identified by the FIRST survey (Brotherton et al. (1998)). The BALs in these RLQs are ‘weak’ in the sense that the profile widths of the absorption troughs (several thousand km s<sup>-1</sup>) are noticeably narrower than those typically measured in BALQs and similarly, the ‘balnicities’<sup>1</sup><sup>1</sup>1The balnicity of an absorption feature is an index defined by Weymann et al. (1991) which depends on the width of the absorption feature as well as its position with respect to the corresponding emission line and which thereby measures the likelihood that the feature is a true BAL rather than due to intervening or associated absorption systems. are much lower. Thus, as pointed out by Weymann (1997), there is an anticorrelation between the terminal velocity of thermal gas ejected from quasar nuclei and the radio power of the quasar. Note, however, that none of the RL BALQs appear to be powerful radio sources; all have a ratio, $`R^{\ast }`$, of radio-to-optical flux (K–corrected) satisfying $`1\lesssim \mathrm{log}R^{\ast }\lesssim 2.5`$, while the maximum flux density measured is no higher than 30 mJy at 20 cm. Nevertheless, even if these sources are found to more closely resemble RLQs with ‘associated absorbers’ (Foltz et al. (1988)), they can still provide important clues to the BAL phenomenon since there is evidence that at least some of the associated absorbers seen in RLQs (mainly steep–spectrum sources) are closely connected with the nucleus and may perhaps represent the low velocity end of intrinsic absorption outflows in quasars (e.g. Barlow & Sargent (1997); Aldcroft, Bechtold & Foltz (1997)).
Similarly, narrow UV absorption features have also been detected in some Seyfert 1s and these have been interpreted as a low luminosity version of the BAL phenomenon in quasars. Moreover, some of these Seyferts (and a few quasars as well) exhibit ‘warm absorber’ X–ray signatures and there is growing evidence that the outflowing, absorbing material responsible for the BAL–like features is also responsible for the X–ray features (Crenshaw et al. (1998); Mathur, Wilkes & Elvis (1998); Gallagher et al. (1999) and references therein). It is also interesting to note that some of the nearby AGN exhibiting nuclear absorption outflows (e.g. NGC 3516, NGC 4151, NGC 5548, Mrk 231) also exhibit the linear and extended radio structures that are often detected in Seyferts and that are believed to result from the outflow of radio plasma along axes determined by the dust torus obscuring the active nucleus (e.g. Baum et al. (1993)). Indeed, such extended radio structures in Seyferts are generally interpreted as small–scale, low–power versions of the large–scale, powerful jets and lobes seen in radio galaxies and quasars (e.g. Ulvestad & Wilson (1989)).
Although these observations suggest that nuclear absorption outflows comprise a dynamically important mass–loss component in AGN that can span a wide range of parameter space, they are difficult to interpret in the context of the standard model for the BAL phenomenon in RQQs. In this paper, the BAL phenomenon is examined in the context of a broader model in which nuclear absorption outflows are associated with poorly–collimated, weak radio jets. This model provides a framework not only for examining the connection between BALs in RQQs and the weaker absorption features in stronger radio sources (the RL BALQ candidates) and in lower luminosity counterparts (Seyferts), but also for testing the hypothesis that all AGN possess jet–producing central engines and that jets with intrinsically different physical properties (e.g. radio power, bulk speeds) are at least partly responsible for the observed RL/RQ dichotomy. The model is qualitatively outlined in § 2 and is then used to obtain observational and theoretical constraints on the relevant physical properties of nuclear absorption outflows in § 3 and § 4, respectively, and the main results are summarized in § 5.
## 2 A Weak Jet Model
In the model constructed here, a weak jet is defined as a poorly–collimated outflow of radio–emitting plasma moving at a low bulk speed, with a corresponding bulk Lorentz factor $`\mathrm{\Gamma }\lesssim `$ a few. Such an outflow may not necessarily satisfy the traditional criterion for the formal definition of ‘jet’ (length-to-width ratio $`\gtrsim 4`$; Bridle & Perley (1984)); nevertheless, it is useful to apply the jet description in order to determine the extent to which the RL/RQ dichotomy can be attributed to differences in jet properties, including the relative quantities of nonthermal and thermal plasma. Although the conditions under which jets form still remain poorly understood, recent theoretical results (Begelman (1998)) indicate that a lack of self–collimation may be due to the absence of a relatively strong, stabilizing poloidal magnetic field component. Poorly–collimated jets would lack a Doppler–boosted radio core and would therefore be observed as much weaker radio sources than highly–collimated jets. Thus, jet collimation plays an important role in the observed bimodality in the radio power distribution.
While it is physically plausible that all jets contain some thermal matter in the form of dense clumps (Begelman, Blandford & Rees (1984)), exactly how much is present, relative to the tenuous, synchrotron–emitting plasma, remains uncertain. Recently, Celotti et al. (1998) placed some constraints on the amount of comoving, thermal gas that could exist in the form of cool, dense clouds embedded within the powerful, highly–beamed relativistic jets in RLQs and BL Lacs. While it was concluded that such material could be present only in quantities so energetically insignificant as to preclude observational detection, this need not be the case for thermal gas in mildly–relativistic and sub–relativistic jets in much less powerful radio sources. Indeed, it is argued here that the observational evidence for such jets is precisely the BAL phenomenon.
As discussed further in § 3, the distribution of relativistic particles in such jets can be expected to extend down to thermal energies, so that the mean Lorentz factor, $`\gamma `$, of emitting electrons is much lower than it is in more powerful jet sources. This in turn is reasonable to expect if in situ acceleration of particles to nonthermal energies (on sub–pc scales) is also less efficient. Under such conditions, a continuous supply of fresh particles is required to replenish the ‘dead’ particles that have radiatively cooled and to thereby maintain a constant radio flux. As shown by Ghisellini, Haardt & Svensson (1998), electrons at the low end of their energy distribution can effectively thermalize via cyclo–synchrotron self–absorption before escaping the source region on sub–pc scales. On these scales, the resulting quasi–thermal electrons can then cool via inverse Compton scattering; the ratio of the cooling timescale to the escape timescale is $`\sim r_{0.1\mathrm{pc}}v_{0.1c}L_{45}^{-1}`$ (where $`r=0.1r_{0.1\mathrm{pc}}`$ pc is the source size, $`v=0.1v_{0.1c}c`$ is the bulk flow speed and $`L=10^{45}L_{45}`$ erg.s<sup>-1</sup> is the luminosity). The thermal gas can then cool further to temperatures well below the local Compton temperature ($`\sim 10^7`$ K) provided the gas density is sufficiently high for bremsstrahlung to become more efficient than inverse Compton cooling; the required densities are $`\gtrsim 10^4L_{45}r_{0.1\mathrm{pc}}^{-2}T_7^{-1/2}`$ cm<sup>-3</sup>. As will be shown in § 4.2, this lower limit corresponds to the critical density discriminating between the nonthermal and thermal gas phases in an inhomogeneous jet. Thus, the local accumulations of condensed, thermal gas which arise as a result of rapid cooling and poor re–acceleration could be identified as the progenitors of the absorbing cloudlets which emerge on $`\sim `$pc scales. If this is indeed the case, then it provides a natural explanation for why the BAL phenomenon becomes increasingly rarer in more powerful radio sources, where presumably in situ particle acceleration is more efficient (see e.g. Weymann, Turnshek & Christiansen (1985) for other possible origins, including entrainment).
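These order-of-magnitude statements are easy to script. The sketch below simply evaluates the two scalings just quoted in their fiducial units; note that the exponents on $`r`$ and $`T`$ in the critical density are as reconstructed above, so the code inherits that assumption.

```python
# Scalings quoted in the text, in fiducial units (r in 0.1 pc, v in 0.1c,
# L in 1e45 erg/s, T in 1e7 K); prefactors as given in the text.
def cool_to_escape(r_01pc=1.0, v_01c=1.0, L45=1.0):
    # ratio of inverse-Compton cooling time to escape time, ~ r v / L
    return r_01pc * v_01c / L45

def n_crit(L45=1.0, r_01pc=1.0, T7=1.0):
    # density (cm^-3) above which bremsstrahlung beats Compton cooling
    return 1e4 * L45 * r_01pc**-2 * T7**-0.5

print(f"t_cool/t_esc ~ {cool_to_escape():.1f}")
print(f"n_crit ~ {n_crit():.1e} cm^-3")
```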
Since velocities measured from the observed blueshifted troughs in BALQs are typically no more than $`0.2c`$, the dense cloudlets could either be moving slower than the bulk velocity in a mildly–relativistic jet, or they could be comoving with the radio–emitting plasma in a sub–relativistic jet. If such cloudlets comprise a significant kinetic energy flux component in a mildly–relativistic jet and are moving at sub–jet speeds with a velocity $`v_{\mathrm{BAL}}`$, then the total energy flux is given by (e.g. Bridle & Perley (1984))
$$\frac{L_{\mathrm{jet}}}{r^2\mathrm{\Omega }}\simeq \left[(\mathrm{\Gamma }-1)\rho _{\mathrm{jet}}c^2+\mathrm{\Gamma }\frac{B_{\mathrm{jet}}^2}{8\pi }\right]\mathrm{\Gamma }\beta _{\mathrm{jet}}c+\frac{1}{2}ϵ\rho _{\mathrm{cld}}v_{\mathrm{BAL}}^3,$$
(1)
where $`\mathrm{\Omega }\simeq 2\pi \varphi ^2`$ is the total solid angle subtended by jets on either side of the nucleus ($`\varphi `$ is the jet opening angle), $`\mathrm{\Gamma }\beta _{\mathrm{jet}}c`$ is the jet speed, $`\rho _{\mathrm{jet}}`$ and $`B_{\mathrm{jet}}`$ are the comoving jet mass density (assumed to be dominated by ‘cold’ protons) and magnetic field, and $`\rho _{\mathrm{cld}}`$ is the mass density of the clouds, which fill a fraction $`ϵ`$ ($`\ll 1`$) of the jet volume (the tenuous radio–emitting plasma is assumed to pervade the bulk of the jet volume). The relativistic gas pressure is assumed to make a negligible contribution to the total jet energy flux; the limits on the energy density of synchrotron emitting electrons calculated in the next section confirm that this is a valid assumption. If, on the other hand, the clouds and the radio–emitting plasma (plus magnetic fields) are comoving in a sub–relativistic jet (i.e. with a velocity $`v_{\mathrm{jet}}=v_{\mathrm{BAL}}\lesssim 0.2c`$), then the total jet energy flux simplifies to
$$\frac{L_{\mathrm{jet}}}{r^2\mathrm{\Omega }}\simeq \left(\frac{1}{2}\rho _{\mathrm{jet}}v_{\mathrm{jet}}^2+\frac{B_{\mathrm{jet}}^2}{8\pi }\right)v_{\mathrm{jet}}$$
(2)
where $`\rho _{\mathrm{jet}}\rightarrow \rho _{\mathrm{jet}}+ϵ\rho _{\mathrm{cld}}`$ is now the average comoving mass density of the jet.
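For orientation, eq. (1) is straightforward to evaluate numerically. The sketch below (cgs units) uses purely illustrative input values for the jet and cloud parameters; none of them are fits to data.

```python
import numpy as np

m_p, c, pc = 1.6726e-24, 2.9979e10, 3.086e18   # cgs constants

def L_jet(r_pc, Omega, Gamma, n_jet, B_jet, eps_n_cld, v_BAL):
    """Total energy flux of eq. (1); densities in cm^-3, B in Gauss."""
    r = r_pc * pc
    beta = np.sqrt(1.0 - Gamma**-2)
    plasma = ((Gamma - 1.0) * n_jet * m_p * c**2
              + Gamma * B_jet**2 / (8 * np.pi)) * Gamma * beta * c
    clouds = 0.5 * eps_n_cld * m_p * v_BAL**3   # eps*rho_cld lumped together
    return r**2 * Omega * (plasma + clouds)

# e.g. Gamma = 1.15, n_jet ~ 1e2 cm^-3, B ~ 0.5 G, eps*n_cld ~ 1e3 cm^-3
print(f"L_jet ~ {L_jet(0.1, 4*np.pi, 1.15, 1e2, 0.5, 1e3, 0.1*c):.1e} erg/s")
```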
## 3 Observational Constraints
### 3.1 Covering Factor
The widely adopted, standard model for BALQs is chiefly founded upon the observational study by Weymann et al. (1991), who found no statistically significant differences between the spectral properties of BALQs and non-BALQs in a sub–sample taken from the LBQS, thus indicating that BALQs do not form an intrinsically different class of objects from non-BALQs. When combined with the constraint from scattering models that the BALR cannot completely occult the continuum source (Junkkarinen (1983) – see also Hamann, Korista & Morris (1993)), this result led to the suggestion that all RQQs possess a BALR with a global covering factor that can be identified with the incidence rate of BALQs amongst an optically–selected sample, typically $`0.1`$–$`0.15`$ (although the ‘true’ incidence rate could be as high as $`30\%`$ if attenuation is taken into account – see e.g. Schmidt & Hines (1999) and references therein). It was then further suggested that a physically plausible distribution for absorbing cloudlets would be a quasi–equatorial geometry, possibly skimming the edge of an obscuring torus, which would provide a natural source of material for the cloudlets.
Note that a weak jet model may not necessarily be compatible with this covering factor interpretation of the BAL incidence rate amongst optically–bright quasars. A poorly–collimated jet could, in principle, freely expand to fill the biconical regions interior to the dusty torus, so that the half–opening angle of the ‘jet’ could be as wide as $`60^{\circ }`$. Since optically–bright quasars are also those seen along a direct line of sight to the continuum source (i.e. within these ‘ionization cones’), then the low incidence rate of BALs amongst these quasars may not necessarily be consistent with the jet covering factor; it then becomes necessary to consider the BAL phenomenon as an evolutionary, mass–loss phase (e.g. Miller (1997)). Note that the original Weymann et al. (1991) results do not rule out a duty cycle effect for the BAL phenomenon (Weymann (1997)) and indeed, there is some observational evidence to support the idea (e.g. Briggs, Turnshek & Wolfe (1984); Boronson, Pearson & Oke (1985); Voit, Weymann & Korista (1993)) that BALQs may be transition objects between RQ and RL quasar phases that have undergone a close interaction and/or merger event which has triggered the expulsion of excess mass and angular momentum. One particularly impressive example is the recent adaptive optics image of the BALQ PG 1700+518 ($`z=0.29`$), which clearly reveals a discrete companion galaxy that appears to be merging with the quasar (Stockton, Canalizo & Close (1998)). Similarly, other BALQs which are sufficiently nearby ($`z<0.5`$) to show clear signs of having undergone a recent interaction or merger event include Q 0205+024, IRAS 0759+6508, Q 1402+436 and Q 2141+175, and while several more show less discernible signs (e.g. PG 0026+129, PG 0043+039, Q 0318-196, PG 1426+015, PG 2233+143) their immediate environs are strongly suggestive of interactions taking place. Finally, tidal tails and nearby companions associated with some low-$`z`$ quasars have been detected by HST imaging (e.g. Bahcall et al. (1997)), which has also revealed that the host galaxies of at least some bright RQQs, like those of RLQs and radio galaxies, are massive ellipticals, which are believed to have been formed from mergers (McLure et al. (1998)).
### 3.2 Orientation and Geometry
The strongest evidence for a quasi–equatorial geometry for the BALR has come from polarization measurements, which have revealed that, on average, BALQs tend to have higher levels of optical polarization than non-BALQs and which, when interpreted in terms of orientation alone, suggest that the BALR is being intercepted along highly–inclined lines of sight (Hutsemékers, Lamy & Remy (1998); Schmidt & Hines (1999) and references therein). These data have also revealed that the highest levels of polarization ($`>1\%`$) are measured in low–ionization BALQs (lo-BALQs; those with absorption lines due to low–ionization species, such as Mg II and Al III, in addition to the usual high–ionization BAL troughs). Since the objects in this sub–class of BALQs also show evidence of strong dust reddening (Sprayberry & Foltz (1992)), the polarization data strongly favour models in which the viewing angle is close to the obscuring dust torus. However, this is strictly only the case for lo-BALQs; polarization studies have found no statistically significant differences in the optical polarization between high–ionization BALQs (hi-BALQs) and non-BALQs (Hutsemékers et al. 1998) and therefore, they offer no helpful clues to the orientation of hi-BALQs, which in fact make up the majority of BALQs.
Another way of determining the orientation and geometry of the BALR is to search for radio axes. Unfortunately, the large distances and low radio fluxes of RQQs have made it difficult in practice to resolve radio images of BALQs. Indeed, prior to the recent FIRST survey, only two BALQs had been mapped with sufficient spatial resolution with the VLA: PG 1700+518, which exhibits double compact radio structure down to $`0.15^{\prime \prime }`$ at 15 GHz (Hutchings, Neff & Gower (1992); Kellermann et al. (1994); Kukula et al. (1998)); and the Cloverleaf, H 1413+1143, which exhibits compact radio counterparts to all four of the optical images produced by gravitational lensing, as well as an additional, strongly amplified radio source that appears to be associated with the quasar itself (possibly an ejected radio component; Kayser et al. (1990)). Similarly, the newly discovered BALQ APM 08279+5255 (Irwin et al. (1998)) exhibits double compact radio structure down to $`0.28^{\prime \prime }`$ at 3.5 cm (G.F. Lewis, pvt. com.). Even the recent FIRST survey, which has detected and mapped, with follow–up, high resolution (A array) VLA imaging, about 20 BALQs, has failed to detect any elongated or extended structure that could be identified as radio axes; all of the sources appear point–like down to a $`0.2^{\prime \prime }`$ resolution level (B. Becker, pvt. com.). Although it may be possible to interpret these radio sources as weak, unresolved jets, it would be desirable to obtain higher quality radio data.
In the meantime, it is interesting to make a comparison with the low luminosity, low velocity counterparts to BALs found in Seyfert 1s which are sufficiently nearby to resolve linear radio structures on sub–kiloparsec and sometimes parsec scales. For example, NGC 3516, NGC 4151 and NGC 5548, which are classified as Seyfert 1.5s, all exhibit elongated radio structure with subcomponents and with an unresolved core centred on the optical nucleus (e.g. Baum et al. (1993)). On the other hand, in other nearby AGN which exhibit BAL–like features (e.g. NGC 3783, NGC 509, NGC 7469), no radio axes are detected, only nuclear point sources. This is typically the case for objects which are classified as Seyfert 1.0-1.2 and which are therefore believed to be viewed at low inclinations, so it is unclear whether they actually possess linear radio structure that cannot be seen because of a lack of projection, or whether their radio sources are intrinsically different from those in other Seyferts, which seems less likely to be the case.
There are also other observational clues to suggest that not all BALQs are being viewed at large inclination angles. For example, dust models for the optical to submillimeter spectral energy distributions of H 1413+117 and of APM 08279+5255 (both of which are IRAS sources) are consistent with a dusty torus being viewed face–on, with a direct, unobscured line of sight to the optical continuum source (Barvainis et al. (1993); Lewis et al. (1998)). Similarly, the lack of reddening in other hi-BALQs (e.g. Weymann et al. (1991)) suggests that they too are being viewed at latitudes sufficiently high to avoid dust contamination from the putative torus. Furthermore, the remarkable similarity between the emission line equivalent widths of BALQs and non–BALQs is surprising, given that projection effects are expected to produce measurable differences if they are indeed seen from different viewing angles (P. Francis, pvt. com.). The orientation interpretation of spectropolarimetric data is also unclear; resonance line scattering by ions in an equatorial geometry is predicted to produce additional, redistributed polarized flux in the red wings of the emission lines (Lee & Blandford (1997)), but in some cases, a deficit of polarized flux redward of the permitted emission lines is detected (see e.g. Fig. 4 in Weymann et al. (1991) and Ogle (1997)). Note that a weak jet, with the properties outlined in § 2, would produce negligible optical flux and therefore, a negligible contribution to the continuum polarization.
### 3.3 Limits from Flux Density Measurements
The standard synchrotron formulae for a homogeneous source region can be used to place some limits on the physical properties of the radio–emitting plasma and magnetic fields that are capable of producing the observed radio flux densities of BALQs, which are typically $`\sim `$ a few mJy. The emitting electrons are assumed to have the usual nonthermal energy distribution: $`n_\gamma \propto \gamma ^{-p}`$, with a total electron number density $`n_\mathrm{e}=\int d\gamma \,n_\gamma `$, where $`\gamma `$ is the electron Lorentz factor, with $`\gamma _{\mathrm{min}}\le \gamma \le \gamma _{\mathrm{max}}`$, and where $`p`$ is the particle spectral index. For optically–thin synchrotron emission, the spectral index is given by $`\alpha =(p-1)/2`$ and the observed flux density can be related to the other observable parameters, the angular diameter of the source, $`\theta _\mathrm{d}`$, and the luminosity distance, $`D`$, according to (see Marscher (1987))
$$S_\nu \simeq (3\times 10^4)\gamma _{\mathrm{min}}^2n_\mathrm{e}B^2\nu _{\mathrm{GHz}}^{-1}\theta _{\mathrm{mas}}^3D_{\mathrm{Gpc}}(1+z)^2\text{mJy ,}$$
(3)
where $`B`$ is the magnetic field and $`\alpha =1.0`$ has been used, since this is a typical value obtained from observed BALQ radio spectra for which spectral indices could be measured (Barvainis & Lonsdale (1997)). To take into account the possibility that the observed radio flux has been boosted as a result of beaming (i.e. in RLQs), this expression needs to be further multiplied by a factor $`\delta ^4`$, where $`\delta =[\mathrm{\Gamma }(1-\beta \mathrm{cos}\phi )]^{-1}`$ is the Doppler factor corresponding to a bulk velocity $`\beta c`$ with a bulk Lorentz factor $`\mathrm{\Gamma }`$ and with a direction $`\phi `$ with respect to the observer.
To obtain independent constraints on the unknown source parameters $`n_\mathrm{e}`$ and $`B`$, the optically–thin synchrotron spectrum can be extrapolated down to the frequency $`\nu _\mathrm{m}`$ where the observed flux density reaches a maximum at a value $`S_\mathrm{m}`$ and where it can be assumed that the optical depth to synchrotron self–absorption is approximately unity (see Marscher (1987)). This then gives the following relations:
$$B\simeq 40\left(\frac{\nu _\mathrm{m}}{\mathrm{GHz}}\right)^5\left(\frac{S_\mathrm{m}}{\mathrm{mJy}}\right)^{-2}\theta _{\mathrm{mas}}^4(1+z)^{-1}\delta \mathrm{G},$$
(4)
which is virtually independent of $`\alpha `$, and for $`\alpha =1.0`$:
$$n_\mathrm{e}\simeq (4\times 10^{-7})\gamma _{\mathrm{min}}^{-2}\left(\frac{\nu _\mathrm{m}}{\mathrm{GHz}}\right)^{-9}\left(\frac{S_\mathrm{m}}{\mathrm{mJy}}\right)^5\theta _{\mathrm{mas}}^{-11}D_{\mathrm{Gpc}}^{-1}(1+z)^8\delta ^{-6}\mathrm{cm}^{-3}$$
(5)
Although these relations are strongly dependent on the observable parameters, they can be somewhat useful when comparing the extremely contrasting properties between the compact radio cores of RLQs and the much weaker radio sources in RQQs (including BALQs). In particular, these relations imply a distinct difference between the ratio of energy densities in magnetic field, $`u_\mathrm{B}`$, to relativistic electrons, $`u_\mathrm{e}`$, for quasars with contrasting radio properties. Eqn. (4) implies a magnetic energy density $`u_\mathrm{B}=B^2/8\pi \simeq 50\nu _{\mathrm{GHz}}^{10}S_{\mathrm{mJy}}^{-4}\theta _{\mathrm{mas}}^8(1+z)^{-2}\delta ^2`$ erg.cm<sup>-3</sup>, while eqn. (5) implies an electron energy density (for $`\alpha =1.0`$) $`u_\mathrm{e}=2\gamma _{\mathrm{min}}^2n_\mathrm{e}m_\mathrm{e}c^2\simeq 10^{-12}\nu _{\mathrm{GHz}}^{-9}S_{\mathrm{mJy}}^5\theta _{\mathrm{mas}}^{-11}D_{\mathrm{Gpc}}^{-1}(1+z)^8\delta ^{-6}`$ erg.cm<sup>-3</sup>. Interestingly, the ratio $`u_\mathrm{B}/u_\mathrm{e}`$ for RQQs is larger by many orders of magnitude than the same ratio for RLQs (assuming the same observing frequency and the same redshift), even if a conservative flux density (say, 100 mJy) and a high Doppler factor ($`\delta \sim 10`$) are used for the RL source. Also, the condition $`u_\mathrm{B}/u_\mathrm{e}\gg 1`$ is always obtained for BALQs, even in the case of the highest flux density level measured so far (for FIRST 1556+3517; one of the RL BALQ candidates), 30 mJy at 1.4 GHz, with $`z=1.48`$ (Brotherton et al. (1998)), which gives a lower limit of $`u_\mathrm{B}/u_\mathrm{e}\sim 0.1(\nu _{1.4}\theta _{\mathrm{mas}})^{19}D_{\mathrm{Gpc}}`$, and which, taking into account the strong dependence on $`\theta _{\mathrm{mas}}`$ ($`\gtrsim 1`$), always exceeds unity by an appreciable amount.
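These estimates are compactly captured by a short script. It implements eqs. (4) and (5) with the coefficients quoted above and reproduces the FIRST 1556+3517 lower limit; the $`\theta _{\mathrm{mas}}=1`$ and $`D_{\mathrm{Gpc}}=1`$ inputs are assumptions for illustration only.

```python
# Synchrotron self-absorption estimates for alpha = 1.0, following
# eqs. (4)-(5); nu in GHz, S in mJy, theta in mas, D in Gpc.
def B_field(nu, S, theta, z, delta=1.0):                      # Gauss
    return 40.0 * nu**5 * S**-2 * theta**4 * (1 + z)**-1 * delta

def n_e(nu, S, theta, D, z, delta=1.0, gamma_min=1.0):        # cm^-3
    return (4e-7 * gamma_min**-2 * nu**-9 * S**5 * theta**-11
            * D**-1 * (1 + z)**8 * delta**-6)

def uB_over_ue(nu, S, theta, D, z, delta=1.0):
    u_B = 50.0 * nu**10 * S**-4 * theta**8 * (1 + z)**-2 * delta**2
    u_e = (1e-12 * nu**-9 * S**5 * theta**-11 * D**-1
           * (1 + z)**8 * delta**-6)
    return u_B / u_e

# FIRST 1556+3517-like inputs: 30 mJy at 1.4 GHz, z = 1.48
print(f"B ~ {B_field(1.4, 30.0, 1.0, 1.48):.2e} G")
print(f"u_B/u_e ~ {uB_over_ue(1.4, 30.0, 1.0, 1.0, 1.48):.2f}")  # ~0.1-0.2
```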
The ratio $`u_\mathrm{B}/u_\mathrm{e}`$ may have important implications for the nature of the radio emitting source regions in quasars, especially in the framework of jet models. Falcke & Biermann (1995), for instance, suggest that there exists a ‘family’ of jet models, the members of which are distinguished by differences in the equipartition conditions involving the energy densities in the magnetic field, relativistic electrons and protons as well as thermal electrons and protons, and also differences in the total energy budget of the jet–disk system as a whole. According to their hypothesis, jets with $`u_\mathrm{B}\gg u_\mathrm{e}`$ are predicted to be radio quiet if the relativistic electron distribution begins at $`\gamma _{\mathrm{min}}\sim 1`$ and if $`u_\mathrm{B}`$ is below its equipartition value with respect to the bulk kinetic energy. They also further argue that jets with $`u_\mathrm{B}\gg u_\mathrm{e}`$ and $`\gamma _{\mathrm{min}}\sim 1`$ can be radio weak (but not radio quiet) if $`u_\mathrm{B}`$ is in equipartition with the bulk kinetic energy, thus offering a plausible theoretical discrimination between RQ and RI sources. Note that while the idea (e.g. Falcke, Sherwood & Patnaik (1996)) that RIQs are Doppler–boosted RQQs may seem appealing in the framework of a weak jet model, there is very little observational evidence for beaming in non-RLQs.
## 4 Theoretical Constraints
Although various pressure–driven wind models have been proposed for BALs (see de Kool (1997) for a summary), the only direct observational clues to the nature of the driving force are line–locking features (Turnshek (1988)) and ‘ghost of Ly $`\alpha `$’ features (Arav & Begelman (1994)). There are, however, theoretical arguments to suggest that while line radiation pressure clearly plays an important dynamical role in the BAL phenomenon, it may not necessarily be the only acceleration mechanism. For instance, absorbing clouds will experience large forces when they move relative to an accelerating, confining medium and thus, they will be unavoidably dragged along by the dynamic pressure of the external fluid (Weymann et al. 1985). Indeed, Arav, Li & Begelman (1994) have shown that BAL clouds comoving with the ambient medium produce profiles that more closely resemble those observed than do the profiles produced by line acceleration alone when the clouds are decoupled from the ambient medium. They also find that to produce a significant contribution to the overall acceleration from line pressure relative to ram pressure when the clouds are comoving, the starting radius is too close to the inferred radius of the broad emission line region, i.e. $`\sim 0.1`$ pc.
Another related problem is the cloud confinement mechanism. The temperatures required for pressure confinement by a thermal wind (e.g. Stocke et al. (1992)) are difficult to achieve on $`\sim `$ pc scales, while a wind driven by cosmic rays (e.g. Begelman, de Kool & Sikora (1991)) also cannot provide the necessary pressure for confinement of BAL clouds. The confinement problem disappears if, instead of clouds, the BALs are produced by a quasi–continuous, high column density wind (e.g. Murray et al. (1995)). Although such a model is made more appealing by being able to account for the common UV/X–ray (BALs/warm absorber) absorption features that have been detected in some sources (Crenshaw et al. (1998); Gallagher et al. (1999) and references therein), it requires ionization parameters several orders of magnitude in excess of the values inferred from the range of ionization states in the observed BAL troughs and this also makes it difficult to account for lo-BALs.
Whether BAL clouds can be accelerated and confined by a weak jet and whether the physical properties of such clouds are consistent with those deduced from observations now remains to be determined.
### 4.1 Dynamical Considerations
Consider a blob of gas immersed in an outflowing medium. This blob, irrespective of its formation history, will quickly come into pressure equilibrium with its surroundings and in doing so, will be accelerated by the dynamic pressure of the outflow, expanding as it moves downstream. For a jet of speed $`\mathrm{\Gamma }\beta _{\mathrm{jet}}c`$, the ram pressure exerted on a cloud of scalelength $`r_{\mathrm{cld}}`$ satisfies
$$\rho _{\mathrm{cld}}v_{\mathrm{cld}}\frac{\partial v_{\mathrm{cld}}}{\partial r}=\frac{\rho _{\mathrm{jet}}\mathrm{\Gamma }^2(\beta _{\mathrm{jet}}c-v_{\mathrm{cld}})^2}{r_{\mathrm{cld}}},$$
(6)
where $`v_{\mathrm{cld}}`$ is the cloud velocity. However, the momentum flux, $`\rho _{\mathrm{jet}}\mathrm{\Gamma }^2\beta _{\mathrm{jet}}^2c^2`$, of a sub–relativistic jet is higher than that of a relativistic jet with the same energy flux, i.e. the ratio of momentum-to-energy flux, $`[\mathrm{\Gamma }/(\mathrm{\Gamma }-1)]\beta _{\mathrm{jet}}/c`$, is higher by a factor $`2c/v_{\mathrm{jet}}`$ and therefore, the dynamic pressure of a sub–relativistic jet (of speed $`v_{\mathrm{jet}}`$) provides a more efficient acceleration mechanism than that of a relativistic jet. Indeed, ram pressure acceleration in a relativistic jet is no more efficient than acceleration by line radiation pressure (the favoured mechanism for BALs) for the same power in kinetic energy flux and photon flux. In a sub–relativistic jet, on the other hand, the ratio of $`a_{\mathrm{ram}}`$ to $`a_{\mathrm{rad}}`$ is $`\sim c/v_{\mathrm{jet}}`$.
The higher efficiency of ram pressure acceleration in a sub–relativistic jet compared to that in a relativistic jet led Blandford & Königl (1979) to predict that dense blobs embedded in a sub–relativistic jet would naturally give rise to absorption troughs blueshifted with respect to their corresponding emission lines. This also immediately suggests that a jet model offers a natural explanation for why nuclear absorption outflows, when present in RLQs, are never as strong as those which characterize bonafide BALQs.
#### 4.1.1 The $`v_{\mathrm{\infty }}`$—Radio-Loudness Anticorrelation
The distinct anticorrelation between the observed terminal velocity, $`v_{\mathrm{\infty }}`$, of material ejected from a quasar nucleus and the radio power of the quasar, as pointed out by Weymann (1997) following the FIRST discovery of BALs in RLQs (Brotherton et al. (1998)), is clearly a key observational property which therefore provides a critical test for the dynamical aspects of any theoretical model. It is clearly difficult to interpret this observation in the context of models in which the momentum of the outflowing gas entirely derives from the radiation field. On the other hand, an outflow driven at least partially by the momentum flux of a radio–emitting jet, whose radio flux is some fraction of the total energy flux, clearly warrants a more quantitative investigation.
Consider a total column density $`N_{\mathrm{cld}}`$ of absorbing clouds accelerated along a line of sight by the dynamic pressure of a jet. These clouds will attain a terminal velocity according to (c.f. eqn. 6)
$$\frac{v_{\mathrm{\infty }}^2}{c^2}\lesssim \frac{\mathrm{\Gamma }\beta _{\mathrm{jet}}}{\mathrm{\Gamma }-1}\frac{2L_{\mathrm{jet}}}{r_0\mathrm{\Omega }N_{\mathrm{cld}}m_\mathrm{p}c^3},$$
(7)
where $`r_0=r_{0,\mathrm{pc}}`$ pc is the radius at which the acceleration commences and where eqn. (1) has been used. Thus, a sub–relativistic jet can accelerate a total column density of $`10^{22}N_{22}`$ cm<sup>-2</sup> clouds to comoving velocities
$$v_{\mathrm{\infty }}\lesssim 0.1cL_{46}^{1/3}(r_{0,\mathrm{pc}}N_{22})^{-1/3}\left(\frac{\mathrm{\Omega }}{4\pi }\right)^{-1/3}$$
(8)
which is consistent with the maximum velocities measured directly from the blueshifted absorption troughs in BALQs (e.g. Weymann et al. (1991)). In the case of a mildly–relativistic jet, with, say $`v_{\mathrm{jet}}=0.5c`$ (corresponding to $`\mathrm{\Gamma }=1.15`$), eqn. (7) implies a terminal velocity much less than the bulk jet speed, with
$$v_{\mathrm{\infty }}\lesssim 0.07cL_{\mathrm{jet},46}^{1/2}(r_{0,\mathrm{pc}}N_{22}v_{\mathrm{jet},0.5})^{-1/2}\left(\frac{\mathrm{\Omega }}{4\pi }\right)^{-1/2}$$
(9)
Note that in the limit of relativistic jet speeds ($`\mathrm{\Gamma }\gtrsim `$ a few), any dense clouds embedded in the flow will always be accelerated to the bulk velocity, unless the jet is ‘free’, with an opening angle $`\varphi \sim \mathrm{\Gamma }^{-1}`$ (Begelman et al. 1984), in which case eqn. (7) implies $`v_{\mathrm{\infty }}\sim 0.1cL_{46}^{1/2}\mathrm{\Gamma }_3(r_{0,\mathrm{pc}}N_{22})^{-1/2}`$. Thus, a jet model offers a viable explanation for why the outflow velocities associated with the much narrower, blueshifted absorption lines in RLQs are never as high as the velocities associated with genuine BALs in RQQs.
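A minimal numerical sketch of eq. (7) reproduces the two regimes just discussed; in the sub-relativistic case the clouds are comoving, so $`\mathrm{\Gamma }\beta _{\mathrm{jet}}/(\mathrm{\Gamma }-1)\simeq 2c/v_{\mathrm{\infty }}`$ and the terminal velocity is found by fixed-point iteration. The fiducial values are those used in the text.

```python
import numpy as np

m_p, c, pc = 1.6726e-24, 2.9979e10, 3.086e18   # cgs

def v_inf(L_jet, r0_pc, N_cld, Omega, mom_to_en):
    # eq. (7): (v/c)^2 < [Gamma*beta/(Gamma-1)] * 2L / (r0 Omega N m_p c^3)
    rhs = mom_to_en * 2.0 * L_jet / (r0_pc * pc * Omega * N_cld * m_p * c**3)
    return np.sqrt(rhs) * c

v = 0.1 * c                                  # sub-relativistic, comoving clouds
for _ in range(30):
    v = v_inf(1e46, 1.0, 1e22, 4 * np.pi, 2 * c / v)
print(f"sub-relativistic jet: v_inf ~ {v / c:.2f} c")          # cf. eq. (8)

G = 1.0 / np.sqrt(1.0 - 0.5**2)              # v_jet = 0.5c, Gamma ~ 1.15
v = v_inf(1e46, 1.0, 1e22, 4 * np.pi, G * 0.5 / (G - 1.0))
print(f"v_jet = 0.5c: v_inf ~ {v / c:.2f} c")                  # cf. eq. (9)
```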
It is also of interest to perform these calculations with lower energy fluxes to test the applicability of a weak jet model to the UV absorption features detected in some Seyferts (e.g. Crenshaw et al. (1998)). Using appropriate scaled–down values for $`L_{\mathrm{jet}}`$ and $`r_0`$ of, say, $`10^{43}`$ erg.s<sup>-1</sup> and $`0.1`$ pc, respectively, eqn. (8) implies comoving velocities $`\lesssim 0.02cL_{43}^{1/3}(r_{0,0.1\mathrm{pc}}N_{22})^{-1/3}(\mathrm{\Omega }/4\pi )^{-1/3}`$. This is consistent with observations, which indicate that the absorption features in Seyfert spectra are never as broad as those seen in their more luminous counterparts, with terminal velocities of $`\lesssim 0.015c`$ typically being measured, compared to $`\lesssim 0.2c`$ for the BALQs.
Thus, a jet model for BALQs can not only explain the observed $`v_{\mathrm{\infty }}`$–radio-loudness anticorrelation in quasars, but can also explain the lower $`v_{\mathrm{\infty }}`$ values measured from weaker absorption features in less powerful sources.
#### 4.1.2 Kelvin–Helmholtz Instability
Small–scale clouds moving with a relative velocity with respect to the bulk velocity of the surrounding plasma are susceptible to the Kelvin–Helmholtz instability, which can shred the clouds into smaller and smaller entities. For clouds embedded in a magnetized medium, moving with a relative velocity $`\mathrm{\Delta }v`$, the fastest growth timescale corresponding to the most disruptive modes is (e.g. Celotti et al. (1998); see also Begelman et al. 1991)
$$t_{\mathrm{KH}}\sim \frac{r_{\mathrm{cld}}}{\mathrm{\Delta }v}\left(\frac{\rho _{\mathrm{cld}}}{\rho _{\mathrm{jet}}}\right)^{1/2}$$
(10)
If the clouds are confined by the dynamic pressure of the jet, i.e. $`\rho _{\mathrm{cld}}c_\mathrm{s}^2\sim \rho _{\mathrm{jet}}v_{\mathrm{jet}}^2`$, then $`t_{\mathrm{KH}}\sim (v_{\mathrm{jet}}/\mathrm{\Delta }v)t_{\mathrm{sc}}\gtrsim t_{\mathrm{sc}}`$, where $`t_{\mathrm{sc}}=r_{\mathrm{cld}}/c_\mathrm{s}`$ is the sound–crossing timescale across the clouds, corresponding to an internal sound speed $`c_\mathrm{s}=\sqrt{2kT_{\mathrm{cld}}/m_\mathrm{p}}`$. In other words, the instability is sufficiently rapid to restrict the confinement of clouds to timescales as short as $`t_{\mathrm{sc}}`$. Since the acceleration timescale is much longer than $`t_{\mathrm{KH}}`$, this means that clouds must be continuously regenerated or injected along the outflow.
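To put numbers on this hierarchy of timescales, the following sketch uses the illustrative cloud size and temperature adopted in § 4.2 below:

```python
import numpy as np

k_B, m_p = 1.3807e-16, 1.6726e-24            # cgs
r_cld, T_cld = 5e11, 3e4                     # cloud size (cm), temperature (K)
c_s = np.sqrt(2 * k_B * T_cld / m_p)         # internal cloud sound speed
t_sc = r_cld / c_s                           # sound-crossing time
for dv in (1.0, 0.3, 0.1):                   # Delta v / v_jet
    print(f"dv/v_jet = {dv:>4}: t_KH ~ {t_sc / dv:.1e} s (t_sc = {t_sc:.1e} s)")
```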
In the nonlinear regime, the Kelvin–Helmholtz instability causes a rapid cascade of cloud fragmentation. While this does not directly destroy the clouds, it makes them increasingly more prone to microphysical diffusion processes, which can assimilate the clouds into the ambient medium, thereby effectively causing their evaporation. This is examined in § 4.2.2 below (see also Weymann et al. 1985 for a discussion).
### 4.2 Physical Constraints
In the following, it is assumed that the BAL cloudlets are comoving with the bulk flow of a sub–relativistic jet, since the arguments presented in Section 4.1.1 indicate that this may be an appropriate model for bonafide BALQs. From eqn. (2), the total power in a sub–relativistic jet can be written as
$$L_{\mathrm{jet}}\gtrsim r^2\mathrm{\Omega }\frac{1}{2}\rho _{\mathrm{jet}}v_{\mathrm{jet}}^3,$$
(11)
from which an upper limit on the mean jet density can be obtained:
$$\frac{\rho _{\mathrm{jet}}}{m_\mathrm{p}}\lesssim (4\times 10^3)L_{46}r_{\mathrm{pc}}^{-2}\left(\frac{v_{\mathrm{jet}}}{0.1c}\right)^{-3}\left(\frac{\mathrm{\Omega }}{4\pi }\right)^{-1}\mathrm{cm}^{-3}$$
(12)
#### 4.2.1 Confinement
A crucial issue which needs to be addressed by any physical model for BALQs is the confinement mechanism. If BAL clouds are accelerated by the dynamic pressure of a weak jet, then ram pressure provides a natural confinement mechanism. This corresponds to the equipartition condition $`\rho _{\mathrm{cld}}c_\mathrm{s}^2\sim \rho _{\mathrm{jet}}v_{\mathrm{jet}}^2`$. Using eqn. (11), this implies a characteristic cloud density
$$n_{\mathrm{cld}}\lesssim \frac{2L_{\mathrm{jet}}}{r^2\mathrm{\Omega }v_{\mathrm{jet}}kT_{\mathrm{cld}}}\simeq 10^{10}L_{\mathrm{jet},46}r_{\mathrm{pc}}^{-2}\left(\frac{v_{\mathrm{jet}}}{0.1c}\right)^{-1}\left(\frac{\mathrm{\Omega }}{4\pi }\right)^{-1}\left(\frac{T_{\mathrm{cld}}}{3\times 10^4\mathrm{K}}\right)^{-1}\mathrm{cm}^{-3},$$
(13)
which is consistent with the upper limits deduced from the observed ionization species and from photoionization models (see e.g. Turnshek (1988)).
It is also possible that comoving magnetic fields provide pressure support to dense cloudlets. The typical field strength of a comoving magnetic field is
$$B_{\mathrm{jet}}\sim L_{\mathrm{jet},46}^{1/2}r_{\mathrm{pc}}^{-1}\left(\frac{v_{\mathrm{jet}}}{0.1c}\right)^{-1/2}\left(\frac{\mathrm{\Omega }}{4\pi }\right)^{-1/2}\mathrm{G}$$
(14)
which satisfies the equipartition condition $`B_{\mathrm{jet}}^2/8\pi \sim n_{\mathrm{cld}}kT_{\mathrm{cld}}\sim L_{\mathrm{jet}}/r^2\mathrm{\Omega }v_{\mathrm{jet}}\sim \frac{1}{2}\rho _{\mathrm{jet}}v_{\mathrm{jet}}^2`$. According to Falcke & Biermann (1995), jets in which the magnetic field is below equipartition with the bulk kinetic energy are likely to be radio–quiet sources, so this could be a distinguishing property between bonafide BALQs and the RL BALQ candidates. Although magnetic fields need not play an important dynamical role in a weak jet model for BAL outflows, even small field strengths can be of crucial importance to maintaining a two–phase fluid by suppressing transverse diffusion of relativistic particles (whose motion is confined to a Larmor radius about the field lines) into cool, dense BAL clouds. However, longitudinal diffusion can still be important and therefore needs to be examined.
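The three limits of eqs. (12)–(14) follow directly from the jet energy budget and can be reproduced with a few lines (cgs units; the fiducial inputs are those of the text):

```python
import numpy as np

m_p, k_B, c, pc = 1.6726e-24, 1.3807e-16, 2.9979e10, 3.086e18

def jet_limits(L46=1.0, r_pc=1.0, v_frac=0.1, Omega=4 * np.pi, T_cld=3e4):
    L, r, v = L46 * 1e46, r_pc * pc, v_frac * c
    n_jet = 2 * L / (r**2 * Omega * v**3 * m_p)           # eq. (12), cm^-3
    n_cld = 2 * L / (r**2 * Omega * v * k_B * T_cld)      # eq. (13), cm^-3
    B_jet = np.sqrt(8 * np.pi * L / (r**2 * Omega * v))   # eq. (14), Gauss
    return n_jet, n_cld, B_jet

n_jet, n_cld, B_jet = jet_limits()
print(f"n_jet < {n_jet:.1e} cm^-3, n_cld < {n_cld:.1e} cm^-3, "
      f"B_jet ~ {B_jet:.2f} G")
```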
#### 4.2.2 Evaporation
A serious threat to the survival of BAL clouds embedded in a jet is evaporation into the ambient plasma as a result of diffusion and Coulomb heating by the external fast particles which fill the bulk of the jet. The volume heating rate due to Coulomb collisions between thermal, nonrelativistic ($`kT_\mathrm{e}\ll m_\mathrm{e}c^2`$) electrons and nonthermal, relativistic electrons is given by (e.g. Gould (1972); see also Jackson (1975))
$$H^{\mathrm{Coul}}\simeq \frac{3}{2}\chi _\mathrm{e}n_{\mathrm{cld}}n_\gamma \sigma _\mathrm{T}m_\mathrm{e}c^3\mathcal{L}(\gamma ),$$
(15)
where $`\chi _\mathrm{e}`$ is the ratio of the number density of thermal electrons (either free or harmonically bound to ions) in the cloud gas to the total cloud density and $`\mathcal{L}(\gamma )`$ is a parameter which only weakly depends on $`\gamma `$ and which is related to the logarithmic Gaunt factor, determined from the maximum and minimum impact parameters. For collisions with free electrons, which determine the overall heating of the cloud gas, $`\mathcal{L}(\gamma )\simeq \mathrm{ln}(\sqrt{\gamma -1}\,m_\mathrm{e}c^2/\hbar \omega _\mathrm{p})`$, where $`\omega _\mathrm{p}`$ is the (thermal) electron plasma frequency. Eqn. (15) corresponds to a collision timescale
$$t_{\mathrm{coll}}\simeq \frac{2\gamma }{3n_{\mathrm{cld}}\sigma _\mathrm{T}c\mathcal{L}}<10^5n_{\mathrm{cld},8}^{-1}\frac{\gamma }{\mathcal{L}}\mathrm{s},$$
(16)
where $`n_{\mathrm{cld},8}=n_{\mathrm{cld}}/10^8`$cm<sup>-3</sup>. This must be appreciably longer than the sound–crossing timescale, $`t_{\mathrm{sc}}`$, if pressure confinement of individual clouds is to be sustained in spite of the collisions, implying cloud sizes
$$r_{\mathrm{cld}}\lesssim (5\times 10^{11})n_{\mathrm{cld},8}^{-1}\left(\frac{T_{\mathrm{cld}}}{3\times 10^4\mathrm{K}}\right)^{1/2}\frac{\gamma }{\mathcal{L}}\mathrm{cm}$$
(17)
This is smaller than the mean–free–path between the collisions, $`\lambda _{\mathrm{mfp}}=t_{\mathrm{coll}}\beta c`$ (where $`\beta c`$ is the velocity of the relativistic electrons), by a factor $`c/c_\mathrm{s}\sim 10^4`$, which means that the relativistic electrons would have to travel through that many clouds before a Coulomb encounter occurs and diffusive effects become important. In other words, the particle energy in the ambient jet plasma is simply advected through the clouds, rather than transferred diffusively, as a result of direct encounters between the thermal and nonthermal electrons. Furthermore, this will only strictly be true if the magnetic field lines in the jet penetrate the clouds with little distortion. If the clouds possess a random, internal field line structure (which they might do if they were pre–existing entities that were swept up by the jet rather than being formed from condensations within the jet), then the tangential component of the internal field lines can prevent the infiltration of fast particles from outside the clouds.
It has been pointed out, however, that collective plasma effects, triggered by the passage of fast particles through a ‘cold’ plasma, can enhance the heating rate, eqn. (15), by a factor as large as $`10^5`$ (see Ferland & Mushotzky (1984) and references therein). If this is the case, then the only way cool clouds can maintain their properties in the presence of fast particles is to efficiently radiate away any extra energy input. The dominant radiative cooling process in a typical BAL cloud is through the CIV $`\lambda 1549`$ line transition, which has an Einstein coefficient $`A_{21}\simeq (2.6\times 10^7)\mathrm{s}^{-1}`$. The volume cooling rate for this line transition is then $`n_{\mathrm{CIV}}A_{21}h\nu _{21}\simeq (3\times 10^{-4})\chi _{\mathrm{CIV}}n_{\mathrm{cld}}`$ erg.s<sup>-1</sup> cm<sup>-3</sup>, where $`\chi _{\mathrm{CIV}}=n_{\mathrm{CIV}}/n_{\mathrm{cld}}`$ is the abundance of the CIV ion relative to the total (ionized plus neutral) hydrogen density in the clouds. The total heating can be quantitatively estimated as $`\zeta H^{\mathrm{Coul}}`$, where $`\zeta \lesssim 10^5`$ takes into account collective plasma heating and where $`H^{\mathrm{Coul}}`$ is integrated over the nonthermal electron distribution (neglecting the weak $`\gamma `$–dependence in the $`\mathcal{L}`$ parameter). The ratio of the cooling to heating rates is then
$$\frac{C^{\mathrm{CIV}}}{H^{\mathrm{tot}}}\simeq 10^6n_{\mathrm{jet}}^{-1}\zeta _5^{-1}\left(\frac{\chi _{\mathrm{CIV}}}{10^{-4}}\right)\mathrm{cm}^3,$$
(18)
where $`\zeta _5=\zeta /10^5`$ and where an appropriate value of $`\mathcal{L}=25`$ has been used. Thus, radiative line cooling in the clouds is efficient enough to overcome any extra heat input from the ambient relativistic jet plasma provided its density is $`n_{\mathrm{jet}}\ll 10^6`$ cm<sup>-3</sup>. According to the limits on $`n_{\mathrm{jet}}`$ imposed by the total jet energy budget, eqn. (12), this is always satisfied and therefore, evaporation does not pose an immediate threat to the survival of BAL clouds embedded within a weak, sub–relativistic jet.
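The competition expressed by eq. (18) is reproduced below (the CIV atomic data are standard values consistent with the numbers quoted above; $`\chi _\mathrm{e}=1`$ is an assumption):

```python
# CIV line cooling vs collectively enhanced Coulomb heating, cf. eq. (18).
sigma_T = 6.652e-25                          # Thomson cross-section, cm^2
m_e_c3 = 8.187e-7 * 2.9979e10                # m_e c^2 * c, cgs
A21, h_nu21 = 2.6e7, 1.28e-11                # CIV 1549: s^-1, erg

def C_over_H(n_jet, zeta=1e5, chi_CIV=1e-4, chi_e=1.0, L_coul=25.0):
    cool = A21 * h_nu21 * chi_CIV            # cooling rate per unit n_cld
    heat = zeta * 1.5 * chi_e * n_jet * sigma_T * m_e_c3 * L_coul
    return cool / heat

print(f"C/H ~ {C_over_H(1.0):.1e} * (n_jet / cm^-3)^-1")   # ~1e6 / n_jet
```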
## 5 Summary and Discussion
It has been argued that the phenomenon of nuclear absorption outflows from quasars provides an important clue to understanding the observed radio–loud/radio–quiet dichotomy in active galactic nuclei if interpreted in terms of an inhomogeneous weak jet model in which the thermal gas responsible for the observed UV absorption troughs is embedded within a poorly–collimated outflow of weakly radio–emitting plasma. The motivation for this model is threefold: (i) observations of radio–quiet quasars have confirmed that the nature of their radio emission is fundamentally similar to that of radio–loud quasars; (ii) the observed anticorrelation between the terminal velocity of outflowing thermal gas and the radio strength of the quasar is direct evidence that the dynamics of the thermal, absorbing gas is intimately linked to the properties of the nonthermal, radio–emitting plasma; and (iii) lower velocity intrinsic absorption outflows have also been detected in Seyferts, many of which are found to exhibit linear radio structure indicative of small–scale, weak jets.
The observational constraints obtained here corroborate other theoretical jet models which attribute the differences in radio strength (i.e. quiet, weak and loud) to differences in the physical properties of jets (e.g. total energy flux, bulk speed, relative quantities of thermal and nonthermal plasma). It has also been suggested that a weak jet interpretation of the observed radio flux may offer a viable explanation for why nuclear absorption outflows are not detected in strong radio sources; the relativistic jets which power these sources are thought to be propitious sites for in situ particle acceleration, which precludes the accumulation of cooled particles that can thermalize and condense to form localized gas clouds capable of producing absorption features. Furthermore, since the efficiency of ram–pressure acceleration increases as a jet becomes sub–relativistic, the observed anticorrelation between the terminal velocity of the outflowing thermal gas and the radio strength of the quasar can also be explained by a weak jet model. Finally, it has been demonstrated that this model can explain the narrow UV absorption features detected in some Seyfert 1s. Thus, it has been shown that a weak jet model is successful in explaining not only the high velocity outflows in bonafide broad absorption line quasars, but also the lower velocity nuclear absorption outflows detected in both their strong radio counterparts and low luminosity counterparts.
The importance of broad absorption line quasars to our understanding of the radio–loud/radio–quiet dichotomy becomes evident in the framework of a weak jet model, whose underlying implication is that all AGN possess radio–emitting jets to varying degrees and that their observational classification depends not only upon orientation, but also upon intrinsic differences in the physical properties of their jets. However, one key issue which is yet to be fully resolved is the role evolutionary effects play in the formation of jets and outflows in AGN; if mass ejection phenomena are evolutionary phases, then the true significance of broad absorption line quasars in the grand scheme of AGN unification is yet to be fully appreciated.
The author wishes to thank the Royal Commission for the Exhibition of 1851 Research Fellowship (Imperial College, London) for financial support and A. C. Gower and G. F. Lewis for helpful discussions.
# Nucleation of vortices by rapid thermal quench
## Abstract
We show that vortex nucleation in superfluid <sup>3</sup>He by rapid thermal quench in the presence of superflow is dominated by a transverse instability of the moving normal-superfluid interface. Exact expressions for the instability threshold as a function of supercurrent density and the front velocity are found. The results are verified by numerical solution of the Ginzburg-Landau equation.
Formation of topological defects under a rapid quench is a fundamental problem of contemporary physics, promising to shed new light on the early stages of the evolution of the Universe. For homogeneous cooling, a fluctuation-dominated formation mechanism has been suggested by Kibble and Zurek (KZ). Normally, cooling is associated with an inhomogeneous temperature distribution accompanied by a phase-separating interface which moves through the system as the temperature decreases. A generalization of the KZ scenario was suggested in Ref. for inhomogeneous phase transitions in superfluids: if the front moves faster than the normal–superfluid interface, a large supercooled region which is left behind becomes unstable towards fluctuation-induced nuclei.
Superfluid <sup>3</sup>He offers a unique “testing ground” for rapid phase transitions . Recent experiments where a rotating superfluid <sup>3</sup>He was locally heated well above the critical temperature by absorption of neutrons revealed vortex formation under a rapid second–order phase transition. The TDGL analysis was applied to study a propagating normal–superfluid interface under inhomogeneous cooling and the formation of a large supercooled region was confirmed. The fluctuation–dominated mechanism may thus be responsible for creation of initial vortex loops. It is commonly accepted that these initial vortex loops are further inflated by the superflow and give rise to a macroscopic number of large vortex lines filling the bulk superfluid.
In this Letter we report a novel mechanism of vortex formation which overtakes the growth of the initial loops appearing in the supercooled region. We study the entire process of vortex formation in the presence of a superflow using TDGL dynamics. We take into account the temperature evolution due to thermal diffusion. We find analytically, and confirm by numerical simulations, that the normal–superfluid interface becomes unstable with respect to transverse undulations in the presence of a superflow. These undulations quickly transform into large primary vortex loops which then separate from the interface. Simultaneously, a very large number of small secondary vortex/antivortex nuclei are created in the supercooled region by fluctuations, resembling the conventional KZ mechanism. The primary vortex loops screen out the superflow in the inner region, causing the annihilation of the secondary vortex/antivortex nuclei. The number of surviving vortex loops is thus much smaller than anticipated from the KZ conjecture.
Model. – We use the TDGL model for a scalar order parameter $`\psi `$, ignoring the irrelevant complexity of the <sup>3</sup>He–specific multicomponent order parameter:
$$\partial _t\psi =\mathrm{\Delta }\psi +(1-f(𝐫,t))\psi -|\psi |^2\psi +\zeta (𝐫,t).$$
(1)
Here $`\mathrm{\Delta }`$ is the three-dimensional (3D) Laplace operator. Distances and time are measured in units of the coherence length $`\xi (T_{\mathrm{\infty }})`$ and the characteristic time $`\tau _{GL}(T_{\mathrm{\infty }})`$, respectively. These quantities are taken at the temperature $`T_{\mathrm{\infty }}`$ far from the heated bubble. The local temperature is controlled by heat diffusion and evolves as $`f(𝐫,t)=E_0\mathrm{exp}(-r^2/\sigma t)t^{-3/2}`$ where $`\sigma `$ is the normalized diffusion coefficient. $`E_0`$ determines the initial temperature of the hot bubble $`T^{\ast }`$ and is proportional to the deposited energy $`\mathcal{E}_0`$ such that $`E_0=\mathcal{E}_0/\left[C(T_c-T_{\mathrm{\infty }})\xi ^3(T_{\mathrm{\infty }})(\pi \sigma )^{3/2}\right]`$ where $`C`$ is the heat capacity. Since the deposited energy is large compared to the characteristic superfluid energy, we assume $`E_0\gg 1`$. The time at which the temperature in the center of the hot bubble drops down to $`T_c`$ is $`t_{\mathrm{max}}=E_0^{2/3}`$. The Langevin force $`\zeta `$ with the correlator $`\langle \zeta (𝐫,t)\zeta ^{\ast }(𝐫^{\prime },t^{\prime })\rangle =2T_f\delta (𝐫-𝐫^{\prime })\delta (t-t^{\prime })`$ describes thermal fluctuations with a strength $`T_f`$ at $`T_c`$.
The typical values of the Ginzburg–Landau parameters for Fermi liquids are: $`\tau _{GL}(T_{\mathrm{\infty }})=\tau _0/(1-T_{\mathrm{\infty }}/T_c)`$, $`\xi (T_{\mathrm{\infty }})\simeq \xi _0/(1-T_{\mathrm{\infty }}/T_c)^{1/2}`$, $`\xi _0=\hbar v_F/2\pi T_c`$ and $`\tau _0=\pi \hbar /8T_c`$. The diffusion constant is $`\sigma \sim \mathrm{\ell }/\xi _0`$, where $`\mathrm{\ell }`$ is the mean free path of a quasiparticle. In <sup>3</sup>He, $`\sigma `$ is very large because $`\mathrm{\ell }/\xi _0\sim 10^3`$. The noise strength is $`T_f\sim \mathrm{Gi}^{-1}\left[1-(T/T_c)\right]^{-1/2}`$, where $`\mathrm{Gi}=\nu (0)\xi _0^3T_c\sim 10^4`$ is the Ginzburg number and $`\nu (0)`$ is the normal density of states.
Results. – We solved Eq. (1) by the implicit Crank-Nicholson method. The integration domain was equal to $`150^3`$ units of Eq. (1) with $`200^3`$ mesh points. The boundary conditions were taken as $`_z\psi =ik\psi `$ with a constant $`k`$ at the top and the bottom of the integration domain. This implies a superflow $`j_s=k|\psi |^2`$ along the $`z`$–axis far away from the temperature bubble. The simulations were carried out on a massive parallel computer, the Origin 2000, at Argonne National Laboratory.
Selected results are shown in Fig. 1. One sees (Fig. 1a-c) that without fluctuations (numerical noise only) the vortex rings nucleate upon the passage of the thermal front. Not all of the rings survive: the small ones collapse and only the big ones grow. Although the vortex lines are centered around the point of the quench, they exhibit a certain degree of entanglement. After a long transient period, most of the vortex rings reconnect and form an almost axisymmetric configuration.
We find that the fluctuations have a strong effect at early stages: the vortices nucleate not only at the normal-superfluid interface, but also in the bulk of the supercooled region (Fig. 1d-e). However, later on, small vortex rings in the interior collapse and only larger rings (primary vortices) survive and expand (Fig. 1f).
To elucidate the details of nucleation we considered an axisymmetric version of Eq. (1) (depending only on the $`r`$ and $`z`$ coordinates, $`\mathrm{\Delta }=\partial _r^2+r^{-1}\partial _r+\partial _z^2`$) for realistic <sup>3</sup>He parameters : $`k\ll 1`$, $`E_0\gg 1`$, and $`\sigma \sim 10^3`$. The domain was $`500^2`$ with $`1000^2`$ mesh points. We have found that without thermal fluctuations the vortices nucleate at the front of the normal-superfluid interface (black/white border in Fig. 2a-c), analogous to the 3D case. The initial instability is seen as a corrugation of the interface. The interface propagates towards the center, leaving the vortices behind. As thermal fluctuations are turned on, vortex rings also nucleate in the bulk of the supercooled region (Fig. 2d), resulting in the creation of the secondary vortex/antivortex pairs. We have found that the “primary” vortices prevent the supercurrent from penetrating into the region filled with the secondary vortices. One sees that the primary vortices encircle the brighter spot in Fig. 2, indicating a larger value of the order parameter and thus a smaller value of the supercurrent. As a result the secondary vortices either annihilate with antivortices due to their mutual attraction or collapse due to the absence of the inflating superflow. Fig. 3 shows the number of vortices $`N^+`$ and antivortices $`N^{-}`$ vs time with and without fluctuations. Fluctuations initially create a very large number ($`\sim 10^4`$) of vortices and antivortices in the bulk, which then annihilate. The resulting number of surviving vortices $`N=N^+-N^{-}`$ is only weakly dependent on fluctuations.
Shown in Fig. 4 is the number of vortex rings $`N`$ vs quench parameters and applied current $`k`$. At small $`k`$, $`N`$ shows a threshold behavior, while becoming almost linear for larger $`k`$ values. The deviations from a linear law appear close to the value of the critical current $`k_c=1/\sqrt{3}`$ of a homogeneous system.
Stability of normal-superfluid interface.– Following Ref. , we expand the local temperature $`1-f`$ near $`T_c`$. Let us put $`x=r_c-r`$, where $`r_c`$ is the radius of the surface at which $`T=T_c`$ or $`f=1`$, i.e., $`r_c^2=(3/2)\sigma t\mathrm{log}(t_{\mathrm{max}}/t)`$. A positive $`x`$ is directed towards the hot region. We write $`1-f(r,t)\approx -\alpha (x-vt)`$, where $`\alpha =-[df/dr]_{f=1}=2r_c/\sigma t`$ is the local temperature gradient and $`v=(\alpha \tau _Q)^{-1}`$ is the front velocity, defined through the quench rate $`\tau _Q^{-1}=-\left[\partial f/\partial t\right]_{f=1}`$. We have $`v=\left(3\sigma t-2r_c^2\right)/4r_ct`$. The front starts to move towards the center at $`t>t_{*}=t_{\mathrm{max}}/e`$ and disappears at $`t=t_{\mathrm{max}}`$, when the temperature drops below $`T_c`$. The front velocity accelerates as the hot bubble collapses. Since the front radius $`r_c`$ is large compared to the coherence length, the front can be considered flat. We choose the coordinate $`y`$ parallel to the front.
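These kinematic formulas are easy to check numerically; the short sketch below (with illustrative $`E_0`$ and $`\sigma `$) evaluates $`r_c(t)`$, $`\alpha `$ and $`v`$, confirming that the front velocity vanishes at $`t_{*}=t_{\mathrm{max}}/e`$ and grows as $`t\to t_{\mathrm{max}}`$.

```python
import numpy as np

# Front kinematics of the T = T_c surface; E0 and sigma are illustrative.
E0, sigma = 1.0e3, 1.0e3
t_max = E0**(2.0 / 3.0)
t = np.linspace(1.001 * t_max / np.e, 0.99 * t_max, 400)

r_c = np.sqrt(1.5 * sigma * t * np.log(t_max / t))        # radius where f = 1
alpha = 2.0 * r_c / (sigma * t)                           # local temperature gradient
v = (3.0 * sigma * t - 2.0 * r_c**2) / (4.0 * r_c * t)    # inward front velocity
print(f"v just after t_* = {v[0]:.4f} (should be ~0), v near t_max = {v[-1]:.2f}")
```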
We transform to the frame moving with the velocity $`v`$ and perform the scaling of variables $`\stackrel{~}{x},\stackrel{~}{y}=(x,y)v,\stackrel{~}{t}=tv^2,\stackrel{~}{\psi }=\psi /v`$, and $`u=v^3/\alpha `$. The parameter $`u\sim (\sigma ^2/t_{\mathrm{max}})/\mathrm{log}^2(t_{\mathrm{max}}/t)`$ is of order 1 in the experiment at the initial time but grows rapidly as the hot bubble shrinks. In our numerical simulations, $`u\gtrsim 1`$. Eq. (1) takes the form (we drop tildes in what follows)
$$\partial _t\psi =\mathrm{\Delta }\psi +\partial _x\psi -\frac{x}{u}\psi -|\psi |^2\psi .$$
(2)
The amplitude $`F`$ of the current-carrying solution $`\psi =F\mathrm{exp}(iky)`$ satisfies
$$\partial _x^2F+\partial _xF-\left(\frac{x}{u}+k^2\right)F-F^3=0.$$
(3)
To examine the transverse stability of the stationary solution of Eq. (3) we put $`\psi =(F+w)\mathrm{exp}(iky)`$, where the real and imaginary parts of the perturbation $`w=a+ib`$ are written as
$$\left(\begin{array}{c}a\\ b\end{array}\right)=\left(\begin{array}{c}A\\ iB\end{array}\right)\mathrm{exp}(\lambda (q)t+iqy)$$
(4)
where $`q`$ is the wavenumber of the transverse undulation and $`\lambda (q)`$ the corresponding growth rate. Substituting this ansatz into Eq. (2) and linearizing in $`w`$, we obtain
$`\mathrm{\Lambda }A+2\chi B`$ $`=`$ $`\partial _x^2A+\partial _xA-{\displaystyle \frac{x}{u}}A-3F^2A`$ (5)
$`\mathrm{\Lambda }B+2\chi A`$ $`=`$ $`\partial _x^2B+\partial _xB-{\displaystyle \frac{x}{u}}B-F^2B`$ (6)
where $`\chi =kq`$, $`\mathrm{\Lambda }=\lambda +q^2`$, and the coordinate has been shifted as $`x\to x+uk^2`$.
The eigenvalue $`\mathrm{\Lambda }`$ for small $`\chi `$ can be found as an expansion in $`\chi `$: $`\mathrm{\Lambda }=\chi \mathrm{\Lambda }_1+\chi ^2\mathrm{\Lambda }_2+\mathrm{\dots }`$, and similarly for $`A`$ and $`B`$. In zeroth order in $`\chi `$ one has $`A_0=0,B_0=F`$. In first order we derive $`B_1=0`$ and
$$\partial _x^2A_1+\partial _xA_1-\frac{x}{u}A_1-3F^2A_1=2F.$$
(7)
The solution $`A_1=2u\partial _xF`$ is obtained by differentiating Eq. (3). In second order, Eq. (6) gives
$$\partial _x^2B_2+\partial _xB_2-\frac{x}{u}B_2-F^2B_2=4u\partial _xF+\mathrm{\Lambda }_2F.$$
(8)
A zero mode of Eq. (8) is $`F`$. The adjoint function is $`B^+=F\mathrm{exp}(x)`$. Eq. (8) has a solution if the solvability condition with respect to the zero mode is fulfilled
$$\int _{-\infty }^{\infty }dx\,Fe^x\left(4u\partial _xF+\mathrm{\Lambda }_2F\right)=0$$
(9)
After integration we obtain $`\mathrm{\Lambda }_2=2u`$. Returning to the original notations, we obtain the exact result
$$\lambda =q^2(2uk^2-1)+O(q^4)$$
(10)
The instability occurs above the threshold value $`k_v^2=(2u)^{-1}`$, or $`k_v^2\sim \alpha ^{2/3}/u^{1/3}\sim \sigma ^{-1}\mathrm{log}(t_{\mathrm{max}}/t)`$ in the Ginzburg–Landau units. Since this is much smaller than the bulk critical value $`k_c=1/\sqrt{3}`$, the threshold can be exceeded by a very small superflow.
The eigenvalue $`\mathrm{\Lambda }`$ vs $`\chi `$ and $`u`$ can be derived independently in the limit $`u\gg 1`$, assuming that $`\mathrm{\Lambda }\sim \chi \sim 1/u\ll 1`$. Substituting $`x=\overline{x}-u\gamma `$, where $`\gamma `$ determines the position of the interface, we treat the terms containing $`\mathrm{\Lambda },\chi `$ and $`\overline{x}/u`$ as perturbations for $`ϵ=1/u\to 0`$. For $`ϵ=0`$ and $`\gamma >1/4`$ Eq. (3) possesses a front solution. This solution should be matched with its asymptotics for $`\overline{x}>0`$, and this match fixes the value of $`\gamma `$. As was shown in Ref. , for $`u\to \mathrm{\infty }`$ the matching is possible for $`\gamma \to 1/4`$.
For $`ϵ=0`$ Eqs. (5) and (6) have two zero modes: $`(A,B)=(\partial _xF,0)`$ and $`(A,B)=(0,F)`$. The solvability conditions result in the characteristic equation for $`\mathrm{\Lambda }`$:
$$\mathrm{\Lambda }^2+\frac{c_1}{u}\mathrm{\Lambda }-4c_2\chi ^2+\frac{d}{u^2}=0,$$
(11)
where the coefficients $`c_{1,2},d`$ are given in the form of integrals of $`F`$ with the corresponding zero modes over the interval $`-\infty <\overline{x}<x_0`$. The constant $`x_0`$ is determined from the condition $`d=0`$, because for $`\chi =0`$ there is always an exact solution of Eq. (6) with $`\mathrm{\Lambda }=0`$. Substitution of the solutions for $`\gamma \to 1/4`$ yields $`c_1\approx 2`$, $`c_2\approx 1`$, and the largest growth rate of transverse perturbations
$$\lambda =\sqrt{1/u^2+4k^2q^2}-1/u-q^2.$$
(12)
Numerical solution of Eqs. (5) and (6) demonstrates excellent agreement with the theoretical expression Eq. (12).
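The consistency of the two expressions can be illustrated directly: the sketch below evaluates Eq. (12) on a grid of wavenumbers for illustrative $`u`$ and $`k`$ (above the threshold $`k_v`$), compares it with the small-$`q`$ expansion Eq. (10), and locates the fastest-growing mode, which approaches $`q=k`$ with growth rate $`k^2`$ in the large-$`u`$ limit.

```python
import numpy as np

# Compare Eq. (12) with the small-q expansion Eq. (10); u, k illustrative.
u, k = 50.0, 0.3                       # k above threshold k_v = (2u)**-0.5 = 0.1
q = np.linspace(1e-4, 2.0 * k, 1000)
lam_full = np.sqrt(1.0 / u**2 + 4.0 * k**2 * q**2) - 1.0 / u - q**2   # Eq. (12)
lam_expand = q**2 * (2.0 * u * k**2 - 1.0)                            # Eq. (10)
q_star = q[np.argmax(lam_full)]
print(f"fastest mode q* = {q_star:.3f} (-> k = {k} for large u), "
      f"max growth rate = {lam_full.max():.4f} (-> k^2 = {k**2:.4f})")
```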
Now we apply the above results to estimate the number of nucleated vortices. The evolution of perturbations near the interface is given by the integral
$$w\sim \int dq\,\mathrm{exp}[\lambda (q)t+iqy].$$
(13)
In the case of a thermal quench the parameter $`u\to \mathrm{\infty }`$ as time increases (the normal/superfluid front accelerates); therefore the limit of large $`u`$ applies. For $`u\to \mathrm{\infty }`$ one has $`\lambda =2|kq|-q^2`$. The maximum growth rate is achieved at $`q=k`$ and is simply $`k^2`$. Taking into account that it is the thermal noise which provides the initial perturbations for the interface instability, one derives from Eqs. (12,13) $`w\sim \sqrt{T_f}\mathrm{exp}[k^2t+iky]`$. The number of vortices is estimated as $`N=r_0k`$, where $`r_0`$ is the radius of the front at which the perturbations $`w`$ become of order one. The time interval $`t_0`$ corresponding to $`|w|=1`$ is $`t_0\approx k^{-2}\mathrm{log}(T_f^{-1})`$. Vortices have no time to grow if $`t_0\gtrsim t_{\mathrm{max}}`$. For $`r_0`$ one then finds:
$$r_0^2=(3/2)\sigma (t_{\mathrm{max}}t_0)\mathrm{log}\left(t_{\mathrm{max}}/(t_{\mathrm{max}}t_0)\right).$$
(14)
The number of vortices with logarithmic accuracy is
$$N\sim kr_0\sim \sqrt{\sigma }E_0^{1/3}\sqrt{\left(v_s/v_c\right)^2-\beta ^2\mathrm{log}(T_f^{-1})/E_0^{2/3}}$$
(15)
where $`\beta =\text{const}`$, while $`v_s`$ and $`v_c`$ are the imposed and the critical GL superflow velocities, respectively. This estimate is in agreement with the results of the simulations, see Fig. 4. Eq. (15) exhibits a slow logarithmic dependence of the number of vortices at the interface on the level of fluctuations, in agreement with the results presented in Fig. 3. For $`\sigma \sim 10^3`$ and $`E_0\sim 10^2`$–$`10^3`$, which is close to the experimental values of the parameters, our analysis results in about 10 surviving vortices per heating event. This is consistent with Ref. , where as many as 6 vortices per neutron were detected.
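The estimate is straightforward to evaluate; in the sketch below the parameter values are illustrative, and the $`O(1)`$ prefactor relating $`N`$ to $`kr_0`$ (here taken as one ring per undulation wavelength $`2\pi /k`$) is not fixed by the logarithmic-accuracy argument.

```python
import numpy as np

# Order-of-magnitude evaluation of Eqs. (14)-(15); parameters illustrative.
sigma, E0, Tf, k = 1.0e3, 1.0e3, 1.0e-4, 0.4
t_max = E0**(2.0 / 3.0)
t0 = np.log(1.0 / Tf) / k**2          # time for the undulations to reach O(1)
if t0 >= t_max:
    print("t0 >= t_max: the instability has no time to grow, N ~ 0")
else:
    r0 = np.sqrt(1.5 * sigma * (t_max - t0) * np.log(t_max / (t_max - t0)))
    print(f"k*r0 = {k * r0:.0f}; with one ring per wavelength 2*pi/k: "
          f"N ~ {k * r0 / (2 * np.pi):.0f} vortex rings")
```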
In conclusion, we have found that the rapid normal–superfluid transition in the presence of superflow is dominated by a transverse instability of the normal/superfluid interface propagating from the bulk into the normal region. This instability produces primary vortex loops which then separate from the interface. Simultaneously, a large number of vortex/antivortex pairs are created by fluctuations in the bulk of the supercooled region formed after the collapse of the hot bubble. The primary vortex loops screen out the superflow and cause annihilation of the vortex/antivortex pairs in the bulk. The number of surviving vortices is determined by the superflow-dependent optimum wavevector of the interface instability.
We are grateful to V. Eltsov, M. Krusius, G. Volovik and W. Zurek for stimulating discussions. This research is supported by US DOE, grant W-31-109-ENG-38, and by NSF, STCS #DMR91-20000.
# Phonon-mediated thermal conductance of mesoscopic wires with rough edges
## Abstract
We present an analysis of acoustic-phonon propagation through long, free-standing, insulating wires with rough surfaces. Owing to a crossover from ballistic propagation of the lowest-frequency phonon mode at $`\omega <\omega _1=\pi c/W`$ to a diffusive (or even localized) behavior upon the increase of phonon frequency, followed by reentrance into the quasiballistic regime, the heat conductance of a wire acquires an intermediate tendency to saturate within the temperature range $`T\sim \hbar \omega _1/k_B`$.
During recent years, low-temperature heat transport experiments on electrical insulators have been extended to mesoscopic systems , where the wavelength of thermal phonons can be comparable to the geometrical size of the device. In this regime, phonon transport through a thermal conductor, such as an electrically insulating solid wire formed from an undoped semiconductor, may exhibit ballistic waveguide propagation . This possibility has stimulated interest in the guided-wave, phonon-mediated heat conductance $`\varkappa (T)`$ of ballistic wires (with a width $`W`$ much smaller than the length $`L`$) connecting a heat reservoir to a thermal bath .
In the present paper, we analyse the low-temperature ($`k_BT\lesssim \hbar \pi c/W`$, where $`c`$ is the sound velocity) heat transport in relatively long ($`L/W\sim 10^2`$) free-standing insulating wires by taking into account the effect of surface roughness. The idea behind this analysis is based on the assumption that, in a long wire, the edge or surface roughness may result in strong scattering, and even in the localization of acoustic waves in the intermediate-frequency range, whereas the low-frequency part of the phonon spectrum would always have ballistic properties due to the specifics of sound waves. In the high-frequency part of the spectrum, phonons would have quasiballistic properties, too. This may result in a non-monotonic temperature dependence of the thermal conductance of such a system.
To verify the possibility of the existence of such a regime, in principle, we investigate the dependence on frequency $`\omega `$ of the transmission coefficient $`\mathrm{\Gamma }(\omega )`$ using a simplified model of a solid waveguide, chosen to reflect two features of this problem: the influence of roughness on the propagation of vibrations and the suppression of scattering from the roughness upon decreasing the excitation frequency. We approach the problem numerically, by studying the transmission coefficient averaged over many realizations of a wire characterized by a given distribution of length scales in the surface roughness, and use this to find the heat conductance $`\varkappa (T)`$. Phonon-mediated heat transport in quasi-one-dimensional systems can be studied using the same theoretical techniques as electron transport. However, in contrast to electrical conductance, which at low temperatures is determined by the transport properties of electron waves at the Fermi energy, the thermal conductance of a phonon waveguide is determined by all phonon energies up to $`\hbar \omega \sim k_BT`$. This smears out the effects of transverse confinement in the temperature dependence of $`\varkappa (T)`$ in ballistic systems , but results in a pronounced feature in $`\varkappa (T)`$ for strongly disordered free-standing wires. The latter has the form of an intermediate saturation regime in $`\varkappa (T)`$ following a linear $`T`$–dependence at the lowest temperatures, with an anomalous length dependence of the saturation value, which scales as $`\varkappa _{sat}\propto L^{-1/2}`$ for wires with a white-noise spectrum of roughness.
Transmission coefficient analysis and localization of acoustic modes. Below, we classify phonon modes in a wire by the number $`n`$ of nodes in the displacement amplitude. The $`n=0`$ lowest-frequency vibrational mode ($`\omega <\omega _1=c\pi /W`$, where $`c`$ is the velocity of sound) corresponds to equal displacements over the cross section of a free-standing wire and has a linear dispersion; the others have frequency gaps, $`\omega _n(q)=\sqrt{\left(cq\right)^2+\left(\pi nc/W\right)^2}`$. The aforementioned feature originates from the fact that edge disorder suppresses the transmission of all coherent phonon modes at high frequencies, but has little influence on the $`n=0`$ mode at $`\omega \to 0`$, where $`\mathrm{\Gamma }(\omega \to 0)\to 1`$. The transmission coefficient of this mode is mainly determined by direct backscattering, whose rate depends on the intensity of the surface roughness harmonic $`\delta W_q^2`$ with wave number $`q\approx \omega /c`$. According to Rayleigh scattering theory in one dimension, the mean free path of this mode diverges at $`\omega \to 0`$ as
$$l_0(\omega )\sim \left(\omega _1/\omega \right)^2\left(W^2/\left(\delta W_{q=\omega /c}\right)^2\right)W,$$
(1)
even if long-wavelength Fourier components $`\delta W_q`$ are equally represented in the surface roughness $`\delta W(x)`$. In an infinitely long wire with white-noise randomness on the surface, this results in a localization length $`L_{*}`$ for the lowest phonon mode which diverges at $`\omega \to 0`$ as
$$L_{*}(\omega )\sim l_0(\omega )\propto \omega ^{-2}.$$
(2)
The latter statement is based on the equivalence between the localization problems for various types of waves . For a wire shorter than $`L_{*}`$, scattering yields
$$1-\mathrm{\Gamma }(\omega )=\alpha \omega ^2\quad \text{at}\quad \omega <\omega _1.$$
(3)
In contrast, at higher frequencies, all modes in a wire with white-noise randomness backscatter (either via intra- or inter-mode processes), typically on the length scale $`l\sim W/\left(\delta W/W\right)^2`$. Hence, for frequencies $`\omega >\omega _1`$, the transmission coefficient tends to follow a linear frequency dependence, $`\mathrm{\Gamma }\left(\omega \right)\sim \left(\omega W/c\right)\left(l/L\right)\mathrm{ln}(W\omega /c)`$, which is typical for diffusion in quasiballistic systems . The crossover from the low-frequency regime to the intermediate-frequency range in a long enough wire can, therefore, be nonmonotonic, with a pronounced fall towards zero at $`\omega \sim \omega _1`$, similar to that discussed by Blencowe in relation to phonon propagation in thin films. As a result, an irregular wire may exhibit ballistic phonon propagation at low frequencies, whereas at higher frequencies, $`\omega \gtrsim \omega _1`$, surface roughness would yield diffusive phonon propagation, or even localization.
The numerical simulations reported below confirm the above naive expectations. In these simulations, we model the phonons in a crystalline wire cut from a thin film (with the thickness much less than the wire width) as longitudinal waves in a two-dimensional strip whose width $`W(x)`$ fluctuates with rms value $`\langle \left(\delta W/W\right)^2\rangle ^{1/2}=0.1`$ on a length scale $`\xi `$ longer than or of the order of $`W`$. The effect of the width fluctuations consists in the scattering of acoustic waves propagating along the wire. The model that we adopt here gives a very simplified representation of a real system, since we ignore the existence of torsional and transverse bending modes of the wire excitations, which are known to transfer heat in adiabatic ballistic constrictions . However, it takes into account two features of sound waves: their scattering and possible localization by the surface roughness, and the almost ballistic properties at both ultra-low and high frequencies. In a continuum model, these lattice vibrations are described by a displacement field $`u(x,y)`$, which obeys the 2D wave equation inside the wire
$$\omega ^2u+c^2\nabla ^2u=0.$$
(4)
Displacements obey free boundary conditions $`\mathbf{n}\cdot \nabla u=0`$ at $`y=\pm (W/2+\delta W_s(x))`$, where $`s=1,2`$ indicates the upper and lower edges of the wire, and $`\mathbf{n}`$ stands for the local normal direction to the wire edge. In the simulations, we discretize Eq. (4) on a square lattice with about 200 sites across the wire cross section and then compute $`\mathrm{\Gamma }(\omega )`$ numerically using the transfer matrix method for 100 disorder realizations . Our numerical code overcomes problems of instability by $`QL`$-factorizing the transfer matrices at each step . Furthermore, at the end of each calculation, the S-matrix is checked for unitarity.
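A simple sanity check on such a discretization is that the transverse eigenmodes of a uniform strip with free edges reproduce the cutoff frequencies $`\omega _n=\pi nc/W`$; the sketch below does this for a cell-centered finite-difference grid. The grid parameters are illustrative and unrelated to the production code.

```python
import numpy as np

# Transverse modes of the uniform strip with free edges; cutoffs -> pi*n*c/W.
c, W, M = 1.0, 1.0, 200                      # sound speed, width, mesh points
h = W / M
lap = (-2.0 * np.eye(M) + np.diag(np.ones(M - 1), 1)
       + np.diag(np.ones(M - 1), -1)) / h**2
lap[0, 0] = lap[-1, -1] = -1.0 / h**2        # cell-centered Neumann (free-edge) closure
omega_n = np.sqrt(np.maximum(np.linalg.eigvalsh(-c**2 * lap), 0.0))
print("lowest cutoffs:", np.round(omega_n[:4], 4))
print("pi * n * c / W:", np.round([np.pi * n * c / W for n in range(4)], 4))
```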
Fig. 1 shows the results of such simulations obtained for wires with a white-noise spectrum of roughness and an aspect ratio $`L/W=30`$. Both the low- and high-frequency asymptotic behaviour of the transmission coefficient confirm the expected non-monotonic dependence of $`\mathrm{\Gamma }(\omega )`$ for a wire of length $`L\lesssim L_{*}`$. Calculations using longer wires (with $`L\gg L_{*}`$) show a deeper fall in $`\mathrm{\Gamma }(\omega )`$ over a broader range of frequencies, since phonons with frequencies $`\omega \sim \omega _1`$ behave as localized vibrations, and their transmission coefficient becomes exponentially small. The inset of Fig. 1 shows the corresponding frequency dependence of the inverse localization length of very long wires (with $`L/W\sim 1000`$), which is approximately in agreement with the result of Eq. (2). Therefore, the transmission of phonons through rough wires with white-noise roughness of the edges can be characterised by the following regimes. At low frequencies, $`\omega <\omega _{*}`$, where $`\omega _{*}(L)\sim \omega _1\left(W^2/\left(\delta W_{q\to 0}\right)^2\right)^{1/2}\sqrt{W/L}`$, $`\mathrm{\Gamma }\approx 1`$, so that phonons pass through the wire almost ballistically. At intermediate frequencies, $`\omega _{*}<\omega \lesssim \omega _1`$, phonons in the lowest mode are localized on a length scale shorter than the wire length, and $`\mathrm{\Gamma }\to 0`$. Upon a further increase of $`\omega `$, modes with $`n\ne 0`$ take part in the scattering, the multi-mode localization length increases, and at $`\omega \sim cL/Wl`$ the localization length becomes longer than the sample length, thus restoring a quasi-ballistic character to phonon transport . The first two regimes of low-frequency phonon propagation ($`\omega <\omega _1`$) through a wire with $`L\gg l`$ can be jointly described as a function of a single parameter, $`L/l_0(\omega )`$
$$\mathrm{\Gamma }(\omega <\omega _1)=p(\omega /\omega _{*});\quad p(0)=1,\quad p(x\gg 1)\sim e^{-x}.$$
(5)
Note that the decline of $`\mathrm{\Gamma }`$ in the vicinity of $`\omega \omega _1`$ strongly depends on the Fourier spectrum of the roughness. To illustrate this, we analyzed the effect of roughness composed of harmonics with wave numbers $`q`$ restricted to two intervals: (a) $`0<q<\pi /W`$ and (b)$`\frac{3}{2}\pi /W<q<\frac{7}{2}\pi /W`$. The result is shown in Fig. 2 (a) and (b), respectively. The spectral form of the randomness is relevant, since it determines the intensity of Bragg-type backscattering processes. Such processes are the most efficient in forming localization , in which an incident phonon in mode $`n`$ with wave number $`k`$ along the wire axis scatters elastically to mode $`n^{}`$ with wave number $`k^{}`$, $`k^{}=\sqrt{k^2+(n^2n^2)(\pi /W)^2}`$. Therefore, values of $`q=k+k^{}`$ represented in the spectrum of $`\delta W_q`$ identify the regions of frequencies for which the intra- and inter-mode Bragg-type scattering is allowed. In Fig. 2, the shaded frequency intervals indicate the corresponding conditions for two lowest modes, $`n=0,1`$.
Thermal conductance. In the regime of elastic phonon propagation, the heat flow $`\dot{Q}`$ through the wire can be related to the phonon transmission coefficient as
$$\dot{Q}=\sum _{n,m}\int _0^{\infty }\frac{dk}{2\pi }\,\hbar \omega _n(k)\,v_n(k)\left(f_1(n,k)-f_2(n,k)\right)\left|t_{nm}\right|^2,$$
where $`v_n=\frac{d\omega _n}{dk}`$ is the 1D velocity of a phonon in the mode $`\omega _n(k)`$, and $`f_{1(2)}`$ are the equilibrium distributions of phonons in the left (right) reservoir. When the temperature difference $`\mathrm{\Delta }T`$ between the reservoirs is small , $`\mathrm{\Delta }T\ll T`$, the thermal conductance, $`\varkappa =\dot{Q}/\mathrm{\Delta }T`$, has the form
$$\varkappa =\int _0^{\infty }\frac{d\omega }{2\pi }\frac{(\hbar \omega )^2}{k_BT^2}\frac{\mathrm{exp}(\hbar \omega /k_BT)}{\left[\mathrm{exp}(\hbar \omega /k_BT)-1\right]^2}\mathrm{\Gamma }(\omega ).$$
(6)
For a wire with the transmission coefficient shown in Fig. 2(a), where $`\mathrm{\Gamma }(\omega )\approx 1+\omega /\omega _1`$, $`\varkappa (T)`$ is plotted as the dashed line (1) in Fig. 3; it shows the crossover from linear to quadratic temperature dependence (at $`T\sim \vartheta _1=6\hbar \omega _1/k_B\pi ^2`$) discussed in Ref.
$$\varkappa \approx \left(k_B^2\pi /6\hbar \right)T+\left(0.7k_B^2/\hbar \right)T^2/\vartheta _1.$$
(7)
The ballistic character of heat transport in Eq. (7) is reflected in the fact that $`\varkappa `$ is independent of the sample length.
In a wire where the transmission coefficient drops substantially at $`\omega \sim \omega _1`$, as in Fig. 1, we approximate the low-frequency behavior of the transmission coefficient by a step function, $`\mathrm{\Gamma }(\omega )=\theta (\omega _1-\omega )`$, which yields an intermediate saturation of the thermal conductance at temperatures $`T\gtrsim \vartheta _1`$,
$$\varkappa (T)\approx \frac{k_B\omega _1}{2\pi }\{\begin{array}{cc}2T/\vartheta _1,& T\ll \vartheta _1;\\ 1,& \vartheta _1<T<\vartheta _1L/W.\end{array}$$
(8)
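Both limits of Eq. (8) follow from a direct numerical evaluation of Eq. (6); the sketch below uses the step transmission and units in which $`\hbar =k_B=\omega _1=1`$, so that $`\varkappa `$ comes out in units of $`k_B\omega _1`$.

```python
import numpy as np
from scipy.integrate import quad

# Eq. (6) with the step transmission Gamma = theta(omega_1 - omega) of Eq. (8);
# units hbar = k_B = omega_1 = 1.
def kappa(T):
    # (x/2/sinh(x/2))^2 is the standard rewrite of x^2 e^x / (e^x - 1)^2
    integrand = lambda w: (w / T)**2 / (4.0 * np.sinh(w / (2.0 * T))**2) / (2.0 * np.pi)
    return quad(integrand, 1e-9, 1.0)[0]   # Gamma cuts the integral off at omega_1

for T in (0.02, 0.1, 0.5, 2.0, 10.0):
    print(f"T = {T:5.2f}  kappa = {kappa(T):.4f}")
# expected: kappa ~ (pi/6) T at low T, saturating at 1/(2 pi) ~ 0.159 for T >~ 1
```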
The numerical result, shown in Fig. 3 by a solid line, is in qualitative agreement with this expectation. The horizontal arrow indicates the saturation value expected from equation (8). The upper limit of the saturation interval in Eq. (8) indicates the restoration of ballistic conditions for phonon propagation at wavelengths short enough to avoid wave diffraction at corrugated surfaces .
Theoretically, the intermediate saturation $`\varkappa (T)\approx \varkappa _{sat}`$ at low temperatures is a more robust feature in longer wires of length $`L\gg l\sim W/\left(\delta W/W\right)^2`$, where even the lowest mode, $`n=0`$, is localized at frequencies $`\omega _{*}(L)<\omega <\omega _1`$. Here $`\omega _{*}(L)\sim \omega _1\left(W^2/\left(\delta W_{q\to 0}\right)^2\right)^{1/2}\sqrt{W/L}`$ is the frequency at which the localization length of the $`n=0`$ acoustic mode becomes comparable to the wire length. In this case, the saturation takes place at a lower temperature $`\vartheta _{*}\sim \hbar \omega _{*}/k_B`$, and we find that the saturation value of the thermal conductance within the temperature interval $`\vartheta _1\gtrsim T>\vartheta _{*}`$ has an anomalous dependence on the sample length,
$$\varkappa _{sat}\approx k_B\int _0^{\infty }\frac{d\omega }{2\pi }p\left(\frac{\omega }{\omega _{*}}\right)\sim \frac{k_B\omega _1}{2\pi }\left(\frac{l}{L}\right)^{1/2}.$$
(9)
The result for white-noise roughness is an example of a more general scaling law: for a wire with a fractally rough edge, $`\left(\delta W_q\right)^2\propto q^z`$, one obtains $`\varkappa _{sat}\propto L^{-1/\left(2+z\right)}`$.
In summary, our analysis of phonon propagation through long free-standing insulating wires with rough surfaces has highlighted a feature in the temperature dependence of the heat conductance $`\varkappa (T)`$ which results from the crossover from ballistic propagation of the lowest-frequency phonon mode at $`\omega \ll \omega _1`$ to diffusive (or even localized) behavior, with a re-entrance into the quasi-ballistic regime. Although the model used in this calculation has been restricted to only one (longitudinal) excitation branch in the wire spectrum, we believe that this feature persists also in more realistic multi-mode models (which take into account torsional modes and wire vibrations of other polarizations), since all of the lowest sound modes are scattered by the surface roughness at a rate that decreases with decreasing frequency. The drastic difference between the phonon transport properties in different frequency intervals results in a tendency of the heat conductance of a wire to saturate provisionally in the temperature range $`T\sim \pi \hbar c/Wk_B`$. The intermediate saturation value of the wire heat conductance depends on the length of the wire and, in wires longer than the scattering length of phonons with frequencies $`\omega \sim \omega _1`$, has an anomalous length dependence, $`\varkappa _{sat}\propto L^{-1/2}`$.
The authors thank M. Roukes and J. Worlock for attracting our attention to this problem. This work has been funded in part by EPSRC and a European Union TMR programme.
# A Na I Absorption Map of the Small-Scale Structure in the Interstellar Gas Toward M15
## 1 Introduction
The evidence for significant subparsec-scale structure in the diffuse interstellar medium (ISM) has been accumulating recently through measurements of H I 21 cm absorption toward high-velocity pulsars (Frail et al. (1994)) and extended extragalactic radio sources (Faison et al. (1998)) as well as optical observations of the interstellar Na I D absorption toward globular clusters (Bates et al. (1995)) and binary stars (Meyer & Blades (1996); Watson & Meyer (1996)). At the pulsar (∼10 to 10<sup>2</sup> AU), binary (∼10<sup>2</sup> to 10<sup>4</sup> AU), and globular cluster (∼10<sup>4</sup> to 10<sup>6</sup> AU) scales sampled, all of these observations imply dense concentrations of atomic gas ($`n_H`$ ≳ 10<sup>3</sup> cm<sup>-3</sup>) in otherwise diffuse sightlines. The apparent ubiquity of this structure should be accounted for in any successful ISM model. However, due to their large overpressures with respect to the intercloud medium, such small-scale condensations cannot be accommodated in any abundance by the standard McKee & Ostriker (1977) pressure equilibrium model for the ISM. Heiles (1997) has proposed a geometric solution where this apparent structure is due to filaments or sheets of lower density gas aligned along a given sightline that produce significant column density differences (and spuriously high inferred volume densities) over transverse length scales as small as 30 AU. In an approach that removes the requirement of pressure equilibrium, Elmegreen (1997) proposes a fractal ISM model driven by turbulence that produces self-similar structure down to the smallest scales.
A chief limitation impacting the interpretation of the diffuse ISM structures observed to date has been the rather poor small-scale sightline coverage. In particular, each binary sightline samples the structure at only one scale along one direction. The few globular cluster studies have typically involved 10 to 15 stars and have sampled only relatively large separations. However, the bright extended cores of some globulars do provide a background source suitable for mapping the absorption-line structure of intervening gas at much higher spatial resolution, in two dimensions and with full sampling. With a core $`V`$-band surface brightness of 14.21 mag arcsec<sup>-2</sup> (Harris (1996)) falling to about 18 mag arcsec<sup>-2</sup> at a radius of 30″ (Hertz & Grindlay (1985)), the best example of such a cluster is M15 ($`d`$ = 10.4 ± 0.8 kpc; $`v_{LSR}`$ = −99 km s<sup>-1</sup>). Spectra of selected stars in M15 have revealed significant interstellar Na I absorption at $`v_{LSR}`$ = +3 and +68 km s<sup>-1</sup> that varies in strength on scales ranging from about 1′ to 15′ (Lehner et al. (1999); Pilachowski et al. (1998); Kennedy et al. (1998); Langer, Prosser, & Sneden (1990)). In this Letter, we present a fully-sampled, two-dimensional map of the Na I absorption over the central 27″ x 43″ of M15 as part of a new effort to probe the patterns of such variations down to scales of a few arcseconds.
## 2 Observations
The M15 observations were obtained in 1998 August using the DensePak fiber optic array and Bench spectrograph on the 3.5 m WIYN telescope (the WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories) at Kitt Peak National Observatory. The DensePak array consists of 91 fibers bonded into a 7 x 13 rectangle that covers 27″ x 43″ of sky with center-to-center fiber (3″ diameter) spacings of 4″ at the WIYN F/6.4 Nasmyth focus (Barden, Sawyer, & Honeycutt (1998)). The observations were conducted with the centermost fiber positioned at the center of M15 (RA = 21<sup>h</sup> 29<sup>m</sup> 58.3<sup>s</sup>, Dec = +12° 10′ 00″ (J2000.0)) and the major axis of the array aligned along a N–S direction. The spectrograph was configured with the Bench camera, a Tek2048 CCD (T2KC), the Echelle grating, and an interference filter (X17) providing spectral coverage from 5725 to 5975 Å at a 2.2 pixel resolution of 0.27 Å or 14 km s<sup>-1</sup>.
Utilizing this instrumental setup in queue mode, a total of four 1300 s exposures were taken of M15 under sky conditions characterized by ∼1″ seeing. These raw CCD frames were bias-corrected, sky-subtracted (using a 1300 s exposure of adjacent blank sky), flat-fielded, combined, and wavelength-calibrated using the NOAO IRAF data reduction package to extract the net spectrum yielded by each fiber. Based on previous observations with the Bench spectrograph Na I setup and on data comparisons with the KPNO 4-m echelle spectrograph, the uncertainty in the zero level of these spectra due to uncorrected scattered light effects should be less than 3%. Accounting for 5 broken fibers and 3 others with low counts, 83 of the 91 fibers produced usable spectra with S/N ratios ranging from 30 at some edgepoints of the array to over 150 nearer the center. In order to remove the telluric absorption in the vicinity of the Na I D<sub>2</sub> $`\lambda `$5889.951 and D<sub>1</sub> $`\lambda `$5895.924 lines, these spectra were all divided by an atmospheric template based on observations of several rapidly-rotating early-type stars with little intervening interstellar matter. Figure 1 displays the resulting Na I spectra for the center of M15 and three positions of various separations and angles with respect to the center. Three Na I doublets are apparent and well-separated in velocity in all of these spectra $``$ the bluemost is due to stellar Na I absorption in M15, the middle or “local ISM” (LISM) component is due to interstellar gas at $`v_{LSR}`$ = +3 km s<sup>-1</sup>, and the redmost or “intermediate velocity” (IVC) component is due to presumably more distant gas at $`v_{LSR}`$ = +68 km s<sup>-1</sup>. It is also apparent from Figure 1 that both the LISM and IVC Na I absorption toward M15 exhibit significant variations on scales much less than 1′. Over the whole map, the equivalent width of the Na I D<sub>1</sub> line varies from 180 to 365 mÅ for the LISM component and from 40 to 155 mÅ for the IVC component.
In terms of a surface map, Figures 2 and 3 show how the Na I columns corresponding to the LISM and IVC components vary across the face of M15 at the 4″ fiber resolution. These column densities were measured using the FITS6p profile-fitting package (Welty, Hobbs, & Kulkarni (1994)) to simultaneously fit the D<sub>2</sub> and D<sub>1</sub> lines in each fiber, assuming single-component Voigt profiles for both the LISM and IVC Na I doublets. Based on the higher resolution ($`\mathrm{\Delta }`$$`v`$ = 9.8 km s<sup>-1</sup>) Na I data of Kennedy et al. (1998) for two stars in M15, this assumption should be reasonable for estimating the IVC column density but is definitely an approximation to the multicomponent LISM absorption structure. In the case of the IVC fits, the derived velocities exhibit a marginal increase of ∼1 km s<sup>-1</sup> from N to S across the map and the derived line widths ($`b`$-values) are typically near 3 km s<sup>-1</sup>, with some as low as 2.2 km s<sup>-1</sup>. The lowest IVC Na I column densities in Figure 3 correspond to the weakest lines and have formal profile-fitting uncertainties of about 10–20%. The highest IVC Na I columns are more uncertain but should be accurate to within a factor of two unless the profiles are dominated by unresolved saturated structure that would mask even higher columns. In the case of the LISM fits, the derived $`b`$-values (typically near 8 km s<sup>-1</sup>) lead to Na I columns in Figure 2 that are probably underestimates of the “true” multicomponent values but that are illustrative of the net equivalent width variations.
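For readers unfamiliar with such doublet fits, the sketch below performs a simultaneous single-component fit to a synthetic D<sub>2</sub>/D<sub>1</sub> pair, with Gaussian optical-depth profiles standing in for the Voigt profiles of FITS6p. The Na I oscillator strengths (f ≈ 0.641 and 0.320) and the curve-of-growth constant are standard values; the data arrays and initial guesses are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative single-component fit to a synthetic Na I doublet.
LAM = {"D2": 5889.951, "D1": 5895.924}       # rest wavelengths (Angstroms)
F_OSC = {"D2": 0.641, "D1": 0.320}           # standard oscillator strengths

def doublet(v, logN, v0, b):
    # central optical depth: tau0 = 1.497e-15 N[cm^-2] f lambda[A] / b[km/s]
    profiles = []
    for line in ("D2", "D1"):
        tau0 = 1.497e-15 * 10**logN * F_OSC[line] * LAM[line] / b
        profiles.append(np.exp(-tau0 * np.exp(-((v - v0) / b)**2)))
    return np.concatenate(profiles)

v = np.linspace(-40.0, 40.0, 81)             # velocity grid (km/s)
data = doublet(v, 12.5, 3.0, 8.0) + np.random.normal(0.0, 0.01, 2 * v.size)
popt, _ = curve_fit(doublet, v, data, p0=[12.0, 0.0, 5.0])
print("recovered log N(Na I), v_LSR, b:", np.round(popt, 2))
```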
The only potential sources of stellar contamination in measuring the LISM and IVC Na I absorption are the Ni I $`\lambda `$5892.883 and Ti I $`\lambda `$5899.304 lines that would be located on the blue wing of the IVC D<sub>2</sub> feature and near the center of the IVC D<sub>1</sub> line, respectively. Based on the F3 composite spectral type and low metallicity (\[Fe/H\] = −2.17) of M15 as well as the weakness of the occasional excess absorption observed on the blue wing of the IVC D<sub>2</sub> line (which is mostly excluded by the single-component fit), the impact of any stellar contamination on our derived column densities is likely to be appreciably less than the fitting uncertainties (Montes, Ramsey, & Welty (1999)). The appearance of coherent structures in Figures 2 and 3 suggests that the uncertainties due to stellar line contamination are indeed smaller than those due to the profile fitting.
## 3 Discussion
In order to discuss the ISM structure observed toward M15 in terms of its physical length scales, it is necessary to estimate the distances to the LISM and IVC clouds. Given that Albert et al. (1993) have measured weak Ca II absorption near $`v_{LSR}`$ = 0 km s<sup>-1</sup> toward HD 204862 ($`d`$ ∼ 100 pc; 0.3° separation from M15) and a much stronger line toward HD 203699 ($`d`$ ∼ 500 pc; 2.5° separation), we will assume a distance of 500 pc for the M15 LISM absorption that should at least be an upper limit for the clouds comprising this column. In the case of the IVC component, Na I absorption has been seen at a similar velocity toward HD 203664 whose distance is about 3.2 kpc and angular separation from M15 is 3.1° (Little et al. (1994); Sembach (1995); Ryans, Sembach, & Keenan (1996)). We will assume a distance of 1500 pc for the IVC absorber which should be accurate to within a factor of two. At these distances, the 27″ x 43″ coverage of the DensePak array corresponds to a 13,500 x 21,500 AU (0.065 x 0.10 pc) section of the LISM clouds and a 40,500 x 64,500 AU (0.20 x 0.31 pc) portion of the IVC cloud. The 4″ fiber spacing translates to spatial resolutions of 2000 and 6000 AU for the LISM and IVC absorbers, respectively. The Na I column densities are typically higher in the LISM clouds with individual fiber values ranging from 2.3 x 10<sup>12</sup> to 8.5 x 10<sup>12</sup> cm<sup>-2</sup>. Over the minimum 2000 AU scale, the maximum $`N`$(Na I) variation observed is 3.0 x 10<sup>12</sup> cm<sup>-2</sup> and the median $`|`$$`\mathrm{\Delta }`$$`N`$(Na I)$`|`$ is 4.5 x 10<sup>11</sup> cm<sup>-2</sup>. In the case of the IVC cloud, the dynamic range in $`N`$(Na I) is greater with values stretching from 5.0 x 10<sup>11</sup> to 8.0 x 10<sup>12</sup> cm<sup>-2</sup>. The maximum $`N`$(Na I) variation observed over the minimum 6000 AU scale in this cloud is 5.9 x 10<sup>12</sup> cm<sup>-2</sup> and the median $`|`$$`\mathrm{\Delta }`$$`N`$(Na I)$`|`$ is 3.0 x 10<sup>11</sup> cm<sup>-2</sup>.
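These angular-to-linear conversions follow from the small-angle relation that 1″ subtends 1 AU at a distance of 1 pc; the short sketch below reproduces the quoted numbers under the adopted distances.

```python
# Small-angle conversion: size[AU] = theta[arcsec] * d[pc].
def scale_au(arcsec: float, d_pc: float) -> float:
    return arcsec * d_pc

for label, d in (("LISM", 500.0), ("IVC", 1500.0)):
    print(f"{label}: 4\" -> {scale_au(4, d):,.0f} AU, "
          f"27\" x 43\" -> {scale_au(27, d):,.0f} x {scale_au(43, d):,.0f} AU")
```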
Since the minimum scales probed by our M15 observations overlap with those involving studies of interstellar Na I toward binary stars, it is important to compare their evidence for small-scale ISM structure. The binary studies involve about 20 early-type systems with projected linear separations mostly between 500 and 7000 AU and typical distances within several hundred pc (Meyer & Blades (1996); Watson & Meyer (1996)). Based on high-resolution ($`\mathrm{\Delta }`$$`v`$ ≈ 1.5 km s<sup>-1</sup>) Na I observations, they find variations in at least one velocity component toward each system that are collectively indicative of ubiquitous small-scale structure. The M15 observations probe a sightline that is much longer and has a larger Na I column than most of these binaries at appreciably lower velocity resolution. Also, whereas each star in a binary provides a single extremely narrow beam (∼0.0001″) through the intervening ISM, each DensePak fiber samples a number of such beams from the closely-spaced stars in the core of M15 and thus should smear out any imprint of structure on scales appreciably smaller than the 3″ fiber diameter. Nevertheless, the smallest-scale $`N`$(Na I) variations observed in the M15 LISM and IVC clouds are typically comparable to or larger than the values toward the binary stars. For example, the integrated column density difference of 2.0 x 10<sup>11</sup> cm<sup>-2</sup> across the Na I profile toward the binary $`\mu `$ Cru (6600 AU separation) (Meyer & Blades (1996)) is similar to the median $`|`$$`\mathrm{\Delta }`$$`N`$(Na I)$`|`$ of 3.0 x 10<sup>11</sup> cm<sup>-2</sup> measured at the 6000 AU resolution of the M15 IVC observations. Since a circumstellar explanation cannot be completely ruled out for at least some of the binary Na I variations, the M15 observations are important in providing clear evidence of significant ISM structure on comparably small scales where there is absolutely no possibility of circumstellar contamination.
The key distinction between these M15 observations and the binary studies is that here we image two “clouds” in two spatial dimensions at a variety of scales whereas each binary probe provides only a single measurement of a single scale for each intervening cloud. In comparing the characteristics of the LISM and IVC maps, the texture of the LISM structure appears to be generally smoother with larger angular features than in the IVC gas. This difference could be due to the greater IVC distance or to the superposition of structures at different distances within the LISM absorption profile. An interesting feature of the LISM map is how $`N`$(Na I) goes from a relatively constant value along the entire 21,500 AU length of the western edge to a generally 50% higher value as one moves about 5000 AU to the east. A 5000 AU separation binary oriented N–S would not be very sensitive to this feature whereas an E–W orientation would yield a strong signal of small-scale structure. The most striking aspect of the IVC map is in the southern region where the Na I column density dives from 8.0 x 10<sup>12</sup> cm<sup>-2</sup> on the WSW edge to 1.3 x 10<sup>12</sup> cm<sup>-2</sup> and then back up to 5.6 x 10<sup>12</sup> cm<sup>-2</sup> on the ESE edge over a total straight-line distance of 41,000 AU. This behavior is more suggestive of a clumpy structure with characteristic scales of ∼10,000 AU and peak Na I column densities that can rise at least 5 times above the adjacent background.
Unfortunately, since Na I is generally not a dominant ion in H I clouds, the physical interpretation of the Na I structure apparent in Figures 2 and 3 is not clear. However, studies of Galactic diffuse clouds have shown that when $`N`$(Na I) ≳ 10<sup>12</sup> cm<sup>-2</sup>, empirical relationships can be utilized to estimate $`N`$(H) from $`N`$(Na I) to within a factor of two (Hobbs (1974); Stokes (1978); Welty, Hobbs, & Kulkarni (1994)). Applying these relationships to the significant southern clumps in the M15 IVC map results in $`N`$(H) ∼ 5 x 10<sup>20</sup> cm<sup>-2</sup> and $`n_H`$ ∼ 1000 cm<sup>-3</sup> (assuming a roughly spherical geometry). Interestingly, Kennedy et al. (1998) have mapped the H I 21 cm emission of the IVC cloud in the vicinity of M15 at 12′ resolution and found significant clumpy structure on these scales with the highest column density ($`N`$(H I) = 4 x 10<sup>19</sup> cm<sup>-2</sup>) centered on the cluster and a quick dropoff ($`N`$(H I) $`<`$ 10<sup>19</sup> cm<sup>-2</sup>) on a 0.5° scale. The fact that the peak IVC H column inferred from the Na I data is about 10 times greater than that from the 21 cm observations implies that either there is significant H I clumpiness within the radio beam or the $`N`$(Na I)/$`N`$(H) ratio in the IVC can be significantly higher than that typically observed in the diffuse ISM. In the case of the former, this result would have important implications for determining the metallicities of such halo clouds (both IVCs and their higher-velocity HVC brethren). Metallicities of ∼25% and ∼10% solar have recently been derived for two HVCs by comparing UV absorption measures of their S II abundances toward background quasars with much broader (∼1′) 21 cm emission beam measures of the intervening HVC H I columns (Lu et al. (1998); Wakker et al. (1999)). If HVCs generally exhibit H I structure of the magnitude and scale implied by the M15 IVC Na I data, these metallicities could seriously be in error. The metallicity question is important to resolve because it has a direct bearing on the interpretation of HVCs as primarily Galactic in origin through fountain phenomena (Shapiro & Field (1976); Bregman (1980)) or as infalling lower metallicity extragalactic matter (Blitz et al. (1999)).
At the same time, it is possible that $`N`$(Na I)/$`N`$(H) rather than $`N`$(H) is varying on small scales in the M15 IVC cloud. Lauroesch et al. (1998) have discovered that the $`N`$(Na I) differences observed toward the binary $`\mu `$ Cru are not seen in the dominant ion Zn II (which should mirror variations in $`N`$(H)). They suggest that these differences are due to small-scale variations in the Na ionization equilibrium that are driven by temperature and/or electron density fluctuations. Although the most significant $`N`$(Na I) variations in the M15 IVC and LISM maps are appreciably greater than those toward $`\mu `$ Cru, it is important to note that small fluctuations in $`n_H`$ can amplify $`N`$(Na I) since the Na I column should scale roughly as $`n_H^2`$ if $`n_H`$ is proportional to $`n_e`$ (Péquignot & Aldrovandi (1986)). For example, the highest $`N`$(Na I) peaks in the IVC map could be produced by increasing $`n_H`$ by a factor of ∼2.3 over the adjacent background without any change in $`N`$(H I). However, if $`n_e`$/$`n_H`$ is not a constant (as might be expected if partial H ionization augments the electron supply from C photoionization), the IVC $`N`$(Na I) variations could be less reflective of $`n_H`$ and more indicative of appreciable small-scale $`n_e`$ fluctuations. Of course, it is not clear how such fluctuations could occur in a cloud of low extinction far from any ionizing source.
In summary, our observations show that the LISM and IVC gas toward M15 exhibits significant structure in terms of its physical conditions and/or H I column density down to arcsecond scales. Although our sky coverage is too limited to analyze the observed patterns in detail over their full extent, it does appear that the Na I data rule out both a very flat distribution on the 27″ x 43″ scale of the DensePak array and a random distribution on the 4″ scale of the individual fibers. Through further interstellar absorption-line mapping of M15 and other globulars with fiber arrays like DensePak, it will be possible to increase this sky coverage and characterize the spatial structure of diffuse clouds in the Galactic disk and halo over a large range of physically-interesting scales that are difficult to probe otherwise.
It is a pleasure to thank Di Harmer, Daryl Willmarth, and the rest of the KPNO WIYN queue observing team for obtaining the data. Comments by Dan Welty were very helpful in substantially improving the paper. We would also like to acknowledge useful conversations with Chris Blades, Ed Jenkins, and Caty Pilachowski.
# Ehrlich–Schwoebel barrier controlled slope selection in epitaxial growth
## 1 Introduction
Molecular beam epitaxy (MBE) has attracted much interest from both, theoretical and experimental physicists. On the one hand it allows the fabrication of high quality crystals with arbitrary composition and modulated structures with atomically controlled thickness . On the other hand it represents a model of nonequilibrium physics which still lacks a general theory . In particular, the appearance and the dynamics of three dimensional (3D) structures (pyramids or mounds) in crystal growth are not well understood in terms of the underlying microscopic processes.
A long time ago Burton, Cabrera and Frank introduced the BCF theory of crystal growth . Within this theoretical approach the crystal surface is described by steps of single monolayer height. The evolution of the surface is calculated by solving the diffusion equation on each terrace. Within this framework the growth of spirals and the step flow have been investigated. Elkinani and Villain investigated such a model including the nucleation probability of new islands . They found that the resulting structures are unstable: towers appear which keep their lateral extension and grow in height only. They called this effect the Zeno effect. The same observation has been made with a “minimal model” of MBE where fast diffusion together with a high Ehrlich–Schwoebel barrier has been implemented .
Even though the Zeno effect has been observed recently on Pt(111) , quite typically a coarsening process with the appearance of slope selection emerges, which has been reported for such diverse systems as Fe(001) , Cu(001) , GaAs(001) , and HgTe(001) . In addition, slope selection seems to be the generic case in solid–on–solid computer simulations .
In terms of continuum equations the selection of a stable slope has been related to the compensation of uphill and downhill currents . An uphill current can be generated by an Ehrlich–Schwoebel barrier . The barrier hinders adatoms to jump down a step edge. Hence, more particles attach to the upper step edge which leads to a growth–instability and 3D–growth.
Another process, which constitutes a downhill current, has been recognized using molecular dynamics simulations . Such diverse mechanisms as downward funneling, transient diffusion, or a knockout process at step edges lead to the incorporation of arriving particles at the lower side of the step edge. In addition, it has been suggested that such a process is responsible for reentrant layer–by–layer growth .
Recently we have proposed a simplified model of epitaxial growth quite similar to the “minimal model” of Krug . In particular we found that an incorporation mechanism is crucial to achieve slope selection. However, one simplifying assumption of the model is an infinite Ehrlich–Schwoebel barrier.
In this article we will present in more detail the argument leading to slope selection and we will generalize our results using a continuous step dynamics model analogous to . In sec. 2 we will introduce our extension of the BCF theory and will discuss the relation to existing results (sec. 2 and 3). Typical mound morphologies and the growth dynamics are compared in section 4. Afterwards we will investigate the emergence of slope selection within the framework of this model (sec. 5). We will show that the selected slope has a temperature–dependence which is solely determined by the Ehrlich–Schwoebel barrier. Hence, the determination of selected terrace widths in experiments would give direct insight into microscopic properties such as the Ehrlich–Schwoebel barrier. We confirm the predicted importance of the incorporation mechanism using a kinetic Monte–Carlo simulation of a Solid–On–Solid model in sec. 6. Another effective downward current could be due to detachment from steps and subsequent desorption. We will show in sec. 7 that slope selection cannot be achieved by these two processes alone. In section 8 we will calculate the saturation profile in the limiting case of an infinite Ehrlich–Schwoebel barrier.
## 2 BCF theory
The model is based on the Burton–Cabrera–Frank model in 1+1 dimensions. Within this framework the crystal surface is specified by the position and direction (upward or downward) of steps. Figure 1 shows the crystal surface from the point of view of the BCF–theory. It is a coarse grained view – the detailed positions of atoms are not important. However, the terraces of the height of one atomic monolayer (ML) can still be distinguished. The most fundamental assumption is that at each time $`t`$ the adatom concentration $`\rho `$ is a function of the step positions only. In other words, the diffusion of adatoms is considerably faster than the step velocity. Thus, the diffusion equation becomes
$$\frac{\partial \rho }{\partial t}(x,t)=0=D\,\partial _x^2\rho (x,t)+\frac{F}{a}$$
(1)
where $`D`$ is the diffusion constant and $`F/a`$ is the flux density, with $`a`$ denoting the lattice constant. Hence, $`1/F`$ is the time necessary to deposit one monolayer. Up to now, this equation has been solved in the literature with special boundary conditions at $`x=-\ell /2`$ and $`+\ell /2`$. Clearly, the boundary conditions are chosen depending on whether the terrace is a vicinal, a top, or a bottom terrace. In the following we will discuss the typical case of a vicinal terrace. The extension to top and bottom terraces is straightforward.
To include an incorporation mechanism it is necessary to extend the theory. We assume that there exists an incorporation radius such that all particles arriving close to a downward step within this radius immediately jump down the step edge. Hence, one has to split the density of diffusing particles into two regions. The first region close to the upper edge where eq. (1) holds, and the second one given by the incorporation radius close to the downward step where no particles arrive ($`F=0`$). To describe the motion of steps the flux of incorporated particles must be taken into account separately.
In the following we will discuss in detail the situation $`\ell >R_{\text{inc}}`$ as sketched in fig. 1. For smaller terraces only one region exists and the calculations are much easier. Since our analytical calculations will show that $`\ell >R_{\text{inc}}`$ is the generic case, we concentrate on this situation.
The general one-dimensional solution of eq. (1) is a parabola characterized by three parameters. In addition to the two diffusion equations, four boundary conditions are necessary to determine the two distributions $`\rho _1`$ and $`\rho _2`$ .
$`\rho _1(-\ell /2)`$ $`=`$ $`0`$ (2)
$`\rho _1(\ell /2-R_{\text{inc}})`$ $`=`$ $`\rho _2(\ell /2-R_{\text{inc}})`$ (3)
$`\rho _1^{\prime }(\ell /2-R_{\text{inc}})`$ $`=`$ $`\rho _2^{\prime }(\ell /2-R_{\text{inc}})`$ (4)
$`-D\rho _2^{\prime }(\ell /2)`$ $`=`$ $`{\displaystyle \frac{D}{\ell _1}}\rho _2(\ell /2)`$ (5)
Condition (2) describes the special case of perfectly absorbing step edges. Conditions (3) and (4) are necessary to obtain a smooth density between regions 1 and 2. The left hand side of (5) is the particle current at the step edge. On the right hand side this is reformulated using the number of jump attempts $`D\rho _2(\ell /2)`$ multiplied by the probability of overcoming the Ehrlich–Schwoebel barrier $`E_S`$. This probability is expressed as the inverse of a typical length $`\ell _1`$
$$\frac{1}{\ell _1}=\frac{1}{a}\mathrm{exp}\left(-\frac{E_S}{k_\text{B}T}\right)$$
(6)
where $`a`$ stands for the lattice constant.
The resulting density distribution has the form indicated in fig. 1: a parabola in the upper region and a linear profile close to the downward step. The detailed expressions for $`\rho _1`$ and $`\rho _2`$ are not of much interest, since the evolution of the crystal is determined by the currents at the edges. In the following we will call $`u(\ell )`$ the upward current, i.e.
$`u(\ell )`$ $`=`$ $`D\rho _1^{\prime }(-\ell /2)`$ (7)
$`=`$ $`{\displaystyle \frac{F}{2a(\ell +\ell _1)}}\left(\ell ^2+2\ell \ell _1-2R_{\text{inc}}\ell _1-R_{\text{inc}}^2\right).`$
The downward current due to diffusion (the contribution of the incorporation mechanism is not included) becomes
$`d(\ell )`$ $`=`$ $`-D\rho _2^{\prime }(+\ell /2)`$ (8)
$`=`$ $`{\displaystyle \frac{F}{2a(\ell +\ell _1)}}\left(\ell -R_{\text{inc}}\right)^2.`$
Note that these results are very similar to the corresponding equations (2.2) and (2.3) of Ref. , where no incorporation was considered. Setting $`R_{\text{inc}}=0`$ we regain their results.
The absence of a dependence on $`D`$ reflects the ansatz of a quasi–stationary distribution. All arriving particles are compensated for by the loss of particles at the borders and hence the currents are proportional to $`F`$. The density itself is proportional to the ratio $`F/D`$ which again is intuitively clear.
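The closed forms (7) and (8) can be checked by solving the two-region boundary-value problem directly; the minimal finite-difference sketch below (units $`a=1`$, illustrative parameter values) agrees with them to discretization accuracy.

```python
import numpy as np

# Solve D rho'' = -F (region 1) / 0 (incorporation zone) with the boundary
# conditions (2)-(5) and compare edge currents with Eqs. (7), (8). Units a = 1.
D, F, ell, ell1, Rinc = 100.0, 1.0, 20.0, 8.2, 3.0
N = 1001
x = np.linspace(-ell / 2, ell / 2, N)
h = x[1] - x[0]
A, rhs = np.zeros((N, N)), np.zeros(N)
for i in range(1, N - 1):
    A[i, i - 1 : i + 2] = 1.0, -2.0, 1.0
    rhs[i] = -F * h**2 / D if x[i] < ell / 2 - Rinc else 0.0
A[0, 0] = 1.0                                      # rho(-ell/2) = 0
A[-1, -1], A[-1, -2] = 1.0 + h / ell1, -1.0        # -D rho' = (D/ell1) rho
rho = np.linalg.solve(A, rhs)
u_num = D * (rho[1] - rho[0]) / h                  # current into the ascending step
d_num = -D * (rho[-1] - rho[-2]) / h               # current over the ES barrier
pref = F / (2.0 * (ell + ell1))
u_th = pref * (ell**2 + 2 * ell * ell1 - 2 * Rinc * ell1 - Rinc**2)
d_th = pref * (ell - Rinc)**2
print(f"u: {u_num:.3f} vs {u_th:.3f}   d: {d_num:.3f} vs {d_th:.3f}")
```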
## 3 Closure of bottom terraces
In the following, we will reinvestigate the discussion of the closure of a bottom terrace (cf. fig. 2). In the limiting case of an infinite Ehrlich–Schwoebel barrier the dynamics of the steps becomes very simple. We denote by $`x(t)`$ the position of the right step, which of course depends on the time $`t`$. The origin is chosen to be in the middle of the bottom terrace. Due to the infinite Ehrlich–Schwoebel barrier the movements of the right and the left step are symmetric. The evolution is then described by $`\dot{x}(t)=-Fx(t)-FR_{\text{inc}}`$. The first term corresponds to the particles which fall onto the bottom terrace and diffuse to the right step; the second term is the contribution of particles which are incorporated from the step above (which is valid as long as the bottom terrace is more than a distance $`R_{\text{inc}}`$ away from a top terrace). As a result $`x(t)`$ evolves as
$$x(t)=(x_0+R_{\text{inc}})\mathrm{exp}\left(-Ft\right)-R_{\text{inc}}.$$
(9)
As long as $`R_{\text{inc}}>0`$ there exists a closure time
$$t_c=\frac{1}{F}\mathrm{ln}\left(\frac{x_0+R_{\text{inc}}}{R_{\text{inc}}}\right).$$
(10)
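A one-line integration of the step dynamics reproduces Eq. (10); in the sketch below, $`x_0`$ and $`R_{\text{inc}}`$ are illustrative (lengths in units of $`a`$, time in units of $`1/F`$).

```python
import numpy as np

# Bottom-terrace closure: integrate dx/dt = -F (x + R_inc) until x = 0.
F, Rinc, x0 = 1.0, 1.0, 10.0
t, dt, x = 0.0, 1e-4, x0
while x > 0.0:
    x += dt * (-F * (x + Rinc))   # deposition on the terrace + incorporation
    t += dt
print(f"numerical t_c = {t:.4f}/F  vs  Eq. (10): {np.log((x0 + Rinc) / Rinc) / F:.4f}/F")
```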
Without an incorporation mechanism ($`R_{\text{inc}}=0`$) the bottom terrace will never be closed. This is why Elkinani and Villain called their model the Zeno model, recalling the Greek philosopher and his paradox. Even though the situation is changed if the discrete structure of the terraces is considered (the currents can be translated into probabilities of placing a particle at the step edge; hence, a bottom terrace of width one always has a nonvanishing probability to be filled), they showed that this trend still holds, which gives rise to the formation of deep cracks. Likewise they found that even finite values of the Ehrlich–Schwoebel barrier do not change this growth scenario, which has been investigated in more detail in . Once mounds are built up they remain forever with a fixed lateral size. Our discussion of this limiting case shows that the inclusion of an incorporation mechanism changes the growth in a fundamental manner.
## 4 Growth dynamics
To illustrate the basic behaviour during crystal growth, we show two typical surface profiles obtained from the numerical integration of the step system. In fig. 3 we compare the resulting structure of the Zeno model without an incorporation mechanism and with the inclusion of such a mechanism.
The simulations were carried out on a system of $`485a`$ width, with parameters corresponding to the model used in sec. 6 (a numerical check of these values is given below):
$`D`$ $`=`$ $`10^{12}\mathrm{exp}\left(-{\displaystyle \frac{0.9\text{eV}}{550\text{K}k_\text{B}}}\right){\displaystyle \frac{a^2}{\text{s}}}\approx 5664{\displaystyle \frac{a^2}{\text{s}}}`$
$`\ell _1`$ $`=`$ $`\mathrm{exp}\left(+{\displaystyle \frac{0.1\text{eV}}{550\text{K}k_\text{B}}}\right)a\approx 8.2a`$
$`R_{\text{inc}}`$ $`=`$ $`1a`$
$`F`$ $`=`$ $`1\text{ ML s}^{-1}`$
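A minimal numerical check of the two derived values above (using $`k_\text{B}`$ in eV/K):

```python
import numpy as np

# Check of the Arrhenius expressions above (k_B in eV/K, T = 550 K).
k_B, T = 8.617333e-5, 550.0
nu0 = 1e12                                # attempt frequency, 1/s

D    = nu0 * np.exp(-0.9 / (k_B * T))     # diffusion constant, a^2/s
ell1 = np.exp(+0.1 / (k_B * T))           # Ehrlich-Schwoebel length, a

print(f"D   = {D:.0f} a^2/s")             # ~5664 a^2/s
print(f"l_1 = {ell1:.1f} a")              # ~8.2 a
```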
As in , the Ehrlich–Schwoebel barrier has been suppressed for bottom terraces of one lattice constant width. Without an additional incorporation mechanism the appearance of trenches is unavoidable, in accordance with . The incorporation mechanism gives rise to a well defined slope which does not change with time. Another fundamental difference is the coarsening behaviour: without an incorporation mechanism the trenches are stable and the number of mounds remains constant, whereas the additional incorporation mechanism leads to coarsening.
In lattice models as well as for continuum equations the coarsening is driven by fluctuations, and in 1+1 dimensions the corresponding exponent is 1/3. This is in accordance with Ostwald ripening, which has been predicted from the similarities of the relevant continuum equations . However, since we treat the step evolution in a deterministic manner, we do not obtain such a scaling behaviour. The only way fluctuations come into play during the simulation is when new islands are nucleated. As a consequence, the evolution of e.g. the width of the height distribution $`w`$ is characterized by jumps (data not shown); a jump in $`w`$ appears each time two mounds merge. These findings are a direct confirmation of the relevance of the fluctuations for the coarsening behaviour.
## 5 Slope selection
The inclusion of an incorporation mechanism leads to slope selection, which is apparent from fig. 3b. Siegert and Plischke required a cancellation of upward and downward currents in the continuum equations. Again, in the case of an infinite Ehrlich–Schwoebel barrier the calculations are straightforward. In this case the downward current on a vicinal terrace of size $`\ell `$ is solely due to the incorporation mechanism, i.e. proportional to $`FR_{\text{inc}}`$. All the remaining diffusing adatoms will contribute to the upward current, which hence will be $`F(\ell -R_{\text{inc}})`$. As a consequence, slope selection will be achieved with a mean terrace width of size
$$\ell ^{}=2R_{\text{inc}}$$
(11)
in accordance with the findings in .
It remains to calculate the terrace widths for arbitrary parameters. Since we know the currents (equations (7), (8) and the incorporation mechanism), we obtain the overall slope- (resp. terrace-width-) dependent current
$`J(\ell )`$ $`=`$ $`d(\ell )-u(\ell )+FR_{\text{inc}}/a`$ (12)
$`=`$ $`{\displaystyle \frac{F}{2a(\ell +\ell _1)}}\left(2R_{\text{inc}}^{2}+4R_{\text{inc}}\ell _1-2\ell \ell _1\right)`$
Note that a positive $`J(\ell )`$ signifies a downward current (to the right in fig. 1). The stable slope, where no net upward or downward current remains, is given by the condition $`J(\ell )|_{\ell =\ell ^{}}=0`$ and yields
$`\ell ^{}`$ $`=`$ $`2R_{\text{inc}}+{\displaystyle \frac{R_{\text{inc}}^{2}}{\ell _1}}`$ (13)
$`=`$ $`2R_{\text{inc}}+{\displaystyle \frac{R_{\text{inc}}^{2}}{a}}\text{e}^{-E_S/k_\text{B}T}`$
As can be seen from expression (12), the current is positive for small values of $`\ell `$ and becomes negative for $`\ell >\ell ^{}`$. Hence $`\ell ^{}`$ is stabilized by the current.
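The stable terrace width can also be obtained by root-finding on eq. (12); a minimal sketch (lengths in units of $`a`$, with $`\ell _1=8.2a`$ taken from sec. 4):

```python
import numpy as np
from scipy.optimize import brentq

# Locate the zero of the slope-dependent current J(l) of eq. (12)
# and compare with the closed form of eq. (13). Lengths in units of
# the lattice constant a; F = 1 ML/s; l_1 = 8.2 a from sec. 4.
F, a, R_inc, ell1 = 1.0, 1.0, 1.0, 8.2

def J(ell):
    """Net current on a vicinal terrace of width ell, eq. (12)."""
    return F / (2 * a * (ell + ell1)) * (
        2 * R_inc**2 + 4 * R_inc * ell1 - 2 * ell * ell1)

ell_star = brentq(J, 0.1, 100.0)               # numerical root
print(f"l* (root)   = {ell_star:.3f} a")       # ~2.12 a
print(f"l* (eq. 13) = {2*R_inc + R_inc**2/ell1:.3f} a")
```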
The stable slope does not depend on the diffusion constant. However, it should be clear from the derivation that, in order to achieve slope selection, the typical diffusion length should be much larger than $`\ell ^{}`$. Otherwise the vicinal terraces would not proceed via step flow; rather, new nucleation events on the terraces would lead to a rugged surface structure.
## 6 Solid–On–Solid model
In order to verify the predicted importance of the incorporation mechanism we use computer simulations of the Solid–On–Solid (SOS) model on a simple cubic lattice. All processes on the surface are (Arrhenius-)activated processes which are described by one common prefactor $`\nu _0`$ and an activation energy which is parameterized as follows: $`E_B`$ is the barrier for surface diffusion; at step edges an Ehrlich–Schwoebel barrier $`E_S`$ is added. However, this barrier is not added for a particle which sits on top of a single particle or a row . Each nearest neighbour contributes $`E_N`$ to the activation energy. Within this framework the diffusion constant becomes $`D=\nu _0\mathrm{exp}\left(-E_B/k_\text{B}T\right)`$.
Here, we concentrate on a particular set of parameters, even though other parameter sets were used as well. We choose $`\nu _0=10^{12}\text{s}^{-1}`$, $`E_B=0.9\text{eV}`$, $`E_N=0.25\text{eV}`$, and $`E_S=0.1\text{eV}`$. This model was already investigated in and reproduces some kinetic features of CdTe(001). The deposition of particles occurs with a rate $`F`$. The two simulations shown in fig. 4 were carried out on a $`300\times 300`$ lattice at 560 K and started on a singular (flat) surface.
In fig. 4 the resulting surfaces with and without the inclusion of the incorporation mechanism are shown. Without an incorporation mechanism no slope selection occurs. Moreover, without incorporation the configuration of the towers remains unchanged, whereas its inclusion leads to coarsening: the number of mounds diminishes with time. Hence, without an incorporation mechanism no coarsening can be identified. We note that at higher temperatures the attachment/detachment kinetics of atoms at step edges seems to yield a coarsening effect (data not shown); however, still no slope selection has been observed.
At first glance our findings contradict previous results obtained with a very similar model. Šmilauer and Vvedensky obtained a formation of mounds with slope selection irrespective of the inclusion or exclusion of an incorporation mechanism . However, they implemented the Ehrlich–Schwoebel barrier in a different way: rather than hindering the jump over a step edge, they impede the jump towards a step edge. Their motivation for this implementation was to allow the adatoms to leave a small line of particles of width one, which has been tested as a cause for reentrant layer-by-layer growth . In our simulations the same goal is achieved by suppressing the Ehrlich–Schwoebel barrier in such a situation. However, in their simulations particles arriving directly at a step edge have a probability of 1/4 to jump down the edge, 1/4 to jump away from the edge and 1/2 to jump along the step edge. Effectively this leads to an incorporation radius of length 1/2.
Other simulations of SOS-models used bcc(001) in order to study the growth of typical metals. In these simulations the SOS–restriction is implemented in such a way that an adatom must be supported by the four underlying atoms. Hence, the downward funneling process is directly implemented. Again, as a result slope selection is achieved, which has already been discussed in great detail in .
## 7 Detachment and desorption
One might assume that other mechanisms could lead to a zero in the slope-dependent current. In the following we carry out an analogous calculation with an adatom-detachment rate from steps and the inclusion of desorption , since both processes generate an effective downward current which might compensate for the Ehrlich–Schwoebel effect. To investigate whether they are sufficient to obtain slope selection (and to simplify notation) we exclude the incorporation mechanism. Thus, the distinction of the two regions on a terrace is not necessary.
The desorption of diffusing adatoms is easily incorporated by including a term $`-\rho (x)/\tau `$ in the diffusion equation (1) . In order to include detachment from steps we have to replace boundary condition (2) by
$$D\rho ^{}(-\ell /2)=\frac{D}{a}\rho (-\ell /2)-\gamma$$
(14)
where $`\gamma `$ stands for the detachment rate from steps. Accordingly, the boundary condition at the downward step has to be corrected and now reads
$$-D\rho ^{}(\ell /2)=\frac{D}{\ell _1}\rho (\ell /2)-\gamma \frac{a}{\ell _1}.$$
(15)
The overall slope-dependent current becomes
$$J(\ell )=\frac{\left(\mathrm{\Delta }-1\right)\left(\ell _1-a\right)\left(\frac{a\gamma }{\tau }-DF\right)}{(\ell _1+a)\sqrt{\frac{D}{\tau }}(\mathrm{\Delta }+1)+\frac{a\ell _1}{\tau }(\mathrm{\Delta }-1)+D(\mathrm{\Delta }-1)}$$
(16)
where
$$\mathrm{\Delta }=\text{e}^{2\ell /\sqrt{D\tau }}$$
has been introduced. Note that $`\mathrm{\Delta }`$ is always greater than one.
To discuss the qualitative behaviour it is sufficient to look at the numerator of $`J(\ell )`$ (the denominator is always positive). The first important result is that no slope selection is possible: only for $`\ell =0`$ is the current zero (where, of course, there is no upward or downward current at all).
Even though there is no slope selection, one can discuss whether growth will proceed via layer-by-layer growth ($`J(\ell )>0`$ for all $`\ell `$, i.e. terraces tend to grow larger) or whether a growth instability is present ($`J(\ell )<0`$ for all $`\ell `$, i.e. particles are preferentially incorporated at the upper steps).
In the well known limit of negligible detachment or desorption rates ($`\gamma \to 0`$ or $`\tau \to \mathrm{\infty }`$) the Ehrlich–Schwoebel effect alone determines the sign of $`J(\ell )`$. As expected, for positive step edge barriers ($`\ell _1>a`$) growth becomes unstable, whereas negative values of $`E_S`$ stabilize layer-by-layer growth.
If we assume a positive $`E_S`$ in the general case, we obtain a critical flux
$$F_C=\frac{a\gamma }{D\tau }$$
(17)
where the current changes its sign.
If one expresses the various rates as used in the Solid–On–Solid simulations and sets $`a=1`$, one obtains
$$F_C=\nu _d\text{e}^{-(E_D+E_{\text{bind}})/k_\text{B}T}.$$
(18)
where the desorption rate $`\nu _d\text{e}^{-E_D/k_\text{B}T}`$ has been introduced. $`E_{\text{bind}}`$ represents the typical binding energy of a detaching adatom; in 2+1 dimensions this should be approximately $`E_{\text{bind}}\approx 2E_N`$. Using the model of the previous section, $`\nu _d=\nu _0`$, and $`E_D=1.1\text{eV}`$ (parameters which are a reasonable guess for CdTe(001), ), one obtains a critical flux $`F_C=0.004`$ ML/s. However, this crossover should not be observable in experiments, since at such low external fluxes the step flow of the preexisting steps will dominate the surface evolution.
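A numerical sketch of this estimate follows; the temperature is our assumption (we use the 560 K of the SOS simulations of sec. 6, which reproduces the quoted value):

```python
import numpy as np

# Critical flux of eq. (18). T = 560 K (the SOS simulation
# temperature of sec. 6) is an assumption here.
k_B, T = 8.617333e-5, 560.0   # eV/K, K
nu_d   = 1e12                 # desorption prefactor nu_d = nu_0, 1/s
E_D    = 1.1                  # desorption barrier, eV
E_bind = 2 * 0.25             # ~2*E_N, eV

F_C = nu_d * np.exp(-(E_D + E_bind) / (k_B * T))
print(f"F_C = {F_C:.4f} ML/s")   # ~0.004 ML/s
```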
## 8 Saturation profile with infinite step edge barrier
In this section we will calculate the saturation profile for the model with an infinite step edge barrier. The discussion of the closure of the bottom terrace already showed that in this limit the calculations become very simple. As for the bottom terrace, the dynamics of the higher steps becomes independent of the terrace lying above. The steps $`x_i(t)`$ evolve according to
$`{\displaystyle \frac{\text{d}x_1}{\text{d}t}}(t)`$ $`=`$ $`-F(x_1(t)+R_{\text{inc}})`$
$`{\displaystyle \frac{\text{d}x_2}{\text{d}t}}(t)`$ $`=`$ $`-F(x_2(t)-x_1(t))`$
$`\mathrm{\vdots }`$
$`{\displaystyle \frac{\text{d}x_i}{\text{d}t}}(t)`$ $`=`$ $`-F(x_i(t)-x_{i-1}(t))`$
$`\mathrm{\vdots }`$
Measuring the time in units of $`1/F`$ (i.e. setting $`F=1`$ in the above equations) the time to grow one monolayer is equal to one. In addition, to simplify notation we will measure all lengths in units of $`R_{\text{inc}}`$.
The solution for the lowest terrace has been given in eq. (9). The solution for all steps is
$$x_n(t)=\sum _{i=1}^{n}\frac{t^{n-i}}{(n-i)!}(1+x_i(0))\text{e}^{-t}-1$$
(20)
as can be easily verified. If we want to calculate the steady state saturation profile we have to require
$$x_{i+1}(1)=x_i(0).$$
(21)
It should be stressed that this is the only assumption: the surface morphology is a self-reproducing structure. If the upper step positions $`x_{i+1}(1)`$ after deposition of one monolayer were greater than $`x_i(0)`$, this would result in a flattening of the surface; otherwise the slope would become steeper. Using the solution for the bottom terrace (9) we obtain the initial value
$$x_1(0)=\text{e}-1$$
(22)
when we require that the bottom terrace will be closed at time $`t=1`$.
For the upper terraces eq. (20) yields a recursion relation
$$x_n(1)+1=x_{n-1}(0)+1=\sum _{i=1}^{n}\frac{1+x_i(0)}{(n-i)!\text{e}}$$
(23)
which can be solved as described in the appendix using the generating function. As a result one obtains the initial positions of the steps on an infinite symmetric step profile. Every time the bottom terrace is closed the steps (with new indices) are located at these positions.
In table 1 we show the analytical expressions for the step positions derived from the generating function, together with the numerical values of the terrace widths. With growing index the terrace widths rapidly approach two. Even though they oscillate around this value, it can be shown that
$$\underset{i\to \mathrm{\infty }}{lim}\left(x_i(0)-x_{i-1}(0)\right)=2.$$
(24)
Note that we measure lengths in units of $`R_{\text{inc}}`$. Hence we predict a slope selection with slope $`1/(2R_{\text{inc}})`$, as derived from the simple argument of section 5.
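Equation (23) can also be iterated numerically: with $`b_j=x_j(0)+1`$ and $`b_1=\text{e}`$, it implies the forward recursion $`b_n=\text{e}b_{n-1}-\sum _{i=1}^{n-1}b_i/(n-i)!`$. A minimal sketch (lengths in units of $`R_{\text{inc}}`$):

```python
from math import e, factorial

# Iterate the recursion implied by eq. (23) for the shifted step
# positions b_j = x_j(0) + 1 (lengths in units of R_inc) and print
# the terrace widths, which approach 2 as stated in eq. (24).
b = [0.0, e]                         # b_0 = 0 (convention), b_1 = e
for n in range(2, 12):
    b.append(e * b[n - 1]
             - sum(b[i] / factorial(n - i) for i in range(1, n)))

x = [bj - 1.0 for bj in b[1:]]       # step positions x_j(0)
widths = [x2 - x1 for x1, x2 in zip(x, x[1:])]
print([f"{w:.4f}" for w in widths])  # 1.9525, 1.9958, 2.0000, ...
```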
The derivation shows that the selected slope is controlled by the closure of the bottom terrace only. Another length scale is the nucleation length, defined as the typical length of a top terrace at which nucleation of an island occurs . This length scale is responsible for the rounding of the towers in fig. 3b and causes a perturbation of the steady-state saturation profile.
## 9 Conclusion
We have investigated the effect of an incorporation mechanism on the morphology of growing surfaces. The inclusion of an incorporation mechanism in a 1+1 dimensional BCF theory, as well as in SOS computer simulations in 2+1 dimensions, is necessary in order to obtain slope selection and a coarsening process. We were able to derive analytically the temperature dependence of the selected slope and found that the Ehrlich–Schwoebel barrier alone controls this temperature dependence. In the limit of an infinite step edge barrier we derived the steady state saturation profile; in this case the resulting mound morphology is controlled by the closure of the bottom terrace.
## Appendix A Generating function
To simplify notation we introduce the shifted step positions $`b_j=x_j(0)+1`$. We will try to extract the generating function
$$f(z)=\underset{j=0}{\overset{\mathrm{}}{}}b_jz^j$$
(25)
for the shifted step positions. Clearly, the $`b_j`$ have physical meaning only for $`j>0`$, while $`b_0`$ can be chosen arbitrarily. Starting from equation (23),
$`b_{n-1}`$ $`=`$ $`{\displaystyle \sum _{i=1}^{n}}{\displaystyle \frac{b_i}{(n-i)!\text{e}}}\text{ for all }n\ge 2`$ (26)
$`\text{e}zb_{n-1}z^{n-1}`$ $`=`$ $`{\displaystyle \sum _{i=1}^{n}}{\displaystyle \frac{b_iz^n}{(n-i)!}}`$ (27)
$`\text{e}z{\displaystyle \sum _{m=1}^{\mathrm{\infty }}}b_mz^m`$ $`=`$ $`{\displaystyle \sum _{n=2}^{\mathrm{\infty }}}{\displaystyle \sum _{i=1}^{n}}{\displaystyle \frac{b_iz^n}{(n-i)!}}`$ (28)
Choosing $`b_0=0`$ and using $`b_1=\text{e}`$ we arrive at
$`\text{e}zf(z)`$ $`=`$ $`{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}{\displaystyle \sum _{i=0}^{n}}{\displaystyle \frac{b_iz^n}{(n-i)!}}-\text{e}z`$ (29)
$`\text{e}zf(z)`$ $`=`$ $`f(z)\text{e}^z-\text{e}z`$ (30)
Thus, we finally obtain
$$f(z)=\frac{z}{\text{e}^{z-1}-z}.$$
(31)
The lowest coefficients
$$b_j=\frac{1}{j!}\frac{\partial ^jf}{\partial z^j}|_{z=0}$$
(32)
derived from the generating function are shown in table 1. In addition, the generating function can be used to formally prove equation (24).
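The Taylor coefficients of eq. (31) can also be extracted symbolically; a minimal sketch using a computer algebra system:

```python
import sympy as sp

# Expand the generating function f(z) = z/(e^(z-1) - z) of eq. (31)
# and read off the coefficients b_j, eq. (32).
z = sp.symbols('z')
f = z / (sp.exp(z - 1) - z)

taylor = sp.series(f, z, 0, 6).removeO()
for j in range(1, 6):
    b_j = sp.simplify(taylor.coeff(z, j))
    print(j, b_j, float(b_j))
# b_1 = e ~ 2.718, b_2 = e^2 - e ~ 4.671, ...
```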
## Acknowledgments
This work has been supported by the Deutsche Forschungsgemeinschaft DFG through SFB 410.
## 1 Introduction
The question of whether or not neutrinos have mass has been on the minds of physicists for some time. If the neutrinos have non-degenerate masses and are mixed, neutrino oscillations could in principle be observed. The study of this phenomenon, by which transitions between neutrino flavours are possible, constitutes a very precise way to probe the differences between the neutrino mass states.
Although there have been some indications from solar neutrino experiments (Homestake , GALLEX , SAGE , Kamiokande and Super-Kamiokande ) that the neutrinos indeed could oscillate, the recent results from the Super-Kamiokande atmospheric neutrino experiment gave the strongest indications, by showing evidence for $`\nu _\mu `$ disappearance. Other atmospheric neutrino experiments have obtained results compatible with this hypothesis (Kamiokande , MACRO and SOUDAN2 ). Furthermore, the LSND short-baseline appearance experiment found indications for $`\nu _\mu `$ oscillating to $`\nu _\mathrm{e}`$.
Now that the first indications for neutrino oscillations have been found, the confirmation of the oscillation signals and the determination of the oscillation parameters represent important objectives. This year already, the K2K long baseline disappearance experiment will shed some light on the situation. At the end of its data taking, it will be able to probe the region above $`\mathrm{\Delta }\mathrm{m}^2\approx 2\times 10^{-3}\mathrm{eV}^2`$. The MINOS experiment at Fermilab, which will start taking data early next century, also aims to probe the atmospheric region.
Given the CHOOZ result, which excludes the interpretation of the atmospheric neutrino data as $`\nu _\mu \to \nu _\mathrm{e}`$ oscillations above $`\mathrm{\Delta }\mathrm{m}^2\approx 10^{-3}\mathrm{eV}^2`$, a plausible interpretation of these data is $`\nu _\mu \to \nu _\tau `$ oscillations. Proposals for several experiments to detect $`\nu _\tau `$ appearance using the proposed NGS neutrino beam from CERN to Gran Sasso are being prepared.
Prompted by the recent efforts, both at Fermilab and at CERN, to develop long baseline projects with the NuMi and NGS beams, the purpose of the present paper is to study the possibilities for a simple $`\nu _\tau `$ appearance experiment to probe the $`\mathrm{\Delta }\mathrm{m}^2`$ region of the atmospheric neutrino experiments. We argue that a totally active scintillator detector in a long baseline beam would provide an efficient way to verify the oscillation claim from the Super-Kamiokande data.
The present study is based on the parameters of the proposed NGS beam. This beam has a mean energy of about 18 GeV and a baseline of about 735 km, corresponding to the distance between CERN and the Gran Sasso laboratory. In order to understand the role of the baseline, we also study the performance of the same neutrino detector when positioned 3000 km away from the same neutrino source.
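For orientation, the standard two-flavour appearance probability, $`P=\mathrm{sin}^22\theta \mathrm{sin}^2(1.27\mathrm{\Delta }\mathrm{m}^2[\mathrm{eV}^2]L[\mathrm{km}]/E[\mathrm{GeV}])`$, can be evaluated for the two baselines considered here; a minimal sketch (maximal mixing and an illustrative $`\mathrm{\Delta }\mathrm{m}^2`$ value are assumptions):

```python
import numpy as np

# Two-flavour appearance probability for an NGS-like beam, assuming
# maximal mixing; the dm2 value is illustrative.
def P_osc(E_GeV, L_km, dm2_eV2, sin2_2theta=1.0):
    """P(nu_mu -> nu_tau) in the standard two-flavour formula."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

dm2 = 3.5e-3                          # eV^2
for L in (735.0, 3000.0):             # km
    for E in (5.0, 18.0):             # GeV
        print(f"L = {L:6.0f} km, E = {E:4.1f} GeV: "
              f"P = {P_osc(E, L, dm2):.4f}")
```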
In these conditions, low $`\mathrm{\Delta }\mathrm{m}^2`$ values (below $`5\times 10^{-3}\mathrm{eV}^2`$) are such that the oscillated $`\nu _\tau `$ energy spectrum peaks at low energy (below 10 GeV). At these energies, the quasi-elastic $`\nu _\tau `$ interactions have a significant contribution to the total event rate. An experiment aiming to reach very low values of $`\mathrm{\Delta }\mathrm{m}^2`$ could thus be optimized for the detection of quasi-elastic processes, since they represent a very clean signal compared to the deep-inelastic processes. An efficient way of studying those interactions is to focus on the events where the produced $`\tau `$ decays to an electron ($`\nu _\tau \mathrm{n}\to \tau \mathrm{p}`$, with $`\tau \to \mathrm{e}\overline{\nu }_\mathrm{e}\nu _\tau `$; the branching ratio of the $`\tau `$ decay to electron is about 18% ). The signature of these interactions is similar to the $`\nu _\mathrm{e}`$ quasi-elastic events ($`\nu _\mathrm{e}\mathrm{n}\to \mathrm{e}\mathrm{p}`$) that I216 aims to study to search for $`\nu _\mu \to \nu _\mathrm{e}`$ appearance. Figure 1 shows the two topologies involved for $`\nu _\mu \to \nu _\mathrm{e}`$ (a) and $`\nu _\mu \to \nu _\tau `$ (b) appearance experiments. The I216 experiment would look for $`\nu _\mathrm{e}`$ appearance in a $`\nu _\mu `$ beam by comparing the ratio of $`\nu _\mu `$ and $`\nu _\mathrm{e}`$ quasi-elastic interactions in detector modules at two locations (130 m and 885 m from the target). The current design of the modules consists of fine-grained, fully active scintillator calorimeters amounting to an overall mass of 500 tons. This experiment would use the CERN-PS to get a neutrino beam with an energy of about 1.5 GeV. In the present study, we use one of the detector designs of I216 (to be described later) and apply it to the detection of $`\nu _\tau `$ interactions.
Although addressing the question of neutrino oscillations in the atmospheric region is of great importance, the verification of the oscillation claim of LSND is no less important. The KARMEN experiment is trying to clarify the issue, and other projects have been proposed (MiniBoone and I216 itself). According to this study, we conclude that the use of a single detector technology could allow searches for neutrino oscillations both in the LSND region and in the atmospheric region, by performing a short baseline $`\nu _\mu \to \nu _\mathrm{e}`$ experiment as a first step towards a long baseline $`\nu _\tau `$ appearance experiment.
This paper is organized in the following way. A description of the detector is given in section 2. The event selection and the backgrounds are discussed in section 3, which is followed by section 4 on the oscillation sensitivity.
## 2 Description of the detector
In the studies related to the I216 Letter of Intent , it was realized that a fully active liquid scintillator detector with a granularity of the order of a few centimetres fulfils the general requirements for electron identification and topological reconstruction necessary for a $`\nu _\mathrm{e}`$ appearance search. We want to apply the same technique to the detection of $`\nu _\tau \mathrm{n}\to \tau \mathrm{p}`$ interactions with subsequent decay $`\tau \to \mathrm{e}\overline{\nu }_\mathrm{e}\nu _\tau `$, in a detector about thirty times larger (15 ktons). It should be noted that liquid scintillator detectors have already been successfully used in a wide range of experiments (for example: CHOOZ and MACRO ), and are foreseen for future large scale applications (for example: KamLAND and Borexino ).
### 2.1 Detector design
Because of the large scale, a simple and modular structure should be adopted (see figure 2). The active target consists of an oil based scintillating mixture contained in a large vessel. The granularity is provided by optical separators immersed in the liquid. These separators are structured in planes of parallel strips. The planes are perpendicular to the beam direction. The strips of a given plane are orthogonal to those of the adjacent planes in order to be able to do bi-dimensional reconstruction. The light collection is accomplished by wavelength-shifting (WLS) fibres placed inside each strip. The light from the fibres is readout by multi-pixel photon detectors. The general detector characteristics are reported in table 1 and discussed in the following sub-sections.
#### 2.1.1 The active medium
Mineral oil scintillator has been studied in great detail, and excellent performance in stability, light yield and attenuation length has already been reached in the past. In contrast to surface-readout detectors (for instance KamLAND and Borexino), where the attenuation length of the liquid plays an important role because the light must travel a large distance, the light emitted by the scintillator in the case of a WLS fibre readout is captured locally inside the fibre. The attenuation length of the liquid can then be shorter, which gives more freedom in the choice of the mixture.
A very recent development of oil-based liquid scintillators consists of using new solvents like PXE (phenyl-o-xylylethane) and LAB (linear alkylbenzenes) . Mixtures using these solvents can have very good scintillation properties (for example, 87% of the anthracene light output can be reached with the Bicron scintillator BC599-16 ). Their physical properties are well suited for a very large scintillating WLS fibre detector: they are non-toxic, non-flammable (flash point above 145°C) and good insulators. Both solvents mentioned here are produced in large quantities for industrial applications. These physical properties are crucial if the photodetectors are to be placed inside the scintillating mixture without any specific precaution concerning high voltages and power dissipation.
#### 2.1.2 Granularity
In this study, we fix the longitudinal sampling to 4 cm (corresponding to 1/13 of a radiation length) with a transverse granularity of 4 cm, to allow for topological reconstruction of the events. With such a granularity, the detector could be placed above ground. According to simulations and laboratory tests , the proper reflectivity of the walls can be obtained in various ways and with different materials (for example: aluminium painted with $`TiO_2`$, or extruded polypropylene).
#### 2.1.3 Wavelength shifting fibres
The WLS fibre readout technology allows the scintillation light from a large volume (the volume of a scintillator strip) to be collected with good efficiency onto a small surface (the fibre diameter at one end). This is crucial in our large scale application, because a significant fraction of the overall cost is set by the photodetectors. It is then important to maximize the amount of active mass which is read by a given photocathode surface. For this reason, we want strips as long as possible while still having enough photoelectrons to measure the ionization loss. We assume a strip length of 10 meters and a single-sided readout with a mirror on the opposite side of the fibre. From studies done for MINOS , we expect that a measured signal of an average of 10 photoelectrons (assuming a typical photodetector quantum efficiency; see the next paragraph) for a minimum ionizing particle passing through the far end of a strip could be achieved with current technology.
#### 2.1.4 Photodetectors
Multi-anode phototubes and hybrid photodiodes (HPD) are commercially available. They allow a large number of channels, up to a few hundred, to be read on a single device. In the case of HPDs, the cost is mainly proportional to the surface of the photocathode rather than to the number of channels. Typical quantum efficiencies at a wavelength of 520 nm (the peak value of the emission spectrum of the WLS fibre) are in the region of 13% to 20%. We have checked that these commercial photodetectors can be immersed in the mentioned liquid mixtures. This aspect is crucial to minimize the length of the WLS fibres outside the strip and the consequent light losses.
#### 2.1.5 Infrastructure, civil engineering and cost
The feasibility of a 15 kiloton detector also depends on civil engineering issues like safety, assembly, environmental impact and design costs. Industrial oil containers (among these, the use of an oil tanker is a possibility) have capacities largely exceeding our requirements and would be suited for our application. These could then be recycled for other purposes after the experiment is finished. The overall detector can be split into four modules of about 10$`\times `$10$`\times `$40 $`\mathrm{m}^3`$. An estimate of the costs of the main items of such a detector is given in table 2, based on informal contacts with manufacturers. The containers and the infrastructure are not included.
## 3 Event selection and background rejection
The flux components of the NGS neutrino beam are reported in table 3. The expected event rates for pure quasi-elastic, resonant and deep-inelastic processes (we define the deep-inelastic processes as those which occur above $`W^2=2\mathrm{GeV}^2`$) were computed using the latest version of the NGS neutrino beam; they are given in table 4. The number of $`\nu _\tau `$ interactions for several values of $`\mathrm{\Delta }\mathrm{m}^2`$ is shown. The rates are given per kiloton assuming four years of NGS beam at $`4\times 10^{19}`$ protons on target per year.
From table 4, it can be seen that the quasi-elastic processes for oscillated $`\nu _\tau `$ are a significant fraction of the total number of $`\nu _\tau `$ interactions. Furthermore, the resonant interactions, which also make an important fraction of the total number of events, can also have a topology similar to the quasi-elastic processes. Therefore, we focus our analysis on the clean quasi-elastic topology where the $`\tau `$ decays to an electron. We define this topology as an electromagnetic shower and at most one additional charged track in the final state. We shall see how it is possible to have a good efficiency by a simple selection of these events while rejecting an important fraction of the background coming from the $`\nu _\mathrm{e}`$ contamination of the beam as well as the other sources of background.
The selection of events could be refined by studying other event properties such as missing transverse momentum. Crucial experimental data giving information about the properties of the background, including cross-section measurements, could in principle come from another experiment like I216, where $`\nu _\tau `$ oscillations are excluded at large mixing angle.
It should be mentioned that other decay channels of the $`\tau `$ can be studied in addition to the electronic channel. For instance, the $`\tau \to \pi ^{-}\nu _\tau `$ decay in quasi-elastic interactions was studied in CHARM II for a $`\nu _\mu \to \nu _\tau `$ oscillation search at high sensitivity. This channel could provide a signal independent of the $`\nu _\mathrm{e}`$ contamination of the beam.
Several processes can produce events that mimic the quasi-elastic topology of the signal. The following contaminations were studied:
* $`\nu _\mathrm{e}`$ contamination
The main source of background for an appearance experiment selecting quasi-elastic $`\nu _\tau `$ interactions where the $`\tau `$ decays to an electron is the $`\nu _\mathrm{e}`$ contamination of the beam. As can be seen in table 3, this contamination is of the order of 0.6% of the $`\nu _\mu `$ component. Figure 3(a) shows the spectrum of $`\nu _\mu `$ and $`\nu _\mathrm{e}`$ of the NGS beam.
Figure 3(b) shows the neutrino energy distributions of $`\nu _\mathrm{e}`$ quasi-elastic events and of $`\nu _\tau `$ quasi-elastic events where the $`\tau `$ decays to an electron, assuming different values of $`\mathrm{\Delta }\mathrm{m}^2`$. It can be seen that the neutrino energy distribution of the background from the $`\nu _\mathrm{e}`$ contamination of the beam is harder than that of the oscillated $`\nu _\tau `$ quasi-elastic events.
One method to reject this background is to cut on the energy of the identified electron. Since the oscillation probability favors the low energy range of the $`\nu _\mu `$ spectrum, and since the electron of the signal comes from the decay of a $`\tau `$, the electrons of the signal will have a much lower energy than the electrons of pure quasi-elastic $`\nu _\mathrm{e}`$ processes. Figure 4(a) shows the electron energy distributions from both quasi-elastic $`\nu _\mathrm{e}`$ interactions and $`\tau `$ decays from $`\nu _\tau `$ quasi-elastic interactions. Figure 4(b) shows the normalized integral of these distributions between zero and a given value of the electron energy (x-axis). A cut at 1 GeV, which will be described later in the text, has been applied before integrating. Since the energy distributions of the electron are similar for all the $`\mathrm{\Delta }\mathrm{m}^2`$ values considered, the four curves of the signal overlap and cannot be distinguished. Another important feature of this figure is that the integral of the signal rises very rapidly at low energy whereas the background rises slowly. A cut on the energy of the electron can thus reduce the background efficiently while keeping most of the signal. For instance, for a cut at 10 GeV, 75% of the signal is kept while 70% of the background is rejected. Figure 4(c) shows the ratio of the number of background events and the number of signal events as a function of $`\mathrm{\Delta }\mathrm{m}^2`$, for a cut at 10 GeV. Figure 4(d) shows the signal selection efficiency for a 10 GeV cut as a function of $`\mathrm{\Delta }\mathrm{m}^2`$. The final choice of the energy cut should be done by maximizing the sensitivity of the experiment, which will critically depend on the choice of other event selection criteria, baseline and detector mass. We have checked a posteriori that the sensitivity does not change significantly if the maximum allowed lepton energy varies in the interval between 4 and 12 GeV (with a 735 km baseline and a 15 kiloton detector). Therefore, in the present study, we fix the upper limit of the lepton energy to be 10 GeV independently of the baseline. We conclude from figure 4 that a good rejection of quasi-elastic $`\nu _\mathrm{e}`$ events can be achieved by a simple energy cut. Additional criteria to reject the $`\nu _\mathrm{e}`$ contamination could be based on the kinematics of the events (for instance, the angle of the lepton with respect to the beam), but have not been used in this analysis.
* $`\pi ^0`$ in NC processes
The conversion of the photons coming from a $`\pi ^0`$ decay in a neutral-current process can fake the electron signature. To reject this background, we apply a combination of topological cuts on the signal recorded in the scintillator. Assuming that the vertex position is known, a $`\pi ^0`$ can mimic an electron in the following conditions:
+ at least one photon converts in a strip adjacent to the vertex position, while the two electromagnetic showers of the photons overlap.
+ one photon converts in a strip adjacent to the vertex position, while the other is missed (for example in an asymmetric decay of the $`\pi ^0`$).
In the cases stated above, the pulse height of the signal recorded in the scintillator can be used to discriminate between the passage of a single electron (which behaves as a minimum ionizing particle in the first tenths of a radiation length) and two electrons coming from a photon conversion. With an average of 10 photoelectrons per minimum ionizing particle passing through a strip, it is possible to reject 92% of the two-electron background using the information of the first strip only (see figure 5 and the sketch below). The selection efficiency of the signal is then about 86%.
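A minimal sketch of this first-strip pulse-height cut, assuming Poisson photoelectron statistics (means of 10 and 20 photoelectrons for one and two minimum ionizing particles; the threshold value is illustrative), roughly reproduces the quoted numbers:

```python
from scipy.stats import poisson

# First-strip pulse-height cut, assuming Poisson photoelectron
# statistics: one MIP gives on average 10 photoelectrons, two
# overlapping electrons ~20. The threshold (keep N <= 13) is
# illustrative.
mu_1mip, mu_2mip, cut = 10.0, 20.0, 13

eff_signal = poisson.cdf(cut, mu_1mip)        # single electron kept
rej_backgr = 1.0 - poisson.cdf(cut, mu_2mip)  # photon pair rejected
print(f"signal efficiency:    {eff_signal:.2f}")  # ~0.86
print(f"background rejection: {rej_backgr:.2f}")  # ~0.93
```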
Given the previous considerations, the selection of $`\nu _\tau `$ events with a quasi-elastic topology where the $`\tau `$ decays to an electron is the following:
1. The event must have a pure quasi-elastic topology, with a visible vertex identified by an electromagnetic shower and at most one charged track (assumed to be the proton). In the case where the proton track is not seen, we require a large pulse height corresponding to a heavily ionizing stopping particle. This peak will then be identified as the vertex. To ensure a well defined topology, we require that the proton track should not interact.
2. The electromagnetic shower should be connected to the vertex.
3. The visible energy of the electromagnetic shower should be between 1 GeV and 10 GeV to ensure that it comes from a $`\tau `$ decay, and not from a $`\nu _\mathrm{e}`$ quasi-elastic event. The lower cut on the energy also cuts $`\pi ^0`$ events, which peak at low energy.
4. The energy deposit of the electromagnetic shower in the first plane should be compatible with a single minimum ionizing particle.
The selection efficiencies for signal and background were evaluated using a quasi-elastic and resonant interaction generator and a Monte Carlo parametrized to simulate the detector response and reconstruction. Table 5 shows the different rates of events as computed using the cuts mentioned above. The energy distribution of the selected events is shown in figures 6 and 7 for the two baselines under study. Additional criteria could still be explored, as mentioned earlier. As an example, figure 8 shows the angular distribution of the lepton with respect to the beam direction for the signal and the background.
## 4 Sensitivity to neutrino oscillations
The sensitivity is the “average upper limit that would be obtained by an ensemble of experiments with the expected background and no true signal”, and has been computed by using the statistical techniques reported in reference . The sensitivity of the proposed experiment has been computed for a detector of 15 kilotons and a baseline of 735 km. The resulting sensitivity plot is shown in figure 10. The systematic uncertainty on the $`\nu _\mathrm{e}`$ contamination of the beam is assumed to be of the order of 5%.
We studied the possible improvements that can be achieved by varying the mass and the baseline. Figure 9 shows the lowest $`\mathrm{\Delta }\mathrm{m}^2`$ reachable by the experiment at 735 km as a function of its mass. Since the experiment is not background-free, the minimum $`\mathrm{\Delta }\mathrm{m}^2`$ value does not scale as $`1/\sqrt{\mathrm{mass}}`$.
When $`\mathrm{\Delta }\mathrm{m}^2(\mathrm{eV}^2)\ll E(\mathrm{GeV})/L(\mathrm{km})`$, the oscillation probability $`𝒫`$ is such that the gain in the number of oscillated $`\nu _\tau `$ compensates exactly for the loss of neutrino flux $`\varphi (\nu _\mu )`$ as the baseline is increased: $`𝒫(\nu _\mu \to \nu _\tau )\times \varphi (\nu _\mu )\approx \mathrm{constant}`$. On the other hand, the background is proportional to the $`\nu `$ flux, which decreases like $`1/L^2`$. The minimum $`\mathrm{\Delta }\mathrm{m}^2`$ value that can be probed by a given experiment having a non-negligible background thus decreases with increasing distance from the neutrino source, at the price of a reduced maximum sensitivity. Figure 11 shows the sensitivity plot for the same detector in the same beam, when it is positioned 3000 km away from the neutrino source. It should be stressed that the systematic uncertainties on the background (dominated by $`\nu _\mathrm{e}`$ interactions) are in this case less significant than for the 735 km baseline experiment, since the signal-to-background ratio improves by a factor $`(3000/735)^2\approx 17`$ (see the numerical sketch below).
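A minimal numerical illustration of this scaling argument (the mean beam energy and a small $`\mathrm{\Delta }\mathrm{m}^2`$ value are assumptions):

```python
import numpy as np

# Baseline scaling in the small-dm2 limit: the oscillated signal is
# roughly baseline-independent while the background falls as 1/L^2.
# E = 18 GeV (mean NGS energy) and dm2 are illustrative.
E, dm2 = 18.0, 1e-3                     # GeV, eV^2

def signal(L):      # ~ P_osc * flux, with flux ~ 1/L^2
    return np.sin(1.27 * dm2 * L / E) ** 2 / L**2

s_ratio = signal(3000.0) / signal(735.0)
sb_gain = s_ratio * (3000.0 / 735.0) ** 2   # background ~ 1/L^2
print(f"signal(3000)/signal(735) = {s_ratio:.2f}")  # ~1.0
print(f"S/B gain at 3000 km      = {sb_gain:.1f}")  # ~16.4
```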
## 5 Conclusion
We believe that the liquid scintillator detector technology could be used to build a large detector to search for $`\nu _\mu \to \nu _\tau `$ oscillations in the atmospheric region with a long baseline beam. Whenever $`\mathrm{\Delta }\mathrm{m}^2(\mathrm{eV}^2)\ll E(\mathrm{GeV})/L(\mathrm{km})`$, with the current beam designs, the quasi-elastic regime has a significant contribution to the overall interaction rate of the oscillated $`\nu _\tau `$. In this study, we have concentrated on a simple selection of $`\nu _\tau `$ quasi-elastic interactions where the $`\tau `$ subsequently decays to an electron. A 15 kiloton detector running for four years at 735 km from the neutrino source of a beam similar to the proposed NGS can probably achieve a minimum $`\mathrm{\Delta }\mathrm{m}^2`$ of about $`1.5\times 10^{-3}\mathrm{eV}^2`$ at 90% C.L. in appearance mode. This result could improve down to about $`1.1\times 10^{-3}\mathrm{eV}^2`$ if the baseline were increased to 3000 km. Additional channels in the quasi-elastic regime, as well as the analysis of deep-inelastic events, could improve the potential of the experiment in both appearance and disappearance modes.
## Acknowledgments
We would like to thank our CHORUS and I216 colleagues for useful discussions. We are grateful to J.-P. Fabre, P. Migliozzi and S. Ricciardi for constructive comments and valuable information. We would also like to thank V. Palladino and N. Vassilopoulos for providing us with the beam spectrum.
# A simulation of galaxy formation and clustering
## 1. Introduction
Studies of galaxy formation have advanced at an unprecedented rate in the past few years. Data from the Keck and Hubble Space telescopes have revolutionised our view of the high redshift Universe (e.g. Steidel et al. 1998) and have led to claims that the main phases of galaxy formation activity may have now been observed (Baugh et al. 1998). From the theoretical point of view, modelling galaxy formation presents a formidable challenge because it involves the synthesis of a wide range of disciplines, from early universe cosmology to the microphysics and chemistry of star formation.
Because of the strongly non-linear and asymmetric nature of gravitational collapse, the problem of galaxy formation is best addressed by direct numerical simulation. The main difficulty stems from the huge range of scales spanned by the relevant processes, from star formation to large-scale clustering, which cannot all be simultaneously resolved with current simulation techniques. Two complementary strategies have been developed to deal with processes occurring below the resolution limit of a simulation. In one of them, a semi-analytic model of the dynamics of gas and star formation is used, for example, in conjunction with N-body simulations of the formation of dark matter halos (Kauffmann et al. 1997, 1999a,b; Benson et al. 1999). This technique permits a large dynamic range to be followed, at the expense of simplifying assumptions, such as spherical symmetry, for the treatment of the dynamics of cooling gas and star formation. The alternative approach, which is the one adopted in this paper, is to solve directly the evolution equations for gravitationally coupled dark matter and dissipative gas. This enables the dynamics of the gas to be treated with a minimum of assumptions, at the expense of a severe reduction in the accessible dynamic range. As in the semi-analytic approach, a phenomenological model for star formation and feedback is required.
Eulerian and Lagrangian numerical hydrodynamics have been used to simulate galaxy formation. At present only the latter, implemented by means of the Smooth Particle Hydrodynamics or SPH technique, provides sufficient resolution to follow the formation of individual galaxies. For example, the best Eulerian simulations to date, such as those of Blanton et al. (1999), have gas resolution elements of $`300`$–$`500`$ kpc, whereas the early SPH simulation of Carlberg et al. (1990) had a spatial resolution of 20 kpc. This simulation, together with those of Katz et al. (1992), Evrard et al. (1994) and Frenk et al. (1996), was among the first to resolve individual galaxies in relatively large volumes, allowing detailed studies of the distribution of galaxies and the dynamics of galaxies in clusters. However, the volumes modelled in this early work were much too small to allow reliable investigations of galaxy clustering at the present day. Galaxy clustering at high redshift has been investigated in the simulations of Katz et al. (1999).
In this letter we present the first results of a large N-body/SPH simulation of galaxy formation in a representative volume of a cold dark matter universe, employing about an order of magnitude more particles than the largest previous study of this kind (Katz et al. 1996). Our simulation produced 2266 galaxies at the present day, compared to 60 in the simulation of Katz et al. (1992) and 58 in those of Katz et al. (1996), the only other large SPH calculations to have been evolved to the present. Earlier dark matter simulations by the Virgo Consortium (Jenkins et al. 1998) demonstrated the kind of biases required for CDM universes to provide a good match to observations of galaxy clustering. Here we show that these biases arise quite naturally.
## 2. The simulation
We have simulated a region of a CDM universe with the same cosmological parameters as the $`\mathrm{\Lambda }`$CDM simulation of Jenkins et al. (1998): mean mass density parameter, $`\mathrm{\Omega }_0=0.3`$; cosmological constant, $`\mathrm{\Lambda }/(3H_0^2)=0.7`$; Hubble constant (in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$), $`h=0.7`$ (hereafter we adopt this value, unless otherwise stated), and rms linear fluctuation amplitude in 8$`h^{-1}\mathrm{Mpc}`$ spheres, $`\sigma _8=0.9`$. The baryon fraction was set to $`\mathrm{\Omega }_bh^2=0.015`$, from Big Bang nucleosynthesis constraints (Copi et al. 1995). We assumed an unevolving gas metallicity of 0.3 times the solar value. The simulation was carried out using “parallel Hydra”, an adaptive, particle-particle, particle-mesh N-body/SPH code (Pearce & Couchman 1998), based on the publicly released serial version of Couchman et al. (1995).
Our simulation followed 2097152 dark matter particles and the same number of gas particles in a cube of side $`100\mathrm{Mpc}`$ and required 12492 timesteps to evolve from $`z=50`$ to $`z=0`$. The gas mass per particle is $`2\times 10^9\mathrm{M}_{\odot }`$ and, since we typically smooth over 32 SPH neighbors, the smallest resolved objects have a gas mass of $`6.4\times 10^{10}\mathrm{M}_{\odot }`$. We employed a comoving $`\beta `$-spline gravitational softening, equivalent to a Plummer softening of $`14.3\mathrm{kpc}`$, until $`z=2.5`$. Thereafter, the softening remained fixed at this value, in physical coordinates, and the minimum SPH resolution was set to match this value. With our chosen parameters, our simulation was able to follow the cooling of gas into galactic dark matter halos. The resulting “galaxies” typically have 50-1000 particles. With a spatial resolution of $`14.3\mathrm{kpc}`$, we cannot resolve the internal structure of galaxies and we must be cautious about the possibility of enhanced tidal disruption, drag, and merging within the largest clusters. However, as we argue below, there is no evidence that this is a major problem.
As in all studies of this type, a phenomenological model is required to treat physical processes occurring below the resolution limit of the simulation. The first of these is the runaway cooling instability present in hierarchical clustering models of galaxy formation. At high redshift, the cooling time in dense subgalactic objects is so short that most of the gas would cool (and presumably turn into stars) unless other processes acted to counteract cooling (White & Rees 1978, Cole 1991, White & Frenk 1991). Since all the gas in the universe has clearly not cooled into dark matter halos, a common assumption is that feedback from early generations of stars will have reheated the gas, preventing it from cooling catastrophically.
Although a variety of prescriptions have been used to model feedback (e.g. Navarro & White 1993, Steinmetz & Müller 1995, Katz et al. 1996), this process remains poorly understood. In cosmological SPH simulations, gas can only cool efficiently in objects above the minimum resolved gas mass, in our case $`6.4\times 10^{10}\mathrm{M}_{\odot }`$. Thus, resolution effects alone act as a crude form of feedback. Semi-analytical models of galaxy formation suggest that feedback is relatively unimportant on mass scales above our resolution limit. We do not, therefore, impose any prescription for feedback over and above that provided naturally by resolution effects. If the rate at which gas cools in the simulation is identified with the rate at which stars form, our adopted parameters give rise to a cosmic history of star formation in broad agreement with data from $`z\approx 4`$ to the present (Madau et al. 1998).
The second sub-resolution process that we must model is star formation and the associated coupling of different gas phases in the interstellar medium. Like feedback, this is a complex and poorly understood phenomenon. In some SPH simulations, groups of cooled gas particles have been identified with galaxies (the “globs” of Evrard et al. 1994). One disadvantage of this procedure is that dense knots of cold gas can affect the cooling of surrounding hot material because of the smoothing inherent in the SPH technique. To avoid this problem, an alternative strategy often used is to assume that gas that has cooled turns into collisionless “stars” according to some heuristic algorithm (Navarro & White 1993, Katz et al. 1996, Steinmetz & Müller 1995). This prescription effectively decouples the cooled gas from the hot component. A drawback is that “stellar” systems made up of only a few particles are fragile and can easily be disrupted.
We have adopted an intermediate strategy intended as a compromise between the extremes of letting clumps of cool gas persist in the simulation and turning them into stars. As in the first case, we identify galaxies with groups of gas particles that have cooled below $`10^4\mathrm{K}`$. However, when computing the SPH density of particles with temperatures above $`10^5\mathrm{K}`$, we do not include any contribution from particles below $`10^4\mathrm{K}`$. All other SPH interactions remain unaffected. As in the case where cool gas is turned into stars, our model effectively decouples the galactic material from the surrounding hot halo gas, but unlike this case, “galaxies” in our model are made of dissipative material and thus are more resilient to tidal interactions and mergers than model stellar galaxies. Our model of the intergalactic medium can be regarded as a simple, first step towards a multiphase implementation of SPH, an important requirement when dealing with situations in which there are steep density gradients. The main effect of our treatment of cool gas is to prevent the formation of very massive galaxies in the centers of the richest clusters in the simulation, as happened, for example, in the simulation of Frenk et al. (1996). Thacker et al. (1998) present a more detailed discussion of the effects of runaway cooling and the production of supermassive objects.
Fig. 1. The mass fraction in the form of cold gas in virialized halos, normalized to the average baryon fraction in the simulation. The dots show results from the N-body/SPH simulation. The pentagons show results from a semi-analytic model constrained to match the parameters of the simulation. The open squares show results from a full semi-analytic model.
As a test of our techniques, Fig. 1 compares the amount of cold gas in halos in our simulation with that predicted by the semi-analytic model of Cole et al. (1999). Halos in the simulation were located by first identifying suitable centers using the friends-of-friends group finder of Davis et al. (1985) with linking parameter, $`b=0.05`$, and then growing spheres around these centers out to the virial radius (defined as the radius within which the mean overdensity is 323; see Eke et al. 1996). Only halos with more than 50 dark matter particles were considered, of which there are 2353 in the simulation, spanning over 3 orders of magnitude in mass. The simulation results (shown as dots in the figure) are compared with two versions of the semi-analytic model. In the first (filled pentagons), the parameters of the semi-analytic model were set so as to mimic the conditions in the simulation as closely as possible. The mass resolution of the semi-analytic model was degraded to that of the simulation and the feedback was switched off. The agreement between the simulation and modified semi-analytic model indicates that there are only minor differences in the cooling properties of the gas calculated with these two different techniques. The second comparison is intended as a test of how well the artificial “feedback” produced by resolution effects compares with the physically motivated feedback prescription used in semi-analytic models. In this case (open squares), the parameters of the semi-analytic model were set so as to obtain a good match to the faint end of the galaxy luminosity function, as discussed by Cole et al. (1999). The agreement with the simulation in this case is moderate. Clearly, feedback in the semi-analytic model prevents cooling within galaxy halos more efficiently than do resolution effects in the simulation, but the difference is only about 50% for halos similar to that of the Milky Way.
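A minimal sketch of the spherical-overdensity step in this halo definition (particle positions, masses and the candidate center are assumed given; the center is taken to be a geometric point, not a particle):

```python
import numpy as np

# Spherical-overdensity step of the halo definition above: grow a
# sphere around a candidate center until the mean enclosed density
# drops below 323 times the mean density. Positions (N x 3 array),
# particle mass and mean density are assumed given.
def virial_radius(center, pos, m_part, rho_mean, delta=323.0):
    """Radius within which the mean overdensity equals `delta`."""
    r = np.sort(np.linalg.norm(pos - center, axis=1))
    m_enc = m_part * np.arange(1, len(r) + 1)   # enclosed mass
    rho_enc = m_enc / (4.0 / 3.0 * np.pi * r**3)
    inside = rho_enc >= delta * rho_mean
    return r[inside][-1] if inside.any() else None
```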
Fig. 2. The ratio of gas mass to dark matter within the virial radius of each halo, in units of the mean baryon fraction in the simulation. The circles show the total gas mass fraction while the crosses and triangles show the mass fractions of gas hotter and cooler than $`12000\mathrm{K}`$ respectively. The solid line shows the resolution limit of 32 gas particles.
## 3. Results
The ability of gas to cool is a strong function of the mass of the host halo. Fig. 2 shows the fraction of hot and cold gas within the virial radius of each halo, normalized to the mean baryon fraction of the simulation (10%), as a function of halo mass. In small halos just above the resolution limit (indicated by the solid line), most of the gas cools, but the fraction of cold gas decreases rapidly with halo mass as the cooling time for the hot gas increases. In large halos, most of the gas never cools. The crossover point occurs at a halo mass of $`10^{13}\mathrm{M}_{\odot }`$. Because of the generally asymmetric and chaotic nature of halo formation, a few low mass halos have baryon fractions in excess of the universal mean, but in most galactic halos the baryon fraction ranges between 80 and 100% of the cosmic mean. On the scale of galaxy clusters, the baryon fraction is 85%, similar to the values obtained by White et al. (1993) and Frenk et al. (1999).
We identify “galaxies” in our simulation with dense knots of cold gas. These are very easy to locate except in the minority of cases where a merger is ongoing or where the galaxy is experiencing significant tidal disruption or ablation within a cluster halo. To find galaxies we used the friends-of-friends group finder, with a linking length of $`0.0164(1+z)`$ times the mean comoving interparticle separation. This selects material with overdensity greater than $`10^5`$ at $`z=0`$. The galaxy catalogue is almost unaffected by large changes in the maximum linking length. At the end of the simulation there were 2266 resolved objects within the volume.
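The quoted overdensity can be checked against a common rule of thumb relating a friends-of-friends linking length $`b`$ (in units of the mean interparticle separation) to the density contrast of two just-linked particles, $`\delta \approx 3/(2\pi b^3)`$; this conversion is approximate, not exact:

```python
import numpy as np

# Overdensity implied by the friends-of-friends linking length used
# for the galaxy catalogue, via the approximate conversion
# delta ~ 3/(2 pi b^3) for two just-linked particles.
b = 0.0164                             # linking length at z = 0
delta = 3.0 / (2.0 * np.pi * b**3)
print(f"overdensity ~ {delta:.2e}")    # ~1.1e5, i.e. >1e5 as quoted
```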
Fig. 3. A comparison between the K-band galaxy luminosity function in the simulation with observations. The simulation data are shown by open triangles and the data from Gardner et al. (1997) by filled squares. A luminosity normalization factor of $`\mathrm{{\rm Y}}=2.8`$ has been assumed. Poisson errors are shown.
We can assign a luminosity to each galaxy in our simulation using the stellar population synthesis model of Bruzual & Charlot (1993). For this purpose we assume that at each model output, a fraction, $`1/\mathrm{{\rm Y}}`$, of the gas that has cooled since the previous output turns into stars in an instantaneous burst with a Salpeter IMF whose subsequent spectrophotometric evolution is given by the synthesis model. Because the output times were relatively infrequent, this procedure works best for K-band luminosities.
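A minimal sketch of this bookkeeping; the function `ssp_LK(age)` standing in for the Bruzual & Charlot tables is a hypothetical placeholder, and the burst treatment follows the text:

```python
# Assign a K-band luminosity to a galaxy by treating the gas cooled
# between successive outputs as instantaneous bursts. ssp_LK(age)
# (K-band luminosity per solar mass of stars formed, Salpeter IMF)
# is a hypothetical stand-in for the Bruzual & Charlot (1993) model;
# UPSILON = 2.8 is the normalization factor from the text.
UPSILON = 2.8

def galaxy_LK(t_now, t_outputs, m_cooled, ssp_LK):
    """Sum the fading K-band light of all past bursts."""
    L_K = 0.0
    for t_i, dm_cold in zip(t_outputs, m_cooled):
        m_stars = dm_cold / UPSILON           # visible stars formed
        L_K += m_stars * ssp_LK(t_now - t_i)  # burst of age t_now-t_i
    return L_K
```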
In Fig. 3 we compare the resulting K-band galaxy luminosity function with the observational data of Gardner et al. (1997). The shape of the model luminosity function agrees well with the data, and the model and observed functions match well if we set the luminosity normalization factor, $`\mathrm{{\rm Y}}=2.8`$. This implies that only 35% of the cold gas has been turned into visible stars, with the rest remaining in dense gas clouds and brown dwarfs or hidden in some other form. The associated mass-to-light ratios are about twice as large as those measured for elliptical galaxies, but these numbers should be treated with caution. Not only is our star formation prescription very crude, but our model ignores the effects of metallicity and obscuration by dust. Furthermore, as the comparison with the full semi-analytic model indicates, too much gas has probably cooled in our simulation because of our neglect of feedback processes. In spite of these reservations, the agreement in Fig. 3 is very good and suggests that our simulation provides a realistic description of the formation of bright galaxies.
The relatively large volume of our simulation allows a reliable measurement of the clustering properties of galaxies and their relation to the clustering properties of the mass. The galaxy and mass two-point correlation functions at various epochs are plotted in Fig. 4. The mass correlation function agrees very well with the results of our earlier, dark matter only simulations which followed a cubic region of side $`342\mathrm{Mpc}`$ using 16.8 million particles (Jenkins et al. 1998). The clustering amplitude of the mass grows by a factor of about 30 between the two epochs shown in the figure, $`z=3`$ and $`z=0`$. By contrast, the galaxy correlation function hardly evolves at all between $`z=3`$ and $`z=0`$.
Fig. 4. Mass and galaxy correlation functions. The dashed lines show the mass correlation functions at $`z=3`$ and $`z=0`$. The solid, long dashed and dotted lines show the galaxy correlation functions in the simulation at the indicated redshifts. The squares show the observed, real-space correlation function, estimated by Baugh (1996), from the APM survey.
The difference between the clustering growth rates of galaxies and mass is a manifestation of “biased galaxy formation”, the preferential formation of galaxies in high peaks of the primordial density field. It was already seen in the first simulations of cold dark matter models by Davis et al. (1985), in which galaxies were put in “by hand” near high peaks of the initial density field. It is also very clear in the SPH simulations of Evrard et al. (1994) and Katz et al. (1999), and can even be inferred from the N-body only simulations of Bagla (1998) and Colin et al. (1999). Semi-analytic techniques have been used on their own (Baugh et al. 1999), or combined with N-body simulations (Kauffmann et al. 1999b), for detailed study of the clustering evolution of galaxies, while specific applications to high redshift Lyman-break galaxies are to be found in Baugh et al. (1998) and Governato et al. (1998). The latter models provide an excellent description of the strong clustering discovered by Adelberger et al. (1998).
In Fig. 4 we also plot the observed, real-space galaxy correlation function at $`z\approx 0`$, estimated by Baugh (1996) from the APM survey (filled squares). This may be compared with the $`z=0`$ results in our simulation (solid line). On scales larger than a few hundred kpc the agreement is good. (The differences at $`r>10h^{-1}\mathrm{Mpc}`$ are due, for the most part, to finite volume effects, as we have verified by comparison with the larger simulations of Jenkins et al. 1998.) Beyond $`1h^{-1}\mathrm{Mpc}`$ the galaxy correlation function is very close to the mass correlation function. On smaller scales galaxies are less strongly clustered than the mass, or antibiased, an effect that persists down to separations of $`100h^{-1}`$kpc. At small separations the model correlations lie above the APM data. Over nearly four orders of magnitude in amplitude, the model galaxy correlation function is very close to a power-law even though the mass correlation function is not. An essentially featureless galaxy correlation function was also obtained for the same cosmological model in the semi-analytic model of Benson et al. (1999) and, for some parameter combinations, in those of Kauffmann et al. (1999a).
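For reference, the two-point function of a catalogue in a periodic box can be estimated without a random catalogue, since the expected pair count of an unclustered distribution is analytic; a minimal sketch (the toy data below are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_periodic(pos, box, r_edges):
    """Two-point correlation function in a periodic box:
    xi(r) = DD(r) / DD_expected(r) - 1, no random catalogue needed."""
    n = len(pos)
    tree = cKDTree(pos, boxsize=box)
    cum = tree.count_neighbors(tree, r_edges)     # cumulative ordered pair counts
    dd = np.diff(cum) / 2.0                       # unordered pairs per radial bin
    shell = 4.0 * np.pi / 3.0 * np.diff(r_edges ** 3)
    expected = 0.5 * n * (n - 1) * shell / box ** 3
    return dd / expected - 1.0

# toy usage: an unclustered catalogue, so xi should scatter around zero
pos = np.random.rand(5000, 3) * 100.0
print(xi_periodic(pos, 100.0, np.logspace(-0.5, 1.3, 15)))
```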
## 4. Conclusions
The simulation presented here is the first to resolve galaxy formation in a large enough volume to allow a reliable study of the demographics and clustering of galaxies. Our results are encouraging: the resulting luminosity function and correlation function of galaxies are broadly consistent with observations. Furthermore, the correlation function of bright galaxies in the simulation changes little since $`z=3`$, in agreement with results from semi-analytic studies and with available data at high redshift. Further progress will require a more detailed treatment of the astrophysics of galaxy formation, particularly of the processes of star formation and feedback.
## Acknowledgments
This work was carried out as part of the programme of the Virgo Consortium, using the facilities of the Computing Centre of the Max-Planck Society in Garching and the Edinburgh Parallel Computing Centre. FRP, PAT and HMPC acknowledge a NATO collaborative research grant (CRG 970081). This work was supported by the EC network for “Galaxy formation and evolution.” We thank Carlton Baugh and Shaun Cole for providing unpublished results from their semi-analytic models.
# Deuteron production and space-momentum correlations at RHIC
## 1 Introduction
Nuclear clusters have been a useful tool to establish collective effects throughout the history of heavy-ion reactions: production rates have provided evidence for low-temperature phase transitions, and the spectral distribution shows particular sensitivity to collective flow, transverse expansion, and potential forces. With the planned commissioning of the Relativistic Heavy-Ion Collider (RHIC), the need for predictions of the baryon distribution in general, and of light clusters in particular, is evident. It should be mentioned that predictions of different transport models for RHIC energies already differ considerably in rather basic observables like total particle multiplicities. In this paper we present some results of calculations based on the cascade model RQMD version 2.4 and a coalescence afterburner. These results are part of the effort to formulate a physics program for the STAR collaboration. More details related to this study and an extensive discussion of the sensitivity of the light nuclei to various properties of the colliding system at RHIC can be found elsewhere.
## 2 Rapidity Distributions
One of the basic observables in nucleus-nucleus collisions is the rapidity distribution of nucleons. It reflects the energy loss of the nucleons as well as bulk properties of the particle production in a collision. Figure 1 shows predictions, based on RQMD calculations, of rapidity distributions of protons and deuterons as well as antiprotons and antideuterons for central Au+Au collisions at full RHIC energy. As mentioned earlier, predictions for clusters were made in a coalescence framework. Vertical dashed lines in Figure 1 schematically show the expected acceptance of the STAR TPC. One can conclude from the figure that RQMD predicts 1 deuteron to be emitted into the STAR acceptance for about every 20 central events. The predicted rate of antideuteron production is about 1 per 100 events. With the expected trigger rate of STAR for central Au+Au collisions ($`\sim `$1 Hz), these predicted rates make deuteron and antideuteron measurements feasible and a good candidate for “year one” physics.
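The coalescence afterburner used here is J. Nagle's code; purely as an illustration of the idea, the simplest momentum-space prescription binds a proton-neutron pair into a deuteron when the relative momentum of the pair is below a cutoff. A non-relativistic sketch (the cutoff value and pairing scheme are placeholders, not the parameters of the actual afterburner):

```python
import numpy as np

P0 = 0.3  # GeV/c, placeholder coalescence radius in momentum space

def coalesce(protons, neutrons, p0=P0):
    """Toy momentum-space coalescence: pair each proton with the first
    unused neutron whose half relative momentum is below p0.
    protons, neutrons: arrays of 3-momenta. Returns deuteron momenta."""
    deuterons, used = [], set()
    for p in protons:
        for j, n in enumerate(neutrons):
            if j in used:
                continue
            if np.linalg.norm(0.5 * (p - n)) < p0:   # half the relative momentum
                deuterons.append(p + n)
                used.add(j)
                break
    return np.array(deuterons)
```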
## 3 Transverse Momentum Distributions
Another important basic observable of a heavy-ion collision is the transverse momentum distribution of baryons. It is sensitive to various physical properties of the collision. Transverse momentum distributions reflect the degree of thermalization reached in the heavy-ion collision as well as the effects of collective flow. Fig. 2(a) presents the rapidity dependence of the average transverse momentum of protons and deuterons for normal RQMD events. The average transverse momentum for deuterons is about a factor of two higher than for protons. For comparison, the rapidity dependence of the mean transverse momentum of pions and kaons is shown as dashed and solid lines, respectively. A clear “particle mass hierarchy” of the mean $`p_T`$ is evident from Fig. 2(a), which is commonly attributed to the presence of transverse flow.
The influence of the collective transverse flow component on the mean transverse momentum can be further demonstrated by comparing this result with calculations where the freeze-out correlation of positions and momenta of the nucleons has been deliberately altered. Panels (b) and (c) in Fig. 2 show the results of such calculations. Fig. 2(b) shows the case where the angle between $`\stackrel{\to }{p}_T`$ and $`\stackrel{\to }{r}_T`$ has been randomized. This procedure produces a system with no collective flow. It can be seen from the figure that the difference between the average transverse momenta of deuterons and protons is dramatically reduced. Fig. 2(c) shows the so-called aligned case, where for each nucleon the transverse radius vector $`\stackrel{\to }{r}_T`$ is aligned with the transverse momentum vector $`\stackrel{\to }{p}_T`$. This case mimics a “maximum flow” scenario. Note that in the aligned and random cases only the relative orientation of $`\stackrel{\to }{r}_T`$ to $`\stackrel{\to }{p}_T`$ is modified: momentum distributions and projections onto either $`r_T`$ or $`p_T`$ are not touched. These figures illustrate the high sensitivity of the deuteron spectra to the momentum-position correlations. Fig. 2(d) is the result of a calculation without rescattering among baryons (rescattering here means interaction with produced particles). Similar to the random case, the results of calculations without baryon rescattering (Fig. 2(d)) show a constant, rapidity-independent difference between deuteron and proton transverse momentum of about 150 MeV. This suggests that multiple rescattering among particles in RQMD leads to collective flow.
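Operationally, the two altered freeze-out configurations keep every $`|r_T|`$ and every $`p_T`$ fixed and change only the relative azimuth; a sketch of both operations on freeze-out arrays (the array shapes and names are ours, not those of the RQMD output):

```python
import numpy as np

def randomize_rt(r_t, rng=np.random.default_rng()):
    """Rotate each transverse position by a random azimuth: destroys the
    r_T-p_T correlation (no collective transverse flow) while leaving the
    momentum spectra and the |r_T| distribution untouched."""
    phi = rng.uniform(0.0, 2.0 * np.pi, len(r_t))
    c, s = np.cos(phi), np.sin(phi)
    x, y = r_t[:, 0], r_t[:, 1]
    return np.column_stack((c * x - s * y, s * x + c * y))

def align_rt(r_t, p_t):
    """Point each r_T along the particle's p_T ('maximum flow'),
    keeping |r_T| fixed."""
    radius = np.linalg.norm(r_t, axis=1, keepdims=True)
    phat = p_t / np.linalg.norm(p_t, axis=1, keepdims=True)
    return phat * radius
```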
In summary, using a microscopic transport model RQMD and a coalescence afterburner, we have calculated rapidity distributions of protons and deuterons as well as their antiparticles for central Au+Au collisions at $`\sqrt{s}=200A`$ GeV. We studied the sensitivity of the deuteron transverse momentum distributions in different rapidity regions to the effects of the transverse collective flow. Should new physics occur at RHIC, a modification of the space-momentum structure will manifest itself in the deuteron yields and transverse momentum distributions. These distributions can be measured in the STAR TPC and other RHIC experiments.
∗ B. Monreal is at Lawrence Berkeley National Laboratory through the Center for Science and Engineering Education.
## Acknowledgement
We are grateful for many enlightening discussions with Drs. S. Johnson, D. Keane, S. Pratt, H.G. Ritter, S. Voloshin, R. Witt. We especially thank Dr. J. Nagle for permission to use his coalescence code. This research used resources of the National Energy Research Scientific Computing Center. This work has been supported by the U.S. Department of Energy under Contract No. DE-AC03-76SF00098 and W-7405-ENG-36, DOE grant DE-FG02-89ER40531 and the Energy Research Undergraduate Laboratory Fellowship and National Science Foundation.
# Phase transition in inelastic disks
## Abstract
This letter investigates the molecular dynamics of inelastic disks without external forcing. By introducing a new observation frame with a rescaled time, we observe the virtual steady states converted from asymptotic energy dissipation processes. System behavior in the thermodynamic limit is carefully investigated. It is found that a phase transition with symmetry breaking occurs when the magnitude of dissipation is greater than a critical value.
In energy-conserving systems, it is well known that macroscopic properties at an equilibrium state can be described by a few state variables. This raises the question of whether, in dissipative systems, similar state variables might be defined for macroscopic properties at steady states maintained by an external energy source. When the dissipation rate is sufficiently small, the system might be expected to behave as a conservative system; but is this actually the case? To examine this and other related questions, it is interesting to investigate models that can connect a conservative and a dissipative system by varying the system parameters.
An ensemble of elastic hard disks is one of the simplest models for describing the fluid state in conservative systems. Computer simulations played an important role in discovering the existence of the fluid-solid transition in this model. We will here consider an ensemble of inelastic hard disks, which has previously been investigated as a model of granular materials. By varying the inelasticity, we can continuously change the system from a conservative to a dissipative one. This ability will be useful for our present purpose of investigating thermodynamic properties in dissipative systems.
In order to attain a steady state in a dissipative system, an external energy source that compensates for energy dissipation due to inelastic collisions is indispensable. In granular physics, the vibrating bed is a typical energy source. Nevertheless, attaching such an energy source will break the isotropy of the system from the outset. In this letter, we simulate the dissipative system without energy sources and investigate the virtual steady states using a new observation frame that will be introduced later.
The inelastic disk system without energy input, known as a cooling gas or dissipative gas system, has been investigated using molecular dynamics (MD) and hydrodynamic models. These studies reveal that the homogeneous state is unstable for sufficiently high inelasticity even without attaching any energy source. In this state, a cluster of particles and a collective mean flow (shear mode) are formed. We study how the instability is characterized by means of MD simulation of an ensemble of inelastic hard disks, with a particular focus on the asymptotic collective phenomena and their system-size dependence.
The system consists of an ensemble of $`N`$ inelastic hard disks in two dimensional space. For simplicity, each particle is assumed to have a unit mass and its rotational degrees of freedom are ignored. Collisions between circular particles are inelastic; the inelastic collision is implemented in the following manner. At a collision of two particles, the tangential velocities to the collision plane are preserved, while the normal component of the relative velocity $`\mathrm{\Delta }v_n`$ changes to $`\mathrm{\Delta }v_n^{}`$, where
$$\mathrm{\Delta }v_n^{}=-e_n\mathrm{\Delta }v_n.$$
(1)
Here, the coefficient of restitution $`e_n`$ $`(0\le e_n\le 1)`$ is constant for all collisions. In the following, we parameterize the inelasticity by $`ϵ=1-e_n`$. In the case $`ϵ=0`$, the system becomes conservative with elastic collisions; in the case $`ϵ>0`$, energy dissipation occurs due to the inelastic collisions.
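As a minimal illustration, rule (1) translates into the following velocity update for a single collision of two unit-mass disks (a sketch only; the production run is event-driven and also handles collision scheduling):

```python
import numpy as np

def collide(v1, v2, x1, x2, e_n):
    """Inelastic hard-disk collision for unit masses: the tangential
    components are untouched, while the normal relative velocity is
    reversed and scaled by the restitution coefficient e_n."""
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit normal at contact
    dv_n = np.dot(v1 - v2, n)                 # normal relative velocity
    impulse = 0.5 * (1.0 + e_n) * dv_n * n    # equal and opposite momentum kick
    return v1 - impulse, v2 + impulse
```

One can check that after the update the normal relative velocity equals $`-e_n\mathrm{\Delta }v_n`$ while the total momentum is conserved.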
The selected boundary conditions consist of a square box enclosed by elastic rigid walls. Thus the energy of the system is not dissipated by the bouncing of particles against the wall. For initial conditions, we adopt spatially homogeneous states equilibrated by setting $`ϵ=0`$. The time evolution of the system is calculated by the event-driven method.
In this letter, we focus on the asymptotic behavior of the system after a sufficiently long transient time. We note that our model is almost identical to the model in , except for the boundary conditions.
Under our boundary conditions, there are three important system parameters: the number of particles $`N`$, the inelasticity $`ϵ`$, and the area fraction of particles $`\varphi `$. Here $`\varphi `$ $`(0\le \varphi <1)`$ is the ratio of the total area covered by particles to that of the box $`S`$.
In the conservative case $`ϵ=0`$, it is known that this system, consisting of an ensemble of elastic hard disks, has two phases: a fluid and a solid phase. In this letter, we investigate the system in the range of the parameter $`\varphi `$ corresponding to the fluid phase when $`ϵ=0`$. For this parameter range, our results do not depend on the dispersity of particle radii $`a`$. In the following, we show only the results for the monodisperse case ($`a=0.5`$); however, the results of our simulations for the polydisperse case (uniform distribution in the range $`0.4\le a\le 0.5`$) are almost the same. On the other hand, in the range of the parameter $`\varphi `$ corresponding to the solid phase when $`ϵ=0`$, the results depend on the dispersity of particle radii. The results for this parameter range will be reported in a future publication.
For sufficiently large values of $`ϵ`$, it is known that the time development by the event-driven method is ill-defined because infinitely many collisions occur in a finite time interval (inelastic collapse). Our investigation is limited to sufficiently small values of $`ϵ`$ in order to avoid the inelastic collapse.
We will now describe how to observe the system. Since no energy source is attached to the system, the total energy of the system monotonically decreases in time. Nevertheless, by a rescaling of time, it is possible to make the energy of the system virtually conserved. We introduce the rescaled time $`\stackrel{~}{t}`$ and the rescaled velocity $`\stackrel{~}{v}_i`$ of the $`i`$-th particle:
$$\stackrel{~}{t}=\int _0^t\sqrt{T(t)}𝑑t,\stackrel{~}{v}_i=v_i(t)/\sqrt{T(t)},$$
(2)
where $`T(t)`$ is the averaged kinetic energy per particle at time $`t`$, $`T(t)=\sum _iv_i^2(t)/2N`$; and $`v_i(t)`$ is the original velocity of the $`i`$-th particle. We call the rescaled system “R-system” and the original system “O-system”. All of the present observations are carried out for the “R-system”. Under the translation by Eqs. (2), the averaged kinetic energy per particle in the “R-system” is normalized to unity, $`\stackrel{~}{T}(t)\equiv \sum _i\stackrel{~}{v}_i^2(t)/2N=1`$, i.e., the total kinetic energy is conserved for any $`ϵ`$ in the “R-system”. Further, asymptotic energy dissipation processes in the “O-system” are translated to steady states in the “R-system”.
It must be noted here that the translation by Eqs.(2) is just the rescaling of time and that the trajectories of the particles in space are not influenced by this rescaling. We calculate the time evolution of the “O-system” by the event driven method, and observe the “R-system” with a special focus on its steady states after a sufficiently long transient time. In the rest of this letter, all variables shown refer to those in the “R-system”.
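Given a sampled trajectory of the “O-system”, the map (2) to the “R-system” is a cumulative quadrature for $`\stackrel{~}{t}`$ and a pointwise rescaling of the velocities; a minimal sketch (the array shapes are our own convention):

```python
import numpy as np

def to_r_system(t, v):
    """t: sample times, shape (m,); v: velocities, shape (m, N, 2).
    Returns rescaled times and velocities of the 'R-system', in which
    the mean kinetic energy per particle is normalized to 1."""
    T = 0.5 * np.mean(np.sum(v ** 2, axis=2), axis=1)   # T(t) at each sample
    sqrtT = np.sqrt(T)
    # t_tilde = integral of sqrt(T) dt, by the trapezoidal rule
    t_tilde = np.concatenate(([0.0], np.cumsum(
        0.5 * (sqrtT[1:] + sqrtT[:-1]) * np.diff(t))))
    v_tilde = v / sqrtT[:, None, None]
    return t_tilde, v_tilde
```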
The dependencies of pressure on $`\varphi `$ for several values of $`ϵ`$ are shown in Fig. 1. The pressure $`P`$ is defined as the time-averaged sum of the impulses at the bouncing of particles on the walls per unit length per unit time in the “R-system”. The vertical axis in the figure is $`N\stackrel{~}{T}/PS`$, which is unity in the ideal gas limit ($`\varphi \to 0`$).
In the conservative case $`ϵ=0`$, only the fluid phase exists in the system for $`\varphi <\varphi _c`$. The value of $`\varphi _c`$ is reported to be $`\varphi _c\approx 0.7`$. Consider the dissipative cases $`ϵ>0`$ in Fig. 1(a), paying attention to the differences from the case $`ϵ=0`$. When $`ϵ=0.02`$, there is no appreciable difference. For $`ϵ=0.04`$, a decrease in pressure is observed in the intermediate range of $`\varphi `$. For $`ϵ=0.08`$, a similar difference appears in a wider range of $`\varphi `$.
Comparing Fig. 1(a) and (b), it is found that similar dependencies of pressure on $`\varphi `$ are observed for the same values of $`Nϵ`$. Thus $`Nϵ`$ is an important parameter of the system; it characterizes the distance from the conservative limit.
For sufficiently large $`Nϵ`$, the behavior of the system is quite different from the case $`ϵ=0`$. A snapshot of the system for $`(N,ϵ)=(256,0.08)`$ is shown in Fig.2(b) compared to that for $`ϵ=0`$ (Fig.2(a)).
From Fig. 2(b), it is found that correlations among the particle velocities exist and that a mean flow circulating anti-clockwise in the box is formed. As a result of the emergence of the circulating mean flow, the impulse imparted by particles to the walls at each bounce decreases compared with the case $`ϵ=0`$. Then the decrease of pressure, i.e., the increase of $`N\stackrel{~}{T}/PS`$, occurs. For larger values of $`ϵ`$, the circulation is formed in a wider range of $`\varphi `$, as seen in Fig. 1.
Since the circulation is formed above a certain threshold value of $`ϵ`$, the normalized total angular momentum $`M`$ of the system is taken as an order parameter of the system.
$$M=\frac{1}{N\sqrt{S}}\sum _{i=1}^N(r_i-R_0)\times \stackrel{~}{v}_i.$$
(3)
Here $`r_i`$ and $`\stackrel{~}{v}_i`$ are the coordinates and velocities of the $`i`$-th particle, $`R_0`$ are the coordinates of the center of the box, “$`\times `$” refers to the outer product, and the notation $`\stackrel{~}{v}`$ is used to remind us that we are observing the “R-system”. $`M`$ is normalized by the box length $`\sqrt{S}`$ to eliminate the dependency on the box size.
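For concreteness, the order parameter (3) is a one-line reduction over the particle data (a sketch with our own array conventions):

```python
import numpy as np

def order_parameter(r, v_tilde, box_center, box_area):
    """Normalized total angular momentum, Eq. (3): the z-component of
    (r_i - R_0) x v_i averaged over particles, divided by sqrt(S)."""
    d = r - box_center
    cross_z = d[:, 0] * v_tilde[:, 1] - d[:, 1] * v_tilde[:, 0]
    return cross_z.mean() / np.sqrt(box_area)
```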
The time development of $`M`$ for several values of $`ϵ`$ is shown in Fig. 3, where $`N=256`$ and $`\varphi =0.5`$. For $`ϵ=0`$, the value of $`M`$ fluctuates around $`M=0`$. As $`ϵ`$ is increased, the fluctuations of $`M`$ increase. For $`ϵ=0.036,0.038`$, the circulating mean flow is formed. The direction of the circulation sometimes turns over. As $`ϵ`$ increases further, the turning-over events become rare. In order to investigate the formation of the circulation in detail, the distributions of $`M`$ are shown in Fig. 4; because these distributions are symmetric around $`M=0`$, only the region $`M\ge 0`$ is shown.
As shown in Fig. 4, the peak position of $`M`$, $`M_{\mathrm{peak}}`$, becomes nonzero for $`ϵ`$ above a certain threshold value and increases continuously from zero with increasing $`ϵ`$. We also observe similar behavior when we vary the value of $`\varphi `$ while fixing the value of $`ϵ`$. Thus the circulation appears continuously for any direction in the parameter space ($`ϵ,\varphi `$). In Fig. 4, the distributions of $`M`$ are broad in shape, which indicates that the value of $`M`$ fluctuates around the value at the peak $`M_{\mathrm{peak}}`$. If the formation of the circulation is a phase transition accompanied by symmetry breaking in the thermodynamic limit ($`N\to \infty `$), the width of the distribution will converge to zero in this limit. In order to confirm this scenario, the distributions of $`M`$ are examined in their dependence on $`N`$ in Fig. 5, where $`N`$ is varied while $`Nϵ`$ is kept constant.
In Fig. 5(a), it is found that $`M_{\mathrm{peak}}`$ converges in the limit $`N\to \infty `$. This convergence shows that $`M_{\mathrm{peak}}`$ is a function of $`Nϵ`$ for sufficiently large $`N`$, because the value of $`Nϵ`$ is fixed in Fig. 5(a).
From Fig. 5(b), it is found that the width of the fluctuation of $`M`$ decreases in the manner of $`1/\sqrt{N}`$ as $`N`$ increases. Thus, we conclude that the circulation appears as a phase transition with symmetry breaking. The turning-over events of the circulation observed in Fig. 3 are finite-size effects.
In Fig. 6, $`M_{\mathrm{peak}}`$ around the critical point is shown. The horizontal axis is $`Nϵ(1-3/\sqrt{N})`$, where the second term $`O(1/\sqrt{N})`$ comes from finite-size effects. This figure clearly shows that $`M_{\mathrm{peak}}`$ is a function of $`Nϵ`$ for sufficiently large $`N`$. Further calculations are needed for the precise determination of the critical exponents.
In this letter, we simulate inelastic disks in a square box and investigate the steady states of the system using a new observation frame wherein the energy of the system is virtually conserved by a rescaling of time. The important parameter of the system is $`Nϵ`$, where $`N`$ is the number of particles and $`ϵ`$ is the inelasticity. At $`ϵ=0`$, the steady states are homogeneous equilibrium states. At sufficiently small $`Nϵ`$, the steady states are also homogeneous. For $`Nϵ`$ above the critical points, the homogeneous steady states are no longer stable. A phase transition with symmetry breaking then occurs, where the circulation appears continuously.
The results shown in this letter are independent of the details of the model. As noted before, the results are independent of the dispersity of the particle radii. The hard-core potential is not essential to the results. We have confirmed that similar results are obtained in the case of a soft-core potential: a simulation of the models using the discrete element method with a Nose-Hoover thermostat, without the rescaling of time. We have also observed a similar circulating mean flow in a system of inelastic hard spheres. Thus similar results would also be obtained in three dimensions.
A number of problems remain to be investigated. For example, for larger $`ϵ`$ up to $`ϵ=1`$, are there one or more additional phases? Is a hydrodynamic description by $`N\to \infty `$ with $`Nϵ`$ fixed really possible for any boundary conditions? Do fluid and solid phases exist for $`ϵ\ne 0`$? How are the phenomena in the system investigated here related to those in systems with a real energy source, such as vibrating beds?
It is noteworthy that the homogeneous states in the system with finite $`ϵ`$ are always unstable for sufficiently large $`N`$, because the phase transition occurs at finite $`Nϵ`$. Thus the conservative system is a singular-limit system when we take the limit $`N\to \infty `$ first.
The author thanks S. Sasa and N. Nakagawa for their useful input and careful reading of the manuscript, N. Ito and Y.-h. Taguchi for their helpful advice, and Dept. of Math. Sci. at Ibaraki Univ. for their hospitality. The author also acknowledges the support from JSPS.
# Conditions for the Formation of Axisymmetric Biconcave Vesicles
## Introduction
The extraordinary biconcave shape of a red blood cell has attracted much interest for many years. In the last two decades, it became generally accepted that the shape of biological membranes such as blood cells is closely related to the formation of lipid bilayer vesicles in aqueous medium. Based on the elasticity of lipid bilayers proposed by Helfrich \[H1\], the shape $`\mathrm{\Sigma }`$, regarded as an embedded surface in $`\mathbf{R}^3`$, is determined by the minimum of the bending energy involving the volume $`\text{V}(\mathrm{\Sigma })`$ enclosed by $`\mathrm{\Sigma }`$, the area $`\text{A}(\mathrm{\Sigma })`$, the mean curvature $`H`$, and the Gaussian curvature $`K`$ of $`\mathrm{\Sigma }`$. More precisely, Helfrich suggested studying the bending energy
$$\frac{1}{2}k_c\int _\mathrm{\Sigma }(2H+c_0)^2\mathrm{d}A+\frac{1}{2}\overline{k_c}\int _\mathrm{\Sigma }K\mathrm{d}A+\lambda \text{A}(\mathrm{\Sigma })+p\text{V}(\mathrm{\Sigma }),$$
where $`k_c`$, $`\overline{k}_c`$, $`c_0`$, $`\lambda `$, and $`p`$ are constants interpreted as follows: $`k_c`$ is the bending rigidity, $`\overline{k}_c`$ the Gaussian curvature modulus, $`c_0`$ the spontaneous curvature, $`\lambda `$ the tensile stress, and $`p=p_o-p_i`$ the osmotic pressure difference between the outer ($`p_o`$) and inner ($`p_i`$) media. Here, we have taken a geometric sign convention so that Helfrich’s original bending energy should be written in the above form.
According to the Gauss-Bonnet Theorem, the second integral in the bending energy is a topological constant. Therefore, within a certain topological class of $`\mathrm{\Sigma }`$, it is sufficient to study the functional,
$$\mathcal{F}(\mathrm{\Sigma })=\int _\mathrm{\Sigma }(2H+c_0)^2\mathrm{d}A+\stackrel{~}{\lambda }\text{A}(\mathrm{\Sigma })+\stackrel{~}{p}\text{V}(\mathrm{\Sigma })$$
where $`c_0`$, $`\stackrel{~}{\lambda }=2\lambda /k_c`$, and $`\stackrel{~}{p}=2p/k_c`$ are the constant parameters of the functional. This functional will be referred to as the Helfrich functional in this article.
Many interesting surfaces such as minimal surfaces, constant mean curvature surfaces, and Willmore surfaces can be regarded as critical points of the Helfrich functional for suitable combinations of the parameters $`c_0`$, $`\stackrel{~}{\lambda }`$, and $`\stackrel{~}{p}`$. New axisymmetric explicit solutions have also been found recently \[NOO2\].
When $`c_0=\stackrel{~}{\lambda }=\stackrel{~}{p}=0`$, the functional $`\mathcal{F}(\mathrm{\Sigma })`$ is referred to as the Willmore functional in differential geometry, which has been widely studied in recent decades. Moreover, there are important analyses and open problems concerning the Willmore functional \[W, ch. 7\]. As observed by physicists \[DH\], the Willmore functional is not a good model for the shape of red blood cells. This fact can also be seen from the result of geometers that the unique minimum of the Willmore functional for topologically spherical vesicles (embedded surfaces of genus zero in terms of topologists) is the round sphere. Therefore, not all combinations of the parameters give stationary vesicles of shape similar to the red blood cells observed experimentally.
On the other hand, a typical way to investigate the minimizing surface is by finding solutions to the variational equation of the functional $`\mathcal{F}`$. A large class of axisymmetric stationary vesicles of spherical and toroidal topology has been calculated \[DH, L, S, MB, OY\]. In these works, among other interesting shapes of lipid bilayer vesicles, the shape of the red blood cell can be simulated numerically or given by a special solution of a specific variational equation with suitable combinations of the parameters. It is natural to ask for the conditions on the parameters such that the Helfrich functional possesses a stationary vesicle of biconcave shape.
### Main Results
In this article, we are going to give clear conditions on how biconcave axisymmetric surfaces are formed. More precisely, we find a sufficient condition for the Helfrich shape equation of axisymmetric vesicles to have solutions of biconcave shape. Besides, we show that when the equation has a solution with reflection symmetry or of biconcave shape, certain geometric quantities of the vesicle must obey conditions governed by the parameters. These conditions on the parameters are very mild. In particular, the case $`c_0>0`$, $`\stackrel{~}{\lambda }>0`$, and $`\stackrel{~}{p}>0`$ is sufficient to ensure the existence of a biconcave solution. We also comment briefly on how the combination of the parameters $`c_0`$, $`\stackrel{~}{\lambda }`$, and $`\stackrel{~}{p}`$ affects the existence of biconcave solutions.
The sufficient condition for the formation of a biconcave vesicle may be formulated in terms of a cubic polynomial,
(1)
$$Q(t)=t^3+2c_0t^2+(c_0^2+\stackrel{~}{\lambda })t-\frac{\stackrel{~}{p}}{2}.$$
We prove that if all roots of $`Q(t)`$ are positive, then there is always an axisymmetric biconcave vesicle which is a stationary surface for the Helfrich functional. This result is summarized by the table at the end of this section, along with some typical pictures of $`Q(t)`$ and the corresponding graphs of the solution and its derivative.
It is easy to verify that $`Q(t)\le -\frac{\stackrel{~}{p}}{2}`$ for all $`t\le 0`$ if $`c_0>0`$, $`\stackrel{~}{\lambda }>0`$, and $`\stackrel{~}{p}>0`$, and hence all roots are positive. Mathematically, that all roots of $`Q(t)`$ are positive can be written as
$$\mathrm{max}\left\{Q(t):-\infty <t\le 0\right\}<0.$$
Explicit formula for this in terms of the parameters can also be found. However, the above is more concise and precise enough.
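Numerically, the condition is a one-line check on the real roots of $`Q`$; a sketch (the parameter values are those used in the examples later in this paper):

```python
import numpy as np

def all_roots_positive(c0, lam, p):
    """Sufficient condition: every real root of
    Q(t) = t^3 + 2*c0*t^2 + (c0^2 + lam)*t - p/2 is positive."""
    roots = np.roots([1.0, 2.0 * c0, c0 ** 2 + lam, -0.5 * p])
    real = roots[np.abs(roots.imag) < 1e-12].real
    return bool(np.all(real > 0.0))

print(all_roots_positive(1.0, 0.25, 1.0))   # True; unique positive root ~0.268828
```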
For the necessary condition, we observe that if $`c_0>0`$ and there is an axisymmetric stationary vesicle of biconcave shape, then its curvature at the center must be smaller than the first positive root of $`Q(t)`$. In fact, we have
$$2c_0(w_0^{\prime })^2+(c_0^2+\stackrel{~}{\lambda })w_0^{\prime }-\frac{\stackrel{~}{p}}{2}<0,$$
where $`w_0^{\prime }`$ is the meridional curvature (or $`(w_0^{\prime })^2`$ is the Gaussian curvature) at the center.
Furthermore, if the axisymmetric vesicle has a reflection symmetry with respect to a plane perpendicular to the rotation axis, then we obtain a relation between the Gaussian curvature at the “equator” (the intersection circle of the axisymmetric vesicle with the plane of reflection) and the radius $`r_{\infty }`$ of the “equator” in terms of the polynomial $`Q(t)`$ given by (1) as follows.
$$K(r_{\infty })^2=\frac{1}{r_{\infty }}Q\left(\frac{1}{r_{\infty }}\right).$$
This paper is organized as follows. In §1, we rewrite the Helfrich shape equation of axisymmetric vesicles into two useful forms that are more convenient for our later discussion. In §2, the change of the principal curvature in the meridional direction is analyzed and a necessary condition for the existence of a biconcave surface is given. In §3, we present a sufficient condition for the existence of a biconcave surface. In §4, we give a brief discussion of other situations such as axisymmetric vesicles which do not have reflection symmetry or which are not biconcave.
We would like to thank K. S. Chou for his discussions and valuable suggestions on the writing of this manuscript.
## 1. Equation for Axisymmetric Vesicles
In this article, we study the axisymmetric solution surfaces $`\mathrm{\Sigma }`$ of the Helfrich variational problem which have a reflection symmetry with respect to the plane perpendicular to the rotational axis. If the rotational axis is labelled the $`z`$-axis and the plane of reflection the $`xy`$-plane, then the surface $`\mathrm{\Sigma }`$ can be obtained by revolving a radial curve on the upper half plane about the $`z`$-axis and reflecting it to the lower half.
A cross-section of a biconcave axisymmetric surface
(with $`c_0=1`$, $`\stackrel{~}{\lambda }=0.25`$, $`\stackrel{~}{p}=1`$)
In other words, one may parametrize the upper part of $`\mathrm{\Sigma }`$ by $`𝐗=(r\mathrm{cos}\theta ,r\mathrm{sin}\theta ,z(r))`$, with a function $`z(r)>0`$ defined for $`r`$ in some interval $`[0,r_{\infty }]`$, where $`r_{\infty }`$ is the radius of the “equator” that is determined by the surface. The natural boundary conditions are $`z(0)>0`$ and $`z(r_{\infty })=0`$. The rotation and reflection symmetries of the surface then impose the conditions that $`z^{\prime }(0)=0`$ and $`z^{\prime }(r)\to -\infty `$ as $`r\to r_{\infty }`$. Finally, to model a biconcave surface, we also need to assume that $`z^{\prime \prime }(0)>0`$ in order to have a solution that is concave at the center. Moreover, the biconcave shape of the surface also confines the graph of $`z`$ to have a unique point of inflection. This is equivalent to requiring that $`z^{\prime }`$ has only a unique maximum and no other critical point. A cross-section of the upper part of $`\mathrm{\Sigma }`$ is shown in the picture.
### Equation and Conditions
Usually, the Helfrich shape equation of axisymmetric vesicles is written in terms of the angle $`\psi `$ between the surface tangent and the plane perpendicular to the rotational axis \[DH, JS, ZL\]. However, it is convenient for our discussion to rewrite it as an equation on the derivative of the graph, $`w(r)=z^{\prime }(r)=\mathrm{tan}\psi `$. The Helfrich shape equation of axisymmetric vesicles then becomes an equation for $`w`$, which is
(2)
$$\frac{2r}{(1+w^2)^{5/2}}w^{\prime \prime }=\frac{5rw}{(1+w^2)^{7/2}}(w^{\prime })^2-\frac{2w^{\prime }}{(1+w^2)^{5/2}}+\frac{2w+w^3}{r(1+w^2)^{3/2}}+\frac{2c_0w^2}{1+w^2}+\frac{(c_0^2+\stackrel{~}{\lambda })rw}{(1+w^2)^{1/2}}-\frac{\stackrel{~}{p}r^2}{2}.$$
We are going to study the solution $`w(r)`$ of this equation with the initial values $`w(0)=0`$ and $`w^{\prime }(0)=w_0^{\prime }>0`$. Our goal is to find a solution $`w`$ that also satisfies the end-point condition that $`w(r)\to -\infty `$ as $`r\to r_{\infty }`$ and the integral condition $`-\infty <\int _0^{r_{\infty }}w\mathrm{d}r<0`$. The integral condition is equivalent to $`z(r_{\infty })=0<z(0)<\infty `$, which ensures that $`z(r)`$ together with its reflection produces a closed axisymmetric surface without self-intersection. Furthermore, $`w`$ is required to have a unique local maximum and no other critical points. A typical picture of the graph of such a $`w`$ is shown.
A typical expected graph of $`w=z^{\prime }`$
There are two useful ways of grouping the lower-order terms in the equation. A careful study of these lower-order terms reveals the qualitative behaviors of the solution $`w`$. One of the groupings involves the polynomial $`Q(t)`$ that we have mentioned in the introduction.
Let $`Q`$ be the cubic polynomial $`Q(t)=t^3+2c_0t^2+(c_0^2+\stackrel{~}{\lambda })t-\frac{\stackrel{~}{p}}{2}`$ and let $`R(t)`$ be the quadratic part of $`Q(t)`$, that is, $`R(t)=Q(t)-t^3`$. With this notation, after multiplying by $`rw^{\prime }`$, equation (2) can be written as follows,
(2a)
$$\left[\frac{r^2(w^{\prime })^2}{(1+w^2)^{5/2}}\right]^{\prime }=\left[\frac{w^2}{(1+w^2)^{1/2}}\right]^{\prime }+r^3w^{\prime }R(\kappa (r));\text{or}$$
(2b)
$$\left[\frac{r^2(w^{\prime })^2}{(1+w^2)^{5/2}}\right]^{\prime }=-\left[\frac{2}{\sqrt{1+w^2}}\right]^{\prime }+r^3w^{\prime }Q(\kappa (r)),$$
where $`\kappa (r)=\frac{w}{r\sqrt{1+w^2}}`$ is the principal curvature of $`\mathrm{\Sigma }`$ in the meridional direction. In other words, the lower-order terms capture the changes of the meridional curvature in the differential equation. Note that if $`c_0=\stackrel{~}{\lambda }=\stackrel{~}{p}=0`$, the solution to equation (2a) corresponds to the situation in which $`z=z(r)`$ is a circular arc with radius $`1/w_0^{\prime }`$. This is exactly the case when the meridional curvature is a constant.
The initial choice $`w_0^{\prime }`$ is actually the meridional curvature at the center of $`\mathrm{\Sigma }`$, i.e.,
$$\kappa (r)=\frac{w}{r\sqrt{1+w^2}}\to w_0^{\prime },\text{as }r\to 0\text{.}$$
Moreover, for both $`Q`$ and $`R`$, $`Q(0)=R(0)=-\stackrel{~}{p}/2<0`$. Therefore, if $`w_0^{\prime }`$ is less than the smallest positive root of $`Q(t)`$, then both $`Q(w_0^{\prime })<0`$ and $`R(w_0^{\prime })<0`$. Our analysis (in §3) shows that these negativity conditions are essential for the meridional curvature $`\kappa (r)`$ to change sign. We will also demonstrate that, for any set of parameters satisfying our condition, a solution $`w(r)`$ with $`w_0^{\prime }`$ small enough always satisfies all of our requirements. In fact, if exact values of the parameters are given, one can always calculate a suitable value of $`w_0^{\prime }`$ numerically.
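As a sketch of such a computation, equation (2) can be integrated from a small starting radius (which regularizes the $`1/r`$ terms) while monitoring the blow-down of $`w`$ and the area integral; the cutoff, tolerances, and starting radius below are our own choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

c0, lam, p = 1.0, 0.25, 1.0           # parameters of the examples in this paper

def rhs(r, y):
    """Equation (2) written as a first-order system for (w, w')."""
    w, wp = y
    u = 1.0 + w * w
    f = (5.0 * r * w * wp ** 2 / u ** 3.5
         - 2.0 * wp / u ** 2.5
         + (2.0 * w + w ** 3) / (r * u ** 1.5)
         + 2.0 * c0 * w ** 2 / u
         + (c0 ** 2 + lam) * r * w / u ** 0.5
         - 0.5 * p * r ** 2)
    return [wp, f * u ** 2.5 / (2.0 * r)]

def blow_down(r, y):                  # stop once w is very negative
    return y[0] + 50.0
blow_down.terminal = True

w0p, eps = 0.15, 1e-6                 # initial meridional curvature; start radius
sol = solve_ivp(rhs, (eps, 50.0), [w0p * eps, w0p],
                events=blow_down, max_step=1e-2, rtol=1e-9)
print(sol.t[-1], np.trapz(sol.y[0], sol.t))   # approximate r_inf and area
```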
## 2. Necessary conditions for the formation of biconcave solution
It turns out that the principal curvature of the vesicle $`\mathrm{\Sigma }`$ in the meridional direction, denoted by $`\kappa (r)=\frac{w}{r\sqrt{1+w^2}}`$, is the most basic quantity in our analysis. Equation (2b) implies that if $`c_0>0`$ and the initial curvature $`w_0^{\prime }`$ at the center is large, in the sense that $`R(w_0^{\prime })>0`$, then $`\kappa (r)`$ increases and a biconcave shape cannot be formed. So a necessary condition for the formation of an axisymmetric solution of biconcave shape in the case $`c_0>0`$ is
$$R(w_0^{\prime })=2c_0(w_0^{\prime })^2+(c_0^2+\stackrel{~}{\lambda })w_0^{\prime }-\frac{\stackrel{~}{p}}{2}<0.$$
The following pictures show the behavior of $`\kappa (r)`$ for two different values of $`w_0^{\prime }`$ with $`c_0=1`$, $`\stackrel{~}{\lambda }=0.25`$, and $`\stackrel{~}{p}=1`$. For these parameter values, $`R(0.277124)=0`$. The picture on the left-hand side is taken with $`w_0^{\prime }=0.278`$, so the necessary condition is not satisfied. In this case, $`\kappa `$ blows up at a finite distance from the rotational axis, so the corresponding solution will not lead to a closed surface of the desired biconcave shape. The picture on the right-hand side is taken with $`w_0^{\prime }=0.276`$. In this case, the necessary condition is satisfied, but $`\kappa `$ only decays gradually and the solution $`w(r)`$ still does not give a biconcave surface. Numerically, we see that $`w(r)\to -\infty `$ as $`r\to r_{\infty }`$ but the total area is positive, i.e., the surface together with its reflection does not close up to form a closed surface in $`\mathbf{R}^3`$ without self-intersection. However, we can only show that $`R(w_0^{\prime })<0`$ implies the decrease of $`\kappa (r)`$, without knowing whether it implies a sign change of $`\kappa (r)`$. So it is not clear whether $`R(w_0^{\prime })<0`$ is sufficient for the formation of a closed surface of biconcave shape, a situation in which $`\kappa (r)`$ has to change from positive to negative.
Graph of $`z(r)`$ for a convex surface
where $`R(w_0^{})>0`$
Below: the corresponding graph of $`w=z^{}(r)`$
Graph of $`z(r)`$ leading to a surface
with self-intersection where $`R(w_0^{})<0`$
Below: the corresponding graph of $`w=z^{}(r)`$
In order to obtain an axisymmetric solution with reflection symmetry via a solution of an ordinary differential equation, one also needs to verify that the first variation is zero with respect to any variation near the plane of reflection. It turns out that this is always the case for the solutions we study in the next section. Moreover, the analysis shows that the radius $`r_{\infty }`$ of the “equator” and the Gaussian curvature $`K(r_{\infty })`$ of the vesicle at any point on the “equator” must satisfy the following relation,
$$K(r_{\infty })^2=\frac{1}{r_{\infty }}Q\left(\frac{1}{r_{\infty }}\right).$$
This relation is easily verified for the case $`c_0=\stackrel{~}{\lambda }=\stackrel{~}{p}=0`$. In this case, we have $`Q(t)=t^3`$ and $`K(r_{\infty })^2=\frac{1}{r_{\infty }^4}`$. This is compatible with the fact that the solution surface is a round sphere.
## 3. Sufficient condition for the formation of biconcave solution
As seen in the preceding section, although $`R(w_0^{\prime })<0`$ is a necessary condition in the case $`c_0>0`$, it may not be sufficient. Hence, we impose the stronger condition that all roots of $`Q(t)`$ are positive and require $`w_0^{\prime }`$ to be small enough. Unfortunately, we do not have a general formula for the threshold value of $`w_0^{\prime }`$.
The smallness assumption on $`w_0^{\prime }`$ implies that not only $`R(w_0^{\prime })<0`$ but also $`Q(w_0^{\prime })<0`$. In this case, the meridional curvature changes sign. In terms of $`w(r)`$, though it is initially positive, it becomes negative when $`r`$ reaches a certain value $`r_0`$. Note that for different initial values $`w_0^{\prime }`$, the value of $`r_0`$ is different. Although the precise relationship is unknown, we know that $`r_0`$ is comparable to $`(w_0^{\prime })^{1/2}`$ when $`w_0^{\prime }`$ is small. More precisely, we have the following limiting inequalities
$$\frac{16}{3\stackrel{~}{p}}\le \underset{w_0^{\prime }\to 0}{lim}\frac{r_0^2}{w_0^{\prime }}\le \frac{16}{\stackrel{~}{p}}.$$
In fact, we discover that, for any point of inflection $`r_c`$ of the graph of $`z(r)`$ with $`\kappa (r_c)>0`$,
$$\frac{16}{3\stackrel{~}{p}}\le \underset{w_0^{\prime }\to 0}{lim}\frac{r_c^2}{w_0^{\prime }}\le \underset{w_0^{\prime }\to 0}{lim}\frac{r_0^2}{w_0^{\prime }}\le \frac{16}{\stackrel{~}{p}}.$$
As a consequence of these inequalities, the solution has a unique point of inflection of the graph of $`z(r)`$. Moreover, in the case that all roots of $`Q(t)`$ are positive (see §2), the function $`w(r)`$ blows down monotonically to $`-\infty `$ at a finite distance from the rotational axis after it becomes negative.
The following are pictures of $`z(r)`$ and $`w(r)`$ for $`c_0=1`$, $`\stackrel{~}{\lambda }=0.25`$, $`\stackrel{~}{p}=1`$, and $`w_0^{\prime }=0.26`$. This set of parameters satisfies our condition. In fact, $`Q(t)`$ has a unique positive root at $`0.268828`$. So it is clear that $`Q(w_0^{\prime })<0`$, and the function $`w(r)`$ goes to $`-\infty `$ at $`r_{\infty }\approx 5.39215`$. However, $`w_0^{\prime }=0.26`$ is not small enough to give a solution $`w(r)`$ with total negative area, and hence it does not give a vesicle of the required biconcave shape.
A solution $`z`$ which does not give an embedded surface although $`w=z^{\prime }`$ blows down to $`-\infty `$
However, if we choose $`w_0^{\prime }=0.2063860`$ and $`w_0^{\prime }=0.15`$ for the same set of parameters $`c_0=1`$, $`\stackrel{~}{\lambda }=0.25`$, and $`\stackrel{~}{p}=1`$, then in each case the total area under the graph of $`w(r)`$ is nonpositive. Hence, each of the graphs of $`z(r)`$ can be reflected and rotated to form a vesicle of biconcave shape. The following pictures are the functions $`z(r)`$ and $`w(r)`$ for these values of $`w_0^{\prime }`$, respectively.
Graphs of $`z(r)`$ giving rise to biconcave surfaces and their $`z^{}`$ below
The mathematical proof of this result involves a more delicate study of equation (2b) together with sharp estimates of $`r_0`$, $`r_{\infty }`$, the rate of change of $`w(r)`$ at $`r_0`$, and the integral of $`w(r)`$. Please refer to the mathematical paper by the authors for the details of the analysis.
## 4. Discussion
Finally, we would like to remark on other solutions which do not correspond to biconcave surfaces. First, when $`c_0^2+\stackrel{~}{\lambda }<0`$, one may still obtain a solution corresponding to a nonconvex surface even when $`w_0^{\prime }`$ is not very small.
A solution which gives a nonconvex surface
The above picture is a solution for $`c_0=1`$, $`\stackrel{~}{\lambda }=-3`$, $`\stackrel{~}{p}=2`$ with $`w_0^{\prime }=2`$. Note that $`w_0^{\prime }`$ is greater than the positive roots of $`R(t)`$ and $`Q(t)`$. Moreover, $`Q(t)`$ has negative roots, and thus this is beyond the situation that we discussed in the previous sections. In this case, after $`w=z^{\prime }`$ becomes negative, it may not decrease all the way to negative infinity, which is why a second “petal” may form.
The other type is a rotational solution which is not symmetric under reflection in the $`xy`$-plane \[DH, HDH\]. The following is a solution with $`c_0=1`$, $`\stackrel{~}{\lambda }=1`$ and $`\stackrel{~}{p}=1`$.
A solution which has no reflection symmetry
It can be observed that the upper part of this solution satisfies our equation (2) but not the area condition. Although the reflected graph is also a solution locally, it does not give rise to a surface without self-intersection. For this combination of parameters, there is a bifurcation of the solution at $`r_{\infty }`$ to close up by an asymmetric surface. From our numerical study, it should be remarked that not all solutions in the upper quadrant may be closed up by an asymmetric counterpart. For example, if the upper part satisfies the area condition, the only possible lower part is the reflection-symmetric one. This suggests a possible uniqueness of the biconcave solution. Furthermore, we surmise that for $`c_0>0`$, $`\stackrel{~}{\lambda }>0`$ and $`\stackrel{~}{p}>0`$, solutions of the asymmetric type never occur. The local minimum of $`Q(t)`$ for $`t<0`$ may be the underlying obstruction to such an occurrence. However, at the moment, there is not yet a mathematical proof.
## 1 Introduction and main results
The main result of this paper is the following theorem.
###### Theorem 1
There is a real-analytic Riemannian manifold $`M_A`$ diffeomorphic to the quotient of $`T^2\times 𝐑^1`$ with respect to the free $`𝐙`$-action generated by the map
$$(X,z)\mapsto (AX,z+1),$$
where $`X=(x,y)\in T^2=𝐑^2/𝐙^2`$, $`z\in 𝐑`$, and $`A`$ is the Anosov automorphism of the $`2`$-torus $`T^2`$ defined by the matrix
$$A=\left(\begin{array}{cc}2& 1\\ 1& 1\end{array}\right),$$
$`(1)`$
such that
i) the geodesic flow on $`M_A`$ is (Liouville) integrable by $`C^{\infty }`$ first integrals;
ii) the geodesic flow on $`M_A`$ is not (Liouville) integrable by real-analytic first integrals;
iii) the topological entropy of the geodesic flow $`F_t`$ is positive;
iv) the fundamental group $`\pi _1(M_A)`$ of the manifold $`M_A`$ has an exponential growth;
v) the unit covector bundle $`SM_A`$ contains a submanifold $`N`$ such that $`N`$ is diffeomorphic to the $`2`$-torus $`T^2`$ and the restriction of $`F_1`$ onto $`N`$ is the Anosov automorphism given by matrix (1).
To explain the statement in detail we recall the main definitions and results on topological obstructions to integrability of geodesic flows.
Let $`g_{jk}`$ be a Riemannian metric on an $`n`$-dimensional manifold $`M^n`$. It defines the geodesic flow on the tangent bundle $`TM^n`$ which is a Lagrangian system with the Lagrange function
$$L(x,\dot{x})=\frac{1}{2}g_{jk}\dot{x}^j\dot{x}^k.$$
The Legendre transform $`TM^n\to T^{}M^n`$
$$\dot{x}\in T_xM^n\mapsto p\in T_x^{}M^n:p_j=g_{jk}\dot{x}^k$$
maps this Lagrangian system into a Hamiltonian system on $`T^{}M^n`$ with a symplectic form
$$\omega =\sum _{j=1}^ndx^j\wedge dp_j$$
and the Hamilton function
$$H(x,p)=\frac{1}{2}g^{jk}(x)p_jp_k.$$
This Hamiltonian system is also called the geodesic flow of the metric.
The symplectic form defines the Poisson brackets on the space of smooth functions on $`T^{}M^n`$ by the formula
$$\{f,g\}=h^{jk}\frac{\partial f}{\partial y^j}\frac{\partial g}{\partial y^k},$$
where $`\omega =h_{jk}dy^j\wedge dy^k`$ locally.
It is said that a Hamiltonian system on a $`2n`$-dimensional symplectic manifold is (Liouville) integrable if there are $`n`$ first integrals $`I_1,\dots ,I_n`$ of this system such that
1) these integrals are in involution: $`\{I_j,I_k\}=0`$ for any $`j,k`$, $`1\le j,k\le n`$;
2) these integrals are functionally independent almost everywhere, i.e., on a dense open subset.
Since restrictions of the geodesic flow onto different non-zero level surfaces of its Hamilton function $`H=I_n`$ are smoothly trajectory equivalent, we may replace this notion of integrability by the weaker condition that there are $`n-1`$ additional first integrals $`I_1,\dots ,I_{n-1}`$ which are in involution and functionally independent almost everywhere on the unit covector bundle $`SM^n=\{H(x,p)=1\}\subset T^{}M^n`$ .
If $`M^n`$ is real-analytic together with the metric and all first integrals $`I_1,\dots ,I_n`$, then it is said that the geodesic flow is analytically integrable.
Kozlov established the first topological obstruction to analytic integrability of geodesic flows, proving that the geodesic flow of a real-analytic metric on a two-dimensional closed oriented manifold $`M^2`$ of genus $`g>1`$ does not admit an additional analytic first integral (see also for the general setup of the nonintegrability problem).
For higher-dimensional manifolds, obstructions to integrability were found in , where it was proved that analytic integrability of the geodesic flow on a manifold $`M^n`$ implies that
1) the fundamental group $`\pi _1(M^n)`$ of $`M^n`$ is almost commutative, i.e., contains a commutative subgroup of finite index;
2) if the first Betti number $`b_1(M^n)`$ equals $`k`$, then the real cohomology ring $`H^{}(M^n;𝐑)`$ contains a subring isomorphic to the real cohomology ring of a $`k`$-dimensional torus and, in particular, $`b_1(M^n)\le \mathrm{dim}M^n=n`$.
In these results the analyticity condition may be replaced by a stronger geometric condition called geometric simplicity, which reflects some tameness properties of the singular set where the first integrals are functionally dependent. For instance, one may only assume that $`SM^n`$ is a disjoint union of a closed invariant set $`\mathrm{\Gamma }`$ which is nowhere dense and of finitely many open toroidal domains foliated by invariant tori.
Later Paternain proposed another approach to finding topological obstructions to integrability, based on the vanishing of the topological entropy of the geodesic flow on $`SM^n`$. If this quantity vanishes, then $`\pi _1(M^n)`$ has subexponential growth and, if in addition $`M^n`$ is a $`C^{\infty }`$ simply-connected manifold, then $`M^n`$ is rationally elliptic (this follows from results of Gromov and Yomdin). Integrability implies vanishing of the topological entropy under some additional conditions which were established in and restrict not only the singular set but also the behaviour of the flow on this set.
Recently Butler has found new examples of $`C^{\infty }`$ integrable geodesic flows of homogeneous metrics on nilmanifolds. The simplest of them is a $`3`$-manifold $`M_B`$ obtained from a product $`T^2\times [0,1]`$ by identifying the components of the boundary by a homeomorphism
$$(X,0)\sim (BX,1),$$
where
$$B=\left(\begin{array}{cc}1& 1\\ 0& 1\end{array}\right)$$
and $`X\in T^2`$. The fundamental group of the resulting manifold $`M_B`$ is not almost commutative, $`b_1(M_B)=2`$, and $`H^{}(M_B;𝐑)`$ does not contain a subring isomorphic to $`H^{}(T^2;𝐑)`$. However, $`b_1(M_B)<\mathrm{dim}M_B`$. This shows that some of the results of are not generalized to the $`C^{\infty }`$ case. Note that the topological entropy vanishes for Butler’s examples.
The present paper is based on the observation that Butler’s construction can be generalized to construct $`C^{\infty }`$ integrable geodesic flows on all $`T^n`$-bundles over $`S^1`$. In the case when the gluing automorphism $`C:T^n\to T^n`$ is hyperbolic we obtain remarkable Hamiltonian systems on the cotangent bundle of $`M_C`$: they are $`C^{\infty }`$ integrable but have positive topological entropy. This, in particular, shows that treating positivity of the topological entropy as a criterion for chaos, as is sometimes done, is not correct.
We confine ourselves to one example of such a flow, which we study in detail.
## 2 The metric on $`M_A`$ and its geodesic flow
Let
$$A:𝐙^2\to 𝐙^2$$
be an automorphism determined by matrix (1). It determines the following action on $`T^2`$:
$$(x,y)\text{ mod }𝐙^2\mapsto (2x+y,x+y)\text{ mod }𝐙^2.$$
We construct $`M_A`$ as follows. Take a product $`T^2\times [0,1]`$ and identify the components of its boundary using the automorphism $`A`$:
$$(X,0)\sim (AX,1),$$
where $`X=(x,y)\in T^2`$. We denote the resulting manifold by $`M_A`$. Near every point $`p\in M_A`$ we have local coordinates $`x`$, $`y`$, and $`z`$, where $`z`$ is a linear coordinate on $`S^1=𝐑/𝐙`$.
Take the following metric on $`M_A`$:
$$ds^2=dz^2+g_{11}(z)dx^2+2g_{12}(z)dxdy+g_{22}(z)dy^2$$
where
$$G(z)=\left(\begin{array}{cc}g_{11}& g_{12}\\ g_{21}(=g_{12})& g_{22}\end{array}\right)=\mathrm{exp}(-zG_0^{\top })\mathrm{exp}(-zG_0)$$
$`(2)`$
and $`\mathrm{exp}G_0=A`$. We set
$$g_{33}=1,g_{13}=g_{23}=0.$$
Indeed, this formula defines a metric on an infinite cylinder $`𝒞=T^2\times 𝐑`$ which is invariant with respect to the $`𝐙`$-action generated by
$$(x,y,z)\mapsto (2x+y,x+y,z+1),$$
$`(3)`$
and, therefore, it descends to a metric on the quotient space $`M_A=𝒞/𝐙`$.
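The invariance can be checked numerically in a few lines; the sketch below takes $`G_0=\mathrm{log}A`$ via scipy and verifies the relation $`A^{\top }G(z+1)A=G(z)`$ required for the metric to descend to the quotient:

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[2.0, 1.0], [1.0, 1.0]])
G0 = np.real(logm(A))            # exp(G0) = A; real since A is positive definite

def G(z):
    """Metric coefficients (2): G(z) = exp(-z G0^T) exp(-z G0)."""
    return expm(-z * G0.T) @ expm(-z * G0)

# invariance under (X, z) -> (AX, z + 1): A^T G(z + 1) A = G(z)
z = 0.37
print(np.allclose(A.T @ G(z + 1.0) @ A, G(z)))   # True
```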
###### Proposition 1
The geodesic flow of metric (2) on the infinite cylinder $`𝒞`$ admits three first integrals which are functionally independent almost everywhere.
Proof of Proposition. The Hamiltonian function
$$F_3=H=\frac{1}{2}\left(p_z^2+g^{11}(z)p_x^2+2g^{12}(z)p_xp_y+g^{22}(z)p_y^2\right)$$
of this flow is, by the construction, a first integral. Since $`H`$ does not depend on $`x`$ and $`y`$, the quasimomenta $`F_1=p_x`$ and $`F_2=p_y`$ are also first integrals. It is clear that the set of first integrals $`F_1`$, $`F_2`$, and $`F_3`$ is functionally independent almost everywhere. This proves the proposition.
Since action (3) preserves the symplectic form $`\omega `$, it induces the following action on $`T^{}𝒞`$:
$$\left(\begin{array}{c}p_x\\ p_y\end{array}\right)\mapsto \left(\begin{array}{cc}1& -1\\ -1& 2\end{array}\right)\left(\begin{array}{c}p_x\\ p_y\end{array}\right),p_z\mapsto p_z.$$
This descends to a linear action on $`T^{}M_A`$ which preserves fibers and takes the form
$$\left(p_x-\frac{1+\sqrt{5}}{2}p_y\right)\mapsto \lambda \left(p_x-\frac{1+\sqrt{5}}{2}p_y\right),$$
$$\left(p_x-\frac{1-\sqrt{5}}{2}p_y\right)\mapsto \lambda ^{-1}\left(p_x-\frac{1-\sqrt{5}}{2}p_y\right),$$
$`(4)`$
$$p_z\mapsto p_z,\lambda =\frac{3+\sqrt{5}}{2}.$$
It is evident that the indefinite quadratic form
$$I_1=\left(p_x-\frac{1+\sqrt{5}}{2}p_y\right)\left(p_x-\frac{1-\sqrt{5}}{2}p_y\right)=p_x^2-p_xp_y-p_y^2$$
and the positive definite quadratic form
$$I_3=H=\frac{1}{2}\left(p_z^2+g^{11}(z)p_x^2+2g^{12}(z)p_xp_y+g^{22}(z)p_y^2\right)$$
are invariants of this action. To construct the third invariant we notice that
$$\frac{\mathrm{log}\left|p_x-\frac{1+\sqrt{5}}{2}p_y\right|}{\mathrm{log}\lambda }$$
is not invariant, but the action adds $`1`$ to this quantity wherever it is defined. Therefore, the following function
$$I_2=f(I_1)\mathrm{sin}\left(2\pi \frac{\mathrm{log}\left|p_x-\frac{1+\sqrt{5}}{2}p_y\right|}{\mathrm{log}\lambda }\right),$$
where
$$f(u)=\mathrm{exp}\left(-\frac{1}{u^2}\right),$$
is everywhere defined and invariant with respect to action (4).
###### Proposition 2
The functions $`I_1,I_2`$, and $`I_3`$ are $`C^{\infty }`$ first integrals of the geodesic flow on $`M_A`$ which are functionally independent almost everywhere. Therefore, the geodesic flow on $`M_A`$ is (Liouville) integrable by $`C^{\infty }`$ functions.
Proof of Proposition. The functions $`I_1,I_2`$, and $`I_3`$ on $`T^{}𝒞`$ are invariants of action (4) and, therefore, descend to functions on $`T^{}M_A`$. We may consider $`I_1`$ and $`I_2`$ as replacing $`F_1`$ and $`F_2`$: they are pairwise involutive and independent of the spatial variables $`x,y,z`$. Moreover, they do not depend on $`p_z`$ and, since $`H`$ does not depend on $`x`$ and $`y`$, they are in involution with $`I_3=H`$, which, in particular, means that they are first integrals of the geodesic flow. It remains to notice that, by their construction, they are $`C^{\infty }`$. This finishes the proof of the Proposition.
###### Proposition 3
Let $`N`$ be a subset of the unit covector bundle $`SM_A`$ formed by the points with
$$z=0,p_x=p_y=0,p_z=1.$$
Then it is diffeomorphic to $`T^2`$ and the translation
$$F_t:T^{}M_A\to T^{}M_A$$
along the trajectories of the geodesic flow for $`t=1`$ maps $`N`$ into itself and this map is the Anosov automorphism given by matrix (1).
Proof of Proposition. The geodesic flow on $`M_A`$ is covered by the geodesic flow on $`𝒞`$ for which $`p_x`$ and $`p_y`$ are first integrals. Therefore, on $`𝒞`$ the translation of the preimage of $`N`$ under projection is as follows:
$$F_t(x,y,z,p_x,p_y,p_z)=F_t(x,y,0,0,0,1)=(x,y,t,0,0,1).$$
Recalling the construction of $`M_A`$ proves the proposition.
Note that Propositions 2 and 3 prove statements i) and v) of Theorem 1, respectively.
## 3 The fundamental group $`\pi _1(M_A)`$ and the topological entropy of the geodesic flow on $`M_A`$
The manifold $`M_A`$ is covered by $`𝐑^3`$, on which $`\pi _1(M_A)`$ acts. This group is generated by
$$a:(x,y,z)\mapsto (x+1,y,z),b:(x,y,z)\mapsto (x,y+1,z),$$
$$c:(x,y,z)\mapsto (2x+y,x+y,z+1).$$
The relations between these generators are
$$[a,b]=1,[c,a]=ab,[c,b]=a.$$
###### Proposition 4 (see, for instance, )
$`\pi _1(M_A)`$ has exponential growth.
This follows from the hyperbolicity of $`A`$ or may be proved directly: the words $`ca^{\epsilon _1}ca^{\epsilon _2}\mathrm{}ca^{\epsilon _k}`$ are pairwise different for $`\epsilon _j=0,1`$ and, therefore, $`\gamma (2k)\geq 2^k`$, where $`\gamma `$ is the growth function of $`\pi _1(M_A)`$ with respect to the generators $`a,b`$, and $`c`$.
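This counting argument is easy to mechanize. In the affine representation below, a group element is a pair $`(n,v)`$ acting by $`(x,y)\mapsto A^n(x,y)+v`$, $`z\mapsto z+n`$; the script (an illustrative sketch only) confirms that all $`2^k`$ such words are distinct:

```python
import itertools
import numpy as np

A = np.array([[2, 1], [1, 1]])    # the hyperbolic gluing matrix

def mul(g, h):
    # composition in the semidirect product Z^2 x| Z
    n1, v1 = g
    n2, v2 = h
    return (n1 + n2,
            tuple(np.linalg.matrix_power(A, n1) @ np.array(v2) + np.array(v1)))

a = (0, (1, 0))                   # generator a
c = (1, (0, 0))                   # generator c

k = 10
words = set()
for eps in itertools.product((0, 1), repeat=k):
    g = (0, (0, 0))
    for e in eps:
        g = mul(g, c)
        if e:
            g = mul(g, a)
    words.add(g)
print(len(words) == 2 ** k)       # True: gamma(2k) >= 2^k
```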
###### Corollary 1
The geodesic flow on $`M_A`$ is not (Liouville) integrable by real-analytic first integrals.
It follows from the results of (also presented in Section 1) that if this flow were analytically integrable, then $`\pi _1(M_A)`$ would be almost commutative and, therefore, would have polynomial growth. This contradiction establishes the corollary.
###### Corollary 2
The topological entropy of the geodesic flow on $`M_A`$ is positive.
Indeed, Dinaburg proved that if the fundamental group of a manifold $`M^n`$ has exponential growth, then the topological entropy of the geodesic flow of any Riemannian metric on $`M^n`$ is positive.
The latter corollary also follows from Proposition 3: it is known that the topological entropy equals the supremum of the measure entropies taken over all ergodic invariant Borel measures. Hence, we may take a singular measure concentrated on $`NSM_A`$ which has the form
$$d\mu =dxdy.$$
It is well known that the measure entropy of the Anosov automorphism $`A:NN`$ is positive (this follows, for instance, from nonvanishing of the Lyapunov exponents for any point of $`N`$).
Note that Proposition 4 and Corollaries 1 and 2 establish statements iv), ii), and iii) of Theorem 1, respectively.
The authors thank L. Butler for sending them his preprint and I. K. Babenko for helpful discussions.
The authors were partially supported by the Russian Foundation for Basic Research (grants 96-15-96868 and 98-01-00240 (A. V. B.), and 96-15-96877 and 98-01-00749 (I.A.T.)) and INTAS (grant 96-0070 (I.A.T.)).
# Measurement of the Proton Structure Function $`F_2`$ and of the Total Photon-Proton Cross Section $`\sigma _{\mathrm{tot}}^{\gamma ^{*}p}`$ at Very Low $`Q^2`$ and Very Low $`x`$
## 1 INTRODUCTION
Since the first measurement from ZEUS using 1995 data, the proton structure function $`F_2`$ at low $`Q^2`$ has continued to generate a lot of interest. Various groups, among them ZEUS , used the data to study the transition between photoproduction and deep inelastic scattering, others obtained improved parameterizations of $`F_2`$ .
At this workshop, a new measurement of $`F_2`$ is presented in the range $`0.045\mathrm{GeV}^2<Q^2<0.65\mathrm{GeV}^2`$ and $`6\cdot 10^{-7}<x<1\cdot 10^{-3}`$, where $`x`$ denotes the Bjorken scaling variable; $`Q^2`$ is the four-momentum transfer squared. One also defines $`y`$, the relative energy transfer to the proton in its rest frame, and $`W`$, the photon-proton center-of-mass energy. The data were taken with special detector components and triggers during six weeks in 1997, yielding an integrated luminosity of $`3.9\mathrm{pb}^{-1}`$. The new analysis covers a larger kinematic region and has a higher precision than the previous one.
## 2 ANALYSIS
### 2.1 Scattered positron reconstruction
The scattered positron is reconstructed in the Beam Pipe Calorimeter (BPC) and Beam Pipe Tracker (BPT) of the ZEUS detector. The BPC has been installed in 1995 and was used for the previous measurement of $`F_2`$ at low $`Q^2`$. In 1997, the BPT was installed in front of the BPC.
The BPC is a small calorimeter that detects positrons with a scattering angle of 1–2° w.r.t. the positron beam direction. Its energy resolution is $`\sigma _E=0.17\sqrt{E}`$ ($`E`$ in GeV). The energy scale is known to $`\pm 0.3\%`$, the non-linearity is less than $`\pm 1\%`$ at $`4\mathrm{GeV}`$. The shower position in the BPC is reconstructed with a resolution of $`500\mu \mathrm{m}`$ at $`27.5\mathrm{GeV}`$.
The BPT consists of two silicon microstrip detectors. A track is reconstructed as the straight line through two hits in the BPT, providing the positron scattering angle, its impact point on the BPC and the event vertex. The uncertainty of the absolute BPT position is less than $`\pm 200\mu \mathrm{m}`$. The tracking efficiency is known to $`\pm 1.5\%`$.
### 2.2 Kinematic reconstruction
The event kinematics are reconstructed with the electron method, i.e. from positron variables only, for $`y>0.08`$, where it gives the best resolution. For $`y<0.08`$, the $`e\mathrm{\Sigma }`$ method is used, which improves the $`y`$ reconstruction by combining positron and hadronic final state variables.
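For orientation, the electron method uses only the scattered positron's energy and angle. A minimal sketch with the standard massless-kinematics formulas follows; the beam energies are those of HERA in 1997, while the scattered-positron values are purely illustrative:

```python
import math

def electron_method(E_e, E_p, E_prime, theta):
    """DIS kinematics from the scattered positron alone.

    E_e, E_p : positron and proton beam energies [GeV]
    E_prime  : scattered positron energy [GeV]
    theta    : polar angle w.r.t. the proton beam direction [rad]
    """
    s = 4.0 * E_e * E_p                                   # cms energy squared
    Q2 = 4.0 * E_e * E_prime * math.cos(theta / 2) ** 2   # four-momentum transfer
    y = 1.0 - (E_prime / E_e) * math.sin(theta / 2) ** 2  # relative energy transfer
    x = Q2 / (s * y)                                      # Bjorken x
    W = math.sqrt(y * s - Q2)                             # photon-proton cms energy
    return Q2, x, y, W

# a positron deflected ~1.5 degrees from its own beam direction (i.e. theta
# ~ 178.5 degrees from the proton beam), as seen by the BPC/BPT
Q2, x, y, W = electron_method(27.5, 820.0, 15.0, math.radians(178.5))
print("Q2 = %.3f GeV^2, x = %.1e, y = %.3f, W = %.0f GeV" % (Q2, x, y, W))
```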
### 2.3 Physics simulation
DJANGOH 1.1 and RAPGAP 2.06 are used to simulate non-diffractive and diffractive events, respectively. The samples are mixed in a proportion determined from the data. This is expected to give the best possible description of the hadronic final state, which is crucial at high $`y`$ and low $`Q^2`$, where the fraction of diffractive events rejected by trigger or offline cuts differs significantly from that of non-diffractive events.
### 2.4 Event selection
The event selection is based predominantly on the requirement of a well reconstructed positron in BPC and BPT. The analysis covers the kinematic region $`0.045\mathrm{GeV}^2<Q^2<0.65\mathrm{GeV}^2`$ and $`6\cdot 10^{-7}<x<1\cdot 10^{-3}`$, corresponding to $`25\mathrm{GeV}<W<270\mathrm{GeV}`$ or $`0.007<y<0.8`$. Measuring at high $`y`$ was made possible by the use of the BPT, which serves to suppress background at low scattered positron energies.
## 3 RESULTS
### 3.1 Determination of $`𝑭_\mathrm{𝟐}`$ and $`𝝈_{\mathrm{𝐭𝐨𝐭}}^{𝜸^{\mathbf{}}𝒑}`$
$`F_2`$ is extracted using an iterated bin-to-bin unfolding method and converted into a total cross section via $`\sigma _{\mathrm{tot}}^{\gamma ^{}p}=4\pi ^2\alpha /Q^2F_2`$. The measured cross section depends also weakly on $`F_L`$, which is taken from the BKS model , yielding an effect of at most $`3\%`$. The results are shown in figs. 1 and 2. In the region of this analysis, the cross section becomes nearly flat as a function of $`Q^2`$.
The data are displayed together with two fits (ALLM97 , DL98 ) that have already included the 1995 measurement, as well as with the older DL curve. While DL98 gives the best description at high $`W`$, it undershoots the data at low $`W`$. DL seems to describe the shape better over the whole $`W`$ range. ALLM97 underestimates the cross section at low $`Q^2`$ by 10–15%. The best description of the data is given by the ZEUS REGGE 97 fit presented in section 3.3.
At low $`W`$, the analysis has reached kinematic overlap with E665. While at $`Q^2=0.65\mathrm{GeV}^2`$ the agreement is good, it deteriorates at lower $`Q^2`$.
### 3.2 Systematic uncertainties
Fifteen systematic checks were performed to study the stability of the results. The average statistical error is $`2.6\%`$, the average systematic error $`3.3\%`$. In most bins, the systematic error is similar to the statistical one. Only the two highest $`W`$ bins are dominated by systematic effects, mostly due to the uncertainty of the fraction of diffractive events. The overall normalization uncertainty of $`\pm 1.8\%`$ is due to the luminosity measurement.
### 3.3 Phenomenological fits
With the precise $`F_2`$ data sample presented here, the GVDM and Regge-inspired fits as published in have been repeated.
The first step is to make an assumption about the $`Q^2`$ dependence of the data in order to extrapolate them to $`Q^2=0`$, which is taken from the GVDM prediction on $`\sigma _T`$, giving $`\sigma _{\mathrm{tot}}^{\gamma ^{}p}(W^2,Q^2)=m_0^2/(m_0^2+Q^2)\sigma _{\mathrm{tot}}^{\gamma p}(W^2)`$. Fitting instead both the $`\sigma _T`$ and $`\sigma _L`$ terms changes the extrapolated values only within their statistical errors.
In the second step, the $`W`$ dependence of $`\sigma _{\mathrm{tot}}^{\gamma p}(W^2)`$ is explored. The extrapolations from the first step are compared to the direct photoproduction cross section measurements by ZEUS and H1 and to previous data from other experiments at lower $`W`$. This comparison is shown in fig. 3, together with Regge-type fits of the form $`\sigma _{\mathrm{tot}}^{\gamma p}(W^2)=A_{\mathrm{I}\mathrm{R}}W^{2(\alpha _{\mathrm{I}\mathrm{R}}-1)}+A_{\mathrm{I}\mathrm{P}}W^{2(\alpha _{\mathrm{I}\mathrm{P}}-1)}`$, and with the DL98 and ALLM97 parameterizations. The extrapolated cross sections are larger than the directly measured ones. Whether this is a feature of the assumed $`Q^2`$ dependence for the extrapolation or of the direct cross section measurements remains to be resolved in the future.
The combination of the two fits is shown as ZEUS REGGE 97 in figs. 1 and 2.
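The second-step fit can be sketched in a few lines; the synthetic data points, uncertainties, and starting values below are purely illustrative stand-ins for the measured cross sections:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_regge(W2, A_R, A_P, alpha_R, alpha_P):
    # sigma_tot(W^2) = A_R W^(2(alpha_R-1)) + A_P W^(2(alpha_P-1))
    return A_R * W2 ** (alpha_R - 1.0) + A_P * W2 ** (alpha_P - 1.0)

W = np.array([3.0, 6.0, 10.0, 30.0, 70.0, 140.0, 200.0, 250.0])   # GeV
truth = (145.0, 65.0, 0.55, 1.09)          # illustrative parameters, microbarn
rng = np.random.default_rng(1)
sigma = sigma_regge(W ** 2, *truth) * rng.normal(1.0, 0.02, W.size)

popt, pcov = curve_fit(sigma_regge, W ** 2, sigma,
                       p0=(100.0, 50.0, 0.5, 1.08),
                       sigma=0.02 * sigma, absolute_sigma=True)
print("alpha_R = %.3f  alpha_P = %.3f" % (popt[2], popt[3]))
```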
## 4 CONCLUSIONS
The ZEUS collaboration has measured the proton structure function $`F_2`$ in the range $`0.045\mathrm{GeV}^2<Q^2<0.65\mathrm{GeV}^2`$ and $`6\cdot 10^{-7}<x<1\cdot 10^{-3}`$ with unprecedented precision. The data can be described and extrapolated by a simple GVDM and Regge inspired parameterization.
# Quasiparticle spectrum of a type-II superconductor in a high magnetic field with randomly pinned vortices
The interplay between superconductivity and a magnetic field has attracted interest for a long time. For sufficiently strong fields the Meissner phase is destroyed and a mixed state appears in the form of a quantized vortex lattice. The superconductor order parameter has zeros at the vortex locations, through which the external magnetic field penetrates the sample. Contrary to previous understanding, the increase of the magnetic field intensity, and its associated diamagnetic pair breaking, is counteracted at high magnetic fields by the Landau level structure of the electrons. This leads to interesting properties such as enhancement of the superconducting transition temperature at very high magnetic fields where the electrons are confined to the lowest Landau level. Associated with the zeros of the order parameter in real space are gapless points in the magnetic Brillouin zone, which lead to qualitatively different behavior at low temperatures and high magnetic fields.
It has been argued that the gapless behavior is restricted to high fields very close to the upper critical line where the so-called diagonal approximation (where the coupling between Landau levels is neglected) is valid. It has been shown, however, that the presence of off-diagonal terms does not destroy this behavior and that a perturbation scheme on the off-diagonal terms is possible, as long as there are no band-crossings . It was shown analytically to all orders in the perturbation theory on the off-diagonal terms that there is always a discrete set of points which are gapless that are associated with coherent propagation of the quasiparticles (so-called Eilenberger points). These nodes are associated with the center of mass coordinates of the Cooper pairs and not to some internal structure like in d-wave superconductors. Lowering the magnetic field, a quantum level-crossing transition has been found that eventually leads to a gapped regime and to states localized in the vortex cores .
On the other hand, the effect of disorder on superconductivity has also attracted interest for a long time. In the case of non-magnetic impurities and s-wave pairing Anderson’s theorem states that, at least for low concentrations, they have little effect since the impurities are not pair-breaking . In d-wave superconductors however, non-magnetic impurities cause a strong pair breaking effect . In the limit of strong scattering it was found that the lowest energy quasiparticles become localized below the mobility gap, even in a regime where the single-electron wave-functions are still extended . This result has been confirmed solving the Bogoliubov-de Gennes equations with a finite concentration of non-magnetic impurities . However, allowing for angular dependent impurity scattering potentials it has been found that the scattering processes close to the gap nodes may give rise to extended gapless regions . The case of magnetic impurities in the s-wave case also leads to gapless superconductivity .
The question we wish to address in this paper is if the presence of disorder affects the gapless behavior found in the low-$`T`$ high magnetic field regime of s-wave superconductors discussed above. The case of a dirty but homogeneous superconductor was considered before . It was assumed that the order parameter is not significantly affected by the impurities and retains its periodic structure. It was found that when the disorder becomes stronger than some critical value, a finite density of states appears at the Fermi surface.
In general, since the interactions between the vortices are repulsive, a lattice structure is more favorable energetically. However, if pinning centers are present in the system, the higher energies of different configurations of the vortices may be offset by the presence of disorder. The question then arises if the gapless behavior prevails in this more general case. If the magnetic field is very high, such that the system is in the quantum limit where the electrons are confined to the lowest Landau level, it has been shown that for an arbitrary configuration of zeros there is at least one gapless point . In this paper we will consider a more general case in which the off-diagonal terms are included.
In the mean-field approximation the Hamiltonian of the superconducting system can be diagonalized and the energy eigenvalues are the solutions of the Bogoliubov-de Gennes equations (BdG)
$`\left[{\displaystyle \frac{1}{2m}}\left(\stackrel{}{p}-{\displaystyle \frac{e}{c}}\stackrel{}{A}\right)^2-\mu \right]u(\stackrel{}{r})+\mathrm{\Delta }(\stackrel{}{r})v(\stackrel{}{r})`$ $`=`$ $`Eu(\stackrel{}{r})`$ (1)
$`-\left[{\displaystyle \frac{1}{2m}}\left(\stackrel{}{p}+{\displaystyle \frac{e}{c}}\stackrel{}{A}\right)^2-\mu \right]v(\stackrel{}{r})+\mathrm{\Delta }^{*}(\stackrel{}{r})u(\stackrel{}{r})`$ $`=`$ $`Ev(\stackrel{}{r})`$ (2)
where $`\mathrm{\Delta }(\stackrel{}{r})`$ is the order parameter and $`E`$ the energy. If $`\mathrm{\Delta }(\stackrel{}{r})=0`$, the solutions are the Landau eigenfunctions which, for a two-dimensional system perpendicular to the magnetic field, read in the Landau gauge
$$\varphi _{nq}=\frac{1}{\sqrt{L_x}}\frac{1}{\sqrt{l\sqrt{\pi }2^nn!}}e^{iqx}e^{-\frac{1}{2}\left[\frac{y}{l}+ql\right]^2}H_n\left[\frac{y}{l}+ql\right]$$
(3)
where $`n`$ is the Landau index, $`L_x`$ is the length of the system in the $`x`$ direction, $`l`$ is the magnetic length given by $`l^2=\mathrm{}c/eH`$ and $`H_n`$ is an Hermite polynomial. The energy eigenvalues are those of an harmonic oscillator centered at $`y_0=ql^2`$
$$E_n=\mathrm{}\omega _c\left(n+\frac{1}{2}\right)$$
(4)
where $`\omega _c`$ is the cyclotron frequency. Taking $`L_y`$ to be the dimension along $`y`$ we obtain that $`-L_y/2\leq ql^2\leq L_y/2`$. In the presence of $`\mathrm{\Delta }(\stackrel{}{r})\neq 0`$ we can expand in the Landau basis as
$`u(\stackrel{}{r})`$ $`=`$ $`{\displaystyle \underset{nq}{}}u_{nq}\varphi _{nq}(\stackrel{}{r})`$ (5)
$`v(\stackrel{}{r})`$ $`=`$ $`{\displaystyle \underset{nq}{}}v_{nq}\varphi _{nq}^{}(\stackrel{}{r})`$ (6)
and obtain the corresponding eigensystem
$`\left[n-n_c\right]u_{nk}^\mu +{\displaystyle \sum _{mq}}v_{mq}^\mu \mathrm{\Delta }_{nm}^{kq}`$ $`=`$ $`ϵ^\mu u_{nk}^\mu `$ (7)
$`-\left[n-n_c\right]v_{nk}^\mu +{\displaystyle \sum _{mq}}u_{mq}^\mu \left(\mathrm{\Delta }_{mn}^{qk}\right)^{*}`$ $`=`$ $`ϵ^\mu v_{nk}^\mu `$ (8)
where $`ϵ^\mu =E^\mu /(\mathrm{}\omega _c)`$, $`n_c`$ is defined by $`\mu =\mathrm{}\omega _c(n_c+\frac{1}{2})`$ and
$$\mathrm{\Delta }_{nm}^{kq}=𝑑\stackrel{}{r}\varphi _{nk}^{}(\stackrel{}{r})\frac{\mathrm{\Delta }(\stackrel{}{r})}{\mathrm{}\omega _c}\varphi _{mq}^{}(\stackrel{}{r}).$$
(9)
The excitation spectrum is then obtained solving eqs. (5) with an appropriate choice for the order parameter.
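In matrix form this amounts to diagonalizing a Bogoliubov block matrix in the truncated Landau basis. A schematic sketch is given below; the pairing block is a random placeholder standing in for the actual matrix elements of eq. (9), so only the bookkeeping is meant literally:

```python
import numpy as np

n_c, n_D = 10, 2                           # chemical-potential level, Debye cutoff
levels = np.arange(n_c - n_D, n_c + n_D + 1)
N = levels.size

D = np.diag((levels - n_c).astype(float))  # kinetic block, units of hbar*omega_c

rng = np.random.default_rng(2)             # random placeholder for Delta_{nm}^{kq}
Delta = 0.2 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(N)

# Bogoliubov-de Gennes matrix acting on the amplitudes (u, v)
H = np.block([[D, Delta],
              [Delta.conj().T, -D]])
eps = np.linalg.eigvalsh(H)
print(np.sort(np.abs(eps))[:4])            # lowest quasiparticle energies
```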
In the lattice case, Abrikosov’s solution can be written in the Landau gauge as
$$\mathrm{\Delta }(\stackrel{}{r})=\mathrm{\Delta }\sum _pe^{i\pi \frac{b_x}{a}p^2}e^{i\frac{2\pi p}{a}x}e^{-\left(\frac{y}{l}+\frac{\pi p}{a}l\right)^2}$$
(10)
The vortex lattice is characterized by unit vectors $`\stackrel{}{a}=(a,0)`$ and $`\stackrel{}{b}=(b_x,b_y)`$ where $`b_x=0`$, $`b_y=a`$ for a square lattice and $`b_x=\frac{1}{2}a`$, $`b_y=\frac{\sqrt{3}}{2}a`$ for a triangular lattice. This form for the order parameter is valid sufficiently close to the upper critical field, since it is entirely contained in the lowest Landau level of Cooper charge $`2e`$. We will not consider contributions to the order parameter from higher Landau levels. In this case the self-consistent equation for the order parameter reduces to a single relation between $`\mathrm{\Delta }`$ and $`V`$, the attractive interaction strength between the electrons. In the following we will consider the square lattice ($`L_x=L_y=L`$), for simplicity. In this case the lattice constant $`a=l\sqrt{\pi }`$ and the zeros are located at the points $`x_i=(i+\frac{1}{2})l\sqrt{\pi }`$, $`y_j=(j+\frac{1}{2})l\sqrt{\pi }`$.
An expression for the order parameter has also been found for an arbitrary distribution of the zeros which in the symmetric gauge can be written as
$$\mathrm{\Delta }(x,y)=\overline{\mathrm{\Delta }}\prod _{i=1}^{N_\varphi }\left[\frac{x-x_i}{l}+i\frac{y-y_i}{l}\right]e^{-\frac{1}{2l^2N_\varphi }\left[(x-x_i)^2+(y-y_i)^2\right]}$$
(11)
Here, $`N_\varphi `$ is the number of vortices in the system (number of zeros $`(x_i,y_i)`$). The solution of the BdG equations is then simply obtained numerically using eq. (6) and performing the gauge transformation of eq. (8) to the Landau gauge.
In the lattice case it is more convenient to use a representation in terms of the magnetic wave-functions instead of eq. (4), to take advantage of the translational invariance. The generalization to a random configuration is however more conveniently done using a real space representation. We consider a finite system and we have to take account of finite size effects. In particular, the use of eq. (8) for $`\mathrm{\Delta }(\stackrel{}{r})`$ is very sensitive. We have therefore compared eqs. (7) and (8) for a finite system. (It is convenient to define new variables $`X=\frac{x}{L}`$, $`Y=\frac{y}{L}`$ and to use that $`L=l\sqrt{\pi N_\varphi }`$). The effect of the finite size was eliminated (less than $`1\%`$ difference with respect to Abrikosov’s solution) using periodic boundary conditions and calculating $`\mathrm{\Delta }(x,y)`$ using eq. (8) over a set of zeros contained in a circle of unit radius (recall that $`-\frac{1}{2}\leq X,Y\leq \frac{1}{2}`$) and relating $`\overline{\mathrm{\Delta }}`$ with $`\mathrm{\Delta }`$ to give the same amplitude.
$$\rho (\omega )=\sum _\mu \left[\sum _{nq}|u_{nq}^\mu |^2\delta (\omega -ϵ^\mu )+\sum _{nq}|v_{nq}^\mu |^2\delta (\omega +ϵ^\mu )\right]$$
(12)
where the sum is restricted to $`ϵ^\mu \geq 0`$.
We considered three cases: i) square lattice, ii) weakly disordered lattice and iii) randomly pinned configuration (strong disorder). In the case of weak disorder we considered a distribution of zeros $`X_i=X_i^L+\delta (-\frac{1}{2}+r)/\sqrt{N_\varphi }`$, where $`X_i^L`$ is the regular lattice location, $`\delta `$ is an adjustable amplitude and $`r`$ is a random number $`0<r<1`$ (and similarly for $`Y_i`$). In the strong disorder case we allow $`X_i,Y_i`$ to take any values on the system. We calculated the average over disorder directly on the density of states. That is, we select randomly one configuration of the zeros and
obtain the excitation spectrum and $`\rho (\omega )`$. We repeat this process many times and then we take the average over the resulting expressions for the density of states. This is the final result.
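The averaging step itself is simple bookkeeping; this sketch broadens the delta functions of eq. (12) into narrow Gaussians and averages over realizations (the spectra and weights here are random stand-ins, not output of the actual diagonalization):

```python
import numpy as np

def dos(omega, eps, u2, v2, eta=0.02):
    # Gaussian-broadened version of eq. (12), positive-energy branch only
    gauss = lambda w: np.exp(-0.5 * (w / eta) ** 2) / (eta * np.sqrt(2 * np.pi))
    rho = np.zeros_like(omega)
    for e, u, v in zip(eps, u2, v2):
        rho += u * gauss(omega - e) + v * gauss(omega + e)
    return rho

rng = np.random.default_rng(3)
omega = np.linspace(-1.5, 1.5, 301)
rho_avg = np.zeros_like(omega)
n_conf = 100                                    # disorder realizations
for _ in range(n_conf):
    eps = np.abs(rng.normal(0.5, 0.3, size=20))   # stand-in spectrum
    u2 = rng.dirichlet(np.ones(20))               # normalized |u|^2 weights
    v2 = rng.dirichlet(np.ones(20))               # normalized |v|^2 weights
    rho_avg += dos(omega, eps, u2, v2)
rho_avg /= n_conf
print("rho(omega=0) =", rho_avg[np.argmin(np.abs(omega))])
```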
To check the accuracy of the method we started with the lattice case previously solved. The Cooper pairs are formed from electrons at the same point in space and in an energy interval of the order of the Debye energy, which is taken to be typically of the order of 10–20% of the Fermi energy (therefore we consider off-diagonal couplings between Landau levels with $`n_c-n_D\leq n\leq n_c+n_D`$, where $`n_D\simeq \omega _D/\omega _c`$).
The excitation spectrum depends on $`\mathrm{\Delta }`$ and on the dimensionality of the lattice. It also depends on $`n_c`$ but appropriate rescalings yield a somewhat universal behavior for not too large values of $`\mathrm{\Delta }`$. For very small $`\mathrm{\Delta }`$ (in units of the Landau spacing) a diagonal approximation gives good results and a gapless behavior is found at a series of points in the magnetic Brillouin zone associated with the zeros in the real lattice (Eilenberger points) and in other points which increase in number as $`n_c`$ grows. These gapless points contribute to a density of states that vanishes linearly as $`\omega \to 0`$. As $`\mathrm{\Delta }`$ grows, off-diagonal terms have to be included. Their effect is two-fold since they affect both the normal self-energy and the pairing self-energy. This leads to a shift in the chemical potential to simulate the 3D case. In this case it can be shown that the Eilenberger points remain gapless to all orders in the off-diagonal coupling, as long as $`\mathrm{\Delta }`$ is small enough that no band-crossings occur. For $`n_c`$ not too large
and with the inclusion of off-diagonal coupling, a pseudo-gap opens in the spectrum since the weight of the gapless points is small (but nonzero!). (This is peculiar to a 2d system since in this case the Fermi surface only passes through a finite number of points, in general. In 3d there is always a value of $`k_z`$ such that it is possible to find a gapless point ). As $`n_c`$ grows, and for values of $`\mathrm{\Delta }`$ such that no band-crossings occur, the density of states is more and more similar to the diagonal approximation.
We want to see if the presence of disorder affects this gapless behavior and therefore we consider first the less favorable case where a small density of states is already present in the lattice case. We therefore consider a typical case of $`n_c=10`$, $`n_D=2`$ and $`\mathrm{\Delta }=0.2`$ which was studied before . We maintained the value of the chemical potential fixed and independent of disorder.
In the lattice case the results are very similar using either the magnetic Brillouin formulation or the real space formulation. Due to the finiteness of the system there is somewhat more structure in $`\rho (\omega )`$ but the qualitative features are the same: a pseudogap of order $`0.02`$, the location of the maximum around $`\omega 0.07`$ and the width of order $`0.1`$ for the lowest band above the Fermi level.
Considering now the case of weak disorder we performed averages over $`100`$ configurations for $`\delta =0,0.01,0.1,0.5,1,2`$. Our results are shown in Fig. 1. The cases $`\delta =0`$ and $`\delta =0.01`$ are virtually indistinguishable. As $`\delta `$ increases the effect of disorder is clear. For $`\delta =0.1`$ the width of the band is almost unaltered but the structure is now broadened. For larger values of $`\delta `$ this effect is more pronounced and the width of the band increases. In particular, the density of states for small $`\omega `$ increases and the relative weight of the gapless modes increases with respect to the maximum $`\rho (\omega )`$, which remains approximately at the same energy. The case of $`\delta =2`$ is already very similar to the case of strong disorder where the randomness is maximized. Even though the disorder strongly affects the $`u(\stackrel{}{r})`$ and $`v(\stackrel{}{r})`$ amplitudes we found no evidence for localization. The lowest energy eigenvectors extend considerably over the whole system even though they are strongly inhomogeneous.
The pseudogap of Fig. 1 found in the $`2d`$ case for $`n_c=10`$ with the chemical potential adjusted disappears over a wide range of the order parameter if the number of levels increases. Also, as discussed above, in $`3d`$ there is always a value of $`k_z`$ such that the pseudogap vanishes. Keeping the chemical potential at $`\mu =10`$ the pseudogap also becomes small since, even though the Eilenberger points are not gapless in general, as long as the order parameter is small, the gaps are very small throughout the Brillouin zone. In Fig. 2 we consider the case of $`n_c=10`$ and $`\mu =10`$ as a function of disorder. The case $`\delta =0`$ shows a small finite density of states at the Fermi level (due to the finite size considered) and a large density of states at low energies. As the disorder increases, its effect becomes very pronounced. The DOS broadens as before and extends from zero energy with a zero-energy value that increases as $`\delta `$ increases. For strong disorder ($`\delta =2`$) the DOS is much larger than the lattice result.
These results show that in the presence of disorder the gapless behavior does not disappear and is actually enhanced. Our numerical results indicate that there is a finite density of states at zero energy particularly in $`3d`$ or if the number of Landau levels is not too small. They also confirm that the gapless behavior has a topological nature and is not specific to the periodic vortex lattice.
The author acknowledges helpful discussions with Zlatko Tesanovic and partial support from PRAXIS Project /2/2.1/FIS/302/94.
# Unitarity and nonperturbative effects in the spin structure functions at small $`x`$
S. M. Troshin and N. E. Tyurin
Institute for High Energy Physics, Protvino, Moscow Region,142284 RUSSIA
> Abstract: We consider low-$`x`$ behavior of the spin structure functions $`g_1(x)`$ and $`h_1(x)`$ in the unitarized chiral quark model that combines the ideas on constituent quark structure of hadrons with a geometrical scattering picture and unitarity. A nondiffractive singular low-$`x`$ dependence of $`g_1^p(x)`$ and $`g_1^n(x)`$ is obtained and a diffractive type smooth behavior of $`h_1(x)`$ is predicted at small $`x`$.
Experimental evaluations of the first moments of $`g_1`$ and $`h_1`$ (and of the total nucleon helicity carried by quarks and the tensor charge, respectively) are in principle sensitive to the particular theoretical extrapolation of the structure functions $`g_1(x)`$ and $`h_1(x)`$ to $`x=0`$. The essential point in the study of low-$`x`$ dynamics is that the space-time structure of the scattering at small values of $`x`$ involves large distances $`l\sim 1/Mx`$ on the light cone, and the region $`x\to 0`$ is therefore determined by nonperturbative dynamics. A number of models attribute the observed increase of $`g_1(x)`$ at small $`x`$ to a diffractive contribution. Such a contribution, being dominant at the smallest values of $`x`$, would lead to “equal” structure functions $`g_1^p(x)`$ and $`g_1^n(x)`$ in this kinematical region, i.e.
$$g_1^p(x)/g_1^n(x)\to 1$$
at $`x\to 0`$. Such behavior has not been confirmed in the recent experiments. In particular, the SMC data demonstrate the following approximate relation in the region $`0.003<x<0.1`$:
$$g_1^p(x)\simeq -g_1^n(x).$$
To consider the low-$`x`$ region and obtain the explicit forms for the quark spin densities $`\mathrm{\Delta }q(x)`$ and $`\delta q(x)`$ at $`x\to 0`$, it is convenient to use the relations between these functions and the discontinuities of the helicity amplitudes of antiquark–hadron forward scattering. We use a nonperturbative approach where unitarity is explicitly taken into account via unitary representations of the helicity amplitudes, which follow from their relations to the $`U`$-matrix
In the model a quark is considered as a structured, hadronlike object, since at small $`x`$ the photon converts to a quark pair at long distances before it interacts with the hadron. At large distances the perturbative QCD vacuum undergoes a transition into a nonperturbative one with formation of the quark condensate. The appearance of the condensate means spontaneous chiral symmetry breaking, and the current quark transforms into a massive quasiparticle state – a constituent quark. The constituent quark is embedded in the nonperturbative vacuum (condensate), and therefore we can treat it similarly to a hadron. The spin of the constituent quark $`J_U`$ in this approach is given by the following sum
$$J_U=1/2=S_{u_v}+S_{\{\overline{q}q\}}+L_{\{\overline{q}q\}}=1/2+S_{\{\overline{q}q\}}+L_{\{\overline{q}q\}}.$$
It is also important to note the exact compensation between the spins of quark–antiquark pairs and their orbital angular momenta, i.e. $`L_{\{\overline{q}q\}}=-S_{\{\overline{q}q\}}`$.
We consider an effective Lagrangian approach in which the gluon degrees of freedom are integrated out. The orbital momentum contribution to the spin of the constituent quark can be estimated from the relation between the contributions of the current quarks to the proton spin, the contributions of the current quarks to the spin of the constituent quarks, and the contributions of the constituent quarks to the proton spin. The existence of this orbital angular momentum, i.e. the orbital motion of quark matter inside the constituent quark, is the origin of the observed asymmetries in inclusive production at moderate and high transverse momenta. The mechanism of quark helicity flip in this picture is associated with the interaction of the constituent quark with a quark generated by the interaction of the condensates. A quark exchange process between the valence quark and an appropriate quark of the same flavor with the relevant spin orientation provides the necessary helicity flip transition, i.e. $`Q_+\to Q_-`$.
The helicity amplitudes $`F_{1,2,3}(s,t)|_{t=0}`$ at high values of $`s`$ and then the functional dependencies for the quark densities $`q(x)`$, $`\mathrm{\Delta }q(x)`$ and $`\delta q(x)`$ at small $`x`$ were obtained .
The low-$`x`$ behavior of quark spin densities is as follows:
$$q(x)\sim \frac{1}{x}\mathrm{ln}^2(1/x),\mathrm{\Delta }q(x)\sim \frac{1}{\sqrt{x}}\mathrm{ln}(1/x),\delta q(x)\sim x^c\mathrm{ln}(1/x),$$
and correspondingly
$$F_1^p(x)/F_1^n(x)\to 1,h_1^p(x)/h_1^n(x)\to 1$$
at $`x\to 0`$, with the explicit forms as follows
$$F_1^p(x)\sim \frac{1}{x}\mathrm{ln}^2(1/x),h_1^p(x)\sim x^c\mathrm{ln}(1/x).$$
Comparison of the spin structure function $`g_1(x)`$ with the SMC data provides a satisfactory agreement with experiment at small $`x`$ ($`0<x<0.1`$) and leads to the values $`C^p=2.07\cdot 10^{-2}`$ and $`C^n=-2.10\cdot 10^{-2}`$ (cf. Fig. 1).
The functional dependence of the spin structure functions
$$g_1^{p,n}(x)\sim \frac{1}{\sqrt{x}}\mathrm{ln}(1/x)$$
is in a good agreement with the new E154, E155 and HERMES data as well. The model leads to the approximate relation
$$g_1^p(x)/g_1^n(x)\simeq -1$$
at small values of $`x`$.
The above extrapolation of $`g_1(x)`$ at small $`x`$ provides the following approximate values for the quark spin contributions:
$$\mathrm{\Delta }\mathrm{\Sigma }\simeq 0.25,\mathrm{\Delta }u\simeq 0.81,\mathrm{\Delta }d\simeq -0.45,\mathrm{\Delta }s\simeq -0.11,$$
which demonstrate that the singular behavior of $`g_1^p(x)`$ does not lead to significant deviations from the results of the experimental analysis where the smooth extrapolation of the data to $`x=0`$ was used.
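That the singular small-$`x`$ form still yields finite first moments is a one-line check: with an arbitrary cutoff $`\varepsilon `$,

$$\int _0^{\varepsilon }\frac{\mathrm{ln}(1/x)}{\sqrt{x}}\,dx=\left[2\sqrt{x}\left(\mathrm{ln}(1/x)+2\right)\right]_0^{\varepsilon }=2\sqrt{\varepsilon }\left(\mathrm{ln}(1/\varepsilon )+2\right)<\mathrm{},$$

so the first moment $`\mathrm{\Gamma }_1=\int _0^1g_1(x)\,dx`$ converges despite the $`1/\sqrt{x}`$ growth.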
The obtained singular small-$`x`$ behavior of $`g_1`$ corresponds to the following high energy behavior of the difference of the $`\gamma N`$ total cross-sections:
$$\mathrm{\Delta }\sigma =\sigma _{\gamma N}^{1/2}-\sigma _{\gamma N}^{3/2}\sim \frac{\mathrm{ln}\nu }{\sqrt{\nu }}$$
and gives a convergent integral in the Drell-Hearn-Gerasimov-Iddings (DHGI) sum rule. Note, however, that the unitarity bound:
$$\mathrm{\Delta }\sigma \lesssim \mathrm{ln}\nu $$
does not rule out the divergent DHGI integral.
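Explicitly, with $`\mathrm{\Delta }\sigma \sim \mathrm{ln}\nu /\sqrt{\nu }`$ the DHGI integrand falls fast enough,

$$\int _{\nu _0}^{\mathrm{}}\frac{\mathrm{\Delta }\sigma (\nu )}{\nu }\,d\nu \propto \int _{\nu _0}^{\mathrm{}}\frac{\mathrm{ln}\nu }{\nu ^{3/2}}\,d\nu =\left[-\frac{2(\mathrm{ln}\nu +2)}{\sqrt{\nu }}\right]_{\nu _0}^{\mathrm{}}=\frac{2(\mathrm{ln}\nu _0+2)}{\sqrt{\nu _0}}<\mathrm{},$$

whereas the unitarity-bound behavior $`\mathrm{\Delta }\sigma \sim \mathrm{ln}\nu `$ gives $`\int ^{\mathrm{}}\mathrm{ln}\nu \,d\nu /\nu =\frac{1}{2}\mathrm{ln}^2\nu \to \mathrm{}`$, i.e. a divergent integral is indeed not excluded.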
One of the authors (S. T.) is grateful to the Organizers of this interesting Workshop for the invitation and kind support.
## I Introduction
The wide band gap semiconductor GaN has experienced exciting applications in blue light emitting diodes and laser diodes. For a further improvement of the quality of the material a better understanding of the structural and electronic properties is necessary. The energetic positions of critical points differ by 0.8 eV for the existing band structure calculations of wurtzite GaN . Furthermore, the geometrical structure of the GaN(0001) surface is still being debated .
The most powerful tool for examining the electronic structure of semiconductors is the angle resolved ultraviolet photoemission spectroscopy (ARUPS). The spectra give insight into the valence band structure of the bulk as well as the surface. Besides the part of direct interest, i.e. the initial bound state, the photoemission process also involves the excitation to outgoing scattering states with the transition probability given by matrix elements. Therefore, as already demonstrated for the cubic GaN(001) surface , a full account of the experimental data can only be attained by a comparison with photocurrents calculated within the one-step model.
As a starting point we use a GaN(0001)-(1x1):Ga surface, as it is predicted by total energy calculations performed within the local density formalism . The calculated photoemission spectra in normal emission are examined with respect to contributions from the bulk band structure as well as from surface states. For example, we identify a structure near the lower valence band edge as resulting from a surface state. Only by taking this state into account, the correct energetic position of the band edge can be extracted. Based on the detailed understanding of the photocurrent the flexibility of the calculation allows us to adjust the underlying bulk band structure to the experimental results. This means that we are able to correct the position of the valence band maximum, which is an important value for determining band offsets and band bending. Though abandoning parameter-free modeling thereby, one gains experience how peaks are shifted and intensities are deformed by the matrix elements. The true position of the bands can be much better determined in such an interpretation of experiment than using standard band mapping methods.
Off-normal photoemission spectra provide an enhanced surface sensitivity. Near the upper valence band edge, experiment has pointed out an $`sp_z`$ orbital related surface state . Identifying this state, which depends sensitively on the surface geometry, in the theoretical spectra, we can connect the geometric and electronic structure with the measured photocurrents.
This paper is organized as follows. First a short overview about the theory is given, followed by a detailed analysis of the initial surface band structure and the used final bands. Then the results for normal-emission spectroscopy from the GaN(0001)-(1x1):Ga surface are presented, together with a detailed interpretation in comparison with experiment. It is shown how the theoretical band structure calculation can be related to experiment and how uncertainties in the experimental interpretation can be removed. Finally, we present the results for off-normal emission, comparing with experimental data, too.
## II Theory
In this section, we briefly discuss the theoretical techniques used in our calculation of the photocurrent. For details see the references .
We calculate the photocurrent within the one-step model. The photocurrent $`I`$ is given by:
$$I\propto \underset{i,j}{\sum }\langle \mathrm{\Phi }_{LEED}(E_{fin},k_{\parallel })|𝐀_\mathrm{𝟎}𝐩|\mathrm{\Psi }_i\rangle G_{i,j}(E_{fin}-h\nu ,k_{\parallel })\langle \mathrm{\Psi }_j|𝐩𝐀_\mathrm{𝟎}|\mathrm{\Phi }_{LEED}(E_{fin},k_{\parallel })\rangle $$
(1)
For simplicity the vector potential $`𝐀_\mathrm{𝟎}`$ is kept constant. $`G_{i,j}`$ represents the halfspace Green’s function of the valence states, given in a layer-resolved LCAO basis $`\mathrm{\Psi }_i`$ . Our basis set consists of the 4$`s`$ and 4$`p`$ atomic orbitals of gallium and the 2$`s`$ and 2$`p`$ atomic orbitals of nitrogen, taking into account the coupling up to fourth nearest neighbor atoms. The Hamilton matrix is calculated according to the Extended-Hückel-Theory (EHT). Its parameters, employed for bulk and halfspace calculations, are adjusted to published ab-initio bulk band structures using a genetic algorithm . We use two different sets for the parameters. One set is adjusted to the GW quasiparticle band structure of Rubio et al. . The other set is adjusted to a self-interaction and relaxation corrected pseudopotential band structure calculation by Vogel et al. . The two sets of parameters belonging to these band structures are presented in Table I. Figure 1 shows the resulting bulk band structure according to Vogel et al. (solid lines). Along $`\overline{\mathrm{\Gamma }A}`$, also the band structure adjusted to Rubio et al. is shown (dashed lines). The main difference is the energetic position of the lower valence band edge at $`\overline{\mathrm{\Gamma }}`$ near -8.0 eV, where the calculations differ by nearly 0.8 eV.
The electronic structure of the surface is determined by the calculation of the $`𝐤_{\parallel }`$-resolved density of states (DOS) from the halfspace Green’s matrix $`G_{i,j}`$, the same as used for the photocurrent. It takes into account relaxation and reconstruction at the surface. The resolution of the DOS with respect to atomic layers and orbital composition allows for a detailed characterization of the bands and their corresponding photocurrents.
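The connection between the Green's matrix and the DOS is the standard one, $`\rho _i(E)=-\frac{1}{\pi }\mathrm{Im}G_{ii}(E+i\eta )`$. A generic sketch for a small tight-binding slab follows; the hopping and on-site values are placeholders, not the EHT parameters of Table I:

```python
import numpy as np

def layer_dos(H, energies, eta=0.05):
    """rho_i(E) = -(1/pi) Im G_ii(E + i*eta) for every basis orbital i."""
    N = H.shape[0]
    rho = np.empty((energies.size, N))
    for k, E in enumerate(energies):
        G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H)
        rho[k] = -np.imag(np.diag(G)) / np.pi
    return rho

# placeholder slab: 10 layers, one orbital per layer, nearest-neighbor hopping t,
# and a shifted on-site energy on the surface layer
t, N = 1.0, 10
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, 0] = -0.8                        # surface on-site perturbation
E = np.linspace(-3.0, 3.0, 241)
rho = layer_dos(H, E)
print("surface-layer DOS peaks at E =", E[np.argmax(rho[:, 0])])
```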
The final state of photoemission is a scattering state with asymptotic boundary conditions. For a clean surface its wave function is determined by matching the solution of the complex bulk band structure to the vacuum solution, representing the surface by a step potential. This treatment is best suited for discussing the photoemission peaks in terms of direct transitions with conservation of the surface perpendicular wave vector $`k_{\perp }`$, since the final state is described inside the crystal as a sum over bulk solutions of different $`k_{\perp }`$. These solutions of the complex bulk bands are calculated with an empirical pseudopotential. For GaN we use the pseudopotential form factors of Bloom et al. The damping of the wave function inside the crystal is described by the imaginary part of an optical potential.
In Eq. (1), the transition matrix elements $`\langle \mathrm{\Phi }_{LEED}(E_{fin},k_{\parallel })|𝐀_\mathrm{𝟎}𝐩|\mathrm{\Psi }_i\rangle `$ between the final state and layer Bloch sums are numerically integrated in real space. Their dependence on atomic layers and orbitals permits a detailed analysis of the spectra.
## III results and discussion
### A Electronic structure
In this section we discuss our results for the electronic structure of the GaN(0001) surface. We use a surface geometry of Smith et al. (shown in Fig. 2), derived from total energy calculation and STM examinations. Directly atop the nitrogen atoms sits a full monolayer of gallium adatoms, forming a (1x1) surface. In Fig. 3 the surface band structure is presented, calculated with the parameters of set A in Table I. The bands are determined from the peaks in the $`𝐤_{}`$ resolved DOS of the four topmost atomic layers.
Within the fundamental gap there are two surface states, labeled (a) and (b). They can be identified as $`p_x`$ and $`p_y`$ derived bridge bonds between the Ga adlayer atoms. Smith et al. found these strongly dispersive metallic bonds to be responsible for the stability of this surface . The state (c) is built up by the Ga $`s`$ and $`p_z`$ orbitals, in $`\overline{K}`$ and $`\overline{M}`$ with strong contributions from the underlying N $`p_z`$ orbitals. Along $`\overline{\mathrm{\Gamma }M}`$ and mostly along $`\overline{\mathrm{\Gamma }K}`$ the state (c) is in resonance with the bulk, mixing with the nitrogen $`p_x`$ and $`p_y`$ orbitals. Especially in $`\overline{\mathrm{\Gamma }}`$, these orbitals exhibit a strong contribution to the density of states, as can be seen in Fig. 4.
Along $`\overline{\mathrm{\Gamma }M}`$ the surface resonances (e) and (f) are made up by the nitrogen $`p_x`$ orbitals with small contributions from the underlying Ga $`s`$ and $`p_x`$ orbitals. For band (d) we find strong contributions from the nitrogen $`p_y`$ orbitals and from the $`p_y`$ orbitals of Ga lying below the nitrogen layer. Along $`\overline{\mathrm{\Gamma }K}`$ we find similar resonances (h, i, k), which can be resolved into contributions from the N $`p_y`$ orbitals with a smaller amount from the N $`p_x`$ and the subsurface Ga $`p_x`$ and $`p_y`$ orbitals. The band (g), seen at -4.0 eV near $`\overline{\mathrm{\Gamma }}`$ is a nearly invisible structure in the density of states (Fig. 4), built up by broad contributions from nitrogen and gallium $`p_z`$ orbitals. In the heteropolar gap we find a strong Ga $`s`$ surface state (l) (see also Fig. 4) located at -8.0 eV clearly below the lower bulk band edge. As can be seen in the DOS, this state contains also contributions from the N $`p_z`$ orbitals. Along $`\overline{\mathrm{\Gamma }M}`$ and $`\overline{\mathrm{\Gamma }K}`$ this band shows a strong dispersion towards lower binding energy, becoming a surface resonance between $`\overline{M}`$ and $`\overline{K}`$. Near the lower valence band edge we find a second resonance (m) formed by the N $`p_x`$ orbitals and Ga $`s`$ orbitals from deeper atomic layers. In $`\overline{K}`$ we find also contributions from the N $`p_y`$ orbitals. Altogether, taking into account the states inside the fundamental gap and the surface state in the heteropolar gap, the surface band structure of the GaN(0001)-(1x1):Ga surface shows a similar behavior as the cubic GaN(001)-(1x1):Ga surface .
The complex final bands are shown in Fig. 5 . They are calculated along the symmetry line $`\mathrm{\Delta }`$, which corresponds to normal emission. Since we introduced an imaginary optical potential all bands are damped. The horizontal bars denote the weight which single complex bands carry in the final state. With respect to this criterion, only the four most important bands for the photoemission are shown. They have strong contributions to the final state (labeled (a), (b), (c), and (d)). Below 15 eV final state energy state (d) is the most important one. In this energy range state (a) and (c) reveal large imaginary parts, being responsible for a strong damping of these states inside the crystal. Between 15 and 65 eV state (a) contributes dominantly. Above 65 eV band (b) yields the essential contribution to the final state.
### B Normal Emission
Figure 6 presents normal emission spectra for the GaN(0001)-(1x1):Ga surface, which are calculated for photon energies from 14 up to 78 eV. The radiation is chosen to be incident within the $`xz`$ plane at an angle of $`45^{\circ }`$ to the surface normal. The radiation is polarized parallel to the plane of incidence. The spectra are dominated by two structures, near -1.0 eV (A) and -8.0 eV (E). The latter coincides with the strong Ga $`s`$ surface state near -8.0 eV, which can be seen in the DOS at $`\overline{\mathrm{\Gamma }}`$ in Fig. 4. The emission from this peak is localized in the topmost gallium layer, which can be proven by analyzing the orbital and layer resolved matrix elements, as presented in Fig. 7. For final state energies between 8 and 21 eV the $`s`$ orbitals of the gallium adlayer atoms show large matrix elements. This agrees with the strong emissions at -8 eV for photon energies between 16 and 29 eV. For higher final state energies the Ga $`s`$ matrix elements as well as the peak heights are much smaller. At 25 eV and above 40 eV final state energy the matrix elements of the first nitrogen $`p_z`$ orbitals become appreciable. Thus the emissions near 33, 49 and 78 eV photon energy are enhanced by emissions from nitrogen $`p_z`$ orbitals, although these show a much weaker DOS at $`\overline{\mathrm{\Gamma }}`$ near -8.0 eV than the gallium $`s`$ orbitals.
The leading peak near -1.0 eV valence energy (A) is connected to the high and broad density of states (Fig. 4), resulting from the nitrogen $`p_x`$\- and $`p_y`$ orbitals. While emissions from the $`p_y`$ orbitals are forbidden by selection rules, the nitrogen $`p_x`$ matrix elements exhibit minima near 25, 50 and 73 eV final state energy. This is consistent with the decreasing intensity of the leading peak near 27, 51 and 74 eV photon energy whereby the behavior at 27 eV is especially convincing. For 51 eV the smaller intensity of the leading peak is one reason for the more pronounced intensity of those at higher binding energies, because in Fig. 6 each spectrum is normalized seperately to an equal amount in the highest peak. Furthermore, for analyzing structure (A) we have to take into account direct transitions assuming exact conservation of the perpendicular wave vector. For a given excitation energy we determine the binding energies at which transitions from the initial bulk band structure into the complex final band structure are possible. These binding energies are plotted in the photoemission spectra with bars whose length indicates the contributions of the complex band to the final state. For structure A we have to consider the initial bands 1, 2 and 3 (see Fig. 1). Hints for the contribution of direct transitions to structure (A) are the dispersion for photon energies between 14 and 20 eV at the lower binding energy side (from initial state (2) into final band (d)) and between 20 and 47 eV photon energy were the leading peak disperses from -0.1 eV to -1.1 eV (initial band (1) and (2) into final band (a)). For photon energies of 17 eV (final band (d)), 39 eV (final band (a)) and 74 eV (final band (b)) shoulders from direct transitions from the VBM can be seen.
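The direct-transition bookkeeping can be mimicked with one-dimensional model bands: for a fixed photon energy one scans $`k_{\perp }`$ for the points where a final band lies exactly $`h\nu `$ above an initial band. The cosine initial band, inner potential, and reciprocal-lattice offset below are placeholders, not the actual GaN bands:

```python
import numpy as np

hbar2_2m = 3.81                        # eV * Angstrom^2
c = 5.19                               # wurtzite GaN c lattice constant [Angstrom]
k = np.linspace(0.0, np.pi / c, 2001)  # k_perp along Gamma-A

E_i = -1.0 - 1.5 * (1.0 - np.cos(k * c))         # placeholder valence band [eV]
V0 = 10.0                                        # assumed inner potential [eV]
E_f = hbar2_2m * (k + 6 * np.pi / c) ** 2 - V0   # free-electron-like final band
# (the reciprocal-lattice offset 6*pi/c is chosen so a 50 eV transition exists)

hnu = 50.0
mismatch = E_f - E_i - hnu
for j in np.where(np.diff(np.sign(mismatch)) != 0)[0]:
    print("direct transition at k_perp = %.3f 1/A, binding energy = %.2f eV"
          % (k[j], E_i[j]))
```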
Besides the two prominent structures there exist weaker intensities with strong dispersion. (C) and (D) can be identified as emissions from the initial bands (3) and (4) into the final bands (d) and (c), respectively. Between 28 and 63 eV photon energy we find the dispersive structure (B). It can be explained by direct transitions from the valence band (4) into the final band (a). At 47 eV photon energy the structure reaches the highest binding energy with -7.2 eV. Near the lower valence band edge above (E) and interfering with (B) there are further weak structures in the photon range from 39 to 59 eV which result from nitrogen $`p_z`$ orbitals. Near -5.5 eV emissions from the nitrogen $`p_z`$ orbitals from the second nitrogen layer arise for photon energies between 49 and 59 eV. Emissions from the third nitrogen layer $`p_z`$, located at -6.7 and -3.2 eV are visible in the photon range (51,55) eV and (39,49) eV respectively.
Compared with the GaN(001) surface , direct transitions are less significant for wurtzite GaN in the range up to 78 eV photon energy. Further calculation shows, that above 98 eV photon energy a strong dispersive structure belonging to the initial state band (4) and the final state (b) appears, reaching its highest binding energy near 118 eV photon energy.
### C Normal emission - comparison with experiment
In this section we compare our calculated spectra with experimental results in normal emission performed by Dhesi et al. for photon energies between 31 and 78 eV. The measurements were done on a wurtzite GaN film, grown by electron cyclotron resonance assisted molecular beam epitaxy on sapphire substrates with subsequent annealing. The spectra were detected with synchrotron radiation, incident at $`45^{\circ }`$ to the surface normal.
Figure 8 shows on the right hand side the experimental results. Energy zero is the VBM, which was determined from the spectra by extrapolating the leading edge. In comparing the spectra we will show that this technique places the experimental VBM 1.0 eV above the VBM as taken from the band structure.
On the left hand side of Fig. 8 our theoretical results, as discussed in the section (B), are plotted (solid lines). The experimental data show a dominant structure near -2.0 eV which can be associated with the theoretical peak at -1.0 eV. Like the theoretical results, this structure exhibits some dispersion to lower binding energies between 63 and 66 eV photon energy (theory between 59 and 63 eV). These emissions can be explained by transitions from the N $`p_x`$ orbitals and by direct transitions from the bulk bands (1) and (2). Furthermore, the experimental results reveal a dispersing structure between -4.2 and -8.2 eV, which is also visible in the theoretical results (between -3.2 and -7.2 eV). In both series, the emissions from that structure become weak for photon energies around 39 eV. Near 55 eV photon energy both experiment and theory show enhanced emissions, dispersing back to lower binding energies. Around 66 eV photon energy the emissions near -4.0 eV are much weaker in theory than in experiment. The theoretical spectra show a significant doubling of the leading peak at h$`\nu `$=78 eV, with emissions near -0.8 eV and -1.8 eV. A similar effect is not seen in the experimental results of Dhesi et al. However, recent measurements by Ding et al. display the double maximum . The latter experiment was performed on a GaN(0001)-(1x1) surface, also grown on sapphire but by means of metal organic chemical vapour deposition (MOCVD). For photon energies of 75 and 80 eV the experimental spectra by Ding et al. show peaks near the upper valence band edge and near -2.0 eV, which can be connected to the peaks in the theoretical spectra for 74 and 78 eV photon energy.
All in all, we can identify two significant structures from the experimental data by Dhesi et al. in our calculated spectra. Comparing their energetic positions we recognize that the theoretical structures are at 1.0 eV lower binding energy. We explain this difference by an inaccuracy of the experimentally determined VBM of 1.0 eV. It should be pointed out that this error also explains the energetic shift of 1.0 eV which is necessary to match the experimental band structure of Dhesi et al. with the theoretical band structure in Ref. .
Apart from the two discussed series, Fig. 8 includes some dashed lined theoretical spectra. These spectra are calculated with the EHT parameters of set B, see Table I . The parameters are related to the band structure of Rubio et al. with a valence band width of 8.0 eV. The spectra are similar to the calculated results already discussed. The leading peak is almost unchanged, while the emissions near the lower valence band edge are shifted by 0.8 eV. This statement is true for the whole theoretical series calculated with the parameters of set B. Comparing with experiment, we can point out two results. The leading peak between -1.0 and -0.3 eV in both theoretical series can be identified with the experimental structure between -2.0 and -1.2 eV. This underlines that the experimental VBM has to be shifted by nearly 1.0 eV to higher binding energies as already stated. Similarly the emissions near -8.2 eV in the experimental spectra can be assumed to lie at -7.2 eV, which would be consistent with the theoretical spectra calculated by the band structure of Vogel et al. (parameters of set A). This means that we are able to determine the valence band width to 7.2 eV, by comparing the experimental spectra with calculated photocurrents based on different band structures.
Furthermore, in the experimental paper of Dhesi et al. it is pointed out that at lower photon energies a non-dispersive feature with a binding energy of approximately -8.0 eV is visible . In Ref. the peak is explained with final state or density of state effects, rather than with a surface state. Our examination however, for photon energies lower than 30 eV clearly reveals significant emissions from a gallium $`s`$ surface state at that energy, and not from the band edge (see section (B)). The theoretical band edge is found at 0.8 eV lower binding energy, and additionally, the variation of the intensity with photon energy is associated with the matrix elements and uniquely attributes this emission to a surface state. In both the theoretical and experimental spectra weak emissions for photon energies between 31 and 39 eV around the lower valence band edge are observed which can be additionally attributed to the surface band emission (see Fig. 8). Near -8.0 eV the experiment shows enhanced emissions for photon energies between 47 and 55 eV , which are also seen in the theory around -7.2 eV. The experimental peaks at highest binding energy are broad enough for including also the emissions from the surface state near -8.0 eV in theory (gallium $`s`$ and nitrogen $`p_z`$ related). The association of a non-dispersive structure with a theoretically estimated band edge merely because of its energetical vicinity can easily lead to erroneous band mapping , especially as surface states often develop near band edges.
Summarizing the above discussion, we have demonstrated that there is a clear agreement between the theoretical and experimental results for a wide energy range. This agreement allows us to show that the determination of the VBM by extrapolating the leading edge could involve significant errors. The determination of the VBM is an important step in the investigation of band bending and valence band discontinuity in heterojunctions. Especially, Wu et al. investigated the band bending and the work function of wurtzite GaN(0001)-(1x1) surfaces by ultraviolet photoemission. By extrapolating the leading peak, the VBM was determined at 2.4 eV above the strong structures, which are located in our theory around -1.0 eV. The difference in the positions of the VBM is 1.4 eV, much more than the accuracy of 0.05 eV which is assumed for this technique. The error in the experimental VBM determination by extrapolation appears to be critical. Besides the VBM, the detailed comparison of calculated and measured photocurrents allows us to determine the bulk band width to 7.2 eV. Furthermore we identify emissions from a surface state, which is related to the gallium adlayer. These emissions are a first hint for the reliability of the used surface geometry and will be further analyzed with the more surface sensitive off-normal photoemission in the next section.
### D Off-normal emission - theory
In sections (B) and (C) priority was given to analyzing the electronic bulk features in normal-emission spectroscopy. In this section we present theoretical results for off-normal emission along the $`\overline{\mathrm{\Gamma }M}`$ and the $`\overline{\mathrm{\Gamma }K}`$ direction. In addition to the electronic structure we are now interested in the geometric structure of the surface. In this context it seems necessary to examine the real space origin of the photocurrent, which is done for two examples before we consider the whole series.
Figure 9 shows two layer resolved spectra in the $`\overline{\mathrm{\Gamma }M}`$ direction. They are calculated for emission angles of $`0^{\circ }`$ (normal emission) and $`18^{\circ }`$ with photon energies of 50 and 55 eV, respectively. The numbers at the layer resolved spectra indicate the number of layers which have been used in the sum of Eq. 1, starting with the topmost layer. The spectra are shown together with the density of states, the bulk valence bands, and the complex final bands. The DOS is calculated for different $`𝐤_{\parallel }`$, referring to the plotted angles and binding energies, such that they can directly be compared with the photocurrents. Also the bulk bands are calculated taking into account the correct $`𝐤_{\parallel }`$. The complex final bands are shifted by the excitation energy onto the valence band structure. For the photoemission spectra the light impinges in the $`yz`$ plane at a polar angle of $`45^{\circ }`$.
For normal emission six peaks can be seen in the photocurrent (Fig. 9). The double peak (C) can be explained by direct transitions from the two topmost valence bands. The positions of the direct transitions are indicated by the dashed lines. As the peak at the lower binding energy side becomes visible above the third layer (12 atomic planes), the peak at the higher binding energy side shows also contributions from the surface layer. These contributions are related to emissions from the nitrogen $`p_z`$ and $`p_y`$ orbitals which yield the largest matrix elements. Peak (H) is only a weak structure. It is explained by direct transitions into final bands which contribute less to the outgoing state. (G) and (G’) are direct transitions from the lower valence bands into the two major final bands. Especially for (G) also emissions from the nitrogen $`p_z`$ orbitals from the third and fourth nitrogen layer have to be taken into account. Peak (I) is clearly related to the emissions from the gallium $`s`$ and nitrogen $`p_z`$ surface state located in the first atomic layers, as already explained in the section (B).
When the emission angle is changed to $`18^{\circ }`$, two emissions ((A) and (B)) appear, which are due to the surface states (a) and (c), respectively (see Fig. 3). Both structures have their origin in the first atomic layers. The structures (C) and (D) are explained by direct transitions. In contrast to structure (C), which is visible above the third layer, structure (D) shows an enhanced contribution from the surface layer (nitrogen $`p_y`$ and $`p_z`$). These contributions are larger than those for the double peak (C) in normal emission at -1.0 eV. The remaining peaks can be explained by direct transitions with weak contributions from the nitrogen $`p_z`$ and $`p_x`$ orbitals of the surface layers.
Summarizing, we find an enhanced surface sensitivity at higher angles. This applies to the states within the gap as well as to resonant structures (e.g. (D) in Fig. 9). The enhanced surface sensitivity at off-normal angles is well known and has recently been investigated by one-step model calculations .
In Fig. 10 a series along the $`\overline{\mathrm{\Gamma }}\overline{M}`$ direction is presented. Structure (A) can be related to the surface state (a) in the surface band structure (see Fig. 3) and is built up by emissions from the topmost nitrogen $`p_z`$ and gallium $`p_z`$ and $`p_x`$ orbitals. The emissions from structure (B) are related to the same orbitals and belong to the surface state (c). Structure (C) can be found at all angles. Apart from direct transitions, emissions from the nitrogen $`p_z`$ and $`p_y`$ orbitals also contribute. Notably, structure (D) displays, besides direct transitions, the DOS of the first 8 atomic layers (see Fig. 9). The emissions (E) and (F) can be explained by the surface resonances (e) and (f) (see Fig. 3), which frame a gap in the projected bulk band structure. The emissions are related to nitrogen $`p_x`$ orbitals of the first three layers, with varying contributions from direct transitions. The dispersion of structure (G) follows the lower valence band edge; in addition to direct transitions, emissions from nitrogen $`p_x`$ and $`p_z`$ orbitals are responsible for this structure. Structure (I) belongs to the surface state (l), see Fig. 3. Structure (G’) is connected to direct transitions, as can be seen in Fig. 9.
In Fig. 11 the theoretical photocurrents along the $`\overline{\mathrm{\Gamma }}\overline{K}`$ direction are shown. The spectra are calculated for angles between $`0^{\circ }`$ and $`30^{\circ }`$, with photon energies between 50 and 66 eV. The light is $`p`$-polarized and incident in the $`xz`$ plane at an angle of $`45^{\circ }`$ with respect to the surface normal. Because of the high emission angles, the $`\overline{K}\overline{M}`$ direction is also reached.
The theoretical spectra show a weak emission (A) for angles around $`18^{\circ }`$. This emission represents the surface band (a) between $`\overline{\mathrm{\Gamma }}`$ and $`\overline{K}`$, which can be seen in Fig. 3. Structure (B) results from the surface band (c), which leaves the projected bulk band structure at $`14^{\circ }`$. The structures (C) and (C’) can be explained by direct transitions from the topmost bulk valence band. Additionally, emissions from the huge density of states of the N $`p_x`$ orbitals have to be taken into account (see the discussion of Fig. 9). The emission (D) results from nitrogen $`p_z`$ orbitals below the first layer. For angles below $`14^{\circ }`$ structure (E) stems from the surface resonance (k) (see Fig. 3). It consists of nitrogen $`p_y`$ orbitals and shows a large dispersion to higher binding energies. Above $`14^{\circ }`$, (E) interferes with emissions from the surface resonance (l). Structure (F) belongs to the surface resonance (l), consisting of nitrogen $`p_z`$ and gallium $`s`$ orbitals, with the main contributions from the nitrogen surface atoms. The remaining structures (H, G, G’ and I) are explained as their counterparts for the $`\overline{\mathrm{\Gamma }}\overline{M}`$ direction.
In both theoretical spectra we are able to identify emissions which are related to the orbital composition of the topmost surface layers. Moreover, emissions from resonances also show contributions from the surface, as has been pointed out by the layer-resolved photocurrent. If these emissions can be identified in experimental data, they would constitute clear fingerprints of the assumed gallium adlayer structure.
### E Off-normal emission - comparison with experiment
In this section we compare our off-normal emission results with the experimental data of Dhesi et al. . For comparing the spectra, it is again important to take into account the shift of 1.0 eV, which is necessary to adjust the VBM (see section C).
In Fig. 10 the spectra for $`\overline{\mathrm{\Gamma }M}`$ are shown. At low binding energies the experiment shows strong emissions, which are identified with structure (C) in the theoretical spectra. Near -8.0 eV the experiment shows a structure which disperses to lower binding energy for higher angles, with decreasing intensity. This behavior is also seen in the theoretical structure (G). The energetic difference between the two experimental structures at $`\overline{\mathrm{\Gamma }}`$ coincides with the theoretical valence band width of 7.2 eV and underlines the results from section C.
For angles above $`14^{\circ }`$ the experimental data display a shoulder between -1.0 and -2.0 eV. This emission can be associated with structure (B) in theory. For $`16^{\circ }`$ and $`18^{\circ }`$ the shoulder becomes very broad, which may be attributed to the theoretical emissions (B) and (A). The theoretical orbital composition of these states is consistent with the results of Dhesi et al., who examined the dependence of this shoulder on polarization and contamination. The theoretical spectra show a further emission from a surface state (I). This emission might be identified with the high binding energy shoulder in the experimental data near -9.0 eV. Around -4.0 eV, the experiment shows two dispersing structures. These structures can be connected with the theoretical emissions (E) and (F), which appear to be weaker, however. Structure (G’) is also seen in experiment as a weak shoulder at low emission angles. Between -4.0 and -5.0 eV the experiment shows an unmarked structure ($`\theta =0^{\circ }`$–$`8^{\circ }`$) which is related to the theoretical structure (H). Thus three surface states and several surface resonances can be identified in experiment. Taking into account the influence of the topmost atomic layers on the photocurrent (see section D), the coincidence of the theoretical and experimental spectra confirms the assumed surface geometry. Moreover, considering the energetic shift of 1.0 eV, the energetic positions of the structures confirm the underlying surface band structure.
Further information can be obtained from the results along the $`\overline{\mathrm{\Gamma }K}`$ direction (Fig. 11). For angles above $`14^{\circ }`$ the experimental data show a structure near -1 eV. This structure can be associated with emission (B) in the theoretical curves, which results from the nitrogen and gallium surface state. Compared to experiment, theory heavily overestimates the intensity. Also, the theoretical band takes off from the projected band structure background with rather strong dispersion only at $`18^{\circ }`$ (see also (c) in Fig. 3), delayed by $`4^{\circ }`$ with respect to experiment, where this band clearly appears already at $`14^{\circ }`$ with very low dispersion. This difference is a hint that the real surface band (c) becomes free of the projected bulk band structure with smaller dispersion along $`\overline{\mathrm{\Gamma }K}`$ and significantly closer to the $`\overline{\mathrm{\Gamma }}`$ point than obtained in our surface band structure calculation. In the same energy range but at lower angles, a shoulder is marked in the spectra which is comparable with structure (C’) in theory.
Near -2.0 eV the experiment marks a structure, dispersing slightly to higher binding energy for larger angles. This structure can be identified with the theoretical emission (C), visible for angles between $`0^{\circ }`$ and $`12^{\circ }`$. The experimental structure disperses from -2.0 eV to -3.0 eV, while the theoretical structure disperses from -0.9 eV to -2.4 eV at $`12^{\circ }`$. Above $`14^{\circ }`$ the experiment shows no peaks for this structure, only shoulders. The theoretical curves show the structures (D) and (E), which at higher angles explain these shoulders and the adjacent experimental peaks on the higher binding energy side.
Below $`14^{\circ }`$ structure (E) reproduces the dispersive experimental structure between -2.6 and -5.6 eV. In contrast to the experimental data, emission (E) is less pronounced and displays less dispersion (about 300 meV). At still lower angles a weak emission (H) is seen, which appears as a weak structure in the experimental data. Near the lower valence band edge, theory shows three structures (G, G’ and I) which can be compared with the experimental structure around -8 eV. The latter displays similar behavior to the theoretical data with respect to dispersion and magnitude, though the experimental emissions are weaker.
Summarizing, we are able to explain all observed experimental structures. The observed energetic positions underline our result in normal emission, namely that the band width is 7.2 eV and that the experimental valence band maximum has to be shifted by 1.0 eV to higher binding energies. In addition to direct transitions, all off-normal emissions are influenced by surface states and resonances, as has been verified by the layer-resolved photocurrent. This demonstrates the surface sensitivity of the experiment, which makes a mapping of bulk valence bands solely from off-normal measurements misleading . Also, we are able to identify three surface emissions, which show the same energetic and intensity behavior in theory and experiment. They can be related to emissions from the topmost nitrogen $`p_z`$ and gallium $`s`$ and $`p_x`$ orbitals. The theoretical dispersion is only slightly at variance with experiment. The emissions from the topmost atomic layers depend sensitively on surface structure and reconstruction. Thus the comparison between experimental and theoretical results confirms the reliability of the assumed theoretical surface model.
## IV Conclusion
Photoemission spectra in normal and off-normal emission for the GaN(0001)-(1x1):Ga surface have been calculated within the one-step model. Normal emission spectra show emissions from a surface state near the lower valence band edge. It is identified by its energetic position, different from the band edge, and by its varying intensity upon inspection of the matrix elements. Furthermore, we demonstrate that a widespread experimental method of determining the VBM by extrapolating the leading edge of the valence band spectra may fail by as much as 1.0 eV. Taking this into account, all the experimental structures can be identified in close agreement with theory. In particular, the valence band width (7.2 eV) agrees with the LDA bulk band structure calculation of Vogel et al. , whereas the GW calculation of Rubio et al. differs by 0.8 eV.
In off-normal emission, surface states near the upper valence band edge can be identified and analyzed with respect to the surface band structure. Several surface resonances are examined and verified by experimental data. The agreement of surface-specific properties in the theoretical and experimental photocurrents is seen as a proof of the assumed surface geometry. The surface is nitrogen terminated with a gallium adlayer.
The experimental emissions are traced back by theory to their origin in the band structure, electronic states, orbital composition and location in direct space. Thus the one-step model calculation is a powerful tool yielding essential insight into the bulk and surface electronic structure. In addition, it lends support to the underlying surface geometry. This work stresses the necessity of such calculations for a reliable interpretation of experimental ultraviolet photoemission data in comparison with calculated band structures.
## V Acknowledgments
Discussions with Prof. M. Skibowski and Dr. L. Kipp are gratefully acknowledged. We thank Prof. K. E. Smith for providing us with the experimental figures. The work was supported by the BMBF, under contract no. 05 SB8 FKA7.
# A Brief Note on Jupiter’s Magnetism
## Abstract
A recent model which gives the contribution of the earth’s solid core to geomagnetism is seen to explain Jupiter’s magnetism also.
<sup>0</sup><sup>0</sup>footnotetext: Email:birlasc@hd1.vsnl.net.in; birlard@ap.nic.in
As is known, Jupiter exhibits an earth-like dipole magnetism $`10^4`$ times that of the earth. Geomagnetism is explained by the dynamo model of the earth’s liquid core. The planet Jupiter, however, is qualitatively different. Though like the earth it has a hot solid metallic core, with a radius of about 20 times that of the earth’s solid core, this core is surrounded by a 60,000 kilometre thick hydrogen mantle.
Recently it was suggested that the earth’s solid core contributes significantly to geomagnetism. This is based on the fact that below the Fermi temperature, Fermions would have an anomalous semionic character - that is, they would obey a statistics in between the Fermi-Dirac and the Bose-Einstein . This would have the consequence that the magnetization density in such a situation, which is given by the well known expression
$$M=\frac{\mu (2N_+-N)}{V}$$
(1)
where $`\mu `$ is the electron magnetic moment and $`N_+`$ is the average number of Fermions with spin up out of an assembly of $`N`$ Fermions, now has a value $`\frac{\mu N}{V}`$, owing to the fact that
$$\frac{1}{2}<\frac{N_+}{N}<1$$
(2)
Inequality (2) expresses the semionic behaviour.
Remembering that the core density of Jupiter is of the same order as that of the earth, while the core volume is about $`10^4`$ times that of the earth, we have $`N10^{52}`$, so that the magnetization $`MV`$ from (1) is $`10^4`$ times the earth’s magnetism, as required.
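To make the orders of magnitude explicit, the scaling just described can be checked with a few lines of code. This is only an illustrative sketch, not part of the original argument: the Bohr magneton is used for $`\mu `$, the Jupiter electron count $`N10^{52}`$ is the estimate quoted above, and the earth value $`N10^{48}`$ follows from the $`10^4`$ volume ratio at similar density.

```python
# Rough order-of-magnitude check of M*V ~ mu*N (Eq. (1) in the
# semionic limit N_+ -> N).  All values are illustrative estimates.
MU = 9.274e-24        # electron magnetic moment (Bohr magneton), J/T
N_JUPITER = 1e52      # electron count in Jupiter's core (from the text)
N_EARTH = 1e48        # earth's core: similar density, ~10^4 smaller volume

moment_ratio = (MU * N_JUPITER) / (MU * N_EARTH)
print(f"Jupiter/earth magnetic moment ratio ~ {moment_ratio:.0e}")
# -> ~1e+04, consistent with Jupiter's dipole field being ~10^4 times
#    that of the earth
```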
Finally, as pointed out in ref., owing to the semionic behaviour as expressed by (2), the magnetization would be sensitive to external magnetic influences, and we could have magnetic reversals as in the case of the earth.
Incidentally it may be pointed out that the same model could also explain the magnetism of Neutron stars and White Dwarfs (cf.ref.).
## 1 Introduction
There is an increasing interest in $`\eta `$-meson physics both experimentally and theoretically. On the experimental side several facilities are now able to produce sufficient $`\eta `$’s to enable a study to be made of their interactions with other particles. In particular, the photon machines MAMI and GRAAL are supplementing the earlier hadronic machines such as SATURNE, CELSIUS and COSY. The current theoretical interest stems partly from the early indications that the $`\eta N`$ interaction is attractive and so could possibly lead to $`\eta `$-nucleus quasi-bound states (e.g. Refs. ). The theoretical approaches fall into two main categories. In the first, the various processes involving $`\eta `$-meson interactions are described in terms of microscopic models containing baryon resonances and the exchange of different mesons (e.g. Refs. , ), which may be based on a chiral perturbation approach (e.g. Ref. ) or a quark model (e.g. Ref. ). Unfortunately, this approach requires a knowledge of the magnitudes and relative phases of many hadron-hadron couplings, several of which are very poorly known. In addition, since $`\eta `$ interactions – in the absence of $`\eta `$-meson beams – can only be studied as final state interactions, one has to exploit relationships between the many processes involved. For example, in the present note, the main interest is in the reaction a) $`\gamma N\rightarrow \eta N`$. However, this is dependent on the final state interaction b) $`\eta N\rightarrow \eta N`$, which in turn depends on the reactions c) $`\pi N\rightarrow \eta N`$ and d) $`\pi N\rightarrow \pi N`$. Similarly, reactions c) and d) are related to e) $`\gamma N\rightarrow \pi N`$. Therefore, any model that claims to describe reaction a) must also see its implications in reactions b), .., e). This, we believe, is too ambitious a program at present. At this stage it is probably more informative to check the consistency between the data of the above five reactions and be able to relate them in terms of a few phenomenological parameters. When this has been accomplished, it will hopefully be possible to understand these parameters in terms of more microscopic models. With this in mind, in Ref. a $`K`$-matrix model was developed by the authors to describe reactions a), b), c) and d) in an energy range of about 100 MeV on each side of the $`\eta `$ threshold. This model was expressed in the form of two coupled channels for $`s`$-wave $`\pi N`$ and $`\eta N`$ scattering, with the effect of the two-pion channel ($`\pi N\rightarrow \pi \pi N`$) being included only implicitly. The latter was achieved by first introducing the two-pion process as a third channel in the $`K`$-matrix and subsequently eliminating that channel as an ”optical potential” correction to the other two channels. It should be emphasized that this is not an approximation but is done only for convenience, since we do not address cross sections involving explicitly two final-state pions.
In Ref. the $`\eta `$-photoproduction cross section was assumed to be proportional to the elastic $`\eta N`$ cross section ($`|T_{\eta \eta }|^2`$). This is in line with the so-called Watson approximation . In this way each of the matrix elements in the two-by-two $`T`$-matrix of Ref. was associated with some specific experimental data – $`T_{\pi \pi }`$ with the $`\pi N`$ amplitudes of Arndt et al. , $`T_{\pi \eta }`$ with the $`\eta `$-production cross section in the review by Nefkens , and $`T_{\eta \eta }`$ with the $`\eta `$-photoproduction cross section of Krusche et al. .
In this note we now wish to treat the $`\gamma N`$ channel explicitly. An enlargement of the $`K`$-matrix basis then permits a direct estimate of the matrix element $`T_{\gamma \eta }`$, so that $`\sigma (\gamma N\rightarrow \eta N)|T_{\gamma \eta }|^2`$, thereby avoiding the earlier assumption that $`\sigma (\gamma N\rightarrow \eta N)|T_{\eta \eta }|^2`$. The $`K`$-matrix would now be a four-by-four matrix with the channels $`\pi N`$, $`\eta N`$, $`\pi \pi N`$ and $`\gamma N`$. In principle, 10 different processes, corresponding to each matrix element, could be analysed simultaneously. However, in practice, it is more convenient to eliminate some channels by the ”optical potential” method already used in Ref. . We, therefore, describe in Section 2 the above reactions in terms of three separate $`T`$-matrices. In Section 3, we give the fitting strategy and also the numerical results in terms of the 13 parameters needed to specify the $`K`$-matrices. This section also includes expansions – in terms of the $`\eta `$ momentum – for the amplitudes of the $`\eta N\rightarrow \eta N`$ and $`\gamma N\rightarrow \eta N`$ reactions near the $`\eta `$ threshold. Section 4 contains a discussion and some conclusions.
## 2 The $`K`$-matrix formalism
In principle, the four channels of interest – $`\pi N`$, $`\eta N`$, $`\pi \pi N`$ and $`\gamma N`$ – should be treated simultaneously. However, it is more convenient and transparent if the problem is analysed in terms of three separate $`T`$-matrices.
### 2.1 Coupled $`\pi N`$ and $`\eta N`$ channels
The first $`T`$-matrix is precisely the same as in Ref. , where only the $`\pi N`$ and $`\eta N`$ channels – denoted by the indices $`\pi `$, $`\eta `$ – are explicit. This can be written as
$$T_1=\left(\begin{array}{cc}T_{\pi \pi }& T_{\pi \eta }\\ T_{\eta \pi }& T_{\eta \eta }\end{array}\right)=\left(\begin{array}{cc}\frac{A_{\pi \pi }}{1-iq_\pi A_{\pi \pi }}& \frac{A_{\pi \eta }}{1-iq_\eta A_{\eta \eta }}\\ \frac{A_{\eta \pi }}{1-iq_\eta A_{\eta \eta }}& \frac{A_{\eta \eta }}{1-iq_\eta A_{\eta \eta }}\end{array}\right),$$
(1)
where $`q_{\pi ,\eta }`$ are the center-of-mass momenta of the two mesons in the two channels $`\pi ,\eta `$ and the channel scattering lengths $`A_{ij}`$ are expressed in terms of the $`K`$-matrix elements, via the solution of $`T=K+iKqT`$, as
$`A_{\pi \pi }=K_{\pi \pi }+iK_{\pi \eta }^2q_\eta /(1-iq_\eta K_{\eta \eta })`$, $`A_{\eta \pi }=A_{\pi \eta }=K_{\eta \pi }/(1-iq_\pi K_{\pi \pi })`$,
$$A_{\eta \eta }=K_{\eta \eta }+iK_{\eta \pi }^2q_\pi /(1-iq_\pi K_{\pi \pi }).$$
(2)
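As a cross-check on these closed forms, the defining relation $`T=K+iKqT`$ can also be solved directly as a matrix equation. The sketch below is not from the original paper; the $`K`$-matrix entries and momenta are arbitrary illustrative numbers, and the point is only that the numerical solution reproduces the reduction formulas of Eqs. (1)–(2).

```python
import numpy as np

def t_from_k(K, q):
    """Solve T = K + i K q T  =>  T = (1 - i K q)^(-1) K for a coupled
    two-channel system; q holds the channel momenta on the diagonal."""
    Q = np.diag(q)
    n = K.shape[0]
    return np.linalg.solve(np.eye(n) - 1j * K @ Q, K)

# illustrative (made-up) K-matrix elements and channel momenta
K = np.array([[0.3, 0.5],
              [0.5, 0.2]])
q = np.array([2.4, 0.9])          # q_pi, q_eta in fm^-1

T = t_from_k(K, q)

# cross-check T_pipi against the closed form of Eqs. (1)-(2)
A_pipi = K[0, 0] + 1j * K[0, 1]**2 * q[1] / (1 - 1j * q[1] * K[1, 1])
T_pipi = A_pipi / (1 - 1j * q[0] * A_pipi)
assert np.isclose(T[0, 0], T_pipi)
```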
At this stage the $`\pi \pi N`$ channel is incorporated as an ”optical model” correction to the corresponding matrix element of $`T_1`$ and the $`\gamma N`$ channel is simply ignored since this $`T`$-matrix is used to describe only reactions b), c) and d), where the effect of the $`\gamma N`$ channel is small being only an electromagnetic correction to these three reactions. As discussed in Ref. various features of the experimental data suggest that the $`K`$-matrix elements can be parametrized in terms of energy independent constants – the background terms $`B_{ij}`$ – plus poles associated with the $`S`$-wave $`\pi N`$ resonances $`N(1535)`$ and $`N(1650)`$. This results in
$`K_{\pi \pi }K_{\pi \pi }(a)=\frac{\gamma _\pi (0)}{E_0-E}+\frac{\gamma _\pi (1)}{E_1-E}+i\frac{K_{\pi 3}q_3K_{3\pi }}{1-iq_3K_{33}}`$ , $`K_{\pi \eta }=B_{\pi \eta }+\frac{\sqrt{\gamma _\pi (0)\gamma _\eta (0)}}{E_0-E}+i\frac{K_{\pi 3}q_3K_{3\eta }}{1-iq_3K_{33}},`$
$$K_{\eta \eta }K_{\eta \eta }(a)=B_{\eta \eta }+\frac{\gamma _\eta (0)}{E_0-E}+i\frac{K_{\eta 3}q_3K_{3\eta }}{1-iq_3K_{33}},$$
(3)
where
$$K_{33}=\frac{\gamma _3(0)}{E_0-E}+\frac{\gamma _3(1)}{E_1-E},K_{\pi 3}=\frac{\sqrt{\gamma _\pi (0)\gamma _3(0)}}{E_0-E}+\frac{\sqrt{\gamma _\pi (1)\gamma _3(1)}}{E_1-E},K_{\eta 3}=\frac{\sqrt{\gamma _\eta (0)\gamma _3(0)}}{E_0-E}.$$
The last terms on the RHS of Eqs. (3) represent the effect of the eliminated $`\pi \pi N`$ channel.
### 2.2 Coupled $`\eta N`$ and $`\gamma N`$ channels
The second $`T`$-matrix involves only the two channels $`\eta N`$ and $`\gamma N`$ – denoted by the indices $`\eta `$,$`\gamma `$ – where now it is the $`\pi \pi N`$ and $`\pi N`$ channels that are treated as optical potentials. This $`T`$-matrix is written as
$$T_2=\left(\begin{array}{cc}T_{\eta \eta }& T_{\gamma \eta }\\ T_{\eta \gamma }& T_{\gamma \gamma }\end{array}\right)=\left(\begin{array}{cc}\frac{A_{\eta \eta }}{1-iq_\eta A_{\eta \eta }}& \frac{A_{\gamma \eta }}{1-iq_\eta A_{\eta \eta }}\\ \frac{A_{\eta \gamma }}{1-iq_\eta A_{\eta \eta }}& T_{\gamma \gamma }\end{array}\right),$$
(4)
$$\mathrm{where}\;A_{\gamma \eta }=A_{\eta \gamma }=K_{\gamma \eta }/(1-iq_\gamma K_{\gamma \gamma }),\;A_{\eta \eta }=K_{\eta \eta }+iK_{\gamma \eta }^2q_\gamma /(1-iq_\gamma K_{\gamma \gamma }).$$
Here we are not interested in $`T_{\gamma \gamma }`$, since this would describe the $`\gamma N\rightarrow \gamma N`$ reaction. The forms of $`K_{\pi \pi }(a)`$, $`K_{\pi \eta }`$, $`K_{33}`$, $`K_{\pi 3}`$ and $`K_{\eta 3}`$ are the same as given above. However,
$$K_{\eta \eta }K_{\eta \eta }(b)=K_{\eta \eta }(a)+i\frac{K_{\eta \pi }q_\pi K_{\pi \eta }}{1-iq_\pi K_{\pi \pi }(a)}.$$
(5)
Also we now need
$$K_{\gamma \eta }=B_{\gamma \eta }+\frac{\sqrt{\gamma _\gamma (0)\gamma _\eta (0)}}{E_0-E}+i\frac{K_{\gamma \pi }q_\pi K_{\pi \eta }}{1-iq_\pi K_{\pi \pi }(a)}+i\frac{K_{\gamma 3}q_3K_{3\eta }}{1-iq_3K_{33}},$$
(6)
$$K_{\gamma \gamma }=\frac{\gamma _\gamma (0)}{E_0-E}+\frac{\gamma _\gamma (1)}{E_1-E}+i\frac{K_{\gamma \pi }q_\pi K_{\pi \gamma }}{1-iq_\pi K_{\pi \pi }(a)}+i\frac{K_{\gamma 3}q_3K_{3\gamma }}{1-iq_3K_{33}}$$
(7)
and
$$K_{\gamma \pi }=B_{\gamma \pi }+\frac{\sqrt{\gamma _\gamma (0)\gamma _\pi (0)}}{E_0-E}+\frac{\sqrt{\gamma _\gamma (1)\gamma _\pi (1)}}{E_1-E}+i\frac{K_{\gamma 3}q_3K_{3\pi }}{1-iq_3K_{33}}$$
(8)
where the last terms on the RHS represent the effect of the eliminated $`\pi N`$- and $`\pi \pi N`$-channels. Also we need
$$K_{\gamma 3}=\frac{\sqrt{\gamma _\gamma (0)\gamma _3(0)}}{E_0-E}+\frac{\sqrt{\gamma _\gamma (1)\gamma _3(1)}}{E_1-E}.$$
(9)
### 2.3 Coupled $`\pi N`$ and $`\gamma N`$ channels
The third $`T`$-matrix involves only the two channels $`\pi N`$ and $`\gamma N`$ – denoted by the indices $`\pi `$,$`\gamma `$ – where now it is the $`\eta N`$ and $`\pi \pi N`$ channels that are treated as optical potentials. This $`T`$-matrix is written as
$$T_3=\left(\begin{array}{cc}T_{\pi \pi }& T_{\gamma \pi }\\ T_{\pi \gamma }& T_{\gamma \gamma }\end{array}\right)=\left(\begin{array}{cc}\frac{A_{\pi \pi }}{1-iq_\pi A_{\pi \pi }}& \frac{A_{\gamma \pi }}{1-iq_\pi A_{\pi \pi }}\\ \frac{A_{\pi \gamma }}{1-iq_\pi A_{\pi \pi }}& T_{\gamma \gamma }\end{array}\right),$$
(10)
$$\mathrm{where}\;A_{\gamma \pi }=A_{\pi \gamma }=K_{\gamma \pi }/(1-iq_\gamma K_{\gamma \gamma }),\;A_{\pi \pi }=K_{\pi \pi }+iK_{\gamma \pi }^2q_\gamma /(1-iq_\gamma K_{\gamma \gamma }).$$
As before, we are not interested in $`T_{\gamma \gamma }`$. The forms of $`K_{\eta \eta }`$=$`K_{\eta \eta }(a)`$, $`K_{\pi \eta }`$, $`K_{33}`$, $`K_{\pi 3}`$ and $`K_{\eta 3}`$ are the same as given above. However,
$$K_{\pi \pi }K_{\pi \pi }(b)=K_{\pi \pi }(a)+i\frac{K_{\pi \eta }q_\eta K_{\eta \pi }}{1-iq_\eta K_{\eta \eta }(a)}.$$
(11)
Also we now need
$$K_{\gamma \pi }=B_{\gamma \pi }+\frac{\sqrt{\gamma _\gamma (0)\gamma _\pi (0)}}{E_0-E}+\frac{\sqrt{\gamma _\gamma (1)\gamma _\pi (1)}}{E_1-E}+i\frac{K_{\gamma \eta }q_\eta K_{\eta \pi }}{1-iq_\eta K_{\eta \eta }(a)}+i\frac{K_{\gamma 3}q_3K_{3\pi }}{1-iq_3K_{33}},$$
(12)
$$K_{\gamma \gamma }=\frac{\gamma _\gamma (0)}{E_0-E}+\frac{\gamma _\gamma (1)}{E_1-E}+i\frac{K_{\gamma \eta }q_\eta K_{\eta \gamma }}{1-iq_\eta K_{\eta \eta }(a)}+i\frac{K_{\gamma 3}q_3K_{3\gamma }}{1-iq_3K_{33}}$$
(13)
where the last terms on the RHS represent the effect of the eliminated $`\eta N`$- and $`\pi \pi N`$-channels. Also we need
$$K_{\gamma \eta }=B_{\gamma \eta }+\frac{\sqrt{\gamma _\gamma (0)\gamma _\eta (0)}}{E_0-E}+i\frac{K_{\gamma 3}q_3K_{3\eta }}{1-iq_3K_{33}}.$$
(14)
The definitions of all other parameters are the same as for $`T_{1,2}`$.
## 3 Fitting strategy and results
Compared with Ref. there are now four new parameters $`B_{\gamma \pi }`$, $`B_{\gamma \eta }`$, $`\gamma _\gamma (0)`$ and $`\gamma _\gamma (1)`$ explicitly dependent on the index $`\gamma `$. These four parameters replace the single free parameter $`A(Phot)`$ that related $`\sigma (\gamma N\rightarrow \eta N)`$ and $`T_{\eta \eta }`$. In all there are now 13 parameters that are determined by a Minuit fit of up to 158 pieces of data – 23 are $`\pi N`$ amplitudes (real and imaginary), 11 are $`\pi N\rightarrow \eta N`$ cross sections [$`\sigma (\pi \eta )`$] and 53 are $`\gamma N\rightarrow \eta N`$ cross sections [$`\sigma (\gamma \eta )`$]. In addition, from Ref. we use up to 48 $`S11(\gamma N\rightarrow \pi N)`$ amplitudes in the energy range $`1350E_{c.m.}1650`$ MeV. There are several reasons for choosing this upper limit:
a) We wish to include the full effect of the $`N(1535)`$.
b) The $`\gamma N\rightarrow \pi N`$ and $`\gamma N\rightarrow \eta N`$ reactions are closely related and so attempting to fit them simultaneously over very different energy ranges could give misleading results. Therefore, we do not attempt to use the available data at higher energies.
c) The values of the $`\gamma N\rightarrow \pi N`$ amplitudes are far from unique – as is clear when comparing the amplitudes of Refs. and . In fact, in view of this lack of uniqueness we do not use the quoted errors of Ref. . Instead, we make two overall fits where, in the first case, all the errors in Ref. are increased to $`\pm \sqrt{2}`$ for both the real and imaginary components and, in the second case, the increase is only to $`\pm 1/\sqrt{2}`$. These choices were made so that the resultant $`\chi ^2`$/dpt for this reaction are comparable to those in the other reactions. We realise that this procedure throws away information. However, the main aim of this work is to study the $`\gamma N\rightarrow \eta N`$ reaction, with the $`\gamma N\rightarrow \pi N`$ reaction playing only a secondary role as a possible stabilizing effect. Therefore, we want a $`K`$-matrix fit that is good for the well established reactions but, at the same time, also reproduces the qualitative trends in the $`\gamma N\rightarrow \pi N`$ reaction suggested by Refs. and . In fact, we could even turn the argument around and say that our $`S11`$ amplitudes are a prediction that is consistent with the other reactions.
In practice, the actual $`\eta `$-production cross section data was used in a reduced form, from which threshold factors have been removed – namely:
$$\sigma (\pi \eta )_r=\sigma (\pi \eta )\frac{q_\pi }{q_\eta }=\frac{8\pi q_\pi }{3q_\eta }|T_{\pi \eta }|^2\quad \mathrm{and}\quad \tau (\gamma \eta )_r=\sqrt{\sigma (\gamma \eta )\frac{E_\gamma }{4\pi q_\eta }}=|T_{\gamma \eta }|.$$
(15)
In Ref. the last equation was replaced by $`\tau (\gamma \eta )_r=A(Phot)|T_{\eta \eta }|,`$ where $`A(Phot)`$ was treated as a free parameter in the Minuit minimization.
At first, because of the lack of uniqueness in the two analyses published in Refs. and , only the 32 $`S11(\gamma N\rightarrow \pi N)`$ amplitudes with $`E_{c.m.}1550`$ MeV were used, since this upper energy limit is about the same as for the $`\gamma N\rightarrow \eta N`$ data. This resulted in a good fit with parameters qualitatively the same as in Ref. and also in line with the Particle Data Group – see columns A, D and PDG in Table 1. In column A, the error bars in the $`S11(\gamma N\rightarrow \pi N)`$ amplitudes of Ref. have all been increased to $`\pm \sqrt{2}`$ – for the reasons discussed earlier. In this case, the overall $`\chi ^2`$/dof and the separate $`\chi ^2`$/dpt are all near unity. However, when in column D the errors are increased to only $`\pm 1/\sqrt{2}`$, the $`\chi ^2`$/dpt for the $`S11(\gamma N\rightarrow \pi N)`$ amplitudes becomes significantly larger. Columns B and C show the corresponding results when the $`S11(\gamma N\rightarrow \pi N)`$ data base is increased to include data with $`E_{c.m.}`$ up to 1650 MeV. The fits are now systematically worse than in column A, with the overall $`\chi ^2`$/dof increasing from 0.89 to 1.23 in column B. In column C – the case with smaller errors and the larger data base – the fit obtained was so poor that reasonable errors on the parameters could not be extracted. The latter fit, when all 13 parameters were varied simultaneously, did not give a Migrad result from Minuit that converged to sensible parameters. The fit displayed in column C is based on the parameters of column B, some of which are first Fixed and then Released and Scanned by Minuit. The comparison with the data being fitted is shown in Figs. 1–4. The main conclusions to be drawn from Table 1 and these figures are:
1) All four fits to the data are reasonable with cases A and B being superior.
2) The main distinguishing feature between the four fits is the relative ability to fit the $`S11(\gamma N\rightarrow \pi N)`$ data, since this is the channel that contributes most to the overall $`\chi ^2`$/dof – with the $`\chi ^2`$/dpt’s from the other four channels being reasonably constant and comparable to unity in all fits. This suggests that it will be hard for the present type of analysis to maintain these latter $`\chi ^2`$/dpt’s and, at the same time, achieve a good $`\chi ^2`$/dpt for the $`S11(\gamma N\rightarrow \pi N)`$ data presented in Refs. and . The authors, therefore, suggest that the $`S11(\gamma N\rightarrow \pi N)`$ amplitudes from the $`K`$-matrix model could be a more realistic set than those in Refs. and , since they are now consistent with more reactions: $`\pi N\rightarrow \pi N`$, $`\pi N\rightarrow \eta N`$ and $`\gamma N\rightarrow \eta N`$.
3) Figure 3 shows that, beyond $`E_{c.m.}1550`$ MeV, cases A and D give larger cross sections than B and C – the difference increasing to about a factor of two by $`E_{c.m.}1650`$ MeV. In the near future, the GRAAL collaboration is expected to provide total cross section data up to this energy and so, hopefully, distinguish between these cases.
In Table 1 the parameters $`\mathrm{\Gamma }(Total)`$, $`\eta (br)`$, $`\pi (br)`$, $`\mathrm{\Gamma }(Total,1)`$ and $`\pi (br,1)`$ are quoted, whereas the earlier formalism is expressed in terms of $`\gamma _\eta (0)`$, $`\gamma _\pi (0,1)`$, and $`\gamma _3(0,1)`$. The two notations are related as follows:
1) $`\gamma _\eta (0)=0.5\mathrm{\Gamma }(Total)\eta (br)/q_\eta [E_0(R)]`$, 2) $`\gamma _\pi (0)=0.5\mathrm{\Gamma }(Total)\pi (br)/q_\pi [E_0(R)]`$,
3) $`\gamma _\pi (1)=0.5\mathrm{\Gamma }(Total,1)\pi (br,1)/q_\pi [E_1(R)]`$, 4) $`\gamma _3(0)=0.5\mathrm{\Gamma }(Total)[1\eta (br)\pi (br)]/q_3[E_0(R)]`$,
5) $`\gamma _3(1)=0.5\mathrm{\Gamma }(Total,1)[1\pi (br,1)]/q_3[E_1(R)]`$ and 6) $`\mathrm{\Gamma }_\gamma (0,1)=2q_\gamma [E_{0,1}(R)]\gamma _\gamma (0,1)`$.
This now requires a choice to be made for the reference energies $`E_0(R)`$ and $`E_1(R)`$, which preferably should be close to the $`E_{0,1}`$ in Table 1. Here, we take simply $`E_{0,1}(R)=1535,1650`$ MeV, respectively. This gives $`q_\eta [E_0(R)]=0.945`$, $`q_\pi [E_0(R)]=2.365`$, $`q_\pi [E_1(R)]=2.770`$, $`q_3[E_0(R)]=1.067`$, $`q_3[E_1(R)]=1.245`$, $`q_\gamma [E_0(R)]=2.436`$ and $`q_\gamma [E_1(R)]=2.829`$ fm$`^{-1}`$. It should be added that this is not an assumption or an approximation; it is just setting a scale that is needed when converting from one notation to the other. In Table 2, the $`\gamma _{\pi ,\eta ,3}(0,1)`$ are tabulated along with $`\mathrm{\Gamma }_\gamma (0,1)`$.
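For bookkeeping, relations 1)–6) are easily packaged as a small helper. In the sketch below the reference momenta are the values just quoted, while the total width and branching ratios passed in the example call are placeholders rather than the fitted entries of Table 1.

```python
# Convert resonance parameters (width, branching ratios) to the
# K-matrix residues gamma_i of Eqs. (3)-(9) for the N(1535) pole.
# The N(1650) relations 3) and 5) are analogous with Gamma(Total,1).
q_eta_E0, q_pi_E0, q_3_E0 = 0.945, 2.365, 1.067   # fm^-1 at E_0(R)

def residues(gamma_tot, br_eta, br_pi):
    """Relations 1), 2) and 4) of the text."""
    gamma_eta = 0.5 * gamma_tot * br_eta / q_eta_E0
    gamma_pi = 0.5 * gamma_tot * br_pi / q_pi_E0
    gamma_3 = 0.5 * gamma_tot * (1.0 - br_eta - br_pi) / q_3_E0
    return gamma_eta, gamma_pi, gamma_3

# e.g. a 150 MeV wide N(1535) with 50%/40% eta/pi branchings (made up)
print(residues(150.0, 0.5, 0.4))
```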
In the above, we have been very explicit in describing the formalism. Therefore, in principle, the reader should be able to reconstruct all three $`T`$-matrices and so determine each of the complex amplitudes needed in the five processes $`\pi N\rightarrow \pi N`$, $`\pi N\rightarrow \eta N`$, $`\eta N\rightarrow \eta N`$, $`\gamma N\rightarrow \pi N`$ and $`\gamma N\rightarrow \eta N`$. This formalism also enables these amplitudes to be calculated at unphysical energies. For example, in the study of possible $`\eta `$-nucleus quasi-bound states, the $`\eta N\rightarrow \eta N`$ amplitudes are needed below the $`\eta `$ threshold. This is easily achieved by simply using an $`\eta `$ momentum ($`q_\eta `$) that is purely imaginary.
In spite of the model being very explicit, it is sometimes convenient to have simplified versions of some of the amplitudes. The ones we consider are expansions in terms of $`q_\eta `$ about the $`\eta `$ threshold – in particular of $`A_{\eta \eta }`$ and $`A_{\gamma \eta }`$. The former results in the usual $`\eta N\rightarrow \eta N`$ effective range expansion of Ref. , the parameters of which are now updated in Table 3. This shows that the scattering length ($`a`$) is larger than that extracted in Ref. – the increase being 15% for case A and 40% for case B. However, it should be remembered that case B extrapolates the model into a region where the $`\gamma N\rightarrow \eta N`$ data are lacking, and it is just this reaction that is crucial in determining the scattering length. Given this expansion, $`T_{\eta \eta }`$ is readily calculated from $`A_{\eta \eta }`$ as in Eq. 1 at energies both above and below the $`\eta `$ threshold. The other amplitude of interest is $`T_{\gamma \eta }`$ in Eq. 4, which is seen to depend on both $`A_{\eta \eta }`$ and $`A_{\gamma \eta }`$. By analogy with the expansion of $`T_{\eta \eta }`$, we express $`T_{\gamma \eta }`$ in the form
$$\frac{1}{T_{\gamma \eta }}=\frac{1}{A_{\gamma \eta }}-iq_\eta \frac{A_{\eta \eta }}{A_{\gamma \eta }}.$$
(16)
The two entities $`1/A_{\gamma \eta }`$ and $`A_{\eta \eta }/A_{\gamma \eta }`$ are then expanded as $`e_i+f_iq_\eta ^2+g_iq_\eta ^4`$, with the parameters $`e_i,f_i,g_i`$ given in Table 4. Both of these expansions do very well over the energy range of Ref. . For example, with case A at $`E_{c.m}=1538.6`$ MeV – an energy that is 50 MeV above the $`\eta `$ threshold – the expansion of $`1/A_{\gamma \eta }`$ gives $`8.4-i20.0`$ fm$`^{-1}`$ compared with the exact value of $`7.9-i19.7`$ fm$`^{-1}`$, and the expansion of $`A_{\eta \eta }/A_{\gamma \eta }`$ gives $`37.12-i5.4`$ compared with the exact value of $`37.08-i5.7`$. This latter agreement and the weak energy dependence of this quantity explain why, in Ref. , the replacement of $`\sigma (\gamma N\rightarrow \eta N)|T_{\gamma \eta }|^2`$ by $`\sigma (\gamma N\rightarrow \eta N)|T_{\eta \eta }|^2`$ was a good approximation, since – as seen from Eq. 1 – $`T_{\gamma \eta }=A_{\gamma \eta }T_{\eta \eta }/A_{\eta \eta }A(Phot)T_{\eta \eta }`$. Also the value of $`A(Phot)=19.74(36)`$ in Table 1 is essentially given by $`1/e_2\frac{1}{40}`$ fm $`18\times 10^{-3}m_\pi ^{-1}`$. In Fig. 5, the real and imaginary components of $`T_{\gamma \eta }`$ are shown when these two expansions are used in Eq. 16, which is then inverted to give $`T_{\gamma \eta }`$. It is seen that they give a good representation over a wide energy range, especially for energies below the $`\eta `$ threshold. This agreement is very similar to that found in Ref. for $`T_{\eta \eta }`$. It should be added that, if the form of $`T_{\gamma \eta }`$ written in Eq. 4 is used directly with expansions of $`A_{\gamma \eta }`$ and $`A_{\eta \eta }`$, then the fit is much poorer – as also seen in Fig. 5.
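A minimal sketch of how these expansions are used in practice is given below. The coefficients supplied here are placeholders (the fitted, generally complex, values of $`e_i,f_i,g_i`$ are those of Table 4), and the same routine works below threshold with a purely imaginary $`q_\eta `$.

```python
import numpy as np

def t_gamma_eta(q_eta, coef_invA, coef_ratio):
    """Invert Eq. (16): 1/T = 1/A_ge - i q_eta (A_ee/A_ge), with both
    bracketed quantities expanded as e + f q^2 + g q^4."""
    e1, f1, g1 = coef_invA    # expansion of 1/A_gamma-eta (fm^-1)
    e2, f2, g2 = coef_ratio   # expansion of A_eta-eta/A_gamma-eta
    inv_t = (e1 + f1 * q_eta**2 + g1 * q_eta**4
             - 1j * q_eta * (e2 + f2 * q_eta**2 + g2 * q_eta**4))
    return 1.0 / inv_t

# placeholder complex coefficients, NOT the fitted values of Table 4
invA = (10.0 - 25.0j, -2.0 + 1.0j, 0.1)
ratio = (37.0 - 5.0j, 0.3, 0.01)

print(t_gamma_eta(0.9, invA, ratio))    # above threshold, q_eta real
print(t_gamma_eta(0.5j, invA, ratio))   # below threshold, q_eta imaginary
```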
## 4 Discussion and conclusions
In this paper the authors have developed a simple $`K`$-matrix parametrization that gives, in an energy range of about 100 MeV on each side of the $`\eta `$ threshold, a good fit to $`\pi N\rightarrow \pi N`$, $`\pi N\rightarrow \eta N`$ and $`\gamma N\rightarrow \eta N`$ data. In addition, it has the same trends as the $`\gamma N\rightarrow \pi N`$ data, which at present are not unique over this energy range. However, this consistent fit should not be considered an end in itself, since it also results in predictions for the $`\eta N\rightarrow \eta N`$ $`S`$-wave amplitude. Near the $`\eta `$ threshold this amplitude has been parametrized in the form of the effective range expansion – the resultant parameters being given in Table 3. Since this expansion is good over a wide energy range on each side of the $`\eta `$ threshold, it is very useful for discussions concerning the possibility of $`\eta `$-nucleus quasi-bound states: e.g. in Ref. the effective range expansion of Ref. was used to study the production of $`\eta `$-nuclei, while Ref. uses such an expansion to describe $`\eta `$-nucleus final state interactions. The indications from Table 3 are that the $`\eta N`$ scattering length is now larger than that extracted in Ref. . This is even more favourable for the existence of $`\eta `$-nucleus quasi-bound states and may lead to an early onset of nuclear P-wave states, which are easier to detect in the Darmstadt experiment outlined in Ref. .
One result of the above fits is the extraction of the photon-nucleon-$`N`$(1535) coupling constant $`\gamma _\gamma (0,1)`$, as indicated in Table 1, which is equivalent to the partial decay width $`\mathrm{\Gamma }_\gamma `$ for $`N(1535)\rightarrow \gamma N`$. The definition of $`\mathrm{\Gamma }_\gamma `$ is not unique, however. Below, this question is elucidated with a simple soluble model of the $`T`$-matrix. This model is also used to understand the interference of a resonant interaction, described by a singularity in the $`K`$-matrix, with potential interactions described by the background parameters $`B`$.
Let us assume a separable $`K`$-matrix model with
$$K_{i,j}=\sqrt{\gamma _i\gamma _j}\left(\frac{1}{E_0-E}+B\right),$$
(17)
where, in the notation of Eqs. 3 and 6, $`B_{ij}=B\sqrt{\gamma _i\gamma _j}`$. This leads to a separable solution for the $`T`$-matrix
$$T_{i,j}=\sqrt{\gamma _i\gamma _j}\frac{1+B(E_0-E)}{E_0-E-i\sum _kq_k\gamma _k[1+B(E_0-E)]}.$$
(18)
When the background term $`B`$ vanishes, this model is equivalent to a simple Breit-Wigner multichannel resonance of eigen-width $`\mathrm{\Gamma }/2=\sum _kq_k\gamma _k`$. However, when we relax this restriction a new structure is built upon the resonance. It is determined by the energy dependent term $`[1+B(E_0-E)]`$, which generates a zero of the cross section at $`E=E_0+1/B`$. Now it is $`1/B`$ that sets a new energy scale, which may be independent of the scale given by the width. For a large $`B`$ one finds the resonance to be accompanied by a nearby zero, whereas for small $`B`$ this zero is moved away beyond the resonance width. The resonance shape is thus very different from the Lorentzian: one reason is the strong energy dependence of $`q_k(E)`$, and another is the pole-background interference.
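The pole–zero structure is easy to exhibit numerically. In the sketch below the residues, the background $`B`$ and the (frozen) channel momenta are invented numbers – in reality $`q_k(E)`$ is energy dependent – and the only point is that the amplitude of Eq. 18 vanishes at $`E=E_0+1/B`$.

```python
import numpy as np

def t_separable(E, E0, B, gammas, qs):
    """Separable T-matrix of Eq. (18); returns the full channel matrix."""
    g = np.sqrt(np.outer(gammas, gammas))
    num = 1.0 + B * (E0 - E)
    den = E0 - E - 1j * np.sum(qs * gammas) * num
    return g * num / den

E0, B = 1535.0, 0.02              # pole position (MeV), background (MeV^-1)
gammas = np.array([30.0, 20.0])   # invented channel residues
qs = np.array([2.4, 0.9])         # frozen channel momenta

for E in (1500.0, 1535.0, E0 + 1.0 / B):
    print(f"E = {E:7.1f} MeV : |T_12|^2 = "
          f"{abs(t_separable(E, E0, B, gammas, qs)[0, 1])**2:.3e}")
# |T_12|^2 vanishes at E = E0 + 1/B = 1585 MeV, the zero discussed above
```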
As discussed above in connection with Table 2, it is usually natural to define the partial width of a resonance on the basis of Eqs. 17 and 18 as $`\mathrm{\Gamma }_\gamma (0,1)=2q_\gamma (E_{0,1})\gamma _\gamma (0,1)`$. For the best fits to the data (sets $`A,B`$ of Table 1) this equation produces $`\mathrm{\Gamma }_\gamma (0)=0.171,0.157`$ MeV and $`\mathrm{\Gamma }_\gamma (1)=0.0,0.080`$ MeV, respectively. However, with complicated phenomenological $`T`$-matrices one could define $`\mathrm{\Gamma }_\gamma `$ otherwise, e.g. by moulding $`T`$ into the Breit-Wigner form in the proximity of $`E=E_0`$. Thus, at $`ReT_{\gamma ,j}=0`$ one has $`ImT_{\gamma ,j}=\frac{\sqrt{(\mathrm{\Gamma }_\gamma /q_\gamma )(\mathrm{\Gamma }_j/q_j)}}{\mathrm{\Gamma }}=\frac{\sqrt{(\mathrm{\Gamma }_\gamma /2q_\gamma )\gamma _j}}{\mathrm{\Gamma }/2}`$. Inserting from Table 2 the values of $`\mathrm{\Gamma }`$ and $`\gamma _j`$ gives another estimate of $`\mathrm{\Gamma }_\gamma `$. For example, with case $`A`$, $`ReT_{\gamma ,\eta }=0`$ at $`E=1540`$ MeV, giving $`ImT_{\gamma ,\eta }=0.0179`$ and $`q_\gamma =2.45`$ fm$`^{-1}`$. This results in $`\mathrm{\Gamma }_\gamma =0.176`$ MeV. Similarly, $`ReT_{\gamma ,\pi }=0`$ at $`E=1535`$ MeV, giving $`ImT_{\gamma ,\pi }=0.0085`$ and $`q_\gamma =2.43`$ fm$`^{-1}`$. This results in $`\mathrm{\Gamma }_\gamma =0.150`$ MeV. The proximity of these three widths reflects the fact that the realistic situation is fairly close to the separable situation described by Eqs. 17 and 18. This is found to hold approximately for all the parameter sets in Table 1.
There is another, somewhat unexpected effect of the $`[1+B(E_0-E)]`$ interference term in Eq. 18. For $`B>0`$ one finds that the amplitudes below the resonance are enhanced, and the amplitudes above the resonance are reduced, with respect to the pure resonance term. This effect is seen clearly for the $`B`$ and $`C`$ parameter sets, where below the resonance the $`B_{\eta \eta }`$ parameter in the $`\eta N`$ channel is the largest and the real parts of the $`\eta N`$ scattering lengths given in Table 3 are also the largest. On the other hand, the $`(\gamma ,\eta )`$ production cross sections, dominated by the final state $`\eta N`$ interactions, become the smallest above the resonance, as seen in Fig. 4. One consequence of this effect is that an extension of the $`(\gamma ,\eta )`$ cross section measurements to energies above the $`N(1535)`$ resonance may be instrumental in fixing the $`\eta N`$ scattering length more precisely. As indicated in the introduction, the real part of this scattering length is crucial for the determination of quasibound states in $`\eta `$–few nucleon systems.
On the experimental side there are several groups , studying the $`\gamma N\rightarrow \eta N`$ reaction in or near this interesting energy range. The observation of the cross section near and above $`E_{c.m.}=1540`$ MeV would be of great interest, enabling a detailed study to be made of the $`N(1535)`$ and possibly leading to a better understanding of the internal structure of this object. At present there is no definite conclusion as to whether or not this resonance structure is due to a pole in the $`K`$-matrix, as advocated here, or arises through coupling to high-lying closed channels - see Ref. .
In the near future, the authors of Ref. are expected to extract, directly from experiment, separate values for the real and imaginary components of $`T(\gamma \eta )`$. These will be analogous to the $`S11(\gamma N\rightarrow \pi N)`$ data already available in Ref. and used in the above fits. Such a development will then enable the present type of $`K`$-matrix analysis to be even more constrained.
One of the authors (S.W.) wishes to acknowledge the hospitality of the Research Institute for Theoretical Physics, Helsinki, where part of this work was carried out. In addition, he was partially supported by grant No. KBN 2P03B 016 15. The authors also thank Drs. R. Arndt and B. Krusche for useful correspondence, and members of the GRAAL collaboration for useful discussions. This line of research involving $`\eta `$-mesons is partially supported by the Academy of Finland.
## Figure Captions
Figure 1. The a) real and b) imaginary parts of the $`s`$-wave $`\pi N\rightarrow \pi N`$ amplitudes of Ref. . Solid line for solution A, dashed for D, dotted for B and dash-dot for C.
Figure 2. The $`\pi N\rightarrow \eta N`$ reaction. Data are from Ref. . Notation as in Figure 1.
Figure 3. The $`\gamma N\rightarrow \eta N`$ reaction. Data are from Ref. . Notation as in Figure 1.
Figure 4. The a) real and b) imaginary parts of the $`S11(\gamma N\rightarrow \pi N)`$ amplitudes of Ref. . Crosses are data from Ref. . Notation as in Figure 1.
Figure 5. The a) real and b) imaginary parts of $`T_{\gamma \eta }`$. The solid curve is the exact value as given by the model. The dotted curve uses expansions of $`1/A_{\gamma \eta }`$ and $`A_{\eta \eta }/A_{\gamma \eta }`$, whereas the dashed curve uses those for $`A_{\gamma \eta }`$ and $`A_{\eta \eta }`$.
# A Realizable, Non-Null Schrödinger’s Cat Experiment
## I Introduction
Twenty years ago Bedford and Wang (BW) devised an experimentally realizable version of the Schrödinger’s Cat (SC) experiment. They started with an ordinary double slit experiment using photons of wavelength $`\lambda _1`$; they then added a refinement. The slits have movable slit covers that allow two possible configurations:
> 1. Configuration Ab has slit A open and slit B closed.
> 2. Configuration aB has slit A closed and slit B open.
The slit control system is triggered by photocells registering the output of a beam splitter. In this way, the slit system (SS) can be set up so that its state vector is entirely determined by a single photon processed by the beam splitter/photocell system. BW claim that if the beam splitter outputs a photon in a 50/50 superposition state, then further application of the superposition principle (SP) according to the standard interpretation of quantum mechanics (SIQM) forces us to conclude that the slit system is in the state
$$|\psi \rangle =\frac{1}{\sqrt{2}}(|Ab\rangle +|aB\rangle ),$$
(1)
and consequently, a double slit interference pattern is to be expected (Figure 1).
BW then add that “although the result seemed a foregone conclusion” the experiment was performed and yielded a null result—no double slit interference. But is this result interesting or informative? The BW detractors claim that BW have merely misquoted and misused the SP—making their experiment pointless. BW insist that their experiment exposes a flaw in SIQM .
Where do BW and their detractors agree? They agree that the null result of the experiment is a foregone conclusion. Why do they agree? Because it is obvious that the position of the slit covers cannot really be uncertain in any quantum-mechanical sense. Their positions are “given away” by several easily measured phenomena, the most obvious being thermal radiation.
Where do BW and their detractors disagree? At the core of the dispute is the question of whether the experimenter can prevent state vector reduction by ignoring information. That is, by choosing not to perform obvious measurements. There are numerous recent examples of experiments in which interesting effects are obtained by choosing not to measure state vectors at some intermediate point in the experiment . But it is obvious that one cannot choose to ignore arbitrarily large numbers of photons. It is unclear whether SIQM provides a prescription for deciding how many and what kind of photons (or other particles) can legitimately be ignored.
There is another line of inquiry suggested by the BW experiment. We propose to devise an experimental configuration in which the state vector of the movable slit covers (or their equivalent) is not given away by thermal radiation or any other effect, and the superposition effect we are trying to measure is not swamped by extraneous phenomena. These conditions could produce a non-null experimental outcome.
## II Modified BW Experiment
Our objective at this stage is to use the BW setup to develop the tools and language to analyze the conjectured non-null experiment. We add to the BW apparatus a source of short wavelength radiation ($`\lambda _2`$) that can be used to probe the state (A open versus A closed) of the system, as shown in Figure 2.
If we just run the $`\lambda _1`$ part of the experiment, and if we take the superposition state $`\frac{1}{\sqrt{2}}(|Ab\rangle +|aB\rangle )`$ at face value, then we get a $`\lambda _1`$ double-slit pattern. Next we observe the following sequence:
> 1. Turn on the $`\lambda _1`$ source and observe the double-slit pattern. Turn on the $`\lambda _2`$ source. (For clarity and simplicity, we can choose to use just one $`\lambda _2`$ particle.) As soon as D$`_{\lambda _2}`$ either detects or fails to detect the $`\lambda _2`$ particle, the $`\lambda _1`$ double-slit pattern must vanish.
If we were to turn on the $`\lambda _1`$ source only after the $`\lambda _2`$ particle has encountered (or failed to encounter) D$`_{\lambda _2}`$, then obviously there will be no $`\lambda _1`$ double-slit pattern.
Now we introduce the following wrinkle:
> 1. First turn on E$`_{\lambda _2}`$, but instead of placing D$`_{\lambda _2}`$ just beyond the A slit, allow the $`\lambda _2`$ particle to follow a long path to a distant mirror and then bounce back to be detected in the lab (Figure 3). (Note that we can set this up for both the transmitted and reflected paths or just one or the other.) Turn on E$`_{\lambda _1}`$, while the $`\lambda _2`$ particle is in flight to the distant mirror(s). Now we ask, “Does a $`\lambda _1`$ double-slit pattern appear while the $`\lambda _2`$ particle is in flight?”
Before answering, let us review the time-line of the experiment, as shown in Figure 4.
> $`t_0`$ – $`\lambda _2`$ emission
> $`t_1`$ – $`\lambda _2`$ encounter with SS
> $`t_2`$ – $`\lambda _1`$ emission
> $`t_3`$ – $`\lambda _1`$ encounter with SS
> $`t_4`$ – $`\lambda _1`$ encounter with double slit interference screen
> $`t_5`$ – $`\lambda _2`$ reflection
> $`t_6`$ – $`\lambda _2`$ detection
Now suppose that a $`\lambda _1`$ double-slit pattern is observed at $`t_4`$. A $`\lambda _2`$ detection event at $`t_6`$ tells us that the A slit was either open or closed (not in the state $`\frac{1}{\sqrt{2}}(|Ab\rangle +|aB\rangle )`$) at time $`t_1`$, so that $`\lambda _1`$ double-slit interference cannot occur at times $`t>t_1`$. This is not a paradox—it is a flat-out contradiction.
Now suppose that a double-slit pattern is not observed at $`t_4`$. The experimenter (who is presumably none other than Wigner’s friend) can then choose during the interval $`t_4<t<t_6`$ to deactivate the $`\lambda _2`$ detectors. If the $`\lambda _2`$ particle floats away without being detected, then the $`\lambda _1`$ double-slit pattern should be observed—again a contradiction.
For completeness, we need to dispose of one other point. The reader has probably thought of something like the following rejoinder:
> State vector reduction does not occur until time $`t_6`$; therefore, at time $`t_2`$, the $`\lambda _1`$ quanta have equal probability of passing through A or B and can produce an interference pattern without contradicting detection of the $`\lambda _2`$ particle at D$`_{\lambda _2}`$ at time $`t_6`$.
We can refute this rejoinder with a simple example. Suppose that particle $`\mathrm{P}_1`$ is subjected to a 50/50 quantum bifurcation. Its wave function becomes
$$|\mathrm{P}_1\rangle =\frac{1}{\sqrt{2}}(|\mathrm{P}_1\rangle _++|\mathrm{P}_1\rangle _{-}).$$
(2)
Following the rejoinder, one can then argue that a second particle $`\mathrm{P}_2`$ can be scattered off $`|\mathrm{P}_1\rangle _+`$ even if a later measurement finds $`\mathrm{P}_1`$ in $`|\mathrm{P}_1\rangle _{-}`$.
Clearly, the rejoinder in this form is neither a sound argument nor a proper expression of SIQM. Nevertheless, we can see a glimmer of something deeper if we continue in the direction the rejoinder leads us. It seems that the $`\lambda _1`$ and $`\lambda _2`$ observations should yield mutually compatible sets of basis vectors for the same Hilbert space. But the usual linear mapping works only in one direction, $`|\lambda _1\rangle _{interference}=\frac{1}{\sqrt{2}}(|\lambda _2\rangle _{Aopen}+|\lambda _2\rangle _{Bopen})`$. There are no expressions
$`|\lambda _2\rangle _{Aopen}=|\lambda _1\rangle _{interference}+\mathrm{other}\mathrm{vectors},`$ (3)
$`|\lambda _2\rangle _{Bopen}=|\lambda _1\rangle _{interference}+\mathrm{other}\mathrm{vectors},`$ (4)
a fact to which we will return in the Conclusions (Section VII).
Now we need to come up with a more realistic superposition state and see what happens to this contradiction.
## III Interference From a Mesoscopic Mirror
We start with a mesoscopic mirror whose wave function is bifurcated. (That is, the mirror appears in one of two possible positions with a 50/50 probability. The wave function for positions in between is zero.)
Both the $`\lambda _1`$ and $`\lambda _2`$ photons are reflected by the mirror. The interval $`t_3-t_1`$ must be kept very short, so that the recoil of the mirror from the $`\lambda _2`$ impact is minimized. Figure 5 shows the experimental setup with the two possible positions of the mirror separated by the distance $`\frac{1}{4}\lambda _1`$. It is evident from Figure 5 that there is an interference node for the reflected $`\lambda _1`$ photons. We can therefore develop the same kind of consistency argument as in Section II.
But is it really possible to prepare the mirror in such a state? We could use a half-silvered mirror, which bifurcates photon wave functions and so can in principle bifurcate its own wave function by interacting with a single photon.
The uncertainty in the mirror velocity from single photon bifurcation (SPB) with photon wavelength $`\lambda `$ is
$$\mathrm{\Delta }v_{SPB}=\frac{h}{\lambda M}.$$
(5)
Let us compare this with the velocity uncertainty due to wave packet spreading. In the initial state of the mirror, it is trapped with a small Gaussian uncertainty $`\mathrm{\Delta }q_i`$. If the mirror is then released and its wave function allowed to spread, the characteristic initial spreading velocity $`\mathrm{\Delta }v_i`$ is
$$\mathrm{\Delta }v_i=\frac{h}{4\pi \mathrm{\Delta }q_iM},$$
(6)
so
$$\frac{\mathrm{\Delta }v_{SPB}}{\mathrm{\Delta }v_i}=\frac{4\pi \mathrm{\Delta }q_i}{\lambda }.$$
(7)
In our opinion, in a realistic experimental setup, $`\lambda \mathrm{\Delta }q_i`$ and hence $`\mathrm{\Delta }v_i>\mathrm{\Delta }v_{SPB}`$.
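As a concrete illustration of this estimate, consider an optical bifurcating photon and a nanometre-scale trap; both numbers below are assumptions made for the sake of the example, not values fixed by the experiment.

```python
import math

lam = 500e-9     # assumed wavelength of the bifurcating photon, m
dq_i = 1e-9      # assumed initial trap uncertainty Delta q_i, m

ratio = 4 * math.pi * dq_i / lam      # Eq. (7)
print(f"Delta v_SPB / Delta v_i = {ratio:.3f}")
# ~0.025 << 1, so wave packet spreading dominates the single-photon kick
```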
For our purposes, it is more difficult to work with the Gaussian distribution associated with $`\mathrm{\Delta }v_i`$ than to work with a bifurcated state, but it is not impossible. Note incidentally that if we work with $`\mathrm{\Delta }v_i`$, we have no need of a triggering event.
Let us confine the mirror in a potential well $`U`$ (see the appendix for the mechanism of the well) whose center is at $`z=0`$ so that for suitably small $`z`$, $`U`$ is approximated by $`U=\frac{1}{2}kz^2`$. This gives us a harmonic oscillator with mass $`M`$ (the mass of the mirror). The essence of our scheme is to trap the mirror in the well and then dissipate energy until the ground state of the oscillator is reached. We then have
$$\frac{1}{2}k(\mathrm{\Delta }q_i)^2=\frac{1}{2}M(\mathrm{\Delta }v_i)^2,$$
(8)
where $`\mathrm{\Delta }q_i`$ and $`\mathrm{\Delta }v_i`$ are the initial position and velocity uncertainties at the start of the experiment. We begin the experiment by turning off $`U`$, so that the center-of-mass wave function begins to spread with characteristic velocity $`\mathrm{\Delta }v_i`$. We must demonstrate that
> 1. $`\mathrm{\Delta }v_i`$ is really the dominant effect, and
> 2. the $`\lambda _1`$ and $`\lambda _2`$ inconsistency argument can be brought to bear without using the especially advantageous bifurcation superposition of the mirror wave function.
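The preparation stage just described (trap the mirror, cool to the oscillator ground state, then release) can be summarized in a short sketch. The mirror mass and spring constant are invented mesoscopic values; the check merely confirms the energy balance of Eq. (8).

```python
import math

HBAR = 1.054571817e-34   # J s

def ground_state_spread(M, k):
    """Ground-state uncertainties of the trapped mirror; Eq. (8) holds
    because the oscillator ground state has equal average kinetic and
    potential energy."""
    omega = math.sqrt(k / M)
    dq_i = math.sqrt(HBAR / (2.0 * M * omega))
    dv_i = math.sqrt(HBAR * omega / (2.0 * M))
    # energy balance of Eq. (8): (1/2) k dq^2 == (1/2) M dv^2
    assert math.isclose(0.5 * k * dq_i**2, 0.5 * M * dv_i**2)
    return dq_i, dv_i

# assumed mesoscopic mirror: M = 1e-15 kg, k = 1e-6 N/m
dq, dv = ground_state_spread(1e-15, 1e-6)
print(f"dq_i = {dq:.2e} m, dv_i = {dv:.2e} m/s")
```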
## IV Experimental Geometry
We must now lay out the geometry that will lead to the kind of inconsistency we are seeking. We will perform the analysis to first order; that is, neglecting the small amplitude attenuation due to differences in the propagation distance.
Consider a double slit experiment for $`\lambda _1`$ (Figures 6 and 7), in which the slits and the interference screen/detection system are each $`\frac{5}{2}\lambda _1`$ from the mirror. The distance between the slits is set at $`2.29\lambda _1`$, so that the distance between the right and left first nodes is also $`2.29\lambda _1`$. The mirror is circular, with diameter $`\frac{5}{2}\lambda _1`$.
We allow the wave function of the mirror to spread until $`\mathrm{\Delta }q=\lambda _1`$. The path length difference (PL$`\mathrm{\Delta }`$) between light from the far slit and the near slit is $`\frac{1}{2}\lambda _1`$ at the first node when the mirror is in its initial position ($`z_m=0`$). We must compare this with the PL$`\mathrm{\Delta }`$ for the mirror positions $`z_m=\pm \lambda _1`$. The results are given in Table I.
We see from Table I that for $`z_m=\pm \lambda _1`$, the position of the first node is substantially shifted. If the reflected $`\lambda _1`$ wave functions are based on superpositions of light reflected by the mirror from the positions smeared out over a $`\mathrm{\Delta }q=\lambda _1`$ Gaussian distribution, the nodes at $`x=\pm 1.145\lambda _1`$ will be measurably less distinct than if $`z_m`$ is fixed at $`0`$. This is the crucial effect that we need to observe.
The third step in the setup of our experiment is the introduction of the $`\lambda _2`$ photons. As in Section II, the $`\lambda _2`$ photons are used to measure the mirror’s position. The $`\lambda _2`$ quanta, in plane wave form, pass through an aperture of width $`W_a`$ and are incident on the mirror at a shallow angle $`\theta `$. They are reflected at the same angle to the D$`_{\lambda _2}`$ detector. Setting $`W_a=4\lambda _2\mathrm{cos}\theta `$, we see that a detection event at D$`_{\lambda _2}`$ determines that the mirror position was in the range
$$|z_m|\le \frac{W_a}{4\mathrm{cos}\theta }=\lambda _2.$$
(9)
If we set $`\lambda _2=\frac{1}{4}\lambda _1`$, then
$$|z_m|\le \frac{1}{4}\lambda _1.$$
(10)
This condition leads to a much narrower range of values for PL$`\mathrm{\Delta }`$ than those given in Table I. If (10) is satisfied, the interference node is then resharpened (Figure 8). This resharpening then leads to the same inconsistency as in Section II. Figure 9 shows the complete apparatus for this modified BW experiment based on a mesoscopic mirror with wave packet spreading.
## V Interference Patterns
The interference effects on which this experiment is based are rather subtle, so they must be treated carefully. Coherent light emerges from two pointlike slits, A and B. The photons rebound off the wave function of the mirror and interfere on a screen between the slits. The details of the interference pattern vary, depending upon the mirror’s wave function.
The $`E`$ field for the light from slit A at a point $`D`$ away from that slit is given by
$$E_A=K\int P(z)\mathrm{cos}\left[\frac{4\pi }{\lambda _1}\sqrt{\left(z+\frac{5}{2}\lambda _1\right)^2+D^2/4}\right]dz,$$
(11)
where $`P(z)=|\psi (z)|^2`$ is the probability density for the mirror’s position and $`K`$ is a scaling factor. The separation between the two slits (which should be close to 2.29$`\lambda _1`$) is $`S`$. For slit B, the expression is similar,
$$E_B=K\int P(z)\mathrm{cos}\left[\frac{4\pi }{\lambda _1}\sqrt{\left(z+\frac{5}{2}\lambda _1\right)^2+(S-D)^2/4}\right]dz.$$
(12)
The probability density is $`P(z)=\delta (z)`$ if the mirror’s position is fixed. For the Gaussian wave packet, the probability density is
$$P(z)=\sqrt{\frac{1}{2\pi \lambda _1^2}}e^{-z^2/2\lambda _1^2}.$$
(13)
The magnitude of the observed interference pattern is just given by the intensity,
$$I=\frac{1}{2}(E_A+E_B)^2.$$
(14)
Since both $`E_A`$ and $`E_B`$ are functions of $`D`$, $`I`$ is a function of $`D`$ as well. The variation in $`I`$ in the range $`0<D<S`$ gives us the interference effect.
Figure 8 shows the actual interference patterns generated by two different wave functions, with the slit separation set to $`S=2.3\lambda _1`$. In the first, the wave function is the Gaussian (13). In the second, that wave function has been truncated to the region $`|z_m|\frac{1}{4}\lambda _1`$. The interference fringes at the edges are much sharper for the truncated wave function, as is needed to produce the contradiction.
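For readers who wish to reproduce the qualitative behavior of Figure 8, the following is a minimal numerical sketch of Eqs. (11)–(14) in Python (our own illustration, not code from the original work; the integration range of $`\pm 6\lambda _1`$ and the renormalization of the truncated distribution are our assumptions):

```
import numpy as np

lam1 = 1.0                        # lengths in units of lambda_1
S = 2.3 * lam1                    # slit separation used for Figure 8
z = np.linspace(-6.0, 6.0, 4001)  # grid of mirror positions z_m
dz = z[1] - z[0]

def E_slit(dist, P):
    # Eqs. (11)/(12): field from one slit, a distance `dist` along the screen
    phase = (4.0 * np.pi / lam1) * np.sqrt((z + 2.5 * lam1)**2 + dist**2 / 4.0)
    return np.sum(P * np.cos(phase)) * dz

# Eq. (13): Gaussian wave packet with Delta q = lambda_1
P_gauss = np.exp(-z**2 / (2.0 * lam1**2)) / np.sqrt(2.0 * np.pi * lam1**2)
# Truncation to |z_m| <= lambda_1/4, renormalized (our assumption)
P_trunc = np.where(np.abs(z) <= 0.25 * lam1, P_gauss, 0.0)
P_trunc = P_trunc / (np.sum(P_trunc) * dz)

for D in np.linspace(0.0, S, 24):
    I_g = 0.5 * (E_slit(D, P_gauss) + E_slit(S - D, P_gauss))**2  # Eq. (14)
    I_t = 0.5 * (E_slit(D, P_trunc) + E_slit(S - D, P_trunc))**2
    print(f"D = {D:4.2f}  I(Gaussian) = {I_g:9.6f}  I(truncated) = {I_t:9.6f}")
```

Scanning the printed intensities as a function of $`D`$ exhibits the contrast described above: the minima near the nodes are shallower for the Gaussian distribution than for the truncated one.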
We have so far treated the experimental geometry as if $`z`$ motion were the only wave packet spreading that occurs. There is other movement that can spoil the experiment if not dealt with. First, let us consider sideways drift of the mirror in the $`xy`$-plane. Conceptually, the simplest way to deal with this is to have an “out of position” sensing system made of beams and detectors. We then make a large number of experimental runs and use only the data obtained when the mirror is not “out of position.”
More difficult is tilting of the mirror out of the $`xy`$-plane. There are two methods for dealing with this. The preferred method would be gyroscopic stabilization by rotation in the $`xy`$-plane. This must certainly be used in the process of trapping and confining the mirror (Appendix A) that precedes the experiment proper. But we need to be careful about this, as too rapid motion could cause difficulties. The second method involves a beam and detectors. The beam is incident perpendicular to the mirror’s initial position and $`\lambda _{beam}\gg \lambda _1`$, so, regardless of interference or reduction, the beam provides negligible information about $`z_m`$. The recoil velocity of the mirror from the beam impact is relatively large, but as with $`\lambda _2`$, the beam impact is timed just before $`t_3`$, so the recoil distance is small.
## VI Restrictions on Experimental Conditions
We have discussed the geometry of the $`\lambda _1`$, $`\lambda _2`$ inconsistency using only the relative values of $`\lambda _1`$, $`\lambda _2`$, and $`W`$. We must now demonstrate that this is the dominant effect for some values of $`\lambda _1`$, $`M`$, and $`\mathrm{\Delta }q_i`$. We start by considering the problem of radiation from the mirror. This can ruin the experiment by “giving away” the mirror’s position.
In order to get numerical results, we have used parameters for Vanadium:
> density $`=\rho =6.1`$ g/cm<sup>3</sup>
> speed of sound $`=v_s=3\times 10^5`$ cm/s
> atomic weight $`=a_w=50`$ amu
> lattice spacing $`=d=2.4\times 10^{-8}`$ cm.
More importantly, we have to choose a value for $`\lambda _1`$. The one that seems to work best is $`10^{-6}`$ cm. For photons, this would mean x-rays. At x-ray wavelengths, simple geometric reflection breaks down, since the frequency of the incoming radiation nears the plasma frequency of the mirror. So instead of photons, we will need to use $`\sim `$1 keV electrons, which do reflect properly from the mirror surface.
Many photons whose total energy is $`E_T`$ give less position information than a single photon with energy
$$E_T=\frac{hc}{\lambda }.$$
(15)
So we set
$$E_T=\frac{hc}{W}=\frac{hc}{\frac{5}{2}\lambda _1},$$
(16)
and also set $`E_T`$ equal to the thermal output of the mirror during time $`t_s=\frac{\lambda _1}{\mathrm{\Delta }v_i}`$, the time necessary for the spreading to reach $`\lambda _1`$:
$$E_T=t_se\sigma T^4\left[2\pi \left(\frac{5}{4}\lambda _1\right)^2\right]$$
(17)
so
$$\frac{hc}{\frac{5}{2}\lambda _1}=\frac{\lambda _1}{\mathrm{\Delta }v_i}e\sigma T^4\left[2\pi \left(\frac{5}{4}\lambda _1\right)^2\right],$$
(18)
which reduces to
$$T^4=\frac{1}{t_s}\frac{1}{e\sigma }\frac{hc}{\frac{1}{2}\pi \left(\frac{5}{2}\lambda _1\right)^3}.$$
(19)
Since
$$t_s=\frac{\lambda _1}{\mathrm{\Delta }v_i}=\frac{4\pi \mathrm{\Delta }q_iM\lambda _1}{h},$$
(20)
we see that
$$T^4=\frac{h^2c}{e\sigma }\frac{1}{2\pi ^2\mathrm{\Delta }q_iM\left(\frac{5}{2}\right)^3\lambda _1^4},$$
or
$$T^4=\frac{2.6\times 10^{-35}}{\lambda _1^6\mathrm{\Delta }q_i}.$$
(21)
To evaluate (21) we need to make a decision about what value of $`\mathrm{\Delta }q_i`$ to use. This is probably the hardest parameter to pin down without actually performing the experiment, but the natural choice seems to be
$$\mathrm{\Delta }q_i=\frac{1}{2}d,$$
(22)
which in this case is $`1.2\times 10^{-8}`$ cm. Then using $`\lambda _1=10^{-6}`$ cm, we get (a numerical cross-check follows the list):
> $`\mathrm{\Delta }v_i=3.8\times 10^{-3}`$ cm/s,
> $`t_s=2.6\times 10^{-4}`$ s,
> $`M=1.1\times 10^{-17}`$ g, and
> $`T_R=2.2\times 10^2`$ K.
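These values follow from a few lines of arithmetic (a sketch in Python with standard CGS constants; the other inputs are the numbers quoted above):

```
from math import pi

h = 6.626e-27        # Planck constant, erg s
lam1 = 1.0e-6        # cm
d = 2.4e-8           # lattice spacing, cm
dq_i = 0.5 * d       # Eq. (22): 1.2e-8 cm
M = 1.1e-17          # quoted mirror mass, g

dv_i = h / (4.0 * pi * dq_i * M)          # Eq. (6):  ~4.0e-3 cm/s
t_s = lam1 / dv_i                         # Eq. (20): ~2.5e-4 s
T_R = (2.6e-35 / (lam1**6 * dq_i))**0.25  # Eq. (21): ~2.2e2 K
print(dv_i, t_s, T_R)
```

The results agree with the values quoted above to within the rounding of the constants used in the text.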
We will subsequently find that there are other temperature limits more stringent than this. But this limit is conceptually important, because it is the temperature below which we can treat the mirror as an independent system.
The main phenomenon that competes with wave packet spreading in determining the initial velocity is thermal motion within the mirror. At very low temperature, thermal energy in the mirror is stored in the lowest frequency phonons available—the lowest harmonics of the disk.
We can find the temperature cut-off where the two competing effects are roughly equal by equating the total momenta
$$(\mu v_s)\sqrt{2}=M\mathrm{\Delta }v_i,$$
(23)
where $`\mu `$ is the reduced mass, $`\mu \approx \frac{1}{2}M`$. From this, it follows that
$$\frac{1}{2}\mu v_s^2=\frac{1}{2}M(\mathrm{\Delta }v_i)^2.$$
(24)
But $`\frac{1}{2}\mu v_s^2`$ is half the thermal energy in each mode, which is also
$$E=\frac{\hbar \omega /2}{e^{\hbar \omega /k_BT}-1};$$
(25)
therefore, at $`T_c`$ we have
$$\frac{1}{2}M(\mathrm{\Delta }v_i)^2=\frac{\hbar \omega /2}{e^{\hbar \omega /k_BT_c}-1}$$
$$\mathrm{ln}\left[\frac{\hbar \omega }{M(\mathrm{\Delta }v_i)^2}+1\right]=\frac{\hbar \omega }{k_BT_c},$$
(26)
so
$$T_c=\frac{\hbar \omega }{k_B}\left(\mathrm{ln}\left[\frac{\hbar \omega }{M(\mathrm{\Delta }v_i)^2}+1\right]\right)^{-1}.$$
(27)
From the geometry of the problem, we can see that $`\omega =\frac{2\pi v_s}{\lambda }`$, where $`\lambda `$ is now the diameter of the mirror disk, so for an arbitrary value of $`\lambda _1`$,
$$\omega =\frac{2\pi v_s}{\frac{5}{2}\lambda _1}=\frac{4\pi }{5}\frac{v_s}{\lambda _1}=\frac{7.5\times 10^5}{\lambda _1}.$$
(28)
So our value of $`T_c`$ becomes
$$T_c=(5.7\times 10^{-6})\left[\lambda _1\mathrm{ln}\left(\lambda _1(1.1\times 10^{12})\right)\right]^{-1}.$$
(29)
For $`\lambda _1=10^6`$ cm,
$$T_c=4.1\times 10^{-1}\mathrm{K}.$$
(30)
Below $`T_c`$, wave packet spreading dominates the effect of thermal phonons within the mirror.
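As a numerical check of Eqs. (27)–(30) (a sketch with standard constants; $`M`$ and $`\mathrm{\Delta }v_i`$ are the rounded values from the list above):

```
from math import pi, log

hbar = 1.055e-27     # erg s
kB = 1.381e-16       # erg/K
v_s = 3.0e5          # cm/s
lam1 = 1.0e-6        # cm
M = 1.1e-17          # g
dv_i = 3.8e-3        # cm/s

omega = (4.0 * pi / 5.0) * v_s / lam1   # Eq. (28): ~7.5e11 s^-1
T_c = (hbar * omega / kB) / log(hbar * omega / (M * dv_i**2) + 1.0)  # Eq. (27)
print(T_c)  # ~0.4 K, close to Eq. (30) to within the rounding in Eq. (29)
```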
Now consider the thermal conditions outside the mirror. In order to trap the mirror in a potential well before the beginning of the experiment proper, the mirror needs to start with a very small thermal velocity—not too much greater than $`\mathrm{\Delta }v_i`$. Approximating by the ideal gas value,
$$\frac{1}{2}Mv_T^2=\frac{3}{2}k_BT_g,$$
(31)
we get
$$v_T=6.2\sqrt{T_g}\mathrm{cm}/\mathrm{s}.$$
(32)
If we set $`v_T=\alpha \mathrm{\Delta }v_i`$, then
$$T_g=3.8\times 10^{-7}\alpha ^2\mathrm{K}.$$
(33)
For $`\alpha <10^3`$, (33) is a much tighter restriction than (30). Part of our experimental strategy would be to use a sequence of trapping maneuvers to raise the allowable values of $`\alpha `$ and $`T_g`$. One of these maneuvers would likely involve attaching the mirror to a more massive object using macromolecules that can change their tertiary structure .
We also require that the density of the gas surrounding the mirror be low enough so that there will be no collisions during the time $`t_s`$. This means that there must be less than one molecule in the volume
$$V=(v_gt_s)\pi \left(\frac{5}{4}\lambda _1\right)^2,$$
(34)
where $`v_g`$ is the rms velocity of the gas molecules at temperature $`T_g`$. For Rubidium (frequently used in Bose-Einstein condensate experiments),
$$v_g=1.06\alpha \mathrm{cm}/\mathrm{s},$$
(35)
and
$$V=1.33\times 10^{-15}\alpha \mathrm{cm}^3.$$
(36)
This yields a density of
$$\rho _\alpha =\frac{1.2\times 10^{-6}}{\alpha }\mathrm{mole}/\mathrm{L}.$$
(37)
Note that if $`\rho \approx \rho _\alpha `$, the mirror will sometimes approach the trap with a Brownian velocity
$$v_{Br}<v_T=\alpha \mathrm{\Delta }v_i.$$
(38)
Taking full advantage of this, we should be able to work at values of $`\alpha >5`$ and hence $`T_g>10`$ $`\mu `$K.
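The gas-phase numbers can be verified in the same way (a sketch; the constants are standard, and taking 85 amu for the Rubidium mass is our assumption):

```
from math import sqrt, pi

kB = 1.381e-16                      # erg/K
NA = 6.022e23                       # 1/mol
M, dv_i, t_s, lam1 = 1.1e-17, 3.8e-3, 2.6e-4, 1.0e-6

vT_per_sqrtK = sqrt(3.0 * kB / M)          # Eq. (32): ~6.2 cm/s per sqrt(K)
Tg_per_alpha2 = (dv_i / vT_per_sqrtK)**2   # Eq. (33): ~3.8e-7 K
m_Rb = 85.0 * 1.66e-24                     # g (assumed isotope mass)
vg_per_alpha = sqrt(3.0 * kB * Tg_per_alpha2 / m_Rb)      # Eq. (35): ~1.06 cm/s
V_per_alpha = vg_per_alpha * t_s * pi * (1.25 * lam1)**2  # Eq. (36): ~1.3e-15 cm^3
rho_coeff = 1.0e3 / (V_per_alpha * NA)                    # Eq. (37): ~1.2e-6 mol/L
print(vT_per_sqrtK, Tg_per_alpha2, vg_per_alpha, V_per_alpha, rho_coeff)
```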
## VII Conclusions
We have designed an experiment that must have a consistent outcome. The outcome can certainly be consistent if desharpening of the $`\lambda _1`$ nodes is not found to occur. But then quantum mechanics does not correctly predict the $`\lambda _1`$ pattern. If we want to retain quantum mechanics, $`\lambda _1`$ desharpening should occur. Then, to avoid the Wigner’s friend contradiction, we are forced to jettison SIQM and take another path.
The $`\lambda _1`$ pattern must be unaffected by $`\lambda _2`$ detection, even though $`\lambda _2`$ detection restricts the mirror to $`|z_m|\frac{\lambda _1}{4}`$. This means that $`\lambda _1`$ develops the desharpened $`|z_m|<\lambda _1`$-related nodes without encountering the mirror in the zone $`𝒵`$ where $`\lambda _1>|z_m|>\frac{\lambda _1}{4}`$. Presumably, this is pretty much what happens with or without $`\lambda _2`$. $`\lambda _1`$ must encounter the mirror itself only in a small region $`\mathrm{\Delta }z\lambda _1`$, and in most of the larger region, $`|z_m|<\lambda _1`$, it will encounter some kind of signal from the mirror. This signal cannot reflect the $`\lambda _1`$ wave, but it can modulate the wave to produce the correct interference pattern if
> 1. the mirror emits the signal continuously, like the wake of a boat, and
> 2. the signal contains enough information about the evolution of the state of the mirror.
Note that although the linear wave equation correctly predicts the shape of the $`\lambda _1`$ nodes, the underlying process, with propagation of the modulating signal taking the place of wave packet spreading, is fundamentally nonlinear.
Let us now consider the experiment from a mathematical point of view. We find that there is a “duality” between the $`\lambda _2`$ position measurement and the $`\lambda _1`$ reflection interference. We already know that this duality does not operate in the usual manner to produce two different sets of basis vectors for the same space. Instead, the $`\lambda _1`$ and $`\lambda _2`$ measurements lead us to two different vector spaces, that presumably are tangent in some sense to the (infinite-dimensional) manifold that actually represents the state of the system. The superposition principle holds only within these individual vector spaces.
It is natural to conjecture that this duality is a special case of a multiplicity of distinct properties and corresponding vector spaces, each of which is accessed by a different probe of the mirror system. Ordinarily, each probe disrupts all the others to such an extent that the multiplicity is not evident.
Consider the information-carrying signal in this light. The signal exists in the space that describes the experiment but not in the “tangent” spaces that are the setting for SIQM. The signal is able to correctly modulate $`\lambda _1`$, because it locally carries information about phase correlations that occur elsewhere. Such richness of information content is possible only if the signal dwells in a very large and profoundly non-linear space.
The effects we have described can exist only under extreme conditions of low temperature with very careful state preparation. They are nevertheless based on rather general features of wave mechanics. The necessary temperature regime is now accessible to experimentalists. We believe that an experiment of this type can be performed, although it probably would entail the use of states more specifically tailored to micro-Kelvin conditions.
## A Trapping, Confining, and Releasing the Mirror to Start the Experiment
We propose to trap the mirror by floating it in a magnetic field balanced by a weak fictitious gravitational field due to an acceleration. The magnetic field would induce a superconducting current in the mirror. The $`x`$ and $`y`$ components of the $`\stackrel{}{B}`$ field will then act on the current to produce a force opposite to the fictitious force. The mirror will sit in a potential well approximately given by
$$\frac{1}{2}kz_m^2=\frac{1}{2}Mv^2$$
(A1)
In particular, $`k`$ must satisfy
$$\frac{1}{2}k(\mathrm{\Delta }q_i)^2=\frac{1}{2}M(\mathrm{\Delta }v_i)^2,$$
(A2)
so
$$k=1.1\times 10^{-6}\mathrm{erg}/\mathrm{cm}^2$$
(A3)
and the oscillator frequency $`\omega _{os}`$ is
$$\omega _{os}=\sqrt{\frac{k}{M}}=3.2\times 10^5\mathrm{s}^{-1}.$$
(A4)
Let us calculate the magnitude of the $`\stackrel{}{B}`$ field. We simplify the problem by replacing the mirror disk with a ring of radius $`\lambda _1`$. Also,
$$B_z\approx B_0\mathrm{sin}\omega t,$$
(A5)
where $`\frac{2\pi }{\omega }\gg t_s`$, and
$$B_r\approx B_{\perp }\mathrm{sin}\omega t$$
$$B_{\perp }\approx \eta B_0,$$
(A6)
where $`\eta `$ is slowly varying with respect to $`z`$. Then from
$$\oint Eds=-\frac{1}{c}\frac{d\mathrm{\Phi }_B}{dt}$$
(A7)
we get
$$E(2\pi \lambda _1)=\frac{1}{c}B_0\pi \lambda _1^2(\omega \mathrm{cos}\omega t)$$
(A8)
$$E=\frac{1}{2c}\lambda _1B_0\omega \mathrm{cos}\omega t.$$
(A9)
This acts on superconducting electrons to produce
$$a=\frac{F}{m_e}=\frac{eE}{m_e}=\frac{e\lambda _1}{2m_ec}B_0\omega \mathrm{cos}\omega t,$$
(A10)
and
$$v=\frac{e\lambda _1}{2m_ec}B_0\mathrm{sin}\omega t.$$
(A11)
The radial component of $`\stackrel{}{B}`$, $`B_r`$, acts on each electron to produce a force
$$F_e=\frac{ev}{c}B_r$$
$$F_e=\frac{e^2\lambda _1}{2m_ec^2}B_0B_{\perp }\mathrm{sin}^2\omega t.$$
(A12)
If there are $`\nu `$ Cooper pairs for each lattice position, and $`N`$ is the total number of atoms in the mirror, then the total upward force on the mirror is
$$F_{EM}=N\nu \eta \frac{e^2\lambda _1}{m_ec^2}B_0^2\mathrm{sin}^2\omega t.$$
(A13)
Because $`t_{EM}=\frac{2\pi }{\omega }\gg t_s`$, we can time the experiment to be performed when $`F_{EM}`$ is at its maximum,
$$F_{EMmax}=N\nu \eta \frac{e^2\lambda _1}{m_ec^2}B_0^2.$$
(A14)
The balancing acceleration would have the same periodicity as $`F_{EM}`$. The potential well is created by the inhomogeneity of the $`\stackrel{}{B}`$ field as a function of $`z`$:
$$H_{total}=-\int F_{EM}dz-F_{accel}z.$$
(A15)
So we have
$$H_{totalmax}=-\int F_{EMmax}dz-F_{accelmax}z$$
$$H_{total}\approx \frac{1}{2}\frac{\partial F_{EMmax}}{\partial z}z^2,$$
(A16)
so we can see that
$$k\approx \frac{\partial F_{EM}}{\partial z}\approx 2N\nu \eta \frac{e^2\lambda _1}{m_ec^2}B_0\frac{\partial B_0}{\partial z}.$$
(A17)
Therefore, $`B_0\frac{\partial B_0}{\partial z}\approx \frac{km_ec^2}{2N\nu \eta e^2\lambda _1}\approx \frac{1.8\times 10^7}{\nu \eta }`$. If we set $`\nu =10^{-3}`$ and $`\eta =10^{-1}`$, then $`B_0\frac{\partial B_0}{\partial z}\approx 1.8\times 10^{11}`$. If $`B_0`$ varies by about 25 percent over the distance $`\lambda _1`$ in the vicinity of the mirror, then
$$\frac{\partial B_0}{\partial z}\lambda _1|_{z=0}\approx \frac{1}{4}B_0|_{z=0}$$
$$B_0\frac{\partial B_0}{\partial z}|_{z=0}\approx \frac{B_0^2}{4\lambda _1}.$$
(A18)
This reduces to $`B_0|_{z=0}\approx 8.5\times 10^2`$ G.
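The field requirement can be reproduced numerically (a sketch with standard CGS constants; the small spread relative to the quoted $`1.8\times 10^7`$ and $`8.5\times 10^2`$ G reflects the rounding of $`k`$ and $`N`$ in the text):

```
from math import sqrt

e = 4.803e-10        # esu
m_e = 9.109e-28      # g
c = 2.998e10         # cm/s
amu = 1.66e-24       # g
k, M, lam1 = 1.1e-6, 1.1e-17, 1.0e-6
nu, eta = 1.0e-3, 1.0e-1

N = M / (50.0 * amu)                                         # atoms, ~1.3e5
grad = k * m_e * c**2 / (2.0 * N * nu * eta * e**2 * lam1)   # B_0 dB_0/dz, ~1.5e11
B0 = sqrt(4.0 * lam1 * grad)                                 # Eq. (A18): ~8e2 G
print(N, grad, B0)
```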
Finally, the mirror is released by turning off the field over a time scale $`t_R`$ that satisfies
$$t_R\ll t_s.$$
(A19)
Ideally, one would also want
$$t_R<\frac{2\pi }{\omega _{os}}.$$
(A20)
The mirror would be briefly exposed to a strong $`\stackrel{}{E}`$ field during the turn-off.
# A simple expression for the terms in the Baker-Campbell-Hausdorff series
## I Introduction
The Baker-Campbell-Hausdorff series has a long history and has applications in a wide variety of problems, as explained in Refs. . In a classic paper, Goldberg was able to derive an integral expression for the coefficients in the general term, and this result is still used today to calculate the Baker-Campbell-Hausdorff series.
In this paper, we present a simple method for calculating the terms in the Baker-Campbell-Hausdorff series. The process can be carried out by hand, and it is easily implemented on a computer.
## II Statement of theorem
We let $`z=\mathrm{log}(e^xe^y)`$ denote the Baker-Campbell-Hausdorff series for noncommuting variables $`x`$ and $`y`$. Our result for the $`n`$-th order term $`z_n`$ in this series is given by the following procedure, which involves only a finite number of matrix multiplications. We state our results without reference to commutators. If an expression in terms of commutators is desired, our expression can be transformed using the substitution due to Dynkin. Each product involving $`x`$ and $`y`$ variables is replaced by $`\frac{1}{n}`$ times the corresponding iterated commutator of the same sequence of $`x`$’s and $`y`$’s. The quantity $`z_n`$ is invariant under this transformation.
To calculate $`z_n`$, the entire $`n`$-th order term in the Baker-Campbell-Hausdorff series, we compute a certain polynomial in $`n`$ (ordinary commuting) variables, $`\sigma _1,\mathrm{},\sigma _n`$, and then make a replacement, as described below. We begin by defining two $`(n+1)\times (n+1)`$ matrices $`F`$ and $`G`$ by
$$F_{ij}=\frac{1}{(j-i)!}$$
(1)
and
$$G_{ij}=\frac{1}{(j-i)!}\prod _{k=i}^{j-1}\sigma _k.$$
(2)
These equations are valid for all $`i`$ and $`j`$ from $`1`$ to $`n+1`$, with the usual convention that the reciprocal of the factorial of a negative integer is zero. Written out explicitly, the matrices are
$$F=\left(\begin{array}{ccccccc}1& 1& \frac{1}{2}& \frac{1}{6}& \mathrm{\cdots }& & \\ & 1& 1& \frac{1}{2}& \frac{1}{6}& \mathrm{\cdots }& \\ & & 1& 1& \frac{1}{2}& \frac{1}{6}& \mathrm{\cdots }\\ & & & \mathrm{\ddots }& & & \\ & & & & \mathrm{\ddots }& & \\ & & & & & \mathrm{\ddots }& \\ & & & & & & 1\end{array}\right)$$
(3)
and
$$G=\left(\begin{array}{ccccccc}1& \sigma _1& \frac{1}{2}\sigma _1\sigma _2& \mathrm{\cdots }& & & \\ & 1& \sigma _2& \frac{1}{2}\sigma _2\sigma _3& \mathrm{\cdots }& & \\ & & 1& \sigma _3& \frac{1}{2}\sigma _3\sigma _4& \mathrm{\cdots }& \\ & & & \mathrm{\ddots }& & & \\ & & & & \mathrm{\ddots }& & \\ & & & & & \mathrm{\ddots }& \sigma _n\\ & & & & & & 1\end{array}\right).$$
(4)
Although it is not necessary for the calculation of results, we note here that the matrices $`F`$ and $`G`$ are exponentials of very simple matrices. We define two $`(n+1)\times (n+1)`$ matrices $`M`$ and $`N`$ by
$$M_{ij}=\delta _{i+1,j}$$
(5)
and
$$N_{ij}=\delta _{i+1,j}\sigma _i.$$
(6)
These equations are valid for $`i`$ and $`j`$ ranging from $`1`$ to $`n+1`$. A simple application of the definition of the exponential function gives $`F=\mathrm{exp}M`$ and $`G=\mathrm{exp}N`$. Written out explicitly, these statements are
$$F=\mathrm{exp}\left(\begin{array}{ccccccc}0& 1& 0& \mathrm{\cdots }& & & \\ & 0& 1& 0& \mathrm{\cdots }& & \\ & & \mathrm{\ddots }& & & & \\ & & & \mathrm{\ddots }& & & \\ & & & & \mathrm{\ddots }& & \\ & & & & & 0& 1\\ & & & & & & 0\end{array}\right)$$
(7)
and
$$G=\mathrm{exp}\left(\begin{array}{ccccccc}0& \sigma _1& 0& \mathrm{\cdots }& & & \\ & 0& \sigma _2& 0& \mathrm{\cdots }& & \\ & & \mathrm{\ddots }& & & & \\ & & & \mathrm{\ddots }& & & \\ & & & & \mathrm{\ddots }& & \\ & & & & & 0& \sigma _n\\ & & & & & & 0\end{array}\right).$$
(8)
The matrices $`M`$ and $`N`$ will be used later in a proof.
Our expression for the $`n`$-th order term in the Baker-Campbell-Hausdorff series is
$$z_n=T(\mathrm{log}FG)_{1,n+1}.$$
(9)
The indices on the right-hand side of this equation indicate the upper-right element of the matrix $`\mathrm{log}FG`$. The operator $`T`$ replaces products of $`\sigma `$-variables with products of $`x`$ and $`y`$ according to the following procedure. The polynomial $`(\mathrm{log}FG)_{1,n+1}`$ is a sum of terms, each of which may be written as a rational number times $`\sigma _1^{\mu _1}\sigma _2^{\mu _2}\mathrm{}\sigma _n^{\mu _n}`$, where the $`\mu _i`$ are either $`0`$ or $`1`$ (no exponents greater than 1 occur, as explained later in this paper). Next, $`\sigma _i^{\mu _i}`$ is replaced with $`x`$ if $`\mu _i=0`$ and $`y`$ if $`\mu _i=1`$. Thus each $`\sigma _i`$ that occurs (to the first power) in a term indicates that a $`y`$ is to be placed at the $`i`$-th location in the product of $`x`$ and $`y`$ variables. For example, in the case $`n=6`$, we have $`T(\sigma _2\sigma _4\sigma _5)=xyxyyx`$. The operator $`T`$ is a vector-space isomorphism from the space of polynomials in the $`\sigma `$-variables (with $`\mu _i1`$) to the space of linear combinations of products that have $`n`$ factors that are either $`x`$ or $`y`$.
The $`\mathrm{log}`$ operation in Eq. (9) is simple because $`FG`$ is equal to the $`(n+1)\times (n+1)`$ identity matrix (which we denote by $`I`$) plus a matrix that is strictly upper triangular. Thus the series for $`\mathrm{log}[I+(FG-I)]`$ terminates after finitely many terms.
$$\mathrm{log}FG=\sum _{q=1}^n\frac{(-1)^{q+1}}{q}(FG-I)^q.$$
(10)
The calculation of $`z_n`$, the order $`n`$ term in the Baker-Campbell-Hausdorff series, can therefore be carried out with a finite number of simple operations. There are no sums over partitions, operations with noncommuting variables, translations of binary sequences into descriptions in terms of block lengths, etc.
## III Examples
Let us begin by working out the example of $`n=1`$. We have
$$F=\left(\begin{array}{cc}1& 1\\ 0& 1\end{array}\right)$$
(11)
and
$$G=\left(\begin{array}{cc}1& \sigma _1\\ 0& 1\end{array}\right).$$
(12)
From this follows
$$FG=\left(\begin{array}{cc}1& 1+\sigma _1\\ 0& 1\end{array}\right)$$
(13)
and
$`z_1`$ $`=`$ $`T(\mathrm{log}FG)_{1,1+1}=T(\sigma _1^0+\sigma _1^1)`$ (14)
$`=`$ $`x+y.`$ (15)
Next let us work out the example of $`n=2`$. We have
$$F=\left(\begin{array}{ccc}1& 1& \frac{1}{2}\\ 0& 1& 1\\ 0& 0& 1\end{array}\right)$$
(16)
and
$$G=\left(\begin{array}{ccc}1& \sigma _1& \frac{1}{2}\sigma _1\sigma _2\\ 0& 1& \sigma _2\\ 0& 0& 1\end{array}\right).$$
(17)
From this follows
$$FG=\left(\begin{array}{ccc}1& 1+\sigma _1& \frac{1}{2}+\sigma _2+\frac{\sigma _1\sigma _2}{2}\\ 0& 1& 1+\sigma _2\\ 0& 0& 1\end{array}\right)$$
(18)
and
$$(FGI)^2=\left(\begin{array}{ccc}0& 0& 1+\sigma _1+\sigma _2+\sigma _1\sigma _2\\ 0& 0& 0\\ 0& 0& 0\end{array}\right),$$
(19)
so that
$`z_2`$ $`=`$ $`T(\mathrm{log}FG)_{1,2+1}=T\left(\frac{1}{2}\sigma _1^0\sigma _2^1-\frac{1}{2}\sigma _1^1\sigma _2^0\right)`$ (20)
$`=`$ $`\frac{1}{2}(xy-yx).`$ (21)
For the case $`n=3`$, the equations result in
$`z_3`$ $`=`$ $`T\left(\frac{1}{12}\sigma _1-\frac{1}{6}\sigma _2+\frac{1}{12}\sigma _3+\frac{1}{12}\sigma _1\sigma _2-\frac{1}{6}\sigma _1\sigma _3+\frac{1}{12}\sigma _2\sigma _3\right)`$ (22)
$`=`$ $`\frac{1}{12}yxx-\frac{1}{6}xyx+\frac{1}{12}xxy+\frac{1}{12}yyx-\frac{1}{6}yxy+\frac{1}{12}xyy.`$ (23)
The case $`n=4`$ works out to be
$`z_4`$ $`=`$ $`T\left(-\frac{1}{24}\sigma _1\sigma _2+\frac{1}{12}\sigma _1\sigma _3-\frac{1}{12}\sigma _2\sigma _4+\frac{1}{24}\sigma _3\sigma _4\right)`$ (24)
$`=`$ $`-\frac{1}{24}yyxx+\frac{1}{12}yxyx-\frac{1}{12}xyxy+\frac{1}{24}xxyy.`$ (25)
These results and higher-order calculations not shown here agree with results published in the literature. As an example of the types of coefficients that occur, when $`n`$ is $`7`$ our formula gives a coefficient of $`1/1512`$ for the $`yxxxyyy`$ term, and this agrees with the literature result.
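As an independent cross-check of Eq. (9), here is a short Python/SymPy sketch (ours, not part of the paper; the Mathematica implementation of Sec. VII is the original one):

```
from math import factorial
import sympy as sp

def bch_term(n):
    # Eq. (9): z_n from the upper-right entry of log(FG)
    s = sp.symbols(f's1:{n + 1}')
    def inv_fact(m):   # 1/m!, with 1/(negative integer)! = 0
        return sp.Rational(1, factorial(m)) if m >= 0 else sp.Integer(0)
    F = sp.Matrix(n + 1, n + 1, lambda i, j: inv_fact(j - i))
    G = sp.Matrix(n + 1, n + 1, lambda i, j: inv_fact(j - i) * sp.Mul(*s[i:j]))
    A = F * G - sp.eye(n + 1)
    logFG = sp.zeros(n + 1)
    Aq = sp.eye(n + 1)
    for q in range(1, n + 1):       # Eq. (10): finite log series
        Aq = Aq * A
        logFG += sp.Rational((-1)**(q + 1), q) * Aq
    poly = sp.expand(logFG[0, n])   # upper-right element
    words = {}
    for monom in sp.Add.make_args(poly):   # the T operator of Sec. II
        coeff, word = monom, ''
        for k in range(n):
            if monom.has(s[k]):
                word += 'y'
                coeff = coeff / s[k]
            else:
                word += 'x'
        words[word] = sp.simplify(coeff)
    return words

print(bch_term(2))  # {'xy': 1/2, 'yx': -1/2}
print(bch_term(4))  # xxyy: 1/24, xyxy: -1/12, yxyx: 1/12, yyxx: -1/24
```

The output for $`n=2`$ and $`n=4`$ reproduces Eqs. (21) and (25).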
## IV Proof of Theorem
We begin by considering the Baker-Campbell-Hausdorff series for $`\mathrm{log}\left(e^Me^N\right)`$, where the $`(n+1)\times (n+1)`$ matrices $`M`$ and $`N`$ are defined in Eqs. (5) and (6). The matrices $`M`$ and $`N`$ are written out explicitly on the right-hand sides of Eqs. (7) and (8). They have nonzero elements only on the first superdiagonal. Therefore, a product having $`m`$ factors that are either $`M`$ or $`N`$ will have nonzero elements only on the $`m`$-th superdiagonal. Thus, the upper-right element of the matrix $`\mathrm{log}\left(e^Me^N\right)`$ is equal to the upper-right element of the matrix that is the order $`n`$ term in the Baker-Campbell-Hausdorff series for $`\mathrm{log}\left(e^Me^N\right)`$. We write this as
$$\left[\mathrm{log}\left(e^Me^N\right)\right]_{1,n+1}=\sum _WC(W)[\mathrm{\Pi }(W)]_{1,n+1},$$
(26)
where the sum runs over all “words” $`W`$ of length $`n`$ (ordered $`n`$-tuples of elements that are either the symbol $`M`$ or the symbol $`N`$), $`C(W)`$ denotes the coefficient of $`W`$ in the order $`n`$ term in the Baker-Campbell-Hausdorff series, and $`\mathrm{\Pi }(W)`$ denotes a product of $`M`$ and $`N`$ matrices as specified by the word $`W`$.
We now show that $`[\mathrm{\Pi }(W)]_{1,n+1}`$ is a product of $`\sigma `$-variables whose indices give the positions of the $`N`$’s in the word $`W`$. We let the matrix $`\mathrm{\Pi }(W)`$ act on an $`(n+1)`$-component column vector that is all zeroes except the lowest element, which is a $`1`$. After each multiplication by an $`M`$ or an $`N`$ matrix, the location of the nonzero element in the column vector moves up by one step. If the matrix multiplying the column vector is an $`N`$, the nonzero element in the column vector gets multiplied by a $`\sigma `$. The index on the $`\sigma `$ gives the location of the $`N`$ matrix in the word $`W`$, as can be seen by looking at the structure of the $`N`$ matrix, shown on the right-hand side of Eq. (8). After all of the $`n`$ matrices in the word $`W`$ have acted on the column vector, the nonzero element in the vector is at the top, and this element is a product of $`\sigma `$-variables whose indices describe the word $`W`$ in the manner explained above. The top element of the vector obtained by letting a matrix act on the initial column vector described above is the upper-right element of the matrix. Thus we have shown that $`[\mathrm{\Pi }(W)]_{1,n+1}`$ is a product of $`\sigma `$-variables whose indices give the positions of the $`N`$’s in the word $`W`$. This fact together with the relations $`F=\mathrm{exp}M`$ and $`G=\mathrm{exp}N`$ and Eq. (26) proves Eq. (9).
## V Alternative formulation
In this section we present a result equivalent to the result presented above, but the matrix operations involve only numbers. We will express the $`n`$-th order term, $`z_n`$, as a linear combination of terms of the form $`(x+\sigma _1y)(x+\sigma _2y)\mathrm{\cdots }(x+\sigma _ny)`$, where the $`\sigma `$’s are either $`+1`$ or $`-1`$. There are $`2^n`$ such terms. Our result is that the coefficient of a given term is $`2^{-n}`$ times the value of the polynomial $`(\mathrm{log}FG)_{1,n+1}`$ defined in Sec. II, with the corresponding $`\sigma `$-values substituted in. This number can be computed by substituting the $`\sigma `$-values into the $`G`$ matrix before doing the matrix operations. These statements can be summarized as
$$z_n=2^{-n}\sum _{\sigma _1,\mathrm{\dots },\sigma _n}(\mathrm{log}FG)_{1,n+1}(x+\sigma _1y)(x+\sigma _2y)\mathrm{\cdots }(x+\sigma _ny),$$
(27)
where the sum is over all $`2^n`$ possible assignments of $`\pm 1`$ to the $`\sigma `$-variables.
Let us work out an example. For $`n=2`$ the equation becomes
$`z_2`$ $`=`$ $`2^{-2}\sum _{\sigma _1,\sigma _2}\left(\frac{\sigma _2}{2}-\frac{\sigma _1}{2}\right)(x+\sigma _1y)(x+\sigma _2y)`$ (28)
$`=`$ $`\frac{1}{4}\left[\left(\frac{1}{2}+\frac{1}{2}\right)(x-y)(x+y)-\left(\frac{1}{2}+\frac{1}{2}\right)(x+y)(x-y)\right]`$ (29)
$`=`$ $`\frac{1}{2}(xy-yx),`$ (30)
where we have used Eq. (20).
A calculation of $`z_n`$ without the use of multiplication of matrices of polynomials would proceed in the following way. For every choice of $`+1`$ or $`-1`$ values for the $`\sigma `$-variables one computes the value of $`2^{-n}(\mathrm{log}FG)_{1,n+1}`$. This involves a finite number of operations with numbers. The result is the coefficient of $`(x+\sigma _1y)(x+\sigma _2y)\mathrm{\cdots }(x+\sigma _ny)`$ in the expression for $`z_n`$ in Eq. (27). Next, we imagine the process of expanding all of the products $`(x+\sigma _1y)(x+\sigma _2y)\mathrm{\cdots }(x+\sigma _ny)`$. The result of this operation is a sum over words in the variables $`x`$ and $`y`$. To get the coefficient of a particular word one sums all of the coefficients calculated from $`2^{-n}(\mathrm{log}FG)_{1,n+1}`$ with a sign given by the product of the $`\sigma `$’s at the locations of the $`y`$’s in the word. For example, if the word is $`xxyyxy`$ then the coefficients are summed with signs given by $`\sigma _3\sigma _4\sigma _6`$.
We now prove Eq. (27). We consider the order $`n`$ term in $`\mathrm{log}(e^{x+y}e^{x-y})`$. This may be written in two ways,
$$\sum _{\sigma _1,\mathrm{\dots },\sigma _n}C(\sigma _1,\mathrm{\dots },\sigma _n)(x+\sigma _1y)\mathrm{\cdots }(x+\sigma _ny)=\sum _WC^{\prime }(W)W.$$
(31)
The sum on the left-hand side is a sum over all assignments of $`+1`$ or $`-1`$ to the variables $`\sigma _1,\mathrm{\dots },\sigma _n`$. The coefficient $`C(\sigma _1,\mathrm{\dots },\sigma _n)`$ is the usual coefficient in the Baker-Campbell-Hausdorff series, with the $`\sigma `$’s identifying a word. The right-hand side of Eq. (31) results from multiplying out all of the products $`(x+\sigma _1y)\mathrm{\cdots }(x+\sigma _ny)`$. It is a sum over words in $`x`$ and $`y`$, and $`C^{\prime }(W)`$ denotes the resulting coefficient of the word $`W`$. \[In this context we use the term “word” to denote an actual product of a certain sequence of $`x`$ and $`y`$ variables, because such a product does not evaluate to become something else, as it did in the case of the products of $`M`$ and $`N`$ matrices in the previous section. Thus, the notation $`\mathrm{\Pi }(W)`$ is not needed in Eq. (31).\] The coefficient $`C^{\prime }(W)`$ of a particular word $`W`$ can be expressed in terms of the $`C(\sigma _1,\mathrm{\dots },\sigma _n)`$. For every set of values for $`\sigma _1,\mathrm{\dots },\sigma _n`$ in the sum on the left-hand side we get a contribution to $`C^{\prime }(W)`$ of $`C(\sigma _1,\mathrm{\dots },\sigma _n)`$ times the product of the $`\sigma `$’s that correspond to the $`y`$’s in $`W`$. This product is the same as the product of $`\sigma _i^{(W)}`$, where the $`\sigma ^{(W)}`$ values describe the word $`W`$ ($`\sigma _i^{(W)}`$ is +1 if the $`i`$-th factor in $`W`$ is $`x`$, and $`\sigma _i^{(W)}`$ is -1 if the $`i`$-th factor in $`W`$ is $`y`$), and the product runs over $`i`$-values corresponding to negative $`\sigma `$’s. Thus, $`C^{\prime }(W)`$ is precisely the polynomial in $`\sigma ^{(W)}`$ given in the first theorem. Now we transform from $`x`$ and $`y`$ to new variables according to $`x+y=\stackrel{~}{x}`$ and $`x-y=\stackrel{~}{y}`$. This implies $`x=(\stackrel{~}{x}+\stackrel{~}{y})/2`$ and $`y=(\stackrel{~}{x}-\stackrel{~}{y})/2`$, and the right-hand side of Eq. (31) (which equals $`\mathrm{log}e^{\stackrel{~}{x}}e^{\stackrel{~}{y}}`$) becomes the right-hand side of Eq. (27), after the tildes have been dropped, which is justified since only variables with tildes occur at that point. The $`n`$ factors of $`1/2`$ are collected into the factor of $`2^{-n}`$ in Eq. (27).
## VI Symmetries of the coefficients
The calculation described in the penultimate paragraph of the preceding section involves repeated evaluation of $`(\mathrm{log}FG)_{1,n+1}`$ with values of $`+1`$ and $`1`$ substituted in for the $`\sigma `$-variables. These numbers are the coefficients in the sum in Eq. (27). We begin this section by showing that half of the resulting numbers will be zero because of a basic symmetry of the Baker-Campbell-Hausdorff series. Then we show that some of the nonvanishing coefficients can be obtained from other ones. These considerations reduce the computational work involved in calculating the coefficients in Eq. (27).
The relationship $`e^z=e^xe^y`$ implies $`e^{-z}=e^{-y}e^{-x}`$ and
$$-z=\mathrm{log}e^{-y}e^{-x}.$$
(32)
From this we see that swapping the $`x`$’s and $`y`$’s in the $`n`$-th order term $`z_n`$ gives the same result as multiplying $`z_n`$ by $`(-1)^{n-1}`$. In the present context, the $`\sigma `$-variables are being assigned values of $`+1`$ or $`-1`$, so we may use the relationship $`\sigma _i^2=1`$. The preceding statement about the symmetry of $`z_n`$ is equivalent to
$$(\mathrm{log}FG)_{1,n+1}\prod _{i=1}^n\sigma _i=(-1)^{n-1}(\mathrm{log}FG)_{1,n+1}.$$
(33)
Multiplication by $`\prod _{i=1}^n\sigma _i`$ effects a swapping of $`x`$ and $`y`$, because of the relationship $`\sigma _i^2=1`$. If the number of $`\sigma `$’s that are $`+1`$ is even then $`\prod _{i=1}^n\sigma _i`$ will equal $`(-1)^n`$ because all of the $`\sigma `$’s in the product may be replaced with $`-1`$ without changing the value of the product. Therefore, if the number of $`\sigma `$’s that are $`+1`$ is even, then $`(\mathrm{log}FG)_{1,n+1}`$ is zero. Elementary combinatorics shows that this condition holds for one-half of the terms in the sum in Eq. (27). In the case of odd $`n`$ greater than 1, there is one additional term which vanishes, namely the one for which all of the $`\sigma `$’s are $`+1`$. This is because the matrices $`M`$ and $`N`$ are equal and therefore commute. Thus $`\mathrm{log}FG`$ is equal to $`M+N`$ and the upper-right element of $`\mathrm{log}FG`$ is zero. \[In the case of even $`n`$, $`(\mathrm{log}FG)_{1,n+1}`$ of course also vanishes when all of the $`\sigma `$’s are $`+1`$, but this vanishing has already been counted in the discussion above.\]
Thus far, we have identified coefficients in the sum in Eq. (27) that are zero for symmetry reasons. This author has searched up through order $`n=15`$ and found all of the remaining coefficients to be nonzero.
A further symmetry of $`(\mathrm{log}FG)_{1,n+1}`$ is
$$[(\mathrm{log}FG)_{1,n+1}](\sigma _1,\mathrm{\dots },\sigma _n)=(-1)^{n-1}[(\mathrm{log}FG)_{1,n+1}](\sigma _n,\mathrm{\dots },\sigma _1).$$
(34)
The notation on the left-hand side of this equation indicates explicitly that $`(\mathrm{log}FG)_{1,n+1}`$ is a function of the $`n`$ variables $`\sigma _1,\mathrm{\dots },\sigma _n`$. On the right-hand side, these $`n`$ quantities are inserted into the function in the reversed order. The fact that these two values of the function are related by a factor of $`(-1)^{n-1}`$ is due to a symmetry of the Baker-Campbell-Hausdorff series. It follows immediately from results in Ref. that the coefficient of a word in the variables $`x`$ and $`y`$ is equal to $`(-1)^{n-1}`$ times the coefficient of the word obtained by reversing the order of the factors in the original word. This implies that $`T\{[(\mathrm{log}FG)_{1,n+1}](\sigma _n,\mathrm{\dots },\sigma _1)\}`$ (where $`T`$ is the operator defined in Sec. II), which is the order $`n`$ term in the Baker-Campbell-Hausdorff series with the sequence of the factors in each term reversed, is equal to $`(-1)^{n-1}`$ times $`T\{[(\mathrm{log}FG)_{1,n+1}](\sigma _1,\mathrm{\dots },\sigma _n)\}`$. Because the vector-space isomorphism $`T`$ is invertible, this proves Eq. (34). This equation is useful because it can be used to avoid carrying out unnecessary evaluations of $`(\mathrm{log}FG)_{1,n+1}`$.
## VII Computer implementation
The methods presented in this paper can easily be used with computers. A simple example of how the results of Sec. II can be implemented using Mathematica is shown below. It is not necessary to load any special packages to run this code. This example is oriented toward ease of coding. Faster implementations are possible. The first program gives the polynomial to the right of the $`T`$ operator in Eq. (9), and the second program (for $`n>1`$) translates this into $`z_n`$, the corresponding expression in terms of $`x`$ and $`y`$.
```
p[n_] := p[n] = ( F = Table[1/(j-i)!,{i,n+1},{j,n+1}];
G = Table[1/(j-i)! Product[s[k],{k,i,j-1}],{i,n+1},{j,n+1}];
qthpower = IdentityMatrix[n+1]; FGm1 = F.G - qthpower; Expand[
-Sum[qthpower=qthpower.FGm1; (-1)^q / q qthpower,{q,n}][[1,n+1]]])
translated[n_] := (temp = Expand[Product[s[k]^2, {k,n}] p[n]];
Sum[term = Apply[List, temp[[i]]]; term[[1]] Apply[StringJoin,
Take[term,-n] /. {s[i_]^2->"x",s[i_]^3->"y"}], {i,Length[temp]}])
In[3]:= translated[4]
xxyy xyxy yxyx yyxx
Out[3]= ---- - ---- + ---- - ----
24 12 12 24
```
## VIII Other Series
As in the case of Goldberg’s results, the methods of this paper can be used to calculate $`\mathrm{log}f(x)f(y)`$, where $`f(x)`$ is an arbitrary power series with $`f(0)=1`$. The only changes that are necessary are that occurrences of the exponential function such as those in Eqs. (7) and (8) must be replaced with the function $`f`$. The matrices $`M`$ and $`N`$ have the property that when they are raised to the power $`n+1`$ the result is zero, so the calculation of $`f(M)`$ and $`f(N)`$ terminates after finitely many matrix operations.
## IX Generalized Baker-Campbell-Hausdorff series
The methods presented in Sec. II may also be used to calculate the terms in generalized Baker-Campbell-Hausdorff series. For example, if $`z=\mathrm{log}e^xe^ye^w`$ then the $`n`$-th order term may be found as follows. Matrices $`F`$ and $`G`$ are defined as in Eqs. (1) and (2), and a matrix $`H`$ is defined by
$$H_{ij}=\frac{1}{(j-i)!}\prod _{k=i}^{j-1}\tau _k,i,j=1,\mathrm{\dots },n+1,$$
(35)
where $`\tau _1,\mathrm{},\tau _n`$ are $`n`$ additional commuting variables. The definition of $`H`$ is the same as the definition of $`G`$, except that different variables are used. Reasoning similar to that in the original case gives the following expression for $`z_n`$.
$$z_n=T(\mathrm{log}FGH)_{1,n+1},$$
(36)
where the definition of the $`T`$ operator has now been extended to also put a $`w`$ at the $`i`$-th position of a product of $`x`$’s, $`y`$’s and $`w`$’s for an occurrence of $`\tau _i`$. For example, when $`n`$ is 4, we have $`T(\sigma _2\tau _3)=xywx`$. The results obtained from Eq. (36) agree with those obtained from Reutenauer’s generalization of Goldberg’s theorem.
An example of how these methods can be used with Mathematica is shown below. For ease of coding, the notation has been changed slightly.
```
p3[n_] := p3[n] = (
F = Table[1/(j-i)! Product[s[k,"x"],{k,i,j-1}],{i,n+1},{j,n+1}];
G = Table[1/(j-i)! Product[s[k,"y"],{k,i,j-1}],{i,n+1},{j,n+1}];
H = Table[1/(j-i)! Product[s[k,"w"],{k,i,j-1}],{i,n+1},{j,n+1}];
qthpower = IdentityMatrix[n+1]; FGm1 = F.G.H - qthpower; Expand[
-Sum[qthpower=qthpower.FGm1; (-1)^q / q qthpower,{q,n}][[1,n+1]]])
translated3[n_] := (temp = p3[n]; Sum[term = Apply[List, temp[[i]]]; term[[1]]*
Apply[StringJoin, Take[term,-n] /. s[j_,k_]->k], {i,Length[temp]}])
In[3]:= translated3[2]
-wx wy xw xy yw yx
Out[3]= --- - -- + -- + -- + -- - --
2 2 2 2 2 2
```
## X Conclusion
The results contained in this paper provide a means of computing the entire $`n`$-th order term in the Baker-Campbell-Hausdorff series, without the use of noncommuting variables, sums over partitions, or other complicated operations. One application is in writing simple programs in standard computer languages to calculate the Baker-Campbell-Hausdorff series. Such programs do not result in a significant reduction in computer time needed. Rather, the programming involved is simplified. The sample program included in this paper can be shortened to just a few lines, and it is not necessary to load special software packages. The calculation of higher-order terms in the Baker-Campbell-Hausdorff series is usually done in computer languages that do not have symbol manipulation, because they are faster. This paper also explains how a simple program can be written in such a language to calculate the series.
This paper does not address the question of expressing the Baker-Campbell-Hausdorff series in terms of commutators. As explained in Sec. II, if an expression in terms of commutators is desired, the substitution due to Dynkin may be used to transform the results calculated here.
Future research could include finding alternative ways to calculate certain sequences of elements in graded free Lie algebras, such as those in Ref. (which also contains an interesting method of computing the Baker-Campbell-Hausdorff series by numerically integrating a differential equation). These quantities occur in the optimization of numerical algorithms involving computations in Lie algebras. Graded Lie algebra bases can be used in the construction of Runge-Kutta methods on manifolds.
# An Infrared Determination of the Reddening and Distance to Dwingeloo 1
## 1 Introduction
Dwingeloo 1 (Dw1) is a large SBb/c galaxy, discovered both in a systematic H i emission survey of the northern part of the Milky Way in search of obscured galaxies in the Zone of Avoidance by Kraan-Korteweg et al. (1994), and independently by Huchtmeier et al. (1995). The knowledge of the local mass distribution has implications for the peculiar velocity field, the direction and amplitude of the Local Group acceleration, the determination of parameters such as $`\mathrm{\Omega }_0`$ and $`H_0`$, and on the understanding of the formation and evolution of groups of galaxies (e.g., Peebles 1994; Marinoni et al. 1998). The discovery of this galaxy proved a long standing suspicion that the tidal disruptions of Maffei 2 may be due to the presence of another massive galaxy nearby (e.g., Hurt et al. 1993).
Dw1 lies in the direction of the IC 342/Maffei 1 & 2 group of galaxies, about 2 degrees away from Maffei 2. This corresponds to a physical separation of 175 kpc assuming that Dw1 and Maffei 2 are at a distance of 5 Mpc. Being the nearest barred spiral system, Dw1 offers a unique possibility to study the effect of the bar at high spatial resolution. The discoverers classified Dw1 as an SBb or SBc galaxy (T=4) and measured an angular diameter of 4.2 arcmin. Later on, McCall & Buta (1997) re-classified it to SB(s)cd with an angular diameter of 9.9 arcmin at $`\mu _I=25.0`$mag arcsec<sup>-2</sup> based on deep optical $`I`$-band imaging. Burton et al. (1996) extensively studied the neutral hydrogen content of Dw1 and measured H i profile widths at 20% and 50% level of $`201.2\pm 0.4`$km s<sup>-1</sup> and $`187.6\pm 0.6`$km s<sup>-1</sup> respectively. The measured inclination of the gaseous disk was $`51\pm 2`$degrees and the position angle 112 degrees, with the major axis aligned with the bar.
Since the discovery of Dw1 the determination of its distance has been hampered by the poorly known Galactic extinction. Optical $`VRIH_\alpha `$ imaging, long slit spectroscopy, and IRAS observations were summarized by Loan et al. (1996), who used a number of methods to estimate the foreground extinction towards Dw1. The optical color excesses yielded $`A_V=7.8\pm 3.0`$ mag, the measured Galactic H i column density $`A_V=4.5`$mag, and the $`100\mu `$m IRAS flux $`A_V=3.2`$mag. Finally, they applied optical $`I`$ and $`R`$-band Tully-Fisher relations to obtain distances ranging from 1.3 to 6.7 Mpc, with an average value of about 4 Mpc (assuming $`H_0=75`$km s<sup>-1</sup> Mpc<sup>-1</sup>). Their main source of uncertainty was the value of the Galactic extinction. Phillipps & Davies (1997) challenged these extreme distance estimates on the basis of the very narrow span of central surface brightness in present day spiral galaxies. Their best estimate for the extinction ($`A_B=6`$mag) places Dw1 at a distance of 3.1–3.6 Mpc. These authors also employed the diameter version of the Tully-Fisher relation (Persic, Salucci, & Stel 1996) and obtained a distance of 2.7 Mpc.
The primary goal of this study was to put stronger constraints on the foreground extinction, and to obtain a better estimate of the distance to Dw1. We chose to use infrared colors because of their small intrinsic variations among spiral galaxies (Aaronson 1977). In addition, the extinction in the $`H`$-band is about three times smaller than in the $`I`$-band and about six times smaller than in the $`V`$-band (Rieke & Lebofsky 1985). Finally, the infrared Tully-Fisher (IRTF) relation shows a smaller intrinsic scatter than its optical counterpart (Aaronson, Huchra, & Mould 1979; Freedman 1990; Peletier & Willner 1993). The IRTF relation allows us to determine both the extinction and the distance at the same wavelength range, minimizing the errors arising from possible variations of the reddening law.
## 2 Observations and Data reduction
We obtained $`JHK_\mathrm{s}`$ imaging of Dw1 using a $`256\times 256`$ NICMOS3 array at the 2.3-m Bok Telescope of the University of Arizona on Kitt Peak, with a plate scale of $`0.6`$arcsec pixel<sup>-1</sup> during a number of observing runs. We constructed a deep $`4.9\times 4.9`$ arcmin $`H`$-band mosaic of Dw1, whereas the $`J`$ and $`K_\mathrm{s}`$ images only covered the central $`2.5\times 2.5`$ arcmin. Additional $`H`$-band imaging using a $`1024\times 1024`$ array and plate scale $`0.5`$arcsec pixel<sup>-1</sup> was obtained at the same telescope on a subsequent observing run to calibrate the deep $`H`$-band imaging. The observational strategy consisted of taking galaxy images interleaved with sky images 6–7 arcmin away from Dw1. Details of the observations are listed in Table 1.
The data reduction included subtraction of dark current frames, flat-fielding with median combined empty sky frames, and sky subtraction. The mosaics were constructed by shifting the images to a common position with cubic spline interpolation. The photometric calibration was performed using observations of standard stars from the lists of Elias et al. (1982) and Hunt et al. (1998) when conditions were photometric. Conditions were non-photometric for the $`H`$-band mosaic, and this image was self-calibrated using the $`1024\times 1024`$ data. In the next section we will be making use of infrared colors of spiral galaxies to derive an estimate of the extinction to Dw1. The colors for spiral galaxies were obtained with the $`K`$-band filter, whereas our measurements were taken with the $`K_\mathrm{s}`$ (short-$`K`$) filter. Therefore it is necessary to determine what the difference is between the two filters. Very recently Persson et al. (1998) have obtained a new set of $`JHKK_\mathrm{s}`$ photometric standards. In their study the average difference between the $`K`$ and $`K_\mathrm{s}`$ magnitudes (for the red standards) is 0.0096 mag, with a standard deviation of 0.017 mag. We assume that the use of the $`K_\mathrm{s}`$ filter instead of the $`K`$ filter introduces an extra uncertainty in the photometric calibration of this filter of $`\pm 0.02`$mag. The errors associated with the photometric calibration are 0.05, 0.07, 0.06 mag in $`J`$, $`H`$ and $`K_\mathrm{s}`$ respectively.
In Figure 1 we display the $`H`$-band mosaic on a logarithmic scale. As can be seen from this figure, there is a large number of foreground stars which need to be removed prior to analyzing the data. The star removal from the $`JHK_\mathrm{s}`$ images was done interactively. The affected pixels were then replaced with a linear surface fit to a circular annulus around each star. The automatic procedures failed largely because of PSF variations under non-photometric conditions. A bright star located southwest of the galaxy posed a major problem, and was masked out throughout the data reduction. We assumed radial symmetry and replaced the region within about 90 arcsec from the star with the data from the opposite side of the galaxy. We performed the photometry on the cleaned images measuring the flux within elliptical isophotes with fixed position angle and ellipticity as determined from H i observations (Burton et al. 1996).
## 3 Discussion
### 3.1 Colors and Extinction
The surface brightness profiles in $`JHK_\mathrm{s}`$ are given in Table 2, and displayed in Figure 2 along with the radial distribution of the $`J-H`$, $`H-K_\mathrm{s}`$ and $`J-K_\mathrm{s}`$ colors, and the total apparent magnitudes in $`JHK_\mathrm{s}`$. In both Table 2 and Figure 2 the error bars represent the combined $`3\sigma `$ variations from photon statistics, sky background variations, the elliptical isophotal fitting, and the photometric calibration. From the radial distribution of the colors, it is clear that the center of the galaxy appears slightly redder than the outer regions, as it is the case with the infrared colors of most spiral galaxies (Terndrup et al. 1994; de Jong 1996). Loan et al. (1996) report inverse optical color gradients in Dw1; however, this may be the result of active star formation along the bar as found in some barred spirals (Shaw et al. 1995).
Before we tried to obtain an estimate of the extinction using the infrared colors, we fitted straight lines to assess the variation of the colors with increasing apertures,
$$H-K_\mathrm{s}=0.435(\pm 0.042)-0.072(\pm 0.071)R$$
(1)
$$J-H=1.099(\pm 0.040)-0.056(\pm 0.067)R$$
(2)
$$J-K_\mathrm{s}=1.532(\pm 0.042)-0.125(\pm 0.071)R$$
(3)
where $`R`$ is the semi-major axis in units of arcmin. These color gradients imply a change of 0.06–0.13 mag within the inner 2 arcmin of Dw1, which may cause significant uncertainties in the reddening estimate. The smaller field of view of the $`J`$ and $`K_\mathrm{s}`$ images prevents us from obtaining the total colors of Dw1. However, the total flux in the $`JHK_\mathrm{s}`$ bands is dominated by the inner 2 arcmin-diameter region (see Figure 2, bottom panel). Hence, we adopt the total observed colors at radial distance of 1 arcmin from the center of the galaxy to be representative for the whole galaxy ($`J-H=1.04\pm 0.10`$mag, $`J-K_\mathrm{s}=1.40\pm 0.11`$mag, $`H-K_\mathrm{s}=0.36\pm 0.11`$mag). To estimate the color excesses we compared these colors to the mean integrated colors of SBb-SBcd galaxies: $`J-H=0.73\pm 0.02`$mag, $`J-K_\mathrm{s}=0.94\pm 0.03`$mag, $`H-K_\mathrm{s}=0.21\pm 0.02`$mag (Aaronson 1977). We have increased the errors in Aaronson’s colors to account for the uncertain Hubble type of Dw1. The color excesses were converted into $`H`$-band ($`A_H`$), visual ($`A_V`$) and $`B`$-band ($`A_B`$) extinctions using Rieke & Lebofsky (1985, RL85) and Mathis (1990, M90) extinction laws. The results are summarized in Table 3. Henceforth we will use $`A_H=0.47\pm 0.11`$mag from Mathis (1990) extinction law for the sake of compatibility with previous work. This value is close to the estimate based on the IRAS $`100\mu `$m flux (Loan et al. 1996).
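For concreteness, the color-excess step can be sketched in a few lines of Python (our illustration, not the authors’ code; the $`A_J/A_V=0.282`$, $`A_H/A_V=0.175`$ and $`A_K/A_V=0.112`$ ratios are the RL85 values as we recall them and should be treated as assumed inputs):

```
obs = {'J-H': 1.04, 'J-K': 1.40, 'H-K': 0.36}        # adopted colors at 1 arcmin
intrinsic = {'J-H': 0.73, 'J-K': 0.94, 'H-K': 0.21}  # Aaronson (1977) SBb-SBcd means
ratio = {'J': 0.282, 'H': 0.175, 'K': 0.112}         # assumed A_band/A_V (RL85)
# K_s is treated as K here; the difference is ~0.01 mag (Sec. 2)

for color, c_obs in obs.items():
    b1, b2 = color.split('-')
    A_V = (c_obs - intrinsic[color]) / (ratio[b1] - ratio[b2])
    print(f"{color}: A_V = {A_V:.1f} mag, A_H = {ratio['H'] * A_V:.2f} mag")
```

The three resulting $`A_H`$ estimates cluster around 0.42–0.51 mag, consistent with the adopted $`A_H=0.47\pm 0.11`$mag.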
Finally, we used the combined optical - near infrared colors to verify our result. Loan et al. (1996) reported the following total apparent (not corrected for reddening) magnitudes for Dw 1: $`m_I=10.7\pm 0.2`$mag, $`m_R=12.2\pm 0.2`$mag, and $`m_V=14.0\pm 0.5`$mag. We measured a total $`H`$-band magnitude $`m_H=8.3\pm 0.2`$mag, and compared the observed colors with the intrinsic colors as determined by de Jong (1996): $`I-H=1.44\pm 0.20`$mag, $`R-H=2.01\pm 0.20`$mag, and $`V-H=2.50\pm 0.20`$mag. Mathis (1990) extinction law yields $`A_H=0.56\pm 0.23`$mag, $`0.58\pm 0.12`$mag, and $`0.47\pm 0.11`$mag respectively, in good agreement with our infrared estimates.
### 3.2 Tully-Fisher Distance to Dw1
In order to determine the distance to Dw1, we chose to apply the IRTF relation because of its lower intrinsic dispersion and the reduced extinction in the $`H`$-band, with the additional advantage that the extinction and the distance are estimated at the same wavelength. The IRTF relation was pioneered by Aaronson et al. (1979). However, we chose to employ the IRTF relation calibrated by Freedman (1990) using local galaxies with Cepheid based distances, and the relations of Peletier & Willner (1993) calibrated relative to the distance of the Ursa Major galaxy cluster.
Freedman’s (1990) calibration for the IRTF relation leads to the following expression: $`H_{0.5}^{\mathrm{abs}}=-10.26(\pm 0.49)(\mathrm{log}\mathrm{\Delta }V_{20}(0)-2.5)-21.02(\pm 0.08)`$, where $`\mathrm{\Delta }V_{20}(0)`$ is the inclination corrected $`20\%`$ level H i velocity profile width in km s<sup>-1</sup>. $`H_{0.5}^{\mathrm{abs}}`$ is the absolute $`H`$-band magnitude within a circular aperture with diameter $`A`$, for which $`\mathrm{log}(A/D_0)=-0.5`$, with $`D_0`$ being the $`B`$-band isophotal diameter at $`\mu _{B,0}=25`$mag arcsec<sup>-2</sup>. Using $`\mathrm{\Delta }V_{20}(0)=259.0\pm 0.5`$km s<sup>-1</sup>, which is Burton et al.’s (1996) value corrected for the inclination of the galaxy, the above expression predicts an absolute $`H`$-band magnitude $`H_{0.5}^{\mathrm{abs}}=-20.13\pm 0.21`$mag.
Although $`B`$-band surface photometry for Dw1 is not available, we can make use of the intrinsic integrated colors for Sb-Sc galaxies $`B-H=3.28\pm 0.14`$ (de Jong 1996) to estimate $`D_0`$. The $`\mu _{B,0}=25`$mag arcsec<sup>-2</sup> isophote corresponds to an $`H`$-band surface brightness (corrected for extinction) of $`\mu _{H,0}=21.72\pm 0.14`$mag arcsec<sup>-2</sup>. Taking into account the $`H`$-band extinction this is equivalent to an observed (not corrected for extinction) value of $`\mu _H=22.19\pm 0.17`$mag arcsec<sup>-2</sup>. As can be seen from Figure 2 (upper panel), this value exceeds the boundaries of the $`H`$-band mosaic. However, the surface brightness profile can be easily extrapolated. We fitted an exponential disk ($`I_H\propto e^{-r/r_d}`$, where $`r`$ is the semi-major axis and $`r_d`$ is the disk scale length) to the surface brightness profile from a radial distance of 2 arcmin outwards where the bulge contribution is negligible, and estimated a value of $`D_0=8.5\pm 0.8`$ arcmin. The apparent $`H`$-band magnitude (not corrected for extinction) for a circular aperture with diameter of $`A_{0.5}=2.7\pm 0.2`$arcmin is then $`m(H_{0.5}^{\mathrm{app}})=8.96\pm 0.12`$mag, which provides a distance modulus of $`(m-M)_0=28.62\pm 0.26`$ and a distance of $`d=5.3_{-0.6}^{+0.7}`$Mpc.
The IRTF relation was initially calibrated for circular apertures because only single-pixel detectors were used at the time (Aaronson, Huchra, & Mould 1979). Naturally, one would expect a transition to elliptical apertures to reduce the internal dispersion of the IRTF relation because they correct for the galaxy inclination. Peletier & Willner (1993) studied the problem in detail and reported no significant change in the calibration of the IRTF relation when elliptical apertures were used. A possible explanation is that the higher internal absorption, which increases with inclination, may cancel out the projection effect. Peletier & Willner (1993) used elliptical apertures and obtained the following calibration for spiral galaxies in the Ursa Major galaxy cluster: $`\mathrm{log}\mathrm{\Delta }V_{20}(0)=-0.085(H_{0.5}^{\mathrm{abs},\mathrm{e}}+30.95-9.0)+2.603`$. We used a distance modulus to Ursa Major Cluster of $`(m-M)_0=30.95\pm 0.17`$mag (Pierce & Tully 1988). This gives $`H_{0.5}^{\mathrm{abs},\mathrm{e}}=-19.72\pm 0.52`$mag. However, we are still left with the problem of extrapolating the observed total luminosity profile out to $`D_0`$. The measured total (not corrected for extinction) $`H`$-band magnitude within an elliptical aperture with major axis $`2.7\pm 0.2`$arcmin and axial ratio 1.56 (Loan et al. 1996) is $`m(H_{0.5}^{\mathrm{app}})=9.19\pm 0.43`$mag. Correcting for the reddening, we obtained a distance modulus of $`(m-M)_0=28.44\pm 0.69`$mag and distance $`d=4.9_{-1.3}^{+1.8}`$Mpc. The intrinsic spread of Peletier & Willner (1993) IRTF relation and the uncertain distance to the Ursa Major Cluster account for the increased errors of this estimate.
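Both distance estimates follow from short arithmetic (a sketch with the inputs quoted above; our own code, not the authors’):

```
from math import log10

dV20 = 259.0    # inclination-corrected 20% H I width, km/s
A_H = 0.47      # adopted H-band extinction, mag

# Freedman (1990) calibration, circular aperture
H_abs = -10.26 * (log10(dV20) - 2.5) - 21.02      # -> -20.13 mag
mu = (8.96 - A_H) - H_abs                         # distance modulus -> 28.62
print(10**((mu - 25.0) / 5.0))                    # -> ~5.3 Mpc

# Peletier & Willner (1993) calibration, elliptical aperture
H_abs_e = (log10(dV20) - 2.603) / (-0.085) - 30.95 + 9.0   # -> ~-19.7 mag
mu_e = (9.19 - A_H) - H_abs_e                              # -> ~28.4
print(10**((mu_e - 25.0) / 5.0))                           # -> ~4.9 Mpc
```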
We can now use our reddening estimate to correct the optical $`I`$- and $`R`$-band photometry of Loan et al. (1996), and use the optical Tully-Fisher relations to obtain another distance estimate. The two reddening laws discussed in the preceding section predict the same $`A_I`$/$`A_H`$ ratio to within a few percent. Adopting the Mathis (1990) extinction law, we find $`I`$- and $`B`$-band extinctions of $`A_I=1.3\pm 0.3`$mag and $`A_B=3.6\pm 0.8`$mag, the latter lower than the $`A_B=4.3`$mag adopted by Loan et al. (1996). This puts Dw1 at an average distance of $`5.5_{-0.7}^{+0.8}`$ Mpc.
These distance determinations place Dw1 behind NGC 1560 at $`3.5\pm 0.7`$Mpc, UGCA 105 at $`3.8\pm 0.9`$Mpc (Krismer, Tully, & Gioia 1995), and Maffei 1 at $`4.2\pm 0.5`$Mpc (Luppino & Tonry 1993).
## 4 Conclusions
We have obtained deep near-infrared imaging of the highly obscured galaxy Dw1. The observed infrared colors were used to obtain a very accurate estimate of the extinction in the $`H`$-band, $`A_H=0.47\pm 0.11`$mag. This value was confirmed by the optical - near infrared color excesses, and is close to the estimate based on the IRAS $`100\mu `$m flux (Loan et al. 1996). Our approach is more reliable than previous work in that we did not make any additional assumptions about the relation between $`A_V`$ and H i, or the IRAS $`100\mu `$m emission. In addition, the IRTF relation allowed us to estimate both the reddening and the distance in the same wavelength range. This makes our results largely independent of the choice of the reddening law, with the additional advantage that the IRTF shows a smaller dispersion than its optical counterpart. Finally, the infrared reddening estimates are more reliable than those in the optical because infrared colors of spiral galaxies show a smaller intrinsic dispersion, and are less sensitive to the history of star formation than optical colors (Vazdekis et al. 1996).
The IRTF relation (Freedman 1990) yielded a distance of $`d=5.3_{-0.6}^{+0.7}`$Mpc which places Dw1 at the far end of the IC 342/Maffei 1 & 2 group. We also confirmed that Dw1 has an angular diameter greater than 7 arcmin, larger than the originally measured value of 4.2 arcmin.
During the course of this work VDI and AA-H were supported by the National Aeronautics and Space Administration on grant NAG 5-3042 through the University of Arizona. The $`256\times 256`$ camera was supported by NSF Grant AST-9529190. We are grateful to the anonymous referee for comments which helped improve the paper.
Figure Captions
Figure 1.— $`H`$-band mosaic of Dw1 displayed on a logarithmic scale. The orientation is north up, east to the left. The field of view is $`4.7^{\prime }\times 4.9^{\prime }`$.
Figure 2.— Upper panel: Observed surface brightness profiles in $`J`$, $`H`$ and $`K_\mathrm{s}`$ as a function of the semi-major axis. The dashed line is our exponential disk fit (see text). Middle panel: Radial distribution of the $`J-H`$, $`H-K_\mathrm{s}`$ and $`J-K_\mathrm{s}`$ colors. The straight lines represent a linear fit to the color gradients as a function of the semi-major axis (see text). Bottom panel: Observed total magnitude as a function of the semi-major axis in $`J`$, $`H`$ and $`K_\mathrm{s}`$. In all three panels, the vertical bars represent $`3\sigma `$ errors.
A REMARK ON THE BOSON-FERMION CORRESPONDENCE
Yurii A. Neretin<sup>1</sup>

<sup>1</sup>Supported by RFBR grant 98-01-00303 and by the Russian program of support of scientific schools.
Chair of Mathematical Analysis, Moscow Inst. of Electronics and Mathematics, Bol’shoi Triohsviatitel’skii per. 3/12, Moscow 109082, Russia
E-mail: chuhloma@neretin.mccme.ru
## Abstract
We introduce the space of skew-symmetric functions depending on an infinite number of variables and give a simple interpretation of the boson-fermion correspondence.
The boson-fermion correspondence (Skyrme, 1971) is a canonical transformation from bosonic Fock space to fermionic Fock space (more precisely it is an operator from some special bosonic Fock space to some special fermionic Fock space). It is by now a quite well-known object in mathematics and mathematical physics (see for instance ). The purpose of this note is to give a very simple description of this operator: the boson-fermion correspondence will turn out to be multiplication by the Vandermonde determinant. In some sense our description is not new (it is equivalent to an explanation which uses Schur functions, see ); on the other hand, I have never seen this description in the literature nor heard of it.
1. Bosonic Fock space $`𝐅`$. Consider formal variables $`z_1,z_2,\dots `$. Consider the space $`Pol`$ of polynomials in the variables $`z_1,z_2,\dots `$. Define a scalar product in $`Pol`$ by the following rule: the monomials $`z_1^{k_1}z_2^{k_2}\cdots `$ are pairwise orthogonal and
$$\|z_1^{k_1}z_2^{k_2}\cdots \|^2=\underset{j}{\prod }\left(k_j!j^{k_j}\right).$$
(1)
We define the bosonic Fock space $`𝐅`$ (V.A.Fock, 1929, see ) as the completion of $`Pol`$ with respect to this scalar product.
2. Space $`\mathrm{𝐒𝐲𝐦𝐦}`$ of symmetric functions. Consider an infinite collection of formal variables $`x_1,x_2,\dots `$. We define the space $`\mathrm{𝐒𝐲𝐦𝐦}`$ of symmetric functions as the space of symmetric infinite formal sums of monomials in the variables $`x_1,x_2,\dots `$ (see ) (in each monomial only a finite number of variables occurs).
Denote by $`p_n`$ the infinite Newton sums
$$p_n=x_1^n+x_2^n+x_3^n+\cdots $$
The classical scalar product (J.H. Redfield, 1927) in the space $`\mathrm{𝐒𝐲𝐦𝐦}`$ is given by the rule: the “functions” $`p_1^{k_1}p_2^{k_2}\cdots `$ are orthogonal and
$$\|p_1^{k_1}p_2^{k_2}\cdots \|^2=\underset{j}{\prod }\left(k_j!j^{k_j}\right).$$
(2)
3. Boson–symmetric correspondence, see . A canonical isometry $`I:𝐅\to \mathrm{𝐒𝐲𝐦𝐦}`$ is given by the rule
$$I:z_1^{k_1}z_2^{k_2}\cdots \mapsto p_1^{k_1}p_2^{k_2}\cdots $$
In other words the operator $`I`$ is a substitution operator
$$If(x_1,x_2,x_3,\dots )=f(\underset{j}{\sum }x_j,\underset{j}{\sum }x_j^2,\underset{j}{\sum }x_j^3,\dots ).$$
Obviously $`I`$ is an isometry (see (1) and (2)).
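As a concrete illustration, the operator $`I`$ can be experimented with after truncating to finitely many variables. The sketch below (sympy is merely a convenient tool here, not part of the note) realizes the substitution for polynomials in $`z_1,z_2,z_3`$ with four $`x`$-variables:

```python
import sympy as sp

x = sp.symbols('x1:5')      # truncate to four variables x1..x4
z = sp.symbols('z1:4')      # z1, z2, z3

def I(f):
    """Substitute z_k -> p_k = sum_j x_j**k (the Newton sums) into f."""
    newton = {z[k - 1]: sum(xj**k for xj in x) for k in (1, 2, 3)}
    return sp.expand(f.subs(newton))

# The monomial z1**2 * z2 goes to the symmetric function p1**2 * p2:
print(I(z[0]**2 * z[1]))
```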
4. Space of skew-symmetric functions. This object is very simple but psychologically strange. Consider the same variables $`x_1,x_2,\dots `$. A quasi-monomial is a formal expression
$$x_1^{\omega +l_1}x_2^{\omega +l_2}x_3^{\omega +l_3}\cdots $$
where $`l_j=-j`$ for large $`j`$ and $`\omega `$ is a formal symbol. A skew-symmetric function is a formal (infinite) linear combination of quasi-monomials which is skew-symmetric with respect to all finite permutations of the variables $`x_1,x_2,\dots `$.
Remark. Informally, $`\omega `$ means
$$\omega =\infty .$$
It is “the total number” of variables $`x_1,x_2,\dots `$. It is natural to consider the expression
$$\underset{1\le i<j<\infty }{\prod }(x_i-x_j)$$
(3)
as skew-symmetric function. Indeed let us write this expression in the form
$$\underset{1\le i<j<\infty }{\prod }\left\{x_i\left(1-\frac{x_j}{x_i}\right)\right\}=\underset{1\le i<j<\infty }{\prod }x_i\underset{1\le i<j<\infty }{\prod }\left(1-\frac{x_j}{x_i}\right)$$
We obtain
$$\underset{1\le i<j<\infty }{\prod }(x_i-x_j)=\underset{\sigma \in S_{\infty }}{\sum }(-1)^\sigma x_1^{\omega -\sigma (1)}x_2^{\omega -\sigma (2)}\cdots $$
(4)
where $`S_{\infty }`$ is the group of all finite permutations of the set $`\{1,2,3,4,\dots \}`$.
Let $`l_1<l_2<l_3<\cdots `$ be integers and let $`l_j=j`$ for large $`j`$. Consider the basic skew-symmetric functions
$$S_{l_1,l_2,\dots }=\underset{\sigma \in S_{\infty }}{\sum }(-1)^\sigma x_1^{\omega -l_{\sigma (1)}}x_2^{\omega -l_{\sigma (2)}}x_3^{\omega -l_{\sigma (3)}}\cdots $$
A scalar product in the space $`\mathrm{𝐀𝐬𝐲𝐦𝐦}`$ of skew-symmetric functions is defined by the rule: the functions $`S_{l_1,l_2,\dots }`$ form an orthonormal basis in $`\mathrm{𝐀𝐬𝐲𝐦𝐦}`$.
5. Correspondence between $`\mathrm{𝐒𝐲𝐦𝐦}`$ and $`\mathrm{𝐀𝐬𝐲𝐦𝐦}`$. A canonical isometry $`J:\mathrm{𝐒𝐲𝐦𝐦}\to \mathrm{𝐀𝐬𝐲𝐦𝐦}`$ is given by the formula
$$Jf(x_1,x_2,\dots )=f(x_1,x_2,\dots )\underset{1\le i<j<\infty }{\prod }(x_i-x_j).$$
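In the same truncated setting, $`J`$ is literally multiplication by the finite Vandermonde product; a minimal sketch (again with sympy, and again only an illustration) checks the skew-symmetry of the result:

```python
import sympy as sp
from functools import reduce

x = sp.symbols('x1:4')      # three variables suffice for the illustration
vandermonde = reduce(lambda a, b: a * b,
                     (x[i] - x[j] for i in range(3) for j in range(i + 1, 3)))

def J(f):
    """Multiply a symmetric polynomial by prod_{i<j} (x_i - x_j)."""
    return sp.expand(f * vandermonde)

g = J(x[0] + x[1] + x[2])   # the image of p1
# Swapping x1 <-> x2 flips the sign, as a skew-symmetric function must:
swapped = g.subs({x[0]: x[1], x[1]: x[0]}, simultaneous=True)
assert sp.expand(swapped + g) == 0
```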
6. Fermionic Fock space, see . Let $`\dots ,\xi _{-2},\xi _{-1},\xi _0,\xi _1,\xi _2,\dots `$ be a family of anticommuting variables ($`\xi _i\xi _j=-\xi _j\xi _i`$). Consider infinite products
$$\xi _{l_1}\xi _{l_2}\xi _{l_3}\cdots $$
(5)
where $`l_1<l_2<\cdots `$ and $`l_j=j`$ for large $`j`$. We define the fermionic Fock space $`𝚲`$ as the space where the monomials (5) form an orthonormal basis.
7. Isometry between $`\mathrm{𝐀𝐬𝐲𝐦𝐦}`$ and $`𝚲`$. This correspondence is obvious: the basic skew-symmetric function $`S_{l_1,l_2,\dots }`$ corresponds to the basis element (5).
8. The boson-fermion correspondence is the composition of the correspondences
$$𝐅\to \mathrm{𝐒𝐲𝐦𝐦}\to \mathrm{𝐀𝐬𝐲𝐦𝐦}\to 𝚲.$$
In fact it is the composition of the substitution
$$z_k=\underset{j}{\sum }x_j^k$$
and the multiplication with the Vandermonde determinant (3).
Some additional discussion of boson-symmetric correspondences is contained in .
# State Abstraction in MAXQ Hierarchical Reinforcement Learning
## 1 Introduction
Most work on hierarchical reinforcement learning has focused on temporal abstraction. For example, in the Options framework , the programmer defines a set of macro actions (“options”) and provides a policy for each. Learning algorithms (such as semi-Markov Q learning) can then treat these temporally abstract actions as if they were primitives and learn a policy for selecting among them. Closely related is the HAM framework, in which the programmer constructs a hierarchy of finite-state controllers . Each controller can include non-deterministic states (where the programmer was not sure what action to perform). The HAMQ learning algorithm can then be applied to learn a policy for making choices in the non-deterministic states. In both of these approaches—and in other studies of hierarchical RL (e.g., )—each option or finite state controller must have access to the entire state space. The one exception to this—the Feudal-Q method of Dayan and Hinton —introduced state abstractions in an unsafe way, such that the resulting learning problem was only partially observable. Hence, they could not provide any formal results for the convergence or performance of their method.
Even a brief consideration of human-level intelligence shows that such methods cannot scale. When deciding how to walk from the bedroom to the kitchen, we do not need to think about the location of our car. Without state abstractions, any RL method that learns value functions must learn a separate value for each state of the world. Some argue that this can be solved by clever value function approximation methods—and there is some merit in this view. In this paper, however, we explore a different approach in which we identify aspects of the MDP that permit state abstractions to be safely incorporated in a hierarchical reinforcement learning method without introducing function approximations. This permits us to obtain the first proof of the convergence of hierarchical RL to an optimal policy in the presence of state abstraction.
We introduce these state abstractions within the MAXQ framework , but the basic ideas are general. In our previous work with MAXQ, we briefly discussed state abstractions, and we employed them in our experiments. However, we could not prove that our algorithm (MAXQ-Q) converged with state abstractions, and we did not have a usable characterization of the situations in which state abstraction could be safely employed. This paper solves these problems and in addition compares the effectiveness of MAXQ-Q learning with and without state abstractions. The results show that state abstraction is very important, and in most cases essential, to the effective application of MAXQ-Q learning.
## 2 The MAXQ Framework
Let $`M`$ be a Markov decision problem with states $`S`$, actions $`A`$, reward function $`R(s^{}|s,a)`$ and probability transition function $`P(s^{}|s,a)`$. Our results apply in both the finite-horizon undiscounted case and the infinite-horizon discounted case. Let $`\{M_0,\dots ,M_n\}`$ be a set of subtasks of $`M`$, where each subtask $`M_i`$ is defined by a termination predicate $`T_i`$ and a set of actions $`A_i`$ (which may be other subtasks or primitive actions from $`A`$). The “goal” of subtask $`M_i`$ is to move the environment into a state such that $`T_i`$ is satisfied. (This can be refined using a local reward function to express preferences among the different states satisfying $`T_i`$ , but we omit this refinement in this paper.) The subtasks of $`M`$ must form a DAG with a single “root” node—no subtask may invoke itself directly or indirectly. A hierarchical policy is a set of policies $`\pi =\{\pi _0,\dots ,\pi _n\}`$, one for each subtask. A hierarchical policy is executed using standard procedure-call-and-return semantics, starting with the root task $`M_0`$ and unfolding recursively until primitive actions are executed. When the policy for $`M_i`$ is invoked in state $`s`$, let $`P(s^{},N|s,i)`$ be the probability that it terminates in state $`s^{}`$ after executing $`N`$ primitive actions. A hierarchical policy is recursively optimal if each policy $`\pi _i`$ is optimal given the policies of its descendants in the DAG.
Let $`V(i,s)`$ be the value function for subtask $`i`$ in state $`s`$ (i.e., the value of following some policy starting in $`s`$ until we reach a state $`s^{}`$ satisfying $`T_i(s^{})`$). Similarly, let $`Q(i,s,j)`$ be the $`Q`$ value for subtask $`i`$ of executing child action $`j`$ in state $`s`$ and then executing the current policy until termination. The MAXQ value function decomposition is based on the observation that each subtask $`M_i`$ can be viewed as a Semi-Markov Decision problem in which the reward for performing action $`j`$ in state $`s`$ is equal to $`V(j,s)`$, the value function for subtask $`j`$ in state $`s`$. To see this, consider the sequence of rewards $`r_t`$ that will be received when we execute child action $`j`$ and then continue with subsequent actions according to hierarchical policy $`\pi `$:
$$Q(i,s,j)=E\{r_t+\gamma r_{t+1}+\gamma ^2r_{t+2}+\cdots |s_t=s,\pi \}$$
The macro action $`j`$ will execute for some number of steps $`N`$ and then return. Hence, we can partition this sum into two terms:
$$Q(i,s,j)=E\left\{\underset{u=0}{\overset{N-1}{\sum }}\gamma ^ur_{t+u}+\underset{u=N}{\overset{\infty }{\sum }}\gamma ^ur_{t+u}\right|s_t=s,\pi \}$$
The first term is the discounted sum of rewards until subtask $`j`$ terminates—$`V(j,s)`$. The second term is the cost of finishing subtask $`i`$ after $`j`$ is executed (discounted to the time when $`j`$ is initiated). We call this second term the completion function, and denote it $`C(i,s,j)`$. We can then write the Bellman equation as
$`Q(i,s,j)`$ $`=`$ $`{\displaystyle \underset{s^{},N}{\sum }}P(s^{},N|s,j)[V(j,s)+\gamma ^N\underset{j^{}}{\mathrm{max}}Q(i,s^{},j^{})]`$
$`=`$ $`V(j,s)+C(i,s,j)`$
To terminate this recursion, define $`V(a,s)`$ for a primitive action $`a`$ to be the expected reward of performing action $`a`$ in state $`s`$.
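For concreteness, this decomposition can be evaluated by a direct recursion. In the sketch below the dictionaries keyed by $`(i,s,j)`$ are an illustrative layout of our own, not the data structures of any particular MAXQ implementation:

```python
# Hedged sketch: evaluate the MAXQ decomposition from stored completion
# values C[(i, s, j)] and primitive expected rewards V_prim[(a, s)].
def V(i, s, children, C, V_prim):
    if i not in children:                       # i is a primitive action
        return V_prim[(i, s)]
    return max(Q(i, s, j, children, C, V_prim) for j in children[i])

def Q(i, s, j, children, C, V_prim):
    # Q(i,s,j) = V(j,s) + C(i,s,j): execute j, then complete subtask i.
    return V(j, s, children, C, V_prim) + C[(i, s, j)]
```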
The MAXQ-Q learning algorithm is a simple variation of $`Q`$ learning in which at subtask $`M_i`$, state $`s`$, we choose a child action $`j`$ and invoke its (current) policy. When it returns, we observe the resulting state $`s^{}`$ and the number of elapsed time steps $`N`$ and update $`C(i,s,j)`$ according to
$$C(i,s,j):=(1-\alpha _t)C(i,s,j)+\alpha _t\gamma ^N[\underset{a^{}}{\mathrm{max}}V(a^{},s^{})+C(i,s^{},a^{})].$$
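In code, the update is a single assignment applied when child $`j`$ returns after $`N`$ primitive steps, leaving the environment in state $`s^{}`$; this sketch reuses the hypothetical tables of the previous sketch:

```python
def maxq_q_update(C, i, s, j, s_prime, N, alpha, gamma, children, V_prim):
    """One MAXQ-Q update of the completion function C[(i, s, j)]."""
    best = max(V(a, s_prime, children, C, V_prim) + C[(i, s_prime, a)]
               for a in children[i])
    C[(i, s, j)] = (1 - alpha) * C[(i, s, j)] + alpha * gamma**N * best
```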
To prove convergence, we require that the exploration policy executed during learning be an ordered GLIE policy. An ordered policy is a policy that breaks Q-value ties among actions by preferring the action that comes first in some fixed ordering. A GLIE policy is a policy that (a) executes each action infinitely often in every state that is visited infinitely often and (b) converges with probability 1 to a greedy policy. The ordering condition is required to ensure that the recursively optimal policy is unique. Without this condition, there are potentially many different recursively optimal policies with different values, depending on how ties are broken within subtasks, subsubtasks, and so on.
###### Theorem 1
Let $`M=\langle S,A,P,R\rangle `$ be either an episodic MDP for which all deterministic policies are proper or a discounted infinite horizon MDP with discount factor $`\gamma `$. Let $`H`$ be a DAG defined over subtasks $`\{M_0,\dots ,M_k\}`$. Let $`\alpha _t(i)>0`$ be a sequence of constants for each subtask $`M_i`$ such that
$$\underset{T\to \infty }{\mathrm{lim}}\underset{t=1}{\overset{T}{\sum }}\alpha _t(i)=\infty \quad \text{and}\quad \underset{T\to \infty }{\mathrm{lim}}\underset{t=1}{\overset{T}{\sum }}\alpha _t^2(i)<\infty $$
(1)
Let $`\pi _x(i,s)`$ be an ordered GLIE policy at each subtask $`M_i`$ and state $`s`$ and assume that $`|V_t(i,s)|`$ and $`|C_t(i,s,a)|`$ are bounded for all $`t`$, $`i`$, $`s`$, and $`a`$. Then with probability 1, algorithm MAXQ-Q converges to the unique recursively optimal policy for $`M`$ consistent with $`H`$ and $`\pi _x`$.
Proof: (sketch) The proof is based on Proposition 4.5 from Bertsekas and Tsitsiklis and follows the standard stochastic approximation argument, generalized to the case of non-stationary noise. There are two key points in the proof. Define $`P_t(s^{},N|s,j)`$ to be the probability transition function that describes the behavior of executing the current policy for subtask $`j`$ at time $`t`$. By an inductive argument, we show that this probability transition function converges (w.p. 1) to the probability transition function of the recursively optimal policy for $`j`$. Second, we show how to convert the usual weighted max norm contraction for $`Q`$ into a weighted max norm contraction for $`C`$. This is straightforward, and completes the proof.
What is notable about MAXQ-Q is that it can learn the value functions of all subtasks simultaneously—it does not need to wait for the value function for subtask $`j`$ to converge before beginning to learn the value function for its parent task $`i`$. This gives a completely online learning algorithm with wide applicability.
## 3 Conditions for Safe State Abstraction
To motivate state abstraction, consider the simple Taxi Task shown in Figure 1. There are four special locations in this world, marked as R(ed), B(lue), G(reen), and Y(ellow). In each episode, the taxi starts in a randomly-chosen square. There is a passenger at one of the four locations (chosen randomly), and that passenger wishes to be transported to one of the four locations (also chosen randomly). The taxi must go to the passenger’s location (the “source”), pick up the passenger, go to the destination location (the “destination”), and put down the passenger there. The episode ends when the passenger is deposited at the destination location.
There are six primitive actions in this domain: (a) four navigation actions that move the taxi one square North, South, East, or West, (b) a Pickup action, and (c) a Putdown action. Each action is deterministic. There is a reward of $`-1`$ for each action and an additional reward of $`+20`$ for successfully delivering the passenger. There is a reward of $`-10`$ if the taxi attempts to execute the Putdown or Pickup actions illegally. If a navigation action would cause the taxi to hit a wall, the action is a no-op, and there is only the usual reward of $`-1`$.
This task has a hierarchical structure (see Fig. 1) in which there are two main sub-tasks: Get the passenger (Get) and Deliver the passenger (Put). Each of these subtasks in turn involves the subtask of navigating to one of the four locations ($`\mathrm{𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾}(t)`$; where $`t`$ is bound to the desired target location) and then performing a Pickup or Putdown action. This task illustrates the need to support both temporal abstraction and state abstraction. The temporal abstraction is obvious—for example, Get is a temporally extended action that can take different numbers of steps to complete depending on the distance to the target. The top level policy (get passenger; deliver passenger) can be expressed very simply with these abstractions.
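The task graph just described is small enough to write down explicitly. The sketch below encodes it in the children/termination form used in the earlier sketches; the state tuple and string labels are a simplification of our own, not the encoding of the original experiments:

```python
# Hedged sketch of the taxi task DAG; state = (taxi_pos, passenger, dest),
# with passenger == 'TAXI' while the passenger is being carried.
LOCATIONS = ['R', 'G', 'B', 'Y']
children = {
    'Root': ['Get', 'Put'],
    'Get':  ['Pickup'] + [('Navigate', t) for t in LOCATIONS],
    'Put':  ['Putdown'] + [('Navigate', t) for t in LOCATIONS],
}
children.update({('Navigate', t): ['North', 'South', 'East', 'West']
                 for t in LOCATIONS})

def terminated(task, state):
    taxi_pos, passenger, dest = state
    if task == 'Get':
        return passenger == 'TAXI'
    if task == 'Put':                  # terminated whenever not carrying
        return passenger != 'TAXI'
    if isinstance(task, tuple):        # ('Navigate', t)
        return taxi_pos == task[1]
    return passenger == dest           # Root: passenger delivered
```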
The need for state abstraction is perhaps less obvious. Consider the Get subtask. While this subtask is being solved, the destination of the passenger is completely irrelevant—it cannot affect any of the nagivation or pickup decisions. Perhaps more importantly, when navigating to a target location (either the source or destination location of the passenger), only the taxi’s location and identity of the target location are important. The fact that in some cases the taxi is carrying the passenger and in other cases it is not is irrelevant.
We now introduce the five conditions for state abstraction. We will assume that the state $`s`$ of the MDP is represented as a vector of state variables. A state abstraction can be defined for each combination of subtask $`M_i`$ and child action $`j`$ by identifying a subset $`X`$ of the state variables that are relevant and defining the value function and the policy using only these relevant variables. Such value functions and policies are said to be abstract.
The first two conditions involve eliminating irrelevant variables within a subtask of the MAXQ decomposition.
Condition 1: Subtask Irrelevance. Let $`M_i`$ be a subtask of MDP $`M`$. A set of state variables $`Y`$ is irrelevant to subtask $`i`$ if the state variables of $`M`$ can be partitioned into two sets $`X`$ and $`Y`$ such that for any stationary abstract hierarchical policy $`\pi `$ executed by the descendants of $`M_i`$, the following two properties hold: (a) the state transition probability distribution $`P^\pi (s^{},N|s,j)`$ for each child action $`j`$ of $`M_i`$ can be factored into the product of two distributions:
$$P^\pi (x^{},y^{},N|x,y,j)=P^\pi (x^{},N|x,j)P^\pi (y^{}|y,j),$$
(2)
where $`x`$ and $`x^{}`$ give values for the variables in $`X`$, and $`y`$ and $`y^{}`$ give values for the variables in $`Y`$; and (b) for any pair of states $`s_1=(x,y_1)`$ and $`s_2=(x,y_2)`$ and any child action $`j`$, $`V^\pi (j,s_1)=V^\pi (j,s_2)`$.
In the Taxi problem, the source and destination of the passenger are irrelevant to the $`\mathrm{𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾}(t)`$ subtask—only the target $`t`$ and the current taxi position are relevant.
Condition 2: Leaf Irrelevance. A set of state variables $`Y`$ is irrelevant for a primitive action $`a`$ if for any pair of states $`s_1`$ and $`s_2`$ that differ only in their values for the variables in $`Y`$,
$$\underset{s_1^{}}{\sum }P(s_1^{}|s_1,a)R(s_1^{}|s_1,a)=\underset{s_2^{}}{\sum }P(s_2^{}|s_2,a)R(s_2^{}|s_2,a).$$
This condition is satisfied by the primitive actions North, South, East, and West in the taxi task, where all state variables are irrelevant because $`R`$ is constant.
The next two conditions involve “funnel” actions—macro actions that move the environment from some large number of possible states to a small number of resulting states. The completion function of such subtasks can be represented using a number of values proportional to the number of resulting states.
Condition 3: Result Distribution Irrelevance (Undiscounted case.) A set of state variables $`Y_j`$ is irrelevant for the result distribution of action $`j`$ if, for all abstract policies $`\pi `$ executed by $`M_j`$ and its descendants in the MAXQ hierarchy, the following holds: for all pairs of states $`s_1`$ and $`s_2`$ that differ only in their values for the state variables in $`Y_j`$,
$$\forall s^{}:\;P^\pi (s^{}|s_1,j)=P^\pi (s^{}|s_2,j).$$
Consider, for example, the Get subroutine under an optimal policy for the taxi task. Regardless of the taxi’s position in state $`s`$, the taxi will be at the passenger’s starting location when Get finishes executing (i.e., because the taxi will have just completed picking up the passenger). Hence, the taxi’s initial position is irrelevant to its resulting position. (Note that this is only true in the undiscounted setting—with discounting, the result distributions are not the same because the number of steps $`N`$ required for Get to finish depends very much on the starting location of the taxi. Hence this form of state abstraction is rarely useful for cumulative discounted reward.)
Condition 4: Termination. Let $`M_j`$ be a child task of $`M_i`$ with the property that whenever $`M_j`$ terminates, it causes $`M_i`$ to terminate too. Then the completion cost $`C(i,s,j)=0`$ and does not need to be represented. This is a particular kind of funnel action—it funnels all states into terminal states for $`M_i`$.
For example, in the Taxi task, in all states where the taxi is holding the passenger, the Put subroutine will succeed and result in a terminal state for Root. This is because the termination predicate for Put (i.e., that the passenger is at his or her destination location) implies the termination condition for Root (which is the same). This means that $`C(\mathrm{𝖱𝗈𝗈𝗍},s,\mathrm{𝖯𝗎𝗍})`$ is uniformly zero, for all states $`s`$ where Put is not terminated.
Condition 5: Shielding. Consider subtask $`M_i`$ and let $`s`$ be a state such that for all paths from the root of the DAG down to $`M_i`$, there exists a subtask that is terminated. Then no $`C`$ values need to be represented for subtask $`M_i`$ in state $`s`$, because it can never be executed in $`s`$.
In the Taxi task, a simple example of this arises in the Put task, which is terminated in all states where the passenger is not in the taxi. This means that we do not need to represent $`C(\mathrm{𝖱𝗈𝗈𝗍},s,\mathrm{𝖯𝗎𝗍})`$ in these states. The result is that, when combined with the Termination condition above, we do not need to explicitly represent the completion function for Put at all!
By applying these abstraction conditions to the Taxi task, the value function can be represented using 632 values, which is much less than the 3,000 values required by flat Q learning. Without state abstractions, MAXQ requires 14,000 values!
###### Theorem 2
(Convergence with State Abstraction) Let $`H`$ be a MAXQ task graph that incorporates the five kinds of state abstractions defined above. Let $`\pi _x`$ be an ordered GLIE exploration policy that is abstract. Then under the same conditions as Theorem 1, MAXQ-Q converges with probability 1 to the unique recursively optimal policy $`\pi _r^{}`$ defined by $`\pi _x`$ and $`H`$.
Proof: (sketch) Consider a subtask $`M_i`$ with relevant variables $`X`$ and two arbitrary states $`(x,y_1)`$ and $`(x,y_2)`$. We first show that under the five abstraction conditions, the value function of $`\pi _r^{}`$ can be represented using $`C(i,x,j)`$ (i.e., ignoring the $`y`$ values). To learn the values of $`C(i,x,j)=\underset{x^{},N}{\sum }P(x^{},N|x,j)V(i,x^{})`$, a Q-learning algorithm needs samples of $`x^{}`$ and $`N`$ drawn according to $`P(x^{},N|x,j)`$. The second part of the proof involves showing that regardless of whether we execute $`j`$ in state $`(x,y_1)`$ or in $`(x,y_2)`$, the resulting $`x^{}`$ and $`N`$ will have the same distribution, and hence, give the correct expectations. Analogous arguments apply for leaf irrelevance and $`V(a,x)`$. The termination and shielding cases are easy.
## 4 Experimental Results
We implemented MAXQ-Q for a noisy version of the Taxi domain and for Kaelbling’s HDG navigation task using Boltzmann exploration. Figure 2 shows the performance of flat Q and MAXQ-Q with and without state abstractions on these tasks. Learning rates and Boltzmann cooling rates were separately tuned to optimize the performance of each method. The results show that without state abstractions, MAXQ-Q learning is slower to converge than flat Q learning, but that with state abstraction, it is much faster.
## 5 Conclusion
This paper has shown that by understanding the reasons that state variables are irrelevant, we can obtain a simple proof of the convergence of MAXQ-Q learning under state abstraction. This is much more fruitful than previous efforts based only on weak notions of state aggregation , and it suggests that future research should focus on identifying other conditions that permit safe state abstraction.
# Spectator and participant decay in heavy ion collisions
## Abstract
We analyze the thermodynamical state of nuclear matter in transport calculations of heavy–ion reactions. In particular we determine temperatures and radial flow parameters from an analysis of fragment energy spectra and compare to local microscopic temperatures obtained from an analysis of local momentum space distributions. The analysis shows that the spectator reaches an equilibrated freeze-out configuration which undergoes simultaneous fragmentation. The fragments from the participant region, on the other hand, do not seem to come from a common fragmenting source in thermodynamical equilibrium.
PACS number(s): 25.75.-q,25.70.Mn
One of the major interests in the study of intermediate energy heavy–ion collisions is the understanding of the multifragmentation phenomenon and its connection with liquid–gas phase transitions . For this it has to be assumed that in a heavy ion collision at some stage a part of the system is both in thermodynamical equilibrium and unstable. Such a configuration is often termed a freeze-out configuration. The multifragmentation process would reflect the parameters of this source, i.e. its temperature, density, and perhaps collective (radial) flow pattern. Experimentally the fragment energy spectra are described in terms of such freeze-out models to extract these parameters. On the other hand, information on these quantities is also sought from other observables, in particular from isotope ratios of fragments, and from excited state populations. In the past the conclusions drawn from these different sources have often been in conflict with each other.
One way to study whether this scenario is applicable is to analyze the results of transport calculations of heavy ion collisions . Since such calculations reproduce reasonably well the asymptotic observables it should be meaningful to look also into their intermediate time behaviour to see whether, where, and when such freeze-out configurations exist. In ref. a method was developed to determine thermodynamical variables locally by analyzing the local momentum distribution even in the presence of possible anisotropies.
In the present work we apply this analysis to intermediate energy collisions of $`Au+Au`$, which were also studied extensively experimentally . We want to establish whether the concept of a freeze-out configuration used in the experimental analysis of fragment kinetic energy spectra is supported in realistic transport calculations. Fragments are described using a coalescence algorithm as detailed later. We study, in particular, the participant and spectator regions which represent rather clean thermodynamical situations in a heavy ion collision. In this way we also try to clarify some of the discrepancies between different methods of temperature definition.
We base our investigation on relativistic transport calculations of the Boltzmann-Nordheim-Vlasov type. In this work we use, in particular, the relativistic Landau-Vlasov approach which was described in detail in ref. . It uses Gaussian test particles in coordinate and momentum space and thus allows one to construct locally a smooth momentum distribution. For the self energies in the transport calculation we have adopted the non-linear parametrization NL2 . In ref. we compared this parametrization to more realistic non-equilibrium self energies based on Dirac-Brueckner calculations. With respect to the thermodynamical variables discussed here, we found no essential differences between the two models, and thus we use the simpler NL2 here. A similar analysis with respect to fragment kinetic energy spectra in central collisions has been performed previously in ref. in the framework of non-relativistic transport theory with special emphasis on the dependence on assumptions of the equation of state and in-medium cross sections. Here also a weak dependence of radial flow observables on particular choices of the microscopic input was found.
The microscopic determination of a local temperature from the phase space distribution was discussed in detail in refs. . Thus we only briefly review the procedure here. The local momentum distribution obtained from a transport calculation is subjected to a fit in terms of covariant hot Fermi–Dirac distributions of the form
$$n(x,\vec{k},T)=\frac{1}{1+\mathrm{exp}\left[(k_\mu ^{*}u^\mu -\mu ^{*})/T\right]}$$
(1)
with the temperature $`T`$, the effective chemical potential $`\mu ^{*}(T)`$ and $`k_0^{*}=E^{*}=\sqrt{\vec{k}^2+m^{*2}}`$. For vanishing temperature eq.(1) includes the limit of a sharp Fermi ellipsoid with $`\mu ^{*}=E_\mathrm{F}=\sqrt{k_\mathrm{F}^2+m^{*2}}`$. The local streaming four-velocity $`u_\mu `$ is determined from the local 4-current $`j^\mu `$ as $`u^\mu =j^\mu /\rho _0`$, where $`\rho _0=\sqrt{j_\mu j^\mu }`$ is the local invariant density. Then the temperature $`T`$ is the only fit parameter to be directly determined from the phase space distribution. In this procedure the effect of the potential energy is taken into account by way of effective masses and momenta and the Fermi motion of the correct density by the chemical potential $`\mu ^{*}`$. Thus this temperature is a local thermodynamic temperature, which in the following we denote as $`T_{loc}`$.
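Schematically, extracting $`T_{loc}`$ amounts to adjusting $`T`$ (and here, for simplicity, also $`\mu ^{*}`$) until eq. (1) matches the sampled occupation numbers. The toy sketch below uses synthetic data and assumes scipy is available; in the actual analysis $`\mu ^{*}`$ is tied to the local density rather than fitted freely:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy sketch of the temperature fit of eq. (1): in the local rest frame
# k*_mu u^mu reduces to E*, and T (here also mu*) is adjusted to match
# the sampled occupation numbers.  Synthetic data only.
def fermi_dirac(E_star, T, mu_star):
    return 1.0 / (1.0 + np.exp((E_star - mu_star) / T))

E = np.linspace(940.0, 1100.0, 80)          # MeV, illustrative grid
n_sample = fermi_dirac(E, 6.0, 990.0)       # stand-in for transport output
(T_fit, mu_fit), _ = curve_fit(fermi_dirac, E, n_sample, p0=(10.0, 980.0))
print(f"T_loc = {T_fit:.2f} MeV")           # recovers ~6 MeV
```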
Expression (1) is appropriate for a system in local equilibrium. In a heavy ion collision this is generally not the case and would lead to an interpretation of collective kinetic energies in terms of temperature. To account for anisotropy effects, e.g. in ref. longitudinal and perpendicular temperatures have been introduced. In our approach we model anisotropic momentum distributions by counter-streaming or ’colliding’ nuclear matter , i.e., by a superposition of two Fermi distributions $`n^{(12)}=n^{(1)}+n^{(2)}-\delta n^{(12)}`$, where $`\delta n^{(12)}=\sqrt{n^{(1)}n^{(2)}}`$ guarantees the validity of the Pauli principle and provides a smooth transition to one equilibrated system. In ref. it has been demonstrated that this ansatz allows a reliable description of the participant and spectator matter at each stage of the reaction.
Experimentally much of the information about the thermodynamical behaviour in heavy ion collisions originates from the analysis of fragment observables. Thus also in the present analysis we will need to generate and analyze fragments. The correct and practical procedure for properly describing fragment production is still very much debated . Here we do not enter into this debate but use the simplest algorithm, namely a coalescence model, as we have done and described in ref. . In brief, we apply phase space coalescence, i.e. nucleons form a fragment, if their positions and momenta ($`\vec{x}_i,\vec{p}_i`$) satisfy $`|\vec{x}_i-\vec{X}_f|\le R_c`$ and $`|\vec{p}_i-\vec{P}_f|\le P_c`$. $`R_c,P_c`$ are parameters which are fitted to reproduce the observed mass distributions and thus guarantee a good overall description of the fragment multiplicities.
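A minimal, order-dependent greedy variant of such a phase-space coalescence might look as follows; it illustrates only the clustering criterion, not necessarily the exact algorithm used in the calculations:

```python
import numpy as np

def coalesce(x, p, R_c, P_c):
    """Greedy sketch of phase-space coalescence: a nucleon joins the first
    fragment whose centroid lies within R_c (coordinate space) and P_c
    (momentum space).  x, p: (N, 3) arrays; returns lists of indices.
    The result is order-dependent -- an illustration only."""
    fragments = []
    for i in range(len(x)):
        for frag in fragments:
            X_f = np.mean(x[frag], axis=0)
            P_f = np.mean(p[frag], axis=0)
            if (np.linalg.norm(x[i] - X_f) <= R_c
                    and np.linalg.norm(p[i] - P_f) <= P_c):
                frag.append(i)
                break
        else:
            fragments.append([i])   # start a new (single-nucleon) fragment
    return fragments
```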
Fragment kinetic energy spectra have been analyzed experimentally in the Siemens-Rasmussen or blast model . In this model the kinetic energies are interpreted in terms of a thermalized freeze-out configuration, characterized by a common temperature and a radial flow, i.e. by an isotropically expanding source. The kinetic energies are given by
$$\frac{dN}{dE}\propto pE\int \beta ^2\,d\beta \,n(\beta )\mathrm{exp}(-\gamma E/T)\times \left[\frac{\mathrm{sinh}\alpha }{\alpha }\left(\gamma +\frac{T}{E}\right)-\frac{T}{E}\mathrm{cosh}\alpha \right],$$
(2)
where $`p`$ and $`E`$ are the center of mass momentum and the total energy of the particle with mass $`m`$, respectively, and where $`\gamma =(1-\beta ^2)^{-1/2}`$ and $`\alpha =\gamma \beta p/T`$. Various assumptions have been made for the flow profile $`n(\beta )`$. A good parametrization is a Fermi-type function . However, the results are not very different when using a single flow velocity, i.e. $`n(\beta )\propto \delta (\beta -\beta _f)`$, which we also use here for simplicity. One then has two parameters in the fit, namely $`\beta _f`$ and the temperature parameter in eq.(2), which we call $`T_{slope}`$. It is, of course, not obvious that $`T_{slope}`$ represents a thermodynamical temperature. One of the aims of this investigation is, in fact, to find its significance. The expression (2) has been applied to kinetic energy spectra of all fragment masses simultaneously, yielding a global $`T_{slope}(global)`$, or to each fragment mass separately, giving $`T_{slope}(A_f)`$. If a global description was achieved, it was concluded that a freeze-out configuration exists. We will also test this procedure.
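For reference, eq. (2) with a single flow velocity is straightforward to evaluate numerically. In the sketch below (an illustration, not the fitting code used here) $`\beta _f=0`$ must be approximated by a small number, since $`\mathrm{sinh}\alpha /\alpha `$ is a 0/0 limit at rest:

```python
import numpy as np

def blast_dNdE(E, m, T, beta_f):
    """Unnormalized dN/dE of eq. (2) for a single flow velocity beta_f.
    E is the total energy; all quantities in consistent units (MeV).
    beta_f = 0 must be approximated by a small number (alpha -> 0/0)."""
    p = np.sqrt(E**2 - m**2)
    gamma = 1.0 / np.sqrt(1.0 - beta_f**2)
    a = gamma * beta_f * p / T
    return (p * E * np.exp(-gamma * E / T)
            * (np.sinh(a) / a * (gamma + T / E) - T / E * np.cosh(a)))

m = 4 * 939.0                               # e.g. an A_f = 4 fragment
E_kin = np.linspace(5.0, 200.0, 50)         # kinetic energy grid
spectrum = blast_dNdE(E_kin + m, m, 17.0, 1e-4)   # spectator-like case
```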
In the following we apply the above methods to central ($`b=0`$ fm) and semi–central ($`b=4.5`$ fm) $`Au+Au`$ collisions at $`E_{beam}=0.25`$–$`0.8`$ AGeV. These reactions have been studied extensively by the ALADIN and EOS collaborations with respect to temperature and phase transitions. In ref. we have previously studied this reaction at one energy, 600 MeV/A, only with respect to local temperatures and thermodynamical instabilities. In this work we perform the fragment analysis with the blast model to extract and compare slope temperatures and we discuss a wider range of incident energies .
The spectator is that part of the system which has not collided with the other nucleus, but which is nevertheless excited due to the shearing–off of part of the nucleus and due to absorption of participant particles. In the calculation it is identified as those particles which have approximately beam rapidity. It was seen in ref. that it represents a well equilibrated piece of nuclear matter at finite temperature.
In fig. 1 we show the evolution with time of the local temperature and the density for the spectator at various incident energies. After the time when the spectator is fully developed the properties are rather independent of incident energy, which supports the freeze-out picture. Also after this time the density and temperature remain rather constant for several tens of fm/c, making it an ideal system in order to study the thermodynamical evolution of low-density, finite temperature nuclear matter. In ref. we also determined pressure and studied the dependence of pressure on density. We found that after about 45 fm/c the effective compressibility $`K\propto \partial P/\partial \rho |_T`$ became negative indicating that the system enters a region of spinodal instability and should subsequently break up into fragments. At this time we find densities of about $`\rho \simeq 0.4`$–$`0.5\rho _0`$ and $`T\simeq 5`$–$`6`$ MeV, which is in good agreement with findings of the ALADIN group based on isotope thermometers . Recently the ALADIN group has also determined kinetic energy spectra of spectator fragments and has extracted slope temperatures using eq.(2). It was found that these are typically 10 to 12 MeV higher than those measured with the isotope thermometer.
Applying the coalescence model to the spectator we obtain kinetic energy spectra as shown in fig.2 at 600A MeV for nucleons ($`A_f=1`$) and for fragments with $`A_f\ge 2`$ separately. We fitted these spectra with the model of eq. (2) in the rest frame of the spectator ($`\beta _f=0`$). The $`A_f\ge 2`$ spectrum is well fitted with a temperature of $`T_{slope}\simeq (17\pm 2)`$ MeV. The nucleon spectrum, on the other hand, shows a two-component structure, as was also observed experimentally in ref. . It is dominated by a low energy component with $`T_{slope,low}=(7.3\pm 3.5)`$ MeV. The high energy component has rather poor statistics in our calculation and we interpret it as nucleons from the participant that have entered the spectator region. The slope temperature of the low energy component is close to the local temperature $`T_{loc}\simeq 5`$–$`6`$ MeV as discussed above with respect to fig. 1. Thus for nucleons both methods of temperature determination consistently are seen to yield about the same result. In fact, they should not necessarily be identical, since $`T_{loc}`$ is determined by fitting the momentum distribution with a Fermi function while $`T_{slope}`$ in eq. (2) is based on a Maxwell-Boltzmann distribution, which are not the same at such low temperatures.
On the other hand the slope temperatures of the fragments are considerably higher than those of the nucleons. In fig. 3 we show the slope temperatures separately for the different fragment masses and also the local temperature for comparison. There is a rapid increase of $`T_{slope}`$ with fragment mass which saturates for $`A_f\ge 3`$ around $`T_{slope}\simeq 17`$ MeV, which was the temperature determined in fig. 2. The experimental values from ALADIN also shown in fig. 3 were obtained by analogous blast model fits to the measured spectra. It can be seen that the slope temperatures from the theoretical calculations and from the data agree extremely well. Also the corresponding kinetic energies which range from 23.7 MeV ($`A_f=2`$) to 28.1 MeV ($`A_f=8`$) are in good agreement with the ALADIN data.
At first sight it is surprising that $`T_{slope}`$ for nucleons and fragments differ from each other and also from $`T_{loc}`$. The difference has been interpreted in ref. in terms of the Goldhaber model , as it has been applied to fragmentation by Bauer . When a system of fermions of given density and temperature suddenly breaks up the fragment momenta are approximately given by the sum of the momenta of the nucleons before the decay. For heavier fragments the addition of momenta can be considered as a stochastic process which via the central limit theorem leads to Gaussian energy distributions which resemble Maxwell distributions and thus contribute to the slope temperature. As discussed by Bauer and also in ref. this effect increases the slope temperatures by an amount which is of the order of the difference between the isotope and the slope temperatures.
We wanted to see whether a similar effect can explain the mass dependence seen in fig. 3. We therefore initialized statistically a system of the mass and temperature of the spectator, and subjected it to the same fragmentation procedure (coalescence) and to the same fit by eq. (2) as we did for the heavy ion collision. These slope temperatures obtained from the statistical model are given in fig. 3 as a band, which corresponds to initializations between $`\rho =0.3\rho _0`$ and $`T=6`$ MeV and $`\rho =0.4\rho _0`$ and $`T=5.5`$ MeV, which cover the range of values in fig. 1. It is seen that the model qualitatively explains the increase in the slope temperature relative to the local temperature and the increase with fragment mass relative to that for nucleons. A similar conclusion was drawn in ref using the results from ref. . This shows that $`T_{slope}`$ is not a thermodynamic temperature. The difference relative to the thermodynamic temperature can be understood from the fact, that to form a fragment the internal kinetic energies of the nucleons are limited by the coalescence condition. Since on the average all nucleons have the same momenta, this means that the collective momentum per nucleon of the fragment increases relative to the average. This simulates a higher temperature. This effect has been called ”contribution of Fermi motion to the temperature”. The purpose of using the Goldhaber model here was to demonstrate this effect. Whether a Goldhaber model applies to heavy ion collisions, can, of course, be debated, but our results are independent of this question.
Thus we seem to understand fairly well the kinetic energy spectra of the spectator fragments and we now turn to the participant region. The participant zone in a heavy ion collision constitutes another limiting, but still simple case for the investigation of the thermodynamical behaviour of nuclear matter. In contrast to the spectator zone one expects a compression-decompression cycle and thus richer phenomena with respect to fragmentation. The situation becomes particularly simple if we look at central collisions of symmetric systems which experimentally are selected using transverse energy distributions, charged particle multiplicities or polar angles near mid-rapidity .
We begin by characterizing the calculated evolution of a collision for the case of $`Au+Au`$ at 600A MeV. A very well developed radial flow pattern appears after about 20 fm/c in agreement with findings of other groups . The pressure in this reaction becomes isotropic at about 35 fm/c indicating equilibration. The number of collisions drops to small values at about 40 fm/c. This condition we shall call (nucleon) freeze-out. Thus equilibration and freeze-out occur rather simultaneously. We find a density at this stage of about normal nuclear density and a (local) temperature of about $`T_{loc}\simeq 15`$ MeV in the mid-plane of the reaction.
We now also apply the blast model of eq. (2) to fragment spectra generated in the coalescence model at the end of the collision at about 90 fm/c. The results for the slope temperature $`T_{slope}`$ and the mean velocity $`\beta _f`$ are shown in fig. 4 for a common fit to all fragments with $`A_f\ge 2`$ for different incident energies. These are compared to the corresponding values extracted by the EOS and FOPI collaborations by analogous blast model fits to charged particle spectra. Our results are in good agreement with the temperatures determined by the FOPI collaboration and somewhat lower than those from EOS , in particular at higher incident energies. For the radial flow, the situation tends to be in reverse, in particular with respect to the FOPI results of ref. . This is generally consistent with the findings of other groups. E.g. in ref. in a similar approach results were obtained for the EOS data, which are close to the data for $`T_{slope}`$ and above the data for $`\beta _f`$. One should keep in mind, however, that $`T_{slope}`$ and $`\beta _f`$ are not independent fit parameters. Within the uncertainties of the description by blast model fits there is qualitative agreement between calculation and experiment.
As was done for the spectator we also apply the blast model separately for different fragment masses $`A_f`$. This is shown in fig. 5 at 600 AMeV in the left column. We observe that slope temperatures rise and flow velocities fall with fragment mass in contrast to the behaviour for the spectator fragments in fig. 3 where $`T_{slope}`$ was about constant. A similar behaviour has been seen experimentally at 1 A.GeV in ref. and in calculations in ref. . It can also be deduced from fragment spectra at 250 A.MeV shown in ref. , which yield values very close to the ones given here.
The fragment mass dependence of $`T_{slope}`$ is thus much stronger and qualitatively different from that for the spectator seen in fig. 3. Thus this behaviour cannot be interpreted as fragments originating from a common freeze-out configuration, i.e. from a fragmenting source. To arrive at an interpretation we have shown on the right column of fig. 5 the local temperatures and flow velocities for different times before the nucleon freeze-out, i.e. for $`t^{}=t_{freezeout}-t`$, with $`t_{freezeout}\simeq 35`$ fm/c. (We recall that the values at the left of fig. 5 are obtained at the end of the reaction, i.e. about 90 fm/c.) It is seen that for $`A_f=1`$ the values at freeze-out are close to the blast model ones, as required. However, for fragment masses $`A_f>1`$ the slope temperatures and velocities behave qualitatively very similar to the local temperatures and flow velocities at earlier times . This would suggest interpreting the fragment temperatures and velocities as signifying that heavier fragments originate at times earlier than the nucleon freeze-out. This may not be unreasonable since in order to make a heavier fragment one needs higher densities which occur at earlier times and hence higher temperatures. However, this does not necessarily imply that the fragments are really formed at this time, since fragments could hardly survive such high temperatures, as also discussed in ref. . But it could mean that these fragments carry information about this stage of the collision. In any case it means that in the participant region fragments are not formed in a common equilibrated freeze-out configuration, and that in such a situation slope temperatures have to be interpreted with great caution.
In summary, fragmentation phenomena in heavy ion collisions are studied as a means to explore the phase diagram of hadronic matter. For this it is necessary to determine the thermodynamical properties of the fragmenting source. One way to do this experimentally is to investigate fragment kinetic energy spectra. In theoretical simulations the thermodynamical state can be obtained locally in space and time from the phase space distribution. In this work we have compared this with the information obtained from the generated fragment spectra. We apply this method to the spectator and participant regions of relativistic $`Au+Au`$-collisions. We find that the spectator represents a well developed, equilibrated and unstable fragmenting source. The difference in temperature determined from the local momentum space (or experimentally from the isotope ratios) and from the kinetic energy spectra can be attributed to the Fermi motion in the fragmenting source as discussed in a Goldhaber model. In the participant region the local temperature at the nucleon freeze-out and the slope temperature from fragment spectra are different from those of the spectator. The slope temperatures rise with fragment mass which might indicate that the fragments are not formed in a common, equilibrated source. These investigations should be continued using more dynamic methods of fragment formation.
We thank the ALADIN collaboration, in particular W. Trautmann and C. Schwarz, for helpful discussions. This work was supported in part by the German ministry of education and research BMBF under grant no. 06LM868I and grant no. 06TU887.
# KECK HIRES Spectroscopy of APM 08279+5255<sup>1</sup>

<sup>1</sup>The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
## 1 INTRODUCTION
Discovered serendipitously in a survey of Galactic halo carbon stars, the recently identified $`z`$=3.911 BAL quasar APM 08279+5255 (Irwin et al. (1998)) possesses an inferred intrinsic luminosity of $`5\times 10^{15}L_{\odot }`$ ($`\mathrm{\Omega }_0=1,h=0.5`$), making it apparently the most luminous system currently known. A significant fraction of this prodigious emission occurs at infrared and sub-mm wavelengths, arising in a massive quantity of warm dust (Lewis et al. (1998)). Recent observations have further probed this unusual system; CO observations have demonstrated that APM 08279+5255 also possesses a large quantity of molecular gas (Downes et al. (1999)), a reservoir for star formation, while the internal structure of APM 08279+5255 has been probed with polarization studies, indicating that several lines of sight through various absorbing and scattering regions are responsible for the complex polarized spectrum (Hines et al. (1999)).
Observations with the 1.0 m Jacobus Kapteyn telescope on La Palma suggest that APM 08279+5255 is not a simple point-like source, but is better represented by a pair of sources separated by $`0\text{′′}\text{.}4`$ (Irwin et al. (1998)). This was confirmed with images acquired with the Canada-France-Hawaii AO/Bonnette which revealed two images, separated by $`0\text{′′}\text{.}35\pm 0\text{′′}\text{.}02`$, with an intensity ratio of $`1.21\pm 0.25`$ (Ledoux et al. (1998)); such a configuration is indicative of gravitational lensing and suggests that our view of APM 08279+5255 has been significantly enhanced. More recent NICMOS images have further refined this picture, revealing the presence of a third image between the other two (Ibata et al. (1999)). The resulting magnification is by a factor of $`70`$ for the point-like quasar source. However, even when gravitational lensing is taken into account, APM 08279+5255 is still one of the most luminous known QSOs.
We have obtained a high S/N ($`\sim 80`$), high resolution (6 km s<sup>-1</sup>) spectrum of APM 08279+5255, the result of almost 9 hours of observations with HIRES at the Keck I telescope. In this paper we describe some of the most important characteristics of the spectrum and announce its availability to the general astronomical community. In a separate paper (Ellison et al. (1999)) we have used these data to throw new light on the question of the C abundance in low column density Ly$`\alpha `$ forest clouds.
The outline of the paper is as follows. In §2 we describe the observations obtained and the data reduction procedure followed in order to produce the final spectrum. Section 3 presents a brief analysis of some of the absorption systems seen in the QSO spectrum, illustrating the quality and potential of the data. We then, in §4, describe how the data may be obtained from a permanent, anonymous ftp directory in Cambridge<sup>2</sup><sup>2</sup>2Note that as well as the data files available in Cambridge, a full ascii spectrum of APM 08279+5255 is available in the electronic version of this PASP Research Note, together with a complete list of Ly$`\alpha `$ forest fit parameters—see §3 and Tables 2 and 3 before summarising the main results of the paper in §5.
## 2 OBSERVATIONS AND INITIAL REDUCTION
The brightness of APM 08279+5255 presents an excellent opportunity to study spectroscopically the intervening absorption systems and the Broad Absorption Lines intrinsic to the QSO. To this end, a program of high resolution observations was mounted on the 10-m Keck I telescope in Hawaii in April and May 1998 using HIRES, the echelle spectrograph at the Nasmyth focus (Vogt et al. (1992)). Data were collected for a total of 31 500 seconds with the cross disperser and echelle angles in a variety of settings so as to obtain almost complete wavelength coverage from 4400 to 9250 Å. A journal of the observations is presented in Table 1. The data were reduced with Tom Barlow’s HIRES reduction package (Barlow 1999, in preparation) which extracted sky-subtracted object spectra for each echelle order. The spectra were wavelength calibrated by reference to a Th-Ar hollow cathode lamp and mapped onto a linear, vacuum-heliocentric wavelength scale with a dispersion of 0.04 Å per wavelength bin. No absolute flux calibration was performed, although standard star spectra were obtained and are available (see §4 below). Lastly, the orders of the individual 2-D spectra and corresponding sigma error arrays were merged and then co-added with a weight proportional to their S/N.
The final spectrum has a resolution of 6 km s<sup>-1</sup> FWHM, sampled with $`\sim `$ 3.5 wavelength bins, and S/N between 30 and 150. The full spectrum is presented in Table 2, and in graphical format in Figure 1 (note that while this article and the corresponding PASP Research Note (paper version) present only a small portion of the data, the spectrum in its entirety can be found in the electronic version of PASP). Given the exceptional quality of the data and the scope that they present for a wide range of research interests, we make them available to the astronomical community. Details of how to obtain additional material relating to this data are given in §4.
## 3 INTERVENING ABSORBERS IN THE SPECTRUM OF APM 08279+5255
### 3.1 The Ly$`\alpha `$ Forest
The rich forest of Ly$`\alpha `$ clouds, which is seen as a plethora of discrete absorption lines, is caused by line-of-sight passage through structures such as sheets and filaments in the intergalactic medium (IGM). Hydrodynamical simulations have shown that the Ly$`\alpha `$ forest is a natural consequence of the growth of structure in the universe through hierarchical clustering in the presence of a UV ionizing background (e.g. Hernquist et al. (1996) ; Bi & Davidsen (1997)). For a recent comprehensive review of the properties of the Ly$`\alpha `$ forest, see Rauch (1998).
We fitted Voigt profiles to the Ly$`\alpha `$ forest lines using the line fitting package VPFIT (Webb (1987)) which determines the best fitting values of neutral hydrogen column density $`N`$(H I), absorption redshift, $`z_{\mathrm{abs}}`$, and Doppler parameter $`b`$ ($`=\sqrt{2}\sigma `$) for each absorption component; the results are presented in Table 3. All Ly$`\alpha `$ lines within the redshift interval $`3.11<z_{\mathrm{abs}}<3.70`$ were fitted. The upper limit was chosen to avoid contamination of the sample by lines associated with ejected QSO material ($`z_{\mathrm{abs}}=3.70`$ corresponds to the blue edge of the broad C IV absorption trough, at an ejection velocity of $`\sim `$ 13 100 km s<sup>-1</sup>). The lower redshift limit, $`z_{\mathrm{abs}}=3.11`$ in Ly$`\alpha `$, corresponds to the onset of the Ly$`\beta `$ forest. Within these limits the line list in Table 3 is complete for column densities log $`N`$(H I)$`>12.5`$. However, we consider the values of $`N`$(H I) to be accurate only for log $`N`$(H I)$`<14.5`$ since the fits rely on the Ly$`\alpha `$ line alone which is saturated beyond this limit (no higher order Lyman lines were included in the solution because of the severe blending of the spectrum below the wavelength of Ly$`\beta `$ emission, even at the high resolution of the HIRES spectra).
The column density distribution in the Ly$`\alpha `$ forest can be represented by a power law of the form
$$n(N)dN=N_0N^{-\beta }dN$$
(1)
(Rauch 1998 and references therein). The column density distribution for the present sample is reproduced in Figure 2. A maximum likelihood fit between $`12.5<\mathrm{log}N(\mathrm{H}\mathrm{I})<15.5`$ yields a power law index $`\beta =1.27`$ (Figure 3). This is likely to be a lower limit to the true value of $`\beta `$ because the line density of the forest at these redshifts is sufficiently high that lines can be missed due to blending. In other words, the spectra are confusion limited for weak Ly$`\alpha `$ lines. Hu et al. (1995) used simulations to model this effect and concluded that incompleteness sets in at log $`N`$(H I) $`\lesssim 13.20`$ and that at the lowest column densities sampled, log $`N`$(H I) = 12.30 – 12.60, only one in four Ly$`\alpha `$ clouds is detected. If we adopt the same incompleteness corrections as in Table 3 of Hu et al. (1995), we deduce $`\beta =1.39`$, in good agreement with the value $`\beta =1.46`$ reported by these authors over a similar column density range as that considered here.
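For a power law bounded only from below, the maximum likelihood index has a simple closed form, which makes the fit quoted above easy to reproduce in outline. The sketch below is our illustration (not the code behind Figure 3); it ignores the upper truncation at log $`N`$(H I) = 15.5, a small correction for a distribution this steep.

```python
import numpy as np

def beta_mle(logN, logN_min=12.5):
    """ML estimate of beta for n(N) ~ N**(-beta), N >= N_min.

    logN : array of log10 H I column densities from the line list.
    Uses the standard estimator beta = 1 + n / sum(ln(N_i / N_min)).
    """
    logN = np.asarray(logN, dtype=float)
    sample = logN[logN >= logN_min]
    ln_ratio = (sample - logN_min) * np.log(10.0)  # ln(N_i / N_min)
    return 1.0 + sample.size / ln_ratio.sum()
```

An incompleteness correction of the kind adopted from Hu et al. (1995) amounts to entering each line in both sums with weight 1/f, where f is the detection fraction at its column density.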
An analysis of the C IV $`\lambda \lambda 1548,1550`$ absorption associated with the Ly$`\alpha `$ forest has been presented elsewhere (Ellison et al. 1999). By fitting profiles to the observed C IV lines, Ellison et al. (1999) deduced a median $`N`$(C IV)/$`N`$(H I) = $`1.4\times 10^{-3}`$ for Ly$`\alpha `$ absorbers with log $`N`$(H I)$`>`$14.5. Of the 23 Ly$`\alpha `$ clouds within the redshift interval $`3.11<z_{\mathrm{abs}}<3.70`$ which exhibit associated C IV absorption, five also show Si IV $`\lambda \lambda 1393,1402`$ absorption; an example is reproduced in Figure 4. Table 4 lists the parameters of the profile fits for these five absorption systems; the C IV and Si IV systems were fitted separately (that is, there was no attempt to force a common fit to both species). The values of the $`N`$(Si IV)/$`N`$(C IV) ratio deduced for the five systems (log $`N`$(Si IV)/$`N`$(C IV) $`\simeq -1.2`$ to $`-0.1`$) are typical of those found at these redshifts (Boksenberg, Sargent, & Rauch 1998).
### 3.2 Mg II Absorbers
The data presented here can be used to search for Mg II $`\lambda \lambda 2796,2803`$ systems at $`z_{\mathrm{abs}}>1`$ with a higher sensitivity than achieved up to now, formally to a rest frame equivalent width detection limit of only a few mÅ. On the basis of the results by Churchill et al. (1999) we expect to find many Mg II systems in our spectrum and indeed a first pass has revealed nine systems between $`z_{\mathrm{abs}}=1.181`$ and 2.066, which are reproduced in Figure 5. The rest frame equivalent widths of Mg II $`\lambda 2796`$ span the range from $`W_r\simeq 2.5`$ Å ($`z_{\mathrm{abs}}=1.181`$) to $`W_r=11`$ mÅ ($`z_{\mathrm{abs}}=1.688`$). The former (see Figure 5a, top left-hand panel) is the most likely candidate for the lensing galaxy, given its strength and redshift. On the other hand, near $`z_{\mathrm{abs}}=1.55`$ there is a complex of three closely spaced absorption systems, each in turn consisting of multiple components (Figure 5a, bottom panel); with a total velocity interval of $`\sim 450`$ km s<sup>-1</sup> such a configuration may arise in a galaxy cluster which presumably could also contribute to the lensing of the QSO.
Table 5 lists the absorption line parameters returned by VPFIT for five of the nine Mg II systems. We did not attempt to fit the $`z_{\mathrm{abs}}=1.181`$ system because the lines are strongly saturated. Interestingly, for the other three systems—at $`z_{\mathrm{abs}}=1.211`$, 1.812, and 2.041—VPFIT could not converge to a statistically acceptable solution, in the sense that there is no set of values of $`b`$ and $`N`$(Mg II) which can reproduce the observed profiles of both members of the doublet. The problem can be appreciated by considering, for example, the $`z_{\mathrm{abs}}=1.211`$ system (Figure 5a, top right-hand panel). Here, $`\lambda 2796`$ and $`\lambda 2803`$ have approximately the same equivalent width, indicating that the lines are saturated and lie on the flat part of the curve of growth, and yet the residual intensity in the line cores is $`\simeq 0.45`$.
We believe that the reason for this apparent puzzle lies in the gravitationally lensed nature of APM 08279+5255. Our spectrum is the superposition of two sight-lines separated by 0.35 arcsec and contributing in almost equal proportions to the total counts (Ledoux et al. 1998). If there are significant differences in the strength of Mg II absorption between the two sight-lines—with, in the example considered here, saturated absorption along one and weak or no absorption along the other—the composite spectrum would have the character seen in our data.
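A toy calculation makes the point quantitatively. In the sketch below, sight-line A carries a single strongly saturated cloud, sight-line B carries none, and the two contribute equal flux; the Gaussian optical-depth profile and all parameter values are illustrative assumptions only.

```python
import numpy as np

v = np.linspace(-150.0, 150.0, 3001)             # velocity grid, km/s
tau = lambda tau0, b: tau0 * np.exp(-(v / b) ** 2)

tau0, b = 50.0, 8.0                              # heavily saturated cloud
f2796_A = np.exp(-tau(tau0, b))                  # Mg II 2796
f2803_A = np.exp(-tau(tau0 / 2.0, b))            # Mg II 2803 (half the tau)

f2796 = 0.5 * f2796_A + 0.5                      # add unabsorbed sight-line B
f2803 = 0.5 * f2803_A + 0.5

dv = v[1] - v[0]
W2796 = np.sum(1.0 - f2796) * dv                 # equivalent widths
W2803 = np.sum(1.0 - f2803) * dv
print(W2796 / W2803, f2796.min())                # ratio ~ 1, core ~ 0.5
```

The doublet ratio comes out close to unity, as expected for saturated lines, while the composite core sits near 0.5 (near 0.45 for a flux split slightly different from 50/50), which is exactly the apparent contradiction described above.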
Assuming the lens to be at $`z_{\mathrm{lens}}=1.181`$ and an Einstein-de Sitter universe, the three absorption redshifts $`z_{\mathrm{abs}}=1.211`$, 1.812, and 2.041 correspond to transverse distances between the two sight-lines (at an angular separation of 0.35 arcseconds) of 1.5, 0.75 and 0.59 $`h^1`$ kpc respectively. Some may be surprised to find large changes in the character of the absorption across such small distances, much smaller than the scales over which the overall kinematics of galactic halos vary (e.g. Weisheit & Collins 1976). In reality, micro-structure in low-ionization absorption lines is not unusual and has already been seen (even over sub-parsec scales) in the interstellar medium of the Milky Way (e.g. Lauroesch et al. 1998 and references therein), of the LMC (Spyromilio et al. 1995), and of the absorbing galaxy at $`z_{\mathrm{abs}}=3.538`$ in front of another gravitationally lensed QSO, Q1422+231 (Rauch, Sargent, & Barlow 1999). These authors have recently reported spatially resolved HIRES observations of images A and C of this bright QSO, which are separated by 1.3 arcseconds.
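The quoted separations follow from the geometry of two beams that diverge from the observer, pass through the lens plane, and converge at the source. A minimal sketch, in which the source redshift z_src = 3.91 is our assumption (the QSO emission redshift, which is not stated in this section):

```python
import numpy as np

c, H0 = 2.998e5, 100.0          # km/s; H0 in h km/s/Mpc -> h^-1 Mpc distances
theta = 0.35 / 206265.0         # image separation in radians
z_lens, z_src = 1.181, 3.91     # z_src is an assumed emission redshift

def chi(z):                     # comoving distance, Einstein-de Sitter
    return 2.0 * c / H0 * (1.0 - 1.0 / np.sqrt(1.0 + z))

for z in (1.211, 1.812, 2.041):
    # comoving beam separation is theta*chi(z_lens) at the lens and
    # shrinks linearly (in comoving distance) to zero at the source
    s = theta * chi(z_lens) * (chi(z_src) - chi(z)) / (chi(z_src) - chi(z_lens))
    print(z, 1e3 * s / (1.0 + z))   # proper separation in h^-1 kpc
```

The output reproduces the 1.5, 0.75 and 0.59 h<sup>-1</sup> kpc quoted above to within about 0.01 h<sup>-1</sup> kpc.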
While in our case it is not possible to deconvolve the individual contributions of the two sight-lines to our blended spectrum, because in general there is not a unique ‘solution’ to the composite Mg II absorption profiles, the data presented here provide a strong incentive to observe APM 08279+5255 spectroscopically with STIS on the HST. Our prediction is that Mg II absorption at $`z_{\mathrm{abs}}=1.211`$, 1.812, and 2.041 will exhibit significant differences between sight-lines A and B, and that such differences can be used to probe in fine detail the spatial structure of low ionization QSO absorbers, complementing the results of Rauch et al. (1999) on Q1422+231 .
## 4 Obtaining the data
The data presented in this paper are also available in electronic form at:
ftp://ftp.ast.cam.ac.uk/pub/papers/APM08279
As well as the quasar spectrum, which is presented in FITS format with its associated error arrays, this site also contains standard star spectra (2-D), gzipped postscript plots of the complete QSO spectrum, an ASCII file of a low resolution spectrum of APM 08279+5255 and a README file containing all other relevant information required for using these data. Any questions regarding the data can be addressed to SLE in the first instance.
We ask that any publications resulting from analyses of this spectrum fully acknowledge the W. M. Keck Observatory and Foundation with the standard pro forma, listed as a footnote on page 1, and reference this PASP Research Note as the source of the spectrum.
## 5 Summary
We have presented a brief analysis of the absorption systems seen in the HIRES echelle spectrum of the gravitationally lensed BAL QSO APM 08279+5255. The Ly$`\alpha `$ forest was analysed with Voigt profiles within a region ($`3.11<z_{\mathrm{abs}}<3.70`$) deemed to be free of contamination from higher order Lyman lines and ejected QSO material. The H I column density distribution is well fitted by a power law with slope $`\beta =1.27`$ between log $`N`$(H I) = 12.5 and 15.5; a higher value, $`\beta =1.39`$, is obtained when allowance is made for line confusion at the low column density end of the distribution. Approximately half of the Ly$`\alpha `$ lines with log $`N`$(H I)$`>14.5`$ have associated C IV absorption (Ellison et al. 1999); five of these C IV systems also show Si IV with ratios $`N`$(Si IV)/$`N`$(C IV) between $`\sim 1`$ and $`\sim 1/15`$.
We identified nine Mg II systems between $`z_{\mathrm{abs}}=1.181`$ and 2.066, two of which are candidates for absorption associated with the lens. For three Mg II systems we infer that there are spatial differences in the absorption between the light-paths to the two main images of the QSO (which are unresolved in our study). Given the exceptional brightness of APM 08279+5255, the spectrum presented here is among the best ever obtained for a high redshift QSO; we make it available to the astronomical community so that it can be used in conjunction with other forthcoming studies of this remarkable object and sightline.
# Summary: Inflation and Traditions of Research
P. J. E. Peebles
Institute for Advanced Study, Princeton NJ 08540, and
Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544
There is a considerable spread of opinions on the status of inflationary cosmology as a useful approximation to what happened in the early universe. To some inflation is an almost inevitable consequence of well-established physical principles; to others it is a working hypothesis that has not yet been seriously tested. I attribute this to two traditions of research in physics and astronomy that come together in cosmology. The great advances of basic physics in the 20<sup>th</sup> century have conditioned physicists to look for elegance and simplicity. Our criteria of elegance and simplicity have been informed by the experimental evidence, to be sure, but the tradition has been wonderfully effective and we certainly must pay attention to where it might lead us in the 21<sup>st</sup> century. Astronomers have had to learn to deal with incomplete and indirect observational constraints on complicated systems. The Kapteyn universe — a model of our Milky Way galaxy of stars — was a product of a large effort of classifying the luminosities and modeling the motions of the stars from the statistics of star counts and star proper motions and parallaxes. This culminated in a detailed model that even offered the possibility of a “determination of the amount of dark matter from its gravitational effect” (Kapteyn 1922). But as the model was being constructed people were discovering that it must be revised, moving us from near the center to the edge of the Milky Way, where the phenomenon of stellar streaming could be reinterpreted as the circular motion of stars in the thin disk of the galaxy and the random motions of older stars in the halo. The rocky road through a vastly richer fund of observations has led to a picture for the evolution of the galaxies and the intergalactic medium during the last factor of five expansion of the universe. Many of the elements are established in remarkably close detail, but others are quite uncertain and subject to ongoing debate, elegant examples of which are to be found in these Proceedings. I think the experience has led to a characteristic tendency you might have noticed among astronomers, to ask first how a new result might have been compromised by systematic errors in the measurement, or its interpretation confused by an inadequate model. These traditions from physics and astronomy meet in cosmology, with predictable consequences for the debate on the status of inflationary cosmology.
There are astronomers who accept inflation as persuasively established, and physicists who consider inflation speculative. The latter can cite a tradition of complexity in physics, as in the study of condensed matter. Maybe it is not surprising that several of the most prominent critics of the basis for the standard cosmological model — the postulate of near homogeneity and isotropy of the galaxy and mass distributions on the scale of the Hubble length — are condensed matter physicists. I don’t understand why they are not more influenced by the observational evidence for large-scale homogeneity, but I can fully appreciate the underlying question: could the universe really be this simple?
Einstein arrived at the homogeneity picture by a philosophical argument, that asymptotically flat spacetime is unacceptable because it is contrary to Mach’s principle. He considered this to be more convincing than the empirical evidence the astronomers could have given him about our island universe of stars and about the clustered distribution of the brighter spiral nebulae (a result, we now know, mainly of the concentration of galaxies in the de Vaucouleurs Local Supercluster). Einstein avoided all these misleading observational indications; here is an example where pure thought led to a prediction that proves to agree with demanding empirical tests, in the grand tradition of physics.
It is easy to think of examples of less successful application of pure thought, of course. Most of us would agree that the Einstein-de Sitter model is the only reasonable and elegant choice for the parameters of the Friedmann-Lemaître cosmology, because any other observationally acceptable parameter choice would imply that we flourish at a special epoch, as the universe is making the transition from expansion dominated by the mass density to expansion dominated by space curvature or a cosmological constant (or a term in the stress-energy tensor that acts like one). Maybe this is telling us that the evidence for low mass density is wrong or somehow incorrectly interpreted. But the more likely lesson is that Nature is more complicated than we had thought, and we are going to have to revise our criteria of elegance, as has happened before.
Inflationary cosmology offers an elegant way to extend the standard model for cosmic evolution back in time to conditions that cannot be described by the classical Friedmann-Lemaître model. But is the early universe really simple in the physicists’ sense, simple enough that we can hope to deduce its main properties from what we know about fundamental physics? Or might there be important elements of the complexity that astronomers are used to dealing with, and that we could hope to unravel only if we were fortunate enough to hit upon adequate guidance from the empirical evidence?
TABLE 1
The Case for Inflationary Cosmology
| Evidence | Nature | Status |
| --- | --- | --- |
| Compelling elegance of inflation, and the absence of a viable alternative | a romantic test | seems well established |
| Observational case for flat space sections | a diagnostic, not a test? | preliminary |
| Observational case for the adiabatic CDM model | a diagnostic | preliminary |
| Tensor contribution to the CBR anisotropy | a classical test | open |
| Deduction of the inflaton and its potential from fundamental physics | a wonderful classical dream | |
Table 1 is my summary of the pieces of evidence in hand that seem to be relevant to this issue. The characterizations in the second column follow the physical chemist Wilhelm Ostwald,<sup>1</sup> who felt that some physicists have a romantic temperament, eager to pursue the latest ideas, as opposed to the classical types who seek to advance knowledge by increments from a well established basis of concepts and methods. In the table I mean by a classical test one that follows the old rule of validation by the successful outcome of tests of the predictions of a theory. By a diagnostic I mean data that can be fit by adjustment of parameters within a model for inflationary cosmology. This would turn into a classical test if we had another independent constraint on the parameters. A romantic test may point to the truth, but it lacks the beauty of an experimental check of a prediction.

<sup>1</sup> As represented in the novel, Night Thoughts of a Classical Physicist (McCormmach 1982). The title is not meant to distinguish the protagonist from a quantum physicist; at the time of the story, 1918, there was not yet a quantum theory. In Ostwald’s classification romantic physicists loved atoms and their curious properties, while classical physicists distrusted what they considered extravagant departures from conventional physics in the search for a quantum theory. Ostwald was skeptical of the kinetic theory of atoms as a basis for heat and chemistry (Jungnickel & McCormmach 1986) until the experimental success of Einstein’s theory of Brownian motion won over him and Ernst Mach (Whittaker 1953).
The first entry in the table refers to the fact that inflationary cosmology offers an elegant remedy for very real inadequacies of the classical Friedmann-Lemaître model. This means we should pay careful attention to inflation. But those of us with classical inclinations give more weight to the successful outcome of predictions than to the ability to devise a theory to fit a given set of conditions, in what has been termed postdiction. It is impressive that no one has come up with an interesting alternative to inflation, despite the wide advertisement of the ills it cures. But one may wonder whether the significance is only that our imagination is limited.
If inflationary cosmology were falsified by a measurement that showed that space sections of constant world time have nonzero curvature, then observational evidence for flat space sections could be counted as evidence for inflation. If inflationary cosmology could be adjusted to fit the measured space curvature, then the measurement would be a diagnostic of the details of inflation, not a test. Bharat Ratra, who refined Richard Gott’s picture into a well specified model for open inflation, argues for the latter. Others argue for the former: they would take a demonstration that space curvature is not negligibly small to signify that we must abandon inflation as it is now understood and search for a better idea. Still others argue that the situation is not that simple: inflationary cosmology is more elegant if space curvature is negligibly small, so observational evidence that this is the case would be good news, but one can work with open inflation, so the discovery of nonzero space curvature would not be bad news. It would be good if the inflation community could reach a consensus on which it is before the astronomers turn the value of space curvature into a postdiction. It is too soon to settle bets, but the astronomers seem to be getting close to a useful measurement of space curvature, as one sees in these Proceedings.
The third entry in the table refers to the striking success of the adiabatic CDM model for structure formation in fitting a broad range of observations. The original motivation for the CDM model was not inflation; the model came out of a search for a simple way to account for the small anisotropy of the thermal cosmic background radiation (the CBR). Inflationary cosmology offers an elegant explanation for the initial conditions postulated for the CDM model. This is encouraging but not a demonstration that processes directly related to inflation did provide the seeds for the CDM model. It would be no crisis if the CDM model were found to be wrong; there are other ideas for structure formation within inflation, or it could be that inflation is not directly responsible for structure formation. We would have a critical test if we had an observationally viable model for structure formation that is not consistent with inflationary cosmology, to compare to models inspired by or compatible with inflation. As things stand I think we have to count the observational evidence on how structure formed as a diagnostic of (or constraint on) the parameters of inflation, under the postulate that inflation offers the right picture for the early universe.
If the parameters are favorable inflation predicts an observable contribution to the anisotropy of the CBR from tensor fluctuations — fluctuations in the curvature of spacetime. A detection of the tensor part and a demonstration of consistency with a specific model for inflation, perhaps one now under discussion, would make believers of most of us.<sup>2</sup>

<sup>2</sup> As the experimental success of the atomic theory of Brownian motion converted Ostwald and Mach.
As indicated in the last row of the table, the basis for fundamental physics may become so secure that it unambiguously predicts all relevant properties of the inflaton. If these properties proved to be observationally acceptable it would be a prime classical triumph. It would require a considerable advance in physics, but the advances have been prodigious.
We can look at the situation another way by asking where cosmology might have been today if we had not had the concept of inflation. If we thought we had to live with purely baryonic matter we would have been pressed to reconcile dynamical mass estimates with the baryon density required by the successful model for homogeneous production of light elements at high redshift, and we would have been pressed to find a viable model for structure formation. We don’t know how pressed; maybe we should have worked harder to save a pure baryonic universe. Nonbaryonic dark matter — a family of massive neutrinos — was considered before inflation. The concept of cold nonbaryonic matter grew up with inflation, but it surely would have grown as well if separated at birth, as did cosmic strings. Without inflation people would not have worked so hard to save the Einstein-de Sitter model, and negative space curvature might have been considered more favorably, but the observations would have driven us to about where we are today. Our ideas of how structure formed have been influenced by inflation, but did not depend on it. The big difference would be that, unless we hit on some alternative to inflation, initial conditions, including the Gaussian adiabatic fluctuations of the CDM model, would have been invoked ad hoc. Inflation offers a satisfying way to fill what otherwise would be a large gap in our cosmology. Whether or not this proves to be the true explanation it certainly has helped drive the present high level of interest and research in cosmology.
Without inflation we may not have thought of searching for the graviton contribution to the CBR anisotropy, which could yield a believable positive test in the classical tradition. Other tests of inflation may show up; people are still exploring the possibilities. It also is quite conceivable that Nature will not be kind enough to give us classical tests of our ideas of what happened in the early universe; maybe inflation is a precursor of a new tradition of research by pure thought. Our rules of evidence in science have evolved since Newton claimed not to invoke hypotheses, but this would be quite a change. The record of forecasts of the end of science as we know it leads me to doubt this one, but that is for the future. These Proceedings document a wonderfully active and productive state of cosmology now, in the traditions of physics and astronomy in their classical and romantic phases.
The organizers of this conference, Michael Turner and colleagues, did the community a great service by creating an exciting and stimulating gathering. My preparation of this written contribution was aided by discussions with John Bahcall, David Hogg, Bharat Ratra, and Paul Steinhardt, and the work was supported in part at the Institute for Advanced Study by the Alfred P. Sloan Foundation.
REFERENCES
Jungnickel, C. & McCormmach, R. 1986, Intellectual Mastery of Nature: The Now Mighty Theoretical Physics (Chicago: University of Chicago Press).
Kapteyn, J. C. 1922, ApJ, 55, 302.
McCormmach, R. 1982, Night Thoughts of a Classical Physicist (Cambridge: Harvard University Press).
Whittaker, E. 1953, A History of the Theories of Aether and Electricity: The Modern Theories (London: Nelson and Sons), reprinted 1960 (New York: Harper Torchbooks).
# HIGH $`P_T`$ PHYSICS WITH THE STAR EXPERIMENT AT RHIC<sup>1</sup>

<sup>1</sup> Talk given at APS Centennial Meeting, Atlanta, GA, March 1999
## 1 Introduction
The Relativistic Heavy Ion Collider (RHIC) at BNL will provide collisions of ions from $`p`$ to $`Au`$ at $`\sqrt{s}`$ up to $`500\mathrm{GeV}`$ ($`p`$ beams) and $`200\mathrm{GeV}`$ per nucleon pair ($`Au`$ beams) beginning in Fall 1999. The STAR experiment is designed to study the high energy-density nuclear matter produced in these collisions and to search for the phase transition to a quark-gluon plasma (QGP). The QGP is a deconfined state of quarks and gluons, predicted by quantum chromodynamics (QCD) to exist at high energy-densities. One facet of the STAR experimental program is to use high transverse momentum ($`p_T`$) production to probe the dense matter produced at RHIC. In this paper, the motivation for studying high-$`p_T`$ production in heavy ion collisions, predicted signatures of QGP using hard probes and the planned STAR measurements are described.
Measurements of high-$`p_T`$ production from high energy collisions allow small distances, and therefore the earliest times after the collision, to be probed. The high-$`p_T`$ partons retain information about the collision during hadronization. High-$`p_T`$ production from hadron-hadron collisions has been shown to be well-described by perturbative QCD (pQCD).
At RHIC, it has been estimated that up to 50% of the transverse energy produced is due to partonic processes. Therefore, pQCD predictions become viable for the first time in relativistic heavy ion collisions. Incorporating standard nuclear effects such as nuclear modification of the parton distribution functions and the Cronin effect into the pQCD calculations leads to accurate predictions of high-$`p_T`$ production from $`p+A`$ collisions. These calculations can be extended to the $`A+A`$ collisions at RHIC. In addition, changes in high-$`p_T`$ production due to the passage of partons through the dense environment and a QGP have been predicted and incorporated into calculations. Studying hard probes with the STAR detector will allow measurements of how partons are affected in the dense environment and comparisons to pQCD to be made.
## 2 The STAR Experiment at RHIC
During the first year, RHIC will run mainly $`Au+Au`$ collisions and is expected to reach 10% of the design luminosity by the end of the year. The full design luminosity, $`\mathcal{L}=2\times 10^{26}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, is expected to be attained by the end of year 2. RHIC will also provide different beam energies and species and will include $`pA`$, $`pp`$ and polarized $`pp`$ collisions.
At the heart of the STAR detector is the time projection chamber (TPC) enclosed in a $`0.5`$ Tesla solenoidal magnet. The TPC covers a range of $`|\eta |<2`$ over the full azimuth and provides charged particle tracking and individual track particle identification (PID) for $`p\sim 0.15`$–$`1.2\mathrm{GeV}/\mathrm{c}`$ and momentum resolution $`\sigma _p/p\sim 1\%`$ for $`p<5\mathrm{GeV}/\mathrm{c}`$. Inside the TPC is a silicon vertex detector (SVT) which provides tracking and PID near the vertex point. The forward TPC’s (FTPC) will extend the tracking coverage to $`2.4<|\eta |<4.0`$. A RICH detector will be installed at STAR for the first three years. This detector will provide limited angular coverage of $`0<\eta <0.3`$, $`\mathrm{\Delta }\varphi \sim 20^{\circ }`$ and extends PID to $`p\sim 3`$–$`5\mathrm{GeV}/\mathrm{c}`$.
Surrounding the TPC is a finely-segmented ($`0.05\times 0.05`$ in $`\mathrm{\Delta }\eta \times \mathrm{\Delta }\varphi `$) Pb-scintillator sampling electromagnetic calorimeter (EMC). A shower-maximum detector is located at $`5X_0`$. The barrel calorimeter covers a range of $`|\eta |<1`$ with $`\mathrm{\Delta }\varphi =2\pi `$. In year 1, $`10\%`$ of the EMC will be installed, with $`30\%`$ added each additional year. An endcap calorimeter, currently under review, will cover the range $`1.05<\eta <2`$ with $`\mathrm{\Delta }\varphi =2\pi `$.
The charged particle multiplicity based hardware trigger (L0) covers the range $`|\eta |<2`$. A software trigger (L3), used for enhancing the desired event samples, is expected to be ready for year 2 running.
## 3 High-$`p_T`$ Probes of High Energy-Density Matter
STAR will search for changes in production and correlations of quantities at high-$`p_T`$ using heavy ion collisions. Measurements will be done as a function of the amount of dense matter traversed by varying the centrality of collisions and using different beam species and energies. Data from $`p+p`$ and $`p+A`$ running will be used as a baseline for the high-$`p_T`$ measurements. Some of the proposed signatures of QGP formation that STAR plans to measure are described below.
A predicted signal of QGP formation using hard probes is “jet quenching”, which is the softening of the $`p_T`$ spectrum due to partons losing energy, in a $`dE/dx`$ fashion, when propagating through dense matter. The $`p_T`$ spectrum of $`\pi ^0`$’s is predicted to soften if there are jet quenching effects in addition to standard nuclear effects, as shown by the pQCD prediction in Fig.1a. STAR can measure this spectrum by identifying $`\pi ^0\to \gamma \gamma `$ events using the EMC. An event sample of $`2500`$ ($`25`$) events at $`p_T=5`$ ($`10`$) $`\mathrm{GeV}/\mathrm{c}`$ is expected in year 1. A simulation of the reconstructed $`\pi ^0\to \gamma \gamma `$ mass peak with year 1 statistics is shown in Fig.1b. STAR will also measure charged particle high-$`p_T`$ spectra in the first year.
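The underlying reconstruction is the two-photon invariant mass, which for massless clusters is $`m^2=2E_1E_2(1-\mathrm{cos}\theta )`$. The sketch below is schematic (our own illustration, not STAR software; the vector inputs stand in for EMC cluster measurements):

```python
import numpy as np

def diphoton_mass(e1, n1, e2, n2):
    """Invariant mass (GeV) of two massless EMC photon clusters.

    e1, e2 : cluster energies in GeV; n1, n2 : unit vectors from the
    event vertex to the cluster centroids (the shower-maximum detector
    sharpens the position measurement).
    """
    return np.sqrt(2.0 * e1 * e2 * (1.0 - np.dot(n1, n2)))

# symmetric decay of a 5 GeV/c pi0: opening angle ~ 2*m_pi0/p
ang = 2.0 * 0.135 / 5.0
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([np.cos(ang), np.sin(ang), 0.0])
print(diphoton_mass(2.5, n1, 2.5, n2))   # ~0.135 GeV, the pi0 mass
```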
The ratio of charged hadron to anti-hadron production as a function of $`p_T`$ has been predicted to change for $`Au+Au`$ relative to $`p+p`$ collisions as shown in Fig.2a. This ratio changes as a function of the energy loss factor due to jet quenching included in the calculation. The particle dependence of these ratios is due to differences in gluon and quark jet quenching in the dense medium. Using the RICH detector, it will be possible to measure the $`\overline{p}/p`$ ratio out to $`5\mathrm{GeV}/\mathrm{c}`$ in year 1.
$`J/\psi `$ production in a QGP is predicted to be suppressed due to Debye screening of color charges in the plasma. STAR can measure $`J/\psi \to e^+e^-`$ events with the inclusion of the L3 trigger and using the TPC and EMC detectors. The reconstructed $`J/\psi `$ mass from a simulation is shown in Fig.2b. With the full STAR detector and statistics expected from one year of running with full design luminosity and a requirement on the electrons of $`p_T>1.5\mathrm{GeV}/\mathrm{c}`$, STAR can expect to collect $`4\times 10^4`$ $`J/\psi `$’s per year.
Other high-$`p_T`$ probes STAR can use are direct $`\gamma `$’s and jets. Due to the large underlying event energies at RHIC, identification and energy measurements of jets may be difficult. Other ways to measure jets may be to use leading particle distributions or $`\gamma `$+jet events where the $`\gamma `$ is tagged and used to identify and assign an energy to the jet. Studies of jets in the heavy ion collisions may also include angular correlations and di-jet production.
## 4 Summary
STAR’s capabilities for using hard probes will include measurements of charged, neutral and leading hadron spectra, particle ratios, angular correlations, direct $`\gamma `$’s, $`J/\psi `$ and jet production. Simulations described here show the expected results for charged hadron ratios and $`\pi ^0`$ and $`J/\psi `$ spectra in the first few years of running. Using data from various energies, beam species and centralities, STAR will be able to provide detailed measurements of high-$`p_T`$ production in the dense environment. The RHIC collider will offer unique and new regimes of dense matter and an excellent environment for new physics in the near future.
## Acknowledgments
This work was supported in part by U.S. Department of Energy Contract No. DE-AC02-98CH10886. Special thanks to Tom Cormier, Peter Jacobs, Gerd Kunde, Brian Lasiuk, Craig Ogilvie, Jack Sandweiss and Thomas Ullrich for help with and contributions to this talk.
# Do correlations create an energy gap in electronic bilayers? Critical analysis of different approaches
## Abstract
This paper investigates the effect of correlations in electronic bilayers on the longitudinal collective mode structure. We employ the dielectric permeability constructed by means of the classical theory of moments. It is shown that neglecting damping processes overestimates the role of correlations. We conclude that the correct account of damping processes leads to an absence of an energy gap.
Layered electronic systems have been of intense interest recently. Such systems can now be routinely fabricated in semiconductors with well-controlled system parameters. We focus in this paper on the analysis of the collective modes of electronic bilayers. These systems consist of two quasi-two-dimensional (2D) layers of electron liquids with electron density $`n_s`$, separated by a distance $`d`$ comparable to the interparticle distance $`a=(\pi n_s)^{-1/2}`$ within the layers. Quasi-two-dimensional here means that the electrons have quantized energy levels along one dimension, but are free to move in two dimensions. A well known example of a 2D electron system is electrons confined in the vicinity of a junction between a semiconductor and an insulator \[in a MOSFET structure\] or between layers of different semiconductors \[in heterojunctions\]. In these systems electrons are confined near the surface by an electrostatic field; in electronic bilayer systems two electron layers are separated in a double quantum well. Another class of layered systems, electronic superlattices, consisting of a large number of identical electronic layers, shows a similar behavior and is beyond the scope of this paper. High $`r_s`$ ($`r_s=a/a_B`$; $`a_B=\epsilon _s\mathrm{\hbar }^2/e^2m^{*}`$ being the effective Bohr radius) values are now available in 2D layers and the technique should be available to fabricate relatively high $`r_s`$ bilayers also. One expects that at high enough $`r_s`$ values ($`r_s>20`$ ) a bilayer crystallizes into a Wigner lattice. Here, we restrict our considerations to the strongly coupled liquid phase bilayer systems. There are some theoretical investigations in this strongly-coupled liquid phase regime, both on the static and the dynamic level .
One of the issues under consideration is the analysis of collective excitations in bilayers. First we recall the Random phase approximation (RPA) result applicable in the weak coupling regime $`r_s\ll 1`$. In the RPA there are two longitudinal modes: an in-phase and an out-of-phase mode. In the first mode the two layers oscillate in phase and the resulting dispersion relation is similar to that of an isolated 2D layer; for $`k\to 0`$ the eigenmode behaves as $`\omega \propto \sqrt{k}`$. In the out-of-phase mode the oscillation phase of the two layers differs by $`\pi `$ and we deal with an acoustic mode, $`\omega \propto k`$ as $`k\to 0`$. However, in the strong coupling regime $`r_s\gg 1`$ the collective mode structure is strongly affected by particle correlations. The effects of correlation beyond RPA can be recast into the local field correction. Early approaches focused on intralayer correlations but ignored or suppressed interlayer correlations . There have been two basic lines which also take into account the interlayer correlations. One line uses a low frequency (static) approximation for the local field correction. This can be done by applying the Singwi-Tosi-Land-Sjölander (STLS) approximation to the bilayer problem . The other line uses a high-frequency local field correction. The analytically simplest expressions can be obtained using the quasilocalized charge (QLC) approximation . Related studies of the bilayer system have been done in Refs.. Examining the effect of the particle correlations on the collective excitation structure, the two methods have arrived at different results. In particular, the QLC method predicts the occurrence of a finite energy gap ($`\omega >0`$ for $`k=0`$); however, no such energy gap appears in the calculations based on the STLS approximation. In Ref. the formal reasons leading to the different results are clarified. It has been shown that in the STLS formalism the local field correction vanishes as $`k\to 0`$ and as a result we arrive (qualitatively) at the RPA dispersion expression without an energy gap. On the other hand, the high-frequency QLC local field correction does not vanish at $`k=0`$ and this leads to the occurrence of the energy gap. This paper is addressed to the question of whether interlayer correlations create an energy gap in bilayer systems.
The bilayer, consisting of two 2D electron layers of area $`A`$ embedded in a neutralizing background and separated by a distance $`d`$, can be mapped onto a two-component two-dimensional system . In the following we neglect the layer thickness, the influence of metallic electrodes and the difference between the dielectric constants of different semiconductors. Then the corresponding interaction potentials are
$`\phi _{11}(k)`$ $`=`$ $`\phi _{22}(k)={\displaystyle \frac{2\pi e^2}{k}},`$ (1)
$`\phi _{12}(k)`$ $`=`$ $`{\displaystyle \frac{2\pi e^2}{k}}e^{-kd}.`$ (2)
The two component system may be described by a matrix formalism in species space . However, in the present case of two identical electron layers the corresponding matrices diagonalizes and it is much easier to investigate the scalar dielectric functions for the in-phase and out-of-phase motions, $`\epsilon _{in}`$ and $`\epsilon _{out}`$ respectively,
$$\epsilon _\alpha ^{-1}(𝐤,\omega )=1+\phi _\alpha \chi _\alpha (𝐤,\omega ),\alpha =\mathrm{in},\mathrm{out}.$$
(3)
where $`\phi _{\mathrm{in},\mathrm{out}}=2\pi e^2(1\pm e^{-kd})/k`$ are the interaction potentials for the in- and out-of-phase motions, respectively, and
$$\chi _\alpha (𝐤,\omega )=(\mathrm{\hbar }A)^{-1}\int d𝐫\int _{-\mathrm{\infty }}^0dt<[n_\alpha (𝐫,t),n_\alpha (0,0)]>e^{-i𝐤𝐫+i\omega t},$$
(4)
are the density response functions. Here $`n_{\mathrm{in},\mathrm{out}}(𝐫,t)=n_1(𝐫,t)\pm n_2(𝐫,t)`$, $`n_i(𝐫,t)`$ ($`i=1,\mathrm{\hspace{0.17em}2}`$) being the electron number density operator of the $`i`$th layer, $`[A,B]`$ is the commutator of operators $`A`$ and $`B`$, and $`<A>`$ is the equilibrium average of operator $`A`$.
The calculations based on the QLC formalism lead to the following expression for the bilayer dielectric permeabilities,
$`\epsilon _{\mathrm{in}}^{-1}(𝐤,\omega )`$ $`=`$ $`1+{\displaystyle \frac{w_1^2(k)(1+e^{-kd})}{\omega ^2-w_1^2(k)(1+e^{-kd})-(D_{11}+D_{12})}},`$ (5)
$`\epsilon _{\mathrm{out}}^{-1}(𝐤,\omega )`$ $`=`$ $`1+{\displaystyle \frac{w_1^2(k)(1-e^{-kd})}{\omega ^2-w_1^2(k)(1-e^{-kd})-(D_{11}-D_{12})}},`$ (6)
where $`w_1(k)=(2\pi e^2n_sk/m)^{1/2}`$ is the 2D plasma frequency of an isolated electron layer with surface number density $`n_s`$. The positive quantities $`D_{11}`$ and $`D_{12}`$ take into account the intra- and interlayer Coulomb correlations between the electrons, and are expressible via the Fourier transforms of the pair correlation functions $`h_{11}(k)`$ and $`h_{12}(k)`$, so that
$`D_{11}(k)`$ $`=`$ $`w_1^2(k){\displaystyle \sum _𝐪}{\displaystyle \frac{\left(𝐤\cdot 𝐪\right)^2}{k^3q}}\left\{h_{11}(|𝐤-𝐪|)-h_{11}(q)-e^{-qd}h_{12}(q)\right\},`$ (7)
$`D_{12}(k)`$ $`=`$ $`w_1^2(k){\displaystyle \sum _𝐪}{\displaystyle \frac{\left(𝐤\cdot 𝐪\right)^2}{k^3q}}e^{-qd}h_{12}(|𝐤-𝐪|).`$ (8)
The poles of the inverse dielectric permeabilities determine the eigenfrequencies in the bilayer system. We find in accordance with
$`\omega _{\mathrm{in}}^2(k)`$ $`=`$ $`w_1^2(k)(1+e^{-kd})+(D_{11}+D_{12}),`$ (9)
$`\omega _{\mathrm{out}}^2(k)`$ $`=`$ $`w_1^2(k)(1-e^{-kd})+(D_{11}-D_{12}).`$ (10)
These expressions were analyzed in Refs. . It was found that the in-phase modes are not qualitatively different from the corresponding modes in the isolated 2D layer (with double density). In particular, for $`k\to 0`$ the typical 2D soft plasmon mode behavior $`\omega \propto \sqrt{k}`$ can be observed. On the contrary, from Eq.(10) one concludes that the out-of-phase modes develop an energy gap at $`k=0`$:
$$\omega ^2(0)=-\frac{e^2n_s}{m}\int _0^{\mathrm{\infty }}dqq^2e^{-qd}h_{12}(q).$$
(11)
However, this result looks quite strange. One expects that in the long-wavelength limiting case $`k\to 0`$ the wave does not feel the separation between the two layers and we should deal with a double density single 2D layer. For the single layer, however, an out-of-phase mode is unknown. Moreover, as can be seen from Eq.(11) the energy gap for the out-of-phase mode remains even for the case of a vanishing layer separation $`d=0`$, i.e. for the case of a single 2D layer. Due to the translational invariance of the double density single 2D layer only the plasmon mode $`\omega ^2(k)=2w_1^2(k)`$ should be observed as $`k\to 0`$. One concludes therefore that there should not be any energy gap in the bilayer system. Summarizing the above discussion, we have found that Eq.(10) predicts an energy gap which should not exist.
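The limiting behaviors invoked in this discussion are easy to exhibit numerically. The sketch below evaluates Eqs.(9) and (10) with the correlation terms switched off ($`D_{11}=D_{12}=0`$, i.e. the RPA limit) in reduced units; the unit choices are ours.

```python
import numpy as np

# reduced units: k in units of 1/d, frequencies in units of
# sqrt(2*pi*e^2*n_s/(m*d)), so that w1^2(k) = k
k = np.logspace(-3, 1, 200)
w1sq = k
w_in = np.sqrt(w1sq * (1.0 + np.exp(-k)))    # Eq. (9) with D11 = D12 = 0
w_out = np.sqrt(w1sq * (1.0 - np.exp(-k)))   # Eq. (10) with D11 = D12 = 0

# small-k behavior: w_in -> sqrt(2k), the plasmon of a single 2D layer of
# double density; w_out -> k (acoustic); oscillator strength 1-e^{-k} -> k
print(w_in[0] / np.sqrt(2.0 * k[0]))   # -> 1
print(w_out[0] / k[0])                 # -> 1
```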
To clarify this discrepancy we go back to Eqs.(5) and (6). These equations describe the dielectric permeabilities of systems of oscillators with eigenfrequency $`\omega _{\mathrm{in}}^2(k)`$ (or $`\omega _{\mathrm{out}}^2(k)`$) and with the oscillator strength $`f_{\mathrm{in}}=1+e^{-kd}`$ (or $`f_{\mathrm{out}}=1-e^{-kd}`$, respectively). We focus now on the out-of-phase mode and consider the long-wavelength limiting case $`k\to 0`$. The eigenfrequency of the out-of-phase mode reduces to the energy gap value (Eq.(11)) but the oscillator strength tends to zero as $`f_{\mathrm{out}}\simeq kd`$. This resolves the apparent paradox. Within the QLC approach there is indeed a finite energy gap for the out-of-phase mode, but this mode does not develop as $`k\to 0`$ since the mode oscillator strength vanishes. In quantum-mechanical language this means that the transition between two energy levels separated by an energy gap is forbidden; in classical language one would say that the number of oscillators having the eigenfrequency $`\omega ^2(0)`$ (Eq.(11)) tends to zero as $`k\to 0`$.
Our discussion results in the following picture within the QLC approach: as $`k\to 0`$ the eigenfrequency of the out-of-phase mode tends to the energy gap value but the mode signal becomes weaker and weaker. Now a deficiency of the QLC algorithm comes into play. The QLC method is not able to describe damping processes and therefore the calculations based on the QLC approximation fail to provide information on the damping of the collective modes. However, with decreasing mode signal the noise-to-signal ratio increases and it becomes necessary to take into account damping processes.
In what follows we employ the classical method of moments to determine the dispersion law of the bilayer collective modes. To do so we use an interpolation formula, satisfying all known exact relations and sum rules. This allows us to include damping processes into our calculation, at least formally. In Ref. an expression for the dielectric permeability of a single 2D electron layer satisfying all known sum rules is constructed. Generalizing the results of to the present case one expresses the dielectric permeability for the out-of-phase motions via the frequency moments $`M_i=\frac{1}{\pi }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\omega ^i\mathrm{Im}\epsilon _{\mathrm{out}}^{-1}(𝐤,\omega )d\omega `$ ($`i=-1,1,3`$) and the function $`q_{\mathrm{out}}=q_{\mathrm{out}}(𝐤,z)`$ being analytic in the upper half-plane $`\mathrm{Im}z>0,z=\omega +i\eta `$ and having a positive imaginary part there, and such that $`[q_{\mathrm{out}}(k,z)/z]\to 0`$ for $`z\to \mathrm{\infty }`$,
$`\epsilon _{\mathrm{out}}^{-1}(𝐤,\omega )`$ $`=`$ $`1+{\displaystyle \frac{w_1^2(k)(1-e^{-kd})\left(\omega +q_{\mathrm{out}}(𝐤,\omega )\right)}{\omega \left(\omega ^2-w_1^2(k)(1-e^{-kd})-(D_{11}-D_{12})\right)+q_{\mathrm{out}}\left(\omega ^2-w_1^2(k)(1-e^{-kd})\left[1-\epsilon _{\mathrm{out}}^{-1}(k)\right]^{-1}\right)}}.`$ (12)
Eq.(12) interpolates between the exact high-frequency behavior (given by the first and third frequency moment sum rules) and the low-frequency behavior (determined by $`\epsilon _{\mathrm{out}}(k)`$, the static dielectric permeability for the out-of-phase motion). For simplicity, in Eq.(12) we have omitted the kinetic energy term in the third frequency moment since we are interested in the strong coupling limit only. An expression similar to Eq.(12) can be obtained for the in-phase motion. Notice that the QLC expression Eq.(6) can be obtained by putting $`q_{\mathrm{out}}\equiv 0`$. The QLC expression also satisfies the first and third frequency moment sum rules but does not reproduce the low-frequency behavior of the dielectric permeability. Via the inclusion of a finite (complex) function $`q_{\mathrm{out}}`$ one is able to describe the low-frequency behavior in the correct manner. In addition we now have the formal possibility to include damping into our considerations. In the liquid phase of a strongly coupled bilayer the damping is mainly given by the diffusion of the quasilocalized particles’ site positions. A nonzero diffusion constant for the site-position migration results in a nonzero conductivity for the out- and in-phase motions. We mention here that the 2D static “conductivity” $`\sigma _{\mathrm{out}}`$ is not a real conductivity since it describes the out-of-phase flow. Notice that the existence of a nonzero static out-of-phase “conductivity” is guaranteed only if $`q_{\mathrm{out}}(k\to 0,0)=ih`$, with $`h=4\pi \sigma _{\mathrm{out}}(D_{11}-D_{12})/[d_lw_1^2(k)(1-e^{-kd})]`$ ($`d_l`$ being an effective thickness of one layer). We have no phenomenological way to choose the exact $`q_{\mathrm{out}}(𝐤,\omega )`$. However, the most natural way to generalize the QLC result is to put $`q_{\mathrm{out}}(𝐤,\omega )`$ equal to its static value in the long-wavelength limit, $`q_{\mathrm{out}}=ih`$. Within this approximation the high-frequency pole of the out-of-phase permeability is shifted into the lower complex half-plane. Instead of Eq.(10) we have now:
$`\omega _{\mathrm{out}}(k\to 0)`$ $`=`$ $`-i{\displaystyle \frac{h}{2}}\pm \sqrt{(D_{11}-D_{12})-{\displaystyle \frac{h^2}{4}}},`$ (13)
and the mode corresponding to the energy gap in the QLC approximation becomes overdamped.
In the previous discussion we have shown that the interlayer correlations lead to a divergence of the ratio of the third to the squared first frequency moment, $`M_3/M_1^2`$, in the long-wavelength limit. Using the fact that $`q_{\mathrm{out}}\propto M_3/M_1^2`$, the expansion of Eq.(12) in terms of $`M_1^2/M_3`$ provides as $`k\to 0`$ the following expression:
$`\epsilon _{\mathrm{out}}^{-1}(𝐤\to \mathrm{𝟎},\omega )`$ $`=`$ $`1+{\displaystyle \frac{w_1^2(k)(1-e^{-kd})}{\omega ^2+\omega q^{\prime }(𝐤,\omega )-w_1^2(k)(1-e^{-kd})\left[1-\epsilon _{\mathrm{out}}^{-1}(k)\right]^{-1}}}+O(k^2).`$ (14)
Here $`q^{\prime }(𝐤,\omega )=(D_{11}-D_{12})/q_{\mathrm{out}}(𝐤,\omega )`$ is a function which has the same properties as $`q_{\mathrm{out}}(𝐤,\omega )`$. Notice that Eq.(14) contains the information on the static dielectric permeability $`\epsilon _{\mathrm{out}}^{-1}(k)`$, which is given by a compressibility sum rule as $`k\to 0`$. The high-frequency part is given only by the first frequency moment, whereas the “diverging” third frequency moment is absent. Eq.(14) therefore corresponds to a STLS-like approximation for the out-of-phase motion dielectric permeability. We have again no phenomenological way to choose the exact $`q^{\prime }(𝐤,\omega )`$. However, from Eq.(14) it can be seen that $`\nu =-iq^{\prime }(𝐤,\omega )`$ can be regarded as an effective collision frequency in a Drude-Lorentz like theory. Neglecting the frequency dependence of this collision frequency for the out-of-phase motion, one defines the collision frequency by the static “conductivity” of the out-of-phase motion $`\sigma _{\mathrm{out}}`$, $`\nu (k)=w_1^2(k)(1-e^{-kd})d_l/4\pi \sigma _{\mathrm{out}}`$ . We will not perform a detailed analysis of this quantity. We mention here only that in the case of a finite out-of-phase “conductivity” we obtain that $`\nu (k)`$ behaves as $`k^2`$ for $`k\to 0`$. Then from Eq.(14) we obtain the following dispersion for the out-of-phase motion in the long-wavelength limiting case,
$$\omega _{\mathrm{out}}^2(k)=\frac{2\pi e^2n_sd}{m}k^2+o(k^2).$$
(15)
This is just the result of the STLS calculations . Notice that Eq.(15) corresponds to the low-frequency pole of Eq.(12). Thus we have found that the expression Eq.(12) calculated from the theory of moments contains the QLC expression and an STLS-like expression as limiting cases. Further, we have shown that in the long-wavelength limiting case Eq.(12) converts into an STLS-like expression and does not predict the existence of an energy gap in the bilayer. Due to the absence of an energy gap in the present approach we observe a soft transition from the bilayer to the double density single layer as the layer separation $`d\to 0`$. However, within the qualitative analysis of this paper we are not able to establish the value of the wavevector $`k`$ at which the transition to the STLS-like permeability occurs. Nevertheless, some qualitative statements can be made. The “transition point” to the STLS regime is mainly governed by the function $`q_{\mathrm{out}}\propto \nu ^{-1}(k)`$. With increasing coupling strength the diffusion constant decreases and so does $`\nu ^{-1}(k)`$. We also expect that the STLS “transition point” is shifted to lower $`k`$ values, so that it might be possible to observe an “energy gap” at not too low $`k`$ values.
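The acoustic coefficient in Eq.(15) can be verified symbolically; the following one-off check is our own, with the symbols standing for the quantities defined above.

```python
import sympy as sp

k, d, e, n_s, m = sp.symbols('k d e n_s m', positive=True)
w1sq = 2 * sp.pi * e**2 * n_s * k / m      # w_1^2(k)
expr = w1sq * (1 - sp.exp(-k * d))         # w_1^2(k) * (1 - e^{-kd})
print(sp.series(expr, k, 0, 3))
# -> 2*pi*d*e**2*k**2*n_s/m + O(k**3): the k^2 coefficient of Eq. (15)
```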
In this Brief Report the dispersion law for the out-of-phase mode in an electronic bilayer is analyzed. The analysis is based on the expression for the dielectric permeability of the out-of-phase motion obtained from the classical theory of moments without using perturbation theory. We have shown that this expression contains the permeability expressions obtained by other approaches, such as the QLC or the STLS approximation, as limiting cases. The analysis of our expression has confirmed the STLS prediction of the absence of an energy gap in the bilayer system. Finally, we invite experimentalists to verify the different predictions of the various theoretical approaches concerning the existence of an energy gap in a bilayer. This would provide not only a check for this specific system but also a general verification of the various theoretical approaches applicable to other plasma systems. Another tool to investigate the collective mode structure in bilayers might be numerical simulations. However, due to the finite number of particles in the simulations the available $`k`$ values have a lower bound. Therefore one remains with the difficult task of extrapolating the behavior at $`k=0`$ from the behavior at relatively high $`k`$ values.
Acknowledgments. Valuable discussions with G. Kalman and I. M. Tkachenko are gratefully acknowledged. This work was financed by the Deutsche Forschungsgemeinschaft.
# ON UNIVERSALITY OF SMOOTHED EIGENVALUE DENSITY OF LARGE RANDOM MATRICES
## Abstract
We describe the resolvent approach for the rigorous study of the mesoscopic regime of Hermitian matrix spectra. We present results reflecting universal behaviour of the smoothed density of eigenvalue distribution of large random matrices.
Random matrices of large dimensions introduced and studied by E. Wigner have applications in various fields of theoretical physics (see e.g. monographs and reviews and references therein). In these studies, the spectral properties of random matrix ensembles play an important role.
Here the universality conjecture for large random matrices, formulated by F. Dyson , is known as the most interesting and challenging problem. It concerns the asymptotically local spectral statistics, i.e. functions that depend on a certain number $`q`$ of eigenvalues of a random $`N\times N`$ matrix $`A_N`$, where this number remains fixed as $`N\to \mathrm{\infty }`$.
Loosely speaking, the universality conjecture states that the local statistics regarded in the limit $`N\mathrm{}`$ do not depend on the details of the probability distribution $`P(A_N)`$ of the ensemble but are determined by the symmetries of the ensemble. For example, the expressions derived for local statistics of Hermitian ensembles are different from those of real symmetric matrices.
Given Hermitian (or real symmetric) matrix $`A_N`$, the distribution of its eigenvalues $`\lambda _1^{(N)}\mathrm{}\lambda _N^{(N)}`$ is determined by the normalized eigenvalue counting function
$$\sigma _N(\lambda )\equiv \sigma (\lambda ;A_N):=\mathrm{\#}\{\lambda _j^{(N)}\le \lambda \}N^{-1}$$
or, equivalently, by the associated measure
$$\sigma _N(\mathrm{\Delta })=\int _a^b\varrho _N(\lambda )\text{ d}\lambda ,\mathrm{\Delta }=(a,b)\subset 𝐑,$$
with the formal density
$$\varrho _N(\lambda )=\frac{1}{N}\sum _{j=1}^N\delta (\lambda -\lambda _j^{(N)}).$$
$`(1)`$
The function $`\sigma _N(\lambda )`$ is called the empirical eigenvalue distribution function. Regarding $`\sigma _N(\mathrm{\Delta }_N)`$, it turns out to be a local spectral statistic when considered on intervals of length $`|\mathrm{\Delta }_N|=O(1/N)`$ as $`N\to \mathrm{\infty }`$.
In general, the local spectral regime is rather hard to analyse rigorously. The universality conjecture is supported mainly for those ensembles of random matrices that have an explicit form of the joint probability distribution $`\pi _N(\lambda _1,\mathrm{\dots },\lambda _N)`$ of eigenvalues. Starting from $`\pi _N`$, the same expression for the $`m`$-point correlation function is derived by Dyson for the circular ensemble of unitary random matrices (CUE) , by Mehta for GUE , by Pastur and Shcherbina for the matrix models ensemble . This expression is given by the determinant of the $`m\times m`$ matrix with the entries $`\{\mathrm{sin}\pi (t_i-t_j)/\pi (t_i-t_j)\}`$, $`i,j=1,\mathrm{\dots },m`$. The same expression is derived in for a random matrix ensemble with entries that are independent random variables, whose probability distribution is a convolution of the Gaussian distribution and an arbitrary one.
Our principal goal is to examine the presence of universality of the spectral characteristics for those ensembles of random matrices for which the explicit form of the joint eigenvalue distribution $`\pi _N`$ is unknown. For example, a random matrix with independent $`\pm 1`$ entries falls into this class. Our claim is that the eigenvalue density (1) smoothed over intervals $`\mathrm{\Delta }_N\subset 𝐑`$ possesses universal properties as $`N\to \mathrm{\infty }`$ provided the length $`l_N=|\mathrm{\Delta }_N|`$ satisfies the conditions $`1\ll l_N\ll N`$.
We determine the smoothing (or regularization) of (1) by the formula
$$R_N^{(\alpha )}(\lambda ):=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{N^\alpha }{1+N^{2\alpha }(\lambda -\lambda ^{\prime })^2}\varrho _N(\lambda ^{\prime })\text{ d}\lambda ^{\prime }$$
$`(2)`$
and note that in this case
$$R_N^{(\alpha )}(\lambda )=\text{Im }\text{Tr }G_N(\lambda +\text{i}N^{-\alpha })N^{-1},$$
where $`G_N(z)=(A_N-z)^{-1}`$.
According to the above definition, $`R_N^{(\alpha )}(\lambda )`$ with $`\alpha =1`$ represents the asymptotically local spectral statistics. The opposite asymptotic regime when $`\alpha =0`$ is known as the global one. In this case the limit
$$g(z)=\underset{N\to \mathrm{\infty }}{lim}\text{Tr }G_N(z)N^{-1},|\text{Im }z|>0$$
if it exists, determines the limiting eigenvalue distribution $`\sigma (\lambda )`$ of the ensemble $`\{A_N\}`$; that is
$$\sigma (\lambda )=\underset{N\to \mathrm{\infty }}{lim}\sigma _N(\lambda ),g(z)=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}(\lambda -z)^{-1}\text{ d}\sigma (\lambda ).$$
Regarding the global regime, the resolvent approach developed in papers has proved to be rather effective in studies of the eigenvalue distribution of large random matrices (see, for example ). In this regime the limit of $`g_N(z)=\text{Tr }G_N(z)N^{-1}`$ depends on the probability distribution of the ensemble, i.e. is non-universal, as do the fluctuations of $`g_N(z)`$ .
We are interested in the behaviour of (2) in the case of $`0<\alpha <1`$. This regime is intermediate between the local and the global ones. It can be called the mesoscopic regime in random matrix spectra. For this regime, the modified version of the resolvent approach was proposed in to study spectral properties of random matrices with independent arbitrary distributed entries (see also ).
As a further development of the resolvent approach of , we present results concerning random matrices with statistically dependent entries. We consider the ensemble of random matrices
$$H_{m,N}(x,y)=\frac{1}{N}\sum _{\mu =1}^m\xi _\mu (x)\xi _\mu (y),x,y=1,\mathrm{\dots },N,$$
$`(3)`$
where the random variables $`\{\xi _\mu (x)\},x,\mu 𝐍`$ have joint Gaussian distribution with zero mathematical expectation and covariance
$$𝐄\{\xi _\mu (x)\xi _\nu (y)\}=u^2\delta _{xy}\delta _{\mu \nu }.$$
Here $`\delta _{xy}`$ denotes the Kronecker delta-symbol. This ensemble, first considered in , is now of extensive use in the statistical mechanics of disordered spin systems and in the modelling of memory in the theory of neural networks .
Theorem 1. Let $`G_{m,N}(z)=(H_{m,N}-z)^{-1}`$. Then, for $`N,m\to \infty `$, $`m/N\to c>0`$, the random variable
$$R_{m,N}^{(\alpha )}(\lambda ):=\text{Im }\text{Tr }G_{m,N}(\lambda +\text{i}N^{-\alpha })N^{-1}$$
converges with probability 1 as $`N\to \infty `$ to the nonrandom limit
$$\pi \varrho _c(\lambda )=\frac{1}{2\lambda u^2}\sqrt{4cu^4-[\lambda -(1+c)u^2]^2}$$
$`(4)`$
provided $`0<\alpha <1`$ and $`\lambda \in \mathrm{\Lambda }_{c,u}=(u^2(1-\sqrt{c})^2,u^2(1+\sqrt{c})^2)`$.
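As an illustration of Theorem 1 (our addition, not part of the original argument), the following minimal Monte Carlo sketch samples the ensemble (3), smooths the empirical eigenvalue density with the Lorentzian of width $`N^{-\alpha }`$ as in (2), and compares the result with the limit (4); the parameter values are arbitrary choices made only for this experiment.

```python
import numpy as np

# Sample H_{m,N}(x,y) = (1/N) sum_mu xi_mu(x) xi_mu(y), Eq. (3)
N, c, u, alpha = 1000, 2.0, 1.0, 0.5
m = int(c * N)
xi = np.random.normal(0.0, u, size=(m, N))     # xi_mu(x), covariance u^2
H = xi.T @ xi / N
eigs = np.linalg.eigvalsh(H)

def R(lam, eps=N ** (-alpha)):
    # Im Tr G(lambda + i N^{-alpha}) / N, cf. the quantity in Theorem 1
    return np.mean(eps / ((lam - eigs) ** 2 + eps ** 2))

def rho_c(lam):
    # the limiting density of Eq. (4)
    disc = 4 * c * u**4 - (lam - (1 + c) * u**2) ** 2
    return np.sqrt(max(disc, 0.0)) / (2 * np.pi * lam * u**2)

for lam in (1.0, 3.0, 5.0):                    # points inside Lambda_{c,u}
    print(lam, R(lam) / np.pi, rho_c(lam))     # the two columns should agree
```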
Theorem 2. Consider $`k`$ random variables
$$\gamma _{m,N}^{(\alpha )}(i):=N^{1-\alpha }\left[R_{m,N}^{(\alpha )}(\lambda _i)-𝐄R_{m,N}^{(\alpha )}(\lambda _i)\right],\qquad i=1,\dots ,k,$$
where $`\lambda _i=\lambda +\tau _iN^{-\alpha }`$ with given $`\tau _i`$. Then under the conditions of Theorem 1 the joint distribution of the vector $`(\gamma _N(1),\dots ,\gamma _N(k))`$ converges to the $`k`$-dimensional Gaussian distribution with zero average and covariance
$$C(\tau _i,\tau _j)=\frac{4-(\tau _i-\tau _j)^2}{[4+(\tau _i-\tau _j)^2]^2}.$$
$`(5)`$
Remark. It is easy to see that if $`|\tau _1-\tau _2|\to \infty `$, then
$$C(\tau _1,\tau _2)=-(\tau _1-\tau _2)^{-2}(1+o(1)).$$
$`(6)`$
This coincides with the average value of Dyson’s 2-point correlation function for real symmetric matrices considered at large distances $`|t_1-t_2|\gg 1`$ .
To discuss these results, let us first note that Theorem 1 proves the existence of the smoothed density of eigenvalues, which coincides with the density derived in in the global regime: $`\varrho _c(\lambda )=\sigma _c^{\prime }(\lambda )`$, $`\lambda >0`$, where
$$\sigma _c(\lambda )=\underset{N\to \infty }{lim}\sigma (\lambda ;H_{m,N}).$$
This density obviously differs from the semicircle (or Wigner) distribution $`\sigma _w(\lambda )`$
$$\sigma _w(\lambda )=\underset{N\to \infty }{lim}\sigma (\lambda ;W_N)$$
where $`W_N(x,y)=w(x,y)/\sqrt{N}`$ are random symmetric matrices with independent identically distributed entries having zero mathematical expectation and variance $`v^2`$. This ensemble is known as the Wigner ensemble of random matrices. It has been known since the pioneering work of Wigner that
$$\varrho _w(\lambda )=\sigma _w^{\prime }(\lambda )=\frac{1}{2\pi v^2}\{\begin{array}{cc}\sqrt{4v^2-\lambda ^2},\hfill & \text{if }|\lambda |\le 2v\text{,}\hfill \\ 0,\hfill & \text{if }|\lambda |>2v.\hfill \end{array}$$
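The class of matrices with independent $`\pm 1`$ entries mentioned above has no known joint eigenvalue law, yet its smoothed density is easy to probe numerically. A minimal sketch of such a check (our addition; the matrix size and evaluation point are arbitrary choices):

```python
import numpy as np

# Symmetric matrix with independent +-1 entries (variance v^2 = 1),
# scaled as W_N = w / sqrt(N) as in the definition above.
N, alpha, v = 1000, 0.5, 1.0
w = np.random.choice([-1.0, 1.0], size=(N, N))
W = np.triu(w) + np.triu(w, 1).T
eigs = np.linalg.eigvalsh(W / np.sqrt(N))

lam, eps = 0.7, N ** (-alpha)
R = np.mean(eps / ((lam - eigs) ** 2 + eps ** 2))   # mesoscopically smoothed density
print(R / np.pi, np.sqrt(4 * v**2 - lam**2) / (2 * np.pi * v**2))  # vs. semicircle
```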
It should be noted that in papers we proved analogues of Theorems 1 and 2 for the Wigner ensemble of random matrices. We showed that Theorem 2 holds provided $`𝐄w(x,y)^8`$ is bounded and $`\alpha \in (0,1/8)`$. The correlation function $`C(\tau _i,\tau _j)`$ is given again by (5). Comparing these results, we conclude that the fluctuations of the smoothed eigenvalue density do not feel the dependence between matrix elements.
Thus, our results can be regarded as statements corroborating the universality conjecture in the mesoscopic regime. Namely, they show that in the mesoscopic regime the smoothed density of eigenvalues $`R_N^{(\alpha )}`$ is a self-averaging quantity. It converges as $`N\to \infty `$ to the eigenvalue distribution of the ensemble, and this limit depends on the probability distribution of the random matrix ensemble. At the same time, the fluctuations of $`R_N^{(\alpha )}`$ in the limit $`N\to \infty `$ coincide for two such different classes of random matrices as (3) and the Wigner ensemble.
This dual-type behaviour of the eigenvalue density of random matrices in the mesoscopic regime is well known in theoretical physics (see, for example, the review , Chapter 8). In particular, the universal properties of the mesoscopic eigenvalue density are studied in for the matrix models ensemble. It was shown there that the correlation function of the eigenvalue density (in theoretical physics terms, the “wide” correlator) depends on the edges of the spectrum. For the case of a symmetric support of the limiting eigenvalue distribution, the expression for the “wide” correlator coincides with the asymptotic expression (6). It should be noted that our result (6) does not depend on the support of $`\varrho _c(\lambda )`$.
Let us describe the method developed for the proof of Theorems 1 and 2 (the full version will be published elsewhere). It is a modification of the resolvent approach proposed in papers . That approach was developed to study the eigenvalue distribution of random matrices and random operators in the global regime $`|\text{Im }z|>0`$ as $`N\to \infty `$. It is based on the derivation and asymptotic analysis of a system of relations for the moments $`L_k^{(N)}=𝐄[g_N(z)]^k`$, $`k\ge 1`$. These relations have the following form
$$L_k=aL_{k-1}+bL_{k+1}+\mathrm{\Phi }_k^{(N)},$$
$`(7)`$
where the terms $`\mathrm{\Phi }_k^{(N)}`$ can be estimated by $`N^{-1}|\text{Im }z|^{-k}`$. This a priori estimate implies that, in the study of the asymptotic behaviour of $`g_N(z)`$, one can restrict oneself to the first two relations only. Namely, all information about the limiting behaviour of $`L_k^{(N)}`$, $`k\ge 1`$, can be derived from the relations for $`L_1^{(N)}`$ and $`L_2^{(N)}`$.
To consider $`g_N(z)`$ in the mesoscopic regime, we start with the same system of relations for $`L_k^{(N)}`$. The main observation made in is that in this case the whole infinite system of relations is needed. More precisely, the closer $`\alpha `$ is to 1, the greater the number $`K(\alpha )`$ such that the relations for $`L_k^{(N)}`$, $`k\le K(\alpha )`$, have to be considered.
The point is that the terms $`\mathrm{\Phi }_k^{(N)}`$ can be estimated in terms of $`L_k^{(N)}`$ multiplied by $`N^{-\beta }`$, where $`\beta =\mathrm{min}\{\alpha ,1-\alpha \}`$. The structure of relations (7) is such that $`L_j^{(N)}`$, $`j<k`$, enters the relation for $`L_k^{(N)}`$ with the factor $`N^{-\beta (k-j)}`$. Therefore, admitting the a priori estimate $`|L_1^{(N)}|\le N^\alpha `$, we deduce that it enters the relation (7) with the factor $`N^{-k\beta }`$. For $`k\ge K(\alpha )`$, one obtains relations whose terms converge to finite limits as $`N\to \infty `$.
Acknowledgments. The authors are grateful to A. Its, P. Bleher, H. Widom and the other organizers of the semester “Random Matrix Models and Their Applications” at MSRI (Berkeley) for the kind invitations to participate in the workshops and for the financial support.
## 1 Introduction
$`D`$-branes play a significant role in supersymmetric string and field theories. Two outstanding developments have been achieved in this direction:
1. The generalized AdS/CFT duality , which relates the superconformal field theory on $`Dp`$-branes placed at the orbifold singularity and the Type IIB string theory compactified on $`AdS_{p+2}\times H^{8-p}`$ ;
2. The K-theory approach to $`D`$-brane charges , which identifies $`D`$-brane charges with elements of Grothendieck K-groups of horizon manifolds $`H^{8-p}`$ .
In the present paper we use K-theory to compute the spectrum of $`D`$-brane charges in the Type IIB string theory compactified on $`AdS_{p+2}\times S^{8-p}`$. The result differs from the corresponding result presented in .
## 2 $`D`$-brane charges
Let us equip the horizon $`S^{8-p}`$ with the gauge bundle
| $`U`$ | $`\to `$ | $`E`$ |
| --- | --- | --- |
| | | $`\downarrow `$ |
| | | $`S^{8-p}`$ |
The Steenrod classification theorem asserts that this bundle is characterized by the homotopy group $`\pi _{7-p}\left(U\right)`$. Using the standard definition of the K-group , we obtain
$$\stackrel{~}{K}\left(S^{8-p}\right)=\pi _{7-p}\left(U\right).$$
(1)
The exact homotopy sequence
$$\cdots \to \pi _n\left(U\left(2N\right)/U\left(N\right)\right)\to \pi _n\left(U\left(2N\right)/U\left(N\right)\times U\left(N\right)\right)$$
$$\to \pi _{n-1}\left(U\left(N\right)\right)\to \pi _{n-1}\left(U\left(2N\right)/U\left(N\right)\right)\to \cdots $$
for the universal bundle
| $`U\left(N\right)`$ | $`\to `$ | $`U\left(2N\right)/U\left(N\right)`$ |
| --- | --- | --- |
| | | $`\downarrow `$ |
| | | $`U\left(2N\right)/U\left(N\right)\times U\left(N\right)`$ |
yields
$$\stackrel{~}{K}\left(S^{8-p}\right)=\pi _{7-p}\left(U\right)=\pi _{8-p}\left(B_U\right),$$
where $`B_U`$ is the inductive limit of the manifold
$$U\left(2N\right)/U\left(N\right)\times U\left(N\right).$$
(2)
The manifold (2) has the following interpretation in terms of $`D`$-branes . When $`2N`$ coinciding branes are separated to form two parallel stacks of $`N`$ coinciding branes, their gauge symmetry $`U\left(2N\right)`$ is spontaneously broken to $`U\left(N\right)\times U\left(N\right)`$. This situation generically allows for the existence of gauge solitons.
$`D`$-brane charges take values in the K-groups (1). To compute K-groups (1), we use the Bott periodicity theorem . The spectrum of $`D`$-brane charges is recorded in Table 1.
Table 1
| $`Dp`$ | $`D8`$ | $`D7`$ | $`D6`$ | $`D5`$ | $`D4`$ | $`D3`$ | $`D2`$ | $`D1`$ | $`D0`$ | $`D(-1)`$ | $`D(-2)`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`S^n`$ | $`S^0`$ | $`S^1`$ | $`S^2`$ | $`S^3`$ | $`S^4`$ | $`S^5`$ | $`S^6`$ | $`S^7`$ | $`S^8`$ | $`S^9`$ | $`S^{10}`$ |
| $`\stackrel{~}{K}(S^n)`$ | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ |
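For completeness, the entries of Table 1 follow in two lines from Bott periodicity together with the lowest-dimensional K-groups (this short derivation is our addition, included as a consistency check):
$$\stackrel{~}{K}\left(S^{n+2}\right)\cong \stackrel{~}{K}\left(S^n\right),\qquad \stackrel{~}{K}\left(S^0\right)=𝐙,\qquad \stackrel{~}{K}\left(S^1\right)=0,$$
so that $`\stackrel{~}{K}(S^n)=𝐙`$ for even $`n`$ and $`\stackrel{~}{K}(S^n)=0`$ for odd $`n`$.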
## 3 Remark
The result recorded in Table 1 differs from the corresponding result obtained in :
| $`Dp`$ | $`D9`$ | $`D8`$ | $`D7`$ | $`D6`$ | $`D5`$ | $`D4`$ | $`D3`$ | $`D2`$ | $`D1`$ | $`D0`$ | $`D(-1)`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`S^n`$ | $`S^0`$ | $`S^1`$ | $`S^2`$ | $`S^3`$ | $`S^4`$ | $`S^5`$ | $`S^6`$ | $`S^7`$ | $`S^8`$ | $`S^9`$ | $`S^{10}`$ |
| $`\stackrel{~}{K}(S^n)`$ | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ | 0 | $`𝐙`$ |
# Correlations of electromagnetic fields in chaotic cavities
## Abstract
We consider the fluctuations of electromagnetic fields in chaotic microwave cavities. We calculate the transversal and longitudinal correlation functions based on a random wave assumption and compare the predictions with measurements on two- and three-dimensional microwave cavities.
Classical ergodicity suggests that wave functions in chaotic systems may be described by superpositions of waves with wave vectors of constant length but random directions . Their fluctuations are distinctly different from the more familiar optical speckle patterns, where the wave numbers fluctuate as well . The distribution of amplitudes turns out to be Gaussian, and the spatial autocorrelation function is given by Bessel functions of order $`\frac{d}{2}-1`$, where $`d`$ is the billiard dimension. Overwhelming evidence for this has been accumulated, especially in numerical studies of billiards , experiments on microwave billiards and surfaces of constant negative curvature . Higher order moments and correlations as well as contributions from prominent classical structures (‘scars’) have also been studied , so that by now one has a fairly good understanding of the fluctuation properties of scalar wave functions in chaotic systems.
Recent experiments on the elastodynamics of vibrating blocks and 3-d microwave resonators deal with situations where the wave fields have more than one component, and the components are typically mixed by the boundary conditions . New effects such as ray splitting or chaotic features in systems with integrable ray dynamics are found.
We are here interested in the consequences of these additional degrees of freedom for the fluctuations of the wave functions. In particular, we will show that the fact that in the absence of space charges electromagnetic fields are divergence free implies differences between the longitudinal and transversal correlation functions and deviations from the behaviour expected for scalar fields . We present results for the field components, the intensities and the frequency shift and compare with experiments on microwave billiards.
Our starting point is the electromagnetic analog of the semiclassical ansatz that the scalar field is a superposition of plane waves with constant amplitudes and fixed wave length but randomly oriented wave vectors . For the electromagnetic case we have in addition to allow for different orientations of the polarization. We thus assume that the field at a point $`𝐱`$ in position space is due to a superposition of many plane waves with uniformly distributed orientations of polarization and wave vector, e.g.,
$$𝐄(𝐱)\sim \underset{\nu }{\sum }𝐄_\nu e^{i𝐤_\nu \cdot 𝐱},$$
(1)
and similarly for the $`𝐁`$-field. The complex amplitudes $`𝐄_\nu `$ and $`𝐁_\nu `$ of all waves are transversal, $`𝐄_\nu \cdot 𝐤_\nu =0`$ and $`𝐁_\nu \cdot 𝐤_\nu =0`$, and satisfy $`𝐄_\nu \cdot 𝐁_\nu =0`$, so that $`𝐤_\nu `$, $`𝐄_\nu `$ and $`𝐁_\nu `$ form an orthogonal dreibein. The absolute values $`|𝐄_\nu |`$ and $`|𝐁_\nu |`$ are all the same; that is to say, we assume that there are no losses during reflections at the walls. From this assumption it follows immediately that all three components are Gaussian distributed with the same distribution.
The spatially averaged correlation functions are then, for the electric field,
$$C_{E,ij}(𝐫)=\langle 𝐄_i(𝐱+𝐫/2)𝐄_j(𝐱-𝐫/2)\rangle /\langle 𝐄_i^2\rangle ,$$
(2)
and similarly for the magnetic field and the cross correlation between $`𝐄`$ and $`𝐁`$. The normalization is by the mean square amplitude of a single component of the fields, $`\langle E_i^2\rangle `$, so that $`C_{E,ii}(0)=1`$. In tensor notation, this can be combined into a tensor of correlation functions,
$$C_E(𝐫)=\langle 𝐄(𝐱+𝐫/2)𝐄(𝐱-𝐫/2)\rangle /\langle 𝐄_i^2\rangle .$$
(3)
Substituting (1) and performing the spatial average then results in (the normalization will be restored in the end)
$$C_E(𝐫)\sim \underset{\nu }{\sum }𝐄_\nu 𝐄_\nu ^{\ast }e^{i𝐤_\nu \cdot 𝐫}.$$
(4)
To proceed further, let $`𝐫`$ point in the $`z`$-direction and introduce spherical coordinates for the wave vector,
$$𝐤_\nu =k\left(\begin{array}{c}\mathrm{cos}\varphi _\nu \mathrm{sin}\theta _\nu \\ \mathrm{sin}\varphi _\nu \mathrm{sin}\theta _\nu \\ \mathrm{cos}\theta _\nu \end{array}\right).$$
(5)
The electromagnetic field contributions lie in a plane perpendicular to this wave vector, spanned by the two vectors
$$𝐞_1^{(\nu )}=\left(\begin{array}{c}\mathrm{sin}\varphi _\nu \\ -\mathrm{cos}\varphi _\nu \\ 0\end{array}\right),𝐞_2^{(\nu )}=\left(\begin{array}{c}\mathrm{cos}\varphi _\nu \mathrm{cos}\theta _\nu \\ \mathrm{sin}\varphi _\nu \mathrm{cos}\theta _\nu \\ -\mathrm{sin}\theta _\nu \end{array}\right).$$
(6)
If $`\psi _\nu `$ denotes the angle of polarization, the field components are
$`𝐄_\nu `$ $`=`$ $`\mathrm{cos}\psi _\nu 𝐞_1^{(\nu )}+\mathrm{sin}\psi _\nu 𝐞_2^{(\nu )}`$ (7)
$`𝐁_\nu `$ $`=`$ $`-\mathrm{sin}\psi _\nu 𝐞_1^{(\nu )}+\mathrm{cos}\psi _\nu 𝐞_2^{(\nu )}.`$ (8)
In the limit of a large number of contributing components, the sum over the different contributions can be replaced by a continuous average over all directions (angles $`\theta `$ and $`\varphi `$) for the wave vector and all polarizations (angle $`\psi `$),
$$\frac{1}{N}\underset{\nu }{\sum }\cdots \to \frac{1}{2\pi }\int _0^{2\pi }𝑑\psi \frac{1}{2\pi }\int _0^{2\pi }𝑑\varphi \frac{1}{2}\int _0^\pi \mathrm{sin}\theta d\theta \cdots $$
(9)
After averaging over the polarizations and the azimuthal angle, the correlation functions become
$`C_E(𝐫)`$ $`=`$ $`{\displaystyle \frac{3}{4}}\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 0\end{array}\right)\left\langle e^{ikr\mathrm{cos}\theta }\right\rangle _\theta `$ (10)
$`+`$ $`\left\langle \left(\begin{array}{ccc}\frac{3}{4}\mathrm{cos}^2\theta & 0& 0\\ 0& \frac{3}{4}\mathrm{cos}^2\theta & 0\\ 0& 0& \frac{3}{2}\mathrm{sin}^2\theta \end{array}\right)e^{ikr\mathrm{cos}\theta }\right\rangle _\theta .`$ (11)
The final average over $`\theta `$ can be expressed in terms of spherical Bessel functions, but it is more convenient to use the trigonometric representation directly,
$$C_E(𝐫)=\left(\begin{array}{ccc}f_{\perp }(kr)& 0& 0\\ 0& f_{\perp }(kr)& 0\\ 0& 0& f_{\parallel }(kr)\end{array}\right)$$
(12)
with the transversal correlation function
$$f_{\perp }(\xi )=\frac{3}{2}\left(\frac{\mathrm{sin}\xi }{\xi }-\frac{\mathrm{sin}\xi -\xi \mathrm{cos}\xi }{\xi ^3}\right)$$
(13)
and the longitudinal correlation function
$$f_{\parallel }(\xi )=3\frac{\mathrm{sin}\xi -\xi \mathrm{cos}\xi }{\xi ^3}.$$
(14)
The asymptotic behaviour of these functions is as follows: for small $`r`$ they approach the same value, $`C_{E,ii}\to 1`$, by normalization. For large $`r`$ they oscillate on a scale set by the wavenumber and decay like $`1/r`$ for the transversal and like $`1/r^2`$ for the longitudinal correlations. For the trace of the correlation function,
$`\text{tr }C_E`$ $`=`$ $`\langle 𝐄(𝐱+𝐫/2)𝐄(𝐱-𝐫/2)\rangle /\langle 𝐄_i^2\rangle `$ (15)
$`=`$ $`3{\displaystyle \frac{\mathrm{sin}kr}{kr}}`$ (16)
the correlation between polarizations and wave vector is eliminated and one recovers Berry’s result for random waves in three dimensions , except for a factor due to the normalization (see Fig. 1).
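The algebra behind Eqs. (13)–(16) is easy to verify numerically; the following minimal sketch (our addition, not part of the original analysis) evaluates both correlators and checks the trace identity and the small-$`\xi `$ limits:

```python
import numpy as np

# Transversal and longitudinal correlators, Eqs. (13) and (14)
def f_perp(xi):
    return 1.5 * (np.sin(xi) / xi - (np.sin(xi) - xi * np.cos(xi)) / xi**3)

def f_par(xi):
    return 3.0 * (np.sin(xi) - xi * np.cos(xi)) / xi**3

xi = np.linspace(0.1, 20.0, 400)
# trace identity, Eq. (16): 2 f_perp + f_par = 3 sin(xi)/xi
assert np.allclose(2 * f_perp(xi) + f_par(xi), 3 * np.sin(xi) / xi)
print(f_perp(1e-2), f_par(1e-2))   # both approach 1 for small xi
```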
The correlations for the magnetic field $`𝐁`$ have the same functional dependence,
$$C_B(𝐫)=\left(\begin{array}{ccc}f_{\perp }(kr)& 0& 0\\ 0& f_{\perp }(kr)& 0\\ 0& 0& f_{\parallel }(kr)\end{array}\right).$$
(17)
There are no correlations between $`𝐄`$ and $`𝐁`$.
For the experiments, the correlations of intensities,
$$C_{EE}(𝐫)=\langle |𝐄(𝐱+𝐫/2)|^2|𝐄(𝐱-𝐫/2)|^2\rangle /\langle |𝐄|^4\rangle $$
(18)
and similarly for the magnetic field, are also relevant. For ease of comparison with numerical data, we normalize this function by the second moment of the intensity so that $`C\to 1`$ as $`𝐫\to 0`$. For large $`𝐫`$ the correlations between intensities decay and the correlation function approaches $`\langle |𝐄|^2\rangle ^2/\langle |𝐄|^4\rangle `$. For Gaussian random field components, this ratio is $`3/5`$. Therefore, the correlation function with the above normalization becomes
$$C_{EE}(𝐫)=\frac{4}{15}\left[f_{\perp }(kr)\right]^2+\frac{2}{15}\left[f_{\parallel }(kr)\right]^2+\frac{9}{15}$$
(19)
and similarly for the magnetic field. For the intensity correlation function of a 3-d scalar field the correlation function becomes
$$C_{ss}(𝐫)=\frac{2}{3}\left(\frac{\mathrm{sin}kr}{kr}\right)^2+\frac{1}{3},$$
(20)
where the asymptotic value of $`1/3`$ reflects the ratio of the square of the second moment to the fourth moment as expected for a Gaussian distribution .
To test these predictions we measured the field distributions in two- and three-dimensional microwave billiards. We start with the discussion of the results in a resonator of the shape of a quarter stadium billiard. We measured the microwave transition amplitudes between two antennas, one kept fixed, the other moved around to probe the spatial distribution of the wave functions. At an eigenfrequency such measurements yield directly the electric field strength $`𝐄(x)`$ as a function of the position. Details of the experiment are described elsewhere . For microwave frequencies below $`\nu _{max}=c/2d`$, where $`d`$ is the height of the resonator, only TM modes are excited and the electric field has a single component $`E_z`$. For this component, Berry’s arguments give a spatial autocorrelation function
$$C_E(𝐫)=\langle E_z(𝐱+𝐫/2)E_z(𝐱-𝐫/2)\rangle \sim J_0(kr)$$
(21)
Fig. 2(a) shows the experimental autocorrelation function, normalized to $`C_E(0)=1`$. It was obtained by superimposing the results from the 20 lowest eigenfrequencies of the quarter stadium. Apart from a discrepancy of about 10 percent in the wavelength of the oscillations, the experiment reproduces the prediction of Eq. (21) perfectly.
The autocorrelation function of the field intensities becomes, in the 2-d case,
$`C_{EE}(𝐫)`$ $`=`$ $`\langle |E_z(𝐱+𝐫/2)|^2|E_z(𝐱-𝐫/2)|^2\rangle `$ (22)
$`\sim `$ $`{\displaystyle \frac{2}{3}}\left[J_0(kr)\right]^2+{\displaystyle \frac{1}{3}}.`$ (23)
This correlation function, which has also been studied by Sridhar et al. , is shown in Fig. 2(b) for the same data set that entered Fig. 2(a). As in Fig. 2(a), we note a small difference in the wavelength of the oscillations. This has not been observed in the experiments of Sridhar et al. or in the stadium wave functions in Ref. , but it has appeared for wave functions of an octagon billiard on a surface of constant negative curvature , where it has been attributed to anisotropies in the wave functions. In the present case the discrepancy may be caused by higher-order corrections to the semiclassical predictions, since the wave functions are not very far into the semiclassical regime: typical wave lengths for the wave functions that entered the analysis of Fig. 2 are about 0.2–0.5 stadium widths. Assuming corrections to be of order $`1/k^2`$ with a prefactor of order 1, as suggested by the analysis of , the deviations can be estimated to be about 10%, as observed.
Turning to the three-dimensional microwave cavities, note that Maxwell’s equations can no longer be reduced to a scalar wave equation, so that effects due to the vector properties of the electromagnetic field can become essential. The field distributions in a cavity of the shape of a three-dimensional Sinai billiard were mapped by means of the perturbing bead method . The technique uses the fact that a spherical metallic bead in the resonator shifts the eigenfrequency of a resonance by an amount $`\mathrm{\Delta }\nu `$ proportional to $`-2𝐄^2+𝐁^2`$, where $`𝐄`$ and $`𝐁`$ are the fields at the position of the bead. Since the electric and the magnetic field components are uncorrelated, the spatial autocorrelation function of the frequency shift,
$$C_{\mathrm{\Delta }\nu }(r)=\langle \mathrm{\Delta }\nu (𝐱+𝐫/2)\mathrm{\Delta }\nu (𝐱-𝐫/2)\rangle $$
(24)
is given by the intensity autocorrelation function defined in Eq. (19), up to an off-set resulting from the fact that $`C_{\mathrm{\Delta }\nu }(r)`$ does not vanish in the limit $`r\to \infty `$. This off-set is easily calculated. Using again that for Gaussian distributions the fourth moment amounts to three times the square of the second moment, we find $`C_{\mathrm{\Delta }\nu }(0)/C_{\mathrm{\Delta }\nu }(\infty )=13/3`$. After normalization we thus have
$$C_{\mathrm{\Delta }\nu }(r)=\frac{20}{39}\left[f_{\perp }(kr)\right]^2+\frac{10}{39}\left[f_{\parallel }(kr)\right]^2+\frac{3}{13}$$
(25)
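The ratio $`13/3`$ quoted above follows from Gaussian statistics alone and is easy to confirm numerically; a minimal sketch (our addition), with $`𝐄`$ and $`𝐁`$ modelled as independent three-component Gaussian vectors:

```python
import numpy as np

# Delta nu ~ -2 E^2 + B^2 with six independent Gaussian components;
# the ratio <Delta nu^2> / <Delta nu>^2 should tend to 13/3.
rng = np.random.default_rng(1)
F = rng.normal(size=(6, 10**6))
dnu = -2.0 * (F[:3] ** 2).sum(axis=0) + (F[3:] ** 2).sum(axis=0)
print(np.mean(dnu**2) / np.mean(dnu) ** 2)   # -> 4.333... = 13/3
```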
The experimental results in Fig. 3(a) are in good agreement with the theoretical prediction of Eq. (25) for $`kr`$-values below 6. The hole in the histogram for $`kr<0.2`$ reflects the minimum grid size used in the bead measurement. In Fig. 3(b) the data for $`C_{\mathrm{\Delta }\nu }(r)-C_{\mathrm{\Delta }\nu }(\infty )`$ are replotted on a logarithmic scale to emphasize the minima. Also shown is the scalar correlator (20). Note that the scalar function has zeroes but the spectral correlator does not. The absence of zeroes or, in the case of finite resolution, deep minima in the experimental data is thus clear evidence for the influence of the polarizations.
In summary, we have shown that longitudinal and transversal correlation functions of electromagnetic waves in chaotic cavities differ because the waves are transversal. The experiments verified the effect for the intensities; more direct studies of the fields themselves are desirable. We expect that also in other situations with non-scalar wave fields, as for instance acoustics in anisotropic media or hydrodynamic waves, the correlations will be characterized by tensors which depend on the character of the modes and the directions.
This work was partially supported by the Deutsche Forschungsgemeinschaft through Sonderforschungsbereich 185 ’Nichtlineare Dynamik’.
# Formation of multiple winding topological defects in the early universe
## I introduction
It is believed that topological defects are produced during certain types of cosmological phase transitions and may play an important role in the course of cosmic evolution. They can solve some of the unsolved cosmological problems. In particular, topological defects provide a natural and attractive model for inflation and may explain the source of the baryon asymmetry in our universe.
Inflation is the most promising candidate for solving the problems of the standard Big Bang theory, and a realistic model of inflation is a very important subject in cosmology. Topological inflation has been proposed by Vilenkin and Linde. In this inflation scenario, the energy density which drives the inflationary expansion of the universe is provided by the symmetric state within the defect core, where the false vacuum energy is trapped by the topological constraint. If the pattern of symmetry breaking in the particle physics theory satisfies the condition for the production of a topological defect, the formation of some kind of defect at the phase transition is inevitable. Then one of the necessary conditions for inflation, namely that the inflaton field be in a state with sufficient vacuum energy density, is realized without any additional constraint. This is the advantage of the topological inflation scenario, since in conventional inflation models a fine-tuning of the initial state is required in order that the necessary conditions for inflation be satisfied. However, in order to realize the condition that the core length scale is long enough to cover the horizon scale, the symmetry breaking energy scale for the defect formation, $`\eta `$, should be larger than the Planck scale:
$$\eta \gtrsim M_{pl}.$$
(1)
Therefore, in order to understand topological inflation in detail, we must work very close to the Planck scale, at which our classical field theories will not be valid. When multiple winding defects are employed, however, the energy scale at which the inflation occurs is decreased and the constraint on $`\eta `$ is relaxed, since a higher winding defect has a thicker core length scale than a unit winding defect. The constraint on $`\eta `$ when the winding number of the string, $`n`$, is included has been derived numerically as
$$\eta \gtrsim 0.16M_{pl}\times n^{-0.56}.$$
(2)
Hence, if a string with winding number $`n\gtrsim 3\times 10^3`$ is produced, it is possible that topological inflation occurs at the GUT scale.
The baryon asymmetry problem is another important subject. Since sphaleron transitions should erase any baryon asymmetry produced before such processes become negligible, unless a difference between lepton number and baryon number exists, baryon number generation at the electroweak scale seems to be the most conventional scenario of baryogenesis at present. The first proposed electroweak baryogenesis scenario relies on having a strongly first order phase transition, so that the deviation from the thermal equilibrium state, one of the necessary conditions for baryogenesis, is achieved by the propagation of nucleated bubbles in the plasma. However, it seems that a first order electroweak phase transition is difficult to establish in the standard model. The electroweak baryogenesis scenario using electroweak strings was then proposed by Brandenberger and Davis. In this scenario the out-of-equilibrium condition is provided by the collapse of strings. Thus string baryogenesis has the advantage that it works effectively whether the electroweak phase transition is of first order or not. Since the electroweak string in the standard model is topologically unstable, however, its formation probability is too small to account for the observed amount of baryon number. Therefore topological defects associated with the electroweak symmetry breaking are necessary for electroweak baryogenesis. Recently Soni has suggested a new scenario for the generation of the baryon asymmetry using strings produced at the electroweak energy scale. He has pointed out that the sphaleron energy in the presence of a string with a few units of winding number can become negative. Then, when the strings within which sphalerons are bound decay, baryon number is produced. In this scenario thermal equilibrium is violated by the string-sphaleron system which is left below the sphaleron suppression temperature.
In order to know to what extent the required energy scale $`\eta `$ can be lowered in the topological inflation scenario, we have to estimate the possibility of multiple winding defect formation. Also, in order to determine how much baryon asymmetry is produced in the electroweak baryogenesis scenario by the sphaleron bound state on the string, we have to calculate the formation probability of multiple winding strings. For these reasons, in this Letter we consider the realization of multiple winding topological defect configurations. We employ two kinds of defects, namely strings and monopoles. For the string case, we investigate the phase distribution of the Higgs field. For the monopole, the distribution of the gauge flux is considered. In both cases, the formation probability of multiple winding defects is estimated quantitatively and the existence of such defects is confirmed.
## II string
First let us consider the breaking of a local $`U(1)`$-symmetry in the abelian-Higgs model with Lagrangian:
$$\mathcal{L}=-\frac{1}{4}F^{\mu \nu }F_{\mu \nu }-\frac{1}{2}(D^\mu \varphi )^{\ast }(D_\mu \varphi )-\frac{1}{8}\lambda (\varphi ^{\ast }\varphi -\eta ^2)^2,$$
(3)
where $`\varphi `$ is a complex scalar field and the covariant derivative is given by $`D_\mu =\partial _\mu -ieA_\mu `$, with $`A_\mu `$ a gauge vector field and $`e`$ the gauge coupling constant. $`F_{\mu \nu }`$ is the antisymmetric tensor defined by $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu `$. It is well known that there is a string solution called the Nielsen-Olesen vortex line in this model. In the Lorentz gauge, the Higgs field configuration far from the core of the string can be written as
$$\varphi \simeq \eta e^{in\theta }.$$
(4)
The winding number is a strictly conserved quantity, and the total phase difference of the Higgs field around the string is $`2\pi n`$. In general, as $`n`$ becomes larger, the line energy density of the string increases and the core width scale becomes fatter. It has been demonstrated both numerically and analytically that multiple winding strings $`(|n|>1)`$ are stable when $`\lambda /e^2<1`$ and unstable in the opposite case. In the former case, a number of strings may get together and coalesce into one string due to energetic favorableness. On the other hand, in the latter case, a multiple winding string can break up into $`|n|`$ strings with unit winding number.
Before estimating the formation probability of multiple winding strings, let us briefly summarize the estimation method for the unit winding string based on the conventional Kibble mechanism. In a thermal phase transition, the field has a typical length scale, the correlation scale $`\xi `$. It defines the size of the regions within which the values of the fields are homogeneous and independent of those in other regions. When the cosmic temperature decreases sufficiently and the ground state of the Higgs field becomes the true vacuum state, the amplitude of the Higgs field $`|\varphi |`$ is the same almost everywhere, while its phase $`\theta `$ varies on the correlation scale. The physical space can then be regarded as divided into correlated volumes, and the phase of the Higgs field takes a random value in each correlated region, so that the phase distribution has a domain-like structure at the end of the cosmological phase transition. Thus, in the context of the Kibble mechanism, the formation probability of the string can be estimated as follows. For simplicity, a 2-dimensional slice of the 3-dimensional $`\theta `$ distribution is considered, so that the formation of vortices can be analyzed instead of strings.
1. Divide the plane into 2-dimensional domains whose typical size is equal to the correlation length of the Higgs field $`\xi `$.
2. Assign the phase of the Higgs field randomly to one representative point of each domain.
3. Interpolate the phase between two representative points of neighboring domains so that the gradient energy of the Higgs field takes minimum value (geodesic rule).
Then we can count the total phase change along any closed loop on the plane and calculate how much winding number exists inside the loop, so that the formation probability of the vortex can be estimated. Usually it is assumed that at most three different domains meet at a boundary point. This means that the closed loop used to count the phase change can always be identified with the triangle whose corners correspond to the three representative points of these three domains. Thus the above procedures (A) and (B) can be expressed as: divide the plane into regular triangles and assign the phase of the Higgs field randomly to each vertex point where six triangles join. The geodesic rule implies that the difference of the phase between two neighboring points is less than $`\pi `$. Therefore, when the triangle division is imposed, the total difference of the phase along the circumferential edge of the triangle is at most $`2\pi `$ because of the phase continuity. As a result, the winding number cannot exceed unity, and multiple winding vortices never appear in this situation.
However, we cannot say that the formation probability of multiple winding vortices equals zero, since this estimation is based on oversimplified assumptions. In order to construct a method which can detect the existence of multiple winding number, we modify the arrangement of the points where the phase of the Higgs field is allocated. Since as long as the plane is divided into triangles the maximum winding number must be one, the plane should be divided by polygons other than triangles, so that the total phase difference along the periphery can exceed $`2\pi `$. It would be reasonable to choose the side length of the polygon to be $`\xi `$, since each vertex can be considered to represent a correlated domain in a manner similar to the triangle case. When the diameter scale of the polygon is $`R_s`$, the total length of the polygon periphery will be $`\pi R_s`$. Then the number of vertices is $`\pi R_s/\xi `$ and the largest possible phase change is $`\pi ^2R_s/\xi `$. In the usual triangle case, $`\pi R_s/\xi =3`$, so that $`R_s\simeq \xi `$. The revised winding number counting procedure can be expressed as:
1. Divide the plane into polygons whose diameter scale is equal to $`R_s`$ and vertex number is $`\pi R_s/\xi `$.
2. Assign the phase of the Higgs field randomly to each vertex of the polygon.
The third step is identical to the original version. Then we can count the winding number and estimate the formation probability of multiple winding vortices. In actual calculations, we consider one polygon, assign a phase of the Higgs field ($`0\le \theta <2\pi `$) randomly to each vertex of this polygon, and count the winding number along its periphery. We repeat this process $`10^8`$ times so that the probability distribution of string formation for each winding number can be obtained. The result of the calculations for various values of $`\pi R_s/\xi `$ is shown in FIG. 2.
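A minimal numerical sketch of this counting procedure (our addition; the vertex number and the number of trials are arbitrary choices made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def winding(vertices):
    # step (B): random phases at the polygon vertices
    theta = rng.uniform(0.0, 2.0 * np.pi, vertices)
    # step (C), geodesic rule: map each phase step into [-pi, pi)
    d = np.diff(np.append(theta, theta[0]))
    d = (d + np.pi) % (2.0 * np.pi) - np.pi
    # the loop is closed, so the total change is an integer multiple of 2 pi
    return int(round(d.sum() / (2.0 * np.pi)))

vertices, trials = 16, 10**5           # vertices = pi R_s / xi
n = np.array([winding(vertices) for _ in range(trials)])
for k in range(1, 5):
    print(k, np.mean(np.abs(n) == k))  # formation probability of winding k
```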
In our method, the numerical value of the multiple winding vortex formation probability depends on $`R_s/\xi `$. It is reasonable to take for $`\xi `$ the correlation scale of the Higgs field at the Ginzburg temperature $`T_G`$, when the phase transition terminates and defects can no longer be erased by thermal fluctuations:
$$\xi \simeq \frac{1}{T_G},$$
(5)
because the correlation length of the massless field corresponding to thermal fluctuations is given by the inverse of the temperature. The most natural estimate of $`R_s`$ is that it is comparable to the core diameter of the string. Under this assumption, the number of strings which can be produced inside each polygon is at most one, since many strings cannot exist within the scale of the string core; in other words, all the winding number one polygon contains must belong to a single string. Thus we do not mistake two or more strings for one string if we take $`R_s`$ to be the core diameter of the string. The string core diameter scale is given by the Compton wavelength of the Higgs field as
$$R_s=\frac{1}{m_H(T_G)}\simeq \frac{1}{\lambda T_G},$$
(6)
at $`T_G`$, where $`m_H`$ is the mass of the Higgs field. Therefore the maximum winding number the string can have will be
$$n_{max}\simeq \left[\frac{\pi ^2R_s}{2\pi \xi }\right]\simeq \left[\frac{\pi }{2\lambda }\right],$$
(7)
where $`[\cdot ]`$ denotes the Gauss symbol (the integer part). This result means that multiple winding strings can be produced when $`\lambda `$ is smaller than $`\pi /4`$.
When $`\lambda \sim 1`$, the formation of multiple winding strings seems to be impossible even in the revised method. There is, however, another possibility: the correlation length of the Higgs field can fluctuate, since in reality the size of the domains within which the value of the field is homogeneous may not be constant throughout the universe. As a toy model, we assume that the domain size is distributed around its averaged value $`\xi `$ with a probability distribution function of Gaussian form whose dispersion is given by $`\sigma `$. Then we calculate the distribution function of $`\pi R_s/\xi `$, $`P(\pi R_s/\xi )`$, and estimate the formation probability of the string for each winding number by summing up those for fixed values of $`\pi R_s/\xi `$ shown in FIG. 2, with weight $`P(\pi R_s/\xi )`$. Here we set $`\xi =\sigma =R_s`$. Note that the larger $`\sigma `$ becomes, the more large domains which contain no string tend to be produced. The resulting formation probability $`P_s(n)`$ of the vortex for $`n=1`$–$`4`$ per domain is shown in TABLE I. As $`n`$ becomes larger, the formation probability of multiple winding vortices decreases exponentially.
## III monopole
The formation probability of multiple winding strings depends on the ratio of the circumference of the string to the correlation length of the Higgs field. Also in the case of the monopole, a similar consideration can be applied. The formation probability of multiple winding monopoles depends on the ratio of the surface area of the monopole to the correlation area of the Higgs field, which is given by $`4\pi R_m^2/\pi \xi ^2`$, where $`R_m`$ is the core diameter of the monopole. Therefore in principle we can calculate the formation probability of multiple winding monopoles for various values of $`4R_m^2/\xi ^2`$ in a manner similar to that for strings. However, since it is complicated to interpolate the phase of the Higgs field between two neighboring vertices and count the winding number, here we resort to an easier method.
In the case of a local monopole, not only the Higgs field but also the gauge field appears in the theory. Then, instead of the phase of the Higgs field, we assign the magnetic flux of the gauge field to each domain. When the total flux passing through a closed surface is not zero, there must be a monopole or an anti-monopole inside this surface. Thus the gauge flux can play a role similar to that of the winding number. The simplest monopole solution, the ’t Hooft-Polyakov solution for the model in which the $`SO(3)`$ symmetry is broken to $`U(1)`$, has an overall magnetic flux whose magnitude is
$$\mathrm{\Phi }_B=\frac{4\pi }{e},$$
(8)
which corresponds to unit winding number in the Higgs field case. The ’t Hooft-Polyakov monopole with multiple winding number is known to be stable in the Prasad-Sommerfield limit. In actual calculations, we assign only discrete values of the magnetic flux, $`\pm \pi /e`$, for simplicity. Then the total amount of flux through the closed surface is summed up and the winding number can be calculated. The formation probability of the multiple winding monopole is then given by
$$P_m(n,l)=\left({}_{l}C_{\frac{l}{2}+2n}+{}_{l}C_{\frac{l}{2}+2n+1}\right)\left(\frac{1}{2}\right)^{l-1},$$
(9)
where $`n`$ is the winding number and $`l=4R_{m}^{}{}_{}{}^{2}/\xi ^2`$. We omit the fractional part of the winding number which appears because of the absence of the interpolation process. Since our method of the magnetic flux assignment reproduces the formation probability of a unit winding monopole when $`l=4`$, the approximation we have employed should be reasonable. The analytic result for various values of $`4R_{m}^{}{}_{}{}^{2}/\xi ^2`$ using the equation (9) is shown in FIG. 2.
Similarly to the string case, $`R_m`$ can be estimated to be $`(\lambda T_G)^{-1}`$ and $`\xi `$ equals $`T_G^{-1}`$. Then the ratio of the surface area of the monopole to the correlation area of the Higgs field is given by
$$\frac{4R_m^2}{\xi ^2}\simeq \frac{4}{\lambda ^2}.$$
(10)
Also in this case, the smaller $`\lambda `$ becomes, the more multiple winding monopoles are produced. Moreover, the spatial variation of the correlation length enables the realization of high winding configurations in the same way as for the string. The resulting formation probability $`P_m(n)`$ of the monopole for $`n=1`$–$`4`$ per domain is shown in TABLE II.
## IV conclusion
In the present Letter we have calculated the formation probability of multiple winding topological defects by means of the Kibble mechanism. Since we have taken into account the fact that the core size scale of the defect can be larger than the correlation scale of the fields, the formation probabilities depend on the ratio of these two scales. Multiple winding topological defects can be produced when the self-coupling constant of the Higgs field, $`\lambda `$, which determines the defect core thickness, is much less than unity. The smaller $`\lambda `$ becomes, the more the formation probability of multiple winding defects and the maximum winding number the defect can carry increase.
This result has the following cosmological implications. First, it may be possible that topological inflation occurs at the GUT energy scale by means of a multiple winding string produced during the GUT phase transition when $`\lambda \lesssim 4\times 10^{-4}`$, as can be calculated from equations (2) and (7). For the same value of $`\lambda `$, the maximum winding number the monopole can have is larger than that of the string. Therefore, in the case of the monopole, topological inflation may occur for larger values of $`\lambda `$. Secondly, in the context of electroweak baryogenesis, the sphaleron bound state scenario using strings whose winding number is $`2`$–$`3`$ will be promising. Since the formation probability of the double winding string is about one third that of a unit winding string when $`\lambda \sim 10^{-1}`$, it might be possible to explain the observed amount of the baryon asymmetry. Further quantitative analysis will be needed.
Our method is not the only one which can improve the simple estimation of the defect formation probability. For example, not only the correlation length scale of the Higgs field but also the core thickness scale of the topological defect may vary, since the static and highly symmetric solution for the defect configuration may not always apply to the actual situation. Another promising modification is the relaxation of the geodesic rule. It is an oversimplified picture that the value of the field should be interpolated through the shortest path on the vacuum manifold, and there is a possibility that larger gradients exist between two regions.
In addition to such revisions, the interaction between strings improves the situation considerably. We have considered only the moment of string production and neglected the dynamics of the string after its formation when we calculated the value of $`R_s`$. In some ranges of the parameters, however, an attractive force operates on strings, so that the winding number can accumulate. This corresponds to the case when $`R_s/\xi `$ is enormous. Possibly a single string carries all the winding number within the horizon scale. Then the topological inflation scenario might work successfully even when $`\lambda `$ is not so small. Note that the interaction of the string with the surrounding plasma will also affect the dynamical evolution. The drag force can aid the accumulation process because it dissipates the energy of a string pair and allows a bound state to be stable.
###### Acknowledgements.
T.O. is grateful to Professor Katsuhiko Sato and Professor Yasushi Suto for their encouragement and to Masahide Yamaguchi for discussion. M.N. thanks Leandros Perivolaropoulos for his comment at the workshop in Les Houches.
| $`n`$ | $`P_s(n)`$ |
| --- | --- |
| $`1`$ | $`2.102\times 10^{-1}`$ |
| $`2`$ | $`8.36\times 10^{-4}`$ |
| $`3`$ | $`4.8\times 10^{-7}`$ |
| $`4`$ | $`8\times 10^{-11}`$ |
| $`n`$ | $`P_m(n)`$ |
| --- | --- |
| $`1`$ | $`4.285\times 10^{-2}`$ |
| $`2`$ | $`7.36\times 10^{-5}`$ |
| $`3`$ | $`3.4\times 10^{-8}`$ |
| $`4`$ | $`7\times 10^{-12}`$ |
# Supersymmetry breaking and loop corrections at the end of inflation
## I Introduction
Inflation solves many of the outstanding problems of the standard cosmology . Among others, it provides a mechanism for the generation of primordial cosmological perturbations, which are responsible for the observed temperature anisotropies in the cosmic microwave background (CMB) and for the large-scale structure (LSS) in our Universe. Future experiments, such as MAP (http://map.gsfc.nasa.gov) and Planck (http://astro.estec.esa.nl/SA-general/Projects/Planck) (resp. 2dF (http://meteor.anu.edu.au/ colless/2dF) and SDSS (http://www.astro.princeton.edu/BBOOK)), will measure with great precision the power spectrum of the CMB (resp. LSS), and draw sharp constraints on the potential of the inflaton, the scalar field driving inflation.
Successful implementation of the inflationary picture requires a long enough era of accelerated expansion on one hand, and a correct order of magnitude for primordial perturbations on the other hand. In the simplest versions of single-field inflation, the corresponding constraints on the inflaton potential are unrealistic from a particle physics point of view. Indeed, coupling parameters must be fine-tuned to very small values, while the inflaton must be of the order of the Planck mass during inflation. These problems can be avoided in the so-called hybrid models (see also Ref. ), in which the inflaton couples to some other scalar field(s). Hybrid inflation arises naturally in supersymmetric theories (for a review of inflation in supersymmetric theories see Ref. ). Supersymmetry provides the flatness of the scalar potential required for inflation. The inflaton field, which is usually a scalar singlet, couples to Higgs superfield(s) charged under some gauge group G. At the end of inflation, the Higgs fields acquire a non-vanishing vacuum expectation value (VEV); G is spontaneously broken. Such scenarios arise naturally in supersymmetric grand unified theories (SUSY GUTs) . The non-zero vacuum energy density during inflation can be due either to the VEV of an F-term or to that of a D-term .
Models of either type share the following features. The scalar potential has two minima: one local minimum for values of the inflaton field $`|S|`$ greater than some critical value $`s_c`$, with the Higgs fields at zero, and one global supersymmetric minimum at $`S=0`$ with non-zero Higgs VEVs. The fields are usually assumed to have chaotic initial conditions , with an initial value $`|S|\gg s_c`$ for the inflaton. The Higgs fields rapidly settle down to the local minimum of the potential (the problem of initial conditions in inflation is actually not trivial; we shall not discuss it here, see Refs. for possible solutions); then the universe is dominated by a non-vanishing vacuum energy density, and supersymmetry is broken. This in turn leads to quantum corrections to the potential which lift its complete flatness . The slow-roll conditions are satisfied and inflation takes place until $`|S|=s_c`$ or slightly before, depending on the model. When $`|S|`$ falls below $`s_c`$, the Higgs fields start to acquire non-vanishing VEVs. All fields then oscillate until they stabilise at the global supersymmetric minimum. These oscillations are crucial for understanding the process of reheating and particle production in the early universe.
The important point is that, as long as the fields have not settled down at the global minimum, supersymmetry remains broken. When the fields oscillate, the system passes through supersymmetric configurations only at isolated instants. The breaking of supersymmetry is best seen by looking at the mass spectrum: the bosonic and fermionic masses are non-degenerate. This has important consequences. It implies that loop corrections to the effective potential are non-zero not only during inflation, but also during the entire oscillatory regime. The corrections are crucial for obtaining a continuous description of the evolution of the fields. They will be useful for the simulation of preheating, for the calculation of the number density of cosmic strings, for the study of leptogenesis at the end of inflation , and for the derivation of the primordial spectrum during intermediate stages in supersymmetric multiple inflationary models .
In this Letter, we calculate the one-loop corrections to the potential along the inflaton direction. They are the most important and affect the dynamics of the inflaton field, which in turn affects the dynamics of the Higgs fields. They can be calculated by applying the Coleman-Weinberg formula :
$$\mathrm{\Delta }V=\frac{1}{64\pi ^2}\underset{i}{\sum }(-1)^Fm_i^4\mathrm{ln}(m_i^2/\mathrm{\Lambda }^2),$$
(1)
where $`(-1)^F`$ shows that bosons and fermions make opposite contributions; it is $`+1`$ for the bosonic degrees of freedom and $`-1`$ for the fermionic ones. The sum runs over each degree of freedom $`i`$ with mass $`m_i`$, and $`\mathrm{\Lambda }`$ is a renormalization scale. We thus determine the particle spectrum for each value of the inflaton field $`|S|`$, with the other fields at the values which minimize the potential for this $`|S|`$. We consider the standard models of F- and D-term inflation. The particle spectrum is found to be very rich and interesting. During inflation, the non-zero quantum corrections are due to a boson-fermion mass splitting in the Higgs sector. When $`|S|`$ falls below $`s_c`$, since the Higgs VEVs are non-zero, there is also a mass splitting between the gauge and gaugino fields.
## II F-term inflation
The simplest superpotential which leads to F-term inflation is given by $`W=\alpha S\overline{\mathrm{\Phi }}\mathrm{\Phi }-\mu ^2S`$, where $`S`$ is a scalar singlet and ($`\mathrm{\Phi }`$, $`\overline{\mathrm{\Phi }}`$) are Higgs superfields in complex conjugate representations of some gauge group G . $`\alpha `$ and $`\mu `$ are two constants which are taken to be positive, and $`\frac{\mu }{\sqrt{\alpha }}`$ sets the G symmetry breaking scale. This superpotential is consistent with a continuous R-symmetry under which the fields transform as $`S\to e^{i\gamma }S`$, $`\mathrm{\Phi }\to e^{i\gamma }\mathrm{\Phi }`$, $`\overline{\mathrm{\Phi }}\to e^{-i\gamma }\overline{\mathrm{\Phi }}`$ and $`W\to e^{i\gamma }W`$. It is often used in SUSY GUT model building. The scalar potential reads:
$`V`$ $`=`$ $`\alpha ^2|S|^2(|\overline{\mathrm{\Phi }}|^2+|\mathrm{\Phi }|^2)+|\alpha \overline{\mathrm{\Phi }}\mathrm{\Phi }-\mu ^2|^2`$ (2)
$`+`$ $`{\displaystyle \frac{g^2}{2}}(|\overline{\mathrm{\Phi }}|^2-|\mathrm{\Phi }|^2)^2,`$ (3)
where we have kept the same notation for the superfields and their bosonic components. There is a flat direction of degenerate local minima $`|S|\equiv s>s_c=\frac{\mu }{\sqrt{\alpha }}`$, $`\overline{\mathrm{\Phi }}=\mathrm{\Phi }=0`$, for which $`V=\mu ^4`$, and a global supersymmetric minimum at $`S=0`$, $`|\mathrm{\Phi }|=|\overline{\mathrm{\Phi }}|=\frac{\mu }{\sqrt{\alpha }}`$, $`\mathrm{arg}(\mathrm{\Phi })+\mathrm{arg}(\overline{\mathrm{\Phi }})=0`$, in which the G symmetry is spontaneously broken.
We assume that the problem of initial conditions has been solved, and we investigate the behavior of the system already settled in the local minimum of the potential. The universe is dominated by the vacuum energy density $`V=\mu ^4`$ and supersymmetry is broken. The bosonic and fermionic masses are thus non-degenerate. The mass splitting occurs in the $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ sector. Explicitly, there are two complex scalars with masses squared $`m_1^2=\alpha ^2s^2+\mu ^2\alpha `$ and $`m_2^2=\alpha ^2s^2-\mu ^2\alpha `$ (the mass eigenstates are linear combinations of the $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ fields), and two Weyl fermions with masses $`m^2=\alpha ^2s^2`$. This spectrum gives rise to quantum corrections to the effective potential, which can be calculated from Eq. (1) and lift the complete flatness of the $`s`$ direction. When $`s\gg s_c`$, they have a well-known asymptotic form :
$$\mathrm{\Delta }V=\frac{\alpha ^2\mu ^4}{16\pi ^2}\left(\mathrm{ln}\frac{\alpha ^2s^2}{\mathrm{\Lambda }^2}+\frac{3}{2}\right).$$
(4)
Therefore, the $`S`$ field can roll down the potential. The slow roll conditions are satisfied and inflation takes place. When $`s`$ falls below $`s_c`$, $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ are destabilized, and all fields start to oscillate.
During inflation, the Higgs fields $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ have zero VEVs, and since the inflaton $`S`$ is assumed to be a gauge singlet, the gauge bosons and gauginos have zero masses; the only contribution to $`\mathrm{\Delta }V`$ comes from the mass splitting in the $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ sector. When $`s`$ falls below $`s_c`$, the VEVs of $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ start to be non-zero. Thus the corresponding gauge and gaugino fields, as well as the $`S`$ field, also acquire non-zero masses. The mass splitting then occurs both in the Higgs and in the gauge sectors.
From now on, we shall assume that the VEVs of $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ break only a U(1) gauge symmetry, and that the representation of $`\mathrm{\Phi }`$ is complex one-dimensional. For arbitrary $`n`$-dimensional complex conjugate representations which break a gauge group G down to a subgroup H of G, when the Higgs VEVs are non-zero, there are $`k=\mathrm{dim}(G)-\mathrm{dim}(H)`$ massive gauge fields and $`4n-k+2`$ massive real scalar fields. For any value of $`s`$, the potential is minimised along the D-flat direction $`|\overline{\mathrm{\Phi }}|=|\mathrm{\Phi }|=\widehat{\varphi }`$, and for $`\mathrm{arg}(\mathrm{\Phi })=-\mathrm{arg}(\overline{\mathrm{\Phi }})=\theta `$. Therefore, we expand the fields as follows:
$$\mathrm{\Phi }=\widehat{\varphi }e^{i\theta }+\varphi _1,\qquad \overline{\mathrm{\Phi }}=\widehat{\varphi }e^{-i\theta }+\varphi _2,$$
(5)
where $`\varphi _1`$ and $`\varphi _2`$ are complex fields which represent the quantum fluctuations of the $`\mathrm{\Phi }`$ and $`\overline{\mathrm{\Phi }}`$ fields. $`\widehat{\varphi }=0`$ for $`s\ge s_c`$ and $`\widehat{\varphi }=\sqrt{\frac{\mu ^2-\alpha s^2}{\alpha }}`$ for $`s\le s_c`$. When $`s`$ falls below $`s_c`$, we find the following spectrum. There is a complex scalar field with squared mass $`m_S^2=2\alpha ^2\widehat{\varphi }^2`$. The Higgs mechanism gives rise to three real scalars with masses squared $`m_1^2=2\alpha ^2\widehat{\varphi }^2`$, $`m_2^2=2\alpha \mu ^2`$ and $`m_3^2=2\alpha ^2s^2+4g^2\widehat{\varphi }^2`$. The corresponding Higgs mass eigenstates are $`\mathrm{Re}(\varphi _1)+\mathrm{Re}(\varphi _2)`$, $`\mathrm{Im}(\varphi _1)+\mathrm{Im}(\varphi _2)`$ and $`\mathrm{Re}(\varphi _1)-\mathrm{Re}(\varphi _2)`$. The field $`\mathrm{Im}(\varphi _1)-\mathrm{Im}(\varphi _2)`$ is absorbed by the gauge field, which is now massive with mass squared $`m_A^2=4g^2\widehat{\varphi }^2`$.
The fermionic spectrum can be derived from the following parts of the Lagrangian:
$`\mathcal{L}_Y`$ $`=`$ $`\alpha (S\psi _1\psi _2+\mathrm{\Phi }\psi _S\psi _2+\overline{\mathrm{\Phi }}\psi _1\psi _S),`$ (6)
$`\mathcal{L}_g`$ $`=`$ $`i\sqrt{2}g(\stackrel{~}{\mathrm{\Lambda }}\psi _2\overline{\mathrm{\Phi }}^{\ast }-\stackrel{~}{\mathrm{\Lambda }}\psi _1\mathrm{\Phi }^{\ast })+\mathrm{h}.c.,`$ (7)
where $`\psi _1`$, $`\psi _2`$ and $`\psi _S`$ are the fermionic components of the Higgs and inflaton superfields. $`\stackrel{~}{\mathrm{\Lambda }}`$ is the gaugino. After diagonalizing the fermion mass matrix, we find that there are four Weyl fermions with masses:
$`m_{\psi _1^\pm }^2`$ $`=`$ $`2\alpha ^2\widehat{\varphi }^2+{\displaystyle \frac{\alpha ^2s^2}{2}}\pm {\displaystyle \frac{1}{2}}\alpha ^2s\sqrt{8\widehat{\varphi }^2+s^2},`$ (8)
$`m_{\psi _2^\pm }^2`$ $`=`$ $`4g^2\widehat{\varphi }^2+{\displaystyle \frac{\alpha ^2s^2}{2}}\pm {\displaystyle \frac{1}{2}}\alpha s\sqrt{16g^2\widehat{\varphi }^2+\alpha ^2s^2}.`$ (9)
The mass eigenstates are, respectively, linear combinations of the higgsinos and the inflatino (the fermionic component of the inflaton superfield), and linear combinations of the higgsinos and the gaugino. We summarize our results in Tables I and II. The one-loop corrected effective potential is $`V+\mathrm{\Delta }V(s)`$, where $`\mathrm{\Delta }V(s)`$ is given by Eq. (1). We check that the supertrace $`\mathrm{Str}M^2\equiv \sum _i(-1)^Fm_i^2`$ vanishes at all times , and that the corrections are continuous at $`s=s_c`$. The exact effective potential should be a smooth function of $`s`$, and independent of $`\mathrm{\Lambda }`$. In the one-loop approximation, $`\mathrm{\Lambda }`$ must be chosen so that the contribution of higher-order terms can be neglected. This is generally achieved with $`\mathrm{\Lambda }^2\simeq \alpha ^2s_c^2=\mu ^2\alpha `$ . Here, by imposing the continuity of the potential derivative at $`s=s_c`$, we find:
$$\mathrm{\Lambda }^2=e^ϵ\alpha ^2s_c^2,ϵ\equiv \frac{1}{2}-\frac{g^2\mathrm{ln}2}{\alpha ^2+g^2}.$$
(10)
When $`g\leq \alpha `$, the shape of $`\mathrm{\Delta }V(s)`$ is given in Fig. 1, and $`|\mathrm{\Delta }V(s)|\ll V`$. When $`g>\alpha `$, a higher mass scale $`g^2\widehat{\varphi }^2>\alpha ^2s_c^2`$ appears after symmetry breaking; thus, the above choice of $`\mathrm{\Lambda }^2`$ is no longer valid, and we find that the one-loop corrections exceed the tree-level potential; so, in the one-loop approximation, it is impossible to find an expression for $`\mathrm{\Delta }V(s)`$ valid around $`s_c`$.
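The vanishing supertrace can be verified directly from the spectrum quoted above. The following minimal numerical sketch is ours (not from the original analysis); the degeneracy counting — 2 real degrees of freedom per complex scalar and per Weyl fermion, 3 for the massive gauge boson — is the standard one and is our assumption here:

```python
import numpy as np

def str_m2(s, alpha, g, mu):
    """Str M^2 = sum_i (-1)^F m_i^2 below s_c, from the spectrum above."""
    phi2 = (mu**2 - alpha*s**2)/alpha                  # hat-phi squared
    bos = (2*(2*alpha**2*phi2)                         # complex scalar m_S^2
           + 2*alpha**2*phi2 + 2*alpha*mu**2           # m_1^2 + m_2^2
           + 2*alpha**2*s**2 + 4*g**2*phi2             # m_3^2
           + 3*(4*g**2*phi2))                          # massive gauge boson
    d1 = np.sqrt(8*phi2 + s**2)
    d2 = np.sqrt(16*g**2*phi2 + alpha**2*s**2)
    fer = sum(2*(2*alpha**2*phi2 + alpha**2*s**2/2 + sg*alpha**2*s*d1/2)
              + 2*(4*g**2*phi2 + alpha**2*s**2/2 + sg*alpha*s*d2/2)
              for sg in (+1.0, -1.0))
    return bos - fer

print(str_m2(0.3, 0.5, 0.2, 1.0))   # ~1e-16, i.e. zero up to rounding
```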
## III D-term inflation
A generic toy-model of D-term inflation, proposed in Refs. , involves three complex fields: a gauge singlet $`S`$, and two fields $`\mathrm{\Phi }_+`$ and $`\mathrm{\Phi }_{}`$ with charges $`+1`$ and $`1`$ under a $`U(1)`$ gauge symmetry. The superpotential is $`W=\lambda S\mathrm{\Phi }_+\mathrm{\Phi }_{}`$. This choice can be justified by a set of continuous R-symmetries or discrete symmetries. The scalar potential is:
$`V`$ $`=`$ $`\lambda ^2|S|^2(|\mathrm{\Phi }_+|^2+|\mathrm{\Phi }_{}|^2)+\lambda ^2|\mathrm{\Phi }_+|^2|\mathrm{\Phi }_{}|^2`$ (12)
$`+{\displaystyle \frac{g^2}{2}}(|\mathrm{\Phi }_+|^2-|\mathrm{\Phi }_{}|^2+\xi )^2,`$
where $`\xi `$ is a Fayet-Iliopoulos term. We suppose that $`\xi >0`$ (if this is not the case, the roles of $`\mathrm{\Phi }_+`$ and $`\mathrm{\Phi }_{}`$ are simply interchanged). There is a true supersymmetric vacuum at $`S=\mathrm{\Phi }_+=0`$, $`|\mathrm{\Phi }_{}|=\sqrt{\xi }`$, and a valley of local minima $`|S|\equiv s>s_c=g\sqrt{\xi }/\lambda `$, $`\mathrm{\Phi }_+=\mathrm{\Phi }_{}=0`$, in which the tree-level potential is flat, $`V=g^2\xi ^2/2`$, and the dynamics of $`s`$ is driven only by quantum corrections. In the false vacuum, ($`\mathrm{\Phi }_+`$, $`\mathrm{\Phi }_{}`$) have squared masses $`\lambda ^2s^2\pm g^2\xi `$, while their fermionic superpartners $`(\psi _+,\psi _{})`$ combine to form a Dirac spinor with mass $`\lambda s`$ (see Table III). The loop corrections are given by Eq. (1), and when $`s\gg s_c`$ they reduce to:
$$\mathrm{\Delta }V=\frac{g^4\xi ^2}{16\pi ^2}\left(\mathrm{ln}\frac{\lambda ^2s^2}{\mathrm{\Lambda }^2}+\frac{3}{2}\right).$$
(13)
When $`s`$ falls below $`s_c`$, $`\mathrm{\Phi }_{}`$ acquires a non-vanishing VEV $`\widehat{\varphi }e^{i\theta }`$, with $`\widehat{\varphi }=\sqrt{\xi -\frac{\lambda ^2}{g^2}s^2}`$, which breaks the $`U(1)`$ gauge symmetry. The mass splitting then also occurs in the gauge sector. Without loss of generality, we assume that $`\theta =0`$, and expand the Higgs field as $`\mathrm{\Phi }_{}=\widehat{\varphi }+\varphi _1`$. The real field $`\sqrt{2}\mathrm{Re}(\varphi _1)`$ has a squared mass $`2g^2\widehat{\varphi }^2`$, while the Goldstone boson $`\sqrt{2}\mathrm{Im}(\varphi _1)`$ is eaten up by the gauge boson, which becomes massive with $`m_A^2=2g^2\widehat{\varphi }^2`$. The masses for $`S`$ and $`\mathrm{\Phi }_+`$ are given in Table IV. The fermionic spectrum can be derived from the following parts of the supersymmetric Lagrangian:
$`\mathcal{L}_Y`$ $`=`$ $`\lambda (S\psi _+\psi _{}+\mathrm{\Phi }_+\psi _S\psi _{}+\mathrm{\Phi }_{}\psi _S\psi _+),`$ (14)
$`\mathcal{L}_g`$ $`=`$ $`i\sqrt{2}g(-\stackrel{~}{\mathrm{\Lambda }}\psi _{}\mathrm{\Phi }_{}^{*}+\stackrel{~}{\mathrm{\Lambda }}\psi _+\mathrm{\Phi }_+^{*})+\mathrm{h}.c.`$ (15)
We find that the gaugino, the inflatino and the higgsinos combine to form two Dirac spinors with masses given in Table IV. The one-loop effective potential reads:
$`V`$ $`=`$ $`\lambda ^2s^2|\mathrm{\Phi }_{}|^2+{\displaystyle \frac{g^2}{2}}(|\mathrm{\Phi }_{}|^2-\xi )^2+\mathrm{\Delta }V(s),`$ (16)
where $`\mathrm{\Delta }V(s)`$ is given by Eq. (1). The supertrace vanishes at any time. The potential is continuous at $`s=s_c`$, and so is its derivative if we take:
$$\mathrm{\Lambda }^2=e^ϵ\lambda ^2s_c^2,ϵ\equiv \frac{1}{2}+\frac{\mathrm{ln}2}{3}(1-\frac{\lambda ^2}{g^2}).$$
(17)
Then, for any choice of $`\lambda `$ and $`g`$, the corrections are small with respect to the tree-level potential, and have the shape given in Fig. 1.
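As a cross-check (a sketch of ours, not part of the original analysis), one can sum the Coleman-Weinberg contributions of the Table III spectrum numerically and verify that for $`s\gg s_c`$ they reduce to Eq. (13). The scheme-dependent constant of some conventions is omitted here so that the large-$`s`$ limit reproduces Eq. (13) exactly; that choice is ours:

```python
import numpy as np

def dV_cw(s, lam, g, xi, Lam2):
    """One-loop correction in the false vacuum: 2 real dof for each of
    Phi_+/Phi_- (masses lam^2 s^2 +/- g^2 xi) minus 4 dof for the Dirac
    fermion of mass lam*s."""
    a, b = lam**2 * s**2, g**2 * xi
    return (2*(a + b)**2*np.log((a + b)/Lam2)
            + 2*(a - b)**2*np.log((a - b)/Lam2)
            - 4*a**2*np.log(a/Lam2)) / (64*np.pi**2)

lam, g, xi = 0.1, 0.05, 1.0
s_c = g*np.sqrt(xi)/lam
Lam2 = lam**2 * s_c**2
for s in (3*s_c, 10*s_c, 30*s_c):
    a = lam**2 * s**2
    eq13 = g**4*xi**2/(16*np.pi**2) * (np.log(a/Lam2) + 1.5)
    print(s/s_c, dV_cw(s, lam, g, xi, Lam2), eq13)   # converge as s grows
```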
Let us now comment on the importance of the one-loop corrections to the first and second derivatives at the end of inflation. Obviously, they are dominant at the very beginning of the symmetry breaking. At this time, it is important to know $`\partial \mathrm{\Delta }V/\partial s`$ exactly, in order to characterize the emergence of an effective Higgs background (from the coarse-graining of large-scale quantum fluctuations). After this stage, when the Higgs field(s) start to grow, the loop corrections will affect $`\partial V/\partial s`$ and $`\partial ^2V/\partial s^2`$ only by a few percent<sup>\**</sup><sup>\**</sup>\**For instance, in the D-term case, when the inflaton stabilizes around zero, its oscillation frequency is usually calculated from the tree-level effective mass: $`\frac{\partial ^2V}{\partial s^2}=2\lambda ^2|\mathrm{\Phi }_{}|^2=2\lambda ^2\xi .`$ As can be seen from Fig. 1, the loop corrections will lower this value. We find: $`{\displaystyle \frac{\partial ^2\mathrm{\Delta }V}{\partial s^2}}`$ $`=`$ $`-{\displaystyle \frac{g^2\lambda ^2\xi }{\pi ^2}}f({\displaystyle \frac{\lambda ^2}{2g^2}}),`$ (18) $`f(x)`$ $`\equiv `$ $`{\displaystyle \frac{\mathrm{ln}2}{3}}(1+x)-{\displaystyle \frac{x\mathrm{ln}x}{2(1-x)}},f(1)\approx 1.`$ (19) So, with $`\lambda =g=1`$, the effective mass is lowered by 4 %. A similar order of magnitude is found in the case of F-term inflation, for which $`\frac{\partial ^2V}{\partial s^2}=4\alpha \mu ^2`$:
$$\frac{\partial ^2\mathrm{\Delta }V}{\partial s^2}=-\frac{g^2\alpha \mu ^2}{\pi ^2}\stackrel{~}{f}(\frac{\alpha ^2}{g^2}),$$
and $`\stackrel{~}{f}(x)`$ is a complicated function, of order one when $`1<x<100`$. When $`\alpha =2g=1`$, the effective mass is lowered by 2 %..
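A quick numerical evaluation of this footnote (a sketch, using the signs as reconstructed above) reproduces the quoted shift for $`\lambda =g=1`$:

```python
import numpy as np

def f(x):
    return np.log(2)/3*(1 + x) - x*np.log(x)/(2*(1 - x))

# relative reduction of the tree-level mass 2*lambda^2*xi for lambda = g = 1
print(f(0.5), f(0.5)/(2*np.pi**2))   # ~0.69 and ~0.035, i.e. about 4 %
```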
## IV Conclusion
In this paper, we have discussed the problem of supersymmetry breaking during and at the end of supersymmetric hybrid inflation, when the inflation scale is generated by GUT physics. We did not consider inflation models at intermediate or low energy scales . Neither did we include supergravity corrections. When global supersymmetry is replaced by supergravity, the F-flat directions can only be preserved for specific Kähler potentials ; this does not apply to the case of D-term inflation, for which supergravity corrections are small for all values of the fields below the Planck mass . In this framework, we calculated the one-loop corrections which modify the effective potential in the flat (inflaton) direction. We would like to point out that the classical trajectories of the Higgs fields do not necessarily coincide with their valley of local minima. However, for a wide range of parameters, the trajectory remains very close to this valley. Also, in more realistic cases, the Higgs fields will belong to non-trivial representations of G and their VEVs will break a non-abelian gauge group . They may also couple to fermions such as right-handed neutrinos . Hence, generally, one would find a much richer spectrum which would increase the corrections in a model-dependent way. Our results do not apply only to the end of inflation. They could be used in other models for which inflation takes place in local minima that spontaneously break both the gauge symmetry and supersymmetry. Also, in supersymmetric multiple inflationary models, part of the gauge symmetry breaks between the different stages of inflation, and at each step, the calculation of loop corrections can be performed as discussed in this paper.
## Acknowledgements
The authors would like to thank D. Demir, G. Dvali and G. Senjanović for very useful discussions. J. L. is supported by the TMR grant ERBFMRXCT960090.
# Monte Carlo Hamiltonian
## 1 Motivation
The motivation for constructing a Monte Carlo Hamiltonian comes from different directions.
(i) The renormalization group à la Kadanoff-Wilson aims to construct a renormalized Hamiltonian, which describes physics at a critical point, based on the assumption of scale invariance. Such a Hamiltonian is supposed to have far fewer degrees of freedom than the original Hamiltonian. A recent further development of those ideas is White’s density matrix renormalization group technique . Our goal is similar to the above in the sense that we aim at an effective Hamiltonian, which has ”fewer” degrees of freedom than the ”original” Hamiltonian. But it differs in describing physics in the low energy domain instead of at the critical point.
(ii) When one tries to solve a field theory in the Hamiltonian formulation, the standard way to proceed is by constructing a Fock space, parametrized by some high momentum cut-off and some occupation number cut-off (Tamm-Dancoff approximation). When those parameters are increased, which means raising the upper bound of the energy, the density of states typically grows exponentially, which renders the system beyond any control. In contrast to that, the Monte Carlo Hamiltonian is governed by a ”small” number of low-energy degrees of freedom and its spectral density decreases with increasing energy.
(iii) The enormous success of lattice field theory over the last quarter of the century is certainly due to the fact that the Monte Carlo method with importance sampling is an excellent technique to solve high dimensional (and even ”infinite” dimensional) integrals. Conventionally, one computes a transition amplitude of an operator and evaluates it numerically via Monte Carlo (e.g. Metropolis algorithm ),
$`<O>`$ $`=`$ $`{\displaystyle \frac{\int [dx]O[x]\mathrm{exp}(-\frac{1}{\hbar }S[x])}{\int [dx]\mathrm{exp}(-\frac{1}{\hbar }S[x])}}`$ (1)
$`\approx `$ $`{\displaystyle \frac{1}{N_c}}{\displaystyle \underset{C}{\sum }}O[C].`$
Here $`C`$ stands for a path configuration drawn from the distribution $`P[x]=\frac{1}{Z}\mathrm{exp}(-\frac{1}{\hbar }S[x])`$. The virtue of the Monte Carlo method lies in its property of yielding very good numerical results. E.g., solving a field theory model on a lattice of size $`20^4`$ and measuring the observable from a number of configurations $`N_c`$ on the order of a few hundred typically yields results with statistical errors on the order of a few percent. In this way it has been possible to determine low lying baryon and meson masses quite precisely .
On the other hand, one can express a transition amplitude in imaginary time via the Hamiltonian
$`<x_{fi},T|x_{in},0>`$ $`=`$ $`<x_{fi}|e^{-HT/\hbar }|x_{in}>`$ (2)
$`=`$ $`{\displaystyle \underset{n=1}{\overset{\infty }{\sum }}}<x_{fi}|E_n>e^{-E_nT/\hbar }<E_n|x_{in}>`$
$`\approx `$ $`<x_{fi}|e^{-H_{eff}T/\hbar }|x_{in}>`$
$`=`$ $`{\displaystyle \underset{\nu =1}{\overset{N}{\sum }}}<x_{fi}|E_\nu ^{eff}>e^{-E_\nu ^{eff}T/\hbar }<E_\nu ^{eff}|x_{in}>.`$
In the last two lines we have approximated the Hamiltonian $`H`$ by an effective Hamiltonian $`H_{eff}`$, which has fewer degrees of freedom, e.g., only $`N`$ eigenstates. The idea of the Monte Carlo Hamiltonian is that an effective Hamiltonian can be found via use of Monte Carlo, such that transition amplitudes become a finite sum over $`N`$ eigenstates, where $`N`$ is of the order of magnitude of $`N_c`$, i.e. the number of equilibrium configurations sufficient to closely approximate the path integral of Eq.(1).
One might ask: What is the virtue of such a Hamiltonian? A list of physics problems, where progress has been slow with conventional methods including standard lattice techniques, and where such a Hamiltonian might bring progress are the following topics:
\- Non-perturbative computation of cross sections and decay amplitudes in many-body systems.
\- Low-lying but excited states of the hadronic spectrum and the related question of quantum chaos in such a system.
\- Hadron wave functions and the related question of hadron structure functions, in particular for small $`x_B`$ and $`Q^2`$. The Hamiltonian formulation is suited to compute wave functions, which is quite difficult in the Lagrangian lattice formulation.
\- Finite temperature and in particular finite density in baryonic matter. This is crucial for the quark-gluon plasma phase transition, the physics of neutron stars and cosmology. The Hamiltonian formulation is suited to compute the mean value of the energy (average energy). This is difficult to compute in the Lagrangian lattice formulation where one usually computes the expectation value of the action. Finite density $`QED`$ and $`QCD`$ in the Lagrangian lattice formulation is hampered by the notorious complex action problem.
\- Atomic physics: study of spectra and the question of quantum chaos.
\- Condensed matter physics: study of spin systems (computation of dynamical structure factors), and high $`T_c`$ superconductivity models (search for electron pair attraction at very small energy). In the following we will outline how to construct such a Hamiltonian.
## 2 Construction of $`H_{eff}`$
In contrast to the statistical mechanics concept of the transfer matrix, which describes the time-evolution (we consider imaginary time) when advancing the system by a small discrete time step $`\mathrm{\Delta }t=a_0`$ and from which one can infer the Hamiltonian ($`a_0\to 0`$), here we consider transition amplitudes $`<\psi |e^{-HT/\hbar }|\varphi >`$ corresponding to a finite, long time $`T`$ ($`T>>a_0`$), in order to reconstruct the spectrum in some finite low energy domain. Let us start from a complete orthonormal basis of Hilbert states $`|e_i>,i=1,2,3\mathrm{}`$ and consider the matrix elements for a given fixed $`N`$
$$M_{ij}(T)=<e_i|e^{-HT/\hbar }|e_j>,i,j\in 1,\mathrm{},N.$$
(3)
Under the assumption that $`H`$ is Hermitian, $`M(T)`$ is a positive, Hermitian matrix. Elementary linear algebra implies that there is a unitary matrix $`U`$ and a real, diagonal matrix $`D`$ such that
$$M(T)=U^{\dagger }D(T)U.$$
(4)
On the other hand, projecting $`H`$ onto the subspace $`S_N`$ generated by the first $`N`$ states of the basis $`|e_i>`$, and using the eigenrepresentation of this projected Hamiltonian, one has
$$M_{ij}(T)=\underset{k=1}{\overset{N}{\sum }}<e_i|E_k^{eff}>e^{-E_k^{eff}T/\hbar }<E_k^{eff}|e_j>,$$
(5)
and we can identify
$$U_{ik}^{\dagger }=<e_i|E_k^{eff}>,D_k(T)=e^{-E_k^{eff}T/\hbar }.$$
(6)
Let us assume for the moment that the matrix elements $`M_{ij}(T),i,j=1,\mathrm{},N`$, are known. Then algebraic diagonalization of the matrix $`M(T)`$ yields eigenvalues $`D_k(T),k=1,\mathrm{},N`$, which by Eq.(6) gives the spectrum of energies,
$$E_k^{eff}=-\frac{\hbar }{T}\mathrm{ln}D_k(T),k=1,\mathrm{},N.$$
(7)
The corresponding k-th eigenvector can be identified with the k-th column of the matrix $`U_{ik}^{\dagger }`$. From Eq.(6) we then know the wave function of the k-th eigenstate expressed in terms of the basis $`|e_i>`$. Thus starting from the matrix elements $`M_{ij}(T)`$ we have explicitly constructed an effective Hamiltonian
$$H_{eff}=\underset{k=1}{\overset{N}{\sum }}|E_k^{eff}>E_k^{eff}<E_k^{eff}|.$$
(8)
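In practice, the diagonalization step of Eqs. (4)-(8) is a few lines of linear algebra. The sketch below is our illustration (not the authors' code) and assumes $`M(T)`$ is given as a real symmetric positive matrix:

```python
import numpy as np

def spectrum_from_M(M, T, hbar=1.0):
    """Recover effective energies and wave functions from M(T),
    following Eqs. (4)-(7)."""
    d, u = np.linalg.eigh(M)
    d, u = d[::-1], u[:, ::-1]        # largest eigenvalue = lowest energy
    keep = d > 0.0                    # drop numerically non-positive modes
    E = -hbar/T * np.log(d[keep])
    return E, u[:, keep]              # columns of u: <e_i|E_k^eff>
```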
## 3 Computation of matrix elements by Monte Carlo
We suggest to compute the matrix elements $`M_{ij}(T)`$ directly from the action via Monte Carlo with importance sampling. For the sake of simplicity, let us consider $`D=1`$. We choose basis states $`|e_i>`$ in position space by introducing a lattice with nodes $`x_i`$ and define $`e_i(x)`$ (unnormalized) by $`e_i(x)=1`$ if $`x_i\leq x\leq x_{i+1}`$, zero else, with $`\mathrm{\Delta }x_i=x_{i+1}-x_i`$. In numerical calculations we have used a regular lattice, $`\mathrm{\Delta }x_i=\text{const}`$. The matrix elements read
$`M_{ij}(T)`$ $`=`$ $`{\displaystyle \int _{x_i}^{x_{i+1}}}𝑑y{\displaystyle \int _{x_j}^{x_{j+1}}}𝑑z<y,T|z,0>`$ (9)
$`=`$ $`{\displaystyle \int _{x_i}^{x_{i+1}}}𝑑y{\displaystyle \int _{x_j}^{x_{j+1}}}𝑑z{\displaystyle \int [dx]\mathrm{exp}[-S[x]/\hbar ]}|_{z,0}^{y,T}.`$
Here $`S`$ denotes the Euclidean action for a given path $`C`$,
$$S[C]=\int _0^T𝑑t\left[\frac{1}{2}m\dot{x}^2+V(x)\right]|_C.$$
(10)
The Monte Carlo method with importance sampling is suited and conventionally applied to estimate a ratio of integrals, like in Eq.(1). Here we suggest to estimate the matrix elements $`M_{ij}`$ by splitting the action
$$S=S_0+S_V\equiv \int _0^T𝑑t\frac{1}{2}m\dot{x}^2+\int _0^T𝑑tV(x),$$
(11)
and to express $`M_{ij}`$ as
$$M_{ij}(T)=M_{ij}^{(0)}(T)\frac{\int _{x_i}^{x_{i+1}}𝑑y\int _{x_j}^{x_{j+1}}𝑑z\int [dx]\mathrm{exp}[-S_V[x]/\hbar ]\mathrm{exp}[-S_0[x]/\hbar ]|_{z,0}^{y,T}}{\int _{x_i}^{x_{i+1}}𝑑y\int _{x_j}^{x_{j+1}}𝑑z\int [dx]\mathrm{exp}[-S_0[x]/\hbar ]|_{z,0}^{y,T}},$$
(12)
where $`O\equiv \mathrm{exp}[-S_V/\hbar ]`$ is treated as an observable. The ratio can be treated by standard Monte Carlo methods with importance sampling. The matrix elements $`M_{ij}^{(0)}`$, corresponding to the free action $`S_0`$, are almost known analytically,
$$M_{ij}^{(0)}(T)=\int _{x_i}^{x_{i+1}}𝑑y\int _{x_j}^{x_{j+1}}𝑑z\sqrt{\frac{m}{2\pi \hbar T}}\mathrm{exp}\left[-\frac{m}{2\hbar T}(y-z)^2\right].$$
(13)
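A minimal sketch of the estimator of Eq. (12) follows; it is our illustration, and the paper's production algorithm may differ in detail. Paths distributed according to $`\mathrm{exp}(-S_0/\hbar )`$ between fixed endpoints are Brownian bridges with diffusion constant $`\hbar /m`$; the cell integrals over $`y`$ and $`z`$ are replaced here by the midpoint rule, which is an extra simplification of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_matrix_element(V, xi, xj, dx, T, m=1.0, hbar=1.0,
                      n_t=64, n_paths=4000):
    """Monte Carlo estimate of M_ij, Eq. (12): average exp(-S_V/hbar)
    over free-action (Brownian bridge) paths from xj to xi."""
    dt = T / n_t
    t = np.linspace(0.0, T, n_t + 1)
    # free-particle factor, Eq. (13), in the midpoint approximation
    M0 = dx**2 * np.sqrt(m/(2.0*np.pi*hbar*T)) \
              * np.exp(-m*(xi - xj)**2/(2.0*hbar*T))
    # Brownian bridges from xj at t=0 to xi at t=T, diffusion hbar/m
    dW = np.sqrt(hbar*dt/m) * rng.standard_normal((n_paths, n_t))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)],
                       axis=1)
    x = xj + (xi - xj)*t/T + (W - np.outer(W[:, -1], t/T))
    S_V = V(x[:, :-1]).sum(axis=1) * dt       # rectangle rule for S_V
    return M0 * np.mean(np.exp(-S_V/hbar))

# example: one harmonic-oscillator matrix element (m = omega = 1)
print(mc_matrix_element(lambda x: 0.5*x**2, 0.0, 0.5, 1.0, T=1.0))
```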
## 4 Test of $`H_{eff}`$
### 4.1 Free system
In order to test the effective Hamiltonian, we have computed the energy spectrum, its wave functions and thermodynamic observables like the average energy $`U`$ and the specific heat $`C`$ as well as the partition function $`Z`$. They are defined by
$`Z(\beta )`$ $`=`$ $`Tr[e^{-\beta H}],`$
$`U(\beta )`$ $`=`$ $`{\displaystyle \frac{1}{Z}}Tr[He^{-\beta H}]=-{\displaystyle \frac{\partial \mathrm{log}Z}{\partial \beta }},`$
$`C(\beta )`$ $`=`$ $`{\displaystyle \frac{\partial U}{\partial 𝒯}}=k_B\beta ^2{\displaystyle \frac{\partial ^2\mathrm{log}Z}{\partial \beta ^2}},`$ (14)
where $`\beta =(k_B𝒯)^{-1}`$, $`𝒯`$ is the temperature, and we identify $`\beta `$ with the imaginary time $`T`$ by $`\beta =T/\hbar `$. For the free system one obtains the following analytical expressions for $`Z`$, $`U`$ and $`C`$,
$`Z(\beta )`$ $`=`$ $`\sqrt{{\displaystyle \frac{m}{2\pi \hbar ^2\beta }}}I,I={\displaystyle \int _{-\infty }^{\infty }}𝑑x(\text{being infinite}),`$
$`U(\beta )`$ $`=`$ $`{\displaystyle \frac{1}{2\beta }}={\displaystyle \frac{1}{2}}k_B𝒯,`$
$`C(\beta )`$ $`=`$ $`{\displaystyle \frac{1}{2}}k_B.`$ (15)
Note that $`U(\beta )\to 0`$ as $`\beta \to \infty `$, i.e. it tends to the ground state energy of the free system (Feynman-Kac formula).
The partition function corresponding to the effective Hamiltonian is obtained via its spectrum,
$$Z_{eff}(\beta )=Tr[e^{-\beta H_{eff}}]=\underset{k=1}{\overset{N}{\sum }}e^{-\beta E_k^{eff}}.$$
(16)
Via Eq.(14) one obtains the corresponding average energy $`U_{eff}`$ and the specific heat $`C_{eff}`$. One should keep in mind that $`H_{eff}`$ has been constructed for a specific value of the time parameter, $`T=1`$, corresponding to the temperature $`𝒯=1`$ (we use $`\hbar =k_B=1`$). Fig. shows a plot of the average energy, comparing the exact result with that from the effective Hamiltonian. One observes that the agreement is better as $`𝒯\to 0`$, i.e. in the low energy regime. A similar behavior is found for the specific heat, shown in Fig..
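The thermodynamic observables of Eqs. (14) and (16) follow from the effective spectrum by elementary sums. A small sketch (ours, with $`k_B=1`$), using the specific-heat identity $`C=k_B\beta ^2\mathrm{Var}(E)`$:

```python
import numpy as np

def thermo(E, beta):
    """Z_eff, U_eff and C_eff from a finite spectrum, Eqs. (14)/(16);
    energies are shifted by E.min() for numerical stability."""
    w = np.exp(-beta*(E - E.min()))
    Z = w.sum()
    U = (E*w).sum()/Z
    C = beta**2 * ((E**2*w).sum()/Z - U**2)   # k_B beta^2 <(Delta E)^2>
    return Z*np.exp(-beta*E.min()), U, C
```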
### 4.2 Harmonic oscillator
The Euclidean action of the harmonic oscillator is given by
$$S=\int _0^T𝑑t\left[\frac{1}{2}m\dot{x}^2+\frac{1}{2}m\omega ^2x^2\right].$$
(17)
The energy spectrum is
$$E_n=\hbar \omega (n+1/2),n=0,1,2,\mathrm{}$$
(18)
A comparison of the spectrum of the effective Hamiltonian with the exact one is shown in Tab.. As can be seen, the error is small in the low energy domain. A more stringent test is that of the wave functions. Fig. shows a comparison for the wave functions of the three lowest states. We have also verified the low energy behavior of the effective Hamiltonian by computing the partition function, average energy and specific heat as a function of temperature. For the harmonic oscillator those are analytically known,
$`Z(\beta )`$ $`=`$ $`\left[2\mathrm{sinh}(\beta \hbar \omega /2)\right]^{-1},`$
$`U(\beta )`$ $`=`$ $`{\displaystyle \frac{\hbar \omega }{2}}\mathrm{coth}(\beta \hbar \omega /2),`$
$`C(\beta )`$ $`=`$ $`k_B\left[{\displaystyle \frac{\beta \hbar \omega /2}{\mathrm{sinh}(\beta \hbar \omega /2)}}\right]^2.`$ (19)
In the limit $`\beta \to \infty `$ the average energy tends to the ground state energy, $`U\to \hbar \omega /2`$ (Feynman-Kac formula). A plot of the average energy and the specific heat is shown in Figs.. The effective Hamiltonian, constructed at $`T_c=\beta _c=𝒯_c=1`$, describes thermodynamic observables well in the range $`\beta _c\leq \beta `$ (it works also for $`\beta >10`$, not shown in the figure). However, it breaks down for $`\beta <\beta _c`$, i.e. $`𝒯>𝒯_c`$. This is due to the small dimension $`N=20`$ of the matrix. Agreement in a larger $`\beta `$-region, i.e. lowering $`\beta _c`$, can be obtained by increasing $`N`$. This can be seen, e.g., for the free system in Fig., where $`N=200`$ and $`\beta _c<0.1`$.
### 4.3 Other local potentials
We have tested the effective Hamiltonian for other local potentials. For example,
$$V(x)=-V_0\text{sech}^2(x/d)$$
(20)
is a potential having a minimum $`-V_0`$ at $`x=0`$ and rising asymptotically to zero at $`x=\pm \infty `$. It generates a bound state spectrum, which is known analytically . It is given by
$`E_n`$ $`=`$ $`-{\displaystyle \frac{\hbar ^2}{2md^2}}\lambda _n,`$
$`\lambda _n`$ $`=`$ $`\left[(n+1/2)-\sqrt{Q+1/4}\right]^2,`$
$`Q`$ $`=`$ $`{\displaystyle \frac{2md^2}{\hbar ^2}}V_0,`$
$`n`$ $`=`$ $`0,1,2,\mathrm{},n_{max}<\sqrt{Q+1/4}-1/2.`$ (21)
The results are shown in Tabs..
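For reference, Eq. (21) is easily evaluated; the sketch below (ours) reproduces the bound-state counting quoted in the table captions:

```python
import numpy as np

def sech2_levels(V0, d, m=1.0, hbar=1.0):
    """Bound-state energies of V(x) = -V0 sech^2(x/d), Eq. (21)."""
    Q = 2.0*m*d**2*V0/hbar**2
    n_max = int(np.ceil(np.sqrt(Q + 0.25) - 0.5)) - 1  # strict inequality
    n = np.arange(n_max + 1)
    lam = (np.sqrt(Q + 0.25) - (n + 0.5))**2
    return -hbar**2/(2.0*m*d**2) * lam

print(sech2_levels(1.0, 1.0))   # Q = 2: one bound state
print(sech2_levels(1.0, 2.0))   # Q = 8: three bound states
```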
## 5 Conclusion
We have proposed to construct an effective low-energy Hamiltonian from the action via use of the Monte Carlo method. We have shown that the method works for a number of systems in $`1D`$ quantum mechanics, by computing the spectrum, wave functions and thermodynamical observables.
\- We have not given an estimate of the statistical errors. The reason is that the statistical error of the matrix elements can be estimated easily; however, obtaining from that an error estimate of the energy spectrum is difficult. We defer this to a later study.
\- We have not discussed an application to a field theory or a many-body system, although this is the area where the method should prove to be most useful. The reason is that this requires a new step, namely a stochastic (Monte Carlo) selection from the set of basis functions. This is presently under investigation.
\- In our opinion an effective low-energy Hamiltonian will be very useful in condensed matter physics, atomic physics, nuclear physics, and high energy particle physics.
Acknowledgments
H.K. would like to acknowledge helpful discussions with M. Creutz, B. Berg, W. Janke, and P. Amiot. H.K. and K.J.M.M. are grateful for support by NSERC Canada. X.Q.L. is supported by the National Natural Science Fund for Distinguished Young Scholars, supplemented by the National Natural Science Foundation of China, fund for international cooperation and exchange, the Ministry of Education, and the Hong Kong Foundation of the Zhongshan University Advanced Research Center.
Figure Caption
Average energy of the free system. Solid line and diamonds, respectively, represent the exact analytical result, and that from the exact matrix elements for $`\mathrm{\Delta }x=0.5`$ and $`N=100`$. The cross at $`\beta =0.1`$ corresponds to $`\mathrm{\Delta }x=0.2`$ and $`N=200`$.
Specific heat over $`k_B`$ of the free system. Symbols as in Fig..
Wave function of the harmonic oscillator, (a) ground state, (b) first excited state, (c) second excited state. Solid line, diamonds and crosses, respectively, represent the exact analytical result, that from the exact matrix elements, and that from Monte Carlo simulation.
Average energy of the harmonic oscillator. Symbols as in Fig..
Specific heat over $`k_B`$ of the harmonic oscillator. Symbols as in Fig..
Eigenvalues of the harmonic oscillator. $`m=1`$, $`\hbar =1`$, $`\omega =0.6`$, $`\mathrm{\Delta }x=1`$, $`N=20`$. $`E_n^{exact}`$, $`E_n^{e.m.}`$ and $`E_n^{m.c.}`$, respectively, represent the exact analytical result, that from the exact matrix elements, and that from Monte Carlo simulation.
Bound state spectrum for potential given by Eq.(20). (a) For $`m=1.0,\mathrm{}=1.0,T=1.0,V_0=1.0,d=1.0,Q=2,\mathrm{\Delta }x=1.0,N=10`$, there is only one bound state $`n_{max}<1`$. This is confirmed by the Monte Carlo data. (b) For $`m=1.0,\mathrm{}=1.0,T=1.0,V_0=1.0,d=2.0,Q=8,\mathrm{\Delta }x=1.0,N=20`$, there are three bound states $`n_{max}<3`$. This is confirmed by the Monte Carlo data.
# Flux Sensitivity of VERITAS
## 1 Introduction
A new generation of very high energy ground based $`\gamma `$-ray observatories (VERITAS , HESS , MAGIC ) promises to extend the sensitive energy range to below $`100`$ GeV, an energy band which is expected to contain a wealth of new information on high energy physics and astrophysics (for a review see ). The spectra and variability of AGNs, the origin of high energy cosmic rays, physical processes in the strong magnetic fields of pulsars, the puzzle of dark matter, the cosmology of the Universe, the origin of $`\gamma `$-ray bursts, the evaporation of black holes, and even observable quantum gravity effects are all within the scope of phenomena which will be studied by these projects. In this paper we give the technical characteristics of VERITAS driven by its scientific goals. Also, the main motivations for the choices of the array parameters are explained. Finally, we discuss factors which limit the flux sensitivity.
## 2 Simulations
To predict the performance of VERITAS, the response of the array has been simulated with the use of the DePauw-Purdue KASCADE system of air shower Monte Carlo (MC) programs . The study of hadronic showers which may mimic pure electromagnetic cascades has been done with the CORSIKA code . To determine the optimum VERITAS design we have studied the effects of varying the number of telescopes in an array, the spacing between them, reflector aperture, telescope focal length, camera Field of View (FoV) and pixel size. The characteristics of the baseline configuration as well as simulation input parameters are summarized in Table 1.
## 3 VERITAS design
The VERITAS design has been optimized for maximum sensitivity to point sources in the energy range $`100`$ GeV - $`10`$ TeV, but with significant sensitivity in the range $`50`$ GeV - $`100`$ GeV and from $`10`$ TeV to $`50`$ TeV. The optimization has been performed with a fixed total number of channels. The details of the VERITAS design study are given in . Here we summarize our arguments for the baseline configuration.
* VERITAS should have at least 6–7 telescopes for good wide energy range sensitivity and versatility. Seven telescopes provide more flexibility in VERITAS operation modes when the array is split into sub-arrays. An array of three 15 m telescopes was rejected because of poor performance at high energies and lack of versatility.
* The camera FoV, $`3.5^{\circ }`$, is a compromise between achieving a low array energy threshold, maintaining performance at high energies, and the ability to conduct an efficient sky survey and study extended sources. Also, the chosen FoV is the minimum necessary for effective image reconstruction with a single telescope.
* The given FoV, together with the number of channels per telescope, $`500`$, translates into a pixel spacing of $`0.149^{\circ }`$.
* The optical system of the telescope is proposed to be $`f/1.2`$. This provides an adequate match to the number of channels in the telescope camera, making the global reflector aberrations at the edge of the FoV comparable to the pixel size. An $`f/1.5`$ system would perform better, but it would require a substantial additional investment in the optical support structure and the system of mirror alignment.
* The telescope aperture, $`10`$ m, was chosen on the basis of previous successful experience in operating the Whipple Observatory telescope, and for economy.
* The spacing between telescopes should be in the range 70–80 m. Decreasing the spacing degrades event reconstruction efficiency, background rejection and array sensitivity in the range $`200`$ GeV - $`1`$ TeV. Increasing the spacing does not change the array sensitivity in this energy range, but it increases the array energy threshold.
## 4 VERITAS flux sensitivity
The VERITAS flux sensitivity has been estimated for $`50`$ hours of observations on a point source with a spectrum given by $`dN_\gamma /dE\propto E^{-2.5}`$, as is seen from the Crab Nebula in the sub-TeV energy range . The minimum detectable flux of $`\gamma `$-rays is constrained by the $`5\sigma `$ confidence level or by the statistics of the detected photons, $`N_\gamma >10`$, when the background is almost negligible. The details of the sensitivity calculations can be found in .
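As an illustration only — the real calculation uses the full simulated effective areas and background rates of Sec. 2 — the quoted criterion can be sketched as follows; `area_eff` and `bg_rate` are hypothetical stand-ins, not numbers from this paper:

```python
import numpy as np

def min_flux_norm(area_eff, bg_rate, t_obs=50.0*3600.0, e_th=0.1):
    """Toy sensitivity: smallest normalization K of dN/dE = K E^-2.5
    (E in TeV) giving both a 5 sigma excess and N_gamma > 10 in t_obs
    seconds, for a constant effective area area_eff (m^2) above the
    threshold e_th (TeV) and a background rate bg_rate (Hz)."""
    frac = e_th**-1.5 / 1.5            # integral of E^-2.5 above e_th
    n_bg = bg_rate * t_obs
    n_required = max(5.0*np.sqrt(n_bg), 10.0)
    return n_required / (area_eff * t_obs * frac)
```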
The VERITAS $`\gamma `$-ray flux sensitivity as a function of array energy threshold is shown in Figure 1. At high energies, above 2–3 TeV, the VERITAS sensitivity is limited by photon statistics. In this region the sensitivity decreases with increasing energy due to the limited FoV of the telescope camera. Large zenith angle observations can improve array performance in this energy band. In the vicinity of $`1`$ TeV, VERITAS sensitivity will likely be limited by cosmic ray (CR) protons which mimic $`\gamma `$-ray showers. The detection rate of this isotropic background is not well known, and it may be of scientific interest by itself because of the large uncertainty in its predicted effect by the different proton interaction models available for study within the CORSIKA code. The energy region from $`200`$ GeV to $`1`$ TeV will most likely be dominated by the diffuse electron background. The steepness of the CR electron spectrum causes this to become a limiting factor of the VERITAS sensitivity. The region below $`200`$ GeV is strongly affected by the night sky background (NSB) and CR protons. The two curves in this region show the difference in array sensitivity depending on the conditions of the observations. In a favorable situation, a dark observation field, the array energy threshold for events which we are able to reconstruct may decrease to as low as 40–50 GeV. The single telescope accidental trigger rate (< 0.1–1.0 MHz), however, may limit the array operation energy threshold to $`70`$ GeV. If observations are carried out in a bright region of sky (Milky Way, lower elevation) where the NSB is $`4`$ times brighter, VERITAS may be limited to a $`110`$ GeV energy threshold. The result of observations in this energy band will be highly sensitive to the array trigger condition, and to event reconstruction and background rejection methods. The plot is indicative of our current achievement, which will certainly be improved.
## 1 Introduction
In cosmology (or to be more specific, cosmography, the measurement of the Universe) there are many ways to specify the distance between two points, because in the expanding Universe, the distances between comoving objects are constantly changing, and Earth-bound observers look back in time as they look out in distance. The unifying aspect is that all distance measures somehow measure the separation between events on radial null trajectories, ie, trajectories of photons which terminate at the observer.
In this note, formulae for many different cosmological distance measures are provided. I treat the concept of “distance measure” very liberally, so, for instance, the lookback time and comoving volume are both considered distance measures. The bibliography of source material can be consulted for many of the derivations; this is merely a “cheat sheet.” Minimal $`C`$ routines (KR) which compute all of these distance measures are available from the author upon request. Comments and corrections are highly appreciated, as are acknowledgments or citation in research that makes use of this summary or the associated code.
## 2 Cosmographic parameters
The Hubble constant $`H_0`$ is the constant of proportionality between recession speed $`v`$ and distance $`d`$ in the expanding Universe;
$$v=H_0d$$
(1)
The subscripted “0” refers to the present epoch because in general $`H`$ changes with time. The dimensions of $`H_0`$ are inverse time, but it is usually written
$$H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$$
(2)
where $`h`$ is a dimensionless number parameterizing our ignorance. (Word on the street is that $`0.6<h<0.9`$.) The inverse of the Hubble constant is the Hubble time $`t_\mathrm{H}`$
$$t_\mathrm{H}\equiv \frac{1}{H_0}=9.78\times 10^9h^{-1}\mathrm{yr}=3.09\times 10^{17}h^{-1}\mathrm{s}$$
(3)
and the speed of light $`c`$ times the Hubble time is the Hubble distance $`D_\mathrm{H}`$
$$D_\mathrm{H}\equiv \frac{c}{H_0}=3000h^{-1}\mathrm{Mpc}=9.26\times 10^{25}h^{-1}\mathrm{m}$$
(4)
These quantities set the scale of the Universe, and often cosmologists work in geometric units with $`c=t_\mathrm{H}=D_\mathrm{H}=1`$.
The mass density $`\rho `$ of the Universe and the value of the cosmological constant $`\mathrm{\Lambda }`$ are dynamical properties of the Universe, affecting the time evolution of the metric, but in these notes we will treat them as purely kinematic parameters. They can be made into dimensionless density parameters $`\mathrm{\Omega }_\mathrm{M}`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ by
$$\mathrm{\Omega }_\mathrm{M}\equiv \frac{8\pi G\rho _0}{3H_0^2}$$
(5)
$$\mathrm{\Omega }_\mathrm{\Lambda }\equiv \frac{\mathrm{\Lambda }c^2}{3H_0^2}$$
(6)
(Peebles, 1993, pp 310–313), where the subscripted “0”s indicate that the quantities (which in general evolve with time) are to be evaluated at the present epoch. A third density parameter $`\mathrm{\Omega }_k`$ measures the “curvature of space” and can be defined by the relation
$$\mathrm{\Omega }_\mathrm{M}+\mathrm{\Omega }_\mathrm{\Lambda }+\mathrm{\Omega }_k=1$$
(7)
These parameters completely determine the geometry of the Universe if it is homogeneous, isotropic, and matter-dominated. By the way, the critical density $`\mathrm{\Omega }=1`$ corresponds to $`7.5\times 10^{21}h^{-1}M_{\odot }D_\mathrm{H}^{-3}`$, where $`M_{\odot }`$ is the mass of the Sun.
Most believe that it is in some sense “unlikely” that all three of these density parameters be of the same order, and we know that $`\mathrm{\Omega }_\mathrm{M}`$ is significantly larger than zero, so many guess that $`(\mathrm{\Omega }_\mathrm{M},\mathrm{\Omega }_\mathrm{\Lambda },\mathrm{\Omega }_k)=(1,0,0)`$, with $`(\mathrm{\Omega }_\mathrm{M},1-\mathrm{\Omega }_\mathrm{M},0)`$ and $`(\mathrm{\Omega }_\mathrm{M},0,1-\mathrm{\Omega }_\mathrm{M})`$ tied for second place.<sup>1</sup><sup>1</sup>1This sentence, unmodified from the first incarnation of these notes, can be used by historians of cosmology to determine, at least roughly, when they were written. If $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, then the deceleration parameter $`q_0`$ is just half $`\mathrm{\Omega }_\mathrm{M}`$, otherwise $`q_0`$ is not such a useful parameter. When I perform cosmographic calculations and I want to cover all the bases, I use the three world models
| name | $`\mathrm{\Omega }_\mathrm{M}`$ | $`\mathrm{\Omega }_\mathrm{\Lambda }`$ |
| --- | --- | --- |
| Einstein–de-Sitter | 1 | 0 |
| low density | 0.05 | 0 |
| high lambda | 0.2 | 0.8 |
These three models push the observational limits in different directions. Some would say that all three of these models are already ruled out, the first by mass accounting, the second by anisotropies measured in the cosmic microwave background, and the third by lensing statistics. It is fairly likely that the true world model is somewhere in-between these (unless the $`\mathrm{\Omega }_\mathrm{M},\mathrm{\Omega }_\mathrm{\Lambda },\mathrm{\Omega }_k`$ parameterization is itself wrong).
## 3 Redshift
The redshift $`z`$ of an object is the fractional doppler shift of its emitted light resulting from radial motion
$$z\equiv \frac{\nu _\mathrm{e}}{\nu _\mathrm{o}}-1=\frac{\lambda _\mathrm{o}}{\lambda _\mathrm{e}}-1$$
(8)
where $`\nu _\mathrm{o}`$ and $`\lambda _\mathrm{o}`$ are the observed frequency and wavelength, and $`\nu _\mathrm{e}`$ and $`\lambda _\mathrm{e}`$ are the emitted. In special relativity, redshift is related to radial velocity $`v`$ by
$$1+z=\sqrt{\frac{1+v/c}{1-v/c}}$$
(9)
where $`c`$ is the speed of light. In general relativity, (9) is true in one particular coordinate system, but not any of the traditionally used coordinate systems. Many feel (partly for this reason) that it is wrong to view relativistic redshifts as being due to radial velocities at all (eg, Harrison, 1993). I do not agree. On the other hand, redshift is directly observable and radial velocity is not; these notes concentrate on observables.
The difference between an object’s measured redshift $`z_{\mathrm{obs}}`$ and its cosmological redshift $`z_{\mathrm{cos}}`$ is due to its (radial) peculiar velocity $`v_{\mathrm{pec}}`$; ie, we define the cosmological redshift as that part of the redshift due solely to the expansion of the Universe, or Hubble flow. The peculiar velocity is related to the redshift difference by
$$v_{\mathrm{pec}}=c\frac{(z_{\mathrm{obs}}-z_{\mathrm{cos}})}{(1+z)}$$
(10)
where I have assumed $`v_{\mathrm{pec}}c`$. This can be derived from (9) by taking the derivative and using the special relativity formula for addition of velocities. From here on, we assume $`z=z_{\mathrm{cos}}`$.
For small $`v/c`$, or small distance $`d`$, in the expanding Universe, the velocity is linearly proportional to the distance (and all the distance measures, eg, angular diameter distance, luminosity distance, etc, converge)
$$z\approx \frac{v}{c}=\frac{d}{D_\mathrm{H}}$$
(11)
where $`D_\mathrm{H}`$ is the Hubble distance defined in (4). But this is only true for small redshifts! It is important to note that many galaxy redshift surveys, when presenting redshifts as radial velocities, always use the non-relativistic approximation $`v=cz`$, even when it may not be physically appropriate (eg, Fairall 1992).
In terms of cosmography, the cosmological redshift is directly related to the scale factor $`a(t)`$, or the “size” of the Universe. For an object at redshift $`z`$
$$1+z=\frac{a(t_\mathrm{o})}{a(t_\mathrm{e})}$$
(12)
where $`a(t_\mathrm{o})`$ is the size of the Universe at the time the light from the object is observed, and $`a(t_\mathrm{e})`$ is the size at the time it was emitted.
Redshift is almost always determined with respect to us (or the frame centered on us but stationary with respect to the microwave background), but it is possible to define the redshift $`z_{12}`$ between objects 1 and 2, both of which are cosmologically redshifted relative to us: the redshift $`z_{12}`$ of an object at redshift $`z_2`$ relative to a hypothetical observer at redshift $`z_1<z_2`$ is given by
$$1+z_{12}=\frac{a(t_1)}{a(t_2)}=\frac{1+z_2}{1+z_1}$$
(13)
## 4 Comoving distance (line-of-sight)
A small comoving distance $`\delta D_\mathrm{C}`$ between two nearby objects in the Universe is the distance between them which remains constant with epoch if the two objects are moving with the Hubble flow. In other words, it is the distance between them which would be measured with rulers at the time they are being observed (the proper distance) divided by the ratio of the scale factor of the Universe then to now; it is the proper distance multiplied by $`(1+z)`$. The total line-of-sight comoving distance $`D_\mathrm{C}`$ from us to a distant object is computed by integrating the infinitesimal $`\delta D_\mathrm{C}`$ contributions between nearby events along the radial ray from $`z=0`$ to the object.
Following Peebles (1993, pp 310–321) (who calls the transverse comoving distance by the confusing name “angular size distance,” which is not the same as “angular diameter distance” introduced below), we define the function
$$E(z)\equiv \sqrt{\mathrm{\Omega }_\mathrm{M}(1+z)^3+\mathrm{\Omega }_k(1+z)^2+\mathrm{\Omega }_\mathrm{\Lambda }}$$
(14)
which is proportional to the time derivative of the logarithm of the scale factor (ie, $`\dot{a}(t)/a(t)`$), with $`z`$ redshift and $`\mathrm{\Omega }_\mathrm{M}`$, $`\mathrm{\Omega }_k`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ the three density parameters defined above. (For this reason, $`H(z)=H_0E(z)`$ is the Hubble constant as measured by a hypothetical astronomer working at redshift $`z`$.) Since $`dz=-da`$, $`dz/E(z)`$ is proportional to the time-of-flight of a photon traveling across the redshift interval $`dz`$, divided by the scale factor at that time. Since the speed of light is constant, this is a proper distance divided by the scale factor, which is the definition of a comoving distance. The total line-of-sight comoving distance is then given by integrating these contributions, or
$$D_\mathrm{C}=D_\mathrm{H}\int _0^z\frac{dz^{\prime }}{E(z^{\prime })}$$
(15)
where $`D_\mathrm{H}`$ is the Hubble distance defined by (4).
In some sense the line-of-sight comoving distance is the fundamental distance measure in cosmography since, as will be seen below, all others are quite simply derived in terms of it. The line-of-sight comoving distance between two nearby events (ie, close in redshift or distance) is the distance which we would measure locally between the events today if those two points were locked into the Hubble flow. It is the correct distance measure for measuring aspects of large-scale structure imprinted on the Hubble flow, eg, distances between “walls.”
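For readers who prefer code to integrals, here is a minimal Python transcription of Eqs. (14) and (15); the C routines mentioned in the introduction are the author's, while this sketch is ours and works in $`h^{-1}`$ Mpc:

```python
import numpy as np
from scipy.integrate import quad

D_H = 3000.0  # Hubble distance, Eq. (4), in h^-1 Mpc

def E(z, om, ol):
    """E(z) of Eq. (14); om, ol are Omega_M and Omega_Lambda."""
    ok = 1.0 - om - ol
    return np.sqrt(om*(1.0 + z)**3 + ok*(1.0 + z)**2 + ol)

def D_C(z, om, ol):
    """Line-of-sight comoving distance, Eq. (15), in h^-1 Mpc."""
    return D_H * quad(lambda zp: 1.0/E(zp, om, ol), 0.0, z)[0]
```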
## 5 Comoving distance (transverse)
The comoving distance between two events at the same redshift or distance but separated on the sky by some angle $`\delta \theta `$ is $`D_\mathrm{M}\delta \theta `$ and the transverse comoving distance $`D_\mathrm{M}`$ (so-denoted for a reason explained below) is simply related to the line-of-sight comoving distance $`D_\mathrm{C}`$:
$$D_\mathrm{M}=\{\begin{array}{cc}D_\mathrm{H}\frac{1}{\sqrt{\mathrm{\Omega }_k}}\mathrm{sinh}\left[\sqrt{\mathrm{\Omega }_k}D_\mathrm{C}/D_\mathrm{H}\right]\hfill & \mathrm{for}\mathrm{\Omega }_k>0\hfill \\ D_\mathrm{C}\hfill & \mathrm{for}\mathrm{\Omega }_k=0\hfill \\ D_\mathrm{H}\frac{1}{\sqrt{|\mathrm{\Omega }_k|}}\mathrm{sin}\left[\sqrt{|\mathrm{\Omega }_k|}D_\mathrm{C}/D_\mathrm{H}\right]\hfill & \mathrm{for}\mathrm{\Omega }_k<0\hfill \end{array}$$
(16)
where the trigonometric functions $`\mathrm{sinh}`$ and $`\mathrm{sin}`$ account for what is called “the curvature of space.” (Space curvature is not coordinate-free; a change of coordinates makes space flat; the only coordinate-free curvature is space–time curvature, which is related to the local mass–energy density or really stress–energy tensor.) For $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, there is an analytic solution to the equations
$$D_\mathrm{M}=D_\mathrm{H}\frac{2[2-\mathrm{\Omega }_\mathrm{M}(1-z)-(2-\mathrm{\Omega }_\mathrm{M})\sqrt{1+\mathrm{\Omega }_\mathrm{M}z}]}{\mathrm{\Omega }_\mathrm{M}^2(1+z)}\mathrm{for}\mathrm{\Omega }_\mathrm{\Lambda }=0$$
(17)
(Weinberg, 1972, p. 485; Peebles, 1993, pp 320–321). Some (eg, Weedman, 1986, pp 59–60) call this distance measure “proper distance,” which, though common usage, is bad style.<sup>2</sup><sup>2</sup>2The word “proper” has a specific use in relativity. The proper time between two nearby events is the time delay between the events in the frame in which they take place at the same location, and the proper distance between two nearby events is the distance between them in the frame in which they happen at the same time. In the cosmological context, it is the distance measured by a ruler at the time of observation. The transverse comoving distance $`D_\mathrm{M}`$ is not a proper distance—it is a proper distance divided by a ratio of scale factors.
(Although these notes follow the Peebles derivation, there is a qualitatively distinct method using what is known as the development angle $`\chi `$, which increases as the Universe evolves. This method is generally preferred by relativists; eg, Misner, Thorne & Wheeler 1973, pp 782–785).
The comoving distance happens to be equivalent to the proper motion distance (hence the name $`D_\mathrm{M}`$), defined as the ratio of the actual transverse velocity (in distance over time) of an object to its proper motion (in radians per unit time) (Weinberg, 1972, pp 423–424). The proper motion distance is plotted in Figure 1. Proper motion distance is used, for example, in computing radio jet velocities from knot motion.
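Building on the snippet above, Eq. (16) becomes a three-way branch; the closed form Eq. (17) provides a check for $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$:

```python
def D_M(z, om, ol):
    """Transverse comoving (proper motion) distance, Eq. (16)."""
    ok = 1.0 - om - ol
    dc = D_C(z, om, ol)
    if ok > 0.0:
        return D_H/np.sqrt(ok) * np.sinh(np.sqrt(ok)*dc/D_H)
    if ok < 0.0:
        return D_H/np.sqrt(-ok) * np.sin(np.sqrt(-ok)*dc/D_H)
    return dc

# check against Eq. (17): Omega_M = 1, z = 1 gives (2 - sqrt(2)) D_H
assert abs(D_M(1.0, 1.0, 0.0) - (2.0 - np.sqrt(2.0))*D_H) < 1e-3*D_H
```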
## 6 Angular diameter distance
The angular diameter distance $`D_\mathrm{A}`$ is defined as the ratio of an object’s physical transverse size to its angular size (in radians). It is used to convert angular separations in telescope images into proper separations at the source. It is famous for not increasing indefinitely as $`z\to \infty `$; it turns over at $`z\sim 1`$ and thereafter more distant objects actually appear larger in angular size. Angular diameter distance is related to the transverse comoving distance by
$$D_\mathrm{A}=\frac{D_\mathrm{M}}{1+z}$$
(18)
(Weinberg, 1972, pp 421–424; Weedman, 1986, pp 65–67; Peebles, 1993, pp 325–327). The angular diameter distance is plotted in Figure 2. At high redshift, the angular diameter distance is such that 1 arcsec is on the order of 5 kpc.
There is also an angular diameter distance $`D_{\mathrm{A12}}`$ between two objects at redshifts $`z_1`$ and $`z_2`$, frequently used in gravitational lensing. It is not found by subtracting the two individual angular diameter distances! The correct formula, for $`\mathrm{\Omega }_k0`$, is
$$D_{\mathrm{A12}}=\frac{1}{1+z_2}\left[D_{\mathrm{M2}}\sqrt{1+\mathrm{\Omega }_k\frac{D_{\mathrm{M1}}^2}{D_\mathrm{H}^2}}-D_{\mathrm{M1}}\sqrt{1+\mathrm{\Omega }_k\frac{D_{\mathrm{M2}}^2}{D_\mathrm{H}^2}}\right]$$
(19)
where $`D_{\mathrm{M1}}`$ and $`D_{\mathrm{M2}}`$ are the transverse comoving distances to $`z_1`$ and $`z_2`$, $`D_\mathrm{H}`$ is the Hubble distance, and $`\mathrm{\Omega }_k`$ is the curvature density parameter (Peebles, 1993, pp 336–337). Unfortunately, the above formula is not correct for $`\mathrm{\Omega }_k<0`$ (Phillip Helbig, 1998, private communication).
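Continuing the sketch, Eqs. (18) and (19) (the latter only for $`\mathrm{\Omega }_k0`$ non-negative, as cautioned above):

```python
def D_A(z, om, ol):
    """Angular diameter distance, Eq. (18)."""
    return D_M(z, om, ol) / (1.0 + z)

def D_A12(z1, z2, om, ol):
    """Angular diameter distance between z1 < z2, Eq. (19).
    Valid only for non-negative Omega_k, as noted in the text."""
    ok = 1.0 - om - ol
    dm1, dm2 = D_M(z1, om, ol), D_M(z2, om, ol)
    return (dm2*np.sqrt(1.0 + ok*dm1**2/D_H**2)
            - dm1*np.sqrt(1.0 + ok*dm2**2/D_H**2)) / (1.0 + z2)
```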
## 7 Luminosity distance
The luminosity distance $`D_\mathrm{L}`$ is defined by the relationship between bolometric (ie, integrated over all frequencies) flux $`S`$ and bolometric luminosity $`L`$:
$$D_\mathrm{L}\equiv \sqrt{\frac{L}{4\pi S}}$$
(20)
It turns out that this is related to the transverse comoving distance and angular diameter distance by
$$D_\mathrm{L}=(1+z)D_\mathrm{M}=(1+z)^2D_\mathrm{A}$$
(21)
(Weinberg, 1972, pp 420–424; Weedman, 1986, pp 60–62). The latter relation follows from the fact that the surface brightness of a receding object is reduced by a factor $`(1+z)^4`$, and the angular area goes down as $`D_\mathrm{A}^{-2}`$. The luminosity distance is plotted in Figure 3.
If the concern is not with bolometric quantities but rather with differential flux $`S_\nu `$ and luminosity $`L_\nu `$, as is usually the case in astronomy, then a correction, the k-correction, must be applied to the flux or luminosity because the redshifted object is emitting flux in a different band than that in which you are observing. The k-correction depends on the spectrum of the object in question, and is unnecessary only if the object has spectrum $`\nu L_\nu =\mathrm{constant}`$. For any other spectrum the differential flux $`S_\nu `$ is related to the differential luminosity $`L_\nu `$ by
$$S_\nu =(1+z)\frac{L_{(1+z)\nu }}{L_\nu }\frac{L_\nu }{4\pi D_\mathrm{L}^2}$$
(22)
where $`z`$ is the redshift, the ratio of luminosities equalizes the difference in flux between the observed and emitted bands, and the factor of $`(1+z)`$ accounts for the redshifting of the bandwidth. Similarly, for differential flux per unit wavelength,
$$S_\lambda =\frac{1}{(1+z)}\frac{L_{\lambda /(1+z)}}{L_\lambda }\frac{L_\lambda }{4\pi D_\mathrm{L}^2}$$
(23)
(Peebles, 1993, pp 330–331; Weedman, 1986, pp 60–62). In this author’s opinion, the most natural flux unit is differential flux per unit log frequency or log wavelength $`\nu S_\nu =\lambda S_\lambda `$ for which there is no redshifting of the bandpass so
$$\nu S_\nu =\frac{\nu _\mathrm{e}L_{\nu _\mathrm{e}}}{4\pi D_\mathrm{L}^2}$$
(24)
where $`\nu _\mathrm{e}=(1+z)\nu `$ is the emitted frequency. These equations are straightforward to generalize to bandpasses of finite width.
The apparent magnitude $`m`$ of an astronomical source in a photometric bandpass is defined to be the ratio of the apparent flux of that source to the apparent flux of the bright star Vega, through that bandpass (don’t ask me about “AB magnitudes”). The distance modulus $`DM`$ is defined by
$$DM\equiv 5\mathrm{log}\left(\frac{D_\mathrm{L}}{10\mathrm{pc}}\right)$$
(25)
because it is the magnitude difference between an object’s observed bolometric flux and what it would be if it were at $`10\mathrm{pc}`$ (this was once thought to be the distance to Vega). The distance modulus is plotted in Figure 4. The absolute magnitude $`M`$ is the astronomer’s measure of luminosity, defined to be the apparent magnitude the object in question would have if it were at 10 pc, so
$$m=M+DM+K$$
(26)
where $`K`$ is the k-correction
$$K=-2.5\mathrm{log}\left[(1+z)\frac{L_{(1+z)\nu }}{L_\nu }\right]=-2.5\mathrm{log}\left[\frac{1}{(1+z)}\frac{L_{\lambda /(1+z)}}{L_\lambda }\right]$$
(27)
(eg, Oke & Sandage, 1968).
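The luminosity distance and distance modulus of Eqs. (21) and (25) follow directly; note that converting from $`h^{-1}`$ Mpc to pc requires choosing $`h`$, which is an extra input of this sketch, not part of Eq. (25):

```python
def D_L(z, om, ol):
    """Luminosity distance, Eq. (21)."""
    return (1.0 + z) * D_M(z, om, ol)

def dist_mod(z, om, ol, h=0.7):
    """Distance modulus, Eq. (25); h converts h^-1 Mpc to pc."""
    dl_pc = D_L(z, om, ol) / h * 1.0e6
    return 5.0 * np.log10(dl_pc / 10.0)
```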
## 8 Parallax distance
If it were possible to measure parallaxes for high redshift objects, the distance so measured would be the parallax distance $`D_\mathrm{P}`$ (Weinberg, 1972, pp 418–420). It may be possible, one day, to measure parallaxes to distant galaxies using gravitational lensing, although in these cases, a modified parallax distance is used which takes into account the redshifts of both the source and the lens (Schneider, Ehlers & Falco, 1992, pp 508–509), a discussion of which is beyond the scope of these notes.
## 9 Comoving volume
The comoving volume $`V_\mathrm{C}`$ is the volume measure in which number densities of non-evolving objects locked into Hubble flow are constant with redshift. It is the proper volume times three factors of the relative scale factor now to then, or $`(1+z)^3`$. Since the derivative of comoving distance with redshift is $`1/E(z)`$ defined in (14), the angular diameter distance converts a solid angle $`d\mathrm{\Omega }`$ into a proper area, and two factors of $`(1+z)`$ convert a proper area into a comoving area, the comoving volume element in solid angle $`d\mathrm{\Omega }`$ and redshift interval $`dz`$ is
$$dV_\mathrm{C}=D_\mathrm{H}\frac{(1+z)^2D_\mathrm{A}^2}{E(z)}d\mathrm{\Omega }dz$$
(28)
where $`D_\mathrm{A}`$ is the angular diameter distance at redshift $`z`$ and $`E(z)`$ is defined in (14) (Weinberg, 1972, p. 486; Peebles, 1993, pp 331–333). The comoving volume element is plotted in Figure 5. The integral of the comoving volume element from the present to redshift $`z`$ gives the total comoving volume, all-sky, out to redshift $`z`$
$$V_\mathrm{C}=\{\begin{array}{cc}\left(\frac{4\pi D_\mathrm{H}^3}{2\mathrm{\Omega }_k}\right)\left[\frac{D_\mathrm{M}}{D_\mathrm{H}}\sqrt{1+\mathrm{\Omega }_k\frac{D_\mathrm{M}^2}{D_\mathrm{H}^2}}-\frac{1}{\sqrt{|\mathrm{\Omega }_k|}}\mathrm{arcsinh}\left(\sqrt{|\mathrm{\Omega }_k|}\frac{D_\mathrm{M}}{D_\mathrm{H}}\right)\right]\hfill & \mathrm{for}\mathrm{\Omega }_k>0\hfill \\ \frac{4\pi }{3}D_\mathrm{M}^3\hfill & \mathrm{for}\mathrm{\Omega }_k=0\hfill \\ \left(\frac{4\pi D_\mathrm{H}^3}{2\mathrm{\Omega }_k}\right)\left[\frac{D_\mathrm{M}}{D_\mathrm{H}}\sqrt{1+\mathrm{\Omega }_k\frac{D_\mathrm{M}^2}{D_\mathrm{H}^2}}-\frac{1}{\sqrt{|\mathrm{\Omega }_k|}}\mathrm{arcsin}\left(\sqrt{|\mathrm{\Omega }_k|}\frac{D_\mathrm{M}}{D_\mathrm{H}}\right)\right]\hfill & \mathrm{for}\mathrm{\Omega }_k<0\hfill \end{array}$$
(29)
(Carroll, Press & Turner, 1992), where $`D_\mathrm{H}^3`$ is sometimes called the Hubble volume. The comoving volume element and its integral are both used frequently in predicting number counts or luminosity densities.
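In code it is often simpler to integrate the volume element Eq. (28) directly than to evaluate the closed form Eq. (29); the two agree, eg:

```python
def dVC_dz(z, om, ol):
    """All-sky comoving volume element, Eq. (28) integrated over
    4 pi sr, in (h^-1 Mpc)^3 per unit redshift."""
    return 4.0*np.pi * D_H * (1.0 + z)**2 * D_A(z, om, ol)**2 / E(z, om, ol)

def V_C(z, om, ol):
    """Total comoving volume out to z; direct integration of Eq. (28),
    equivalent to the closed form Eq. (29)."""
    return quad(lambda zp: dVC_dz(zp, om, ol), 0.0, z)[0]
```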
## 10 Lookback time
The lookback time $`t_\mathrm{L}`$ to an object is the difference between the age $`t_\mathrm{o}`$ of the Universe now (at observation) and the age $`t_\mathrm{e}`$ of the Universe at the time the photons were emitted (according to the object). It is used to predict properties of high-redshift objects with evolutionary models, such as passive stellar evolution for galaxies. Recall that $`E(z)`$ is the time derivative of the logarithm of the scale factor $`a(t)`$; the scale factor is proportional to $`(1+z)^{-1}`$, so the product $`(1+z)E(z)`$ is proportional to the derivative of $`z`$ with respect to the lookback time, or
$$t_\mathrm{L}=t_\mathrm{H}\int _0^z\frac{dz^{\prime }}{(1+z^{\prime })E(z^{\prime })}$$
(30)
(Peebles, 1993, pp 313–315; Kolb & Turner 1990, pp 52–56, give some analytic solutions to this equation, but they are concerned with the age $`t(z)`$, so they integrate from $`z`$ to $`\mathrm{}`$). The lookback time and age are plotted in Figure 6.
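Eq. (30) in the same style:

```python
t_H = 9.78  # Hubble time, Eq. (3), in h^-1 Gyr

def t_L(z, om, ol):
    """Lookback time, Eq. (30), in h^-1 Gyr."""
    return t_H * quad(lambda zp: 1.0/((1.0 + zp)*E(zp, om, ol)), 0.0, z)[0]
```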
## 11 Probability of intersecting objects
Given a population of objects with comoving number density $`n(z)`$ (number per unit volume) and cross section $`\sigma (z)`$ (area), what is the incremental probability $`dP`$ that a line of sight will intersect one of the objects in redshift interval $`dz`$ at redshift $`z`$? Questions of this form are asked frequently in the study of QSO absorption lines or pencil-beam redshift surveys. The answer is
$$dP=n(z)\sigma (z)D_\mathrm{H}\frac{(1+z)^2}{E(z)}dz$$
(31)
(Peebles, 1993, pp 323–325). The dimensionless differential intersection probability is plotted in Figure 7.
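And Eq. (31), with the comoving number density and cross section supplied by the user:

```python
def dP_dz(z, n_com, sigma, om, ol):
    """Differential intersection probability, Eq. (31); n_com in
    (h^-1 Mpc)^-3 and sigma in (h^-1 Mpc)^2."""
    return n_com * sigma * D_H * (1.0 + z)**2 / E(z, om, ol)
```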
## Acknowledgments
Roger Blandford, Ed Farhi, Jim Peebles and Wal Sargent all contributed generously to my understanding of this material and Kurt Adelberger, Lee Armus, Andrew Baker, Deepto Chakrabarty, Alex Filippenko, Andrew Hamilton, Phillip Helbig, Wayne Hu, John Huchra, Daniel Mortlock, Tom Murphy, Gerry Neugebauer, Adam Riess, Paul Schechter, Douglas Scott and Ned Wright caught errors, suggested additional material, or helped me with wording, conventions or terminology. I thank the NSF and NASA for financial support.
## 12 References
* Blandford R. & Narayan R., 1992, Cosmological applications of gravitational lensing, ARA&A 30 311–358
* Carroll S. M., Press W. H. & Turner E. L., 1992, The cosmological constant, ARA&A 30 499–542
* Fairall A. P., 1992, A caution to those who measure galaxy redshifts, Observatory 112 286
* Harrison E., 1993, The redshift–distance and velocity–distance laws, ApJ 403 28–31
* Kayser R., Helbig P. & Schramm T., 1997, A general and practical method for calculating cosmological distances, A&A 318 680–686
* Kolb E. W. & Turner M. S., 1990, The Early Universe, Addison-Wesley, Redwood City
* Misner C. W., Thorne K. S. & Wheeler J. A., 1973, Gravitation, W. H. Freeman & Co., New York
* Oke J. B. & Sandage A., 1968, Energy distributions, k corrections, and the Stebbins-Whitford effect for giant elliptical galaxies, ApJ 154 21
* Peebles P. J. E., 1993, Principles of Physical Cosmology, Princeton University Press, Princeton
* Schneider P., Ehlers J. & Falco E. E., 1992, Gravitational Lensing, Springer, Berlin
* Weedman D. W., 1986, Quasar Astronomy, Cambridge University, Cambridge
* Weinberg S., 1972, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, John Wiley & Sons, New York
# On Signature Transition and Compactification in Kaluza-Klein Cosmology
## 1 Introduction
The question of signature transition in classical and quantum cosmological models has been of some interest in the past few years. It was first addressed in the work of Hartle and Hawking in which they examined quantum cosmologies admitting quantum amplitudes in the form of a sum over all compact Riemannian manifolds whose boundaries coincide with the loci of signature change. Other workers have also investigated this question and considered signature transition in general relativity by adopting model theories which mostly rely on Einstein’s field equations coupling to a scalar field . The solution of the resulting field equations under a properly parametrized metric, when interpreted suitably, would then indicate a change of signature.
A particular model, relevant to the present work, is that of Dereli and Tucker in which a self interacting scalar field is coupled to Einstein’s field equations with a potential containing a Sinh-Gordon scalar interaction. These equations are then solved exactly for the scalar field and the scale factor as dynamical variables, giving rise to cosmological solutions with a degenerate metric, describing a continuous signature transition from a Euclidean domain to a Lorentzian space-time in a spatially flat Robertson-Walker cosmology. The corresponding quantum cosmology has also been investigated where the Wheeler-DeWitt equation arises from an anisotropic oscillator-ghost-oscillator vanishing Hamiltonian. It is solved exactly and leads to normalizable states with the quantum states constructed as belonging to distinct Hilbert subspaces each of which being characterized by a particular “quantization” condition on the parameters of the scalar field potential. These quantum states correspond to excited quantum cosmologies and relate to classical solutions without resort to WKB approximation techniques. A similar analysis of the classical and quantum theory of a scalar (dilaton) field interacting with gravity has been reported in two dimensions in which a class of analytic solutions to the Wheeler-DeWitt equation relate, in a remarkable way, to the general solution of the classical field equations.
In this paper, we consider a (4+1) dimensional Kaluza-Klein cosmology with a negative cosmological constant and a Robertson-Walker type metric having two dynamical variables, the usual scale factor $`R`$ and the internal scale factor $`a`$. Following and , we insist on a preferred coordinate that controls the evolution of signature dynamics, seeking suitably smooth continuous solutions for $`R`$ and $`a`$ passing through the hypersurface of signature change. These classical solutions admitting signature transition would then suggest a compactification mechanism for the internal scale factor $`a`$. Some authors have already considered various compactification mechanisms in different models. In particular in , the author has suggested a compactification mechanism based on signature change for a positive cosmological constant. Here, we will discuss the differences and similarities between these models and the one presented here in order to see how the results can be compared. We then find exact solutions to the corresponding Wheeler-DeWitt equation arising from an isotropic oscillator-ghost-oscillator vanishing Hamiltonian. Due to this isotropy, there is no “quantization” condition and thus no excited cosmologies. However, we show that the desired quantum state, identified with a non-dispersive wave packet, relates to classical solutions in that it peaks in the vicinity of the classical loci corresponding to this cosmology.
## 2 Classical Cosmology
We start with the metric considered in in which the space-time is assumed to be of Robertson-Walker type having a compactified space which is assumed to be the circle $`S^1`$. In this paper we adopt the real chart $`\{\beta ,r^1,r^2,r^3,\rho \}`$ with $`\beta `$, $`r^i`$ and $`\rho `$ denoting the lapse function, the space coordinates and the compactified space coordinate respectively. We therefore take
$$ds^2=-\beta d\beta ^2+\overline{R}^2(\beta )\frac{dr^idr^i}{(1+\frac{kr^2}{4})^2}+\overline{a}^2(\beta )d\rho ^2,\qquad i=1,2,3$$
(1)
where $`k=0,\pm 1`$ and $`\overline{R}(\beta )`$ is the scale factor and $`\overline{a}(\beta )`$ is the radius of the compactified space, both of which are assumed to depend only on the lapse function $`\beta `$. The signature of the metric is Lorentzian for $`\beta >0`$ and Euclidean for $`\beta <0`$. For positive values of $`\beta `$ (Lorentzian region), one can recover the cosmic time by writing $`t=\frac{2}{3}\beta ^{\frac{3}{2}}`$, leading to
$$ds^2=-dt^2+R^2(t)\frac{dr^idr^i}{(1+\frac{kr^2}{4})^2}+a^2(t)d\rho ^2$$
(2)
where $`R(t)=\overline{R}(\beta (t))`$ and $`a(t)=\overline{a}(\beta (t))`$ in the $`\{t,r^i,\rho \}`$ chart. As in , we formulate our differential equations in a region that does not include $`\beta =0`$ and seek real solutions for $`R`$ and $`a`$ smoothly passing through the $`\beta =0`$ hypersurface. The curvature scalar corresponding to metric (1) is obtained as
$$\mathcal{R}=6\left[\frac{\ddot{R}}{R}+\frac{k+\dot{R}^2}{R^2}\right]+2\frac{\ddot{a}}{a}+6\frac{\dot{R}}{R}\frac{\dot{a}}{a}$$
(3)
where a dot represents differentiation with respect to $`t`$. Substituting this result into the Einstein-Hilbert action with a cosmological constant $`\mathrm{\Lambda }`$
$$I=\int \sqrt{-g}\left(\mathcal{R}-\mathrm{\Lambda }\right)dtd^3rd\rho $$
(4)
and integrating over spatial dimensions gives an effective Lagrangian $`L`$ in the mini-superspace ($`R`$,$`a`$) as
$$L=\frac{1}{2}Ra\dot{R}^2+\frac{1}{2}R^2\dot{R}\dot{a}-\frac{1}{2}kRa+\frac{1}{6}\mathrm{\Lambda }R^3a.$$
(5)
## 3 Solutions
By defining $`\omega ^2\equiv -\frac{2\mathrm{\Lambda }}{3}`$ and changing the variables as
$$u=\frac{1}{\sqrt{8}}\left[R^2+Ra-\frac{3k}{\mathrm{\Lambda }}\right],v=\frac{1}{\sqrt{8}}\left[R^2-Ra-\frac{3k}{\mathrm{\Lambda }}\right]$$
(6)
$`L`$ takes on the form
$$L=\frac{1}{2}\left[(\dot{u}^2-\omega ^2u^2)-(\dot{v}^2-\omega ^2v^2)\right].$$
(7)
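As a cross-check of this transformation, the algebra can be verified symbolically. The sketch below is our own, not part of the paper; it assumes the sign conventions used above, in particular $`\omega ^2=-\frac{2\mathrm{\Lambda }}{3}`$, and confirms that the Lagrangian (5) is identical to the oscillator-ghost-oscillator form (7) under the substitution (6):

```python
import sympy as sp

t = sp.symbols('t', real=True)
k, Lam = sp.symbols('k Lambda', real=True, nonzero=True)
R = sp.Function('R')(t)
a = sp.Function('a')(t)

# Effective Lagrangian (5) in the (R, a) mini-superspace
L_Ra = (sp.Rational(1, 2)*R*a*R.diff(t)**2
        + sp.Rational(1, 2)*R**2*R.diff(t)*a.diff(t)
        - sp.Rational(1, 2)*k*R*a + Lam/6*R**3*a)

# Change of variables (6), with omega^2 = -2*Lambda/3
u = (R**2 + R*a - 3*k/Lam) / sp.sqrt(8)
v = (R**2 - R*a - 3*k/Lam) / sp.sqrt(8)
w2 = -2*Lam/3

# Oscillator-ghost-oscillator form (7)
L_uv = sp.Rational(1, 2)*((u.diff(t)**2 - w2*u**2)
                          - (v.diff(t)**2 - w2*v**2))

print(sp.simplify(L_Ra - L_uv))   # prints 0: the two forms agree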
A point $`(u,v)`$ in this mini-superspace represents a 4-geometry. The classical equations of motion are given by
$$\ddot{u}=-\omega ^2u,\ddot{v}=-\omega ^2v.$$
(8)
Choosing the initial conditions $`\dot{u}(0)=\dot{v}(0)=0`$ , the solutions are obtained as
$$u(t)=A\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}t\right),v(t)=B\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}t\right)$$
(9)
where $`A`$ and $`B`$ are constants to be determined later. Assuming the full (4+1) dimensional Einstein equations to hold, this implies that the Hamiltonian corresponding to $`L`$ in (7) must vanish, that is
$$H=\frac{1}{2}\left[(\dot{u}^2+\omega ^2u^2)-(\dot{v}^2+\omega ^2v^2)\right]=0$$
(10)
which describes an isotropic oscillator-ghost-oscillator system. Now, the solutions (9) must satisfy the constraint of vanishing Hamiltonian. Thus, substitution of equations (9) into (10) gives a relation between the constants $`A`$ and $`B`$
$$A=\pm B$$
(11)
implying that we can rewrite the solutions (9) as
$$u(t)=A\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}t\right),v(t)=ϵA\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}t\right)$$
(12)
where $`ϵ`$ takes the values $`\pm 1`$ according to the choices in (11). Classical solutions (12) may be displayed as trajectories $`u=\pm v`$ in the mini-superspace ($`u,v`$). We recover $`R(t)`$ and $`a(t)`$ from $`u(t)`$ and $`v(t)`$ as
$$\begin{array}{cc}R(t)=\left[\sqrt{2}(u(t)+v(t))+\frac{3k}{\mathrm{\Lambda }}\right]^{1/2}\hfill & \\ & \\ a(t)=\left[\sqrt{2}(u(t)+v(t))+\frac{3k}{\mathrm{\Lambda }}\right]^{-1/2}\left[\sqrt{2}(u(t)-v(t))\right].\hfill & \end{array}$$
(13)
For $`ϵ=-1`$ one finds the solutions in terms of $`\beta `$ as
$$\begin{array}{cc}R_c=\sqrt{\frac{3k}{\mathrm{\Lambda }}}\hfill & \\ & \\ a(\beta )=\sqrt{\frac{\mathrm{\Lambda }}{3k}}\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}\frac{2}{3}\beta ^{3/2}\right)\hfill & \end{array}$$
(14)
where $`A=\frac{1}{\sqrt{8}}`$ is taken for convenience<sup>1</sup><sup>1</sup>1 Note that the dimension of $`A`$ is (length)<sup>2</sup>. and solutions (12) have been used. Also for $`ϵ=+1`$, one finds the solutions
$$\begin{array}{cc}R(\beta )=\left[\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}\frac{2}{3}\beta ^{3/2}\right)+\frac{3k}{\mathrm{\Lambda }}\right]^{1/2}\hfill & \\ & \\ a=0.\hfill & \end{array}$$
(15)
## 4 Signature Transition and Compactification
In this section we discuss signature transition occurring in the model presented above and show that it induces compactification on the internal space. However, before doing so we give a brief history of the mechanisms of compactification related to the present work, in particular the recent works discussed in and .
In , it is shown that for a higher dimensional FRW model with $`S^3\times S^6`$ as spatial sections with two scale factors $`a_1,a_2`$ and a positive cosmological constant, the classical signature change induces compactification. This is done by a dynamical mechanism that drags the size of $`S^6`$ down and gives rise to a long-time stability at an unobservably small scale. This mechanism is based on the existence of a signature transition and the interplay between the causal structure of the Wheeler-DeWitt metric and the sign of the corresponding potential $`W`$ appearing in the action defined in the mini-superspace ($`a_1,a_2`$). In order to diagonalize the kinetic term in the action a new set of variables ($`u,v`$) are defined. In this new mini-superspace the curve satisfying $`W=0`$ consists of two branches which are connected smoothly. In one branch, the compactification occurs for $`S^6`$ and in the other, it occurs for $`S^3`$. For the compactification of $`S^6`$, one finds $`a_1(t)`$ oscillating around some linearly growing average, whereas $`a_2(t)`$ performs damped oscillations around a limiting value of order $`\mathrm{\Lambda }^{-\frac{1}{2}}`$, both solutions being stable against small perturbations. The effective five-dimensional space-time metric, obtained by taking the proper time average, has a Lorentzian signature, undergoes exponential inflation (in $`\mathrm{\Lambda }`$) in $`S^3`$ and induces compactification on $`S^6`$ of order $`\mathrm{\Lambda }^{-\frac{1}{2}}`$. This requires a large cosmological constant in order to be consistent with the unobservability of the compactified dimensions. On the other hand, stopping inflation requires switching off $`\mathrm{\Lambda }`$, but then the radius of the compactified $`S^6`$ will blow up. Usually, this type of problem is expected in the presence of a large positive cosmological constant whose possible solutions are suggested in .
In however, the compactification mechanism is studied for a $`D+1`$ dimensional toroidally compact Kaluza-Klein cosmology with a negative cosmological constant consisting of matter that is either dust or coherent excitations of a dilaton field. In this model, compactification is done by diagonalizing the classical Hamiltonian which leads to a new $`D`$-dimensional mini-superspace. Applying the Heisenberg equations of motion and taking the expectation values, one finds the cosmic time dependence of the expectation values of the mini-superspace variables. It is then shown that the expectation values of some of the dimensions show a quantum inflationary phase while simultaneously the remaining dimensions show a quantum deflationary phase giving rise to compactification. In this model, the eternal inflation-compactification is a problem whose solution is based on the tunneling of the negative cosmological constant to zero in the context of a dilaton field model having a potential with two local minima. The quantum inflation-deflation (QID) era is then realized by oscillations of the dilaton field around the absolute minimum (where $`\mathrm{\Lambda }<0`$). After tunneling to $`\mathrm{\Lambda }\approx 0`$ this quantum phase disappears allowing a classical description at later times.
There are differences and similarities between the models described above and the one presented in this paper. In our model, the metric is a 5-dimensional Kaluza-Klein with a negative cosmological constant whereas in , the metric is 10-dimensional with a cosmological constant which is positive. However, both models use signature transition as the process for addressing compactification, but through two completely different mechanisms. As for the model presented in , we find a similarity in their assumption of a negative cosmological constant, but their compactification mechanism and matter contents are very different. They use either dust or a dilaton field as the matter content, contrary to our model in which there is no matter. However, in spite of the above differences, the behaviour of the two scale factors $`R`$ and $`a`$ in the present model merits some discussion under various possible choices of $`\mathrm{\Lambda }`$ in order to compare them with that of at the formal level. To this end, we first discuss signature transition in the model presented here by seeking suitably smooth continuous solutions for $`R`$ and $`a`$ passing through the hypersurface of signature change $`\beta =0`$.
The classical solutions (14, 15) describe an empty Kaluza-Klein universe with a negative cosmological constant. When $`ϵ=-1`$, the universe takes the same constant scale factor $`R_c`$ in both Euclidean and Lorentzian regions, hence it is continuous at $`\beta =0`$. The $`\beta `$ dependent scale factor $`a(\beta )`$ is unbounded in the Euclidean region $`\beta <0`$, passing continuously through $`\beta =0`$ and exhibiting bounded oscillations in the Lorentzian region $`\beta >0`$. The reality conditions on $`R_c`$ and $`a(\beta )`$ force $`k`$ to be negative, thus rendering this universe open with $`k=-1`$ <sup>2</sup><sup>2</sup>2The case $`k=0`$ would give rise to a divergent $`a`$ and zero $`R`$.. For large $`|\mathrm{\Lambda }|`$ the solution (14) will give rise to a small constant scale factor $`R`$ passing continuously from Euclidean to Lorentzian regions. The scale factor $`a`$ will be enormously large in both regions compared to the scale factor $`R`$ and passes through $`\beta =0`$ continuously. The scale factor $`R`$ is now compactified to a small size of order $`|\mathrm{\Lambda }|^{-\frac{1}{2}}`$. Taking $`|\mathrm{\Lambda }|`$ small, it is seen that for $`\beta \ll 0`$, $`a(\beta )`$ can become large while, as $`\beta \rightarrow 0^{}`$, it tends to become very small in comparison with $`R_c`$. It passes through $`\beta =0`$ continuously with a very small value $`a(0)=R_c^{-1}`$ and oscillates for $`\beta >0`$ with amplitude $`a(0)`$ and a varying frequency. One of the most interesting features in this case is that signature transition now induces compactification on the scale factor $`a`$ in the Lorentzian region, dragging it to a small size of order $`|\mathrm{\Lambda }|^{\frac{1}{2}}`$. The first zero of this oscillatory function in the Lorentzian region occurs at
$$\beta _0=\left(\frac{3\pi }{4}\sqrt{\frac{3}{2|\mathrm{\Lambda }|}}\right)^{2/3}.$$
It is seen that the smallness of the cosmological constant (large $`\beta _0`$) allows for an extended Lorentzian region $`0\le \beta <\beta _0`$ which would correspond to a Kaluza-Klein cosmology with a large scale factor $`R`$ and a stable compactified scale factor $`a`$, see figure 1. The long-time stability of the compactification is verified for the present bound on the cosmological constant $`|\mathrm{\Lambda }|\lesssim 10^{-56}`$ cm<sup>-2</sup> since then $`\beta _0`$ far exceeds the present age of the universe.
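As a quick numerical illustration (a sketch of ours, using the value $`|\mathrm{\Lambda }|\sim 10^{-3}`$ quoted for figure 1, in the same units as the text), the first zero $`\beta _0`$ can be evaluated directly:

```python
import numpy as np

def beta0(abs_lambda):
    """First zero of a(beta) in the Lorentzian region."""
    return (3*np.pi/4 * np.sqrt(3/(2*abs_lambda)))**(2/3)

print(beta0(1e-3))   # ~ 20.3: an extended Lorentzian region

# Consistency check: for Lambda < 0 and beta > 0 the cosh argument is
# imaginary, so a(beta) ~ cos((2/3)*sqrt(2|Lambda|/3)*beta**1.5);
# the first zero sits where that argument reaches pi/2.
al = 1e-3
arg = (2/3)*np.sqrt(2*al/3)*beta0(al)**1.5
print(np.isclose(arg, np.pi/2))   # True
```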
When $`ϵ=+1`$, the solutions (15) describe an empty Kaluza-Klein universe for which the internal space has zero size. Assuming a negative cosmological constant $`\mathrm{\Lambda }=3k`$ with $`k=-1`$, the solution $`R(\beta )`$ is real and behaves exponentially for $`\beta <0`$, passing through $`\beta =0`$ continuously and oscillating for $`\beta >0`$. This, indeed, compactifies $`R`$ to a small size $`0\le R\le \sqrt{2}`$ in the Lorentzian region. Taking $`|\mathrm{\Lambda }|`$ small leads to a real scale factor $`R(\beta )`$. It has large values in both regions, behaving exponentially for $`\beta \ll 0`$, tending to a large constant value $`|\mathrm{\Lambda }|^{-1/2}`$ as $`\beta \rightarrow 0`$, passing through $`\beta =0`$ continuously and oscillating about this large value for $`\beta >0`$. There is good agreement between $`R`$ and its present observational bound $`R\sim 10^{28}`$ cm for the choice $`|\mathrm{\Lambda }|\sim 10^{-56}`$ cm<sup>-2</sup>.
In conclusion, from the discussion given above, we find that choosing a small cosmological constant would lead to a good agreement between the size of the universe arising from the solutions (14, 15) and its present observational bound. This choice of $`\mathrm{\Lambda }`$ also leads to an agreement between the present unobservability of the size of the compactified dimension $`a`$ and that resulting from the above solutions. As our solutions (14) show, we may recognize two hierarchical phases in the Lorentzian region as
$`t\approx 0,|\mathrm{\Lambda }|\text{ large}\Rightarrow R\sim |\mathrm{\Lambda }|^{-1/2},a\sim |\mathrm{\Lambda }|^{1/2}`$
and
$`0<t<t_0,|\mathrm{\Lambda }|\text{ small}\Rightarrow R\sim |\mathrm{\Lambda }|^{-1/2},a\sim |\mathrm{\Lambda }|^{1/2}`$
both exhibiting the same relation $`Ra\sim 1`$ ($`t_0`$ being the present age of universe). Thus, we have found a classical link between the size of the extra dimension $`a`$ and that of the visible dimension $`R`$ in the Lorentzian region. This relation may account for the existence of a transformation $`\mathrm{\Lambda }\rightarrow \mathrm{\Lambda }^{-1}`$ leading to duality transformations $`R\rightarrow a\sim R^{-1}`$ and $`a\rightarrow R\sim a^{-1}`$. Therefore, such dualities may be studied in the context of a theory in which a large $`|\mathrm{\Lambda }|`$ in the very early universe will result in a small $`|\mathrm{\Lambda }|`$ at later times and hence an initial small $`R`$ becomes very large while simultaneously the initial large $`a`$ compactifies to a very small size. Therefore, if we regard the relation $`Ra\sim 1`$ as the main characteristic of this duality theory then switching off the large cosmological constant would be a natural result in order to be consistent with present observations.
A clear similarity is seen between our model and that of in the sector of internal space compactification. Both of these models predict a deflationary phase for compactification of the internal dimensions before the onset of classical evolution. Considering equation (14) with $`|\mathrm{\Lambda }|`$ small and as is seen in figure 1, there is an exponentially decreasing $`a(\beta )`$ in the Euclidean region $`-\mathrm{\infty }<\beta \le 0`$ after which it undergoes a long-time stability in the Lorentzian region $`0\le \beta <\beta _0`$. Usually, the classical evolution of the universe begins at $`\beta =0`$ and is considered to be in the Lorentzian sector so that the Euclidean sector may be assumed to be a pre-classical era . Therefore, in the context of the present signature changing approach to compactification of the internal space $`a`$ we also find a pre-classical era (Euclidean region) where a deflationary phase occurs after which the classical evolution begins. One may also resort to a mechanism in a more fundamental theory which would lead to a rapid deflationary phase (a large $`|\mathrm{\Lambda }|`$) and then a long time stable phase (a small $`|\mathrm{\Lambda }|`$) for $`a(\beta )`$. In fact, some switching off mechanisms could be proposed by which $`\mathrm{\Lambda }`$ could relax to zero.
An alternative similarity to will be obtained if we consider our model as an effective part of a more fundamental theory in which a negative cosmological constant $`\mathrm{\Lambda }\approx 3k`$ can tunnel to a very small value. Then, considering the Lorentzian solutions (14) with $`\beta >0`$, and assuming $`\mathrm{\Lambda }\approx 3k`$ we have two equally compact sizes $`a\sim R`$ initially at $`t\approx 0`$. After the tunneling of $`|\mathrm{\Lambda }|`$, the scale factor $`R`$ becomes very large and $`a`$ gets very small, both evolving classically thereafter. This picture may be identified, at least at the formal level, with the (QID) model in which, at first, all dimensions have equally compact sizes but after the (QID) phase ends, some of them become larger and the remaining ones get smaller. The cosmological constant can then tunnel from a large negative value to zero and the universe can be described classically afterwards.
Comparison with shows that in our mini-superspace $`(u,v)`$, one finds, as in , two branches $`u=\pm v`$ corresponding to the vanishing of the potential $`W=\frac{1}{2}\omega ^2(u^2-v^2)=0`$, c.f. equation (7). The branch $`u=-v`$ of the curve $`W=0`$ gives the compactification either of $`R`$ or $`a`$ according to the choice of the hierarchical cosmological constants. The other branch $`u=+v`$ leads to the compactification of $`a`$ to zero and $`R`$ to $`0\le R\le \sqrt{2}`$.
As before, of particular interest is the branch $`u=-v`$ with a negative cosmological constant of very small magnitude, in which the size of compactification of the internal scale factor $`a`$, in the Lorentzian region, is of order $`|\mathrm{\Lambda }|^{\frac{1}{2}}`$. At first glance, this result seems to be in conflict with that of where $`a\sim \mathrm{\Lambda }^{-\frac{1}{2}}`$ (for positive $`\mathrm{\Lambda }`$), but since the cosmological constant in is assumed to be large and the one here is small, we see that the results are in agreement <sup>3</sup><sup>3</sup>3To see this, one may compare the long-time behaviour and sizes of $`a`$ in the Lorentzian sector of figure 1 here, and figure 2 in .. The only difference between the result of the compactification mechanisms in and that proposed here concerns the scale factor $`R`$. In , the size of $`S^3`$ undergoes an eternal inflation whereas the scale factor $`R`$ is constant here and in agreement with its observational bound.
## 5 Quantum Cosmology
One of the most interesting topics in the context of quantum cosmology concerns the mechanisms through which classical cosmology may emerge from quantum theory. When does a Wheeler-DeWitt wave function predict a classical space-time? Indeed, any attempt in constructing a viable quantum gravity requires understanding the connections between classical and quantum physics. Much work has been done in this direction over the past decade. Actually, there is some tendency towards using semiclassical approximations in dividing the behaviour of the wave function into two types, oscillatory or exponential, which are supposed to correspond to classically allowed or forbidden regions. Hartle has put forward a simple rule for applying quantum mechanics to a single system (universe): If the wave function is sufficiently peaked about some region in the configuration space we predict that we will observe a correlation between the observables which characterize this region. Halliwell has shown that the oscillatory semiclassical WKB wave function is peaked about a region of the mini-superspace in which the correlation between the coordinate and momentum holds well and stresses that both correlation and decoherence are necessary before one can say a system is classical. Using Wigner functions, Habib and Laflamme have studied the mutual compatibility of these requirements and shown that some form of coarse graining is necessary for classical prediction from WKB wave functions. Alternatively, Gaussian or coherent states with sharply peaked wave functions are often used to obtain classical limits by constructing wave packets.
A new aspect arises in quantum cosmology if the wave packets are to be constructed which would trace out a classical trajectory. In this case, the normalizability of the wave function is needed in order to have a correlation between classical and quantum cosmology. Of course, because of the superposition principle, some interference between the coherent states exists but enlarging the configuration space by adding a large number of higher degrees of freedom interacting with the mini-superspace variables leads to a decoherence effect.
Recently, this aspect of correspondence between classical and quantum cosmology has become of interest , especially in the context of signature transition . In , the authors have exactly solved the Wheeler-DeWitt equation in the form of an anisotropic oscillator-ghost-oscillator constraint and constructed states that highlight the classical trajectories and admit signature transition without resort to WKB approximations, hence avoiding the decoherence problem. Also in , it is shown that a large subset of generalized two-dimensional dilaton gravity models are dynamically equivalent to the isotropic oscillator-ghost-oscillator constraint and there may be correspondence between classical and quantum configurations like that obtained in .
In this section we shall concentrate on those aspects of correspondence between classical and quantum cosmology which are studied in . The Lagrangian in (7) describes a classical Kaluza-Klein cosmology. The corresponding quantum cosmology is described by the Wheeler-DeWitt equation resulting from Hamiltonian (10) and can be written as
$$\left[\frac{\partial ^2}{\partial u^2}-\frac{\partial ^2}{\partial v^2}-(u^2-v^2)\omega ^2\right]\mathrm{\Psi }(u,v)=0$$
(16)
where $`\omega ^2\equiv -\frac{2\mathrm{\Lambda }}{3}`$. This is an isotropic oscillator-ghost-oscillator constraint. For $`\mathrm{\Lambda }<0`$, the “zero energy” solutions belong to a subspace of the Hilbert space spanned by separable eigenfunctions of a 2-dimensional isotropic simple harmonic oscillator Hamiltonian, and can be written as
$$\mathrm{\Phi }_{(n_1,n_2)}(u,v)=\alpha _{n_1}(u)\beta _{n_2}(v),\qquad n_1,n_2=0,1,2,\mathrm{}$$
(17)
with
$$\alpha _n(u)\equiv (\frac{\omega }{\pi })^{1/4}\frac{e^{-\frac{\omega u^2}{2}}}{\sqrt{2^nn!}}H_n(\sqrt{\omega }u),$$
(18)
$$\beta _n(v)\equiv (\frac{\omega }{\pi })^{1/4}\frac{e^{-\frac{\omega v^2}{2}}}{\sqrt{2^nn!}}H_n(\sqrt{\omega }v)$$
(19)
provided $`(n_1+\frac{1}{2})\omega =(n_2+\frac{1}{2})\omega `$ with the restriction $`n_1=n_2\equiv n`$ <sup>4</sup><sup>4</sup>4Note that in there is a quantization condition on $`\omega `$ resulting in distinct Hilbert subspaces corresponding to excited cosmologies. . $`H_n(x)`$ is the Hermite polynomial and the eigenfunctions are normalized according to
$$(\alpha _n,\alpha _m)=\delta _{n,m},(\beta _n,\beta _m)=\delta _{n,m}$$
(20)
where ( , ) denotes the inner product. The solutions $`\mathrm{\Phi }_{(n,n)}(u,v)`$ span a Hilbert subspace of measurable square integrable functions on $`R^2`$ in the form
$$\mathrm{\Psi }(u,v)=\underset{n=0}{\overset{\mathrm{}}{}}c_n\mathrm{\Phi }_{(n,n)}(u,v)$$
(21)
where $`c_n\in C`$ and
$$(\mathrm{\Phi }_{(n,n)},\mathrm{\Phi }_{(n^{},n^{})})=\delta _{n,n^{}}.$$
(22)
We are interested in constructing a coherent wave packet with good asymptotic behaviour in the mini-superspace, peaking in the vicinity of one of the classical loci $`u=\pm v`$ in the $`\{u,v\}`$ configuration space. It is well-known that for a harmonic oscillator, non-dispersive wave packets may be constructed by superposition of energy eigenfunctions. Therefore, the wave packet (21) is what we need. We take the solution as being represented by equation (21) with the sum truncated at a suitable value of $`n`$ displaying this peak. Figures 2 and 3 show surface and density plots of $`|\mathrm{\Psi }(u,v)|^2`$, where we have kept 6 terms, with all $`c_n`$ up to $`c_5`$ taken to be unity. Taking more terms would only have a small effect on the results. It is seen that a good correlation exists between these patterns and the classical locus $`u=v`$ in the configuration space $`\{u,v\}`$.
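Plots of this type are easy to reproduce numerically. The following sketch is ours (the grid range and the choice $`\omega =1`$ are arbitrary normalizations, not taken from the paper); it builds $`\mathrm{\Psi }(u,v)`$ from the eigenfunctions (18)-(19) with the first six coefficients set to unity, as described above:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

w = 1.0   # omega, set to unity for illustration
N = 6     # keep terms n = 0..5, all c_n = 1, as in the text

def phi(n, x):
    """Harmonic-oscillator eigenfunction, eqs. (18)-(19)."""
    norm = (w/np.pi)**0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-w*x**2/2) * eval_hermite(n, np.sqrt(w)*x)

u = np.linspace(-6, 6, 300)
U, V = np.meshgrid(u, u)

psi = sum(phi(n, U) * phi(n, V) for n in range(N))
density = np.abs(psi)**2

# The ridge of |Psi|^2 tracks the classical locus u = v
i, j = np.unravel_index(np.argmax(density), density.shape)
print(U[i, j], V[i, j])   # the peak sits on u = v
```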
Wave packets in the mini-superspace can only be understood as unparametrized tubes. This is because the Wheeler-DeWitt equation does not have an intrinsic time parameter. However, in order to relate the properties of the wave packet (21) to an evolving classical cosmology with classical time in the Lorentzian region, one may take $`\{\beta \}`$ as a family of coordinate functions labeling some subset of the real line containing the point $`\beta =0`$. Then the loci $`u=\pm v`$ admit parametrizations in terms of $`\{\beta \}`$
$$u(\beta )=A\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}\frac{2}{3}\beta ^{\frac{3}{2}}\right),v(\beta )=ϵA\mathrm{cosh}\left(\sqrt{\frac{2\mathrm{\Lambda }}{3}}\frac{2}{3}\beta ^{\frac{3}{2}}\right)$$
(23)
with the point $`\beta =0`$ implying a transition from Euclidean to Lorentzian signature. Now, a change of coordinate $`\beta \rightarrow \beta ^{}=F(\beta )`$ will induce a change of parametrization for the classical loci and, for $`\beta >0`$ with $`F`$ monotonic, would correspond to an alternative choice of classical time. To the extent that the classical loci are highlighted by the state (21) we say that the family of classical times, regarded as alternative parametrizations of such loci, arises dynamically from this state.
## 6 Conclusions
In this paper we have considered the solutions of Einstein equations in an empty (4+1) dimensional Kaluza-Klein cosmology with a Robertson-Walker type metric having a negative cosmological constant. These solutions admit a degenerate metric signifying a signature transition from Euclidean to Lorentzian domains. Motivated by the subject of compactification which has been studied in the context of signature change for a positive cosmological constant, we have shown that here, signature transition can provide a compactification either for the usual scale factor $`R`$ or the internal scale factor $`a`$ according to the choice of hierarchical negative cosmological constant. The most interesting and desirable case emerges by taking a very small negative cosmological constant which would then predict the relation $`Ra\sim 1`$ and compactification of $`a`$ as $`a\sim |\mathrm{\Lambda }|^{1/2}`$ for the Lorentzian space-time and an exponentially damping behaviour (deflation) of $`a`$ for the Euclidean region. These results are formally in general agreement with those obtained in and . The corresponding Wheeler-DeWitt equation is exactly solved to construct a state that may be identified with a non-dispersive wave packet peaking in the vicinity of the classical Kaluza-Klein submanifold admitting signature transition. This remarkable correspondence would help us to look at the existence of higher dimensional geometries undergoing a continuous change of signature as semi-classical limits in the context of quantum cosmology.
We may remark that the model presented here does not predict the standard big-bang but rather an eternal cosmology. One way of avoiding this eternal universe would be to assume an adiabatically evolving $`\mathrm{\Lambda }`$. Alternatively, one may work within the context of a dilaton field model in which a typical dilaton field $`\varphi `$ with its potential $`V(\varphi )`$ results in an effective cosmological constant $`\mathrm{\Lambda }`$. Thus, if the dilaton model admits either a duality transformation $`|\mathrm{\Lambda }|\rightarrow |\mathrm{\Lambda }|^{-1}`$ or tunneling from $`|\mathrm{\Lambda }|`$ to zero leading to switching off $`|\mathrm{\Lambda }|`$, then it will provide for the evolution of the scale factor $`R`$ from small to large scales. Nevertheless, in its present form this eternal universe with no big-bang and a size which is large enough (adjusting $`|\mathrm{\Lambda }|`$ to a very small constant) may have the advantage of being compatible with the present observations and seems to be free of the flatness problem.
It is also worth noting that if one takes a positive cosmological constant rather than negative, one ends up with having an inflationary rather than a compactification phase in the Lorentzian solutions. Finally, we remark that for (4+$`D`$) dimensional Kaluza-Klein cosmology with $`D>1`$ in the present model, the quantization is a difficult problem which requires further investigations.
Figure Captions
Figure 1. Variation of $`a(\beta )`$ with $`\beta `$ for a typical small value of $`|\mathrm{\Lambda }|\sim 10^{-3}`$.
Figure 2. Surface plot of $`|\mathrm{\Psi }(u,v)|^2`$.
Figure 3. Density plot of $`|\mathrm{\Psi }(u,v)|^2`$.
# DESY 99-062 hep-ph/9905384 Instantons in the QCD Vacuum and in Deep Inelastic Scattering. Talk presented at the 7th International Workshop on Deep Inelastic Scattering and QCD (DIS 99), Zeuthen/Germany, April 19-23, 1999; to be published in the Proceedings (Nuclear Physics B (Proc. Suppl.)).
## 1 INTRODUCTION
The ground state (“vacuum”) of non-abelian gauge theories like QCD is known to be very rich. It includes topologically non-trivial fluctuations of the gauge fields, carrying an integer topological charge Q. The simplest building blocks of topological structure in the vacuum, localized (i. e. “instantaneous”) in (euclidean) time and space are instantons ($`I`$ ) with $`Q=+1`$ and anti-instantons ($`\overline{I}`$ ) with $`Q=-1`$. While they are believed to play an important rôle in various long-distance aspects of QCD, there are also important short-distance implications. In QCD with $`n_f`$ (massless) flavours, instantons induce hard processes violating “chirality” $`Q_5`$ by an amount $`\mathrm{\Delta }Q_5=2n_fQ`$, in accord with the general ABJ chiral anomaly relation . While in ordinary perturbative QCD ($`Q=0`$), these processes are forbidden, their experimental discovery would clearly be of basic significance. The DIS regime is strongly favoured in this respect, since hard $`I`$-induced processes are both calculable within $`I`$-perturbation theory and have good prospects for experimental detection at HERA .
## 2 INSTANTONS IN THE QCD VACUUM
Crucial information on the range of validity of our DIS predictions comes from a recent high-quality lattice investigation on the topological structure of the QCD vacuum (for $`n_f=0`$). In order to make $`I`$-effects visible in lattice simulations with given lattice spacing $`a`$, the raw data have to be “cooled” first. This procedure filters out (dominating) fluctuations of short wavelength $`𝒪(a)`$, while affecting the topological fluctuations of much longer wavelength $`\rho \gg a`$ comparatively little. After cooling, an ensemble of $`I`$’s and $`\overline{I}`$’s can clearly be seen (and studied) as bumps in the topological charge density (e.g. fig. 1 (left)) and in the Lagrange density.
Next, we note that crucial $`I`$-observables in DIS, like the $`I`$-induced rate at HERA, are closely related to $`I`$-observables in the QCD vacuum, as measured in lattice simulations.
The link is provided through two basic quantities of the $`I`$-calculus, $`D(\rho )`$, the $`I`$-size distribution and $`\mathrm{\Omega }(U,R^2/\rho \overline{\rho },\overline{\rho }/\rho )`$, the $`I\overline{I}`$-interaction. Here $`\rho (\overline{\rho }),R_\mu `$ and the matrix $`U`$ denote the $`I(\overline{I})`$-sizes, the $`I\overline{I}`$-distance 4-vector and the $`I\overline{I}`$ relative color orientation, respectively. Within $`I`$-perturbation theory, the functional form of $`D`$ and $`\mathrm{\Omega }`$ is known for $`\alpha (\mu _r)\mathrm{log}(\mu _r\rho )\ll 1`$ and $`R^2/\rho \overline{\rho }\gg 1`$, respectively, with $`\mu _r`$ being the renormalization scale. Within the so-called “$`I\overline{I}`$-valley” approximation , $`\mathrm{\Omega }_{\mathrm{valley}}`$ is even analytically known for all $`R^2`$.
Fig. 1 (middle) illustrates the striking agreement in shape and normalization of $`2D(\rho )`$ with the continuum limit of the high-quality UKQCD lattice data for $`dn_{I+\overline{I}}/d^4xd\rho `$. The predicted normalization of $`D(\rho )`$ is very sensitive to $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{n_f=0}`$ for which we took the most accurate (non-perturbative) result from ALPHA . The theoretically favoured choice $`\mu _r\rho =𝒪(1)`$ in fig. 1 (middle) optimizes the range of agreement, extending right up to the peak around $`\rho \simeq 0.5`$ fm. However, due to its two-loop
renormalization-group invariance, $`D(\rho )`$ is almost independent of $`\mu _r`$ for $`\rho \lesssim 0.3`$ fm over the large range $`2\lesssim \mu _r\lesssim 20`$ GeV. Hence for $`\rho \lesssim 0.3`$ fm, there is effectively no free parameter involved!
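To make the role of this invariance concrete, recall the standard one-loop ’t Hooft form of the size distribution, $`D(\rho )\propto \rho ^{-5}(2\pi /\alpha _s(\mu _r))^{2N_c}e^{-2\pi /\alpha _s(\mu _r)}(\rho \mu _r)^{b_0}`$ with $`b_0=11`$ for $`n_f=0`$ (this expression and the value $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{n_f=0}\approx 238`$ MeV used below are quoted from the general instanton literature, not from this talk, so the sketch is illustrative only). With one-loop running, $`e^{-2\pi /\alpha _s}(\rho \mu _r)^{b_0}=(\rho \mathrm{\Lambda })^{b_0}`$ is exactly $`\mu _r`$-independent; the residual variation sits entirely in the prefactor $`(2\pi /\alpha _s)^{2N_c}`$, which is precisely what the two-loop invariance compensates:

```python
import numpy as np

Lam = 0.238        # GeV: Lambda_MSbar for n_f = 0 (ALPHA value, assumed)
b0, Nc = 11.0, 3

def alpha_s(mu):
    """One-loop running coupling for n_f = 0."""
    return 2*np.pi / (b0 * np.log(mu/Lam))

def D_oneloop(rho, mu):
    """'t Hooft one-loop size distribution, up to an overall constant."""
    a = alpha_s(mu)
    return rho**-5 * (2*np.pi/a)**(2*Nc) * np.exp(-2*np.pi/a) * (rho*mu)**b0

rho = 0.15 * 5.068   # 0.15 fm expressed in GeV^-1
for mu in (2.0, 5.0, 10.0, 20.0):
    # any leftover mu-dependence comes from the (2*pi/alpha_s)^6 prefactor;
    # exp(-2*pi/alpha_s)*(rho*mu)**b0 collapses to (rho*Lam)**b0 exactly
    print(mu, D_oneloop(rho, mu))
```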
Fig. 1 (right) displays the continuum limit of the UKQCD data for the distance distribution of $`I\overline{I}`$-pairs, $`dn_{I\overline{I}}/d^4xd^4R`$, along with the theoretical prediction . The latter involves (numerical) integrations of $`\mathrm{exp}(-\frac{4\pi }{\alpha }\mathrm{\Omega }_{\mathrm{valley}})`$ over the $`I\overline{I}`$ relative color orientation $`(U)`$, as well as $`\rho `$ and $`\overline{\rho }`$. For the respective weight $`D(\rho )D(\overline{\rho })`$, a Gaussian fit to the lattice data was used in order to avoid convergence problems at large $`\rho ,\overline{\rho }`$. We note a good agreement with the lattice data down to $`I\overline{I}`$-distances $`R/\rho \simeq 1`$. These results imply first direct support for the validity of the “valley”-form of the interaction $`\mathrm{\Omega }`$ between $`I\overline{I}`$-pairs.
In summary: The striking agreement of the UKQCD lattice data with $`I`$-perturbation theory is a very interesting result by itself. The extracted lattice constraints on the range of validity of $`I`$-perturbation theory can be directly translated into a “fiducial” kinematical region for our DIS-predictions . Our results also suggest a promising proposal : One may try and replace the two crucial quantities of the perturbative $`I`$-calculus $`D(\rho )`$ and $`\mathrm{\Omega }(U,R^2/\rho \overline{\rho },\overline{\rho }/\rho )`$ by their actual form inferred from the lattice data. The present “fiducial” cuts in DIS may then be considerably relaxed, high-$`E_T`$ photoproduction becomes accessible theoretically, etc.
## 3 SEARCH STRATEGIES IN DIS
An indispensable tool for investigating the prospects to detect $`I`$-induced processes at HERA is our $`I`$-event generator QCDINS-1.60, which is interfaced (by default) to HERWIG 5.9.
In a recent detailed study , based on QCDINS and standard DIS event generators, a number of basic (experimental) questions has been investigated: How to isolate an $`I`$-enriched data sample by means of cuts to a set of observables? How large are the dependencies on Monte-Carlo models, both for $`I`$-induced (INS) and normal DIS events? Can the Bjorken variables $`(Q^{},x^{})`$ of the $`I`$-subprocess (to which “fiducial” cuts should be applied) be reconstructed?
Let us briefly summarize the main results. While the “$`I`$-separation power” $`=\mathrm{INS}_{\mathrm{eff}}/\mathrm{DIS}_{\mathrm{eff}}`$ typically ranges around $`𝒪(20)`$ for single observables, a set of six observables (among $`\sim 30`$) with much improved $`I`$-separation power $`=𝒪(130)`$ could be found. The systematics induced by varying the modelling of $`I`$-induced events remains surprisingly small (fig. 2). In contrast, the modelling of normal DIS events in the relevant region of phase space turns out to depend quite strongly on the used generators and parameters (fig. 3). Despite a relatively high expected rate of $`𝒪(100)`$ pb for $`I`$-events in the “fiducial” DIS region , a better understanding of the tails of distributions for normal DIS events turns out to be quite important.
# 𝑄²-Dependence of the Drell-Hearn-Gerasimov integral
## I Introduction
Absorption of virtual photons on the nucleon at very high photon virtuality $`Q^2`$ has been proposed as a means to measure the spin content of the nucleon. Since helicity is conserved in the scaling limit, the cross-section difference between parallel and anti-parallel helicities for the photon and the proton should be a measure of the spin carried by the quarks in the proton. This difference, integrated over energy, is called the DHG integral. At large $`Q^2`$ this integral has been expressed in terms of a sumrule by Ellis and Jaffe (EJ). Data at high $`Q^2`$ seem to agree with the momentum dependence predicted by this QCD-based sumrule. The magnitude is however considerably smaller than predicted and this discrepancy has been known as the “spin crisis”. For real photons, $`Q^2=0`$, another, rigorous sumrule has been formulated for this integral by Drell and Hearn and independently by Gerasimov (DHG). While the values predicted by EJ are positive, the DHG value is large and negative and it is an intriguing problem to reconcile the two.
The different values for the sumrule at low and high $`Q^2`$ are related to the transition from physics dominated by nucleon resonances (non-perturbative QCD) to the perturbative QCD regime. Several studies have emphasized the explicit role played by nucleon resonances in the transitional regime around $`Q^2\sim 1`$ GeV<sup>2</sup>. At the lowest momentum transfers this has been investigated in chiral perturbation theory .
The derivation of the sumrules is based on Lorentz and gauge invariance, crossing symmetry, unitarity and causality. It is therefore of interest to investigate the sumrule in a model in which most of these symmetries are obeyed. We present a calculation of the strength distribution for $`Q^21`$ GeV<sup>2</sup> in the model developed in ref. . This model obeys crossing symmetry, unitarity, Lorentz and gauge invariance. It is formulated in terms of meson and nucleon degrees of freedom which includes nucleon resonances in an effective-Lagrangian formalism.
## II Outline of the model
We use the relativistic effective-Lagrangian formalism presented in . The model is based on the K-matrix approach. The kernel is constructed from the direct (s), exchange (u) and meson exchange (t-channel) tree-level amplitudes. In the s- and u-channels all spin-1/2 and spin-3/2 baryon resonances with masses below 1.7 GeV are included. The use of the K-matrix approach guarantees unitarity in the coupled-channel $`(\gamma +N)(\pi +N)`$ space. Observing unitarity is of crucial importance for the calculation of cross sections for photon energies exceeding 250 MeV. Coupling to channels outside this model space is included in an approximate manner through the introduction of an imaginary part in the self-energy of the s-channel resonances . The coupling parameters have been obtained from a simultaneous fit to pion-nucleon phase shifts, pion-photoproduction multipoles and cross sections for Compton scattering . The model is Lorentz and gauge invariant and obeys crossing symmetry. The chiral-symmetry constraints are also respected since the low-energy $`\pi N`$ scattering is well described. Here we will only mention the details which are of interest for the present application.
Of particular importance for the DHG integral is the treatment of the $`\mathrm{\Delta }`$-resonance. The most general $`\gamma N\mathrm{\Delta }`$ vertex for finite $`q^2=-Q^2`$ is given by
$`\mathrm{\Gamma }_{N\gamma \mathrm{\Delta }^\alpha }`$ $`=`$ $`{\displaystyle \frac{i}{2M}}F_{VMD}(q^2)[G_1\theta _{\alpha \beta }(z_1)\gamma _\delta -{\displaystyle \frac{G_2}{2M}}\theta _{\alpha \beta }(z_2)p_\delta -{\displaystyle \frac{G_3}{2M}}\theta _{\alpha \beta }(z_3)q_\delta ]\gamma _5(q^\beta \epsilon ^\delta -q^\delta \epsilon ^\beta )`$ (2)
$`\theta _{\alpha \beta }(a_i)=g_{\alpha \beta }+a_i\gamma _\alpha \gamma _\beta \text{ and }a_i\equiv (z_i+1/2)\text{ for }i=1,2,3`$
where $`G_i=g_iT_3`$ and $`T_3`$ is the $`N\mathrm{\Delta }`$ isospin transition operator. The constants $`g_1`$, $`g_2`$ and off-shell parameters $`z_1`$, $`z_2`$ are fixed from the fit for the real photons. The constant $`g_3`$ (and $`z_3`$) does not contribute for $`Q^2=0`$ and therefore we will choose $`g_3=0`$ in the present investigation. In general, at finite $`Q^2`$ the coupling $`g_3`$ affects the longitudinal multipole $`L_{1+}`$ as well as the ratio $`E_{1+}/M_{1+}`$ for the $`\mathrm{\Delta }`$-resonance. The transition form factor is taken in the form
$$F_{VMD}(q^2)=\frac{2m_\rho ^4}{(2m_\rho ^2q^2)(m_\rho ^2q^2)},$$
(3)
as inspired by vector-meson dominance where $`m_\rho `$ is the $`\rho `$-meson mass.
The DHG integral at finite $`Q^2`$ can be introduced as
$$I_{DHG}(Q^2)=\frac{2M^2}{Q^2}\int _0^1dx\left(g_1(x,Q^2)-\frac{4x^2M^2}{Q^2}g_2(x,Q^2)\right)=\frac{M^2}{4\pi ^2\alpha }\int _{Q^2/2M}^{\mathrm{\infty }}\frac{d\nu }{\nu }\sigma ^{TT}$$
(4)
which relates this sumrule directly to the transverse-transverse interference cross section
$`\sigma ^{TT}={\displaystyle \frac{1}{2}}(\sigma _{1/2}^T-\sigma _{3/2}^T)`$ (5)
for inelastic electron scattering on the nucleon. Throughout this paper we use the Bjorken variable $`x=Q^2/2M\nu `$ and $`\nu =pq/M`$, the energy of the virtual photon in the lab system. The total absorption cross section for a transverse virtual photon in a state with total helicity $`\lambda `$ is denoted by $`\sigma _\lambda ^T`$ where the dependence on $`\nu `$ and $`Q^2`$ is not indicated for ease of writing.
The spin-dependent structure functions which enter in Eq. (4) are defined as
$`g_1(x,Q^2)`$ $`=`$ $`{\displaystyle \frac{M\nu }{4\pi ^2\alpha (1+Q^2/\nu ^2)}}\left(\sigma ^{TT}+{\displaystyle \frac{\sqrt{Q^2}}{\nu }}\sigma _{1/2}^{LT}\right),`$ (6)
$`g_2(x,Q^2)`$ $`=`$ $`{\displaystyle \frac{M\nu }{4\pi ^2\alpha (1+Q^2/\nu ^2)}}\left(-\sigma ^{TT}+{\displaystyle \frac{\nu }{\sqrt{Q^2}}}\sigma _{1/2}^{LT}\right),`$ (7)
where $`\sigma _{1/2}^{TL}`$ is the transverse-longitudinal interference cross section, suppressing again the energy and momentum dependence. Note that structure functions $`G_{1,2}`$ introduced by Bjorken are related to $`g_{1,2}(x,Q^2)`$ through
$`M^2\nu G_1(\nu ,Q^2)`$ $`=`$ $`g_1(x,Q^2),`$
$`M\nu ^2G_2(\nu ,Q^2)`$ $`=`$ $`g_2(x,Q^2).`$
It should be noted that at finite $`Q^2`$ Eq. (4) differs from both and . In particular, in the DHG integral contains, in addition to $`\sigma ^{TT}`$, also a $`\sigma _{1/2}^{LT}`$ contribution. Our definitions agree with and have been chosen since they yield the expression for the DHG integral as measured in recent experiments. In the limit of real photons or in the scaling limit, ($`Q^2`$, $`\nu `$) $`\rightarrow \mathrm{\infty }`$ at fixed $`x=Q^2/2M\nu `$, the above differences vanish.
At the photon point
$`I_{DHG}(Q^2=0)=-{\displaystyle \frac{1}{4}}\kappa ^2`$ (8)
with $`\kappa `$ being the anomalous magnetic moment of the nucleon, and in the scaling regime
$`I_{DHG}(Q^2)\rightarrow {\displaystyle \frac{2M^2}{Q^2}}{\displaystyle \int _0^1}g_1(x)dx={\displaystyle \frac{2M^2}{Q^2}}\mathrm{\Gamma }_1,`$ (9)
where $`\mathrm{\Gamma }_1`$ is the moment of $`g_1`$. Experiment gives for the proton $`\mathrm{\Gamma }_1^p\simeq 0.126`$ at $`Q^2=10.7`$ GeV<sup>2</sup> while the prediction of EJ is $`\mathrm{\Gamma }_1^p=0.185`$.
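The tension between the two regimes is easy to quantify; a minimal numerical sketch of ours, using the standard anomalous moments $`\kappa _p\approx 1.793`$ and $`\kappa _n\approx -1.913`$:

```python
M = 0.938   # GeV, nucleon mass

def I_DHG_photon_point(kappa):
    """Eq. (8): DHG value at Q^2 = 0."""
    return -kappa**2 / 4

print(I_DHG_photon_point(1.793))    # proton:  ~ -0.80
print(I_DHG_photon_point(-1.913))   # neutron: ~ -0.91

# Scaling regime, eq. (9), with the measured proton moment
Gamma1_p, Q2 = 0.126, 10.7          # at Q^2 = 10.7 GeV^2
print(2 * M**2 * Gamma1_p / Q2)     # ~ +0.021: small and positive
```

The integral thus has to evolve from about $`-0.8`$ at the photon point to a small positive number at large $`Q^2`$, which is the transition studied below.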
## III Results
The total photo-nucleon cross section can be calculated from the imaginary part of the forward-scattering $`\gamma ^{}N\rightarrow \gamma ^{}N`$ amplitude for total helicity-$`\frac{3}{2}`$ and -$`\frac{1}{2}`$ states. In Fig. (1) the cross sections for the two initial helicity states are plotted versus energy $`\omega =(s-M^2)/2M`$ where $`\sqrt{s}`$ is the invariant energy of the system. The energy $`\omega `$ is related to the integration variable in Eq. (4) through
$$\omega =\nu -Q^2/2M$$
(10)
and has the advantage that the s-channel resonances occur at a value of $`\omega `$, independent of $`Q^2`$. The large peak in the cross section at $`\omega \simeq 300`$ MeV is due to the $`\mathrm{\Delta }`$-resonance while the one at $`\omega \simeq 700`$ MeV is due to the $`D_{13}`$-resonance.
At energies below the $`\mathrm{\Delta }`$-resonance the pion-photon seagull term, which is needed to ensure gauge invariance for the pion-electroproduction amplitude, gives by far the dominant contribution. It contributes to the helicity-$`\frac{1}{2}`$ states only and thus it gives a sizable positive contribution to the DHG integral at $`Q^2`$=0. Since this contribution to the cross section is inversely proportional to the momentum of the virtual photon, it strongly diminishes when $`\sqrt{Q^2}\sim \nu `$, causing the decrease (increase of the absolute magnitude) of the DHG integral seen in Fig. (2) at low $`Q^2`$. The dominant contribution to the DHG integral originates from the $`\mathrm{\Delta }`$-resonance and is negative in sign. Only at values of $`Q^2`$ of the order of the $`\rho `$-meson mass does the form factor Eq. (3) start to cut this $`\mathrm{\Delta }`$-contribution, giving rise to a general decrease of the absolute magnitude of the DHG integral seen in Fig. (2). With increasing $`Q^2`$ the absolute value of $`I_{DHG}(Q^2)`$ thus first increases to reach a maximum at $`Q^2`$=0.05 GeV<sup>2</sup> after which it strongly decreases.
The above features of the DHG integral can also be observed from Fig. (3) where the dependence of the DHG integral on the upper integration limit
$$I_{DHG}^{up}(Q^2)=\frac{M^2}{4\pi ^2\alpha }\int _{Q^2/2M}^{\nu _{up}}\frac{d\nu }{\nu }\sigma ^{TT}$$
(11)
is given as function of $`\omega _{up}=\nu _{up}-Q^2/2M`$ (see Eq. (10)). It can be seen that the important contribution to this integral is generated in the region of the $`\mathrm{\Delta }`$-resonance. At $`\omega _{max}\simeq 800`$ MeV, the maximum energy where we can apply the present model with confidence, about 80% of the DHG sumrule value (at $`Q^2=0`$) is reached.
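The dominance of the $`\mathrm{\Delta }`$ region can be mimicked with a toy model. The sketch below is not the effective-Lagrangian calculation of this paper: it uses a simple Breit-Wigner ansatz for $`\sigma ^{TT}`$ at $`Q^2=0`$ (position, width and integration threshold are rough assumptions of ours), with the peak height fixed so that the running integral up to $`0.8`$ GeV reproduces the 80% saturation quoted above:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1/137.036
M = 0.938                 # GeV
mub = 2.568e-3            # 1 microbarn in GeV^-2

nu_D, G = 0.34, 0.12      # GeV: assumed Delta position and width
def sigma_TT(nu, A):      # negative Breit-Wigner: helicity-3/2 dominance
    return -A * (G**2/4) / ((nu - nu_D)**2 + G**2/4)

def I_up(nu_up, A):
    """Eq. (11) at Q^2 = 0, starting at an assumed threshold 0.15 GeV."""
    val, _ = quad(lambda nu: sigma_TT(nu, A)/nu, 0.15, nu_up)
    return M**2 / (4*np.pi**2*alpha) * val

target = 0.8 * (-1.793**2/4)        # 80% of the proton DHG value
A = target / I_up(0.8, 1.0)
print(A/mub)                        # toy peak |sigma^TT| in microbarn
print(I_up(0.8, A))                 # ~ -0.64
```

That the required peak height comes out at a physically sensible scale, of order $`10^2`$ μb, illustrates why the $`\mathrm{\Delta }`$ alone nearly saturates the integral.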
In Fig. (2) the single-pion production contribution to $`I_{DHG}(Q^2)`$ is also shown. At small $`Q^2`$ it contributes 86% to the integral; this fraction is somewhat larger than the 76% found by Karliner at the real-photon point. At higher $`Q^2`$ the multiple-pion emission contribution decreases in absolute magnitude but increases in relative importance to 23% at $`Q^2`$=1 GeV<sup>2</sup>.
Finally, as we mentioned, the coupling parameter $`g_3`$ in Eq. (2) has been chosen zero. In general, at finite $`Q^2`$ the DHG integral depends on its value. We have checked that for a moderate positive value, $`g_3\simeq |g_1|`$, the DHG integral changes sign and becomes positive at $`Q^2\simeq 1`$ GeV<sup>2</sup>. The effect of this coupling on the DHG integral and pion electro-production multipoles will be studied in detail in a separate publication.
## IV Conclusions
Our calculations in an effective-Lagrangian model show that at $`Q^2<1`$ GeV<sup>2</sup> the DHG integral is dominated by resonance contributions, mainly by the P<sub>33</sub> and the D<sub>13</sub>-resonances. Due to the form factors which are implemented in the calculation these contributions decrease rather sharply at moderate $`Q^2`$. At large $`Q^2`$ exceeding a few GeV<sup>2</sup> the present model, being based on nucleon and meson degrees of freedom, looses validity since the quark structure of the particles starts to play an increasingly important role. In the framework of an effective-Lagrangian approach one could model the quark structure by ascribing different Q<sup>2</sup>-dependences to electric and magnetic couplings of resonances. This might well explain the change of sign of the integral at large $`Q^2`$.
At small $`Q^2`$, $`Q^2<0.05`$ GeV<sup>2</sup>, we observe a striking increase in the absolute magnitude of the DHG integral. This is due to the particular momentum dependence of the seagull (or $`\gamma \pi NN`$) contribution to the dominant pion-electroproduction process. Surprisingly this behavior of $`I_{DHG}(Q^2)`$ is opposite to the dependence obtained in the ChPT calculations .
## Acknowledgments
We would like to thank R.G.E. Timmermans and R. Van de Vyver for discussions. O.S. gratefully acknowledges the hospitality of RCNP, Osaka, where this work was initiated. This work is supported by the Fund for Scientific Research-Flanders (FWO-Vlaanderen) and the Foundation for Fundamental Research on Matter (FOM) of the Netherlands. A.Yu. K. thanks NWO for financial support.
# Tevatron Potential for Technicolor Search with Prompt Photons.
## I Introduction
The Standard Model has been extensively tested and it is a very successful description of the weak interaction phenomenology. Nevertheless, the electroweak symmetry breaking sector has essentially remained unexplored. In the Standard Model, the Higgs boson plays a crucial rôle in the symmetry breaking mechanism. However, the presence of such a fundamental scalar at the 100 GeV scale gives rise to some theoretical problems, such as the naturalness of a light Higgs boson and the triviality of the fundamental Higgs self-interaction. These problems lead to the conclusion that the Higgs sector of the Standard Model is in fact a low energy effective description of some new Physics at a higher energy scale.
Three main avenues for the new Physics have been proposed: low-scale supersymmetry, large extra dimensions at TeV scale and dynamical symmetry breaking . A common prediction of all these extensions of the Standard Model is the appearance of new particles in the range of some hundred GeV to a few TeV.
We focus on the issue of the technicolor models of dynamical symmetry breaking which predict new particles such as Pseudo-Nambu-Goldstone-Bosons (PNGB) and vector resonances. Many of these models include colored technifermions and hence some PNGB’s can be color-triplet or even color-octet particles. These color-octet scalars can be copiously produced at a hadron collider and they are the subject of this paper.
The main contribution to the color-octet PNGB masses comes from QCD. If it is assumed that technicolor dynamics scales from QCD, this contribution is in the range of 200-400 GeV, but their masses can be different in models with non-QCD-like dynamics.
Production of technicolor particles has been studied at various present and future colliders such as Tevatron, LEP, NLC and the Muon Collider. The impact of PNGB on rare K meson decays induced through the exchange of color-singlet $`\pi _T^\pm `$ and color-octet $`\pi _{T8}^\pm `$ technipions has been recently studied in the context of multiscale technicolor where typical limits of the order of $`m_{\pi _{T8}}\gtrsim 250`$ GeV were obtained.
Of special interest is the case of the isoscalar color-octet PNGB, the so–called technieta ($`\eta _T`$), since it can be produced via gluon fusion through the heavy quark loop. In the near future, the upgraded $`2`$ TeV Tevatron collider will be the most promising machine for technieta search. The channel $`p\overline{p}\rightarrow \eta _T\rightarrow t\overline{t}`$ was initially studied by Appelquist and Triantaphyllou in the context of the one family technicolor model. More recently Eichten and Lane have studied the same channel in the context of walking technicolor. They concluded that a technieta with mass in the range $`M_{\eta _T}=400-500`$ GeV doubles the top quark production cross section at the Tevatron and hence is excluded in this mass range.
In our study we would like to concentrate on the search of technieta with mass below the $`t\overline{t}`$ threshold, which is not constrained by the $`t\overline{t}`$ production process.
The $`p\overline{p}\rightarrow \eta _T\rightarrow gg`$ and $`p\overline{p}\rightarrow \eta _T\rightarrow g\gamma `$ processes were studied in the early eighties by Hayot and Napoly in the framework of the one family technicolor model. They showed that, due to the signal-to-background ratio, the gluon-photon channel is preferable. Nevertheless, their results must be taken only as qualitative since no complete and detailed analysis was made.
In this letter we perform a complete realistic study of the $`p\overline{p}\rightarrow \eta _T\rightarrow \gamma +\text{jet}`$ process in order to understand the Tevatron potential for $`\eta _T`$ searches with mass below the $`t\overline{t}`$ threshold. We consider three different scenarios: the one family model , top-color assisted technicolor (TC2) and multiscale technicolor .
## II Effective Couplings
The color-octet technieta couples to gluons and photons through the Adler-Bell-Jackiw anomaly. This effective coupling can be written as:
$$A(\eta _T\rightarrow B_1B_2)=\frac{S_{\eta _TB_1B_2}}{4\pi ^2\sqrt{2}F_Q}ϵ_{\mu \nu \alpha \beta }ϵ_1^\mu ϵ_2^\nu k_1^\alpha k_2^\beta $$
(1)
where $`ϵ_i^\mu `$ and $`k_i^\mu `$ represent the polarization and momentum of the vector boson $`i`$. In our case the factors $`S_{\eta _TB_1B_2}`$ are given by :
$$S_{\eta _{Ta}g_bg_c}=g_s^2d_{abc}N_{TC}$$
(2)
and
$$S_{\eta _{Ta}g_b\gamma }=\frac{g_se}{3}\delta _{ab}N_{TC}$$
(3)
where $`g_s=\sqrt{4\pi \alpha _s}`$, $`e=\sqrt{4\pi \alpha }`$, $`\alpha `$ and $`\alpha _s`$ are the electromagnetic and strong coupling constants, and $`N_{TC}`$ is the number of technicolors (we take $`N_{TC}=4`$).
The technieta coupling to quarks can be written as:
$$A(\eta _T\rightarrow q\overline{q})=\frac{m_q}{F_Q}\overline{u}_q\gamma _5\frac{\lambda _a}{2}v_q.$$
(4)
With these couplings we can compute the technieta partial widths:
$$\mathrm{\Gamma }(\eta _T\rightarrow gg)=\frac{5\alpha _S^2N_{TC}^2M_{\eta _T}^3}{384\pi ^3F_Q^2},$$
(5)
$$\mathrm{\Gamma }(\eta _T\rightarrow g\gamma )=\left(\frac{N_Teg_s}{4\pi ^2F_Q}\right)^2\frac{M_\eta ^3}{576\pi }$$
(6)
$$\mathrm{\Gamma }(\eta _T\rightarrow q\overline{q})=\frac{m_q^2M_{\eta _T}\beta _q}{16\pi F_Q^2}$$
(7)
where
$$\beta _q=\sqrt{1-\frac{4m_q^2}{M_{\eta _T}^2}},$$
(8)
$`m_q`$ is the quark mass and $`M_{\eta _T}`$ is the technieta mass.
These expressions were used to calculate the technieta total width. From equations (5) and (6) we can see that:
$$\frac{\mathrm{\Gamma }(\eta _T\rightarrow \gamma g)}{\mathrm{\Gamma }(\eta _T\rightarrow gg)}=\frac{2\alpha }{15\alpha _s}=8.7\times 10^{-3}.$$
(9)
Hence, the decay channel $`\eta _T\rightarrow g\gamma `$ is suppressed, but due to the more manageable background it is expected to provide a larger statistical significance.
The constant $`F_Q`$ that appears in the couplings is the PNGB decay constant. Its value is model-dependent. In this work we consider three values for $`F_Q`$: $`F_Q=125`$ GeV for the one family technicolor model, $`F_Q=80`$ GeV for top-color assisted technicolor and $`F_Q=40`$ GeV for multiscale technicolor.
Some typical values of the technieta partial and total widths are shown in Table I for $`\alpha _s=0.119`$ and $`M_{\eta _T}=250`$ GeV.
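Equations (5)-(7) are simple to evaluate numerically. The sketch below is our own evaluation: the values $`\alpha \approx 1/128`$ and $`m_b=4.5`$ GeV are assumptions (the text fixes only $`\alpha _s=0.119`$ and $`M_{\eta _T}=250`$ GeV), so the numbers need not coincide exactly with the Table I entries:

```python
import numpy as np

alpha_s, alpha = 0.119, 1/128   # alpha at the relevant scale (assumed)
N_TC, m_b = 4, 4.5              # m_b in GeV (assumed)

def widths(M, FQ):
    """Partial widths from eqs. (5)-(7), in GeV, with q = b."""
    G_gg = 5 * alpha_s**2 * N_TC**2 * M**3 / (384 * np.pi**3 * FQ**2)
    G_gy = (2*alpha / (15*alpha_s)) * G_gg      # ratio (9)
    beta = np.sqrt(1 - 4*m_b**2 / M**2)
    G_bb = m_b**2 * M * beta / (16*np.pi * FQ**2)
    return G_gg, G_gy, G_bb

for FQ in (40.0, 80.0, 125.0):  # multiscale, TC2, one-family
    G_gg, G_gy, G_bb = widths(250.0, FQ)
    print(FQ, f"{G_gg:.3f}  {G_gy:.2e}  {G_bb:.3f}")
```

Note that $`\mathrm{\Gamma }(\eta _T\rightarrow b\overline{b})/\mathrm{\Gamma }(\eta _T\rightarrow g\gamma )`$ comes out near 8, which is the origin of the roughly tenfold $`b\overline{b}`$ enhancement mentioned in the closing paragraph.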
## III Signal and Background Rates
With the couplings discussed in the previous section we can show that the partonic cross section for the process $`gg\rightarrow \eta _T\rightarrow \gamma g`$ can be written as:
$$\widehat{\sigma }=\frac{5\widehat{s}^3\pi ^3}{384}\left(\frac{N_Teg_s}{12\sqrt{2}\pi F_Q}\right)^2\left(\frac{N_T\alpha _s}{\sqrt{2}\pi F_Q}\right)^2\frac{1}{(\widehat{s}-M_{\eta _T}^2)^2+\mathrm{\Gamma }_{\eta _T}^2M_{\eta _T}^2}$$
(10)
We wrote a Fortran code in order to convolute the above partonic cross section with the CTEQ4M partonic distribution functions (with $`Q^2=M_{\eta _T}^2`$). Because the technieta coupling to quarks is proportional to the quark mass, we neglect technieta production via $`q\overline{q}\rightarrow \eta _T`$ annihilation. In the case of gluon fusion we only take into account the $`s`$-channel contribution, which is dominant at the resonance. It must be noted that gauge invariance is preserved due to the Levi-Civita tensor present in equation (3). Table II shows the cross section (in pb) calculated for different values of $`M_{\eta _T}`$ and $`F_Q`$ at $`\sqrt{s}=2000`$ GeV with a cut in the transverse photon and jet momentum $`p_{T\gamma ,j}>10`$ GeV. These values for the cross section agree, with a precision of one percent, with a narrow width approximation. The cross section becomes sizeable for low values of $`M_{\eta _T}`$ and $`F_Q`$, being of the order of a picobarn. However, the cross section for the background $`p\overline{p}\rightarrow \gamma g`$ and $`p\overline{p}\rightarrow \gamma q`$ processes is $`\sigma _{\text{back}}=2.14\times 10^4`$ pb, which is a factor of $`10^4`$ larger than the signal. This situation clearly shows that a detailed kinematical analysis is necessary to work out the strategy to suppress the background as strongly as possible in order to extract the signal.
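For orientation, the structure of this convolution can be sketched as follows. The gluon density used here is a crude illustrative ansatz of ours, not CTEQ4M, so the absolute normalization should not be compared with Table II; only the logic (the Breit-Wigner partonic cross section (10) folded with the gluon-gluon luminosity) is meant to be conveyed:

```python
import numpy as np
from scipy.integrate import quad

s = 2000.0**2        # GeV^2, upgraded Tevatron

def g_toy(x):
    """Crude illustrative gluon density (NOT CTEQ4M)."""
    return 6.0 * (1.0 - x)**5 / x

def dL_dtau(tau):
    """Gluon-gluon luminosity dL/dtau at tau = x1*x2."""
    val, _ = quad(lambda x: g_toy(x) * g_toy(tau / x) / x, tau, 1.0)
    return val

def sigma_hat(shat, M, G, FQ, N_T=4, alpha_s=0.119, alpha=1/128):
    """Partonic Breit-Wigner cross section, eq. (10), in GeV^-2."""
    e_gs = np.sqrt(4*np.pi*alpha) * np.sqrt(4*np.pi*alpha_s)
    c1 = (N_T * e_gs / (12*np.sqrt(2)*np.pi*FQ))**2
    c2 = (N_T * alpha_s / (np.sqrt(2)*np.pi*FQ))**2
    return (5*shat**3*np.pi**3/384) * c1 * c2 \
           / ((shat - M**2)**2 + G**2 * M**2)

M, G, FQ = 250.0, 1.0, 40.0     # mass, total width (GeV, assumed), F_Q
tau_lo, tau_hi = (M - 5*G)**2/s, (M + 5*G)**2/s   # resonance window
val, _ = quad(lambda t: dL_dtau(t) * sigma_hat(t*s, M, G, FQ),
              tau_lo, tau_hi)
print(val * 3.894e8, "pb (toy gluon density; illustrative only)")
```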
## IV Complete Simulation of Signal and Background
In order to perform a complete signal and background simulation we use the PYTHIA 5.7 generator. Effects of jet fragmentation, initial and final state radiation (ISR+FSR) as well as smearing of the jet and the photon energies have been taken into account. Since the process $`\eta _T\to \gamma g`$ was absent in PYTHIA, we created a generator for the $`gg\to \eta _T\to \gamma g`$ process and linked it to PYTHIA as an external user process.
In our simulation we have used the CTEQ4M structure functions and have chosen $`Q^2=M_{\eta _T}^2`$ for the signal.
In this framework we study, for both signal and background, the distributions of the transverse photon momentum, transverse jet momentum, rapidity and invariant mass in order to find the optimal kinematical cuts for extracting the signal. These distributions are shown in Fig. 1. We can see that the ISR+FSR and energy smearing effects make the mass distribution (Fig. 1(a)) quite broad. Notice the difference in the $`p_t`$ distributions for photons (Fig. 1(b)) and jets (Fig. 1(c)). The fact that the distribution for jets is wider than for photons is due to initial and final state radiation.
We have found the following optimal set of kinematical cuts:
$`p_{t\gamma \text{,jet}}>{\displaystyle \frac{M_{\eta _T}}{2}}-40\text{ GeV}`$ (11)
$`M_{\eta _T}-{\displaystyle \frac{M_{\eta _T}}{10}}\le M_{\gamma \text{jet}}\le M_{\eta _T}+10\text{ GeV}`$ (12)
To take into account the detector pseudorapidity coverage we have chosen the following cuts for $`\eta _\gamma `$ and $`\eta _{jet}`$:
$`|\eta _\gamma |\le 1.5,|\eta _{jet}|<3`$ (13)
Table III shows the signal and background cross sections after those cuts have been applied. It is interesting to look at the values of the significance, which is written as $`\frac{\sigma _{\text{signal}}}{\sqrt{\sigma _{\text{back}}}}`$ and characterizes the statistical deviation of the number of observed events from the predicted background. The significance as a function of $`M_{\eta _T}`$ for the different technicolor models is shown in Fig. 3(a), where we have assumed a luminosity of $`L=2000`$ pb⁻¹ for the Tevatron Run II. For multiscale technicolor ($`F_Q=40`$ GeV), the significance is above the $`2\sigma `$ (95% CL) exclusion limit for technieta masses less than 350 GeV, while for a $`5\sigma `$ discovery criterion one obtains the mass limit $`M_{\eta _T}>266`$ GeV. For the top-color assisted technicolor model ($`F_Q=80`$ GeV) one can establish only a 95% CL exclusion limit $`M_{\eta _T}>175`$ GeV. For the one family technicolor model the significance is too small to establish any limits on $`M_{\eta _T}`$.
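The significance estimate quoted above can be sketched as follows; converting the tabulated cross sections into event counts brings in a factor $`\sqrt{L}`$, and the post-cut cross sections used here are placeholders rather than the actual Table III values:

```python
# Sketch of the significance: N_S/sqrt(N_B) = (sigma_S/sqrt(sigma_B))*sqrt(L).
# The two cross sections below are hypothetical placeholders.
import math

L          = 2000.0    # pb^-1, assumed Tevatron Run II luminosity
sigma_sig  = 0.05      # pb, hypothetical post-cut signal cross section
sigma_back = 2.0       # pb, hypothetical post-cut background cross section

significance = sigma_sig * L / math.sqrt(sigma_back * L)
print(significance)    # compare with 2 (95% CL exclusion) and 5 (discovery)
```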
In our study we compared results based on the PYTHIA simulation with results obtained using MADGRAPH and HELAS without taking into account ISR+FSR and the energy smearing effects. The corresponding significance for this case is shown in Fig. 3(b). One can see that for this ideal case the respective values of the significance are about 2.5 times higher than in the case when we model the realistic situation using PYTHIA. The difference between Figs. 3(a) and (b) clearly shows the importance of the complete simulation of the signal and background in order to obtain realistic results.
Finally, it is worth pointing out that a study of the $`b\overline{b}`$ signature would lead to similar bounds on the $`\eta _T`$ mass. The signal for the $`b\overline{b}`$ final state would be roughly a factor of 10 larger (see Table I), but the background would be about two orders of magnitude higher than that for the $`\gamma +jet`$ signature, leading to roughly the same values of the significance as for the $`\gamma +jet`$ final state. Moreover, one should also take into account the b-tagging efficiency, which would further decrease the significance.
## Conclusions
We have studied the potential of the upgraded Tevatron collider for the $`\eta _T`$ search with the $`\eta _T\to \gamma +g`$ decay signature and a mass below the $`t\overline{t}`$ threshold. Results have been obtained for the one family model, top-color assisted technicolor and multiscale technicolor.
We found that for the multiscale technicolor model the Tevatron can exclude $`M_{\eta _T}`$ up to $`350`$ GeV at $`95`$% CL, while the $`5\sigma `$ discovery limit for $`\eta _T`$ is $`266`$ GeV. For the top-color assisted technicolor model one can only put a $`95`$% CL lower limit on the $`\eta _T`$ mass equal to $`175`$ GeV, while for the one family technicolor model the significance is too small to establish any limit at all. A study of the $`b\overline{b}`$ final state signature is not expected to give better limits on the $`\eta _T`$ mass.
We have performed a complete simulation of the signal and background and have shown the importance of taking into account the effects of jet fragmentation, initial and final state radiation, as well as smearing of the jet and the photon energies.
###### Acknowledgements.
This work was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and by Programa de Apoio a Núcleos de Excelência (PRONEX).
# Model atmosphere calibration for synthetic photometry applications : the 𝑢𝑣𝑏𝑦 Strömgren photometric system
## 1. Introduction
Since predictions based on stellar-atmosphere models are useful for “Spectro-photometric dating of stars and galaxies”, the main subject of this conference, we present some of the results obtained from an empirically calibrated grid of stellar atmosphere models used for simultaneously deriving homogeneous effective temperatures and metallicities of 40 stars from observed data.
## 2. BaSeL models
We use the Basel Stellar Library (BaSeL) photometric calibrations, extensively tested and regularly updated for a larger set of parameters (see Lejeune et al. 1997, 1998 and Lastennet et al. 1999). The BaSeL models cover a large range of fundamental parameters: 2000 K ≤ $`T_{\mathrm{eff}}`$ ≤ 50,000 K, −1.02 ≤ $`\mathrm{log}g`$ ≤ 5.5, and −5.0 ≤ \[Fe/H\] ≤ +1.0. This library combines theoretical stellar energy distributions which are based on several original grids of blanketed model atmospheres, and which have been corrected in such a way as to provide synthetic colours consistent with extant empirical calibrations at all wavelengths from the near-UV through the far-IR. For more details and references on the BaSeL library, see the contributions of Lejeune et al., Lastennet et al. and Westera et al. in this volume.
## 3. Comparison with Hipparcos parallax
Very recently, Ribas et al. (1998) have computed the effective temperatures of 19 eclipsing binaries included in the Hipparcos catalogue from their radii, Hipparcos trigonometric parallaxes, and apparent visual magnitudes corrected for absorption. They used Flower’s (1996) calibration to derive bolometric corrections. Only 8 systems are in common with our working sample. The comparison with our results is made in Table 1. Since $`T_{\mathrm{eff}}`$ is highly correlated with metallicity, a direct comparison is not possible because, unlike the Hipparcos-derived data, our results are not given in terms of temperatures with error bars, but as ranges of $`T_{\mathrm{eff}}`$ compatible with a given \[Fe/H\]. Thus, the ranges reported in Tab. 1 are given assuming three different hypotheses: \[Fe/H\] $`=`$ −0.2, \[Fe/H\] $`=`$ 0, and \[Fe/H\] $`=`$ +0.2. The overall agreement is quite satisfactory, as illustrated in Fig. 1. The disagreement for the temperatures of CW Cephei can be explained by the large error of the Hipparcos parallax ($`\sigma `$<sub>π</sub>/$`\pi `$ ∼ 70%). For such large errors, the Lutz-Kelker correction (Lutz & Kelker 1973) cannot be neglected: the average distance is certainly underestimated and, as a consequence, the $`T_{\mathrm{eff}}`$ is also underestimated in Ribas et al.’s (1998) calculation. Thus, the agreement with the results obtained from the BaSeL models is certainly better than it would appear in Fig. 1 and Tab. 1. Similar corrections, of slightly lesser extent, are probably also indicated for the $`T_{\mathrm{eff}}`$ of RZ Cha and GG Lup, which have $`\sigma `$<sub>π</sub>/$`\pi >`$ 10% (11.6% and 11.4%, respectively). Finally, it is worth noting that the system with the smallest relative error in Tab. 1, $`\beta `$ Aur, shows excellent agreement between $`T_{\mathrm{eff}}`$ (Hipparcos) and $`T_{\mathrm{eff}}`$ (BaSeL), which underlines the validity of the BaSeL models.
## 4. Brief summary of the results
* The large range of \[Fe/H\] associated with acceptable confidence levels makes it evident that the classical method to derive $`T_{\mathrm{eff}}`$ from metallicity-independent calibrations should be considered with caution.
* By exploring the best $`\chi ^2`$-fits to the photometric data, we have re-derived new reddening values for some stars.
* Comparisons for 16 stars with Hipparcos-based $`T_{\mathrm{eff}}`$ determinations show good agreement with the temperatures derived from the BaSeL models. The agreement is even excellent for the star having the most reliable Hipparcos data in the sample studied.
See Lastennet et al. 1999 for details about the method, the determination of reddening, the influence of gravity, etc.
These comparisons also demonstrate that, while originally calibrated in order to reproduce the broad-band (UBVRIJHKL) colours, the BaSeL models also provide reliable results for medium-band photometry such as the Strömgren photometry. This point gives a significant weight to the validity of the BaSeL library for synthetic photometry applications in general.
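A minimal sketch of the $`\chi ^2`$-grid comparison underlying these results is given below; the grid nodes, the synthetic-colour function and the observed indices are all placeholders standing in for the actual BaSeL library and uvby data (see Lastennet et al. 1999 for the real procedure, including log g and reddening):

```python
# Toy chi^2 grid search over (Teff, [Fe/H]); everything here is a
# placeholder for the BaSeL colour grid and real Stromgren photometry.
import numpy as np

teff_grid = np.arange(5000.0, 9000.0, 250.0)     # K, hypothetical nodes
feh_grid  = np.arange(-1.0, 0.6, 0.2)            # dex, hypothetical nodes

def synthetic_colours(teff, feh):
    # placeholder interpolation standing in for the BaSeL grid
    by = 1.0e4 / teff - 0.9 + 0.02 * feh
    m1 = 0.15 + 0.05 * feh
    c1 = 0.4 + 1.0e-5 * (teff - 6000.0)
    return np.array([by, m1, c1])

obs     = np.array([0.35, 0.18, 0.41])           # observed (b-y, m1, c1)
obs_err = np.array([0.01, 0.01, 0.02])

chi2 = {(t, f): np.sum(((synthetic_colours(t, f) - obs) / obs_err) ** 2)
        for t in teff_grid for f in feh_grid}
best = min(chi2, key=chi2.get)
print(best, chi2[best])
# Teff ranges compatible with a fixed [Fe/H] follow from the
# chi2 < chi2_min + Delta contours at the chosen confidence level.
```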
### Acknowledgments.
T. L. gratefully acknowledges financial support from the Swiss National Science Foundation (grant 20-53660.98 to Prof. Buser), and from the “Société Suisse d’Astronomie et d’Astrophysique” (SSAA). We are grateful to the organisers for arranging such an enjoyable meeting.
## References
Flower, P.J. 1996, ApJ, 469, 355
Lastennet, E., Lejeune, Th., Westera, P, Buser, R. 1999, A&A, 341, 857
Lejeune, Th., Cuisinier, F., Buser, R. 1997, A&AS, 125, 229
Lejeune, Th., Cuisinier, F., Buser, R. 1998, A&AS, 130, 65
Lutz, T.E., Kelker, D.H. 1973, PASP, 85, 573
Ribas, I., Giménez, A., Torra, J., Jordi, C., Oblak, E. 1998, A&A, 330, 600
# IFT-P.044/99 gr-qc/9905096 Search for semiclassical–gravity effects in relativistic stars
## Abstract
We discuss the possible influence of gravity in the neutronization process, $`p^+e^-\to n\nu _e`$, which is particularly important as a cooling mechanism of neutron stars. Our approach is semiclassical in the sense that leptonic fields are quantized on a classical background spacetime, while neutrons and protons are treated as excited and unexcited nucleon states, respectively. We expect gravity to have some influence wherever the energy content carried by the in–state is barely above the neutron mass. In this case the emitted neutrinos would be soft enough to have a wavelength of the same order as the space curvature radius.
The inner structure of neutron stars has attracted much attention of relativists and particle and nuclear physicists since there still remain many subtle points to be better understood (see, e.g., and references therein). It would be interesting, thus, to investigate how gravity could influence quantum processes which occur in their interior. The gravitational field may be of some significance to quantum phenomena wherever they involve particles with wavelengths of the same order as the space curvature radius. Processes dealing with soft particles are very promising since their wavelengths may be arbitrarily large. Here we focus on the neutronization process, $`p^+e^-\to n\nu _e`$, which is an important cooling mechanism for neutron stars with temperatures up to about $`10^9`$ K. Our approach is essentially semiclassical in the sense that leptonic fields are quantized on a classical background spacetime, while neutrons and protons are treated as excited and unexcited nucleon states, respectively. We will use natural units $`G=\mathrm{\hbar }=c=k_B=1`$ throughout this paper.
Field quantization in the Schwarzschild spacetime is not easy to accomplish. We shall simplify the problem by simulating the Schwarzschild spacetime by a two–dimensional noninertial frame described by the Rindler wedge. The Rindler wedge is a static spacetime defined by the line element
$$ds^2=a^2u^2d\tau ^2-du^2$$
(1)
with $`0<u<+\mathrm{\infty }`$ and $`-\mathrm{\infty }<\tau <+\mathrm{\infty }`$, where $`a`$ “characterizes” the frame acceleration, i.e., $`a\equiv \sqrt{-a_\mu a^\mu }=\mathrm{const}`$ is the proper acceleration of the worldline which has $`\tau `$ as its proper time, namely, $`u=a^{-1}`$.
We would like to consider the case in which the nucleons lie approximately static during the reaction at some fixed point in the star. In principle, this poses no problem since the whole process takes place in the presence of a medium and not in the vacuum. The location where the reaction happens will be specified by the nucleon proper acceleration. Thus, in our simplified model the reacting nucleons will be described by a uniformly accelerated current in the Rindler wedge with constant proper acceleration $`a`$:
$$j^\mu =qu^\mu \delta (u-a^{-1}),$$
(2)
where $`q`$ is a small coupling constant and $`u^\mu =(a,0)`$ is the nucleon four–velocity. Next, in order to allow the current above to describe the proton–neutron transition, we shall consider the nucleon as a two–level system. In this vein, neutrons $`|n\rangle `$ and protons $`|p\rangle `$ will be excited and unexcited eigenstates of the nucleon Hamiltonian $`\widehat{H}`$:
$$\widehat{H}|n\rangle =m_n|n\rangle ,\widehat{H}|p\rangle =m_p|p\rangle ,$$
(3)
where $`m_n`$ and $`m_p`$ are the neutron and proton masses, respectively. Hence current (2) will be replaced by
$$\widehat{j}^\mu =\widehat{q}(\tau )u^\mu \delta (u-a^{-1}),$$
(4)
where $`\widehat{q}(\tau )\equiv \mathrm{exp}(i\widehat{H}\tau )\widehat{q}_0\mathrm{exp}(-i\widehat{H}\tau )`$ is a Hermitian monopole. The two–dimensional Fermi constant $`G_F\equiv |\langle p|\widehat{q}_0|n\rangle |=9.918\times 10^{-13}`$ is determined by imposing that the mean proper lifetime of inertial neutrons is $`887`$ s.
In order to calculate the neutronization rate we shall quantize the leptonic field in the Rindler wedge. The leptonic field is expressed as
$$\widehat{\mathrm{\Psi }}(\tau ,u)=\sum _{\sigma =\pm }\int _0^{+\mathrm{\infty }}d\omega \left(\widehat{b}_{\omega \sigma }\psi _{\omega \sigma }(\tau ,u)+\widehat{d}_{\omega \sigma }^{\dagger }\psi _{-\omega \sigma }(\tau ,u)\right),$$
(5)
where $`\psi _{\omega \sigma }(\tau ,u)=f_{\omega \sigma }(u)e^{-i\omega \tau }`$ are positive ($`\omega >0`$) and negative ($`\omega <0`$) frequency solutions of the Dirac equation, with respect to the boost Killing field $`\partial /\partial \tau `$, with polarizations $`\sigma =\pm `$. We recall that Rindler frequencies may assume arbitrary positive real values. In particular there are massive Rindler particles with arbitrarily small frequencies. (See Ref. for a discussion on zero-frequency Rindler particles.) Here
$`f_{\omega +}(u)`$ $`=`$ $`A_+\left(\begin{array}{c}K_{i\omega /a+1/2}(mu)+iK_{i\omega /a-1/2}(mu)\\ 0\\ K_{i\omega /a+1/2}(mu)+iK_{i\omega /a-1/2}(mu)\\ 0\end{array}\right),`$ (10)
$`f_{\omega -}(u)`$ $`=`$ $`A_{-}\left(\begin{array}{c}0\\ K_{i\omega /a+1/2}(mu)+iK_{i\omega /a-1/2}(mu)\\ 0\\ K_{i\omega /a+1/2}(mu)-iK_{i\omega /a-1/2}(mu)\end{array}\right),`$ (15)
where $`m`$ is the lepton mass and the normalization constants
$$A_+=A_{-}=\left[\frac{m\mathrm{cosh}(\pi \omega /a)}{2\pi ^2a}\right]^{1/2}$$
(16)
were chosen such that the annihilation and creation operators satisfy the following simple anticommutation relations
$$\{\widehat{b}_{\omega \sigma },\widehat{b}_{\omega ^{\prime }\sigma ^{\prime }}^{\dagger }\}=\{\widehat{d}_{\omega \sigma },\widehat{d}_{\omega ^{\prime }\sigma ^{\prime }}^{\dagger }\}=\delta (\omega -\omega ^{\prime })\delta _{\sigma \sigma ^{\prime }},$$
(17)
$$\{\widehat{b}_{\omega \sigma },\widehat{b}_{\omega ^{\prime }\sigma ^{\prime }}\}=\{\widehat{d}_{\omega \sigma },\widehat{d}_{\omega ^{\prime }\sigma ^{\prime }}\}=\{\widehat{b}_{\omega \sigma },\widehat{d}_{\omega ^{\prime }\sigma ^{\prime }}\}=\{\widehat{b}_{\omega \sigma },\widehat{d}_{\omega ^{\prime }\sigma ^{\prime }}^{\dagger }\}=0.$$
(18)
Now we are ready to calculate the neutronization amplitude
$$𝒜=\langle n|\langle \nu _{\omega _\nu \sigma _\nu }|\widehat{S}_I|e_{\omega _e\sigma _e}^{-}\rangle |p\rangle ,$$
(19)
where we minimally couple the nucleon current (4) to the leptonic fields $`\widehat{\mathrm{\Psi }}_e`$ and $`\widehat{\mathrm{\Psi }}_\nu `$ through the Fermi interaction action
$$\widehat{S}_I=\int d^2x\sqrt{-g}\widehat{j}_\mu (\widehat{\overline{\mathrm{\Psi }}}_\nu \gamma _R^\mu \widehat{\mathrm{\Psi }}_e+\widehat{\overline{\mathrm{\Psi }}}_e\gamma _R^\mu \widehat{\mathrm{\Psi }}_\nu ).$$
(20)
In the Rindler wedge $`\gamma _R^\mu \equiv (e_\alpha )^\mu \gamma ^\alpha `$ with tetrads $`(e_0)^\mu =u^{-1}\delta _0^\mu `$ and $`(e_i)^\mu =\delta _i^\mu `$, where $`\gamma ^\alpha `$ are the usual Dirac matrices. By using Eq. (20) in Eq. (19), we obtain the following amplitude:
$$𝒜_{ac}=G_F\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}d\tau e^{i\mathrm{\Delta }m\tau }\langle \nu _{\omega _\nu \sigma _\nu }|\widehat{\mathrm{\Psi }}_\nu ^{\dagger }(\tau ,a^{-1})\widehat{\mathrm{\Psi }}_e(\tau ,a^{-1})|e_{\omega _e\sigma _e}^{-}\rangle ,$$
(21)
where $`\mathrm{\Delta }m\equiv m_n-m_p`$. Next, by using Eq. (5), we obtain
$$𝒜_{ac}=G_F\delta _{\sigma _e,\sigma _\nu }\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}d\tau e^{i\mathrm{\Delta }m\tau }\psi _{\omega _\nu \sigma _\nu }^{\dagger }(\tau ,a^{-1})\psi _{\omega _e\sigma _e}(\tau ,a^{-1}).$$
(22)
Using now explicitly $`\psi _{\omega \sigma }(\tau ,u)`$ to perform the integral, we obtain
$`𝒜_{ac}`$ $`=`$ $`{\displaystyle \frac{4G_F}{\pi a}}\sqrt{m_em_\nu \mathrm{cosh}(\pi \omega _e/a)\mathrm{cosh}(\pi \omega _\nu /a)}`$ (23)
$`\times `$ $`Re\left[K_{i\omega _\nu /a-1/2}(m_\nu /a)K_{i\omega _e/a+1/2}(m_e/a)\right]\delta _{\sigma _e,\sigma _\nu }\delta (\omega _e-\omega _\nu -\mathrm{\Delta }m).`$ (24)
This result will be used to calculate the total reaction rate
$$\mathrm{\Gamma }_{ac}(a)\equiv \frac{1}{\stackrel{~}{\tau }}\sum _{\sigma _e=\pm }\sum _{\sigma _\nu =\pm }\int _0^{+\mathrm{\infty }}d\omega _e\int _0^{+\mathrm{\infty }}d\omega _\nu |𝒜_{ac}|^2n_F(\omega _e,T_e)[1-n_F(\omega _\nu ,T_\nu )],$$
(25)
where $`\stackrel{~}{\tau }=2\pi \delta (0)`$ is the total nucleon proper time, and $`n_F(\omega ,T)\equiv 1/[1+\mathrm{exp}(\omega /T)]`$ is the usual fermionic thermal factor. We shall consider further two cases. In the first one, we assume $`T_e=10^9`$ K and $`T_\nu =0`$ K, i.e., the neutron star would be cold enough to be transparent to the neutrinos. In the second one, we assume $`T_e=T_\nu =10^{10}`$ K, i.e., electrons and neutrinos would be in thermal equilibrium. By using Eq. (24) in Eq. (25), we obtain
$`\mathrm{\Gamma }_{ac}(a)={\displaystyle \frac{4G_F^2m_em_\nu }{\pi ^3a^2}}{\displaystyle \int _{\mathrm{\Delta }m}^{+\mathrm{\infty }}}`$ $`d\omega _e`$ $`{\displaystyle \frac{\mathrm{cosh}[\pi \omega _e/a]\mathrm{cosh}[\pi (\omega _e-\mathrm{\Delta }m)/a]\mathrm{exp}[(\omega _e-\mathrm{\Delta }m)/2T_\nu ]}{\mathrm{cosh}[\omega _e/2T_e]\mathrm{cosh}[(\omega _e-\mathrm{\Delta }m)/2T_\nu ]\mathrm{exp}[\omega _e/2T_e]}}`$ (26)
$`\times `$ $`\left\{Re\left[K_{i(\omega _e-\mathrm{\Delta }m)/a-1/2}(m_\nu /a)K_{i\omega _e/a+1/2}(m_e/a)\right]\right\}^2.`$ (27)
As a final step, we take the limit $`m_\nu \to 0`$ in Eq. (27) (see Ref. ):
$`\mathrm{\Gamma }_{ac}(a)={\displaystyle \frac{G_F^2m_e}{\pi ^2a}}{\displaystyle \int _{\mathrm{\Delta }m}^{+\mathrm{\infty }}}`$ $`d\omega _e`$ $`{\displaystyle \frac{\mathrm{cosh}[\pi \omega _e/a]\mathrm{exp}[(\omega _e-\mathrm{\Delta }m)/2T_\nu ]}{\mathrm{cosh}[\omega _e/2T_e]\mathrm{cosh}[(\omega _e-\mathrm{\Delta }m)/2T_\nu ]\mathrm{exp}[\omega _e/2T_e]}}`$ (28)
$`\times `$ $`K_{i\omega _e/a+1/2}(m_e/a)K_{i\omega _e/a-1/2}(m_e/a).`$ (29)
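Equation (29) can be evaluated numerically. The sketch below uses mpmath, whose Bessel functions accept the complex orders appearing here (energies in MeV, natural units, case-(ii) temperatures); it is only an illustrative evaluation, and the arbitrary-precision quadrature is slow:

```python
# Illustrative evaluation of Gamma_ac(a) of Eq. (29); 1 K = 8.617e-11 MeV.
import mpmath as mp

G_F   = mp.mpf('9.918e-13')      # two-dimensional Fermi constant (text)
m_e   = mp.mpf('0.511')          # MeV
dm    = mp.mpf('1.293')          # Delta m = m_n - m_p in MeV
K2MeV = mp.mpf('8.617e-11')
T_e   = mp.mpf('1e10') * K2MeV   # case (ii)
T_nu  = T_e
a     = mp.mpf('1.0')            # proper acceleration in MeV (example)

def integrand(w):
    therm = (mp.cosh(mp.pi * w / a) * mp.exp((w - dm) / (2 * T_nu))
             / (mp.cosh(w / (2 * T_e)) * mp.cosh((w - dm) / (2 * T_nu))
                * mp.exp(w / (2 * T_e))))
    # K_{iw/a+1/2}(x) K_{iw/a-1/2}(x) is real for real x; mp.re strips
    # the tiny numerical imaginary residue.
    bess = (mp.besselk(1j * w / a + mp.mpf('0.5'), m_e / a)
            * mp.besselk(1j * w / a - mp.mpf('0.5'), m_e / a))
    return therm * mp.re(bess)

rate = G_F**2 * m_e / (mp.pi**2 * a) * mp.quad(integrand, [dm, mp.inf])
print(rate)                      # Gamma_ac(a) in MeV (natural units)
```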
In order to compare the reaction rate above with the usual one obtained in inertial frames, we calculate next the reaction rate for $`a=0`$ using plain quantum field theory in Minkowski spacetime. This will be used also as a consistency check since we will compare it with the $`a0`$ limit obtained from Eq. (29).
Let us briefly outline the Minkowski calculation. The leptonic fields will be expressed in terms of the usual Minkowski coordinates $`(t,z)`$ as
$$\widehat{\mathrm{\Psi }}(t,z)=\sum _{\sigma =\pm }\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dk\left(\widehat{b}_{k\sigma }\psi _{k\sigma }^{(+\omega )}(t,z)+\widehat{d}_{k\sigma }^{\dagger }\psi _{k\sigma }^{(-\omega )}(t,z)\right),$$
(30)
where $`\widehat{b}_{k\sigma }`$ and $`\widehat{d}_{k\sigma }^{\dagger }`$ are annihilation and creation operators of fermions and antifermions, respectively, with momentum $`k`$ and polarization $`\sigma `$. In the inertial frame, energy, momentum and mass $`m`$ are related as usual: $`\omega =\sqrt{k^2+m^2}>0`$. $`\psi _{k\sigma }^{(+\omega )}(t,z)`$ and $`\psi _{k\sigma }^{(-\omega )}(t,z)`$ are positive and negative frequency solutions of the Dirac equation with respect to $`\partial /\partial t`$, respectively. In the Dirac representation (see, e.g., Ref. ), we find
$$\psi _{k+}^{(\pm \omega )}(t,z)=\frac{e^{i(\mp \omega t+kz)}}{\sqrt{2\pi }}\left(\begin{array}{c}\pm \sqrt{(\omega \pm m)/2\omega }\\ 0\\ k/\sqrt{2\omega (\omega \pm m)}\\ 0\end{array}\right)$$
(31)
and
$$\psi _{k-}^{(\pm \omega )}(t,z)=\frac{e^{i(\mp \omega t+kz)}}{\sqrt{2\pi }}\left(\begin{array}{c}0\\ \pm \sqrt{(\omega \pm m)/2\omega }\\ 0\\ k/\sqrt{2\omega (\omega \pm m)}\end{array}\right),$$
(32)
where the normalization constants were chosen such that the creation and annihilation operators satisfy
$$\{\widehat{b}_{k\sigma },\widehat{b}_{k^{\prime }\sigma ^{\prime }}^{\dagger }\}=\{\widehat{d}_{k\sigma },\widehat{d}_{k^{\prime }\sigma ^{\prime }}^{\dagger }\}=\delta (k-k^{\prime })\delta _{\sigma \sigma ^{\prime }}$$
(33)
and
$$\{\widehat{b}_{k\sigma },\widehat{b}_{k^{\prime }\sigma ^{\prime }}\}=\{\widehat{d}_{k\sigma },\widehat{d}_{k^{\prime }\sigma ^{\prime }}\}=\{\widehat{b}_{k\sigma },\widehat{d}_{k^{\prime }\sigma ^{\prime }}\}=\{\widehat{b}_{k\sigma },\widehat{d}_{k^{\prime }\sigma ^{\prime }}^{\dagger }\}=0.$$
(34)
The neutronization amplitude for inertial nucleons in the Minkowski spacetime,
$$𝒜_{in}=G_F\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dte^{i\mathrm{\Delta }mt}\psi _{k_\nu \sigma _\nu }^{(+\omega _\nu )\dagger }(t,0)\psi _{k_e\sigma _e}^{(+\omega _e)}(t,0),$$
(35)
is calculated by using the interaction action (20) in Eq. (19), where $`\gamma _R^\mu `$ is replaced by the usual $`\gamma ^\mu `$ Dirac matrices, and the current is given by $`\widehat{j}^\mu =\widehat{q}(t)v^\mu \delta (z)`$ with $`v^\mu =(1,0)`$. This leads us straightforwardly to the following neutronization rate for inertial nucleons:
$$\mathrm{\Gamma }_{in}=\frac{2G_F^2}{\pi }\int _L^{+\mathrm{\infty }}dk_e\frac{e^{(\omega _e-\mathrm{\Delta }m)/T_\nu }}{(1+e^{\omega _e/T_e})[1+e^{(\omega _e-\mathrm{\Delta }m)/T_\nu }]},$$
(36)
where $`m_\nu =0`$, $`L\equiv \sqrt{\mathrm{\Delta }m^2-m_e^2}`$, and we recall that $`\omega _e\equiv \sqrt{k_e^2+m_e^2}`$.
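For comparison, Eq. (36) involves only elementary functions and can be sketched with a standard quadrature (same units and case-(ii) temperatures as in the previous sketch); the integrand is rewritten with negative exponents to avoid overflow:

```python
# Illustrative evaluation of Gamma_in of Eq. (36), natural units (MeV).
import numpy as np
from scipy import integrate

G_F, m_e, dm = 9.918e-13, 0.511, 1.293
T = 1e10 * 8.617e-11                     # T_e = T_nu in MeV
L = np.sqrt(dm**2 - m_e**2)              # lower limit of Eq. (36)

def integrand(k):
    w = np.sqrt(k**2 + m_e**2)
    # stable rewriting of exp((w-dm)/T)/((1+e^(w/T))(1+e^((w-dm)/T)))
    return np.exp(-w / T) / ((1.0 + np.exp(-w / T))
                             * (1.0 + np.exp(-(w - dm) / T)))

val, err = integrate.quad(integrand, L, np.inf)
print(2.0 * G_F**2 / np.pi * val)        # Gamma_ac(a -> 0) should approach this
```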
In order to clearly analyze the influence of the frame acceleration on the neutronization process, let us use Eqs. (29) and (36) to define the following relative reaction rate:
$$\mathcal{R}(a)\equiv \frac{\mathrm{\Gamma }_{ac}(a)-\mathrm{\Gamma }_{in}}{\mathrm{\Gamma }_{in}}.$$
(37)
In Figs. 1 and 2 we plot $`\mathcal{R}(a)`$ for the two aforementioned cases: (i) $`T_e=10^9`$ K and $`T_\nu =0`$ K, and (ii) $`T_e=T_\nu =10^{10}`$ K. Firstly we note from the figures that $`\mathrm{\Gamma }_{ac}(a\to 0)`$ is in agreement with the expression obtained for $`\mathrm{\Gamma }_{in}`$ since $`\mathcal{R}(a\to 0)\to 0`$. Figs. 1 and 2 exhibit a complicated oscillatory pattern up to $`a\sim 1`$ MeV. Indeed, the frame acceleration plays its most important role in this region: $`|\mathcal{R}(a)|`$ reaches about 30% and 10% for cases (i) and (ii), respectively. For large enough accelerations, $`a\gg \mathrm{\Delta }m,T_e`$, we obtain from Eq. (29) an asymptotic expression for $`\mathrm{\Gamma }_{ac}`$, namely,
$$\mathrm{\Gamma }_{ac}(a\gg \mathrm{\Delta }m,T_e)\approx \frac{2G_F^2}{\pi }\int _{\mathrm{\Delta }m}^{+\mathrm{\infty }}d\omega _e\frac{e^{(\omega _e-\mathrm{\Delta }m)/T_\nu }}{(1+e^{\omega _e/T_e})[1+e^{(\omega _e-\mathrm{\Delta }m)/T_\nu }]}.$$
(38)
By using Eq. (38) in Eq. (37), we can compute the asymptotic relative reaction rate, namely, $`\mathcal{R}(a\gg \mathrm{\Delta }m,T_e)`$. We find that $`\mathcal{R}(a\gg \mathrm{\Delta }m,T_e)\approx -7.2\%`$ and $`\mathcal{R}(a\gg \mathrm{\Delta }m,T_e)\approx -3.5\%`$ for cases (i) and (ii), respectively, i.e., according to our toy model, ultrahigh accelerations damp the neutronization rate by a few percent.
In summary, we have looked for gravity effects in the neutronization process which frequently occurs in the interior of neutron stars. The reaction rate obtained by means of a simplified model exhibits a complicated oscillatory pattern up to $`a\sim 1`$ MeV. Afterwards it tends to an asymptotic value which indicates that the reaction is somewhat damped. We note that proper accelerations of the order of $`a\sim 1`$ MeV are much beyond what would be expected in the interior of relativistic stars. Just for the sake of comparison, protons at LHC/CERN will be under accelerations of about $`10^{-8}`$ MeV. We emphasize, however, that only a four–dimensional Schwarzschild calculation would be realistic enough to precisely determine the whole influence of gravity on the neutronization reaction and other similar processes. In a more realistic calculation, for instance, effects due to the space curvature itself, which is absent here, should show up wherever the emitted neutrinos are soft enough to “feel” the global background geometry. In this case, even reactions taking place at the star core, where $`a\approx 0`$, would be influenced by gravity. More detailed investigations on the role played by gravity in particle processes occurring in relativistic stars would be welcome.
Acknowledgments
D.V. was fully supported by Fundação de Amparo à Pesquisa do Estado de São Paulo while G.M. was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico.
Figure Captions
FIG. 1: The relative reaction rate $`\mathcal{R}(a)`$ is plotted as a function of the frame acceleration $`a`$ for temperatures $`T_e=10^9`$ K and $`T_\nu =0`$ K. Note that $`\mathcal{R}(a\to 0)\to 0`$, as expected. After an oscillatory regime the relative reaction rate tends to the asymptotic value $`\mathcal{R}(a\gg \mathrm{\Delta }m,T_e)\approx -7.2\%`$. The maximum value reached by $`|\mathcal{R}(a)|`$ is about 30%.
FIG. 2: The relative reaction rate $`\mathcal{R}(a)`$ is plotted as a function of the frame acceleration $`a`$ for temperatures $`T_e=T_\nu =10^{10}`$ K. After an oscillatory regime the relative reaction rate tends to the asymptotic value $`\mathcal{R}(a\gg \mathrm{\Delta }m,T_e)\approx -3.5\%`$. The maximum value reached by $`|\mathcal{R}(a)|`$ is about 10%.
# Kaleidoscope-roulette: the resonance phenomena in perception games
## I. Interactive games
### 1.1. Interactive systems and intention fields
###### Definition Definition 1
An interactive system (with $`n`$ interactive controls) is a control system with $`n`$ independent controls coupled with unknown or incompletely known feedbacks (the feedbacks, as well as their couplings with controls, are of so complicated a nature that they cannot be described completely). An interactive game is a game with interactive controls of each player.
Below we shall consider only deterministic and differential interactive systems. In this case the general interactive system may be written in the form:
$$\dot{\phi }=\mathrm{\Phi }(\phi ,u_1,u_2,\mathrm{\ldots },u_n),$$
$`1`$
where $`\phi `$ characterizes the state of the system and $`u_i`$ are the interactive controls:
$$u_i(t)=u_i(u_i^{}(t),[\phi (\tau )]|_{\tau \le t}),$$
i.e. the independent controls $`u_i^{}(t)`$ coupled with the feedbacks on $`[\phi (\tau )]|_{\tau \le t}`$. One may suppose that the feedbacks are integrodifferential on $`t`$.
###### Proposition
Each interactive system (1) may be transformed to the form (2) below (which is not, however, unique):
$$\dot{\phi }=\stackrel{~}{\mathrm{\Phi }}(\phi ,\xi ),$$
$`2`$
where the magnitude $`\xi `$ (with infinite degrees of freedom as a rule) obeys the equation
$$\dot{\xi }=\mathrm{\Xi }(\xi ,\phi ,\stackrel{~}{u}_1,\stackrel{~}{u}_2,\mathrm{\ldots },\stackrel{~}{u}_n),$$
$`3`$
where $`\stackrel{~}{u}_i`$ are the interactive controls of the form $`\stackrel{~}{u}_i(t)=\stackrel{~}{u}_i(u_i^{}(t);\phi (t),\xi (t))`$ (here the dependence of $`\stackrel{~}{u}_i`$ on $`\xi (t)`$ and $`\phi (t)`$ is differential on $`t`$, i.e. the feedbacks are precisely of the form $`\stackrel{~}{u}_i(t)=\stackrel{~}{u}_i(u_i^{}(t);\phi (t),\xi (t),\dot{\phi }(t),\dot{\xi }(t),\ddot{\phi }(t),\ddot{\xi }(t),\mathrm{\ldots },\phi ^{(k)}(t),\xi ^{(k)}(t))`$).
###### Remark Remark 1
One may exclude $`\phi (t)`$ from the feedbacks in the interactive controls $`\stackrel{~}{u}_i(t)`$. One may also exclude the derivatives of $`\xi `$ and $`\phi `$ on $`t`$ from the feedbacks.
###### Definition Definition 2
The magnitude $`\xi `$ with its dynamical equations (3) and its contribution into the interactive controls $`\stackrel{~}{u}_i`$ will be called the intention field.
Note that the theorem holds true for the interactive games. In practice, the intention fields may often be considered as a field-theoretic description of subconscious individual and collective behavioral reactions. However, they may also be used to account for unknown or incompletely known external influences. Therefore, such an approach is applicable to problems of computer science (e.g. semi-automatically controlled resource distribution) or mathematical economics (e.g. financial games with unknown factors). The interactive games with the differential dependence of feedbacks are called differential. Thus, the theorem states a possibility of a reduction of any interactive game to a differential interactive game by introduction of additional parameters – the intention fields.
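As an illustration of the Proposition, the toy sketch below integrates a concrete (and deliberately simple) choice of right-hand sides for Eqs. (2)–(3), with one state $`\phi `$, one intention field $`\xi `$, and a single interactive control whose unknown feedback enters through $`\xi `$; all functional forms are assumptions made only for the example:

```python
# Toy numerical illustration of Eqs. (2)-(3); every right-hand side
# below is an assumed example, not part of the general theory.
import numpy as np

def u_star(t):                       # pure (independent) control
    return np.sin(t)

def u_interactive(t, phi, xi):       # coupled feedback, Eq. (4)-style
    eps = 0.3 * np.tanh(xi)          # unknown-feedback parameter eps(t)
    return u_star(t) + eps * phi

def step(t, phi, xi, dt=1e-3):
    u = u_interactive(t, phi, xi)
    dphi = -phi + xi                 # toy Phi(phi, xi), Eq. (2)
    dxi  = -0.5 * xi + u             # toy Xi(xi, phi, u), Eq. (3)
    return phi + dt * dphi, xi + dt * dxi

phi, xi = 0.0, 0.0
for i in range(20000):               # 20 time units of Euler integration
    phi, xi = step(i * 1e-3, phi, xi)
print(phi, xi)
```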
### 1.2. Some generalizations
The interactive games introduced above may be generalized in the following ways.
The first way, which leads to the indeterminate interactive games, is based on the idea that the pure controls $`u_i^{}(t)`$ and the interactive controls $`u_i(t)`$ need not be related in the considered way. More generally, one should only postulate that there are some time-independent quantities $`F_\alpha (u_i(t),u_i^{}(t),\phi (t),\mathrm{\ldots },\phi ^{(k)}(t))`$ for the independent magnitudes $`u_i(t)`$ and $`u_i^{}(t)`$. Such a claim is evidently weaker than that of Def. 1. For instance, one may consider the inverse dependence of the pure and interactive controls: $`u_i^{}(t)=u_i^{}(u_i(t),\phi (t),\mathrm{\ldots },\phi ^{(k)}(t))`$.
The inverse dependence of the pure and interactive controls has a nice psychological interpretation. Instead of thinking of our action as consisting of conscious and unconscious parts and interpreting the latter as unknown feedbacks which “dress” the former, one is able to consider the action as a single whole, whereas the act of consciousness consists in extracting a part which it declares as its property.
The second way, which leads to the coalition interactive games, is based on the idea to consider the games with coalitions of actions and to claim that the interactive controls belong to such coalitions. In this case the evolution equations have the form
$$\dot{\phi }=\mathrm{\Phi }(\phi ,v_1,\mathrm{\ldots },v_m),$$
where $`v_i`$ is the interactive control of the $`i`$-th coalition. If the $`i`$-th coalition is defined by the subset $`I_i`$ of all players then
$$v_i=v_i(\phi (t),\mathrm{\ldots },\phi ^{(k)}(t),u_j^{}|j\in I_i).$$
Certainly, the intersections of different sets $`I_i`$ may be non-empty so that any player may belong to several coalitions of actions. Def.1 gives the particular case when $`I_i=\{i\}`$.
The coalition interactive games may be an effective tool for the analysis of collective decision making in real coalition games, which extends the applicability of the interactive game theory being elaborated to diverse problems of sociology.
### 1.3. Differential interactive games and their $`\epsilon `$–representations
###### Definition Definition 3
The $`\epsilon `$–representation of a differential interactive game is a representation of the differential feedbacks in the form
$$u_i(t)=u_i(u_i^{},\phi (t),\mathrm{\ldots },\phi ^{(k)}(t);\epsilon _i(t))$$
$`4`$
with the known function $`u_i`$ of all its arguments, where the magnitudes $`\epsilon _i(t)`$ are unknown functions of $`u_i^{}`$ and $`\phi (t)`$ with its higher derivatives:
$$\epsilon _i(t)=\epsilon _i(u_i^{}(t),\phi (t),\dot{\phi }(t),\mathrm{\ldots },\phi ^{(k)}(t)).$$
It is interesting to consider several different $`\epsilon `$-representations simultaneously. For such simultaneous $`\epsilon `$-representations with $`\epsilon `$-parameters $`\epsilon _i^{(\alpha )}`$ a crucial role is played by the time-independent relations between them:
$$F_\beta (\epsilon _i^{(1)},\mathrm{\ldots },\epsilon _i^{(\alpha )},\mathrm{\ldots },\epsilon _i^{(N)};u_i^{},\phi ,\mathrm{\ldots },\phi ^{(k)})\equiv 0,$$
which are called the correlation integrals. Certainly, in practice the correlation integrals are determined a posteriori and, thus they contain an important information on the interactive game. Using the sufficient number of correlation integrals one is able to construct various algebraic structures in analogy to the correlation functions in statistical physics and quantum field theory.
## II. Dialogues and the verbalizable interactive games. Perception games.
### 2.1. Dialogues as interactive games. The verbalization
Dialogues as psycholinguistic phenomena can be formalized in terms of interactive games. First of all, note that one is able to consider interactive games of discrete time as well as interactive games of continuous time above.
###### Definition Definition 4A (the naïve definition of dialogues)
The dialogue is a 2-person interactive game of discrete time with intention fields of continuous time.
The states and the controls of a dialogue correspond to the speech whereas the intention fields describe the understanding.
Let us give the formal mathematical definition of dialogues now.
###### Definition Definition 4B (the formal definition of dialogues)
The dialogue is a 2-person interactive game of discrete time of the form
$$\phi _n=\mathrm{\Phi }(\phi _{n-1},\vec{v}_n,\xi (\tau )|t_{n-1}\le \tau \le t_n).$$
$`5`$
Here $`\phi _n=\phi (t_n)`$ are the states of the system at the moments $`t_n`$ ($`t_0<t_1<t_2<\mathrm{\cdots }<t_n<\mathrm{\cdots }`$), $`\vec{v}_n=\vec{v}(t_n)=(v_1(t_n),v_2(t_n))`$ are the interactive controls at the same moments; $`\xi (\tau )`$ are the intention fields of continuous time with evolution equations
$$\dot{\xi }(t)=\mathrm{\Xi }(\xi (t),\vec{u}(t)),$$
$`6`$
where $`\vec{u}(t)=(u_1(t),u_2(t))`$ are continuous interactive controls with $`\epsilon `$–represented couplings of feedbacks:
$$u_i(t)=u_i(u_i^{}(t),\xi (t);\epsilon _i(t)).$$
The states $`\phi _n`$ and the interactive controls $`\vec{v}_n`$ are certain known functions of the form
$`\phi _n=\phi _n(\vec{\epsilon }(\tau ),\xi (\tau )|t_{n-1}\le \tau \le t_n),`$ (7)
$`\vec{v}_n=\vec{v}_n(\vec{u}^{}(\tau ),\xi (\tau )|t_{n-1}\le \tau \le t_n).`$
Note that the most nontrivial part of mathematical formalization of dialogues is the claim that the states of the dialogue (which describe a speech) are certain “mean values” of the $`\epsilon `$–parameters of the intention fields (which describe the understanding).
###### Remark Important
The definition of dialogue may be generalized to an arbitrary number of players, and below we shall consider any number $`n`$ of them, e.g. $`n=1`$ or $`n=3`$, though it slightly contradicts the common meaning of the word “dialogue”.
An embedding of dialogues into the interactive game theoretical picture generates the reciprocal problem: how to interpret an arbitrary differential interactive game as a dialogue. Such interpretation will be called the verbalization.
###### Definition Definition 5
A differential interactive game of the form
$$\dot{\phi }(t)=\mathrm{\Phi }(\phi (t),\vec{u}(t))$$
with $`\epsilon `$–represented couplings of feedbacks
$$u_i(t)=u_i(u_i^{}(t),\phi (t),\dot{\phi }(t),\ddot{\phi }(t),\mathrm{\ldots },\phi ^{(k)}(t);\epsilon _i(t))$$
is called verbalizable if there exist an a posteriori partition $`t_0<t_1<t_2<\mathrm{\cdots }<t_n<\mathrm{\cdots }`$ and integrodifferential functionals
$`\omega _n(\vec{\epsilon }(\tau ),\phi (\tau )|t_{n-1}\le \tau \le t_n),`$ (8)
$`\vec{v}_n(\vec{u}^{}(\tau ),\phi (\tau )|t_{n-1}\le \tau \le t_n)`$
such that
$$\omega _n=\mathrm{\Omega }(\omega _{n-1},v_n;\phi (\tau )|t_{n-1}\le \tau \le t_n).$$
$`9`$
The verbalizable differential interactive games realize a dialogue in the sense of Def. 4.
###### Remark Remark 2
One may include $`\omega _n`$ explicitly into the evolution equations for $`\phi `$
$$\dot{\phi }(\tau )=\mathrm{\Phi }(\phi (\tau ),\vec{u}(\tau );\omega _n),\tau \in [t_n,t_{n+1}].$$
as well as into the feedbacks and their couplings.
The main heuristic hypothesis is that all differential interactive games “which appear in practice” are verbalizable. The verbalization means that the states of a differential interactive game are interpreted as intention fields of a hidden dialogue and the problem is to describe such dialogue completely. If a differential interactive game is verbalizable one is able to consider many linguistic (e.g. the formal grammar of a related hidden dialogue) or psycholinguistic (e.g. the dynamical correlation of various implications) aspects of it.
During the verbalization it is a problem to determine the moments $`t_i`$. A way to the solution lies in the structure of $`\epsilon `$-representation. Let the space $`E`$ of all admissible values of $`\epsilon `$-parameters be a CW-complex. Then $`t_i`$ are just the moments of transition of the $`\epsilon `$-parameters to a new cell.
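The partition rule just described is easy to sketch numerically: for a sampled one-dimensional $`\epsilon `$-trajectory and a toy “complex” of interval cells, the instants $`t_i`$ are read off as the samples where the cell index changes. The thresholds and the trajectory below are illustrative assumptions:

```python
# Toy detection of the moments t_i as cell transitions of epsilon(t);
# the 1-d "cells" and the sampled trajectory are assumed examples.
import numpy as np

thresholds = np.array([-0.5, 0.0, 0.5])          # cell boundaries
t   = np.linspace(0.0, 10.0, 2001)
eps = np.sin(1.3 * t) * np.exp(-0.05 * t)        # sampled epsilon(t)

cells = np.digitize(eps, thresholds)             # cell index per sample
transition_idx = np.nonzero(np.diff(cells))[0] + 1
t_i = t[transition_idx]                          # the moments t_1 < t_2 < ...
print(t_i[:10])
```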
### 2.2. Perception games
###### Definition Definition 6
The perception game is a multistage verbalizable game (no matter whether finite or infinite) for which the intervals $`[t_i,t_{i+1}]`$ are just the sets. The conditions for finishing a set depend only on the current value of $`\phi `$ and the state of $`\omega `$ at the beginning of the set. The initial position of a set is the final position of the preceding one.
Practically, the definition describes the discrete character of perception and image understanding. For example, the goal of a concrete set may be to perceive or to understand a certain detail of the whole image. Another example is a continuous perception of a moving or changing object.
Note that the definition of perception games is applicable to various forms of perception, though the most interesting one is visual perception. The proposed definition allows one to take into account the dialogical character of image understanding and to consider visual perception, image understanding and verbal (and nonverbal) dialogues together. This may be extremely useful for the analysis of collective perception, understanding and controlling processes in dynamical environments – sports, dancing, martial arts, the collective controlling of moving objects, etc. On the other hand, this definition explicates the self-organizing features of human perception, which may be unraveled by game theoretical analysis. And, finally, the definition puts a basis for a systematic application of linguistic (e.g. formal grammars) and psycholinguistic methods to image understanding as a verbalizable interactive game with mathematical rigor.
## III. Kaleidoscope-roulettes and the resonance phenomena
### 3.1. Kaleidoscope-roulette
The kaleidoscope-roulette is the result of an attempt to combine the kaleidoscope, one of the simplest and most effective visual games, with the roulette, which essentially uses elements of randomness and the treatment of resonances. The main idea is to replace the random sequences of the roulette by quasirandom sequences, which may be generated by the interactive kaleidoscope. The resulting formal definition is below.
###### Definition Definition 7
Kaleidoscope-roulette is a perception game with a quasirandom sequence of quantities $`\{\omega _n\}`$.
Certainly, the explicit form of functionals (8) is not known to the players.
Many concrete versions of kaleidoscope-roulettes have been constructed. Though they are naturally associated with entertainment, their real applications may reach far beyond it due to their origin and the abstract character of their definition.
### 3.2. The resonance phenomena in kaleidoscope-roulettes
Though the sequence $`\{\omega _n\}`$ is quasirandom, the equations (9) for it may have resonance solutions. A resonance means a dynamical correlation of the two quasirandom sequences $`\{v_n\}`$ and $`\{\omega _n\}`$ whatever $`\phi `$ is realized. In such a case the quantities $`\{v_n\}`$ may be comprehended as “fortune”, which is not senseless in contrast to the ordinary roulette. However, $`v_n`$ are interactive controls and their explicit dependence on $`\vec{u}^{}`$ and $`\phi `$ is not known. Nevertheless, one is able to use a posteriori analysis and short-term predictions based on it (cf.) if the time interval $`\mathrm{\Delta }t`$ in the short-term predictions is not less than the interval $`t_{n+1}-t_n`$. To do this one should slightly improve the constructions of the cited works to take the discrete-time character of $`v_n`$ into account. This allows one to perform short-term controlling of the resonances in a kaleidoscope-roulette if they are observed. The condition of applicability of short-term predictions to the controlling of resonances may be expressed in the following form: variations of the interactivity should be slower than the change of sets in the considered multistage game.
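A toy sketch of such a short-term prediction is given below: the coupling between $`v_n`$ and the observed states is estimated a posteriori by least squares over a sliding window and then extrapolated one step ahead. This only illustrates the idea; it is not the construction of the works cited above:

```python
# Toy a posteriori short-term predictor for the next v_n; the hidden
# coupling v_n = 0.7*phi_n + noise is an assumed example.
import numpy as np

rng = np.random.default_rng(0)
phi = np.cumsum(rng.normal(size=400))                   # observed states
v   = 0.7 * phi[:-1] + rng.normal(scale=0.1, size=399)  # hidden coupling

window = 30                                             # sliding window
X = np.column_stack([phi[-window - 1:-1], np.ones(window)])
coef, *_ = np.linalg.lstsq(X, v[-window:], rcond=None)
v_next_pred = coef[0] * phi[-1] + coef[1]               # one-step prediction
print(v_next_pred)
```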
###### Remark Remark 3
The possibility to control resonances by $`v_n`$ using its short-term predictions does not contradict its quasirandomness, because $`v_n`$ is quasirandom with respect to $`v_{n-1}`$ but not to $`\phi (\tau )`$ ($`\tau \in [t_n,t_{n+1}]`$).
## IV. Conclusions
Kaleidoscope-roulettes, a proper class of perception games, have been described. They are defined as perception and, hence, verbalizable interactive games whose hidden dialogue consists of quasirandom sequences of “words”. The resonance phenomena in such games and their controlling are discussed. The possibility of short-term controlling of resonances in kaleidoscope-roulettes is doubtless an intriguing feature for their use for entertainment purposes as well as far beyond them.
# THE MINOR-MERGER DRIVEN NUCLEAR ACTIVITY IN SEYFERT GALAXIES: A STEP TOWARD THE SIMPLE UNIFIED FORMATION MECHANISM OF ACTIVE GALACTIC NUCLEI IN THE LOCAL UNIVERSE
## 1. INTRODUCTION
Seyfert galaxies are typical examples of galaxies with active galactic nuclei (AGNs) in the local universe and thus they have been well studied from various observational points of view. It is known that Seyfert nuclei are associated with disk galaxies. If the majority of disk galaxies have a supermassive black hole (SMBH) in their nuclei (e.g., Kormendy et al. 1998 and references therein), the most important problem is how the gas can be efficiently fueled onto the SMBH, because the central engine of AGNs is thought to be powered by the accretion of gas onto a SMBH (Rees 1984). The dynamical effect exerted either by a non-axisymmetric gravitational potential such as a stellar bar or by interaction with other galaxies has often been considered to cause the efficient gas fueling (e.g., Noguchi 1988; Shlosman, Frank, & Begelman 1989; Barnes & Hernquist 1992; Shlosman & Noguchi 1993; see for a review Shlosman, Begelman, & Frank 1990). However, recent systematic studies of large samples of Seyfert nuclei have shown that the Seyfert galaxies do not always have either the bar structure or companion galaxies (Mulchaey & Regan 1997; Ho, Filippenko, & Sargent 1997; Rafanelli et al. 1995; De Robertis, Yee, & Hayhoe 1998b). Thus we have not yet fully understood what causes the efficient gas fueling onto a SMBH in the Seyfert nuclei. Here, adopting an alternative fueling mechanism, the minor-merger driven fueling (Roos 1981, 1985a, 1985b; Gaskell 1985; Hernquist 1989; Mihos & Hernquist 1994; Hernquist & Mihos 1995; De Robertis et al. 1998b; Taniguchi 1997), we examine whether or not this mechanism can explain all important observational properties of Seyfert nuclei.
## 2. A BRIEF REVIEW OF POSSIBLE FUELING MECHANISMS
### 2.1. Galaxy-Interaction Driven Gas Fueling
The nuclear activity in interacting galaxies has been investigated over the past two decades (Petrosyan 1982; Kennicutt & Keel 1984; Keel et al. 1985; Dahari 1985; Cutri & McAlary 1985; Bushouse 1986, 1987; Telesco, Wolstencroft, & Done 1988; Smith & Hintzen 1991; Sekiguchi et al. 1992; Bergvall & Johansson 1995). In particular, the environmental properties of Seyfert galaxies have been one of the key issues (Dahari 1984; Fuentes-Williams & Stocke 1988; MacKenty 1989; Laurikainen et al. 1994; Laurikainen & Salo 1995; Rafanelli et al. 1995; De Robertis, Hayhoe, & Yee 1998a; De Robertis et al. 1998b). Although early investigations suggested a possible excess of companion galaxies in Seyfert galaxies (e.g., Dahari 1984), this excess has not been confirmed (e.g., De Robertis et al. 1998b). Although S2s still tend to have more companions with respect to normal disk galaxies, its statistical significance is ∼95% at most (De Robertis et al. 1998b).
An important point noted here is that only 10% of the Seyfert galaxies have companion galaxies (e.g., Rafanelli et al. 1995). This means that the remaining ∼90% of the Seyfert galaxies have no comparable companion galaxy and thus their nuclear activity is not related to galaxy interactions. In addition, Keel (1996) found that there is no preferred kind of interaction (prograde, polar, or retrograde) among the Seyfert galaxies with physical companions although the efficient fueling would occur in prograde interacting systems. Hence, even if the nuclear activity in some Seyfert galaxies were triggered by galaxy interactions, the tidal triggering appears to be a very minor mechanism.
### 2.2. Bar Driven Gas Fueling
The non-axisymmetric gravitational potential such as a stellar bar in a galactic disk is considered to drive the mass transfer of interstellar medium from the disk to the central region (Schwartz 1981; Norman 1990; Shlosman, Frank, & Begelman 1989; Wada & Habe 1992, 1995). However, from an observational point of view, the excess of barred galaxies among Seyfert galaxies has been controversial (Adams 1977; Simkin et al. 1980; Arsenault 1989; Moles, Márquez, & Pérez 1995; Maiolino, Risaliti, & Salvati 1998). Recently, Mulchaey & Regan (1997) made a near-infrared imaging survey of samples of Seyfert and normal galaxies and found that the incidence of bars in both samples is quite similar, i.e., ∼70% (see also Hunt et al. 1999). This means that Seyfert nuclei do not prefer barred galaxies as their hosts. Another very important work was made by Ho, Filippenko, & Sargent (1997), who analyzed optical spectra of more than 300 spiral galaxies in the nearby universe. They found that AGNs (Seyferts and LINERs; LINER = Low Ionization Nuclear Emission-line Region, Heckman 1980) do not show any significant preference for barred galaxies. They also found that bars have a negligible effect on the strength of AGNs. In summary, the recent statistical studies based on the larger samples have suggested that the bar-driven gas inflow is not a dominant mechanism for triggering activity in Seyfert nuclei. (Note that if the dynamical lifetime of bar structures is comparable to or shorter than the lifetime of AGNs, one would find no significant correlation between the presence of bars and AGNs. Even then, however, one would still have to prove that the efficient gas fueling into the central ∼1 pc region is really driven by the dynamical effect of bars; see also the remark on the bars-within-bars mechanism below.)
Very recently, Maiolino, Risaliti, & Salvati (1998) presented evidence for a strong correlation between the gaseous absorbing column density towards S2 nuclei and the presence of a stellar bar in their host galaxies. They also showed that strongly barred S2s have an average $`N_{\mathrm{HI}}`$ (H i column density) that is two orders of magnitude higher than that of non-barred S2s. Although these properties are quite interesting, the main point stressed here is that, from the statistical studies described before, a large-scale bar structure has no relation to the gas fueling in the Seyfert nuclei. We also note that the so-called bars-within-bars fueling mechanism does not work efficiently (Shlosman & Noguchi 1993). (Secondary, i.e., inner, bars and triaxial bulges are found in some barred galaxies, and these double-bar structures have sometimes been discussed in relation to the bars-within-bars mechanism proposed by Shlosman et al. (1989) (e.g., Friedli 1996). However, such large-scale double bars may be more intimately related to stellar dynamical resonances in disk galaxies, being different from the original bars-within-bars mechanism.)
### 2.3. Minor-Merger Driven Gas Fueling
Since most galaxies have satellite galaxies (Zaritsky et al. 1997 and references therein), it is likely that they have already experienced some minor mergers during their lives (Ostriker & Tremaine 1975; Tremaine 1981). In fact, many lines of evidence for minor mergers have been obtained even in ordinary-looking galaxies; e.g., “X” structures (Mihos et al. 1995; see also, for reviews, Schweizer 1990; Barnes 1996). Since the disk gas is transferred toward the central region of galaxies as the minor merger proceeds, minor mergers are also responsible for gas fueling in disk galaxies (Hernquist 1989; Mihos & Hernquist 1994; Hernquist & Mihos 1995). It is worthwhile noting that Seyfert galaxies possess a statistically significant excess of faint ($`M_v>-18`$) companion (satellite) galaxies (Fuentes-Williams & Stocke 1988).
Although the minor-merger driven fueling has no fatal problem, it is generally hard to find unambiguous evidence for minor mergers in many cases because the dynamical perturbation is less significant than that of typical galaxy interactions with massive companion galaxies. In order to complete a minor merger, it takes $`\sim 10^9`$ years. This time scale may be long enough to smear out relics of the minor merger. Thus, the majority of advanced minor mergers may be observed as ordinary-looking isolated galaxies (Walker, Mihos, & Hernquist 1996). This makes it difficult to verify that minor mergers are really responsible for triggering activity in most Seyfert nuclei.
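The $`\sim 10^9`$ yr figure can be checked with the standard dynamical-friction timescale for a satellite on a circular orbit in an isothermal halo, $`t_{\mathrm{fric}}=1.17\,r^2v_c/(GM_{\mathrm{sat}}\mathrm{ln}\mathrm{\Lambda })`$ (Binney & Tremaine 1987); all input values in the sketch below are illustrative assumptions:

```python
# Order-of-magnitude dynamical-friction timescale; every input below
# is an assumed, illustrative value.
G     = 4.301e-6        # kpc (km/s)^2 / M_sun
r     = 20.0            # kpc, assumed initial orbital radius
v_c   = 220.0           # km/s, assumed circular velocity
M_sat = 1.0e10          # M_sun, assumed nucleated-satellite mass
lnLam = 3.0             # assumed Coulomb logarithm

t_fric = 1.17 * r**2 * v_c / (G * M_sat * lnLam)   # in kpc/(km/s)
t_fric_gyr = t_fric * 0.978                        # 1 kpc/(km/s) = 0.978 Gyr
print(t_fric_gyr)       # ~0.8 Gyr, consistent with the ~1e9 yr quoted above
```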
## 3. IS THE MINOR-MERGER SCENARIO CONSISTENT WITH OBSERVATIONS ?
As summarized briefly in the previous section, there is little evidence that both the tidal interaction and the bar-driven fueling work in the majority of Seyfert galaxies. In this section, we investigate whether or not the minor-merger scenario appears consistent with all important observational properties of Seyfert galaxies.
### 3.1. Morphology of Seyfert Hosts
Morphological properties of Seyfert hosts have been studied since the pioneering work by Adams (1977), who suggested that the Seyfert activity is associated with nuclei of disk galaxies. Although disk properties of Seyfert hosts (e.g., scale lengths and color) are similar to those of disk galaxies without an AGN (MacKenty 1990), Seyfert nuclei are more often found in disk galaxies with ringed structures (e.g., inner and/or outer rings; Heckman 1978; Simkin et al. 1980; Arsenault 1989; Moles et al. 1995), and with amorphous disks (e.g., S0 galaxies; Simkin et al. 1980; MacKenty 1990). The formation of ringed structures in disk galaxies is generally interpreted in terms of so-called Lindblad resonances (e.g., Binney & Tremaine 1987). Note, however, that minor mergers are also responsible for the formation of ringed structures (see Figure 4 in Mihos & Hernquist 1994). Bar-mode instability in host disks may also be excited by minor mergers, although the bar formation depends on both the relative mass and the orbital parameters of the satellite (e.g., Byrd et al. 1986). It is also known that minor mergers cause the kinematic heating of host disks (e.g., Quinn, Hernquist, & Fullagar 1993; Walker et al. 1996; Veláquez & White 1999). Such disk galaxies may be classified as S0 or amorphous galaxies, which are frequently observed among the Seyfert hosts. Therefore, the minor merger scenario appears consistent with the morphological variety of Seyfert hosts.
### 3.2. The Formation of Randomly-Oriented Anisotropic Radiation
Recent CCD narrow-band imaging studies of NLRs of Seyfert galaxies have shown that the NLRs often show bi-conical structures (Pogge 1989; Wilson & Tsvetanov 1994; Schmitt & Kinney 1996). The most spectacular property of the NLRs is that the bi-conical structures are oriented independently of the host galactic disks in many cases. It is also known that the axes of nuclear radio jets in Seyfert nuclei show the same randomly-oriented nature (Schmitt et al. 1997). These observational results suggest that accretion disks are rotating around randomly-oriented axes which are different from the rotational axes of the host disks (Clarke et al. 1998).
Generally, nuclear gas disks are not necessarily aligned to the host disks. For example, if some non-axisymmetric potential such as a tumbling bar is present in a disk, the preferred plane for the nuclear gas is perpendicular to the host disk and thus a tilted nuclear gas disk can be made (Tohline & Osterbrock 1982). If this is the case, we would observe that NLRs tend to align along the bar axes. However, there is no correlation between the NLR axes and the bar axes (Bower et al. 1995). Note also that all Seyfert galaxies do not have such strong bars (Simkin et al. 1980; Arsenault 1989; MacKenty 1990; Moles et al. 1995). Therefore, it is unlikely that the NLR axes are controlled by the bar potential in the Seyfert galaxies.
Molecular-gas tori probed by the H₂O maser emission at 22 GHz show significant warping in some nearby AGNs such as NGC 1068 (Greenhill et al. 1996; see also Begelman & Bland-Hawthorn 1997) and NGC 4258 (Miyoshi et al. 1995). This warping can be explained by the radiative effect from the central engine (Pringle 1996, 1997) given that a typical size of tori is much less than 10 pc (e.g., Taniguchi & Murayama 1998). However, the warped disks traced by other molecular lines such as CO and HCN are found to extend spatially up to radii of ∼100 pc (Kohno et al. 1996; Tacconi 1998). Such large-scale tilted gas disks cannot be formed by the effect of radiation force from the central engine because a typical warping radius explained by the radiation force is of the order of 0.01 pc for AGNs (Pringle 1997).
We discuss whether or not minor mergers are responsible for the formation of tilted nuclear gas disks. If a merging satellite galaxy has no nucleus (e.g., Magellanic clouds), the gas in the satellite will interact with the gas in the host disk and then be settled in the disk before reaching the nuclear region. On the other hand, if it has a nucleus (e.g., M32), the satellite nucleus will sink toward the nuclear region because of the dynamical friction (Taniguchi & Wada 1996). Here we regard that a nucleus is either a SMBH or a significant concentration of nuclear star cluster. In this respect, satellite galaxies in Mihos & Hernquist (1994) and Hernquist & Mihos (1995) are also nucleated ones. Hence, we suggest that only minor mergers with nucleated satellites are responsible for triggering activity in Seyfert nuclei.
The orbital decay of satellite galaxies could occur from random orientations statistically. Even if a satellite takes a highly inclined orbit, numerical studies generally show that the satellite orbit tends to settle in the disk plane before it reaches the host center (e.g., Quinn et al. 1993). However, note that the spatial resolution in the previous numerical studies is about a few hundred pc. The bi-conical NLR of Seyfert galaxies may be collimated by a molecular/dusty torus around the central engine. The plane in which the torus resides seems to be almost parallel to the final orbital plane of the satellite nucleus around the host nucleus. Since a typical inner radius of tori is of the order of 0.1 pc (Taniguchi & Murayama 1998 and references therein), the spatial resolution in the previous numerical studies is too poor to specify the final orbital plane in minor mergers. If a host galaxy has no significant bulge component, the satellite nucleus may not be able to transfer its angular momentum perpendicular to the disk efficiently. This suggests that the final orbital plane of the satellite nucleus around the host nucleus is often different from the host disk. Therefore, the minor-merger scenario appears consistent with the observed random nature of the tilted nuclear gas disks in Seyfert galaxies; see, for the formation of tilted nuclear gas disks, numerical simulations by Taniguchi & Wada (1996).
### 3.3. Type 1 vs. Type 2 Seyfert Nuclei
The two types of Seyfert activity (S1 and S2) are unified by introducing optically-thick dusty/molecular tori around the central engine (e.g., Antonucci 1993; Heisler, Lumsden, & Bailey 1997). However, there are some observational differences between S1s and S2s other than the presence/absence of broad-line regions. The first important difference is that S1s tend to have their torus axes aligned close to the host disk axes; i.e., the random nature is more frequently observed in S2s (Schmitt & Kinney 1996; see also Maiolino & Rieke 1995). This difference may be explained in terms of the difference in orbits of the satellite galaxies which merged into the hosts; i.e., S1s prefer minor mergers with satellites whose orbits are relatively parallel to the host disks, while S2s prefer those with polar-like orbits. Here we should remember that the current unified model explains the distinction between S1s and S2s as S1s (S2s) being observed from favored (unfavored) viewing angles. Therefore, even if some disk galaxies experience a minor merger with a polar-like orbit and then evolve into Seyfert galaxies, some of them are observed as S1s if observed from favored viewing angles. We note that this causes some ambiguity in the explanation proposed above.
Next we discuss another interesting difference between S1s and S2s; S2s tend to experience circumnuclear starbursts (hereafter CNSBs) more frequently than S1s. Pogge (1989) made a narrow-band emission-line imaging survey of 20 nearby Seyfert galaxies and found that CNSBs occur in $`\sim `$ 30% of the S2s while no CNSB is found in the S1s. Later observational studies (Oliva et al. 1995; Hunt et al. 1997) have confirmed that there is little evidence for CNSBs in S1s. There are two necessary conditions to initiate CNSBs: 1) the presence of cold molecular gas in the circumnuclear region sufficient to form a large number of massive stars, and 2) the presence of some physical mechanism to trigger CNSBs. An earlier CO study of Seyfert galaxies suggested that S2s tend to be richer in CO than S1s (Heckman et al. 1989). However, recent CO studies showed that there is little difference in the molecular gas content between S1s and S2s (Maiolino et al. 1997; Vila-Vilaró, Taniguchi, & Nakai 1998). Most disk galaxies including S0s may have molecular gas clouds with masses of $`10^8M_{\odot }`$ in their circumnuclear regions (e.g., Taniguchi et al. 1994). If this is the case, the most important factor for the occurrence of CNSBs seems to be the triggering rather than the gas content in the circumnuclear regions of the host disks. Taniguchi & Wada (1996) showed from their numerical simulations that the dynamical action of a pair of galactic nuclei (i.e., the host nucleus and the satellite one) can trigger CNSBs in the central region of minor mergers (see also Taniguchi, Wada, & Murayama 1997; section 4 in Taniguchi 1997). Since it is known that the star formation activity in galactic disks may be controlled by the surface mass density of cold gas (e.g., Kennicutt 1998; Taniguchi & Ohyama 1998), more sophisticated observations will be necessary to unveil any difference in surface gas mass density between S1s and S2s; e.g., sensitive molecular-line mapping surveys with radio interferometer facilities.
Finally, we note that the gaseous content of Seyfert hosts also affects the visibility of the central engine if the circumnuclear gas disk is opaque enough to hide it (Maiolino & Rieke 1995; Iwasawa et al. 1995; Malkan, Varoujan, & Raymond 1998). Furthermore, the observational differences described above have been considered a serious problem for the strict unified model of Seyfert nuclei (e.g., Heckman et al. 1989). They seem too complex to be understood unambiguously at present. Perhaps the complexity may be attributed to the wide variety of properties of both host galaxies and satellites, as well as of satellite orbits. Detailed numerical simulations of minor mergers for various sets of parameters are therefore also recommended.
### 3.4. Environmental Properties of Seyfert Galaxies
The observed excess of faint companion galaxies around Seyfert galaxies (Fuentes-Williams & Stocke 1988) provides supporting evidence for the minor-merger scenario, because galaxies with more satellites should have more chances to host Seyfert nuclei on statistical grounds.
Next we discuss galaxy-interaction-induced minor mergers. Let us consider an interaction between disk galaxies. Each galaxy has some satellites orbiting it in the given gravitational potential before the interaction. After the two galaxies begin interacting with each other, the orbital motions of some satellites are disturbed, suddenly forcing them to fall into the galaxies, although other satellites are dynamically scattered away from their hosts. It is also expected that some satellites are directly trapped in the central regions of the galaxies during the passage. Therefore, galaxy interactions may enhance the chance of minor mergers.
One remaining problem is that there is a possible environmental difference between S1s and S2s; S2s tend to have more massive companions than S1s (e.g., Simkin 1991; Taniguchi 1992; Dultzin-Hacyan et al. 1999). There is also a tendency for S2s to prefer denser galaxy environments (Laurikainen & Salo 1995; De Robertis et al. 1998b). If more massive galaxies tend to have more numerous, nucleated satellites, they could have more chances to evolve into Seyfert galaxies from a statistical viewpoint. However, there seems to be no definite reason why S2s should prefer such environments. This problem remains open for future study.
### 3.5. Frequency of the Seyfert Activity
We estimate the frequency of the Seyfert activity under the assumption that all of it is triggered by minor mergers with nucleated satellite galaxies (see also Taniguchi & Wada 1996). Tremaine (1981) estimated that every galaxy would experience minor mergers with its satellite galaxies several times. Since a typical galaxy may have several satellite galaxies (Zaritsky et al. 1997), the probability of merger for a satellite galaxy may be estimated to be $`f_{\mathrm{merger}}\simeq 0.5`$; i.e., half of the satellite galaxies have already merged with a host galaxy, while the rest are still orbiting. Another important quantity is the number of nucleated satellite galaxies. For example, M31 has two nucleated satellites (M32 and NGC 205), and the field S0 galaxy NGC 3115 has a nucleated dwarf companion (van den Bergh 1986). Although there has been no systematic search for nucleated satellite galaxies, it is likely that every galaxy has (or had) a few nucleated satellites: we assume $`n_{\mathrm{sat}}=2`$. If we further assume that the typical lifetime of the Seyfert activity is $`\tau _{\mathrm{Seyfert}}\simeq 10^8`$ yr, we obtain the probability of Seyfert activity driven by minor mergers, $`P_{\mathrm{Seyfert}}\simeq f_{\mathrm{merger}}n_{\mathrm{sat}}\tau _{\mathrm{Seyfert}}\tau _{\mathrm{Hubble}}^{-1}\simeq 0.01(\tau _{\mathrm{Seyfert}}/10^8\mathrm{yr})`$, where $`\tau _{\mathrm{Hubble}}`$ is the Hubble time, $`\sim 10^{10}`$ yr. Hence, if minor mergers with nucleated satellites are responsible for triggering the Seyfert activity, it is statistically expected that Seyfert nuclei are found in about 1% of field disk galaxies, which is roughly consistent with the observed value (e.g., Osterbrock 1989).
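As a quick sanity check, this back-of-the-envelope estimate can be reproduced in a few lines (a sketch of ours; the parameter values are simply those quoted above):

```python
# Frequency of minor-merger-driven Seyfert activity, using the values
# quoted in the text: f_merger ~ 0.5, n_sat = 2,
# tau_Seyfert ~ 1e8 yr, tau_Hubble ~ 1e10 yr.

def seyfert_probability(f_merger=0.5, n_sat=2, tau_seyfert=1e8, tau_hubble=1e10):
    """P_Seyfert ~ f_merger * n_sat * tau_Seyfert / tau_Hubble."""
    return f_merger * n_sat * tau_seyfert / tau_hubble

print(f"P_Seyfert ~ {seyfert_probability():.2%}")  # ~ 1% of field disk galaxies
```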
### 3.6. Seyferts vs. Quasars
It has often been considered that the merger scenario also applies to the more luminous starburst-AGN (i.e., ULIG-quasar) connection; i.e., major mergers between or among nucleated gas-rich galaxies are the progenitors of quasars (Sanders et al. 1988; Taniguchi & Shioya 1998; Taniguchi, Ikeuchi, & Shioya 1999). Optically bright quasars found in the local universe (e.g., $`z<0.2`$) show evidence for major mergers between/among galaxies (Hutchings & Campbell 1983; Heckman et al. 1986; Bahcall et al. 1997). Although some quasar hosts look like giant elliptical galaxies with little morphological peculiarity (Disney et al. 1995; Bahcall et al. 1997), elliptical galaxies may themselves form from major mergers between/among disk galaxies (Barnes 1989; Wright et al. 1990; Hernquist & Barnes 1991; Ebisuzaki, Makino, & Okumura 1991; Kormendy & Sanders 1992; Weil & Hernquist 1996). Therefore, if we accept that the elliptical galaxies hosting quasars were also made by major mergers, we may conclude that all the nearby quasars were formed by major mergers.
### 3.7. A Summary of the Proposed Scenario
In Fig. 1, we show the unified formation mechanism of AGNs proposed here \[see Taniguchi et al. (1997) for a unified formation mechanism for both circumnuclear/nuclear starbursts and ultraluminous starbursts in ULIGs\]. Note that the viewing-angle dependence is not explicitly introduced in Fig. 1, although it is also an important factor in our scenario. If we postulate that the nuclear activity in all Seyfert galaxies is triggered by minor mergers with nucleated satellites, then various observational properties of Seyfert galaxies may be explained without invoking other physical mechanisms, e.g., bar-driven or tidal-interaction-driven fueling. Furthermore, it seems possible that all the nearby quasars come from major mergers between/among nucleated galaxies (Sanders et al. 1988; Taniguchi & Shioya 1998; Taniguchi, Ikeuchi, & Shioya 1999). Therefore, if we adopt the idea that all the AGNs in the local universe arise from either minor or major mergers, we arrive at a unified (or single) formation mechanism for the AGNs observed in the local universe.
Recently, De Robertis et al. (1998b) gave careful consideration to the interaction hypothesis, i.e., that there is a causal link between activity in the nucleus of a galaxy containing a supermassive compact object and disturbances to the host galaxy resulting from tidal interactions or mergers (see also De Robertis et al. 1998a). They discussed the potential importance of minor mergers for triggering activity in Seyfert nuclei, because a significant fraction of Seyfert hosts show little or no evidence for a recent (major) merger. However, they also discussed the possibility that there are various triggering mechanisms depending on the luminosity class, from LINERs to quasars. Although the scenario proposed here runs counter to that idea, it is one of the options within the interaction hypothesis. Therefore, appreciating their careful analysis, we would like to call our model “the simple interaction (or merger) model”. Finally, we note that some fueling mechanism is still required to ensure that gas in the host disk is actually driven into the very inner region (e.g., $`\lesssim `$ 1 pc).
I would like to thank my colleagues, in particular, Satoru Ikeuchi, Keiichi Wada, Toru Yamada, Yasuhiro Shioya, and Takashi Murayama for useful discussion. I would also like to thank Dave Sanders and Chris Mihos for useful discussion about the merger-driven gas fueling, and Josh Barnes and an anonymous referee for useful comments and suggestions. This work was partly done at Córdoba Observatory in Argentina. I would like to thank Silvia Fernándes and Sebastian Lípari for their warm hospitality. This work was financially supported in part by Grants-in-Aid for Scientific Research (Nos. 10044052 and 10304013) from the Japanese Ministry of Education, Culture, Sports, and Science.
# VPI-IPPAP-99-04 Constraining SUSY Models with Spontaneous CP-Violation via 𝐵→𝜓𝐾_𝑠
## 1 Introduction
The origin of CP-violation is one of the most profound problems in particle physics. In the Standard Model, all observable CP-violating effects in the kaon system can be successfully explained via the Cabibbo-Kobayashi-Maskawa (CKM) mechanism . However, the physical principles underlying CP-violation are still not understood.
One of the more elegant approaches to the problem of CP-violation is based on the possibility of spontaneous T-breaking in multi Higgs doublet systems . Supersymmetric models can provide such systems and thus are a natural setting to implement this idea. Spontaneous CP-violation (SCPV) in susy models has drawn considerable attention \[5-10\] due to the following attractive features:
- CP-phases become dynamical variables,
- CP-symmetry is restored at high energies,
- it allows one to avoid the excessive CP-violation inherent in susy models.
In this letter, we consider the implications of SCPV in minimal susy models for CP-asymmetries in $`B\to \psi K_s`$ decays. It is well known that the Standard Model predicts large CP-violation in these decays, namely $`\mathrm{sin}2\beta \gtrsim 0.4`$, where $`\beta `$ is one of the angles of the unitarity triangle . Currently this CP-asymmetry is being studied experimentally by the CDF collaboration. Even though the current statistics do not allow definite statements about the validity of the SM predictions, large CP-violation in this decay has been hinted at. The purpose of this paper is to determine whether a large CP-asymmetry, $`\mathrm{sin}2\beta \gtrsim 0.4`$, can be explained in susy models with spontaneously broken CP.
## 2 CP-Asymmetry in $`B\psi K_s`$ Decay and Spontaneous CP-Violation
In this section we will consider minimal susy models with spontaneously broken CP and obtain the lower bound on the CP-violating phase as dictated by $`\mathrm{sin}2\beta \ge 0.4`$ .
It has been shown that the Next-to-Minimal Supersymmetric Standard Model (NMSSM) is the simplest susy model which allows spontaneous CP-violation while being consistent with the experimental bound on the lightest Higgs mass . In the most general version of the NMSSM with the superpotential
$`W=\lambda \widehat{N}\widehat{H}_1\widehat{H}_2-\frac{k}{3}\widehat{N}^3-r\widehat{N}+\mu \widehat{H}_1\widehat{H}_2+W_{fermion},`$ (1)
SCPV can occur already at the tree level, thereby evading the Georgi-Pais theorem . Note that even though SCPV in the MSSM is theoretically possible , such a scenario is ruled out by the LEP constraints on the axion mass .
We will assume that $`all`$ CP-violating effects result from the complex Higgs VEVs
$`\langle H_1^0\rangle =v_1,\langle H_2^0\rangle =v_2e^{i\rho },\langle N\rangle =ne^{i\xi }.`$ (2)
The relevant interactions for one generation of fermions (after spontaneous $`SU(2)\times U(1)`$ symmetry breaking) can be written as follows (the complete list of interactions can be found in ):
$`\mathcal{L}=-h_uH_2^0\overline{u}_Ru_L-h_dH_1^0\overline{d}_Rd_L-g\overline{\stackrel{~}{W}^c}P_Ld\stackrel{~}{u}_L^{*}+h_d\overline{d}P_L\stackrel{~}{H}^c\stackrel{~}{u}_L+h_u\overline{\stackrel{~}{H}^c}P_Ld\stackrel{~}{u}_R^{*}+h.c.,`$ (3)

$`\mathcal{L}_{mix}=g(v_1\overline{\stackrel{~}{H}}P_L\stackrel{~}{W}+v_2e^{i\rho }\overline{\stackrel{~}{W}}P_L\stackrel{~}{H})+e^{i\kappa _u}h_um_{LR}^{(u)2}\stackrel{~}{u}_R^{*}\stackrel{~}{u}_L+e^{i\kappa _d}h_dm_{LR}^{(d)2}\stackrel{~}{d}_R^{*}\stackrel{~}{d}_L+h.c.,`$ (4)
where $`h_{u,d}`$ denote the Yukawa couplings and $`\kappa _{u,d}`$ are certain functions of the Higgs VEV phases $`\rho `$ and $`\xi `$. In what follows, we assume, for the sake of definiteness, that $`|\kappa _u|=|\kappa _d|=|\kappa |`$, $`m_{LR}^{(u)2}=m_{LR}^{(d)2}=m_{LR}^2`$, and that $`\kappa `$ and $`m_{LR}^2`$ are generation independent. Equation (4) represents the wino-higgsino and left-right squark mixings, with the former being responsible for the formation of the mass eigenstates, the charginos. However, following Ref. , we will prefer the “weak” (wino-higgsino) basis over the mass (chargino) one.
Information about the angle $`\beta `$ of the unitarity triangle was extracted from the decay rate evolution
$$\mathrm{\Gamma }(B^0[\overline{B}^0](t)\to \psi K_s)\propto e^{-\mathrm{\Gamma }t}\left(1-[+]\mathrm{sin}2\beta \mathrm{sin}\mathrm{\Delta }mt\right),$$
where $`\mathrm{\Delta }m`$ is the mass difference between the $`B_L`$ and $`B_H`$ states. On the other hand, the angles of the unitarity triangle can be expressed in terms of the mixing ($`\varphi _M`$) and decay ($`\varphi _D`$) phases which enter the $`B\overline{B}`$ mixing and $`b\to q\overline{q}Q`$ decay diagrams. For the process under consideration,
$`\mathrm{sin}2\beta =\mathrm{sin}(2\varphi _D+\varphi _M).`$ (5)
This relation is theoretically clean since it does not involve hadronic uncertainties and can serve as a sensitive probe for physics beyond the Standard Model. At the present time, the CKM entries are not known precisely enough to make a definite prediction for $`\beta `$. However, it is known that $`\mathrm{sin}2\beta `$ must fall between 0.4 and 0.9 in order for the CKM model to be consistent . In our model, we will impose this condition together with Eq.(5) to obtain the lower bound on the CP-violating phases appearing in the decay and mixing diagrams.
Let us now proceed to calculating the CP-violating effects in $`B\overline{B}`$ mixing. Figure 1 displays what we believe to be the most important contributions to the real part of $`B\overline{B}`$ mixing. These include the Standard Model box and wino superbox diagrams. It has been argued that all significant CP-violating effects result from complex phases in the propagators of the superparticles. For the $`K\overline{K}`$ system, one-loop diagrams involving complex phases in the wino-higgsino and left-right squark mixings can lead to the correct values of $`ϵ`$ and $`ϵ^{}`$ . The analogous $`B\overline{B}`$ diagrams are shown in Figure 2 (in the case of $`K\overline{K}`$ mixing, the diagram in Fig. 2b is super-CKM suppressed and can be neglected). Besides the above-mentioned contributions, there are a number of other contributions to $`B\overline{B}`$ mixing which can be classified as follows:
1. Higgs boxes,
2. gluino superboxes,
3. neutralino superboxes.
The gluino contribution can be neglected since the gluino is likely to be very heavy: $`m_{\stackrel{~}{g}}\gtrsim 310GeV`$ . Further, the neutralino analogs of Figs. 2a,b have to involve at least two powers of $`h_b`$ and are thus suppressed by $`(m_b/m_W)^2`$. The same argument applies to the charged Higgs contribution . On the other hand, the neutralino and Higgs contributions to $`Re(B\overline{B})`$ can be significant. However, they interfere with the SM contribution constructively and can only reduce the $`B\overline{B}`$ mixing weak phase. Since our purpose is to determine the $`lower`$ bound on this phase as dictated by $`\mathrm{sin}2\beta \ge 0.4`$, we can safely omit these corrections.
Therefore, for our purposes it is sufficient to retain only the SM and chargino contributions. Furthermore, note that we may concentrate solely on the $`(V-A)\times (V-A)`$ interaction, since 4-fermion chargino-generated interactions involving the right-handed quarks are suppressed by powers of $`m_b/m_W`$. As a consequence, hadronic uncertainties and QCD corrections will not affect our results, since they cancel in the expression for the phase. The resulting 4-fermion interaction can be expressed as
$`𝒪_{\mathrm{\Delta }B=2}=(k_{SM}+k_{SUSY}+e^{i\delta }l_2+ze^{-i\delta }l_2+e^{2i\delta }l_4+z^2e^{-2i\delta }l_4)\times \overline{d}\gamma ^\mu P_Lb\overline{d}\gamma _\mu P_Lb,`$ (6)
where $`k_{SM}`$ and $`k_{SUSY}`$ are real couplings induced by the diagrams in Fig. 1a and Fig. 1b, respectively. The CP-violating couplings $`e^{i\delta }l_2`$ and $`e^{2i\delta }l_4`$ result from the diagrams with two and four complex mixings shown in Figs. 2a and 2b, respectively ($`l_{2,4}`$ are defined to be real). It is important to note that along with the diagrams explicitly shown in Fig. 2, there are also “cross” diagrams in which the positions of $`\stackrel{~}{t}_L`$ and $`\stackrel{~}{t}_R`$ are interchanged. Such graphs contribute with the opposite phase and may seem to lead to a complete cancellation of the imaginary part of the coupling. However, this cancellation is only partial, owing to the fact that the higgsino vertex is, generally speaking, different from that of the gaugino. Such partial cancellation is accounted for by a variable factor $`z`$ ($`0\le z\le 1`$).
The Standard Model contribution is given by (we do not show the QCD correction factor explicitly)
$`k_{SM}={\displaystyle \frac{G_F^2}{16\pi ^2}}m_t^2H(x_t)(V_{tb}V_{td})^2,`$ (7)
with $`H(x)`$ being the Inami-Lim function. To estimate the superbox contributions, we may treat the gaugino and higgsino as particles of mass $`m_{\stackrel{~}{W}}`$ with (perturbative) mixing given by Eq. (4). In this approximation, the chargino propagator with a mixing insertion (Fig. 2) is proportional to $`g(v_1+v_2e^{i\rho })\frac{m_{\stackrel{~}{W}}\not{k}}{(k^2-m_{\stackrel{~}{W}}^2)^2}`$. The resulting 4-fermion couplings are given by
$`k_{SUSY}=\frac{g^4\zeta ^2}{128\pi ^2}\frac{1}{m_{\stackrel{~}{q}}^2}(\stackrel{~}{V}_{tb}\stackrel{~}{V}_{td})^2\frac{1}{(y-1)^2}\left[y+1-\frac{2y}{y-1}\mathrm{ln}y\right],`$ (8)

$`e^{i\delta }l_2=\frac{g^4h_t^2\zeta }{64\pi ^2}\frac{m_{LR}^2e^{i\kappa }(v_1+v_2e^{i\rho })}{m_{\stackrel{~}{q}}^5}(\stackrel{~}{V}_{tb}\stackrel{~}{V}_{td})^2\frac{\sqrt{y}}{(y-1)^5}\left[3-3y^2+(1+4y+y^2)\mathrm{ln}y\right],`$ (9)

$`e^{2i\delta }l_4=\frac{g^4h_t^4}{384\pi ^2}\frac{m_{LR}^4e^{2i\kappa }(v_1+v_2e^{i\rho })^2}{m_{\stackrel{~}{q}}^8}(\stackrel{~}{V}_{tb}\stackrel{~}{V}_{td})^2\frac{1}{(y-1)^7}\left[-1-28y+28y^3+y^4-12y(1+3y+y^2)\mathrm{ln}y\right],`$ (10)
where $`y=m_{\stackrel{~}{W}}^2/m_{\stackrel{~}{q}}^2`$; $`m_{\stackrel{~}{q}}`$ and $`m_{\stackrel{~}{W}}`$ denote the top squark and the chargino masses, respectively, and $`\stackrel{~}{V}`$ is the squark analog of the CKM matrix. In these considerations, the top squark contribution is believed to play the most important role. The influence of the $`c`$\- and $`u`$-squarks is taken into account through a variable super-GIM cancellation factor $`\zeta `$ ($`0\le \zeta \le 1`$). Such a factor is associated with every squark line on which summation over all the up-squarks takes place. Since the masses of the top and $`c,u`$-squarks are expected to be very different due to the large top Yukawa coupling (as motivated by SUGRA), the natural value of $`\zeta `$ would be of order unity ($`\zeta `$ is defined to vanish in the limit of degenerate squarks).
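As a cross-check on the reconstructed loop functions, the dimensionless brackets of Eqs. (8)-(10), combined with their $`(y-1)`$ denominators (and the $`\sqrt{y}`$ factor of Eq. (9)), can be evaluated numerically. The short sketch below (ours, not part of the original analysis) shows that they stay finite through the degenerate point $`y=1`$, where they approach 1/3, 1/30 and 1/35, respectively:

```python
import numpy as np

def f_box(y):
    # Eq. (8) bracket / (y-1)^2; analytic limit 1/3 as y -> 1
    return (y + 1.0 - 2.0*y*np.log(y)/(y - 1.0)) / (y - 1.0)**2

def f_2(y):
    # sqrt(y) * Eq. (9) bracket / (y-1)^5; analytic limit 1/30 as y -> 1
    return np.sqrt(y)*(3.0 - 3.0*y**2 + (1.0 + 4.0*y + y**2)*np.log(y)) / (y - 1.0)**5

def f_4(y):
    # Eq. (10) bracket / (y-1)^7; analytic limit 1/35 as y -> 1
    num = -1.0 - 28.0*y + 28.0*y**3 + y**4 - 12.0*y*(1.0 + 3.0*y + y**2)*np.log(y)
    return num / (y - 1.0)**7

# chargino lighter than, comparable to, and heavier than the top squark;
# y = 1 itself is avoided since the expressions are 0/0 there numerically
for y in (0.25, 0.9, 1.1, 4.0):
    print(f"y = {y}: {f_box(y):.4f} {f_2(y):.4f} {f_4(y):.4f}")
```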
To derive the lower bound on the phase $`\delta `$, we may replace $`v_1+v_2e^{i\rho }`$ in Eqs. (9) and (10) by its “maximal” value $`\sqrt{2}ve^{i\rho }`$, with $`v`$ defined in the usual way: $`v=\sqrt{v_1^2+v_2^2}\simeq 174GeV`$. Apparently, $`\delta \le |\rho |+|\kappa |.`$
Let us now determine the weak phases $`\varphi _M`$ and $`\varphi _D`$. It follows from Eq.(6) that
$`\mathrm{tan}\varphi _M=\frac{l_2(1-z)\mathrm{sin}\delta +l_4(1-z^2)\mathrm{sin}2\delta }{k_{SM}+k_{SUSY}+l_2(1+z)\mathrm{cos}\delta +l_4(1+z^2)\mathrm{cos}2\delta }.`$ (11)
On the other hand, the decay phase $`\varphi _D`$ is negligibly small . Indeed, in our model, the only source of CP-violation in the process $`b\to c\overline{c}s`$ is the superpenguin diagram with the charginos and squarks in the loop (another possible contributor, Higgs-mediated tree-level decay, is suppressed by the quark masses). However, this diagram is greatly suppressed as compared to its CP-conserving counterpart, the $`W`$-mediated tree-level decay, due to the loop factors and heavy squark propagators. Also, unlike for the kaon decays, there is no $`\mathrm{\Delta }I=1/2`$ enhancement of the $`(V-A)\times (V+A)`$ interactions. As a result, the weak decay phases can be neglected. This also means that direct CP-violation in our model is negligibly small as compared to that in the Standard Model (unless there is no tree-level decay mode).
According to Eq.(5), the angle $`\beta `$ is determined by
$`\mathrm{sin}2\beta =\mathrm{sin}\varphi _M.`$ (12)
Then the experimental bound $`\mathrm{sin}2\beta \ge 0.4`$ can be translated into

$`\mathrm{tan}\varphi _M\ge 0.44.`$ (13)
This translation uses $`\mathrm{tan}\varphi _M=\mathrm{sin}\varphi _M/\sqrt{1-\mathrm{sin}^2\varphi _M}`$ with $`\mathrm{sin}\varphi _M=\mathrm{sin}2\beta \ge 0.4`$. This, in turn, leads to a lower bound on the phase $`\delta `$ which can be obtained numerically from Eq. (11). Note that Eq. (11) is free of hadronic uncertainties and QCD radiative corrections.
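The numerical extraction of the lower bound on $`\delta `$ can be sketched as follows; the coupling values below are placeholders chosen purely for illustration (the actual values follow from Eqs. (7)-(10) for a given susy parameter set):

```python
import numpy as np

def tan_phi_M(delta, k_SM, k_SUSY, l2, l4, z):
    # Eq. (11)
    num = l2*(1 - z)*np.sin(delta) + l4*(1 - z**2)*np.sin(2*delta)
    den = k_SM + k_SUSY + l2*(1 + z)*np.cos(delta) + l4*(1 + z**2)*np.cos(2*delta)
    return num/den

# sin(2*beta) >= 0.4 with a negligible decay phase means
# tan(phi_M) >= 0.4/sqrt(1 - 0.4**2) ~ 0.44, i.e. Eq. (13)
target = 0.4/np.sqrt(1.0 - 0.4**2)

pars = dict(k_SM=1.0, k_SUSY=0.3, l2=0.8, l4=0.4, z=0.5)  # illustrative only

deltas = np.linspace(0.0, np.pi/2.0, 100000)
ok = deltas[tan_phi_M(deltas, **pars) >= target]
print("delta_min =", ok.min() if ok.size else "bound cannot be met for these couplings")
```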
## 3 Numerical Results
In this section we will discuss implications of Eq.(13) and its compatibility with the upper bound on the NEDM.
It is well known that the tight experimental bound on the NEDM imposes stringent constraints on the CP-violating phases which appear in extensions of the Standard Model. In our model, the largest contributions to the NEDM are shown in Fig.3. Barring accidental cancellations, one can constrain the CP-phases entering the higgsino-gaugino and squark left-right mixings via the chargino (Fig.3b) and gluino (Fig.3a) contributions to the NEDM, respectively. Consequently, the phase $`\delta `$ appearing in the $`B\overline{B}`$ mixing becomes bounded due to
$`\mathrm{sin}\delta \le |\mathrm{sin}\kappa |+|\mathrm{sin}\rho |.`$ (14)
This requires $`\delta `$ to be of order $`10^{-2}`$-$`10^{-1}`$ .
On the other hand, Eq.(13) imposes a lower bound on $`\delta `$. As seen from Eqs.(8)-(10), this bound depends strongly on the super-CKM matrix. We consider the following possibilities:
1. the super-CKM matrix duplicates the CKM one, $`\stackrel{~}{V}_{td}\simeq V_{td}`$;
2. the super-CKM mixing is enhanced, $`\stackrel{~}{V}_{td}\simeq V_{td}/\mathrm{sin}\theta _C`$;
3. the super-CKM mixing is doubly enhanced, $`\stackrel{~}{V}_{td}\simeq V_{td}/(\mathrm{sin}\theta _C)^2`$.
In all of these cases we assume $`\stackrel{~}{V}_{tb}\simeq 1`$. The first possibility implies that the supersymmetric contribution to $`B\overline{B}`$ mixing is suppressed as compared to the CP-conserving Standard Model box diagram. As a result, we find that the constraint $`\mathrm{tan}\varphi _M\ge 0.44`$ cannot be satisfied for any $`\delta `$, even assuming light ($`\sim 100GeV`$) squarks. For the same reason, we are bound to consider only a light ($`\sim 100GeV`$) chargino and maximal left-right squark mixing, $`m_{LR}=m_{\stackrel{~}{q}}`$. On the other hand, the third option is unrealistic since it leads to an unacceptably large stop contribution to the $`K_S`$-$`K_L`$ mass difference unless the top squark mass is around 1 $`TeV`$. Therefore, we are left with the second possibility, which we will examine in detail. From now on we assume that $`\stackrel{~}{V}_{td}\simeq V_{td}/\mathrm{sin}\theta _C`$, $`m_{\stackrel{~}{W}}\simeq 100GeV`$, $`m_{LR}\simeq m_{\stackrel{~}{q}}`$, and will study the behavior of the lower bound on $`\delta `$ as a function of the remaining free parameters: $`z,\zeta ,\mathrm{tan}\beta `$, and $`m_{\stackrel{~}{q}}`$.
Fig. 4 displays a typical picture showing the inconsistency of the model (the displayed bound on the NEDM was calculated for a gluino mass in the range $`300-500GeV`$ using the standard formulas ). The lower bound exceeds the upper bound by one or two orders of magnitude. Moreover, the region allowed by the CP-asymmetry in $`B\to \psi K_S`$ is restricted to the upper left corner of the plot. The reason for this can be easily understood. Indeed, if the squarks are heavy, the magnitude of the susy contribution is negligible as compared to that of the CP-conserving SM box, and sufficient CP-violation cannot be produced regardless of how large the phases are.
Let us now consider the effect of each of the variable parameters (we assume that $`\delta `$ belongs to the first quadrant; even stricter lower bounds on $`\mathrm{sin}\delta `$ can be obtained for $`\delta `$ in the second quadrant, due to a partial cancellation in the numerator of Eq. (11)).
1. $`\zeta `$-dependence (Fig. 5).
The lower bound on $`\delta `$ relaxes as we introduce the super-GIM cancellation. This occurs due to the increasing share of the CP-violating diagram in Fig. 2b. However, one theoretically expects $`\zeta `$ to be of order unity, due to the large difference between the stop and the other squark masses.
2. $`\mathrm{tan}\beta `$-dependence (Fig. 6).
The gap between the lower and upper bounds widens drastically as $`\mathrm{tan}\beta `$ increases. Recalling that $`h_u=\frac{gm_u}{\sqrt{2}m_W\mathrm{sin}\beta }`$ and $`h_d=\frac{gm_d}{\sqrt{2}m_W\mathrm{cos}\beta }`$, it is easy to see that for large $`\mathrm{tan}\beta `$ the NEDM constraint becomes stricter due to the large $`h_d`$ whereas the CP-violating contributions to $`B\overline{B}`$ mixing, proportional to powers of $`h_t`$, decrease. We do not consider the case $`\mathrm{tan}\beta <1`$ because of the SUGRA constraints and the breakdown of perturbation theory in this region.
3. $`z`$-dependence (Fig. 7).
Apparently, the incorporation of a partial cancellation ($`z>0`$) among the CP-violating contributions makes the lower bound rise. One expects the natural value of $`z`$ to be around 1/2.
In all of these cases the regions allowed by the NEDM and $`B\to \psi K_S`$ are at least an order of magnitude apart (note also that small CP-phases are disfavored by the bound on the lightest Higgs mass ). Furthermore, heavy squarks ($`\gtrsim 400GeV`$) are prohibited by the large CP-asymmetries observed in $`B\to \psi K_S`$. This condition is quite restrictive and may alone be sufficient to rule out the model in the near future (even if large CP-phases were allowed).
It should be mentioned that, in the limit of large $`\mathrm{tan}\beta `$, the CP-violating neutralino and Higgs contributions to $`B\overline{B}`$ mixing become more important. However, the CP-phases are severely constrained in this case ($`\lesssim 10^{-3}`$). Therefore, these contributions do not lead to any considerable modifications of our analysis.
## 4 Further Discussion
The model under consideration has further implications for B-physics. For instance, in this model, direct CP-violation is greatly suppressed in all tree-level allowed processes as compared to what one expects in the Standard Model. This provides another signature testable in the near future.
The other angles of the unitarity triangle, $`\alpha `$ and $`\gamma `$, can be determined in a similar manner from, for example, $`B_d\to \pi ^+\pi ^{-}`$ and $`B_s\to \rho K_s`$ decays . However, in the Standard Model, one cannot determine these angles precisely enough from the decay rate evolution and relations analogous to (5), due to considerable penguin contributions. To eliminate their influence, one can use isospin and $`SU(3)`$ relations . For instance, in order to determine $`\alpha `$, one needs to know the rates of $`B_d\to \pi ^+\pi ^{-}`$, $`B_d\to \pi ^0\pi ^0`$, and $`B^+\to \pi ^+\pi ^0`$, along with the rates of their charge-conjugated processes. Then $`\alpha `$ can be found from certain triangle relations among the corresponding amplitudes. In our case, however, this analysis becomes trivial due to the vanishingly small interference between the tree and superpenguin contributions. As a result, $`\alpha `$ can be read off directly from the analog of Eq. (12). The angles extracted in such a way normally do not exceed a few degrees (modulo $`180^{\circ }`$) and do not form a triangle .
In our model, the angles of the unitarity triangle do not have a process-independent meaning. Indeed, contrary to the Standard Model, they cannot be extracted from direct CP-violating processes simply because such processes are prohibited. Moreover, these angles are not related to the sides of the unitarity triangle.
Another important consequence of the model is that the CKM matrix is real and orthogonal. This, of course, is also true for general (nonsupersymmetric) two Higgs doublet models with FCNC constraints, in which CP is broken spontaneously . As a result, $`|V_{ub}|,|V_{td}|`$, and $`|\mathrm{sin}\theta _CV_{cb}|`$ must form a flat triangle. Presently, such a triangle is experimentally allowed provided new physics contributes significantly to $`\mathrm{\Delta }m_{B_d}`$ . To determine the status of these models, a more precise determination of $`|V_{ub}|`$ and $`|V_{td}|`$ is necessary. Orthogonality relations among other CKM entries are less suitable for probing this class of models due to the small CKM-phases involved.
We have considered in detail the case of the NMSSM. The results, however, remain valid for NMSSM-like models with an arbitrary number of sterile superfields $`\widehat{N}`$. Indeed, since the $`\widehat{N}`$’s do not interact with matter fields directly, the introduction of a sterile superfield does not affect the way CP-phases enter the observables. The CP-violating effects will still be described by the diagrams in Figs. 2 and 3. Therefore, the argument we used also applies to this more general situation: the CP-phase in left-right squark mixing can be constrained via the gluino contribution to the NEDM, whereas the phase in the gaugino-higgsino mixing can be constrained via that of the chargino (assuming no accidental cancellation). In the same way, we find that the lower and upper bounds on the CP-phases are incompatible. A nontrivial extension of the model, in which a cancellation among various contributions to the NEDM can be well motivated, is necessary to rectify this problem (the possibility of such a cancellation in certain susy models has recently been considered by a few authors; it is, however, unclear whether this cancellation can be made natural, see and references therein).
## 5 Conclusions
We have analyzed the CP-asymmetry in $`B\to \psi K_s`$ decay within minimal supersymmetric models with spontaneous CP-violation. We have found that the CP-asymmetry required by $`\mathrm{sin}2\beta \ge 0.4`$ can be accommodated in these models only if the following conditions are met:
1. left-right squark mixing is maximal, $`m_{LR}\simeq m_{\stackrel{~}{q}}`$,
2. super-CKM mixing is enhanced, $`\stackrel{~}{V}_{td}\simeq V_{td}/\mathrm{sin}\theta _C`$,
3. the chargino is relatively light, $`m_{\stackrel{~}{W}}\lesssim 100GeV`$,
4. the t-squark is lighter than $`350-400GeV`$.
Even if this is the case, the required CP-violating phases are larger than those allowed by the bound on the NEDM by one or two orders of magnitude.
We conclude that the model under consideration cannot accommodate a large CP-asymmetry in $`B\to \psi K_s`$ while complying with the bound on the NEDM. Thus, if the Standard Model prediction is confirmed, the model will have to be recognized as unrealistic. This result holds true for models with an arbitrary number of sterile superfields. To reconcile theory with experiment, one would have to resort to essentially nonminimal scenarios in which large CP-violating phases are naturally allowed.
Note also that since, in this approach, CP-violation is a purely supersymmetric effect, the model requires the existence of a relatively light t-squark. This may conflict with the Tevatron constraints on supersymmetry in the near future (see, for example, ).
We have also discussed other testable predictions of the model. Among them are the suppression of direct CP-violation and orthogonality of the CKM matrix.
The author is grateful to L. N. Chang and T. Takeuchi for discussions and reading of the manuscript, and to T. Falk for useful comments.
Figure Captions
Fig. 1a,b Most important contributions to $`B\overline{B}`$ mixing.
Fig. 2a,b Dominant CP-violating contributions to $`B\overline{B}`$ mixing (all possible permutations are implied).
Fig. 3a,b Contributions to the NEDM allowing to constrain the wino-higgsino and squark left-right mixing phases.
Fig. 4 Regions allowed by the CP-asymmetry in $`B\to \psi K_s`$ (upper) and the bound on the NEDM (lower). Typically, the region allowed by $`B\to \psi K_s`$ is much smaller than shown here due to the partial cancellation (see Fig. 7, $`z=1/2`$).
Fig. 5 Lower bound on $`\mathrm{sin}\delta `$ as a function of the super-GIM cancellation parameter $`\zeta `$: 1 - $`\zeta =1`$, 2 - $`\zeta =1/2`$, 3 - $`\zeta =1/4`$.
Fig. 6 Lower bound on $`\mathrm{sin}\delta `$ as a function of $`\mathrm{tan}\beta `$: 1 - $`\mathrm{tan}\beta =1`$, 2 - $`\mathrm{tan}\beta =25`$.
Fig. 7 Lower bound on $`\mathrm{sin}\delta `$ as a function of the partial cancellation parameter $`z`$: 1 - $`z=0`$, 2 - $`z=1/4`$, 3 - $`z=1/2`$.
# A Stellar Population Gradient in VII Zw 403 - Implications for the Formation of Blue Compact Dwarf Galaxies

Based on observations made with the NASA/ESA Hubble Space Telescope obtained, and supported in part through grant number AR-06404.01-95A, from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 1 Introduction
The stellar content and evolutionary history of Blue Compact Dwarf (BCD) galaxies have been a puzzle for some time. The low metal abundances derived from the H-II regions of BCDs (between about $`Z_{\odot }/3`$ and $`Z_{\odot }/50`$, Thuan et al. 1994), combined with their high rates of star formation and large H-I masses (Thuan & Martin 1981), have raised the question of whether BCDs are truly young galaxies which formed their first stars recently, or old galaxies which occasionally light up with bright, compact starburst regions (Searle & Sargent 1972, Searle et al. 1973, Izotov & Thuan 1999). In their original definition of a “young” galaxy, Searle & Sargent specified that this is a galaxy which formed most of its stars in recent times. In subsequent years the argument over BCD ages has evolved into a more polarized one, because it is important to understand whether or not there might exist any local galaxies which are only now forming their very first generation of stars out of pristine gas. Such BCDs, making all of their stars in the on-going starburst, could be considered local examples of primeval galaxies (Thuan & Izotov 1998), and they would be fundamentally different from old galaxies which began forming their first stars over 10 Gyrs ago at high redshifts.
Several lines of evidence support the old-galaxy hypothesis. Thuan (1983) found indications for the presence of evolved stars from the integrated, near-IR colors of BCDs. Fanelli et al. (1988) used UV spectra to show that the star-formation rates of BCDs are discontinuous. A deep CCD imaging survey by Loose & Thuan (1986) first revealed that the bright, compact star-forming regions of BCDs are embedded in much fainter, redder halos with elliptical outer isophotes. Nearly all galaxies in their sample ($`>`$95%) show such an underlying, low-surface-brightness component. Subsequent CCD surveys have confirmed this result (e.g., Kunth et al. 1988, Papaderos et al. 1996, Telles et al. 1997, Meurer 1999). Evolutionary synthesis models (e.g., Krüger et al. 1991) and chemodynamical models (e.g., Rieschick & Hensler 1998) favor the old-galaxy view as well.
Hubble Space Telescope (HST) observations of several extremely metal-poor BCDs recently led Thuan et al. (1997) and Thuan & Izotov (1998) to propose that primeval BCDs may yet be found locally. Their argument is based in part on the blue colors observed for the unresolved underlying disks. This is seen in I Zw 18 ($`Z_{\odot }/50`$) by Hunter & Thronson (1995), in SBS 0335-052 ($`Z_{\odot }/41`$) and SBS 0335-052W (around $`Z_{\odot }/50`$) by Thuan et al. (1997), Papaderos et al. (1998) and Lipovetsky et al. (1999), and in SBS 1415+437 ($`Z_{\odot }/21`$) by Thuan et al. (1999). Based on abundance measurements, Izotov & Thuan (1999) suggest that in fact all galaxies with $`Z\lesssim Z_{\odot }/20`$ are young, with ages not exceeding 40 Myr, while those with $`Z\lesssim Z_{\odot }/5`$ are no older than $`\sim `$ 1-2 Gyr. In the definition of Izotov & Thuan, then, a “young” galaxy is one which has not experienced any star-forming events prior to the current one (which might have enriched its gas beyond what is observed). By most accounts, even galaxies with ages $`<`$ 2 Gyr would be considered relatively young galaxies, as their formation redshift would not have been coeval with that of the large or other dwarf galaxies.
The controversy over the age of BCDs reflects our current knowledge of galaxy formation and evolution. Did dwarf galaxies form from primordial density fluctuations at $`z\gg 5`$ (Ikeuchi & Norman 1987)? Are the dwarfs observed today the leftover building blocks of large galaxies (White & Rees 1978, Dekel & Silk 1986)? Is the formation of dwarfs delayed until $`z\sim 1`$, when the UV background cooled sufficiently for gas to collapse within small dark matter halos and form stars (Blanchard et al. 1992, Babul & Rees 1992)? Are dwarfs rapidly evolving for $`z<1`$, in which case they could account for the rapid evolution in the galaxy luminosity function necessary to explain the faint blue excess (Broadhurst et al. 1988, Babul & Ferguson 1996, Spaans & Norman 1997, Ferguson & Babul 1998, Guzmán et al. 1998)? Where are the local, isolated H-I clouds still awaiting their first generation of stars (Briggs 1997)?
Resolved stellar populations are the fossil record of a galaxy’s star-formation history (SFH). They can also be used to provide a definition of a galaxy’s age. We shall consider a galaxy “young” if it contains only young stars, i.e., stars with ages $`<`$ 100 Myr. Intermediate-age stars signal that a galaxy is at least of intermediate age; we consider this to be the case when stars older than a few hundred Myr are detected. Since the time-resolution of color-magnitude diagrams (CMD) of distant galaxies rapidly decreases if the stellar ages exceed 1 Gyr, a detection of stars with ages of about 1 Gyr is often considered sufficient to show the presence of old stars. We, however, prefer to consider all stars with ages of up to 10 Gyr as intermediate-age stars, and reserve the term “old” for stars of the kind that inhabit Galactic globular clusters and have ages $`>`$ 10 Gyr. An “old” galaxy is therefore one that has formed at least some stars more than 10 Gyrs ago.
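For concreteness, the age taxonomy used throughout this paper can be written down explicitly (our own illustration; the thresholds are exactly those defined above):

```python
def population_class(age_myr):
    """Age classes as defined in the text: young (< 100 Myr),
    intermediate-age (up to 10 Gyr), old (> 10 Gyr)."""
    if age_myr < 100:
        return "young"
    if age_myr <= 10_000:
        return "intermediate-age"
    return "old"

print([population_class(a) for a in (50, 500, 4_000, 13_000)])
# ['young', 'intermediate-age', 'intermediate-age', 'old']
```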
The dwarf galaxies in the Local Group exhibit an astounding variety of SFHs, yet all contain old stars (Mateo 1998) in the above sense. However, there are no BCDs known in the Local Group. VII Zw 403 is hence a key object; it is so close that HST observations resolve it into individual stars (Schulte-Ladbeck et al. 1998, hereafter SCH98, Lynds et al. 1998). Thus it has become possible for the first time to derive the distance of a BCD using a stellar distance indicator (rather than just a recession velocity). This is crucial for the interpretation of the stellar content.
Recent determinations of the present-day metallicity of the ionized gas in VII Zw 403 (or rather, its O/H ratio), were published by Martin (1997) and Izotov, Thuan & Lipovetsky (1997). Martin finds log (O/H) = -4.42(0.06), and Izotov, Thuan & Lipovetsky give -4.31(0.01). Both groups employ the metallicity scale for which the Sun has log (O/H) = -3.07. We note that Izotov & Thuan (1999) also use this solar value. On this scale, the metallicity of VII Zw 403 is between 1/22 and 1/17 of solar.
Despite its low metallicity ($`\sim Z_{\odot }/20`$), SCH98 argued that VII Zw 403 is not a young galaxy, based on the detection of a red giant branch (RGB) with a well defined tip and a red asymptotic giant branch (AGB). VII Zw 403 also exhibits extended outer isophotes (Loose & Thuan 1986, Hopp & Schulte-Ladbeck 1995) with a red color that is consistent with an old, metal-poor population (Schulte-Ladbeck & Hopp 1998). The spectrum of the background sheet displays absorption lines that indicate an evolved population underlying the recent starburst (Hopp et al. 1998).
In this paper, we provide further arguments that evidence of an early epoch of star formation may be gleaned from the morphology of VII Zw 403. We demonstrate that in radial bins outward from its starburst center, the contribution to the CMD by young stars decreases, while the red tangle that contains the old and metal-poor RGB (Aparicio & Gallart 1994) becomes a narrow feature. For radii larger than about 1 kpc, the young stars are absent and the stellar content is well described by a globular-cluster-like stellar population. We argue that this resolved old stellar population supports earlier suggestions that the faint halos of BCDs harbor old stars.
## 2 Observations and reductions
HST WFPC2 observations of VII Zw 403 were obtained in the continuum (F336W, F555W and F814W filters, approximately the Johnson-Cousins U, V, and I bands) and in the H$`\alpha `$ emission line (F656N filter). The relevant exposures are listed in Table 1 of SCH98.
As described in SCH98, we re-ran the pipeline with improved calibration files, removed cosmic rays, corrected for CTE, and corrected for geometric distortion. We conducted photometry on H$`\alpha `$-subtracted images. We used DAOPHOT to perform PSF photometry on each of the four WFPC2 chips. We calibrated the photometry using the most up-to-date SYNPHOT tables. After determining the positions of each point source, we merged the object catalogs of the four chips into a single file of positions and instrumental magnitudes in the Vega magnitude system. The accuracy of these coordinates relative to the guide star system is about 0.5”.
In Figures 1a,b, we provide the errors for the point-source measurements in the continuum filters, as well as the results of our completeness tests. DAOPHOT/ALLSTAR residuals in the F555W and F814W filters on all chips can be summarized by stating that they reach 0.1 mag at magnitudes of about 26 and 25.5, respectively. We checked the completeness of our photometry for each chip by adding a distribution of false stars consistent with the magnitude distribution of the real stars. The percentage of recovered stars indicates that completeness is about 50% at $`m_{F555W}=26`$, $`m_{F814W}=25`$. (Fig. 4 of SCH98 indicates there are “holes” in the distribution of red stars on the PC due to incompleteness effects; this presumably also produces a less well populated red tangle in the first two panels of Fig. 3 below.)
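An artificial-star completeness test of this kind boils down to simple bookkeeping: inject fake stars of known magnitude into the frame, re-run the detection, and count how many come back. A minimal sketch (with `inject_psf` and `detect` standing in for the corresponding DAOPHOT steps, which we do not reproduce here):

```python
import numpy as np

rng = np.random.default_rng(42)

def completeness_curve(image, inject_psf, detect, mags, n_fake=200, tol_px=1.0):
    """Recovered fraction of artificial stars as a function of magnitude."""
    fractions = []
    ny, nx = image.shape
    for m in mags:
        xs = rng.uniform(0.0, nx, n_fake)
        ys = rng.uniform(0.0, ny, n_fake)
        frame = image.copy()
        for x, y in zip(xs, ys):
            inject_psf(frame, x, y, m)        # add a fake star of magnitude m
        det_x, det_y, _ = detect(frame)       # re-run source detection
        if det_x.size == 0:
            fractions.append(0.0)
            continue
        # a fake star counts as recovered if a detection lies within tol_px
        d2 = (det_x[None, :] - xs[:, None])**2 + (det_y[None, :] - ys[:, None])**2
        fractions.append(float(np.mean(d2.min(axis=1) < tol_px**2)))
    return np.array(fractions)  # drops through ~0.5 near the 50% limits quoted above
```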
The foreground Galactic reddening for VII Zw 403 is E(B-V) = 0.025 (Burstein & Heiles 1984); we correct for the corresponding extinction using the tables provided in Holtzman et al. (1995, Tables 12a,b). A small and patchy internal reddening (E(B-V)$`<`$0.16) cannot be ruled out (Lynds et al. 1998), but is not central to the arguments presented below (the location of the blue plume in the CMD is consistent with no or low internal reddening for the majority of the young stars detected; the old stars in “Baade’s red sheet” are measured throughout the halo, where we find no evidence for internal extinction).
Final colors are transformed U, V, and I magnitudes using the color terms in Holtzman et al. (1995, Table 7).
## 3 Results
Figure 2 is a position plot of all stars detected in both V and I (5459 objects). We used the distribution of sources detected in both U and V to locate the center of star formation in VII Zw 403 (R.A. = 172.002°, Dec. = 78.994°, J2000). We then cast concentric circles about this location. In Figure 3, we display the \[$`(V-I)_o`$, $`I_o`$\] CMDs observed within each of six radial bins marked in Fig. 2. The CMDs of the first and sixth bins are also superimposed with stellar evolutionary tracks. These tracks use the stellar atmospheres of F. Castelli (see, e.g., Bessell et al. 1998) for a metallicity of $`Z_{\odot }/30`$, which were folded with the HST filter/system response and kindly made available to us by L. Origlia. Stellar evolution is based on the Fagotto et al. (1994) isochrones for Z=0.0004 ($`Z_{\odot }/50`$). These models were chosen because they most closely represent a “compromise” metallicity for the stellar populations which we observe (i.e., without trying to model the enrichment history, on which we comment below, in detail as well). As is always the case, the comparison of theoretical tracks (or synthetic CMDs) with data depends on the accuracy with which we can ascertain either one and thus map one onto the other. We will consider as safe those conclusions which do not depend on the choice of a particular model. For instance, a well populated RGB is usually only observed if a stellar population with an age upwards of about 1 Gyr is present, irrespective of the stellar metallicities adopted (cf. also Sweigart, Greggio & Renzini 1990). We will point out, where appropriate, when inferences are based on what we consider more uncertain model results. The location of the tracks on the observed CMDs employs the distance modulus derived below.
The changes in stellar content with position are quite striking. The center displays a CMD that has a dominant blue plume of main-sequence (MS) stars and blue supergiants (BSG) or blue-loop (BL) stars, a prominent red supergiant (RSG) plume, a few very red AGB stars, and a weak red tangle. In the second bin, this red tangle and the region of intermediate-mass BL stars are much more populated. We note that H$`\alpha `$ emission is detected only in the first and second bins. By the third bin, both the blue plume and the RSG plume have weakened, whereas the red tangle is strong. By the fourth bin, most stellar indicators of young ages have disappeared. Our detection limit of $`V_o\sim 28`$ is faint enough to show that MS stars younger than about 200 Myr are absent. The few remaining BL stars, which suggest ages of a few hundred Myr, are still present in this bin. In the fifth and sixth bins, we see the outer regions of the galaxy. Here, the red tangle has become a narrow band. There are seven objects that are red and brighter than $`I_o=23`$. A likely explanation is that these are Galactic foreground stars. The number of Galactic foreground stars within the WFPC2 images is expected to be very small (Méndez & Guzmán 1998); since the fifth and sixth bins cover the largest area, they may contain a few. The CMDs of the fifth and sixth bins suggest that only intermediate-age and old AGB and RGB stars are found in the outer regions of VII Zw 403.
In SCH98 we used the tip-of-the-red-giant-branch (TRGB) method to obtain the distance to VII Zw 403. Using the red tangle in the star-forming region, we estimated a metallicity \[Fe/H\] of -1.2, resulting in $`M_I=-4.10`$ for the TRGB and a distance modulus of 28.4 mag (with a total random error of 0.09, and a dominant systematic error of 0.18). This corresponds to a distance of 4.8 Mpc. We now re-derive the distance using the CMD of the outer bins only. The new $`I_{TRGB}`$, 24.25$`\pm `$0.05, is not significantly different from our old value. Following Lee et al. (1993), we can estimate \[Fe/H\] from the V-I color either just below the TRGB, at $`M_I=-3.5`$, or better, at $`M_I=-3.0`$. We have done both and find a consistent $`(V-I)_o`$ color of 1.28, with an rms error of 0.03 (for $`M_I=-3.5`$) and a dispersion of 0.21. This translates into a mean metallicity of $`<`$\[Fe/H\]$`>`$ = -1.92$`\pm `$0.04 (or Z=0.00024, or $`Z_{\odot }/83`$) with a spread or metallicity range (corrected for measurement error) of $`\pm `$0.7. We now find the TRGB at $`M_I=-3.98`$, or m-M=28.23, or a distance of about 4.4 Mpc, in excellent agreement with the value of 4.5 Mpc derived by Lynds et al. (1998).
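The arithmetic behind these numbers is easy to retrace (the \[Fe/H\] calibration below is the Lee et al. (1993) relation as we recall it, so it should be checked against the original before reuse):

```python
import numpy as np

def feh_from_color(vi_at_m35):
    # [Fe/H] from the RGB colour (V-I)_o measured at M_I = -3.5
    # (Lee et al. 1993 calibration; quoted from memory)
    return -12.64 + 12.6*vi_at_m35 - 3.3*vi_at_m35**2

def trgb_distance_mpc(I_trgb, M_I_trgb):
    mu = I_trgb - M_I_trgb            # distance modulus m - M
    return 10**(mu/5.0 + 1.0)/1e6     # pc -> Mpc

feh = feh_from_color(1.28)            # -> ~ -1.92
d = trgb_distance_mpc(24.25, -3.98)   # mu = 28.23 -> ~ 4.4 Mpc
arcsec_pc = d*1e6*np.pi/(180.0*3600.0)
print(f"[Fe/H] = {feh:.2f}, d = {d:.1f} Mpc, 1 arcsec = {arcsec_pc:.1f} pc")
```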
At this distance, 1” corresponds to about 21.5 pc. The Holmberg diameter of VII Zw 403 was measured by Schulte-Ladbeck & Hopp (1998) as 2a<sub>H</sub>=146”, and translates into a physical diameter of 3.1 kpc. Fig. 3 shows that the young stars are contained within the inner few hundred pc, and are absent beyond a radius of 1 kpc. Not surprisingly, this agrees with the “typical” size of a BCD (Thuan & Martin 1981).
In Fig. 2 we colored in green the two radii which clearly separate the young and old stellar components of VII Zw 403. In Fig. 4, we display the CMDs inside of the inner (386 pc) and outside of the outer (1352 pc) radius in blue and red, respectively. This illustrates the change in width of the red tangle due to a diminishing contribution by intermediate-mass stars. We mark the position of the TRGB, and provide an absolute-magnitude scale. We also overlay the empirical globular-cluster ridge-lines of da Costa & Armandroff (1990), which indicate that the halo population of VII Zw 403 is similar to the population of Galactic globular clusters, the prototypes of Population II.
Fig. 4 shows that AGB stars, which populate a strip from \[$`(V-I)_o`$, $`M_I`$\] $`\approx `$ \[1, -4\] to \[3.5, -6.5\], are found both at small and large radii. The strip may contain stars as old as 10 Gyr and Z=0.0004 ($`Z_{\odot }/50`$) at its bright, blue end. The extent of the AGB to the red requires $`Z_{\odot }/5>Z>Z_{\odot }/20`$; here the average location of stars is well described by the 4 Gyr isochrones (Bertelli et al. 1994). Note that theoretical models of the AGB phase are very uncertain. This is because a) their atmospheres contain difficult-to-model molecules, and b) mass loss is an important but ill-known parameter that determines their evolution. If the AGB models can be believed, then VII Zw 403 has had an interesting chemical history.
Schulte-Ladbeck & Hopp (1998) showed that their surface-brightness profiles of VII Zw 403 could be fit with an exponential law, with scale lengths of 25.7” in B and 25.2” in R. In Fig. 5, we display the I-band surface density of resolved stars versus radius, derived from star counts in the complete quadrant marked in Fig. 2 by blue lines. An exponential law describes well the distribution of the resolved stars outside of the innermost region, where there is incompleteness. The scale length for all stars in I is 25.6” (or about 550 pc), in excellent agreement with our ground-based results. For the red stars (0.6$`<`$V-I$`<`$1.5) only, the scale length is 31.8” (or about 680 pc).
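Since $`\mathrm{ln}\mathrm{\Sigma }`$ is linear in radius for an exponential law, the scale length follows from a straight-line fit to the logarithm of the surface density. A minimal sketch (the counts below are synthetic stand-ins for the measured profile):

```python
import numpy as np

def exp_scale_length(r_arcsec, sigma):
    """Fit Sigma(r) = Sigma_0 * exp(-r/r_s); return (r_s, Sigma_0)."""
    slope, intercept = np.polyfit(r_arcsec, np.log(sigma), 1)
    return -1.0/slope, np.exp(intercept)

# Synthetic profile with r_s = 25.6 arcsec (illustration, not the real counts)
r = np.linspace(15.0, 110.0, 12)
sigma = 4.0*np.exp(-r/25.6)

r_s, sigma0 = exp_scale_length(r, sigma)
print(f"r_s = {r_s:.1f} arcsec  (~{r_s*21.5:.0f} pc at 4.4 Mpc)")  # ~550 pc
```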
Owing to our observation that the CMD of the halo of VII Zw 403 is very reminiscent of that of a dwarf Spheroidal galaxy (dSph), and in connection with the interesting question of the nature of the faint blue galaxies, we compare the scale length of VII Zw 403 to those of dSphs. We find that the scale length is about 1.5 times larger than those of the largest Local Group dSphs, like NGC 147, NGC 185, NGC 205, or Fornax (Mateo 1998).
We may ask how typical this scale length is for BCDs in general. One note of caution: other BCDs do not have direct distance determinations, and the recession velocities in the literature are scaled with (different values of) H<sub>o</sub>, resulting in fairly unreliable scale lengths. Nevertheless, the distance errors might average out over a large sample of scale lengths. We compared the scale length of VII Zw 403 to values derived for the BCD-dominated samples of Bothun et al. (1989) and Telles et al. (1997). In both papers, deep CCD exposures extending to relatively large angular diameters were used for detailed surface photometry, and exponential scale lengths were derived. The central starburst contribution was ignored in the determination. Bothun et al. used r-band data, Telles et al., V-band observations. We transformed both data sets to the same distance scale with H<sub>o</sub> = 75 km/s/Mpc. In the Bothun et al. sample, 24% of the objects have scale lengths of 0.7 kpc and smaller. Telles et al. found 38% of their objects in this regime. Surface photometry of 93 galaxies from the emission-line galaxy sample of Popescu et al. (1997) was performed by Vennik, Hopp & Popescu (1999). Again, this sample is dominated by BCDs, but contains a few galaxies of other types. The scale lengths are derived from R images, and 44% of the galaxies show scale lengths of 0.7 kpc and smaller. Papaderos et al. (1996) studied a sample of 14 galaxies in much more detail than the other authors. Here, the derived R-band scale lengths range from 0.17 to 2.3 kpc. Despite the differing selection effects of these four samples from the literature, we conclude from this comparison that the scale length found for the underlying galaxy in VII Zw 403 is fairly typical of BCDs.
## 4 Discussion
Several pieces of evidence now suggest that VII Zw 403 has an old stellar halo, and this has implications for the formation of BCDs.
While it has previously been assumed that red colors at large radii point to the presence of evolved stellar background sheets in BCDs, the RSG, AGB and RGB stars actually overlap over a wide range in effective temperatures and hence colors (cf. Fig. 6). Therefore, a red background sheet cannot be considered as evidence of an old, underlying host galaxy unless it can be proven that the stellar population which dominates the integrated color of a BCD is an old one. This has now been accomplished with the detection of red giants in the halo of VII Zw 403.
However, owing to the well-known age-metallicity degeneracy, comparatively young ($`\sim `$ 1 Gyr) and metal-rich RGB stars can populate the same region of a CMD as old (10-15 Gyr) and metal-poor RGB stars, based on optical, broad-band colors. We interpret the color dispersion of the RGB as a variation in chemical abundance; however, age dispersion (and/or differential reddening) can also broaden the RGB (e.g., Bertelli et al. 1994). In the case of VII Zw 403 we argue that the well-defined RGB tip, and the narrowing of the red tangle from the center toward the outer regions of the galaxy where young stars are absent, indicate that our interpretation is appropriate. Conclusive evidence could be obtained from the detection of old horizontal-branch stars or the turn-off of an old MS, but these are too faint to be reached with the HST. VII Zw 403 does not appear to have a globular-cluster system.
Recently, the stellar halos of several dwarf Irregular (dIrr) and transition dIrr/dSph galaxies in the Local Group have been resolved into stars (WLM, Minniti & Zijlstra 1996, NGC 3109, Minniti et al. 1999, Antlia, Aparicio et al. 1997). These observations have been interpreted to indicate the presence of old and metal-poor, Population II halos. VII Zw 403 joins the ranks of an increasing number of local, star-forming dwarf galaxies with resolved old stellar halos. As is pointed out by Mateo (1998), all suitably studied star-forming dwarfs of the Local Group show evidence for extended, smooth, and symmetric distributions for their older stars. Most BCDs show extended, smooth and red outer isophotes as well. Our results for VII Zw 403 imply the identification of the background sheets with old stellar halos is justified for these BCDs.
The controversy over the age of BCDs — whether they are young or old galaxies — may be resolved in the following way: There is a continuum of star-formation and chemical-enrichment histories among the BCDs. It is likely that the vast majority of BCDs have an ancient ($`>`$ 10 Gyr) stellar population substratum, and must thus be recognized as old galaxies. This is based on the observation that over 95% of BCDs in the Loose & Thuan sample show extended background sheets of red color. Comparing observations of the background-sheet colors for a sample of BCDs and dIrrs with the population synthesis models of Schmidt et al. (1985), Schulte-Ladbeck & Hopp (1998) suggest that complex star-formation histories prevail in these galaxies. Thus, depending on just how much mass is involved in the on-going starburst, and how the morphology of the star-forming regions compares to that of the older stellar substratum, the colors of some BCDs might be entirely consistent with those of “young” galaxies in the sense that they are currently experiencing a strong starburst. If the outer isophotes are sufficiently red in color, they can be interpreted to indicate that such BCDs formed some stars at epochs similar to that of Galactic globular cluster formation (Kunth et al. 1988, Papaderos et al. 1996, Telles et al. 1997, Schulte-Ladbeck & Hopp 1998, Meurer 1999). The range of background-sheet colors suggests that the SFHs of most BCDs since their formation at high redshift have probably been diverse, depending on the frequency, duration, and intensity of the star-formation events. This is similar to what has been derived for dwarf galaxies within our Local Group.
In a few of the extremely metal-poor BCDs, the color gradients are small and the outer isophotes remain fairly blue (Hunter & Thronson 1995, Thuan et al. 1997, Papaderos et al. 1998, Lipovetsky et al. 1999, Thuan et al. 1999). The argument that such blue background-sheet colors indicate a galaxy is only now making its first generation of stars seems untenable to us in the light of population synthesis models. Where the young and old stellar populations are spatially co-existent, Schmidt et al. (1985) show that a starburst completely dominates the integrated optical colors for up to 50 Myr, even if as little as 0.1% of the dwarf galaxy’s mass is involved; hence a young burst may render the underlying old population undetectable. It is therefore possible (although in the absence of deep CMDs not yet demonstrated) that the old populations in these few extremely metal-poor BCDs elude us due to a contrast problem.
On the other hand, the CMDs of the few BCDs which have been resolved with HST into single stars indicate they do contain stellar generations which predate the present starburst. In Figure 6, we show the evolution of VII Zw 403 in a series of synthetic “snapshot” CMDs. These synthetic CMDs were computed with the above Z/50 evolutionary tracks. We used the Bologna code, which was recently adapted by Greggio et al. (1998) for the simulation of HST data. The synthetic CMDs help to illustrate the general features of stellar evolution at low metallicity, and may also serve as templates for future CMDs of extremely metal-poor BCDs. The panels of Fig. 6 show very well that the observed changes in stellar population with radius (Fig. 3) can be interpreted as a change of the stellar ages with distance from the core. We use synthetic CMDs to place a lower limit on the age of the red giants that we see in the halo of VII Zw 403. We find that a well-defined TRGB first appears for ages $`>`$ 3 Gyr (and of course continues to be present up to 15 Gyr). Beyond an age of about 3 Gyr, we lose age resolution in the CMD, and we cannot constrain, from the location of the RGB alone, the presence of stars with ages in excess of 10 Gyr. Clearly, VII Zw 403 has had a rich history of star formation.
Aloisi et al. (1999) recently used synthetic CMDs to investigate deep HST CMDs of I Zw 18, the most metal-poor BCD known (Z/50, from the ionized gas). They find that the present burst is not the first one to occur in this galaxy either; the data require a prior burst 500 Myr to 1 Gyr ago. While by most accounts a galaxy with an age below 1 Gyr would be considered a young galaxy, these results for I Zw 18 are in contradiction with the primeval galaxy hypothesis of Izotov & Thuan (1999) based on abundance analyses.
Unfortunately, there are no extremely metal-poor BCDs known that are close enough to allow for look-back times of a large fraction of the Hubble time. Whether or not we are observing a very small percentage of extremely metal-poor BCDs while they are undergoing their very first starburst at the present epoch is an interesting suggestion which remains to be investigated further; it will depend critically on the cycling of their gas (see below).
VII Zw 403 shows evidence for at least three “eras” of star formation. The RGB stars suggest the first event occurred at a look-back time of at least 3 Gyr and probably $`>`$10 Gyr, the AGB stars indicate star formation also happened around 4 Gyr ago (with considerable uncertainty), and the young stars testify to activity which took place less than about 1 Gyr ago. The data also allow us to infer the chemical enrichment history of VII Zw 403. The RGB stars are consistent with a metallicity of the order of Z/100, while the ionized gas and (by assumption) the young stars are at $`\sim `$Z/20 (Martin 1997, Izotov et al. 1997). Curiously, an extended AGB is not expected unless Z/Z<sub>⊙</sub> $`>`$ 1/20 (see SCH98, Lynds et al. 1998). The enrichment history of VII Zw 403, at face value, is therefore inconsistent with closed-box models of galaxy evolution and seems to require the loss of enriched gas or accretion of metal-poor gas. VII Zw 403 is isolated from massive neighbors, so an accretion scenario à la Silk et al. (1987) seems unlikely. Papaderos et al. (1994) claim the X-ray detection of an outflow of hot gas from this galaxy. Employing our new distance, we can estimate the total gas (H-I) mass from the measurements of Thuan & Martin (1981) and Tully et al. (1981) to be about 7$`\times `$10<sup>7</sup> M<sub>⊙</sub>. This places VII Zw 403 in the mass range for which models suggest that gas may be blown out, but the entire gas reservoir may not be blown away (Mac Low & Ferrara 1999).
## 5 Conclusions
The BCD VII Zw 403 exhibits a radial population gradient. Young, blue stars and H$`\alpha `$ emission are confined to the core. The core region has a diameter which equals the defining size of a BCD. Intermediate-age and old, red stars are distributed throughout an extended background sheet or halo. The halo stars show an RGB with a well defined tip and are interpreted to be an old and metal-poor population, similar to that in Galactic globular clusters. BCDs were once recognized as “the first metal-poor systems of Population I to be discovered” (Searle & Sargent 1972). VII Zw 403 is the first BCD with compelling evidence for the existence of a Population II halo. VII Zw 403 is also the first BCD for which a direct comparison has been possible between the results from population synthesis of the integrated halo color and resolved stellar content. The detection of red giants at large distances from the starburst center verifies previous identifications of red halos with old stellar populations. If all BCD halos harbor old stars, then they must have formed at high redshift and survived re-heating; BCDs would not require the delayed-formation scenario.
Work on this project was supported through an archival research grant and guest observer HST grants to RSL and MMC (GO-7859.01-96A). UH acknowledges financial support from SFB375. We thank Livia Origlia for supplying us with data prior to publication.
# X-ray and radio observations of RX J1826.2-1450/LS 5039
## 1 Introduction
The star LS 5039 is the most likely optical counterpart to the X-ray source RX J1826.2$`-`$1450. Such an association was originally proposed by Motch et al. (1997), hereafter M97, as a result of a systematic cross-correlation between the ROSAT All Sky Survey (Voges et al. 1996) and several OB star catalogues in the SIMBAD database. The unabsorbed X-ray luminosity, at an estimated distance of 3.1 kpc, amounts to $`L_\mathrm{X}`$(0.1–2.4 keV) $`\sim `$ 8.1$`\times `$10<sup>33</sup> erg s<sup>-1</sup>, and the hardness of the source is well consistent with a neutron star or a black hole, accreting directly from the companion’s wind (M97). In the optical, LS 5039 appears as a bright ($`V\sim `$ 11.2) star with an O7 V((f)) spectral type. Based on this evidence, M97 proposed the system to be a high mass X-ray binary (HMXRB).
In addition, this system has been found to be active at radio wavelengths. Its radio counterpart (NVSS J182614$`-`$145054) is a bright, compact and moderately variable radio source in excellent sub-arcsecond agreement with the optical star (Martí et al. 1998). All these facts point to the peculiar nature of RX J1826.2$`-`$1450/LS 5039, and suggest a classification among the select group of radio emitting HMXRB.
In order to explore how this source behaves compared to other members of its class (e.g. Cygnus X-1, LS I+61°303 and SS 433), we have analyzed the corresponding X-ray data from the All Sky Monitor (ASM) and the Proportional Counter Array (PCA) on board the satellite Rossi X-ray Timing Explorer (RXTE). In Sect. 2 we present an X-ray timing analysis based on both the ASM and the PCA instruments. The ASM data are suitable to study the long-term (days to months) temporal behavior of the source, whereas X-ray variability on shorter time scales (seconds to hours) is better investigated with the PCA. In Sect. 3 a PCA spectroscopic analysis is presented, with the different spectral models that fit the data being examined and discussed.
In the radio domain, RX J1826.2$`-`$1450/LS 5039 was included at our request in the list of radio sources routinely monitored at the Green Bank Interferometer (GBI; a facility of the USA National Science Foundation operated by NRAO in support of the NASA High Energy Astrophysics programs). At the time of writing, the radio light curves cover $`\sim `$ 4 months of observations. In Sect. 4 we present the GBI radio data so far acquired with some discussion on the source variability and spectral index properties. Finally, we conclude in Sect. 5 with a brief comparative discussion of RX J1826.2$`-`$1450/LS 5039 versus other radio loud HMXRB.
Hereafter, we will refer to the source as RX J1826.2$`-`$1450 when discussing the X-rays. In the optical/radio context the LS 5039 designation will be preferred.
## 2 X-ray timing analysis
### 2.1 The ASM/RXTE data
The ASM database analyzed in this paper spans more than two and a half years (1996 February–1998 November) and contains nearly 800 daily flux measurements in the energy range 1.5–12 keV. Each data point represents the one-day average of the fitted source fluxes from a number (typically 5-10) of individual ASM dwells, of $`\sim `$ 90 s each (see Levine et al. 1996 for more details). The one-day average light curve is shown in Fig. 1. The big gap between Modified Julian Date (MJD) $`\sim `$ 50400 and $`\sim `$ 50500 corresponds to the passage of the Sun close to the source during the first year of observations. This gap repeats the following two years (near MJD 50800 and 51150), but it happens to be less severe and a few flux measurements were then possible.
Most of the time, the source is at the threshold of ASM detectability. Nevertheless, we have searched for possible periodicities in the range from 2 to 200 d. The methods employed were the Phase Dispersion Minimization (PDM) (Stellingwerf 1978) and the CLEAN algorithm (Roberts et al. 1987). Our approach here is essentially the same as in Paredes et al. (1997) when analyzing the periodic behavior in the X-ray light curve of LS I+61°303.
After applying both the PDM and CLEAN methods to the ASM data, a period of $`\sim `$ 52.7 d stands prominently. This periodicity corresponds to the detection of some kind of active events that appear rather evident at first glance in Fig. 1. Nevertheless, a careful inspection of the data reveals a suspicious detail. All those active events take place when the dwell coverage is rather poor (less than 5 dwells per day), thus reducing the statistical significance of the corresponding one-day average. For some instrumental reason, the ASM coverage becomes poorer than normal every $`\sim `$ 53 d or so and, in the case of a weak X-ray source like RX J1826.2$`-`$1450, this can affect the period analysis. Therefore, the $`\sim `$ 52.7 d period is very likely to be an instrumental artifact. Indeed, after removing all daily points resulting from less than 5 dwells ($`\sim `$ 20% of total), the timing analysis reveals no significant period in the range from 2 to 200 d.
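For reference, a minimal sketch of the PDM statistic used in such period searches is given below. This is an illustration only, not the code used for the analysis, and the number of phase bins is an arbitrary choice:

```python
import numpy as np

def pdm_theta(t, flux, period, n_bins=10):
    """Phase Dispersion Minimization statistic (Stellingwerf 1978): the
    pooled variance of the phase-folded light curve within phase bins,
    divided by the total variance; a true period gives a deep minimum."""
    phase = (t / period) % 1.0
    s2, dof = 0.0, 0
    for b in range(n_bins):
        sel = flux[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if len(sel) > 1:
            s2 += (len(sel) - 1) * np.var(sel, ddof=1)
            dof += len(sel) - 1
    return (s2 / dof) / np.var(flux, ddof=1)

def scan_periods(t, flux, periods):
    """Scan trial periods (e.g. 2 to 200 d) and return the best candidate."""
    thetas = np.array([pdm_theta(t, flux, p) for p in periods])
    return periods[np.argmin(thetas)], thetas
```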
### 2.2 The PCA/RXTE data
Additional observations were made with the PCA instrument on 1998 February 8 and 16. The total on-source integration time was 20 ks. The PCA is sensitive to X-rays in the energy range 2–60 keV and comprises five identical co-aligned gas-filled proportional counter units (PCUs), providing a total collecting area of $`\sim `$ 6500 cm<sup>2</sup>, an energy resolution of $`<`$ 18 % at 6 keV and a maximum time resolution of 1$`\mu `$s. Our analysis was carried out in the interval 3–30 keV since the PCU windows prevent the detection of photons below $`\sim `$ 2.5 keV, whereas above 30 keV the spectrum becomes background dominated.
Good time intervals were defined by removing data taken at low Earth elevation angle ($`<`$ 8°) and during times of high particle background. An offset of only 0.02° between the source position and the pointing of the satellite was allowed, to ensure that any possible short stretch of slew data at the beginning and/or end of the observation was removed. Table 1 shows the journal of the PCA observations, while the light curve of the entire observation is presented in Fig. 2.
The PCA observations were used to study the time variability on various time scales. Continuous stretches of clean data were selected from the light curve of the entire observation. To reduce the variance of the noise powers, these intervals were divided up into segments of 8192 bins each, with a bin size of 10 ms. Then the power density spectra for each segment were calculated and the results averaged together. Fig. 3 shows the characteristic power spectrum in the frequency range 0.01–50 Hz. The dashed line represents the 95% confidence detection limit (van der Klis 1989). As can be seen, no power exceeds this value; the distribution of powers is flat at a level of 2, consistent with Poissonian counting statistics. Following van der Klis (1989), we can set a 95% upper limit of 60% on the rms of a pulsed source signal in the range 0.01–50 Hz. This relatively high limit is a consequence of the faintness of the source. On longer time scales, longer intervals ($`\sim `$ 3200 s) were considered but no evidence for pulsations was found either.
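A sketch of the segment-averaged power spectrum computation described above might look as follows. This is a simplified illustration assuming a Leahy-type normalization, in which pure Poisson noise has a mean power of 2; instrumental details such as dead-time corrections are ignored:

```python
import numpy as np

def averaged_psd(counts, dt=0.01, seg_len=8192):
    """Average power spectra over consecutive segments of a binned light
    curve to reduce the variance of the noise powers (van der Klis 1989)."""
    powers = []
    for i in range(len(counts) // seg_len):
        seg = counts[i * seg_len:(i + 1) * seg_len]
        ft = np.fft.rfft(seg)
        powers.append(2.0 * np.abs(ft) ** 2 / seg.sum())  # Leahy normalization
    freq = np.fft.rfftfreq(seg_len, d=dt)
    return freq[1:], np.mean(powers, axis=0)[1:]  # drop the DC bin
```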
Likewise, we folded the light curve onto a set of trial periods with the FTOOLS software package (a technique very similar to the PDM) and looked for a peak in the $`\chi ^2`$ versus period diagram. None of the peaks found was statistically significant. Thus, we conclude that no coherent periodicities were detected in the range $`\sim `$ 0.02 to $`\sim `$ 2000 s.
The mean X-ray intensity in the energy range 3–30 keV shows a slight decreasing trend with 19.7$`\pm `$0.2 count s<sup>-1</sup> at the beginning of the observation (upper panel of Fig. 2) compared to 16.6$`\pm `$0.1 count s<sup>-1</sup> and 16.2$`\pm `$0.2 count s<sup>-1</sup> for the middle and bottom panels of Fig. 2, respectively. The fractional rms of the 3–30 keV light curve corresponding to the entire observation is 9%.
The fact that no X-ray pulsations have been found in RX J1826.2$`-`$1450 is consistent with the proposed idea that radio emission and X-ray pulsations from X-ray binaries seem to be statistically anti-correlated (Fender et al. 1997), i.e., no X-ray pulsar has ever shown significant radio emission.
## 3 X-ray spectral analysis
### 3.1 Spectral fitting
Since the light curve of the entire observation does not show sharp features, i.e., there is no significant spectral change throughout the observation, we obtained one average PCA energy spectrum from the complete observation (Fig. 4).
Acceptable fits of the X-ray continuum were obtained with an unabsorbed power-law model, giving a reduced $`\chi ^2`$=1.14 for 56 degrees of freedom (dof). A multi-colour disk model, as expected from an optically thick accretion disk (Mitsuda et al. 1984) plus a power-law gave a reduced $`\chi _\nu ^2`$=1.11 for 53 dof. The best-fit results are given in Table 2. Bremsstrahlung and two blackbody component models did not fit the data. Although the addition of a blackbody component to the power-law formally produces an acceptable fit, the value of the blackbody normalization was very low, with the error bar close to zero. In fact, an F-test shows that the inclusion of a blackbody component is not significant.
The most salient feature that appears in the spectrum of RX J1826.2$`-`$1450 is a strong iron line at $`\sim `$ 6.6 keV (Fig. 4). A Gaussian fit to this feature gives a line centered at 6.62$`\pm `$0.04 keV, with an equivalent width ($`EW`$) of 0.75$`\pm `$0.06 keV and a $`FWHM`$ of 0.9$`\pm `$0.2 keV. The high $`EW`$(Fe) value indicates that a large amount of circumstellar matter is present in the system. Unfortunately, the PCA energy resolution prevents us from distinguishing between a broad line and two narrow components.
A hydrogen column density of $`2_{-2}^{+1}\times 10^{21}`$ cm<sup>-2</sup> is found from the fit. This value is, however, not very well constrained. In fact, it is consistent with zero. The difficulty in constraining the hydrogen column density from our X-ray data can be attributed to the fact that the interstellar gas mainly absorbs X-ray photons with energies lower than 2–3 keV, i.e., outside of the energy range considered here. Nevertheless, it agrees with the value obtained from optical observations. From the $`\lambda `$ 4430 and $`\lambda `$ 6284 interstellar bands M97 found $`E(B-V)`$ = 0.8$`\pm `$0.2. Using the relation $`N_\mathrm{H}`$ = 5.3 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup> $`E(B-V)`$ (Predehl & Schmitt 1995), we obtain $`N_\mathrm{H}`$ $`\sim `$ (4$`\pm `$1) $`\times `$ 10<sup>21</sup> cm<sup>-2</sup>, which is consistent, within the errors, with the X-ray observation value.
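The optical estimate amounts to the following one-line conversion (values taken from the text above):

```python
# Predehl & Schmitt (1995): N_H = 5.3e21 cm^-2 per magnitude of E(B-V)
ebv, ebv_err = 0.8, 0.2
nh, nh_err = 5.3e21 * ebv, 5.3e21 * ebv_err
print(f"N_H = ({nh/1e21:.0f} +/- {nh_err/1e21:.0f}) x 10^21 cm^-2")  # (4 +/- 1)
```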
### 3.2 A black hole or a neutron star?
As mentioned above, no pulsations are found in the X-ray flux of RX J1826.2$`-`$1450/LS 5039. At the time of writing this paper, approximately 55% of the optically identified massive X-ray binary systems are pulsars, rising to 67% if suspected HMXRB are included. About 75% of the identified and suspected X-ray pulsars have spin periods below 100 seconds. The detection of pulsations would rule out a black hole companion.
Interestingly, unlike typical X-ray pulsars, the energy spectrum of RX J1826.2$`-`$1450 shows no exponential cut-off at high energy, even though significant emission is seen up to 30 keV. The absence of both pulsations and a high-energy cut-off may indicate that the compact companion is a black hole rather than a neutron star. The persistent non-thermal radio emission of RX J1826.2$`-`$1450/LS 5039 is also reminiscent, among others, of the classical black hole candidate (BHC) Cygnus X-1. However, the data are not conclusive, as counter-examples of these properties can be found: for example, the system LS I+61°303, which seems to contain a neutron star and does not exhibit pulsations or a cut-off in the 10–30 keV range. The multi-colour disk model, although formally fitting the data, does not help either. First, given the low luminosity ($`L_\mathrm{X}<`$10<sup>35</sup> erg s<sup>-1</sup> in the energy range 2–10 keV) the system would be in the so-called low-state. We would not expect then to detect a strong soft component. Second, the fit provides an unrealistic value of the disk internal radius of $`R_{\mathrm{in}}\mathrm{cos}^{1/2}(\theta )\approx 0.3`$ km. Finally, the lack of detected pulsations may be just due to the faintness of the X-ray emission in view of the rather high upper limit found in Sect. 2.2 for pulsed emission.
Unfortunately, the source is too faint at energies above 30 keV to be detected with the HEXTE instrument. Thus, we cannot confirm from the present data whether the hard tail that characterizes the energy spectrum of BHCs at high energies is indeed present. In any case, the issue of a possible black hole in RX J1826.2$`-`$1450/LS 5039 is likely to be settled in the future by obtaining a spectroscopic mass function in the optical.
## 4 Radio observations
### 4.1 Radio variability properties of LS 5039
The radio observations reported here consist of daily flux density measurements with the GBI within the GBI-NASA Monitoring Program. The source was observed at the frequencies of 2.25 and 8.3 GHz. In Fig. 5, we show the corresponding flux density light curves during the first $`\sim `$ 4 months of monitoring so far available. The bottom panel displays the spectral index $`\alpha `$ (where $`S_\nu \propto \nu ^\alpha `$) computed between these two frequencies.
The source was detectable at radio wavelengths at all times. This is most evident at 2.25 GHz, where the source is brighter. The average GBI flux densities and their respective rms are $`S_{2.25\mathrm{GHz}}=31(\pm 5)`$ mJy and $`S_{8.3\mathrm{GHz}}=14(\pm 5)`$ mJy. The typical day-to-day variability in the GBI data does not exceed $`\sim 30`$ %. There may be, however, some exceptions, such as for example around MJD 51075, 51086 and 51176. Here, the flux density of LS 5039 seems to have varied by more than a factor of $`\sim `$ 2 in less than one day. Excluding these episodes, the source never exhibited well defined radio outburst events. In general terms, the observed radio behavior confirms the early suggestions by Martí et al. (1998) concerning the persistent and moderately variable nature of the radio emission.
Timing analysis of radio light curves has proven to be in some cases a useful tool to detect orbital periods. For example, the orbital period signatures of 26.5 and 5.6 d are visible in LS I+61°303 and Cygnus X-1, respectively (Taylor & Gregory 1984; Pooley et al. 1998). We have thus searched for long-term periodicities in the GBI data of LS 5039. Given the span of the radio observations, the search was restricted to between 2 and 50 d. The methods used were again the PDM and CLEAN, mentioned in Sect. 2.1. Unfortunately, no convincing period was detected in this process. A longer time span is likely to be required before a reliable search can be attempted for this relatively weak radio source (especially at 8.3 GHz).
### 4.2 Non-thermal radio spectrum and brightness temperature
From the GBI data, the weighted average spectral index is found to be $`\alpha =-0.5_{-0.3}^{+0.2}`$. This value is also in good agreement with the results by Martí et al. (1998) obtained a few months before, thus suggesting that the non-thermal radio spectrum is a persistent property of the source.
In addition to negative spectral indices, the brightness temperature estimates for LS 5039 clearly yield non-thermal values, hence supporting the mechanism of synchrotron radiation for that source. The apparent one-day variability observed in the GBI data around MJD 51075, 51086 and 51176 would imply, from light travel time arguments, that the emitting region is smaller than about $`2.6\times 10^{15}`$ cm. If we assume a 3.1 kpc distance to the source, the corresponding angular size is found to be $`\theta \lesssim `$ 0$`\text{.}^{\prime \prime }`$05, yielding a lower limit of $`T_b\gtrsim 6\times 10^6`$ K (at 2.25 GHz). This lower limit is not far from the $`T_b\sim 4\times 10^6`$ K estimate by Martí et al. (1998), based on the unresolved nature of the source with the VLA.
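This estimate can be reproduced along the following lines (a sketch in cgs units; the uniform-disk solid angle is our assumption, and the exact value depends on the adopted source geometry at the factor-of-a-few level):

```python
import numpy as np

c, k_B = 2.998e10, 1.381e-16           # cgs constants
d = 3.1 * 3.086e21                     # adopted distance, 3.1 kpc in cm
size = c * 86400.0                     # light travel size for 1-day variability
theta = size / d                       # ~2.7e-7 rad, i.e. ~0.055 arcsec
S_nu, nu = 31e-26, 2.25e9              # 31 mJy at 2.25 GHz, in cgs units
omega = np.pi * theta**2 / 4.0         # uniform-disk solid angle (assumption)
T_b = c**2 * S_nu / (2 * k_B * nu**2 * omega)
print(f"theta ~ {theta*206265:.3f} arcsec, T_b ~ {T_b:.1e} K")  # a few 1e6 K
```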
## 5 Comparison to other radio loud HMXRB
SS 433, LS I+61°303 and the BHC Cygnus X-1 are classical examples of HMXRB with detectable radio emission (Penninx 1989), and all of them are also under GBI monitoring. In order to facilitate the comparison of LS 5039 to these sources, we have summarized some relevant parameters in Table 3. They include: the quiescent X-ray luminosity, as derived from ASM/RXTE data (extrapolated from PCA/RXTE in the case of LS 5039 because the signal-to-noise ratio is much higher); the weighted average radio luminosity and spectral index in quiescence, both based on GBI data; the nature of the compact object; the spectral type of the companion; the orbital period of the binary system and the distance to the source.
As can be seen in Table 3, the radio luminosity of LS 5039 is very similar to that of Cygnus X-1. This is, of course, provided that the distance adopted is correct. Their respective spectral indices seem to be intrinsically different, both sources being persistent at radio wavelengths. Cygnus X-1 also experiences strong X-ray variability due to changes in its state, which has not been seen in our source. The LS I+61°303 radio properties during quiescence are also very comparable to those of LS 5039, as well as their respective X-ray luminosities. In contrast, LS I+61°303 undergoes radio outbursts every $`\sim `$ 26.5 d, the orbital period of the system, while LS 5039 never had strong outbursts during the GBI observations. The X-ray and radio luminosities of SS 433 in quiescence are much higher than those of LS 5039. However, we notice that their $`L_{\mathrm{rad}}/L_\mathrm{X}`$ ratios are practically the same.
In general terms, the average properties of LS 5039 do not deviate extraordinarily from those of other radio loud HMXRB. Since even the well-accepted members of this class are not a homogeneous group, the membership of LS 5039 in this category appears very plausible to us.
## 6 Conclusions
We have presented a general overview of the X-ray and radio emission properties of the massive X-ray binary RX J1826.2$`-`$1450/LS 5039. Our X-ray and radio results are mostly based on long-term (few months) monitoring of the source, with our main conclusions being:
1. In the X-rays, a timing analysis has been performed showing neither pulsed nor periodic emission on time scales of 0.02–2000 s and 2–200 d, respectively. The X-ray spectrum has been found to be significantly hard (up to 30 keV), with no cut-off required. It can be fitted satisfactorily with a power-law plus a strong Gaussian iron line.
2. At radio wavelengths, the GBI monitoring confirms the long-term persistence of the RX J1826.2$`-`$1450/LS 5039 radio emission on time scales of months, always with a non-thermal synchrotron spectrum. The day-to-day variability continues to be moderate most of the time ($`\sim 30`$ %), and no strong radio outbursts have been observed.
3. The classification of RX J1826.2$`-`$1450/LS 5039 among the radio loud HMXRB group is reinforced. Although some specific differences with other members of this class do exist, noticeable similarities can be found.
###### Acknowledgements.
We thank Ron Remillard for useful discussion about the ASM data. We also thank Iossif Papadakis for his help in the timing analysis of the PCA data. This paper is partially based on quick-look results provided by the ASM/RXTE team and data obtained through the HEASARC Online Service of NASA/GSFC. We acknowledge detailed and useful comments from an anonymous referee. M.R. is supported by a fellowship from CIRIT (Generalitat de Catalunya, ref. 1999 FI 00199). P.R. acknowledges support via the European Union Training and Mobility of Researchers Network Grant ERBFMRX/CP98/0195. J.M. is partially supported by Junta de Andalucía (Spain). J.M.P and J.M. acknowledge partial support by DGICYT (PB97-0903).
# Bounds on Integrals of the Wigner Function
## Abstract
The integral of the Wigner function over a subregion of the phase-space of a quantum system may be less than zero or greater than one. It is shown that for systems with one degree of freedom, the problem of determining the best possible upper and lower bounds on such an integral, over all possible states, reduces to the problem of finding the greatest and least eigenvalues of an hermitian operator corresponding to the subregion. The problem is solved exactly in the case of an arbitrary elliptical region. These bounds provide checks on experimentally measured quasiprobability distributions.
The Wigner function has been much studied since its introduction , not only in the context of quantum physics , but also in signal processing . For a quantum system in a pure state, the Wigner function carries the same information as the wavefunction, up to an unimportant constant phase. In the case of a mixed state, it carries the same information as the density operator.
An important property of the Wigner function, one of several properties which distinguish it from classical probability densities, is that its integral over a given subregion of phase-space may be negative or greater than one. Quasiprobability distributions which, according to quantum theory, correspond to Wigner functions, have been measured in recent experiments, for a variety of states of light and matter , and negative values have indeed been observed. These experiments are probing the basic structure and predictions of quantum mechanics in a new way, and the prospect of increasingly accurate experiments of this type adds greatly to the interest in, and importance of, the theory of the Wigner function.
We consider the problem of determining the best possible bounds on the integral of the Wigner function over a given subregion of the phase-plane of any system with one degree of freedom. We show for any subregion of a rather general type that this problem reduces to the problem of finding the greatest and least eigenvalues of an hermitian Fredholm integral operator corresponding to that subregion. The problem is found to be exactly solvable for any elliptical or annular subregion, and the bounds are given explicitly in the case of the ellipse. These best possible bounds provide new information about the structure of the Wigner function, differing from known results such as best possible bounds (7) on the values of the Wigner function itself, bounds on integrals of powers of the function , or bounds on various moments of the function . In particular, the new bounds determine the degree to which the integral of any Wigner function over an elliptical subregion of the phase-plane can lie outside the interval $`[0,1]`$ which applies to classical densities. In principle, they therefore provide checks on experiments of the type to which we have referred, because they must be respected by any measured quasiprobability distribution consistent with quantum mechanics.
As we shall show, appropriately chosen oscillator stationary states (or single frequency light modes) lead theoretically to the exact attainment of these upper and lower bounds. Such states are perhaps the easiest to establish experimentally, and it is just such states for which quasiprobability distributions have been measured in some of the experiments mentioned above .
In what follows, we consider systems with one degree of freedom, with a Cartesian coordinate $`q`$ and its conjugate momentum $`p`$ . Our results refer to the Wigner function considered at a particular instant, and are therefore independent of any particular dynamics. We work in dimensionless variables. Appropriate dimensional factors will appear in what follows if each coordinate $`q`$ there is replaced by $`q/L`$, each momentum $`p`$ by $`Lp/\mathrm{}`$, each wavefunction $`\psi `$ by $`L^{1/2}\psi `$, each phase space area $`A`$ by $`A/\mathrm{}`$, and each Wigner function $`W`$ by $`\mathrm{}W`$, where $`L`$ is a suitable constant with dimensions of a length.
Given a normalized wavefunction $`\psi `$ corresponding to a pure state $`|\psi \rangle `$, the Wigner function is defined as
$$W_\psi (q,p)=\frac{1}{\pi }\int _{-\infty }^{\infty }\psi ^{*}(q+x)\psi (q-x)e^{i2px}\,dx.$$
(1)
Then
$$\int _\Gamma W_\psi \,dq\,dp=1,\qquad \int _\Gamma [W_\psi ]^2\,dq\,dp=\frac{1}{2\pi },$$
(2)
where $`\mathrm{\Gamma }`$ denotes the $`(q,p)`$ phase-plane.
For a mixed state, the density operator $`\rho `$ is positive-definite and hermitian with unit trace, and typically can be resolved in the form
$$\rho =\sum _ip_i|\psi _i\rangle \langle \psi _i|,\qquad p_i>0,\qquad \sum _ip_i=1,$$
(3)
where the states $`|\psi _i\rangle `$ are orthonormal. The corresponding Wigner function has the form
$$W_\rho =\sum _ip_iW_{\psi _i},$$
(4)
where $`W_{\psi _i}`$ is the Wigner function corresponding to the pure state $`|\psi _i`$. More generally, the sum in (3) and (4) could be replaced in part or whole by an integral, but this does not significantly affect the argument of the next paragraph.
It follows from (3) and (4) that any bound on the Wigner function, or on its integral over a given subregion $`S`$ of $`\mathrm{\Gamma }`$, must hold for all possible mixed states if it holds for all possible pure states. For example, if
$$\int _SW_\psi (q,p)\,dq\,dp>L\quad \mathrm{for\ all}\ \psi ,$$
(5)
then for any $`\rho `$ as in (3),
$$\int _SW_\rho (q,p)\,dq\,dp=\sum _ip_i\int _SW_{\psi _i}(q,p)\,dq\,dp>\sum _ip_iL=L.$$
(6)
Since a pure state can be regarded as a limiting case of a mixed state, it then follows that best possible upper and lower bounds on the Wigner function or its integral, when considered over all pure states, must also be best possible upper or lower bounds when considered over all mixed states, although a bound that is attainable over pure states may not in general be attainable over mixed states. Bearing this in mind, we restrict attention in what follows to pure states.
Best possible bounds on the Wigner function itself are known :
$$-\frac{1}{\pi }\le W_\psi (q,p)\le \frac{1}{\pi }$$
(7)
for all normalized $`\psi `$, for all $`(q,p)\in \Gamma `$. It is easily seen that $`W_\psi =\pm 1/\pi `$ at the point $`(q,p)`$ if and only if
$$\psi (q-x)e^{ipx}=\pm \psi (q+x)e^{-ipx}\quad \mathrm{for\ all}\ x.$$
(8)
The problem of interest here is to find best possible bounds on the ‘quasiprobability functional’ corresponding to the subregion $`S`$, defined as
$$Q_S[W_\psi ]=\int _SW_\psi (q,p)\,dq\,dp=\int _\Gamma \chi _S(q,p)W_\psi (q,p)\,dq\,dp,$$
(9)
where $`\chi _S`$ is the function with the value $`1`$ on $`S`$, and the value $`0`$ on the complement of $`S`$.
It follows at once from (7) and (9) that
$$-\frac{A_S}{\pi }\le Q_S[W_\psi ]\le \frac{A_S}{\pi },$$
(10)
where $`A_S=\int _S\,dq\,dp`$ is the area of $`S`$.
In order to obtain stronger bounds than (10), recall that each real-valued function $`T(q,p)`$ on $`\mathrm{\Gamma }`$ can be associated with an hermitian operator $`\widehat{T}`$ such that
$$(\psi ,\widehat{T}\psi )=\int _\Gamma T(q,p)W_\psi (q,p)\,dq\,dp,$$
(11)
where $`(\psi _1,\psi _2)`$ is the usual scalar product of wavefunctions. Here $`\widehat{T}`$ can always be written as a Fredholm integral operator,
$$(\widehat{T}\psi )(x)=\int _{-\infty }^{\infty }K_T(x,y)\psi (y)\,dy,$$
(12)
with hermitian kernel given in terms of the real-valued function $`T(q,p)`$ as
$$K_T(x,y)=\frac{1}{2\pi }\int _{-\infty }^{\infty }T((x+y)/2,p)e^{ip(x-y)}\,dp.$$
(13)
Consider now the case when $`T(q,p)=\chi _S(q,p)`$. Comparison of (9) and (11) shows that
$$Q_S[W_\psi ]=(\psi ,\widehat{K}_S\psi ),$$
(14)
$$(\widehat{K}_S\psi )(x)=\int _{-\infty }^{\infty }K_S(x,y)\psi (y)\,dy,$$
(15)
$$K_S(x,y)=\frac{1}{2\pi }\int _{-\infty }^{\infty }\chi _S((x+y)/2,p)e^{ip(x-y)}\,dp.$$
(16)
It follows at once from (14) that the extremal values of $`Q_S[W_\psi ]`$ are determined by the eigenvalue problem $`\widehat{K}_S\psi =\lambda \psi `$ with $`\widehat{K}_S`$ as in (15). In particular,
$$\mathrm{inf}Q_S=\lambda _{min},\qquad \mathrm{sup}Q_S=\lambda _{max},$$
(17)
where $`\lambda _{min}`$ and $`\lambda _{max}`$ are the least and greatest eigenvalues of $`\widehat{K}_S`$ respectively (or more generally, the infimum and supremum of the spectrum of $`\widehat{K}_S`$). Thus the problem of interest now becomes the determination of $`\lambda _{min}`$ and $`\lambda _{max}`$.
In order to proceed, suppose that the subregion $`S\subset \Gamma `$ has the general form shown in Fig. 1.
Fig. 1. A typical region $`S`$ in the phase-plane.
Here $`F_1`$ and $`F_2`$ are real-valued functions defined for $`b\le q\le c`$, and satisfying $`F_1(b)=F_2(b)`$, $`F_1(c)=F_2(c)`$, and $`F_2(q)\ge F_1(q)`$ for $`b<q<c`$. Each function need only be piecewise continuous, and $`b=-\infty `$ and/or $`c=\infty `$ is allowed. For such a subregion, the characteristic function has the form
$$\chi _S(q,p)=\begin{cases}1&b<q<c,\ F_1(q)<p<F_2(q)\\ 0&\mathrm{otherwise},\end{cases}$$
(18)
and the kernel (16) becomes
$$K_S(x,y)=\frac{1}{2\pi }\int _{F_1(\frac{x+y}{2})}^{F_2(\frac{x+y}{2})}e^{ip(x-y)}\,dp=\frac{e^{i(x-y)F_2(\frac{x+y}{2})}-e^{i(x-y)F_1(\frac{x+y}{2})}}{2\pi i(x-y)},$$
(19)
for $`2b<(x+y)<2c`$, and $`0`$ otherwise. Note that the singularity at $`x=y`$ is only apparent. Then (15) becomes
$$(\widehat{K}_S\psi )(x)=\int _{2b-x}^{2c-x}\frac{e^{i(x-y)F_2(\frac{x+y}{2})}-e^{i(x-y)F_1(\frac{x+y}{2})}}{2\pi i(x-y)}\,\psi (y)\,dy.$$
(20)
More generally, the subregion $`S`$ may consist of several nonintersecting parts $`S_1`$, $`S_2`$, $`\mathrm{\dots }`$ of the same general type, even on overlapping $`q`$-intervals. It is easily seen that in such a case $`\widehat{K}_S=\widehat{K}_{S_1}+\widehat{K}_{S_2}+\mathrm{\cdots }`$ However, in general $`[\widehat{K}_{S_1},\widehat{K}_{S_2}]\ne 0,`$ etc., so that the bounds associated with different subregions cannot be added.
Note also that the extremal values of $`Q_S`$ and $`Q_{S^{\prime }}`$ are the same if $`S`$ is transformed into $`S^{\prime }`$ by a canonical transformation of $`\Gamma `$ of the form
$$q^{\prime }=\alpha q+\beta p+\gamma ,\qquad p^{\prime }=\mu p+\nu q+\rho ,$$
(21)
where $`\alpha `$, $`\beta `$, $`\gamma `$, $`\mu `$, $`\nu `$ and $`\rho `$ are real constants satisfying $`\alpha \mu -\beta \nu =1`$. In particular, the case of any circular or elliptical region of area $`\pi a^2`$ can be reduced to the case of a circular disk of radius $`a`$, centred at the origin.
In this case, the operator $`\widehat{K}_S`$ (let $`\widehat{K}_a`$ denote it now) is given from (20) by
$$(\widehat{K}_a\psi )(x)=\int _{-2a-x}^{2a-x}\frac{\mathrm{sin}[(x-y)\sqrt{a^2-(x+y)^2/4}]}{\pi (x-y)}\,\psi (y)\,dy,$$
(22)
for $`-\infty <x<\infty `$, and it is not hard to check that $`\widehat{K}_a`$ commutes with the simple harmonic oscillator Hamiltonian operator $`\widehat{H}`$ defined by
$$\widehat{H}\psi (x)=-\frac{d^2\psi (x)}{dx^2}+x^2\psi (x).$$
(23)
This is explained by the fact that $`\widehat{H}`$ generates transformations of the wavefunction corresponding to rotations in the phase-plane, which leave the disk invariant. It follows that for every value of $`a`$ the eigenfunctions of $`\widehat{K}_a`$ are the oscillator eigenfunctions
$$\psi _n(x)=H_n(x)e^{-x^2/2},\qquad n=0,1,\mathrm{\dots }$$
(24)
where $`H_n`$ is the Hermite polynomial .
According to (14), the eigenvalue $`\lambda _n(a)`$ of $`\widehat{K}_a`$ corresponding to the eigenfunction (24) must equal the total quasiprobability on the disk of radius $`a`$, as determined by the Wigner function $`W_n`$ (say) corresponding to that eigenfunction. Since it is known that
$$W_n(q,p)=(-1)^n\pi ^{-1}L_n(2[p^2+q^2])e^{-(p^2+q^2)},$$
(25)
where $`L_n`$ is the Laguerre polynomial , it follows that
$$\lambda _n(a)=(-1)^n\int _0^{a^2}L_n(2u)e^{-u}\,du.$$
(26)
Thus $`\lambda _0(a)=1-e^{-a^2}`$, $`\lambda _1(a)=1-(1+2a^2)e^{-a^2}`$, $`\lambda _2(a)=1-(1+2a^4)e^{-a^2}`$, $`\lambda _3(a)=1-(1+2a^2-2a^4+\frac{4}{3}a^6)e^{-a^2}`$, etc.
In summary:
$$\int _{-2a-x}^{2a-x}\frac{\mathrm{sin}[(x-y)\sqrt{a^2-(x+y)^2/4}]}{\pi (x-y)}\,\psi _n(y)\,dy=\lambda _n(a)\psi _n(x),$$
(27)
with $`\psi _n`$ as in (24) and $`\lambda _n`$ as in (26).
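As a quick numerical check of Eq. (26) against the closed forms listed above, one may evaluate the integral directly. A sketch using standard SciPy routines:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def lam(n, a):
    """lambda_n(a) = (-1)^n * integral_0^{a^2} L_n(2u) e^{-u} du, Eq. (26)."""
    val, _ = quad(lambda u: eval_laguerre(n, 2 * u) * np.exp(-u), 0, a * a)
    return (-1) ** n * val

a = 1.3
print(lam(0, a), 1 - np.exp(-a**2))                    # these pairs agree
print(lam(1, a), 1 - (1 + 2 * a**2) * np.exp(-a**2))
print(lam(2, a), 1 - (1 + 2 * a**4) * np.exp(-a**2))
```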
Fig. 2 shows the graphs of $`\lambda _n`$ versus $`a`$ for $`n=0,1,2,3`$, and also the graphs of $`\lambda _{max}`$ and $`\lambda _{min}`$ (bold lines). Note that $`\lambda _{max}(a)=\lambda _0(a)=1-e^{-a^2}`$, whereas the graph of $`\lambda _{min}`$ has the peculiar scalloped shape shown, because $`\lambda _{min}(a)=\lambda _1(a)`$ for $`0\le a<a_1`$, $`\lambda _{min}(a)=\lambda _2(a)`$ for $`a_1\le a<a_2`$, etc., where $`a_1`$ is the greatest value of $`a`$ at which $`\lambda _1(a)=\lambda _2(a)`$, $`a_2`$ is the greatest value of $`a`$ at which $`\lambda _2(a)=\lambda _3(a)`$, etc. Thus $`a_1=1`$, $`a_2=\sqrt{(3+\sqrt{3})/2}`$, etc.
Fig. 2. Left to right: graphs of $`\lambda _n`$ for $`n=0`$, $`1`$, $`2`$, $`3`$, and also of $`\lambda _{max}`$, $`\lambda _{min}`$ (bold lines).
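The joining points of the scallops can also be located numerically from the closed forms. A sketch; the brackets passed to the root finder are read off from Fig. 2:

```python
import numpy as np
from scipy.optimize import brentq

l1 = lambda a: 1 - (1 + 2 * a**2) * np.exp(-a**2)
l2 = lambda a: 1 - (1 + 2 * a**4) * np.exp(-a**2)
l3 = lambda a: 1 - (1 + 2*a**2 - 2*a**4 + 4*a**6/3) * np.exp(-a**2)

a1 = brentq(lambda a: l1(a) - l2(a), 0.5, 1.5)  # -> 1.0
a2 = brentq(lambda a: l2(a) - l3(a), 1.2, 2.0)  # -> sqrt((3+sqrt(3))/2)
print(a1, a2, np.sqrt((3 + np.sqrt(3)) / 2))
```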
With the introduction of the appropriate dimensional factors, the result is that the integral of any pure-state or mixed-state Wigner function over any circular or elliptical region with area $`\pi a^2\hbar `$ in the phase-plane, lies in the interval $`[\lambda _{min}(a),\lambda _{max}(a)]`$, in contrast to the integral of any classical density, which lies in $`[0,1]`$. According to quantum mechanics, any quasiprobability distribution determined by quantum tomography (in particular) is described by a Wigner function . For such a distribution, the quasiprobability on disks of various radii, centred on regions where the distribution is most negative, for example, could be estimated and checked for consistency against the theoretical bounds. Of course, experimental data are inevitably subject to noise for various reasons. While there are known techniques to allow for noise in the reconstruction of densities from more primitive data , this would obviously limit the power of the proposed check. A more subtle complication is that reconstruction algorithms may invoke quantum mechanical arguments , and any check would be satisfied trivially in a given case if these arguments forced a reconstructed density to satisfy the theoretical bounds. Any given reconstruction algorithm would have to be analysed carefully in this regard to ensure that a check was meaningful.
If these difficulties could be overcome, might the proposed check be elevated to the level of a test of quantum mechanics itself? The assumptions underlying the theory of the Wigner function are very few: the linear vector space of states, the Born interpretation, and the conjugate relations between coordinates and momenta. However, quantum mechanics has been so well-tested at the energy scales of the present experiments that any violations, if indeed there are any, must surely be exceedingly small, and very probably beyond present capabilities of resolution amidst noise.
Eigenvalue problems corresponding to other shapes such as squares and triangles are easily formulated, but do not seem to be exactly solvable. They could be tackled numerically. Exact results for disks can be extended to the case of an annular region (and more generally the case of several concentric annuli), because the operators $`\widehat{K}_a`$ commute for different $`a`$, and have common eigenfunctions. This may be particularly useful in checking distributions determined by the ‘ring method’ .
These ideas can be extended to systems with more degrees of freedom, and to systems with spin.
Thanks are due to J.A. Belward, R. Chakrabarti, G.A. Chandler, D. Ellinas, W.P. Schleich and referees of a preliminary version for helpful comments.
# $7.0/Mflops Astrophysical $`N`$-Body Simulation with Treecode on GRAPE-5
### Abstract
As an entry for the 1999 Gordon Bell price/performance prize, we report an astrophysical $`N`$-body simulation performed with a treecode on the GRAPE-5 (Gravity Pipe 5) system, a special-purpose computer for astrophysical $`N`$-body simulations. The GRAPE-5 system has 32 pipeline processors specialized for the gravitational force calculation. Other operations, such as tree construction, tree traverse and time integration, are performed on a general purpose workstation. The total cost for the GRAPE-5 system is 40,900 dollars. We performed a cosmological $`N`$-body simulation with 2.1 million particles, which sustained a performance of 5.92 Gflops averaged over 8.37 hours. The price per performance obtained is 7.0 dollars per Mflops.
## 1 Introduction
Astrophysical $`N`$-body simulation is one of the most widely used techniques to investigate the formation and evolution of astronomical objects, such as galaxies, galaxy clusters and large scale structures of the universe. In such simulations, we calculate the gravitational force on each particle from all other particles, and integrate the orbit of each particle according to Newton’s equation of motion. We then investigate the structural and dynamical properties of the simulated object.
The astrophysical $`N`$-body simulation has been one of the grand challenge problems in computational science. In the years 1992, 96, 97, and 98, the Gordon Bell prizes were awarded to cosmological $`N`$-body simulations, and in 1995 the Gordon Bell prize was awarded to an $`N`$-body simulation of a black hole binary in a galaxy . The calculation cost of the astrophysical $`N`$-body simulation rapidly increases for large $`N`$, because it is proportional to $`N^2`$ if we use a straightforward approach. Gravity is a long-range attractive force. A particle feels the forces from all other particles, no matter how far away they are. We cannot use a cutoff technique of the kind widely used in MD simulations (e.g. ).
The hierarchical tree algorithm is one such fast algorithm; it reduces the calculation cost from $`O(N^2)`$ to $`O(N\mathrm{log}N)`$. In this algorithm, particles are organized in the form of a tree, and each node of the tree represents a group of particles. The force from a distant node is replaced by the force from its center of mass. The Gordon Bell prizes of years 1992, 97 and 98 were awarded to $`N`$-body simulations with this tree algorithm , which were performed on Intel Touchstone Delta, ASCI-Red, a PC cluster, and an Alpha cluster.
We report an astrophysical $`N`$-body simulation with the tree algorithm on the GRAPE-5 (GRAvity PipE) special-purpose computer. GRAPE-5 has dedicated pipelines specialized for the calculation of the gravitational force. It is connected to a host computer, which is a general-purpose workstation, and operates as a hardware accelerator for the calculation of the gravitational force. Other operations, such as tree construction, tree traversal and time integration, are performed on the host computer. It has already been demonstrated that the approach of using special-purpose machines successfully achieves very high performance in scientific computations, by the Gordon Bell prize simulations of 1995 and 96 , which were performed on GRAPE-4, and the last Gordon Bell prize simulation , which was performed on QCDSP.
We performed a cosmological 2.1-million-particle simulation using the tree algorithm on GRAPE-5 connected to a COMPAQ AlphaServer DS10. The sustained performance is 5.92 Gflops and the price/performance is $7.0/Mflops. In the rest of this paper, we describe the GRAPE-5 system and the tree algorithm on GRAPE, and report the cost and performance.
## 2 GRAPE-5 system
We briefly describe the architecture of the GRAPE-5 system. More detailed descriptions of the GRAPE-5 system will be given elsewhere . GRAPE-5 is designed to run the tree code at very high speed. Figure 1 summarizes the configuration of the GRAPE-5 system used for the simulation reported in this paper. The GRAPE-5 system consists of 2 processor boards, 2 host interface boards, and a host computer. The processor board performs the force calculation. The host interface board handles the communication between the processor board and the host computer. The host computer performs all other operations. We used a COMPAQ AlphaServer DS10 with a 21264/466MHz Alpha processor for the host computer. Figure 2 and figure 3 are photographs of the GRAPE-5 system and the GRAPE-5 processor board, respectively.
Each processor board consists of 8 processor chips (G5 chips) and a particle data memory. The G5 chip is a custom LSI chip which calculates the gravitational force. Each G5 chip houses 2 pipelines specialized for the force calculation. The particle data memory stores the data of the particles which exert the force and supplies them to the G5 chips. The G5 chips operate at 90 MHz and the other parts of the processor board operate at 15 MHz.
The G5 chip is designed for astrophysical $`N`$-body simulations with the tree algorithm and calculates a pair-wise force with a relative error of about 0.3%. This accuracy might sound rather low, but detailed theoretical analyses and numerical experiments have shown that it is more than enough. The average error of the force in our simulation is around 0.1%, which is dominated by the approximation made in the tree algorithm and not by the accuracy of the hardware. The relative accuracy was practically the same when we performed the same force calculation using standard 64-bit floating point arithmetic.
The theoretical peak speed of the GRAPE-5 system is 109.44 Gflops. The total number of pipeline processors is 32. Each pipeline performs 38 operations per clock cycle, if we use the same counting convention as used in .
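The peak-speed figure follows directly from the numbers given above:

```python
pipelines = 2 * 8 * 2        # 2 boards x 8 G5 chips x 2 pipelines = 32
ops_per_cycle = 38           # operations per pipeline per clock cycle
clock_hz = 90e6
print(pipelines * ops_per_cycle * clock_hz / 1e9)  # -> 109.44 Gflops
```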
## 3 Tree algorithm
Our code is based on Barnes’s modified tree algorithm . The implementation of the modified tree algorithm on GRAPE was discussed in . Using this algorithm, the calculation cost on the host computer is greatly reduced from that of the original algorithm, and the forces exerted on multiple particles can be calculated in parallel. In the original algorithm, an interaction list is created for each particle. In the modified tree algorithm, neighboring particles are grouped and one interaction list is shared among the particles in the same group. Forces from particles in the same group are calculated directly.
The modified tree algorithm reduces the calculation cost of the host computer by roughly a factor of $`n_g`$, where $`n_g`$ is the average number of particles in a group. On the other hand, the amount of work on GRAPE-5 increases as we increase $`n_g`$, since the interaction list becomes longer. There is, therefore, an optimal $`n_g`$ at which the total computing time is minimum. The optimal $`n_g`$ strongly depends on the ratio of the speed of the host computer and GRAPE. For the present configuration, the optimal $`n_g`$ is around 2000.
Note that our modified tree algorithm performs a larger number of operations than the tree algorithm on a general-purpose computer. When we estimate the performance in section 5, we will correct for this. Note also that our modified tree algorithm is more accurate than the original tree algorithm for the same accuracy parameter, as shown in .
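To illustrate the structure of the force stage, the following sketch shows how one shared interaction list serves a whole group while intra-group forces are summed directly. This is our illustrative Python, not the production code; tree construction and the tree walk that builds the list are omitted. On GRAPE-5, these loops are what the hardware pipelines evaluate:

```python
import numpy as np

def group_forces(group_pos, group_mass, interaction_list, eps=0.01):
    """Accelerations on all particles of one group (modified tree algorithm):
    `interaction_list` holds (position, mass) pseudo-particles from the tree
    walk, shared by the whole group; group members interact directly."""
    acc = np.zeros_like(group_pos)
    for pos, m in interaction_list:               # far field, shared list
        dr = pos - group_pos
        r2 = np.einsum('ij,ij->i', dr, dr) + eps**2
        acc += m * dr / r2[:, None] ** 1.5
    for i in range(len(group_pos)):               # near field, direct sum
        dr = group_pos - group_pos[i]
        r2 = np.einsum('ij,ij->i', dr, dr) + eps**2
        r2[i] = np.inf                            # exclude self-interaction
        acc[i] += np.sum(group_mass[:, None] * dr / r2[:, None] ** 1.5, axis=0)
    return acc
```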
## 4 Cost
The total cost of the GRAPE-5 system is 4.7 M JYE. The GRAPE-5 board is available from a Japanese commercial company for the price of 1.65 M JYE per board. The remaining 1.4 M JYE was spent on the host computer, a COMPAQ AlphaServer DS10, including 512 MByte of main memory and a C++ compiler. The total cost, with the present exchange rate of 1 dollar = 115 JYE, is about 40,900 dollars.
## 5 Simulation
We report the performance statistics for the astrophysical $`N`$-body simulations with the tree algorithm on GRAPE-5. The performance numbers are based on the wall-clock time obtained from the UNIX system timer on the host computer (COMPAQ AlphaServer DS10).
We performed a cosmological $`N`$-body simulation of a sphere of radius 50 Mpc (megaparsec) with 2,159,038 particles for 999 timesteps. We assigned the initial positions and velocities to the particles in a spherical region selected from a discrete realization of the density contrast field based on a standard cold dark matter scenario, using the COSMICS package . A particle represents $`1.7\times 10^{10}`$ solar masses. We performed the simulation from $`z=24`$, where $`z`$ is the redshift, to the present time. Figure 4 shows a snapshot of the simulation.
The total number of the particle-particle interactions is $`2.90\times 10^{13}`$. This implies that the average length of the interaction list is 13,431. The whole simulation took 30,141 seconds (8.37 hours) including I/O, resulting in an average computing speed of 36.4 Gflops. Here we use an operation count of 38 per interaction.
However, as we described in section 3, our modified tree algorithm performs a larger number of operations than the tree algorithm on a general-purpose computer. To correct for this, we estimated the operation count of the original tree algorithm for the same simulation, using five snapshot files and the same accuracy parameter. The estimated number of interactions is $`4.69\times 10^{12}`$. The effective sustained speed is 5.92 Gflops and the price/performance is $7.0/Mflops.
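The quoted performance figures follow from these raw numbers (small differences come from rounding of the interaction counts):

```python
wall = 30141.0                     # wall-clock seconds, including I/O
raw = 2.90e13 * 38 / wall          # operations actually performed
eff = 4.69e12 * 38 / wall          # tree-code-equivalent operations
print(raw / 1e9, eff / 1e9)        # -> ~36 Gflops and ~5.9 Gflops
print(40900.0 / (eff / 1e6))       # -> ~7 dollars per Mflops
```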
# Extraction of skewed parton distributions from experiment
## 1 Introduction
The basic concept of SPD’s is best illustrated with the lowest order graph of deeply virtual Compton scattering (DVCS), in which a quark of momentum fraction $`x_1`$ leaves the proton and is returned to it with $`x_2`$. The two fractions not being equal is due to the fact that an on-shell photon is produced, which necessitates a change in the $`+`$ momentum in going from the virtual space-like photon with $`+`$ momentum $`x_{bj}p_+`$ to the basically zero $`+`$ momentum of the real $`\gamma `$. This sets $`x_2=x_1-x`$ and thus the skewedness parameter to $`x`$. Thus one has a nonzero momentum transfer onto the proton, and the parton distributions (PDF’s) which enter the process are no longer the regular PDF’s, since the matrix element of the quark (gluon) operator is now taken between states of unequal rather than equal momentum.
## 2 Appropriate Process and experimental observable
The most desirable process for extracting SPD’s is the one with the least theoretical uncertainty, the least singular $`Q^2`$ behavior and a proven factorization formula.
The process which fulfills all the above criteria is DVCS, and the experimental observable which allows direct access to the SPD’s is the azimuthal angle asymmetry $`A`$ of the combined DVCS and Bethe-Heitler (BH) differential cross section. $`A`$ is defined as:
$$A=\frac{\int _{-\pi /2}^{\pi /2}d\varphi \,d\sigma -\int _{\pi /2}^{3\pi /2}d\varphi \,d\sigma }{\int _0^{2\pi }d\varphi \,d\sigma }.$$
(1)
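As a toy illustration of Eq. (1), take a differential cross section of the form $`c_0+c_1\mathrm{cos}\varphi `$, with the $`\mathrm{cos}\varphi `$ piece mimicking the interference term discussed below; Eq. (1) then gives $`A=2c_1/(\pi c_0)`$, so the asymmetry directly isolates the interference coefficient. A sketch with invented numbers:

```python
import numpy as np
from scipy.integrate import quad

c0, c1 = 1.0, 0.3                          # invented coefficients
dsig = lambda phi: c0 + c1 * np.cos(phi)   # toy dsigma/dphi

num = quad(dsig, -np.pi/2, np.pi/2)[0] - quad(dsig, np.pi/2, 3*np.pi/2)[0]
den = quad(dsig, 0, 2 * np.pi)[0]
print(num / den, 2 * c1 / (np.pi * c0))    # both ~0.191
```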
This asymmetry is nonzero due to the interference term between BH and DVCS, which is proportional to the real part of the DVCS amplitude. The factorized expression for the real part of the amplitude is
$$ReT(x,Q^2)=\int _{-1+x}^1\frac{dy}{y}\,ReC_i(x/y,Q^2)\,f_i(y,x,Q^2).$$
(2)
$`ReC_i`$ is the real part of the hard scattering coefficient (HSC) and $`f_i`$ are the SPD’s. At HERA one is mainly restricted to the small-$`x`$ region where gluons dominate, and thus $`i`$ will be $`g`$. Thus Eq. (1) contains only measurable or directly computable quantities, except for Eq. (2) in the interference part. Therefore, one would now be able to extract the SPD’s from $`A`$, if one could deconvolute Eq. (2). However, for SPD’s life is not as ”simple” as in inclusive DIS, where the deconvolution for $`F_2`$ is trivial, since the SPD’s depend on two variables rather than one. Furthermore, the HSC depends on the same variables as the SPD’s. These facts make the deconvolution of Eq. (2) impossible.
## 3 Algorithms for extracting SPD’s
Rather than deconvoluting, one can expand the PDF’s with respect to a complete set of orthogonal polynomials $`P_j^{(\alpha _P)}(t)`$. In this particular case we need the orthogonality of the polynomials to be on the interval $`-1\le t\le 1`$, with $`t=\frac{2y-x}{2-x}`$ equivalent to $`-1+x\le y\le 1`$ as found in Eq. (2). One can then write the following expansion:
$$f^{q,g}(y,x,Q^2)=\frac{2}{2-x}\sum _{j=0}^{\infty }\frac{w(t|\alpha _P)}{n_j(\alpha _P)}P_j^{q,g}(t)\,M_j^{q,g}(x,Q^2)$$
(3)
with $`w(t|\alpha _P)`$, $`n_j(\alpha _P)`$ and $`\alpha _P`$ being weight, normalization and a label determined by the choice of the orthogonal polynomial used. $`M_j^{q,g}(x,Q^2)`$ is given by:
$$M_j^{q,g}(x,Q^2)=\sum _{k=0}^{\infty }E_{jk}^{q,g}(x)f_k^{q,g}(x,Q^2),$$
(4)
where
$$f_j^{q,g}(x,Q^2)=\sum _{k=0}^{j}x^{j-k}B_{jk}^{q,g}\stackrel{~}{f}_k^{q,g}(x,Q^2).$$
(5)
$`B_{jk}^{q,g}`$ is an operator transformation matrix which fixes the NLO corrections to the eigenfunctions of the kernels and is thus just the identity matrix in LO. The upper limit in Eq. (4) is given by the constraint $`\theta `$-functions present in the expansion coefficients, which are generally defined by
$$E_{jk}(\nu ;\alpha _P|x)=\frac{\theta _{jk}}{(2-x)^k}\frac{\mathrm{\Gamma }(\nu )\mathrm{\Gamma }(\nu +k)}{\mathrm{\Gamma }(\frac{1}{2})\mathrm{\Gamma }(k+\nu +\frac{1}{2})}\int _{-1}^1dt\,(1-t^2)^{k+\nu -\frac{1}{2}}\frac{d^k}{dt^k}P_j^{\alpha _P}\left(\frac{xt}{2-x}\right).$$
(6)
The moments of the SPD’s evolve according to
$$\stackrel{~}{f}_j^{q,g}(x,Q^2)=\stackrel{~}{E}_j(Q^2,Q_0^2)\stackrel{~}{f}_j^{q,g}(x,Q_0^2)$$
(7)
where the evolution operator is a matrix of functions in the singlet case. Finally, the Gegenbauer moments of the SPD’s at $`Q_0^2`$ are defined by
$$\stackrel{~}{f}_j^q(x,Q_0^2)=\int _{-1}^1dt\left(\frac{x}{2-x}\right)^jC_j^{3/2}\left(\frac{tx}{2-x}\right)f^q(t,x,Q_0^2)$$
$$\stackrel{~}{f}_j^g(x,Q_0^2)=\int _{-1}^1dt\left(\frac{x}{2-x}\right)^{j-1}C_{j-1}^{5/2}\left(\frac{tx}{2-x}\right)f^g(t,x,Q_0^2).$$
(8)
At LO and at small $`x`$ the above formalism simplifies. Owing to the conformal properties of the operators involved in the definition of the SPD’s, one finds the following expansion:
$$f^g(y,x,Q^2)=\frac{2}{2-x}\sum _{j=0}^{\infty }\sum _{k=1}^{\infty }\frac{w(t|5/2)}{N_j(5/2)}E_{j\,k-1}^g(x)C_{j-1}^{5/2}(t)\stackrel{~}{f}_{k-1}^g(x,Q^2)$$
$$f^q(y,x,Q^2)=\frac{2}{2-x}\sum _{j=0}^{\infty }\sum _{k=0}^{\infty }\frac{w(t|3/2)}{N_j(3/2)}E_{jk}^q(x)C_j^{3/2}(t)\stackrel{~}{f}_k^q(x,Q^2),$$
(9)
with $`w(t|\nu )=(1-t^2)^{\nu -1/2}`$ and the $`C_j^\nu `$’s being Gegenbauer polynomials . The multiplicatively renormalizable moments evolve as above but with the explicit evolution operator:
$$\stackrel{~}{E}_j^{ik}(Q^2,Q_0^2)=T\mathrm{exp}\left(-\frac{1}{2}\int _{Q_0^2}^{Q^2}\frac{d\tau }{\tau }\gamma _j^{ik}(\alpha _s(\tau ))\right)$$
(10)
where $`T`$ orders the matrices of the regular LO anomalous dimensions (i,k = q,g) along the integration path.
Inserting Eq. (9) in Eq. (2) one obtains:
$$ReT(x,Q^2)=2\sum _{j=0}^{\infty }\sum _{k=1}^{\infty }\stackrel{~}{E}_{k-1}(Q^2,Q_0^2)\stackrel{~}{f}_{k-1}^g(x,Q_0^2)E_{j\,k-1}^g(x)\int _{-1}^1\frac{dt}{2t+x}\frac{w(t|5/2)}{N_j(5/2)}ReC_g\left(\frac{1}{2}+\frac{t}{x},Q^2\right)C_{j-1}^{5/2}(t),$$
(11)
where we chose the factorization/renormalization scale to be equal to $`Q^2`$. As one can see, the integral in the sum is now only over known functions and will yield, for fixed $`x`$, a function of $`j`$, as will the expansion coefficients for fixed $`x`$. The evolution operator can also be evaluated and, for fixed $`Q^2`$, yields just a function of $`j`$, which leaves the coefficients $`\stackrel{~}{f}_{k-1}^g(x,Q_0^2)`$ as the only unknowns. Since the left-hand side will be known from experiment for fixed $`x`$ and $`Q^2`$, we are still in the unfortunate situation that a single number is determined by a sum over $`j`$ of an infinite number of coefficients. However, measuring the real part at an infinite number of $`Q^2`$ values for fixed $`x`$, one would have an infinite-dimensional column vector on the left-hand side, and on the right-hand side a square matrix times another column vector of coefficients whose dimension is determined by the number of $`j`$. Since all the entries in the matrix are real, one can find the inverse provided the matrix has no zero eigenvalues. Thus one can directly compute the moments of the initial parton distributions which are needed to reconstruct the skewed gluon distribution from Eq. (3).
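Once the sums are truncated at some finite $`j_{max}`$, the extraction reduces to a linear solve. The sketch below assumes the matrix has already been assembled from the evolution operator, the expansion coefficients and the HSC integrals of Eq. (11); the interface is hypothetical:

```python
import numpy as np

def extract_moments(reT_values, M):
    """Solve the truncated linear system of Eq. (11): reT_values holds Re T at
    N values of Q^2 (fixed x), and M[m, j] collects the evolution operator,
    expansion coefficient and HSC integral for moment j at Q_m^2."""
    M = np.asarray(M, dtype=float)
    if np.linalg.matrix_rank(M) < M.shape[0]:
        raise ValueError("matrix has (near-)zero eigenvalues; cannot invert")
    return np.linalg.solve(M, np.asarray(reT_values, dtype=float))
```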
The drawback of the above procedure is that it has to be repeated anew for each $`x`$, and that NLO evolution studies indicate that one might need as much as $`50-100`$ polynomials to achieve sufficient accuracy at small-$`x`$. This, of course, would render the procedure useless in an experimental situation where a $`j`$ of $`5-10`$ is possibly achievable!
A practical way out of the above-mentioned predicament is to make a simple-minded ansatz for the skewed gluon distribution in the different regions, for example $`A_0z^{A_1}(1-z)^{A_3}`$ for the DGLAP region where $`z`$ is now just a dummy variable, insert this form into Eq. (2), and fit the coefficients to the data for the real part of the DVCS amplitude at fixed $`x`$ and $`Q^2`$. One can repeat this procedure for different values of $`Q^2`$ and then interpolate between the different coefficients to obtain a functional form of the coefficients in $`Q^2`$. Alternatively, after having extracted the values of the coefficients for different values of $`x`$ at the same $`Q^2`$, one can use an evolution program with the ansatz and the fitted coefficients as input and check whether one can reproduce the data for the real part at higher $`Q^2`$, thus checking the viability of the model ansatz.
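The least-squares step of this strategy might look as follows. The sketch fits the hypothetical three-parameter ansatz to mock data directly; in a real analysis the ansatz must first be convoluted with the HSC through Eq. (2) before comparing with the measured real part:

```python
import numpy as np
from scipy.optimize import curve_fit

def ansatz(z, A0, A1, A3):
    # hypothetical DGLAP-region form A0 * z**A1 * (1-z)**A3
    return A0 * z**A1 * (1.0 - z)**A3

z_data = np.linspace(0.05, 0.95, 20)
mock = ansatz(z_data, 2.0, -0.3, 3.0) * (1 + 0.02 * np.random.randn(20))
popt, pcov = curve_fit(ansatz, z_data, mock, p0=[1.0, -0.5, 2.0])
print("fitted (A0, A1, A3):", popt)
```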
To obtain an ansatz fulfilling the various constraints for SPD’s (see Ji’s and Radyushkin’s references in ), one should start from the double distributions (DD) (see Radyushkin’s references in ), which yield the skewed gluon distribution in the various regions:
$$g(y,x)=\theta (y-x)\int _0^{\frac{1-y}{1-x}}dz\,G(y-xz,z)+\theta (x-y)\int _0^{\frac{y}{x}}dz\,G(y-xz,z).$$
(12)
Since there are no anti-gluons, the above formula is enough to cover the whole region of interest $`-1+x\le y\le 1`$. What remains is to choose an appropriate model ansatz for $`G`$, for example,
$$G(z_1,z)=\frac{h(z_1,z)}{h(z_1)}f(z_1)$$
(13)
with $`f(z_1)`$ taken from a diagonal parametrization, with its coefficients now left as free parameters in the skewed case, and the normalization condition $`h(z_1)=\int _0^{1-z_1}dz\,h(z_1,z)`$ such that, in the diagonal limit, the DD just gives the diagonal distribution. The choice for $`h(z_1,z)`$ is a matter of taste but should be kept as simple as possible.
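A minimal numerical version of Eqs. (12)-(13), with an illustrative profile $`h`$ and a regular stand-in for the diagonal parametrization $`f(z_1)`$ (all choices hypothetical), could read:

```python
import numpy as np
from scipy.integrate import quad

def G(z1, z, b=1.0):
    """Toy double distribution of Eq. (13): profile h(z1,z) = (1-z1-z)**b,
    normalized over 0 <= z <= 1-z1, times a stand-in diagonal f(z1)."""
    f = (1.0 - z1)**3                           # stand-in, not a real gluon PDF
    h = (1.0 - z1 - z)**b
    norm = (1.0 - z1)**(b + 1) / (b + 1)        # int_0^{1-z1} h dz, analytic
    return f * h / norm

def g(y, x):
    """Skewed gluon distribution via Eq. (12), evaluated here for y > 0."""
    upper = (1.0 - y) / (1.0 - x) if y > x else y / x
    return quad(lambda z: G(y - x * z, z), 0.0, upper)[0]

print(g(0.3, 0.1), g(0.05, 0.1))
```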
## 4 Conclusions and outlook
We have shown that the extraction of skewed parton distributions from DVCS experiments is possible both in principle and in practice.
## Acknowledgments
This work was supported by the E. U. contract $`\mathrm{\#}`$FMRX-CT98-0194.
|
no-problem/9905/astro-ph9905231.html
|
ar5iv
|
text
|
## 1 Introduction
Open clusters are one of the main available laboratories in astronomy. Among other things, the stellar evolutionary age scale relies primarily on them, so it is extremely important to estimate accurately the ages of different associations. Until now, the dating of clusters was based either on the Upper Main Sequence (MS), using stars evolving off it or already well beyond the turn–off point, or on isochrone fitting. These techniques are model dependent and provide different ages for the same cluster. Very Low Mass (VLM) stars and Brown Dwarfs (BDs) offer an alternative: it is well known that the lithium abundance in stars is depleted with age (see the review by Balachandran 1994). This destruction also depends on mass, so dM stars deplete their lithium even before they arrive at the MS. However, the interior of VLM stars and BDs is not hot enough to destroy lithium at this stage. In the lower part of the cluster MS, the border between members having no lithium and those showing it in their atmosphere is called the lithium depletion boundary (LDB). As a cluster gets older, the LDB moves toward less massive objects. Therefore, the detection of lithium in the brightest (i.e., bluer, hotter and more massive than the others) BDs of a cluster puts a very strong limit on its age (D’Antona & Mazzitelli 1994; Martín & Montes 1997; Ventura et al. 1998). In this paper, we show how this technique works and apply it to several young open clusters.
## 2 Identification of candidate members: Optical and IR surveys
The initial step in identifying new very low mass (VLM) stars and BDs has been to obtain optical photometry of the area around each cluster. In order to cover a significant fraction of a particular cluster in a reasonable amount of time, we have in most cases used detectors with several CCDs. Our targets have been:
i) The Pleiades.- We have used the CFHT (with the MOSAIC camera) and the 48” telescope at MHO. We surveyed 2.5 and 1 sq. deg., respectively, discovering 17 and 6 BD candidates in each survey (Bouvier et al. 1998; Stauffer et al. 1998a).
ii) $`\alpha `$ Per.- As a preliminary survey, we collected RI photometry with the 48” telescope at MHO. We discovered several candidates, some of which were the targets of a spectroscopic campaign (next section).
iii) IC 2391.- The CTIO 4m telescope and the BTC camera allowed us to survey 2 sq. deg., detecting about 40 BD candidates (Barrado y Navascués et al. 1999a). See Fig. 1a.
iv) M35.- CFHT$`+`$MOSAIC: This is a very well populated open cluster. Our study included several thousand candidate stars and a few possible BDs (Barrado y Navascués et al. 1999b).
v) NGC 2516.- CTIO 4m telescope $`+`$ BTC. We covered 0.6 sq. deg.; the analysis is in progress.
Since most lists of cluster candidates contain spurious members, it is necessary to select good candidates by obtaining near-IR photometry. We have collected this information using the MHO 48” telescope and the NASA IRTF in the case of the Pleiades. JHK photometry of the area around IC 2391 was obtained by the 2MASS project. All this information allowed us to construct robust lists of possible members. Figure 1b displays a Color-Magnitude diagram of IC 2391 and a comparison with previous surveys. A 30 Myr isochrone (solid line) and a Zero Age MS (dashed line) are included.
## 3 Spectroscopy: membership and the new age
Since these objects are too faint to have been detected with photographic plates in the past, most of them do not have proper motion measurements. Therefore, the final confirmation of membership comes from spectroscopy. Intermediate-resolution spectra (R=2000–5000) are good enough to obtain rough radial velocities (hence establishing whether the target is a member). On the other hand, a detection of LiI 6708 Å reveals the BD nature of the candidate and also confirms membership (Rebolo 1991). Using the Keck II telescope and the LRIS spectrograph we have collected spectra of VLM and BD candidates in the Pleiades and $`\alpha `$ Per open clusters. Figure 2a shows the spectrum of PER 32, a BD candidate discovered in the $`\alpha `$ Per region with the 48” MHO. The intense H$`\alpha `$ emission line and its radial velocity indicate that it is a real member of the cluster. However, we have not detected lithium, which indicates that this is a VLM star close to the LDB.
Figure 2b illustrates how ages are estimated. Following Stauffer et al. (1998b), the faintest member without lithium and the brightest member with it bracket the magnitude of the LDB. We used the latest evolutionary models of Chabrier & Baraffe (1998) to convert these magnitudes, different for the Pleiades and $`\alpha `$ Per, into their corresponding ages. If the sampling is well done, the accuracy is better than 10%.
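Schematically, the conversion is an interpolation in a model table of LDB magnitude versus age. The numbers below are placeholders and not the Chabrier & Baraffe (1998) values, which are what a real analysis must use:

```python
import numpy as np

# placeholder model grid: absolute I magnitude of the LDB versus age
age_myr = np.array([30., 50., 70., 90., 120., 150.])
M_I_ldb = np.array([9.8, 10.6, 11.2, 11.7, 12.3, 12.8])

def ldb_age(m_faint_no_li, m_bright_li, dist_mod):
    """Bracket the cluster age from the apparent magnitudes of the faintest
    member without lithium and the brightest member showing lithium."""
    lo = np.interp(m_faint_no_li - dist_mod, M_I_ldb, age_myr)
    hi = np.interp(m_bright_li - dist_mod, M_I_ldb, age_myr)
    return lo, hi

print(ldb_age(17.8, 18.3, 5.6))   # illustrative Pleiades-like inputs
```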
Figure 3 shows the old standard ages for the 5 clusters included in our study, together with the new values for the Pleiades and $`\alpha `$ Per, which are significantly older than the values previously accepted. Our aim is to improve these values and to obtain new ones for the other clusters in the near future, in order to verify this interesting possibility. This would have a very important effect on several fields, including the time scales of the properties of low mass stars (lithium abundance, rotation, stellar activity), pre-main sequence stars, and internal structure and evolutionary models.
|
no-problem/9905/cond-mat9905384.html
|
ar5iv
|
text
|
# Dynamics of the diluted Ising antiferromagnet Fe0.31Zn0.69F2
## I Introduction
The diluted Ising antiferromagnet Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub> in an external magnetic field has proven to be a good model system for the random-exchange Ising model (REIM) ($`H`$ = 0) and the random-field Ising model (RFIM) ($`H>`$ 0) . In the original derivation of the equivalence between the RFIM and a diluted Ising antiferromagnet in a uniform applied field, weak dilution and small values of $`H/J`$ were assumed ($`J`$ is the magnitude of the exchange interaction). In the Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub> system, weak dilution implies Fe concentrations well above the percolation threshold $`x_p`$=0.25, and the most convincing experimental results on the RFIM critical behaviour have been obtained on samples with $`x=0.46`$. On the other hand, interesting dynamic properties may become observable in the limit of strong dilution. RFIM systems have been argued to attain extremely long relaxation times at temperatures near $`T_c`$ , and for large values of $`H/J`$, where the ordered phase is destroyed, it has been argued that a glassy phase will appear even without exchange frustration being present in the system . Experimental results on Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub> samples with concentration at or slightly above $`x_p`$ have revealed some dynamic properties similar to those of conventional spin glasses . In a recent paper it was found that the percolation-threshold sample Fe<sub>0.25</sub>Zn<sub>0.75</sub>F<sub>2</sub> exhibits magnetic ageing, a typical spin glass feature, whereas the slowing down of the dynamics followed an Arrhenius law, i.e. it did not support the existence of a finite temperature spin glass phase transition.
Results using neutron scattering and the Faraday rotation technique have established random-field induced spin-glass-like dynamic behaviour in Fe<sub>0.31</sub>Zn<sub>0.69</sub>F<sub>2</sub>. Recent magnetization experiments revealed that a similar behaviour occurs at high applied fields in samples of Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub> with $`x`$ = 0.56 and 0.60 . In this paper we discuss experimental results from dc-magnetisation and ac-susceptibility measurements on the same Fe<sub>0.31</sub>Zn<sub>0.69</sub>F<sub>2</sub> system as in the earlier neutron and Faraday rotation measurements. In zero applied field, a slowing down of the dynamics occurs at low temperatures that obeys a pure Arrhenius law, and some slowing down is also observable near the antiferromagnetic transition temperature. In applied dc-fields, additional slow dynamical processes are introduced near $`T_N`$ by the random fields. A comprehensive static and dynamic phase diagram in the $`H-T`$ plane is deduced which, in parts, compares adequately with an earlier published phase diagram for the same compound .
## II Experimental
A high quality single crystal of Fe<sub>0.31</sub>Zn<sub>0.69</sub>F<sub>2</sub> in the form of a parallelepiped with its longest axis aligned with the crystalline $`c`$-axis was used as a sample. The frequency dependence of the ac-susceptibility in zero applied dc-field was studied in a Cryogenic Ltd. S600X SQUID-magnetometer. A commercial Lake Shore 7225 ac-susceptometer was employed for the ac-susceptibility measurements in a superposed dc magnetic field and the temperature dependence of the magnetisation in different applied dc-fields was measured in a Quantum Design MPMS5 SQUID-magnetometer. The magnetic field was in all experiments applied parallel to the $`c`$-axis of the sample.
## III Results and Discussion
Fig. 1 shows the temperature dependence of both components of the ac-susceptibility, (a) $`\chi ^{}`$($`\omega ,T`$) and (b) $`\chi ^{\prime \prime }`$($`\omega ,T`$). The frequencies range from 0.051–51 Hz as indicated in the figures. The transition from a paramagnetic Curie-Weiss behaviour at high temperatures to long range antiferromagnetic order is signaled by the cusp in $`\chi ^{}`$($`\omega ,T`$) at about 20 K. A small bump in $`\chi ^{\prime \prime }`$($`\omega ,T`$) is observed at about the same temperature. Below 15 K the ac-susceptibility becomes frequency dependent. The out-of-phase component increases, and a frequency dependent maximum that shifts towards lower temperatures with decreasing frequency is observed below $`T\approx 5`$ K. The frequency dependence of $`\chi ^{}`$($`\omega ,T`$) and $`\chi ^{\prime \prime }`$($`\omega ,T`$) at low temperatures shows some resemblance to the behaviour of an ordinary spin glass. However, earlier neutron scattering measurements indicated that AF LRO is established below $`T_N\approx 19.8`$ K in this system , provided the sample is subjected to a slow cooling process. To investigate the nature of the slowing down of the dynamics at low temperatures, a comparison is made with the behavior observed in ordinary spin glasses. A 3d spin glass exhibits conventional critical slowing down of the dynamics according to:
$$\frac{\tau }{\tau _0}=\left(\frac{T_f-T_g}{T_g}\right)^{-z\nu },$$
(1)
where $`\tau _0`$ is the microscopic spin flip time, of the order $`10^{-13}`$-$`10^{-14}`$ s, $`T_g`$ the spin glass temperature and $`z\nu `$ a dynamical critical exponent. Defining the inflection point in $`\chi ^{\prime \prime }`$($`\omega ,T`$) as a measure of the freezing temperature $`T_f`$ for a relaxation time ($`\tau `$) corresponding to the observation time, $`t\approx 1/\omega `$, of the ac-susceptibility measurement, the derived data may be employed for dynamic scaling analyses. The data do not fit conventional critical slowing down according to eq. 1 with physically plausible values of the parameters. Activated dynamics could also govern the dynamics while still yielding a finite phase transition temperature. The slowing down of the relaxation times should then obey:
$$ln\left(\frac{\tau }{\tau _0}\right)=\frac{1}{T_f}\left(\frac{T_f-T_g}{T_g}\right)^{-\psi \nu },$$
(2)
where $`\psi \nu `$ is a critical exponent . The derived data fit eq. 2 only with $`T_g\to 0`$, which implies that the slowing down is instead described by a generalized Arrhenius law:
$$log\left(\frac{\tau }{\tau _0}\right)\propto T_f^{-x}.$$
(3)
Fig. 2 shows the best fit to this expression, yielding x=1 and $`\tau _0`$=10<sup>-14</sup> s for 0.051 $`\le \omega /2\pi `$ (Hz) $`\le `$ 1000.
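The fit itself is a straight line of $`log\tau `$ versus $`1/T_f`$; a minimal sketch with stand-in freezing temperatures (not the measured points of Fig. 2) is:

```python
import numpy as np

freq = np.array([0.051, 0.51, 5.1, 51., 510., 1000.])   # Hz (stand-ins)
T_f  = np.array([3.2, 3.5, 3.9, 4.4, 5.0, 5.2])         # K  (stand-ins)

tau = 1.0 / (2 * np.pi * freq)            # observation time t ~ 1/omega
# pure Arrhenius (x = 1): log10(tau/tau0) is linear in 1/T_f; the intercept
# at 1/T_f -> 0 gives the attempt time tau0
slope, intercept = np.polyfit(1.0 / T_f, np.log10(tau), 1)
print("tau0 ~ 10^%.1f s, slope %.1f K" % (intercept, slope))
```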
The observed frequency dependent ac-susceptibility shows striking similarities with the behaviour of alleged reentrant antiferromagnets. In such a system there is a transition from a paramagnetic phase to an antiferromagnetic phase, and spin glass behaviour is observed at low temperatures. The reentrant Ising antiferromagnet Fe<sub>0.35</sub>Mn<sub>0.65</sub>TiO<sub>3</sub> displays features similar to those of this system, e.g. the low temperature slowing down of the dynamics is found to obey a pure Arrhenius behaviour .
Furthermore, the more diluted system Fe<sub>0.25</sub>Zn<sub>0.75</sub>F<sub>2</sub> (on the percolation threshold) does not display long range antiferromagnetic order but it exhibits a slowing down of the relaxation times that follows a pure Arrhenius law with a similar value of $`\tau _0`$ as here derived for Fe<sub>0.31</sub>Zn<sub>0.69</sub>F<sub>2</sub>.
In Fig. 3 (a) $`\chi ^{}`$($`\omega ,T,H`$) and (b) $`\chi ^{\prime \prime }`$($`\omega ,T,H`$) are plotted for $`\omega /2\pi `$=125 Hz in different superposed dc magnetic fields $`H\le 2`$ T. At these rather low fields, the maximum in $`\chi ^{}`$($`\omega ,T,H`$) near $`T_N`$($`H`$) gets rounded and is pushed towards lower temperature with increasing magnitude of the magnetic field. The corresponding bump in the out-of-phase component in zero dc-field increases in magnitude and sharpens for increasing dc-fields up to 1 T (the inset of Fig. 3 (b) displays fields up to 0.625 T). A measure of the phase transition temperature $`T_N`$($`H`$) is given by the position of the maximum in the derivative d($`\chi ^{}`$$`T`$)/d$`T`$ . For fields $`H\le 1.5`$ T, $`T_N`$($`H`$) is pushed to lower temperatures with increasing field strength following a REIM to RFIM crossover scaling, as described in ref. 13. At higher fields the maximum is washed out, which signals that the antiferromagnetic phase is destroyed. The destruction of the antiferromagnetic phase by strong random fields in Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub> was observed in earlier Faraday rotation and neutron scattering measurements on the same system ($`x`$ = 0.31), and in recent magnetization and dynamic susceptibility studies of less diluted samples ($`x`$ = 0.42, 0.56 and 0.60). A glassy dynamics is found in the upper portion of the $`H-T`$ phase diagram of Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub>, at least within the interval $`0.31\le x\le 0.60`$.
In increasing applied dc-fields the out-of-phase component is enhanced in a rather narrow but widening region near the antiferromagnetic phase transition, due to the introduction of random fields that create new slow dynamical processes in the system. The increase of $`\chi ^{\prime \prime }`$($`\omega ,T,H`$) at lower temperatures, corresponding to the processes causing the slowing down of the dynamics already in zero field, remains observable also when the field is increased. This latter feature cannot be entirely attributed to random fields. For larger fields these low temperature processes and the processes caused by the random fields start to overlap, and at the highest dc-fields they even become indistinguishable. In Fig. 4 both components of the ac susceptibility are plotted, in an applied dc-field $`H`$=1.5 T, for $`\omega /2\pi `$=15, 125 and 1000 Hz. Note that the temperature of the maximum in $`\chi ^{}`$($`\omega ,T,H`$), at $`T_N(H)`$, shifts to lower temperatures as the frequency decreases. By contrast, no shift in the peak temperature is observable as a function of frequency in dynamic susceptibility measurements performed on Fe<sub>0.46</sub>Zn<sub>0.54</sub>F<sub>2</sub> and Fe<sub>0.42</sub>Zn<sub>0.58</sub>F<sub>2</sub> , within the field limits of the weak RFIM problem in each case. The frequency dependent behaviour of $`T_N(H)`$ is a feature associated with the effects of strong random fields in samples of Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub>, particularly with $`x`$ close to $`x_p`$.
In Fig. 5 (a) and (b) $`\chi ^{}`$($`\omega ,T,H`$) and $`\chi ^{\prime \prime }`$($`\omega ,T,H`$) are plotted for $`\omega /2\pi `$=125 Hz in different superposed dc magnetic fields $`H\ge 2`$ T. The maximum in the in-phase component is flattened, the susceptibility is strongly suppressed, and the onset of the out-of-phase susceptibility is shifted towards lower temperatures as the dc-field is increased. No sign of a transition to an antiferromagnetic phase is observed.
Fig. 6 shows the temperature dependence of the field cooled (FC), $`M_{FC}`$($`T`$)/$`H`$, and zero field cooled (ZFC), $`M_{ZFC}`$($`T`$)/$`H`$, susceptibility at three different applied magnetic fields. Below a temperature $`T_{ir}`$ the magnetisation becomes irreversible. $`T_{ir}`$ decreases with increasing field. The irreversibility point is associated with an observation time mainly governed by the heating rate of the ZFC experiment, which in our experiment corresponds to about 100 s.
In Fig. 7 an $`H-T`$ magnetic phase diagram is shown, in which some of the experimental characteristics discussed above are summarised. The open circles represent $`T_N`$($`H`$), the solid circles $`T_{ir}`$($`H`$), the diamonds the spin freezing temperatures $`T_f`$($`H`$) for $`\omega /2\pi `$=125 Hz, and the open triangles label $`T_f`$($`H`$=0) for different frequencies. The onsets of $`\chi ^{\prime \prime }`$($`\omega ,T,H`$) at frequencies $`\omega /2\pi `$=15, 125 and 1000 Hz are shown as solid triangles, solid squares and open squares, respectively. These are measures that mirror the observation time dependence of $`T_{ir}`$.
In diluted Ising antiferromagnets, $`T_N`$ is expected to decrease with increasing magnetic fields as:
$$ϵ\propto H^{2/\varphi }\text{;}ϵ=-\left(\frac{T_N(H)-T_N(0)+bH^2}{T_N(0)}\right)$$
(4)
where $`\varphi `$ is a crossover exponent and $`bH^2`$ a small mean field correction. For low fields, $`H\le 1.5`$ T, we find $`\varphi \approx 1.4`$ using $`b`$=0 for $`T_N`$($`H`$), as indicated by the solid line in Fig. 7. For higher fields, $`H\ge 1.5`$ T, a reversal of the curvature of $`T_{ir}`$($`H`$) occurs. The dashed line corresponds to a functional behaviour according to eq. 4 with an exponent $`\varphi \approx 3.4`$. A largely equivalent phase diagram has earlier been established for the same system utilising Faraday rotation measurements . One significant difference is that $`T_{ir}(0)\approx T_N(0)`$ in ref. , whereas we find a significant difference between these two temperatures, as is also observed in other dilute antiferromagnets . The field dependence of $`T_N`$($`H`$) is equivalent to that of the more concentrated Fe<sub>0.46</sub>Zn<sub>0.54</sub>F<sub>2</sub> and Fe<sub>0.72</sub>Zn<sub>0.28</sub>F<sub>2</sub>, where the scaling behaviour of eq. 4 gives $`\varphi \approx 1.4`$ for fields up to 2 T and 10 T, respectively . The new features of the phase diagram in Fig. 7, as compared to the one of ref. , are the observation time dependent spin freezing temperatures at low temperature and the observation time dependence of $`T_{ir}`$($`H`$), demonstrated by the shifts of the $`T_{ir}`$($`H`$) contours towards higher temperatures when decreasing the observation time. A possible mechanism for the spin freezing at low temperatures may be a weak frustration present in the third-nearest-neighbour interaction of this compound. Results of numerical simulations indicate that a small frustrated bond plays no role in the REIM properties of Fe<sub>x</sub>Zn<sub>1-x</sub>F<sub>2</sub> under weak dilution. However, it dramatically influences the antiferromagnetic and spin glass order parameters close to the percolation threshold.
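A sketch of the crossover-scaling fit of Eq. (4), with illustrative stand-ins for the $`T_N`$($`H`$) points of Fig. 7 and $`b`$=0 as in the text, is:

```python
import numpy as np
from scipy.optimize import curve_fit

H  = np.array([0.25, 0.5, 0.75, 1.0, 1.25, 1.5])              # T (stand-ins)
TN = np.array([19.65, 19.39, 19.07, 18.69, 18.27, 17.82])     # K (stand-ins)
TN0, b = 19.8, 0.0

def eps_model(H, c, phi):
    # Eq. (4): eps = -(T_N(H) - T_N(0) + b*H^2)/T_N(0) = c * H**(2/phi)
    return c * H ** (2.0 / phi)

eps = -(TN - TN0 + b * H**2) / TN0
popt, _ = curve_fit(eps_model, H, eps, p0=[0.05, 1.4])
print("crossover exponent phi = %.2f" % popt[1])
```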
## IV Conclusions
Dynamic and static magnetic properties of the diluted antiferromagnet Fe<sub>0.31</sub>Zn<sub>0.69</sub>F<sub>2</sub> have been studied. The dynamic susceptibility in zero dc-field shows similarities to a reentrant Ising antiferromagnet with a slowing down of the dynamics at low temperatures best described by a pure Arrhenius law. Hence, there is no transition to a spin glass phase at low temperatures.
The field dependence of the antiferromagnetic transition temperature follows the predicted scaling behaviour for a random field system, in accord with earlier experimental findings. The onset of $`\chi ^{\prime \prime }(\omega ,T,H)`$ occurs above the antiferromagnetic phase transition, even in zero applied magnetic field. $`\chi ^{\prime \prime }(\omega ,T,H)`$ shows a frequency dependent behaviour that mirrors the observation time dependence of the FC-ZFC irreversibility line. The dynamics of the diluted antiferromagnet Fe<sub>0.31</sub>Zn<sub>0.69</sub>F<sub>2</sub> has been shown to involve not only random field induced slow dynamics near $`T_N`$($`H`$), but also additional slow dynamics, originating from the strong dilution, which appears at low temperatures.
## V Acknowledgments
Financial support from the Swedish Natural Science Research Council (NFR) is acknowledged. One of the authors (FCM) acknowledges the support from CNPq and FINEP (Brazilian agencies).
|
no-problem/9905/nucl-th9905006.html
|
ar5iv
|
text
|
# Prompt muon-induced fission: a probe for nuclear energy dissipation
## 1. Introduction
There are two different mechanisms that contribute to nuclear energy dissipation, i.e. the irreversible transfer of energy from collective into intrinsic single-particle motion: two-body collisions and “one-body friction”. The latter is caused by the moving walls of the self-consistent nuclear mean field. The role played by these two dissipation mechanisms in fission and heavy-ion reactions is not yet completely understood. In a pioneering article that appeared in 1976, Davies, Sierk and Nix calculated the effect of viscosity on the dynamics of fission. Assuming that friction is caused by two-body collisions, they extracted a viscosity coefficient $`\mu =0.015`$ Tera Poise from a comparison of theoretical and experimental values for the kinetic energies of fission fragments. The corresponding time delay for the nuclear motion from the saddle to the scission point was found to be of order $`\mathrm{\Delta }t=1\times 10^{-21}`$ s. However, in one-body dissipation models the time delay is an order of magnitude larger. Several experimental techniques are sensitive to the energy dissipation in nuclear fission. At high excitation energy, the multiplicity of pre-scission neutrons or photons depends on the dissipation strength. At low excitation energy, the process of prompt muon-induced fission provides a suitable “clock”. This process will be discussed here.
After muons have been captured into high-lying single particle states they form an excited muonic atom. Inner shell transitions may proceed without photon emission by inverse internal conversion, i.e. the muonic excitation energy is transferred to the nucleus. In actinides, the $`2p\to 1s`$ and the $`3d\to 1s`$ muonic transitions result in excitation of the nuclear giant dipole and giant quadrupole resonance, respectively, which act as doorway states for fission. The nuclear excitation energy is typically between 6.5 and 10 MeV. Most importantly, the muon is still available following these atomic transitions (in the ground state of the muonic atom) and can be utilized to probe the fission dynamics. Eventually, though, the muon will disappear as a result of the weak interaction (nuclear capture by one of the fission fragments). However, nuclear capture occurs on a time scale of order $`10^{-7}`$ s, which is many orders of magnitude longer than the time scale of fission.
The prompt muon-induced fission process is most easily understood via a “correlation diagram”, i.e. one plots the single-particle energies of the transient muonic molecule as a function of the internuclear distance . If there is a large amount of friction during the motion from the outer fission barrier to the scission point the muon will remain in the lowest molecular energy level $`1s\sigma `$ and emerge in the $`1s`$ bound state of the heavy fission fragment. If, on the other hand, friction is small and hence the nuclear collective motion is relatively fast there is a nonvanishing probability that the muon may be transferred to higher-lying molecular orbitals, e.g. the $`2p\sigma `$ level, from where it will end up attached to the light fission fragment. Therefore, theoretical studies of the muon-attachment probability to the light fission fragment, $`P_L`$, in combination with experimental data can be utilized to analyze the dynamics of fission, and nuclear energy dissipation in particular.
## 2. Theoretical Developments
Because the nuclear excitation energy in muon-induced fission exceeds the fission barrier height it is justified to treat the fission dynamics classically (no barrier tunneling). For simplicity, we describe the fission path by one collective coordinate $`R`$; the classical collective nuclear energy has the form
$$E_{\mathrm{nuc}}=\frac{1}{2}B(R)\dot{R}^2+V_{\mathrm{fis}}(R)+E_\mu (R).$$
(1)
We utilize a coordinate dependent mass parameter and an empirical double-humped fission potential $`V_{\mathrm{fis}}(R)`$ which is smoothly joined with the Coulomb potential of the fission fragments at large $`R`$. The last term in Eq. (1) denotes the instantaneous muonic binding energy which depends on the fission coordinate; this term will be defined later.
To account for the nuclear energy dissipation between the outer fission barrier and the scission point, we introduce a friction force which depends linearly on the velocity. In this case, the dissipation function $`D`$ is a simple quadratic form in the velocity
$$\dot{E}_{\mathrm{nuc}}(t)=-2D=-f\dot{R}^2(t).$$
(2)
The adjustable friction parameter $`f`$ determines the dissipated energy; it is the only unknown quantity in the theory.
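For orientation, the saddle-to-scission dynamics defined by Eqs. (1)-(2) can be integrated directly. The sketch below uses a constant mass parameter and a toy downhill potential; all numbers are illustrative, not the empirical <sup>237</sup>Np inputs:

```python
import numpy as np
from scipy.integrate import solve_ivp

B = 50.0                                   # collective mass parameter (toy)
f = 5.0                                    # friction coefficient (toy)
dVdR = lambda R: -(R - 1.0)                # V(R) = -0.5*(R-1)^2 beyond saddle

def rhs(t, y):
    R, Rdot = y
    # B*Rddot = -dV/dR - f*Rdot  (Eq. (1) with friction, constant B)
    return [Rdot, (-dVdR(R) - f * Rdot) / B]

sol = solve_ivp(rhs, [0.0, 40.0], [1.0, 0.05])
R, Rdot = sol.y
E_diss = f * np.trapz(Rdot**2, sol.t)      # integral of f*Rdot^2 dt, cf. Eq. (2)
print("dissipated energy (toy units):", E_diss)
```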
For the dynamical description of the muonic wavefunction during prompt fission, the electromagnetic coupling between muon and nucleus $`(e\gamma _\mu A^\mu )`$ is dominant; the weak interaction is negligible. Because of the nonrelativistic motion of the fission fragments the electromagnetic interaction is dominated by the Coulomb interaction
$$A^0(𝐫,t)=\int d^3r^{\prime }\frac{\rho _{\mathrm{nuc}}(𝐫^{\prime },t)}{|𝐫-𝐫^{\prime }|}.$$
(3)
The muonic binding energy in the ground state of an actinide muonic atom amounts to 12 percent of the muonic rest mass; hence nonrelativistic calculations, while qualitatively correct, are limited in accuracy. Several theory groups have demonstrated the feasibility of such calculations which are based on the time-dependent Schrödinger equation
$$\left[-\frac{\hbar ^2}{2m}\nabla ^2-eA^0(𝐫,t)\right]\psi (𝐫,t)=i\hbar \frac{\partial }{\partial t}\psi (𝐫,t).$$
(4)
Recently, we have developed a numerical algorithm to solve the relativistic problem on a three-dimensional Cartesian mesh . The time-dependent Dirac equation for the muonic spinor wave function in the Coulomb field of the fissioning nucleus has the form
$$H_\mathrm{D}(t)\psi (𝐫,t)=i\hbar \frac{\partial }{\partial t}\psi (𝐫,t),$$
(5)
where the Dirac Hamiltonian is given by
$$H_\mathrm{D}(t)=-i\hbar c\,\alpha \cdot \nabla +\beta mc^2-eA^0(𝐫,t).$$
(6)
Our main task is the solution of the Dirac equation for the muon in the presence of a time-dependent external Coulomb field $`A^0(𝐫,t)`$ which is generated by the fission fragments in motion. Note the coupling between the fission dynamics, Eq. (1), and the muon dynamics, Eq. (5), via the instantaneous muonic binding energy
$$E_\mu (R(t))=\langle \psi (𝐫,t)|H_\mathrm{D}(t)|\psi (𝐫,t)\rangle $$
(7)
which depends on the fission coordinate; the presence of this term increases the effective fission barrier height.
## 3. Lattice Representation: Basis-Spline Expansion
For the numerical solution of the time-dependent Dirac equation (5) it is convenient to introduce dimensionless space and time coordinates
$$𝐱=𝐫/\overline{\lambda }_c,\overline{\lambda }_c=\hbar /(m_\mu c)=1.87\mathrm{fm}$$
$$\tau =t/\tau _c,\tau _c=\overline{\lambda }_c/c=6.23\times 10^{-24}\mathrm{s}$$
(8)
where $`\mathrm{¯}\lambda _c`$ denotes the reduced Compton wavelength of the muon and $`\tau _c`$ the reduced Compton time. For the lattice representation of the Dirac Hamiltonian and spinor wave functions we introduce a 3-dimensional rectangular box with a uniform lattice spacing $`\mathrm{\Delta }x`$. The lattice points are labeled $`(x_\alpha ,y_\beta ,z_\gamma )`$.
Our numerical algorithm is the Basis-Spline collocation method . Basis-Spline functions $`B_i^M(x)`$ are piecewise-continuous polynomials of order $`(M-1)`$. These may be thought of as generalizations of the well-known “finite elements”, which are B-Splines with $`M=2`$. To illustrate the method let us consider a wave function which depends on one space coordinate $`x`$; we represent the wave function on a finite spatial interval as a linear superposition of B-Spline functions
$$\psi (x_\alpha )=\sum _{i=1}^{N}B_i^M(x_\alpha )c^i.$$
(9)
In the Basis-Spline collocation method, local operators such as the EM potential $`A^0`$ in Eq. (6) become diagonal matrices of their values at the grid points (collocation points), i.e. $`V(x)V_\alpha =V(x_\alpha )`$. The matrix representation of derivative operators is more involved . For example, the first-derivative operator of the Dirac equation has the following matrix representation on the lattice
$$D_\alpha ^\beta =\sum _{i=1}^{N}B_{\alpha i}^{\prime }B^{i\beta },$$
(10)
where $`B_{\alpha i}^{\prime }=[dB_i^M(x)/dx]|_{x=x_\alpha }`$. Furthermore, we use the shorthand notation $`B_{\beta i}=B_i^M(x_\beta )`$ for the B-spline function evaluated at the collocation point $`x_\beta `$, and the inverse of this matrix is denoted by $`B^{i\beta }=[B^1]_{\beta i}`$. Because of the presence of this inverse, the operator $`D_\alpha ^\beta `$ will have a nonsparse matrix representation. In the present calculations we employ B-Splines of order $`M=5`$. Eq. (9) can readily be generalized to three space dimensions; in this case the four Dirac spinor components $`\psi ^{(p)},p=(1,\mathrm{},4)`$ are expanded in terms of a product of Basis-Spline functions
$$\psi ^{(p)}(x_\alpha ,y_\beta ,z_\gamma ,t)=\sum _{i,j,k}B_i^M(x_\alpha )B_j^M(y_\beta )B_k^M(z_\gamma )c_{(p)}^{ijk}(t),$$
(11)
i.e. the lattice representation of the spinor wave function is a vector with $`N=4\times N_x\times N_y\times N_z`$ complex components. Hence, it is impossible to store $`H_\mathrm{D}`$ in memory because this would require the storage of $`N^2`$ complex double-precision numbers. We must therefore resort to iterative methods for the solution of the matrix equation which do not require the storage of $`H_\mathrm{D}`$.
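A minimal one-dimensional construction of the collocation matrices of Eq. (10), using scipy's BSpline with fixed, non-periodic boundary knots (the production code uses a periodic lattice), might read:

```python
import numpy as np
from scipy.interpolate import BSpline

def derivative_matrix(knots, M, x_col):
    """Collocation matrix D = B' B^{-1} of Eq. (10), with B[beta, i] = B_i(x_beta)
    and B'[alpha, i] = dB_i/dx at x_alpha; M is the B-spline order (degree M-1).
    The collocation points must satisfy the Schoenberg-Whitney conditions."""
    n = len(knots) - M                      # number of basis functions
    B, Bp = np.empty((len(x_col), n)), np.empty((len(x_col), n))
    for i in range(n):
        c = np.zeros(n); c[i] = 1.0
        spl = BSpline(knots, c, M - 1)
        B[:, i] = spl(x_col)
        Bp[:, i] = spl.derivative()(x_col)
    return Bp @ np.linalg.inv(B)

knots = np.concatenate([[0.]*3, np.linspace(0., 1., 8), [1.]*3])  # clamped cubic
x_col = np.linspace(0.02, 0.98, len(knots) - 4)
D = derivative_matrix(knots, 4, x_col)     # nonsparse, as noted in the text
```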
We solve the time-dependent Dirac equation in two steps: first, we solve the static Coulomb problem at time $`t=0`$, i.e. the muon bound to an actinide nucleus. This problem is solved by the damped relaxation method . The second part of our numerical procedure is the solution of the time-dependent Dirac equation (5) by a Taylor expansion of the propagator for an infinitesimal time step $`\mathrm{\Delta }t`$. Details may be found in ref. .
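The essential point of the Taylor propagation is that only the action of $`H_\mathrm{D}`$ on a spinor is ever needed, never the matrix itself. A schematic single-step routine (in units with $`\hbar =1`$; apply_H stands for the lattice Hamiltonian application and is a placeholder) is:

```python
import numpy as np

def taylor_step(apply_H, psi, dt, order=4):
    """psi(t+dt) ~= sum_{k=0}^{order} (-1j*dt)^k / k! * H^k psi, built
    recursively so that H is only ever applied to a vector, never stored."""
    term = psi.copy()
    out = psi.copy()
    for k in range(1, order + 1):
        term = (-1j * dt / k) * apply_H(term)
        out = out + term
    return out
```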
## 4. Discussion of Numerical Results
In the following we present results for prompt fission of $`{}_{93}^{237}`$Np induced by the $`3d\to 1s`$ muonic transition $`(9.5\mathrm{MeV})`$. All results reported here are for a 3-D Cartesian lattice of size $`L_x=L_y=67`$ fm and $`L_z=146`$ fm with $`N_x\times N_y\times N_z=25\times 25\times 53`$ lattice points and a uniform lattice spacing $`\mathrm{\Delta }x=1.5\overline{\lambda }_c=2.8`$ fm. Depending on the value of the friction coefficient, we utilize between $`1,200-1,900`$ time steps with a step size $`\mathrm{\Delta }t=1.5\tau _c=9.3\times 10^{-24}`$ s. Typical production runs take about 11 hours of CPU time on a CRAY supercomputer or about 54 hours on an IBM RS/6000 workstation.
Fig. 1 shows the time-development of the muon position probability density during fission at a fragment mass asymmetry $`\xi =A_H/A_L=1.10`$. As expected, the muon sticks predominantly to the heavy fragment, but for this small mass asymmetry the muon attachment probability to the light fission fragment, $`P_L`$, is rather large (20 percent).
One might ask whether the muon will always remain bound during fission; what is the probability for ionization? To investigate this question we have plotted the muon position probability density on a logarithmic scale.
In coordinate space, any appreciable muon ionization would show up as a “probability cloud” that is separating from the fission fragments and moving towards the boundaries of the lattice. Fig. 2 shows no evidence for such an event in our numerical calculations. Hence, we conclude that the probability for muon ionization $`P_{\mathrm{ion}}`$ is substantially smaller than the muon attachment probability to the light fission fragment which is always clearly visible in our logarithmic plots, even at large mass asymmetry. From this we estimate that $`P_{\mathrm{ion}}<10^{-4}`$.
Fig. 3 shows that $`P_L`$ depends strongly on the fission fragment mass asymmetry. This is easily understood: for equal fragments we obviously obtain $`P_L=0.5`$, and for large mass asymmetry it is energetically favorable for the muon to be bound to the heavy fragment, hence $`P_L`$ will be small. In Fig. 3 we also examine the dependence of $`P_L`$ on the dissipated nuclear energy, $`E_{\mathrm{diss}}`$, during fission. In our model, friction takes place between the outer fission barrier and the scission point. When the dissipated energy is computed from equation (2) we find an almost linear dependence of the muon attachment probability on $`E_{\mathrm{diss}}`$; unfortunately, this dependence is rather weak.
We would like to point out that the theoretical values for $`P_L`$ obtained in this work are smaller than those reported in our earlier calculations . There are two reasons for this: (a) the size of the lattice and (b) the lattice representation of the first derivative operator in the Dirac equation. Because of constraints in the amount of computer time available to us we utilized a smaller cubic lattice in our prior calculations with $`N_x\times N_y\times N_z=29^3`$ lattice points. More recently, we were able to increase the size of the lattice substantially, in particular in fission ($`z`$-) direction (see above). In Fig. 2 of ref. we have demonstrated the convergence of our results for the muon attachment probability in terms of the lattice size and lattice spacing. Another reason for the difference between the current and prior results is the lattice representation of the first derivative operator, Eq. (10), in the Dirac equation. In ref. we utilized a combination of forward and backward derivatives for the upper and lower spinor wave function components; after extensive testing of Coulomb potential model problems with known analytical solutions we have found that the symmetric derivative operator provides a more faithful lattice representation. The results reported here and in ref. have been obtained utilizing the symmetric derivative prescription.
## 5. Comparison of Theory with Experiment
There are only a few experimental data available for comparison. Schröder et al. measured for the first time mean lifetimes of muons bound to fission fragments of several actinide nuclei. The muon decays from the K-shell of the muonic atom through various weak interaction processes at a characteristic rate $`\lambda =\lambda _0+\lambda _c`$, where $`\lambda _0=(2.2\times 10^{-6}\mathrm{s})^{-1}`$ is the free leptonic decay rate for the decay process $`\mu ^{}\to e^{}+\overline{\nu _e}+\nu _\mu `$ and $`\lambda _c`$ denotes the nuclear capture rate; $`\lambda _c`$ depends upon the charge and mass of the fission fragment. From the observed lifetime $`\tau _\mu =1.30\times 10^{-7}`$ s Schröder et al. estimated an upper limit for the muon attachment probability $`P_L\le 0.1`$. It must be emphasized that this number represents an integral over the whole fission mass distribution and, hence, cannot be directly compared to the numbers given in Fig. 3.
The most complete experiments have been carried out by Risse et al. at the Paul Scherrer Institute (PSI) in Switzerland. The basic experimental approach is to place a fission chamber inside an electron spectrometer. The incident muons are detected by a scintillation counter. An event is defined by a $`(\mu ^{},f_1f_2e^{})`$ coincidence where the fission fragments are observed in prompt and the muon decay electrons in delayed coincidence with respect to the incident muon. The magnetic field of the electron spectrometer allows for a reconstruction of the electron trajectories. Thus, it is possible to determine whether the muon decay electrons originate from the heavy or the light fission fragment.
For several mass bins of the light fission fragment, muon attachment probabilities $`P_L`$ have been measured; the experimental data are given in Table 1. It should be emphasized that the mass bins are relatively broad. Because the theoretical values for $`P_L`$ depend strongly on the mass asymmetry, it is not justified to assume that $`P_L`$ remains constant within each experimental mass bin. Instead, to allow for a comparison between theory and experiment, we have to multiply the theoretical $`P_L`$ values in Fig. 3 with a weighting factor that accounts for the measured relative mass distribution of the prompt fission events within this mass bin. We subsequently integrate the results over the sizes of the experimental mass bins. Due to the relatively low excitation energy in muon-induced fission, the fission mass distribution exhibits a maximum at $`\xi =A_H/A_L=1.4`$ and falls off rather steeply for values larger or smaller than the maximum. This means that the large values of $`P_L\approx 0.5`$ at or near fission fragment symmetry $`\xi =1.0`$ will be strongly suppressed. The resulting theoretical values for $`P_L`$ are given in the last column of Table 1. It is apparent that our theory agrees rather well with experiment. Because of the size of the error bars in the experiment and because of the weak dependence of the theoretical values of $`P_L`$ on the dissipated energy, it is not possible to extract very precise information about the amount of energy dissipation.
From a comparison of our theoretical result for the mass bin $`A_L=118.5-111.5`$ with the measured data we extract a dissipated energy of order $`10`$ MeV for <sup>237</sup>Np, while the second mass bin $`A_L=111.5-104.5`$ is more compatible with zero dissipation energy. We place a higher confidence on the theoretical results for the first mass bin because the probabilities $`P_L`$ are substantially larger and hence numerically more reliable. We would like to point out that our theoretical value $`E_{\mathrm{diss}}=10`$ MeV is compatible with results from other low-energy fission measurements that are based on the odd-even effect in the charge yields of fission fragments . In addition to <sup>237</sup>Np we have also studied muon-induced fission of <sup>238</sup>U; the results for muon attachment are very similar .
## 6. Conclusions
We have studied the dynamics of a muon bound to a fissioning actinide nucleus by solving the time-dependent Dirac equation for the muonic spinor wavefunction; the fission dynamics is described classically. The theory predicts a strong mass asymmetry dependence of the muon attachment probability $`P_L`$ to the light fission fragment; this feature is in agreement with experimental data. Our calculations show no evidence for muon ionization during fission. The theory also predicts a (relatively weak) dependence of $`P_L`$ on the dissipated energy. By comparing our theoretical results to the experimental data of ref. we extract a dissipated energy of about $`10`$ MeV for <sup>237</sup>Np (see Table 1). Using the dissipation function defined in Eq. (2), this value corresponds to a fission time delay from saddle to scission of order $`2\times 10^{-21}`$ s.
## Acknowledgements
This research project was sponsored by the U.S. Department of Energy under contract No. DE-FG02-96ER40975 with Vanderbilt University. For several years, I have benefitted from fruitful discussions with my collaborators, in particular with J.A. Maruhn, the late C. Bottcher, M.R. Strayer, P.G. Reinhard, A.S. Umar and J.C. Wells. Some of the numerical calculations were carried out on CRAY supercomputers at NERSC, Berkeley. I also acknowledge travel support to Germany from the NATO Collaborative Research Grants Program.
|
no-problem/9905/cond-mat9905112.html
|
ar5iv
|
text
|
# A Simple Solid-on-Solid Model of Epitaxial Thin Films Growth: Inhomogeneous Multilayered Sandwiches
## 1 Introduction
In the last decade, the physics of thin solid film growth and of atom behavior on flat film surfaces has become well understood and thoroughly investigated, due to rapid progress in experimental methods of preparing thin films (mainly MBE in UHV systems) and microscopy (mainly STM). Many papers devoted to the theory of growth of rough surfaces, interfaces and crystals have also been published (see for a review).
Here, we present a simple SOS model of epitaxial growth adapted for inhomogeneous $`a/b/a`$-like systems. The film growth is described as a two-step mechanism. The first step is the randomly picked position of initial contact of the incident particle on the film surface. This is followed by the second step, which is a relaxation process of local migration leading to the final position at which the particle sticks to the film. We simplify the model to limited surface diffusion and neglect migration to sites more distant than nearest neighbors (NN). After the relaxation process the particle sticks for the rest of the simulation. Each particle is represented by a unit-volume cube which can occupy only discrete positions in the lattice. Simple cubic symmetry is assumed. For an $`L\times L`$ substrate we deposit $`\theta _aL^2`$ particles of kind $`a`$, followed by $`\theta _bL^2`$ $`b`$-like particles and again $`\theta _aL^2`$ of $`a`$ particles. The nominal thicknesses of the $`a`$ and $`b`$ layers are $`\theta _a`$ and $`\theta _b`$, respectively. In the relaxation process, each particle tends to maximize the number of PPLB. This tendency is slowed down by the barrier $`V`$ for diffusion, which decreases the probability of atom movement. The above growth model may be implemented as the following flowchart:
* for each newly arriving particle at random site $`r=0`$, calculate the number of atomic pairs $`n_{aa}^r`$, $`n_{ab}^r`$ and $`n_{bb}^r`$ at the place of the initial particle contact to the surface with all their four NN labeled by $`r=1\mathrm{}4`$,
* calculate the particle total energies in all five positions $`(r=0\mathrm{}4)`$: $`E^r=n_{aa}^rE_{aa}+n_{ab}^rE_{ab}+n_{bb}^rE_{bb}`$, where $`E_{ij}`$ is the bonding energy between $`i`$\- and $`j`$-kind atoms,
* evaluate the probabilities $`p^r\propto \mathrm{exp}(-E^r)`$, $`r=0\mathrm{}4`$, of picking out each of the five virtual final positions of the atom,
* reduce the probability $`p^r`$ of movement into $`r=1\mathrm{}4`$ by a factor $`\mathrm{exp}(-V_i)`$, where $`V_a`$ and $`V_b`$ are the diffusion barriers for $`a`$- and $`b`$-kind atoms, respectively,
* pick out one of five proposed sites for the atom with probability given by $`p^r`$, $`(r=0\mathrm{}4)`$.
Values of $`E`$ and $`V`$ are expressed in $`k_BT`$ units, where $`k_B`$ is the Boltzmann constant and $`T`$ denotes the absolute temperature. The diffusion barriers $`V`$ are positive, while negative values of $`E`$ are compatible with the assumed tendency of the system to maximize the number of PPLB.
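A minimal sketch of the deposition/relaxation rule listed above, reduced to a single species on a 1+1-dimensional lattice (the simulations below use the full 2+1-dimensional, two-species version), is:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow(L=200, n_part=2000, E=-1.0, V=1.0):
    """Single-species SOS sketch: random landing column, then one relaxation
    step among the (here two) nearest neighbors, chosen with weight
    exp(-E^r) and an extra factor exp(-V) for an actual hop; E, V in k_B T."""
    h = np.zeros(L, dtype=int)
    for _ in range(n_part):
        r0 = rng.integers(L)
        sites = [r0, (r0 - 1) % L, (r0 + 1) % L]   # stay, hop left, hop right
        # lateral bonds the particle would form on top of each candidate column
        n = [sum(h[(s + d) % L] > h[s] for d in (-1, 1)) for s in sites]
        w = np.exp(-E * np.array(n, dtype=float))
        w[1:] *= np.exp(-V)                        # barrier penalizes moving
        h[sites[rng.choice(3, p=w / w.sum())]] += 1
    return h

h = grow()
print("mean thickness:", h.mean(), "roughness:", h.std())
```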
## 2 Results of simulation
The simulations were performed on a $`500\times 500`$ square lattice with periodic boundary conditions. The nominal thickness $`\theta _a`$ of both $`a`$-layers was set to ten monolayers (ML), while the average thickness of the $`b`$-spacer $`\theta _b`$ was varied from one to ten ML. The cases $`E_{aa}=E_{ab}=E_{bb}=0`$, or $`V_a=V_b=+\mathrm{}`$, which block any diffusion, yield the same results as the RD model, with a Poisson distribution of surface/interface heights.
### 2.1 Direct magnetic coupling
The energy of magnetic coupling between magnetic layers separated by a nonmagnetic spacer is often expressed by an energy term $`E=-K\stackrel{}{M}_1\stackrel{}{M}_2`$, where $`\stackrel{}{M}_1`$ and $`\stackrel{}{M}_2`$ are the magnetizations of the magnetic layers. The coupling $`K`$ depends strongly on the nonmagnetic spacer thickness $`\theta _b`$ and may follow an exponential law $`K\propto \mathrm{exp}(-\theta _b/\theta _0)`$, or it may have an oscillatory character in $`\theta _b`$, as is also often observed. The interaction energy $`K(\theta _b)`$ manifests itself experimentally by modifications of some magnetic properties such as susceptibility or ferromagnetic resonance. The coupling between magnetic layers separated by a nonmagnetic spacer was shown, for example, for NiFe/Cu/NiFe and for Ni/Ag/NiFe samples . In the latter case, a power-law decrease of $`K`$ with $`\theta _b`$ was also observed for some samples.
For the simple RD model the distribution of spacer thickness $`h_b`$ follows the Poisson distribution $`P(h;\theta )=\theta ^h/h!\mathrm{exp}(-\theta )`$, and the probability of direct coupling between $`a`$-layers, corresponding to zero spacer height, decreases exponentially with the average spacer thickness $`\theta _b`$: $`P(0;\theta _b)=\mathrm{exp}(-\theta _b)`$. However, in film preparation technology the growth conditions seldom correspond to the RD model, and so we expect deviations from the Poisson distribution, as presented in Fig. 1.
For different sets of model parameters in the non-RD case, we found that the decrease of the number of bridges (proportional to the direct coupling $`K`$) follows either an exponential or a power law (Fig. 2).
### 2.2 Spacer roughness
Variation of the spacer thickness is also essential for the uniformity of the magnetic coupling between layers. We use the root-mean-square $`\sigma `$ of surface heights as a measure of surface roughness. The dependence of the $`ab`$-layer roughness on the roughness of each of its components is described by: $`\sigma _{ab}^2=\sigma _a^2+\sigma _b^2+2\langle h_ah_b\rangle -2\langle h_a\rangle \langle h_b\rangle `$. For RD, successive film heights are uncorrelated and thus: $`\sigma _{ab}^2=\sigma _a^2+\sigma _b^2`$ and $`\sigma _i^2=\theta _i`$ for $`i=a,b,ab`$. The situation for the non-RD case is more complex (see Fig. 3b-f).
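The decomposition is an exact variance identity and is easy to verify on simulated height maps; for the RD benchmark the cross term vanishes and the Poisson result $`\sigma _i^2=\theta _i`$ is recovered:

```python
import numpy as np

def roughness_relation(ha, hb):
    """Verify sigma_ab^2 = sigma_a^2 + sigma_b^2 + 2<ha*hb> - 2<ha><hb>
    on height maps ha, hb (numpy arrays of equal shape)."""
    lhs = np.var(ha + hb)
    rhs = np.var(ha) + np.var(hb) + 2 * (np.mean(ha * hb) - ha.mean() * hb.mean())
    return lhs, rhs

rng = np.random.default_rng(1)
ha = rng.poisson(10.0, size=(500, 500))    # RD benchmark: independent heights
hb = rng.poisson(3.0, size=(500, 500))
print(roughness_relation(ha, hb))          # both close to theta_a + theta_b = 13
```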
We would like to consider the dependence of the roughness $`\sigma _b`$ on the spacer thickness $`\theta _b`$ for different model control parameters. For homogeneous film growth models the roughness $`\sigma `$ scales with film thickness according to the Family-Vicsek law : $`\sigma \propto L^\alpha f(\theta /L^z)`$ with $`f(x\to 0)\propto x^\beta `$ and $`f(x\to \mathrm{})\to 1`$. For large enough lattice size, and not too large average film thickness, this dependence is given by a power law: $`\sigma \propto \theta ^\beta `$. The dependence of the exponent $`\beta `$ on the model control parameters was discussed in detail in Ref. . In this way we are able to predict the roughness $`\sigma _a(\theta _a)`$ of the first $`a`$ layer. However, the determination of $`\sigma _b`$ is more difficult. Firstly, $`\theta _b`$ is too small to guarantee a clear dependence of $`\sigma _b`$ on $`\theta _b`$. Secondly, the growth of the $`b`$ spacer is also controlled by $`E_{ab}`$ until the $`b`$-like coverage is sufficient for creating $`bb`$-like PPLB, when it becomes dependent on $`E_{bb}`$. Fig. 3 shows that $`\sigma _b`$ increases with increasing $`\theta _b`$ independently of the $`E`$ and $`V`$ parameters. Deviations from the basic power scaling law, however, may be observed.
## 3 Conclusion
We found from computer simulations that the distribution of spacer heights around the average thickness changes from a Poisson distribution (or bell-shaped for larger thicknesses) to peak-shaped with a decrease of the diffusion barriers $`V`$ and/or increasing $`E`$, reflecting the tendency of particles to create more PPLB. The latter case helps to produce more uniform magnetic coupling between layers in sandwich tri-layer systems.
The energy of magnetic coupling between two magnetic moments $`\stackrel{}{\mu }_1`$ and $`\stackrel{}{\mu }_2`$ may be expressed by the Heisenberg term $`E=-K\stackrel{}{\mu }_1\stackrel{}{\mu }_2`$, where $`K`$ is the coupling constant. It is common to assume that this interaction is a short range one. For a direct exchange interaction $`J`$, $`K\propto J`$ and $`J\ne 0`$ only if $`h_b=0`$. Direct coupling means that $`h_b=0`$, and the strength of the coupling $`K`$ between magnetic layers with magnetizations $`\stackrel{}{M}_1=\stackrel{}{\mu }_1`$ and $`\stackrel{}{M}_2=\stackrel{}{\mu }_2`$ may be evaluated as $`K=NJ`$, where $`N=P(0;\theta _b)L^2`$ is the number of bridges between the magnetic layers. Usually, the decrease of $`K`$ with increasing spacer thickness follows an exponential or power law. We found from computer simulations that the number of bridges responsible for direct coupling is compatible with the above predictions, and either an exponential or a power law may be obtained for specific sets of model parameters.
This work and the machine time in ACC-CYFRONET-AGH were financed by the Polish Committee for Scientific Research (KBN) with grants no. 8 T11F 02616 and KBN/S2000/AGH/069/1998, respectively.
|
no-problem/9905/cond-mat9905014.html
|
ar5iv
|
text
|
# Phase Diagram Of The Biham-Middleton-Levine Traffic Model In Three Dimensions
## I Introduction
With ever increasing computational power, simulating traffic at the microscopic level by means of cellular automata has become a real possibility. One of the simplest models for city traffic of this kind is the so-called Biham-Middleton-Levine (BML) traffic model .
The one-dimensional BML model is simply the elementary binary CA rule 184 operating on a one-dimensional lattice with periodic boundary condition. The asymptotic car speed $`v`$ in this one-dimensional model is exactly known, and is given by
$$v=\{\begin{array}{cc}1\hfill & \text{if }\rho \le 1/2,\hfill \\ \frac{1}{\rho }-1\hfill & \text{if }1/2<\rho \le 1,\hfill \end{array}$$
(1)
where $`\rho `$ is the car density in the system . In other words, in the one-dimensional BML model, a traffic jam occurs only when the car density $`\rho `$ exceeds $`\rho _c^{(1)}=1/2`$, and all cars move at full speed whenever $`\rho \le 1/2`$.
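This exact result is easy to reproduce with a direct simulation of rule 184 on a ring:

```python
import numpy as np

def rule184_speed(L=10000, density=0.6, steps=2000):
    """Asymptotic car speed of the 1D BML model (CA rule 184): a car advances
    iff the site ahead is empty; all cars are updated in parallel."""
    road = np.zeros(L, dtype=bool)
    filled = np.random.default_rng(2).choice(L, int(density * L), replace=False)
    road[filled] = True
    n_cars, moved = road.sum(), 0
    for t in range(steps):
        can_move = road & ~np.roll(road, -1)      # car here, empty site ahead
        road = (road & ~can_move) | np.roll(can_move, 1)
        if t >= steps // 2:                       # measure after the transient
            moved += can_move.sum()
    return moved / (n_cars * (steps - steps // 2))

for rho in (0.3, 0.6):
    print(rho, rule184_speed(density=rho), "exact:", min(1.0, 1 / rho - 1))
```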
The two-dimensional BML model considers the motions of north- and east-bound cars in a two-dimensional square lattice with periodic boundary conditions in both the north-south and east-west directions. (We shall give the exact rules for the two-dimensional BML model in Section II.) Although we lack an exact analytical expression for the average asymptotic car speed $`v`$ as a function of car density $`\rho `$ in the two-dimensional BML model, extensive numerical simulations as well as mean field theory studies have been carried out. Their results strongly suggest a fluctuation-induced first order phase transition in $`v`$. Moreover, the average asymptotic car speed is likely to follow
$$v=\{\begin{array}{cc}1\hfill & \text{for }0\le \rho <\rho _c^{(2)},\hfill \\ 0\hfill & \text{for }\rho >\rho _c^{(2)},\hfill \end{array}$$
(2)
where the critical density $`\rho _c^{(2)}`$ is numerically found to be about 0.31 and is analytically proven to be less than 1/2 . In addition, Tadaki and Kikuchi found a more subtle phase transition related to the final jamming pattern. Their numerical study showed that jamming patterns for car density $`\rho `$ less than about 0.52 are very well self-organized. On the other hand, when $`\rho `$ is greater than 0.52, the jamming patterns are random .
Extension of the BML model to higher dimensions can be regarded as a highly simplified model for computer network communication in a hypercube. And from the physics point of view, it is natural to investigate the phase diagram as well as the upper critical dimension of the BML model in higher dimensions. As a pioneering study, we report the results of an extensive numerical study of the BML model in three dimensions in this paper. We find that the three-dimensional model has a richer phase diagram than those in one and two dimensions. In addition to the fluctuation-induced first order phase transition in $`v`$, we also observe a low (but non-zero) speed phase.
To begin, we first introduce the higher dimensional generalization of the BML model in section II. Then, we report our simulation results in section III and present our analysis of results in section IV. Finally, we draw our conclusions in section V.
## II The BML Model
Let us introduce the modified BML model in $`n`$ dimensions. Consider an $`n`$-dimensional $`N_1\times N_2\times \mathrm{}\times N_n`$ square lattice with periodic boundary conditions. Each lattice site either contains no car (that is, it is an empty site) or contains exactly one car moving in the $`\widehat{e}_i`$ direction. We denote by $`\rho _i`$ the density of cars moving along $`\widehat{e}_i`$. (That is, $`\rho _i`$ equals the number of cars moving along the $`\widehat{e}_i`$ direction divided by the total number of sites in the system.) We denote the total car density of the system by $`\rho \equiv \sum _i\rho _i`$ and we define the car density vector by $`\stackrel{}{\rho }\equiv (\rho _1,\rho _2,\mathrm{},\rho _n)`$. Initially, cars are placed randomly and independently onto the $`n`$-dimensional square lattice according to a pre-determined car density vector $`\stackrel{}{\rho }`$.
The dynamics of the cars is governed by the following rules. Each $`\widehat{e}_1`$-moving car advances one site along the $`\widehat{e}_1`$ direction provided that no car blocks its way. Otherwise, that $`\widehat{e}_1`$-moving car stays in its present location. Parallel updating is used for all $`\widehat{e}_1`$-moving cars. After this, each $`\widehat{e}_2`$-moving car advances one site along the $`\widehat{e}_2`$ direction if no car blocks its way. Otherwise, that $`\widehat{e}_2`$-moving car stays in its present location. Again, parallel updating is used. This process goes on until each $`\widehat{e}_n`$-moving car has been given a chance to move. This marks the end of one timestep, and the above car moving process is repeated over and over again.
At each timestep, the car speed is defined as the ratio of the number of cars moved to the total number of cars in the lattice. The average asymptotic car speed $`v`$ is defined as the car speed averaged over both the cycle time and the initial car configurations. Since we are interested in the behavior of the system in the thermodynamic limit, we consider only the limit where $`N_1,N_2,\mathrm{},N_n`$ all tend to infinity while keeping the aspect ratio between the sides fixed.
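To make these rules concrete, here is a minimal sketch of one timestep and of the speed measurement (our own illustration, not the authors' code; the lattice size and density in the usage lines are placeholders):

```python
import numpy as np

def bml_step(lattice):
    """One timestep of the n-dimensional BML model.

    `lattice` holds 0 for an empty site and i for a car moving along axis i-1.
    Directions are updated one after another; within a direction the update
    is parallel: a car moves only if the site ahead was empty beforehand.
    Returns the car speed of this timestep (moved cars / all cars).
    """
    n_moved, n_cars = 0, np.count_nonzero(lattice)
    for axis in range(lattice.ndim):
        species = axis + 1
        movers = (lattice == species) & (np.roll(lattice, -1, axis=axis) == 0)
        lattice[movers] = 0
        lattice[np.roll(movers, 1, axis=axis)] = species  # periodic boundaries
        n_moved += movers.sum()
    return n_moved / n_cars

# Homogeneous three-dimensional model: rho_x = rho_y = rho_z = rho / 3.
rng = np.random.default_rng(1)
N, rho = 30, 0.10
lattice = rng.choice([0, 1, 2, 3], size=(N, N, N),
                     p=[1.0 - rho, rho / 3, rho / 3, rho / 3])
for _ in range(500):
    v = bml_step(lattice)
print("car speed after relaxation:", v)
```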
We define the $`n`$-dimensional BML traffic model to be the one with the aspect ratio between each of the $`n`$ sides fixed to one. That is to say, $`N_1=N_2=\mathrm{}=N_n`$. Also, an $`n`$-dimensional BML traffic model is called homogeneous if and only if $`\rho _i=\rho _j`$ for all $`i,j`$ . In this paper, we concentrate on the homogeneous three-dimensional BML traffic model, so for simplicity we shall simply call it the three-dimensional BML model whenever confusion is not possible. From this definition, it is clear that the average asymptotic car speed $`v`$ is a function of $`\stackrel{}{\rho }`$ only and that its value lies between zero and one.
## III Simulation Results Of The Three-Dimensional BML Model
Our simulations were performed on a variety of machines, including clusters of Sun Sparc and Dec Alpha workstations, various Pentium-based PCs and Power PCs, as well as the SP2 supercomputer. The estimated total CPU time is about 300 MIPS-years. Even so, owing to our CPU time limitations, we could only systematically simulate lattices up to $`100\times 100\times 100`$. Nonetheless, we have also simulated the cases of very small and very large car densities up to a lattice size of $`1000\times 1000\times 1000`$ before finally drawing our conclusions.
Fig. 1 shows the $`\rho `$ vs. $`v`$ curve for the BML model on a $`100\times 100\times 100`$ lattice. Each data point in the figure represents the average asymptotic car speed over an ensemble of random initial configurations. For $`\rho <0.1`$ as well as $`\rho >0.22`$, the value of $`v`$ is obtained by averaging over $`1000`$ initial configurations. In contrast, $`v`$ for $`0.1\le \rho \le 0.22`$ is obtained by averaging over only $`50`$ random initial configurations, because the long relaxation time prevents us from obtaining more samples. Fig. 1 tells us that $`v=1`$ when the car density $`\rho \le 0.005=1/(2N)`$. We call this region the “full speed phase”. Moreover, recurrent states in this car density region are cycles of period $`100=N`$. (The dependence of various parameters on $`N`$ here and hereafter is based on our simulation results for various lattice sizes up to $`1000\times 1000\times 1000`$, including various odd, even and prime values of $`N`$.) As $`\rho `$ increases to about $`0.01=1/N`$, $`v`$ begins to drop. The recurrent states form cycles with periods several tens of times the linear lattice size $`N`$. As the car density $`\rho `$ reaches about $`0.02=2/N`$, $`v`$ drops to the value $`N/(N+1)=100/101\approx 0.99099`$ and stays constant until $`\rho `$ reaches about $`0.10`$. In this car density region, recurrent states form cycles of period $`101=N+1`$. In other words, in the recurrent state, each car in the system is blocked exactly once in each cycle. For the $`100\times 100\times 100`$ lattice, $`v`$ is slightly greater than $`N/(N+1)`$ for $`0.10\le \rho \le 0.17`$. A similar but much smaller bump in $`v`$ is also observed in the $`200\times 200\times 200`$ lattice. Hence, we conclude that the bump is due to the finite size of the lattice. Typical recurrent states in this car density region are cycles of period $`100N`$. Moreover, the typical relaxation time for a random configuration in this range of car density appears to scale exponentially with $`N`$. In fact, the long relaxation time forbids us from performing systematic simulations with lattice sizes greater than $`100\times 100\times 100`$. Clearly, the exponentially long relaxation time signals a critical slow-down.
As the car density reaches about $`\rho _{c_1}^{(3)}=0.18\pm 0.01`$, $`v`$ drops abruptly to about $`0.015`$. Since the asymptotic car speeds in all our simulation data are either greater than or equal to $`N/(N+1)`$ or less than $`0.03`$, we strongly believe that the observed sudden drop in $`v`$ is a result of a first order phase transition. Interestingly, the periods of the recurrent configurations of all these low but non-zero speed states are equal to $`N`$. When we further increase the car density $`\rho `$, $`v`$ gradually decreases until it finally reaches zero at $`\rho _{c_2}^{(3)}=0.32\pm 0.02`$.
In summary, our simulations tell us that for a finite $`N\times N\times N`$ lattice, the system exhibits a non-trivial “high speed region” with $`v=N/(N+1)`$ as well as a non-trivial “low speed region” with $`v\le 0.03`$. Thus, in the thermodynamic limit, the three-dimensional BML model has a full speed phase, a low speed phase and a completely jammed phase. (Moreover, just like in the two-dimensional case, the completely jammed phase may further be divided into self-organized jamming and random jamming regions.) The transition from the full speed to the low speed phase is first order in nature, and the transition from the low speed phase to the completely jammed phase is smooth. That is to say, the latter transition is at least second order.
## IV Analysis Of Our Simulation Results
### A The Full Speed Phase
In addition to the systematic trend that the critical car density for the transition from the high to the low speed phase strictly decreases with spatial dimension, we observe an interesting feature in the high speed phase of the three-dimensional BML model. Unlike the one- and two-dimensional models, the recurrent configurations for any finite $`N\times N\times N`$ lattice in the high speed region with $`\rho \gtrsim 1/N`$ form cycles of period $`N+1`$. A typical high speed recurrent configuration in a $`5\times 5\times 5`$ lattice is shown in Fig. 2 as an illustration. Readers may verify that cars in these high speed configurations are blocked once per cycle period . Unfortunately, we do not have a good explanation of why this $`v=N/(N+1)`$ recurrent state is preferred over the $`v=1`$ recurrent state in three dimensions.
### B The Transition To Low Speed Phase
Since we cannot find any intermediate speed asymptotic configurations in our simulations, the transition from the high to the low speed phase is likely to be first order. To further investigate the nature of this transition, we drive the system by slowly adding cars to or removing cars from it. That is to say, starting from $`\rho =0`$, we increase the car density by a fixed small amount $`\mathrm{\Delta }\rho `$ by randomly introducing cars at empty sites of the system. We then evolve the system until it relaxes to a recurrent state. We repeat the process until $`\rho `$ reaches one. After this, we decrease the car density of the system by $`\mathrm{\Delta }\rho `$ by randomly removing cars from the system, and we again evolve the system until it reaches a recurrent state. We repeat this process until $`\rho `$ becomes zero. The $`\rho `$ vs. $`v`$ graph obtained in this way on a $`100\times 100\times 100`$ lattice with $`\mathrm{\Delta }\rho =0.001`$ is shown in Fig. 3. Clearly, as we slowly increase the car density $`\rho `$, the transition to the low speed phase occurs at a car density around $`0.22`$, which is slightly higher than the critical car density $`\rho _{c_1}^{(3)}0.18`$. More dramatically, as we slowly decrease the car density, the transition to the high speed phase occurs at a car density around $`0.07`$, which is much smaller than the critical car density $`\rho _{c_1}^{(3)}`$. The observed hysteresis loop confirms the hypothesis that this is a first order phase transition. And since the only nonlinearity in the model comes from the excluded volume effect, we conclude that the phase transition is fluctuation induced.
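The driving protocol can be sketched as follows, reusing `bml_step` from the sketch in Section II (our illustration; the lattice size, `d_rho` and relaxation time are placeholders far smaller than the values used for Fig. 3):

```python
def density_sweep(N=20, d_rho=0.02, relax_steps=200, seed=2):
    """Raise the car density from 0 towards 1 and lower it back, recording
    the relaxed car speed at each density to expose the hysteresis loop."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros((N, N, N), dtype=int)
    history = []
    densities = np.arange(d_rho, 1.0, d_rho)
    for branch, rhos in (("up", densities), ("down", densities[::-1])):
        for rho in rhos:
            target = int(rho * lattice.size)
            current = np.count_nonzero(lattice)
            if target > current:           # add cars at random empty sites
                empty = np.flatnonzero(lattice == 0)
                new = rng.choice(empty, size=target - current, replace=False)
                lattice.flat[new] = rng.integers(1, 4, size=new.size)
            elif target < current:         # remove cars at random occupied sites
                occupied = np.flatnonzero(lattice)
                drop = rng.choice(occupied, size=current - target, replace=False)
                lattice.flat[drop] = 0
            for _ in range(relax_steps):
                v = bml_step(lattice)
            history.append((branch, rho, v))
    return history
```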
### C The Low Speed Phase And The Completely Jammed Phase
Unlike its one- and two-dimensional counterparts, the three-dimensional BML model has a low speed phase with $`0<v\le 0.03`$. Similar to the completely jammed configurations, we find that the recurrent configurations in the low speed phase contain (directed) percolating clusters of cars. But unlike the completely jammed configurations, we find a small number of residual freely moving cars in the low speed phase. Hence, the period of these recurrent states equals the linear system size $`N`$. And since most cars are already jammed by colliding with the percolating cluster, the average asymptotic car speed is low. A typical low speed recurrent configuration in a $`5\times 5\times 5`$ lattice is shown in Fig. 4 as an illustration .
Recall that a percolating backbone is essentially a one-dimensional object. Therefore, if the background lattice is one- or two-dimensional, all other moving cars will eventually merge into the percolating backbone, leading to a completely jammed configuration. The situation is completely different when the background lattice is at least three-dimensional. In this case, since both the trajectories of moving cars and the percolating backbone are essentially one-dimensional objects, the trajectories of some moving cars may not intersect the percolating cluster. Thus, if the car density is small enough, some residual freely moving cars may be present in a recurrent configuration, giving rise to the observed low speed phase.
As the car density gradually increases in the low speed phase, the size of the percolating cluster in the recurrent configuration increases. It becomes more and more difficult for the system to accommodate residual freely moving cars. Hence, $`v`$ gradually decreases until it eventually reaches zero. The transition from the low speed phase to the completely jammed phase is, therefore, smooth.
## V Conclusions And Outlook
In summary, we have studied the phase diagram of the three-dimensional BML model. Similar to the two-dimensional model, a fluctuation-induced first order phase transition in the asymptotic average car speed $`v`$ is observed at car density $`\rho _{c_1}^{(3)}=0.18\pm 0.01`$. We also discover a new low speed phase which is absent in the one- and two-dimensional models. We argue that the existence of this low speed phase is geometrical in nature, and hence this phase should exist in higher dimensional BML models as well. It is instructive to numerically verify our claims in the four-dimensional model. Unfortunately, the amount of computation involved would probably be too high for us at this moment. Finally, our simulations suggest that the transition from the low speed phase to the completely jammed phase is smooth and occurs at car density $`\rho _{c_2}^{(3)}=0.32\pm 0.02`$.
A number of open questions remain. For instance, we do not understand why the $`v=N/(N+1)`$ high speed states are preferred over the $`v=1`$ full speed states in the three-dimensional model. And it is meaningful to investigate whether this behavior persists in higher dimensions.
###### Acknowledgements.
We would like to thank P. M. Hui, K.-t. Leung, L. W. Siu and K. K. Yan for their useful comments.
no-problem/9905/quant-ph9905031.html
# A one-dimensional lattice model for a quantum mechanical free particle
## Abstract
Two types of particles, $`A`$ and $`B`$, with their corresponding antiparticles, are defined on a one-dimensional cyclic lattice with an odd number of sites. In each step of the time evolution, each particle acts as a source for the polarization field of the other type of particle, with nonlocal action but with an effect decreasing with distance: $`A\to \mathrm{}\overline{B}B\overline{B}B\overline{B}\mathrm{}`$ ; $`B\to \mathrm{}A\overline{A}A\overline{A}A\mathrm{}`$. It is shown that the combined distribution of these particles obeys the time evolution of a free particle as given by quantum mechanics.
The modeling of physical reality by means of fictitious particles that move and react on a substrate of different geometrical structures has been a fruitful strategy that has extended our analysis capabilities beyond the domain associated with differential equations. The particles involved in these models are classical in the sense that they are given precise location and velocity. This is clearly inadequate for the modeling of quantum systems, which require not only the indeterminacies imposed by Heisenberg’s principle, but also nonlocal correlations between commuting observables, suggested by the Einstein-Podolsky-Rosen argument and empirically established in the violation of Bell inequalities. However, this does not forbid the modeling of quantum systems if we do not identify the particles of the model with the quantum particles. It is possible, as will be seen in this work, to associate the real quantum particle to a combined distribution of two types of fictitious particles with nonlocal interaction. This simple example model can be trivially extended to higher dimensions of space and to a higher number of noninteracting quantum particles, and it provides a new point of view to study the peculiarities of quantum mechanics.
Let us assume a one-dimensional lattice with $`N`$ sites on a circle and lattice constant $`a`$. We assume $`N`$ to be an odd integer. The reason for this restriction will become clear later. The inclusion of even values of $`N`$ would introduce unwanted complications in the model. Each site can be occupied by any number of particles of type $`A`$, $`B`$ or by their corresponding antiparticles $`\overline{A}`$, $`\overline{B}`$. Particles and antiparticles of the same type annihilate at each site of the lattice, leaving only the remaining excess of particles or antiparticles of both types $`A`$ and $`B`$. At each time step, $`t\to t+1`$, corresponding to a time evolution by a small amount $`\tau `$, the particles of type $`A`$ create antiparticles $`\overline{B}`$ at the same site, particles $`B`$ at the first neighboring sites, $`\overline{B}`$ at the second neighboring sites and so on. In a similar way, particles $`B`$ create particles $`A`$ and $`\overline{A}`$.
$`A`$ $`\to `$ $`\mathrm{}\overline{B}B\overline{B}B\overline{B}\mathrm{}`$ (1)
$`B`$ $`\to `$ $`\mathrm{}A\overline{A}A\overline{A}A\mathrm{}.`$ (2)
The same reactions occur exchanging particles and antiparticles. This creation process extends to the right and left of each site up to the two opposing sites on the circle. Since $`N`$ is odd, at these two sites particles of the same sign (either particles or antiparticles) are created. The number of particles or antiparticles created decreases with the distance $`d`$ roughly like $`1/d^2`$ for a distribution of particles confined in a small region within a large lattice, as will be stated precisely later. Before we write the master equation for the time evolution, we can notice some qualitative features of the process. It is easy to see that the process has diffusion. If we start, for instance, with some number of $`A`$ particles at one site, after two time steps some $`\overline{A}`$ antiparticles have been created at the same site, reducing the number of $`A`$ particles, but some $`A`$ particles also appear at the first neighboring sites. The net effect is diffusion. It is less obvious that, even though the process has left-right symmetry, we may also have drift to the right or to the left. In order to see how this is possible we notice that $`A`$ particles expel $`B`$ particles from their site because $`\overline{B}`$ are created there, whereas $`B`$ particles attract neighboring $`A`$ particles to their site. Therefore if we have an asymmetric configuration like $`AB`$, the center of the combined distribution will move towards $`B`$. The drift direction and velocity are then encoded in the shape and relative distribution of both types of particles. We will see that, although the distributions of particles are widely distorted after a few time steps, the drift direction and velocity remain invariant.
A convenient way to label the sites of the circular lattice is by an index $`s`$ running from $`L`$ to $`L`$. Since the number of sites $`N=2L+1`$ is odd, the index $`s`$ will be an integer. It is of course irrelevant which site has the label $`s=0`$. Let $`a_s(t)`$ and $`b_s(t)`$ be the number of particles of type $`A`$ and $`B`$ respectively at the site $`s`$ at time $`t`$, normalized in a way that will be specified later (in any case the master equation is independent of the normalization). When $`a_s(t)`$ or $`b_s(t)`$ take negative values they denote the number of antiparticles. At a particular site of the lattice, the number of particles changes as particles or antiparticles are created in it by the particles at other sites. The time evolution of the process is then defined by the equations
$`a_s(t+1)`$ $`=`$ $`a_s(t)+\tau g^2{\displaystyle \sum _{d=-L}^{L}}b_{[s+d]}(t)F(d)`$ (3)
$`b_s(t+1)`$ $`=`$ $`b_s(t)-\tau g^2{\displaystyle \sum _{d=-L}^{L}}a_{[s+d]}(t)F(d),`$ (4)
where the square brackets in the index, $`[s+d]`$, denote “modulo $`N`$”, that is, a value in the closed interval $`[L,L]`$; $`g`$ is related to the lattice constant $`a`$ by $`g=(2\pi )/(Na)`$ (it corresponds to the reciprocal lattice constant); $`\tau `$ is a time scale small enough to make $`\tau g^2N^2\ll 1`$, and the function of the distance $`F(d)`$ is defined as
$$F(d)=\frac{1}{N}\sum _{k=-L}^{L}k^2e^{i\frac{2\pi }{N}kd}=\{\begin{array}{cc}(-1)^d\frac{\mathrm{cos}(\pi d/N)}{2\mathrm{sin}^2(\pi d/N)}\hfill & \text{ if }d=\pm 1,\pm 2,\mathrm{},\pm L\hfill \\ \frac{1}{12}(N^2-1)\hfill & \text{ if }d=0\hfill \end{array}$$
(5)
For later use we define a similar function $`G(d)`$ as:
$$G(d)=\frac{i}{N}\sum _{k=-L}^{L}ke^{i\frac{2\pi }{N}kd}=\{\begin{array}{cc}\frac{(-1)^d}{2\mathrm{sin}(\pi d/N)}\hfill & \text{ if }d=\pm 1,\pm 2,\mathrm{},\pm 2L\hfill \\ 0\hfill & \text{ if }d=0\hfill \end{array}$$
(6)
The alternating sign in the definition of $`F(d)`$ corresponds to the fact that particles and antiparticles are created at alternating sites, and the different sign in Eq. 2 is due to the difference in the rôle of particle and antiparticle in Relation 1. If the particles are confined in a small region within a large lattice, the main contribution to the sums of Eq. 2 comes from terms with distance $`|d|\ll N`$. In this limit we have $`|F(d)|\sim 1/d^2`$, as mentioned before.
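A minimal numerical sketch of the master equation (Eq. 2) and of the function $`F(d)`$ of Eq. 3 is given below. It is our own re-implementation, with a smaller lattice than the one used in the simulations described later:

```python
import numpy as np

L = 100                            # sites s = -L..L, N = 2L + 1 odd
N = 2 * L + 1
a = 1.0                            # lattice constant
g = 2.0 * np.pi / (N * a)          # reciprocal lattice constant
tau = 1.0e-3                       # time step, tau * g**2 * N**2 << 1

d = np.arange(-L, L + 1)
with np.errstate(divide="ignore", invalid="ignore"):
    F = (-1.0) ** d * np.cos(np.pi * d / N) / (2.0 * np.sin(np.pi * d / N) ** 2)
F[L] = (N**2 - 1) / 12.0           # the d = 0 entry of Eq. 3

def step(a_s, b_s):
    """One step of Eq. 2; a_s, b_s are arrays indexed by site s = -L..L."""
    src_b = sum(F[L + dd] * np.roll(b_s, -dd) for dd in d)  # sum_d b_[s+d] F(d)
    src_a = sum(F[L + dd] * np.roll(a_s, -dd) for dd in d)
    return a_s + tau * g**2 * src_b, b_s - tau * g**2 * src_a

# Gaussian initial packet; M = sum(a^2 + b^2) should be conserved to O(tau^2).
s = d.astype(float)
a_s, b_s = np.exp(-s**2 / (2.0 * 10.0**2)), np.zeros(N)
M0 = np.sum(a_s**2 + b_s**2)
for _ in range(200):
    a_s, b_s = step(a_s, b_s)
print("relative change of M:", np.sum(a_s**2 + b_s**2) / M0 - 1.0)
```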
The numbers of $`A`$ or $`B`$ particles are not conserved in the time evolution. Neither is the sum of particles conserved. A quantity that is conserved in the time evolution of the process is the sum of the squares of the numbers of $`A`$ particles (or antiparticles) plus the sum of the squares of the numbers of $`B`$ particles (or antiparticles). This conserved quantity can be used for normalization and can be given a physical meaning like energy density or probability density. It is therefore relevant to define a combined distribution $`a_s^2(t)+b_s^2(t)`$, associated to this conserved quantity. We will see that the drift velocity of this combined distribution, given by
$$V=4g\sum _{s,r}a_s(t)b_r(t)G(s-r),$$
(7)
is also conserved in the time evolution. Given a distribution $`\{a_s(t),b_s(t)\}`$, we can change the drift velocity by an amount $`v`$, without changing the shape of the combined distribution by means of the local transformation
$`a_s^{\prime }(t)`$ $`=`$ $`a_s(t)\mathrm{cos}(vas/2)-b_s(t)\mathrm{sin}(vas/2)`$ (8)
$`b_s^{\prime }(t)`$ $`=`$ $`a_s(t)\mathrm{sin}(vas/2)+b_s(t)\mathrm{cos}(vas/2).`$ (9)
These features have been checked in a computer simulation of the process. A circular lattice with $`N=801`$ sites ($`L=400`$) and with lattice constant $`a=1`$ was chosen. Several shapes of initial distributions were tried: gaussian, uniform and random, with several widths and drift velocities. The time dependence of $`M(t)=\sum _s(a_s^2(t)+b_s^2(t))`$ and of the drift velocity given in Eq. 5 was studied. Taking a time step $`\tau =10^{-3}`$, we found that these quantities remain constant after $`t=1000`$ time steps, with a relative variation less than $`10^{-5}`$ for the gaussian case, $`4\times 10^{-4}`$ for the uniform distribution, and $`0.04`$ for the random distribution. For a larger time step, $`\tau =0.005`$, these quantities remain constant (less than $`1\%`$ relative variation) for the gaussian and uniform cases at $`t=1000`$, but the random case begins to show significant departures from constancy. At $`\tau =0.010`$ only in the gaussian case do these quantities remain constant (less than $`0.1\%`$). The time evolution of the shape of the combined distribution is strongly reminiscent of the time evolution of quantum mechanical wave packets. For instance, a gaussian distribution for $`A`$ and $`B`$ particles, modified by Eq. 6 in order to have drift, will evolve increasing its width and drifting, but maintaining the gaussian shape. A uniform distribution will develop side lobes in the evolution. A remarkable feature is that the process smoothens out the random fluctuations of an initial distribution.
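Continuing the sketch above, the drift velocity of Eq. 5 and the boost of Eq. 6 can be written as follows (our illustration; `G` implements Eq. 4):

```python
d2 = np.arange(-2 * L, 2 * L + 1)           # s - r ranges over -2L..2L
with np.errstate(divide="ignore", invalid="ignore"):
    G = (-1.0) ** d2 / (2.0 * np.sin(np.pi * d2 / N))
G[2 * L] = 0.0                              # G(0) = 0

Gmat = G[2 * L + (d[:, None] - d[None, :])]  # matrix of G(s - r)

def drift_velocity(a_s, b_s):
    """Eq. 5: V = 4 g sum_{s,r} a_s b_r G(s - r)."""
    return 4.0 * g * a_s @ Gmat @ b_s

def boost(a_s, b_s, v):
    """Eq. 6: add drift v without changing the combined distribution."""
    c, sn = np.cos(v * a * s / 2.0), np.sin(v * a * s / 2.0)
    return a_s * c - b_s * sn, a_s * sn + b_s * c

a_s, b_s = boost(a_s, b_s, 0.3)
print("drift velocity:", drift_velocity(a_s, b_s))  # stays ~0.3 under step()
```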
The resemblance of the process to quantum mechanics is striking. We will indeed show that the process defined here corresponds to a quantum mechanical free particle on a lattice. Let us define then an $`N`$-dimensional Hilbert space spanned by a basis $`\{\phi _s\}`$, $`s=L,L+1,\mathrm{},L`$, corresponding to the eigenvectors of the position operator $`X`$. Then $`X\phi _s=as\phi _s`$. In this finite dimensional Hilbert space, we cannot define the momentum operator $`P`$ by means of the usual commutation relation. The alternative way to define $`P`$ is to first choose an unbiased basis $`\{\varphi _k\}`$,
$$\varphi _k=\frac{1}{\sqrt{N}}\sum _{s=-L}^{L}e^{i\frac{2\pi }{N}ks}\phi _s,$$
(10)
and with it, we define the momentum by the spectral decomposition
$`P`$ $`=`$ $`{\displaystyle \sum _{k=-L}^{L}}gk|\varphi _k\rangle \langle \varphi _k|,`$ (11)
$`=`$ $`{\displaystyle \frac{1}{N}}{\displaystyle \sum _{k,s,r}}gke^{i\frac{2\pi }{N}k(s-r)}|\phi _s\rangle \langle \phi _r|.`$ (12)
The momentum eigenvalues and the relative phases used to build the basis $`\{\varphi _k\}`$ have been chosen such that $`P`$ is the generator of translations. That is, with this choice, the operator $`U_a=\mathrm{exp}(iaP)`$ is such that $`U_a\phi _s=\phi _{s+1}`$. The translation is cyclic at the border, $`U_a\phi _L=\phi _L`$. If we had taken $`N`$ even, the right hand side of this equation would have a minus sign. This would complicate the model of Relation 1, introducing a change of sign at some appropriate places. In order to have a simple lattice model for the quantum free particle we prefer to restrict ourselves to odd values of $`N`$.
The state of a free quantum particle, given by
$$\mathrm{\Psi }(t)=\sum _{s=-L}^{L}c_s(t)\phi _s,$$
(13)
will change according to the time evolution operator (we set $`\mathrm{}=2m=1`$, with $`\mathrm{}`$ denoting $`\hbar `$)
$$U_t=\mathrm{exp}(-iP^2t).$$
(14)
Let us consider the evolution of the coefficients of the expansion given in Eq. 9, in one step of discretized time: $`t_0=\tau t`$ and $`t_1=\tau (t+1)`$, with a small time scale $`\tau `$ and $`t`$ a positive integer. We have
$$c_s(t+1)=\sum _{r=-L}^{L}c_r(t)\langle \phi _s,U_\tau \phi _r\rangle .$$
(15)
For $`\tau `$ small enough such that $`\tau P^2\ll 1`$, that is $`\tau \ll (a/\pi )^2`$, the time evolution operator can be linearized and we obtain
$$c_s(t+1)=c_s(t)-i\tau \sum _{r=-L}^{L}c_r(t)\langle \phi _s,P^2\phi _r\rangle .$$
(16)
Using Eq.8 we calculate the matrix element
$$\langle \phi _s,P^2\phi _r\rangle =g^2\frac{1}{N}\sum _{k=-L}^{L}k^2e^{i\frac{2\pi }{N}k(s-r)}.$$
(17)
We have then
$$c_s(t+1)=c_s(t)-i\tau g^2\sum _{r=-L}^{L}c_r(t)F(s-r).$$
(18)
Reordering the terms in the sum and using the “modulo $`N`$” notation, we get
$$c_s(t+1)=c_s(t)-i\tau g^2\sum _{d=-L}^{L}c_{[s+d]}(t)F(d).$$
(19)
Finally, if we explicitly write the coefficients with real and imaginary parts, $`c_s(t)=a_s(t)+ib_s(t)`$, we get Eq. 2 above.
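Using the sketch introduced earlier, this correspondence can be checked directly (our illustration): one step of the complex evolution of Eq. 15 for $`c_s=a_s+ib_s`$ coincides with one step of Eq. 2 for the pair $`(a_s,b_s)`$.

```python
def schrodinger_step(c_s):
    """Eq. 15: c_s(t+1) = c_s(t) - i tau g^2 sum_d c_[s+d](t) F(d)."""
    src = sum(F[L + dd] * np.roll(c_s, -dd) for dd in d)
    return c_s - 1j * tau * g**2 * src

c_s = a_s + 1j * b_s
a_next, b_next = step(a_s, b_s)
print(np.allclose(schrodinger_step(c_s), a_next + 1j * b_next))  # True
```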
We can check here that $`M(t)=\sum _s|c_s(t)|^2`$ is conserved.
$$M(t+1)=M(t)+\tau ^2g^4\sum _{r,u}c_r(t)c_u^{*}(t)\sum _sF(r-s)F(s-u).$$
(20)
The term linear in $`\tau `$ vanishes because the symmetric function $`F`$ appears multiplied by an anti-symmetric factor. We see here that the “derivative” $`(M(t+1)M(t))/\tau `$ vanishes like $`\tau `$, in agreement with the numerical simulation of the process. We can write the functions $`F`$ in their summation representations and, performing the sum over $`s`$, we get
$$\sum _sF(r-s)F(s-u)=\frac{1}{N}\sum _{k=-L}^{L}k^4e^{i\frac{2\pi }{N}k(r-u)}.$$
(21)
This sum can be evaluated as was done in Eqs. 3 and 4, but we do not need it. In the limit $`N\gg d=r-u`$, that is, when the particles are confined in a small region of a large lattice, we get
$$M(t+1)=M(t)+\tau ^2\frac{\pi ^4}{5}\left(2\sum _{r\ne u}c_r(t)c_u^{*}(t)\frac{(-1)^{(r-u)}}{(r-u)^2}+M(t)\right).$$
(22)
A similar result is obtained for the drift velocity, proportional to the expectation value of $`P`$.
$$\langle P\rangle _t=-ig\sum _{s,r}c_s^{*}(t)c_r(t)G(s-r).$$
(23)
In terms of $`a_s(t)`$ and $`b_s(t)`$ this equation becomes Eq. 5 above. Here again, considering the time evolution of $`\langle P\rangle _{t+1}`$, the term linear in $`\tau `$ vanishes because it contains $`\sum _s[F(u-s)G(s-r)-G(u-s)F(s-r)]`$, which is zero as can be verified with the summation representations of the functions $`F`$ and $`G`$. We obtain then
$$\langle P\rangle _{t+1}=\langle P\rangle _t-i\tau ^2g^5\sum _{u,v}c_u^{*}(t)c_v(t)\sum _{s,r}F(u-s)G(s-r)F(r-v),$$
(24)
showing that the drift velocity is constant to order $`\tau `$, that is, the “derivative” vanishes with $`\tau `$ in agreement with the numerical simulation of the process. Finally, applying a boost transformation $`\mathrm{exp}(iXv/2)`$ to the state of Eq.9, we prove equation (6).
The one-dimensional lattice model presented here provides a simple representation for the position and momentum of a free quantum mechanical particle. In this model we require that $`N`$ be odd. Let us see what happens in the case where $`N`$ is even. In this case, the model evolves according to Eq. 2 with the summations running from $`N/2`$ to $`N/2`$ and with the same function $`F(d)`$ defined in Eq. 3. Notice that this function vanishes at the extreme values of $`d`$, that is $`F(\pm N/2)=0`$. This model can be interesting in itself but it is no longer equivalent to the quantum mechanical system. The connection is lost in the step from Eq. 14 to Eq. 15. For the cases when the argument $`s-r`$ of the function $`F`$ in Eq. 14 takes values exceeding $`N/2`$, we should introduce a minus sign if we want to change the argument to $`d`$ as in Eq. 15 (in the case of $`N`$ odd, no sign change is needed). The reason for this change can be traced to the change in sign produced by the translation operator when the site labeled by $`\pm L`$ is crossed, as mentioned after Eq. 8. It would be possible to include even values of $`N`$ but at the cost of complicating the model. For this we would have to change the rules of Relation 1, exchanging particles and antiparticles when we cross the site with label $`\pm L`$. These complications are unwanted and we prefer to accept the fact that the position and momentum of a quantum mechanical particle can be easily modeled only with a cyclic lattice with an odd number of sites. In the case of a quantum particle confined in a very small region (say, 10 sites) of a very large lattice (say, close to one million sites) it doesn’t matter whether $`N`$ is even or odd for all times until, due to drift or diffusion, the distribution reaches the sites with labels close to $`\pm N/2`$. However for small lattices and for extended distributions it does matter, and only in the odd-$`N`$ case does the model of Relation 1 describe a quantum mechanical particle. This is a further indication of the essentially nonlocal character of quantum mechanics. There is another case in quantum mechanics where an even or odd number of states has important qualitative consequences. This is in finite dimensional realizations of angular momentum. Whereas the intrinsic angular momentum, the spin, of a particle can have an even or odd number of states, the orbital angular momentum, arising from position and momentum, can only have a realization with an odd number of states.
The model presented can be extended from the free particle to the case of a position dependent potential. The general structure of the process shown in Relation 1 remains unchanged, but the function $`F`$ of Eq. 2 will no longer be given by Eq. 3; it will have to be calculated from an appropriate time evolution operator. The process can also be extended to two or three space dimensions, but with larger computer requirements for the numerical simulations.
Since the advent of quantum mechanics, there have been numerous attempts to develop a classical image for quantum behavior. For the reasons already mentioned at the beginning, attempts in terms of particles are doomed. The model presented here suggests the possibility of a classical image in terms of two fields $`A`$ and $`B`$, where each field acts as a source for the polarization of the other. As happens with electromagnetic fields, the energy, or whatever conserved quantity, is given by the sum of the squares of both fields. The consideration of these fields may provide a new point of view for studying the peculiarities of quantum mechanics.
I would like to thank H. Mártin and A. Daleo for discussions and comments. This work has been done with partial support from “Consejo Nacional de Investigaciones Científicas y Técnicas” (CONICET), Argentina.
no-problem/9905/astro-ph9905094.html
# Galaxy Clusters: Oblate or Prolate?
## 1 Introduction
Over the last few years, there has been a tremendous increase in the study of galaxy clusters as cosmological probes, initially through the use of X-ray emission observations and, in recent years, through the use of the Sunyaev-Zel’dovich (SZ) effect. Briefly, the SZ effect is a distortion of the cosmic microwave background (CMB) radiation by inverse-Compton scattering of thermal electrons within the hot intracluster medium (Sunyaev & Zel’dovich 1980; see Birkinshaw 1998 for a recent review). The initial motivation for the study of the SZ effect was to establish a cosmic, rather than a galactic, origin for the CMB. It was later realized, however, that by combining the SZ intensity change and the X-ray emission observations, and solving for the number density distribution of electrons responsible for both these effects after assuming a certain geometrical shape, the angular diameter distance, $`D_\mathrm{A}`$, to galaxy clusters can be derived (e.g., Cavaliere et al. 1977; Silk & White 1978; Gunn 1978). Combining the distance measurement with redshift allows a determination of the Hubble constant, $`H_0`$, through the well known angular diameter distance relationship with redshift, after assuming a geometrical world model with values for the cosmic matter density, $`\mathrm{\Omega }_m`$, and the cosmological constant, $`\mathrm{\Omega }_\mathrm{\Lambda }`$. On the other hand, angular diameter distances as a function of redshift for a sample of clusters, over a wide range in redshift, can be used to constrain cosmological world models; an approach essentially similar to the one taken by two groups to constrain $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ using the luminosity distance relationship of Type Ia supernovae as a function of redshift (Perlmutter et al. 1998; Riess et al. 1998).
The cosmological parameter measurements using Type Ia supernovae are based on the fact that these supernovae are standard candles, or standard candles after appropriate corrections are made (see Branch 1999 for a recent review). Since the SZ/X-ray distance measurements are based on a geometrical method, one requires detailed knowledge of galaxy cluster shapes. However, such details are not always available; in some cases, e.g., the cluster inclination angle, such details are not likely to ever be available. Also, given that the two effects involved are due to the spatial distribution of electrons and their thermal structure, additional details on the physical properties of the electron distribution are needed. Thus, the accuracy to which the Hubble constant can be determined via the SZ/X-ray route depends on the assumptions made with regard to the cluster shape and its physical properties, or how well such information can be derived a priori from data. Current measurements of the Hubble constant using cluster X-ray emission and SZ data are mostly based on the assumption of an isothermal temperature distribution and a spherical geometry for galaxy clusters. In recent years, improvements to the spherical assumption have appeared in the form of axisymmetric elliptical models (e.g., Hughes & Birkinshaw 1998).
Using analytical and numerical tools, several investigations have now studied the accuracy to which the Hubble constant can be derived from the current simplified method. Using numerical simulations, Inagaki et al. (1995) and Roettiger et al. (1997) showed that the Hubble constant measured through the SZ effect can be seriously affected by systematic effects, which include the assumption of isothermality, cluster gas clumping, and asphericity. The effects due to nonisothermality and the density distribution, such as gas clumping, can eventually be studied with upcoming high quality X-ray imaging and spectral data from the Chandra X-ray Observatory<sup>1</sup><sup>1</sup>1http://asc.harvard.edu (CXO) and the X-ray Multi-Mirror Mission<sup>2</sup><sup>2</sup>2http://astro.estec.esa.nl/XMM (XMM). In addition to such expected improvements in our knowledge of the physical state of the electron distribution responsible for the scattering and emission effects, one should consider the possibility that the SZ/X-ray measurements are affected by cluster projection effects and the intrinsic cluster shape distribution.
Using analytical methods, Cooray (1998) and Sulkanen (1999) investigated projection effects on the Hubble constant due to the assumption of an ellipsoidal shape for galaxy clusters. These studies led to the conclusion that current measurements may be biased and that, from a large sample of clusters, it may be possible to obtain an unbiased estimate of the Hubble constant provided that cluster ellipsoidal shapes can be identified accurately. Here, “large” depends on what was assumed in the calculation; if the ellipticities of clusters follow the distribution observed by Mohr et al. (1995), then a sample as small as 25 clusters can, in principle, provide a measurement of the Hubble constant within a few percent of the true value. The real scenario, however, can be much different, as the assumptions that have been made may be too simple.
As an attempt to understand the intrinsic cluster shape distribution, we used the available cluster data to constrain the accuracy to which clusters can be described by simple ellipsoidal models. Apart from previous work involving cluster axial ratios measured through optical galaxy distributions (e.g., Ryden 1996), we note that no study has yet been performed on intrinsic cluster shapes using gas distribution data, such as the X-ray isophotal axial ratio distribution. Compared to optical galaxy isophotes, a study of cluster shapes using X-ray data is more appropriate, as the gas distribution is likely to be a better tracer of intrinsic cluster shapes. Here, our primary goal is to quantify the nature of cluster shapes using X-ray observations reported in the literature. We essentially follow the framework presented in Cooray (1998) and describe intrinsic cluster shapes using axisymmetric models, mainly prolate (cigar-like) and oblate (pancake-like) spheroidal distributions. In Section 2, we briefly introduce the apparent shapes of axisymmetric galaxy clusters and move on to discuss intrinsic shapes. We also extend our discussion to consider the possibility that clusters are triaxial ellipsoids with an intrinsic distribution of axial ratios that follows a Gaussian form. Given that the calculational methods used to obtain intrinsic shapes from apparent or projected distributions are well known, especially for galaxies and stellar systems such as globular clusters, we present only the relevant details here. We refer the interested reader to Merritt & Tremblay (1994), Vio et al. (1994), and Ryden (1992; 1996) for further details and applications. Given the wide and timely interest in using cluster SZ and X-ray data to derive cosmological parameters, we follow well established procedures in these papers to address what can be learnt about the intrinsic shapes of clusters from current observational data.
## 2 Galaxy Cluster Shapes
### 2.1 Apparent Shapes
Given that there is a large amount of literature, including textbooks (e.g., Binney & Tremaine 1991), that describes techniques to calculate the apparent axial ratio distribution of projected bodies, mainly galaxies, we skip all the intermediate details and start by presenting the expected distribution of apparent axial ratios for prolate and oblate spheroids. In the case of an intrinsic prolate shape distribution, the apparent axial ratio distribution, $`f(\eta )`$, is:
$$f(\eta )=\frac{1}{\eta ^2}\int _0^\eta \frac{N_p(\gamma )\gamma ^2d\gamma }{\left[(1-\gamma ^2)(\eta ^2-\gamma ^2)\right]^{1/2}},$$
(1)
while for the oblate distribution:
$$f(\eta )=\eta \int _0^\eta \frac{N_o(\gamma )d\gamma }{\left[(1-\gamma ^2)(\eta ^2-\gamma ^2)\right]^{1/2}}.$$
(2)
In Eqs. 1 & 2, $`N_p(\gamma )`$ and $`N_o(\gamma )`$ represent the intrinsic axial ratio distributions when clusters are assumed to be prolate and oblate, respectively.
In order to obtain the underlying distribution of apparent axial ratios using a measured series of axial ratio values ($`\eta `$), we use the nonparametric kernel estimator given by:
$$\widehat{f}(\eta )=\frac{1}{Nh}\sum _{i=1}^NK\left(\frac{\eta -\eta _i}{h}\right)$$
(3)
where $`K`$ is the kernel function with kernel width $`h`$ (e.g., Merritt & Tremblay 1994) and $`N`$ is the total number of clusters. For the present calculation, we use a smooth function to describe the kernel:
$$K(x)=\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left(-\frac{x^2}{2}\right).$$
(4)
In general, the kernel width is calculated by minimizing the mean integrated square error (MISE), defined as the expectation value of the integral:
$$\int \left[\widehat{f}(\eta )-f(\eta )\right]^2d\eta .$$
(5)
Such an estimation is problematic when $`f(\eta )`$ is not known initially, and usually requires iterative schemes to obtain the optimal $`h`$ value. Here, we take the approach presented in Vio et al. (1994) and used in Ryden (1996). Vio et al. (1994) showed that a good approximation to the kernel width for a wide range of density distributions which are reasonably smooth and not strongly skewed is:
$$h=0.9AN^{-0.2}.$$
(6)
Here, $`A`$ is the smaller of the standard deviation of the sample and the interquartile range of the sample divided by 1.34. This approximation is expected to produce an estimate usually within 10% of the one obtained when $`h`$ is calculated by minimizing the MISE.
Since $`\eta `$ is limited by definition to the range between 0 and 1, we use the so-called reflective boundary conditions at $`\eta =0`$ and $`\eta =1`$ (e.g., Silverman 1986). This is done by replacing the Gaussian kernel $`K`$ above with the kernel (Ryden 1996):
$`K^{*}(\eta ,\eta _i,h)`$ $`=K\left({\displaystyle \frac{\eta -\eta _i}{h}}\right)+K\left({\displaystyle \frac{\eta +\eta _i}{h}}\right)`$
$`+K\left({\displaystyle \frac{2-\eta -\eta _i}{h}}\right),`$
such that the Gaussian tails that extend below 0 and above 1 are folded back into the interval between 0 and 1, with 0 and 1 inclusive. Such reflective boundary conditions ensure that the proper normalization is upheld:
$$\int _0^1\widehat{f}(\eta )d\eta =1$$
(8)
as long as $`h\ll 1`$. However, these reflective boundary conditions force the estimated distribution to have zero derivative at the two boundaries. Such artificial modifications may be problematic when interpreting the observed distribution near the boundaries of 0 and 1; one should be cautious about the accuracy of the estimated distribution and the inverted profiles near such values.
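A sketch of this estimator, with the reflective kernel and the bandwidth rule of Eq. 6, is given below (our own implementation; the input axial ratios are hypothetical stand-ins for the measured values):

```python
import numpy as np

def kernel_estimate(eta, grid, h=None):
    """Estimate f(eta) on [0, 1] with a Gaussian kernel (Eqs. 3, 4) and
    reflective boundary conditions at 0 and 1 (the kernel K* above)."""
    eta = np.asarray(eta, dtype=float)
    if h is None:                              # Eq. 6, Vio et al. (1994)
        p75, p25 = np.percentile(eta, [75, 25])
        A = min(eta.std(ddof=1), (p75 - p25) / 1.34)
        h = 0.9 * A * eta.size ** (-0.2)
    K = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
    x = grid[:, None]
    # reflected copies fold the tails back into [0, 1]
    kern = K((x - eta) / h) + K((x + eta) / h) + K((2.0 - x - eta) / h)
    return kern.sum(axis=1) / (eta.size * h), h

# Hypothetical axial ratios standing in for the Mohr et al. (1995) values.
grid = np.linspace(0.0, 1.0, 201)
ratios = np.random.default_rng(0).uniform(0.4, 1.0, size=58)
f_hat, h = kernel_estimate(ratios, grid)
```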
### 2.2 Intrinsic Shapes
In order to obtain the intrinsic distributions, one can invert Eqs. 1 & 2. Such an inversion can now be carried out directly, as we have an estimator for the underlying distribution of apparent axial ratios.
If clusters are all randomly oriented ellipsoids following a strict oblate distribution, then the estimated distribution of the intrinsic axial ratio, $`\widehat{N}_o(\gamma )`$, is given by the relation:
$$\widehat{N}_o(\gamma )=\frac{2\gamma \sqrt{1-\gamma ^2}}{\pi }\int _0^\gamma \frac{d}{d\eta }\left(\frac{\widehat{f}(\eta )}{\eta }\right)\frac{d\eta }{\sqrt{\gamma ^2-\eta ^2}}.$$
(9)
However, if clusters are assumed to be randomly oriented ellipsoids following a prolate distribution, then the intrinsic distribution is:
$$\widehat{N}_p(\gamma )=\frac{2\sqrt{1-\gamma ^2}}{\gamma \pi }\int _0^\gamma \frac{d}{d\eta }\left(\eta ^2\widehat{f}(\eta )\right)\frac{d\eta }{\sqrt{\gamma ^2-\eta ^2}}.$$
(10)
Other than such a direct inversion, various iterative techniques (e.g., Lucy’s method; Lucy 1974) can also be used to obtain the intrinsic distribution. However, for the purpose of this calculation, we use the direct inversion via the above integrals.
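The direct inversion can be sketched numerically for the prolate case of Eq. 10 as follows (our illustration, reusing `f_hat` and `grid` from the sketch above; the substitution $`\eta =\gamma \mathrm{sin}u`$ handles the integrable singularity at $`\eta =\gamma `$):

```python
def invert_prolate(f_hat, grid, gammas):
    """Eq. 10: estimate N_p(gamma) from the kernel estimate of f(eta)."""
    d_deta = np.gradient(grid**2 * f_hat, grid)      # d/deta [eta^2 f(eta)]
    u = np.linspace(0.0, np.pi / 2.0, 400)[:-1]      # eta = gamma sin(u)
    out = []
    for gam in gammas:
        integrand = np.interp(gam * np.sin(u), grid, d_deta)
        integral = integrand.mean() * (np.pi / 2.0)  # uniform-grid quadrature
        out.append(2.0 * np.sqrt(1.0 - gam**2) / (np.pi * gam) * integral)
    return np.array(out)

gammas = np.linspace(0.05, 0.95, 19)
N_p = invert_prolate(f_hat, grid, gammas)   # should stay nonnegative if prolate
```

The oblate case of Eq. 9 follows the same pattern with the prefactor $`2\gamma \sqrt{1-\gamma ^2}/\pi `$ and the derivative of $`\widehat{f}(\eta )/\eta `$.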
To be physically meaningful, $`\widehat{N}_o`$ and $`\widehat{N}_p`$ should be nonnegative over the entire range of $`\gamma `$ values from 0 to 1. Since we directly compute $`\widehat{N}_o`$ and $`\widehat{N}_p`$ without making any restrictions on the values they can take between $`\gamma `$ of 0 and 1, our approach allows us to test the null hypothesis that all objects are either oblate or prolate. We note, however, that certain iterative schemes available in the literature, which can be utilized for the inversion of the observed axial ratio distribution, do not necessarily make such a test possible, as such schemes impose the a priori constraint that $`\widehat{N}_o`$, or $`\widehat{N}_p`$, is positive for all values between 0 and 1.
To impose a reasonably accurate constraint that objects cannot be either prolate or oblate, we conduct a Monte Carlo study of the observed data using a bootstrap resampling procedure. From the original data set of $`\eta _i`$ values from the Mohr et al. (1995) sample, we draw, with replacement, a new set of axial ratios that represent the same data set. Here, we consider the uncertainties associated with the Mohr et al. (1995) axial ratio measurements and allow these bootstrap samples to take axial ratio values within $`\pm `$ 1 $`\sigma `$ of the measurement error range. These points are then used to create a new bootstrap estimate of $`\widehat{f}`$ (Fig. 1), which is inverted to compute estimates for $`\widehat{N}_o`$ and $`\widehat{N}_p`$. We create a substantial number of such bootstrap datasets to place robust confidence intervals on the original dataset. At each value of $`\gamma `$, confidence intervals are placed on either $`\widehat{N}_o`$ or $`\widehat{N}_p`$ by finding values of $`\widehat{N}_o`$ or $`\widehat{N}_p`$ such that the bootstrap estimates lie above some confidence limit. If this confidence limit drops below zero for any value of $`\gamma `$ between 0 and 1, the hypothesis that all objects are oblate, or prolate, can be rejected (see Ryden 1996). For the purpose of this paper, we use $`10^4`$ bootstrap resamplings, in order to have sufficiently accurate measurements of the underlying distribution function to impose confidence levels at which either the prolate or the oblate hypothesis is rejected. This approach is essentially similar to the one Ryden (1996) utilized to constrain the intrinsic shapes of various sources, such as globular clusters and elliptical galaxies.
In order to obtain constraints on the intrinsic cluster shapes, we use the Mohr et al. (1995) cluster sample. Here, the authors studied 65 nearby clusters and presented apparent axial ratios of these clusters using X-ray isophotal data. This is the largest such study available in the literature, and other studies, involving smaller numbers of clusters, essentially contain the same clusters as the Mohr et al. (1995) sample. Another advantage of the Mohr et al. (1995) cluster sample is that it is X-ray flux limited and clusters were not selected based on their X-ray surface brightness. The original sample in Mohr et al. (1995) was defined by Edge et al. (1990) based on observations by the HEAO-1 and Ariel-V surveys combined with Einstein Observatory imaging observations. Such a flux-limited complete, or near-complete, sample, instead of a surface brightness selected one, has the advantage that clusters are not likely to be biased in their selection. Such selection effects, say due to elongation enhancing the surface brightness, would be problematic both for the current study of the intrinsic shapes of clusters and for cosmological studies using clusters based on the X-ray luminosity and temperature functions. For the purpose of this paper, we assume that the clusters in the Mohr et al. (1995) sample have been selected in an unbiased manner as far as their intrinsic shapes are concerned (see also Edge et al. 1990).
We use the tabulated axial ratio measurements in Table 3 of Mohr et al. (1995), which contains measurements for 58 clusters, to obtain a nonparametric estimate of the underlying distribution. This was then inverted to obtain intrinsic axial ratio distributions, assuming prolate and oblate shapes for clusters. In Figs. 2 & 3, we show our results; the shaded regions represent the 90% confidence limits from the bootstrap resampling technique. If we assume that all clusters are prolate, the observed distribution is consistent with such an assumption; except when $`\gamma \to 1`$, the distribution is always positive. However, if we assume that all clusters are oblate, then the resulting intrinsic distribution is inconsistent with such an assumption at the ∼98% confidence level. Returning to previous works, we find that such a conclusion is consistent with constraints on intrinsic cluster shapes from optical data. In Ryden (1996), for various optically selected samples, the randomly oriented oblate hypothesis was rejected at a higher confidence level than the randomly oriented prolate hypothesis. We note, however, the alternative possibility that clusters are in fact triaxial ellipsoids. Another possibility is that our assumption that clusters are randomly oriented ellipsoids is incorrect; clusters could still be oblate ellipsoids, but oriented in preferred directions rather than randomly. Since we do not have additional information on such a scenario, we are left with the possibility that clusters are either randomly oriented prolate or randomly oriented triaxial ellipsoids.
### 2.3 Clusters as Triaxial Ellipsoids
In order to test the possibility that galaxy clusters are triaxial ellipsoids viewed from random angles, we now consider random projections of such objects. It has been shown by Stark (1977; see also Binney 1985) that triaxial ellipsoids project into ellipses when viewed at random angles. Assuming a viewing angle of $`(\theta ,\varphi )`$, in a standard polar coordinate system with the $`z`$-axis acting as the pole, the axial ratio of such an ellipse can be written as (Binney 1985; Ryden 1992):
$$q(\beta ,\gamma ,\theta ,\varphi )=\left[\frac{A+C-\sqrt{(A-C)^2+B}}{A+C+\sqrt{(A-C)^2+B}}\right]^{1/2}$$
(11)
where,
$`A=`$ $`\left(\mathrm{cos}^2\varphi +\beta ^2\mathrm{sin}^2\varphi \right)\mathrm{cos}^2\theta +\gamma ^2\mathrm{sin}^2\theta ,`$
$`B=`$ $`4\mathrm{cos}^2\theta \mathrm{sin}^2\varphi \mathrm{cos}^2\varphi \left(1-\beta ^2\right)^2,`$ (12)
$`C=`$ $`\mathrm{sin}^2\varphi +\beta ^2\mathrm{cos}^2\varphi ,`$
and $`\beta `$ and $`\gamma `$ are the intrinsic axial ratios of the ellipsoid. Following Ryden (1992), where a similar calculation was applied to elliptical galaxies to address their intrinsic shape distribution, we test the possibility that clusters are intrinsically triaxial ellipsoids with the axial ratios distributed according to a Gaussian:
$$f(\beta ,\gamma )\propto \mathrm{exp}\left[-\frac{(\beta -\beta _0)^2+(\gamma -\gamma _0)^2}{2\sigma _0^2}\right],$$
(14)
and the constraint $`1\ge \beta \ge \gamma \ge 0`$. Here, $`\beta _0`$, $`\gamma _0`$ and $`\sigma _0`$ describe the intrinsic Gaussian distribution, whose parameters can be constrained by a comparison with the observed axial ratios given by Eq. 11. For a set of $`\beta _0`$, $`\gamma _0`$ and $`\sigma _0`$ values, we randomly generate $`(\beta ,\gamma )`$ values that follow the above Gaussian distribution and the associated constraint. We then view each pair of $`(\beta ,\gamma )`$ values with a randomly chosen set of viewing angles $`(\theta ,\varphi )`$. Following this procedure, we randomly generate ∼$`10^5`$ $`q`$ values, to which we apply the nonparametric kernel estimator to obtain the underlying distribution. Using the $`\chi ^2`$ statistic, we compare this underlying distribution to the observed distribution and its error from the Mohr et al. (1995) dataset. Finally, we repeat this procedure for different values of the basic parameters that define the Gaussian distribution.
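This Monte Carlo step can be sketched as follows (our illustration, continuing the sketches above; the parameter values in the usage line are the best fit quoted below):

```python
def random_projected_ratios(beta0, gamma0, sigma0, n=100_000, seed=3):
    """Draw (beta, gamma) from the truncated Gaussian of Eq. 14 with
    1 >= beta >= gamma >= 0, and project each ellipsoid at an isotropically
    random viewing angle using Eq. 11."""
    rng = np.random.default_rng(seed)
    qs, m = np.empty(n), 0
    while m < n:
        beta, gamma = rng.normal([beta0, gamma0], sigma0)
        if not (1.0 >= beta >= gamma >= 0.0):
            continue                               # rejection sampling
        theta = np.arccos(rng.uniform(-1.0, 1.0))  # isotropic polar angle
        phi = rng.uniform(0.0, 2.0 * np.pi)
        A = (np.cos(phi)**2 + beta**2 * np.sin(phi)**2) * np.cos(theta)**2 \
            + gamma**2 * np.sin(theta)**2
        B = 4.0 * np.cos(theta)**2 * np.sin(phi)**2 * np.cos(phi)**2 \
            * (1.0 - beta**2)**2
        C = np.sin(phi)**2 + beta**2 * np.cos(phi)**2
        root = np.sqrt((A - C)**2 + B)
        qs[m] = np.sqrt((A + C - root) / (A + C + root))
        m += 1
    return qs

qs = random_projected_ratios(0.92, 0.92, 0.21)
```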
In Fig. 4, we show the constraints obtained on the intrinsic shape parameters by comparison with present observations. Here, we show the 99%, 95.4% and 99.99% confidence contours on $`\beta _0`$ and $`\gamma _0`$ for several values of $`\sigma _0`$. As shown, the observed distribution of axial ratios is consistent when $`\beta _0`$ is at the high end, while $`\gamma _0`$ varies from low values to high values as $`\sigma _0`$ is increased. For low $`\sigma _0`$ values, the observations are more consistent with the possibility that clusters are oblate ($`\beta _0=1`$) rather than prolate ($`\beta _0=\gamma _0`$). However, as $`\sigma _0`$ is increased, the observed distribution becomes more consistent with the possibility that clusters are intrinsically prolate. Still, we note that there is a large range of possibilities where the observations are consistent with values of $`\beta _0`$ and $`\gamma _0`$ which agree with neither the prolate nor the oblate hypothesis. For the parameter space considered here, the best fit model has equal $`\beta _0`$ and $`\gamma _0`$ values of 0.92 and $`\sigma _0=0.21`$. The reduced $`\chi ^2`$ value of this model relative to the data is 1.1. In general, when $`\sigma _0>0.1`$, statistically acceptable fits are found when $`\beta _0`$ is close to $`\gamma _0`$, suggesting that current cluster data are more consistent with an intrinsically prolate distribution.
## 3 Discussion & Summary
Using the Mohr et al. (1995) cluster sample, we rule out the hypothesis that clusters are intrinsically axisymmetric oblate ellipsoids at the 98% confidence level. As the Mohr et al. (1995) cluster sample is flux limited rather than surface brightness selected, we can consider it a fair representation of clusters in the Universe. The Mohr et al. (1995) sample also contains clusters which are now observed both in the SZ effect and in X-ray emission and are used for the determination of the SZ/X-ray Hubble constant. Thus, conclusions based on the Mohr et al. (1995) sample should be valid for what one can expect from current attempts to determine cosmological parameters using SZ and X-ray data of galaxy clusters. We have assumed that cluster X-ray isophotes represent the true shapes of galaxy clusters. It may be that cluster X-ray isophotes are flattened compared to the intrinsic cluster shapes, and by ignoring this possibility, we may have introduced a systematic bias into this study. However, we note that such a bias, if it exists, is likely to be small and that, compared to other cluster data available for a study of intrinsic cluster shapes, X-ray isophotal axial ratios offer a strong possibility of obtaining reliable conclusions on cluster shapes. Also, we note that any correction to the measured Hubble constant due to asphericity is likely to be based on the shape of the X-ray isophotes, which are also expected to be similar to the SZ isophotes, as both essentially trace the same distribution. Therefore, the use of X-ray isophotes to constrain the intrinsic shape distribution should be accurate and valid when considering the cosmological applications.
Our study shows that clusters are more likely to be prolate than oblate ellipsoids; however, we cannot rule out the possibility that clusters are intrinsically triaxial. Considering our previous discussion in Cooray (1998) of cluster projection effects on the SZ/X-ray Hubble constant, an intrinsic prolate distribution allows a less biased determination of the Hubble constant, while an intrinsic oblate distribution results in a mean value for the Hubble constant which can be biased by as much as ∼10% from the true value. In Cooray (1998), we only considered the projection effect arising from the unknown inclination angle of galaxy clusters by averaging over a uniform distribution of inclination angles, while considering only a mean value for the axial ratio of clusters from Mohr et al. (1995). Given that we have now determined the intrinsic distribution of axial ratios, we can extend the calculations presented in Cooray (1998) to also consider the intrinsic axial ratio distribution. Here, we assume that the SZ and X-ray shape parameters coincide; however, this is only true if clusters are triaxial ellipsoids. If the true shapes of clusters were more complicated, then a detailed analysis would be necessary to obtain the individual shape parameters associated with the SZ and X-ray data and to determine the Hubble constant.
Assuming a simple scenario in which clusters are triaxial ellipsoids, for a cluster sample of 25 clusters randomly drawn from the intrinsic prolate and oblate distributions, we find that the oblate assumption and its distribution result in a measurement of the Hubble constant biased by ∼8%, while for a prolate distribution, the resulting mean value for the Hubble constant is unbiased, or within ∼3%. For both prolate and oblate distributions, the widths of the resulting distributions of Hubble constant values agree with each other. These estimates both over- and underestimate the Hubble constant, such that the true value lies within the range. These calculations, and ones presented elsewhere (e.g., Sulkanen 1999), suggest that the measurement of the Hubble constant based on galaxy clusters is not fundamentally biased by cluster projection effects and the shape distribution. Therefore, it is likely that a reliable measurement of the Hubble constant will soon be possible with galaxy clusters using SZ and X-ray data; however, such a calculation would still require that we improve our knowledge of cluster physical properties such as isothermality and gas clumping.
## Acknowledgments
I would like to acknowledge useful discussions with Scott Dodelson on inversion techniques and Joe Mohr on cluster projections.